3D Visualization: Using Digital Reconstructions to Interpret the Past

Three dimensions are how we view the natural, everyday world. In today's technology-driven world, 3D modeling is becoming more and more common across fields, especially historic preservation. Two-dimensional drawings alone can no longer capture everything we need, so 3D visualization is playing a growing role in historic preservation, assisting in the documentation and reconstruction of historic structures and landscapes.

3D visualization provides:

  • a non-intrusive and non-destructive means of exploring a building
  • a common language between people and cultures
  • an opportunity to help fulfill ADA requirements
  • educational, academic, and entertainment opportunities
  • a marketing opportunity for a restoration project
  • an attraction for potential donors

However, concerns also arise about these models and their interpretations. These concerns can involve questions such as:

  • How authentic are 3D models?
  • What academic concerns can these models raise?
  • What can we do to improve the authenticity of these models?

No matter how detailed or primitive a 3D model is, it provides an opportunity to document potentially endangered historic sites. Depending on accessibility and affordability, anyone can document and interpret historic structures in ways that two-dimensional drawings cannot.


3D visualization allows for interpretation and documentation of historic structures.


Sapelo Island Scanning


Our class has been tasked with laser scanning tabby ruins on Sapelo Island, off the Georgia coast. The scanner we used was a Faro Focus, a phase-based scanner that reads distances from the laser's returning wave. It can also capture color data through an on-board camera to add textures to the scans. On-board power, storage, and interface make it easy to use in the field. The Faro Focus is a medium-range scanner, which makes it ideal for capturing building exteriors. Things to consider when scanning on site:

  • How much data needs to be collected?
  • What resolution should it be in?
  • How much time needs to be spent on site?

When laser scanning outdoors, quarter resolution is perhaps the best resolution to begin with. At that setting, each scan takes about 10 minutes and captures around 40 million points of data.

The class began working with the Faro Focus scanner in class on May 8, with a demonstration of how to begin operating the software and scanner. We completed preview scans of the lab in the Clarence Thomas Center (CTC).


The class working with the Faro Focus scanner for the first time in the CTC lab.


The site the class will be laser scanning is Sapelo Island, located sixty miles south of Savannah, Georgia. The fourth largest island in the state, Sapelo is state owned and managed. Resources there include the African American community of Hog Hammock, the University of Georgia Marine Institute, the Richard J. Reynolds Wildlife Management Area, and the Sapelo Island National Estuarine Research Reserve.

Settlement began some 4,500 years ago, with Native American settlements located throughout the island. A Native American shell ring (a prehistoric ceremonial mound) is located at the north end. Spanish colonists landed in the area between 1573 and 1686. The English began their colonization following James Oglethorpe's settling of Savannah, just to the north, in 1733. Over its history, the island became a place of agriculture, timbering, and livestock raising. It was at the beginning of the 19th century that the plantation economy reached the shores of Sapelo, with Thomas Spalding industrializing the island by erecting a sugar mill, lighthouse, and tabby structures. This complex became an extensive antebellum plantation.

The African American settlements on the island are historically significant. During its industrial planting years, Sapelo was home to 385 slaves belonging to the Spalding family. Following the Civil War, the freed slaves remained on the island, eventually purchasing their own lots to create communities, like Hog Hammock. African American residents played a massive role in the agriculture, timber, and oyster economies on the island.

In the 20th century, the island survived many changes, including the rebuilding of Spalding’s original tabby mansion (ca. 1810), the expansion of the planting, milling, and seafood industries, and the installations of roads and wells. In 1934, Richard J. Reynolds Jr. purchased the island, eventually combining the African American communities into one (Hog Hammock) and creating the Sapelo Island Research Foundation.

Today, many of the 115 residents are descendants of Spalding's slaves. The traditional Geechee culture of the region is slowly diminishing, a growing concern for historians, preservationists, and local residents.


Scanning location (red dot) on Sapelo Island, Georgia (http://sapelonerr.org/education-training/nature-trails/)

Information for this history was provided by www.georgiaencyclopedia.org.


Meeting on Wednesday, May 10, 2017, the class began the scanning process. After taking the 8:30 ferry to Sapelo Island, we were driven by JD, an island employee, through dense trees to our scanning location. Our focus was scanning one of the buildings of the Chocolate Plantation on the north end of the island. The area consists of close to a dozen tabby ruins spread around a reconstructed tabby barn. In the intense heat, we began by taking field notes, drawings, and photographs of the various buildings on the property. In all, around 13 ruins were clearly visible to us. I sketched the location of each ruin to ensure proper documentation of the surrounding area. The ruins were beautiful.

After sketching and photographing the ruins, our professor agreed that the ruin standing completely open in the field was the best option for the first laser scan. The other students were brought over to the ruin and we plotted exactly where to place the laser scanner to ensure proper data capture. The tabby ruin, apparently a slave cabin, consisted of thick exterior walls and an interior wall dividing the space in two. I planned for us to place the Faro Focus at each exterior corner and inside each room to ensure we fully captured the exterior and interior. At around 10:30, the scanning began. The first scan, with the scanner on the tripod, was a 360-degree preview scan from the southwest corner. The preview scan, which took 4.5 minutes, was necessary to ensure we captured the whole building. We selected Parameters – Preview – Home – Start to run it.


Scanning locations relative to the ruin (not to scale)

Because the preview captures a full 360 degrees, we needed to zoom in on the building to reduce scan time and capture only the building itself. To do so, we selected Parameters – Horizontal/Vertical and zoomed to the area we wanted. Once that was set, we set up the first official scan by selecting Parameters – Resolution – 1/4 – Outdoor 20. These settings produce a 10-minute scan capturing around 40 million points of data. This preview-and-scan process was repeated at each corner of the building.

For the interior spaces, we decided to reduce the resolution to save time; the rooms were small enough that a resolution as high as 1/4 was not necessary. Since we were capturing each room in a full 360 degrees, we did not need to zoom in on a specific area, so only an actual scan was required. Using the same steps as for the exterior scans, we changed the resolution to 1/5, which reduced the scan time from 10 minutes to 6 and captured around 27 million points. This process was repeated for the second interior room. Once those scans were completed around 11:30, our job on Sapelo was done and we returned to the ferry.
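As a sanity check on those numbers: the point count should scale roughly with the square of the resolution fraction, since the setting thins points in both the horizontal and vertical sweep. A quick back-of-the-envelope calculation (the 40-million figure comes from our own 1/4-resolution scans; the squared scaling rule is an assumption about how the Faro resolution setting works):

```python
# Point count scales roughly with the square of the resolution fraction,
# since the setting thins points in both the horizontal and vertical sweep.
quarter_res_points = 40e6                      # observed: ~40M points at 1/4 resolution

full_res_points = quarter_res_points / (1 / 4) ** 2   # implied full-resolution count
fifth_res_points = full_res_points * (1 / 5) ** 2     # predicted count at 1/5

print(round(fifth_res_points / 1e6, 1))        # 25.6 -> close to the ~27M we saw
```

The predicted 25.6 million points at 1/5 resolution lines up reasonably well with the roughly 27 million points our 6-minute interior scans captured.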




To begin the registration process, we copied the file with the scans from our professor's USB drive to each computer in the CTC lab. Once the file (workspace.fws) was opened in the Faro SCENE program, it was saved into a local workspace (folder) on the desktop titled "Chocolate Plantation." With the scans open in SCENE, each was assigned a name, such as CP_001, for Chocolate Plantation, scan number 1.

The first step in registration was to remove the preview scans. Knowing that we had taken a preview scan before each real scan (except for the last two), I selected every other scan and confirmed it was a preview (i.e., it contained a full 360 degrees of data). Right-clicking a scan's name on the left-hand side allows it to be deleted. Once the previews were deleted, we were left with scans CP_002, CP_004, CP_006, CP_008, CP_009, and CP_010.

Next, color must be added to the scans. To do so, right-click each scan on the left-hand side of the program and select Operations – Color Pictures – Apply Pictures. This applies the color photographs the scanner took while scanning the ruins; repeat the process for each scan. Along the top toolbar is a tab named "Gexcel." Selecting it exports the scans into JRC Reconstructor, which is a better program for registering the data than SCENE. When exporting the scans to JRC, make sure to select all of the scans, select the color option, and leave the export setting at 1 out of every 1 point of data (this ensures no data is lost).

During export, RGP files and folders are created. A separate folder labeled "Reconstructor," inside a larger folder for the class project, was created to house them. The export also creates a JRC Reconstructor file, which must be opened to begin registration. Again, along the left-hand side, all of the scans are organized by name. The workspace in JRC starts completely empty, as each scan must be loaded into the space: right-click the desired scan and select Load Model, and the model should appear. Multiple scans can be loaded into the workspace simultaneously. The next step is bringing the color back into the scans. With an individual scan selected, the property browser (lower left corner) provides a Color Mapped option; make sure to select color to display the scan's data in color.

Next, the scans must be processed. Select all of the scans on the left-hand side, choose Line Up – Process, and then deselect Auto Registration and Fine Registration (this ensures that any data alignment is done manually). Clicking Process will process the scan data. Once completed, the scans will have to be reloaded into the workspace for viewing.


The processing of a scan in JRC Reconstructor.

From this point on in JRC Reconstructor there is no undo option, so take care to do everything correctly. The next step is to remove excess data beyond the ruins. The easiest way is to use the mouse to view each scan from above. In the top right-hand corner of the toolbar is a Selection Tool; drawing a rectangle around the building and then selecting Delete Outside will delete any scanned objects around the selected ruins. Complete this process for each of the scans. Now begins the aligning of the various scans.


Once outlier data has been deleted, the building becomes clearer for registration.

Referencing field notes, select two scans that overlap for easy alignment. The alignment process is very similar to that of the Artec and NextEngine software. For the tabby ruins, I began with scans CP_002 and CP_008. With the two scans selected on the left-hand side, I chose the Registration tool in the toolbar and selected Manual Preregistration. A pop-up window opens for the registration process. In the lower right corner, the two selected scans are presented. Choose one as the reference scan (what the other scan will be aligned to) and the other as the moving scan (the scan that will be moved into alignment). This reveals the scans in a black-and-white window above; color can be shown by selecting the Reflectance drop-down and choosing color.

Like the other laser scan software, registration involves placing three points on corresponding features in each scan. Using only the mouse wheel to zoom in and out, double-click a feature on each scan to select a point. The program requires at least three points. Make sure to select points that are in the data cloud (on the actual object) and not in the black space, or the point will be invalid and will not register properly. Once three points are selected, click Commute to align the scans. A message then pops up and reports the error for the registration; an acceptable error is under 0.5. The first error I received for aligning CP_002 and CP_008 was 0.030673. That was great, so I clicked Apply, and the scans were preregistered and aligned together.

This process must be repeated for each consecutive scan, making sure to include one of the previously registered scans in each new registration. Next, I selected CP_008 and CP_006 to align, which ensured that CP_006 would be aligned with the already-joined CP_002 and CP_008. After the first pair, careful consideration must be paid to which scan is chosen as the reference: since it had already been aligned, the previous scan (CP_008) was selected. The error for that alignment was 0.067780.
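Under the hood, a three-point preregistration solves for the rigid transform (a rotation plus a translation) that best maps the moving scan's picked points onto the reference scan's. A minimal numpy sketch of that solve, using the standard Kabsch/SVD method (this illustrates the general technique, not JRC Reconstructor's actual implementation; the point coordinates are invented):

```python
import numpy as np

def rigid_transform(moving, reference):
    """Best-fit rotation R and translation t so that R @ m + t ~= r for each pair."""
    mc, rc = moving.mean(axis=0), reference.mean(axis=0)
    H = (moving - mc).T @ (reference - rc)          # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = rc - R @ mc
    return R, t

# Three corresponding points picked on each scan (hypothetical coordinates)
ref = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.0, 1.5, 1.0]])
theta = np.radians(30)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
mov = (ref @ Rz.T) + np.array([1.0, -0.5, 0.2])     # rotated + shifted copy of ref

R, t = rigid_transform(mov, ref)
residual = np.linalg.norm((mov @ R.T + t) - ref)    # analogous to the reported "error"
```

With exact correspondences the residual is essentially zero; in practice the reported error grows with how imprecisely the three matching points are picked, which is why mid-wall features beat edge points.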

JRC registration

The preregistration window allows a user to select which scan will be used as the reference scan and which as the moving scan. Three points on each scan allow proper alignment of the scans.

The next scans aligned were CP_004 and CP_006, with CP_006 as the reference; the error was 0.051578. Then CP_010 and CP_002 were aligned, with CP_002 as the reference, with an error of 0.035620. Finally, scans CP_009 and CP_010 were registered together with CP_010 as the reference.

I had some difficulty with the first few attempts at selecting points on each scan. Some of the points were too close to the edge, and the error was therefore higher. I found that selecting the origin and other points about mid-way up the tabby walls resulted in the least error. The interior scans (CP_009 and CP_010) also proved the most difficult to align, as they had few recognizable points to select. After zooming in fairly close, I was able to distinguish subtle differences and align them perfectly.

Back in the workspace, all of the scans were reloaded and were successfully aligned. To refine the alignment and reduce error, select Cloud to Cloud Registration. This further refines the registration of each scan pair and fixes any mistakes that may have occurred earlier. The process is very simple: selecting the same scan combinations as in preregistration on the left side of the window (i.e., CP_002 and CP_008), simply click Process. The pop-up window will display the program completing the refinement, and another error window will be presented. If satisfied with the margin of error, select Apply to finish the refinement. Repeat the process for the same scan pairs that were selected for preregistration.


Once all scans are aligned, the comprehensive 3D scans document the ruins.

The final steps of registration involve combining all of the scan clouds into one single point cloud. Select all of the scans on the left panel, right-click, and select Filtering & Clustering – Make Single Cloud. The new cloud appears under a new section titled "Unstructured Point Clouds." I made sure to load the "chocolate cluster" model and turned on the color to see the point cloud.

The next step is creating a Universal Coordinate System (UCS) that orients the model to an XYZ grid so that it can easily be uploaded into Revit or AutoCAD and be oriented correctly. Click the Cross Sections tool in the overhead toolbar and select Edit Plane. A pop-up window appears on the right-hand side, allowing you to select an origin point and two other points on the other axes to orient the model. To select a point, hold down ALT and double-click on it. We selected a corner of the ruin as the origin, then two other points along the other axes. Once completed, select Create/Edit from Specified Points. This creates a grey opaque plane resting beneath the model; ensure it is perfectly horizontal by selecting Make Plane Horizontal, which levels the plane.

Shifting focus to the left-hand panel of scans and models, select the new plane under the Planes section, right-click it, and select Create UCS from this pose. Locate the new UCS and set it as current. The goal of this step is to ensure that the walls of the model are parallel to the XYZ lines of the coordinate system. For this model, the longer wall aligned well with the Y (green) axis, and the shorter wall with the X (red) axis.
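Geometrically, the UCS step amounts to building an orthonormal coordinate frame from the picked points: an origin, a point along one wall, and a third point fixing the plane. A hedged numpy sketch of that construction (the point coordinates are invented for illustration; Reconstructor's internals may differ):

```python
import numpy as np

def frame_from_points(origin, x_point, plane_point):
    """Orthonormal X/Y/Z axes from an origin and two other picked points."""
    x = x_point - origin
    x = x / np.linalg.norm(x)                  # X axis runs along the first wall
    z = np.cross(x, plane_point - origin)
    z = z / np.linalg.norm(z)                  # Z is perpendicular to the picked plane
    y = np.cross(z, x)                         # Y completes the right-handed frame
    return np.column_stack([x, y, z])          # columns are the new axes

# Hypothetical corner picks on the tabby ruin
axes = frame_from_points(np.array([0.0, 0.0, 0.0]),
                         np.array([4.0, 1.0, 0.0]),
                         np.array([-0.5, 2.0, 0.0]))
print(np.round(axes.T @ axes, 6))              # identity matrix: axes are orthonormal
```

Expressing every scan point in this frame is what lets the long wall land parallel to one axis, so elevations export cleanly into Revit or AutoCAD.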

This process was tricky, as it was very difficult to get the longest walls aligned with the axes. Twice I deleted the previously selected points and axes to redo the process for better results. After the third attempt, the results were very good, so I saved the file.

This step is necessary because it enables the model to be plotted according to axes that are consistent across Revit and AutoCAD, allowing detailed drawings or elevations to be produced to scale.


The model is now oriented to the XYZ (red, green, blue) axes of a Universal Coordinate System (UCS).

The final step is to export the model. Right-click the single cloud ("chocolate cluster" in this case) and select Export model as…, choosing an .e57 file. Make sure to select Export Position in Current UCS and to change the color setting from Range to Color. This ensures the model is exported oriented to the XYZ axes and retains its color.

Once exported, the .e57 file (287 MB) was opened in Autodesk ReCap 360 and saved as a .rcp file.


The data model in Autodesk ReCap 360.

In the hopes of importing this model into Sketchfab, we selected the Mesh Tools – 3D Mesh option on the overhead toolbar. In the pop-up window that opens, select the cluster ("chocolate cluster"). In the next pop-up window, make sure to change the color output from Range to Color and set the output triangles to Average; this creates few enough faces that the file will not exceed Sketchfab's size limit. After a few minutes of waiting, a 3D mesh file was created. Under the Triangle Meshes section on the left-hand side in JRC Reconstructor, right-click and select Export model as…, choosing .ply (polygon mesh) as the file type. In the pop-up window, select Export Color and Export Normals to export the desired properties, and save the file to a desired location. This file can then be uploaded into Sketchfab for viewing. The class had trouble figuring out which file format was optimal for uploading the model into Sketchfab.


Converting the point cloud to a 3D mesh enables it to be saved as a .ply file (polygon mesh) and uploaded into Sketchfab for viewing.

Overall, the 3D mesh removed quite a bit of detail, including the rugged texture of the tabby ruins. But the model still looks good and offers a convenient platform for viewing the comprehensive reconstruction.

The scanning process and the JRC Reconstructor software were both much easier than expected. The software's tools and processes are easy to find and not too confusing for first-time users. And I am beyond happy with the final results.

3D Laser Scanning: Digital Data Capture

Laser scanning involves capturing 3D points. The points are organized in a point cloud and represent 3D space. Multiple scans remain unorganized until they are registered (stitched together).

Scanning is a non-invasive technology that can assist in the documentation and recording of existing buildings, features, and objects. A combination of scanning formats can achieve highly accurate results efficiently, within millimeter or sub-millimeter accuracy. Point cloud data is registered to create a 3D model. Once registered, it can be exported to multiple formats to create 2D drawings or polygonal 3D CAD models, which can assist in the research and interpretation of historic structures.

Laser scanning is used in:

  • Engineering
  • Manufacturing/Inspection
  • Surveys (infrastructure)
  • Forensics
  • Cultural Heritage

There is a wide range of laser scanners. A Time of Flight (ToF) scanner operates by emitting a light pulse; range coordinates are determined by calculating the time between when the laser sends out the signal and when it returns. It can measure distances from a few feet to thousands of feet, and its accuracy holds up better over long ranges than a phase-based scanner's. Time of Flight scanners are most appropriate for scans of large buildings and landscapes or objects without high levels of detail. The Leica C10 is a good example of a Time of Flight scanner.
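The time-of-flight principle reduces to one line of arithmetic: distance = (speed of light × round-trip time) / 2, since the pulse travels out and back. A quick sketch (the 200-nanosecond pulse time is a made-up example):

```python
C = 299_792_458.0          # speed of light in a vacuum, m/s

def tof_distance(round_trip_seconds):
    """Distance to the target from a pulse's round-trip travel time."""
    return C * round_trip_seconds / 2

# A pulse that returns after 200 nanoseconds
print(round(tof_distance(200e-9), 2))   # ~29.98 m
```

The tiny times involved are why ToF hardware needs picosecond-class timing to reach millimeter accuracy.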


The Time of Flight laser scanner measures how long the laser beam's round trip takes in order to capture objects.

Phase-based scanners (what we are using in class) emit a continuous laser beam. Range coordinates are determined by calculating the phase shift between the wave that was sent and the wave that returns, rather than the travel time. They can acquire points much faster than ToF scanners, but objects must be within a certain distance for them to capture detail.
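In a phase-based scanner the range comes from the measured phase shift of the returning wave: distance = (Δφ / 2π) · (λ / 2), plus a whole-wavelength ambiguity term that the scanner resolves with multiple modulation frequencies. A hedged sketch of the basic relation (the modulation wavelength and phase values are illustrative, not Faro Focus specifications):

```python
import math

def phase_distance(phase_shift_rad, modulation_wavelength_m, whole_cycles=0):
    """Range from the measured phase shift; whole_cycles resolves the ambiguity."""
    return (whole_cycles + phase_shift_rad / (2 * math.pi)) * modulation_wavelength_m / 2

# Example: 10 m modulation wavelength, quarter-cycle phase shift, no extra cycles
print(phase_distance(math.pi / 2, 10.0))   # 1.25 m
```

Because phase can be measured continuously rather than waiting on discrete pulses, this design trades maximum range for much faster point acquisition.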


Phase-based scanners calculate distance from the phase shift of the laser beam, not from travel time, to capture an object.

Optical triangulation scanners use a laser and a camera working in unison to calculate distance. Range coordinates are determined from the triangle formed by the projected laser dot, the camera, and the laser emitter. They have a limited range but are highly accurate and are ideal for scanning ornaments or detailed objects.
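The triangulation itself is just similar-triangles geometry: in the simplest configuration, with the laser firing perpendicular to a known baseline between laser and camera, the depth of the dot is baseline × tan(camera angle). A sketch of that idealized case (the baseline and angle are illustrative numbers, not NextEngine specifications):

```python
import math

def triangulation_depth(baseline_m, camera_angle_rad):
    """Depth of the laser dot, assuming the laser fires perpendicular to the baseline."""
    return baseline_m * math.tan(camera_angle_rad)

# 10 cm baseline, dot observed at 60 degrees from the baseline
print(round(triangulation_depth(0.10, math.radians(60)), 4))   # ~0.1732 m
```

The tangent grows steeply near 90 degrees, which is why small angle errors blow up at long range; that geometry is the reason triangulation scanners are short-range but very precise up close.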


An optical triangulation scanner uses a laser and camera to capture data.

Structured Light Scanners use a combination of a projected light source and camera to determine 3D point values. A series of light patterns are projected onto an object and the 3D points are calculated by analyzing distortions in the pattern. These scanners have a limited range but are extremely fast and work best in low light conditions.


A structured light scanner uses patterns of light and cameras to capture differences of surface and texture.

Considerations for choosing the right scanner:

  • Scope of project (more than one scanner?)
  • Size of the project
  • Amount of detail to be captured

Limitations for scanners are numerous. Shiny or reflective surfaces will not reflect light accurately back to the scanner. Dark surfaces will absorb the light and not reflect it back to the scanner. Transparent objects allow light to pass through, making it difficult to capture shapes.

Equipment needed:

  • Scanner
  • Tripod
  • Targets
  • Computer
  • Data Storage
  • Power Source (internal and external)

The data capture process includes collecting point cloud data of the object or structure being scanned, making use of multiple scan locations with roughly 30% overlap between scans. Processing and registering the collected information follows, and the point cloud data is converted to a 3D mesh. Scan registration can be either manual or automatic: manual involves identifying matching points between scans by hand, while automatic registration lets the software register the scans. Output can include 2D drawings in AutoCAD using the point cloud, or the cloud can serve as a basis for 3D models.

The scanner we will be using in class is the NextEngine Scanner, an optical triangulation scanner.


Telfair Museum Scanning


Our class partnered with the Jepson Center, part of the Telfair Museums complex here in Savannah. On Friday, April 28, 2017, our Digital Practices class met at the Jepson Center at 9:30. Sarah, Madeline, Marina, and I were in charge of scanning the first set of objects provided by the museum collections staff. We split into groups of two; I worked with Madeline. We were selected to use the small Artec scanner, a handheld scanner that is very user-friendly, on an 18th-century Wedgwood vase. Sarah and Marina used the class NextEngine scanner to capture a small wood spill vase.


Our station in the atrium of the Jepson Center, where we scanned objects from their collections.


Madeline and I began scanning the Wedgwood vase, saving each scan to a designated folder on the computer desktop named "Telfair," with this object in a subfolder labeled "Item 1-Wedgwood vase." The process we followed for these scans was straightforward. We first placed the object on the turntable with assistance from the museum staff. With the scanner plugged into the computer and the program open, we pressed the preview button on the scanner to begin. The scanner required capturing the base first so that it could be automatically removed. The left-hand side of the program presents an error range, with a light green splotch showing where the current scan falls within it. When the splotch was in the best range (minimal error), we pressed the preview button again to begin scanning. The object was rotated slowly clockwise. Each scan captured only a small portion of the object, so multiple scans were needed. After the base was completed, we moved our way up the vase, eventually flipping it over to capture the beautiful detail on the bottom.


The Artec scanner was used to scan the Wedgwood vase on the turntable (right).

Our professor advised us that each scan should preferably have an error of less than 0.4. Our first several scans of the base of the vase were at 0.5, a little higher than we wanted for models destined for the museum. Then, upon completing all of the scans needed for this vase, the program crashed, and it turned out we had forgotten to save the majority of them. Knowing now that we needed to save after every single scan, we restarted. The second round of scanning produced better scans and fewer errors: the errors for the second set were, in order, 0.5, 0.1, 0.1, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1, and 0.2. These show we scanned the vase fairly well.

Once we felt we had completely scanned the vase, we switched stations, and Madeline and I moved to the NextEngine scanner. Moving from such an easy handheld scanner to a more troublesome one was difficult. The immobility of the scanner posed problems when trying to capture a fragile object like the wood spill vase. We continued Sarah's scanning of the wooden spill vase, and ran into trouble when scanning its base, as the curve of the bottom and stem would not allow the laser to accurately capture detail at such an angle. A museum employee assisted us in using museum wax to hold the vase as we angled it against the scanning arm towards the scanner. This alleviated the issue of holes in the scans.

Madeline and I struggled with the dark, neutral, and light settings for the scans. Dark captured some detail but left out portions of the wood grain; light simply left out too much information to work. Neutral proved best for capturing the vase. As for distance, all of the settings (Macro, Wide, Extended) produced similar results, but Macro ended up producing slightly crisper details. With the scanner set to Macro distance and neutral light, we captured more and more detail. Once the base was fully captured, we angled the vase even more, directing the scanner inside to hopefully capture the interior void. At 1:30, the second group of students arrived and we let them finish the scanning.

Another issue we overcame with the NextEngine scanner was the overhead sunlight in the atrium. The brightness drastically reduced data collection, and I was forced to hold a piece of cardboard above the scanner and vase to block the interference. We also tried turning the laser scanner to face the opposite direction from its original orientation, but this made no difference.


Using the NextEngine scanner to scan the wooden spill vase of the Telfair Museum.


On Monday, May 1, we met in the computer lab of the Clarence Thomas Center to begin registering the data we collected while 3D scanning. We connected the NextEngine scanner to the one computer that has the software. Jarles began aligning 3 points between each pair of scans to properly stitch them together, and we each took a turn aligning two scans to continue building the 3D model. Registration of the Artec scans requires the class laptop.

The registration process for the NextEngine scanner involves plugging the scanner into the computer and opening the program. Once the file of scans is opened, each individual scan appears along the bottom. To begin stitching the scans, select the Align tool in the top toolbar; this opens two scans side by side. On either scan there are three dots (one red, one yellow, one blue) that must be dragged onto the scan. Placing them on a specific feature found in both scans will align them. For the wood spill vase we stitched in class, we primarily selected letters of the text visible in each scan. Once all three dots have been placed accordingly, clicking the Align tool again stitches the scans together into one. Clicking the Refine tool helps clean up the join. This process must be done for each subsequent scan; clicking the Align tool again restarts the process with the next scan in line.


Selecting three points on two different scans allows the program to stitch the scans into one.

Once the registration/stitching is complete, the next step is to trim the excess data collected. Using the different selection tools, select outlying specks of data and click the scissors button to delete them. Next, the Fuse tab presents a tool to create a mesh from the registered point clouds (the data collected from the scans). Selecting Fuse presents a settings window for an automatic fuse (flat fill is preferred, and leaving the largest hole unfilled will leave the handle in place). The process takes a long time. Finally, the Polish tool is used to fill holes in the data. The finished project should be exported as an .obj file.

The hole filling, outlier removal, and polishing of the jug can be seen below:


As for registering the data collected with the Artec scanner, the Artec Studio program is required. After the program is opened, the scan files are duplicated in case of corruption, and one file is opened as a new project. Making all 10 scans visible, it was easy to gauge how much data was captured.

To begin the registration process, it was necessary to remove the base that was visible in some scans. The Edit tab holds the Eraser tool. With only one scan visible, select the cut-off plane selection option. Pressing CTRL, select the scanned base. Once it is selected, hold CTRL and SHIFT to move the red plane up or down until it covers all of the base. Press Erase and the base will be removed. Repeat this process for each scan that has a base.

The next step is aligning the scans with each other. One option is automatic alignment: select the Align tab and Automatic Align. If the automatic alignment does not work properly (as in my case), manual alignment is necessary. Select the scans that need to be aligned in the left menu. With two scans visible, holding SHIFT allows you to maneuver the second scan for proper alignment. Left-click on each scan to drop colored points marking matching features to align. Once about three points have been dropped, click Align; the two scans will be aligned according to how accurate your points were. Continue the process for each scan until the entire object is aligned.

The next step is to register the model using global registration, which joins the points into a more condensed form. Under the Tools tab, select Global Registration, choose Geometry, and apply; this creates one form. Next, run Outlier Removal to delete excess data outside the model. For the resolution, check the scans listed on the right-hand side for your maximum error (usually around 0.5): the largest error is your resolution. For the Wedgwood vase our max error was 0.3, so the resolution was 0.3. After a long wait, the excess outliers were removed. The next step is to create a fusion (a solid object). In the same Tools tab, select Sharp Fusion and choose Watertight to create a solid object that automatically fills any holes in the scans. Again, set the resolution to your maximum error (0.3). Apply and wait for the model to be generated.
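The resolution rule above (use the largest per-scan maximum error) is easy to express as a tiny helper; the scan names and error values below are hypothetical, stand-ins for the numbers read off Artec Studio's workspace panel:

```python
def pick_resolution(max_errors):
    """Return the resolution for global registration: the rule of
    thumb is to use the largest per-scan maximum error reported
    in the workspace panel."""
    errors = list(max_errors)
    if not errors:
        raise ValueError("no scans loaded")
    return max(errors)

# Hypothetical per-scan maximum errors (mm)
scan_errors = {"scan01": 0.2, "scan02": 0.3, "scan03": 0.25}
resolution = pick_resolution(scan_errors.values())
print(resolution)  # 0.3, matching the Wedgwood vase example
```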

When it came to our Wedgwood vase, there were severe holes toward the base and around the handles. Such large holes did not allow for a smooth fusion in those areas, though the vast majority of the vase was beautifully captured; rescanning those areas would be necessary to capture every part of the vase so the holes fill properly. The next step was to fill the holes. Since all of them needed filling, I selected the Edges tab, which lists each hole present. As there were only a few, I filled them in manually to ensure it was done correctly. The holes in the base and mouth of the vase filled in perfectly smooth. There was some trouble with the holes in the handles, so I smoothed the edges of those holes first (at 100 strength) and then filled them with a smooth fill, which did an acceptable job. Once the holes were all filled, I saved the project and ran a fast mesh simplification, found in the Tools tab under Postprocessing. This reduces the number of faces and polygons in the model, allowing it to be easily imported into other programs. As the tutorial video for the processing called for 200,000 polygons, I entered the same amount for this model. A quick wait yielded a much smaller mesh.
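Mesh simplification works toward a polygon budget like the 200,000 used here. The arithmetic behind the reduction can be sketched as follows; the dense-fusion face count is hypothetical:

```python
def decimation_ratio(current_faces, target_faces=200_000):
    """Fraction of faces to keep when simplifying a mesh down to a
    target polygon budget; clamps at 1.0 if the mesh is already
    under budget."""
    return min(1.0, target_faces / current_faces)

# Hypothetical dense fusion with ~1.8 million faces
print(decimation_ratio(1_800_000))  # keep roughly 11% of the faces
```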

Next, it was finally time to apply the textures captured by the Artec scanner. To do so, select the Textures tab and ensure the sharp fusion object is selected in the top left corner. Selecting all of the scans you want the texture from below ensures that the color photos captured in those scans are applied to your model. I left all other settings at their defaults. A short wait produced a high-definition texture applied to the vase. After adjusting brightness, contrast, and other settings, I was satisfied with the results and saved the project.

To end the process, I exported the mesh as an .obj file, saving the texture as a JPEG. With the .obj (object) file, the .mtl (material) file, and the texture JPEG, I created a ZIP folder so the three were compressed together, ensuring the texture would be brought into Sketchfab. The files were then uploaded to Sketchfab for viewing.
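Bundling the three files into one ZIP, so Sketchfab can resolve the texture references on upload, can be scripted with Python's standard zipfile module; the file names here are hypothetical:

```python
import zipfile
from pathlib import Path

def bundle_for_sketchfab(obj_path, mtl_path, texture_path, out_zip):
    """Compress the mesh (.obj), material (.mtl), and texture image
    into a single flat ZIP archive for upload to Sketchfab."""
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for p in (obj_path, mtl_path, texture_path):
            # arcname keeps the archive flat so the .mtl's relative
            # texture reference still resolves after extraction
            zf.write(p, arcname=Path(p).name)

# Hypothetical file names from the Artec export:
# bundle_for_sketchfab("vase.obj", "vase.mtl", "vase_texture.jpg", "vase.zip")
```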

Overall, the registration process using the Artec software was vastly easier than with the NextEngine. Both took about the same (quite long) time to register data, but the texture and surface detail of the Wedgwood vase came out better than the texture of the jug, which was scanned and registered using the NextEngine.


The various holes located in the Artec scan of the Wedgwood vase can be seen below:


As for the other object we were tasked with registering, Madeline and I encountered many obstacles. The file had issues with polishing and hole filling in the NextEngine software: the program took several hours to fill only a few of the dozens of holes, and occasionally crashed. We continued trying to fill holes, but it proved too difficult. I think the registered file was so large that the program had trouble working with that much data. With the majority of the holes filled, we (along with the professor) agreed it was acceptable to finish the model with some holes left in the mesh. It was then saved as an .obj (object) file. However, the file size (293 MB) was too large for Sketchfab (max 200 MB), so it was imported into Meshmixer and reduced. Unfortunately, once uploaded to Sketchfab the file lost its texture and displayed as a plain white object. I attempted to upload both the .obj and .mtl (material) files to Sketchfab, but it did not work.


The imported .obj file in Sketchfab with the missing texture.

The Wedgwood vase proved very difficult; the holes seemed too large to fix. After working for an hour at Fahm with the Artec Studio 12 software, however, I was able to successfully fill the large holes on the side and mouth of the vase. The holes located on the handles were too tricky for the software to fill properly, so there are some issues with the handles in the final model. As for the texture, it came out extremely crisp. Since the laser scanner had difficulty capturing inside the mouth of the vase, no texture was captured there, and the auto texture placed on top resulted in an uneven appearance.


The NextEngine scanner pairs a camera with a laser scanner and captures great images. The process of capturing scans is long and tiresome: selecting settings, scanning, and waiting to see how the results came out. Stationed in the atrium of the museum, the overhead lighting also made it difficult to capture details in certain light. The scanner is stationary on the table, so one is forced to move the object itself (difficult when only museum staff may touch the objects). The NextEngine was used only to capture single scans of the object, not a complete 3D scan in one pass. The Artec scanner, on the other hand, was very easy to use. Connected to the computer via USB, it can be moved freely around the object as it sits on a rotating table, and that independence makes it much easier to use. The Artec software also shows a live gauge of the errors present in the scan, so changes can be made in real time to ensure better results, rather than waiting for a scan to complete. The Artec's mobility allows a complete 3D capture of whole portions at once, so fewer scans are needed for registration. Overall, using the Artec scanner to freely scan the objects was more interesting and fruitful than working with the clunky NextEngine, and the resulting model was far clearer and superior.

3D Printing

The next project was to 3D print the model of an object from Colonial Park Cemetery. Using the model generated in AutoDesk ReCap 360, I downloaded the .obj file from my profile. The file arrives as a ZIP, meaning the individual files must be extracted before they can actually be used. Once downloaded and extracted, I moved the mesh file out of the folder so it could be manipulated independently.
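The download-and-extract step can also be scripted; a minimal sketch using Python's standard zipfile module, with hypothetical file and folder names:

```python
import zipfile
from pathlib import Path

def extract_recap_download(zip_path, dest_dir):
    """Unpack a ReCap 360 download and return the extracted .obj
    paths; the mesh cannot be imported elsewhere while it is still
    inside the ZIP."""
    dest = Path(dest_dir)
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)  # creates dest_dir if needed
    return sorted(dest.rglob("*.obj"))

# Hypothetical download location:
# meshes = extract_recap_download("Capital.zip", "capital_files")
```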


I began with the program Rhino, importing the .obj file to create a solid from the mesh (the thin exterior shell of the 3D model). With an error warning of over 20,000 faces to stitch together, the program simply could not handle creating a solid from so many faces; it froze and crashed. I then tried AutoDesk 3ds Max. The program, recommended by previous students, let me check for holes and errors within the mesh; it reported around 50,000 errors. With no prior experience in the program, I decided to ask the staff at Fahm Hall, the jewelry building, for help closing the holes and creating a solid for 3D printing.

The staff at the Fahm digital lab had no idea how to work with the programs and files I was using for this project. Defeated, I finally downloaded AutoDesk Meshmixer to my personal computer. Once it was installed, I imported the .obj mesh file (only that file), explored the program, and looked online for help filling all of the holes and making the mesh solid. To remove excess ground that I did not want printed, I used the left-hand navigation bar to select EDIT-SEPARATE SHELLS and, in the pop-up box, deleted every shell aside from the exterior of the object itself. Doing so removed everything but the object I wanted printed. I then needed to fill in the bottom and interior space to create a solid object. Again on the left-hand toolbar, I selected ANALYSIS-INSPECTOR-AUTO FILL FLAT to create a solid bottom and fill any holes in the mesh. A few minutes' wait yielded a complete mesh with a bottom, but it was still not solid.


Using Inspector to fill holes before removing the ground produced abnormal results; I then removed the excess ground with "Separate Shells" to delete those portions.

I went to the toolbar and used EDIT-MAKE SOLID to fill in the model and create a solid object; the program fills in the mesh automatically. To keep as much detail as possible for printing, I chose "Accurate" for the solid type and set the accuracy and mesh density to their highest values. Updating the model produced a detailed final model. I then wanted to mount the object on a simple base. Using the MESHMIXER tool on the toolbar and selecting a box, I created a 1.5″ × 1.5″ × 0.15″ base for the object to sit on. Then, holding the shift key, I selected both the base and the object and chose EDIT-COMBINE to fuse the two together.


Holes in the mesh are being filled automatically using the inspector tool. This ensures there are no holes in the final 3D print.

With the object as one complete solid, it was time to scale it. Knowing the average, affordable size for a 3D print, I used ANALYSIS-UNITS/DIMENSIONS to scale the object to my desired height of 2″; the overall dimensions came to 1.402″ × 1.402″ × 2″. Once scaled, it was ready to be saved. I saved it as a .mix file so I could edit it again later, but I also remembered I had been told to save it as an .stl (stereolithography) file for printing. Choosing a binary .stl (compressed to reduce the file size), I saved it to a thumb drive, ready to be printed.
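The scaling step is simple proportional arithmetic; Meshmixer applies it for you, but the numbers can be checked by hand. The raw bounding-box values below are hypothetical, chosen only so the result matches the final 1.402″ × 1.402″ × 2″ dimensions:

```python
def scale_to_height(dims, target_height):
    """Uniformly scale bounding-box dimensions (x, y, z) so the
    z extent equals target_height, preserving proportions."""
    x, y, z = dims
    factor = target_height / z
    return (round(x * factor, 3), round(y * factor, 3), round(z * factor, 3))

# Hypothetical raw bounding box (inches) before scaling to 2" tall
print(scale_to_height((3.505, 3.505, 5.0), 2.0))  # -> (1.402, 1.402, 2.0)
```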


The solid object and its square base needed to be combined into one object for printing. The object has been scaled to a height of 2″.


Running to Fahm Hall, I filled out the proper form for printing the object in FDM (basic plastic printing). The employee said my file was done correctly, and it was approved for printing at a cost of $10. After a day and a half (printing, plus washing in a special solution to remove excess plastic), I picked up the model, and it came out much better than I expected.

It was difficult to clearly photograph the text on the monument, and for the program to read and recognize it. As a result, Meshmixer and the 3D print were not able to fully capture the details of the text, which was disappointing to see.


The additional resources I used to complete this project are:

Next Engine Laser Scanner

Today I was introduced to the NextEngine 3D Laser Scanner. The scanner, about the size of a stack of college textbooks, relies on both digital photography and laser scanning to completely record and register a 3D model of a selected object.


The NextEngine scanner uses a camera, laser, and turntable to fully capture objects in 360-degree detail.

To begin, we set up the laser scanner in the Clarence Thomas Center computer lab. The scanner connects to the computer via USB and is powered by a power cord; the scanner itself connects to a small turntable via an ethernet cable. The object we selected to practice with was a concrete corbel, which was placed on the turntable before the program was opened on the computer. The interface is remarkably easy and straightforward for a first-timer.

When opened, the program displays a small window on the right side showing what the scanner's camera is capturing; this is used to gauge whether the object is within the window of sight. On the left side, a plethora of options lets users modify settings depending on whether the scan is capturing enough data. For the first trial, we selected a single scan (without the turntable creating a 360-degree model) in standard definition, taking about 2.5 minutes. The options for macro, wide, or extended view, and whether the scan focused on dark, neutral, or light colors, were changed constantly throughout our trials. We began the first scan, and not much data of the object was collected.

It took several scans, and several changes to the lighting and to the object's distance from the scanner, to capture as much of the corbel as possible. Following the scans, the collected data was registered. The program has a feature for aligning the various scans: using three different points, you select the same point within two scans so the program can successfully and accurately stitch them into one cohesive, comprehensive 3D model.
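The three-point alignment pairs the same physical feature across two scans. Full registration also solves for rotation (e.g. via the Kabsch algorithm), but the translation component alone can be sketched in a few lines; the matched points below are hypothetical:

```python
def centroid_translation(points_a, points_b):
    """Translation that moves scan B's picked points onto scan A's
    by aligning their centroids. A greatly simplified sketch: real
    scan registration also solves for the rotation between scans."""
    n = len(points_a)
    cen_a = tuple(sum(p[i] for p in points_a) / n for i in range(3))
    cen_b = tuple(sum(p[i] for p in points_b) / n for i in range(3))
    return tuple(cen_a[i] - cen_b[i] for i in range(3))

# Three hypothetical matched points picked on two scans
a = [(0, 0, 0), (3, 0, 0), (0, 3, 0)]
b = [(2, 3, 0), (5, 3, 0), (2, 6, 0)]
print(centroid_translation(a, b))  # -> (-2.0, -3.0, 0.0)
```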

In the end, our first few tries with the NextEngine 3D Laser Scanner were tricky, and it will take many more attempts to fully understand how the scanner works.


Trials with the NextEngine 3D Laser Scanner were completed by scanning an architectural decoration. The scanner (left) uses photography and lasers to capture and generate a 3D model of an object on the turntable (right).

Image Sources:


Photogrammetry, Structure from Motion, and Drones

Photogrammetry is the art and science of gathering measurements of objects or buildings from photographs. These photos are then used to create two- and three-dimensional models. Once the photos are captured, they must be rectified, using measurements from one plane, to eliminate distortion.

Structure from motion creates three-dimensional models from multiple photos, built from movement and documentation at multiple vantage points around the object. Using a program like AutoDesk ReCap, up to 250 photos can be uploaded and stitched together to generate a 3D model. Completing structure from motion requires at least one known measurement, to ensure everything else is to scale, and the photos must overlap each other by 50% so that every detail is thoroughly captured. The entire process is automated server-side, completed in the cloud rather than by hand; in the event of discrepancies, manual photo registration is possible. The benefits of structure from motion include lower cost than laser scanning, better quality as megapixel counts increase, and less time needed on site. The main drawback is lower accuracy than scanning.

Drones enable perspectives of historic buildings and objects that previously weren't available in standard documentation. Although drones are expensive, and licensing may be necessary to operate in certain environments, they truly provide great insight into sites from above, capturing more information and photographs than is possible on the ground. Photographs collected from the air can then be uploaded into computer programs and stitched together to create an in-depth 3D model.


  1. Shoot sequentially around buildings
  2. Keep at least 50% overlap between images
  3. Be aware of occlusions
    • shoot at 5-10 degree intervals
  4. Shoot no more than 200 photos
  5. Create identifiable features if needed
  6. Symmetrical features, transparency, and shiny surfaces may prove difficult
  7. Do not move the object while photographing
  8. Use consistent lighting; do not use flash
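As a quick sanity check on the interval and photo-count guidelines above, the number of photos in one full orbit of an object is simple arithmetic; a minimal sketch:

```python
import math

def photos_per_orbit(interval_deg):
    """Number of photos in one full 360-degree orbit at a fixed
    angular interval between shots."""
    return math.ceil(360 / interval_deg)

# The 5-10 degree guideline gives 72 and 36 shots per orbit,
# both comfortably under the 200-photo ceiling
for step in (5, 10):
    n = photos_per_orbit(step)
    print(step, n, n <= 200)
```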

Image Sources:



AutoDesk ReCap 360

As the first project of the quarter, we were required to select a piece of architectural decoration from somewhere in the Clarence Thomas Center for Historic Preservation. I found a plaster Corinthian capital in the lab room; this would be the piece I photographed to create a three-dimensional model using AutoDesk's ReCap 360.


The program stitches photographs together to create an impressive 3D model of the object. I placed the plaster capital on top of a stool to have complete freedom to capture it from any angle, and used my iPhone 6 camera to photograph the object in indirect lighting to best capture its texture. The first round of photographs was taken at the same level as the object, beginning at the front and shifting clockwise around it, ensuring that each photo overlapped the previous one by at least 50% to reduce loss of detail. After the entirety of the object was captured from a head-on perspective, photos were taken from above to document the top of the piece.

After the 40 images were taken, they were transferred to the computer and uploaded into AutoDesk ReCap 360. The photo project was named "Capital," and the program began to stitch the photos together.


Uploaded photos in the ReCap program being stitched together to create a comprehensive 3D model.

An hour wait yielded a complete 3D model of the plaster capital.


The final 3D model.

The .obj file of the capital was uploaded to Sketchfab, and a more accessible 3D model was generated.


I am impressed by how carefully ReCap 360 was able to stitch the images together. However, I am disappointed that the software does not allow editing to remove the excess data captured during photographing; the capital is almost lost, overwhelmed by the sheer amount of excess data around it. I wish the software had an erasing option like Meshmixer offers.

Colonial Park Cemetery 3D Models

Using AutoDesk ReCap 360 again, we applied the structure-from-motion method to capturing objects within Colonial Park Cemetery in Savannah, Georgia. As the original burial ground for the city, first planned in 1733 by James Oglethorpe, the cemetery is home to some of the oldest graves in the city. What also makes the area unique is its half-above-ground burial vaults/mausoleums; these structures are commonly brick and resemble those in other southern cemeteries (e.g., New Orleans).


With a great location available to us, we met at the site around 12:00 to photograph two to three objects or tombstones, which we would later upload into ReCap to create 3D models. With direct sunlight overhead, each student was tasked with finding objects to capture. I found a mausoleum, a box tomb, and a die, cap, and base monument. With each object, I began taking photos parallel to the major surfaces, usually kneeling on the ground for better coverage. Moving counterclockwise, I photographed the objects with my iPhone 6 camera, ensuring that each photo overlapped the previous one by 50% so no details were left out. After the lower portions of the objects were thoroughly photographed, I took photos from a much higher perspective to capture their top surfaces. Overall, I captured 31 photos of the mausoleum, 41 of the box tomb, and 37 of the die, cap, and base.

While photographing, several challenges were encountered. First, the direct overhead sunlight made capturing details nearly impossible; its full effect would be revealed in the final 3D models. Second, the close proximity of other objects and tombstones made moving around the objects difficult. Still, alternative positions were found and the proper photographs were taken.

Once the images were taken, they were transferred to a computer for uploading. The JPEG images were sorted by object and uploaded into AutoDesk ReCap 360. Once uploaded, certain settings were altered to ensure proper results for later editing: Ultra quality was selected so fine details were preserved, smart cropping was turned on to crop out unnecessary graphics behind the camera positions, smart texture was turned on to improve the overall texture, and the OBJ and RCM formats were added.

The models were successfully generated from the images I captured from the cemetery. The files from ReCap 360 (.obj files) were then uploaded to a Sketchfab account.


The editing window in Sketchfab, where the XYZ orientation and background were edited for easier viewing.

Once uploaded, the website regenerated each 3D model in a more accessible viewer for the public. The background was changed to a simple black to be less distracting for viewers. Several of the models' orientations were askew and were adjusted on the XYZ coordinates accordingly. They were then saved and published.


Overall, I am impressed with the amount of detail the iPhone camera was able to catch, and with how well ReCap 360 aligned and stitched the photos into a 3D model. The software was easy to use, especially for someone with no experience in any of this process: all it took was uploading the photos and letting the program stitch them together. The results are very satisfactory, but I wish the program also allowed easy removal of excess data in the model (e.g., the ground around the object).