3D Laser Scanning: Digital Data Capture

Laser scanning captures 3D points. The points are organized into a point cloud, a representation of 3D space. Multiple scans remain unaligned until they are registered (stitched together).

Scanning is a non-invasive technology that can assist in the documentation and recording of existing buildings, features, and objects. Combining scanning formats can achieve highly accurate results efficiently, within millimeter or sub-millimeter accuracy. Point cloud data is registered to create a 3D model. Once registered, it can be exported to multiple formats to create 2D drawings or polygonal 3D CAD models, which can assist in the research and interpretation of historic structures.

Laser scanning is used in:

  • Engineering
  • Manufacturing/Inspection
  • Surveys (infrastructure)
  • Forensics
  • Cultural Heritage

There is a wide range of laser scanners. Time of Flight (ToF) scanners operate by emitting a light pulse; range coordinates are determined by timing the interval between when the laser emits the pulse and when the reflection returns. A ToF scanner can measure distances from a few feet to thousands of feet, and it holds its accuracy better over long ranges than a phase-based scanner does. Time of Flight scanners are most appropriate for scans of large buildings, landscapes, or objects without high levels of detail. The Leica C10 is a good example of a Time of Flight scanner.

The Time of Flight laser scanner measures the distance the laser beam travels, in relation to time, to capture objects.
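
The arithmetic behind this is simple: with c the speed of light and Δt the measured round-trip time of the pulse, the range is

    d = (c × Δt) / 2

For example, a pulse that returns after about 66.7 nanoseconds corresponds to roughly (3×10^8 m/s × 66.7×10^-9 s) / 2 ≈ 10 m.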

Phase-based scanners (what we are using in class) emit a continuous laser beam. Range coordinates are determined by comparing the phase of the modulated beam when it is sent with its phase when it returns, rather than by timing a pulse. A phase-based scanner can acquire points much faster than ToF, but objects must be within a certain distance for it to capture details.

Phase-based scanners calculate distance from the phase shift of the laser beam, rather than from time, to capture an object.
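
A standard phase-shift ranging relation (general to phase-based scanners, not specific to any one model) ties the measured phase shift Δφ of the modulated beam to distance:

    d = (c / (4π × f_mod)) × Δφ

where f_mod is the modulation frequency. Because Δφ wraps around every 2π, the unambiguous range is c / (2 × f_mod), about 15 m at a 10 MHz modulation, which is why the object must sit within a certain distance of the scanner.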

Optical Triangulation Scanners use a laser and camera working in unison to calculate distance. Range coordinates are determined by triangulating between the projected laser dot, the camera, and the laser emitter. They have a limited range but are highly accurate and are ideal for scanning ornaments or detailed objects.

An optical triangulation scanner uses a laser and camera to capture data.
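
The geometry is the classic triangulation triangle: the laser and camera sit a known baseline b apart, and the angles α (at the laser) and β (at the camera) toward the projected dot fix the third vertex, giving a perpendicular distance of

    d = b × sin(α) × sin(β) / sin(α + β)

With illustrative numbers (b = 10 cm and α = β = 60°), this works out to about 0.1 × 0.75 / 0.866 ≈ 8.7 cm; the short baseline is what limits these scanners to close range.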

Structured Light Scanners use a combination of a projected light source and camera to determine 3D point values. A series of light patterns are projected onto an object and the 3D points are calculated by analyzing distortions in the pattern. These scanners have a limited range but are extremely fast and work best in low light conditions.

A structured light scanner uses patterns of light and cameras to capture differences in surface and texture.

Considerations for choosing the right scanner:

  • Scope of project (more than one scanner?)
  • Size of the project
  • Amount of detail to be captured

Limitations for scanners are numerous. Shiny or reflective surfaces will not reflect light accurately back to the scanner. Dark surfaces will absorb the light and not reflect it back to the scanner. Transparent objects allow light to pass through, making it difficult to capture shapes.

Equipment needed:

  • Scanner
  • Tripod
  • Targets
  • Computer
  • Data Storage
  • Power Source (internal and external)

The data capture process involves collecting point cloud data of the object or structure being scanned from multiple scan locations with roughly 30% overlap. Processing and registering the collected information follows, and the point cloud data is converted to a 3D mesh. Scan registration can be either manual or automatic: manual registration involves identifying common points between scans by hand, while automatic registration aligns the scans using software. Output can include 2D drawings in AutoCAD using the point cloud, or the point cloud can serve as a basis for 3D models.
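
To make the registration step concrete, here is a minimal sketch of pairwise scan alignment using the open-source Open3D library in Python (an assumption for illustration, not the software used in class). The file names and the 5 mm voxel size are hypothetical; the coarse-to-fine idea (downsample, estimate normals, refine with ICP) is what automatic registration software does in spirit.

    import numpy as np
    import open3d as o3d

    # Load two overlapping scans (hypothetical file names).
    source = o3d.io.read_point_cloud("scan_01.ply")
    target = o3d.io.read_point_cloud("scan_02.ply")

    # Downsample for speed and estimate normals (point-to-plane ICP needs them).
    voxel = 0.005  # 5 mm -- an assumption; tune to the scanner's resolution
    source_down = source.voxel_down_sample(voxel)
    target_down = target.voxel_down_sample(voxel)
    for pcd in (source_down, target_down):
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 4, max_nn=30))

    # Refine an initial guess (identity here) with point-to-plane ICP.
    result = o3d.pipelines.registration.registration_icp(
        source_down, target_down,
        max_correspondence_distance=voxel * 3,
        init=np.identity(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
    print("fitness:", result.fitness)        # fraction of points matched
    source.transform(result.transformation)  # bring scan 1 into scan 2's frame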

The scanner we will be using in class is the NextEngine Scanner, an optical triangulation scanner.


Image Sources:
http://aqua.epfl.ch/page-96308-en.html
http://www.tankonyvtar.hu/en/tartalom/tamop425/0032_precizios_mezogazdasag/ch02s04.html
http://3dscanningservices.net/blog/need-know-3d-scanning/
http://www.3ders.org/articles/20150409-teacher-builds-diy-structured-light-3d-scanner-using-a-video-projector-and-webcams.html

Telfair Museum Scanning

BACKGROUND

Our class partnered with the Jepson Center, part of the Telfair Museum complex here in Savannah. On Friday, April 28, 2017, our Digital Practices class met at the Jepson Center at 9:30. Sarah, Madeline, Marina, and I were in charge of scanning the first set of objects provided by the museum collections staff. We then split into groups of two; I worked with Madeline. We were selected to use the small Artec Scanner, a handheld scanner that is very user-friendly, on an 18th-century Wedgwood vase. Sarah and Marina used the class NextEngine Scanner to capture a small wood spill vase.

Our station in the atrium of the Jepson Center, where we scanned objects from their collections.

PROCESS

Madeline and I began scanning the Wedgwood vase. We saved each scan to a designated folder on the computer desktop named “Telfair,” with this object in a subfolder labeled “Item 1-Wedgwood vase.” The process for these scans was straightforward. We first placed the object on the turntable with assistance from the museum staff. With the scanner plugged into the computer and the program open, we pressed the preview button on the scanner to begin the process. The scanner required capturing the base first, so that it could be automatically removed. The left-hand side of the program displays an error gauge, with a light green splotch showing where the current scan falls in the range of errors. When the splotch was in the best range (minimal errors), we pressed the preview button again to begin scanning. The object was rotated slowly clockwise. Each pass captured only a small portion of the object, so multiple scans were needed. After the base was completed, we worked our way up the vase, eventually flipping it over to capture the beautiful detail on the bottom.

The Artec scanner was used to scan the Wedgwood vase on the turntable (right).

Our professor advised us that each scan should preferably have an error value below 0.4. Our first several scans of the base of the vase were at 0.5, a little higher than we wanted for models destined for the museum. Then, upon completing all of the scans needed for the vase, the program crashed, and it turned out we had forgotten to save the majority of them. Knowing now that we needed to save after every single scan, we restarted. The second round of scanning produced better scans with fewer errors: in order, 0.5, 0.1, 0.1, 0.2, 0.2, 0.2, 0.2, 0.2, 0.1, and 0.2. These show we scanned the vase fairly well.

Once we felt we had completely scanned the vase, we switched stations with Sarah and Marina, and Madeline and I moved to the NextEngine scanner. Moving from such an easy handheld scanner to a troublesome stationary one was difficult. The immobility of the scanner posed problems when trying to capture a fragile object like the wood spill vase. We continued Sarah and Marina's scanning of the vase and ran into trouble with its base: the curve of the bottom and stem would not allow the laser to accurately capture detail at such an angle. A museum employee assisted us in using museum wax to hold the vase as we angled it against the scanning arm towards the scanner. This alleviated the issue of holes in the scans.

Madeline and I struggled with the dark, neutral, and light settings for the scans. Dark captured some detail but left out portions of the wood grain; light simply left out too much information to work; neutral proved best for capturing the vase. As for distance, all of the settings (Macro, Wide, Extended) produced similar results, but Macro ended up producing slightly crisper details. With the scanner set to Macro distance and neutral light, we captured more and more detail. Once the base was fully captured, we angled the vase even more, directing the scan inside in hopes of capturing the interior void. At 1:30, the second group of students arrived and we let them finish the scanning.

Another issue we overcame with the NextEngine scanner was the overhead sunlight in the atrium. The brightness drastically reduced data collection, and I was forced to hold a piece of cardboard above the scanner and vase to block the interference. We also tried turning the laser scanner to face the opposite direction from its original orientation, but this made no difference.

Using the NextEngine scanner to scan the wooden spill vase from the Telfair Museum.

REGISTRATION

On Monday, May 1, we met in the computer lab of the Clarence Thomas Center to begin registering the data we collected while 3D scanning. We connected the NextEngine scanner to the one computer that has the software. Jarles began aligning 3 points between each scan to properly stitch the scans together, and we each took a turn aligning two scans to continue building the 3D model. Registration of the Artec scans requires the class laptop.

The registration process for the NextEngine scanner involves plugging the scanner into the computer and opening the program. Once the file of scans is opened, each individual scan appears along the bottom. To begin stitching the scans, select the Align tool in the top toolbar; this opens two scans side by side. On either scan, there are 3 dots (1 red, 1 yellow, 1 blue) that must be dragged onto the scan. Placing them on a specific location found in both scans will align them; for the wood spill vase we stitched in class, we primarily selected letters of the text visible in each scan. Once all three dots have been placed accordingly, clicking the Align tool again stitches the scans together into one. Clicking the Refine tool helps tighten the alignment. This process must be repeated for each subsequent scan: clicking the Align tool again restarts the process with the next scan in line.
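
For the curious, the math behind those three dots can be sketched in a few lines of NumPy (an illustration of the idea, not NextEngine's actual code). From three matched points, software can solve for the rigid rotation and translation that map one scan onto the other, the classic Kabsch/Procrustes solution:

    import numpy as np

    def rigid_align(A, B):
        """A, B: (3, 3) arrays of three matched 3D points from two scans."""
        cA, cB = A.mean(axis=0), B.mean(axis=0)   # centroids
        H = (A - cA).T @ (B - cB)                 # 3x3 covariance matrix
        U, S, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # rotation mapping A onto B
        t = cB - R @ cA                           # translation
        return R, t

    # Hypothetical matched points (e.g., corners of letters found in both
    # scans); aligning a scan to itself returns R = identity, t = 0.
    A = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.02]])
    R, t = rigid_align(A, A)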

Selecting three points on two different scans allows the program to stitch the scans into one.

Once the registration/stitching is complete, the next step is to trim the excess data collected: using the different selection tools, select outlying specks of data and click the scissors button to delete them. Next, the Fuse tab presents a tool to create a mesh from the registered point clouds (the data collected from the scans). Selecting Fuse opens a settings window for an automatic fuse (flat fill is preferred, and leaving the largest hole open keeps the handle in place). The process takes a long time. Finally, the Polish tool is used to fill any remaining holes in the data. The finished project should be exported as an .obj file.
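
As a rough open-source analogue of the Fuse step (an assumption for illustration, not what the NextEngine software does internally), Poisson surface reconstruction in Open3D likewise turns a registered point cloud into a mesh; the file names here are hypothetical:

    import open3d as o3d

    pcd = o3d.io.read_point_cloud("registered_scans.ply")
    pcd.estimate_normals()  # Poisson reconstruction requires normals
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9)       # higher depth = finer detail, longer runtime
    o3d.io.write_triangle_mesh("fused.obj", mesh)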

A slideshow documented the hole filling, outlier removal, and polishing of the jug.

As for the registration of the data collected with the Artec Scanner, the Artec Studio program is required. After the program is opened, the scan files are duplicated in case of corruption, and one copy is opened as a new project. Making all 10 scans visible, it was easy to gauge how much data was captured. To begin the registration process, it was necessary to remove the base that was visible in some scans. To do so, open the Eraser tool in the Edit tab. With only one scan visible, select the cut-off plane selection option. Pressing CTRL, select the scanned base; once it is selected, hold CTRL and SHIFT to move the red plane up or down until it covers all of the base. Press Erase and the base will be removed. Repeat this process for each scan that has a base. The next step is aligning the scans with each other. One option is automatic alignment, via the Align tab and Automatic Align. If the automatic alignment does not work properly (as in my case), manual alignment is necessary. To do so, select the scans that need to be aligned in the left menu. With two scans visible, holding SHIFT allows you to maneuver the second scan for rough alignment. Left-click on each scan to drop colored points that mark matching locations on the two scans. Once around 3 points have been dropped, click Align. The two scans will be aligned according to how accurate your points were. Continue the process for each scan until the entire object is aligned.

The next step is to register the model using global registration, which consolidates the points into a more condensed form. In the Tools tab, select Global Registration, choose geometry, and apply; this will create one form. Next, select outlier removal, which removes excess data outside of the model. For the resolution, look at the scans listed on the right-hand side to find your maximum error (usually around 0.5); the largest error is your resolution. For the Wedgwood vase, our max error was 0.3, so the resolution was 0.3. After a long wait, the excess outliers were removed. The next step is to create a fusion (solid object). In the same Tools tab, select Sharp Fusion and choose Watertight to create a solid object that automatically fills in any holes in the scans. Again, set the resolution to your maximum error (0.3). Apply and wait for the model to be generated.
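
Artec Studio's outlier removal has an open-source cousin worth noting: Open3D's statistical outlier filter, which drops points that sit unusually far from their neighbors. A brief sketch (hypothetical file name; this is not Artec's algorithm):

    import open3d as o3d

    pcd = o3d.io.read_point_cloud("vase_registered.ply")
    clean, kept = pcd.remove_statistical_outlier(
        nb_neighbors=20,  # neighbors examined around each point
        std_ratio=2.0)    # drop points beyond 2 sigma of the mean distance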

When it came to our Wedgwood vase, there were severe holes towards the base and around the handles. Such large holes did not allow for a smooth fusion in those areas, though the vast majority of the vase was beautifully captured. Rescanning those areas will be necessary to capture every part of the vase so the holes can be filled properly. The next step was to fill the holes. Since all of the holes needed to be filled, I selected the Edges tab, which lists each hole present. As there were only a few, I filled them in manually to ensure it was done correctly. The holes in the base and mouth of the vase filled in perfectly smoothly. There was some trouble with the holes in the arms, so I smoothed the edges of those holes first (at 100 strength) and then filled them with a smooth fill, which did an acceptable job. Once the holes were all filled, the project was saved and I ran a fast mesh simplification, found in the Tools tab under Postprocessing. This reduces the number of faces and polygons in the model, allowing it to be imported easily into other programs. As the tutorial video for the processing called for 200,000 polygons, I entered the same amount for this model. A quick wait yielded a much smaller mesh.

Next, it was finally time to apply the textures captured by the Artec Scanner. To do so, select the Textures tab and ensure that the sharp fusion object is selected in the top left corner. Selecting all of the scans you want the texture from ensures that the color photos captured in those scans are applied to your model. I left all of the other settings at their defaults. A short wait produced a high-definition texture applied to the vase. After adjusting brightness, contrast, and other settings, I was satisfied with the results and saved the project.

To end the process, I exported the mesh as an .obj file, saving the texture as a jpeg. With the .obj (object) file, the .mtl (material) file, and the texture jpeg, I created a zip folder so the three were compressed together, ensuring the texture would come through in Sketchfab. The files were then uploaded to Sketchfab for viewing.
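
This packaging step is simple enough to script. A small sketch in Python (file names hypothetical) of zipping the .obj, .mtl, and texture jpeg together so Sketchfab picks up the texture:

    import zipfile

    # Sketchfab reads the texture only if all three files travel together.
    with zipfile.ZipFile("wedgwood_vase.zip", "w", zipfile.ZIP_DEFLATED) as z:
        for name in ("vase.obj", "vase.mtl", "vase_texture.jpg"):
            z.write(name)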

Overall, the registration process using the Artec software was vastly easier than with the NextEngine. Both took about the same time to register data (quite a long time), but the texture and surface detail of the Wedgwood vase was better than that of the jug, which was scanned and registered using the NextEngine.

RESULTS

A slideshow documented the various holes in the Artec scan of the Wedgwood vase.

As for the other object we were tasked with registering, Madeline and I encountered many obstacles. The file had issues with polishing and hole filling in the NextEngine software: the program took several hours to fill only a few of the dozens of holes. We kept trying to fill holes, but it proved too difficult, and the program occasionally crashed. I think the file we registered was so large that the program had trouble working with that much data. With the majority of the holes filled, we (along with our professor) agreed that it was acceptable to finish the model with some holes left in the mesh. It was then saved as an .obj (object) file. However, the file size (293 MB) was too large for Sketchfab (max. 200 MB), so it was imported into Meshmixer and reduced. Unfortunately, once uploaded to Sketchfab, the file lost its texture and displayed as a plain white object. I attempted to upload both the .obj and .mtl (material) files to Sketchfab, but it did not work.

The imported .obj file in Sketchfab with the missing texture.

The Wedgwood vase proved to be very difficult; the holes seemed too large to fix. After working for an hour at Fahm with the Artec Studio 12 software, however, I was able to successfully fill the large holes on the side and mouth of the vase. The holes located on the arms were too tricky for the software to fill properly, so there are some issues with the arms in the final model. As for the texture, it came out extremely crisp. Since the laser scanner had difficulty capturing inside the mouth of the vase, no texture was captured there, and the auto-generated texture was placed on top, resulting in an uneven texture.



CONCLUSION

The NextEngine scanner captures great images by pairing photography with a laser scanner. The process for capturing the scans, however, is long and tiresome: selecting settings, scanning, and waiting to see how the results come out. Stationed in the atrium of the museum, we also found the overhead lighting made it difficult to capture details. The laser scanner is stationary on the table, so one is forced to move the object itself (difficult when only the museum staff can touch the objects). Each NextEngine pass captured only a single scan of the object, not a complete 3D scan. The Artec Scanner, on the other hand, was very easy to use. Connected to the computer via USB cord, one can freely move the scanner around the object as it sits on a rotating table; this independence makes it much easier to use. The Artec software also shows a gauge of the errors present while scanning, so changes can be made in real time to ensure better results, rather than waiting for a scan to complete. The Artec's mobility allows a complete 3D capture of certain portions, so fewer scans are needed for registration. Overall, the experience of using the Artec scanner to freely scan the objects was more interesting and fruitful than the clunky NextEngine, and the resulting model was far clearer and superior.

3D Printing

The next project was to 3D print the 3D model of an object from Colonial Park Cemetery. Using the model generated by Autodesk ReCap 360, I downloaded the .obj file from my profile. The file arrived as a ZIP, meaning the individual files had to be extracted before use. Once downloaded and extracted, I removed the mesh file from the folder so it could be manipulated independently.

PROCESS

I began by using Rhino to import the .obj file and create a solid from the mesh (the thin exterior shell of the 3D model). With an error warning of over 20,000 faces to stitch together, the program simply could not handle creating a solid from so many faces; it froze and crashed. I then tried Autodesk 3ds Max. The program, recommended by previous students, let me check for holes and errors within the mesh; it reported around 50,000 errors. With no prior experience with the program, I decided to ask the staff at Fahm Hall, the jewelry building, for assistance in closing the holes and creating a solid for 3D printing.

The staff at the digital lab at Fahm had no idea how to work with the programs and files I was using for this project. Defeated, I decided to finally download Autodesk Meshmixer to my personal computer. Once downloaded, I imported the .obj mesh file (only that file) into the program. I began to explore the program and looked online for assistance in ensuring all of the holes were filled and the mesh was solid. To get rid of excess ground that I did not want printed in my model, I used the left-hand navigation bar to select EDIT-SEPARATE SHELLS. In the pop-up box, I deleted every shell aside from the exterior of the object itself, which removed everything other than the object I wanted printed. I then needed to fill in the bottom and interior space to create a solid object. Again on the left-hand toolbar, I selected ANALYSIS-INSPECTOR-AUTO FILL FLAT to create a solid bottom and fill any holes in the mesh. A few minutes' wait yielded a complete mesh and a bottom. But it was still not solid.

Using Inspector to fill holes before removing the ground produced abnormal results. I then removed the excess ground using “Separate Shells” to delete those portions.

I went to the toolbar and used EDIT-MAKE SOLID to fill in the model and create a solid object; the program fills in the mesh automatically. To ensure the most detail in printing, I chose “Accurate” for the solid type and raised the accuracy and mesh density to the highest possible values. Updating the model produced a detailed final model. I then wanted to mount the object on a simple base. Using the MESHMIXER tool on the toolbar and selecting a box, I created a 1.5″ x 1.5″ x 0.15″ base for the object to sit on. Then, holding down the shift key, I selected both the base and the object and chose EDIT-COMBINE to fuse the two together.

Holes in the mesh are being filled automatically using the inspector tool. This ensures there are no holes in the final 3D print.

With the object as one complete solid, it was time to scale it. With an average, affordable 3D-print size in mind, I clicked ANALYSIS-UNITS/DIMENSIONS to scale the object to my desired size of 2″ tall; the overall dimensions were 1.402″ x 1.402″ x 2″. Once scaled, it was ready to be saved. I saved it as a .mix file so I could edit it again at a future time, but I also remembered that I was told to save it as an .stl (stereolithography) file for printing. Choosing a binary .stl (more compact than the ASCII format, reducing the file size), I saved it to a thumb drive, and it was ready to be printed.

The solid object and its square base needed to be combined into one object for printing. The object has been scaled to a height of 2″.

RESULTS

Running to Fahm Hall, I filled out the proper form for printing the object in FDM (basic plastic printing). The employee said my file was done correctly and approved it for printing, at a cost of $10. After a day and a half (printing, plus washing in a special solution to remove excess plastic), I picked up the model, and it came out much better than I expected.

It was difficult to photograph the text on the monument clearly enough for the program to read and recognize it. As a result, Meshmixer and the 3D print were not able to fully capture the details of the text, which was disappointing.

The additional resources I used to complete this project are:

Next Engine Laser Scanner

Today I was introduced to the Next Engine 3D Laser Scanner. The scanner, about the size of a stack of college textbooks, relies on both digital photography and laser scanning to completely record and register a 3D model of a selected object.

The NextEngine scanner uses a camera, laser, and turntable to fully capture objects in 360-degree detail.

To begin, we set up the laser scanner in the Clarence Thomas Center computer lab. The scanner connects to the computer via USB and is powered by a power cord; the scanner itself is connected to a small turntable via an Ethernet cable. The object we selected to practice with was a concrete corbel. The corbel was placed on the turntable and the program was opened on the computer. Its interface is remarkably accessible and straightforward for a first-timer.

When opened, the program displays a small window on the right side showing what the scanner's camera is capturing; this is used to gauge whether the object is within the field of view. On the left side, a plethora of options are available to select and modify depending on whether the scan is capturing enough data. For the first trial, we selected a single scan (without the turntable creating a 360-degree model) in standard definition (taking about 2.5 minutes to scan). The options for macro, wide, or extended view, and for focusing on dark, neutral, or light colors, were changed constantly throughout our trials. We began the first scan, and not much data was collected from the object.

It took several scans, and several changes to the lighting or the distance of the object from the scanner, to capture as much of the corbel as possible. Following the scans, the collected data was registered. The program has a feature that enables you to align the various scans: using three different points, you select the same point within two scans so that the program can accurately stitch the scans together into one cohesive, comprehensive 3D model.

In the end, our first few tries using the Next Engine 3D Laser Scanner were tricky, and it will require many more tries to fully understand how it works.

Trials with the Next Engine 3D Laser Scanner involved scanning an architectural decoration. The scanner (left) uses photography and lasers to capture and generate a 3D model of an object on the turntable (right).


Image Sources:

https://www.eevblog.com/forum/reviews/3d-scanner-the-nextengine-2020i-my-latest-purchase/

Photogrammetry, Structure from Motion, and Drones

Photogrammetry is the art and science of gathering measurements of objects or buildings from photographs. These photos are then used to create two-dimensional and three-dimensional models. Once the photos are captured, they must be rectified, using measurements from one plane, to eliminate distortion.
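
Rectification from measurements on a single plane amounts to computing a homography. A minimal OpenCV sketch in Python (an assumption for illustration, not the software used in class; the pixel coordinates and plane dimensions are made up):

    import cv2
    import numpy as np

    img = cv2.imread("facade.jpg")  # hypothetical photo of a flat facade

    # Four points picked in the photo (pixels) ...
    src = np.float32([[105, 220], [830, 190], [860, 910], [90, 950]])
    # ... and where they belong after rectification, from measurements
    # taken on the facade plane (here mapped to an 800 x 1000 px canvas).
    dst = np.float32([[0, 0], [800, 0], [800, 1000], [0, 1000]])

    H = cv2.getPerspectiveTransform(src, dst)            # 3x3 homography
    rectified = cv2.warpPerspective(img, H, (800, 1000)) # undistorted view
    cv2.imwrite("facade_rectified.jpg", rectified)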

Structure from motion is the practice of creating three-dimensional models from multiple photos, built from movement and documentation at multiple vantage points around the object. Using a program like Autodesk ReCap, up to 250 photos can be uploaded and stitched together to generate a 3D model. A requirement for structure from motion is the inclusion of at least one measurement, so that everything else can be brought to scale. Photos must overlap each other by 50% to ensure every detail is thoroughly captured. The process is automated on the server side, meaning it is completed in the cloud rather than by hand; in the event of discrepancies, manual photo registration is possible. Benefits of structure from motion include lower cost than scanning, quality that rises with higher-megapixel cameras, and less time needed at the site. The main drawback is lower accuracy than laser scanning.

Drones enable a perspective on historic buildings and objects that previously wasn't available in standard documentation. Although drones are expensive and licensing may be necessary to operate in certain environments, they provide great insight into sites from above, capturing more information and photographs than is possible from the ground. Photographs collected from the air can then be uploaded into computer programs and stitched together to create an in-depth 3D model.


Advice:

  1. Shoot sequentially around buildings
  2. Have at least 50% overlap between images
  3. Be aware of occlusions
    • shoot at 5-10 degree intervals
  4. Shoot no more than 200 photos (see the quick arithmetic after this list)
  5. Create identifiable features if needed
  6. Symmetrical features, transparency, and shiny surfaces may prove difficult
  7. Do not move the object while photographing
  8. Consistent lighting is best; do not use flash
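
A quick arithmetic check on points 3 and 4: at 5-degree intervals, one full ring around an object takes 360 / 5 = 72 photos, so three rings at different heights (an illustrative setup) come to 216 photos, just over the 200-photo cap, while 10-degree intervals (36 per ring, 108 for three rings) stay comfortably under it.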
