During a 6-month internship at KADDB, two of my friends and I designed and built a 3D scanner.
The scanner consists of a high-precision laser depth sensor mounted on an assembly of linear actuators with two degrees of freedom. The sensor is swept across a 2D plane facing the object to be scanned, and measurements are collected to build a depth array. Once the sweep is complete, the object (which sits on a high-precision rotary stage) is rotated to expose another side to the sensor, and the process is repeated.
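The sweep-then-rotate loop above can be sketched as follows. The `move_to` and `read_depth` callables are hypothetical stand-ins for the real actuator and sensor drivers, which are not part of this writeup:

```python
def scan_viewpoint(move_to, read_depth, nx, ny):
    """One raster sweep of the sensor over the 2D scan plane.

    move_to(ix, iy) positions the linear actuators at grid cell
    (ix, iy); read_depth() returns one laser distance measurement.
    Both are illustrative placeholders for the actual hardware API.
    Returns the depth array (list of rows) for this viewpoint.
    """
    depth = []
    for iy in range(ny):          # vertical axis
        row = []
        for ix in range(nx):      # horizontal axis
            move_to(ix, iy)
            row.append(read_depth())
        depth.append(row)
    return depth
```

One such depth array is collected per rotary-stage angle; the stage is then stepped and the sweep repeated.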
The schematics of the scanner are shown below.
After scanning the object from a sufficient number of viewpoints, the depth arrays are mapped into 3D space to create a point cloud with a surface resolution of 9 points/mm². The point cloud is then filtered with third-party software and used to construct a polygon-based model via Delaunay triangulation.
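The mapping step amounts to expressing each sample in a common world frame by undoing the stage rotation for its viewpoint. A minimal sketch, assuming a 1/3 mm grid pitch (which yields the 9 points/mm² figure) and rotation about the vertical axis; the function name and conventions are illustrative, not the original firmware's:

```python
import math

def depth_array_to_points(depth, theta, dx=1/3, dy=1/3):
    """Map one viewpoint's 2D depth array into 3D world coordinates.

    depth[i][j] is the measured distance at grid cell (i, j);
    theta is the rotary-stage angle (radians) for this viewpoint.
    dx/dy is the grid pitch in mm (1/3 mm -> 9 points/mm^2).
    """
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    points = []
    for i, row in enumerate(depth):
        for j, d in enumerate(row):
            # Sensor-frame coordinates: x across the sweep, y up, z = depth
            x, y, z = j * dx, i * dy, d
            # Rotate about the vertical (y) axis so every viewpoint
            # lands in the same world frame
            points.append((cos_t * x + sin_t * z,
                           y,
                           -sin_t * x + cos_t * z))
    return points
```

Concatenating the outputs over all stage angles gives the raw point cloud that is then filtered and triangulated.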
Below is an example 3D scan performed by the device.
While the principle of operation is easy to grasp, collecting the depth samples, inferring their 2D coordinates, and mapping them into 3D space while accounting for mechanical misalignment and other sources of error were not as straightforward. We spent a good amount of time refining the device's output in software, as the figure below shows.
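As one example of the kind of correction involved, a constant tilt of the sensor mount can be undone by rotating every point by the calibrated angle. This is only a sketch of the idea; the angle would be estimated from a scan of a flat reference surface, and both the function and the single-axis error model are assumptions for illustration:

```python
import math

def correct_tilt(points, tilt_x):
    """Undo a constant sensor-mount tilt about the x axis.

    tilt_x (radians) is a calibration constant, e.g. estimated by
    fitting a plane to a scan of a flat reference plate. The
    single-angle model here is illustrative; real misalignment may
    need offsets and rotations about several axes.
    """
    c, s = math.cos(tilt_x), math.sin(tilt_x)
    return [(x, c * y - s * z, s * y + c * z) for (x, y, z) in points]
```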
Here is another point cloud, color-coded to show the individual 2D depth arrays collected from the different angular viewpoints,
and two close-ups of the face ...
The 3D scanner won the “Best Design Award” at the KADDB Technology Showcase in 2009.
Software subparts and documentation on GitHub: