## Saturday, August 25, 2012

### Python Constructive Solid Geometry Update

In earlier posts I've alluded to a Python Constructive Solid Geometry (CSG) library that I've been working on to allow parametric design.  You can do this with OpenSCAD, which is great software, but in my opinion its language leaves a bit to be desired. I wanted a solution that worked with existing languages, specifically C, C++ and Python, so that the results could be integrated easily with other software such as remeshers or FEA packages.

Of course, writing a robust CSG library is a daunting undertaking.  Fortunately there are existing libraries such as CGAL and Carve that handle this.  In my opinion CGAL is the more robust of the two, however it currently has compilation issues under OS X and is substantially slower than Carve.

Regardless, neither has the interface I'm looking for: the ability to directly load meshes, apply affine transformations, and perform boolean operations on meshes with minimal code.  So I started work on a C++ wrapper for Carve that would give me the interface I wanted, along with a Python wrapper on top of that.

I'm pleased to say that it's coming along quite well and is able to produce non-trivial parts.  The interface is considerably cleaned up from before and I'm now starting to use it for projects.  Here are two examples from (another) CNC project:

The code that generated these models is here:

```python
from pyCSG import *

def inch_to_mm( inches ):
    return inches*25.4

def mm_to_inch( mm ):
    return mm/25.4

def hole_compensation( diameter ):
    return diameter+1.0

mounting_hole_radius = 0.5*hole_compensation( inch_to_mm( 5.0/16.0 ) )

def axis_end():
    obj = box( inch_to_mm( 4.5 ), inch_to_mm( 1.75 ), inch_to_mm( 0.75 ), True )

    screw_hole = cylinder( mounting_hole_radius, inch_to_mm( 3.0 ), True, 20 )

    shaft_hole = cylinder( 0.5*hole_compensation( inch_to_mm( 0.5 ) ), inch_to_mm( 1.0 ), True, 20 ).rotate( 90.0, 0.0, 0.0 )

    center_hole = cylinder( 0.5*hole_compensation( inch_to_mm( 1.0 ) ), inch_to_mm( 1.0 ), True, 20 ).rotate( 90.0, 0.0, 0.0 )
    mount_hole = cylinder( 0.5*hole_compensation( 4.0 ), inch_to_mm( 1.0 ), True, 10 ).rotate( 90.0, 0.0, 0.0 )

    notch = box( inch_to_mm( 1.5 ), 2.0, inch_to_mm( 1.0 ), True )

    obj = obj - ( shaft_hole.translate( inch_to_mm( 1.5 ), 0.0, 0.0 ) + shaft_hole.translate( inch_to_mm( -1.5 ), 0.0, 0.0 ) )
    obj = obj - ( notch.translate( inch_to_mm( 2.25 ), 0.0, 0.0 ) + notch.translate( inch_to_mm( -2.25 ), 0.0, 0.0 ) )
    obj = obj - ( center_hole + mount_hole.translate( -15.5, -15.5, 0.0 ) + mount_hole.translate( 15.5, -15.5, 0.0 ) + mount_hole.translate( 15.5, 15.5, 0.0 ) + mount_hole.translate( -15.5, 15.5, 0.0 ) )

    obj = obj - ( screw_hole.translate( inch_to_mm( 1.0 ), 0.0, 0.0 ) + screw_hole.translate( inch_to_mm( -1.0 ), 0.0, 0.0 ) )
    obj = obj - ( screw_hole.translate( inch_to_mm( 2.0 ), 0.0, 0.0 ) + screw_hole.translate( inch_to_mm( -2.0 ), 0.0, 0.0 ) )

    return obj

def carriage():
    obj = box( inch_to_mm( 5 ), inch_to_mm( 5 ), inch_to_mm( 1.0 ), True )
    shaft_hole = cylinder( inch_to_mm( 0.75 )/2.0, inch_to_mm( 5.5 ), True )
    screw_hole = cylinder( inch_to_mm( 0.5 )/2.0, inch_to_mm( 5.5 ), True )

    leadnut_hole = cylinder( inch_to_mm( 0.25 )*0.5, inch_to_mm( 1.0 ), True )
    leadnut_access = box( inch_to_mm( 1.5 ), inch_to_mm( 3.0/8.0 ), inch_to_mm( 1.0 ), True )

    mhole = cylinder( mounting_hole_radius, inch_to_mm( 2.0 ), True ).rotate( 90.0, 0.0, 0.0 )

    obj = obj - ( shaft_hole.translate( inch_to_mm( 1.5 ), 0.0, 0.0 ) + shaft_hole.translate( inch_to_mm( -1.5 ), 0.0, 0.0 ) + screw_hole )
    obj = obj - ( leadnut_hole.translate( inch_to_mm( 0.5 ), inch_to_mm( -2.5 ), 0.0 ) + leadnut_hole.translate( inch_to_mm( -0.5 ), inch_to_mm( -2.5 ), 0.0 ) + leadnut_access.translate( 0.0, inch_to_mm( -2.0 ), inch_to_mm( 0.2 ) ) )

    # pattern the mounting holes on a 1" grid, skipping the center cross
    for i in range( -2, 3 ):
        for j in range( -2, 3 ):
            if i != 0 and j != 0:
                obj = obj - mhole.translate( inch_to_mm( 1.0*i ), inch_to_mm( 1.0*j ), 0.0 )
    return obj

axis_end().save_mesh( "axis_end.obj" )
carriage().save_mesh( "carriage.obj" )
```


As you can see, this approach gives a lot of flexibility for manipulating and patterning objects with custom code.  The examples above are not great examples of parametric design, but I'm sure you can imagine the sort of stuff that can be done.
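For instance, patterning logic can live in ordinary Python functions.  The helper below is hypothetical (it's not part of pyCSG): it just computes hole centers for a bolt-circle pattern, which could then drive a chain of translate() calls like the mounting-hole loop in the carriage code above.

```python
import math

def bolt_circle(radius, count, start_angle_deg=0.0):
    """Return (x, y) centers for `count` holes evenly spaced on a circle.

    Hypothetical helper for illustration: the offsets it returns could
    be fed to translate() calls when subtracting a patterned set of
    holes from a part.
    """
    centers = []
    for i in range(count):
        theta = math.radians(start_angle_deg + 360.0 * i / count)
        centers.append((radius * math.cos(theta), radius * math.sin(theta)))
    return centers
```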

I still have to perform a bit of cleanup outside the library to get printable models.  I just run each model through MeshLab's planar edge-flipping optimizer. This is a pretty simple step and I plan to integrate it into the library shortly, along with the ability to extrude custom profiles and build surfaces of revolution.  When these features are finished I plan to release the code for the library and Python wrapper.

## Saturday, August 11, 2012

### Mint Tin Parallax Protoboard

I recently bought a Parallax Propeller Protoboard.  The Propeller seems like a nice little processor: 160 MIPS, 32-bit, 32 IO pins, all for \$25.  The ability to have eight cores is also nice; it seems like it would make a good embedded CNC controller, particularly since it is now supported by GCC.  That means the GCode interpreter I'm occasionally working on should be portable to this platform, while also offering extended capabilities like a pendant or DRO.  But it doesn't come with a case, so I decided to build it into an Altoids tin, a la the Mintduino.

Nothing difficult: a few drilled holes and one filed opening for the USB cable.  Unfortunately I couldn't fit the cable into the tin along with the Protoboard, so an elastic keeps everything together.  Inside I soldered on male headers just below the female headers.  I left out one pin, which allows an IDE cable to be used and provides a polarized connection for other projects.  I will probably design a small board breaking out the IDE connector to screw terminals sometime in the future, to be able to easily interface with the Protoboard.

The IDE cable is cut down to just the first two connectors, allowing it to be rolled up into the Altoids tin when not in use.  The board itself sits on a piece of anti-static foam, which raises it a bit; a layer of thin cardboard insulates the board from the foam, since the foam is slightly conductive and could short everything otherwise.

## Friday, August 10, 2012

### Optical Tomography Setup

As noted in a previous post, my Stochastic Tomography paper was accepted to SIGGRAPH 2012.  Last Tuesday I was in Los Angeles at the conference to present the paper, including presenting the synopsis in the conference 'fast-forward' to an appallingly large audience.  The photo below shows the seating, but during the actual event it was standing room only.

A bit nerve-wracking to say the least.  However it went well, and after presenting my main talk to a MUCH smaller crowd, I'd like to post some photos of the setup that we used for the paper.  I should point out that this project actually contributed little new to the capture setup; it was already in place from work done by Borislav Trifonov, Michael Krimerman, Brad Atcheson, Derek Bradley and a slew of others.  My work on this paper focused primarily on the algorithms, but I thought people might be interested in a quick overview of the tomography capture apparatus.

The goal of the paper was to build 3D animated models of mixing liquids from multiview video.  To accomplish this, we used an array of 16 high-definition Sony camcorders arranged in a semicircle around our capture volume to record video streams of the two fluids mixing.

You can see the cameras in the photo above, all focused on the capture volume which is inside the glass cylinder.  Each of these records a video of the mixing process, producing 16 streams of video that look more or less like the photo shown below:

You can see one of the cameras peeking out at the right side of the frame.  The cameras are controlled by an Arduino-based box that talks to each camcorder using the Sony LANC protocol.  This is an unpublished protocol used by Sony editing consoles, but it has been reverse-engineered by others to allow control of Sony equipment.  We implemented this protocol on an Arduino, which allows us to start all the cameras recording, turn them on and off as arrays, switch them between photo and video mode and so on.  Unfortunately we can't easily set the exposure levels or transfer files to and from the devices; instead we have to do this painstakingly by hand through the on-camera menus, which is error-prone and time-consuming.

The two fluids we use are water, for the clear liquid, and fluorescein sodium dye for the mixing liquid.  This fluorescent dye comes as a water-soluble powder, which allows us to perform several types of capture.  The image above shows dye powder dropped onto the surface of the water; it mixes with the water and is slightly denser, forming the Rayleigh-Taylor mixing process you see in that shot.  We can also pre-mix the dye powder and simply pour or inject it into the domain; this was the process used for the following two captures that appear in the paper.

This shows an unstable vortex propagating downwards, leaving a complex wake.  I recommend watching in high-def (720p or 1080p).  The next is alcohol mixed with the dye powder, injected into the cylinder from the bottom.  Since alcohol is less dense than water, it rises under buoyancy, mixing as it goes.

In the video above you can see a laminar to turbulent transition as well as lots of complex eddies that form as part of the mixing process.

The captures are illuminated with a set of white LED concert strobe panels.  These panels serve two purposes.  First, they let us get LOTS of light into the scene in a controlled fashion.  Second, we actually use strobed illumination at about 30Hz to optically synchronize the cameras and remove rolling-shutter shear effects.

All captures start in darkness so we can tell the time offset from the start of the video to the first frame where there is significant illumination.  In fact we can do better than alignment to a single frame, since with the rolling shutter used by these cameras, we can actually determine the first scanline that is exposed.  Using a 30Hz illumination pattern, we can also determine the exposure setting of the camera by looking for the last scanline before the light goes off again.
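The offset detection boils down to scanning for the first bright scanline.  A simplified sketch of the idea (the real code operates on actual video frames, not precomputed per-scanline brightness lists):

```python
def first_exposed_scanline(brightness, threshold):
    """Locate the first illuminated scanline in a capture.

    `brightness` is a list of frames, each a list of per-scanline mean
    intensities (a simplification of the real video data).  Returns
    (frame_index, scanline_index) of the first value over `threshold`,
    giving a sub-frame estimate of when the strobe first turned on.
    """
    for f, frame in enumerate(brightness):
        for s, value in enumerate(frame):
            if value > threshold:
                return f, s
    return None  # capture never left darkness
```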

We then have a rolling shutter compensation program that scans through each video and reassembles a new video from the exposed and dark scanlines.  The result is a set of videos that are optically synchronized and that have minimal shearing introduced.
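A toy version of that reassembly, operating on lists of scanlines rather than real video, and assuming the strobe-aligned split scanline has already been found:

```python
def reassemble(frames, split):
    """Rebuild optically synchronized frames from rolling-shutter video.

    Each output frame takes scanlines `split:` from input frame f and
    scanlines `:split` from frame f+1, so every output frame contains
    exactly the rows exposed during one strobe period.  A sketch of the
    idea only, not the actual compensation program.
    """
    out = []
    for f in range(len(frames) - 1):
        out.append(frames[f][split:] + frames[f + 1][:split])
    return out
```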

This gives us a set of input data, however we also need to perform some geometric calibration of the scene in order to know from what angle each video was recorded and to be able to obtain the ray inside of the capture volume that corresponds to every observed pixel.

To do this, we use an optical calibration library called CalTag that detects self-identifying marker patterns, similar to QR codes, in images.  We print a calibration target on overhead transparencies and mount this pattern to a 3D-printed calibration jig that is placed in the glass cylinder.

This jig fits tightly in the cylinder and is registered with a set of detents that fit into recesses in a registration plate that is glued to the inside of the capture cylinder.  The marker pattern that you see in the photo above is also registered to a set of registration tabs.  We have a calibrated pattern on the front of the target as shown above, but also on the back.

When a camera takes an image of this jig after placing it into the capture domain (filled with water), an image similar to the following is obtained, although generally with far less blur due to condensation.

CalTag then gives us the corners of the marker patterns, which can be interpolated to associate every image pixel with the corresponding 3D point on the calibration target that it 'sees'.  We then rotate the target 180 degrees to face away from the camera and take a picture of an identical and carefully aligned target on the back side of the jig, giving another 3D point for each pixel.  Connecting the points gives a ray in 3D space inside the cylinder, without having to account for any optical interactions between the interior liquid and the cylinder.
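The front/back-point-to-ray step is simple geometry; a minimal sketch:

```python
import math

def pixel_ray(front_pt, back_pt):
    """Turn a pixel's calibrated front- and back-plane 3D points into the
    viewing ray inside the cylinder.  Returns (origin, unit_direction).

    A minimal sketch of the idea; the real pipeline does this for every
    pixel of every camera.
    """
    d = [b - a for a, b in zip(front_pt, back_pt)]
    n = math.sqrt(sum(c * c for c in d))
    return tuple(front_pt), tuple(c / n for c in d)
```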

We do this for every camera by mounting the capture domain on a rotation stage, which is again controlled by an Arduino.  An automated calibration procedure rotates the stage and triggers each camera to image the front calibration plane, then rotates an additional 180 degrees to repeat the process.  The whole mess is controlled by a Python script using pySerial, including the strobes, the rotation stage and the embedded camera controller.
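The automation script's structure is straightforward.  The sketch below generates a hypothetical command sequence for that procedure; the `ROTATE`/`TRIGGER` command strings are invented for illustration, since the actual firmware protocol isn't documented here.

```python
def calibration_sequence(num_cameras, step_deg):
    """Generate a (hypothetical) command sequence for the calibration run.

    For each camera: rotate the stage to face it, image the front
    calibration plane, then rotate an additional 180 degrees to image
    the back plane.  The command vocabulary is made up for this sketch.
    """
    cmds = []
    angle = 0.0
    for cam in range(num_cameras):
        cmds.append("ROTATE %.1f" % angle)            # front-plane shot
        cmds.append("TRIGGER %d" % cam)
        cmds.append("ROTATE %.1f" % (angle + 180.0))  # back-plane shot
        cmds.append("TRIGGER %d" % cam)
        angle += step_deg
    return cmds
```

Sending each command over pySerial would then just be `ser = serial.Serial("/dev/ttyUSB0", 9600)` followed by `ser.write((cmd + "\n").encode())` in a loop (port name and baud rate are assumptions).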

This gives us the calibration data needed to express our scene as a tomographic inverse problem: we look for the scene content that would reproduce the measurements (videos) we obtained, given a physical model of the scene.  In this case the scene is simply emissivity adding up along a ray path, so we get a linear inverse problem, which we solve using our new Stochastic Tomography algorithm.  The result is a set of volumetric 3D fields that you can animate, inspect, slice through and re-render however you like, as seen below in the submission video.
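For intuition about the "emissivity sums along rays, so solve a linear system" formulation, here is a toy randomized Kaczmarz (ART) solver.  To be clear, this is NOT the paper's Stochastic Tomography algorithm, which samples the problem very differently; it only illustrates the linear inverse problem being solved.

```python
import random

def kaczmarz(rows, b, n, iters=2000, seed=0):
    """Toy randomized Kaczmarz solver for A x = b.

    Each row of A models one ray: its entries weight the voxels the ray
    passes through, and the corresponding entry of b is the measured
    brightness.  Repeatedly project the estimate onto the hyperplane of
    a randomly chosen ray equation.
    """
    rng = random.Random(seed)
    x = [0.0] * n
    for _ in range(iters):
        i = rng.randrange(len(rows))
        a = rows[i]
        norm2 = sum(av * av for av in a)
        if norm2 == 0.0:
            continue
        # residual of equation i, distributed back along the ray weights
        c = (b[i] - sum(av * xv for av, xv in zip(a, x))) / norm2
        for j in range(n):
            x[j] += c * a[j]
    return x
```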

Stochastic Tomography and its Applications in 3D Imaging of Mixing Fluids, from Hullin et al. on Vimeo.

### 3-Axis CNC Controller

In a previous post, I showed the single axis stepper driver boards that I sent out to be made by OSH Park. These seemed to be electrically fine, although it was tricky to properly test without the connectors and other components.  After a quick order from DigiKey, I had the bits I needed.

I'm pleased to say that these work as expected, allowing the microstep mode to be chosen by DIP switch, breaking out all inputs and outputs with screw terminals, and providing the connections needed for high and low limit switches.  I've assembled three of these and screwed them to a piece of MDF to serve as the basis for a 3-Axis CNC controller board based on an Arduino Uno and GRBL.

The start of this board is shown above. Before it's complete I need to add the power connections for the high-power side, along with the limit switches.  I have the GRBL firmware flashed onto the Arduino and have connected a few motors to this setup and everything works great!
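With GRBL flashed, the main configuration step is setting steps/mm for each axis (the setting numbers differ across GRBL versions).  The arithmetic is simple; the example numbers below are assumptions for illustration, not necessarily what this build uses:

```python
def grbl_steps_per_mm(full_steps_per_rev, microsteps, mm_per_rev):
    """Compute the steps/mm value for GRBL's axis step settings.

    full_steps_per_rev: motor full steps per revolution (e.g. 200)
    microsteps:         driver microstep setting from the DIP switch
    mm_per_rev:         axis travel per motor revolution
    """
    return full_steps_per_rev * microsteps / mm_per_rev

# Assumed example: 200-step motor, 8x microstepping, 5/16"-18 leadscrew
# (25.4 / 18 mm of travel per revolution).
steps = grbl_steps_per_mm(200, 8, 25.4 / 18.0)
```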

Shown below is a closeup of the boards.  The screw terminals at the front connect the limit switches for the high and low endstops.  These have pulldown resistors and are connected to two of the screw-terminal positions on the logic side of the board (the two un-wired stops).  The remaining pulldown resistors are connected to the microstep selection pins, which are set by the red DIP switch.  On the right side of the board are the motor connections (the 4-position terminal block) and the motor power connections (the 2-position terminals).  All connections use 3.5mm terminal blocks, which actually meet the power requirements for multi-amp 24V operation.  They also allow multiple wires per position, enabling the daisy-chain style wiring shown above.  The low-power side uses these terminal blocks too; they aren't strictly needed there, but it's nice to only need one screwdriver to do the wiring.

I'm quite pleased with my first attempt at getting a board made.  It worked on the first try, the quality of the boards is excellent, and I think these drivers can form the basis of a good many other projects.