## Thursday, May 31, 2012

### Boost::python + Distutils Python Extension Module Example

In getting my ongoing python CSG library functioning, I needed a way to compile and build python extension modules.  Typically I use qmake from the Qt library to build projects, as it's relatively cross-platform (although Xcode support is broken as of version 4+) and has relatively little magic sauce making it function (I like transparency in a build system).

However, this isn't so hot for python modules where it's nice to have a simple setup.py script that compiles and installs your module.   Distutils provides just this functionality, while being pretty easy to pick up.  So I decided to do the building with distutils.

Then you have to choose how to actually generate the wrapper code.  Doing this by hand using the python API is manageable but cumbersome, and results in a lot more code to maintain.  To improve on this, there are a number of wrapper code generators, including SWIG and boost::python.  I prefer boost::python because it i) is very explicit: you define the exposed interface directly, and ii) works with any compiler, without custom build steps or interface-file compilation.  It's a bit more typing than SWIG, but I (again) like the transparency.

Of course the downside to boost::python is bjam, the boost build system that appears to summon demons from the ether to build your project through sorcery.  Most tutorials assume you will use bjam but I have yet to get it to work on an even modestly complicated project.  So I set about building python modules wrapped by boost::python using Distutils.  This is actually pretty easy, but I had to dig around through forums and documentation to find out how. Consequently I decided to post a full example from start to finish.

I'm going to assume that we're wrapping two source files functions_to_wrap.cpp and classes_to_wrap.cpp, located in root_dir/src, which have headers functions_to_wrap.h and classes_to_wrap.h located in root_dir/include.  The two headers are:

functions_to_wrap.h
#ifndef FUNCTIONS_TO_WRAP_H
#define FUNCTIONS_TO_WRAP_H

// returns a random string
const char *get_string();

// returns true if values are equal
bool are_values_equal( int a, int b );

// returns the number of supplied arguments to demonstrate
int num_arguments( bool arg0, bool arg1=false, bool arg2=false, bool arg3=false );

#endif


classes_to_wrap.h
#ifndef CLASSES_TO_WRAP_H
#define CLASSES_TO_WRAP_H

class wrapped_class {
private:
    int m_value;
public:
    wrapped_class();
    wrapped_class( const int value );

    void set_value( const int value );
    int  get_value( ) const;
};
#endif


There are also the implementation files, but these don't actually matter as far as the build process or wrapper definitions are concerned.  To wrap these, we write an additional C++ file in root_dir/src called boost_python_wrapper.cpp that defines the python interface.  The contents of this file are:

boost_python_wrapper.cpp
#include<boost/python.hpp>

// include the headers for the wrapped functions and classes
#include"functions_to_wrap.h"
#include"classes_to_wrap.h"

using namespace boost::python;

// generate overloads for default arguments: num_arguments takes a
// minimum of one argument and a maximum of four. the resulting
// overload generator is then supplied to boost::python when
// defining the wrapper
BOOST_PYTHON_FUNCTION_OVERLOADS( num_arguments_overloads, num_arguments, 1, 4 )

// generate wrapper code for both the functions and classes
BOOST_PYTHON_MODULE(ExtensionExample){
    // declare the functions to wrap, note the inclusion of the
    // overloads that handle default function arguments
    def( "get_string",       get_string );
    def( "are_values_equal", are_values_equal );
    def( "num_arguments",    num_arguments, num_arguments_overloads() );

    // declare the classes to wrap. both the default
    // constructor and the constructor taking an
    // integer argument are exposed
    class_<wrapped_class>("wrapped_class",init<>())
        .def(init<int>())
        .def("get_value", &wrapped_class::get_value )
        .def("set_value", &wrapped_class::set_value )
    ;
}


This file calls a handful of boost::python macros which instantiate the wrapper code, including handling default arguments for functions and overloaded constructors for classes.  It is also possible to wrap overloaded functions, although I've left this out for simplicity.  Note that there's nothing specific about the filenames I chose: as long as the functions and classes to be wrapped are included in the wrapper source file, you can use any structure/names you want.

When boost_python_wrapper.cpp, functions_to_wrap.cpp and classes_to_wrap.cpp are compiled as a shared library, boost::python automatically generates the wrapper code inside boost_python_wrapper.cpp by macro expansion, so no additional source files are needed.  Just link the result with the python and boost::python libraries and you get a python module.

Distutils provides a clean way of doing this and installing the module automatically.  To make a build/install script, we create a script setup.py in the root directory. The contents of this script are little more than compiler and linker flags.

Here's the setup.py script:
from distutils.core import setup, Extension

# define the name of the extension to use
extension_name    = 'ExtensionExample'
extension_version = '1.0'

# define the directories to search for include files
# to get this to work, you may need to include the path
# to your boost installation. Mine was in
# '/usr/local/include', hence the corresponding entry.
include_dirs = [ '/usr/local/include', 'include' ]

# define the library directories to include any extra
# libraries that may be needed.  The boost::python
# library for me was located in '/usr/local/lib'
library_dirs = [ '/usr/local/lib' ]

# define the libraries to link with the boost python library
libraries = [ 'boost_python-mt' ]

# define the source files for the extension
source_files = [ 'src/boost_python_wrapper.cpp',
                 'src/functions_to_wrap.cpp',
                 'src/classes_to_wrap.cpp' ]

# create the extension and add it to the python distribution
setup( name=extension_name,
       version=extension_version,
       ext_modules=[Extension( extension_name, source_files,
                               include_dirs=include_dirs,
                               library_dirs=library_dirs,
                               libraries=libraries )] )


Note that the include and library paths are most likely included by default. I have added them simply to demonstrate how you would include (potentially custom) headers and libraries into the build process without overcomplicating the example.

To build and install the module, you can then type:

$ python setup.py install

This should produce a lot of compiler spew.  If there are no errors, you should then be able to use the module.  Here is an example script that does just that:

demo.py
from ExtensionExample import *

print 'calling get_string(), should print out a random string'
print get_string()

print 'calling get_string(), should print out a random string'
print get_string()

print 'calling are_values_equal( 10, 10 ), should print True'
print are_values_equal( 10, 10 )

print 'calling are_values_equal( 10, 1 ), should print False'
print are_values_equal( 10, 1 )

print 'calling num_arguments( True ), should print 1'
print num_arguments( True )

print 'calling num_arguments( True, True ), should print 2'
print num_arguments( True, True )

print 'calling num_arguments( True, True, True ), should print 3'
print num_arguments( True, True, True )

print 'running: a = wrapped_class()'
a = wrapped_class()
print 'running: a.get_value(), should print -1'
print a.get_value()
print 'running: a.set_value( 5 )'
a.set_value( 5 )
print 'running: a.get_value(), should print 5'
print a.get_value()


If you run this script you should see the expected output (as described by the print statements themselves). You can download the entire example as a zip file here, which includes all the source/header files as well as the build scripts and a readme.  Hopefully it will be helpful to other people, or just me later on.

## Tuesday, May 29, 2012

### Python Constructive Solid Geometry

As mentioned in a previous post, I've started work on a Constructive Solid Geometry (CSG) library. The library is currently written in C++ and uses Carve to do the heavy lifting. I am also planning a CGAL back-end since, so far, it seems more robust than Carve to geometric degeneracies. Unfortunately, lack of strict compliance with the IEEE 754 spec by the LLVM and Clang compilers means that I can't develop and test this under OS X 10.7, my primary development machine.

The ultimate goal is a simple, easy to use library with functionality akin to OpenSCAD, but usable from mainstream languages (C++ and Python). I currently have a basic implementation functional, including mesh loading and saving (.OFF, .OBJ and .STL), affine transformations, basic primitives, and the CSG operations union, difference, symmetric difference, and intersection. All of this functionality is available through python via a boost::python wrapper.

Here's a sample python script, generating a weird table thing:
from pyCSG import *

table = polyhedron()
table_top = polyhedron()
table_leg = polyhedron()

table_top.make_box( 1.0, 1.0, 1.0, True )
table_leg.make_sphere( 0.5, True )

table = table_top.scale( 4.0, 0.25, 3.0 )
table = table.union( table_leg.translate(-1.5,-0.4,-1.0 ) )
table = table.union( table_leg.translate( 1.5,-0.4,-1.0 ) )
table = table.union( table_leg.translate( 1.5,-0.4, 1.0 ) )
table = table.union( table_leg.translate(-1.5,-0.4, 1.0 ) )

torus = polyhedron()
torus.make_torus( 1.0, 0.25, True )
table = table.difference( torus.translate( 0.0, 0.25, 0.0 ) )

table.save_mesh("table.obj")


Which produces the output (viewed in MeshLab):

Note that this is a single watertight mesh suitable for 3D printing, machining or meshing for finite element analysis.  I'm still finalizing the API, but will release the code when it's a bit more stable which I hope will make parametric design and optimization much more practical with open source tools.

## Sunday, May 27, 2012

### Example code for building CGAL::Polyhedron_3

I've found myself redoing this code relatively frequently, so I thought I would post it.  Nothing special, just a snippet that loads the file input.obj, builds a CGAL::Polyhedron_3 from it, and writes it out as dump.off. The code also includes a rudimentary Wavefront OBJ loader.

#include<cstdio>
#include<fstream>
#include<vector>
#include<string>
#include<algorithm>

#include<CGAL/Simple_cartesian.h>
#include<CGAL/Polyhedron_incremental_builder_3.h>
#include<CGAL/Polyhedron_3.h>
#include<CGAL/IO/Polyhedron_iostream.h>

typedef CGAL::Simple_cartesian<double>     Kernel;
typedef CGAL::Polyhedron_3<Kernel>         Polyhedron;
typedef Polyhedron::HalfedgeDS             HalfedgeDS;

// A modifier building a polyhedron with the incremental builder.
template<class HDS>
class polyhedron_builder : public CGAL::Modifier_base<HDS> {
public:
    std::vector<double> &coords;
    std::vector<int>    &tris;
    polyhedron_builder( std::vector<double> &_coords, std::vector<int> &_tris ) : coords(_coords), tris(_tris) {}
    void operator()( HDS& hds ){
        typedef typename HDS::Vertex   Vertex;
        typedef typename Vertex::Point Point;

        // create a cgal incremental builder
        CGAL::Polyhedron_incremental_builder_3<HDS> B( hds, true );
        B.begin_surface( coords.size()/3, tris.size()/3 );

        // add the polyhedron vertices
        for( int i=0; i<(int)coords.size(); i+=3 ){
            B.add_vertex( Point( coords[i+0], coords[i+1], coords[i+2] ) );
        }

        // add the polyhedron triangles
        for( int i=0; i<(int)tris.size(); i+=3 ){
            B.begin_facet();
            B.add_vertex_to_facet( tris[i+0] );
            B.add_vertex_to_facet( tris[i+1] );
            B.add_vertex_to_facet( tris[i+2] );
            B.end_facet();
        }

        // finish up the surface
        B.end_surface();
    }
};

// reads the first integer from a string in the form
// "334/455/234" by stripping forward slashes and
// scanning the result
int get_first_integer( const char *v ){
    int ival;
    std::string s( v );
    std::replace( s.begin(), s.end(), '/', ' ' );
    sscanf( s.c_str(), "%d", &ival );
    return ival;
}

// barebones .OBJ file reader, throws away texture coordinates, normals, etc.
// stores results in input coords array, packed [x0,y0,z0,x1,y1,z1,...] and
// tris array packed [T0a,T0b,T0c,T1a,T1b,T1c,...]
void load_obj( const char *filename, std::vector<double> &coords, std::vector<int> &tris ){
    double x, y, z;
    char line[1024], v0[1024], v1[1024], v2[1024];

    // open the file, return if open fails
    FILE *fp = fopen( filename, "r" );
    if( !fp ) return;

    // read lines from the file, if the first character of the
    // line is 'v', we are reading a vertex, otherwise, if the
    // first character is a 'f' we are reading a facet
    while( fgets( line, 1024, fp ) ){
        if( line[0] == 'v' ){
            sscanf( line, "%*s%lf%lf%lf", &x, &y, &z );
            coords.push_back( x );
            coords.push_back( y );
            coords.push_back( z );
        } else if( line[0] == 'f' ){
            sscanf( line, "%*s%s%s%s", v0, v1, v2 );
            tris.push_back( get_first_integer( v0 )-1 );
            tris.push_back( get_first_integer( v1 )-1 );
            tris.push_back( get_first_integer( v2 )-1 );
        }
    }
    fclose(fp);
}

int main() {
    // two vectors to hold point coordinates and
    // triangle vertex indices
    std::vector<double> coords;
    std::vector<int>    tris;

    // load the input file, bail out if nothing was read
    load_obj( "input.obj", coords, tris );
    if( coords.size() == 0 )
        return 1;

    // build a polyhedron from the loaded arrays
    Polyhedron P;
    polyhedron_builder<HalfedgeDS> builder( coords, tris );
    P.delegate( builder );

    // write the polyhedron out as a .OFF file
    std::ofstream os("dump.off");
    os << P;
    os.close();

    return 0;
}


## Saturday, May 26, 2012

### Python Involute Spur Gear Script

It's pretty difficult to find reasonably priced gears, and even if you can find them, they're often not exactly what you want: a gear with the right number of teeth, width, pressure angle and bore is often not available.

To this end I have written a python script for generating involute gears. Other scripts are available (an OpenSCAD script on Thingiverse, for example), however i) I like to do things myself, from scratch, and ii) I'd prefer my setup to work with a more mainstream programming language. The source code is available at the bottom of this post.

The script generates involute spur gears, with pressure angles up to about 30 degrees. I hope to extend it to include racks and internal gears eventually, but it is already quite useful. Output is in SVG for easy editing in graphic-design/laser-cutting workflows, or DXF for use with OpenSCAD. I am currently working on a CGAL-based constructive solid geometry module for python that will allow CSG operations to be performed by python scripts. This would allow fully parametric CSG design to be done in an open, mainstream language.
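The tooth flanks of an involute gear follow the involute of the base circle, i.e. the curve traced by unwinding a taut string from that circle. As a quick illustration of the underlying math (a standalone sketch, not code from gears.py; the function name and sampling scheme are my own):

```python
import math

def involute_points(base_radius, t_max, n=50):
    """Sample points along the involute of a circle of radius base_radius.

    Parametrically:  x = r*(cos t + t*sin t),  y = r*(sin t - t*cos t),
    for t in [0, t_max]. The distance from the center grows as
    r*sqrt(1 + t^2), which is what gives the tooth flank its flare.
    Illustrative helper, not part of gears.py.
    """
    pts = []
    for i in range(n):
        t = t_max * i / float(n - 1)
        x = base_radius * (math.cos(t) + t * math.sin(t))
        y = base_radius * (math.sin(t) - t * math.cos(t))
        pts.append((x, y))
    return pts
```

The actual gear profile intersects this curve with the addendum/dedendum circles and mirrors it for the opposite flank, but the involute itself is all of the "magic."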

I've generated some examples using the following script:
# import the gears script
from gears import *

pa = 14.5   # pressure angle, in degrees
P  = 24     # pitch, teeth per unit distance

# generate three gears
ax, ay = gears_make_gear( pa, 12, P )
bx, by = gears_make_gear( pa, 24, P )
cx, cy = gears_make_gear( pa, 48, P )
dx, dy = gears_make_gear( 30.0, 8, P )

# write them out as svg files, scaled uniformly by 150 and 300
gears_svg_out( ax, ay, 'section_a.svg', 150 )
gears_svg_out( bx, by, 'section_b.svg', 150 )
gears_svg_out( cx, cy, 'section_c.svg', 150 )
gears_svg_out( dx, dy, 'section_d.svg', 300 )


The output is below:

14.5 degree pressure angle, 12 teeth

14.5 degree pressure angle, 24 teeth

14.5 degree pressure angle, 48 teeth

30 degree pressure angle, 8 teeth

Since I haven't finished the CGAL CSG python CSG library, I've been calling OpenSCAD from the command-line via python to generate actual gears. I 3D printed some of these on the Vancouver Hackspace (VHS) Makerbot, with reasonably good results:
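The command-line glue mentioned above can be very thin. A minimal sketch of the idea (assuming the openscad binary is on your PATH; the function and file names are illustrative, not from my actual scripts):

```python
import subprocess

def openscad_command(scad_path, stl_path):
    # OpenSCAD's CLI renders an input file to a mesh with:
    #   openscad -o <output> <input>
    return ['openscad', '-o', stl_path, scad_path]

def scad_to_stl(scad_path, stl_path):
    """Render a .scad file to .stl by shelling out to OpenSCAD."""
    subprocess.check_call(openscad_command(scad_path, stl_path))
```

A python script can then generate a .scad file from the DXF output of the gear script and call scad_to_stl on it.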

These parts will eventually be used as part of a 1:4 drive for the leadscrews of my CNC. Currently I have ample torque, but cannot spin the motors fast enough for quick rapid traversals. With a 1:4 drive, I will hopefully be able to drive the machine quite quickly.
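The trade-off behind that gearing choice is just the ideal gear-train relation: a 1:4 step-up multiplies leadscrew speed by four while dividing the available torque by four. A sketch (numbers purely illustrative):

```python
def geared_output(motor_rpm, motor_torque, step_up):
    """Ideal speed/torque through a gear stage that multiplies output
    speed by step_up (4.0 for the 1:4 drive above). Friction and
    efficiency losses are ignored."""
    return motor_rpm * step_up, motor_torque / step_up

# e.g. a motor at 300 rpm through a 1:4 step-up turns the leadscrew
# at 1200 rpm, with a quarter of the torque at the screw
```

Since the machine currently has torque to spare, trading it for speed is the right direction.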

## Friday, May 4, 2012

### Stochastic Tomography paper accepted to SIGGRAPH!

My most recent project has just been accepted to SIGGRAPH 2012!  The paper is called Stochastic Tomography and its Applications in 3D Imaging of Mixing Fluids, more information (including the paper, submission video and datasets) can be found on the project site: http://www.cs.ubc.ca/labs/imager/tr/2012/StochasticTomography/

The paper presents a new method for tomographic reconstruction based on a point-sampling process similar to the Metropolis-Hastings algorithm.  In the paper this is applied to building reconstructions of a fluorescent dye mixing with alcohol from multi-view video.  The algorithm is well suited to this task since it allows arbitrary projections and is naturally adaptive.  It is also very straightforward to incorporate effectively any regularizer into the algorithm, allowing application specific prior-knowledge to be exploited during the reconstruction process.
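For readers unfamiliar with Metropolis-Hastings, the basic accept/reject loop that the paper's point-sampling process resembles looks like this (a generic textbook 1D sampler, not the paper's reconstruction algorithm):

```python
import math
import random

def metropolis_hastings(log_density, x0, n_samples, proposal_std=1.0, rng=None):
    """Generic 1D Metropolis-Hastings sampler with a Gaussian random-walk
    proposal. log_density need only be known up to an additive constant,
    which is what makes the method so flexible."""
    rng = rng or random.Random(0)
    x, logp = x0, log_density(x0)
    samples = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, proposal_std)      # propose a move
        logpp = log_density(xp)
        if math.log(rng.random()) < logpp - logp:  # accept with prob min(1, p'/p)
            x, logp = xp, logpp
        samples.append(x)
    return samples

# draw from an unnormalized standard normal density
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 5000)
```

In the tomography setting the samples are points in the reconstruction volume and the density being sampled is driven by the measured projections, but the accept/reject structure is the same.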

I hope to post a bit more about the hardware setup that is used for this project in the next week or so.