syntax-highlighter

Wednesday, November 30, 2011

Matching calibrated cameras with OpenGL

When working with calibrated cameras it is often useful to be able to display things on screen for debugging purposes.  However the camera model used by OpenGL is quite different from the calibration parameters from, for example, OpenCV.  The linear parameters that OpenCV provides form the intrinsic matrix:

$$K = \begin{bmatrix} \alpha & \gamma & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

where (from http://en.wikipedia.org/wiki/Camera_resectioning) $\gamma$ is the skew between the x and y axes, $(u_0, v_0)$ is the image principal point, and $\alpha = f m_x$ and $\beta = f m_y$, with f being the focal length and $m_x$, $m_y$ being scale factors relating pixels to distance.  Multiplying a point by this matrix and dividing by the resulting z-coordinate then gives the point projected into the image.

The OpenGL parameters are quite different.  Generally the projection is set using the glFrustum command, which takes the left, right, top, bottom, near and far clip plane locations as parameters and maps these into "normalized device coordinates" which range over [-1, 1].  The normalized device coordinates are then transformed by the current viewport, which maps them onto the final image plane.  Because of these differences, obtaining an OpenGL projection matrix which matches a given set of intrinsic parameters is somewhat complicated.

Roughly following this post (update: a much-improved version from Kyle, the post's author, is available here), the following code will produce an OpenGL projection matrix and viewport.  I have tested this code against the OS-X OpenGL implementation (using gluProject) to verify that for randomly generated intrinsic parameters, the corresponding OpenGL frustum and viewport reproduce the x and y coordinates of the projected point.  The code works by multiplying a perspective projection matrix by an orthographic projection to map into normalized device coordinates, and setting the appropriate box for the glViewport command.

/**
 @brief basic function to produce an OpenGL projection matrix and associated viewport parameters
 which match a given set of camera intrinsics. This is currently written for the Eigen linear
 algebra library, however it should be straightforward to port to any 4x4 matrix class.
 @param[out] frustum Eigen::Matrix4d projection matrix.  Eigen stores these matrices in column-major (i.e. OpenGL) order.
 @param[out] viewport 4-component OpenGL viewport values, as might be retrieved by glGetIntegerv( GL_VIEWPORT, &viewport[0] )
 @param[in]  alpha x-axis focal length, from camera intrinsic matrix
 @param[in]  beta  y-axis focal length, from camera intrinsic matrix
 @param[in]  skew  x and y axis skew, from camera intrinsic matrix
 @param[in]  u0 image origin x-coordinate, from camera intrinsic matrix
 @param[in]  v0 image origin y-coordinate, from camera intrinsic matrix
 @param[in]  img_width image width, in pixels
 @param[in]  img_height image height, in pixels
 @param[in]  near_clip near clipping plane z-location, can be set arbitrarily > 0, controls the mapping of z-coordinates for OpenGL
 @param[in]  far_clip  far clipping plane z-location, can be set arbitrarily > near_clip, controls the mapping of z-coordinate for OpenGL
*/
void build_opengl_projection_for_intrinsics( Eigen::Matrix4d &frustum, int *viewport, double alpha, double beta, double skew, double u0, double v0, int img_width, int img_height, double near_clip, double far_clip ){
    
    // These parameters define the final viewport that is rendered into by
    // the camera.
    double L = 0;
    double R = img_width;
    double B = 0;
    double T = img_height;
    
    // near and far clipping planes, these only matter for the mapping from
    // world-space z-coordinate into the depth coordinate for OpenGL
    double N = near_clip;
    double F = far_clip;
    
    // set the viewport parameters
    viewport[0] = L;
    viewport[1] = B;
    viewport[2] = R-L;
    viewport[3] = T-B;
    
    // construct an orthographic matrix which maps from projected
    // coordinates to normalized device coordinates in the range
    // [-1, 1].  OpenGL then maps coordinates in NDC to the current
    // viewport
    Eigen::Matrix4d ortho = Eigen::Matrix4d::Zero();
    ortho(0,0) =  2.0/(R-L); ortho(0,3) = -(R+L)/(R-L);
    ortho(1,1) =  2.0/(T-B); ortho(1,3) = -(T+B)/(T-B);
    ortho(2,2) = -2.0/(F-N); ortho(2,3) = -(F+N)/(F-N);
    ortho(3,3) =  1.0;
    
    // construct a projection matrix, this is identical to the 
    // projection matrix computed for the intrinsics, except an
    // additional row is inserted to map the z-coordinate to
    // OpenGL. 
    Eigen::Matrix4d tproj = Eigen::Matrix4d::Zero();
    tproj(0,0) = alpha; tproj(0,1) = skew; tproj(0,2) = u0;
                        tproj(1,1) = beta; tproj(1,2) = v0;
                                           tproj(2,2) = -(N+F); tproj(2,3) = -N*F;
                                           tproj(3,2) = 1.0;
    
    // resulting OpenGL frustum is the product of the orthographic
    // mapping to normalized device coordinates and the augmented
    // camera intrinsic matrix
    frustum = ortho*tproj;
}

The code uses the Eigen linear algebra library, which conveniently stores matrices in column-major order, so applying the resulting frustum matrix is as simple as:
glMatrixMode(GL_PROJECTION);
glLoadMatrixd( &frustum(0,0) );
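
The viewport half of the result is applied with the standard glViewport call; a minimal usage sketch, assuming the viewport array filled in by the function above:

glViewport( viewport[0], viewport[1], viewport[2], viewport[3] );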


Sunday, November 27, 2011

Got my face did!

I recently got to visit a local studio and while I was there they asked if I wanted my face 3D scanned.  Of course I did. Who wouldn't? Well at least one person.  But I did.  And here's the result, after running it through the quadric-collapse decimator of Meshlab.



It's pretty damn good quality-wise, even on challenging areas like my beard.  Now I just have to merge the scans and create a watertight mesh, ready for 3D printing!

Saturday, November 26, 2011

Getting things done with qmake

I frequently deal with multiple operating systems on various machines and so make efforts to write portable code and use portable third-party libraries where possible.  On some machines I have administrator access while on others I am a lowly user.  How to consistently configure and build projects across all systems became an issue.

Without much forethought, I began to use qmake from Nokia's Qt toolkit.  I did this originally because I was using Qt to build cross-platform GUIs for code, but continue to do so because it works well and is dead easy. There are other cross-platform build systems out there such as CMake, SCons, and so on, but I've yet to find one that gives me a compelling reason to switch from qmake.

That said, the process that I went through to arrive at this setup was not straightforward.  This post is intended to roughly document how I set up my machines to work cleanly and easily with qmake. Ultimately this boils down to:
  • Use ONLY portable third-party libraries
  • Create consistently-named qmake include files for each library on each system
  • Keep ALL those include files in a single directory, ideally with the source-tree of the library
  • Define a consistently-named environment variable on all machines that points to the directory
As an example, on OS-X I have a directory Code/ThirdParty, pointed to by the environment variable $ThirdPartyDir.  The contents of this directory are:
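
For illustration, the layout looks something like the following sketch (the exact library names and versions will vary between machines):

Code/ThirdParty/
    cimg.pri
    CImg/
    eigen.pri
    eigen/
    vtk.pri
    vtk-5.8/
    cgal.pri
    CGAL/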


The *.pri files are the qmake include files, while the directories hold the library source trees. Some libraries, like CGAL, automatically install themselves to a different location on the machine, such as /usr/local.  That doesn't matter; the project include file still goes in the directory pointed to by $ThirdPartyDir.  On Windows, I have another directory ThirdPartyDir, located at a completely different path, pointed to by the environment variable %ThirdPartyDir%, with the same sub-directories and versions of the *.pri files modified to work for Windows.

The *.pri files themselves handle adding the linker flags, header search paths and any preprocessor definitions the library needs to the project.  They look like incomplete qmake project files, as seen in this excerpt from my vtk.pri file:

INCLUDEPATH += "/usr/local/include/vtk-5.8"
LIBS += -L/usr/local/lib/vtk-5.8
LIBS += -lvtkalglib \
        -lvtkCharts \
        -lvtkCommon \
        -lvtkGraphics \
        -lvtkDICOMParser \
        -lvtkexoIIc \
        -lvtkexpat
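
For a header-only library the include file can be even simpler; a hypothetical eigen.pri might contain nothing but a header search path:

# eigen.pri: Eigen is header-only, so no linker flags are needed
INCLUDEPATH += $$(ThirdPartyDir)/eigen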
Now any project that needs vtk can simply include the qmake include file into its project file.  Provided the include file exists and is correct, all the include paths, linker flags and preprocessor definitions will be set up correctly, as in the following project file:

TEMPLATE = app
QT       += opengl
CONFIG   += console debug_and_release
TARGET   = camera_model

!include( $$(ThirdPartyDir)/eigen.pri )
!include( $$(ThirdPartyDir)/cimg.pri )
!include( $$(ThirdPartyDir)/vtk.pri )

mac {
    CONFIG -= app_bundle
    MOC_DIR = build
}

INCLUDEPATH += include
HEADERS += include/camera.h
SOURCES += src/main.cpp \
           src/camera.cpp 

This project file is now completely portable, able to be built on Windows, OS-X and Linux without change.  The include files themselves (eigen.pri, cimg.pri and vtk.pri) may differ between systems as necessitated by versions, access permissions and install locations, but the project itself is consistent.

This process is ultimately similar to how CMake is intended to function; however, I have wasted hours trying to get CMake find scripts to work, often without success.  By just creating these include files whenever I use third-party code, I've found that much of the frustration of cross-platform development just goes away.





Saturday, November 19, 2011

4x4 transformation matching OpenGL

An updated version of this code is available on Github, removing the need for an external linear algebra library to compute matrix inverses, although the remainder of this post is still valid. Please see: http://jamesgregson.blogspot.ca/2013/09/updated-4x4-transformation-matching.html to get the updated code.

Several times now I've wanted to develop/debug graphics applications which involve 3D transformations and visualize the results with OpenGL. I usually end up just using the OpenGL matrix functions like glRotatef(...), glFrustum(...) for everything in order to have consistent transformations between the debugging view and the underlying app.

This works OK, however it requires that a valid OpenGL context be available, since many of these functions don't work properly without one. To avoid this, I have duplicated most of the OpenGL matrix functions in a custom 4x4 transformation class, which produces nearly identical results (usually to within 10^-6 to 10^-4) but which does not require a context. It also exposes some extra operations, such as transposition and inversion, along with the GLU functions gluProject, gluUnProject and gluPerspective. These make it quite easy to duplicate the OpenGL vertex transformation pipeline in custom, non-OpenGL code.

For example:

// example code to pan a view

// declare some variables, grab an OpenGL matrix
double mat[16];
double dx = 1.0, dy = 0.5; // pan offsets, values chosen for illustration
transformation T;
glGetDoublev( GL_MODELVIEW_MATRIX, mat );

// load mat into T
T.from_opengl( mat );

// pan the view 
T = transformation::glTranslate( dx, dy, 0.0 ) * T;

// convert back to OpenGL format and update 
T.to_opengl( mat );
glLoadMatrixd( mat );


Currently the library includes the following OpenGL/GLU functions:

transformation::glIdentity(...)
transformation::glScale(...)
transformation::glTranslate(...)
transformation::glRotate(...)
transformation::glOrtho(...)
transformation::glFrustum(...)
transformation::gluPerspective(...)
transformation::gluProject(...)
transformation::gluUnProject(...)

All of these have been tested against the OpenGL/GLU equivalents, and have matching interfaces except they return a transformation rather than operate on the active matrix stack. There are also a number of additional functions to transpose/invert transformations, access individual elements and get the right/up/forward/position vectors from a transformation.

The code is available for download on Github:

https://github.com/jamesgregson/transformation

I hope people find it useful. I will be updating the code with any bug-fixes or improvements over time, please send them my way if you have a suggestion or a problem.

Friday, July 8, 2011

Installing mlabwrap on OS-X and Linux

Mlabwrap is an excellent Python wrapper for Matlab, allowing you to use Matlab functionality from within Python scripts. I recently tried to install it on OS-X (10.6) and Linux (11.4) and ran into minor issues with both OSes.

The first is that the setup.py that ships with version 1.1 of mlabwrap has a minor compile error related to the definition of a datatype. In my case this was fixed by changing:

# setup.py line 13
MATLAB_VERSION = None

to:
MATLAB_VERSION = 7.3

which tells mlabwrap to look for Matlab versions 7.3 and on. I made this change on both OS-X and Linux.

The second issue was with OS-X only, and relates to how mlabwrap references the Matlab shared libraries. Although mlabwrap builds successfully, import errors related to "@loader_path/libeng.dylib" and "@loader_path/libmx.dylib" occur.

I fixed this using the "install_name_tool" command to modify the mlabrawmodule.so library from mlabwrap so that it uses the absolute paths to these shared libraries:

The commands are:
sudo install_name_tool -change "@loader_path/libmx.dylib" "/Applications/Matlab_R2011a_Student/bin/maci64/libmx.dylib" mlabrawmodule.so


sudo install_name_tool -change "@loader_path/libeng.dylib" "/Applications/Matlab_R2011a_Student/bin/maci64/libeng.dylib" mlabrawmodule.so

Note that these commands assume you are in the site-packages directory of your Python installation (or wherever you have installed mlabwrap). You will also need to change the "/Matlab_R2011a_Student" portion of the paths to suit your Matlab installation.
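
To confirm the change took effect, the library's references can be listed with otool -L mlabrawmodule.so; the two Matlab entries should then show the absolute paths rather than the @loader_path prefix.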

Sunday, April 3, 2011

Dual Contouring

Dual Contouring is a method for meshing implicit surfaces, or surfaces which are defined as the level-set of a scalar function. Conceptually it is similar to Marching Cubes, except that meshing is performed on a dual mesh, and it requires that the scalar function be able to provide gradients or surface normals in addition to the function value.

The primary advantage of dual contouring is that it is able to reproduce sharp features, which is something that most other implicit surface meshing algorithms are unable to do. I also consider Dual Contouring to be simpler to implement from scratch than Marching Cubes, since you don't need to form large tables of stencils.

The overall algorithm is very simple. First the region to be meshed is divided into convex non-overlapping cells (like uniform cubes or tetrahedra). The scalar function that is being meshed is then evaluated at the vertices of those cells, and each vertex is labeled as being either inside or outside of the surface to be meshed.

Those cells that have a mix of inside and outside vertices must then contain a portion of the surface. It is at this point that Dual Contouring begins to differ from Marching Cubes: instead of applying a stencil based on which vertices are inside or outside, Dual Contouring generates a single 'dual' vertex per cell straddling the surface. It then connects every dual vertex with its neighboring dual vertices to form the final mesh. The only difficulty arises in choosing the location of the dual vertices to best represent the underlying surface being meshed.






To choose where to place vertices, dual contouring uses not only the intersection of the surface with the edges of the mesh, but also the surface normals at those locations. It then solves for the dual vertex location that minimizes the error in satisfying the planes that pass through the edge intersections and share the normals at those intersections. This corresponds to minimizing the following error:

$$E(v) = \sum_{i=1}^{n} \left( n_i \cdot ( v - p_i ) \right)^2$$

where $v$ is the dual vertex position, $p_i$ is the location of the i'th (of n) edge intersection and $n_i$ is the corresponding normal for the i'th intersection. This is just a least-squares system

$$A v = b, \qquad A = \begin{bmatrix} n_1^T \\ \vdots \\ n_n^T \end{bmatrix}, \qquad b = \begin{bmatrix} n_1 \cdot p_1 \\ \vdots \\ n_n \cdot p_n \end{bmatrix}$$
which can be solved using a QR decomposition, or by forming the normal equations. One tricky aspect of this is that although there are always at least as many intersected edges as unknowns, it may not be the case that the row contributed by each edge is linearly independent from the other rows. In flat regions of the surface the normals will be nearly if not exactly the same, and the set of equations will be (nearly) singular.

One way to handle this is to add special-case code to check that the system is not singular, and if it is, solve a different system in 1D for the offset of the dual vertex in the normal direction that minimizes the fitting error. It is also possible to add a regularization term with low weight that prevents the system from being singular, but does not affect the results too badly in the event the system is well conditioned. I have done this by adding a (small) multiple of identity to the normal equations coefficient matrix, and the same small multiple of the original dual vertex position (i.e. the cell centroid) to the right hand side.
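
As a concrete illustration, here is a minimal sketch of the regularized solve using Eigen; the function name solve_qef, its inputs and the weight eps are my own choices rather than any particular implementation's, and it solves the regularized normal equations (A^T A + eps I) v = A^T b + eps c:

#include <Eigen/Dense>
#include <vector>

// solve for the dual vertex of one cell, given edge intersection points p[i]
// with unit normals n[i], the cell centroid c and a small regularization weight
Eigen::Vector3d solve_qef( const std::vector<Eigen::Vector3d> &p,
                           const std::vector<Eigen::Vector3d> &n,
                           const Eigen::Vector3d &c, double eps ){
    // accumulate the normal equations, regularized toward the centroid
    Eigen::Matrix3d AtA = eps*Eigen::Matrix3d::Identity();
    Eigen::Vector3d Atb = eps*c;
    for( size_t i=0; i<p.size(); ++i ){
        AtA += n[i]*n[i].transpose();   // A^T A accumulates n_i n_i^T
        Atb += n[i]*n[i].dot(p[i]);     // A^T b accumulates n_i (n_i . p_i)
    }
    return AtA.ldlt().solve( Atb );
}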



The image above shows an example of this in action. I generated an isosurface representation of the fandisk model, then ran it through dual contouring to produce the result you see above. Note that the method faithfully reproduces the sharp edges of this CAD model. I appear to have some minor bugs in the implementation, however, as there are some artifacts on concave sharp edges.
The regularization approach seems to work well and eliminates the need for special case code. It also provides a knob to trade off mesh quality for approximation error: if the regularization term is small, then the method will try to reproduce the underlying surface faithfully. On the other hand, if the regularization term is large, dual vertices will be placed close to the centroid of the cells that generate them, producing a high quality mesh, but with staircasing artifacts where approximation error has been introduced.

Thursday, March 31, 2011

Fitting Oriented Bounding Boxes

It can be useful to fit oriented bounding boxes (OBBs) to sets of geometric primitives for a number of applications. This post describes one way to do this, based on the PhD thesis of Stefan Gottschalk. I have put together a sample implementation that you can use and adapt as you wish.

The method of fitting is based around the covariance matrix of the items to be fit. The covariance matrix is a generalization of variance to higher dimensions. Its relevance to fitting an oriented bounding box is that, if the covariance matrix is constructed carefully, its eigenvectors determine the rotation required to obtain a tightly fitting box.

The overall process for fitting a bounding box will then be:
  • Compute the mean position and covariance matrix of the inputs
  • Extract the eigenvectors of the covariance matrix
  • Build a transformation matrix using the mean position and eigenvectors
  • Size the box to fit the input

Given a decent linear algebra library (such as gmm++), only the first task is particularly complicated, so I'll discuss it first.

In the case of fitting a bounding box to a collection of objects, there are three dimensions x, y and z, so the covariance matrix $C$ is 3x3. The covariance matrix for this case is shown below:

$$C = \begin{bmatrix} \mathrm{cov}(x,x) & \mathrm{cov}(x,y) & \mathrm{cov}(x,z) \\ \mathrm{cov}(y,x) & \mathrm{cov}(y,y) & \mathrm{cov}(y,z) \\ \mathrm{cov}(z,x) & \mathrm{cov}(z,y) & \mathrm{cov}(z,z) \end{bmatrix}, \qquad \mathrm{cov}(a,b) = E[ab] - E[a]E[b]$$

In the above equation x, y and z are the three global coordinates, and $E[\cdot]$ is the expected value of the argument, which in this case is the mean of the argument. If the mean values of x, y and z are denoted by $\bar{x}$, $\bar{y}$ and $\bar{z}$, $C$ can be written as

$$C = \begin{bmatrix} E[x^2]-\bar{x}^2 & E[xy]-\bar{x}\bar{y} & E[xz]-\bar{x}\bar{z} \\ E[xy]-\bar{x}\bar{y} & E[y^2]-\bar{y}^2 & E[yz]-\bar{y}\bar{z} \\ E[xz]-\bar{x}\bar{z} & E[yz]-\bar{y}\bar{z} & E[z^2]-\bar{z}^2 \end{bmatrix}$$

where the simplification is due to the properties of the expected value. Armed with this definition, we can begin computing covariance matrices for sets of points. This is the simplest case, and can produce decent bounding boxes if the points are well distributed.

We start by computing the mean positions:

$$\bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad \bar{y} = \frac{1}{N}\sum_{i=1}^{N} y_i, \qquad \bar{z} = \frac{1}{N}\sum_{i=1}^{N} z_i$$

where N is the number of points in the point set. This leaves us needing to compute the $E[x^2]$, $E[xy]$, ..., $E[z^2]$ terms. Since we are dealing with points, these turn out to be very simple:

$$E[xy] = \frac{1}{N}\sum_{i=1}^{N} x_i y_i$$

and the analogous expressions for the other terms. With these expressions, we can now build the covariance matrix and extract its eigenvectors $e_1$, $e_2$ and $e_3$ using our math library. Each of the three eigenvectors will be a three-component vector, and will be orthogonal to the other two eigenvectors since $C$ is symmetric.
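
These two steps are easy to express in code. The following is a minimal sketch using Eigen rather than the gmm++ library used in my sample implementation (the function and variable names are my own):

#include <Eigen/Dense>
#include <vector>

// compute the covariance matrix of a point set and extract its eigenvectors
void point_covariance_eigenvectors( const std::vector<Eigen::Vector3d> &pts,
                                    Eigen::Vector3d &mean, Eigen::Matrix3d &evecs ){
    const double N = double( pts.size() );
    mean = Eigen::Vector3d::Zero();
    Eigen::Matrix3d E2 = Eigen::Matrix3d::Zero();    // second moments E[x^2], E[xy], ...
    for( const Eigen::Vector3d &p : pts ){
        mean += p/N;
        E2   += p*p.transpose()/N;
    }
    Eigen::Matrix3d C = E2 - mean*mean.transpose();  // cov(a,b) = E[ab] - E[a]E[b]
    // C is symmetric, so the self-adjoint solver applies; the unit-length
    // eigenvectors are returned as the columns of evecs
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> solver( C );
    evecs = solver.eigenvectors();
}

Note that Eigen already returns unit-length eigenvectors, so the normalization step below comes for free here.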

To build the OBB transformation matrix we compute the normalized eigenvectors

$$\hat{e}_1 = \frac{e_1}{\|e_1\|}, \qquad \hat{e}_2 = \frac{e_2}{\|e_2\|}, \qquad \hat{e}_3 = \frac{e_3}{\|e_3\|}$$

and then use them as the columns of the rotation portion of the OBB transformation matrix $M$, i.e.:

$$M = \begin{bmatrix} \hat{e}_1 & \hat{e}_2 & \hat{e}_3 & \mathbf{0} \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

All that remains is to figure out how big to make the box and where to put it so that it contains all the points. To do this, we transform each point $p_i$ into the local frame of the box using $M$ and keep track of the minimum and maximum coordinates in each direction. Since the rotation part of the matrix is orthogonal and the translation part is zero, a shortcut for transforming the points without having to invert $M$ is as follows

$$x_i' = \hat{e}_1 \cdot p_i, \qquad y_i' = \hat{e}_2 \cdot p_i, \qquad z_i' = \hat{e}_3 \cdot p_i$$

where $\cdot$ is the vector dot-product and everything else is defined as before. We then find the minimum and maximum of each coordinate of the transformed points, storing them in the 3D points $p_{min}$ and $p_{max}$. The half-extents of the bounding box will then be $(p_{max}-p_{min})/2$ and the center position $c' = (p_{max}+p_{min})/2$, both in the local frame of the box. The corresponding world-space coordinate of the center $c$ can be found by multiplying $c'$ by $M$. Once $c$ has been computed, it can be added into the transformation matrix to get the final transformation matrix for the OBB:

$$M = \begin{bmatrix} \hat{e}_1 & \hat{e}_2 & \hat{e}_3 & c \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
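
This sizing step is also short in code; a minimal sketch continuing from the eigenvector computation above (evecs holds the unit eigenvectors as columns, pts is the input point set):

#include <limits>

// transform each point into the box frame and track min/max coordinates
Eigen::Vector3d pmin = Eigen::Vector3d::Constant( std::numeric_limits<double>::max() );
Eigen::Vector3d pmax = -pmin;
for( const Eigen::Vector3d &p : pts ){
    Eigen::Vector3d q = evecs.transpose()*p;  // dot product with each eigenvector
    pmin = pmin.cwiseMin( q );
    pmax = pmax.cwiseMax( q );
}
Eigen::Vector3d half_extents = 0.5*(pmax - pmin);  // box half-sizes
Eigen::Vector3d c_local      = 0.5*(pmax + pmin);  // box center, local frame
Eigen::Vector3d c            = evecs*c_local;      // box center, world frame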

The result is an OBB that contains the input. I have used the Stanford bunny below as a sample model:

Unfortunately, if the points are not well distributed the box may not be a tight fit. The reason for this is that variations in point sampling density can skew the eigenvectors of the covariance matrix and result in a poor orientation being found.

One way to make this process more robust so that tighter bounding boxes can be found is to build the covariance matrix using the triangles of the model rather than the vertices, and integrate over the surface area of the model rather than simply summing the contributions to $C$ from each vertex.

In this case, the formulas for the terms needed to build $C$ are slightly more complicated than for points:

$$\bar{x} = \frac{1}{A}\sum_{i=1}^{n} A_i \bar{x}_i, \qquad \bar{y} = \frac{1}{A}\sum_{i=1}^{n} A_i \bar{y}_i, \qquad \bar{z} = \frac{1}{A}\sum_{i=1}^{n} A_i \bar{z}_i$$

where $A = \sum_i A_i$ is the surface area of the entire model, $\bar{x}_i$, $\bar{y}_i$ and $\bar{z}_i$ are the coordinates of the centroid of triangle i and $A_i$ is the area of triangle i. The remaining terms $E[x^2]$, $E[xy]$ (and analogous) are also a bit more complicated:

$$E[x^2] = \frac{1}{A}\sum_{i=1}^{n} \frac{A_i}{12}\left( 9\,\bar{x}_i^2 + p_x^2 + q_x^2 + r_x^2 \right)$$

$$E[xy] = \frac{1}{A}\sum_{i=1}^{n} \frac{A_i}{12}\left( 9\,\bar{x}_i\bar{y}_i + p_x p_y + q_x q_y + r_x r_y \right)$$

where p, q and r are the vertices of triangle i. The remaining expressions for $E[y^2]$, $E[z^2]$, $E[xz]$ and $E[yz]$ are analogous, and can be found by replacing x and y with the appropriate variables in the above expressions. With these expressions for the covariance matrix of the model, we can then build an OBB in the same manner as before. In most cases, this change will be enough to obtain a tight box. However, if the model has large interior features, these can again mess up the eigenvectors of the covariance matrix and result in a box that doesn't fit the object tightly.
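
In code, only the accumulation changes while the rest of the pipeline is untouched. A minimal Eigen-based sketch, again with my own names (tris holds one (p, q, r) vertex triple per triangle):

#include <Eigen/Dense>
#include <array>
#include <vector>

// area-weighted covariance of a triangle mesh
void triangle_covariance( const std::vector< std::array<Eigen::Vector3d,3> > &tris,
                          Eigen::Vector3d &mean, Eigen::Matrix3d &C ){
    double A = 0.0;                                  // total surface area
    mean = Eigen::Vector3d::Zero();
    Eigen::Matrix3d E2 = Eigen::Matrix3d::Zero();    // area-weighted second moments
    for( const std::array<Eigen::Vector3d,3> &t : tris ){
        const Eigen::Vector3d &p = t[0], &q = t[1], &r = t[2];
        double Ai = 0.5*( (q-p).cross(r-p) ).norm(); // triangle area
        Eigen::Vector3d ci = (p + q + r)/3.0;        // triangle centroid
        A    += Ai;
        mean += Ai*ci;
        // integral of the outer product over the triangle:
        // (A_i/12) * ( 9 c_i c_i^T + p p^T + q q^T + r r^T )
        E2 += (Ai/12.0)*( 9.0*ci*ci.transpose() + p*p.transpose()
                          + q*q.transpose() + r*r.transpose() );
    }
    mean /= A;
    C = E2/A - mean*mean.transpose();                // subtract the mean term
}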

Using triangles instead of points gives a slightly tighter result for the Stanford bunny:



There is one final improvement that can be made which will result in a method that is robust for all inputs. Instead of building the covariance matrix of the input itself we instead build the covariance matrix of the convex hull of the input. The convex hull approximates the shape of the input, and in the process removes interior surfaces that cause errors in fitting tight boxes. Computing the convex hull of a 3D object is somewhat involved, however there are libraries such as CGAL that can be used. Note however that the convex hull may well have a very non-uniform point distribution, so the triangle-based covariance matrix should be used and not the point-based version.

The convex hull version gives an even tighter box than the point version, but is substantially slower in my implementation (and should be in general) due to the complexity of building the convex hull of the input. Even though I have isotropically remeshed the bunny, so that the point sampling and triangle sizes are relatively uniform, there are still significant differences in the OBB orientations for the three methods.



OBB fit using points


OBB fit using triangles


OBB fit using convex hull


I have put together a sample implementation of the three methods of fitting OBBs discussed in this posting. It depends on CGAL for computing convex hulls and gmm++ for linear algebra. Feel free to use it for whatever you'd like.