# James' Blog

Software development, graphics, scientific computing and building stuff.

## Sunday, December 8, 2013

### Loading VTK rectilinear grid files into Matlab

The title says it all: I've written some basic code for loading VTK rectilinear grid files into Matlab. The code supports uniformly spaced meshes in up to four dimensions, for both point and cell data.

You can download the code from my Github repository:

https://github.com/jamesgregson/matlab_vtr

## Tuesday, October 22, 2013

### Index transformation between bounding boxes and uniform grids

I end up rewriting this code all the time. It transforms points in world space to grid coordinates (world_to_grid()) and back (grid_to_world()), such that p == grid_to_world( aabb, dim, world_to_grid( aabb, dim, p ) ). It's simple, but bugs here can mess up lots of things in ways that are hard to detect. No range checking is performed, in order to make it easy to apply custom boundary conditions.

The code assumes that the dim variable holds the number of points in each direction (e.g. int dim[] = { nx, ny, nz };), that the first point is coincident with the minimum corner of the axis-aligned bounding box, and that the last point is coincident with the maximum corner. The aabb parameter is the axis-aligned bounding box defining the grid in world space, e.g. real aabb[] = { xmin, xmax, ymin, ymax, zmin, zmax };

```cpp
template< typename real, typename index, typename real3 >
inline real3 _world_to_grid( const real *aabb, const index *dim, const real3 &pos ){
    return real3( real(dim[0]-1)*(pos[0]-aabb[0])/(aabb[1]-aabb[0]),
                  real(dim[1]-1)*(pos[1]-aabb[2])/(aabb[3]-aabb[2]),
                  real(dim[2]-1)*(pos[2]-aabb[4])/(aabb[5]-aabb[4]) );
}

template< typename real, typename index, typename real3 >
inline real3 _grid_to_world( const real *aabb, const index *dim, const real3 &pos ){
    return real3( aabb[0]+real(pos[0])*(aabb[1]-aabb[0])/real(dim[0]-1),
                  aabb[2]+real(pos[1])*(aabb[3]-aabb[2])/real(dim[1]-1),
                  aabb[4]+real(pos[2])*(aabb[5]-aabb[4])/real(dim[2]-1) );
}
```
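A quick round-trip sanity check: the vec3 struct and the concrete grid below are my own illustration (the templates only require a three-argument constructor and operator[]), with the transform code repeated so the snippet is self-contained:

```cpp
#include <cmath>

// Minimal 3-component vector providing what the templates require:
// a three-argument constructor and operator[].
struct vec3 {
    double v[3];
    vec3( double x, double y, double z ){ v[0] = x; v[1] = y; v[2] = z; }
    double operator[]( int i ) const { return v[i]; }
};

// the _world_to_grid/_grid_to_world templates from the post, repeated
// here so the example compiles on its own
template< typename real, typename index, typename real3 >
inline real3 _world_to_grid( const real *aabb, const index *dim, const real3 &pos ){
    return real3( real(dim[0]-1)*(pos[0]-aabb[0])/(aabb[1]-aabb[0]),
                  real(dim[1]-1)*(pos[1]-aabb[2])/(aabb[3]-aabb[2]),
                  real(dim[2]-1)*(pos[2]-aabb[4])/(aabb[5]-aabb[4]) );
}

template< typename real, typename index, typename real3 >
inline real3 _grid_to_world( const real *aabb, const index *dim, const real3 &pos ){
    return real3( aabb[0]+real(pos[0])*(aabb[1]-aabb[0])/real(dim[0]-1),
                  aabb[2]+real(pos[1])*(aabb[3]-aabb[2])/real(dim[1]-1),
                  aabb[4]+real(pos[2])*(aabb[5]-aabb[4])/real(dim[2]-1) );
}

// example: an 11x21x5 point grid over [0,1]x[-1,1]x[2,4]
inline bool round_trip_ok(){
    double aabb[] = { 0.0, 1.0, -1.0, 1.0, 2.0, 4.0 };
    int    dim[]  = { 11, 21, 5 };
    vec3 p( 0.25, 0.5, 3.0 );
    vec3 g = _world_to_grid( aabb, dim, p );  // grid coordinates (2.5, 15, 2)
    vec3 q = _grid_to_world( aabb, dim, g );  // back to world space
    return std::fabs(q[0]-p[0]) + std::fabs(q[1]-p[1]) + std::fabs(q[2]-p[2]) < 1e-12;
}
```

With these conventions, integer grid coordinates land exactly on grid points, which makes interpolation stencils easy to derive.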

Labels: code, graphics, random, scientific computing, software

## Thursday, October 10, 2013

### A follow-up to fluid simulation on non-uniform grids

In my last post I discussed fluid simulation on non-uniform Cartesian grids and showed some preliminary results, but several bugs added distracting artifacts.

I have fixed a number of bugs and now have a solver based on BFECC advection with min-max limited cubic interpolation, run on non-uniform and often highly anisotropic meshes for the velocity and pressure solve, paired with a high-resolution uniform grid for the density field. The results are fairly impressive:
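For reference, BFECC (Back and Forth Error Compensation and Correction) wraps any first-order advection scheme: advect forward, advect the result backward, estimate the error from the mismatch, then advect the corrected field. Below is a minimal 1D periodic sketch of my own, using plain semi-Lagrangian linear interpolation rather than the min-max limited cubic interpolation the actual solver uses:

```cpp
#include <vector>
#include <cmath>

// Linear interpolation of a periodic 1D field at fractional grid coordinate x.
double sample( const std::vector<double> &f, double x ){
    int n = (int)f.size();
    double xf = x - std::floor( x / n ) * n;   // wrap into [0,n)
    if( xf >= n ) xf -= n;                     // guard against round-off
    int i0 = (int)xf, i1 = (i0 + 1) % n;
    double t = xf - i0;
    return (1.0 - t)*f[i0] + t*f[i1];
}

// First-order semi-Lagrangian advection by constant velocity u (cells/step).
std::vector<double> advect( const std::vector<double> &f, double u ){
    std::vector<double> out( f.size() );
    for( int i = 0; i < (int)f.size(); ++i )
        out[i] = sample( f, double(i) - u );   // trace backward along u
    return out;
}

// BFECC: advect forward, advect back, estimate the error, then advect
// the error-compensated field.
std::vector<double> bfecc( const std::vector<double> &f, double u ){
    std::vector<double> fwd  = advect( f,   u );  // A(f)
    std::vector<double> back = advect( fwd, -u ); // A^-1(A(f)), ideally == f
    std::vector<double> corr( f.size() );
    for( int i = 0; i < (int)f.size(); ++i )
        corr[i] = f[i] + 0.5*( f[i] - back[i] );  // compensate the error
    return advect( corr, u );
}
```

The min-max limiter in the real solver clamps interpolated values to the range of the surrounding samples, which suppresses the overshoots that the error-compensation step can otherwise introduce.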

This image shows a volume rendering (in ParaView) of a simulation computed using the grid in the background. Near the area of interest the grid is uniform, but it grows very quickly (geometrically, with growth rate ~1.5) outside this region. The velocity/pressure grid is 133x127x134 cells, yet covers nearly a 10x6x10 m volume, with 2cm cells in the fine region. The density field covers 1x6x1 m at 1cm uniform resolution.
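To reproduce this kind of grading, cell sizes can be generated by growing geometrically away from the uniform region. The sketch below is my own illustration using the 2cm fine size and ~1.5 growth rate mentioned above; how the real mesher handles the boundary overshoot is an assumption:

```cpp
#include <vector>

// Build 1D cell sizes covering [x0,x1]: uniform cells of size h inside the
// region of interest [a,b], growing geometrically by rate r on either side.
// Note the outermost cell may overshoot x0/x1 slightly; a real mesher would
// rescale or clip it.
std::vector<double> graded_spacing( double x0, double a, double b, double x1,
                                    double h, double r ){
    std::vector<double> sizes, left;
    double x = a, s = h;
    while( x > x0 ){ s *= r; left.push_back( s ); x -= s; }   // grow toward x0
    sizes.assign( left.rbegin(), left.rend() );               // coarse -> fine
    for( x = a; x < b - 0.5*h; x += h ) sizes.push_back( h ); // uniform core
    x = b; s = h;
    while( x < x1 ){ s *= r; sizes.push_back( s ); x += s; }  // grow toward x1
    return sizes;
}
```

With h = 0.02 m and r = 1.5, a 10 m axis with a 2 m region of interest needs only on the order of 120 cells, versus 500 for a uniform 2cm grid, which is where the memory and runtime savings come from.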

Being able to run different resolutions and gradings for the velocity and density fields is extremely helpful: fine density details distract the eye from a poor velocity solution, and the high-resolution density field avoids excessive numerical diffusion. The image above shows the density as resolved by the fluid grid. It is terrible. However, the density as resolved by the dedicated density grid is -way- better:

It's still not perfect, but given the cell size and anisotropy, I think it does extremely well. Although there are definite artifacts, the payoff is in the memory usage and runtime. The simulation covers 5 seconds of real time at approximately 22 seconds of computation per output frame, so the whole thing completes in under an hour. These simulations used to take on the order of 4-5 hours.

The results look pretty good. I think the grading is too steep to get really nice results, but it's an excellent proof-of-concept.

Labels: fluids, graphics, scientific computing

## Tuesday, October 8, 2013

### Fluid Simulation for Graphics on Non-Uniform Structured Grids

I've been playing around with fluid simulation on non-uniform structured grids. Their charm is that it is easy to have very large domains with isolated regions of interest, e.g. near the camera or around salient fluid features. One of the big advantages is that far-field boundaries become easy: you simply extend the mesh far away from the domain.

My solver is pretty run-of-the-mill; only the pressure solver needed updating to handle anisotropic cells. I used a finite-volume formulation to derive the pressure-correction equation for this case, but it suffices to say that only the weights used in forming the Poisson equation change.
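To give a flavour of how those weights change, here is a 1D sketch of the finite-volume Laplacian on a non-uniform grid (my own illustration of the standard construction, not the solver's code): each face weight is the face area divided by the distance between neighbouring cell centers, and each row is scaled by the cell volume. On a uniform grid this reduces to the familiar (p[i-1] - 2*p[i] + p[i+1]) / h^2 stencil.

```cpp
#include <vector>
#include <cmath>

// Finite-volume Laplacian (div grad p) on a non-uniform 1D grid with
// homogeneous Neumann boundaries. h[i] is the size of cell i; the face
// between cells i and i+1 has weight 1 / (0.5*(h[i]+h[i+1])), i.e. unit
// face area over the distance between cell centers.
std::vector<double> apply_laplacian( const std::vector<double> &h,
                                     const std::vector<double> &p ){
    int n = (int)p.size();
    std::vector<double> out( n, 0.0 );
    for( int i = 0; i+1 < n; ++i ){
        double w    = 1.0 / ( 0.5*( h[i] + h[i+1] ) ); // face weight
        double flux = w * ( p[i+1] - p[i] );           // flux through the face
        out[i]   += flux / h[i];                       // scale by cell volume
        out[i+1] -= flux / h[i+1];
    }
    return out;
}
```

In 3D the face weight becomes face_area / center_distance, which is exactly where the anisotropy of the cells enters the Poisson equation.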

Here is an example: a grid that is roughly 10x10x5 meters, with 2cm fine cells, for roughly 100x100x100 cells in total:

The aspect ratios get quite high; 100:1 is not uncommon. The contours show a buoyant flow simulation that I'm running. The simulation takes about 20 seconds per frame, of which 7 seconds is spent writing the VTK output files that I use for post-processing. In spite of that, the visual detail in the region of interest is excellent (given the resolution):

I find that it helps to advect a uniform-resolution density field at roughly 2X the fine-cell resolution. Writing the code with general-purpose fields that can be point-sampled arbitrarily makes this a trivial feature to implement: simply allocate different grids for density and velocity.
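The "general-purpose field" idea can be sketched as a grid that stores its own bounding box and can be sampled at any world-space point. The 1D Field below (linear interpolation, clamped at the boundary) is my own minimal illustration, not the solver's actual field type:

```cpp
#include <vector>
#include <algorithm>
#include <cmath>

// A 1D scalar field on a uniform grid over [x0,x1] that can be point-sampled
// anywhere in world space. Density and velocity can each allocate their own
// Field with independent bounds and resolution.
struct Field {
    double x0, x1;
    std::vector<double> f;  // point samples; f.front() at x0, f.back() at x1
    double sample( double x ) const {
        // world -> grid coordinate, clamped to the valid range
        double g = ( x - x0 ) / ( x1 - x0 ) * double( f.size() - 1 );
        g = std::max( 0.0, std::min( g, double( f.size() - 1 ) ) );
        int i = std::min( (int)g, (int)f.size() - 2 );
        double t = g - i;
        return (1.0 - t)*f[i] + t*f[i+1];
    }
};
```

Because each field carries its own bounds and resolution, sampling density and velocity at the same world-space point just works, which is what makes mixing a coarse graded velocity grid with a fine uniform density grid trivial.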

Finally, here is a rendering of the final simulation in ParaView; total simulation time was ~35 minutes. There are still some bugs to track down, but the results are pretty promising:

Labels: fluids, graphics, scientific computing

## Sunday, September 8, 2013

### Updated C Mathematical Expression Parser

I've updated and posted my recursive descent mathematical expression parsing code. The new code is available from my Github repository:

https://github.com/jamesgregson/expression_parser

The original post describing the library is at: http://jamesgregson.blogspot.ca/2012/06/mathematical-expression-parser-in-c.html

New features include Boolean operations and a callback interface for custom named variables and functions. You can optionally disable the Boolean operations (and the 10 levels of parsing they trigger) if not needed.
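To illustrate what a "level of parsing" means in a recursive descent parser, here is a generic toy (not the library's actual code): each precedence level gets its own function that parses the next-tighter level and then folds in its own operators, so supporting the Boolean operators means adding one such function per precedence level.

```cpp
// Toy recursive descent parser: one function per precedence level.
// grammar:  expr   := term { '+' term }
//           term   := factor { '*' factor }
//           factor := digit | '(' expr ')'
struct Parser {
    const char *s;          // current position in the input
    double expr(){          // loosest-binding level
        double v = term();
        while( *s == '+' ){ ++s; v += term(); }
        return v;
    }
    double term(){          // binds tighter than '+'
        double v = factor();
        while( *s == '*' ){ ++s; v *= factor(); }
        return v;
    }
    double factor(){        // tightest: literals and parentheses
        if( *s == '(' ){ ++s; double v = expr(); ++s; /* skip ')' */ return v; }
        return double( *s++ - '0' );
    }
};
```

For example, "2+3*4" evaluates to 14 because term() binds multiplication tighter than addition, while "(2+3)*4" evaluates to 20.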

### Updated 4x4 transformation matching OpenGL

I've updated the code from my previous post: 4x4 transformation matching OpenGL to remove the requirement for a separate linear algebra library by introducing some 4x4 inverse code from Rodolphe Vaillant (originally at http://www.irit.fr/~Rodolphe.Vaillant/?e=7) along with some other minor improvements. You can still use Eigen or GMM to perform matrix inverses, but it is no longer a requirement.

As before, the code re-implements the main OpenGL matrix functions and includes code for testing the output against the system OpenGL. I use the code for a number of my imaging projects where it is desirable (but optional) to have an OpenGL preview.
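For context on the convention being matched: OpenGL stores 4x4 matrices in column-major order and post-multiplies new transforms, so element (row r, column c) lives at index c*4 + r. A sketch of a glTranslatef-equivalent under that convention (my own illustration, not the library's code):

```cpp
#include <cstring>

// Set m to the 4x4 identity (column-major storage).
void identity( float m[16] ){
    std::memset( m, 0, 16*sizeof(float) );
    m[0] = m[5] = m[10] = m[15] = 1.0f;
}

// Post-multiply the column-major matrix m in place by a translation,
// matching glTranslatef semantics: m = m * T(tx,ty,tz). Only the fourth
// column (indices 12..15) changes.
void translate( float m[16], float tx, float ty, float tz ){
    for( int r = 0; r < 4; ++r )
        m[12+r] += m[0+r]*tx + m[4+r]*ty + m[8+r]*tz;
}
```

Post-multiplication means the translation is applied in the matrix's local frame, accumulating exactly as a glTranslatef call would.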

The code is now also hosted at Github, which should simplify future updates:

https://github.com/jamesgregson/transformation

Labels: code, graphics, OpenGL, scientific computing, software

### C++/Mex Image Deblurring using ADMM

I've posted some sample code on Github for performing image deblurring in Matlab using Mex. I wrote it as a way to play around with the ADMM algorithm for sparse signal reconstruction, as described in Stephen Boyd's ADMM paper, as well as to get some experience using C++ code from Matlab.

The code uses matrix-free linear operators to evaluate the forward and reverse measurement (blurring) process, solving the data-term subproblem with gradient descent. While you can do this much faster using FFTs for deconvolution, the code should generalize quite well to tomography with non-orthographic projections. I've tried to comment it well so that I can figure out what I did if I ever need to pick it up again.
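As a reminder of the structure the demo follows (from Boyd's paper): ADMM splits minimize 0.5*||Ax-b||^2 + lambda*||x||_1 into a data-term x-update, a soft-thresholding z-update, and a dual update. The scalar toy below is my own illustration with A = 1, so the x-update has a closed form; the actual demo solves its x-subproblem by gradient descent against matrix-free blur operators:

```cpp
#include <cmath>

// Soft-thresholding: the proximal operator of k*|x|.
double shrink( double v, double k ){
    return std::copysign( std::fmax( std::fabs(v) - k, 0.0 ), v );
}

// ADMM for the scalar problem: minimize 0.5*(x-b)^2 + lambda*|x|,
// using the splitting x = z, penalty rho and scaled dual variable u.
double admm_scalar( double b, double lambda, double rho, int iters ){
    double x = 0.0, z = 0.0, u = 0.0;
    for( int k = 0; k < iters; ++k ){
        x = ( b + rho*( z - u ) ) / ( 1.0 + rho ); // data-term subproblem
        z = shrink( x + u, lambda / rho );         // regularizer subproblem
        u += x - z;                                // dual update
    }
    return z;
}
```

For A = 1 the minimizer is just the soft-threshold of b, e.g. admm_scalar(3, 1, 1, 200) converges to 2, which makes a convenient correctness check before moving to the full matrix-free problem.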

In the image above, the left plot is the ground-truth image and the right is blurred by a point-spread function and corrupted by Gaussian noise. After running the demo, the following results are obtained:

The reconstructed image is shown on the right, while a slice through the center scanline shows the reconstruction (blue dots), ground-truth (solid black line) and input (red line). From the results it's clear that the method does well at preserving discontinuities even for fairly large blurs and noise levels.

The demo is pretty quick to run and is helpful for choosing parameters since you can see how the behavior of the method changes with the regularizer weight and the penalty term weight.

You can download the code from https://github.com/jamesgregson/MexADMMDeblurDemo

Labels: code, computer vision, graphics, matlab, scientific computing, software
