Archive for the ‘Processing’ Category

Colour mapping Kinect data in 3D

Thursday, April 26th, 2012

Quick update on my Processing and Kinect journey. This time it’s mapping the colour information from the on-board RGB camera to the infrared depth data.

The OpenKinect library comes with a few demo sketches that already have most of the nuts and bolts of it, more or less. Here’s how it works (there’s a rough sketch of the whole thing after the list):

  1. OpenKinect returns the depth data as a complete array.
  2. It also returns the video data, which is accessed as an image each frame.
  3. The depth data is then adjusted a little to map it to the screen and allow for perspective distortion.
  4. Each depth point is then mapped to a static X and Y point, which matches the same X and Y in the video feed (sort of; see below).
  5. It’s then simply a case of reading the colour data from that point in the video feed and colouring the corresponding depth point.
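
Putting those steps together, here’s a stripped-down sketch of the idea. It assumes Daniel Shiffman’s openkinect library for Processing and borrows the rawDepthToMeters()/depthToWorld() helpers from its PointCloud example, so treat it as a sketch of the approach rather than a drop-in program:

import org.openkinect.*;
import org.openkinect.processing.*;

Kinect kinect;

void setup() {
  size(800, 600, P3D);
  kinect = new Kinect(this);
  kinect.start();
  kinect.enableDepth(true);
  kinect.enableRGB(true);
  kinect.processDepthImage(false);  // we only want the raw depth array
}

void draw() {
  background(0);
  int[] depth = kinect.getRawDepth();     // step 1: depth as one flat array
  PImage video = kinect.getVideoImage();  // step 2: the RGB frame as an image

  translate(width / 2, height / 2, -50);
  int skip = 4;  // only draw every 4th point, to keep the frame rate up
  for (int x = 0; x < 640; x += skip) {
    for (int y = 0; y < 480; y += skip) {
      int rawDepth = depth[x + y * 640];
      PVector v = depthToWorld(x, y, rawDepth);  // step 3: perspective-corrected 3D point

      // steps 4 and 5: naive 1:1 colour lookup at the same X/Y
      // (see the scale/offset fudge further down)
      stroke(video.get(x, y));

      float factor = 400;
      pushMatrix();
      translate(v.x * factor, v.y * factor, factor - v.z * factor);
      point(0, 0);
      popMatrix();
    }
  }
}

// Helpers lifted from the PointCloud example (conversion maths from
// http://graphics.stanford.edu/~mdfisher/Kinect.html)
float rawDepthToMeters(int depthValue) {
  if (depthValue < 2047) {
    return (float)(1.0 / ((double)depthValue * -0.0030711016 + 3.3309495161));
  }
  return 0.0f;
}

PVector depthToWorld(int x, int y, int depthValue) {
  final double fx_d = 1.0 / 5.9421434211923247e+02;
  final double fy_d = 1.0 / 5.9104053696870778e+02;
  final double cx_d = 3.3930780975300314e+02;
  final double cy_d = 2.4273913761751615e+02;
  double depth = rawDepthToMeters(depthValue);
  PVector result = new PVector();
  result.x = (float)((x - cx_d) * depth * fx_d);
  result.y = (float)((y - cy_d) * depth * fy_d);
  result.z = (float)depth;
  return result;
}

void stop() {
  kinect.quit();
  super.stop();
}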

However, it did require a little fiddling with the video scale to get it to match the infrared data scaling. The raw video was slightly bigger and offset, so that your ‘texture’ didn’t sit correctly on the model.

Here’s the initial feed without correction. Note how the image feed is too small and to the bottom/left.

And below is the corrected image. It wasn’t particularly scientific, though: I just tried a few values until it looked about right.

Here’s the code I used:

// sample the RGB frame, scaled and offset so it lines up with the depth data
int pixel = kinect.getVideoImage().get(int(x * 0.91) + 9, int(y * 0.94) + 25);

I.e. the video is scaled to 91% and shifted +9 pixels on the x-axis, and scaled to 94% and shifted +25 pixels on the y-axis. I’m not entirely sure why this is needed; my best guess is that the RGB camera and the IR camera are separate sensors, physically offset and with slightly different fields of view, so the two images never quite line up without calibration. Either way, if you’re having the same issue, maybe it’ll help. Likewise, if I’m missing something, please let me know. As much as I like a good hack, I also like to know why it’s needed!

Incidentally, the video above was ‘recorded’ from Processing as individual bitmap frames. All the screen-capture software I have causes the Kinect video to either freeze or not work. Strange. Handily, though, Processing can save out a screenshot each frame, and I then used Time Lapse Assembler to stitch them back together.

To save the first 300 frames to the same folder as your code, it’s as simple as…

// frameCount is built into Processing and increments automatically each draw()
if (frameCount <= 300) {
  saveFrame("kinect-####.jpg");  // #### becomes the zero-padded frame number
}

So the next step is to record the incoming data and store it, so that I can run interesting effects over preset data sets.
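
I haven’t written that part yet, but a minimal way of dumping each raw depth frame to disk might look something like this (the filename scheme and two-byte packing are just my assumptions; raw Kinect depth values fit in 11 bits):

// Hypothetical recorder: call this from draw() with kinect.getRawDepth()
void recordDepthFrame(int[] depth) {
  byte[] out = new byte[depth.length * 2];  // two bytes per 11-bit sample
  for (int i = 0; i < depth.length; i++) {
    out[i * 2]     = (byte)(depth[i] & 0xFF);         // low byte
    out[i * 2 + 1] = (byte)((depth[i] >> 8) & 0xFF);  // high byte
  }
  saveBytes("depth-" + nf(frameCount, 4) + ".bin", out);
}

Playing it back would then just be loadBytes() and the reverse bit-shift.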

Houston, we have Kinect data…

Thursday, April 26th, 2012

Earlier today I saw “Unnamed Soundsculpture” by Daniel Franke, a really evocative and quite strange video that combined Kinect data, live dance and Processing.

So that got me thinking I should have a go, albeit on a much more basic level. It’s early doors, but if you want to get started with Kinect and Processing, there’s a great starter guide here.

So just to prove it really does work, here’s me made out of nothing but depth data.


And this is an example of what the IR camera sees.

The exciting thing is that the depth data is available as a single array of depth points in 3D space, so it’s ripe for fiddling with to create all sorts of effects. I’ll post a few creations when I’ve dug a little deeper.
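
To give a flavour of how it’s addressed (the sizes are the Kinect’s 640×480 depth resolution, and depthToWorld() is the PointCloud helper mentioned above):

int[] depth = kinect.getRawDepth();        // one flat array: 640 * 480 = 307,200 samples
int x = 320;
int y = 240;                               // middle of the frame
int rawDepth = depth[x + y * 640];         // raw 11-bit value, 0..2047
PVector p = depthToWorld(x, y, rawDepth);  // the same point in 3D space, in metres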

Experimenting with Processing

Wednesday, April 25th, 2012


So, what to do with a few hours spare? Learn Processing of course!

Processing is an open-source programming language that makes it relatively easy to make pretty things. The language itself is almost identical to something like JavaScript or Flash ActionScript, but the set of commands and functions is vastly reduced; they only really keep the ones directly connected with visual prototyping. Some syntax is a bit strange, like defining object arrays (see the snippet below), but the rest of it is pretty familiar.
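
For example, where JavaScript lets you push objects onto a bare array, Processing (being Java underneath) wants the type and size declared up front, with each slot filled individually. A toy example, with a made-up Particle class:

Particle[] particles = new Particle[100];  // type and size fixed up front

void setup() {
  for (int i = 0; i < particles.length; i++) {
    particles[i] = new Particle();  // every slot needs its own 'new'
  }
}

// placeholder class, just to make the example run
class Particle {
  float x = random(100);
  float y = random(100);
}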

One thing that surprised me was the default Processing editor. If you’ve played around with an Arduino board, you’ve used the editor already! Yep, you save ‘sketches’ and press the ‘play’ button to see your code instantly come to life.

There’s also a JavaScript-flavoured version, Processing.js. I haven’t looked into it much yet, but it’s obviously the smart move for web-based creative coding: you just download the Processing JavaScript library and add your experiment as an HTML5 canvas element. It ran relatively slowly compared to the Java applet, but was still pretty impressive.

So my first experiment is a little basic, but you get the idea. It generates up to 10,000 particles and just bounces them about. Took maybe two hours to get this far. Next stop is to pop downstairs, grab the Xbox Kinect and try to do something funky with the point data in 3D (Processing has a bunch of native 3D commands too).
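
As a rough illustration of what the demo does, a minimal version of the same bouncing-particles idea (a sketch, not my actual demo code) looks something like this:

int NUM = 10000;  // the demo caps out at 10,000 particles
Particle[] particles = new Particle[NUM];

void setup() {
  size(640, 480);
  for (int i = 0; i < NUM; i++) {
    particles[i] = new Particle();
  }
}

void draw() {
  background(0);
  stroke(255);
  for (int i = 0; i < NUM; i++) {
    particles[i].move();
    particles[i].display();
  }
}

class Particle {
  float x = random(width);
  float y = random(height);
  float vx = random(-2, 2);
  float vy = random(-2, 2);

  void move() {
    x += vx;
    y += vy;
    if (x < 0 || x > width) vx = -vx;   // bounce off the sides
    if (y < 0 || y > height) vy = -vy;  // bounce off top and bottom
  }

  void display() {
    point(x, y);
  }
}

The fixed-size array here is the ‘strange’ object-array syntax from earlier in action.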

So click the image above, or here, to launch my demo as a Java applet in a new window. (Don’t forget to allow Java in your browser if it asks you. You can also see the code in the popup if you’re interested.)

Then check out Processing yourself at Processing.org and download it for FREE here.

And pop over to OpenProcessing.org to see what everyone else is up to.