What with all the talk of service design, I’ve been ignoring my interactive roots, but for a research project about buildings as hybrid communication hubs (more on that another time) I’ve had a reason to take a look at Processing and a couple of interactive tools again. A few things have caught my eye recently.
The first is a piece currently on the Processing exhibition page. It’s a video work called Unnamed Soundsculpture by Daniel Franke and Cedric Kiefer. I’m not such a big fan of the music, but the way they created the piece is interesting:
[A dancer] was recorded by three depth cameras (Kinect), and the intersection of the images was later assembled into a three-dimensional volume (a 3D point cloud), so we were able to use the collected data throughout the further process. The three-dimensional image gave us completely free handling of the digital camera, without the limitations of a fixed perspective. The camera also reacts to the sound and supports the performer’s physical imitation of the musical piece.
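The artists don’t publish their pipeline, but the core step they describe – back-projecting each Kinect depth image into 3D and merging the views into one cloud – can be sketched roughly like this. The intrinsics (FX, FY, CX, CY), the extrinsic poses, and the helper names here are my own illustrative assumptions, not theirs:

```python
import numpy as np

# Hypothetical Kinect-era intrinsics (focal lengths in pixels, principal
# point) for a 640x480 depth image -- placeholders, not the artists' values.
FX, FY = 525.0, 525.0
CX, CY = 319.5, 239.5

def depth_to_point_cloud(depth_m):
    """Back-project an (H, W) depth image in metres to an (N, 3) point
    cloud via the pinhole model, dropping pixels with no reading (zeros)."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

def merge_clouds(clouds_with_poses):
    """Bring each camera's cloud into a shared world frame using its
    extrinsic pose (rotation R, translation t), then concatenate."""
    merged = [pts @ R.T + t for pts, (R, t) in clouds_with_poses]
    return np.concatenate(merged, axis=0)
```

With three calibrated cameras you would call `depth_to_point_cloud` on each frame and `merge_clouds` on the three results; the merged cloud is what lets a virtual camera fly anywhere around the dancer, since nothing is baked into a single viewpoint.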
I’m especially impressed by the use of the Kinect cameras – previously, that kind of capture was the domain of specialised motion-capture studios working on large-budget visual effects. These guys are just in their studio with Kinect cameras taped to some podiums.
I’ll blog about the others presently.