I’m proud to announce that my book, Making Things See: 3D Vision with Kinect, Processing, and Arduino, is now available from O’Reilly. You can buy the book through O’Reilly’s Early Release program here. The Early Release program lets us get the book out to you while O’Reilly is still editing and designing it and I’m still finishing the last chapters. If you buy it now, you’ll get the preface and the first two chapters immediately; as additional chapters are finished, you’ll be notified and can download them for free until you have the final book. This way you get immediate access to the book, and I get your early feedback to help me find mistakes and improve it before final publication.
So, what’s in these first two chapters? Chapter One provides an in-depth explanation of how the Kinect works and where it came from. It covers how the Kinect records the distance of the objects and people in front of it using an infrared projector and camera. It also explains the history of the open source efforts that made it possible to work with the Kinect in creative coding environments like Processing. After this technical introduction, the chapter includes interviews with artists and technologists who do inspiring work with the Kinect: Kyle McDonald, Robert Hodgin, Elliot Woods, blablablab, Nicolas Burrus, Oliver Kreylos, Alejandro Crawford, and Phil Torrone and Limor Fried of Adafruit. The idea for this section of the book was suggested to me by Zach Lieberman, and it’s ended up being one of my favorites. Each of the people I interviewed had a different set of interests and abilities that led them to the Kinect, and each has used it in a radically different way. From Adafruit’s work initiating the project to create open drivers, to Oliver Kreylos’s integration of the Kinect into his cutting-edge virtual reality research, to Alejandro Crawford’s use of the Kinect to create live visuals for the band MGMT, they each explore a different aspect of the creative possibilities unlocked by this new technology. Their diversity shows just how broad an impact affordable depth cameras could have going forward.
Chapter Two begins the real work of learning to make interactive programs with the Kinect. It walks you through installing the SimpleOpenNI library for Processing and then shows you how to use that library to access the depth image from the Kinect. We explore many aspects of the depth image and then use it to create a series of projects, ranging from a virtual tape measure to a Minority Report-style app that lets you move photos around by waving your hands. Since the book as a whole is designed to be accessible to beginner programmers (and to help them “level up” to more advanced graphical skills), the examples in this chapter are covered clearly and thoroughly to make sure you understand fundamentals like how to loop through the pixels in an image.
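To give you a taste of the kind of code the chapter builds up to, here’s a minimal sketch in that spirit: it displays the Kinect’s depth image and loops through every pixel of the depth map to track the closest point in front of the camera. This is only an illustrative sketch, not an excerpt from the book; it assumes you have a Kinect attached and the SimpleOpenNI library installed, and it uses the SimpleOpenNI calls I describe in the chapter (`enableDepth()`, `depthImage()`, `depthMap()`).

```
import SimpleOpenNI.*;

SimpleOpenNI kinect;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
}

void draw() {
  kinect.update();
  // Show the grayscale depth image: closer objects appear brighter.
  image(kinect.depthImage(), 0, 0);

  // depthMap() gives one distance value (in millimeters) per pixel.
  int[] depthValues = kinect.depthMap();
  int closestValue = 8000; // farther than anything the Kinect can see
  int closestX = 0;
  int closestY = 0;

  // The fundamental pattern from the chapter: loop through every
  // pixel, converting (x, y) coordinates into an array index.
  for (int y = 0; y < 480; y++) {
    for (int x = 0; x < 640; x++) {
      int i = x + y * 640;
      int currentDepth = depthValues[i];
      // A value of 0 means "no reading" at that pixel, so skip it.
      if (currentDepth > 0 && currentDepth < closestValue) {
        closestValue = currentDepth;
        closestX = x;
        closestY = y;
      }
    }
  }

  // Mark the closest point with a red dot.
  fill(255, 0, 0);
  noStroke();
  ellipse(closestX, closestY, 25, 25);
}
```

Once you can find the closest point like this, projects such as the virtual tape measure or hand-waving photo mover are mostly a matter of doing something more interesting with that point from frame to frame.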
More chapters are on the way in the coming weeks, including the next two on working with point clouds and using the skeleton data. I’m currently working closely with Brian Jepson, my editor at O’Reilly, as well as Dan Shiffman (an ITP professor and the author of the first Kinect library for Processing) and Max Rheiner (an artist and lecturer at Zurich University and the author of SimpleOpenNI) to prepare them for publication. I can’t thank Brian, Dan, and Max enough for their help on this project.
I’m also excited to see what O’Reilly’s design team comes up with for a cover. The one pictured above is temporary. As soon as these new chapters (or the new cover) are available, I’ll announce it here.
Enjoy the book! And please let me know your thoughts and comments so I can improve it during this Early Release period.