On Thingpunk

The symbols of the divine initially show up at the trash stratum.
— Philip K. Dick

Increasingly, it feels like we live in a kind of Colonial Williamsburg for the 20th century. With the introduction of networked digital devices we’ve gone through an epochal technological transformation, but it hasn’t much changed the design of our physical stuff. We’re like historical reenactors hiding our digital watches under bloused sleeves to keep from breaking period. We hide our personal satellite computers in our woodsman’s beards and flannels.

Is this a problem? Is the digital revolution incomplete until it visibly transforms our built environment? Is the form of our physical stuff a meaningful yardstick for progress?

British designer Russell Davies, of Newspaper Club and the Really Interesting Group, thinks so. Back in 2010, Davies wrote a lament for the lack of “futureness” in the physical stuff that populates our lives:

Every hep shop seems to be full of tweeds and leather and carefully authentic bits of restrained artisanal fashion. I think most of Shoreditch would be wandering around in a leather apron if it could. With pipe and beard and rickets. Every new coffee shop and organic foodery seems to be the same. Wood, brushed metal, bits of knackered toys on shelves. And blackboards. Everywhere there’s blackboards.

Davies has an expectation that the physical environment should be futuristic:

Cafes used to be models of the future. Shiny and modern and pushy. Fashion used to be the same – space age fabrics, bizarre concoctions. Trainers used to look like they’d been transported in from another dimension, now they look like they were found in an estate sale.

Davies worries that the Steampunk aesthetic of our physical things (all that brass, blackboard, and leather) is evidence of a fundamental conservatism in our design culture. But in focusing on physical stuff as the primary place to look for signs of the future, he ends up advocating a different, deeper kind of conservatism that I’ve taken to calling “Thingpunk”.

Thingpunk is a deep bias in design thinking that sees physical products and the built environment as the most important venues for design and innovation even as we enter a world that’s increasingly digital. It has roots in the history of design as a discipline over the last 100+ years, the relative stagnation of digital technology in the social media era, and “Tumblr Modernism”: a fetish for Modernist style as it appears in images, divorced from its built and political reality.

“Thingpunk”, as a term, came out of a conversation I had with Kellan Elliott-McCrea last year at Etsy. It is meant to be understood by analogy to Steampunk. Where Steampunk displaces 19th century styling onto 21st century products and spaces, Thingpunk attempts to continue the 20th century obsession with physical objects into a 21st century permeated by digital and network technologies. Thingpunk worries about the design of physical stuff above all else. Even when engaging with digital technologies, Thingpunk is primarily concerned not with their effect on our digital lives, but in how they will transform physical products and the built environment.

Less Than 100% Physical

The biggest technological change in our lifetimes is the rise of networked digital devices. Before the last 30 years or so, for most of us, no part of our daily experience took place through computers or networks. Now, at least some portion does, often quite a significant part. Since our days and our lives didn’t get any longer during that time, this new digital portion of our experience necessarily displaced physical (non-digital) experiences to at least some extent.

The core experience of what’s new about the digital is its non-physicality, the disembodied imaginary space it creates in our minds. This idea dates back to the origins of Gibsonian cyberspace:

Cyberspace is the “place” where a telephone conversation appears to occur. Not inside your actual phone, the plastic device on your desk. Not inside the other person’s phone, in some other city. The place between the phones.

Invoking “cyberspace” may sound hopelessly old-fashioned. But regardless of that term being rendered retro by 90s overuse, the problem it expresses is still a pressing concern. Just this week Quinn Norton, noted chronicler of “decentralized networked organisms” such as Occupy and Anonymous, vividly described the challenge of writing compellingly of contemporary life:

There is an aesthetic crisis in writing, which is this: how do we write emotionally of scenes involving computers? How do we make concrete, or at least reconstructable in the minds of our readers, the terrible, true passions that cross telephony lines? Right now my field must tackle describing a world where falling in love, going to war and filling out tax forms looks the same; it looks like typing.

The digital non-space of the net didn’t turn out to be a visualization. No cubes of glowing information or 3D avatars. That itself was a fantasy of the continued primacy of the physical. Instead, our lives are shaped by the new aesthetic and personal experiences that actually happen in this digital non-space built of typing: the scrolling micro-updates through which we do both our social grooming and our collective experience of profound events, the emails and Facebook messages through which we conduct our courtships, affairs, and feuds, the alternately personal and random images from around the world that stream through our pocket satellite-connected supercomputers.

A common Thingpunk response to articulating this non-physical quality of digital experience is to refocus on the objects and buildings that make up the physical infrastructure of the net. From Andrew Blum’s Tubes to James Bridle on the architecture of data centers, these accounts tend to have a slightly conspiratorial tone, as if they were revealing a secret truth hidden from us by the deception of our digital experiences. But while we should certainly pay attention to the massive construction projects being driven by the importance of networks, like the $1.5 billion fiber cable through the Arctic, these physical portions of the network are not more real than the mental and interpersonal experiences that happen through them.

And it is exactly those latter experiences that most of today’s designers actually work on, with, and through rather than these physical mega-infrastructures.

Design Turns Like a Steamship

Interactive digital design has only been around for about 30 years and for half that time it was practiced solely by a tiny handful of designers at the few companies with the resources to ship operating systems or boxed software. The real explosion of GUI design as a major fraction of design as a discipline began in the late 90s with the rise of the web and (later) mobile applications.

Fifteen years of thinking about websites can’t overcome the past 100+ years of design as a tradition of thinking about and through physical things, especially when so much of design on the web is what design professionals would condescendingly call “vernacular”, i.e. made by amateurs. The towering figures of pre-digital design, from the Arts and Crafts movement through the Bauhaus to the work of Charles and Ray Eames, still shape design’s critical vocabulary, educational objectives, and work methods.

Where the tools for making websites and mobile apps differ from those for making furniture and appliances, the ideas transmitted by this tradition become an increasingly bad match for today’s design work. The malleability of code, the distributed nature of collaboration, and the importance of math are just the first three of the many profound sources of this mismatch. Each of them is key to the craft of digital design and at best completely outside the scope of the pre-digital tradition.

Further, this design tradition preaches a set of values that’s powerfully at odds with lived digital reality.

Despite their differences, pre-digital design movements are united in the qualities of experience they promise: authenticity, presence, realness, permanence, beauty, depth. These are essentially spiritual virtues that people have hungered after in different forms throughout modern history.

Digital technology is endlessly criticized for failing to provide these virtues, for being artificial, false, disposable, ugly, superficial, and shallow. Ironically, nearly identical arguments were made at the start of the Industrial Revolution against machine-made objects as detached from the human and spiritual virtues of handicrafts – arguments which the discourse of modern design spent much of its history trying to overcome. This historical echo is often audible in the Maker rhetoric around 3D printing and “the Internet of Things”: that they represent a return to something more authentic and personal than the digital. The move is most visible in the Maker obsession with “faires” and hackerspaces, venues for an in-person sociability that is presented as obviously more spiritually nourishing than its remote digital equivalent.

The problem with the persistence of these traditional values is that it prevents us from addressing the most pressing design questions of the digital era:

How can we create these forms of beauty and fulfill this promise of authenticity within the large and growing portions of our lives that are lived digitally? Or, conversely, can we learn to move past these older ideas of value, to embrace the transience and changeability offered by the digital as virtues in themselves?

Thus far, instead of approaching these (extremely difficult) questions directly, traditional design thinking has led us to avoid them by trying to make our digital things more like physical things (building in artificial scarcity, designing them skeuomorphically, etc.) and by treating the digital as a supplemental add-on to primarily physical devices and experiences (the Internet of Things, digital fabrication).

The Great Social Media Technology Stagnation

And meanwhile our digital technologies have stagnated.

While there are a lot of reasons for this stagnation, one I’d like to highlight here is the role of social media. Building a technology that lets technologists and designers feel (and act) like celebrities is dangerously fascinating. Creating Yet Another Social Media startup or web framework will get you a lot of social attention (tens or hundreds of thousands of followers, maybe), which, as social creatures, we’re addicted to for evolutionary reasons. It’s like the ancient instinct that tells us to eat every fatty and sugary food within reach: a good plan when we never knew when the tribe would next bring down a buffalo, but one that doesn’t work as well in the industrial food landscape.

The result of this Junk Food Technology has been that digital technologies, and especially the web, have degraded into an endless series of elaborations on social media, making physical technologies seem more innovative by comparison.

But there are still lots of real hard important things to be done on the web and in digital technologies more generally, many of them arising from the profound design questions mentioned in the last section:

  • Taking the seemingly endless pile of technological wonders produced by cutting edge computer science research and making them into culture.
  • Doing more with the super-computer satellite camera sensor platforms we constantly carry with us (more than using them as clients for reading social media).
  • Figuring out how to teach each other and do new research without digging ourselves under mountains of debt.
  • Making media that moves people in 30-second chunks when consumed out of context.
  • Telling emotional stories through the strange lives of bots and pseudonymous twitter writing.
  • Breaking out of our bubbles to find empathy with far-flung people less like us around the world.

It’s by wrestling with these problems (and many others like them) that we’ll define the appropriate values that should drive design in a digital era, not by trying to shoehorn the older era’s values into our new digital venues.

A Fetish for 20th Century Modernism: Do Big Things vs. Fuck Yeah Brutalism

To conclude, I want to return to Davies’ dream of a design futurism that would visibly transform our cafes and neighborhoods.

One of the chief dangers of a futurism centered on the built environment is that it lives in the shadow of 20th Century Modernism, the high church of the belief that changing the visual style of the built environment was inseparable from radical transformations in how we live our lives. Modernism was a project of gigantic scale, with ambitions ranging from transforming our politico-economic systems to remaking our infrastructure and physical environment. Its legacy is extremely mixed: it changed the way we live substantially, and in ways that are sometimes quite troubling.

If you are committed to expressing the future through physical things, if you are going to speak in a Modernist language of transforming the built environment, what will your relationship be to that legacy? Do you want to transform the world with huge projects? Or is that ambition just another fetish for a historical style (of raw concrete, shiny metals, and polished glass instead of blackboards and brass)?

An example of the former is Neal Stephenson’s Hieroglyph Project. Stephenson wants to push science fiction authors to tell stories that can inspire the pursuit of new Modernist-scale projects. Personally, he wants to build a 2km-tall tower to make it cheaper to put things into space.

To this, I say: fuck yeah. We need these big dreams to try to dynamite us out of our incrementalism (in both physical and digital innovation). If Neal and his buddies can do it then I’d love to see them take the scale of Modernist ambition and prove that it can be done without the attendant de-humanizing that led us to reject Modernism in the 20th Century.

The latter relationship to Modernism, though, is much more common. The design world is full of fetish material for 20th Century Modernism as a lifestyle, especially in interior design and minimalist magazines like Dwell and about a billion Tumblrs.

The worst offender, to my mind, is Fuck Yeah Brutalism, which posts a parade of pictures and drawings of Brutalist architecture (like this drawing of a proposed Seward Park Extension from 1970) and has over 100,000 followers.

This kind of pixel-deep appreciation treats Modernism as a sexy design style that looks pretty on websites, completely divorcing it from its huge, and often extremely troubling, human and political effects.

For three years, I lived across the street from the Riis Houses and the Lillian Wald Houses in Alphabet City, Manhattan:

They are what Brutalist architecture and Modernist planning often became in practice, a vertical filing cabinet for the city’s poorest and least politically powerful populations whose maintenance has been visibly abandoned by the city.


It’s easy to fetishize Brutalist buildings when you don’t have to live in them. On the other hand, when the same Brutalist style is translated into the digital spaces we daily inhabit, it becomes a source of endless whinging. Facebook, for example, is Brutalist social media. It reproduces much the same relationship with its users as the Riis Houses and their ilk do with their residents: focusing on control and integration into the high-level planning scheme rather than individual life and the “ballet of a good blog comment thread”, to paraphrase Jane Jacobs.

The divide between these two ways of adapting Modernism into the digital age powerfully illustrates the threat of Thingpunk. Its real danger lies in its superficiality, its mistaking of the transformation of surface style for evidence of systemic change.

Thanks to Rune Madsen and Jorge Just for feedback on a draft of this.


Making Photomosaics in Processing

This past Friday, Tom Henderson tweeted me a question about how to get started making computational collage.

Upon further questioning, Tom pointed to some inspirations for the type of thing he wanted to try to make: Sergio Albiac’s collages

Sergio Albiac - You Are Not The News

Lewis Walsh’s typographical collages

Lewis Walsh

…and Dada collage (like this Hannah Höch example):

Cut with the Kitchen knife

Having gotten a sense of what he was going for, I suggested that Processing might be a good place to start, mainly because of how easy it makes it to work with images and how many resources there are out there for learning it. (Though I did suggest checking out Zajal as well since he already knows some Ruby.)

Further, I offered to put together a Processing example of “computational collage” specifically to help. While there are a lot of great texts out there for getting started with Processing (I especially recommend Dan Shiffman’s Learning Processing) it can be really helpful to have an in-depth example that’s approximately in the aesthetic direction in which you’re trying to proceed. While such examples might be a lot more complex and therefore much more difficult to read through, they can demonstrate how someone with more experience might approach the overall problem and also point at a lot of little practical tips and tricks that will come in handy as you proceed.

So, after a bit of thinking about it, I decided to write a Processing sketch that produces photomosaics. A photomosaic reproduces a single large image by combining many other smaller images. The smaller images act as the “pixels” that make up the larger image, their individual colors blending in with their neighbors to produce the overall image.

Here’s an example of the effect, produced by the sketch I ended up creating for Tom:

Nuclear Fine Photomosaic

Check out the larger size to see the individual pictures that go into it.

Here’s another example based on a picture I took of some friends’ faces:

Corrie and Benji Photomosaic

Original size.

For the rest of this post, I’ll walk through the Processing (and bit of Ruby) code I used to create this photomosaic. I’ll explain the overall way it works and point out some of the parts that could be re-usable for other projects of this sort (loading images from a directory, dividing up an image into a grid, finding the average color of an image, etc.). At the end, I’ll suggest some ways I’d proceed if I wanted to produce more work in this aesthetic of “computational collage”.

A Note of Warning

This post is far longer and more detailed than your usual “tutorial”. That is intentional. I wanted to give Tom (and anyone else in a similar position) not just some code he could use to create an effect, but a sense of how I think through a problem like this, as well as a solid introduction to some conceptual tools that will be useful for doing work in and around this area. I hope that the experience is a little like riding along in my brain as a kind of homunculus – but maybe a little better organized than that. This is exactly the kind of thing that I wished people would do when I was first starting out, so I thought I’d give it a shot to see if it’s useful to anyone else.

Overall Plan

Let’s start by talking about the overall plan: how I approached the problem of making a sketch that produced photomosaics. After thinking about how photomosaics work for a little while (and looking at some), I realized the basic plan was going to look something like this:

  • Download a bunch of images from somewhere to act as the pixels.
  • Process a source image into a grid, calculating the average brightness of each square.
  • Go through each square in this grid and find one of the downloaded images that can substitute for it in the photomosaic.
  • Draw the downloaded images in the right positions and at the right sizes.

In thinking through this plan, I’d made some immediate decisions/assumptions. The biggest one: I knew the photomosaics were going to be black and white and that I’d mainly use black and white images as my downloaded images. This choice radically simplified the process of matching a portion of the original image with the downloaded images – it’s much easier to compare images along a single axis (brightness) than along the three that are necessary to capture color (red, green, blue or hue, saturation, value). Also, aesthetically, most of Tom’s example images were black and white so that seemed like a nice trade-off.

After a first section in which I explain how to use some Ruby code to download a bunch of images, in the rest of the sections, I’ll mainly describe the thinking behind how I approached accomplishing each of the stages in Processing. The goal is to give you an overall sense of the structure and purpose of the code rather than to go through every detail. To complement that, I’ve also posted a heavily-commented version of the photomosaic sketch that walks through all of the implementation details. I highly recommend reading through that as well to get a full understanding. I’ve embedded a gist of that code at the bottom of this post.

Downloading Images

The first step in making a photomosaic is to download all the images that are going to act as our tiles – the individual images that will stand in for the different grays in the original image. So, what we need is a bunch of black and white images with different levels of brightness ranging from pale white to dark black.

By far the easiest way to get these images is to download them from Flickr. Flickr has a great, rich API, which has been around for quite a long time. Hence there are libraries in tons of different languages for accessing its API, searching for images, and downloading them.

Even more conveniently, this is a task I’ve done lots of times before, so I already had my own little Ruby script sitting around that handles the job. Since Tom had mentioned he knew some Ruby this seemed like the perfect solution. You can get my Ruby script here: flickr_downloader.rb. To use this script you’ll have to go through a number of steps to authorize it with Flickr.

  • Apply for API access
  • Enter the API key and shared secret they give you in the appropriate place in the flickr_downloader.rb script.

Now you need permission to log in as a particular user. This is done using an authentication process called OAuth. It is surprisingly complicated, especially for the relatively simple thing we want to do here. For our purposes, we’ll break OAuth down into two steps:

  • Give our code permission to login as us on Flickr.
  • Capture the resulting token and token secret for reuse later.

This example from the flickraw gem will take you through the process of giving our code permission to log in to Flickr: auth.rb. Download it and run it. It will guide you through the process of generating an OAuth URL, visiting Flickr, and giving permission to your code.

At the end of that process, be sure to capture the token and token secret that script will spit out. Once you’ve got those, go back to our flickr_downloader.rb script and paste them in the appropriate places marked ACCESS_TOKEN and ACCESS_SECRET.

Now the last step is to select a group to download photos from. I simply searched for “black and white flickr group” and picked the first one that came up: Black and White. Once you’ve found a group, grab its group id from the URL. This will look something like “16978849@N00” and it’s what you need for the API to access the group’s images. When you’ve got the group id, stick it in the flickr_downloader.rb script and you’re ready to run it.

Make sure you have a directory called “images” next to the flickr_downloader.rb script – that’s where it wants to put the images it downloads. Start it running and watch the images start coming down.

Process the Source Image into a Grid

Processing photo collage for Mathpunk

Now that we’ve got the images that will populate each of our mosaic’s tiles, the next step is to process the source image to determine which parts of it should be represented by which of our tile images.

When you look at the finished sketch, you’ll see that the code that does this job actually comes at the end. However, in the process of creating the sketch it was one of the first things I did – while I was still thinking about the best way to match downloaded images to each part of the source image – and it was a very early version of the sketch that produced the screenshot above. This kind of change is very common when working through a problem like this: you dive into one part because you have an idea for how to proceed, regardless of whether that will be the first piece of the code in the final version.

Creating this grid of solid shades of gray consisted of two main components:

  1. Loop through the rows and columns of a grid and copy out just the portion of the original image within each cell.
  2. Pass these sub-images to a function that calculates the average brightness of an image.

First I defined the granularity of the grid: the number of rows and columns I wanted to break the original image up into. Based on this number, I could figure out how big each cell would be: just divide the width and height of the source image by the number of cells you want on each side.

Once I knew those numbers, I could create a nested for-loop that would iterate through every column in every row of the image while keeping track of the x- and y-coordinates of each cell. With this information in hand, I used Processing’s copy() function to copy the pixels from each cell, one by one, into their own image so that I could calculate their average brightness.

See the drawPhotomosaic() function in the full Processing code below for a detailed description of this.

I implemented a separate function to calculate the average brightness of each of these sub-images. I knew I’d need this function again when processing the downloaded tile candidates: I’d want to find their brightness as well so I could match them with these cells. See the aveBrightness() function in the Processing code for the details of how to find the average brightness of an image.

In my original experiments with this, I simply drew a solid rectangle in place of each of these cells. Once I’d calculated the average brightness of that part of the source image, I set fill() to the corresponding color and drew a rectangle with rect() using the x- and y-coordinates I’d just calculated. Later, after I’d figured out how to match the tile images to these brightness values, it was easy to draw the tile images at the same coordinates as these rectangles: the call to rect() simply got swapped for a call to image().
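
To make this concrete, here’s a condensed sketch of that grayscale-grid prototype. It’s a minimal illustration rather than the final code – the grid size and the “source.jpg” filename are placeholders I’ve assumed, and the fully commented sketch embedded at the bottom of this post remains the authoritative version:

    PImage source;
    int cols = 60;
    int rows = 60;

    void setup() {
      size(600, 600);
      source = loadImage("source.jpg");  // placeholder filename
      source.resize(width, height);
      noStroke();

      int cellW = width / cols;
      int cellH = height / rows;

      // walk the grid, copying each cell out of the source image
      for (int row = 0; row < rows; row++) {
        for (int col = 0; col < cols; col++) {
          int x = col * cellW;
          int y = row * cellH;
          PImage cell = createImage(cellW, cellH, RGB);
          cell.copy(source, x, y, cellW, cellH, 0, 0, cellW, cellH);
          // draw a solid rectangle at the cell's average brightness;
          // the final version swaps this rect() for an image() call
          fill(aveBrightness(cell));
          rect(x, y, cellW, cellH);
        }
      }
    }

    // average the brightness of every pixel in an image
    float aveBrightness(PImage img) {
      img.loadPixels();
      float total = 0;
      for (int i = 0; i < img.pixels.length; i++) {
        total += brightness(img.pixels[i]);
      }
      return total / img.pixels.length;
    }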

Matching Tile Images

In many ways, this is the core of the photomosaic process. In order to replace our original image with the many images we downloaded, we need a way to match each cell in the original image to one of them.

Before settling on the final technique, I experimented with a few different ways of accomplishing this. Each had a different aesthetic effect and different performance characteristics (i.e. each took a different amount of time to create the photomosaic, and that time grew at different rates depending on different attributes of the inputs).

For example, early on, it occurred to me that the grid of grayscale cells (as shown in the screenshot above) didn’t look very different if I used all 256 possible shades of gray or if I limited it to just 16 shades. This seemed promising because it meant that instead of having to use (and therefore download and process) hundreds of tile images, I could potentially use a much smaller number, i.e. as few as 8 or 16.

So, my first approach was to divide the possible range of 256 grayscale values into large “bins”. To do 16 shades of gray, for example, each bin would cover 16 adjacent grayscale values. Then, I started loading the downloaded tile images, checking to see which of these 16 bins they fit into, and moving on if I already had an image in that bin. The goal was to select just 16 images that covered the full range of values in the original image.
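
The heart of that binning approach looked something like this sketch (illustrative rather than my exact code; it reuses the aveBrightness() helper from the sketch above):

    // 16 bins, each covering 16 adjacent grayscale values (0-15, 16-31, ...)
    PImage[] bins = new PImage[16];

    void addToBins(PImage candidate) {
      int bin = constrain(int(aveBrightness(candidate) / 16.0), 0, 15);
      if (bins[bin] == null) {
        bins[bin] = candidate;  // keep only the first image that lands in each bin
      }
    }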

However, when actually running this approach, I found that it was surprisingly hard to fill all of the bins. Most of my tile images had similar brightnesses, so while I’d fill five or six of the middle bins immediately, the script would then churn through a huge number of images while failing to fill the most extreme bins.

I eventually did manage to produce a few photomosaics using this method:

Processing photo collage for Mathpunk

However, I decided to abandon it since it required a really large set of tile images to search through – and then didn’t use 98 percent of them – and also created a distracting visual texture by repeating each tile image over and over (which could be a nice effect in some circumstances).

After trying a few other similar approaches, it eventually occurred to me: instead of starting with a fixed set of grayscale colors to look for as my “palette”, I should just organize the actual tile images I had on hand so that I could pick the best one available to match each cell in the source image.

Once I’d had that revelation, things proceeded pretty quickly. I realized that in order to implement this idea, I needed to be able to sort all of the tile images based on their brightness. Then I could simply select the right image to match each cell in the source image based on its position: if I needed a full black image, I could grab one from the front of my sorted list; if I needed one near full white, I could grab one from the end; and so forth for everything in between. The image I grabbed to correspond to a full black pixel might not be all-black itself (in fact it almost definitely wouldn’t be – who posts all-black images to Flickr?), but it would be the best match I could get given the set of tile images I’d downloaded.
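
Once the list is sorted, that matching step collapses into a single index lookup, something like this sketch (it assumes tiles is an ArrayList of the sorted PixelImage wrappers described in the next paragraph):

    // pick the tile whose position in the sorted list corresponds to the
    // cell's brightness: dark cells pull from the front, bright ones from the end
    PixelImage matchTile(float cellBrightness, ArrayList<PixelImage> tiles) {
      int tileIndex = int(map(cellBrightness, 0, 255, 0, tiles.size() - 1));
      return tiles.get(tileIndex);
    }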

In order to make my tile images sortable, I had to build a class to wrap them. This class would hold both the information necessary to load and display the images (i.e. their paths) and their average brightness – calculated using the same aveBrightness() function I’d already written. Then, once I had one of these objects for each of my tile images, I could simply sort them by their brightness score and I’d have everything I needed to select the right image to correspond to each cell in the source image.
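
Here’s a stripped-down sketch of that idea: a wrapper class, a comparator to sort by brightness, and one assumed way of wiring up the directory loading (the real PixelImage class in the full sketch below carries more detail, so treat this as the shape of the solution rather than the solution itself):

    import java.util.Collections;
    import java.util.Comparator;
    import java.io.File;

    // wrap each tile image's path together with its average brightness
    class PixelImage {
      String path;
      float aveBrightness;

      PixelImage(String path, float aveBrightness) {
        this.path = path;
        this.aveBrightness = aveBrightness;
      }
    }

    // sort wrappers from darkest to brightest
    class PixelImageComparator implements Comparator<PixelImage> {
      public int compare(PixelImage a, PixelImage b) {
        return Float.compare(a.aveBrightness, b.aveBrightness);
      }
    }

    // load every jpg in a directory, score it with the aveBrightness()
    // helper from earlier, and return the wrappers sorted by brightness
    ArrayList<PixelImage> loadTiles(String dir) {
      ArrayList<PixelImage> tiles = new ArrayList<PixelImage>();
      for (File f : new File(dir).listFiles()) {
        if (f.getName().toLowerCase().endsWith(".jpg")) {
          PImage img = loadImage(f.getAbsolutePath());
          tiles.add(new PixelImage(f.getAbsolutePath(), aveBrightness(img)));
        }
      }
      Collections.sort(tiles, new PixelImageComparator());
      return tiles;
    }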

The code to accomplish this makes up most of the sketch. See the PixelImage class, the PixelImageComparator class, and most of the setup() function in the full sketch for details. I’ve written lots of comments there walking you through all of the ins and outs.

Once it was in place, my sketch started producing pretty nice photomosaics, like this one based on Tom’s twitter icon:

Mathpunk Photomosaic

(View at original size.)

One caveat: I found the result worked especially well with relatively high-contrast source images – like the black and white portrait I posted above or the one below, based on an ink drawing of mine. I think this is because the tiles only cover a limited range of grays. Hence, images that depend for their legibility on fine distinctions amongst grays can end up looking a little muddled.

Running people drawing as photomosaic

Future Improvements

At this point, I’m pretty happy with how the photomosaic sketch came out. I think its results are aesthetically nice and fit into the “computational collage” category that Tom set out at the start. I also think the code covers a lot of the bases you’d need for almost any kind of work in this general area: loading images from a directory, processing a source image, laying things out in a grid, etc.

That said, there are obvious improvements that could be made as next steps starting from this code:

  • Use tile pictures that are conceptually related to the source image. To accomplish this I’d probably start by digging more into the Flickr API to make the downloader pick images based on search terms or person tags – or possibly I’d add some OpenCV to detect faces in images…
  • Vary the size of the individual images in the grid. While the uniformity of the grid is nice for making the image as clear as possible, it would be compositionally more interesting (and more collage-like) to have the size of the images vary more, as Tom’s original references demonstrate. For a more advanced version you could even try breaking up the rectangular shape of each of the source images (Processing’s mask() function would be a good place to start here).
  • Another obvious place to go would be to add color. To do this you’d need a different measure of similarity between each cell and the tile images – and one that wouldn’t involve searching through all of the tile images to match each cell. I’d think about extending the sorting technique we’re using in this version: if you figured out a way to translate each color into a single number in some way that was perceptually meaningful, you could use the same sorting technique to find the closest available tile (see the sketch after this list). Or, you could treat the tile images as red, green, and blue pixels and then combine three of them in close proximity (and at appropriate levels of color intensity) to produce the average color of any cell in the image.
  • One aspect of Tom’s references not covered here is the use of typography. Rune Madsen’s Printing Code syllabus is an amazing resource for a lot of computational design and composition tasks in Processing and his section on typography would be especially useful for working in this direction.
  • Finally, one way to break off of the grid that structures so much of this code would be to explore Voronoi stippling. This is a technique for converting a grayscale image into a series of dots of different weights to represent the darkness of each region in a natural way, much like a stippled drawing created by hand. Evil Mad Scientist Laboratories recently wrote an extensive post about their weighted Voronoi stippling Processing implementation to create art for their Egg Bot machine. They generously provide Processing code for the technique, which would make an interesting and convenient starting point.
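
For the color idea in the third bullet above, one hedged possibility for that “single number” is a luma-style weighted sum, which tracks perceived lightness reasonably well (a sketch of the mapping only, not a full color matcher):

    // reduce a color to one number using Rec. 601 luma weights so the
    // same sort-and-index trick from the grayscale version still works
    float colorKey(color c) {
      return 0.299 * red(c) + 0.587 * green(c) + 0.114 * blue(c);
    }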


An ofxaddons.com Update

James George and I just ran an epic update to ofxaddons.com, the site we run that indexes extensions and libraries for OpenFrameworks. With the recent version switchover of the GitHub API, our addon detection code had gotten out of date and hadn’t run in a few weeks.

It’s fixed now and in running it we found a whole raft of new awesome addons, bringing the grand total up to 564 known addons. In this post I’ve collected a few of the addons that caught my eye as being exciting, though there are a bunch more. You can follow the ofxaddons changes page to keep up.

Let’s start off with two addons that have cool images to share. First is the very clever ofxRetroPixel by Akira Hayasaka. ofxRetroPixel converts hi-res graphics into low-res retro pixels “like a 70s pong game”. Here’s a sample of the results:

ofxRetroPixel

Another neat visual addon is ofxSparklines by Christopher Baker. ofxSparklines produces small line graphs from data with a lot of options for visual refinement:

Seems like a great tool for adding some visual feedback to an app.

Another exciting (and timely) addon is ofxLytroFileTools from Jason Van Cleave, which lets you parse and view files from the Lytro re-focusable camera in OF.

The next two addons were enough to inspire us to add a new category to the site: Machine Learning. ofxSelfOrganizingMap implements an unsupervised machine learning algorithm that has applications in data clustering, compression, and pattern recognition. The addon’s readme has some awesome examples, including a visualization of the colors of the seasons created by clustering images from Google image search:

And a visual clustering of countries via UN poverty statistics:

Another machine learning addon, ofxSequence, provides classification and recognition of numeric sequences, a technique that can be used in a lot of applications, including gesture recognition. This is a classifier that you train with some example data, and then it uses a hidden Markov model to recognize patterns in that data.

The two last addons I wanted to mention both relate to computer vision and tracking people.

Joel Gethin Lewis has a new addon for background removal, ofxBackground, based on a classic example from Learning OpenCV.

And, last but not least, Chris O’Shea made ofxThermitrack, an OF interface to the Thermitrack thermal imaging camera, which provides “high-resolution position data from people moving within its field of view” (and looks like a smoke detector):

There’s also been a lot of excellent development on existing addons as well as some intriguing-looking projects that aren’t quite ready for release yet. You can find all of it at ofxaddons.com/changes.


In-Screen Graphics as Religious Experience: On the Purpose of Infoviz

In a recent Domus piece on In-Screen Sports Graphics, Max Gadney reports on a talk given by Ryan Ismert at Strata. Ismert works at Sportvision, which makes the graphics that appear in most US TV sports telecasts.

In the piece, Gadney sets out Sportvision’s work as a model for other infoviz practices, especially those aimed at business decision makers and designers of public space. He offers some compelling evidence of Sportvision’s success at integrating rich sensor-derived data into concise and comprehensible on-screen graphics.

Then Gadney takes things a little further. He asks “how do we go about getting the freeflowing, more subjective data that might better communicate life in buildings, football and business?” In sport he’s thinking about problems like getting the system to capture the subtleties of the fluid “whirling patterns of play” demonstrated in an FC Barcelona football match. He wants systems like Sportvision’s on-screen infoviz to be able to understand and represent such seemingly ineffable properties (and, by analogy, for our business and building modeling systems to be able to do the same), not just hard numbers.

And Gadney has a suggestion for an answer: “The answer, for both football and buildings, will emerge from a more holistic, performative sophistication in collection and visualisation, as a ‘total design’”. He suggests that this “total design” could be based on “parametric models describing the interdependency between performative elements in buildings”:

“Parametric models indicate how a change to one component of a structure causes ripples of changes through all the other connected elements, mapped across structural loads but also environmental characteristics, financial models and construction sequencing. FC Barcelona’s activity is also clearly parametric in this sense. It cannot be understood through sensors tracking individuals but only through assembling the whole into one harmonious, interdependent system.”

Here, I think, is where Gadney gets into trouble. The problem with this idea of “total design” is that it bleeds into Cybernetics on the one side and AI on the other and hence falls into the problems that haunt both of those disciplines.

What is the victory condition for an advanced parametric model of the subtle strands of inter-relations between players on a football pitch?

Start with the Cybernetics option. Is the goal predicting the future of the game? Predicting when and if individual goals will be scored? If so, that falls into the classic modeling problem that doomed large-scale cybernetics efforts such as World3 and George Van Dyne’s work in the Colorado grasslands. Capturing increasing amounts of data doesn’t cause your model of the complex system to converge on the real world behavior. Instead it causes it to either act stochastically (as in Van Dyne’s case) or fail catastrophically because of flawed assumptions (as in the case of World3’s failure to predict the green revolution and the liberalization of global trade).

The failures of these statistics-based complex systems models led directly to the rise of chaos theory and a new modesty across disciplines like ecology. The scientists involved learned respect for how little rich data actually helps in the problem of modeling complex systems. However, many other industries did not learn such respect. One of the most prominent was the financial industry, which built probably the most sophisticated and detailed data-driven systems modeling tools in the world. As is now emerging, these tools were a major contributor to the overdeveloped sense of confidence and control that plagued financial industry operators, leading directly to the 2008 crisis.

On the AI side, we have another proposal for the goal of such a “total design”. Maybe the victory condition is that the system shares our aesthetic appreciation of the game? The traditional aim of AI is to reproduce human capacities in the machine. Towards that aim we might use increased tracking of players and increasingly sophisticated models of game dynamics to make our computational systems into passionate fans of games, fans that can appreciate the complex shifting patterns of FC Barcelona’s tiki-taka passing the way we do.

However, how you would measure (or even clearly define) machine “appreciation” is a philosophical problem that has plagued hard AI proponents since the beginning of the discipline. And, I would argue, it is a problem on which they’ve made basically no progress in that time. The reason for that lack of progress, to my mind, is that the core of AI itself is a bad metaphor. As Bruce Sterling argued compellingly in his Long Now talk, The Singularity: Your Future as a Black Hole: “we don’t know what cognition is and we don’t even really know what computation is”. So how can we expect to jump straight into subtle problems like building aesthetic appreciation into computation? Even if you’re not convinced by the Searle arguments against hard AI, I think you’d have to see this as something of an obstacle to setting something like this as the goal of a sport-, business-, or building-infoviz system, despite the potential poetic beauty of doing so. (I wrote more about this problem and how I think it’s evolving in the context of present day technology in my post, AI Unbundled.)

All of that said, what is the goal of such a “total design” system if it cannot be complex systems management or AI aesthetic appreciation? An alternate goal for such a system was articulated by Doug Engelbart at SRI in the 60s: that of human augmentation. Computational systems should seek to augment human experiences and abilities: learning, recall, access, communication.

What would an infoviz system look like that was aimed at the Augment goals? Rather than offering an illusory sense of control or the pathetic fallacy of some machinic aesthetic understanding, such a system would aim to enhance what human beings gain from sport: a sense of the beauty of perfected human movement, the thrill of competition, especially when rooted in the emotion-sharing and amplification herd-behaviour of crowds, etc. I don’t know how you’d go about using data to augment these human experiences, but I know that you’d be much better off with David Foster Wallace’s NY Times essay on Roger Federer as Religious Experience as a starting point than Cybernetics or AI.


Designing for and Against the Manufactured Normalcy Field

This post tells the story of the session at FOO camp this year that I co-ran with Matt Webb on the Manufactured Normalcy Field. It explains the background of the idea, describes the structure of the brainstorming session, outlines its results, and then tracks some of the uptake of the idea since FOO, specifically in a recent episode of A Show with Ze Frank.

A few months back, Nick Pinkston turned me on to Ribbonfarm, the blog of Venkatesh Rao, a researcher and entrepreneur. Ever since, it’s become a reliable source of mind-grenades for me: explosive ideas that carve up reality in a way I’d never imagined and stimulate new ideas. Ideas you can not just think about, but think with.

The most productive of these ideas for me so far has been the Manufactured Normalcy Field. The Field is Rao’s attempt to explain the process of technical adoption. Rao argues that when presented with new technological experiences, people work hard to maintain a “familiar sense of a static, continuous present”. In fact, he claims that we change our mental models and behaviors the minimum amount necessary to work productively with the results of any change.

In cultural practice this process of minimal change takes two primary forms. First, we create stories and metaphors that map strange new experiences back to something we already understand. Rao gives a number of examples of this: the smartphone uses a phone metaphor to make mobile computing comprehensible, the web uses a document metaphor, which has persisted in our user interfaces even as the underlying technology has changed, and “we understand Facebook in terms of school year-books”.

Secondly, we make intentional design choices aimed to de-emphasize the strangeness of new technologies. Here, Rao explains via the example of air travel (a field in which he was educated as an engineer):

"A great deal of effort goes into making sure passengers never realize just how unnatural their state of motion is, on a commercial airplane. Climb rates, bank angles and acceleration profiles are maintained within strict limits. Airline passengers don’t fly. They travel in a manufactured normalcy field.

When you are sitting on a typical modern jetliner, you are traveling at 500 mph in an aluminum tube that is actually capable of some pretty scary acrobatics. Including generating brief periods of zero-g. Yet a typical air traveler never experiences anything that one of our ancestors could not experience on a fast chariot or a boat."

Given this framework, much of the way we currently market new technology is misguided. Geeks, especially, are prone to praise an innovation as disruptively, radically new. But if we believe Rao, that’s the worst way we could advocate on its behalf. What we should do instead is try to normalize the new technology by figuring out the smallest stretch needed to get the Manufactured Normalcy Field to encompass it.

In fact, taking this into account, Rao describes a new role for user experience design:

“Successful products are precisely those that do not attempt to move user experiences significantly, even if the underlying technology has shifted radically. In fact the whole point of user experience design is to manufacture the necessary normalcy for a product to succeed and get integrated into the Field. In this sense user experience design is reductive with respect to technological potential.”

The Manufactured Normalcy Field and Design (at FOO)

Rao’s essay proceeds to examine the threats he currently sees to the MNF and the anxiety that produces in us. It’s a fascinating (and important) line of thought and I recommend you read the full article.

For my part, though, Rao’s account of the MNF got me thinking about how it might be useful to me as a designer. It occurred to me that, when making, marketing, or designing products, there are two different relationships to the Field you might want to forge.

First, as already hinted at, you might have a new technology whose adoption you want to encourage. In this case, you would design the product to disturb the existing state of the Field as little as possible. You’d search for existing well-understood products and experiences to analogize it to. You’d try to make it familiar. Think of Apple’s advertising for the iPad, which depicts the device as a totally natural and harmless part of normal domestic life, basically a “glass magazine”.

Second, you might have the opposite situation: a product that’s become boring to the point of invisibility. Air travel. Routers. Refrigerators. If you wanted to make these seem more exciting or innovative, you’d want to “denormalize” or defamiliarize them: push them to the edge of the Manufactured Normalcy Field so that we notice them again and they feel new. For example, imagine an airplane with as much visibility for the passengers as was feasible: huge windows that really let you feel and see the speed and angle of the plane’s flight.

So, I came to FOO with this broad structure in mind for a brainstorming session based on Rao’s Manufactured Normalcy Field. I was feeling nervous about the idea because it was new and I’d barely talked to other people about it, let alone led a brainstorming session on it with people of the incredible caliber that O’Reilly gathers for FOO.

Despite my trepidation, I reserved a session time: “Designing for and Against the Manufactured Normalcy Field”. And to hedge against my nervousness, I recruited Matt Webb, CEO of the excellent BERG London, to co-lead the session with me. Webb is an experienced invention workshop leader and I thought this idea would be right up his alley. He was generous enough to agree immediately, with just a short, semi-fevered pitch from me to go on.

In the run-up to the session, I explained a little bit more of what I was thinking to Matt (basically gave him a short, verbal, version of the above). He then boiled that down into a structure for a brainstorming session. After a short introduction from me, Matt divided the white board into three sections, labeled respectively “Things That Feel Weird” (i.e. things that need to be pushed further inside the Field), “Things That Feel Normal” (boring things that need de-normalization), and “Things That We Use To Feel About Things” (strategies for normalizing and de-normalizing).

The results of the session

Much to my surprise and delight what ensued was a fantastic brainstorming session. Part of that was the incredible creativity of the FOO audience. You couldn’t hope for a better group for this kind of exercise than one that contains the likes of Ze Frank, Tom Coates, Tim O’Reilly, etc. etc. And another part of that was Matt’s expert execution of our structure.

Here’s a photo of the white board with the results:

Manufactured Normalcy Field board

The first category we started with was Things That Feel Weird. Unsurprisingly, given the audience, these tended towards cutting-edge technologies:

  • chips that can see smiles
  • Mechanical Turk
  • self-driving cars
  • smart prosthetics
  • Google Glass
  • smart drugs
  • brain reading

The Things That Feel Normal were interestingly more diverse, stretching from long-mundane parts of domestic life to bits of technology only recently incorporated into The Field:

  • keeping pets
  • earth
  • refrigerators
  • crowd-sourcing
  • screens
  • phones
  • centralized banking
  • producing things in China
  • self GPS-tracking
  • yeast

The last category, Things That We Use To Feel About Things, may have been the most fascinating and useful. It ended up eliciting existing cultural techniques that we use to normalize weird things or to allow us to defamiliarize the mundane.

  • personification / anthropomorphism
  • repetition / routine
  • empathy
  • desktop metaphor
  • skeuomorphs
  • gamification
  • domestication
  • medicine / pathologizing (treating something as an illness)
  • sport / play
  • treating as a moral failing

It’s an amazing list, both conceptually and practically. I don’t think I would have seen anything in common between these practices before seeing them emerge in this context. Also, they’re all things I can now actively imagine using in a design process.

After we’d filled in these three areas, Matt suggested a final step of the process that would lead towards actionable design concepts. He asked people to call out Things That Need Weirding and Things That Need Normaling and, for each thing, he asked the rest of the group to think of ways to make that thing either weirder or more normal, as appropriate.

Here were the candidates (time was getting short at this point so we only got to do a few):

Things That Need Weirding

  • advertising
  • money
  • driving

Things That Need Normaling

  • refrigerators
  • flying

(there were others of these called out, but I didn’t capture them)

And here were the concepts that emerged by trying to weird the normal things and normal the weird ones:

  • Everyone starts the plane together (passengers have placebo controls)
  • Pathologize driving (communicable?)
  • Fridge as Narnia
  • CCTV in toilets
  • AR that lets you see CCTV fields-of-view
  • Advertising in cemeteries
  • Advertising made just for you
  • Grinning Currency

This is a partial list I’m reconstructing from the white board photo and my own memories. It doesn’t do a great job capturing the thrill and playfulness of the ideas and the energy and excitement of the participants.

It was an incredibly fun session. I was surprised and very pleased by how well it came out. I can imagine running a similar brainstorming session with other groups in more targeted environments with productive results.

Ze Frank and Object-Oriented Ontology

After FOO camp, last week, Ze Frank (who was in the audience at the session and was a major contributor to the brainstorming) made an episode of his show, breaking normal, where he talked about the session. Ze focused on the making-normal-things-weird side of the spectrum. He gave the example of re-imagining how he pictures himself standing on the earth:

Breaking Normal by Ze Frank

Instead of always imagining himself standing on the top of the earth, he started imagining himself standing on the side of it looking down:

I started imagining that I was facing down when I was standing and looking forward when I was lying down and suddenly I got dizzy. So I lied down, but now lying down had the same feeling as this (dangling feet off the edge of a building), like my back was stuck to a ball and below me was just space.

At the end of the episode, Ze asked his audience to play along, inviting them to describe a normal thing in a way that reveals its inherent weirdness. Ze’s viewers did an amazing job of it. Here are some of my favorites from the comments on that video:

Fishspawned described a thermos:

a thermos is a container that contains a container inside of it surrounded by nothing because if you put stuff into something and surround it with nothing it will keep on being what it is and can’t change into something else. so a thermos acts as a sort of mobile suspended animation device

Ark86 on computers:

In reality, I’m staring at a flat panel made from superheated sand that is connected via strips of ores and really heavily processed dinosaur remains to a thing that we all pretend to understand called “the internet”. Also, I’m sitting on a cow skin painted black and stapled onto some more processed dinosaur remains. I think it’s weird how much ancient animal matter is still being used to make everything we do possible. Thanks, Stegosaurus!

Grendelkhan on work and money:

Five out of seven days, a significant proportion of people go to a small, confined space and sit still for roughly eight hours, staring at a screen and typing. They do not physically move or construct anything.

Later on, they go to other buildings, and take food and other necessities. These two activities are related in an entirely conceptual way–no physical tokens are moved, and the providers of physical goods don’t know anything about the small, confined space.

NephilimMuse on clapping:

Applauding a performance is weird. More specifically, clapping is weird. We just smack our hands together to make a noise that expresses some sort of satisfaction or adoration. It makes the receiving person(s) feel validated. I don’t get it. the motion of clapping is weird. Smack smack smack.

In reading these descriptions, it struck me that they are very resonant with Object-Oriented Ontology (which I’ve written about before here and here). Breaking the abstraction of some behavior or acculturated object (or “opening the black box” as Graham Harman describes it in Prince of Networks) lets us see all the objects and materials that actually constitute these concepts and abstractions. This weirding process puts the material of superheated sand, the air inside a thermos, and cow skin painted black on the same footing as computers, thermoses, and jobs – culturally important categories we routinely consider.

In Object-Oriented Ontology terms, this weirding process is pushing us towards a “flat ontology” where everything exists equally. It’s great that Ze and his viewers have found this game that vividly flattens their personal ontologies and that the result is wonder.

Posted in Opinion | 10 Comments

Winning the New Aesthetic Death Match

Yesterday I participated in the Flux Factory New Aesthetic Death Match, a lively public debate that the art space hosted. My fellow debaters were Kyle McDonald, Molly Steenson, and Carla Gannis. Molly and Kyle I already knew well, but Carla I hadn’t had the pleasure of meeting until just before the debate last night.

The debate was structured as a kind of 1980s MTV take on traditional Oxford debating society rules: strict timed statement and rebuttal structures, a voted winner at the end, plus smoke machines and “smack downs”. The audience was surprisingly large, with something like three times as many people as chairs.

As panelists we were actually quite friendly with one another, so it was probably good, at least for the audience’s amusement, that the rules were in place to ensure some conflict. The result was a stimulating and lively conversation that actually managed to touch on some of the deeper issues with the New Aesthetic. I was impressed by much of what my fellow panelists said. It’s surpassingly difficult to be coherent and entertaining off the cuff and under a ticking clock.

I’m also proud to say that at the end of the night, I was chosen the winner by audience applause.

It’s impossible to sum up all the points that were made, but I quite liked this trio of tweets by Marius Watz this morning summing things up:

Marius Watz NAFF tweets

There’s not, as far as I know, any video of the event online, so the best documentation I can provide is my opening statement, which I was asked to keep to one minute and which kicked off the night. I scrawled it in my notebook on my way out to Long Island City and read it over this video (the full text is below):


For the first forty years of their existence, we thought of technologies like full text search, image processing, and large scale data analysis as components in a grand project to build an artificial humanlike intelligence.

During this time these technologies did not work very well.

In the last 15 years they’ve started working better. We have Google search, Facebook face detection, and high frequency trading systems.

More and more of our daily lives are lived through computer screens and the network services on them. Hence a huge amount of our visual, emotional, and social experiences take place in the context of these algorithmic artifacts, these digital things interacting with each other a billion times a second. Like the slinky on the treadmill here they take on a kind of life of their own, a life none of their human makers explicitly chose.

Our struggle to understand that life, and to learn to engage with it in our artistic and design practices, is the heart of the New Aesthetic.

This quick statement summarized other things I’ve said at more length here, here, and here.

Posted in Art | 1 Comment

Teaching Makematics at ITP

I’m proud to announce that I’ll be teaching a class at NYU ITP next semester. The class grew out of my work on Makematics; it’s called “Makematics: Turning Computer Science Research into Creative Tools”. Here’s the full description from the Fall 2012 course listings (which will sound somewhat familiar if you’ve read my intro to Makematics):

Artists build on top of science. Today’s cutting edge math and computer science research becomes tomorrow’s breakthrough creative projects.

Computer vision algorithms, machine learning techniques, and 3D topology are becoming vital prerequisites to doing daily work in creative fields from interactive art to generative graphics, data visualization, and digital fabrication. If they don’t grapple with these subjects themselves, artists are forced to wait for others to digest this new knowledge before they can work with it. Their creative options shrink to those parts of this research selected by Adobe and Autodesk for inclusion in prepackaged tools.

This class is designed to help you start seizing the results of this research for your own creative work. You’ll learn how to explore the published academic literature for techniques that will help you build your dream projects. And you’ll learn how to use those techniques to make those projects a reality.

Each week we’ll explore a technique from one of these research fields. We’ll learn to understand the original research and see how to implement it in code that you can use in your projects. You’ll learn to use the marching squares algorithm to detect fingers or make 3D models into something you can laser cut. You’ll learn how to use support vector machines to train your own object detector or analyze a body of text. We’ll cover a series of such topics, each of which has a wide range of applications in different creative media.

I’m still working on finalizing exactly which technical topics I’ll cover. So far I have units planned on Marching Squares, Support Vector Machines, and Principal Component Analysis. I’m looking for a good topic in probability (and am open to suggestions). I’ll be teaching the class in Processing and producing libraries that facilitate each of these techniques (in fact, I’ve already started).
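To give a taste of what turning research into creative tools looks like, here’s a minimal sketch of the marching squares idea in Processing: sample a scalar field at grid corners, classify each 2x2 cell by a 4-bit corner pattern, and look up which contour segments to draw. This is my own illustrative reduction (edge midpoints instead of interpolated crossings, Perlin noise standing in for real data such as a Kinect depth image), not the library code for the class:

// Minimal marching squares: classify each cell of a sampled field
// by a 4-bit corner pattern and look up its contour segments.
int cell = 10;            // size of each grid cell in pixels
float threshold = 0.5;    // iso-value separating inside from outside

void setup() {
  size(400, 400);
  noLoop();
}

void draw() {
  background(255);
  stroke(0);
  int cols = width / cell;
  int rows = height / cell;
  // sample a scalar field (Perlin noise here) at the grid corners
  float[][] field = new float[cols + 1][rows + 1];
  for (int i = 0; i <= cols; i++)
    for (int j = 0; j <= rows; j++)
      field[i][j] = noise(i * 0.1, j * 0.1);
  // walk the cells, building the 4-bit pattern from the corners
  for (int i = 0; i < cols; i++) {
    for (int j = 0; j < rows; j++) {
      int state = 0;
      if (field[i][j]     > threshold) state |= 8; // top-left
      if (field[i+1][j]   > threshold) state |= 4; // top-right
      if (field[i+1][j+1] > threshold) state |= 2; // bottom-right
      if (field[i][j+1]   > threshold) state |= 1; // bottom-left
      drawCellSegments(i * cell, j * cell, state);
    }
  }
}

// edge-midpoint segments for each of the 16 corner patterns;
// the two ambiguous saddle cases (5 and 10) are resolved arbitrarily
void drawCellSegments(float x, float y, int state) {
  float h = cell / 2.0;
  PVector top    = new PVector(x + h, y);
  PVector right  = new PVector(x + cell, y + h);
  PVector bottom = new PVector(x + h, y + cell);
  PVector left   = new PVector(x, y + h);
  switch (state) {
    case 1: case 14: seg(left, bottom);  break;
    case 2: case 13: seg(bottom, right); break;
    case 3: case 12: seg(left, right);   break;
    case 4: case 11: seg(top, right);    break;
    case 6: case 9:  seg(top, bottom);   break;
    case 7: case 8:  seg(left, top);     break;
    case 5:  seg(left, top); seg(bottom, right); break; // saddle
    case 10: seg(left, bottom); seg(top, right); break; // saddle
    // cases 0 and 15: cell entirely outside or inside, no contour
  }
}

void seg(PVector a, PVector b) {
  line(a.x, a.y, b.x, b.y);
}

A full implementation would interpolate the crossing point along each edge and chain the segments into closed paths, the kind of refinement a finger detector or a laser-cutting pipeline needs.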

In addition to the motivations mentioned in the class description above, I have another pet reason why I think this material matters. I hope this type of curriculum might be the start of something like an applied version of the New Aesthetic: a set of skills and a body of knowledge that can move us beyond simply goggling at the output of drone vision systems, poetic spambots, and digitally fabricated high heels, toward deeply understanding the cluster of technologies that produce them and, in turn, using that understanding to produce things of our own. There’s no way a single 7-week class can hope to make more than a small start at a project like that, but a start is what comes first.

Posted in Uncategorized | Leave a comment

Paperless Post Tech Talk

A couple of weeks ago I delivered the inaugural tech talk at Paperless Post. I was invited by Paperless Post CTO Aaron Quint who’s been a friend for a long while.

Aaron asked me to talk about my work with the Kinect and anything else that was on my mind. I took the opportunity to talk about two current projects. One of these, Makematics, I’d launched just that week but haven’t talked about much here. It’s a project dedicated to turning computer science research into tools for creative work. I have more to announce on that topic shortly, but you can read my introductory post in the meantime.

The second project isn’t quite finished, and this was the first time I’d talked about it publicly at all. It’s a design exploration into using faces as computer vision markers instead of abstract shapes. I call it You-R-Codes. I’ll have a more thorough presentation of it here soon, so consider this a sneak preview.

Thanks to everyone who came out to the talk. It was a big friendly crowd with lots of great questions and discussion afterwards. And, of course, thanks to Aaron and the other folks at Paperless Post for inviting me and treating me so well. It was a good time.

Here are the slides:

Posted in Uncategorized | Leave a comment

Object-Oriented Sci-Fi: Harman’s Four Methods

The following is an excerpt from a talk by Graham Harman at the “Hello Everything” symposium. In it, Harman describes four methods for reversing the common errors that keep us from seeing objects: counter-factuals, the hyperbolic method, simulation, and falsification. Each is an imaginative strategy for revealing the withdrawn core of objects, the aspect of them that makes them real for Harman’s Object-Oriented Ontology.

As philosophical techniques these four methods are quite striking. Together they constitute a kind of science fictional approach to philosophical thinking; each advocates imagining the world as different from reality in order to explore the limit and meaning of that reality.

I reproduce these methods here because I think they are promising ingredients in a recipe for something like an Object-Oriented Aesthetics or artistic methodology. Like much good SF I find them to be rich compost for my own imaginings, in this case of a set of procedures for generating multimedia art that inhabits an Object-Oriented perspective.

Here’s Harman:

"How do we reverse the error of seeing objects as events? We do that through counter-factuals. This is already a known method. You can imagine objects in different situations and imagine what the effects would be.[…]

"Imagining Lincoln in ancient Rome. How might he have played out there? Imagine a middle east with an Iranian atomic bomb or imagine an invaded Iraq instead. What are the possible things that would have happened in either of those cases. These help as allude to the thing as a style. Lincoln isn’t something that was confined to that historical period and that country but is something over and above that that could be translated.

"There are computers that do this. They take On Top of Old Smokey and turn it into a Bach fugue.

"Counter-factuals would be the first method for getting at the reality of things. The second would be what I call hyperbolic analysis, which I’ve used in three publications. This is reversing the error of impact. This is reversing the tendency to see things in terms of the effects they have. Instead of critique, also. I did this in the article on deLanda; I did this in the book on Latour; and I did this in the book on Meillassoux that hasn’t been published yet.

"In order to look at the impact of these philosophers what I did is not critique mistakes that they’ve made, but imagine that they have total success. Imagine that they become the dominant philosopher on the planet 20, 30 years from now. And then you imagine what would still be missing. What would still be missing if Meillassoux was the dominant world philosopher in 2050. Don’t fuss around with detailed mistakes that he makes but grant him everything and then see what’s still missing.

"If a philosophy can not survive the hyperbolic test then its less of a real philosophy, I would say. If you take some perfectly respectable minor article about some detailed point and then try to imagine that this is the most important philosophical text of the 21st century it can’t survive that test, obviously. It needs to be a work of a certain level, a certain comprehensiveness and that’s a more real philosophy. The more it can pass that sort of imaginative test the more real it is.

"The other two are a little harder. What we’re trying to do is talk about the mutual independence of a thing and its pieces where the thing is not reducible to its pieces and the pieces are not reducible to the thing. And we actually do this all the time: we call this simulation – where you’re removing a thing from its pieces and simply trying to treat it as a formal model. You’re testing the behavior of a tornado or the 1976 Cincinnati Reds – drawing on my sports writing career – without having to reassemble all the physical pieces that made them those things, of course. You’re simply testing them to see what will happen.

"And what I’ve realized while thinking about this is that paradoxically a thing is more real the more it can be simulated, the more it can be parodied. You can parody good poet better than bad ones, can’t you? If imitation is the sincerest form of flattery then simulation and parody are an even more sincere form. The less real something is the harder it is to simulate. It’s harder to simulate a bad writer, a bad philosopher than a good one.

"In other words the style of a thing is not just an aggregate of all of the deeds it has done. The style of a thing is something over and above those that can be simulated. And so here I would say, against some Luddite principles, if there were truly a computer that was able to write new Shakespeare plays I think that would be outstanding. I think this would be a tribute to Shakespeare, not some kind of cheapening of his greatness. It would show that the style there is perhaps something more real than the mass of works that one person wrote.

"And that leaves one last feature of pseudo-objects which is reducing them to sets, reducing them to pointing at an extensive number of things and saying that’s just a set it’s not a real thing with a unifying principle. We already saw that Rilke or earthquakes are substantial forms independent of their material components that can be removed and put on a computer and generate effects. What about the reverse? Is there a reverse situation where we can show those material components are real beneath all simulation?

"Actually yes. The answer to this is accidents: when things happen that weren’t expected. In what sense are accidents a method? Well, all the time. This is what falsification is about in science. You’re finding accidental things that happen to a theory that weren’t expected, things that point to the independence of the material components from the model that you had of them. So that would be the forth method to use.

“So now there are four methods to use: counter-factuals, the hyperbolic method, simulation, and falsification. And you could say that the humanities tend to benefit more from the first two and the sciences from the latter two, but that’s not necessarily the case. There are significant exceptions. And what this suggests to me is that if this way of setting out the different methods is valid, the division between the human and natural sciences is actually an imperfect approximation of the real fissure running through human knowledge, which has to do with the kind of knowledge that shows the independence of a thing from its pieces and the kind that shows its difference from its outer effects, neither of which is strictly identifiable with either the sciences or the humanities.”

Posted in Opinion | 1 Comment

AI Unbundled

Shaky (1966-1972), Stanford Research Institute’s mobile AI platform, and the Google Street View car.

The project of Artificial Intelligence has undergone a radical unbundling. Many of its sub-disciplines, such as computer vision, machine learning, and natural language processing, have become real technologies that permeate our world. However, the overall metaphor of an artificial human-like intelligence has failed. We are currently struggling to replace that metaphor with new ways of understanding these technologies as they are actually deployed.

At the end of the 1966 spring term, Seymour Papert, a professor in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), initiated the Summer Vision Project, “an attempt to use our summer workers effectively in the construction of a significant part of a visual system” for computers. The problems Papert expected his students to overcome included “pattern recognition”, “figure-ground analysis”, “region description”, and “object identification”.

Papert had assigned a group of graduate students the task of solving computer vision as a summer homework assignment. He thought computer vision would make a good summer project because, unlike many other problems in the field of AI, “it can be segmented into sub-problems which allow individuals to work independently”. In other words, unlike “general intelligence”, “machine creativity”, and the other high-level problems in the AI program, computer vision seemed tractable.

Forty-five years later, computer vision is a major sub-discipline of computer science with dozens of journals, hundreds of active researchers, and thousands of published papers. It’s a field that’s made substantial breakthroughs, particularly in the last few years. Many of its results are actively deployed in products you encounter every day, from Facebook’s face tagging to the Microsoft Kinect. But I doubt any of today’s researchers would call any of the problems Papert set for his grad students ‘solved’.

Papert and his graduate students were part of an Artificial Intelligence group within CSAIL led by John McCarthy and Marvin Minsky. McCarthy defined the group’s mission as “getting a computer to do things which, when done by people, are said to involve intelligence”. In practice, they translated this goal into a set of computer science disciplines such as computer vision, natural language processing, machine learning, document search, text analysis, and robotic navigation and manipulation.

Over the last generation, each of these disciplines underwent an arc of development similar to computer vision’s: slow, painstaking progress for decades, punctuated by rapid growth sometime in the last twenty years, resulting in increasingly practical adoption and acculturation. However, as they developed they showed no tendency to become more like McCarthy and Minsky’s vision of AI. Instead they accumulated conventional human and cultural uses. Shaky became the Google Street View car and begat 9-eyes. The semantic web became Twitter and Facebook and begat @dogsdoingthings. Machine learning became Bayesian spam filtering and begat Flarf poetry.

Now, looking back on them as mature disciplines, there’s little of their AI parentage to be seen in these fields. None of them seems to be on the verge of some Singularitarian breakthrough. Each of them is part of an ongoing historical process of technical and cultural co-evolution. Certainly these fields’ cultural and technological development overlaps and relates, and there’s a growing sense of them as some kind of new cultural zeitgeist, but, as Bruce Sterling has said, AI feels like “a bad metaphor” for them as a whole. While these technologies had their birth in the AI project, the signature themes of AI — “planning”, “general intelligence”, “machine creativity”, etc. — don’t do much to describe the way we experience them in their daily deployment.

What we need now is a new set of mental models and design procedures that address these technologies as they actually exist. We need a way to think of them as real objects that shape our world (in both its social and inanimate components) rather than as incomplete predecessors to some always-receding AI vision.

We should see Shaky (and its cousin, the SAIL cart, shown here) not as the predecessor to the Terminator but to Google’s self-driving car.

Terminator mouth analysis

Rather than personifying these seeing-machines, embodying them as big burly Republican governor-types, we should try to imagine how they’ll change our roads, both for blind people like Steve Mahan here and for all of the street signs, concrete embankments, orange traffic cones, and overpasses out there.

As I’ve written elsewhere, I believe the New Aesthetic is the first rumblings of us beginning to do just this: to think through these new technologies outside of their AI framing, with close attention to their impact on other objects as well as on ourselves. Projects like Adam Harvey’s CV Dazzle are replacing the AI understanding of computer vision embodied by the Terminator HUD with one based on the actual internal processes of face detection algorithms.
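To make “actual internal processes” concrete, here’s a self-contained Processing sketch of the core machinery inside a Viola-Jones-style detector: an integral image plus a single Haar-like rectangle feature (an eye band darker than the cheek band below it) scanned across sliding windows. The hand-picked feature, the tuned threshold, and the face.jpg test image are all illustrative assumptions on my part; a real detector cascades thousands of machine-learned features at many scales, which is exactly the structure CV Dazzle’s hair and makeup patterns are designed to defeat:

// The guts of a Viola-Jones-style detector: an integral image plus
// one Haar-like rectangle feature scanned over sliding windows.
// A real detector cascades thousands of learned features at many
// scales; this single hand-picked feature is illustrative only.
PImage img;
float[][] integral;

void setup() {
  size(400, 300);
  img = loadImage("face.jpg"); // any test image in the sketch's data folder
  img.resize(width, height);
  integral = buildIntegral(img);
  noLoop();
}

void draw() {
  image(img, 0, 0);
  noFill();
  stroke(255, 0, 0);
  int w = 60; // size of the sliding window
  for (int y = 0; y + w < height; y += 10) {
    for (int x = 0; x + w < width; x += 10) {
      // two-rectangle feature: on a face, the eye band tends to be
      // darker than the cheek band directly below it
      float eyes   = rectSum(x, y + w/4, w, w/4);
      float cheeks = rectSum(x, y + w/2, w, w/4);
      if (cheeks - eyes > w * w * 10) { // hand-tuned threshold
        rect(x, y, w, w); // mark windows where the feature fires
      }
    }
  }
}

// integral[y][x] holds the sum of all brightness values above and to
// the left of (x, y), so any rectangle's sum costs just four lookups
float[][] buildIntegral(PImage im) {
  im.loadPixels();
  float[][] s = new float[im.height + 1][im.width + 1];
  for (int y = 0; y < im.height; y++) {
    for (int x = 0; x < im.width; x++) {
      float b = brightness(im.pixels[y * im.width + x]);
      s[y+1][x+1] = b + s[y][x+1] + s[y+1][x] - s[y][x];
    }
  }
  return s;
}

float rectSum(int x, int y, int w, int h) {
  return integral[y+h][x+w] - integral[y][x+w] - integral[y+h][x] + integral[y][x];
}

Seen at this level, a face detector is rectangle arithmetic over brightness sums, not a machine that “recognizes” you, and that shift in understanding is the one I’m describing.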

Rather than trying to imagine how computers will eventually think, we’ve started to examine how they currently compute. The “Clink. Clank. Think.” of the famous Time Magazine cover of IBM’s Thomas Watson is becoming “Sensor. Pixel. Print.”

Posted in Art | Leave a comment