All Watched Over: On FOO, Cybernetics, and Big Data

Last weekend I had the privilege of attending FOO Camp. FOO is a loosely structured conference organized by O’Reilly, my publisher. At FOO, O’Reilly brings together a couple hundred people they think have an interesting perspective on contemporary goings-on in the world of technology. Having all these people together gives O’Reilly a chance to take the pulse of the contemporary tech world and to create relationships with some of the people shaping that world. Being asked to attend is a privilege both because the invitation means that O’Reilly thinks you’re interesting enough to contribute and because actually attending means you get to share this perspective — you get your own chance to see what the big trends are across the industry.

From my perspective, the central theme of FOO this year was: big data will save us. There were a bunch of participant-organized sessions about working with data (Big Data Best Practices, The Unreasonable Effectiveness of Data, Towards A Non-Creepy People Database, even Data-Driven Parenting). One of the sub-themes was using data for social and political good. There were a number of participants from Code For America, and “…for America!” became a kind of running gag, a suffix you could append to any technology or idea to make it fit the theme of the conference. Further, a striking number of the participants I met worked with data at web companies big and small, including Google and LinkedIn (interestingly, no one I met was there from Twitter). “Data scientist” was a relatively common job title.

Overall, there seemed to be a pervasive worldview that, stated reductively, might be expressed thusly: Now, with so much of human behavior taking place over the web, mobile devices, and other information-producing systems, we are collecting so much data that the only rational way of approaching most decision-making is through rigorous data analysis. And through the kind of thorough data analysis made possible by our new massive cloud-computing resources, we can finally break through the inherent irrationalities and subjectivities built into our individual observations, mental models, worldviews, and ideologies, and arrive at a new, more objective, data-driven representation of the world that can improve and rationalize our decision-making.

I’m intentionally stating this idea more strongly and starkly than any individual FOO participant would ever have done in an actual session. These are incredibly smart people who live in the midst of the subtle distinctions and limitations that come up in practice when working on these kinds of problems in real life. By stating the underlying worldview this way, I’m not trying to create a straw man, but just to demonstrate the striking irony of these ideas emerging as dominant in this particular community and in this particular setting. The more I saw this “big data will save us” theme emerge, the more jarring it felt in contrast with the structure of FOO itself and, in many ways, O’Reilly’s philosophy as a company.

O’Reilly’s company slogan is: “changing the world by spreading the knowledge of innovators”. And FOO Camp itself is a perfect example of the company’s approach to achieving this goal. O’Reilly operates through a kind of personal networked social intelligence. They identify early adopters, create relationships with them, introduce them to each other, find out what they’re working on and what they’re interested in, and then use that knowledge to make informed guesses about where the tech world is going. Nearly all of these activities happen in person. All the books they publish, conferences they put on, and blogs they run are epiphenomena of this underlying process of personal relationship-building and hunch creation.

The key thing about this process is how human it is. O’Reilly’s process relies almost exclusively on human traits that aren’t represented in data or reproduced in a model: the trust between two peers that allows them to talk about a crazy idea that one of them is thinking about taking on outside of work, the ability to tell who’s highly respected in a field by the tone of voice people use when mentioning a name, the gut instinct of an experienced industry visionary for what will happen next, etc.

So, one question, then, is what would O’Reilly look like if they reinvented themselves as a Big Data company? Given all the resources of Big Data and the computation to crunch it how would you detect and spread the knowledge of innovators? How would you use data to attack the problem of identifying, tracking, predicting and collaborating with the early adopters and big thinkers that drive technological change?

I can think of a couple of notable attempts to do just this, but neither of them are exactly Big Data’s biggest triumphs. At the height of the blog era Technorati and other aggregators tried to automate the processes of bringing together this kind of knowledge by tracking blogs. And today Twitter Trends (along with a half-jillion Twitter analytics startups) does something similar. But neither of these seems to be any real threat to O’Reilly.

But that doesn’t mean that there isn’t a good idea out there somewhere to do just that. And if someone came up with a data-driven way to automate and beat O’Reilly’s human-centric process it seems like there’d be quite a lot of money in it — O’Reilly’s estate in Sebastopol is really quite nice.

All Watched Over by Machines of Loving Grace

There were two sessions at FOO that addressed this contrast between Big Data and Personal Knowledge head-on, attempting to put them into historical and theoretical context. The first one, organized by Matt Jones from Berg London, was a screening of All Watched Over by Machines of Loving Grace by Adam Curtis. Specifically, we watched Episode Two of this BBC documentary series, The Use and Abuse of Vegetational Concepts.

All Watched Over by Machines of Loving Grace is a three-part documentary film/polemical essay about the relationship between humans and computers. Episode Two looks at the history of cybernetics, how it arose out of developments in computer science and ecosystems theory, and how it came to shape much contemporary thinking about computers, the web, and web-mediated culture.

Cybernetics is the discipline of modeling the world as a series of interlocking regulatory systems. It was developed by Norbert Wiener, Jay Forrester, and other engineers building massively complicated undertakings during and after World War Two, such as the SAGE air-defense system. These systems were so complex that their designers had to move beyond the simple determinative rules normally enforced by computer code. Instead, they developed and embedded into their projects sophisticated models of the world. In order to do this, they developed a symbolic language for describing how various parts of the world interact as components in a system, influencing and altering each other in the process in order to produce various steady states.
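That loop of sensing, comparison, and correction — the heart of the cybernetic model — is easy to sketch. Here is a toy negative-feedback loop in Python (a hypothetical thermostat with invented constants, nothing drawn from the actual SAGE designs):

```python
def simulate_thermostat(target=20.0, temp=5.0, gain=0.1, leak=0.02, steps=500):
    """Toy negative-feedback loop: a heater pushes the temperature toward a
    goal in proportion to the error, while heat constantly leaks away.
    The system settles into a steady state."""
    for _ in range(steps):
        error = target - temp          # sensor: compare state to goal
        heater = gain * error          # controller: act on the error
        temp += heater - leak * temp   # environment: integrate the result
    return temp

print(round(simulate_thermostat(), 2))  # → 16.67
```

Note that the steady state (16.67°) sits below the 20° target — the equilibrium where heating exactly balances leakage — a small reminder that even the simplest regulatory model has behavior its goals alone don’t predict.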

This approach to design was known as cybernetics or systems theory. It gave birth to much of the last half century of ideas in computer and software design from object-oriented programming to HTTP. However, the cyberneticists didn’t stop with designing computer systems. They applied their systems modeling approach to everything from architecture to ecosystems to global economics.

Jay Forrester went on from his work on SAGE to produce World3, a comprehensive computer simulation of the entire economy and environment of the earth. In the 70s, World3 predicted economic and societal collapse for the world from pollution and overcrowding and became the basis for the controversial Club of Rome study, The Limits to Growth. The World3 model included representations of people, technology, government, economic resources, nature, and, most importantly, all the interconnections between them. It attempted to model the entire world just as the SAGE software had attempted to model the relationship between the radar tracks of enemy aircraft and the anti-aircraft guns under its control.
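World3’s actual equations are far more elaborate, but the stock-and-flow style of modeling it used can be caricatured in a few lines — two made-up stocks, population and a finite resource, with invented rate constants:

```python
def run_model(population=1.0, resources=100.0, years=400,
              birth_rate=0.05, depletion=0.02):
    """Two-stock caricature of a World3-style system-dynamics model:
    population grows by drawing on a finite resource, and growth turns
    negative once the resource is sufficiently depleted."""
    history = []
    for _ in range(years):
        growth = birth_rate * population * (resources / 100.0)  # resource-limited growth
        consumed = depletion * population                       # consumption depletes the stock
        population = max(population + growth - consumed, 0.0)
        resources = max(resources - consumed, 0.0)
        history.append((population, resources))
    return history

history = run_model()
peak = max(range(len(history)), key=lambda y: history[y][0])
print(f"population peaks in year {peak}, then declines")
```

Even this cartoon reproduces the qualitative shape of the Limits to Growth scenarios — overshoot followed by decline — which is both the appeal of such models and, as Curtis argues below, their danger: the shape of the output is baked into the shape of the interconnections you chose.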

Jay Forrester with his model for the entire global economic and environmental system.

One of the keys to supporting these models was collecting a huge amount of data. For example, ecologist George Van Dyne attempted to build a comprehensive computer model of the Colorado grasslands ecosystem. He hired graduate students to watch and record every bite of food taken by every antelope on the slopes. He even went so far as to cut holes in the sides of living bison so his staff could look inside the bison’s stomachs to examine their daily diet.

Colorado bison with an open API.

As a polemic, Curtis’s film does more than present this history in a neutral manner. He constructs a critique of cybernetics. He argues that this emphasis on building ever-more accurate models of the world — and, especially, automating their results through the supposedly objective computer — represses any idea of individual agency to change the system while simultaneously causing us to project a false agency onto the system itself. In other words, Curtis focuses on cybernetics’ conservative political repercussions. In his account, this faith in the technologically augmented system model becomes a reason to defend the status quo.

In some ways, showing this film at FOO was an epic act of trolling on Matt Jones’s part. Cybernetics was the dominant philosophy of the 60s and 70s techno-counterculture within which O’Reilly arose. And much of how O’Reilly thinks and talks shows this influence clearly. Showing this film at an elite pow-wow at O’Reilly’s Sebastopol headquarters is a bit like screening Michael Moore’s Roger & Me at the annual GM shareholders meeting.

Curtis’s critique of cybernetics also served as an implicit critique of the “big data will save us” thematics of this year’s FOO. After watching the film, it was hard not to think of the data scientists as similar to those guys out in Colorado elbow-deep in bison trying to make their models add up.

This critique became even more explicit at the second Big Data vs. Personal Knowledge session of FOO: “Welcome to the Anthropocene”. Hosted by Matt Jones (again), Ben Cerveny (of Bloom), and Matt Biddulph (lately of Nokia), this fun, sprawling session was much harder to pigeonhole than the All Watched Over screening, especially because of its deceptive title, which, as far as I could tell, had only the most oblique relationship to the subject matter.

The subject matter was the coming return of predictive interfaces. The most famous (and despised) predictive interface is probably Clippy:

Clippy was a Microsoft Office feature that attempted to predict the user’s intention based on his preceding actions in order to jump in and help. The panel imagined a new Augmented Reality version of Clippy, “Reality Clippy”, that would use all of the available online data about your preferences and past actions (all your Foursquare checkins, Yelp and Amazon reviews, credit card purchases, Tweets, etc.) to suggest next actions you could take while moving about the city. As Cal Henderson reductio ad absurdum-ed it at one point: maybe we could get the interface down to an iPhone app that would superimpose a bright white line over the camera’s view of the surrounding street just telling us where to walk and what to do and buy all day long. Wouldn’t that be a bit of a relief?
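None of the panelists sketched an implementation, but the core of such a predictor is just pattern-counting over a log of past behavior. Here is a deliberately minimal sketch — the action names and helper functions are all invented for illustration, not anything any panelist proposed:

```python
from collections import Counter, defaultdict

def train(history):
    """Count which action tends to follow which in a log of past actions."""
    follows = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        follows[prev][nxt] += 1
    return follows

def suggest(follows, current):
    """'Reality Clippy': suggest whatever action most often followed this one."""
    if current not in follows:
        return None
    return follows[current].most_common(1)[0][0]

# A hypothetical trail of checkins:
log = ["coffee", "office", "lunch", "office", "coffee", "office", "gym"]
model = train(log)
print(suggest(model, "coffee"))  # → office
```

The unsettling part, of course, isn’t the counting — it’s the bright white line: the moment the suggestion stops being a statistic about your past and starts standing in for your future.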

Ben Cerveny described the instinct towards wanting to build these kinds of interfaces, and generally to so thoroughly augment our decision making with data, as a desire to “live in the model”. In other words, this new belief in Big Data takes the cybernetic vision one step further. Rather than simply building a comprehensive model of the world inside the computer so that we can make predictions and plans about it, the next step this time around is to actually try to live our lives inside this model, to superimpose it on the world in front of our eyes and place it as an intermediary in all of our online social interactions.

On my way home from FOO I sat staring out the car window, all of these impressions, ideas, and seeming contradictions bouncing around in my head. And then something occurred to me. O’Reilly’s human-centered approach is still a kind of systems thinking. O’Reilly is still building a model of what the geek world is working on. They’re just doing it through the social relationships that their employees form with other geeks. The “data” they gather is stored in their employees’ heads and hearts and in those of the wider community of geeks they bring to events like FOO. Instead of trying to live in the model, O’Reilly tries to live in the community.

As Matt Jones said in a kind of punchline to the Anthropocene session: “The map-reduce is not the territory.” But the community just might be.


Responses to All Watched Over: On FOO, Cybernetics, and Big Data

  1. Matt Edgar says:

    Thanks for this brilliant write-up, Greg. I also had a great time at Foo Camp and wondered about similar contradictions. The violated bison is instructive, but different, I think. Delving inside its stomach one day (we can presuppose) has little impact on what it chooses to eat the next. What’s potentially troubling about reality clippy is that our own and other people’s observable past behaviours – and the data scientists’ interpretations of them – come to shape our future options and choices. I’d call it a feedback loop, except that I still have that bison picture in my head 🙂 Matt

  2. Hi,

    Great article 🙂

    It strikes me that there is an interesting balance between the process of incremental improvements in what we know and the attempt to jump to a different perspective to get a new insight.

    The big data systems at present appear to me to be rather better at the former than the latter – i.e. they use past and present personal and peer performance to make proposals and projections (way too many ps)

    Where they are poorer appears to be in bringing in the jumps in perspective, the dislocations and lateral thinking insights. Perhaps they overly encourage confirmation bias.

    So the question I would have asked had I been at Foo Camp would have been “Are there ways that big data can help people make those leaps and encounter some of the ‘unknown unknowns'”

    My hunch is that the answer might come down to whether you believe in free will or rigid determinism – i.e. the “is there anything more to the world than axioms and logical inference”….

    Anyway… thanks for a really interesting article.


  3. Craig Hunt says:

    Brautigan lives!

  4. jkd says:

    Agreed entirely. And something that I think is both shocking and disappointing (but for predictable reasons) is the biggest missed opportunity in contemporary tech practice and research. Basically – the systems as currently constructed enable fairly easy collection of Big Data, but actually figuring out context (i.e., human meaning in actions) is hard. And expensive. So mostly, it’s not happening, and behavioral perceptual data relating to the human side of technology use is simply lost in the churn of interface change and the passage of time. But it’s easy to be lazy and just collect click-data, so that’s what happens. Looking back ten or twenty years from now, I think we’ll be kicking ourselves over all the data we didn’t collect, and questions we weren’t asking.

  5. Thanks for this insightful and enjoyable post, Greg.

Across many fields, and particularly in technology, data is often synonymous with quantifiable data. If it fits in a spreadsheet, it’s data. If it’s indexable on Google, it’s data. If it’s a Tweet that raises (or lowers) my Klout score, it’s data.

    You eloquently point out the irony of data-related discourse in Silicon Valley: the very same people who wax poetic about cybernetics, Hadoop, and “… for America” inhabit an industry and culture that thrives upon nuanced personal relationships, off-the-record conversations, and subtle indicators of status and style that find only marginal correlation with measurable, web-based indicators.

    It would seem that there is a place for both measurable and unmeasurable data. Perhaps the trend of this decade will be the degree to which they begin to interoperate — say, as Twitter conversations, FourSquare check-ins, and various implementations of Augmented Reality.

  6. Dr William Hayward says:

    Nice conversational flow, history and insight. It’s easy to say AI has never realized its promises. Will we know if large data has the same (semi)fail?

  7. Barbara Saunders says:

    “All Watched Over by Machines of Loving Grace” is a poem by Richard Brautigan. It presents a vaguely dystopian prospect: the realization of the machine world where humans return to “a state of nature” – just one kind of mammal – while machines “watch over.”

  8. Alex says:

    maybe we could get the interface down to an iPhone app that would superimpose a bright white line over the camera’s view of the surrounding street just telling us where to walk and what to do and buy all day long. Wouldn’t that be a bit of a relief?

    You mean joining the Children of the Magenta Line?

