“Case and Molly” is a prototype for a game inspired by William Gibson’s Neuromancer. It’s about the coordination between the virtual and the physical, between “cyberspace” and “meat”.
Neuromancer presents a future containing two aesthetically opposed technological visions. The first is represented by Case, the cyberspace jockey hooked on navigating the abstract information spaces of the Matrix. The second is embodied by Molly, an augmented mercenary who uses physical prowess and weaponized body modifications to hurt people and break into places.
In order to execute the heists that make up most of Neuromancer’s plot, Case and Molly must work together, coordinating his digital intrusions with her physical breaking and entering. During these heists they are limited to an extremely asymmetrical form of communication. Case can access Molly’s complete sensorium, but can only communicate a single bit of information to her.
On reading Neuromancer today, this dynamic feels all too familiar. We constantly navigate the tension between the physical and the digital in a state of continuous partial attention. We try to walk down the street while sending text messages or looking up GPS directions. We mix focused work with a stream of instant message and social media conversations. We dive into the sudden and remote intimacy of seeing a family member’s face appear on FaceTime or Google Hangout.
(Note: “Case and Molly” is not a commercial project. It is a game design meant to explore a set of interaction ideas. It was produced as a project for an MIT Media Lab class Science Fiction to Science Fabrication. The code is available under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 license for educational purposes. Please do not use it to violate William Gibson’s intellectual property.)
“Case and Molly” uses the mechanics and aesthetics of Neuromancer’s account of cyberspace/meatspace coordination to explore this dynamic. It’s a game for two people: “Case” and “Molly”. Together and under time pressure they must navigate Molly through a physical space using information that is only available to Case. Case can see Molly’s point of view in 3D but can only communicate to her by flipping a single bit: a screen that’s either red or green.
Case is embedded in today’s best equivalent of Gibsonian cyberspace: an Oculus Rift VR unit. He oscillates between seeing Molly’s point of view and playing an abstract geometric puzzle game.
Molly carries today’s version of a mobile “SimStim unit” for broadcasting her point of view and “a readout chipped into her optic nerve”: three smartphones. Two of the phones act as a pair of stereo cameras, streaming her point of view back to Case in 3D. The third phone (not shown here) substitutes for her heads-up display, showing the game clock and a single bit of information from Case.
The game proceeds in alternating turns. During a Molly turn, Case sees Molly’s point of view in 3D, overlaid with a series of turn-by-turn instructions for where she needs to go. He can toggle the color of her “readout” display between red and green by clicking the mouse, and he can hear her voice. Molly then has 30 seconds to advance as far as possible, prompting Case for single bits of direction over the voice connection. Before the 30-second period ends, Molly has to stop at a safe point and prompt Case to type in the number of a room along the way. If time runs out before Case enters a room number, they lose. When Case enters a room number, Molly stays put and a Case turn begins.
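The turn structure above can be sketched as a small state machine. The actual game implements this in Unity; the JavaScript below is purely illustrative, and all names in it are hypothetical.

```javascript
// Illustrative sketch of the Molly-turn logic. The real game runs in
// Unity; these function and field names are hypothetical.
const MOLLY_TURN_SECONDS = 30;

function startMollyTurn(game) {
  game.mode = "molly";
  game.state = "playing";
  game.deadline = Date.now() + MOLLY_TURN_SECONDS * 1000;
}

// Case's only channel back to Molly: flipping her readout's color.
function toggleReadout(game) {
  game.readout = game.readout === "red" ? "green" : "red";
}

// Case types the room number where Molly stopped. If the 30 seconds
// have already elapsed, the game is lost; otherwise play hands off
// to a Case turn.
function enterRoom(game, roomNumber) {
  if (Date.now() > game.deadline) {
    game.state = "lost";
  } else {
    game.room = roomNumber;
    game.mode = "case";
  }
  return game.state;
}
```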
During his turn, Case is thrust into an abstract informational puzzle that stands in for the world of Cyberspace. In this prototype, the puzzle consists of a series of cubes arranged in 3D space. When clicked, each cube blinks a fixed number of times. Case’s task is to sort the cubes by the number of blinks within 60 seconds. He can cycle through them and look around by turning his head. If he completes the puzzle within 60 seconds they return to a Molly turn and continue towards the objective. If not, they lose.
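The puzzle’s win condition reduces to checking that the cubes sit in ascending blink order. Again a minimal JavaScript sketch with hypothetical names (the real puzzle runs inside Unity):

```javascript
// Each cube blinks a fixed number of times when clicked; Case wins by
// arranging the cubes in ascending blink order before the clock runs out.
function isSorted(cubes) {
  return cubes.every((cube, i) => i === 0 || cubes[i - 1].blinks <= cube.blinks);
}

// Swap two cubes in the current arrangement.
function swapCubes(cubes, i, j) {
  [cubes[i], cubes[j]] = [cubes[j], cubes[i]];
}
```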
At the top of this post is a video showing a run where Case and Molly make it through a Molly turn and a Case turn before failing on the second Molly turn.
Play Testing: Similarities to and Differences from Neuromancer
In play testing the game and prototyping its constituent technology I found ways in which the experience resonated with Gibson’s account and others in which it radically diverged.
One of the strongest resonances was the disorientation of being thrust into someone else’s point of view in virtual reality. In Neuromancer, Gibson describes Case’s first experience of “switching” into Molly’s subjective experience, as broadcast by a newly installed “SimStim” unit:
The abrupt jolt into other flesh. Matrix gone, a wave of sound and color…For a few frightened seconds he fought helplessly to control her body. Then he willed himself into passivity, became the passenger behind her eyes.
This dual description of sensory richness and panicked helplessness closely matches what it feels like to see someone else’s point of view in 3D. In Molly mode, the game takes the view from each of two iPhones aligned into a stereo pair and streams them into each eye of the Oculus Rift. The resulting 3D illusion is surprisingly effective. When I first got it working, I had a lab mate carry the pair of iPhones around, placing me into different points of view. I found myself gripping the arms of my chair, white-knuckled, as he flew the camera over furniture and through obstacles around the room. In conventional VR applications, the Oculus works by head tracking: the motions of your head control the direction of a pair of cameras within the virtual scene. Losing that control, having your head turned for you, and having your actual head movements do nothing is extremely disorienting.
Gibson also describes the intimacy of this kind of link, as in this exchange where Molly speaks aloud to Case while he rides along with her sensorium:
“How you doing, Case?” He heard the words and felt her form them. She slid a hand into her jacket, a fingertip circling a nipple under warm silk. The sensation made him catch his breath. She laughed. But the link was one-way. He had no way to reply.
While it’s not nearly as intimate as touch, the audio that streamed from “Molly”’s phone rig to “Case” in the game provided an echo of this same experience. Since Molly holds the phones closely and moves through a crowded public space, she speaks in a whisper, which stays close in Case’s ears even as she moves ever further away in space.
Even in simpler forms, this Case-Molly coordination can be interesting. Here’s a video from an early prototype where we try to coordinate the selection of a book using only the live camera feed and the single red/green bit.
One major aspect of the experience that diverged from Gibson’s vision is the experience of “cyberspace”. The essence of this classic idea is that visualizing complex data in immersive graphical form makes it easier to navigate. Here’s Gibson’s classic definition:
Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators…A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding…
Throughout Neuromancer, Gibson emphasizes the fluency achieved by Case and other cyberspace jockeys, the flow state enabled by their spatial navigation of the Matrix. Here’s a passage from the first heist:
He flipped back. His program had reached the fifth gate. He watched as his icebreaker strobed and shifted in front of him, only faintly aware of his hands playing across the deck, making minor adjustments. Translucent planes of color shuffled like a trick deck. Take a card, he thought, any card. The gate blurred past. He laughed. The Sense/Net ice had accepted his entry as a routine transfer from the consortium’s Los Angeles complex. He was inside.
My experience of playing as Case in the game could not have been more opposed to this. Rather than a smooth flow state, the virtual reality interface and the rapid switches to and from Molly’s POV left me cognitively overwhelmed. The first time we successfully completed a Molly turn, I found I couldn’t solve the puzzle because I’d essentially lost the ability to count. Even though I’d designed the puzzle and played it dozens of times in the course of implementing it, I failed because I couldn’t stay focused enough to retain the number of blinks of each cube and where they should fit in the sorting. This effect was way worse than the common distractions of email, Twitter, texts, and IM many of us live with in today’s real computing environments.
Further, using a mouse and a keyboard while wearing a VR helmet is surprisingly challenging itself. Even though I am a very experienced touch-typist and am quite confident using a computer trackpad, I found that when presented with contradictory information about what was in front of me by the VR display, I struggled with basic input tasks like keeping my fingers on the right keys and mousing confidently.
Here you can see a video of an early run where I lost the Case puzzle because of these difficulties:
Lacking an Ono-Sendai and a mobile SimStim unit, I built this Case and Molly prototype with a hodgepodge of contemporary technologies. Airbeam Pro was essential for the video streaming. I ran its iOS app on both iPhones, turning each one into an IP camera, then ran its desktop client to capture the feeds from both cameras and publish them to Syphon, an amazingly useful OS X utility for sharing GPU memory across multiple applications for synced real-time graphics. I then used Syphon’s plugin for the Unity3D game engine to display the video feeds inside the game.
I built the game logic for both the Case and Molly modes in Unity using the standard Oculus Rift integration plugin. The only clever element was placing the Plane displaying each camera’s Syphon texture into its own Layer within Unity, so that each eye’s camera could be set not to render the layer belonging to the other eye.
To communicate back from Case to Molly, I used the websocket-sharp plugin for Unity to send messages to a Node.js server running on Nodejitsu, the only Node host I could find that supported raw websockets rather than just socket.io. My Node app then broadcasts JSON with the button state (i.e. whether Case is sending a “red” or “green” message) as well as the game clock to a static web page on a third phone, which Molly also carries.
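Stripped of the server plumbing, the relay logic is simple: hold the latest button state and game clock, serialize them as JSON, and push that payload to every connected readout page, which flips its background color accordingly. A dependency-free JavaScript sketch, with hypothetical names (the actual server and page code may differ):

```javascript
// Serialize the current button state ("red"/"green") and clock as the
// JSON payload the readout page receives.
function makePayload(button, clockSeconds) {
  return JSON.stringify({ button, clock: clockSeconds });
}

// Push the payload to every connected client (anything with a send()).
function broadcast(clients, button, clockSeconds) {
  const msg = makePayload(button, clockSeconds);
  for (const client of clients) client.send(msg);
  return msg;
}

// On Molly's readout page, a message handler flips the page color and
// updates the clock text.
function applyMessage(json, page) {
  const { button, clock } = JSON.parse(json);
  page.backgroundColor = button; // "red" or "green"
  page.clockText = String(clock);
  return page;
}
```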
The code for all of this can be found here: case-and-molly on Github.
Special thanks to Kate Tibbetts for playing Molly to my Case throughout the building of this game and for her endlessly useful feedback.