Best Seat in the House: Being Every Drone
“Recently I joined some drone hobbyists who meet in a nearby park on Sundays to race their small quadcopters. The only way to fly drones at this speed is to get inside them. The hobbyists mount tiny eyes at the front of their drones and wear VR goggles to peer through them for what is called a first-person view. They are now the drone. As a visitor I don an extra set of goggles that piggyback on their camera signals and see what each pilot sees. One young guy who’s been flying radio control model airplanes since he was a boy said that being able to immerse himself into the drone and fly from inside was the most sensual experience of his life. There was no virtuality. The flying experience was real.”
– Kevin Kelly, The Inevitable, p. 227
I am whatever body I am in. When I have no body, I am nobody. When I’m asleep and dreamless, I am empty, null and void, my mind and self the still awareness resting timelessly without an object. (If you watch this in a body scanner, you will notice that the neural correlates of selfhood in my brain do not light up – so there is always a material perspective, even if we only glimpse it dimly.) When I’m awake, I’m usually a person – living through and emphasizing, out of all the possibilities, the ones created by a gravity-bound social mammal that can grab things with its hands.
That is, my body is a virtual reality created by the constellation of what interests and perceptions are declared important by the history of my body and its ancestors: I notice and remember faces (or things that look like faces); I assign intended meaning to the deeds of other actors in my culture; I treat the objects that I hold as if they are extensions of my hands.
My human body as a virtual reality is plastic. It adjusts to observations I hear others make about it. It transforms according to my mood. I only notice that this happens when I’m more-than-usually aware; when I was growing up, I took for granted that I was “this” or “that,” and that this body simply “is what it is.” Then I learned that all of my potentials and my thoughts were molded by this ever-shifting image that reacts to how I am experienced by others in a group, or to which sub-personalities are at “the helm.” Dynamic patterns, never quite repeating, but remarkably consistent, compete to be the “me” at any given moment and are stitched together in my memory to appear as if they’re all the same identity. They’re not.
This plastic body image can be hacked. Perhaps the simplest way is, if you have a camera and TV, to point the camera at the back of your own head from maybe six feet back; share that live image on the screen; and give yourself the weird experience of standing just behind yourself. The literature shows us plenty of perceptual illusions like this that can trick us into thinking that we’re levitating, that we’re warm when totally submerged in ice, and even more extravagant diversions from the norm. Hypnosis has employed our human brain’s innate plasticity of body image for a century, for entertainment and for therapy. When I was in third grade, I learned how I could push my arms against the inside of a door frame for a minute, then release them for a temporary hit of thinking they might float away. One of my friends in university took magic mushrooms and explained to me that she became a lizard. Stanford virtual reality researcher Jeremy Bailenson swaps hand and foot controls in VR simulations and has found it only takes a person minutes to rewire their limbs – to punch for kicks, and kick for punches. Brains are funny things.
It took less than a decade for the smartphone to insinuate its way into our sense of self. In some important senses, most of us are now dual-platform beings, living in at least two parts: the flesh and the machine. We don’t all think this way, yet – it takes time to steer the paradigm – but five billion people live with some device connecting them at all times to the sum of human knowledge, incredible machine intelligence, and the mediated life experiences of those other cyborgs. That phone is kept at all times inside the envelope of the magnetic fields projected by our hearts and brains, and someday soon we’ll think of human being-ness as slightly more about these organizing fields than about whatever continuously cycling meat exists within them. So we’re each a binary: a person and a cellular device, a wizard and its owl familiar…if you don’t mind reviving the archaic to explain what modern language can’t sufficiently address.
That’s just the start of it. Jaron Lanier invented virtual reality (the headset kind) and thinks that we can use VR to hack our body image even deeper. In the Dawn Age of VR (the one that future archeologists will point to using fossil Oculus Rifts, like we use specific trilobites to date the Cambrian), most of the conversation is about simulating spaces: taking students into simulated arteries, or maps of outer space, as featureless observers. This misses the amazing opportunity to change not just what we experience around us, but what we experience as us.
The body is a powerful accomplice in the learning process. It is easier to solve equations when we are allowed to move our hands. The memories we form while lying down are harder to retrieve when standing up. How surprised was I to learn that I was naturally more interested in my college courses if I struck the pose of interest, leaning forward with my eyes wide open, smiling? How much more can we learn by using virtual reality technology to move within the body images of other things?
It’s 2016. There are people who race flying drones professionally. It’s essentially first-person shooter video games “IRL” with help from cheap consumer-grade robotics. I can don the headset and the handset and dive nimbly through the beams of some construction site. The holy grail of Dawn Age Virtual Reality is “presence,” given that we only have our eyes and ears to orient ourselves within this space our human bodies know is not the chair in which we sit.
So this illusion flight’s a miracle, but incomplete. What I cannot do yet is feel the wind upon my chassis. That is coming soon – as soon as we can squeeze cheap sensors into robot frames at densities approaching those that simulate a sensate surface. Then VR will be full-bodied, not just discarnate magician’s hat and gloves but wetsuits we will wear to dive in cyberspace, the full regalia that lets us feel as though we’re not just somewhere else, but something else.
The brain can translate stimulation of the skin to mental images. Moon Ribas (BDYHAX Performer 2017) has acquired a seismic sense by planting tiny buzzers in her arms that vibrate any time her phone is notified of real-time earthquake readings. David Eagleman has substituted senses with vests studded with tiny vibratory motors that translate sound into patterns of vibration on the skin – and found it takes only a few months for the brains of his deaf subjects to rewire and render this new input as if it were audible sound coming through their ears. And output, too: for years, we’ve had robot limbs that move according to the thoughts of users. So it may be that we humans automatically adapt to robot shells and surrogates, evolve new senses, and – at least in fits and starts – forget the membrane in between awareness and however-many drones you’re piloting at once, your flesh transfigured into a coherent swarm of flying sensors. Just as quickly as we can upload our mental functions to the cloud and print a gel of sensors in which we ensconce the world, we’ll evaporate this particle of selfhood, the consensus VR of a slower, simpler time – into a gauzy everywhere-ness.
Here is how this goes, as everything gets faster, smaller, and more easily accessible: first, we’ll see from the angles of the other camera-wielding, AR-lens-empowered people like ourselves; then, we will dip into our suits to work and play as drones that mimic both the form and sensitivity of living creatures; ultimately, we’ll be able to inhabit ecosystems all at once, coordinating murmurations of machines as if they were the organs of our vast, illuminated bodies. In other words, we recapitulate shamanic realms and powers in the “fallen” medium of gross corporeal devices: remote viewing, then astral projection and shape-shifting, then something like transcendence into higher orders of experience.
The “best seat in the house” won’t be a place – it will be measured by your degree of access to whichever point of view you want. We remain tied to our geography by physical constraints – distance, given fiber-optic speeds and network latencies, and ease of access, given economic status and communications infrastructure (personal, as well as regional and national) – but the best seat in the house is, of course, determined by the values of whoever’s in that seat. Some people are content to watch only one TV station, whereas some need hundreds; likewise, I expect access to grow both ways, toward depth and breadth: voyeurs who delight in living through the low-res full-suit lifelogs of celebrities, and national security staff who clock into cyberspaces where aggregated browsing patterns map an orbital view of a region’s data traffic – a “space” made entirely of relationships, a heat map that predicts the next domestic terrorist disturbance.
In the slightly shorter term, though:
- I’m at a concert trying to get a better view of what’s on stage, and so my options are to flip through my available heads-up displays to see the drummer’s POV, or one of several airborne mini-copters, or to send a messaging request to that jerk standing up in front so I can show him his own rear end blocking people’s line of sight.
- I’m a student learning the anatomy of lobsters by “becoming one” in class. I wiggle all of my ten limbs, antennae, swimmerets…and haptic feedback from my suit convinces me I actually have them. I see simulated images of lobster vision when I look around, and regulation coils embedded in the fibers of the suit give the experience of temperature variations in the water on the ocean floor. When I grow up, I will have spent my life in training to possess, as if a ghost, the “manned” machinery that builds our habitats in orbit. Safer doing work like that from Terra Firma.
- It’s rare to send a soldier into war these days; the now-obsolete facilities that treated airstrikes like an arcade game are gone, replaced with urgent and convincing telepresence: soldiers stationed near the conflict for low latency as they inhabit military drones so rich with sensors that they immerse our off-site troops in totally believable experiences of the conflict – (modulated) pain and all. This becomes two forms of entertainment: vicarious experience of war, in which these engagements are recorded or live-streamed to passive users; and of course the robot combat leagues, some more sadistic than others, upping the stakes by delivering real pain to operators when their bots are damaged. The old term “theater of war” takes on new meaning.
- I am a dreamer and my brain activity is captured in high-definition, indexed against the earlier progressive scans I took to map my set of idiosyncratic neural firing patterns, and then rendered by some deepdream-like AI to make immersive VR spaces for my Patreon subscribers. It’s the closest they can get to living in the landscape of my dreaming mind. I cannot play an instrument, but I’m the instrument, and so I treat my brain with daily regimens of nootropics and divergent thinking exercises so that I can lucidly provide the most compelling spaces for my fans. Perhaps my dreams control robotics, also, and I am the sleeping star of MIT-esque future operas that feature flowing panoplies of dreamer-operated drones.
Where, in this future, does my body start, and yours begin?