While I can’t decide between spending my summer days on a tiny beach at a lake in Berlin and reading through roughly 1 million articles about Artificial Intelligence, I stumbled upon a new paper that confirms some of my thoughts on Brain-Computer Interfaces and where this technology might lead us in combination with AI-systems.
In “Brain-Supervised Image Editing”, researcher Keith M. Davis III describes a BCI-system that takes your brainwaves and uses them to navigate the latent space of a trained neural network. In other words: they record your brainwaves while you look at a blonde person and use them to turn an image of a dark-haired person into a blonde one.
From the Abstract:
(We used) brain responses as a supervision signal for learning semantic feature representations. Participants (N=30) in a neurophysiological experiment were shown artificially generated faces and instructed to look for a particular semantic feature, such as "old" or "smiling", while their brain responses were recorded via electroencephalography (EEG). Using supervision signals inferred from these responses, semantic features within the latent space of a generative adversarial network (GAN) were learned and then used to edit semantic features of new images.
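Strip away the EEG hardware and the core trick is latent-space arithmetic: estimate a direction in the GAN’s latent space that corresponds to a semantic feature, then push new latent codes along it. Here’s a minimal numpy sketch of that idea - not the authors’ actual pipeline. The relevance labels are random placeholders standing in for the brain-derived supervision signal, and the generator itself is omitted:

```python
import numpy as np

# Illustrative sketch only: `latents` are GAN latent codes for the faces shown
# to a participant, and `relevance` stands in for the per-image relevance the
# paper infers from EEG responses (1 = "matches the target feature, e.g.
# 'smiling'", 0 = "does not").
rng = np.random.default_rng(0)
latent_dim = 512                               # StyleGAN-sized latent space
n_images = 200

latents = rng.standard_normal((n_images, latent_dim))
relevance = rng.integers(0, 2, size=n_images)  # placeholder for EEG-derived labels

# Estimate a semantic direction as the difference between the mean latent of
# "relevant" and "irrelevant" images (a simple stand-in for the paper's
# supervised estimator), then normalize it.
positive = latents[relevance == 1].mean(axis=0)
negative = latents[relevance == 0].mean(axis=0)
direction = positive - negative
direction /= np.linalg.norm(direction)

def edit(z: np.ndarray, strength: float) -> np.ndarray:
    """Move a latent code along the learned semantic direction."""
    return z + strength * direction

# To actually see the result you would decode with a pretrained generator,
# e.g. image = G(edit(z, 3.0)) for some GAN generator G (not included here).
new_face = rng.standard_normal(latent_dim)
edited = edit(new_face, strength=3.0)
```

The point of the sketch: once the brain responses have been turned into labels, “editing with your thoughts” reduces to moving a vector around - which is why hooking this up to faster generators seems like an engineering problem, not a conceptual one.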
Let’s play.
The timeline is pretty clear: ten years ago, researchers extracted visuals from your brain activity while you were watching a movie; five years later, scientists created a system that can describe what you see in text (which you could totally feed back into something like Dall-E); then they trained AIs to visualize what you see; and now you are able to edit these visualizations with your thoughts.
If you hook up a system like this to VR-glasses, you can have a brain-controlled virtual environment right now, albeit in a very rudimentary and slow form. It is very possible that, within our lifetime, we will have Virtual Reality systems that hook up to your brain and let you play inside a lucid-dream-like state you create with your own thoughts.
In Lucid Dreaming, the dreamer is aware of their sleeping, dreaming state and can thus control the environment to varying extents. Some experienced lucid dreamers can fully control their dreams, while others only control parts of them, and only sometimes. Lucid Dreaming exists on a continuum, and it’s super fascinating to read through the various experiences on the LD-subreddit, some mind-blowing and awesome (flying, superhuman abilities, summoning anime characters), others creepy and frightening or plain weird (in-dream characters turning on you when you tell them it’s a dream, Inception-like scenarios where you are dreaming about lucid dreaming).
Now imagine doing all of this while you are fully awake, wearing a combined VR/BCI-helmet. This technology would give you “artificial lucid dreaming”, in which you could visualize and experience anything your mind can come up with.
On the one hand, this of course sounds beyond awesome. No game developer would design your virtual experiences; you could totally live your dreams - literally. Because of the chaotic nature of our thoughts, there would be neuro-guides that keep your thoughts and “VR-dreams” within certain boundaries or narratives - imagine using this technology while listening to an audiobook, for instance. You’d experience hunting Moby Dick, then envision Captain Ahab as a Cherokee from outer space, and voilà.
On the other hand, you’d have to ensure safety from the most horrific nightmares you can imagine; the addictive potential of such technology seems endless; and I’m very sure that such systems would affect our perceptual psychology in ways we can’t anticipate. Already, AI-systems like Dall-E 2 make me question the reality of all kinds of images, and I start to imagine variations of objects I see in photographs as if I could just prompt them at will. Anil Seth has theories that describe our reality-based conscious experience as a controlled hallucination. What happens when you feed such a controlled hallucination into a controlled hallucination? What does tech like this do to our consciousness when we can perceive anything we want with a thought?
The more pervasive such technology becomes, the more fluid and editable our perception of reality gets. And we have no clue about the psychological impact of such a liquid state of perception.
Now, there are boundaries to this tech at this point in time. Dall-E 2 needs 10 seconds to generate six images from a prompt and is not hooked up to your brain, yet. It would take a tremendous amount of computing power to render a movie generated in real time from your thoughts, and papers such as this one are merely the embryonic stage of such technology. This will take at the very least 50 years to arrive, and we don’t know about paradigm shifts or catastrophic failures along the way that could skew the picture in completely different directions.
But I also think that the perceptive potential of such a dream machine is way too interesting for it not to become a reality, and the questions that arise are fascinating as fuck: What happens to intellectual property? Right now, your thoughts are free, and I can think about Donald Duck smoking joints with Spiderman at the Hard Rock Café all day. Am I allowed to think this in a neuro-guided VR-BCI-system from Coolcompanyfromthefuture™? Or will it just tell me: “You can’t dream that, Dave”?
One solution to these questions might be a “narrow version” of this tech combined with Augmented Reality lenses, which display information about your environment by reading your thoughts about it, with the “full immersion VR-dreams” being like drug experiences, reserved for special occasions.
Or maybe you could outsource the direct experience to an AI, because it’s considered too dangerous to perceive directly. This AI would be trained on your thoughts, on your neural data coming from a BCI - a sometimes glitchy but overall pretty good copy of yourself: your language, your style, your knowledge, your personality, your taste, your choices, your preferences. You can (and will) make copies of that algorithm for different purposes that live on different platforms and in different technological environments. One to make your movies and photos, one for your autonomous vehicle, one for your employer, and so forth. And one to explore the outer rims of your wake-dream-like states, filtering out the unwanted nightmares and chaotic interferences.
Such an AI-copy-of-yourself would be a safety mechanism, which (who?) could do the dirty, dangerous work in virtual environments that have become too fluid, too editable for humans to experience directly, because humans repeatedly went insane in the unknown unknowns of latent space.
And if you train a copy of yourself on neural data from the visualized unknown unknowns of your own unconscious, what will it evolve into? We’re pretty deep in Lovecraftian territory at this point, and maybe it’s for the best to end this trip with the opening paragraph of his “The Call of Cthulhu”:
The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.
This is the world we are building. Are you ready?