The Age of Pan - Part 1: Infinite Rabbit Holes
Why is Artificial Intelligence so hard to think about?
Fuzzy Forever
In the past weeks I have thought intensively about Artificial Intelligence, about the emergence of image synthesis, about art and consciousness, and how all of this is connected. I have a billion aphorisms about this stuff written in text files, and I'll try to turn them into a coherent essay, but I can't promise I'll succeed. Most likely this will be a long series of meandering essays aiming at something that is incredibly hard to talk about, and even more likely I will raise many more questions than I can answer. And that's fine.
Because interestingly, the things we are talking about in the context of Artificial Intelligence, from consciousness to sentience and now art and creativity, are remarkably undefined: there are no clear definitions you can look up and point your finger at. Likewise, there are a billion definitions of art and everybody explains it differently, because art tickles your senses in a myriad of ways and is a highly individual experience.
The emergence of AI systems forces us to find out who and what we really are, to get a sense of what this technology we are building actually is. So if we want to understand what the creation of AI means, we have to understand our own path to becoming what we are today: Homo sapiens, the knowledgeable man. We don't know exactly what this path was, but we have a lot of ideas, some historic documents and many theories. Mostly, we encode the path of our species to humanity in stories, art and creativity. Everybody knows what we mean when we say things like "art" or "consciousness", but when pressed, we can't really explain them. This makes AI incredibly hard to pin down.
Socrates famously warned of the dangers of writing because it would change who we are as humans. In his view, what made us human was the ability to transfer knowledge from one person to another orally, in direct dialogues in which we could react and make counterarguments that changed the course of the conversation, so that organic knowledge emerged for all participants. Writing, he said, was dangerous because it fixed speech onto a medium, leaving no room for emergence. In a way he was right: writing surely stifled our ability to memorize things and changed our oral traditions forever. But, to paraphrase Dr. Ian Malcolm in Jurassic Park: knowledge always finds a way. Writing led to an intergenerational dialogue between groups that was not possible before, initiated the development of science and the accumulation of knowledge over extremely long periods of time, and built the foundations of modern civilization.
I have already written about how AI systems are cultural technologies in the tradition of language, writing or libraries, and while this is helpful, it does not explain what to do with the stochastic character of this technology. Where language, writing and libraries work with symbols that encode meanings we have formed a consensus on, an AI system doesn't work with symbols and encoded meanings. An AI system just is an ocean of data points, the relations between those data points, and the weights of those relations, which in turn gives us infinite options to manipulate those relations to get any outcome we desire. Image synthesis engines make this perfectly clear when they create endless streams of variations of the same visual.
The other day, I created 1,000 images of white robot rabbits in two hours. No cultural technology until now was ever able to achieve that, and to create 1,000 images of white robot rabbits out of nothing is both a wonder and a nightmare. It's hard to think about these things because infinity is hard to understand, and everything about AI is about infinite white robot rabbits.
Jason Scott (whom you may know as the internet historian at archive.org) described these stochastic libraries as "a can of worms that opens up can of worms": AI is a rabbit hole in itself, leading to more and more rabbit holes, simply because we are creating something that resembles cognition, and we have known since René Descartes' Evil Demon that we cannot prove any cognition besides our own. This is why I am transfixed by infinite white rabbits: AI gives us a technology to stare into infinity.
This problem is unsolvable. It is also why many religions forbid images of God: God (or whatever infinity you believe in) is that which can't be understood or perceived, and so is any mind but our own. Thus, the thing we are building is forever fuzzy and can't be explained.
So let's try anyways.
“The Age of Pan” is the term I use to describe this technology of infinite possibilities. Pan was the Greek nature god of the wild, associated with sex and fertility. The Greek word “pan” means “all” or “everything” and is also connected to the root “peh-”, which means “guard”. I use “Pan” in the context of Artificial Intelligence as a metaphor for a tool that offers infinite possibilities with a click.
Welcome to The Age of Pan.