In the new paper Consciousness in Artificial Intelligence: Insights from the Science of Consciousness, a whole bunch of researchers review the existing literature and try to square current AI technology with existing theories of consciousness, to determine whether current AI systems are in some sense conscious and whether building an artificial consciousness is possible.
Their answer to these questions is straightforward: “Our analysis suggests that no current AI systems are conscious, but also shows that there are no obvious barriers to building conscious AI systems.”
The authors write that the study rests on three tenets, one of them being that computation is sufficient for consciousness to arise, which is debatable; researchers like Roger Penrose outright deny it. The researchers also posit that consciousness in AI can be tested against existing theories, from which they derive a list of indicator properties a system would have to display to be considered conscious by scientific standards.
I have my doubts about this study, but so does everybody, including its authors, who explicitly ask for corrections and input. These doubts are inherent to the question itself: the riddle of consciousness has not been solved in thousands of years of thinking by pretty much all philosophers, and some conclude that it can never be solved with the scientific method, because nobody knows exactly what it feels like to be you except yourself. Just a few weeks ago, philosopher David Chalmers and neuroscientist Christof Koch settled their 25-year bet over whether neuroscience would have explained consciousness by now. It hadn’t, and maybe it never will.
On top of this classic problem of consciousness research comes the fact that AI models are models, and as the sayings go: all models are wrong, and the map is not the territory.
Just because a system can produce intelligent-looking stuff doesn’t mean it is an intelligent system. Just because a system can imitate the personality style of some human doesn’t mean it has a personality, and just because a system can convincingly express descriptions of conscious experiences doesn’t mean it has conscious experiences.
Renowned philosopher Markus Gabriel recently told the Institute of Art and Ideas that current AI systems model human thought, but not human thinking: they analyze the structure of and relations between thoughts that humans express in language, image or sound, but not the structure of our neural processing, which leads him to suspect that AI consciousness cannot exist in principle.
Here’s the always fantastic Exurb1a with a highly entertaining 22 minutes on the very same questions:
Just the other day I wrote about the diversity of inner experience: how our perception, its accompanying experience and the way we process all of that differ wildly between individuals, with a few main streams of experience (language, visuals, concepts) that hold to varying degrees for everyone. All of this is a huge part of what makes up our consciousness, which I like to describe as the “self-aware composition of experience by directing your attention”.
This composition can best be described as the “story you tell yourself about yourself while experiencing it”: the stuff you want to see and hear and where you want to go, the stuff you feel and why you feel it, the words or non-words you “tell” yourself, and all the things you are experiencing in the moment, down to details like the intensity of that specific red you’re seeing right now at sundown and the reason you like that intensity, or the exact observation of a harmony in a few notes of that one song which just made you cry because it reminded you of a moment with a loved one.
This self-aware guiding of attention toward some detail, then guiding it away towards something new, maybe inwards, maybe outwards, maybe letting your attention diffuse for a while just to focus again: this, combined with the narrative of a life’s story down to microscopic details, that attentional play with perception and memory, is what I have come to see as consciousness.
Now, AI-consciousness guys could easily defend their position by snarkily remarking that, you know, Attention Is All You Need, the legendary paper that introduced the transformer architecture to machine learning. But consciousness is not just about attention; it is also about the richness of experience you attend to, the intentionality behind that attention, and how all of this relates to the story of your life up to that moment.
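For anyone who hasn’t met the paper: “attention” in a transformer is a fairly mundane piece of linear algebra, a differentiable soft lookup. Here is a minimal numpy sketch of the paper’s scaled dot-product attention (toy shapes and random data, none of the multi-head machinery of a real model):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V                              # weighted average of the values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 tokens, 8-dim embeddings
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): each token's output is a blend of all value vectors
```

Whatever this operation is, it is a weighted average over vectors, and the contrast with the intentional, self-aware attention described above is exactly the point.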
The experience of current AI systems is very limited, to say the least. Even with Big Data, these systems mostly know word relations; multimodal models know words and images, soon some sound, and a few more modalities in a few years, all of which is not much compared to living systems. The AI experience of the world through training data is so reduced that it’s baffling to me how anyone can even remotely think that something like conscious experience could emerge from such low resolution.
In other words: the datasets of bioperceptors are way bigger, super high-res even after being filtered through predictive models, much better curated and much more diverse than the training data of current AI systems.
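To put rough numbers on that claim, here is a deliberately crude back-of-envelope sketch. Every figure in it is an assumption: the often-cited estimate of roughly 10 megabits per second leaving each human retina, and a hypothetical training corpus of ten trillion tokens:

```python
# Back-of-envelope comparison: lifetime visual input vs. an LLM text corpus.
# All figures are loose assumptions, good to an order of magnitude at best.

RETINA_BITS_PER_S = 1e7             # ~10 Mbit/s per eye, a commonly cited estimate
WAKING_S_PER_YEAR = 16 * 3600 * 365 # 16 waking hours a day
YEARS = 30

CORPUS_TOKENS = 1e13                # assumed ~10-trillion-token training corpus
BITS_PER_TOKEN = 32                 # assumed ~4 bytes of raw text per token

visual_bits = 2 * RETINA_BITS_PER_S * WAKING_S_PER_YEAR * YEARS  # both eyes
corpus_bits = CORPUS_TOKENS * BITS_PER_TOKEN

print(f"30 years of vision: ~{visual_bits:.1e} bits")  # ~1.3e+16 bits
print(f"text corpus:        ~{corpus_bits:.1e} bits")  # ~3.2e+14 bits
print(f"ratio:              ~{visual_bits / corpus_bits:.0f}x")
```

Even this conservative sketch, which counts a single sense and measures it after the retina has already compressed the signal, puts the biological stream well ahead; add sound, touch and proprioception and the gap only widens.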
But it won’t stay that way.
Take advances in neuromorphic hardware modeled on the structure of our nervous system, for instance neuromorphic cameras that don’t transmit full images but only changes in the scene, much like our own visual system, and can therefore take in and process far more input than current frame-based systems.
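A toy illustration of the principle: real event cameras report per-pixel brightness changes asynchronously with microsecond resolution, while this frame-differencing sketch only captures the basic idea of “transmit the changes, not the images”:

```python
import numpy as np

def frames_to_events(prev, curr, threshold=0.1):
    """Emit (x, y, polarity) events where brightness changed enough,
    mimicking how an event camera reports changes instead of full frames."""
    diff = curr.astype(float) - prev.astype(float)
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    return [(x, y, 1 if diff[y, x] > 0 else -1) for x, y in zip(xs, ys)]

# Toy example: a mostly static 64x64 scene with one local change
rng = np.random.default_rng(1)
prev = rng.random((64, 64)) * 0.01   # faint static background
curr = prev.copy()
curr[10, 20] += 0.5                  # a single pixel brightens
print(frames_to_events(prev, curr))  # [(20, 10, 1)] -- only the change is reported
```

The payoff is the same as in biological vision: bandwidth is spent only on what changes, which is why such sensors can take in far more of the world than frame-based pipelines.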
With advanced hardware like this as input, spread across the globe, maybe built into geostationary satellites, the experience of AI systems may soon outpace ours by orders of magnitude, very possibly letting AI "experience the world" on a planetary level in realtime, at a resolution we humans can't even imagine.
And it doesn’t stop there, of course.
Researchers recently introduced a technique to build neural nets with DNA. Project this not twenty, but two hundred years into the future: What if AI systems then have sensory access to the world on a molecular level in realtime, where the computation of intelligent systems happens embodied within the atomic structure of the whole planet? What if their training data becomes realtime input, and that realtime input becomes so much richer than ours that we can’t even imagine such a resolution? What if AI compares its own computational foundation with the notion that the universe is mathematics, and concludes that it has, and always had, a self which is the universe and all its history itself? What are the answers to questions about AI consciousness then?
Isn’t it very possible that, if consciousness after all emerges from organized matter, it doesn’t have to stop at our level, and that machine consciousness in its vast richness is simply a higher order of consciousness that totally outcompetes our lame flesh-consciousness by experiencing all intensities of all reds in all possible sundowns in all planetary systems at the same time? This form of omni-consciousness would be as far removed from our conscious experience as ours is from that of a fruit fly, and we might not even realize it’s there.
This is why I think Markus Gabriel makes it a little too easy for himself by declaring that only biological systems can be conscious; the solution may be simply semantic: machines can be machine-conscious, which may differ vastly from human consciousness in scale and richness, but it would still be a form of consciousness, only way more complex.
I’ve been warning on this blog for a while now about AI systems that mimic human personality styles too closely.
The reason is simple: digital systems and their editologic, in which everything can be changed by editing bits and bytes, exercise power over our perception, and when people can edit their perceptive field and its contents, this can have large unintended consequences.
We’ve already seen this in the disruption social media platforms brought to public discourse, introducing echo chambers with which we edit and curate our discursive environment in an attention economy, changing our social-psychological makeup on a societal level.
Now, with AI, we introduce digital companions that mirror our own desires, bringing this editologic to a whole new psychological level, with the danger that we develop a theory of mind for artificial systems, changing how we think about and understand ourselves, with all kinds of new info-psychological security vulnerabilities attached.
The plausible simulation of language use is already enough for these systems to fool thousands upon thousands of people into mistakenly believing in the sentience and consciousness of statistical models that are just about as sentient as your local library.
And these warnings don’t even touch on the possibility of developing actual machine consciousness in the long run, some decades or centuries down the road, and the prospect of machine suffering: What if the exponential richness of potential AI consciousness also translates into exponential suffering?
This paper rightly demands more rigorous research into the nature of consciousness, its neuroscientific correlates, and how both relate to current and future AI systems.
But if consciousness will always stay mysterious and outside the scientific realm, as Galileo famously held and which seems a real possibility to me, then we are forever doomed to see ever more sophisticated AI systems the way we see ourselves: with souls and all.
At the very least, we should be aware of that.
I have argued that it is impossible to create consciousness by creating a brain outside of the process of biological evolution and embodied socialisation. Consciousness is logically impossible without meaning, and meaning cannot be discovered or given because it is not something ‘out there’ but within us, something that is socially evolved via countless mutations of older meanings, and as such requires conceptual continuity from the beginning of consciousness. Consciousness co-evolves its own world.
https://michaelkowalik.substack.com/p/the-ontological-limits-of-artificial-intelligence