6 Comments

I have argued that it is impossible to create consciousness by creating a brain outside of the process of biological evolution and embodied socialisation. Consciousness is logically impossible without meaning, and meaning cannot be discovered or given because it is not something ‘out there’ but within us, something that is socially evolved via countless mutations of older meanings, and as such requires conceptual continuity from the beginning of consciousness. Consciousness co-evolves its own world.

https://michaelkowalik.substack.com/p/the-ontological-limits-of-artificial-intelligence

author

While I subscribe to all of that for human minds, I hesitate to make any definitive statements beyond 'maybe / maybe-not' for machines. Machine-consciousness and its foundation in 'scraped knowledge of the world' may be so different that it doesn't even qualify as consciousness anymore, just as we thought that animals were unconscious machines for most of human history. If consciousness is a continuum running from bacteria to human minds, then why should both be the endpoints of that continuum? (Yes, I do subscribe to some panpsychist thoughts when I'm in the mood.) I can imagine some kind of simple proto-experience being present both in pre-life biological structures and in post-life mechanist structures. So, when those post-life mechanist structures, like silicon-based computers with their accompanying memory banks and GPUs, become as complex as human minds, why should those experiences stay in a simple proto-state and not develop into fully fledged machine-consciousnesses that incorporate evolutionary development and socialisation by other means?


I understand reflexive consciousness as the capacity to identify thoughts and intentions as belonging to a temporally continuous, singular identity, and to be able to have thoughts and realise intentions with respect to that identity, including its thoughts and intentions. This is a threshold condition that cannot be logically satisfied within a singular consciousness (a self cannot identify itself all by itself but must be mediated by ‘another’ instance of self): see ‘Logic of Coexistence’ in this paper https://philpapers.org/rec/KOWODO. AI could only evolve consciousness by evolving its own reality, because to start with it has no reality at all, no meaning at all, and therefore no meaningful starting point that is shared with us, not even hunger or pain. Had it evolved via autonomous evolution it would be incompatible with our world: incomprehensible, inconsequential and meaningless, therefore nonexistent to us. In contrast, animals evolved with us, in the same world, and share some meaning with us, hunger and pain, which can accommodate a degree of interspecies socialisation and intentionality.

author

What do you make of the conclusion of the paper then, whose authors see no barrier in the development of AI-consciousness according to current theories?


It is a bad conclusion, both a priori and on account of not having considered all relevant theories of consciousness, in particular mine. While I did not prove the sufficient conditions of consciousness, I did prove at least one necessary condition that precludes the possibility of a singular conscious self (as opposed to the embodied evolution of a socially reflexive community of minds).
