The funniest story in AI last week surely was that Air Canada must honor a refund policy invented by the airline's chatbot. Beyond the fact that a corporate robot is part of the corporation, and the corporation thus liable for the robot's actions, it is remarkable how Air Canada argued in court:
In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission.
They went even further and said that Air Canada "cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot", which suggests that Air Canada thinks it doesn't exist as a legal entity, as a corporation made up of its employees and products. I would suggest that Air Canada or its lawyers were not thinking very hard when they came up with that nonsensical argument, and of course the court didn't follow it.
This episode is interesting beyond its legal shenanigans because we do have a term for "separate legal entity", and that is legal personhood. Following Air Canada's argument would mean that its chatbot is legally recognized as a person, with its own rights and responsibilities, in this case: don't lie to our customers, chatbot, or you'll be held responsible and pay for the damages.
The debate about robot rights is old and came to mainstream prominence through Isaac Asimov's famed short story collection I, Robot. Many books and essays have been written about the implications of AI or robots developing consciousness, and whether and how we humans would owe them legal rights if that ever came to pass. For now and the foreseeable future, that's a neat intellectual exercise in the realm of science fiction, but not a legally pressing issue. AI systems are statistical models unable to perform human cognitive functions. LLMs can simulate language, but they can't use language, because they lack any symbolic representation in their neural networks. They are not persons but products, and the corporations developing them are legally responsible.
All of which, of course, doesn't mean we don't anthropomorphize them.
In a 2015 paper, "Empathic Concern and the Effect of Stories in Human-Robot Interaction", robot researcher Kate Darling finds that "story-shaped" robots (like, say, a Tamagotchi or a robot dinosaur: robots that resemble fictional or real-life creatures onto which we project consciousness) trigger empathic reactions in humans.
LLMs and their simulated language are very much "story-shaped" robots in the literal meaning of the words, and anthropomorphizing them comes naturally to us: at around nine months of age, toddlers go through a 'socio-cognitive revolution' in which they first understand other humans as similar to themselves, as intentional beings, the first step toward a theory of mind. This enables them to develop shared intentionality with other humans (parents, siblings, friends), where both focus on the same thing and check each other's attention. That, in turn, enables kids to develop language and an understanding of symbolic representations by imitation: to connect words to the things they signify and, slowly, build a symbolic representation of their environment in their heads, a world model. It also means that intelligence, the ability to form those models and then combine, dissect, and transform them, is inherently social.
But this also means that we, social creatures by nature, inherently ascribe cognitive functions to any system that can produce text, utter words, speak, point at and identify things, just like us. Where Tamagotchis and robots have a face for us to relate to, an LLM has text. It's just as synthetic and non-human as the plastic face on a robot or the display on a Tamagotchi, but that doesn't matter. We can't help it, because recognizing humans is built into our biology, and so we are bound to recognize a ghost in the machine where there is none.
Now, I don't think that Air Canada is suggesting that their chatbot is conscious or anything; I don't even think they anthropomorphize the chatbot on their website. But I do think they used this discourse around anthropomorphization and legal personhood for AI, with all its philosophical implications, as a last-ditch argument to save, what, 880 Canadian dollars? That's laughable corporate legal bullshit and nothing more.
However, the discourse about AI personhood gets interesting in the context of what is called "responsible AI". The logic goes: if you assign legal personhood to AI, the AI itself becomes liable for its own actions. If AI goes wrong and starts maximizing paperclips, we can at least sue it in court. (What a relief.)
We should be thankful to Air Canada, its lawyers, and the court for clearing this one up: we already have legal responsibility in AI, and it lies with the corporations that develop these systems. Air Canada is responsible for the bullshit its chatbot tells its customers, a logic that might also extend to ongoing defamation lawsuits against OpenAI. Just as Air Canada's chatbot is not a "separate legal entity" with its own personality rights and responsibilities, ChatGPT is not a legal entity of its own that can "hallucinate" bullshit about you or me or anyone else, but a product made by OpenAI, who are ultimately responsible for the damages it causes.
This would also apply to possible AI systems of the future that found, steer, and head new corporations: legal personhood would already exist for them as a legal fiction, the corporation itself, without any need for a separate personhood for the algorithm. Corporations led by sophisticated statistical models and corporations organized by humans would simply operate under the same rights and obligations, with no need for "separate legal entities".
In his good, long primer on the subject, Lance Eliot calls this the "magic trick of pulling a rabbit out of a hat". But the trick here would not be to simply assign legal personhood to a corporation led by an AI (or to the lame old corporation that merely develops said AI), but to not anthropomorphize it within the legal system.
As humans we can't help but see a ghost in the machine, but the law must not fall for our psychological failings. Air Canada learned that the hard way in court and now has to pay 880 bucks. I guess they'll get over it.
Tinmanning: anthropomorphizing machines.
Botblaming: obscuring liability by pinning it on a legal non-person.