No I in AI

There is a lot wrong with AI.

One of the things – leading to many of the other things – is how we describe it. What we think it is doing. Which is not what it is doing at all.

If it quacks like a duck, looks like a duck and walks like a duck, it is, in this case, still not a waterfowl. Because AIs like ChatGPT seem conversational, we talk about them in human terms. Because we can interact with them in more or less the same way we interact with other humans, we think they have human-like intelligence. They do not.

Over the years I have found it helpful to remind myself that computers are big calculators and that all the magic they produce comes from processing binary code. This also goes for AI, which is still a system running on a calculator. More specifically, what we are dubbing AI now are actually LLMs: Large Language Models. These are models that result from running machine learning over a large corpus of human text to generate the most likely next words in any given conversation. Very elaborate predictive text, very good at that, but nothing more.
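To make the "elaborate predictive text" point concrete, here is a deliberately crude toy sketch of my own in Python, not how any real LLM is built: a counter that, given the last word, emits whichever word most often followed it in its tiny corpus. Real LLMs replace the counting with enormous neural networks trained on vast amounts of text, but the task has the same shape: produce the most likely next words.

```python
from collections import Counter, defaultdict

# A tiny stand-in for the "large corpus of human text".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a simple bigram model).
next_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word after `word`."""
    if word not in next_counts:
        return "<unknown>"
    return next_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' - the most frequent follower of 'the' in the corpus
print(predict_next("cat"))  # 'sat' - no understanding, just counted patterns
```

Nothing in that little program understands cats or mats; it only reproduces patterns it has counted. Scale the counting up by many orders of magnitude and you get something that sounds conversational, but the principle does not change.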

This is very different from how humans have conversations. We understand (or misunderstand) the topic we are conversing about. We go back and forth to information stored in our heads, connected by the concepts we have embedded it in, puzzle it into the context we and the conversation are currently in, and apply what we think is the perspective of our conversational partner to our next logical statement, which we then convert into a shared language. All of these steps are altered by emotional cues along the way.

Even if I am only trying to predict what my next likely language should be, I am tapping into grand concepts such as culture, perspective switching, emotional responsiveness and the moral ambiguity of appropriateness. In the blink of an eye and without really being aware of all the marvelous cognition that is going on.

AI or LLM does none of that. It isn't really an 'it'. We are coherent entities constantly trying to make sense of the world, each other and ourselves. Within our thinking and emotional landscape we constitute ourselves; through our thinking and experiences we are who we are. We make ourselves up, or we are made by what we interpret of our own reflection in the world. Whichever way around, there is an I holding this all together. There is no such I in AI. There is a model that is the result of applying machine learning to a corpus, directed to predict language. This model cannot hold elaborate concepts, emotionally or otherwise. This model cannot think, understand, feel, ponder, question or imagine. It can produce language that seems conversational to you, and you will do all the thinking and feeling.

Because ChatGPT interacts with us the way we interact with other entities, we think it is an entity, and we name what it produces the way we name what other entities produce. We ask AI questions and 'it gives answers'. There is so much packed into that short statement, and when interacting with AI/LLMs, all of it is untrue.
Firstly, as mentioned above, there is no 'it' in the sense meant here. There is no coherent entity with a mind of its own. Even the word 'gives' implies something decidedly human: the give-and-take of conversation in which we bend our minds to each other, take each other's perspective, often turn our faces to each other, open ourselves up to emotional processing and share knowledge possibly gained from hard-earned experience. With so much going on, every part of an interaction can really be seen as a gift – a part of yourself you give to another person, so that they might consider it, feel it and possibly internalize it into their coherent self. So we say we give answers. AI/LLM does none of that. We might more accurately say 'the model produces'.
Answers. No, not really answers either. Answers exist only in relation to questions. We expect that when I ask you a question and you make the effort to form an answer, you do all the cognitive and emotional processing, use all the knowledge you possess and deem relevant to my question, and formulate what you think it is I want to know, to the best of your abilities, into an appropriately packaged answer. Lovely. An AI/LLM model produces the statistically most likely language to follow the language you put in (your question), based on the patterns of language in the corpus the model was built on. So in the human sense of how we see and describe the question-and-answer interaction, these aren't answers at all.

We do all these things so effortlessly that we think they are easy. We think that if something produces the same results, it must be made by the same process. But it is not. An AI/LLM produces conversational language that looks like something we might have made. But the model did not produce it in the way we make it.

There is so much more, but it mostly comes down to this: an AI/LLM isn't an entity. The model cannot and does not consider, because the model has nothing to consider with. AI/LLM has no concept of truth, because there is no way for the model to hold any concept. AI/LLM also doesn't actually hallucinate, because the model has no imagination to make anything up with and no perception of reality to break. No learning, thinking, growing, understanding or analysing.

Models don’t know shit – we should know better.