Reply To: General discussion
Intelligence and Language.
Here I’m focussing on the idea that language itself is ‘intelligent’, or perhaps ‘represents’ intelligence, and I propose some ideas for critique.
1. The Turing Test
One surprising observation from the recent release of these systems is how capable they seem, given the obvious limitations of the learning they have engaged in. In essence, the AI language model has been taught to predict the next word it will use. And that’s about it.
They are basically language prediction engines. I’m not suggesting they are in any way perfect at what they do; in fact there are numerous gaps in their knowledge (they are mostly offline and/or out of date by around two years) and they make mistakes, some obvious, some not. Sometimes they even make things up (hallucinate).
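To make the "predict the next word" idea concrete, here is a minimal sketch of the principle using simple bigram counts. The tiny corpus and the function name are my own illustrative assumptions; real AI language models use neural networks trained on vast text collections, but the core task is the same: given what came before, guess the most likely next word.

```python
from collections import Counter, defaultdict

# Illustrative toy corpus (an assumption for this sketch).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, how often each following word appears.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, if any."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # → cat  ("cat" follows "the" most often here)
```

Even this trivial model "sounds" locally plausible within its tiny world, which hints at why far larger versions of the same idea can feel so person-like.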
However, the experience of interacting with an AI is that it has many of the features we would associate with interacting with a person. This is sometimes so convincing that the general consensus seems to be that they will routinely pass the Turing Test.
2. Language is Refined and Distilled
Words map on to objects, relations and dimensions in the world around us, and through use they are gradually refined. It is not unreasonable to propose that feedback about a word’s utility shapes its ‘fit’ over time, and that this amounts to a distillation of meaning as associations between words flourish and begin to represent processes and dynamic interactions. I propose that words and their associations are virtual, heuristic networks of sound/textual stimuli that model the world. The time-course of this shaping, refinement and distillation is inconceivably long, and it is safe to assume, therefore, that its effect is profound.
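The idea that word associations form a network can be sketched very simply: count how often pairs of words occur together, and treat those counts as the strength of the links between them. The three example sentences and the one-sentence co-occurrence window are assumptions made purely for illustration, not a claim about how meaning actually distils in a real language community.

```python
from collections import defaultdict
from itertools import combinations

# Illustrative mini-corpus (an assumption for this sketch).
sentences = [
    "birds fly over water",
    "fish swim in water",
    "birds eat fish",
]

# Link strength = number of sentences in which a word pair co-occurs.
cooccur = defaultdict(int)
for sentence in sentences:
    for a, b in combinations(sorted(set(sentence.split())), 2):
        cooccur[(a, b)] += 1

def neighbours(word):
    """Words linked to `word` in the association network, strongest first."""
    linked = [(n, b if a == word else a)
              for (a, b), n in cooccur.items() if word in (a, b)]
    return [w for n, w in sorted(linked, reverse=True)]

print(neighbours("water"))  # every word that ever co-occurred with "water"
```

Scaled up over millions of speakers and thousands of years, a network like this would encode a great deal of structure about the world, which is the sense in which I suggest language itself ‘contains’ something like intelligence.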
3. Language is built into human development
Arguments exist that humans are not the only organisms that use language, but for the sake of argument I think it is fair to say that humans are the supreme language users. Language emergence in children is underpinned by neurology and physiology to such an extent that it is spontaneous and is strongly represented in developmental models. Over time, language use transforms the neurology of the user in key areas of the brain.
In adults, language becomes such an ingrained and automatic process that, under the right circumstances, it will interfere with perceptual processes (e.g. the Stroop test).
So, not only is language a foundational feature of humans, it is represented in the architecture of human neurology, the characteristics of which are heritable. In this sense, language evolves both as a virtual network and as a neurobiological network of meaning.
My thought here is that we might view the refinement and distillation of language both in use and in neurology as a virtual map of meaning which itself ‘contains’ or represents intelligence. This might account for the surprising experience of encountering an apparent intelligence when we interact with AI.
I also think that the emergence of AI in modern culture was inevitable once the first Turing machine was devised and built. To me it seems that this is the culmination of an arc that began almost 100 years ago. One question that I find intriguing is whether AI would be inevitable within any language-using species.
Please do pitch in if any of the above is of interest to you, or even (maybe especially even) if you think I am mistaken or you think the emergence of AI is a bad thing.