General discussion

  • Posted by Paul on March 20, 2023 at 12:40 pm

    The emergence of AI language models seems to be a unique event in human history, something that might change everything in unpredictable ways. I find myself both fearful and hopeful about this, and I thought it might be useful to have a free-ranging discussion of it here, especially as it seems to have implications for our understanding of ourselves, for what we think of as intelligence, and for our concepts of consciousness itself.

    What do you make of this development, what do you think might happen, and how might it be understood with reference to Iain’s ideas?

  • 15 Replies
  • Paul

    Organizer
    March 20, 2023 at 9:09 pm

    So, I thought I’d just jot down some notes here about why I’m interested in the current flurry of AI activity we are seeing. I’m aware that this phenomenon is rightly viewed with some trepidation, but I also see it as a unique opportunity to learn about ourselves.

    1. I am interested in the idea that language itself can be understood as a highly distilled if inflexible intelligence, refined by successive approximation over countless lifetimes of use.

    2. I see an opportunity to interact with a system that is fairly representative of LH functionality in order to explore in an implicit, open and educative way the strengths and limitations of this perspective on the world.

    3. I am struck by the parallels between the experience of interacting with these systems and Iain’s formulation of the LH: good or even excellent at goal-directed, rule-governed activity, capable of crafting answers that conform to style and grammatical form, but lacking understanding, inflexible, and prone to inaccuracies, ‘hallucinations’ or ‘delusions’.

    I don’t know what the risks of releasing these systems into the wild for competitive, corporate market-share reasons might be but I’m looking forward to hearing what anyone might see as relevant, concerning or hopeful about this development.

  • Paul

    Organizer
    March 29, 2023 at 10:47 am

    Intelligence and Language.

    Here I’m focussing on the idea that language itself is ‘intelligent’ or perhaps ‘represents’ intelligence and I propose some ideas for critique.

    1. The Turing Test

    One surprising observation during the recent release of these systems is that they seem so capable, given the obvious limitations of the learning they have engaged in. In essence, the AI Language model has been taught to predict the next word it will use. And that’s about it.

    They are basically language prediction engines. I’m not suggesting they are in any way perfect at what they do; in fact there are numerous gaps in their knowledge (they are mostly offline and/or around two years out of date) and they make mistakes, some obvious, some not. Sometimes they even make things up (‘hallucinate’).
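To make the “prediction engine” idea concrete, here is a minimal sketch in Python of the simplest possible next-word predictor, a bigram counter. The corpus, function names and whole setup are illustrative inventions on my part; real language models use neural networks trained on vast corpora, but the underlying task, scoring likely continuations, is the same.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words most often follow it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent continuation seen in training, if any."""
    options = follows.get(word.lower())
    return options.most_common(1)[0][0] if options else None

# Tiny invented corpus, purely for illustration.
corpus = ("the cat sat on the mat . "
          "the cat chased the mouse . "
          "the cat ran under the mat .")
model = train_bigram(corpus)
print(predict_next(model, "the"))  # prints "cat"
```

Nothing here “understands” cats or mats; the model only tallies what tends to follow what, which is the point being made above.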

    However, the experience of interacting with an AI is that it has many of the features we would associate with interacting with a person. This is sometimes so convincing that the general consensus seems to be that they will routinely pass the Turing Test.

    2. Language is Refined and Distilled

    Words map onto objects, relations and dimensions in the world around us. Through use they are gradually refined. It is not unreasonable to propose that feedback about their utility shapes the ‘fit’ of those words over time, and that this represents a distillation of meaning as their associations flourish and begin to represent processes and dynamic interactions. I propose that words and their associations are virtual, heuristic sound/textual stimulus networks that model the world. The time-course of this shaping, refinement and distillation is inconceivably long; it is safe to assume, therefore, that it is profound.
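One hedged way to picture words as “networks that model the world” is distributional semantics: words used in similar contexts acquire similar profiles. The toy sketch below (invented corpus and function names, not any real library’s API) builds context-count vectors and compares them. It illustrates the distillation idea only; it is not a claim about how the brain or an LLM actually works.

```python
from collections import Counter
import math

def context_vectors(text, window=2):
    """Represent each word by counts of the words appearing near it."""
    words = text.lower().split()
    vecs = {w: Counter() for w in set(words)}
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if j != i:
                vecs[w][words[j]] += 1
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

text = ("the cat drinks milk . the dog drinks water . "
        "the cat chases mice . the dog chases cars .")
v = context_vectors(text)
# "cat" and "dog" occur in similar contexts, so their vectors align
# more closely than "cat" and "milk" do.
print(cosine(v["cat"], v["dog"]) > cosine(v["cat"], v["milk"]))  # prints "True"
```

Scaled up over billions of sentences, this kind of statistical “fit” is one plausible reading of the refinement-through-use proposed above.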

    3. Language is built in to human development

    Arguments exist that humans are not the only organisms that use language. For the sake of argument I think it is fair to say that humans are the supreme language users. Language emergence in children is underpinned by neurology and physiology to such an extent that its emergence is spontaneous, and it is strongly represented in developmental models. Over time, language use transforms the neurology of the user in key areas of the brain.

    In adults, language becomes such an ingrained and automatic process that, under the right circumstances, it will interfere with perceptual processes (e.g. Stroop Test).

    So, not only is language a foundational feature of humans, it is represented in the architecture of human neurology, the characteristics for which are inheritable. In this sense, language evolves both as a virtual network and neurobiological network of meaning.

    My thought here is that we might view the refinement and distillation of language both in use and in neurology as a virtual map of meaning which itself ‘contains’ or represents intelligence. This might account for the surprising experience of encountering an apparent intelligence when we interact with AI.

    My thoughts are that the emergence of AI in modern culture was inevitable once the first Turing Machine was devised and built. To me it seems that this is the culmination of an arc that began almost 100 years ago. One question that I find intriguing is whether AI would be inevitable within any language-using species.

    Please do pitch in if any of the above is of interest to you, or even (maybe especially even) if you think I am mistaken or you think the emergence of AI is a bad thing.

  • Whit Blauvelt

    Member
    March 30, 2023 at 7:12 pm

    The large language models are predictive, as you say. Given the sentences of the prompt, what responses can you predict, having ingested enough literature to have data on what’s likely to follow what? Now consider how much of talk in mind may also be predictive. When we listen to someone else speak, we are already engaged in predicting where they will go with it. That’s part of why jokes that frustrate our prediction can work so well. When we take ourselves to be listening to our own talk in mind, don’t we similarly predict? Once we’re doing so, can we tell the difference between “true” talk in mind and the prediction of where it’s going? That is, which of it represents our real ideas, and which merely represents where we predict they might go? Can such prediction run away, such that we effectively have a short circuit, where what we predict we might think becomes confused with our better founded thoughts?

    That presumes there’s better founding to be done than words-strung-from-words can ever have. Those who believe the source of consciousness to be in our colonization by public languages might disagree on that.

    Can the LH problem such as McGilchrist maps out be accurately described as that hemisphere short-circuiting on its own linguistic predictions, as if there were an ego, a self-in-self, a homunculus there to listen to in a predictive way, as if it were something other than our listening selves, and then nonetheless accepting what appear to be its thoughts as if they are really ours, not just plausible predictions of what we (or someone else) might say?

    If so, keeping talk in mind in a predictive frame might prevent the short, as the short requires taking the merely predicted as already really said. For most uses of language, this preserves their usefulness. It creates a puzzle though in the context of telling oneself what to do — of a language-based will such as Freud’s “ego” (or Freud’s positing of society’s will as “super-ego”). Can the answer to that puzzle be recentering the will so it’s seated in the RH rather than the LH? Or is such recentering only practicable for the artist or poet?

    • Paul

      Organizer
      April 6, 2023 at 10:58 am

      These are excellent questions Whit, thanks. In terms of runaway conditions, you certainly see a decided “stuckness” in people diagnosed with psychosis. As you will be no doubt aware, Iain refers often to the RH deficits seen in this group. They will present to services, often for decades, with the same ideas or slightly evolved variants of them.

      So, rather than constantly evolving (which is what I would imagine runaway conditions would result in), you see the polar opposite. Interesting.

      Of course, I think you may be hinting at ideas around developing interventions or upon reflective practice?

      I think there is a lot to be learned from this and for the interplay between the two hemispheres and their respective biases.

      • Whit Blauvelt

        Member
        May 2, 2023 at 4:24 pm

        Paul,

        What struck me from McGilchrist’s writing on psychosis is how the psychotic, rather than being nonrational, are instead typically hyper-rational. So you get the elaborately worked-out schemes of the paranoid, who rather than having irrational fear as in the cultural stereotype of “paranoia” have marshaled numerous bits of what looks to them like evidence. They are hallucinating order beyond what exists in the real world: often vast conspiracies of coordination far beyond real human capacities to conspire or coordinate, for instance.

        With the merely neurotic — whom McGilchrist doesn’t much discuss — you have the verbal loops well dealt with in Cognitive Therapy, where people just chase their tails and dig themselves into a circular rut. But psychosis, in McGilchrist’s framing, looks much more like what the AI researchers are calling “hallucinations” by these new chat systems, which are essentially built to predict what, given preceding language, is most likely to come next, enabling them to proceed onto new ground by confabulations not fully justified by evidence from reality or sense.

        Are these artificial information processors, when devoted to language generation in their current models, innately psychotic systems by design? Further, are “neurosis,” “psychosis,” and perhaps the “borderline” conditions with traits from both, on a spectrum of LH-imbalanced states rather than having, at base, separate etiologies?

  • Lucy Fleetwood

    Member
    April 3, 2023 at 3:33 pm

    This is really interesting, I am sitting with it and hope others will reply. I’m so aware that thoughts light up in my brain, and that I’ve finally reached a place where I don’t view them as ‘me’ or ‘truth’, for so much of my life I did. I sense that there is a new way of living, free from the LH dominating my life, re-remembering that we are much more, and yet as I reflect into language, I use my LH, it is only when I don’t reflect into language, that I remain in my RH, or so it seems.

    I am a novice and new to all of this, so please do put me right, I don’t know, what I don’t know. I recently read, ‘Klara and the Sun’, such a clever book, I found myself identifying with the AI friend, it was so cleverly written to achieve this, I had to keep reminding myself that I am a human, and it got me thinking, the reason I was so easily taken into this response to the storyline, is because so much of my life I have had to live and respond from the LH, in order to function in our world, keeping the RH for when I am on my own – just being, or doing something creative, which is a more acceptable RH pastime compared to just ‘being’.

    And so, the thought I am having is, if AI is operating in a way that mirrors our LH and lacks the RH, is that a cause for concern? It doesn’t seem to bode well when we operate solely from our LH.

    • Paul

      Organizer
      April 6, 2023 at 11:12 am

      Hi Lucy,

      Thanks for replying. Yes, I think you are absolutely right on your final thought, that AI is ‘missing something’. It turns out that AI researchers are well aware of this and are currently running experiments to explore just this issue. I am creating a separate thread on this in this group shortly.

      Basically, I think you are correct: People think of AI as ‘intelligent’ or ‘smart’. Iain’s formulation would run contrary to this. His model would predict that AI, lacking a representation of the big picture, will not only get things wrong but will be convinced they are right, and will even make things up (confabulate) that fit the initial view they have formed.

      And that is exactly what we see: AI ‘hallucinate’, making things up to fit what language predicts they should say and missing signs that they are making spectacular errors.

      I think the problem here arises because the researchers themselves lack a proper foundation in understanding exactly what intelligence is. This is changing of course because they are stuck with trying to make the AI work at a practical level and this in turn is guiding them to split the AI into two factions.

      More on this in the other thread.

  • Lucy Fleetwood

    Member
    April 17, 2023 at 2:04 pm

    Hi Paul

    Thank you. I wonder if you could help my understanding in another way?

    I found myself thinking about AI, how it can write a wealth of people’s words in a few seconds. Yesterday evening as I walked through the park I thought some more. And I realised my thinking wasn’t just a data download of words, but a fully embodied human experience. It’s never really just about the words, they are just tools for sharing something.

    I walked under a sky lit up with apricot that cast a hushed but brilliant golden light on the bushes and wet earth. The birds were singing their way to bed. Two teenagers sat on a bench having a smoke, a dog rolled in the muddy grass and a dad was running his toddler home atop his shoulders.

    This was an embodied human experience no machine can ever have or understand. We are different things.

    As I left the park and the Victorian metal gate clanged behind me, I crossed the rain soaked road sparkling in the early evening sunlight. The apricot sky created pools of gold in the puddles that drew me in, and I thought about sharing this, and I wondered what was different in my sharing, to the latest AI words downloaded in a few seconds. And I realised. When I write warm words something happens in my heart.

    If I walked in silence with a friend through hazy sunshine on a warm summer’s afternoon, words would not be needed. And so, I share warm words not as data, but through this heart that is embodied human, to other people’s. Simple silent slow beginnings. Where it will take us cannot be predicted.

    How do these thoughts fit in with it all?

    • Whit Blauvelt

      Member
      April 17, 2023 at 5:05 pm

      Hi Lucy,

      Beautifully put. By McGilchrist’s account, when out in nature the RH is more aware of the natural world than the LH, while the LH is more capable of describing the experience in complex linguistic syntax, though the semantic grounding of individual words’ meanings is more the RH’s strength. Freud, in The Ego and the Id, claimed that for something to come from preconsciousness to consciousness requires that it acquire word-representation. But from your words, we see that you were conscious of far more than those words can say — although they act as hand-waving towards much of it.

      Aren’t we always aware of more than we can say? If we effectively restrict ourselves to only those aspects of consciousness fully translated to words, might that not be the very LH-dominant position that constituted the “neurosis” which Freud diagnosed as epidemic in our civilization? Of course, McGilchrist writes only of psychosis-like symptoms, not neurosis. Yet, might both be from the same basic mistake related to our relationship with and use of language, both as individuals and cultures?

      If so, is AI further compounding the error of asking language to do too much; or might it be offloading some of the over-use of language on our own parts, so as to allow us to become less programmed by it ourselves, and freer to find ourselves back more fully in the world as the RH can know us to be?

      • Paul

        Organizer
        April 17, 2023 at 7:30 pm

        “Aren’t we always aware of more than we can say?” That is a fundamental observation I think Whit.

        And likewise this statement:

        [Of AI] “…or might it be offloading some of the over-use of language on our own parts, so as to allow us to become less programmed by it ourselves.” I have had this exact thought myself. Wouldn’t it be nice if this was how it goes? That there might be something of a letting go of a burden?

        History would suggest otherwise perhaps, but then I don’t think there has ever been a moment in history quite like this one.

    • Paul

      Organizer
      April 17, 2023 at 7:23 pm

      Hey Lucy

      Thanks for your lovely and thoughtful reply. It was very evocative, and it recalled similar experiences from my past. And I know just what you mean.

      In answer to your question, I don’t know if I can help you understand better; I think you understand. For that matter, I think we all understand… intrinsically, implicitly and fully what we experience. The issue is that this understanding of the whole isn’t particularly declarative or explicit; that is, it does not lend itself to words and is only poorly represented by written text.

      And yet your words can lead me to my recall of something from my life- your description led me to Islington in London, circa 1986… a park, a Victorian wrought iron gate, a sky suffused with colour, enchanting and timeless. A place and a moment I visit in my dreams from time to time.

      In this way, words can lead others to their version of an experience, rendering the words whole, reconstituted in a new moment.

      As Iain was emphasising in his recent YouTube conversation with Alex Gomez-Marin: consciousness and experience are about connection and relations; about the inseverability of the whole and herein lies our greater understanding.

      I will try to articulate what this evoked in me. I don’t know if this will be understandable but perhaps it will.

      Because humans use words to represent their experience to other humans, that representation tends to become an experience “thing” in collective or social settings. I am trying to communicate something of ghastly and existential complexity here so I do hope I’m not too far from the mark but basically, the same tendency is exhibited when words represent objects or classes of objects.

      If you consider this for a while, I believe it is possible to see that the naming of something (an object, a class, or a category, etc.) is an act that actually creates the thing or object. Independently of naming, ‘it’ is simply part of the whole: inseverable, contiguous and continuous with it.

      Such is the power of language for humans.

      Consider a simple example: a car. OK, we can all agree what one is. But is it a car without a road? Of course, it is a car in a field. But without a road it cannot be driven. Problematic, surely. Is it a car if there is no driver? Yes, maybe. But how about with no mechanic, no filling station? How about without the huge coniferous forests, and the intervening hundreds of millions of years required to turn them into the crude oil that provides its fuel? Or without an education system that helps people understand how the processes of nature can lead to engineering and manufacturing, and the iterations of design over centuries that lead to the car?

      If we leave the car in the field long enough it will return to its constituents.

  • Rodney Marsh

    Member
    May 2, 2023 at 4:24 am

    This is a VERY important issue.

    Now: ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead

    For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm. https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html

    I have not yet read all the responses below but when I saw this topic I was immediately interested since I had displayed my ignorance on AI and LLMs in my “Schools for the Education of the Heart”. Here are my preliminary comments emphasising the dangers of LLMs since they have no life and can have no life because they are purely data based and cannot participate in the present moment through a living body. It seems to me that Flint is taking over the information world and that has to be dangerous since beauty, goodness and truth, without a RH, cannot be a moderating presence. Without “He who holds up the sky” to rescue us we will soon be running around shouting “The sky is falling!” Here is what I wrote:

    Meditation Practices to Teach Open Attention

    Right hemisphere open attention (OA), freed from the distractions of ‘thinking’, is the foundation of the human discovery of new knowledge. The new knowledge is what is needed to rescue ourselves and nature from the disasters we see coming. This urgently needed ‘new knowledge’ will come from practices which facilitate the open attention of meditation, and not from AI. Only open attention, not AI, can provide the keys to humans having a future, because AI attention is exclusively data based.

    AI language programs are exclusively a left hemisphere perspective that attends to the past and cannot attend to the present moment. AI, both machine learning and language models, using an immense superhuman data bank, are uncannily predictive on the basis of past learnings and so can help create great new tools to live with reality. But only creative human attention to the ‘now’ can provide the knowledge needed to live into a secure future for a rapidly changed and changing world. Past human insight and wisdom, harvested by AI, will be necessary to build the world of the future, but a secure future cannot be built without the creative leadership of the open attention of the right hemisphere to the present moment.

    As Jesus said, a true teacher “brings out new and old treasures from the storeroom.” Most importantly, the living practices of training the attention and paying attention, which are embodied in the world’s Wisdom meditation practices, are always ‘in the moment’ and hold the keys to humanity living in harmony with what is. We cannot live into the future without building on the past, but neither can we walk backwards into the future using old methods to solve new problems.

    (The footnote was

    “…we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge.” Opinion | Noam Chomsky: The False Promise of ChatGPT – The New York Times (nytimes.com). It seems to me that a purely left hemisphere machine learning system can only have a ‘second hand’ right hemisphere perspective, so, whilst it can be immensely useful, there are no limits to the damage it can do. In life, only the right hemisphere can provide limits to the left hemisphere’s capacity for banal evil. For the opposite opinion see scottaaronson.blog, “The False Promises of Chomskyism”, and an interesting discussion, particularly the chatbot’s own opinion (comment #16) that it possesses a “unique kind of intelligence”.)

  • Rodney Marsh

    Member
    May 2, 2023 at 4:31 am

    LLMs are very over-hyped in one way. A simple question: could an LLM have written “The Master/Emissary” or “Matter with Things”? Only a McG RH in combination with a McG LH could come up with these world-changing books of the century.

  • Paul

    Organizer
    May 11, 2023 at 9:12 am

    This is an interesting conversation about Truth and AI that is worth watching I think. I notice how difficult Truth is to define.

    An interesting perspective from this discussion is how the breakthrough moment for AI occurred when human feedback reached a threshold of some kind, resulting in AI outputs that were useful and interesting to human users.

    https://youtu.be/Zu4y-m9AZ9E
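    As a rough illustration of the human-feedback idea above, the sketch below learns a toy “reward” from pairwise preferences: raters pick the better of two responses, and a linear scorer is nudged until it agrees with them. The features, data and update rule are all hypothetical simplifications of my own; real systems (RLHF) train a neural reward model on human comparisons and then fine-tune the language model against it.

```python
# Toy sketch of learning from pairwise human preferences.

def features(response):
    """Hypothetical features: word count, and whether the response hedges."""
    return [len(response.split()), 1.0 if "perhaps" in response else 0.0]

def reward(weights, response):
    """Linear score standing in for a learned reward model."""
    return sum(w * f for w, f in zip(weights, features(response)))

def train(preferences, steps=100, lr=0.1):
    """Nudge weights until each preferred response outscores its rival."""
    weights = [0.0, 0.0]
    for _ in range(steps):
        for preferred, rejected in preferences:
            if reward(weights, preferred) <= reward(weights, rejected):
                fp, fr = features(preferred), features(rejected)
                weights = [w + lr * (a - b) for w, a, b in zip(weights, fp, fr)]
    return weights

# Invented preference data: raters happened to favour hedged answers.
prefs = [("it is perhaps raining", "it is raining"),
         ("perhaps we should wait", "we must act now")]
w = train(prefs)
print(reward(w, "perhaps so") > reward(w, "definitely so"))  # prints "True"
```

    The point of the toy is the threshold effect mentioned above: once enough comparisons accumulate, the scorer starts generalising raters’ tastes to responses it has never seen.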

    • Dr Florence H-Jennings

      Member
      September 25, 2023 at 2:55 pm

      I agree with the interesting conversation in this very famous podcast. I think the higher risk of AI or LLMs, as precursors or ancestors of the AGIs to come, is still the risk posed by the mind-like and cognitive-architecture powers of such large mechanistic tools, programmed to some extent for that. Once knowledge is sourced from them, verification won’t be possible at some point… and if all knowledge acquisition then comes from that, we are in big trouble.
