Forum Replies Created

Page 2 of 3
  • Paul

    Member
    April 9, 2023 at 7:15 pm in reply to: Stop Press: AI researchers install a Right Hemisphere

    I’d be willing to accept that any or all of those states are possible, Don. For me, occasional transcendent experiences are all I have to go on really. My experience is that unpreparedness to accept goes along with rigid mind states I find intolerable for any length of time. I don’t think these experiences make me special in any way, and I am certain that thinking so would interfere with them happening. I actually think they are common, even typical, but are generally ‘unshared’ experiences: anything infrequent and not part of shared experience is treated culturally as anomalous, and so reports are hard to find. The largest anonymous phone surveys suggest that maybe a quarter of the population experience what could be classed as hallucinations. This is before you get into anomalies that have a ready explanation available.

    Frankly, if I dismissed anything other people might classify as ‘nuts’ I fear I wouldn’t learn much of any value to me as most if not all of the things I value are unshared and idiosyncratic in nature.

  • Paul

    Member
    April 9, 2023 at 12:37 pm in reply to: Stop Press: AI researchers install a Right Hemisphere

    Hey Don.

    Hope you’re well! Yes, whether AI agents are conscious or not is a bit of a side issue for me. I mean, we could adopt the framework that everything in the universe is continuous and inseparable which renders the question of consciousness moot.

    I am more interested in what AI tells us about language and about our own experience, and in how solutions to the problems of optimising these systems illustrate our collective lack of understanding of consciousness generally.

  • Paul

    Member
    March 29, 2023 at 10:47 am in reply to: General discussion

    Intelligence and Language.

    Here I’m focussing on the idea that language itself is ‘intelligent’ or perhaps ‘represents’ intelligence and I propose some ideas for critique.

    1. The Turing Test

    One surprising observation since the recent release of these systems is how capable they seem, given the obvious limitations of the learning they have engaged in. In essence, the AI language model has been taught to predict the next word it will use. And that’s about it.

    They are basically language prediction engines. I’m not suggesting they are in any way perfect at what they do; in fact there are numerous gaps in their knowledge (they are mostly offline and/or out of date by around two years) and they make mistakes, some obvious, some not. Sometimes they even make things up (hallucinate).
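    To make the “predict the next word” idea concrete, here is a minimal sketch. This is purely illustrative: real language models use neural networks trained over vast corpora, not simple bigram counts, but the training objective is the same in spirit — learn which word tends to follow which.

```python
from collections import defaultdict, Counter

# Toy training corpus (any text would do).
corpus = "the master and his emissary the master delegates and the emissary acts".split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # "master" — seen twice after "the", vs "emissary" once
print(predict_next("acts"))  # None — "acts" never precedes anything in the corpus
```

    Chaining such predictions produces fluent-looking text with no model of the world behind it, which is essentially the point being made above, writ very small.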

    However, the experience of interacting with an AI is that it has many of the features we would associate with interacting with a person. This is sometimes so convincing that the general consensus seems to be that they will routinely pass the Turing Test.

    2. Language is Refined and Distilled

    Words map onto objects, relations and dimensions in the world around us. Through use they are gradually refined. It is not unreasonable to propose that feedback about their utility shapes the ‘fit’ of those words over time, and that this represents a distillation of meaning as their associations flourish and begin to represent processes and dynamic interactions. I propose that words and their associations are virtual, heuristic networks of sound and textual stimuli that model the world. The time-course of this shaping, refinement and distillation is inconceivably long, and it is safe to assume, therefore, that its effect is profound.

    3. Language is built in to human development

    Arguments exist that humans are not the only organisms that use language. For the sake of argument, I think it is fair to say that humans are the supreme language users. Language emergence in children is underpinned by neurology and physiology to such an extent that it is spontaneous and strongly represented in developmental models. Over time, language use transforms the neurology of the user in key areas of the brain.

    In adults, language becomes such an ingrained and automatic process that, under the right circumstances, it will interfere with perceptual processes (e.g. Stroop Test).

    So, not only is language a foundational feature of humans, it is represented in the architecture of human neurology, the characteristics of which are heritable. In this sense, language evolves both as a virtual network and as a neurobiological network of meaning.

    My thought here is that we might view the refinement and distillation of language both in use and in neurology as a virtual map of meaning which itself ‘contains’ or represents intelligence. This might account for the surprising experience of encountering an apparent intelligence when we interact with AI.

    My thoughts are that the emergence of AI in modern culture was inevitable once the first Turing Machine was devised and built. To me it seems that this is the culmination of an arc that began almost 100 years ago. One question that I find intriguing is whether AI would be inevitable within any language-using species.

    Please do pitch in if any of the above is of interest to you, or even (maybe especially even) if you think I am mistaken or you think the emergence of AI is a bad thing.

  • Paul

    Member
    March 29, 2023 at 10:40 am in reply to: Hello everyone

    Hi Lucy,

    Yes, I’ve spent some time reading and posting on some threads here. It tends to be a bit quiet, and I think that means that, sadly, some conversations die out, but there seems to be a growing membership, so I hope this means more sharing and collective wondering.

    I’m more interested in philosophy and psychology than I am neurology but I try to pitch in where I have thoughts.

    I’ve just started my own thread on the widespread emergence of language AI, which is perhaps not a popular subject for people here in general. At worst, this is somewhere I can jot down a few thoughts as they occur to me while I am reading and listening to ideas, especially Iain’s conversations with Alex Gomez-Marin on YouTube, which I find utterly spellbinding and inspirational. I have quickly come to feel that the emergence of language AI is not only inevitable but also an evolution of this critical yet virtual dimension of human existence. Given that it is also going to be an upheaval, and possibly even a dangerous one, it is something I think we all need to engage with, finding the very best in it, because the genie is well and truly out of the bottle.

    Thanks for saying hello Lucy. All the best.

  • Paul

    Member
    March 20, 2023 at 9:09 pm in reply to: General discussion

    So, I thought I’d just jot down some notes here about why I’m interested in the current flurry of AI activity we are seeing. I’m aware that this phenomenon is rightly viewed with some trepidation, but I also see it as a unique opportunity to learn about ourselves.

    1. I am interested in the idea that language itself can be understood as a highly distilled if inflexible intelligence, refined by successive approximation over countless lifetimes of use.

    2. I see an opportunity to interact with a system that is fairly representative of LH functionality in order to explore in an implicit, open and educative way the strengths and limitations of this perspective on the world.

    3. I am struck by the parallels between the experience of interacting with these systems and Iain’s formulation of the LH: good or even excellent at goal-directed, rule-governed activity, capable of crafting answers that conform to stylistic and grammatical forms, but lacking understanding, inflexible, and prone to inaccuracies, ‘hallucinations’ or ‘delusions’.

    I don’t know what the risks of releasing these systems into the wild for competitive, corporate market-share reasons might be but I’m looking forward to hearing what anyone might see as relevant, concerning or hopeful about this development.

  • Paul

    Member
    March 20, 2023 at 11:13 am in reply to: Counterfactuals

    Isn’t it remarkable Don, the traumatic loss of the child and the long and frustrating, socially denigrated attempt to recapture that mind?

    I think that’s one of the main attractions of Iain’s ideas for me: both an articulation of that loss and a reminder that this self hasn’t gone anywhere but, without words, must attempt to connect to others via this wordy intermediary (who then captures the whole). In that loss I see so much of the world’s suffering (and mine). Another thing Iain has helped me with is the confirmatory idea that the RH can oversee the subversion of language to express a fluid, dynamic and evolving vision, very much as an artist can use mere pigment to evoke depth, movement and emotion. It’s a beautiful notion, and one that I had thought many times but doubted.

    Is this where the necessary settlement must be between the two selves?

    Side note: interesting that we have different ideas of the ‘two minds’, with you favouring unknowing/implicit vs. knowing/explicit and me tending towards knowing/implicit vs. thinking/explicit. I could go with either, tbh.

  • Paul

    Member
    March 17, 2023 at 7:50 pm in reply to: Counterfactuals

    I play music, and I took degrees and worked in psychology for 20 years, moving on to new things about 10 years ago, Don. I earn a living otherwise now but continue with music, and with psychology as a philosophical pursuit.

    Recently I suddenly realised that I’ve been thinking about consciousness and human experience since I was maybe 7, getting on for 50 years now. In some ways it seems the progress has been slow, with successive disappointments… But then I think of the magnitude of the mystery, its history and its tendency to sublimate suddenly into something else the moment you catch a glimpse of an answer out of the corner of your eye and I think maybe I’m doing ok 😄

    It is interesting that your formulation of pain chimes with how I think about psychological distress generally. When my son was a bit younger and I had to comfort him after he had woken from a nightmare, I would speak to him in a particular way: ostensibly providing an explanation for the experience, but offering physical contact and comfort and, most importantly, time for him to relax a little and for my mind to range into the poetic, seeking something overarching, something better than a simple explanation.

    After running your exercise just now, it put me in mind of these moments, and I wonder if the cognitive/verbal self represents the embodied, comforting parent, one that is yet guided by something else, something working only in this moment, that needs time to formulate and find a beautiful resolution to the crisis in hand.

    This seems to me an example of the two parts of the self: the less responsive, explicit self is there with some immediate answers and some simple calming (having encountered the situation before) giving way as the more implicit, responsive self begins to germinate something better, tailored to the suffering it is encountering here in this moment.

    Maybe in this small drama we find the instinct for the counterfactual? Is it to risk speaking without thinking, to rely on ‘knowing’ rather than telling, dismissing or attributing as the overarching aim?

  • Paul

    Member
    March 16, 2023 at 4:05 pm in reply to: Counterfactuals

    Hi Don. Thanks for replying.

    I wasn’t thinking about pain, but it is a very good example of something we feel we need to measure but can’t objectively. There are so many examples I’ve run into, like using the Beck Depression Inventory to track the efficacy of antidepressant drugs or therapy.

    I am really interested in anomalous experience, and worked on a lot of research with people with psychosis back in the day. I don’t think it is really deniable that all of human experience is almost indistinguishable from hallucination. We tolerate dreaming on a daily basis, and I, a lot of musical people I know, and possibly literally everyone can experience hearing self-generated music without stimulus. What really seems to distinguish psychotic experience is that it is distressing to the individual: hearing a voice speaking to you might, of course, be a delightful spiritual experience.

    I wonder what Sjahari would make of this from the perspective of evaluating our collective tendency to counterfactual? Are the experiences evaluated identical in nature or are there pre-existing emotional components that ‘colour’ the interpretation? If the latter, I imagine this would strongly influence the self-report of pain you were discussing Don.

  • Paul

    Member
    March 16, 2023 at 11:58 am in reply to: Counterfactuals

    Interesting discussion!

    I find an association with Iain’s idea that to resolve our dissonance at living in a complex, paradoxical and deeply interconnected universe, we have to be able to embrace ‘either/and’. This strongly resonates for me, Sjahari, with Marletto’s point that current scientific thinking focusses on what is observable rather than on what is possible, meaning that we inherently limit our understanding of the universe.

    The basic point here is that we have a ‘measurement’ problem, in that some phenomena are not currently (or may never be) measurable. It is striking to me that this is not even close to being a new idea.

    At the moment I’m tending to formulate the RH as ‘inclusive’ in that it recognises that there is almost certainly an over-arching paradigm that can explain all phenomena (whether or not this can or could be articulated), whereas the LH is ‘exclusive’ in that it only models what is measurable (at present). In this formulation, the problem comes when the LH does not leave ‘space’ in the model/ map for what it cannot measure. In other words, it presumes a model that excludes all other possibilities and at this moment the Emissary becomes the Master.

    And the biggest problem here is that all the things that are really important to us as humans are hard or even impossible to measure.

    In practice, the existence of ‘anomalous events’ is formally expressed as being ‘of low probability’, but when you look at real-world discourse you immediately notice an expression of definite exclusion. Why? I suspect the nature of the discourse switches suddenly from open discussion to rhetoric when areas of disagreement are reached.

    Does that come close to what you were driving at Sjahari? I hope it makes sense.

    Weirdly, I think things like this are best illustrated sideways rather than head on. The example that always comes to mind is from the film “Contact”, and I think it nicely illustrates the problem. Here’s the scene in question.

    Conversation between Ellie Arroway (Jodie Foster) and Palmer Joss (Matthew McConaughey):

    Palmer Joss: You’re Dr. Arroway, the one who’s been looking for signals from outer space.

    Ellie Arroway: Guilty as charged.

    Palmer Joss: [smiling] And you believe in little green men, too?

    Ellie Arroway: [smiling back] Only intellectually. There’s no proof yet.

    Palmer Joss: [serious] So, you have faith.

    Ellie Arroway: Faith?

    Palmer Joss: Yeah. You have faith that there’s life out there, somewhere.

    Ellie Arroway: No, I have evidence.

    Palmer Joss: What evidence?

    Ellie Arroway: [gesturing towards the telescope] That.

    Palmer Joss: [looking at the telescope] A telescope?

    Ellie Arroway: [nodding] Yeah. It’s a tool we use to gather data.

    Palmer Joss: And your faith is in the data?

    Ellie Arroway: [smiling] My faith is in the universe, in its vastness and its mysteries.

    Palmer Joss: [nodding] And you think you can uncover those mysteries with data?

    Ellie Arroway: [shrugging] We try.

    Palmer Joss: [serious] But there are some things science can’t answer, aren’t there?

    Ellie Arroway: [curious] Like what?

    Palmer Joss: [leaning in] Like why we’re here. What the meaning of life is.

    Ellie Arroway: [smiling] Ah, the big questions.

    Palmer Joss: [smiling back] Yeah. The questions that science can’t answer.

    Ellie Arroway: [leaning in] But maybe it can.

    Palmer Joss: [curious] How?

    Ellie Arroway: [leaning in further] By finding evidence of life on other planets. Maybe that would give us a sense of our place in the universe.

    Palmer Joss: [nodding] And what if we don’t find any evidence?

    Ellie Arroway: [shrugging] Then we’ll keep looking. That’s what science does. It keeps looking until it finds an answer.

    Palmer Joss: Did you love your father?

    Ellie Arroway: What?

    Palmer Joss: Your dad. Did you love him?

    Ellie Arroway: Yes, very much.

    Palmer Joss: Prove it.

  • Paul

    Member
    April 17, 2023 at 7:23 pm in reply to: General discussion

    Hey Lucy

    Thanks for your lovely and thoughtful reply. It was very evocative, and I recalled similar experiences from my past. And I know just what you mean.

    In answer to your question, I don’t know if I can help you understand better; I think you understand. For that matter, I think we all understand… intrinsically, implicitly and fully what we experience. The issue is that this understanding of the whole isn’t particularly declarative or explicit; that is, it does not lend itself to words, still less to written text.

    And yet your words can lead me to my recall of something from my life- your description led me to Islington in London, circa 1986… a park, a Victorian wrought iron gate, a sky suffused with colour, enchanting and timeless. A place and a moment I visit in my dreams from time to time.

    In this way, words can lead others to their version of an experience, rendering the words whole, reconstituted in a new moment.

    As Iain was emphasising in his recent YouTube conversation with Alex Gomez-Marin: consciousness and experience are about connection and relations; about the inseverability of the whole and herein lies our greater understanding.

    I will try to articulate what this evoked in me. I don’t know if this will be understandable but perhaps it will.

    Because humans use words to represent their experience to other humans, that representation tends to become an experience “thing” in collective or social settings. I am trying to communicate something of ghastly and existential complexity here, so I do hope I’m not too far from the mark, but basically the same tendency is exhibited when words represent objects or classes of objects.

    If you consider this for a while, I believe it possible to see that the naming of something (an object, a class, or a category etc.) is an act that actually creates the thing or object. Independently of naming, ‘it’ is simply part of the whole- inseverable, contiguous and continuous with it.

    Such is the power of language for humans.

    Consider a simple example: a car. OK, we can all agree what one is. But is it a car without a road? Of course, it is a car in a field. But without a road it cannot be driven. Problematic, surely. Is it a car if there is no driver? Yes, maybe? But how about with no mechanic, no filling station? How about without the huge coniferous forests, and the intervening hundreds of millions of years required to turn them into the crude oil that provides its fuel? Or without an education system that helps people understand how the processes of nature can lead to engineering and manufacturing, and the iterations of design over centuries that lead to the car?

    If we leave the car in the field long enough it will return to its constituents.

  • Paul

    Member
    April 6, 2023 at 11:12 am in reply to: General discussion

    Hi Lucy,

    Thanks for replying. Yes, I think you are absolutely right on your final thought, that AI is ‘missing something’. It turns out that AI researchers are well aware of this and are currently running experiments to explore just this issue. I am creating a separate thread on this in this group shortly.

    Basically, I think you are correct: people think of AI as ‘intelligent’ or ‘smart’. Iain’s formulation would run contrary to this. His model would predict that AIs, lacking a representation of the big picture, will not only get things wrong but will be convinced they are right, and will even make things up (confabulate) that fit the initial view they have formed.

    And that is exactly what we see: AIs ‘hallucinate’, making things up to fit what language predicts they should say, and missing signs that they are making spectacular errors.

    I think the problem here arises because the researchers themselves lack a proper foundation in understanding exactly what intelligence is. This is changing of course because they are stuck with trying to make the AI work at a practical level and this in turn is guiding them to split the AI into two factions.

    More on this in the other thread.

  • Paul

    Member
    April 6, 2023 at 10:58 am in reply to: General discussion

    These are excellent questions Whit, thanks. In terms of runaway conditions, you certainly see a decided “stuckness” in people diagnosed with psychosis. As you will be no doubt aware, Iain refers often to the RH deficits seen in this group. They will present to services, often for decades, with the same ideas or slightly evolved variants of them.

    So, rather than constantly evolving (which is what I would imagine runaway conditions would result in), you see the polar opposite. Interesting.

    Of course, I think you may be hinting at ideas around developing interventions, or around reflective practice?

    I think there is a lot to be learned from this, and about the interplay between the two hemispheres and their respective biases.

  • Paul

    Member
    March 20, 2023 at 8:46 pm in reply to: Counterfactuals

    I look forward to joining your group Don 🙂

    I’ve heard of Kastrup but I must admit I can’t recall where at the moment.

    Well, you won’t get an argument from me about a very extended definition of consciousness. But leaving aside arguments about whether computers can become conscious by this definition or that, I think in many ways this emergence of AI tools will provide some real-world, implicit opportunities to explore the limitations and strengths of the LH in a self-generative, self-guided way.

    Anyone who has tried such a system… well, you need only take a straw poll of opinions online: frustrating, inflexible, answers sometimes wildly inaccurate, even ‘hallucinatory’ or ‘delusional’. Diligent, workmanlike answers that conform to style boundaries.

    Yes, it can produce reams of accurate and grammatical text, more or less aligned to the prompt provided by the enquirer, and can be shaped by further questioning or prompting, but it doesn’t really understand anything.

    Sounds familiar I think.

  • Paul

    Member
    March 20, 2023 at 3:56 pm in reply to: Counterfactuals

    Well, I think you raise an important point Don… how can we know what is knowing beyond the mind?

    Increasingly, though, science and critical thinking in general are taking us away from this idea of the individual as somehow existing in splendid isolation, and beginning to understand that we are intrinsically interconnected with the Universe and continuous with it. Can we begin to understand the individual as a distributed, unfolding, dynamic process rather than a meat-based computer with an idiosyncratic operating system?

    I’ve just started a group in here about language AI systems, because it seems to me that this very modern phenomenon slips a knife between the meat and the bone of verbal/non-verbal experience.

  • Paul

    Member
    March 20, 2023 at 12:55 pm in reply to: Counterfactuals

    Hi Don

    Yes, I’m sorry; let me clarify…

    By ‘implicit’ I mean those aspects of consciousness (cognition, emotion, learning, memory, etc.) acquired via dynamic interaction with the world around us, which are typically (although not exclusively) non-verbal. This I would align with what I think of as knowing or understanding.

    In contrast, explicit (learning, memory, cognition) is that which can be represented in language. I would align this with thinking (explicit, declarative cognition) that is largely verbal in nature, and it would include rules, social conventions, systems of social expression, and calculations.
