What does the growing emergence of AI Language Models tell us about ourselves?
A place to discuss AI and its implications for people

Reply To: General discussion

Paul (Organizer)
April 6, 2023 at 11:12 am

Hi Lucy,

Thanks for replying. Yes, I think you are absolutely right in your final thought that AI is 'missing something'. It turns out that AI researchers are well aware of this and are currently running experiments to explore just this issue; I will create a separate thread on it in this group shortly.

Basically, I think you are correct: people think of AI as 'intelligent' or 'smart'. Iain's formulation would run contrary to this. His model would predict that an AI, lacking a representation of the big picture, will not only get things wrong but will be convinced it is right, and will even make things up (confabulate) to fit the initial view it has formed. And that is exactly what we see: AIs 'hallucinate', inventing material to fit what the language predicts they should say and missing the signs that they are making spectacular errors.

I think the problem arises because the researchers themselves lack a proper foundation in understanding exactly what intelligence is. This is changing, of course, because they are stuck with trying to make the AI work at a practical level, and this in turn is guiding them to split the AI into two factions. More on this in the other thread.