Thanks for replying. Yes, I think you are absolutely right in your final thought, that AI is ‘missing something’. It turns out that AI researchers are well aware of this and are currently running experiments to explore just this issue. I will be starting a separate thread on this in the group shortly.
Basically, I think you are correct: people think of AI as ‘intelligent’ or ‘smart’. Iain’s formulation would run contrary to this. His model would predict that an AI, lacking a representation of the big picture, will not only get things wrong but will be convinced it is right, and will even make things up (confabulate) to fit the initial view it has formed.
And that is exactly what we see: AI ‘hallucinates’, making things up to fit what language predicts it should say and missing the signs that it is making spectacular errors.
I think the problem here arises because the researchers themselves lack a proper foundation in understanding exactly what intelligence is. This is changing, of course, because they are stuck with trying to make the AI work at a practical level, and this in turn is guiding them towards splitting the AI into two factions.
More on this in the other thread.