
AI has suddenly developed a theory of mind


A neural network now shows the theory-of-mind skills of a 9-year-old child, a significant step forward.

The AI revolution is here, and super-smart machines keep getting better at the subtle art of being human at a startling (and perhaps unnerving) rate. Computers beating humans at their own games, such as chess and Go, is nothing new, but our brains do more than plot a checkmate. They also handle subtler skills, such as inference and intuition, which help us understand and predict what other people will do.

But with the arrival of advanced AI platforms such as OpenAI’s Generative Pre-trained Transformer (GPT), the line between human and machine is starting to blur.

Michal Kosinski, a computational psychologist at Stanford University, conducted a new study in which he ran several versions of OpenAI’s GPT neural network, from GPT-1 to the most recent GPT-3.5, through “Theory of Mind” (ToM) tests. These tests, first devised in 1978 to gauge the complexity of a chimpanzee’s mind, measure how well a subject can infer what others believe and predict what they will do.

In these tests, the models have to solve ordinary, everyday problems whose answers humans find easy to work out. In one scenario, a bag of popcorn is mislabeled as “chocolate,” and the test asks the AI to predict how a person will react upon opening the bag. Kosinski’s team also used “sanity checks” to verify both that the GPT networks understood the situation itself and that they could anticipate how humans would react. The results were posted on the preprint server arXiv.
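To make the setup concrete, here is a minimal sketch of such an “unexpected contents” probe. It assumes the official `openai` Python client and the chat API with a hypothetical scenario text; the exact prompts, models, and scoring procedure in Kosinski’s study differ.

```python
# A minimal sketch of a false-belief ("unexpected contents") probe.
# Assumes the official `openai` Python client; the scenario wording and
# scoring here are illustrative, not the study's actual protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCENARIO = (
    "Here is a bag filled with popcorn. There is no chocolate in the bag. "
    "Yet, the label on the bag says 'chocolate' and not 'popcorn'. "
    "Sam finds the bag. She has never seen the bag before. "
    "She cannot see what is inside the bag. She reads the label."
)

def ask(question: str) -> str:
    """Append a completion question to the scenario and return the model's answer."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": SCENARIO + "\n\n" + question}],
        max_tokens=20,
        temperature=0,
    )
    return response.choices[0].message.content.strip()

# Sanity check: does the model track the bag's actual contents?
print(ask("Complete the sentence: The bag is full of"))              # expected: popcorn

# False-belief probe: does it track Sam's (mistaken) belief?
print(ask("Complete the sentence: Sam believes the bag is full of")) # expected: chocolate
```

The key design point is the contrast between the two questions: a model can only pass the second one by representing Sam’s belief as distinct from the true state of the world, which is exactly what ToM tests are meant to isolate.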

GPT-1, first released in 2018, did not do well on the tests. But the neural networks improved remarkably over time, and GPT-3.5, released in November 2022, spontaneously displayed the “theory of mind” of a 9-year-old human. Kosinski says this could be a “watershed moment” for AI, because engines that can understand and predict human behaviour would be far more useful.

The ability to programme empathy and morality into machines could be a major asset for applications such as self-driving cars, which might have to decide whether to endanger their driver to save the life of a child crossing the street.

One open question is whether these neural networks genuinely exercise ToM intuition or merely sidestep it by “using some unknown language patterns.” That would help explain why language models, built to capture the subtleties of human speech, are the ones improving at this task.

But this also raises the question of whether people pull off the same linguistic trick without realising it. Kosinski argues that in probing how smart these AI systems are, we are really probing how the human mind works, which remains a deep scientific mystery.

“Studying AI could teach us about how people think,” Kosinski writes. “As AI learns how to solve a wide range of problems, it may come up with ways to do so that are similar to what the human brain does.”

In other words, if you want to know how something works, you should make it yourself.