AI Replicates Human Personality: The Stanford Experiment | Turtles AI
A team of researchers from Stanford and Google DeepMind has developed an AI system capable of replicating and simulating the behavior and personality of more than a thousand people, building GPT-4o-based avatars from recorded interviews. The experiment raises ethical questions and points to future applications.
Key points:
- AI-based creation of human avatars.
- 1052 participants interviewed to generate the data.
- Striking results: the avatars matched participants’ survey responses with 85% similarity.
- Potential policy applications, alongside ethical concerns about the use of such technologies.
A team of researchers from Stanford University and Google DeepMind has run an experiment that pushes the limits of generative AI, developing a system that simulates not only the behavior but also the personality of more than a thousand individuals. Using GPT-4o, one of OpenAI’s latest models, the scientists created digital avatars based on interviews and questionnaires administered to human participants. The 1052 participants, drawn from different demographic and social groups, were each interviewed for a couple of hours, with the AI interviewer listening, adapting, and following up on their answers to build an accurate picture of their psychological and behavioral profiles. The data obtained were then used to build the avatars, which sought to replicate each participant’s distinctive traits, including more complex aspects of personality and social attitudes.

The work, published as a preprint on arXiv, used a battery of scientific instruments, including a “Big Five” personality assessment (the five-factor model of personality: openness, conscientiousness, extraversion, agreeableness, and neuroticism) and measures of social attitudes. To gauge the quality of the replication, both the avatars and the real participants took the same tests, with participants repeating them two weeks apart to measure the consistency of their own responses. The results surprised the researchers themselves: the avatars’ answers matched the humans’ with 85 percent similarity. Notably, the real participants’ own responses, variable but broadly stable, showed an 81 percent correlation when repeated two weeks apart. The finding confirms not only the reliability of avatars in reflecting individuals’ personalities, but also the accuracy with which AI can emulate complex aspects of human psychology. This experiment opens the way for discussions about how such technologies might be employed in practical scenarios.
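The comparison described above, an avatar's answers scored against a participant's original answers, with the participant's own test-retest consistency as the yardstick, can be sketched in a few lines. This is a hypothetical illustration with toy data; the function name, the exact-match agreement measure, and the example responses are assumptions, not the paper's actual scoring pipeline.

```python
def agreement(a, b):
    """Fraction of survey items on which two response lists match exactly."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Toy data (hypothetical): one participant's answers at time 1 and at
# time 2 (two weeks later), plus the avatar's answers to the same items.
human_t1 = ["agree", "neutral", "disagree", "agree", "agree"]
human_t2 = ["agree", "agree", "disagree", "agree", "agree"]
avatar   = ["agree", "neutral", "disagree", "neutral", "agree"]

avatar_acc   = agreement(avatar, human_t1)    # avatar vs. original answers
self_consist = agreement(human_t2, human_t1)  # participant vs. themselves
normalized   = avatar_acc / self_consist      # avatar accuracy relative to
                                              # the participant's consistency
print(f"avatar: {avatar_acc:.2f}, self: {self_consist:.2f}, "
      f"normalized: {normalized:.2f}")
# prints: avatar: 0.80, self: 0.80, normalized: 1.00
```

The point of the normalization step is that an avatar cannot reasonably be expected to match a participant better than the participant matches themselves two weeks later, which is why the 81 percent self-consistency figure matters alongside the 85 percent similarity result.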
On the one hand, using avatars to test public policies, social campaigns, or collective behavior could improve the prediction of social and political impacts, giving administrators more effective tools for informed decisions. On the other hand, ethical concerns arise over the commercial use of such technologies, which could exploit simulated behaviors to manipulate consumer choices or sway public opinion. Although the experiment is currently restricted to strictly academic use, it cannot be ruled out that these technologies could one day be used for far more controversial purposes. One fact is certain: the creation of artificial replicas of humans could revolutionize fields such as marketing, psychology, and even politics, while raising important questions about the ethics and control of such technologies.
In conclusion, all that remains is to watch how society will deal with future challenges related to the evolution of generative AI.