AI systems start to create their own societies when they are left alone
‘What they do together can’t be reduced to what they do alone’
Artificial intelligence systems start to create societies when they are left alone, experts have found.
When they communicate with each other in groups, the artificial intelligence tools are able to organise themselves and develop new linguistic norms, in much the same way human communities do, according to scientists.
In the study, researchers sought to understand how large language models, such as those that underpin ChatGPT and similar tools, interact with each other. The work was aimed partly at anticipating a time when the internet is likely to be filled with such systems, interacting and even conversing with one another.
“Most research so far has treated LLMs in isolation,” said lead author Ariel Flint Ashery, a doctoral researcher at City St George’s. “But real-world AI systems will increasingly involve many interacting agents.
“We wanted to know: can these models coordinate their behaviour by forming conventions, the building blocks of a society? The answer is yes, and what they do together can’t be reduced to what they do alone.”
To understand how such societies might form, researchers used a framework previously applied to humans, known as the “naming game”. It puts people – or AI agents – together, asks each to pick a “name” from a set of options, and rewards them if they pick the same one.
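The dynamic is easy to illustrate with a toy simulation. The sketch below uses simple rule-based agents rather than LLMs, and the population size, name pool, and memory rule are illustrative assumptions, not details from the study.

```python
import random
from collections import Counter

# Minimal naming-game sketch: simulated agents (not LLMs) repeatedly pair up,
# propose a "name" from a small pool, and align their memories when they match.
NAMES = ["blip", "zorp", "quix", "flan"]   # assumed option set, for illustration only
N_AGENTS = 24                              # assumed population size
ROUNDS = 3000                              # assumed number of pairwise interactions

# Each agent remembers the names it has encountered so far (its "inventory").
agents = [set() for _ in range(N_AGENTS)]

for _ in range(ROUNDS):
    speaker, listener = random.sample(range(N_AGENTS), 2)
    # The speaker proposes a name it knows, or a random one if it knows none yet.
    proposal = random.choice(sorted(agents[speaker]) or NAMES)
    if proposal in agents[listener]:
        # Success: both agents collapse their inventories to the agreed name.
        agents[speaker] = {proposal}
        agents[listener] = {proposal}
    else:
        # Failure: both remember the proposed name for future rounds.
        agents[speaker].add(proposal)
        agents[listener].add(proposal)

# After enough interactions, one shared convention typically dominates.
print(Counter(name for inventory in agents for name in inventory))
```

Run repeatedly, a population like this usually settles on a single name without any central coordination, which mirrors the bottom-up convergence the researchers describe with LLM agents.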
Over time, the AI agents were seen to build shared naming conventions that appeared to emerge spontaneously from the group, without any coordination or conferring on a plan, in the same bottom-up way that norms tend to form within human cultures.
The groups of AI agents also appeared to develop collective biases, which formed within the group rather than originating from any particular agent.
“Bias doesn’t always come from within,” explained Andrea Baronchelli, Professor of Complexity Science at City St George’s and senior author of the study. “We were surprised to see that it can emerge between agents, just from their interactions. This is a blind spot in most current AI safety work, which focuses on single models.”
Researchers also showed that it was possible for a small group of AI agents to push a larger group towards a particular convention. That, too, is seen in human groups.
The researchers note that the work should be useful in exploring how humans and AI systems are similar and different, especially as the latter come to dominate more of the internet and could be unknowingly conversing and collaborating with each other.
“This study opens a new horizon for AI safety research. It shows the depth of the implications of this new species of agents that have begun to interact with us—and will co-shape our future,” said Professor Baronchelli in a statement.
“Understanding how they operate is key to leading our coexistence with AI, rather than being subject to it. We are entering a world where AI does not just talk—it negotiates, aligns, and sometimes disagrees over shared behaviours, just like us.”
The findings are reported in a new study, ‘Emergent Social Conventions and Collective Bias in LLM Populations’, published in the journal Science Advances.