
Facebook and Google built a framework to study how AI agents talk to each other

The intricacies of evolutionary linguistics are myriad and underexplored, but new research involving artificial intelligence (AI) might unlock the door to new theories about how dialects develop among speakers. In a paper (“Emergent Linguistic Phenomena in Multi-Agent Communication Games”) on the preprint server arXiv.org, researchers at Facebook AI, Google AI, and New York University describe a framework in which agents trained with deep reinforcement learning — a technique that uses a system of rewards to drive them toward certain goals — play a series of games and, in doing so, reproduce some of the “linguistic phenomena” observed in natural language.
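
The paper's actual training setup isn't reproduced in the article, but the reward-driven idea can be illustrated with a minimal REINFORCE-style sketch of a naming game. Everything here — the tabular policy, the trivially decoding stand-in listener, the vocabulary and object counts — is an assumption for illustration, not the paper's architecture:

```python
import numpy as np

VOCAB_SIZE = 10     # illustrative message vocabulary (assumption)
NUM_OBJECTS = 10    # illustrative set of "external" referents (assumption)
LEARNING_RATE = 0.1

rng = np.random.default_rng(0)

# Tabular speaker policy: one row of message logits per observable object.
speaker_logits = np.zeros((NUM_OBJECTS, VOCAB_SIZE))

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(5_000):
    obj = rng.integers(NUM_OBJECTS)           # the thing to communicate about
    probs = softmax(speaker_logits[obj])
    msg = rng.choice(VOCAB_SIZE, p=probs)     # speaker samples a message
    guess = msg                               # stand-in listener maps message -> object
    reward = 1.0 if guess == obj else 0.0     # shared reward on a successful exchange

    # REINFORCE update: raise the log-probability of messages that earned reward.
    grad = -probs
    grad[msg] += 1.0
    speaker_logits[obj] += LEARNING_RATE * reward * grad
```

After enough plays the speaker settles on a consistent "name" for each object — reward alone, with no built-in language, is what shapes the protocol.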

Their work isn’t the first to investigate language with machine learning algorithms — a paper published by Facebook researchers in June 2017 describes how two agents learned to “negotiate” with each other in chat messages. But they say that it’s the first to use “latest-generation deep neural agents” capable of dealing with “rich perceptual input,” and that it convincingly demonstrates that language can evolve from simple exchanges.

The team deployed groups — communities — of agents equipped with the ability to communicate in a simulated environment, agents whose complexity ranged from simple (a set of equations) to relatively complicated (a deep neural network). The “games” the agents were tasked with playing had several key properties: they were symmetric, enabling the agents to act as both “speakers” and “listeners”; they had the agents communicate about something “external” to themselves, such as the sensory experience of something in their environment; and they took place in a world the agents could at least partially observe.
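
Those three properties can be made concrete with a toy sketch of one symmetric round, in which each agent takes a turn as speaker and as listener about an external referent. The `Agent` class and its distance-based "listening" are illustrative stand-ins, not the paper's deep neural agents:

```python
import random

class Agent:
    """Toy agent with a private naming convention (an illustrative stand-in)."""
    def __init__(self, seed, num_objects=10):
        rng = random.Random(seed)
        self.names = {obj: rng.random() for obj in range(num_objects)}

    def speak(self, target):
        return self.names[target]             # encode the referent as a "word"

    def listen(self, message, world):
        # Decode: pick the referent whose own name sounds closest to the message.
        return min(world, key=lambda obj: abs(self.names[obj] - message))

def play_round(agent_a, agent_b, world):
    """One symmetric round: both agents act as speaker and as listener."""
    successes = 0
    for speaker, listener in ((agent_a, agent_b), (agent_b, agent_a)):
        target = random.choice(world)         # something "external" to both agents
        message = speaker.speak(target)
        guess = listener.listen(message, world)
        successes += int(guess == target)
    return successes  # 0, 1, or 2
```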

In the first of several experiments, in which between three and ten autonomous agents “played” with each other, the success rates between self-play — an agent's ability to complete a game on its own — and cross-play — agents' success in solving a game as a team — were “indistinguishable” after 150,000 to 200,000 plays, regardless of the size of the community. The researchers claim that this implies a common, shared language emerges only if there are more than two users of that language.
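
The article doesn't spell out how "indistinguishable" was measured, but a plausible sketch is to average success over all distinct agent pairs (cross-play) and compare it against each agent paired with itself (self-play), reusing the toy `play_round` from the sketch above:

```python
from itertools import combinations

def success_rate(pairs, world, rounds_per_pair):
    wins = total = 0
    for a, b in pairs:
        for _ in range(rounds_per_pair):
            wins += play_round(a, b, world)
            total += 2                        # two speaker/listener turns per round
    return wins / total

def compare(agents, world, rounds_per_pair=100):
    self_play = success_rate([(a, a) for a in agents], world, rounds_per_pair)
    cross_play = success_rate(list(combinations(agents, 2)), world, rounds_per_pair)
    return self_play, cross_play              # a shared protocol makes these converge
```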

In a subsequent test, two different linguistic communities — communities that had been trained independently and had developed separate communication protocols — were exposed to each other. The researchers report that all agents learned to “speak” with the agents from the other community, and moreover that certain agents — bridge agents — managed to get a handle on the new shared protocol “faster” and “better.” Even agents that had never interacted with each other before successfully completed games 65.6 percent of the time after 200,000 plays, but only when intergroup connectivity — i.e., communication between agents of different communities — was high. When intergroup interaction occurred only half as frequently as intragroup interaction (interaction among agents within the same community), the success rate dropped to 52.4 percent.
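
The connectivity manipulation can be sketched as a simple sampling rule: with some probability a game is scheduled across communities, otherwise within one. The `intergroup_rate` knob is an assumption; setting it to 1/3, for example, makes intergroup games half as frequent as intragroup ones:

```python
import random

def sample_pair(community_a, community_b, intergroup_rate):
    """Pick two players: across communities with probability intergroup_rate,
    otherwise from within a single (randomly chosen) community."""
    if random.random() < intergroup_rate:
        return random.choice(community_a), random.choice(community_b)
    community = random.choice((community_a, community_b))
    return tuple(random.sample(community, 2))
```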

“All that is needed in order for a common language to emerge is a minimum number of agents,” they wrote. “This finding demonstrates the rapid shift toward a common protocol in both groups where all agents learn to speak a shared language, regardless of whether they actually interact with agents from the other group, [and] implies that there is a critical level of intergroup connectivity.”

Those weren’t the only discoveries of note. The researchers observed that, across all scenarios, the agents preferred to integrate or assimilate rather than segregate when it came to language, and that language complexity — the range with which the agents expressed their states — tended to plateau earlier when there was a larger imbalance between the two communities’ populations. “This implies that the new-born contact languages arising from the contact of two similarly-sized communities tend to be substantially simpler,” they noted.

In a final experiment, the researchers had a chain of communities of equal size (five agents each) communicate in sequence, almost like a game of telephone. They found that, while agents from a pair of adjacent communities could communicate with each other almost as well as agents within a single community, communicability degraded as the distance between communities grew; agents from the first and last communities in the chain couldn’t understand each other at all.
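
The "telephone" setup amounts to scheduling games only between adjacent communities, so any shared protocol has to propagate link by link. A hypothetical scheduler, with community sizes chosen to match the article's description:

```python
import random

def chain_schedule(communities, num_games):
    """Pair agents only across adjacent links in the chain of communities."""
    pairings = []
    for _ in range(num_games):
        i = random.randrange(len(communities) - 1)   # pick a link in the chain
        a = random.choice(communities[i])            # agent from community i
        b = random.choice(communities[i + 1])        # agent from community i+1
        pairings.append((a, b))
    return pairings

# e.g. four communities of five agents each:
# communities = [list(range(k * 5, k * 5 + 5)) for k in range(4)]
```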

Taken together, the researchers said, these findings suggest that language doesn’t depend on evolved, complex linguistic capabilities, but can arise from “simple social exchanges” between “perceptually-enabled agents” playing communication games.

“We observed that a symmetric communication protocol emerges without any innate, explicit mechanism built in an agent, when there were three or more of them in a linguistic community,” they wrote.
