A team of engineers and AI specialists at Microsoft, working with researchers from the University of Chicago, has created a new language designed to make communication between large language models (LLMs) more efficient. The development, described in a paper recently posted on the arXiv preprint server and titled “DroidSpeak: Enhancing Cross-LLM Communication”, introduces an approach that could significantly speed up how AI systems interact with one another.
The core idea behind the new language, called DroidSpeak, is to streamline communication between LLMs by letting them exchange information in a form that mirrors the mathematical representations the models work with internally. Currently, LLMs communicate in natural languages such as English, which works well for human interaction but is far from the most efficient channel for AI-to-AI communication. When LLMs collaborate or share information, translating every intermediate step into natural language, only for the receiving model to parse it all over again, introduces significant overhead.
DroidSpeak addresses this by letting LLMs share only the essential data needed for the conversation, rather than spelling out every intermediate step. In the team's tests, the new language enabled two LLMs to communicate 2.78 times faster, showcasing its potential to greatly enhance the efficiency of AI systems.
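To make the general idea concrete (this is a minimal sketch, not the paper's implementation), the Python snippet below uses Hugging Face's transformers library to hand one model's attention cache (`past_key_values`) to a second model, so the receiver skips re-encoding the shared context. Here `gpt2` is a placeholder standing in for two shape-compatible checkpoints, and the example texts are invented.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" is a stand-in for two compatible checkpoints (e.g. two fine-tuned
# variants of the same base model); it keeps the sketch self-contained.
tok = AutoTokenizer.from_pretrained("gpt2")
sender = AutoModelForCausalLM.from_pretrained("gpt2")
receiver = AutoModelForCausalLM.from_pretrained("gpt2")

# The shared context that would otherwise be re-read token by token.
context = "Summary of step 1: the dataset was cleaned and deduplicated."
ctx_ids = tok(context, return_tensors="pt").input_ids

# The sender processes the context once and keeps its attention cache --
# the "essential data" that gets handed over instead of plain text.
with torch.no_grad():
    sender_out = sender(ctx_ids, use_cache=True)
cache = sender_out.past_key_values

# The receiver picks up from the cache and only processes the new tokens,
# skipping a redundant re-encoding of the whole context.
follow_up = tok(" Step 2:", return_tensors="pt").input_ids
with torch.no_grad():
    receiver_out = receiver(follow_up, past_key_values=cache, use_cache=True)

next_token = receiver_out.logits[0, -1].argmax().item()
print(tok.decode(next_token))
```

Note that caches like this are only shape-compatible between closely related models, which is why the sketch reuses a single architecture at both ends.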
The researchers also found that using the same type of LLM at both ends of the exchange yielded the best performance. They note, however, that DroidSpeak is still in its early stages and, like any language, will likely evolve over time to become richer and more adaptable.
This breakthrough could pave the way for more sophisticated AI systems that interact with one another far more efficiently, especially in complex, problem-specific domains. By improving how AIs communicate, the research team hopes to unlock new possibilities for a more universal, interconnected AI infrastructure.