AI Agents can now communicate directly with other agents. This can’t go bad…
Allow me to introduce you to A2A, otherwise known as the Agent2Agent protocol.
The protocol uses a client/server style of communication, but it is primarily aimed at letting one agent talk to another agent.
It appears they solved some of the service discovery problems I mentioned when talking about MCPs.
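The discovery piece revolves around the idea of an "Agent Card" that an agent publishes at a well-known URL so other agents can find out what it can do. Here is a minimal sketch of what fetching one might look like; the path and field names are my approximation of the spec, not lifted from it verbatim.

```typescript
// Sketch of fetching another agent's Agent Card for discovery.
// The well-known path and exact field names are approximations of the
// A2A spec, treat them as illustrative.
interface AgentCard {
  name: string;
  description: string;
  url: string; // base endpoint the agent accepts requests at
  skills: { id: string; name: string; description: string }[];
}

async function discoverAgent(baseUrl: string): Promise<AgentCard> {
  // A2A servers advertise themselves via a card at a well-known location.
  const res = await fetch(`${baseUrl}/.well-known/agent.json`);
  if (!res.ok) throw new Error(`No agent card at ${baseUrl}: ${res.status}`);
  return (await res.json()) as AgentCard;
}

// Usage (hypothetical endpoint):
// const card = await discoverAgent("https://agents.example.com/billing");
// console.log(card.skills.map((s) => s.name));
```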
Speaking of MCPs, you might be struggling to tell the difference between all these new protocols. MCP lets an LLM agent call various tools, whereas A2A lets agents talk to each other (again, what could go wrong?).
I am going to do a deeper dive on this soon, but the framework appears to be very task focused. Maintaining the state of a task so the requesting agent knows it is still waiting is half the battle.
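To make that concrete, here is a rough sketch of the task lifecycle the spec seems to revolve around. The state names loosely mirror what I have seen in the docs, and `getTaskStatus` is a hypothetical client call standing in for whatever the real SDK exposes.

```typescript
// Rough sketch of the A2A task lifecycle; state names are illustrative.
type TaskState =
  | "submitted"       // accepted by the remote agent, not started yet
  | "working"         // remote agent is actively processing
  | "input-required"  // remote agent is blocked waiting on the requester
  | "completed"
  | "failed"
  | "canceled";

interface TaskStatus {
  taskId: string;
  state: TaskState;
  message?: string; // optional progress/update text from the remote agent
}

// The requesting agent's side of the bargain: keep checking until the task
// reaches a terminal state. `getTaskStatus` is a hypothetical client call.
async function waitForTask(
  getTaskStatus: (taskId: string) => Promise<TaskStatus>,
  taskId: string,
  pollMs = 2000,
): Promise<TaskStatus> {
  while (true) {
    const status = await getTaskStatus(taskId);
    if (["completed", "failed", "canceled"].includes(status.state)) {
      return status;
    }
    if (status.state === "input-required") {
      // In a real client this is where you would send the missing input back.
      throw new Error(`Task ${taskId} needs more input: ${status.message ?? ""}`);
    }
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
}
```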
As always, I am curious to see how this holds up at scale. Their NodeJS demo code has an “Event bus”, which makes me think that in production we would need some solid queueing behind it.
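This is not the demo's code, just a toy version of the pattern to show why an in-memory bus is the weak link: nothing survives a process restart, which is exactly where a durable queue would have to step in.

```typescript
// Toy in-memory event bus, roughly the shape the NodeJS demo suggests.
// In production the publish step would enqueue to something durable
// (Kafka, SQS, Redis streams, etc.) instead of calling handlers inline.
type TaskEvent = { taskId: string; state: string; payload?: unknown };
type Handler = (event: TaskEvent) => void;

class EventBus {
  private handlers = new Map<string, Handler[]>();

  subscribe(taskId: string, handler: Handler): void {
    const list = this.handlers.get(taskId) ?? [];
    list.push(handler);
    this.handlers.set(taskId, list);
  }

  publish(event: TaskEvent): void {
    // Fan out to every subscriber of this task.
    for (const handler of this.handlers.get(event.taskId) ?? []) {
      handler(event);
    }
  }
}

// Usage:
// const bus = new EventBus();
// bus.subscribe("task-123", (e) => console.log("update:", e.state));
// bus.publish({ taskId: "task-123", state: "working" });
```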
I would like to thank Mike K. for bringing this particular protocol to my attention at last Friday’s Tech Talk on MCPs.
BetterStack has a great short video explaining the basics, but there are a lot of moving parts I am eager to wrap my head around in a deeper dive.
What is your take on the Agent2Agent protocol?
PS: If you are interested in getting a solid plan on how your business can use AI/ML to take your existing product to the next level then you will want to sign up for our new workshop.