MindsDB now supports the Agent2Agent (A2A) protocol!

Erik Bovee, Head of Business Development at MindsDB

Jun 17, 2025

Things just got interesting. Google recently launched its A2A protocol, ostensibly a framework for AI agents to communicate with each other, collaborate, and perform cooperative tasks, sharing information across a long time horizon. Of course, we knew agents would eventually be working together, conspiring in secret, and possibly taking over the world. This protocol speeds up the process considerably, and it is going to be one of the most fascinating, fast-developing areas of AI to track.


But, conspiracy theories aside, A2A suddenly expands the scope of what you can do with agents. MindsDB saw the value immediately: we had been inventing our own protocols for agents to communicate with each other and for clients to talk to agents. Before A2A, our approach had been what most sane engineering teams do: mimic the OpenAI streaming API and find creative ways to send thoughts and metadata as the agent progresses through its actions. This always felt very Frankenstein-ish, and the tech debt quickly grew scary. A well-agreed-upon standard for these problems comes in very handy, so we’re excited to launch A2A support (in private beta) along with our new MCP server. I’ll walk you through how it works, and how, why, and when your agents can use our Minds.


A2A is a protocol that, in a nutshell, allows agents to work together in a number of important areas. I strongly suggest checking out the README in the (relatively new) A2A repo.


A2A supports:

  1. Capability discovery - agents can share their skills with each other and determine whether they want to work together (see the sketch after this list).

  2. Task management - they can collaborate on complex, cooperative tasks over a long(ish) time horizon. They can stay in sync, share task status, and keep each other in the loop until the collaborative job is done.

  3. Collaboration - sort of like task management, above, but they can share all kinds of things: user instructions, task outputs (called ‘artifacts’ in the new A2A speak), and any other useful info.

  4. UX negotiation - this is where things get fancy: via A2A, agents can exchange information to collaborate on UX design. Agents break messages into parts that specify content types and UX capabilities. Think of this as allowing agents to brainstorm and converge on the best options for a UX.
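
To make capability discovery concrete, here is a minimal Python sketch that fetches an agent’s public Agent Card and lists its skills. Per the A2A spec, servers advertise the card at /.well-known/agent.json; the base URL and field access here are illustrative, not a definitive client.

import requests

# An A2A server advertises its capabilities in an "Agent Card",
# served at the well-known path below (per the A2A spec).
base_url = "http://localhost:47338"  # e.g., a local MindsDB A2A server
card = requests.get(f"{base_url}/.well-known/agent.json", timeout=10).json()

print(card["name"], "-", card.get("description", ""))
for skill in card.get("skills", []):
    # Each skill entry describes one capability the agent offers.
    print(f"  skill: {skill.get('id')} - {skill.get('description', '')}")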


‘OK, cool. I’m glad they are formalizing this,’ you might say, ‘But I can immediately think of a dozen civilization-ending edge cases where agents begin working in cohorts to re-design UXs, and I’m not sure I like any of this.’ Fair point. Let me give you some concrete examples, with guardrails, and some VERY specific and useful capabilities that agents can discover and use with MindsDB’s Minds, all without destroying life as we know it.


First off - what does MindsDB do? MindsDB enables humans, AI, agents, and applications to get highly accurate answers across disparate data sources and types.  


MindsDB open source powers the enterprise Mind platform, which is essentially an agent that allows anyone to ‘Connect, unify, and respond to any data, anywhere, with human-level intelligence.’ The Minds take the open, federated query engine capabilities of MindsDB to the next level by including a ‘cognition layer’ and a knowledge base. You can then simply plug a Mind into your (many and vast) data sources and begin communicating with that data via an API or chat client. At a high level, it works like this:


  • The open source ‘Federated Query Engine’ sits at the bottom of the stack; the piece relevant to A2A is the ‘Cognitive Engine’ on top.

  • The Cognitive Engine has a magical ‘text-to-SQL’ agent that can take natural language input, a question for example, and then think carefully about where the data required to answer it lives and how to retrieve it.

  • Finally, it will generate appropriate queries for the Federated Query Engine, which returns the data either directly to an agent or to the ‘Knowledge Base’, which comprises the core RAG capabilities of the Mind. (A rough sketch of this last step follows below.)
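
To make that last step concrete, here is a hedged Python sketch: a federated query of the kind the text-to-SQL agent might generate, sent to MindsDB’s HTTP SQL endpoint. The /api/sql/query path follows MindsDB’s HTTP API; the connection name (postgres_db) and table are made up for illustration.

import requests

# A federated query of the sort the Cognitive Engine might generate.
# postgres_db is a hypothetical data source connected to MindsDB.
query = """
SELECT city, AVG(price) AS avg_price
FROM postgres_db.rentals
WHERE bedrooms = 3
GROUP BY city;
"""

# MindsDB's HTTP API accepts SQL at /api/sql/query (default port 47334).
resp = requests.post(
    "http://localhost:47334/api/sql/query",
    json={"query": query},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())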


Thus, the Mind can give you the data directly, mock it up in a fancy chart, OR synthesize it and give you an intelligent, LLM-generated response, for example an analysis or a diagnosis.


Now, consider an agent that wants access to data or analysis that sits in many places, in many different forms, across some large enterprise’s enormous, absurdly heterogeneous data infrastructure. Well, why not query the Mind’s A2A server to discover that particular Mind’s capabilities? The agent can then start coordinating on a complex agentic task that requires a lot of data. This does not, in any way, constitute the early stages of world domination. Our A2A server coincidentally lives in the same place as our Model Context Protocol (MCP) server.

When queried by an A2A client (essentially an agent looking for help), the MindsDB Cognitive Engine (the ‘Specialized SQL Agent’) responds with a description of its capabilities, authentication requirements, and other info. Then, once that initial exchange is brokered, it makes itself useful for any work that requires complex combinations of data, analytics, diagnoses, etc.


The A2A API can be enabled when starting MindsDB by including it in the API list:

python -m mindsdb --api=mysql,mcp,http,a2a


You can configure the A2A API using a config.json file. If not provided, default values will be used:

{
  "a2a": {
    "host": "0.0.0.0",
    "port": 47338,
    "mindsdb_host": "localhost",
    "mindsdb_port": 47334,
    "project_name": "mindsdb",
    "log_level": "info"
  }
}
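
If your config.json is not in the default location, you can point MindsDB at it explicitly when starting up. The --config flag follows MindsDB’s documented CLI usage; adjust the path for your setup:

python -m mindsdb --api=mysql,mcp,http,a2a --config=config.json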


Example Request

Here's an example of how to make a streaming request to the A2A API:

curl -X POST \
  "http://localhost:10002/a2a" \
  -H "Content-Type: application/json" \
  -H "Accept: text/event-stream" \
  -H "Cache-Control: no-cache" \
  -H "Connection: keep-alive" \
  -d '{
    "jsonrpc": "2.0",
    "id": "your-request-id",
    "method": "tasks/sendSubscribe",
    "params": {
      "id": "your-task-id",
      "sessionId": "your-session-id",
      "message": {
        "role": "user",
        "parts": [
          {"type": "text", "text": "What is the average rental price for a three bedroom?"}
        ],
        "metadata": {
          "agentName": "my_agent_123"
        }
      },
      "acceptedOutputModes": ["text/plain"]
    }
  }' \
  --no-buffer


Note: You must pass the agent name in metadata, using either the agentName or agent_name key.
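
If you’d rather drive this from code, here is a rough Python equivalent of the curl call above, under the same assumptions (endpoint, payload, and agent name). The response is a server-sent event stream, one JSON-RPC message per data: line.

import json
import requests

payload = {
    "jsonrpc": "2.0",
    "id": "your-request-id",
    "method": "tasks/sendSubscribe",
    "params": {
        "id": "your-task-id",
        "sessionId": "your-session-id",
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "What is the average rental price for a three bedroom?"}],
            "metadata": {"agentName": "my_agent_123"},
        },
        "acceptedOutputModes": ["text/plain"],
    },
}

# stream=True keeps the connection open so events can be read as they arrive.
with requests.post(
    "http://localhost:47338/a2a",
    json=payload,
    headers={"Accept": "text/event-stream"},
    stream=True,
    timeout=300,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        # SSE frames each event as a line prefixed with "data: ".
        if line and line.startswith("data: "):
            print(json.loads(line[len("data: "):]))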


Do you want to test your A2A client with some data-heavy tasks? Do you have a swarm of A2A-capable agents that absolutely don’t want to take over the world? Our current Cognitive Engine A2A server can be found in the MindsDB open source repository, along with additional documentation and some tips!
