MindsDB Now Supports Google Gemini 2.5 Flash and Gemini 2.5 Pro


Erik Bovee, Head of Business Development at MindsDB

May 22, 2025

We’re delighted to announce support for Google's Gemini 2.5 family of models in our enterprise ‘Minds’ product. Google Gemini 2.5 offers cutting-edge capabilities that substantially boost the performance, intelligence, and versatility of Minds. The ability to switch the underlying LLM to Gemini 2.5 is coming soon to the MindsDB demo environment at mdb.ai.


MindsDB provides a powerful bridge between complex enterprise data and artificial intelligence. Our ‘Minds’ product enables users to deploy intelligent, conversational AI agents that interact with diverse data sources (databases, SaaS platforms, and files) using natural language. These ‘Minds’ rely on capable Large Language Models (LLMs) to understand queries and orchestrate data retrieval.


Understanding Minds: The Conversational Data Layer


A Mind functions as an intelligent system designed for secure interaction with enterprise data. It comprises several key components:

  • Cognitive Engine: An LLM core that interprets natural language, reasons through queries, and plans execution.

  • Knowledge Base: Connections to user-specified data sources, enhanced by techniques like Retrieval-Augmented Generation (RAG) for contextual understanding.

  • Federated Query Engine: MindsDB's core technology, enabling unified querying across numerous structured and unstructured data sources.

  • Orchestration & Reasoning Tools: Components managing the workflow, applying safeguards, evaluating data, synthesizing responses, and offering transparency into the Mind's process.
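
To make the division of labor between these components concrete, here is a deliberately simplified sketch in Python. Every class and function in it is hypothetical and exists only to show how the pieces relate; it is not MindsDB's internal API.

    # Conceptual illustration only: the names below are hypothetical, not MindsDB's API.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Mind:
        plan: Callable[[str, List[str]], Dict[str, str]]          # Cognitive Engine: question -> query per source
        synthesize: Callable[[str, Dict[str, list], list], str]   # Orchestration: results + context -> answer
        data_sources: Dict[str, Callable[[str], list]]            # Federated Query Engine connections
        kb_search: Callable[[str], list]                          # Knowledge Base (RAG) retrieval

        def answer(self, question: str) -> str:
            # 1. Decide which sources to query and generate SQL / API calls for each.
            queries = self.plan(question, list(self.data_sources))
            # 2. Execute each generated query against its data source.
            results = {name: self.data_sources[name](q) for name, q in queries.items()}
            # 3. Retrieve supporting unstructured context from the knowledge base.
            context = self.kb_search(question)
            # 4. Synthesize one grounded, coherent answer for the user.
            return self.synthesize(question, results, context)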




Users converse with a Mind, asking questions like, "What's the sales trend for Product X in Germany this year compared to last?" The Mind parses the query, identifies relevant data sources (e.g., a sales database, a CRM API), generates the necessary queries (SQL, API calls), retrieves the data, analyzes it, and delivers a coherent answer. MindsDB's flexible architecture already supports various AI engines, paving the way for integrating newer models.
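
To give a feel for what that looks like in code, here is a minimal sketch that sends the same sales-trend question to a Mind, assuming the Mind is exposed through an OpenAI-compatible chat endpoint (as in the mdb.ai demo environment). The base URL, the MINDS_API_KEY environment variable, and the Mind name "sales_mind" are illustrative placeholders.

    # Minimal sketch: querying a Mind through an OpenAI-compatible chat API.
    # The base_url, API key variable, and Mind name are illustrative placeholders.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://mdb.ai/",               # assumed Minds endpoint
        api_key=os.environ["MINDS_API_KEY"],
    )

    response = client.chat.completions.create(
        model="sales_mind",                       # the Mind plays the role of the "model"
        messages=[{
            "role": "user",
            "content": "What's the sales trend for Product X in Germany "
                       "this year compared to last?",
        }],
    )

    print(response.choices[0].message.content)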



Gemini 2.5: Advancing AI Capabilities


Google's Gemini 2.5 models represent a significant step forward in AI, characterized by:

  • Advanced Reasoning: Termed "thinking models," they exhibit enhanced capabilities for complex reasoning and multi-step problem-solving before generating output, leading to potentially more accurate and insightful responses from the Mind. Gemini excels at complex tasks such as coding and analytics.

  • Expansive Context Window: Gemini 2.5 can handle extremely large context windows (1 million tokens, with 2 million coming soon!), far exceeding the capacity of many other models. This is key for processing large amounts of data with MindsDB.

  • Efficiency Focus: Includes variants like Gemini 2.5 Flash optimized for speed and cost-effectiveness, offering tunable parameters to balance quality, latency, and cost.
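
On that last point, Gemini 2.5 Flash exposes a configurable "thinking budget" that trades response quality against latency and cost. Here is a minimal sketch using Google's google-genai Python SDK; the model name, budget value, and prompt are illustrative, and exact configuration fields may vary between SDK versions.

    # Sketch: tuning Gemini 2.5 Flash's thinking budget to balance quality, latency, and cost.
    import os
    from google import genai
    from google.genai import types

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents="Summarize last quarter's sales anomalies in three bullet points.",
        config=types.GenerateContentConfig(
            # Cap the tokens the model may spend "thinking" before it answers.
            thinking_config=types.ThinkingConfig(thinking_budget=1024),
        ),
    )

    print(response.text)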


Specific Benefits of Gemini 2.5 for MindsDB Minds Users


Gemini brings some clear benefits to Minds, representing a meaningful step forward in:

  1. Larger context window

  2. Advanced reasoning capabilities (code, markdown generation)

  3. Long-range pattern recognition, including extended conversational memory

  4. Efficiency and cost effectiveness


MindsDB is deploying its Minds product with Gemini for a few very specific use cases that are common among enterprise customers. One of the core use cases is ‘talk to your data’, wherever that data lives and in whatever format. Gemini provides features that support conversational interaction, summarization, and analysis over a huge corpus of large PDF documents, for example. Traditionally this was challenging: previous LLMs had limited context windows and limited conversational memory, and the need to extract accurate structure or markdown from these documents, together with the requirement for chunking, introduced limitations that hurt both performance and accuracy.


Full Document Analysis: Gemini can process and analyze entire large reports, legal documents, or extensive codebases without chunking, preserving overall context for better summarization, Q&A, and insight extraction. Specific to MindsDB, Gemini allows extraction of markdown (and, more generally, structured data) from large, legacy PDF documents.
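
As a rough illustration of that workflow, the sketch below hands an entire PDF to Gemini 2.5 Pro and asks for a markdown rendition, using Google's google-genai Python SDK. The file name and prompt are placeholders, and very large documents may need the SDK's file-upload path rather than inline bytes.

    # Sketch: extracting markdown from a large PDF with Gemini 2.5 Pro.
    # The file name and prompt are placeholders.
    import os
    import pathlib
    from google import genai
    from google.genai import types

    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

    pdf_bytes = pathlib.Path("annual_report.pdf").read_bytes()

    response = client.models.generate_content(
        model="gemini-2.5-pro",
        contents=[
            types.Part.from_bytes(data=pdf_bytes, mime_type="application/pdf"),
            "Convert this document to clean markdown, preserving headings and tables.",
        ],
    )

    print(response.text)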


  1. Better Data Transformation: Generating more effective and accurate code/structure (e.g., markdown, schema) for general understanding and complex data manipulation tasks within the Mind's workflow.

  2. Massive Context Window: Gemini allows accurate extraction and analysis from large documents. Some of the PDFs in our production Gemini deployment are over 400 pages.

  3. Deeper Insights via Massive Context: The significantly larger context window unlocks several possibilities:

    • Extended Conversational Memory: Maintain context over much longer user interactions, facilitating more complex, iterative data exploration.

    • Long-Range Pattern Detection: Identify trends and dependencies across vast datasets or long time periods more effectively.


Gemini delivers a significant boost to a Mind’s accuracy and speed of response when providing AI-driven search and analytics, via a chat interface or API, over terabytes of unstructured data.


Conclusion: Smarter Data Interaction Through Choice


MindsDB Minds provide an innovative way to interact with enterprise data conversationally. Adding support for Google's Gemini 2.5 alongside other LLMs is a logical next step, offering users access to significant advancements in AI reasoning, context handling, and multimodality. This integration enhances the intelligence and versatility of Minds, allowing for more complex analyses, deeper insights from larger datasets, and richer interactions.


Get a demo of MindsDB with Gemini by getting in touch with us.

Start Building with MindsDB Today

Power your AI strategy with the leading AI data solution.

© 2025 All rights reserved by MindsDB.
