In this section, we present how to bring OpenAI models to MindsDB.

OpenAI is an AI research organization and company, known for developing AI models like GPT-3 and GPT-4. Their mission is to advance AI for the betterment of humanity, emphasizing transparency, safety, and ethics in AI development.

Read on to find out how to use OpenAI models within MindsDB.

Setup

MindsDB provides the OpenAI handler that enables you to create OpenAI models within MindsDB.

AI Engine

Before creating a model, you need to create an AI engine based on the provided handler.

If you installed MindsDB locally, make sure to install all OpenAI dependencies by running pip install .[openai] or by installing them from the requirements.txt file.

You can create an OpenAI engine using this command:

CREATE ML_ENGINE openai_engine
FROM openai
USING
    openai_api_key = 'your-openai-api-key';

Please note that you need to provide your OpenAI API key. See OpenAI’s help center or, alternatively, watch this video.

The name of the engine (here, openai_engine) should be used as a value for the engine parameter in the USING clause of the CREATE MODEL statement.

AI Model

The CREATE MODEL statement is used to create, train, and deploy models within MindsDB.

CREATE MODEL model_name
PREDICT column_to_be_predicted
USING
    engine = 'openai',
    model_name = 'openai_model_name',
    openai_api_key = 'YOUR_OPENAI_API_KEY',
    ...;

Default Model: When you create an OpenAI model in MindsDB, it uses the gpt-3.5-turbo model by default. But you can use the gpt-4 model as well by passing it to the model_name parameter in the USING clause of the CREATE MODEL statement.
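
For example, a gpt-4 model could be created like this (a minimal sketch; the model name and the question column are placeholders, and the question_column parameter is explained in the Operation Modes section below):

CREATE MODEL openai_gpt4_model
PREDICT answer
USING
    engine = 'openai_engine',
    model_name = 'gpt-4',
    question_column = 'question';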

Supported Models: To see all supported models, including chat models, embedding models, and more, click here.

The USING clause takes more parameters depending on the operation mode. Follow the next section to learn about available operation modes.

Operation Modes

Let’s go through the available operation modes.

Please note that the examples presented here use SQL. To see how to create OpenAI models in a MongoDB database using MQL, check out this example on sentiment classification.

Answering Questions without Context

Here is how to create a model that answers questions without context:

CREATE MODEL openai_model
PREDICT answer
USING
    engine = 'openai',
    question_column = 'question',
    model_name = 'model_name',
    openai_api_key = 'YOUR_OPENAI_API_KEY';

Where:

Expression        Description
openai_model      The model name is openai_model and it resides inside the mindsdb project by default. Learn more about MindsDB projects here.
answer            It is the value to be predicted.
engine            The openai engine is used.
question_column   It is the column that stores input data.
model_name        Optional. By default, the gpt-3.5-turbo model is used. If you prefer to use a cheaper model or a model that was fine-tuned outside of MindsDB, use this parameter.
openai_api_key    Your OpenAI API key.

Let’s look at an example.

CREATE MODEL openai_model
PREDICT answer
USING
    engine = 'openai',
    question_column = 'question';

On execution, we get:

Query successfully completed

Now we can query for answers.

SELECT question, answer
FROM openai_model
WHERE question = 'Where is Stockholm located?';

On execution, we get:

+---------------------------+-------------------------------+
|question                   |answer                         |
+---------------------------+-------------------------------+
|Where is Stockholm located?|Stockholm is located in Sweden.|
+---------------------------+-------------------------------+

Answering Questions with Context

Here is how to create a model that answers questions with context:

CREATE MODEL openai_model
PREDICT answer
USING
    engine = 'openai',
    question_column = 'question',
    context_column = 'context',
    openai_api_key = 'YOUR_OPENAI_API_KEY';

Where:

Expression        Description
openai_model      The model name is openai_model and it resides inside the mindsdb project by default.
answer            It is the value to be predicted.
engine            The openai engine is used.
question_column   It is the column that stores the question to be answered.
context_column    It is the column that stores the context for the question.
openai_api_key    Your OpenAI API key.

Let’s look at an example.

CREATE MODEL openai_model
PREDICT answer
USING
    engine = 'openai',
    question_column = 'question',
    context_column = 'context';

On execution, we get:

Query successfully completed

Now we can query for answers.

SELECT context, question, answer
FROM openai_model
WHERE context = 'Answer with a joke'
AND question = 'How to cook soup?';

On execution, we get:

+-------------------+------------------+---------------------------------------------------------+
|context            |question          |answer                                                   |
+-------------------+------------------+---------------------------------------------------------+
|Answer with a joke |How to cook soup? |How do you cook soup? You put it in a pot and heat it up!|
+-------------------+------------------+---------------------------------------------------------+

Prompt Completion

Here is how to create a model that offers the most flexible mode of operation. It answers any query provided in the prompt_template parameter.

Good prompts are the key to getting great completions out of large language models like the ones that OpenAI offers. For best performance, we recommend you read their prompting guide before trying your hand at prompt templating.

CREATE MODEL openai_model
PREDICT answer
USING
    engine = 'openai',
    prompt_template = 'input your query here',
    max_tokens = 100,
    temperature = 0.3,
    openai_api_key = 'YOUR_OPENAI_API_KEY';

Where:

Expression        Description
openai_model      The model name is openai_model and it resides inside the mindsdb project by default.
answer            It is the value to be predicted.
engine            The openai engine is used.
prompt_template   It stores the message or query to be answered by the model. Please note that this parameter can be overridden at prediction time.
max_tokens        It defines the maximum token cost of the prediction. Please note that this parameter can be overridden at prediction time.
temperature       It defines how risky the answers are: a value of 0 gives a well-defined answer, while a value of 0.9 gives a more creative answer. Please note that this parameter can be overridden at prediction time.
openai_api_key    Your OpenAI API key.

Let’s create a model:

CREATE MODEL openai_model
PREDICT answer
USING
    engine = 'openai',
    prompt_template = 'Context: {{context}}. Question: {{question}}. Answer:',
    max_tokens = 100,
    temperature = 0.3;

Let’s look at an example that uses parameters provided at model creation time.

SELECT context, question, answer
FROM openai_model
WHERE context = 'Answer accurately'
AND question = 'How many planets exist in the solar system?';

On execution, we get:

+-------------------+-------------------------------------------+----------------------------------------------+
|context            |question                                   |answer                                        |
+-------------------+-------------------------------------------+----------------------------------------------+
|Answer accurately  |How many planets exist in the solar system?| There are eight planets in the solar system. |
+-------------------+-------------------------------------------+----------------------------------------------+

Now let’s look at an example that overrides parameters at prediction time.

SELECT instruction, answer
FROM openai_model
WHERE instruction = 'Speculate extensively'
USING
    prompt_template = '{{instruction}}. What does Tom Hanks like?',
    max_tokens = 100,
    temperature = 0.5;

On execution, we get:

+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|instruction           |answer                                                                                                                                                                                                                         |
+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|Speculate extensively |Some people speculate that Tom Hanks likes to play golf, while others believe that he enjoys acting and directing. It is also speculated that he likes to spend time with his family and friends, and that he enjoys traveling.|
+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Model Modes

You can define a mode for OpenAI models using the mode parameter in the USING clause.

CREATE MODEL model_name
PREDICT target_column
USING
   engine = 'openai',
   mode = 'mode_name',
   prompt_template = 'message to the model';

The available modes include default, conversational, conversational-full, image, and embedding.

  • The default mode is used when no mode is specified. The model replies to the prompt_template message.
  • The conversational mode enables the model to read and reply to multiple messages.
  • The conversational-full mode enables the model to read and reply to multiple messages, one reply per message.
  • The image mode is used to create an image instead of a text reply.
  • The embedding mode enables the model to return output in the form of embeddings.
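
For example, here is a minimal sketch of a model created in the image mode; the model name, the predicted column name, and the prompt are placeholders, and the exact shape of the returned output depends on the handler:

CREATE MODEL openai_image_model
PREDICT img_url
USING
    engine = 'openai',
    mode = 'image',
    prompt_template = 'Generate an image of {{subject}}';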

Examples

Sentiment Analysis

Let’s go through a sentiment classification example to understand better how to bring OpenAI models to MindsDB as AI tables.

CREATE MODEL mindsdb.sentiment_classifier                           
PREDICT sentiment
USING
  engine = 'openai_engine',              
  prompt_template = 'predict the sentiment of the text:{{review}} exactly as either positive or negative or neutral';

On execution, we get:

Query successfully completed

Where:

Expressions       Values
project_name      mindsdb
predictor_name    sentiment_classifier
target_column     sentiment
engine            openai_engine
prompt_template   predict the sentiment of the text:{{review}} exactly as either positive or negative or neutral

In the prompt_template parameter, we use a placeholder for a text value that comes from the review column, that is, text:{{review}}.

Before querying for predictions, we should verify the status of the sentiment_classifier model.

DESCRIBE sentiment_classifier;

On execution, we get:

+--------------------+------+-------+-------+--------+--------+---------+-------------+---------------+------+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+-------+
|NAME                |ENGINE|PROJECT|VERSION|STATUS  |ACCURACY|PREDICT  |UPDATE_STATUS|MINDSDB_VERSION|ERROR |SELECT_DATA_QUERY|TRAINING_OPTIONS                                                                                                                                       |TAG    |
+--------------------+------+-------+-------+--------+--------+---------+-------------+---------------+------+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+-------+
|sentiment_classifier|openai|mindsdb|1      |complete|[NULL]  |sentiment|up_to_date   |22.12.4.3      |[NULL]|[NULL]           |{'target': 'sentiment', 'using': {'prompt_template': 'predict the sentiment of the text:{{review}} exactly as either positive or negative or neutral'}}|[NULL] |
+--------------------+------+-------+-------+--------+--------+---------+-------------+---------------+------+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+-------+

Once the status is complete, we can query for predictions.

SELECT output.sentiment, input.review
FROM example_db.demo_data.amazon_reviews AS input
JOIN mindsdb.sentiment_classifier AS output
LIMIT 3;

Don’t forget to create the example_db database before using one of its tables, like in the query above.

CREATE DATABASE example_db
WITH ENGINE = "postgres",
PARAMETERS = {
    "user": "demo_user",
    "password": "demo_password",
    "host": "samples.mindsdb.com",
    "port": "5432",
    "database": "demo"
    };

On execution, we get:

+----------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| sentiment                              | review                                                                                                                                                                                                                                                                                                                                                                            |
+----------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| positive                               | Late gift for my grandson. He is very happy with it. Easy for him (9yo ).                                                                                                                                                                                                                                                                                                         |
| The sentiment of the text is positive. | I'm not super thrilled with the proprietary OS on this unit, but it does work okay and does what I need it to do. Appearance is very nice, price is very good and I can't complain too much - just wish it were easier (or at least more obvious) to port new apps onto it. For now, it helps me see things that are too small on my phone while I'm traveling. I'm a happy buyer.|
| positive                               | I purchased this Kindle Fire HD 8 was purchased for use by 5 and 8 yer old grandchildren. They basically use it to play Amazon games that you download.                                                                                                                                                                                                                           |
+----------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Fine-Tuning of OpenAI Models

All OpenAI models belong to the group of Large Language Models (LLMs). By definition, these are pre-trained on large amounts of data. However, it is possible to fine-tune these models with a task-specific dataset for a defined use case.

OpenAI supports fine-tuning of some of its models listed here. And with MindsDB, you can easily fine-tune an OpenAI model, making it more applicable to your specific use case.
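
In general, fine-tuning in MindsDB follows the form below; this is a sketch where the model, data source, and table names are placeholders, and the full FINETUNE statement is demonstrated later in this section:

FINETUNE model_name
FROM data_source
    (SELECT prompt, completion FROM table_name);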

Let’s create a model to answer questions about MindsDB’s custom SQL syntax.

CREATE MODEL openai_davinci
PREDICT completion
USING
    engine = 'openai',
    model_name = 'davinci-002',
    prompt_template = 'Return a valid SQL string for the following question about MindsDB in-database machine learning: {{prompt}}';

You can check model status with this command:

DESCRIBE openai_davinci;

Once the status is complete, we can query for predictions:

SELECT prompt, completion
FROM openai_davinci as m
WHERE prompt = 'What is the SQL syntax to join input data with predictions from a MindsDB machine learning model?'
USING max_tokens=400;

On execution, we get:

+---------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+
| prompt                                                                                            | completion                                                                                           |
+---------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+
| What is the SQL syntax to join input data with predictions from a MindsDB machine learning model? | The SQL syntax is: SELECT * FROM input_data INNER JOIN predictions ON input_data.id = predictions.id |
+---------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+

If you followed one of the MindsDB tutorials before, you’ll see that the syntax provided by the model is not exactly as expected.

Now, we’ll fine-tune our model using a table that stores details about MindsDB’s custom SQL syntax.

Here is a table we’ll use to fine-tune our model:

SELECT prompt, completion
FROM files.openai_learninghub_ft;

And here is its content:

+---------------------------------------------------------------------------------------------------+------------------------------------------+
| prompt                                                                                            | completion                               |
+---------------------------------------------------------------------------------------------------+------------------------------------------+
| What is the SQL syntax to connect a database to MindsDB?                                          | CREATE DATABASE datasource_name          |
|                                                                                                   | [WITH] [ENGINE [=] engine_name] [,]      |
|                                                                                                   | [PARAMETERS [=] {                        |
|                                                                                                   |    "key": "value",                       |
|                                                                                                   |    ...                                   |
|                                                                                                   |    }];                                   |
|                                                                                                   |                                          |
| What is the SQL command to create a home rentals MindsDB machine learning model?                  | CREATE MODEL                             |
|                                                                                                   |   mindsdb.home_rentals_model             |
|                                                                                                   | FROM example_db                          |
|                                                                                                   |   (SELECT * FROM demo_data.home_rentals) |
|                                                                                                   | PREDICT rental_price;                    |
|                                                                                                   |                                          |
| What is the SQL syntax to join input data with predictions from a MindsDB machine learning model? | SELECT t.column_name, p.column_name, ... |
|                                                                                                   | FROM integration_name.table_name [AS] t  |
|                                                                                                   | JOIN project_name.model_name [AS] p;     |
+---------------------------------------------------------------------------------------------------+------------------------------------------+

This is how you can fine-tune an OpenAI model:

FINETUNE openai_davinci
FROM files
    (SELECT prompt, completion FROM openai_learninghub_ft);

The FINETUNE command creates a new version of the openai_davinci model. You can query all available versions as below:

SELECT *
FROM models_versions
WHERE name = 'openai_davinci';

Once the new version status is complete and active, we can query the model again, expecting a more accurate output.
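
For example, you can confirm which version will be used like this (a minimal sketch, assuming the models_versions table exposes the version, status, and active columns):

SELECT name, version, status, active
FROM models_versions
WHERE name = 'openai_davinci';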

SELECT prompt, completion
FROM openai_davinci as m
WHERE prompt = 'What is the SQL syntax to join input data with predictions from a MindsDB machine learning model?'
USING max_tokens=400;

On execution, we get:

+---------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+
| prompt                                                                                            | completion                                                                                           |
+---------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+
| What is the SQL syntax to join input data with predictions from a MindsDB machine learning model? | SELECT * FROM mindsdb.models.my_model JOIN mindsdb.input_data_name;                                  |
+---------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------+