
Building an Intelligent SQL Query Assistant with Neon, .NET, Azure Functions, and Azure OpenAI service

Learn how to create a SQL Query Assistant with Neon Serverless Postgres AI features.


The Neon SQL editor provides AI-driven features, such as SQL generation, to easily convert natural language requests to SQL. However, there may be situations where you need to build your own AI query assistant for custom needs. For example, you might need tighter integration with your application via a query API, advanced domain-specific logic, the ability to hide business-critical data, or support for complex multi-step queries that built-in tools typically don’t handle.

Additionally, creating your own assistant gives you control over data handling to comply with standards like GDPR or HIPAA. It lets you ask questions in plain English, translates them into SQL, queries the Neon database, and delivers results securely.

In this guide, we’ll show you how to build an intelligent SQL Query Assistant using the following tools:

  • .NET Core: To handle the backend logic and API development in C#.
  • Azure Functions: To create two serverless APIs:
    • SchemaTraining API: Extracts the existing schema from your database, generates vector embeddings, and stores them in Neon.
    • QueryAssistant API: Processes user queries, generates SQL commands dynamically, executes them, and returns the results.
  • Azure OpenAI SDK: To leverage AI models in .NET code for generating embeddings and translating user queries into SQL.
  • Neon: To store vector embeddings and retrieve query-related documents based on vector similarity using the pgvector extension.
  • Azure OpenAI Service: To deploy and manage AI models like gpt-4o and text-embedding-ada-002 efficiently.

You can also jump straight to the source code hosted on our GitHub and try it yourself.

SQL Query Assistant with .NET in Azure diagram

Setting Up the Neon Project

Prerequisites

Before we begin, make sure you have the following:

  • An Azure account with access to Azure OpenAI Service
  • A Neon account
  • The .NET SDK installed
  • Azure Functions Core Tools for local development

Create a Neon Project

  1. Navigate to the Neon Console
  2. Click “New Project”
  3. Select Azure as your cloud provider
  4. Choose East US 2 as your region
  5. Give your project a name (e.g., “sq-data-assistant”)
  6. Click “Create Project”

Save your connection details – you’ll need these to connect from Azure Function APIs.

Create the Database Tables

Use the Neon SQL editor to create database tables customers and vector_data to store vectors:
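The exact columns are up to you; here is a minimal sketch, assuming a simple `customers` table and 1536-dimensional embeddings (the output size of text-embedding-ada-002):

```sql
-- Enable the pgvector extension for vector similarity search
CREATE EXTENSION IF NOT EXISTS vector;

-- Sample business table the assistant will query
CREATE TABLE customers (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL,
    email TEXT,
    city TEXT,
    created_at TIMESTAMPTZ DEFAULT now()
);

-- Schema descriptions plus their embeddings
-- (1536 dimensions for text-embedding-ada-002)
CREATE TABLE vector_data (
    id SERIAL PRIMARY KEY,
    schema_info TEXT NOT NULL,
    embedding VECTOR(1536)
);
```

You can also insert a few sample rows into `customers` so the assistant has data to query during testing.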

Setting Up Azure AI Service to Use Models

Let’s set up Azure OpenAI Service and deploy two models: gpt-4o for analyzing user questions and generating SQL queries, and text-embedding-ada-002 for creating vector embeddings of the database schema and user queries. These models will power our intelligent SQL assistant.

Create an Azure OpenAI Resource

Before deploying models, you need an Azure OpenAI resource. Follow these steps:

  1. Go to the Azure Portal:
    • Sign in with your Azure account.
  2. Create a New OpenAI Resource:
    • Click Create a resource and search for Azure OpenAI.
    • Click Create to start setting up the resource.
  3. Fill in the Required Fields:
    • Subscription: Select your Azure subscription.
    • Resource Group: Choose an existing group or create a new one to organize your resources.
    • Region: Pick a region where Azure OpenAI is supported.
    • Name: Provide a unique name for your resource (e.g., MyOpenAIResource).
  4. Review and Create:
    • Click Next until you reach the “Review + Create” tab.
    • Review the settings and click Create.
    Note: It may take a few minutes for the resource to be ready.

Deploy the Models

Once your Azure OpenAI resource is created, you can deploy the models:

Deploy GPT-4o (For Chat and Query Understanding)

  1. Go to your Azure OpenAI resource in the Azure Portal.
  2. Click on the Model catalog tab.
  3. Find the gpt-4o model in the list.
  4. Click Deploy and follow the prompts:
    • Provide a name for the deployment (e.g., gpt4o).
    • Keep the default settings or adjust based on your needs.
  5. Wait for the deployment to complete. Once ready, Azure will provide:
    • Endpoint URL: The URL to send API requests.
    • API Key: The key to authenticate API calls.

Deploy text-embedding-ada-002 (For Embeddings)

  1. While in the same Model catalog, find the text-embedding-ada-002 model.
  2. Click Deploy and provide a deployment name (e.g., text-embedding-ada-002).
  3. Follow the same steps as above and wait for deployment.

Use the Models

After both models are deployed, you’ll use:

  • gpt-4o for processing natural language queries and generating SQL.
  • text-embedding-ada-002 to create vector embeddings for schema training and query optimization.

To connect the .NET application to these models, we will use the Endpoint URL and API Key, together with the deployment names, from your Azure OpenAI resource.

Creating the Azure Function App

Project Structure

Here’s how the Azure Function App project’s final structure should look:
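A plausible layout, matching the project and file names used later in this guide (the service file names are illustrative):

```
SqlQueryAssistant/
├── SqlQueryAssistant.Functions/
│   ├── QueryAssistantFunction.cs
│   ├── host.json
│   └── local.settings.json
└── SqlQueryAssistant.Common/
    ├── SchemaService.cs
    ├── EmbeddingService.cs
    └── ChatCompletionService.cs
```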

Create a New .NET Core Project

Open a CLI terminal and run the following commands to create and set up a new .NET project with Azure Function:
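One way to scaffold this, assuming Azure Functions Core Tools and the .NET SDK are installed (project names match the structure above):

```shell
# Create a solution folder and an isolated-worker Azure Functions project
mkdir SqlQueryAssistant && cd SqlQueryAssistant
func init SqlQueryAssistant.Functions --worker-runtime dotnet-isolated

# Create a shared class library for the services
dotnet new classlib -n SqlQueryAssistant.Common

# Reference the common project from the Functions project
cd SqlQueryAssistant.Functions
dotnet add reference ../SqlQueryAssistant.Common/SqlQueryAssistant.Common.csproj
```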

Install Required NuGet Packages

Run the following commands in the terminal to install the required NuGet packages:
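At a minimum, you’ll need the Azure OpenAI client and the Npgsql Postgres driver; a sketch of the installs, run from the solution folder:

```shell
# Azure OpenAI client (v2 SDK) and Postgres driver for the shared library
dotnet add SqlQueryAssistant.Common package Azure.AI.OpenAI
dotnet add SqlQueryAssistant.Common package Npgsql

# Functions worker packages for the HTTP-triggered APIs
dotnet add SqlQueryAssistant.Functions package Microsoft.Azure.Functions.Worker
dotnet add SqlQueryAssistant.Functions package Microsoft.Azure.Functions.Worker.Extensions.Http
```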

Create a configuration settings file

If you don’t already have a local.settings.json file created automatically in your SqlQueryAssistant.Functions project, create one at the root of the project. Add your configuration with environment variable values like this:
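The setting names below are illustrative assumptions; use whatever keys your code reads, and fill in your Neon connection string and Azure OpenAI credentials:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated",
    "NeonConnectionString": "Host=<your-neon-host>;Database=<db>;Username=<user>;Password=<password>;SSL Mode=Require",
    "OpenAIEndpoint": "https://<your-resource>.openai.azure.com/",
    "OpenAIApiKey": "<your-api-key>",
    "ChatDeployment": "gpt4o",
    "EmbeddingDeployment": "text-embedding-ada-002"
  }
}
```

Keep this file out of source control, since it contains secrets.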

Create an Azure Function to Handle HTTP Requests

In your SqlQueryAssistant.Functions project, create a new function called QueryAssistantFunction.cs that handles both schema training and query processing:
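Here is a condensed sketch of what the function class could look like in the isolated worker model. The `SchemaService` and `QueryService` types are illustrative placeholders for the services built in the following sections, registered via dependency injection in Program.cs:

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

public class QueryAssistantFunction
{
    private readonly SchemaService _schemaService;   // illustrative service types
    private readonly QueryService _queryService;

    public QueryAssistantFunction(SchemaService schemaService, QueryService queryService)
    {
        _schemaService = schemaService;
        _queryService = queryService;
    }

    // Extracts the schema, embeds it, and stores the vectors in Neon
    [Function("SchemaTraining")]
    public async Task<HttpResponseData> TrainSchema(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
    {
        await _schemaService.TrainAsync();
        var response = req.CreateResponse(HttpStatusCode.OK);
        await response.WriteStringAsync("Schema training completed.");
        return response;
    }

    // Translates a natural-language question into SQL, runs it, and returns rows
    [Function("QueryAssistant")]
    public async Task<HttpResponseData> Query(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequestData req)
    {
        string question = await req.ReadAsStringAsync() ?? string.Empty;
        string result = await _queryService.AnswerAsync(question);
        var response = req.CreateResponse(HttpStatusCode.OK);
        await response.WriteStringAsync(result);
        return response;
    }
}
```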

Retrieving and Storing Database Schema

Create a SchemaService.cs service class in the SqlQueryAssistant.Common project to retrieve the database schema from Neon. It queries tables (customers) and their columns and prepares the schema data for the embedding process:
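A sketch of such a service, using an `information_schema` lookup; the method name and the one-description-per-table format are assumptions:

```csharp
using System.Text;
using Npgsql;

public class SchemaService
{
    private readonly string _connectionString;

    public SchemaService(string connectionString) => _connectionString = connectionString;

    // Reads table/column metadata from information_schema and formats it
    // as one text description per table, ready for embedding.
    public async Task<List<string>> GetSchemaDescriptionsAsync()
    {
        var descriptions = new Dictionary<string, StringBuilder>();
        await using var conn = new NpgsqlConnection(_connectionString);
        await conn.OpenAsync();

        const string sql = @"
            SELECT table_name, column_name, data_type
            FROM information_schema.columns
            WHERE table_schema = 'public' AND table_name = 'customers'
            ORDER BY table_name, ordinal_position";

        await using var cmd = new NpgsqlCommand(sql, conn);
        await using var reader = await cmd.ExecuteReaderAsync();
        while (await reader.ReadAsync())
        {
            string table = reader.GetString(0);
            if (!descriptions.TryGetValue(table, out var sb))
                descriptions[table] = sb = new StringBuilder($"Table {table}:");
            sb.Append($" {reader.GetString(1)} ({reader.GetString(2)}),");
        }
        return descriptions.Values.Select(sb => sb.ToString().TrimEnd(',')).ToList();
    }
}
```

A description like `Table customers: id (integer), name (text), …` gives the embedding model enough context to match user questions to the right table.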

Generating Embeddings and Storing Them in Neon

Next, we generate vector embeddings using the text-embedding-ada-002 model. These embeddings are used to match user queries with relevant database schemas.

Store the embeddings in Neon:
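A sketch of an embedding service covering both steps, assuming the Azure.AI.OpenAI v2 SDK and the `vector_data` table created earlier; the class and method names are illustrative:

```csharp
using System.ClientModel;
using System.Globalization;
using Azure.AI.OpenAI;
using Npgsql;
using OpenAI.Embeddings;

public class EmbeddingService
{
    private readonly EmbeddingClient _embeddingClient;
    private readonly string _connectionString;

    public EmbeddingService(string endpoint, string apiKey, string deployment, string connectionString)
    {
        var client = new AzureOpenAIClient(new Uri(endpoint), new ApiKeyCredential(apiKey));
        _embeddingClient = client.GetEmbeddingClient(deployment); // e.g. "text-embedding-ada-002"
        _connectionString = connectionString;
    }

    // Embeds one schema description and inserts it into vector_data
    public async Task StoreSchemaEmbeddingAsync(string schemaInfo)
    {
        OpenAIEmbedding embedding = await _embeddingClient.GenerateEmbeddingAsync(schemaInfo);
        float[] vector = embedding.ToFloats().ToArray();

        // pgvector accepts the text form "[0.1,0.2,...]" cast to the vector type;
        // InvariantCulture keeps decimal points locale-independent.
        string vectorLiteral =
            $"[{string.Join(",", vector.Select(v => v.ToString(CultureInfo.InvariantCulture)))}]";

        await using var conn = new NpgsqlConnection(_connectionString);
        await conn.OpenAsync();
        await using var cmd = new NpgsqlCommand(
            "INSERT INTO vector_data (schema_info, embedding) VALUES (@info, @vec::vector)", conn);
        cmd.Parameters.AddWithValue("info", schemaInfo);
        cmd.Parameters.AddWithValue("vec", vectorLiteral);
        await cmd.ExecuteNonQueryAsync();
    }
}
```

The SchemaTraining API simply loops over the schema descriptions and calls this method for each one.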

Dynamically Generating SQL Queries

To retrieve the most relevant database schema using a typical Retrieval-Augmented Generation (RAG) approach:

  • First, we calculate vector embeddings from the user query.
  • Next, we use Neon’s pgvector extension and its distance function operator <-> to compare these embeddings against stored schema embeddings, identifying the most relevant schema.
  • After we find matching database schema, we call chat completion endpoint to generate an SQL query for it.

We convert user queries into meaningful SQL commands by using ChatCompletionService:
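A sketch of such a service, again assuming the Azure.AI.OpenAI v2 SDK; the query embedding is produced with the same text-embedding-ada-002 deployment, and the prompt wording is an assumption:

```csharp
using System.ClientModel;
using System.Globalization;
using Azure.AI.OpenAI;
using Npgsql;
using OpenAI.Chat;

public class ChatCompletionService
{
    private readonly ChatClient _chatClient;
    private readonly string _connectionString;

    public ChatCompletionService(string endpoint, string apiKey, string deployment, string connectionString)
    {
        var client = new AzureOpenAIClient(new Uri(endpoint), new ApiKeyCredential(apiKey));
        _chatClient = client.GetChatClient(deployment); // e.g. "gpt4o"
        _connectionString = connectionString;
    }

    // Finds the stored schema description closest to the query embedding,
    // using pgvector's distance operator <->
    public async Task<string?> FindRelevantSchemaAsync(float[] queryEmbedding)
    {
        string vectorLiteral =
            $"[{string.Join(",", queryEmbedding.Select(v => v.ToString(CultureInfo.InvariantCulture)))}]";

        await using var conn = new NpgsqlConnection(_connectionString);
        await conn.OpenAsync();
        await using var cmd = new NpgsqlCommand(
            "SELECT schema_info FROM vector_data ORDER BY embedding <-> @vec::vector LIMIT 1", conn);
        cmd.Parameters.AddWithValue("vec", vectorLiteral);
        return (string?)await cmd.ExecuteScalarAsync();
    }

    // Asks the chat model to turn the question into SQL, grounded in the matched schema
    public async Task<string> GenerateSqlAsync(string schemaInfo, string question)
    {
        ChatCompletion completion = await _chatClient.CompleteChatAsync(
            new SystemChatMessage(
                $"You translate questions into PostgreSQL queries. Use only this schema: {schemaInfo}. " +
                "Return only the SQL statement, with no explanation."),
            new UserChatMessage(question));
        return completion.Content[0].Text.Trim();
    }
}
```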

Executing SQL Queries and Returning Results from Neon

Finally, we execute the generated SQL query against Neon to fetch relevant data:
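A minimal sketch of the execution step with Npgsql, serializing rows to JSON (the helper name is illustrative):

```csharp
using System.Text.Json;
using Npgsql;

public static class SqlExecutor
{
    // Runs the generated SQL against Neon and serializes the rows as JSON
    public static async Task<string> ExecuteAsync(string connectionString, string sql)
    {
        var rows = new List<Dictionary<string, object?>>();
        await using var conn = new NpgsqlConnection(connectionString);
        await conn.OpenAsync();
        await using var cmd = new NpgsqlCommand(sql, conn);
        await using var reader = await cmd.ExecuteReaderAsync();

        while (await reader.ReadAsync())
        {
            var row = new Dictionary<string, object?>();
            for (int i = 0; i < reader.FieldCount; i++)
                row[reader.GetName(i)] = reader.IsDBNull(i) ? null : reader.GetValue(i);
            rows.Add(row);
        }
        return JsonSerializer.Serialize(rows);
    }
}
```

Since the SQL comes from a language model, consider validating that it is a single read-only SELECT statement (or using a database role with read-only permissions) before executing it.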

With that, the function code is complete. Now we can run and test it locally.

Run the Project Locally

Navigate to the Functions Project:

Restore Dependencies:

Build the Solution:

Start the Azure Functions runtime:
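The four steps above as shell commands (the project path assumes the structure shown earlier):

```shell
cd SqlQueryAssistant.Functions   # navigate to the Functions project
dotnet restore                   # restore dependencies
dotnet build                     # build the solution
func start                       # start the Azure Functions runtime
```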

You should see an output similar to:


Test the APIs

Example Query to Test SchemaTraining API

You can call the API using cURL:
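Assuming the default local Functions host port (7071) and a route derived from the function name:

```shell
curl -X POST http://localhost:7071/api/SchemaTraining
```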

When you send the request:

  1. The API should connect to your Neon database.
  2. It will retrieve the schema details (tables and columns).
  3. The API will generate embeddings for the schema using Azure OpenAI and store them in the Neon database using the pgvector extension.
  4. You should receive a successful response:
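The exact body depends on what your function returns; a minimal successful response might look like this (illustrative only):

```json
{
  "status": "success",
  "message": "Schema training completed and embeddings stored in Neon."
}
```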

Example Query to Test QueryAssistant API

When calling the QueryAssistant API, you can send the natural language input in the body of the request like this:

Input (Natural Language Query):

API Output:
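As an illustration only (the request shape, route, and output format all depend on your implementation), the exchange could look like this:

```shell
# Ask a question in plain English; port 7071 is the default local Functions host
curl -X POST http://localhost:7071/api/QueryAssistant \
  -H "Content-Type: application/json" \
  -d '{"question": "List all customers from Berlin"}'
```

Behind the scenes, the assistant would generate something like `SELECT * FROM customers WHERE city = 'Berlin'` and return the matching rows as a JSON array.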

Deploy to Azure

Publish the Function App

If everything works locally, you can deploy the function app to Azure:
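Using Azure Functions Core Tools (replace the placeholder with the name of a Function App you have created in Azure):

```shell
# Sign in, then publish the Functions project to your Function App
az login
func azure functionapp publish <your-function-app-name>
```

Remember to configure the values from local.settings.json as application settings on the Function App, since local.settings.json is not deployed.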

Future Improvements

Great! You did it! With this foundation, you can expand the capabilities of your SQL Query Assistant with other Neon features.

Database Branching: You can create isolated branches of your database, perfect for testing changes without affecting the main database. For example, you can create a branch to test new SQL assistant features, such as generating embeddings for a new schema on a separate branch.

Multi-Database Support: Extend the project to support multiple Neon databases, allowing the assistant to query across different datasets or tenants.

Conclusion

In conclusion, we’ve built a smart SQL Query Assistant using .NET Core, Azure Functions, Neon, and Azure OpenAI. This tool makes it easy to work with your Neon database by letting you ask questions in plain English, then translating them into SQL, running the query, and returning the results.

Additional Resources

Try it

You can check out our GitHub repository, and give us any feedback on our Discord server!


Neon is a serverless Postgres platform that helps teams ship faster via instant provisioning, autoscaling, and database branching. We have a Free Plan – you can get started without a credit card.