
Why I built Omnibase: a universal database MCP server

11 min read · Web Dev

I use DataGrip almost every day. It is a great database IDE. But somewhere around the tenth time I copied a query result from DataGrip, pasted it into Claude Code, waited for the agent to reason about it, got a follow-up query back, pasted that into DataGrip, ran it, copied the result, and pasted it back, I realized I was being a very expensive clipboard.

The AI agent knew what it wanted to query. It knew how to write SQL. It just had no way to talk to the database directly. I was the bottleneck in a loop that should have been automated.

So I built Omnibase, a universal database MCP server that gives AI agents secure, direct access to over 50 databases through a single tool.

The problem with GUI database tools and AI

DataGrip and DBeaver are excellent at what they do. Schema browsing, query editing, result visualization, data export. But they were designed for humans to interact with databases, not for AI agents.

Both tools have added AI features recently. DBeaver has AI Smart Assistance in its paid editions. DataGrip added AI agents in its 2026.1 release. But these features are assistive, not agentic. They help you write SQL inside the GUI. They cannot be called programmatically by an external AI agent.

The workflow I kept running into looked like this:

  1. Ask the AI agent a question that requires database context
  2. The agent writes a SQL query it wants to run
  3. I copy the query into DataGrip
  4. I run it and read the results
  5. I copy the results back into the agent
  6. The agent asks a follow-up question that needs another query
  7. Repeat from step 2

For simple one-off queries, this is fine. For anything involving exploration, debugging, or multi-step analysis, it is painful. The agent might need to check a schema, sample some data, run a diagnostic query, adjust its approach based on the results, and then run the actual query. That is five round trips through the clipboard. Each one breaks the agent's flow and adds latency.

What MCP changes

I have written about MCP before, so I will keep the explanation brief. The Model Context Protocol lets AI agents connect to external tools through a standardized interface. An MCP server exposes capabilities (tools) that the agent can call during a conversation.

For databases, this means the agent can discover schemas, run queries, analyze data, and validate SQL without you acting as the middleman. It asks the question, gets the answer, and moves on. The entire exploration loop that used to take ten minutes of copy-pasting happens in seconds.
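To make that concrete, here is roughly what a tool looks like from the protocol's point of view: a name, a description, and a JSON Schema describing its input. The interface below is an illustrative sketch, not the actual MCP SDK types:

```typescript
// Sketch of an MCP tool descriptor. These interfaces are illustrative;
// the real MCP SDK has its own types for tool registration.
interface ToolDescriptor {
  name: string;
  description: string;
  inputSchema: Record<string, unknown>; // JSON Schema for the tool's arguments
}

const executeSqlTool: ToolDescriptor = {
  name: "execute_sql",
  description: "Run a SQL query against a configured connection",
  inputSchema: {
    type: "object",
    properties: {
      connection: { type: "string" },
      sql: { type: "string" },
    },
    required: ["connection", "sql"],
  },
};
```

The agent reads descriptors like this at startup and decides on its own when to call each tool during a conversation.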

How Omnibase works

Omnibase is a TypeScript MCP server with a Go sidecar that handles the actual database connections. The architecture looks like this:

AI Agent (Claude Code, Cursor, Copilot, OpenCode)
  ↓ MCP Protocol (stdio)
TypeScript Server
  - SQL parsing and classification
  - Permission enforcement
  - Schema caching
  - Output formatting
  ↓ JSON-RPC (stdin/stdout)
Go Sidecar
  - Native database drivers via usql
  - Parameterized queries
  - Schema introspection
  ↓ Native drivers
Any Database

The Go sidecar exists because of usql, a universal SQL CLI written in Go that supports over 50 database drivers. Rather than writing and maintaining native database drivers in TypeScript for every database, Omnibase imports usql's driver packages directly. Adding support for a new database means adding a single import line to the Go sidecar and running go mod tidy. PostgreSQL, MySQL, SQLite, SQL Server, Oracle, ClickHouse, DuckDB, BigQuery, Snowflake, CockroachDB, and dozens more all work through this single interface.

The TypeScript layer handles everything the agent interacts with: MCP tool registration, SQL analysis, permission enforcement, and result formatting. The Go sidecar handles everything the database interacts with: connections, query execution, and schema introspection.
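The boundary between the two layers can be sketched as newline-delimited JSON-RPC over the sidecar's stdin/stdout. The method and parameter names below are my guesses for illustration, not Omnibase's actual wire format:

```typescript
// Build a JSON-RPC 2.0 request line for the Go sidecar.
// Method and param names are hypothetical, not Omnibase's real protocol.
interface RpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params: Record<string, unknown>;
}

function makeRpcRequest(
  id: number,
  method: string,
  params: Record<string, unknown>
): string {
  const req: RpcRequest = { jsonrpc: "2.0", id, method, params };
  return JSON.stringify(req) + "\n"; // one message per line on stdin
}

const line = makeRpcRequest(1, "query", {
  connection: "production",
  sql: "SELECT 1",
});
```

The appeal of this split is that each side stays simple: the TypeScript layer never links database drivers, and the Go side never has to know what MCP is.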

13 tools, not just "run query"

One thing I felt strongly about was giving the agent proper tools for database exploration, not just a single "execute SQL" endpoint. Other database MCP servers like DBHub take a minimal approach with just two tools. That works, but it means the agent has to figure out everything through raw SQL, including things like schema discovery and data sampling that have purpose-built tools in every database IDE.

Omnibase exposes 13 tools organized into four categories:

Discovery tools let the agent understand the database before writing queries. list_tables shows what is available with row counts. get_schema returns column types, indexes, and foreign keys. search_schema finds tables or columns by keyword. get_relationships maps foreign keys across the entire database. These tools are what you would use in DataGrip's schema browser, but available to the agent directly.

Query tools handle SQL execution. execute_sql runs queries with permission enforcement and parameterized inputs. explain_query shows the query plan without executing anything. get_sample previews rows from a table safely.

Analysis tools provide statistical context. get_table_stats returns column cardinality, null rates, and value ranges using sampling. get_distinct_values shows the unique values in a column with their counts. These help the agent understand the data distribution before writing complex queries.

Validation is its own category. validate_query checks syntax, verifies that referenced tables and columns exist, confirms the query is allowed by the current permission level, and estimates how many rows a write operation would affect. The agent can validate before executing, which catches mistakes before they touch real data.
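To make the validation step concrete, here is one hypothetical shape for a validate_query result and the kind of gate an agent could apply over it. All field names here are invented for illustration, not Omnibase's actual response format:

```typescript
// Hypothetical validate_query result; field names are illustrative only.
interface ValidationResult {
  syntaxOk: boolean;
  unknownTables: string[]; // referenced tables that don't exist
  unknownColumns: string[]; // referenced columns that don't exist
  allowedByPermission: boolean; // passes the connection's permission level
  estimatedAffectedRows: number | null; // null for read-only statements
}

// Gate execution on every check passing and the blast radius looking sane.
function isSafeToExecute(r: ValidationResult, maxAffectedRows = 1000): boolean {
  return (
    r.syntaxOk &&
    r.unknownTables.length === 0 &&
    r.unknownColumns.length === 0 &&
    r.allowedByPermission &&
    (r.estimatedAffectedRows === null || r.estimatedAffectedRows <= maxAffectedRows)
  );
}
```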

The result is that the agent can do the same kind of exploratory database work that a developer does in a GUI tool: browse the schema, understand relationships, sample data, form a hypothesis, write a query, validate it, and execute it.

Security is not optional

Giving an AI agent direct database access sounds terrifying, and it should. The whole point of Omnibase's security model is to make it not terrifying.

Every SQL statement the agent sends gets parsed and classified before anything touches the database. The query analyzer uses node-sql-parser to understand exactly what the statement does: is it a SELECT, an INSERT, a DROP TABLE? Multi-statement queries are rejected outright. You cannot sneak a DROP TABLE after a semicolon.

Each database connection has a permission level: read-only, read-write, or admin. Read-only connections can only run SELECT queries. Read-write adds INSERT, UPDATE, and DELETE. Admin allows DDL operations like CREATE and ALTER. The default is read-only, because that is the right default.
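A heavily simplified sketch of that classification and permission flow, using a first-keyword check in place of the real AST-level analysis that node-sql-parser enables:

```typescript
// Simplified sketch of statement classification and permission gating.
// Omnibase parses SQL into an AST; this keyword check only illustrates the flow.
type Permission = "read-only" | "read-write" | "admin";
type StatementKind = "select" | "write" | "ddl" | "unknown";

function classify(sql: string): StatementKind {
  const first = sql.trim().split(/\s+/)[0]?.toUpperCase() ?? "";
  if (first === "SELECT") return "select";
  if (["INSERT", "UPDATE", "DELETE"].includes(first)) return "write";
  if (["CREATE", "ALTER", "DROP", "TRUNCATE"].includes(first)) return "ddl";
  return "unknown";
}

function isAllowed(kind: StatementKind, level: Permission): boolean {
  if (kind === "select") return true; // every level can read
  if (kind === "write") return level !== "read-only"; // read-write or admin
  if (kind === "ddl") return level === "admin"; // admin only
  return false; // reject anything we cannot classify
}

// Naive multi-statement check: a semicolon followed by more SQL.
// (A real implementation must ignore semicolons inside string literals.)
function hasMultipleStatements(sql: string): boolean {
  return sql.split(";").filter((s) => s.trim().length > 0).length > 1;
}
```

Note the default in isAllowed: an unclassifiable statement is rejected, not waved through. Failing closed is the only sensible choice when the caller is an AI agent.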

Dangerous database functions are blocked regardless of permission level. Functions like pg_read_file, xp_cmdshell, LOAD_FILE, and lo_export that could be used for server-side file access or command execution are rejected during SQL parsing. The same applies to sensitive system tables like pg_shadow and mysql.user.

For write operations, Omnibase estimates the number of affected rows before executing. If an UPDATE or DELETE would touch an unexpected number of rows, the agent (and you) can catch it before it runs.
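One way to implement that estimate, sketched naively here with regexes (a real implementation would rewrite the parsed AST, and I am not claiming this is how Omnibase does it), is to turn the write's WHERE clause into a COUNT query and run that first:

```typescript
// Naive sketch: turn "UPDATE t SET ... WHERE cond" or "DELETE FROM t WHERE cond"
// into "SELECT COUNT(*) FROM t WHERE cond". Regexes break on WHERE inside
// string literals; AST rewriting is the robust approach. Illustration only.
function countQueryFor(sql: string): string | null {
  const update = /^\s*update\s+(\S+)[\s\S]*?\bwhere\b([\s\S]+)$/i.exec(sql);
  if (update) return `SELECT COUNT(*) FROM ${update[1]} WHERE${update[2]}`;
  const del = /^\s*delete\s+from\s+(\S+)\s+where\b([\s\S]+)$/i.exec(sql);
  if (del) return `SELECT COUNT(*) FROM ${del[1]} WHERE${del[2]}`;
  return null; // no WHERE clause, or not a write we can estimate
}
```

Running the COUNT first costs one extra round trip, but it is the difference between "this will update 3 rows" and discovering after the fact that a missing WHERE clause touched three million.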

Credentials never reach the agent. Database connection strings are resolved server-side from the config file, and they can reference environment variables for additional security. The agent sees connection names like "production-db," not passwords.

Why not just use an existing database MCP server?

A few exist. The most established is DBHub by Bytebase, which supports PostgreSQL, MySQL, SQL Server, SQLite, and MariaDB. It is well-built and intentionally minimal, exposing just two tools. Anthropic also maintains separate PostgreSQL and SQLite MCP servers.

I built Omnibase because I needed three things none of them offered:

Breadth. I work with different databases across projects. PostgreSQL for one thing, SQLite for another, occasionally something else entirely. I did not want to configure a separate MCP server for each one. Omnibase handles all of them through a single config file with usql's 50+ driver support.

Write access with guardrails. Most database MCP servers are read-only by default with no write path. That is safe but limiting. I wanted my agent to be able to run migrations, insert test data, and fix records, with proper permission controls and impact estimation so it could not accidentally destroy anything.

Rich exploration tools. A single "execute SQL" tool works, but it puts all the burden on the agent to write the right introspection queries for whatever database dialect it is talking to. Dedicated tools for schema discovery, relationship mapping, and data analysis make the agent significantly more effective, especially when working with an unfamiliar database.

What this looks like in practice

Here is a real example. I have my production database configured as a read-only connection:

connections:
  production:
    dsn: "$DATABASE_URL"
    permission: read-only
    max_rows: 200

Users are reporting that confirmation emails are not arriving. Instead of opening DataGrip, browsing the schema, writing queries, and pasting results back and forth, I just ask the agent.

I type: "Some users say they never got a confirmation email. Can you check the production database and see what is going on?"

The agent calls list_tables on the production connection and sees tables including users, email_queue, and email_templates. It calls get_schema on email_queue and learns the columns: id, user_id, template, status, created_at, sent_at, error. It calls get_relationships and sees that email_queue.user_id references users.id.

Now it has enough context to investigate. It runs execute_sql:

SELECT status, COUNT(*) as count
FROM email_queue
WHERE template = 'confirmation'
AND created_at > NOW() - INTERVAL '24 hours'
GROUP BY status

The result comes back: 312 confirmation emails with status sent, 47 with status failed, and 23 with status pending. There are stuck and failed emails. The agent runs a follow-up:

SELECT error, COUNT(*) as count
FROM email_queue
WHERE status = 'failed'
AND template = 'confirmation'
GROUP BY error
ORDER BY count DESC

It finds that 43 of the 47 failures have the error smtp_rate_limit_exceeded. The agent now tells me: the SMTP provider is rate-limiting outgoing emails, causing confirmation emails to fail. The pending ones are likely backed up behind the same bottleneck.

That entire investigation took maybe fifteen seconds and five tool calls. No tab switching, no copy-pasting, no re-explaining context. The agent had direct access to the data and could follow its own line of reasoning without me acting as the middleman.

Because the connection is configured as read-only, I am comfortable pointing it at production for this kind of investigation. The agent can query whatever it needs but cannot modify anything. That said, read-only access to a production database is still something you should think about carefully. Use a read replica if you have one, set a reasonable max_rows limit so a broad query does not pull back your entire users table, and always have proper backups regardless. Read-only does not mean risk-free; it just means the agent cannot be the one to break things.

Getting started

Setup takes about two minutes. If you use Claude Code:

claude mcp add omnibase -- npx -y omnibase-mcp@latest
npx omnibase-mcp init

The init command creates an omnibase.config.yaml file where you add your database connections. You saw the config format in the example above. DSNs prefixed with $ resolve from environment variables, so you never have to put credentials in the config file directly.
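The resolution rule can be sketched in a few lines; in real use you would pass process.env as the env argument. This is my reading of the behavior described above, not Omnibase's actual implementation:

```typescript
// Resolve a DSN that may reference an environment variable, as in
// dsn: "$DATABASE_URL". Sketch only; actual Omnibase behavior may differ.
function resolveDsn(
  dsn: string,
  env: Record<string, string | undefined>
): string {
  if (dsn.startsWith("$")) {
    const name = dsn.slice(1);
    const value = env[name];
    if (!value) throw new Error(`environment variable ${name} is not set`);
    return value; // the agent only ever sees the connection name, never this
  }
  return dsn; // literal DSNs pass through unchanged
}
```

Failing loudly on a missing variable beats silently connecting to the wrong database, which is why the sketch throws instead of returning the raw "$NAME" string.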

That is it. Your AI agent can now query your database. It works with Claude Code, Cursor, GitHub Copilot, OpenCode, and any other MCP-compatible client.

You can configure multiple connections with different permission levels. A local development database might be read-write, while production is locked to read-only:

connections:
  dev:
    dsn: "pg://localhost:5432/myapp_dev"
    permission: read-write
  production:
    dsn: "$DATABASE_URL"
    permission: read-only
    max_rows: 200

My first open source project

Omnibase is my first major open source project. I have used open source software for years, contributed patches here and there, but never built and published something from scratch that I intended other people to use.

The experience has been different from building a product. With Landbound and Routemade, I can cut corners on internal code quality because I am the only one who has to live with it. With an open source tool, the code is the product. The README matters as much as the implementation. Error messages need to be helpful to someone who cannot read the source. Configuration needs to be obvious without documentation.

I chose the Apache 2.0 license because I want people and companies to use this without worrying about license compatibility. The project uses conventional commits, automated releases with release-please, and integration tests that spin up real PostgreSQL and MySQL instances in Docker. These are things I would have skipped in a private project, but they matter when other people depend on your code.

If you work with databases and use AI coding agents, give Omnibase a try. And if you run into issues or want to add support for a database driver, contributions are welcome.


Enjoying the blog? Subscribe via RSS to get new posts in your reader.
