
Building an MCP server with TypeScript

16 min read · AI

I wrote a post explaining what MCP servers are. This one is the practical follow-up: how to actually build one. We will create an MCP server in TypeScript that gives an AI assistant access to a SQLite database, with tools for querying, schema discovery, and data exploration. By the end you will understand the full anatomy of an MCP server and be able to build your own for any data source or API.

What we are building

Our server will expose three things to the AI assistant:

  1. A tool for running read-only SQL queries
  2. A resource that provides the database schema as context
  3. A prompt that sets up a guided data exploration workflow

This covers all three MCP primitives, so you will see how each one works in practice.

Project setup

Initialize a new TypeScript project and install the MCP SDK:

mkdir my-mcp-server
cd my-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk better-sqlite3 zod
npm install -D typescript @types/node @types/better-sqlite3

The MCP SDK uses Zod for defining tool input schemas. You might have used Zod for form validation or API request validation before. Here it serves the same purpose: defining typed, validated parameters that the AI model fills in when calling your tools.

Update your package.json to use ES modules and add build scripts:

{
  "type": "module",
  "bin": {
    "my-mcp-server": "./dist/index.js"
  },
  "scripts": {
    "build": "tsc && chmod 755 dist/index.js",
    "start": "node dist/index.js"
  },
  "files": ["dist"]
}

The bin field and files field matter if you want to publish your server to npm later. More on that at the end.

Create tsconfig.json:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./dist",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  },
  "include": ["src/**/*"]
}

The server skeleton

Create src/index.ts. Start with the shebang line (required for npx distribution) and imports:

#!/usr/bin/env node
 
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import Database from "better-sqlite3";
import { z } from "zod";
 
const db = new Database("./data.db");
 
const server = new McpServer({
  name: "sqlite-explorer",
  version: "1.0.0",
});

The McpServer class is the high-level API for building servers. You give it a name and version, which get reported to clients during the initialization handshake. The client uses this to identify your server in its UI.
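For reference, the name and version you pass here end up in the serverInfo field of the initialize response during the handshake. The response looks roughly like this (a sketch; exact fields and the protocol version string depend on the SDK release):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2024-11-05",
    "capabilities": { "tools": {}, "resources": {}, "prompts": {} },
    "serverInfo": { "name": "sqlite-explorer", "version": "1.0.0" }
  }
}
```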

Defining tools

Tools are the most commonly used MCP primitive. A tool is a function the AI model can call, with typed inputs validated by Zod and a handler that returns results.

The query tool

server.tool(
  "query",
  "Run a read-only SQL query against the database",
  {
    sql: z.string().describe("The SQL query to execute (SELECT only)"),
  },
  async ({ sql }) => {
    const trimmed = sql.trim().toUpperCase();
    if (!trimmed.startsWith("SELECT")) {
      return {
        content: [
          {
            type: "text",
            text: "Error: Only SELECT queries are allowed",
          },
        ],
        isError: true,
      };
    }
 
    try {
      const rows = db.prepare(sql).all();
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify(rows, null, 2),
          },
        ],
      };
    } catch (error) {
      return {
        content: [
          {
            type: "text",
            text: `Error: ${error instanceof Error ? error.message : "Unknown error"}`,
          },
        ],
        isError: true,
      };
    }
  }
);

Let me break down the server.tool() call:

  1. Name ("query"): What the model sees and references when calling the tool.
  2. Description ("Run a read-only SQL query..."): The model reads this to decide when to use the tool. Write descriptions that clearly explain what the tool does and when to use it. Vague descriptions lead to the model misusing tools.
  3. Input schema ({ sql: z.string()... }): A Zod schema defining the parameters. The SDK converts this to JSON Schema automatically. The .describe() calls on each field become parameter descriptions that help the model provide correct values.
  4. Handler (async ({ sql }) => ...): The function that runs when the tool is called. It receives validated inputs and returns a result.

Error handling

Notice there are two kinds of errors in the handler:

Tool execution errors return a normal result with isError: true. The model sees the error message and can react to it (retry with different parameters, tell the user what went wrong, try a different approach). Use this for expected failures like invalid SQL, permission denied, or no results found.

Protocol errors are exceptions that bubble up as JSON-RPC errors. These indicate something unexpected went wrong, like a crashed database connection. The SDK handles these automatically if your handler throws.

For most tools, you want to catch errors and return them as tool execution errors rather than letting them throw. This gives the model a chance to recover gracefully.
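If you find yourself repeating the same try/catch in every handler, you can factor it into a small helper. This is a hypothetical utility, not part of the SDK; it runs a handler and converts any thrown exception into a tool execution error result so the model sees the message instead of a protocol error:

```typescript
type ToolResult = {
  content: Array<{ type: "text"; text: string }>;
  isError?: boolean;
};

// Run a handler that produces text; on success wrap it as tool content,
// on failure return a tool execution error instead of throwing.
async function safeToolHandler(
  fn: () => Promise<string>
): Promise<ToolResult> {
  try {
    const text = await fn();
    return { content: [{ type: "text", text }] };
  } catch (error) {
    return {
      content: [
        {
          type: "text",
          text: `Error: ${error instanceof Error ? error.message : "Unknown error"}`,
        },
      ],
      isError: true,
    };
  }
}
```

With a helper like this, each tool handler reduces to the happy path, and every failure mode still reaches the model as a readable message.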

A second tool: listing tables

server.tool(
  "list_tables",
  "List all tables in the database with their row counts",
  {},
  async () => {
    const tables = db
      .prepare(
        "SELECT name FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite_%'"
      )
      .all() as Array<{ name: string }>;
 
    const results = tables.map((t) => {
      const count = db
        .prepare(`SELECT COUNT(*) as count FROM "${t.name}"`)
        .get() as { count: number };
      return { name: t.name, rows: count.count };
    });
 
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(results, null, 2),
        },
      ],
    };
  }
);

This tool takes no parameters (empty object {}). The model calls it when it needs to know what is in the database. Having dedicated discovery tools like this makes the model much more effective. Instead of the model guessing table names and running SELECT * FROM users, it can call list_tables first and know exactly what is available.

Richer input schemas

Zod gives you a lot of expressiveness for input schemas. Here is a more complex example:

server.tool(
  "get_sample",
  "Get a sample of rows from a table for data exploration",
  {
    table: z.string().describe("The table name to sample from"),
    limit: z
      .number()
      .min(1)
      .max(100)
      .default(10)
      .describe("Number of rows to return (1-100, default 10)"),
    where: z
      .string()
      .optional()
      .describe("Optional WHERE clause (without the WHERE keyword)"),
  },
  async ({ table, limit, where }) => {
    try {
      let sql = `SELECT * FROM "${table}"`;
      if (where) {
        sql += ` WHERE ${where}`;
      }
      sql += ` LIMIT ${limit}`;
 
      const rows = db.prepare(sql).all();
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify(rows, null, 2),
          },
        ],
      };
    } catch (error) {
      return {
        content: [
          {
            type: "text",
            text: `Error: ${error instanceof Error ? error.message : "Unknown error"}`,
          },
        ],
        isError: true,
      };
    }
  }
);

You get number validation with min/max, optional parameters, and default values. The model sees all of this in the tool schema and uses it correctly. If the model tries to pass limit: 500, the SDK rejects it before your handler even runs.
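Under the hood, the SDK converts the Zod schema into JSON Schema before advertising the tool to clients. For get_sample, the generated schema looks roughly like this (a sketch; exact output varies with the SDK and Zod versions):

```json
{
  "type": "object",
  "properties": {
    "table": {
      "type": "string",
      "description": "The table name to sample from"
    },
    "limit": {
      "type": "number",
      "minimum": 1,
      "maximum": 100,
      "default": 10,
      "description": "Number of rows to return (1-100, default 10)"
    },
    "where": {
      "type": "string",
      "description": "Optional WHERE clause (without the WHERE keyword)"
    }
  },
  "required": ["table"]
}
```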

Defining resources

Resources provide read-only context to the AI assistant. Unlike tools, resources are not called by the model during the conversation. They are managed by the host application and can be attached to the context by the user or the application.

server.resource(
  "schema",
  "sqlite://schema",
  async () => {
    const tables = db
      .prepare(
        "SELECT sql FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite_%'"
      )
      .all() as Array<{ sql: string }>;
 
    return {
      contents: [
        {
          uri: "sqlite://schema",
          mimeType: "text/plain",
          text: tables.map((t) => t.sql).join("\n\n"),
        },
      ],
    };
  }
);

The server.resource() call takes a name, a URI, and a handler that returns the resource contents. The URI can use any scheme you want. Custom schemes like sqlite:// are common for MCP servers that expose non-file data.

When a client connects and lists resources, it sees the sqlite://schema resource. If the user (or the application) attaches it to the conversation, the AI model gets the full database schema as context before any tools are called. This is useful because the model can write better queries when it knows the schema upfront.

When to use resources vs tools

Use resources when you have relatively static context the model should know about (schemas, configuration, documentation). Use tools when the model needs to perform dynamic operations (queries, API calls, searches). The schema is a good resource because it does not change during a conversation. The query results are a good tool because they depend on what the model asks for.

Defining prompts

Prompts are reusable interaction templates that users trigger explicitly. They are the least common primitive, but useful for complex workflows you want to standardize.

server.prompt(
  "explore",
  "Guide a data exploration session on a specific table",
  [
    {
      name: "table",
      description: "The table to explore",
      required: true,
    },
  ],
  async ({ table }) => {
    // Fetch the schema for context
    const schema = db
      .prepare(
        "SELECT sql FROM sqlite_master WHERE type='table' AND name = ?"
      )
      .get(table) as { sql: string } | undefined;
 
    if (!schema) {
      return {
        messages: [
          {
            role: "user" as const,
            content: {
              type: "text" as const,
              text: `Table "${table}" not found. Use the list_tables tool to see available tables.`,
            },
          },
        ],
      };
    }
 
    return {
      messages: [
        {
          role: "user" as const,
          content: {
            type: "text" as const,
            text: `I want to explore the "${table}" table. Here is its schema:\n\n${schema.sql}\n\nPlease:\n1. Use the get_sample tool to look at a few example rows\n2. Describe the shape of the data and what each column likely represents\n3. Suggest three interesting queries I could run to understand this data better`,
          },
        },
      ],
    };
  }
);

When a user triggers this prompt (often exposed as a slash command like /explore users in the UI), it generates a structured message that kicks off a guided exploration workflow. The model then follows the instructions, calling the tools it needs along the way.

Prompts are optional. Plenty of useful MCP servers have only tools. But for servers with complex workflows, prompts provide a nice way to guide the interaction.

Starting the server

The last piece is the transport layer. For a local server (which is what most MCP servers are), stdio is the standard:

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
}
 
main().catch(console.error);

One critical thing to know about stdio servers: never use console.log(). Stdout is reserved for JSON-RPC messages. If you write anything else to stdout, you corrupt the protocol and the client will disconnect. Use console.error() for any logging, which writes to stderr and gets captured separately.

// This will break your server:
console.log("Server started");
 
// This is fine:
console.error("Server started");

I have debugged this exact issue more than once. If your server connects but then immediately disconnects, stray stdout output is the first thing to check.

The complete server

Here is the full src/index.ts with all the pieces together:

#!/usr/bin/env node
 
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import Database from "better-sqlite3";
import { z } from "zod";
 
const db = new Database("./data.db");
 
const server = new McpServer({
  name: "sqlite-explorer",
  version: "1.0.0",
});
 
// Tools
 
server.tool(
  "query",
  "Run a read-only SQL query against the database",
  {
    sql: z.string().describe("The SQL query to execute (SELECT only)"),
  },
  async ({ sql }) => {
    const trimmed = sql.trim().toUpperCase();
    if (!trimmed.startsWith("SELECT")) {
      return {
        content: [{ type: "text", text: "Error: Only SELECT queries are allowed" }],
        isError: true,
      };
    }
 
    try {
      const rows = db.prepare(sql).all();
      return {
        content: [{ type: "text", text: JSON.stringify(rows, null, 2) }],
      };
    } catch (error) {
      return {
        content: [
          {
            type: "text",
            text: `Error: ${error instanceof Error ? error.message : "Unknown error"}`,
          },
        ],
        isError: true,
      };
    }
  }
);
 
server.tool(
  "list_tables",
  "List all tables in the database with their row counts",
  {},
  async () => {
    const tables = db
      .prepare(
        "SELECT name FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite_%'"
      )
      .all() as Array<{ name: string }>;
 
    const results = tables.map((t) => {
      const count = db
        .prepare(`SELECT COUNT(*) as count FROM "${t.name}"`)
        .get() as { count: number };
      return { name: t.name, rows: count.count };
    });
 
    return {
      content: [{ type: "text", text: JSON.stringify(results, null, 2) }],
    };
  }
);
 
server.tool(
  "get_sample",
  "Get a sample of rows from a table for data exploration",
  {
    table: z.string().describe("The table name to sample from"),
    limit: z
      .number()
      .min(1)
      .max(100)
      .default(10)
      .describe("Number of rows to return (1-100, default 10)"),
    where: z
      .string()
      .optional()
      .describe("Optional WHERE clause (without the WHERE keyword)"),
  },
  async ({ table, limit, where }) => {
    try {
      let sql = `SELECT * FROM "${table}"`;
      if (where) sql += ` WHERE ${where}`;
      sql += ` LIMIT ${limit}`;
 
      const rows = db.prepare(sql).all();
      return {
        content: [{ type: "text", text: JSON.stringify(rows, null, 2) }],
      };
    } catch (error) {
      return {
        content: [
          {
            type: "text",
            text: `Error: ${error instanceof Error ? error.message : "Unknown error"}`,
          },
        ],
        isError: true,
      };
    }
  }
);
 
// Resources
 
server.resource("schema", "sqlite://schema", async () => {
  const tables = db
    .prepare(
      "SELECT sql FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite_%'"
    )
    .all() as Array<{ sql: string }>;
 
  return {
    contents: [
      {
        uri: "sqlite://schema",
        mimeType: "text/plain",
        text: tables.map((t) => t.sql).join("\n\n"),
      },
    ],
  };
});
 
// Prompts
 
server.prompt(
  "explore",
  "Guide a data exploration session on a specific table",
  [{ name: "table", description: "The table to explore", required: true }],
  async ({ table }) => {
    const schema = db
      .prepare("SELECT sql FROM sqlite_master WHERE type='table' AND name = ?")
      .get(table) as { sql: string } | undefined;
 
    if (!schema) {
      return {
        messages: [
          {
            role: "user" as const,
            content: {
              type: "text" as const,
              text: `Table "${table}" not found. Use the list_tables tool to see available tables.`,
            },
          },
        ],
      };
    }
 
    return {
      messages: [
        {
          role: "user" as const,
          content: {
            type: "text" as const,
            text: `I want to explore the "${table}" table. Here is its schema:\n\n${schema.sql}\n\nPlease:\n1. Use the get_sample tool to look at a few example rows\n2. Describe the shape of the data and what each column likely represents\n3. Suggest three interesting queries I could run to understand this data better`,
          },
        },
      ],
    };
  }
);
 
// Start
 
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("sqlite-explorer MCP server running on stdio");
}
 
main().catch(console.error);

Building and testing

Build the server:

npm run build

Testing with the MCP Inspector

The MCP Inspector is the official debugging tool for MCP servers. It gives you a visual UI where you can connect to your server, browse its tools, resources, and prompts, and test them interactively.

npx @modelcontextprotocol/inspector node dist/index.js

This opens a web UI at http://localhost:6274. From there you can:

  • See all registered tools and their schemas
  • Call tools with custom inputs and inspect the results
  • Browse resources and view their contents
  • Test prompts with different arguments
  • Watch the JSON-RPC messages flowing between client and server

I use the Inspector constantly during development. It is much faster than connecting to a full AI client for testing. Make a change, rebuild, reconnect in the Inspector, and test the tool. The feedback loop is seconds, not minutes.

Testing with Claude Code

For a real end-to-end test, connect the server to Claude Code:

claude mcp add sqlite-explorer -- node /absolute/path/to/my-mcp-server/dist/index.js

Or add it to your project's .mcp.json at the repository root:

{
  "mcpServers": {
    "sqlite-explorer": {
      "command": "node",
      "args": ["/absolute/path/to/my-mcp-server/dist/index.js"]
    }
  }
}

Now when you start a Claude Code session, it will connect to your server. Ask it "what tables are in the database?" and it will call list_tables. Ask it "show me a sample of the users table" and it will call get_sample. The model discovers and uses your tools automatically based on the conversation.

Publishing to npm

If you want other people to use your server, the standard distribution channel for TypeScript MCP servers is npm with the npx pattern. Users can run your server without installing it:

npx -y my-mcp-server

To make this work, your package.json needs the bin field pointing to your compiled entry file (which we already set up), and the entry file needs the #!/usr/bin/env node shebang at the top (which we already added).

Publish it:

npm publish

Then anyone can add it to their MCP client config:

{
  "mcpServers": {
    "sqlite-explorer": {
      "command": "npx",
      "args": ["-y", "my-mcp-server"]
    }
  }
}

The -y flag auto-confirms the npx install prompt. This pattern is how most community MCP servers are distributed. Users do not need to clone a repo or manage dependencies. They just reference the npm package.

Configuration patterns

Real MCP servers usually need configuration: database paths, API keys, connection strings. There are two standard ways to pass these in.

Command-line arguments work well for simple configuration:

{
  "mcpServers": {
    "sqlite-explorer": {
      "command": "node",
      "args": ["dist/index.js", "--db", "/path/to/database.db"]
    }
  }
}

Parse them in your server with process.argv or a library like commander.
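A minimal sketch of doing this by hand (parseFlag is a hypothetical helper; commander gives you help text and validation on top of this):

```typescript
// Find a "--flag value" pair in argv, falling back to a default.
function parseFlag(argv: string[], flag: string, fallback: string): string {
  const i = argv.indexOf(flag);
  return i !== -1 && argv[i + 1] !== undefined ? argv[i + 1] : fallback;
}

// process.argv looks like: ["node", "dist/index.js", "--db", "/path/to/database.db"]
const dbPath = parseFlag(process.argv, "--db", "./data.db");
```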

Environment variables are better for secrets:

{
  "mcpServers": {
    "sqlite-explorer": {
      "command": "node",
      "args": ["dist/index.js"],
      "env": {
        "DATABASE_PATH": "/path/to/database.db",
        "API_KEY": "sk-..."
      }
    }
  }
}

The env object in the client config sets environment variables for the server process. This keeps secrets out of command-line arguments (which are visible in process listings) and lets users reference their existing environment variables.
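In your server, read these from process.env with sensible fallbacks. A sketch (resolveConfig is a hypothetical helper; the variable names match the config above):

```typescript
// Pull configuration out of the environment, with a fallback for the
// database path. Check for missing secrets at startup, not at first use.
function resolveConfig(env: Record<string, string | undefined>) {
  return {
    dbPath: env.DATABASE_PATH ?? "./data.db",
    apiKey: env.API_KEY,
  };
}

const config = resolveConfig(process.env);
if (!config.apiKey) {
  // Log to stderr -- stdout is reserved for JSON-RPC.
  console.error("Warning: API_KEY is not set");
}
```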

Beyond databases

The patterns in this post work for any data source or service. The shape is always the same: create a server, register tools with Zod schemas, write handlers that call your underlying service, and return results as text content.

Some ideas for MCP servers you could build:

  • A wrapper around your company's internal API, letting the model search employees, check project status, or pull reports
  • A documentation search server that indexes your docs and exposes a semantic search tool
  • A monitoring server that lets the model query recent error logs, check service health, or pull metrics
  • A project management server connected to Linear or Jira that lets the model look up tickets, check sprint progress, or create issues

The protocol is simple. A tool takes typed inputs and returns text. Once you build one server, building the next one takes minutes because the structure is identical. Only the handler logic changes.

