MCP Setup

HatiData's built-in MCP server connects Claude Desktop, Claude Code, Cursor, and any other MCP-compatible agent directly to your agent data layer. Every tool call passes through HatiData's full multi-stage security pipeline — policy evaluation, row-level filtering, column masking, quota metering, and immutable audit logging — so agents get safe, governed access to your data without any additional configuration.

ANDI (Agent-Native Data Interface) is the protocol layer that makes this work. When an agent calls the query tool, ANDI translates the request into HatiData's query pipeline, runs it against the query engine, and returns structured JSON results. The agent never touches raw storage; it interacts only through the governed interface.

What is MCP

The Model Context Protocol is an open standard for connecting AI models to external data sources and tools. HatiData's MCP server exposes four tools:

Tool             Description
query            Execute SQL against the data layer and return results as JSON
list_tables      List all tables the agent has permission to see
describe_table   Get column names, types, and nullability for a table
get_context      RAG context retrieval via full-text and semantic search

These tools give any MCP-compatible agent the ability to explore your data layer schema, run analytical queries, and retrieve contextual information — all governed by HatiData's ABAC policy engine.

Installation

The MCP server ships with the hatidata-agent Python package:

pip install "hatidata-agent[mcp]"

This installs the hatidata-mcp-server command-line tool.

Configuration

Claude Desktop

Add the following to your Claude Desktop configuration file:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
claude_desktop_config.json
{
  "mcpServers": {
    "hatidata": {
      "command": "hatidata-mcp-server",
      "args": [
        "--host", "localhost",
        "--port", "5439",
        "--agent-id", "claude-desktop",
        "--database", "hatidata"
      ]
    }
  }
}

For a cloud proxy, point to your org endpoint and provide an API key:

{
  "mcpServers": {
    "hatidata": {
      "command": "hatidata-mcp-server",
      "args": [
        "--host", "your-org.proxy.hatidata.com",
        "--port", "5439",
        "--agent-id", "claude-desktop",
        "--database", "hatidata",
        "--password", "hd_live_your_api_key"
      ]
    }
  }
}

Claude Code

Add to your Claude Code MCP settings:

.claude/mcp.json
{
  "mcpServers": {
    "hatidata": {
      "command": "hatidata-mcp-server",
      "args": [
        "--host", "localhost",
        "--port", "5439",
        "--agent-id", "claude-code",
        "--database", "hatidata"
      ]
    }
  }
}

Cursor

Add to your Cursor MCP configuration:

.cursor/mcp.json
{
  "mcpServers": {
    "hatidata": {
      "command": "hatidata-mcp-server",
      "args": [
        "--host", "localhost",
        "--port", "5439",
        "--agent-id", "cursor-agent",
        "--database", "hatidata"
      ]
    }
  }
}

API Key Security

Avoid hardcoding API keys in configuration files. Use environment variables or a secret manager where possible.
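Many MCP clients, including Claude Desktop, accept an env block alongside command and args, which sets environment variables for the spawned server process. A hedged sketch of this pattern follows; the HATIDATA_PASSWORD variable name is an assumption, and whether the server reads it when --password is omitted should be confirmed against the CLI documentation:

```json
{
  "mcpServers": {
    "hatidata": {
      "command": "hatidata-mcp-server",
      "args": [
        "--host", "your-org.proxy.hatidata.com",
        "--port", "5439",
        "--agent-id", "claude-desktop",
        "--database", "hatidata"
      ],
      "env": {
        "HATIDATA_PASSWORD": "hd_live_your_api_key"
      }
    }
  }
}
```

This keeps the key out of the args array, though the config file itself should still be protected, since the value is stored there in plain text.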

CLI Options

hatidata-mcp-server [OPTIONS]

Options:
--host        Proxy hostname       [default: localhost]
--port        Proxy port           [default: 5439]
--agent-id    Agent identifier     [default: mcp-agent]
--database    Database name        [default: hatidata]
--user        Username             [default: agent]
--password    Password or API key  [default: ""]
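Putting the options together, a full invocation against a cloud proxy might look like the following sketch; the agent ID is illustrative, and the key is read from a shell environment variable rather than typed inline:

```shell
# Export the API key once (or set it in your secret manager / shell profile)
export HATIDATA_API_KEY="hd_live_your_api_key"

# Launch the MCP server against the cloud proxy, passing the key from the environment
hatidata-mcp-server \
  --host your-org.proxy.hatidata.com \
  --port 5439 \
  --agent-id reporting-agent \
  --database hatidata \
  --user agent \
  --password "$HATIDATA_API_KEY"
```

In normal use you will not run this by hand: the MCP client spawns the process for you using the command and args from its configuration file.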

Tool Reference

query

Execute SQL against the data layer. Supports both standard SQL and legacy dialect syntax (NVL, IFF, DATEDIFF, DATEADD) — these are auto-transpiled to native equivalents.

Input:

{ "sql": "SELECT customer_id, SUM(total) as revenue FROM orders GROUP BY 1 ORDER BY 2 DESC LIMIT 10" }

Output:

[
  {"customer_id": 42, "revenue": 125000.00},
  {"customer_id": 17, "revenue": 98500.00}
]
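Legacy dialect functions can be sent as-is. For example, this sketch (table and column names are illustrative) uses NVL, which the transpiler would rewrite to the standard COALESCE before execution:

```json
{ "sql": "SELECT customer_id, NVL(total, 0) AS total FROM orders" }
```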

list_tables

List all tables the authenticated agent is permitted to see, based on ABAC policy evaluation.

Input: (none)

Output:

["customers", "orders", "products", "events"]

describe_table

Get column names, types, and nullability for a table.

Input:

{ "table_name": "orders" }

Output:

[
  {"column_name": "id", "data_type": "INTEGER", "is_nullable": "NO"},
  {"column_name": "customer_id", "data_type": "INTEGER", "is_nullable": "NO"},
  {"column_name": "total", "data_type": "DECIMAL(10,2)", "is_nullable": "YES"},
  {"column_name": "created_at", "data_type": "TIMESTAMP", "is_nullable": "YES"}
]

get_context

Retrieve relevant rows using full-text and semantic search (for RAG workflows). Returns the top-K results ranked by relevance.

Input:

{
  "table": "knowledge_base",
  "search_query": "enterprise pricing",
  "top_k": 5
}
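The source does not document the response shape. By analogy with the query tool, a top-K response is presumably a JSON array of matching rows; the fields and values below are illustrative assumptions only:

```json
[
  {"id": 7, "content": "Enterprise plans include volume pricing and SSO.", "score": 0.91},
  {"id": 12, "content": "Pricing tiers: starter, growth, enterprise.", "score": 0.84}
]
```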

Transport Protocol

The MCP server uses stdio transport — it reads JSON-RPC messages from stdin and writes responses to stdout. The client spawns the hatidata-mcp-server process, which maintains a persistent PostgreSQL wire-protocol connection to the HatiData proxy for the duration of the session.
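Concretely, the messages on stdin follow the JSON-RPC 2.0 framing defined by the MCP specification. A minimal sketch of a tool invocation as the client would write it (the id value is arbitrary):

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/call", "params": {"name": "list_tables", "arguments": {}}}
```

The server's matching response arrives on stdout with the same id, carrying the tool result in its result field.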

Programmatic Usage

Embed HatiData's MCP capabilities in a custom agent runtime:

from hatidata_agent.mcp_server import create_server, create_tools
from hatidata_agent import HatiDataAgent

agent = HatiDataAgent(
    host="localhost",
    port=5439,
    agent_id="my-mcp-server",
)

# Get tool definitions for custom integration
tools = create_tools(agent)

# Or start the full MCP server programmatically
server = create_server(agent)
server.run()  # Blocks, listening on stdio

Security

Every tool call passes through HatiData's full security pipeline:

Layer               What it does
Authentication      API key validated on every connection
Authorization       ABAC policies evaluate each query against the agent's role
Row-level security  Automatic WHERE clause injection based on agent identity
Column masking      Sensitive columns redacted based on role
Quota enforcement   Query costs tracked against the agent's credit limit
Audit logging       Every call logged with agent ID, framework, and full query text

Additional Frameworks

HatiData works with any framework that supports Postgres connections or the Model Context Protocol. For framework-specific guides, see:

  • LangChain — Memory, VectorStore, and Toolkit for LangChain agents
  • CrewAI — Multi-agent workflows with per-role billing
  • AutoGen — GroupChat with shared HatiData state
  • Postgres Drivers — Connect any Postgres-compatible tool or framework
