Proxy MCP Server (HTTP)

The Proxy MCP Server runs alongside the SQL proxy as part of the same hatidata-proxy binary. While the SQL proxy listens on port 5439 and speaks the Postgres wire protocol, the MCP server is served over HTTP (HTTPS in deployed environments) at the /mcp path on your API domain.

  • SQL Proxy: port 5439 (Postgres wire protocol)
  • MCP Server: HTTPS at https://<api-domain>/mcp
  • Protocol: JSON-RPC 2.0 over HTTP, following the MCP 2025-03-26 spec
  • Auth: Authorization: ApiKey hd_agent_* or hd_live_* header

The MCP server exposes 24 tools that give AI agents governed access to memory, chain-of-thought logging, semantic triggers, branch isolation, and SQL queries — all passing through HatiData's full security pipeline.

Connection URLs

| Environment | MCP Endpoint | SQL Proxy |
|---|---|---|
| Local dev | `http://localhost:5440/mcp` | `postgres://user:key@localhost:5439/main` |
| Preprod | `https://preprod-api.hatidata.com/mcp` | `psql -h preprod.hatidata.com -p 5439 --set=sslmode=require` |
| Production | `https://api.hatidata.com/mcp` | `psql -h proxy.hatidata.com -p 5439 --set=sslmode=require` |
info

In local development, the MCP server defaults to port 5440 over plain HTTP. In deployed environments, the MCP endpoint runs behind your API domain's TLS termination.

Authentication

Every MCP request must include an API key in the Authorization header:

Authorization: ApiKey <your-api-key>

HatiData supports two key prefixes:

| Prefix | Purpose | Typical Use |
|---|---|---|
| `hd_live_*` | Programmatic access | Backend services, scripts, CI/CD |
| `hd_agent_*` | Agent-scoped access | AI agents with per-agent permission boundaries |

Agent keys (hd_agent_*) are scoped to a specific agent identity, which means policy evaluation, quota metering, and audit logging are all tied to that agent. Use agent keys whenever an AI agent connects directly.

Key Format

All keys must be exactly 40 characters: prefix (hd_live_ = 8 chars, hd_agent_ = 9 chars) + alphanumeric random part (32 or 31 chars). Keys that are longer, shorter, or contain special characters will be rejected with Invalid API key format. See the Connection Guide for details.
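As a client-side sanity check, the 40-character rule can be verified before a request ever leaves your process. A minimal sketch in Python; the regex simply encodes the rule stated above and is an illustration, not an official validator:

```python
import re

# Encodes the documented rule: a known prefix plus an alphanumeric
# random part, totalling exactly 40 characters.
#   hd_live_  (8 chars) + 32 alphanumeric chars
#   hd_agent_ (9 chars) + 31 alphanumeric chars
KEY_RE = re.compile(r"^(hd_live_[A-Za-z0-9]{32}|hd_agent_[A-Za-z0-9]{31})$")


def looks_like_valid_key(key: str) -> bool:
    """Return True if the key matches the documented 40-char format."""
    return bool(KEY_RE.fullmatch(key))


assert looks_like_valid_key("hd_live_" + "a" * 32)       # exactly 40 chars
assert not looks_like_valid_key("hd_agent_" + "a" * 32)  # 41 chars: rejected
```

Catching a malformed key locally saves a round trip that would otherwise fail with `Invalid API key format`.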

Create API keys in the Dashboard under Settings > API Keys, or via the Control Plane API:

```bash
curl -X POST https://api.hatidata.com/v1/orgs/{org_id}/api-keys \
  -H 'Authorization: Bearer <your-jwt>' \
  -H 'Content-Type: application/json' \
  -d '{"name": "research-agent", "prefix": "hd_agent"}'
```

JSON-RPC Request Format

All MCP requests use JSON-RPC 2.0. Every request is a POST with Content-Type: application/json.

1. tools/list — List Available Tools

Request:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": {}
}
```

Response:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "store_memory",
        "description": "Store a memory entry for an agent",
        "inputSchema": {
          "type": "object",
          "properties": {
            "content": { "type": "string", "description": "The memory content to store" },
            "metadata": { "type": "object", "description": "Optional metadata key-value pairs" },
            "tags": { "type": "array", "items": { "type": "string" }, "description": "Optional tags for categorization" }
          },
          "required": ["content"]
        }
      },
      {
        "name": "search_memory",
        "description": "Search agent memories using hybrid SQL + vector search"
      },
      {
        "name": "query",
        "description": "Execute a SQL query against the data layer"
      }
    ]
  }
}
```
tip

The response is truncated above for brevity. The full response includes all 24 tools with complete input schemas. See MCP Tools Reference for the complete list.

2. tools/call with store_memory

Request:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "store_memory",
    "arguments": {
      "content": "User prefers weekly summary reports delivered on Monday mornings.",
      "metadata": {
        "category": "user_preference",
        "confidence": 0.95
      },
      "tags": ["preference", "reporting"]
    }
  }
}
```

Response:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "{\"memory_id\": \"a1b2c3d4-5678-90ab-cdef-1234567890ab\", \"status\": \"stored\", \"has_embedding\": false}"
      }
    ]
  }
}
```

The has_embedding field will become true once the embedding worker processes the memory asynchronously.

3. tools/call with search_memory

Request:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "search_memory",
    "arguments": {
      "query": "What are the user's reporting preferences?",
      "limit": 5
    }
  }
}
```

Response:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "{\"memories\": [{\"memory_id\": \"a1b2c3d4-5678-90ab-cdef-1234567890ab\", \"content\": \"User prefers weekly summary reports delivered on Monday mornings.\", \"score\": 0.92, \"tags\": [\"preference\", \"reporting\"], \"created_at\": \"2026-03-20T10:30:00Z\"}], \"total\": 1}"
      }
    ]
  }
}
```

The search uses hybrid retrieval: Qdrant ANN for semantic similarity, joined with DuckDB metadata filtering. If the vector store is unavailable, it falls back gracefully to DuckDB-only search.

4. tools/call with log_reasoning_step

Request:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "log_reasoning_step",
    "arguments": {
      "session_id": "session-abc-123",
      "step_type": "observation",
      "content": "The user's recent query pattern suggests they are investigating Q1 revenue trends. Cross-referencing with stored preferences for weekly reporting.",
      "metadata": {
        "confidence": 0.88,
        "sources": ["memory:a1b2c3d4", "query:select-revenue-q1"]
      }
    }
  }
}
```

Response:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "{\"trace_id\": \"tr_9f8e7d6c\", \"session_id\": \"session-abc-123\", \"sequence\": 3, \"hash\": \"sha256:a4f2e8...\", \"prev_hash\": \"sha256:7b3d1c...\"}"
      }
    ]
  }
}
```

Each reasoning step is hash-chained (SHA-256) within its session, creating an immutable, verifiable audit trail. The sequence number increments per session, and prev_hash links to the prior step.
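Chain verification can be done client-side without trusting the server. A minimal sketch: the exact fields the server feeds into SHA-256 are not specified here, so this assumes each step's hash covers the previous hash concatenated with the step content, and assumes an all-zero genesis value; the real construction may include more fields (sequence, metadata, timestamps).

```python
import hashlib


def step_hash(prev_hash: str, content: str) -> str:
    """Hash one reasoning step into the chain.

    ASSUMPTION: the server hashes prev_hash + content; the actual
    field set is not documented in this section and may differ.
    """
    digest = hashlib.sha256((prev_hash + content).encode()).hexdigest()
    return f"sha256:{digest}"


def verify_chain(steps: list[dict]) -> bool:
    """Check that every step links to its predecessor and re-hashes cleanly."""
    prev = "sha256:" + "0" * 64  # assumed genesis value for sequence 1
    for step in steps:
        if step["prev_hash"] != prev:
            return False  # broken link
        if step["hash"] != step_hash(prev, step["content"]):
            return False  # content was altered after hashing
        prev = step["hash"]
    return True
```

Any edit to a stored step changes its hash, which breaks the `prev_hash` link of every later step, so tampering is detectable from the chain alone.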

Example: cURL

```bash
# List all available tools
curl -X POST https://preprod-api.hatidata.com/mcp \
  -H 'Content-Type: application/json' \
  -H 'Authorization: ApiKey hd_agent_xxx' \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'

# Store a memory
curl -X POST https://preprod-api.hatidata.com/mcp \
  -H 'Content-Type: application/json' \
  -H 'Authorization: ApiKey hd_agent_xxx' \
  -d '{
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
      "name": "store_memory",
      "arguments": {
        "content": "Customer reported issue with billing dashboard latency.",
        "tags": ["support", "billing"]
      }
    }
  }'

# Search memories
curl -X POST https://preprod-api.hatidata.com/mcp \
  -H 'Content-Type: application/json' \
  -H 'Authorization: ApiKey hd_agent_xxx' \
  -d '{
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
      "name": "search_memory",
      "arguments": {
        "query": "billing issues",
        "limit": 10
      }
    }
  }'
```

Example: Rust (reqwest)

```rust
use reqwest::Client;
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();
    let api_key = std::env::var("HATIDATA_API_KEY")?;
    let endpoint = "https://preprod-api.hatidata.com/mcp";

    // List tools
    let resp = client
        .post(endpoint)
        .header("Content-Type", "application/json")
        .header("Authorization", format!("ApiKey {api_key}"))
        .json(&json!({
            "jsonrpc": "2.0",
            "id": 1,
            "method": "tools/list",
            "params": {}
        }))
        .send()
        .await?;

    println!("Tools: {}", resp.text().await?);

    // Store a memory
    let resp = client
        .post(endpoint)
        .header("Content-Type", "application/json")
        .header("Authorization", format!("ApiKey {api_key}"))
        .json(&json!({
            "jsonrpc": "2.0",
            "id": 2,
            "method": "tools/call",
            "params": {
                "name": "store_memory",
                "arguments": {
                    "content": "User prefers dark mode dashboards.",
                    "tags": ["preference", "ui"]
                }
            }
        }))
        .send()
        .await?;

    println!("Stored: {}", resp.text().await?);

    // Search memories
    let resp = client
        .post(endpoint)
        .header("Content-Type", "application/json")
        .header("Authorization", format!("ApiKey {api_key}"))
        .json(&json!({
            "jsonrpc": "2.0",
            "id": 3,
            "method": "tools/call",
            "params": {
                "name": "search_memory",
                "arguments": {
                    "query": "user preferences",
                    "limit": 5
                }
            }
        }))
        .send()
        .await?;

    println!("Results: {}", resp.text().await?);

    Ok(())
}
```

Example: Python (requests)

```python
import requests
import os
import json

API_KEY = os.environ["HATIDATA_API_KEY"]
ENDPOINT = "https://preprod-api.hatidata.com/mcp"

headers = {
    "Content-Type": "application/json",
    "Authorization": f"ApiKey {API_KEY}",
}


def mcp_call(method: str, params: dict, req_id: int = 1) -> dict:
    """Send a JSON-RPC request to the MCP server."""
    payload = {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    }
    resp = requests.post(ENDPOINT, headers=headers, json=payload)
    resp.raise_for_status()
    return resp.json()


# List available tools
tools = mcp_call("tools/list", {})
for tool in tools["result"]["tools"]:
    print(f"  {tool['name']}: {tool.get('description', '')}")

# Store a memory
result = mcp_call("tools/call", {
    "name": "store_memory",
    "arguments": {
        "content": "Customer prefers email notifications over Slack.",
        "tags": ["preference", "notifications"],
    },
}, req_id=2)
print("Stored:", json.dumps(result["result"], indent=2))

# Search memories
result = mcp_call("tools/call", {
    "name": "search_memory",
    "arguments": {
        "query": "notification preferences",
        "limit": 5,
    },
}, req_id=3)
print("Found:", json.dumps(result["result"], indent=2))
```

Example: Node.js (fetch, TypeScript)

```typescript
const API_KEY = process.env.HATIDATA_API_KEY!;
const ENDPOINT = "https://preprod-api.hatidata.com/mcp";

interface JsonRpcResponse {
  jsonrpc: "2.0";
  id: number;
  result?: unknown;
  error?: { code: number; message: string; data?: unknown };
}

async function mcpCall(
  method: string,
  params: Record<string, unknown>,
  id = 1
): Promise<JsonRpcResponse> {
  const resp = await fetch(ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `ApiKey ${API_KEY}`,
    },
    body: JSON.stringify({ jsonrpc: "2.0", id, method, params }),
  });

  if (!resp.ok) {
    throw new Error(`HTTP ${resp.status}: ${await resp.text()}`);
  }

  return resp.json() as Promise<JsonRpcResponse>;
}

async function main() {
  // List tools
  const tools = await mcpCall("tools/list", {});
  console.log("Available tools:", tools.result);

  // Store a memory
  const stored = await mcpCall(
    "tools/call",
    {
      name: "store_memory",
      arguments: {
        content: "Agent completed onboarding flow for org acme-corp.",
        tags: ["onboarding", "lifecycle"],
      },
    },
    2
  );
  console.log("Stored:", stored.result);

  // Search memories
  const found = await mcpCall(
    "tools/call",
    {
      name: "search_memory",
      arguments: { query: "onboarding status", limit: 5 },
    },
    3
  );
  console.log("Found:", found.result);
}

main().catch(console.error);
```

Available Tools (24)

The MCP server exposes 24 tools across five categories. See the MCP Tools Reference for complete schemas and examples.

| Category | Tools | Description |
|---|---|---|
| Query | `query`, `list_tables`, `describe_table`, `get_context` | SQL queries, schema exploration, RAG context retrieval |
| Memory | `store_memory`, `search_memory`, `get_agent_state`, `set_agent_state`, `delete_memory` | Persistent agent memory with hybrid SQL + vector search |
| Chain-of-Thought | `log_reasoning_step`, `replay_decision`, `get_session_history` | Immutable, hash-chained reasoning traces |
| Triggers | `register_trigger`, `list_triggers`, `delete_trigger`, `test_trigger` | Semantic triggers with ANN pre-filtering and webhook dispatch |
| Branches | `branch_create`, `branch_query`, `branch_merge`, `branch_discard`, `branch_list` | Schema-isolated data branches with copy-on-write and merge strategies |

Additional tools cover agent identity, cost estimation, and administrative operations.

Error Handling

When a request fails, the MCP server returns a standard JSON-RPC error response:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32600,
    "message": "Invalid request: missing required field 'method'",
    "data": null
  }
}
```

Common error codes:

| Code | Meaning | Typical Cause |
|---|---|---|
| -32700 | Parse error | Malformed JSON in request body |
| -32600 | Invalid request | Missing required JSON-RPC fields (`jsonrpc`, `method`) |
| -32601 | Method not found | Unknown method (typo in `tools/list`, etc.) |
| -32602 | Invalid params | Missing or wrong-type arguments for a tool |
| -32603 | Internal error | Server-side failure (check logs) |
| -32001 | Authentication failed | Missing, expired, or invalid API key |
| -32002 | Permission denied | API key lacks permission for this tool or resource |
| -32003 | Quota exceeded | Agent or org has exceeded its query/storage quota |
| -32004 | Policy blocked | A security policy blocked the operation |
warning

Always check for the error field in the response before accessing result. A 200 OK HTTP status does not guarantee the JSON-RPC call succeeded — JSON-RPC errors are returned with a 200 status code per the specification.
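A client can centralize that check so no call site ever reads `result` from a failed response. A minimal sketch; `McpError` and `unwrap` are hypothetical local helpers, not part of any SDK:

```python
class McpError(Exception):
    """Hypothetical local wrapper around a JSON-RPC error object."""

    def __init__(self, code: int, message: str, data=None):
        super().__init__(f"MCP error {code}: {message}")
        self.code = code
        self.data = data


def unwrap(response: dict) -> dict:
    """Return the JSON-RPC result, raising McpError on an error object.

    JSON-RPC errors arrive with HTTP 200, so this must run on every
    response regardless of HTTP status.
    """
    error = response.get("error")
    if error is not None:
        raise McpError(error["code"], error["message"], error.get("data"))
    return response["result"]
```

Routing every response through `unwrap` lets callers branch on `McpError.code` (e.g. retry on -32003 quota errors, re-authenticate on -32001) instead of scattering `if "error" in response` checks.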

TLS Requirements

| Environment | MCP Transport | SQL Proxy Transport |
|---|---|---|
| Local dev | HTTP (`http://localhost:5440/mcp`) | Plain (`sslmode=disable`) |
| Preprod | HTTPS (required) | TLS (`sslmode=require`) |
| Production | HTTPS (required) | TLS (`sslmode=require`) |

All production and preprod connections must use HTTPS for the MCP server and sslmode=require for the SQL proxy. Local development can use plain HTTP and unencrypted Postgres connections.

The MCP server does not handle TLS termination directly — it runs behind a reverse proxy (Caddy, nginx, or cloud load balancer) that terminates TLS. In local dev, the server listens on plain HTTP on port 5440.
