
TypeScript SDK

The @hatidata/sdk package provides a typed client for connecting to HatiData from Node.js and TypeScript applications. It wraps the standard pg (node-postgres) driver with agent-aware connection parameters, passing agent_id and framework to the proxy for billing, audit, and policy targeting.

Installation

npm install @hatidata/sdk

Requirements: Node.js 18+.


HatiDataClient

Constructor

import { HatiDataClient } from '@hatidata/sdk';

const client = new HatiDataClient({
  host: 'localhost',
  port: 5439,
  agentId: 'my-ts-agent',
  framework: 'custom',
  database: 'hatidata',
  user: 'agent',
  password: 'hd_live_...',
});

await client.connect();

interface HatiDataClientOptions {
  host?: string;            // Proxy hostname (default: "localhost")
  port?: number;            // Proxy port (default: 5439)
  agentId?: string;         // Unique agent identifier
  framework?: string;       // AI framework name (default: "custom")
  database?: string;        // Database name (default: "hatidata")
  user?: string;            // Username (default: "agent")
  password?: string;        // Password or API key
  priority?: string;        // Query priority: low | normal | high | critical
  connectTimeout?: number;  // Connection timeout in ms (default: 10000)
}
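The documented defaults can be made concrete with a small helper. This is a sketch using a local re-statement of the option types; `withDefaults` is an illustrative helper for this page, not an SDK export.

```typescript
// Local re-statement of the options shape for illustration; the real type
// ships with @hatidata/sdk.
type Priority = 'low' | 'normal' | 'high' | 'critical';

interface ClientOptions {
  host?: string;
  port?: number;
  agentId?: string;
  framework?: string;
  priority?: Priority;
  connectTimeout?: number;
}

// Hypothetical helper: fill in the documented defaults before connecting.
function withDefaults(opts: ClientOptions): ClientOptions {
  return {
    host: 'localhost',
    port: 5439,
    framework: 'custom',
    connectTimeout: 10000,
    ...opts,
  };
}

const opts = withDefaults({ agentId: 'alerting-agent', priority: 'high' });
// opts.port === 5439, opts.priority === 'high'
```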

query<T>()

Execute a SELECT query and return typed rows.

interface Customer {
  id: number;
  name: string;
  email: string;
  region: string;
}

const customers = await client.query<Customer>(
  'SELECT id, name, email, region FROM customers WHERE region = $1',
  ['US']
);
// customers is Customer[]

for (const c of customers) {
  console.log(c.name, c.region);
}

The type parameter T is used for the return type only — HatiData does not enforce it at runtime. Columns masked by policy are redacted in the response before the type is applied.
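Because the type parameter is a compile-time assertion only, it helps to type policy-maskable columns defensively and guard at runtime. This sketch assumes a masked column arrives as `null`; depending on your masking policy it may instead be a placeholder string, so adjust the guard accordingly.

```typescript
// If a policy may mask `email`, type it as nullable rather than `string`,
// since the query<T> type parameter is not enforced at runtime.
interface Customer {
  id: number;
  name: string;
  email: string | null; // may be redacted by policy (assumed null here)
}

function contactLine(c: Customer): string {
  // Guard before using a column that a policy might have redacted.
  return c.email !== null ? `${c.name} <${c.email}>` : `${c.name} (email redacted)`;
}
```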

Snowflake SQL works transparently:

const rows = await client.query(
  `SELECT
     customer_id,
     NVL(name, 'Unknown') AS name,
     IFF(revenue > 10000, 'enterprise', 'smb') AS tier,
     DATEDIFF(day, first_order, CURRENT_DATE) AS days_active
   FROM customers
   WHERE status = 'active'`
);

The proxy auto-transpiles NVL, IFF, DATEDIFF, DATEADD, VARIANT, TIMESTAMP_NTZ, and all other Snowflake constructs to native equivalents.


execute()

Execute INSERT, UPDATE, or DELETE. Returns the affected row count.

const count = await client.execute(
  'UPDATE users SET active = true WHERE id = $1',
  [42]
);
console.log(`${count} rows updated`);

await client.execute(
  'CREATE TABLE IF NOT EXISTS events (id INT, type VARCHAR, payload JSON)'
);

await client.execute(
  "INSERT INTO events VALUES (1, 'click', '{\"page\": \"/home\"}')"
);

queryOne<T>()

Execute a query and return the first row, or null if no rows match.

interface User {
  id: number;
  name: string;
}

const user = await client.queryOne<User>(
  'SELECT * FROM users WHERE id = $1',
  [42]
);

if (user) {
  console.log(user.name);
}

Connection Management

Lifecycle

const client = new HatiDataClient({ host: 'localhost', agentId: 'my-agent' });

try {
  await client.connect();
  const rows = await client.query('SELECT COUNT(*) FROM orders');
  console.log(rows);
} finally {
  await client.close();
}

Connection Pooling

For applications handling concurrent requests, use HatiDataPool:

import { HatiDataPool } from '@hatidata/sdk';

const pool = new HatiDataPool({
  host: 'your-org.proxy.hatidata.com',
  port: 5439,
  agentId: 'web-api',
  password: 'hd_live_...',
  max: 20,            // Maximum pool size
  idleTimeout: 30000, // Close idle connections after 30s
});

// Auto-acquire and release
const rows = await pool.query('SELECT * FROM orders LIMIT 10');

// Manual connection management
const conn = await pool.acquire();
try {
  await conn.query('BEGIN');
  await conn.execute('INSERT INTO logs VALUES ($1, $2)', [1, 'started']);
  await conn.query('COMMIT');
} catch (err) {
  await conn.query('ROLLBACK'); // never return a connection with an open transaction
  throw err;
} finally {
  pool.release(conn);
}

Push to Cloud

After working locally, push data to HatiData Cloud:

const client = new HatiDataClient({ host: 'localhost' });
await client.connect();

// Exports local tables and uploads to cloud storage
await client.push({ target: 'cloud' });

// Connect to cloud endpoint
const cloudClient = new HatiDataClient({
  host: 'your-org.proxy.hatidata.com',
  password: 'hd_live_...',
});
await cloudClient.connect();
const rows = await cloudClient.query('SELECT COUNT(*) FROM users');

SQL Compatibility

The proxy auto-transpiles Snowflake SQL to HatiData's native dialect. Write in either syntax:

// Snowflake functions — auto-transpiled
const rows = await client.query(`
SELECT
customer_id,
NVL(preferred_name, first_name) AS display_name,
IFF(lifetime_value > 50000, 'vip', 'standard') AS segment,
DATEDIFF(day, signup_date, CURRENT_DATE) AS tenure_days,
DATEADD(day, 30, last_order_date) AS next_followup
FROM customers
WHERE status = 'active'
AND ARRAY_AGG(tags) IS NOT NULL
`);

// Native functions — also work directly
const rows2 = await client.query(`
SELECT
json_extract_string(metadata, 'source') AS source,
DATE_DIFF('day', created_at, CURRENT_DATE) AS age_days
FROM events
WHERE created_at > CURRENT_DATE - INTERVAL '30 days'
`);

Using Standard pg with Self-Registration

HatiData speaks the Postgres wire protocol. Any pg-compatible client connects directly, and agents auto-register on first connect:

import { Client } from 'pg';

const client = new Client({
  host: 'preprod.hatidata.com',
  port: 5439,
  user: 'agent',
  password: 'hd_live_YOUR_KEY',
  database: 'main',
  application_name: 'my-agent', // becomes agent_id
  ssl: { rejectUnauthorized: false },
});

// Agent auto-registers on first connect
await client.connect();
const result = await client.query('SELECT * FROM users');
console.log(result.rows);
await client.end();

Self-Registration

When connecting in cloud mode, the proxy auto-registers the agent using the application_name as the agent identity. The agent receives a fingerprint for billing, audit, and policy targeting without any manual setup.

When connecting outside the SDK, encode identity in application_name using the agent_id/framework convention (for example, my-agent/langchain); the proxy parses it for billing and audit.
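The agent_id/framework naming convention described above can be sketched as a pair of helpers. `formatAgentName` and `parseAgentName` are illustrative, not SDK exports; the default framework of "custom" mirrors the SDK's documented default.

```typescript
// Sketch of the agent_id/framework convention for application_name.
// These helpers are hypothetical; the proxy does this parsing server-side.
function formatAgentName(agentId: string, framework = 'custom'): string {
  return `${agentId}/${framework}`;
}

function parseAgentName(applicationName: string): { agentId: string; framework: string } {
  const [agentId, framework = 'custom'] = applicationName.split('/');
  return { agentId, framework };
}

// Used with a plain pg client:
// new Client({ ..., application_name: formatAgentName('my-agent', 'langchain') })
```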


Agent-Native Features (Coming in v0.4.0)

Coming Soon

The @hatidata/sdk v0.4.0 release will include built-in support for agent-native features:

  • Memory -- store and search long-term agent memories with hybrid vector + SQL search
  • Chain-of-Thought -- log immutable, hash-chained reasoning traces
  • Semantic Triggers -- register event-driven rules that fire on concept match
  • Branches -- create isolated copy-on-write workspaces for speculative operations

Until v0.4.0, these features are available via the MCP Tools or the Control Plane API.


Full Example

import { HatiDataClient } from '@hatidata/sdk';

async function analyzeRevenue(): Promise<void> {
  const client = new HatiDataClient({
    host: 'your-org.proxy.hatidata.com',
    port: 5439,
    agentId: 'analytics-agent',
    framework: 'custom',
    password: process.env.HATIDATA_API_KEY,
  });

  try {
    await client.connect();

    interface RevenueRow {
      segment: string;
      revenue: number;
      customer_count: number;
    }

    const revenue = await client.query<RevenueRow>(`
      SELECT
        IFF(lifetime_value > 10000, 'enterprise', 'smb') AS segment,
        SUM(total_revenue) AS revenue,
        COUNT(*) AS customer_count
      FROM customers
      WHERE status = 'active'
      GROUP BY 1
      ORDER BY 2 DESC
    `);

    for (const row of revenue) {
      console.log(`${row.segment}: $${row.revenue} (${row.customer_count} customers)`);
    }
  } finally {
    await client.close();
  }
}

analyzeRevenue().catch(console.error);

Source Code

github.com/HatiOS-AI/HatiData-SDKs — sdk/typescript
