TypeScript SDK
The @hatidata/sdk package provides a typed client for connecting to HatiData from Node.js and TypeScript applications. It wraps the standard pg (node-postgres) driver with agent-aware connection parameters.
Installation
npm install @hatidata/sdk
Requirements: Node.js 18+.
Basic Usage
Connect and Query
import { HatiDataClient } from '@hatidata/sdk';
const client = new HatiDataClient({
host: 'localhost',
port: 5439,
agentId: 'my-ts-agent',
framework: 'custom',
database: 'hatidata',
user: 'agent',
password: '',
});
await client.connect();
// SELECT query -- returns typed rows
const rows = await client.query<{ id: number; name: string }>(
'SELECT id, name FROM users WHERE active = true'
);
for (const row of rows) {
console.log(row.id, row.name);
}
// Always close when done
await client.close();
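Transient network failures during connect() can be retried before giving up. The SDK is not documented to retry automatically, so a small generic helper can wrap any async call; the `withRetry` name and its backoff defaults below are illustrative, not part of @hatidata/sdk:

```typescript
// Generic retry helper for transient failures; withRetry and its
// defaults are illustrative, not part of @hatidata/sdk.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  delayMs = 500
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Exponential backoff: 500ms, 1000ms, 2000ms, ...
        await new Promise((r) => setTimeout(r, delayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}

// Usage sketch: await withRetry(() => client.connect());
```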
Using Async/Await
import { HatiDataClient } from '@hatidata/sdk';
async function main() {
const client = new HatiDataClient({
host: 'your-org.proxy.hatidata.com',
port: 5439,
agentId: 'analytics-agent',
framework: 'langchain',
password: 'hd_live_your_api_key',
});
try {
await client.connect();
// Create table
await client.execute(
'CREATE TABLE IF NOT EXISTS events (id INT, type VARCHAR, payload JSON)'
);
// Insert data
await client.execute(
"INSERT INTO events VALUES (1, 'click', '{\"page\": \"/home\"}')"
);
// Query
const events = await client.query('SELECT * FROM events');
console.log(events);
} finally {
await client.close();
}
}
main().catch(console.error);
Constructor Options
interface HatiDataClientOptions {
host?: string; // Proxy hostname (default: "localhost")
port?: number; // Proxy port (default: 5439)
agentId?: string; // Unique agent identifier
framework?: string; // AI framework name (default: "custom")
database?: string; // Database name (default: "hatidata")
user?: string; // Username (default: "agent")
password?: string; // Password or API key
priority?: string; // Query priority: low | normal | high | critical
connectTimeout?: number; // Connection timeout in ms (default: 10000)
}
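Applying the defaults listed in the comments above can be pictured as a simple merge. The `resolveOptions` helper here is a sketch for illustration, not an exported SDK function; `priority` and `agentId` are passed through untouched because no default is documented for them:

```typescript
interface HatiDataClientOptions {
  host?: string;
  port?: number;
  agentId?: string;
  framework?: string;
  database?: string;
  user?: string;
  password?: string;
  priority?: string;
  connectTimeout?: number;
}

// Sketch of how the documented defaults would be applied;
// resolveOptions is illustrative, not an exported SDK function.
function resolveOptions(opts: HatiDataClientOptions) {
  return {
    host: opts.host ?? 'localhost',
    port: opts.port ?? 5439,
    framework: opts.framework ?? 'custom',
    database: opts.database ?? 'hatidata',
    user: opts.user ?? 'agent',
    password: opts.password ?? '',
    connectTimeout: opts.connectTimeout ?? 10000,
    // No documented defaults; passed through as-is.
    priority: opts.priority,
    agentId: opts.agentId,
  };
}
```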
Query Methods
query<T>(sql: string, params?: any[]): Promise<T[]>
Execute a SELECT query and return typed rows:
interface User {
id: number;
name: string;
email: string;
}
const users = await client.query<User>('SELECT * FROM users WHERE id = $1', [42]);
// users is User[]
execute(sql: string, params?: any[]): Promise<number>
Execute an INSERT, UPDATE, or DELETE and return the affected row count:
const count = await client.execute(
'UPDATE users SET active = true WHERE id = $1',
[42]
);
console.log(`${count} rows updated`);
queryOne<T>(sql: string, params?: any[]): Promise<T | null>
Execute a query and return the first row, or null if no rows:
const user = await client.queryOne<User>(
'SELECT * FROM users WHERE id = $1',
[42]
);
if (user) {
console.log(user.name);
}
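In terms of query, queryOne's contract is "first row, or null when the result set is empty". The sketch below expresses that behavior; it is not the SDK's actual implementation, which may differ (for example by adding LIMIT 1 server-side):

```typescript
// Behavioral sketch of queryOne in terms of query: first row or null.
// Illustrative only; the SDK's real implementation may differ.
async function queryOneLike<T>(
  query: (sql: string, params?: unknown[]) => Promise<T[]>,
  sql: string,
  params?: unknown[]
): Promise<T | null> {
  const rows = await query(sql, params);
  return rows.length > 0 ? rows[0] : null;
}
```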
Push to Cloud
After working locally, push your data to HatiData Cloud:
import { HatiDataClient } from '@hatidata/sdk';
const client = new HatiDataClient({ host: 'localhost' });
await client.connect();
// Push exports local tables and uploads to cloud
await client.push({ target: 'cloud' });
// Now connect to your cloud endpoint
const cloudClient = new HatiDataClient({
host: 'your-org.proxy.hatidata.com',
password: 'hd_live_your_api_key',
});
await cloudClient.connect();
const rows = await cloudClient.query('SELECT COUNT(*) FROM users');
Connection Pooling
For applications handling multiple concurrent requests, use the built-in connection pool:
import { HatiDataPool } from '@hatidata/sdk';
const pool = new HatiDataPool({
host: 'your-org.proxy.hatidata.com',
port: 5439,
agentId: 'web-api',
password: 'hd_live_your_api_key',
max: 20, // Maximum pool size
idleTimeout: 30000, // Close idle connections after 30s
});
// Acquire and release connections automatically
const rows = await pool.query('SELECT * FROM orders LIMIT 10');
// Or manage connections manually
const conn = await pool.acquire();
try {
  await conn.query('BEGIN');
  await conn.execute('INSERT INTO logs VALUES ($1, $2)', [1, 'started']);
  await conn.query('COMMIT');
} catch (err) {
  // Roll back so a failed transaction doesn't leak into the connection's next use
  await conn.query('ROLLBACK');
  throw err;
} finally {
  pool.release(conn);
}
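The manual acquire/release pattern above is easy to get wrong. A small wrapper that always releases keeps the try/finally in one place; `withConnection` and the minimal interfaces below are a sketch, not part of @hatidata/sdk:

```typescript
// Minimal shape of a pooled connection and pool, just enough for the
// helper; the real HatiDataPool API may differ.
interface PoolConn {
  query(sql: string, params?: unknown[]): Promise<unknown[]>;
  execute(sql: string, params?: unknown[]): Promise<number>;
}
interface PoolLike {
  acquire(): Promise<PoolConn>;
  release(conn: PoolConn): void;
}

// withConnection is an illustrative wrapper, not part of @hatidata/sdk:
// it guarantees release() even when the callback throws.
async function withConnection<T>(
  pool: PoolLike,
  fn: (conn: PoolConn) => Promise<T>
): Promise<T> {
  const conn = await pool.acquire();
  try {
    return await fn(conn);
  } finally {
    pool.release(conn);
  }
}

// Usage sketch:
// await withConnection(pool, async (conn) => {
//   await conn.query('BEGIN');
//   await conn.execute('INSERT INTO logs VALUES ($1, $2)', [1, 'started']);
//   await conn.query('COMMIT');
// });
```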
Using with Standard pg Client
Since HatiData speaks the Postgres wire protocol, you can also use the standard pg library directly:
import { Client } from 'pg';
const client = new Client({
host: 'your-org.proxy.hatidata.com',
port: 5439,
user: 'analyst',
password: 'hd_live_your_api_key',
database: 'hatidata',
application_name: 'my-agent/custom',
});
await client.connect();
const result = await client.query('SELECT * FROM users');
console.log(result.rows);
await client.end();
The application_name connection parameter follows the agent_id/framework convention. The HatiData proxy parses this to extract agent identity for billing and audit purposes.
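Building and parsing that value is mechanical. The helper names below are illustrative, and the fallback to 'custom' when no slash is present is an assumption, not documented proxy behavior:

```typescript
// Builds an application_name value in the documented
// "agent_id/framework" convention; helper names are illustrative.
function buildApplicationName(agentId: string, framework = 'custom'): string {
  return `${agentId}/${framework}`;
}

// Inverse: split "agent_id/framework" back into its parts, roughly as
// the proxy is described as doing for billing and audit. The 'custom'
// fallback for a missing slash is an assumption.
function parseApplicationName(appName: string): { agentId: string; framework: string } {
  const idx = appName.indexOf('/');
  if (idx === -1) return { agentId: appName, framework: 'custom' };
  return { agentId: appName.slice(0, idx), framework: appName.slice(idx + 1) };
}
```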
SQL Compatibility
When using either the @hatidata/sdk client or the standard pg client, you can write Snowflake-compatible SQL. The HatiData proxy automatically transpiles it to DuckDB's SQL dialect:
// Snowflake SQL works transparently
const rows = await client.query(
`SELECT
customer_id,
NVL(name, 'Unknown') AS name,
IFF(revenue > 10000, 'enterprise', 'smb') AS tier,
DATEDIFF(day, first_order, CURRENT_DATE) AS days_active
FROM customers
WHERE status = 'active'`
);
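To make the transpilation step concrete, here is a deliberately naive sketch of the kind of rewriting involved. The real proxy parses SQL properly and covers many more functions (IFF, DATEDIFF, ...); this regex toy handles only the NVL-to-COALESCE case and is for illustration only:

```typescript
// Toy illustration of Snowflake-to-DuckDB rewriting: NVL(a, b) becomes
// COALESCE(a, b), which DuckDB supports natively. The actual proxy does
// proper SQL parsing; this one-function regex is illustrative only.
function naiveTranspileNvl(sql: string): string {
  return sql.replace(/\bNVL\s*\(/gi, 'COALESCE(');
}
```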
See SQL Compatibility for the full list of supported functions and types.
Next Steps
- Python SDK -- Python agent client
- dbt Adapter -- Run dbt models against HatiData
- Agent Integrations -- Framework-specific integrations
- API Reference -- Control plane REST API