
Cloud Mode

HatiData Cloud hosts a managed proxy and control plane so your team and AI agents can connect from anywhere. Everything you build in Local Mode migrates seamlessly with hati push.

Plan Details

Feature | Included
--------|---------
Price | $29/month per workspace
Compute | Managed DuckDB proxy
Storage | 100 GB included (Iceberg format)
Connections | Unlimited concurrent connections
Users | Up to 10 team members
Environments | Development + Production
Dashboard | Full query audit, policy management, billing
Support | Email, community Slack

Pushing to Cloud

From your local workspace, push your data and schema to the cloud:

hati push --target cloud

This command:

  1. Exports your local DuckDB tables to Parquet format
  2. Uploads them to HatiData's managed storage
  3. Provisions a proxy endpoint for your workspace
  4. Returns a connection string you can use immediately
Example output:

Pushing 4 tables (12.3 MB)...
users: 2.1 MB [===] 100%
orders: 5.7 MB [===] 100%
products: 3.2 MB [===] 100%
events: 1.3 MB [===] 100%

Cloud workspace ready!
Host: your-org.proxy.hatidata.com
Port: 5439
Database: hatidata
Dashboard: https://app.hatidata.com/your-org

Connecting to Cloud

psql

psql -h your-org.proxy.hatidata.com -p 5439 -U analyst -d hatidata

Python

from hatidata_agent import HatiDataAgent

agent = HatiDataAgent(
    host="your-org.proxy.hatidata.com",
    port=5439,
    agent_id="cloud-agent",
    framework="langchain",
    user="analyst",
    password="hd_live_your_api_key",
)

rows = agent.query("SELECT COUNT(*) FROM orders")

Node.js (pg)

import { Client } from 'pg';

const client = new Client({
  host: 'your-org.proxy.hatidata.com',
  port: 5439,
  user: 'analyst',
  password: 'hd_live_your_api_key',
  database: 'hatidata',
});

await client.connect();
const result = await client.query('SELECT COUNT(*) FROM orders');
console.log(result.rows);
await client.end();

TypeScript SDK

import { HatiDataClient } from '@hatidata/sdk';

const client = new HatiDataClient({
  host: 'your-org.proxy.hatidata.com',
  port: 5439,
  agentId: 'cloud-agent',
  framework: 'custom',
  password: 'hd_live_your_api_key',
});

await client.connect();
const rows = await client.query<{ count: number }>('SELECT COUNT(*) as count FROM orders');
console.log(rows[0].count);
await client.close();

Any Postgres Client

HatiData speaks the standard Postgres wire protocol. Any client that connects to PostgreSQL can connect to HatiData:

  • DBeaver -- Add a PostgreSQL connection with your cloud host and port
  • DataGrip -- Use the PostgreSQL driver
  • Tableau -- Use the PostgreSQL connector
  • dbt -- Use the dbt-hatidata adapter
  • MCP -- Use the HatiData MCP server with Claude Desktop, Claude Code, or Cursor
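
Because the proxy is wire-compatible, a generic Python driver needs no HatiData-specific code. A minimal sketch using psycopg2 with the cloud endpoint from above (the credentials are the same placeholders used throughout this page):

import psycopg2

# Connect exactly as you would to any PostgreSQL server.
conn = psycopg2.connect(
    host="your-org.proxy.hatidata.com",
    port=5439,
    user="analyst",
    password="hd_live_your_api_key",
    dbname="hatidata",
)

with conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM orders")
    print(cur.fetchone()[0])

conn.close()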

Dashboard Access

The HatiData dashboard at app.hatidata.com provides:

  • Query audit -- Full log of every query, who ran it, latency, and cost
  • Policy management -- Create and manage ABAC policies, row-level security
  • API key management -- Create keys with granular scopes and IP allowlists
  • Billing and usage -- Per-agent credit usage, quota management
  • Environment management -- Separate development and production environments
  • User management -- Invite team members, assign roles

Data Residency

HatiData Cloud runs in multiple cloud regions. During workspace creation, you choose your region:

Region | Location
-------|---------
us-east-1 | N. Virginia, USA
eu-west-1 | Ireland, EU
ap-southeast-1 | Singapore, APAC

Your data stays in the selected region. Cross-region replication is available on the Enterprise plan.

API Keys

Cloud mode uses API keys for authentication. Keys are prefixed by environment:

  • hd_live_* -- Production keys
  • hd_test_* -- Development/test keys

Create keys via the dashboard or the API:

# List keys
curl -H "Authorization: Bearer $HATIDATA_JWT" \
  https://api.hatidata.com/v1/environments/$ENV_ID/api-keys

# Create a key with specific scopes
curl -X POST -H "Authorization: Bearer $HATIDATA_JWT" \
  -H "Content-Type: application/json" \
  -d '{"name": "agent-key", "scopes": ["query:read", "schema:read"]}' \
  https://api.hatidata.com/v1/environments/$ENV_ID/api-keys
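
The same endpoint can be scripted. A minimal sketch in Python using the requests library, mirroring the create-key call above (the response shape is not documented here, so the script simply prints whatever comes back):

import os
import requests

# Assumes HATIDATA_JWT and ENV_ID are set, as in the curl examples above.
jwt = os.environ["HATIDATA_JWT"]
env_id = os.environ["ENV_ID"]

resp = requests.post(
    f"https://api.hatidata.com/v1/environments/{env_id}/api-keys",
    headers={"Authorization": f"Bearer {jwt}"},
    json={"name": "agent-key", "scopes": ["query:read", "schema:read"]},
)
resp.raise_for_status()
print(resp.json())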

Syncing Changes

After making changes locally, push incremental updates to the cloud:

# Push all tables
hati push --target cloud

# Push a specific table
hati push --target cloud --table orders

# Pull schema from cloud to local
hati pull --source cloud --schema-only

The sync process uses Parquet format for efficient data transfer and preserves Iceberg snapshot history.
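
If you run syncs from CI, wrapping the CLI in a short script keeps the pipeline explicit. A minimal sketch, assuming the hati CLI is installed and authenticated in the CI environment and that your pipeline determines which tables changed:

import subprocess

# Hypothetical list of tables this pipeline run modified.
changed_tables = ["orders", "events"]

for table in changed_tables:
    # Uses only the documented --target and --table flags.
    subprocess.run(
        ["hati", "push", "--target", "cloud", "--table", table],
        check=True,  # fail the CI job if any push fails
    )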

Migrating from Local

Moving from Local to Cloud requires no SQL changes. The same Snowflake-compatible SQL, the same Python/TypeScript code, and the same dbt models work unchanged. The only change is the connection string:

# Local
agent = HatiDataAgent(host="localhost", port=5439, agent_id="my-agent")

# Cloud (only host and password change)
agent = HatiDataAgent(
    host="your-org.proxy.hatidata.com",
    port=5439,
    agent_id="my-agent",
    password="hd_live_your_api_key",
)
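
A common pattern is to read the connection details from the environment so the same script runs in both modes. A minimal sketch, assuming the HatiDataAgent constructor shown above; the HATI_HOST and HATI_PASSWORD variable names are hypothetical:

import os

from hatidata_agent import HatiDataAgent

# Defaults target Local Mode; set HATI_HOST and HATI_PASSWORD for Cloud.
kwargs = {
    "host": os.environ.get("HATI_HOST", "localhost"),
    "port": 5439,
    "agent_id": "my-agent",
}
if "HATI_PASSWORD" in os.environ:
    kwargs["password"] = os.environ["HATI_PASSWORD"]

agent = HatiDataAgent(**kwargs)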

Upgrading to Enterprise

If your organization needs VPC isolation, PrivateLink connectivity, or custom SLAs, you can upgrade from Cloud to Enterprise without downtime. Contact sales@hatidata.com to start the process.
