Cloud Mode
HatiData Cloud hosts a managed proxy and control plane so your team and AI agents can connect from anywhere. Everything you build in Local Mode migrates seamlessly with hati push.
Plan Details
| Feature | Included |
|---|---|
| Price | $29/month per workspace |
| Compute | Managed DuckDB proxy |
| Storage | 100 GB included (Iceberg-format) |
| Connections | Unlimited concurrent connections |
| Users | Up to 10 team members |
| Environments | Development + Production |
| Dashboard | Full query audit, policy management, billing |
| Support | Email, community Slack |
Pushing to Cloud
From your local workspace, push your data and schema to the cloud:
hati push --target cloud
This command:
- Exports your local DuckDB tables to Parquet format
- Uploads them to HatiData's managed storage
- Provisions a proxy endpoint for your workspace
- Returns a connection string you can use immediately
Example output:

Pushing 4 tables (12.3 MB)...
users: 2.1 MB [===] 100%
orders: 5.7 MB [===] 100%
products: 3.2 MB [===] 100%
events: 1.3 MB [===] 100%
Cloud workspace ready!
Host: your-org.proxy.hatidata.com
Port: 5439
Database: hatidata
Dashboard: https://app.hatidata.com/your-org
Connecting to Cloud
psql
psql -h your-org.proxy.hatidata.com -p 5439 -U analyst -d hatidata
Python
from hatidata_agent import HatiDataAgent

agent = HatiDataAgent(
    host="your-org.proxy.hatidata.com",
    port=5439,
    agent_id="cloud-agent",
    framework="langchain",
    user="analyst",
    password="hd_live_your_api_key",
)

rows = agent.query("SELECT COUNT(*) FROM orders")
Node.js (pg)
import { Client } from 'pg';

const client = new Client({
  host: 'your-org.proxy.hatidata.com',
  port: 5439,
  user: 'analyst',
  password: 'hd_live_your_api_key',
  database: 'hatidata',
});

await client.connect();
const result = await client.query('SELECT COUNT(*) FROM orders');
console.log(result.rows);
await client.end();
TypeScript SDK
import { HatiDataClient } from '@hatidata/sdk';

const client = new HatiDataClient({
  host: 'your-org.proxy.hatidata.com',
  port: 5439,
  agentId: 'cloud-agent',
  framework: 'custom',
  password: 'hd_live_your_api_key',
});

await client.connect();
const rows = await client.query<{ count: number }>('SELECT COUNT(*) as count FROM orders');
console.log(rows[0].count);
await client.close();
Any Postgres Client
HatiData speaks the standard Postgres wire protocol. Any client that connects to PostgreSQL can connect to HatiData (see the example after this list):
- DBeaver -- Add a PostgreSQL connection with your cloud host and port
- DataGrip -- Use the PostgreSQL driver
- Tableau -- Use the PostgreSQL connector
- dbt -- Use the dbt-hatidata adapter
- MCP -- Use the HatiData MCP server with Claude Desktop, Claude Code, or Cursor
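For example, a minimal connection with psycopg2 (a generic Postgres driver, not a HatiData library; the host, user, and key below are the same placeholder values used in the connection examples above):

import psycopg2

# Standard Postgres connection parameters work unchanged against the HatiData proxy.
conn = psycopg2.connect(
    host="your-org.proxy.hatidata.com",
    port=5439,
    user="analyst",
    password="hd_live_your_api_key",
    dbname="hatidata",
)
with conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM orders")
    print(cur.fetchone()[0])
conn.close()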
Dashboard Access
The HatiData dashboard at app.hatidata.com provides:
- Query audit -- Full log of every query, who ran it, latency, and cost
- Policy management -- Create and manage ABAC policies, row-level security
- API key management -- Create keys with granular scopes and IP allowlists
- Billing and usage -- Per-agent credit usage, quota management
- Environment management -- Separate development and production environments
- User management -- Invite team members, assign roles
Data Residency
HatiData Cloud runs in multiple cloud regions. During workspace creation, you choose your region:
| Region | Location |
|---|---|
| us-east-1 | N. Virginia, USA |
| eu-west-1 | Ireland, EU |
| ap-southeast-1 | Singapore, APAC |
Your data stays in the selected region. Cross-region replication is available on the Enterprise plan.
API Keys
Cloud mode uses API keys for authentication. Keys are prefixed by environment:
- hd_live_* -- Production keys
- hd_test_* -- Development/test keys
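If an agent reads its key from the environment, a quick prefix check helps catch a test key leaking into production. A minimal sketch follows; the HATIDATA_API_KEY and APP_ENV variable names are illustrative, not something HatiData requires:

import os

# Illustrative variable names -- use whatever your deployment already defines.
api_key = os.environ["HATIDATA_API_KEY"]
is_production = os.environ.get("APP_ENV") == "production"

# Production agents should hold hd_live_* keys; everything else hd_test_* keys.
expected_prefix = "hd_live_" if is_production else "hd_test_"
if not api_key.startswith(expected_prefix):
    raise RuntimeError(f"API key does not match environment (expected {expected_prefix}*)")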
Create keys via the dashboard or the API:
# List keys
curl -H "Authorization: Bearer $HATIDATA_JWT" \
  https://api.hatidata.com/v1/environments/$ENV_ID/api-keys

# Create a key with specific scopes
curl -X POST -H "Authorization: Bearer $HATIDATA_JWT" \
  -H "Content-Type: application/json" \
  -d '{"name": "agent-key", "scopes": ["query:read", "schema:read"]}' \
  https://api.hatidata.com/v1/environments/$ENV_ID/api-keys
Syncing Changes
After making changes locally, push incremental updates to the cloud:
# Push all tables
hati push --target cloud
# Push a specific table
hati push --target cloud --table orders
# Pull schema from cloud to local
hati pull --source cloud --schema-only
The sync process uses Parquet format for efficient data transfer and preserves Iceberg snapshot history.
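A simple way to sanity-check a sync is to compare a table's row count locally and in the cloud after the push. The sketch below reuses the HatiDataAgent class from the Python example above; treat it as a sketch, since the exact shape of the query result depends on your SDK version:

from hatidata_agent import HatiDataAgent

# Local proxy (no API key) and cloud proxy (API key required), as configured earlier.
local = HatiDataAgent(host="localhost", port=5439, agent_id="sync-check")
cloud = HatiDataAgent(
    host="your-org.proxy.hatidata.com",
    port=5439,
    agent_id="sync-check",
    password="hd_live_your_api_key",
)

# After `hati push --target cloud --table orders`, the two counts should match.
print(local.query("SELECT COUNT(*) FROM orders"))
print(cloud.query("SELECT COUNT(*) FROM orders"))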
Migrating from Local
Moving from Local to Cloud requires no SQL changes. The same Snowflake-compatible SQL, the same Python/TypeScript code, and the same dbt models work unchanged. The only change is the connection string:
# Local
agent = HatiDataAgent(host="localhost", port=5439, agent_id="my-agent")

# Cloud (only host and password change)
agent = HatiDataAgent(
    host="your-org.proxy.hatidata.com",
    port=5439,
    agent_id="my-agent",
    password="hd_live_your_api_key",
)
Upgrading to Enterprise
If your organization needs VPC isolation, PrivateLink connectivity, or custom SLAs, you can upgrade from Cloud to Enterprise without downtime. Contact sales@hatidata.com to start the process.
Next Steps
- Enterprise Deployment -- In-VPC deployment with PrivateLink
- Security Overview -- RBAC, ABAC, encryption, and audit
- API Reference -- Full control plane API documentation
- Python SDK -- Agent-aware database access for Python
- TypeScript SDK -- Typed client for Node.js