Migrate from PostgreSQL

HatiData speaks the Postgres wire protocol. Your existing clients, ORMs, drivers, and tools connect to HatiData exactly as they connect to PostgreSQL — no code changes required. The only thing that changes is the connection string.

What Stays the Same

Every Postgres-compatible tool works without modification:

  • psycopg2, asyncpg, pg8000
  • SQLAlchemy, Django ORM, Prisma, TypeORM
  • psql CLI
  • dbt (with dbt-postgres adapter)
  • Tableau, Metabase, Superset, and other BI tools
  • Any tool that speaks the Postgres wire protocol

Migration Steps

Step 1: Update the Connection String

# Before: PostgreSQL
postgresql://myuser:mypass@postgres-host:5432/mydb

# After: HatiData (same syntax, different host and port)
postgresql://myuser:mypass@localhost:5439/mydb

That's it for existing SQL workloads. Your queries run unchanged.
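Since the connection string is the only change, the client code itself stays identical. A minimal sketch using psycopg2 (one of the supported drivers listed above); the hosts, ports, and credentials are the placeholder values from the examples:

```python
def build_dsn(host: str, port: int, user: str, password: str, dbname: str) -> str:
    # Assemble a libpq-style URL; host and port are the only values that
    # differ between the PostgreSQL and HatiData targets.
    return f"postgresql://{user}:{password}@{host}:{port}/{dbname}"

# Identical client code for either target; only the DSN differs.
PG_DSN = build_dsn("postgres-host", 5432, "myuser", "mypass", "mydb")
HATI_DSN = build_dsn("localhost", 5439, "myuser", "mypass", "mydb")

if __name__ == "__main__":
    import psycopg2  # any Postgres driver works unchanged

    with psycopg2.connect(HATI_DSN) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
            print(cur.fetchone())
```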

Step 2: (Optional) Import Existing Data

If you want to bring existing PostgreSQL data into HatiData:

# Export from PostgreSQL
pg_dump -h postgres-host -U myuser -Fc mydb > mydb.dump

# Import into HatiData
hati schema import --file mydb.dump --format pg_dump

# Or export as Parquet for large tables
COPY mytable TO '/tmp/mytable.parquet' (FORMAT PARQUET);
hati push --source /tmp/mytable.parquet --schema public --format parquet

Step 3: Connect Agent Workloads

Once connected, register an identity for each agent to enable agent-native features:

-- Register an agent identity
INSERT INTO _hatidata_agents (agent_id, name, org_id)
VALUES ('agent-001', 'my-assistant', 'my-org');

-- Agent memory is now available for this agent
SELECT * FROM _hatidata_memories WHERE agent_id = 'agent-001';

PostgreSQL vs HatiData: Capability Comparison

| Capability | PostgreSQL | HatiData |
| --- | --- | --- |
| Postgres wire protocol | Yes | Yes (100% compatible) |
| Standard SQL | Yes | Yes |
| ACID transactions | Yes | Yes |
| Extensions (PostGIS, etc.) | Via pg_extension | Planned |
| Snowflake SQL compatibility | No | Yes (auto-transpiled) |
| Query latency for analytics | 10ms–seconds (OLTP) | Sub-10ms (HatiData engine) |
| Agent identity | Not supported | Per-agent access control |
| Long-term agent memory | Not supported | SQL + vector hybrid |
| Chain-of-thought ledger | Not supported | Cryptographically hash-chained |
| Semantic triggers | Not supported | Cosine similarity evaluation |
| Branch isolation | Not supported | Per-agent schema branches |
| MCP tool support | Not supported | 24 native MCP tools |
| Per-agent billing | Not supported | Native |
| In-VPC deployment | Yes | Yes |
| Vector similarity search | pgvector (extension) | Native vector-backed |

What You Gain

Agent Identity and Access Control

Every query in HatiData is attributed to an agent. Row-level policies apply per agent, per table — not just per role.

-- Grant agent-001 read access to customer_data
INSERT INTO _hatidata_policies (agent_id, table_name, action, effect)
VALUES ('agent-001', 'customer_data', 'SELECT', 'ALLOW');

-- Block agent-001 from reading PII columns
INSERT INTO _hatidata_column_masks (agent_id, table_name, column_name, mask_type)
VALUES ('agent-001', 'customer_data', 'email', 'REDACT');

Long-Term Agent Memory

Store and retrieve memories across sessions with SQL + vector hybrid search:

-- Store a memory
INSERT INTO _hatidata_memories (agent_id, content, metadata)
VALUES ('agent-001', 'User prefers metric units', '{"source": "preference"}');

-- Search memories semantically
SELECT content, semantic_match(content, :query) AS similarity
FROM _hatidata_memories
WHERE agent_id = 'agent-001'
ORDER BY similarity DESC
LIMIT 5;
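Because memory lives behind the same wire protocol, application code runs this search through any Postgres driver. A sketch using psycopg2-style placeholders; the `_hatidata_memories` table and `semantic_match` function are from the example above, and the DSN is a placeholder:

```python
def memory_search_sql(limit: int = 5) -> str:
    # Build the semantic-search query from the example above, using
    # psycopg2 named placeholders for the user-supplied values.
    return (
        "SELECT content, semantic_match(content, %(query)s) AS similarity "
        "FROM _hatidata_memories "
        "WHERE agent_id = %(agent_id)s "
        "ORDER BY similarity DESC "
        f"LIMIT {int(limit)}"
    )

if __name__ == "__main__":
    import psycopg2

    with psycopg2.connect("postgresql://myuser:mypass@localhost:5439/mydb") as conn:
        with conn.cursor() as cur:
            cur.execute(
                memory_search_sql(),
                {"query": "unit preferences", "agent_id": "agent-001"},
            )
            for content, similarity in cur.fetchall():
                print(f"{similarity:.3f}  {content}")
```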

Chain-of-Thought Ledger

Every reasoning step is stored as an immutable, hash-chained record:

-- Query an agent's reasoning history
SELECT step_type, content, hash, created_at
FROM _hatidata_cot
WHERE agent_id = 'agent-001'
AND session_id = 'session-xyz'
ORDER BY sequence_num;
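To show what hash chaining buys you, here is a client-side verification sketch. The exact chaining scheme HatiData uses is not documented on this page, so the rule below (SHA-256 over the previous record's hash concatenated with the step content) is an assumption for illustration only:

```python
import hashlib

def chain_hash(prev_hash: str, content: str) -> str:
    # Assumed chaining rule: SHA-256 over the previous record's hash
    # concatenated with this step's content.
    return hashlib.sha256((prev_hash + content).encode("utf-8")).hexdigest()

def verify_chain(steps) -> bool:
    # steps: (content, stored_hash) pairs in sequence order. Any tampered
    # or reordered record breaks every recomputation from that point on.
    prev = ""
    for content, stored in steps:
        if chain_hash(prev, content) != stored:
            return False
        prev = stored
    return True

# Build a valid three-step chain, then tamper with the middle record.
contents = ["plan: fetch orders", "tool_call: SELECT ...", "answer: 42 orders"]
steps, prev = [], ""
for c in contents:
    prev = chain_hash(prev, c)
    steps.append((c, prev))

assert verify_chain(steps)
steps[1] = ("tool_call: DROP TABLE orders", steps[1][1])  # content edited, hash kept
assert not verify_chain(steps)
```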

Sub-10ms Analytics

HatiData routes analytical queries through a columnar, vectorized, in-process query engine. OLAP queries that take seconds in PostgreSQL run in milliseconds.

-- This runs in HatiData's query engine — sub-10ms for millions of rows
SELECT date_trunc('day', occurred_at), COUNT(*), SUM(amount)
FROM transactions
WHERE agent_id = 'agent-001'
GROUP BY 1
ORDER BY 1;

Snowflake SQL Compatibility

Agents trained on Snowflake SQL dialects work without rewriting:

-- Snowflake SQL — runs as-is on HatiData
SELECT
  NVL(user_name, 'anonymous') AS name,
  IFF(status = 'active', 1, 0) AS is_active,
  LISTAGG(tag, ',') WITHIN GROUP (ORDER BY tag) AS tags
FROM users
GROUP BY 1, 2;
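The transpilation happens inside HatiData; nothing is required client-side. Purely to illustrate the kind of rewriting involved, here is a toy version of the simplest case, the `NVL` to `COALESCE` rename (`IFF` and `LISTAGG ... WITHIN GROUP` need structural rewrites on the parse tree, not a rename):

```python
import re

def transpile_nvl(sql: str) -> str:
    # Toy illustration only: NVL(a, b) and COALESCE(a, b) are equivalent,
    # so this case really is a pure function rename.
    return re.sub(r"\bNVL\s*\(", "COALESCE(", sql, flags=re.IGNORECASE)

print(transpile_nvl("SELECT NVL(user_name, 'anonymous') AS name FROM users"))
```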
