# Migrate from PostgreSQL
HatiData speaks the Postgres wire protocol. Your existing clients, ORMs, drivers, and tools connect to HatiData exactly as they connect to PostgreSQL — no code changes required. The only thing that changes is the connection string.
## What Stays the Same
Every Postgres-compatible tool works without modification:
- `psycopg2`, `asyncpg`, `pg8000`
- SQLAlchemy, Django ORM, Prisma, TypeORM
- `psql` CLI
- dbt (with the `dbt-postgres` adapter)
- Tableau, Metabase, Superset, and other BI tools
- Any tool that speaks the Postgres wire protocol
## Migration Steps

### Step 1: Update the Connection String
```text
# Before: PostgreSQL
postgresql://myuser:mypass@postgres-host:5432/mydb

# After: HatiData (same syntax, different host and port)
postgresql://myuser:mypass@localhost:5439/mydb
```
That's it for existing SQL workloads. Your queries run unchanged.
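Because migration is only a host and port change, it can even be scripted across a fleet of services. A minimal sketch using Python's standard library to repoint an existing DSN (the helper name is ours for illustration, not a HatiData API):

```python
from urllib.parse import urlsplit, urlunsplit

def repoint_dsn(dsn: str, host: str, port: int) -> str:
    """Return the same DSN with only the host and port swapped."""
    parts = urlsplit(dsn)
    auth = f"{parts.username}:{parts.password}@" if parts.username else ""
    netloc = f"{auth}{host}:{port}"
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))

before = "postgresql://myuser:mypass@postgres-host:5432/mydb"
after = repoint_dsn(before, "localhost", 5439)
print(after)  # postgresql://myuser:mypass@localhost:5439/mydb
```

Credentials, database name, and query parameters all carry over unchanged.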
### Step 2: (Optional) Import Existing Data
If you want to bring existing PostgreSQL data into HatiData:
```bash
# Export from PostgreSQL (custom-format dump)
pg_dump -h postgres-host -U myuser -Fc mydb > mydb.dump

# Import into HatiData
hati schema import --file mydb.dump --format pg_dump
```

Or, for large tables, export as Parquet and push the files directly:

```sql
COPY mytable TO '/tmp/mytable.parquet' (FORMAT PARQUET);
```

```bash
hati push --source /tmp/mytable.parquet --schema public --format parquet
```
### Step 3: Connect Agent Workloads

Once connected, enable agent features on a per-agent basis:
```sql
-- Register an agent identity
INSERT INTO _hatidata_agents (agent_id, name, org_id)
VALUES ('agent-001', 'my-assistant', 'my-org');

-- Agent memory is now available for this agent
SELECT * FROM _hatidata_memories WHERE agent_id = 'agent-001';
```
## PostgreSQL vs HatiData: Capability Comparison
| Capability | PostgreSQL | HatiData |
|---|---|---|
| Postgres wire protocol | Yes | Yes (100% compatible) |
| Standard SQL | Yes | Yes |
| ACID transactions | Yes | Yes |
| Extensions (PostGIS, etc.) | Via pg_extension | Planned |
| Snowflake SQL compatibility | No | Yes (auto-transpiled) |
| Query latency for analytics | 10ms–seconds (OLTP) | Sub-10ms (HatiData engine) |
| Agent identity | Not supported | Per-agent access control |
| Long-term agent memory | Not supported | SQL + vector hybrid |
| Chain-of-thought ledger | Not supported | Cryptographically hash-chained |
| Semantic triggers | Not supported | Cosine similarity evaluation |
| Branch isolation | Not supported | Per-agent schema branches |
| MCP tool support | Not supported | 24 native MCP tools |
| Per-agent billing | Not supported | Native |
| In-VPC deployment | Yes | Yes |
| Vector similarity search | pgvector (extension) | Native vector-backed |
## What You Gain

### Agent Identity and Access Control
Every query in HatiData is attributed to an agent. Row-level policies apply per agent, per table — not just per role.
```sql
-- Grant agent-001 read access to customer_data
INSERT INTO _hatidata_policies (agent_id, table_name, action, effect)
VALUES ('agent-001', 'customer_data', 'SELECT', 'ALLOW');

-- Block agent-001 from reading PII columns
INSERT INTO _hatidata_column_masks (agent_id, table_name, column_name, mask_type)
VALUES ('agent-001', 'customer_data', 'email', 'REDACT');
```
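Conceptually, a column mask substitutes a safe value for the real one before results reach the agent. A toy sketch of that substitution (HatiData applies masks server-side at query time; any mask name beyond `REDACT` here is hypothetical):

```python
def apply_mask(value: str, mask_type: str) -> str:
    """Illustrative client-side model of what a column mask does to a value."""
    if mask_type == "REDACT":
        # Replace the value entirely
        return "[REDACTED]"
    if mask_type == "PARTIAL":  # hypothetical mask type: hide the local part of an email
        local, _, domain = value.partition("@")
        return f"***@{domain}" if domain else "***"
    # Unknown or absent mask: value passes through untouched
    return value

print(apply_mask("ada@example.com", "REDACT"))  # [REDACTED]
```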
### Long-Term Agent Memory
Store and retrieve memories across sessions with SQL + vector hybrid search:
```sql
-- Store a memory
INSERT INTO _hatidata_memories (agent_id, content, metadata)
VALUES ('agent-001', 'User prefers metric units', '{"source": "preference"}');

-- Search memories semantically
SELECT content, semantic_match(content, :query) AS similarity
FROM _hatidata_memories
WHERE agent_id = 'agent-001'
ORDER BY similarity DESC
LIMIT 5;
```
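Semantic ranking of this kind is conventionally built on cosine similarity between embedding vectors (the same measure the capability table lists for semantic triggers). A minimal sketch of what a function like `semantic_match` conceptually computes, with toy vectors standing in for real embeddings:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings"; real ones come from an embedding model
memory_vec = [0.9, 0.1, 0.0]
query_vec = [0.8, 0.2, 0.1]
print(round(cosine_similarity(memory_vec, query_vec), 3))
```

Memories whose vectors point in nearly the same direction as the query score close to 1.0 and sort to the top.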
### Chain-of-Thought Ledger
Every reasoning step is stored as an immutable, hash-chained record:
```sql
-- Query an agent's reasoning history
SELECT step_type, content, hash, created_at
FROM _hatidata_cot
WHERE agent_id = 'agent-001'
  AND session_id = 'session-xyz'
ORDER BY sequence_num;
```
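The integrity property behind "hash-chained" can be sketched in a few lines: each record's hash covers the previous record's hash, so altering any step invalidates every hash after it. A minimal illustration (the function and record shape here are hypothetical, not HatiData's actual scheme):

```python
import hashlib
import json

def chain_hash(prev_hash: str, record: dict) -> str:
    """Hash a record together with its predecessor's hash."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

steps = [
    {"step_type": "plan", "content": "look up the user's open orders"},
    {"step_type": "query", "content": "SELECT * FROM orders WHERE status = 'open'"},
]

hashes, prev = [], "0" * 64  # genesis hash
for step in steps:
    prev = chain_hash(prev, step)
    hashes.append(prev)

# Re-deriving step 2's hash from step 1's hash reproduces the stored value;
# tampering with step 1 would change hashes[0] and break this check.
assert hashes[1] == chain_hash(hashes[0], steps[1])
```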
### Sub-10ms Analytics
HatiData routes analytical queries through a columnar, vectorized, in-process query engine. OLAP queries that take seconds in PostgreSQL run in milliseconds.
```sql
-- This runs in HatiData's query engine — sub-10ms for millions of rows
SELECT date_trunc('day', occurred_at), COUNT(*), SUM(amount)
FROM transactions
WHERE agent_id = 'agent-001'
GROUP BY 1
ORDER BY 1;
```
### Snowflake SQL Compatibility

Agents trained on the Snowflake SQL dialect work without rewriting their queries:
```sql
-- Snowflake SQL — runs as-is on HatiData
SELECT
  NVL(user_name, 'anonymous') AS name,
  IFF(status = 'active', 1, 0) AS is_active,
  LISTAGG(tag, ',') WITHIN GROUP (ORDER BY tag) AS tags
FROM users
GROUP BY 1, 2;
```
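HatiData's actual transpiler is not described here, but the kind of rewriting involved can be illustrated with a toy string-level translation of two of the Snowflake functions above. This is purely a sketch: a real transpiler works on a parsed AST, not regular expressions.

```python
import re

def transpile_snowflake(sql: str) -> str:
    """Toy illustration: rewrite NVL and simple IFF calls into standard SQL."""
    # NVL(a, b) -> COALESCE(a, b)
    sql = re.sub(r"\bNVL\(", "COALESCE(", sql, flags=re.IGNORECASE)
    # IFF(cond, t, f) -> CASE WHEN cond THEN t ELSE f END
    # (only handles arguments without embedded commas)
    sql = re.sub(
        r"\bIFF\(([^,]+),\s*([^,]+),\s*([^)]+)\)",
        lambda m: f"CASE WHEN {m.group(1).strip()} THEN {m.group(2).strip()} "
                  f"ELSE {m.group(3).strip()} END",
        sql,
        flags=re.IGNORECASE,
    )
    return sql

print(transpile_snowflake("SELECT NVL(a, 'x'), IFF(b = 1, 1, 0) FROM t"))
# SELECT COALESCE(a, 'x'), CASE WHEN b = 1 THEN 1 ELSE 0 END FROM t
```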
## Related Concepts
- Two-Plane Model — Full Postgres wire protocol compatibility details
- Agent Identity Model — Per-agent access control
- Persistent Memory — Long-term SQL + vector memory
- Chain-of-Thought Ledger — Immutable reasoning traces
- SQL Functions — SQL compatibility and transpilation