# V1 to V2 Transition Guide
HatiData V2 adds a Governed Runtime layer on top of V1's analytical infrastructure. V1 continues to work — V2 is additive, not a replacement.
Your existing V1 queries, memory storage, and CoT ledger continue to work unchanged. V2 adds new tables under the `hd_runtime` schema and new `/v2/` API endpoints. No V1 code needs to change immediately.
## What Changes

### New Schema: `hd_runtime`

V2 introduces 7 new tables in the `hd_runtime` schema:
| Table | Purpose | V1 Equivalent |
|---|---|---|
| `tasks` | Intent: what needs to be done | `agent_task_queue` (partial) |
| `task_attempts` | Execution: each run of a task | `agent_task_logs` (partial) |
| `model_decisions` | LLM routing decisions with reasoning | (none) |
| `llm_invocations` | Actual API calls with tokens/cost/latency | (none) |
| `artifact_instances` | Produced outputs with content hashes | `agent_memories` (partial) |
| `artifact_validations` | Schema + contract verification results | (none) |
| `workflow_events` | Immutable audit trail (append-only) | `agent_states` + `reasoning_steps` |
### New API Endpoints

V2 adds the following endpoints under the `/v2/` prefix (runtime operations live under `/v2/runtime/`; explain and recovery have their own paths):

```
GET  /v2/runtime/tasks/:id           # Task with attempts
GET  /v2/runtime/attempts/:id        # Attempt detail
GET  /v2/explain/:attempt_id         # Full ExplainBundle
POST /v2/runtime/tasks/:id/claim     # Claim queued task
POST /v2/runtime/attempts/:id/fail   # Report failure
POST /v2/recovery/:attempt_id        # Initiate recovery
```
V1 endpoints (`/v1/*`) continue to work. You can run V1 and V2 in parallel.
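For clients that don't use the SDKs, these endpoints can be called over plain HTTPS. The sketch below only builds the claim request without sending it; the host, the bearer-token auth scheme, and the empty JSON body are illustrative assumptions, not documented API details:

```python
import json
import urllib.request

API_BASE = "https://api.hatidata.com"  # assumed host, for illustration only
API_KEY = "hd_agent_..."               # your agent API key

def build_claim_request(task_id: str) -> urllib.request.Request:
    """Build (but do not send) the POST that claims a queued task."""
    return urllib.request.Request(
        f"{API_BASE}/v2/runtime/tasks/{task_id}/claim",
        data=json.dumps({}).encode(),  # assumed empty-body claim
        headers={
            "Authorization": f"Bearer {API_KEY}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it is one more line once you're ready:
# with urllib.request.urlopen(build_claim_request("task-123")) as resp:
#     attempt = json.load(resp)
```

Separating request construction from sending makes the URL and headers easy to unit-test without a live control plane.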
### New Visibility Modes (Branching)

V1 branching used a single `branch_id` column with implicit fallback to mainline. V2 introduces explicit visibility modes:
| Mode | Behavior | Use Case |
|---|---|---|
| `MainOnly` | Read only from mainline (`branch_id IS NULL`) | Verification agents that need canonical state |
| `BranchLocal` | Read only from this branch | Isolated experimentation |
| `BranchWithFallback` | Read branch first, fall back to mainline | Normal branch development |
| `ExactKey` | Read exact key match (no fallback) | Contract-driven lookup |
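To make the semantics concrete, here is a toy resolver that mimics the four modes over an in-memory store. This is an illustrative sketch of the table above, not HatiData's implementation; in particular, the `ExactKey` behavior shown (exact `(branch, key)` match with no fallback) is one plausible reading of "exact key match":

```python
from typing import Any, Optional

def resolve_read(
    store: dict,
    key: str,
    branch_id: Optional[str],
    mode: str,
) -> Any:
    """Toy visibility-mode resolver.

    `store` maps (branch_id, key) -> value, with branch_id=None for mainline.
    """
    main_hit = store.get((None, key))
    branch_hit = store.get((branch_id, key)) if branch_id is not None else None

    if mode == "MainOnly":
        return main_hit                     # canonical state only
    if mode == "BranchLocal":
        return branch_hit                   # isolated to this branch
    if mode == "BranchWithFallback":
        return branch_hit if branch_hit is not None else main_hit
    if mode == "ExactKey":
        return store.get((branch_id, key))  # exact match, no fallback
    raise ValueError(f"unknown visibility mode: {mode}")
```

For example, with `{(None, "k"): "main", ("b1", "k"): "branch"}`, a `BranchWithFallback` read on branch `b1` returns the branch value, while `MainOnly` still sees only the mainline one.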
## What Stays the Same
These V1 features are unchanged:
- SQL proxy (port 5439) — Postgres wire protocol, Snowflake transpilation, DuckDB execution
- MCP server — all 24 tools continue to work via `/mcp`
- Agent memories — `store_memory`, `search_memory`, `load_memory_exact` unchanged
- Chain-of-Thought — `log_reasoning_step` with SHA-256 hash chains unchanged
- Semantic triggers — `register_trigger`, `evaluate_triggers` unchanged
- Branch operations — `create_branch`, `merge_branch` unchanged
- ABAC policies — `enforce_abac_policy` unchanged
- Python/TypeScript SDKs — existing SDK methods continue to work
## Migration Path

### For Platform Builders (like Marviy)
If your platform manages its own task queue and agent lifecycle:
#### Step 1: Shadow Writes (Week 1)
Start writing to V2 tables alongside V1. This is non-blocking — if V2 writes fail, V1 continues.
```python
# Before (V1 only)
hatidata.insert_agent_task_log(project_id, agent_type, result)

# After (V1 + V2 shadow)
hatidata.insert_agent_task_log(project_id, agent_type, result)  # V1 unchanged
hatidata.v2_create_task(project_id, agent_type, task_class)     # V2 shadow write
```
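One way to keep the shadow write genuinely non-blocking is a small wrapper that logs and swallows V2 errors. `shadow_write` below is a hypothetical helper, not part of the SDK:

```python
import logging

log = logging.getLogger("hatidata.migration")

def shadow_write(v1_write, v2_write, *args, **kwargs):
    """Run the authoritative V1 write, then attempt the V2 shadow write.

    A V2 failure is logged and swallowed, so V1 behavior never changes.
    """
    result = v1_write(*args, **kwargs)  # V1 stays the source of truth
    try:
        v2_write(*args, **kwargs)       # best-effort shadow write
    except Exception:
        log.warning("V2 shadow write failed; continuing on V1", exc_info=True)
    return result

# Example wiring (the two calls take different argument lists in practice,
# so wrap each in a lambda):
# shadow_write(
#     lambda: hatidata.insert_agent_task_log(project_id, agent_type, result),
#     lambda: hatidata.v2_create_task(project_id, agent_type, task_class),
# )
```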
#### Step 2: Dual Read (Weeks 2-3)
Start reading from V2 for new features (lineage, explain bundles) while keeping V1 reads for existing features.
```python
# Lineage query — new in V2
bundle = hatidata.v2_explain(attempt_id)
print(f"Cost: ${bundle.total_cost_usd}, Model: {bundle.model_decision.primary_model}")
```
#### Step 3: State Pivot (Week 4+)
Once V2 shadow writes are stable, migrate your task queue to V2:
```python
# Before: platform manages its own agent_task_queue
task = my_queue.pop_next()
result = agent.execute(task)
my_queue.mark_done(task.id)

# After: HatiData V2 manages the lifecycle
attempt = hatidata.v2_claim_task(task_id)
result = agent.execute(attempt)
hatidata.v2_complete_attempt(attempt.id, result)
```
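The pivot also needs a failure path so V2 can drive recovery. The sketch below assumes a `v2_fail_attempt` SDK method wrapping `POST /v2/runtime/attempts/:id/fail`; check your SDK version for the actual name:

```python
def run_one(hatidata, agent, task_id):
    """Claim a task, execute it, and report the outcome to V2.

    On success the attempt is completed; on any exception the failure is
    reported (so recovery can kick in) and the error is re-raised.
    """
    attempt = hatidata.v2_claim_task(task_id)
    try:
        result = agent.execute(attempt)
    except Exception as exc:
        # Hypothetical wrapper for POST /v2/runtime/attempts/:id/fail
        hatidata.v2_fail_attempt(attempt.id, reason=str(exc))
        raise
    hatidata.v2_complete_attempt(attempt.id, result)
    return result
```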
#### Step 4: Delete Local Tables
Once all reads and writes go through V2, you can drop your local task management tables:
```sql
-- Safe to drop after the V2 migration is complete
DROP TABLE IF EXISTS agent_task_queue;
DROP TABLE IF EXISTS active_phase_runs;
```
### For Agent Developers

If you're building agents that connect to HatiData:

**SDK Upgrade:**
```bash
# Python (quote the spec so the shell doesn't treat ">" as a redirect)
pip install --upgrade "hatidata-agent>=2.0"

# TypeScript
npm install hatidata-sdk@latest
```
**No Code Changes Required** — the SDKs detect V2 automatically and use the new endpoints when available. Your existing `store_memory()`, `search_memory()`, and `log_reasoning_step()` calls work unchanged.
**New Capabilities:**

```python
from hatidata import HatiDataClient

client = HatiDataClient(host="api.hatidata.com", api_key="hd_agent_...")

# V2: Get lineage for an attempt
bundle = client.explain(attempt_id="att-001")
print(bundle.model_decision.primary_model)  # "deepseek-ai/deepseek-v3.2-maas"
print(bundle.total_cost_usd)                # 0.0031

# V2: Query with a visibility mode
results = client.search_memory(
    query="architecture decisions",
    visibility="MainOnly",  # Only mainline, no branch leakage
)
```
## SQL Compatibility

V2 introduces the `hd_runtime.*` schema. These tables are queryable via the SQL proxy:
```sql
-- Query V2 tables directly
SELECT * FROM hd_runtime.tasks WHERE project_id = 'proj-abc';

-- Join V1 and V2 data
SELECT
    t.agent_type,
    a.status,
    m.memory_key,
    m.value
FROM hd_runtime.tasks t
JOIN hd_runtime.task_attempts a ON a.task_id = t.id
LEFT JOIN agent_memories m ON m.project_id = t.project_id
WHERE t.project_id = 'proj-abc';
```
`hd_runtime.*` queries bypass DuckDB and use Postgres passthrough — they execute directly against the control-plane database. This means they don't benefit from DuckDB's columnar query optimization but have lower latency for transactional reads.
## Breaking Changes
V2 has no breaking changes to V1 APIs. The following are behavioral changes to be aware of:
| Change | Impact | Action |
|---|---|---|
| `workflow_events` is append-only | Cannot `UPDATE`/`DELETE` event rows | Use V2 event queries instead of V1 state mutations |
| One active attempt per task | Cannot have two agents on same task | Ensure your dispatcher doesn't double-assign |
| Lease heartbeat required | Agents must send heartbeats every 60s | SDK handles this automatically |
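If you call the API without an SDK, you need your own loop to keep the lease alive. Below is a minimal sketch: the 60-second requirement comes from the table above, beating at half that interval is a conventional safety margin (an assumption, not a documented recommendation), and `send_heartbeat` stands in for whatever renewal call your client makes:

```python
import threading

def start_heartbeat(send_heartbeat, interval_s: float = 30.0):
    """Send lease heartbeats on a daemon thread until the stop event is set.

    The lease requires a heartbeat every 60s; a 30s interval leaves
    headroom for network jitter and GC pauses.
    """
    stop = threading.Event()

    def loop():
        while not stop.is_set():
            send_heartbeat()       # renew the lease (stand-in callable)
            stop.wait(interval_s)  # wakes early if stop is set

    thread = threading.Thread(target=loop, daemon=True)
    thread.start()
    return stop, thread

# Usage: start before executing the attempt, stop when done.
# stop, hb = start_heartbeat(lambda: client.heartbeat(attempt_id))  # hypothetical renewal call
# ... do the work ...
# stop.set(); hb.join()
```

Using `Event.wait` instead of `time.sleep` lets the loop exit promptly when the attempt finishes rather than blocking for a full interval.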
## Rollback
If V2 causes issues:
- Sprint 1: `DROP SCHEMA hd_runtime CASCADE` — complete removal
- Sprint 2+: feature flag disable only (schema preserved for audit)
V1 continues to work regardless of V2 state.
## Next Steps
- Tasks & Attempts — The core V2 lifecycle
- Entity Relationship Model — Visual entity map
- Lineage & Explainability — The new tracing system