Vercel AI SDK Integration

HatiData integrates with the Vercel AI SDK to provide persistent agent memory alongside streaming LLM responses. Store conversation history, inject retrieved context, and log reasoning steps -- all within Next.js route handlers and React Server Components.

Installation

npm install hatidata ai @ai-sdk/openai

Requirements: Node.js 18+, Next.js 14+ (App Router), and a running HatiData proxy.

Set environment variables in .env.local:

OPENAI_API_KEY=sk-...
HATIDATA_HOST=localhost
HATIDATA_PORT=5439
HATIDATA_API_KEY=hd_live_your_api_key
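
The handlers below assume all four variables are set. A small startup check can fail fast when one is missing (a sketch; `assertEnv` is a helper name introduced here, not part of the SDK):

```typescript
// Throw at startup if any required environment variable is missing or empty.
export function assertEnv(env: Record<string, string | undefined>, names: string[]): void {
  const missing = names.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
}

// e.g. at module load time:
// assertEnv(process.env, ['OPENAI_API_KEY', 'HATIDATA_HOST', 'HATIDATA_PORT', 'HATIDATA_API_KEY']);
```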

Route Handler with Memory

Create an API route that retrieves relevant memories before streaming a response, then persists the interaction afterward.

// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { HatiDataClient } from 'hatidata';

const hd = new HatiDataClient({
  host: process.env.HATIDATA_HOST!,
  port: Number(process.env.HATIDATA_PORT),
  apiKey: process.env.HATIDATA_API_KEY!,
});

export async function POST(req: Request) {
  const { messages, sessionId = 'default' } = await req.json();
  const lastMessage = messages[messages.length - 1].content;

  // Retrieve relevant memories
  const memories = await hd.memory.search({
    agentId: 'vercel-agent',
    query: lastMessage,
    filters: { session_id: sessionId },
    topK: 5,
  });

  const contextBlock = memories.length > 0
    ? `\n\nRelevant context from prior interactions:\n${memories.map(m => `- ${m.content}`).join('\n')}`
    : '';

  const result = streamText({
    model: openai('gpt-4o'),
    system: `You are a helpful assistant.${contextBlock}`,
    messages,
    onFinish: async ({ text }) => {
      // Persist the interaction
      await hd.memory.store({
        agentId: 'vercel-agent',
        content: `User: ${lastMessage}\nAssistant: ${text.slice(0, 300)}`,
        metadata: {
          session_id: sessionId,
          type: 'interaction',
          timestamp: new Date().toISOString(),
        },
      });
    },
  });

  return result.toDataStreamResponse();
}

React Client Component

Use the useChat hook from the Vercel AI SDK to connect to the memory-augmented route handler:

// app/chat/page.tsx
'use client';

import { useChat } from 'ai/react';

export default function ChatPage() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat({
    api: '/api/chat',
    body: { sessionId: 'user-session-123' },
  });

  return (
    <div className="max-w-2xl mx-auto p-4">
      <div className="space-y-4">
        {messages.map((m) => (
          <div key={m.id} className={m.role === 'user' ? 'text-right' : 'text-left'}>
            <span className="inline-block p-3 rounded-lg bg-gray-800">
              {m.content}
            </span>
          </div>
        ))}
      </div>
      <form onSubmit={handleSubmit} className="mt-4 flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask anything..."
          className="flex-1 p-2 rounded bg-gray-900 border border-gray-700"
        />
        <button type="submit" disabled={isLoading} className="px-4 py-2 bg-violet-600 rounded">
          Send
        </button>
      </form>
    </div>
  );
}
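
The hard-coded 'user-session-123' above is for illustration. In practice you would derive a stable per-visitor id; one approach (a sketch; `getSessionId` is a name introduced here) keeps an id in localStorage:

```typescript
// Return a session id that is stable across page loads for this browser.
// Falls back to a fixed value during server-side rendering, where
// localStorage is unavailable.
export function getSessionId(storageKey = 'hd-session-id'): string {
  if (typeof window === 'undefined') return 'ssr';
  let id = window.localStorage.getItem(storageKey);
  if (!id) {
    id = crypto.randomUUID();
    window.localStorage.setItem(storageKey, id);
  }
  return id;
}
```

Then pass `body: { sessionId: getSessionId() }` to useChat.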

Tool Integration

Define tools that query HatiData and make them available to the streaming response:

// app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText, tool } from 'ai';
import { z } from 'zod';
import { HatiDataClient } from 'hatidata';

const hd = new HatiDataClient({
  host: process.env.HATIDATA_HOST!,
  port: Number(process.env.HATIDATA_PORT),
  apiKey: process.env.HATIDATA_API_KEY!,
});

const queryTool = tool({
  description: 'Query the data warehouse using SQL',
  parameters: z.object({
    sql: z.string().describe('The SQL query to execute'),
  }),
  execute: async ({ sql }) => {
    const rows = await hd.query(sql);
    // Cap the result so large tables do not flood the model's context
    return JSON.stringify(rows.slice(0, 20));
  },
});

const searchMemoryTool = tool({
  description: 'Search agent memory for relevant past interactions',
  parameters: z.object({
    query: z.string().describe('The search query'),
  }),
  execute: async ({ query }) => {
    const results = await hd.memory.search({
      agentId: 'vercel-agent',
      query,
      topK: 5,
    });
    return results.map(r => r.content).join('\n---\n');
  },
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    system: 'You are a data analyst with access to a SQL warehouse and memory search.',
    messages,
    tools: { query: queryTool, searchMemory: searchMemoryTool },
    maxSteps: 5,
  });

  return result.toDataStreamResponse();
}
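
The `slice(0, 20)` in queryTool caps the row count but not the serialized size of wide rows. A helper like the following (a sketch; `capToolResult` is a name introduced here) bounds both before the result re-enters the model's context:

```typescript
// Bound a tool result by row count and by serialized length before
// returning it to the model, so wide rows cannot blow out the context window.
export function capToolResult(rows: unknown[], maxRows = 20, maxChars = 4000): string {
  const json = JSON.stringify(rows.slice(0, maxRows));
  return json.length <= maxChars ? json : `${json.slice(0, maxChars)}… (truncated)`;
}
```

Inside the tool's execute callback, `return capToolResult(rows);` replaces the inline slice-and-stringify.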

Edge-Compatible Patterns

HatiData's TypeScript SDK uses the Postgres wire protocol over TCP, which is not available in Edge Runtime. For edge deployments, use the HTTP API instead:

// lib/hatidata-edge.ts
const HATIDATA_API_URL = process.env.HATIDATA_API_URL!; // e.g., https://api.hatidata.com

export async function searchMemory(query: string, agentId: string) {
  const res = await fetch(`${HATIDATA_API_URL}/v1/memory/search`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.HATIDATA_API_KEY}`,
    },
    body: JSON.stringify({ agent_id: agentId, query, top_k: 5 }),
  });
  return res.json();
}

// app/api/chat/route.ts
import { searchMemory } from '@/lib/hatidata-edge';

export const runtime = 'edge';

// Use searchMemory() in your route handler
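
The HTTP client can grow a write path as well. The sketch below assumes a `/v1/memory/store` endpoint mirroring the search endpoint above; the endpoint path and payload shape are assumptions, not a confirmed API:

```typescript
// lib/hatidata-edge.ts (continued)
const API_URL = process.env.HATIDATA_API_URL!;

// Pure helper: build the JSON payload for a memory write.
export function buildStorePayload(
  agentId: string,
  content: string,
  metadata: Record<string, string> = {},
) {
  return { agent_id: agentId, content, metadata };
}

// Persist an interaction over HTTP (assumed /v1/memory/store endpoint).
export async function storeMemory(
  agentId: string,
  content: string,
  metadata: Record<string, string> = {},
) {
  const res = await fetch(`${API_URL}/v1/memory/store`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.HATIDATA_API_KEY}`,
    },
    body: JSON.stringify(buildStorePayload(agentId, content, metadata)),
  });
  if (!res.ok) throw new Error(`HatiData store failed: ${res.status}`);
  return res.json();
}
```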
Note: The HTTP API adds a network hop compared to the direct Postgres wire protocol. For latency-sensitive workloads, use the Node.js runtime instead of Edge.


Structured Output with Memory

Use the Vercel AI SDK's generateObject with HatiData memory for structured data extraction:

import { openai } from '@ai-sdk/openai';
import { generateObject } from 'ai';
import { z } from 'zod';

// `hd` is a HatiDataClient instance, initialized as in the route handler above.

const TicketSchema = z.object({
  priority: z.enum(['low', 'medium', 'high', 'critical']),
  category: z.string(),
  summary: z.string(),
  suggestedAction: z.string(),
});

export async function classifyTicket(ticketBody: string) {
  // Retrieve similar past tickets from memory
  const pastTickets = await hd.memory.search({
    agentId: 'classifier-agent',
    query: ticketBody,
    topK: 3,
  });

  const context = pastTickets.length > 0
    ? `\nSimilar past tickets:\n${pastTickets.map(t => t.content).join('\n')}`
    : '';

  const { object } = await generateObject({
    model: openai('gpt-4o'),
    schema: TicketSchema,
    prompt: `Classify this support ticket:${context}\n\nTicket: ${ticketBody}`,
  });

  return object;
}
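
The prompt-assembly step inside classifyTicket can be factored into a pure helper, which keeps it easy to unit-test without calling the model (a sketch; `buildTicketPrompt` is a name introduced here):

```typescript
interface MemoryHit {
  content: string;
}

// Build the classification prompt from the ticket body plus any similar
// past tickets retrieved from memory.
export function buildTicketPrompt(ticketBody: string, pastTickets: MemoryHit[]): string {
  const context = pastTickets.length > 0
    ? `\nSimilar past tickets:\n${pastTickets.map((t) => t.content).join('\n')}`
    : '';
  return `Classify this support ticket:${context}\n\nTicket: ${ticketBody}`;
}
```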
