
BI Tools

HatiData exposes a PostgreSQL wire-protocol compatible interface on port 5439. Any BI tool, SQL client, or application that supports PostgreSQL connections can connect to HatiData without modification.

Universal Connection Parameters

Parameter   Value
---------   ----------------------------------------------------------
Host        Your HatiData proxy endpoint (e.g., data.yourcompany.com)
Port        5439
Database    hatidata (or your configured catalog name)
Username    Your HatiData user or service account ID
Password    Your HatiData API key (hd_live_* or hd_test_*)
SSL         Required in production; optional in dev mode
Protocol    PostgreSQL v3 wire protocol
Driver      Any PostgreSQL driver (JDBC, ODBC, native)

HatiData presents itself as PostgreSQL 15.0 to connected clients. Legacy warehouse SQL syntax is automatically transpiled to DuckDB-compatible SQL by the proxy.
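For example, a Snowflake-style query can be sent unchanged and the proxy rewrites it on the fly. The snippet below is illustrative (orders is a sample table, and IFF/QUALIFY are legacy-dialect constructs; see SQL Compatibility for exact coverage):

-- Sent as-is by the client; transpiled to DuckDB SQL by the proxy.
SELECT
    customer_id,
    IFF(amount > 100, 'large', 'small') AS order_size
FROM orders
QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY created_at DESC) = 1;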


Tableau

Tableau Desktop

  1. Open Tableau Desktop.
  2. Under Connect, select PostgreSQL.
  3. Enter connection details:
    • Server: data.yourcompany.com
    • Port: 5439
    • Database: hatidata
    • Authentication: Username and Password
    • Username: your HatiData user
    • Password: your API key
  4. Check Require SSL if connecting to a production endpoint.
  5. Click Sign In.
  6. Tableau will query information_schema and pg_catalog to discover tables. HatiData's compatibility layer returns metadata for these queries.
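To see roughly what discovery returns, you can issue the same kind of catalog query yourself:

-- A typical discovery query, answered by the compatibility layer
-- without hitting the DuckDB engine:
SELECT table_schema, table_name
FROM information_schema.tables
ORDER BY table_schema, table_name;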

Tableau Server / Tableau Cloud

  1. In Tableau Server, go to Settings > Connections.
  2. Add a new PostgreSQL connection with the same parameters above.
  3. For published data sources, embed credentials or use OAuth if configured.
  4. For scheduled extracts, use a dedicated service account API key (hd_live_svc_...).

Tableau Tips

  • Use Custom SQL for complex queries; HatiData transpiles legacy warehouse syntax automatically.
  • Set Initial SQL to SET search_path = 'public' if schema discovery is slow (see the example after this list).
  • Tableau issues many pg_type and pg_attribute queries on connect; HatiData handles all of these transparently.
  • When HATIDATA_COST_ESTIMATION_ENABLED=true, cost notices appear in Tableau's log as PostgreSQL NOTICE messages but do not affect query results.
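If you use Initial SQL, a minimal block might look like the following (the timeout statement is optional, and the value is only an example):

-- Runs once when Tableau opens the connection:
SET search_path = 'public';
SET statement_timeout = 600000;  -- 10 minutes, in milliseconds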

Looker

Looker (Google Cloud)

  1. In Looker Admin, navigate to Database > Connections.
  2. Click New Connection.
  3. Select PostgreSQL 9.5+ as the dialect.
  4. Enter connection details:
    • Host: data.yourcompany.com
    • Port: 5439
    • Database: hatidata
    • Username: your HatiData service account
    • Password: your API key
  5. Under Additional JDBC parameters, add:
    ssl=true&sslmode=require
  6. Under Additional Settings, set Max Connections to match your HatiData concurrency limit (default: 100).
  7. Disable Persistent Derived Tables (PDTs) -- HatiData is a read-only query engine and does not support CREATE TABLE from Looker.
  8. Click Test to verify connectivity, then Save.

LookML Configuration

In your LookML project, define the connection:

connection: "hatidata_production"

HatiData supports standard SQL aggregations, window functions, and CTEs that Looker generates.

PDT Handling

HatiData does not support CREATE TABLE AS SELECT. If your LookML models use PDTs:

  • Convert PDTs to native derived tables (subqueries in the FROM clause), as sketched below.
  • Or pre-materialize tables in your data lake and query them through HatiData.
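As a sketch of the first option, the PDT's SQL moves inline as a subquery (table and column names are illustrative):

-- Native derived table: the aggregate runs inline instead of materializing a PDT.
SELECT c.customer_id, c.lifetime_orders
FROM (
    SELECT customer_id, COUNT(*) AS lifetime_orders
    FROM orders
    GROUP BY customer_id
) AS c;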

Looker Tips

  • Aggregation-aware queries (GROUP BY, window functions) are fully supported.
  • SET and SHOW commands are handled without error.
  • Looker's information_schema queries for table discovery are supported natively.

Metabase

  1. Open Metabase Admin settings.
  2. Go to Databases > Add Database.
  3. Select PostgreSQL as the database type.
  4. Fill in:
    • Display name: HatiData Production
    • Host: data.yourcompany.com
    • Port: 5439
    • Database name: hatidata
    • Username: your HatiData user
    • Password: your API key
  5. Under Additional connection string options:
    ssl=true&sslmode=require
  6. Click Save.

Sync Settings

Metabase periodically syncs table metadata by querying information_schema. Recommended configuration:

  • Set sync schedule to daily rather than hourly to reduce unnecessary queries.
  • Disable Periodically refingerprint tables if your data does not change frequently.
  • The initial sync may take longer if you have many tables.

Metabase Tips

  • Questions built with the visual query builder generate standard SQL that HatiData handles natively.
  • Native questions (raw SQL) support both standard SQL and legacy warehouse syntax (auto-transpiled).
  • Each scheduled execution goes through HatiData's full query pipeline (policy checks, audit, metering).
  • Increase HATIDATA_QUERY_TIMEOUT_SECS for long-running analytical queries.

DBeaver

  1. Open DBeaver and click New Database Connection (plug icon).
  2. Select PostgreSQL from the list.
  3. Enter:
    • Host: data.yourcompany.com
    • Port: 5439
    • Database: hatidata
    • Username: your HatiData user
    • Password: your API key
  4. On the SSL tab:
    • For production: check Use SSL and set SSL mode to require
    • For local dev: leave SSL disabled
  5. Click Test Connection to verify, then Finish.

DBeaver Features

  • SQL Editor (F3): Write queries directly. Both standard SQL and legacy warehouse syntax are supported.
  • Schema Browser: DBeaver queries information_schema and pg_catalog to populate the tree. Tables, columns, and types display correctly.
  • ERD: DBeaver's Entity Relationship Diagram tool works with discovered schema.
  • Data Export: CSV, JSON, SQL, and other export formats are all supported.
  • Extended Query Protocol: DBeaver uses Parse/Bind/Execute for parameterized queries. HatiData fully supports this protocol path.
  • Use EXPLAIN COST <sql> to preview query cost without executing.
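For example (the query is illustrative, and the format of the returned estimate may vary by HatiData version):

EXPLAIN COST
SELECT customer_id, SUM(amount) AS total_spend
FROM orders
GROUP BY customer_id;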

DataGrip (JetBrains)

  1. Open DataGrip and go to File > New > Data Source > PostgreSQL.
  2. Enter:
    • Host: data.yourcompany.com
    • Port: 5439
    • Database: hatidata
    • User: your HatiData user
    • Password: your API key
  3. On the Advanced tab, set:
    • ssl = true
    • sslmode = require
  4. Click Test Connection, then OK.

DataGrip's introspection queries against pg_catalog are fully supported. The SQL console, query execution plans, and data export all work as expected.


Power BI

Power BI connects to HatiData either through its built-in PostgreSQL connector (Get Data > PostgreSQL) or through the PostgreSQL ODBC driver (psqlODBC) via a DSN.

Prerequisites

For the ODBC route, install the PostgreSQL ODBC driver:

Windows (most common for Power BI): Download from https://www.postgresql.org/ftp/odbc/versions/msi/

macOS:

brew install psqlodbc

Power BI Desktop

  1. In Power BI Desktop, click Get Data > PostgreSQL.
  2. Enter server as data.yourcompany.com:5439.
  3. Enter database as hatidata.
  4. Select Database authentication and enter credentials.
  5. Click Connect.

Power BI via ODBC DSN

Alternatively, configure an ODBC DSN and connect through it:

  1. Open ODBC Data Source Administrator (64-bit).
  2. Add a new User DSN or System DSN.
  3. Select PostgreSQL Unicode driver.
  4. Fill in host, port, database, and credentials.
  5. In Power BI, use Get Data > ODBC and select your DSN.

JDBC / ODBC Migration

If you are migrating from a legacy cloud warehouse, you only need to swap connection strings. No code changes are required.

JDBC Migration

Use the standard PostgreSQL JDBC Driver (version 42.x or later):

<!-- Maven -->
<dependency>
  <groupId>org.postgresql</groupId>
  <artifactId>postgresql</artifactId>
  <version>42.7.3</version>
</dependency>

Connection URL:

jdbc:postgresql://<host>:5439/<database>?sslmode=require

Java example:

import java.sql.*;
import java.util.Properties;

String url = "jdbc:postgresql://data.yourcompany.com:5439/hatidata?sslmode=require";
Properties props = new Properties();
props.setProperty("user", "analyst@yourcompany.com");
props.setProperty("password", "hd_live_...");

// try-with-resources closes the connection, statement, and result set automatically
try (Connection conn = DriverManager.getConnection(url, props);
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("SELECT * FROM orders LIMIT 10")) {
    while (rs.next()) {
        System.out.println(rs.getString(1));
    }
}

JDBC Property Mapping

Legacy JDBC Property                              PostgreSQL JDBC Equivalent
------------------------------------------------  ---------------------------------------------------
jdbc:snowflake://account.snowflakecomputing.com   jdbc:postgresql://host:5439/database
account                                           (not needed)
warehouse                                         (not needed -- HatiData auto-manages compute)
db                                                Database name in URL path
schema                                            Set via SET search_path = schema_name after connect
role                                              Mapped to HatiData RBAC role
authenticator                                     (use password auth with API key)
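For example, the legacy schema property becomes a statement run right after connecting (analytics is a placeholder schema name):

-- Equivalent of the legacy "schema" JDBC property:
SET search_path = analytics;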

ODBC Migration

Use psqlODBC (PostgreSQL ODBC driver).

DSN configuration (~/.odbc.ini on macOS/Linux):

[HatiData]
Driver = PostgreSQL Unicode
Server = data.yourcompany.com
Port = 5439
Database = hatidata
UserName = admin
Password = hd_live_your_api_key
SSLMode = require

ODBC Property Mapping

Legacy ODBC Property                      PostgreSQL ODBC Equivalent
----------------------------------------  ----------------------------------
Driver = SnowflakeDSIIDriver              Driver = PostgreSQL
Server = account.snowflakecomputing.com   Servername = data.yourcompany.com
Port = 443                                Port = 5439
Database                                  Database
UID                                       Username
PWD                                       Password (use HatiData API key)
Warehouse                                 (not needed)
Role                                      (mapped via HatiData RBAC)

psql (Command Line)

Connect directly with the standard PostgreSQL client:

# Basic connection
psql -h data.yourcompany.com -p 5439 -U admin -d hatidata

# With SSL
psql "host=data.yourcompany.com port=5439 dbname=hatidata user=admin sslmode=require"

# Local development (no SSL)
psql -h localhost -p 5439 -U admin -d hatidata

# Execute a single query
psql -h localhost -p 5439 -U admin -d hatidata -c "SELECT 1"

Environment Variables

export PGHOST=data.yourcompany.com
export PGPORT=5439
export PGDATABASE=hatidata
export PGUSER=admin
export PGPASSWORD=hd_live_your_api_key
export PGSSLMODE=require

# Then simply:
psql

dbt

Because HatiData exposes a PostgreSQL-compatible interface, it works with dbt through the standard dbt-postgres adapter.

profiles.yml

hatidata:
  target: dev
  outputs:
    dev:
      type: postgres
      host: data.yourcompany.com
      port: 5439
      user: "{{ env_var('HATIDATA_USER') }}"
      password: "{{ env_var('HATIDATA_API_KEY') }}"
      dbname: hatidata
      schema: public
      threads: 4
      connect_timeout: 30
      sslmode: require

For more details, see the dbt adapter documentation.


Common Troubleshooting

Connection Refused

  • Verify the HatiData proxy is running and accessible on port 5439.
  • Check firewall rules and security groups.
  • For local development: HATIDATA_DEV_MODE=true cargo run -p hatidata-proxy

SSL Errors

  • In development, set sslmode=disable or sslmode=prefer.
  • In production, ensure your client trusts the HatiData TLS certificate.
  • For self-signed certs in dev, set sslmode=require (not verify-full).

Authentication Failed

  • Verify your API key is valid and not expired.
  • Check that your user has the correct role assigned.
  • Production keys start with hd_live_, development keys with hd_test_.

Table Not Found

  • Tables are only visible if they exist in the catalog.
  • Run SELECT * FROM information_schema.tables to see available tables.
  • Check that your role has access to the table (ABAC policy engine).

Slow Metadata Discovery

Some BI tools issue many catalog queries on first connect. This is normal. HatiData intercepts these queries in its compatibility layer and returns results without hitting the DuckDB engine.

If discovery is slow, set the search path explicitly:

SET search_path = 'public';

Query Syntax Errors

HatiData transpiles legacy warehouse SQL to DuckDB. If you encounter syntax errors:

  1. Check the transpiler supports your syntax (see SQL Compatibility).
  2. Try DuckDB-native syntax directly (see the example after this list).
  3. If AI healing is enabled, the proxy will attempt to auto-correct and retry the query.
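As an illustrative rewrite (events and payload are placeholder names, and this assumes a Snowflake-style JSON path is the construct that failed), DuckDB's arrow operators perform the same JSON extraction:

-- Legacy (Snowflake-style):  SELECT payload:customer.id FROM events;
-- DuckDB-native equivalent:
SELECT payload -> 'customer' ->> 'id' AS customer_id FROM events;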

Timeout Errors

  • Default query timeout is 300 seconds.
  • Increase with: SET statement_timeout = 600000 (in milliseconds).
  • For BI tools, configure the connection timeout in the tool settings.

Character Encoding

HatiData uses UTF-8 encoding. If you see encoding issues:

SET client_encoding = 'UTF8';

Most BI tools send this automatically on connection.
