BI Tools
HatiData exposes a PostgreSQL wire-protocol compatible interface on port 5439. Any BI tool, SQL client, or application that supports PostgreSQL connections can connect to HatiData without modification.
Universal Connection Parameters
| Parameter | Value |
|---|---|
| Host | Your HatiData proxy endpoint (e.g., data.yourcompany.com) |
| Port | 5439 |
| Database | hatidata (or your configured catalog name) |
| Username | Your HatiData user or service account ID |
| Password | Your HatiData API key (hd_live_* or hd_test_*) |
| SSL | Required in production; optional in dev mode |
| Protocol | PostgreSQL v3 wire protocol |
| Driver | Any PostgreSQL driver (JDBC, ODBC, native) |
HatiData presents itself as PostgreSQL 15.0 to connected clients. Legacy warehouse SQL syntax is automatically transpiled to DuckDB-compatible SQL by the proxy.
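For example, a Snowflake-style expression is rewritten into an equivalent DuckDB form along these lines (the rewrite shown is illustrative, not the transpiler's verbatim output; table and column names are placeholders):

```sql
-- Legacy (Snowflake-style) input:
SELECT order_id, IFF(amount > 100, 'large', 'small') AS size_bucket
FROM orders;

-- Roughly the DuckDB-compatible form it becomes (illustrative):
SELECT order_id, CASE WHEN amount > 100 THEN 'large' ELSE 'small' END AS size_bucket
FROM orders;
```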
Tableau
Tableau Desktop
- Open Tableau Desktop.
- Under Connect, select PostgreSQL.
- Enter connection details:
  - Server: `data.yourcompany.com`
  - Port: `5439`
  - Database: `hatidata`
  - Authentication: Username and Password
  - Username: your HatiData user
  - Password: your API key
- Check Require SSL if connecting to a production endpoint.
- Click Sign In.
- Tableau will query `information_schema` and `pg_catalog` to discover tables. HatiData's compatibility layer returns metadata for these queries.
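This discovery traffic is ordinary catalog SQL. A representative sketch of the kind of lookups Tableau issues on connect (illustrative, not verbatim):

```sql
-- Typical catalog lookups sent by BI tools on connect (illustrative):
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_schema NOT IN ('pg_catalog', 'information_schema');

SELECT oid, typname FROM pg_catalog.pg_type;
```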
Tableau Server / Tableau Cloud
- In Tableau Server, go to Settings > Connections.
- Add a new PostgreSQL connection using the same parameters as above.
- For published data sources, embed credentials or use OAuth if configured.
- For scheduled extracts, use a dedicated service account API key (`hd_live_svc_...`).
Tableau Tips
- Use Custom SQL for complex queries; HatiData transpiles legacy warehouse syntax automatically.
- Set Initial SQL to `SET search_path = 'public'` if schema discovery is slow.
- Tableau issues many `pg_type` and `pg_attribute` queries on connect; HatiData handles all of these transparently.
- When `HATIDATA_COST_ESTIMATION_ENABLED=true`, cost notices appear in Tableau's log as PostgreSQL `NOTICE` messages but do not affect query results.
Looker
Looker (Google Cloud)
- In Looker Admin, navigate to Database > Connections.
- Click New Connection.
- Select PostgreSQL 9.5+ as the dialect.
- Enter connection details:
  - Host: `data.yourcompany.com`
  - Port: `5439`
  - Database: `hatidata`
  - Username: your HatiData service account
  - Password: your API key
- Under Additional JDBC parameters, add: `ssl=true&sslmode=require`
- Under Additional Settings, set Max Connections to match your HatiData concurrency limit (default: 100).
- Disable Persistent Derived Tables (PDTs) -- HatiData is a read-only query engine and does not support `CREATE TABLE` from Looker.
- Click Test to verify connectivity, then Save.
LookML Configuration
In your LookML project, define the connection:
```lookml
connection: "hatidata_production"
```
HatiData supports standard SQL aggregations, window functions, and CTEs that Looker generates.
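A representative sketch of the shape of SQL Looker emits, which HatiData executes as-is (table and column names are placeholders):

```sql
-- Illustrative Looker-style query: CTE + aggregation + window function.
WITH order_facts AS (
  SELECT user_id, created_at, amount
  FROM orders
)
SELECT
  user_id,
  COUNT(*) AS order_count,
  SUM(amount) AS lifetime_value,
  RANK() OVER (ORDER BY SUM(amount) DESC) AS value_rank
FROM order_facts
GROUP BY user_id;
```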
PDT Handling
HatiData does not support `CREATE TABLE AS SELECT`. If your LookML models use PDTs:
- Convert PDTs to native derived tables (subqueries in the `FROM` clause), as in the sketch below.
- Or pre-materialize tables in your data lake and query them through HatiData.
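For example, a PDT that pre-aggregated daily revenue can be expressed as a native derived table instead (table and column names are placeholders):

```sql
-- Instead of materializing daily_revenue as a PDT, inline it as a subquery:
SELECT d.day, d.revenue
FROM (
  SELECT DATE_TRUNC('day', created_at) AS day, SUM(amount) AS revenue
  FROM orders
  GROUP BY 1
) AS d
WHERE d.day >= DATE '2024-01-01';
```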
Looker Tips
- Aggregation-aware queries (`GROUP BY`, window functions) are fully supported.
- `SET` and `SHOW` commands are handled without error.
- Looker's `information_schema` queries for table discovery are supported natively.
Metabase
- Open Metabase Admin settings.
- Go to Databases > Add Database.
- Select PostgreSQL as the database type.
- Fill in:
  - Display name: `HatiData Production`
  - Host: `data.yourcompany.com`
  - Port: `5439`
  - Database name: `hatidata`
  - Username: your HatiData user
  - Password: your API key
- Under Additional connection string options: `ssl=true&sslmode=require`
- Click Save.
Sync Settings
Metabase periodically syncs table metadata by querying `information_schema`. Recommended configuration:
- Set sync schedule to daily rather than hourly to reduce unnecessary queries.
- Disable Periodically refingerprint tables if your data does not change frequently.
- The initial sync may take longer if you have many tables.
Metabase Tips
- Questions built with the visual query builder generate standard SQL that HatiData handles natively.
- Native questions (raw SQL) support both standard SQL and legacy warehouse syntax (auto-transpiled).
- Each scheduled execution goes through HatiData's full query pipeline (policy checks, audit, metering).
- Increase `HATIDATA_QUERY_TIMEOUT_SECS` for long-running analytical queries.
DBeaver
- Open DBeaver and click New Database Connection (plug icon).
- Select PostgreSQL from the list.
- Enter:
  - Host: `data.yourcompany.com`
  - Port: `5439`
  - Database: `hatidata`
  - Username: your HatiData user
  - Password: your API key
- On the SSL tab:
  - For production: check Use SSL and set SSL mode to `require`
  - For local dev: leave SSL disabled
- Click Test Connection to verify, then Finish.
DBeaver Features
- SQL Editor (F3): Write queries directly. Both standard SQL and legacy warehouse syntax are supported.
- Schema Browser: DBeaver queries `information_schema` and `pg_catalog` to populate the tree. Tables, columns, and types display correctly.
- ERD: DBeaver's Entity Relationship Diagram tool works with the discovered schema.
- Data Export: CSV, JSON, SQL, and other export formats are all supported.
- Extended Query Protocol: DBeaver uses Parse/Bind/Execute for parameterized queries. HatiData fully supports this protocol path.
- Use `EXPLAIN COST <sql>` to preview query cost without executing; see the example below.
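A minimal usage sketch (`EXPLAIN COST` is taken from the tip above; the table name is a placeholder, and the output format depends on your HatiData version):

```sql
-- Preview the estimated cost of a query without running it:
EXPLAIN COST SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id;
```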
DataGrip (JetBrains)
- Open DataGrip and go to File > New > Data Source > PostgreSQL.
- Enter:
  - Host: `data.yourcompany.com`
  - Port: `5439`
  - Database: `hatidata`
  - User: your HatiData user
  - Password: your API key
- On the Advanced tab, set: `ssl=true` and `sslmode=require`
- Click Test Connection, then OK.
DataGrip's introspection queries against `pg_catalog` are fully supported. The SQL console, query execution plans, and data export all work as expected.
Power BI
Power BI can connect to HatiData through its built-in PostgreSQL connector or through the PostgreSQL ODBC driver (psqlODBC).
Prerequisites
For the ODBC path, install the PostgreSQL ODBC driver:

- Windows (most common for Power BI): download from https://www.postgresql.org/ftp/odbc/versions/msi/
- macOS: `brew install psqlodbc`
Power BI Desktop
- In Power BI Desktop, click Get Data > PostgreSQL.
- Enter server as `data.yourcompany.com:5439`.
- Enter database as `hatidata`.
- Select Database authentication and enter credentials.
- Click Connect.
Power BI via ODBC DSN
Alternatively, configure an ODBC DSN and connect through it:
- Open ODBC Data Source Administrator (64-bit).
- Add a new User DSN or System DSN.
- Select PostgreSQL Unicode driver.
- Fill in host, port, database, and credentials.
- In Power BI, use Get Data > ODBC and select your DSN.
JDBC / ODBC Migration
If you are migrating from a legacy cloud warehouse, you only need to swap connection strings. No code changes are required.
JDBC Migration
Use the standard PostgreSQL JDBC Driver (version 42.x or later):
```xml
<!-- Maven -->
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.7.3</version>
</dependency>
```
Connection URL:
```
jdbc:postgresql://<host>:5439/<database>?sslmode=require
```
Java example:
```java
import java.sql.*;
import java.util.Properties;

// Standard PostgreSQL JDBC connection; the HatiData API key is the password.
String url = "jdbc:postgresql://data.yourcompany.com:5439/hatidata?sslmode=require";
Properties props = new Properties();
props.setProperty("user", "analyst@yourcompany.com");
props.setProperty("password", "hd_live_...");

Connection conn = DriverManager.getConnection(url, props);
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("SELECT * FROM orders LIMIT 10");
```
JDBC Property Mapping
| Legacy JDBC Property | PostgreSQL JDBC Equivalent |
|---|---|
| `jdbc:snowflake://account.snowflakecomputing.com` | `jdbc:postgresql://host:5439/database` |
| `account` | (not needed) |
| `warehouse` | (not needed -- HatiData auto-manages compute) |
| `db` | Database name in URL path |
| `schema` | Set via `SET search_path = schema_name` after connect |
| `role` | Mapped to HatiData RBAC role |
| `authenticator` | (use password auth with API key) |
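For example, where a legacy URL carried a `schema` property, issue the equivalent statement after connecting (`analytics` is a placeholder schema name):

```sql
-- Replaces the legacy "schema" connection property:
SET search_path = analytics;
```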
ODBC Migration
Use psqlODBC (PostgreSQL ODBC driver).
DSN configuration (`~/.odbc.ini` on macOS/Linux):
```ini
[HatiData]
Driver = PostgreSQL Unicode
Servername = data.yourcompany.com
Port = 5439
Database = hatidata
Username = admin
Password = hd_live_your_api_key
SSLMode = require
```
ODBC Property Mapping
| Legacy ODBC Property | PostgreSQL ODBC Equivalent |
|---|---|
| `Driver = SnowflakeDSIIDriver` | `Driver = PostgreSQL` |
| `Server = account.snowflakecomputing.com` | `Servername = data.yourcompany.com` |
| `Port = 443` | `Port = 5439` |
| `Database` | `Database` |
| `UID` | `Username` |
| `PWD` | `Password` (use HatiData API key) |
| `Warehouse` | (not needed) |
| `Role` | (mapped via HatiData RBAC) |
psql (Command Line)
Connect directly with the standard PostgreSQL client:
```bash
# Basic connection
psql -h data.yourcompany.com -p 5439 -U admin -d hatidata

# With SSL
psql "host=data.yourcompany.com port=5439 dbname=hatidata user=admin sslmode=require"

# Local development (no SSL)
psql -h localhost -p 5439 -U admin -d hatidata

# Execute a single query
psql -h localhost -p 5439 -U admin -c "SELECT 1"
```
Environment Variables
```bash
export PGHOST=data.yourcompany.com
export PGPORT=5439
export PGDATABASE=hatidata
export PGUSER=admin
export PGPASSWORD=hd_live_your_api_key
export PGSSLMODE=require

# Then simply:
psql
```
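Once connected, a quick sanity check uses standard PostgreSQL functions (assuming HatiData's compatibility layer implements them; most drivers probe these on connect):

```sql
-- Confirm the session identity and what the server reports:
SELECT current_user, current_database();
SELECT version();  -- reports PostgreSQL 15.0 via the compatibility layer
```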
dbt
HatiData works with dbt via the dbt-postgres adapter since it exposes a PostgreSQL-compatible interface.
profiles.yml
```yaml
hatidata:
  target: dev
  outputs:
    dev:
      type: postgres
      host: data.yourcompany.com
      port: 5439
      user: "{{ env_var('HATIDATA_USER') }}"
      password: "{{ env_var('HATIDATA_API_KEY') }}"
      dbname: hatidata
      schema: public
      threads: 4
      connect_timeout: 30
      sslmode: require
```
For more details, see the dbt adapter documentation.
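Because HatiData does not support `CREATE TABLE AS SELECT` (see PDT Handling above), table materializations are unlikely to work; ephemeral models, which compile to CTEs and write nothing back, are the safer fit. A hypothetical model sketch (model name and columns are placeholders):

```sql
-- models/orders_daily.sql (hypothetical model).
-- Ephemeral models are inlined as CTEs downstream; nothing is written back.
{{ config(materialized='ephemeral') }}

SELECT
  DATE_TRUNC('day', created_at) AS day,
  COUNT(*) AS order_count
FROM orders
GROUP BY 1
```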
Common Troubleshooting
Connection Refused
- Verify the HatiData proxy is running and accessible on port 5439.
- Check firewall rules and security groups.
- For local development: `HATIDATA_DEV_MODE=true cargo run -p hatidata-proxy`
SSL Errors
- In development, set `sslmode=disable` or `sslmode=prefer`.
- In production, ensure your client trusts the HatiData TLS certificate.
- For self-signed certs in dev, set `sslmode=require` (not `verify-full`).
Authentication Failed
- Verify your API key is valid and not expired.
- Check that your user has the correct role assigned.
- Production keys start with `hd_live_`, development keys with `hd_test_`.
Table Not Found
- Tables are only visible if they exist in the catalog.
- Run `SELECT * FROM information_schema.tables` to see available tables (full query below).
- Check that your role has access to the table (ABAC policy engine).
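For example, to list every table your current role can see:

```sql
-- List all tables visible to your current role:
SELECT table_schema, table_name
FROM information_schema.tables
ORDER BY table_schema, table_name;
```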
Slow Metadata Discovery
Some BI tools issue many catalog queries on first connect. This is normal. HatiData intercepts these queries in its compatibility layer and returns results without hitting the DuckDB engine.
If discovery is slow, set the search path explicitly:
```sql
SET search_path = 'public';
```
Query Syntax Errors
HatiData transpiles legacy warehouse SQL to DuckDB. If you encounter syntax errors:
- Check that the transpiler supports your syntax (see SQL Compatibility).
- Try DuckDB-native syntax directly (see the example below).
- If AI healing is enabled, the proxy will attempt to auto-correct and retry the query.
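For example, if a legacy formatting function fails to transpile, the DuckDB-native equivalent usually runs directly (the function pairing here is illustrative; table and column names are placeholders):

```sql
-- Legacy (Snowflake-style) formatting call:
SELECT TO_VARCHAR(order_date, 'YYYY-MM') FROM orders;

-- DuckDB-native equivalent that needs no transpilation:
SELECT strftime(order_date, '%Y-%m') FROM orders;
```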
Timeout Errors
- Default query timeout is 300 seconds.
- Increase it with `SET statement_timeout = 600000` (value in milliseconds).
- For BI tools, configure the connection timeout in the tool settings.
Character Encoding
HatiData uses UTF-8 encoding. If you see encoding issues:
```sql
SET client_encoding = 'UTF8';
```
Most BI tools send this automatically on connection.