Frequently Asked Questions

Everything you need to know about Harbinger Explorer. Can't find what you're looking for? Contact us.

Need step-by-step instructions instead? Browse the full Help Center.

Getting Started

How do I get started?
Sign up for free — no credit card required. The onboarding guides you through connecting your first data source (upload a file or paste an API docs URL), applying governance settings, and having your first AI conversation. You'll have a fully governed, queryable data catalog within minutes.
Do I need a credit card to sign up?
No. The Free plan gives you 5 AI chats per day, 3 API crawls per month, up to 3 data sources, and 50 MB of file uploads — all without entering any payment details. Paid plans (Starter, Pro, Pro + BYOK) offer a 7-day free trial, which does require a card, but you can cancel before day 8 and pay nothing.
What happens during onboarding?
The onboarding walks you through three steps: (1) add a data source — either upload a CSV/Excel/JSON/Parquet file or paste an API documentation URL, (2) review and configure column-level governance (Allow, Pseudonymize, or Exclude) for any detected PII, and (3) ask the AI agent your first question about the data. The entire flow takes under two minutes.
How fast can I go from sign-up to my first insight?
Under two minutes. There's no server to provision, no database to configure, and no pipeline to build. The query engine runs in your browser, so the moment you upload a file or the crawler finishes reading API docs, your data is queryable. Ask the agent a question and you'll get a chart or table back in seconds.

Data & Privacy

What happens to my data?
Everything runs entirely in your browser — uploaded files never leave your machine. The AI agent only receives table schemas and aggregate statistics, never raw row data. PII columns are auto-detected and governed before any processing begins. Only metadata (source names, schemas, governance settings) is stored server-side.
Is this GDPR compliant?
Yes. PII is auto-detected on every column with three governance modes: Allow (full analysis), Pseudonymize (reversible masking), or Exclude (column dropped entirely). Files are processed client-side and never stored on our servers. All server-side infrastructure runs in the EU (europe-west1). Governance is applied from the moment data enters the system — no retrofitting required.
How does PII detection work?
Every column is scanned automatically when data is loaded. The detector recognizes emails, phone numbers, IBANs, national IDs, IP addresses, and other common PII patterns. Each flagged column can be set to Allow (full analysis), Pseudonymize (reversible masking that preserves analytical value), or Exclude (column is dropped before the agent ever sees it). You can override any classification before proceeding.
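In pseudocode terms, the detection step resembles a per-column pattern scan. The sketch below is purely illustrative (Python, with invented names and a much smaller pattern set than the real detector, which also covers IBANs, national IDs, and other formats):

```python
import re

# Illustrative patterns only -- the real detector covers many more
# PII types and uses additional heuristics beyond regex matching.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def detect_pii(column_values, threshold=0.5):
    """Flag a column with the PII type matching a majority of its values."""
    for pii_type, pattern in PII_PATTERNS.items():
        hits = sum(1 for v in column_values if pattern.search(str(v)))
        if column_values and hits / len(column_values) >= threshold:
            return pii_type
    return None
```

A flagged column would then be presented with its suggested classification, which you can override before proceeding.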
Where is the infrastructure hosted?
All server-side services run in Google Cloud's europe-west1 region (Belgium). API keys stored in the vault are encrypted with AES-256-GCM. Your actual data files are never uploaded to our servers — everything is processed locally in your browser.
Can the AI agent see my raw data?
No. The agent receives table schemas, column metadata, and governance-safe aggregates — never individual rows. Columns marked as Pseudonymize are masked before any agent interaction, and columns marked as Exclude are completely invisible. You control exactly what the agent can access on a per-column basis.
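Pseudonymize-style reversible masking can be pictured as keyed tokenization: the agent sees a stable token, while the original value stays in a private lookup table it can never reach. This is a sketch of the concept, not the product's implementation; the class and method names are invented:

```python
import hashlib
import hmac

class Pseudonymizer:
    """Sketch of reversible masking: values become keyed tokens, and the
    original is recoverable only via a private lookup table."""

    def __init__(self, secret: bytes):
        self._secret = secret
        self._reverse = {}  # token -> original, never exposed to the agent

    def mask(self, value: str) -> str:
        # Deterministic: the same value always maps to the same token,
        # which preserves joins and group-bys on the masked column.
        token = hmac.new(self._secret, value.encode(), hashlib.sha256).hexdigest()[:12]
        self._reverse[token] = value
        return token

    def unmask(self, token: str) -> str:
        return self._reverse[token]
```

Determinism is what keeps the masked column analytically useful: counts, joins, and distinct-value queries still work on the tokens.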

Sources & File Formats

Which file formats are supported?
CSV, Excel (.xlsx/.xls), JSON (including JSON Lines), Parquet, XML, PDF, and ZIP archives containing any of the above. Drop a file into the workspace and it becomes a queryable dataset instantly — with automatic type detection and schema mapping.
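Automatic type detection can be pictured as sampling each column and trying progressively looser types. A minimal sketch for the CSV case (function names invented, far simpler than the real schema mapper):

```python
import csv
import io

def infer_type(values):
    """Infer a simple column type from string samples (sketch only)."""
    def all_match(cast):
        try:
            for v in values:
                cast(v)
            return True
        except ValueError:
            return False
    if all_match(int):
        return "INTEGER"
    if all_match(float):
        return "DOUBLE"
    return "VARCHAR"

def infer_schema(csv_text):
    """Map each CSV header to an inferred type from its values."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    return {name: infer_type([r[i] for r in data]) for i, name in enumerate(header)}
```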
How does API crawling work?
Give the agent any API documentation URL. It uses Gemini for intelligent content extraction and Cloudflare Workers for rendering JavaScript-heavy doc sites. The agent reads OpenAPI/Swagger specs, custom HTML docs, and developer portals — then automatically discovers endpoints, parameters, authentication requirements, and pagination patterns. Up to 10 APIs can be crawled in parallel.
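For OpenAPI/Swagger specs, the endpoint-discovery step can be sketched as a walk over the spec's `paths` object. The snippet below is illustrative only; the real crawler also handles custom HTML docs, pagination patterns, and parameter extraction:

```python
import json

def discover_endpoints(openapi_json: str):
    """List (method, path, auth-required) entries from an OpenAPI spec.
    Sketch only: assumes every key under a path is an HTTP method."""
    spec = json.loads(openapi_json)
    endpoints = []
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            endpoints.append({
                "method": method.upper(),
                "path": path,
                # Auth is required if declared per-operation or globally.
                "auth": bool(op.get("security") or spec.get("security")),
            })
    return endpoints
```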
Can I crawl APIs that require authentication?
Yes. The API Key Vault lets you store credentials (API keys, bearer tokens, OAuth secrets) encrypted with AES-256-GCM. Keys are injected only at runtime during crawl requests and are never stored in the browser or returned in API responses. The crawler automatically detects auth requirements from the documentation and applies the correct credentials.
How many data sources can I connect?
It depends on your plan. Free allows 3 sources, Starter allows 15, and Pro / Pro + BYOK allow unlimited sources. Each source can be an uploaded file or a crawled API. All sources land in a unified data catalog with metadata, governance settings, and version history.

AI Agent & Analysis

What can the AI agent do?
The agent understands your entire data catalog — schemas, column descriptions, governance rules, and relationships between sources. You can ask it to find relevant endpoints, explain what a dataset contains, write and execute SQL queries, generate charts (bar, line, area, pie), compare data across sources, and export results. It works within your governance settings at all times.
Do I need to know SQL?
No. The agent translates plain-English questions into SQL and runs them for you. Ask something like "Show me the top 10 customers by revenue last quarter" and the agent writes the query, executes it in your browser, and returns a table or chart. If you do know SQL, you can use the full SQL workspace with autocomplete and syntax highlighting.
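To picture the flow, here is the kind of SQL the agent might generate for a question like that, run here against sqlite3 as a stand-in for the in-browser engine (the table and data are invented for illustration):

```python
import sqlite3

# Hypothetical data; sqlite3 stands in for the in-browser query engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, revenue REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("Acme", 1200.0), ("Globex", 800.0), ("Acme", 300.0)])

# The kind of query the agent might write for
# "Show me the top 10 customers by revenue":
sql = """
    SELECT customer, SUM(revenue) AS total_revenue
    FROM orders
    GROUP BY customer
    ORDER BY total_revenue DESC
    LIMIT 10
"""
rows = conn.execute(sql).fetchall()
```

The agent returns the result as a table or chart; the SQL itself stays visible, so you can inspect or edit it in the workspace if you want.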
Can the agent generate charts?
Yes. The agent can produce bar charts, line charts, area charts, and pie charts directly from query results. You can switch between visualizations without re-running the SQL. Charts and their underlying data can be exported as CSV, Parquet, or JSON.
What is the SQL workspace?
A full SQL editor that runs entirely in your browser. It includes autocomplete for table and column names, syntax highlighting, and sub-second execution for joins across multiple sources. You can write standard SQL, create derived tables, and export results to CSV, SQL, Parquet, or dbt model stubs (Pro plans).

Pricing & Billing

What plans are available?
Free (€0): 5 chats/day, 3 sources, 50 MB upload, 1 project — no credit card needed. Starter (€15/mo or €150/year): 100 chats/day, 15 sources, 1 GB upload, 2 projects, CSV + SQL export. Pro (€24/mo): 200 chats/day, 100 crawls/month, unlimited sources, uploads & projects, MCP access, and scheduled re-crawls. Pro + BYOK (€12/mo): everything unlimited with your own AI key. Paid plans include a 7-day free trial.
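For reference, the per-plan limits above can be expressed as a small lookup table. The shape below is purely illustrative (the keys and the None-means-unlimited convention are our own, not the product's configuration format):

```python
# Plan limits as listed above; None means unlimited.
# Illustrative only -- not the product's actual configuration.
PLAN_LIMITS = {
    "free":    {"chats_per_day": 5,    "sources": 3},
    "starter": {"chats_per_day": 100,  "sources": 15},
    "pro":     {"chats_per_day": 200,  "sources": None},
    "byok":    {"chats_per_day": None, "sources": None},
}

def within_limit(plan: str, metric: str, current_usage: int) -> bool:
    """True if one more unit of the metric fits under the plan's limit."""
    limit = PLAN_LIMITS[plan][metric]
    return limit is None or current_usage < limit
```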
Can I use my own AI model (BYOK)?
Yes. With Pro + BYOK (€12/mo) you connect your own OpenAI, Anthropic, or Google AI key. All agent calls use your key directly — we never see your usage. You get unlimited chats, crawls, sources, uploads, and projects at 50% off the standard Pro price. You can switch between Pro and Pro + BYOK anytime.
How does the free trial work?
Choose any paid plan (Starter, Pro, or Pro + BYOK) and enter your payment details. You get full access for 7 days with zero charge. Cancel anytime before the trial ends and you pay nothing. If you continue, billing starts on day 8 — monthly or annually depending on your choice.
Can I switch plans or cancel anytime?
Yes. You can upgrade, downgrade, or cancel from your account settings at any time. Upgrades take effect immediately; downgrades apply at the next billing cycle. After cancellation your catalog metadata and settings are retained for 30 days, so you can reactivate without losing your setup (your data files themselves never leave your machine).

Still have questions?

Try Harbinger Explorer for free or see how we compare.