The Database Query Tool That Lives in Your Browser
Here's the scenario: you need to quickly run a SQL query against a production data source. Not a full analysis — just a sanity check. You want to see whether that orders table has data for the last 48 hours before you tell your stakeholder everything looks fine.
What actually happens? You open pgAdmin and wait for it to load. It asks for your password. You've forgotten which of the three saved connections is the right one. You try the first one — connection timeout. You fire up DBeaver, configure the JDBC driver it's been nagging you to update, wait for the schema tree to expand, and eventually run your five-line query. The answer is "yes, data is there." You close the tool and get back to your actual work.
That was 12 minutes for a query that took 0.4 seconds to execute.
The problem isn't that database query tools are bad. pgAdmin, DBeaver, and DataGrip are mature, capable products. The problem is that they're heavy — optimized for power users doing complex, sustained work, not for the quick exploratory queries that dominate most analysts' and engineers' days.
What if your query tool lived in the browser, required no installation, and was ready in seconds?
Why Desktop Database Clients Create Friction at Every Step
Before talking about alternatives, it's worth being specific about where desktop clients create friction. This isn't about dismissing tools that millions of engineers rely on — it's about understanding when they become obstacles.
Installation and driver management. Every desktop database client needs to be installed, configured, and periodically updated. JDBC drivers, native connectors, extension packs — the dependency chain is real, and it breaks in surprising ways when your OS gets a major update or your DBA rotates the SSL certificates.
Connection management across environments. Most teams have at least three environments: development, staging, and production. Each needs a separate connection profile. Connection strings get updated when infrastructure changes. Someone on the team saves connection profiles with production passwords in plaintext. The credential management story for desktop SQL clients is... not great.
SSH tunnels and VPN requirements. For databases that live inside a private network (which is most production databases), you typically need to be on the VPN or set up an SSH tunnel before you can connect from a desktop client. This is the right security posture — but it adds three to five minutes of setup friction before your first query, every single time.
No collaboration. If you write a useful query in DBeaver, sharing it with a colleague means copying the SQL into Slack or a shared doc. There's no "send this query" button. There's no shared query history. Each analyst maintains their own local collection of .sql files in a folder that's probably not version-controlled.
Onboarding overhead. When a new analyst joins, getting them from zero to their first query involves: installing the client, configuring the database driver, setting up connection profiles, getting VPN access, and understanding which schemas are safe to query in which environments. On a good day, this takes half a day. On a bad day, it takes a week.
None of these are fatal flaws. Experienced teams build workflows around them. But every workaround has a cost — and for quick exploratory work, that cost often exceeds the value of the query itself.
The Alternatives: Good in Theory, Complicated in Practice
The natural response to desktop client friction is to look for browser-based alternatives. Several exist:
Cloud provider query consoles (AWS Athena, BigQuery UI, Azure Synapse Studio): Excellent for their native data stores. Completely siloed — you can't use the BigQuery UI to query your Postgres database or your REST API endpoints. These tools solve a narrow problem well but don't generalize.
Jupyter Notebooks: An analyst's best friend for sustained analytical work. But Jupyter is not a query tool — it's an analysis environment. Spinning up a notebook to run a five-line sanity check query is like driving a truck to pick up a coffee. The overhead doesn't match the task.
Metabase, Redash, or Superset: Business intelligence tools that include a query interface. Great if you need dashboards and sharing features, but they require server-side infrastructure (self-hosted or managed) and aren't designed for individual exploratory querying against arbitrary data sources.
Raw psql/mysql CLI: Technically browser-independent, but not a browser. It requires the same SSH/VPN infrastructure, and not every analyst is comfortable in a terminal, nor should they have to be.
Online SQL fiddles (SQLFiddle, DB-Fiddle): Useful for testing SQL syntax against toy datasets. Not connected to your real data. Not useful for actual work.
The gap is real: there's no mainstream tool that lets you point at a real data source and run a SQL query against it, entirely in the browser, without a server-side deployment or a VPN tunnel. At least, there wasn't until recently.
Browser-Based SQL Querying with DuckDB: The Better Approach
DuckDB is an in-process analytical SQL database engine. It's compact, fast, and — critically — it can run entirely in the browser via WebAssembly. This means a web application can execute real SQL queries against real data without ever sending that data to a server. The query runs locally, in your browser tab.
The implications are significant:
- No installation required — you open a URL, the engine is already there
- No connection strings, no VPN, no SSH tunnels for web-accessible sources
- Full SQL support: WHERE, GROUP BY, JOIN, window functions, CTEs — the works (see the sketch after this list)
- Data stays local — queries run in your browser, not on a third-party server
- Startup time measured in seconds, not minutes
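To make that concrete, here's a sketch of the kind of analytical query that runs entirely client-side. The orders table and its columns are stand-ins for whatever source you've cataloged:

```sql
-- A sketch, assuming a cataloged source exposed as a table named "orders"
-- with columns region, amount, and created_at.
WITH daily AS (
    SELECT
        region,
        date_trunc('day', created_at) AS day,
        SUM(amount) AS revenue
    FROM orders
    GROUP BY ALL  -- DuckDB shorthand: group by every non-aggregated column
)
SELECT
    region,
    day,
    revenue,
    -- running total per region, computed in your browser tab
    SUM(revenue) OVER (PARTITION BY region ORDER BY day) AS running_revenue
FROM daily
ORDER BY region, day;
```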
This is the architecture that makes a genuinely browser-based database query tool possible.
Harbinger Explorer: Query Your Data Sources Directly in the Browser
Harbinger Explorer is built on this principle. It combines an AI Crawler for automatic source discovery with a DuckDB SQL query interface — all running in the browser, no installation required.
Here's what the actual experience looks like:
You open Harbinger Explorer and add a data source — a URL pointing to a JSON API, a CSV endpoint, a public dataset, or any web-accessible data resource. The AI Crawler maps the source's structure: column names, data types, example values. Within seconds, you have a schema overview in front of you.
Then you switch to the SQL editor and write your query. Standard SQL — SELECT, FROM, WHERE, JOIN, aggregations, window functions. Harbinger Explorer executes it via DuckDB, and results come back in the query panel. You can export results as CSV or copy them directly.
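The 48-hour sanity check from the opening scenario, for example, collapses to a single query (the table and column names here are illustrative):

```sql
-- Does the orders table have data for the last 48 hours?
SELECT
    COUNT(*)        AS rows_last_48h,
    MAX(created_at) AS most_recent
FROM orders
WHERE created_at >= now() - INTERVAL 48 HOURS;
```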
No pgAdmin. No DBeaver. No SSH tunnel. No VPN. Just a browser tab and a URL.
The catalog layer makes this even more powerful over time. Every source you add is stored in your personal catalog, searchable by column name or data type. When you come back a week later, the schema is already there — you just open the SQL editor and query. The discovery overhead only happens once.
Try it yourself — Start exploring for free. No credit card. 8 demo data sources ready to query.
How to Start Querying in Your Browser: A Step-by-Step Walkthrough
Getting from zero to your first query in Harbinger Explorer takes about five minutes.
Step 1: Create your account. Go to harbingerexplorer.com/register. No credit card, no setup wizard, no IT ticket required. Your account is active immediately, and 8 demo data sources are pre-loaded so you can start querying right away.
Step 2: Add a data source. Click "New Source" and paste in the URL of any web-accessible data endpoint — a public API, a hosted CSV, a JSON feed. Name it something memorable. The AI Crawler runs automatically.
Step 3: Review the schema. Harbinger Explorer displays the crawled schema: column names, inferred types, and any flagged fields (PII, sensitive data). You see the structure of your source before writing a single line of SQL.
Step 4: Open the SQL editor. From the source detail page, click "Query" to open the DuckDB SQL editor. Your table is already in scope — you don't need to configure a connection or reference a schema manually.
Step 5: Write and run your query. Type your SQL — whether it's a simple SELECT * FROM source LIMIT 10 or a multi-step aggregation. Hit run. Results appear in the panel below. Export or copy as needed.
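A minimal sketch of both, assuming the crawled source is exposed as a table named source with status and amount columns:

```sql
-- First look at the data, then a quick aggregation.
-- "source", "status", and "amount" are assumed names.
SELECT * FROM source LIMIT 10;

SELECT
    status,
    COUNT(*)    AS n,
    AVG(amount) AS avg_amount
FROM source
GROUP BY status
ORDER BY n DESC;
```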
Step 6: Save and share. Queries can be saved to your catalog entry, making it easy to rerun them or share the SQL with a colleague. No .sql file wrangling, no Slack pasting.
Power Features: Beyond Basic SELECT
For users who want to do more than quick checks, Harbinger Explorer supports several advanced workflows.
Cross-source JOINs. Because multiple sources are cataloged in the same system, you can JOIN across them in a single DuckDB query. Combine data from a public API endpoint with a CSV you've cataloged — the SQL is standard, and DuckDB handles the execution entirely in-browser.
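A sketch of what that looks like, with api_users and signups_csv standing in for two cataloged sources and user_id as an assumed join key:

```sql
-- Hypothetical sources: "api_users" (a JSON API) and "signups_csv"
-- (a cataloged CSV), joined on an assumed "user_id" key.
SELECT
    u.user_id,
    u.country,
    s.signup_date
FROM api_users AS u
JOIN signups_csv AS s ON u.user_id = s.user_id
WHERE s.signup_date >= DATE '2024-01-01';
```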
Aggregations and window functions. DuckDB supports the full analytical SQL dialect including OVER(), PARTITION BY, RANK(), LAG(), and standard aggregate functions. If you've been writing this SQL in BigQuery or Snowflake, it will work the same way here.
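For example (the monthly_revenue table and its columns are hypothetical):

```sql
-- Assumed table "monthly_revenue" with region, month, revenue columns.
SELECT
    region,
    month,
    revenue,
    -- rank regions within each month by revenue
    RANK() OVER (PARTITION BY month ORDER BY revenue DESC) AS revenue_rank,
    -- month-over-month change per region
    revenue - LAG(revenue) OVER (PARTITION BY region ORDER BY month) AS delta_vs_prev
FROM monthly_revenue;
```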
PII Detection. The AI Crawler automatically flags columns likely to contain personally identifiable information. This is especially useful when you're querying unfamiliar data sources — the tool gives you an immediate heads-up before you accidentally include user_email in an export that was supposed to be anonymized.
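DuckDB's EXCLUDE clause pairs nicely with those flags. A sketch, reusing the user_email example from above:

```sql
-- Drop the flagged column before exporting; "user_email" is the
-- PII-flagged field from the example above.
SELECT * EXCLUDE (user_email)
FROM source;
```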
Column Mapping. When you're working across multiple sources, Column Mapping surfaces shared column names and structural similarities. Useful for discovering implicit relationships before writing a JOIN that might not behave as expected.
Recrawling (Pro). Data sources evolve. A recrawl picks up new columns, deprecations, or schema changes without you having to re-add the source manually. Pro plan users can set up scheduled recrawls so the catalog stays current automatically.
Comparison: Desktop Query Tools vs. Browser-Based Querying
| Factor | pgAdmin / DBeaver | Harbinger Explorer |
|---|---|---|
| Installation required | Yes | No |
| VPN/SSH tunnel needed | Usually | Not for web-accessible sources |
| Time to first query | 5–15 min | Under 5 min |
| PII auto-detection | No | Yes |
| Cross-source SQL JOINs | Via multiple connections | Native, single query |
| Schema auto-discovery | Manual or driver-dependent | AI Crawler |
| Collaboration (query sharing) | File-based, manual | Catalog-native |
| Pricing | Free (desktop clients) | From €8/month |
| Runs in browser | No | Yes |
One note on the pricing row: the desktop clients are free, and if your workflow already works well with them, there's no reason to switch everything. Harbinger Explorer is most valuable for quick exploratory queries against web-accessible sources, for teams that want schema discovery without setup, and for analysts who want a catalog and a query tool in the same interface.
Pricing: Starter at €8/month (25 chats/day, 10 crawls/month) or Pro at €24/month (200 chats/day, 100 crawls/month, recrawling, priority support). See pricing →
Free 7-day trial, no credit card required. Start free →
Why DuckDB Specifically? The Technical Case
It's worth spending a moment on why DuckDB — rather than SQLite, a remote Postgres proxy, or a custom query engine — is the right foundation for a browser-based query tool.
DuckDB is built for analytics, not transactions. Traditional embedded databases like SQLite are optimized for transactional workloads (individual row inserts and updates). DuckDB is built for the analytical workload that data exploration actually represents: scanning many rows at once, filtering, grouping, aggregating. On analytical queries, DuckDB routinely outperforms SQLite by an order of magnitude.
DuckDB runs natively in the browser via WebAssembly. WebAssembly (WASM) lets compiled C++ code run at near-native speed inside a browser sandbox. DuckDB's WebAssembly build is the same query engine used in production analytics systems — just compiled to run client-side. There's no "lite" version or reduced functionality. You get the full engine.
DuckDB reads multiple formats natively. JSON, CSV, Parquet, and more — DuckDB can query these formats directly without requiring an import step. When the AI Crawler fetches a JSON API response, DuckDB can query against the resulting structure immediately. No ETL, no schema-on-write.
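To illustrate the underlying engine capability (the URLs below are placeholders; in native DuckDB, remote reads go through the httpfs extension, while the WASM build fetches over HTTP, and Harbinger Explorer's crawler handles fetching for you):

```sql
-- DuckDB queries remote JSON and CSV directly, with no import step.
SELECT *
FROM read_json_auto('https://example.com/api/orders.json')
LIMIT 10;

SELECT COUNT(*)
FROM read_csv_auto('https://example.com/exports/customers.csv');
```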
The SQL dialect is familiar. PostgreSQL users, BigQuery users, Snowflake users — DuckDB's SQL is close enough to all of these that the learning curve is minimal. The analytical functions you already know (OVER, PARTITION BY, RANK, LAG, LEAD, NTILE) work exactly as expected.
This combination — analytical performance, browser-native execution, multi-format support, and standard SQL — is why DuckDB is the right engine for a tool like Harbinger Explorer. It's not a compromise or a simplified version of "real" SQL; it's a production-grade engine that happens to run where you need it to.
When to Use Harbinger Explorer vs. a Desktop Client
Harbinger Explorer is not trying to replace every use case for pgAdmin or DBeaver. There are workflows where a desktop client remains the right tool — specifically, when you're doing sustained schema design work on a private database that requires a persistent connection.
Where Harbinger Explorer wins:
- Quick exploratory queries on web-accessible sources where setup overhead kills momentum
- Schema discovery on unfamiliar data sources where you need structure before writing SQL
- Cross-source analysis where you want to JOIN data from multiple web endpoints in a single query
- Onboarding workflows where a new analyst needs to understand what data exists without a two-day setup process
- Governance reviews where you need to surface PII-flagged fields across your catalog quickly
Where a desktop client remains appropriate:
- Private database management requiring persistent VPN/SSH sessions
- Database administration tasks (index creation, user management, schema DDL)
- Extremely large result sets that benefit from a native client's streaming capabilities
The honest answer is that most teams will use both — Harbinger Explorer for discovery and quick queries, a desktop client for the heavy administration work that still requires it.
FAQ: Browser-Based Database Querying
Does Harbinger Explorer support private databases like PostgreSQL or MySQL? Currently, Harbinger Explorer is optimized for web-accessible data sources reachable via URL — REST APIs, JSON/CSV endpoints, and public datasets. Private database connectivity (with VPN/SSH tunnel equivalent) is on the roadmap. If that's your primary use case, the current product is best used for public or semi-public data sources.
Is my data sent to Harbinger's servers when I query? No. Queries run client-side via DuckDB WebAssembly. Harbinger Explorer stores schema metadata and query history, but source data is not transmitted to or stored on Harbinger's servers. Your query results appear in your browser and stay there until you export them.
How does DuckDB SQL compare to standard SQL? DuckDB supports the ANSI SQL standard with extensions for analytics. If you're used to PostgreSQL, BigQuery, or Snowflake SQL, you'll find the syntax familiar. DuckDB-specific features (like SUMMARIZE, list comprehensions, and the PIVOT operator) are available but not required.
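For a taste of those extras, assuming a cataloged orders table:

```sql
-- One-line profile of every column (min, max, null %, approx uniques).
SUMMARIZE orders;

-- Reshape rows into columns with the PIVOT operator.
PIVOT orders ON status USING COUNT(*) GROUP BY region;
```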
Can I share my catalog with teammates? Catalog sharing and team workspaces are features in active development. Currently, catalogs are per-account. The Pro plan includes priority support if you want to discuss team use cases with the Harbinger team directly.
What data formats does the AI Crawler support? The crawler handles JSON and CSV sources natively, as well as REST API endpoints that return structured data. Support for Parquet, XML, and additional formats is being expanded.
What's the Starter plan limit for queries? The Starter plan includes 25 chat/query interactions per day and 10 crawls per month, available from €8/month. If you're running more intensive analytical workflows, the Pro plan at €24/month offers 200 interactions per day and 100 crawls per month.
Conclusion: The Fastest Path from Question to Answer
The best query tool is the one that gets out of the way fastest. For quick exploratory queries — the kind that represent the majority of day-to-day data work — the overhead of a desktop client is a constant tax on your productivity.
Harbinger Explorer is designed to minimize that tax. Open the browser, add a source, query it. No installation, no VPN, no driver management. The AI Crawler handles schema discovery so you're not manually documenting what you're looking at. DuckDB handles the SQL so you're not limited to a point-and-click filter interface.
If you work with web-accessible data and you're tired of the setup overhead that precedes every simple query, try the browser-first approach. It takes five minutes to find out whether it fits your workflow.
Ready to skip the setup and start exploring? Try Harbinger Explorer free →
Continue Reading
API Data Quality Check Tool: Automatic Profiling for Every Response
API data quality breaks silently. Harbinger Explorer profiles every response automatically — null rates, schema changes, PII detection — before bad data reaches your dashboards.
API Documentation Search Is Broken — Here's How to Fix It
API docs are scattered, inconsistent, and huge. Harbinger Explorer's AI Crawler reads them for you and extracts every endpoint automatically in seconds.
API Endpoint Discovery: Stop Mapping by Hand. Let AI Do It in 10 Seconds.
Manually mapping API endpoints from docs takes hours. Harbinger Explorer's AI Crawler does it in 10 seconds — structured, queryable, always current.
Try Harbinger Explorer for free
Connect any API, upload files, and explore with AI — all in your browser. No credit card required.
Start Free Trial