Snowflake Cost Optimization: A Practical Guide

10 min read · Tags: snowflake, cost optimization, data warehousing, cloud costs, sql, finops, performance tuning

Snowflake's consumption-based pricing model is elegant in theory — you pay for what you use. In practice, most data teams discover their Snowflake bill climbing 30-50% quarter over quarter with no clear explanation. Snowflake cost optimization isn't about cutting corners; it's about understanding where compute and storage dollars actually go and eliminating the waste that accumulates silently.

This tutorial walks through the most impactful optimization strategies, complete with SQL you can run today against your SNOWFLAKE.ACCOUNT_USAGE schema.

TL;DR — The 80/20 of Snowflake Costs

| Cost Driver | Typical Share | Optimization Lever | Expected Savings |
| --- | --- | --- | --- |
| Warehouse compute | 60-75% | Right-sizing + auto-suspend | 20-40% |
| Query inefficiency | 15-25% | Query profiling + clustering | 10-30% |
| Storage | 5-15% | Time Travel + transient tables | 5-15% |
| Data transfer | 2-5% | Region alignment + caching | Variable |
| Serverless features | Variable | Monitoring + limits | 10-20% |

Compute is where the money goes. Start there.

Step 1: Understand Your Spend with Account Usage

Before optimizing anything, you need visibility. Snowflake provides the ACCOUNT_USAGE schema in the shared SNOWFLAKE database — a goldmine for cost analysis. Note that most ACCOUNT_USAGE views lag real time by 45 minutes to 3 hours, so don't expect the current hour's activity to appear immediately.

Find Your Most Expensive Warehouses

-- Snowflake SQL
-- Credit consumption by warehouse (last 30 days)
SELECT
    warehouse_name,
    ROUND(SUM(credits_used), 2) AS total_credits,
    ROUND(SUM(credits_used) * 3.00, 2) AS estimated_cost_usd,
    COUNT(DISTINCT DATE_TRUNC('day', start_time)) AS active_days,
    ROUND(AVG(credits_used_compute), 4) AS avg_credits_per_interval
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY total_credits DESC;

Note: $3.00/credit is the standard on-demand rate for Enterprise edition. Your contracted rate may differ. Last verified: April 2026. [PRICING-CHECK]

This single query often reveals that 2-3 warehouses account for 80%+ of spend. Focus your optimization there.
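To quantify that concentration, here is a cumulative-share variant of the same metering query (a sketch; Snowflake permits window functions over aggregates, which is what the nested `SUM(SUM(...)) OVER ()` relies on):

```sql
-- Snowflake SQL
-- Cumulative share of credit spend by warehouse (last 30 days)
SELECT
    warehouse_name,
    ROUND(SUM(credits_used), 2) AS credits,
    ROUND(100 * SUM(credits_used) / SUM(SUM(credits_used)) OVER (), 1) AS pct_of_total,
    ROUND(100 * SUM(SUM(credits_used)) OVER (ORDER BY SUM(credits_used) DESC)
              / SUM(SUM(credits_used)) OVER (), 1) AS cumulative_pct
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY credits DESC;
```

The first row where cumulative_pct crosses 80 marks the end of your short list of warehouses worth optimizing.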

Find Idle Warehouse Time

-- Snowflake SQL
-- Warehouses running but doing nothing (last 7 days)
SELECT
    warehouse_name,
    DATE_TRUNC('hour', start_time) AS hour_slot,
    SUM(credits_used) AS credits,
    SUM(credits_used_compute) AS compute_credits,
    SUM(credits_used_cloud_services) AS cloud_credits,
    CASE
        WHEN SUM(credits_used_compute) = 0 AND SUM(credits_used) > 0
        THEN 'IDLE_BURNING'
        ELSE 'ACTIVE'
    END AS status
FROM snowflake.account_usage.warehouse_metering_history
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY warehouse_name, hour_slot
HAVING SUM(credits_used_compute) = 0 AND SUM(credits_used) > 0
ORDER BY credits DESC;

If you see warehouses in IDLE_BURNING status regularly, your auto-suspend settings need attention.

Step 2: Right-Size Your Warehouses

The most common mistake: picking an X-Large warehouse "just in case" and never revisiting. Warehouse cost doubles with each size step, so an X-Large burns 8x the credits of a Small for every second it runs.

| Warehouse Size | Credits/Hour | Best For |
| --- | --- | --- |
| X-Small | 1 | Simple queries, BI dashboards, dev/test |
| Small | 2 | Standard analytics workloads |
| Medium | 4 | Complex transformations, moderate data volumes |
| Large | 8 | Heavy ETL, large joins |
| X-Large+ | 16+ | Massive data processing (question this!) |

The Right-Sizing Process

  1. Profile your workloads — Run the query above to identify top-spending warehouses
  2. Check queue times — If queries aren't queuing, your warehouse is probably oversized
  3. Test one size down — Drop the warehouse size and measure query duration impact
  4. Accept reasonable trade-offs — a query taking 45s instead of 30s on a warehouse half the size still costs 25% less, since the per-second rate drops by half
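Step 3 of the process is a one-line change (the warehouse name and target size here are illustrative):

```sql
-- Snowflake SQL
-- Drop one size and compare query durations over the next few days
ALTER WAREHOUSE ANALYTICS_WH SET WAREHOUSE_SIZE = 'MEDIUM';
```

The change takes effect for new queries; running queries finish on the old size.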
-- Snowflake SQL
-- Check if queries are queuing (indicates undersized warehouse)
SELECT
    warehouse_name,
    COUNT(*) AS total_queries,
    AVG(queued_overload_time) / 1000 AS avg_queue_seconds,
    MAX(queued_overload_time) / 1000 AS max_queue_seconds,
    COUNT_IF(queued_overload_time > 0) AS queued_query_count
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
    AND warehouse_name IS NOT NULL
GROUP BY warehouse_name
ORDER BY avg_queue_seconds DESC;

Rule of thumb: If avg_queue_seconds is 0 and queued_query_count is 0, you're likely oversized. If avg_queue_seconds exceeds 10, you might need to scale up or use multi-cluster warehouses.
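If queueing rather than raw query speed is the problem, a multi-cluster warehouse (Enterprise edition and above) often beats scaling up, because extra clusters spin up only under concurrent load. A sketch with an illustrative warehouse name:

```sql
-- Snowflake SQL
-- Scale out under concurrency instead of scaling up
ALTER WAREHOUSE BI_WH SET
    MIN_CLUSTER_COUNT = 1
    MAX_CLUSTER_COUNT = 3
    SCALING_POLICY = 'STANDARD';
```

With MIN_CLUSTER_COUNT = 1, you pay for one cluster during quiet periods and up to three only while queries are actually queuing.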

Step 3: Auto-Suspend and Auto-Resume Settings

This is the single highest-ROI optimization for most teams. The default auto-suspend is 10 minutes — that's up to 10 minutes of credits burned after the final query in every session.

Recommended Settings by Workload

| Workload Type | Auto-Suspend | Auto-Resume | Why |
| --- | --- | --- | --- |
| Ad-hoc / BI dashboards | 60 seconds | Yes | Short bursts, long idle gaps |
| Scheduled ETL (e.g., dbt) | 60 seconds | Yes | Runs at fixed times, idle between |
| Interactive dev/testing | 300 seconds (5 min) | Yes | Users pause between queries |
| Continuous streaming | Never suspend | N/A | Constant load, suspension wastes startup time |
-- Snowflake SQL
-- Set aggressive auto-suspend for an ETL warehouse
ALTER WAREHOUSE ETL_WH SET
    AUTO_SUSPEND = 60
    AUTO_RESUME = TRUE;

-- Check current settings for all warehouses
SHOW WAREHOUSES;

The 60-second rule: Unless you have a specific reason for longer, set auto-suspend to 60 seconds. Snowflake charges per-second (minimum 60s), so a 60-second auto-suspend means you pay at most one extra minute after your last query.
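To audit settings across the account, you can filter the SHOW WAREHOUSES output through RESULT_SCAN (a sketch; SHOW output column names come back lowercase and must be double-quoted):

```sql
-- Snowflake SQL
SHOW WAREHOUSES;

-- Flag warehouses that violate the 60-second rule
-- (a NULL auto_suspend means the warehouse never suspends)
SELECT "name", "size", "auto_suspend"
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()))
WHERE "auto_suspend" IS NULL OR "auto_suspend" > 60;
```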

Step 4: Query Optimization — Stop Paying for Bad SQL

Compute costs are directly proportional to query runtime × warehouse size. Faster queries = lower bills.

Find Your Most Expensive Queries

-- Snowflake SQL
-- Top 20 most expensive queries (last 14 days)
SELECT
    query_id,
    query_text,
    warehouse_name,
    user_name,
    ROUND(total_elapsed_time / 1000, 2) AS duration_seconds,
    ROUND(credits_used_cloud_services, 4) AS cloud_credits,
    bytes_scanned / POWER(1024, 3) AS gb_scanned,
    partitions_scanned,
    partitions_total,
    ROUND(partitions_scanned / NULLIF(partitions_total, 0) * 100, 1) AS pct_partitions_scanned
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -14, CURRENT_TIMESTAMP())
    AND total_elapsed_time > 0
    AND warehouse_name IS NOT NULL
ORDER BY total_elapsed_time DESC
LIMIT 20;

The pct_partitions_scanned column is key. If a query scans 90%+ of partitions on a large table, it's likely missing a clustering key or filter.

Common Query Anti-Patterns and Fixes

SELECT * on large tables — Only select the columns you need. Snowflake's columnar storage reads only the columns you reference, so trimming the select list directly reduces bytes scanned.

Missing filters on clustered columns — If a table is clustered on event_date, always filter by date range. Without the filter, Snowflake scans every micro-partition.

Repeated subqueries — Use CTEs or temporary tables instead of running the same subquery multiple times.

-- Snowflake SQL
-- BAD: Scanning everything, selecting everything
SELECT * FROM events WHERE category = 'purchase';

-- BETTER: Filter on cluster key, select only needed columns
SELECT event_id, user_id, amount, event_timestamp
FROM events
WHERE event_date >= DATEADD('day', -7, CURRENT_DATE())
    AND category = 'purchase';

Step 5: Clustering Keys — Prune Partitions, Save Credits

Clustering keys tell Snowflake how to organize micro-partitions. Well-chosen keys dramatically reduce the data scanned per query.

-- Snowflake SQL
-- List tables that have a clustering key defined
SELECT table_catalog, table_schema, table_name, clustering_key
FROM snowflake.account_usage.tables
WHERE clustering_key IS NOT NULL
    AND deleted IS NULL;

-- Inspect clustering health for one table (name is illustrative);
-- returns JSON including average_overlaps and average_depth (lower is better)
SELECT SYSTEM$CLUSTERING_INFORMATION('analytics.events');

When to cluster:

  • Tables larger than 1 TB (smaller tables rarely benefit)
  • Queries consistently filter on the same 1-3 columns
  • pct_partitions_scanned is consistently high despite filters

When NOT to cluster:

  • Tables under 100 GB
  • Highly random access patterns
  • Tables that are mostly appended to and rarely queried

Be aware: automatic clustering is a serverless feature that consumes credits. Monitor it.
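Automatic clustering spend is recorded per table in AUTOMATIC_CLUSTERING_HISTORY, so you can weigh maintenance cost against query savings:

```sql
-- Snowflake SQL
-- Credits consumed by automatic clustering, per table (last 30 days)
SELECT
    database_name,
    schema_name,
    table_name,
    ROUND(SUM(credits_used), 2) AS clustering_credits
FROM snowflake.account_usage.automatic_clustering_history
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY database_name, schema_name, table_name
ORDER BY clustering_credits DESC;
```

If a table near the top of this list isn't also near the top of your query-savings list, drop its clustering key.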

Step 6: Storage Optimization

Storage is cheaper than compute, but it adds up — especially with Snowflake's default Time Travel and Fail-safe settings.

| Feature | Retention | Cost Impact | Can You Reduce It? |
| --- | --- | --- | --- |
| Time Travel | 1 day (default), up to 90 days | Stores changed data for the retention period | Yes — reduce to 0-1 day for staging tables |
| Fail-safe | 7 days (permanent tables) | Non-negotiable for permanent tables | Use transient tables to skip Fail-safe |
| Clones | Zero-copy until data changes | Free initially, diverges over time | Drop unused clones |
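The Time Travel lever in the table above is a single parameter change on existing tables (the table name here is illustrative):

```sql
-- Snowflake SQL
-- Shrink Time Travel retention on a staging table to one day
ALTER TABLE staging.raw_events SET DATA_RETENTION_TIME_IN_DAYS = 1;
```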

Use Transient Tables for Staging

-- Snowflake SQL
-- Transient tables: no Fail-safe, Time Travel up to 1 day
CREATE TRANSIENT TABLE staging.raw_events (
    event_id STRING,
    payload VARIANT,
    loaded_at TIMESTAMP_NTZ DEFAULT CURRENT_TIMESTAMP()
)
DATA_RETENTION_TIME_IN_DAYS = 0;

Transient tables skip the 7-day Fail-safe period entirely. For staging data that gets reloaded anyway, this eliminates unnecessary storage costs.

Find Storage Waste

-- Snowflake SQL
-- Tables consuming the most storage (including Time Travel + Fail-safe)
SELECT
    table_catalog AS database_name,
    table_schema AS schema_name,
    table_name,
    ROUND(active_bytes / POWER(1024, 3), 2) AS active_gb,
    ROUND(time_travel_bytes / POWER(1024, 3), 2) AS time_travel_gb,
    ROUND(failsafe_bytes / POWER(1024, 3), 2) AS failsafe_gb,
    ROUND((active_bytes + time_travel_bytes + failsafe_bytes) / POWER(1024, 3), 2) AS total_gb
FROM snowflake.account_usage.table_storage_metrics
WHERE deleted = FALSE
ORDER BY total_gb DESC
LIMIT 20;

Step 7: Resource Monitors — Set Guardrails

Never run a Snowflake account without resource monitors. They're free to create and prevent runaway costs.

-- Snowflake SQL
-- Create a monthly resource monitor with alerts and hard stop
CREATE RESOURCE MONITOR monthly_budget
    WITH CREDIT_QUOTA = 5000
    FREQUENCY = MONTHLY
    START_TIMESTAMP = IMMEDIATELY
    TRIGGERS
        ON 75 PERCENT DO NOTIFY
        ON 90 PERCENT DO NOTIFY
        ON 100 PERCENT DO SUSPEND
        ON 110 PERCENT DO SUSPEND_IMMEDIATE;

-- Assign to a warehouse
ALTER WAREHOUSE ANALYTICS_WH SET RESOURCE_MONITOR = monthly_budget;

Best practice: Create separate resource monitors per warehouse or team. A global monitor catches catastrophes, but per-warehouse monitors give you granularity.
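A per-team monitor following the same pattern might look like this (quota, monitor, and warehouse names are illustrative):

```sql
-- Snowflake SQL
-- Tighter monitor scoped to one team's warehouse
CREATE RESOURCE MONITOR etl_team_budget
    WITH CREDIT_QUOTA = 1500
    FREQUENCY = MONTHLY
    START_TIMESTAMP = IMMEDIATELY
    TRIGGERS
        ON 90 PERCENT DO NOTIFY
        ON 100 PERCENT DO SUSPEND;

ALTER WAREHOUSE ETL_WH SET RESOURCE_MONITOR = etl_team_budget;
```

Creating resource monitors requires the ACCOUNTADMIN role (or explicitly granted monitor privileges).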

Step 8: Serverless Feature Monitoring

Snowflake's serverless features (automatic clustering, materialized views, search optimization, Snowpipe) consume credits silently. They don't show up in warehouse metering.

-- Snowflake SQL
-- Monitor serverless credit consumption by service type (last 30 days)
SELECT
    service_type,
    ROUND(SUM(credits_used), 2) AS total_credits,
    ROUND(SUM(credits_used) * 3.00, 2) AS estimated_cost_usd
FROM snowflake.account_usage.metering_history
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
    AND service_type <> 'WAREHOUSE_METERING'
GROUP BY service_type
ORDER BY total_credits DESC;

If AUTO_CLUSTERING is your top serverless cost, consider whether all clustered tables actually benefit from clustering. Remove clustering keys from tables where the query patterns don't justify the maintenance overhead.

Common Mistakes to Avoid

  1. Oversizing warehouses "for safety" — Start small, scale up only if queries queue. A 2-minute query on a Small is cheaper than a 30-second query on an X-Large.

  2. Leaving auto-suspend at 10 minutes — Change this on day one. 60 seconds is almost always the right default.

  3. Ignoring the ACCOUNT_USAGE schema — It's free, it's comprehensive, and it tells you exactly where money goes. Query it weekly.

  4. Clustering small tables — Automatic clustering costs credits. On tables under 100 GB, the clustering cost often exceeds the query savings.

  5. Not using transient tables for staging — Every permanent table pays 7 days of Fail-safe storage. Staging data doesn't need that.

  6. Running the same expensive query repeatedly — Use RESULT_SCAN(LAST_QUERY_ID()) for quick re-access, or materialize results into a table.

  7. Ignoring serverless costs — They're invisible in warehouse metering but show up on your bill. Monitor them monthly.
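Item 6 in action: assuming the immediately preceding statement was an expensive aggregation, its cached result can be re-queried without re-scanning the underlying tables:

```sql
-- Snowflake SQL
-- Re-use the result set of the immediately preceding query
SELECT *
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()))
LIMIT 100;
```

RESULT_SCAN only works within the result retention window and the same session context, so materialize into a table for anything you need to share or keep.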

Building a Cost Dashboard

Combine the queries above into a scheduled task that populates a cost-tracking table. Here's the skeleton:

-- Snowflake SQL
-- Create a daily cost snapshot table
CREATE TABLE IF NOT EXISTS monitoring.daily_cost_snapshot (
    snapshot_date DATE,
    warehouse_name STRING,
    credits_used FLOAT,
    estimated_cost_usd FLOAT,
    top_query_count INTEGER,
    avg_query_duration_seconds FLOAT
);

-- Populate daily (run via Snowflake Task or external scheduler)
-- Aggregate metering and query stats separately: a direct join would
-- multiply metering rows by query rows and inflate the credit totals
INSERT INTO monitoring.daily_cost_snapshot
WITH wh AS (
    SELECT
        warehouse_name,
        ROUND(SUM(credits_used), 2) AS credits_used
    FROM snowflake.account_usage.warehouse_metering_history
    WHERE DATE_TRUNC('day', start_time) = CURRENT_DATE() - 1
    GROUP BY warehouse_name
),
qh AS (
    SELECT
        warehouse_name,
        COUNT(*) AS query_count,
        ROUND(AVG(total_elapsed_time) / 1000, 2) AS avg_duration_seconds
    FROM snowflake.account_usage.query_history
    WHERE DATE_TRUNC('day', start_time) = CURRENT_DATE() - 1
    GROUP BY warehouse_name
)
SELECT
    CURRENT_DATE() - 1 AS snapshot_date,
    wh.warehouse_name,
    wh.credits_used,
    ROUND(wh.credits_used * 3.00, 2) AS estimated_cost_usd,
    qh.query_count AS top_query_count,
    qh.avg_duration_seconds AS avg_query_duration_seconds
FROM wh
LEFT JOIN qh ON wh.warehouse_name = qh.warehouse_name;

For teams exploring their Snowflake usage data interactively, Harbinger Explorer lets you run SQL against exported cost data directly in the browser using DuckDB WASM. That makes it useful for ad-hoc cost analysis without a running warehouse, and for prototyping queries before deploying them to Snowflake.

Next Steps

Start with Step 1 — run the warehouse metering query and identify your top 3 cost drivers. Then set auto-suspend to 60 seconds on every non-streaming warehouse. Those two changes alone typically save 15-25% on your next bill.

For ongoing optimization, schedule the cost snapshot query daily and review trends weekly. Cost creep is gradual — catching it early is the entire game.



[PRICING-CHECK] Snowflake credit pricing ($3.00/credit for Enterprise on-demand) — verify against current Snowflake pricing page. [VERIFY] Minimum billing increment of 60 seconds — confirm this is still current for all editions.

