From tasks and streams to tranquility: My first real project with Snowflake dynamic tables

23 Jul 2025 by Isha Vason

Let me tell you about the moment I realized I’d been overcomplicating things for years.

I was working on a pipeline in Snowflake. You know the type — a multi-stage transformation process where a few base tables feed into intermediate tables, some reconciliation happens, and eventually it all lands in a final reporting layer. I’d set this up the traditional way: Streams + Tasks + a bunch of SQL scripts + orchestration logic to make sure everything fired in order.

It worked. But it felt… brittle. And heavy.

So when I heard about Snowflake Dynamic Tables — I’ll admit — I didn’t jump in with both feet. I’ve been burned by “simpler” approaches that turned into tech debt down the road. But I was curious.

And eventually, I was impressed.

What Are Dynamic Tables (In Real Life)?

In Snowflake’s words, Dynamic Tables are SQL-defined, automatically refreshing, materialized tables. They sit somewhere between a view and a full data pipeline.

In my words? They’re like saying:

“Hey Snowflake, here’s the SQL that defines the result I want. Keep it fresh for me.”

And it listens. Reliably.

You don’t write logic for refreshes. You don’t schedule tasks. You just declare the desired outcome using SQL, and Snowflake takes care of the orchestration.

The Use Case That Converted Me

I had a reconciliation pipeline that joined transactional orders with warehouse shipment events. The idea was to identify mismatches — for example, orders marked “delivered” that didn’t have corresponding shipment scans.

My old approach involved:

  • A full join query baked into a SQL Task
  • A Stream tracking new order inserts
  • Manual logic to avoid reprocessing old rows
  • Retry scripts when tasks failed
  • Logging that I never quite trusted

It was fine. But when I revisited the logic with Dynamic Tables, I realized I could cut it all down to this:

CREATE OR REPLACE DYNAMIC TABLE orders_reconciliation
TARGET_LAG = '15 minutes'
WAREHOUSE = analytics_wh
AS
SELECT
      o.order_id,
      o.status AS order_status,
      s.status AS shipment_status,
      CASE
            WHEN s.status IS NULL THEN 'Missing Shipment'
            WHEN o.status <> s.status THEN 'Mismatch'
            ELSE 'OK'
      END AS reconciliation_result,
      CURRENT_TIMESTAMP() AS processed_at
FROM raw.orders o
LEFT JOIN raw.shipments s
      ON o.order_id = s.order_id;

That’s it.

  • No streams.
  • No intermediate staging tables.
  • No DAG to maintain.
  • Just SQL + Snowflake = magic.
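For contrast, here is roughly what the old Streams + Tasks version looked like. This is a simplified sketch from memory, not my exact production code; the names (orders_stream, reconcile_orders_task, orders_reconciliation_manual) are hypothetical:

```sql
-- Stream tracks new rows landing in raw.orders
CREATE OR REPLACE STREAM orders_stream ON TABLE raw.orders;

-- Task wakes up on a schedule, but only runs if the stream has data
CREATE OR REPLACE TASK reconcile_orders_task
  WAREHOUSE = analytics_wh
  SCHEDULE = '15 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
AS
  INSERT INTO orders_reconciliation_manual
  SELECT o.order_id, o.status, s.status, CURRENT_TIMESTAMP()
  FROM orders_stream o
  LEFT JOIN raw.shipments s ON o.order_id = s.order_id;

-- Tasks are created suspended; you have to remember this part
ALTER TASK reconcile_orders_task RESUME;
```

And this sketch doesn’t even include the retry scripts or the logging. All of that collapses into the single CREATE DYNAMIC TABLE statement above.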

What Blew My Mind

  • It’s incremental by default: Snowflake tracks changes behind the scenes and applies them smartly — no need for CDC code or watermarks.
  • You can chain dynamic tables: One dynamic table feeds another, forming a DAG-like structure that Snowflake handles automatically.
  • Built-in observability: You can see the lag, the refresh history, and more — right in Snowsight.
  • Seamless between batch and streaming: Need faster updates? Just tweak the TARGET_LAG.
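Chaining really is as simple as selecting from another dynamic table. A minimal sketch, building on the reconciliation table from earlier (daily_order_stats is a hypothetical name):

```sql
-- Hypothetical downstream table: daily counts by reconciliation result.
-- Snowflake infers the dependency on orders_reconciliation automatically.
CREATE OR REPLACE DYNAMIC TABLE daily_order_stats
TARGET_LAG = '1 hour'
WAREHOUSE = analytics_wh
AS
SELECT
      DATE_TRUNC('day', processed_at) AS day,
      reconciliation_result,
      COUNT(*) AS order_count
FROM orders_reconciliation
GROUP BY 1, 2;

-- Need fresher numbers later? Just change the lag:
ALTER DYNAMIC TABLE daily_order_stats SET TARGET_LAG = '5 minutes';
```

Intermediate tables that exist only to feed others can also declare TARGET_LAG = DOWNSTREAM, so they refresh exactly as often as their consumers need.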

Gotchas I Hit (So You Don’t Have To)

It wasn’t all sunshine. A few things tripped me up:

  • Not all SQL is supported — I tried using a non-deterministic function (RAND() for sampling) and got rejected. Stick to deterministic logic.
  • No procedural logic — You can’t use stored procedures or complex branching logic. But that’s kind of the point — this is declarative by design.
  • Refresh isn’t instant — It’s automatic, not immediate. If you’re expecting sub-second freshness, this isn’t real-time streaming (yet).
Still, for 99% of my use cases, especially reporting, batch pipelines, and layered transformations, Dynamic Tables crushed it.
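On the refresh-lag point: Snowsight shows lag and refresh history graphically, but you can also query it in SQL. A sketch using the INFORMATION_SCHEMA table function (I’m recalling the column names from the docs, so double-check them against your account):

```sql
-- Recent refreshes for the reconciliation table
SELECT name, state, refresh_start_time, refresh_end_time
FROM TABLE(INFORMATION_SCHEMA.DYNAMIC_TABLE_REFRESH_HISTORY(
      NAME => 'ORDERS_RECONCILIATION'))
ORDER BY refresh_start_time DESC
LIMIT 10;

-- And if you really can't wait for the scheduler, force a refresh:
ALTER DYNAMIC TABLE orders_reconciliation REFRESH;
```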

The Result
After switching:

  • My pipeline code shrank by ~60%.
  • Operational overhead dropped to zero (I literally haven’t touched it in weeks).
  • Stakeholders got more frequent and accurate data.

Lessons Learned

  • If you’re still juggling Streams + Tasks + DML, consider giving Dynamic Tables a shot.
  • They’re not for everything, but for data transformation pipelines, they’re a game changer.
  • It’s SQL-first, automation-forward, and operationally quiet. And that’s a beautiful combo.

Final Thought

Dynamic Tables don’t just simplify Snowflake pipelines. They invite you to rethink what a pipeline even is.

So if you’re tired of writing glue code and chasing broken DAGs, maybe it’s time to let Snowflake handle the orchestration for you — and focus on what you actually care about: clean, accurate, trustworthy data.
