Improving query performance in Snowflake and its related costs

15 / May / 2023 by kedhar.natekar

In the previous blog, we looked at how to optimally use Snowflake warehouses and tables.

Continuing the series, this post focuses on improving query performance in Snowflake and the costs associated with doing so.

As we know, Snowflake tables are micro-partitioned, which significantly improves query performance. However, over time, you may experience query slowness due to increased DML operations on large datasets. 

In such cases, you can cluster the table or use the Search Optimization service to improve performance. Before choosing between them, consider the specific use cases in which each is most beneficial and the associated cost impact. Otherwise, you may see little performance gain while paying higher maintenance costs.

Clustering in Snowflake

Clustering in Snowflake should not be confused with partitioning in traditional big data systems.

When a table is clustered, Snowflake still manages the micro-partitions, but organizes them by the defined clustering key rather than by insertion order. This improves performance because queries filtering on a given clustering key value need to scan fewer micro-partitions.

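As a minimal sketch (the table and column names here are hypothetical), a clustering key can be added to an existing table with CLUSTER BY, and clustering quality can then be inspected with SYSTEM$CLUSTERING_INFORMATION:

  -- Define a clustering key on an existing table
  ALTER TABLE sales CLUSTER BY (region, order_date);

  -- Inspect how well the table is clustered on those columns
  SELECT SYSTEM$CLUSTERING_INFORMATION('sales', '(region, order_date)');
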
When choosing clustering for a table, consider the following:

  • Clustering is recommended when the table contains multiple terabytes (TB) of data and query performance degrades over time due to frequent updates and deletes.
  • Use a maximum of 3-4 columns in a clustering key; beyond that, there is rarely a significant additional benefit.
  • Order the clustering columns by the most frequently used filter predicates, then by the most actively used join predicates, and then by columns used in ORDER BY and GROUP BY clauses.
  • Use a clustering key column that has the following:
    • Large enough number of distinct values to enable effective pruning on the table
    • Small enough number of distinct values to allow Snowflake to effectively group rows in the same micro-partitions.
  • Order the columns from lowest cardinality to highest cardinality.
  • If a high-cardinality column needs to be used for clustering, define the key as an expression on the column rather than on the column directly, to reduce the number of distinct values (see the sketch after this list).
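
For instance, a high-cardinality TIMESTAMP column can be clustered on an expression that truncates it. This is a sketch, with a hypothetical events table:

  -- Cluster on a date expression instead of the raw timestamp
  -- to reduce the number of distinct clustering key values
  ALTER TABLE events CLUSTER BY (TO_DATE(event_ts));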

Costs associated with reclustering

When an existing table is clustered:

  • Original micro-partitions are marked as deleted but retained in the system to enable Time Travel and Fail-safe; they are purged only after the Time Travel and Fail-safe retention periods expire.
  • New micro-partitions of roughly the same total size are created, so maintaining both the old and the new partitions is an additional storage cost.

  For example: before reclustering, suppose the table is 100 GB, with 2 GB added and 1 GB deleted daily. The active table would then be 101 GB, and Time Travel would hold 2 GB for that day. After reclustering, the active table still holds 101 GB of partitions ordered by the cluster key, but Time Travel now also holds the 100 GB of old micro-partitions. That 100 GB stays in Time Travel until it moves to the Fail-safe zone.

  • Clustering always brings Automatic Clustering costs, since reclustering runs in the background on Snowflake-managed compute; this associated cost should also be considered (one way to monitor it is shown below).
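
One way to monitor that background cost is a query like the following (a sketch against the standard ACCOUNT_USAGE view; the 7-day window is arbitrary):

  -- Credits consumed by Automatic Clustering per table over the last 7 days
  SELECT table_name, SUM(credits_used) AS credits_used
  FROM SNOWFLAKE.ACCOUNT_USAGE.AUTOMATIC_CLUSTERING_HISTORY
  WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
  GROUP BY table_name
  ORDER BY credits_used DESC;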


Search Optimization Service

Clustering does not guarantee improved performance on non-clustered columns. 

If you have frequent queries on non-clustered columns and performance is key irrespective of cost, opt for the Search Optimization service on the entire table or on specific columns.

It is similar to creating indexes on specific columns in an RDBMS such as Oracle.
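
As a sketch (the table and column names are hypothetical), search optimization can be enabled for a whole table or only for specific columns and lookup types:

  -- Enable search optimization for the entire table
  ALTER TABLE customers ADD SEARCH OPTIMIZATION;

  -- Or only for specific columns and access methods
  ALTER TABLE customers ADD SEARCH OPTIMIZATION
    ON EQUALITY(email), SUBSTRING(full_name);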

Cost impact of using the Search Optimization service

Storage Resources:

The search optimization service creates a search access path data structure that requires space for each table on which search optimization is enabled.

The storage cost of the search access path depends upon multiple factors:

  • The number of distinct values in the table.
  • In the extreme case where all columns have data types that use the search access path, and all data values in each column are unique, the required storage can be as much as the original table’s size.
  • Typically, however, the size is approximately 1/4 of the original table’s size.

Compute Resources:

  • Resource consumption is higher when the table has high churn (frequent inserts, updates, and deletes).
  • These costs are roughly proportional to the amount of data ingested (added or changed) and the number of distinct values in the table.
  • Deletes also have some cost.
  • Automatic clustering, while improving the latency of queries in tables with search optimization, can further increase the maintenance costs of search optimization.
  • Use the SYSTEM$ESTIMATE_SEARCH_OPTIMIZATION_COSTS function to estimate the cost of adding search optimization to a table, as shown below.
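
For example (the table name is hypothetical), the function returns its estimate as JSON:

  SELECT SYSTEM$ESTIMATE_SEARCH_OPTIMIZATION_COSTS('mydb.public.customers');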

To reduce costs while using Search Optimization:

  • DELETE less frequently.
  • INSERT, UPDATE, and MERGE: batching these DML statements on the table can reduce the maintenance cost incurred by the search optimization service.
  • If you recluster the entire table, consider dropping the SEARCH OPTIMIZATION property before reclustering, and then adding it back afterward (a sketch of this sequence follows).
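
A sketch of that sequence, with a hypothetical table name:

  -- Drop the property before the full recluster...
  ALTER TABLE customers DROP SEARCH OPTIMIZATION;

  -- ...recluster the table here (e.g., by redefining its clustering key)...

  -- ...then re-add the property afterward
  ALTER TABLE customers ADD SEARCH OPTIMIZATION;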

