Tutorial

From 0 to MLOps with ❄️ Snowflake Data Cloud in 3 steps with the Kedro-Snowflake plugin

MLOps on Snowflake Data Cloud

MLOps is an ever-evolving field, and with the selection of managed and cloud-native machine learning services expanding by the day, it can be challenging to choose the right platform for running machine learning pipelines and deploying trained models. Whichever platform you pick, three significant pain points persist in the MLOps landscape:

  • lack of easy access to the company's valuable data,
  • the need for quick local iteration on ML pipelines,
  • lack of a seamless transition to the cloud environment.

Snowflake is a powerful data warehouse and Snowpark makes it easy to work with, so together they make a strong candidate for building complex ML pipelines. If you are not familiar with Snowpark yet, there are a lot of great articles introducing its core concepts and showing how you can use it to write data science and machine learning (ML) code, e.g. here, here or here.

There are, however, at least a few shortcomings of the currently proposed approaches that have not yet been addressed:

  • ML pipeline orchestration - in the current state, two strategies can be pursued:

    • using an external orchestrator service or tool, such as Azure ML Pipelines or Apache Airflow, to invoke Snowpark code directly
    • manually wrapping Snowpark code into Python UDFs and using them to build a directed acyclic graph (DAG) of steps with Snowflake's native tasks mechanism

Unfortunately, neither of these methods seems to be free from flaws - the former requires additional scheduling components in the architecture, which makes it more complex and less platform-independent. The latter is less user-friendly, as it requires not only developing the training code, but also defining Snowflake DAGs of tasks in plain SQL or Terraform.

  • ML model lifecycle management - there isn’t any automation in place that makes it easy to promote/deploy training pipelines between stage/runtime environments, i.e. Development - Test - Production. This requires you to prepare Continuous Integration/Continuous Training (CI/CT) processes on your own.
  • Code standardization and project templates - in its current state, Snowpark does not come with any built-in mechanism for code structuring, unit testing or automated documentation generation.

The above list of challenges clearly points to a missing integration between the Snowflake environment and an MLOps framework such as Kedro.

Today we are proud to announce a solution that fills this gap - the kedro-snowflake plugin. In the next post we will also guide you through the whole MLOps platform and ML model deployment on Snowflake. First, however, let's take a look at what Kedro is, then build an ML pipeline in Kedro and execute it in the Snowflake environment in 3 simple steps.

Kedro - the MLOps Framework

Kedro is a widely-adopted, open-source Python framework that aims to bring engineering practice back to the data science world. The rationale behind using Kedro as a framework for creating maintainable and modular training code is, in many aspects, similar to preferring Terraform over a cloud vendor's native SDK for infrastructure provisioning, and can be summarized in the following points:

  • standardization of the ML project layout,
  • portability of ML pipelines,
  • reusability of the code base, modules or even whole pipelines,
  • a faster development loop thanks to the possibility of running/testing pipelines locally,
  • a clear and maintainable codebase with no dependencies on cloud-specific APIs (as an analogy to Terraform providers) and separation of runtime configurations,
  • multi-cloud readiness,
  • hooks support for further automation,
  • seamless integration with 3rd-party tools like MLflow, pandas-profiling or Docker via the plugins mechanism,
  • easy integration with CI/CD tools for a true MLOps experience.
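
To give a flavour of this standardization, here is a minimal sketch of a Kedro node and pipeline; the split_data function, dataset names and the test_ratio parameter are hypothetical and serve only as an illustration:

# A minimal, hypothetical Kedro pipeline: plain Python functions are wrapped
# into nodes, and their inputs/outputs are resolved through the Data Catalog.
import pandas as pd
from kedro.pipeline import Pipeline, node


def split_data(model_input: pd.DataFrame, test_ratio: float):
    """Split the model input table into train and test sets."""
    test = model_input.sample(frac=test_ratio, random_state=42)
    train = model_input.drop(test.index)
    return train, test


def create_pipeline() -> Pipeline:
    return Pipeline(
        [
            node(
                func=split_data,
                inputs=["model_input_table", "params:test_ratio"],
                outputs=["train_set", "test_set"],
                name="split_data_node",
            ),
        ]
    )

Because the node functions are plain Python with no cloud-specific imports, the same pipeline can be unit-tested and run locally, and then handed over to any supported runner or plugin without code changes.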

We at GetInData | Part of Xebia are strong advocates of the Kedro framework, and it is our technology of choice for deploying robust and user-friendly MLOps platforms on many cloud platforms. With our open-source Kedro plugins, you can write your pipeline code and focus on the target model, and then deploy it to any supported platform (see: Running Kedro… everywhere? Machine Learning Pipelines on Kubeflow, Vertex AI, Azure and Airflow - GetInData) without changing the code, making local iteration fast and the move to the cloud seamless.

As of May 2023 we support Kubeflow Pipelines, GCP Vertex AI Pipelines, Azure ML Pipelines and Apache Airflow.

Now the time has come for Snowflake…

Kedro-Snowflake plugin behind the scenes

kedro-snowflake is our newest plugin that allows you to run full Kedro pipelines in Snowflake. Right now it supports:

  • Kedro starter, to get you up to speed fast
  • automatically creating Snowflake Stored Procedures from Kedro nodes (using Snowpark SDK)
  • translating the Kedro pipeline into Snowflake task DAGs
  • running the Kedro pipeline fully within Snowflake, without an external system
  • using Kedro's official SnowparkTableDataSet
  • automatically storing intermediate data results as Transient Tables (if Snowpark's DataFrames are used)

The core idea of this plugin is to programmatically traverse a Kedro pipeline, translate its nodes into corresponding stored procedures and, at the same time, wrap them into Snowflake tasks, preserving the inter-node dependencies so that exactly the same pipeline DAG is formed on the Snowflake side. The end result is a Snowflake DAG of tasks like this:

[Image: the resulting Snowflake DAG of tasks]

that correspond to the Kedro pipeline:

[Image: the corresponding Kedro pipeline]
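
To make the translation more tangible, below is a simplified Snowpark sketch of the mechanism rather than the plugin's actual internals; all names (run_split_data, KEDRO_SPLIT_DATA, the @ml_stage stage, the ML_WH warehouse and the upstream task) are assumptions made only for illustration:

# Illustrative sketch: one Kedro node's logic registered as a Snowflake
# stored procedure and chained into a task DAG with an AFTER dependency.
from snowflake.snowpark import Session

connection_parameters = {
    "account": "<account>", "user": "<user>", "password": "<password>",
    "warehouse": "ML_WH", "database": "DEMO_DB", "schema": "PUBLIC",
}
session = Session.builder.configs(connection_parameters).create()


def run_split_data(session: Session) -> str:
    # In the real plugin this would execute the Kedro node; here we just
    # materialize an intermediate result as a transient table.
    model_input = session.table("MODEL_INPUT_TABLE")
    model_input.write.save_as_table(
        "TRAIN_SET", mode="overwrite", table_type="transient"
    )
    return "ok"


# Register the node's logic as a permanent stored procedure.
session.sproc.register(
    run_split_data,
    name="KEDRO_SPLIT_DATA",
    is_permanent=True,
    stage_location="@ml_stage",
    packages=["snowflake-snowpark-python"],
    replace=True,
)

# Wrap the procedure in a Snowflake task; AFTER mirrors the Kedro dependency.
session.sql("""
    CREATE OR REPLACE TASK KEDRO_SPLIT_DATA_TASK
      WAREHOUSE = ML_WH
      AFTER KEDRO_PREPROCESS_TASK
    AS CALL KEDRO_SPLIT_DATA()
""").collect()

The plugin performs this kind of translation for every node automatically, so you never have to write the stored procedure or task definitions yourself.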

It also comes with a built-in snowflights starter (a port of the official spaceflights starter, extended with Snowflake-related features) that will help you bootstrap your Snowflake-based ML projects in seconds.

Quick start - your ML pipeline in 3 steps with Kedro-Snowflake plugin

Let’s start with the snowflights Kedro starter. First, prepare your environment (i.e. your preferred Python virtual environment) and install our kedro-snowflake plugin:

pip install "kedro-snowflake>=0.1.2" 

Next, create your first ML pipeline using Kedro and Snowflake. The starter will guide you through the Snowflake connection configuration, including the Snowflake account and warehouse details:

kedro new --starter=snowflights --checkout=0.1.2

Then run the starter pipeline:

kedro snowflake run --wait-for-completion

That’s it! You can see the ML pipeline execution in the Snowflake UI:

[Image: the ML pipeline execution in the Snowflake UI]

and in the terminal:

[Image: pipeline execution progress in the terminal]

This starter showcases the Kedro-Snowflake integration, including the connection with Snowflake, the transformation of a Kedro ML pipeline into a Snowflake-compatible format, and the execution of the pipeline in the Snowflake environment. Feel free to build your own pipeline based on this starter or from scratch with our plugin. See more in the Kedro-Snowflake plugin documentation!
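
If you want to go beyond the starter, your own nodes can operate directly on Snowpark DataFrames so that the heavy lifting stays inside Snowflake. Here is a small, hypothetical example (the table and column names are made up):

# Hypothetical Kedro node working on a Snowpark DataFrame; with the plugin,
# its output can be stored as a transient table between nodes.
from snowflake.snowpark import DataFrame
from snowflake.snowpark import functions as F


def aggregate_shuttle_reviews(reviews: DataFrame) -> DataFrame:
    """Average review score per shuttle, computed entirely inside Snowflake."""
    return reviews.group_by("SHUTTLE_ID").agg(
        F.avg(F.col("REVIEW_SCORE")).alias("AVG_REVIEW_SCORE")
    )

Register such a function as a regular Kedro node and back its input with Kedro's SnowparkTableDataSet in the catalog; the plugin then takes care of executing it inside Snowflake.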

We also recommend our video tutorial, in which Marcin Zabłocki shows how to run an ML pipeline on Snowflake.

Summary

In this short blog post we presented our newest plugin, kedro-snowflake. Thanks to this plugin, you can build your ML pipelines in Kedro and execute them in a scalable Snowflake environment in three simple steps. Stay tuned for the second part of this blog post, in which we are going to present the whole MLOps platform and ML model deployment with the kedro-snowflake plugin as its core component.


Interested in ML and MLOps solutions? Want to improve your ML processes and scale project delivery? Watch our MLOps demo and sign up for a free consultation.


