Tutorial
5 min read

From spreadsheets to automated data pipelines - and how this can be achieved with the support of Google Cloud

CSV and XLSX files are among the most common file formats used in business to store and analyze data. Unfortunately, this approach does not scale: it becomes more and more difficult to give all team members access to one shared file in which they can collaborate and share the results of their work with other teams.

Of course, there are solutions that enable real-time collaboration in spreadsheet files, but it is still difficult to share data with different teams, process it, or agree on the target data format, especially when automatic data-format detection is in use.

Where should we start?

The first aspect is preparing the landscape of all processed spreadsheets. We start with the most important thing: understanding the data, verifying which information matters to the target users and, where possible, reducing the amount of processed data by deleting what is unused.

The second part is focused on understanding how new spreadsheets are created. Can we automate that step, or can it only be done manually by users? How frequently is data uploaded? How can we verify whether there are any changes in the data schema?
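The last question can be answered in a lightweight way: a scheduled job can compare the header row of each new file against the expected schema before any processing starts. Below is a minimal sketch in Python; the column names and file name are hypothetical examples.

```python
# Minimal schema-drift check: compare the header row of an incoming CSV
# against the expected column list. Names below are hypothetical examples.
import csv

EXPECTED_COLUMNS = ["order_id", "customer", "amount", "currency"]

def schema_changed(csv_path: str) -> bool:
    """Return True if the file's header differs from the expected schema."""
    with open(csv_path, newline="") as f:
        header = next(csv.reader(f))
    return header != EXPECTED_COLUMNS

if schema_changed("orders_2022_02.csv"):
    print("Schema change detected - notify the data owners before processing.")
```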

When we have defined the logistics and we know what the input and desired output will be, we can move on to the next step: deciding how to process the data (for example, cleaning it and creating aggregations), where to save it (the target database or storage) and how to deliver the output to the target users.

We also need to know whether other teams use any tools to analyze or visualize the data further. If users are already committed to one solution, it may be worth integrating it into the new process to simplify their onboarding.

The last step is adding a monitoring layer. Who should take action if there is a problem with the source data, and how should we notify the analysts? How can we check data quality? What should we do to avoid human error in manual processes? We should add metrics reporters to our application, along with queries that detect incorrect records or values that deviate from the expected range. Based on these findings, we can create alerts and dashboards.
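As an illustration, a data-quality check can be as simple as counting records that fail basic rules and exposing that number as a metric. The sketch below assumes records arrive as Python dicts; the field names and rules are hypothetical examples.

```python
# Count records failing simple validity rules; the resulting number can
# feed a metric, alert or dashboard. Field names are hypothetical examples.
from typing import Iterable

def count_invalid(records: Iterable[dict]) -> int:
    """Count records with a missing ID or a non-positive amount."""
    return sum(
        1 for r in records
        if not r.get("order_id") or float(r.get("amount", 0)) <= 0
    )

records = [
    {"order_id": "A-1", "amount": "19.99"},
    {"order_id": "", "amount": "5.00"},  # missing ID -> counted as invalid
]
print(f"invalid_records={count_invalid(records)}")
```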

The multistage process 

A public cloud such as Google Cloud Platform helps companies improve their data pipelines and move quickly from local Excel development to scalable tools, making work faster and more efficient while reducing human errors and data-formatting problems.

The first step is data ingestion. The perfect place to store raw, unprocessed data is Google Cloud Storage. Users can upload data there themselves, or a sync script can keep Cloud Storage aligned with remote drives. This is where the journey starts: process alignment and data integration from multiple sources.
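A minimal ingestion sketch using the official google-cloud-storage Python client could look like this; the bucket and object names are hypothetical examples, and the client relies on Application Default Credentials.

```python
# Upload a local spreadsheet export to the raw-data bucket in Cloud Storage.
# Bucket and object names are hypothetical examples.
from google.cloud import storage

def upload_raw_file(bucket_name: str, local_path: str, blob_name: str) -> None:
    client = storage.Client()  # uses Application Default Credentials
    bucket = client.bucket(bucket_name)
    bucket.blob(blob_name).upload_from_filename(local_path)

upload_raw_file("my-raw-data-bucket", "orders.csv", "incoming/orders.csv")
```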

For the data processing pipelines, we can choose from multiple solutions. Because each project has different use cases, the best approach is usually to write custom Python scripts to process the data, while the scripts themselves can be scheduled by tools like Google Cloud Composer (managed Apache Airflow), self-managed Apache Airflow, Google Cloud Tasks, Google Cloud Scheduler or even a mix of Cloud Pub/Sub and Cloud Functions.
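Such a processing script can stay very small. The sketch below uses pandas to clean a raw CSV and write a daily aggregate back to Cloud Storage; the bucket paths and column names are hypothetical examples (reading gs:// URIs directly requires the gcsfs package).

```python
# Clean a raw CSV and write an aggregated result back to Cloud Storage.
# Paths and column names are hypothetical examples.
import pandas as pd

def process(input_uri: str, output_uri: str) -> None:
    df = pd.read_csv(input_uri)          # gs:// URIs work when gcsfs is installed
    df = df.dropna(subset=["order_id"])  # drop records without an ID
    daily = df.groupby("order_date", as_index=False)["amount"].sum()
    daily.to_csv(output_uri, index=False)

process(
    "gs://my-raw-data-bucket/incoming/orders.csv",
    "gs://my-processed-bucket/daily_totals.csv",
)
```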

In the example scenario, we use Composer, with the Python scripts executed on Composer's Kubernetes pods. This is the most flexible option and can easily be extended in the future.
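A minimal DAG for this setup might look like the sketch below, which runs the processing script in its own pod via the KubernetesPodOperator; the container image, schedule and task names are hypothetical examples.

```python
# A Composer (Airflow) DAG that runs the processing script in a Kubernetes pod.
# Image name, schedule and IDs are hypothetical examples.
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import (
    KubernetesPodOperator,
)

with DAG(
    dag_id="spreadsheet_pipeline",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",  # process new files once a day
    catchup=False,
) as dag:
    process_files = KubernetesPodOperator(
        task_id="process_files",
        name="process-files",
        namespace="default",
        image="eu.gcr.io/my-project/spreadsheet-processor:latest",
        cmds=["python", "process.py"],
    )
```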

As the final part of the CSV and XLSX processing platform, we need to load the processed data somewhere. The destination depends on the exact use case, but the most common ones are well served by inserting the data into BigQuery, which performs very well as a data warehouse and can act as the engine for Business Intelligence.
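Loading the processed files into BigQuery can be done with the official google-cloud-bigquery client; in the sketch below, the bucket, dataset and table names are hypothetical examples.

```python
# Load a processed CSV from Cloud Storage into a BigQuery table.
# Project, dataset and table names are hypothetical examples.
from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,  # skip the header row
    autodetect=True,      # or provide an explicit schema for stricter control
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)
load_job = client.load_table_from_uri(
    "gs://my-processed-bucket/daily_totals.csv",
    "my-project.analytics.daily_totals",
    job_config=job_config,
)
load_job.result()  # wait for the load job to finish
```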

Last but not least, everything must be managed as Infrastructure as Code. A mix of Terraform and CI/CD tools like GitHub Actions or GitLab CI makes this happen quickly and makes the infrastructure easy to manage. If you want to read more about Terraform, check our blog post “Terraform your Cloud Infrastructure”.

We also need to mention the monitoring layer. It is powered by Cloud Monitoring, Cloud Logging and BigQuery tables in which we can store information about potential errors in the source data. The findings can be visualized in Data Studio or a similar tool, while alerts can be sent via email to the stakeholders, who can then take action.
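One simple way to wire these pieces together is to write data-quality findings as structured logs, which Cloud Logging can turn into log-based metrics and Cloud Monitoring into alerts. The sketch below uses the official google-cloud-logging client; the logger name and payload fields are hypothetical examples.

```python
# Emit a structured data-quality log entry that a log-based metric and
# alerting policy can pick up. Logger name and fields are hypothetical examples.
from google.cloud import logging as cloud_logging

client = cloud_logging.Client()
logger = client.logger("data-quality")

logger.log_struct(
    {
        "source_file": "incoming/orders.csv",
        "check": "missing_order_id",
        "invalid_records": 3,
    },
    severity="WARNING",
)
```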

Automate work and simplify the processes with Google Cloud

Another benefit of this solution is that it is not expensive. It delivers high availability and can easily be scaled up, depending on the company's needs and the complexity of the tasks the processing platform has to take on next. This is an example of how you can quickly move from local Excel development to an automated cloud environment, simplify data management and start data-driven development in the cloud.


Would you like to change your spreadsheet files into automated data pipelines with Google Cloud? Let’s discuss it - contact us!

big data
technology
google cloud platform
cloud
data pipelines
GCP
8 February 2022
