Big Data Workshop
A one-day workshop focused on the practical side of using open-source Big Data technologies. Participants will learn the basics of the most popular Big Data tools and technologies such as Hadoop, Hive, Spark, and Kafka.
The workshop is highly focused on practical experience. The instructors will not only teach you the important theory but will also share the experience they have gained from working with Big Data technologies for several years.
Training outcome
During the workshop you will act as a Big Data engineer and analyst working for StreamRock™, a fictional company that builds a music-streaming application (like Spotify). Your main goal is to take advantage of Big Data technologies such as Hadoop, Spark, or Hive to analyse various datasets about users and the songs they played. We will process data in a batch manner to get data-driven answers to important business questions. Each exercise will be executed on a remote multi-node Hadoop cluster.
Course agenda
Part 1
Introduction to Big Data and Apache Hadoop
A description of StreamRock, along with the opportunities and challenges that Big Data technologies bring to it.
Introduction to core Hadoop technologies such as HDFS and YARN.
Hands-on exercise: Accessing a remote multi-node Hadoop cluster.
Part 2
Providing data-driven answers to business questions using a SQL-like solution
Introduction to Apache Hive.
Hands-on exercise: Importing structured data into the cluster using HUE.
Hands-on exercise: Ad-hoc analysis of the structured data with Hive.
Hands-on exercise: Visualisation of the results using HUE.
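Hive lets analysts query data stored on the cluster with a SQL dialect (HiveQL). As a rough illustration of the kind of ad-hoc query this part covers — the table and column names are hypothetical, and Python's built-in sqlite3 serves as a cluster-free stand-in for a Hive warehouse — an aggregation such as "most-played songs" looks like this:

```python
import sqlite3

# In-memory SQLite database standing in for a Hive warehouse;
# the 'plays' table and its columns are made-up examples.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE plays (user_id TEXT, song TEXT, duration_sec INTEGER)")
conn.executemany(
    "INSERT INTO plays VALUES (?, ?, ?)",
    [("u1", "Song A", 200), ("u2", "Song A", 180), ("u1", "Song B", 150)],
)

# A HiveQL-style ad-hoc aggregation: count plays per song,
# most-played first.
rows = conn.execute(
    """
    SELECT song, COUNT(*) AS play_count
    FROM plays
    GROUP BY song
    ORDER BY play_count DESC
    """
).fetchall()
```

On the workshop cluster the same statement would run against Hive tables over HDFS data, with Hive distributing the work instead of a local database engine.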
Part 3
Implementing scalable ETL processes on the Hadoop cluster
Introduction to Apache Spark, Spark SQL, and Spark DataFrames.
Hands-on exercise: Implementation of the ETL job to clean and massage input data using Spark.
Quick explanation of the Avro and Parquet binary data formats.
Practical tips for implementing ETL processes, such as process scheduling, schema management, and integration with existing systems.
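The key difference between the two binary formats mentioned above: Avro stores complete records one after another (row-oriented), while Parquet stores all values of each column together (column-oriented), which is why Parquet suits analytical scans that read only a few columns. A minimal, library-free sketch of that difference (the records are invented; real Avro/Parquet files add schemas, encodings, and compression):

```python
# Toy records; real Avro/Parquet files carry a schema and binary encoding.
records = [
    {"user_id": "u1", "song": "Song A", "duration_sec": 200},
    {"user_id": "u2", "song": "Song B", "duration_sec": 150},
]

# Row-oriented layout (Avro-like): one complete record after another.
row_layout = [tuple(r.values()) for r in records]

# Column-oriented layout (Parquet-like): all values of each field together.
columns = {key: [r[key] for r in records] for key in records[0]}

# Reading a single column touches only one contiguous list here,
# instead of every record — the essence of columnar storage.
durations = columns["duration_sec"]
```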
Part 4
Advanced analysis of the diversified datasets
Hands-on exercise: Implementing ad-hoc queries using Spark SQL and DataFrames.
Hands-on exercise: Visualisation of the results of Spark queries using the Spark Notebook.
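Spark's DataFrame API expresses the same logical plan as a Spark SQL query — filter, group, aggregate — which Spark then executes across the cluster. The snippet below mimics that chain on plain Python data as a cluster-free stand-in (the dataset and field names are invented, not part of the workshop materials):

```python
from collections import Counter

# Invented sample data standing in for a distributed DataFrame.
plays = [
    {"user": "u1", "song": "Song A", "country": "PL"},
    {"user": "u2", "song": "Song A", "country": "DE"},
    {"user": "u1", "song": "Song B", "country": "PL"},
]

# Logical equivalent of a DataFrame chain such as:
#   df.filter(col("country") == "PL").groupBy("song").count()
pl_plays = [p for p in plays if p["country"] == "PL"]  # filter
counts = Counter(p["song"] for p in pl_plays)          # groupBy + count
```

In Spark the same plan is optimized by Catalyst and executed in parallel over partitions, rather than in a single local loop.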
Other Big Data Training
Machine Learning Operations Training (MLOps)
This four-day course will teach you how to operationalize Machine Learning models using popular open-source tools, like Kedro and Kubeflow, and deploy them using cloud computing.
Hadoop Administrator Training
This four-day course provides the practical and theoretical knowledge necessary to operate a Hadoop cluster. We put great emphasis on practical hands-on exercises that aim to prepare participants to work as effective Hadoop administrators.
Advanced Spark Training
This two-day training is dedicated to Big Data engineers and data scientists who are already familiar with the basic concepts of Apache Spark and have hands-on experience implementing and running Spark applications.
Data Analyst Training
This four-day course teaches Data Analysts how to analyse massive amounts of data available in a Hadoop YARN cluster.
Real-Time Stream Processing
This two-day course teaches data engineers how to process unbounded streams of data in real time using popular open-source frameworks.
Analytics engineering with Snowflake and dbt
This two-day training is dedicated to data analysts, analytics engineers, and data engineers who are interested in learning how to build and deploy Snowflake data transformation workflows faster than ever before.
Mastering ML/MLOps and AI-powered Data Applications in the Snowflake Data Cloud
This two-day training is dedicated to data engineers, data scientists, and tech enthusiasts. This workshop will provide hands-on experience and real-world insights into architecting data applications on the Snowflake Data Cloud.
Modern Data Pipelines with DBT
In this one-day workshop, you will learn how to create modern data transformation pipelines managed by DBT. Discover how you can improve your pipelines' quality and your data team's workflow by introducing a tool that standardizes how good practices are incorporated within the team.
Real-time analytics with Snowflake and dbt
This two-day training is dedicated to data analysts, analytics engineers, and data engineers who are interested in learning how to build and deploy real-time Snowflake data pipelines.
Contact us
Interested in our solutions?
Contact us!
Together, we will select the best Big Data solutions for your organization and build a project that will have a real impact on your business.