Hadoop Developer Training
This four-day course gives software engineers a practical introduction to Big Data application development using popular projects from the Hadoop ecosystem and beyond.
Participants will gain a detailed understanding of the architecture and role of the most important technologies from the Hadoop ecosystem. They will be able to independently load and transform huge datasets with the help of technologies like Hive, Spark, Sqoop, Kafka and Oozie.
Introduction to Big Data and Apache Hadoop
Description of StreamRock, along with the opportunities and challenges that come with Big Data technologies.
- Hands-on exercise: Accessing a remote multi-node Hadoop cluster.
Introduction to HDFS
- Hands-on exercise: Importing structured data into the cluster using HUE
- Hands-on exercise: Interacting with HDFS using HDFS CLI, Snakebite and WebHDFS
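Besides the CLI, HDFS exposes the WebHDFS REST API over plain HTTP. A minimal sketch of building a WebHDFS request URL using only the Python standard library (the NameNode host, port and file path below are placeholder assumptions, not values from the course cluster):

```python
# Sketch: constructing a WebHDFS REST URL with the standard library only.
# Host, port and path are hypothetical placeholders.
from urllib.parse import urlencode

NAMENODE = "namenode.example.com:9870"  # 9870 is the default WebHDFS port in Hadoop 3.x

def webhdfs_url(path: str, op: str, **params) -> str:
    """Build a WebHDFS REST URL for the given HDFS path and operation."""
    query = urlencode({"op": op, **params})
    return f"http://{NAMENODE}/webhdfs/v1{path}?{query}"

# An OPEN request like this can then be fetched with urllib.request.urlopen(url)
url = webhdfs_url("/user/train/data.csv", "OPEN")
print(url)
```

The same URL scheme covers listing (`op=LISTSTATUS`), creating (`op=CREATE`) and deleting (`op=DELETE`) files, which is what makes WebHDFS convenient for scripting against the cluster without a local Hadoop client.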
Introduction to YARN
- Hands-on exercise: Familiarisation with the YARN Web UI
A short overview of MapReduce
- Hands-on exercise: Submitting an example ETL MapReduce job to a YARN cluster
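The MapReduce model itself can be sketched in a few lines of plain Python: a map phase emitting key-value pairs, a shuffle grouping values by key, and a reduce phase aggregating each group. This single-process word-count sketch illustrates the programming model only; on a real cluster YARN distributes these phases across many nodes:

```python
# Single-process illustration of the map -> shuffle -> reduce flow.
from collections import defaultdict

def map_phase(lines):
    # Emit (word, 1) pairs, like a Mapper.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Group values by key, like the shuffle/sort between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum the counts per word, like a Reducer.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big cluster", "data pipeline"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)  # {'big': 2, 'data': 2, 'cluster': 1, 'pipeline': 1}
```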
Providing data-driven answers to business questions using SQL-like solutions
Introduction to Apache Hive
- Hands-on exercise: Creating Hive databases and tables using HUE
- Hands-on exercise: Ad-hoc analysis of structured data with HiveQL
Advanced aspects of Hive, e.g. partitioning, bucketing, strict mode and execution plans
- Hands-on exercise: Hive partitioning
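A hypothetical sketch of what Hive partitioning looks like in practice (the table and column names are illustrative assumptions): each partition becomes a separate HDFS directory, so queries filtering on the partition column only scan the matching directories.

```sql
-- Hypothetical example: an events table partitioned by day.
CREATE TABLE events (
  user_id BIGINT,
  action  STRING
)
PARTITIONED BY (ds STRING)
STORED AS PARQUET;

-- Load into a specific partition.
INSERT INTO events PARTITION (ds = '2024-01-15')
SELECT user_id, action FROM staging_events;

-- Partition pruning: only the ds='2024-01-15' directory is scanned.
-- In strict mode, Hive rejects queries on partitioned tables
-- that do not filter on the partition column at all.
SELECT action, COUNT(*) FROM events
WHERE ds = '2024-01-15'
GROUP BY action;
```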
Extending Hive with custom UDFs and SerDes
- Hands-on exercise: Using custom Java UDF and SerDe for JSON
Hadoop File Formats (Avro, Parquet, ORC)
- Hands-on exercise: Interacting with Parquet and Avro in Hive
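In Hive, the storage format is declared per table, so converting data between formats is just an `INSERT ... SELECT`. A hypothetical sketch (table names are assumptions):

```sql
-- The same logical schema backed by two different file formats.
CREATE TABLE songs_avro (title STRING, plays BIGINT)
STORED AS AVRO;

CREATE TABLE songs_parquet (title STRING, plays BIGINT)
STORED AS PARQUET;

-- Rewriting row-oriented Avro data into columnar Parquet:
INSERT INTO songs_parquet
SELECT title, plays FROM songs_avro;
```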
Implementing scalable ETL processes on the Hadoop cluster
Introduction to Apache Spark, Spark SQL, and Spark DataFrames
- Hands-on exercise: Implementation of the ETL job to clean and massage input data using Spark
- Hands-on exercise: Implementing ad-hoc queries using Spark SQL and DataFrames
- Hands-on exercise: Visualisation of the results of Spark queries using the Spark Notebook
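Ad-hoc queries in Spark SQL look just like HiveQL once a DataFrame is registered as a view (with `df.createOrReplaceTempView("plays")` and then `spark.sql(...)`). A hypothetical sketch of such a query; the table and column names are illustrative assumptions:

```sql
-- Hypothetical ad-hoc query, e.g. passed to spark.sql("""...""")
-- after registering a DataFrame as the temporary view `plays`.
SELECT song_id,
       COUNT(*)                AS play_count,
       COUNT(DISTINCT user_id) AS unique_listeners
FROM plays
WHERE event_date >= '2024-01-01'
GROUP BY song_id
ORDER BY play_count DESC
LIMIT 10;
```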
Bonus: An overview of fast SQL-on-Hadoop solutions such as Hive, Spark SQL, Impala, Presto and Tez
Introduction to Apache Sqoop
- Hands-on exercise: Importing structured data from MySQL to HDFS and Hive using Sqoop
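A hedged sketch of what such a Sqoop import can look like; the host, database, credentials and paths below are placeholder assumptions, and the commands require a running cluster:

```
# Import a MySQL table into an HDFS directory (names are hypothetical).
sqoop import \
  --connect jdbc:mysql://db.example.com/streamrock \
  --username train --password-file /user/train/.db-password \
  --table subscriptions \
  --target-dir /user/train/subscriptions \
  --num-mappers 4

# A similar import can create and load a Hive table directly:
sqoop import \
  --connect jdbc:mysql://db.example.com/streamrock \
  --username train --password-file /user/train/.db-password \
  --table subscriptions \
  --hive-import --create-hive-table --hive-table streamrock.subscriptions
```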
Real-time data collection with Apache Kafka
- Hands-on exercise: Interacting with a Kafka Cluster to produce and consume messages with CLI scripts
- Hands-on exercise: Using Kafka Java Producer with Avro Schema Registry
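The CLI exercise uses the console scripts that ship with Kafka. A hedged sketch (broker address and topic name are assumptions, and the commands require a running Kafka cluster):

```
# Produce messages typed on stdin to a topic:
kafka-console-producer.sh --bootstrap-server broker1:9092 --topic song-plays

# In another terminal, consume the same topic from the beginning:
kafka-console-consumer.sh --bootstrap-server broker1:9092 \
  --topic song-plays --from-beginning
```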
Introduction to Apache Oozie
- Hands-on exercise: Building and executing an Oozie workflow
- Hands-on exercise: Scheduling an Oozie workflow with an Oozie coordinator
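An Oozie workflow is defined as an XML DAG of actions. A minimal hedged sketch of a `workflow.xml` with a single filesystem action (the workflow name and path are illustrative assumptions):

```xml
<!-- Hypothetical minimal workflow: delete an output directory, then finish. -->
<workflow-app xmlns="uri:oozie:workflow:0.5" name="daily-etl">
  <start to="clean-output"/>
  <action name="clean-output">
    <fs>
      <delete path="${nameNode}/user/train/etl-output"/>
    </fs>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <kill name="fail">
    <message>Workflow failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
  </kill>
  <end name="end"/>
</workflow-app>
```

Such a workflow is submitted with `oozie job -config job.properties -run`; scheduling it to run periodically is the job of a separate coordinator definition.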
Completed in half the estimated time and with a fivefold improvement on data collection goals, the robust product has exponentially increased processing capabilities. GetInData’s in-depth engagement, reliability, and broad industry knowledge enabled seamless project execution and implementation.
GetInData has been supporting us in building production Big Data infrastructure and implementing real-time applications that process large streams of data. In light of our successful cooperation with GetInData, their unique experience and the quality of work delivered, we recommend the company as a Big Data vendor.
GetInData delivered a robust mechanism that met our requirements. Their involvement allowed us to add a feature to our product, despite not having the required developer capacity in-house.
Their consistent communication and responsiveness enabled GetInData to drive the project forward. They possess comprehensive knowledge of the relevant technologies and have an intuitive understanding of business needs and requirements. Customers can expect a partner that is open to feedback.
We sincerely recommend GetInData as a Big Data training provider! The trainer is a very experienced practitioner and he gave us a lot of tips regarding production deployments, possible issues as well as good practices that are invaluable for a Hadoop administrator.
The engineers and administrators at GetInData are world-class experts. They have proven experience in many open-source technologies such as Hadoop, Spark, Kafka and Flink for implementing batch and real-time pipelines.
Other Big Data Training
Machine Learning Operations Training (MLOps)
This four-day course will teach you how to operationalize Machine Learning models using popular open-source tools, like Kedro and Kubeflow, and deploy them using cloud computing.
Hadoop Administrator Training
This four-day course provides the practical and theoretical knowledge necessary to operate a Hadoop cluster. We put great emphasis on practical hands-on exercises that prepare participants to work as effective Hadoop administrators.
Advanced Spark Training
This two-day training is dedicated to Big Data engineers and data scientists who are already familiar with the basic concepts of Apache Spark and have hands-on experience implementing and running Spark applications.
Data Analyst Training
This four-day course teaches data analysts how to analyse massive amounts of data available in a Hadoop YARN cluster.
Real-Time Stream Processing
This two-day course teaches data engineers how to process unbounded streams of data in real time using popular open-source frameworks.
Modern Data Pipelines with DBT
In this one-day workshop, you will learn how to create modern data transformation pipelines managed by DBT. Discover how to improve the quality of your pipelines and the workflow of your data team by introducing a tool that standardizes good practices within the team.
Interested in our solutions?
Together, we will select the best Big Data solutions for your organization and build a project that will have a real impact on your business.