Real-Time Stream Processing
This two-day course teaches data engineers how to process unbounded streams of data in real time using popular open-source frameworks.
After the training, participants will be able to independently implement real-time big data processing scenarios with Apache Kafka and Apache Flink.
They will gain an understanding of the inner workings of these widely used open-source streaming technologies.
Introduction to Apache Kafka and Flink
Real-time data collection with Apache Kafka
- Key concepts of log-based approach
- Daemons and cluster infrastructure
- Hands-on exercise: Interacting with a Kafka cluster to produce and consume messages with CLI scripts
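The log-based approach mentioned above can be illustrated with a small sketch. This is a toy model, not Kafka itself: a topic is treated as an append-only log, and each consumer tracks its own offset, so many consumers can read the same records independently.

```python
# Toy model of a log-based message broker (illustrative only, not Kafka's
# actual implementation). A topic is an append-only log; consumers keep
# their own read offsets instead of the broker deleting delivered messages.

class TopicLog:
    def __init__(self):
        self.records = []  # the append-only log

    def produce(self, value):
        self.records.append(value)
        return len(self.records) - 1  # offset of the newly appended record

    def consume(self, offset, max_records=10):
        # Read a batch starting at the consumer's offset; return the batch
        # together with the next offset the consumer should commit.
        batch = self.records[offset:offset + max_records]
        return batch, offset + len(batch)

log = TopicLog()
for event in ["login", "click", "logout"]:
    log.produce(event)

# Two independent consumers read the same log from their own offsets.
batch_a, next_a = log.consume(0)
batch_b, next_b = log.consume(1)
print(batch_a, next_a)  # ['login', 'click', 'logout'] 3
print(batch_b, next_b)  # ['click', 'logout'] 3
```

Because consuming only advances an offset, replaying data is as simple as resetting that offset, which is one reason the log-based design suits stream processing.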
Interactive reporting and data exploration with Elasticsearch
- A search engine as a core of data-driven decisions
- Live demo: visualizing continuously arriving data with Kibana
Introduction to Apache Flink
- Constructing DataStreams with Flink APIs
- Hands-on exercises: Applying simple filters to a stream of events and running jobs on a YARN cluster
- Grouping data into windows based on different notions of time
- Hands-on exercises: Calculating user session statistics
- Connecting to the external world
- Hands-on exercises: Reading events from Kafka and writing statistics to Elasticsearch for real-time dashboards in Kibana
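The session-statistics exercise above rests on one idea: events for a user belong to the same session while the gap between consecutive event timestamps stays below a threshold, and a larger gap closes the session. The sketch below shows that logic in plain Python; it is a conceptual model of session windows, not the Flink API, and the 30-second gap is an assumed value.

```python
# Conceptual sketch of event-time session windows (the idea behind
# Flink's session windows, not Flink's API). A session for a user stays
# open while consecutive events arrive within `gap` seconds of each other.

SESSION_GAP = 30  # seconds of inactivity that closes a session (assumed)

def sessionize(events, gap=SESSION_GAP):
    """events: list of (user, timestamp) pairs; returns per-user session stats."""
    sessions = []
    open_session = {}  # user -> currently open session
    for user, ts in sorted(events, key=lambda e: (e[0], e[1])):
        current = open_session.get(user)
        if current and ts - current["end"] <= gap:
            # Within the gap: extend the open session.
            current["end"] = ts
            current["count"] += 1
        else:
            # Gap exceeded (or first event): start a new session.
            current = {"user": user, "start": ts, "end": ts, "count": 1}
            open_session[user] = current
            sessions.append(current)
    return sessions

stats = sessionize([("alice", 0), ("alice", 10), ("alice", 100), ("bob", 5)])
for s in stats:
    print(s["user"], s["count"], s["end"] - s["start"])
# alice gets two sessions (the 90-second gap closes the first); bob gets one
```

In a real Flink job the same grouping would be expressed declaratively with `keyBy` and a session window assigner, with watermarks handling out-of-order events, which the course covers in the advanced module.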
Apache Flink Advanced
Deep dive into Apache Flink
- Advanced time handling, when out-of-the-box solutions are not enough
- Daemons and cluster infrastructure, overview of deployment modes, e.g. YARN, Mesos, Docker, standalone
- Accessing fault-tolerant state and how it is checkpointed
- Hands-on exercises: Using low-level functions and state for constructing complex time-based scenarios
- Advantages of the relational approach with StreamSQL
- Hands-on exercises: Querying streams with SQL
- Early alerting based on a sequence of events with Flink CEP library
- Hands-on exercises: Writing pattern sequences and converting matches to alerts
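The alerting exercise above boils down to matching a sequence of events and converting each match into an alert. The sketch below shows that idea in plain Python for a hypothetical scenario (three consecutive failed logins per user); it only illustrates the concept, not the Flink CEP Pattern API.

```python
# Minimal sketch of sequence-based alerting, in the spirit of Flink's CEP
# library (illustrative only). The pattern: three consecutive
# "login_failed" events for the same user, with no other event in between.

def detect_alerts(events, pattern_length=3):
    """events: list of (user, event_type); returns users that triggered alerts."""
    streak = {}  # user -> current run of consecutive failures
    alerts = []
    for user, event_type in events:
        if event_type == "login_failed":
            streak[user] = streak.get(user, 0) + 1
            if streak[user] == pattern_length:
                alerts.append(user)   # the match becomes an alert
                streak[user] = 0      # reset after the alert fires
        else:
            streak[user] = 0          # any other event breaks the sequence
    return alerts

alerts = detect_alerts([
    ("alice", "login_failed"), ("alice", "login_failed"),
    ("bob", "login_failed"), ("alice", "login_failed"),
    ("bob", "login_ok"), ("bob", "login_failed"),
])
print(alerts)  # ['alice']
```

In Flink CEP the same pattern would be declared with the Pattern API and applied per key, with the engine managing the partial-match state that this sketch keeps in the `streak` dictionary.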
Comparison with other streaming frameworks such as Spark Streaming, Kafka Streams and Storm
- Daemons and cluster infrastructure
- How they implement fault-tolerance
- Feature sets
Completed in half the estimated time and with a fivefold improvement on data collection goals, the robust product has exponentially increased processing capabilities. GetInData’s in-depth engagement, reliability, and broad industry knowledge enabled seamless project execution and implementation.
GetInData has been supporting us in building production Big Data infrastructure and implementing real-time applications that process large streams of data. In light of our successful cooperation with GetInData, their unique experience and the quality of work delivered, we recommend the company as a Big Data vendor.
GetInData delivered a robust mechanism that met our requirements. Their involvement allowed us to add a feature to our product, despite not having the required developer capacity in-house.
Their consistent communication and responsiveness enabled GetInData to drive the project forward. They possess comprehensive knowledge of the relevant technologies and have an intuitive understanding of business needs and requirements. Customers can expect a partner that is open to feedback.
We sincerely recommend GetInData as a Big Data training provider! The trainer is a very experienced practitioner and he gave us a lot of tips regarding production deployments, possible issues as well as good practices that are invaluable for a Hadoop administrator.
The engineers and administrators at GetInData are world-class experts. They have proven experience in many open-source technologies such as Hadoop, Spark, Kafka and Flink for implementing batch and real-time pipelines.
Other Big Data Training
Machine Learning Operations Training (MLOps)
This four-day course will teach you how to operationalize Machine Learning models using popular open-source tools, like Kedro and Kubeflow, and deploy them using cloud computing.
Hadoop Administrator Training
This four-day course provides the practical and theoretical knowledge necessary to operate a Hadoop cluster. We put great emphasis on practical hands-on exercises that aim to prepare participants to work as effective Hadoop administrators.
Advanced Spark Training
This two-day training is dedicated to Big Data engineers and data scientists who are already familiar with the basic concepts of Apache Spark and have hands-on experience implementing and running Spark applications.
Data Analyst Training
This four-day course teaches Data Analysts how to analyse massive amounts of data available in a Hadoop YARN cluster.
Modern Data Pipelines with DBT
In this one-day workshop, you will learn how to create modern data transformation pipelines managed by DBT. Discover how you can improve the quality of your pipelines and the workflow of your data team by introducing a tool that standardizes how good practices are incorporated within the team.
Interested in our solutions?
Together, we will select the best Big Data solutions for your organization and build a project that will have a real impact on your business.