Camus, a MapReduce job that loads data from Kafka into HDFS, has a number of time-related configuration settings and assumptions. These settings control how many messages are consumed from Kafka in each Camus run and where the data is stored in HDFS. I summarize them in this blog post.
The LinkedIn Engineering blog is a great source of technical posts about building and using large-scale data pipelines with Kafka and its “ecosystem” of tools. In this post I provide several pictures and diagrams (including quotes) that summarize how the data pipeline has evolved at LinkedIn over the years. The actual content is based […]
In the first part of this blog series I described a few challenges that I faced when quickly implementing a simple Hive query and scheduling it periodically on the Hadoop cluster. These challenges include data cataloguing, data discovery, data lineage and process scheduling. I also explained how they can be addressed using existing […]
In this tutorial, we focus on HDFS snapshots. Common use cases of HDFS snapshots include backups and protection against user errors. To demonstrate the functionality of HDFS snapshots, we create an “important” directory in HDFS, create a snapshot of it and “accidentally” remove a file from the directory. Finally, we recover the file from the snapshot.
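The steps above can be sketched with the standard HDFS CLI; the directory path, snapshot name and file name here are illustrative placeholders, not the ones used in the tutorial:

```shell
# Allow snapshots on the directory (requires HDFS admin privileges)
hdfs dfsadmin -allowSnapshot /user/alice/important

# Create a snapshot named "s1" of the "important" directory
hdfs dfs -createSnapshot /user/alice/important s1

# "Accidentally" remove a file from the directory
hdfs dfs -rm /user/alice/important/data.txt

# The file is still preserved in the read-only .snapshot directory
hdfs dfs -ls /user/alice/important/.snapshot/s1

# Recover the file by copying it back from the snapshot
hdfs dfs -cp /user/alice/important/.snapshot/s1/data.txt /user/alice/important/
```

Note that snapshots are read-only and record only the delta against the current state, so creating one is cheap even for large directories.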
We are happy to say that our Refcard, titled Getting Started with Apache Hadoop, has already been published by DZone. This Refcard presents Apache Hadoop, a software framework that enables distributed storage and processing of large datasets using simple high-level programming models. The card covers the most important concepts of Hadoop, describes its architecture, and […]