
NiFi Ingestion Blog Series. PART I - Advantages and Pitfalls of Lego-Driven Development

Apache NiFi, a big data processing engine with a graphical web UI, was created to give non-programmers the ability to swiftly and codelessly create data pipelines, freeing them from those dirty, text-based methods of implementation. Unfortunately, we live in a world of trade-offs: features come with a price. The purpose of our blog series is to present our experience and the lessons learned when working with production NiFi pipelines. It will be organised into the following articles:

This post is our brief summary of the strengths and weaknesses of Apache NiFi from the perspective of a flow developer.


Rapid Development

Let’s first go through the things that are part of the user experience: imagine you are a data engineer and your task is to create a proof of concept for a simple data ingestion into a data lake. Assume you may have multiple sources, and that no transformations or business logic are required. The use-case is just moving data from point A to point B, with some logging in the middle. NiFi seems to be the right tool for the job.
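To make the use-case concrete: in ordinary code, the whole task amounts to little more than a copy loop with logging. A minimal Python sketch (the directory layout and logger name are illustrative, not part of any NiFi API):

```python
import logging
import shutil
from pathlib import Path

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ingestion-poc")  # illustrative logger name

def ingest(source_dir: Path, target_dir: Path) -> int:
    """Copy every file from point A to point B, logging each transfer."""
    target_dir.mkdir(parents=True, exist_ok=True)
    copied = 0
    for src in source_dir.iterdir():
        if src.is_file():
            shutil.copy2(src, target_dir / src.name)
            log.info("ingested %s", src.name)
            copied += 1
    return copied
```

Keep this sketch in mind as the baseline: everything NiFi does below is a drag-and-drop equivalent of these few lines.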

The first part: development. You’d like to be done with your work and move on to more important stuff, like saving the world or that funny YouTube video. Fortunately for you, NiFi has hundreds of ready-to-use processors, including a variety of connectors to data sources and data sinks; all you have to do is drag and drop them from the top bar onto the canvas and configure them. If you have any doubts about a property, or a component in general, you can right-click on it and open the documentation. After that, you create connections between processors to set the order in which they will be executed. BAM, you’re pretty much done. Because the data flow you created is its own visual representation, checking whether the logic is correct is a trivial task.

The next step is testing whether everything works. NiFi allows you to start and stop each processor separately, so you can control up to which point you want to execute your flow, or from which point to start it. You can also check the input and output of every stage of processing. In case of failure, you get a bulletin with a description of what happened. Combine those three features and you get great insight into the exact workings of the flow. With that, the proof-of-concept solution is done. You’re generally impressed with the simplicity and capabilities of your new tool.

Maturity of Development

Refactoring is a natural part of the development process. This often includes reorganising the flow and moving processors to another process group. At this point, it becomes apparent that no element containing data can be removed or moved to a different process group. This seems to be a valid point of the NiFi architecture, but in many cases it is a pitfall that forces you to hunt for hidden flowfiles before refactoring process groups. You need to find them manually, as there is no automatic way to delete all data within a process group.

Then, as a result of being lost in thought, you delete the wrong component… yup, your worries are spot-on: NiFi does not have an “undo” button anywhere. When you do something, it’s set in stone. Frustration makes you look for a solution to this problem, and the only one available is storing a copy of the process group in NiFi Registry, a version control system for flows. Although its functionality is fairly limited, it allows rollbacks to the last committed version.

It rarely happens in IT that things are simple. Even if they appear that way at first, it doesn’t usually last long. More often than not, the main functionality, the so-called “happy path”, is only a small fraction of the actual work. Sooner or later, other scenarios need to be considered: whenever we connect to an external service, retries are needed, error states need to be handled, and so on.

The issues mentioned above are not NiFi-specific but rather a natural consequence of IT development. Because of that, most currently used programming languages have mechanisms created specifically to mitigate them. In object-oriented programming, whenever we have functionality that is used multiple times, or one that is just complex, we can extract certain steps into separate methods, classes or packages. Given that a method or object can have explicitly stated parameters which we supply, it’s possible to hide a significant portion of the complexity. This behaviour is, at least currently (June 2020), impossible to achieve in NiFi. All information in NiFi is stored inside flowfiles’ attributes, which are passed implicitly throughout all steps of processing. There is no way of knowing what exactly is required without knowing the specifics of the mechanism (abstraction leaks). It is like programming with all instance variables being global and accessed directly. If that wasn’t enough, inner process groups do not throw exceptions: if an error occurs, it has to be handled in every parent process group.
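The contrast is easiest to see outside NiFi. Below, a toy Python comparison of the two styles (the function and attribute names are made up for illustration): the first step receives the whole attribute map, NiFi-style, so its real dependency is invisible in the signature; the second declares exactly what it needs.

```python
# Implicit style, NiFi-like: every step receives the full attribute
# map, and nothing in the signature tells you which keys it touches.
def enrich_implicit(flowfile: dict) -> dict:
    flowfile["region"] = flowfile["country"].upper()  # hidden dependency on "country"
    return flowfile

# Explicit style: the required input is part of the signature, so the
# dependency is visible at every call site and in every IDE tooltip.
def enrich_explicit(country: str) -> str:
    return country.upper()
```

With the implicit version, the only way to learn that `country` must be set upstream is to read the step’s internals, which is exactly the abstraction leak described above.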

Unit tests and integration tests are important. They help you modify the code and refactor it to make it cleaner, while being sure that business features behave the same way before and after the changes. Remember when you were a little kid and testing your code meant running it on some input once and hoping that nothing would crash later? Yup, NiFi doesn’t have any testing framework, so no automated tests, no matter how loudly you cry.
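For comparison, this is what the safety net looks like in ordinary code: a flow step plus an automated test that pins its behaviour down before any refactoring. A minimal pytest-style Python sketch (the step itself is invented for illustration):

```python
def parse_record(line: str) -> dict:
    """One flow step: split a CSV-ish line into a record."""
    name, value = line.split(",", 1)
    return {"name": name.strip(), "value": int(value)}

def test_parse_record():
    # Runs automatically on every change (e.g. in CI); the NiFi
    # equivalent is a manual start-the-processor-and-inspect-the-queue
    # session, repeated by hand after every refactoring.
    assert parse_record("temp, 21") == {"name": "temp", "value": 21}
```

(NiFi does ship a JUnit-based `TestRunner` for testing custom processor code in Java, but there is nothing comparable for the flows you build on the canvas.)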

Summary

What we loved: NiFi web UI allows lightning fast development.

What we hated: It has serious limitations when compared to programming languages.

The better you know a technology, the more of its limitations you are aware of. Would we choose NiFi for our previous projects if we had the knowledge we have now? The answer is YES, but in some cases we would utilize NiFi in a slightly different way. Stay tuned for further posts to read what we mean by that.

Don't forget to read the previous blog post, "Apache NiFi - why do data engineers love it and hate it at the same time".

8 September 2020
