Tutorial
7 min read

NiFi Ingestion Blog Series. Part VI - I only have one rule and that is … - recommendations for using Apache NiFi

Apache NiFi, a big data processing engine with a graphical web UI, was created to give non-programmers the ability to swiftly and codelessly create data pipelines, freeing them from text-based methods of implementation. Unfortunately, we live in a world of trade-offs, and those features come at a price. The purpose of this blog series is to share our experience and lessons learned from working with production NiFi pipelines; this post is the sixth and final part.


In this post, we sit back, consider all the details presented in the previous articles and extract some general rules and lessons learned that may be useful to other data engineers.

Harness the complexity 

Flow visualization is one of NiFi's greatest features: when you create a flow, it is usually self-explanatory. If we want to keep reaping this benefit, we must keep the structure of the flow fairly clear. The rules are mostly analogous to those used when writing code. It is worth mentioning that what counts as a clear structure is subjective, so the suggestions below are rules of thumb rather than a rigid framework:

  • If your flow doesn’t fit on a visible canvas, or you have to zoom out so far that you can no longer read the processor names, it’s probably time to move some parts of the flow into process groups.
  • If your flow gets too deep, with more than four levels of nesting, it may be time to restructure it or move some logic into scripts or custom processors.
  • If possible, avoid manipulating attributes that are widely used in the flow. An attribute, once set, is often read in many places, and changing it in one place can easily break logic several steps later.
  • Keep as much as you can explicit. It is really tempting, especially when writing scripts, to manipulate attributes implicitly, but if you have to debug where a certain attribute changes, it is usually better to have it specified in processor properties than to hunt for it in a script body. For custom processors, keep good documentation.
  • Sometimes your flow simply has too much logic inside. In that case, consider splitting it into a few flows connected by an external service like Kafka. The decrease in size makes each flow more readable and keeps the parts reasonably separate, so you only have to debug specific parts. It is like eating spaghetti one noodle at a time: the shorter the noodle, the cleaner the flow.

Choose the right tool

To avoid pitfalls while developing, remember that while NiFi is great for a wide variety of problems, for some it is simply not. Developing some features in NiFi is not feasible when they are readily available in other tools, so it is a huge mistake (one that happens more often than it should) to equate the size of your technology stack with the complexity of the solution. Integrating with other processing engines, databases, microservices and so on is a core assumption built into the design of NiFi. So even if most of the processing happens in NiFi, it is always worth asking whether it is the right tool for the job.

If you need stream processing with windowing or non-trivial logic, consider other technologies like Apache Flink. If you need batch processing on a Hadoop cluster, think of executing Hive queries from NiFi. If the processing cannot be expressed in SQL, consider writing a separate Spark job for it. On the other hand, if you need to manage files on HDFS or generate and run Hive queries, then NiFi is a really good choice.

Choosing NiFi does not mean avoiding writing any code, but the amount of code that has to be written can be significantly reduced.

Keep your finger on the pulse

Development in NiFi is based on out-of-the-box processors, which makes developers more dependent on available solutions than in classic development. We can, of course, implement custom functionality with a flow, a script or a custom processor, but that is usually problematic maintenance-wise. Consequently, it is vital to stay up to date with the features added in new versions of NiFi. This happened to us when we needed a retry mechanism for communicating with third-party services like Hive, HDFS, etc. There was no available solution, so we implemented a retry process group that did what we needed. The only issue was that the process group contained eleven processors and was placed in multiple spots in the flow, which resulted in around 250 extra processors. Fortunately, a couple of months later the RetryFlowFile processor was released, so we upgraded NiFi to the newer version and used it instead.
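We cannot reproduce the original process group here, but the behaviour it implemented, and that RetryFlowFile now provides out of the box, boils down to bounded retries with backoff before giving up. A minimal Python sketch of that logic, with hypothetical parameter names:

```python
import time

def with_retries(action, max_retries=3, initial_backoff_s=1.0, multiplier=2.0):
    """Run `action`, retrying on failure with exponential backoff.

    Once `max_retries` retries are exhausted, the last exception is
    re-raised - the equivalent of routing a flowfile to a failure
    relationship in NiFi.
    """
    backoff = initial_backoff_s
    for attempt in range(max_retries + 1):
        try:
            return action()
        except Exception:
            if attempt == max_retries:
                raise
            time.sleep(backoff)
            backoff *= multiplier
```

RetryFlowFile packages the same idea declaratively: it tracks the attempt count in a flowfile attribute and routes to separate relationships when retrying and when retries are exhausted, so no scripting is needed.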

The lessons learned are clear:

  • If you are solving a generic problem, sooner or later someone else will solve it too. 
  • Keep up to date with the NiFi change log to see what is coming with the latest releases. 

CI and CD take time

From our experience, continuous integration and continuous deployment of NiFi projects are much more time-consuming than in other processing technologies. Depending on how sensitive the data is and how critical the process is, there are a few options for handling it:

  • Develop directly on a single main NiFi cluster. Disclaimer: developing on production is generally a really bad idea, but in some cases it is possible. You have one registry, and versioning is easy.
  • Have a single NiFi Registry and multiple NiFi clusters. If you can afford to have production elements modifiable from a development environment, this solves a lot of the problems of moving flows between clusters.
  • If you can’t do either of the above… well, good luck, there is a lot of work ahead of you. NiFi has no out-of-the-box support for migrating whole flows between environments. It is doable, but really hard if you want an automated process.
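An automated promotion process typically ends up scripting the NiFi Registry REST API: export a versioned flow snapshot from the development registry, then import it into the target environment. A minimal sketch of the export side, assuming the standard `/nifi-registry-api` endpoint layout; the registry host, bucket and flow identifiers below are hypothetical:

```python
import json
import urllib.request

def snapshot_url(registry_base, bucket_id, flow_id, version):
    """Build the URL of one versioned flow snapshot in the Registry REST API."""
    return f"{registry_base}/buckets/{bucket_id}/flows/{flow_id}/versions/{version}"

def export_snapshot(registry_base, bucket_id, flow_id, version):
    """Download a versioned flow snapshot as JSON from the source registry."""
    url = snapshot_url(registry_base, bucket_id, flow_id, version)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```

Importing the snapshot into another registry (and rewriting environment-specific parameters on the way) is where most of the real work lies; the NiFi Toolkit CLI can help, but expect to glue the pieces together yourself.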


Conclusion

This is the sixth and final post in our series. We have seen comments under some of the previous posts saying that NiFi s**ks, which we do not agree with. We are engineers who have spent quite some time with NiFi, so we write about the things we had issues with and how we solved them. The technology is not fully mature yet and is still evolving, but for many scenarios development in NiFi is lightning fast, and it is definitely a technology we recommend.

2 December 2020
