
Puzzles in the time of plague: truly over-engineered audio spectrum analyzer

Quarantaine project

Staying at home is not exactly my strong point. But tough times have arrived and everybody needs to change their habits and re-organize their time from scratch. So, since I've increased the amount of time I spend at home, I wanted to use it to create something that had been on my mind for quite a while. I was wondering if I could build something with new-to-me technologies and languages, like back in my good old university days, when a lecturer would throw me into the unknown waters of programming languages, frameworks and approaches. For me, an avid Java developer, those unknown waters are Python. I hadn't written anything in this language before and had used it mainly as a handy console calculator. Besides the programming language, I wanted to gain some knowledge about Flink, which had been on my horizon for more than a year, but I hadn't had the opportunity to switch to it from Apache Beam. To achieve all that, I needed an interesting problem to tackle.

Fourier analysis converts a signal from the time domain into the frequency domain. If, like me, you spent hours staring at visualizations in Winamp or Foobar (back when you had even more free time than now), you were watching the product of a Fast Fourier Transform. So, the problem was how to harness Python, Flink and some spices (like Kafka and Avro) and turn them into some kind of spectrum analyzer. I had the technologies and I had the problem, but these were nothing without some restrictions.


Requirements

Everything needed to happen in real time. I wanted to play some music and, at the same time, see the output of the FFT in the form of visualizations. Latency was crucial: I wanted to hear and see the beat at the same moment. Let's go then!

Sampler

The first thing that had to be done was to sample the sound. I created a Python application which used this library to get the sound wave into my application. This (on Linux and Windows) allowed me to sample the loopback interface. What was the data? A stream of numbers, sampled at 44,100 Hz. In the case of stereo sound, every second the library returned 88,200 doubles (44,100 per channel) representing the sound coming out of my speakers. I needed to decide how many numbers to record at once. How could I choose the right value? Well, "1" was pure real time, but it would generate 44,100 messages landing in Kafka every second, which felt like a little too much. "44100", on the other hand, was great from the throughput point of view (only one message per second), but it would create a significant delay between the live sound and the processing. "256" sounded just right: it would create 44100 / 256 ≈ 172 messages per second, each with a maximum delay of 5.8 ms. So, the sampler was done. Next in line was a message broker to distribute the messages for further processing.
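For illustration, here is a minimal sketch of such a sampling loop. The post doesn't name the audio library it used, so this assumes the third-party sounddevice package; the publish() helper is a placeholder for the Kafka producer described below, and actually selecting the loopback device is platform-specific and omitted here.

```python
# Minimal sampling-loop sketch (assumes the `sounddevice` package;
# publish() is a hypothetical helper defined in the producer sketch below).
import sounddevice as sd

SAMPLE_RATE = 44100   # Hz
CHUNK = 256           # samples per message -> ~5.8 ms per chunk

def callback(indata, frames, time, status):
    # indata has shape (256, 2) for stereo: 512 floats per chunk
    publish(indata.copy())  # hand the chunk over to the Kafka producer

with sd.InputStream(samplerate=SAMPLE_RATE, blocksize=CHUNK,
                    channels=2, callback=callback):
    sd.sleep(60_000)  # keep sampling for a minute
```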

Kafka is a very powerful and universal tool. It offers near real-time message distribution thanks to its clever design and efficient use of hardware. It sounded like the right message broker for real-time data analysis. Actually, one of the side quests of this project was to measure just how fast Kafka is. But, going back to the sampler application: I had the data in Python arrays and I had to publish it to Kafka. Apache provides an official Python Avro library. Well… the one thing I can say about this library is that you should consider using other libraries! It is written purely in Python and it was terribly slow. I decided to use the fastavro library (https://github.com/fastavro/fastavro) instead, which was much faster than the official one. So, ~172 messages per second were landing in Kafka, each one with ~6 ms of latency compared to the sound coming out of the speakers. So far so good.
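A rough sketch of that producer side could look like the snippet below, assuming the kafka-python client and a trivial Avro schema with a single array-of-doubles field (the schema, field and topic names are made up for the example; the real project may differ). It also fills in the publish() placeholder from the sampler sketch above.

```python
# Producer sketch: serialize a chunk with fastavro and send it to Kafka.
# Schema, field and topic names are assumptions, not the project's actual ones.
import io
from fastavro import parse_schema, schemaless_writer
from kafka import KafkaProducer

schema = parse_schema({
    "name": "SamplePacket", "type": "record",
    "fields": [{"name": "samples", "type": {"type": "array", "items": "double"}}],
})
producer = KafkaProducer(bootstrap_servers="localhost:9092")

def publish(chunk):
    buf = io.BytesIO()
    schemaless_writer(buf, schema, {"samples": chunk.flatten().tolist()})
    producer.send("audio-samples", buf.getvalue())  # ~172 messages/s end up here
```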

Next in line was Flink. Its streaming capabilities are well known and battle-proven! Here, the code was written in Java. The Avro data from Kafka was deserialized into Java objects and the actual FFT happened. But there was a problem: throwing 256 samples into the FFT produced only 128 frequency buckets, which was far below my expectations when it came to resolution. A much bigger input was needed. One way to work around this was to create a sliding window of samples. Such an aggregation would pre-aggregate samples and emit a window of the 8192 freshest samples every time a new 256-sample packet arrived from Kafka. It had a significant drawback though: a window this large meant that the FFT would produce frequency buckets from the sound of the last 8192 / 44100 Hz ≈ 185 ms. Pricey, but I decided to go that way.
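The actual job runs in Flink and is written in Java, so the following is only a small NumPy sketch of the windowing and FFT arithmetic described above, not the project's code.

```python
# Illustration of the sliding-window/FFT arithmetic (not the actual Flink/Java job).
import numpy as np

SAMPLE_RATE, CHUNK, WINDOW = 44100, 256, 8192

window = np.zeros(WINDOW)

def on_new_chunk(chunk):
    """Slide the window by 256 samples and FFT the freshest 8192 samples."""
    global window
    window = np.concatenate([window[CHUNK:], chunk])
    spectrum = np.fft.rfft(window)     # 8192/2 + 1 = 4097 complex bins
    return np.abs(spectrum)            # real magnitudes to plot

# frequency resolution: SAMPLE_RATE / WINDOW ≈ 5.4 Hz per bin
# window span:          WINDOW / SAMPLE_RATE ≈ 185 ms of sound per FFT
```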

The FFT produced complex numbers, so after converting them to real magnitudes, the output was serialized to Avro and sent to Kafka once again. On the other side of the pipeline, yet another Python application was waiting to draw the results on a chart. Matplotlib is a commonly used tool for plotting all kinds of charts, but I didn't know that it also has some serious animation features. This was really low-hanging fruit: all that remained was to get the data from Kafka, deserialize it with fastavro and plot it.
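A sketch of that consumer side might look as follows, assuming kafka-python, fastavro and matplotlib's FuncAnimation (the topic and schema names are again made up for the example).

```python
# Consumer sketch: read FFT output from Kafka, decode with fastavro,
# and redraw the chart with matplotlib's animation API.
import io
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from fastavro import parse_schema, schemaless_reader
from kafka import KafkaConsumer

schema = parse_schema({
    "name": "Spectrum", "type": "record",
    "fields": [{"name": "magnitudes", "type": {"type": "array", "items": "double"}}],
})
consumer = KafkaConsumer("audio-spectrum", bootstrap_servers="localhost:9092")

fig, ax = plt.subplots()
(line,) = ax.plot([], [])

def update(_frame):
    record = schemaless_reader(io.BytesIO(next(consumer).value), schema)
    line.set_data(range(len(record["magnitudes"])), record["magnitudes"])
    ax.relim(); ax.autoscale_view()
    return (line,)

anim = FuncAnimation(fig, update, interval=30)  # redraw ~33 times a second
plt.show()
```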

The results

Quarantaine project: Kafka-Flink audio spectrum analyzer

You can find the output of the whole process here: Quarantaine project: Kafka-Flink audio spectrum analyzer. I was satisfied with the results. Firstly, it works, which is already a great success. Secondly, it works with only a minimal delay. And last, but not least: I learned a lot and tried a lot of new technologies. Moving out of my comfy, well-known tech stack was one of the best things that happened to me during the last couple of months.

---

PS. If you notice the slight ~180 ms latency in the YouTube video, please consider moving ~60 m away from your screen and speakers :)

PS2: If you are interested in the code, you can find it here

big data
apache flink
apache kafka
avro
3 June 2020
