Tutorial
6 min read

Semi-supervised learning on real-time data streams

Working with unlabeled data is inherent to many machine learning applications. There are cases when we do not immediately know the outcome of the action described by the input data, because the result is delayed in time. Moreover, sometimes the results never become available at all, for example when there is a non-negligible cost associated with acquiring labels. So, how can you benefit from unlabeled data in order to improve prediction?

In this blog post, I will present a possible way to make better use of partially labeled data. Using the techniques described later in the article, I will augment the labeled data features with features extracted from nearby unlabeled data, in order to improve the accuracy of label prediction. The augmentation algorithm generates the features automatically based on the provided hyperparameters, which can make it an important tool during the data exploration process. Promising generated features can then be used, for example, to improve machine learning models.

This blog post's subject and contents are based on my Master's thesis "Semisupervised learning under delayed label setting". Let’s start!

Semi-supervised problem example case

One example of a semi-supervised problem is the prediction of flight delays. In this case, while the features of the flight (such as distance, departure time, planned arrival time, origin and destination location) may be available before landing, the value that we want to predict, the actual delay, is only available after landing. This scenario, where the event features are available before the event labels, is called a delayed label acquisition scenario.

It is possible to use semi-supervised learning for such a scenario. To convert the input stream to suit semi-supervised learning in the simplest way, we can derive a new data stream that outputs one unlabeled record when the event features become available, and then a second, labeled record when the event label becomes available, as sketched below.
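
Here is a minimal sketch of this conversion in Python. It is illustrative only, not the thesis implementation; the field names and timestamps are made up.

```python
# Minimal sketch: split each delayed-label event into an unlabeled
# record (at feature time) and a labeled record (at label time),
# then replay all records in time order.
def to_semisupervised_stream(events):
    """events: iterable of (feature_time, features, label_time, label)."""
    records = []
    for feature_time, features, label_time, label in events:
        records.append((feature_time, features, None))   # unlabeled record
        records.append((label_time, features, label))    # labeled record
    yield from sorted(records, key=lambda r: r[0])

# A flight whose features are known at departure (t=0) and whose
# delay label only arrives after landing (t=2).
flights = [(0, {"distance": 450, "dep_hour": 8}, 2, 17)]
for t, x, y in to_semisupervised_stream(flights):
    print(t, x, y)   # (0, ..., None) first, then (2, ..., 17)
```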

The semi-supervised data stream augmentation algorithm concept

In order to make the input data augmentation useful, let's assume that recently occurring events may contain valuable information for predicting the current event. In the case of the flight delay prediction example, this assumption makes sense: flights taking place at a similar time and location experience shared external conditions, such as weather or traffic congestion, so we may expect patterns to emerge among nearby delay values.

As a result, if nearby events are correlated, the features of events that took place at a similar time and location should be included in the prediction, whether those events are labeled or not.

The idea behind the implementation of the automatic data augmentation algorithm is to keep a buffer of the features of recently observed events. Those features can be concatenated, clustered and aggregated, or processed by some other feature extraction method.

The data stream may be logically partitioned in order to increase the locality of the buffer. In the flight delay prediction example, it may make sense to create a separate buffer partition for each destination airport: if many flights at a particular airport were recently delayed, we can expect the following flights to be delayed as well.
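
A hypothetical sketch of such a partitioned recency buffer follows; the partition key and feature fields are illustrative, not taken from the thesis code.

```python
# A bounded buffer of recent feature vectors, partitioned by a key
# such as the destination airport; old records fall off automatically.
from collections import defaultdict, deque

class PartitionedBuffer:
    def __init__(self, size):
        self.partitions = defaultdict(lambda: deque(maxlen=size))

    def add(self, key, features):
        self.partitions[key].append(features)

    def recent(self, key):
        return list(self.partitions[key])   # oldest first

buf = PartitionedBuffer(size=100)
buf.add("JFK", [450.0, 8.0])    # illustrative (distance, departure hour)
buf.add("JFK", [1200.0, 9.5])
print(buf.recent("JFK"))        # [[450.0, 8.0], [1200.0, 9.5]]
```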

Solution

The semi-supervised data stream augmentation algorithm was created using [MOA](https://moa.cms.waikato.ac.nz/). Four main approaches to enriching the input data with the records kept in the buffer were tested:

1. The naïve approach

The simplest approach is to concatenate all of the buffer's records onto the current record, without further feature extraction. However, this approach's efficiency can be hampered by high data dimensionality: if the incoming event and each of the n buffered records contain p attributes, the resulting vector contains (n + 1) * p attributes.
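
A minimal sketch of the naïve concatenation, assuming the records are NumPy arrays:

```python
# Naive augmentation: append all n buffered p-dimensional records to
# the current record, producing (n + 1) * p attributes.
import numpy as np

def naive_augment(current, buffer):
    return np.concatenate([current, *buffer])

buffer = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]   # n = 2, p = 2
x = np.array([5.0, 6.0])
print(naive_augment(x, buffer).shape)   # (6,) == (n + 1) * p
```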

2. The concatenated feature extraction approach

Using dimensionality reduction techniques on the buffer records can improve on the naïve approach. Here, each of the events currently in the buffer is processed using a feature extraction algorithm, and the resulting extracted features are concatenated with the original event attributes.
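
A sketch of this approach, with PCA as a stand-in extractor; this is an illustration, not necessarily the extractor used in the thesis.

```python
# Approach 2 sketch: reduce the buffered records to k extracted
# features each (PCA here, purely as an example extractor), then
# concatenate them with the current record's attributes.
import numpy as np
from sklearn.decomposition import PCA

def extract_and_concat(current, buffer, k=1):
    stacked = np.vstack(buffer)                           # shape (n, p)
    reduced = PCA(n_components=k).fit_transform(stacked)  # shape (n, k)
    return np.concatenate([current, reduced.ravel()])     # p + n * k values

buffer = [np.array([1.0, 2.0]), np.array([3.0, 4.1]), np.array([5.0, 6.2])]
print(extract_and_concat(np.array([7.0, 8.0]), buffer, k=1))
```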

3. The cluster aggregation approach

In this approach, a clustering model performs unsupervised clustering of the events currently kept in the buffer. Assigning each buffered element to a cluster gives us the number of elements belonging to each cluster group. These counts are concatenated onto the current input data record.
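
A sketch with k-means standing in for the unsupervised clusterer; again, this is an illustration rather than the specific clusterer used in the thesis.

```python
# Approach 3 sketch: cluster the buffered records and append the
# per-cluster counts to the current record.
import numpy as np
from sklearn.cluster import KMeans

def cluster_counts_augment(current, buffer, n_clusters=2):
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(np.vstack(buffer))
    counts = np.bincount(labels, minlength=n_clusters)   # events per cluster
    return np.concatenate([current, counts])

buffer = [np.array([0.0, 0.1]), np.array([0.2, 0.0]), np.array([5.0, 5.1])]
print(cluster_counts_augment(np.array([1.0, 1.0]), buffer))   # counts: 2 and 1
```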

4. The cluster complex event aggregation approach

As in the previous approach, each element in the buffer is clustered. However, instead of aggregating the per-cluster counts, occurrences of complex event decision rules are counted. Each rule is represented as a regular expression, for example Ca X* Cb. This rule is satisfied when the buffer contains an event belonging to cluster Ca, followed later in the buffer by an event belonging to cluster Cb.
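
A sketch of the rule matching, where the buffer is rendered as a string of cluster symbols and each rule becomes a regular expression; for simplicity a binary match indicator is used here instead of a full occurrence count.

```python
# Approach 4 sketch: represent the buffer as a string of cluster
# symbols in arrival order and evaluate regex-encoded rules against
# it. "a.*b" plays the role of the "Ca X* Cb" rule from the text.
import re

def rule_features(cluster_sequence, rules):
    s = "".join(cluster_sequence)
    return [1 if re.search(rule, s) else 0 for rule in rules]

# Buffer clustered, oldest first: Ca, Cc, Cb -> "Ca X* Cb" fires.
print(rule_features(["a", "c", "b"], ["a.*b", "b.*a"]))   # [1, 0]
```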

Results

The tests were performed on semi-supervised datasets: three open source datasets, a proprietary closed source dataset, and three generated datasets. The most successful augmentation approaches slightly outperformed the unaugmented approach on three of the four real-life-based datasets. Which approach is best depends on the dataset: each of approaches 2, 3 and 4 was the most performant on at least one tested dataset, while the naïve approach, as expected, turned out to be subpar.

Moreover, the computational overhead introduced by the augmentation itself was negligible compared to the overhead caused by a larger number of attributes.

In other words, a model with fewer total attributes, some of which were generated by the augmentation algorithm, should be faster than a model with more total attributes and no augmentation.

Source code

An open source implementation of the methods tested in the thesis is available at https://github.com/kosmag/FeatExtream. The repository contains MOA extension classes and Jupyter notebooks for benchmark data creation and result visualization. The code is licensed under the GNU General Public License v3.0.

Further work

This area of research is expanding quickly, and the solution presented above is meant as a proof of the viability of the concept.

In order to create a production-ready implementation of the solution, it makes sense to look into the capabilities of state-of-the-art feature stores, for example [Feast](https://docs.feast.dev/). Using an existing feature store makes it possible to combine the improvement in prediction performance with scalability, existing integrations with on-premise and cloud data sources, and the supported deployment options.

Conclusions

In conclusion, automated data augmentation, as suggested in the thesis, may provide a boost in semi-supervised prediction performance, at the cost of added complexity in the prediction algorithm. How much performance can be gained this way will vary from dataset to dataset.

The low computational overhead means that such augmentation may be a computationally cheap, automatic way of finding useful features for the model.

Finally, I would like to thank my thesis supervisor. He offered me critical advice as well as constant encouragement. His engagement throughout the thesis writing process made the whole experience much more pleasant.

machine learning
ML
real-time data
7 July 2023
