While many problems can be solved in batch, the stream-processing approach can give you even more benefits. In this blog post series we’ll discuss a real-world example of user session analytics to give you a use-case-driven overview of business and technical problems that modern stream-processing technologies like Apache Flink help you […]
We are honoured to announce that Adam Kawa &amp; Piotr Krewski led a special WHUG &amp; Data Science Warsaw meetup that took place at the University of Warsaw Library on January 19th, 2017. There were more than 100 participants and we received really positive feedback after the event. Meetup Overview Title: Yellow elephant? What […]
As the Big Data Tech Warsaw 2017 conference is getting closer, we’d like to highlight the most interesting topics that will be covered during this exciting event. This year the event will feature more than 25 technical talks given in four parallel tracks.
Schema evolution of a Hive table backed by the Avro file format allows you to modify the table schema in several “schema-compatible” ways without the need to rewrite all existing data. Thanks to that, your HiveQL queries can read old and new Avro files uniformly using the current table schema. In this blog post I briefly […]
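As a minimal sketch of the idea the teaser describes (the `events` table and its fields are hypothetical), a schema-compatible change in HiveQL might look like this — a new nullable field with a default is added, so old Avro files written without it remain readable:

```sql
-- Hive table backed by Avro, with the schema kept in table properties:
CREATE TABLE events
  STORED AS AVRO
  TBLPROPERTIES ('avro.schema.literal'='{
    "type": "record", "name": "Event",
    "fields": [
      {"name": "user_id", "type": "string"},
      {"name": "ts",      "type": "long"}
    ]}');

-- Schema-compatible evolution: add a field with a default value.
-- Avro schema resolution fills in "country" when reading old files,
-- so no existing data needs to be rewritten:
ALTER TABLE events SET TBLPROPERTIES ('avro.schema.literal'='{
  "type": "record", "name": "Event",
  "fields": [
    {"name": "user_id", "type": "string"},
    {"name": "ts",      "type": "long"},
    {"name": "country", "type": ["null", "string"], "default": null}
  ]}');
```

After the `ALTER TABLE`, queries against `events` see the `country` column for both old and new files, with `NULL` where the field was absent at write time.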
In this short post I will focus on the user management aspects of HUE, something that every administrator needs to tackle.
During my 6-year Hadoop adventure, I had the opportunity to work with Big Data technologies at several companies, ranging from fast-growing startups (e.g. Spotify) to global corporations and academic institutions. What really amazed me were the differences in how the use-cases were defined, how fast valid solutions were built and how money was spent and […]
We are excited to announce that GetInData has become the co-organizer (together with our partner Evention) of Big Data Tech Warsaw 2017. The conference will be held in Warsaw, Poland, on February 9th, 2017.
In this blog post we share the motivation, current status and challenges of our new project, called AirHadoop. AirHadoop follows the sharing-economy model: it aims to let companies use idle Hadoop clusters belonging to others to temporarily gain more computing power and storage. Sharing economy A sharing economy is an economic […]