
Geospatial analytics on Hadoop


A few months ago I was working on a project with a lot of geospatial data. The data was stored in HDFS and easily accessible through Hive. One of the tasks was to analyze this data, and the first step was to join two datasets on columns which were geographical coordinates. I wanted an easy and efficient solution. But here is the problem – there is very little support for this kind of operation in the Hadoop world.

Problem

OK, so what’s the problem, actually? Let’s say we have two datasets (represented as Hive tables). The first one is a very large set of geo-tagged tweets. The second one contains city/place geographic boundaries. We want to match them – for every tweet we want to know its location name.

Here are the tables (coordinates are given in simple WKT format):
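A minimal sketch of what these tables could look like, as Hive DDL (table and column names are illustrative, not taken from the original project):

    -- Hypothetical schemas; geometries are stored as WKT strings.
    CREATE TABLE tweets (
      id    BIGINT,
      txt   STRING,
      point STRING   -- e.g. 'POINT (21.01 52.23)'
    );

    CREATE TABLE places (
      name     STRING,
      boundary STRING  -- e.g. 'POLYGON ((20.8 52.1, 21.3 52.1, 21.3 52.4, 20.8 52.4, 20.8 52.1))'
    );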

So how do we do it in Hive or Spark? Without any additional libraries or tricks we can simply do a cross join, which means: compare every element from the first dataset with every element from the second one and then decide (using some user-defined function) whether there is a match.
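As a sketch, the naive version could look like this in Hive, with a hypothetical contains_point UDF that we would have to write ourselves:

    -- contains_point(polygon_wkt, point_wkt) is a hypothetical UDF
    -- that parses both WKT strings and does a point-in-polygon test.
    SELECT t.id, p.name
    FROM tweets t CROSS JOIN places p
    WHERE contains_point(p.boundary, t.point);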

But this solution has two major drawbacks:

  • it is super slow
  • we need to write some code (UDFs) to operate on coordinates (check whether a point is in a polygon, etc.)

For sure there must be a better way!

What are the options?

There are a few libraries which could help us with this task, but some of them give us only a nice API (GIS Tools, Magellan) while others can do spatial joins efficiently (SpatialSpark). Let’s look at them one by one!

Esri GIS Tools for Hadoop

People from Esri (an international company which provides Geographic Information System software) developed and open-sourced GIS Tools for Hadoop. This toolkit contains a few elements, but the two most important ones are:

  • Esri Geometry API for Java – it includes geometry objects, spatial operations and indexing. It can be used in standalone programs or in MapReduce/Spark jobs.
  • Spatial Framework for Hadoop – this library includes user-defined functions (UDFs) that extend Hive to make spatial operations more user-friendly; internally it uses the Esri Geometry API.

To install this toolkit you simply have to add the jars to the Hive classpath and then register the needed UDFs. You can find a more detailed tutorial here.
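In short, the setup looks roughly like this (jar locations depend on your installation):

    ADD JAR /path/to/esri-geometry-api.jar;
    ADD JAR /path/to/spatial-sdk-hadoop.jar;

    CREATE TEMPORARY FUNCTION ST_Point        AS 'com.esri.hadoop.hive.ST_Point';
    CREATE TEMPORARY FUNCTION ST_GeomFromText AS 'com.esri.hadoop.hive.ST_GeomFromText';
    CREATE TEMPORARY FUNCTION ST_Contains     AS 'com.esri.hadoop.hive.ST_Contains';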

Finally, you will be able to run a Hive query like this:
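For example, a query along these lines (using the tables sketched earlier) matches every tweet with the place containing it:

    SELECT t.id, p.name
    FROM tweets t CROSS JOIN places p
    WHERE ST_Contains(ST_GeomFromText(p.boundary), ST_GeomFromText(t.point));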

If you know PostGIS (the GIS extension for PostgreSQL), this will look very familiar to you, because the syntax is similar. Unfortunately, this kind of query is very inefficient in Hive. Hive will do a cross join, which means that for big datasets computations will take an unacceptable amount of time.

Spatial binning

There is a small trick which can help a bit with the efficiency problem when doing spatial joins. It’s called spatial binning. The idea is to divide our space, with its points and polygons, into numbered rectangular blocks. Then, for every object (point or polygon), we assign the corresponding block number to it.

Here is a (hopefully) helpful image:

[Figure: spatial binning – the space divided into a grid of numbered rectangular blocks, with points assigned to the blocks they fall into]

In the above example, the space was divided into 8 blocks; there are some empty blocks and some with many points. For example, there are 5 points which will get number 4 as their BIN ID.

Going back to our example with tweets (represented as points) and places (represented as polygons), we can assign BIN IDs to both of them and then join them block by block, calling UDFs only for objects with the same BIN ID, as in the sketch below. This will be more efficient because we will only do cross joins on significantly smaller sets (one block each), although there will be many of them (as many as the total number of blocks).
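In Hive terms, the join could look like this, assuming both tables were extended with a precomputed bin_id column (a polygon spanning several blocks would get one row per block):

    -- bin_id is assumed to be precomputed, e.g. for a point:
    --   bin_id = floor((lat - min_lat) / cell_height) * num_cols
    --          + floor((lon - min_lon) / cell_width)
    SELECT t.id, p.name
    FROM tweets t JOIN places p
      ON (t.bin_id = p.bin_id)  -- cheap equi-join prunes most pairs
    WHERE ST_Contains(ST_GeomFromText(p.boundary), ST_GeomFromText(t.point));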

Of course, there are some corner cases (like polygons crossing block borders), but the general idea is as explained. If you want to read more about this technique, please visit the Esri Wiki.

Magellan

The second solution I’d like to show you is based on Apache Spark – a more powerful (but also a bit more complicated) tool than Apache Hive.

Magellan is an open-source library for geospatial analytics that uses Spark as its underlying engine. Hortonworks published a blog post about it here, and as far as I understand, the library was created by one of the company’s engineers.

It is at a very early stage of development, and as of this date it gives us only a nice API, and unfortunately not very efficient algorithms for spatial joins.

Here is sample Spark code (in Scala) to do a spatial join using the intersects predicate:
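Something along these lines should work with Magellan’s DataFrame DSL on Spark 1.x (the paths, column names and input data are illustrative; treat it as a sketch, not a tested program):

    import magellan.Point
    import org.apache.spark.sql.magellan.dsl.expressions._
    import sqlContext.implicits._

    // hypothetical tweets: (id, lon, lat) turned into Magellan points
    val tweets = sc.parallelize(Seq(
        (1L, 21.01, 52.23),
        (2L, -0.12, 51.50)
      )).toDF("id", "lon", "lat")
      .withColumn("point", point($"lon", $"lat"))

    // places loaded from shapefiles via Magellan's data source
    val places = sqlContext.read.format("magellan").load("/data/places")

    // spatial join using the intersects predicate
    val joined = tweets.join(places).where($"point" intersects $"polygon")
    joined.show()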

It is definitely a library to watch, but for now it’s not that useful in my opinion, mainly because it lacks features. If you want to know more, please visit the Magellan GitHub page.

SpatialSpark

The third solution, and also my favourite one (maybe because I contributed to it a bit ;)), is SpatialSpark. It’s another library that uses Apache Spark as its underlying engine. For low-level spatial functions and data structures (like indexes) it uses the great and well-tested JTS library.

Its selling feature is that it can do spatial joins efficiently. It supports two kinds of joins:

  • broadcast spatial join – designed for efficiently joining a big dataset with a smaller one. The smaller dataset is converted to an index (an R-tree) and kept in memory. The algorithm simply iterates (in a distributed way) over the big dataset and efficiently queries the index built from the other set.
  • partitioned spatial join – designed for joining two big datasets. It uses an idea similar to binning, but it’s more complicated and more efficient. The sets are divided into small pieces (you can choose which algorithm is responsible for this operation – a few are implemented, to make the splits as equal as possible depending on the data characteristics) and then each small piece is processed individually (using R-trees).

Here is a sample Spark code snippet to do a broadcast spatial join for our case with tweets and places:
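The API roughly looks like this (a sketch assuming tab-separated input files with an id and a WKT geometry; exact signatures may differ between versions):

    import com.vividsolutions.jts.io.WKTReader
    import spatialspark.join.BroadcastSpatialJoin
    import spatialspark.operator.SpatialOperator

    // (id, geometry) pairs parsed from WKT with JTS
    val tweets = sc.textFile("/data/tweets.tsv")
      .map(_.split("\t"))
      .map(f => (f(0).toLong, new WKTReader().read(f(1))))
    val places = sc.textFile("/data/places.tsv")
      .map(_.split("\t"))
      .map(f => (f(0).toLong, new WKTReader().read(f(1))))

    // the smaller 'places' set is turned into an in-memory R-tree and
    // broadcast to all workers; the result is an RDD of (tweetId, placeId)
    val matches = BroadcastSpatialJoin(sc, tweets, places, SpatialOperator.Within, 0.0)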

Unfortunately, there are also drawbacks. The API is not so clean and easy to use: you have to use the classes as shown in the example above, or use command-line tools that expect data in exactly one format (more details on the GitHub page). An even bigger problem is that development of SpatialSpark is not very active. Hopefully this will change in the future.

Other options

If you can and want to keep your data in systems other than Hadoop, there are a few possibilities for doing spatial joins. Of course, not all of them have the same set of features, but all of them implement some kind of geospatial search that could be useful when dealing with geographic data.

Here are the links:

  • Cassandra with Lucene index – you can keep data in Cassandra and use a secondary index that integrates Lucene features (geospatial search is one of many)
  • Elasticsearch (with geohashes) – geohashes are a way of encoding latitude and longitude into strings; you can store and query them with Elasticsearch
  • GeoMesa – a whole distributed geospatial database built on top of Apache Accumulo
  • GeoWave – very similar to GeoMesa, but a bit newer

Summary

As you can probably see by now, there is not much choice when it comes to spatial joins on data stored in Hadoop. If you want to do things efficiently, then SpatialSpark is the only option, IMHO. If you want something easier to use, then Esri GIS Tools for Hadoop is the way to go, but unfortunately that only makes sense for really small datasets.

That’s all! Hopefully you’ve enjoyed this post. Feel free to comment below, especially if you have a suggestion for how our problem could be solved in a better way!

Kamil Gorlo

Kamil is a senior data engineer with an interest in high-performance and scalable architectures, machine learning, reactive applications and clean code. During his work at GG, he has been working with the Hadoop ecosystem and implementing scalable application components like the Storage System, GG disk, Internet Radio, Mailbox and more.

4 Responses to Geospatial analytics on Hadoop

  1. Shahab Yunus says:

    Hi there. Interesting round-up of major geo techs. One thing, though: apart from the high-level statements, you did not explain why you think Magellan is not useful. What features are missing? What performance test(s) did you perform, based on which you are saying it is not fast? More information would be helpful if possible. Thanks.

  2. nura says:

    Hi, thanks for sharing such an informative post with us; it is worth a read.

  3. christy says:

    Hadoop is an open-source, Java-based programming framework which supports the processing and storage of extremely large data sets in a distributed computing environment. Thanks for sharing these types of informative posts.

  4. Raj says:

    Great information on geospatial analytics on Hadoop. Thanks!
