Using the h3 library with a PySpark DataFrame


I have a Spark DataFrame that looks like this:

+-----------+-----------+-------+------------------+----------+--------+--------+--------+--------+
|client_id_x|client_id_y|   dist|              time|      date|   lat_y|   lng_y|   lat_x|   lng_x|
+-----------+-----------+-------+------------------+----------+--------+--------+--------+--------+
| 0700014578| 0700001710|13125.7|21.561666666666667|2021-06-07|-23.6753|-46.6788|-23.5933|-46.6382|
| 0700014578| 0700001760| 8447.8|13.103333333333333|2021-06-07|-23.6346|-46.6057|-23.5933|-46.6382|
| 0700014578| 0700002137| 9681.1|16.173333333333332|2021-06-07|-23.6309|-46.7059|-23.5933|-46.6382|
+-----------+-----------+-------+------------------+----------+--------+--------+--------+--------+

What I want to do is to obtain a unique identifier for each lat/lng pair, based on the H3 geospatial indexing system. To do that I'm trying to use the following code:

import h3
import pandas as pd
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import StringType

def get_geo_id(df: pd.DataFrame) -> pd.Series:
    # lat_name and lng_name are column-name variables defined elsewhere
    return df.apply(lambda x: h3.geo_to_h3(x[lat_name], x[lng_name], resolution=13))

get_geo_udf = pandas_udf(get_geo_id, returnType=StringType())

# calling the function
new_df.withColumn("id_h3_x", get_geo_udf(new_df.select(["lat_x", "lng_x"])))

However, I'm getting the following error:

TypeError: Invalid argument, not a string or column: DataFrame[lat_x: double, lng_x: double] of type <class 'pyspark.sql.dataframe.DataFrame'>. For column literals, use 'lit', 'array', 'struct' or 'create_map' function.

I have also tried with this:

def get_geo_id(lat_name: pd.Series, lng_name: pd.Series) -> pd.Series:
    return h3.geo_to_h3(lat_name, lng_name, resolution=13)

get_geo_udf = pandas_udf(get_geo_id, returnType=StringType())

new_df.withColumn("id_h3_x", get_geo_udf(new_df["lat_x"], new_df["lng_x"])).show()

But it's showing this error:

TypeError: cannot convert the series to <class 'float'>

I am new to Spark, so I'm not really sure what's causing these errors. I would really appreciate your help.
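For context on the two errors: withColumn expects a Column expression, but new_df.select([...]) returns a DataFrame (hence the first error), and h3.geo_to_h3 expects scalar floats, while a Series-to-Series pandas UDF receives whole pd.Series objects (hence the second). A minimal sketch of an element-wise fix, assuming the same column names as above and the h3 v3 API:

import h3
import pandas as pd
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import StringType

# h3.geo_to_h3 takes one scalar lat/lng pair at a time,
# so walk the two Series in lockstep
@pandas_udf(StringType())
def get_geo_id(lat: pd.Series, lng: pd.Series) -> pd.Series:
    return pd.Series([h3.geo_to_h3(la, ln, resolution=13) for la, ln in zip(lat, lng)])

new_df.withColumn("id_h3_x", get_geo_id("lat_x", "lng_x")).show()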

3 Answers

BEST ANSWER

I managed to solve the problem. I had to use the following function:

@pandas_udf("client_id_y string, client_id_x string, dist double, time double, date string, lat_x double, lng_x double, lat_y double, lng_y double, geoid_x string, geoid_y string", PandasUDFType.GROUPED_MAP)
def get_geo_id(df):
    df["geoid_x"] = df.apply(lambda x: h3.geo_to_h3(x.lat_x, x.lng_x, resolution = 13), axis = 1)
    df["geoid_y"] = df.apply(lambda x: h3.geo_to_h3(x.lat_y, x.lng_y, resolution = 13), axis = 1)
    return df

# call the function
h3_dff = new_df.groupby("client_id_x").apply(get_geo_id)
h3_dff.show()

And the resulting dataframe is:

+-----------+-----------+-------+------------------+----------+--------+--------+--------+--------+---------------+---------------+
|client_id_y|client_id_x|   dist|              time|      date|   lat_x|   lng_x|   lat_y|   lng_y|        geoid_x|        geoid_y|
+-----------+-----------+-------+------------------+----------+--------+--------+--------+--------+---------------+---------------+
| 0700001710| 0700014578|13125.7|21.561666666666667|2021-06-07|-23.5933|-46.6382|-23.6753|-46.6788|8da8100e225e4ff|8da81000890577f|
| 0700001760| 0700014578| 8447.8|13.103333333333333|2021-06-07|-23.5933|-46.6382|-23.6346|-46.6057|8da8100e225e4ff|8da81001b0b353f|
| 0700002137| 0700014578| 9681.1|16.173333333333332|2021-06-07|-23.5933|-46.6382|-23.6309|-46.7059|8da8100e225e4ff|8da810056a5673f|
+-----------+-----------+-------+------------------+----------+--------+--------+--------+--------+---------------+---------------+

Which is exactly what I wanted.

ANSWER

The GroupedData.apply() function will be deprecated in a future version, so use applyInPandas() instead:

import h3
import pandas as pd

def get_geo_id(df):
    df["geoid_x"] = df.apply(lambda x: h3.geo_to_h3(x.lat_x, x.lng_x, resolution=13), axis=1)
    df["geoid_y"] = df.apply(lambda x: h3.geo_to_h3(x.lat_y, x.lng_y, resolution=13), axis=1)
    return df

# call the function, declaring the output schema inline
h3_dff = new_df.groupby("client_id_x").applyInPandas(
    get_geo_id,
    schema="client_id_y string, client_id_x string, dist double, time double, date string, lat_x double, lng_x double, lat_y double, lng_y double, geoid_x string, geoid_y string",
)
h3_dff.show()

For more information on this, please visit https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.sql.GroupedData.applyInPandas.html

ANSWER

Now (since November 2021) there are PySpark bindings for Uber's H3 library (https://pypi.org/project/h3-pyspark/#description). With these bindings you can do the H3 operations without UDFs of any kind.
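A minimal sketch of that approach, assuming the column functions the h3-pyspark README describes (h3_pyspark.geo_to_h3 taking lat, lng, and resolution columns):

import h3_pyspark
from pyspark.sql import functions as F

# index both endpoints of each row without any hand-written UDF;
# the resolution is passed as a literal column
indexed = (
    new_df
    .withColumn("geoid_x", h3_pyspark.geo_to_h3(F.col("lat_x"), F.col("lng_x"), F.lit(13)))
    .withColumn("geoid_y", h3_pyspark.geo_to_h3(F.col("lat_y"), F.col("lng_y"), F.lit(13)))
)
indexed.show()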