I want to generate a float range which can be unnested into a column in PrestoDB. I am following the documentation at https://prestodb.io/docs/current/functions/array.html and trying out sequence, but it looks like float ranges cannot be generated with sequence. I want to generate a table like the one below, with the value decreasing by 0.3 at each step:
| date | value |
| 2020-01-31 | 47.6 |
| 2020-02-28 | 47.3 |
| 2020-03-31 | 47.0 |
I was trying to generate a sequence and then unnest it into column values. I am able to generate the date column using sequence in PrestoDB, but not the value column.
Any suggestions, please?
You can use sequence with bigint and convert to double after unnesting:
presto> SELECT x / 10e0 FROM UNNEST(sequence(476, 470, -3)) t(x);
_col0
-------
47.6
47.3
47.0
(verified on Presto 336)
Let's say I have two PySpark dataframes, users and shops. A few sample rows for both dataframes are shown below.
users dataframe:
+---------+-------------+---------+
| idvalue | day-of-week | geohash |
+---------+-------------+---------+
| id-1 | 2 | gcutjjn |
| id-1 | 3 | gcutjjn |
| id-1 | 5 | gcutjht |
+---------+-------------+---------+
shops dataframe:
+---------+-----------+---------+
| shop-id | shop-name | geohash |
+---------+-----------+---------+
| sid-1 | kfc | gcutjjn |
| sid-2 | mcd | gcutjhq |
| sid-3 | starbucks | gcutjht |
+---------+-----------+---------+
I need to join both of these dataframes on the geohash column. I could do a naive equi-join for sure, but the users dataframe is huge, containing billions of rows, and geohashes are likely to repeat, within and across idvalues. So I was wondering if there's a way to perform the join on the unique geohashes in the users dataframe and the geohashes in the shops dataframe. If we can do that, then it's easy to replicate the shops entries for the matching geohashes in the resultant dataframe.
It can probably be achieved with a pandas UDF, where I would perform a groupby on users.idvalue, do a join with shops within the UDF by only taking the first row from the group (because all ids are the same within the group), and create a one-row dataframe. Logically it feels like this should work, but I'm not sure about the performance aspect, as UDFs are usually slower than Spark native transformations. Any ideas are welcome.
You said that your users dataframe is huge and that "geohashes are likely to repeat, within and across idvalues". You didn't mention, however, whether there might be duplicated geohashes in your shops dataframe.
If there are no repeated hashes in the latter, I think that a simple join would solve your problem:
val userDf = Seq(("id-1",2,"gcutjjn"),("id-2",2,"gcutjjn"),("id-1",3,"gcutjjn"),("id-1",5,"gcutjht")).toDF("idvalue","day_of_week","geohash")
val shopDf = Seq(("sid-1","kfc","gcutjjn"),("sid-2","mcd","gcutjhq"),("sid-3","starbucks","gcutjht")).toDF("shop_id","shop_name","geohash")
userDf.show
+-------+-----------+-------+
|idvalue|day_of_week|geohash|
+-------+-----------+-------+
| id-1| 2|gcutjjn|
| id-2| 2|gcutjjn|
| id-1| 3|gcutjjn|
| id-1| 5|gcutjht|
+-------+-----------+-------+
shopDf.show
+-------+---------+-------+
|shop_id|shop_name|geohash|
+-------+---------+-------+
| sid-1| kfc|gcutjjn|
| sid-2| mcd|gcutjhq|
| sid-3|starbucks|gcutjht|
+-------+---------+-------+
shopDf
.join(userDf,Seq("geohash"),"inner")
.groupBy($"geohash",$"shop_id",$"idvalue")
.agg(collect_list($"day_of_week").alias("days"))
.show
+-------+-------+-------+------+
|geohash|shop_id|idvalue| days|
+-------+-------+-------+------+
|gcutjjn| sid-1| id-1|[2, 3]|
|gcutjht| sid-3| id-1| [5]|
|gcutjjn| sid-1| id-2| [2]|
+-------+-------+-------+------+
If you have repeated hash values in your shops dataframe, a possible approach would be to remove those repeated hashes from your shops dataframe (if your requirements allow this), and then perform the same join operation.
val userDf = Seq(("id-1",2,"gcutjjn"),("id-2",2,"gcutjjn"),("id-1",3,"gcutjjn"),("id-1",5,"gcutjht")).toDF("idvalue","day_of_week","geohash")
val shopDf = Seq(("sid-1","kfc","gcutjjn"),("sid-2","mcd","gcutjhq"),("sid-3","starbucks","gcutjht"),("sid-4","burguer king","gcutjjn")).toDF("shop_id","shop_name","geohash")
userDf.show
+-------+-----------+-------+
|idvalue|day_of_week|geohash|
+-------+-----------+-------+
| id-1| 2|gcutjjn|
| id-2| 2|gcutjjn|
| id-1| 3|gcutjjn|
| id-1| 5|gcutjht|
+-------+-----------+-------+
shopDf.show
+-------+------------+-------+
|shop_id| shop_name|geohash|
+-------+------------+-------+
| sid-1| kfc|gcutjjn| << Duplicated geohash
| sid-2| mcd|gcutjhq|
| sid-3| starbucks|gcutjht|
| sid-4|burguer king|gcutjjn| << Duplicated geohash
+-------+------------+-------+
//Dataframe with hashes to exclude:
val excludedHashes = shopDf.groupBy("geohash").count.filter("count > 1")
excludedHashes.show
+-------+-----+
|geohash|count|
+-------+-----+
|gcutjjn| 2|
+-------+-----+
//Create a dataframe of shops without the ones with duplicated hashes
val cleanShopDf = shopDf.join(excludedHashes,Seq("geohash"),"left_anti")
cleanShopDf.show
+-------+-------+---------+
|geohash|shop_id|shop_name|
+-------+-------+---------+
|gcutjhq| sid-2| mcd|
|gcutjht| sid-3|starbucks|
+-------+-------+---------+
//Perform the same join operation
cleanShopDf.join(userDf,Seq("geohash"),"inner")
.groupBy($"geohash",$"shop_id",$"idvalue")
.agg(collect_list($"day_of_week").alias("days"))
.show
+-------+-------+-------+----+
|geohash|shop_id|idvalue|days|
+-------+-------+-------+----+
|gcutjht| sid-3| id-1| [5]|
+-------+-------+-------+----+
The code provided was written in Scala but it can be easily converted to Python.
Hope this helps!
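For reference, here is a rough PySpark sketch of the first join above (a sketch only; the data and column names simply mirror the Scala example):
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

user_df = spark.createDataFrame(
    [("id-1", 2, "gcutjjn"), ("id-2", 2, "gcutjjn"),
     ("id-1", 3, "gcutjjn"), ("id-1", 5, "gcutjht")],
    ["idvalue", "day_of_week", "geohash"])
shop_df = spark.createDataFrame(
    [("sid-1", "kfc", "gcutjjn"), ("sid-2", "mcd", "gcutjhq"),
     ("sid-3", "starbucks", "gcutjht")],
    ["shop_id", "shop_name", "geohash"])

# inner join on geohash, then collect the days per (geohash, shop, user)
(shop_df
    .join(user_df, ["geohash"], "inner")
    .groupBy("geohash", "shop_id", "idvalue")
    .agg(F.collect_list("day_of_week").alias("days"))
    .show())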
Another idea: if possible, use PySpark SQL to select the distinct geohashes and register them as a temporary table/view, then join from this table instead of the full dataframes. A sketch of this is shown below.
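A minimal PySpark sketch of that idea, assuming the dataframes are registered as views named users and shops and both have a geohash column (the view and column names here are illustrative):
users.createOrReplaceTempView("users")
shops.createOrReplaceTempView("shops")

# match shops against only the distinct geohashes seen in users
spark.sql("""
    SELECT s.geohash, s.shop_id, s.shop_name
    FROM (SELECT DISTINCT geohash FROM users) u
    JOIN shops s ON u.geohash = s.geohash
""").createOrReplaceTempView("geohash_shops")

# then replicate the matching shop rows for every user row
result = spark.sql("""
    SELECT usr.idvalue, usr.`day-of-week`, gs.shop_id, gs.shop_name, usr.geohash
    FROM users usr
    JOIN geohash_shops gs ON usr.geohash = gs.geohash
""")
result.show()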
I am working with a dataframe like this:
DeviceNumber | CreationDate | Name
1001 | 1.1.2018 | Testdevice
1001 | 30.06.2019 | Device
1002 | 1.1.2019 | Lamp
I am using databricks and pyspark to do the ETL process. How can I reduce the dataframe in a way that I will only have a single row per "DeviceNumber" and that this will be the row with the highest "CreationDate"? In this example I want the result to look like this:
DeviceNumber | CreationDate | Name
1001 | 30.06.2019 | Device
1002 | 1.1.2019 | Lamp
You can create an additional dataframe with DeviceNumber and its latest/max CreationDate:
import pyspark.sql.functions as psf
max_df = df \
    .groupBy('DeviceNumber') \
    .agg(psf.max('CreationDate').alias('max_CreationDate'))
and then join max_df with the original dataframe:
joining_condition = [ df.DeviceNumber == max_df.DeviceNumber, df.CreationDate == max_df.max_CreationDate ]
df.join(max_df,joining_condition,'left_semi').show()
A left_semi join is useful when you want to use the second dataframe as a lookup and don't need any columns from it in the result.
You can use PySpark windowing functionality:
from pyspark.sql.window import Window
from pyspark.sql import functions as f
# make sure that CreationDate is parsed as a date
df = df.withColumn('CreationDate', f.to_date('CreationDate', format='dd.MM.yyyy'))
# partition on device and get a row number by (descending) date
win = Window.partitionBy('DeviceNumber').orderBy(f.col('CreationDate').desc())
df = df.withColumn('rownum', f.row_number().over(win))
# finally take the first row in each group
df.filter(df['rownum']==1).select('DeviceNumber', 'CreationDate', 'Name').show()
+------------+------------+------+
|DeviceNumber|CreationDate| Name|
+------------+------------+------+
| 1002| 2019-01-01| Lamp|
| 1001| 2019-06-30|Device|
+------------+------------+------+
I have a file with 10,1900 lines and five '|' delimiters per line (so six columns), and the sixth column contains a statement like "Dropped 12 (0.01%)". I want to extract the number inside the brackets after "Dropped":
Actual -- Dropped 12 (0.01%)
Expected -- 0.01
I need a solution using Apache Pig.
You are looking for the REGEX_EXTRACT function.
Let's say you have a relation A that looks like:
+--------------------+
| col1 |
+--------------------+
| Dropped 12 (0.01%) |
| Dropped 24 (0.02%) |
+--------------------+
You can extract the number in parenthesis with the following:
B = FOREACH A GENERATE REGEX_EXTRACT(col1, '.*\\((.*)%\\)', 1) AS percent;
+---------+
| percent |
+---------+
| 0.01 |
| 0.02 |
+---------+
I'm specifying a regex capture group for whatever characters are between ( and %). Notice that I'm using \\ as the escape so that the literal opening and closing parentheses are matched.
I want to use the same labels from a SQLAlchemy table to re-aggregate some data (e.g. I want to iterate through mytable.c to get the column names exactly).
I have some spending data that looks like the following:
| name | region | date | spending |
| John | A | .... | 123 |
| Jack | A | .... | 20 |
| Jill | B | .... | 240 |
I'm then passing it to an existing function we have, that aggregates spending over 2 periods (using a case statement) and groups by region:
grouped table:
| Region | Total (this period) | Total (last period) |
| A | 3048 | 1034 |
| B | 2058 | 900 |
The function returns a SQLAlchemy query object that I can then use subquery() on to re-query e.g.:
subquery = get_aggregated_data(original_table)
region_A_results = session.query(subquery).filter(subquery.c.region == 'A')
I then want to re-aggregate this subquery (summing every column that can be summed, and replacing the region column with the string 'other').
The problem is, if I iterate through subquery.c, I get labels that look like:
anon_1.region
anon_1.sum_this_period
anon_1.sum_last_period
Is there a way to get the textual label from a set of column objects, without the anon_1. prefix? Especially since I feel that the prefix may change depending on how SQLAlchemy decides to generate the query.
Split the name string and take the second part; if you want to prepare for the chance that the name is not prefixed by the table name, put the code in a try/except block:
for col in subquery.c:
    try:
        print(col.name.split('.')[1])
    except IndexError:
        print(col.name)
Also, the result proxy (region_A_results) has a keys() method which returns a list of column names. Again, if you don't need the table prefixes, you can easily strip them off.
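For example (a sketch, assuming SQLAlchemy 1.x; region_A_results and session are the objects from the question):
# execute the query and inspect the column names reported by the result
result = session.execute(region_A_results.statement)
print(result.keys())

# strip a "table." prefix if one happens to be present
clean_names = [k.split('.')[-1] for k in result.keys()]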