Using pyspark to create a segment array from a flat record - arraylist

I have a sparsely populated table with values for various segments for unique user ids. For each user_id, I need to create an array containing only the headers of the segments that are populated.
Please note that this is just an indicative dataset; I have several hundred segments like these.
------------------------------------------------
| user_id | seg1 | seg2 | seg3 | seg4 | seg5 |
------------------------------------------------
| 100 | M | null| 25 | null| 30 |
| 200 | null| null| 43 | null| 250 |
| 300 | F | 3000| null| 74 | null|
------------------------------------------------
I am expecting the output to be
-------------------------------
| user_id| segment_array |
-------------------------------
| 100 | [seg1, seg3, seg5] |
| 200 | [seg3, seg5] |
| 300 | [seg1, seg2, seg4] |
-------------------------------
Is there any function available in pyspark or pyspark-sql to accomplish this?
Thanks for your help!

I can't find a direct way, but you can do this:
from pyspark.sql.functions import array, array_remove, col, lit, when

cols = df.columns[1:]  # every column except user_id
r = df.withColumn('array', array(*[when(col(c).isNotNull(), lit(c)).otherwise('notmatch') for c in cols])) \
    .withColumn('array', array_remove('array', 'notmatch'))
r.show()
+-------+----+----+----+----+----+------------------+
|user_id|seg1|seg2|seg3|seg4|seg5| array|
+-------+----+----+----+----+----+------------------+
| 100| M|null| 25|null| 30|[seg1, seg3, seg5]|
| 200|null|null| 43|null| 250| [seg3, seg5]|
| 300| F|3000|null| 74|null|[seg1, seg2, seg4]|
+-------+----+----+----+----+----+------------------+
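A note on the otherwise('notmatch') placeholder: array_remove cannot drop null entries, so a bare when (which yields null for the empty segments) would leave nulls in the array; mapping the empty segments to a sentinel value and removing that value instead sidesteps the issue.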

Not sure this is the best way, but I'd attack it this way:
There's the collect_set function, which will always give you the unique values across the list of values you aggregate over.
Do a union over one select per segment:
from pyspark.sql import functions as fn
from pyspark.sql.functions import col, lit, collect_list

# tag each row with the segment name when that segment is populated
df_seg_1 = df.select(
    'user_id',
    fn.when(
        col('seg1').isNotNull(),
        lit('seg1')
    ).alias('segment')
)
# repeat for all segments
df = df_seg_1.union(df_seg_2).union(...)
df.groupBy('user_id').agg(collect_list('segment'))
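With several hundred segment columns, the same idea can be written as a loop instead of one select per segment. A minimal sketch, assuming the same df as above and that every column other than user_id is a segment column:
from functools import reduce
from pyspark.sql import functions as fn

seg_cols = [c for c in df.columns if c != 'user_id']

# one small (user_id, segment-name-or-null) dataframe per segment column
parts = [
    df.select('user_id',
              fn.when(fn.col(c).isNotNull(), fn.lit(c)).alias('segment'))
    for c in seg_cols
]

# stack them, drop the non-matches, then collect per user
result = (
    reduce(lambda a, b: a.union(b), parts)
    .where(fn.col('segment').isNotNull())
    .groupBy('user_id')
    .agg(fn.collect_list('segment').alias('segment_array'))
)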

Related

SQL pivot table for unknown number of columns

I need some tips for the Postgres pivot below, please.
I have a table like this:
+------+---+----+
| round| id| kpi|
+------+---+----+
| 0 | 1 | 0.1|
| 1 | 1 | 0.2|
| 0 | 2 | 0.5|
| 1 | 2 | 0.4|
+------+---+----+
The number of Ids is unknown.
I need to convert the id column into multiple columns (one per distinct id), with the KPI values as their values, while keeping the rounds as in the first table.
+------+----+----+
| round| id1| id2|
+------+----+----+
| 0 | 0.1| 0.5|
| 1 | 0.2| 0.4|
+------+----+----+
Is it possible to do this in SQL? If so, how?
It's possible; check this question.
This other one is a pivot I did, also with an unknown number of columns; maybe it can help you too: Advanced convert rows to columns (pivot) in SQL Server

SQL table transformation. How to pivot a certain table?

How would I do the pivot below?
I have a table like this:
+------+---+----+
| round| id| kpi|
+------+---+----+
| 0 | 1 | 0.1|
| 1 | 1 | 0.2|
| 0 | 2 | 0.5|
| 1 | 2 | 0.4|
+------+---+----+
I want to convert the id column into multiple columns (one per distinct id), with the KPI values as their values, while keeping the rounds as in the first table.
+------+----+----+
| round| id1| id2|
+------+----+----+
| 0 | 0.1| 0.5|
| 1 | 0.2| 0.4|
+------+----+----+
Is it possible to do this in SQL? If so, how?
You are looking for a pivot function. You can find details on how to do this here and here. The first link also explains how to handle an unknown number of column names.

Create new column in Pyspark Dataframe by filling existing Column

I am trying to create a new column in an existing Pyspark DataFrame. Currently, the DataFrame looks as follows:
+----+----+---+----+----+----+----+
|Acct| M1D|M1C| M2D| M2C| M3D| M3C|
+----+----+---+----+----+----+----+
| B| 10|200|null|null| 20|null|
| C|1000|100| 10|null|null|null|
| A| 100|200| 200| 200| 300| 10|
+----+----+---+----+----+----+----+
I want to fill null values in column M2C with 0 and create a new column Ratio. My expected output would be as follows:
+------+------+-----+------+------+------+------+-------+
| Acct | M1D | M1C | M2D | M2C | M3D | M3C | Ratio |
+------+------+-----+------+------+------+------+-------+
| B | 10 | 200 | null | null | 20 | null | 0 |
| C | 1000 | 100 | 10 | null | null | null | 0 |
| A | 100 | 200 | 200 | 200 | 300 | 10 | 200 |
+------+------+-----+------+------+------+------+-------+
I was trying to achieve my desired result by using the following line of code.
df = df.withColumn('Ratio', df.select('M2C').na.fill(0))
The above line of code resulted in an assertion error as shown below.
AssertionError: col should be Column
The possible solution that I found using this link was to use the lit function.
I changed my code to
df = df.withColumn('Ratio', lit(df.select('M2C').na.fill(0)))
The above code led to AttributeError: 'DataFrame' object has no attribute '_get_object_id'
How can I achieve my desired output?
You're doing two things wrong here:
df.select returns a DataFrame, not a Column, which is why withColumn complains.
na.fill replaces null values in all columns, not just in the specific column you want.
The following code snippet will solve your use case:
from pyspark.sql.functions import col
df = df.withColumn('Ratio', col('M2C')).fillna(0, subset=['Ratio'])
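If you'd rather not create and then fill the column in two steps, coalesce gives the same result in a single expression; a minimal sketch, assuming the same df:
from pyspark.sql.functions import coalesce, col, lit

# coalesce returns the first non-null value, so a null M2C becomes 0
df = df.withColumn('Ratio', coalesce(col('M2C'), lit(0)))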

How to get distinct value, count of a column in dataframe and store in another dataframe as (k,v) pair using Spark2 and Scala

I want to get the distinct values and their respective counts of every column of a dataframe and store them as (k,v) in another dataframe.
Note: My columns are not static, they keep changing. So, I cannot hardcode the column names; instead, I have to loop through them.
For Example, below is my dataframe
+----------------+-----------+------------+
|name |country |DOB |
+----------------+-----------+------------+
| Blaze | IND| 19950312|
| Scarlet | USA| 19950313|
| Jonas | CAD| 19950312|
| Blaze | USA| 19950312|
| Jonas | CAD| 19950312|
| mark | USA| 19950313|
| mark | CAD| 19950313|
| Smith | USA| 19950313|
| mark | UK | 19950313|
| scarlet | CAD| 19950313|
My final result should be created in a new dataframe as (k,v) pairs, where k is the distinct value and v is its count.
+----------------+-----------+------------+
|name |country |DOB |
+----------------+-----------+------------+
| (Blaze,2) | (IND,1) |(19950312,3)|
| (Scarlet,2) | (USA,4) |(19950313,6)|
| (Jonas,3) | (CAD,4) | |
| (mark,3) | (UK,1) | |
| (smith,1) | | |
Can anyone please help me with this? I'm using Spark 2.4.0 and Scala 2.11.12.
Note: My columns are dynamic, so I can't hardcode them and do a groupBy on them.
I don't have an exact solution to your query, but I can provide some help to get you started on your issue.
Create dataframe
scala> val df = Seq(("Blaze ","IND","19950312"),
| ("Scarlet","USA","19950313"),
| ("Jonas ","CAD","19950312"),
| ("Blaze ","USA","19950312"),
| ("Jonas ","CAD","19950312"),
| ("mark ","USA","19950313"),
| ("mark ","CAD","19950313"),
| ("Smith ","USA","19950313"),
| ("mark ","UK ","19950313"),
| ("scarlet","CAD","19950313")).toDF("name", "country","dob")
Next, calculate the count of each distinct value in every column
scala> val distCount = df.columns.map(c => df.groupBy(c).count)
Create a range to iterate over distCount
scala> val range = Range(0,distCount.size)
range: scala.collection.immutable.Range = Range(0, 1, 2)
Aggregate your data
scala> val aggVal = range.toList.map(i => distCount(i).collect().mkString).toSeq
aggVal: scala.collection.immutable.Seq[String] = List([Jonas ,2][Smith ,1][Scarlet,1][scarlet,1][mark ,3][Blaze ,2], [CAD,4][USA,4][IND,1][UK ,1], [19950313,6][19950312,4])
Create data frame:
scala> Seq((aggVal(0),aggVal(1),aggVal(2))).toDF("name", "country","dob").show()
+--------------------+--------------------+--------------------+
| name| country| dob|
+--------------------+--------------------+--------------------+
|[Jonas ,2][Smith...|[CAD,4][USA,4][IN...|[19950313,6][1995...|
+--------------------+--------------------+--------------------+
I hope this helps you in some way.
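One thing to watch with this approach: collect().mkString flattens each column's counts into a single string, so the final dataframe holds strings rather than structured (k, v) pairs. If the pairs need to be consumed downstream, keeping each df.groupBy(c).count result as its own dataframe (or collecting it into a Map) may be easier to work with.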

Executing a join while avoiding creating duplicate metrics in first table rows

There are two tables to join for an in-depth Excel report. I am trying to avoid creating duplicate metric rows. I have already scraped the competitor data separately using a Python script.
The first table looks like this
name |occurances |hits | actions |avg $|Key
---------+------------+--------+-------------+-----+----
balls |53432 | 5001 | 5| 2$ |Hgdy24
bats |5389 | 4672 | 3| 4$ |dhfg12
The competitor data is as follows:
Key | Ad Copie |
---------+------------+
Hgdy24 |Click here! |
Hgdy24 |Free Trial! |
Hgdy24 |Sign Up now |
dhfg12 |Check it out|
dhfg12 |World known |
dhfg12 |Sign up |
I have already tried joins to the following effect (duplicate metric rows are created here):
name |occurances | hits | actions | avg$|Key |Ad Copie
---------+------------+--------+-------------+-----+------+---------
Balls |53432 | 5001 | 5| 2$ |Hgdy24|Click here!
Balls |53432 | 5001 | 5| 2$ |Hgdy24|Free Trial!
Balls |53432 | 5001 | 5| 2$ |Hgdy24|Sign Up now
Bats |5389 | 4672 | 3| 4$ |dhfg12|Check it out
Bats |5389 | 4672 | 3| 4$ |dhfg12|World known
Bats |5389 | 4672 | 3| 4$ |dhfg12|Sign up
Here is the desired output
name |occurances | hits | actions | avg$|Key |Ad Copie
---------+------------+--------+-------------+-----+------+---------
Balls |53432 | 5001 | 5| 2$ |Hgdy24|Click here!
Balls | | | | |Hgdy24|Free Trial!
Balls | | | | |Hgdy24|Sign Up now
Bats |5389 | 4672 | 3| 4$ |dhfg12|Check it out
Bats | | | | |dhfg12|World known
Bats | | | | |dhfg12|Sign up
Does anyone have a clue on a good course of action for this? Lag function perhaps?
Your desired output is not a proper use case for SQL. SQL is designed to create views of data with all the fields filled in. When you want to visualize that data, you should do so in your application code and suppress the "duplicate" values there, not in SQL.
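If the application code is the Python script already mentioned in the question, suppressing the repeated metrics before writing the Excel file could look roughly like this. This is only a sketch, assuming pandas and the column names from the tables above; the hard-coded joined frame and the report.xlsx file name are purely illustrative:
import pandas as pd

# Joined result as shown above (illustrative; in practice this comes from the join)
joined = pd.DataFrame({
    'name':       ['Balls'] * 3 + ['Bats'] * 3,
    'occurances': [53432] * 3 + [5389] * 3,
    'hits':       [5001] * 3 + [4672] * 3,
    'actions':    [5] * 3 + [3] * 3,
    'avg $':      ['2$'] * 3 + ['4$'] * 3,
    'Key':        ['Hgdy24'] * 3 + ['dhfg12'] * 3,
    'Ad Copie':   ['Click here!', 'Free Trial!', 'Sign Up now',
                   'Check it out', 'World known', 'Sign up'],
})

metric_cols = ['occurances', 'hits', 'actions', 'avg $']

# Allow blanks next to numbers, then clear the metrics on every repeated Key
joined[metric_cols] = joined[metric_cols].astype(object)
joined.loc[joined['Key'].duplicated(keep='first'), metric_cols] = ''

joined.to_excel('report.xlsx', index=False)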