Dynamic/Variable Offset in SparkSQL Lead/Lag function - dataframe

Can we somehow use an offset value that depends on a column value in the lead/lag function in Spark SQL?
Example: here is what works fine.
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import spark.implicits._

val sampleData = Seq(
  ("bob", "Developer", 125000),
  ("mark", "Developer", 108000),
  ("carl", "Tester", 70000),
  ("peter", "Developer", 185000),
  ("jon", "Tester", 65000),
  ("roman", "Tester", 82000),
  ("simon", "Developer", 98000),
  ("eric", "Developer", 144000),
  ("carlos", "Tester", 75000),
  ("henry", "Developer", 110000)).toDF("Name", "Role", "Salary")

val window = Window.orderBy("Role")
// Derive lag column for salary
val laggingCol = lag(col("Salary"), 1).over(window)
// Use derived column LastSalary to find the difference between the current and previous row
val salaryDifference = col("Salary") - col("LastSalary")
// Calculate trend based on the difference
// IF ELSE / CASE can be written using when.otherwise in Spark
val trend = when(col("SalaryDiff").isNull || col("SalaryDiff") === 0, "SAME")
  .when(col("SalaryDiff") > 0, "UP")
  .otherwise("DOWN")

sampleData.withColumn("LastSalary", laggingCol)
  .withColumn("SalaryDiff", salaryDifference)
  .withColumn("Trend", trend)
  .show()
Now, my use case is such that the offset to pass depends on a particular column of type Integer. This is roughly what I want to work:
val sampleData = Seq(
  ("bob", "Developer", 125000, 2),
  ("mark", "Developer", 108000, 3),
  ("carl", "Tester", 70000, 3),
  ("peter", "Developer", 185000, 2),
  ("jon", "Tester", 65000, 1),
  ("roman", "Tester", 82000, 1),
  ("simon", "Developer", 98000, 2),
  ("eric", "Developer", 144000, 3),
  ("carlos", "Tester", 75000, 2),
  ("henry", "Developer", 110000, 2)).toDF("Name", "Role", "Salary", "ColumnForOffset")

val window = Window.orderBy("Role")
// Derive lag column for salary
val laggingCol = lag(col("Salary"), col("ColumnForOffset")).over(window)
// Use derived column LastSalary to find the difference between the current and previous row
val salaryDifference = col("Salary") - col("LastSalary")
// Calculate trend based on the difference
// IF ELSE / CASE can be written using when.otherwise in Spark
val trend = when(col("SalaryDiff").isNull || col("SalaryDiff") === 0, "SAME")
  .when(col("SalaryDiff") > 0, "UP")
  .otherwise("DOWN")

sampleData.withColumn("LastSalary", laggingCol)
  .withColumn("SalaryDiff", salaryDifference)
  .withColumn("Trend", trend)
  .show()
This throws an exception, as expected, since the offset only accepts a literal integer value.
Let us discuss whether we can somehow implement logic for this.

You can add a row number column and do a self-join based on the row number and the offset, e.g.:
val df = sampleData.withColumn("rn", row_number().over(window))
val df2 = df.alias("t1")
  .join(
    df.alias("t2"),
    expr("t1.rn = t2.rn + t1.ColumnForOffset"),
    "left"
  )
  .selectExpr("t1.*", "t2.Salary as LastSalary")
df2.show()
+------+---------+------+---------------+---+----------+
| Name| Role|Salary|ColumnForOffset| rn|LastSalary|
+------+---------+------+---------------+---+----------+
| bob|Developer|125000| 2| 1| null|
| mark|Developer|108000| 3| 2| null|
| peter|Developer|185000| 2| 3| 125000|
| simon|Developer| 98000| 2| 4| 108000|
| eric|Developer|144000| 3| 5| 108000|
| henry|Developer|110000| 2| 6| 98000|
| carl| Tester| 70000| 3| 7| 98000|
| jon| Tester| 65000| 1| 8| 70000|
| roman| Tester| 82000| 1| 9| 65000|
|carlos| Tester| 75000| 2| 10| 65000|
+------+---------+------+---------------+---+----------+
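To complete the original use case, the trend logic from the question can then be applied to the joined result. A minimal sketch, reusing the df2 built above and the column names from the question:
import org.apache.spark.sql.functions.{col, when}

val salaryDifference = col("Salary") - col("LastSalary")
val trend = when(col("SalaryDiff").isNull || col("SalaryDiff") === 0, "SAME")
  .when(col("SalaryDiff") > 0, "UP")
  .otherwise("DOWN")

df2.withColumn("SalaryDiff", salaryDifference)
  .withColumn("Trend", trend)
  .drop("rn") // the helper row number is no longer needed
  .show()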

Related

Updating a different unique value for each group

I have a table where every row that has the same classification_id and application_id has the same group_id.
id |classification_id |application_id |authorisation_id |group_id |
------------------------------------+------------------------------------+------------------------------------+------------------------------------+------------------------------------+
54f614f3-7582-4ae9-a07e-5ff6d29e7a3b|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|25a7e1f7-4d8c-4e12-a10f-3654d7ef5ee9|8e563f95-ff0c-41e7-b211-d5ac6f78d056|
a01571a1-4f04-4ff9-9a7b-3a720736b9ec|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|302b23f1-ce57-4219-bcae-7bdbc3b86cb4|8e563f95-ff0c-41e7-b211-d5ac6f78d056|
3e18f2d0-4d5f-41b3-baf5-ba0feac8f43e|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|5e3bce60-b0d8-436c-9d33-b3a1d4c9a308|8e563f95-ff0c-41e7-b211-d5ac6f78d056|
b2ebe2ee-ffed-4e32-8abe-cd8b7d400646|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|a4edd12d-c19e-4e0d-badd-d3cf5e6d6d82|8e563f95-ff0c-41e7-b211-d5ac6f78d056|
ef01e6f7-f6ad-4d4d-b129-9c756734bef5|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|5e3bce60-b0d8-436c-9d33-b3a1d4c9a308|8e563f95-ff0c-41e7-b211-d5ac6f78d056|
7d340811-b679-49fd-bdd6-32a1bb9bbfed|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|25a7e1f7-4d8c-4e12-a10f-3654d7ef5ee9|8e563f95-ff0c-41e7-b211-d5ac6f78d056|
c45d7bb6-2146-48d0-a804-929cc42484cd|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|a4edd12d-c19e-4e0d-badd-d3cf5e6d6d82|8e563f95-ff0c-41e7-b211-d5ac6f78d056|
ddec5929-a08f-4f48-97f8-ccc2b85531ac|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|302b23f1-ce57-4219-bcae-7bdbc3b86cb4|8e563f95-ff0c-41e7-b211-d5ac6f78d056|
ae9edbb2-def3-4c4e-9a27-72454a09e146|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|a4edd12d-c19e-4e0d-badd-d3cf5e6d6d82|8e563f95-ff0c-41e7-b211-d5ac6f78d056|
3a3fd904-1988-4f8c-bf27-8cdf349b8431|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|25a7e1f7-4d8c-4e12-a10f-3654d7ef5ee9|8e563f95-ff0c-41e7-b211-d5ac6f78d056|
27c669b9-763c-49cf-887a-b9b1f85dc1ab|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|302b23f1-ce57-4219-bcae-7bdbc3b86cb4|8e563f95-ff0c-41e7-b211-d5ac6f78d056|
03820732-32c4-4cd4-910b-4e27fdd44bdf|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|5e3bce60-b0d8-436c-9d33-b3a1d4c9a308|8e563f95-ff0c-41e7-b211-d5ac6f78d056|
I've managed to sort out subgroups of this group by authorisation_id, and I've created a group_helper column which basically shows my end goal: from this data set I want to get three different groups:
id |classification_id |application_id |authorisation_id |group_id |group_helper|
------------------------------------+------------------------------------+------------------------------------+------------------------------------+------------------------------------+------------+
54f614f3-7582-4ae9-a07e-5ff6d29e7a3b|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|25a7e1f7-4d8c-4e12-a10f-3654d7ef5ee9|8e563f95-ff0c-41e7-b211-d5ac6f78d056| 2|
a01571a1-4f04-4ff9-9a7b-3a720736b9ec|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|302b23f1-ce57-4219-bcae-7bdbc3b86cb4|8e563f95-ff0c-41e7-b211-d5ac6f78d056| 2|
3e18f2d0-4d5f-41b3-baf5-ba0feac8f43e|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|5e3bce60-b0d8-436c-9d33-b3a1d4c9a308|8e563f95-ff0c-41e7-b211-d5ac6f78d056| 2|
b2ebe2ee-ffed-4e32-8abe-cd8b7d400646|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|a4edd12d-c19e-4e0d-badd-d3cf5e6d6d82|8e563f95-ff0c-41e7-b211-d5ac6f78d056| 2|
ef01e6f7-f6ad-4d4d-b129-9c756734bef5|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|5e3bce60-b0d8-436c-9d33-b3a1d4c9a308|8e563f95-ff0c-41e7-b211-d5ac6f78d056| 3|
7d340811-b679-49fd-bdd6-32a1bb9bbfed|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|25a7e1f7-4d8c-4e12-a10f-3654d7ef5ee9|8e563f95-ff0c-41e7-b211-d5ac6f78d056| 3|
c45d7bb6-2146-48d0-a804-929cc42484cd|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|a4edd12d-c19e-4e0d-badd-d3cf5e6d6d82|8e563f95-ff0c-41e7-b211-d5ac6f78d056| 3|
ddec5929-a08f-4f48-97f8-ccc2b85531ac|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|302b23f1-ce57-4219-bcae-7bdbc3b86cb4|8e563f95-ff0c-41e7-b211-d5ac6f78d056| 3|
ae9edbb2-def3-4c4e-9a27-72454a09e146|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|a4edd12d-c19e-4e0d-badd-d3cf5e6d6d82|8e563f95-ff0c-41e7-b211-d5ac6f78d056| |
3a3fd904-1988-4f8c-bf27-8cdf349b8431|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|25a7e1f7-4d8c-4e12-a10f-3654d7ef5ee9|8e563f95-ff0c-41e7-b211-d5ac6f78d056| |
27c669b9-763c-49cf-887a-b9b1f85dc1ab|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|302b23f1-ce57-4219-bcae-7bdbc3b86cb4|8e563f95-ff0c-41e7-b211-d5ac6f78d056| |
03820732-32c4-4cd4-910b-4e27fdd44bdf|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|5e3bce60-b0d8-436c-9d33-b3a1d4c9a308|8e563f95-ff0c-41e7-b211-d5ac6f78d056| |
Now I want each of those groups to have a different group_id. I don't have to update the rows where group_helper is NULL, since the group_id they keep will already be unique. I want to give every row with group_helper = 2 the same new UUID (different from the one kept by the NULL rows), every row with group_helper = 3 another shared UUID (different from both the NULL group and the group_helper = 2 group), and so on. This has to work for any number of group_helper values, because there can be many more than two.
So my end goal would look like this:
id |classification_id |application_id |authorisation_id |group_id |group_helper|
------------------------------------+------------------------------------+------------------------------------+------------------------------------+------------------------------------+------------+
54f614f3-7582-4ae9-a07e-5ff6d29e7a3b|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|25a7e1f7-4d8c-4e12-a10f-3654d7ef5ee9|fd3e63d1-d59c-477f-b58b-3ae3726c7992| 2|
a01571a1-4f04-4ff9-9a7b-3a720736b9ec|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|302b23f1-ce57-4219-bcae-7bdbc3b86cb4|fd3e63d1-d59c-477f-b58b-3ae3726c7992| 2|
3e18f2d0-4d5f-41b3-baf5-ba0feac8f43e|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|5e3bce60-b0d8-436c-9d33-b3a1d4c9a308|fd3e63d1-d59c-477f-b58b-3ae3726c7992| 2|
b2ebe2ee-ffed-4e32-8abe-cd8b7d400646|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|a4edd12d-c19e-4e0d-badd-d3cf5e6d6d82|fd3e63d1-d59c-477f-b58b-3ae3726c7992| 2|
ef01e6f7-f6ad-4d4d-b129-9c756734bef5|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|5e3bce60-b0d8-436c-9d33-b3a1d4c9a308|ed3ff96c-2f93-4182-8e4f-4594cb20cbb6| 3|
7d340811-b679-49fd-bdd6-32a1bb9bbfed|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|25a7e1f7-4d8c-4e12-a10f-3654d7ef5ee9|ed3ff96c-2f93-4182-8e4f-4594cb20cbb6| 3|
c45d7bb6-2146-48d0-a804-929cc42484cd|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|a4edd12d-c19e-4e0d-badd-d3cf5e6d6d82|ed3ff96c-2f93-4182-8e4f-4594cb20cbb6| 3|
ddec5929-a08f-4f48-97f8-ccc2b85531ac|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|302b23f1-ce57-4219-bcae-7bdbc3b86cb4|ed3ff96c-2f93-4182-8e4f-4594cb20cbb6| 3|
ae9edbb2-def3-4c4e-9a27-72454a09e146|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|a4edd12d-c19e-4e0d-badd-d3cf5e6d6d82|8e563f95-ff0c-41e7-b211-d5ac6f78d056| |
3a3fd904-1988-4f8c-bf27-8cdf349b8431|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|25a7e1f7-4d8c-4e12-a10f-3654d7ef5ee9|8e563f95-ff0c-41e7-b211-d5ac6f78d056| |
27c669b9-763c-49cf-887a-b9b1f85dc1ab|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|302b23f1-ce57-4219-bcae-7bdbc3b86cb4|8e563f95-ff0c-41e7-b211-d5ac6f78d056| |
03820732-32c4-4cd4-910b-4e27fdd44bdf|63a7b151-2b8d-4b6a-b9a1-108a80ae4cdf|cd3d597b-25d1-4b4b-92f0-2ad8fcb4698c|5e3bce60-b0d8-436c-9d33-b3a1d4c9a308|8e563f95-ff0c-41e7-b211-d5ac6f78d056| |
You can create a CTE which generates a new group_id, selecting a single value for each group_helper, and then use update ... from ...:
with grouper(helper, gid) as (
  select distinct on (group_helper)
         group_helper
       , gen_random_uuid()
  from sometable
  where group_helper is not null
  order by group_helper
) -- select * from grouper
update sometable
set group_id = gid
from grouper
where helper = group_helper;

SQL - How can I sum elements of an array?

I am using SQL with PySpark and Hive, and I'm new to all of it.
I have a hive table with a column of type string, like this:
id | values
1 | '2.2;4;4'
2 | '5;1.2'
3 | '8;0;4'
I want to create a query to obtain this:
id | values | sum
1 | '2.2;4;4' | 10.2
2 | '5;1.2' | 6.2
3 | '8;0;4' | 12
By using split(values, ';') I can get arrays like ['2.2','4','4'], but I still need to convert them into decimal numbers and sum them.
Is there a not too complicated way to do this?
Thank you so so much in advance! And happy coding to you all :)
From Spark 2.4+ we don't have to use explode on arrays; we can work on the array directly using higher-order functions.
Example:
from pyspark.sql.functions import *
df=spark.createDataFrame([("1","2;4;4"),("2","5;1"),("3","8;0;4")],["id","values"])
#split and creating array<int> column
df1=df.withColumn("arr",split(col("values"),";").cast("array<int>"))
df1.createOrReplaceTempView("tmp")
spark.sql("select *,aggregate(arr,0,(x,y) -> x + y) as sum from tmp").drop("arr").show()
#+---+------+---+
#| id|values|sum|
#+---+------+---+
#| 1| 2;4;4| 10|
#| 2| 5;1| 6|
#| 3| 8;0;4| 12|
#+---+------+---+
#in dataframe API
df1.selectExpr("*","aggregate(arr,0,(x,y) -> x + y) as sum").drop("arr").show()
#+---+------+---+
#| id|values|sum|
#+---+------+---+
#| 1| 2;4;4| 10|
#| 2| 5;1| 6|
#| 3| 8;0;4| 12|
#+---+------+---+
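Note that the values in the original question actually contain decimals ('2.2;4;4'), and the array<int> cast above truncates them. The same higher-order-function approach keeps the decimals if the array is cast to array<double> and the fold starts from a double literal (0D). A sketch, shown here in Scala; the aggregate expression itself is plain Spark SQL, so it works unchanged in the PySpark code above:
import org.apache.spark.sql.functions.{col, split}
import spark.implicits._

val dfDec = Seq(("1", "2.2;4;4"), ("2", "5;1.2"), ("3", "8;0;4")).toDF("id", "values")
dfDec
  .withColumn("arr", split(col("values"), ";").cast("array<double>")) // keep the fractional part
  .selectExpr("id", "`values`", "aggregate(arr, 0D, (x, y) -> x + y) as sum") // expected 10.2, 6.2, 12.0
  .show()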
PySpark solution
from pyspark.sql.functions import udf, col, split
from pyspark.sql.types import FloatType

# UDF to sum the split values, returning None when non-numeric values exist in the string
# Change the implementation of the function as needed
def values_sum(split_list):
    total = 0
    for num in split_list:
        try:
            total += float(num)
        except ValueError:
            return None
    return total

values_summed = udf(values_sum, FloatType())
res = df.withColumn('summed', values_summed(split(col('values'), ';')))
res.show()
The solution could've been a one-liner if it were known the array values are of a given data type. However, it is better to go with a safer implementation that covers all cases.
Hive solution
Use explode with split and group by to sum the values.
select id,sum(cast(split_value as float)) as summed
from tbl
lateral view explode(split(values,';')) t as split_value
group by id
Or write a stored function which does the job:
CREATE FUNCTION SPLIT_AND_SUM ( s VARCHAR(1024) ) RETURNS INT
BEGIN
...
END

Spark: how to apply a loop function to dataframes

I have two dataframes, shown below. I'm trying to look up the second dataframe using the foreign key and then generate a new dataframe. I was thinking of doing spark.sql("""select history.value as previous_year_1 from df1, history where df1.key=history.key and history.date=add_months($currentdate,-1*12)"""), but then I would need to do it multiple times, for say 10 previous years, and join the results back together. How can I create a function for this? Many thanks. Quite new here.
dataframe one:
+---+---+-----------+
|key|val| date |
+---+---+-----------+
| 1|100| 2018-04-16|
| 2|200| 2018-04-16|
+---+---+-----------+
dataframe two: historical data
+---+---+-----------+
|key|val| date |
+---+---+-----------+
| 1|10 | 2017-04-16|
| 1|20 | 2016-04-16|
+---+---+-----------+
The result I want to generate is
+---+----------+-----------------+-----------------+
|key|date | previous_year_1 | previous_year_2 |
+---+----------+-----------------+-----------------+
| 1|2018-04-16| 10 | 20 |
| 2|null | null | null |
+---+----------+-----------------+-----------------+
To solve this, the following approach can be applied:
1) Join the two dataframes by key.
2) Filter out all the rows where the previous date is not a whole number of years before the reference date.
3) Calculate the difference in years for each row and put the value in a dedicated column.
4) Pivot the DataFrame around the column calculated in the previous step and aggregate on the value of the respective year.
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions._
import spark.implicits._

def generateWhereForPreviousYears(nbYears: Int): Column =
  (-1 to -nbYears by -1) // loop over each year going backwards
    .map { yearsBack =>
      /*
       * Each year-back count is turned into an expression
       * to be included in the WHERE clause.
       * This is equivalent to "history.date=add_months($currentdate,-1*12)"
       * in the comment in the question.
       */
      add_months($"df1.date", 12 * yearsBack) === $"df2.date"
    }
    /*
     * The previous .map call produces a sequence of Column expressions;
     * we need to combine them with "or" to obtain a single Spark Column.
     * The .reduce() function is the most appropriate here.
     */
    .reduce(_ or _) or $"df2.date".isNull // the last "or" is added to include empty lines in the result

val nbYearsBack = 3

val result = sourceDf1.as("df1")
  .join(sourceDf2.as("df2"), $"df1.key" === $"df2.key", "left")
  .where(generateWhereForPreviousYears(nbYearsBack))
  .withColumn("diff_years", concat(lit("previous_year_"), year($"df1.date") - year($"df2.date")))
  .groupBy($"df1.key", $"df1.date")
  .pivot("diff_years")
  .agg(first($"df2.value"))
  .drop("null") // drop the unwanted extra column produced by the null values
The output is:
+---+----------+---------------+---------------+
|key|date |previous_year_1|previous_year_2|
+---+----------+---------------+---------------+
|1 |2018-04-16|10 |20 |
|2 |2018-04-16|null |null |
+---+----------+---------------+---------------+
Let me "read through the lines" and give you a "similar" solution to what you are asking:
val df1Pivot = df1.groupBy("key").pivot("date").agg(max("val"))
val df2Pivot = df2.groupBy("key").pivot("date").agg(max("val"))
val result = df1Pivot.join(df2Pivot, Seq("key"), "left")
result.show
+---+----------+----------+----------+
|key|2018-04-16|2016-04-16|2017-04-16|
+---+----------+----------+----------+
| 1| 100| 20| 10|
| 2| 200| null| null|
+---+----------+----------+----------+
Feel free to manipulate the data a bit if you really need to change the column names.
Or even better:
df1.union(df2).groupBy("key").pivot("date").agg(max("val")).show
+---+----------+----------+----------+
|key|2016-04-16|2017-04-16|2018-04-16|
+---+----------+----------+----------+
| 1| 20| 10| 100|
| 2| null| null| 200|
+---+----------+----------+----------+
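If you really do want the previous_year_N column names from the question, one possible way is to rename the pivoted date columns afterwards. A sketch, assuming the yyyy-MM-dd column names sort chronologically as plain strings and using a placeholder name for the newest column:
import org.apache.spark.sql.functions.max

val pivoted = df1.union(df2).groupBy("key").pivot("date").agg(max("val"))
// newest date first; yyyy-MM-dd sorts chronologically as text
val dateCols = pivoted.columns.filter(_ != "key").sorted.reverse
val renamed = dateCols.zipWithIndex.foldLeft(pivoted) { case (acc, (c, i)) =>
  acc.withColumnRenamed(c, if (i == 0) "current_year" else s"previous_year_$i")
}
renamed.show()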

How to make row data into source and target zigzag pairs using Hive or Pig

Input
id,name,time
1,home,10:20
1,product,10:21
1,mobile,10:22
2,id,10:24
2,bag,10:30
2,home,10:21
3,keyboard,10:32
3,home,10:33
3,welcome,10:36
I want to make the name column into source and target pairs, based on the output below.
Earlier I tried this with Pig. The steps were:
a = load --> b = asc --> c = dec --> then join the data
I got output like this:
(1,home,10:20,1,product,10:21)
(2,bag,10:30,2,id,10:24)
(3,home,10:32,3,welcome,10:36)
Desired output:
1,home,product
1,product,mobile
2,id,bag
2,bag,home
3,keyboard,home
3,home,welcome
In Hive (and in Spark SQL), you can use the window function LEAD:
with t as (
  select id, name, lead(name) over (partition by id order by time) as zigzag
  from table
)
select * from t where t.zigzag is not null
This should give you the output:
+---+--------+-------+
| id|    name| zigzag|
+---+--------+-------+
|  1|    home|product|
|  1| product| mobile|
|  2|    home|     id|
|  2|      id|    bag|
|  3|keyboard|   home|
|  3|    home|welcome|
+---+--------+-------+
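The same thing can be written with the Spark DataFrame API. A sketch in Scala, assuming the input is already loaded as a DataFrame named df with columns id, name and time, and that rows should be ordered by time within each id:
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, lead}

val w = Window.partitionBy("id").orderBy("time")
df.withColumn("target", lead(col("name"), 1).over(w)) // next name within the same id, in time order
  .where(col("target").isNotNull)                     // drop the last row of each id
  .select(col("id"), col("name").as("source"), col("target"))
  .show()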

How to read json with fields without values as spark dataframe?

I am working with JSON files and Spark DataFrames. I am trying to parse a file with the JSON strings below:
{"id":"00010005","time_value":864359000,"speed":1079,"acceleration":19,"la":36.1433530,"lo":-11.51577690}
{"id":"00010005","time_value":864360000,"speed":1176,"acceleration":10,"la":36.1432660,"lo":-11.51578220}
{"id":"00010005","time_value":864361000,"speed":1175,"acceleration":,"la":36.1431730,"lo":-11.51578840}
{"id":"00010005","time_value":864362000,"speed":1174,"acceleration":,"la":36.1430780,"lo":-11.51579410}
{"id":"00010005","time_value":864363000,"speed":1285,"acceleration":11,"la":36.1429890,"lo":-11.51580110}
Here the acceleration field sometimes doesn't contain any value. Spark marks the JSON records that don't have an acceleration value as corrupt (_corrupt_record).
val df = sqlContext.read.json(data)
scala> df.show(20)
+--------------------+------------+--------+---------+-----------+-----+----------+
| _corrupt_record|acceleration| id| la| lo|speed|time_value|
+--------------------+------------+--------+---------+-----------+-----+----------+
| null| -1|00010005|36.143418|-11.5157712| 887| 864358000|
| null| 19|00010005|36.143353|-11.5157769| 1079| 864359000|
| null| 10|00010005|36.143266|-11.5157822| 1176| 864360000|
|{"id":"00010005",...| null| null| null| null| null| null|
|{"id":"00010005",...| null| null| null| null| null| null|
I don't want to drop these records. What would be the correct way to read these JSON records?
I have tried the code below, which replaces the empty "acceleration" with a 0 value, but it's not a generic solution for handling the scenario where the value of any field can be missing.
val df1 = df.select("_corrupt_record").na.drop()
val stripRdd = df1.rdd.map( x => x.getString(0)).map(x=>x.replace(""""acceleration":""",""""acceleration":0"""))
val newDf = sqlContext.read.json(stripRdd)
val trimDf = df.drop("_corrupt_record").na.drop
val finalDf = trimDf.unionAll(newDf)
You can do it easily if you have a schema in place for your record. Say your schema is called SpeedRecord, with fields: acceleration, id, la, lo, speed, time_value.
import org.apache.spark.sql.Encoders

case class SpeedRecord(acceleration: Int, id: Long, la: Double, lo: Double, speed: Int, time_value: Long)

val schema = Encoders.product[SpeedRecord].schema
val speedRecord = spark.read.schema(schema).json("/path/data.json")
speedRecord.show()
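If you prefer not to go through an encoder, the same approach works with an explicitly written schema. A sketch with types guessed from the sample records (note that id looks like a zero-padded string in the data, so StringType is assumed here):
import org.apache.spark.sql.types._

val explicitSchema = StructType(Seq(
  StructField("id", StringType, nullable = true),            // "00010005" keeps its leading zeros as a string
  StructField("time_value", LongType, nullable = true),
  StructField("speed", IntegerType, nullable = true),
  StructField("acceleration", IntegerType, nullable = true), // nullable, so a missing value becomes null
  StructField("la", DoubleType, nullable = true),
  StructField("lo", DoubleType, nullable = true)
))
val speedRecordDf = spark.read.schema(explicitSchema).json("/path/data.json")
speedRecordDf.show()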