How to read JSON with fields without values as a Spark DataFrame? - apache-spark-sql

I am working with JSON files in Spark DataFrames. I am trying to parse a file with the JSON strings below:
{"id":"00010005","time_value":864359000,"speed":1079,"acceleration":19,"la":36.1433530,"lo":-11.51577690}
{"id":"00010005","time_value":864360000,"speed":1176,"acceleration":10,"la":36.1432660,"lo":-11.51578220}
{"id":"00010005","time_value":864361000,"speed":1175,"acceleration":,"la":36.1431730,"lo":-11.51578840}
{"id":"00010005","time_value":864362000,"speed":1174,"acceleration":,"la":36.1430780,"lo":-11.51579410}
{"id":"00010005","time_value":864363000,"speed":1285,"acceleration":11,"la":36.1429890,"lo":-11.51580110}
Here the acceleration field sometimes doesn't contain any value. Spark marks the JSON records that lack an acceleration value as _corrupt_record.
val df = sqlContext.read.json(data)
scala> df.show(20)
+--------------------+------------+--------+---------+-----------+-----+----------+
| _corrupt_record|acceleration| id| la| lo|speed|time_value|
+--------------------+------------+--------+---------+-----------+-----+----------+
| null| -1|00010005|36.143418|-11.5157712| 887| 864358000|
| null| 19|00010005|36.143353|-11.5157769| 1079| 864359000|
| null| 10|00010005|36.143266|-11.5157822| 1176| 864360000|
|{"id":"00010005",...| null| null| null| null| null| null|
|{"id":"00010005",...| null| null| null| null| null| null|
I don't want to drop these records. What would be the correct way to read these JSON records?
I have tried the code below, which replaces the missing "acceleration" with a '0' value. But it's not a generic solution for scenarios where the value of any field can be missing.
val df1 = df.select("_corrupt_record").na.drop()
val stripRdd = df1.rdd.map( x => x.getString(0)).map(x=>x.replace(""""acceleration":""",""""acceleration":0"""))
val newDf = sqlContext.read.json(stripRdd)
val trimDf = df.drop("_corrupt_record").na.drop
val finalDf = trimDf.unionAll(newDf)
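As a generic pre-processing sketch (plain Python, with a hypothetical repair_line helper), each line can be rewritten so that an empty field value becomes an explicit JSON null before the text is handed to spark.read.json; this assumes no string value itself contains a colon immediately followed by a comma or closing brace:

```python
import json
import re

# Hypothetical repair step: turn an empty value (":," or ":}") into an
# explicit JSON null so that every line parses. Assumes no string value
# contains a colon immediately followed by a comma or closing brace.
def repair_line(line: str) -> str:
    return re.sub(r':\s*(?=[,}])', ':null', line)

broken = '{"id":"00010005","time_value":864361000,"speed":1175,"acceleration":,"la":36.1431730,"lo":-11.51578840}'
record = json.loads(repair_line(broken))
print(record["acceleration"])  # None
```

In Spark, this repair could be mapped over the raw text RDD (or Dataset[String]) before calling read.json, which avoids special-casing any single field name.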

You can do this easily if you have a schema in place for your records. Say your schema is called SpeedRecord with fields: acceleration, id, la, lo, speed, time_value.
// id is a JSON string ("00010005"), so type it as String
case class SpeedRecord(acceleration : Int, id : String, la : Double , lo : Double, speed : Int, time_value : Long)
// A Scala case class is a Product, so derive the schema with Encoders.product (Encoders.bean expects a Java bean)
val schema = Encoders.product[SpeedRecord].schema
val speedRecord = spark.read.schema(schema).json("/path/data.json")
speedRecord.show()

Related

Dynamic/Variable Offset in SparkSQL Lead/Lag function

Can we somehow use an offset value that depends on the column value in lead/lag function in spark SQL ?
Example : Here is what works fine.
val sampleData = Seq(
    ("bob", "Developer", 125000),
    ("mark", "Developer", 108000),
    ("carl", "Tester", 70000),
    ("peter", "Developer", 185000),
    ("jon", "Tester", 65000),
    ("roman", "Tester", 82000),
    ("simon", "Developer", 98000),
    ("eric", "Developer", 144000),
    ("carlos", "Tester", 75000),
    ("henry", "Developer", 110000)
  ).toDF("Name", "Role", "Salary")
val window = Window.orderBy("Role")
//Derive lag column for salary
val laggingCol = lag(col("Salary"), 1).over(window)
//Use derived column LastSalary to find difference between current and previous row
val salaryDifference = col("Salary") - col("LastSalary")
//Calculate trend based on the difference
//IF ELSE / CASE can be written using when.otherwise in spark
val trend = when(col("SalaryDiff").isNull || col("SalaryDiff").===(0), "SAME")
.when(col("SalaryDiff").>(0), "UP")
.otherwise("DOWN")
sampleData.withColumn("LastSalary", laggingCol)
.withColumn("SalaryDiff",salaryDifference)
.withColumn("Trend", trend).show()
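To see what the window computes, here is a minimal plain-Python sketch of the fixed-offset lag plus the trend rule (values taken from the first three sample rows; not Spark code):

```python
# lag(Salary, 1): each row sees the previous row's salary (None for the first).
salaries = [125000, 108000, 70000]
last = [None] + salaries[:-1]
diff = [s - l if l is not None else None for s, l in zip(salaries, last)]
# when/otherwise: SAME when the diff is null or 0, UP when positive, else DOWN.
trend = ["SAME" if d in (None, 0) else "UP" if d > 0 else "DOWN" for d in diff]
print(trend)  # ['SAME', 'DOWN', 'DOWN']
```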
Now, my use case is such that the offset we have to pass depends on a particular column of type Integer. This is roughly what I wanted to work:
val sampleData = Seq(
    ("bob", "Developer", 125000, 2),
    ("mark", "Developer", 108000, 3),
    ("carl", "Tester", 70000, 3),
    ("peter", "Developer", 185000, 2),
    ("jon", "Tester", 65000, 1),
    ("roman", "Tester", 82000, 1),
    ("simon", "Developer", 98000, 2),
    ("eric", "Developer", 144000, 3),
    ("carlos", "Tester", 75000, 2),
    ("henry", "Developer", 110000, 2)
  ).toDF("Name", "Role", "Salary", "ColumnForOffset")
val window = Window.orderBy("Role")
//Derive lag column for salary
val laggingCol = lag(col("Salary"), col("ColumnForOffset")).over(window)
//Use derived column LastSalary to find difference between current and previous row
val salaryDifference = col("Salary") - col("LastSalary")
//Calculate trend based on the difference
//IF ELSE / CASE can be written using when.otherwise in spark
val trend = when(col("SalaryDiff").isNull || col("SalaryDiff").===(0), "SAME")
.when(col("SalaryDiff").>(0), "UP")
.otherwise("DOWN")
sampleData.withColumn("LastSalary", laggingCol)
.withColumn("SalaryDiff",salaryDifference)
.withColumn("Trend", trend).show()
This will throw an exception, as expected, since the offset parameter only accepts an integer literal.
Let us discuss whether we can somehow implement a logic for this.
You can add a row number column, and do a self join based on the row number and offset, e.g.:
val df = sampleData.withColumn("rn", row_number().over(window))
val df2 = df.alias("t1").join(
    df.alias("t2"),
    expr("t1.rn = t2.rn + t1.ColumnForOffset"),
    "left"
  ).selectExpr("t1.*", "t2.Salary as LastSalary")
df2.show
+------+---------+------+---------------+---+----------+
| Name| Role|Salary|ColumnForOffset| rn|LastSalary|
+------+---------+------+---------------+---+----------+
| bob|Developer|125000| 2| 1| null|
| mark|Developer|108000| 3| 2| null|
| peter|Developer|185000| 2| 3| 125000|
| simon|Developer| 98000| 2| 4| 108000|
| eric|Developer|144000| 3| 5| 108000|
| henry|Developer|110000| 2| 6| 98000|
| carl| Tester| 70000| 3| 7| 98000|
| jon| Tester| 65000| 1| 8| 70000|
| roman| Tester| 82000| 1| 9| 65000|
|carlos| Tester| 75000| 2| 10| 65000|
+------+---------+------+---------------+---+----------+
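The pairing logic of the self join can be checked with a small plain-Python sketch (hypothetical last_salary helper; rows are assumed to already be in window order, with 0-based indexes playing the role of rn):

```python
# t1.rn = t2.rn + t1.ColumnForOffset, i.e. the lagged row sits
# ColumnForOffset positions earlier in the window order.
rows = [  # (Name, Salary, ColumnForOffset), already in window order
    ("bob", 125000, 2), ("mark", 108000, 3), ("peter", 185000, 2),
]

def last_salary(rows, i):
    j = i - rows[i][2]  # index of the matching t2 row
    return rows[j][1] if j >= 0 else None  # no match -> null (left join)

print([last_salary(rows, i) for i in range(len(rows))])  # [None, None, 125000]
```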

SQL - How can I sum elements of an array?

I am using SQL with pyspark and hive, and I'm new to all of it.
I have a hive table with a column of type string, like this:
id | values
1 | '2;4;4'
2 | '5;1'
3 | '8;0;4'
I want to create a query to obtain this:
id | values | sum
1 | '2.2;4;4' | 10.2
2 | '5;1.2' | 6.2
3 | '8;0;4' | 12
By using split(values, ';') I can get arrays like ['2.2','4','4'], but I still need to convert them into decimal numbers and sum them.
Is there a not too complicated way to do this?
Thank you so so much in advance! And happy coding to you all :)
From Spark 2.4+, we don't have to use explode on arrays; we can work on arrays directly using higher-order functions.
Example:
from pyspark.sql.functions import *
df=spark.createDataFrame([("1","2;4;4"),("2","5;1"),("3","8;0;4")],["id","values"])
#split and creating array<int> column
df1=df.withColumn("arr",split(col("values"),";").cast("array<int>"))
df1.createOrReplaceTempView("tmp")
spark.sql("select *,aggregate(arr,0,(x,y) -> x + y) as sum from tmp").drop("arr").show()
#+---+------+---+
#| id|values|sum|
#+---+------+---+
#| 1| 2;4;4| 10|
#| 2| 5;1| 6|
#| 3| 8;0;4| 12|
#+---+------+---+
#in dataframe API
df1.selectExpr("*","aggregate(arr,0,(x,y) -> x + y) as sum").drop("arr").show()
#+---+------+---+
#| id|values|sum|
#+---+------+---+
#| 1| 2;4;4| 10|
#| 2| 5;1| 6|
#| 3| 8;0;4| 12|
#+---+------+---+
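The aggregate higher-order function is a fold: it starts from the initial value and combines it with each array element in turn. A plain-Python equivalent using functools.reduce:

```python
from functools import reduce

# aggregate(arr, 0, (x, y) -> x + y): fold the array with accumulator x
# and element y, starting from 0.
arr = [2, 4, 4]
total = reduce(lambda x, y: x + y, arr, 0)
print(total)  # 10
```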
PySpark solution
from pyspark.sql.functions import udf,col,split
from pyspark.sql.types import FloatType
#UDF to sum the split values returning none when non numeric values exist in the string
#Change the implementation of the function as needed
def values_sum(split_list):
    total = 0
    for num in split_list:
        try:
            total += float(num)
        except ValueError:
            return None
    return total
values_summed = udf(values_sum,FloatType())
res = df.withColumn('summed',values_summed(split(col('values'),';')))
res.show()
The solution could have been a one-liner if the array values were known to be of a given data type. However, it is better to go with a safer implementation that covers all cases.
Hive solution
Use explode with split and group by to sum the values.
select id,sum(cast(split_value as float)) as summed
from tbl
lateral view explode(split(values,';')) t as split_value
group by id
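The explode-plus-group-by logic can be sanity-checked in plain Python: split each string on ';', cast the pieces to float, and sum per id:

```python
# One entry per id, mirroring explode(split(values, ';')) + group by id.
data = {"1": "2;4;4", "2": "5;1", "3": "8;0;4"}
summed = {k: sum(float(x) for x in v.split(";")) for k, v in data.items()}
print(summed)  # {'1': 10.0, '2': 6.0, '3': 12.0}
```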
Or write a stored procedure which does the job:
CREATE FUNCTION SPLIT_AND_SUM ( s VARCHAR(1024) ) RETURNS INT
BEGIN
...
END

how to remove white space in column headers in pyspark and how to convert a string date to datetime format

I am a newbie to pyspark. I am trying to remove the white space from the column headers, but it is not getting removed. After that I tried to convert a date column from string type to datetime format, but it was not converted. Please help me with how to do it.
I tried this:
emp=spark.read.csv("Downloads/dataset2/employees.csv",header=True)
dd=list(map(lambda x: x.replace(" ",""),emp.columns))
df=emp.toDF(*dd)
+----------+---------+-----------+--------------------+---------------+--------------------+--------------------+--------------------+---------+-------+-----------+--------+----------------+----------+--------------------+--------------------+----------+--------------------+
|EmployeeID| LastName| FirstName| Title|TitleOfCourtesy| BirthDate| HireDate| Address| City| Region| PostalCode| Country| HomePhone| Extension| Photo| Notes| ReportsTo| PhotoPath|
+----------+---------+-----------+--------------------+---------------+--------------------+--------------------+--------------------+---------+-------+-----------+--------+----------------+----------+--------------------+--------------------+----------+--------------------+
| 1|'Davolio'| 'Nancy'|'Sales Representa...| 'Ms.'|'1948-12-08 00:00...|'1992-05-01 00:00...|'507 - 20th Ave. ...|'Seattle'| 'WA'| '98122'| 'USA'|'(206) 555-9857'| '5467'|'0x151C2F00020000...|'Education includ...| 2|'http://accweb/em...|
| 2| 'Fuller'| 'Andrew'|'Vice President S...| 'Dr.'|'1952-02-19 00:00...|'1992-08-14 00:00...|'908 W. Capital Way'| 'Tacoma'| 'WA'| '98401'| 'USA'|'(206) 555-9482'| '3457'|'0x151C2F00020000...|'Andrew received ...| NULL|'http://accweb/em...|
+----------+---------+-----------+--------------------+---------------+--------------------+--------------------+--------------------+---------+-------+-----------+--------+----------------+----------+--------------------+--------------------+----------+--------------------+
After that I tried this, but it shows an error:
import pyspark
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
from pyspark.sql.types import *
emp.select("BirthDate").show()
Py4JJavaError: An error occurred while calling o197.select.
: org.apache.spark.sql.AnalysisException: cannot resolve '`BirthDate`' given input columns: [ PhotoPath, EmployeeID, Photo, City, HomePhone, ReportsTo, PostalCode, Title, Address, Notes, LastName, FirstName, HireDate, Region, Extension, Country, BirthDate, TitleOfCourtesy];;
After that I tried this:
df=emp.withColumn('BirthDate', from_unixtime(unix_timestamp('BirthDate','yyyy-mm-dd')))
but it shows null values:
df.select("BirthDate").show(4)
+---------+
|BirthDate|
+---------+
| null|
| null|
| null|
| null|
| null|
| null|
| null|
| null|
| null|
+---------+
Try this to strip the whitespace from the column headers:
for each in df.columns:
    df = df.withColumnRenamed(each, each.strip())
For the date conversion, note that the pattern must use MM (months), not mm (minutes):
df = df.withColumn('BirthDate', from_unixtime(unix_timestamp('BirthDate', 'yyyy-MM-dd')))
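A plain-Python check of both ideas (hypothetical column list; Python's %m/%M mirrors the MM/mm distinction in Spark's Java-style date patterns):

```python
from datetime import datetime

# 1) Strip stray whitespace from header names before renaming the columns.
cols = ["EmployeeID", " LastName", " FirstName"]
print([c.strip() for c in cols])  # ['EmployeeID', 'LastName', 'FirstName']

# 2) In Java-style patterns MM is months and mm is minutes, so 'yyyy-mm-dd'
#    silently mis-parses. Python makes the same distinction with %m vs %M.
parsed = datetime.strptime("1948-12-08", "%Y-%m-%d")
print(parsed.month)  # 12
```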

Spark: how to perform loop fuction to dataframes

I have two dataframes as below. I'm trying to look up the second dataframe using the foreign key and then generate a new dataframe. I was thinking of doing spark.sql("""select history.val as previous_year_1 from df1, history where df1.key = history.key and history.date = add_months($currentdate, -1*12)"""), but then I would need to repeat it, say for 10 previous years, and join the results back together. How can I create a function for this? Many thanks. Quite new here.
dataframe one:
+---+---+-----------+
|key|val| date |
+---+---+-----------+
| 1|100| 2018-04-16|
| 2|200| 2018-04-16|
+---+---+-----------+
dataframe two : historical data
+---+---+-----------+
|key|val| date |
+---+---+-----------+
| 1|10 | 2017-04-16|
| 1|20 | 2016-04-16|
+---+---+-----------+
The result I want to generate is
+---+----------+-----------------+-----------------+
|key|date | previous_year_1 | previous_year_2 |
+---+----------+-----------------+-----------------+
| 1|2018-04-16| 10 | 20 |
| 2|null | null | null |
+---+----------+-----------------+-----------------+
To solve this, the following approach can be applied:
1) Join the two dataframes by key.
2) Filter out all the rows where previous dates are not exactly years before reference dates.
3) Calculate the years difference for the row and put the value in a dedicated column.
4) Pivot the DataFrame around the column calculated in the previous step and aggregate on the value of the respective year.
private def generateWhereForPreviousYears(nbYears: Int): Column =
(-1 to -nbYears by -1) // loop on each backwards year value
.map(yearsBack =>
/*
* Each year back count number is transformed in an expression
* to be included into the WHERE clause.
* This is equivalent to "history.date=add_months($currentdate,-1*12)"
* in your comment in the question.
*/
add_months($"df1.date", 12 * yearsBack) === $"df2.date"
)
/*
The previous .map call produces a sequence of Column expressions,
we need to concatenate them with "or" in order to obtain
a single Spark Column reference. .reduce() function is most
appropriate here.
*/
.reduce(_ or _) or $"df2.date".isNull // the last "or" is added to include empty lines in the result.
val nbYearsBack = 3
val result = sourceDf1.as("df1")
.join(sourceDf2.as("df2"), $"df1.key" === $"df2.key", "left")
.where(generateWhereForPreviousYears(nbYearsBack))
.withColumn("diff_years", concat(lit("previous_year_"), year($"df1.date") - year($"df2.date")))
.groupBy($"df1.key", $"df1.date")
.pivot("diff_years")
.agg(first($"df2.val")) // "val" matches the column name shown in dataframe two
.drop("null") // drop the unwanted extra column with null values
The output is:
+---+----------+---------------+---------------+
|key|date |previous_year_1|previous_year_2|
+---+----------+---------------+---------------+
|1 |2018-04-16|10 |20 |
|2 |2018-04-16|null |null |
+---+----------+---------------+---------------+
Let me "read between the lines" and give you a "similar" solution to what you are asking:
val df1Pivot = df1.groupBy("key").pivot("date").agg(max("val"))
val df2Pivot = df2.groupBy("key").pivot("date").agg(max("val"))
val result = df1Pivot.join(df2Pivot, Seq("key"), "left")
result.show
+---+----------+----------+----------+
|key|2018-04-16|2016-04-16|2017-04-16|
+---+----------+----------+----------+
| 1| 100| 20| 10|
| 2| 200| null| null|
+---+----------+----------+----------+
Feel free to manipulate the data a bit if you really need to change the column names.
Or even better:
df1.union(df2).groupBy("key").pivot("date").agg(max("val")).show
+---+----------+----------+----------+
|key|2016-04-16|2017-04-16|2018-04-16|
+---+----------+----------+----------+
| 1| 20| 10| 100|
| 2| null| null| 200|
+---+----------+----------+----------+
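The union-then-pivot idea amounts to building, per key, a mapping from date to value; a minimal plain-Python sketch with the sample rows:

```python
# union(df1, df2) rows as (key, val, date); pivoting on date keeps one
# dict per key, mapping each date column to its val.
rows = [(1, 100, "2018-04-16"), (2, 200, "2018-04-16"),
        (1, 10, "2017-04-16"), (1, 20, "2016-04-16")]
pivot = {}
for key, val, date in rows:
    pivot.setdefault(key, {})[date] = val
print(pivot[1])  # {'2018-04-16': 100, '2017-04-16': 10, '2016-04-16': 20}
```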

how to make row data into source and target zigzag using Hive or Pig

Input
id,name,time
1,home,10:20
1,product,10:21
1,mobile,10:22
2,id,10:24
2,bag,10:30
2,home,10:21
3,keyboard,10:32
3,home,10:33
3,welcome,10:36
I want to make the name column into source and target columns based on the output below.
Earlier I tried with Pig. The steps were:
a=load --> b=asc -> c=desc -> then join the data
I got output like this:
(1,home,10:20,1,product,10:21)
(2,bag,10:30,2,id,10:24)
(3,home,10:32,3,welcome,10:36)
Desired output:
1,home,product
1,product,mobile
2,id,bag
2,bag,home
3,keyboard,home
3,home,welcome
In Hive (and in Spark), you can use the window function LEAD. Window functions need an ORDER BY (a table has no inherent row order), so order each partition by time:
with t as
( select id, name, lead(name) over (partition by id order by time) as zigzag from table )
select * from t where t.zigzag is not null
This should give you the output:
+---+--------+-------+
| id|    name| zigzag|
+---+--------+-------+
|  1|    home|product|
|  1| product| mobile|
|  2|    home|     id|
|  2|      id|    bag|
|  3|keyboard|   home|
|  3|    home|welcome|
+---+--------+-------+
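The LEAD pairing can be sketched in plain Python: group the rows by id, order each group by time, and pair every name with the next one (note that id 2's rows are not in time order in the input, so time ordering yields home, id, bag):

```python
from itertools import groupby

rows = [(1, "home", "10:20"), (1, "product", "10:21"), (1, "mobile", "10:22"),
        (2, "id", "10:24"), (2, "bag", "10:30"), (2, "home", "10:21")]

# lead(name) over (partition by id order by time): within each id,
# pair each name with the next name; the last row (null lead) is dropped.
out = []
for key, grp in groupby(sorted(rows, key=lambda r: (r[0], r[2])), key=lambda r: r[0]):
    grp = list(grp)
    out += [(key, a[1], b[1]) for a, b in zip(grp, grp[1:])]
print(out)
```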