In Spark, I have a DataFrame with a column named goals that holds a numeric value. I want to append the string "goal" or "goals" to the actual value, so it prints as:
if,
value = 1 then 1 goal
value = 2 then 2 goals, and so on.
My data looks like this
val goalsDF = Seq(("meg", 2), ("meg", 4), ("min", 3),
("min2", 1), ("ss", 1)).toDF("name", "goals")
goalsDF.show()
+-----+-----+
|name |goals|
+-----+-----+
|meg |2 |
|meg |4 |
|min |3 |
|min2 |1 |
|ss |1 |
+-----+-----+
Expected Output:
+-----+---------+
|name |goals |
+-----+---------+
|meg |2 goals |
|meg |4 goals |
|min |3 goals |
|min2 |1 goal |
|ss |1 goal |
+-----+---------+
I tried the code below, but it doesn't work and prints the data as null:
goalsDF.withColumn("goals", col("goals") + lit("goals")).show()
+----+-----+
|name|goals|
+----+-----+
| meg| null|
| meg| null|
| min| null|
|min2| null|
| ss| null|
+----+-----+
Please suggest if this can be done inside .withColumn() without any additional user-defined method.
You should use case when (i.e. when/otherwise). The example below is PySpark, but you should be able to reference it and adapt it to Scala.
DF.withColumn(
    'goals',
    F.when(F.col('goals') == 1, '1 goal')
     .otherwise(F.concat_ws(' ', F.col('goals'), F.lit('goals')))
)
For scala example see here: https://stackoverflow.com/a/37108127/5899997
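This is not the linked answer, just a rough Scala sketch of the same when/concat idea against the goalsDF above:
import org.apache.spark.sql.functions.{col, concat, lit, when}

// The arithmetic `+` in the question casts the string to a number and yields
// null; string concatenation with concat/lit keeps the value intact.
val labelled = goalsDF.withColumn(
  "goals",
  when(col("goals") === 1, concat(col("goals").cast("string"), lit(" goal")))
    .otherwise(concat(col("goals").cast("string"), lit(" goals")))
)
labelled.show(false)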
I have data like below:
+---+----------------------------+--------+
|Id |DateTime                    |products|
+---+----------------------------+--------+
|1  |2017-08-24T00:00:00.000+0000|1       |
|1  |2017-08-24T00:00:00.000+0000|2       |
|1  |2017-08-24T00:00:00.000+0000|3       |
|1  |2016-05-24T00:00:00.000+0000|1       |
+---+----------------------------+--------+
I am using Window.unboundedPreceding, Window.unboundedFollowing as below to get the second most recent datetime.
sorted_times = Window.partitionBy('Id').orderBy(F.col('ModifiedTime').desc()) \
    .rangeBetween(Window.unboundedPreceding, Window.unboundedFollowing)
df3 = data.withColumn(
    "second_recent",
    F.collect_list(F.col('ModifiedTime')).over(sorted_times).getItem(1)
)
But I get the results below, getting the second date from the second row, which is the same as the first row:
+---+----------------------------+----------------------------+--------+
|Id |DateTime                    |secondtime                  |Products|
+---+----------------------------+----------------------------+--------+
|1  |2017-08-24T00:00:00.000+0000|2017-08-24T00:00:00.000+0000|1       |
|1  |2017-08-24T00:00:00.000+0000|2017-08-24T00:00:00.000+0000|2       |
|1  |2017-08-24T00:00:00.000+0000|2017-08-24T00:00:00.000+0000|3       |
|1  |2016-05-24T00:00:00.000+0000|2017-08-24T00:00:00.000+0000|1       |
+---+----------------------------+----------------------------+--------+
Please help me find the second latest datetime based on distinct datetimes.
Thanks in advance
Use collect_set instead of collect_list to avoid duplicates:
df3 = data.withColumn(
    "second_recent",
    F.collect_set(F.col('LastModifiedTime')).over(sorted_times)[1]
)
df3.show(truncate=False)
#+-----+----------------------------+--------+----------------------------+
#|VipId|LastModifiedTime |products|second_recent |
#+-----+----------------------------+--------+----------------------------+
#|1 |2017-08-24T00:00:00.000+0000|1 |2016-05-24T00:00:00.000+0000|
#|1 |2017-08-24T00:00:00.000+0000|2 |2016-05-24T00:00:00.000+0000|
#|1 |2017-08-24T00:00:00.000+0000|3 |2016-05-24T00:00:00.000+0000|
#|1 |2016-05-24T00:00:00.000+0000|1 |2016-05-24T00:00:00.000+0000|
#+-----+----------------------------+--------+----------------------------+
Another way is to use an unordered window and sort the array before taking second_recent:
from pyspark.sql import functions as F, Window

df3 = data.withColumn(
    "second_recent",
    F.sort_array(
        F.collect_set(F.col('LastModifiedTime')).over(Window.partitionBy('VipId')),
        False
    )[1]
)
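If you happen to be working in Scala, a rough equivalent of the sort_array approach might look like this (an untested sketch, reusing the column names from the answer above):
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, collect_set, sort_array}

// Collect the distinct timestamps per partition, sort them descending,
// and pick the element at index 1 (the second most recent).
val df3 = data.withColumn(
  "second_recent",
  sort_array(
    collect_set(col("LastModifiedTime")).over(Window.partitionBy("VipId")),
    asc = false
  )(1)
)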
I have a raw data DataFrame like this:
+-----------+--------------------+------+
|device | timestamp | value|
+-----------+--------------------+------+
| device_A|2022-01-01 18:00:01 | 100|
| device_A|2022-01-01 18:00:02 | 99|
| device_A|2022-01-01 18:00:03 | 100|
| device_A|2022-01-01 18:00:04 | 102|
| device_A|2022-01-01 18:00:05 | 100|
| device_A|2022-01-01 18:00:06 | 99|
| device_A|2022-01-01 18:00:11 | 98|
| device_A|2022-01-01 18:00:12 | 100|
| device_A|2022-01-01 18:00:13 | 100|
| device_A|2022-01-01 18:00:15 | 101|
| device_A|2022-01-01 18:00:17 | 101|
I'd like to aggregate them and build a listed 10-second aggregation like this:
+-----------+--------------------+------------+-------+
|device | windowtime | values| counts|
+-----------+--------------------+------------+-------+
| device_A|2022-01-01 18:00:00 |[99,100,102]|[1,3,1]|
| device_A|2022-01-01 18:00:10 |[98,100,101]|[1,2,2]|
To plot a heat-map graph of the values later.
I have succeeded in getting the values column, but it's not clear how to calculate the corresponding counts:
.withColumn("values",collect_list(col("value")).over(Window.partitionBy($"device").orderBy($"timestamp".desc)))
How can I do the weighted list aggregation in Apache Spark?
Group by time window using the window function with a duration of 10 seconds to get counts by value and device, then group by device + window_time and collect a list of structs:
val result = df
  .groupBy(
    $"device",
    window($"timestamp", "10 second")("start").as("window_time"),
    $"value"
  )
  .count()
  .groupBy("device", "window_time")
  .agg(collect_list(struct($"value", $"count")).as("values"))
  .withColumn("count", col("values.count"))
  .withColumn("values", col("values.value"))
result.show()
//+--------+-------------------+--------------+---------+
//| device| window_time| values| count|
//+--------+-------------------+--------------+---------+
//|device_A|2022-01-01 18:00:00|[102, 99, 100]|[1, 2, 3]|
//|device_A|2022-01-01 18:00:10|[100, 101, 98]|[2, 2, 1]|
//+--------+-------------------+--------------+---------+
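Note that the order of the elements inside values and count is not guaranteed by collect_list. If you want them sorted by value, as in the expected output, one option (just a sketch building on the answer above, assuming the same $ implicits) is to sort the array of (value, count) structs before splitting it:
import org.apache.spark.sql.functions.{col, collect_list, sort_array, struct, window}

// sort_array orders the structs by their first field (value), so the two
// parallel arrays stay aligned after the struct is split apart.
val sortedResult = df
  .groupBy($"device", window($"timestamp", "10 second")("start").as("window_time"), $"value")
  .count()
  .groupBy("device", "window_time")
  .agg(sort_array(collect_list(struct($"value", $"count"))).as("values"))
  .withColumn("count", col("values.count"))
  .withColumn("values", col("values.value"))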
There is a DataFrame df_titles with one column "title":
+--------------------+
| title|
+--------------------+
| harry_potter_1|
| harry_potter_2|
+--------------------+
I want to know the number of unique terms appearing in the titles, where the terms are delimited by "_", and get something like this:
+--------------------+------+
| term| count|
+--------------------+------+
| harry| 2|
| potter| 2|
| 1| 1|
| 2| 1|
+--------------------+------+
I am thinking of creating a new_df with columns "term" and "count", and for each row in df_titles, splitting the string and inserting [string, 1] into the new_df. Then maybe reduce the new df by "term":
val test = Seq.empty[Term].toDF()
df.foreach(spark.sql("INSERT INTO test VALUES (...)"))
...
But I am stuck with the code. How should I proceed? Is there a better way to do this?
You can use Spark built-in functions such as split and explode to transform your dataframe of titles into a dataframe of terms, and then do a simple groupBy. Your code should be:
import org.apache.spark.sql.functions.{col, desc, explode, split}
df_titles
.select(explode(split(col("title"), "_")).as("term"))
.groupBy("term")
.count()
.orderBy(desc("count")) // optional, to have count in descending order
Usually, when you have to perform something over a dataframe, it is better to first try a combination of Spark built-in functions, which you can find in the Spark documentation.
Details
Starting from df_titles:
+--------------+
|title |
+--------------+
|harry_potter_1|
|harry_potter_2|
+--------------+
split creates an array of words separated by _:
+-------------------+
|split(title, _, -1)|
+-------------------+
|[harry, potter, 1] |
|[harry, potter, 2] |
+-------------------+
Then, explode creates one line per item in array created by split:
+------+
|col |
+------+
|harry |
|potter|
|1 |
|harry |
|potter|
|2 |
+------+
.as("term") renames column col to term:
+------+
|term |
+------+
|harry |
|potter|
|1 |
|harry |
|potter|
|2 |
+------+
Then .groupBy("term") with .count() aggregates counting by term, count() is a shortcut for .agg(count("term").as("count"))
+------+-----+
|term |count|
+------+-----+
|harry |2 |
|1 |1 |
|potter|2 |
|2 |1 |
+------+-----+
And finally, .orderBy(desc("count")) orders by count in descending order:
+------+-----+
|term |count|
+------+-----+
|harry |2 |
|potter|2 |
|1 |1 |
|2 |1 |
+------+-----+
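For completeness, a minimal way to reproduce df_titles and run the snippet above in a spark-shell session (assuming spark.implicits._ is in scope for toDF):
import org.apache.spark.sql.functions.{col, desc, explode, split}
import spark.implicits._

val df_titles = Seq("harry_potter_1", "harry_potter_2").toDF("title")

val termCounts = df_titles
  .select(explode(split(col("title"), "_")).as("term"))
  .groupBy("term")
  .count()
  .orderBy(desc("count"))

termCounts.show(false)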
I have some data like this:
a,timestamp,list,rid,sbid,avgvalue
1,1011,1001,4,4,1.20
2,1000,819,2,3,2.40
1,1011,107,1,3,5.40
1,1021,819,1,1,2.10
In the data above, I want to find, for each timestamp and tag (a), the row with the highest avgvalue. Like this:
For timestamp 1011 and a = 1:
1,1011,1001,4,4,1.20
1,1011,107,1,3,5.40
The output would be:
1,1011,107,1,3,5.40 // because for timestamp 1011 and tag 1 the highest avg value is 5.40
So I need to pick this row.
I tried this statement, but still it does not work properly:
val highvaluetable = df.registerTempTable("high_value")
val highvalue = sqlContext.sql("select a,timestamp,list,rid,sbid,avgvalue from high_value")
highvalue.select($"a", $"timestamp", $"list", $"rid", $"sbid", $"avgvalue".cast(IntegerType).as("higher_value"))
  .groupBy("a", "timestamp")
  .max("higher_value")
highvalue.collect.foreach(println)
Any help will be appreciated.
After I applied some of your suggestions, I am still getting duplicates in my data.
+---+----------+----+----+----+----+
|a| timestamp| list|rid|sbid|avgvalue|
+---+----------+----+----+----+----+
| 4|1496745915| 718| 4| 3|0.30|
| 4|1496745918| 362| 4| 3|0.60|
| 4|1496745913| 362| 4| 3|0.60|
| 2|1496745918| 362| 4| 3|0.10|
| 3|1496745912| 718| 4| 3|0.05|
| 2|1496745918| 718| 4| 3|0.30|
| 4|1496745911|1901| 4| 3|0.60|
| 4|1496745912| 718| 4| 3|0.60|
| 2|1496745915| 362| 4| 3|0.30|
| 2|1496745915|1901| 4| 3|0.30|
| 2|1496745910|1901| 4| 3|0.30|
| 3|1496745915| 362| 4| 3|0.10|
| 4|1496745918|3878| 4| 3|0.10|
| 4|1496745915|1901| 4| 3|0.60|
| 4|1496745912| 362| 4| 3|0.60|
| 4|1496745914|1901| 4| 3|0.60|
| 4|1496745912|3878| 4| 3|0.10|
| 4|1496745912| 718| 4| 3|0.30|
| 3|1496745915|3878| 4| 3|0.05|
| 4|1496745914| 362| 4| 3|0.60|
+---+----------+----+----+----+----+
4|1496745918| 362| 4| 3|0.60|
4|1496745918|3878| 4| 3|0.10|
These rows have the same timestamp with the same tag; this is what I consider a duplicate.
This is my code:
rdd.createTempView("v1")
val rdd2=sqlContext.sql("select max(avgvalue) as max from v1 group by (a,timestamp)")
rdd2.createTempView("v2")
val rdd3=sqlContext.sql("select a,timestamp,list,rid,sbid,avgvalue from v1 join v2 on v2.max=v1.avgvalue").show()
You can use the DataFrame API to find the max as below:
df.groupBy("timestamp").agg(max("avgvalue"))
This will give you output as:
+---------+-------------+
|timestamp|max(avgvalue)|
+---------+-------------+
|1021 |2.1 |
|1000 |2.4 |
|1011 |5.4 |
+---------+-------------+
which doesn't include the other fields you require, so you can use first as:
df.groupBy("timestamp").agg(max("avgvalue") as "avgvalue", first("a") as "a", first("list") as "list", first("rid") as "rid", first("sbid") as "sbid")
You should have output as:
+---------+--------+---+----+---+----+
|timestamp|avgvalue|a |list|rid|sbid|
+---------+--------+---+----+---+----+
|1021 |2.1 |1 |819 |1 |1 |
|1000 |2.4 |2 |819 |2 |3 |
|1011 |5.4 |1 |1001|4 |4 |
+---------+--------+---+----+---+----+
The above solution still would not give you the correct row-wise output, so what you can do is use a window function and select the correct row as:
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window

val windowSpec = Window.partitionBy("timestamp").orderBy("a")
df.withColumn("newavg", max("avgvalue") over windowSpec)
  .filter(col("newavg") === col("avgvalue"))
  .drop("newavg")
  .show(false)
This will give row-wise correct data as
+---+---------+----+---+----+--------+
|a |timestamp|list|rid|sbid|avgvalue|
+---+---------+----+---+----+--------+
|1 |1021 |819 |1 |1 |2.1 |
|2 |1000 |819 |2 |3 |2.4 |
|1 |1011 |107 |1 |3 |5.4 |
+---+---------+----+---+----+--------+
You can use groupBy and find the max value for that particular group as:
// If you have the dataframe as df, then
df.groupBy("a", "timestamp").agg(max($"avgvalue").alias("maxAvgValue"))
Hope this helps
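If you also need the remaining columns for those max rows, one option (a sketch, not part of the original answer) is to join the aggregated result back to df on the group keys and keep the matching rows:
import org.apache.spark.sql.functions.{col, max}

// Aggregate the max avgvalue per (a, timestamp), join it back on the
// group keys, and keep only the rows that carry that maximum.
val maxPerGroup = df.groupBy("a", "timestamp")
  .agg(max("avgvalue").alias("maxAvgValue"))

val fullRows = df.join(maxPerGroup, Seq("a", "timestamp"))
  .filter(col("avgvalue") === col("maxAvgValue"))
  .drop("maxAvgValue")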
I saw the above answers. Below is one which you can try as well:
val sqlContext = new SQLContext(sc)
case class Tags(a: Int, timestamp: Int, list: Int, rid: Int, sbid: Int, avgvalue: Double)

val rdd = sc.textFile("file:/home/hdfs/stackOverFlow")
  .map(x => x.split(","))
  .map(x => Tags(x(0).toInt, x(1).toInt, x(2).toInt, x(3).toInt, x(4).toInt, x(5).toDouble))
  .toDF
rdd.createTempView("v1")
val rdd2 = sqlContext.sql("select max(avgvalue) as max from v1 group by (a,timestamp)")
rdd2.createTempView("v2")
val rdd3 = sqlContext.sql("select a,timestamp,list,rid,sbid,avgvalue from v1 join v2 on v2.max=v1.avgvalue").show()
Output:
+---+---------+----+---+----+--------+
| a|timestamp|list|rid|sbid|avgvalue|
+---+---------+----+---+----+--------+
| 2| 1000| 819| 2| 3| 2.4|
| 1| 1011| 107| 1| 3| 5.4|
| 1| 1021| 819| 1| 1| 2.1|
+---+---------+----+---+----+--------+
All the other solutions provided here did not give me the correct answer, so this is what worked for me with row_number():
import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions.Window
val windowSpec = Window.partitionBy("timestamp").orderBy(desc("avgvalue"))
df.select("a", "timestamp", "list", "rid", "sbid", "avgvalue")
  .withColumn("largest_avgvalue", row_number().over(windowSpec))
  .filter($"largest_avgvalue" === 1)
  .drop("largest_avgvalue")
The other solutions had the following problems in my tests:
The solution with .agg(max(x).as(x), first(y).as(y), ...) doesn't work because the first() function "will return the first value it sees", according to the documentation, which means it is non-deterministic.
The solution with .withColumn("x", max("y") over windowSpec.orderBy("m")) doesn't work because the result of the max will be the same as the value selected for the row. I believe the problem is the orderBy(): with an ordered window and no explicit frame, Spark uses a running frame (unbounded preceding to current row), so max becomes a running maximum rather than the maximum of the whole partition.
Hence, the following also gives the correct answer, with max():
val windowSpec = Window.partitionBy("timestamp").orderBy(desc("avgvalue"))
df.select("a", "timestamp", "list", "rid", "sbid", "avgvalue")
  .withColumn("largest_avgvalue", max("avgvalue").over(windowSpec))
  .filter($"largest_avgvalue" === $"avgvalue")
  .drop("largest_avgvalue")