Extend groupBy to include multiple aggregations - sql

I implemented a groupBy function which aggregates the chosen columns with a particular aggregation, and it works. The issue is that the argument for the chosen columns and aggregations is a Map[String, String], which means multiple aggregations cannot be performed on one column, for example sum, mean and max all on the same column.
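(To see why the Map is limiting: duplicate keys collapse, so only the last aggregation requested for a column survives.)
// two aggregations for the same column cannot coexist in a Map
Map("someSignal" -> "mean", "someSignal" -> "sum")
// => Map(someSignal -> sum)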
Below is what works so far:
groupByFunction(input, Map("someSignal" -> "mean"))
def groupByFunction(dataframeDummy: DataFrame,
                    columnsWithOperation: Map[String, String],
                    someSession: String = "sessionId",
                    someSignal: String = "signalName"): DataFrame = {
  dataframeDummy
    .groupBy(
      col(someSession),
      col(someSignal)
    ).agg(columnsWithOperation)
}
Upon looking into it a bit more, the agg function can take multiple column expressions, like below:
userData
  .groupBy(
    window(
      (col(timeStampColumnName) / lit(millisSecondsPerSecond)).cast(TimestampType),
      timeWindowInS.toString.concat(" seconds")
    ),
    col(sessionColumnName),
    col(signalColumnName)
  ).agg(
    mean("physicalSignalValue"),
    sum("physicalSignalValue")).show()
So I decided to manipulate the input to look like that. Below is how I did it:
// here columnsWithOperation is a Map[String, Seq[String]]: column -> list of aggregations
val signalIdColumn = columnsWithOperation.toSeq.flatMap { case (key, list) => list.map(key -> _) }

val result = signalIdColumn.map { case (column, operation) =>
  operation match {
    case "mean" => mean(column)
    case "sum"  => sum(column)
    case "max"  => max(column)
  }
}
Now I have a list of columns, which is still a problem for the agg function.
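(As a side note, a list of Columns can in fact be handed to agg with the same head-plus-varargs trick used in the solution below; a small sketch building on the result list above:)
// agg takes one Column plus varargs, so peel off the head and splat the tail
dataframeDummy
  .groupBy(col(someSession), col(someSignal))
  .agg(result.head, result.tail: _*)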

I was able to solve it by using a sequence of tuples, Seq[(String, String)], instead of Map[String, String]:
def groupByFunction(dataframeDummy: DataFrame,
                    columnsWithOperation: Seq[(String, String)],
                    someSession: String = "sessionId",
                    someSignal: String = "signalName"): DataFrame = {
  dataframeDummy
    .groupBy(
      col(someSession),
      col(someSignal)
    ).agg(columnsWithOperation.head, columnsWithOperation.tail: _*)
}
The agg call uses the trick from this post: https://stackoverflow.com/a/34955432/2091294. Pass the first (column, aggregation) pair explicitly and splat the rest as varargs:
userData
  .groupBy(
    col(someSession),
    col(someSignal)
  ).agg(columnsWithOperation.head, columnsWithOperation.tail: _*)
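With this in place, several aggregations on the same column can be requested in a single call, for example (using the same input and column name as above):
groupByFunction(input, Seq(
  "someSignal" -> "mean",
  "someSignal" -> "sum",
  "someSignal" -> "max"
))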

Related

Apply a function to a grouped dataframe using Scala Spark

I'm trying to run DBSCAN within each group of latitudes and longitudes from users. The implementation of this clustering algorithm was done by irvingc here. I bumped up all dependencies to make the code work properly in my environment.
Describing the situation: I have a DataFrame composed of events from users; each event has an id, a lat, and a long (you can see the columns in the case class below). I transform the DataFrame into a Dataset so I can use the groupByKey and mapGroups methods to apply the function to the grouped data. However, the DBSCAN I'm using receives an RDD[linalg.Vector], so I have to transform each group into a Vector of lat/lon, and this transformation hits the error described in SPARK-28702. Can you give some advice on how to handle this issue?
case class StayDataset(objectID: Long, latitude: Double, longitude: Double, timeStart: Long, timeEnd: Long)

var dfs: Array[DataFrame] = Array()

val s = dataset.groupByKey(k => k.objectID).mapGroups {
  case (k, iter) =>
    val df = POIDetection.groupStayPointsFromUser(k, iter, dataset.sparkSession)
    dfs = dfs ++ Array(df)
    k
}
def groupStayPointsFromUser(k: Long, dataset: Iterator[StayDataset], spark: SparkSession): DataFrame = {
  val points = dataset.map(row => Vectors.dense(Array(row.latitude, row.longitude))).toSeq
  val rddVector = spark.sparkContext.parallelize(points)
  val size = points.length
  val model = DBSCAN.train(rddVector, eps = 20, minPoints = (size * 0.18).toInt, maxPointsPerPartition = (size / 4).toInt)
  val pointRDD = new PointRDD(model.labeledPoints.map(p => {
    val point = POIDetection.geoFactory.createPoint(new Coordinate(p.x, p.y))
    point.setUserData(p.cluster.toString())
    point
  }))
  val df = Adapter.toDf(pointRDD, Seq("cluster"), spark)
    .select(col("cluster").cast("long"), col("geometry"))
  df
}
I think the same problem arises whenever we want to apply something like KNN to grouped data. How can that be done?
I don't fully understand what you want to achieve, but first you need an RDD[linalg.Vector]. Assuming you already have the Dataset of StayDataset, you can map each record to a linalg.Vector; going through the underlying RDD avoids needing an Encoder for the vector type:
// map each record to a vector and drop down to the underlying RDD
val rdd = dataset.rdd.map(rec => Vectors.dense(rec.latitude, rec.longitude))
and then you pass the RDD to your DBSCAN:
DBSCAN.train(rdd, ...)
That is what you need to get an RDD for the training.
I think you also need to do some aggregation beforehand. If that is the case, you will need to manipulate the Dataset you have accordingly.
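As a side note (not part of the original answer): if the goal is one DBSCAN run per user, one way around SPARK-28702 is to avoid creating DataFrames inside mapGroups and instead loop over the users on the driver, filtering the Dataset each time. A rough sketch, assuming a SparkSession in scope as spark, a modest number of users, and placeholder eps/minPoints values:
import spark.implicits._                       // Long encoder for map(_.objectID)
import org.apache.spark.mllib.linalg.Vectors   // assuming the mllib Vectors used above

val userIds: Array[Long] = dataset.map(_.objectID).distinct().collect()

val modelsPerUser = userIds.map { id =>
  val points = dataset
    .filter(_.objectID == id)        // keep only this user's events
    .rdd
    .map(r => Vectors.dense(r.latitude, r.longitude))
  id -> DBSCAN.train(points, eps = 20, minPoints = 10, maxPointsPerPartition = 250)
}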

scala spark foldLeft on map with array

I have a configuration in the form of a map:
val config: Map[String, Array[String]] = Map("column1" -> Array("field1"), "column2" -> Array("field1", "field2"))
I want to use foldLeft to apply this to a dataframe and dynamically drop nested fields using the dropFields function.
I came up with:
val result = config.foldLeft("") { (k, v) =>
  df.withColumn(v._1, col(k + v._1).dropFields(v._2: _*))
}
but I'm struggling to make foldLeft work; any help would be appreciated.
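For what it's worth, the usual shape of such a fold is to use the DataFrame itself as the accumulator rather than a string, and to destructure each map entry. A minimal sketch, assuming Spark 3.1+ for dropFields and that each entry means column -> nested fields to drop:
import org.apache.spark.sql.functions.col

// the DataFrame is the accumulator; each step rewrites one column
val result = config.foldLeft(df) { case (acc, (column, fields)) =>
  acc.withColumn(column, col(column).dropFields(fields: _*))
}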

VarcharType mismatch Spark dataframe

I'm trying to change the schema of a DataFrame. Every time I have a column of string type I want to change its type to VarcharType(max), where max is the maximum length of a string in that column. I wrote the following code. (I want to export the DataFrame to SQL Server later and I don't want nvarchar in SQL Server, so I'm trying to limit it on the Spark side.)
val df = spark.sql(s"SELECT * FROM $tableName")
var l: List[StructField] = List()
val schema = df.schema
schema.fields.foreach(x => {
  if (x.dataType == StringType) {
    val dataColName = x.name
    val maxLength = df.select(dataColName).reduce((x, y) => {
      if (x.getString(0).length >= y.getString(0).length) {
        x
      } else {
        y
      }
    }).getString(0).length
    val dataType = VarcharType(maxLength)
    l = l :+ StructField(dataColName, dataType)
  } else {
    l = l :+ x
  }
})
val newSchema = StructType(l)
val newDf = spark.createDataFrame(df.rdd, newSchema)
However, when running it I get this error:
20/01/22 15:29:44 ERROR ApplicationMaster: User class threw exception: scala.MatchError:
VarcharType(9) (of class org.apache.spark.sql.types.VarcharType)
scala.MatchError: VarcharType(9) (of class org.apache.spark.sql.types.VarcharType)
Can a dataframe column be of type VarcharType(n)?
The data type mapping between a DataFrame and a database happens in the dialect class. For MS SQL Server the class is org.apache.spark.sql.jdbc.MsSqlServerDialect. You can inherit from this and override getJDBCType to influence the data type mapping from a DataFrame to a table, then register your dialect for it to take effect.
I have done this for Oracle (not SQL Server), but it can be done similarly.
// Change this
override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
  case TimestampType => Some(JdbcType("DATETIME", java.sql.Types.TIMESTAMP))
  case StringType    => Some(JdbcType("NVARCHAR(MAX)", java.sql.Types.NVARCHAR))
  case BooleanType   => Some(JdbcType("BIT", java.sql.Types.BIT))
  case _             => None
}
You can't use VarcharType because it is not a DataType that Spark supports for DataFrame columns. Also, you can't check the length of the actual data because it is not exposed to the dialect; you only have access to dt: DataType, so you can set a default size for NVARCHAR if MAX is not acceptable.
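For illustration, here is a rough sketch of what such a dialect and its registration could look like. The object name and the VARCHAR(4000) cap are placeholders, and it extends JdbcDialect directly so it works regardless of whether the built-in MsSqlServerDialect is accessible in your Spark version:
import java.sql.Types
import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects, JdbcType}
import org.apache.spark.sql.types._

// Hypothetical dialect that maps Spark strings to a bounded VARCHAR instead of NVARCHAR(MAX)
object SqlServerVarcharDialect extends JdbcDialect {
  override def canHandle(url: String): Boolean =
    url.startsWith("jdbc:sqlserver")

  override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
    case StringType => Some(JdbcType("VARCHAR(4000)", Types.VARCHAR))
    case _          => None   // fall back to the default mapping
  }
}

// must be registered before writing the DataFrame over JDBC
JdbcDialects.registerDialect(SqlServerVarcharDialect)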

Spark Dataframe size check on columns does not work as expected using vararg and if else - Scala

I do not want to use foldLeft or withColumn with when over all columns in a dataframe, but want a select as per https://medium.com/@manuzhang/the-hidden-cost-of-spark-withcolumn-8ffea517c015, embellished with an if-else statement and cols passed as a vararg. All I want is to replace an empty array column in a Spark dataframe using Scala. I am using size, but it never detects the zero (0) case correctly.
val resDF2 = aggDF.select(cols.map { col =>
  (if (size(aggDF(col)) == 0) lit(null) else aggDF(col)).as(s"$col")
}: _*)
The expression if (size(aggDF(col)) == 0) lit(null) does not work here functionally, but it does run, and size(aggDF(col)) returns the correct length if I return that instead.
I am wondering what the silly issue is. It must be something I am obviously overlooking!
An if-else won't work with the DataFrame API; it is plain Scala, evaluated once when the query is built, not per row on the data. With DataFrames you need when/otherwise:
val resDF2 = aggDF.select(cols.map { col =>
  when(size(aggDF(col)) === 0, lit(null)).otherwise(aggDF(col)).as(s"$col")
}: _*)
This can further be simplified because when without otherwise automatically returns null (i.e. otherwise(lit(null)) is the default):
val resDF2 = aggDF.select(cols.map { col =>
  when(size(aggDF(col)) > 0, aggDF(col)).as(s"$col")
}: _*)
See also https://stackoverflow.com/a/48074218/1138523
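For a quick sanity check, here is a tiny standalone example of the pattern (the DataFrame and its column names are made up for illustration, and an active SparkSession named spark is assumed):
import spark.implicits._
import org.apache.spark.sql.functions.{size, when}

// two array columns, some of them empty
val demo = Seq(
  (Seq("a", "b"), Seq.empty[String]),
  (Seq.empty[String], Seq("c"))
).toDF("xs", "ys")

val cleaned = demo.select(demo.columns.toSeq.map { c =>
  when(size(demo(c)) > 0, demo(c)).as(c)
}: _*)

cleaned.show()   // the empty arrays show up as null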

Strings concatenation in Spark SQL query

I'm experimenting with Spark and Spark SQL and I need to concatenate a value at the beginning of a string field that I retrieve as output from a select (with a join), like the following:
val result = sim.as('s)
  .join(
    event.as('e),
    Inner,
    Option("s.codeA".attr === "e.codeA".attr))
  .select("1" + "s.codeA".attr, "e.name".attr)
Let's say my tables contain:
sim:
codeA,codeB
0001,abcd
0002,efgh
events:
codeA,name
0001,freddie
0002,mercury
And I would want as output:
10001,freddie
10002,mercury
In SQL or HiveQL I know I have the concat function available, but it seems Spark SQL doesn't support this feature. Can somebody suggest a workaround for my issue?
Thank you.
Note:
I'm using Language Integrated Queries, but I could use just a "standard" Spark SQL query if that makes a solution easier.
The output you show at the end does not seem to be part of your selection or your SQL logic, if I understand correctly. Why not format the output as a further step?
val results = sqlContext.sql("SELECT s.codeA, e.name FROM foobar")
results.map(t => ("1" + t(0), t(1))).collect()
It's relatively easy to implement new Expression types directly in your project. Here's what I'm using:
case class Concat(children: Expression*) extends Expression {
  override type EvaluatedType = String

  override def foldable: Boolean = children.forall(_.foldable)
  def nullable: Boolean = children.exists(_.nullable)
  def dataType: DataType = StringType

  def eval(input: Row = null): EvaluatedType = {
    children.map(_.eval(input)).mkString
  }
}
val result = sim.as('s)
  .join(
    event.as('e),
    Inner,
    Option("s.codeA".attr === "e.codeA".attr))
  .select(Concat("1", "s.codeA".attr), "e.name".attr)