Need help on using Spark Filter - apache-spark-sql

I am new to Apache Spark and need help forming either a SQL query or a Spark filter on a DataFrame.
Below is how my data is structured, i.e. I have a large number of users, each with data like the following:
{ "User1":"Joey", "Department": ["History","Maths","Geography"] }
I have multiple search conditions like the one below, where I need to search the array of data based on an operator defined by the user, for example and / or:
{
"SearchCondition":"1",
"Operator":"and",
"Department": ["Maths","Geography"]
}
Can someone point me to a way of achieving this in Spark?
Thanks,
-Jack

I assume you use Scala and have parsed the data into a DataFrame:
val df = spark.read.json(pathToFile)
I would use Datasets for this because they provide type safety:
case class User(department: Array[String], user1: String)
val ds = df.as[User]
def pred(user: User): Boolean = Set("Geography","Maths").subsetOf(user.department.toSet)
ds.filter(pred _)
You can read more about Datasets in the Spark documentation.
If you prefer to use DataFrames, you can do it with user-defined functions:
import org.apache.spark.sql.functions._
val pred = udf((arr: Seq[String]) => Set("Geography","Maths").subsetOf(arr.toSet))
df.filter(pred($"Department"))
In the same package you can find a Spark built-in function for this. You can do
df.filter(array_contains($"Department", "Maths")).filter(array_contains($"Department", "Geography"))
but someone could argue that this is not as efficient and the optimizer can't improve it a lot.
Note that for each search condition you need a different predicate.
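For the "or" operator the predicate only has to find at least one matching department. Here is a minimal sketch of driving both cases from the search condition; the buildPredicate helper below is hypothetical, just to illustrate switching on the operator:
import org.apache.spark.sql.functions._
// Hypothetical helper: builds a filter UDF from a search condition.
// "and" requires every listed department to be present, "or" requires at least one.
def buildPredicate(operator: String, departments: Seq[String]) = {
  val required = departments.toSet
  udf((arr: Seq[String]) =>
    if (operator == "and") required.subsetOf(arr.toSet)
    else arr.exists(required.contains))
}
val andFilter = buildPredicate("and", Seq("Maths", "Geography"))
val orFilter = buildPredicate("or", Seq("Maths", "Geography"))
df.filter(andFilter($"Department"))
df.filter(orFilter($"Department"))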

Related

Slick plain sql query with pagination

I have something like this, using Akka, Alpakka + Slick:
Slick
  .source(
    sql"""select #${onlyTheseColumns.mkString(",")} from #${dbSource.table}"""
      .as[Map[String, String]]
      .withStatementParameters(rsType = ResultSetType.ForwardOnly, rsConcurrency = ResultSetConcurrency.ReadOnly, fetchSize = batchSize)
      .transactionally
  ).map( doSomething )...
I want to update this plain SQL query to skip the first N elements.
But that is very DB specific.
Is it possible to get the pagination bit generated by Slick? (Like for type-safe queries, where one can just do a drop, filter, take?)
PS: I don't have the schema, so I cannot go the type-safe way; I just want all tables as Map, then filter, drop etc. on them.
PS2: at the Akka level, flow.drop works, but it's not optimal/slow, because it still consumes the rows.
Cheers
Since you are using plain SQL, you have to provide workable SQL in the code snippet. Plain SQL may not be type-safe, but it is flexible.
BTW, the most efficient way is to skip the first N elements in the database itself, for example with LIMIT/OFFSET in MySQL.
Depending on your database engine, you could use something like:
val page = 1
val pageSize = 10
val query = sql"""
  select #${onlyTheseColumns.mkString(",")}
  from #${dbSource.table}
  limit #${pageSize + 1}
  offset #${pageSize * (page - 1)}
"""
The pageSize + 1 part tells you whether a next page exists: if more than pageSize rows come back, there is another page to fetch.
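A rough sketch of using that extra row (assuming a db handle and, as in the question's snippet, an implicit GetResult[Map[String, String]] in scope):
import scala.concurrent.ExecutionContext.Implicits.global
// Sketch only: run the page query, use the extra row to flag a next page,
// and return just the requested page.
val pageAction = query.as[Map[String, String]]
val result = db.run(pageAction).map { rows =>
  val hasNextPage = rows.size > pageSize
  (rows.take(pageSize), hasNextPage)
}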
I want to update this plain SQL query to skip the first N elements. But that is very DB specific.
As you're concerned about changing the SQL for different databases, I suggest you abstract away that part of the SQL and decide what to do based on the Slick profile being used.
If you are working with multiple database products, you've probably already abstracted away from any specific profile, perhaps using JdbcProfile. In that case you could place your "skip N elements" helper in a class and use the active slickProfile to decide on the SQL to use. (As an alternative you could of course check via some other means, such as an environment value you set.)
In practice that could be something like this:
case class Paginate(profile: slick.jdbc.JdbcProfile) {
  // Return the correct LIMIT/OFFSET SQL for the current Slick profile
  def page(size: Int, firstRow: Int): String =
    if (profile.isInstanceOf[slick.jdbc.H2Profile]) {
      s"LIMIT $size OFFSET $firstRow"
    } else if (profile.isInstanceOf[slick.jdbc.MySQLProfile]) {
      s"LIMIT $firstRow, $size"
    } else {
      // And so on... or a default
      // Danger: I've no idea if the above SQL is correct - it's just placeholder
      ???
    }
}
Which you could use as:
// Import your profile
import slick.jdbc.H2Profile.api._
val paginate = Paginate(slickProfile)
val action: DBIO[Seq[Int]] =
sql""" SELECT cols FROM table #${paginate.page(100, 10)}""".as[Int]
In this way, you get to isolate (and control) RDBMS-specific SQL in one place.
To make the helper more usable, and as slickProfile is implicit, you could instead write:
def page(size: Int, firstRow: Int)(implicit profile: slick.jdbc.JdbcProfile) =
// Logic for deciding on SQL goes here
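One possible body for that helper, sketched only as an illustration; it mirrors the Paginate class above, and the default branch is an assumption you should verify per database:
// Sketch only: pick the LIMIT/OFFSET syntax based on the implicit profile.
def page(size: Int, firstRow: Int)(implicit profile: slick.jdbc.JdbcProfile): String =
  profile match {
    case _: slick.jdbc.MySQLProfile => s"LIMIT $firstRow, $size"
    case _ => s"LIMIT $size OFFSET $firstRow" // assumed default, check your database
  }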
I feel obliged to comment that using a splice (#$) in plain SQL opens you to SQL injection attacks if any of the values are provided by a user.

Slick Plain Sql Generic Return Type

I am trying to write a configurable SQL query executor using Slick. The user provides a prepared statement with ? placeholders, and at run time the exact query is formed by replacing them with values.
Generally this is how one would run a plain SQL query using Slick:
val query = sql"#$queryString".as[(String,Int)]
In my case I would not know the result type, so I want to get back a generic result type, maybe a list of tuples with each tuple representing a row of the result set.
Any ideas on how this would be done?
I found a solution in one of the GitHub issues. Here it is:
import slick.jdbc.{GetResult, PositionedResult}

object ResultMap extends GetResult[Map[String, Any]] {
  def apply(pr: PositionedResult): Map[String, Any] = {
    val resultSet = pr.rs
    val metaData = resultSet.getMetaData()
    (1 to pr.numColumns).map { i =>
      metaData.getColumnName(i) -> resultSet.getObject(i)
    }.toMap
  }
}
and then we can simply do val query = sql"#$queryString".as(ResultMap)
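And a minimal sketch of running it, assuming db is your Slick Database and the profile's api._ is imported for the sql interpolator:
import scala.concurrent.ExecutionContext.Implicits.global
// Sketch only: execute the dynamic query and print each row as a
// column-name -> value map.
val rows = db.run(sql"#$queryString".as(ResultMap))
rows.foreach(_.foreach(println))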
Hope it helps!!

Reading data from SQL Server using Spark SQL

Is it possible to read data from Microsoft SQL Server (and Oracle, MySQL, etc.) into an RDD in a Spark application? Or do we need to create an in-memory set and parallelize that into an RDD?
In Spark 1.4.0+ you can now use sqlContext.read.jdbc
That will give you a DataFrame instead of an RDD of Row objects.
The equivalent to the solution you posted above would be
sqlContext.read.jdbc("jdbc:sqlserver://omnimirror;databaseName=moneycorp;integratedSecurity=true;", "TABLE_NAME", "id", 1, 100000, 1000, new java.util.Properties)
It should pick up the schema of the table, but if you'd like to force it, you can use the schema method after read: sqlContext.read.schema(...insert schema here...).jdbc(...rest of the things...)
Note that you won't get an RDD of SomeClass here (which is nicer in my view). Instead you'll get a DataFrame of the relevant fields.
More information can be found here: http://spark.apache.org/docs/latest/sql-programming-guide.html#jdbc-to-other-databases
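If you still want the rows as instances of your own class rather than generic Row objects, a minimal sketch of mapping the DataFrame back (the SomeClass fields and column positions here are assumptions for illustration):
// Assumed case class; adjust the fields and column positions to your table.
case class SomeClass(col1: String, col2: String, col3: String)
val df = sqlContext.read.jdbc(
  "jdbc:sqlserver://omnimirror;databaseName=moneycorp;integratedSecurity=true;",
  "TABLE_NAME", "id", 1, 100000, 1000, new java.util.Properties)
// Map each Row into the case class to get an RDD[SomeClass].
val someClassRdd = df.rdd.map(row =>
  SomeClass(row.getString(1), row.getString(2), row.getString(3)))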
Found a solution to this from the mailing list. JdbcRDD can be used to accomplish this. I needed to get the MS SQL Server JDBC driver jar and add it to the lib for my project. I wanted to use integrated security, so I needed to put sqljdbc_auth.dll (available in the same download) in a location that java.library.path can see. Then the code looks like this:
import java.sql.{DriverManager, ResultSet}
import org.apache.spark.rdd.JdbcRDD

val rdd = new JdbcRDD[SomeClass](sc,
  () => DriverManager.getConnection(
    "jdbc:sqlserver://omnimirror;databaseName=moneycorp;integratedSecurity=true;"),
  "SELECT * FROM TABLE_NAME Where ? < X and X < ?",
  1, 100000, 1000,
  (r: ResultSet) => SomeClass(r.getString("Col1"), r.getString("Col2"), r.getString("Col3")))
This gives an RDD of SomeClass. The second, third and fourth numeric parameters are required and are the lower bound, upper bound, and number of partitions. In other words, the source data needs to be partitionable by longs for this to work.

Spark SQL count() returns wrong number

I'm new to Apache Spark and Scala (also a beginner with Hadoop in general).
I completed the Spark SQL tutorial: https://spark.apache.org/docs/latest/sql-programming-guide.html
I tried to perform a simple query on a standard csv file to benchmark its performance on my current cluster.
I used data from https://s3.amazonaws.com/hw-sandbox/tutorial1/NYSE-2000-2001.tsv.gz, converted it to csv and copy/pasted the data to make it 10 times as big.
I loaded it into Spark using Scala:
// sc is an existing SparkContext.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
// createSchemaRDD is used to implicitly convert an RDD to a SchemaRDD.
import sqlContext.createSchemaRDD
Define classes:
case class datum(
  exchange: String,
  stock_symbol: String,
  date: String,
  stock_price_open: Double,
  stock_price_high: Double,
  stock_price_low: Double,
  stock_price_close: Double,
  stock_volume: String,
  stock_price_adj_close: Double)
Read in data:
val data = sc.textFile("input.csv")
  .map(_.split(";"))
  .filter(line => "exchange" != "exchange")
  .map(p => datum(p(0).trim.toString, p(1).trim.toString, p(2).trim.toString,
    p(3).trim.toDouble, p(4).trim.toDouble, p(5).trim.toDouble,
    p(6).trim.toDouble, p(7).trim.toString, p(8).trim.toDouble))
Convert to table:
data.registerAsTable("data")
Define query (list all rows with 'IBM' as stock symbol):
val IBMs = sqlContext.sql("SELECT * FROM data WHERE stock_symbol ='IBM'")
Perform count so query actually runs:
IBMs.count()
The query runs fine, but returns res: 0 instead of 5000 (which is what it returns using Hive with MapReduce).
filter(line => "exchange" != "exchange")
Since "exchange" is equal to "exchange" filter will return a collection of size 0. And since there is no data, querying for any result will return 0. You need to re-write your logic.

Creating User Defined Function in Spark-SQL

I am new to Spark and Spark SQL and I was trying to query some data using Spark SQL.
I need to fetch the month from a date which is given as a string.
I think it is not possible to query the month directly in Spark SQL, so I was thinking of writing a user defined function in Scala.
Is it possible to write a UDF in Spark SQL, and if so, can anybody suggest the best method of writing one?
You can do this, at least for filtering, if you're willing to use a language-integrated query.
For a data file dates.txt containing:
one,2014-06-01
two,2014-07-01
three,2014-08-01
four,2014-08-15
five,2014-09-15
You can pack as much Scala date magic in your UDF as you want but I'll keep it simple:
def myDateFilter(date: String) = date contains "-08-"
Set it all up as follows -- a lot of this is from the Programming guide.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext._
// case class for your records
case class Entry(name: String, when: String)
// read and parse the data
val entries = sc.textFile("dates.txt").map(_.split(",")).map(e => Entry(e(0),e(1)))
You can use the UDF as part of your WHERE clause:
val augustEntries = entries.where('when)(myDateFilter).select('name, 'when)
and see the results:
augustEntries.map(r => r(0)).collect().foreach(println)
Notice the version of the where method I've used, declared as follows in the doc:
def where[T1](arg1: Symbol)(udf: (T1) ⇒ Boolean): SchemaRDD
So, the UDF can only take one argument, but you can compose several .where() calls to filter on multiple columns.
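For example, a filter on two columns could be sketched like this (the name predicate is made up purely for illustration):
// Sketch: each .where() call takes one column and a single-argument predicate
// and returns another SchemaRDD, so calls can be chained.
val filtered = entries
  .where('when)(myDateFilter)
  .where('name)((n: String) => n.startsWith("f"))
  .select('name, 'when)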
Edit for Spark 1.2.0 (and really 1.1.0 too)
While it's not really documented, Spark now supports registering a UDF so it can be queried from SQL.
The above UDF could be registered using:
sqlContext.registerFunction("myDateFilter", myDateFilter)
and if the table was registered
sqlContext.registerRDDAsTable(entries, "entries")
it could be queried using
sqlContext.sql("SELECT * FROM entries WHERE myDateFilter(when)")
For more details see this example.
In Spark 2.0, you can do this:
// define the UDF
def convert2Years(date: String) = date.substring(7, 11)
// register to session
sparkSession.udf.register("convert2Years", convert2Years(_: String))
val moviesDf = getMoviesDf // create dataframe usual way
moviesDf.createOrReplaceTempView("movies") // 'movies' is used in sql below
val years = sparkSession.sql("select convert2Years(releaseDate) from movies")
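For the original question (extracting the month from a date string), Spark 2.x also ships built-in date functions, so a UDF may not be needed at all. A rough Scala sketch, assuming releaseDate parses with to_date's default format:
import org.apache.spark.sql.functions.{col, month, to_date}
// Sketch: extract the month with built-in functions instead of a UDF.
val withMonth = moviesDf.select(month(to_date(col("releaseDate"))).alias("releaseMonth"))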
In PySpark 1.5 and above, we can easily achieve this with a built-in function.
Following is an example:
raw_data = [
    ("2016-02-27 23:59:59", "Gold", 97450.56),
    ("2016-02-28 23:00:00", "Silver", 7894.23),
    ("2016-02-29 22:59:58", "Titanium", 234589.66)]
Time_Material_revenue_df = sqlContext.createDataFrame(raw_data, ["Sold_time", "Material", "Revenue"])
from pyspark.sql.functions import *
Day_Material_reveneu_df = Time_Material_revenue_df.select(to_date("Sold_time").alias("Sold_day"), "Material", "Revenue")