Scala MatchError while joining a dataframe and a dataset - dataframe

I have one dataframe and one dataset :
Dataframe 1:
+------------------------------+-----------+
|City_Name                     |Level      |
+------------------------------+-----------+
|{City -> Paris}               |86         |
+------------------------------+-----------+
Dataset 2:
+-----------------------------------+-----------+
|Country_Details                    |Temperature|
+-----------------------------------+-----------+
|{City -> Paris, Country -> France} |31         |
+-----------------------------------+-----------+
I am trying to join them by checking whether the map in the column "City_Name" is included in the map in the column "Country_Details".
I am using the following UDF to check the condition:
val mapEqual = udf((col1: Map[String, String], col2: Map[String, String]) => {
  if (col2.nonEmpty) {
    col2.toSet subsetOf col1.toSet
  } else {
    true
  }
})
And I am making the join this way:
dataset2.join(dataframe1 , mapEqual(dataset2("Country_Details"), dataframe1("City_Name"), "leftanti")
However, I get the following error:
terminated with error scala.MatchError: UDF(Country_Details#528) AS City_Name#552 (of class org.apache.spark.sql.catalyst.expressions.Alias)
Has anyone run into the same error before?
I am using Spark version 3.0.2 and SQLContext, with Scala.

There are 2 issues here. The first one is that when you're calling your function, you're passing one extra parameter, "leftanti" (you meant to pass it to the join function, but you passed it to the udf instead).
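For reference, with "leftanti" moved to the join call (same dataframes and UDF as in the question), the call would look like this:
// "leftanti" is now the join type argument, not a third UDF parameter
dataset2.join(
  dataframe1,
  mapEqual(dataset2("Country_Details"), dataframe1("City_Name")),
  "leftanti"
)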
The second one is that the udf logic won't work as expected; I suggest you use this instead:
val mapContains = udf { (col1: Map[String, String], col2: Map[String, String]) =>
  col2.keys.forall { key =>
    // every key of col2 must exist in col1 with an equal value
    col1.get(key).contains(col2(key))
  }
}
Result:
scala> ds.join(df1 , mapContains(ds("Country_Details"), df1("City_Name")), "leftanti").show(false)
+----------------------------------+-----------+
|Country_Details                   |Temperature|
+----------------------------------+-----------+
|{City -> Paris, Country -> France}|31         |
+----------------------------------+-----------+

Related

Join dataset with case class spark scala

I am converting a dataframe into a dataset using a case class which has a sequence of another case class:
case class IdMonitor(id: String, ipLocation: Seq[IpLocation])

case class IpLocation(
  ip: String,
  ipVersion: Byte,
  ipType: String,
  city: String,
  state: String,
  country: String)
Now I have another dataset of strings that has just IPs. My requirement is to get all records from IpLocation where ipType == "home" or where the IP dataset contains the given IP from ipLocation. I am trying to use a bloom filter built from the IP dataset to search through it, but it is inefficient and not working that well in general. I want to join the IP dataset with IpLocation, but I'm having trouble since the locations are in a Seq. I'm very new to Spark and Scala, so I'm probably missing something. Right now my code looks like this:
def buildBloomFilter(Ips: Dataset[String]): BloomFilter[String] = {
  val count = Ips.count
  val bloomFilter = Ips.rdd
    .mapPartitions { iter =>
      val b = BloomFilter.optimallySized[String](count, FP_PROBABILITY)
      iter.foreach(i => b += i)
      Iterator(b)
    }
    .treeReduce(_ | _)
  bloomFilter
}
val ipBf = buildBloomFilter(Ips)
val ipBfBroadcast = spark.sparkContext.broadcast(ipBf)
idMonitor.map { x =>
  x.ipLocation.filter(
    x => x.ipType == "home" && ipBfBroadcast.value.contains(x.ip)
  )
}
I just want to figure out how to join IpLocation and Ips
Sample:
Starting from your case classes,
case class IpLocation(
  ip: String,
  ipVersion: Byte,
  ipType: String,
  city: String,
  state: String,
  country: String
)
case class IdMonitor(id: String, ipLocation: Seq[IpLocation])
I have defined the sample data as follows:
val ip_locations1 = Seq(IpLocation("123.123.123.123", 12.toByte, "home", "test", "test", "test"), IpLocation("123.123.123.124", 12.toByte, "otherwise", "test", "test", "test"))
val ip_locations2 = Seq(IpLocation("123.123.123.125", 13.toByte, "company", "test", "test", "test"), IpLocation("123.123.123.124", 13.toByte, "otherwise", "test", "test", "test"))
val id_monitor = Seq(IdMonitor("1", ip_locations1), IdMonitor("2", ip_locations2))
val df = id_monitor.toDF()
df.show(false)
+---+------------------------------------------------------------------------------------------------------+
|id |ipLocation |
+---+------------------------------------------------------------------------------------------------------+
|1 |[{123.123.123.123, 12, home, test, test, test}, {123.123.123.124, 12, otherwise, test, test, test}] |
|2 |[{123.123.123.125, 13, company, test, test, test}, {123.123.123.124, 13, otherwise, test, test, test}]|
+---+------------------------------------------------------------------------------------------------------+
and the IPs:
val ips = Seq("123.123.123.125")
val df_ips = ips.toDF("ips")
df_ips.show()
+---------------+
| ips|
+---------------+
|123.123.123.125|
+---------------+
Join:
From the above example data, explode the array of the IdMonitor and join with the IPs.
df.withColumn("ipLocation", explode('ipLocation)).alias("a")
  .join(df_ips.alias("b"), col("a.ipLocation.ipType") === lit("home") || col("a.ipLocation.ip") === col("b.ips"), "inner")
  .select("ipLocation.*")
  .as[IpLocation].collect()
Finally, the collected result is given as follows:
res32: Array[IpLocation] = Array(IpLocation(123.123.123.123,12,home,test,test,test), IpLocation(123.123.123.125,13,company,test,test,test))
You can explode the array sequence in your IdMonitor objects using the explode function, then use a left outer join to match ips present in your Ips dataset, then filter on ipType == "home" or ip being present in the Ips dataset, and finally rebuild your IpLocation sequence by grouping by id and using collect_list.
Complete code is as follows:
import org.apache.spark.sql.functions.{col, collect_list, explode}

val result = idMonitor.select(col("id"), explode(col("ipLocation")))
  .join(Ips, col("col.ip") === col("value"), "left_outer")
  .filter(col("col.ipType") === "home" || col("value").isNotNull)
  .groupBy("id")
  .agg(collect_list("col").as("value"))
  .drop("id")
  .as[Seq[IpLocation]]

Spark dataset filter elements

I have 2 Spark datasets:
lessonDS and latestLessonDS;
These are my Spark dataset POJOs:
Lesson class:
  private List<SessionFilter> info;
  private String lessonId;
LatestLesson class:
  private String id;
SessionFilter class:
  private String id;
  private String sessionName;
I want to get all Lesson data where an info.id in the Lesson class is not in the LatestLesson ids,
something like this:
lessonDS.filter(explode(col("info.id")).notEqual(latestLessonDS.col("value"))).show();
latestLessonDS contain:
100A
200C
300A
400A
lessonDS contain:
1,[[100A,jon],[200C,natalie]]
2,[[100A,jon]]
3,[[600A,jon],[400A,Kim]]
result:
3,[[600A,jon]]
If your latestLessonDS dataset is reasonably small, you can collect it and broadcast it,
and then a simple filter transformation on lessonDS will give you the desired result,
like:
import scala.collection.JavaConversions._
import spark.implicits._

val bc = spark.sparkContext.broadcast(latestLessonDS.collectAsList().toSeq)

lessonDS.mapPartitions(itr => {
  val cache = bc.value
  itr.filter(x => {
    // check in cache
  })
})
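One way to fill in that check, as a rough sketch: assuming the Lesson POJO exposes getInfo() for its List<SessionFilter> and that SessionFilter and LatestLesson expose getId() (the question only shows the fields, so these accessor names are guesses), you could keep a lesson when at least one of its info ids is missing from the broadcast list:
import scala.collection.JavaConverters._

lessonDS.mapPartitions(itr => {
  // Build a lookup set of ids from the broadcast LatestLesson rows
  // (getId is an assumed accessor name)
  val latestIds = bc.value.map(_.getId).toSet
  itr.filter(lesson =>
    // keep the lesson if any of its SessionFilter ids is absent from latestLessonDS
    lesson.getInfo.asScala.exists(sf => !latestIds.contains(sf.getId))
  )
})
Note that this keeps whole Lesson records; to also strip the matching entries out of info (as in the expected output above), you would additionally need to rebuild each Lesson with the filtered list.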
Usually the array_contains function could be used as the join condition when joining lessonDs and latestLessonDs. But this function does not work here, as the join condition requires that all elements of lessonDs.info.id appear in latestLessonDS.
A way to get the result is to explode lessonDs, join it with latestLessonDs, and then check whether an entry in latestLessonDs exists for all elements of lessonDs.info, by comparing the number of info elements before and after the join:
lessonDs
  .withColumn("noOfEntries", size('info))
  .withColumn("id", explode(col("info.id")))
  .join(latestLessonDs, "id")
  .groupBy("lessonId", "info", "noOfEntries").count()
  .filter("noOfEntries = count")
  .drop("noOfEntries", "count")
  .show(false)
prints
+--------+------------------------------+
|lessonId|info                          |
+--------+------------------------------+
|1       |[[100A, jon], [200C, natalie]]|
|2       |[[100A, jon]]                 |
+--------+------------------------------+

how to extract a value from a json format

I need to extract the email from an intricate 'dict' (I am new to SQL).
I have seen several previous posts on the same topic (e.g. this one); however, none seem to work on my data.
select au.details
from table_au au
result:
{
  "id": 3526,
  "contacts": [
    {
      "contactType": "EMAIL",
      "value": "name#email.be",
      "private": false
    },
    {
      "contactType": "PHONE",
      "phoneType": "PHONE",
      "value": "025/6251111",
      "private": false
    }
  ]
}
I need:
name#email.be
select d.value -> 0 -> 'value' as Email
from json_each('{"id":3526,"contacts":[{"contactType":"EMAIL","value":"name#email.be","private":false},{"contactType":"PHONE","phoneType":"PHONE","value":"025/6251111","private":false}]}') d
where d.key::text = 'contacts'
Output:
|  |email          |
--------------------
|1 |"name#email.be"|
You can run it here: https://rextester.com/VHWRQ89385

How to automate a field mapping using a table in snowflake

I have a one-column table in my Snowflake database that contains a JSON mapping structure like the following:
ColumnMappings : {"Field Mapping": "blank=Blank,E=East,N=North,"}
How can I write a query so that if I feed the Field Mapping a value of E I get East, or if the value is N I get North, and so on, without hard-coding the values in the query the way a CASE statement would?
You really want your mapping in this JSON form:
{
  "blank": "Blank",
  "E": "East",
  "N": "North"
}
You can achieve that in Snowflake e.g. with a simple JS UDF:
create or replace table x(cm variant) as
select parse_json(*) from values('{"fm": "blank=Blank,E=East,N=North,"}');

create or replace function mysplit(s string)
returns variant
language javascript
as $$
  res = S
    .split(",")
    .reduce(
      (acc, val) => {
        var vals = val.split("=");
        acc[vals[0]] = vals[1];
        return acc;
      },
      {});
  return res;
$$;
select cm:fm, mysplit(cm:fm) from x;
-------------------------------+--------------------+
CM:FM | MYSPLIT(CM:FM) |
-------------------------------+--------------------+
"blank=Blank,E=East,N=North," | { |
| "E": "East", |
| "N": "North", |
| "blank": "Blank" |
| } |
-------------------------------+--------------------+
And then you can simply extract values by key with GET, e.g.
select cm:fm, get(mysplit(cm:fm), 'E') from x;
-------------------------------+--------------------------+
CM:FM | GET(MYSPLIT(CM:FM), 'E') |
-------------------------------+--------------------------+
"blank=Blank,E=East,N=North," | "East" |
-------------------------------+--------------------------+
For performance, you might want to make sure you call mysplit only once per value in your mapping table, or even pre-materialize it.

How to flatten bigquery record with multiple repeated fields?

I'm trying to query against app-engine datastore backup data. In python, the entities are described as something like this:
class Bar(ndb.Model):
    property1 = ndb.StringProperty()
    property2 = ndb.StringProperty()

class Foo(ndb.Model):
    bar = ndb.StructuredProperty(Bar, repeated=True)
    baz = ndb.StringProperty()
Unfortunately, when Foo gets backed up and loaded into BigQuery, the table schema gets loaded as:
bar | RECORD | NULLABLE
bar.property1 | STRING | REPEATED
bar.property2 | STRING | REPEATED
baz | STRING | NULLABLE
What I would like to do is to get a table of all bar.property1 and associated bar.property2 where baz = 'baz'.
Is there a simple way to flatten Foo so that the bar records are "zipped" together? If that's not possible, is there another solution?
As hinted in a comment by @Mosha, it seems that BigQuery supports User Defined Functions (UDFs). You can input them in the UDF Editor tab on the web UI. In this case, I used something like:
function flattenTogether(row, emit) {
  if (row.bar && row.bar.property1) {
    for (var i = 0; i < row.bar.property1.length; i++) {
      // emit one output row per (property1, property2) pair, zipped by index
      emit({property1: row.bar.property1[i],
            property2: row.bar.property2[i]});
    }
  }
};

bigquery.defineFunction(
  'flattenBar',
  ['bar.property1', 'bar.property2'],
  [{'name': 'property1', 'type': 'string'},
   {'name': 'property2', 'type': 'string'}],
  flattenTogether);
And then the query looked like:
SELECT
  property1,
  property2,
FROM
  flattenBar(
    SELECT
      bar.property1,
      bar.property2,
    FROM
      [dataset.foo]
    WHERE
      baz = 'baz')
Since baz is not repeated, you can simply filter on it in the WHERE clause without any flattening:
SELECT bar.property1, bar.property2 FROM t WHERE baz = 'baz'