Stratified Sampling with PySpark on Multiple Columns - dataframe

I have a dataframe with millions of records, like this:
CLI_ID OCCUPA_ID DIG_LABEL
125 2705 1
328 2708 7
400 2712 1
401 2705 2
525 2708 1
I want to take a random sample of 100k rows that contains 70% of 2705, 20% of 2708 and 10% of 2712 for OCCUPA_ID, and 50% of 1, 20% of 2 and 30% of 7 for DIG_LABEL.
How can I get this in Spark, using pyspark?

Use sampleBy instead of the sample function in PySpark, because sample only draws a simple random sample without reference to any column, so we will use sampleBy.
sampleBy takes a column, a fractions dictionary and a seed (optional).
Consider
df_sample = df.sampleBy(column, fractions, seed)
where
column is the column you want to stratify the sampling on,
fractions maps each value of that column to its sampling fraction, so 10% is written as 0.1, and so on,
seed keeps the sample reproducible; without it you will get different rows every time you run it.
So the answer your question requires is:
dfsample = df.sampleBy("OCCUPA_ID", {"2705": 0.7, "2708": 0.2, "2712": 0.1}, 42).sampleBy("DIG_LABEL", {"1": 0.5, "2": 0.2, "7": 0.3}, 42)
Just sample twice: first on OCCUPA_ID and then on DIG_LABEL.
42 is the seed in both calls.
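Note that sampleBy works with per-stratum fractions, not absolute row counts, so the 100k target is only approximate. A minimal sketch of deriving the fractions from a target size, assuming df is the full DataFrame and OCCUPA_ID holds integer codes (adjust the dictionary keys if it is a string column):
# Turn an absolute target (100k rows split 70/20/10) into sampleBy fractions
total_target = 100_000
proportions = {2705: 0.7, 2708: 0.2, 2712: 0.1}

# rows available per stratum
counts = {row["OCCUPA_ID"]: row["count"]
          for row in df.groupBy("OCCUPA_ID").count().collect()}

# fraction needed per stratum to roughly hit the target, capped at 1.0
fractions = {k: min(1.0, total_target * p / counts[k]) for k, p in proportions.items()}

df_sample = df.sampleBy("OCCUPA_ID", fractions=fractions, seed=42)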

You can use the sampleBy method on PySpark DataFrames to perform stratified sampling; pass the column name and a dictionary of fractions for the values within that column. For example:
spark_df.sampleBy("OCCUPA_ID", fractions={"2705": 0.7, "2708": 0.2, "2712": 0.1}, seed=42).show()
+------+---------+---------+
|CLI_ID|OCCUPA_ID|DIG_LABEL|
+------+---------+---------+
| 1| 2705| 7|
| 4| 2705| 1|
| 5| 2705| 7|
| 7| 2708| 2|
| 12| 2705| 1|
| 16| 2708| 2|
| 18| 2708| 2|
| 20| 2705| 7|
| 25| 2705| 2|
| 26| 2705| 2|
| 38| 2705| 7|
| 40| 2705| 1|
| 44| 2705| 2|
| 48| 2708| 7|
| 50| 2708| 2|
| 53| 2705| 1|
| 57| 2705| 1|
| 58| 2712| 1|
| 61| 2705| 2|
| 63| 2708| 7|
+------+---------+---------+
only showing top 20 rows
Since you want one pyspark DataFrame with two samplings performed from two different columns, you can chain the sampleBy methods together:
spark_stratified_sample_df = spark_df.sampleBy("OCCUPA_ID", fractions={"2705": 0.7, "2708": 0.2, "2712": 0.1}, seed=42).sampleBy("DIG_LABEL", fractions={"1": 0.5, "2": 0.2, "7": 0.3}, seed=42)
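Keep in mind that the second sampleBy is applied to the output of the first, so the final proportions on each column are approximate rather than exact; you can check what you actually got with a quick group count, for example:
spark_stratified_sample_df.groupBy("OCCUPA_ID").count().show()
spark_stratified_sample_df.groupBy("DIG_LABEL").count().show()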

Related

PySpark: How to concatenate two distinct dataframes?

I have multiple dataframes that I need to concatenate together, matching row for row (side by side). In pandas, we would typically write: pd.concat([df1, df2], axis=1).
This thread: How to concatenate/append multiple Spark dataframes column wise in Pyspark? appears close, but its respective answer:
from pyspark.sql.types import StructType, StructField, IntegerType, StringType

df1_schema = StructType([StructField("id", IntegerType()), StructField("name", StringType())])
df1 = spark.sparkContext.parallelize([(1, "sammy"), (2, "jill"), (3, "john")])
df1 = spark.createDataFrame(df1, schema=df1_schema)
df2_schema = StructType([StructField("secNo", IntegerType()), StructField("city", StringType())])
df2 = spark.sparkContext.parallelize([(101, "LA"), (102, "CA"), (103, "DC")])
df2 = spark.createDataFrame(df2, schema=df2_schema)
schema = StructType(df1.schema.fields + df2.schema.fields)
df1df2 = df1.rdd.zip(df2.rdd).map(lambda x: x[0] + x[1])
spark.createDataFrame(df1df2, schema).show()
This yields the following error when run on my data at scale: "Can only zip RDDs with same number of elements in each partition".
How can I join 2 or more data frames that are identical in row length but are otherwise independent of content (they share a similar repeating structure/order but contain no shared data)?
Example expected data looks like:
+---+-----+ +-----+----+ +---+-----+-----+----+
| id| name| |secNo|city| | id| name|secNo|city|
+---+-----+ +-----+----+ +---+-----+-----+----+
| 1|sammy| + | 101| LA| => | 1|sammy| 101| LA|
| 2| jill| | 102| CA| | 2| jill| 102| CA|
| 3| john| | 103| DC| | 3| john| 103| DC|
+---+-----+ +-----+----+ +---+-----+-----+----+
You can create unique IDs with
from pyspark.sql.functions import expr

df1 = df1.withColumn("unique_id", expr("row_number() over (order by (select null))"))
df2 = df2.withColumn("unique_id", expr("row_number() over (order by (select null))"))
then you can left join them on that ID:
df1.join(df2, ["unique_id"], "left").drop("unique_id")
Final output looks like
+---+-----+-----+----+
| id| name|secNo|city|
+---+-----+-----+----+
|  1|sammy|  101|  LA|
|  2| jill|  102|  CA|
|  3| john|  103|  DC|
+---+-----+-----+----+
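One caveat: a window with no PARTITION BY moves all rows into a single partition, which can be slow or run out of memory at scale. As an alternative (a sketch added here, not part of the answer above, assuming spark is your SparkSession), you can index each dataframe with zipWithIndex and join on that index:
def with_index(df, index_name="unique_id"):
    # zipWithIndex appends a 0-based index that follows the RDD's row order
    rdd = df.rdd.zipWithIndex().map(lambda pair: pair[0] + (pair[1],))
    return spark.createDataFrame(rdd, df.schema.add(index_name, "long"))

df1df2 = with_index(df1).join(with_index(df2), "unique_id", "left").drop("unique_id")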

Compare columns from two different dataframes based on id

I have two dataframes to compare; the order of records is different and the column names might be different. I have to compare columns (more than one) based on the unique key (id).
Example: consider dataframes df1 and df2
df1:
+---+-------+-----+
| id|student|marks|
+---+-------+-----+
| 1| Vijay| 23|
| 4| Vithal| 24|
| 2| Ram| 21|
| 3| Rahul| 25|
+---+-------+-----+
df2:
+-----+--------+------+
|newId|student1|marks1|
+-----+--------+------+
| 3| Rahul| 25|
| 2| Ram| 23|
| 1| Vijay| 23|
| 4| Vithal| 24|
+-----+--------+------+
Here, based on id and newId, I need to compare the student and marks values and check whether the student with the same id has the same name and marks.
In this example, the student with id 2 has 21 marks in df1 but 23 marks in df2.
df1.exceptAll(df2).show()
// +---+-------+-----+
// | id|student|marks|
// +---+-------+-----+
// | 2| Ram| 21|
// +---+-------+-----+
I think diff will give the result you are looking for.
scala> df1.diff(df2)
res0: Seq[org.apache.spark.sql.Row] = List([2,Ram,21])
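Another option in PySpark (a sketch added here, not taken from the answers above) is to join on the key and keep only the rows whose values disagree; note that != is not null-safe, so use eqNullSafe instead if the columns can contain nulls:
mismatches = (
    df1.join(df2, df1["id"] == df2["newId"], "inner")
       .where((df1["student"] != df2["student1"]) | (df1["marks"] != df2["marks1"]))
       .select("id", "student", "marks", "student1", "marks1")
)
mismatches.show()
# for the sample data this returns the single row for id 2 (marks 21 vs 23)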

Spark - compare 2 dataframes without using hardcoded column names

I need to create a way to compare 2 data frames without using anything hardcoded, so that I can upload 2 files at any time and compare them without changing anything.
df1
+--------------------+--------+----------------+----------+
| ID| colA| colB| colC|
+--------------------+--------+----------------+----------+
|(122C8984ABF9F6EF...| 0| 10| APPLE|
|(122C8984ABF9F6EF...| 0| 20| APPLE|
|(122C8984ABF9F6EF...| 0| 10| GOOGLE|
|(122C8984ABF9F6EF...| 0| 10| APPLE|
|(122C8984ABF9F6EF...| 0| 15| SAMSUNG|
|(122C8984ABF9F6EF...| 0| 10| APPLE|
+--------------------+--------+----------------+----------+
df2
+--------------------+--------+----------------+----------+
| ID| colA| colB| colC|
+--------------------+--------+----------------+----------+
|(122C8984ABF9F6EF...| 0| 10| APPLE|
|(122C8984ABF9F6EF...| 0| 20| APPLE|
|(122C8984ABF9F6EF...| 0| 10| APPLE|
|(122C8984ABF9F6EF...| 0| 30| APPLE|
|(122C8984ABF9F6EF...| 0| 15| SAMSUNG|
|(122C8984ABF9F6EF...| 0| 15| GOOGLE|
+--------------------+--------+----------------+----------+
I need to compare these 2 data frames and count the differences from each column.
My output should look like this:
+--------------+-------------+-----------------+------------+------+
|Attribute Name|Total Records|Number Miss Match|% Miss Match|Status|
+--------------+-------------+-----------------+------------+------+
| colA| 6| 0| 0.0 %| Pass|
| colB| 6| 3| 50 %| Fail|
| colC| 6| 2| 33.3 %| Fail|
+--------------+-------------+-----------------+------------+------+
I know how to compare the columns when using hardcoded column names, but my requirement is to compare them dynamically.
What I did so far was to select a column from each data frame, but this doesn't seem like the right way to do it.
val columnsAll = df1.columns.map(m=>col(m))
val df1_col1 = df1.select(df1.columns.slice(1,2).map(m=>col(m)):_*).as("Col1")
val df2_col1 = df2.select(df2.columns.slice(1,2).map(m=>col(m)):_*).as("Col2")
Two ways here:
The spark-fast-tests library has two methods for making DataFrame comparisons:
The assertSmallDataFrameEquality method collects DataFrames on the driver node and makes the comparison
def assertSmallDataFrameEquality(actualDF: DataFrame, expectedDF: DataFrame): Unit = {
  if (!actualDF.schema.equals(expectedDF.schema)) {
    throw new DataFrameSchemaMismatch(schemaMismatchMessage(actualDF, expectedDF))
  }
  if (!actualDF.collect().sameElements(expectedDF.collect())) {
    throw new DataFrameContentMismatch(contentMismatchMessage(actualDF, expectedDF))
  }
}
Using df2.except(df1)
For details of these 2 ways, you can refer to DataFrame equality in Apache Spark.
Compare two Spark dataframes
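For the per-column report shown in the question, a dynamic PySpark sketch (an addition here, not part of the answer above) is to join the two frames on the ID column, assumed to uniquely identify rows in both, and loop over df1.columns counting disagreements:
from pyspark.sql import functions as F

total = df1.count()
joined = df1.alias("a").join(df2.alias("b"), "ID")

rows = []
for c in df1.columns:
    if c == "ID":
        continue
    # count rows where this column differs between the two frames
    miss = joined.where(F.col(f"a.{c}") != F.col(f"b.{c}")).count()
    status = "Pass" if miss == 0 else "Fail"
    rows.append((c, total, miss, f"{100.0 * miss / total:.1f} %", status))

report = spark.createDataFrame(
    rows, ["Attribute Name", "Total Records", "Number Miss Match", "% Miss Match", "Status"])
report.show()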

How to replace column values using replace() method while using dictionary?

While replacing values of a column in a df using the replace method, how can we make use of a dictionary to do it? I am having problems with the syntax.
person = spark.createDataFrame([
    (0, "Bill Chambers", 0, [100]),
    (1, "Matei Zaharia", 1, [500, 250, 100]),
    (2, "Michael Armbrust", 1, [250, 100]),
    (1, "Adam", 4, [200])])\
    .toDF("id", "name", "graduate_program", "spark_status")
diz = {'Bill Chambers': 'ABC', 'Adam': 'DEF'}
I saw that the syntax is:
person.replace(diz,1,'name')
What is the significance of 1 here in the arguments?
First of all, I encourage you to check the pyspark documentation and search for the replace(to_replace, value=<no value>, subset=None) function definition.
You are passing a dictionary diz of key/value pairs, and because of that the value 1 will be ignored in your case; thus, you will get the following result:
>>> person.replace(diz,1,'name').show()
+---+----------------+----------------+---------------+
| id| name|graduate_program| spark_status|
+---+----------------+----------------+---------------+
| 0| ABC| 0| [100]|
| 1| Matei Zaharia| 1|[500, 250, 100]|
| 2|Michael Armbrust| 1| [250, 100]|
| 1| DEF| 4| [200]|
+---+----------------+----------------+---------------+
Note that in your usage only the name column, which you specified as subset, is affected, and you can clearly see that your dictionary's key/value pairs have been used as to_replace/value.
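Incidentally, since the dictionary already carries the replacement values, the 1 can simply be left out; a quick sketch of the same call without it:
person.replace(diz, subset='name').show()
# produces the same output as person.replace(diz, 1, 'name').show()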
Now if you want to test how the value argument works, check this example:
>>> person.replace(['Adam', 'Bill Chambers'],['Bob', 'Omar'],'name').show()
+---+----------------+----------------+---------------+
| id| name|graduate_program| spark_status|
+---+----------------+----------------+---------------+
| 0| Omar| 0| [100]|
| 1| Matei Zaharia| 1|[500, 250, 100]|
| 2|Michael Armbrust| 1| [250, 100]|
| 1| Bob| 4| [200]|
+---+----------------+----------------+---------------+
Note that if you want to specify to_replace and value as lists applied to two columns, check the following usage of dataframe.replace():
>>> person.replace([1, 0],[9, 5],['id', 'graduate_program']).show()
+---+----------------+----------------+---------------+
| id| name|graduate_program| spark_status|
+---+----------------+----------------+---------------+
| 5| Bill Chambers| 5| [100]|
| 9| Matei Zaharia| 9|[500, 250, 100]|
| 2|Michael Armbrust| 9| [250, 100]|
| 9| Adam| 4| [200]|
+---+----------------+----------------+---------------+
In the previous example we targeted two columns of the same (int) type, [id, graduate_program], and replaced all ones with 9 and all zeros with 5.
I hope this answers your question.

extracting numpy array from Pyspark Dataframe

I have a dataframe gi_man_df where group can take n values:
+------------------+-----------------+--------+--------------+
| group | number|rand_int| rand_double|
+------------------+-----------------+--------+--------------+
| 'GI_MAN'| 7| 3| 124.2|
| 'GI_MAN'| 7| 10| 121.15|
| 'GI_MAN'| 7| 11| 129.0|
| 'GI_MAN'| 7| 12| 125.0|
| 'GI_MAN'| 7| 13| 125.0|
| 'GI_MAN'| 7| 21| 127.0|
| 'GI_MAN'| 7| 22| 126.0|
+------------------+-----------------+--------+--------------+
and I am expecting a numpy nd_array, i.e., gi_man_array:
[[[124.2],[121.15],[129.0],[125.0],[125.0],[127.0],[126.0]]]
containing the rand_double values after applying a pivot.
I tried the following 2 approaches:
FIRST: I pivot the gi_man_df as follows:
gi_man_pivot = gi_man_df.groupBy("number").pivot('rand_int').sum("rand_double")
and the output I got is:
Row(number=7, group=u'GI_MAN', 3=124.2, 10=121.15, 11=129.0, 12=125.0, 13=125.0, 21=127.0, 23=126.0)
but the problem here is that to get the desired output I can't convert it to a matrix and then convert it again to a numpy array.
SECOND:
I created the vector in the dataframe itself using:
from pyspark.ml.feature import VectorAssembler

assembler = VectorAssembler(inputCols=["rand_double"], outputCol="rand_dbl_Vect")
gi_man_vector = assembler.transform(gi_man_df)
gi_man_vector.show(7)
and I got the following output:
+----------------+-----------------+--------+--------------+--------------+
| group| number|rand_int| rand_double| rand_dbl_Vect|
+----------------+-----------------+--------+--------------+--------------+
| GI_MAN| 7| 3| 124.2| [124.2]|
| GI_MAN| 7| 10| 121.15| [121.15]|
| GI_MAN| 7| 11| 129.0| [129.0]|
| GI_MAN| 7| 12| 125.0| [125.0]|
| GI_MAN| 7| 13| 125.0| [125.0]|
| GI_MAN| 7| 21| 127.0| [127.0]|
| GI_MAN| 7| 22| 126.0| [126.0]|
+----------------+-----------------+--------+--------------+--------------+
but the problem here is that I can't pivot on rand_dbl_Vect.
So my questions are:
1. Is either of the 2 approaches the correct way of achieving the desired output? If so, how can I proceed further to get the desired result?
2. What other way can I proceed with so that the code is optimal and the performance is good?
This
import numpy as np
np.array(gi_man_df.select('rand_double').collect())
produces
array([[ 124.2 ],
[ 121.15],
.........])
To convert the Spark df to a numpy array, first convert it to pandas and then apply the to_numpy() function.
spark_df.select(<list of columns needed>).toPandas().to_numpy()
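Either way you get a 2-D array of shape (n, 1); if you want the extra outer dimension shown in the question ([[[124.2], [121.15], ...]]), a small follow-up sketch:
import numpy as np

arr = np.array(gi_man_df.select('rand_double').collect())  # shape (n, 1)
gi_man_array = arr[np.newaxis, :, :]                       # shape (1, n, 1)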