I have strings of varying lengths, for example 11/8/16 and 1/27/16. The format is Month/Day/Year. How can I convert these to dates? I tried various combinations of MM, mm, DD, etc. but can't get it to work.
I checked this on Spark 3; I had to change timeParserPolicy to LEGACY:
import pyspark.sql.functions as F

spark.conf.set("spark.sql.legacy.timeParserPolicy", "LEGACY")

data1 = [
    ["11/8/16"],
    ["1/27/16"]
]
df = spark.createDataFrame(data1).toDF("source")
tmp = df.withColumn("parsed_to_date", F.to_date(F.col("source"), "MM/dd/yy"))
tmp.show(truncate=False)
output
+-------+--------------+
|source |parsed_to_date|
+-------+--------------+
|11/8/16|2016-11-08 |
|1/27/16|2016-01-27 |
+-------+--------------+
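On Spark 3 an alternative is to keep the default parser and use single-letter pattern fields, which accept one or two digits, so the LEGACY setting should not be needed. A minimal sketch of that variant:

import pyspark.sql.functions as F

# "M" and "d" match one- or two-digit values, so both "11/8/16" and "1/27/16" parse
df.withColumn("parsed_to_date", F.to_date(F.col("source"), "M/d/yy")).show(truncate=False)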
I am using the below script to refine the data in the silver layer:
# Read from existing internal table
dfAsset = (spark.read.option(Constants.SERVER, "xyz.sql.azuresynapse.net")
    .synapsesql("abc.Salesforce.Asset")
    .select("Id", "ContactId", "CreatedDate", "CreatedById", "LastModifiedDate")
    .filter(col("productCode").contains("11061164"))
    .limit(10))

dfAsset.show()
For the column CreatedDate, the data appears in Unix timestamp format. Please refer to the values below:
CreatedDate
1652108980000
1632313243000
1632312269000
1632312410000
I need to convert the data into YYYY-MM-DD format in the above script.
Please advise how it can be done.
Regards
RK
This is my sample Dataframe saved in the variable dfAsset.
#+-----------+
#| date1 |
#+-----------+
#|16521089 |
#|16323132 |
#|16323122 |
#|16323124 |
#+-----------+
Using the below code, you can convert the data into YYYY-MM-DD format:
from pyspark.sql.types import TimestampType
from pyspark.sql.functions import col, to_date

# cast the epoch seconds to a timestamp, then truncate it to a date
df = dfAsset.withColumn('date', to_date(col('date1').cast(TimestampType())))
df.show()
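Note that the CreatedDate values in the question (e.g. 1652108980000) look like epoch milliseconds, and a plain cast to TimestampType treats the number as seconds. A minimal sketch for that case, assuming the column really holds milliseconds:

from pyspark.sql.functions import col, to_date

# divide by 1000 to go from milliseconds to seconds before the timestamp cast
dfAsset = dfAsset.withColumn(
    'CreatedDate',
    to_date((col('CreatedDate') / 1000).cast('timestamp'))
)
dfAsset.show()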
I have a CSV like this:
COL,VAL
TEST,100000000.12345679
TEST2,200000000.1234
TEST3,9999.1234679123
I want to load it with the column VAL as a numeric type (due to other requirements of the project) and then persist it back to another CSV with the structure below:
+-----+------------------+
| COL| VAL|
+-----+------------------+
| TEST|100000000.12345679|
|TEST2| 200000000.1234|
|TEST3| 9999.1234679123|
+-----+------------------+
The problem I'm facing is that whenever I load it, the numbers are turned into scientific notation, and I cannot persist it back without having to specify the precision and scale of my data (I want to use whatever precision is already in the file; I can't infer it).
Here's what I have tried:
Loading it with DoubleType() gives me scientific notation:
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

schema = StructType([
    StructField('COL', StringType()),
    StructField('VAL', DoubleType())
])

csv_file = "Downloads/test.csv"
df2 = (spark.read.format("csv")
    .option("sep", ",")
    .option("header", "true")
    .schema(schema)
    .load(csv_file))

df2.show()
+-----+--------------------+
| COL| VAL|
+-----+--------------------+
| TEST|1.0000000012345679E8|
|TEST2| 2.000000001234E8|
|TEST3| 9999.1234679123|
+-----+--------------------+
Loading it with DecimalType() requires me to specify precision and scale, otherwise I lose the decimals after the dot. However, when I specify them, besides the risk of not getting the correct value (as my data might be rounded), I get trailing zeros after the dot:
For example, using: StructField('VAL', DecimalType(38, 18)) I get:
[Row(COL='TEST', VAL=Decimal('100000000.123456790000000000')),
Row(COL='TEST2', VAL=Decimal('200000000.123400000000000000')),
Row(COL='TEST3', VAL=Decimal('9999.123467912300000000'))]
Notice that in this case I have trailing zeros on the right side that I don't want in my new file.
The only way I found to address it was using a UDF, where I first use float() to remove the scientific notation and then convert the value to a string to make sure it is persisted the way I want:
from pyspark.sql.functions import udf

to_decimal = udf(lambda n: str(float(n)))

df2 = df2.select("*", to_decimal("VAL").alias("VAL2"))
df2 = df2.select(["COL", "VAL2"]).withColumnRenamed("VAL2", "VAL")
df2.show()
display(df2.schema)
+-----+------------------+
| COL| VAL|
+-----+------------------+
| TEST|100000000.12345679|
|TEST2| 200000000.1234|
|TEST3| 9999.1234679123|
+-----+------------------+
StructType(List(StructField(COL,StringType,true),StructField(VAL,StringType,true)))
Is there any way to achieve the same result without using the UDF trick?
Thank you!
The best way I found to address it was as below. It still uses a UDF, but now without the String workarounds to avoid scientific notation. I won't mark it as the correct answer yet, because I still expect someone to come along with a solution without a UDF (or a good explanation of why it's not possible without UDFs).
The CSV:
$ cat /Users/bambrozi/Downloads/testf.csv
COL,VAL
TEST,100000000.12345679
TEST2,200000000.1234
TEST3,9999.1234679123
TEST4,123456789.01234567
Load the CSV applying the default PySpark DecimalType precision and scale:
from pyspark.sql.types import StructType, StructField, StringType, DecimalType

schema = StructType([
    StructField('COL', StringType()),
    StructField('VAL', DecimalType(38, 18))
])

csv_file = "Downloads/testf.csv"
df2 = (spark.read.format("csv")
    .option("sep", ",")
    .option("header", "true")
    .schema(schema)
    .load(csv_file))

df2.show(truncate=False)
output:
+-----+----------------------------+
|COL |VAL |
+-----+----------------------------+
|TEST |100000000.123456790000000000|
|TEST2|200000000.123400000000000000|
|TEST3|9999.123467912300000000 |
|TEST4|123456789.012345670000000000|
+-----+----------------------------+
When you are ready to report it (print it or save it to a new file), you apply a format that strips the trailing zeros:
import decimal
import pyspark.sql.functions as F

# Decimal.normalize() drops the trailing zeros from each value
normalize_decimals = F.udf(lambda dec: dec.normalize())

(df2
    .withColumn('VAL', normalize_decimals(F.col('VAL')))
    .show(truncate=False))
output:
+-----+------------------+
|COL |VAL |
+-----+------------------+
|TEST |100000000.12345679|
|TEST2|200000000.1234 |
|TEST3|9999.1234679123 |
|TEST4|123456789.01234567|
+-----+------------------+
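One UDF-free possibility (a sketch, assuming every value cast from DecimalType(38, 18) to string contains a decimal point) is to cast VAL to a string and strip the trailing zeros with regexp_replace:

import pyspark.sql.functions as F

df3 = (df2
    # decimal -> string keeps the full precision, e.g. "200000000.123400000000000000"
    .withColumn('VAL', F.col('VAL').cast('string'))
    # drop the trailing zeros after the decimal point
    .withColumn('VAL', F.regexp_replace('VAL', '0+$', ''))
    # drop a dangling '.' if the value had no fractional part
    .withColumn('VAL', F.regexp_replace('VAL', '\\.$', '')))

df3.show(truncate=False)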
You can use Spark to do that with a SQL query:
import org.apache.spark.SparkConf
import org.apache.spark.sql.{DataFrame, SparkSession}

val sparkConf: SparkConf = new SparkConf(true)
  .setAppName(this.getClass.getName)
  .setMaster("local[*]")

implicit val spark: SparkSession = SparkSession.builder().config(sparkConf).getOrCreate()

val df = spark.read.option("header", "true").format("csv").load(csv_file)
df.createOrReplaceTempView("table")

// cast VAL to Spark SQL's DECIMAL type (there is no BigDecimal keyword in Spark SQL)
val query = "SELECT CAST(VAL AS DECIMAL(38, 18)) AS VAL, COL FROM table"
val result = spark.sql(query)
result.show()
result.coalesce(1).write.option("header", "true").mode("overwrite").csv(outputPath + table)
I need to convert a dataframe column of StringType to double and apply a format mask with a thousands separator and decimal places.
input dataframe:
column(StringType)
2655.00
15722.50
235354.66
required format:
(-1) * to_number(df.column, format mask)
The data should be delivered with . as the thousands separator, , as the decimal separator, and 2 decimal places.
Output column:
2.655,00
15.722,50
235.354,66
Spark's format_number returns a string formatted like #,###,###.##, so you need to swap . and , to get the European format you want.
First, replace dots with #, then commas with dots, and finally replace # with a comma.
df.withColumn("european_format", regexp_replace(regexp_replace(regexp_replace(
format_number(col("column").cast("double"), 2), '\\.', '#'), ',', '\\.'), '#', ',')
).show()
Gives:
+---------+---------------+
| column|european_format|
+---------+---------------+
| 2655.00| 2.655,00|
| 15722.50| 15.722,50|
|235354.66| 235.354,66|
+---------+---------------+
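A shorter variant, if you prefer to avoid the '#' placeholder, is translate, which substitutes characters one-for-one in a single pass (a sketch using the same column name):

import pyspark.sql.functions as F

# translate maps '.' -> ',' and ',' -> '.' simultaneously, so no temporary character is needed
df.withColumn(
    "european_format",
    F.translate(F.format_number(F.col("column").cast("double"), 2), ".,", ",.")
).show()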
You can simply do:
import pyspark.sql.functions as F
# create a new column with the formatted number
df = df.withColumn('num_format', F.format_number('col', 2))
# switch the dot and comma
df = df.withColumn('num_format', F.regexp_replace(F.regexp_replace(F.regexp_replace('num_format', '\\.', '#'), ',', '\\.'), '#', ','))
df.show()
+---------+----------+
| col|num_format|
+---------+----------+
| 2655.0| 2.655,00|
| 15722.5| 15.722,50|
|235354.66|235.354,66|
+---------+----------+
I have a data set which looks like this:
key|StateName_13|lon|lat|col5_13|col6_13|col7_13|ImageName|elevation_13|Counter_13
P00005K9XESU|FL|-80.854196|26.712385|128402000128038||183.30198669433594|USGS_NED_13_n27w081_IMG.img|3.7742109298706055|1
P00005KC31Y7|FL|-80.854196|26.712385|128402000128038||174.34959411621094|USGS_NED_13_n27w082_IMG.img|3.553356885910034|1
P00005KC320M|FL|-80.846966|26.713182|128402000100953||520.3673706054688|USGS_NED_13_n27w081_IMG.img|2.2236201763153076|1
P00005KC320M|FL|-80.84617434521485|26.713200344482424|128402000100953||520.3673706054688|USGS_NED_13_n27w081_IMG.img|2.7960102558135986|2
P00005KC320M|FL|-80.84538|26.713219|128402000100953||520.3673706054688|USGS_NED_13_n27w081_IMG.img|1.7564013004302979|3
P00005KC31Y6|FL|-80.854155|26.712083|128402000128038||169.80172729492188|USGS_NED_13_n27w081_IMG.img|3.2237753868103027|1
P00005KATEL2|FL|-80.861664|26.703649|128402000122910||38.789894104003906|USGS_NED_13_n27w081_IMG.img|3.235154628753662|1
In this dataset, I want to find duplicate lon/lat pairs and get the names of the images corresponding to those lon and lat values.
Output should look like this:
lon|lat|ImageName
-80.854196|26.712385|USGS_NED_13_n27w081_IMG.img,USGS_NED_13_n27w082_IMG.img
Rows 1 and 2 have the same lon and lat values but different image names.
Any PySpark code or SQL query works.
Using @giser_yugang's comment, we can do something like this:
from pyspark.sql import functions as F

df = df.groupby(
    'lon',
    'lat'
).agg(
    F.collect_set('ImageName').alias("ImageNames")
).where(
    F.size("ImageNames") > 1
)

df.show(truncate=False)
+----------+---------+----------------------------------------------------------+
|lon |lat |ImageNames |
+----------+---------+----------------------------------------------------------+
|-80.854196|26.712385|[USGS_NED_13_n27w081_IMG.img, USGS_NED_13_n27w082_IMG.img]|
+----------+---------+----------------------------------------------------------+
If you need to write it to a CSV, as that format does not support ArrayType, you can use concat_ws:
df = df.withColumn(
    "ImageNames",
    F.concat_ws(
        ", ",
        "ImageNames"
    )
)

df.show(truncate=False)
+----------+---------+--------------------------------------------------------+
|lon |lat |ImageNames |
+----------+---------+--------------------------------------------------------+
|-80.854196|26.712385|USGS_NED_13_n27w081_IMG.img, USGS_NED_13_n27w082_IMG.img|
+----------+---------+--------------------------------------------------------+
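Since a SQL query works just as well for the asker, here is an equivalent sketch through a temporary view (the view name images is an assumption):

df.createOrReplaceTempView("images")

spark.sql("""
    SELECT lon,
           lat,
           concat_ws(', ', collect_set(ImageName)) AS ImageNames
    FROM images
    GROUP BY lon, lat
    HAVING size(collect_set(ImageName)) > 1
""").show(truncate=False)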
Description
Given a dataframe df
id | date
---------------
1 | 2015-09-01
2 | 2015-09-01
1 | 2015-09-03
1 | 2015-09-04
2 | 2015-09-04
I want to create a running counter or index, grouped by the same id and sorted by date in that group, thus:
id | date | counter
--------------------------
1 | 2015-09-01 | 1
1 | 2015-09-03 | 2
1 | 2015-09-04 | 3
2 | 2015-09-01 | 1
2 | 2015-09-04 | 2
This is something I can achieve with a window function, e.g.
val w = Window.partitionBy("id").orderBy("date")
val resultDF = df.select( df("id"), rowNumber().over(w) )
Unfortunately, Spark 1.4.1 does not support window functions for regular dataframes:
org.apache.spark.sql.AnalysisException: Could not resolve window function 'row_number'. Note that, using window functions currently requires a HiveContext;
Questions
How can I achieve the above computation on current Spark 1.4.1 without using window functions?
When will window functions for regular dataframes be supported in Spark?
Thanks!
You can use HiveContext for local DataFrames as well and, unless you have a very good reason not to, it is probably a good idea anyway. It is the default SQLContext available in the spark-shell and pyspark shells (as of now, SparkR seems to use a plain SQLContext), and its parser is recommended by the Spark SQL and DataFrame Guide.
import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.rowNumber
object HiveContextTest {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Hive Context")
    val sc = new SparkContext(conf)
    val sqlContext = new HiveContext(sc)
    import sqlContext.implicits._

    val df = sc.parallelize(
      ("foo", 1) :: ("foo", 2) :: ("bar", 1) :: ("bar", 2) :: Nil
    ).toDF("k", "v")

    val w = Window.partitionBy($"k").orderBy($"v")
    df.select($"k", $"v", rowNumber.over(w).alias("rn")).show
  }
}
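Since that same HiveContext is what the pyspark shell gives you, a rough PySpark counterpart looks like the sketch below (it uses the modern row_number name, which was called rowNumber on Spark 1.4/1.5):

from pyspark.sql import HiveContext, Window
from pyspark.sql.functions import row_number  # rowNumber on Spark 1.4/1.5

sqlContext = HiveContext(sc)
df = sqlContext.createDataFrame(
    [("foo", 1), ("foo", 2), ("bar", 1), ("bar", 2)], ["k", "v"])

w = Window.partitionBy("k").orderBy("v")
df.select("k", "v", row_number().over(w).alias("rn")).show()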
You can do this with RDDs. Personally I find the API for RDDs makes a lot more sense - I don't always want my data to be 'flat' like a dataframe.
val df = sqlContext.sql("select 1, '2015-09-01'")
  .unionAll(sqlContext.sql("select 2, '2015-09-01'"))
  .unionAll(sqlContext.sql("select 1, '2015-09-03'"))
  .unionAll(sqlContext.sql("select 1, '2015-09-04'"))
  .unionAll(sqlContext.sql("select 2, '2015-09-04'"))
// dataframe as an RDD (of Row objects)
df.rdd
  // grouping by the first column of the row
  .groupBy(r => r(0))
  // map each group - an Iterable[Row] - to a list and sort by the second column
  .map(g => g._2.toList.sortBy(row => row(1).toString))
  .collect()
The above gives a result like the following:
Array[List[org.apache.spark.sql.Row]] =
Array(
List([1,2015-09-01], [1,2015-09-03], [1,2015-09-04]),
List([2,2015-09-01], [2,2015-09-04]))
If you want the position within the 'group' as well, you can use zipWithIndex.
df.rdd.groupBy(r => r(0)).map(g =>
  g._2.toList.sortBy(row => row(1).toString).zipWithIndex).collect()
Array[List[(org.apache.spark.sql.Row, Int)]] = Array(
List(([1,2015-09-01],0), ([1,2015-09-03],1), ([1,2015-09-04],2)),
List(([2,2015-09-01],0), ([2,2015-09-04],1)))
You could flatten this back to a simple List/Array of Row objects using flatMap, but if you need to perform anything on the 'group', that won't be a great idea.
The downside to using RDD like this is that it's tedious to convert from DataFrame to RDD and back again.
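For what it's worth, the same zipWithIndex idea can be written against the question's id/date columns in PySpark and turned back into a DataFrame; a rough sketch:

# group by id, sort each group by date, attach a 1-based counter, and flatten back to rows
rows = (df.rdd
    .groupBy(lambda r: r[0])
    .flatMap(lambda kv: [(row[0], row[1], i + 1)
                         for i, row in enumerate(sorted(kv[1], key=lambda r: str(r[1])))]))

result = rows.toDF(["id", "date", "counter"])
result.show()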
I totally agree that Window functions for DataFrames are the way to go if you have Spark version >= 1.5. But if you are really stuck with an older version (e.g. 1.4.1), here is a hacky way to solve this:
val df = sc.parallelize((1, "2015-09-01") :: (2, "2015-09-01") :: (1, "2015-09-03") :: (1, "2015-09-04") :: (2, "2015-09-04") :: Nil)
  .toDF("id", "date")

val dfDuplicate = df.selectExpr("id as idDup", "date as dateDup")

val dfWithCounter = df.join(dfDuplicate, $"id" === $"idDup")
  .where($"date" >= $"dateDup")
  .groupBy($"id", $"date")
  .agg($"id", $"date", count($"idDup").as("counter"))
  .select($"id", $"date", $"counter")
Now if you do dfWithCounter.show
You will get:
+---+----------+-------+
| id| date|counter|
+---+----------+-------+
| 1|2015-09-01| 1|
| 1|2015-09-04| 3|
| 1|2015-09-03| 2|
| 2|2015-09-01| 1|
| 2|2015-09-04| 2|
+---+----------+-------+
Note that date is not sorted, but the counter is correct. Also, you can reverse the ordering of the counter by changing the >= to <= in the where statement.