Find minimum for a timestamp through Spark groupBy dataframe - sql

When I group my dataframe on a column and then try to find the minimum for each group with groupbyDatafram.min('timestampCol'), it appears I cannot do it on non-numerical columns. How can I properly get the minimum (earliest) date in the groupBy?
I am streaming the dataframe from a PostgreSQL S3 instance, so the data is already configured.

Just perform the aggregation directly instead of using the min helper:
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.functions.min

val sqlContext: SQLContext = ???
import sqlContext.implicits._

val df = Seq((1L, "2016-04-05 15:10:00"), (1L, "2014-01-01 15:10:00"))
  .toDF("id", "ts")
  .withColumn("ts", $"ts".cast("timestamp"))

df.groupBy($"id").agg(min($"ts")).show
// +---+--------------------+
// | id| min(ts)|
// +---+--------------------+
// | 1|2014-01-01 15:10:...|
// +---+--------------------+
Unlike the min helper, this works on any orderable type.
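For reference, a minimal PySpark sketch of the same approach (assuming an existing SparkSession named spark; the data is illustrative):
from pyspark.sql import functions as F
# illustrative data; cast the string column to a proper timestamp first
df = spark.createDataFrame(
    [(1, "2016-04-05 15:10:00"), (1, "2014-01-01 15:10:00")],
    ["id", "ts"],
).withColumn("ts", F.col("ts").cast("timestamp"))
# agg(F.min(...)) works on timestamp columns as well
df.groupBy("id").agg(F.min("ts").alias("min_ts")).show()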

Related

Scala dataframe get last 6 months latest data

I have a column in my dataframe like below:
+-------------------+
| timestampCol|
+-------------------+
|2020-11-27 00:00:00|
|2020-11-27 00:00:00|
+-------------------+
I need to filter the data based on this date, and I want to get only the last 6 months of data. Could anyone please suggest how I can do that?
import spark.sqlContext.implicits._
import org.apache.spark.sql.functions._

dataset.filter(
  dataset.col("timestampCol").cast("date")
    .gt(add_months(current_date(), -6)))
This filters out all timestampCol values older than 6 months, keeping only the last 6 months of data.
Depending on the dataset schema you may need to cast the value as a date.
If it is already a timestamp, just compare it directly with a java.sql.Timestamp instance.
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col

// substitute an actual timestamp string following this pattern
val someMomentInTime = java.sql.Timestamp.valueOf("yyyy-[m]m-[d]d hh:mm:ss")
val df: DataFrame = ???

df.filter(col("timestampCol") > someMomentInTime) // DataFrame is Dataset[Row]
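If the same filter is needed from PySpark, a minimal sketch (assuming a DataFrame df with a timestampCol column):
from pyspark.sql import functions as F
# keep only rows whose date falls within the last 6 months
recent = df.filter(F.col("timestampCol").cast("date") > F.add_months(F.current_date(), -6))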

How to add Extra column with current date in Spark dataframe

I am trying to add one column to my existing PySpark dataframe using the withColumn method. I want to insert the current date in this column. My source does not have any date column, so I am adding this current date column to my dataframe and saving the dataframe to my table, so that later I can use this current date column for tracking purposes.
I am using the code below:
df2=df.withColumn("Curr_date",datetime.now().strftime('%Y-%m-%d'))
Here df is my existing dataframe, and I want to save df2 as a table with the Curr_date column.
But it expects an existing column or the lit method instead of datetime.now().strftime('%Y-%m-%d').
Could someone please guide me on how to add this date column to my dataframe?
Use either lit or current_date:
from datetime import datetime
from pyspark.sql import functions as F

df2 = df.withColumn("Curr_date", F.lit(datetime.now().strftime("%Y-%m-%d")))
# OR
df2 = df.withColumn("Curr_date", F.current_date())
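Since the question mentions saving the result to a table for later tracking, a minimal sketch of persisting df2 (the table name my_tracking_table is hypothetical):
# write the dataframe, including the Curr_date column, to a managed table
df2.write.mode("append").saveAsTable("my_tracking_table")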
current_timestamp() is good, but it is evaluated at query serialization time, so all calls of current_timestamp within the same query return the same value.
If you prefer to use the timestamp of the processing time of each row, you may use the method below:
withColumn('current', expr("reflect('java.time.LocalDateTime', 'now')"))
There is a Spark function current_timestamp():
from pyspark.sql.functions import *
df.withColumn('current', date_format(current_timestamp(), 'yyyy-MM-dd')).show()
+----+----------+
|test| current|
+----+----------+
|test|2020-09-09|
+----+----------+

drop record based on multiple columns value using pyspark

I have a pyspark dataframe like below:
I want to keep only one record if the two columns uniq_id and date_time have the same value.
Expected Output:
I want to achieve this using pyspark.
Thank you
You can group by uniq_id and date_time and use first()
from pyspark.sql import functions as F
df.groupBy("uniq_id", "date_time").agg(F.first("col_1"), F.first("col_2"), F.first("col_3")).show()
I don't quite see how you would compare an int column with a timestamp column (though it can be done by casting the timestamp to int), but such filtering can be done via:
from pyspark.sql import functions as F
# assume you already have your DataFrame
df = df.filter(F.col('first_column_name') == F.col('second_column_name'))
or just
df = df.filter('first_column_name = second_column_name')

How to transform pyspark dataframe 1x9 to 3x3

I'm using a pyspark dataframe.
I have a df which is 1x9.
Example:
temp = spark.read.option("sep","\n").csv("temp.txt")
temp :
sam
11
newyork
john
13
boston
eric
22
texas
Without using the Pandas library, how can I transform this into a 3x3 dataframe with columns name, age, city?
Like this:
name,age,city
sam,11,newyork
john,13,boston
eric,22,texas
I would read the file as an rdd to take advantage of zipWithIndex to add an index to your data.
rdd = sc.textFile("temp.txt")
We can now use truncating division to create an index with which to group records together. Use this new index as the key for the rdd. The corresponding values will be a tuple of the header, which can be computed using the modulus, and the actual value. (Note the index returned by zipWithIndex will be at the end of the record, which is why we use row[1] for the division/mod.)
Next use reduceByKey to add the value tuples together. This will give you a tuple of keys and values (in sequence). Use map to turn that into a Row (to keep column headers, etc).
Finally use toDF() to convert to a DataFrame. You can use select(header) to get the columns in the desired order.
from operator import add
from pyspark.sql import Row

header = ["name", "age", "city"]

df = rdd.zipWithIndex()\
    .map(lambda row: (row[1]//3, (header[row[1]%3], row[0])))\
    .reduceByKey(add)\
    .map(lambda row: Row(**dict(zip(row[1][::2], row[1][1::2]))))\
    .toDF()\
    .select(header)

df.show()
#+----+---+-------+
#|name|age| city|
#+----+---+-------+
#| sam| 11|newyork|
#|eric| 22| texas|
#|john| 13| boston|
#+----+---+-------+
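Note that values read this way are all strings; if age should be numeric, a small follow-up sketch using the df built above:
# cast the string age column to an integer type
df = df.withColumn("age", df["age"].cast("int"))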

QPython Pandas Interaction

I have a question pertaining to a Pandas DataFrame which I want to enrich with timings from a tick source (kdb table).
Pandas DataFrame
Date        sym     Level
2018-07-01  USDJPY  110
2018-08-01  GBPUSD  1.20
I want to enrich this dataframe with timings (first time for a given currency pair for a given date when the level is crossed).
from qpython import qconnection
from qpython import MetaData
from qpython.qtype import QKEYED_TABLE
from qpython.qtype import QSTRING_LIST, QINT_LIST, QDATETIME_LIST, QSYMBOL_LIST

q.open()
df.meta = MetaData(sym = QSYMBOL_LIST, val = QINT_LIST, Date = QDATE_LIST)
q('set', np.string_('tbl'), df)
The above code converts the pandas dataframe to a q table.
Example code to access tick data (kdb tables):
select Mid by sym,date from quotestackevent where date = 2018.07.01, sym = `CCYPAIR
How can I use dataframe columns sym and date to pull data from kdb tables using Qpython?
Suppose on the KDB+ side you have a table t with columns sym (of type symbol), date (of type date), and mid (of type float), for example generated by the following code:
t:`date xasc ([] sym:raze (3#) each `USDJPY`GBPUSD`EURBTC;date:9#.z.d-til 3;mid:9?`float$10)
Then to bring the data for enrichment from the KDB+ side to the Python side you can do the following:
from qpython import qconnection
import pandas as pd

df = pd.DataFrame({'Date': ['2018-09-08', '2018-09-08', '2018-09-07', '2018-09-07'], 'sym': ['abc', 'def', 'abc', 'def']})
df['Date'] = df['Date'].astype('datetime64[ns]')

with qconnection.QConnection(host='localhost', port=5001, pandas=True) as q:
    X = q.sync('{select sym,date,mid from t where date in `date$x}', df['Date'])
Here the first argument to q.sync() defines a function to be executed and the second argument is the range of dates you want to get from the table t. Inside the function the `date$x part converts the argument to a list of dates, which is needed because df['Date'] is sent as a list of timestamps to the KDB+ side.
The resulting X data frame will have the sym column as binary strings, so you may want to do something like
X['sym'].apply(lambda x: x.decode('ascii'))
to convert that to strings.
An alternative to sending the function definition is to have a function defined on the KDB+ side and send only its name from the Python side. So, if you can do something like
getMids:{select sym,date,mid from t where date in `date$x}
on the KDB+ side, then you can do
X = q.sync('getMids',df['Date'])
instead of sending the function definition.
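To actually enrich the original pandas df with the returned mids, a minimal pandas merge sketch; the decode, rename, and join keys below are assumptions that depend on how the columns come back from kdb+:
import pandas as pd
# assumed alignment: decode kdb symbols and rename columns to match the pandas frame
X['sym'] = X['sym'].apply(lambda s: s.decode('ascii'))
X = X.rename(columns={'date': 'Date'})
enriched = pd.merge(df, X, on=['Date', 'sym'], how='left')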