Get MAX value per year in Apache Pig

I have been trying to get the maximum temperature per year using the data below.
The actual data looks like this, but I am only interested in the first column (the year, taken from the timestamp) and the fourth column (the temperature):
2016-11-03 12:00:00.000 +0100,Mostly Cloudy,rain,10.594444444444443,10.594444444444443,0.73,13.2664,174.0,10.1913,0.0,1019.74,Partly cloudy throughout the day.
2016-11-03 13:00:00.000 +0100,Mostly Cloudy,rain,11.072222222222223,11.072222222222223,0.72,13.1698,176.0,12.4131,0.0,1019.45,Partly cloudy throughout the day.
2016-11-03 14:00:00.000 +0100,Mostly Cloudy,rain,11.172222222222222,11.172222222222222,0.71,12.654600000000002,175.0,10.835300000000002,0.0,1019.16,Partly cloudy throughout the day.
2016-11-03 15:00:00.000 +0100,Mostly Cloudy,rain,10.911111111111111,10.911111111111111,0.72,11.753,170.0,10.867500000000001,0.0,1018.94,Partly cloudy throughout the day.
2016-11-03 16:00:00.000 +0100,Mostly Cloudy,rain,10.350000000000001,10.350000000000001,0.72,10.6582,161.0,11.592,0.0,1018.81,Partly cloudy throughout the day.
The DUMP of B looks like this:
(2014,12.038889)
(2014,21.055555)
(2016,29.905556)
(2016,30.605556)
(2016,29.95)
(2016,29.972221)
The code I have written is below, but it throws an error at D. I have also tried the ToDate function, but it doesn't seem to work either.
A = load 'file.csv' using PigStorage(',')......
B = foreach A GENERATE SUBSTRING(year,0,4) as year1, Atemp
C = group B by year1;
D = foreach C GENERATE group,MAX(Atemp);
The error I get:
Invalid field projection. Projected field [year1] does not exist in schema: group:chararray,B:bag{:tuple(year1:chararray,Atemp:float)}.

I figured it out myself after posting the question on Stack Overflow :) I wonder why that always happens!
Instead of D = foreach C GENERATE group,MAX(Atemp);
I used D = foreach C GENERATE group, MAX(B.Atemp) as max;
and it works! After the GROUP, relation C has only two fields: the group key and a bag named B, so the aggregate has to be applied to the bag's column (B.Atemp) rather than to Atemp directly.
If anyone wants me to delete the post, I'm happy to do so. Kindly let me know.

Related

PySpark Grouping and Aggregating based on A Different Column?

I'm working on a problem where I have a dataset in the following format (replaced real data for example purposes):
session  activity        timestamp
1        enter_store     2022-03-01 23:25:11
1        pay_at_cashier  2022-03-01 23:31:10
1        exit_store      2022-03-01 23:55:01
2        enter_store     2022-03-02 07:15:00
2        pay_at_cashier  2022-03-02 07:24:00
2        exit_store      2022-03-02 07:35:55
3        enter_store     2022-03-05 11:07:01
3        exit_store      2022-03-05 11:22:51
I would like to be able to compute counting statistics for these events based on the pattern observed within each session. For example, based on the table above, the count of each pattern observed would be as follows:
{
'enter_store -> pay_at_cashier -> exit_store': 2,
'enter_store -> exit_store': 1
}
I'm trying to do this in PySpark, but I'm having some trouble figuring out the most efficient way to do this kind of pattern matching where some steps are missing. The real problem involves a much larger dataset of ~15M+ events like this.
I've tried filtering the entire DF for unique sessions where 'enter_store' is observed, and then filtering that DF for unique sessions where 'pay_at_cashier' is observed. That works fine; the only issue is that I'm having trouble counting sessions like session 3, where there is only a starting step and a final step but no middle step.
Obviously, one brute-force way would be to iterate over each session, assign it a pattern, and increment a counter, but I'm looking for a more efficient and scalable approach.
Would appreciate any suggestions or insights.
For Spark 2.4+, you could do:
from pyspark.sql import functions as F

df = (df
    .withColumn("flow", F.expr("sort_array(collect_list(struct(timestamp, activity)) over (partition by session))"))
    .withColumn("flow", F.expr("concat_ws(' -> ', transform(flow, v -> v.activity))"))
    .groupBy("flow").agg(F.countDistinct("session").alias("total_session"))
)
df.show(truncate=False)
# +-------------------------------------------+-------------+
# |flow |total_session|
# +-------------------------------------------+-------------+
# |enter_store -> pay_at_cashier -> exit_store|2 |
# |enter_store -> exit_store |1 |
# +-------------------------------------------+-------------+
The first block collects each session's (timestamp, activity) pairs into an array and sorts it by timestamp (be sure timestamp really is a timestamp type). After that, transform keeps only the activity values from the array, concat_ws joins them into a single ' -> '-separated string, and the final groupBy counts the distinct sessions for each activity sequence.
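For reference, here is a self-contained sketch of the same idea, built on the sample data from the question and using an explicit groupBy instead of the window expression (a variation on the answer above, not its exact code):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Sample data reconstructed from the question (session, activity, timestamp).
data = [
    (1, "enter_store",    "2022-03-01 23:25:11"),
    (1, "pay_at_cashier", "2022-03-01 23:31:10"),
    (1, "exit_store",     "2022-03-01 23:55:01"),
    (2, "enter_store",    "2022-03-02 07:15:00"),
    (2, "pay_at_cashier", "2022-03-02 07:24:00"),
    (2, "exit_store",     "2022-03-02 07:35:55"),
    (3, "enter_store",    "2022-03-05 11:07:01"),
    (3, "exit_store",     "2022-03-05 11:22:51"),
]
df = spark.createDataFrame(data, ["session", "activity", "timestamp"])
df = df.withColumn("timestamp", F.to_timestamp("timestamp"))  # make it a real timestamp

# Collect each session's activities in time order, join them into a flow string,
# then count how many sessions share each flow.
flows = (df
    .groupBy("session")
    .agg(F.sort_array(F.collect_list(F.struct("timestamp", "activity"))).alias("flow"))
    .withColumn("flow", F.expr("concat_ws(' -> ', transform(flow, v -> v.activity))"))
    .groupBy("flow")
    .count())
flows.show(truncate=False)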

Pandas: create a graph from Date and Time stored in different columns

My data looks like this:
   Creation Day  Time St1  Time St2
0  28.01.2022    14:18:00  15:12:00
1  28.01.2022    14:35:00  16:01:00
2  29.01.2022    00:07:00  03:04:00
3  30.01.2022    17:03:00  22:12:00
It represents parts being at a given station. What I need now is something that counts how many entries share the same day and hour, i.e. how many parts were at the same station during a given hour. Here, for example, 2 parts were at station 1 on the 28th in the 14:00-15:00 window.
In the end I want a bar graph that shows production speed. Later in the project I also want to highlight parts that haven't moved for more than 2 hours.
Is it practical to create a datetime object for every station (I have 5 in total), or is there a much simpler way to do this?
FYI, I import this data from an Excel sheet.
I found the solution. Since they are just strings, I can simply concatenate them and convert the result with pd.to_datetime().
Example:
df["Time St1"] = pd.to_datetime(
df["Creation Day"] + ' ' + df["Time St1"],
infer_datetime_format=False, format='%d.%m.%Y %H:%M:%S'
)
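To get from those combined datetimes to the per-hour counts the question asks about, a minimal sketch (the column names follow the question; the counting step is an assumption about what is wanted, not part of the original answer) could floor each station time to the hour and group:

import pandas as pd

df = pd.DataFrame({
    "Creation Day": ["28.01.2022", "28.01.2022", "29.01.2022", "30.01.2022"],
    "Time St1": ["14:18:00", "14:35:00", "00:07:00", "17:03:00"],
    "Time St2": ["15:12:00", "16:01:00", "03:04:00", "22:12:00"],
})

# Combine day and time into real datetimes, as in the answer above.
for col in ["Time St1", "Time St2"]:
    df[col] = pd.to_datetime(df["Creation Day"] + " " + df[col],
                             format="%d.%m.%Y %H:%M:%S")

# How many parts were at station 1 in each hour.
per_hour_st1 = df.groupby(df["Time St1"].dt.floor("H")).size()
print(per_hour_st1)  # e.g. 2022-01-28 14:00:00 -> 2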

Make a for loop for a dataframe to subtract dates and put the result in a variable

I have a dataframe with a lot of products that looks like this:
Product  Start Date
00001    2021/08/10
00002    2021/01/10
I want to make a loop that goes from product to product, subtracting three months from the date and then putting it in a variable, something like this:
date[]=''
for dataframe in i:
date['3monthsbefore']=i['start date']-3 months
date['3monthsafter']=i['start date']+3 months
date['product']=i['product']
"Another process with those variables"
And then concatenate all of this data into a dataframe. I'm a little bit lost. I want to do this because I need to use those variables in another process, so is it possible to do this?
Using pandas, you usually don't need to loop over your DataFrame. In this case, you can get the 3 months before/after for all rows pretty simply using pd.DateOffset:
df["Start Date"] = pd.to_datetime(df["Start Date"])
df["3monthsbefore"] = df["Start Date"] - pd.DateOffset(months=3)
df["3monthsafter"] = df["Start Date"] + pd.DateOffset(months=3)
This gives:
Product Start Date 3monthsbefore 3monthsafter
0 00001 2021-08-10 2021-05-10 2021-11-10
1 00002 2021-01-10 2020-10-10 2021-04-10
Data:
df = pd.DataFrame({"Product": ["00001", "00002"], "Start Date": ["2021/08/10", "2021/01/10"]})

Pandas group by date and get count while removing duplicates

I have a data frame that looks like this:
maid date hour count
0 023f1f5f-37fb-4869-a957-b66b111d808e 2021-08-14 13 2
1 023f1f5f-37fb-4869-a957-b66b111d808e 2021-08-14 15 1
2 0589b8a3-9d33-4db4-b94a-834cc8f46106 2021-08-13 23 14
3 0589b8a3-9d33-4db4-b94a-834cc8f46106 2021-08-14 0 1
4 104010f8-5f57-4f7c-8ad9-5fc3ec0f9f39 2021-08-11 14 2
5 11947b4a-ccf8-48dc-a6a3-925836b3c520 2021-08-13 7 1
I am trying to get a count of maids for each date in such a way that if a maid appears on day 1, I don't want to include it on any of the subsequent days. For example, 0589b8a3-9d33-4db4-b94a-834cc8f46106 is present on both the 13th and the 14th. I want to include that maid in the count for the 13th but not the 14th, since it is already counted on the 13th.
I have written the following code and it works for small data frames:
import pandas as pd

df = pd.read_csv('/home/ubuntu/uniqueSiteId.csv')
umaids = []
tdf = []
df['date'] = pd.to_datetime(df.date)
df = df.sort_values('date')
df = df[['maid', 'date']]
df = df.drop_duplicates(['maid', 'date'])
dts = df['date'].unique()
for dt in dts:
    if not umaids:
        df1 = df[df['date'] == dt]
        k = df1['maid'].unique()
        umaids.extend(k)
        dff = df1
        fdf = df1.values.tolist()
    elif umaids:
        dfs = df[df['date'] == dt]
        df2 = dfs[~dfs['maid'].isin(umaids)]
        umaids.extend(df2['maid'].unique())
        sdf = df2.values.tolist()
        tdf.append(sdf)
ftdf = [item for t in tdf for item in t]
ndf = fdf + ftdf
ndf = pd.DataFrame(ndf, columns=['maid', 'date'])
print(ndf)
Since I have thousands of data frames, and a data frame often has more than a million rows, the above takes a long time to run. Is there a better way to do this?
The expected output is this:
maid date
0 104010f8-5f57-4f7c-8ad9-5fc3ec0f9f39 2021-08-11
1 0589b8a3-9d33-4db4-b94a-834cc8f46106 2021-08-13
2 11947b4a-ccf8-48dc-a6a3-925836b3c520 2021-08-13
3 023f1f5f-37fb-4869-a957-b66b111d808e 2021-08-14
As per the discussion in the comments, the solution is quite simple: sort the dataframe by date and then drop duplicates by maid only. This keeps the first occurrence of each maid, which is also the earliest occurrence in time since we sorted by date. Then do the groupby as usual.
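A minimal sketch of that approach, using the sample rows from the question:

import pandas as pd

df = pd.DataFrame({
    "maid": ["023f1f5f-37fb-4869-a957-b66b111d808e",
             "023f1f5f-37fb-4869-a957-b66b111d808e",
             "0589b8a3-9d33-4db4-b94a-834cc8f46106",
             "0589b8a3-9d33-4db4-b94a-834cc8f46106",
             "104010f8-5f57-4f7c-8ad9-5fc3ec0f9f39",
             "11947b4a-ccf8-48dc-a6a3-925836b3c520"],
    "date": ["2021-08-14", "2021-08-14", "2021-08-13",
             "2021-08-14", "2021-08-11", "2021-08-13"],
})
df["date"] = pd.to_datetime(df["date"])

# Sort by date, then keep only each maid's first (earliest) row.
ndf = (df.sort_values("date")
         .drop_duplicates("maid")[["maid", "date"]]
         .reset_index(drop=True))
print(ndf)

# Count of first-seen maids per date, if that is the final goal.
print(ndf.groupby("date").size())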

Creating pandas series with all 1 values

I'm trying to generate a pandas timeseries where all values are 1.
start=str(timeseries.index[0].round('S'))
end=str(timeseries.index[-1].round('S'))
empty_series_index = pd.date_range(start=start, end=end, freq='2m')
empty_series_values = 1
empty_series = pd.Series(data=empty_series_values, index=empty_series_index)
print(start,end)
print(empty_series)
The printout reads
2019-09-20 00:30:51+00:00 2019-10-30 23:57:35+00:00
2019-09-30 00:30:51+00:00 1
Why is there only one value, even though it's a 2-minute frequency and the range is more than 10 days long?
In the line:
empty_series_index = pd.date_range(start=start, end=end, freq='2m')
you are using the frequency string '2m', which actually means 2 months.
If you want minutes, you should use '2min' or '2T' instead (see the pandas frequency-alias documentation).
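For example, a quick sketch of the corrected call (the start and end strings below just stand in for the question's rounded index endpoints):

import pandas as pd

start = "2019-09-20 00:30:51+00:00"
end = "2019-10-30 23:57:35+00:00"

# '2min' (or '2T') means every two minutes; '2m' is interpreted as two months.
empty_series_index = pd.date_range(start=start, end=end, freq="2min")
empty_series = pd.Series(data=1, index=empty_series_index)
print(len(empty_series))  # tens of thousands of values instead of just one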
Hope this helps. Let me know if you have any more questions.