Trigger Point of Moving average crossover - pandas

I am trying to define the trigger point at which wt1 (moving average 1) crosses over wt2 (moving average 2) and record it in the column ['side'].
So basically, set side to 1 at the moment wt1 crosses above wt2.
This is the current code I am using, but it doesn't seem to be working:
for i in range(len(df)):
    if df.wt1.iloc[i] > df.wt2.iloc[i] and df.wt1.iloc[i-1] < df.wt2.iloc[i-1]:
        df.side.iloc[1]
If I do the following:
long_signals = (df.wt1 > df.wt2)
df.loc[long_signals, 'side'] = 1
it returns 1 the entire time wt1 is above wt2, which is not what I am trying to do.
The expected outcome is that side is labeled 1 only on the row where wt1 crosses above wt2.
Help would be appreciated!

Use shift in your condition:
long_signals = (df.wt1 > df.wt2) & (df.wt1.shift() <= df.wt2.shift())
df.loc[long_signals, 'side'] = 1
df
If you do not like NaNs in 'side', use df.fillna(0) at the end.
Your first piece of code also works with the following small modification:
for i in range(1, len(df)):  # start at 1 so iloc[i-1] does not wrap around to the last row
    if df.wt1.iloc[i] > df.wt2.iloc[i] and df.wt1.iloc[i-1] <= df.wt2.iloc[i-1]:
        df.loc[i, 'side'] = 1
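For a quick sanity check, here is a minimal, self-contained run of the shift-based solution (the wt1/wt2 values are made up for illustration):
import pandas as pd

df = pd.DataFrame({'wt1': [1.0, 1.2, 1.5, 1.4, 1.1, 1.3],
                   'wt2': [1.3, 1.3, 1.3, 1.3, 1.2, 1.2]})

# 1 only on the row where wt1 moves from <= wt2 to > wt2
long_signals = (df.wt1 > df.wt2) & (df.wt1.shift() <= df.wt2.shift())
df['side'] = 0
df.loc[long_signals, 'side'] = 1
print(df)  # side is 1 only on rows 2 and 5, the two crossover bars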

Related

Using Pandas and Numpy to search for conditions within binned data in 2 data frames

Python newbie here. Here's a simplified example of my problem. I have 2 pandas dataframes.
One dataframe, lightbulb_df, has data on whether a light is on or off, and looks something like this:
Light_Time   Light On?
5790.76      0
5790.76      0
5790.771     1
5790.779     1
5790.779     1
5790.782     0
5790.783     1
5790.783     1
5790.784     0
where the time is in seconds since the start of the day, 1 means the lightbulb is on, and 0 means it is off.
The second dataframe, sensor_df, shows whether or not a sensor detected the lightbulb; it has different time values and a different sampling rate.
Sensor_Time   Sensor Detect?
5790.8        0
5790.9        0
5791.0        1
5791.1        1
5791.2        1
5791.3        0
Both dataframes are very large with 100,000s of rows. The lightbulb will turn on for a few minutes and then turn off, then back on, etc.
Using the .diff function, I compared each row to its predecessor, appended the result to lightbulb_df, and used the 1 / -1 values to build a simplified change-state table of on and off times.
# use .diff() to compare each row to the previous row
lightbulb_df['light_diff'] = lightbulb_df['Light On?'].diff()
# the light-on start times are when
# .diff is greater than 0 (1 - 0 = 1)
light_start = lightbulb_df.loc[lightbulb_df['light_diff'] > 0]
# the light-off start times (first times when the light turns off)
# are when .diff is less than 0 (0 - 1 = -1)
light_off = lightbulb_df.loc[lightbulb_df['light_diff'] < 0]
# and then I can concatenate them to have a single change-state df
# that only captures the rows where the lightbulb changes state
lightbulb_changes = pd.concat((light_start, light_off)).sort_values(by=['Light_Time'])
So I end up with a dataframe of on start times, a dataframe of off start times, and a change state dataframe that looks like this.
Light_Time   Light On?   light_diff
5790.771     1            1
5790.782     0           -1
5790.783     1            1
5790.784     0           -1
Now my goal is to search sensor_df during each of the changed-state windows (above, 5790.771 to 5790.782 and 5790.783 to 5790.784) in 1-second intervals to see whether or not the sensor detected the lightbulb. So, for each of the many light-on periods in the change-state dataframe, I want to end up with the number of seconds the lightbulb was on and the number of seconds the sensor detected it; I'm trying to get the % correctly detected.
Whenever I try to plan this out, I end up with lots of nested for or while loops, which I know will be really slow with 100,000s of rows of data. I thought about using the .cut function to divide the dataframe into 1-second intervals, and made a for loop cycling through the times in the changed-state dataframe with a while loop nested inside to step through the 1-second intervals, but that also seems like it would be really slow.
I know python has a lot of built in functions that could help but I'm having trouble knowing what to google to find the right one.
Any advice would be appreciated.
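One loop-free way to tackle this (a sketch, not a tested solution; it assumes both frames are sorted by time and that the on and off times pair up cleanly, so light_start and light_off have equal length): pair each on-time with the following off-time, then use numpy.searchsorted to count how many sensor samples, and how many positive detections, fall inside each on-interval.
import numpy as np

on_times = light_start['Light_Time'].to_numpy()    # times the light turned on
off_times = light_off['Light_Time'].to_numpy()     # times it turned off
sensor_t = sensor_df['Sensor_Time'].to_numpy()
detects = sensor_df['Sensor Detect?'].to_numpy()

# a cumulative sum of detections lets us count hits in any slice in O(1)
cum_hits = np.concatenate(([0], np.cumsum(detects)))

lo = np.searchsorted(sensor_t, on_times, side='left')    # first sample >= on time
hi = np.searchsorted(sensor_t, off_times, side='right')  # first sample > off time

samples_per_interval = hi - lo                      # sensor readings while light was on
hits_per_interval = cum_hits[hi] - cum_hits[lo]     # readings that detected the light
pct_detected = np.where(samples_per_interval > 0,
                        100.0 * hits_per_interval / samples_per_interval,
                        np.nan)
Since the sensor samples at a fixed rate, the sample counts convert directly to seconds (count times the sampling period).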

Dataframe change column value on if statement and keeps the new value to next row

I wish good health to you and your family.
In my dataframe I have a column 'condition', stored as float.
For every row, the code does some math on the incoming data and, if the result is over a specific amount, it increases the value of 'condition' by 1. Everything works fine with that part, as it should.
I made another column named 'order', which changes its value when 'condition' has a value of 3. Here is the code with which you can see what I mean:
import pandas as pd
import numpy as np

def graph():
    df = pd.DataFrame(np.random.randint(-3, 4, size=(100, 1)), columns=['condition'])
    df['order'] = 0
    df.loc[(df['condition'] == 3) & (df['order'] == 0), 'order'] = df['order'] + 1
    df.loc[(df['condition'] == -3) & (df['order'] == 1), 'order'] = df['order'] - 1
    df.to_csv('copy_bars.csv')

graph()
As you can see, it changes the value in 'order' to 1 when the first condition is met. But it can never change back from 1 to 0 through the second statement, because 'order' is 1 only on that single row; every other row still holds the initial 0 I assigned at the beginning.
How can I modify the code so that, once 'order' is changed to 1, it keeps that value until the second condition fires?
Row   Condition   Order
0     -1          0
1      3          1
2     -1          0
3      2          0
4     -2          0
5     -3          0
6      0          0
Instead of this, I would like the Order column to hold 1 for rows 1 through 4, so that my second condition can trigger.
If I understood correctly, this should be something like what you want. Because it works row by row and depends on two values, it is not easy to vectorize, but probably someone else can do it. Hope it works for you.
order = []
have_found_plus_3 = False
for i, row in df.iterrows():
    if row['condition'] == 3:
        have_found_plus_3 = True
    elif row['condition'] == -3:
        have_found_plus_3 = False
    if have_found_plus_3:
        order.append(1)
    else:
        order.append(0)
df['order'] = order
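For what it's worth, this particular on/off pattern can also be vectorized with a forward fill (a sketch using the sample values from the question): mark the +3 rows as 1 and the -3 rows as 0, leave every other row empty, then forward-fill so each state persists until the next trigger.
import numpy as np
import pandas as pd

df = pd.DataFrame({'condition': [-1, 3, -1, 2, -2, -3, 0]})

state = pd.Series(np.nan, index=df.index)
state[df['condition'] == 3] = 1       # +3 switches the state on
state[df['condition'] == -3] = 0      # -3 switches it off
df['order'] = state.ffill().fillna(0).astype(int)
# order -> [0, 1, 1, 1, 1, 0, 0], matching the expected output above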

How can I optimize my for loop so it can run on a 320,000-row DataFrame?

I think I have a runtime problem.
I want to run this code on a DataFrame of 320,000 rows and 6 columns:
index_data = data["clubid"].index.tolist()
for i in index_data:
    for j in index_data:
        if data["clubid"][i] == data["clubid"][j]:
            if data["win_bool"][i] == 1:
                if (data["startdate"][i] >= data["startdate"][j]) & (
                    data["win_bool"][j] == 1
                ):
                    NW_tot[i] += 1
            else:
                if (data["startdate"][i] >= data["startdate"][j]) & (
                    data["win_bool"][j] == 0
                ):
                    NL_tot[i] += 1
The objective is to determine, for every clubid, the number of wins and the number of losses up to a given match, taking the previous matches into account.
The problem is, I don't get an error, but I never obtain any results either.
When I tried with a smaller DataFrame (data[0:1000]) I got a result in 13 seconds. This is why I think it's a runtime problem: the double loop is quadratic, so 320 times the rows means roughly 100,000 times the work.
I also tried to first use groupby("clubid") and then run my loop inside every group, but I got lost.
Something else that bothers me: I have at least 2 rows with the exact same date/hour, because there are at least two identical dates per match. Because of this I can't use the date as the index.
Could you help me with these issues, please?
As I pointed out in the comment above, I think you can simply sum the vector of win_bool by group. If the dates are sorted this should be equivalent to your loop, correct?
import pandas as pd

dat = pd.DataFrame({
    "win_bool": [0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0],
    "clubid":   [1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2],
    "date":     [1, 2, 1, 2, 3, 4, 5, 1, 2, 1, 2, 3, 4, 5, 6],
    "othercol": ["a", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b", "b"],
})

temp = dat[["clubid", "win_bool"]].groupby("clubid")
NW_tot = temp.sum()
NL_tot = temp.count()
NL_tot = NL_tot["win_bool"] - NW_tot["win_bool"]
If you have duplicate dates that inflate the counts, you could first drop duplicates by date (within groups):
# drop duplicate dates
temp = dat.drop_duplicates(["clubid", "date"])[["clubid", "win_bool"]].groupby("clubid")
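If what is actually wanted is the running count up to each match (what the original double loop computes) rather than the per-club total, a grouped cumulative sum may be closer; a sketch, assuming the rows are sorted by date within each clubid:
# running win/loss counts per club, including the current match
dat = dat.sort_values(["clubid", "date"])
dat["NW_tot"] = dat.groupby("clubid")["win_bool"].cumsum()
dat["NL_tot"] = dat.groupby("clubid").cumcount() + 1 - dat["NW_tot"]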

Comparing timedelta fields

I am looking at file delivery times and can't work out how to compare two timedelta fields using an if statement inside a for loop.
time_diff is the difference between cob_date and last_update_time.
average_diff is based on the average for a particular file.
I want to find the delay for each row.
I have been able to produce a column delay using average_diff - time_diff.
However, when average_diff - time_diff < 0 I just want to return delay = 0, as this is not a delay.
I have made a for loop, but it isn't working and I don't know why. I'm sure the answer is very simple but I can't get there.
test_pv_import_v2['delay2'] = pd.to_timedelta('0')
for index, row in test_pv_import_v2.iterrows():
    if test_pv_import_v2['time_diff'] > test_pv_import_v2['average_diff']:
        test_pv_import_v2['delay2'] = test_pv_import_v2['time_diff'] - test_pv_import_v2['average_diff']
Use Series.where to set a zero Timedelta where the condition is not met:
mask = test_pv_import_v2['time_diff'] > test_pv_import_v2['average_diff']
s = (test_pv_import_v2['time_diff'] - test_pv_import_v2['average_diff'])
test_pv_import_v2['delay2'] = s.where(mask, pd.to_timedelta('0'))
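An equivalent alternative (a sketch, reusing the same column names): compute the difference once and clip negative timedeltas to zero.
diff = test_pv_import_v2['time_diff'] - test_pv_import_v2['average_diff']
test_pv_import_v2['delay2'] = diff.clip(lower=pd.Timedelta(0))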

How to get the lowest low of a series in PineScript

I'm trying to get the lowest low of a series of candles after a condition, but it always returns the last candle of the condition. I tried with min(), lowest(), and a for loop, but it doesn't work. I also tried using blackCandle[] and min(ThreeinARow)/lowest(ThreeinARow); sometimes it returns the last candle and other times it gives me a compilation error.
blackCandle = close < open
ThreeinARow = blackCandle[3] and blackCandle[2] and blackCandle[1]
SL = ThreeinARow ? min(low[1], low[2], low[3]) : na
//#version=4
study("Help (low after 3DownBar)", overlay=true, max_bars_back=100)
blackCandle = close < open
ThreeinARow = blackCandle[3] and blackCandle[2] and blackCandle[1]
bar_ind = barssince(ThreeinARow)
//SL = lowest(max(1, nz(bar_ind))) // the lowest low of a series of candles after the condition
SL = lowest(max(1, nz(bar_ind)+1)) // the lowest low of a series of candles since the condition
plot(SL, style=plot.style_cross, linewidth=3)
bgcolor(ThreeinARow ? color.silver : na)
See also the second solution which is in the commented line
It seems that I was misinterpreting it. Using min() does return the minimum of a series of candles. The detail is that I must pass the specific number of candles to use for the minimum, which, for now, is not a problem for me. In the end, this is how I ended up writing it:
blackCandle = close < open
ThreeinARow = blackCandle[3] and blackCandle[2] and blackCandle[1]
Lowest_Low = if ThreeinARow
    min(low[1], low[2], low[3])
plot(Lowest_Low, color=color.red)