I'm working with a pandas dataframe that has multiple groups:
date | group | brand | calculated_value
_______________________________
5 | 1 | x | 1
6 | 1 | x | NaN
7 | 1 | x | NaN
5 | 2 | y | 1
6 | 2 | y | NaN
Within each group and brand, I have initialized the first date's calculated_value. I am iterating with nested for loops so that I can compute and assign the calculated_value for the next sequential date occurrence (within each group-brand).
The groupby()/apply() paradigm doesn't work for me because, for e.g. the third row above, the function passed to apply() looks at the row above and finds NaN; apply() does not perform a sequential update.
After calculating the value, I am attempting to assign it to the cell in question, using the recommended syntax to avoid the SettingWithCopy problem:
df.loc[ (df.date == 5) & (df.group == 1) & (df.brand == 'x'), "calculated_value" ] = calc_value
However, this fails to set the cell, and it remains NaN. Why is that? I've tried searching many terms, but I was not able to find an answer relevant to my case.
I have confirmed that each of the for loops is incrementing properly, and that I'm addressing the correct row in each iteration.
EDIT: I discovered the problem. When I pass the cells to calculate_function as individual arguments, each one is passed as a single-value Series, so the function returns a single-value Series, which cannot be assigned to the NaN cell. No error was thrown on the mismatched assignment, and the for loop kept running.
I fixed this by passing
calculate_function(arg1.values[0], arg2.values[0], ...)
Extracting the value array and taking its first element seems inelegant and brittle, but the default behavior is quirky compared to what I'm used to in R.
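For reference, here is a minimal sketch of that extraction step (the filter mirrors the sample data above; calculate_function stands in for the real calculation). .iloc[0] and .item() are slightly more direct alternatives to .values[0], and .item() raises unless the selection is exactly one cell, so an accidental multi-row match fails loudly:

prev = df.loc[(df.date == 5) & (df.group == 1) & (df.brand == 'x'), 'calculated_value']

prev.values[0]   # what the EDIT above uses
prev.iloc[0]     # positional equivalent
prev.item()      # raises unless exactly one cell was selected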
You can use groupby().idxmin() to identify the row with the first date in each group of group, brand:
s = df.groupby(['group', 'brand']).date.idxmin()
df.loc[s,'calculated_value'] = 1
Output:
date group brand calculated_value
0 5 1 x 1.0
1 6 1 x NaN
2 7 1 x NaN
3 5 2 y 1.0
4 6 2 y NaN
I would do transform with min:
s=df.groupby(['group','brand']).date.transform('min')
df['calculated_value']=df.date.eq(s).astype(int)
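For reference, a self-contained sketch of this approach on the sample data from the question; note it writes 0 rather than NaN into the non-minimum rows:

import pandas as pd

df = pd.DataFrame({'date': [5, 6, 7, 5, 6],
                   'group': [1, 1, 1, 2, 2],
                   'brand': ['x', 'x', 'x', 'y', 'y']})

# Broadcast each (group, brand) pair's minimum date back onto its rows,
# then flag the rows whose date equals that minimum.
s = df.groupby(['group', 'brand']).date.transform('min')
df['calculated_value'] = df.date.eq(s).astype(int)
print(df)
#    date  group brand  calculated_value
# 0     5      1     x                 1
# 1     6      1     x                 0
# 2     7      1     x                 0
# 3     5      2     y                 1
# 4     6      2     y                 0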
I have looked at the COALESCE documentation and it mentions the typical case of using COALESCE to make default/situational parameters, e.g.
COALESCE(discount, 5)
which evaluates to 5 if discount is NULL.
However, I have seen it used where COALESCE actually evaluated all the arguments, despite the documentation explicitly saying it stops evaluating arguments after the first non-null argument.
Here is an example similar to what I encountered, say you have a table like this:
id | wind | rain | snow
1 | null | 2 | 3
2 | 5 | null | 6
3 | null | 7 | 2
Then you run
SELECT *
FROM weather_table
WHERE
COALESCE(wind, rain, snow) >= 5
You would expect this to only select rows with wind >= 5, right? NO! It selects all rows where either wind, rain, or snow is >= 5. Which in this case is 2 rows, specifically these two:
2 | 5 | null | 6
3 | null | 7 | 2
Honestly, pretty cool functionality, but it really irks me that I couldn't find any example of this online or in the documentation.
Can anyone tell me what's going on? Am I missing something?
You would expect this to only select rows with wind >= 5, right?
No, I expect it to select rows based on what the COALESCE function returns.
The COALESCE function returns the value of its first non-null argument. You had COALESCE(wind, rain, snow). The first row had (null, 2, 3), so COALESCE returned 2. The second row had (5, null, 6), so it returned 5. The third row had (null, 7, 2), so it returned 7.
The last two rows meet the condition >=5, so 2 rows are retrieved.
Notice that the value for snow was never returned in your example, because either wind or rain always had a value.
After writing out the question so clearly, I realized what was going on myself. But I want to answer it here in case anyone else is confused.
It turns out the COALESCE function is evaluated once for each row, which I suppose I could have known. Then it all makes sense.
For each row it takes the first non-null of wind, rain, and snow, and it is that single value that gets compared against >= 5.
Notably though, if my table had been like this:
id | wind | rain | snow
1 | 0 | 2 | 3
2 | 5 | 0 | 6
3 | 0 | 7 | 2
The query would have behaved the way I originally expected (making COALESCE useless here), and would have picked only this one row:
2 | 5 | 0 | 6
equivalent to SELECT * FROM weather_table WHERE wind >= 5.
The behavior only shows up when some of the columns are NULL (0 <> NULL).
I'm trying to label data in the original dataframe based on multiple boolean conditions. This is easy enough when labeling based on one or two conditions, but as I require more conditions the code becomes difficult to manage. The obvious solution seems to be breaking the code down into intermediate copies, but that causes chained-assignment errors. Here is one example of the issue...
This is a simplified version of what my data looks like:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.array([['ABC',1,3,3,4], ['std',0,0,2,4], ['std',2,1,2,4], ['std',4,4,2,4], ['std',2,6,2,6]]),
                  columns=['Note', 'Na', 'Mg', 'Si', 'S'])
df
Note Na Mg Si S
0 ABC 1 3 3 4
1 std 0 0 2 4
2 std 2 1 2 4
3 std 4 4 2 4
4 std 2 6 2 6
Standards (std) appear throughout the dataframe. I would like to create a label when the instrument fails. This occurs in the data when:
String condition met (Note = standard/std)
Na>0 & Mg>0
It falls outside of a calculated range for more than 2 elements.
For requirement 3 - Here is an example of a range:
maxMin=pd.DataFrame(np.array([['Max',3,3,3,7], ['Min',1,1,2,2]]), columns=['Note', 'Na','Mg','Si','S'])
maxMin
Note Na Mg Si S
0 Max 3 3 3 7
1 Min 1 1 2 2
Calculating the out-of-bounds standards:
elements=['Na','Mg','Si','S']
std = df[(df['Note'].str.contains('std|standard')) & (df['Na']>0) & (df['Mg']>0)]
df.loc[(std[elements].lt(maxMin.loc[1, :])|std[elements].gt(maxMin.loc[0, :]).select_dtypes(include=['bool'])).sum(axis=1)>2]
Note Na Mg Si S
3 std 4 4 2 4
Now, I would like to label this datapoint within the original dataframe. Desired result:
Note Na Mg Si S Error
0 ABC 1 3 3 4 False
1 std 0 0 2 4 False
2 std 2 1 2 4 False
3 std 4 4 2 4 True
4 std 2 6 2 6 False
I've tried things like:
df['Error'].loc[std.loc[(std[elements].lt(maxMin.loc[1, :])|std[elements].gt(maxMin.loc[0, :]).select_dtypes(include=['bool'])).sum(axis=1)>5].index.values.copy()]=True
That unfortunately causes a chained-assignment error (SettingWithCopyWarning).
How would you accomplish this without creating a chained-assignment error? Most books/tutorials revolve around creating one long expression, but as I dive deeper, I feel there might be a simpler solution. Any input would be appreciated.
I figured out a solution that works for me.
The solution was to use .index.values to create an array of the index labels that pass the boolean conditions. That array can then be used to edit the original dataframe.
##These two conditions can probably be combined
condition1=df[(df['Note'].str.contains('std|standard'))&(df['Na']>.01)&(df['Mg']>.01)]
##where condition1 is greater/less than the bounds of the known value.
##provides array where condition is true
OutofBounds=condition1.loc[(condition1[elements].lt(maxMin.loc[1, :])|condition1[elements].gt(maxMin.loc[0, :]).select_dtypes(include=['bool'])).sum(axis=1)>5].index.values
OutofBounds
out:array([ 3], dtype=int64)
Now I can pass the array into the original dataframe:
df.loc[OutofBounds, 'Error']=True
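For comparison, here is a sketch of a fully vectorized alternative: build the masks directly on df and assign the Error column in one step, so there are no intermediate copies and no SettingWithCopy warning. Two assumptions: the sample data is rebuilt with numeric columns (the np.array constructor above turns every value into a string), and the out-of-bounds threshold is taken as 2 or more, which is what reproduces the desired output on this sample.

import pandas as pd

# Sample data rebuilt with numeric columns (assumption, see above).
df = pd.DataFrame({'Note': ['ABC', 'std', 'std', 'std', 'std'],
                   'Na': [1, 0, 2, 4, 2],
                   'Mg': [3, 0, 1, 4, 6],
                   'Si': [3, 2, 2, 2, 2],
                   'S':  [4, 4, 4, 4, 6]})
maxMin = pd.DataFrame({'Note': ['Max', 'Min'],
                       'Na': [3, 1], 'Mg': [3, 1], 'Si': [3, 2], 'S': [7, 2]})
elements = ['Na', 'Mg', 'Si', 'S']

# Rows that are standards with Na and Mg present.
is_std = df['Note'].str.contains('std|standard') & (df['Na'] > 0) & (df['Mg'] > 0)

# Count how many elements fall outside the Min/Max bounds in each row.
out_low = df[elements].lt(maxMin.loc[1, elements])
out_high = df[elements].gt(maxMin.loc[0, elements])
n_out = (out_low | out_high).sum(axis=1)

# Assumed threshold: flag a failure when 2 or more elements are out of bounds.
df['Error'] = is_std & (n_out >= 2)
print(df)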
Let's say I have a dataframe:
df = pd.DataFrame({'A': [6,5,9,6,2]})
I also have an array/series
ser = pd.Series([5,6,7])
How can I insert this series into the existing df as a new column, but starting at a specific index, while "padding" the missing indexes with NaN (I think pandas does this automatically)?
I.e., pseudocode:
insert ser into df at index 2 as column 'B'
Example output
  | A | B
----------
1 | 6 | NaN
2 | 5 | 5
3 | 9 | 6
4 | 6 | 7
5 | 2 | NaN
Assuming that the start index value is in the startInd variable:
startInd = 2
use the following code:
df['B'] = pd.Series(data=ser.values, index=df.index[df.index >= startInd]
.to_series().iloc[:ser.size])
Details:
df.index[df.index >= startInd] - returns the fragment of df.index starting from the "start value" (for now, up to the end).
.to_series() - converts it to a Series (in order to be able to "slice" it using iloc, in a moment).
.iloc[:ser.size] - takes as many values as needed.
index=... - uses what we got in the previous step as the index of the created Series.
pd.Series(data=ser.values, ... - creates a Series, the source of the data which will be saved in a new column of df (in a moment).
df['B'] = - saves the above data in a new column (only in rows whose index values match the above index; other rows will be set to NaN).
There is a subtle but unavoidable difference from your expected result:
As some values are NaN, the type of the new column is coerced to float.
So the result is:
A B
1 6 NaN
2 5 5.0
3 9 6.0
4 6 7.0
5 2 NaN
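A slightly shorter variant of the same idea, for reference: slice the index directly instead of going through to_series(). This is a sketch assuming the same df, ser and startInd as above (with the default 0-based index):

import pandas as pd

df = pd.DataFrame({'A': [6, 5, 9, 6, 2]})
ser = pd.Series([5, 6, 7])
startInd = 2

# Take the part of df's index at or after startInd, keep as many labels as
# ser has values, and let pandas align on that index; other rows become NaN.
target_index = df.index[df.index >= startInd][:ser.size]
df['B'] = pd.Series(ser.values, index=target_index)
print(df)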
I have some data that I want to display in a scatter chart. I have the following two dimensions:
Dimension 1: each record in the table - say a unique id for each row. So the number of dots should equal the number of records.
Dimension 2: a combination of 2 columns, tp and vc. The color of each dot is based on these 2 columns.
tp vc
1 a 1
2 b 2
3 c 1
So there will be dots of 3 colors based on the above tp and vc combinations. Then there are 3 expressions representing the X, Y, and Size of each dot. I am not sure how to configure the dimensions to achieve this.
Thanks
You will need a calculated dimension, which in your case is the concatenation expression defined as =tp & vc.
This will be your single dimension; your x, y, and size expressions make up the remaining requirements for the chart.
This will give you three colors, one for each unique record combination, and they will be labeled a1, b2, and c1.
id tp vc x y size
1 | a | 1 | 3 | 5 | 7
2 | b | 2 | 1 | 2 | 10
3 | c | 1 | 9 | 5 | 5
I have a super strange problem which I spent the last hour trying to solve, but with no success. It is even more strange since I can't replicate it on a small scale.
I have a large DataFrame (150,000 entries). I took out a subset of it and did some manipulation. The subset was saved as a different variable, x.
x is smaller than the df, but its index is in the same range as the df. I'm now trying to assign x back to the DataFrame replacing values in the same column:
rep_Callers['true_vpID'] = x.true_vpID
This inserts the values from x into the right places in df, but instead of keeping the df.true_vpID values for rows that are not in x, it fills them with NaNs. So I tried a different approach:
df.ix[x.index,'true_vpID'] = x.true_vpID
But instead of filling the x values into the right places in df, df.true_vpID gets filled with only the first value of x! I changed the first value of x several times to make sure this is indeed what is happening, and it is. I tried to replicate it on a small scale, but the problem doesn't reproduce:
from random import random
from numpy import ones
from pandas import DataFrame, Series

df = DataFrame({'a': ones(5), 'b': range(5)})
a b
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
z = Series([random() for i in range(5)], index=range(5))
0 0.812561
1 0.862109
2 0.031268
3 0.575634
4 0.760752
df.ix[z.index[[1,3]],'b'] = z[[1,3]]
a b
0 1 0.000000
1 1 0.812561
2 1 2.000000
3 1 0.575634
4 1 4.000000
I really tried it all, need some new suggestions...
Try using df.update(updated_df_or_series)
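A minimal sketch of what that looks like on a toy frame (the column name and index values are placeholders, not the asker's real data). update() aligns on the index and only overwrites the matching cells, so the remaining rows keep their values instead of becoming NaN:

import pandas as pd

df = pd.DataFrame({'a': [1.0] * 5, 'b': [0.0, 1.0, 2.0, 3.0, 4.0]})

# x stands in for the manipulated subset: same column name, but an index
# that covers only some of df's rows.
x = pd.Series([0.81, 0.57], index=[1, 3], name='b')

# Overwrite rows 1 and 3 of column 'b'; every other cell is left untouched.
df.update(x)
print(df)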
Also, using a simple example, you can modify a DataFrame by doing an index query and modifying the resulting object:
df_1
a b
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
df_2 = df_1.ix[3:5]
df_2.b = df_2.b + 2
df_2
a b
3 1 5
4 1 6
df_1
a b
0 1 0
1 1 1
2 1 2
3 1 5
4 1 6