Equivalent of R's which() in pandas

How do I get the column of the min in the example below, not the actual number?
In R I would do:
which.min(abs(_quantiles - mean(_quantiles)))
In pandas I tried (did not work):
_quantiles.which(min(abs(_quantiles - mean(_quantiles))))

You could do it this way: take np.min of the DataFrame's values, use it to build a boolean mask, then drop the columns that don't have at least one non-NaN value:
In [2]:
df = pd.DataFrame({'a':np.random.randn(5), 'b':np.random.randn(5)})
df
Out[2]:
a b
0 -0.860548 -2.427571
1 0.136942 1.020901
2 -1.262078 -1.122940
3 -1.290127 -1.031050
4 1.227465 1.027870
In [15]:
df[df==np.min(df.values)].dropna(axis=1, thresh=1).columns
Out[15]:
Index(['b'], dtype='object')
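As an aside (a sketch added here, not part of the original answer), a shorter route to the same column name is to take each column's minimum and then ask which column holds the smallest one:
In [16]:
df.min().idxmin()
Out[16]:
'b'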

idxmin and idxmax exist, but there is no general which as far as I can see. For this case:
(_quantiles - _quantiles.mean()).abs().idxmin()
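A minimal runnable sketch of that idiom (assuming _quantiles is a pandas Series; the data here is made up):
import pandas as pd

_quantiles = pd.Series([1.0, 4.0, 6.0, 10.0], index=['q25', 'q50', 'q75', 'q100'])
# label of the value closest to the mean, like R's which.min
closest = (_quantiles - _quantiles.mean()).abs().idxmin()
print(closest)  # 'q75': 6.0 is closest to the mean of 5.25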


Change NaN to None in Pandas dataframe

I am trying to replace NaN with None in a pandas dataframe. Using df.where(df.notnull(), None) used to work for me.
Here is the thread for this method.
Use None instead of np.nan for null values in pandas DataFrame
When I tried the same method on another dataframe, it failed.
The new dataframe contains the row A NaN B C D E; its printout looks like this:
Unnamed: 1 Unnamed: 2 Unnamed: 3 Unnamed: 4 Unnamed: 5 Unnamed: 6
0 A NaN B C D E
Even the code that worked before fails when run against this new dataframe.
I am wondering whether it is because the cell format in the Excel file has to be a certain type.
Any suggestions?
This always works for me:
df = df.replace({np.nan: None})
You can check this related question for more details.
The problem is that I did not assign the result back. The form that caused the problem was:
df.where(df.notnull(), None)
Written like this instead, there is no problem:
df = df.where(df.notnull(), None)
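df.where returns a new DataFrame rather than modifying df in place, which is why the result must be assigned back. A minimal sketch of the difference, with a made-up frame (exact behavior can vary by pandas version):
import pandas as pd
import numpy as np

df = pd.DataFrame({'a': [1.0, np.nan]})
df.where(df.notnull(), None)        # returns a new frame; df itself is unchanged
df = df.where(df.notnull(), None)   # assigning back keeps the result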
To do it over just one column:
df.col_name.replace({np.nan: None}, inplace=True)
This is not as easy as it looks.
1. NaN is the value set for any empty cell when reading a file with pandas.read_csv().
2. None is the value set for any cell that is NULL when reading from a database with pandas.read_sql().
import pandas as pd
import numpy as np

df = pd.read_csv('file.csv')
df = df.replace({np.nan: None})       # NaN -> None; affected columns become object dtype
df['prog'] = df['prog'].astype(str)   # cast the column back to the dtype you need
print(df)
If you hit a dtype compatibility issue, it is because replacing np.nan with None turns the affected columns into object dtype.
So in that case, first replace np.nan with None and then cast each column to the required dtype.
file.csv
column names: batch, prog, name
the 'prog' column is empty
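A minimal sketch of the dtype effect described above, using a made-up frame since file.csv is not available:
import pandas as pd
import numpy as np

df = pd.DataFrame({'batch': [1, 2], 'prog': [np.nan, np.nan]})
print(df['prog'].dtype)               # float64: empty CSV cells come in as NaN
df = df.replace({np.nan: None})
print(df['prog'].dtype)               # object: None is a plain Python object
df['prog'] = df['prog'].astype(str)   # cast back to the dtype you need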

pandas operations over multiple axes

How can I do operations over multiple columns in one go in pandas?
For example, I would like to calculate df[['a','b']].mean(level=0) or df[['a','b']].kurtosis(level=0) (I need the level=0 as it's a multi-indexed dataframe).
But I would like to get one single number, doing the calculation over multiple axes in one go, as if A and B were merged into one single column (or series).
In numpy this is possible, I believe, with axis=(0, 1), but I'm unsure how it can be achieved in pandas.
Speed is very important, so apply or iterating is not a solution.
The expected result would be as follows:
np.random.seed([3, 1415])
df = pd.DataFrame(
np.random.rand(10, 2),
pd.MultiIndex.from_product([list('ab'), range(5)]),
list('AB')
)
df
Out[76]:
A B
a 0 0.444939 0.407554
1 0.460148 0.465239
2 0.462691 0.016545
3 0.850445 0.817744
4 0.777962 0.757983
b 0 0.934829 0.831104
1 0.879891 0.926879
2 0.721535 0.117642
3 0.145906 0.199844
4 0.437564 0.100702
expected result:
df.groupby(level=0).agg(['mean']).mean(axis=1)
Out[78]:
a 0.546125
b 0.529589
dtype: float64
But it needs to be achieved in one single calculation, not as a mean of means: that happens to work for the mean here because every group has the same number of values, but for other statistics it may not produce the same result as computing over all values in one go (for example, the kurtosis of the kurtosis is not the kurtosis computed in one go).
Consider the sample dataframe df
np.random.seed([3, 1415])
df = pd.DataFrame(
np.random.rand(10, 2),
pd.MultiIndex.from_product([list('ab'), range(5)]),
list('AB')
)
df
A B
a 0 0.444939 0.407554
1 0.460148 0.465239
2 0.462691 0.016545
3 0.850445 0.817744
4 0.777962 0.757983
b 0 0.934829 0.831104
1 0.879891 0.926879
2 0.721535 0.117642
3 0.145906 0.199844
4 0.437564 0.100702
Typical Solution
Use groupby and agg
df.groupby(level=0).agg(['mean', pd.Series.kurt])
A B
mean kurt mean kurt
a 0.599237 -2.885262 0.493013 0.018225
b 0.623945 -0.900488 0.435234 -3.105328
Solve Different
pd.concat([
df.mean(level=0),
df.kurt(level=0)
], axis=1, keys=['Mean', 'Kurt']).swaplevel(1, 0, 1).sort_index(1)
A B
Kurt Mean Kurt Mean
a -2.885262 0.599237 0.018225 0.493013
b -0.900488 0.623945 -3.105328 0.435234
This seems to work:
df.stack().mean(level=0)
Out[146]:
a 0.546125
b 0.529589
dtype: float64
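Stacking merges A and B into one Series first, so any reduction afterwards runs over all ten values per group in one go. The same idea extends to other statistics; a sketch using the groupby form (the level= keyword on reductions was deprecated and later removed in newer pandas versions):
stacked = df.stack()                           # one Series; index levels (outer, inner, column)
stacked.groupby(level=0).mean()                # same result as above
stacked.groupby(level=0).agg(pd.Series.kurt)   # kurtosis over all 10 values per group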

frequency table for all columns in pandas

I want to run a frequency table on each of the variables in my df.
def frequency_table(x):
    return pd.crosstab(index=x, columns="count")

for column in df:
    return frequency_table(column)
I got an error of 'ValueError: If using all scalar values, you must pass an index'
How can I fix this?
Thank you!
You aren't passing any data. You are just passing a column name.
for column in df:
    print(column)  # will print column names as strings
Try this instead:
ctabs = {}
for column in df:
    ctabs[column] = frequency_table(df[column])
Then you can look at each crosstab by using its column name as a key into the ctabs dictionary.
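A minimal runnable sketch of that pattern, keeping the asker's crosstab helper and using made-up sample data:
import pandas as pd

def frequency_table(x):
    return pd.crosstab(index=x, columns="count")

df = pd.DataFrame({'color': ['red', 'blue', 'red'],
                   'size': ['S', 'S', 'M']})
ctabs = {column: frequency_table(df[column]) for column in df}
print(ctabs['color'])   # crosstab for one column, looked up by name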
for column in df:
    print(df[column].value_counts())
For example:
import pandas as pd
my_series = pd.DataFrame(pd.Series([1,2,2,3,3,3, "fred", 1.8, 1.8]))
my_series[0].value_counts()
will generate output like the below:
3 3
1.8 2
2 2
fred 1
1 1
Name: 0, dtype: int64

Assigning index column to empty pandas dataframe

I am creating an empty dataframe that I then want to add data to, one row at a time. I want to index on the first column, 'customer_ID'.
I have this:
In[1]: df = pd.DataFrame(columns = ['customer_ID','a','b','c'],index=['customer_ID'])
In[2]: df
Out[3]:
customer_ID a b c
customer_ID NaN NaN NaN NaN
So there is already a row of NaN that I don't want.
Can I point the index to the first column without adding a row of data?
The answer, I think, as hinted at by @JD Long, is to set the index in a separate instruction:
In[1]: df = pd.DataFrame(columns = ['customer_ID','a','b','c'])
In[2]: df.set_index('customer_ID',inplace = True)
In[3]: df
Out[3]:
Empty DataFrame
Columns: [customer_ID, a, b, c]
Index: []
I can then add rows:
In[4]: id='x123'
In[5]: df.loc[id]=[id,4,5,6]
In[6]: df
Out[7]:
customer_ID a b c
x123 x123 4.0 5.0 6.0
Yes... and you can dropna at any time if you are so inclined:
df = df.set_index('customer_ID').dropna()
df
That is because you don't have any rows in your dataframe when you first create it.
df = pd.DataFrame({'customer_ID': ['2'], 'a': ['1'], 'b': ['A'], 'c': ['1']})
df = df.set_index('customer_ID', drop=False)
df
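As a design note (a sketch added here, not from the original answers): growing a frame one row at a time with df.loc is slow for many rows; a common alternative is to collect the rows first and build the frame once:
import pandas as pd

rows = [{'customer_ID': 'x123', 'a': 4, 'b': 5, 'c': 6},
        {'customer_ID': 'y456', 'a': 7, 'b': 8, 'c': 9}]   # hypothetical data
df = pd.DataFrame(rows).set_index('customer_ID')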

Replace None with NaN in pandas dataframe

I have table x:
website
0 http://www.google.com/
1 http://www.yahoo.com
2 None
I want to replace python None with pandas NaN. I tried:
x.replace(to_replace=None, value=np.nan)
But I got:
TypeError: 'regex' must be a string or a compiled regular expression or a list or dict of strings or regular expressions, you passed a 'bool'
How should I go about it?
You can use DataFrame.fillna or Series.fillna which will replace the Python object None, not the string 'None'.
import pandas as pd
import numpy as np
For dataframe:
df = df.fillna(value=np.nan)
For column or series:
df.mycol.fillna(value=np.nan, inplace=True)
Here's another option. Wrapping None in a list sidesteps the TypeError above, because when to_replace is None, replace expects the regex argument to carry the pattern instead, and the default regex=False is a bool:
df.replace(to_replace=[None], value=np.nan, inplace=True)
The following line replaces the string 'None' (note: the string, not the None object) with NaN:
df['column'].replace('None', np.nan, inplace=True)
If you use df.replace([None], np.nan, inplace=True), it changes all datetime columns with missing data to object dtype. So you may end up with broken queries unless you change them back to datetime, which can be taxing depending on the size of your data.
If you want to use this method, you can first identify the object dtype fields in your df and then replace the None:
obj_columns = list(df.select_dtypes(include=['object']).columns.values)
df[obj_columns] = df[obj_columns].replace([None], np.nan)
This solution is straightforward because it can replace the value across all the columns easily.
You can use a dict:
import pandas as pd
import numpy as np
df = pd.DataFrame([[None, None], [None, None]])
print(df)
0 1
0 None None
1 None None
# replacing
df = df.replace({None: np.nan})
print(df)
0 1
0 NaN NaN
1 NaN NaN
It's an old question, but here is a solution for multiple columns:
values = {'col_A': 0, 'col_B': 0, 'col_C': 0, 'col_D': 0}
df.fillna(value=values, inplace=True)
For more options, check the docs:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html
DataFrame['Col_name'].replace("None", np.nan, inplace=True)  # matches the string "None", not the None object