I want to perform the operation [(b - a) / a] * 100 on a DataFrame (i.e., the percentage change from a reference value), where a is my first column and b is every other column of the DataFrame.
I tried the steps below and they work, but they're very messy!
import pandas as pd

df = pd.DataFrame({'obj1': [1, 3, 4],
                   'obj2': [6, 9, 10],
                   'obj3': [2, 6, 8]},
                  index=['circle', 'triangle', 'rectangle'])
# first, subtract the first column from all columns, since it is the reference point: b - a
df_aftersub = df.sub(pd.Series(df.iloc[:, [0]].squeeze()), axis='index')
# second, divide the result by the first column to get the change: (b - a) / a
df_change = df_aftersub.div(pd.Series(df.iloc[:, [0]].squeeze()), axis='index')
# third, multiply by 100 to get the percent change: (b - a) / a * 100
df_final = df_change * 100
df_final
Output needed:
obj1 obj2 obj3
circle 0.0 500.0 100.0
triangle 0.0 200.0 100.0
rectangle 0.0 150.0 100.0
How can I do this in fewer lines of code and, if possible, with fewer temporary DataFrames (and ideally keep it simple to understand)?
First subtract the first column using DataFrame.sub, then divide by it using DataFrame.div, and finally multiply by 100:
s = df.iloc[:, 0]
df_final = df.sub(s, axis=0).div(s, axis=0).mul(100)
print(df_final)
obj1 obj2 obj3
circle 0.0 500.0 100.0
triangle 0.0 200.0 100.0
rectangle 0.0 150.0 100.0
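If you prefer a single chained expression, (b - a) / a is the same as b / a - 1, so this sketch (using the same df as above) gives the identical result:
# divide every column by the first column, subtract 1, scale to percent
s = df.iloc[:, 0]
df_final = df.div(s, axis=0).sub(1).mul(100)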
I'm a beginner with Pandas. I've got two dataframes df1 and df2 of three columns each, labelled by some index.
I would like to get a third dataframe whose entries are
min( df1-df2, 1-df1-df2 )
for each column, while preserving the index.
I don't know how to do this on all three columns at once. If I try e.g. np.min( df1-df2, 1-df1-df2 ) I get TypeError: 'DataFrame' objects are mutable, thus they cannot be hashed, whereas min( df1-df2, 1-df1+df2 ) gives ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
I can't use apply because I've got more than one dataframe. Basically, I would like to use something like subtract, but with the ability to define my own function.
Example: consider these two dataframes
df0 = pd.DataFrame( [[0.1,0.2,0.3], [0.3, 0.1, 0.2], [0.1, 0.3, 0.9]], index=[2,1,3], columns=['px', 'py', 'pz'] )
In [4]: df0
Out[4]:
px py pz
2 0.1 0.2 0.3
1 0.3 0.1 0.2
3 0.1 0.3 0.9
and
df1 = pd.DataFrame( [[0.9,0.1,0.9], [0.1,0.2,0.1], [0.3,0.1,0.8]], index=[3,1,2], columns=['px', 'py', 'pz'])
px py pz
3 0.9 0.1 0.9
1 0.1 0.2 0.1
2 0.3 0.1 0.8
my desired output is a new dataframe df, made up of three columns 'px', 'py', 'pz', whose entries are:
for j in range(1, 4):
    dfx[j-1] = min(df0['px'][j] - df1['px'][j], 1 - df0['px'][j] + df1['px'][j])
for df['px'], and similarly for 'py' and 'pz'.
px py pz
1 0.2 -0.1 0.1
2 -0.2 0.1 -0.5
3 -0.8 0.2 0.0
I hope it's clear now! Thanks in advance!
pandas is smart enough to match up the columns and index values for you in a vectorized way. If you're looping a dataframe, you're probably doing it wrong.
m1 = df0 - df1
m2 = 1 - (df0 + df1)
# Take the values from m1 where they're less than
# The corresponding value in m2. Otherwise, take m2:
out = m1[m1.lt(m2)].combine_first(m2)
# Another method: Combine our two calculated frames,
# groupby the index, and take the minimum.
out = pd.concat([m1, m2]).groupby(level=0).min()
print(out)
# Output:
px py pz
1 0.2 -0.1 0.1
2 -0.2 0.1 -0.5
3 -0.8 0.2 -0.8
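If the alignment bookkeeping isn't a concern (m1 and m2 above already share the same index and columns), np.minimum is another option; a minimal sketch:
import numpy as np
# element-wise minimum of the two intermediate frames
out = np.minimum(m1, m2)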
Some would say this should be two separate questions, but they're interrelated, so I'm writing them both here.
1. Making multi-indexed columns
I have three data frames:
data_large = pd.DataFrame({"name":["a", "b", "c"], "sell":[10, 60, 50], "buy":[20, 30, 40]})
data_mini = pd.DataFrame({"name":["b", "c", "d"], "sell":[60, 20, 10], "buy":[30, 50, 40]})
data_topix = pd.DataFrame({"name":["a", "b", "c"], "sell":[10, 80, 0], "buy":[70, 30, 40]})
But first of all, I want to make their columns multi-indexed, with the product name (e.g. Nikkei225Large) as the top level.
This is what I tried, but it doesn't work as expected: name ends up under the Nikkei225Large level.
iterables = [['Nikkei225Large'], ['name', 'buy', 'sell']]
index_large = pd.MultiIndex.from_product(iterables, names=['product', 'sell_buy'])
data_large.columns = index_large
2. Joining multiple DataFrames with multi-indexed columns, e.g. using reduce
Next, I want to outer-join the three data frames on the name column, keeping the multi-indexed columns.
For now, I just join them using reduce like below, but I want to do it with multi-indexed columns.
from functools import reduce
dfs = {0: data_large, 1: data_mini, 2: data_topix}
def agg_df(dfList):
    df_agged = reduce(lambda left, right: pd.merge(left, right,
                                                   left_index=True, right_index=True,
                                                   on='name',
                                                   how='outer'), dfList)
    return df_agged
df_final = agg_df(dfs.values())
Any help would be appreciated!
IIUC, you can do this using pd.concat with the keys parameter:
df_out = pd.concat([dfi.set_index('name') for dfi in [data_large, data_mini, data_topix]],
keys=['Nikkei225Large', 'Nikkei225Mini', 'Topix'], axis=1)\
.rename_axis(index=['Name'], columns=['product','buy_sell'])
Output:
product Nikkei225Large Nikkei225Mini Topix
buy_sell sell buy sell buy sell buy
Name
a 10.0 20.0 NaN NaN 10.0 70.0
b 60.0 30.0 60.0 30.0 80.0 30.0
c 50.0 40.0 20.0 50.0 0.0 40.0
d NaN NaN 10.0 40.0 NaN NaN
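For part 1 on its own, the problem is that name should not become part of the MultiIndex. A hedged sketch of one fix, assuming name is meant to move into the row index first:
# move 'name' into the row index, then give the remaining columns the two-level MultiIndex
data_large = data_large.set_index('name')
data_large.columns = pd.MultiIndex.from_product(
    [['Nikkei225Large'], data_large.columns],
    names=['product', 'buy_sell'])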
I currently have a DataFrame with n numeric columns plus three columns holding datetime and string values. I want to convert all the columns but those three to numeric values, but I'm not sure what the best method is. Below is a sample (simplified) DataFrame:
df2 = pd.DataFrame(np.array([[1, '5-4-2016', 10], [1, '5-5-2016', 5],
                             [2, '5-4-2016', 10], [2, '5-5-2016', 7],
                             [5, '5-4-2016', 8]]),
                   columns=['ID', 'Date', 'Number'])
I tried something like the following, but it was unsuccessful.
exclude = ['Date']
df = df.drop(exclude, 1).apply(pd.to_numeric, errors='coerce').combine_first(df)
The expected output (essentially, the datatype of the 'ID' and 'Number' fields changes to float while 'Date' stays the same):
ID Date Number
0 1.0 5-4-2016 10.0
1 1.0 5-5-2016 5.0
2 2.0 5-4-2016 10.0
3 2.0 5-5-2016 7.0
4 5.0 5-4-2016 8.0
Have you tried Series.astype()?
df['ID'] = df['ID'].astype(float)
df['Number'] = df['Number'].astype(float)
or for all columns besides date:
for col in [x for x in df.columns if x != 'Date']:
    df[col] = df[col].astype(float)
or, assigning the result back in one step:
df[[x for x in df.columns if x != 'Date']] = df[[x for x in df.columns if x != 'Date']].transform(lambda x: x.astype(float), axis=1)
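A dict passed to DataFrame.astype does the same in one call (a sketch, assuming only 'Date' is excluded):
# astype accepts a column -> dtype mapping
df = df.astype({c: float for c in df.columns if c != 'Date'})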
You need to call to_numeric with downcast='float' if you want the result to be float; otherwise it will be int. You also need to join the result back to the non-converted columns of the original df2:
df2[exclude].join(df2.drop(exclude, 1).apply(pd.to_numeric, downcast='float', errors='coerce'))
Out[1815]:
Date ID Number
0 5-4-2016 1.0 10.0
1 5-5-2016 1.0 5.0
2 5-4-2016 2.0 10.0
3 5-5-2016 2.0 7.0
4 5-4-2016 5.0 8.0
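A variant of the same idea that keeps the original column order, assigning through a copy (a sketch, still assuming exclude = ['Date']):
# convert every column except the excluded ones, leaving column order untouched
out = df2.copy()
cols = out.columns.difference(exclude)
out[cols] = out[cols].apply(pd.to_numeric, downcast='float', errors='coerce')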
I have thousands of series (rows of a DataFrame) that I need to apply qcut to. Periodically there will be a series (row) that has fewer values than the desired number of quantiles (say, 1 value vs. 2 quantiles):
>>> s = pd.Series([5, np.nan, np.nan])
When I apply .quantile() to it, it has no problem producing 2 quantiles (with the same boundary value):
>>> s.quantile([0.5, 1])
0.5 5.0
1.0 5.0
dtype: float64
But when I apply .qcut() with an integer number of quantiles, an error is thrown:
>>> pd.qcut(s, 2)
...
ValueError: Bin edges must be unique: array([ 5., 5., 5.]).
You can drop duplicate edges by setting the 'duplicates' kwarg
Even after I set the duplicates argument, it still fails:
>>> pd.qcut(s, 2, duplicates='drop')
....
IndexError: index 0 is out of bounds for axis 0 with size 0
How do I make this work? (And equivalently, pd.qcut(s, [0, 0.5, 1], duplicates='drop') also doesn't work.)
The desired output is to have the 5.0 assigned to a single bin while the NaNs are preserved:
0 (4.999, 5.000]
1 NaN
2 NaN
OK, this is a workaround that might work for you:
pd.qcut(s, len(s.dropna()), duplicates='drop')
Out[655]:
0 (4.999, 5.0]
1 NaN
2 NaN
dtype: category
Categories (1, interval[float64]): [(4.999, 5.0]]
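Since the original problem involves thousands of rows, one hedged way to generalise this row by row (the helper name and the cap on the number of bins are my own sketch, not a pandas API):
import numpy as np
import pandas as pd

def qcut_row(row, q=2):
    # cap the number of bins at the count of non-NaN values so short rows don't raise;
    # rows with no values at all are returned unchanged
    n = row.dropna().size
    if n == 0:
        return row
    return pd.qcut(row, min(q, n), duplicates='drop')

df = pd.DataFrame([[5, np.nan, np.nan], [1, 2, 3]])
binned = df.apply(qcut_row, axis=1)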
You can try filling your object/numeric columns with an appropriate fill value ('null' for strings and 0 for numerics):
# fill numeric cols with 0
numeric_columns = df.select_dtypes(include=['number']).columns
df[numeric_columns] = df[numeric_columns].fillna(0)
# fill object cols with 'null'
string_columns = df.select_dtypes(include=['object']).columns
df[string_columns] = df[string_columns].fillna('null')
Use Python 3.5 instead of Python 2.7. This worked for me.
I've tried reading similar questions before asking, but I'm still stumped. Any help is appreciated.
Input:
I have a pandas DataFrame with a column labeled 'radon' whose values are in the range [0.5, 13.65].
Output:
I'd like to create a new column where every radon value equal to 0.5 is changed to a random value between 0.1 and 0.5.
I tried this:
df['radon_adj'] = np.where(df['radon']==0.5, random.uniform(0, 0.5), df.radon)
However, I get the same random number for all values of 0.5.
I tried this as well. It creates random numbers, but the else statement does not copy the original values:
df['radon_adj'] = df['radon'].apply(lambda x: random.uniform(0, 0.5) if x == 0.5 else df.radon)
One way would be to create all the random numbers you might need before you select them using where:
>>> df = pd.DataFrame({"radon": [0.5, 0.6, 0.5, 2, 4, 13]})
>>> df["radon_adj"] = df["radon"].where(df["radon"] != 0.5, np.random.uniform(0.1, 0.5, len(df)))
>>> df
radon radon_adj
0 0.5 0.428039
1 0.6 0.600000
2 0.5 0.385021
3 2.0 2.000000
4 4.0 4.000000
5 13.0 13.000000
You could be a little smarter and only generate as many random numbers as you're actually going to need, but it probably took longer for me to type this sentence than you'd save. (It takes me 9 ms to generate ~1M numbers.)
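If you do want to draw only as many values as there are 0.5s, a boolean mask keeps it short (a sketch on the same df):
# start from the original values and overwrite only the 0.5 rows,
# drawing exactly mask.sum() random numbers
mask = df["radon"] == 0.5
df["radon_adj"] = df["radon"]
df.loc[mask, "radon_adj"] = np.random.uniform(0.1, 0.5, mask.sum())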
Your apply approach would work too if you used x instead of df.radon:
>>> df['radon_adj'] = df['radon'].apply(lambda x: random.uniform(0.1, 0.5) if x == 0.5 else x)
>>> df
radon radon_adj
0 0.5 0.242991
1 0.6 0.600000
2 0.5 0.271968
3 2.0 2.000000
4 4.0 4.000000
5 13.0 13.000000
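np.where also works, as long as you pass an array of random values (one per row) rather than a single scalar, which is what caused the repeated number in the original attempt; a minimal sketch:
# np.where picks from the random array only where the condition holds
df["radon_adj"] = np.where(df["radon"] == 0.5,
                           np.random.uniform(0.1, 0.5, len(df)),
                           df["radon"])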