Replace NaN values in all levels of a Pandas MultiIndex - pandas

After reading in an Excel sheet with a MultiIndex, I am getting np.nan appearing in the index because some of the values are 'N/A' and pd.read_excel thinks it's a good idea to convert them. However, I want to keep them as 'N/A' to preserve the multi-index. I thought it would be easy to change them back using MultiIndex.fillna, but I get this error:
import numpy as np
import pandas as pd

index = pd.MultiIndex(levels=[[u'foo', u'bar'], [u'one', np.nan]],
                      codes=[[0, 0, 1, 1], [0, 1, 0, 1]],
                      names=[u'first', u'second'])
df = pd.DataFrame(index=index, columns=['A', 'B'])
df
df.index.fillna("N/A")
Output:
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-17-09e14dcdc74f> in <module>
----> 1 df.index.fillna("N/A")
/anaconda3/envs/torch/lib/python3.7/site-packages/pandas/core/indexes/multi.py in fillna(self, value, downcast)
1456 fillna is not implemented for MultiIndex
1457 """
-> 1458 raise NotImplementedError("isna is not defined for MultiIndex")
1459
1460 @Appender(_index_shared_docs["dropna"])
NotImplementedError: isna is not defined for MultiIndex
Update:
Code updated to reflect pandas 1.0.2. Prior to version 0.24.0 the codes attribute of pd.MultiIndex was called labels. Also, the traceback message changed from "isnull is not defined" to "isna is not defined", as shown above.
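(As a side note, the conversion can also be avoided at read time by disabling the default NA parsing; a sketch, with the file name and index columns made up:)
df = pd.read_excel("data.xlsx", index_col=[0, 1], keep_default_na=False)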

The accepted solution did not work for me either. It still left NA values in the index even though inspecting the df.index.levels individually did not show NA values.
Jorge's solution pointed me in the right direction but also wasn't quite right for my case. Here is my approach, including handling of the single Index case as discussed in the comments of the accepted answer.
if isinstance(df.index, pd.MultiIndex):
    df.index = pd.MultiIndex.from_frame(
        df.index.to_frame().fillna(my_fillna_value)
    )
else:
    df.index = df.index.fillna(my_fillna_value)
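Applied to the example index from the question, this yields 'N/A' in the second level (a quick check, reusing the same data):
import numpy as np
import pandas as pd

index = pd.MultiIndex(levels=[['foo', 'bar'], ['one', np.nan]],
                      codes=[[0, 0, 1, 1], [0, 1, 0, 1]],
                      names=['first', 'second'])
df = pd.DataFrame(index=index, columns=['A', 'B'])
df.index = pd.MultiIndex.from_frame(df.index.to_frame().fillna("N/A"))
df.index.get_level_values('second')  # ['one', 'N/A', 'one', 'N/A']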

Use set_levels
df.index.set_levels([l.fillna('N/A') for l in df.index.levels], inplace=True)
df
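In newer pandas versions the inplace argument of set_levels has been deprecated and then removed, so assign the result back instead (same idea, just without inplace):
df.index = df.index.set_levels([l.fillna('N/A') for l in df.index.levels])
df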

The current solution didn't work for me when I had multi-level columns. What I did, and what worked for me, was the following:
df.columns = pd.MultiIndex.from_frame(df.columns.to_frame().fillna(''))

Related

Dropping same rows in two pandas dataframe in python

I want to get the uncommon rows of two pandas dataframes. The two dataframes are df1 and wildone_df. When I check their type, both of them are "pandas.core.frame.DataFrame", but when I use the code below to omit their intersection:
o = pd.concat([wildone_df,df1]).drop_duplicates(subset=None, keep='first', inplace=False)
I face the following error:
TypeError Traceback (most recent call last)
<ipython-input-36-4e158c0eeb97> in <module>
----> 1 o = pd.concat([wildone_df,df1]).drop_duplicates(subset=None, keep='first', inplace=False)
5 frames
/usr/local/lib/python3.8/dist-packages/pandas/core/algorithms.py in factorize_array(values, na_sentinel, size_hint, na_value, mask)
561
562 table = hash_klass(size_hint or len(values))
--> 563 uniques, codes = table.factorize(
564 values, na_sentinel=na_sentinel, na_value=na_value, mask=mask
565 )
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.factorize()
pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable._unique()
TypeError: unhashable type: 'numpy.ndarray'
How can I solve this issue?!
Either use inplace=True or re-assign your dataframe when using pandas.DataFrame.drop_duplicates or any other built-in function that has an inplace parameter. You can't use them both at the same time.
Returns (DataFrame or None): DataFrame with duplicates removed, or None if inplace=True.
Try this:
o = pd.concat([wildone_df, df1]).drop_duplicates() #keep="first" by default
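Equivalently, the in-place form would be (a sketch; note that with inplace=True the method returns None, so don't assign its result):
o = pd.concat([wildone_df, df1])
o.drop_duplicates(inplace=True)  # modifies o in place, returns None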
Try this:
merged_df = merged_df.loc[:,~merged_df.columns.duplicated()].copy()
See this post for more info

Is dropna=True in pandas groupby useful?

I am not certain if this question is appropriate here, and apologies in advance if it is not.
I am a pandas maintainer, and recently I've been working on fixing bugs in pandas groupby when used with dropna=True and transform for the 1.5 release. For example, in pandas 1.4.2,
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 1, np.nan], 'b': [2, 3, 4]})
print(df.groupby('a', dropna=True).transform('sum'))
produces the incorrect (in particular, the last row) output
b
0 5
1 5
2 5
While working on this, I've been wondering how useful the dropna argument is in groupby. For aggregations (e.g. df.groupby('a').sum()) and filters (e.g. df.groupby('a').head(2)), it seems to me it's always possible to drop the offending rows prior to the groupby. In addition to this, in my use of pandas if I have null values in the groupers, then I want them in the groupby result. For transformations, where the resulting index should match that of the input, the value is instead filled with null. For the above code block, the output should be
b
0 5.0
1 5.0
2 NaN
But I can't imagine this result ever being useful. In case it is, it also is not too difficult to accomplish:
result = df.groupby('a', dropna=False).transform('sum')
result.loc[df['a'].isnull()] = np.nan
If we were able to deprecate and then remove the dropna argument to groupby (i.e. groupby always behaves as if dropna=False), then this would help simplify a good part of the groupby code.
So I'd like to ask if there are examples where dropna=True and the operation might be otherwise hard to accomplish.
Thanks!
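For reference, here is a small sketch of the aggregation equivalence mentioned above, using the frame from the example; dropping the rows with a null group key first gives the same result as dropna=True:
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 1, np.nan], 'b': [2, 3, 4]})
print(df.groupby('a', dropna=True).sum())                        # b == 5 for a == 1.0
print(df.dropna(subset=['a']).groupby('a', dropna=False).sum())  # identical result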

Convert pandas to dask code and it errors out

I have pandas code which works perfectly.
import pandas as pd

courses_df = pd.DataFrame(
    [
        ["Jay", "MS"],
        ["Jay", "Music"],
        ["Dorsey", "Music"],
        ["Dorsey", "Piano"],
        ["Mark", "MS"],
    ],
    columns=["Name", "Course"],
)
pandas_df_json = (
    courses_df.groupby(["Name"])
    .apply(lambda x: x.drop(columns="Name").to_json(orient="records"))
    .reset_index(name="courses_json")
)
But when I convert the dataframe to Dask and try the same operation.
from dask import dataframe as dd

df = dd.from_pandas(courses_df, npartitions=2)
df.groupby(["Name"]).apply(lambda x: x.to_json(orient="records")).reset_index(
    name="courses_json"
).compute()
And the error I get is:
UserWarning: `meta` is not specified, inferred from partial data. Please provide `meta` if the result is unexpected.
Before: .apply(func)
After: .apply(func, meta={'x': 'f8', 'y': 'f8'}) for dataframe result
or: .apply(func, meta=('x', 'f8')) for series result
df.groupby(["Name"]).apply(lambda x: x.to_json(orient="records")).reset_index(
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [37], in <module>
1 from dask import dataframe as dd
3 df = dd.from_pandas(courses_df, npartitions=2)
----> 4 df.groupby(["Name"]).apply(lambda x: x.drop(columns="Name").to_json(orient="records")).reset_index(
5 name="courses_json"
6 ).compute()
TypeError: _Frame.reset_index() got an unexpected keyword argument 'name'
My expected output from Dask and pandas should be the same, that is:
Name courses_json
0 Dorsey [{"Course":"Music"},{"Course":"Piano"}]
1 Jay [{"Course":"MS"},{"Course":"Music"}]
2 Mark [{"Course":"MS"}]
How do I achieve this in Dask?
My try so far
from dask import dataframe as dd

df = dd.from_pandas(courses_df, npartitions=2)
df.groupby(["Name"]).apply(
    lambda x: x.drop(columns="Name").to_json(orient="records")
).compute()
UserWarning: `meta` is not specified, inferred from partial data. Please provide `meta` if the result is unexpected.
Before: .apply(func)
After: .apply(func, meta={'x': 'f8', 'y': 'f8'}) for dataframe result
or: .apply(func, meta=('x', 'f8')) for series result
df.groupby(["Name"]).apply(
Out[57]:
Name
Dorsey [{"Course":"Piano"},{"Course":"Music"}]
Jay [{"Course":"MS"},{"Course":"Music"}]
Mark [{"Course":"MS"}]
dtype: object
I want to pass in a meta argument and also want the second column to have a meaningful name like courses_json.
For the meta warning, Dask expects you to specify the column datatypes of the result. It's optional, but if you do not specify it, Dask may infer faulty datatypes: one partition could, for example, be inferred as an int type and another as a float. This is particularly the case for sparse datasets. See the docs page for more details:
https://docs.dask.org/en/stable/generated/dask.dataframe.DataFrame.apply.html
This should solve the warning:
from dask import dataframe as dd

df = dd.from_pandas(courses_df, npartitions=2)
new_df = df.groupby(["Name"]).apply(
    lambda x: x.drop(columns="Name").to_json(orient="records"),
    meta=("Name", "O")
).to_frame()

# rename columns
new_df.columns = ["courses_json"]

# use a numeric int index instead of Name, as in the given example
new_df = new_df.reset_index()

new_df.compute()
The result of your computation is a Dask Series, not a DataFrame, which is why you need to use NumPy dtypes here (https://www.w3schools.com/python/numpy/numpy_data_types.asp). A Series consists of an index and values, so you cannot directly name the second column without converting it back to a dataframe using the .to_frame() method.
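A slightly shorter variant (a sketch, assuming the name passed in meta carries through to the resulting series, as it does in recent Dask releases) names the series courses_json up front so the manual column rename is not needed:
from dask import dataframe as dd

df = dd.from_pandas(courses_df, npartitions=2)
result = df.groupby(["Name"]).apply(
    lambda x: x.drop(columns="Name").to_json(orient="records"),
    meta=("courses_json", "object"),
)
result.to_frame().reset_index().compute()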

Pandas dataframe - multiplying DF's elementwise on same dates - something wrong?

I've been banging my head over this; I just cannot seem to get it right and I don't understand what the problem is... So I tried to do the following:
#!/usr/bin/env python
import matplotlib.pyplot as plt
import numpy as np
import quandl
btc_usd_price_kraken = quandl.get('BCHARTS/KRAKENUSD', returns="pandas")
btc_usd_price_kraken.replace(0, np.nan, inplace=True)
plt.plot(btc_usd_price_kraken.index, btc_usd_price_kraken['Weighted Price'])
plt.grid(True)
plt.title("btc_usd_price_kraken")
plt.show()
eur_usd_price = quandl.get('BUNDESBANK/BBEX3_D_USD_EUR_BB_AC_000', returns="pandas")
eur_dkk_price = quandl.get('ECB/EURDKK', returns="pandas")
usd_dkk_price = eur_dkk_price / eur_usd_price
btc_dkk = btc_usd_price_kraken['Weighted Price'] * usd_dkk_price
plt.plot(btc_dkk.index, btc_dkk) # WHY IS THIS [4785 rows x 1340 columns] ???
plt.grid(True)
plt.title("Historic value of 1 BTC converted to DKK")
plt.show()
As you can see from the comment, I don't understand why I get a result (which I'm trying to plot) with size [4785 rows x 1340 columns].
Anyway, the code results in a lot of error messages, e.g.:
> Traceback (most recent call last): File
> "/usr/lib/python3.6/site-packages/matplotlib/backends/backend_qt5agg.py",
> line 197, in __draw_idle_agg
> FigureCanvasAgg.draw(self) File "/usr/lib/python3.6/site-packages/matplotlib/backends/backend_agg.py",
...
> return _from_ordinalf(x, tz) File "/usr/lib/python3.6/site-packages/matplotlib/dates.py", line 254, in
> _from_ordinalf
> dt = datetime.datetime.fromordinal(ix).replace(tzinfo=UTC) ValueError: ordinal must be >= 1
I read some posts, so I know that pandas, when using multiply, automatically does elementwise multiplication only on pairs where the date is the same (so if one DF has a time series for e.g. 1999-2017 and the other only has e.g. 2012-2015, then only the common dates within 2012-2015 will be multiplied, i.e. the intersection of the data sets). So the problem is about understanding the error message(s) (and the solution); it all comes down to calculating the btc_dkk variable and plotting it (which is the price of Bitcoin in the currency DKK)...
This should work:
usd_dkk_price.multiply(btc_usd_price_kraken['Weighted Price'], axis='index').dropna()
You are multiplying on the columns, not the index (this happens because you are multiplying a dataframe by a series; if you had selected the column in usd_dkk_price, this would not have happened). Afterwards, just drop the rows with NaN.
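For illustration, a minimal self-contained sketch of that alignment behaviour (the numbers are made up):
import pandas as pd

dates = pd.date_range("2021-01-01", periods=3)
rates = pd.DataFrame({"Value": [7.4, 7.5, 7.6]}, index=dates)  # plays the role of usd_dkk_price
prices = pd.Series([100.0, 110.0, 120.0], index=dates)         # plays the role of the 'Weighted Price' column

(rates * prices).shape                          # (3, 4): the Series is aligned on the columns, all NaN
rates.multiply(prices, axis='index').shape      # (3, 1): aligned on the index, the intended result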

using pd.DataFrame.apply to create multiple columns

My first question here!
I'm having some trouble figuring out what I'm doing wrong here, trying to append columns to an existing pd.DataFrame object. Specifically, my original dataframe has n columns, and I want to use apply to append an additional 2n columns to it. The problem is that doing this via apply() doesn't work: if I try to append more than n columns, it falls over. This doesn't make sense to me, and I was hoping somebody could either shed some light on why I'm seeing this behaviour, or suggest a better approach.
For example,
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(10, 2))

def this_works(x):
    return 5 * x

def this_fails(x):
    return np.append(5 * x, 5 * x)

df.apply(this_works, 1)  # Two columns of output, as expected
df.apply(this_fails, 1)  # Unexpected failure...
Any ideas? I know there are other ways to create the data columns, this approach just seemed very natural to me and I'm quite confused by the behaviour.
SOLVED! CT Zhu's solution below takes care of this; my error arose from not properly returning a pd.Series object in the above.
Are you trying to do a few different calculations on your df and put the resulting vectors together in one larger DataFrame, like in this example?:
In [39]:
print df
0 1
0 0.718003 0.241216
1 0.580015 0.981128
2 0.477645 0.463892
3 0.948728 0.653823
4 0.056659 0.366104
5 0.273700 0.062131
6 0.151237 0.479318
7 0.425353 0.076771
8 0.317731 0.029182
9 0.543537 0.589783
In [40]:
print df.apply(lambda x: pd.Series(np.hstack((x*5, x*6))), axis=1)
0 1 2 3
0 3.590014 1.206081 4.308017 1.447297
1 2.900074 4.905639 3.480088 5.886767
2 2.388223 2.319461 2.865867 2.783353
3 4.743640 3.269114 5.692369 3.922937
4 0.283293 1.830520 0.339951 2.196624
5 1.368502 0.310656 1.642203 0.372787
6 0.756187 2.396592 0.907424 2.875910
7 2.126764 0.383853 2.552117 0.460624
8 1.588656 0.145909 1.906387 0.175091
9 2.717685 2.948917 3.261222 3.538701
FYI, in this trivial case you can just do 5 * df!
I think the issue here is that np.append flattens the Series:
In [11]: np.append(df[0], df[0])
Out[11]:
array([ 0.33145275, 0.14964056, 0.86268119, 0.17311983, 0.29618537,
0.48831228, 0.64937305, 0.03353709, 0.42883925, 0.99592229,
0.33145275, 0.14964056, 0.86268119, 0.17311983, 0.29618537,
0.48831228, 0.64937305, 0.03353709, 0.42883925, 0.99592229])
What you want is for it to create four columns, isn't it? The axis=1 means that you are doing this row-wise (i.e. x is the row, which is a Series)...
In general you want apply to return either:
a single value, or
a Series (with unique index).
That said, I kinda thought the following might work (to get four columns):
In [21]: df.apply((lambda x: pd.concat([x[0] * 5, x[0] * 5], axis=1)), axis=1)
TypeError: ('cannot concatenate a non-NDFrame object', u'occurred at index 0')
In [22]: df.apply(lambda x: np.array([1, 2, 3, 4]), axis=1)
ValueError: Shape of passed values is (10,), indices imply (10, 2)
In [23]: df.apply(lambda x: pd.Series([1, 2, 3, 4]), axis=1) # works
Maybe I expected the first to raise about non-unique index... but I was surprised that the second failed.
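For completeness, a small sketch of the fix applied to the failing function from the question (returning a Series so apply can expand it into columns; the 0-3 column labels are just the defaults):
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(10, 2))

def this_now_works(x):
    # wrap the flattened array in a Series so apply expands it into columns
    return pd.Series(np.append(5 * x, 5 * x))

df.apply(this_now_works, axis=1)  # 10 rows x 4 columns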