List Comprehension is transposing Dataframe - pandas

I am trying to import a .csv of EMG data as a Dataframe and filter each column of data using a list comprehension. Below is a dummy dataframe.
import numpy as np
import pandas as pd
from scipy.signal import butter, filtfilt

test_array = pd.DataFrame(np.random.normal(0, 2, size=(1000, 6)), columns=['time', 'RF', 'VM', 'TA', 'GM', 'BF'])
b, a = butter(4, [0.05, 0.9], 'bandpass', analog=False)
columns = ['RF', 'VM', 'TA', 'GM', 'BF']
filtered_df = pd.DataFrame([filtfilt(b, a, test_array.loc[:, i]) for i in test_array[columns]])
The code above gives a version of the expected output, but instead of returning filtered_df as a (1000,5) dataframe, it is returning a (5,1000) dataframe.
I've tried using df.transpose() on the back end to fix the orientation, but it seems like there should be a more straightforward way to prevent the transposing in the first place. Is there a way to get the desired output?

This issue comes from how you are building the new dataframe. Just passing in a list from:
[filtfilt(b,a,test_array.loc[:,i]) for i in test_array[columns]]
makes pandas read it in as a dataframe with five rows (one per filtered column) and column names taken from the integer indices of each numpy array. If you instead build your dataframe from a dictionary mapped to each column name, like:
results = [filtfilt(b,a,test_array.loc[:,i]) for i in test_array[columns]]
filtered_df = pd.DataFrame(data = dict(zip(columns, results)))
you get your desired result
RF VM TA GM BF
0 -0.072520 0.025846 0.111571 0.043277 0.024290
1 -2.674829 3.139997 0.285869 -0.162487 3.759851
2 -0.521439 3.481993 0.427854 -1.411966 5.422871
3 -2.719175 5.162347 2.195120 -0.535819 -1.721818
4 0.451544 1.730292 0.930652 -2.017700 -0.926594
.. ... ... ... ... ...
995 -5.240183 -0.625118 2.176452 2.065998 1.561615
996 -3.084039 -0.017626 -0.377022 -1.996366 2.041706
997 -5.122489 1.476979 -3.219335 1.609466 -3.707151
998 -2.072177 -0.870773 0.546386 0.031297 0.247766
999 0.141538 -0.048204 -0.601213 0.499631 0.246530
[1000 rows x 5 columns]
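A shorter route that sidesteps the orientation issue entirely (a sketch of my own, not part of the original answer) is to let DataFrame.apply run filtfilt column-wise, which keeps the (1000, 5) shape and the column names:
# Assumes b, a, columns and test_array are defined as in the question.
filtered_df = test_array[columns].apply(lambda col: filtfilt(b, a, col))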

Related

GroupBy-Apply even for empty DataFrame

I am using groupby-apply to create a new DataFrame from a given DataFrame. But if the given DataFrame is empty, the result looks like the given DataFrame with the group keys, not like the target new DataFrame. So to get the shape of the target new DataFrame I have to use an if..else with a length check, and if the given DataFrame is empty, manually create a DataFrame with the specified columns and indexes.
That rather breaks the flow of the code. Also, if the structure of the target DataFrame changes in the future, I would have to fix the code in two places instead of one.
Is there a way to get the shape of the target DataFrame even if the given DataFrame is empty, with GroupBy only (or at least without if..else)?
Simplified example:
def some_func(df: pd.DataFrame):
    return df.values.sum() + pd.DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]], columns=['new_col1', 'new_col2', 'new_col3'])

df1 = pd.DataFrame([[1, 1], [1, 2], [2, 1], [2, 2]], columns=['col1', 'col2'])
df2 = pd.DataFrame(columns=['col1', 'col2'])

df1_grouped = df1.groupby(['col1'], group_keys=False).apply(lambda df: some_func(df))
df2_grouped = df2.groupby(['col1'], group_keys=False).apply(lambda df: some_func(df))
Result for df1 is ok:
new_col1 new_col2 new_col3
0 6 6 6
1 7 7 7
2 8 8 8
0 8 8 8
1 9 9 9
2 10 10 10
And not ok for df2:
Empty DataFrame
Columns: [col1, col2]
Index: []
If..else to get expected result for df2:
df = df2
if df.empty:
df_grouped = pd.DataFrame(columns=['new_col1', 'new_col2', 'new_col3'])
else:
df_grouped = df.groupby(['col1'], group_keys=False).apply(lambda df: some_func(df))
Gives what I need:
Empty DataFrame
Columns: [new_col1, new_col2, new_col3]
Index: []
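One possible workaround (a sketch on my part, not a confirmed answer) is to reindex the result to the target columns, so the empty case falls out of the same expression without an if..else; the target schema is still spelled out, but only once, inside reindex:
TARGET_COLS = ['new_col1', 'new_col2', 'new_col3']  # assumed target schema
df2_grouped = (df2.groupby(['col1'], group_keys=False)
                  .apply(lambda df: some_func(df))
                  .reindex(columns=TARGET_COLS))
For an empty input this drops col1/col2 and yields an empty frame with the new_col columns; for a non-empty input the reindex is a no-op.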

Joining two data frames on column name and comparing result side by side

I have two data frames which look like df1 and df2 below and I want to create df3 as shown.
I could do this by using a left join to get all the rows into one dataframe and then using numpy.where to see whether they match or not.
I can get what I want, but I feel there should be a more elegant way of doing this that eliminates renaming columns, reshuffling columns in the dataframe, and then using np.where.
Is there a better way to do this?
code to reproduce dataframes:
import pandas as pd
df1=pd.DataFrame({'product':['apples','bananas','oranges','pineapples'],'price':[1,2,3,7],'quantity':[5,7,11,4]})
df2=pd.DataFrame({'product':['apples','bananas','oranges'],'price':[2,2,4],'quantity':[5,7,13]})
df3=pd.DataFrame({'product':['apples','bananas','oranges'],'price_df1':[1,2,3],'price_df2':[2,2,4],'price_match':['No','Yes','No'],'quantity':[5,7,11],'quantity_df2':[5,7,13],'quantity_match':['Yes','Yes','No']})
An elegant way to do your task is to:
generate "partial" DataFrames from each source column,
and then concatenate them.
The first step is to define a function to join 2 source columns and append "match" column:
import numpy as np

def myJoin(s1, s2):
    rv = s1.to_frame().join(s2.to_frame(), how='inner',
                            lsuffix='_df1', rsuffix='_df2')
    rv[s1.name + '_match'] = np.where(rv.iloc[:, 0] == rv.iloc[:, 1], 'Yes', 'No')
    return rv
Then, from df1 and df2, generate 2 auxiliary DataFrames setting product as the index:
wrk1 = df1.set_index('product')
wrk2 = df2.set_index('product')
And the final step is:
result = pd.concat([ myJoin(wrk1[col], wrk2[col]) for col in wrk1.columns ], axis=1)\
.reset_index()
Details:
for col in wrk1.columns - generates names of columns to join.
myJoin(wrk1[col], wrk2[col]) - generates the partial result for this column from
both source DataFrames.
[…] - a list comprehension, collecting the above partial results in a list.
pd.concat(…) - concatenates these partial results into the final result.
reset_index() - converts the index (product names) into a regular column.
For your source data, the result is:
product price_df1 price_df2 price_match quantity_df1 quantity_df2 quantity_match
0 apples 1 2 No 5 5 Yes
1 bananas 2 2 Yes 7 7 Yes
2 oranges 3 4 No 11 13 No
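For comparison, a more compact variant (my own sketch, not part of the answer above) does a single merge with suffixes and adds the match columns in a loop; the column order differs slightly from df3, but the content is the same:
merged = df1.merge(df2, on='product', suffixes=('_df1', '_df2'))
for col in ['price', 'quantity']:
    merged[col + '_match'] = np.where(merged[col + '_df1'] == merged[col + '_df2'], 'Yes', 'No')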

Removing values of a certain object type from a dataframe column in Pandas

I have a pandas dataframe where some values are integers and other values are an array. I simply want to drop all of the rows that contain the array (object datatype I believe) in my "ORIGIN_AIRPORT_ID" column, but I have not been able to figure out how to do so after trying many methods.
Here is what the first 20 rows of my dataframe look like. The values that show up like a list are the ones I want to remove. The dataset is a couple of million rows, so I just need to write code that removes all of the array-like values in that specific dataframe column, if that makes sense.
df = df[df.origin_airport_ID.str.contains(',') == False]
You should consider next time giving us a data sample in text, instead of a figure. It's easier for us to test your example.
Original data:
ITIN_ID ORIGIN_AIRPORT_ID
0 20194146 10397
1 20194147 10397
2 20194148 10397
3 20194149 [10397, 10398, 10399, 10400]
4 20194150 10397
In your case, you can use the pandas pd.to_numeric function:
df['ORIGIN_AIRPORT_ID'] = pd.to_numeric(df['ORIGIN_AIRPORT_ID'], errors='coerce')
It replaces every cell that cannot be converted into a number with NaN (Not a Number), so we get:
ITIN_ID ORIGIN_AIRPORT_ID
0 20194146 10397.0
1 20194147 10397.0
2 20194148 10397.0
3 20194149 NaN
4 20194150 10397.0
To remove these rows now just use .dropna
df = df.dropna().astype('int')
Which results in your desired DataFrame
ITIN_ID ORIGIN_AIRPORT_ID
0 20194146 10397
1 20194147 10397
2 20194148 10397
4 20194150 10397
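If you would rather drop the list-valued rows without coercing the column to numeric, a per-cell type check also works (a sketch, assuming the offending cells really are Python lists or numpy arrays):
# Keep only rows whose ORIGIN_AIRPORT_ID is a plain scalar, not a list/array.
mask = df['ORIGIN_AIRPORT_ID'].apply(lambda v: not isinstance(v, (list, np.ndarray)))
df = df[mask]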

Pandas fill cells in a column with NaN values, derive the value from other cells in the row

I have a dataframe:
a b c
0 1 2 3
1 1 1 1
2 3 7 NaN
3 2 3 5
...
I want to fill column "c" in place (update the values) where the values are NaN, using a machine learning algorithm.
I don't know how to do it in place. Sample code:
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression

df = pd.DataFrame([range(3), [1, 5, np.nan], [2, 2, np.nan], [4, 5, 9], [2, 5, 7]], columns=['a', 'b', 'c'])

x = []
y = []
for row in df.iterrows():
    index, data = row
    if not pd.isnull(data['c']):
        x.append(data[['a', 'b']].tolist())
        y.append(data['c'])

model = LinearRegression()
model.fit(x, y)

# this line does not do it in place
df[~df.c.notnull()].assign(c=lambda x: model.predict(x[['a', 'b']]))
But this gives me a copy of the dataframe. The only option I have left is a for loop; however, I don't want to do that. I think there should be a more pythonic way of doing it with pandas. Can someone please help? Or is there any other way of doing this?
You'll have to do something like:
df.loc[pd.isnull(df['c']), 'c'] = _result of model_
This modifies the dataframe df directly.
This way you first filter the dataframe to keep the slice you want to modify (pd.isnull(df['c'])), then from that slice you select the column you want to modify ('c').
On the right-hand side of the equals sign, it expects an array / list / series with the same number of rows as the filtered dataframe (in your first example, one row).
You may have to adjust depending on what your model returns exactly.
EDIT
You probably need to do something like this:
pred = model.predict(df[['a', 'b']])
df['pred'] = pred
df.loc[pd.isnull(df['c']), 'c'] = df.loc[pd.isnull(df['c']), 'pred']
Note that a significant part of the issue comes from the way you are using scikit learn in your example. You need to pass the whole dataset to the model when you predict.
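Putting the pieces together, a minimal end-to-end sketch of this approach on the sample code's 'c' column (fit on the rows where 'c' is known, predict only the missing ones, and assign in place) would be:
mask = pd.isnull(df['c'])
model = LinearRegression().fit(df.loc[~mask, ['a', 'b']], df.loc[~mask, 'c'])
df.loc[mask, 'c'] = model.predict(df.loc[mask, ['a', 'b']])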
The simplest way is to transpose first, then forward fill / backward fill at your convenience:
df.T.ffill().bfill().T

Python 3: Creating DataFrame with parsed data

The following data has been parsed from a stock API. The dataframe has the headers of each column in the dataset respectively. Is there any way I can link the data to the dataframe, effectively creating a labeled data array/table?
DataFrame
df = pd.DataFrame(columns=['Date','Close','High','Low','Open','Volume'])
DataSet
20140502,36.8700,37.1200,36.2100,36.5900,22454100
20140505,36.9100,37.0500,36.3000,36.6800,13129100
20140506,36.4900,37.1700,36.4800,36.9400,19156000
20140507,34.0700,35.9900,33.6700,35.9900,66062700
20140508,33.9200,34.5700,33.6100,33.8800,30407700
20140509,33.7600,34.1000,33.4100,34.0100,20303400
20140512,34.4500,34.6000,33.8700,33.9900,22520600
20140513,34.4000,34.6900,34.1700,34.4300,12477100
20140514,34.1700,34.6500,33.9800,34.4800,17039000
20140515,33.8000,34.1900,33.4000,34.1800,18879800
20140516,33.4100,33.6600,33.1000,33.6600,18847100
20140519,33.8900,33.9900,33.2800,33.4100,14845700
20140520,33.8700,34.4700,33.6700,33.9900,18596700
20140521,34.3600,34.3900,33.8900,34.0000,13804500
20140522,34.7000,34.8600,34.2600,34.6000,17522800
20140523,35.0200,35.0800,34.5100,34.8500,16294400
20140527,35.1200,35.1300,34.7300,35.0000,13057000
20140528,34.7800,35.1700,34.4200,35.1500,16960500
20140529,34.9000,35.1000,34.6700,34.9000,9780800
20140530,34.6500,34.9300,34.1300,34.9200,13153000
20140602,34.8700,34.9500,34.2800,34.6900,9178900
20140603,34.6500,34.9700,34.5800,34.8000,6557500
20140604,34.7300,34.8300,34.2600,34.4800,9434100
I'm assuming that you are receiving the data as a list of lists. So something like -
vals = [[20140502,36.8700,37.1200,36.2100,36.5900,22454100], [20140505,36.9100,37.0500,36.3000,36.6800,13129100], ...]
In that case, you can populate your dataframe with loc -
for index, val in enumerate(vals):
    df.loc[index] = val
Which will give you -
In [6]: df
Out[6]:
Date Close High Low Open Volume
0 20140502 36.87 37.12 36.21 36.59 22454100
1 20140505 36.91 37.05 36.3 36.68 13129100
...
Here, enumerate gives us the index of the row, so we can use that to populate the dataframe index.
If somehow the data was saved as csv, then you can simply use read_csv -
df = pd.read_csv('data.csv', names=['Date','Close','High','Low','Open','Volume'])
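Alternatively, if the parsed rows are already in a list of lists like vals above, pandas can build the whole frame in one call, which is much faster than assigning row by row (a sketch, assuming that input shape):
df = pd.DataFrame(vals, columns=['Date', 'Close', 'High', 'Low', 'Open', 'Volume'])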