Dataframe index with isclose function

I have a dataframe with numerical values between 0 and 1, and I am trying to compute simple summary statistics manually. When I use a boolean comparison I can get the index, but when I try to use math.isclose the function does not work and gives an error.
For example:
import pandas as pd
df1 = pd.DataFrame({'col1': [0, 0.05, 0.74, 0.76, 1],
                    'col2': [0, 0.05, 0.5, 0.75, 1],
                    'x1': [1, 2, 3, 4, 5],
                    'x2': [5, 6, 7, 8, 9]})
result75 = df1.index[round(df1['col2'],2) == 0.75].tolist()
value75 = df1['x2'][result75]
print(value75.mean())
This gives the correct result, but occasionally the result is NaN, so I tried:
result75 = df1.index[math.isclose(round(df1['col2'],2), 0.75, abs_tol = 0.011)].tolist()
value75 = df1['x2'][result75]
print(value75.mean())
This results in the following error message:
TypeError: cannot convert the series to <class 'float'>
Both comparisons should produce booleans, so I'm not sure what is going wrong here...

This works:
rows_meeting_condition = df1[(df1['col2'] > 0.74) & (df1['col2'] < 0.76)]
print(rows_meeting_condition['x2'].mean())
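The underlying issue is that math.isclose expects scalar floats, so passing a whole Series raises the TypeError above. As a hedged alternative (not part of the original post), NumPy's vectorized np.isclose can build the boolean mask directly:
import numpy as np

# np.isclose compares element-wise and returns a boolean array,
# so it can index the dataframe like any other boolean mask
mask = np.isclose(df1['col2'], 0.75, atol=0.011)
result75 = df1.index[mask].tolist()
value75 = df1['x2'][result75]
print(value75.mean())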

Related

Drop rows that don't have a float value in a column

I have this df:
My task is to find results with these conditions:
[(df.neighbourhood_group == 'Manhattan') & (df.room_type == 'Entire home/apt') & (df.price.between(150.0, 175.0))]
But this is not working. The error message says:
TypeError: '>=' not supported between instances of 'str' and 'float'
This is because somewhere in the price column I have the value Private room written as a string.
How can I write a piece of code that tells to keep only float values and drop all the others?
NOTE
These are not working:
df = df[df['price'].apply(lambda x: type(x) in [float])]
clean['price']=df['price'].str.replace('Private room', '0.0')
clean.price = clean.price.astype(float)
df.select_dtypes(exclude=['str'])
This is the CSV data.
One way to achieve it:
df['price'] = df.apply(lambda r: r['price'] if type(r['price']) == float else np.nan, axis=1)
df.dropna(inplace=True)
This replaces any non-float value with np.nan, and dropna then removes those rows.
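A more idiomatic alternative (my own sketch, not part of the original answer) is pd.to_numeric with errors='coerce', which turns any non-numeric string such as 'Private room' into NaN in one vectorized step:
df['price'] = pd.to_numeric(df['price'], errors='coerce')  # non-numeric strings become NaN
df = df.dropna(subset=['price'])                           # then drop those rows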

Creating new columns with pandas .loc

My dataset looks like this:
ex = pd.DataFrame.from_dict({'grp1': np.random.choice('A B'.split(), 20), 'grp2': np.random.choice([1, 2], 20), 'var1': np.random.rand(20), 'var2': np.random.randint(20)})
I want to create new columns with the next value within the groups, but the following code results in SettingWithCopyWarning:
ex[['next_var1', 'next_var2']] = ex.groupby(['grp1', 'grp2'])[['var1', 'var2']].shift(-1)
Therefore I tried to use .loc:
ex.loc[:, ['next_var1', 'next_var2']] = ex.groupby(['grp1', 'grp2'])[['var1', 'var2']].shift(-1)
However, it results in error:
KeyError: "None of [Index(['next_var1', 'next_var2'], dtype='object')] are in the [columns]"
What's wrong with the .loc usage?
With .loc you can't create new columns this way. But you could create them first and then assign:
ex['next_var1'], ex['next_var2'] = None, None
ex.loc[:, ['next_var1', 'next_var2']] = ex.groupby(['grp1', 'grp2'])[['var1', 'var2']].shift(-1).values
However you could also do:
ex[['next_var1', 'next_var2']] = ex.groupby(['grp1', 'grp2'])[['var1', 'var2']].shift(-1)
This is what you tried, and it works fine with Python 3.7 and pandas 0.25.
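Another option (a sketch of mine, not from the original answer) is to build the shifted columns separately and join them back on the index, which avoids assigning into a possibly-copied frame altogether:
# shift within each group, prefix the column names, and join back on the index
shifted = ex.groupby(['grp1', 'grp2'])[['var1', 'var2']].shift(-1).add_prefix('next_')
ex = ex.join(shifted)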

pyspark PandasUDFType.SCALAR conversion of Row arrays goes wrong

I want to use PandasUDFType.SCALAR to operate on the Row arrays like below:
from pyspark.sql.functions import pandas_udf, PandasUDFType
from pyspark.sql.types import ArrayType, IntegerType

df = spark.createDataFrame([([1, 2, 3, 2],), ([4, 5, 5, 4],)], ['data'])

@pandas_udf(ArrayType(IntegerType()), PandasUDFType.SCALAR)
def s(x):
    z = x.apply(lambda xx: xx * 2)
    return z

df.select(s(df.data)).show()
but it went wrong:
pyarrow.lib.ArrowInvalid: trying to convert NumPy type int32 but got int64
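One likely cause (an assumption on my part, not confirmed in the original) is a dtype mismatch: ArrayType(IntegerType()) declares 32-bit integers, while the pandas arithmetic produces int64 arrays. A sketch of a fix is to declare LongType for the return type instead:
from pyspark.sql.functions import pandas_udf, PandasUDFType
from pyspark.sql.types import ArrayType, LongType

# LongType (int64) matches the dtype the pandas multiplication actually produces
@pandas_udf(ArrayType(LongType()), PandasUDFType.SCALAR)
def s(x):
    return x.apply(lambda xx: xx * 2)

df.select(s(df.data)).show()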

TypeError: ufunc add cannot use operands with types dtype('<M8[ns]') and dtype('<M8[ns]')

I am trying to fit an ARIMA model to some data. For this, I used autocorrelation_plot() with my time series, but it generates the error in the title.
I have an attribute table composed, among other things, of Date and Time fields.
I extracted them (after converting the attribute table to a NumPy array), combined them into a datetime value, and appended them all to a list:
O,A = [],[]
dt = datetime.strptime(dt1, "%Y/%m/%d %H:%M")
A.append(dt)
I then created a time series and printed it to check the results:
data2 = pd.Series(A, O)
print data2
The results were satisfactory, until I tried to auto-correlate.
Auto-correlation command:
autocorrelation_plot(data2)
After this command, it returns:
TypeError: ufunc add cannot use operands with types dtype('M8[ns]') and dtype('M8[ns]')
I guess it's due to the conversion of the datetime.strptime result to a NumPy type?
I tried to follow some suggestions from previous questions (index.to_pydatetime(), dtype, M8[ns] error ...), in vain.
Minimal reproducible example:
from pandas import datetime
from pandas import DataFrame
import pandas as pd
from matplotlib import pyplot as plt
from pandas.tools.plotting import autocorrelation_plot
arr = arcpy.da.TableToNumPyArray(inTable, ("PROVINCE", "ZONE_CODE", "MEAN", "Datetime", "Time"))
arr_length = len(arr)
j = 1
O, A = [], []
while j <= 55:  # I have 55 provinces
    i = 0
    while i < arr_length:
        if arr[i][1] == j:
            O.append(arr[i][2])
            c = str(arr[i][3])
            d = str(c[0:4] + "/" + c[5:7] + "/" + c[8:10])
            t = str(arr[i][4])
            if t == "10":
                dt1 = str(d + " 10:00")
            else:
                dt1 = str(d + " 14:00")
            dt = datetime.strptime(dt1, "%Y/%m/%d %H:%M")
            A.append(dt)
        i = i + 1
    data2 = pd.Series(A, O)
    print data2
    autocorrelation_plot(data2)
    del A[:]
    del O[:]
    j += 1
Screenshot of the results (image omitted).
I used this to solve my issue:
import matplotlib.dates as mpl_dates
df.reset_index(inplace=True)
df['Date']=df['Date'].apply(mpl_dates.date2num)
df = df.astype(float)
I found a solution; it may look barbaric, but it works!
I just rebuilt the pd.Series with the arguments swapped, so the numeric values (O) are the data and the datetimes (A) are the index:
data2 = pd.Series(O, A)
autocorrelation_plot(pd.Series(data2))
plt.show()
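For reference, a minimal self-contained sketch (with made-up data standing in for the MEAN values O and the datetimes A) of the working construction, numeric values as the data and datetimes as the index:
import numpy as np
import pandas as pd
from pandas.plotting import autocorrelation_plot  # modern import path
from matplotlib import pyplot as plt

dates = pd.date_range("2020-01-01 10:00", periods=50, freq="12H")  # stand-in for A
values = np.random.rand(50)                                        # stand-in for O
data2 = pd.Series(values, index=dates)  # values as data, datetimes as index
autocorrelation_plot(data2)
plt.show()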

Why am I returned an object when using std() in Pandas?

The print of the average of the spreads comes out grouped and calculated correctly. Why do I get the following returned as the result for the std_deviation column instead of the standard deviation of the spread grouped by ticker?
pandas.core.groupby.SeriesGroupBy object at 0x000000000484A588
df = pd.read_csv('C:\\Users\\William\\Desktop\\tickdata.csv',
dtype={'ticker': str, 'bidPrice': np.float64, 'askPrice': np.float64, 'afterHours': str},
usecols=['ticker', 'bidPrice', 'askPrice', 'afterHours'],
nrows=3000000
)
df = df[df.afterHours == "False"]
df = df[df.bidPrice != 0]
df = df[df.askPrice != 0]
df['spread'] = (df.askPrice - df.bidPrice)
df['std_deviation'] = df['spread'].std(ddof=0)
df = df.groupby(['ticker'])
print(df['std_deviation'])
print(df['spread'].mean())
UPDATE: I'm no longer getting an object back, but now I'm trying to figure out how to display the standard deviation by ticker:
df['spread'] = (df.askPrice - df.bidPrice)
df2 = df.groupby(['ticker'])
print(df2['spread'].mean())
df = df.set_index('ticker')
print(df['spread'].std(ddof=0))
UPDATE 2: I got the output I needed using:
df = df[df.afterHours == "False"]
df = df[df.bidPrice != 0]
df = df[df.askPrice != 0]
df['spread'] = (df.askPrice - df.bidPrice)
print(df.groupby(['ticker'])['spread'].mean())
print(df.groupby(['ticker'])['spread'].std(ddof=0))
This line:
df = df.groupby(['ticker'])
assigns df to a DataFrameGroupBy object, and
df['std_deviation']
is a SeriesGroupBy object (of the column).
It's a good idea not to "shadow" / re-assign one variable to a completely different datatype. Try to use a different variable name for the groupby!
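As a follow-up sketch (the names summary, mean_spread and std_spread are my own), the per-ticker mean and population standard deviation can be computed together without re-assigning df:
# keep df as the raw data; put the aggregated result in its own variable
summary = df.groupby('ticker')['spread'].agg(
    mean_spread='mean',
    std_spread=lambda s: s.std(ddof=0),
)
print(summary)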