I have a CSV file that contains 3000 rows and 5 columns, and more rows are constantly appended to it on a weekly basis.
What I'm trying to do is find the arithmetic mean of the last column over the last 1000 rows, every week. (So when new rows are added weekly, it will just take the average of the most recent 1000 rows.)
How should I construct the pandas or numpy code to achieve this?
import pandas as pd
df = pd.read_csv("fds.csv", index_col=False, header=0)
df_1 = df['Results']
# How should I write the next line of code to get the average of the most recent 1000 rows?
I'm on a different machine from the one pandas is installed on, so I'm going from memory, but I think what you'll want to do is...
import numpy as np
import pandas as pd
df = pd.read_csv("fds.csv", index_col=False, header=0)
# let's pretend your 5th column has a name (header) of `Stuff`
df_1 = df['Stuff']
last_thousand = df_1.tail(1000)
np.mean(last_thousand)
A little bit quicker using mean():
df = pd.read_csv("fds.csv", header = 0)
results = df.tail(1000).mean()
results will contain the mean for each column within the last 1000 rows. If you want more statistics, you can also use describe():
results = df.tail(1000).describe().unstack()
So basically I needed to use the pandas tail function. My code below works.
import numpy
import pandas as pd
df = pd.read_csv("fds.csv", index_col=False, header=0)
df_1 = df['Results']
numpy.average(df_1.tail(1000))
I'm automating small-business reporting from my QuickBooks P&L. I'm trying to get the net income value for the current month from a specific cell in a dataframe, but that cell moves one column to the right every month when I update the CSV file.
For example, for the code below, this month I want the value from Nov[0], but next month I'll want the value from Dec[0], even though that column doesn't exist yet.
Is there a graceful way to always select the second-rightmost column, or is this a stupid way to try to get this information?
import numpy as np
import pandas as pd
nov = -810
dec = 14958
total = 8693
d = {'Jan': [50], 'Feb': [70], 'Total':[120]}
df = pd.DataFrame(data=d)
Sure, you can reference the last or second-to-last row or column.
d = {'Jan': [50], 'Feb': [70], 'Total':[120]}
df = pd.DataFrame(data=d)
x = df.iloc[-1,-2]
This will select the value in the last row for the second-to-last column, in this case 70. :)
If you plan to use the full file, @VincentRupp's answer will get you what you want.
But if you only plan to use the values in the second-rightmost column and you can infer what it will be called, you can tell pd.read_csv that that's all you want.
import pandas as pd # 1.5.1
# assuming we want this month's name
# can modify to use some other month
abbreviated_month_name = pd.to_datetime("today").strftime("%b")
df = pd.read_csv("path/to/file.csv", usecols=[abbreviated_month_name])
print(df.iloc[-1, 0])
References: pd.read_csv, strftime cheat-sheet
Is there any way to read n rows starting from, say, row 25 of a SAS dataset in Python? Can we provide a range via chunksize?
We can do a hack with chunksize:
a = pd.read_sas('file.sas7bdat', chunksize=11)
df = a.read()   # with chunksize set, read() returns one chunk, i.e. the first 11 rows
a.close()
print(df)
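As written, that only reads the first 11 rows of the file. If you need to start at an arbitrary row (say row 25) while still reading in chunks, one possible approach (a sketch, not tested against a real file) is to accumulate chunks until the desired window is covered and then slice:
import pandas as pd
start, n = 25, 10                      # read 10 rows starting at 0-based row 25
reader = pd.read_sas('file.sas7bdat', chunksize=1000)
frames, seen = [], 0
for chunk in reader:
    frames.append(chunk)
    seen += len(chunk)
    if seen >= start + n:              # stop as soon as the window is covered
        break
reader.close()
df = pd.concat(frames, ignore_index=True).iloc[start:start + n]
print(df)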
You can use pyreadstat. Its main advantage over pandas.read_sas is that it allows reading data from any starting row up to any ending row, i.e. from row n to row m. pyreadstat.read_sas7bdat has two parameters, row_offset and row_limit: reading starts at row_offset + 1 and the number of rows read is row_limit.
import pyreadstat
df, _ = pyreadstat.read_sas7bdat('file.sas7bdat', row_offset=10, row_limit=8)
# row_offset=10 means reading starts from the 11th row
# row_limit=8 means the next 8 rows are read, starting at row_offset + 1
import pandas as pd
a = pd.read_sas('file.sas7bdat')   # reads the entire file into memory
df = a.loc[10:25]
print(df)
10 is the start label and 25 the stop; .loc slicing includes both endpoints.
I need to extract dataframes from JSON data stored in every row of an initial dataframe and concatenate them all together. Currently I do it by iteration and it takes ages.
The input data is a dataframe containing JSON dictionaries:
print(json_table)
json_responce timestamp request
27487 {'explore_tabs.. 2019-07-02 02:05:25 Lisboa, Portugal
27488 {'explore_tabs.. 2019-07-02 02:05:27 Ribeira, Portugal
The json_responce field is unwrapped to a dataframe:
from pandas.io.json import json_normalize
from ast import literal_eval
json = literal_eval(json_table.loc[0,'json_responce'])
df_normalized = json_normalize(
    json['explore_tabs'][0]['sections'][0]['listings'])
which gives a nice unwrapped dataframe for each row of the initial df
With 27,000 rows of JSON in the dataframe, I iterate over the initial df, creating a new df at every step and concatenating it onto final_df to gather all the data together:
def unwrap_json_and_concat(json_table):
    final_df = pd.DataFrame()
    for i in json_table.index:
        row = literal_eval(json_table.loc[i, 'json_responce'])
        df = json_normalize(
            row['explore_tabs'][0]['sections'][0]['listings'])
        final_df = pd.concat([final_df, df])
    return final_df
As expected, it takes ages to run, with a significant slowdown towards the end of the calculation due to the growing size of final_df.
I know how to write functions for apply, but I believe that won't help performance much either, since a new dataframe is created for every row anyway.
How to vectorize this calculation?
Thank you!
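For reference, the usual fix for the quadratic cost of calling pd.concat inside the loop is to collect the per-row frames in a list and concatenate only once at the end; a minimal sketch based on the code above:
from ast import literal_eval
import pandas as pd
from pandas import json_normalize  # replaces the deprecated pandas.io.json import

def unwrap_json_once(json_table):
    # build one small dataframe per row, then concatenate a single time
    frames = [
        json_normalize(
            literal_eval(raw)['explore_tabs'][0]['sections'][0]['listings'])
        for raw in json_table['json_responce']
    ]
    return pd.concat(frames, ignore_index=True)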
I have a large dataframe that I want to sample based on the value of the target column, which is binary: 0/1.
I want to extract an equal number of rows that have 0s and 1s in the "target" column. I was thinking of using the pandas sample function, but I'm not sure how to request an equal number of samples from both classes based on the target column.
I was thinking of using something like this:
df.sample(n=10000, weights='target', random_state=1)
I'm not sure how to edit it to get 10k records with 5k 1s and 5k 0s in the target column. Any help is appreciated!
You can group the data by target and then sample:
df = pd.DataFrame({'col':np.random.randn(12000), 'target':np.random.randint(low = 0, high = 2, size=12000)})
new_df = df.groupby('target').apply(lambda x: x.sample(n=5000)).reset_index(drop = True)
new_df.target.value_counts()
1 5000
0 5000
Edit: use DataFrameGroupBy.sample
You get similar results using DataFrameGroupBy.sample (available since pandas 1.1):
new_df = df.groupby('target').sample(n=5000)
You can use the DataFrameGroupBy.sample method as follows:
sample_df = df.groupby("target").sample(n=5000, random_state=1)
Also found this to be a good method: weight each row by the inverse of its class count, so that both classes are equally likely to be drawn:
# inverse-frequency weights: each class contributes ~50% of the sample in expectation
df['weights'] = np.where(df['target'] == 1,
                         1 / (df['target'] == 1).sum(),
                         1 / (df['target'] == 0).sum())
sample_df = df.sample(frac=.1, random_state=111, weights='weights')
Change the value of frac depending on the fraction of the original dataframe you want back.
You will have to run df0.sample(n=5000) and df1.sample(n=5000) and then combine df0 and df1 into a dfsample dataframe. You can create df0 and df1 with boolean indexing on the target column. If you provide sample data I can help you construct that logic.
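A minimal sketch of that approach, assuming a dataframe df with a column named target as in the question:
import pandas as pd
# split by class with boolean indexing, sample 5000 rows from each, then combine
df0 = df[df['target'] == 0].sample(n=5000, random_state=1)
df1 = df[df['target'] == 1].sample(n=5000, random_state=1)
# shuffle the combined sample so the two classes are interleaved
dfsample = pd.concat([df0, df1]).sample(frac=1, random_state=1)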
I am trying to learn Python and pandas. Coming from VBA, I am still caught in the habit of looping through every single cell, but I am looking for ways to operate on entire rows at a time.
Below is part of my code. I have about 3000 stocks in the columns and about 40 or so data points in the rows, saved in a dataframe called df.
I do the same kind of loop as shown to test for multiple criteria based on row values for the stocks in each column. As you can see, my code uses .ix to loop through the 'cells' of the dataframe.
I have looked for ways to operate on entire rows at a time, but every attempt has failed.
This takes about 7 minutes for the 3000 stocks (but only about 1 minute or so for 2000 stocks??). Surely this can run much faster?
def piotrosky():
    df_temp = pd.DataFrame(np.nan, index=range(10), columns=df.columns)
    # use a dictionary for the rename input so it doesn't have to be done for each row
    dic = {0: 'positiveNetIncome', 1: 'positiveOperatingCF', 2: 'increasingROA',
           3: 'QualityOfEarnings', 4: 'longTermDebtToAssets', 5: 'currentRatio',
           6: 'sharesOutVsSharesLast', 7: 'increasingGrossM',
           8: 'IncreasingAssetTurnOver', 9: 'total'}
    df_temp.rename(dic, inplace=True)
    r = 1
    # df has stocks in the columns and data points in the rows,
    # so I always need to loop across the columns
    for i in range(df.shape[1] - 1):
        # positive net income
        if df.ix[2, r] > 0:
            df_temp.ix[0, r] = 1
        else:
            df_temp.ix[0, r] = 0
        # positive operating CF
        if df.ix[3, r] > 0:
            df_temp.ix[1, r] = 1
        else:
            df_temp.ix[1, r] = 0
        # continue with several similar blocks
        # total
        df_temp.ix[9, r] = df_temp.ix[0, r] + df_temp.ix[1, r] + df_temp.ix[2, r] + df_temp.ix[3, r] + \
            df_temp.ix[4, r] + df_temp.ix[5, r] + df_temp.ix[6, r] + df_temp.ix[7, r] + df_temp.ix[8, r]
        r = r + 1
Edit:
All of the below is done on a dataframe that is the transpose of the one you describe in your post. df.T should produce properly formatted input.
Method:
For conditionals on pandas dataframes, you can use the numpy function np.where:
criteria = {}
# np.where(condition, value_if_true, value_if_false)
criteria['positive_net_income'] = np.where(df[2] > 0, 1, 0)
After you get these numpy arrays, you can construct a dataframe from them,
pd.DataFrame(criteria)
and sum across it
pd.DataFrame(criteria).sum(axis=1)
to get a Series you can add as a column to your initial DataFrame
def piotrosky(df):
criteria = {}
criteria['positive_net_income'] = np.where(df[2] > 0, 1, 0)
criteria['positive_operating_cf'] = np.where(df[3] > 0, 1, 0)
...
return pd.DataFrame(criteria).sum(axis=1)
df['piotrosky_score'] = piotrosky(df)