Why can't I get a usual dataframe after using pivot()? - pandas

Under the variable names, there is an extra row that I do not want in my data set:
fdi_autocracy = fdi_autocracy.pivot(index=["Country", "regime", "Year"],
                                    columns="partner_regime",
                                    values=["FDI_outward", "FDI_inward", "total_fdi"],
                                    ).reset_index()
Country regime Year FDI_outward FDI_inward total_fdi
partner_regime 0.0 0.0 0.0
0 Albania 0.0 1995 NaN NaN NaN
1 Albania 0.0 1996 NaN NaN NaN
2 Albania 0.0 1997 NaN NaN NaN
3 Albania 0.0 1998 NaN NaN NaN
4 Albania 0.0 1999 NaN NaN NaN
What I want is the following:
Country regime Year FDI_outward FDI_inward total_fdi
0 Albania 0.0 1995 NaN NaN NaN
1 Albania 0.0 1996 NaN NaN NaN
2 Albania 0.0 1997 NaN NaN NaN
3 Albania 0.0 1998 NaN NaN NaN
4 Albania 0.0 1999 NaN NaN NaN

IIUC, you don't need the partner_regime label at all?
This removes that leftover columns-axis title (note that rename_axis returns a new frame, so assign the result back):
fdi_autocracy = fdi_autocracy.rename_axis(columns=[None, None])
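For context, here is a minimal self-contained sketch (with invented numbers) reproducing the leftover columns-axis title and the rename_axis fix:
import pandas as pd

# Dummy stand-in for the question's data (values invented for illustration)
fdi = pd.DataFrame({
    "Country": ["Albania", "Albania"],
    "regime": [0.0, 0.0],
    "Year": [1995, 1996],
    "partner_regime": [0.0, 0.0],
    "FDI_outward": [1.0, 2.0],
    "FDI_inward": [3.0, 4.0],
    "total_fdi": [4.0, 6.0],
})

out = fdi.pivot(index=["Country", "regime", "Year"],
                columns="partner_regime",
                values=["FDI_outward", "FDI_inward", "total_fdi"]).reset_index()
print(out.columns.names)   # [None, 'partner_regime']  <- the extra "row"
out = out.rename_axis(columns=[None, None])
print(out.columns.names)   # [None, None]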

Related

Filtering/Querying Pandas DataFrame after multiple grouping/agg

I have a dataframe that I first group, counting QuoteLine items grouped by stock (1 = true, 0 = false) and mfg type (K = Kit, M = Manufactured, P = Purchased). Ultimately, I am interested in quotes where ALL items are either NonStock/Kit and/or Stock/['M', 'P']:
grouped = df.groupby(['QuoteNum', 'typecode', 'stock']).agg({"QuoteLine": "count"})
and I get this:
QuoteLine-count
QuoteNum typecode stock
10001 K 0 1
10003 M 0 1
10005 M 0 3
1 1
10006 M 1 1
... ... ... ...
26961 P 1 1
26962 P 1 1
26963 P 1 2
26964 K 0 1
M 1 2
If I unstack it twice:
grouped = df.groupby(['QuoteNum', 'typecode', 'stock']).agg({"QuoteLine": "count"}).unstack().unstack()
# I get
QuoteLine-count
stock 0 1
typecode K M P K M P
QuoteNum
10001 1.0 NaN NaN NaN NaN NaN
10003 NaN 1.0 NaN NaN NaN NaN
10005 NaN 3.0 NaN NaN 1.0 NaN
10006 NaN NaN NaN NaN 1.0 NaN
10007 2.0 NaN NaN NaN NaN NaN
... ... ... ... ... ... ...
26959 NaN NaN NaN NaN NaN 1.0
26961 NaN 1.0 NaN NaN NaN 1.0
26962 NaN NaN NaN NaN NaN 1.0
26963 NaN NaN NaN NaN NaN 2.0
26964 1.0 NaN NaN NaN 2.0 NaN
Now I need to filter out all records as follows (this is where I need help):
# pseudo-code
(stock == 0 and typecode in ['M','P']) -> values are NOT NaN (don't want those)
and
(stock == 1 and typecode='K') -> values are NOT NaN (don't want those either)
so I'm left with these records:
Basically: Columns "0/M, 0/P, 1/K" must be all NaNs and other columns have at least one non NaN value
QuoteLine-count
stock 0 1
typecode K M P K M P
QuoteNum
10001 1.0 NaN NaN NaN NaN NaN
10006 NaN NaN NaN NaN 1.0 NaN
10007 2.0 NaN NaN NaN NaN NaN
... ... ... ... ... ... ...
26959 NaN NaN NaN NaN NaN 1.0
26962 NaN NaN NaN NaN NaN 1.0
26963 NaN NaN NaN NaN NaN 2.0
26964 1.0 NaN NaN NaN 2.0 NaN
IIUC, use a boolean mask to set the rows that match your wanted combinations to NaN, then unstack the desired levels; a quote whose values are then all NaN contained only NonStock/Kit and Stock/M,P items:
import numpy as np

# Shortcut (for readability)
lvl_vals = grouped.index.get_level_values
m1 = (lvl_vals('typecode') == 'K') & (lvl_vals('stock') == 0)
m2 = (lvl_vals('typecode').isin(['M', 'P'])) & (lvl_vals('stock') == 1)

grouped[m1 | m2] = np.nan
out = (grouped.unstack(level=['stock', 'typecode'])
              .loc[lambda x: x.isna().all(axis=1)])
Output result:
>>> out
QuoteLine-count
stock 0 1
typecode K M M P
QuoteNum
10001 NaN NaN NaN NaN
10006 NaN NaN NaN NaN
26961 NaN NaN NaN NaN
26962 NaN NaN NaN NaN
26963 NaN NaN NaN NaN
26964 NaN NaN NaN NaN
The desired values could be obtained with as_index=False, but I am not sure if they are in the desired format.
grouped = df.groupby(['QuoteNum', 'typecode', 'stock'], as_index=False).agg({"QuoteLine": "count"})
grouped[((grouped["stock"] == 0) & grouped["typecode"].isin(["M", "P"]))
        | ((grouped["stock"] == 1) & grouped["typecode"].isin(["K"]))]
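Building on that flat frame, here is a hedged follow-up sketch (my addition, on dummy data): since the filter above selects the "bad" rows, dropping every QuoteNum that appears in it leaves the quotes whose items are all NonStock/Kit or Stock/M,P:
import pandas as pd

# Dummy stand-in for the question's df (column names taken from the question)
df = pd.DataFrame({
    "QuoteNum": [10001, 10003, 10005, 10005, 10006],
    "typecode": ["K", "M", "M", "M", "M"],
    "stock":    [0, 0, 0, 1, 1],
    "QuoteLine": [1, 1, 2, 3, 4],
})

grouped = df.groupby(["QuoteNum", "typecode", "stock"], as_index=False).agg({"QuoteLine": "count"})
bad = ((grouped["stock"] == 0) & grouped["typecode"].isin(["M", "P"])) \
    | ((grouped["stock"] == 1) & grouped["typecode"].isin(["K"]))
good = grouped[~grouped["QuoteNum"].isin(grouped.loc[bad, "QuoteNum"])]
print(good)   # keeps 10001 (K/0) and 10006 (M/1); 10003 and 10005 had bad rows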

Is it possible to turn quarterly data into monthly?

I'm struggling with this problem and I'm not sure if I'm approaching it correctly.
I have this dataset:
        ticker        date filing_date_x currency_symbol_x  researchdevelopment  ...  accumulateddepreciation  commonstocksharesoutstanding
116638  JNJ.US  2019-12-31    2020-02-18               USD         3.232000e+09  ...                      NaN                  2.632507e+09
116569  JNJ.US  2020-03-31    2020-04-29               USD         2.580000e+09  ...            -2.584600e+10                  2.632392e+09
116420  JNJ.US  2020-06-30    2020-07-24               USD         2.707000e+09  ...            -2.645800e+10                  2.632377e+09
116235  JNJ.US  2020-09-30    2020-10-23               USD         2.840000e+09  ...            -2.730700e+10                  2.632167e+09
116135  JNJ.US  2020-12-31    2021-02-22               USD         4.032000e+09  ...                      NaN                  2.632512e+09
(quarterly fundamentals for JNJ.US, one row per quarter; the dozens of remaining income-statement and balance-sheet columns, e.g. totalrevenue, totalassets, longtermdebt, are omitted here for readability)
Then I have this dataframe of (daily) prices:
ticker date open high low close adjusted_close volume
0 JNJ.US 2021-08-02 172.470 172.840 171.300 172.270 172.2700 3620659
1 JNJ.US 2021-07-30 172.540 172.980 171.840 172.200 172.2000 5346400
2 JNJ.US 2021-07-29 172.740 173.340 171.090 172.180 172.1800 4214100
3 JNJ.US 2021-07-28 172.730 173.380 172.080 172.180 172.1800 5750700
4 JNJ.US 2021-07-27 171.800 172.720 170.670 172.660 172.6600 7089300
I have daily data in the price dataframe but quarterly data in the first dataframe. I want to merge them in a way that all the prices between Jan-01-2020 and Mar-01-2020 are merged with the correct quarterly row.
I'm not sure exactly how to do this. I thought of extracting the date to month-year, but I still don't know how to merge based on a range of values.
Any suggestions would be welcome; if I'm not clear please let me know and I can clarify.
If I understand correctly, you could create common year and quarter columns for each DataFrame and merge on those columns. I did a left merge since you only want to keep the rows of the left dataset (daily data).
If this is not what you are looking for, could you please clarify with a sample input/output?
# importing pandas
import pandas as pd

# Creating dummy data of daily values
dt = pd.Series(['2020-08-02', '2020-07-30', '2020-07-29',
                '2020-07-28', '2020-07-27'])
# Convert the underlying data to datetime
dt = pd.to_datetime(dt)
dt_df = pd.DataFrame(dt, columns=['date'])
dt_df['quarter_1'] = dt_df['date'].dt.quarter
dt_df['year_1'] = dt_df['date'].dt.year
print(dt_df)
date quarter_1 year_1
0 2020-08-02 3 2020
1 2020-07-30 3 2020
2 2020-07-29 3 2020
3 2020-07-28 3 2020
4 2020-07-27 3 2020
# Creating dummy data of quarterly values
dt2 = pd.Series(['2019-12-31', '2020-03-31', '2020-06-30',
                 '2020-09-30', '2020-12-31'])
# Convert the underlying data to datetime
dt2 = pd.to_datetime(dt2)
dt2_df = pd.DataFrame(dt2, columns=['date2'])
dt2_df['quarter_2'] = dt2_df['date2'].dt.quarter
dt2_df['year_2'] = dt2_df['date2'].dt.year
print(dt2_df)
       date2  quarter_2  year_2
0 2019-12-31 4 2019
1 2020-03-31 1 2020
2 2020-06-30 2 2020
3 2020-09-30 3 2020
4 2020-12-31 4 2020
Then you can just merge however you want.
dt_df.merge(dt2_df, how='left',
            left_on=['quarter_1', 'year_1'],
            right_on=['quarter_2', 'year_2'],
            validate="many_to_many")
OUTPUT:
        date  quarter_1  year_1      date2  quarter_2  year_2
0 2020-08-02 3 2020 2020-09-30 3 2020
1 2020-07-30 3 2020 2020-09-30 3 2020
2 2020-07-29 3 2020 2020-09-30 3 2020
3 2020-07-28 3 2020 2020-09-30 3 2020
4 2020-07-27 3 2020 2020-09-30 3 2020
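As a possible simplification (my suggestion, not part of the answer above), a single quarterly key can be built with Series.dt.to_period, avoiding the separate year/quarter columns; this reuses dt_df and dt2_df from the snippets above:
# One quarterly Period key per frame, then an ordinary merge on it
dt_df['q'] = dt_df['date'].dt.to_period('Q')    # e.g. Period('2020Q3')
dt2_df['q'] = dt2_df['date2'].dt.to_period('Q')
merged = dt_df.merge(dt2_df, on='q', how='left')
print(merged)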

How to remove periods of time in a dataframe?

I have this df:
CODE YEAR MONTH DAY TMAX TMIN PP BAD PERIOD 1 BAD PERIOD 2
9984 000130 1991 1 1 32.6 23.4 0.0 1991 1998
9985 000130 1991 1 2 31.2 22.4 0.0 NaN NaN
9986 000130 1991 1 3 32.0 NaN 0.0 NaN NaN
9987 000130 1991 1 4 32.2 23.0 0.0 NaN NaN
9988 000130 1991 1 5 30.5 22.0 0.0 NaN NaN
... ... ... ... ... ... ...
20118 000130 2018 9 30 31.8 21.2 NaN NaN NaN
30028 000132 1991 1 1 35.2 NaN 0.0 2005 2010
30029 000132 1991 1 2 34.6 NaN 0.0 NaN NaN
30030 000132 1991 1 3 35.8 NaN 0.0 NaN NaN
30031 000132 1991 1 4 34.8 NaN 0.0 NaN NaN
... ... ... ... ... ... ...
50027 000132 2019 10 5 36.5 NaN 13.1 NaN NaN
50028 000133 1991 1 1 36.2 NaN 0.0 1991 2010
50029 000133 1991 1 2 36.6 NaN 0.0 NaN NaN
50030 000133 1991 1 3 36.8 NaN 5.0 NaN NaN
50031 000133 1991 1 4 36.8 NaN 0.0 NaN NaN
... ... ... ... ... ... ...
54456 000133 2019 10 5 36.5 NaN 12.1 NaN NaN
I want to change the values of the columns TMAX, TMIN, and PP to NaN, but only for the periods specified in BAD PERIOD 1 and BAD PERIOD 2, AND ONLY WITHIN THEIR RESPECTIVE CODE. For example, if BAD PERIOD 1 is 1991 and BAD PERIOD 2 is 1998, I want all the values of TMAX, TMIN, and PP that have code 000130 to be NaN from 1991 (bad period 1) through 1998 (bad period 2). I have 371 unique codes in the CODE column, so I might use df.groupby("CODE").
Expected result after the change:
CODE YEAR MONTH DAY TMAX TMIN PP BAD PERIOD 1 BAD PERIOD 2
9984 000130 1991 1 1 NaN NaN NaN 1991 1998
9985 000130 1991 1 2 NaN NaN NaN NaN NaN
9986 000130 1991 1 3 NaN NaN NaN NaN NaN
9987 000130 1991 1 4 NaN NaN NaN NaN NaN
9988 000130 1991 1 5 NaN NaN NaN NaN NaN
... ... ... ... ... ... ...
20118 000130 2018 9 30 31.8 21.2 NaN NaN NaN
30028 000132 1991 1 1 35.2 NaN 0.0 2005 2010
30029 000132 1991 1 2 34.6 NaN 0.0 NaN NaN
30030 000132 1991 1 3 35.8 NaN 0.0 NaN NaN
30031 000132 1991 1 4 34.8 NaN 0.0 NaN NaN
... ... ... ... ... ... ...
50027 000132 2019 10 5 36.5 NaN 13.1 NaN NaN
50028 000133 1991 1 1 NaN NaN NaN 1991 2010
50029 000133 1991 1 2 NaN NaN NaN NaN NaN
50030 000133 1991 1 3 NaN NaN NaN NaN NaN
50031 000133 1991 1 4 NaN NaN NaN NaN NaN
... ... ... ... ... ... ...
54456 000133 2019 10 5 36.5 NaN 12.1 NaN NaN
You can propagate the values in your bad columns with ffill if the non-NaN values are always in the first row per CODE group and your data is ordered by CODE; if not, use groupby.transform with 'first'. Then use mask to replace values with NaN where YEAR is between your two bad columns once they are filled with the wanted values.
# Note: the question's 'BAD PERIOD 1'/'BAD PERIOD 2' columns are renamed BAD_1/BAD_2 here
df_ = df[['BAD_1', 'BAD_2']].ffill()
# or, more flexible:
# df_ = df.groupby("CODE")[['BAD_1', 'BAD_2']].transform('first')
cols = ['TMAX', 'TMIN', 'PP']
df[cols] = df[cols].mask(df['YEAR'].ge(df_['BAD_1'])
                         & df['YEAR'].le(df_['BAD_2']))
print(df)
print(df)
CODE YEAR MONTH DAY TMAX TMIN PP BAD_1 BAD_2
9984 130 1991 1 1 NaN NaN NaN 1991.0 1998.0
9985 130 1991 1 2 NaN NaN NaN NaN NaN
9986 130 1991 1 3 NaN NaN NaN NaN NaN
9987 130 1991 1 4 NaN NaN NaN NaN NaN
9988 130 1991 1 5 NaN NaN NaN NaN NaN
20118 130 2018 9 30 31.8 21.2 NaN NaN NaN
30028 132 1991 1 1 35.2 NaN 0.0 2005.0 2010.0
30029 132 1991 1 2 34.6 NaN 0.0 NaN NaN
30030 132 1991 1 3 35.8 NaN 0.0 NaN NaN
30031 132 1991 1 4 34.8 NaN 0.0 NaN NaN
50027 132 2019 10 5 36.5 NaN 13.1 NaN NaN
50028 133 1991 1 1 NaN NaN NaN 1991.0 2010.0
50029 133 1991 1 2 NaN NaN NaN NaN NaN
50030 133 1991 1 3 NaN NaN NaN NaN NaN
50031 133 1991 1 4 NaN NaN NaN NaN NaN
54456 133 2019 10 5 36.5 NaN 12.1 NaN NaN
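For completeness, here is a minimal self-contained sketch of the groupby.transform('first') variant on invented data (BAD_1/BAD_2 are the answer's shortened names for the question's bad-period columns):
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "CODE": ["000130"] * 3 + ["000132"] * 2,
    "YEAR": [1991, 1995, 2018, 1991, 2019],
    "TMAX": [32.6, 31.2, 31.8, 35.2, 36.5],
    "PP":   [0.0, 0.0, np.nan, 0.0, 13.1],
    "BAD_1": [1991, np.nan, np.nan, 2005, np.nan],
    "BAD_2": [1998, np.nan, np.nan, 2010, np.nan],
})

# First non-NaN bad period per CODE, broadcast back to every row of the group
bad = df.groupby("CODE")[["BAD_1", "BAD_2"]].transform("first")
cols = ["TMAX", "PP"]
df[cols] = df[cols].mask(df["YEAR"].between(bad["BAD_1"], bad["BAD_2"]))
print(df)   # the two 000130 rows inside 1991-1998 become NaN; 000132 rows are untouched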

In Python, how can I update multiple rows in a DataFrame with a Series?

I have a dataframe as below.
a b c d
2010-07-23 NaN NaN NaN NaN
2010-07-26 NaN NaN NaN NaN
2010-07-27 NaN NaN NaN NaN
2010-07-28 NaN NaN NaN NaN
2010-07-29 NaN NaN NaN NaN
2010-07-30 NaN NaN NaN NaN
2010-08-02 NaN NaN NaN NaN
2010-08-03 NaN NaN NaN NaN
2010-08-04 NaN NaN NaN NaN
2010-08-05 NaN NaN NaN NaN
And I have a series as below.
2010-07-23
a 1
b 2
c 3
d 4
I want to update the DataFrame with the series as below. How can I do this?
a b c d
2010-07-23 NaN NaN NaN NaN
2010-07-26 1 2 3 4
2010-07-27 1 2 3 4
2010-07-28 1 2 3 4
2010-07-29 NaN NaN NaN NaN
2010-07-30 NaN NaN NaN NaN
2010-08-02 NaN NaN NaN NaN
2010-08-03 NaN NaN NaN NaN
2010-08-04 NaN NaN NaN NaN
2010-08-05 NaN NaN NaN NaN
Thank you very much for the help in advance.
If s is a one-column DataFrame instead of a Series, add DataFrame.squeeze, then repeat the Series with concat along the length of the date range, and finally pass the result to DataFrame.update:
r = pd.date_range('2010-07-26','2010-07-28')
df.update(pd.concat([s.squeeze()] * len(r), axis=1, keys=r).T)
print (df)
a b c d
2010-07-23 NaN NaN NaN NaN
2010-07-26 1.0 2.0 3.0 4.0
2010-07-27 1.0 2.0 3.0 4.0
2010-07-28 1.0 2.0 3.0 4.0
2010-07-29 NaN NaN NaN NaN
2010-07-30 NaN NaN NaN NaN
2010-08-02 NaN NaN NaN NaN
2010-08-03 NaN NaN NaN NaN
2010-08-04 NaN NaN NaN NaN
2010-08-05 NaN NaN NaN NaN
Or you can use np.broadcast_to to repeat the Series:
import numpy as np

r = pd.date_range('2010-07-26', '2010-07-28')
df1 = pd.DataFrame(np.broadcast_to(s.squeeze().values, (len(r), len(s))),
                   index=r,
                   columns=s.index)
print(df1)
print (df1)
a b c d
2010-07-26 1 2 3 4
2010-07-27 1 2 3 4
2010-07-28 1 2 3 4
df.update(df1)
print (df)
a b c d
2010-07-23 NaN NaN NaN NaN
2010-07-26 1.0 2.0 3.0 4.0
2010-07-27 1.0 2.0 3.0 4.0
2010-07-28 1.0 2.0 3.0 4.0
2010-07-29 NaN NaN NaN NaN
2010-07-30 NaN NaN NaN NaN
2010-08-02 NaN NaN NaN NaN
2010-08-03 NaN NaN NaN NaN
2010-08-04 NaN NaN NaN NaN
2010-08-05 NaN NaN NaN NaN
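If the target dates are known up front, a shorter alternative (my sketch; it assumes df has a DatetimeIndex and that s's index matches df's column order) is to assign the values directly, letting pandas broadcast the 1-D array across the selected rows:
r = pd.date_range('2010-07-26', '2010-07-28')
# s.squeeze() -> Series [1, 2, 3, 4]; the array is broadcast to every row in r
df.loc[r, :] = s.squeeze().to_numpy()
print(df)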

Add header to .data file in Pandas

Given a file with the extension .data, I have read it with pd.read_fwf("./input.data", sep=",", header=None):
Out:
0
0 63.0,1.0,1.0,145.0,233.0,1.0,2.0,150.0,0.0,2.3...
1 67.0,1.0,4.0,160.0,286.0,0.0,2.0,108.0,1.0,1.5...
2 67.0,1.0,4.0,120.0,229.0,0.0,2.0,129.0,1.0,2.6...
3 37.0,1.0,3.0,130.0,250.0,0.0,0.0,187.0,0.0,3.5...
4 41.0,0.0,2.0,130.0,204.0,0.0,2.0,172.0,0.0,1.4...
... ...
292 57.0,0.0,4.0,140.0,241.0,0.0,0.0,123.0,1.0,0.2...
293 45.0,1.0,1.0,110.0,264.0,0.0,0.0,132.0,0.0,1.2...
294 68.0,1.0,4.0,144.0,193.0,1.0,0.0,141.0,0.0,3.4...
295 57.0,1.0,4.0,130.0,131.0,0.0,0.0,115.0,1.0,1.2...
296 57.0,0.0,2.0,130.0,236.0,0.0,2.0,174.0,0.0,0.0...
How can I add the following column names to it? Thanks.
col_names = ["age", "sex", "cp", "restbp", "chol", "fbs", "restecg",
"thalach", "exang", "oldpeak", "slope", "ca", "thal", "num"]
Update:
pd.read_fwf("./input.data", names=col_names)
Out:
age sex cp restbp chol fbs restecg thalach exang oldpeak slope ca thal num
0 63.0,1.0,1.0,145.0,233.0,1.0,2.0,150.0,0.0,2.3... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 67.0,1.0,4.0,160.0,286.0,0.0,2.0,108.0,1.0,1.5... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 67.0,1.0,4.0,120.0,229.0,0.0,2.0,129.0,1.0,2.6... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
3 37.0,1.0,3.0,130.0,250.0,0.0,0.0,187.0,0.0,3.5... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
4 41.0,0.0,2.0,130.0,204.0,0.0,2.0,172.0,0.0,1.4... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
292 57.0,0.0,4.0,140.0,241.0,0.0,0.0,123.0,1.0,0.2... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
293 45.0,1.0,1.0,110.0,264.0,0.0,0.0,132.0,0.0,1.2... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
294 68.0,1.0,4.0,144.0,193.0,1.0,0.0,141.0,0.0,3.4... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
295 57.0,1.0,4.0,130.0,131.0,0.0,0.0,115.0,1.0,1.2... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
296 57.0,0.0,2.0,130.0,236.0,0.0,2.0,174.0,0.0,0.0... NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
If you check read_fwf:
Read a table of fixed-width formatted lines into DataFrame.
So since the separator is ",", use read_csv instead:
col_names = ["age", "sex", "cp", "restbp", "chol", "fbs", "restecg",
"thalach", "exang", "oldpeak", "slope", "ca", "thal", "num"]
df = pd.read_csv("input.data", names=col_names)
print (df)
age sex cp restbp chol fbs restecg thalach exang oldpeak \
0 63.0 1.0 1.0 145.0 233.0 1.0 2.0 150.0 0.0 2.3
1 67.0 1.0 4.0 160.0 286.0 0.0 2.0 108.0 1.0 1.5
2 67.0 1.0 4.0 120.0 229.0 0.0 2.0 129.0 1.0 2.6
3 37.0 1.0 3.0 130.0 250.0 0.0 0.0 187.0 0.0 3.5
4 41.0 0.0 2.0 130.0 204.0 0.0 2.0 172.0 0.0 1.4
.. ... ... ... ... ... ... ... ... ... ...
292 57.0 0.0 4.0 140.0 241.0 0.0 0.0 123.0 1.0 0.2
293 45.0 1.0 1.0 110.0 264.0 0.0 0.0 132.0 0.0 1.2
294 68.0 1.0 4.0 144.0 193.0 1.0 0.0 141.0 0.0 3.4
295 57.0 1.0 4.0 130.0 131.0 0.0 0.0 115.0 1.0 1.2
296 57.0 0.0 2.0 130.0 236.0 0.0 2.0 174.0 0.0 0.0
slope ca thal num
0 3.0 0.0 6.0 0
1 2.0 3.0 3.0 1
2 2.0 2.0 7.0 1
3 3.0 0.0 3.0 0
4 1.0 0.0 3.0 0
.. ... ... ... ...
292 2.0 0.0 7.0 1
293 2.0 0.0 7.0 1
294 2.0 2.0 7.0 1
295 2.0 1.0 7.0 1
296 2.0 1.0 3.0 1
[297 rows x 14 columns]
Just do a read_csv without a header and pass col_names:
df = pd.read_csv('input.data', header=None, names=col_names)
Output (head):
age sex cp restbp chol fbs restecg thalach exang oldpeak slope ca thal num
-- ----- ----- ---- -------- ------ ----- --------- --------- ------- --------- ------- ---- ------ -----
0 63 1 1 145 233 1 2 150 0 2.3 3 0 6 0
1 67 1 4 160 286 0 2 108 1 1.5 2 3 3 1
2 67 1 4 120 229 0 2 129 1 2.6 2 2 7 1
3 37 1 3 130 250 0 0 187 0 3.5 3 0 3 0
4 41 0 2 130 204 0 2 172 0 1.4 1 0 3 0
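As another hedged alternative: if the one-column read_fwf result is already in memory, the comma-joined strings can be split and the names attached afterwards (a sketch, reusing the question's file path and the col_names list from above):
import pandas as pd

raw = pd.read_fwf("./input.data", header=None)           # one string column, as in the question
df = raw[0].str.split(",", expand=True).astype(float)    # expand into the 14 numeric columns
df.columns = col_names                                   # attach the names afterwards
print(df.head())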