Reading CSV creates too many rows/columns - pandas

I am working with dataframes in the pandas library. I have a table of data in Excel that I save as a CSV then I call
df = pd.read_csv("file.csv")
I expect the frame to look something like
Item1 Item2 Item3
0 12.00 3 2
1 4.00 8 4
2 3.14 2 8
But instead I get
Item1 Item2 Item3 Unnamed: 3 Unnamed: 4
0 12.00 3 2 NaN NaN
1 4.00 8 4 NaN NaN
2 3.14 2 8 NaN NaN
Or sometimes I get extra rows with all NaN values. It appears that pandas is not aware of the real size of the CSV. The data in Excel is organized perfectly fine: the values are all nonempty and sit entirely in a rectangle. How do I fix this? Is there an edit I can make to the CSV that will specify its correct size?
As requested, here is a snippet of the data. It goes down to about 2500 rows, and there are no more values to the right.

You probably have a cell that is not truly empty (for example, one containing a space) in the original Excel file. If you are getting 2 unnamed columns in pandas, try deleting the 2 columns to the right of your data in the original Excel file and exporting the CSV again.
Another way would be to keep all columns that are not unnamed. You could do this with:
real_cols = [x for x in df.columns if not x.startswith("Unnamed: ")]
df = df[real_cols]
And then you can save the csv.
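If the stray cells also leave extra all-NaN rows behind, a slightly more general sketch along the same lines (the filenames are placeholders) is to drop the unnamed columns and the fully empty rows right after reading, then write the CSV back out:
import pandas as pd

df = pd.read_csv("file.csv")  # placeholder filename
# drop the spill-over "Unnamed: N" columns, then any rows that are entirely empty
df = df.loc[:, ~df.columns.str.startswith("Unnamed")]
df = df.dropna(how="all")
df.to_csv("file_clean.csv", index=False)  # placeholder output name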

Related

Trying to convert column to be row indexes, set_index error

data_new.set_index('Usual Mode of Transport to Work')
(running in a Jupyter notebook)
I am trying to convert a column into the row index; however, the values show up as NaN. How do I resolve it? Thanks. I'm a beginner in Python.
Let's start with a toy dataframe
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randint(0,5,size=(5, 4)), columns=list('ABCD'))
print(df)
A B C D
0 3 1 2 1
1 2 2 3 4
2 2 4 4 1
3 1 0 3 2
4 1 2 4 0
Now, let's set column A as the index
df.set_index('A')
B C D
A
3 1 2 1
2 2 3 4
2 4 4 1
1 0 3 2
1 2 4 0
This sets the index correctly, but it doesn't save the newly indexed dataframe back into the original variable, i.e., df. So when you check the value of df, you still see the original dataframe.
To save the new indexing, you can do one of the following
df = df.set_index('A')
or
df.set_index('A', inplace=True)
Coming to the NaN values, I believe it has something to do with using a Jupyter notebook. Since Jupyter lets you jump between cells, execution does not necessarily follow the linear order of a traditional script, which can get confusing. You can use a variable inspector in Jupyter to cross-check that you are passing the value you intend to. I hope this helps you figure out the NaN issue.
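Applied to the call from the question, a minimal sketch (the small DataFrame below is a made-up stand-in for data_new; the column name follows the question):
import pandas as pd

# made-up stand-in for the questioner's data_new
data_new = pd.DataFrame({'Usual Mode of Transport to Work': ['Car', 'Bus', 'Walk'],
                         'Count': [120, 45, 30]})

# either reassign the result of set_index...
data_new = data_new.set_index('Usual Mode of Transport to Work')
print(data_new)
# ...or modify the DataFrame in place instead:
# data_new.set_index('Usual Mode of Transport to Work', inplace=True)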

Join values in different dataframes

I am trying to join two dataframes in such a way that the resulting union contains information from both of them. My dataframes are similar to:
>> df_1
user_id hashtag1 hashtag2 hashtag3
0000 '#breakfast' '#lunch' '#dinner'
0001 '#day' '#night' NaN
0002 '#breakfast' NaN NaN
The second dataframe contains a unique identifier of the hashtags and their respective score:
>> df_2
hashtag1 score
'#breakfast' 10
'#lunch' 8
'#dinner' 9
'#day' -5
'#night' 6
I want to add a set of columns on my first dataframe that contain the scores of each hashtag used, such as:
user_id hashtag1 hashtag2 hashtag3 score1 score2 score3
0000 '#breakfast' '#lunch' '#dinner' 10 8 9
0001 '#day' '#night' NaN -5 6 NaN
0002 '#breakfast' NaN NaN 10 NaN NaN
I tried to use df.join() but I get an error: "ValueError: You are trying to merge on object and int64 columns. If you wish to proceed you should use pd.concat"
My code is as follows:
new_df = df_1.join(df_2, how='left', on='hashtag1')
I appreciate any help, thank you
You should try pandas.merge:
pandas.merge(df_1, df_2, on='hashtag1', how='left')
If you want to use .join, you need to set the index of df_2.
df_1.join(df_2.set_index('hashtag1'), on='hashtag1', how='left')
Some resources:
https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html#database-style-dataframe-or-named-series-joining-merging
Trouble with df.join(): ValueError: You are trying to merge on object and int64 columns
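If the goal is the score1/score2/score3 layout from the question, one option (a sketch; the data below just mirrors the question's example, and the approach uses Series.map rather than a join) is to build a hashtag-to-score lookup and map every hashtag column through it:
import numpy as np
import pandas as pd

df_1 = pd.DataFrame({'user_id': ['0000', '0001', '0002'],
                     'hashtag1': ['#breakfast', '#day', '#breakfast'],
                     'hashtag2': ['#lunch', '#night', np.nan],
                     'hashtag3': ['#dinner', np.nan, np.nan]})
df_2 = pd.DataFrame({'hashtag1': ['#breakfast', '#lunch', '#dinner', '#day', '#night'],
                     'score': [10, 8, 9, -5, 6]})

# hashtag -> score lookup
scores = df_2.set_index('hashtag1')['score']

# map each hashtag column to its score; missing hashtags stay NaN
for i, col in enumerate(['hashtag1', 'hashtag2', 'hashtag3'], start=1):
    df_1['score{}'.format(i)] = df_1[col].map(scores)

print(df_1)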

How to manipulate data in arrays using pandas

I have data in a DataFrame and need to compare the current value of one column with the prior value of another column. The current time is row 5 in this dataframe, and here's what I want:
the target data is streamed and captured into a DataFrame, then that column is multiplied by a constant to generate another column. However, I am unable to generate the third column, comp, which should compare the current value of prod with the prior value of comp (keeping the larger of the two).
df['temp'] = self.temp
df['prod'] = df['temp'].multiply(other=const1)
Another user had suggested using this logic, but it generates errors because the routine's array doesn't match the size of the DataFrame:
for i in range(2, len(df['temp'])):
    df['comp'].append(max(df['prod'][i], df['comp'][i - 1]))
Let's try this, I think this will capture your intended logic:
df = pd.DataFrame({'col0':[1,2,3,4,5]
,'col1':[5,4.9,5.5,3.5,6.3]
,'col2':[2.5,2.45,2.75,1.75,3.15]
})
df['col3'] = df['col2'].shift(-1).cummax().shift()
print(df)
Output:
col0 col1 col2 col3
0 1 5.0 2.50 NaN
1 2 4.9 2.45 2.45
2 3 5.5 2.75 2.75
3 4 3.5 1.75 2.75
4 5 6.3 3.15 3.15
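For comparison, here is the same recurrence written as an explicit loop (a sketch: comp[i] = max(col2[i], comp[i-1]), with the first row left as NaN, matching the output above):
import numpy as np
import pandas as pd

df = pd.DataFrame({'col0': [1, 2, 3, 4, 5],
                   'col1': [5, 4.9, 5.5, 3.5, 6.3],
                   'col2': [2.5, 2.45, 2.75, 1.75, 3.15]})

comp = [np.nan]  # no prior value for the first row
for i in range(1, len(df)):
    prev = comp[i - 1]
    cur = df['col2'].iloc[i]
    comp.append(cur if np.isnan(prev) else max(cur, prev))

df['col3_loop'] = comp
print(df)  # col3_loop matches the shift/cummax column above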

Can you prevent automatic alphabetical order of df.append()?

I am trying to append data to a log where the order of the columns isn't alphabetical but makes logical sense, e.g.
Org_Goals_1 Calc_Goals_1 Diff_Goals_1 Org_Goals_2 Calc_Goals_2 Diff_Goals_2
I am running through several calculations based on different variables and logging the results by appending a dictionary of the values after each run. Is there a way to prevent the df.append() function from ordering the columns alphabetically?
Seems you have to reorder the columns after the append operation:
In [25]:
# assign the appended dfs to merged
merged = df1.append(df2)
# create a list of the columns in the order you desire
cols = list(df1) + list(df2)
# reorder by selecting the columns in the desired order
merged = merged[cols]
# column order is now as desired
merged.columns
Out[25]:
Index(['Org_Goals_1', 'Calc_Goals_1', 'Diff_Goals_1', 'Org_Goals_2', 'Calc_Goals_2', 'Diff_Goals_2'], dtype='object')
example:
In [26]:
df1 = pd.DataFrame(columns=['Org_Goals_1', 'Calc_Goals_1', 'Diff_Goals_1'], data=np.random.randn(5, 3))
df2 = pd.DataFrame(columns=['Org_Goals_2', 'Calc_Goals_2', 'Diff_Goals_2'], data=np.random.randn(5, 3))
merged = df1.append(df2)
cols = list(df1) + list(df2)
merged = merged[cols]
merged
Out[26]:
Org_Goals_1 Calc_Goals_1 Diff_Goals_1 Org_Goals_2 Calc_Goals_2 \
0 1.528579 0.028935 -0.687143 NaN NaN
1 -0.720132 0.943432 -2.055357 NaN NaN
2 1.556319 0.035234 0.020756 NaN NaN
3 -1.458852 1.447863 0.847496 NaN NaN
4 -0.222660 0.132337 -0.255578 NaN NaN
0 NaN NaN NaN 0.727619 0.131085
1 NaN NaN NaN 0.022209 -1.942110
2 NaN NaN NaN -0.350757 0.944052
3 NaN NaN NaN 1.116637 -1.796448
4 NaN NaN NaN 1.947526 0.961545
Diff_Goals_2
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
0 0.850022
1 0.672965
2 1.274509
3 0.130338
4 -0.741825
The same alphabetical sorting of the columns happens with concat as well, so it looks like you have to reorder after appending.
EDIT
An alternative is to use join:
In [32]:
df1.join(df2)
Out[32]:
Org_Goals_1 Calc_Goals_1 Diff_Goals_1 Org_Goals_2 Calc_Goals_2 \
0 0.163745 1.608398 0.876040 0.651063 0.371263
1 -1.762973 -0.471050 -0.206376 1.323191 0.623045
2 0.166269 1.021835 -0.119982 1.005159 -0.831738
3 -0.400197 0.567782 -1.581803 0.417112 0.188023
4 -1.443269 -0.001080 0.804195 0.480510 -0.660761
Diff_Goals_2
0 -2.723280
1 2.463258
2 0.147251
3 2.328377
4 -0.248114
Actually, I found "advanced indexing" to work quite well:
df2 = df.loc[:, list_of_columns_in_desired_order]
(older pandas used df.ix here, which has since been removed in favour of .loc)
As I see it, the order is lost on append, but the original data should already have the correct column order. To maintain it, assuming a DataFrame 'alldata' and a DataFrame of new data 'newdata' to be appended, appending while keeping the column order of 'alldata' would be:
alldata.append(newdata)[list(alldata)]
(I encountered this problem with named date fields, where 'Month' would be sorted between 'Minute' and 'Second')
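In recent pandas versions DataFrame.append has been removed, but the same idea carries over to pd.concat followed by a column selection (a sketch; depending on the version, concat with its default sort=False may already keep the columns in order of appearance, and the trailing selection just makes the order explicit):
import numpy as np
import pandas as pd

df1 = pd.DataFrame(np.random.randn(5, 3),
                   columns=['Org_Goals_1', 'Calc_Goals_1', 'Diff_Goals_1'])
df2 = pd.DataFrame(np.random.randn(5, 3),
                   columns=['Org_Goals_2', 'Calc_Goals_2', 'Diff_Goals_2'])

# stack the frames, then pin the column order by selecting the columns explicitly
merged = pd.concat([df1, df2])[list(df1) + list(df2)]
print(merged.columns.tolist())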

Pandas expanding window with min_periods

I want to compute expanding window statistics, but with a minimum number of periods of 3 rather than 1. That is, I want it to start computing the statistic after a window of 3 values, and then include all values up until that point:
value expanding_min
------------------------
6 NaN
5 NaN
2 NaN
3 2
1 1
however, using
df['expanding_min']= df.groupby(groupby)['value'].transform(lambda x: pd.rolling_min(x, window=len(x), min_periods=3))
or
df['expanding_min']= df.groupby(groupby)['value'].transform(lambda x: pd.expanding_min(x, min_periods=3))
I get the following error:
ValueError: min_periods (3) must be <= window (1)
This works for me, changing from value to df.value:
pd.expanding_min(df.value, min_periods=3)
or
pd.rolling_min(df.value, window=len(df.value), min_periods=3)
both output:
0 NaN
1 NaN
2 2
3 2
4 1
dtype: float64
Perhaps your window is being set by some other 'value' whose length is 1? That would explain the error message pandas is giving.
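For reference, pd.expanding_min and pd.rolling_min were removed in later pandas versions; a sketch of the modern equivalent (the grouped variant assumes a hypothetical 'group' column):
import pandas as pd

df = pd.DataFrame({'value': [6, 5, 2, 3, 1]})

# expanding minimum that only starts reporting once 3 values are available
df['expanding_min'] = df['value'].expanding(min_periods=3).min()
print(df)

# grouped variant (hypothetical 'group' column):
# df['expanding_min'] = df.groupby('group')['value'].transform(
#     lambda x: x.expanding(min_periods=3).min())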