Both tables that I merge have their cells formatted correctly as numbers, but after a left join the numbers coming from one of the original tables are displayed in scientific notation (you see e+ in them). What should I do to see those numbers in full?
Problem: When merging, some SKU values that appear in df1 do not appear in df2. In order to represent unavailable values, pandas automatically uses NaN, which is a floating point value. Thus, the integer ISBNs are converted to float. Given the size of the ISBNs, pandas then formats these floating point values in scientific notation.
You could solve this by defining your own floating-point formatter (pd.options.display.float_format; a display-only sketch follows the example below), but in your case it might be easier / more effective to convert the ISBNs to strings before merging.
Example:
>>> import pandas as pd
>>> df1 = pd.DataFrame({"SKU": list("abcde"), "ISBN": list(range(1, 6))})
>>> df2 = pd.DataFrame({"SKU": list("bcef"), "ISBN": list(range(4, 8))})
Your problem:
>>> pd.merge(df1, df2, on="SKU", how="left")
SKU ISBN_x ISBN_y
0 a 1 NaN
1 b 2 4.0
2 c 3 5.0
3 d 4 NaN
4 e 5 6.0
>>> _.dtypes
SKU object
ISBN_x int64
ISBN_y float64 # <<< Problematic
vs possible solution:
>>> pd.merge(df1.astype(str), df2.astype(str), on="SKU", how="left")
SKU ISBN_x ISBN_y
0 a 1 NaN
1 b 2 4
2 c 3 5
3 d 4 NaN
4 e 5 6
>>> _.dtypes
SKU object
ISBN_x object
ISBN_y object
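For completeness, a minimal sketch of the display-only alternative mentioned at the top (pd.options.display.float_format); the format string here is just an illustration, and it only changes how the floats are printed, not the float64 dtype itself:
>>> pd.options.display.float_format = "{:.0f}".format  # print floats without scientific notation
>>> pd.merge(df1, df2, on="SKU", how="left")  # ISBN_y is still float64, but displayed in full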
Related
I am trying to split a column into multiple columns based on comma/space separation.
My dataframe currently looks like
KEYS 1
0 FIT-4270 4000.0439
1 FIT-4269 4000.0420, 4000.0471
2 FIT-4268 4000.0419
3 FIT-4266 4000.0499
4 FIT-4265 4000.0490, 4000.0499, 4000.0500, 4000.0504,
I would like
KEYS 1 2 3 4
0 FIT-4270 4000.0439
1 FIT-4269 4000.0420 4000.0471
2 FIT-4268 4000.0419
3 FIT-4266 4000.0499
4 FIT-4265 4000.0490 4000.0499 4000.0500 4000.0504
My code currently removes the KEYS column and I'm not sure why. Could anyone improve it or help fix the issue?
v = dfcleancsv[1]
#splits the columns by spaces into new columns but removes KEYS?
dfcleancsv = dfcleancsv[1].str.split(' ').apply(Series, 1)
In case someone else wants to split a single column (delimited by a value) into multiple columns - try this:
series.str.split(',', expand=True)
This answered the question I came here looking for.
Credit to EdChum's code that includes adding the split columns back to the dataframe.
pd.concat([df[[0]], df[1].str.split(', ', expand=True)], axis=1)
Note: the first argument, df[[0]], is a DataFrame.
The second argument, df[1].str.split, is the Series that you want to split.
split Documentation
concat Documentation
Using EdChum's answer of
pd.concat([df[[0]], df[1].str.split(', ', expand=True)], axis=1)
I was able to solve it by substituting my variables.
dfcleancsv = pd.concat([dfcleancsv['KEYS'], dfcleancsv[1].str.split(', ', expand=True)], axis=1)
The OP had a variable number of output columns.
In the particular case of a fixed number of output columns, another elegant way to name the resulting columns is to use multiple assignment.
Load a sample dataset and reshape it to long format to obtain a variable
called organ_dimension.
import seaborn
iris = seaborn.load_dataset('iris')
df = iris.melt(id_vars='species', var_name='organ_dimension', value_name='value')
Split the organ_dimension variable into two variables, organ and dimension, based on the _ separator.
df[['organ', 'dimension']] = df['organ_dimension'].str.split('_', expand=True)
df.head()
Out[10]:
species organ_dimension value organ dimension
0 setosa sepal_length 5.1 sepal length
1 setosa sepal_length 4.9 sepal length
2 setosa sepal_length 4.7 sepal length
3 setosa sepal_length 4.6 sepal length
4 setosa sepal_length 5.0 sepal length
Based on this answer "How to split a column into two columns?"
The simplest way is vectorization:
df = df.apply(lambda x:pd.Series(x))
maybe this should work:
df = pd.concat([df['KEYS'],df[1].apply(pd.Series)],axis=1)
Check this out
Responder_id LanguagesWorkedWith
0 1 HTML/CSS;Java;JavaScript;Python
1 2 C++;HTML/CSS;Python
2 3 HTML/CSS
3 4 C;C++;C#;Python;SQL
4 5 C++;HTML/CSS;Java;JavaScript;Python;SQL;VBA
... ... ...
87564 88182 HTML/CSS;Java;JavaScript
87565 88212 HTML/CSS;JavaScript;Python
87566 88282 Bash/Shell/PowerShell;Go;HTML/CSS;JavaScript;W...
87567 88377 HTML/CSS;JavaScript;Other(s):
87568 88863 Bash/Shell/PowerShell;HTML/CSS;Java;JavaScript...
Split the LanguagesWorkedWith column into multiple columns by using data = data1['LanguagesWorkedWith'].str.split(';').apply(pd.Series).
data1 = pd.read_csv('data.csv', sep=',')
data1.set_index('Responder_id', inplace=True)
data1
data1.loc[1,:]
data = data1['LanguagesWorkedWith'].str.split(';').apply(pd.Series)
data.head()
You may also want to try datar, a package that ports dplyr, tidyr and related R packages to Python:
>>> df
i j A
<object> <int64> <object>
0 AR 5 Paris,Green
1 For 3 Moscow,Yellow
2 For 4 NewYork,Black
>>> from datar import f
>>> from datar.tidyr import separate
>>> separate(df, f.A, ['City', 'Color'])
i j City Color
<object> <int64> <object> <object>
0 AR 5 Paris Green
1 For 3 Moscow Yellow
2 For 4 NewYork Black
For example I have a dataframe as follows:
A B C D
0 1.049380 0.512696 0.135421 1.396424
1 -0.367589 -0.741008 -1.543296 0.355291
2 1.244623 -0.295761 1.238826 -0.017174
3 0.378124 0.870361 -0.733288 -0.228948
I want to call stats.ttest_ind on every combination of two columns and get a new dataframe as follows (don't care about the dummy values):
A B C D
A nan 0.512696 0.135421 1.396424
B -0.367589 nan -1.543296 0.355291
C 1.244623 -0.295761 nan -0.017174
D 0.378124 0.870361 -0.733288 nan
You could use a list comprehension:
import numpy as np
from scipy import stats
ttest_lists = [[stats.ttest_ind(df[col_i], df[col_j]) if col_i != col_j else np.nan
                for col_i in df] for col_j in df]
To get a DataFrame rather than a list of lists, you can then use:
ttest_df = pd.DataFrame(ttest_lists, columns=df.columns, index=df.columns)
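Note that stats.ttest_ind returns a result with statistic and pvalue attributes, so each cell above holds that pair. If you only care about the p-values, a small variation of the same comprehension (a sketch, using the imports above) is:
pvalues = [[stats.ttest_ind(df[col_i], df[col_j]).pvalue if col_i != col_j else np.nan
            for col_i in df] for col_j in df]
pvalue_df = pd.DataFrame(pvalues, columns=df.columns, index=df.columns)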
I've got a pandas DataFrame filled mostly with real numbers, but there are a few NaN values in it as well.
How can I replace the NaNs with the averages of the columns they are in?
This question is very similar to this one: numpy array: replace nan values with average of columns but, unfortunately, the solution given there doesn't work for a pandas DataFrame.
You can simply use DataFrame.fillna to fill the nan's directly:
In [27]: df
Out[27]:
A B C
0 -0.166919 0.979728 -0.632955
1 -0.297953 -0.912674 -1.365463
2 -0.120211 -0.540679 -0.680481
3 NaN -2.027325 1.533582
4 NaN NaN 0.461821
5 -0.788073 NaN NaN
6 -0.916080 -0.612343 NaN
7 -0.887858 1.033826 NaN
8 1.948430 1.025011 -2.982224
9 0.019698 -0.795876 -0.046431
In [28]: df.mean()
Out[28]:
A -0.151121
B -0.231291
C -0.530307
dtype: float64
In [29]: df.fillna(df.mean())
Out[29]:
A B C
0 -0.166919 0.979728 -0.632955
1 -0.297953 -0.912674 -1.365463
2 -0.120211 -0.540679 -0.680481
3 -0.151121 -2.027325 1.533582
4 -0.151121 -0.231291 0.461821
5 -0.788073 -0.231291 -0.530307
6 -0.916080 -0.612343 -0.530307
7 -0.887858 1.033826 -0.530307
8 1.948430 1.025011 -2.982224
9 0.019698 -0.795876 -0.046431
The docstring of fillna says that value should be a scalar or a dict; however, it seems to work with a Series as well. If you want to pass a dict, you could use df.mean().to_dict().
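For example, a minimal sketch of the dict variant, which gives the same result as In [29] above:
df.fillna(df.mean().to_dict())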
Try:
sub2['income'].fillna((sub2['income'].mean()), inplace=True)
In [16]: df = DataFrame(np.random.randn(10,3))
In [17]: df.iloc[3:5,0] = np.nan
In [18]: df.iloc[4:6,1] = np.nan
In [19]: df.iloc[5:8,2] = np.nan
In [20]: df
Out[20]:
0 1 2
0 1.148272 0.227366 -2.368136
1 -0.820823 1.071471 -0.784713
2 0.157913 0.602857 0.665034
3 NaN -0.985188 -0.324136
4 NaN NaN 0.238512
5 0.769657 NaN NaN
6 0.141951 0.326064 NaN
7 -1.694475 -0.523440 NaN
8 0.352556 -0.551487 -1.639298
9 -2.067324 -0.492617 -1.675794
In [22]: df.mean()
Out[22]:
0 -0.251534
1 -0.040622
2 -0.841219
dtype: float64
Apply, per column, the mean of that column and fill:
In [23]: df.apply(lambda x: x.fillna(x.mean()),axis=0)
Out[23]:
0 1 2
0 1.148272 0.227366 -2.368136
1 -0.820823 1.071471 -0.784713
2 0.157913 0.602857 0.665034
3 -0.251534 -0.985188 -0.324136
4 -0.251534 -0.040622 0.238512
5 0.769657 -0.040622 -0.841219
6 0.141951 0.326064 -0.841219
7 -1.694475 -0.523440 -0.841219
8 0.352556 -0.551487 -1.639298
9 -2.067324 -0.492617 -1.675794
Although the code below does the job, its performance takes a big hit once you deal with a DataFrame of 100k records or more:
df.fillna(df.mean())
In my experience, one should replace NaN values (be it with the mean or the median) only where it is required, rather than applying fillna() all over the DataFrame.
I had a DataFrame with 20 variables, and only 4 of them required NaN treatment (replacement). I tried the above code (Code 1), along with a slightly modified version of it (Code 2), where I ran it selectively, i.e. only on the variables which had NaN values.
#------------------------------------------------
#----(Code 1) Treatment on overall DataFrame-----
df.fillna(df.mean())
#------------------------------------------------
#----(Code 2) Selective Treatment----------------
for i in df.columns[df.isnull().any(axis=0)]: #---Applying Only on variables with NaN values
df[i].fillna(df[i].mean(),inplace=True)
#---df.isnull().any(axis=0) gives True/False flag (Boolean value series),
#---which when applied on df.columns[], helps identify variables with NaN values
Below is the performance I observed as I kept increasing the number of records in the DataFrame:
DataFrame with ~100k records
Code 1: 22.06 Seconds
Code 2: 0.03 Seconds
DataFrame with ~200k records
Code 1: 180.06 Seconds
Code 2: 0.06 Seconds
DataFrame with ~1.6 Million records
Code 1: code kept running endlessly
Code 2: 0.40 Seconds
DataFrame with ~13 Million records
Code 1: --did not even try, after seeing performance on 1.6 Mn records--
Code 2: 3.20 Seconds
Apologies for the long answer! Hope this helps!
If you want to impute missing values with the mean and go column by column, then this will impute each column only with the mean of that column. This might be a little more readable.
sub2['income'] = sub2['income'].fillna((sub2['income'].mean()))
import numpy as np
import pandas as pd

# To read data from csv file
Dataset = pd.read_csv('Data.csv')
X = Dataset.iloc[:, :-1].values
# To calculate mean use imputer class
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
imputer = imputer.fit(X[:, 1:3])
X[:, 1:3] = imputer.transform(X[:, 1:3])
Directly use df.fillna(df.mean()) to fill all the null values with the mean.
If you want to fill the null values with the mean of that column only, you can use this.
Suppose x = df['Item_Weight'], where Item_Weight is the column name.
Here we are assigning (filling the null values of x with the mean of x back into x):
df['Item_Weight'] = df['Item_Weight'].fillna((df['Item_Weight'].mean()))
If you want to fill the null values with some string instead, use the following, where Outlet_Size is the column name:
df.Outlet_Size = df.Outlet_Size.fillna('Missing')
Pandas: How to replace NaN (nan) values with the average (mean), median or other statistics of one column
Say your DataFrame is df and you have one column called nr_items. This is: df['nr_items']
If you want to replace the NaN values of your column df['nr_items'] with the mean of the column:
Use method .fillna():
mean_value=df['nr_items'].mean()
df['nr_item_ave']=df['nr_items'].fillna(mean_value)
I have created a new column, nr_item_ave, to store the values with the NaNs replaced by the mean of the column.
You should be careful when using the mean: if you have outliers, it is more advisable to use the median (a sketch follows).
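For example, a median-based sketch of the same pattern, reusing the hypothetical nr_items column from above:
median_value = df['nr_items'].median()
df['nr_item_ave'] = df['nr_items'].fillna(median_value)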
Another option besides those above is:
df = df.groupby(df.columns, axis = 1).transform(lambda x: x.fillna(x.mean()))
It's less elegant than previous responses for mean, but it could be shorter if you desire to replace nulls by some other column function.
Using the SimpleImputer class from the sklearn library:
import numpy as np
from sklearn.impute import SimpleImputer

# SimpleImputer has no axis parameter; it always imputes column-wise
missingvalues = SimpleImputer(missing_values=np.nan, strategy='mean')
missingvalues = missingvalues.fit(x[:, 1:3])
x[:, 1:3] = missingvalues.transform(x[:, 1:3])
Note: in recent versions, the missing_values parameter changed to np.nan from the string 'NaN', and SimpleImputer no longer takes an axis parameter (it always imputes column-wise).
I use this method to fill missing values with the average of a column:
fill_mean = lambda col : col.fillna(col.mean())
df = df.apply(fill_mean, axis = 0)
You can also use value_counts to get the most frequent values. This would work on different datatypes.
df = df.apply(lambda x:x.fillna(x.value_counts().index[0]))
Here is the value_counts api reference.
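A closely related alternative (a sketch, not part of the original answer) is DataFrame.mode, whose first row holds the most frequent value of each column:
df = df.fillna(df.mode().iloc[0])  # fill each column's NaNs with that column's most frequent value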
I have a DataFrame with 5 columns. 3 columns are of character (object) type, and the others are numeric. I want to fill the missing values of the character-type columns with "missing".
I have written an update statement like the one below, but it's not working.
df.select_dtypes(include='object') = df.select_dtypes(include='object').apply(lambda x: x.fillna('missing'))
It works only when I specify the column names:
df[['Manufacturer','Model','Type']] = df.select_dtypes(include='object').apply(lambda x: x.fillna('missing'))
Could you please tell me how I can correct my first update statement?
Here df.select_dtypes(include='object') returns a new DataFrame, so you cannot assign to it as in the first statement. A possible solution is to use DataFrame.update (which works in place); apply is also not necessary here.
print (df)
Manufacturer Model Type a c
0 a g NaN 4 NaN
1 NaN NaN aa 4 8.0
df.update(df.select_dtypes(include='object').fillna('missing'))
print (df)
Manufacturer Model Type a c
0 a g missing 4 NaN
1 missing missing aa 4 8.0
Or get the names of the string columns and assign to them directly:
cols = df.select_dtypes(include='object').columns
df[cols] = df[cols].fillna('missing')
print (df)
Why do simple DataFrame-op-DataFrame operations result in a unioned DataFrame? The pandas documentation mentions unioning because of alignment issues, but I don't see any alignment issues with df1 and df2. Aren't alignment issues about different shapes, different dtypes, or different indexes?
df1 = pd.DataFrame([[1,2],[3,4]],columns=list('AB'))
df2 = pd.DataFrame([[5,6],[7,8]],columns=list('CD'))
>> df1*df2
A B C D
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
One source of alignment issues is non-matching column names: here, alignment requires identical column names. Either make the column names the same (one way is sketched after the example below) or use .values. Using .values on just the right-hand DataFrame will retain the DataFrame type.
>> df1*df2.values
A B
0 5 12
1 21 32
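If you would rather keep both operands as DataFrames, one way to make the column names match (a sketch; set_axis here assumes the two frames have the same number of columns, and the relabelling is only for the multiplication) is:
>>> df1 * df2.set_axis(df1.columns, axis=1)
This gives the same A/B result as df1*df2.values above, but keeps the operation DataFrame-to-DataFrame.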