Split a DataFrame column into multiple columns - pandas

How can I split a data frame column that contains lists of strings like
[{'1','1','1','1'},{'1','1','1','1'},{'1','1','1','1'},{'1','1','1','1'}]
in each cell into multiple columns of the data frame?
Note that the lists in the cells of the column do not all have the same length.
In the image from the original question, the left side shows the first column and the right side shows the desired result.

As @Oliver Prislan comments -- that is an unusual structure -- did you mean something else? If your data really is structured like that, then here is a way you can get it into the new format:
import pandas as pd

# assumes that your original dataframe is called `df`
# creates a new dataframe called `new_df`
# removes the unwanted {} [] and '' characters, then splits each string
# on the comma and expands the pieces into separate columns
new_df = pd.DataFrame(df['Column0'].str.replace(r"[{}\[\]']", '', regex=True)
                                   .str.split(',', expand=True),
                      index=df.index)

# renames the columns as you wanted them (col0, col1, ...)
new_df.rename(columns='col{}'.format, inplace=True)
If your values are always numeric, you may want to convert the dataframe columns to numeric dtypes:
for col in new_df.columns:
    new_df[col] = pd.to_numeric(new_df[col])
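Equivalently, the whole frame can be converted in one call (a small variation, not from the original answer):
# apply pd.to_numeric to every column at once
new_df = new_df.apply(pd.to_numeric)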
Final result:
col0 col1 col2 col3 col4 col5 col6 col7 col8 col9 col10 col11 col12 col13 col14 col15
0 1 1 1 1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 1 1 1 1 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
2 1 1 1 1 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 NaN NaN NaN NaN
3 1 1 1 1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
4 1 1 1 1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
5 1 1 1 1 1.0 1.0 1.0 1.0 NaN NaN NaN NaN NaN NaN NaN NaN

Related

Pandas Long to Wide conversion

I am new to Pandas. I have a data set in this format:
UserID ISBN BookRatings
0 276725.0 034545104U 0.0
1 276726.0 155061224 5.0
2 276727.0 446520802 0.0
3 276729.0 052165615W 3.0
4 276729.0 521795028 6.0
I would like to create this:
ISBN     276725      276726     276727      276729
UserID
0        034545104U  0          0           0
1        0           155061224  0           0
2        0           0          446520802   0
3        0           0          0           052165615W
4        0           0          0           521795028
I tried pivot but was not successful. Any advice, please?
I think that pivot() is the right approach here. The most difficult part is getting the arguments right. I think we need to keep the original index, the new columns should be the values from column UserID, and we want to fill the new dataframe with the values from column ISBN.
For this, I first extract the original index as a column and then apply the pivot() function:
df = df.reset_index()
result = df.pivot(index='index', columns='UserID', values='ISBN')

# convert the float column labels to integers
# (only works if all user ids are numbers; drop NaN values first)
result.columns = result.columns.astype(int)
Output:
276725 276726 276727 276729
index
0 034545104U NaN NaN NaN
1 NaN 155061224 NaN NaN
2 NaN NaN 446520802 NaN
3 NaN NaN NaN 052165615W
4 NaN NaN NaN 521795028
Edit: If you want the same appearance as in the original dataframe, you have to apply the following line as well:
result = result.rename_axis(None, axis=0)
Output:
276725 276726 276727 276729
0 034545104U NaN NaN NaN
1 NaN 155061224 NaN NaN
2 NaN NaN 446520802 NaN
3 NaN NaN NaN 052165615W
4 NaN NaN NaN 521795028
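If you prefer zeros in the empty cells, as in the layout sketched in the question, a fillna on the pivot result should get you there. A minimal sketch (not part of the original answer), assuming df is the original dataframe from the question, before the reset_index above:
import pandas as pd

# same pivot as above, but fill the empty cells with 0
result = (df.reset_index()
            .pivot(index='index', columns='UserID', values='ISBN')
            .fillna(0)
            .rename_axis(None, axis=0))
result.columns = result.columns.astype(int)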

Pandas DataFrame subtraction is getting an unexpected result. Concatenating instead?

I have two dataframes of the same size (510x6)
preds
0 1 2 3 4 5
0 2.610270 -4.083780 3.381037 4.174977 2.743785 -0.766932
1 0.049673 0.731330 1.656028 -0.427514 -0.803391 -0.656469
2 -3.579314 3.347611 2.891815 -1.772502 1.505312 -1.852362
3 -0.558046 -1.290783 2.351023 4.669028 3.096437 0.383327
4 -3.215028 0.616974 5.917364 5.275736 7.201042 -0.735897
... ... ... ... ... ... ...
505 -2.178958 3.918007 8.247562 -0.523363 2.936684 -3.153375
506 0.736896 -1.571704 0.831026 2.673974 2.259796 -0.815212
507 -2.687474 -1.268576 -0.603680 5.571290 -3.516223 0.752697
508 0.182165 0.904990 4.690155 6.320494 -2.326415 2.241589
509 -1.675801 -1.602143 7.066843 2.881135 -5.278826 1.831972
510 rows × 6 columns
outputStats
0 1 2 3 4 5
0 2.610270 -4.083780 3.381037 4.174977 2.743785 -0.766932
1 0.049673 0.731330 1.656028 -0.427514 -0.803391 -0.656469
2 -3.579314 3.347611 2.891815 -1.772502 1.505312 -1.852362
3 -0.558046 -1.290783 2.351023 4.669028 3.096437 0.383327
4 -3.215028 0.616974 5.917364 5.275736 7.201042 -0.735897
... ... ... ... ... ... ...
505 -2.178958 3.918007 8.247562 -0.523363 2.936684 -3.153375
506 0.736896 -1.571704 0.831026 2.673974 2.259796 -0.815212
507 -2.687474 -1.268576 -0.603680 5.571290 -3.516223 0.752697
508 0.182165 0.904990 4.690155 6.320494 -2.326415 2.241589
509 -1.675801 -1.602143 7.066843 2.881135 -5.278826 1.831972
510 rows × 6 columns
when I execute:
preds - outputStats
I expect a 510 x 6 dataframe with elementwise subtraction. Instead I get this:
0 1 2 3 4 5 0 1 2 3 4 5
0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
3 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
... ... ... ... ... ... ... ... ... ... ... ... ...
505 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
506 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
507 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
508 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
509 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
I've tried dropping columns and the like, and that hasn't helped. I also get the same result with preds.subtract(outputStats). Any ideas?
There are many ways that two different values can appear the same when displayed. One of the main ways is if they are of different types but with corresponding values. For instance, depending on how you're displaying them, the int 1 and the str '1' may not be easily distinguished. You can also have whitespace characters, such as '1' versus ' 1'.
If the problem is that one set of column labels is int while the other is str, you can solve it by converting them all to int or all to str. To do the former, use df.columns = [int(col) for col in df.columns]; to do the latter, df.columns = [str(col) for col in df.columns]. Converting to str is somewhat safer, since converting to int raises an error if a string isn't amenable to conversion (e.g. int('y') fails), but ints can be more useful because they keep the numerical structure.
You asked in a comment about dropping columns. You can do this with drop, passing axis=1 to tell it to drop columns rather than rows, or you can use the del keyword. But changing the column names should remove the need to drop columns.
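A minimal sketch of the likely situation, assuming one frame picked up string column labels somewhere (for example after a round trip through CSV); the frame names mirror the question, but the data here is made up:
import numpy as np
import pandas as pd

# two frames whose columns print identically but have different label types
preds = pd.DataFrame(np.ones((3, 2)), columns=[0, 1])            # int labels
outputStats = pd.DataFrame(np.ones((3, 2)), columns=['0', '1'])  # str labels

print(preds - outputStats)   # 3x4 frame of NaN: the columns do not align

# normalise the labels and the subtraction becomes elementwise
outputStats.columns = [int(col) for col in outputStats.columns]
print(preds - outputStats)   # 3x2 frame of zeros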

Filtering/Querying Pandas DataFrame after multiple grouping/agg

I have a dataframe that I first group, counting QuoteLine items by stock (1 = true, 0 = false) and mfg type (K = Kit, M = Manufactured, P = Purchased). Ultimately, I am interested in quotes where ALL items are either NonStock/Kit and/or Stock/['M','P']:
grouped = df.groupby(['QuoteNum', 'typecode', 'stock']).agg({"QuoteLine": "count"})
and I get this:
QuoteLine-count
QuoteNum typecode stock
10001 K 0 1
10003 M 0 1
10005 M 0 3
1 1
10006 M 1 1
... ... ... ...
26961 P 1 1
26962 P 1 1
26963 P 1 2
26964 K 0 1
M 1 2
If I unstack it twice:
grouped = df.groupby(['QuoteNum', 'typecode', 'stock']).agg({"QuoteLine": "count"}).unstack().unstack()
# I get
QuoteLine-count
stock 0 1
typecode K M P K M P
QuoteNum
10001 1.0 NaN NaN NaN NaN NaN
10003 NaN 1.0 NaN NaN NaN NaN
10005 NaN 3.0 NaN NaN 1.0 NaN
10006 NaN NaN NaN NaN 1.0 NaN
10007 2.0 NaN NaN NaN NaN NaN
... ... ... ... ... ... ...
26959 NaN NaN NaN NaN NaN 1.0
26961 NaN 1.0 NaN NaN NaN 1.0
26962 NaN NaN NaN NaN NaN 1.0
26963 NaN NaN NaN NaN NaN 2.0
26964 1.0 NaN NaN NaN 2.0 NaN
Now I need to filter out all records where (this is where I need help):
# pseudo-code
(stock == 0 and typecode in ['M','P']) -> values are NOT NaN (don't want those)
and
(stock == 1 and typecode='K') -> values are NOT NaN (don't want those either)
so I'm left with these records:
Basically: columns 0/M, 0/P, and 1/K must be all NaN, and the other columns must have at least one non-NaN value.
QuoteLine-count
stock 0 1
typecode K M P K M P
QuoteNum
10001 1.0 NaN NaN NaN NaN NaN
10006 NaN NaN NaN NaN 1.0 NaN
10007 2.0 NaN NaN NaN NaN NaN
... ... ... ... ... ... ...
26959 NaN NaN NaN NaN NaN 1.0
26962 NaN NaN NaN NaN NaN 1.0
26963 NaN NaN NaN NaN NaN 2.0
26964 1.0 NaN NaN NaN 2.0 NaN
IIUC, use a boolean mask to set the rows that match your conditions to NaN, then unstack the desired levels:
import numpy as np

# shortcut (for readability)
lvl_vals = grouped.index.get_level_values

m1 = (lvl_vals('typecode') == 'K') & (lvl_vals('stock') == 0)
m2 = (lvl_vals('typecode').isin(['M', 'P'])) & (lvl_vals('stock') == 1)

grouped[m1 | m2] = np.nan

out = (grouped.unstack(level=['stock', 'typecode'])
              .loc[lambda x: x.isna().all(axis=1)])
Output:
>>> out
QuoteLine-count
stock 0 1
typecode K M M P
QuoteNum
10001 NaN NaN NaN NaN
10006 NaN NaN NaN NaN
26961 NaN NaN NaN NaN
26962 NaN NaN NaN NaN
26963 NaN NaN NaN NaN
26964 NaN NaN NaN NaN
The desired values could be obtained with as_index=False, but I am not sure if they are in the desired format.
grouped = df.groupby(['QuoteNum', 'typecode', 'stock'], as_index=False).agg({"QuoteLine": "count"})
grouped[((grouped["stock"] == 0) & (grouped["typecode"].isin(["M", "P"])))
        | ((grouped["stock"] == 1) & (grouped["typecode"].isin(["K"])))]
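If the end goal is the set of quotes where no item falls into those combinations, one way to finish (a sketch building on the snippet above, not part of the original answer) is to drop every QuoteNum that appears in the flagged rows:
# rows with the unwanted stock/typecode combinations (same mask as above)
bad = grouped[((grouped["stock"] == 0) & (grouped["typecode"].isin(["M", "P"])))
              | ((grouped["stock"] == 1) & (grouped["typecode"] == "K"))]

# keep only the quotes that never appear in the flagged rows
wanted = grouped[~grouped["QuoteNum"].isin(bad["QuoteNum"])]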

For every row in pandas, do until sample ID change

How can I iterate over rows in a dataframe until the sample ID changes?
my_df:
ID loc_start
sample1 10
sample1 15
sample2 10
sample2 20
sample3 5
Something like:
samples = ["sample1", "sample2", "sample3"]
out = pd.DataFrame()
for sample in samples:
    if my_df["ID"] == sample:
        my_list = []
        for index, row in my_df.iterrows():
            other_list = [row.loc_start]
            my_list.append(other_list)
        my_list = pd.DataFrame(my_list)
        out = pd.merge(out, my_list)
Expected output:
sample1 sample2 sample3
10 10 5
15 20
I realize of course that this could be done more easily if my_df really looked like this. However, what I'm after is the principle of iterating over rows until a certain column value changes.
Based on the input & output provided, this should work.
You need to provide more info if you are looking for something else.
df.pivot(columns='ID', values = 'loc_start').rename_axis(None, axis=1).apply(lambda x: pd.Series(x.dropna().values))
output
sample1 sample2 sample3
0 10.0 10.0 5.0
1 15.0 20.0 NaN
Ben.T is correct that a pivot works here. Here is an example:
import pandas as pd
import numpy as np
df = pd.DataFrame(data=np.random.randint(0, 5, (10, 2)), columns=list("AB"))
# what does the df look like? Here, I consider column A to be analogous to your "ID" column
In [5]: df
Out[5]:
A B
0 3 1
1 2 1
2 4 2
3 4 1
4 0 4
5 4 2
6 4 1
7 3 1
8 1 1
9 4 0
# now do a pivot and see what it looks like
df2 = df.pivot(columns="A", values="B")
In [8]: df2
Out[8]:
A 0 1 2 3 4
0 NaN NaN NaN 1.0 NaN
1 NaN NaN 1.0 NaN NaN
2 NaN NaN NaN NaN 2.0
3 NaN NaN NaN NaN 1.0
4 4.0 NaN NaN NaN NaN
5 NaN NaN NaN NaN 2.0
6 NaN NaN NaN NaN 1.0
7 NaN NaN NaN 1.0 NaN
8 NaN 1.0 NaN NaN NaN
9 NaN NaN NaN NaN 0.0
Not quite what you wanted. With a little help from Jezreal's answer:
df3 = df2.apply(lambda x: pd.Series(x.dropna().values))
In [20]: df3
Out[20]:
A 0 1 2 3 4
0 4.0 1.0 1.0 1.0 2.0
1 NaN NaN NaN 1.0 1.0
2 NaN NaN NaN NaN 2.0
3 NaN NaN NaN NaN 1.0
4 NaN NaN NaN NaN 0.0
The empty spots in the dataframe have to be filled with something, and NaN is used by default. Is this what you wanted?
If, on the other hand, you wanted to perform an operation on your data you would use the groupby instead.
df2 = df.groupby(by="A", as_index=False).mean()
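If you literally need to walk the rows block by block until the ID changes, as the question describes, a groupby loop is one way to express that. A sketch (not from either answer above), assuming each sample's rows are contiguous as in my_df:
# one block per ID value; sort=False keeps the order of first appearance
for sample_id, block in my_df.groupby("ID", sort=False):
    values = block["loc_start"].tolist()
    print(sample_id, values)   # e.g. sample1 [10, 15]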

Pandas access first column with duplicate column names

Looking for some help with accessing, by name, the first of several empty df columns that share a duplicate name.
Consider this dataframe:
import pandas as pd
df = pd.DataFrame(columns=['A', 'B', 'C', 'C', 'C', 'C', 'D', 'E'], index=[0,1,2,3])
A B C C C C D E
0 NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN
3 NaN NaN NaN NaN NaN NaN NaN NaN
Then I access a slice by indexer and column name:
indexer = [1,3]
df.loc[indexer, 'C']
C C C C
1 NaN NaN NaN NaN
3 NaN NaN NaN NaN
I want to edit only the first instance of column C so that I get
A B C C C C D E
0 NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN 99 NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN
3 NaN NaN 99 NaN NaN NaN NaN NaN
I tried df.loc[indexer, 'C'].iloc[:, 0] = 99, but it did not set the values.
Thanks in advance for your replies and ideas.
IIUC:
indexer = [1, 3]

# position of the first column labelled 'C'
col = (df.columns == 'C').argmax()
df.iloc[indexer, col] = 99
df
A B C C C C D E
0 NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN 99 NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN
3 NaN NaN 99 NaN NaN NaN NaN NaN
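As a side note (not part of either answer): the chained assignment tried in the question most likely fails because df.loc[indexer, 'C'] builds a new, temporary DataFrame, so the follow-up .iloc assignment only touches that copy:
tmp = df.loc[indexer, 'C']   # selecting duplicate labels returns a new frame
tmp.iloc[:, 0] = 99          # modifies the temporary copy; df itself is unchanged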
I would use Index.get_loc to get the slice of integer locations of the 'C' columns and pass its start to .iloc as follows:
indexer = [1, 3]

# get_loc returns a slice here because the duplicate 'C' columns are adjacent
df.iloc[indexer, df.columns.get_loc('C').start] = 99
Or using np.nonzero:
import numpy as np

c_loc = np.nonzero(df.columns == 'C')[0]
df.iloc[indexer, c_loc[0]] = 99
Out[87]:
A B C C C C D E
0 NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN 99 NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN
3 NaN NaN 99 NaN NaN NaN NaN NaN