I do not really understand why the following code returns a pandas Series and not a DataFrame.
import pandas as pd
df = pd.DataFrame([[4,9]]*3, columns = ["A", "B"])
def plus_2(x):
    y = []
    for i in range(0, len(x)):
        y.append(x[i] + 2)
    return y
df_row = df.apply(plus_2, axis = 1) # Applied to each row
df_row
Whereas if I change to axis=0 it produces a DataFrame as expected:
import pandas as pd
df = pd.DataFrame([[4,9]]*3, columns = ["A", "B"])
def plus_2(x):
    y = []
    for i in range(0, len(x)):
        y.append(x[i] + 2)
    return y
df_row = df.apply(plus_2, axis = 0) # Applied to each column
df_row
Here is the output. With axis=1 I get a Series of lists:
0    [6, 11]
1    [6, 11]
2    [6, 11]
dtype: object
and with axis=0 I get a DataFrame:
   A   B
0  6  11
1  6  11
2  6  11
In the first example, where you pass axis=1, the function works at the row level.
It means that for each row the plus_2 function returns y, which is a list of two elements; the list as a whole is a single value, so the result is a pd.Series of lists.
Based on your example, 3 lists are returned (2 elements each). Here a single list is a single row.
You could expand this result and create two columns (each element from the list becomes a new column) by adding result_type="expand" to apply:
df_row = df.apply(lambda x: plus_2(x), axis=1, result_type="expand")
# output
0 1
0 6 11
1 6 11
2 6 11
In the second approach you have axis=0, so the function works at the column level.
It means that for each column the plus_2 function returns y, so plus_2 is applied twice, separately for column A and column B. This is why it returns a dataframe: your input is a DataFrame with columns A and B, plus_2 is applied to each column, and the results become the A and B columns of the output.
Based on your example, 2 lists are returned (3 elements each). Here a single list is a single column.
So the main difference between axis=1 and axis=0 is that:
if you apply at the row level, apply will return:
[6, 11]
[6, 11]
[6, 11]
if you apply at the column level, apply will return:
[6, 6, 6]
[11, 11, 11]
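As a side note (not part of the answer above, just an illustration): if the function returns a pd.Series instead of a plain list, apply with axis=1 also builds a DataFrame and keeps the original column names. A minimal sketch of that variant:
import pandas as pd

df = pd.DataFrame([[4, 9]] * 3, columns=["A", "B"])

def plus_2_series(x):
    # returning a Series (instead of a list) lets apply rebuild a DataFrame
    return pd.Series([v + 2 for v in x], index=x.index)

df_row = df.apply(plus_2_series, axis=1)
print(df_row)
#    A   B
# 0  6  11
# 1  6  11
# 2  6  11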
Related
I have a DataFrame with two pandas Series as follows:
value accepted_values
0 1 [1, 2, 3, 4]
1 2 [5, 6, 7, 8]
I would like to efficiently check if the value is in accepted_values using pandas methods.
I already know I can do something like the following, but I'm interested in a faster approach if there is one (it took around 27 seconds on a DataFrame with 1 million rows):
import pandas as pd
df = pd.DataFrame({"value":[1, 2], "accepted_values": [[1,2,3,4], [5, 6, 7, 8]]})
def check_first_in_second(values: pd.Series):
    return values[0] in values[1]

are_in_accepted_values = df[["value", "accepted_values"]].apply(
    check_first_in_second, axis=1
)
if not are_in_accepted_values.all():
    raise AssertionError("Not all value in accepted_values")
I think if you create a DataFrame from the list column you can compare by DataFrame.eq and test whether at least one value per row matches by DataFrame.any:
df1 = pd.DataFrame(df["accepted_values"].tolist(), index=df.index)
are_in_accepted_values = df1.eq(df["value"]).any(axis=1).all()
Another idea:
are_in_accepted_values = all(v in a for v, a in df[["value", "accepted_values"]].to_numpy())
I found a small optimisation to your second idea. Using a bit more numpy than pandas makes it faster (more than 3x, tested with time.perf_counter()).
import numpy as np

values = df["value"].values
accepted_values = df["accepted_values"].values
are_in_accepted_values = all(s in e for s, e in np.column_stack([values, accepted_values]))
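For reference, a rough way to reproduce that kind of timing comparison (my sketch with a made-up million-row frame reusing the name df; the 3x figure above is the answerer's own measurement, not verified here):
import time
import numpy as np
import pandas as pd

# hypothetical benchmark data: one million rows, values always inside the accepted list
df = pd.DataFrame({
    "value": np.random.randint(1, 5, size=1_000_000),
    "accepted_values": [[1, 2, 3, 4]] * 1_000_000,
})

start = time.perf_counter()
stacked = np.column_stack([df["value"].values, df["accepted_values"].values])
are_in_accepted_values = all(s in e for s, e in stacked)
print(are_in_accepted_values, time.perf_counter() - start)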
I have a pandas DataFrame which contains 610 rows, and every row contains a nested list of coordinate pairs; it looks like this:
[1377778.4800000004, 6682395.377599999] is one coordinate pair.
I want to unnest every row, so instead of one row containing a list of coordinates I will have one row for every coordinate pair, i.e.:
I've tried s.apply(pd.Series).stack() from this question Split nested array values from Pandas Dataframe cell over multiple rows but unfortunately that didn't work.
Please any ideas? Many thanks in advance!
Here is my new answer to your problem. I used reduce to flatten your nested array and then itertools.chain to turn everything into a 1d list. After that I reshaped the list into a 2d array, which allows you to convert it to the dataframe that you need. I tried to be as generic as possible. Please let me know if there are any problems.
#libraries
import operator
from functools import reduce
from itertools import chain
import numpy as np
import pandas as pd

#flatten the lists of lists using reduce, then turn everything into a 1d list
#using itertools.chain (geometry_list is your column of nested coordinate lists)
reduced_coordinates = list(chain.from_iterable(reduce(operator.concat, geometry_list)))

#reshape the 1d list of coordinates to 2d and convert it to a dataframe
df = pd.DataFrame(np.reshape(reduced_coordinates, (-1, 2)))
df.columns = ['X', 'Y']
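A shorter way to get the same shape with pandas itself is Series.explode (available in pandas 0.25 and later). This is a minimal sketch with a small made-up nested list, since geometry_list is not shown in the question:
import pandas as pd

# hypothetical input: one row holding a nested list of coordinate pairs
s = pd.Series([[[1377778.48, 6682395.3776], [6582395.3776, 2577778.48]]])

# explode gives one row per [x, y] pair, which is then split into two columns
pairs = s.explode().tolist()
df = pd.DataFrame(pairs, columns=['X', 'Y'])
print(df)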
One thing you can do is use numpy. It allows you to perform a lot of list/array operations in a fast and efficient way, including "unnesting" (reshaping) lists. Then you only have to convert the result to a pandas dataframe.
For example,
import numpy as np
import pandas as pd

#your list
coordinate_list = [[[1377778.4800000004, 6682395.377599999], [6582395.377599999, 2577778.4800000004], [6582395.377599999, 2577778.4800000004]]]

#convert list to array
coordinate_array = np.array(coordinate_list)

#print shape of array
print(coordinate_array.shape)

#reshape array into pairs of coordinates
reshaped_array = np.reshape(coordinate_array, (3, 2))
df = pd.DataFrame(reshaped_array)
df.columns = ['X', 'Y']
The output will look like this. Let me know if there is something I am missing.
import pandas as pd
import numpy as np
data = np.arange(500).reshape([250, 2])
cols = ['coord']
new_data = []
for item in data:
    new_data.append([item])
df = pd.DataFrame(data=new_data, columns=cols)
print(df.head())
def expand(row):
    row['x'] = row.coord[0]
    row['y'] = row.coord[1]
    return row
df = df.apply(expand, axis=1)
df.drop(columns='coord', inplace=True)
print(df.head())
RESULT
coord
0 [0, 1]
1 [2, 3]
2 [4, 5]
3 [6, 7]
4 [8, 9]
x y
0 0 1
1 2 3
2 4 5
3 6 7
4 8 9
This is an extension of my previous question at: Comparing columns of dataframes and returning the difference.
After comparing the columns of all the dataframes in my collection of 37 dataframes, I found that some of the dataframes have the same columns while others have different ones. So there is now a need to compare these different dataframes and return the difference. This step should continue until all the dataframes have been sorted into groups: dataframes with the same columns go into one group, and dataframes with different columns go into other groups.
for example:
df = [None] * 6
df[0] = pd.DataFrame({'a':[1,2,3],'b':[3,4,5], 'c':[7,8,3], 'd':[1,5,3]})
df[1] = pd.DataFrame({'a':[1,2,3],'b':[3,4,5], 'c':[7,8,3], 'd':[1,5,3]})
df[2] = pd.DataFrame({'a':[1,2,3],'b':[3,4,5], 'x':[7,8,3], 'y':[1,5,3]})
df[3] = pd.DataFrame({'a':[1,2,3],'b':[3,4,5], 'c':[7,8,3], 'd':[1,5,3]})
df[4] = pd.DataFrame({'a':[1,2,3],'b':[3,4,5], 'x':[7,8,3], 'z':[1,5,3]})
df[5] = pd.DataFrame({'a':[1,2,3],'b':[3,4,5], 'x':[7,8,3], 'y':[1,5,3]})
# code to group the dataframes into similar and different cols groups
nsame = []
same = []
for i in range(0, len(df)):
    for j in range(i+1, len(df)):
        if not (df[i].columns.equals(df[j].columns)):
            nsame.append(j)
        else:
            same.append(i)
When I print the same group (same) from the above code, the output is:
print(same)
[0, 0, 1, 2]
Desired output:
print(same)
[0, 1, 3]
Perhaps I need a recursive function to group all dataframes with the same columns into one group and dataframes with different columns into other groups. However, the tricky part is that there can be more than two groups. For example, in the above code, there are 3 groups:
Group1: df[0], df[1], df[3]
Group2: df[2], df[5]
Group3: df[4]
Can someone help here?
Here is one way
s = pd.Series([','.join(x) for x in df])
s.groupby(s).groups # the output here already puts the dfs into groups
Out[695]:
{'a,b,c,d': Int64Index([0, 1, 3], dtype='int64'),
'a,b,x,y': Int64Index([2, 5], dtype='int64'),
'a,b,x,z': Int64Index([4], dtype='int64')}
[y.index.tolist() for x , y in s.groupby(s)]
Out[699]: [[0, 1, 3], [2, 5], [4]]
Isn't it easier to pass all the column names into a separate pandas dataframe, i.e.:
a - b - c - d
a - b - c - d
a - b - x - y
...
and just do a simple groupby over the columns?
The count() series over the groupby result will give the desired result.
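A minimal sketch of that suggestion (my illustration, not the answerer's code), assuming every frame has the same number of columns and using shortened example frames with the same column layout as the question's:
import pandas as pd

# the six example frames from the question, reduced to one row each
df = [
    pd.DataFrame({'a': [1], 'b': [3], 'c': [7], 'd': [1]}),
    pd.DataFrame({'a': [1], 'b': [3], 'c': [7], 'd': [1]}),
    pd.DataFrame({'a': [1], 'b': [3], 'x': [7], 'y': [1]}),
    pd.DataFrame({'a': [1], 'b': [3], 'c': [7], 'd': [1]}),
    pd.DataFrame({'a': [1], 'b': [3], 'x': [7], 'z': [1]}),
    pd.DataFrame({'a': [1], 'b': [3], 'x': [7], 'y': [1]}),
]

# one row of column names per dataframe, then group on all of those columns
cols_df = pd.DataFrame([list(d.columns) for d in df])
groups = [g.index.tolist() for _, g in cols_df.groupby(list(cols_df.columns))]
print(groups)  # [[0, 1, 3], [2, 5], [4]]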
Suppose I have a sorted dataframe and a list of target values as below
In [57]: df
Out[57]:
value
0 1
1 2
2 3
3 4
4 5
5 6
In [58]: target_values=[1.5, 3.5, 5.5]
What I want is to get, for each target value, the first row whose value is >= that target. In the example above, the indices of such rows are [1, 3, 5].
I can achieve the goal with the following code:
In [60]: [df[df.value >= t].iloc[0] for t in target_values]
However, it will scan the dataframe len(target_values) times. Is there a Pandas function which can achieve the goal with just one scan?
It's called searchsorted. You can use the pandas method or the numpy one.
pandas
df.value.searchsorted(target_values)
array([1, 3, 5])
numpy
df.value.values.searchsorted(target_values)
array([1, 3, 5])
import numpy as np

#build a pairwise difference matrix
pairwise_diff = df.values[:, None] - target_values
#find the index of the smallest non-negative diff for each target value
np.ma.array(pairwise_diff, mask=(pairwise_diff < 0)).argmin(0)
Out[178]: array([[1, 3, 5]], dtype=int64)
I tried the following code to select columns from a dataframe. My dataframe has about 50 columns. At the end, I want to create the sum of the selected columns, create a new column with these sum values and then delete the selected columns.
I started with
columns_selected = ['A','B','C','D','E']
df = df[df.column.isin(columns_selected)]
but it said AttributeError: 'DataFrame' object has no attribute 'column'
Regarding the sum: as I don't want to write
df['sum_1'] = df['A']+df['B']+df['C']+df['D']+df['E']
I also thought that something like
df['sum_1'] = df[columns_selected].sum(axis=1)
would be more convenient.
You want df[columns_selected] to sub-select the df by a list of columns.
You can then do df['sum_1'] = df[columns_selected].sum(axis=1).
To filter the df to just the cols of interest, pass a list of the columns: df = df[columns_selected]. Note that it's a common error to just pass the strings without a list, df = df['a','b','c'], which will raise a KeyError.
Note that you had a typo in your original attempt:
df = df.loc[:,df.columns.isin(columns_selected)]
The above would've worked: firstly, you needed columns, not column; secondly, you can use the boolean mask against the columns by passing it to loc as the column selection argument:
In [49]:
df = pd.DataFrame(np.random.randn(5,5), columns=list('abcde'))
df
Out[49]:
a b c d e
0 -0.778207 0.480142 0.537778 -1.889803 -0.851594
1 2.095032 1.121238 1.076626 -0.476918 -0.282883
2 0.974032 0.595543 -0.628023 0.491030 0.171819
3 0.983545 -0.870126 1.100803 0.139678 0.919193
4 -1.854717 -2.151808 1.124028 0.581945 -0.412732
In [50]:
cols = ['a','b','c']
df.loc[:, df.columns.isin(cols)]
Out[50]:
a b c
0 -0.778207 0.480142 0.537778
1 2.095032 1.121238 1.076626
2 0.974032 0.595543 -0.628023
3 0.983545 -0.870126 1.100803
4 -1.854717 -2.151808 1.124028
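Putting the pieces together for the original goal (sum the selected columns into a new column, then drop them), here is a minimal sketch with made-up data; the column names follow the question:
import numpy as np
import pandas as pd

# hypothetical frame with more columns than we want to keep
df = pd.DataFrame(np.random.randn(5, 10), columns=list('ABCDEFGHIJ'))
columns_selected = ['A', 'B', 'C', 'D', 'E']

# row-wise sum of the selected columns into a new column...
df['sum_1'] = df[columns_selected].sum(axis=1)

# ...then drop the selected columns
df = df.drop(columns=columns_selected)
print(df.head())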