Selecting two sets of columns from a DataFrame with all rows - pandas

I have a DataFrame with 28 columns (features) and 600 rows (instances). I want to select all rows, but only columns 0-11 and 16-27; in other words, I don't want columns 12 through 15.
I wrote the following code, but it throws a syntax error at the : in 0:12 and 16:. Can someone help me understand why?
X = df.iloc[:,[0:12,16:]]
I know there are other ways of selecting these columns, but I am curious why this one does not work, and how I should write it so that it does (if there is a way).
For now, I have written it as:
X = df.iloc[:,0:12]
X = X + df.iloc[:,16:]
which seems to return an incorrect result: I have already treated the NaN values of df, but when I use this code, X is full of NaNs!
Thanks for your feedback in advance.

You can use np.r_ to concatenate the slices into a single array of integer positions. Note that np.r_ needs explicit stop values (it cannot infer the frame's width), so 16: has to be spelled 16:28 here:
import numpy as np

X = df.iloc[:, np.r_[0:12, 16:28]]

iloc has these allowed inputs (from the docs):
An integer, e.g. 5.
A list or array of integers, e.g. [4, 3, 0].
A slice object with ints, e.g. 1:7.
A boolean array.
A callable function with one argument (the calling Series or DataFrame) and that returns valid output for indexing (one of the above). This is useful in method chains, when you don’t have a reference to the calling object, but would like to base your selection on some value.
What you wrote in X = df.iloc[:,[0:12,16:]] is not actually a list of anything: slice syntax with : is only valid directly inside subscript brackets, so 0:12 inside a list literal is a SyntaxError. And even if you built the slices explicitly with slice(0, 12), iloc does not accept a list of slice objects (it is not among the allowed inputs above). You need to convert those slices to a single list of integers, and a convenient way to do that is the np.r_ index helper:
X = df.iloc[:, np.r_[0:12, 16:28]]
As for your current workaround: X + df.iloc[:,16:] does not concatenate the two pieces. + performs element-wise addition with index alignment, and because the two frames share no columns, every cell of the aligned result is NaN; that is where your NaNs come from. pd.concat joins the two slices side by side instead, as shown below.
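A minimal runnable sketch (using a toy 28-column frame as a stand-in for the real data) showing what np.r_ produces, plus the pd.concat alternative:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(5 * 28).reshape(5, 28))  # toy stand-in, 5 rows x 28 columns

# np.r_ flattens the two slices into one array of integer positions
print(np.r_[0:4, 16:19])  # -> [ 0  1  2  3 16 17 18]

X = df.iloc[:, np.r_[0:12, 16:28]]  # 24 columns, no NaNs introduced

# equivalent: slice twice and glue the pieces together column-wise
X2 = pd.concat([df.iloc[:, 0:12], df.iloc[:, 16:]], axis=1)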

Related

Check multiple columns for multiple values and return a dataframe

I have a list of strings, and my dataframe has several columns that I need to search (each of type object).
I need to return all rows where any of the selected columns contains any of the string items, either as the whole cell value or as part of it.
How do I check whether 4 columns in my dataframe contain any one of the items in the list of strings? The string inside the column may contain part of a string from the list, but probably won't contain all of it.
I've tried 'list' both as a tuple and as a Python list:
list = ("25110", "25910", "25990", "30110", "33110", "43999")
new_df = df.loc[(df['column1'].isin(list))
| (df['column2'].isin(list))
| (df['column3'].isin(list))
| (df['column4'].isin(list))]
When I run new_df.shape, I get (0, 12).
I'm new to pandas, have a mountain of analysis to do for an intense uni project, and can't get this to work. Do I need to convert each column to a string datatype first? (I've actually already tried that as well, but each dtype is still stubbornly 'object'.)
IIUC: isin only matches whole cell values exactly, which is why you get 0 rows; for partial matches you need str.contains. First define (lst rather than list, to avoid shadowing the built-in):
lst = ["25110", "25910", "25990", "30110", "33110", "43999"]
cols = ['column1', 'column2', 'column3', 'column4']
Finally, build a boolean row mask and filter with it:
m = df[cols].astype(str).apply(lambda c: c.str.contains('|'.join(lst))).any(axis=1)
# you can also use agg() in place of apply()
df[m]
# or
df.loc[m]
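A small self-contained sketch (with made-up cell values) of how the mask behaves:
import pandas as pd

lst = ["25110", "25910"]
df = pd.DataFrame({'column1': ['code 25110-A', 'nothing', 'x'],
                   'column2': ['foo', '25910', 'bar']})

# str.contains('25110|25910') is a regex OR over the search terms;
# astype(str) guards against non-string cells in object columns
m = df[['column1', 'column2']].astype(str).apply(
        lambda c: c.str.contains('|'.join(lst))).any(axis=1)
print(df[m])  # rows 0 and 1 match: a partial hit in row 0, an exact one in row 1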

Assigning value to an iloc slice with condition on a separate column?

I would like to slice my dataframe using iloc (rather than loc) + some condition based on one of the dataframe's columns and assign a value to all the items in this slice (which is effectively a subset of the main dataframe).
My simplified attempt:
df.iloc[:, 1:21][df['column1'] == 'some_value'] = 1
This is meant to take a slice of the dataframe:
All rows;
Columns at positions 1 through 20, i.e. the 2nd to the 21st column;
Then slice it again:
Only the rows where column1 = some_value.
The slicing works fine, but assigning 1 to the result doesn't: nothing changes in df, and I get this warning:
A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead
I really need to use iloc rather than loc if possible. It feels like there should be a way of doing this?
You can search for that warning on SO; in short, you should do the update in one single loc/iloc call:
df.loc[df['column1']=='some_value', df.columns[1:21]] = 1
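If you genuinely need positional column selection, a hedged variant (iloc accepts a boolean array as the row indexer, so the condition just has to be turned from a Series into a plain array):
df.iloc[(df['column1'] == 'some_value').to_numpy(), 1:21] = 1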

iloc using scikit learn random forest classifier

I am trying to build a random forest classifier to determine the 'type' of an object based on different attributes. I am having trouble understanding iloc and separating the predictors from the classification. If the 50th column is the 'type' column, I am wondering why the iloc (commented out) line does not work, but the line y = dataset["type"] does. I have attached the code below. Thank you!
X = dataset.iloc[:, 0:50].values
y = dataset["type"]
#y = dataset.iloc[:,50].values
Let's assume that the first column in your dataframe is named 0 and the following columns are named consecutively, like the result of the following lines:
last_col = 50
tab = pd.DataFrame([[x for x in range(last_col)] for c in range(10)])
Now try tab.iloc[:, 0:50] - it works, because slice bounds are allowed to run up to (and including) the length of the axis.
But tab.iloc[:, 50] does not work, because there is no column at position 50: with 50 columns, the valid positions are 0 through 49.
Slicing and selecting a column by a single position behave just a bit differently. From the pandas documentation:
.iloc[] is primarily integer position based (from 0 to length-1 of the axis)
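So, assuming your dataset has exactly 50 columns with 'type' as the last one, that column sits at position 49, and the positional equivalents would be:
X = dataset.iloc[:, 0:49].values  # the 49 feature columns only
y = dataset.iloc[:, 49].values    # the 'type' column, same as dataset["type"].values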
I hope this helps.

Datetime column coerced to int when setting with .loc and slice

I have a column of datetimes and need to change several of these values to new datetimes. When I set the values using df.loc[indices, 'col'] = new_datetimes, the unaffected values are coerced to int, while the newly set values stay as datetimes. If I set the values one at a time, no type coercion occurs.
For illustration I created a sample df with just one column.
import datetime as dt
import numpy as np
import pandas as pd

df = pd.DataFrame([dt.datetime(2019,1,1)]*5)
df.loc[[1,3,4]] = [dt.datetime(2019,1,2)]*3
df
This produces the following:
[screenshot: rows 1, 3 and 4 show the new datetime, while the untouched rows 0 and 2 have been coerced to large integers]
If I change indices 1,3,4 individually:
df = pd.DataFrame([dt.datetime(2019,1,1)]*5)
df.loc[1] = dt.datetime(2019,1,2)
df.loc[3] = dt.datetime(2019,1,2)
df.loc[4] = dt.datetime(2019,1,2)
df
I get the correct output:
[screenshot: all five rows remain datetimes, with rows 1, 3 and 4 updated]
A suggestion was to turn the list into a numpy array before setting, which does resolve the issue. However, if you try to set multiple columns (some of which are not datetime) using a numpy array, the issue arises again.
In this example the dataframe has two columns and I try to set both columns.
df = pd.DataFrame({'dt':[dt.datetime(2019,1,1)]*5, 'value':[1,1,1,1,1]})
df.loc[[1,3,4]] = np.array([[dt.datetime(2019,1,2)]*3, [2,2,2]]).T
df
This gives the following output:
[screenshot: the datetime column is again coerced to integers]
Can someone please explain what is causing the coercion and how to prevent it? The code that uses this was written over a month ago and used to work just fine; could it be one of those warnings about a future version of pandas deprecating certain functionality?
An explanation of what is going on would be greatly appreciated, because I have written other code that likely relies on similar functionality and want to make sure everything works as intended.
The solution proposed by w-m has an awkward detail: the result column now also contains a time part (it didn't have one before).
I would also remark that DataFrames are tables, not Series, so they have columns, each with its own name, and it is a bad habit to rely on default column names (consecutive numbers).
So I propose another solution, addressing both of the issues above.
To create the source DataFrame I executed:
df = pd.DataFrame([dt.datetime(2019, 1, 1)]*5, columns=['c1'])
Note that I provided a name for the only column.
Then I created another DataFrame:
df2 = pd.DataFrame([dt.datetime(2019,1,2)]*3, columns=['c1'], index=[1,3,4])
It contains your "new" dates, and the numbers you used in loc are set as its index (again with the same column name).
Then, to update df, use (not surprisingly) df.update:
df.update(df2)
This function performs an in-place update, so if you print(df), you will get:
c1
0 2019-01-01
1 2019-01-02
2 2019-01-01
3 2019-01-02
4 2019-01-02
As you can see, under indices 1, 3 and 4 you have new dates
and there is no time part, just like before.
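The same idea extends to the two-column example from the question. A hedged sketch (reusing the 'dt'/'value' frame defined earlier): build the replacement as a DataFrame, so that each column keeps its own dtype, and let update align on the index:
df = pd.DataFrame({'dt': [dt.datetime(2019,1,1)]*5, 'value': [1, 1, 1, 1, 1]})
df2 = pd.DataFrame({'dt': [dt.datetime(2019,1,2)]*3, 'value': [2, 2, 2]},
                   index=[1, 3, 4])
df.update(df2)  # column-wise, so the datetimes never pass through an object matrix
# caveat: update may upcast the int 'value' column to float64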
[dt.datetime(2019,1,2)]*3 is a Python list of objects. This particular list happens to contain only datetimes, but pandas does not seem to recognize that, and treats it as what it is: a list of arbitrary objects.
If you convert it into a typed array, then Pandas will keep the original dtype of the column intact:
df.loc[[1,3,4]] = np.asarray([dt.datetime(2019,1,2)]*3)
I hope this workaround helps you, but you may still want to file a bug with Pandas. I don't have an explanation as to why the datetime objects should be coerced to ints in the first output example.

Apply to each element in a Pandas dataframe

Each series in the data frame holds tuples, and I need to convert each tuple into a single number. Basically I have something like this:
price_table['Col1'].apply(lambda x: x[0])
But I actually need to do this for every column. x itself is a tuple, but it has only 1 number inside, so I need to return x[0] to get its value as a float instead of a tuple.
In R I would pass axis = c(1, 2), but here putting 2 numbers in axis doesn't work:
price_table.apply(lambda x: x[0],axis = 1)
TypeError: <lambda>() got an unexpected keyword argument 'axis'
Is there any way to apply this simple function to each element in the data frame?
Thanks in advance.
For me the following works well:
price_table['Col1'].apply(lambda x: x[0])
I do not use axis at all. The likely reason your call failed: the error message suggests price_table is a Series rather than a DataFrame, and Series.apply has no axis parameter - it forwards unknown keyword arguments straight to your function, hence <lambda>() got an unexpected keyword argument 'axis'.
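To apply the function to every element of every column in one go, DataFrame.applymap (renamed DataFrame.map in recent pandas versions) is the usual tool; a minimal sketch with a made-up price_table:
import pandas as pd

# hypothetical frame in which every cell is a one-element tuple
price_table = pd.DataFrame({'Col1': [(1.5,), (2.0,)],
                            'Col2': [(3.25,), (4.0,)]})

# applymap applies the lambda element-wise, so no axis argument is needed
price_table = price_table.applymap(lambda x: x[0])
print(price_table.dtypes)  # both columns are now float64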