When to use pandas ‘loc’ for dataframe slicing [duplicate]

This question already has answers here:
Python: Pandas Series - Why use loc?
(3 answers)
Closed 1 year ago.
In pandas, if I have a DataFrame, I can subset it like:
df[df.col == some_condition]
I can also do:
df.loc[df.col == some_condition]
What is the difference between the two? The .loc approach seems more verbose.

In simple words:
There are three primary indexers for pandas. We have the indexing operator itself (the brackets []), .loc, and .iloc. Let's summarize them:
[] - Primarily selects subsets of columns, but can select rows as well. Can't simultaneously select rows and columns.
.loc - selects subsets of rows and columns by label only
.iloc - selects subsets of rows and columns by integer location only
For a more detailed explanation, you can check this question
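To make the distinction concrete, here is a small sketch contrasting the three indexers; the frame, its values, and its labels are invented for illustration:

```python
import pandas as pd

# A small illustrative frame with string labels
df = pd.DataFrame({"col": [1, 5, 3], "other": ["a", "b", "c"]},
                  index=["x", "y", "z"])

# [] with a boolean mask selects rows; [] with a string selects a column.
# It cannot do both at once.
rows = df[df["col"] > 2]                  # rows "y" and "z"
col = df["col"]                           # the whole column as a Series

# .loc selects by label, and can take rows and columns simultaneously
cell = df.loc["y", "other"]               # "b"
subset = df.loc[df["col"] > 2, "other"]   # Series with labels "y", "z"

# .iloc selects by integer position
first_row = df.iloc[0]                    # row "x"
corner = df.iloc[0, 1]                    # "a"
```

Note that with a boolean mask, `df[mask]` and `df.loc[mask]` behave the same; `.loc` only becomes necessary when you also want to pick columns in the same step.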


Pandas Stack Column Number Mismatch [duplicate]

This question already has answers here:
Pandas: Adding new column to dataframe which is a copy of the index column
(3 answers)
Closed 1 year ago.
Trying to stack, but the result shows 1 column, not 3
Hello, I am trying to use the stack function in pandas, but when I check shape the result has only 1 column, even though 3 are displayed. I see that they are on different levels, and I have tried working with the levels without success. What can I do? I need 3 columns!
-Thanks
Use new_cl_traff.reset_index()
As you can see in your screenshot, you have a MultiIndex on your dataframe with Year and Month - see the line where you name the two index levels:
new_cl_traf.index.set_names(["Year","Month"], inplace=True)
You can see the documentation for pandas.stack here
If you use new_cl_traff.reset_index(), the index (or a subset of its levels) will be reset into ordinary columns - see the documentation here
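A minimal sketch of what is happening; the frame, its values, and the traffic column name are invented to mirror the screenshot's (Year, Month) MultiIndex:

```python
import pandas as pd

# Hypothetical frame resembling the one in the question:
# a (Year, Month) MultiIndex and a single data column
idx = pd.MultiIndex.from_tuples([(2020, 1), (2020, 2), (2021, 1)],
                                names=["Year", "Month"])
new_cl_traff = pd.DataFrame({"traffic": [10, 20, 30]}, index=idx)

print(new_cl_traff.shape)   # (3, 1) -- shape counts only real columns,
                            # not the two index levels you see displayed

# reset_index turns the Year and Month levels into ordinary columns
flat = new_cl_traff.reset_index()
print(flat.shape)           # (3, 3) -- Year, Month, traffic
```

This explains the mismatch in the question: the display shows three columns because the index levels are printed alongside the data, but shape reports only the one data column until the index is reset.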

Pandas - list of unique strings in a column [duplicate]

This question already has answers here:
Find the unique values in a column and then sort them
(8 answers)
Closed 1 year ago.
I have a DataFrame column which contains these values:
A
A
A
F
R
R
B
B
A
...
I would like to make a list summarizing the different strings, as [A,B,F,...].
I've used groupby with nunique(), but I don't need the counts.
How can I make the list?
Thanks
unique() is enough
df['col'].unique().tolist()
pandas.Series.nunique() returns the number of unique items, not the items themselves.
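A quick sketch of the difference, using the values from the question:

```python
import pandas as pd

s = pd.Series(["A", "A", "A", "F", "R", "R", "B", "B", "A"])

values = s.unique().tolist()   # the distinct values, in order of first appearance
count = s.nunique()            # only how many distinct values there are

print(values)   # ['A', 'F', 'R', 'B']
print(count)    # 4
```

So unique() gives the list the question asks for, while nunique() would only tell you its length.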

Filter multiple separate rows in a DataFrame that meet a condition from another DataFrame with pandas? [duplicate]

This question already has answers here:
How to filter Pandas dataframe using 'in' and 'not in' like in SQL
(11 answers)
Closed 2 years ago.
This is my DataFrame
df = pd.DataFrame({'uid': [109200005, 108200056, 109200060, 108200085, 108200022],
                   'grades': [69.233627, 70.130900, 83.357011, 88.206387, 74.342212]})
This is my condition list which comes from another DataFrame
condition_list = [109200005, 108200085]
I use this code to filter records that meet the condition
idx_list = []
for i in condition_list:
    idx_list.append(df[df['uid'] == i].index.values[0])
and get what I need
>>> df.iloc[idx_list]
         uid     grades
0  109200005  69.233627
3  108200085  88.206387
The job is done. I'd just like to know: is there a simpler way to do this?
Yes, use isin:
df[df['uid'].isin(condition_list)]
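Putting it together with the data from the question; the complement (`~`, a SQL-style "not in") is shown as well since it comes from the same linked duplicate:

```python
import pandas as pd

df = pd.DataFrame({'uid': [109200005, 108200056, 109200060, 108200085, 108200022],
                   'grades': [69.233627, 70.130900, 83.357011, 88.206387, 74.342212]})
condition_list = [109200005, 108200085]

# isin builds a boolean mask in one vectorized step -- no loop needed
matched = df[df['uid'].isin(condition_list)]        # rows 0 and 3

# ~ negates the mask, i.e. SQL's "not in"
not_matched = df[~df['uid'].isin(condition_list)]   # the other three rows
```

Unlike the loop version, this also behaves gracefully when a uid from condition_list is absent from df: it simply matches nothing, rather than raising an IndexError on index.values[0].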

How to write a list comprehension for selecting cells based on a substring [duplicate]

This question already has answers here:
Filter pandas DataFrame by substring criteria
(17 answers)
Closed 3 years ago.
I am trying to rewrite the following in one line using a list comprehension. I want to select only cells that contain the substring '[edit]'. ut is my dataframe and the column that I want to select from is 'col1'. Thanks!
for u in ut['col1']:
    if '[edit]' in u:
        print(u)
I expect the following output:
Alabama[edit]
Alaska[edit]
Arizona[edit]
...
If a pandas Series is acceptable as output, then you can just use .str.contains, without a loop
s = ut[ut["col1"].str.contains("edit")]
If you need to print each element of the Series separately, then loop over the Series using
for i in s:
    print(i)
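One caveat worth knowing: .str.contains treats its pattern as a regular expression by default, and in regex `[edit]` is a character class matching any single one of the letters e, d, i, t. To match the literal text '[edit]', pass regex=False. A sketch with a hypothetical column (the city names are invented; "Denver" is included to show the false positive):

```python
import pandas as pd

ut = pd.DataFrame({"col1": ["Alabama[edit]", "Alaska[edit]",
                            "Denver", "Arizona[edit]"]})

# Default (regex=True): "[edit]" is a character class, so plain
# "Denver" matches too, because it contains the letters 'd' and 'e'
with_default = ut["col1"].str.contains("[edit]")

# regex=False matches the literal substring "[edit]" only
s = ut[ut["col1"].str.contains("[edit]", regex=False)]["col1"]
matches = s.tolist()   # only the three entries ending in [edit]
```

Using the plain pattern "edit" (as in the answer above) also works for this data, but it would match any string containing "edit" with or without the brackets.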

pyspark sql dataframe keep only null [duplicate]

This question already has answers here:
Filter Pyspark dataframe column with None value
(10 answers)
Closed 6 years ago.
I have a SQL dataframe df with a column user_id. How do I filter the dataframe to keep only the rows where user_id is actually null, for further analysis? The pyspark module page here shows how to drop NA rows easily, but not how to do the opposite.
I tried df.filter(df.user_id == 'null'), but the result has 0 rows; it is presumably looking for the literal string "null". df.filter(df.user_id == null) won't work either, since Python has no variable named null.
Try
df.filter(df.user_id.isNull())