Below is my data table, from my code output:
| columnA|ColumnB|ColumnC|
| ------ | ----- | ------|
| 12 | 8 | 1.34 |
| 8 | 12 | 1.34 |
| 1 | 7 | 0.25 |
I want to dedupe it and keep only:
| columnA|ColumnB|ColumnC|
| ------ | ----- | ------|
| 12 | 8 | 1.34 |
| 1 | 7 | 0.25 |
Usually when I drop duplicates I use .drop_duplicates(subset=...). But this time I want to drop mirrored pairs, e.g. treat (columnA, columnB) == (columnB, columnA) as duplicates. I did some research and found that someone uses set((a, b) if a <= b else (b, a) for a, b in pairs) to remove mirrored list pairs, but I don't know how to apply this method to my pandas DataFrame. Please help, and thank you in advance!
Convert the relevant columns to frozenset, then filter out rows whose set has already been seen:
out = df[~df[['columnA', 'ColumnB']].apply(frozenset, axis=1).duplicated()]
print(out)
# Output
columnA ColumnB ColumnC
0 12 8 1.34
2 1 7 0.25
Details:
>>> set([8, 12])
{8, 12}
>>> set([12, 8])
{8, 12}
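The reason for frozenset rather than set is that duplicated() hashes the values, and a plain set is unhashable:
>>> hash(frozenset([8, 12])) == hash(frozenset([12, 8]))
True
>>> hash({8, 12})
Traceback (most recent call last):
  ...
TypeError: unhashable type: 'set'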
You can combine a and b into a tuple and call drop_duplicates based on the combined column:
t = df[["a", "b"]].apply(lambda row: tuple(set(row)), axis=1)
df.assign(t=t).drop_duplicates("t").drop(columns="t")
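Note that tuple(set(row)) relies on the set's iteration order, which is an implementation detail; a safer variant of the same idea sorts each pair into a canonical order:
t = df[["a", "b"]].apply(lambda row: tuple(sorted(row)), axis=1)
df.assign(t=t).drop_duplicates("t").drop(columns="t")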
A possible solution is the following:
# pip install pandas
import pandas as pd
# create test dataframe
df = pd.DataFrame({"colA": [12,8,1],"colB": [8,12,1],"colC": [1.34,1.34,0.25]})
df
# swap colA and colB wherever colA > colB, so the smaller value always comes first
df.loc[df.colA > df.colB, df.columns] = df.loc[df.colA > df.colB, df.columns[[1,0,2]]].values
# mirrored pairs are now identical rows, so they can be dropped
df.drop_duplicates()
Returns:
   colA  colB  colC
0     8    12  1.34
2     1     1  0.25
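As a variant (not part of the answer above), starting from the same test dataframe, the pair ordering can be done vectorized with numpy's row-wise sort:
import numpy as np

# sort each (colA, colB) pair row-wise so the smaller value lands in colA
df[["colA", "colB"]] = np.sort(df[["colA", "colB"]].to_numpy(), axis=1)
df.drop_duplicates()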
Suppose I have the following df:
| Column1 | Column2 | Column3 |
| ------- | ------- | ------- |
| 1 | 4 | 23.2 |
| 32 | 4.2 | 62.2 |
| 9 | 12 | 2.2 |
I want to be able to get a dictionary in the following format:
{
0: {'Column1':1, 'Column2':4, 'Column3':23.2},
1: {'Column1': 32, 'Column2':4.2, 'Column3':62.2},
2: {'Column1':9, 'Column2':12, 'Column3':2.2}
}
How can I achieve this?
The 'index' orientation maps each index label to a {column: value} dict, and it avoids the transpose (which would cast the mixed int/float columns to float):
final_dict = df.to_dict('index')
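A quick check on the example frame (scalar reprs may vary slightly across pandas/NumPy versions; Column2 comes back as float because the column contains 4.2):
>>> import pandas as pd
>>> df = pd.DataFrame({'Column1': [1, 32, 9], 'Column2': [4, 4.2, 12], 'Column3': [23.2, 62.2, 2.2]})
>>> df.to_dict('index')
{0: {'Column1': 1, 'Column2': 4.0, 'Column3': 23.2},
 1: {'Column1': 32, 'Column2': 4.2, 'Column3': 62.2},
 2: {'Column1': 9, 'Column2': 12.0, 'Column3': 2.2}}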
I'm trying to use DataFrame.to_markdown with a dataframe that contains float values that I'd like to have rounded off. Without to_markdown() I can just set pd.options.display.float_format and everything works fine, but to_markdown doesn't seem to be respecting that option.
Repro:
import pandas as pd
df = pd.DataFrame([[1, 2, 3], [42.42, 99.11234123412341234, -23]])
pd.options.display.float_format = '{:,.0f}'.format
print(df)
print()
print(df.to_markdown())
outputs:
0 1 2
0 1 2 3
1 42 99 -23
| | 0 | 1 | 2 |
|---:|------:|--------:|----:|
| 0 | 1 | 2 | 3 |
| 1 | 42.42 | 99.1123 | -23 |
(compare the 42.42 and 99.1123 in the to_markdown table to the 42 and 99 in the plain old df)
Is this a bug or am I missing something about how to use to_markdown?
It looks like pandas uses tabulate for this formatting. If it's installed, you can use something like:
df.to_markdown(floatfmt=".0f")
output:
| | 0 | 1 | 2 |
|---:|----:|----:|----:|
| 0 | 1 | 2 | 3 |
| 1 | 42 | 99 | -23 |
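Since floatfmt is forwarded to tabulate, it can also be a sequence with one format per column; in to_markdown's output the index is rendered as the first column, so it gets the first entry (worth verifying against your tabulate version):
df.to_markdown(floatfmt=("g", ".0f", ".2f", ".0f"))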
I have the following DataFrame in Python Pandas:
df.head(3)
+===+===========+======+=======+
| | year-month| cat | count |
+===+===========+======+=======+
| 0 | 2016-01 | 1 | 14 |
+---+-----------+------+-------+
| 1 | 2016-02 | 1 | 22 |
+---+-----------+------+-------+
| 2 | 2016-01 | 2 | 10 |
+---+-----------+------+-------+
year-month is a combination of year and month, dating back about 8 years.
cat is an integer from 1 to 10.
count is an integer.
I now want to plot count vs. year-month with matplotlib, one line for each cat. How can this be done?
The easiest way is seaborn:
import seaborn as sns
sns.lineplot(x='year-month', y='count', hue='cat', data=df)
Note: it also helps to convert year-month to a real datetime type before plotting, e.g.
df['year-month'] = pd.to_datetime(df['year-month'], format='%Y-%m')
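If you want plain matplotlib, as the question asks, a minimal sketch that draws one line per cat with groupby:
import matplotlib.pyplot as plt
import pandas as pd

df['year-month'] = pd.to_datetime(df['year-month'], format='%Y-%m')
for cat, grp in df.groupby('cat'):
    grp = grp.sort_values('year-month')  # keep each line in chronological order
    plt.plot(grp['year-month'], grp['count'], label=f'cat {cat}')
plt.xlabel('year-month')
plt.ylabel('count')
plt.legend()
plt.show()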
I'm currently converting a script I made from pandas to PySpark. I have a dataframe that contains data in the form of:
index | letter
------|-------
0 | a
1 | a
2 | b
3 | c
4 | a
5 | a
6 | b
I want to create the following dataframe, in which the occurrence count for each instance of a letter is stored. For example, the first time we see "a" its occurrence count is 0, the second time 1, the third time 2:
index | letter | occurrence
------|--------|-----------
0 | a | 0
1 | a | 1
2 | b | 0
3 | c | 0
4 | a | 2
5 | a | 3
6 | b | 1
I can achieve this in pandas using:
df['occurrence'] = df.groupby('letter').cumcount()
How would I go about doing this in PySpark? I cannot find an existing method that is similar.
The feature you're looking for is called a window function:
from pyspark.sql.functions import row_number
from pyspark.sql.window import Window
# row_number() is 1-based; subtract 1 to match pandas' 0-based cumcount()
df.withColumn("occurrence", row_number().over(Window.partitionBy("letter").orderBy("index")) - 1)
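A self-contained sketch on the question's data (the SparkSession setup here is assumed, not shown in the original):
from pyspark.sql import SparkSession
from pyspark.sql.functions import row_number
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [(0, "a"), (1, "a"), (2, "b"), (3, "c"), (4, "a"), (5, "a"), (6, "b")],
    ["index", "letter"],
)
w = Window.partitionBy("letter").orderBy("index")
df.withColumn("occurrence", row_number().over(w) - 1).orderBy("index").show()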
I am using xgboost with objective='binary:logistic' to calculate, for each customer, the probability that he/she will make the spend.
Using predict_proba in sklearn will print two probabilities, for both 0 and 1, like:
[[0.56651809 0.43348191]
[0.15598162 0.84401838]
[0.86852502 0.13147498]]
How can I attach each customer ID with pandas to get something like:
+----+------------+------------+
| ID | prob_0 | prob_1 |
+----+------------+------------+
| 1 | 0.56651809 | 0.43348191 |
| 2 | 0.15598162 | 0.84401838 |
| 3 | 0.86852502 | 0.13147498 |
+----+------------+------------+
You can use the pandas DataFrame() constructor to build that table:
import pandas as pd

list_data = [[0.56651809, 0.43348191], [0.15598162, 0.84401838], [0.86852502, 0.13147498]]
columns = ['prob_0', 'prob_1']
index = [1, 2, 3]
pd.DataFrame(data=list_data, columns=columns, index=index)
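To build the table straight from the model output, here is a sketch where model, X, and customer_ids are placeholders for your fitted XGBClassifier, feature matrix, and ID column (none of them appear in the snippet above):
import pandas as pd

proba = model.predict_proba(X)  # hypothetical fitted classifier and feature matrix
out = pd.DataFrame(proba, columns=['prob_0', 'prob_1'])
out.insert(0, 'ID', list(customer_ids))  # hypothetical sequence of customer IDs
print(out)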