Values in a list to a new dataframe - pandas

I have my data as a list, say:
r1=[['Pearson Chi-square ( 4.0) = ', 1021938.0], ['p-value = ', 0.0], ["Cramer's V = ", 1.0]]
I want to extract the 4.0 inside the parentheses in 'Pearson Chi-square ( 4.0)' and form a separate column called DOF.
I want to extract the 1021938.0 from
['Pearson Chi-square ( 4.0) = ', 1021938.0] and form a separate column
called Chisquare.
From ['p-value = ', 0.0] I want 0.0 in a column called Pvalue,
and from ["Cramer's V = ", 1.0] I want 1.0 in a column called Cramers_V.
So that my output df will be:
df
DOF  Chisquare  Pvalue  Cramers_V
4.0  1021938.0  0.0     1.0
I have tried these lines of code:
DOF=r1[0][1]
chisquare_stat=r1[0][0]
p_value=r1[1][1]
cramers_v=r1[2][1]
I need some help extracting the individual values and writing them to a new df for easy reference.

If you create a new df with
import pandas as pd
new_df = pd.DataFrame(r1)
T = new_df.T           # transpose of new_df: row 0 holds the labels, row 1 the values
var = T.loc[1].values  # same values as new_df[1].values
i_want = var[0]
i_want
the output is
1021938.0
This code makes a rather ugly df, in my opinion; you could also create the new df from a dictionary. Either way, the df['column'].values method should fix the problem, and the transposed matrix is often helpful too.
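A more direct route, as a minimal sketch (assuming r1 always has this three-element shape), is to parse the degrees of freedom out of the label string with a regex and build the one-row frame explicitly:
import re
import pandas as pd

r1 = [['Pearson Chi-square ( 4.0) = ', 1021938.0],
      ['p-value = ', 0.0],
      ["Cramer's V = ", 1.0]]

# Pull the 4.0 out of the parentheses in "Pearson Chi-square ( 4.0) = ".
dof = float(re.search(r'\(\s*([\d.]+)\s*\)', r1[0][0]).group(1))

df = pd.DataFrame([{'DOF': dof,
                    'Chisquare': r1[0][1],
                    'Pvalue': r1[1][1],
                    'Cramers_V': r1[2][1]}])
print(df)
#    DOF  Chisquare  Pvalue  Cramers_V
# 0  4.0  1021938.0     0.0        1.0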

Related

Computing for the mean of a given column from a dataframe

I need to find the arithmetic mean of each column, returning the results in res:
def ave(df, name):
    df = {
        'Courses': ["Spark", "PySpark", "Python", "pandas", None],
        'Fee': [20000, 25000, 22000, None, 30000],
        'Duration': ['30days', '40days', '35days', 'None', '50days'],
        'Discount': [1000, 2300, 1200, 2000, None]}
    #CODE HERE
    res = []
    for i in df.columns:
        res.append(col_ave(df, i))
I tried writing code for each mean individually, but I'm having trouble.
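A minimal sketch of one way to do this, assuming the dict is meant to be turned into a DataFrame first and that only the numeric columns should be averaged:
import pandas as pd

data = {'Courses': ["Spark", "PySpark", "Python", "pandas", None],
        'Fee': [20000, 25000, 22000, None, 30000],
        'Duration': ['30days', '40days', '35days', 'None', '50days'],
        'Discount': [1000, 2300, 1200, 2000, None]}
df = pd.DataFrame(data)

# Mean of every numeric column; NaN entries are skipped by default.
res = [df[col].mean() for col in df.select_dtypes('number').columns]
print(res)  # [24250.0, 1625.0]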

New column with word at nth position of string from other column pandas

import numpy as np
import pandas as pd
d = {'ABSTRACT_ID': [14145090,1900667, 8157202,6784974],
'TEXT': [
"velvet antlers vas are commonly used in tradit",
"we have taken a basic biologic RPA to elucidat4",
"ceftobiprole bpr is an investigational cephalo",
"lipoperoxidationderived aldehydes for example",],
'LOCATION': [1, 4, 2, 1]}
df = pd.DataFrame(data=d)
df
def word_at_pos(x, y):
    pos = x
    string = y
    count = 0
    res = ""
    for word in string:
        if word == ' ':
            count = count + 1
            if count == pos:
                break
            res = ""
        else:
            res = res + word
    print(res)
word_at_pos(df.iloc[0,2],df.iloc[0,1])
For this df I want to create a new column WORD that contains the word from TEXT at the position indicated by LOCATION, e.g. the first line would be "velvet".
I can do this for a single row with the isolated function word_at_pos(x, y), but I can't work out how to apply it to the whole column. I have created new columns with lambda functions before, but can't work out how to fit this function into a lambda.
Looping over TEXT and LOCATION could be the best idea because splitting creates a jagged array, so filtering using numpy advanced indexing won't be possible.
df["WORDS"] = [txt.split()[loc] for txt, loc in zip(df["TEXT"], df["LOCATION"]-1)]
print(df)
ABSTRACT_ID ... WORDS
0 14145090 ... velvet
1 1900667 ... a
2 8157202 ... bpr
3 6784974 ... lipoperoxidationderived
[4 rows x 4 columns]
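Since the question mentions lambda, an equivalent sketch with apply (row-wise, so slower than the comprehension above, but the same logic):
df["WORDS"] = df.apply(lambda row: row["TEXT"].split()[row["LOCATION"] - 1], axis=1)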

How to replace NAN values based on the values in another column in pandas

I am using the breast-cancer-wisconsin dataset. The Bare Nuclei column has 16 missing entries denoted by "?", which I replace with NaN as follows:
df.replace('?', np.NAN, regex=False, inplace=True)
I want to replace the NaNs with the most frequently occurring value with respect to each class. To elaborate: the most frequently occurring value in column 'Bare Nuclei' among rows with Class == 2 (benign cancer) should be used to replace the NaNs in all rows that have Class == 2, and similarly for Class == 4 (malignant).
I tried the following:
df[df['Class']== 2]['Bare Nuclei'].fillna(df_vals[df_vals['Class']==2]['Bare Nuclei'].mode(), inplace=True)
df[df['Class']== 4]['Bare Nuclei'].fillna(df_vals[df_vals['Class']==4]['Bare Nuclei'].mode(), inplace=True)
It did not result in any error, but when I tried
df.isnull().any()
Bare Nuclei shows True, which means the NaN values are still there.
(column "Bare Nuclei" is of type object)
I don't understand what I am doing wrong. Please help!
Thank you.
You can try groupby() + agg() + fillna():
s = df.groupby('Class')['Bare Nuclei'].agg(lambda x: x.mode().iat[0])  # most frequent non-NaN value per class
df['Bare Nuclei'] = df['Bare Nuclei'].fillna(df['Class'].map(s))
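As a quick check on a made-up toy frame (hypothetical values, just to show the mapping):
import numpy as np
import pandas as pd

df = pd.DataFrame({'Class': [2, 2, 2, 4, 4, 4],
                   'Bare Nuclei': [1, 1, np.nan, 10, 10, np.nan]})
s = df.groupby('Class')['Bare Nuclei'].agg(lambda x: x.mode().iat[0])
df['Bare Nuclei'] = df['Bare Nuclei'].fillna(df['Class'].map(s))
print(df['Bare Nuclei'].tolist())  # [1.0, 1.0, 1.0, 10.0, 10.0, 10.0]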
Or, following your own approach, use loc with an explicit assignment (fillna(..., inplace=True) on a selection operates on a copy, which is why nothing changed):
mode_2 = df.loc[df['Class'] == 2, 'Bare Nuclei'].mode()[0]
df.loc[df['Class'] == 2, 'Bare Nuclei'] = df.loc[df['Class'] == 2, 'Bare Nuclei'].fillna(mode_2)
As a late answer: if you want to replace every NaN in the "Bare Nuclei" column with the value in the "Class" column, use loc (boolean masks belong with loc, not iloc):
selection_condition = pd.isna(df["Bare Nuclei"])
df.loc[selection_condition, "Bare Nuclei"] = df.loc[selection_condition, "Class"]
If you want to be class-specific about the replacement:
selection_condition = pd.isna(df["Bare Nuclei"]) & (df["Class"] == 2)
df.loc[selection_condition, "Bare Nuclei"] = df.loc[selection_condition, "Class"]
# assumes `file` holds the dataset and `num_split` is already defined
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

file.info()
file.loc[file['Bare Nuclei'] == '?', 'Bare Nuclei'] = np.nan  # '?' marks missing values
file.dropna(inplace=True)
file.drop(['Sample code number'], axis=1, inplace=True)
file['Bare Nuclei'] = file['Bare Nuclei'].astype(int)

Sk_overall = 0
for i in range(num_split):
    first = file.drop(['Class', 'Bare Nuclei'], axis=1)  # features
    second = file['Class'].values                        # labels
    x_train, x_test, y_train, y_test = train_test_split(first, second, test_size=0.8, random_state=0)
    classifier = LogisticRegression(max_iter=200, solver='newton-cg')
    classifier.fit(x_train, y_train)
    Sk_overall = Sk_overall + classifier.score(x_test, y_test)
Sk_Accuracy = Sk_overall / num_split

Quantile across rows and down columns using selected columns only [duplicate]

I have a dataframe with column names, and I want to find the one that contains a certain string, but does not exactly match it. I'm searching for 'spike' in column names like 'spike-2', 'hey spike', 'spiked-in' (the 'spike' part is always continuous).
I want the column name to be returned as a string or a variable, so I access the column later with df['name'] or df[name] as normal. I've tried to find ways to do this, to no avail. Any tips?
Just iterate over DataFrame.columns; here is an example in which you end up with a list of the column names that match:
import pandas as pd
data = {'spike-2': [1,2,3], 'hey spke': [4,5,6], 'spiked-in': [7,8,9], 'no': [10,11,12]}
df = pd.DataFrame(data)
spike_cols = [col for col in df.columns if 'spike' in col]
print(list(df.columns))
print(spike_cols)
Output:
['spike-2', 'hey spke', 'spiked-in', 'no']
['spike-2', 'spiked-in']
Explanation:
df.columns returns an Index of the column names
[col for col in df.columns if 'spike' in col] iterates over df.columns with the variable col and adds col to the resulting list if col contains 'spike'. This syntax is list comprehension.
If you only want the resulting data set with the columns that match you can do this:
df2 = df.filter(regex='spike')
print(df2)
Output:
   spike-2  spiked-in
0        1          7
1        2          8
2        3          9
This answer uses the DataFrame.filter method to do this without list comprehension:
import pandas as pd
data = {'spike-2': [1,2,3], 'hey spke': [4,5,6]}
df = pd.DataFrame(data)
print(df.filter(like='spike').columns)
Will output just 'spike-2'. You can also use regex, as some people suggested in comments above:
print(df.filter(regex='spike|spke').columns)
Will output both columns: ['spike-2', 'hey spke']
You can also use df.columns[df.columns.str.contains(pat = 'spike')]
data = {'spike-2': [1,2,3], 'hey spke': [4,5,6], 'spiked-in': [7,8,9], 'no': [10,11,12]}
df = pd.DataFrame(data)
colNames = df.columns[df.columns.str.contains(pat = 'spike')]
print(colNames)
This will output the column names: 'spike-2', 'spiked-in'
More about pandas.Series.str.contains.
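As a side note, str.contains also takes case and regex parameters, so you can match case-insensitively or treat the pattern as a literal string:
df.columns[df.columns.str.contains('SPIKE', case=False)]      # ignore case
df.columns[df.columns.str.contains('spike-2', regex=False)]   # literal match, no regex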
# select columns containing 'spike'
df.filter(like='spike', axis=1)
You can also select by name or by regular expression. Refer to pandas.DataFrame.filter.
df.loc[:,df.columns.str.contains("spike")]
Another solution that returns a subset of the df with the desired columns:
df[df.columns[df.columns.str.contains("spike|spke")]]
You also can use this code:
spike_cols =[x for x in df.columns[df.columns.str.contains('spike')]]
Getting name and subsetting based on Start, Contains, and Ends:
# from: https://stackoverflow.com/questions/21285380/find-column-whose-name-contains-a-specific-string
# from: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html
# from: https://cmdlinetips.com/2019/04/how-to-select-columns-using-prefix-suffix-of-column-names-in-pandas/
# from: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.filter.html
import pandas as pd
data = {'spike_starts': [1,2,3], 'ends_spike_starts': [4,5,6], 'ends_spike': [7,8,9], 'not': [10,11,12]}
df = pd.DataFrame(data)
print("\n")
print("----------------------------------------")
colNames_contains = df.columns[df.columns.str.contains(pat = 'spike')].tolist()
print("Contains")
print(colNames_contains)
print("\n")
print("----------------------------------------")
colNames_starts = df.columns[df.columns.str.contains(pat = '^spike')].tolist()
print("Starts")
print(colNames_starts)
print("\n")
print("----------------------------------------")
colNames_ends = df.columns[df.columns.str.contains(pat = 'spike$')].tolist()
print("Ends")
print(colNames_ends)
print("\n")
print("----------------------------------------")
df_subset_start = df.filter(regex='^spike',axis=1)
print("Starts")
print(df_subset_start)
print("\n")
print("----------------------------------------")
df_subset_contains = df.filter(regex='spike',axis=1)
print("Contains")
print(df_subset_contains)
print("\n")
print("----------------------------------------")
df_subset_ends = df.filter(regex='spike$',axis=1)
print("Ends")
print(df_subset_ends)

Assigning values to dataframe columns

In the code below, the dataframe df5 is not getting populated. I am just assigning values to the dataframe's columns, and I specified the columns beforehand. When I print the dataframe, it returns an empty dataframe. Not sure whether I am missing something.
Any help would be appreciated.
import math
import pandas as pd

columns = ['ClosestLat', 'ClosestLong']
df5 = pd.DataFrame(columns=columns)

def distance(pt1, pt2):
    return math.sqrt((pt1[0] - pt2[0])**2 + (pt1[1] - pt2[1])**2)

for pt1 in df1:
    closestPoints = [pt1, df2[0]]
    for pt2 in df2:
        if distance(pt1, pt2) < distance(closestPoints[0], closestPoints[1]):
            closestPoints = [pt1, pt2]
    df5['ClosestLat'] = closestPoints[1][0]
    df5['ClosestLat'] = closestPoints[1][0]
    df5['ClosestLong'] = closestPoints[1][1]
    print("Point: " + str(closestPoints[0]) + " is closest to " + str(closestPoints[1]))
From the look of your code, you're trying to populate df5 with a list of latitudes and longitudes. However, you're making a couple of mistakes.
The columns of pandas dataframes are Series, and hold some type of sequential data. So df5['ClosestLat'] = closestPoints[1][0] attempts to assign the entire column a single numerical value, and results in an empty column.
Even if the dataframe wasn't ignoring your attempts to assign a real number to the column, you would lose data because you are overwriting the column with each loop.
The Solution: Build a list of lats and longs, then insert into the dataframe.
import math
import pandas as pd

columns = ['ClosestLat', 'ClosestLong']
df5 = pd.DataFrame(columns=columns)

def distance(pt1, pt2):
    return math.sqrt((pt1[0] - pt2[0])**2 + (pt1[1] - pt2[1])**2)

lats, lngs = [], []
for pt1 in df1:
    closestPoints = [pt1, df2[0]]
    for pt2 in df2:
        if distance(pt1, pt2) < distance(closestPoints[0], closestPoints[1]):
            closestPoints = [pt1, pt2]
    lats.append(closestPoints[1][0])
    lngs.append(closestPoints[1][1])

df5['ClosestLat'] = pd.Series(lats)
df5['ClosestLong'] = pd.Series(lngs)
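A variant on the same idea, once the lists are filled, is to build the frame in a single step instead of assigning column by column:
df5 = pd.DataFrame({'ClosestLat': lats, 'ClosestLong': lngs})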