I have a column ColD whose values are the names of the other columns (ColA, ColB, ColC), and I want to add rows for the missing combinations. My dataframe looks like below:
Original Data
import pandas as pd
data = {'ColA': [0, 0, 0], 'ColB': [0, 0, 0], 'ColC': [0, 0, 0], 'ColD': ['ColA', 'ColA', 'ColB'], 'Target': [1, 1, 1]}
df = pd.DataFrame(data)
print(df)
I need the resulting df to be:
data = {'ColA': [0]*9, 'ColB': [0]*9, 'ColC': [0]*9, 'ColD': ['ColA', 'ColB', 'ColC', 'ColA', 'ColB', 'ColC', 'ColB', 'ColA', 'ColC'], 'Target': [1, 0, 0, 1, 0, 0, 1, 0, 0]}
df = pd.DataFrame(data)
print(df)
Given that the contents of ColA, ColB, and ColC are irrelevant and you just want to repeat the values in ColD and Target, this becomes a dict comprehension; there is nothing pandas-specific about it.
data = {'ColA': [0, 0, 0], 'ColB': [0, 0, 0], 'ColC': [0, 0, 0], 'ColD': ['ColA', 'ColA', 'ColB'], 'Target': [1, 1, 1]}
df = pd.DataFrame(data)

names = ("ColA", "ColB", "ColC")
pd.DataFrame({k: v * 3 if k not in ("Target", "ColD")    # repeat the zero columns
              else [1, 0, 0] * 3 if k == "Target"        # one hit per block of three
              # each block lists the matching column name first, then the other two,
              # so Target = 1 lines up with the row's original ColD value
              else [c for dv in data['ColD']
                    for c in (dv, *(n for n in names if n != dv))]
              for k, v in data.items()})
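If the real data is larger or less regular than this example, a row-wise sketch may read more clearly; it is equivalent under the same assumptions (column names as above), emitting one row per candidate column with Target = 1 only for the row's original ColD value:

options = ['ColA', 'ColB', 'ColC']
rows = []
for _, row in df.iterrows():
    # original ColD value first, then the remaining names with Target = 0
    for opt in [row['ColD']] + [c for c in options if c != row['ColD']]:
        new = row.copy()
        new['Target'] = int(opt == row['ColD'])
        new['ColD'] = opt
        rows.append(new)
result = pd.DataFrame(rows).reset_index(drop=True)
print(result)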
I want to keep the columns of df whose names match the index of df2.
My code below only returns the df.index, but I want to return the entire subset of the pandas dataframe.
import pandas as pd
df = df[df.columns.intersection(df2.index)]
From my understanding, you want data from both dataframes wherever they match on the index of df2. Correct?
You can use merge to join the dataframes on their indexes.
df = pd.merge(df1, df2, how='inner', left_index=True, right_index=True)
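For the original goal of keeping only the columns of df whose names appear in df2's index, the intersection approach from the question does return the full column subset, not just the index. A minimal self-contained sketch with made-up data:

import pandas as pd

df = pd.DataFrame({'x': [1, 2], 'y': [3, 4], 'z': [5, 6]})
df2 = pd.DataFrame({'val': [10, 20]}, index=['x', 'z'])  # index holds column names of df

subset = df[df.columns.intersection(df2.index)]  # keeps columns 'x' and 'z', all rows
print(subset)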
When grouping by a single column and using as_index=False, pandas behaves as expected. However, when I use .agg, as_index no longer appears to behave as expected; in short, it doesn't appear to matter.
# imports
import pandas as pd
import numpy as np
# set the seed
np.random.seed(834)
df = pd.DataFrame(np.random.rand(10, 1), columns=['a'])
df['letter'] = np.random.choice(['a','b'], size=10)
summary = df.groupby('letter', as_index=False).agg([np.count_nonzero, np.mean])
summary
returns:
                   a
       count_nonzero      mean
letter
a                6.0  0.539313
b                4.0  0.456702
I would have expected the index to be 0, 1, with letter as a column in the dataframe.
In summary, I want to group by one or more columns, summarize a single column with multiple aggregates, and return a dataframe that has neither the group-by columns as the index nor a MultiIndex in the columns.
The comment from @Trenton did the trick.
summary = df.groupby('letter')['a'].agg([np.count_nonzero, np.mean]).reset_index()
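Selecting the single column 'a' before .agg keeps the column labels flat, and .reset_index() turns letter back into a regular column. As an alternative sketch, named aggregation (available since pandas 0.25) also yields flat column names and works with as_index=False:

summary = df.groupby('letter', as_index=False).agg(
    count_nonzero=('a', np.count_nonzero),  # output column = (source column, agg func)
    mean=('a', 'mean'),
)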
I am trying to apply the value_counts method to a DataFrame based on the columns selected dynamically in a Streamlit app.
This is what I am trying to do:
if st.checkbox("Select Columns To Show"):
    all_columns = df.columns.tolist()
    selected_columns = st.multiselect("Select", all_columns)
    new_df = df[selected_columns]
    st.dataframe(new_df)
The above lets me select columns and displays data for the selected columns. I am trying to see how I could apply the value_counts/groupby method to this output in the Streamlit app.
If I try to do the below
st.table(new_df.value_counts())
I get the below error
AttributeError: 'DataFrame' object has no attribute 'value_counts'
I believe the issue lies in passing a list of columns to a dataframe. When you pass a single column name in [] to a dataframe, you get back a pandas.Series object, which has the value_counts method. But when you pass a list of columns, you get back a pandas.DataFrame, which did not gain a value_counts method until pandas 1.1.
Can you try st.table(new_df[col_name].value_counts()), where col_name is one of the selected columns?
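A possible sketch of that for every selected column (new_df and selected_columns come from the question's snippet):

# Show value counts for each column the user picked in the multiselect.
for col_name in selected_columns:
    st.write(f"Value counts for {col_name}")
    st.table(new_df[col_name].value_counts())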
I think the error is because value_counts() applies to a Series, not a DataFrame.
You can convert the .value_counts() output to a dataframe.
If you want to apply it to one single column:
def value_counts_df(df, col):
    """
    Returns df[col].value_counts() as a DataFrame.

    Parameters
    ----------
    df : pandas.DataFrame
        Dataframe on which to run value_counts(); must have column `col`.
    col : str
        Name of the column in `df` for which to generate counts.

    Returns
    -------
    pandas.DataFrame
        The returned dataframe has a single column named "count" containing the
        value_counts() for each unique value of df[col]. Its index name is `col`.

    Example
    -------
    >>> value_counts_df(pd.DataFrame({'a': [1, 1, 2, 2, 2]}), 'a')
       count
    a
    2      3
    1      2
    """
    df = pd.DataFrame(df[col].value_counts())
    df.index.name = col
    df.columns = ['count']
    return df

val_count_single = value_counts_df(new_df, selected_col)  # selected_col: one chosen column name
If you want to apply it to all object columns in the dataframe:
def valueCountDF(df, object_cols):
    # absolute counts per value, one row per (column, value) pair
    c = df[object_cols].apply(lambda x: x.value_counts(dropna=False)).T.stack().astype(int)
    # relative frequencies as percentages, rounded to two decimals
    p = (df[object_cols].apply(lambda x: x.value_counts(normalize=True,
                                                        dropna=False)).T.stack() * 100).round(2)
    cp = pd.concat([c, p], axis=1, keys=["Count", "Percentage %"])
    return cp

val_count_df_cols = valueCountDF(df, selected_columns)
Finally, you can use st.table or st.dataframe to show the dataframe in your Streamlit app.
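For example, a possible wiring inside the app, guarding against an empty selection (all names come from the snippets above):

if selected_columns:
    st.table(valueCountDF(df, selected_columns))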
df is the original DataFrame, read from a CSV file.
a = df.head(3) # get part of df.
This is table a.
b = a.loc[1:3, '22':'41']  # select part of a
c = pd.DataFrame(data=b, index=['a','b'], columns=['v','g'])  # give new index and columns
In the final result, b shows a 2x2 table and I get four values, but c shows a 2x2 table of NaN: I get four NaN. Why doesn't c contain any numbers?
Try using .values; you are running into 'intrinsic data alignment'.
c = pd.DataFrame(data=b.values, index=['a','b'], columns=['v','g'])  # give index and columns
Pandas likes to align indexes: the constructor matches the labels you pass against the labels already on b. By converting b into a NumPy array first, you strip those labels, so the constructor simply attaches the new index and columns to the 2x2 values.
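A tiny self-contained demo of that alignment behavior (made-up labels standing in for b):

import pandas as pd

b = pd.DataFrame([[1, 2], [3, 4]], index=[1, 2], columns=['22', '41'])

# The labels ['a','b'] / ['v','g'] do not exist in b, so alignment yields all NaN.
nan_df = pd.DataFrame(data=b, index=['a', 'b'], columns=['v', 'g'])

# b.values is a plain NumPy array with no labels, so the new ones are attached as-is.
ok_df = pd.DataFrame(data=b.values, index=['a', 'b'], columns=['v', 'g'])

print(nan_df)  # all NaN
print(ok_df)   # the original four values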
Your DataFrame b already carries row and column labels, so when you create DataFrame c and pass the index and columns keyword arguments, you are implicitly reindexing the original DataFrame b; since none of the new labels exist in b, every cell comes back NaN.
If all you want to do is re-index b, why not do it directly?
b = b.copy()
b.index = ['a', 'b']
b.columns = ['v', 'g']
I really don't understand what I'm doing wrong. I have two data frames: one has a list of column labels and the other has a bunch of data. I just want to label the columns in my data with my column labels.
My Code:
airportLabels = pd.read_csv('airportsLabels.csv', header=None)
airportData = pd.read_table('airports.dat', sep=",", header=None)
df = pd.DataFrame(airportData, columns=airportLabels)
When I do this, all the data turns into NaN and only one column remains. I am really confused.
I think you need to add the parameter nrows to read_csv, since you only need the column names, and to remove header=None, because the first row of the CSV holds the column names. Then use the names parameter of read_table with the columns from the DataFrame airportLabels:
import pandas as pd
import io

temp = u"""col1,col2,col3
1,5,4
7,8,5"""
# after testing, replace io.StringIO(temp) with the filename
airportLabels = pd.read_csv(io.StringIO(temp), nrows=0)
print(airportLabels)
Empty DataFrame
Columns: [col1, col2, col3]
Index: []

temp = u"""a,d,f
e,r,t"""
# after testing, replace io.StringIO(temp) with the filename
df = pd.read_table(io.StringIO(temp), sep=",", header=None, names=airportLabels.columns)
print(df)
  col1 col2 col3
0    a    d    f
1    e    r    t
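Applied to the actual files from the question, that would look something like this (a sketch; it assumes airportsLabels.csv holds the labels in its first row and airports.dat has no header row):

labels = pd.read_csv('airportsLabels.csv', nrows=0)   # read only the header row
airportData = pd.read_table('airports.dat', sep=",", header=None,
                            names=labels.columns)     # attach those names to the data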