Combine index header row and column header row in Pandas - pandas

I create a dataframe and export it to an HTML table. However, the headers are off, as shown below.
How can I combine the index-name row and the column-name row?
I want the table header to look like this:
but it currently exports to HTML like this:
I create the dataframe as below (example):
import pandas

data = [{'Name': 'A', 'status': 'ok', 'host': '1', 'time1': '2020-01-06 06:31:06', 'time2': '2020-02-06 21:10:00'},
        {'Name': 'A', 'status': 'ok', 'host': '2', 'time1': '2020-01-06 06:31:06', 'time2': '-'},
        {'Name': 'B', 'status': 'Alert', 'host': '1', 'time1': '2020-01-06 10:31:06', 'time2': '2020-02-06 21:10:00'},
        {'Name': 'B', 'status': 'ok', 'host': '2', 'time1': '2020-01-06 10:31:06', 'time2': '2020-02-06 21:10:00'},
        {'Name': 'B', 'status': 'ok', 'host': '4', 'time1': '2020-01-06 10:31:06', 'time2': '2020-02-06 21:10:00'},
        {'Name': 'C', 'status': 'Alert', 'host': '2', 'time1': '2020-01-06 10:31:06', 'time2': '2020-02-06 21:10:00'},
        {'Name': 'C', 'status': 'ok', 'host': '3', 'time1': '2020-01-06 10:31:06', 'time2': '2020-02-06 21:10:00'},
        {'Name': 'C', 'status': 'ok', 'host': '4', 'time1': '-', 'time2': '-'}]
df = pandas.DataFrame(data)
df.set_index(['Name', 'status', 'host'], inplace=True)
html_body = df.to_html(bold_rows=False)
The index is set to have hierarchical rows, for easier reading in an html table:
print(df)
                               time1                time2
Name status host
A    ok     1    2020-01-06 06:31:06  2020-02-06 21:10:00
            2    2020-01-06 06:31:06                    -
B    Alert  1    2020-01-06 10:31:06  2020-02-06 21:10:00
     ok     2    2020-01-06 10:31:06  2020-02-06 21:10:00
            4    2020-01-06 10:31:06  2020-02-06 21:10:00
C    Alert  2    2020-01-06 10:31:06  2020-02-06 21:10:00
     ok     3    2020-01-06 10:31:06  2020-02-06 21:10:00
            4                      -                    -
The only solution I've gotten working is to set every column as part of the index.
This doesn't seem practical, though, and it leaves an empty row that must be manually removed:
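For reference, that workaround can be sketched as follows (a minimal one-row stand-in for the frame above). With no data columns left, to_html() puts all the labels into the index-names header row:

```python
import pandas as pd

# One-row stand-in for the frame above
df = pd.DataFrame({'time1': ['2020-01-06 06:31:06'], 'time2': ['-']},
                  index=pd.MultiIndex.from_tuples([('A', 'ok', '1')],
                                                  names=['Name', 'status', 'host']))

# Moving every data column into the index pushes all labels into the
# index-names header row, leaving no separate column-header row
all_index = df.reset_index().set_index(['Name', 'status', 'host', 'time1', 'time2'])
html_body = all_index.to_html(bold_rows=False)
```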

Setup
import pandas as pd
from IPython.display import HTML
l0 = ('Foo', 'Bar')
l1 = ('One', 'Two')
ix = pd.MultiIndex.from_product([l0, l1], names=('L0', 'L1'))
df = pd.DataFrame(1, ix, [*'WXYZ'])
HTML(df.to_html())
BeautifulSoup
Hack the HTML result from df.to_html(header=False). Pluck out the empty cells in the table head and drop in the column names.
from bs4 import BeautifulSoup

html_doc = df.to_html(header=False)
soup = BeautifulSoup(html_doc, 'html.parser')
empty_cols = soup.find('thead').find_all(lambda tag: not tag.contents)

for tag, col in zip(empty_cols, df):
    tag.string = col

HTML(soup.decode_contents())

If you want to use a Dataframe Styler to perform a lot of wonderful formatting on your table, the elements, and the contents, then you might need a slight change to piRSquared's answer, as I did.
before transformation
style.to_html() added non-breaking spaces, which made tag.contents always truthy and thus yielded no change to the table. I modified the lambda to account for this, which revealed another issue:
lambda tag: (not tag.contents) or '\xa0' in tag.contents
Cells were copied strangely
Styler.to_html() lacks the header kwarg - I am guessing this is the source of the issue. I took a slightly different approach: move the second-row headers into the first row, and then destroy the second header row.
It seems pretty generic and reusable for any multi-indexed dataframe.
from bs4 import BeautifulSoup

df_styler = summary_df.style
# Use the df_styler to change display format, color, alignment, etc.
raw_html = df_styler.to_html()
soup = BeautifulSoup(raw_html, 'html.parser')
head = soup.find('thead')
trs = head.find_all('tr')

# Blank header cells in the first row: empty, or holding only a non-breaking space
ths0 = trs[0].find_all(lambda tag: (not tag.contents) or '\xa0' in tag.contents)
# Filled header cells in the second row
ths1 = trs[1].find_all(lambda tag: tag.contents and '\xa0' not in tag.contents)

for blank, filled in zip(ths0, ths1):
    blank.replace_with(filled)

trs[1].decompose()
final_html_str = soup.decode_contents()
Success - two header rows condensed into one
Big Thanks to piRSquared for the starting point of Beautiful soup!

Related

Isin across 2 columns for groupby

How do I use isin with an 'or' condition, when the data I need to match in df1 is distributed across 2 columns (Title, ID)?
The code below works if you delete ' or df1[df1.ID.isin(df2[column])] ':
import pandas as pd

df1 = pd.DataFrame({'Title': ['A1', 'A2', 'A3', 'C1', 'C2', 'C3'],
                    'ID': ['B1', 'B2', 'B3', 'D1', 'D2', 'D3'],
                    'Whole': ['full', 'full', 'full', 'semi', 'semi', 'semi']})
df2 = pd.DataFrame({'Group1': ['A1', 'A2', 'A3'],
                    'Group2': ['B1', 'B2', 'B3']})
df = pd.DataFrame()
for column in df2.columns:
    d_group = (df1[df1.Title.isin(df2[column])] or df1[df1.ID.isin(df2[column])])
    df3 = d_group.groupby('Whole')['Whole'].count()\
                 .rename(column, inplace=True)\
                 .reindex(['part', 'full', 'semi'], fill_value='-')
    df = df.append(df3, ignore_index=False, sort=False)
print(df)
Desired output:
       | full | part | semi
-------+------+------+------
Group1 |    3 |    - |    -
Group2 |    3 |    - |    -
You need to use | instead of or, and make sure you use the [] correctly to sub-select from the df you want. In general the notation is df[selection_filter].
import pandas as pd

df1 = pd.DataFrame({'Title': ['A1', 'A2', 'A3', 'C1', 'C2', 'C3'],
                    'ID': ['B1', 'B2', 'B3', 'D1', 'D2', 'D3'],
                    'Whole': ['full', 'full', 'full', 'semi', 'semi', 'semi']})
df2 = pd.DataFrame({'Group1': ['A1', 'A2', 'A3'],
                    'Group2': ['B1', 'B2', 'B3']})
df = pd.DataFrame()
for column in df2.columns:
    d_group = df1[df1.Title.isin(df2[column]) | df1.ID.isin(df2[column])]
    # rename without inplace=True: with inplace it returns None and breaks the chain
    df3 = d_group.groupby('Whole')['Whole'].count()\
                 .rename(column)\
                 .reindex(['part', 'full', 'semi'], fill_value='-')
    df = df.append(df3, ignore_index=False, sort=False)
print(df)
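One portability note: DataFrame.append was deprecated in pandas 1.4 and removed in 2.0. On current pandas, the same accumulation can be written by collecting each row and concatenating once at the end, for example:

```python
import pandas as pd

df1 = pd.DataFrame({'Title': ['A1', 'A2', 'A3', 'C1', 'C2', 'C3'],
                    'ID': ['B1', 'B2', 'B3', 'D1', 'D2', 'D3'],
                    'Whole': ['full', 'full', 'full', 'semi', 'semi', 'semi']})
df2 = pd.DataFrame({'Group1': ['A1', 'A2', 'A3'],
                    'Group2': ['B1', 'B2', 'B3']})

rows = []
for column in df2.columns:
    d_group = df1[df1.Title.isin(df2[column]) | df1.ID.isin(df2[column])]
    # one counts-per-'Whole' Series per group column, named after that column
    rows.append(d_group.groupby('Whole')['Whole'].count()
                       .rename(column)
                       .reindex(['part', 'full', 'semi'], fill_value='-'))

# concatenate once, then transpose so the group names become the row index
df = pd.concat(rows, axis=1).T
print(df)
```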

convert pandas dataframe to list and nest a dict?

I have a list:
l = [{'level': '1', 'rows': 2}, {'level': '2', 'rows': 3}]
I can convert it to a DataFrame, but how do I convert it back?
frame = pd.DataFrame(l)
We have to_dict
frame.to_dict('r')
Out[67]: [{'level': '1', 'rows': 2}, {'level': '2', 'rows': 3}]
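Note that the single-letter 'r' alias for the orient is deprecated (and removed in pandas 2.0); the explicit 'records' spelling is the safe form:

```python
import pandas as pd

l = [{'level': '1', 'rows': 2}, {'level': '2', 'rows': 3}]
frame = pd.DataFrame(l)

# 'records' yields one dict per row, recovering the original list
records = frame.to_dict('records')
print(records)  # [{'level': '1', 'rows': 2}, {'level': '2', 'rows': 3}]
```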

How to apply str.split() on pandas column?

Using Simple Data:
df = pd.DataFrame({'ids': [0,1,2], 'value': ['2 4 10 0 14', '5 91 19 20 0', '1 1 1 2 44']})
I need to convert the column to array, so I use:
df.iloc[:,-1] = df.iloc[:,-1].apply(lambda x: str(x).split())
X = df.iloc[:, 1:]
X = np.array(X.values)
but the problem is that the data ends up nested, and I just need a (3, 5) matrix. How can I do this properly and fast for large data (avoiding loops)?
As said in the comments by @anky and @ScottBoston, you can use the string method split along with the expand parameter and finally convert to NumPy:
df.iloc[:, 1].str.split(expand=True).values

array([['2', '4', '10', '0', '14'],
       ['5', '91', '19', '20', '0'],
       ['1', '1', '1', '2', '44']], dtype=object)
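If the matrix should hold numbers rather than strings, the cast can happen in the same chain (a small sketch on the question's frame):

```python
import pandas as pd

df = pd.DataFrame({'ids': [0, 1, 2],
                   'value': ['2 4 10 0 14', '5 91 19 20 0', '1 1 1 2 44']})

# split into 5 string columns, cast to int, then take the underlying array
X = df['value'].str.split(expand=True).astype(int).to_numpy()
print(X.shape)  # (3, 5)
```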

Conditional grouping in pandas and transpose

With an input dataframe framed out of a given CSV, I need to transpose the data based on certain conditions. The groupby should be applied based on Key value.
For any value in the same 'Key' group, if the 'Type' is "T", these values should be written on "T" columns labelled as T1, T2, T3...and so on.
For any value in the same 'Key' group, if the 'Type' is "P" and 'Code' ends with "00" these values should be written on "U" columns labelled as U1, U2, U3...and so on.
For any value in the same 'Key' group, if the 'Type' is "P" and 'Code' doesn't end with "00" these values should be written on "P" columns labelled as P1, P2, P3...and so on.
There might be any number of values of types T and P for a given Key, and the output columns for T and P should be extended accordingly.
Input Dataframe:
df = pd.DataFrame({'Key': ['1', '1', '1', '1', '1', '2', '2', '2', '2', '2'],
                   'Value': ['T101', 'T102', 'P101', 'P102', 'P103', 'T201', 'T202', 'P201', 'P202', 'P203'],
                   'Type': ['T', 'T', 'P', 'P', 'P', 'T', 'T', 'P', 'P', 'P'],
                   'Code': ['0', '0', 'ABC00', 'TWY01', 'JTH02', '0', '0', 'OUJ00', 'LKE00', 'WDF45']})
Expected Dataframe:
Can anyone suggest an effective solution for this case?
Here's a possible solution using pivot.
import pandas as pd

df = pd.DataFrame({'Key': ['1', '1', '1', '1', '1', '2', '2', '2', '2', '2'],
                   'Value': ['T101', 'T102', 'P101', 'P102', 'P103', 'T201', 'T202', 'P201', 'P202', 'P203'],
                   'Type': ['T', 'T', 'P', 'P', 'P', 'T', 'T', 'P', 'P', 'P'],
                   'Code': ['0', '0', 'ABC00', 'TWY01', 'JTH02', '0', '0', 'OUJ00', 'LKE00', 'WDF45']})

# Set up the U label
df.loc[df['Code'].str.endswith('00') & (df['Type'] == 'P'), 'Type'] = 'U'

# Number the values within each (Key, Type) group: T1, T2, ..., P1, P2, ...
df = df.join(df.groupby(['Key', 'Type']).cumcount().rename('Tcount').to_frame() + 1)
df['Type'] = df['Type'] + df['Tcount'].astype('str')

# Pivot the table
pv = df.loc[:, ['Key', 'Type', 'Value']].pivot(index='Key', columns='Type', values='Value')
>>> pv
Type    P1    P2    T1    T2    U1    U2
Key
1     P102  P103  T101  T102  P101   NaN
2     P203   NaN  T201  T202  P201  P202

cdf = df.loc[df['Code'] != '0', ['Key', 'Code']].groupby('Key')['Code'].apply(lambda x: ','.join(x))

>>> cdf
Key
1    ABC00,TWY01,JTH02
2    OUJ00,LKE00,WDF45
Name: Code, dtype: object

>>> pv.join(cdf)
       P1    P2    T1    T2    U1    U2               Code
Key
1    P102  P103  T101  T102  P101  None  ABC00,TWY01,JTH02
2    P203  None  T201  T202  P201  P202  OUJ00,LKE00,WDF45

Convert pandas to dictionary, defining the columns used for the key values

There's the pandas dataframe 'test_df'. My aim is to convert it to a dictionary.
   id    Name Gender  Age
0   1 'Peter'    'M'   32
1   2  'Lara'    'F'   45
Therefore I run this:
test_dict = test_df.set_index('id').T.to_dict()
The output is this:
{1: {'Name': 'Peter', 'Gender': 'M', 'Age': 32}, 2: {'Name': 'Lara', 'Gender': 'F', 'Age': 45}}
Now, I want to choose only the 'Name' and 'Gender' columns as the values of the dictionary's keys. I'm trying to modify the above script into something like this:
test_dict = test_df.set_index('id')['Name']['Gender'].T.to_dict()
with no success!
Any suggestion please?!
You were very close; use a subset of columns, [['Name','Gender']]:
test_dict = test_df.set_index('id')[['Name','Gender']].T.to_dict()
print (test_dict)
{1: {'Name': 'Peter', 'Gender': 'M'}, 2: {'Name': 'Lara', 'Gender': 'F'}}
Also T is not necessary, use parameter orient='index':
test_dict = test_df.set_index('id')[['Name','Gender']].to_dict(orient='index')
print (test_dict)
{1: {'Name': 'Peter', 'Gender': 'M'}, 2: {'Name': 'Lara', 'Gender': 'F'}}
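For completeness, a self-contained version (test_df reconstructed from the printout above; the quotes around the names are assumed to be display artifacts):

```python
import pandas as pd

test_df = pd.DataFrame({'id': [1, 2],
                        'Name': ['Peter', 'Lara'],
                        'Gender': ['M', 'F'],
                        'Age': [32, 45]})

# keep only the wanted columns before building the nested dict
test_dict = test_df.set_index('id')[['Name', 'Gender']].to_dict(orient='index')
print(test_dict)
# {1: {'Name': 'Peter', 'Gender': 'M'}, 2: {'Name': 'Lara', 'Gender': 'F'}}
```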