Hello,
I don't understand why this issue occurs.
print("p.shape= ", p.shape)
print("dfmj_dates['deces'].shape = ",dfmj_dates['deces'].shape)
cross_dfmj = pd.crosstab(p, dfmj_dates['deces'])
That produces:
p.shape= (683, 1)
dfmj_dates['deces'].shape = (683,)
and the crosstab call fails with this traceback (abridged):
----> 3 cross_dfmj = pd.crosstab(p, dfmj_dates['deces'])
--> 654 df = DataFrame(data, index=common_idx)
--> 614 mgr = dict_to_mgr(data, index, columns, dtype=dtype, copy=copy, typ=manager)
--> 589 val = sanitize_array(
--> 576 subarr = _sanitize_ndim(subarr, data, dtype, index, allow_2d=allow_2d)
--> 627 raise ValueError("Data must be 1-dimensional")
ValueError: Data must be 1-dimensional
I suspect the issue comes from the difference between (683, 1)
and (683,). I tried p.flatten(order='C') to get (683,), and also
pd.DataFrame(dfmj_dates['deces']), but both attempts failed.
Do you have any idea? Regards, Atapalou
print(p.head(30))
print(df.head(30))
That produces:
week
0 8
1 8
2 8
3 9
4 9
5 9
6 9
7 9
8 9
9 9
10 10
11 10
12 10
13 10
14 10
15 10
16 10
17 11
18 11
19 11
20 11
21 11
22 11
23 11
24 12
25 12
26 12
27 12
28 12
29 12
deces
0 0
1 1
2 0
3 0
4 0
5 1
6 0
7 0
8 0
9 0
10 1
11 1
12 0
13 3
14 4
15 5
16 3
17 11
18 3
19 15
20 13
21 18
22 12
23 36
24 21
25 27
26 69
27 128
28 78
29 112
Try squeezing p to drop the extra dimension:
cross_dfmj = pd.crosstab(p.squeeze(), dfmj_dates['deces'])
Example:
import numpy as np
p = np.random.random((5, 1))
p.shape
# (5, 1)
p.squeeze().shape
# (5,)
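For completeness, a minimal self-contained reproduction of the problem and the fix; the week column name mirrors the question's data, but the values here are made up:

import pandas as pd

# A single-column DataFrame is still 2-dimensional, shape (6, 1).
p = pd.DataFrame({"week": [8, 8, 9, 9, 10, 10]})
deces = pd.Series([0, 1, 0, 2, 1, 3], name="deces")

# pd.crosstab(p, deces) fails with "Data must be 1-dimensional",
# because crosstab expects 1-D inputs; squeeze() returns the Series.
cross = pd.crosstab(p.squeeze(), deces)  # p["week"] works as well
print(cross)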
Related
Input data frame is as given below:
data = {
'labels': ["A","B","A","B","A","B","M","B","M","B","M"],
'start': [0,9,13,23,47,77,81,92,100,104,118],
'stop': [9,13,23,47,77,81,92,100,104,118,145],
}
df = pd.DataFrame.from_dict(data)
labels start stop
0 A 0 9
1 B 9 13
2 A 13 23
3 B 23 47
4 A 47 77
5 B 77 81
6 M 81 92
7 B 92 100
8 M 100 104
9 B 104 118
10 M 118 145
The required output data frame is as below.
Try this:
df['start'] = df.apply(lambda x: range(x['start'] + 1, x['stop'] + 1), axis=1)
df = df.explode('start')
Output:
>>> df
labels start stop
0 A 1 9
0 A 2 9
0 A 3 9
0 A 4 9
0 A 5 9
0 A 6 9
0 A 7 9
0 A 8 9
0 A 9 9
1 B 10 13
1 B 11 13
1 B 12 13
1 B 13 13
2 A 14 23
2 A 15 23
2 A 16 23
2 A 17 23
2 A 18 23
2 A 19 23
2 A 20 23
2 A 21 23
2 A 22 23
2 A 23 23
...
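As a small follow-up: explode repeats the original index (0, 0, 0, ...), as visible above. If a fresh 0..n-1 index is wanted instead, recent pandas versions let you pass ignore_index:

df = df.explode('start', ignore_index=True)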
I have some data that holds information about two opposing teams:
home_x away_x
0 7 28
1 11 10
2 11 20
3 12 15
4 12 16
I know about .melt(), which returns something like this:
variable value
0 home_x 7
1 home_x 11
2 home_x 11
3 home_x 12
4 home_x 12
So each value becomes a row here.
There are several attributes for each team, and I want each row to hold all the attributes for the respective team (home or away).
The ultimate goal is to have all the attributes of both teams in one row, once from each team's perspective, which would double the number of rows.
home_x away_x
0 7 28
would be transformed into:
team1_x team2_x
0 7 28
0 28 7
sample df:
   home_x  away_x  home_y  away_y
0       7      28       7      20
1      28       7      28      13
2      28       7      28       4
3       7      28       7      58
4      11      10      11      10
try:
res = pd.DataFrame()
for c in df.columns.str.split("_").str[1].unique():
    # all columns sharing the suffix c, e.g. home_x / away_x
    p1 = df.filter(regex=f"{c}$")
    c1, c2 = p1.columns
    # swap the two column names so away values land under home and vice versa
    swap = p1.rename(columns={c1: c2, c2: c1})
    # stack original and swapped rows, interleaved by the original index
    # (pd.concat replaces the deprecated DataFrame.append)
    res = pd.concat([res, pd.concat([p1, swap]).sort_index(ignore_index=True)], axis=1)
Then rename the columns:
import re
repl = {'home': 'team1', 'away': 'team2'}
res.columns = [re.sub('|'.join(repl.keys()), lambda x: repl[x.group()], i) for i in res.columns]
   team1_x  team2_x  team1_y  team2_y
0        7       28        7       20
1       28        7       20        7
2       28        7       28       13
3        7       28       13       28
4       28        7       28        4
5        7       28        4       28
6        7       28        7       58
7       28        7       58        7
8       11       10       11       10
9       10       11       10       11
Here is an approach:
Group on the last part of the split column names along axis=1, then iterate through the groups, reverse the column order in each, and give both halves the same team1/team2 names plus the group suffix:
def myinfo(data):
    # suffix of each column name: x, y, ...
    c = data.columns.str.split("_").str[-1]
    # relabel a two-column block as team1/team2
    f = lambda x: pd.DataFrame.set_axis(x, ["team1", "team2"], axis=1)
    # for each suffix group: stack the block and its column-reversed copy
    l = [pd.concat([*map(f, (v, v.iloc[:, ::-1]))]).add_suffix(f"_{k}")
         for k, v in data.groupby(c, axis=1)]
    return pd.concat(l, axis=1).sort_index()
print(myinfo(df))
team1_x team2_x
0 7 28
0 28 7
1 11 10
1 10 11
2 11 20
2 20 11
3 12 15
3 15 12
4 12 16
4 16 12
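Note that DataFrame.groupby(axis=1) is deprecated in recent pandas. A sketch of the same grouping done through the transposed frame instead, reusing df and the team1/team2 labels from the answer above:

suffixes = df.columns.str.split("_").str[-1]
parts = []
for k, v in df.T.groupby(suffixes.values):
    block = v.T                      # one column group, original order
    swapped = block.iloc[:, ::-1]    # reversed column order
    both = pd.concat([block.set_axis(["team1", "team2"], axis=1),
                      swapped.set_axis(["team1", "team2"], axis=1)])
    parts.append(both.add_suffix(f"_{k}"))
res = pd.concat(parts, axis=1).sort_index()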
Supposing I have the following situation:
A dataframe where the first column, 'ID', may have duplicated values.
import pandas as pd
df = pd.DataFrame({"ID": [1,2,3,4,4,5,5,5,6,6],
"l_1": [10,12,32,45,45,20,20,20,20,20],
"l_2": [11,12,32,11,21,27,38,12,9,6],
"l_3": [5,9,32,12,21,21,18,12,8,1],
"l_4": [6,21,12,77,77,2,2,2,8,8]})
ID l_1 l_2 l_3 l_4
1 10 11 5 6
2 12 12 9 21
3 32 32 32 12
4 45 11 12 77
4 45 21 21 77
5 20 27 21 2
5 20 38 18 2
5 20 12 12 2
6 20 9 8 8
6 20 6 1 8
When duplicated IDs occur:
I need to keep only the first value in columns l_1 and l_4 (the other duplicated rows must be zero).
Columns l_2 and l_3 must stay the same.
Note that when IDs are duplicated, the values in those rows in columns l_1 and l_4 are duplicated as well.
Expected output:
ID l_1 l_2 l_3 l_4
1 10 11 5 6
2 12 12 9 21
3 32 32 32 12
4 45 11 12 77
4 0 21 21 0
5 20 27 21 2
5 0 38 18 0
5 0 12 12 0
6 20 9 8 8
6 0 6 1 0
Is there a straightforward way, using pandas or numpy, to accomplish this?
So far I have only managed it with all these steps:
x1 = df[df.duplicated(subset=['ID'], keep=False)].copy()
x1.loc[x1.groupby('ID')['l_1'].apply(lambda x: (x.shift(1) == x)), 'l_1'] = 0
x1.loc[x1.groupby('ID')['l_4'].apply(lambda x: (x.shift(1) == x)), 'l_4'] = 0
df = df.drop_duplicates(subset=['ID'], keep=False)
df = pd.concat([df, x1])
Isn't this just:
df.loc[df.duplicated('ID'), ['l_1','l_4']] = 0
Output:
ID l_1 l_2 l_3 l_4
0 1 10 11 5 6
1 2 12 12 9 21
2 3 32 32 32 12
3 4 45 11 12 77
4 4 0 21 21 0
5 5 20 27 21 2
6 5 0 38 18 0
7 5 0 12 12 0
8 6 20 9 8 8
9 6 0 6 1 0
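This works because df.duplicated('ID') uses keep='first' by default, flagging every repeated ID except its first occurrence, which is exactly the set of rows to zero out:

# Boolean mask on the original df: True marks second and later occurrences.
print(df.duplicated('ID').tolist())
# [False, False, False, False, True, False, True, True, False, True]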
I would like to omit the first row and keep x consecutive rows.
In the example below I would like to keep 7. How do I achieve this?
df = pd.Series(range(1,101)).to_frame()
df.columns = ['numbers']
df['numbers'][1::7]
1 2
8 9
15 16
22 23
29 30
36 37
43 44
50 51
57 58
64 65
71 72
78 79
85 86
92 93
99 100
I would like to keep the values below, but continuing into the next row sequence:
remove 1, then keep 2 to 7; then remove 8 and keep 9 to 14.
df = pd.Series(range(1,101)).to_frame()
df.columns = ['numbers']
df['numbers'][1:7]
1 2
2 3
3 4
4 5
5 6
6 7
Or use loc:
df.loc[df.index % 7 != 0]
giving:
numbers
1 2
2 3
3 4
4 5
5 6
6 7
8 9
9 10
10 11
11 12
12 13
13 14
15 16
16 17
... ...
Or drop:
df.drop(df.index[::7])
numbers
1 2
2 3
3 4
4 5
5 6
6 7
8 9
9 10
10 11
11 12
12 13
13 14
15 16
16 17
17 18
18 19
.. ...
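Both answers drop every 7th row starting from the first. If the run length should be a parameter, the same modulus idea generalizes; a small sketch (the helper name keep_runs is made up here):

import pandas as pd

def keep_runs(df, x):
    """Drop one row, then keep the next x, repeatedly (assumes a RangeIndex)."""
    return df[df.index % (x + 1) != 0]

df = pd.Series(range(1, 101)).to_frame()
df.columns = ['numbers']
print(keep_runs(df, 6).head(8))  # values 2..7, then 9, 10, ...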
We know that one dataframe can be added to another, aligned by index.
I have two dataframes with MultiIndexes; how can I add them on level 'a' of the index?
data3.head()
c d e
a b
0 1 2 3 4
5 6 7 8 9
10 11 12 13 14
15 16 17 18 19
20 21 22 23 24
data4.head()
b d e
a c
0 2 1 3 4
5 7 6 8 9
10 12 11 13 14
15 17 16 18 19
20 22 21 23 24
25 27 26 28 29
data3 + data4
raises the error: merging with both multi-indexes is not implemented
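One way to line the two frames up on level 'a' alone is to drop the level the other frame does not have before adding; a sketch, assuming the 'a' values are unique within each frame, as in the heads shown above:

# Keep only level 'a' in each index, then add with normal alignment.
# Columns align by name: d and e overlap, b and c are filled from one side.
result = data3.droplevel('b').add(data4.droplevel('c'), fill_value=0)
print(result.head())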