Convert double index to matrix in pandas

I have a dataframe with a double index, where each index entry represents an edge. I would like to pivot(?) it into a matrix: the first index level should become the columns and the second should remain the index.
What path should I choose?

By "double index" I assume you mean a "hierarchical index" (aka MultiIndex). If so,
you could use the unstack method:
In [160]: df
Out[160]:
0  0     0
   1     1
   2     2
   3     3
1  0     4
   1     5
   2     6
   3     7
2  0     8
   1     9
   2    10
   3    11
dtype: int32
In [161]: df.unstack(level=0)
Out[161]:
   0  1   2
0  0  4   8
1  1  5   9
2  2  6  10
3  3  7  11
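
For reference, here is a minimal runnable sketch that reproduces the session above; the construction of df is an assumption, only the unstack call comes from the answer.

import numpy as np
import pandas as pd

# build a Series with a two-level hierarchical index (MultiIndex): 3 outer x 4 inner labels
idx = pd.MultiIndex.from_product([range(3), range(4)])
df = pd.Series(np.arange(12), index=idx)

# level=0 moves the first index level into the columns;
# the second level stays as the row index
mat = df.unstack(level=0)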

Related

Maximum of calculated pandas column and 0

I have a very simple problem (I guess) but can't find the right syntax for it.
Given the following DataFrame:
A B C
0 7 12 2
1 5 4 4
2 4 8 2
3 9 2 3
I need to create a new column D equal, for each row, to max(0, A-B+C).
I tried np.maximum(df.A-df.B+df.C, 0) but it doesn't do what I want and gives me the maximum value of the calculated column for every row (= 10 in the example).
Finally, I would like to obtain the DF below :
A B C D
0 7 12 2 0
1 5 4 4 5
2 4 8 2 0
3 9 2 3 10
Any help appreciated
Thanks
Let us try eval to compute the expression and clip to floor it at 0:
df['D'] = df.eval('A-B+C').clip(lower=0)
The clipped values:
0     0
1     5
2     0
3    10
dtype: int64
You can use np.where:
s = df["A"]-df["B"]+df["C"]
df["D"] = np.where(s>0, s, 0) #or s.where(s>0, 0)
print (df)
A B C D
0 7 12 2 0
1 5 4 4 5
2 4 8 2 0
3 9 2 3 10
To do this in one line you can use apply to apply the maximum function to each row separately.
In [19]: df['D'] = df.apply(lambda s: max(s['A'] - s['B'] + s['C'], 0), axis=1)
In [20]: df
Out[20]:
   A   B  C   D
0  7  12  2   0
1  5   4  4   5
2  4   8  2   0
3  9   2  3  10
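
Worth noting: np.maximum is itself elementwise, so the expression from the question should also work as written; the scalar behaviour described there matches np.max or Python's built-in max instead. A minimal sketch, assuming the sample data from the question:

import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [7, 5, 4, 9], 'B': [12, 4, 8, 2], 'C': [2, 4, 2, 3]})

# elementwise maximum of the computed column against the scalar 0
df['D'] = np.maximum(df['A'] - df['B'] + df['C'], 0)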

If a column value does not have a certain number of occurrences in a dataframe, how to duplicate rows at random until that count is met?

Say that this is what my dataframe looks like
A B
0 1 5
1 4 2
2 3 5
3 3 3
4 3 2
5 2 0
6 4 5
7 2 3
8 4 1
9 5 1
I want every unique value in column B to occur at least 3 times. So none of the rows with a B value of 5 are duplicated, the row with a B value of 0 is duplicated twice, and the rest have one of their two rows duplicated at random.
Here is an example desired output
A B
0 1 5
1 4 2
2 3 5
3 3 3
4 3 2
5 2 0
6 4 5
7 2 3
8 4 1
9 5 1
10 4 2
11 2 3
12 2 0
13 2 0
14 4 1
Edit:
The row chosen to be duplicated should be selected at random
To pick the rows at random, I would use groupby plus apply with sample on each group. The x in the lambda is one group of B, so repeats - x.shape[0] gives the number of rows to create. Some B groups may already have 3 or more rows, so np.clip forces negative values to 0; sampling 0 rows is the same as skipping the group. Finally, reset_index and concatenate back onto df.
repeats = 3
df1 = (df.groupby('B')
         .apply(lambda x: x.sample(n=int(np.clip(repeats - x.shape[0], 0, None)),
                                   replace=True))
         .reset_index(drop=True))
# df.append was removed in pandas 2.0; concat does the same here
df_final = pd.concat([df, df1]).reset_index(drop=True)
Out[43]:
A B
0 1 5
1 4 2
2 3 5
3 3 3
4 3 2
5 2 0
6 4 5
7 2 3
8 4 1
9 5 1
10 2 0
11 2 0
12 5 1
13 4 2
14 2 3
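
As a quick sanity check (a sketch, assuming df_final from above), every value of B should now occur at least repeats times:

assert (df_final['B'].value_counts() >= repeats).all()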

Pandas running sum

I have a pandas dataframe and it is something like this:
x y
1 0
2 1
3 2
4 0 <<<< Reset
5 1
6 2
7 3
8 0 <<<< Reset
9 1
10 2
The x values could be anything, they are not meaningful for this question. The y values increment, and reset and increment again. I need a third column (z) which is a number that represents the groups, so it increments when the y values are reset.
I cannot guarantee that the reset will be to zero; only a value that is less than the previous one indicates a reset.
x y z
1 0 0
2 1 0
3 2 0
4 0 1 <<<< Incremented by 1
5 1 1
6 2 1
7 3 1
8 0 2 <<<< Incremented by 1
9 1 2
10 2 2
So to produce z, I understand what needs to be done, just not the syntax. My solution would be to first assign z as a sparse column of 0's and 1's, where everything is zero except that a 1 appears when y[ix] < y[ix-1], indicating that the y counter has been reset. Then a cumulative running sum should be performed on the z column, meaning that z[ix] = sum(z[0], z[1], ..., z[ix]).
I'd appreciate some help with the syntax of assigning column z, if someone has a moment.
Based on your logic: diff gives y[ix] - y[ix-1], lt(0) flags the resets, and cumsum numbers the groups.
#general case
df['z'] = df['y'].diff().lt(0).cumsum()
# or equivalently
# df['z'] = df['y'].lt(df['y'].shift()).cumsum()
Output:
x y z
0 1 0 0
1 2 1 0
2 3 2 0
3 4 0 1
4 5 1 1
5 6 2 1
6 7 3 1
7 8 0 2
8 9 1 2
9 10 2 2
Using ne(1), which assumes y always increments by exactly 1 within a group:
df.y.diff().ne(1).cumsum().sub(1)
0 0
1 0
2 0
3 1
4 1
5 1
6 1
7 2
8 2
9 2
Name: y, dtype: int32
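
For completeness, a self-contained sketch of the two-step plan described in the question (flag the resets, then running-sum the flags); the DataFrame construction is assumed:

import pandas as pd

df = pd.DataFrame({'x': range(1, 11),
                   'y': [0, 1, 2, 0, 1, 2, 3, 0, 1, 2]})

# step 1: a reset is any y smaller than the previous y (the first diff is NaN, so False)
reset = df['y'].diff().lt(0)
# step 2: the running sum of the reset flags numbers the groups
df['z'] = reset.cumsum()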

Pandas count values inside dataframe

I have a dataframe that looks like this:
A B C
1 1 8 3
2 5 4 3
3 5 8 1
and I want to count the values so as to make a df like this:
total
1 2
3 2
4 1
5 2
8 2
is it possible with pandas?
With np.unique -
In [332]: df
Out[332]:
A B C
1 1 8 3
2 5 4 3
3 5 8 1
In [333]: ids, c = np.unique(df.values.ravel(), return_counts=1)
In [334]: pd.DataFrame({'total':c}, index=ids)
Out[334]:
total
1 2
3 2
4 1
5 2
8 2
With pandas-series -
In [357]: pd.Series(np.ravel(df)).value_counts().sort_index()
Out[357]:
1 2
3 2
4 1
5 2
8 2
dtype: int64
You can also use stack() and groupby()
df = pd.DataFrame({'A':[1,8,3],'B':[5,4,3],'C':[5,8,1]})
print(df)
A B C
0 1 5 5
1 8 4 8
2 3 3 1
df1 = df.stack().reset_index(1)
df1.groupby(0).count()
   level_1
0
1        2
3        2
4        1
5        2
8        2
Another alternative is to use stack followed by value_counts, then convert the result to a frame and finally sort the index:
count_df = df.stack().value_counts().to_frame('total').sort_index()
count_df
Result:
total
1 2
3 2
4 1
5 2
8 2
Using np.unique(..., return_counts=True) and np.column_stack():
pd.DataFrame(np.column_stack(np.unique(df, return_counts=True)))
returns:
0 1
0 1 2
1 3 2
2 4 1
3 5 2
4 8 2
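
The column-stacked result above carries positional labels 0 and 1; a small sketch (the names value and total are assumptions) makes them explicit:

import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 5, 5], 'B': [8, 4, 8], 'C': [3, 3, 1]})

# unique values and their counts over the flattened frame
ids, counts = np.unique(df.to_numpy().ravel(), return_counts=True)
out = pd.DataFrame({'total': counts}, index=pd.Index(ids, name='value'))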

Pandas Dynamic Index Referencing during Calculation

I have the following data frame
val sum
0 1 0
1 2 0
2 3 0
3 4 0
4 5 0
5 6 0
6 7 0
I would like to calculate the sum of the next three rows' (including the current row) values. I need to do this for very big files. What is the most efficient way? The expected result is
val sum
0 1 6
1 2 9
2 3 12
3 4 15
4 5 18
5 6 13
6 7 7
In general, how can I dynamically reference other rows (via boolean operations) while making assignments?
df['val'].rolling(window=3).sum().shift(-2)  # rolling().sum() replaces the removed pd.rolling_sum

0     6.0
1     9.0
2    12.0
3    15.0
4    18.0
5     NaN
6     NaN
Name: val, dtype: float64
If you want the last values to be "filled in" (13 and 7, as in the expected output), the fixed window is not enough; you need partial windows at the tail, for example as sketched below.
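
A minimal self-contained sketch of that idea: rolling over the reversed series with min_periods=1 yields the partial tail sums (13 and 7) from the expected output.

import pandas as pd

df = pd.DataFrame({'val': [1, 2, 3, 4, 5, 6, 7]})

# forward-looking window of 3: reverse, roll, reverse back;
# min_periods=1 allows partial windows at the tail
df['sum'] = df['val'][::-1].rolling(window=3, min_periods=1).sum()[::-1]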