pandas comparing column value with 0

I have the following dataframe:
import pandas as pd

data = {'sc': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'],
        't1': ['O', 'O', 'O', 'X', 'O', 'X', 'O', 'O'],
        'q1': [10, 15, 12, 12, 14, 15, 16, 9],
        's1': [280, 310, 292, 245, 267, 288, 291, 298],
        's2': [290, 315, 294, 247, 268, 285, 290, 296]}
df = pd.DataFrame(data)
df
sc t1 q1 s1 s2
0 a O 10 280 290
1 a O 15 310 315
2 a O 12 292 294
3 a X 12 245 247
4 b O 14 267 268
5 b X 15 288 285
6 b O 16 291 290
7 b O 9 298 296
I want to create a new column "s3" based on these conditions:
df['s3'] = max(s1 - s2, 0) where t1 == "O", and
df['s3'] = max(q1, 14) where t1 == "X".
Can you please help?

We can make use of np.where here:
import numpy as np

df['s3'] = np.where(
    df['t1'] == 'O',
    df['s1'].sub(df['s2']).clip(lower=0),
    df['q1'].clip(lower=14)
)
This then yields:
>>> df
sc t1 q1 s1 s2 s3
0 a O 10 280 290 0
1 a O 15 310 315 0
2 a O 12 292 294 0
3 a X 12 245 247 14
4 b O 14 267 268 0
5 b X 15 288 285 15
6 b O 16 291 290 1
7 b O 9 298 296 2
If s3 already exists and we only want to overwrite it where one of the conditions holds, keeping the existing values elsewhere, we can use np.select instead:
df['s3'] = np.select(
    [df['t1'] == 'O', df['t1'] == 'X'],
    [
        df['s1'].sub(df['s2']).clip(lower=0),
        df['q1'].clip(lower=14)
    ],
    default=df['s3']
)
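For completeness, a minimal alternative sketch that stays within pandas, using boolean masks with .loc (equivalent in effect to the np.where version above):
mask = df['t1'] == 'O'
df.loc[mask, 's3'] = df['s1'].sub(df['s2']).clip(lower=0)
df.loc[~mask, 's3'] = df['q1'].clip(lower=14)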


iteration calculation based on another dataframe

How do I do the iterative calculation shown in df2 below as the desired output?
Any reference links for this would also be appreciated; many thanks for helping.
df1
a b c
0 1 0 5
1 9 9 2
2 2 2 8
3 6 3 0
4 6 1 7
df2:
a b c
0 1 0 5 >> values from df1
1 19 18 9 >> values from (df1.iloc[1] * 2) + (df2.iloc[0] * 1)
2 23 22 25 >> values from (df1.iloc[2] * 2) + (df2.iloc[1] * 1)
3 35 28 25 >> values from (df1.iloc[3] * 2) + (df2.iloc[2] * 1)
4 47 30 39 >> values from (df1.iloc[4] * 2) + (df2.iloc[3] * 1)
IIUC, you can try:
df2 = df1.mul(2).cumsum().sub(df1.iloc[0])
Output:
a b c
0 1 0 5
1 19 18 9
2 23 22 25
3 35 28 25
4 47 30 39
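Why the one-liner works, as a sketch: the recurrence is df2[n] = df1[n]*2 + df2[n-1] with base case df2[0] = df1[0]. Unrolling it gives df2[n] = 2*(df1[0] + ... + df1[n]) - df1[0], which is exactly the cumulative sum of df1.mul(2) minus the first row. A quick check against a naive loop:
import pandas as pd

df1 = pd.DataFrame({'a': [1, 9, 2, 6, 6],
                    'b': [0, 9, 2, 3, 1],
                    'c': [5, 2, 8, 0, 7]})

closed_form = df1.mul(2).cumsum().sub(df1.iloc[0])

# naive re-implementation of the recurrence for comparison
recurrence = df1.copy()
for n in range(1, len(df1)):
    recurrence.iloc[n] = df1.iloc[n] * 2 + recurrence.iloc[n - 1]

assert closed_form.equals(recurrence)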
more complex operation
If instead you want something like x[n] = x[n]*2 + x[n-1]*3, you need to iterate:
def process(s):
    out = [s.iloc[0]]
    for x in s.iloc[1:]:
        out.append(x * 2 + out[-1] * 3)
    return out

df1.apply(process)
Output:
a b c
0 1 0 5
1 21 18 19
2 67 58 73
3 213 180 219
4 651 542 671

Reordering a DF by category in a preset order

df = pd.DataFrame(np.random.randint(0,100,size=(15, 3)), columns=list('NMO'))
df['Category1'] = ['I','I','I','I','I','G','G','G','G','G','P','P','I','I','P']
df['Category2'] = ['W','W','C','C','C','W','W','W','W','W','O','O','O','O','O']
Imagining this df is much larger, with many more categories, how might I sort it by a preset order while retaining all the characteristics of any given row? For example, sorting the df only by 'Category1', such that all the P's come first, then the I's, then the G's.
You can use categorical type:
cat_type = pd.CategoricalDtype(categories=["P", "I", "G"], ordered=True)
df['Category1'] = df['Category1'].astype(cat_type)
print(df.sort_values(by='Category1'))
Prints:
N M O Category1 Category2
10 49 37 44 P O
11 72 64 66 P O
14 39 98 32 P O
0 93 12 89 I W
1 20 74 21 I W
2 25 22 24 I C
3 47 11 33 I C
4 60 16 34 I C
12 0 90 6 I O
13 13 35 80 I O
5 84 64 67 G W
6 70 47 83 G W
7 61 57 76 G W
8 19 8 3 G W
9 7 8 5 G W
For PIG order (reverse alphabetical order):
df.sort_values('Category1', ascending=False)
For custom sorting:
df['Category1'] = pd.Categorical(df['Category1'], ['P','G','I'])
df = df.sort_values('Category1')
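As another sketch (assuming pandas >= 1.1, which added the key argument to sort_values), you can sort by a preset order without changing the column's dtype by mapping categories to ranks:
order = {'P': 0, 'I': 1, 'G': 2}
df = df.sort_values('Category1', key=lambda col: col.map(order))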

NaNs when using Pandas subtract [duplicate]

The question
Given a Series s and DataFrame df, how do I operate on each column of df with s?
df = pd.DataFrame(
[[1, 2, 3], [4, 5, 6]],
index=[0, 1],
columns=['a', 'b', 'c']
)
s = pd.Series([3, 14], index=[0, 1])
When I attempt to add them, I get all np.nan
df + s
a b c 0 1
0 NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN
What I thought I should get is
a b c
0 4 5 6
1 18 19 20
Objective and motivation
I've seen this kind of question several times over and have seen many other questions that involve some element of this. Most recently, I had to spend a bit of time explaining this concept in comments while looking for an appropriate canonical Q&A. I did not find one and so I thought I'd write one.
These questions usually arise with respect to a specific operation, but they apply equally to most arithmetic operations.
How do I subtract a Series from every column in a DataFrame?
How do I add a Series to every column in a DataFrame?
How do I multiply every column in a DataFrame by a Series?
How do I divide every column in a DataFrame by a Series?
It is helpful to create a mental model of what Series and DataFrame objects are.
Anatomy of a Series
A Series should be thought of as an enhanced dictionary. This isn't always a perfect analogy, but we'll start here. Also, there are other analogies that you can make, but I am targeting a dictionary in order to demonstrate the purpose of this post.
index
These are the keys that we can reference to get at the corresponding values. When the elements of the index are unique, the comparison to a dictionary becomes very close.
values
These are the corresponding values that are keyed by the index.
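A minimal sketch of the dictionary analogy (the values here are illustrative):
import pandas as pd

s = pd.Series({'a': 10, 'b': 11, 'c': 12})  # much like an enhanced dictionary
s['b']     # 11 -- look up a value by its "key" (index label)
s.index    # Index(['a', 'b', 'c'], dtype='object')
s.values   # array([10, 11, 12])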
Anatomy of a DataFrame
A DataFrame should be thought of as a dictionary of Series or a Series of Series. In this case the keys are the column names and the values are the columns themselves as Series objects. Each Series agrees to share the same index which is the index of the DataFrame.
columns
These are the keys that we can reference to get at the corresponding Series.
index
This is the index that all of the column Series agree to share.
Note on columns and index objects: they are the same kind of thing. A DataFrame's index can be used as another DataFrame's columns. In fact, this happens when you do df.T to get a transpose.
values
This is a two-dimensional array that contains the data in a DataFrame. The reality is that values is not what is stored inside the DataFrame object. (Well, sometimes it is, but I'm not about to try to describe the block manager). The point is, it is better to think of this as access to a two-dimensional array of the data.
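A corresponding sketch of the DataFrame anatomy (again, the data is illustrative):
df = pd.DataFrame({'x': [1, 2], 'y': [3, 4]}, index=['r0', 'r1'])

df.columns    # Index(['x', 'y'], dtype='object') -- keys to the column Series
df.index      # Index(['r0', 'r1'], dtype='object') -- shared by every column
df['x']       # the column 'x' as a Series, keyed by df.index
df.values     # array([[1, 3], [2, 4]]) -- 2-D array access to the data
df.T.columns  # after a transpose, the old index serves as the new columns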
Define Sample Data
These are sample pandas.Index objects that can be used as the index of a Series or DataFrame or can be used as the columns of a DataFrame:
import numpy as np
import pandas as pd

idx_lower = pd.Index([*'abcde'], name='lower')
idx_range = pd.RangeIndex(5, name='range')
These are sample pandas.Series objects that use the pandas.Index objects above:
s0 = pd.Series(range(10, 15), idx_lower)
s1 = pd.Series(range(30, 40, 2), idx_lower)
s2 = pd.Series(range(50, 10, -8), idx_range)
These are sample pandas.DataFrame objects that use the pandas.Index objects above:
df0 = pd.DataFrame(100, index=idx_range, columns=idx_lower)
df1 = pd.DataFrame(
    np.arange(np.prod(df0.shape)).reshape(df0.shape),
    index=idx_range, columns=idx_lower
)
Series on Series
When operating on two Series, the alignment is obvious. You align the index of one Series with the index of the other.
s1 + s0
lower
a 40
b 43
c 46
d 49
e 52
dtype: int64
We get the same result if I randomly shuffle one before operating; the indices still align:
s1 + s0.sample(frac=1)
lower
a 40
b 43
c 46
d 49
e 52
dtype: int64
That is not the case when I instead operate with the values of the shuffled Series. Here, Pandas has no index to align with and therefore operates by position:
s1 + s0.sample(frac=1).values
lower
a 42
b 42
c 47
d 50
e 49
dtype: int64
Add a scalar
s1 + 1
lower
a 31
b 33
c 35
d 37
e 39
dtype: int64
DataFrame on DataFrame
Something similar holds when operating between two DataFrames. The alignment is obvious and does what we think it should do:
df0 + df1
lower a b c d e
range
0 100 101 102 103 104
1 105 106 107 108 109
2 110 111 112 113 114
3 115 116 117 118 119
4 120 121 122 123 124
Even if we shuffle the second DataFrame on both axes, the index and columns still align and give us the same thing:
df0 + df1.sample(frac=1).sample(frac=1, axis=1)
lower a b c d e
range
0 100 101 102 103 104
1 105 106 107 108 109
2 110 111 112 113 114
3 115 116 117 118 119
4 120 121 122 123 124
With the same shuffling, but adding the underlying array rather than the DataFrame, nothing is aligned any more and we get different results:
df0 + df1.sample(frac=1).sample(frac=1, axis=1).values
lower a b c d e
range
0 123 124 121 122 120
1 118 119 116 117 115
2 108 109 106 107 105
3 103 104 101 102 100
4 113 114 111 112 110
Add a one-dimensional array. It will align with columns and broadcast across rows.
df0 + [*range(2, df0.shape[1] + 2)]
lower a b c d e
range
0 102 103 104 105 106
1 102 103 104 105 106
2 102 103 104 105 106
3 102 103 104 105 106
4 102 103 104 105 106
Add a scalar. There isn't anything to align with, so it broadcasts to everything:
df0 + 1
lower a b c d e
range
0 101 101 101 101 101
1 101 101 101 101 101
2 101 101 101 101 101
3 101 101 101 101 101
4 101 101 101 101 101
DataFrame on Series
If DataFrames are to be thought of as dictionaries of Series, and Series are to be thought of as dictionaries of values, then it is natural that, when operating between a DataFrame and a Series, they should be aligned by their "keys".
s0:
lower a b c d e
10 11 12 13 14
df0:
lower a b c d e
range
0 100 100 100 100 100
1 100 100 100 100 100
2 100 100 100 100 100
3 100 100 100 100 100
4 100 100 100 100 100
And when we operate, the 10 in s0['a'] gets added to the entire column of df0['a']:
df0 + s0
lower a b c d e
range
0 110 111 112 113 114
1 110 111 112 113 114
2 110 111 112 113 114
3 110 111 112 113 114
4 110 111 112 113 114
The heart of the issue and point of the post
What if I want to operate with s2 and df0?
s2: df0:
| lower a b c d e
range | range
0 50 | 0 100 100 100 100 100
1 42 | 1 100 100 100 100 100
2 34 | 2 100 100 100 100 100
3 26 | 3 100 100 100 100 100
4 18 | 4 100 100 100 100 100
When I operate, I get all NaN, as cited in the question:
df0 + s2
a b c d e 0 1 2 3 4
range
0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
3 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
4 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
This does not produce what we wanted, because Pandas is aligning the index of s2 with the columns of df0. The columns of the result are the union of the index of s2 and the columns of df0.
We could fake it out with a tricky transposition:
(df0.T + s2).T
lower a b c d e
range
0 150 150 150 150 150
1 142 142 142 142 142
2 134 134 134 134 134
3 126 126 126 126 126
4 118 118 118 118 118
But it turns out Pandas has a better solution. There are operation methods that allow us to pass an axis argument to specify the axis to align with.
- sub
+ add
* mul
/ div
** pow
And so the answer is simply:
df0.add(s2, axis='index')
lower a b c d e
range
0 150 150 150 150 150
1 142 142 142 142 142
2 134 134 134 134 134
3 126 126 126 126 126
4 118 118 118 118 118
It turns out that axis='index' is synonymous with axis=0, and axis='columns' with axis=1:
df0.add(s2, axis=0)
lower a b c d e
range
0 150 150 150 150 150
1 142 142 142 142 142
2 134 134 134 134 134
3 126 126 126 126 126
4 118 118 118 118 118
The rest of the operations
df0.sub(s2, axis=0)
lower a b c d e
range
0 50 50 50 50 50
1 58 58 58 58 58
2 66 66 66 66 66
3 74 74 74 74 74
4 82 82 82 82 82
df0.mul(s2, axis=0)
lower a b c d e
range
0 5000 5000 5000 5000 5000
1 4200 4200 4200 4200 4200
2 3400 3400 3400 3400 3400
3 2600 2600 2600 2600 2600
4 1800 1800 1800 1800 1800
df0.div(s2, axis=0)
lower a b c d e
range
0 2.000000 2.000000 2.000000 2.000000 2.000000
1 2.380952 2.380952 2.380952 2.380952 2.380952
2 2.941176 2.941176 2.941176 2.941176 2.941176
3 3.846154 3.846154 3.846154 3.846154 3.846154
4 5.555556 5.555556 5.555556 5.555556 5.555556
df0.pow(1 / s2, axis=0)
lower a b c d e
range
0 1.096478 1.096478 1.096478 1.096478 1.096478
1 1.115884 1.115884 1.115884 1.115884 1.115884
2 1.145048 1.145048 1.145048 1.145048 1.145048
3 1.193777 1.193777 1.193777 1.193777 1.193777
4 1.291550 1.291550 1.291550 1.291550 1.291550
My motivation is to share knowledge and teach, which is why it was important to address the higher-level concepts above first and make this as clear as possible.
I prefer the method mentioned by piRSquared (i.e., df.add(s, axis=0)), but another method uses apply together with a lambda to perform an action on each column in the DataFrame:
>>> df.apply(lambda col: col + s)
a b c
0 4 5 6
1 18 19 20
To apply the lambda function to the rows, use axis=1:
>>> df.T.apply(lambda row: row + s, axis=1)
0 1
a 4 18
b 5 19
c 6 20
This method could be useful when the transformation is more complex, e.g.:
df.apply(lambda col: 0.5 * col ** 2 + 2 * s - 3)
Just to add an extra layer from my own experience, extending what others have done here: this shows how to operate with a Series on a DataFrame that has extra columns whose values you want to keep. Below is a short demonstration of the process.
import pandas as pd

d = [1.056323, 0.126681,
     0.142588, 0.254143,
     0.15561, 0.139571,
     0.102893, 0.052411]
s = pd.Series(d, index=['const', '426', '428', '424', '425', '423', '427', '636'])
print(s)
const 1.056323
426 0.126681
428 0.142588
424 0.254143
425 0.155610
423 0.139571
427 0.102893
636 0.052411
d2 = {
    'loc': ['D', 'D', 'E', 'E', 'F', 'F', 'G', 'G', 'E', 'D'],
    '426': [9, 2, 3, 2, 4, 0, 2, 7, 2, 8],
    '428': [2, 4, 1, 0, 2, 1, 3, 0, 7, 8],
    '424': [1, 10, 5, 8, 2, 7, 10, 0, 3, 5],
    '425': [9, 2, 6, 8, 9, 1, 7, 3, 8, 6],
    '423': [2, 7, 3, 10, 8, 1, 2, 9, 3, 9],
    '427': [4, 10, 4, 0, 8, 3, 1, 5, 7, 7],
    '636': [10, 5, 6, 4, 0, 5, 1, 1, 4, 8],
    'seq': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
}
df2 = pd.DataFrame(d2)
print(df2)
loc 426 428 424 425 423 427 636 seq
0 D 9 2 1 9 2 4 10 1
1 D 2 4 10 2 7 10 5 1
2 E 3 1 5 6 3 4 6 1
3 E 2 0 8 8 10 0 4 1
4 F 4 2 2 9 8 8 0 1
5 F 0 1 7 1 1 3 5 1
6 G 2 3 10 7 2 1 1 1
7 G 7 0 0 3 9 5 1 1
8 E 2 7 3 8 3 7 4 1
9 D 8 8 5 6 9 7 8 1
To multiply a DataFrame by a Series and keep dissimilar columns
Create a list of the columns in the DataFrame and Series you want to operate on:
col = ['426', '428', '424', '425', '423', '427', '636']
Perform your operation using the list and indicate the axis to use:
df2[col] = df2[col].mul(s[col], axis=1)
print(df2)
loc 426 428 424 425 423 427 636 seq
0 D 1.140129 0.285176 0.254143 1.40049 0.279142 0.411572 0.524110 1
1 D 0.253362 0.570352 2.541430 0.31122 0.976997 1.028930 0.262055 1
2 E 0.380043 0.142588 1.270715 0.93366 0.418713 0.411572 0.314466 1
3 E 0.253362 0.000000 2.033144 1.24488 1.395710 0.000000 0.209644 1
4 F 0.506724 0.285176 0.508286 1.40049 1.116568 0.823144 0.000000 1
5 F 0.000000 0.142588 1.779001 0.15561 0.139571 0.308679 0.262055 1
6 G 0.253362 0.427764 2.541430 1.08927 0.279142 0.102893 0.052411 1
7 G 0.886767 0.000000 0.000000 0.46683 1.256139 0.514465 0.052411 1
8 E 0.253362 0.998116 0.762429 1.24488 0.418713 0.720251 0.209644 1
9 D 1.013448 1.140704 1.270715 0.93366 1.256139 0.720251 0.419288 1

SQL query with CASE statement ,multiple variables and sub variables

I am working on a dataset where I need to write a query for the requirement below, either in R or with sqldf. I want to learn how to write it in both languages (SQL and R); kindly help.
The requirement is: print Variable "a" from the table when the Total_Scores of id 34 and Rank 3 is GREATER THAN the Total_Scores of id 34 and Rank 4, else print Variable "b".
The same logic applies to each id and Rank.
id Rank Variable Total_Scores
34 3 a 11
34 4 b 6
126 3 c 15
126 4 d 18
190 3 e 9
190 4 f 10
388 3 g 20
388 4 h 15
401 3 i 15
401 4 x 11
476 3 y 11
476 4 z 11
536 3 p 15
536 4 q 6
I have tried to write a SQL CASE statement and I am stuck; could you please
help to write the query:
"select id ,Rank ,
CASE
WHEN (select Total_Scores from table where id == 34 and Rank == 3) > (select Total_Scores from table where id == 34 and Rank == 4)
THEN "Variable is )
Final output Should be :
id Rank Variable Total_Scores
34 3 a 11
126 4 d 18
190 4 f 10
388 3 g 20
401 3 i 15
536 3 p 15
You seem to want the row with the highest score for each id. A canonical way to write this in SQL uses row_number():
select t.*
from (select t.*,
row_number() over (partition by id order by score desc) as seqnum
from t
) t
where seqnum = 1;
This returns one row per id, even when the scores are tied. If you want all rows in that case, use rank() instead of row_number().
An alternative method can have better performance with an index on (id, score):
select t.*
from t
where t.score = (select max(t2.score) from t t2 where t2.id = t.id);
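As a concrete sketch against the sample data (assuming the table is named MyTable as in the answer below; Rank may need quoting in some databases since it collides with a keyword), rank() keeps both rows of a tied id such as 476:
select id, Rank, Variable, Total_Scores
from (select t.*,
             rank() over (partition by id
                          order by Total_Scores desc) as rnk
      from MyTable t
     ) t
where rnk = 1;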
You can try this.
SELECT T.* FROM (
SELECT id,
MAX(Total_Scores) Max_Total_Scores
FROM MyTable
GROUP BY id
HAVING MAX(Total_Scores) > MIN(Total_Scores) ) AS MX
INNER JOIN MyTable T ON MX.id = T.id AND MX.Max_Total_Scores = T.Total_Scores
ORDER BY id
In R
library(dplyr)
df %>% group_by(id) %>%
filter(Total_Scores == max(Total_Scores)) %>% filter(n()==1) %>%
ungroup()
# A tibble: 6 x 4
id Rank Variable Total_Scores
<int> <int> <chr> <int>
1 34 3 a 11
2 126 4 d 18
3 190 4 f 10
4 388 3 g 20
5 401 3 i 15
6 536 3 p 15
Data
df <- read.table(text="
id Rank Variable Total_Scores
34 3 a 11
34 4 b 6
126 3 c 15
126 4 d 18
190 3 e 9
190 4 f 10
388 3 g 20
388 4 h 15
401 3 i 15
401 4 x 11
476 3 y 11
476 4 z 11
536 3 p 15
536 4 q 6
",header=T, stringsAsFactors = F)
Assuming that what you want is the subset of rows whose Total_Scores is largest for that id, here are two approaches.
The question does not discuss how to deal with ties. There is one id in the example that has a tie, but there is no output row corresponding to it, which I assume was not intended: either both rows should have been output, or one of them. In the solutions below, (1) arbitrarily gives one of the rows if there are duplicates, whereas (2) gives both.
1) sqldf
If you use max in an SQLite select, it will automatically select the other variables from the same row, so:
library(sqldf)
sqldf("select id, Rank, Variable, max(Total_Scores) Total_Scores
from DF
group by id")
giving:
id Rank Variable Total_Scores
1 34 3 a 11
2 126 4 d 18
3 190 4 f 10
4 388 3 g 20
5 401 3 i 15
6 476 3 y 11
7 536 3 p 15
2) base R
In base R we can use ave and subset like this:
subset(DF, ave(Total_Scores, id, FUN = function(x) x == max(x)) > 0)
giving:
id Rank Variable Total_Scores
1 34 3 a 11
4 126 4 d 18
6 190 4 f 10
7 388 3 g 20
9 401 3 i 15
11 476 3 y 11
12 476 4 z 11
13 536 3 p 15
Note
The input in reproducible form:
Lines <- "id Rank Variable Total_Scores
34 3 a 11
34 4 b 6
126 3 c 15
126 4 d 18
190 3 e 9
190 4 f 10
388 3 g 20
388 4 h 15
401 3 i 15
401 4 x 11
476 3 y 11
476 4 z 11
536 3 p 15
536 4 q 6"
DF <- read.table(text = Lines, header = TRUE, as.is = TRUE)

pandas dataframe multiply with a series [duplicate]

This question already has answers here:
How do I operate on a DataFrame with a Series for every column?
(3 answers)
Closed 4 years ago.
What is the best way to multiply all the columns of a Pandas DataFrame by a column vector stored in a Series? I used to do this in Matlab with repmat(), which doesn't exist in Pandas. I can use np.tile(), but it looks ugly to convert the data structure back and forth each time.
Thanks.
What's wrong with
result = dataframe.mul(series, axis=0)
?
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.mul.html#pandas.DataFrame.mul
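A minimal sketch of that one-liner on illustrative data:
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
s = pd.Series([10, 100, 1000])

# align s with the row index and broadcast across the columns
result = df.mul(s, axis=0)
#       a     b
# 0    10    40
# 1   200   500
# 2  3000  6000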
This can be accomplished quite simply with the DataFrame method apply.
In[1]: import pandas as pd; import numpy as np
In[2]: df = pd.DataFrame(np.arange(40.).reshape((8, 5)), columns=list('abcde')); df
Out[2]:
a b c d e
0 0 1 2 3 4
1 5 6 7 8 9
2 10 11 12 13 14
3 15 16 17 18 19
4 20 21 22 23 24
5 25 26 27 28 29
6 30 31 32 33 34
7 35 36 37 38 39
In[3]: ser = pd.Series(np.arange(8) * 10); ser
Out[3]:
0 0
1 10
2 20
3 30
4 40
5 50
6 60
7 70
Now that we have our DataFrame and Series we need a function to pass to apply.
In[4]: func = lambda x: np.asarray(x) * np.asarray(ser)
We can pass this to df.apply and we are good to go
In[5]: df.apply(func)
Out[5]:
a b c d e
0 0 0 0 0 0
1 50 60 70 80 90
2 200 220 240 260 280
3 450 480 510 540 570
4 800 840 880 920 960
5 1250 1300 1350 1400 1450
6 1800 1860 1920 1980 2040
7 2450 2520 2590 2660 2730
df.apply acts column-wise by default, but it can also act row-wise by passing axis=1 as an argument to apply.
In[6]: ser2 = pd.Series(np.arange(5) *5); ser2
Out[6]:
0 0
1 5
2 10
3 15
4 20
In[7]: func2 = lambda x: np.asarray(x) * np.asarray(ser2)
In[8]: df.apply(func2, axis=1)
Out[8]:
a b c d e
0 0 5 20 45 80
1 0 30 70 120 180
2 0 55 120 195 280
3 0 80 170 270 380
4 0 105 220 345 480
5 0 130 270 420 580
6 0 155 320 495 680
7 0 180 370 570 780
This could be done more concisely by defining the anonymous function inside apply
In[9]: df.apply(lambda x: np.asarray(x) * np.asarray(ser))
Out[9]:
a b c d e
0 0 0 0 0 0
1 50 60 70 80 90
2 200 220 240 260 280
3 450 480 510 540 570
4 800 840 880 920 960
5 1250 1300 1350 1400 1450
6 1800 1860 1920 1980 2040
7 2450 2520 2590 2660 2730
In[10]: df.apply(lambda x: np.asarray(x) * np.asarray(ser2), axis=1)
Out[10]:
a b c d e
0 0 5 20 45 80
1 0 30 70 120 180
2 0 55 120 195 280
3 0 80 170 270 380
4 0 105 220 345 480
5 0 130 270 420 580
6 0 155 320 495 680
7 0 180 370 570 780
Why not create your own dataframe tile function:
def tile_df(df, n, m):
    # tile m copies horizontally, then n copies vertically
    # (DataFrame.append was removed in pandas 2.0, so use pd.concat)
    wide = pd.concat([df] * m, axis=1, ignore_index=True)
    return pd.concat([wide] * n, axis=0, ignore_index=True)
Example:
df = pd.DataFrame([[1, 2], [3, 4]])
tile_df(df, 2, 3)
# 0 1 2 3 4 5
# 0 1 2 1 2 1 2
# 1 3 4 3 4 3 4
# 2 1 2 1 2 1 2
# 3 3 4 3 4 3 4
However, the docs note: "DataFrame is not intended to be a drop-in replacement for ndarray as its indexing semantics are quite different in places from a matrix." This presumably should be interpreted as "use numpy if you are doing lots of matrix stuff".
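In that spirit, a sketch of the pure-numpy route the docs hint at: broadcasting makes repmat/np.tile unnecessary for this kind of column-vector multiplication, at the cost of the round trip through .values that the question calls ugly:
import numpy as np
import pandas as pd

df = pd.DataFrame([[1, 2], [3, 4]])
s = pd.Series([10, 100])

# reshape the Series into a column vector; numpy broadcasts it across columns
out = pd.DataFrame(df.values * s.values[:, None],
                   index=df.index, columns=df.columns)
#      0    1
# 0   10   20
# 1  300  400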