I have data like this: [144 144 144 144 143 143 143 93 93 93 93 93 93 93 93 93 93], and I want to make data like this: [0, 1, 2, 3, 0, 1, 2, 0, 1, 2, 3, 4, ....]
I tried to use the function from the link below, but I got this: [3 2 1 0 2 1 0 9 0 6 5 4 3 2 1 7 8]
How can I fix it?
def grp_range(a):
count = np.unique(a,return_counts=1)[1]
idx = count.cumsum()
id_arr = np.ones(idx[-1],dtype=int)
id_arr[0] = 0
id_arr[idx[:-1]] = -count[:-1]+1
out = id_arr.cumsum()[np.argsort(a).argsort()]
return out
How to use numpy to get the cumulative count by unique values in linear time?
To speed up bpfrd's version you can use numba. On my machine it is ~40 times faster than bpfrd's version and ~10 times faster than ouroboros1's versions!
from numba import jit
import numpy as np
@jit(nopython=True)
def numba_style(a):
prev = a[0]
idx = 0
c = [idx]
for i in range(1, len(a)):
if a[i] == prev:
idx += 1
else:
prev = a[i]
idx = 0
c.append(idx)
return np.array(c)
Function      Timing (mean ± std. dev. of 3 runs, 3 loops each)
list_style    818 µs ± 116 µs per loop
grp_range     230 µs ± 81.3 µs per loop
cumcount      170 µs ± 73.9 µs per loop
np_style_2    165 µs ± 86.2 µs per loop
np_style      118 µs ± 79.3 µs per loop
numba_style   19.2 µs ± 887 ns per loop
Performance Testing
Speed comparison, also against ouroboros1's versions. Define the functions:
def list_style(a):
prev = a[0]
idx = 0
c = [idx]
for i in range(1, len(a)):
if a[i] == prev:
idx += 1
else:
prev = a[i]
idx = 0
c.append(idx)
return c
def dfill(a):
n = a.size
b = np.concatenate([[0], np.where(a[:-1] != a[1:])[0] + 1, [n]])
return np.arange(n)[b[:-1]].repeat(np.diff(b))
def argunsort(s):
n = s.size
u = np.empty(n, dtype=np.int64)
u[s] = np.arange(n)
return u
def cumcount(a):
n = a.size
s = a.argsort(kind='mergesort')
i = argunsort(s)
b = a[s]
return (np.arange(n) - dfill(b))[i]
def grp_range(a):
count = np.unique(a,return_counts=1)[1]
idx = count.cumsum()
id_arr = np.ones(idx[-1],dtype=int)
id_arr[0] = 0
id_arr[idx[:-1]] = -count[:-1]+1
out = id_arr.cumsum()[np.argsort(a, kind='mergesort').argsort()] # adjustment here
return out
def np_style(a):
unique, index, counts = np.unique(a, return_counts=True, return_index=True)
arg_s = np.argsort(index)
return np.concatenate(list(map(np.arange,counts[arg_s])), axis=0)
def np_style_2(a):
aa = a+np.arange(len(a))
unique, index = np.unique(a, return_index=True)
for uni, ind in zip(unique, index):
aa[a==uni] -= aa[ind]
return aa
And the actual testing with a slightly longer array:
n = 4, 3, 9, 15, 24, 100, 2500, 555
t = 144, 143, 93, 85, 84, 100, 250, 555
a = np.concatenate([[t_i]*n_i for t_i, n_i in zip(t,n)])
%timeit -n 3 -r 3 grp_range(a)
%timeit -n 3 -r 3 np_style(a)
%timeit -n 3 -r 3 np_style_2(a)
%timeit -n 3 -r 3 cumcount(a)
%timeit -n 3 -r 3 list_style(a)
%timeit -n 3 -r 3 numba_style(a)
try this:
a = [144, 144, 144, 144, 143, 143, 143, 93, 93, 93, 93, 93, 93, 93, 93, 93, 93]
prev = a[0]
idx = 0
c = [idx]
for i in range(1, len(a)):
if a[i] == prev:
idx += 1
else:
prev = a[i]
idx = 0
c.append(idx)
c
>[0, 1, 2, 3, 0, 1, 2, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
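A vectorized sketch of the same run-wise counting, assuming a is a 1-D NumPy array (like the loop above, it counts within consecutive runs of equal values, which matches the desired output here):
import numpy as np

a = np.array([144, 144, 144, 144, 143, 143, 143,
              93, 93, 93, 93, 93, 93, 93, 93, 93, 93])

# index at which each run of equal values starts
starts = np.r_[0, np.flatnonzero(np.diff(a)) + 1]
# length of each run, from consecutive start positions
lengths = np.diff(np.r_[starts, len(a)])
# position within a run = global position - start of its run
out = np.arange(len(a)) - np.repeat(starts, lengths)
# out: [0 1 2 3 0 1 2 0 1 2 3 4 5 6 7 8 9]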
TL;DR: within your function change
out = id_arr.cumsum()[np.argsort(a).argsort()]
into
out = id_arr.cumsum()[np.argsort(a, kind='mergesort').argsort()]
If speed is a concern, use the solution offered by @piRSquared in the post mentioned. You'll need the first three functions mentioned there. So:
import numpy as np
arr = np.array([144, 144, 144, 144, 143, 143, 143, 93, 93, 93, 93, 93, 93, 93, 93, 93, 93])
def dfill(a):
n = a.size
b = np.concatenate([[0], np.where(a[:-1] != a[1:])[0] + 1, [n]])
return np.arange(n)[b[:-1]].repeat(np.diff(b))
def argunsort(s):
n = s.size
u = np.empty(n, dtype=np.int64)
u[s] = np.arange(n)
return u
def cumcount(a):
n = a.size
s = a.argsort(kind='mergesort')
i = argunsort(s)
b = a[s]
return (np.arange(n) - dfill(b))[i]
cumcount(arr) # will get you the desired output
The accepted answer from the referenced post is actually incorrect.
The problem lies in the fact that np.argsort uses quicksort as the default sorting algorithm. For a stable sort, we need mergesort (see the comments by @MartijnPieters on the matter here).
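A quick way to see why the stable sort matters (a minimal sketch): with many repeated values, only a stable sort is guaranteed to keep equal elements in their original left-to-right order, which is exactly what the id_arr.cumsum() trick relies on.
import numpy as np

a = np.array([144, 144, 144, 144, 143, 143, 143, 93, 93, 93])

# Default kind: the relative order of equal elements is not guaranteed.
print(np.argsort(a))
# Stable sort: ties keep their original order.
print(np.argsort(a, kind='mergesort'))  # [7 8 9 4 5 6 0 1 2 3]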
So, in your slightly adjusted function we need:
import numpy as np
def grp_range(a):
count = np.unique(a,return_counts=1)[1]
idx = count.cumsum()
id_arr = np.ones(idx[-1],dtype=int)
id_arr[0] = 0
id_arr[idx[:-1]] = -count[:-1]+1
out = id_arr.cumsum()[np.argsort(a, kind='mergesort').argsort()] # adjustment here
return out
Testing a couple of examples:
# OP's example
arr = np.array([144, 144, 144, 144, 143, 143, 143, 93, 93, 93, 93, 93, 93, 93, 93, 93, 93])
arr_result = grp_range(arr)
print(arr_result)
# [0 1 2 3 0 1 2 0 1 2 3 4 5 6 7 8 9] (correct)
# OP's example, with mixed sequence (note the added 143, 144 at arr_alt[9:11])
arr_alt = np.array([144, 144, 144, 144, 143, 143, 143, 93, 93, 143, 144, 93, 93, 93, 93, 93, 93])
arr_alt_result = grp_range(arr_alt)
print(arr_alt_result)
# [0 1 2 3 0 1 2 0 1 3 4 2 3 4 5 6 7] (correct) (note: arr_alt_result[9:11] == array([3, 4], dtype=int32))
As mentioned above, the solution offered by @piRSquared will be faster than this, with the same results.
A final aside. The sequence posted is in descending order. If this is true for the actual data you are working with, you could do something like this:
import numpy as np
arr = np.array([144, 144, 144, 144, 143, 143, 143, 93, 93, 93, 93, 93, 93, 93, 93, 93, 93])
count = np.unique(arr,return_counts=1)[1][::-1] # from ascending to descending
out = np.concatenate(list(map(np.arange,count)), axis=0)
# out: [0 1 2 3 0 1 2 0 1 2 3 4 5 6 7 8 9]
or this:
from collections import Counter
arr = [144, 144, 144, 144, 143, 143, 143, 93, 93, 93, 93, 93, 93, 93, 93, 93, 93]
count_dict = Counter(arr)
out = list()
for _, cnt in count_dict.items():
    out.extend(range(cnt))
or indeed, use the answer provided by @bpfrd.
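A standard-library alternative along the same lines, assuming the run-wise counting shown above is what you are after, is itertools.groupby:
from itertools import groupby

arr = [144, 144, 144, 144, 143, 143, 143, 93, 93, 93, 93, 93, 93, 93, 93, 93, 93]
# enumerate each run of consecutive equal values separately
out = [i for _, run in groupby(arr) for i, _ in enumerate(run)]
# out: [0, 1, 2, 3, 0, 1, 2, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]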
Related
I have a few arrays a, b, c and d as shown below and would like to populate a matrix by evaluating a function f(...) which consumes a, b, c and d.
With a nested for loop this is obviously possible, but I'm looking for a more Pythonic and faster way to do this.
So far I have tried np.fromfunction with no luck.
Thanks
PS: This function f has a conditional. I can still consider approaches that do not support conditionals, but if the solution supports conditionals that would be fantastic.
Example function, in case it is helpful:
def fun(a,b,c,d): return a+b+c+d if a==b else a*b*c*d
Also, why fromfunction failed is shown below:
>>> a = np.array([1,2,3,4,5])
>>> b = np.array([10,20,30])
>>> def fun(i,j): return a[i] * b[j]
>>> np.fromfunction(fun, (3,5))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Anaconda3\lib\site-packages\numpy\core\numeric.py", line 1853, in fromfunction
return function(*args, **kwargs)
File "<stdin>", line 1, in fun
IndexError: arrays used as indices must be of integer (or boolean) type
The reason the function fails is that np.fromfunction passes floating-point values, which are not valid as indices. You can modify your function like this to make it work:
def fun(i,j):
return a[j.astype(int)] * b[i.astype(int)]
print(np.fromfunction(fun, (3,5)))
[[ 10 20 30 40 50]
[ 20 40 60 80 100]
[ 30 60 90 120 150]]
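Alternatively, you can ask fromfunction to build the coordinate arrays with an integer dtype, so no cast is needed inside the function (a small variation on the same idea):
import numpy as np

a = np.array([1, 2, 3, 4, 5])
b = np.array([10, 20, 30])

# dtype=int makes fromfunction pass integer index grids to the function
print(np.fromfunction(lambda i, j: a[j] * b[i], (3, 5), dtype=int))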
Jake has explained why your fromfunction approach fails. However, you don't need fromfunction for your example. You could simply add an axis to b and have numpy broadcast the shapes:
a = np.array([1,2,3,4,5])
b = np.array([10,20,30])
def fun(i,j): return a[j.astype(int)] * b[i.astype(int)]
f1 = np.fromfunction(fun, (3, 5))
f2 = b[:, None] * a
(f1 == f2).all() # True
Extending this to the function you showed that contains an if condition, you can split the conditional into two steps: first build the full array from the else expression (the product), then overwrite the relevant entries (here the diagonal, where i == j) with the if expression (the sum).
a = np.array([1, 2, 3, 4, 5])
b = np.array([5, 4, 3, 2, 1])
c = np.array([100, 200, 300, 400, 500])
d = np.array([0, 1, 2, 3])
# Calculate the values at all indices as the product
result = d[:, None] * (a * b * c)
# array([[ 0, 0, 0, 0, 0],
# [ 500, 1600, 2700, 3200, 2500],
# [1000, 3200, 5400, 6400, 5000],
# [1500, 4800, 8100, 9600, 7500]])
# Calculate sum
sum_arr = d[:, None] + (a + b + c)
# array([[106, 206, 306, 406, 506],
# [107, 207, 307, 407, 507],
# [108, 208, 308, 408, 508],
# [109, 209, 309, 409, 509]])
# Set diagonal elements (i==j) to sum:
np.fill_diagonal(result, np.diag(sum_arr))
which gives the following result:
array([[ 106, 0, 0, 0, 0],
[ 500, 207, 2700, 3200, 2500],
[1000, 3200, 308, 6400, 5000],
[1500, 4800, 8100, 409, 7500]])
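If the condition in your real f compares array values rather than positions (as the a == b in your example suggests), a sketch using np.where with broadcasting handles the conditional elementwise. The comparison below (d against a) is only illustrative; swap in whatever condition your actual function uses:
import numpy as np

a = np.array([1, 2, 3, 4, 5])
b = np.array([5, 4, 3, 2, 1])
c = np.array([100, 200, 300, 400, 500])
d = np.array([0, 1, 2, 3])

# broadcast d against a/b/c to a (4, 5) grid and pick a branch per element
cond = d[:, None] == a
result = np.where(cond, d[:, None] + (a + b + c), d[:, None] * (a * b * c))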
I need each column to contain random integers from a specified range: for col 1 random.randint(1, 50), for col 2 random.randint(51, 100), etc.
import numpy
import random
import pandas
from random import randint
wsn = numpy.arange(1, 6)
taskn = 3
t1 = numpy.random.randint((random.randint(2, 50),random.randint(51, 100),
random.randint(101, 150),random.randint(151, 200),random.randint(201, 250)),size=(5,5))
t2 = numpy.random.randint((random.randint(2, 50),random.randint(51, 100),
random.randint(101, 150),random.randint(151, 200),random.randint(201, 250)),size=(5,5))
t3= numpy.random.randint((random.randint(2, 50),random.randint(51, 100),
random.randint(101, 150),random.randint(151, 200),random.randint(201, 250)),size=(5,5))
print('\nGenerated Data:\t\n\nNumber \t\t\t Task 1 \t\t\t Task 2 \t\t\t Task 3\n')
ni = len(t1)
for i in range(ni):
print('\t {0} \t {1} \t {2} \t {3}\n'.format(wsn[i], t1[i],t2[i],t3[i]))
print('\n\n')
It prints the following
Generated Data:
Number Task 1 Task 2 Task 3
1 [ 1 13 16 121 18] [ 5 22 34 65 194] [ 10 68 60 134 130]
2 [ 0 2 117 176 46] [ 1 15 111 116 180] [22 41 70 24 85]
3 [ 0 12 121 19 193] [ 0 5 37 109 205] [ 5 53 5 106 15]
4 [ 0 5 97 99 235] [ 0 22 142 11 150] [ 6 79 129 64 87]
5 [ 2 46 71 101 186] [ 3 57 141 37 71] [ 15 32 9 117 77]
Sometimes it even generates 0, even though I didn't specify 0 in the ranges.
np.random.randint(low, high, size=None) allows low and high to be arrays of length num_intervals.
In that case, when size is not specified, it generates one integer per interval defined by the low and high bounds.
If you want to generate multiple integers per interval, you just need to pass the corresponding size argument, whose last dimension must be num_intervals.
Here it is size=(num_tasks, num_samples, num_intervals).
import numpy as np
bounds = np.array([1, 50, 100, 150, 200, 250])
num_tasks = 3
num_samples = 7
bounds_low = bounds[:-1]
bounds_high = bounds[1:]
num_intervals = len(bounds_low)
arr = np.random.randint(
bounds_low, bounds_high, size=(num_tasks, num_samples, num_intervals)
)
Checking the properties:
assert arr.shape == (num_tasks, num_samples, num_intervals)
for itvl_idx in range(num_intervals):
assert np.all(arr[:, :, itvl_idx] >= bounds_low[itvl_idx])
assert np.all(arr[:, :, itvl_idx] < bounds_high[itvl_idx])
An example of output:
array([[[ 45, 61, 100, 185, 216],
[ 36, 78, 117, 152, 222],
[ 18, 77, 112, 153, 221],
[ 9, 70, 123, 178, 223],
[ 16, 84, 118, 157, 233],
[ 42, 78, 108, 179, 240],
[ 40, 52, 116, 152, 225]],
[[ 3, 92, 102, 151, 236],
[ 45, 89, 138, 179, 218],
[ 45, 73, 120, 183, 231],
[ 35, 80, 130, 167, 212],
[ 14, 86, 118, 195, 212],
[ 20, 66, 117, 151, 248],
[ 49, 94, 138, 175, 212]],
[[ 13, 75, 116, 169, 206],
[ 13, 75, 127, 179, 213],
[ 29, 64, 136, 151, 213],
[ 1, 81, 140, 197, 200],
[ 17, 77, 112, 171, 215],
[ 18, 75, 103, 180, 209],
[ 47, 57, 132, 194, 234]]])
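The same broadcasting works with the newer Generator API (np.random.default_rng, available in NumPy >= 1.17); a minimal sketch:
import numpy as np

bounds = np.array([1, 50, 100, 150, 200, 250])
bounds_low, bounds_high = bounds[:-1], bounds[1:]
num_tasks, num_samples, num_intervals = 3, 7, len(bounds_low)

rng = np.random.default_rng(seed=0)
# integers() broadcasts array-valued low/high bounds just like randint
arr = rng.integers(bounds_low, bounds_high, size=(num_tasks, num_samples, num_intervals))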
I have a data frame column with numeric values:
df['percentage'].head()
46.5
44.2
100.0
42.12
I want to see the column as bin counts:
bins = [0, 1, 5, 10, 25, 50, 100]
How can I get the result as bins with their value counts?
[0, 1] bin amount
[1, 5] etc
[5, 10] etc
...
You can use pandas.cut:
bins = [0, 1, 5, 10, 25, 50, 100]
df['binned'] = pd.cut(df['percentage'], bins)
print (df)
percentage binned
0 46.50 (25, 50]
1 44.20 (25, 50]
2 100.00 (50, 100]
3 42.12 (25, 50]
bins = [0, 1, 5, 10, 25, 50, 100]
labels = [1,2,3,4,5,6]
df['binned'] = pd.cut(df['percentage'], bins=bins, labels=labels)
print (df)
percentage binned
0 46.50 5
1 44.20 5
2 100.00 6
3 42.12 5
Or numpy.searchsorted:
bins = [0, 1, 5, 10, 25, 50, 100]
df['binned'] = np.searchsorted(bins, df['percentage'].values)
print (df)
percentage binned
0 46.50 5
1 44.20 5
2 100.00 6
3 42.12 5
...and then use value_counts, or groupby and aggregate size:
s = pd.cut(df['percentage'], bins=bins).value_counts()
print (s)
(25, 50] 3
(50, 100] 1
(10, 25] 0
(5, 10] 0
(1, 5] 0
(0, 1] 0
Name: percentage, dtype: int64
s = df.groupby(pd.cut(df['percentage'], bins=bins)).size()
print (s)
percentage
(0, 1] 0
(1, 5] 0
(5, 10] 0
(10, 25] 0
(25, 50] 3
(50, 100] 1
dtype: int64
By default, cut returns a Categorical.
Series methods like Series.value_counts() will use all categories, even if some categories are not present in the data; see operations on categorical data.
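If you prefer the counts listed in bin order rather than sorted by frequency, a small optional tweak is to sort on the (categorical) index, reusing df and bins from above:
s = pd.cut(df['percentage'], bins=bins).value_counts().sort_index()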
Using the Numba module for a speed-up.
On big datasets (more than 500k rows), pd.cut can be quite slow for binning data.
I wrote my own function in Numba with just-in-time compilation, which is roughly six times faster:
from numba import njit
@njit
def cut(arr):
bins = np.empty(arr.shape[0])
for idx, x in enumerate(arr):
if (x >= 0) & (x < 1):
bins[idx] = 1
elif (x >= 1) & (x < 5):
bins[idx] = 2
elif (x >= 5) & (x < 10):
bins[idx] = 3
elif (x >= 10) & (x < 25):
bins[idx] = 4
elif (x >= 25) & (x < 50):
bins[idx] = 5
elif (x >= 50) & (x < 100):
bins[idx] = 6
else:
bins[idx] = 7
return bins
cut(df['percentage'].to_numpy())
# array([5., 5., 7., 5.])
Optional: you can also map it to bins as strings:
a = cut(df['percentage'].to_numpy())
conversion_dict = {1: 'bin1',
2: 'bin2',
3: 'bin3',
4: 'bin4',
5: 'bin5',
6: 'bin6',
7: 'bin7'}
bins = list(map(conversion_dict.get, a))
# ['bin5', 'bin5', 'bin7', 'bin5']
Speed comparison:
# Create a dataframe of 8 million rows for testing
dfbig = pd.concat([df]*2000000, ignore_index=True)
dfbig.shape
# (8000000, 1)
%%timeit
cut(dfbig['percentage'].to_numpy())
# 38 ms ± 616 µs per loop (mean ± standard deviation of 7 runs, 10 loops each)
%%timeit
bins = [0, 1, 5, 10, 25, 50, 100]
labels = [1,2,3,4,5,6]
pd.cut(dfbig['percentage'], bins=bins, labels=labels)
# 215 ms ± 9.76 ms per loop (mean ± standard deviation of 7 runs, 10 loops each)
We could also use np.select:
bins = [0, 1, 5, 10, 25, 50, 100]
df['groups'] = (np.select([df['percentage'].between(i, j, inclusive='right')
for i,j in zip(bins, bins[1:])],
[1, 2, 3, 4, 5, 6]))
Output:
percentage groups
0 46.50 5
1 44.20 5
2 100.00 6
3 42.12 5
Convenient and fast version using Numpy
np.digitize is a convenient and fast option:
import pandas as pd
import numpy as np
df = pd.DataFrame({'x': [1,2,3,4,5]})
df['y'] = np.digitize(df['x'], bins=[3,5])
print(df)
returns
x y
0 1 0
1 2 0
2 3 1
3 4 1
4 5 2
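With the bin edges from the question, passing right=True makes np.digitize use intervals closed on the right, which reproduces the searchsorted result shown earlier (a sketch reusing the df with the percentage column):
import numpy as np

bins = [0, 1, 5, 10, 25, 50, 100]
# right=True means bins[i-1] < x <= bins[i], like pandas' default intervals
df['binned'] = np.digitize(df['percentage'], bins, right=True)
# percentages 46.5, 44.2, 100.0, 42.12 -> bins 5, 5, 6, 5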
I would like to sort a dataframe by certain priority rules.
I've achieved this in the code below but I think this is a very hacky solution.
Is there a more proper Pandas way of doing this?
import pandas as pd
import numpy as np
df=pd.DataFrame({"Primary Metric":[80,100,90,100,80,100,80,90,90,100,90,90,80,90,90,80,80,80,90,90,100,80,80,100,80],
"Secondary Metric Flag":[0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0,1,0],
"Secondary Value":[15, 59, 70, 56, 73, 88, 83, 64, 12, 90, 64, 18, 100, 79, 7, 71, 83, 3, 26, 73, 44, 46, 99,24, 20],
"Final Metric":[222, 883, 830, 907, 589, 93, 479, 498, 636, 761, 851, 349, 25, 405, 132, 491, 253, 318, 183, 635, 419, 885, 305, 258, 924]})
Primary_List=list(np.unique(df['Primary Metric']))
Primary_List.sort(reverse=True)
df_sorted=pd.DataFrame()
for p in Primary_List:
lol=df[df["Primary Metric"]==p]
lol.sort_values(["Secondary Metric Flag"],ascending = False)
pt1=lol[lol["Secondary Metric Flag"]==1].sort_values(by=['Secondary Value', 'Final Metric'], ascending=[False, False])
pt0=lol[lol["Secondary Metric Flag"]==0].sort_values(["Final Metric"],ascending = False)
df_sorted=df_sorted.append(pt1)
df_sorted=df_sorted.append(pt0)
df_sorted
The priority rules are:
First sort by the 'Primary Metric', then by the 'Secondary Metric Flag'.
If the 'Secondary Metric Flag' == 1, sort by 'Secondary Value', then by the 'Final Metric'.
If it is == 0, go straight to the 'Final Metric'.
Appreciate any feedback.
You do not need a for loop or groupby here; just split the frame and use sort_values:
df1=df.loc[df['Secondary Metric Flag']==1].sort_values(by=['Primary Metric','Secondary Value', 'Final Metric'], ascending=[True,False, False])
df0=df.loc[df['Secondary Metric Flag']==0].sort_values(["Primary Metric","Final Metric"],ascending = [True,False])
df=pd.concat([df1,df0]).sort_values('Primary Metric')
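One possible refinement: pandas' default sort is not guaranteed to be stable, so to preserve the within-group ordering coming from df1/df0 (and to keep the descending Primary Metric order used in the question), you could make the final sort stable and descending:
df = pd.concat([df1, df0]).sort_values('Primary Metric', ascending=False, kind='mergesort')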
sorted with loc
def k(t):
p, s, v, f = df.loc[t]
return (-p, -s, -s * v, -f)
df.loc[sorted(df.index, key=k)]
Primary Metric Secondary Metric Flag Secondary Value Final Metric
9 100 1 90 761
5 100 1 88 93
1 100 1 59 883
3 100 1 56 907
23 100 1 24 258
20 100 0 44 419
13 90 1 79 405
19 90 1 73 635
7 90 1 64 498
11 90 1 18 349
10 90 0 64 851
2 90 0 70 830
8 90 0 12 636
18 90 0 26 183
14 90 0 7 132
15 80 1 71 491
21 80 1 46 885
17 80 1 3 318
24 80 0 20 924
4 80 0 73 589
6 80 0 83 479
22 80 0 99 305
16 80 0 83 253
0 80 0 15 222
12 80 0 100 25
sorted with itertuples
def k(t):
_, p, s, v, f = t
return (-p, -s, -s * v, -f)
idx, *tups = zip(*sorted(df.itertuples(), key=k))
pd.DataFrame(dict(zip(df, tups)), idx)
lexsort
p = df['Primary Metric']
s = df['Secondary Metric Flag']
v = df['Secondary Value']
f = df['Final Metric']
a = np.lexsort([
    -p, -s, -s * v, -f
][::-1])  # lexsort treats the last key as the primary key, hence the reversal
df.iloc[a]
Construct New DataFrame
df.mul([-1, -1, 1, -1]).assign(
**{'Secondary Value': lambda d: d['Secondary Metric Flag'] * d['Secondary Value']}
).pipe(
lambda d: df.loc[d.sort_values([*d]).index]
)
I am trying to do a conditional assignment to the rows of a specific column: target. I have done some research, and it seemed that the answer was given here: "How to do row processing and item assignment in dask".
I will reproduce my use case. Mock data set:
x = [3, 0, 3, 4, 0, 0, 0, 2, 0, 0, 0, 6, 9]
y = [200, 300, 400, 215, 219, 360, 280, 396, 145, 276, 190, 554, 355]
mock = pd.DataFrame(dict(target = x, speed = y))
The look of mock is:
In [4]: mock.head(7)
Out [4]:
speed target
0 200 3
1 300 0
2 400 3
3 215 4
4 219 0
5 360 0
6 280 0
Having this Pandas DataFrame, I convert it into a Dask DataFrame:
mock_dask = dd.from_pandas(mock, npartitions = 2)
I apply my conditional rule: all values in target above 0 must become 1, all others 0 (binarize target). Following the thread mentioned above, it should be:
result = mock_dask.target.where(mock_dask.target > 0, 1)
I have a look at the result dataset and it is not working as expected:
In [7]: result.head(7)
Out [7]:
0 3
1 1
2 3
3 4
4 1
5 1
6 1
Name: target, dtype: object
As we can see, the target column in result is not what I expected. It seems that my code converts all the original 0 values to 1, instead of converting the values greater than 0 to 1 (the conditional rule).
Dask newbie here, Thanks in advance for your help.
OK, the documentation in the Dask DataFrame API is pretty clear. Thanks to @MRocklin's feedback I have realized my mistake. In the documentation, the where function (the last one in the list) is used with the following syntax:
DataFrame.where(cond[, other]) Return an object of same shape as self and whose corresponding entries are from self where cond is True and otherwise are from other.
Thus, the correct code line would be:
result = mock_dask.target.where(mock_dask.target <= 0, 1)
This will output:
In [7]: result.head(7)
Out [7]:
0 1
1 0
2 1
3 1
4 0
5 0
6 0
Name: target, dtype: int64
Which is the expected output.
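An arguably more direct way to binarize, which avoids having to invert the condition, is to cast the boolean mask itself (a sketch that works the same way in pandas and Dask):
result = (mock_dask.target > 0).astype(int)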
They seem to be the same to me
In [1]: import pandas as pd
In [2]: x = [1, 0, 1, 1, 0, 0, 0, 2, 0, 0, 0, 6, 9]
...: y = [200, 300, 400, 215, 219, 360, 280, 396, 145, 276, 190, 554, 355]
...: mock = pd.DataFrame(dict(target = x, speed = y))
...:
In [3]: import dask.dataframe as dd
In [4]: mock_dask = dd.from_pandas(mock, npartitions = 2)
In [5]: mock.target.where(mock.target > 0, 1).head(5)
Out[5]:
0 1
1 1
2 1
3 1
4 1
Name: target, dtype: int64
In [6]: mock_dask.target.where(mock_dask.target > 0, 1).head(5)
Out[6]:
0 1
1 1
2 1
3 1
4 1
Name: target, dtype: int64
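The same holds with the original target values from the question: where keeps the values where the condition is True and only replaces the rest, which is why the 3s and 4s survive. Continuing the session above (reusing y and the imports):
x = [3, 0, 3, 4, 0, 0, 0, 2, 0, 0, 0, 6, 9]
mock2 = pd.DataFrame(dict(target=x, speed=y))
mock2.target.where(mock2.target > 0, 1).head(5)
# 0    3
# 1    1
# 2    3
# 3    4
# 4    1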