I have the following pandas DataFrame:
account_num = [
1726905620833, 1727875510892, 1727925550921, 1727925575731, 1727345507414,
1713565531401, 1725735509119, 1727925546516, 1727925523656, 1727875509665,
1727875504742, 1727345504314, 1725475539855, 1791725523833, 1727925583805,
1727925544791, 1727925518810, 1727925606986, 1727925618602, 1727605517337,
1727605517354, 1727925583101, 1727925583201, 1727925583335, 1727025517810,
1727935718602]
total_due = [
1662.87, 3233.73, 3992.05, 10469.28, 799.01, 2292.98, 297.07, 5699.06, 1309.82,
1109.67, 4830.57, 3170.12, 45329.73, 46.71, 11981.58, 3246.31, 3214.25, 2056.82,
1611.73, 5386.16, 2622.02, 5011.02, 6222.10, 16340.90, 1239.23, 1198.98]
net_returned = [
0.0, 0.0, 0.0, 2762.64, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 12008.27,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2762.69, 0.0, 0.0, 0.0, 9254.66, 0.0, 0.0]
total_fees = [
0.0, 0.0, 0.0, 607.78, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2161.49, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 536.51, 0.0, 0.0, 0.0, 1712.11, 0.0, 0.0]
year = [2021, 2022, 2022, 2021, 2021, 2020, 2020, 2022, 2019, 2019, 2020, 2022, 2019,
2018, 2018, 2022, 2021, 2022, 2022, 2020, 2019, 2019, 2022, 2019, 2021, 2022]
flipped = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0]
proba = [
0.960085, 0.022535, 0.013746, 0.025833, 0.076159, 0.788912, 0.052489, 0.035279,
0.019701, 0.552127, 0.063949, 0.061279, 0.024398, 0.902681, 0.009441, 0.015342,
0.006832, 0.032988, 0.031879, 0.026412, 0.025159, 0.023195, 0.022104, 0.021285,
0.026480, 0.025837]
d = {
"account_num" : account_num,
"total_due" : total_due,
"net_returned" : net_returned,
"total_fees" : total_fees,
"year" : year,
"flipped" : flipped,
"proba" : proba
}
df = pd.DataFrame(data=d)
I want to sample the DataFrame by the "year" column according to a specific ratio for each year, which I have successfully done with the following code:
df_fractions = pd.DataFrame({"2018": [0.5], "2019": [0.5], "2020": [1.0], "2021": [0.8],
"2022": [0.7]})
df.year = df.year.astype(str)
grouped = df.groupby("year")
df_training = grouped.apply(lambda x: x.sample(frac=df_fractions[x.name].iloc[0]))
df_training = df_training.reset_index(drop=True)
However, when I invoke sample(), I also want to ensure the samples from each year are stratified according to the number of flipped accounts in that year. In other words, I want to stratify each per-year sample on the flipped column. With this small, toy DataFrame, the per-year ratios of flipped accounts after sampling stay fairly close to the original proportions, but that does not hold for a really large DataFrame with close to 300K accounts.
So, that's really my question to all you Python experts: is there a better way to solve this problem than the solution I came up with?
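One way to get the stratification without pulling in scikit-learn is to group on both year and flipped, so every (year, flipped) cell is sampled at its year's rate, and the flipped proportion within each year is preserved in expectation. A minimal sketch against the df and df_fractions defined above (the random_state and the .iloc[0] scalar lookup are my additions):

import pandas as pd

# Sample each (year, flipped) cell at that year's fraction; g.name is the
# (year, flipped) tuple, so g.name[0] looks up the year's ratio.
df.year = df.year.astype(str)
df_training = (
    df.groupby(["year", "flipped"], group_keys=False)
      .apply(lambda g: g.sample(frac=df_fractions[g.name[0]].iloc[0],
                                random_state=0))
      .reset_index(drop=True)
)

With ~300K rows every stratum is large enough that the realized per-year flipped ratios should land very close to the originals; on the toy frame above, rounding within the tiny strata still causes some drift.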
I want to implement a Bézier-curve animation using the easing functions React Native provides, but the docs are not very clear about how to implement one. I'd appreciate your suggestions.
This repository shows some examples of how react-native-easing is used:
react-native-easing
Here's the relevant file from the repository:
import { Easing } from 'react-native';
export default {
step0: Easing.step0,
step1: Easing.step1,
linear: Easing.linear,
ease: Easing.ease,
quad: Easing.quad,
cubic: Easing.cubic,
poly: Easing.poly,
sin: Easing.sin,
circle: Easing.circle,
exp: Easing.exp,
elastic: Easing.elastic,
back: Easing.back,
bounce: Easing.bounce,
bezier: Easing.bezier,
in: Easing.in,
out: Easing.out,
inOut: Easing.inOut,
easeIn: Easing.bezier(0.42, 0, 1, 1),
easeOut: Easing.bezier(0, 0, 0.58, 1),
easeInOut: Easing.bezier(0.42, 0, 0.58, 1),
easeInCubic: Easing.bezier(0.55, 0.055, 0.675, 0.19),
easeOutCubic: Easing.bezier(0.215, 0.61, 0.355, 1.0),
easeInOutCubic: Easing.bezier(0.645, 0.045, 0.355, 1.0),
easeInCirc: Easing.bezier(0.6, 0.04, 0.98, 0.335),
easeOutCirc: Easing.bezier(0.075, 0.82, 0.165, 1.0),
easeInOutCirc: Easing.bezier(0.785, 0.135, 0.15, 0.86),
easeInExpo: Easing.bezier(0.95, 0.05, 0.795, 0.035),
easeOutExpo: Easing.bezier(0.19, 1.0, 0.22, 1.0),
easeInOutExpo: Easing.bezier(1.0, 0.0, 0.0, 1.0),
easeInQuad: Easing.bezier(0.55, 0.085, 0.68, 0.53),
easeOutQuad: Easing.bezier(0.25, 0.46, 0.45, 0.94),
easeInOutQuad: Easing.bezier(0.455, 0.03, 0.515, 0.955),
easeInQuart: Easing.bezier(0.895, 0.03, 0.685, 0.22),
easeOutQuart: Easing.bezier(0.165, 0.84, 0.44, 1.0),
easeInOutQuart: Easing.bezier(0.77, 0.0, 0.175, 1.0),
easeInQuint: Easing.bezier(0.755, 0.05, 0.855, 0.06),
easeOutQuint: Easing.bezier(0.23, 1.0, 0.32, 1.0),
easeInOutQuint: Easing.bezier(0.86, 0.0, 0.07, 1.0),
easeInSine: Easing.bezier(0.47, 0.0, 0.745, 0.715),
easeOutSine: Easing.bezier(0.39, 0.575, 0.565, 1.0),
easeInOutSine: Easing.bezier(0.445, 0.05, 0.55, 0.95),
easeInBack: Easing.bezier(0.6, -0.28, 0.735, 0.045),
easeOutBack: Easing.bezier(0.175, 0.885, 0.32, 1.275),
easeInOutBack: Easing.bezier(0.68, -0.55, 0.265, 1.55),
easeInElastic: Easing.out(Easing.elastic(2)),
easeInElasticCustom: (bounciness = 2) => Easing.out(Easing.elastic(bounciness)),
easeOutElastic: Easing.in(Easing.elastic(2)),
easeOutElasticCustom: (bounciness = 2) => Easing.in(Easing.elastic(bounciness)),
easeInOutElastic: Easing.inOut(Easing.out(Easing.elastic(2))),
easeInOutElasticCustom: (bounciness = 2) => Easing.inOut(Easing.out(Easing.elastic(bounciness))),
easeInBounce: Easing.out(Easing.bounce),
easeOutBounce: Easing.in(Easing.bounce),
easeInOutBounce: Easing.inOut(Easing.out(Easing.bounce)),
};
(Graphs of what each easing function generates omitted.)
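To actually run an animation along one of these curves, pass the easing function to Animated.timing. A minimal sketch (the component, the 300 ms duration, and the fade-in effect are my own illustration, not from the repo):

import React, { useRef, useEffect } from 'react';
import { Animated, Easing } from 'react-native';

export default function FadeInBox() {
  // Animated value that will drive the opacity from 0 to 1.
  const opacity = useRef(new Animated.Value(0)).current;

  useEffect(() => {
    Animated.timing(opacity, {
      toValue: 1,
      duration: 300,
      // Easing.bezier takes the two control points of a CSS-style
      // cubic Bezier; these are the classic ease-in-out values.
      easing: Easing.bezier(0.42, 0, 0.58, 1),
      useNativeDriver: true,
    }).start();
  }, [opacity]);

  return (
    <Animated.View
      style={{ opacity, width: 100, height: 100, backgroundColor: 'tomato' }}
    />
  );
}

Any of the presets in the file above (easeInCubic, easeOutExpo, ...) can be dropped into the easing field the same way.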
I'm doing homework on WebGL2 and am provided with a projection matrix and a view matrix that I have to use to form a camera. The assignment says "the matrices have to be sent to the shaders and the shaders have to be extended by new uniforms".
It's part two of a multipart assignment where part one was to send the vertices of a cube to the vertex shader.
I got as far as the part where it shows a rectangle, since all other faces of the cube are behind that one.
I looked at some examples on webgl2fundamentals but wasn't able to adapt that code to the code we were provided with. I've tried several placements, especially looking up the uniforms during init() and then setting them either in createGeometry() or in render(), where all the questionable lines currently sit for a better overview.
I think at least the lookup shouldn't happen at render time.
vertex shader:
#version 300 es
precision mediump float;
layout(location = 0) in vec3 aPos;
layout(location = 1) in vec3 aColor;
uniform mat4 u_pmatrix;
uniform mat4 u_vmatrix;
out vec3 color;
void main() {
color = aColor;
gl_Position = u_pmatrix * u_vmatrix * vec4(aPos, 1.0);
}
"use strict"
var gl;
var viewMatrix;
var projectionMatrix;
var program;
var vao;
function render(timestamp, previousTimestamp)
{
var light = getLightPosition(); // vec3
var rotation = getRotation(); // vec3
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.useProgram(program);
gl.bindVertexArray(vao);
var pMatLocation = gl.getUniformLocation(program, "u_pmatrix");
var vMatLocation = gl.getUniformLocation(program, "u_vmatrix");
gl.uniformMatrix4fv(pMatLocation, false, projectionMatrix);
gl.uniformMatrix4fv(vMatLocation, false, viewMatrix);
gl.drawArrays(gl.TRIANGLES, 0, 6 * 6);
window.requestAnimFrame(function (time) {
render(time, timestamp);
});
}
function createGeometry()
{
var positions = [];
positions.push(vec3(-0.5, -0.5, -0.5));
positions.push(vec3(-0.5, 0.5, -0.5));
positions.push(vec3(0.5, -0.5, -0.5));
positions.push(vec3(-0.5, 0.5, -0.5));
positions.push(vec3(0.5, 0.5, -0.5));
positions.push(vec3(0.5, -0.5, -0.5));
positions.push(vec3(-0.5, -0.5, 0.5));
positions.push(vec3(0.5, -0.5, 0.5));
positions.push(vec3(-0.5, 0.5, 0.5));
positions.push(vec3(-0.5, 0.5, 0.5));
positions.push(vec3(0.5, -0.5, 0.5));
positions.push(vec3(0.5, 0.5, 0.5));
positions.push(vec3(-0.5, 0.5, -0.5));
positions.push(vec3(-0.5, 0.5, 0.5));
positions.push(vec3(0.5, 0.5, -0.5));
positions.push(vec3(-0.5, 0.5, 0.5));
positions.push(vec3(0.5, 0.5, 0.5));
positions.push(vec3(0.5, 0.5, -0.5));
positions.push(vec3(-0.5, -0.5, -0.5));
positions.push(vec3(0.5, -0.5, -0.5));
positions.push(vec3(-0.5, -0.5, 0.5));
positions.push(vec3(-0.5, -0.5, 0.5));
positions.push(vec3(0.5, -0.5, -0.5));
positions.push(vec3(0.5, -0.5, 0.5));
positions.push(vec3(-0.5, -0.5, -0.5));
positions.push(vec3(-0.5, -0.5, 0.5));
positions.push(vec3(-0.5, 0.5, -0.5));
positions.push(vec3(-0.5, 0.5, -0.5));
positions.push(vec3(-0.5, 0.5, 0.5));
positions.push(vec3(-0.5, 0.5, -0.5));
positions.push(vec3(0.5, -0.5, -0.5));
positions.push(vec3(0.5, 0.5, -0.5));
positions.push(vec3(0.5, -0.5, 0.5));
positions.push(vec3(0.5, -0.5, 0.5));
positions.push(vec3(0.5, 0.5, -0.5));
positions.push(vec3(0.5, 0.5, 0.5));
vao = gl.createVertexArray();
gl.bindVertexArray(vao);
var vertexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER, flatten(positions), gl.STATIC_DRAW);
gl.vertexAttribPointer(0, 3, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(0);
var colors = [];
colors.push(vec3(0.0, 1.0, 0.0));
colors.push(vec3(0.0, 1.0, 0.0));
colors.push(vec3(0.0, 1.0, 0.0));
colors.push(vec3(0.0, 1.0, 0.0));
colors.push(vec3(0.0, 1.0, 0.0));
colors.push(vec3(0.0, 1.0, 0.0));
colors.push(vec3(1.0, 0.0, 0.0));
colors.push(vec3(1.0, 0.0, 0.0));
colors.push(vec3(1.0, 0.0, 0.0));
colors.push(vec3(1.0, 0.0, 0.0));
colors.push(vec3(1.0, 0.0, 0.0));
colors.push(vec3(1.0, 0.0, 0.0));
colors.push(vec3(0.0, 0.0, 1.0));
colors.push(vec3(0.0, 0.0, 1.0));
colors.push(vec3(0.0, 0.0, 1.0));
colors.push(vec3(0.0, 0.0, 1.0));
colors.push(vec3(0.0, 0.0, 1.0));
colors.push(vec3(0.0, 0.0, 1.0));
colors.push(vec3(1.0, 1.0, 0.0));
colors.push(vec3(1.0, 1.0, 0.0));
colors.push(vec3(1.0, 1.0, 0.0));
colors.push(vec3(1.0, 1.0, 0.0));
colors.push(vec3(1.0, 1.0, 0.0));
colors.push(vec3(1.0, 1.0, 0.0));
colors.push(vec3(1.0, 1.0, 0.5));
colors.push(vec3(1.0, 1.0, 0.5));
colors.push(vec3(1.0, 1.0, 0.5));
colors.push(vec3(1.0, 1.0, 0.5));
colors.push(vec3(1.0, 1.0, 0.5));
colors.push(vec3(1.0, 1.0, 0.5));
colors.push(vec3(1.0, 0.0, 1.0));
colors.push(vec3(1.0, 0.0, 1.0));
colors.push(vec3(1.0, 0.0, 1.0));
colors.push(vec3(1.0, 0.0, 1.0));
colors.push(vec3(1.0, 0.0, 1.0));
colors.push(vec3(1.0, 0.0, 1.0));
var vboColor = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vboColor);
gl.bufferData(gl.ARRAY_BUFFER, flatten(colors), gl.STATIC_DRAW);
gl.vertexAttribPointer(1, 3, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(1);
}
function loadModel()
{
var meshData = loadMeshData();
var positions = meshData.positions;
var normals = meshData.normals;
var colors = meshData.colors;
var vertexCount = meshData.vertexCount;
}
window.onload = function init() {
var canvas = document.getElementById('rendering-surface');
gl = WebGLUtils.setupWebGL( canvas );
gl.viewport(0, 0, canvas.width, canvas.height);
gl.enable(gl.DEPTH_TEST);
gl.clearColor(0.0, 0.0, 0.0, 0.0);
program = initShaders(gl, "vertex-shader","fragment-shader");
gl.useProgram(program);
createGeometry();
loadModel();
var projectionMatrix = mat4(1.0);
projectionMatrix = perspective(90, canvas.width / canvas.height, 0.1, 100);
var eyePos = vec3(0, 1.0, 2.0);
var lookAtPos = vec3(0.0, 0.0, 0.0);
var upVector = vec3(0.0, 1.0, 0.0);
viewMatrix = lookAt(eyePos, lookAtPos, upVector);
render(0,0);
}
There should be a cube, but all that can be seen is blank space. Either the positioning or the transformation is wrong, or the program is crashing.
In your init function you're shadowing your global projectionMatrix, so the projection matrix used in render() always remains undefined.
var projectionMatrix = mat4(1.0); // << shadowing your global with the same name
projectionMatrix = perspective(90, canvas.width / canvas.height, 0.1, 100);
You might want to take a look at this article on how to use developer tools for debugging.
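To make the fix concrete, a minimal sketch (the cached location variables are my naming; flatten() is only needed if your matrix library returns nested arrays, as MV.js-style helpers do):

// in init(): assign to the existing global instead of declaring a new local
projectionMatrix = perspective(90, canvas.width / canvas.height, 0.1, 100);

// still in init(), after initShaders(): look the uniform locations up once
pMatLocation = gl.getUniformLocation(program, "u_pmatrix");
vMatLocation = gl.getUniformLocation(program, "u_vmatrix");

// in render(): just upload the matrices each frame
gl.uniformMatrix4fv(pMatLocation, false, flatten(projectionMatrix));
gl.uniformMatrix4fv(vMatLocation, false, flatten(viewMatrix));

This also addresses your hunch that the lookup shouldn't happen at render time: getUniformLocation is relatively expensive, so caching the locations once at init is the usual pattern.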
I am trying to do sentiment analysis on tweets using SentimentIntensityAnalyzer() from nltk.sentiment.vader:
from nltk.sentiment.vader import SentimentIntensityAnalyzer

sid = SentimentIntensityAnalyzer()
listy = []
for index, row in data.iterrows():
    ss = sid.polarity_scores(row["Tweets"])
    listy.append(ss)
se = pd.Series(listy)
data['polarity'] = se.values
display(data.head(100))
This is the resulting DataFrame:
Tweets polarity
0 RT #spectatorindex: Facebook controls:\n\n- Wh... {'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound...
1 RT #YAATeamWest: Today we're at #BradfordUniSU... {'neg': 0.0, 'neu': 0.902, 'pos': 0.098, 'comp...
2 #SachinTendulkar launches India’s first Multip... {'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound...
3 How To Create a 360 Render (And How to Improv... {'neg': 0.0, 'neu': 0.722, 'pos': 0.278, 'comp...
4 The Most Disturbing Virtual Reality You Will E... {'neg': 0.174, 'neu': 0.826, 'pos': 0.0, 'comp...
5 VR Training for Troops 🎮\n\n... {'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound...
6 RT #DefenceHQ: The #BritishArmy has awarded a ... {'neg': 0.0, 'neu': 0.847, 'pos': 0.153, 'comp...
7 RT #UofGHumanities: #UofGCSPE Humanities Lectu... {'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound...
8 RT #OyezServices: Ever wanted a tour of Machu ... {'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound...
9 RT #ProjectDastaan: We are an Oxford Universit... {'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound...
10 RT #Paula_Piccard: Virtual reality will change... {'neg': 0.0, 'neu': 0.878, 'pos': 0.122, 'comp...
In order to do statistical analysis on the 'neg', 'pos', 'neu', and 'compound' entries in the polarity column, I wanted to split the data into four different columns. To achieve this I used:
list_pos = []
list_neg = []
list_comp = []
list_neu = []
for index, row in data.iterrows():
    list_pos.append(row['polarity']['pos'])
    list_neg.append(row['polarity']['neg'])
    list_comp.append(row['polarity']['compound'])
    list_neu.append(row['polarity']['neu'])
se_pos = pd.Series(list_pos)
se_neg = pd.Series(list_neg)
se_comp = pd.Series(list_comp)
se_neu = pd.Series(list_neu)
data['positive'] = se_pos.values
data['negative'] = se_neg.values
data['compound'] = se_comp.values
data['neutral'] = se_neu.values
The resulting DataFrame:
Tweets polarity positive negative compound neutral
0 RT #spectatorindex: Facebook controls:\n\n- Wh... {'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound... 0.000 0.000 0.0000 1.000
1 RT #YAATeamWest: Today we're at #BradfordUniSU... {'neg': 0.0, 'neu': 0.902, 'pos': 0.098, 'comp... 0.098 0.000 0.3612 0.902
2 #SachinTendulkar launches India’s first Multip... {'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound... 0.000 0.000 0.0000 1.000
Is there a more concise way of achieving a similar DataFrame? Using a lambda function perhaps? Thanks for the help!
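A more concise route is to score the tweets with apply() and then expand the result dicts straight into columns. A minimal sketch, assuming data has a default integer index as above (the column renames just match your names):

import pandas as pd

# Score every tweet in one pass; no iterrows() needed.
data['polarity'] = data['Tweets'].apply(sid.polarity_scores)

# Expand the dicts into one column per key and attach them.
scores = pd.DataFrame(data['polarity'].tolist()).rename(
    columns={'pos': 'positive', 'neg': 'negative', 'neu': 'neutral'})
data = data.join(scores)

data['polarity'].apply(pd.Series) produces the same expansion, but the tolist() form is typically much faster. Either way you avoid iterrows(), which is usually the slowest way to touch every row of a DataFrame.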
I have a numpy array with shape [t, z, x, y] representing an hourly time series of 3-D data. The axes of the array are time, vertical coordinate, horizontal coordinate 1, and horizontal coordinate 2. There is also a t-element list of hourly datetime.datetime timestamps.
I want to calculate the daily mid-day means for each day. This will be an [nday, Z, X, Y] array.
I'm trying to find a pythonic way to do this. I've written something with a bunch of for loops that works but seems slow, inflexible, and verbose.
It appears to me that Pandas is not a solution for me because my time series data are three-dimensional. I'd be happy to be proven wrong.
I've come up with the following, using itertools, to find the mid-day timestamps and group them by date, but now I'm coming up short trying to apply map() to find the means.
import numpy as np
import pandas as pd
import itertools
from datetime import datetime

# create 72 hours of pseudo-data with 3 vertical levels and a 4 by 4
# horizontal grid.
data = np.zeros((72, 3, 4, 4))
t = pd.date_range(datetime(2008, 7, 1), freq='1H', periods=72)
for i in range(data.shape[0]):
    data[i, ...] = i

# find the timestamps that are "midday" in North America. We'll
# define midday as between 15:00 and 23:00 UTC, which is 10:00 EST to
# 15:00 PST.
def is_midday(this_t):
    return (this_t.hour >= 15) and (this_t.hour <= 23)

# group the midday timestamps by date
for dt, grp in itertools.groupby(filter(is_midday, t), key=lambda x: x.date()):
    print('date ' + str(dt))
    for g in grp:
        print(g)

# find means of mid-day data by date
data_list = np.split(data, data.shape[0])
grps = itertools.groupby(filter(is_midday, t), key=lambda x: x.date())
# how to apply map (or something else) to data_list and
# grps? Or somehow split data along axis 0 according to grps?
You can shove pretty much any object into a pandas structure. It's normally not recommended, but in this case it might work for you.
Create a Series indexed by time, with each element a 3-d numpy array
In [117]: s = Series([data[i] for i in range(data.shape[0])], index=t)
In [118]: s
Out[118]:
2008-07-01 00:00:00 [[[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], ...
2008-07-01 01:00:00 [[[1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0], ...
2008-07-01 02:00:00 [[[2.0, 2.0, 2.0, 2.0], [2.0, 2.0, 2.0, 2.0], ...
2008-07-01 03:00:00 [[[3.0, 3.0, 3.0, 3.0], [3.0, 3.0, 3.0, 3.0], ...
2008-07-01 04:00:00 [[[4.0, 4.0, 4.0, 4.0], [4.0, 4.0, 4.0, 4.0], ...
2008-07-01 05:00:00 [[[5.0, 5.0, 5.0, 5.0], [5.0, 5.0, 5.0, 5.0], ...
2008-07-01 06:00:00 [[[6.0, 6.0, 6.0, 6.0], [6.0, 6.0, 6.0, 6.0], ...
2008-07-01 07:00:00 [[[7.0, 7.0, 7.0, 7.0], [7.0, 7.0, 7.0, 7.0], ...
2008-07-01 08:00:00 [[[8.0, 8.0, 8.0, 8.0], [8.0, 8.0, 8.0, 8.0], ...
2008-07-01 09:00:00 [[[9.0, 9.0, 9.0, 9.0], [9.0, 9.0, 9.0, 9.0], ...
2008-07-01 10:00:00 [[[10.0, 10.0, 10.0, 10.0], [10.0, 10.0, 10.0,...
2008-07-01 11:00:00 [[[11.0, 11.0, 11.0, 11.0], [11.0, 11.0, 11.0,...
2008-07-01 12:00:00 [[[12.0, 12.0, 12.0, 12.0], [12.0, 12.0, 12.0,...
2008-07-01 13:00:00 [[[13.0, 13.0, 13.0, 13.0], [13.0, 13.0, 13.0,...
2008-07-01 14:00:00 [[[14.0, 14.0, 14.0, 14.0], [14.0, 14.0, 14.0,...
...
2008-07-03 09:00:00 [[[57.0, 57.0, 57.0, 57.0], [57.0, 57.0, 57.0,...
2008-07-03 10:00:00 [[[58.0, 58.0, 58.0, 58.0], [58.0, 58.0, 58.0,...
2008-07-03 11:00:00 [[[59.0, 59.0, 59.0, 59.0], [59.0, 59.0, 59.0,...
2008-07-03 12:00:00 [[[60.0, 60.0, 60.0, 60.0], [60.0, 60.0, 60.0,...
2008-07-03 13:00:00 [[[61.0, 61.0, 61.0, 61.0], [61.0, 61.0, 61.0,...
2008-07-03 14:00:00 [[[62.0, 62.0, 62.0, 62.0], [62.0, 62.0, 62.0,...
2008-07-03 15:00:00 [[[63.0, 63.0, 63.0, 63.0], [63.0, 63.0, 63.0,...
2008-07-03 16:00:00 [[[64.0, 64.0, 64.0, 64.0], [64.0, 64.0, 64.0,...
2008-07-03 17:00:00 [[[65.0, 65.0, 65.0, 65.0], [65.0, 65.0, 65.0,...
2008-07-03 18:00:00 [[[66.0, 66.0, 66.0, 66.0], [66.0, 66.0, 66.0,...
2008-07-03 19:00:00 [[[67.0, 67.0, 67.0, 67.0], [67.0, 67.0, 67.0,...
2008-07-03 20:00:00 [[[68.0, 68.0, 68.0, 68.0], [68.0, 68.0, 68.0,...
2008-07-03 21:00:00 [[[69.0, 69.0, 69.0, 69.0], [69.0, 69.0, 69.0,...
2008-07-03 22:00:00 [[[70.0, 70.0, 70.0, 70.0], [70.0, 70.0, 70.0,...
2008-07-03 23:00:00 [[[71.0, 71.0, 71.0, 71.0], [71.0, 71.0, 71.0,...
Freq: H, Length: 72
Define your aggregating function. You need to access .values, which returns the inner objects; concatenating coerces them back to an actual numpy array, which you then aggregate (mean in this case):
In [119]: def f(g, grp):
   .....:     return np.concatenate(grp.values).mean()
   .....:
Since I'm not sure what your end output should look like, I just create a time-based grouper manually (this is essentially a resample) and don't do anything with the final results (it's just a list of the aggregated values):
In [121]: [ f(g,grp) for g, grp in s.groupby(pd.Grouper(freq='D')) ]
Out[121]: [11.5, 35.5, 59.5]
You can get reasonably fancy here and, say, return a pandas object from the aggregator (and potentially concat the results).
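For completeness, a sketch of that fancier version, folding in the mid-day filter from the question so the output is the requested [nday, Z, X, Y] array (the names here are mine; it assumes the s and is_midday defined above):

# keep only the mid-day hours
midday = s[[is_midday(ts) for ts in s.index]]

# one (z, x, y) mean per calendar day, stacked into [nday, z, x, y]
days = sorted(set(midday.index.date))
result = np.stack([
    np.stack(list(midday[midday.index.date == d].values)).mean(axis=0)
    for d in days
])
print(result.shape)   # (3, 3, 4, 4) for the 72-hour example

Each inner np.stack rebuilds one day's worth of 3-D arrays into a single [n_hours, z, x, y] block before averaging over the time axis, so the vertical and horizontal structure is preserved.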