GEKKO dynamic optimization negative degrees of freedom - optimization

I'm trying to use GEKKO to minimize the combined power load from charging vehicle batteries in discrete time.
Each vehicle has an energy demand ('dem' in the vehicles_info dict) which should be met within its available time frame (from 'start' to 'end' in the vehicles_info dict).
There is also a constraint on the maximum power supply (Crate) to each battery based on its SoC level in each time step, so SoC and Crate are continuously calculated as intermediates for every vehicle battery at every time step.
A solution is found with the vehicles in the vehicles_info dict below, but the reported degrees of freedom are -1255. I suspect this could become an issue for convergence in bigger systems (more vehicles and longer time periods), and I can't really tell how to fix it.
Full code:
from gekko import GEKKO
import numpy as np
#################
# Vehicles info #
#################
# start = starting timestep for charging of vehicle
# end = ending timestep for charging of vehicle
# batt = vehicle battery size
# dem = vehicle energy demand
# start_soc = vehicle battery starting state-of-charge
vehicles_info = {1: {'start': 5,  'end': 50,  'batt': 700.0, 'dem': 290.0, 'start_soc': 0.2},
                 2: {'start': 20, 'end': 80,  'batt': 650.0, 'dem': 255.0, 'start_soc': 0.2},
                 3: {'start': 40, 'end': 90,  'batt': 600.0, 'dem': 278.0, 'start_soc': 0.27},
                 4: {'start': 50, 'end': 350, 'batt': 600.0, 'dem': 450.0, 'start_soc': 0.15},
                 5: {'start': 90, 'end': 390, 'batt': 600.0, 'dem': 450.0, 'start_soc': 0.15}}
##############################
# Charging curve (max Crate) #
##############################
## Charging curve parameters
C_high=2.0
C_med=1.0
C_low=0.5
SoC_med=0.5
SoC_high=0.8
n1 = 100 # slope exponential functions
# Exponential function: Crate = C_high - C_med/(1 + m.exp(-n1*(SoC-SoC_med))) - C_low/(1 + m.exp(-n1*(SoC-SoC_high)))
###################
# Time parameters #
###################
time_stepsize_min = 1 # minute
time_stepsize_h = time_stepsize_min/60 # hour
start_timestep = 0
end_timestep = 400
m = GEKKO()
# overall time frame
m.time = np.linspace(start_timestep,end_timestep,end_timestep+1)
# variables for optimization (charging power)
P = m.Array(m.Var,len(vehicles_info))
# add initial guess and lower bound for the variables
for i in range(len(P)):
    P[i].value = 0
    P[i].lower = 0
# "block" time intervals outside each vehicle's time frame
for i in range(len(P)):
    for j1 in range(1,vehicles_info[i+1]['start']):
        m.fix(P[i],val=0,pos=j1)
    for j2 in range(vehicles_info[i+1]['end'],end_timestep+1):
        m.fix(P[i],val=0,pos=j2)
# Intermediates
SoC = [m.Intermediate(m.integral(P[i]*time_stepsize_h)/vehicles_info[i+1]['batt']+vehicles_info[i+1]['start_soc']) for i in range(len(P))]
Crate = [m.Intermediate(C_high - C_med/(1 + m.exp(-n1*(SoC[i]-SoC_med))) - C_low/(1 + m.exp(-n1*(SoC[i]-SoC_high)))) for i in range(len(P))]
# fix energy demand at ending time for each vehicle
E_fin = [m.integral(P[i]*time_stepsize_h) for i in range(len(P))]
for i in range(len(P)):
    m.fix(E_fin[i],vehicles_info[i+1]['dem'],pos=vehicles_info[i+1]['end'])
## Equations
m.Equations(P[i]<=Crate[i]*vehicles_info[i+1]['batt'] for i in range(len(P)))
m.Minimize(np.sum(P,axis=0)**2)
m.options.IMODE = 6
m.solve(disp=True)
And some result plots:
from matplotlib import pyplot as plt
fig, ax = plt.subplots(3,1,figsize=(10,15))
# plot power, soc and crate curves
for i in range(len(P)):
    ax[0].plot(m.time,P[i])
    ax[1].plot(m.time,SoC[i])
    ax[2].plot(m.time,Crate[i])
ax[0].set_title('Power curves')
ax[1].set_title('SoC curves')
ax[2].set_title('Crate curve')

The degrees of freedom are calculated at the beginning of the problem, before the solver computes a solution and knows which constraints are active. As long as this constraint:
m.Equations(P[i]<=Crate[i]*vehicles_info[i+1]['batt'] for i in range(len(P)))
is not at the boundary (P[i] == Crate[i]*vehicles_info[i+1]['batt']) for every i and every time point, the actual degrees of freedom may be positive. If the problem becomes infeasible because there are too few degrees of freedom, an alternative formulation is to add a slack variable and minimize the infeasibility.
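A minimal sketch of that soft-constraint idea (a toy single-battery model, not the original problem; the slack variable s, the demand value, and the penalty weight are illustrative assumptions):
from gekko import GEKKO
import numpy as np
m = GEKKO()
m.time = np.linspace(0, 10, 11)
P = m.Var(value=0, lb=0)   # charging power
E = m.integral(P)          # delivered energy
s = m.Var(value=0, lb=0)   # slack on the unmet demand
demand = 5.0
# indicator that is 1 only at the final time point
final = np.zeros(len(m.time))
final[-1] = 1
f = m.Param(value=final)
# soft end-point constraint: at the final time, E + s must cover the demand
m.Equation(f*(E + s - demand) >= 0)
# penalize the slack so the demand is met whenever it is feasible
m.Minimize(P**2 + 1000*f*s)
m.options.IMODE = 6
m.solve(disp=False)
With this form, if the demand cannot be met within the available time, the solver returns the least-infeasible charging profile instead of failing.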

Related

Is nx.eigenvector_centrality_numpy() using the Arnoldi iteration instead of the basic power method?

Since nx.eigenvector_centrality_numpy() uses ARPACK, does that mean it uses the Arnoldi iteration instead of the basic power method?
I ask because when I try to compute the centrality manually using the basic power method, my result differs from the result of nx.eigenvector_centrality_numpy(). Can someone explain this to me?
To make it clearer, here is my code, the result I get from the function, and the result I get when computing manually.
import networkx as nx
G = nx.DiGraph()
G.add_edge('a', 'b', weight=4)
G.add_edge('b', 'a', weight=2)
G.add_edge('b', 'c', weight=2)
G.add_edge('b','d', weight=2)
G.add_edge('c','b', weight=2)
G.add_edge('d','b', weight=2)
centrality = nx.eigenvector_centrality_numpy(G, weight='weight')
centrality
The result:
{'a': 0.37796447300922725,
 'b': 0.7559289460184545,
 'c': 0.3779644730092272,
 'd': 0.3779644730092272}
Below is code from Power Method Python Program and I did a little bit of modification:
# Power Method to Find Largest Eigen Value and Eigen Vector
# Importing NumPy Library
import numpy as np
import sys
# Reading order of matrix
n = int(input('Enter order of matrix: '))
# Making numpy array of n x n size and initializing
# to zero for storing matrix
a = np.zeros((n,n))
# Reading matrix
print('Enter Matrix Coefficients:')
for i in range(n):
    for j in range(n):
        a[i][j] = float(input( 'a['+str(i)+']['+ str(j)+']='))
# Making numpy array n x 1 size and initializing to zero
# for storing initial guess vector
x = np.zeros((n))
# Reading initial guess vector
print('Enter initial guess vector: ')
for i in range(n):
    x[i] = float(input( 'x['+str(i)+']='))
# Reading tolerable error
tolerable_error = float(input('Enter tolerable error: '))
# Reading maximum number of steps
max_iteration = int(input('Enter maximum number of steps: '))
# Power Method Implementation
lambda_old = 1.0
condition = True
step = 1
while condition:
    # Multiplying a and x
    ax = np.matmul(a,x)
    # Finding new Eigen value and Eigen vector
    x = ax/np.linalg.norm(ax)
    lambda_new = np.vdot(ax,x)
    # Displaying Eigen value and Eigen Vector
    print('\nSTEP %d' %(step))
    print('----------')
    print('Eigen Value = %0.5f' %(lambda_new))
    print('Eigen Vector: ')
    for i in range(n):
        print('%0.5f\t' % (x[i]))
    # Checking maximum iteration
    step = step + 1
    if step > max_iteration:
        print('Not convergent in given maximum iteration!')
        break
    # Calculating error
    error = abs(lambda_new - lambda_old)
    print('errror='+ str(error))
    lambda_old = lambda_new
    condition = error > tolerable_error
I used the same matrix and the result:
STEP 99
----------
Eigen Value = 3.70328
Eigen Vector:
0.51640
0.77460
0.25820
0.25820
errror=0.6172133998483682
STEP 100
----------
Eigen Value = 4.32049
Eigen Vector:
0.71714
0.47809
0.35857
0.35857
Not convergent in given maximum iteration!
I've tried to compute it with my calculator too, and I know it doesn't converge because |lambda1| = |lambda2| = 4. I need to understand the theory behind nx.eigenvector_centrality_numpy() properly so I can write it up correctly in my thesis. Help me, please.
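As a quick side check (not part of the original post), inspecting the full spectrum of the weighted adjacency matrix shows why the basic power method struggles on this graph:
import networkx as nx
import numpy as np
G = nx.DiGraph()
G.add_edge('a', 'b', weight=4)
G.add_edge('b', 'a', weight=2)
G.add_edge('b', 'c', weight=2)
G.add_edge('b', 'd', weight=2)
G.add_edge('c', 'b', weight=2)
G.add_edge('d', 'b', weight=2)
A = nx.adjacency_matrix(G, weight='weight').toarray()
print(np.round(np.linalg.eigvals(A), 6))
# two eigenvalues tie for the largest magnitude (+4 and -4), so there is no
# single dominant eigenvalue for the basic power method to converge to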

Knn give more weight to specific feature in distance

I'm using the Kobe Bryant Dataset.
I wish to predict the shot_made_flag with KnnRegressor.
I've used game_date to extract year and month features:
import re
# convert season to years
kobe_data_encoded['season'] = kobe_data_encoded['season'].apply(lambda x: int(re.compile('(\d+)-').findall(x)[0]))
# add year and month using game_date
kobe_data_encoded['year'] = kobe_data_encoded['game_date'].apply(lambda x: int(re.compile('(\d{4})').findall(x)[0]))
kobe_data_encoded['month'] = kobe_data_encoded['game_date'].apply(lambda x: int(re.compile('-(\d+)-').findall(x)[0]))
kobe_data_encoded = kobe_data_encoded.drop(columns=['game_date'])
I wish to use the season, year, and month features to give dates more weight in the distance function, so that events closer in date to the query event become closer neighbors, while still keeping reasonable distances to other potential data points. For example, I don't want an event within the same day to become the closest neighbor purely because of the date features; the other features, such as shot_range, should still be taken into account.
To give the date features more weight I've tried to use the metric argument with a custom distance function, but the arguments of that function are plain numpy arrays without the pandas column information, so I'm not sure how to implement what I'm trying to do.
EDIT:
Using larger weights for the date features while searching for the optimal k with 10-fold CV over k in range(1, 100):
from IPython.display import display
from sklearn import preprocessing
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
import numpy as np
import matplotlib.pyplot as plt
# scaling
min_max_scaler = preprocessing.MinMaxScaler()
scaled_features_df = kobe_data_encoded.copy()
column_names = ['loc_x', 'loc_y', 'minutes_remaining', 'period',
'seconds_remaining', 'shot_distance', 'shot_type', 'shot_zone_range']
scaled_features = min_max_scaler.fit_transform(scaled_features_df[column_names])
scaled_features_df[column_names] = scaled_features
not_classified_df = scaled_features_df[scaled_features_df['shot_made_flag'].isnull()]
classified_df = scaled_features_df[scaled_features_df['shot_made_flag'].notnull()]
X = classified_df.drop(columns=['shot_made_flag'])
y = classified_df['shot_made_flag']
cv = StratifiedKFold(n_splits=10, shuffle=True)
neighbors = [x for x in range(1, 100)]
cv_scores = []
weight = np.ones((X.shape[1],))
weight[[X.columns.get_loc("season"),
        X.columns.get_loc("year"),
        X.columns.get_loc("month")
        ]] = 5
weight = weight/weight.sum() #Normalize weights
def my_distance(x, y):
    dist = ((x-y)**2)
    return np.dot(dist, weight)
for k in neighbors:
    print('k: ', k)
    knn = KNeighborsClassifier(n_neighbors=k, metric=my_distance)
    cv_scores.append(np.mean(cross_val_score(knn, X, y, cv=cv, scoring='roc_auc')))
#optimal K (ROC AUC: higher is better, so take the max score)
optimal_k_index = cv_scores.index(max(cv_scores))
optimal_k = neighbors[optimal_k_index]
print('best k: ', optimal_k)
plt.plot(neighbors, cv_scores)
plt.xlabel('Number of Neighbors K')
plt.ylabel('ROC AUC')
plt.show()
This runs really slowly; any idea how to make it faster?
The idea of the weighted features is to find neighbors closer to the data point's date (to avoid data leakage), with CV used to find the optimal k.
First, you have to prepare a numpy 1D weight array, specifying weight for each feature. You could do something like:
weight = np.ones((M,)) # M is no of features
weight[[1,7,10]] = 2 # Increase weight of 1st,7th and 10th features
weight = weight/weight.sum() #Normalize weights
You can use kobe_data_encoded.columns to find indexes of season, year, month features in your dataframe to replace 2nd line above.
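For example (a small sketch; features here stands for whatever feature dataframe you feed to KNN):
date_cols = ['season', 'year', 'month']
idx = [features.columns.get_loc(c) for c in date_cols]  # positions of the date features
weight = np.ones((features.shape[1],))
weight[idx] = 2                 # heavier weight on the date features
weight = weight/weight.sum()    # normalize weights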
Now define a distance function, which per the guidelines has to take two 1D numpy arrays.
def my_dist(x, y):
    global weight          # 1D array, same shape as x or y
    dist = ((x-y)**2)      # 1D array, same shape as x or y
    return np.dot(dist, weight)  # a scalar float
And initialize KNeighborsRegressor as:
knn = KNeighborsRegressor(metric=my_dist)
EDIT:
To make things efficient, you can precompute distance matrix, and reuse it in KNN. This should bring in significant speedup by reducing calls to my_dist, since this non-vectorized custom python distance function is quite slow. So now -
dist = np.zeros((len(X), len(X)))  # Computing the NxN distance matrix
for i in range(len(X)):  # You can halve this by using the fact that dist[i,j] = dist[j,i]
    for j in range(len(X)):
        dist[i, j] = my_dist(X[i], X[j])
for k in neighbors:
    print('k: ', k)
    knn = KNeighborsClassifier(n_neighbors=k, metric='precomputed')  # Note: metric='precomputed'
    cv_scores.append(np.mean(cross_val_score(knn, dist, y, cv=cv, scoring='roc_auc')))  # Note: passing dist instead of X
I couldn't test it, so let me know if something isn't alright.
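As a rough vectorized alternative to the double loop above (a sketch, assuming X is a pandas DataFrame and weight is the normalized 1D weight array): scaling each column by the square root of its weight turns the weighted squared distance into a plain squared Euclidean distance, which can then be computed with matrix algebra instead of Python loops.
Xw = X.to_numpy() * np.sqrt(weight)              # scale columns by sqrt of their weights
sq = (Xw**2).sum(axis=1)                         # squared norm of each row
dist = sq[:, None] + sq[None, :] - 2*Xw @ Xw.T   # pairwise weighted squared distances
np.maximum(dist, 0, out=dist)                    # clip tiny negatives from round-off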
Just to add to Shihab's answer regarding the distance computation: you can use scipy's pdist, as suggested in this post, which is faster and more efficient.
from scipy.spatial.distance import pdist, minkowski, squareform
# create the custom weight array
weight = ...
# calculate pairwise distances, using Minkowski norm with custom weights
distances = pdist(X, minkowski, 2, weight)
# reformat the result as a square matrix
distances_as_2d_matrix = squareform(distances)
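The resulting square matrix can then be plugged into the precomputed-metric setup shown earlier (reusing y and cv from the question; n_neighbors=10 is just an example value):
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
knn = KNeighborsClassifier(n_neighbors=10, metric='precomputed')
scores = cross_val_score(knn, distances_as_2d_matrix, y, cv=cv, scoring='roc_auc')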

fft find low frequencies in short time history

I have 1 time unit of signal history. My dominant frequency is 1/100 per time unit (a period of 100 time units). When I use numpy's fft function, the frequency resolution is limited by the extent of the signal history. How can I increase the resolution of my frequency comb without corrupting my signal?
import numpy as np
import matplotlib.pyplot as plt
'''
I need to capture a low-frequency oscillation with only 1 time unit of data.
So far, I have not been able to find a way to make the fft resolution < 1.
'''
timeResolution = 10000
mytimes = np.linspace(0, 1, timeResolution)
mypressures = np.sin(2 * np.pi * mytimes / 100)
fft = np.fft.fft(mypressures[:])
T = mytimes[1] - mytimes[0]
N = mypressures.size
# fft of original signal is limited by the maximum time
f = np.linspace(0, 1 / T, N)
filteredidx = f > 0.001
freq = f[filteredidx][np.argmax(np.abs(fft[filteredidx][:N//2]))]
print('freq bin is ', f[1] - f[0]) # 1.0
print('frequency is ', freq) # 1.0
print('(real frequency is 0.01)')
I thought that I could artificially increase the time history length (and thus decrease the width of the frequency comb) by pasting the signal end-to-end and doing the fft. That didn't work for me for some reason I don't understand:
import numpy as np
import matplotlib.pyplot as plt
timeResolution = 10000
mytimes = np.linspace(0, 1, timeResolution)
mypressures = np.sin(2 * np.pi * mytimes / 100)
# glue data to itself to make the signal artificially longer
timesby = 1000
newtimes = np.concatenate([mytimes + ii for ii in range(timesby)])  # offset each copy so the time axis keeps increasing
newpressures = np.concatenate([mypressures] * timesby)
fft = np.fft.fft(newpressures[:])
T = newtimes[1] - newtimes[0]
N = newpressures.size
# fft of original signal is limited by the maximum time
f = np.linspace(0, 1 / T, N)
filteredidx = f > 0.001
freq = f[filteredidx][np.argmax(np.abs(fft[filteredidx][:N//2]))]
print('freq bin is ', f[1] - f[0]) # 0.001
print('frequency is ', freq) # 1.0
print('(real frequency is 0.01)')
Your goal of recovering spectral information from a window that is "too short", i.e. much shorter than sample_rate / frequency_of_interest, seems ambitious.
Even in the simplest case (a clean sine wave, your example) the data look pretty much like a straight line (left panel below). Only after detrending can we see a tiny bit of curvature (right panel below, note the very small y-values), and that is all any hypothetical algorithm has to go by. In particular, the FT, as far as I can see, will not work.
If we are very lucky there is one way out: comparing derivatives.
If you have a sinusoidal signal with an offset, like f = c + sin(om * t), then the 1st and 3rd derivatives will be om * cos(om * t) and -om^3 * cos(om * t), so om can be recovered as sqrt(-f'''/f').
If the signal is simple and clean enough, this together with robust numerical differentiation can be used to recover the frequency omega.
In the demo code below I use a SavGol filter to obtain the derivatives while getting rid of some high frequency noise (blue curve below) that had been added to the signal (orange curve). Other (better) methods of numerical differentiation may exist.
Sample run:
Estimated freq clean signal: 0.009998
Estimated freq noisy signal: 0.009871
We can see that in this very simple case the frequency is recovered ok.
It may be possible to recover multiple frequencies using more derivatives and some linear decomposition voodoo, but I'm not going to explore this here.
Code:
import numpy as np
import matplotlib.pyplot as plt
'''
I need to capture a low-frequency oscillation with only 1 time unit of data.
So far, I have not been able to find a way to make the fft resolution < 1.
'''
timeResolution = 10000
mytimes = np.linspace(0, 1, timeResolution)
mypressures = np.sin(2 * np.pi * mytimes / 100)
fft = np.fft.fft(mypressures[:])
T = mytimes[1] - mytimes[0]
N = mypressures.size
# fft of original signal is limited by the maximum time
f = np.linspace(0, 1 / T, N)
filteredidx = f > 0.001
freq = f[filteredidx][np.argmax(np.abs(fft[filteredidx][:N//2]))]
print('freq bin is ', f[1] - f[0]) # 1.0
print('frequency is ', freq) # 1.0
print('(real frequency is 0.01)')
import scipy.signal as ss
plt.figure(1)
plt.subplot(121)
plt.plot(mytimes, mypressures)
plt.subplot(122)
plt.plot(mytimes, ss.detrend(mypressures))
plt.figure(2)
mycorrupted = mypressures + 0.00001 * np.random.normal(size=mypressures.shape)
plt.plot(mytimes, ss.detrend(mycorrupted))
plt.plot(mytimes, ss.detrend(mypressures))
width, order = 8999, 3
hw = (width+3) // 2
dsdt = ss.savgol_filter(mypressures, width, order, 1, 1/timeResolution)[hw:-hw]
d3sdt3 = ss.savgol_filter(mypressures, width, order, 3, 1/timeResolution)[hw:-hw]
est_freq_clean = np.nanmean(np.sqrt(-d3sdt3/dsdt) / (2 * np.pi))
dsdt = ss.savgol_filter(mycorrupted, width, order, 1, 1/timeResolution)[hw:-hw]
d3sdt3 = ss.savgol_filter(mycorrupted, width, order, 3, 1/timeResolution)[hw:-hw]
est_freq_noisy = np.nanmean(np.sqrt(-d3sdt3/dsdt) / (2 * np.pi))
print(f"Estimated freq clean signal: {est_freq_clean:10.6f}")
print(f"Estimated freq noisy signal: {est_freq_noisy:10.6f}")

getting counterintuitive results with numpy FFT when calculating mean frequency and ESD Progression

I have some timecourse data that visually appears to have differing levels of high frequency fluctuation. I have plotted the timecourse Data A and B below.
I have used numpy FFT to perform Fourier Transformation as follows:
import numpy as np
import matplotlib.pyplot as plt
Fs = 1.0      # sampling rate
Ts = 864000   # total length of the timecourse in samples
t = np.arange(0, Ts, Fs)   # time vector
n = 864000    # number of datapoints of timecourse data
k = np.arange(n)
T = n/Fs
frq = k/T               # two-sided frequency range
frq = frq[range(n//2)]  # one-sided frequency range (integer division for Python 3)
Y = np.fft.fft(A1)/n    # fft computation and normalization of timecourse data (A1)
Y = Y[range(n//2)]
Z = abs(Y)
##### Calculate Mean Frequency ########
Mean_Frequency = sum((frq*Z))/(sum(Z))
# make sure the first value doesn't create an issue
Freq = frq[1:]
Z = Z[1:]
max_y = max(Z) # Find the maximum y value
mode_Frequency = Freq[Z.argmax()] # Find the x value corresponding to the maximum y value
###################### Plot figures
fig, ax = plt.subplots(2, 1)
fig.suptitle(str(D[k]), fontsize=14, fontweight='bold')
ax[0].plot(t,A1, 'black')
ax[0].set_xlabel('Time')
ax[0].set_ylabel('Amplitude')
ax[1].plot(frq,abs(Y),'r') # plotting the spectrum
ax[1].set_xscale('log')
ax[1].set_xlabel('Freq (Hz)')
ax[1].set_ylabel('|Y(freq)|')
plt.text(2*mode_Frequency, 0.95*max_y, "Mode Frequency (Hz): " + str(mode_Frequency))
plt.text(2*mode_Frequency, 0.85*max_y, "Mean Frequency (Hz): " + str(Mean_Frequency))
plt.savefig("/home/phoenix/Desktop/Figures/Figure3/FourierGraphs/"+str(D[k])+".png")
plt.close()
The results look like this:
The timecourse data (black) for A on the left appears to me to have much more high-frequency noise than the data on the right (B), yet the mean frequency is higher for the data on the left.
Is this because I have performed the FFT incorrectly, because I have calculated the mean frequency incorrectly, or because I need to use a different method to capture the really low-frequency noise in timecourse B?
thanks for your time.

Financial time series: python Matplotlib "specgram" y-axis displaying Period instead of Frequency

Python Matplotlib's "specgram" display of a heatmap showing frequency (y-axis) vs. time (x-axis) is useful for time series analysis, but I would like the y-axis displayed in terms of period (= 1/frequency) rather than frequency. I am still asking if anyone has a complete working solution to achieve this.
The python code immediately below generates the author's original plot using "specgram" and (currently commented out) a comparison with the suggested solution that uses "mlab.specgram". The suggested solution succeeds with the easy conversion from frequency to period = 1/frequency, but does not generate a viable plot for the author's example.
from __future__ import division
from datetime import datetime
import numpy as np
from pandas import DataFrame, Series
import pandas.io.data as web
import pandas as pd
from pylab import plot,show,subplot,specgram
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
################################################
# obtain data:
ticker = "SPY"
source = "google"
start_date = datetime(1999,1,1)
end_date = datetime(2012,1,1)
qt = web.DataReader(ticker, source, start_date, end_date)
qtC = qt.Close
################################################
data = qtC
fs = 1 # 1 sample / day
nfft = 128
# display the time-series data
fig = plt.figure()
ax1 = fig.add_subplot(311)
ax1.plot(range(len(data)),data)
#----------------
# Original version
##################
# specgram (NOT mlab.specgram) --> gives direct plot, but in Frequency space (want plot in Period, not freq).
ax2 = fig.add_subplot(212)
spec, freq, t = specgram(data, NFFT=nfft, Fs=fs, noverlap=0)
#----------------
"""
# StackOverflow version (with minor changes to axis titles)
########################
# calculate the spectrogram
spec, freq, t = mlab.specgram(data, NFFT=nfft, Fs=fs, noverlap=0)
# calculate the bin limits in time (x dir)
# note that there are n+1 fence posts
dt = t[1] - t[0]
t_edge = np.empty(len(t) + 1)
t_edge[:-1] = t - dt / 2.
# however, due to the way the spectrogram is calculated, the first and last bins are
# a bit different:
t_edge[0] = 0
t_edge[-1] = t_edge[0] + len(data) / fs
# calculate the frequency bin limits:
df = freq[1] - freq[0]
freq_edge = np.empty(len(freq) + 1)
freq_edge[:-1] = freq - df / 2.
freq_edge[-1] = freq_edge[-2] + df
# calculate the period bin limits, omit the zero frequency bin
p_edge = 1. / freq_edge[1:]
# we'll plot both
ax2 = fig.add_subplot(312)
ax2.pcolormesh(t_edge, freq_edge, spec)
ax2.set_ylim(0, fs/2)
ax2.set_ylabel('freq.[day^-1]')
ax3 = fig.add_subplot(313)
# note that the period has to be inverted both in the vector and the spectrum,
# as pcolormesh wants to have a positive difference between samples
ax3.pcolormesh(t_edge, p_edge[::-1], spec[:0:-1])
#ax3.set_ylim(0, 100/fs)
ax3.set_ylim(0, nfft)
ax3.set_xlabel('t [days]')
ax3.set_ylabel('period [days]')
"""
If you are only asking how to display the spectrogram differently, then it is actually rather straightforward.
One thing to note is that there are two functions called specgram: matplotlib.pyplot.specgram and matplotlib.mlab.specgram. The difference between these two is that the former draws a spectrogram whereas the latter only calculates one (and that's what we want).
The only slightly tricky thing is to calculate the colour mesh rectangle edge positions. We get the following from the specgram:
t: centerpoints in time
freq: frequency centers of the bins
For the time dimension it is easy to calculate the bin limits by the centers:
t_edge[n] = t[0] + (n - .5) * dt, where dt is the time difference of two consecutive bins
It would be similarly simple for frequencies:
f_edge[n] = freq[0] + (n - .5) * df
but we want to use the period instead of frequency. This makes the first bin unusable, and we'll have to toss the DC component away.
A bit of code:
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
import numpy as np
# create some data: (fs = sampling frequency)
fs = 2000.
ts = np.arange(10000) / fs
sig = np.sin(500 * np.pi * ts)
sig[5000:8000] += np.sin(200 * np.pi * (ts[5000:8000] + 0.0005 * np.random.random(3000)))
# calculate the spectrogram
spec, freq, t = mlab.specgram(sig, Fs=fs)
# calculate the bin limits in time (x dir)
# note that there are n+1 fence posts
dt = t[1] - t[0]
t_edge = np.empty(len(t) + 1)
t_edge[:-1] = t - dt / 2.
# however, due to the way the spectrogram is calculated, the first and last bins are
# a bit different:
t_edge[0] = 0
t_edge[-1] = t_edge[0] + len(sig) / fs
# calculate the frequency bin limits:
df = freq[1] - freq[0]
freq_edge = np.empty(len(freq) + 1)
freq_edge[:-1] = freq - df / 2.
freq_edge[-1] = freq_edge[-2] + df
# calculate the period bin limits, omit the zero frequency bin
p_edge = 1. / freq_edge[1:]
# we'll plot both
fig = plt.figure()
ax1 = fig.add_subplot(211)
ax1.pcolormesh(t_edge, freq_edge, spec)
ax1.set_ylim(0, fs/2)
ax1.set_ylabel('frequency [Hz]')
ax2 = fig.add_subplot(212)
# note that the period has to be inverted both in the vector and the spectrum,
# as pcolormesh wants to have a positive difference between samples
ax2.pcolormesh(t_edge, p_edge[::-1], spec[:0:-1])
ax2.set_ylim(0, 100/fs)
ax2.set_xlabel('t [s]')
ax2.set_ylabel('period [s]')
This gives the following figure (frequency-axis spectrogram on top, period-axis version below):