Curve fitting a function of functions in matplotlib

I have three functions: E (energy), P (pressure) and H (enthalpy).
The data file data.dat contains the variables V (volume) and E (energy):
# Volume: V_C_I Energy: E_C_I
111.593876 -1.883070511360E+03
113.087568 -1.883074916825E+03
114.632273 -1.883078906679E+03
116.184030 -1.883082373429E+03
117.743646 -1.883085344696E+03
119.326853 -1.883087860954E+03
120.927806 -1.883089938181E+03
122.538335 -1.883091557526E+03
124.158641 -1.883092750745E+03
125.789192 -1.883093540824E+03
125.790261 -1.883093800747E+03
127.176327 -1.883094160364E+03
128.358654 -1.883094017730E+03
128.542807 -1.883094255789E+03
129.977279 -1.883094094751E+03
131.390610 -1.883093689121E+03
132.812287 -1.883093053342E+03
134.242765 -1.883092185844E+03
135.682211 -1.883091101112E+03
137.130792 -1.883089807766E+03
138.588565 -1.883088314435E+03
The following script:
1) performs a curve_fit of E versus V,
2) calculates P (pressure) using the P function,
3) calculates H (enthalpy) using the H function (H = E + PV),
4) produces 3 plots with the fitting curves: E versus V, P versus V and H versus P.
When plotting the fitting curve, I have used the following. For example, for the E versus V curve:
plt.plot(V_C_I_lin, E(V_C_I_lin, *popt_C_I))
where V_C_I_lin is a linspace of volumes between the data points.
This seems fine:
Similarly, for the P versus V curve, the analogous scheme:
plt.plot(V_C_I_lin, P(V_C_I_lin, *popt_C_I), color='grey', label='E fit Data' )
produces the desired result:
The pressure for each data point is saved in the file ./E_V_P_H__C_I.dat by the script.
However, for the H versus P curve, the analogous scheme:
plt.plot(xp_C_I, H(V_C_I_lin, *popt_C_I), color='grey', label='H fit Data' )
where xp_C_I is a linspace of pressures, does not produce the correct fit:
Why is this happening in the third case?
Code:
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
import sys
import os
# Initial candidates for fit
E0_init = -941.510817926696
V0_init = 63.54960592453
B0_init = 76.3746233515232
B0_prime_init = 4.05340727164527
def E(V, E0, V0, B0, B0_prime):
    # Birch-Murnaghan-type E(V) equation of state (with a unit-conversion prefactor)
    return E0 + (2.293710449E+17)*(1E-21)*( (9.0/16.0)*(V0*B0) * ( (((V0/V)**(2.0/3.0)-1.0)**3.0)*B0_prime + ((V0/V)**(2.0/3.0)-1)**2 * (6.0-4.0*(V0/V)**(2.0/3.0)) ))
def P(V, E0, V0, B0, B0_prime):
    # Birch-Murnaghan-type pressure as a function of volume
    f0 = (3.0/2.0)*B0
    f1 = ((V0/V)**(7.0/3.0)) - ((V0/V)**(5.0/3.0))
    f2 = ((V0/V)**(2.0/3.0)) - 1
    pressure = f0*f1*(1 + (3.0/4.0)*(B0_prime-4)*f2)
    return pressure
def H(V, E0, V0, B0, B0_prime):
    return E(V, E0, V0, B0, B0_prime) + P(V, E0, V0, B0, B0_prime) * V
# Data (Red triangles):
V_not_p_f_unit_C_I, E_not_p_f_unit_C_I = np.loadtxt('data.dat', skiprows = 1).T
# A minor conversion of the data:
nFU_C_I = 2.0
nFU_C_II = 4.0
E_C_I = E_not_p_f_unit_C_I/nFU_C_I
V_C_I = V_not_p_f_unit_C_I/nFU_C_I
######
# Fitting and obtaining the parameters:
init_vals = [E0_init, V0_init, B0_init, B0_prime_init]
popt_C_I, pcov_C_I = curve_fit(E, V_C_I, E_C_I, p0=init_vals)
# Calculation of P:
pressures_per_F_unit_C_I = P(V_C_I, *popt_C_I)
# Calculation of H: H = E + PV
H_C_I = E_C_I + pressures_per_F_unit_C_I * V_C_I
# We save E, P and H into a file:
output_array_3 = np.vstack((E_C_I, V_C_I, pressures_per_F_unit_C_I, H_C_I)).T
np.savetxt('E_V_P_H__C_I.dat', output_array_3, header="Energy / FU (a.u.) \t Volume / FU (A^3) \t Pressure / F.U. (GPa) \t Enthalpy (a.u.)", fmt="%0.13f")
EnergyCI, VolumeCI, PressureCI, EnthalpyCI = np.loadtxt('./E_V_P_H__C_I.dat', skiprows = 1).T
# Plotting E vs V:
fig = plt.figure()
# Linspace for plotting the fitting curve:
V_C_I_lin = np.linspace(VolumeCI[0], VolumeCI[-1], 100)
p2, = plt.plot(V_C_I_lin, E(V_C_I_lin, *popt_C_I), color='grey', label='E fit Data' )
# Plotting the scattered points:
p1 = plt.scatter(VolumeCI, EnergyCI, color='red', marker="^", label='Data', s=100)
fontP = FontProperties()
fontP.set_size('small')
plt.legend((p1, p2 ), ("Data", 'E fit Data' ), prop=fontP)
plt.xlabel('V / Formula unit (Angstrom$^{3}$)')
plt.ylabel('E / Formula unit (a.u.)')
plt.ticklabel_format(useOffset=False)
plt.savefig('E_vs_V.pdf', bbox_inches='tight')
# Plotting P vs V:
fig = plt.figure()
# Linspace for plotting the fitting curve:
xp_C_I = np.linspace(PressureCI[-1], PressureCI[0], 100)
# Plotting the fitting curves:
p2, = plt.plot(V_C_I_lin, P(V_C_I_lin, *popt_C_I), color='grey', label='E fit Data' )
# Plotting the scattered points:
p1 = plt.scatter(VolumeCI, PressureCI, color='red', marker="^", label='Data', s=100)
fontP = FontProperties()
fontP.set_size('small')
plt.legend((p1, p2), ("Data", "E fit Data"), prop=fontP)
plt.xlabel('V / Formula unit (Angstrom$^{3}$)')
plt.ylabel('P (GPa)')
plt.ticklabel_format(useOffset=False)
plt.savefig('P_vs_V.pdf', bbox_inches='tight')
# Plotting H vs P:
fig = plt.figure()
xp_C_I = np.linspace(PressureCI[0], PressureCI[-1], 100)
# Linspace for plotting the fitting curve:
V_C_I_lin = np.linspace(VolumeCI[0], VolumeCI[-1], 100)
p2, = plt.plot(xp_C_I, H(V_C_I_lin, *popt_C_I), color='grey', label='H fit Data' )
# Plotting the scattered points:
p1 = plt.scatter(pressures_per_F_unit_C_I, H_C_I, color='red', marker="^", label='Data', s=100)
fontP = FontProperties()
fontP.set_size('small')
plt.legend((p1, p2 ), ("Data", 'H fit Data' ), prop=fontP)
plt.xlabel('P / Formula unit (GPa)')
plt.ylabel('H / Formula unit (a.u.)')
plt.ticklabel_format(useOffset=False)
plt.savefig('H_vs_V.pdf', bbox_inches='tight')
plt.show()

You cannot just plot H(V) versus some completely uncorrelated pressures xp_C_I.
plt.plot(xp_C_I, H(V_C_I_lin, *popt_C_I), )
Instead you need to plot H(V) against P(V), such that exactly the same V values are used for the pressure and the enthalpy:
plt.plot(P(V_C_I_lin, *popt_C_I), H(V_C_I_lin, *popt_C_I), )
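Put differently, a minimal sketch of the corrected H-versus-P plotting block, reusing the names from the script above (the scatter, legend and label calls stay unchanged):
# H vs P: evaluate P and H on the same volume grid so each fitted point matches up
V_C_I_lin = np.linspace(VolumeCI[0], VolumeCI[-1], 100)
p2, = plt.plot(P(V_C_I_lin, *popt_C_I), H(V_C_I_lin, *popt_C_I),
               color='grey', label='H fit Data')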

Related

Tensorflow 2: Nested gradient tape produces wrong second pathwise derivative

There are a fair number of questions on gradients out there, but I haven't been able to fix my problem. In a nutshell: I am trying to run a Monte Carlo simulation and get pathwise differentials. There are a few tutorials out there, but they run into the same problem as my own code. I have boiled it down to the toy example below.
The second-order derivative, called gamma, is wrong however (I'll highlight it below). So something is wrong with my nested gradient tape. Having the tape watch the variable actually has no effect. There must be something I am not aware of here. Any hint is much appreciated.
Edit: I have cross-checked this with a deterministic function and the code works just fine. Second derivatives are calculated correctly using gradient tape. So I have no idea why it doesn't work on a Monte Carlo.
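For context, a minimal sketch of the nested-tape pattern on a deterministic function, where the second derivative does come out right (my reconstruction, not the original cross-check; x and y below are only illustrative):
import tensorflow as tf
x = tf.Variable(3.0)
with tf.GradientTape() as g2:
    with tf.GradientTape() as g1:
        y = x ** 3
    dy_dx = g1.gradient(y, x)    # 3*x**2 -> 27.0
d2y_dx2 = g2.gradient(dy_dx, x)  # 6*x    -> 18.0
print(dy_dx.numpy(), d2y_dx2.numpy())
The Monte Carlo toy example is: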
import numpy as np
import pandas as pd
import tensorflow as tf
from pprint import pprint
DTYPE = tf.float32
SEED = 3232
S0 = tf.Variable(100, dtype=DTYPE)
strike = tf.Variable(110, dtype=DTYPE)
time_to_expiry = tf.Variable(1, dtype=DTYPE)
implied_vol = tf.Variable(0.3, dtype=DTYPE)
v = dict(S0=S0,strike=strike,time_to_expiry=time_to_expiry,implied_vol=implied_vol)
#tf.function
def brownian(S0, dt, sigma, mu, dw):
    dt_sqrt = tf.math.sqrt(dt)
    shock = sigma * dt_sqrt * dw
    drift = (mu - (sigma ** 2) / 2)
    bm = tf.math.exp(drift * dt + shock)
    out = S0 * tf.math.cumprod(bm, axis=1)
    return out
#tf.function
def pricer_montecarlo(S0, strike, time_to_expiry, implied_vol, dw):
    sigma = implied_vol
    T = time_to_expiry
    r = tf.constant(0.0, dtype=DTYPE)
    K = strike
    dt = T / dw.shape[1]
    st = brownian(S0, dt, sigma, r, dw)
    payout = tf.math.maximum(st[:, -1] - K, 0)
    npv = tf.exp(-r * T) * tf.reduce_mean(payout)
    return npv
def calculate_montecarlo(greeks=True):
    nsims = 10**7
    nobs = 2
    dw = tf.random.normal((nsims, nobs), seed=SEED)
    out = dict()
    if greeks:
        with tf.GradientTape() as g2:
            g2.watch(v['S0'])
            with tf.GradientTape() as g1:
                g1.watch(v['S0'])
                npv = pricer_montecarlo(**v, dw=dw)
            dv = g1.gradient(npv, v)
            g2.watch(dv)
        d2v = g2.gradient(dv['S0'], v)
        out["dv"] = {k: v.numpy() for k, v in dv.items()}
        out["d2v"] = {k: v.numpy() for k, v in d2v.items()}
    else:
        npv = pricer_montecarlo(**v, dw=dw).numpy()
    out["npv"] = npv.numpy()
    return out
out = calculate_montecarlo()
pprint(out)
from py_vollib import black_scholes
from py_vollib.black_scholes.greeks import analytical
print('npv='+str(black_scholes.black_scholes('c', 100, 110, 1, 0, 0.3)))
print('dv S0='+str(analytical.delta('c', 100, 110, 1, 0, 0.3)))
print('d2v S0='+str(analytical.gamma('c', 100, 110, 1, 0, 0.3)))
print('dv implied_vol='+str(analytical.vega('c', 100, 110, 1, 0, 0.3)))
print('dv time_to_expiry='+str(analytical.theta('c', 100, 110, 1, 0, 0.3)))
Output: 'd2v': {'S0': 0.0, ...}, but it should be close to 0.013112390443974165 (plus some stochastic noise).
{'d2v': {'S0': 0.0,
'implied_vol': 0.3933603,
'strike': 0.0,
'time_to_expiry': 0.059004053},
'dv': {'S0': 0.43342653,
'implied_vol': 39.336025,
'strike': -0.32001525,
'time_to_expiry': 5.9004045},
'npv': 8.140971}
npv=8.141012048964207
dv S0=0.4334094123285094
d2v S0=0.013112390443974165
dv implied_vol=0.39337171331922494
dv time_to_expiry=-0.01616596082133801

Multiple grouped charts with altair

My data has 4 attributes: dataset (D1/D2), model (M1/M2), layer (L1/L2), scene (S1/S2). I can make a chart grouped by scenes and then merge plots horizontally and vertically (pic above).
However, I would like to have 'double grouping' by scene and dataset, i.e. merging the D1 and D2 plots by placing the blue/orange bars next to each other but with different opacity or pattern/hatch.
Basically something like this (pretend that the black traits are a hatch pattern).
Here is the code to reproduce the first plot
import numpy as np
import itertools
import argparse
import pandas as pd
import matplotlib.pyplot as plt
import os
import altair as alt
alt.renderers.enable('altair_viewer')
np.random.seed(0)
################################################################################
model_keys = ['M1', 'M2']
data_keys = ['D1', 'D2']
scene_keys = ['S1', 'S2']
layer_keys = ['L1', 'L2']
ys = []
models = []
dataset = []
layers = []
scenes = []
for sc in scene_keys:
    for m in model_keys:
        for d in data_keys:
            for l in layer_keys:
                for s in range(10):
                    data_y = list(np.random.rand(10) / 10)
                    ys += data_y
                    scenes += [sc] * len(data_y)
                    models += [m] * len(data_y)
                    dataset += [d] * len(data_y)
                    layers += [l] * len(data_y)
# ------------------------------------------------------------------------------
df = pd.DataFrame({'Y': ys,
                   'Model': models,
                   'Dataset': dataset,
                   'Layer': layers,
                   'Scenes': scenes})
bars = alt.Chart(df, width=100, height=90).mark_bar().encode(
    # field to group columns on
    x=alt.X('Scenes:N',
            title=None,
            axis=alt.Axis(
                grid=False,
                title=None,
                labels=False,
            ),
            ),
    # field to use as Y values and how to calculate
    y=alt.Y('Y:Q',
            aggregate='mean',
            axis=alt.Axis(
                grid=True,
                title='Y',
                titleFontWeight='normal',
            ),
            ),
    # field to use for sorting
    order=alt.Order('Scenes',
                    sort='ascending',
                    ),
    # field to use for color segmentation
    color=alt.Color('Scenes',
                    legend=alt.Legend(orient='bottom',
                                      padding=-10,
                                      ),
                    title=None,
                    ),
)
error_bars = alt.Chart(df).mark_errorbar(extent='ci').encode(
    x=alt.X('Scenes:N'),
    y=alt.Y('Y:Q'),
)
text = alt.Chart(df).mark_text(align='center',
                               baseline='line-bottom',
                               color='black',
                               dy=-5  # y-shift
                               ).encode(
    x=alt.X('Scenes:N'),
    y=alt.Y('mean(Y):Q'),
    text=alt.Text('mean(Y):Q', format='.1f'),
)
chart_base = bars + error_bars + text
chart_base = chart_base.facet(
    # field to use as the set of columns to be represented in each group
    column=alt.Column('Layer:N',
                      # header=alt.Header(
                      #     labelFontStyle='bold',
                      # ),
                      title=None,
                      sort=list(set(models)),  # get unique indices
                      ),
    spacing={"row": 0, "column": 15},
)
def unique(sequence):
    seen = set()
    return [x for x in sequence if not (x in seen or seen.add(x))]
for i, m in enumerate(unique(models)):
    chart_imnet = chart_base.transform_filter(
        alt.FieldEqualPredicate(field='Dataset', equal='D1'),
    ).transform_filter(
        alt.FieldEqualPredicate(field='Model', equal=m)
    )
    chart_places = chart_base.transform_filter(
        alt.FieldEqualPredicate(field='Dataset', equal='D2')
    ).transform_filter(
        alt.FieldEqualPredicate(field='Model', equal=m)
    )
    if i == 0:
        title_params = dict({'align': 'center', 'anchor': 'middle', 'dy': -10})
        chart_imnet = chart_imnet.properties(title=alt.TitleParams('D1', **title_params))
        chart_places = chart_places.properties(title=alt.TitleParams('D2', **title_params))
    chart_places = alt.concat(chart_places,
                              title=alt.TitleParams(
                                  m,
                                  baseline='middle',
                                  orient='right',
                                  anchor='middle',
                                  angle=90,
                                  # dy=10,
                                  dx=30 if i == 0 else 0,
                              ),
                              )
    if i == 0:
        chart = (chart_imnet | chart_places).resolve_scale(x='shared')
    else:
        chart = (chart & (chart_imnet | chart_places).resolve_scale(x='shared'))
chart.save('test.html')
For now, I don't know a good answer, but once https://github.com/altair-viz/altair/pull/2528 is accepted you can use the xOffset encoding channel as such:
alt.Chart(df, height=90).mark_bar(tooltip=True).encode(
    x=alt.X("Scenes:N"),
    y=alt.Y("mean(Y):Q"),
    color=alt.Color("Scenes:N"),
    opacity=alt.Opacity("Dataset:N"),
    xOffset=alt.XOffset("Dataset:N"),
    column=alt.Column('Layer:N'),
    row=alt.Row("Model:N")
).resolve_scale(x='independent')
Which will result in:
See Colab Notebook or Vega Editor
EDIT
To control the opacity and the legend names, one can do the following:
alt.Chart(df, height=90).mark_bar(tooltip=True).encode(
    x=alt.X("Scenes:N"),
    y=alt.Y("mean(Y):Q"),
    color=alt.Color("Scenes:N"),
    opacity=alt.Opacity("Dataset:N",
                        scale=alt.Scale(domain=['D1', 'D2'],
                                        range=[0.2, 1.0]),
                        legend=alt.Legend(labelExpr="datum.label == 'D1' ? 'D1 - transparent' : 'D2 - full'")),
    xOffset=alt.XOffset("Dataset:N"),
    column=alt.Column('Layer:N'),
    row=alt.Row("Model:N")
).resolve_scale(x='independent')
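As a usage note (assuming an Altair release new enough to ship the xOffset channel from the PR above), the chart definition can be assigned to a name and saved to HTML just like the original script does; the file name below is only an example:
grouped = alt.Chart(df, height=90).mark_bar(tooltip=True).encode(
    x=alt.X("Scenes:N"),
    y=alt.Y("mean(Y):Q"),
    color=alt.Color("Scenes:N"),
    opacity=alt.Opacity("Dataset:N"),
    xOffset=alt.XOffset("Dataset:N"),
    column=alt.Column('Layer:N'),
    row=alt.Row("Model:N")
).resolve_scale(x='independent')
grouped.save('test_grouped.html')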

Drawing segments (tangents) of fixed lengths preserving the aspect angles with matplotlib

Context: I'm trying to display the gradients as fixed-length lines on a plot of gradient noise. Each "gradient" can be seen as a tangent on a given point. The issue is, even if I make sure the lines have the same length, the aspect ratio stretches them:
The complete code to generate this:
from math import sqrt, floor, modf, sin
import matplotlib.pyplot as plt
mix = lambda a, b, x: a*(1-x) + b*x
interpolant = lambda t: ((6*t - 15)*t + 10)*t*t*t
rng01 = lambda x: modf(sin(x) * 43758.5453123)[0]
def _gradient_noise(t):
    i = floor(t)
    f = t - i
    s0 = rng01(i) * 2 - 1
    s1 = rng01(i + 1) * 2 - 1
    v0 = s0 * f
    v1 = s1 * (f - 1)
    return mix(v0, v1, interpolant(f))
def _plot_noise(n, interp_npoints=100):
    xdata = [i/interp_npoints for i in range(n * interp_npoints)]
    gnoise = [_gradient_noise(x) for x in xdata]
    plt.plot(xdata, gnoise, label='gradient noise')
    plt.xlabel('t')
    plt.ylabel('amplitude')
    plt.grid(linestyle=':')
    plt.legend()
    for i in range(n + 1):
        a = rng01(i) * 2 - 1  # gradient slope
        norm = sqrt(1 + a**2)
        norm *= 4  # 1/4 length
        vnx, vny = 1/norm, a/norm
        x = (i-vnx/2, i+vnx/2)
        y = (-vny/2, vny/2)
        plt.plot(x, y, 'r-')
    plt.show()

if __name__ == '__main__':
    _plot_noise(15)
The red-lines drawing is located in the for-loop.
hypot(x[1]-x[0], y[1]-y[0]) gives me a constant .25 for every vector, which corresponds to my target length (¼). That means my segments actually have the correct length for the given aspect. This can also be "verified" with .set_aspect(1):
I've tried several things, such as translating the coordinates into display coordinates (plt.gca().transData.transform(...)), scaling them, and then transforming back (plt.gca().transData.inverted().transform(...)), without success (as if the aspect was applied on top of the display coordinates). Doing that would probably also change the angles anyway.
So to sum up: I'm looking for a way to display lines with a fixed length (expressed in the x data coordinates system), and oriented (rotated) in the xy data coordinates system.
Welcome to SO. What a well asked first question. It made me question my sanity for a hot second once I reproduced the plot and the math checked out...
However, you identified the core problem yourself: the issue is that in your code the length of the gradient lines is determined in data coordinates, when it should depend on the aspect ratio of the plot.
So, if you want the gradient lines to be of uniform length in display space, then you need to rescale either the dx or the dy component by the aspect ratio of the plot (or its inverse, respectively) when computing the norm:
import matplotlib.pyplot as plt
from math import sqrt, floor
mix = lambda a, b, x: a*(1-x) + b*x
interpolant = lambda t: ((6*t - 15)*t + 10)*t*t*t
rng01 = lambda x: ((1103515245*x + 12345) % 2**32) / 2**32
def _gradient_noise(t):
    i = floor(t)
    f = t - i
    s0 = rng01(i) * 2 - 1
    s1 = rng01(i + 1) * 2 - 1
    v0 = s0 * f
    v1 = s1 * (f - 1)
    return mix(v0, v1, interpolant(f))
def _plot_noise(n, interp_npoints=100):
    xdata = [i/interp_npoints for i in range(n * interp_npoints)]
    gnoise = [_gradient_noise(x) for x in xdata]
    fig, ax = plt.subplots()
    ax.plot(xdata, gnoise, label='gradient noise')
    ax.set_xlabel('t')
    ax.set_ylabel('amplitude')
    ax.grid(linestyle=':')
    ax.legend(loc=1)
    x0, x1, y0, y1 = ax.axis()
    aspect = (y1 - y0) / (x1 - x0)
    for i in range(n + 1):
        dy = rng01(i) * 2 - 1  # gradient slope
        dx = 1
        norm = sqrt(dx**2 + (dy / aspect)**2)
        # norm *= 4 # 1/4 length
        vnx, vny = dx/norm, dy/norm
        x = (i-vnx/2, i+vnx/2)
        y = (-vny/2, vny/2)
        ax.plot(x, y, 'r-')
    plt.show()

if __name__ == '__main__':
    _plot_noise(15)
Final code with proper aspect ratio and resize event handled:
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
from math import hypot, floor, modf, sin
mix = lambda a, b, x: a*(1-x) + b*x
interpolant = lambda t: ((6*t - 15)*t + 10)*t*t*t
rng01 = lambda x: modf(sin(x) * 43758.5453123)[0]
def _gradient_noise(t):
    i = floor(t)
    f = t - i
    s0 = rng01(i) * 2 - 1
    s1 = rng01(i + 1) * 2 - 1
    v0 = s0 * f
    v1 = s1 * (f - 1)
    return mix(v0, v1, interpolant(f))

def _get_ar(ax):
    fs = ax.figure.get_size_inches()
    pos = ax.get_position(original=False)
    return 1 / (ax.get_data_ratio() * (fs[0] * pos.width) / (fs[1] * pos.height))

def _get_line_coords(aspect, i):
    dx, dy = 1, rng01(i) * 2 - 1  # gradient slope
    norm = hypot(dx, dy * aspect)
    vnx, vny = dx/norm, dy/norm
    x = (i-vnx/2, i+vnx/2)
    y = (-vny/2, vny/2)
    return x, y
def _plot_noise(n, interp_npoints=100):
    xdata = [i/interp_npoints for i in range(n * interp_npoints)]
    gnoise = [_gradient_noise(x) for x in xdata]
    fig, ax = plt.subplots()
    ax.plot(xdata, gnoise, label='gradient noise')
    ax.set_xlabel('t')
    ax.set_ylabel('amplitude')
    ax.grid(linestyle=':')
    ax.legend(loc=1)
    xlim = ax.get_xlim()
    ylim = ax.get_ylim()
    aspect = _get_ar(ax)
    resize_objects = []
    for i in range(n + 1):
        lx, ly = _get_line_coords(aspect, i)
        line = ax.plot(lx, ly, 'r-')[0]
        ellipse = Ellipse(xy=(i, 0), width=1, height=1/aspect, fill=False, linestyle=':')
        ax.add_patch(ellipse)
        resize_objects.append((line, ellipse))

    def _onresize(event):
        ar = _get_ar(ax)
        for i, (line, ellipse) in enumerate(resize_objects):
            ellipse.set_height(1 / ar)
            lx, ly = _get_line_coords(ar, i)
            line.set_xdata(lx)
            line.set_ydata(ly)

    ax.figure.canvas.mpl_connect('resize_event', _onresize)
    ax.set_xlim(xlim)
    ax.set_ylim(ylim)
    plt.show()

if __name__ == '__main__':
    _plot_noise(10)
Some notes:
the same question was asked on matplotlib discourse, where jklymak provided the correct answer for the ratio computation: https://discourse.matplotlib.org/t/drawing-segments-tangents-of-fixed-lengths-preserving-the-aspect-angles-with-matplotlib/21844/14
the ax.get_{x,y}lim() → ax.set_{x,y}lim() roundtrip seems necessary because the aspect is computed based on the initial axes limits, which change when plotting the lines/ellipses
the resize event handling is not necessary if the figure is only exported
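As a quick way to convince yourself that the segments really are uniform in display space (a check of my own, run inside _plot_noise from the final script above, just before plt.show()): transform each line's endpoints to pixel coordinates and compare their lengths.
# endpoint distances in pixel (display) coordinates should all come out equal
fig.canvas.draw()                  # make sure the transforms are up to date
for line in ax.lines[1:]:          # ax.lines[0] is the noise curve itself
    (px0, py0), (px1, py1) = ax.transData.transform(
        list(zip(line.get_xdata(), line.get_ydata())))
    print(hypot(px1 - px0, py1 - py0))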

Numpy float128 is not giving a correct answer

I created a differential equation solver (4th-order Runge-Kutta method) in Python. Then I decided to check its results by setting the parameter mu to 0 and looking at the numeric solution it returned.
The problem is, I know this solution should give a stable oscillation, but instead I get a diverging solution.
The code is presented below. I tried to solve this problem (rounding errors from floating point precision) by using the numpy float128 data type, but the solver keeps giving me the wrong answer.
The code is:
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
def f(t,x,v):
    f = -k/m*x-mu/m*v
    return(f)

def g(t,x,v):
    g = v
    return(g)
def srunge4(t,x,v,dt):
    k1 = f(t,x,v)
    l1 = g(t,x,v)
    k2 = f(t+dt/2, x+k1*dt/2, v+l1*dt/2)
    l2 = g(t+dt/2, x+k1*dt/2, v+l1*dt/2)
    k3 = f(t+dt/2, x+k2*dt/2, v+l2*dt/2)
    l3 = g(t+dt/2, x+k2*dt/2, v+l2*dt/2)
    k4 = f(t+dt/2, x+k3*dt, v+l3*dt)
    l4 = g(t+dt/2, x+k3*dt, v+l3*dt)
    v = v + dt/6*(k1+2*k2+2*k3+k4)
    x = x + dt/6*(l1+2*l2+2*l3+l4)
    t = t + dt
    return([t,x,v])
mu = np.float128(0.00); k = np.float128(0.1); m = np.float128(6)
x0 = np.float128(5); v0 = np.float128(-10)
t0 = np.float128(0); tf = np.float128(1000); dt = np.float128(0.05)
def sedol(t, x, v, tf, dt):
    sol = np.array([[t,x,v]], dtype='float128')
    while sol[-1][0]<=tf:
        t,x,v = srunge4(t,x,v,dt)
        sol=np.append(sol,np.float128([[t,x,v]]),axis=0)
    sol = pd.DataFrame(data=sol, columns=['t','x','v'])
    return(sol)
ft_runge = sedol(t0, x0, v0, tf, dt=0.1)
plt.close("all")
graf1 = plt.plot(ft_runge.iloc[:,0],ft_runge.iloc[:,1],'b')
plt.show()
Am I using numpy float128 in the wrong way?
In srunge4 you are mixing up the association of k and l with x and v. Per the function association and the final summation, the association should be (v, f, k) and (x, g, l). This has to be obeyed in the updates of the stages of the method.
In stage 4, the first argument should be t+dt. However, as t is not used in the derivative computation, this error has no consequence here.
Also, you are destroying the float128 computation if you pass one parameter as a plain Python float (which is a float64), as in dt=0.1.
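To illustrate that last point (a small check of my own, assuming a platform where numpy's float128 has more precision than a double): a Python literal like 0.1 is rounded to a double first, so converting it to float128 does not recover the lost digits, whereas computing the value in float128 arithmetic does.
import numpy as np
a = np.float128(0.1)                  # 0.1 rounded to a 64-bit double, then widened
b = np.float128(1) / np.float128(10)  # 1/10 computed directly in extended precision
print(a == b)  # False wherever float128 is actually wider than a double
print(b - a)   # small but nonzero: the precision already lost by writing 0.1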
The code with these corrections and some further simplifications is
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
mu = np.float128(0.00); k = np.float128(0.1); m = np.float128(6)
x0 = np.float128(5); v0 = np.float128(-10)
t0 = np.float128(0); tf = np.float128(1000); dt = np.float128(0.05)
def f(t,x,v): return -(k*x+mu*v)/m
def g(t,x,v): return v
def srunge4(t,x,v,dt):  # should be skutta4, Wilhelm Kutta gave this method in 1901
    k1, l1 = (fg(t,x,v) for fg in (f,g))
    # here is the essential correction, x->l, v->k
    k2, l2 = (fg(t+dt/2, x+l1*dt/2, v+k1*dt/2) for fg in (f,g))
    k3, l3 = (fg(t+dt/2, x+l2*dt/2, v+k2*dt/2) for fg in (f,g))
    k4, l4 = (fg(t+dt  , x+l3*dt  , v+k3*dt  ) for fg in (f,g))
    v = v + dt/6*(k1+2*k2+2*k3+k4)
    x = x + dt/6*(l1+2*l2+2*l3+l4)
    t = t + dt
    return([t,x,v])
def sedol(t, x, v, tf, dt):
    sol = [[t,x,v]]
    while t<=tf:
        t,x,v = srunge4(t,x,v,dt)
        sol.append([t,x,v])
    sol = pd.DataFrame(data=np.asarray(sol), columns=['t','x','v'])
    return(sol)
ft_runge = sedol(t0, x0, v0, tf, dt=2*dt)
plt.close("all")
fig,ax = plt.subplots(1,3)
ft_runge.plot(x='t', y='x', ax=ax[0])
ft_runge.plot(x='t', y='v', ax=ax[1])
ft_runge.plot.scatter(x='x', y='v', s=1, ax=ax[2])
plt.show()
It produces the expected ellipse without visually recognizable changes in the amplitudes.

Using numpy.trapz() to calculate a pdf

I have some polar [R, theta] data that I want to express as a pdf.
As I understand it, I simply normalize the data so that its integral is 1.
I am using the numpy.trapz() method, but I am a bit unsure of the syntax.
The code below shows what I have tried. I expect the outputs to be in the range (0, 1], but they seem a bit high.
Am I using the .trapz() method correctly?
Is .trapz() appropriate for circular data?
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace( -np.pi, np.pi, 50 )
r = np.array([ 1.23445554e-02, 3.03557798e-02, 7.02699393e-02,
1.51352457e-01, 3.00238850e-01, 5.43818199e-01,
8.93133849e-01, 1.32293971e+00, 1.76077447e+00,
2.10102936e+00, 2.24555703e+00, 2.15016660e+00,
1.84666100e+00, 1.42543142e+00, 9.91689535e-01,
6.24126941e-01, 3.56994042e-01, 1.86663166e-01,
8.98562553e-02, 4.01629846e-02, 1.68354083e-02,
6.69424846e-03, 2.55748908e-03, 9.52021934e-04,
3.50547486e-04, 1.29726513e-04, 4.90546282e-05,
1.92776373e-05, 8.00850696e-06, 3.57673721e-06,
1.74551297e-06, 9.45084654e-07, 5.75501932e-07,
3.98658528e-07, 3.16843272e-07, 2.90421340e-07,
3.07486145e-07, 3.75264328e-07, 5.25073662e-07,
8.35385632e-07, 1.49531795e-06, 2.97409444e-06,
6.48232116e-06, 1.52539070e-05, 3.81503341e-05,
9.97843762e-05, 2.68504223e-04, 7.31213584e-04,
1.98302971e-03, 5.27234229e-03
]
)
r_pdf = r / np.trapz( r, x = theta )
print( r_pdf.max() )
fig = plt.figure()
ax = fig.add_subplot( 111, polar = True )
ax.plot( theta, r, lw = 3 )
ax.plot( theta, r_pdf, lw = 3 )
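For reference, a quick sanity check of the normalization (using the same theta and r_pdf as above): individual density values may exceed 1, but the integral over theta should come out close to 1.
print( np.trapz( r_pdf, x = theta ) )  # should be ~1.0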