Bifurcation diagram using python - matplotlib

I have a simple question. Can we create a bifurcation diagram from any type of equation, or only from the logistic map equation, i.e.
x[i+1] = r*x[i]*(1-x[i])
What is the main idea behind making a bifurcation diagram? I have been working on this for the last couple of weeks but have gotten nowhere. I only ever see the same equation as above being plotted everywhere. I have a different equation:
x[i+1] = ((a*x[i]*2**n + b) mod 2**n) / 2**n
where mod 2**n indicates the remainder after division by 2**n.
I have to write code to make a bifurcation diagram for the above equation.
I tried varying a, but it does not work.
import matplotlib.pyplot as plt
import numpy as np

def iter_map(a, N):
    x = np.zeros(N)
    x[0] = 0.5
    for i in range(N-1):
        d = (a*x[i]*(2**n) + b) % (2**n)
        x[i+1] = d/2**n
    return x

N = 5
n = 8
a0 = 1.0
a_max = 4
step = 0.001

plt.figure()
for b in [0, 1]:
    for a in np.arange(a0, a_max, step):
        x = iter_map(a, 8)
        plt.plot(a*np.ones_like(x), x)
plt.xlabel(r'$a$')
plt.ylabel(r'$x$')
plt.show()
The result does not look as expected
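For reference, the usual recipe for a bifurcation diagram is the same regardless of the map: for each parameter value, iterate the map, discard an initial transient, and plot only the remaining iterates as individual points. Below is a minimal sketch of that idea for the map above; the transient length, the number of plotted iterates, and the fixed value of b are my own assumptions, not part of the question.

import numpy as np
import matplotlib.pyplot as plt

n = 8
b = 1
n_transient = 200  # iterations discarded as warm-up (assumed value)
n_plot = 100       # iterations kept and plotted (assumed value)

plt.figure()
for a in np.arange(1.0, 4.0, 0.01):
    x = 0.5
    # Let the orbit settle before recording anything.
    for _ in range(n_transient):
        x = ((a*x*2**n + b) % 2**n) / 2**n
    xs = np.empty(n_plot)
    for i in range(n_plot):
        x = ((a*x*2**n + b) % 2**n) / 2**n
        xs[i] = x
    # Plot the recorded iterates as single pixels, not connected lines.
    plt.plot(np.full(n_plot, a), xs, ',k')
plt.xlabel(r'$a$')
plt.ylabel(r'$x$')
plt.show()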

Related

Matplotlib streamplot with streamlines that don't break or end

I'd like to make a streamplot with lines that don't stop when they get too close together. I'd rather each streamline be calculated in both directions until it hits the edge of the window. The result is there'd be some areas where they'd all jumble up. But that's what I want.
Is there any way to do this in matplotlib? If not, is there another tool I can use for this that could interface with python/numpy?
import numpy as np
import matplotlib.pyplot as plt
Y,X = np.mgrid[-10:10:.01, -10:10:.01]
U, V = Y**2, X**2
plt.streamplot(X,Y, U,V, density=1)
plt.show(block=False)
Ok, I've figured out I can get mostly what I want by turning up the density a lot and using custom start points. I'm still interested if there is a better or alternate way to do this.
Here's my solution. Doesn't it look so much better?
import numpy as np
import matplotlib.pyplot as plt
Y,X = np.mgrid[-10:10:.01, -10:10:.01]
y,x = Y[:,0], X[0,:]
U, V = Y**2, X**2
stream_points = np.array(list(zip(np.arange(-9, 9, .5), -np.arange(-9, 9, .5))))  # list() is needed so NumPy sees the pairs under Python 3
plt.streamplot(x,y, U,V, start_points=stream_points, density=35)
plt.show(block=False)
Edit: By the way, there seems to be some bug in streamplot such that start_points keyword only works if you use 1d arrays for the grid data. See Python Matplotlib Streamplot providing start points
As of Matplotlib version 3.6.0, an optional parameter broken_streamlines has been added for disabling streamline breaks.
Adding it to your snippet produces the following result:
import numpy as np
import matplotlib.pyplot as plt
Y,X = np.mgrid[-10:10:.01, -10:10:.01]
U, V = Y**2, X**2
plt.streamplot(X,Y, U,V, density=1, broken_streamlines=False)
plt.show(block=False)
Note
This parameter just extends the streamlines which were originally drawn (as in the question). This means that the streamlines in the modified plot above are much more uneven than the result obtained in the other answer, with custom start_points. The density of streamlines on any stream plot does not represent the magnitude of U or V at that point, only their direction. See the documentation for the density parameter of matplotlib.pyplot.streamplot for more details on how streamline start points are chosen by default, when they aren't specified by the optional start_points parameter.
For accurate streamline density, consider using matplotlib.pyplot.contour, but be aware that contour does not show arrows.
Choosing start points automatically
It may not always be easy to choose a set of good starting points automatically. However, if you know the streamfunction corresponding to the flow you wish to plot you can use matplotlib.pyplot.contour to produce a contour plot (which can be hidden from the output), and then extract a suitable starting point from each of the plotted contours.
In the following example, psi_expression is the streamfunction corresponding to the flow. When modifying this example for your own needs, make sure to update both the line defining psi_expression, as well as the one defining U and V. Ensure these both correspond to the same flow.
The density of the streamlines can be altered by changing contour_levels. Here, the contours are uniformly distributed.
import numpy as np
import matplotlib.pyplot as plt
import sympy as sy
x, y = sy.symbols("x y")
psi_expression = x**3 - y**3
psi_function = sy.lambdify((x, y), psi_expression)
Y, X = np.mgrid[-10:10:0.01, -10:10:0.01]
psi_evaluated = psi_function(X, Y)
U, V = Y**2, X**2
contour_levels = np.linspace(np.amin(psi_evaluated), np.amax(psi_evaluated), 30)
# Draw a temporary contour plot.
temp_figure = plt.figure()
contour_plot = plt.contour(X, Y, psi_evaluated, contour_levels)
plt.close(temp_figure)
points_list = []
# Iterate over each contour.
for collection in contour_plot.collections:
    # Iterate over each segment in this contour.
    for path in collection.get_paths():
        middle_point = path.vertices[len(path.vertices) // 2]
        points_list.append(middle_point)
# Reshape python list into numpy array of coords.
stream_points = np.reshape(np.array(points_list), (-1, 2))
plt.streamplot(X, Y, U, V, density=1, start_points=stream_points, broken_streamlines=False)
plt.show(block=False)

Estimation of t-distribution by mean of samples does not work

I am trying to create a t-distribution by taking the mean of many samples from a normal distribution (and then estimating the shape with kernel density estimation).
For some reason, I am getting pretty different results when I compare what I get with a proper t-distribution. I don't understand what is going wrong, so I think I am confused about something.
Here is the code:
import numpy as np
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt
import seaborn
inner_sample_size = 10
X = np.arange(-3, 3, 0.01)
results = [
    np.mean(np.random.normal(size=inner_sample_size))
    for _ in range(10000)
]
estimation = gaussian_kde(results)
plt.plot(X, estimation.evaluate(X))
t_samples = np.random.standard_t(inner_sample_size, 10000)
t_estimator = gaussian_kde(t_samples)
plt.plot(X, t_estimator.evaluate(X))
plt.ylabel("Probability density")
plt.show()
And here is the plot I get:
Where the orange line is numpy's own t-distribution, and the blue line is the one estimated by sampling.
Your assumption that the mean of Standard Normals has a T distribution is incorrect. In fact, the mean of n Standard Normals is itself Normal (with mean 0 and variance 1/n), which explains the shape of your blue graph. To generate one random variable T from a T distribution with k degrees of freedom, you first generate k+1 independent Standard Normals Z_i, i=0,...,k. You then compute
T = Z_0 / sqrt( sum(Z_i^2, i=1 to k)/k ).
The sum of squared Standard Normals sum(Z_i^2, i=1 to k) has Chi-Squared Distribution with k degrees of freedom, so if there is a pre-canned method to generate this, you should use it, since it's likely more efficient.
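As a concrete illustration of that construction (my own sketch, not part of the original answer), using NumPy's chi-square generator for the denominator:

import numpy as np

k = 9            # degrees of freedom (assumed value)
size = 10000

Z0 = np.random.normal(size=size)           # the single standard normal in the numerator
chi2 = np.random.chisquare(k, size=size)   # distributed like a sum of k squared standard normals
T = Z0 / np.sqrt(chi2 / k)                 # samples from a t-distribution with k degrees of freedom

# These should closely match np.random.standard_t(k, size=size).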

Locally weighted smoothing for binary valued random variable

I have a random variable as follows:
f(x) = 1 with probability g(x)
f(x) = 0 with probability 1-g(x)
where 0 < g(x) < 1.
Assume g(x) = x. Let's say I am observing this variable without knowing the function g, and I obtained 200 samples as follows:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import binned_statistic

samples = np.zeros((200, 2))
g = np.random.rand(200)
for i in range(len(g)):
    samples[i] = (g[i], np.random.choice([0, 1], p=[1-g[i], g[i]]))
print(samples)
plt.plot(samples[:, 0], samples[:, 1], 'o')
Plot of 0s and 1s
Now, I would like to recover the function g from these points. The best I could think of was to draw a histogram and use the mean statistic:
bin_means, bin_edges, bin_number = binned_statistic(samples[:, 0], samples[:, 1], statistic='mean', bins=10)
plt.hlines(bin_means, bin_edges[:-1], bin_edges[1:], lw=2)
Histogram mean statistics
Instead, I would like to have a continuous estimate of the generating function.
I guess this is related to kernel density estimation, but I could not find the appropriate pointer.
This is straightforward with seaborn, without explicitly fitting an estimator:
import seaborn as sns
g = sns.lmplot(x= , y= , y_jitter=.02 , logistic=True)
Plug in your exogenous variable for x= and, analogously, your dependent variable for y=. y_jitter jitters the points for better visibility if you have a lot of data points. logistic=True is the main point here: it will give you the logistic regression line for the data.
Seaborn is built on top of matplotlib and works well with pandas, in case you want to put your data into a DataFrame.
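A minimal sketch of how this might look with the data generated in the question (assuming the samples array from above, and noting that logistic=True requires statsmodels to be installed):

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Wrap the (x, outcome) pairs from the question in a DataFrame.
df = pd.DataFrame(samples, columns=["x", "outcome"])

# logistic=True fits a logistic regression curve, giving a smooth estimate of g(x).
sns.lmplot(data=df, x="x", y="outcome", y_jitter=.02, logistic=True)
plt.show()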

Basic axis malfunction in matplotlib

When plotting using matplotlib, I ran into an interesting issue where the y axis is scaled by a very inconvenient quantity. Here's a MWE that demonstrates the problem:
import numpy as np
import matplotlib.pyplot as plt
l = np.linspace(0.5,2,2**10)
a = (0.696*l**2)/(l**2 - 9896.2e-9**2)
plt.plot(l,a)
plt.show()
When I run this, I get a figure that looks like this picture
The y-axis clearly is scaled by a silly quantity even though the y data are all between 1 and 2.
This is similar to the question:
Axis numerical offset in matplotlib
I'm not satisfied with the answer to this question, in that it makes no sense to me why I need to go through the convoluted process of changing axis settings when the data are between 1 and 2 (EDIT: between 0 and 1). Why does this happen? Why does matplotlib use such a bizarre scaling?
The data in the plot are all between 0.696000000017 and 0.696000000273. For such cases it makes sense to use some kind of offset.
If you don't want that, you can use your own formatter:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker
l = np.linspace(0.5,2,2**10)
a = (0.696*l**2)/(l**2 - 9896.2e-9**2)
plt.plot(l,a)
fmt = matplotlib.ticker.StrMethodFormatter("{x:.12f}")
plt.gca().yaxis.set_major_formatter(fmt)
plt.show()

Cubic spline interpolation drops out halfway

I am trying to make a cubic spline interpolation and for some reason, the interpolation drops off in the middle of it. It's very mysterious and I can't find any mention of similar occurrences anywhere online.
This is for my dissertation so I have excluded some labels etc. to keep it obscure intentionally, but all the relevant code is as follows. For context, this is an astronomy related plot.
from scipy.interpolate import CubicSpline
import numpy as np
import matplotlib.pyplot as plt
W = np.array([0.435,0.606,0.814,1.05,1.25,1.40,1.60])
sum_all = np.array([sum435,sum606,sum814,sum105,sum125,sum140,sum160])
sum_can = np.array([sumc435,sumc606,sumc814,sumc105,sumc125,sumc140,sumc160])
fall = CubicSpline(W,sum_all)
newallx=np.arange(0.435,1.6,0.001)
newally=fall(newallx)
fcan = CubicSpline(W,sum_can)
newcanx=np.arange(0.435,1.6,0.001)
newcany=fcan(newcanx)
#----plot
plt.plot(newallx,newally)
plt.plot(newcanx,newcany)
plt.plot(W,sum_all,marker='o',color='r',linestyle='')
plt.plot(W,sum_can,marker='o',color='b',linestyle='')
plt.yscale("log")
plt.ylabel("Flux S$_v$ [erg s$^-$$^1$ cm$^-$$^2$ Hz$^-$$^1$]")
plt.xlabel("Wavelength [n$\lambda$]")
plt.show()
The plot that I get from that comes out like this, with a clear gap in the interpolation:
And in case you are wondering, these are the values in the sum_all and sum_can arrays (I assume it doesn't matter, but just in case you want the numbers to plot it yourself):
sum_all:
[ 3.87282732e+32 8.79993191e+32 1.74866333e+33 1.59946687e+33
9.08556547e+33 6.70458731e+33 9.84832359e+33]
sum_can:
[ 2.98381061e+28 1.26194810e+28 3.30328780e+28 2.90254609e+29
3.65117723e+29 3.46256846e+29 3.64483736e+29]
The gap happens between [0.606, 1.26194810e+28] and [0.814, 3.30328780e+28]. If I change the step from 0.001 to something larger, it becomes obvious that the plot doesn't actually break off but merely dips below 0 on the y-axis (the curve itself is continuous). So why does it do that? Surely that's not a correct interpolation? Just looking with our eyes, that's clearly not a well-interpolated connection between those two points.
Any tips or comments would be extremely appreciated. Thank you so much in advance!
The reason for the breakdown can be better observed on a linear scale.
We see that the spline actually passes below 0, which is undefined on a log scale.
So I would suggest first taking the logarithm of the data, performing the spline interpolation on the logarithmically scaled data, and then transforming back by raising 10 to the power of the interpolated values.
from scipy.interpolate import CubicSpline
import numpy as np
import matplotlib.pyplot as plt
W = np.array([0.435,0.606,0.814,1.05,1.25,1.40,1.60])
sum_all = np.array([3.87282732e+32, 8.79993191e+32, 1.74866333e+33, 1.59946687e+33,
                    9.08556547e+33, 6.70458731e+33, 9.84832359e+33])
sum_can = np.array([2.98381061e+28, 1.26194810e+28, 3.30328780e+28, 2.90254609e+29,
                    3.65117723e+29, 3.46256846e+29, 3.64483736e+29])
fall = CubicSpline(W,np.log10(sum_all))
newallx=np.arange(0.435,1.6,0.001)
newally=fall(newallx)
fcan = CubicSpline(W,np.log10(sum_can))
newcanx=np.arange(0.435,1.6,0.01)
newcany=fcan(newcanx)
plt.plot(newallx,10**newally)
plt.plot(newcanx,10**newcany)
plt.plot(W,sum_all,marker='o',color='r',linestyle='')
plt.plot(W,sum_can,marker='o',color='b',linestyle='')
plt.yscale("log")
plt.ylabel("Flux S$_v$ [erg s$^-$$^1$ cm$^-$$^2$ Hz$^-$$^1$]")
plt.xlabel("Wavelength [n$\lambda$]")
plt.show()