Laplace distribution sampling - pdf

Anyone know how to draw multiple times from a Laplace distribution in Stata? I want to run some Monte Carlo analysis and know that my data fits a Laplace distribution.

Here's a sample script. Use whatever scale parameter fits your problem; the location parameter here is implicitly zero, so if yours is not, just add it in.
clear
version 10: set seed 2803
set obs 10000
scalar sigma = 1
gen P = runiform()
gen y = sigma * cond(P <= 0.5, log(2 * P), -log(2 * (1 - P)))
We can use a normal quantile plot as a reference, showing that the tail behaviour is quite different from the normal (Gaussian):
qnorm y
Many people prefer to see some kind of density estimate:
kdensity y, biweight bw(0.2)
but the most critical graph is a dedicated quantile-quantile plot. This one uses qplot, which you must install from the Stata Journal archive (run search qplot within Stata to find it). Note that # is not a typo here: it is a placeholder for whatever would otherwise be plotted on the x axis.
qplot y, trscale(cond(# <= 0.5, log(2 * #), -log(2 * (1 - #))))
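For cross-checking outside Stata, here is a minimal numpy sketch of the same inverse-CDF construction (variable names are illustrative; assumes Python with numpy installed):
import numpy as np
rng = np.random.default_rng(2803)
sigma = 1.0  # scale parameter; location is implicitly 0
P = rng.uniform(size=10000)
# Same branching as the Stata cond(): the inverse CDF of the Laplace distribution
y = sigma * np.where(P <= 0.5, np.log(2 * P), -np.log(2 * (1 - P)))
# Sanity check: the mean should be near 0 and the variance near 2 * sigma**2
print(y.mean(), y.var())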

Related

Projection of fisheye camera model by Scaramuzza

I am trying to understand the fisheye model by Scaramuzza, which is implemented in Matlab, see https://de.mathworks.com/help/vision/ug/fisheye-calibration-basics.html#mw_8aca38cc-44de-4a26-a5bc-10fb312ae3c5
The backprojection (uv to xyz) seems fairly straightforward according to the following equation:
lambda * [u, v, a0 + a2*rho^2 + a3*rho^3 + a4*rho^4] = [x, y, z], where rho = sqrt(u^2 + v^2)
However, how does the projection (from xyz to uv) work?! In my understanding, we get a rather complex set of equations. Unfortunately, I can't find any details on that.
Okay, I believe I now fully understand it after analyzing the functions of the (Windows) calibration toolbox by Scaramuzza, see https://sites.google.com/site/scarabotix/ocamcalib-toolbox/ocamcalib-toolbox-download-page
Method 1 found in file "world2cam.m"
For the projection, use the same equation as above. In the projection case, the equation has three knowns (x, y, z) and three unknowns (u, v and lambda). We first substitute lambda with rho by realizing that
u = x/lambda
v = y/lambda
rho=sqrt(u^2+v^2) = 1/lambda * sqrt(x^2+y^2) --> lambda = sqrt(x^2+y^2) / rho
After that, we have the unknown variables (u,v and rho)
u = x/lambda = x / sqrt(x^2+y^2) * rho
v = y/lambda = y / sqrt(x^2+y^2) * rho
z / lambda = z /sqrt(x^2+y^2) * rho = a0 + a2*rho^2 + a3*rho^3 + a4*rho^4
As you can see, the last equation now has only one unknown, namely rho. Thus, we can solve it easily, e.g. using the roots function in Matlab. However, a solution does not always exist, nor is it necessarily unique. After solving for rho, calculating uv is very simple using the equation above.
This procedure needs to be performed for each point (x, y, z) separately and is thus rather computationally expensive for a whole image.
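A minimal numpy sketch of Method 1 (not the toolbox code; the function name, the coefficient layout a = [a0, a1, a2, a3, a4] with a1 = 0, and the choice among multiple roots are all assumptions):
import numpy as np

def project_point(x, y, z, a):
    # Scaramuzza-style polynomial: f(rho) = a0 + a2*rho^2 + a3*rho^3 + a4*rho^4
    r = np.hypot(x, y)
    m = z / r
    # Last equation rearranged: a4*rho^4 + a3*rho^3 + a2*rho^2 - m*rho + a0 = 0
    roots = np.roots([a[4], a[3], a[2], -m, a[0]])
    real = roots[np.isreal(roots)].real
    real = real[real > 0]  # a solution may not exist, nor be unique
    if real.size == 0:
        return None
    rho = real.min()  # picking the smallest positive root is an assumption
    return x / r * rho, y / r * rho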
Method 2 found in file "world2cam_fast.m"
The last equation gives rho as a function of (x, y, z). However, if we define m = z / sqrt(x^2+y^2) = tan(90° - theta), it depends on only one variable, namely rho(m).
Instead of solving this equation rho(m) for every new m, the authors "plot" the function for several values of m and fit an 8th-order polynomial to these points. Using this polynomial, they can then compute an approximate value of rho(m) much more quickly.
This becomes clear because "world2cam_fast.m" makes use of ocam_model.pol, which is calculated in "undistort.m". "undistort.m" in turn makes use of "findinvpoly.m".
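The tabulate-and-fit step is easy to mimic; here is a rough numpy sketch (the parameter range, sample count and even the use of m rather than theta as the fit variable are illustrative, not the toolbox's exact choices):
import numpy as np

def fit_rho_of_m(a, m_min=-0.5, m_max=10.0, num=500, deg=8):
    ms, rhos = [], []
    for m in np.linspace(m_min, m_max, num):
        roots = np.roots([a[4], a[3], a[2], -m, a[0]])
        real = roots[np.isreal(roots)].real
        real = real[real > 0]
        if real.size:
            ms.append(m)
            rhos.append(real.min())
    # Fit an 8th-order polynomial rho(m); evaluate later with np.polyval(p, m)
    return np.polyfit(ms, rhos, deg)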

How to recover amplitude and phase shift from a Fourier transform in NumPy?

I'm trying to write a simple Python script that recovers the amplitude and phase of a sine wave from its Fourier transform.
I should be able to do this by calculating the magnitude and direction of the vector defined by the real and imaginary parts of the Fourier transform at a given frequency, i.e.:
Amplitude_at_freq = sqrt(real_component_at_freq^2 + imag_component_at_freq^2)
Phase = arctan(imag_component_at_freq/real_component_at_freq)
Ref: 1 min 45 seconds into this video: https://www.youtube.com/watch?time_continue=106&v=IWQfj05i87g
I've written a simple Python script using numpy's fft library to try and reproduce this, but despite writing out my derivation exactly as above, I fail to recover the amplitude and phase, although I can recover the frequency of my test sine wave correctly. These previous posts, Calculating amplitude from np.fft and Why FFT does not retrieve original amplitude when increasing signal length, point to the same problem (the amplitude is off by a factor of 2). Specifically, the solution is to "multiply by 2 (half of spectrum is removed so energy must be preserved)", but I need clarification on what that means. Secondly, there is no mention of my issue with recovering the phase change, and the amplitude there is calculated differently from what I have here.
import numpy as np

# Define amplitude, phase, frequency
_A = 4 # Amplitude
_p = 0 # Phase shift
_f = 8 # Frequency
# Construct a simple signal
t = np.linspace(0, 2*np.pi, 1024 + 1)[:-1]
g = _A * np.sin(_f * t + _p)
# Apply the fourier transform
ff = np.fft.fft(g)
# Get frequency of original signal
ff_ii = np.where(np.abs(ff) > 1.0)[0][0] # Just get one frequency, the other one is just mirrored freq at negative value
print('frequency of:', ff_ii)
# Get the complex vector at that frequency to retrieve amplitude and phase shift
yy = ff[ff_ii]
# Calculate the amplitude
T = t.shape[0] # number of samples; divide the FFT magnitude by this to get the amplitude
A = np.sqrt(yy.real**2 + yy.imag**2)/T
print('amplitude of:', A)
# Calculate phase shift
phi = np.arctan(yy.imag/yy.real)
print('phase change:', phi)
However, the result I'm getting is:
>> frequency of: 8
>> amplitude of: 2.0
>> phase change: 1.5707963267948957
So the frequency is accurate, but I'm getting an amplitude of 2, when it should be 4, and phase change of pi/2, when it should be zero.
Is my math wrong, or is my understanding of numpy's fft implementation incorrect?
Fourier analysis decomposes a signal into a sum of exp(i*2*pi*f*t) terms, so it sees
A*sin(2*pi*f1*t) as:
-i*A/2 * exp(i*2*pi*f1*t) + i*A/2 * exp(-i*2*pi*f1*t),
which is mathematically equal. So in Fourier terms, you have both the positive frequency f1 and the negative frequency -f1, with complex amplitudes -i*A/2 and i*A/2 respectively. Each 'side' carries only half the amplitude, but if you add the two together (as the inverse Fourier transform does) you get back amplitude A. This split into positive and negative frequencies is where your missing factor of 2 is, if you only look at one (positive or negative) side of the spectrum. Looking at only one side is common practice because, for real signals, either half is trivial to derive from the other.
For the exact mathematics, look into Euler's formula and the Fourier transform.
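Applied to the script in the question, a minimal fix (reusing yy and t from above) would be:
# Factor of 2: the amplitude is split between the +f and -f bins
A = 2 * np.abs(yy) / t.shape[0]  # gives 4.0
# np.angle computes atan2(imag, real), which is robust when the real part
# is numerically zero; the FFT phase is referenced to a cosine, and a sine
# lags a cosine by pi/2, so add pi/2 to read off the sine's phase shift
phi = np.angle(yy) + np.pi / 2   # gives approximately 0
print(A, phi)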

Constrained np.polyfit

I am trying to fit a quadratic to some experimental data using polyfit in numpy. I am looking for a concave curve, and hence want to make sure that the coefficient of the quadratic term is negative. The fit is also weighted, in the sense that there are weights on the points. Is there an easy way to do that? Thanks.
The use of weights is described here (numpy.polyfit).
Basically, you need a weight vector with the same length as x and y.
To avoid the wrong sign in the coefficient, you could use a fit function definition like
def fitfunc(x, a, b, c):
    return -1 * abs(a) * x**2 + b * x + c
This will give you a negative coefficient for x**2 at all times.
You can use scipy.optimize.curve_fit.
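A minimal sketch with weights (assumes scipy is installed; curve_fit's sigma parameter takes per-point uncertainties, so larger weights map to smaller sigma; the data here is made up):
import numpy as np
from scipy.optimize import curve_fit

def fitfunc(x, a, b, c):
    return -1 * abs(a) * x**2 + b * x + c

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.9, 3.2, 2.1, 0.3])  # made-up measurements
w = np.array([1.0, 1.0, 2.0, 1.0, 1.0])  # made-up weights
popt, pcov = curve_fit(fitfunc, x, y, sigma=1.0 / w)
print(popt)  # the x**2 term enters the model as -abs(a), so the fitted curve is always concave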
Alternatively, you can run polyfit with degree 2 and, if the leading coefficient comes out positive, run a linear fit instead (polyfit with degree 1), as sketched below.
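A sketch of that fallback, reusing x, y and w from the snippet above:
coeffs = np.polyfit(x, y, 2, w=w)      # weighted quadratic fit
if coeffs[0] > 0:                      # leading coefficient has the wrong sign
    coeffs = np.polyfit(x, y, 1, w=w)  # fall back to a weighted linear fit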

FFT real/imaginary/abs parts interpretation

I'm currently learning about the discrete Fourier transform and I'm playing with numpy to understand it better.
I tried to plot a "sin x sin x sin" signal and obtained a clean FFT with 4 non-zero points. I naively told myself: "Well, if I plot a "sin + sin + sin + sin" signal with these amplitudes and frequencies, I should obtain the same "sin x sin x sin" signal, right?"
Well... not exactly
(First is "x" signal, second is "+" signal)
Both share the same amplitudes/frequencies, but they are not the same signals, even though I can see they have some similarities.
Ok, since I only plotted the absolute values of the FFT, I guess I lost some information.
Then I plotted the real part, imaginary part and absolute values for both signals:
Now I'm confused. What do I do with all this? I read about the DFT from a mathematical point of view. I understand that the complex values come from the unit circle. I even had to learn about Hilbert spaces to understand how it works (and it was painful!... and I only scratched the surface). I only wish to understand whether these real/imaginary plots have any concrete meaning outside the mathematical world:
abs(fft) : frequencies + amplitudes
real(fft) : ?
imaginary(fft) : ?
Code:
import numpy as np
import matplotlib.pyplot as plt

N = 512   # Sample count
fs = 128  # Sampling rate
st = 1.0 / fs  # Sample time
t = np.arange(N) * st  # Time vector

signal1 = \
    1   * np.cos(2*np.pi * t) * \
    2   * np.cos(2*np.pi * 4*t) * \
    0.5 * np.cos(2*np.pi * 0.5*t)
signal2 = \
    0.25*np.sin(2*np.pi * 2.5*t) + \
    0.25*np.sin(2*np.pi * 3.5*t) + \
    0.25*np.sin(2*np.pi * 4.5*t) + \
    0.25*np.sin(2*np.pi * 5.5*t)

_, axes = plt.subplots(4, 2)

# Plot signals
axes[0][0].set_title("Signal 1 (multiply)")
axes[0][0].grid()
axes[0][0].plot(t, signal1, 'b-')
axes[0][1].set_title("Signal 2 (add)")
axes[0][1].grid()
axes[0][1].plot(t, signal2, 'r-')

# FFT + bins + normalization (use integer division for Python 3 indexing)
half = N // 2
bins = np.fft.fftfreq(N, st)
fft = np.fft.fft(signal1) / half
fft2 = np.fft.fft(signal2) / half

# Plot real parts
axes[1][0].set_title("FFT 1 (real)")
axes[1][0].grid()
axes[1][0].plot(bins[:half], np.real(fft[:half]), 'b-')
axes[1][1].set_title("FFT 2 (real)")
axes[1][1].grid()
axes[1][1].plot(bins[:half], np.real(fft2[:half]), 'r-')

# Plot imaginary parts
axes[2][0].set_title("FFT 1 (imaginary)")
axes[2][0].grid()
axes[2][0].plot(bins[:half], np.imag(fft[:half]), 'b-')
axes[2][1].set_title("FFT 2 (imaginary)")
axes[2][1].grid()
axes[2][1].plot(bins[:half], np.imag(fft2[:half]), 'r-')

# Plot magnitudes
axes[3][0].set_title("FFT 1 (abs)")
axes[3][0].grid()
axes[3][0].plot(bins[:half], np.abs(fft[:half]), 'b-')
axes[3][1].set_title("FFT 2 (abs)")
axes[3][1].grid()
axes[3][1].plot(bins[:half], np.abs(fft2[:half]), 'r-')

plt.show()
For each frequency bin, the magnitude sqrt(re^2 + im^2) tells you the amplitude of the component at the corresponding frequency. The phase atan2(im, re) tells you the relative phase of that component. The real and imaginary parts, on their own, are not particularly useful, unless you are interested in symmetry properties around the data window's center (even vs. odd).
With respect to some reference point, say the center of a fixed time window, a sine wave and a cosine wave of the same frequency will look different (have different starting phases with respect to any fixed time reference point). They will also be mathematically orthogonal over any integer periodic width, so can represent independent basis vector components of a transform.
The real portion of an FFT result is how much each frequency component resembles a cosine wave, the imaginary component, how much each component resembles a sine wave. Various ratios of sine and cosine components together allow one to construct a sinusoid of any arbitrary or desired phase, thus allowing the FFT result to be complete.
Magnitude alone can't tell the difference between a sine and cosine wave. An IFFT(imag(FFT)) would screw up the reconstruction of any signal with a different phase than pure cosines. Same with IFFT(re(FFT)) and pure sine waves (with respect to the FFT aperture window).
You can convert signal 1, which is a product of three cos functions, into a sum of four cos functions. This is the difference from signal 2, which is a sum of four sine functions.
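Concretely, applying the product-to-sum identity cos(a)*cos(b) = 1/2*[cos(a-b) + cos(a+b)] twice gives
1*cos(2*pi*t) * 2*cos(2*pi*4*t) * 0.5*cos(2*pi*0.5*t) = 0.25*[cos(2*pi*2.5*t) + cos(2*pi*3.5*t) + cos(2*pi*4.5*t) + cos(2*pi*5.5*t)],
i.e. the same frequencies and amplitudes as signal 2, but with cosines instead of sines.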
A cos function is an even function: cos(-x) == cos(x).
The Fourier transform of an even function is purely real.
That is the reason why the plot of the imaginary part of the fft of function 1 contains only values close to zero (~1e-15).
A sine function is an odd function: sin(-x) == -sin(x).
The Fourier transform of an odd function is purely imaginary.
That is the reason why the plot of the real part of the fft of function 2 contains only values close to zero (~1e-15).
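A quick numerical check of this symmetry claim (illustrative, using the same sampling as above):
import numpy as np
t = np.arange(512) / 128.0
even = np.cos(2*np.pi*t)  # even function: cos(-x) == cos(x)
odd = np.sin(2*np.pi*t)   # odd function: sin(-x) == -sin(x)
print(np.abs(np.fft.fft(even).imag).max())  # on the order of 1e-13, essentially zero
print(np.abs(np.fft.fft(odd).real).max())   # on the order of 1e-13, essentially zero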
If you want to understand the FFT and DFT in more detail, read a textbook on signal analysis for electrical engineering.
Although... you must be a good expert by now :)
For others: please notice that signal 1 expands into a sum of four cosine terms (see the product-to-sum expansion above), not sines, so the sum should be corrected as:
signal1 = \
    1   * np.cos(2*np.pi * t) * \
    2   * np.cos(2*np.pi * 4*t) * \
    0.5 * np.cos(2*np.pi * 0.5*t)
signal2 = \
    0.25*np.cos(-2*np.pi * 2.5*t) + \
    0.25*np.cos(2*np.pi * 3.5*t) + \
    0.25*np.cos(-2*np.pi * 4.5*t) + \
    0.25*np.cos(2*np.pi * 5.5*t)
With this corrected signal 2, the two sets of plots now match. (Result plots omitted.)
The point is that now the real parts agree as well, not just the magnitudes.

comparing two frequency spectra

I'm trying to compare two frequency spectra, but I am confused on a number of points.
One device samples at 40 Hz, the other at 100 Hz, so I'm not sure whether I need to take this into account. Anyway, I have produced frequency spectra from both devices and now I wish to compare them. How can I compute a correlation at each point, so that I get a Pearson correlation per frequency? I know how to compute an overall one, of course, but I want to see where the spectra correlate strongly and where they correlate less strongly.
If you are calculating power spectral densities P(f), then it doesn't matter how your original signal x(t) is sampled. You can directly and quantitatively compare both spectra. To make sure that you have calculated proper spectral densities, you can explicitly check Parseval's theorem:
$ \int P(f)\,df = \int x(t)^2\,dt $
Of course, you have to think about which frequencies are actually evaluated. Remember that an FFT gives you frequencies from f = 1/T up to (or just below) the Nyquist frequency f_ny = 1/(2 dt), depending on whether the number of samples in x(t) is even or odd.
Here's a Python example for the PSD:
import numpy as np

def psd(x, dt=1.):
    """Computes the one-sided power spectral density of x.

    The PSD is estimated via abs**2 of the Fourier transform of x.
    Takes care of an even or odd number of elements in x:
    - if len(x) is even, both f=0 and the Nyquist frequency appear once
    - if len(x) is odd, f=0 appears once and the Nyquist frequency does not appear
    Note that no tapers are applied: this may lead to leakage!
    Parseval's theorem (variance of the time series equals the integral
    over the PSD) holds and can be checked via
        print(np.var(x), np.sum(Px * f[1]))
    Accordingly, the estimated PSD is independent of the time series length.
    Author/date: M. von Papen / 16.03.2017
    """
    N = np.size(x)
    xf = np.fft.fft(x)
    Px = np.abs(xf)**2 / N * dt
    f = np.arange(N//2 + 1) / (N * dt)
    # Double the power at positive frequencies (except f=0 and, for even N,
    # the Nyquist frequency) to account for the discarded negative half
    if np.mod(N, 2) == 0:
        Px[1:N//2] = 2. * Px[1:N//2]
    else:
        Px[1:N//2 + 1] = 2. * Px[1:N//2 + 1]
    # Take the one-sided spectrum
    Px = Px[0:N//2 + 1]
    return Px, f
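Illustrative usage, checking Parseval's theorem as the docstring suggests (white noise input; the equality is approximate because the DC bin carries the mean):
x = np.random.randn(1000)
Px, f = psd(x, dt=0.01)
print(np.var(x), np.sum(Px) * f[1])  # the two numbers should nearly agree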