Applying forces at different points fails to create torque in pymunk - physics

I'm using pymunk to apply forces to a circular body at the two ends of one of its diameters. The forces have different magnitudes, and neither has an x-component (relative to the body, that is), so both are perpendicular to the diameter. I would expect these forces together to rotate the body to some degree, but instead they just add up into a single force vector with no x-component and a y-component (so, again, perpendicular to the diameter) that is simply the two magnitudes combined.
Is pymunk just unable to calculate the resultant rotation from multiple forces applied at separate points on a body? Since that is the only reason I'm even using a physics engine at all, I would be extremely disappointed if that were the case. I would appreciate any help with this problem. Thank you in advance.

pymunk should be able to calculate the rotation unless I misunderstand the question. Check this example:
>>> from pymunk import Space, Body, Circle
>>> s = Space()
>>> b = Body(1, 100)
>>> c = Circle(b,10)
>>> s.add(b,c)
>>> b.apply_impulse((100,0), (0,10))
>>> b.apply_impulse((-50,0), (0,-10))
>>> s.step(.1)
>>> b.angle
-1.5
>>> b.position
Vec2d(5.0, 0.0)
>>> s.step(.1)
>>> b.angle
-3.0
>>> b.position
Vec2d(10.0, 0.0)
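In newer pymunk versions the impulse call is, as far as I know, named apply_impulse_at_local_point instead of apply_impulse. Here is a sketch of the same check using that name (my adaptation, not part of the original answer); it should show the same kind of spin:
import pymunk

s = pymunk.Space()
b = pymunk.Body(1, 100)        # mass 1, moment of inertia 100
c = pymunk.Circle(b, 10)       # radius 10
s.add(b, c)

# two impulses of different magnitude at opposite ends of a diameter
b.apply_impulse_at_local_point((100, 0), (0, 10))
b.apply_impulse_at_local_point((-50, 0), (0, -10))

s.step(0.1)
print(b.angle, b.angular_velocity)   # nonzero: the off-centre impulses do create spin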

Projecting a vector in a given plane using numpy

Using numpy, how can I compute the orthogonal projection of, for example, the vector np.array([0.3,0.5,0.2]) onto the plane 3x+2y-2z=0?
EDIT:
I think one may simply use numpy.linalg.lstsq to find the orthogonal projection?
Your hyperplane is defined by the set of x such that <a,x>=0, where a is a vector orthogonal to the plane. In your example,
a = (3,2,-2).
Then the projection of a point p onto the hyperplane is the point p_proj such that p - p_proj is orthogonal to the plane. This means that it is parallel to a, or in other words p - p_proj = lambda*a. So
p_proj = p - lambda*a.    (1)
Since p_proj is in the hyperplane, <p_proj, a> = 0, so taking the inner product of (1) with a gives
lambda = <p,a>/<a,a>.    (2)
Substituting (2) into (1), you get
Projection(p) = p_proj = p - (<p,a>/<a,a>)*a
which can be done easily in numpy using np.dot(v_1,v_2) wherever we encounter <v_1,v_2>:
import numpy as np

def projection(p, a):
    lambda_val = np.dot(p, a) / np.dot(a, a)
    return p - lambda_val * a
(Note that this is essentially one step of Gram-Schmidt orthogonalization.)
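Applied to the vector and plane from the question (a quick check I added; the printed values are rounded by numpy's default repr):
>>> import numpy as np
>>> p = np.array([0.3, 0.5, 0.2])
>>> a = np.array([3., 2., -2.])
>>> p_proj = projection(p, a)
>>> p_proj                                   # approximately (0.0353, 0.3235, 0.3765)
array([0.03529412, 0.32352941, 0.37647059])
>>> bool(np.isclose(np.dot(p_proj, a), 0))   # it satisfies 3x+2y-2z=0
True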

linalg.matrix_power(A,n) for a huge $n$ and a huge $A$

I'm trying to use linalg to find $P^{500}$ where $P$ is a 9x9 matrix, but Python displays the following:
Matrix full of inf
I think this is too much for this method, so my question is: is there another library to find $P^{500}$? Must I surrender?
Thank you all in advance
Use the eigendecomposition and then exponentiate the matrix of eigenvalues, like this. With a random matrix you still end up getting inf entries (here in the first column); I believe you cannot avoid that unless you control the matrix through its eigenvalues. In other words, your eigenvalues have to be bounded. You can generate a random matrix with prescribed eigenvalues via the Schur decomposition, putting the eigenvalues along the diagonal; this is a post I have about generating a matrix with given eigenvalues. That should be the way the method works anyway.
% Generate a random 9x9 matrix
n = 9;
A = randn(n);
[V,D] = eig(A);      % A = V*D*inv(V)
p = 500;
Dp = D^p;            % exponentiate the diagonal eigenvalue matrix
Ap = V*Dp/V;         % i.e. V*Dp*inv(V), which reconstructs A^p
Ap1 = mpower(A,p);   % direct matrix power, for comparison
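For reference, a rough NumPy version of the same idea (my sketch, not part of the original answer); it overflows in exactly the same circumstances, i.e. whenever some |eigenvalue|**500 exceeds the float64 range:
import numpy as np

n, p = 9, 500
A = np.random.randn(n, n)
w, V = np.linalg.eig(A)                       # A = V @ diag(w) @ inv(V)
Ap = V @ np.diag(w ** p) @ np.linalg.inv(V)   # A**p via the eigendecomposition
Ap_direct = np.linalg.matrix_power(A, p)      # repeated squaring, as in the question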
NumPy arrays have a homogeneous data type, and the maximum of the double-precision float type is
>>> import numpy as np
>>> np.finfo('d').max
1.7976931348623157e+308
>>> _**0.002
4.135322944991858
>>> np.array(4.135)**500
1.7288485271474026e+308
>>> np.array(4.136)**500
__main__:1: RuntimeWarning: overflow encountered in power
inf
So if, along the way, the inner products make the entries grow by more than roughly a factor of 4.135 per multiplication, the repeated product is going to blow up; and once it blows up, the next product gets multiplied with infinities, so more and more entries become inf until everything is inf.
Metahominid's suggestion certainly helps but it will not solve the issue if your eigenvalues are larger than this value. In general, you need to use specialized high-precision tools to get correct results.
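To give one concrete idea of what such a tool can look like (my own sketch, not part of either answer): exact rational arithmetic with Python's fractions module never overflows, at the price of speed and ever-growing numerators and denominators:
from fractions import Fraction

def mat_mult(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_power_exact(A, p):
    """Compute A**p by repeated squaring with exact Fraction entries."""
    n = len(A)
    result = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    base = [[Fraction(x) for x in row] for row in A]
    while p:
        if p & 1:
            result = mat_mult(result, base)
        base = mat_mult(base, base)
        p >>= 1
    return result

# e.g. an exact 500th power of a small stochastic matrix, with no overflow anywhere
P500 = mat_power_exact([[0.5, 0.5], [0.25, 0.75]], 500)
print(float(P500[0][0]))   # ~0.3333..., the stationary value 1/3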

Overlaying mixed effects model results with ggplot2

I have been having some difficulty in displaying the results from my lmer model within ggplot2. I am specifically interested in displaying predicted regression lines on top of observed data. The lmer model I am running on this (speech) data is here below:
lmer.declination <- lmer(zlogF0_m60~Center.syll*Tone + (1|Trial) + (1+Tone|Speaker) + (1|Utterance.num), data=data)
The dependent variable here is fundamental frequency (F0), normalized and averaged across the middle 60% of a syllable. The fixed effects are syllable number (Center.syll), counted backwards from the end of a sentence (e.g. -2 is the 3rd last syllable in the sentence). The data here is from a lexical tone language, so the Tone (all low tone /1/, all mid tone /3/, and all high tone /4/) is a discrete fixed effect. The experimental questions are whether F0 falls across the sentences for this language, if so, by how much, and whether tone matters. It was a bit difficult for me to think of a way to produce a toy data set here, but the data can be downloaded here (a 437K file).
In order to extract the model fits, I used the effects package and converted the output to a data frame.
ex <- Effect(c("Center.syll","Tone"),lmer.declination)
ex.df <- as.data.frame(ex)
I plot the data using ggplot2, with the following code:
t.plot <- ggplot(data, aes(factor(Center.syll), zlogF0_m60, group=Tone, color=Tone)) +
  stat_summary(fun.data = mean_cl_boot, geom = "smooth") +
  ylab("Normalized log(F0)") +
  xlab("Syllable number") +
  ggtitle("F0 change across utterances with identical level tones, medial 60% of vowel") +
  geom_pointrange(data=ex.df, mapping=aes(x=Center.syll, y=fit, ymin=lower, ymax=upper)) +
  theme_bw()
t.plot
This produces the following plot:
Predicted trajectories and observed trajectories
The predicted values appear to the left of the observed data, not overlaid on the data itself. Whatever I seem to try, I can not get them to overlap on the observed data. I would ideally like to have a single line drawn rather than a pointrange, but when I attempted to use geom_line, the default was for the line to connect from the upper bound of one point to the lower bound of the next (not at the median/midpoint). Thank you for your help.
(Edit: As the OP pointed out, he did in fact include a link to his data set. My apologies for implying that he didn't.)
First of all, you will have much better luck getting a helpful response if you provide a minimal, complete, and verifiable example (MVCE). Look here for information on how to best do that for R specifically.
Lacking your actual data to work with, I believe your problem is that you're factoring the x-axis for the stat_summary, but not for the geom_pointrange. I mocked up a toy example from the plot you linked to in order to demonstrate:
dat1 <- data.frame(x=c(-6:0, -5:0, -4:0),
                   y=c(-0.25, -0.5, -0.6, -0.75, -0.8, -0.8, -1.5,
                       0.5, 0.45, 0.4, 0.2, 0.1, 0,
                       0.5, 0.9, 0.7, 0.6, 1.1),
                   z=c(rep('a', 7), rep('b', 6), rep('c', 5)))
dat2 <- data.frame(x=dat1$x,
                   y=dat1$y + runif(18, -0.2, 0.2),
                   z=dat1$z,
                   upper=dat1$y + 0.3 + runif(18, -0.1, 0.1),
                   lower=dat1$y - 0.3 + runif(18, -0.1, 0.1))
Now, the following call gives me a result similar to the graph you linked to:
ggplot(dat1, aes(factor(x),      # note x being factored here
                 y, group=z, color=z)) +
  geom_line() +                  # (this is a place-holder for your stat_summary)
  geom_pointrange(data=dat2,
                  mapping=aes(x=x,   # but x not being factored here
                              y=y, ymin=lower, ymax=upper))
However, if I remove the factoring of the initial x value, I get the line and the point ranges overlaid:
ggplot(dat1, aes(x,              # no more factoring here
                 y, group=z, color=z)) +
  geom_line() +
  geom_pointrange(data=dat2,
                  mapping=aes(x=x, y=y, ymin=lower, ymax=upper))
Note that I still get the overlaid result if I factor both of the x-axes. The two simply have to be consistent.
Again, I can't stress enough how much it helps this entire process if you provide code we can copy/paste into an R session and see what you're seeing. Hopefully this helps you out, but it all goes more smoothly (and quickly) if you help us help you.

Verify that points lie on a grid of specified pitch

While I am trying to solve this problem in a context where numpy is used heavily (and therefore an elegant numpy-based solution would be particularly welcome) the fundamental problem has nothing to do with numpy (or even Python) as such.
The task is to create an automated test for an algorithm which is supposed to produce points distributed on a grid whose pitch is specified as an input to the algorithm. The absolute positions of the points do not matter, but their relative positions do. For example, following
collection_of_points = algorithm(data, pitch=[1.3, 1.5, 2])
collection_of_points should contain only points whose x-coordinates differ by multiples of 1.3, whose y-coordinates differ by multiples of 1.5 and whose z-coordinates differ by multiples of 2.
The test should verify that this condition is satisfied.
One thing that I have tried, which doesn't seem too ugly, but doesn't work is
points = algo(data, pitch=requested_pitch)

for p1, p2 in itertools.combinations(points, 2):
    distance_between_points = np.array(p2) - np.array(p1)
    assert np.allclose(distance_between_points % requested_pitch, 0)
[ Aside for those unfamiliar with python or numpy:
itertools.combinations(points, 2) is a simple way of iterating through all pairs of points
Arithmetic operations on np.arrays are performed elementwise, so np.array([5,6,7]) % np.array([2,3,4]) evaluates to np.array([1, 0, 3]) via np.array([5%2, 6%3, 7%4])
np.allclose checks whether all corresponding elements in the two input arrays are approximately equal, and numpy's broadcasting automatically treats the 0 passed in as the second argument as an all-zero array of the correct size
]
To see why the idea shown above fails, consider a desired pitch of 3 and two points which are separated by 8.9999999 in the relevant dimension. 8.9999999 % 3 is around 2.9999999, which is nowhere near the required 0.
In all of this, I can't help feeling that I'm missing something obvious or that I'm re-inventing some wheel.
Can you suggest an elegant way of writing such a check?
Change your assertion to the following (with x the pairwise separation and y the requested pitch):
np.all(np.logical_or(np.isclose(x % y, 0), np.isclose((x % y) - y, 0)))
If you want to make it more readable, you should functionalize the statement. Something like:
def is_multiple(x, y, rtol=1e-05, atol=1e-08):
    """
    Test if x is a multiple of y.
    """
    remainder = x % y
    is_zero = np.isclose(remainder, 0., rtol, atol)
    is_y = np.isclose(remainder, y, rtol, atol)
    return np.logical_or(is_zero, is_y)
And then:
assert np.all(is_multiple(distance_between_points, requested_pitch))
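A quick check with the failing case from the question (the example values below are mine):
>>> import numpy as np
>>> bool(is_multiple(8.9999999, 3))    # the case that defeated the plain allclose check
True
>>> bool(is_multiple(8.5, 3))
False
>>> is_multiple(np.array([3.8999999, 6.0]), np.array([1.3, 1.5]))
array([ True,  True])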

Numpy/Scipy irfft strange behaviour

I am resampling a real signal, and since I have at my disposal its fft from rfft, I want to use irfft(signal, new_length). But I can't seem to get it working.
This is a working code snippet that resamples a signal of length 4 using complex fft:
from numpy import array
from numpy.fft import fft, ifft
p=array([1.,2.2,4.,1.])
pk=fft(p)
pnew=ifft(pk,8)*(8./4.)
where the factor (8./4.) rescales from the original to the new length. You can check that pnew[::2]==p.
Now, when I try to apply the same strategy with real Fourier transform, I get the wrong result at the original points:
from numpy import array
from numpy.fft import rfft, irfft
p=array([1.,2.2,4.,1.])
pk=rfft(p)
pnew=irfft(pk,8)*(8./4.)
and I have pnew[::2]=[ 1.45, 1.75, 4.45, 0.55]!=p.
Does anybody have a clue of what is going on? I have tried using the routines from scipy, with the same result. The documentation itself discusses briefly how to do this, see here, bottom of the page
The documentation you link to says:
In other words, irfft(rfft(a), len(a)) == a to within numerical accuracy.
This is not the case if you do irfft(pk, 8)! The problem is due to the even number of samples and the symmetry of the Fourier transform, in addition to your padding. Note that there are no problems at all if len(p) is odd.
For a better understanding consider this:
>>> p = np.array([1.,2.2,4.,1.])
>>> np.fft.fft(p)
array([ 8.2+0.j , -3.0-1.2j, 1.8+0.j , -3.0+1.2j])
>>> np.fft.fftfreq(len(p))
array([ 0. , 0.25, -0.5 , -0.25]) # 0.5 only occurs once negative
>>> np.fft.rfft(p)
array([ 8.2+0.j , -3.0-1.2j, 1.8+0.j ])
>>> np.fft.rfftfreq(len(p)) # (not available in numpy 1.6.)
array([ 0. , 0.25, 0.5 ]) # 0.5 occurs, here positive, it does not matter
# also consider the odd length FFT
>>> np.fft.fftfreq(len(p)+1)
array([ 0. , 0.2, 0.4, -0.4, -0.2]) # 0.4 is in there twice.
# And consider that this gives the result you expect:
>>> symmetric_p = np.fft.rfft(p)
>>> symmetric_p[-1] /= 2
>>> np.fft.irfft(symmetric_p, 8)[::2]*(8./4.)
array([ 1. , 2.2, 4. , 1. ])
Which means, if you look closely: the FFT frequencies are not symmetric if the number of input samples is even; instead there is an extra negative frequency (which actually could just as well be a positive frequency, since it always has no phase shift).
Because you are padding (for no real reason?) to a different length, the RFFT suddenly has extra "room" for this frequency. So if you look at it from the FFT point of view, you add this normally only-once-occurring negative frequency also as a positive frequency (which basically means it goes in double). As shown with symmetric_p above, halving this frequency gives the expected result with padding (it would not give the expected result without padding).
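As a quick numerical sanity check of the "goes in double" explanation (my own addition, reusing the values from the question): at the original sample points, the error of the naive upsampling is exactly one extra copy of the Nyquist component of p:
>>> import numpy as np
>>> p = np.array([1., 2.2, 4., 1.])
>>> pk = np.fft.rfft(p)
>>> naive = np.fft.irfft(pk, 8)[::2] * (8./4.)
>>> nyquist_term = (pk[-1].real / len(p)) * np.array([1., -1., 1., -1.])
>>> np.allclose(naive - p, nyquist_term)
True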