Using vDSP_biquad as a one-pole filter (Accelerate framework)

I'd like to be able to use the vDSP_biquad function as a one pole filter.
My one-pole filter looks like this:
output[i] = onePole->z1 = input[i] * onePole->a0 + onePole->z1 * onePole->b1;
where
b1 = exp(-2.0 * M_PI * (_frequency / sampleRate));
a0 = 1.0 - b1;
This one-pole filter works great, but of course it isn't optimized, which is why I'd like to use the Accelerate Framework to speed it up.
Because vDSP_biquad uses the Direct Form II biquad implementation, it seems to me I should be able to set the coefficients so that it acts as a one-pole filter. https://en.wikipedia.org/wiki/Digital_biquad_filter#Direct_form_2
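For reference, Direct Form II computes an intermediate state w[n] and then the output from it:
w[n] = x[n] - a1*w[n-1] - a2*w[n-2]
y[n] = b0*w[n] + b1*w[n-1] + b2*w[n-2]
so the a coefficients are the feedback (pole) terms and the b coefficients are the feedforward (zero) terms.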
filter->omega = 2 * M_PI * freq / sampleRate;
filter->b1 = exp(-filter->omega);
filter->b0 = 1 - filter->b1;
filter->b2 = 0;
filter->a1 = 0;
filter->a2 = 0;
However, this does not work as a one-pole filter. (The implementation of the biquad is fine; I use it for many other filter types. It's just that these coefficients don't have the desired effect.)
What am I doing wrong?
Also open to hearing other ways to optimize a one-pole filter with Accelerate or otherwise.

The formula in the Apple docs is:
y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
In your code above, you've put the pole coefficient in b1, which multiplies the previous input x[n-1]. For a one-pole, the feedback has to come from the previous output y[n-1], which means using a1.
So I think the coefficients you want are:
a1 = -exp(-2.0 * M_PI * (_frequency / sampleRate))
b0 = 1.0 + a1
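To make that concrete, here is a minimal sketch of feeding those coefficients to vDSP_biquad (onePoleLowpass is a made-up wrapper; in real code you'd create the setup once and reuse it across buffers instead of rebuilding it per call):
#include <Accelerate/Accelerate.h>
#include <math.h>

/* One-pole lowpass implemented as a single biquad section. */
void onePoleLowpass(const float *input, float *output, vDSP_Length n,
                    double frequency, double sampleRate)
{
    double p = exp(-2.0 * M_PI * frequency / sampleRate);
    /* vDSP coefficient order per section: b0, b1, b2, a1, a2.
       a1 = -p because the formula subtracts a1*y[n-1]. */
    double coeffs[5] = { 1.0 - p, 0.0, 0.0, -p, 0.0 };
    vDSP_biquad_Setup setup = vDSP_biquad_CreateSetup(coeffs, 1);
    float delay[4] = { 0 };  /* 2*M + 2 state values for M = 1 section */
    vDSP_biquad(setup, delay, input, 1, output, 1, n);
    vDSP_biquad_DestroySetup(setup);
}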

Related

Compute FOVX when FOVY > 180°

This is related to this question. The formula fieldOfViewX = 2 * atan(tan(fieldOfViewY * 0.5) * aspect) works, but when FOVY > 180° for fisheye cameras, it doesn't work anymore. Is it possible to adapt this formula to make it work?

Transform a vector to another frame of reference

I have a green vehicle which will shortly collide with a blue object (which is 200 units away from the cube).
It has a Kinect depth camera D at [-100,0,200] which sees the corner of the cube (grey sphere).
The measured depth is 464 at 6.34° in the X plane and 12.53° in the Y plane.
I want to calculate the position of the corner as it would appear to a camera F at [150,0,0]; in other words, transform the red vector into the yellow vector. I know that this is achieved with a transformation matrix, but I can't find out how to compute the matrix from the D-F vector [250,0,-200] or how to use it; my high-school maths dates back 40 years.
math.se has a similar question, but it doesn't cover my problem, and I can't find anything on robotics.se either.
I realise that I should show some code that I've tried, but I don't know where to start. I would be very grateful if somebody could help me to solve this.
The short answer is that ROS provides the tf library, which allows you to transform between frames. You can simply set a static transform between the pose of your camera and the pose of your desired location; then you can get the pose of any point detected by your camera in the reference frame of your desired point on your robot. ROS tf will do everything you need and everything I explain below.
The longer answer is that you need to construct a transformation tree. First, compute the static transformation between your two poses. A pose is a 7-dimensional transformation comprising a translation and an orientation, best represented as a 3D vector and a quaternion.
Now, for all poses in the reference frame of your Kinect, you must transform them to your desired reference frame. Let's call this frame base_link and your camera frame camera_link.
I'm going to go ahead and decide that base_link is the parent of camera_link. Technically these transformations are bidirectional, but because you may need a transformation tree, and because ROS cares about this, you'll want to decide who is the parent.
To convert rotation from camera_link to base_link, you need to compute the rotational difference. This can be done by multiplying the quaternion of base_link's orientation by the conjugate of camera_link's orientation. Here's a super quick Python example:
def rotDiff(self, q1: Quaternion, q2: Quaternion) -> Quaternion:
    """Finds the quaternion that, when applied to q1, will rotate an element to q2."""
    conjugate = Quaternion(q2.qx * -1, q2.qy * -1, q2.qz * -1, q2.qw)
    return self.rotAdd(q1, conjugate)

def rotAdd(self, q1: Quaternion, q2: Quaternion) -> Quaternion:
    """Finds the quaternion equivalent to the rotation caused by both input quaternions applied sequentially."""
    w1 = q1.qw
    w2 = q2.qw
    x1 = q1.qx
    x2 = q2.qx
    y1 = q1.qy
    y2 = q2.qy
    z1 = q1.qz
    z2 = q2.qz
    # Hamilton product q1 * q2
    w = w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2
    x = w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2
    y = w1 * y2 + y1 * w2 + z1 * x2 - x1 * z2
    z = w1 * z2 + z1 * w2 + x1 * y2 - y1 * x2
    return Quaternion(x, y, z, w)
Next, you need to add the vectors. The naive approach is to simply add them, but you need to account for rotation first; what you really need is a coordinate transformation. The position of camera_link relative to base_link is some 3D vector; based on your drawing, this is [-250, 0, 200]. Next, we need to reproject the vectors to your points of interest into the rotational frame of base_link. That is, all the points your camera sees at 12.53 degrees that appear on the z = 0 plane to your camera are actually on a 12.53-degree plane relative to base_link, and you need to find their coordinates relative to your camera as if the camera had the same orientation as base_link.
For details on the ensuing math, read this PDF (particularly starting at page 9).
To accomplish this, we need to find your vector's components in base_link's reference frame. I find that it's easiest to read if you convert the quaternion to a rotation matrix, but there is an equivalent direct approach.
To convert a quaternion to a rotation matrix:
def Quat2Mat(self, q: Quaternion) -> RotMat:
    m00 = 1 - 2 * q.qy**2 - 2 * q.qz**2
    m01 = 2 * q.qx * q.qy - 2 * q.qz * q.qw
    m02 = 2 * q.qx * q.qz + 2 * q.qy * q.qw
    m10 = 2 * q.qx * q.qy + 2 * q.qz * q.qw
    m11 = 1 - 2 * q.qx**2 - 2 * q.qz**2
    m12 = 2 * q.qy * q.qz - 2 * q.qx * q.qw
    m20 = 2 * q.qx * q.qz - 2 * q.qy * q.qw
    m21 = 2 * q.qy * q.qz + 2 * q.qx * q.qw
    m22 = 1 - 2 * q.qx**2 - 2 * q.qy**2
    result = [[m00, m01, m02], [m10, m11, m12], [m20, m21, m22]]
    return result
Now that your rotation is represented as a rotation matrix, it's time to do the final calculation.
Following the MIT lecture notes from my link above, I'll arbitrarily name the vector to your point of interest from the camera A.
Find the rotation matrix that corresponds with the quaternion that represents the rotation between base_link and camera_link and simply perform a matrix multiplication. If you're in Python, you can use numpy to do this, but in the interest of being explicit, here is the long form of the multiplication:
def coordTransform(self, M: RotMat, A: Vector) -> Vector:
    """
    M is the rotation matrix that represents the rotation between my frames.
    A is the vector of interest in the frame I'm rotating from.
    APrime is A, but in the frame I'm rotating to.
    """
    APrime = []
    for i in range(3):
        # Each output component is the dot product of row i of M with A.
        APrime.append(A[0] * M[i][0] + A[1] * M[i][1] + A[2] * M[i][2])
    return APrime
Now, the vectors from camera_link are represented as if camera_link and base_link share an orientation.
Now you may simply add the static translation between camera_link and base_link (or subtract base_link -> camera_link) and the resulting vector will be your point's new translation.
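For example, with the numbers from your drawing: if the rotated vector from the previous step is APrime, the corner's position in base_link is simply APrime + [-250, 0, 200], the translation of camera_link in base_link from above.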
Putting it all together, you can now gather the translation and orientation of every point your camera detects relative to any arbitrary reference frame to gather pose data relevant to your application.
You can put all of this together into a function simply called tf() and stack these transformations up and down a complex transformation tree. Simply add all the transformations up to a common ancestor and subtract all the transformations down to your target node in order to find the transformation of your data between any two arbitrary related frames.
Edit: Hendy pointed out that it's unclear what Quaternion() class I refer to here.
For the purposes of this answer, this is all that's necessary:
class Quaternion():
    def __init__(self, qx: float, qy: float, qz: float, qw: float):
        self.qx = qx
        self.qy = qy
        self.qz = qz
        self.qw = qw
But if you want to make this class super handy, you can define __mul__(self, other: Quaternion) and __rmul__(self, other: Quaternion) to perform quaternion multiplication (order matters, so make sure to do both!). conjugate(self), toEuler(self), toRotMat(self), and normalize(self) may also be handy additions.
Note that due to quirks in Python's typing, the above other: Quaternion is only for clarity. You'll need a longer-form if type(other) != Quaternion: raise TypeError('You can only multiply quaternions with other quaternions') error-handling block to make that into valid Python :)
The following definitions are not necessary for this answer, but they may prove useful to the reader.
import numpy as np

def __mul__(self, other):
    if type(other) != Quaternion:
        print("Quaternion multiplication only works with other quats")
        raise TypeError
    # Hamilton product: scalar part r1*r2 - v1.v2,
    # vector part r1*v2 + r2*v1 + v1 x v2.
    r1 = self.qw
    r2 = other.qw
    v1 = [self.qx, self.qy, self.qz]
    v2 = [other.qx, other.qy, other.qz]
    rPrime = r1 * r2 - np.dot(v1, v2)
    vPrimeA = np.multiply(r1, v2)
    vPrimeB = np.multiply(r2, v1)
    vPrimeC = np.cross(v1, v2)
    vPrimeD = np.add(vPrimeA, vPrimeB)
    vPrime = np.add(vPrimeD, vPrimeC)
    x = vPrime[0]
    y = vPrime[1]
    z = vPrime[2]
    w = rPrime
    return Quaternion(x, y, z, w)

def __rmul__(self, other):
    if type(other) != Quaternion:
        print("Quaternion multiplication only works with other quats")
        raise TypeError
    # Same product with the operand order swapped (other * self).
    r1 = other.qw
    r2 = self.qw
    v1 = [other.qx, other.qy, other.qz]
    v2 = [self.qx, self.qy, self.qz]
    rPrime = r1 * r2 - np.dot(v1, v2)
    vPrimeA = np.multiply(r1, v2)
    vPrimeB = np.multiply(r2, v1)
    vPrimeC = np.cross(v1, v2)
    vPrimeD = np.add(vPrimeA, vPrimeB)
    vPrime = np.add(vPrimeD, vPrimeC)
    x = vPrime[0]
    y = vPrime[1]
    z = vPrime[2]
    w = rPrime
    return Quaternion(x, y, z, w)

def conjugate(self):
    return Quaternion(self.qx * -1, self.qy * -1, self.qz * -1, self.qw)

Using a loop to change color of pixels according to calculations

I am just starting to learn Jython, and I have a question which I cannot seem to get right.
From my text, I am to create a picture that is 640 x 480 pixels and then, using a loop, set each pixel's color, pixel by pixel, to a calculation for r, g, b which we have already been given.
I can create a picture and I can set variables; however, I cannot seem to go any further in creating a loop to set each pixel colour.
I know it's only simple, but I'm just wondering if anyone can help me out here.
xrange() will create a lazy sequence which yields the integers in a range. for will loop once per element of an iterable.
for row in xrange(480):
    for col in xrange(640):
        ...
This may help you to iterate through the pixels. Note that radian, id, and StringID come from your assignment and aren't defined in this snippet, so placeholders are stubbed in below; substitute your own values.
from math import sin, cos, pi

radian = pi / 180.0          # assumed: converts degrees to radians
id = [0, 1, 2, 3, 4, 5, 6]   # placeholder digits from your ID
StringID = "A"               # placeholder string from your ID

# Make an empty picture (640 x 480 per your assignment) and get its pixels.
picture = makeEmptyPicture(640, 480)
pixels = getPixels(picture)
for px in pixels:
    x = getX(px)
    y = getY(px)
    # Compute this pixel's color from its coordinates.
    r = (sin(x * radian * id[1]) * cos(y * radian * id[4]) + 1) * ord(StringID[0]) * 2.5
    g = (sin(x * radian * id[2]) * cos(y * radian * id[5]) + 1) * ord(StringID[0]) * 2.5
    b = (sin(x * radian * id[3]) * cos(y * radian * id[6]) + 1) * ord(StringID[0]) * 2.5
    newColor = makeColor(255 - r, 255 - g, 255 - b)
    setColor(px, newColor)
show(picture)
repaint(picture)

Creating the % functionality in calculator

I am doing a simple calculator in Xcode. I have done the basic functions such as +, /, -, 1/x, and √. Now I am trying to do the % functionality, but I'm not sure how to start and need some guidance.
Edited:
5/8% = 62.5
5x8% = 0.4
5+8% = 5.4
5-8% = 4.6
It should be able to handle different operations before it; that's why I am confused.
If you mean the modulo operator:
For integers simply use
int result = a % b;
For floats use fmod (math.h)
float result = fmod(a, b);
If you mean the percent operator, you have to remember that taking X percent of something is the same as multiplying it by X/100:
float result = a * X / 100.f; // result will be X percent of a.
EDIT (to answer your edited question):
That's not how a percentage operator works on any calculator I know. The percentage sign is just a division by 100. So in your example:
5/8% = 5/8/100 = 0.00625
5x8% = 5x8/100 = 0.4
5+8% = 5+8/100 = 5.08
5-8% = 5-8/100 = 4.92
I think what you mean by 5+8% is actually 5+5x8% or 5+5x8/100.
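If you do want to reproduce exactly the behavior in your edited examples, one consistent rule that matches all four of them is: treat b% as b/100 when the pending operation is x or /, and as b percent of the running total when it is + or -. A quick sketch of that rule in C (percentOperand is a made-up helper):
#include <stdio.h>

/* b was entered followed by the % key; a is the running total and
   op is the pending operator. Returns the operand to actually use. */
double percentOperand(double a, double b, char op)
{
    if (op == '+' || op == '-')
        return a * b / 100.0;   /* b percent of the running total */
    return b / 100.0;           /* plain b% */
}

int main(void)
{
    printf("%g\n", 5.0 / percentOperand(5.0, 8.0, '/'));  /* 62.5 */
    printf("%g\n", 5.0 * percentOperand(5.0, 8.0, '*'));  /* 0.4  */
    printf("%g\n", 5.0 + percentOperand(5.0, 8.0, '+'));  /* 5.4  */
    printf("%g\n", 5.0 - percentOperand(5.0, 8.0, '-'));  /* 4.6  */
    return 0;
}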

Normal Distribution Best Approach

I'm trying to build a simple program to price call options using the Black-Scholes formula http://en.wikipedia.org/wiki/Black%E2%80%93Scholes. I'm trying to figure out the best way to get probabilities from a normal distribution. For example, if I were to do this by hand and got the value d1 = 0.43, then I'd look up 0.43 in this table http://www.math.unb.ca/~knight/utility/NormTble.htm and get the value 0.6664.
I believe that there are no functions in C or Objective-C to find the normal distribution. I'm thinking about creating a two-dimensional array and looping through it until I find the desired value, or maybe defining 300 doubles with the corresponding values and looping through those until I get the appropriate result. Any thoughts on the best approach?
You need to define what it is you are looking for more clearly. Based on what you posted, it appears you are looking for the cumulative distribution function, P(d < d1), where d1 is measured in standard deviations and d is normally distributed: by your example, if d1 = 0.43 then P(d < d1) = 0.6664.
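Formally, that's the integral of the Gaussian density up to d1:
P(d < d1) = 1/(sigma*sqrt(2*pi)) * integral from -infinity to d1 of exp(-t^2/(2*sigma^2)) dt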
The function you want is called the error function erf(x) and there are some good approximations for it.
Apparently erf(x) is part of the standard math.h in C. (I'm not sure about Objective-C, but I assume it probably has it as well.)
But erf(x) is not exactly the function you need. The general form P(d < d1) can be calculated from erf(x) in the following formula:
P(d < d1) = f(d1, sigma) = (erf(d1/(sigma*sqrt(2))) + 1)/2
where sigma is the standard deviation. (In your case you can use sigma = 1.)
You can test this on Wolfram Alpha, for example: f(0.43, 1) = (erf(0.43/sqrt(2))+1)/2 = 0.666402, which matches your table.
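In C this is a one-liner (a small sketch; normalCDF is just a name I made up):
#include <math.h>
#include <stdio.h>

/* P(d < d1) for a normal distribution with standard deviation sigma. */
double normalCDF(double d1, double sigma)
{
    return (erf(d1 / (sigma * sqrt(2.0))) + 1.0) / 2.0;
}

int main(void)
{
    printf("%f\n", normalCDF(0.43, 1.0));  /* prints 0.666402 */
    return 0;
}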
There are two other things that are important:
If you are looking for P(d < d1) where d1 is large (greater in absolute value than about 3.0 * sigma), then you should really be using the complementary error function erfc(x) = 1 - erf(x), which tells you how close P(d < d1) is to 0 or 1 without running into numerical errors. For d1 < -3*sigma, P(d < d1) = (erf(d1/(sigma*sqrt(2))) + 1)/2 = erfc(-d1/(sigma*sqrt(2)))/2. For d1 > 3*sigma, P(d < d1) = (erf(d1/(sigma*sqrt(2))) + 1)/2 = 1 - erfc(d1/(sigma*sqrt(2)))/2 -- but don't actually compute that; instead leave it as 1 - K, where K = erfc(d1/(sigma*sqrt(2)))/2. For example, if d1 = 5*sigma, then P(d < d1) = 1 - 2.866516*10^-7.
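In code, the large-d1 case would look something like this (a sketch, again with sigma = 1):
#include <math.h>
#include <stdio.h>

int main(void)
{
    double sigma = 1.0, d1 = 5.0 * sigma;
    /* K = P(d > d1); report K itself, since computing 1 - K would
       wash out the tail's significant digits. */
    double K = erfc(d1 / (sigma * sqrt(2.0))) / 2.0;
    printf("K = %e\n", K);  /* ~2.866516e-07 */
    return 0;
}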
If your programming environment doesn't have erf(x) built into the available libraries, you need a good approximation. (I thought I had an easy one to use, but I can't find it, and I think it was actually for the inverse error function.) I found this 1969 article by W. J. Cody which gives a good approximation for erf(x) if |x| < 0.5; it's better to use erf(x) = 1 - erfc(x) for |x| > 0.5. For example, say you want erf(0.2) ≈ 0.2227025892105 (from Wolfram Alpha); Cody says to evaluate x * R(x^2), where R is a rational function you can get from his table.
If I try this in JavaScript (coefficients from Table II of the Cody paper):
// use only for |x| <= 0.5
function erf1(x)
{
    var y = x*x;
    return x*(3.6767877 - 9.7970565e-2*y)/(3.2584593 + y);
}
then I get erf1(0.2) = 0.22270208866303123 which is pretty close, for a 1st-order rational function. Cody gives tables of coefficients for rational functions up to degree 4; here's degree 2:
// use only for |x| <= 0.5
function erf2(x)
{
    var y = x*x;
    return x*(21.3853322378 + 1.72227577039*y + 0.316652890658*y*y)
         / (18.9522572415 + 7.8437457083*y + y*y);
}
which gives erf2(0.2) = 0.22270258922638206, correct to 10 decimal places. The Cody paper also gives similar formulas for erfc(x) where |x| is between 0.5 and 4.0, and a third formula for erfc(x) where |x| > 4.0; you can check your results against Wolfram Alpha or known erfc(x) tables for accuracy if you like.
Hope this helps!