Element-wise operations on real and imaginary data of fcomplex array? - complex-numbers

I have an array of type ILArray<fcomplex> that comes as output from an FFT function.
I want to further perform some math operations on the real and imaginary parts.
For example:
complexArray.realPart * 2 + complexArray.imaginaryPart * 4 ???

You have found the solution already; I'll put the answer here for the sake of completeness.
Using ILMath.real() and ILMath.imag() gives access to the real and imaginary parts of an ILArray. If you are operating on individual elements, using the properties .real and .imag of ILNumerics.fcomplex might be another option:
// create test array of complex elements
ILArray<fcomplex> C = ccomplex(ones<float>(1,10), -ones<float>(1,10));
// using real(), imag()
ILArray<float> a1 = 2 * real(C) + 4 * imag(C);
// using direct access to real / imag part of complex elements
foreach (var c in C) {
    float a2 = c.real * 2 + 4 * c.imag;
    // ...
}

Weird operator precedence and assignment behavior in Borland Turbo C++

I have to use Borland Turbo C++ for C programming in my college.
They say our examination board recommends it, so I have to use it.
The problem is that they gave this operator-precedence-related question:
int a=10,b=20,result;
result = ++a + b-- - a++ * b++ + a * ++b;
printf("result=%d",result);
printf("\n a=%d",a);
printf("\n b=%d",b);
Other compilers like gcc can't handle this, but Turbo C can, and it gives us:
result=32
a=12
b=21
I made a mistake on my test. My teacher tried to explain what's going on, but I am not convinced. Is this some kind of weird behavior of Turbo C, or was it totally fine with all compilers in the old days? If so, what are the steps to understand what is going on?
To solve this kind of problem, Turbo C proceeds as follows:
1) Consider the initial values of the variables used:
a = 10
b = 20
2) Count the pre-increments and pre-decrements for each variable and apply them, and store all post operations on a separate stack for each variable:
for variable a
pre-increments = 1, therefore change the value of a to 11
post operations = 1, stored on the stack
for variable b
pre-increments = 1, therefore change the value of b to 21
post operations = 2, stored on the stack
3) Now replace every pre and post expression with the current values of a and b:
result = 11 + 21 - 11 * 21 + 11 * 21;
result = 11 + 21;
result = 32;
4) Lastly, pop the stacks and apply the pending post operations to the variables:
a = 12
b = 21
This is the only way to solve this kind of problem; you can check the procedure with any question of the same kind and the result will come out the same. g++ fails because it probably does not resolve the variables in the same way, which is how the precedence error comes into the picture. It probably fails on ++ + and -- - because it cannot understand the increment and decrement operators there and forms ambiguous parse trees.

Hyperpriors for hierarchical models with Stan

I'm looking to fit a model to estimate multiple probabilities for binomial data with Stan. I was using beta priors for each probability, but I've been reading about using hyperpriors to pool information and encourage shrinkage on the estimates.
I've seen this example of defining the hyperprior in pymc, but I'm not sure how to do something similar with Stan:
@pymc.stochastic(dtype=np.float64)
def beta_priors(value=[1.0, 1.0]):
    a, b = value
    if a <= 0 or b <= 0:
        return -np.inf
    else:
        return np.log(np.power((a + b), -2.5))

a = beta_priors[0]
b = beta_priors[1]
With a and b then being used as parameters for the beta prior.
Can anybody give me any pointers on how something similar would be done with Stan?
To properly normalize that, you need a Pareto distribution. For example, if you want a distribution p(a, b) ∝ (a + b)^(-2.5), you can use
a + b ~ pareto(L, 1.5);
where a + b > L. There's no way to normalize the density with support for all values greater than or equal to zero---it needs a finite L as a lower bound. There's a discussion of using just this prior as the count component of a hierarchical prior for a simplex.
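To see why the Pareto shape is the right one: the Pareto(L, α) density is p(s) = α L^α / s^(α + 1) for s >= L, so with α = 1.5 and s = a + b it is proportional to (a + b)^(-2.5) on the support a + b > L, i.e. exactly the target density up to its normalizing constant.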
If a and b are parameters, they can either both be constrained to be positive, or you can leave a unconstrained and declare
real<lower = L - a> b;
to ensure a + b > L. L can be a small constant or something more reasonable given your knowledge of a and b.
You should be careful because this will not identify a + b. We use this construction as a hierarchical prior for simplexes as:
parameters {
  real<lower = 1> kappa;
  real<lower = 0, upper = 1> phi;
  vector<lower = 0, upper = 1>[K] theta;
}
model {
  kappa ~ pareto(1, 1.5);                        // power law prior
  phi ~ beta(a, b);                              // choose your prior for phi
  theta ~ beta(kappa * phi, kappa * (1 - phi));  // vectorized
}
There's an extended example in my Stan case study of repeated binary trials, which is reachable from the case studies page on the Stan web site (the case study directory is currently linked under the documentation link from the users tab).
Following suggestions in the comments, I'm not sure that I will follow this approach, but for reference I thought I'd at least post the answer to my question of how this could be accomplished in Stan.
After some asking around on the Stan Discourse forums and further investigation, I found that the solution was to specify a custom density and use the target += syntax. So the Stan equivalent of the pymc example would be:
parameters {
  real<lower=0> a;
  real<lower=0> b;
  real<lower=0,upper=1> p;
  ...
}
model {
  target += log((a + b)^-2.5);  // equivalent to target += -2.5 * log(a + b)
  p ~ beta(a, b);
  ...
}

Using vDSP_biquad as a one pole filter

I'd like to be able to use the vDSP_biquad function as a one pole filter.
My one pole filter looks like this:
output[i] = onePole->z1 = input[i] * onePole->a0 + onePole->z1 * onePole->b1;
where
b1 = exp(-2.0 * M_PI * (_frequency / sampleRate));
a0 = 1.0 - b1;
This one pole works great, but of course it's not optimized, which is why I'd like to use the Accelerate Framework to speed it up.
Because vDSP_biquad uses the Direct Form II of the biquad implementation, it seems to me I should be able to set the coefficients to use it as a one-pole filter. https://en.wikipedia.org/wiki/Digital_biquad_filter#Direct_form_2
filter->omega = 2 * M_PI * freq / sampleRate;
filter->b1 = exp(-filter->omega);
filter->b0 = 1 - filter->b1;
filter->b2 = 0;
filter->a1 = 0;
filter->a2 = 0;
However, this does not work as a one pole filter. (The implementation of biquad is fine, I use it for many other filter types, it's just these coefficients don't have the desired effect).
What am I doing wrong?
Also open to hearing other ways to optimize a one-pole filter with Accelerate or otherwise.
The formula in the Apple docs is:
y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
In your code above, you're using b1, which is a feedforward coefficient applied to a previous input, x[n-1]. For a one-pole filter you need feedback from the previous output, y[n-1], which means the coefficient to use is a1.
So I think the coefficients you want are:
a1 = -exp(-2.0 * M_PI * (_frequency / sampleRate))
b0 = 1.0 + a1
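In case it's useful, here is a minimal, untested sketch of feeding those coefficients to vDSP_biquad as a single section (the function name is just for illustration). It assumes the documented coefficient order b0, b1, b2, a1, a2 passed as doubles, and a delay array of 2*sections + 2 floats:
#include <Accelerate/Accelerate.h>
#include <math.h>

// One-pole lowpass via vDSP_biquad: y[n] = b0*x[n] - a1*y[n-1]
void onePoleBiquad(const float *input, float *output, vDSP_Length n,
                   double frequency, double sampleRate)
{
    double a1 = -exp(-2.0 * M_PI * (frequency / sampleRate));
    double coeffs[5] = { 1.0 + a1, 0.0, 0.0, a1, 0.0 };  // b0, b1, b2, a1, a2

    vDSP_biquad_Setup setup = vDSP_biquad_CreateSetup(coeffs, 1);  // one section
    float delay[4] = { 0.0f };  // 2*sections + 2 state values, zeroed

    vDSP_biquad(setup, delay, input, 1, output, 1, n);

    vDSP_biquad_DestroySetup(setup);
}
For block-based audio you would keep the setup object and the delay array alive between calls so the filter state carries over from one buffer to the next.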

Picking random binary flag

I have defined the following:
typedef enum {
    none = 0,
    alpha = 1,
    beta = 2,
    delta = 4,
    gamma = 8,
    omega = 16,
} Greek;
Greek t = beta | delta | gamma;
I would like to be able to pick one of the flags set in t at random. The value of t can vary (it could be anything from the enum).
One thought I had was something like this:
r = 0;
while ( !(t & (1 << r)) ) { r = rand() % 5; }
Anyone got any more elegant ideas?
If it helps, I want to do this in ObjC...
Assuming I've correctly understood your intent, if your definition of "elegant" includes table lookups, the following should do the trick pretty efficiently. I've written enough to show how it works but didn't fill out the entire table. Also, for Objective-C I recommend arc4random over rand.
First, construct an array whose indices are the possible t values and whose elements are arrays of t's underlying Greek values. I ignored none, but that's a trivial addition to make if you want it. I also found it easiest to specify the lengths of the subarrays. Alternatively, you could do this with NSArrays and have them self-report their lengths:
int myArray[8][4] = {
{0},
{1},
{2},
{1,2},
{4},
{4,1},
{4,2},
{4,2,1}
};
int length[] = {1,1,1,2,1,2,2,3};
Then, for any given t you can randomly select one of its elements using:
int r = myArray[t][arc4random_uniform(length[t])];
Once you get past the setup, the actual random selection is efficient, with no acceptance/rejection looping involved.
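If writing out all 32 rows for the full five-flag enum by hand feels error-prone, the same lookup table can be built programmatically at startup. A rough sketch (buildFlagTable and randomFlag are just illustrative names, and it assumes the flag values 1, 2, 4, 8, 16 from your enum):
#include <stdlib.h>

#define NUM_FLAGS 5
#define NUM_COMBOS (1 << NUM_FLAGS)

static int table[NUM_COMBOS][NUM_FLAGS];
static int tableLength[NUM_COMBOS];

// Fill the lookup table once: row t lists the flag values set in t.
static void buildFlagTable(void) {
    for (int t = 0; t < NUM_COMBOS; t++) {
        int n = 0;
        for (int bit = 0; bit < NUM_FLAGS; bit++) {
            if (t & (1 << bit)) {
                table[t][n++] = 1 << bit;
            }
        }
        tableLength[t] = n;
    }
}

// Pick a uniformly random flag that is set in t; returns 0 (none) if t is empty.
static int randomFlag(int t) {
    if (tableLength[t] == 0) return 0;
    return table[t][arc4random_uniform(tableLength[t])];
}
With t = beta | delta | gamma (i.e. 14), tableLength[14] is 3 and randomFlag(t) returns 2, 4 or 8 with equal probability.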

Shuffle data in a repeatable way (ability to get the same "random" order again)

This is the opposite of what most "random order" questions are about.
I want to select data from a database in random order. But I want to be able to repeat certain selects, getting the same order again.
Current (random) select:
SELECT custId, rand() as random from
(
SELECT DISTINCT custId FROM dummy
)
Using this, every key/row gets a random number. Ordering those ascending results in a random order.
But I want to repeat this select, getting the very same order again. My idea is to calculate a random number (r) once per session (e.g. "4") and use this number to shuffle the data in some way.
My first idea:
SELECT custId, custId * 4 as random from
(
SELECT DISTINCT custId FROM dummy
)
(in real life "4" would be something like 4005226664240702)
This results in a different number for each line but the same ones every run. By changing "r" to 5 all numbers will change.
The problem is: multiplication is not sufficient here. It just increases the numbers but keeps the order the same. Therefore I need some other kind of arithmetic function.
More abstract
Starting with my data (A-D). k is the key and r is the random number currently used:
k r
A = 1 4
B = 2 4
C = 3 4
D = 4 4
Doing some calculation using k and r in every line I want to get something like:
k r
A = 1 4 --> 12
B = 2 4 --> 13
C = 3 4 --> 11
D = 4 4 --> 10
The numbers can be anything, but when I order them ascending I want to get a different order than the initial one. In this case: D, C, A, B.
Setting r to 7 should result in a different order (C, A, B, D):
k r
A = 1 7 --> 56
B = 2 7 --> 78
C = 3 7 --> 23
D = 4 7 --> 80
Every time I use r = 7 should result in the same numbers => same order.
I'm looking for a mathematical function to do the calculation with k and r. Seeding the RAND() function is not suitable because it's not supported by some of the databases we support.
Please note that r is already a randomly generated number.
Background
One table, two data consumers. One consumer will get a random 5% of the table, the other one the remaining 95%. They don't just get the data but a generated SQL statement, so there are two SQL statements which must not select the same data twice but should still be random.
You could try and implement the Multiply-With-Carry PseudoRandomNumberGenerator. The C version goes like this (source: Wikipedia):
m_w = <choose-initializer>; /* must not be zero, nor 0x464fffff */
m_z = <choose-initializer>; /* must not be zero, nor 0x9068ffff */
uint get_random()
{
m_z = 36969 * (m_z & 65535) + (m_z >> 16);
m_w = 18000 * (m_w & 65535) + (m_w >> 16);
return (m_z << 16) + m_w; /* 32-bit result */
}
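As a self-contained reference (the seeds here are the same arbitrary example values used in the SQL session below), the generator with its state declared and a small driver:
#include <stdint.h>
#include <stdio.h>

/* Multiply-with-carry state; neither value may be zero
   (nor 0x464fffff / 0x9068ffff respectively). */
static uint32_t m_w = 8921;  /* example seed */
static uint32_t m_z = 1;     /* example seed */

static uint32_t get_random(void)
{
    m_z = 36969 * (m_z & 65535) + (m_z >> 16);
    m_w = 18000 * (m_w & 65535) + (m_w >> 16);
    return (m_z << 16) + m_w;  /* 32-bit result; unsigned wrap-around is intended */
}

int main(void)
{
    /* Re-seeding with the same values reproduces the same sequence,
       which is what makes the ordering repeatable. */
    for (int i = 0; i < 5; i++)
        printf("%u\n", get_random());
    return 0;
}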
In SQL, you could create a table Random, with two columns to contain w and z, and one ID column to identify each session. Perhaps your vendor supports variables and you need not bother with the table.
Nonetheless, even if we use a table, we immediately run into trouble because ANSI SQL doesn't support unsigned INTs. In SQL Server I could switch to BIGINT, though I'm unsure whether your vendor supports that.
CREATE TABLE Random (ID INT, [w] BIGINT, [z] BIGINT)
Initialize a new session, say number 3, by inserting 1 into z and the seed into w:
INSERT INTO Random (ID, w, z) VALUES (3, 8921, 1);
Then each time you wish to generate a new random number, do the computations:
UPDATE Random
SET
z = (36969 * (z % 65536) + z / 65536) % 4294967296,
w = (18000 * (w % 65536) + w / 65536) % 4294967296
WHERE ID = 3
(Note how I have replaced the bitwise operators with div and mod operations and how, after computing, you need to mod by 4294967296 to stay within the proper 32-bit unsigned int range.)
And select the new value:
SELECT (z * 65536 + w) % 4294967296
FROM Random
WHERE ID = 3
SQLFiddle demo
Not sure if this applies outside SQL Server, but typically when you use a RAND() function, you can specify a seed. Every time you specify the same seed, the randomization will be the same.
So, it sounds like you just need to store the seed number and use that each time to get the same set of random numbers.
MSDN Article on RAND
Each vendor has solved this in its own way. Creating your own implementation will be hard, since random number generation is difficult.
Oracle
dbms_random can be initialized with a seed: http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_random.htm#i998255
SQL Server
First call to RAND() can provide a seed: http://technet.microsoft.com/en-us/library/ms177610.aspx
MySql
First call to RAND() can provide a seed: http://dev.mysql.com/doc/refman/4.1/en/mathematical-functions.html#function_rand
Postgresql
Use SET SEED or SELECT setseed() : http://www.postgresql.org/docs/8.3/static/sql-set.html