Fast Fourier Transform in Objective-C doesn't produce the expected output - objective-c

I have a method in Objective-C that receives an array of doubles and runs a Fast Fourier Transform on it, but the FFT output doesn't match what I expect.
Can someone help me? I don't know what I'm doing wrong.
This is my method, where fftLength is 4096:
-(double *)doFFT:(double *)data {
    double *fft = malloc((fftLength * 2) * sizeof(double));
    FFTSetupD fft_weights = vDSP_create_fftsetupD((log2((double)(fftLength))), kFFTRadix2);
    DSPDoubleSplitComplex fftData;
    //fftData.imagp = fftMagnitudes;
    fftData.imagp = (double *)malloc(fftLength * sizeof(double));
    fftData.realp = (double *)malloc(fftLength * sizeof(double));
    for (int i = 0; i < fftLength; i++) {
        fftData.realp[i] = (double)data[i];
        fftData.imagp[i] = (double)0.0;
    }
    vDSP_fft_zipD(fft_weights, &fftData, 1, log2((double)(fftLength)), FFT_FORWARD);
    for (int i = 0; i < fftLength * 2; i++) {
        fft[i] = fftData.realp[i];
        fft[i+1] = fftData.imagp[i];
    }
    return fft;
}
And this is part of my input data:
[0.0,2.092731423889438E-4, 8.858534436404497E-4,0.0013714427743574675,0.0012678166431137061,-9.650789019044481E-4,-0.002852548808273958,-0.005176802258252122,-0.007281581949909022,-0.00575977878132905,…]
And the result should be:
[21478.183372382526,0.0,-10190.412374839314,…]
But I'm not getting this.

This loop is wrong:
for (int i = 0; i < fftLength * 2; i++) {
    fft[i] = fftData.realp[i];
    fft[i+1] = fftData.imagp[i];
}
It runs i up to 2 * fftLength, so it reads past the end of realp and imagp (which each hold only fftLength doubles), and each iteration's fft[i+1] is overwritten by the next iteration's fft[i]. Assuming you want interleaved real/imaginary output data, it should be:
for (int i = 0; i < fftLength; i++) {
    fft[i * 2] = fftData.realp[i];
    fft[i * 2 + 1] = fftData.imagp[i];
}
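As a side note (not the cause of the mismatch): the method never releases its temporaries, and it rebuilds the FFT setup on every call even though vDSP setups are relatively expensive to create and are meant to be reused. A minimal cleanup sketch, assuming the caller keeps ownership of the returned fft buffer:

// After the interleaved copy, free the split-complex scratch buffers and
// the FFT setup; the caller remains responsible for free()ing fft.
free(fftData.realp);
free(fftData.imagp);
vDSP_destroy_fftsetupD(fft_weights);
return fft;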

Related

Calculating forward kinematics using the D-H matrix

I have a 6-DOF robot arm model:
[image: robot arm structure]
I want to calculate forward kinematics, so I use the D-H (Denavit-Hartenberg) matrix. The D-H parameters are:
// theta
static const std::vector<float> theta = {
    0, 0, 90.0f, 0, -90.0f, 0};
// d
static const std::vector<float> d = {
    380.948f, 0, 0, -560.18f, 0, 0};
// a
static const std::vector<float> a = {
    -220.0f, 522.331f, 80.0f, 0, 0, 94.77f};
// alpha
static const std::vector<float> alpha = {
    90.0f, 0, 90.0f, -90.0f, -90.0f, 0};
and the calculation:
glm::mat4 Robothand::armForKinematics() noexcept
{
    glm::mat4 pose(1.0f);
    float cos_theta, sin_theta, cos_alpha, sin_alpha;
    for (auto i = 0; i < 6; i++)
    {
        cos_theta = cosf(glm::radians(theta[i]));
        sin_theta = sinf(glm::radians(theta[i]));
        cos_alpha = cosf(glm::radians(alpha[i]));
        sin_alpha = sinf(glm::radians(alpha[i]));
        glm::mat4 Ai = {
            cos_theta, -sin_theta * cos_alpha,  sin_theta * sin_alpha, a[i] * cos_theta,
            sin_theta,  cos_theta * cos_alpha, -cos_theta * sin_alpha, a[i] * sin_theta,
            0,          sin_alpha,              cos_alpha,             d[i],
            0,          0,                      0,                     1 };
        pose = pose * Ai;
    }
    return pose;
}
The problem I have is that I can't get the correct result. For example, to calculate the transformation matrix from the first joint to the 4th joint, I change the for loop to i < 3, take the resulting pose matrix, and get the origin of the 4th coordinate system with pose * (0,0,0,1). But the result (380.948, 382.331, 0) doesn't seem correct, because the point should move along the x-axis, not the y-axis. I have read many books and materials about the D-H matrix, but I can't figure out what's wrong.
I have figured it out by myself. The real problem is that glm matrices are column-major, so a braced initializer fills the matrix column by column; my row-by-row literal was effectively the transpose of the D-H matrix I wanted. I changed the code and got the correct result:
for (int i = 0; i < joint_num; ++i)
{
    pose = glm::rotate(pose, glm::radians(degrees[i]), glm::vec3(0, 0, 1));
    pose = glm::translate(pose, glm::vec3(0, 0, d[i]));
    pose = glm::translate(pose, glm::vec3(a[i], 0, 0));
    pose = glm::rotate(pose, glm::radians(alpha[i]), glm::vec3(1, 0, 0));
}
then I can get the position by:
auto pos = pose * glm::vec4(x,y,z,1);
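For reference, here is a minimal sketch (not from the original post) of how the explicit D-H matrix from the question could be kept while respecting glm's column-major storage: write the literal row by row as before, then transpose it once.

#include <glm/glm.hpp>
#include <cmath>

// Build one D-H link transform A_i. The braced literal below is written
// row by row, so it is transposed once to match glm's column-major layout.
glm::mat4 dhTransform(float theta_deg, float d, float a, float alpha_deg)
{
    const float ct = std::cos(glm::radians(theta_deg));
    const float st = std::sin(glm::radians(theta_deg));
    const float ca = std::cos(glm::radians(alpha_deg));
    const float sa = std::sin(glm::radians(alpha_deg));

    glm::mat4 rowMajor = {
        ct, -st * ca,  st * sa, a * ct,
        st,  ct * ca, -ct * sa, a * st,
         0,       sa,       ca,      d,
         0,        0,        0,      1 };

    return glm::transpose(rowMajor); // rows of the literal become rows of the matrix
}

With that, pose = pose * dhTransform(theta[i], d[i], a[i], alpha[i]) should agree with the rotate/translate chain above, assuming degrees[i] is the same joint angle as theta[i].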

Bidirectional path tracing

I'm making a bidirectional path tracer and I have some trouble.
To be clear:
1) One point light
2) All objects are diffuse
3) All objects are spheres, even walls (they are very large)
4) NO MIS WEIGHTING
The light emission is a 3D vector. The BRDF of a sphere is a 3D vector. Hard coded.
In the main function below I generate the EyePath and the LightPath, then I connect them. At least I try to.
In this post I will talk about the main function, then the EyePath, then the LightPath. The connecting function will come up once the EyePath and LightPath are good.
First questions:
Is the generation of the first light point correct?
Do I need to weight this point by the emission of the light source, or is it just the emission? The two alternatives are in the commented lines where I fill the Vertices structure.
Do I need to translate fromLight in order to put it on the sphere?
The code below is excerpted from the main function. Above it there are two for loops going through all pixels. camera.o is the eye; cameraRayDir is the direction to the current pixel.
// The light path's starting point is at the same position as the light
Ray fromLight(Vec(0, 24.3, 0), Vec());
Sphere light = spheres[7];

#define PDF 0.15915494309 // 1 / (2 * PI)

for (int i = 0; i < samps; ++i)
{
    std::vector<Vertices> PathEye;
    std::vector<Vertices> PathLight;
    Vec cameraRayDir = cx * (double(x) / w - .5) + cy * (double(y) / h - .5) + camera.d;
    Ray rayEye(camera.o, cameraRayDir.norm());

    // Hemisphere oriented towards the top
    fromLight.d = generateRayInHemisphere(fromLight.o, Vec(0, 1, 0)).d;
    double f = clamp(n.dot(fromLight.d.norm()));

    Vertices vert;
    vert.d = fromLight.d;
    vert.x = fromLight.o;
    vert.id = 7;
    vert.cos = f;
    vert.n = Vec(0, 1, 0).norm();
    // this one?
    //vert.couleur = spheres[7].e * f / PDF;
    // or this one?
    vert.couleur = spheres[7].e;
    PathLight.push_back(vert);

    int sizeEye = generateEyePath(PathEye, rayEye, maxDepth);
    int sizeLight = generateLightPath(PathLight, fromLight, maxDepth);

    for (int s = 0; s < sizeLight; ++s)
    {
        for (int t = 1; t < sizeEye; ++t)
        {
            int depth = t + s - 1;
            if ((s == 0 && t == 0) || depth < 0 || depth > maxDepth)
                continue;
            pixelValue = pixelValue + connectPaths(PathEye, PathLight, s, t);
        }
    }
}
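(Aside, for context: the PDF define above is the density of a direction drawn uniformly over the hemisphere, p(omega) = 1 / (2 * PI) per steradian; a cosine-weighted sampler would instead have p(omega) = cos(theta) / PI. Which one applies depends on what generateRayInHemisphere actually does, which is not shown here.)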
For the EyePath I intersect the geometry, then compute the illumination according to the distance to the light. The colour is black if the point is in shadow.
Second question: for the eye path and the direct illumination, is the computation correct? In a lot of code I've seen people use the pdf even in direct illumination, but I'm only using a point light and spheres.
int generateEyePath(std::vector<Vertices>& v, Ray eye, int maxDepth)
{
    double t;
    int id = 0;
    Vertices vert;
    int RussianRoulette;

    while (v.size() <= maxDepth)
    {
        // Russian roulette termination
        if (distribRREye(generatorRREye) < 10)
            break;

        // Intersect all the geometry
        // id is the index of the intersected geometry in an array
        intersect(eye, t, id);
        const Sphere& obj = spheres[id];

        // Intersection point
        Vec x = eye.o + eye.d * t;
        // Normal
        Vec n = (x - obj.p).norm();

        Vec direction = light.p - x;
        // Shadow ray
        Ray RaytoLight = Ray(x, direction.norm());
        const float distance = direction.length();
        // Shadow test
        const bool visibility = intersect(RaytoLight, t, id);
        const Sphere& lumiere = spheres[id];
        float degree = clamp(n.dot((lumiere.p - x).norm()));

        // If the intersected geometry is not a light, the point is in shadow
        if (lumiere.e.x == 0)
        {
            vert.couleur = Vec();
        }
        else
        {
            // obj.c is the BRDF, lumiere.e is the emission
            vert.couleur = (obj.c).mult(lumiere.e / (distance * distance)) * degree;
        }

        vert.x = x;
        vert.id = id;
        vert.n = n;
        vert.d = eye.d.norm();
        vert.cos = degree;
        v.push_back(vert);

        eye = generateRayInHemisphere(x, n);
    }
    return v.size();
}
For the LightPath, for a given point I compute the colour from the previous vertex and the values at this point, like in common path tracing.
Third question: is the colour computation correct?
int generateLightPath(std::vector<Vertices>& v, Ray fromLight, int maxDepth)
{
    double t;
    int id = 0;
    Vertices vert;
    Vec previous;

    while (v.size() <= maxDepth)
    {
        // Russian roulette termination
        if (distribRRLight(generatorRRLight) < 10)
            break;

        previous = v.back().couleur;
        intersect(fromLight, t, id);
        // Intersected geometry
        const Sphere& obj = spheres[id];

        // Intersection point
        Vec x = fromLight.o + fromLight.d * t;
        // Normal
        Vec n = (x - obj.p).norm();
        double f = clamp(n.dot(fromLight.d.norm()));

        // obj.c is the BRDF
        vert.couleur = previous.mult(((obj.c / M_PI) * f) / PDF);
        vert.x = x;
        vert.id = id;
        vert.n = n;
        vert.d = fromLight.d.norm();
        vert.cos = f;
        v.push_back(vert);

        fromLight = generateRayInHemisphere(x, n);
    }
    return v.size();
}
For the moment I get this result.
[image: current result]
The connecting function will come once EyePath and LightPath are good.
Thank you all
Try the spherical reference scene mentioned in this paper. I think you can then work out most of your questions by yourself, since it has an analytical solution.
https://www.researchgate.net/publication/221546261_Testing_Monte-Carlo_Global_Illumination_Methods_with_Analytically_Computable_Scenes
It would save you time to implement and verify your understanding with path tracing and light tracing separately first, and then try to combine them with weights.
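(For when you get to the weighting step: the standard balance heuristic weights the contribution of a path built with strategy s as w_s = p_s / (p_0 + p_1 + ... + p_k), where p_i is the probability density with which strategy i would have generated that same path. This is a general note, not something specific to the code above.)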

controlP5: matrix/ 2D array for multiple RadioButton results

I'm trying to query a user to select from 4 or more sets of radio buttons, with 5 buttons in each set (Processing 2+). Where I'm having trouble is taking the array created by selecting from each set of buttons and using it to fill the columns of a matrix, so that the elements can be queried and the 2D array can be printed and ultimately written as a CSV or tab-delimited txt file.
import controlP5.*;

ControlP5 controlP5;
RadioButton c0;
RadioButton c1;
RadioButton c2;
RadioButton c3;
int cols = 5;
int rows = 4;
int[][] myArray = new int[cols][rows];

void setup() {
  size(600, 650);
  controlP5 = new ControlP5(this);
  c0 = controlP5.addRadioButton("ch0", 60, 60)
    .setSize(20, 20)
    .setItemsPerRow(5)
    .setSpacingColumn(50)
    .addItem("c03", 1)
    .addItem("c04", 2)
    .addItem("c05", 3)
    .addItem("c0AM", 4)
    .addItem("c0AF", 5)
    ;
  c1 = controlP5.addRadioButton("ch1", 60, 80)
    .setSize(20, 20)
    .setItemsPerRow(5)
    .setSpacingColumn(50)
    .addItem("c13", 1)
    .addItem("c14", 2)
    .addItem("c15", 3)
    .addItem("c1AM", 4)
    .addItem("c1AF", 5)
    ;
  c2 = controlP5.addRadioButton("ch2", 60, 100)
    .setSize(20, 20)
    .setItemsPerRow(5)
    .setSpacingColumn(50)
    .addItem("c23", 1)
    .addItem("c24", 2)
    .addItem("c25", 3)
    .addItem("c2AM", 4)
    .addItem("c2AF", 5)
    ;
  c3 = controlP5.addRadioButton("ch3", 60, 120)
    .setSize(20, 20)
    .setItemsPerRow(5)
    .setSpacingColumn(50)
    .addItem("c33", 1)
    .addItem("c34", 2)
    .addItem("c35", 3)
    .addItem("c3AM", 4)
    .addItem("c3AF", 5)
    ;
}

void draw() {
  background(0);
}
void controlEvent(ControlEvent theEvent) {
  if (theEvent.isGroup() && (theEvent.name().equals("ch0") || theEvent.name().equals("ch1") || theEvent.name().equals("ch2") || theEvent.name().equals("ch3"))) {
    println(theEvent.name());
    println(theEvent.arrayValue());
    //float t = float(theEvent.arrayValue());
    //int[][] = { {float getGroup(), float[] getArrayValue()}, {3,2,1,0}, {3,5,6,1}, {3,8,3,4} };
    //int cols = 4;
    //int rows = 5;
    //int[][] myArray = new int[cols][rows];
    // Two nested loops allow us to visit every spot in a 2D array.
    // For every column I, visit every row J.
    //for (int i = 0; i < cols; i++) {
    //  for (int j = 0; j < rows; j++) {
    //    myArray[i][j] = float(theEvent.arrayValue);
  }
}
You were mixing different things together. Also, you don't need to check the whole array and store that information; just update the clicked button. Here is a new version of your controlEvent:
void controlEvent(ControlEvent theEvent) {
  int cols = 4;
  int rows = 5;
  int[][] myArray = new int[cols][rows];
  switch (theEvent.getId()) {
    case 0:
      myArray[0][(int)theEvent.value()-1] = 1;
      break;
    case 1:
      myArray[1][(int)theEvent.value()-1] = 1;
      break;
    case 2:
      myArray[2][(int)theEvent.value()-1] = 1;
      break;
    case 3:
      myArray[3][(int)theEvent.value()-1] = 1;
      break;
  }
  println("==== " + theEvent.getId() + " ===");
  println(myArray[theEvent.getId()]);
}
To make this simple switch work, you need to add an ID to each of your radio buttons, like this:
c3 = controlP5.addRadioButton("ch3", 60, 120)
.setId(3)
.setSize(20, 20)
...
I don't know exactly how you want to use this array, so my implementation uses it as a local variable, which means it is re-created every time the event is called. That could be avoided by declaring the array as a global variable and then clearing only the updated column.

Accessing my malloc'd 2d array with [x][y]

I'm updating a class to use member variables instead of #defines to define the bounds of a 2D array. It used to look like:
#define kWidth 3
#define kHeight 100
NSUInteger fields[kWidth][kHeight];
Now kWidth and kHeight are ivars. I switched to malloc because I see no other choice, since the bounds can now change. The problem is that I cannot access the array using two subscripts ([][]); see my inline comments. I am sure I've malloc'd correctly; I've done this many times before, and under iOS. Why can't I access it this way?
self.kFieldsHeight = 100;
self.kFieldsWidth = 3;
NSUInteger **fields = (NSUInteger **)malloc(sizeof(NSUInteger) * self.kFieldsHeight * self.kFieldsWidth);
memset(fields, 0xFF, self.kFieldsWidth * self.kFieldsHeight * sizeof(NSUInteger));

//// Now with LLDB I can examine the array in one dimension
// p fields[0]   // 0xFFFFFFFF
// p fields[299] // 0xFFFFFFFF
// p fields[300] // 0xGARBAGE

//// THIS fails with "error: Couldn't dematerialize struct : Couldn't read a composite type from the target: gdb remote returned an error: E08"
// p fields[0][0]

// Thus this fails in my code
NSUInteger i = fields[0][0];
What's the deal?
Edit: (more detail)
I've also tried mallocing like this:
fields = (NSUInteger **)malloc(sizeof(NSUInteger *) * self.kFieldsHeight);
if (fields) {
    for (int i = 0; i < self.kFieldsHeight; i++) {
        fields[i] = (NSUInteger *)malloc(sizeof(NSUInteger) * self.kFieldsWidth);
    }
}
Edit: (even more detail). I swapped the width and height with the same results:
fields = (NSUInteger **)malloc(sizeof(NSUInteger *) * self.kFieldsWidth);
if (fields) {
    for (int i = 0; i < self.kFieldsWidth; i++) {
        fields[i] = (NSUInteger *)malloc(sizeof(NSUInteger) * self.kFieldsHeight);
    }
}
You need to do this:
NSUInteger **fields = malloc(self.kFieldsHeight * sizeof(NSUInteger *));
for (int i = 0; i < self.kFieldsHeight; i++) {
    fields[i] = malloc(self.kFieldsWidth * sizeof(NSUInteger));
    memset(fields[i], 0xFF, self.kFieldsWidth * sizeof(NSUInteger));
}
But this is not how most people do the job. Most developers use one big flat array instead of a 2D array and index it themselves, like this:
NSUInteger *fields = malloc(self.kFieldsHeight * self.kFieldsWidth * sizeof(NSUInteger));
fields[h * width + w] = pixel_value;
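If the two-subscript syntax is mainly what you miss, one option (a sketch, not part of the original answer; fieldAt is a hypothetical helper) is to hide the index arithmetic behind a tiny accessor:

// Hypothetical helper: returns a pointer to element (w, h) of a flat
// width-by-height buffer, so the element can be read or assigned.
static inline NSUInteger *fieldAt(NSUInteger *fields, NSUInteger width,
                                  NSUInteger w, NSUInteger h)
{
    return &fields[h * width + w];
}

// Usage, with width taken from the ivar:
NSUInteger width = self.kFieldsWidth;
*fieldAt(fields, width, 2, 99) = 42;
NSUInteger value = *fieldAt(fields, width, 2, 99);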
Try this:
fields = (NSUInteger **)malloc(sizeof(NSUInteger *) * self.kFieldsWidth);
if (fields) {
    for (int i = 0; i < self.kFieldsWidth; i++) {
        *(fields + i) = (NSUInteger *)malloc(sizeof(NSUInteger) * self.kFieldsHeight);
    }
}
There is no real reason for doing it this way; it's just another way of saying that a[i] is *(a + i).
I'm using my C knowledge here, not Objective-C. Hope I can help.

What's the most efficient way to access 2D seismic data

Can anyone tell me the most efficient/performant method to access 2D seismic data using Ocean?
For example, if I need to perform a calculation using data from three 2D seismic lines (all with the same geometry), is this the most efficient way?
for (int j = 0; j < seismicLine1.NumSamplesJK.I; j++)
{
    ITrace trace1 = seismicLine1.GetTrace(j);
    ITrace trace2 = seismicLine2.GetTrace(j);
    ITrace trace3 = seismicLine3.GetTrace(j);
    for (int k = 0; k < seismicLine1.NumSamplesJK.J; k++)
    {
        double sum = trace1[k] + trace2[k] + trace3[k];
    }
}
Thanks
A follow-up to @Keith's suggestion: with .NET 4 his code could be refactored into a generic helper:
public static IEnumerable<Tuple<T1, T2, T3>> TuplesFrom<T1, T2, T3>(IEnumerable<T1> s1, IEnumerable<T2> s2, IEnumerable<T3> s3)
{
    using (var e1 = s1.GetEnumerator())
    using (var e2 = s2.GetEnumerator())
    using (var e3 = s3.GetEnumerator())
    {
        while (true)
        {
            bool m1 = e1.MoveNext(), m2 = e2.MoveNext(), m3 = e3.MoveNext();
            if (m1 && m2 && m3)
                yield return Tuple.Create(e1.Current, e2.Current, e3.Current);
            else if (m1 || m2 || m3)
                throw new ArgumentException("sequences of unequal lengths");
            else
                yield break;
        }
    }
}
Which gives:
foreach (var traceTuple in TuplesFrom(seismicLine1.Traces, seismicLine2.Traces, seismicLine3.Traces))
    for (int k = 0; k < maxK; ++k)
        sum = traceTuple.Item1[k] + traceTuple.Item2[k] + traceTuple.Item3[k];
What you have will work except for the two bugs I see, but it can also be made slightly faster. First, the bugs: your outer loop should test NumSamplesIJK.J, not .I, and your inner loop should test .K, not .J. The .I count is always 0 for 2D lines.
You can get a slight performance lift by minimizing the dereference of the NumSamplesIJK properties. Since the geometries are the same you should create a pair of variables for the J and K properties and use them.
int maxJ = seismicLine1.NumSamplesIJK.J;
int maxK = seismicLine1.NumSamplesIJK.K;
for (int j = 0; j < maxJ; j++)
    ...
    for (int k = 0; k < maxK; k++)
        ...
You might also consider using the Traces enumerator instead of calling GetTrace; it will process the data in ascending trace order. Unfortunately, with three lines the code is a bit harder to read.
int maxK = seismicLine1.NumSamplesIJK.K;
IEnumerator<ITrace> line2Traces = seismicLine2.Traces.GetEnumerator();
IEnumerator<ITrace> line3Traces = seismicLine3.Traces.GetEnumerator();
foreach (ITrace line1Trace in seismicLine1.Traces)
{
    line2Traces.MoveNext();
    line3Traces.MoveNext();
    ITrace line2Trace = line2Traces.Current;
    ITrace line3Trace = line3Traces.Current;
    for (int k = 0; k < maxK; k++)
    {
        double sum = line1Trace[k] + line2Trace[k] + line3Trace[k];
    }
}
I don't know what, if any, performance lift this might provide. You'll have to profile it to find out.
Good luck.