Best (and fastest) way to store triangles and lines in C++?

I've got a few 3D apps going, and I was wondering, what is the best way to store lines and triangles? At the moment, I have lines as an array of typedef'd vectors as such:
typedef struct
{
    float x, y, z;
} Vector;

Vector line[2];
Now, I could do it like this:
typedef struct
{
    Vector start, end;
} Line;

Line lineVar;
Faces could be similar:
typedef struct
{
    Vector v1, v2, v3;
} Face;

Face faceVar;
My question is this: Is there a better or faster way to store lines and faces? Or am I doing it OK?
Thanks,
James

What you have is pretty much how vectors are represented in computer programs. I can't imagine any other way to do it. This is perfectly fine:
typedef struct
{
    float x, y, z;
} Vector;
(DirectX stores vector components like this, by the way.)
However, 3D-intensive programs typically have the faces index into a shared vector array to save space, since the same points often appear on different faces of a 3D model:
typedef struct
{
    int vectorIndex1, vectorIndex2, vectorIndex3;
} Face;
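To make the saving concrete: a cube has 8 corner positions shared by 12 triangles. Stored as independent faces, that is 36 Vectors; with indexing it is 8 Vectors plus 36 small integers. A minimal sketch of how the two arrays fit together (the Mesh type and helper function are illustrative, not part of the answer above):

typedef struct
{
    Vector* vertices;   /* shared pool of unique points */
    int vertexCount;
    Face* faces;        /* each face stores three indices into vertices */
    int faceCount;
} Mesh;

/* Fetch the three corner positions of face f. */
void getFaceCorners(const Mesh* m, int f, Vector out[3])
{
    out[0] = m->vertices[m->faces[f].vectorIndex1];
    out[1] = m->vertices[m->faces[f].vectorIndex2];
    out[2] = m->vertices[m->faces[f].vectorIndex3];
}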

Related

CGAL: problem accessing the neighbors of every vertex using edge iterator in periodic triangulation

I am using periodic Delaunay triangulation in CGAL in my code, and for each vertex I produce all neighboring vertices. For this I use the edge iterator, since in my case it is much faster than the vertex iterator.
Here is the code snippet:
typedef CGAL::Exact_predicates_inexact_constructions_kernel Kernel;
typedef CGAL::Periodic_2_triangulation_traits_2<Kernel> Gt;
typedef CGAL::Triangulation_vertex_base_with_info_2<unsigned int, Gt> Vb;
typedef CGAL::Periodic_2_triangulation_face_base_2<Gt> Fb;
typedef CGAL::Triangulation_data_structure_2<Vb, Fb> Tds;
typedef CGAL::Periodic_2_Delaunay_triangulation_2<Gt, Tds> Triangulation;
typedef Triangulation::Iso_rectangle Iso_rectangle;
typedef Triangulation::Edge_iterator Edge_iterator;
typedef Triangulation::Vertex_handle Vertex_handle;
typedef Triangulation::Point Point;
typedef std::vector<std::pair<Point, unsigned> > Vector_Paired;

// L (domain size), N (point count), r_tot (coordinates) and
// VecInd (per-vertex neighbor lists) are defined elsewhere.
Vector_Paired points;
Iso_rectangle domain(0, 0, L, L);

for (int iat = 0; iat < N; iat++)
{
    points.push_back(std::make_pair(Point(r_tot[iat][0], r_tot[iat][1]), iat));
}

Triangulation T(points.begin(), points.end(), domain);

for (Edge_iterator ei = T.finite_edges_begin(); ei != T.finite_edges_end(); ei++)
{
    // An edge is a (face, index) pair; its endpoints are the cw/ccw
    // vertices of that face around the index.
    Triangulation::Face& f = *(ei->first);
    int ii = ei->second;
    Vertex_handle vi = f.vertex(f.cw(ii));
    Vertex_handle vj = f.vertex(f.ccw(ii));
    int iat = vi->info();
    int jat = vj->info();
    VecInd[iat].push_back(jat);
    VecInd[jat].push_back(iat);
}
But sometimes, instead of a single entry for each neighbor, I get 8 or 9 copies of the same neighbor.
For example, in VecInd, which is a 2D vector containing the neighboring indices, I get something like this:
VecInd[0] = [2,2,2,2,4,4,4,...]
I couldn't find an example using the edge iterator on the CGAL website, and nothing related on Stack Overflow.
Is this implementation correct? What should I add to my code in order to get one copy per neighbor? I could use std::set, but I would like to know the source of the problem.
Here is the answer that was posted on the CGAL-discuss mailing-list, by Mael:
If your point set is not geometrically well spaced, it's possible that the triangulation of these points does not form a simplicial complex over the flat torus (in other words, there are short cycles in the triangulation). In this case, the algorithm uses 8 copies of the triangulation to artificially create a simplicial complex. You can check if this is the case using the function is_triangulation_in_1_sheet() and read more about these mechanisms in the User Manual.
When copies are being used, iterating over the edges will indeed give you exactly what the underlying data structure has: 9 entities for each edge. To get unique ones, you can simply filter out 8 of the 9 by looking at the offsets of the vertices of the edge. This is what is done in the iterator that returns unique periodic segments. Unfortunately, you want edges, and this iterator converts directly to the geometry of the edge (the segment). Nevertheless, you can simply use the main filtering function from that iterator, that is: is_canonical(). This function looks at the offsets of the two vertices of your edge and keeps only those edges that have at least one vertex in the first copy of the domain, which is enough to make them unique.
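In code, the suggested filtering might look like the sketch below. It reuses the names from the question and assumes is_canonical(Edge) is accessible on the triangulation class in your CGAL version (check the Periodic_2_triangulation_2 headers):

// Copies are only present when the triangulation is not in one sheet.
bool oneSheet = T.is_triangulation_in_1_sheet();

for (Edge_iterator ei = T.finite_edges_begin(); ei != T.finite_edges_end(); ei++)
{
    // Keep one canonical representative out of the 9 periodic copies.
    if (!oneSheet && !T.is_canonical(*ei))
        continue;

    Triangulation::Face& f = *(ei->first);
    int ii = ei->second;
    Vertex_handle vi = f.vertex(f.cw(ii));
    Vertex_handle vj = f.vertex(f.ccw(ii));
    VecInd[vi->info()].push_back(vj->info());
    VecInd[vj->info()].push_back(vi->info());
}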

CGAL static AABB tree to intersect many spheres with rays

I would like to use CGAL's AABB Tree to compute intersections between many static spheres and rays. I am fairly new to CGAL and might need some guidance.
As there does not seem to be direct support for spheres in the AABB tree, I think I need to complement the functionality by creating an AABB_sphere_primitive. Is that the only thing needed to get something like AABB_tree/AABB_triangle_3_example.cpp, but with spheres instead of triangles? Do I also need to define an analogue of Point_from_triangle_3_iterator_property_map?
typedef CGAL::Simple_cartesian<double> K;
typedef K::FT FT;
typedef K::Point_3 Point;
typedef K::Plane_3 Plane;
typedef K::Sphere_3 Sphere; // <-- this is done already
typedef std::list<Sphere>::iterator Iterator;
typedef CGAL::AABB_sphere_primitive<K,Iterator> Primitive; // <---- must be defined newly
typedef CGAL::AABB_traits<K, Primitive> Traits;
typedef CGAL::AABB_tree<Traits> Tree;
Is the routine for intersecting a sphere with a ray already implemented somewhere (Spherical_kernel_intersections.h?), so that it will simply be used?
Thanks for any pointers.
You need to provide a new primitive type that is a model of the concept AABBPrimitive. Basically you can copy/paste the implementation of CGAL::AABB_triangle_primitive and adapt it to the case of a sphere.
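For illustration, adapting the primitive might start out like the sketch below. This is only a rough model of the AABBPrimitive concept (check its documentation for the exact requirements); the class name is the one proposed in the question:

template <class K, class Iterator>
class AABB_sphere_primitive
{
public:
    typedef Iterator Id;                 // identifies a primitive in the tree
    typedef typename K::Point_3 Point;   // type of the reference point
    typedef typename K::Sphere_3 Datum;  // the wrapped geometric object

    AABB_sphere_primitive(Iterator it) : m_it(it) {}

    Id id() const { return m_it; }
    Datum datum() const { return *m_it; }
    Point reference_point() const { return m_it->center(); }

private:
    Iterator m_it;                       // iterator into the std::list<Sphere>
};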
The next tricky part is to provide the intersection predicate for a ray and a sphere as required by the AABBTraits concept.
If you are not looking for exact predicates, you can simply use the distance from the center of the sphere to the supporting line of the ray, together with the position of the center of the sphere relative to the source of the ray.
If you want exact predicates, the class Filtered_predicate can help you make your predicate robust.
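For the inexact route, a sketch of such a ray/sphere do-intersect test (the free function and its name are illustrative; only standard kernel calls are used):

#include <CGAL/Simple_cartesian.h>

typedef CGAL::Simple_cartesian<double> K;

bool ray_intersects_sphere(const K::Ray_3& ray, const K::Sphere_3& sphere)
{
    K::Vector_3 d = ray.to_vector();                 // ray direction (not unit length)
    K::Vector_3 v = sphere.center() - ray.source();  // source -> center
    double r2 = sphere.squared_radius();

    double v2 = v.squared_length();
    if (v2 <= r2)
        return true;                                 // source lies inside the sphere

    double t = v * d;                                // dot product: sign of the projection
    if (t < 0)
        return false;                                // sphere is behind the ray source

    // squared distance from the center to the supporting line of the ray
    double dist2 = v2 - (t * t) / d.squared_length();
    return dist2 <= r2;
}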

Triangulating Polyhedron faces in CGAL

Given an arbitrary polyhedron in CGAL (one that can be convex, concave, or even have holes), how can I triangulate its faces so that I can create OpenGL buffers for rendering?
I have seen that convex_hull_3() returns a polyhedron with triangulated faces, but that won't do what I want for arbitrary polyhedra.
The header file <CGAL/triangulate_polyhedron.h> contains an undocumented function
template <typename Polyhedron>
void triangulate_polyhedron(Polyhedron& p)
that works with CGAL::Exact_predicates_inexact_constructions_kernel, for example.
The Polygon Mesh Processing package provides the function CGAL::Polygon_mesh_processing::triangulate_faces with multiple overloads. The simplest thing to do would be
typedef CGAL::Simple_cartesian<float> Kernel;
typedef CGAL::Polyhedron_3<Kernel> Polyhedron_3;
Polyhedron_3 polyhedron = load_my_polyhedron();
CGAL::Polygon_mesh_processing::triangulate_faces(polyhedron);
After that, all faces in polyhedron are triangles.
The function modifies the model in-place, so one has to use a HalfedgeDS that supports removal. This is the default, but, for example, HalfedgeDS_vector won't do.
See also an official example that uses Surface_mesh instead of Polyhedron_3:
Polygon_mesh_processing/triangulate_faces_example.cpp
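Once the faces are triangulated, flattening them into a raw float array for OpenGL is straightforward. A sketch using the typedefs from the snippet above (the buffer layout is illustrative):

#include <vector>

std::vector<float> buffer;
buffer.reserve(polyhedron.size_of_facets() * 9); // 3 vertices * (x,y,z) per triangle

for (Polyhedron_3::Facet_iterator f = polyhedron.facets_begin();
     f != polyhedron.facets_end(); ++f)
{
    // every facet is now a triangle, so three halfedges enclose it
    Polyhedron_3::Halfedge_handle h = f->halfedge();
    const Kernel::Point_3 corners[3] = {
        h->vertex()->point(),
        h->next()->vertex()->point(),
        h->next()->next()->vertex()->point()
    };
    for (int i = 0; i < 3; ++i)
    {
        buffer.push_back(corners[i].x());
        buffer.push_back(corners[i].y());
        buffer.push_back(corners[i].z());
    }
}
// buffer is now ready for glBufferData(GL_ARRAY_BUFFER, ...).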

Implementing 3D movement forces

I have a 3D object moving, and I need to be able to apply forces to it such as gravity. In 2D I would simply store its movement in dx and dy, but since this is 3D, I am using a Vector3D direction and a float speed. How can I determine how much to rotate the direction and change the speed when applying something like applyForce(Vector3D force)?
Newton's second law states that acceleration is proportional to the applied force (a = F/m). Thus, a really simple method is forward integration, e.g. (pseudocode for compactness):
class Object {
    Vector3D position;
    Vector3D velocity;
    float mass;

    void updatePhysics(Vector3D force, float dt) {
        velocity += (1.0f / mass) * force * dt; // a = F/m, integrated over dt
        position += velocity * dt;              // semi-implicit Euler step
    }
};
Of course, in real life there are problems with, for example, numerical stability and the choice of time step. I did not understand from your question whether you are trying to perform some one-shot calculation or whether this is for a 3D game. If the latter, I suggest looking into a physics library such as Bullet Physics; you will get a lot for free.
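To tie this back to applyForce(): accumulate the frame's forces (gravity, thrust, drag, ...) and integrate once per step. A minimal self-contained sketch; Vector3D and every other name here are illustrative, not from any particular engine:

struct Vector3D {
    float x = 0, y = 0, z = 0;
    Vector3D& operator+=(const Vector3D& o) { x += o.x; y += o.y; z += o.z; return *this; }
};

Vector3D operator*(const Vector3D& v, float s) { return { v.x * s, v.y * s, v.z * s }; }

struct Object {
    Vector3D position, velocity, pendingForce;
    float mass = 1.0f;

    // May be called any number of times per frame.
    void applyForce(const Vector3D& force) { pendingForce += force; }

    // Called once per frame with the elapsed time in seconds.
    void updatePhysics(float dt) {
        velocity += pendingForce * (dt / mass); // a = F/m, integrated over dt
        position += velocity * dt;              // semi-implicit Euler
        pendingForce = {};                      // forces must be re-applied each frame
    }
};

// Each frame:
//     obj.applyForce({0.0f, -9.81f * obj.mass, 0.0f}); // gravity
//     obj.updatePhysics(dt);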

How do I create a function at runtime in Objective-C

So it's late here, and my Google skills seem to be failing me. I've found some great responses on SO before (time and time again), so I thought you guys could help.
I have a neural network I'm trying to run in native Objective-C. It works, but it's too slow. These networks are not recurrent. I run each network about 20,000 times (128x80 times, or around that). The problem is that these networks really just boil down to math functions (each network is a 4-dimensional function, taking x, y, dist(x,y), and bias as inputs, and outputting 3 values).
What I want to do is convert each network (only once) into a function call, or a block of code at runtime in Objective-C.
How do I do this? I could make a big string of the math operations that need to be performed, but how do I go about executing that string, or converting the string into a block of code for execution?
Again, my late night search failed me, so sorry if this has already been answered. Any help is greatly appreciated.
-Paul
Edit: Aha! Great success! Nearly 24 hours later, I have working code to turn a neural network with up to 4 inputs into a single 4-dimensional function. I used the block method suggested by Dave DeLong in the answers.
For anybody who ever wants to follow what I've done in the future, here is a (quick) breakdown of what I did (excuse me if this is incorrect etiquette on Stack Overflow):
First, I made a few typedef's for the different block functions:
typedef CGFloat (^oneDFunction)(CGFloat x);
typedef CGFloat (^twoDFunction)(CGFloat x, CGFloat y);
typedef CGFloat (^threeDFunction)(CGFloat x, CGFloat y, CGFloat z);
typedef CGFloat (^fourDFunction)(CGFloat x, CGFloat y, CGFloat z, CGFloat w);
A oneDFunction takes the form f(x), a twoDFunction is f(x,y), etc. Then I made functions to combine two fourDFunction blocks (and two oneDs, two twoDs, etc., although these were not necessary).
fourDFunction (^combineFourD)(fourDFunction f1, fourDFunction f2) =
    ^(fourDFunction f1, fourDFunction f2) {
        fourDFunction blockToCopy = ^(CGFloat x, CGFloat y, CGFloat z, CGFloat w) {
            return f1(x,y,z,w) + f2(x,y,z,w);
        };
        fourDFunction act = [blockToCopy copy];
        [f1 release];
        [f2 release];
        //Need to release act at some point
        return act;
    };
And, of course, for every node I needed to apply the activation function to the fourD function, and multiply each incoming function by the weight of its connection:
//for applying the activation function
fourDFunction (^applyOneToFourD)(oneDFunction f1, fourDFunction f2) =
    ^(oneDFunction f1, fourDFunction f2) {
        fourDFunction blockToCopy = ^(CGFloat x, CGFloat y, CGFloat z, CGFloat w) {
            return f1(f2(x,y,z,w));
        };
        fourDFunction act = [blockToCopy copy];
        [f1 release];
        [f2 release];
        //Need to release act at some point
        return act;
    };
//For applying the weight to the function
fourDFunction (^weightCombineFour)(CGFloat weight, fourDFunction f1) =
    ^(CGFloat weight, fourDFunction f1)
    {
        fourDFunction blockToCopy = ^(CGFloat x, CGFloat y, CGFloat z, CGFloat w) {
            return weight * f1(x,y,z,w);
        };
        fourDFunction act = [blockToCopy copy];
        [f1 release];
        //[act release];
        //Need to release act at some point
        return act;
    };
Then, for each node in the network, I simply applied the activation function to the sum of the fourD functions from the source neurons, multiplied by their connection weights.
After composing all those blocks, I took the final functions from each output. Therefore, my outputs are separate 4D functions of the inputs.
Thanks for the help, this was very cool.
You can do this with blocks. Something like:
//specify some parameters
int parameter1 = 42;
int parameter2 = 54;
//create your block
int (^myBlock)(int) = ^(int parameter3) {
    return parameter1 * parameter2 * parameter3;
};
//copy the block off the stack
myBlock = [myBlock copy];
//stash the block somewhere so that you can pull it out later
[self saveBlockOffSomewhereElse:myBlock underName:@"myBlock"];
//balance the call to -copy
[myBlock release];
And then elsewhere...
int (^retrievedBlock)(int) = [self retrieveBlockWithName:@"myBlock"];
int theAnswer = retrievedBlock(2); //theAnswer is 4536
If you have a string representing some math to evaluate, you could check out GCMathParser (fast but not extensible) or my own DDMathParser (slower but extensible).
Your idea isn't very stupid. As a matter of fact, LLVM is designed to do exactly that kind of thing (generate code, compile, link, load and run) and it even has libraries to link against and APIs to use.
While you could go down the path of trying to piece together a bunch of blocks or primitives -- a sort of VM of your own -- it'll be slower and probably harder to maintain. You'll end up having to write some kind of parser, write all the primitive blocks, and then piece it all together.
For code generation, you'll probably still need a parser, obviously, but the resulting code is going to be much, much faster, because you can crank the compiler's optimizer up and, as long as you generate just one really big file of code, the optimizer will be even more effective.
I would suggest, though, that you generate your program and then run it externally to your app. That will prevent the hell that is trying to dynamically unload code. It also means that if the generated code crashes, it doesn't take out your application.
LLVM.org has a bunch of additional details.
(Historical note -- one early form of Pixar's modeling environment was a TCL based system that would emit, literally, hundreds of thousands of lines of heavily templated C++ code.)
Here's another possibility: Use OpenGL.
The sorts of functions you are executing in a neural network are very similar to those performed by GPUs: multiplication/scaling, distance, sigmoids, etc. You could encode your state in a bitmap, generate a pixel shader as ASCII, compile and link it using the provided library calls, then generate an output "bitmap" with the new state. Then switch the two bitmaps and iterate again.
Writing a pixel shader is not as hard as you might imagine. In the basic case you are given a pixel from the input bitmap/buffer and you compute a value to put in the output buffer. You also have access to all the other pixels in the input and output buffers, as well as arbitrary parameters you set globally, including "texture" bitmaps which might serve as just an arbitrary data vector.
Modern GPUs have multiple pipelines, so you'd probably get much better performance than even native CPU machine code.
Another vote for blocks. If you start with a bunch of blocks representing primitive operations, you could compose those into larger blocks that represent complex functions. For example, you might write a function that takes a number of blocks as parameters, copies each one in turn and uses it as the first parameter to the next block. The result of the function could be a block that represents a mathematical function.
Perhaps I'm talking crazy here due to the late hour, but it seems like the ability of blocks to refer to other blocks and to maintain state should make them very good for assembling operations.
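A rough C++ analogue of that composition idea, using std::function in place of blocks (purely illustrative; the Objective-C versions above are what the question actually uses):

#include <functional>
#include <vector>
#include <cmath>

typedef std::function<double(double)> OneD;

// Chain a list of one-dimensional functions: each output feeds the next input.
OneD chain(std::vector<OneD> stages)
{
    return [stages](double x) {
        for (const OneD& f : stages)
            x = f(x);
        return x;
    };
}

// Usage: scale the input, then squash it with a sigmoid -- assembled at runtime.
// OneD node = chain({ [](double x) { return 2.5 * x; },
//                     [](double x) { return 1.0 / (1.0 + std::exp(-x)); } });
// double y = node(0.4);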