Particles passing through each other - optimization

I am writing code to simulate particle movement (currently 2D, hopefully 3D soon).
The thing is, if I use a relatively large timestep, particles end up passing through each other.
Do you have any suggestion that would allow me to correct that without using a really small step?
(it is in C++ if that makes much difference).

The use of a timestep to advance the clock introduces model artifacts that can destroy the model's validity, as is happening in your case. Use discrete event scheduling instead. This paper from the Winter Simulation Conference 2005 describes how to implement movement in a discrete event framework. Your model will not only be more accurate, it will probably run much faster as well.
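I don't have the paper's code at hand, but as a rough sketch of the core idea (names are illustrative, not from the paper): for discs moving at constant velocity you can solve for the exact time of contact analytically, and schedule that time as an event instead of stepping the clock.

    #include <cmath>
    #include <optional>

    struct Vec2 { double x, y; };
    static Vec2   sub(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
    static double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

    // Earliest time t >= 0 at which two discs with constant velocities touch,
    // or nullopt if they never do (assumes they are not already overlapping).
    // Solves |(p1 + v1 t) - (p2 + v2 t)| = r1 + r2 for t.
    std::optional<double> timeOfImpact(Vec2 p1, Vec2 v1, double r1,
                                       Vec2 p2, Vec2 v2, double r2)
    {
        Vec2 dp = sub(p1, p2), dv = sub(v1, v2);
        double R = r1 + r2;
        double a = dot(dv, dv);
        double b = 2.0 * dot(dp, dv);
        double c = dot(dp, dp) - R * R;
        if (a == 0.0) return std::nullopt;          // no relative motion
        double disc = b * b - 4.0 * a * c;
        if (disc < 0.0) return std::nullopt;        // paths never come close enough
        double t = (-b - std::sqrt(disc)) / (2.0 * a);
        if (t < 0.0) return std::nullopt;           // contact lies in the past
        return t;
    }

A scheduler would keep these impact times in a priority queue, pop the earliest event, advance all particles exactly to that time, resolve the collision, and recompute impact times for the particles involved.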

So you will have to do some sort of collision detection to see whether two objects would collide during the step.
Depending on your data structure, the detection could take many forms. If you just have a list of points, you would have to check them all against each other, O(N^2) per step, for each particle (adding the movement vector to create a larger spatial footprint). The pairwise test itself could be done with the GJK algorithm.
Using some spatial data structure could reduce the complexity by only running GJK on a pruned set of particles, i.e. there is no need to check pairs that cannot possibly overlap.
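As an illustration of that pruning step, here is a rough broad-phase sketch (the uniform-grid choice and all names are mine, not a prescription): bin each particle's swept bounding box into a hash grid, and only hand the pairs that share a cell to the exact narrow-phase test (GJK, or a swept-circle test for simple discs).

    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    struct Particle { double x, y, vx, vy, r; };

    // Pack a 2D cell coordinate into a single hash key.
    static std::uint64_t cellKey(int cx, int cy)
    {
        return (static_cast<std::uint64_t>(static_cast<std::uint32_t>(cx)) << 32)
             |  static_cast<std::uint32_t>(cy);
    }

    // Broad phase: bin each particle's swept bounding box (position plus movement
    // over dt, inflated by the radius) into a uniform grid. Only particles sharing
    // a cell become candidate pairs for the exact narrow-phase test.
    std::vector<std::pair<int, int>> candidatePairs(const std::vector<Particle>& ps,
                                                    double dt, double cellSize)
    {
        std::unordered_map<std::uint64_t, std::vector<int>> grid;
        for (int i = 0; i < static_cast<int>(ps.size()); ++i) {
            const Particle& p = ps[i];
            double x0 = std::min(p.x, p.x + p.vx * dt) - p.r;
            double x1 = std::max(p.x, p.x + p.vx * dt) + p.r;
            double y0 = std::min(p.y, p.y + p.vy * dt) - p.r;
            double y1 = std::max(p.y, p.y + p.vy * dt) + p.r;
            for (int cx = int(std::floor(x0 / cellSize)); cx <= int(std::floor(x1 / cellSize)); ++cx)
                for (int cy = int(std::floor(y0 / cellSize)); cy <= int(std::floor(y1 / cellSize)); ++cy)
                    grid[cellKey(cx, cy)].push_back(i);
        }
        std::vector<std::pair<int, int>> pairs;
        for (const auto& entry : grid) {
            const std::vector<int>& cell = entry.second;
            for (std::size_t a = 0; a < cell.size(); ++a)
                for (std::size_t b = a + 1; b < cell.size(); ++b)
                    pairs.emplace_back(cell[a], cell[b]);
        }
        return pairs;   // may contain duplicates across cells; deduplicate if needed
    }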

Creating a good training set for one-class detection

I am training a one-class (hands) object detector on the egohands data set. My problem is that it detects way too many things as hands. It feels like it is detecting everything that is skin-colored as a hand.
I assume the most likely explanation for this is that my training set is poor, as every single image of the set contains hands and almost no other skin-toned elements appear in the images. I guess it is necessary to also show the network images that do not contain what you are trying to detect?
I just want to verify that I am right with my assumptions before investing lots of time into creating a better training set. Therefore I am very grateful for every hint about what I am doing wrong.
Object detection preprocessing is a critical step; take extra care, as detection networks are sensitive to geometric transformations.
Some proven data augmentation methods include (a short sketch of two of them follows the list):
1. Random geometric transformations for random cropping (with constraints),
2. Random expansion,
3. Random horizontal flip,
4. Random resize (with random interpolation),
5. Random color jittering for brightness, hue, saturation, and contrast.
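For illustration, here is a minimal sketch of two of these (random horizontal flip plus brightness/contrast jitter) in C++ with OpenCV; the parameter ranges are placeholders, not a recommended recipe. The key point is that geometric augmentations must be applied to the bounding box annotations as well as to the pixels, otherwise you teach the detector wrong box locations.

    #include <opencv2/core.hpp>
    #include <random>
    #include <vector>

    // Randomly flips the image horizontally (mirroring the boxes with it) and
    // jitters brightness/contrast. Boxes are axis-aligned rects in pixel coordinates.
    void augment(cv::Mat& img, std::vector<cv::Rect2f>& boxes, std::mt19937& rng)
    {
        std::uniform_real_distribution<float> coin(0.0f, 1.0f);

        // Random horizontal flip: the annotations have to be mirrored too.
        if (coin(rng) < 0.5f) {
            cv::flip(img, img, 1);  // flip around the vertical axis
            for (auto& b : boxes)
                b.x = static_cast<float>(img.cols) - b.x - b.width;
        }

        // Random brightness / contrast jitter: pixel' = alpha * pixel + beta.
        std::uniform_real_distribution<double> alpha(0.8, 1.2);   // contrast factor
        std::uniform_real_distribution<double> beta(-20.0, 20.0); // brightness offset
        img.convertTo(img, -1, alpha(rng), beta(rng));
    }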

OpenCascade: How to subdivide elongated triangles?

I am using OpenCascade to import STEP/IGES as meshes in my software. Works nicely.
But I need small triangles, and the ones I get are sometimes very large (in flat areas) or very elongated (e.g. when meshing a cylinder). The best would be to split triangle edges longer than some absolute value, avoiding T-vertices too.
I wasn't able to google anything about that... So, currently, I pass the mesh to OpenMesh, apply the OpenMesh::Subdivider::Uniform::LongestEdgeT operator, then pass it back to my software. Tedious and costly when I manage several million triangles...
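For reference, that OpenMesh pass looks roughly like this (a sketch from memory; check the exact LongestEdgeT interface against your OpenMesh version):

    #include <OpenMesh/Core/Mesh/TriMesh_ArrayKernelT.hh>
    #include <OpenMesh/Tools/Subdivider/Uniform/LongestEdgeT.hh>

    typedef OpenMesh::TriMesh_ArrayKernelT<> Mesh;

    // Bisect edges longer than maxEdgeLength so that no edge exceeds the threshold.
    void refineLongEdges(Mesh& mesh, double maxEdgeLength)
    {
        OpenMesh::Subdivider::Uniform::LongestEdgeT<Mesh> splitter;
        splitter.attach(mesh);
        splitter.set_max_edge_length(maxEdgeLength);
        splitter(1);       // run the subdivider (see the OpenMesh docs for pass semantics)
        splitter.detach();
    }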
Questions:
Is there an equivalent in OpenCascade?
Or a simple code snippet to implement my own loop to do so?
Thanks!
The meshing algorithm BRepMesh_IncrementalMesh coming with Open CASCADE Technology is mainly focused on two usage scenarios:
Visualization in the 3D Viewer. Large and elongated triangles do no harm to the presentation, as vertex normals ensure proper smooth shading. Deflection parameters allow managing the presentation quality.
Computing algorithms using the triangulation as an approximation (to speed up calculations compared to the same algorithm working on exact geometry). In this case, deflection parameters determine the target precision of the algorithm. Large and elongated triangles should not cause problems here either, as the deviation from the exact geometry is controlled by the meshing parameters.
There are, however, some categories of algorithms where the shape of the mesh elements is very important. Solvers (numerical simulation) are one such category, where unfortunate mesh elements may cause numerical instability or other issues. What exactly matters / causes issues depends on the specific algorithm; this may include element skewness, element aspect ratio, element size and the structure of the element grid. Some solvers work much better on quads than on triangles.
If you take a look at the meshing result of the BRepMesh_IncrementalMesh algorithm, you may notice that not only are there large, elongated triangles, but the entire mesh structure is somewhat suboptimal for solver algorithms.
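As a side note, if you want to quantify "elongated" when inspecting a mesh, one simple, library-independent measure is the ratio of the longest edge to the triangle's smallest height (about 1.15 for an equilateral triangle, large for slivers). A small sketch:

    #include <algorithm>
    #include <cmath>

    struct Pnt { double x, y, z; };

    static double dist(const Pnt& a, const Pnt& b)
    {
        return std::sqrt((a.x - b.x) * (a.x - b.x) +
                         (a.y - b.y) * (a.y - b.y) +
                         (a.z - b.z) * (a.z - b.z));
    }

    // Aspect ratio = longest edge / smallest triangle height.
    // Equilateral triangles give about 1.15; long thin slivers give large values.
    double aspectRatio(const Pnt& a, const Pnt& b, const Pnt& c)
    {
        const double e0 = dist(a, b), e1 = dist(b, c), e2 = dist(c, a);
        const double longest = std::max(e0, std::max(e1, e2));
        const double s = 0.5 * (e0 + e1 + e2);                        // semi-perimeter
        const double area = std::sqrt(std::max(0.0, s * (s - e0) * (s - e1) * (s - e2)));
        const double smallestHeight = 2.0 * area / longest;
        return smallestHeight > 0.0 ? longest / smallestHeight : 1e9; // degenerate triangle
    }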
There are several options you may consider:
Triangulation refinement algorithm. Such an algorithm processes the existing triangulation and tries to heal some of its properties, like skewness. This is what OpenMesh does in your case, I suppose. Such a postprocessing algorithm might give satisfactory results at good performance, but the final result will depend dramatically on the properties of the original meshing algorithm. For the moment, OCCT doesn't have any refinement tool, although it is possible to write such an algorithm on your own (I cannot give you a small code snippet, because such an algorithm is not as small and trivial as it may look at first glance).
Consider an alternative meshing algorithm. A probably incomplete list:
Express Mesh by Open Cascade (hence, working directly on OCCT shapes). This tool generates triangulations with a nice grid-like structure (for smooth surfaces), a configurable element size and a quad-dominant generation option. It is a commercial product, though.
The Netgen mesher. This open source tool provides bindings to OCCT, and although it is focused on 3D tetrahedral mesh generation, it can also be used to generate a plain triangular surface mesh. I cannot say much good about this tool - it was rather slow and unstable when I last saw it at work, many years ago.
MeshGems surface meshing. Another commercial tool providing an interface to OCCT. I have never worked with this product, so I cannot share any opinion on it.
Consider improving BRepMesh_IncrementalMesh. As OCCT is an open source framework, you may consider extending its meshing algorithm with more parameters and contributing the result to the project.

Correcting SLAM drift error using GPS measurements

I'm trying to figure out how to correct the drift errors introduced by a SLAM method using GPS measurements. I have two point sets in Euclidean 3D space, taken at fixed moments in time:
The red dataset comes from GPS and contains no drift errors, while the blue dataset is based on the SLAM algorithm and drifts over time.
The idea is that SLAM is accurate over short distances but eventually drifts, while GPS is accurate over long distances and inaccurate over short ones. So I would like to figure out how to fuse the SLAM data with the GPS data in a way that takes the best accuracy from both measurements. At the very least, how should I approach this problem?
Since your GPS looks like it is very locally biased, I'm assuming it is low-cost and doesn't use any correction techniques, e.g. that it is not differential. As you are probably aware, GPS errors are not Gaussian. The authors of this paper show that a good way to model GPS noise is as v + eps, where v is a locally constant "bias" vector (it is usually constant for a few meters, and then changes more or less smoothly or abruptly) and eps is Gaussian noise.
Given this information, one option would be to use Kalman-based fusion: you add the GPS noise and bias to the state vector, define your transition equations appropriately, and proceed as you would with an ordinary EKF. Note that if we ignore the prediction step of the Kalman filter, this is roughly equivalent to minimizing an error function of the form
measurement_constraints + some_weight * GPS_constraints
and that gives you a more straightforward second option. For example, if your SLAM is visual, you can just use the sum of squared reprojection errors (i.e. the bundle adjustment error) as the measurement constraints, and define your GPS constraints as ||x - x_{gps}||, where the x are 2D or 3D GPS positions (you might want to ignore the altitude with a low-cost GPS).
If your SLAM is visual and feature-point based (you didn't really say what type of SLAM you are using, so I assume the most widespread type), then fusion with any of the methods above can lead to "inlier loss": you make a sudden, violent correction, which increases the reprojection errors, so you lose inliers in SLAM's tracking, have to re-triangulate points, and so on. Also note that even though the paper I linked to above presents a model of the GPS errors, it is not a very accurate model, and assuming that the distribution of GPS errors is unimodal (necessary for the EKF) seems a bit adventurous to me.
So I think a good option is to use barrier-term optimization. Basically, the idea is this: since you don't really know how to model GPS errors, assume that you have more confidence in SLAM locally, and minimize a function S(x) that captures the quality of your SLAM reconstruction. Call x_opt the minimizer of S. Then fuse with the GPS data only as long as it does not deteriorate S(x_opt) by more than a given threshold. Mathematically, you'd want to minimize
some_coef / (thresh - S(x)) + ||x - x_{gps}||
and you'd initialize the minimization with x_opt. A good choice for S is the bundle adjustment error, since by not degrading it you prevent inlier loss. There are other choices of S in the literature, but they are usually meant to reduce computational time and add little in terms of accuracy.
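As a sketch of that objective in code (the callback, the Eigen types and the way the position block of x is pulled out are all placeholders; thresh would be S(x_opt) plus the degradation you are willing to tolerate, and the minimization is started from x_opt):

    #include <Eigen/Core>
    #include <limits>

    // Barrier-term fusion objective: coef / (thresh - S(x)) + ||x - x_gps||.
    // slamCost(x) plays the role of S(x), e.g. the total bundle adjustment error.
    template <class CostFn>
    double fusedObjective(const Eigen::VectorXd& x,
                          const Eigen::Vector3d& xPos,   // position part of the state x
                          const Eigen::Vector3d& xGps,   // GPS measurement for that pose
                          CostFn slamCost,
                          double thresh, double coef)
    {
        const double s = slamCost(x);                    // S(x)
        if (s >= thresh)                                 // outside the barrier: reject
            return std::numeric_limits<double>::infinity();
        const double barrier = coef / (thresh - s);      // explodes as S(x) approaches thresh
        const double gpsTerm = (xPos - xGps).norm();     // ||x - x_gps||
        return barrier + gpsTerm;
    }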
This, unlike the EKF, does not have a nice probabilistic interpretation, but it produces very nice results in practice (I have used it for fusion with things other than GPS too, and it works well). You can, for example, see this excellent paper that explains how to implement this thoroughly, how to set the threshold, and so on.
Hope this helps. Please don't hesitate to tell me if you find inaccuracies/errors in my answer.

Basics of face Sculpting in Blender

I mean, the basics..
1) I have seen in online videos that they model a character (or anything) from one object only - they extrude, loop cut, scale, etc. and build the whole character. Why don't they design the different objects separately (like the hands separately, the legs separately, the body separately) and then join them together into one object?
2) What does the texturing department need to see so that they don't send the model back to the modelling department? I mean things like: the meshes (polygons) over the model's face must be quads, not triangles, while modelling a character.
What type of basics should I know? I mean, is there a checklist, or are there any basics I should look at before modelling a character?
Please correct me if I am wrong, and answer both my questions. Thanks
It may be common, but it definitely isn't mandatory to have a model as one solid mesh. Some models will have parts of the body underneath clothing removed to reduce the poly count. How the model is to be used will be a big factor in how you model it: for a single image it is easy to get away with multiple parts, while a character that will be animated in a cartoony animation could be stretched and distorted in ways that would show holes in a model made of multiple pieces. When working in a team, there may be rules in place determining whether a solid or multi-part model is considered acceptable.
An example of an animated model made from multiple parts is Sintel, the main character in the Sintel short animation.
There is nothing stopping you from making a library of separate body parts and joining them together when you make your model. Be aware that this can bring complications: if you model an arm with 12 verts and then make your hand with 15, you will have to fiddle around to merge them together.
You will also find some extra freedom to work with multiple body parts during the sculpting phase, as there you are creating a high-density mesh that is only used as a template to model a clean mesh over. That step is called retopology.
It is more likely that the rigging department will send a model back for fixing than the texturing department. When adding a rig and deforming the mesh in different ways, any parts that deform badly will be revealed and need fixing.
[...] (like hands separately, legs separately, body separate and then
join them together and make one object) [...]
Some modelers I know do precisely this: they block in the design using broad primitive shapes, start slicing in edge loops and adding broad details, then merge everything together, sculpt it a bit further with high-res sculpting tools, and finally retopologize everything.
The main modelers I know who do this, however, model in a way that tries to adhere as close as possible to the concept artist's illustration. They're not creating their own models from scratch but are instead given top/front/back/side illustrations of a character, for example, and are just trying to match it as closely as possible.
When you start modeling everything in small pieces, it helps to have that concept illustration since you can get lost in the topology otherwise and fusing organic meshes together can be difficult to do in a clean way.
[...] why don't they design different objects separately? [...]
Again they sometimes do, but one of the appeals of creating organic meshes while keeping them seamless the entire time is that you can start to focus on how edge loops propagate across the entire model. It helps to know that the base of a finger is a hexagon, for example, in figuring out how to cleanly propagate and terminate the edge loops for a hand, and likewise to have a strategy for how the hand cleanly propagates and terminates edge loops as it joins into the forearm.
It can be hard to get the topology to match up cleanly if you designed everything in small pieces and then had to figure out how to merge it all together. Polygonal modeling is very topology-oriented. It tends to require as much thinking about the wireframe and edge flows as it does the shape of the model, since it needs to be a certain way for everything to subdivide cleanly and smoothly and animate predictably with subdivision surfaces.
I used to work with developers who took one glance at the topology-dominated workflow of polygonal modeling and immediately wanted to seek out alternatives, like voxel sculpting. With voxels you could potentially model everything in pieces and fuse it all together in a nice, smooth, organic way without thinking about topology whatsoever.
However, that loses sight of the key appeal of polygonal meshes: their wire flow forms a control lattice with a very finite number of control points for the artist to animate and move around to predictably control the shape of their model. You immediately lose that with a voxel representation -- so while voxels free the artist from thinking about how the topology works and how the wireframe flows through the model, they also lose all the control benefits of having that lattice. So if people use voxel sculpting, they often end up meticulously retopologizing everything at the end anyway, just to gain back the level of coarse and predictable control they have with polygonal meshes.
I mean things like: the meshes (polygons) over the model's face must be quads, not triangles, while modelling a character.
This is all in the context of subdivision surfaces, the most popular of which are variants of Catmull-Clark. Those favor quads to get the most predictable subdivision. It's much easier for the artist to predict how everything will look and deform if they favor, as much as possible, uniform grids of quadrangles wrapped around their model, with 4-valence vertices and every polygon having 4 points. Then, only in the cases where they need to "join" these quad grids together, they might create some funky topology: a 5-valence vertex here, a 3-valence vertex there, a 5-sided polygon here, a triangle there -- but those cases tend to deform a bit unpredictably (or at least unintuitively), so artists try to avoid them as much as possible.
Because when artists model polygonal meshes this way, they are not just trying to create a statue with a nice shape. If that were all they wanted, they'd save themselves a lot of grief by avoiding dealing with individual vertices/edges/polygons in the first place and using something like Sculptris. Instead they are designing not only shapes but also a control lattice, a wire flow and a set of control points they can easily move around in the future to get predictable behavior out of their control cage. They're basically designing controls, almost an "interactive GUI/rig" for themselves, through how they design the topology.
2) What does the texturing department need to see so that they don't
send the model back to the modelling department?
Generally, how a mesh is modeled shouldn't directly affect the texture department's work much at all if they're working with UV maps and painting textures over them (at that point it doesn't really matter whether a model has clean wire flows or not, since all the texture artists do is paint images over the 2D UV map or directly onto the 3D model).
However, if the modeler does the UV mapping, then regardless of whether they use quad meshes and clean wire flows or not, a poor UV mapping will make the resulting texture images look distorted. So the UV maps need to be made well, with minimal distortion, though that's usually easy to do automatically these days.
The other exception is if the department doesn't use UV maps and instead uses, say, PTex from Disney. PTex really favors quads; in the original paper, at least, it only worked with quads.

How is ray coherence used to improve raytracing speed while still looking realistic?

I'm considering exploiting ray coherence in my software per-pixel realtime raycaster.
AFAICT, using a uniform grid, if I assign ray coherence to patches of, say, 4x4 pixels (where at present I have one raycast per pixel), given 16 parallel rays with different start (and end) points, how does this work out to a coherent scene? What I foresee is:
There is a distance within which the ray march would be exactly the same for adjacent/similar rays. Within that distance, I am saving on processing. (How do I know what that distance is?)
I will end up with a slightly to seriously incorrect image, due to the fact that some rays didn't diverge at the right times.
Given that my rays are cast from a single point rather than a plane, I guess I will need some sort of splitting function according to distance traversed, such that the set of all rays forms a tree as it moves outward. My concern here is that finer detail will be lost when closer to the viewer.
I guess I'm just not grasping how this is meant to be used.
If done correctly, ray coherence shouldn't affect the final image. Because the rays are very close together, there's a good chance that they'll all take similar paths when traversing the acceleration structure (kd-tree, AABB tree, etc.). You have to go down every branch that any of the rays could hit, but hopefully this doesn't increase the number of branches visited by much, and it saves on memory accesses.
The other advantage is that you can use SIMD (e.g. SSE) to accelerate some of your tests, both in the acceleration structure and against the triangles.
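To make the SIMD point concrete, here is a sketch of the classic 4-wide slab test: one AABB tested against a packet of four rays at once with SSE (the packet layout and the names are illustrative):

    #include <xmmintrin.h>  // SSE intrinsics

    // Four rays stored structure-of-arrays style: one SSE lane per ray.
    struct RayPacket4 {
        __m128 ox, oy, oz;      // origins
        __m128 idx, idy, idz;   // reciprocal directions (1 / d)
    };

    struct AABB { float minX, minY, minZ, maxX, maxY, maxZ; };

    // Slab test for 4 rays against one box. Returns a lane mask:
    // lane i is all-ones if ray i hits the box within [0, tMax].
    inline __m128 intersect4(const RayPacket4& r, const AABB& b, __m128 tMax)
    {
        __m128 t0 = _mm_mul_ps(_mm_sub_ps(_mm_set1_ps(b.minX), r.ox), r.idx);
        __m128 t1 = _mm_mul_ps(_mm_sub_ps(_mm_set1_ps(b.maxX), r.ox), r.idx);
        __m128 tmin = _mm_min_ps(t0, t1);
        __m128 tmax = _mm_max_ps(t0, t1);

        t0 = _mm_mul_ps(_mm_sub_ps(_mm_set1_ps(b.minY), r.oy), r.idy);
        t1 = _mm_mul_ps(_mm_sub_ps(_mm_set1_ps(b.maxY), r.oy), r.idy);
        tmin = _mm_max_ps(tmin, _mm_min_ps(t0, t1));
        tmax = _mm_min_ps(tmax, _mm_max_ps(t0, t1));

        t0 = _mm_mul_ps(_mm_sub_ps(_mm_set1_ps(b.minZ), r.oz), r.idz);
        t1 = _mm_mul_ps(_mm_sub_ps(_mm_set1_ps(b.maxZ), r.oz), r.idz);
        tmin = _mm_max_ps(tmin, _mm_min_ps(t0, t1));
        tmax = _mm_min_ps(tmax, _mm_max_ps(t0, t1));

        tmin = _mm_max_ps(tmin, _mm_setzero_ps());   // clamp to the ray start
        tmax = _mm_min_ps(tmax, tMax);               // clamp to the current hit distance
        return _mm_cmple_ps(tmin, tmax);             // hit if the slabs still overlap
    }

During traversal you would descend a node whenever _mm_movemask_ps of this mask is non-zero, i.e. whenever any ray in the packet could still hit it.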