Custom Delaunay Refinement with CGAL Delaunay3D - mesh

I want to perform a custom refinement strategy on a tetrahedral mesh. My input is a point cloud that I have tetrahedralized using the Delaunay 3D routine available in CGAL. The points have scalar values associated with them. Now I want to refine the tetrahedral mesh with the following strategy:
1. Get the maximum value among the vertices of each tetrahedron.
2. Get the value at the point that is going to be inserted (it may be the barycentre, a weighted centroid, or the circumcenter).
3. If the difference is large enough, insert the point.
Any idea how to do this effectively? Note: I do not require 0-1 dimensional feature preservation.
I have already tried the above strategy. Let me show what I have done so far.
// Assume T is a CGAL Delaunay_triangulation_3 mesh and I have an oracle f that tells me
// the value at the point that is going to be inserted, if conditions are met.
bool updated = true;
int it = 0;
while (updated)
{
    updated = false;
    std::vector<std::pair<Point, unsigned> > point_to_be_inserted;
    for (auto cit = T.finite_cells_begin(); cit != T.finite_cells_end(); cit++)
    {
        Cell_handle c = cit;
        Point v = ...;   // vertex of c with the maximum scalar value (max_val)
        Point q = ...;   // point that is going to be inserted
        double val_at_new_pt = oracle(q, &pts, var);
        double ratio = std::abs(max_val - val_at_new_pt) / max_val;
        if (ratio > threshold) {
            point_to_be_inserted.emplace_back(q, new_pt_ind);
            updated = true;
        }
    }
    if (updated)
    {
        ++it;
        std::cout << "Total pts inserted in it: " << it << " " << point_to_be_inserted.size() << std::endl;
        T.insert(point_to_be_inserted.begin(), point_to_be_inserted.end());
    }
}
The problem is that this is quite slow (every pass iterates over all the cells), and I cannot find an effective strategy to do the refinement locally. I tried using a queue, but the cell handles get messed up after one iteration of refinement. I also cannot keep a map recording whether a tetrahedron has been refined, because new cell handles are created after each round of insertions. Any help will be appreciated. Thanks in advance.
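For reference, here is one possible way to keep the re-checking local (a minimal sketch, not the code from above): insert the candidate points one at a time and re-examine only the cells incident to each newly created vertex. Vertex handles remain valid across insertions in CGAL triangulations, so the work queue holds candidate points rather than cell handles; cell_needs_refinement is a hypothetical stand-in for the oracle-based test described above.
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Delaunay_triangulation_3.h>
#include <iterator>
#include <queue>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Delaunay_triangulation_3<K>                   DT;

// Placeholder for the oracle-based criterion: set q and return true if cell c should be refined.
bool cell_needs_refinement(const DT& T, DT::Cell_handle c, DT::Point& q)
{
    (void)T; (void)c; (void)q;   // plug the max-value / oracle test in here
    return false;
}

void refine_locally(DT& T, std::queue<DT::Point>& candidates)
{
    while (!candidates.empty()) {
        DT::Vertex_handle vh = T.insert(candidates.front());
        candidates.pop();

        // Only the cells around the newly inserted vertex have changed.
        std::vector<DT::Cell_handle> affected;
        T.incident_cells(vh, std::back_inserter(affected));
        for (DT::Cell_handle c : affected) {
            if (T.is_infinite(c)) continue;
            DT::Point q;
            if (cell_needs_refinement(T, c, q))
                candidates.push(q);   // schedule further local refinement
        }
    }
}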

Related

Distance Calculation in NS-3

I need to calculate the distance between the nodes and display it either in the terminal or in a text file.
I have compiled the program using the function GetDistanceFrom():
double
ns3::MobilityModel::GetDistanceFrom (Ptr<const MobilityModel> other) const
{
  Vector oPosition = other->DoGetPosition ();
  Vector position = DoGetPosition ();
  return CalculateDistance (position, oPosition);
}
I have used the above function in my program, but I don't know how to display the result.
A standard std::cout or the ns-3 NS_LOG macros will print the information you want. See the logging section in the manual here.
To calculate the distance between two nodes you need to access the MobilityModel in each node.
Here is an example:
Ptr<MobilityModel> model1 = node1->GetObject<MobilityModel>();
Ptr<MobilityModel> model2 = node2->GetObject<MobilityModel>();
double distance = model1->GetDistanceFrom (model2);
And this is how you print:
NS_LOG_DEBUG("Distance = " << distance);
For the log to appear you must enable logging. So if you have a component named "MyComp":
NS_LOG_COMPONENT_DEFINE ("MyComp");
you enable logging using:
LogComponentEnable ("MyComp", LOG_LEVEL_ALL);
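Putting those pieces together, a minimal sketch might look like this (the component name and positions are just placeholders; the two nodes are given fixed positions via a MobilityHelper):
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/mobility-module.h"

using namespace ns3;

NS_LOG_COMPONENT_DEFINE ("MyComp");

int main (int argc, char *argv[])
{
  LogComponentEnable ("MyComp", LOG_LEVEL_ALL);

  // Two nodes with fixed positions, 5 m apart
  NodeContainer nodes;
  nodes.Create (2);

  Ptr<ListPositionAllocator> positions = CreateObject<ListPositionAllocator> ();
  positions->Add (Vector (0.0, 0.0, 0.0));
  positions->Add (Vector (3.0, 4.0, 0.0));

  MobilityHelper mobility;
  mobility.SetPositionAllocator (positions);
  mobility.SetMobilityModel ("ns3::ConstantPositionMobilityModel");
  mobility.Install (nodes);

  Ptr<MobilityModel> model1 = nodes.Get (0)->GetObject<MobilityModel> ();
  Ptr<MobilityModel> model2 = nodes.Get (1)->GetObject<MobilityModel> ();
  NS_LOG_DEBUG ("Distance = " << model1->GetDistanceFrom (model2));

  return 0;
}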

Optimized recalculation of all-pairs shortest paths when removing vertices dynamically from an undirected graph

I use the following Dijkstra implementation to calculate all-pairs shortest paths in an undirected graph. After calling calculateAllPaths(), dist[i][j] contains the shortest path length between i and j (or Integer.MAX_VALUE if no such path is available).
The problem is that some vertices of my graph are removed dynamically, and I currently have to recalculate all paths from scratch to update the dist matrix. I'm looking for a way to speed up the update by avoiding unnecessary calculations when a vertex is removed. I have already searched for a solution and I know there are algorithms such as LPA* for this, but they seem very complicated, and I suspect a simpler approach may solve my problem.
public static void calculateAllPaths()
{
    for (int j = graph.length / 2 + graph.length % 2; j >= 0; j--)
    {
        calculateAllPathsFromSource(j);
    }
}

public static void calculateAllPathsFromSource(int s)
{
    final boolean visited[] = new boolean[graph.length];
    for (int i = 0; i < dist.length; i++)
    {
        if (i == s)
        {
            continue;
        }
        // visit next node
        int next = -1;
        int minDist = Integer.MAX_VALUE;
        for (int j = 0; j < dist[s].length; j++)
        {
            if (!visited[j] && dist[s][j] < minDist)
            {
                next = j;
                minDist = dist[s][j];
            }
        }
        if (next == -1)
        {
            continue;
        }
        visited[next] = true;
        for (int v = 0; v < graph.length; v++)
        {
            if (v == next || graph[next][v] == -1)
            {
                continue;
            }
            int md = dist[s][next] + graph[next][v];
            if (md < dist[s][v])
            {
                dist[s][v] = dist[v][s] = md;
            }
        }
    }
}
If you know that vertices are only ever removed, then instead of storing just the best-distance matrix dist[i][j], you could also store the actual best path for each pair. Say, instead of dist[i][j] you make a custom class myBestPathInfo, and each instance myBestPathInfo[i][j] holds the best distance as well as the sequence of vertices on the best path. Preferably, the path is described as an ordered list of vertex objects, where the latter are reference types that are unique per vertex (but shared among several myBestPathInfo instances). Such objects could carry a boolean property isActive (true/false).
Whenever a vertex is removed, you traverse the stored best paths for each vertex pair and check whether any of their vertices has been deactivated. Only for the broken paths (those containing a deactivated vertex) do you re-run Dijkstra's algorithm.
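A minimal sketch of that bookkeeping (written in C++ here; the names are illustrative and not taken from the code above):
#include <climits>
#include <vector>

struct BestPathInfo {
    int distance = INT_MAX;   // "no path known", matching the dist matrix convention
    std::vector<int> via;     // vertices on the stored best i -> j path
};

// After removing vertex `removed`, only pairs whose stored path passes
// through it need a fresh shortest-path computation.
bool needsRecompute(const BestPathInfo& info, int removed)
{
    for (int v : info.via)
        if (v == removed)
            return true;
    return false;
}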
Another solution would be to solve the all-pairs shortest-path problem using linear programming (LP) techniques. A removed vertex can easily be implemented as an additional constraint in your program (e.g. flow into the vertex <= 0 and flow out of the vertex <= 0), after which re-solving the shortest-path LPs can use the previous optimal solution as a feasible basic feasible solution (BFS) in the dual LPs. This property holds because adding a constraint in the primal LP is equivalent to adding a variable in the dual; hence a previously optimal primal BFS remains feasible in the dual after the constraints are added (an on-the-fly warm start for the simplex solver).

Unwanted click when using SoXR Library to do variable rate resampling

I am using the SoXR library's variable-rate feature to dynamically change the sampling rate of an audio stream in real time. Unfortunately I have noticed that an unwanted clicking noise is present when changing the rate from 1.0 to a larger value (e.g. 1.01) when testing with a sine wave. I have not noticed any unwanted artifacts when changing from a value larger than 1.0 back to 1.0. I looked at the waveform it was producing and it appears that a few samples right at the rate change are transposed incorrectly.
Here's a picture of an example of a stereo 440 Hz sine wave stored using signed 16-bit interleaved samples:
I was also unable to find any documentation covering the variable-rate feature beyond the fifth code example. Here is my initialization code:
bool DynamicRateAudioFrameQueue::intialize(uint32_t sampleRate, uint32_t numChannels)
{
    mSampleRate = sampleRate;
    mNumChannels = numChannels;
    mRate = 1.0;
    mGlideTimeInMs = 0;

    // Initialize buffer
    size_t intialBufferSize = 100 * sampleRate * numChannels / 1000; // 100 ms
    pFifoSampleBuffer = new FiFoBuffer<int16_t>(intialBufferSize);

    soxr_error_t error;

    // Use signed int16 with interleaved channels
    soxr_io_spec_t ioSpec = soxr_io_spec(SOXR_INT16_I, SOXR_INT16_I);

    // "When creating a var-rate resampler, q_spec must be set as follows:" - example code
    // Using SOXR_VR makes sense, but I'm not sure if the quality can be altered when using var-rate
    soxr_quality_spec_t qualitySpec = soxr_quality_spec(SOXR_HQ, SOXR_VR);

    // Using the var-rate io-spec is undocumented beyond a single code example which states
    // "The ratio of the given input rate and output rates must equate to the
    //  maximum I/O ratio that will be used."
    // My tests show this is not true
    double inRate = 1.0;
    double outRate = 1.0;

    mSoxrHandle = soxr_create(inRate, outRate, mNumChannels, &error, &ioSpec, &qualitySpec, NULL);
    if (error == 0) // soxr_error_t == 0; no error
    {
        mIntialized = true;
        return true;
    }
    else
    {
        return false;
    }
}
Any idea what may be causing this to happen? Or does anyone have a suggestion for an alternative library capable of variable-rate audio resampling in real time?
After speaking with the developer of the SoXR library I was able to resolve this issue by adjusting the maximum-ratio parameters in the soxr_create() call. The developer's response can be found here.
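For reference, a sketch of that kind of fix (the values and names are illustrative, not the developer's exact code): the nominal input/output rates passed to soxr_create() encode the maximum I/O ratio that will ever be requested, and the actual ratio is then set with soxr_set_io_ratio():
#include <soxr.h>

soxr_t create_var_rate_resampler(unsigned numChannels, double maxIoRatio, soxr_error_t *error)
{
    soxr_io_spec_t ioSpec = soxr_io_spec(SOXR_INT16_I, SOXR_INT16_I);
    soxr_quality_spec_t qualitySpec = soxr_quality_spec(SOXR_HQ, SOXR_VR);

    // The ratio of these two nominal rates must cover the maximum
    // I/O ratio that will be used at run time.
    soxr_t handle = soxr_create(maxIoRatio, 1.0, numChannels, error, &ioSpec, &qualitySpec, NULL);
    if (handle)
        soxr_set_io_ratio(handle, 1.0, 0);   // start at unity; change later as needed
    return handle;
}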

A* Pathfinding - how to modify G and H to include rough terrain movement cost?

I have A* pathfinding implemented in my 2D game and it works well on a plain map with obstacles. Now I'm trying to understand how to modify the algorithm so it counts rough terrain (hills, forest, etc.) as 2 moves instead of 1.
With a movement cost of 1, the algorithm uses the integers 10 and 14 in the move-cost function. I'm interested in how to modify these values if one cell actually has a movement cost of 2 - will it be 20:17?
Here's how my algorithm currently computes G and H (adapted from Ray Wenderlich):
// Compute the H score from a position to another (from the current position to the final desired position)
- (int)computeHScoreFromCoord:(CGPoint)fromCoord toCoord:(CGPoint)toCoord
{
    // Here we use the Manhattan method, which calculates the total number of steps moved horizontally and vertically
    // to reach the final desired step from the current step, ignoring any obstacles that may be in the way
    return abs(toCoord.x - fromCoord.x) + abs(toCoord.y - fromCoord.y);
}

// Compute the cost of moving from a step to an adjacent one
- (int)costToMoveFromStep:(ShortestPathStep *)fromStep toAdjacentStep:(ShortestPathStep *)toStep
{
    return ((fromStep.position.x != toStep.position.x)
            && (fromStep.position.y != toStep.position.y))
           ? 14 : 10;
}
If some of the edges have movement cost 2, you will simply add 2 to the G of the parent node, rather than 1.
As for H: it doesn't need to change. The resulting heuristic will still be admissible/consistent.
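On the 10/14 scale used here, one way to express that (a sketch, with a hypothetical terrainCost lookup rather than the tutorial's tile objects) is to scale the base step cost by the terrain cost of the tile being entered:
// Hypothetical terrain lookup: 1 = plain, 2 = forest/hills, etc.
int terrainCost(int x, int y);

// G increment for one step: base cost (10 straight, 14 diagonal)
// multiplied by the terrain cost of the destination tile.
int costToMove(int fromX, int fromY, int toX, int toY)
{
    const bool diagonal = (fromX != toX) && (fromY != toY);
    const int base = diagonal ? 14 : 10;
    return base * terrainCost(toX, toY);   // e.g. 20 or 28 on a cost-2 tile
}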
I think I got it: with this line the tutorial author checks whether the move is 1 square away or a diagonal (2-square) move from the square that is currently being considered.
return ((fromStep.position.x != toStep.position.x)
&& (fromStep.position.y != toStep.position.y))
? 14 : 10;
Unfortunately, this is a really simple case and does not really explain what has to be done. The number 10 is used to make calculations easier (10 = the cost of 1 straight move), and 14 (the cost of 1 diagonal move) is an approximation of sqrt(10*10 + 10*10).
I attempted to introduce terrain cost below, and this requires extra information: I need to know which cells I'm going through to reach the destination. This turned out to be really annoying, and the code below is clearly not my best, but I attempted to spell out what's going on at each step.
If I'm making a diagonal move, I need to know its move cost AND the move costs of the 2 squares that could be used to get there. I can then pick the lower movement cost of the two squares and plug it into an equation of the form:
moveCost = (int)sqrt(lowestMoveCost*lowestMoveCost + (stepNode.moveCost*10) * (stepNode.moveCost*10));
Here's the entire loop that checks adjacent steps and creates new steps out of them with the move cost. It finds the tile in my map array and returns its terrain cost.
NSArray *adjSteps = [self walkableAdjacentTilesCoordForTileCoord:currentStep.position];
for (NSValue *v in adjSteps) {
    ShortestPathStep *step = [[ShortestPathStep alloc] initWithPosition:[v CGPointValue]];
    // Check if the step isn't already in the closed set
    if ([self.spClosedSteps containsObject:step]) {
        continue; // Ignore it
    }
    tileIndex = [MapOfTiles tileIndexForCoordinate:step.position];
    DLog(@"point (x%.0f y%.0f):%i", step.position.x, step.position.y, tileIndex);
    stepNode = [[MapOfTiles sharedInstance] mapTiles][tileIndex];
    // int moveCost = [self costToMoveFromStep:currentStep toAdjacentStep:step];
    // in my case 0,0 is bottom left, y points up, x points right
    if ((currentStep.position.x == step.position.x) || (currentStep.position.y == step.position.y)) {
        // straight move (one step away) - easy, multiply the tile's move cost by 10
        moveCost = stepNode.moveCost * 10;
    } else {
        possibleMove1 = 0;
        possibleMove2 = 0;
        // we are moving diagonally, figure out in which direction
        if (step.position.y > currentStep.position.y) {
            // moving up
            possibleMove1 = tileIndex + 1;
            if (step.position.x > currentStep.position.x) {
                // moving right and up
                possibleMove2 = tileIndex + tileCountTall;
            } else {
                // moving left and up
                possibleMove2 = tileIndex - tileCountTall;
            }
        } else {
            // moving down
            possibleMove1 = tileIndex - 1;
            if (step.position.x > currentStep.position.x) {
                // moving right and down
                possibleMove2 = tileIndex + tileCountTall;
            } else {
                // moving left and down
                possibleMove2 = tileIndex - tileCountTall;
            }
        }
        moveNode1 = nil;
        moveNode2 = nil;
        CGPoint coordinate1 = [MapOfTiles tileCoordForIndex:possibleMove1];
        CGPoint coordinate2 = [MapOfTiles tileCoordForIndex:possibleMove2];
        if ([adjSteps containsObject:[NSValue valueWithCGPoint:coordinate1]]) {
            // we know the first possible move to reach the destination is walkable, get its move cost from the map
            moveNode1 = [[MapOfTiles sharedInstance] mapTiles][possibleMove1];
        }
        if ([adjSteps containsObject:[NSValue valueWithCGPoint:coordinate2]]) {
            // we know that the second possible move is walkable
            moveNode2 = [[MapOfTiles sharedInstance] mapTiles][possibleMove2];
        }
#warning not sure about this one if the algorithm has to backtrack really far back
        // find out which square has the lowest move cost
        lowestMoveCost = fminf(moveNode1.moveCost, moveNode2.moveCost) * 10;
        moveCost = (int)sqrt(lowestMoveCost * lowestMoveCost + (stepNode.moveCost * 10) * (stepNode.moveCost * 10));
    }
    // Compute the cost from the current step to that step
    // Check if the step is already in the open list
    NSUInteger index = [self.spOpenSteps indexOfObject:step];
    if (index == NSNotFound) { // Not on the open list, so add it
        // Set the current step as the parent
        step.parent = currentStep;
        // The G score is equal to the parent G score + the cost to move from the parent to it
        step.gScore = currentStep.gScore + moveCost;
        // Compute the H score, the estimated movement cost from that step to the desired tile coordinate
        step.hScore = [self computeHScoreFromCoord:step.position toCoord:toTileCoord];
        // Add it with the function that keeps the list ordered by F score
        [self insertInOpenSteps:step];
    }
    else { // Already in the open list
        step = (self.spOpenSteps)[index]; // To retrieve the old one (which has its scores already computed ;-)
        // Check to see if the G score for that step is lower if we use the current step to get there
        if ((currentStep.gScore + moveCost) < step.gScore) {
            // The G score is equal to the parent G score + the cost to move from the parent to it
            step.gScore = currentStep.gScore + moveCost;
            // Because the G score has changed, the F score may have changed too
            // So to keep the open list ordered we have to remove the step and re-insert it
            // with the insert function that keeps the list ordered by F score
            // Now we can remove it from the list without being afraid that it will be released
            [self.spOpenSteps removeObjectAtIndex:index];
            // Re-insert it with the function that keeps the list ordered by F score
            [self insertInOpenSteps:step];
        }
    }
}
These types of problems are quite common in, say, chip routing and, yes, gamedev.
The standard approach is to build a graph (in C++ you could use a Boost "grid graph" or a similar structure). If you can afford to have an object for each vertex, then the solution is quite easy.
You connect two vertices (orthogonal or diagonal neighbours) by an edge unless there is an obstacle between them, and you assign the edge a weight equal to its length (10 or 14) times the terrain cost. Sometimes people prefer not to exclude obstacle edges but to assign extremely high weights to them (an advantage of this approach is that you are guaranteed to find at least some path, even when the object is stuck on an island).
Then you apply the A* algorithm. Your heuristic function (H) can be "pessimistic" (equal to the Euclidean distance times the maximum move cost), "optimistic" (Euclidean distance times the minimum move cost), or anything in between. Different heuristics will give your search slightly different "personalities" but usually do not matter much.
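For illustration, the two variants might look like this (a sketch; minCost and maxCost are assumed to be the smallest and largest terrain costs on the map, and the factor of 10 matches the per-step scale used in the question):
#include <cmath>

// "Optimistic" H: straight-line distance times the cheapest terrain cost.
double optimisticH(double x1, double y1, double x2, double y2, double minCost)
{
    return std::hypot(x2 - x1, y2 - y1) * 10.0 * minCost;
}

// "Pessimistic" H: the same distance times the most expensive terrain cost.
double pessimisticH(double x1, double y1, double x2, double y2, double maxCost)
{
    return std::hypot(x2 - x1, y2 - y1) * 10.0 * maxCost;
}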

Looping with iterator vs temp object gives different result graphically (Libgdx/Java)

I've got a particle "engine" to which I've added a Pool system, and I've tested two different ways of rendering every Particle in a list. Please note that the pooling really doesn't have anything to do with the problem. I just followed a tutorial and tried to use the second method when I noticed that they behaved differently.
The first way:
for (int i = 0; i < particleList.size(); i++) {
    Iterator<Particle> it = particleList.iterator();
    while (it.hasNext()) {
        Particle p = it.next();
        if (p.isDead()) {
            it.remove();
        }
        p.render(batch, delta);
    }
}
Which works just fine. My particles are sharp and they move with the correct speed.
The second way:
Particle p;
for (int i = 0; i < particleList.size(); i++) {
    p = particleList.get(i);
    p.render(batch, delta);
    if (p.isDead()) {
        particleList.remove(i);
        bulletPool.free(p);
    }
}
Which makes all my particles blurry and moving really slow!
The render method for my particles look like this:
public void render(SpriteBatch batch, float delta) {
    sprite.setX(sprite.getX() + (dx * speed) * delta * Assets.FPS);
    sprite.setY(sprite.getY() + (dy * speed) * delta * Assets.FPS);
    ttl--;
    sprite.setScale(sprite.getScaleX() - 0.002f);
    if (ttl <= 0 || sprite.getScaleX() <= 0)
        isDead = true;
    sprite.draw(batch);
}
Why do the different rendering methods provide different results?
Thanks in advance
You are mutating (removing elements from) a list while iterating over it. This is a classic way to make a mess.
The Iterator knows how to handle removal correctly, but your index-based for loop does not. Specifically, when you call particleList.remove(i), i is now "out of sync" with the content of the list. Consider what happens when you remove the element at index 3: i will increment to 4, but the old element 4 has been shuffled down into index 3, so it gets skipped.
I assume you're avoiding the Iterator to avoid memory allocations. One way to side-step the issue is to reverse the loop (go from particleList.size() - 1 down to 0). Alternatively, only increment i for particles that are not removed.
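A sketch of the reverse-loop fix (written with a plain C++ std::vector rather than the question's Java list; the idea carries over directly):
#include <vector>

struct Particle { bool dead = false; };

void renderAndPrune(std::vector<Particle>& particles)
{
    // Iterate backwards so that erasing element i never shifts the
    // elements that still remain to be visited.
    for (int i = static_cast<int>(particles.size()) - 1; i >= 0; --i) {
        // render(particles[i]);   // rendering omitted in this sketch
        if (particles[i].dead)
            particles.erase(particles.begin() + i);
    }
}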