For an undirected connected graph, how to create an index of the bridges that is maintained after removal of edges?

Creating an index itself is the same as computing the list of bridges. The question is about how to maintain that index after removing an edge without recomputing it altogether.
Maybe storing the list of all (simple) cycles and removing every cycle that required the deleted edge (index maintenance) would work, together with an "is this edge in a cycle" check for requiredness. For a bigger graph this would be quite expensive to compute initially, because the number of cycles grows exponentially with the degree of connectedness.
EDIT: an algorithm that would give a probabilistic answer might also work
P.S. Here's an excerpt from "Introduction to Algorithms" for the terminology

One way to reduce the amount of work when recomputing the list of bridges after an edge removal is to build a list of no-bridge components along with the list of bridges. A no-bridge component is a maximal connected subgraph without any bridge; in the picture, biconnected components 2 and 3 together form one such no-bridge component, since there is no bridge between them, only an articulation point. A bridge in the graph always connects two such no-bridge components. Also, if you merge all points of each no-bridge component into one point, you end up with a graph that has one edge per bridge of the original graph and is guaranteed to have no cycles (otherwise a bridge could be removed and the graph would stay connected). So you can look at the graph in the picture as the following:
Component 1 - Components 2+3 - Component 4 - Component 6
                   |              /      \
               One-point    Component 5   One-point
Now with such representation the algorithm for updating list of bridges will need to look at the edge being deleted and act as following:
If the deleted edge is one of the bridges, the graph is no longer connected.
Otherwise the deleted edge belongs to one of the no-bridge components. Every edge that was a bridge remains a bridge, and new bridges may appear. New bridges can only appear inside the component the deleted edge belongs to, so we can rebuild the list of bridges for that subgraph only and, if new bridges appear there, split that component into new no-bridge components.
In the example in the picture, say an edge from component 4 is deleted. We would only need to look at component 4 itself, determine that all 3 remaining edges are now bridges, and add them to the set of bridges, replacing no-bridge component 4 with three "one-point" components (though one-point components are not really needed for this purpose, since they don't contain any edges).
This way we only ever have to rebuild the list of bridges for the no-bridge component the edge is being deleted from. Unfortunately, if your original graph had no bridges (i.e. it was one large no-bridge component), this doesn't really help you much, though you could argue that the starting point "there are no bridges" doesn't contain a lot of information either.
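This maintenance scheme can be sketched with a deliberately naive bridge test (remove an edge, then check whether the component stays connected), applied only to the no-bridge component the deleted edge belongs to. All names here are my own, and a real implementation would use the O(V+E) algorithm instead of the quadratic per-edge test:

```python
from collections import deque

def no_bridge_component(adj, bridges, start):
    """Vertices reachable from `start` without crossing any known bridge."""
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if frozenset((u, v)) in bridges or v in seen:
                continue
            seen.add(v)
            queue.append(v)
    return seen

def connected_without(adj, comp, edge):
    """Naive bridge test: is the subgraph induced by `comp` still
    connected after ignoring `edge`?"""
    a, b = edge
    seen, queue = {a}, deque([a])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in comp or {u, v} == {a, b} or v in seen:
                continue
            seen.add(v)
            queue.append(v)
    return seen == comp

def delete_edge(adj, bridges, edge):
    """Remove `edge` and return (still_connected, updated bridge set).
    Only the no-bridge component containing `edge` is re-examined."""
    u, v = edge
    if frozenset(edge) in bridges:
        return False, bridges               # the graph splits in two
    adj[u].remove(v)
    adj[v].remove(u)
    comp = no_bridge_component(adj, bridges, u)
    new_bridges = set(bridges)
    for x in comp:                          # test every edge inside comp
        for y in adj[x]:
            if x < y and y in comp and not connected_without(adj, comp, (x, y)):
                new_bridges.add(frozenset((x, y)))
    return True, new_bridges
```

With the example from the picture, deleting an edge inside component 4 would re-examine only that component and promote its remaining edges to bridges.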
I don't claim this is the best you can do, but it does answer your question "maintain that index after removing an edge without recomputing it altogether" to the degree that you only need to check one no-bridge component after each edge removal.
For the algorithm that builds the list of bridges from scratch (at the beginning of the process, or when you need to apply it to one no-bridge component) you can e.g. use the algorithm described here, which runs in O(V+E) time.
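That from-scratch computation is usually done with a single DFS that tracks discovery times and low-link values; a minimal Python sketch, assuming the graph is given as an adjacency-list dict:

```python
def find_bridges(adj):
    """Find all bridges of an undirected graph given as an adjacency
    list {vertex: [neighbors]}. Runs in O(V + E) using DFS low-link
    values: an edge (u, v) is a bridge iff low[v] > disc[u]."""
    disc, low = {}, {}
    bridges = []
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        skipped_parent = False
        for v in adj[u]:
            if v == parent and not skipped_parent:
                skipped_parent = True  # skip the edge we came in on (once)
                continue
            if v in disc:
                low[u] = min(low[u], disc[v])  # back edge
            else:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:
                    bridges.append((u, v))

    for u in adj:
        if u not in disc:
            dfs(u, None)
    return bridges
```

An edge (u, v) is a bridge exactly when no back edge from v's subtree reaches u or above, i.e. low[v] > disc[u].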

Related

How to define routes for a large grid network in sumo?

When using SUMO to create a grid network, it seems we have to define routes for different types of vehicles. But for a large grid network such as 10*10, it would be impossible to manually input the routes for the different flows, especially when considering turning at intersections.
My goal is to have a large network and let flow run through it with certain turning probabilities at intersections. Then I wish to use TraCI to control the signal lights.
There are a few ways to manage multiple routes:
1. Define a trip and/or flow with "from" and "to" edge attributes. The DUAROUTER application will find the shortest route possible, or the best route possible if edge weights are provided.
2. The above (point 1) can also be achieved if fromTaz/toTaz (Traffic Assignment Zones) are assigned.
NOTE - for both points 1 and 2, the via attribute can force the vehicles to travel through a given edge or a given set of edges.
3. Another way to generate multiple routes is to generate the 10*10 network and note down (in the program) all the connections (this is done so that SUMO does not throw any "no connection" errors). A simple program can then be written in conjunction with TraCI that turns a vehicle from a given edge onto a different edge at any junction. This will be time consuming, but if your focus is not on the overall simulation time, this approach will be the most apt for you.
4. Another way is to add rerouter devices on all edges leading to a junction. You can define new destinations and routes there. This will be the easiest solution for a large network.
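For the TraCI-based option, the control loop could look roughly like this. The edge IDs, the junction_map layout, and the port are all assumptions for illustration; only choose_next_edge is pure logic, the rest needs a running SUMO instance started with --remote-port:

```python
import random

def choose_next_edge(outgoing, turn_probs):
    """Pick the next edge at a junction according to turn probabilities.
    `outgoing` maps a direction label ('left', 'straight', 'right') to an
    edge ID; `turn_probs` maps the same labels to probabilities."""
    labels = list(outgoing)
    weights = [turn_probs.get(d, 0.0) for d in labels]
    return outgoing[random.choices(labels, weights=weights)[0]]

def run(turn_probs, junction_map, port=8813):
    """Step the simulation and reroute each vehicle when it enters an
    edge leading to a junction. `junction_map` maps an incoming edge ID
    to its outgoing edges by direction (an assumed data layout)."""
    import traci  # requires SUMO running with --remote-port <port>
    traci.init(port)
    while traci.simulation.getMinExpectedNumber() > 0:
        traci.simulationStep()
        for veh in traci.vehicle.getIDList():
            edge = traci.vehicle.getRoadID(veh)
            if edge in junction_map:  # edge leads into a junction
                nxt = choose_next_edge(junction_map[edge], turn_probs)
                traci.vehicle.setRoute(veh, [edge, nxt])
    traci.close()
```

Note that traci.vehicle.setRoute expects the vehicle's current edge as the first element of the new route.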

Genetic Algorithm: Need some clarification on selection and what to do when crossover doesn't happen

I'm writing a genetic algorithm to minimize a function. I have two questions, one in regards to selection and the other with regards to crossover and what to do when it doesn't happen.
Here's an outline of what I'm doing:
while (number of new population < current population)
    # Evaluate all fitnesses and give them a rank. Choose an individual based on rank (roulette wheel) to get the first parent.
    # Do it again to get the second parent, ensuring parent1 != parent2
    # Elitism (do only once): choose the fittest individual and immediately copy it to the new generation
    multi-point crossover: 50% chance
    if (crossover happened)
        do single-point mutation on child (0.75%)
    else
        pick a random individual to be copied into the new population
    end
And all of this is under another while loop which tracks fitness progression and number of iterations, which I didn't include. So, my questions:
As you can see, two parents are chosen randomly in each iteration until the new population is filled up. So the same two parents may mate more than once, and surely several fit parents will mate many more times than once. Is this in any way bad?
In the obitko tutorial, it says that if crossover doesn't happen, then the child is an exact copy of the parents. I don't even understand what that means, so, as you can see, I just picked a random parent (uniformly; no fitness considered) and copied it into the new population. This seems weird to me. Whether I actually do this or not, my results really don't change that much. What's the proper way to handle the case when crossover doesn't happen?
Some parents having several offspring is common; I'd even say this is the default practice (and consider biological evolution, where precisely this is one of the main ingredients).
"If crossover doesn't happen, then child is exact copy of parents"
That is a bit confusing. Crossover (well explained in your link) means taking some genes from one parent and some from the other. This is called sexual reproduction and requires two (or more?) parents.
But asexual reproduction is also possible. In this case, you simply take one parent and mutate its genome in the new individual. This is almost what you were attempting, but you are missing the important mutation step (note mutations can be very aggressive or very conservative!)
Note that asexual reproduction requires mutation after copying the genome to create diversity, while in sexual reproduction this is an optional step.
It is fine to use either type of reproduction, or a mix of them. By the way: in some problems genes might not always have the same size. Sexual reproduction is problematic in this case. If you are interested in this problem, take a look at the NEAT algorithm, a popular neuroevolution algorithm designed to address this (wiki and paper).
Finally, elitism (copying the best-performing individuals to the next generation) is common, but it may be problematic. Genetic algorithms often stall in sub-optimal solutions (called local maxima, where any changes decrease fitness). Elitism can contribute to this problem. Of course, the opposite problem is too much diversity being similar to random search, so you need to find the right balance.
I don't see anything wrong with the same individual being the parent of more than one child per generation. It can only affect your diversity a little bit. If you don't like this, or you find a real lack of diversity in the final generations, you can flag an individual so it cannot be a parent more than once per generation.
I actually don't fully agree with the tutorial. I think that after you have selected the individuals that will become parents (based on their fitness, of course), you should actually perform the crossover; otherwise you will be cloning a lot of individuals into the next generation.
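A small sketch tying this together: rank-based roulette selection, crossover with 50% probability, and an asexual fallback that always runs the clone through mutation. The real-vector encoding and all the rates are illustrative, not taken from the question:

```python
import random

def make_child(population, fitness, p_crossover=0.5, p_mut=0.05, sigma=0.1):
    """Produce one child. `population` is a list of equal-length float
    lists; `fitness` scores an individual (lower is better, since we
    minimize). Rank-based roulette: better rank -> higher weight."""
    ranked = sorted(population, key=fitness)
    n = len(ranked)
    weights = [n - i for i in range(n)]      # best gets weight n, worst 1

    def pick():
        return random.choices(ranked, weights=weights)[0]

    def mutate(ind):
        # Gaussian per-gene mutation; aggressive or conservative via sigma
        return [g + random.gauss(0, sigma) if random.random() < p_mut else g
                for g in ind]

    p1 = pick()
    if random.random() < p_crossover:
        p2 = pick()
        cut = random.randrange(1, len(p1))   # single-point crossover
        return mutate(p1[:cut] + p2[cut:])   # mutation optional here
    # Asexual fallback: clone one parent, but ALWAYS pass the clone
    # through mutation, since copying alone adds no diversity
    return mutate(p1)
```

Note the asexual branch relies on mutate for diversity, matching the point above that mutation is mandatory there but optional after crossover.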

Layered and Pipe-and-Filter

I'm a bit confused about which situations these patterns should be used in, because in some sense they seem similar to me.
I understand that Layered is used when a system is complex and can be divided by its hierarchy, so each layer performs a function at a different level of the hierarchy, using the functions of the lower level while exposing its own functions to the higher level.
Pipe-and-Filter, on the other hand, is based on independent components that process data and can be connected by pipes so that together they execute a complete algorithm.
But if the hierarchy does not exist, does it all come down to whether the order of the modules can be changed?
An example that confuses me is a compiler. It is given as an example of pipe-and-filter architecture, but the order of some of its modules is relevant, if I'm not wrong?
Some examples to clarify things would be nice, to remove my confusion. Thanks in advance...
Maybe it is too late to answer, but I will try anyway.
The main difference between the two architectural styles is the flow of data.
In Pipe-and-Filter, the data is pushed from the first filter to the last one, and it must make it all the way through, otherwise the process is not deemed successful. For example, in a car manufacturing factory, the stations are placed one after another and the car is assembled from the first station to the last; if nothing goes wrong, you get a complete car at the end. The same is true for the compiler example: you get the binary code from the last compilation stage.
Layered architecture, on the other hand, dictates that the components are grouped in so-called layers. Typically, the client (the user or component that accesses the system) can access the system only from the top-most layer. It does not care how many layers the system has; it cares only about the outcome from the layer it is accessing (the top-most one). This is not the same as Pipe-and-Filter, where the output comes from the last filter.
Also, as you said, the components in one layer use "services" from the lower layers. However, not all services of the lower layer must be accessed, nor must the upper layer access the lower layer at all. As long as the client gets what it wants, the system is said to work. In the TCP/IP architecture, for example, the user uses a web browser at the application layer without any knowledge of how the web browser or any of the underlying protocols work.
To your question: the "hierarchy" in layered architecture is just a logical model. You can just say they are packages, or groups of components accessing each other in a chain. The key point is that the results must also be returned in a chain from the last component back to the first one (where the client is accessing), in contrast to Pipe-and-Filter, where the client gets the result from the last component.
1.) Layered Architecture is a hierarchical architecture: it views the entire system as a hierarchy of structures. The software system is decomposed into logical modules at different levels of the hierarchy.
2.) Pipe-and-Filter is a data-flow architecture: it views the entire system as a series of transformations on successive sets of data, where the data and the operations on it are independent of each other.
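The contrast is easy to see in code. A hypothetical sketch (stage and layer names are made up): the pipe-and-filter client reads the output of the last filter, while the layered client calls only the top layer and the result returns back up the chain:

```python
def pipeline(data, *filters):
    """Pipe-and-filter: push the data through each filter in order;
    the client receives the output of the LAST filter."""
    for f in filters:
        data = f(data)
    return data

# Toy "compiler" stages, each an independent transformation on the data
def tokenize(src):
    return src.split()

def fold_case(tokens):
    return [t.upper() for t in tokens]

def emit(tokens):
    return " ".join(tokens)

# Layered, in contrast: the client calls only the TOP layer; each layer
# uses the layer below, and the result is returned back up the chain.
def transport_layer(payload):
    return "tcp(" + payload + ")"

def application_layer(request):
    return "http(" + transport_layer(request) + ")"
```

Reordering the toy compiler stages changes (or breaks) the result, which is fine: pipe-and-filter requires independent components, not order-independence.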

iOS - How to detect if two or more objects collide

How can I detect whether two or more objects collide?
I would like to use only default frameworks, such as Core Graphics. Or do I have to use Box2D or Cocos2D?
EDIT
You're right, the question isn't really clear, so this is the situation:
I have multiple UIImageViews which move with the accelerometer, but I want that when two or more images collide, they don't overlap each other. Is that clear?
Probably you want a multi-step process.
First, define a "center" and "radius" for each object, such that a line drawn around the center at the selected radius will entirely encompass the object without "too much extra". (You define how hard you work to define center and radius to prevent "too much".)
An optional next step is to divide the screen into quadrants/sections somehow, and compute which objects (based on their centers and radii) lie entirely within one quadrant, which straddle a quadrant boundary, which straddle 4 quadrants, etc. This allows you to subset the next step and only consider object pairs that are in the same quadrant or where one of the two is a straddler of one sort or another.
Then, between every pair of objects, calculate the center-to-center distance using the Pythagorean theorem. If that distance is less than the sum of the two objects' radii then you have a potential collision.
Finally, you have to get down and dirty with calculating actual collisions. There are several different approaches, depending on the shape of your objects. Obviously, circles are covered by the prior step, squares/rectangles (aligned to the X/Y axes) can be computed fairly well, but odd shapes are harder. One scheme is to, on a pair of "blank" canvases, draw the two objects, then AND together the two, pixel by pixel, to see if you come up with a 1 anywhere. There are several variations of this scheme.
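The broad-phase circle test from the steps above is just the Pythagorean check; a sketch in Python for brevity (the same arithmetic applies to view centers and radii in points):

```python
import math

def may_collide(c1, r1, c2, r2):
    """Broad-phase test: two objects with bounding circles (center,
    radius) can only collide if the center-to-center distance is less
    than the sum of their radii."""
    dx = c1[0] - c2[0]
    dy = c1[1] - c2[1]
    return math.hypot(dx, dy) < r1 + r2
```

Pairs that fail this cheap test can be skipped entirely; only the survivors need the expensive per-pixel check.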
As mentioned, your question is pretty vague, and therefore difficult to answer succinctly. But to give you some ideas to go by, you can do this with core animation, though some 3rd party gaming engines/frameworks may be more efficient.
Essentially, you create a timer that fires quite often (how often would depend on the size of the objects you're colliding and their speed - too slow and the objects can collide and pass each other before the timer fires - math is your friend here).
Every time the timer fires, you check each object on screen for collisions with the others. For efficiency's sake you should ensure that you only check each pair once - i.e. if you have objects A, B, C, D, check A & D but not also D & A.
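Iterating each unordered pair exactly once is a one-liner in most languages; for example (Python shown for brevity; hit stands for whatever narrow-phase test you use):

```python
from itertools import combinations

def collisions(objects, hit):
    """Check every unordered pair exactly once (A-D but never also D-A)
    and return the pairs for which the `hit` predicate is true."""
    return [(a, b) for a, b in combinations(objects, 2) if hit(a, b)]
```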
If you have a collision handle it however you want (animation/points/notification/whatever you want to do).
There's way too much to cover here in a post. I'd suggest checking out the excellent writeup on the Asteroids game at cocoawithlove, especially part 3 (though it's not iOS, the principles are the same):
http://cocoawithlove.com/2009/03/asteroids-style-game-in-coreanimation.html

Per frame optimization for large datasets

Summary
New to iPhone programming, I'm having trouble picking the right optimization strategy for filtering a set of view components in a scroll view with huge content. In what area would my app gain the most performance?
Introduction
My current iPad app-in-progress lets users explore fairly large binary tree structures. The trees contain between 30 and 900 nodes, and when drawn inside a scroll view (with limited zoom) it looks like this.
The nodes' contents are stored in a SQLite backed Core Data model. It's a binary tree and if a node has children, there are always exactly two. The x and y positions are part of the model, as are the dimensions of the node connections, shown as dotted lines.
Optimization
Only about 50 nodes fit on the screen at any given time. With the largest trees containing up to 900 nodes, it's not possible to put everything in a scroll-view-controlled, zooming UIView; that's a recipe for crashes. So I have to do per-frame filtering of the nodes.
And that's where my troubles start. I don't have the experience to make a well-founded decision between the possible filtering options, and in addition I probably don't know about some really fast special magic buried deep in Objective-C or Cocoa Touch. Because the backing store is close to 200 MB in size (some 90,000 nodes in hundreds of trees), it's very time consuming to test every single tree on the iPad device, which is why I'd like to ask you guys for advice.
For all my attempts I'm putting a filter method in the scrollViewDidScroll: and scrollViewDidZoom:. I'm also blocking the main thread with the filter, because I can't show the content without the nodes anyway. But maybe someone has an idea in that area?
Because all the positioning is present in the Core Data model, I might use NSFetchRequest to do the filtering. Is that really fast though? I have the idea it's not a very optimized method.
From what I've tried, the faulted managed objects seem to fit in memory at once, but it might be tricky for the larger trees once their contents start firing faults. Is it a good idea to loop over the NSSet of nodes and see what items should be on screen?
Are there other tricks to gain performance? Would you see ways where I could use multi threading to get the display set faster, even though the model's context was created on the main thread?
Thanks for your advice,
EP.
Ironically, your binary tree could itself be divided using Binary Space Partitioning in 2D, so rendering will be very fast and the number of checks close to the minimum necessary.
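A grid of buckets is a simple stand-in for full BSP and already cuts the per-frame work to the cells under the visible rect. The cell size and the (id, x, y) tuple layout here are assumptions for illustration:

```python
from collections import defaultdict

CELL = 256.0  # cell size in points; tune to roughly the node size

def build_index(nodes):
    """Bucket nodes by grid cell. `nodes` is a list of (id, x, y)."""
    grid = defaultdict(list)
    for node_id, x, y in nodes:
        grid[(int(x // CELL), int(y // CELL))].append(node_id)
    return grid

def visible_nodes(grid, rect):
    """Return ids of nodes in cells overlapping rect = (x, y, w, h)."""
    x, y, w, h = rect
    out = []
    for cx in range(int(x // CELL), int((x + w) // CELL) + 1):
        for cy in range(int(y // CELL), int((y + h) // CELL) + 1):
            out.extend(grid.get((cx, cy), []))
    return out
```

Rebuilding the index is only needed when node positions change; scrolling and zooming only change the query rect.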