How to define routes for a large grid network in SUMO?

When using SUMO to create a grid network, it seems we have to define routes for the different types of vehicles. But for a large grid network such as 10*10, it would be impossible to manually input the routes for the different flows, especially when considering turning at intersections.
My goal is to have a large network, let flows run through it with certain turning probabilities at intersections, and then use TraCI to control the signal lights.

There are a few ways in which you could manage multiple routes:
1. Define a trip and/or flow with from and to edge attributes. The DUAROUTER application will find the shortest possible route, or the best possible route if edge weights are provided.
2. The above (point 1) can also be achieved by assigning fromTaz/toTaz (traffic assignment zones).
NOTE - For both points 1 and 2, the via attribute can force the vehicles to travel through a given edge or a given set of edges.
3. Another way to generate multiple routes is to generate the 10*10 network and note down (in the program) all the connections (this is done so that SUMO does not throw any "no connection" errors). A simple program can then be written in conjunction with TraCI that turns a vehicle from a given edge onto a different edge at any junction (see the sketch after this list). This will be time consuming, but if your focus is not on the overall simulation time, this approach will be the most apt for you.
4. Another way is to add rerouter devices on all edges leading to a junction. You can define new destinations and routes there. This will be the easiest solution for a large network.
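For point 3, here is a minimal hedged sketch of such a TraCI program in Python. The file names grid.net.xml and grid.sumocfg are placeholders, sumolib is assumed to be available from <SUMO_HOME>/tools, and the turning probabilities here are uniform (a weighted random choice per connection would give specific turning probabilities). Signal control can be added in the same stepping loop.

    import random

    import sumolib   # shipped with SUMO under <SUMO_HOME>/tools
    import traci

    net = sumolib.net.readNet("grid.net.xml")          # placeholder network file

    def random_successor(edge_id):
        """Pick a random outgoing edge (uniform turning probabilities)."""
        outgoing = list(net.getEdge(edge_id).getOutgoing().keys())
        return random.choice(outgoing).getID() if outgoing else None

    traci.start(["sumo", "-c", "grid.sumocfg"])        # placeholder config file
    while traci.simulation.getMinExpectedNumber() > 0:
        traci.simulationStep()
        for veh in traci.vehicle.getIDList():
            route = traci.vehicle.getRoute(veh)
            current = traci.vehicle.getRoadID(veh)
            if current == route[-1]:                   # vehicle is on its last route edge
                nxt = random_successor(current)
                if nxt is not None:
                    traci.vehicle.changeTarget(veh, nxt)   # extend the journey
        # traffic-light control could go here, e.g. traci.trafficlight.setPhase(tls_id, 0)
    traci.close()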

Related

Is there a parameter to allow drivers to jostle from an empty lane to a busy lane on Eclipse SUMO

I'm simulating drivers with Eclipse SUMO using TraCI. I have a road segment of 1 km (highway) that contains five lanes: two right lanes that lead to a right turn (an exit), and three left lanes that lead straight ahead. It seems that drivers who want to go right will stay on the right lanes regardless of the queue size, resulting in two lanes that are very slow and, beside them, three lanes that can reach a speed of 120 km/h.
In reality, some drivers do not wait in the 1 km queue but decide to jostle into the queue in the middle, while others might regret standing in the queue and decide to go straight instead. This results in a slowdown on the three left lanes, i.e., it is rare to have two adjacent lanes (lanes 2 and 3 from the right in this case) with a 100 km/h speed difference (for safety reasons).
My question is whether a parameter that limits the speed ratio between two adjacent lanes exists, or whether there is another way to simulate this kind of behavior, as I could not find any.
There are two different behaviors you propose as alternatives to standing in the queue. The first one is changing lanes later, which you might achieve by reducing the eagerness to perform a strategic lane change via the lcStrategic parameter.
The second idea is to change the route and/or the destination. To change the route automatically you can enable the rerouting device for the vehicles. This will work only if there really is a faster route (taking the jam into account) in your network which leads to the same destination. Another possibility is to employ rerouters to set new routes or destinations. You can define a probability there, but it is not possible to let this depend on the size or delay of the jam.
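For the lcStrategic idea, a minimal hedged sketch in Python (the config file name highway.sumocfg and the value 0.2 are placeholders; lcStrategic defaults to 1.0, and smaller values make drivers postpone strategic lane changes). The same parameter can also be set statically as a vType attribute in the route file.

    import traci

    traci.start(["sumo", "-c", "highway.sumocfg"])     # placeholder config file
    while traci.simulation.getMinExpectedNumber() > 0:
        traci.simulationStep()
        for veh in traci.simulation.getDepartedIDList():
            # lower the eagerness for strategic lane changes of newly inserted vehicles
            traci.vehicle.setParameter(veh, "laneChangeModel.lcStrategic", "0.2")
    traci.close()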

For an undirected connected graph, how can I create an index of the bridges that is maintained after removal of edges?

Creating an index itself is the same as computing the list of bridges. The question is about how to maintain that index after removing an edge without recomputing it altogether.
Maybe storing the list of all (simple) cycles and removing all cycles that required that edge (index maintenance) would work, together with "is this edge in a cycle" to check whether it is still required (an edge is a bridge exactly when it lies on no cycle). For a bigger graph, this would be quite expensive to compute initially, because the number of cycles grows exponentially with the degree of connectedness.
EDIT: an algorithm that would give a probabilistic answer might also work
P.S. Here's an excerpt from "Introduction to Algorithms" for the terminology
One way to reduce the amount of work when recomputing the list of bridges after an edge removal would be to build a list of no-bridge components along with the list of bridges (where a no-bridge component means a maximal connected subgraph without any bridge - in the picture, the BCC components "2+3" would form one such no-bridge component, since they don't have any bridge, only one articulation point). Bridges in the graph always connect two such no-bridge components. Also, if you merge all vertices of each no-bridge component into one vertex, you end up with a graph which has only one edge per bridge of the original graph and is guaranteed to have no cycles (otherwise a bridge could be removed and the graph would stay connected). So, for example, for the graph in the picture you can look at it as the following:
Component 1 - Components 2+3 - Component 4 - Component 6
(with Component 5 and two one-point components also attached to this chain by bridges)
Now with such a representation, the algorithm for updating the list of bridges needs to look at the edge being deleted and act as follows:
If the deleted edge is one of the bridges, the graph is no longer connected.
Otherwise the deleted edge belongs to one of the no-bridge components. Every edge that was a bridge will still be a bridge, and there might be new bridges appearing. A new bridge can only appear inside the component to which the deleted edge belongs, so we can rebuild the list of bridges only for that subgraph and, if there are bridges there, split that component into new no-bridge components.
In the example in the picture, suppose an edge from component 4 is deleted. In that case we only need to look at component 4 itself, determine that all 3 remaining edges are now bridges, add them to the set of bridges, and replace no-bridge component 4 with three "one-point" components (though one-point components are not really needed for this purpose, since they don't contain any edges).
So we always have to rebuild the list of bridges only for the no-bridge component that the edge is being deleted from. Unfortunately, if your original graph had no bridges (i.e. it was one large no-bridge component) this doesn't really help you much, though you could argue that the starting point "there are no bridges" doesn't contain a lot of information either.
I don't claim this is the best you can do, but it does answer your question "maintain that index after removing an edge without recomputing it altogether" to the degree that you only need to check one no-bridge component after each edge removal.
For the algorithm that builds the list of bridges from scratch (at the beginning of the process, or when you need to apply it to one no-bridge component) you can, for example, use the algorithm described here, which works in O(V+E) time.
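For illustration, here is a minimal sketch of such an O(V+E) bridge-finding routine (DFS with discovery times and low-links). The same routine can be run on the whole graph initially and then only on the affected no-bridge component after a deletion. It assumes a simple graph (parallel edges would need extra bookkeeping) and uses recursion, so very deep graphs would need an iterative variant or a raised recursion limit.

    from collections import defaultdict

    def find_bridges(adj):
        """Return the bridges of an undirected graph given as {vertex: neighbours}."""
        disc, low, bridges, timer = {}, {}, [], [0]

        def dfs(u, parent_edge):
            disc[u] = low[u] = timer[0]
            timer[0] += 1
            for v in adj[u]:
                if v not in disc:
                    dfs(v, (u, v))
                    low[u] = min(low[u], low[v])
                    if low[v] > disc[u]:        # nothing in v's subtree reaches above u
                        bridges.append((u, v))
                elif (v, u) != parent_edge:     # ignore only the edge we arrived by
                    low[u] = min(low[u], disc[v])

        for u in list(adj):
            if u not in disc:
                dfs(u, None)
        return bridges

    # Example: two triangles joined by a single edge -> that edge is the only bridge.
    g = defaultdict(list)
    for a, b in [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]:
        g[a].append(b)
        g[b].append(a)
    print(find_bridges(g))   # [(2, 3)]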

M-ary simulation in LabVIEW

I am totally new to LabVIEW. I have the block diagram of an M-ary communication system:
How can I create the same project by myself?
Where can I find these components? A step-by-step approach is welcome.
I am getting an error in the above simulation:
You have two or more cluster data types wired together, but the clusters have different kinds or numbers of elements.
Cluster FSK system parameters, a cluster of 3 elements,
conflicts with cluster ASK system parameters, a cluster of 2 elements.
Please help.
In your example the broken wire (the black dashed wire with the red cross) is caused by connecting two different (incompatible) data types. The VI on the left is passing out one cluster type (the FSK system parameters) while the two VIs to the right expect another (the ASK system parameters).
I think what has happened here is that the RF toolkit allows you to use both FSK and ASK as modulation types and the wrong one has been selected. The left VI has a polymorphic selector under the icon which can be used to select different operations. If you can change this from FSK(M) to ASK(M), that should remove the broken wire.
There is an example on the NI website that may be of use to you. It includes steps to create a working VI: RF Simulation Demo: Amplitude Shift Keying

Layered and Pipe-and-Filter

I'm a bit confused about the situations in which these patterns should be used, because in some sense they seem similar to me.
I understand that Layered is used when a system is complex and can be divided by its hierarchy, so that each layer has a function at a different level of the hierarchy and uses the functions of the lower level, while at the same time exposing its functions to the higher level.
On the other hand, Pipe-and-Filter is based on independent components that process data and can be connected by pipes so that they form a whole that executes the complete algorithm.
But if the hierarchy does not exist, does it all come down to the question of whether the order of the modules can be changed?
An example that confuses me is a compiler. It is an example of pipe-and-filter architecture, but the order of some modules is relevant, if I'm not wrong?
Some example to clarify things and remove my confusion would be nice. Thanks in advance...
Maybe it is too late to answer but I will try anyway.
The main difference between the two architectural styles is the flow of data.
On one hand, for Pipe-and-Filter, the data are pushed from the first filter to the last one.
And they WILL be pushed; otherwise, the process is not deemed a success.
For example, in a car manufacturing factory, the stations are placed one after another.
The car will be assembled from the first station to the last.
If nothing goes wrong, you will get a complete car at the end.
And this is also true for the compiler example: you get the binary code from the last compilation step.
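As a toy Python sketch of that flow (the filter functions are invented stand-ins, not a real compiler), the data are pushed through every filter in order and the result comes out of the last one:

    def tokenize(source):          # filter 1
        return source.split()

    def to_upper(tokens):          # filter 2
        return [t.upper() for t in tokens]

    def join(tokens):              # filter 3
        return " ".join(tokens)

    def pipeline(data, filters):
        for f in filters:          # each filter's output feeds the next filter
            data = f(data)
        return data

    print(pipeline("a small example", [tokenize, to_upper, join]))  # "A SMALL EXAMPLE"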
On the other hand, Layered architecture dictates that the components are grouped in so-called layers.
Typically, the client (the user or component that accesses the system) can access the system only from the top-most layer. He also does not care how many layers the system has. He cares only about the outcome from the layer that he is accessing (which is the top-most one).
This is not the same as Pipe-and-Filter where the output comes from the last filter.
Also, as you said, the components in the same layer are using "services" from the lower layers.
However, not all services from the lower layer must be accessed.
Nor must the upper layer access the lower layer at all.
As long as the client gets what he wants, the system is said to work.
For example, in the TCP/IP architecture, the user uses a web browser at the application layer without any knowledge of how the browser or any of the underlying protocols work.
To your question, the "hierarchy" in layered architecture is just a logical model.
You can just say they are packages or some groups of components accessing each other in chain.
The key point here is that the results must be returned in a chain from the last component back to the first one (the one the client is accessing) too.
(In contrast to Pipe-and-Filter where the client gets the result from the last component.)
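And a correspondingly toy layered sketch (the layer names are invented, not a prescribed decomposition): the client talks only to the top layer, each layer uses the layer below it, and the result travels back up the chain.

    class Persistence:                       # lowest layer
        def load(self, key):
            return {"user:1": "Alice"}.get(key)

    class BusinessLogic:                     # middle layer, uses the layer below
        def __init__(self, persistence):
            self._persistence = persistence
        def greeting(self, user_id):
            name = self._persistence.load(f"user:{user_id}")
            return f"Hello, {name}!"

    class Api:                               # top layer, the only one the client sees
        def __init__(self, logic):
            self._logic = logic
        def get_greeting(self, user_id):
            return self._logic.greeting(user_id)

    api = Api(BusinessLogic(Persistence()))
    print(api.get_greeting(1))               # "Hello, Alice!"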
1.) Layered architecture is a hierarchical architecture: it views the entire system as a hierarchy of structures. The software system is decomposed into logical modules at different levels of the hierarchy.
2.) Pipe-and-Filter, on the other hand, is a data-flow architecture: it views the entire system as a series of transformations on successive sets of data, where the data and the operations on it are independent of each other.

Disambiguating HPCT artificial intelligence architecture

I am trying to construct a small application that will run on a robot with very limited sensory capabilities (NXT with gyroscope/ultrasonic/touch) and the actual AI implementation will be based on hierarchical perceptual control theory. I'm just looking for some guidance regarding the implementation as I'm confused when it comes to moving from theory to implementation.
The scenario
My candidate scenario will have 2 behaviors: one is to avoid obstacles, the second is to drive in a circular motion with a given diameter.
The problem
I've read several papers but could not determine how I should classify my virtual machines (layers of behavior?) and how they should communicate with lower levels and resolve internal conflicts.
This is the list of papers I've gone through to find my answers, but sadly could not:
pct book
paper on multi-legged robot using hpct
pct alternative perspective
and the following ideas are the results of my brainstorming:
The avoidance layer would be part of my 'sensation layer', because it only identifies certain values, like close objects within a specific range of ultrasonic sensor readings. The second layer would be part of the 'configuration layer', as it would try to detect the pattern in which the robot is driving (straight line, random, circle, or not moving at all) using the gyroscope and motor readings. The 'intensity layer' represents all raw sensor values, so it is not something to consider as part of the design.
The second idea is to have both layers as 'configuration', because they would be responding to direct sensor values from the 'intensity layer', and they would be represented in a mesh-like design where each layer can send its reference values to the lower layer that interfaces with the actuators.
My problem here is how conflicting behaviour would be handled (maneuvering around objects while continuing to run in circles). Should it be similar to Subsumption, where certain layers get suppressed/inhibited and there is some sort of priority system? Forgive my short explanation, as I did not want to make this a lengthy question.
/Y
Here is an example of a robot which implements HPCT and addresses some of the issues relevant to your project, http://www.youtube.com/watch?v=xtYu53dKz2Q.
It is interesting to see a comparison of these two paradigms, as they both approach the field of AI at a similar level, that of embodied agents exhibiting simple behaviors. However, there are some fundamental differences between the two which means that any comparison will be biased towards one or the other depending upon the criteria chosen.
The main difference is one of biological plausibility. Subsumption architecture, although inspired by some aspects of biological systems, is not intended to theoretically represent such systems. PCT, on the other hand, is exactly that: a theory of how living systems work.
As far as PCT is concerned then, the most important criterion is whether or not the paradigm is biologically plausible, and criteria such as accuracy and complexity are irrelevant.
The other main difference is that Subsumption concerns action selection whereas PCT concerns control of perceptions (control of output versus control of input), which makes any comparison on other criteria problematic.
I had a few specific comments about your dissertation on points that may need
clarification or may be typos.
"creatures will attempt to reach their ultimate goals through
alternating their behaviour" - do you mean altering?
"Each virtual machine's output or error signal is the reference signal of the machine below it" - A reference signal can be a function of one or more output signals from higher-level systems, so more strictly this would be, "Each virtual machine's output or error signal contributes to the reference signal of a machine at a lower level".
"The major difference here is that Subsumption does not incorporate the ideas of 'conflict' " - Well, it does as the purpose of prioritising the different layers, and sub-systems, is to avoid conflict. Conflict is implicit, as there is not a dedicated system to handle conflicts.
"'reorganization' which require considering the goals of other layers." This doesn't quite capture the meaning of reorganisation. Reorganisation happens when there is prolonged error in perceptual control systems, and is a process whereby the structure of the systems changes. So rather than just the reference signals changing the connections between systems or the gain of the systems will change.
"Design complexity: this is an essential property for both theories." Rather than an essential property, in the sense of being required, it is a characteristic, though it is an important property to consider with respect to the implementation or usability of a theory. Complexity, though, has no bearing on the validity of the theory. I would say that PCT is a very simple theory, though complexity arises in defining the transfer functions, but this applies to any theory of living systems.
"The following step was used to create avoidance behaviour:" Having multiple nodes for different speeds seem unnecessarily complex. With PCT it should only be necessary to have one such node, where the distance is controlled by varying the speed (which could be negative).
Section 4.2.1 "For example, the avoidance VM tries to respond directly to certain intensity values with specific error values." This doesn't sound like PCT at all. With PCT, systems never respond with specific error (or output) values, but change the output in order to bring the intensity (in this case) input in to line with the reference.
"Therefore, reorganisation is required to handle that conflicting behaviour. I". If there is conflict reorganisation may be necessary if the current systems are not able to resolve that conflict. However, the result of reorganisation may be a set of systems that are able to resolve conflict. So, it can be possible to design systems that resolve conflict but do not require reorganisation. That is usually done with a higher-level control system, or set of systems; and should be possible in this case.
In this section there is no description of what the controlled variables are, which is of concern. I would suggest being clear about what are goal (variables) of each of the systems.
"Therefore, the designed behaviour is based on controlling reference values." If it is only reference values that are altered then I don't think it is accurate to describe this as 'reorganisation'. Such a node would better be described as a "conflict resolution" node, which should be a higher-level control system.
Figure 4.1. The links annotated as "error signals" are actually output signals. The error signals are the links between the comparator and the output.
"the robot never managed to recover from that state of trying to reorganise the reference values back and forth." I'd suggest the way to resolve this would be to have a system at a level above the conflicted systems, and takes inputs from one or both of them. The variable that it controls could simply be something like, 'circular-motion-while-in-open-space', and the input a function of the avoidance system perception and then a function of the output used as the reference for the circular motion system, which may result in a low, or zero, reference value, essentially switching off the system, thus avoiding conflict, or interference. Remember that a reference signal may be a weighted function of a number of output signals. Those weights, or signals, could be negative so inhibiting the effect of a signal resulting in suppression in a similar way to the Subsumption architecture.
"In reality, HPCT cannot be implemented without the concept of reorganisation because conflict will occur regardless". As described above HPCT can be implemented without reorganisation.
"Looking back at the accuracy of this design, it is difficult to say that it can adapt." Provided the PCT system is designed with clear controlled variables in mind PCT is highly adaptive, or resistant to the effects of disturbances, which is the PCT way of describing adaption in the present context.
In general, it may just require clarification in the text, but as there is a lack of description of controlled variables in the model of the PCT implementation and that, it seems, some 'behavioural' modules used were common to both implementations it makes me wonder whether PCT feedback systems were actually used or whether it was just the concept of the hierarchical architecture that was being contrasted with that of the Subsumption paradigm.
I am happy to provide more detail of HPCT implementation though it looks like this response is somewhat overdue and you've gone beyond that stage.
Partial answer from RM of the CSGnet list:
https://listserv.illinois.edu/wa.cgi?A2=ind1312d&L=csgnet&T=0&P=1261
Forget about the levels. They are just suggestions and are of no use in building a working robot.
A far better reference for the kind of robot you want to develop is the CROWD program, which is documented at http://www.livingcontrolsystems.com/demos/tutor_pct.html.
The agents in the CROWD program do most of what you want your robot to do. So one way to approach the design is to try to implement the control systems in the CROWD programs using the sensors and outputs available for the NXT robot.
Approach the design of the robot by thinking about what perceptions should be controlled in order to produce the behavior you want to see the robot perform. So, for example, if one behavior you want to see is "avoidance" then think about what avoidance behavior is (I presume it is maintaining a goal distance from obstacles) and then think about what perception, if kept under control, would result in you seeing the robot maintain a fixed distance from objects. I suspect it would be the perception of the time delay between sending and receiving the ultrasound pulses. Since the robot is moving in two-space (I presume) there might have to be two pulse sensors in order to sense the 2-D location of objects.
There are potential conflicts between the control systems that you will need to build; for example, I think there could be conflicts between the system controlling for moving in a circular path and the system controlling for avoiding obstacles. The agents in the CROWD program have the same problem and sometimes get into dead-end conflicts. There are various ways to deal with conflicts of this kind; for example, you could have a higher-level system monitoring the error in the two potentially conflicting systems and have it reduce the gain in one system or the other if the conflict (error) persists for some time.
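As a hedged sketch of that last suggestion (thresholds, gains and the decay factor are invented, not taken from the CROWD program), a supervisory loop can watch the error in the two potentially conflicting systems and damp the gain of one of them while both errors stay large:

    class Loop:
        """Bare-bones control unit that remembers its latest error."""
        def __init__(self, gain):
            self.gain = gain
            self.error = 0.0
        def step(self, reference, perception):
            self.error = reference - perception
            return self.gain * self.error

    avoid  = Loop(gain=2.0)     # obstacle-avoidance system
    circle = Loop(gain=1.0)     # circular-motion system
    persistent_conflict = 0     # steps during which both errors have stayed large

    def supervise():
        """If both loops keep a large error (a dead-end conflict), damp the
        circular-motion loop so that avoidance can win."""
        global persistent_conflict
        if abs(avoid.error) > 5.0 and abs(circle.error) > 0.5:
            persistent_conflict += 1
        else:
            persistent_conflict = 0
        if persistent_conflict > 10:
            circle.gain *= 0.9  # gradual gain reduction while the conflict persists

    for _ in range(30):
        avoid.step(reference=30.0, perception=5.0)   # stuck 5 cm from an obstacle
        circle.step(reference=1.0, perception=0.0)   # still trying to circle
        supervise()

    print(round(circle.gain, 3))   # well below 1.0 after the persistent conflict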