How to control gap between vehicles in SUMO at high speed?

I am doing some simulation in SUMO at a high speed of 100 km/h. The space between vehicles is large and I would like to narrow it; I think this space comes from the high speed. Does a parameter exist to control the maximum gap between vehicles in SUMO, the same as exists for the minimum gap, minGap?

This gap is controlled by the time-gap parameter tau, which can be modified just like minGap (but has seconds as its unit). The default is 1, and commonly used values for automated driving are 0.8 or 0.5, or even down to 0.2. You need to make sure, though, that your simulation step size (--step-length) is at most the time gap.
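For example, assuming a vType with id "car" exists in the scenario (the config file name and type id below are placeholders), a rough TraCI sketch in Python could set tau at runtime:

import traci

# Start SUMO with a step length no larger than the smallest tau in use.
traci.start(["sumo", "-c", "my_config.sumocfg", "--step-length", "0.5"])

# Lower the desired time gap (tau) for the hypothetical vehicle type "car".
traci.vehicletype.setTau("car", 0.5)

# Run the simulation to completion.
while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()

traci.close()

Alternatively, tau can simply be set as an attribute of the vType element in the route file, right next to minGap.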


Why does a decrease in cross-sectional area increase the pressure?

When the cross section of the flow tube decreases, the flow speed increases, and therefore the pressure decreases.
Can someone explain to me why this is true? I would have thought that as the cross-section decreases, the pressure would increase.
This is related to the continuity equation of fluid mechanics (assuming the fluid is incompressible).
If we have two cross-sections of areas A1 and A2 with velocities V1 and V2 respectively, then according to the continuity equation
A1*V1 = A2*V2, or equivalently
V2 = (A1*V1)/A2
V2 is inversely proportional to A2, so the velocity increases as the area decreases.
Further, we have a theorem in fluid mechanics called Bernoulli's theorem, which states that the sum of all energies at any cross-section is constant. So if the velocity (i.e. kinetic energy) increases at any section, there will be a decrease in pressure (i.e. pressure energy).
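As a quick numeric check (all values below are made-up examples), a few lines of Python confirm that halving the area doubles the velocity and lowers the pressure:

# Continuity and Bernoulli for an incompressible, horizontal flow.
rho = 1000.0          # water density, kg/m^3
A1, A2 = 0.02, 0.01   # cross-sectional areas, m^2 (the pipe narrows by half)
V1 = 2.0              # upstream velocity, m/s
p1 = 200_000.0        # upstream static pressure, Pa

# Continuity: A1*V1 = A2*V2  =>  V2 = A1*V1/A2
V2 = A1 * V1 / A2

# Bernoulli (no height change): p1 + rho*V1^2/2 = p2 + rho*V2^2/2
p2 = p1 + 0.5 * rho * (V1**2 - V2**2)

print(V2)  # 4.0 m/s: the velocity doubles as the area halves
print(p2)  # 194000.0 Pa: the pressure drops in the narrow section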
Think of it this way: what is pressure in the first place?
Pressure is force acting perpendicular to a unit area, right? So the fluid particles are exerting force on that unit area.
Imagine 10 people standing in an elevator next to each other. They are too many to fit across the width of the elevator, so they push hard against its walls: a large force per area on the walls, hence a large pressure. Now imagine that instead of standing 10 abreast, they arrange themselves into 5 rows of two. They will be more comfortable, they won't push against the walls nearly as much, and so the walls feel a smaller force and a smaller pressure. That is an example showing that physics isn't just numbers defining what is going to happen: Bernoulli's equation predicts the pressure decrease based on the same logic. Science works :D

ValveLinear Model Modelica Standard Library - Working Principle

I am implementing the ValveLinear model from the Modelica standard fluid library in a model of mine using Dymola. I have some questions regarding its parameters which I hope to clear up:
The key parameters for this valve are as follows:
parameter Medium.MassFlowRate m_flow_nominal
  "Nominal mass flowrate at full opening";
final parameter Types.HydraulicConductance k = m_flow_nominal/dp_nominal
  "Hydraulic conductance at full opening";
Modelica.Blocks.Interfaces.RealInput opening(min=0, max=1)
  "=1: completely open, =0: completely closed";
The mass flow through the valve is then calculated as
m_flow = opening*k*dp;
Am I right in assuming that m_flow_nominal is the maximum mass flow rate with a linear drop off in mass_flow down to zero as opening goes from 1 to 0?
Furthermore, is dp_nominal the corresponding minimum pressure drop across the valve (i.e. at fully open)? Would we therefore see a linear increase in dp from dp_nominal to some maximum value as opening goes from 1 to 0?
The answer may seem trivial, but I have run some examples with valves in Dymola, and in some cases dp remains constant across the valve as the opening is varied, which doesn't make sense to me.
The nominal mass flow rate and pressure drop are just design values used to calculate the valve coefficient k (a fixed relation between pressure drop and mass flow). Since no "nominal opening degree" can be specified in ValveLinear, the valve opening at the design point is assumed to be one (fully open valve).
The mass flow rate through the valve is not limited to m_flow_nominal. If you double the pressure drop the mass flow through the valve will double, regardless of the nominal mass flow rate.
Consider an example model where m_flow_nominal is 5 kg/s and dp_nominal is 10 bar.
At time = 0 s the (fixed) pressure drop over the valve is 10 bar and the valve is fully open. Therefore, the mass flow through the valve is 5 kg/s.
At time = 1 s the pressure drop over the valve is increased by 50 % (from 10 to 15 bar). The mass flow increases by 50 % as well (to 7.5 kg/s).
At time = 3 s the valve opening is reduced by 50 % (from fully to half open). The pressure drop remains at 15 bar (of course, since it's a boundary value) while the mass flow rate is reduced by 50 % (to 3.75 kg/s).
Regarding your second question: the pressure drop is not limited. If the mass flow through the valve is given as a boundary condition (e.g. if the source in the model is replaced with a MassFlowSource_T) and the mass flow rate is reduced to half of the nominal value (from 5 to 2.5 kg/s), the pressure drop will also be reduced to half of the nominal value (from 10 to 5 bar). If the mass flow rate is zero, so will the pressure drop be.
If, on the other hand, you fix the mass flow rate to a value > 0 kg/s and ramp the valve opening towards zero, the pressure drop will approach infinity.
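As a minimal sketch (in Python rather than Modelica, my own construction) of the linear valve law, reproducing the numbers from the example above:

# ValveLinear law: m_flow = opening * k * dp
m_flow_nominal = 5.0              # kg/s at full opening and nominal dp
dp_nominal = 10e5                 # 10 bar expressed in Pa
k = m_flow_nominal / dp_nominal   # hydraulic conductance, kg/(s*Pa)

def m_flow(opening, dp):
    # Mass flow for a given opening (0..1) and pressure drop (Pa).
    return opening * k * dp

print(m_flow(1.0, 10e5))   # 5.0 kg/s   (design point)
print(m_flow(1.0, 15e5))   # 7.5 kg/s   (dp raised by 50 %)
print(m_flow(0.5, 15e5))   # 3.75 kg/s  (opening halved)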
Best regards,
Rene Just Nielsen

Precision of up to 1 gram

I know what precision means. But what does it mean for a weighing machine to have a precision of up to 1 gram? Does it mean that if the actual weight is 100 grams, it may show 99 grams once, 100 the next time, and 101 the third time?
Thanks in advance
It means that the different values you would get differ by at most 1 gram.
The key here is the difference between precision and accuracy.
If you fired 100 arrows, high precision would mean each arrow falls on the same point, regardless of whether that point is the bullseye. In other words, precision refers to the variability of the distribution.
Accuracy, on the other hand, is whether the mean/center of that distribution is located around your intended target (the bullseye in the arrow example).
High precision + low accuracy means the arrows are tightly grouped, but not necessarily at the target.
Low precision + high accuracy means the arrows are spread widely over an area, but the center of that area/distribution is the intended target/bullseye.
So in your weighing machine example, a precision of 1 gram refers to the variability of all your weighings. The lower the number, the more consistent the measurement.
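A small simulation (with made-up numbers, not from the answer) makes the distinction concrete: a biased but consistent scale has high precision and low accuracy, and vice versa:

import random

random.seed(1)
true_weight = 100.0

# Scale A: precise but inaccurate (tight spread, biased +2 g).
scale_a = [true_weight + 2.0 + random.gauss(0, 0.2) for _ in range(100)]
# Scale B: accurate but imprecise (no bias, spread around 1 g).
scale_b = [true_weight + random.gauss(0, 1.0) for _ in range(100)]

def mean(xs):
    return sum(xs) / len(xs)

def spread(xs):  # standard deviation, i.e. the precision
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

print(mean(scale_a), spread(scale_a))  # about 102.0 and 0.2: precise, not accurate
print(mean(scale_b), spread(scale_b))  # about 100.0 and 1.0: accurate, not precise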

Computing the approximate LCM of a set of numbers

I'm writing a tone generator program for a microcontroller.
I use a hardware timer to trigger an interrupt and check whether I need to set the signal high or low at a particular moment for a given note.
I'm using pretty limited hardware, so the slower I run the timer the more time I have to do other stuff (serial communication, loading the next notes to generate, etc.).
I need to find the frequency at which I should run the timer to get an optimal result, that is, to generate a frequency that is accurate enough while still leaving time to compute the other stuff.
To achieve this, I need to find an approximate LCM of all the frequencies I need to play (approximate within some percentage, since the higher a frequency is, the more its value can be off before a human ear notices the error): this value will be the frequency at which to run the hardware timer.
Is there a simple enough algorithm to compute such a number? (EDIT, I shall clarify "simple enough": fast enough to run in a time t << 1 s for fewer than 50 values on an 8-bit AVR microcontroller, and implementable in a few dozen lines at worst.)
LCM(a,b,c) = LCM(LCM(a,b),c)
Thus you can compute LCMs in a loop, bringing in frequencies one at a time.
Furthermore,
LCM(a,b) = a*b/GCD(a,b)
and GCDs are easily computed without any factoring by using the Euclidean algorithm.
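In Python, for instance, the folding loop might look like this (math.gcd implements the Euclidean algorithm):

from math import gcd

def lcm_of(values):
    # LCM(a, b) = a*b / GCD(a, b), folded over the list one value at a time.
    result = 1
    for v in values:
        result = result * v // gcd(result, v)
    return result

print(lcm_of([440, 660, 880]))  # 2640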
To make this an algorithm for approximate LCMs, do something like round lower frequencies to multiples of 10 Hz and higher frequencies to multiples of 50 Hz.
Another idea that is a bit more principled would be to first convert the frequency to an octave (I think the formula is f maps to log(f/16)/log(2)). This will give you a number between 0 and 10 (or slightly higher, but anything above 10 is almost beyond human hearing, so you could perhaps round down). You could break 0-10 into, say, 50 intervals 0.0, 0.2, 0.4, ... and for each number compute ahead of time the frequency corresponding to that octave (which would be f = 16*2^o where o is the octave). For each of these, go through by hand once and for all and find a nearby round number that has a number of smallish prime factors. For example, if o = 5.4 then f = 675.58; round to 675. If o = 5.8 then f = 891.44; round to 890. Assemble these 50 numbers into a sorted array, and use binary search to replace each of your frequencies by the closest frequency in the array.
An idea:
Project the frequency range onto a smaller interval.
Let's say your frequency range is from 20 to 20000 and you aim for 2% accuracy; you would then calculate over a 1-50 range instead. It has to be a non-linear transformation to keep the accuracy for lower frequencies. The goal is both to compute the result faster and to end up with a smaller LCM.
Use a prime factor table to easily compute the LCM on that reduced range.
Store the pre-calculated prime factor powers in an array (about 50x7 for the range 1-50), and then use it for the LCM: the LCM of a set of numbers is the product of the highest power of each prime factor occurring in any of them. It's easy to code and blazingly fast to run.
Do the first step in reverse to get the final number.
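A rough sketch of the table-based LCM on the reduced range (the prime list and range are my assumptions; on a real 8-bit AVR you would precompute the exponent table rather than factor at runtime):

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]  # covers 1..50

def factor_powers(n):
    # Exponent of each prime in n, for n in the reduced 1..50 range.
    powers = []
    for p in PRIMES:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        powers.append(e)
    return powers

def lcm_small(numbers):
    # LCM = product over primes of p**(highest exponent seen in any number).
    max_pow = [0] * len(PRIMES)
    for n in numbers:
        for i, e in enumerate(factor_powers(n)):
            max_pow[i] = max(max_pow[i], e)
    result = 1
    for p, e in zip(PRIMES, max_pow):
        result *= p ** e
    return result

print(lcm_small([12, 18, 50]))  # 900 = 2^2 * 3^2 * 5^2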

How do I keep time without cumulative error?

How can you keep track of time in a simple embedded system, given that you need a fixed-point representation of the time in seconds, and that your time between ticks is not precisely expressible in that fixed-point format? How do you avoid cumulative errors in those circumstances?
This question is a reaction to this article on slashdot.
0.1 seconds cannot be neatly expressed as a binary fixed-point number, just as 1/3 cannot be neatly expressed as a decimal fixed-point number. Any binary fixed-point representation has a small error. For example, if there are 8 binary bits after the point (i.e. using an integer value scaled by 256), 0.1 times 256 is 25.6, which will be rounded to either 25 or 26, resulting in an error on the order of -2.3% or +1.6% respectively. Adding more binary bits after the point reduces the scale of this error, but cannot eliminate it.
With repeated addition, the error gradually accumulates.
How can this be avoided?
One approach is not to try to compute the time by repeated addition of this 0.1 seconds constant, but to keep a simple integer clock-tick count. This tick count can be converted to a fixed-point time in seconds as needed, usually using a multiplication followed by a division. Given sufficient bits in the intermediate representations, this approach allows for any rational scaling, and doesn't accumulate errors.
For example, if the current tick count is 1024, we can get the current time (in fixed point with 8 bits after the point) by multiplying that by 256, then dividing by 10 - or equivalently, by multiplying by 128 then dividing by 5. Either way, there is an error (the remainder in the division), but the error is bounded since the remainder is always less than 5. There is no cumulative error.
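A minimal sketch of this approach in Python (10 ticks per second and 8 fractional bits, as in the example):

def ticks_to_fixed_seconds(ticks, ticks_per_second=10, frac_bits=8):
    # Multiply first, divide last: the only error is the bounded remainder
    # of this single division; nothing accumulates across ticks.
    return (ticks << frac_bits) // ticks_per_second

t = ticks_to_fixed_seconds(1024)   # 1024 ticks = 102.4 s
print(t, t / 256)                  # 26214, 102.3984375 (error < 1/256 s)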
Another approach might be useful in contexts where integer multiplication and division are considered too costly (which should be getting pretty rare these days). It borrows an idea from Bresenham's line-drawing algorithm. You keep the current time in fixed point (rather than a tick count), but you also keep an error term. When the error term grows too large, you apply a correction to the time value, thus preventing the error from accumulating.
In the 8-bits-after-the-point example, the representation of 0.1 seconds is 25 (256/10) with an error term (remainder) of 6. At each step, we add 6 to our error accumulator. Based on this so far, the first two steps are...
Clock Seconds Error
----- ------- -----
25 0.0977 6
50 0.1953 12
At the second step, the error value has overflowed (reached 10 or more). Therefore, we increment the clock and subtract 10 from the error. This happens every time the error value reaches 10 or higher.
Therefore, the actual sequence is...
Clock Seconds Error Overflowed?
----- ------- ----- -----------
25 0.0977 6
51 0.1992 2 Yes
76 0.2969 8
102 0.3984 4 Yes
There is almost always an error (the clock is precisely correct only when the error value is zero), but the error is bounded by a small constant. There is no cumulative error in the clock value.
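A short Python simulation of this scheme reproduces the table above:

INCREMENT = 256 // 10   # 25: fixed-point representation of 0.1 s
REMAINDER = 256 % 10    # 6: error added at each step
DIVISOR = 10

clock = 0   # fixed-point seconds, 8 bits after the point
error = 0
for step in range(1, 5):
    clock += INCREMENT
    error += REMAINDER
    if error >= DIVISOR:    # overflow: apply the correction
        error -= DIVISOR
        clock += 1
    print(step, clock, clock / 256, error)
# Prints clocks 25, 51, 76, 102 with errors 6, 2, 8, 4, matching the table.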
A hardware-only solution is to arrange for the hardware clock ticks to run very slightly fast - precisely fast enough to compensate for cumulative losses caused by the rounding-down of the repeatedly added tick-duration value. That is, adjust the hardware clock tick speed so that the fixed-point tick-duration value is precisely correct.
This only works if there is only one fixed-point format used for the clock.
Why not have a 0.1 s counter, increment your seconds counter every ten ticks, and wrap the 0.1 counter back to 0?
In this particular instance, I would simply have kept the time count in tenths of a second (or milliseconds, or whatever time scale is appropriate for the application). I do this all the time in small systems or control systems.
So a time value of 100 hours would be stored as 3_600_000 ticks: zero error (other than error that might be introduced by the hardware).
The problems that are introduced by this simple technique are:
you need to account for the larger numbers. For example, you may have to use a 64-bit counter rather than a 32-bit counter
all your calculations need to be aware of the units used - this is the area that is most likely going to cause problems. I try to help with this problem by using time counters with a uniform unit. For example, this particular counter needs only 10 ticks per second, but another counter might need millisecond precision. In that case, I'd consider making both counters millisecond precision so they use the same units even though one doesn't really need that precision.
I've also had to play some other tricks like this with timers that aren't 'regular'. For example, I worked on a device that required data acquisition to occur 300 times a second. The hardware timer fired once a millisecond, and there's no way to scale the millisecond timer to get exactly 1/300th-of-a-second units. So we had to have logic that would perform the data acquisition after every 3, 3, and 4 ticks in turn to keep the acquisition from drifting, as sketched below.
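A hypothetical sketch of that 3-3-4 pattern (the function names are made up; the real version lived in an interrupt handler):

INTERVALS = [3, 3, 4]  # ms between acquisitions; 3 + 3 + 4 = 10 ms per 3 samples

phase = 0
countdown = INTERVALS[0]

def acquire_sample():
    pass  # stand-in for the actual device read

def on_millisecond_tick():
    # Called from the 1 ms hardware timer; fires acquire_sample() 300 times/s.
    global phase, countdown
    countdown -= 1
    if countdown == 0:
        acquire_sample()
        phase = (phase + 1) % len(INTERVALS)
        countdown = INTERVALS[phase]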
If you need to deal with hardware time error, then you need more than one time source and use them together to keep the overall time in sync. Depending on your needs this can be simple or pretty complex.
Something I've seen implemented in the past: the increment value can't be expressed precisely in the fixed-point format, but it can be expressed as a fraction. (This is similar to the "keep track of an error value" solution.)
Actually, in this case the problem was slightly different, but conceptually similar: the problem wasn't a fixed-point representation as such, but deriving a timer from a clock source that wasn't a perfect multiple. We had a hardware clock that ticks at 32,768 Hz (common for a low-power timer based on a watch crystal). We wanted a millisecond timer from it.
The millisecond timer should increment every 32.768 hardware ticks. The first approximation is to increment every 33 hardware ticks, for a nominal 0.7% error. But, noting that 0.768 is 768/1000, or 96/125, you can do this:
Keep a variable for the "fractional" value. Start it at 0.
Wait for the hardware timer to count 32.
While true:
  Increment the millisecond timer.
  Add 96 to the "fractional" value.
  If the "fractional" value is >= 125, subtract 125 from it and wait for the hardware timer to count 33.
  Otherwise (the "fractional" value is < 125), wait for the hardware timer to count 32.
There will be some short-term "jitter" on the millisecond counter (32 vs. 33 hardware ticks), but the long-term average will be exactly 32.768 hardware ticks per millisecond.
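A short simulation (a sketch, not the original firmware) shows that long-term average coming out exactly right:

hardware_ticks = 0
fraction = 0
for ms in range(1_000_000):
    fraction += 96
    if fraction >= 125:
        fraction -= 125
        hardware_ticks += 33   # the occasional long millisecond
    else:
        hardware_ticks += 32   # the common short millisecond

print(hardware_ticks / 1_000_000)  # 32.768 hardware ticks per millisecond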