SUMO - simulating traffic scenario

How can I simulate continuous traffic flow from historical data which consists of:
1. Vehicle ID;
2. Speed;
3. Coordinates
without knowing the routes of each vehicle ID?

This is a commonly asked question but probably hasn't been answered here before. Unfortunately the answer largely depends on the quality of your input data, mainly on the frequency / distance of your location updates (it would also be helpful if there were a time stamp for each datum) and on how precisely the locations fit your street network. In the best case there is a location update on each edge of the route in the street network, and you can simply read off the route by mapping each location to the nearest street. This mapping can be done using the Python sumolib library that comes with SUMO:
import sumolib

net = sumolib.net.readNet("myNet.net.xml")
route = []
radius = 1
for x, y in coordinates:
    # pick the edge closest to this data point (within the search radius)
    minDist, minEdge = min((dist, edge) for edge, dist in
                           net.getNeighboringEdges(x, y, radius))
    if len(route) == 0 or route[-1] != minEdge.getID():
        route.append(minEdge.getID())
See also http://sumo.dlr.de/wiki/Tools/Sumolib#locate_nearby_edges_based_on_the_geo-coordinate for additional geo conversion.
This will fail when some edge in the route was not hit by any data point, or if you have a mismatch (for instance matching an edge which goes in the "wrong" direction). In the former case you can easily repair the route using SUMO's duarouter:
> duarouter -n myNet.net.xml -r myRoutesWithGaps.rou.xml -o myRepairedRoutes.rou.xml --repair
The latter case is considerably harder both to detect and to repair, because it largely depends on your definition of a wrong edge. There are some fairly clear cases, like suddenly hitting the opposite direction (which can still happen in real traffic), and a lot of small detours which are hard to decide and deserve a separate answer.
Since you are asking for continuous input you may also be interested in doing this live with TraCI and in this FAQ on constant input flow.
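If you do go the live TraCI route, a minimal sketch could look like the following (the network file name, the edge list and the route/vehicle IDs are placeholders; the edge lists are assumed to have been reconstructed with the mapping shown above):
import traci

# Feed a reconstructed route into a running simulation via TraCI.
traci.start(["sumo", "-n", "myNet.net.xml"])
traci.route.add("route_veh42", ["edgeA", "edgeB", "edgeC"])   # edges from the mapping step
traci.vehicle.add("veh42", "route_veh42")                     # insert the historical vehicle
for step in range(3600):                                      # simulate one hour, 1 s per step
    traci.simulationStep()
traci.close()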

Related

How to model a time-dependent vehicle routing problem with time windows in OptaPy?

I am looking to model a vehicle routing problem with time windows in OptaPy. Specifically, this problem involves traffic enforcement on public roads, so parking wardens need to survey carparks and road segments and visit them more than once during a 24-hour period.
I refer to the answer in the following question as foundation to develop my problem:
Is it possible to create a VRP solution using NetworkX?
I have a few questions regarding the modelling:
How does OptaPy model time-dependency, that is, a different edge weight representing travel duration depending on the time of day?
How do I model the demand points if each point needs to be visited X times?
If a demand point is to be visited X times, how can I enforce a time window gap such that the duration between visits is at least a fixed duration (e.g. 1 hour)?
OptaPy models time-dependency the way you model time-dependency. That is, whatever you use to model time-dependency (be it an edge, a list, a matrix, a class, etc.), OptaPy can use it in its constraints.
If X is known in advance, you create X copies of each demand point and put them in the @problem_fact_collection_property field. If X is not known in advance, consider using real-time planning (https://www.optapy.org/docs/latest/repeated-planning/repeated-planning.html#realTimePlanning).
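As a rough illustration of the "X copies" idea (the DemandPoint and Visit classes below are hypothetical, not OptaPy API):
from dataclasses import dataclass

@dataclass
class DemandPoint:           # hypothetical input record
    id: str
    required_visits: int     # X: how often this point must be visited

@dataclass
class Visit:                 # one problem fact per required visit
    id: str
    demand_point_id: str

def expand_demand_points(demand_points):
    # Create one Visit fact per required visit of each demand point; the resulting
    # list is what the @problem_fact_collection_property getter would return.
    return [Visit(id=f"{p.id}-{i}", demand_point_id=p.id)
            for p in demand_points
            for i in range(p.required_visits)]

print(len(expand_demand_points([DemandPoint("carpark-1", 3)])))   # -> 3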
This depends on how you implement your time dependency. It would be easier once OptaPy supports the new VariableListener API for list variables (as well as the built-in list shadow variables) that OptaPlanner has. Until then, you need to do the calculation in a function. Make Edge a @planning_entity and give it an inverse relation shadow variable (https://www.optapy.org/docs/latest/shadow-variable/shadow-variable.html#bidirectionalVariable). Add a method get_arrival_time(edge) to Vehicle that gets the estimated time of visit for a given Edge in its visited_edges_list.
from datetime import timedelta
# Joiners and HardSoftScore come from optapy (the exact import path depends on the optapy version).

def less_than_one_hour_between(visit_1: Edge, visit_2: Edge):
    visit_1_arrival_time = visit_1.vehicle.get_arrival_time(visit_1)
    visit_2_arrival_time = visit_2.vehicle.get_arrival_time(visit_2)
    duration = visit_2_arrival_time - visit_1_arrival_time
    return timedelta(hours=0) <= duration <= timedelta(hours=1)

def one_hour_between_consecutive_visits(constraint_factory):
    return (
        constraint_factory.for_each(Edge)
        .join(Edge,
              Joiners.equal(lambda edge: edge.graph_from_node),
              Joiners.equal(lambda edge: edge.graph_to_node))
        .filter(lambda a, b: a is not b and less_than_one_hour_between(a, b))
        .penalize('less than 1 hour between visits', HardSoftScore.ONE_HARD)
    )

Optimize "1D" bin packing/sheet cutting

Our use case could be described as a variant of 1D bin packing or sheet cutting.
Imagine a drywall with a beam framing.
We want to optimize the number and size of gypsum boards that would be needed to cover the wall.
Boards must start and end on a beam.
Boards must not overlap (hard constraint).
The fewer (i.e. bigger) boards, the better (soft constraint).
What we currently do:
Pre-generate all possible boards and pass them as problem facts.
Let the solver pick the best subset of those (nullable planning variable).
First Fit Decreasing + Simulated Annealing
Even relatively small walls (~6 m, fewer than 20 possible boards to pick from) sometimes take minutes, and while we mostly get a feasible solution, it is rarely optimal.
Is there a better way to model that?
EDIT
Our current domain model looks like the following. Please note that the planning entity only holds the selected/picked material and nothing else, i.e. currently all our planning entities are equal, which effectively prevents any optimization that depends on planning entity difficulty.
data class Assignment(
    @PlanningId
    private val id: Long? = null,

    @PlanningVariable(
        valueRangeProviderRefs = ["materials"],
        strengthComparatorClass = MaterialStrengthComparator::class,
        nullable = true
    )
    var material: Material? = null
)

data class Material(
    val start: Double,
    val stop: Double,
)
Activate (sub)pillar change and swap move selectors. See the OptaPlanner docs section about move selectors (move neighborhoods). The default moves (single swap and single change) are probably getting stuck in local optima (and even though SA helps them escape those, the escapes are probably not efficient enough).
That should help, but a custom move to swap two subpillars of almost the same size might improve efficiency further.
Also, as you're using SA (Simulated Annealing), be aware that SA is parameter sensitive. Use optaplanner-benchmark to try multiple SA starting temperatures with different dataset sizes. Also compare it to plain LA (Late Acceptance) in the benchmarks. LA isn't as fickle as SA can be. (By fickle I don't mean unstable; I mean that the parameters may need tweaking depending on the dataset size.)

How to access net displacements in pyiron

Using pyiron, I want to calculate the mean square displacement of the ions in my system. How do I see the total displacement (i.e. not folded back by periodic boundary conditions) without dumping very frequently and checking when an atom passes over the boundary and gets wrapped?
Try comparing job['output/generic/unwrapped_positions'][-1] and job.structure.positions+job.output.total_displacements[-1]. If they deliver the same values, either way is definitely fine. If not, you can post the relevant lines of your notebook here.
I'd like to add a few comments to Jan's answer:
While job['output/generic/unwrapped_positions'] returns the unwrapped positions parsed from the output files, job.output.total_displacements returns the displacement of atoms calculated from each pair of consecutive snapshots. So if an atom moves more than half the box length in any direction between two snapshots, job.output.total_displacements will give wrong coordinates. Therefore, job['output/generic/unwrapped_positions'] is generally more trustworthy, but it is not available for all codes (since some codes simply do not provide an output for unwrapped positions).
Moreover, if an interactive job is used, it is possible that job.structure.positions does not return the initial positions, i.e. job.structure.positions+job.output.total_displacements won't be initial positions + displacements.
So, in short, my answer to your question would rather be: "Use job['output/generic/unwrapped_positions'], and if it's not available, use job.structure.positions+job.output.total_displacements, but be aware of the potential problems you might run into."
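For reference, here is a small sketch that computes the MSD with the fallback described above (it assumes a finished job; the exact behaviour when the unwrapped-positions dataset is missing may differ between pyiron versions):
import numpy as np

def mean_square_displacement(job):
    # Prefer the unwrapped positions parsed from the output; fall back to the
    # displacement-based reconstruction discussed above if they are not available.
    unwrapped = job['output/generic/unwrapped_positions']
    if unwrapped is not None:
        positions = np.asarray(unwrapped)                  # shape (n_steps, n_atoms, 3)
    else:
        positions = job.structure.positions + np.asarray(job.output.total_displacements)
    net_displacement = positions - positions[0]            # net displacement per atom
    return (net_displacement ** 2).sum(axis=-1).mean(axis=-1)   # MSD per snapshot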

Fuzzy screenshot comparison with Selenium

I'm using Selenium to automate webpage functional testing. It's important for us to do a pixel-by-pixel comparison when we roll out new code, so we're using Selenium to take screenshots and comparing the base64 encoded strings to see if anything has changed.
We're finding that in practice, it's hard to get complete pixel consistency, especially with images. I would like minor blurriness / rendering artifacts to count as a "pass" instead of a "fail", so I'm wondering if there's a way of doing a fuzzy comparison to make our tests a bit less fragile.
I was thinking of maybe looking at the Levenshtein distance between the base64 strings as a starting point, but I don't really know if that's a good approach, or what the tolerances should be that distinguish "something moved on the page" from "rendering artifact". Any ideas / approaches?
So I ended up going with the ImageMagick command-line tool (because why re-invent image comparison). The "Peak Absolute Error" metric of the compare tool tells you how much you have to fuzz pixels before two images are identical. This seems to work well: for an image with slight graphical distortions, there might be a lot of pixels that don't match, but slight fuzzing is enough to make them match; whereas for two images that are actually different, even though most pixels might match, the ones that don't tend to be very different. Right now I'm checking for a PAE of less than 15% to decide whether the images should be counted as identical. The command line I'm using is:
compare -metric PAE original.png new.png comparison.png
Documentation on ImageMagick's compare tool is here: http://www.imagemagick.org/script/compare.php
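For what it's worth, a small Python wrapper around that command could look like the sketch below (compare writes its metric to stderr, and the parenthesized value is the error normalized to the 0..1 range; the 15% threshold matches the answer above):
import re
import subprocess

def images_match(baseline, candidate, diff_out="comparison.png", threshold=0.15):
    # Run ImageMagick's compare with the Peak Absolute Error metric.
    result = subprocess.run(
        ["compare", "-metric", "PAE", baseline, candidate, diff_out],
        capture_output=True, text=True
    )
    # compare prints something like "1284 (0.0196)" to stderr; the value in
    # parentheses is the normalized error.
    match = re.search(r"\(([\d.eE+-]+)\)", result.stderr)
    if match is None:
        raise RuntimeError("Unexpected compare output: " + result.stderr)
    return float(match.group(1)) <= threshold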
I've been using perceptualdiff, which uses a model of the human visual system to try to avoid reporting unnoticeable changes (the authors used it for renderer regression testing). Usage is quite simple:
perceptualdiff -output diff.ppm baseline.png test.png
(where diff.ppm is a PPM format image highlighting the areas of difference)
The needle regression testing framework has support for using pdiff to compare screenshots:
http://needle.readthedocs.org/en/latest/#engines
Use an image format that does not create compression artifacts (like BMP or PNG); then you can do a per-pixel comparison.
I think you can check each pixel with a common Euclidean distance.
To improve performance a little, do not calculate the square root but compare the squares of the distances:
// Maximum color distance allowed to define pixel consistency.
const float maxDistanceAllowed = 5.0f;
// Square of the maximum distance, used in the comparison to avoid a square root.
float maxD = maxDistanceAllowed * maxDistanceAllowed;

public bool isPixelConsistent(Color pixel1, Color pixel2)
{
    // Squared Euclidean distance in RGB space (3 dimensions).
    float distanceSquared = (pixel1.R - pixel2.R) * (pixel1.R - pixel2.R)
                          + (pixel1.G - pixel2.G) * (pixel1.G - pixel2.G)
                          + (pixel1.B - pixel2.B) * (pixel1.B - pixel2.B);
    // If the squared distance is within the allowed maximum, the pixels are
    // considered consistent and the method returns true.
    return distanceSquared <= maxD;
}
I didn't test the C# code, but it should give you the idea. Give it a few tries and adjust maxDistanceAllowed to your needs.
If anyone else is looking for something similar, there is Depicted-dpxdt. It is designed to be used as part of a CI/CD process.
It combines perceptual diffing with a server, a command-line tool, and a wrapper for PhantomJS.
Its demonstrated functionality includes crawling an entire site and comparing the pages for differences.

Custom EQ AudioUnit on iOS

The only effect AudioUnit on iOS is the "iTunes EQ", which only lets you use EQ presets. I would like to use a customized EQ in my audio graph.
I came across this question on the subject and saw an answer suggesting using this DSP code in the render callback. This looks promising, and people seem to be using it effectively on various platforms. However, my implementation produces a ton of noise even with a flat EQ.
Here's my 20 line integration into the "MixerHostAudio" class of Apple's "MixerHost" example application (all in one commit):
https://github.com/tassock/mixerhost/commit/4b8b87028bfffe352ed67609f747858059a3e89b
Any ideas on how I could get this working? Any other strategies for integrating an EQ?
Edit: Here's an example of the distortion I'm experiencing (with the EQ flat):
http://www.youtube.com/watch?v=W_6JaNUvUjA
In the code in EQ3Band.c, the filter coefficients are used without being initialized. The init_3band_state method initializes just the gains and frequencies, but the coefficients themselves (es->f1p0 etc.) are not initialized and therefore contain garbage values. That might be the reason for the bad output.
This code seems wrong in more than one way.
A digital filter is normally represented by the filter coefficients, which are constant, the filter's inner state history (since in most cases the output depends on past samples), and the filter topology, which is the arithmetic used to calculate the output from the input and the filter (coefficients + state history). In most cases, and certainly when filtering audio data, you expect to get 0's at the output if you feed 0's to the input.
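For illustration only (this is not the linked EQ3Band code), here is a minimal direct form I biquad sketch that keeps the coefficients constant, keeps the state history separate, and gives 0's out for 0's in:
class Biquad:
    def __init__(self, b0, b1, b2, a1, a2):
        self.b0, self.b1, self.b2 = b0, b1, b2   # feed-forward coefficients (constant)
        self.a1, self.a2 = a1, a2                # feedback coefficients (constant)
        self.x1 = self.x2 = 0.0                  # input history, initialized to zero
        self.y1 = self.y2 = 0.0                  # output history, initialized to zero

    def process(self, x):
        # The sample is multiplied by coefficients (never added to them), and the
        # output also depends on the stored history of previous inputs/outputs.
        y = (self.b0 * x + self.b1 * self.x1 + self.b2 * self.x2
             - self.a1 * self.y1 - self.a2 * self.y2)
        self.x2, self.x1 = self.x1, x
        self.y2, self.y1 = self.y1, y
        return y

flat = Biquad(1.0, 0.0, 0.0, 0.0, 0.0)             # pass-through coefficients
print([flat.process(s) for s in (0.0, 0.0, 0.0)])  # -> [0.0, 0.0, 0.0]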
The problems in the code you linked to:
The filter coefficients are changed in each call to the processing method:
es->f1p0 += (es->lf * (sample - es->f1p0)) + vsa;
The input sample is usually multiplied by the filter coefficients, not added to them. It doesn't make any physical sense: the sample and the filter coefficients don't even have the same physical units.
If you feed in 0's, you do not get 0's at the output, just some values which do not make any sense.
I suggest you look for different code; the other option is debugging this one, which would be harder.
In addition, you'd benefit from reading about digital filters:
http://en.wikipedia.org/wiki/Digital_filter
https://ccrma.stanford.edu/~jos/filters/