Creating opposite-direction lanes in a SUMO network

I am trying to simulate overtaking in my network. I came across a suggestion to use the <neigh> element to create opposite lanes, but it is not working, and I don't know if I omitted something. I also tried the command
netconvert --opposite.guess true --node-files (name) --edge-files (name) -t (name) -o (outputfile)
I have attached my code from the edge file.
<edges>
    <edge id="0" from="0" to="2" type="1">
        <neigh lane="a0"/>
    </edge>
    <edge id="2" from="2" to="4" type="1">
        <neigh lane="a2"/>
    </edge>
    <edge id="4" from="4" to="6" type="1">
        <neigh lane="a4"/>
    </edge>
</edges>
This is the result in the terminal after netconvert:
netconvert --node-files curve.nod.xml --edge-files curve.edg.xml -t curves.type.xml -o curve.net.xml
Warning: Removing unknown opposite lane 'a0' for edge '0'.
Warning: Removing unknown opposite lane 'a10' for edge '10'.
Warning: Removing unknown opposite lane 'a11' for edge '11'.
Warning: Removing unknown opposite lane 'a13' for edge '13'.
Warning: Removing unknown opposite lane 'a15' for edge '15'.
Warning: Removing unknown opposite lane 'a2' for edge '2'.
Warning: Removing unknown opposite lane 'a4' for edge '4'.
Warning: Removing unknown opposite lane 'a6' for edge '6'.
Warning: Removing unknown opposite lane 'a7' for edge '7'.
Warning: Removing unknown opposite lane 'a8' for edge '8'.
Warning: Removing unknown opposite lane 'a9' for edge '9'.
Success.

The lanes you specify in neigh need to exist. A valid example would be:
<edges>
    <edge id="a0" from="2" to="0" type="1"/>
    <edge id="0" from="0" to="2" type="1">
        <neigh lane="a0_0"/>
    </edge>
</edges>
This is only correct when edge a0 has a single lane. The number after the underscore denotes the lane index, and it should refer to the last (leftmost, highest-index) lane of the given edge.
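Applied to the edge file from the question, that means first defining the reverse edges and then referencing their lanes. Here is a hedged sketch for the first edge pair, reusing the node IDs and type from the question (the reverse-edge ID a0 is an assumed name, and the <neigh> on a0 is only needed if overtaking should also work in the other direction):
<edges>
    <edge id="a0" from="2" to="0" type="1">
        <neigh lane="0_0"/>
    </edge>
    <edge id="0" from="0" to="2" type="1">
        <neigh lane="a0_0"/>
    </edge>
</edges>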

Related

ABAQUS meshing problems: The volume of 13124 elements is zero, small, or negative?

I tried to develop a thermal model by using an orphan mesh part (DC3D8R), but when I run the model, the following error is displayed:
'The volume of 13124 elements is zero, small, or negative. Check coordinates or node numbering, or modify the mesh seed. In the case of a tetrahedron this error may indicate that all nodes are located very nearly in a plane. The elements have been identified in element set ErrElemVolSmallNegZero.'
Please, how can I fix this error?
Thank you!

Inconsistent pose pair warnings in HAND-EYE calibration in HALCON

I am trying to perform hand-eye calibration using HALCON for the UR5 cobot. I am using 'hand_eye_stationarycam_calibration.hdev', but every time I get a warning that says: 'Inconsistent pose pair'.
Can anybody help me with this issue? I have tried all of the pose types as well, but the warning and the faulty results remain.
Try looking at the line of code:
check_hand_eye_calibration_input_poses (CalibDataID, 0.001, 0.005, Warnings)
There is a rotation tolerance (0.001 here) and a translation tolerance (0.005 here). Try to increase these values and see if that gets rid of the error.
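For example, relaxing both tolerances by a factor of five (illustrative values, not a recommendation) would look like:
check_hand_eye_calibration_input_poses (CalibDataID, 0.005, 0.025, Warnings)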
Sometimes when I've had this problem in the past it was because my units were not consistent. For example, the translation units of the robot pose were all in 'mm' but my camera units were in 'm'. Double check the translation units and ensure they match.
Also, I believe the UR5 robot might default to an axis-angle representation. You must ensure your camera poses and robot poses are all in the same format. See the link below for a description from Universal Robots of how to convert between the different formats. You could either use the script from Universal to convert to a roll, pitch, yaw convention, or you could take the axis-angle representation and convert it inside HALCON.
Here is an example of how I converted from axis-angle to 'abg' pose type in Halcon. In this case the camera returned 7 values: 3 describing a translation in X, Y, Z and 4 values describing the rotation using the angle axis convention. This is then converted to a pose type where the first rotation is performed around the Z axis followed by the new Y axis followed by the new X axis (which matched the robot I was using). The other common pose type in Halcon is 'gba' and you might need that one. In general I believe there are 12 different rotation combinations that are possible so you should verify which one is being used for both the camera and the robot.
In general I try and avoid using the terms roll, pitch, and yaw since I've seen it cause confusion in the past. If the translation units are given in X, Y, Z I prefer the rotation be given in Rx, Ry, and Rz followed by the order of rotation.
* Pose of the object relative to the camera
get_framegrabber_param (NxLib, '/Execute/halcon1/Result/Patterns/PatternPose/Rotation/Angle', RotationAngle)
get_framegrabber_param (NxLib, '/Execute/halcon1/Result/Patterns/PatternPose/Rotation/Axis/\\0', Axis0)
get_framegrabber_param (NxLib, '/Execute/halcon1/Result/Patterns/PatternPose/Rotation/Axis/\\1', Axis1)
get_framegrabber_param (NxLib, '/Execute/halcon1/Result/Patterns/PatternPose/Rotation/Axis/\\2', Axis2)
get_framegrabber_param (NxLib, '/Execute/halcon1/Result/Patterns/PatternPose/Translation/\\0', Txcam)
get_framegrabber_param (NxLib, '/Execute/halcon1/Result/Patterns/PatternPose/Translation/\\1', Tycam)
get_framegrabber_param (NxLib, '/Execute/halcon1/Result/Patterns/PatternPose/Translation/\\2', Tzcam)
* Convert the axis-angle rotation to a quaternion, then to a homogeneous 3D matrix
axis_angle_to_quat(Axis0, Axis1, Axis2, RotationAngle, QuatObjRelCam)
quat_to_hom_mat3d(QuatObjRelCam, MatObjRelCam)
* Write the translation into the matrix, converting from mm to m
MatObjRelCam[3] := Txcam/1000
MatObjRelCam[7] := Tycam/1000
MatObjRelCam[11] := Tzcam/1000
* Convert the matrix to a pose and change the rotation order to 'abg'
hom_mat3d_to_pose(MatObjRelCam, PoseObjRelCam)
convert_pose_type (PoseObjRelCam, 'Rp+T', 'abg', 'point', PoseObjRelCam)
EXPLANATION ON ROBOT ORIENTATION from Universal Robots

How to change the gem5 ARM SVE vector length?

I'm doing an experiment to see which ARM SVE vector length would be the best for my chip design, or to help select which chip has the optimal vector length for my application.
How to change the vector length in a gem5 simulation to see how it affects workload performance?
For SE:
se.py --param 'system.cpu[:].isa[:].sve_vl_se = 2'
For FS:
fs.py --param 'system.sve_vl = 2'
where the values are given in multiples of 128 bits, so 2 means a vector length of 256 bits.
You can test this easily with the ADDVL instruction as shown in this example.
The name of those parameters can be easily determined by looking at a m5out/config.ini generated from a previous run.
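As a concrete illustration, a full SE-mode invocation might look like the following sketch (the gem5 binary path, the CPU options, and the ./sve_test workload are placeholder assumptions):
build/ARM/gem5.opt configs/example/se.py \
    --cpu-type=DerivO3CPU --caches \
    --param 'system.cpu[:].isa[:].sve_vl_se = 2' \
    -c ./sve_test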
Note however that this value is architecturally visible, and so it might not be possible to checkpoint after Linux boot and restore with a different vector length than at boot to speed up experiments. This is likely true in general, even though the kernel itself does not run vector instructions, because there is software control of the effective vector length. Maybe it is possible to set a large vector length on the simulator to start with and then tell Linux to reduce it in software, but I'm not sure what the API is.
Tested in gem5 3126e84db773f64e46b1d02a9a27892bf6612d30.
To change the vector length, one can use command line option:
--arm-sve-vl=<vl in quadwords: one of {1, 2, 4, 8, 16}>
where vl is given in quadwords (multiples of 128 bits). So for a simulation of a 512-bit SVE machine, one should use:
--arm-sve-vl=4
This works both for Syscall-Emulation mode and Full System mode.
If one wants to quickly explore the space of different vector lengths, one can also change it during the simulation (only in Full System mode). The kernel interface takes the length in bytes, so to change the SVE vector length to 256 bits, put the following line in your bootscript, before running the benchmark:
echo 32 > /proc/sys/abi/sve_default_vector_length
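To sweep several lengths in a single run, a bootscript along these lines could work (a hedged sketch; ./benchmark is a placeholder for the workload, and the lengths are given in bytes):
for bytes in 16 32 64 128 256; do    # 128 to 2048 bits
    echo $bytes > /proc/sys/abi/sve_default_vector_length
    ./benchmark
done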
You can find more information at https://www.rico.cat/files/ICS18-gem5-sve-tutorial.pdf.

CNTK Asymmetric padding warning

When creating a model in CNTK, with a convolutional layer, I get the following warning:
WARNING: Detected asymmetric padding issue with even kernel size and lowerPad (9) < higherPad (10) (i=2), cuDNN will not be able to produce correct result. Switch to reference engine (VERY SLOW).
I have tried increasing the kernel size from 4x4 to 5x5, so that the kernel size is not even, without result.
I have also tried adjusting lowerPad, upperPad (the parameter named in the docs), and higherPad (the parameter listed in the message).
Setting autoPadding=false does not affect this message.
Is it just a warning that I should ignore? The VERY SLOW part concerns me, as my models are already quite slow.
I figured this out if anyone else is interested in the answer.
I stated in the question that I tried setting "autoPadding=false". This is the incorrect format for the autoPadding parameter; it must actually be a set of boolean values, with the value corresponding to the InputChannels dimension being false.
So the correct form of the parameter is "autoPadding=(true:true:false)", and then everything works correctly.
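For reference, here is a minimal sketch of the same idea in CNTK's Python API, assuming (as in the layers library) that the first auto_padding flag corresponds to the channel/depth axis; the shapes are illustrative:
import cntk as C

x = C.input_variable((3, 32, 32))                        # (channels, height, width)
W = C.parameter((16, 3, 4, 4), init=C.glorot_uniform())  # (out_ch, in_ch, kH, kW)
# Pad the spatial axes symmetrically, but never the channel axis:
y = C.convolution(W, x, strides=(1, 1), auto_padding=[False, True, True])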
You have a layer that has a lower pad of 9 and an upper pad of 10 in the depth direction. Are you doing 3D convolution?

implement shape to display 2D laser scan data

I'm working on a Windows 8 editor that is basically based on a Canvas with Shapes like Line, Rectangle, etc. on it. Those shapes can be manipulated by the user. Now I want to implement a custom shape that takes a list of 2D points from a laser scan (used in architecture).
So my question is: which base primitive would you use to display, let's say, 500 points? I was thinking of a Path, but then I would rather get a set of connected lines (path, polygon) instead of just the dots. So what else?
This picture illustrates what I want to achieve. All blue dots should be in one shape that can be dragged by the user.
My first guess would be the PathGeometry filled with a lot of RectangleGeometries or EllipseGeometries. But I wonder what this means in terms of performance.
<Path Fill="LemonChiffon" Stroke="Black" StrokeThickness="1">
    <Path.Data>
        <!-- Path.Data takes a single Geometry, so multiple geometries must be wrapped in a GeometryGroup -->
        <GeometryGroup>
            <RectangleGeometry Rect="50,50,5,5" />
            <RectangleGeometry Rect="60,50,5,5" />
            <RectangleGeometry Rect="70,50,5,5" />
            ...
        </GeometryGroup>
    </Path.Data>
</Path>
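Since the 500 geometries would be generated from the scan rather than hand-written, a minimal code-behind sketch could build the whole group at once (hedged: scanPoints and myPath are assumed names for the point list and the Path element):
var group = new GeometryGroup();
foreach (var p in scanPoints)
{
    // one small square per scan point
    group.Children.Add(new RectangleGeometry { Rect = new Rect(p.X, p.Y, 5, 5) });
}
myPath.Data = group;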