How to generate position, speed and acceleration of each car in an output file (SUMO)

I'm working with SUMO and I want to generate the trajectory of each car in an output file. To produce an output file containing each vehicle's position, speed and acceleration, the most suitable output I found is described at the following link:
https://sumo.dlr.de/docs/Simulation/Output/AmitranOutput.html
However, after generating this output file I see that it doesn't contain the position of each car. I would appreciate it if anyone could tell me how to produce these data in an output file in SUMO.
Here is what it produces:
<trajectories xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://sumo.dlr.de/xsd/amitran/trajectories.xsd" timeStepSize="1000">
<actorConfig id="6" vehicleClass="Passenger" fuel="Gasoline" emissionClass="Euro4" ref="car"/>
<vehicle id="0" actorConfig="6" startTime="0" ref="myflow.0"/>
<motionState vehicle="0" speed="1425" time="0" acceleration="0"/>
<actorConfig id="7" vehicleClass="Passenger" fuel="Gasoline" emissionClass="Euro4" ref="malicious-car"/>
<vehicle id="1" actorConfig="7" startTime="0" ref="myflowmalicious.0"/>
<motionState vehicle="1" speed="1490" time="0" acceleration="0"/>
<motionState vehicle="0" speed="1557" time="1000" acceleration="1317"/>
<vehicle id="2" actorConfig="6" startTime="1000" ref="myflow1.0"/>
<motionState vehicle="2" speed="932" time="1000" acceleration="0"/>
<motionState vehicle="1" speed="1737" time="1000" acceleration="2465"/>
<motionState vehicle="0" speed="1738" time="2000" acceleration="1809"/>
<motionState vehicle="2" speed="1212" time="2000" acceleration="2799"/>
...

You should use --fcd-output fcd.xml --fcd-output.acceleration.
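FCD (floating car data) output records each vehicle's position (x, y), angle and speed for every time step, and --fcd-output.acceleration adds the acceleration attribute. If you prefer configuring this in the .sumocfg instead of on the command line, the options can also be written as elements in an <output> section; to my knowledge the element names mirror the command-line option names:
<output>
<!-- write position, angle and speed of every vehicle at every time step -->
<fcd-output value="fcd.xml"/>
<!-- additionally write the acceleration attribute -->
<fcd-output.acceleration value="true"/>
</output>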

Related

Speedata and XPath

Unfortunately I can't get the XPath syntax in the Layout.xml of speedata to work.
I've been programming XSLT for years, so maybe I'm a bit biased by that background.
The XML I'm trying to evaluate has the following structure:
<export>
<object>
<fields>
<field key="demo1:DISPLAY_NAME" lang="de_DE" origin="default" ftype="string">Anwendungsbild</field>
<field key="demo1:DISPLAY_NAME" lang="en_UK" origin="inherit" ftype="string">application picture</field>
<field key="demo1:DISPLAY_NAME" lang="es_ES" origin="self" ftype="string">imagen de aplicaciĆ³n</field>
</fields>
</object>
</export>
The attempt to output the element node with the following XPath fails.
export/object/fields/field[#key='demo1:DISPLAY_NAME' and #lang='de_DE' and #origin='default']
How do I formulate the query in Speedata Publisher, please?
Thank you for your help.
The speedata software only supports a small subset of XPath. You have two options: either preprocess the data with the provided Saxon XSLT processor (a sketch of this follows the layout example below), or iterate through the data yourself:
<Layout xmlns="urn:speedata.de:2009/publisher/en"
xmlns:sd="urn:speedata:2009/publisher/functions/en">
<Record element="export">
<ForAll select="object/fields/field">
<Switch>
<Case test="#key='demo1:DISPLAY_NAME' and #lang='de_DE' and #origin='default'">
<SetVariable variable="whatever" select="."/>
</Case>
</Switch>
</ForAll>
<Message select="$whatever"></Message>
</Record>
</Layout>
(given your input file as data.xml)
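If you go with option 1 instead, a minimal XSLT sketch could pre-filter the data before publishing. Note that in standard XPath/XSLT attributes are addressed with @ rather than #; the <result> wrapper element is just a placeholder name:
<xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="/export">
<result>
<!-- copy only the field entry we are interested in -->
<xsl:copy-of select="object/fields/field[@key='demo1:DISPLAY_NAME' and @lang='de_DE' and @origin='default']"/>
</result>
</xsl:template>
</xsl:stylesheet>
Run this through the provided Saxon processor and point the layout at the resulting file.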

How can I generate the density information of vehicles using only SUMO?

This is my sumocfg file code in the SUMO program.
<input>
<net-file value="updated.net.xml"/>
<route-files value="trips.trips.xml"/>
</input>
<time>
<begin value="0"/>
</time>
<report>
<verbose value="true"/>
<no-step-log value="true"/>
</report>
updated.net.xml is the network file I wrote, and trips.trips.xml is just the vehicle mobility file.
https://sumo.dlr.de/wiki/Simulation/Output/Lane-_or_Edge-based_Traffic_Measures describes the density output format, but I don't know how to generate the additional output file containing the vehicle density information.
What should I add here?
As stated under 'Instantiating within the Simulation', you need to declare an additional file as follows:
<input>
<net-file value = 'xxx'/>
<route-files value = 'xxx'/>
<additional-files value = 'xxx'/>
</input>
In the additional file, add the required edge and lane ids that are to be measured.
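For example, a minimal additional file could look like the sketch below (the id, file name and interval are placeholders; recent SUMO versions call the aggregation interval period, older ones freq). Without further attributes all edges are measured; an edges attribute with a space-separated list of edge ids can restrict the measurement:
<additional>
<!-- aggregate edge-based measures, including density, every 60 seconds -->
<edgeData id="edge_dump" file="edgedata.output.xml" period="60"/>
</additional>
Then reference this file from <additional-files> in your sumocfg as shown above.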

How to input a value in a pattern entry

I'm studying OptaPlanner and I need help with one point.
I need to know what this pattern is used for and how to use it:
<Patterns>
<Pattern ID="0" weight="1">
<PatternEntries>
<PatternEntry index="0">
<ShiftType>L</ShiftType>
<Day>Any</Day>
</PatternEntry>
<PatternEntry index="1">
<ShiftType>D</ShiftType>
<Day>Any</Day>
</PatternEntry>
</PatternEntries>
</Pattern>
<Pattern ID="1" weight="1">
<PatternEntries>
<PatternEntry index="0">
<ShiftType>D</ShiftType>
<Day>Any</Day>
</PatternEntry>
<PatternEntry index="1">
<ShiftType>E</ShiftType>
<Day>Any</Day>
</PatternEntry>
<PatternEntry index="2">
<ShiftType>D</ShiftType>
<Day>Any</Day>
</PatternEntry>
</PatternEntries>
</Pattern>
</Patterns>
I thank you immensely for any kind of help you can give me.
That input file format is defined by the INRC2011 website; see the link in the OptaPlanner user guide, chapter 3.
Specifically:
The first pattern is a Late (L) followed by a Day (D) shift.
The second pattern is a Day shift, then an Early shift and then a Day shift again.
When a pattern matches, a penalty is applied that lowers the score. The goal is to improve the nurses' health by avoiding unhealthy shift patterns.

JdbcSQLException: Referential integrity constraint violation [on delete]

I am setting up unit tests for my application, and for that I need to mock the database so I am not altering the actual one.
So I am using DBUnit to simulate a database based on an XML file:
<?xml version="1.0" encoding="UTF-8"?>
<dataset>
<ACTION_TYPE ID="1" NAME="INVALID_CPT"/>
<ACTION_TYPE ID="2" NAME="INTEGRATED_FLOW"/>
<ACTION_TYPE ID="3" NAME="CLOSING_TECHNICAL_FLOW"/>
<ACTION_TYPE ID="4" NAME="AUDIT_FLOW"/>
<ACTION_TYPE ID="5" NAME="VALIDATION_FLOW"/>
<ACTION_TYPE ID="6" NAME="RETURN_DRAFT_FLOW"/>
<USER_GROUP ID="1" NAME="AKA_CPT"/>
<USER_GROUP ID="2" NAME="AKA_ADMIN_SECU"/>
<USER_GROUP ID="3" NAME="AKA_CISO"/>
<USER_GROUP ID="4" NAME="AKA_ADMIN_NETWORK"/>
<ACTION_USER_GROUP ACTION_TYPE_ID="1" USER_GROUP_ID="1"/>
<ACTION_USER_GROUP ACTION_TYPE_ID="4" USER_GROUP_ID="1"/>
<ACTION_USER_GROUP ACTION_TYPE_ID="6" USER_GROUP_ID="1"/>
<ACTION_USER_GROUP ACTION_TYPE_ID="2" USER_GROUP_ID="2"/>
<ACTION_USER_GROUP ACTION_TYPE_ID="3" USER_GROUP_ID="2"/>
<ACTION_USER_GROUP ACTION_TYPE_ID="5" USER_GROUP_ID="2"/>
<ACTION_USER_GROUP ACTION_TYPE_ID="5" USER_GROUP_ID="3"/>
<ACTION_USER_GROUP ACTION_TYPE_ID="2" USER_GROUP_ID="4"/>
</dataset>
This is just a part of the whole XML file, but this is where my problem comes from.
I have these 2 objects ACTION_TYPE and USER_GROUP which are linked by a Many to Many relationship.
But as soon as I run my test code, I am getting this error:
org.h2.jdbc.JdbcSQLException: Referential integrity constraint violation: "FKFA9ACTE184RNO1RS40IW6E1LK: PUBLIC.ACTION FOREIGN KEY(USER_GROUP_ID) REFERENCES PUBLIC.USER_GROUP(ID) (2)"; SQL statement:
delete from USER_GROUP [23503-193]
I know what an integrity constraint violation is, but I do not think I made one in the XML file above.
Does someone know where this error is coming from, and maybe how I could fix it?
Thank you in advance

DateFormatTransformer not working with FileListEntityProcessor in Data Import Handler

While indexing data from a local folder on my system, I am using the configuration given below. However, the lastmodified attribute is getting indexed in the format "Wed 23 May 09:48:08 UTC", which is not the standard format used by Solr for filter queries.
So I am trying to format the lastmodified attribute in data-config.xml as shown below.
<dataConfig>
<dataSource name="bin" type="BinFileDataSource" />
<document>
<entity name="f" dataSource="null" rootEntity="false"
processor="FileListEntityProcessor"
baseDir="D://FileBank"
fileName=".*\.(DOC)|(PDF)|(pdf)|(doc)|(docx)|(ppt)" onError="skip"
recursive="true" transformer="DateFormatTransformer">
<field column="fileAbsolutePath" name="path" />
<field column="fileSize" name="size" />
<field column="fileLastModified" name="lastmodified" dateTimeFormat="YYYY-MM-DDTHH:MM:SS.000Z" locale="en"/>
<entity name="tika-test" dataSource="bin" processor="TikaEntityProcessor"
url="${f.fileAbsolutePath}" format="text" onError="skip">
<field column="Author" name="author" meta="true"/>
<field column="title" name="title" meta="true"/>
<!--<field column="text" />-->
</entity>
</entity>
</document>
</dataConfig>
But the transformer has no effect, and the same format is indexed again. Has anyone had success with this? Is the above configuration right, or am I missing something?
The dateTimeFormat you provided does not seem to be correct. Upper and lower case characters have different meanings in the pattern. Also, the format you showed does not match the date text you are trying to parse, so it probably keeps the value as unmatched.
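For reference, dateTimeFormat takes a java.text.SimpleDateFormat pattern, so if the raw value really looks like "Wed 23 May 09:48:08 UTC", a pattern along these lines (an untested sketch) would be a closer match:
<field column="fileLastModified" name="lastmodified" dateTimeFormat="EEE dd MMM HH:mm:ss z" locale="en"/>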
Also, if you have several different date formats, you could parse them after DIH runs by creating a custom UpdateRequestProcessor chain. The schemaless example handles several date formats as part of its auto-mapping, but you could do the same thing explicitly for a specific field.