How to create an arrival rate for each lane in a SUMO traffic simulation?

Suppose I have 3 lanes on each leg of a 3-legged intersection, i.e. 9 lanes in total. I would like each of the 9 lanes to have a different arrival rate. Usually I use random.py for the arrivals; I am wondering if there is another method to do it?

You can define all flows manually, which should still be feasible for 9 target lanes:
<flow id="0" from="source" to="target" begin="0" end="100" number="20" arrivalLane="0"/>
<flow id="1" from="source" to="target" begin="0" end="100" number="20" arrivalLane="1"/>
...
and adapt the numbers to your liking.
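If writing the nine definitions by hand gets tedious, they can also be generated with a small script. A minimal Python sketch (the per-lane vehicle counts are made-up illustration values, not from the question):

```python
# Generate one <flow> element per lane, each with its own vehicle count.
# Keys are lane indices; values are vehicles to insert in [begin, end].
rates = {0: 20, 1: 35, 2: 15, 3: 25, 4: 10, 5: 30, 6: 18, 7: 22, 8: 12}

flows = []
for lane, number in rates.items():
    flows.append(
        '<flow id="%d" from="source" to="target" begin="0" end="100" '
        'number="%d" arrivalLane="%d"/>' % (lane, number, lane)
    )

print("\n".join(flows))
```

The printed elements can then be pasted into the route file.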

Related

Radzen Chart adjustment

I'm trying to graphically show the balance/value of a bank account over time. For example, over a period of 2 years there were input/output transactions to/from the bank account, and they were not done periodically: some days had several transactions, then there were a few months with nothing, then again a few on the same day, and so on, so the graph looks messy.
Each dot on the graph actually represents several transactions on that day, so it's not really readable and looks silly, especially over the long periods with no changes. Is there any way to fix this up a bit using Radzen, or maybe some other tool?
Radzen code I used for shown graph:
<RadzenChart>
<RadzenLineSeries Smooth="false" Data="@HistoricalTransactionArray" CategoryProperty="Date" Title="Value" LineType="LineType.Solid" ValueProperty="Quote">
<RadzenMarkers MarkerType="MarkerType.Square" />
</RadzenLineSeries>
<RadzenCategoryAxis Padding="1" FormatString="{0:dd-MM-yyyy}" />
<RadzenValueAxis>
<RadzenGridLines Visible="true" />
<RadzenAxisTitle Text="Value" />
</RadzenValueAxis>
</RadzenChart>
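One library-agnostic way to tame the "several dots per day" problem the question describes is to aggregate transactions per day before binding them to the chart. A hedged Python sketch of that preprocessing (field layout is illustrative; the real data lives in HistoricalTransactionArray):

```python
from collections import defaultdict
from datetime import date

# Toy data: several same-day transactions, then a long gap.
transactions = [
    (date(2022, 1, 3), 100.0),
    (date(2022, 1, 3), -40.0),
    (date(2022, 5, 20), 25.0),
]

# Collapse all transactions on the same date into one net value,
# so the chart gets exactly one point per day with activity.
daily = defaultdict(float)
for day, amount in transactions:
    daily[day] += amount

series = sorted(daily.items())  # one (date, net change) point per day
```

The same grouping can of course be done in C# with LINQ's GroupBy before handing the array to RadzenChart.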

OptaPlanner: How to change the sample XML in the nurse rostering demo?

I am stuck with the NurseRostering example in OptaPlanner. I would like to change the input XML to play around (for example, increase the number of nurses from 30 to 100), but I find it very complicated to edit manually, so I think there must be some kind of 'generator', or maybe I should write my own XML generator.
For example, I see that every node in the sample has a unique id, so if I want to increase the number of nurses, it's not as simple as copying the last Employee node and pasting it 70 times; I would have to check every id inside and increase it accordingly.
<Employee id="358">
<id>6</id>
<code>6</code>
<name>6</name>
<contract reference="36"/>
<dayOffRequestMap id="359">
<entry>
<ShiftDate reference="183"/>
<DayOffRequest id="360">
<id>18</id>
<employee reference="358"/>
<shiftDate reference="183"/>
<weight>1</weight>
</DayOffRequest>
...
Therefore I ask: is there any method to generate this (or similar) XML?
The best way I can think of is to write a small Java application where you load the original dataset and then add any number of employees you want (using Java code, of course). At least this is what I do when I need a bigger dataset or when I toy around with the model data (because the dataset needs to be updated too).
Oh, I almost forgot: sometimes I use an XML viewer to help me with the manual copy-and-paste work (it helps a lot, since the file is thousands of lines long).
You looked at the wrong XML file! Instead of taking e.g. data/nurserostering/unsolved/medium01.xml, take data/nurserostering/import/medium01.xml.
<Employees>
<Employee ID="0">
<ContractID>0</ContractID>
<Name>0</Name>
<Skills>
<Skill>Nurse</Skill>
</Skills>
</Employee>
[...]
<DayOffRequests>
<DayOff weight="1">
<EmployeeID>0</EmployeeID>
<Date>2010-01-21</Date>
</DayOff>
[...]
This file can then easily be edited and imported into OptaPlanner.
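Since the import format carries no cross-referencing node ids, duplicating employees can even be scripted. A minimal sketch (the file path is an assumption; element names follow the snippet above, wrapped in whatever root element the real file uses):

```python
import xml.etree.ElementTree as ET

def add_employees(path, extra):
    """Clone the first <Employee> `extra` times, bumping ID and Name."""
    tree = ET.parse(path)
    employees = tree.getroot().find(".//Employees")
    template = employees.find("Employee")
    next_id = len(employees.findall("Employee"))
    for i in range(extra):
        # Deep-copy the template via serialize/parse, then renumber it.
        clone = ET.fromstring(ET.tostring(template))
        clone.set("ID", str(next_id + i))
        clone.find("Name").text = str(next_id + i)
        employees.append(clone)
    tree.write(path)
```

Calling `add_employees("medium01.xml", 70)` would take a 30-nurse roster to 100; adjust the element names if your file differs.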

Simplest OptaPlanner example - is only construction heuristic enough?

I'm new to OptaPlanner and I'm trying to create an app, as simple as possible, that assigns a few employees to some shifts. The only rule is that one employee can be assigned to only one shift per day. I wonder whether the following solver configuration is enough:
<solver>
<solutionClass>com.test.shiftplanner.ShiftPlanningSolution</solutionClass>
<entityClass>com.test.shiftplanner.ShiftAssignment</entityClass>
<scoreDirectorFactory>
<scoreDefinitionType>HARD_SOFT</scoreDefinitionType>
<scoreDrl>rules.drl</scoreDrl>
</scoreDirectorFactory>
<!-- Solver termination -->
<termination>
<secondsSpentLimit>60</secondsSpentLimit>
</termination>
<constructionHeuristic>
<constructionHeuristicType>FIRST_FIT</constructionHeuristicType>
</constructionHeuristic>
</solver>
because the collection of ShiftAssignment in my ShiftPlanningSolution class remains EMPTY even though Solver.solve() finishes and getBestSolution() returns something. What's more, it seems that my rules in rules.drl are not fired at all. I even added a dummy rule just to see if it is triggered:
rule "test"
when
shiftAssignment : ShiftAssignment()
then
System.out.println(shiftAssignment);
end
and it's not fired at all.
So what are my mistakes here? Thanks in advance!
The rule should be doing something with the scoreHolder; see chapter 5 of the docs. But despite that, you should see that rule being fired once for every ShiftAssignment instance in your dataset - check whether you actually have any in there.

Is Apache Camel's idempotent consumer pattern scalable?

I'm using Apache Camel 2.13.1 to poll a database table which will have upwards of 300k rows in it. I'm looking to use the Idempotent Consumer EIP to filter out rows that have already been processed.
I'm wondering, though, whether the implementation is really scalable. My Camel context is:
<camelContext xmlns="http://camel.apache.org/schema/spring">
<route id="main">
<from
uri="sql:select * from transactions?dataSource=myDataSource&amp;consumer.delay=10000&amp;consumer.useIterator=true" />
<transacted ref="PROPAGATION_REQUIRED" />
<enrich uri="direct:invokeIdempotentTransactions" />
<!-- Any processors here will be executed on all messages -->
</route>
<route id="idempotentTransactions">
<from uri="direct:invokeIdempotentTransactions" />
<idempotentConsumer
messageIdRepositoryRef="jdbcIdempotentRepository">
<ognl>#{request.body.ID}</ognl>
<!-- Anything here will only be executed for non-duplicates -->
<log message="non-duplicate" />
<to uri="stream:out" />
</idempotentConsumer>
</route>
</camelContext>
It would seem that the full 300k rows are going to be processed every 10 seconds (via the consumer.delay parameter), which seems very inefficient. I would expect some sort of feedback loop as part of the pattern, so that the query feeding the filter could take advantage of the set of rows already processed.
However, the messageid column in the CAMEL_MESSAGEPROCESSED table has the pattern of
{1908988=null}
where 1908988 is the request.body.ID I've set the EIP to key on, so this doesn't make it easy to incorporate into my query.
Is there a better way of using the CAMEL_MESSAGEPROCESSED table as a feedback loop into my select statement so that the SQL server is performing most of the load?
Update:
So, I've since found out that it was my ognl code that was causing the odd message id column value. Changing it to
<el>${in.body.ID}</el>
has fixed it. So, now that I have a usable messageId column, I can now change my 'from' SQL query to
select * from transactions tr where tr.ID NOT IN (select cmp.messageid from CAMEL_MESSAGEPROCESSED cmp where cmp.processor = 'transactionProcessor')
but I still think I'm corrupting the Idempotent Consumer EIP.
Does anyone else do this? Any reason not to?
Yes, it is. But you need to use scalable storage for holding the set of already-processed messages. You can use either Hazelcast - http://camel.apache.org/hazelcast-idempotent-repository-tutorial.html - or Infinispan - http://java.dzone.com/articles/clustered-idempotent-consumer - depending on which solution is already in your stack. Of course, the JDBC repository would work too, but only if it meets your performance criteria.
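For intuition about what any of those repositories does under the hood: conceptually the pattern is just a guarded set of already-seen message IDs. A toy Python sketch (purely illustrative, not the Camel API):

```python
class InMemoryIdempotentRepository:
    """Toy stand-in for Camel's IdempotentRepository: remembers seen IDs."""
    def __init__(self):
        self._seen = set()

    def add(self, message_id):
        """Return True if the ID was new, False if it was a duplicate."""
        if message_id in self._seen:
            return False
        self._seen.add(message_id)
        return True

def consume(rows, repo, process):
    """Run `process` only on rows whose ID has not been seen before."""
    for row in rows:
        if repo.add(row["ID"]):
            process(row)

repo = InMemoryIdempotentRepository()
handled = []
consume([{"ID": 1}, {"ID": 2}, {"ID": 1}], repo, handled.append)
# handled now contains only the two distinct rows
```

Hazelcast, Infinispan, and the JDBC repository differ only in where that set lives and how it survives restarts and clustering.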

Multiple triggers to Quartz endpoint in Mule

Is there a way to configure a Quartz inbound endpoint in Mule to have multiple triggers? Say I want an event every day at 9:00, plus one at 1:00 a.m. on the first day of the month.
Here is something that might work for you:
<flow name="MultipleIBEndpoints" doc:name="MultipleIBEndpoints">
<composite-source doc:name="Composite Source">
<quartz:inbound-endpoint jobName="QuartzDaily" doc:name="Quartz Daily"
cronExpression="0 0 9 1/1 * ? *">
<quartz:event-generator-job>
<quartz:payload>dummy</quartz:payload>
</quartz:event-generator-job>
</quartz:inbound-endpoint>
<quartz:inbound-endpoint jobName="QuartzMonthly" doc:name="Quartz Monthly"
cronExpression="0 0 1 1 1/1 ? *">
<quartz:event-generator-job>
<quartz:payload>dummy</quartz:payload>
</quartz:event-generator-job>
</quartz:inbound-endpoint>
</composite-source>
<logger level="INFO" doc:name="Logger" />
</flow>
The above flow uses the composite-source scope, which allows you to embed two or more inbound endpoints into a single message source.
In the case of Composite, the embedded building blocks are actually message sources (i.e. inbound endpoints) that listen in parallel on different channels for incoming messages. Whenever any of these receivers accepts a message, the Composite scope passes it to the first message processor in the flow, thus triggering that flow.
You can also meet your requirement with just one Quartz endpoint by using the required CRON expression:
0 0 1,21 1 * *
Please refer to the links below for more tweaks.
Mulesoft quartz reference
wikipedia reference
List of Cron Expression examples
In that case you need to configure two cron triggers and add them to the scheduler.
Please go through the link below, where I have described the whole thing.
Configure multiple cron trigger
Hope this helps.