I'm currently a Mechanical Engineering student working on a project in Intelligent Manufacturing.
I have been using AnyLogic to explore manufacturing simulation. I have created a basic job shop that involves the transportation of material pallets from delivery to storage to processing. My next step is to transition this static scheduling system to a dynamic scheduling system.
I would like to know if there is any way to actively manipulate the simulation whilst it is running? For example, controlling the availability of processing machines in real time or triggering a delivery. So far I have been unable to find any way of manipulating the simulation once it is running.
Does anybody have experience with real-time data input into simulation software?
In your model, you can always add control elements (buttons, check boxes, sliders, etc.). By adding these to the model you can control it at runtime. For instance, if you have a variable X equal to 3 in your model, you can attach the code X = 4; to a button, and pressing the button while the model runs will change the variable's value.
My suggestion is that you explore the different options in the Controls palette and refer to the AnyLogic help to learn how to use each of them.
These controls must be placed in Main in order to make changes while the simulation is running. If you place them in the simulation experiment window, then you won't be able to use them at runtime.
Your model will look something like this: [screenshot of the control elements placed on Main omitted]
I am currently trying to model a warehouse with import and export processes. My problem is that I do not know how to model the capacity of the different storage places in the warehouse. Vehicles arrive with different loads, all of which need to be stored in the warehouse, which has limited capacity; otherwise the arriving goods have to be declined.
I am modeling this process in a BPM Suite and was thinking about using Python to attack this problem. I thought that I could simply use variables and if clauses to check the capacity of each storage place. But if I simulate the process with this approach, the variables are re-instantiated with their start values each time and do not hold the actual value, because the script is included in the model as a script task.
Does anyone have other ideas for modeling capacity in BPMN?
Have you considered not using BPMN, since it clearly adds more complexity than benefit in your case? Look at Cadence Workflow, which lets you specify orchestration logic in ordinary code and would support your requirements directly, without any ugly workarounds.
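To illustrate the "ordinary code" route this answer suggests (independent of Cadence itself), here is a minimal Python sketch of capacity logic whose state persists across arrivals instead of being re-instantiated per script task. All names, capacities, and loads are hypothetical:

```python
# Minimal sketch of warehouse capacity logic kept in ordinary code, so the
# state persists across arrivals instead of being re-instantiated per
# script task. All names, capacities, and loads are hypothetical.

class StoragePlace:
    def __init__(self, name: str, capacity: int) -> None:
        self.name = name
        self.capacity = capacity
        self.used = 0

    def try_store(self, load: int) -> bool:
        """Accept the load if it fits, otherwise decline it."""
        if self.used + load <= self.capacity:
            self.used += load
            return True
        return False

    def retrieve(self, load: int) -> None:
        """Export process: free up capacity."""
        self.used = max(0, self.used - load)

storages = [StoragePlace("A", 100), StoragePlace("B", 50)]

def handle_arrival(load: int) -> str:
    # Try each storage place in turn; decline if none has room left.
    for place in storages:
        if place.try_store(load):
            return f"stored {load} in {place.name}"
    return f"declined {load}: no remaining capacity"

for load in [60, 60, 40, 30]:
    print(handle_arrival(load))
```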
I want to create a simulation program for bus-related simulations.
These simulations involve a lot of calculations and methods, and their number keeps growing.
First I want to write the core of the program, which should be able to use separately coded and compiled program blocks. What kind of programming technology is good for this?
For example: I want to send a CAN frame with a checksum calculation. In the core program I want to choose, from a directory, which checksum calculation to apply.
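One common answer to this is a plugin architecture: the core scans a directory and loads whichever implementations it finds. A minimal Python sketch of the idea, where the plugins directory and the calculate function convention are assumptions:

```python
# Minimal plugin-loader sketch: the core discovers checksum modules in a
# "plugins" directory at runtime. Each plugin module is assumed to expose
# a calculate(data: bytes) -> int function (this convention is hypothetical).
import importlib.util
from pathlib import Path

def load_plugins(plugin_dir: str = "plugins") -> dict:
    plugins = {}
    for path in Path(plugin_dir).glob("*.py"):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        if hasattr(module, "calculate"):
            plugins[path.stem] = module.calculate
    return plugins

# An example plugin file plugins/xor_checksum.py could contain:
#   def calculate(data: bytes) -> int:
#       result = 0
#       for b in data:
#           result ^= b
#       return result

if __name__ == "__main__":
    checksums = load_plugins()
    frame = bytes([0x12, 0x34, 0x56])
    # The core picks whichever checksum the user selected for the CAN frame:
    checksum = checksums["xor_checksum"](frame)
    print(f"frame + checksum: {frame.hex()} {checksum:02x}")
```

In a compiled setting the same idea maps to loading shared libraries (DLLs) at runtime that export an agreed-upon function signature.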
NOTE: This question has been ported over from Programmers since it appears to be more appropriate here given the limitation of the language I'm using (VBA), the availability of appropriate tags here and the specificity of the problem (on the inference that Programmers addresses more theoretical Computer Science questions).
I'm attempting to build a Discrete Event Simulation library by following this tutorial and fleshing it out. I am limited to using VBA, so "just switch to [insert language here] and it's easy!" is unfortunately not possible. I have specifically chosen to implement this in Access VBA to have a convenient location to store configuration information and metrics.
How should I handle logging metrics in my Discrete Event Simulation engine?
If you don't want/need background, skip to The Design or The Question section below...
Simulation
The goal of a simulation of the type in question is to model a process in order to perform analysis on it that wouldn't be feasible or cost-effective in reality.
The canonical example of a simulation of this kind is a Bank:
Customers enter the bank and get in line with a statistically distributed frequency
Tellers are available to handle customers from the front of the line one by one taking an amount of time with a modelable distribution
As the line grows longer, the number of tellers available may have to be increased or decreased based on business rules
You can break this down into generic objects:
Entity: These would be the customers
Generator: This object generates Entities according to a distribution
Queue: This object represents the line at the bank. Queues find much real-world use as buffers between a source of customers and a limited service.
Activity: This is a representation of the work done by a teller. It generally processes Entities from a Queue
Discrete Event Simulation
Instead of a continuous tick-by-tick simulation such as one might do with physical systems, a "Discrete Event" Simulation recognizes that in many systems only critical events require processing, and the rest of the time nothing important to the state of the system is happening.
In the case of the Bank, critical events might be a customer entering the line, a teller becoming available, the manager deciding whether or not to open a new teller window, etc.
In a Discrete Event Simulation, the flow of time is kept by maintaining a Priority Queue of Events instead of an explicit clock. Time is advanced by popping the next event in chronological order (the minimum event time) off the queue and processing it as necessary.
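For illustration (in Python rather than the VBA used here, and with invented event names), that time-advance mechanism boils down to something like this:

```python
# Sketch of the DES time advance: the clock jumps straight to the next
# scheduled event rather than ticking continuously. Event names invented.
import heapq

events: list = []  # the Priority Queue, kept as a min heap of (time, event)
heapq.heappush(events, (2.5, "customer enters the line"))
heapq.heappush(events, (1.0, "teller becomes available"))
heapq.heappush(events, (4.0, "manager reviews the line"))

clock = 0.0
while events:
    clock, event = heapq.heappop(events)  # minimum event time first
    print(f"t={clock}: {event}")          # process the event here
```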
The Design
I've got a Priority Queue implemented as a Min Heap for now.
In order for the objects of the simulation to be processed as events, they implement an ISimulationEvent interface that provides an EventTime property and an Execute method. Those together mean the Priority Queue can schedule the events, then Execute them one at a time in the correct order and increment the simulation clock appropriately.
The simulation engine is a basic event loop that pops the next event and Executes it until there are none left. An event can reschedule itself to occur again or allow itself to go idle. For example, when a Generator is Executed it creates an Entity and then reschedules itself for the generation of the next Entity at some point in the future.
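Translated out of VBA into Python for illustration, the design just described might look like the following sketch. The class and method names mirror the description above; the arrival distribution and stop condition are assumptions:

```python
# Sketch of the engine described above: events expose an event time and an
# execute method (the ISimulationEvent interface), the engine keeps them in
# a min heap, and a Generator reschedules itself after each arrival.
import heapq
import itertools
import random

class SimulationEvent:
    """Python analogue of the ISimulationEvent interface."""
    event_time: float
    def execute(self, engine: "Engine") -> None: ...

class Engine:
    def __init__(self) -> None:
        self.clock = 0.0
        self._queue: list = []
        self._counter = itertools.count()  # tie-breaker for simultaneous events

    def schedule(self, event: SimulationEvent) -> None:
        heapq.heappush(self._queue, (event.event_time, next(self._counter), event))

    def run(self) -> None:
        # The basic event loop: pop the chronologically next event,
        # advance the clock to its time, and execute it.
        while self._queue:
            self.clock, _, event = heapq.heappop(self._queue)
            event.execute(self)

class Generator(SimulationEvent):
    """Creates an Entity, then reschedules itself for the next arrival."""
    def __init__(self, event_time: float) -> None:
        self.event_time = event_time

    def execute(self, engine: Engine) -> None:
        print(f"t={engine.clock:.2f}: Entity generated")
        next_arrival = engine.clock + random.expovariate(1.0)  # assumed distribution
        if next_arrival < 10.0:  # assumed end-of-simulation condition
            engine.schedule(Generator(next_arrival))

engine = Engine()
engine.schedule(Generator(0.0))
engine.run()
```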
The Question
How should I handle logging metrics in my Discrete Event Simulation engine?
In the midst of this simulation, it is necessary to take metrics. How long are Entities waiting in the Queue? How many Activity resources are being utilized at any one point? How many Entities were generated since the last metrics were logged?
It follows logically that the metric logging should be scheduled as an event to take place every few units of time in the simulation.
The difficulty is that this ends up being a cross-cutting concern: metrics may need to be taken of Generators or Queues or Activities or even Entities. Consider also that it might be necessary to take derivative calculated metrics: e.g. measure a, b, c, and ((a-c)/100) + Log(b).
I'm thinking there are a few main ways to go:
Have a single, global Stats object that is aware of all of the simulation objects. Have the Generator/Queue/Activity/Entity objects store their properties in an associative array so that they can be referred to at runtime (VBA doesn't support much in the way of reflection). This way the statistics can be attached as needed: Stats.AddStats(Object, Properties). This wouldn't support calculated metrics easily unless they are built into each object class as properties somehow.
Have a single, global Stats object that is aware of all of the simulation objects. Create some sort of ISimStats interface for the Generator/Queue/Activity/Entity classes to implement that returns an associative array of the important stats for that particular object. This would also allow runtime attachment: Stats.AddStats(ISimStats) (see the sketch after this list). The calculated metrics would have to be hardcoded in the straightforward implementation of this option.
Have multiple Stats objects, one per Generator/Queue/Activity/Entity as a child object. This might make it easier to implement simulation object-specific calculated metrics, but clogs up the Priority Queue a little bit with extra things to schedule. It might also cause tighter coupling, which is bad :(.
Some combination of the above or completely different solution I haven't thought of?
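The sketch referenced in option 2, again in Python for illustration; all names are hypothetical:

```python
# Rough sketch of option 2: objects implement an ISimStats-style interface
# that returns an associative array (dict) of their current metrics, and a
# global Stats object samples every registered source.

class SimStats:
    """Analogue of the proposed ISimStats interface."""
    def get_stats(self) -> dict: ...

class BankQueue(SimStats):
    def __init__(self) -> None:
        self.length = 0
    def get_stats(self) -> dict:
        return {"queue_length": self.length}

class Stats:
    def __init__(self) -> None:
        self.sources: list = []
        self.log: list = []

    def add_stats(self, source: SimStats) -> None:
        """Runtime attachment, as in Stats.AddStats(ISimStats)."""
        self.sources.append(source)

    def sample(self, clock: float) -> None:
        snapshot = {}
        for source in self.sources:
            snapshot.update(source.get_stats())
        # Calculated metrics can be derived here from the raw snapshot,
        # e.g. something like ((a - c) / 100) + log(b) from the question,
        # instead of being hardcoded into each simulation object.
        self.log.append((clock, snapshot))

stats = Stats()
queue = BankQueue()
stats.add_stats(queue)
queue.length = 4
stats.sample(clock=12.5)
print(stats.log)  # [(12.5, {'queue_length': 4})]
```

The sampler itself would be wrapped in an event that reschedules itself every few time units, exactly like the Generator, so it stays a single entry in the Priority Queue rather than one per object (option 3's drawback).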
Let me know if I can provide more (or less) detail to clarify my question!
Any and every performance metric is a function of the model's state. The only time the state changes in a discrete event simulation is when an event occurs, so events are the only time you have to update your metrics. If you have enough storage, you can log every event, its time, and the state variables which got updated, and retrospectively construct any performance metric you want.
If storage is an issue, you can calculate some performance measures within the events that affect those measures. For instance, the appropriate time to calculate delay in queue is when a customer begins service (assuming you tagged each customer object with its arrival time). For delay in system it's when the customer ends service. If you want average delays, you can update the averages in those events. When somebody arrives, the size of the queue gets incremented; when they begin service, it gets decremented. Etc., etc., etc.
You'll have to be careful calculating statistics such as average queue length, because you have to weight each queue length by the amount of time the system spent in that state: avg(queue_length) = (1/T) integral[queue_length(t) dt]. Since the queue length can only change at events, this boils down to summing each queue length multiplied by the amount of time spent at that length, then dividing by the total elapsed time.
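In code, that time-weighted average can be accumulated incrementally at each event rather than via an explicit integral. A Python sketch with hypothetical names:

```python
# Time-weighted average queue length: weight each observed length by how
# long the queue stayed at that length, then divide by total elapsed time.
class TimeWeightedAverage:
    def __init__(self, start_time: float = 0.0, start_value: float = 0.0) -> None:
        self.start_time = start_time
        self.last_time = start_time
        self.last_value = start_value
        self.area = 0.0  # running integral of value dt

    def update(self, now: float, new_value: float) -> None:
        """Call whenever an event changes the tracked value."""
        self.area += self.last_value * (now - self.last_time)
        self.last_time = now
        self.last_value = new_value

    def average(self, now: float) -> float:
        total = self.area + self.last_value * (now - self.last_time)
        elapsed = now - self.start_time
        return total / elapsed if elapsed > 0 else 0.0

# Usage: queue length goes 0 -> 1 at t=2, 1 -> 3 at t=4, 3 -> 2 at t=7
avg = TimeWeightedAverage()
avg.update(2.0, 1)
avg.update(4.0, 3)
avg.update(7.0, 2)
print(avg.average(10.0))  # (0*2 + 1*2 + 3*3 + 2*3) / 10 = 1.7
```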
We are developing a motor controller on a dsPIC. We intend to use Simulink to model the motor control algorithm, with Real-Time Workshop Embedded Coder converting the Simulink model into C code.
Our firmware will have some other minor logic operations, but its main function is motor control. We are wondering if we should try to do all the firmware in Simulink or separate the logic operations into C code, while the motor control algorithm stays in Simulink?
Does anyone have a recommendation on which path we should start down?
thanks,
Brent
FYI I just built a system like yours but with a TI DSP.
I'm assuming you're doing something complex like vector control. If so, here's what you do:
In your model, make one block for each task / each time period you need. This may just be the PWM interrupt with the control in it.
Define all the IO each task will need; try to keep each signal to 16 bits, which are atomic on the dsPIC (this eliminates most rate transitions).
Get Simulink to make each top-level block a function call.
Do only the control inside this/these blocks and leave all hardware configuration, task scheduling, and other logic to C code.
Simulink can generate a C and an H file that you just include in the project with the other code. You'll fill out a structure of inputs, call the function, and get back a structure with the outputs.
Keep the model clean of all hardware dependencies.
Do not believe the Mathworks marketing folks. They want you to do everything in Simulink. Don't get me wrong, it's a great tool for certain types of things. But for stuff you can't do in the model (like hello world) they suggest using the "legacy code tool" as if anything that's not a model is "like totally old-school". Restrict your model to control loops and signal flows - which it's good for - and it'll be fine.
Do the logic operations interact with the motor control or are they just unrelated operations? The degree of interaction could help make the decision.
If they are unrelated, then for maintainability it might be best to keep them out of the model. Then you can update the logic without having to regenerate the C for the entire Simulink model. There would be less chance of a regression problem.
If they are related to or interact with the model, then of course it is a case to keep them in the model so that you don't get incompatible versions linked into a build.
Our group is building a process modelling application that simulates an industrial process. The final output of this process is a set of numbers representing chemistry and flow rates.
This application is based on some very old software that uses the exact same underlying mathematical model to create the simulation. Thousands of variables are involved in the simulation.
Although each component has been unit tested, we now need to be able to make sure that the data output produced by our software matches that of the old simulation software. I am wondering how best to approach this issue in a formalised and rigorous manner.
The old program takes its input via a text file, so I was thinking we could programmatically take each variable, adjust its value in the file (and correspondingly in our new application), then compare the outputs between the new and old applications. We would do this for every variable in the model.
We know the allowable range for each variable, so I suppose a random sample of a few values across each variable's range is enough to show correctness for that particular variable.
Any thoughts on this approach? Any other ideas?
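A sketch of that approach in Python; the variable table, the model stub, and the tolerance are placeholders for whatever actually wraps the two applications:

```python
# Back-to-back testing sketch: vary one variable at a time within its
# allowed range, run both applications on the same input, and compare
# outputs within a tolerance. All names and ranges are hypothetical.
import random

VARIABLE_RANGES = {"flow_rate": (0.0, 50.0), "temperature": (200.0, 900.0)}
SAMPLES_PER_VARIABLE = 5
TOLERANCE = 1e-6

def run_model(inputs: dict) -> dict:
    # Stand-in for invoking either application; in practice this would
    # write the input text file, run the program, and parse its output.
    return {"chemistry": inputs["flow_rate"] * 0.1 + inputs["temperature"] * 0.01}

run_old = run_new = run_model  # replace with wrappers around each program

def baseline_inputs() -> dict:
    return {name: (lo + hi) / 2 for name, (lo, hi) in VARIABLE_RANGES.items()}

def compare_all() -> list:
    failures = []
    for name, (lo, hi) in VARIABLE_RANGES.items():
        for _ in range(SAMPLES_PER_VARIABLE):
            inputs = baseline_inputs()
            inputs[name] = random.uniform(lo, hi)  # perturb one variable
            old_out, new_out = run_old(inputs), run_new(inputs)
            for key, old_value in old_out.items():
                if abs(old_value - new_out[key]) > TOLERANCE:
                    failures.append((name, inputs[name], key))
    return failures

print(compare_all())  # [] means the outputs agreed on every sampled input
```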
Comparing the output of the old and new applications is definitely a good idea. This is sometimes called back-to-back testing.
Regarding test input samples, familiarize yourself with the following concepts:
Equivalence partitioning
Boundary-value analysis
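For the input side, a small Python sketch of what boundary-value analysis adds on top of random sampling (the range and epsilon are hypothetical):

```python
# Boundary-value analysis sketch: for each variable, test exactly at and
# just inside the edges of its allowed range in addition to random
# interior points. If a range splits into regimes with different
# behaviour (equivalence partitions), sample each partition as well.
import random

def test_values(lo: float, hi: float, n_random: int = 3, eps: float = 1e-3) -> list:
    boundaries = [lo, lo + eps, (lo + hi) / 2, hi - eps, hi]
    interior = [random.uniform(lo, hi) for _ in range(n_random)]
    return boundaries + interior

print(test_values(0.0, 50.0))  # e.g. feed these into the comparison harness above
```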