How to model failures and preemption in the Fluid Library

I will use the Fluid Pickup example from AnyLogic to explain my question.
Let's say we have a pump (resource) that needs to be available for cars to pick up fluid. We can model this by adding a Seize and a Release block before and after the Fluid Pickup block. The next step is to allow the pump to fail. The question is what happens to a car that has already started picking up fluid:
How can we stop the flow when the pump fails? (Probably by using a failure flowchart and closing a valve.)
How can we force the car to leave the pump with the amount picked up so far, instead of waiting until "Fluid to pickup" is completed?
Similarly, if the tank is small and is being continuously filled by a pump, and that pump fails, the car needs to leave with whatever it has picked up. (In this case the flow to the car becomes 0, so the first question is answered, but the second one remains.)
Thanks

You will have to create your own resource agent that acts as a pump, without using the Fluid Pickup block.
The pickup logic is not very difficult to model on your own, since it's just moving fluid from your resource to the agent that needs it (using Fluid Enter and Fluid Exit blocks).
With that done, you will have a resource that supports failures like any other, and you can create the logic to handle the agent that didn't get all the fluid it wanted.
Summary: It would be too much to show a full working model here, but the takeaway is not to use the Fluid Pickup block. Instead, have the source of the fluid inside your resource (since your resource is the pump) and move it with Fluid Enter and Fluid Exit blocks.
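A minimal sketch of the behavior the answer describes (plain Python, not the AnyLogic fluid API; `pickup`, `rate`, and `failure_prob` are made-up names for illustration): fluid is transferred in small increments, and the transfer stops as soon as the pump fails, so the car leaves with whatever it has picked up so far.

```python
# Illustrative sketch: a pump transfers fluid step by step and can fail
# mid-transfer; the car leaves with the partial amount, not the full order.
import random

def pickup(amount_wanted, rate=1.0, failure_prob=0.01):
    """Transfer fluid incrementally; return the amount actually delivered."""
    delivered = 0.0
    while delivered < amount_wanted:
        if random.random() < failure_prob:   # pump fails mid-transfer
            break                            # car leaves immediately
        delivered += min(rate, amount_wanted - delivered)
    return delivered

got = pickup(100)
print(f"car left with {got} units")  # may be less than 100 if the pump failed
```

The key design point matches the answer: because the transfer loop lives inside your own resource logic, the failure check can interrupt it at any step, which the monolithic Fluid Pickup block does not allow.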

Related

LabVIEW input-output data delay

I am a new LabVIEW user trying to implement a controller in real time using LabVIEW. For that purpose, I started with an analog input/output exercise. As part of the learning process, I tried to apply an input to a system, acquire the data, and feed it back through an analog channel. However, I noticed a significant delay of about 1 ms between input and output. I then tried the simplest possible exercise: I generated an input signal, read it through LabVIEW, and fed it back out again, so it was basically a task for the ADC and DAC only. It still had the same amount of delay. I was under the impression that hardware-timed reads and writes would reduce the delay, but there was no change.
Could you help me out with this issue? Any advice would be really helpful.
What you're seeing might be the result of the while loop method you're using.
The while loop picks up data from the AI once every time it goes around and then sends that to the AO. This ends up being done in batches (maybe with a batch size of 1, but still) and you could be seeing a delay caused by that conversion of batches.
If you need more control over timing, LabVIEW does provide it. Read up on the different clocks, for example sample clocks and trigger clocks. They allow you to preload output signals into an NI-DAQmx buffer and then output them point by point at specific, coordinated moments.

Merge Signals in a Flat Sequence

I feed three inputs into Merge Signals at different times, but the output of Merge Signals appears to wait for all the signals and then outputs them together. What I want is an output for every signal (on "current output") as soon as it arrives.
For example: if I write 1 as the initial value and 5, 5, 5 in all three numerics, with a 3 s time delay I will have 6, 7, and 16 in target 1, target 2, and target 3, and overall 16 on "current output". I don't want that to appear all at once on "current output"; I want it to appear with the same timing as in the targets.
Please see the attached photo.
Can anyone help me with that?
Thanks.
All nodes in LabVIEW fire when all their inputs arrive. This language uses synchronous data flow, not asynchronous (which is the behavior you were describing).
The output of Merge Signals is a single data structure that contains all the input signals — merged, like the name says. :-)
To get the behavior you want, you need some sort of asynchronous communication. In older versions of LabVIEW, I would tell you to create a queue refnum and go look at examples of a producer/consumer pattern.
But in LabVIEW 2016 and later, right-click on each of the tunnels coming out of your Flat Sequence and choose "Create>>Channel Writer...". In the dialog that appears, choose the Messenger channel. Wire all the outputs of the new nodes together. This creates an asynchronous wire, which draws very differently from your regular wires. Right-click on that wire and choose "Create>>Channel Reader...". Put the reader node inside a For Loop and wire a 3 to the N terminal. Now, as each block finishes, it will send its data to the loop.
Move the Write nodes inside the Flat Sequence if you want to guarantee the enqueue order. If you do the Writes outside instead, you'll sometimes get out-of-order data (i.e. when the data-generation nodes happen to run quickly).
Side note: I (and most LabVIEW architects) would strongly recommend you avoid sequence structures as much as possible. They're a bad habit to get into; there is plenty written online about their disadvantages.
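The channel writer/reader pattern described above is essentially the producer/consumer queue recommended for older LabVIEW versions. Since LabVIEW itself is graphical, here is a rough text-language sketch of the same idea in Python (all names are illustrative): each "block" enqueues its result the moment it finishes, and a reader loop consumes the three results one at a time.

```python
# Producer/consumer sketch of the Channel Writer / Channel Reader pattern.
import queue
import threading
import time

def producer(q, name, delay):
    time.sleep(delay)   # stands in for the block's own computation time
    q.put(name)         # "Channel Writer": enqueue as soon as this block is done

q = queue.Queue()
for name, delay in [("target 1", 0.01), ("target 2", 0.02), ("target 3", 0.03)]:
    threading.Thread(target=producer, args=(q, name, delay)).start()

# "Channel Reader" inside a loop wired to N = 3: each q.get() returns as soon
# as the next result arrives, instead of waiting for all three at once.
results = [q.get() for _ in range(3)]
print(results)
```

This is exactly why the asynchronous wire solves the original problem: the consumer sees each value as it is produced, rather than a single merged structure at the end.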

Input live data into AnyLogic

I'm currently a Mechanical Engineering student looking into a project on Intelligent Manufacturing.
I have been using AnyLogic to explore manufacturing simulation. I have created a basic job shop that involves the transportation of material pallets from delivery to storage to processing. My next step is to transition this static scheduling system to a dynamic one.
I would like to know whether there is any way to actively manipulate the simulation while it is running, for example controlling the availability of processing machines in real time or triggering a delivery. So far I have been unable to find any way of manipulating the simulation after it has started running.
Does anybody have experience with real time data input into simulation software?
In your model, you can always add control elements (buttons, check boxes, sliders, etc.). By adding these you can control your model at runtime. For instance, if you have a variable X equal to 3 in your model, a button with the code X=4; will change the variable's value when clicked.
My suggestion is to explore the different options in the Controls palette and refer to the AnyLogic help to learn how to use each of them.
These controls must be placed in Main in order to make changes while the simulation is running. If you place them in the simulation experiment window, you won't be able to use them at runtime.
Your model will look like this:

How should I handle measurement logging in my Discrete Event Simulation engine?

NOTE: This question has been ported over from Programmers since it appears to be more appropriate here given the limitation of the language I'm using (VBA), the availability of appropriate tags here and the specificity of the problem (on the inference that Programmers addresses more theoretical Computer Science questions).
I'm attempting to build a Discrete Event Simulation library by following this tutorial and fleshing it out. I am limited to using VBA, so "just switch to [insert language here] and it's easy!" is unfortunately not possible. I have specifically chosen to implement this in Access VBA to have a convenient location to store configuration information and metrics.
How should I handle logging metrics in my Discrete Event Simulation engine?
If you don't want/need background, skip to The Design or The Question section below...
Simulation
The goal of a simulation of this type is to model a process in order to perform analysis that wouldn't be feasible or cost-effective in reality.
The canonical example of a simulation of this kind is a Bank:
Customers enter the bank and get in line with a statistically distributed frequency
Tellers are available to handle customers from the front of the line one by one taking an amount of time with a modelable distribution
As the line grows longer, the number of tellers available may have to be increased or decreased based on business rules
You can break this down into generic objects:
Entity: These would be the customers
Generator: This object generates Entities according to a distribution
Queue: This object represents the line at the bank. They find much real world use in acting as a buffer between a source of customers and a limited service.
Activity: This is a representation of the work done by a teller. It generally processes Entities from a Queue
Discrete Event Simulation
Instead of a continuous, tick-by-tick simulation such as one might use for physical systems, a "Discrete Event" Simulation recognizes that in many systems only critical events require processing, and the rest of the time nothing important to the state of the system is happening.
In the case of the Bank, critical events might be a customer entering the line, a teller becoming available, the manager deciding whether or not to open a new teller window, etc.
In a Discrete Event Simulation, the flow of time is kept by maintaining a Priority Queue of Events instead of an explicit clock. Time is incremented by popping the next event in chronological order (the minimum event time) off the queue and processing as necessary.
The Design
I've got a Priority Queue implemented as a Min Heap for now.
In order for the objects of the simulation to be processed as events, they implement an ISimulationEvent interface that provides an EventTime property and an Execute method. Those together mean the Priority Queue can schedule the events, then Execute them one at a time in the correct order and increment the simulation clock appropriately.
The simulation engine is a basic event loop that pops the next event and Executes it until there are none left. An event can reschedule itself to occur again or allow itself to go idle. For example, when a Generator is Executed it creates an Entity and then reschedules itself for the generation of the next Entity at some point in the future.
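The loop described above can be sketched in a few lines. This is Python rather than VBA (purely for brevity here); the `Generator` and min-heap priority queue follow the text, but the method names and details are assumptions, not the asker's actual code.

```python
# Minimal discrete event engine: a min-heap of (time, seq, event) tuples.
# The loop pops the earliest event, advances the clock, and executes it;
# an event may reschedule itself, exactly as the Generator does here.
import heapq
import itertools

class Generator:
    """Creates an entity each time it fires, then reschedules itself."""
    def __init__(self, interarrival):
        self.interarrival = interarrival
        self.created = 0
    def execute(self, sim, now):
        self.created += 1                            # "create an Entity"
        sim.schedule(now + self.interarrival, self)  # reschedule in the future

class Simulation:
    def __init__(self):
        self.events = []                 # min-heap ordered by event time
        self.seq = itertools.count()     # tie-breaker for simultaneous events
        self.clock = 0.0
    def schedule(self, time, event):
        heapq.heappush(self.events, (time, next(self.seq), event))
    def run(self, until):
        while self.events and self.events[0][0] <= until:
            self.clock, _, event = heapq.heappop(self.events)
            event.execute(self, self.clock)

sim = Simulation()
gen = Generator(interarrival=2.0)
sim.schedule(0.0, gen)
sim.run(until=10.0)
print(gen.created)  # fires at t = 0, 2, 4, 6, 8, 10 -> 6 entities
```

The sequence counter is worth copying even in VBA: it keeps the heap's ordering stable when two events share the same time, without ever comparing the event objects themselves.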
The Question
How should I handle logging metrics in my Discrete Event Simulation engine?
In the midst of this simulation, it is necessary to take metrics. How long are Entities waiting in the Queue? How many Activity resources are being utilized at any one point? How many Entities were generated since the last metrics were logged?
It follows logically that the metric logging should be scheduled as an event to take place every few units of time in the simulation.
The difficulty is that this ends up being a cross-cutting concern: metrics may need to be taken of Generators or Queues or Activities or even Entities. Consider also that it might be necessary to take derivative calculated metrics: e.g. measure a, b, c, and ((a-c)/100) + Log(b).
I'm thinking there are a few main ways to go:
Have a single, global Stats object that is aware of all of the simulation objects. Have the Generator/Queue/Activity/Entity objects store their properties in an associative array so that they can be referred to at runtime (VBA doesn't support much in the way of reflection). This way the statistics can be attached as needed Stats.AddStats(Object, Properties). This wouldn't support calculated metrics easily unless they are built into each object class as properties somehow.
Have a single, global Stats object that is aware of all of the simulation objects. Create some sort of ISimStats interface for the Generator/Queue/Activity/Entity classes to implement that returns an associative array of the important stats for that particular object. This would also allow runtime attachment, Stats.AddStats(ISimStats). The calculated metrics would have to be hardcoded in the straightforward implementation of this option.
Have multiple Stats objects, one per Generator/Queue/Activity/Entity as a child object. This might make it easier to implement simulation object-specific calculated metrics, but clogs up the Priority Queue a little bit with extra things to schedule. It might also cause tighter coupling, which is bad :(.
Some combination of the above or completely different solution I haven't thought of?
Let me know if I can provide more (or less) detail to clarify my question!
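For concreteness, option 2 above might look like the following, translated to Python (the original would be a VBA class module implementing an `ISimStats` interface; the method names here are illustrative assumptions):

```python
# Sketch of option 2: each simulation object exposes its own stats dict,
# and a single global Stats object polls the registered sources.
class SimQueue:
    """Stand-in for the Queue simulation object."""
    def __init__(self):
        self.items = []
    def stats(self):                      # the ISimStats contract
        return {"length": len(self.items)}

class Stats:
    def __init__(self):
        self.sources = []
        self.log = []
    def add_stats(self, source):          # runtime attachment: Stats.AddStats(ISimStats)
        self.sources.append(source)
    def execute(self, now):               # scheduled like any other event
        snapshot = {"time": now}
        for s in self.sources:
            snapshot.update(s.stats())
        self.log.append(snapshot)

stats = Stats()
q = SimQueue()
stats.add_stats(q)
q.items.extend(["a", "b"])
stats.execute(now=5.0)
print(stats.log)  # [{'time': 5.0, 'length': 2}]
```

As the question notes, calculated metrics (like `((a-c)/100) + Log(b)`) would still have to be hardcoded somewhere; one option is to register plain functions alongside the `ISimStats` sources.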
Any and every performance metric is a function of the model's state. The only time the state changes in a discrete event simulation is when an event occurs, so events are the only times you have to update your metrics. If you have enough storage, you can log every event, its time, and the state variables that got updated, and retrospectively construct any performance metric you want. If storage is an issue, you can calculate some performance measures within the events that affect those measures. For instance, the appropriate time to calculate delay in queue is when a customer begins service (assuming you tagged each customer object with its arrival time); for delay in system it's when the customer ends service. If you want average delays, you can update the averages in those events. When somebody arrives, the size of the queue gets incremented; when they begin service, it gets decremented. Etc., etc., etc.
You'll have to be careful calculating statistics such as average queue length, because you have to weight each queue length by the amount of time the system spent at that length: Avg(queue_length) = (1/T) * integral[queue_length(t) dt]. Since the queue length can only change at events, this boils down to summing the queue lengths multiplied by the time spent at each length, then dividing by the total elapsed time.
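That time-weighted calculation can be sketched as follows (Python for brevity; `time_weighted_average` is an illustrative helper, not part of the engine in the question):

```python
# Time-weighted average of a step function: accumulate length * time-at-that-
# length between consecutive events, then divide by total elapsed time.
def time_weighted_average(changes, end_time):
    """changes: list of (event_time, new_queue_length), first entry at t=0."""
    area, (last_t, last_len) = 0.0, changes[0]
    for t, length in changes[1:]:
        area += last_len * (t - last_t)   # rectangle under the step function
        last_t, last_len = t, length
    area += last_len * (end_time - last_t)
    return area / end_time

# queue length 0 until t=2, then 3 until t=6, then 1 until t=10
avg = time_weighted_average([(0, 0), (2, 3), (6, 1)], end_time=10)
print(avg)  # (0*2 + 3*4 + 1*4) / 10 = 1.6
```

Note how a plain (unweighted) average of the three lengths would give 4/3, not 1.6, which is exactly the trap the answer warns about.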

In OOP, if objects send each other messages, won't there be easily an infinite loop happening?

In an Apple paper about object-oriented programming, objects are depicted sending messages to each other. So Appliance can send a message to Valve requesting water, and the Valve object can then send a message back to the Appliance, "giving the water".
(To send a message is actually to call a method on the other object.)
So I wonder: won't this cause a subtle infinite loop in some way that even the programmer did not anticipate? For example, suppose we program two objects that pull each other by gravity. One sends the other a message that there is a "pull" force; the other object's method gets called and in turn sends a message to the first object, and they go into an infinite loop. If the computer program has only one process or one thread, it will simply go into an infinite loop and never run anything else in that program (even when the two objects finally collide, they still continue to pull each other). How does this programming paradigm work in reality to prevent this?
Update: this is the Apple paper: http://developer.apple.com/library/mac/documentation/cocoa/conceptual/OOP_ObjC/OOP_ObjC.pdf
Update: for all the people who just look at this obvious example and say "You are wrong! Programs should be finite": what I am aiming at is, what if there are hundreds or thousands of objects sending each other messages, and on receiving a message they might in turn send other messages to other objects? How can you be sure there can't be an infinite loop, so that the program cannot make any further progress?
On the other hand, for people who said "a program must be finite": what about a simple GUI program? It has an event loop, which is an infinite loop that runs UNTIL the user explicitly asks the program to stop. And what about a program that keeps looking for prime numbers? It can keep looking (with BigNum, such as in Ruby, so an integer can have any number of digits), writing every millionth prime it finds to the hard disk (just that one number, not all million of them) and then continuing to look for the next million. For a computer with 12 GB of RAM and a 2 TB hard drive, maybe you can say it will take 20 years for the program to exceed the machine's capacity, when the hard disk is full or the 12 GB of RAM cannot fit all the variables (it might be billions of years before an integer cannot fit in 1 GB of RAM). But as far as the program is concerned, it just keeps running: unless the memory manager cannot allocate another BigNum, or the hard drive is full, so that an exception is raised and the program is forced to stop, the program is written to run indefinitely. So not all programs HAVE TO BE written to be finite.
Why should Appliance request water repeatedly?
Why should Valve bombard Appliance with messages saying that water is being provided?
In theory it's likely to create an infinite loop, but in practice it comes down to proper modeling of your objects.
Appliance should send the ICanHasWater message only once, wait for a response, and either receive water or receive an answer that water cannot be provided (or will be in the future, when Appliance might want to try requesting water again).
That's why I went into the two-objects-and-gravity example instead.
An infinite loop of gravity calculations between objects would happen only if you triggered a calculation from within a calculation.
I think the common approach is to introduce a Time concept and calculate gravity for a particular TimeFrame, then move on to the next one for the next round of calculation. That way your World has control of the thread between TimeFrames, and your application can do something more useful than endless calculation of gravity effects.
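A rough Python sketch of that TimeFrame idea (the linear attraction law and all names here are made up for illustration, not real physics): a world loop advances both bodies one time step at a time, so no message ever triggers another message and there is no recursion at all.

```python
# Time-stepped "world loop" instead of objects recursively messaging each
# other: each frame applies the mutual pull once, then moves both bodies.
class Body:
    def __init__(self, pos, vel):
        self.pos, self.vel = pos, vel

def step(a, b, g, dt):
    """Advance one time frame: apply mutual attraction, then move."""
    force = g * (b.pos - a.pos)   # toy linear attraction, not 1/r^2 gravity
    a.vel += force * dt
    b.vel -= force * dt
    a.pos += a.vel * dt
    b.pos += b.vel * dt

a, b = Body(0.0, 0.0), Body(10.0, 0.0)
for _ in range(10):               # a finite number of frames, no recursion
    step(a, b, g=0.1, dt=0.1)
print(f"a at {a.pos:.2f}, b at {b.pos:.2f}")
```

Control returns to the world loop after every frame, which is exactly the point of the answer: the application can check for collisions, process input, or stop, instead of being trapped in mutual message sends.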
Without OOP it is just as easy to create infinite loops unintentionally, whether in imperative or functional programming languages. So I cannot see what is special about OOP in this case.
If you think of your objects as actors sending each other messages, it's not necessarily wrong to go into an infinite loop. GUI toolkits work this way. Depending on the programming language used, this is made obvious by a call to toolKit.mainLoop() or the like.
I think that even your example of modelling gravity by objects pulling at each other is not wrong per se. You have to ensure that something happens as a result of the message (i.e. the object being accelerated and moving a little), and you will get a rough discretization of the underlying formulae. You'll want to check for collisions nevertheless, to make your model more complete :-)
Using this model requires some level of concurrency in your program to ensure that messages are processed in proper order.
In real-life implementations there's no infinite loop; there's infinite indirect recursion instead: A() calls B(), B() calls C(), and on some branch C() calls A() again. In your example, if Appliance sends GetWater, Valve sends HeresYourWaterSir immediately, and Appliance's handler of HeresYourWaterSir for whatever reason sends GetWater again, infinite indirect recursion will begin.
So yes, you're right: in some cases problems can happen. The OOP paradigm itself doesn't protect against that; it's up to the developer.
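The GetWater / HeresYourWaterSir exchange can be sketched in Python (names follow the answer; the guard flag is just one illustrative way a developer can break the indirect recursion):

```python
# Indirect recursion guard: Appliance refuses to re-request water while a
# request is already pending, so GetWater -> HeresYourWaterSir cannot loop.
class Valve:
    def __init__(self):
        self.appliance = None
    def get_water(self):
        self.appliance.heres_your_water()   # replies immediately

class Appliance:
    def __init__(self, valve):
        self.valve = valve
        self.waiting = False
    def request_water(self):
        if self.waiting:                    # guard against re-entry
            return
        self.waiting = True
        self.valve.get_water()
    def heres_your_water(self):
        self.waiting = False
        # Without the guard above, calling self.request_water() here would
        # recurse forever: GetWater -> HeresYourWaterSir -> GetWater -> ...

valve = Valve()
app = Appliance(valve)
valve.appliance = app
app.request_water()
print(app.waiting)  # False: one request, one reply, no recursion
```

This mirrors the answer's point: the language gives you nothing for free here, so the termination guarantee has to live in the objects' own state.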