I need to change the delay type of a process during a simulation in Arena. For example, I need a constant delay type for the first two hours and then a triangular delay for the next three hours.
You can try putting a Decide module just before the process. In that Decide you can set conditions on the simulation time. For instance, if the current time is within the first 2 hours, its exit point will go to an Assign module that sets the process time to the desired constant. If it is past that, it will go to another Assign that sets the process time of the current entity to a triangular distribution.
After that you can connect all these Assigns to the Process module and set the delay type to Expression, where the expression is the attribute in which you assigned the process times.
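Arena's Decide/Assign chain is graphical, but the branching logic it implements can be sketched in Python; the constant value 1.5 and the TRIA(1, 2, 3) parameters below are made-up placeholders, not values from the question:

```python
import random

def process_time(sim_time_hours, rng=random):
    """Mimic the Decide/Assign chain: pick the delay value
    based on the current simulation time."""
    if sim_time_hours < 2.0:
        # First branch of the Decide: Assign a constant delay.
        return 1.5
    # Second branch: Assign a triangular delay, TRIA(1, 2, 3).
    return rng.triangular(1.0, 3.0, 2.0)  # low=1, high=3, mode=2
```

The Process module then reads the assigned attribute as its Expression delay, exactly as described above.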
I'm trying to understand an illustrative example of how Lamport's algorithm is applied. In the course that I'm taking, we were presented with two representations of the clocks within three distant processes, one with the Lamport algorithm applied and the other without.
Without the Lamport algorithm:
With the Lamport algorithm applied:
My question concerns the validity of the change that was applied to the third entry of the table pertaining to process P1. Shouldn't it be, as the Lamport algorithm instructs, max(2, 2) + 1, which is 3, not 4?
When I asked some of my classmates about this issue, one of them informed me that the third entry of the table of P1 represents a "local" event that happened within P1, so when message A arrives, the entry is updated to max(2, 3) + 1, which is 4. However, if that were the case, shouldn't the receipt of the message be represented in a new entry of its own, instead of being put in the same entry that represents the local event that happened within P1?
Upon further investigation, I found, in the same course material, a figure taken from Tanenbaum's Distributed Systems: Principles and Paradigms, in which the new value of an entry corresponding to the receipt of a message is computed by adding 1 to the max of the previous entry in the same table and the timestamp of the received message, as shown below, which is quite different from what was performed in the first illustration.
I'm unsure if the problem relates to a faulty understanding that I have regarding the algorithm, or to the possibility that the two illustrations are using different conventions with respect to what the entries represent.
validity of the change that was applied to the third entry of the table pertaining to the process P1
In the classical Lamport algorithm, there is no need to increase the local counter before taking the max. If you do, it still works, but it seems like a useless operation. In the second example, all events are still properly ordered. In general, as long as counters go up, the algorithm works.
Another way of looking at correctness is trying to rebuild the total order manually. The hard requirement is that if an event A happens before an event B, then A will be placed before B in the total order. In both pictures 2 and 3, everything is fine.
Let's look into picture 2. The event (X) in the second cell of P0 happens before the event (Y) in the third cell of P1. To make sure X comes before Y in the total order, the time of Y must be larger than X's. And it is. It doesn't matter whether the time difference is 1 or 2 or 100.
in which the new value of an entry corresponding to the receipt of a message is computed by adding 1 to the max of the previous entry in the same table and the timestamp of the received message, as shown below, which is quite different from what was performed in the first illustration
It's actually pretty much the same logic, with the exception of incrementing the local counter before taking the max. Generally speaking, every process has its own clock, and every event increases that clock by one. The only exception is when the clock of a different process is already ahead; then taking the max is required to make sure all events have a correct total order. So, in the third picture, P2 adjusts its clock (taking the max) because P3 is way ahead. The same goes for P1's adjustment.
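The classical rule can be sketched in Python (the sequence of events below is invented for illustration, not taken from the pictures). Note that the receive does max(local, message) + 1 with no extra pre-increment, so receiving a message stamped 2 at local time 2 yields 3:

```python
class LamportClock:
    """Classical Lamport clock: every event ticks the counter;
    a receive takes max(local, message timestamp) and then ticks."""

    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send(self):
        # Sending is an event too: tick, then stamp the message.
        self.time += 1
        return self.time

    def receive(self, msg_time):
        # The only adjustment: catch up if the sender's clock is ahead.
        self.time = max(self.time, msg_time) + 1
        return self.time

p0, p1 = LamportClock(), LamportClock()
p0.local_event()       # P0: 1
stamp = p0.send()      # P0: 2, message carries timestamp 2
p1.local_event()       # P1: 1
p1.local_event()       # P1: 2
t = p1.receive(stamp)  # P1: max(2, 2) + 1 = 3
```

A variant that also pre-increments the local counter before taking the max would give 4 here instead of 3; as noted above, both keep the counters going up, so both preserve the total order.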
I have the data below. I need to create a flag that shows whether a call-back happened within the last X period of time (for this purpose we can use 1 day).
I require a calculated field to provide the column in the second data set.
Relatively new to LOD and have tried a few bits without success (hence the time!).
Really appreciate the support :)
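The sample data isn't shown here, so as a rough sketch of the intended logic only, here it is in plain Python with invented customer/time columns; in Tableau this would be expressed as a calculated field (e.g. with an LOD expression or table calculation) rather than code:

```python
from datetime import datetime, timedelta

# Hypothetical rows: (customer_id, call_time). The real columns aren't shown.
calls = [
    ("A", datetime(2021, 2, 1, 9, 0)),
    ("A", datetime(2021, 2, 1, 15, 0)),  # 6h after A's previous call
    ("A", datetime(2021, 2, 5, 9, 0)),   # ~4 days later
    ("B", datetime(2021, 2, 1, 9, 0)),
]

def callback_flags(rows, window=timedelta(days=1)):
    """Flag a call as a call-back if the same customer
    called within `window` before it."""
    last_seen = {}
    flags = []
    for cust, when in sorted(rows, key=lambda r: r[1]):
        prev = last_seen.get(cust)
        flags.append(prev is not None and when - prev <= window)
        last_seen[cust] = when
    return flags

callback_flags(calls)  # → [False, False, True, False] in time order
```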
I want to make a variable within SSIS that holds the current date so that I can reference it in a Script Task, but I have only been able to do this with the start date and creation date instead of sysdate. Can anyone help?
SSIS has two states: design-time and run-time. Design-time is the experience in Visual Studio/BIDS/SSDT. There are artifacts on the screen and interactive windows, and the Variables window shows the values of the package "at rest".
Run-time is the experience in the debugger (or an unattended execution). In the debugger, the data flow components light up and you can see data flowing between components, but you can find discrepancies between the two states. For example, the Variables window won't show you what the value of a variable is "RIGHT NOW." Instead, it shows the design-time value. If you want to see what the internals look like now, that's the Debug menu, Locals window. There you'd see the current values of all the variables that were defined at design time.
System::StartTime has its run-time value set when the package begins (the OnPackageStart event). The start time is constant for the run of a package: whether the run lasts a minute or 3 days, it is the time the package started. The design-time value won't ever be passed to a consumer of that variable because the value is updated when the package starts, and SSIS does not update the design-time values with the previous run's values; i.e., a design-time start time of 2021-02-18 will always be the at-rest value despite the package being run every day.
You cannot control this behavior, nor do you need to worry about the design-time value never being accurate, as this is simply part of how run-time works.
An expression exists, GetDate(), which is evaluated every time it is inspected (design and run time). I usually advise against it because I am likely using the current time to correlate database activities.
e.g. I created these 10, 100, or 1000000 records at 2021-02-22T11:16:32.123. If I inserted in batches of ten, the first scenario would be recorded under the same timestamp. The second would look something like: the first 10 at 2021-02-22T11:16:32.123, the next 10 at 2021-02-22T11:16:32.993, the next 10 at 2021-02-22T11:16:33.223, etc. Maybe more, maybe less. Why that matters is that I can't prove to the business that "these 10/100/1000000 rows are the rows from load X because they all have the same timestamp." Instead, I need to find all the rows from 2021-02-22T11:16:32.123 to 2021-02-22T11:16:38.532, and oops, a different process also ran in that timeframe, so my range query now identifies 10/105/1000003 rows.
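The correlation argument above can be sketched like this (the timestamps and batch spacing are invented to mirror the example):

```python
from datetime import datetime, timedelta

# One timestamp captured at package start (System::StartTime behavior).
start_time = datetime(2021, 2, 22, 11, 16, 32, 123000)

# GetDate()-style: each batch of 10 gets whatever "now" is at insert time.
per_batch_times = [start_time + timedelta(milliseconds=870 * i) for i in range(3)]

rows_with_start = [(i, start_time) for i in range(30)]
rows_with_now = [(i, per_batch_times[i // 10]) for i in range(30)]

# With a fixed start time, one equality predicate recovers the whole load...
load = [r for r in rows_with_start if r[1] == start_time]

# ...with per-call times the rows scatter across several timestamps, forcing
# a range query that can pick up other processes running in the same window.
distinct_times = {t for _, t in rows_with_now}
```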
GetDate() for longer-running processes that start before, but near, the midnight boundary can result in frustrating explanations to the business.
Finally, since you're referencing a Script Task, you're already in .NET space, so you can use DateTime.Now/DateTime.Today in your methods and not worry about passing an SSIS variable into the environment.
I'm currently learning ABAP and trying to make an enhancement, but I have broken down in confusion over how to go about building on top of existing code. I have a program that runs periodically via a background job and disables user accounts after X days of inactivity (in this case 90 days, based on USR02~TRDAT).
I want to add an enhancement that notifies users via their email addresses (matching usr02~bname to usr21~bname, then using usr21~persnumber and usr21~addrnumber to look up adr6~smtp_addr, providing the usr02~bname -> adr6~smtp_addr relationship) when their last logon date is 30, 15, 7, 5, 3, or 1 day away from the 90-day inactivity threshold, with a link to the SAP system to help them reactivate the account with ease.
I'm beginning to think that an enhancement might not be a good idea, and that I should instead create a new program and schedule a daily background job. Any guidance or information would be greatly appreciated...
Extract
CLASS cl_inactive_users_reader DEFINITION.
  PUBLIC SECTION.
    TYPES:
      BEGIN OF ts_inactive_user,
        user_name          TYPE syst_uname,
        days_of_inactivity TYPE int1,
      END OF ts_inactive_user.
    TYPES tt_inactive_users TYPE STANDARD TABLE OF ts_inactive_user WITH EMPTY KEY.

    CLASS-METHODS read_inactive_users
      IMPORTING
        min_days_of_inactivity TYPE int1
      RETURNING
        VALUE(result)          TYPE tt_inactive_users.
ENDCLASS.
Then refactor
REPORT block_inactive_users.

DATA(inactive_users) = cl_inactive_users_reader=>read_inactive_users( 90 ).
LOOP AT inactive_users INTO DATA(inactive_user).
  " block user
ENDLOOP.
And add
REPORT warn_inactive_users.

DATA(inactive_users) = cl_inactive_users_reader=>read_inactive_users( 60 ).
LOOP AT inactive_users INTO DATA(inactive_user).
  CASE inactive_user-days_of_inactivity.
    " choose urgency
  ENDCASE.
  " send e-mail
ENDLOOP.
and run both reports daily.
Don't create a big ball of mud by squeezing new features into existing code.
From SAP wiki:
The enhancement concept allows you to add your own functionality to SAP's standard business applications without having to modify the original applications. To modify standard SAP behavior to meet customer requirements, we can use the enhancement framework.
As per your description, this doesn't sound like a use case for an enhancement. It isn't an intervention in an existing process. The original process and your new requirement are two different processes that share some logic: the selection of users by days of inactivity. The two shouldn't rely on each other.
Structurally I think it is best to have a separate program for computing which e-mails need to be sent and when, and a separate program for actually sending them.
I would copy your original program to a new one and modify it a little so that instead of disabling a user, it records into some table, for each user: 1) the e-mail address, 2) the date on which to send, 3) how many days are left (30, 15, 7, etc.), and 4) a status indicating whether the e-mail was sent. Initially you could even have multiple such jobs, one for each period (30, 15, 7, etc.), passing the period as a parameter (used inside instead of 90).
This program you run daily as a job and it populates that table with e-mail "tasks" of what needs to be sent today. It just adds new lines, so lines from yesterday should stay in there.
The second program should just read that table, send the actual e-mails, and update the statuses. You run that program daily as well.
This way you have:
overview: just check the table to see what's going on
control: if the e-mailer dies or hangs, you can restart it and it will continue where it left off; with statuses you avoid sending duplicate mails
freshness: you can make sure that you don't send outdated e-mails if your mailer script ignores all tasks older than, say, 2 days
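The two-job design above can be sketched in Python (a real implementation would be a Z-table plus two ABAP jobs; all names, milestones, and the in-memory "table" here are invented for illustration):

```python
from datetime import date

# Hypothetical task table: one e-mail "task" per user per milestone.
tasks = []

def populate_tasks(users, today, threshold=90, milestones=(30, 15, 7, 5, 3, 1)):
    """Job 1 (daily): record a task for each user whose remaining
    days before the threshold hit a warning milestone."""
    for name, email, days_inactive in users:
        days_left = threshold - days_inactive
        if days_left in milestones:
            tasks.append({"email": email, "send_on": today,
                          "days_left": days_left, "status": "NEW"})

def send_pending(today, max_age_days=2):
    """Job 2 (daily): send NEW tasks, skip stale ones, mark what was sent."""
    for task in tasks:
        if task["status"] != "NEW":
            continue  # statuses prevent duplicate mails on restart
        if (today - task["send_on"]).days > max_age_days:
            continue  # ignore outdated tasks
        # send_mail(task["email"], task["days_left"])  # real mailer goes here
        task["status"] = "SENT"

users = [("JDOE", "jdoe@example.com", 60),      # 30 days left -> task
         ("ASMITH", "asmith@example.com", 50)]  # 40 days left -> no task
populate_tasks(users, date(2021, 3, 1))
send_pending(date(2021, 3, 1))
```

Because job 2 only touches NEW rows and skips anything older than the cutoff, restarting the mailer after a crash neither loses nor duplicates e-mails, which is exactly the overview/control/freshness benefit listed above.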
I want to clarify your confusion about the use of enhancements:
You would want to use an enhancement when 'something' happens or is going to happen in the system and you want to change that standard behavior.
That something, let's call it an event or process, could be for example an order being placed, a certain user logging onto the system, or a material that has been or is going to be changed.
The change could be notifying another system of the order, or checking the logged-on user with additional checks, for example warning him/her if his GUI version is not up to date.
Ask yourself: what process in the system does the execution of your program or code depend on? Does anything need to happen before the program is executed? No, only time elapsing.
Even if you had found an enhancement you would want to use: if the process containing that enhancement were not run within 90 days, your mails would not be sent, because the enhancement would never be called.
edit: That being said, if by enhancement you mean 'building on your existing program' rather than 'creating a new one', that would be absolutely the wrong terminology for an enhancement in the SAP universe.
I would extend the functionality of your existing program, since you already compute how many days are left, and you would have only one job to maintain.
I have a batch job that runs once per day.
At the end of the job I submit a meter metric with a count of the items processed.
I want to alert if one day this metric is not updated.
On http://metrics.librato.com the maximum time I can check "not reported for" when creating an alert is 60 minutes.
I thought maybe I can create a composite metric and take the avg rate of change over the past 24 hours, and alert if that reaches zero.
I've been trying:
derive(s("my.metric", "%", {function:"sum", period:"86400"}))
However, it seems that, because I log only a single event, above quite small values of period (~250s) my rate of change simply drops to zero... I guess the low frequency means my single value is completely lost by the sampling.
Maybe I am using the wrong tool for the job...
Is there a way to achieve this in Librato?
There currently is not a way to achieve this, as composite metrics are subject to the 60-minute limitation of alerts as well (as of 5/15/2015). You may need to look into configuring the metric (or a similar metric) to report within the 60-minute window if possible.