I'm working on an IRC bot in VB.NET 2012. I know, I know, but it's not a botnet, lol.
I wanted to ask your advice on how to manage too many timers. Right now I have a timer that rewards points at a user-specified interval and a timer that autosaves those points at a user-specified interval. I also have one that displays advertisements and one that broadcasts a collection of responses at specified intervals.
I feel like it's getting out of hand, and I would love to know if there is a way I could do all of these things with a single timer, or at least with fewer timers.
Please keep in mind I learn as I go and don't always understand all the terminology, but I am a quick learner if you have the patience to explain. :)
Thank you.
Yes, of course you can do them with a single timer. In fact, that is what you should do—timers are a limited resource, and there's hardly ever a reason for a single application to use more than one of them.
What you do is create a single timer that ticks at the most frequent interval required by all of your logic. Then, inside of that timer's Tick event handler, you set/check flags that indicate how much time has elapsed. Depending on which interval has expired, you perform the appropriate action and update the state of the flags. By "flags", I mean module-level variables that keep track of the various intervals you want to track—the ones you're tracking now with different timers.
It is roughly the same way that you keep track of time using a clock. You don't use separate clocks for every task you want to time, you use a single clock that ticks every second. You operate off of displacements from this tick—60 of these ticks is 1 minute, 3600 of these ticks is 1 hour, etc.
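The counter-based approach described above can be sketched as follows. This is a minimal illustration in Java rather than VB.NET (the same structure carries over to a VB.NET Timer's Tick handler); the job names and intervals are made up.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// One master "timer" that ticks at the smallest interval; each job
// stores its own interval and a counter of milliseconds since it last ran.
public class SingleTimerDemo {
    static final int BASE_MS = 1000;                 // master tick: 1 second
    static final Map<String, Integer> intervals = new LinkedHashMap<>();
    static final Map<String, Integer> elapsed = new LinkedHashMap<>();
    static final StringBuilder log = new StringBuilder();

    static void register(String job, int everyMs) {
        intervals.put(job, everyMs);
        elapsed.put(job, 0);
    }

    // Called once per master tick (e.g. from the Timer's Tick handler).
    static void onTick() {
        for (String job : intervals.keySet()) {
            int soFar = elapsed.get(job) + BASE_MS;
            if (soFar >= intervals.get(job)) {
                log.append(job).append(' ');   // run the job's action here
                soFar = 0;                     // reset this job's counter
            }
            elapsed.put(job, soFar);
        }
    }

    public static void main(String[] args) {
        register("points", 2000);     // reward points every 2 s
        register("autosave", 3000);   // save points every 3 s
        for (int i = 0; i < 6; i++) onTick();  // simulate 6 seconds
        System.out.println(log.toString().trim());
        // -> points autosave points points autosave
    }
}
```

Each job fires on the tick at which its accumulated time reaches its interval, so one timer drives any number of schedules.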
However, I strongly recommend figuring out a way to do as many of these things as possible in response to events, rather than at regular intervals tracked by a timer. You could, for example, reward points in response to specific user actions. This type of polling is very resource-intensive and rarely the best solution.
Related
I am wondering whether it's possible to handle scheduling problems with OptaPlanner where tasks have the following property: instead of having a fixed duration of 1 hour, a task requires 1 man-hour, i.e. if two employees work on that task, it could be done in half an hour.
Otherwise, what other solvers could be used?
Model-wise, the easy approach is to split that 1 task into 2 smaller tasks that get assigned individually. (When they're both assigned to the same person, sequentially one after the other, you can add a soft constraint to reward that.) The downside is that you have to decide in advance, for each task, into how many pieces it can be split.
In reality, tasks are rarely arbitrarily divisible. Some parts of each task are atomic. For example, taking out the garbage is a do-or-do-not task. Taking it halfway out, or taking half of it out and assigning someone else to do the rest, isn't allowed because it would increase the total time spent on it.
Some tasks need at least 2 people to execute. For example, someone to hold the ladder while the other is standing on it. In the docs, see the auto delay to last pattern.
As an alternative to the simple model, you can also play with nullable=true and custom moves to allow multiple people to be assigned to the same task, but it's complicated. It can avoid having to tune the number of task pieces in advance too much. Once we fully support #PlanningVariableCollection, more and better options in this regard will become available.
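The splitting approach above can be sketched in plain Java (no OptaPlanner classes, just to illustrate the pre-processing step; the method name and numbers are made up). Each piece would then become its own planning entity:

```java
import java.util.ArrayList;
import java.util.List;

// Illustration of splitting a task's total man-minutes into equal pieces
// that can be assigned individually, deciding the piece count up front.
public class TaskSplitter {
    static List<Integer> split(int totalMinutes, int pieces) {
        List<Integer> parts = new ArrayList<>();
        int base = totalMinutes / pieces;
        int remainder = totalMinutes % pieces;
        for (int i = 0; i < pieces; i++) {
            parts.add(base + (i < remainder ? 1 : 0)); // spread the remainder
        }
        return parts;
    }

    public static void main(String[] args) {
        // 60 man-minutes split for 2 employees -> two 30-minute subtasks
        System.out.println(split(60, 2));   // [30, 30]
    }
}
```

The piece count per task is fixed before solving starts, which is exactly the downside mentioned above.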
I want to have a function where, if the calculation time gets too long, we abort the routing calculation and submit the best solution found at that point in time. Is there such a function in OptaPlanner?
For example, in a GUI application you would start solving on a background (worker) thread. In this scenario you can stop the solver asynchronously by calling solver.terminateEarly() from another thread, typically the UI thread when you click a stop button.
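The thread arrangement behind this can be sketched generically. With OptaPlanner you would call solver.terminateEarly() on the real Solver; in this self-contained sketch an atomic stop flag stands in for it, and the "solving" loop is simulated:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Pattern sketch: solving runs on a worker thread, and a second thread
// (e.g. the UI thread) stops it early and keeps the best solution so far.
public class EarlyStopDemo {
    static final AtomicBoolean terminateEarly = new AtomicBoolean(false);
    static volatile int bestSoFar = 0;

    static void solve() {
        // "Solving" loop: keep improving until told to stop.
        while (!terminateEarly.get()) {
            bestSoFar++;                     // pretend each pass improves the score
            if (bestSoFar >= 1000) break;    // safety bound for the demo
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(EarlyStopDemo::solve);
        worker.start();
        Thread.sleep(10);                    // UI thread: user clicks "Stop"
        terminateEarly.set(true);            // analogous to solver.terminateEarly()
        worker.join();                       // worker returns its best solution
        System.out.println("best solution so far: " + bestSoFar);
    }
}
```

The key point is that termination is cooperative: the worker returns normally with the best solution found so far, rather than being killed mid-move.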
If this is not what you're looking for, read on.
Provided that by "calculation" you actually mean the time spent solving, you have several options for stopping the solver. Besides the asynchronous termination described in the first paragraph, you can use synchronous termination:
Use time spent termination if you know beforehand how much time you want to dedicate to solving.
Use unimproved time spent termination if you want to stop solving if the solution doesn't improve for a specified amount of time.
Use best score termination if you want to stop solving after a certain score has been reached.
Synchronous termination is defined before starting the solver, either in the XML solver configuration or using the SolverConfig API. See the OptaPlanner documentation for other termination conditions.
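As a sketch, the three terminations listed above look roughly like this in the XML solver configuration (element names as in the OptaPlanner docs; the limit values here are made up, and in practice you would pick one or combine them):

```xml
<solver>
  <termination>
    <!-- time spent termination: stop after 5 minutes of solving -->
    <minutesSpentLimit>5</minutesSpentLimit>
    <!-- unimproved time spent termination: stop after 30 s without improvement -->
    <unimprovedSecondsSpentLimit>30</unimprovedSecondsSpentLimit>
    <!-- best score termination: stop once this score is reached -->
    <bestScoreLimit>0hard/-100soft</bestScoreLimit>
  </termination>
</solver>
```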
Lastly, in case you're talking about score calculation and it takes too long to calculate the score for a single move (solution change), then you're most certainly doing something wrong. For OptaPlanner to be able to search the solution space effectively, score calculation must be fast (at least 1000 calculations per second).
For example, in a vehicle routing problem, driving times or road distances must be known at the time you start solving. You shouldn't slow down score calculation with heavy computation that can be done beforehand.
In Cocoa, NSEvent has a timestamp property that returns an NSTimeInterval representing "The time when the event occurred in seconds since system startup." I am writing an application which needs exactly this information: precisely when the user presses keys. But I am worried that the timestamp might not be accurate enough. NSTimeInterval itself has sub-millisecond precision, which is great. Is there any documentation that indicates whether that sub-millisecond precision is actually used to provide accurate timestamps for NSEvents representing keyboard and mouse input?
If I had to guess at Apple's implementation, I would hope that the timestamp on the NSEvent is generated soon after a hardware interrupt, which would be fantastic. However, I could also imagine a system that polls for keyboard/mouse input and only populates the timestamp field when the next poll interval comes around and reads the inputs.
Thanks very much for any insight.
You may find Apple's Cocoa Event Handling Guide: Event Architecture document a useful read. From that:
Before it dispatches an event to an application, the window server processes it in various ways; it time-stamps it, annotates it with the associated window and process port, and possibly performs other tasks as well.
This seems to indicate the Window Server, which lives between the kernel and the application layer, applies the timestamp... though it is possible it reads metadata that was generated at a lower level in order to do this.
I would think the best way to get a better sense of this would be to write an app that records timestamps of key down/up events and looks for exact matches in adjacent events. If you see several events in a row with (nearly) identical timestamps, it's more likely they were queued in some buffer before they got timestamped. If there is enough of a gap between adjacent events, the stamp is more likely being applied closer to the hardware event.
For example, I need a timer on the page so I can have an action every 100 ms:
type Action = Tick Time
and I have a time field in my Model. The Model could be big, but I need to recreate it and the whole view every 100 ms because of the time field. I think it would not be efficient performance-wise.
Is there another approach or I shouldn't worry about such thing?
The whole view isn't necessarily being recreated every time. Elm uses a virtual DOM and does diffs to change only the bare minimum of the actual DOM. If large parts of your view are actually changing on every 100 ms tick, then that could obviously cause problems, but I'm guessing you're only making small adjustments every 100 ms, and you probably have nothing to worry about. Take a look at your developer tools to see whether the process utilization is spiking.
Your model isn't being recreated every 100 ms either. There are optimizations around the underlying data structures (see this related conversation about foldp internals) that let you think in terms of immutability and purity while being optimized under the hood.
VB.NET 2010, .NET 4
Hello,
I have an application which controls a process and several stopwatches that keep track of the elapsed time since various events.
The simplified picture is: The process starts, at a later time an event "A" occurs, at a later time an event "B" occurs, etc...
There are a finite number of such events. At the start of each event (including the process start event), I create and start a new stopwatch. I then update some indicators that display the amount of time since each event started.
So, I have a bunch of labels (LabelStart, LabelA, LabelB, etc.), each formatted as HH:MM:SS, which represent the elapsed time since each event occurred. Their text is derived from the corresponding stopwatches' properties.
My question is: would it be better, from a CPU/memory efficiency standpoint, to have one stopwatch and a list of offset integers? I.e., the stopwatch starts at process start and, at each event, an integer equal to the current elapsed milliseconds on that stopwatch is added to a list. The labels could then be updated by subtracting each offset from the one running stopwatch.
I have no idea how they work. Maybe this is a dumb question. I'm just curious.
Thanks in advance!
Brian
If you are developing an application for desktop computers, and if "several" is not too many (say, fewer than 10), it should not make any difference.
That said, the approach you are considering will be more efficient.
The Stopwatch type is a class, but a very lightweight one: it essentially contains an "is running" flag along with a number that either represents the number of ticks elapsed (when it's stopped) or the system performance counter value at which it is deemed to have started (when it's running). An array holding a million Stopwatch instances, all started at different times, would impose no more ongoing overhead than any other collection of similarly small objects.
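The offset scheme from the question can be sketched as follows. This is an illustration in Java rather than VB.NET, with the "now" timestamps passed in explicitly so the arithmetic is visible; the names and times are made up.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// One base clock plus one recorded offset per event; each event's elapsed
// time is derived on demand instead of keeping a stopwatch per event.
public class OffsetClock {
    final long baseMs;                                  // process start
    final Map<String, Long> offsets = new LinkedHashMap<>();

    OffsetClock(long nowMs) { baseMs = nowMs; }

    void mark(String event, long nowMs) {               // event occurred now
        offsets.put(event, nowMs - baseMs);
    }

    long elapsedMs(String event, long nowMs) {          // time since event
        return (nowMs - baseMs) - offsets.get(event);
    }

    static String hhmmss(long ms) {                     // label text
        long s = ms / 1000;
        return String.format("%02d:%02d:%02d", s / 3600, (s / 60) % 60, s % 60);
    }

    public static void main(String[] args) {
        OffsetClock clock = new OffsetClock(0);
        clock.mark("A", 5_000);                         // event A at t = 5 s
        clock.mark("B", 65_000);                        // event B at t = 65 s
        long now = 3_665_000;                           // t = 1 h 1 min 5 s
        System.out.println(hhmmss(clock.elapsedMs("A", now)));  // 01:01:00
        System.out.println(hhmmss(clock.elapsedMs("B", now)));  // 01:00:00
    }
}
```

Updating the labels then takes one subtraction per event instead of one running stopwatch per event, though as noted above the saving is negligible at small counts.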