How many active objects can one ActiveScheduler handle? - symbian

I have a question about Symbian active object handling. What's the problem: my program runs in one thread and has quite a lot of active objects in it. According to my logs, I see strange pauses in task processing. My program has about 30 simultaneously active objects in one ActiveScheduler. Is that okay?

Any Symbian Active Scheduler can handle pretty much as many Active Objects as you need.
Obviously, each added active object has a tiny performance impact on the whole scheduler but 30 is well within acceptable range.
You do have to remember this is all based on cooperative multitasking, though. If requests complete faster than they can be serviced, or individual active objects take too long to run, the time it takes for the scheduler to get around to calling RunL() on a particular active object can become unacceptable for your application.
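To make the latency concern concrete, here is a toy sketch of the effect (plain C#, not Symbian code; the 50 ms handler cost is an arbitrary assumption): in a cooperative scheduler, handlers run one at a time and to completion, so every slow handler pushes back everything queued behind it.

    using System;
    using System.Collections.Generic;
    using System.Threading;

    // Toy model of a cooperative scheduler: 30 "active objects" whose requests have all
    // completed, each serviced by a RunL-like callback that takes ~50 ms to run.
    class CooperativeSchedulerDemo
    {
        static void Main()
        {
            var handlers = new Queue<Action>();
            for (int i = 0; i < 30; i++)
                handlers.Enqueue(() => Thread.Sleep(50)); // each handler costs ~50 ms

            var started = DateTime.UtcNow;
            while (handlers.Count > 0)
                handlers.Dequeue()();                     // strictly one after another, run to completion

            // The last object waited roughly 29 * 50 ms = ~1.45 s before its handler ran.
            Console.WriteLine($"All handlers done in {(DateTime.UtcNow - started).TotalMilliseconds:F0} ms");
        }
    }

With 30 well-behaved (fast) handlers the same loop finishes almost immediately, which is why 30 active objects by themselves are not a problem.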

Are delayed messages in Redis reliable?

I have an architecture solution that relies on the delayed messages.
In short:
There are many clients (mostly mobile devices running Android or iOS) that can process a given job.
I am creating a job delegation (in an RDBMS) for a given client, expecting it to be picked up within a certain period of time, and the "chosen" client receives a push notification that there is something for it to process. IMO the details of the algorithm for choosing a single client out of many are irrelevant to the problem, so I'm skipping that part.
When the client pulls a job delegation then the status of it is changed from pending to processing.
As mentioned, the clients are mobile devices, often carried by people on the move, and thus can, for many reasons, be unable to pull the job delegation from the server and process it.
That's why, during the creation of the job delegation, a delayed message is also dispatched in Redis, which is supposed to check at now() + 40 seconds whether the job was pulled or not (i.e. whether the status is still pending).
If the delegation hasn't been pulled by the client (status = pending), the server times it out and creates a new job delegation with status = pending for a different client. And so on, and so forth.
It works pretty well except that I've noticed the "check if it should time out" jobs do not ALWAYS run at the time I would expect them to run. The average is 7 seconds late and the max is 29 seconds late for an analyzed sample of a few thousand jobs. Redis is used as a queue but also as a key-value cache store, and in general it is heavily utilized by the system. Could it be affected that much by the load? I've sort of "reproduced" the issue on my local environment with a containerized setup and much less load, so I doubt it's entirely due to Redis being busy.
The delay in execution (vs. expected) is quite a problem here because, especially when trying a few clients from the list, the total time from the creation of the job until it's successfully processed can increase a lot.
So back to the original question. Is the delayed messaging functionality in Redis reliable?
Are there any good recommended docs about it?
Are there any more reliable solutions designed to solve that issue?
I'd expect a message scheduled for a given timestamp to be executed no later than 2-3 seconds after that timestamp.
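The question doesn't say which library dispatches the delayed message, but most of them implement the delay roughly as below: a sorted set scored by due time plus a worker that polls it. Any lag in that polling loop (or in a busy worker process) is added on top of the 40 seconds, which matches the kind of drift described above. This is only a hedged sketch in C# with StackExchange.Redis; the key names, the 40-second delay and the polling cadence are illustrative.

    using System;
    using StackExchange.Redis;

    // Sketch of the "check if the delegation timed out" delayed message, modeled as a
    // Redis sorted set scored by due time. All names here are made up for illustration.
    class DelayedCheckSketch
    {
        static readonly ConnectionMultiplexer Mux = ConnectionMultiplexer.Connect("localhost");

        public static void ScheduleTimeoutCheck(string jobDelegationId)
        {
            IDatabase db = Mux.GetDatabase();
            double dueAt = DateTimeOffset.UtcNow.AddSeconds(40).ToUnixTimeSeconds();
            db.SortedSetAdd("delayed:timeout-checks", jobDelegationId, dueAt);
        }

        // A worker polls for entries whose due time has passed; how late the check fires
        // depends on how often this runs and how busy the worker is.
        public static void PollOnce()
        {
            IDatabase db = Mux.GetDatabase();
            double now = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
            foreach (RedisValue id in db.SortedSetRangeByScore("delayed:timeout-checks", stop: now))
            {
                if (db.SortedSetRemove("delayed:timeout-checks", id))
                {
                    // Re-check the delegation status here and re-delegate if still pending.
                }
            }
        }
    }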

Listen or wait for a specific time without using timer

Is there a way to listen or wait for a specific time (e.g. 11:30 am) every day? The only way I know of is to set a timer that checks the current time every 60 seconds, which I have actually implemented using a BackgroundWorker. But is there a way to just wait and listen for the specified time (similar to monitoring for directory changes) and then take some action?
Thanks in advance.
Typically, rather than having a program resident in memory waiting, you would set up a Scheduled Task for this (or a cron job on Linux). The scheduled task will run the program at the appropriate time. The program can still check (validate) the expected time if needed, but it shouldn't just sit in the background using up resources if it's only going to run once per day.
The scheduled task is also better because it will recover automatically from computer reboots, crashes, etc. If something happens that interrupts your program's normal running, the scheduled task will still be able to run.
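To make that concrete, the program the task launches can stay trivial, something like the sketch below, registered with schtasks on Windows or a cron entry on Linux. The task name, path and the 11:30 time are placeholders, not anything from the question.

    using System;

    // Minimal once-a-day program meant to be launched by the OS scheduler, e.g.:
    //   schtasks /Create /TN "DailyJob" /TR "C:\tools\DailyJob.exe" /SC DAILY /ST 11:30
    // (or the equivalent cron line on Linux). Names and paths are placeholders.
    class DailyJob
    {
        static void Main()
        {
            // Optional validation: bail out if we were started outside the expected window.
            if (Math.Abs((DateTime.Now.TimeOfDay - new TimeSpan(11, 30, 0)).TotalMinutes) > 5)
            {
                Console.WriteLine("Not the expected run window; exiting.");
                return;
            }

            Console.WriteLine("Running the 11:30 task.");
            // ... do the actual daily work here ...
        }
    }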
This is especially important in the .Net world, because .Net requires you to be very careful writing long-lived programs to avoid address space fragmentation. The .Net garbage collector is good at freeing up and returning old memory to the operating system, but over time your program's virtual address space can become fragmented and eventually you will not be able to allocate new memory any longer.
Even if this is part of a larger program, where there are also other things happening based on user interactions, it's still a good idea to split this off into a separate process.

Shared Data Storage Strategy for 'Live' Dashboards in Excel VBA

I'm building a UI in Excel whose goal is to have "live" information on Orders and Order Status shared between three users; I'll name them DataUser, DashboardOne, and DashboardTwo for example's sake.
The process is that the DataUser will fill in the Orders data, and that data is going to be used to populate information on the two dashboards. The dashboards are going to be updated live with changes from the DataUser (order increases/decreases) and with changes in order status between DashboardOne and DashboardTwo. For the live updates I'm thinking of using an Application.OnTime call to refresh the Views/Dashboards. The two dashboards will be active about 8 hours a day.
Where I'm struggling is how/where to store the data. I've thought about a couple of options, but I don't know the implications of one over the other, especially considering that I intend the dashboards to run/refresh every 30 seconds with Application.OnTime, which could prove expensive.
The options I thought about were:
A Master Workbook that would create separate Workbooks for DashboardOne and DashboardTwo and act as the database and main UI for the DataUser.
Three separate workbooks that would all refer to one DataWorkbook or another flat data file (perhaps an XML or JSON file).
Using an actual database for the data, although this would bring other implications (don't currently have one).
I'm not considering a shared workbook, as I've tried something similar in the past (and this time ^^, early steps) and it went rather poorly: a nightmare to sync, with poor data integrity.
In short:
Which would be the best data storage strategy for Excel that wouldn't jeopardise the integrity of the data nor be so expensive as to interfere with the running of the rest of the code? Are there better options that I should be considering?
There are quite a number of alternatives, depending on the time you want to invest and the tools at hand. I'll give you a couple of options here.
But first, the basic assumptions:
The number of data items that you need to share (it being a dashboard) is a few tens (let's say, fewer than 100),
You have at least basic programming skills,
From your description, you have one client with READ-WRITE capabilities while there are two clients with READ-ONLY capability.
OPTION 1:
You can have Excel save the data in CSV format (a very small amount of data, hence it would take a small fraction of a second to save and to read).
The two clients would then open the file in read-only mode, load the data and update the display. You would need to include exception handling at both types of client:
At the writing client, handle the error raised when it attempts to write at the same time one of the clients attempts to read,
At the two reading clients, handle the error raised when attempting to open the file (for read only) while the other process is writing.
Since the write and read operations are going to take a very, VERY short time (as stated, a small fraction of a second), these conditions will be very rare. Additionally, since both dashboard clients open the file read-only, they will not disturb each other if they make their attempts at the same moment.
If you wish to drastically reduce the chances of collision, you may set the timers (of the update process on one hand and of the reading processes on the other) to a prime number of seconds. For instance, the timer of the updating process would fire every 11 seconds while that of the reading processes would fire every 7 seconds.
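The collision handling in option 1 boils down to a short retry around the file open. The sketch below shows that pattern; it is written in C# purely to keep it compact, and in VBA the same shape is an On Error handler wrapped around the Open statement with a short wait and retry. The path and retry counts are arbitrary.

    using System;
    using System.IO;
    using System.Threading;

    // Read-with-retry for the dashboards: if the writer happens to be saving the CSV at
    // that moment, the open fails with a sharing violation, so wait briefly and retry.
    static class DashboardCsv
    {
        public static string[] ReadSnapshot(string path)
        {
            for (int attempt = 0; attempt < 5; attempt++)
            {
                try
                {
                    using (var stream = new FileStream(path, FileMode.Open,
                                                       FileAccess.Read, FileShare.Read))
                    using (var reader = new StreamReader(stream))
                        return reader.ReadToEnd().Split('\n');
                }
                catch (IOException)    // writer currently has the file locked
                {
                    Thread.Sleep(200); // collisions are rare and short-lived; retry shortly
                }
            }
            return Array.Empty<string>(); // give up this cycle; the next refresh will try again
        }
    }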
OPTION 2:
Establish a TCP/IP channel between the processes, where the main process (the one with WRITE privilege) would send a trigger message to the other two, requesting them to start an update whenever a new version of the data has been saved. Upon reception of the trigger, both READ-ONLY processes would approach the file and fetch the data.
In this case, the chances of collision become close to zero.

Does using Timers have negative effects on applications?

I am wondering about the Timer component and what, if any, negative effects occur because of its use or multiple instances of its use. In practice, should there be a limit as to how many timers one should use in a project at one time?
Well, everything is relative but a System.Windows.Forms.Timer is a pretty expensive object. It works by creating a hidden window, required to make the underlying winapi SetTimer() function work. This window is not shared, every timer object gets its own window. A window is in general one of the more expensive operating system objects.
So a very hard upper limit is that you can never have more than 10,000 enabled timers. Windows refuses to allow an app to create that many windows. You should stay considerably south of that limitation, given that all of the windows of all of the processes that run in one desktop session need to share a common heap. Or in other words, creating a lot of windows but staying below the 10,000 quota can negatively impact other processes, it can make them fail when the heap is exhausted.
I'd say a reasonable upper limit hovers around 100. That's a large number of moving parts to keep track of in general, assuming that all of these timers have different Tick event handlers. If they don't, then you should tackle this a different way: you only ever need one Timer to measure an arbitrary number of intervals, roughly the same way you keep appointments with a single watch on your wrist. You do so by storing the due times in a SortedList and starting the timer only for the first one that's due. When it ticks, work off the entries in the list that have an expired due time and repeat. When you add or remove a due time, stop the timer and restart it when there's a new first due time.
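A minimal sketch of that single-timer approach (C#, WinForms; it assumes due times are unique, so nudge duplicates by a millisecond or keep a list of actions per due time):

    using System;
    using System.Collections.Generic;
    using System.Windows.Forms;

    // One Timer servicing any number of due times, as described above.
    class AppointmentTimer
    {
        private readonly Timer timer = new Timer();
        private readonly SortedList<DateTime, Action> appointments = new SortedList<DateTime, Action>();

        public AppointmentTimer() { timer.Tick += OnTick; }

        public void Add(DateTime due, Action action)
        {
            appointments.Add(due, action);   // assumes unique due times
            Reschedule();
        }

        private void Reschedule()
        {
            timer.Stop();
            if (appointments.Count == 0) return;
            double ms = (appointments.Keys[0] - DateTime.Now).TotalMilliseconds;
            timer.Interval = Math.Max(1, (int)ms);  // Interval must be greater than zero
            timer.Start();
        }

        private void OnTick(object sender, EventArgs e)
        {
            // Work off every appointment that is now due, then wait for the next one.
            while (appointments.Count > 0 && appointments.Keys[0] <= DateTime.Now)
            {
                Action action = appointments.Values[0];
                appointments.RemoveAt(0);
                action();
            }
            Reschedule();
        }
    }

A single AppointmentTimer instance can then serve every "alarm" in the application with exactly one window-backed timer.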
I am assuming you mean the WinForms Timer object. So,
From the Docs:
A Timer is used to raise an event at user-defined intervals. This Windows timer is designed for a single-threaded environment where UI threads are used to perform processing. It requires that the user code have a UI message pump available and always operate from the same thread, or marshal the call onto another thread.
When you use this timer, use the Tick event to perform a polling operation or to display a splash screen for a specified period of time. Whenever the Enabled property is set to true and the Interval property is greater than zero, the Tick event is raised at intervals based on the Interval property setting.
So, reading that line by line: if you start to pack your application with timers, you are quickly going to be racing the interval events against UI render time.
For instance: You have a clock application that uses a timer to run the clock. At each 1 second interval you have the application render the hands.
In this application you also let the user define as many 'alarms' as they want. Each one creating a new timer that will trigger at set times. These alarms are also allowed to be cyclical. That is to say you allow the user to set an 'alarm' that goes off every x seconds.
Now suppose the user has a long-running task (access a DB, a network resource, calculate PI to 1500 chars, etc.) that happens on a cyclical alarm. Now suppose the user has 10 long-running tasks that need to happen in order and need to happen at 3, 4 and 5 second intervals.
The behavior of these timers would not be adequate for this application because the following would happen:
The clock would stop rendering during the execution of the 'alarms'
The alarms may run over one another and thus they would queue up but not happen when they were supposed to happen, because the UI thread is processing all messages synchronously.
You end up with an unresponsive UI that does not do what you want.
So, to answer your actual question as best I can: there does not necessarily need to be a limit on the number of timers, just on the interval at which they fire, considered together with the time it takes to process your event handler.
If you are using the timers to fire separate processing threads that will come back to the UI thread eventually and make changes, then no, there doesn't need to be a limit until you run into the upper end of the performance of your target machine. That is to say, at some point the number of timers could be so large that you are raising so many timer events and clogging the message queue that form rendering becomes affected.
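As a hedged sketch of that last case (all names here are invented, not from the question): keep the Tick handler cheap, push the long-running work onto the thread pool, and only touch controls again after the await puts you back on the UI thread.

    using System;
    using System.Threading.Tasks;
    using System.Windows.Forms;

    // Sketch: the Tick handler stays cheap and the slow work runs off the UI thread,
    // so the UI (and any other timers) keep running. All names here are invented.
    class AlarmForm : Form
    {
        private readonly Timer alarmTimer = new Timer { Interval = 3000 };
        private readonly Label resultLabel = new Label { Dock = DockStyle.Fill };

        public AlarmForm()
        {
            Controls.Add(resultLabel);
            alarmTimer.Tick += OnAlarmTick;
            alarmTimer.Start();
        }

        private async void OnAlarmTick(object sender, EventArgs e)
        {
            alarmTimer.Stop();                                  // no re-entrant ticks while we work
            string result = await Task.Run(() => DoLongWork()); // long work off the UI thread
            resultLabel.Text = result;                          // back on the UI thread after the await
            alarmTimer.Start();
        }

        private static string DoLongWork()
        {
            System.Threading.Thread.Sleep(2000);                // stand-in for DB/network/long calculation
            return $"Finished at {DateTime.Now:T}";
        }
    }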
So in short:
Negative effects:
Timers run on the UI thread, so they are blocking.
They can have unexpected behavior if your interval is shorter than the time it takes to process your event handler.
In practice the only time you should need to limit your usage of timers, like any component that the user does not control, is if they begin to affect the user experience.
I hope that reads a lot less 'ramble-y' than it felt when I was writing it.

SSIS 2005 Control Flow Priority

The short version is I am looking for a way to prioritize certain tasks in SSIS 2005 control flows. That is I want to be able to set it up so that Task B does not start until Task A has started but Task B does not need to wait for Task A to complete. The goal is to reduce the amount of time where I have idle threads hanging around waiting for Task A to complete so that they can move onto Tasks C, D & E.
The issue I am dealing with is converting a data warehouse load from a linear job that calls a bunch of SPs to an SSIS package calling the same SPs but running multiple threads in parallel. So basically I have a bunch of Execute SQL Task and Sequence Container objects with Precedence Constraints mapping out the dependencies. So far no problems; things are working great and it cut our load time a bunch.
However I noticed that tasks with no downstream dependencies are commonly being sequenced before those that do have dependencies. This is causing a lot of idle time in certain spots that I would like to minimize.
For example: I have about 60 procs involved with this load, ~10 of them have no dependencies at all and can run at any time. Then I have another one with no upstream dependencies but almost every other task in the job is dependent on it. I would like to make sure that the task with the dependencies is running before I pick up any of the tasks with no dependencies. This is just one example, there are similar situations in other spots as well.
Any ideas?
I am late in updating over here, but I also raised this issue over on the MSDN forums and we were able to devise a partial workaround. See here for the full thread, or here for the feature request asking Microsoft to give us a way to do this cleanly...
The short version is that you use a series of Boolean variables to control loops that act like roadblocks and prevent the flow from reaching the lower priority tasks until the higher priority items have started.
The steps involved are:
Declare a bool variable for each of the high priority tasks and default the values to false.
Create a pre-execute event for each of the high priority tasks.
In the pre-execute event create a script task which sets the appropriate bool to true.
At each choke point insert a For Loop container that keeps looping while the appropriate bool(s) are false; a sketch of the script body and the loop expression follows below. (I have a script with a 1-second sleep inside each loop, but it also works with empty loops.)
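For the two scripted pieces, the shape is roughly as follows. The variable name is made up, and note that SSIS 2005 Script Tasks are authored in VB.NET, so treat this C#-style body (the later 2008+ template) as the shape of the one-liner rather than paste-ready 2005 code.

    // OnPreExecute Script Task for one high-priority task. The flag must be listed in the
    // script's ReadWriteVariables; "User::HighPriorityTaskAStarted" is an illustrative name.
    public void Main()
    {
        Dts.Variables["User::HighPriorityTaskAStarted"].Value = true;
        Dts.TaskResult = (int)ScriptResults.Success;
    }

    // The For Loop container at the choke point then spins until the flag flips, e.g. with
    // EvalExpression:  @[User::HighPriorityTaskAStarted] == FALSE
    // and (optionally) a 1-second sleep Script Task inside the loop body.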
If done properly this gives you a tool where at each choke point the package has some number of high priority tasks ready to run and a blocking loop that keeps it from proceeding down the lower priority branches until said high priority items are running. Once all of the high priority tasks have been started the loop clears and allows any remaining threads to move on to lower priority tasks. Worst case is one thread sits in the loop while waiting for other threads to come along and pick up the high priority tasks.
The major drawback to this approach is the risk of deadlocking the package if too many blocking loops get queued up at the same time, or if you misread your dependencies and have loops waiting for tasks that never start. Careful analysis is needed to decide which items deserve higher priority and where exactly to insert the blocks.
I don't know any elegant ways to do this, but my first shot would be something like this:
A Sequence Container with the proc that has to run first. In that same sequence container, put a Script Task that just waits 5-10 seconds or so before each of the 10 independent steps is allowed to run. Then chain the rest of the procs below that sequence container.