Network activity indicator and asynchronous sockets - objective-c

I have an app which continuously reads status updates from a server connection.
All is working well with a stream delegate to handle all the reading and writing asynchronously.
No part of the app is "waiting" for a specific response from the server; it just continuously handles status updates as they sporadically arrive on the socket. There are no client-side requests waiting for responses.
I'm wondering what the best practice would be for the network activity indicator in this case.
I could turn it on in the stream event handler and off again before leaving the handler, but that would be a very short interval (just long enough for a non-blocking read or write to occur). Trying this, I only see the faintest flicker of the indicator; it needs to stay on longer than just the time spent in the event handler.
What about turning it on in the stream delegate and setting a timer to turn it off a short time later? (This would ensure it stays on long enough to be seen, rather than only for the brief time spent in the stream delegate.)
Note: I've tried this last idea: I turn on the network activity indicator whenever there's stream activity and note the NSDate; then in a timer (fired every second), if more than 0.5 seconds have passed since the last activity, I turn the indicator off. This seems to give a reasonable indication of network activity.
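For reference, this is roughly what that looks like (a minimal sketch; the lastActivityDate and indicatorTimer properties are just illustrative names):

    // In the NSStreamDelegate: record activity and switch the indicator on.
    - (void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode {
        // ... perform the usual non-blocking read/write here ...
        self.lastActivityDate = [NSDate date];
        [UIApplication sharedApplication].networkActivityIndicatorVisible = YES;
    }

    - (void)startIndicatorTimer {
        // Fires once per second; cheap enough for this purpose.
        self.indicatorTimer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                                               target:self
                                                             selector:@selector(indicatorTimerFired:)
                                                             userInfo:nil
                                                              repeats:YES];
    }

    - (void)indicatorTimerFired:(NSTimer *)timer {
        // No stream activity for more than 0.5 s: switch the indicator off.
        if (self.lastActivityDate &&
            [[NSDate date] timeIntervalSinceDate:self.lastActivityDate] > 0.5) {
            [UIApplication sharedApplication].networkActivityIndicatorVisible = NO;
        }
    }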
Any better recommendations?

If the network activity is continuous, then the indicator sounds like it might be somewhat annoying to the user, especially if it's turning on and off all the time.
Perhaps better would be to test for a lack of response up to a certain timeout and then display an alert view if you aren't getting any response from the server. Even that could be optional if you can provide feedback (like "Last update: 5 mins ago") to the user instead.
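If it helps, here is a sketch of that feedback idea (lastUpdateDate, statusLabel and the 5-minute threshold are assumptions, not anything from your code):

    - (void)statusTimerFired:(NSTimer *)timer {
        NSTimeInterval age = [[NSDate date] timeIntervalSinceDate:self.lastUpdateDate];
        if (age > 300.0) {
            // No data for 5 minutes: tell the user instead of flickering an indicator.
            [[[UIAlertView alloc] initWithTitle:@"Connection problem"
                                        message:@"No updates from the server for 5 minutes."
                                       delegate:nil
                              cancelButtonTitle:@"OK"
                              otherButtonTitles:nil] show];
        } else {
            self.statusLabel.text = [NSString stringWithFormat:@"Last update: %.0f seconds ago", age];
        }
    }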


Aging of objects in GigaSpaces

I'm fairly new to GigaSpaces. I am using a polling container to fetch events from a space and then dispatch them over an HTTPS connection. If the server endpoint for the connection becomes unavailable, I need to update the state of the event objects to 'blocked' and re-queue them in the space for later retries (for which I have a separate polling container that specifically looks for the blocked events).
What I'm struggling with is finding a good way to ensure the blocked-event polling container does not spin on the blocked events (that is, read the events, discover that the endpoint is still blocked, write them back to the space, and then immediately re-read them).
Is there a way I could build in a delay before re-reading the events from the space? Options might include:
Setting/updating a timestamp on the object before writing it back, and then comparing it with the current time within the polling process. (For this, I expect I would have to use a SQLQuery that includes SYSDATE as the EventTemplate; but would I then have to query SYSDATE out of the space every time I want to update the object, rather than using System.currentTimeMillis() or equivalent, to make sure I am comparing apples to apples?)
Applying a configuration setting of some kind on the blocked event polling container or listener that causes it to only poll periodically.
You can use both approaches:
docs.gigaspaces.com/xap97/polling-container.html#dynamic-template-definition
docs.gigaspaces.com/sbp/dynamic-polling-container-templates-using-triggeroperationhandler.html
In the future, for GigaSpaces related questions, please use:
ask.gigaspaces.org/questions/
Thanks,
Ester.

Is boost::asio async_read with a timer a good idea?

My server app needs to maintain thousands of TCP connections. At one point I used one timer per connection: when a timer expired, my code would check the database to see whether a message was ready for sending and, if so, send it to the remote client. This design works, but performance is very poor because there are thousands of timers in my app. My friend suggested removing all the timers and using one thread to check the database and send the messages to all remote clients in a for(...) loop.
But I see a lot of articles that show how to use deadline_timer with async_read; see the link below:
http://www.boost.org/doc/libs/1_40_0/doc/html/boost_asio/example/timeouts/stream_receive_timeout.cpp
My question is: does this work well when the server has thousands of connections? I suspect not; what do you think?
I think the timers are not your main performance problem. They do of course carry some overhead, but nowhere near as much as the I/O itself.
I could imagine that your main problem is a large delay between change-in-DB -> timer-expired -> send-happens. Another problem could be that you scan your whole DB whenever a timer expires. If so, you could instead set a flag when an update happens, check for that flag in the timer, and reset it once you have sent the update.
Can you send the changes directly after they happen, so that you avoid the timers altogether? You could use io_service->post() to trigger an update function that sends the update to all connected clients. You should also use the async_write methods so that a single slow client doesn't block your whole application.
If you don't want to send every update immediately but only at given intervals, then your friend's suggestion of a single timer that checks for changes and sends the updates also sounds good.

Notification after user becomes idle on OS X?

What's the best way of detecting when a user has been idle for X amount of time, and then detecting the moment the user becomes active again?
I know there's NSWorkspace, which provides will/did sleep/wake notifications, but I can't rely on that because the sleep setting is usually set to anywhere from ~15 minutes to never. I need to be able to detect if the user has been idle for ~1-2 minutes.
This answer provides a way to get the idle time. I'd like to avoid polling if possible.
Polling is your only option, to my knowledge. As user1118321 points out, polling every O(minutes) is unlikely to cause any problems, performance or otherwise.
If your app has a GUI and receives UI events anyway, you could install a handler via +[NSEvent addLocalMonitorForEventsMatchingMask:handler:] that resets your timer on each event. That'll help reduce, if not eliminate, polling while the user is consistently active, at least in your own app.
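A minimal sketch of that local monitor (localMonitor and lastLocalActivity are assumed properties):

    // Every event delivered to your own app pushes the "last activity" time forward,
    // so the periodic poll has less work to do.
    self.localMonitor =
        [NSEvent addLocalMonitorForEventsMatchingMask:NSEventMaskAny
                                              handler:^NSEvent *(NSEvent *event) {
            self.lastLocalActivity = [NSDate date];
            return event;   // pass the event through unchanged
        }];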
Once you've determined that the user has been idle long enough, you could then install a global event monitor to watch for the next event. See, for example, +[NSEvent addGlobalMonitorForEventsMatchingMask:handler:].
Note: you should use CGEventSourceSecondsSinceLastEventType if at all possible rather than poking into the I/O Registry. It's a formal, supported API and may be more efficient. Not to mention it's way simpler. There's also UKIdleTimer, though it relies on Carbon, so it may not be applicable.
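Putting those pieces together, something like this (a rough sketch; the 120-second threshold, the activityMonitor property and the userBecameIdle/userBecameActive methods are assumptions):

    #import <Cocoa/Cocoa.h>
    #import <ApplicationServices/ApplicationServices.h>

    // Called from a coarse NSTimer, e.g. once a minute.
    - (void)idlePollTimerFired:(NSTimer *)timer {
        CFTimeInterval idle =
            CGEventSourceSecondsSinceLastEventType(kCGEventSourceStateCombinedSessionState,
                                                   kCGAnyInputEventType);
        if (idle > 120.0 && self.activityMonitor == nil) {
            [self userBecameIdle];

            // Watch for the next input event; note that global monitors only observe
            // events delivered to *other* applications.
            self.activityMonitor =
                [NSEvent addGlobalMonitorForEventsMatchingMask:NSEventMaskAny
                                                       handler:^(NSEvent *event) {
                    [NSEvent removeMonitor:self.activityMonitor];
                    self.activityMonitor = nil;
                    [self userBecameActive];
                }];
        }
    }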

Design pattern for a background-working app

I have created a web-service app, and I want to populate my view controllers according to the response I fetch (via GET) on the main thread. But I want to create a scheduled timer that will go and check my server; if there is any difference (let's say the count of an array has changed), I will create a local notification. As far as I've read here and in some Google results, I can't run my app in the background for more than ten minutes, except in some special situations (audio, VoIP, GPS). But I need to check the server at least once per minute. Can anyone offer an idea or a link, please?
EDIT
I will not sell the app in the App Store; it is just for a local area network. Let's say the server sends some text messages to the users, and when a new message arrives, the count of the messages array increments; in this situation I will create a notification. I need to keep this 'checking' routine alive forever, whether in the foreground or the background. Does GCD offer such a solution? Does anyone have an idea?
Simply play a muted audio file in a loop in the background, OR ping the user's location in the background. Yes, that will drain the battery a bit, but it's a simple hack for in-house applications. Just remember to enable the background modes in your Info.plist!
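A rough sketch of the silent-audio hack (App Store review would reject this, but for an in-house app it works; "silence.caf", the keepAlivePlayer property and the "audio" entry in UIBackgroundModes are assumptions):

    #import <AVFoundation/AVFoundation.h>

    - (void)startBackgroundKeepAlive {
        // Requires UIBackgroundModes = audio in Info.plist.
        [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:nil];
        [[AVAudioSession sharedInstance] setActive:YES error:nil];

        NSURL *url = [[NSBundle mainBundle] URLForResource:@"silence" withExtension:@"caf"];
        // Keep a strong reference, or the player is deallocated and playback stops.
        self.keepAlivePlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:nil];
        self.keepAlivePlayer.numberOfLoops = -1;   // loop forever
        [self.keepAlivePlayer play];
    }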
Note: "[...] I fetch (via GET) in main thread." This is not a good approach. You should never fetch any network resources on the main thread. Why? Because your GUI, which is maintained by the main thread, will become unresponsive whenever a fetch isn't instantaneous. Any lag spike on the network results in a less than desirable user experience.
Answer: Aside from the listed special situations, you can't keep an app running in the background. The way I see it:
Don't put the app in the background. (crappy solution)
Try putting another "entity" between the app and the server. I don't know why you need to check the server at least once per minute, but perhaps you can delegate that checking to another process outside the device:

    iOS app -> some form of proxy server -> server that requires "babysitting" every minute.

Work Manager thread constraint and "page cannot be displayed"

We have memory-intensive processing for certain functionality, and we would like to limit the number of parallel requests to it. We are able to configure this by using "Work Managers" in WebLogic and putting a limit on the number of threads for that servlet.
For example, if we set the maximum thread limit to 3 and there are 10 parallel requests, 7 requests sit in the queue. There could be situations where the requests waiting in the queue take up to 30-40 minutes to be processed. In simple testing, the browser showed "page cannot be displayed" due to a timeout after 15 minutes, while the actual response only arrived after about 1 hour.
Does anyone know if there is a setting in WebLogic to increase/decrease the timeout and avoid "page cannot be displayed"?
I'd appreciate any thoughts on this.
Does anyone know if there is a setting in WebLogic to increase/decrease the timeout and avoid "page cannot be displayed"?
There might be something, but I didn't actually check, as it would be bad advice anyway. By looking for this, you are trying to solve the wrong problem. A browser is just not made for long-running processes like the one you are describing (>30 min), even if you don't mind the user waiting (not to mention that they could refresh the page and queue more and more jobs).
So, in my opinion, the right answer here is: make it asynchronous; this is the perfect use case. When the user clicks the button, send a JMS message to a queue (or create a Quartz job) and return a page with a request ID telling the user to come back later. When the processing is done, update the status somewhere and make the status/result available to the user. Really, the user experience will be better this way, and you'll face fewer problems than with a browser.
1) Use some other tool (not a browser), like wget, where you can control the timeout parameter (--timeout).
2) Why use HTTP at all? Use message-driven beans, send them a JMS message, and don't worry about timeouts.
Perhaps Quartz can do what you need? Start a job and check in on it as needed?