Call a Web Service to get search results in Titanium

I have implemented a tableView with a searchBar added to it. I want to call a web service when the user starts typing a search keyword into the search bar. I know I can call the service from the search bar's change event listener.
However, calling the service on every change in the search bar is not a good idea. So what is an efficient approach to a search bar whose results come from a service call, and what can be done to make the search efficient?
For example: the search functionality in Apple's App Store.

I did something like this for one of my test projects. In my change event I would check that at least 3 characters had been entered before attempting a look-up. I have no idea why I went with 3, but it seemed like a decent number of characters for filtering my data. I would also set a flag while a network request was in progress: if the user had entered 3 characters and no look-up was already running, I kicked off the search; if a request was already in flight, I set up a wait interval to keep checking for the response and kicked off an additional request once it arrived. I also sent back short lists of items, 25 in my case, so that my table appeared fast.
Though I didn't do this, you could track the time between keystrokes to make sure the user is finished typing. You will need to experiment to find an interval that is reasonable for an average user; get some feedback from non-power users on this.
I can see a potential issue where you are in the middle of a look-up but the user is still typing. You might need to track those character updates and perhaps kick off an additional search for the updated string. You might even compare the search string you sent with the current contents of the input box, and choose to abandon the look-up results you already received and simply search again.
You might want to show the list of items you did receive just so the user knows the app is working, but immediately send another request for look-up items automatically. A user might eventually start hammering keys and think the app is unresponsive if you don't show something in the table once in a while.
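Putting these pieces together, here is a minimal sketch of a debounced change listener with an in-flight flag and a stale-query re-check, written as TypeScript against Titanium-style APIs. The searchBar and table objects, the fetchResults(query, limit, callback) service wrapper, and the 400 ms pause are all assumptions for illustration, not from any particular SDK beyond the standard change event.

declare const searchBar: any; // Ti.UI.SearchBar (assumed)
declare const table: any;     // Ti.UI.TableView (assumed)
declare function fetchResults(q: string, limit: number, cb: (rows: any[]) => void): void; // hypothetical service wrapper

let debounceTimer: number | null = null;
let requestInFlight = false;
let lastQuerySent = "";

searchBar.addEventListener("change", (e: { value: string }) => {
    const query = e.value.trim();
    if (query.length < 3) return;                            // too short to be worth a look-up
    if (debounceTimer !== null) clearTimeout(debounceTimer);
    debounceTimer = setTimeout(() => runSearch(query), 400); // wait for typing to pause
});

function runSearch(query: string): void {
    if (requestInFlight) return;              // the completion handler re-checks below
    requestInFlight = true;
    lastQuerySent = query;
    fetchResults(query, 25, (rows) => {       // short list so the table appears fast
        requestInFlight = false;
        table.setData(rows);                  // show something so the app looks responsive
        const current = searchBar.value.trim();
        if (current !== lastQuerySent && current.length >= 3) {
            runSearch(current);               // user kept typing: search again
        }
    });
}

The 400 ms pause is the "interval between keystrokes" idea from above; tune it based on feedback from average users.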

Related

Is there a better way to trap Xero API errors?

I am writing some code in VB.NET 2013 Express to access Xero accounting via a private application, and it's working fairly well. However, I have come across a problem when trying to write code to upload multiple contacts from a single XML file. I parse the XML, create a new contact from each line, and add it to a list of contacts. I then submit these to Xero:
Try
    Dim sResult = private_app_api.Create(mContacts)
Catch ex As Xero.Api.Infrastructure.Exceptions.ValidationException
    ' do something with ex to determine what went wrong
End Try
If all contacts create correctly, sResult contains a list of those contacts with their Xero GUIDs, which I then need to feed back up to the system they are being sent from. This all works correctly.
If one or more of the contacts does not create for some reason, I get a list of one or more errors in the ex.ValidationErrors() collection, but I get nothing in sResult. So I don't have a reference back to the contacts that have worked, only those that have not.
To get around this, I am looping through the contacts and pre-checking that each one doesn't already exist on Xero and doesn't have a duplicate name. This also works, and it means that I only submit contacts that I know are not already on Xero.
My worry now, though, is that I am going to run into the Xero API limits of 60 calls in a rolling 60-second window. I am trying to make the code robust by pre-checking most of the common things that could cause a problem, but every time I do that, I get closer to the limit, which in theory means that I need to add some complexity by trying to throttle calls to Xero.
Is there a better way that I can call .create() and get both the successful information and the error information?
The way around this seems to be to add a reference to RateLimiter when I first create the API object. This appears to automatically pause any calls that would exceed the rate limit. It seems necessary, though, to set the limit a little lower than 60 per 60-second rolling window, as I still got rate errors at 60. I set it to 50, and my test code now waits a little while once it runs over the limit.
I haven't figured out how to implement both the 60-calls/60-seconds limit and the 5000-calls/24-hours limit together, though.
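In case it helps, here is an illustrative sketch (TypeScript, explicitly not the Xero SDK's RateLimiter) of a client-side throttle that enforces two rolling windows at once. The 50/60 s and 5000/24 h numbers come from the discussion above; everything else is an assumption.

class RollingWindowLimiter {
    private timestamps: number[] = [];
    constructor(private limit: number, private windowMs: number) {}

    // Milliseconds to wait before another call is allowed (0 = go now).
    delayMs(now: number): number {
        this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
        if (this.timestamps.length < this.limit) return 0;
        return this.timestamps[0] + this.windowMs - now;
    }

    record(now: number): void {
        this.timestamps.push(now);
    }
}

const perMinute = new RollingWindowLimiter(50, 60_000);          // stay safely under 60/60s
const perDay = new RollingWindowLimiter(5000, 24 * 60 * 60_000); // 5000/24h

async function throttledCall<T>(call: () => Promise<T>): Promise<T> {
    // Wait until both windows have room, then record the call in both.
    for (;;) {
        const now = Date.now();
        const wait = Math.max(perMinute.delayMs(now), perDay.delayMs(now));
        if (wait === 0) break;
        await new Promise(res => setTimeout(res, wait));
    }
    perMinute.record(Date.now());
    perDay.record(Date.now());
    return call();
}

You would then wrap each outgoing Xero call in throttledCall, e.g. await throttledCall(() => createContacts(batch)), where createContacts is a hypothetical wrapper around your API object.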

How to set Custom Field Notifications in HP Project and Portfolio Management (PPM)?

I am using the HP Project and Portfolio Management (PPM) tool, and I am adding a custom field to my request type which has a date as its value. My requirement is to send email notifications to users once the date in the custom field passes the system date.
I tried to set a field-level notification from the Notifications tab, but the custom field does not appear in the list; all the fields available there are pre-configured fields.
So, can anyone suggest how to implement this requirement, and where changes need to be made, if any?
Please answer in detail, and soon if possible.
Thanks in advance!
PPM notifications can be configured on pre-defined events, such as a particular transition or a timeout.
One possible solution for this scenario is to put a timeout on your decision step. The timeout leads to an execution step, which checks the date condition: if the condition is met, it fires the notification; otherwise it returns to the original decision step.
The downside of this workaround is that transaction details will be added once daily. The last-update date of the request also changes daily, which is not ideal if you want to track the last time an end user updated the request.
It is a workaround because of the restrictions around notification events, and if you have a large workflow I would not recommend it.

Periodic tile update

I'm developing an application that offers dictionaries (e.g. English-German). I want to enable periodic tile updates for my application on the Start screen: show a random word with its translation from a random dictionary, so all I need to display is two strings (I found an appropriate template for it). I can show the notification once using TileUpdateManager.CreateTileUpdaterForApplication().AddToSchedule(), but I want it to happen, say, every minute. I have only found examples that use the TileUpdateManager.CreateTileUpdaterForApplication().StartPeriodicUpdate() method, and they all use some web address. Is there any way to do this using just my local strings, without accessing the cloud?
To show tile updates periodically when your app is not running, you need to provide a URL that will be polled every once in a while. (You can't set it to as frequent as 1 minute though - the shortest recurrence for periodic updates is half an hour.)
If your app is in the foreground, you can simply run a timer and show an update every minute, for example:
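Here is a minimal sketch, written as TypeScript in a Windows JavaScript (WinJS) app where the Windows.* WinRT namespaces are available as globals; pickRandomEntry() is a hypothetical helper returning a word/translation pair from your local dictionaries.

declare const Windows: any; // WinRT global in a Windows JavaScript app
declare function pickRandomEntry(): [string, string]; // hypothetical dictionary helper

const notifications = Windows.UI.Notifications;

function showWordTile(word: string, translation: string): void {
    // TileSquareText02: one header line plus smaller body text.
    const xml = notifications.TileUpdateManager.getTemplateContent(
        notifications.TileTemplateType.tileSquareText02);
    const texts = xml.getElementsByTagName("text");
    texts[0].appendChild(xml.createTextNode(word));
    texts[1].appendChild(xml.createTextNode(translation));
    notifications.TileUpdateManager.createTileUpdaterForApplication()
        .update(new notifications.TileNotification(xml));
}

// Refresh the tile once a minute while the app is running.
setInterval(() => {
    const [word, translation] = pickRandomEntry();
    showWordTile(word, translation);
}, 60 * 1000);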
You could use push notifications, but again - that's more complex than just providing a URL that returns the XML for a tile update.
In case you want to look at the possibilities: http://blog.equinoxe-consulting.net/blog/bard-rsquo-s-tile-i-introduction-and-local-tiles-and-badges

In a CQRS system, how should I show the user that their request has been received?

I'm trying to decouple some of the bits of our big-ball-of-mud architecture, and identified several boundaries that are obvious candidates for using CQRS to provide a more resilient and scalable solution.
Typical example: when a customer places an order, at the moment we block their thread while the order is submitted for payment, approved by the sales system, and so on.
This can all be handled asynchronously - allowing us to accept and queue orders whilst the payment processing system is unavailable, etc. - but I'm not sure how I should manage the UI data for the customer.
In other words - they place an order. Their order goes in a queue. If they log back into their account five seconds later and click "review orders" - what happens?
If I draw it from the central repo (or from a cache that's updated based on that repo), then the user won't see their order and will probably try and place it again - or phone us and panic.
If I draw it from a local database, then I have the overhead of maintaining another database of orders - which will need to be synchronised in a load-balanced environment, and seems to undermine a lot of the advantages of CQRS.
I want to do this in lots of places - and not all of them are actions as significant as confirming an order; in some cases it's as simple as a customer changing a phone number or something - so they're not all cases where I can just say "thanks a lot, we'll send you a confirmation e-mail" - because sending confirmation e-mails for every modification to a record strikes me as a little excessive.
Any patterns or solutions I should look at to help with this?
Something worth considering is a 'user inbox': a place in your app where the user can review in-progress commands. You could also push notifications to the user's UI when they have already moved on to another screen but are still in your app. This might also be an option for when the user logs back on.
Another option could be faking the synchronous experience: wait around and poll while everything happens asynchronously in the background. Granted, this might require timeouts as well, but I'd argue those are accepted in today's synchronous processing too.
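As an illustration, here is a sketch of that polling approach in TypeScript; the endpoint and timing are assumptions, not from any particular framework.

// Poll a read model until the command's effect shows up or a timeout expires.
async function waitForOrder(orderId: string, timeoutMs = 5000): Promise<boolean> {
    const deadline = Date.now() + timeoutMs;
    while (Date.now() < deadline) {
        const res = await fetch(`/api/orders/${orderId}`); // hypothetical endpoint
        if (res.ok) return true;                           // read model has caught up
        await new Promise(r => setTimeout(r, 250));        // brief pause between polls
    }
    return false; // fall back to "we received your order" messaging
}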
On top of all this, you may want to both inform and solicit feedback from your end users about how they experience your app and its behavior.
Regardless of what anybody tells you, if you want to handle this elegantly, it will take some effort on your part.
The best thing to do is lie!
The user should have no idea that their transaction is in fact a little like Schrödinger's cat, either dead or alive. From their perspective the transaction was a success: you simply tell them it was successful and queue the job for offline processing.
Because the vast majority of transactions succeed, you can then handle those that don't with an appropriate compensatory mechanism.
Insignificant cases, like modification of some record:
Send the user to a confirmation page telling him something along the lines of "Thanks, your input is being processed. What do you want to do next?", plus a couple of links.
If you absolutely have to send the user back to the edited record or a list thereof, note that in non-distributed systems we're probably talking about milliseconds until the read store has been updated. As long as redirecting the user to the new page takes longer than that, everything's fine from the user's point of view.
If in some cases the user actually doesn't see his update "immediately", he might call user support. They tell him to hit F5. What? It's there now? Great! Guess what he does next time before reaching for the phone.
Significant cases like offline order processing:
There might be an implicit concept of a Received Order or Pending Order in your domain. If you make this concept explicit, you can present the user with accurate information.
"Thank you very much! Your order has been received an we'll keep you updated once it has been shipped. [Click here] to see a list of your pending orders..."
I think the simplest thing, doing nothing, can often be good enough. If the user changes a phone number and the system processes the command in 1-2 seconds, there is a good chance the user has not had the opportunity to see the old data in between.
If that is not satisfactory, and your user absolutely must know that their request was fulfilled, your UI can subscribe to domain events. Once the command is executed successfully, your UI gets a notification and can inform the user. There are various ways you could do this in the UI: you could simply block until the success notification arrives, or you could say "we received your request" and, once you get confirmation, show a notification window ("your request was fulfilled") somewhere in the corner.
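A sketch of that subscription option in TypeScript; the event bus and the banner helper are stand-ins for whatever messaging and UI layer you use (SignalR, WebSockets, etc.), and the event name is made up for illustration.

interface DomainEvent { kind: string; commandId: string; }

class EventBus {
    private handlers = new Map<string, Array<(e: DomainEvent) => void>>();
    on(kind: string, handler: (e: DomainEvent) => void): void {
        const list = this.handlers.get(kind) ?? [];
        list.push(handler);
        this.handlers.set(kind, list);
    }
    emit(e: DomainEvent): void {
        (this.handlers.get(e.kind) ?? []).forEach(h => h(e));
    }
}

declare function showBanner(text: string): void; // hypothetical UI helper
const bus = new EventBus();

function submitPhoneNumberChange(commandId: string): void {
    showBanner("We received your request...");         // immediate acknowledgement
    bus.on("PhoneNumberChanged", (e) => {
        if (e.commandId === commandId) {
            showBanner("Your request was fulfilled."); // corner notification on success
        }
    });
}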

How to skip known entries when syncing with Google Reader?

For writing an offline client for the Google Reader service, I would like to know how best to sync with the service.
There doesn't seem to be official documentation yet and the best source I found so far is this: http://code.google.com/p/pyrfeed/wiki/GoogleReaderAPI
Now consider this: with the information above I can download all unread items, I can specify how many items to download, and using the atom ID I can detect duplicate entries that I have already downloaded.
What's missing for me is a way to specify that I just want the updates since my last sync.
I can ask for the 10 latest entries (parameters n=10 and r=d). If I specify the parameter r=o (date ascending) then I can also specify ot=[time of last sync], but only then, and ascending order doesn't make any sense when I just want to read some items rather than all items.
Any idea how to solve that without downloading all items again and merely rejecting duplicates? That is not a very economical way of polling.
Someone proposed that I could ask for only the unread entries. But for that solution to work in such a way that Google Reader will not offer those entries again, I would need to mark them as read. In turn, that would mean that I need to keep my own read/unread state on the client and that the entries are already marked as read when the user logs on to the online version of Google Reader. That doesn't work for me.
Cheers,
Mariano
To get the latest entries, use the standard newest-date-descending download. You will receive a "continuation" token in the XML result, looking something like this:
<gr:continuation>CArhxxjRmNsC</gr:continuation>
Scan through the results, pulling out anything new to you. You should find that either all results are new, or everything up to a point is new, and all after that are already known to you.
In the latter case, you're done, but in the former you need to find the new stuff that is older than what you've already retrieved. Do this by using the continuation token to get the results starting from just after the last result in the set you just retrieved: pass it in the GET request as the c parameter, e.g.:
http://www.google.com/reader/atom/user/-/state/com.google/reading-list?c=CArhxxjRmNsC
Continue this way until you have everything.
The n parameter, a count of the number of items to retrieve, works well with this, and you can change it as you go. If the frequency of checking is user-set, and thus could be very frequent or very rare, you can use an adaptive algorithm to reduce network traffic and your processing load. Initially request a small number of the latest entries, say five (add n=5 to the URL of your GET request). If all are new, then in the next request, where you use the continuation, ask for a larger number, say 20. If those are still all new, either the feed has a lot of updates or it's been a while, so continue in groups of 100 or whatever.
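Here is a sketch of that loop in TypeScript. The feed URL and the n/c parameters come from the unofficial API notes linked in the question; authentication is omitted, isKnown() is a hypothetical check against your local store, and the regex "parsing" is only for brevity - a real client would use an XML parser.

const FEED = "http://www.google.com/reader/atom/user/-/state/com.google/reading-list";

async function fetchNewItems(isKnown: (id: string) => boolean): Promise<string[]> {
    const fresh: string[] = [];
    let continuation: string | null = null;
    let batch = 5; // start small; grow while everything keeps being new

    for (;;) {
        const url = `${FEED}?n=${batch}` + (continuation ? `&c=${continuation}` : "");
        const xml = await (await fetch(url)).text();

        const ids = [...xml.matchAll(/<id[^>]*>(.*?)<\/id>/g)].map(m => m[1]);
        const newIds = ids.filter(id => !isKnown(id));
        fresh.push(...newIds);

        // Stop once we hit something already known, or the feed runs out.
        const cont = xml.match(/<gr:continuation>(.*?)<\/gr:continuation>/);
        if (newIds.length < ids.length || !cont) break;

        continuation = cont[1];
        batch = batch < 20 ? 20 : 100; // 5 -> 20 -> 100, as described above
    }
    return fresh;
}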
However, and correct me if I'm wrong here, you also want to know, after you've downloaded an item, whether its state changes from "unread" to "read" due to the person reading it using the Google Reader interface.
One approach to this would be the following (a sketch follows the list):
1. Update the status on Google of any items that have been read locally.
2. Check and save the unread count for the feed. (You want to do this before the next step, so that you can guarantee that new items have not arrived between your download of the newest items and the time you check the read count.)
3. Download the latest items.
4. Calculate your read count, and compare it to Google's. If the feed has a higher read count than you calculated, you know that something's been read on Google.
5. If something has been read on Google, start downloading read items and comparing them with your database of unread items. You'll find some items that Google says are read but your database claims are unread; update these. Continue until you've found a number of these items equal to the difference between your read count and Google's, or until the downloads get unreasonable.
6. If you didn't find all of the read items, c'est la vie; record the number remaining as an "unfound unread" total, which you also need to include in your next calculation of the local number you think are unread.
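Here is a rough sketch of that reconciliation in TypeScript; every helper here (the remote count, the paging of read items, the local store shape) is a hypothetical stand-in for your own API calls and database, and step 1 (pushing local reads up to Google) is assumed to have happened already.

declare function fetchRemoteUnreadCount(): Promise<number>;       // step 2 (hypothetical)
declare function downloadLatestItems(): Promise<void>;            // step 3 (hypothetical)
declare function fetchReadItems(): AsyncIterable<{ id: string }>; // step 5, newest first

interface LocalStore {
    localUnreadIds(): Set<string>;
    markRead(id: string): void;
    unfoundUnread: number; // carried over from previous runs (step 6)
}

async function reconcileReadState(store: LocalStore): Promise<void> {
    const remoteUnread = await fetchRemoteUnreadCount();
    await downloadLatestItems();
    const localUnread = store.localUnreadIds().size + store.unfoundUnread;
    let missing = localUnread - remoteUnread;   // step 4: items read on Google

    for await (const item of fetchReadItems()) {
        if (missing <= 0) break;                // found everything we were looking for
        if (store.localUnreadIds().has(item.id)) {
            store.markRead(item.id);            // Google says read, we thought unread
            missing--;
        }
    }
    store.unfoundUnread = Math.max(missing, 0); // step 6: remember what we didn't find
}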
If the user subscribes to a lot of different blogs, it's also likely he labels them extensively, so you can do this whole thing on a per-label basis rather than for the entire feed. That should help keep the amount of data down, since you won't need to do any transfers for labels where the user didn't read anything new on Google Reader.
This whole scheme can be applied to other statuses, such as starred or unstarred, as well.
Now, as you say, this
...would mean that I need to keep my own read/unread state on the client and that the entries are already marked as read when the user logs on to the online version of Google Reader. That doesn't work for me.
True enough. Neither keeping a local read/unread state (since you're keeping a database of all of the items anyway) nor marking items read in Google (which the API supports) seems very difficult, so why doesn't this work for you?
There is one further hitch, however: the user may mark something read as unread on Google. This throws a bit of a wrench into the system. My suggestion, if you really want to take care of this, is to assume that the user will generally touch only more recent items, and to download the latest couple of hundred items every time, checking the status on all of them. (This isn't all that bad; downloading 100 items took me anywhere from 0.3 s for 300 KB to 2.5 s for 2.5 MB, albeit on a very fast broadband connection.)
Again, if the user has a large number of subscriptions, he's also probably got a reasonably large number of labels, so doing this on a per-label basis will speed things up. I'd suggest that you not only check on a per-label basis, but also spread out the checks, checking a single label each minute rather than everything once every twenty minutes. You can also run this "big check" for status changes on older items less often than the "new stuff" check - perhaps once every few hours - if you want to keep bandwidth down.
This is a bit of a bandwidth hog, mainly because you need to download the full article from Google merely to check the status. Unfortunately, I can't see any way around that in the API docs we have available. My only real advice is to minimize the checking of status on non-new items.
The Google Reader API hasn't been officially released yet; when it is, this answer may change.
Currently, you would have to call the API and disregard items you have already downloaded, which, as you said, isn't terribly efficient, since you will re-download items every time even if you already have them.