Is there a better way to trap Xero API errors? - vb.net

I am writing some code in vb.net 2013 Express to access Xero accounting via a private application, and it's working fairly well. However I have come across a problem when trying to write some code to upload multiple contacts from a single XML file. I parse the XML, create a new contact from each line, and add it to a list of contacts. I then submit these to Xero:
Try
    Dim sResult = private_app_api.Create(mContacts)
Catch ex As Xero.Api.Infrastructure.Exceptions.ValidationException
    ' do something with ex to determine what went wrong
End Try
If all contacts create correctly, sResult contains a list of those contacts with their Xero GUIDs, which I then need to feed back up to the system they are being sent from. This all works correctly.
If one or more of the contacts does not create for some reason, I get a list of one or more errors in the ex.ValidationErrors() collection, but I get nothing in sResult. So I don't have a reference back to those that have worked, only those that have not.
To get around this, I am looping through each contact and pre-checking that they don't already exist on Xero, and don't have a duplicate name. This also works, and means that I only submit contacts that I know are not already on Xero.
My worry now, though, is that I am going to run into the Xero API limits of 60 calls in a rolling 60-second window. I am trying to make the code robust by pre-checking most of the common things that could cause a problem, but every time I do that, I get closer to the limit, which in theory means that I need to add some complexity by trying to throttle calls to Xero.
Is there a better way that I can call .create() and get both the successful information and the error information?

The way around this seems to be to add a reference to RateLimiter when I first create the API object. This appears to implement a mechanism whereby any calls that would exceed the rate limit are automatically paused. It seems necessary, though, to set the limit a little lower than 60 per 60-second rolling window, as I still got rate errors at 60. I set it to 50, and my test code now waits a little while once it runs over the limit.
I haven't figured out how to implement both the 60/60s limit and the 5000/24h limit, though.
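One way to cover both windows on the client side, though, would be to track your own call timestamps and sleep before any call that would push past either limit. Below is a minimal sketch of that idea; ApiThrottle is a hypothetical helper class, not part of the Xero SDK, and as noted above the limits passed in should sit a little below the documented ceilings (e.g. 50 per minute, 4500 per day):
Imports System.Collections.Generic
Imports System.Threading
Public Class ApiThrottle
    Private ReadOnly _minuteWindow As New Queue(Of DateTime)()
    Private ReadOnly _dayWindow As New Queue(Of DateTime)()
    Private ReadOnly _perMinute As Integer
    Private ReadOnly _perDay As Integer
    Public Sub New(perMinute As Integer, perDay As Integer)
        _perMinute = perMinute
        _perDay = perDay
    End Sub
    ' Call this immediately before each Xero API request.
    Public Sub WaitIfNeeded()
        Prune(_minuteWindow, TimeSpan.FromSeconds(60))
        Prune(_dayWindow, TimeSpan.FromHours(24))
        If _minuteWindow.Count >= _perMinute Then
            Dim pause = TimeSpan.FromSeconds(60) - (DateTime.UtcNow - _minuteWindow.Peek())
            If pause > TimeSpan.Zero Then Thread.Sleep(pause)
        End If
        If _dayWindow.Count >= _perDay Then
            Dim pause = TimeSpan.FromHours(24) - (DateTime.UtcNow - _dayWindow.Peek())
            If pause > TimeSpan.Zero Then Thread.Sleep(pause)
        End If
        Dim stamp = DateTime.UtcNow
        _minuteWindow.Enqueue(stamp)
        _dayWindow.Enqueue(stamp)
    End Sub
    ' Drop timestamps that have fallen out of the rolling window.
    Private Sub Prune(window As Queue(Of DateTime), span As TimeSpan)
        While window.Count > 0 AndAlso DateTime.UtcNow - window.Peek() > span
            window.Dequeue()
        End While
    End Sub
End Class
' Hypothetical usage: create one throttle for the lifetime of the API object, then call
' throttle.WaitIfNeeded() before each call such as private_app_api.Create(mContacts).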

Related

Does it make sense to define a GET /users/{id}/photos route or should I just send multiple GET /photos/{id} requests to return a user's photos?

This is a dilemma I find myself facing very often when dealing with nested resources.
So suppose the target user has n photos. Does it make sense to define a GET /users/{id}/photos route or should I just send n GET /photos/{id} requests by first requesting the User object and then looping through that User's photo_ids attribute?
From a best-practices perspective, I would not send several requests for such similar resources. In that scenario, you end up creating more work for yourself by having to rate limit on the backend in order to keep users with a lot of pictures from blowing up your server. That would hurt your UX as well.
I recommend you format your route as such:
GET /users/photos?id={{id}}
And return all associated photos with that user ID all at once. You can always limit that to X number of photos per call too, and paginate:
GET /users/photos?id=658&page={{1,2,3, etc.}}
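As a concrete illustration of that shape, here is a rough sketch of such an endpoint using ASP.NET Web API attribute routing (VB.NET, since that is the language used elsewhere on this page; PhotoRepository and the page size of 25 are hypothetical, and attribute routing must be enabled in your Web API configuration):
Imports System.Collections.Generic
Imports System.Linq
Imports System.Web.Http
' Hypothetical stand-in for whatever data access layer you use.
Public Module PhotoRepository
    Public Function ForUser(id As Integer) As IEnumerable(Of String)
        Return Enumerable.Range(1, 100).Select(Function(n) String.Format("photo-{0}-{1}.jpg", id, n))
    End Function
End Module
Public Class UsersController
    Inherits ApiController
    ' GET /users/photos?id=658&page=2
    <HttpGet>
    <Route("users/photos")>
    Public Function GetPhotos(id As Integer, Optional page As Integer = 1) As IHttpActionResult
        Const pageSize As Integer = 25
        Dim photos = PhotoRepository.ForUser(id).
            Skip((page - 1) * pageSize).
            Take(pageSize).
            ToList()
        Return Ok(photos)
    End Function
End Class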
My personal preference is always to try to keep variable data in the URL parameters. Having spent the afternoon with several unrelated APIs, I can tell you a whole slew of developers agree.

How to consolidate API calls for the ASANA API

I'm a freelance web dev and I work with a lot of clients across many different workspaces in Asana. Not being able to get a consolidated view makes this a tedious and difficult thing to manage, so I'm putting together my own little utility to help me get a 'superview' of tasks assigned to me in order of the due date. In order to make this easier for me to scan, I need to have the project name next to the task details.
The easiest way, in my mind, would be a single API call for all tasks assigned to me and request the project name, task name, task id, due date, and workspace name all at once.
The API doesn't seem to allow this kind of consolidated request, however, so instead the workflow goes something like this:
API call to get all my workspaces
Loop through the workspaces, making an API call for each to get all tasks
Use PHP to sort those tasks accordingly
Loop through those tasks, making an API call for the first instance of each project in order to get the project name (I cache the data as I go so that I'm only making a call once per project)
The issue I'm getting is a 500 error when I start making API calls to get the project details. I doubt I'm hitting the 100-calls-per-minute limit, but I'm still getting the errors nonetheless. In light of this, I'm looking for a way to make a consolidated call that contains all the data I need, but I can't seem to figure it out.
Anyone have some guidance on this?
Good news! We actually do support Input/Output options that allow you to specify which fields you want, including nested fields. So, while you still need to make separate calls for each workspace, you can do something like this:
workspaces = GET /workspaces
for id in workspaces
    tasks = GET /workspaces/:id/tasks?assignee=me&opt_fields=name,due_on,projects.name
(If you're only interested in incomplete tasks, you can add &completed_since=now - or, if you want incomplete and recently completed tasks, &completed_since=... with the timestamp before which you want to exclude completed tasks.)
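Putting that together, the per-workspace loop might look something like the sketch below (written in VB.NET only because that is the language used elsewhere on this page; translating it to PHP with cURL is straightforward, and the token and the ParseIds helper are placeholders):
Imports System.Collections.Generic
Imports System.Linq
Imports System.Net.Http
Imports System.Net.Http.Headers
Imports System.Threading.Tasks
Module AsanaSketch
    ' Placeholder: extract the workspace ids from the JSON with your preferred JSON library.
    Function ParseIds(json As String) As IEnumerable(Of String)
        Return Enumerable.Empty(Of String)()
    End Function
    Async Function FetchMyTasks() As Task
        Using client As New HttpClient()
            client.BaseAddress = New Uri("https://app.asana.com/api/1.0/")
            client.DefaultRequestHeaders.Authorization =
                New AuthenticationHeaderValue("Bearer", "YOUR_TOKEN") ' placeholder token
            Dim workspacesJson = Await client.GetStringAsync("workspaces")
            For Each workspaceId In ParseIds(workspacesJson)
                ' One call per workspace, asking for the nested fields up front.
                Dim url = "workspaces/" & workspaceId &
                          "/tasks?assignee=me&opt_fields=name,due_on,projects.name"
                Dim tasksJson = Await client.GetStringAsync(url)
                ' Sort and display tasksJson as needed.
            Next
        End Using
    End Function
End Module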
Additionally, 500 is not the code we send for rate limiting - it's likely an issue with the request itself. How are you requesting the project details?

Call a Web Service to get search result in titanium

I have implemented a tableView with a searchBar added to it. I want to call a service when the user starts typing the search keyword in the search bar; I know that I can do this from the search bar's change event listener.
I also know that it is not good to call a service on every change in the search bar. So what is an efficient approach to using a search bar when the search results come from a service call, and what can be done to make the search efficient?
For example: the search functionality on Apple's App Store.
I did something like this for one of my test projects. I would check in my change event that at least 3 characters were entered before I would attempt a look-up. I have no idea why I went with 3, but it seemed like a decent number of characters for filtering my data. I would also set a flag that a network request was in progress. So if they entered 3 characters, you could kick off the search if no look-up was already in progress. If there was a network request in progress, you could set up a wait interval to keep checking whether the request had come back, and kick off an additional request when it has. I would send back short lists of items, which for me was 25, so that my table appeared fast.
Though I didn't do this, you could track the interval of time between characters typed to make sure the user is finished typing. For the best interval you will need to experiment with what is reasonable for an average user. Get some feedback from typically non-power users on this.
I can see a potential issue where you are in the middle of a look-up, but the user is still typing. You might need to track those character updates and perhaps kick off an additional search for the updated character string. You might even compare the search string you sent at the time with the current characters in the input box, and choose to abandon the list of look-up items you already received and just do another search.
You might want to show the list of items you did receive just so the user knows the app is working, but immediately send another request for look-up items automatically. A user might eventually start hammering keys and think the app is unresponsive if you don't show something in the table once in a while.
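To make the minimum-length and in-progress-flag bookkeeping above concrete, here is a rough sketch (in VB.NET rather than Titanium's JavaScript, since that is the language used elsewhere on this page; SearchService and ShowResults are hypothetical stand-ins for your network call and table update):
Imports System.Collections.Generic
' Hypothetical stand-in for the asynchronous service call.
Public Module SearchService
    Public Sub Lookup(query As String, callback As Action(Of List(Of String)))
        callback(New List(Of String) From {query & " result 1", query & " result 2"})
    End Sub
End Module
Public Class SearchThrottler
    Private Const MinChars As Integer = 3
    Private _requestInProgress As Boolean = False
    Private _currentText As String = ""
    Private _sentQuery As String = ""
    ' Wire this up to the search bar's change event.
    Public Sub OnSearchTextChanged(text As String)
        _currentText = text
        If text.Length < MinChars Then Return    ' too short to be worth a look-up
        If _requestInProgress Then Return        ' a request is already in flight
        SendRequest(text)
    End Sub
    Private Sub SendRequest(query As String)
        _requestInProgress = True
        _sentQuery = query
        SearchService.Lookup(query, AddressOf OnResults)
    End Sub
    Private Sub OnResults(results As List(Of String))
        _requestInProgress = False
        ShowResults(results)
        ' If the user kept typing while the request was in flight, follow up
        ' with a search for whatever is in the box now.
        If _currentText <> _sentQuery AndAlso _currentText.Length >= MinChars Then
            SendRequest(_currentText)
        End If
    End Sub
    ' Hypothetical: refresh the table with the (short) list of results.
    Private Sub ShowResults(results As List(Of String))
    End Sub
End Class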

How to decide whether to split up a VB.Net application and, if so, how to split it up?

I have 2 1/2 years' experience of VB.Net, mostly self-taught, so please bear with me if I seem rather noobish still and do not know some of the basics. I would recommend you grab a cup of tea before starting on this, as it appears to have got quite long...
I currently have a rather large application (VB.Net website) of over 15000 lines of code at the last count. It does not do retail or anything particularly complex like that - it is literally just a wholesale viewing website with admin frontend, catalogue / catalogue management system and pageview system.
I don't really know much about how .Net applications work in the background - whether they are all loaded on the same thread or if each has its own thread... I just know how to code them, or at least like to think I do... :-)
Basically my application is set up as follows:
There are two different areas - the customer area and the administration frontend.
The main part of the customer frontend is the Catalogue. The MasterPage will load a list of products but that's all, and this is common to all the customer frontend pages.
I tend to work on only one or several parts of the application at a time before uploading the changes. So, for example, I may alter the hierarchy of the Catalogue and change the Catalogue page to match the hierarchy change whilst leaving everything else alone.
The pageview database is getting really quite large and so it is getting rather slow when the application is first requested due to the way it works.
The application timeout is set to 5 minutes. I don't know how to change it - I have asked about this on here before and seem to remember the solution was quite complex and I was recommended not to change it - but if a customer requests the application 5 minutes after the last page view, the application is reloaded from scratch. This means there is a very slow page load whenever it exceeds 5 minutes of inactivity.
I am not sure if this needs consideration to determine how best to split the application up, if at all, but each part of the catalogue system is set up as follows:
A Manager class at the top level, which is used by the admin frontend to add, edit and remove items of the specified type and the customer frontend to retrieve a list of items of the specified type. For example the "RangeManager" will contain a list of product "Ranges" and will be used to interact with these from the customer frontend.
An Item class, for example Range, which contains a list of Attributes. For example Name, Description, Visible, Created, CreatedBy and so on. The form for adding / editing loops through these to display relevant controls for the administrator. For example a Checkbox for BooleanAttribute.
An Attribute class, which can be of type StringAttribute, BooleanAttribute, IntegerAttribute and so on. There are also custom Attributes (not just datatypes) such as RangeAttribute, UserAttribute and so on. These are given a data field which is used to get a piece of data specific to the item it is contained in when it is first requested. Basically the Item is given a DataRow which is stored and accessed by Attributes only when they are first requested.
When an item is requested from a specific manager, the manager will loop through all the items in the database and create a new instance of the item class. For example, when a Range is requested from the RangeManager, the RangeManager will loop through all of the DataRows in the Ranges table and create a new instance of Range for each one. As stated above, it simply creates a new instance with the DataRow, rather than loading all the data into it there and then. The Attributes themselves fetch the relevant data from the DataRow as and when they are first requested.
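For anyone trying to picture that structure, a stripped-down, hypothetical sketch of the shape being described might look like this (names simplified, most detail omitted):
Imports System.Collections.Generic
Imports System.Data
Public MustInherit Class ItemAttribute
    Public Property Name As String
    Public Property Row As DataRow              ' stored, but only read when the value is requested
    Public MustOverride Function GetValue() As Object
End Class
Public Class StringAttribute
    Inherits ItemAttribute
    Public Overrides Function GetValue() As Object
        Return Row(Name).ToString()             ' fetched lazily from the DataRow
    End Function
End Class
Public Class Range
    Public ReadOnly Attributes As New List(Of ItemAttribute)
    Public Sub New(row As DataRow)
        ' The DataRow is handed to each attribute; nothing is read yet.
        Attributes.Add(New StringAttribute With {.Name = "Name", .Row = row})
        Attributes.Add(New StringAttribute With {.Name = "Description", .Row = row})
    End Sub
End Class
Public Class RangeManager
    Public Function GetRanges(table As DataTable) As List(Of Range)
        Dim result As New List(Of Range)
        For Each row As DataRow In table.Rows
            result.Add(New Range(row))          ' one lightweight instance per DataRow
        Next
        Return result
    End Function
End Class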
It just seems a tad stupid, in my mind, to recompile and upload the entire application every time I fix a minor bug or a spelling mistake in the code-behind (for example if I set the text of a Label dynamically). A fix or change to the Catalogue page, the way it is now, may mean that a customer trying to view the Contact page - which is in no way related to the Catalogue page apart from sharing the same MasterPage - cannot do so because the DLL is being uploaded.
Basically my question is, given my current situation, how would people suggest I change the architecture of the application by way of splitting it into multiple applications? I mean would it be just customer / admin, or customer / admin and pageviews, or some other way? Or not at all? Are there any other alternatives which I have not mentioned here? Could web services come in handy here? Like split the catalogue itself into a different application and just have the masterpage for all the other pages use a web service to get the names of the products to list on the left hand side? Am I just way WAY over-complicating things? Judging by the length of this question I probably am, and it wouldn't be the first time... I have tried to keep it short, but I always fail... :-)
Many thanks in advance, and sorry if I have just totally confused you!
Regards,
Richard
15000 LOC is not really all that big.
It sounds like you are not pre-compiling your site for publishing. You may want to read this: http://msdn.microsoft.com/en-us/library/1y1404zt(v=vs.80).aspx
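For reference, the command-line precompiler described there is invoked something like this (the paths are placeholders; -v is the virtual path, -p the physical source path, and the final argument the output folder):
aspnet_compiler -v /MyWebSite -p "C:\Source\MyWebSite" "C:\Published\MyWebSite"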
Recompiling and uploading the application is the best way to do it. If all you are changing is your markup, that can be uploaded individually (e.g. changing some html layout in an aspx page).
I don't know what you mean here by application timeout, but if your app domain recycles every 5 minutes, then that doesn't seem right at all. You should look into this.
Also, if you find yourself working on various different parts of the site (i.e. many different changes) but need to deploy only some items in isolation, then you should look into how you are using your source control tools (you are using one, aren't you?). Look into something like Git and branching/merging.
Start by reading:
Application Architecture Guide

How to skip known entries when syncing with Google Reader?

For writing an offline client for the Google Reader service, I would like to know how best to sync with the service.
There doesn't seem to be official documentation yet and the best source I found so far is this: http://code.google.com/p/pyrfeed/wiki/GoogleReaderAPI
Now consider this: With the information from above I can download all unread items, I can specify how many items to download and using the atom-id I can detect duplicate entries that I already downloaded.
What's missing for me is a way to specify that I just want the updates since my last sync.
I can say give me the 10 latest entries (parameters n=10 and r=d). If I specify the parameter r=o (date ascending) then I can also specify the parameter ot=[last time of sync], but only then, and the ascending order doesn't make any sense when I just want to read some items rather than all of them.
Any idea how to solve that without downloading all items again and just rejecting duplicates? Not a very economical way of polling.
Someone proposed that I could specify that I only want the unread entries. But to make that solution work in such a way that Google Reader will not offer these entries again, I would need to mark them as read. In turn, that would mean that I need to keep my own read/unread state on the client and that the entries are already marked as read when the user logs on to the online version of Google Reader. That doesn't work for me.
Cheers,
Mariano
To get the latest entries, use the standard from-newest-date-descending download, which will start from the latest entries. You will receive a "continuation" token in the XML result, looking something like this:
<gr:continuation>CArhxxjRmNsC</gr:continuation>
Scan through the results, pulling out anything new to you. You should find that either all results are new, or everything up to a point is new, and all after that are already known to you.
In the latter case, you're done, but in the former you need to find the new stuff that is older than what you've already retrieved. Do this by using the continuation token to fetch results starting just after the last result in the set you just retrieved: pass it in the GET request as the c parameter, e.g.:
http://www.google.com/reader/atom/user/-/state/com.google/reading-list?c=CArhxxjRmNsC
Continue this way until you have everything.
The n parameter, which is a count of the number of items to retrieve, works well with this, and you can change it as you go. If the frequency of checking is user-set, and thus could be very frequent or very rare, you can use an adaptive algorithm to reduce network traffic and your processing load. Initially request a small number of the latest entries, say five (add n=5 to the URL of your GET request). If all are new, in the next request, where you use the continuation, ask for a larger number, say 20. If those are still all new, either the feed has a lot of updates or it's been a while, so continue on in groups of 100 or whatever.
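A rough sketch of that loop, using HttpClient in VB.NET (authentication, entry parsing, and the AllEntriesAreNew check are placeholders you would fill in against your local store):
Imports System.Linq
Imports System.Net.Http
Imports System.Threading.Tasks
Imports System.Xml.Linq
Module ReaderSyncSketch
    ' Placeholder: compare entry ids against your local database.
    Function AllEntriesAreNew(feed As XDocument) As Boolean
        Return False
    End Function
    Async Function DownloadNewItems() As Task
        Dim baseUrl = "http://www.google.com/reader/atom/user/-/state/com.google/reading-list"
        Dim continuation As String = Nothing
        Dim batchSize As Integer = 5                       ' start small, grow if everything is new
        Using client As New HttpClient()
            Do
                Dim url = baseUrl & "?n=" & batchSize.ToString()
                If continuation IsNot Nothing Then url &= "&c=" & continuation
                Dim feed = XDocument.Parse(Await client.GetStringAsync(url))
                ' Process the entries here; stop once you hit an item you already have.
                If Not AllEntriesAreNew(feed) Then Exit Do
                batchSize = Math.Min(batchSize * 4, 100)   ' adaptive batch size
                Dim contElement = feed.Descendants().
                    FirstOrDefault(Function(e) e.Name.LocalName = "continuation")
                continuation = If(contElement Is Nothing, Nothing, contElement.Value)
            Loop While continuation IsNot Nothing
        End Using
    End Function
End Module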
However, and correct me if I'm wrong here, you also want to know, after you've downloaded an item, whether its state changes from "unread" to "read" due to the person reading it using the Google Reader interface.
One approach to this would be:
Update the status on Google of any items that have been read locally.
Check and save the unread count for the feed. (You want to do this before the next step, so that you guarantee that new items have not arrived between your download of the newest items and the time you check the read count.)
Download the latest items.
Calculate your read count, and compare that to Google's. If the feed has a higher read count than you calculated, you know that something's been read on Google.
If something has been read on Google, start downloading read items and comparing them with your database of unread items. You'll find some items that Google says are read but your database claims are unread; update these. Continue doing so until you've found a number of these items equal to the difference between your read count and Google's, or until the downloads get unreasonable.
If you didn't find all of the read items, c'est la vie; record the number remaining as an "unfound unread" total which you also need to include in your next calculation of the local number you think are unread.
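To make the arithmetic in those steps explicit, here is a skeletal sketch; every member is a placeholder for your own storage layer and the corresponding Reader requests:
Module ReadStateSync
    Private UnfoundUnread As Integer = 0
    ' Placeholders for your own database and HTTP code.
    Private Sub PushLocallyReadItemsToGoogle()
    End Sub
    Private Function GetGoogleUnreadCount() As Integer
        Return 0
    End Function
    Private Sub DownloadLatestItems()
    End Sub
    Private Function CountLocalUnreadItems() As Integer
        Return 0
    End Function
    Private Function ReconcileReadItems(target As Integer) As Integer
        Return 0
    End Function
    Sub SyncReadState()
        PushLocallyReadItemsToGoogle()                   ' push local reads first
        Dim remoteUnread = GetGoogleUnreadCount()        ' save Google's unread count before downloading
        DownloadLatestItems()                            ' then fetch the newest items
        Dim localUnread = CountLocalUnreadItems()
        ' If Google shows fewer unread items than we do, the difference was read
        ' through the web interface.
        Dim readOnGoogle = localUnread - remoteUnread
        If readOnGoogle > 0 Then
            Dim found = ReconcileReadItems(readOnGoogle) ' walk the read items until resolved
            UnfoundUnread += readOnGoogle - found        ' carry any remainder forward
        End If
    End Sub
End Module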
If the user subscribes to a lot of different blogs, it's also likely he labels them extensively, so you can do this whole thing on a per-label basis rather than for the entire feed. That should help keep the amount of data down, since you won't need to do any transfers for labels where the user didn't read anything new on Google Reader.
This whole scheme can be applied to other statuses, such as starred or unstarred, as well.
Now, as you say, this
...would mean that I need to keep my own read/unread state on the client and that the entries are already marked as read when the user logs on to the online version of Google Reader. That doesn't work for me.
True enough. Neither keeping a local read/unread state (since you're keeping a database of all of the items anyway) nor marking items read on Google (which the API supports) seems very difficult, so why doesn't this work for you?
There is one further hitch, however: the user may mark something he has already read as unread on Google. This throws a bit of a wrench into the system. My suggestion there, if you really want to try to take care of this, is to assume that the user in general will be touching only more recent stuff, and to download the latest couple of hundred or so items every time, checking the status on all of them. (This isn't all that bad; downloading 100 items took me anywhere from 0.3s for 300KB to 2.5s for 2.5MB, albeit on a very fast broadband connection.)
Again, if the user has a large number of subscriptions, he's also probably got a reasonably large number of labels, so doing this on a per-label basis will speed things up. I'd suggest, actually, that not only do you check on a per-label basis, but you also spread out the checks, checking a single label each minute rather than everything once every twenty minutes. You can also do this "big check" for status changes on older items less often than you do a "new stuff" check, perhaps once every few hours, if you want to keep bandwidth down.
This is a bit of a bandwidth hog, mainly because you need to download the full article from Google merely to check the status. Unfortunately, I can't see any way around that in the API docs that we have available to us. My only real advice is to minimize the checking of status on non-new items.
The official Google Reader API hasn't been released yet; when it is, this answer may change.
Currently, you would have to call the API and disregard items already downloaded, which, as you said, isn't terribly efficient, as you will be re-downloading items every time even if you already have them.