Worklight JSON Store, can we get race conditions?

Worklight 6.1 on both Windows (colleague) and Mac (me), building a Hybrid app destined for an Android device; to speed up development we do initial testing as a Mobile Web App in the Chrome browser on the desktop.
We get a weird symptom that I'm trying to pin down to a reproducible test case. I think I see different behaviours when stepping through in the debugger versus just letting it run. I want to check whether a certain coding pattern could be the cause of the symptom before I go any further.
Fundamental question: should we wait for the resolution of a promise returned by a JSONStore request for an action on a collection before issuing another request? More explanation below.
The overall intent is to load some data into the JSONStore, with some intelligent replace/merge action if a record is already present. Pseudo code:
for each record retrieved from back-end
if ( record already present in Store )
do some data merging
replace record
else
add record
The application code actually works like this, considering just the add() case; the problem manifests when the store is empty and all records need to be added:
for each record to add
    addPromise = store.get().add(record);
    listOfPromises.push(addPromise);
examine the list of promises, recording any errors
That is, there is no "wait" for one add to finish before issuing the next add request. In effect we've initiated a set of adds "in parallel", whatever that might mean in JavaScript in Chrome.
The code appears to run just fine, with no errors reported. On an Android device it works reliably. In Chrome under normal running (no stepping in the debugger) we end up with no reported errors but only one record inserted - as though a snapshot of the initial "empty" store had been taken and each add were working on that "empty" copy.
After writing this I'm now pretty convinced that the coding pattern described above is vulnerable to a kind of race, and that the better approach is to build a list of documents to be added and insert them in a single operation.

A more detailed answer will be coming later, but I now know that the conclusion in the question is true: the coding pattern described above is vulnerable to a kind of race, and the better approach is to build a list of documents to be added and insert them in a single operation.
In the browser the JSONStore does require that we wait for the result of one request before issuing another one. The recommended approach is:
var dataToAdd = buildArrayOfDataToAdd(responseFromServer);
var dataToReplace = buildArrayOfDataToReplace(responseFromServer);
jsonstore.add(dataToAdd).then(function () {
    return jsonstore.replace(dataToReplace);
});
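If the merge logic really does have to touch each record individually (the replace/merge case from the question), the same rule applies: chain each request off the previous promise rather than firing them all at once. A minimal sketch, assuming records is a non-empty array and collection is the accessor returned by WL.JSONStore.get(); the function name is illustrative:

// issue add() requests strictly one after another by chaining promises
function addSequentially(collection, records) {
    // start the chain with the first add...
    var chain = collection.add(records[0]);
    // ...then attach each remaining add so it only starts once the
    // previous one has resolved
    for (var i = 1; i < records.length; i++) {
        (function (record) {
            chain = chain.then(function () {
                return collection.add(record);
            });
        })(records[i]);
    }
    return chain; // resolves when every add has completed, in order
}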


TheTVDB API - Starting out

I'm looking for assistance with the bare minimum code needed to pull some information from TheTVDB API (v3).
I've never coded anything to do with APIs before.
I tried to shortcut using TVDBSharper, but that uses asynchronous routines, tasks, etc., which I just can't get my head around at the moment, given that the documentation is for C# and I clearly don't understand how "await" works in VB.
I've tried searching for API examples, but most are about creating an API.
The first thing TheTVDB API documentation says is:
"Users must POST to the /login route with their API key and credentials in the following format in order to obtain a JWT token."
^ I don't know how to POST. Any examples I've seen are very long and confusing, and mostly in C#.
So (and I apologise for this drivel, but I've tried on and off for months now)…
Could someone please show me the minimal amount of VB.NET code to pull the show name from, for example, series ID 73739 (Lost)? Hopefully from there I can start to figure some things out.
I have a valid API key from TheTVDB.
Mostly you don't need to understand async/await in any great detail, but I was once where you are now, and though I don't claim to be an expert, I did manage to get my head around it like this:
You know how, if you had something that threw an exception and you never caught it:
Sub Main(arguments)
    Whatever()
End Sub

Sub Whatever()
    StuffBefore()
    OtherWhateverThrowsException()
    StuffAfter()
End Sub

Sub OtherWhateverThrowsException()
    StuffBefore()
    Throw New Exception("Blah")
End Sub
As soon as you threw that exception, your VB thread would stop what it was doing and wind its way back up through the call stack until it popped out of Main and crashed to the command line - a matrixy-style "return to the source", if you like.
Async/Await is a bit like that. When there's some method that is going to take a long time to do its work (downloading strings from TVDB), we could make our thread sit around doing nothing in our code, having a cup of coffee and waiting for TVDB's slow server. This makes things easy to understand, because if we sit and wait, we wait 30 seconds, then we get the response, and then we process the response. Obviously we can't process the response before we get it, so we have to sit around and wait for it, and this is always true.
But it'd be better if we could let our thread nip back out the way it came in, "go back to the source", do something else for someone else, and then call it (or another one of its coworkers, we probably don't care which) back to carry on working for us when TVDB's server responds. This is what Async/Await does for us. Methods that are marked Async are treated differently by the compiler, something like saving your progress in your Xbox game. If you reach a point where you want to wait, you can issue the waiting command; the thread that was doing our work performs a savegame, goes off and works for someone else, and then when we're ready it comes back, loads the game again and carries on where it left off.
The savegame file is manifested as a Task; methods that once upon a time were Subs (didn't return anything) should now be Functions that return a Task (a savegame with no associated data). Methods that once upon a time returned something like a String should now be marked as returning a Task(Of String) - the Task part saves the state of play (data that VB wants to work with), and the String is the data your app wants to work with.
Once you mark something as Async, it needs to contain an Await statement. Await is that SaveYourGameAndGoDoSomethingElseWhileThisFinishes. Typically, while you're awaiting something your program won't have any other stuff it needs the thread to do, so it's not just your function that calls TVDB's API that needs to Await/be marked Async - every single function in the chain, all the way up and out of your code, needs to be marked as Async, and typically you'll Await at every step of the way back up:
Sub DownloadTVDBButton_Click(arguments)
    DoStuff()
End Sub

Sub DoStuff()
    StuffBefore()
    GetFromTVDB()
    StuffAfter()
End Sub

Sub GetFromTVDB()
    Dim i = 1
    GetDataFromTVDBServer() 'wait 30s for TVDB
    ParseDataFromTVDB()
End Sub

Sub ParseDataFromTVDB()
End Sub
Becomes:
Async Sub DownloadTVDBButton_Click(arguments) 'windows forms event handlers are always Subs. Do not use Async Subs in your own code
    Await DoStuffAsync()
End Sub

Async Function DoStuffAsync() As Task
    StuffBefore()
    Await GetFromTVDBAsync()
    StuffAfter()
End Function

Async Function GetFromTVDBAsync() As Task
    Dim i = 1
    Await GetDataFromTVDBServerAsync() 'go back up, and do something else for 30s
    ParseDataFromTVDB()
End Function

Sub ParseDataFromTVDB() 'downstream; doesn't need to be async/await
End Sub
We switched to using TVDB's async data call, so we await it. When we await, the thread goes back up to the previous function, DoStuffAsync. Because we're awaiting that, the thread goes back up a level again into the button click handler. Because we're awaiting that also, the thread goes back up again and out of your code. It goes back to its regular day job of drawing the UI, making it look like the program is still responding, etc. When the TVDB call completes, the thread comes back to the point just after it (ready to run ParseData), it has all the data back from TVDB, and the savegame has been reloaded so everything it knew before/the state is as it was (variable i exists and is 1; you could conceive that it would otherwise have been lost when the thread went off to do something else).
In essence, async/await has allowed us to work exactly as we would have done without it; it's just that it built a little savegame mechanism so that our thread could go off and do something else while TVDB was busy getting our data, rather than having to sit around doing nothing while we waited.
It may also help to think of Await as a device that unpacks a savegame and gets your data out of it. If GetSomething() sits for 30s and then returns the String you want, then GetSomethingAsync() will instantly return a Task that will (in 30s, when the work is done) enclose that same String, and Await GetSomethingAsync() will wait until the Task is done and then get the String you want out of it.
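In code form, that unpacking looks like this (using the GetSomethingAsync from the paragraph above; just a sketch):

Async Function ShowTheStringAsync() As Task
    Dim savegame As Task(Of String) = GetSomethingAsync() 'returns immediately
    Dim result As String = Await savegame 'waits until done, then unwraps the String
    Console.WriteLine(result)
End Function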
Methods that are named like "...Async" should be thought of as "behaves in an asynchronous way". They DON'T have to be marked with the Async modifier; Async is only needed if a method uses the Await keyword, but I'm recommending you use Await on everything that returns a Task (i.e. is awaitable) all the way up and down your call tree. When you get more confident you don't always have to Await SomethingAsync, but honestly the overhead of doing so is minimal and the consequences of not doing so are occasionally disastrous. Developers who follow convention always name their stuff "...Async" if it behaves in an async way; you should adopt this too, and make sure you name all your Async methods with an "Async" at the end.
I don't know how to POST
You don't really need to. The TVDB API has a Swagger endpoint; Swagger is a way of describing a REST service programmatically, so that your Visual Studio can build a set of classes to use it and provide you with nicely named things. Whipping out a WebClient and manually creating some JSON is very old school/low level.
TVDB's swagger descriptor is at https://api.thetvdb.com/swagger.json
You're supposed to be able to right-click your project, choose Add... REST API Client, paste https://api.thetvdb.com/swagger.json in as the URL, and pick a namespace (an organizational unit) for all the generated classes to go in.
At the moment, something in TVDB's API is causing AutoRest (the tool that VS uses to parse the API spec) to choke, but ordinarily it would work out and you'd get a bunch of generated classes to work with that would do all the POSTing etc. for you (AutoRest generates C#; you'd be best off generating the C# into a new project and then adding a reference to that project from your VB).
As noted, my VS can't process the TVDB API at the moment and I don't have enough time today to figure out why, but you could certainly post a question on AutoRest's GitHub (or on SO) saying "why does https://api.thetvdb.com/swagger.json cause an 'Input string not in correct format' error?" and get some more help.
You asked (maybe implicitly) a couple of follow up questions in the comments:
I don't know about REST/swagger (I've heard of it though), and can't see any way to add it to the project as you described, and I'm no closer to getting info from TheTVDB. However, it might have helped me use functions in TVDBSharper. I will just have to try a few things with it. Thanks again
Yes; sorry - I should have been more explicit that "Add REST API Client" is only available in a C# project, because it relies on a tool that generates C#. This isn't a blocker though - you can just make a C# project and add it to your VB solution alongside your VB project; the two languages are totally interoperable. Your VB can tell your C# what to do.
However, there isn't much point in trying at the moment, because the tool that is supposed to do it can't handle what TVDB is putting out; my VS can successfully ask the TVDB API to describe itself, but it doesn't seem able to understand the response.
In a nutshell: VS has a bug that means it can't use the TVDB API directly, so you're best off trying via TvDbSharper. The https://github.com/HristoKolev/TvDbSharper readme has some examples in it. They're C#, but basically "remove the semicolons and they'll pretty much work in VB".
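To give you a head start, here is the README's flow translated to VB for your exact case (series 73739). This is a sketch based on the TvDbSharper examples, so the exact class, method, and property names may vary between versions:

Imports System
Imports System.Threading.Tasks
Imports TvDbSharper

Module Program
    Sub Main()
        'a console app can block on the Task; in UI code you would Await instead
        GetShowNameAsync().Wait()
    End Sub

    Async Function GetShowNameAsync() As Task
        Dim client As New TvDbClient()
        'this does the POST to /login for you and stores the JWT token
        Await client.Authentication.AuthenticateAsync("YOUR_API_KEY")
        'fetch series 73739 (Lost) and print its name
        Dim response = Await client.Series.GetAsync(73739)
        Console.WriteLine(response.Data.SeriesName)
    End Function
End Module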
Now, a bit about the headline terms here - background understanding, if you like. API, REST and Swagger are easy enough to explain:
API
An API is effectively a website (in this case run by TVDB), intended for software to consume rather than humans. It takes raw data in and chucks raw data out - unlike a normal website intended for our eyes, nothing about it is presentational in the slightest.
REST
REST as a phrase and a concept is a source of confusion for many. A lot of the time when you try to read about what REST means, the blogs quickly get bogged down in details and make it too complex, with all these funky examples. They kind of forget to explain the REST part because it's come to mean not much at all - something so obvious and nondescript that we don't think about it any more.
In essence, something is RESTful if the server doesn't have to remember anything about what you did before in order to service a request you make now - every request stands on its own and can be serviced completely without reference to anything else. This is a different workflow to other forms where you might change the name of something by issuing an editname('newname') command. What name actually gets edited depends on whether you first did selectshow() or selectactor(), and also which show or which actor - a workflow like that means the server has to remember whether you selected a show or an actor, and which show/actor was selected, before it can process the editname() command. If you selected show 123, the edit would change the name of show 123. If you selected actor 456, it would change the name of actor 456.
Critically, if you replayed the same editname() at a different time, a different thing would get edited, because the state of your dialogue with the server changes. It's kind of dumb to make the server remember all that, for everyone, when really we could push the job of identifying whether we want to rename an actor or a show, and which one, onto the client.
By making it editactorname(123, 'Jon wayne'), you're transferring all the info the server needs to perform the request: your credentials, the actor id, the new name, and the fact that it's an actor name and not a show name. All this goes in the one request, and you can replay this request as many times as you like at any time, and it always has the same effect; things that happened before don't affect it (well... apart from authentication).
It gets a bit woolly if taken literally - "well, if the server doesn't remember anything, how does it even remember I changed the name of actor 123 to Jon Wayne so it can service my later getactorname(123) request?" - but that's about the state of the data in the server, not the state of your interaction with the server. Things that are truly stateless are mostly purely calculatory and not too useful; something somewhere needs to be able to remember something or there is nothing to calculate. Things are rarely completely stateless; even TVDB's API requires you to authenticate first, using a user/password/apikey, and then the server issues a token that becomes your username/password/apikey equivalent for every subsequent request - the server has to remember that token, or every time you quote it it will say "can't edit actor name; not authorized". So, yeah... when viewed holistically, something usually has to be remembered at some point or nothing works. REST things are rarely 100% truly stateless, but mostly they are - and it's really about "when you want to edit the actor name, send a) that you want to edit an actor name, b) which actor, c) what name, d) your credentials to prove you're allowed to" - everything the server needs in the one hit.
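To make that concrete, a self-contained request looks something like this on the wire (a made-up endpoint for illustration, not TVDB's actual API):

PUT /actors/123/name HTTP/1.1
Host: api.example.com
Authorization: Bearer <token from the login step>
Content-Type: application/json

{ "name": "Jon Wayne" }

Everything the server needs - what to edit, the new value, and proof you're allowed - travels in that one request.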
Swagger
Now called OpenAPI, Swagger is a protocol for describing an API: when an API has some actions that take some data and return some data, it's helpful to know what the actions are called (setactoryearsactive), what type of data they take (date, date), what sort of things you should put in them (the from date, and the to date or null if still active), what they return (boolean) and what the return means (true if success, false if not).
If we have a standardized way of describing these things then we can build standard software that reads the standard description of the API and writes a bunch of standard code that uses the API. This is software that writes a description so other software can read it and write software that uses the first set of software. It's an API API.
There is a lot of software here:
The API is software (tvdb),
The thing that generates the description of the API is software (Swagger),
The thing that consumes the description of the API and creates a client is software (AutoRest),
And the thing that uses the client is software (your app).
You could code your app to hit the API directly - the API's just responding to HTTP requests, which are just text files formatted in a particular way, sent to port 80 of the web server that hosts the API. You could write one such request in Notepad and use telnet to send it and get a valid response. You could code your app to do it (you were just about to). You could use someone else's library (TvDbSharper), which does it somehow. You could use some software that generates something like TvDbSharper; it reads the description of the API and generates classes for you to use, and those classes make the HTTP requests. Everything can be done at any level; you could write all your apps in assembler, the lowest of the low. It takes ages and it is boring - this is why we use ever higher levels of abstraction.
We make something, and then make it do a thousand things, and then realize that writing the same code over and over with one bit changed each time is boring and repetitive and something a computer should do, so we devise ways of letting software write the boring repetitive code so that we can do the interesting things.
Swagger and AutoRest are those kinds of things: Swagger inspects all the methods, what they take and return, and generates a regular, consistent description. AutoRest reads it and generates a regular, consistent set of client classes. Then the human uses the client classes to do the interesting things. The AutoRest part doesn't work out for us at the moment; it's written by different people than the Swagger team, so differences arise: Swagger describes something and AutoRest can't understand it. It will one day, I'm sure (in this game of walls and ladders); such is the nature of open source - everyone has a different set of priorities.
Right now we could probably get AutoRest working by finding the one thing it is choking on and removing it. There may be no need, though: if the TvDbSharper guys have written a complete enough set of client classes, you can use TvDbSharper to do everything you need - it is then effectively the set of client classes AutoRest would have built, maybe more. Use TvDbSharper.
The idea behind Swagger and AutoRest is that a TvDbSharper shouldn't need to exist: it's a very specific application that only works with TVDB and only works in .NET.
If we put effort into making Swagger able to generate a description of any API written in any language, and into making AutoRest able to consume that description and output any language, then we have something more useful than TvDbSharper - no need for a TvDbSharper at all, because we can generate something that does the same (of course, specific applications can be superior, just like bespoke tailored suits are superior, but that's another philosophy for another time).

Is it possible to allow Worklight users to use app before updating?

Let's say I push new code to the Worklight server for purposes of a Direct Update. Can I allow users to still use the application for a set amount of time before they actually have to accept the update, or is the application essentially unavailable to them until they download the new code?
If you are developing your application using Worklight 6.2, then you as a developer can take over the entire Direct Update flow and can essentially decide how to handle a received update from the server.
Note that by taking full control, you own the flow end-to-end; the default Worklight framework handling will not be available and the full responsibility is on the developer to ensure the validity of every step.
You can read more about customizing Direct Update, here:
Initial reading, starting slide #14: http://public.dhe.ibm.com/software/mobile-solutions/worklight/docs/v620/05_06_Using_Direct_Update_to_quickly_update_your_application.pdf
In depth reading: http://www-01.ibm.com/support/knowledgecenter/SSZH4A_6.2.0/com.ibm.worklight.dev.doc/dev/c_customizing_direct_update_ui_android_wp8_ios.html
In your scenario, I think you could probably go in a less extreme way and just do some tweaks before letting the Worklight framework handle the update from the server. Meaning, you could use the example provided in the training module (slide #18 from the PDF above), where you intercept the update:
wl_directUpdateChallengeHandler.handleDirectUpdate = function (directUpdateData, directUpdateContext) {
    ... // display message or counter
};
Display a message and start a counter, and when the time is up, just call directUpdateContext.start() to start the update.
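A sketch of that flow, where the dialog text and the five-minute grace period are just illustrative choices:

wl_directUpdateChallengeHandler.handleDirectUpdate = function (directUpdateData, directUpdateContext) {
    // tell the user an update is pending, but let them keep working
    WL.SimpleDialog.show(
        'Update available',
        'The application will update itself in 5 minutes.',
        [{ text: 'OK', handler: function () {} }]
    );
    // when the grace period expires, hand control back to the framework
    setTimeout(function () {
        directUpdateContext.start();
    }, 5 * 60 * 1000);
};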

Is it possible to write a plugin for Glimpse's existing SQL tab?

Is it possible to write a plugin for Glimpse's existing SQL tab?
I'm trying to log my SQL queries, and the currently available extensions don't support our in-house SQL library. I have written a custom plugin which logs what I want, but it has limited functionality and doesn't integrate with the existing SQL tab.
Currently, I'm logging to my custom plugin using a single helper method inside my DAL's base class. This function takes the SqlCommand and duration in order to show data on my custom tab:
// simplified example:
Stopwatch sw = Stopwatch.StartNew();
sqlCommand.Connection = sqlConnection;
sqlConnection.Open();
object result = sqlCommand.ExecuteScalar();
sqlConnection.Close();
sw.Stop();
long duration = sw.ElapsedMilliseconds;
LogSqlActivity(sqlCommand, null, duration);
This works well on my 'custom' tab but unfortunately means I don't get metrics shown on Glimpse's HUD.
Is there a way I can provide Glimpse directly with the info it needs (in terms of method names, and parameters) so it displays natively on the SQL tab?
The following advice is based on the fact that you can't use DbProviderFactory and you can't use a proxied SqlCommand, etc.
The data that appears in the "out-of-the-box" SQL tab is based on messages of given types being published through our internal Message Broker (see below for information on this). Because of the above limitations in your case, to get things lighting up correctly (i.e. your data showing up in the HUD and the SQL tab), you will need to simulate the work that we do under the covers when we publish these messages. This shouldn't be that difficult, and once done, it should just work moving forward.
If you have a look at the various proxies we have here, you will be able to see what messages we publish in what circumstances. Here are some highlights:
DbCommand
Log command start - here
Log command error - here
Log command end - here
DbConnection
Log connection open - here
Log connection closed - here
DbTransaction
Log Started - here
Log committed - here
Log rollback - here
Other
Command row count here - Glimpse calculates this at the DbDataReader level, but you could do it elsewhere as well
Now that you have an idea of what messages we are expecting and how we generate them, as long as you pass in the right data when you publish those messages, everything should just light up. If you are interested, here is the code that looks for the messages that you will be publishing.
Message Broker: If you look at the GlimpseConfiguration here, you will see how to access the Broker. This can be done statically if needed (as we do here). From here you can publish the messages you need.
Helpers: For generating some of the above messages, you can use the helpers inside the Support class here. I would have shifted all the code for publishing the actual messages to this class, but I didn't think there would be too many people doing what you are doing.
Update 1
Starting point: With the above approach you shouldn't need to write your own plugin. You should just be able to access the broker via GlimpseConfiguration.GetConfiguredMessageBroker() (make sure you check whether it's null, which it is if Glimpse is turned off, etc.) and publish your messages.
I would imagine that you would put the inspection code that leverages the broker and publishes the messages wherever you have knowledge of the information that needs to be collected (i.e. inside your custom lib). Normally this would require references to Glimpse inside your lib (which you may not want), so to protect against this, from your lib you would call a proxy (which could be another VS project) that has the Glimpse dependency. That way your ADO lib only has references to your own code.
To get your toes wet, try just publishing a couple of fake connection and command messages. Assuming the broker you get from GlimpseConfiguration.GetConfiguredMessageBroker() isn't null, these should just show up. Then you can work towards getting real data into it from your lib.
Update 2
Obsolete Broker Access
It's marked as obsolete because it's going to change in v2. You will still be able to do what you need to do, but the way of accessing the broker has changed. For what you currently need to do, this is OK.
Sometimes null
As you have found, this really depends on where in the page lifecycle you currently are. To get around this, I would probably change my original recommendation a little.
In the code where you are currently creating messages and pushing them to the message bus, try putting them into HttpContext.Current.Items instead. If you haven't used it before, this is a store which ASP.NET provides out of the box and which lasts the lifetime of a given request. You could have a list that you put in there: still create the message objects as you are doing, but put them into that list instead of pushing them through the broker.
Then, create an HttpModule (it's really simple to do) which taps into the PostLogRequest event. Within this handler, you would pull the list out of the context, iterate through it, and push each message into the message broker (accessing it the same way you have been).
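A sketch of that module (the Items key, the buffering-of-closures idea, and the class name are illustrative, not Glimpse APIs; GlimpseConfiguration.GetConfiguredMessageBroker() is the accessor mentioned above, and the namespaces are my assumption for Glimpse 1.x):

using System;
using System.Collections.Generic;
using System.Web;
using Glimpse.Core.Extensibility;   // IMessageBroker (assumed namespace)
using Glimpse.Core.Framework;       // GlimpseConfiguration (assumed namespace)

public class GlimpseMessageFlushModule : IHttpModule
{
    private const string ItemsKey = "PendingGlimpseMessages";

    // Call this from your DAL instead of publishing directly. The closure
    // captures the message with its static type intact, which matters
    // because the broker routes messages by their type.
    public static void Defer(Action<IMessageBroker> publish)
    {
        var items = HttpContext.Current.Items;
        var pending = items[ItemsKey] as List<Action<IMessageBroker>>;
        if (pending == null)
        {
            pending = new List<Action<IMessageBroker>>();
            items[ItemsKey] = pending;
        }
        pending.Add(publish);
    }

    public void Init(HttpApplication application)
    {
        application.PostLogRequest += (sender, e) =>
        {
            var context = ((HttpApplication)sender).Context;
            var pending = context.Items[ItemsKey] as List<Action<IMessageBroker>>;
            var broker = GlimpseConfiguration.GetConfiguredMessageBroker();
            if (pending == null || broker == null)
                return; // nothing buffered, or Glimpse is turned off

            foreach (var publish in pending)
                publish(broker);
        };
    }

    public void Dispose() { }
}

Your DAL would then call GlimpseMessageFlushModule.Defer(b => b.Publish(message)) at the points where it currently publishes.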

Porting PHP API over to Parse

I am a PHP dev looking to port my API over to the Parse platform.
Am I right in thinking that you only need cloud code for complex operations? For example, consider the following methods:
// Simple function to fetch a user by id
function getUser($userid) {
return (SELECT * FROM users WHERE userid=$userid LIMIT 1)
}
// another simple function, fetches all of a user's allergies (by their user id)
function getAllergies($userid) {
return (SELECT * FROM allergies WHERE userid=$userid)
}
// Creates a script (story?) about the user using their user id
// Uses their name and allergies to create the story
function getScript($userid) {
$user = getUser($userid)
$allergies = getAllergies($userid)
return "My name is {$user->getName()}. I am allergic to {$allergies}"
}
Would I need to implement getUser()/getAllergies() endpoints in Cloud Code? Or can I simply use Parse.Query("User")... thus leaving me with only the getScript() endpoint to implement in cloud code?
Cloud code is for computation heavy operations that should not be performed on the client, i.e. handling a large dataset.
It is also for performing beforeSave/afterSave and similar hooks.
In your example, providing you have set up a reasonable data model, none of the operations require cloud code.
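For example, the allergies lookup can stay entirely on the client. A minimal sketch using the JavaScript SDK, where the "Allergies" class and "userid" field are assumptions about your data model:

var Allergies = Parse.Object.extend('Allergies');
var query = new Parse.Query(Allergies);
query.equalTo('userid', userId);
query.find({
    success: function (results) {
        // results holds all of this user's allergy objects
    },
    error: function (error) {
        console.log(error.message);
    }
});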
Your approach sounds reasonable. I tend to put simple queries that will most likely not change on the client side, but it all depends on your scenario. When developing mobile apps I tend to put a lot of code in Cloud Code; I've found that it speeds up my development cycle. For example, if someone finds a bug and it's in Cloud Code: make the fix, run parse deploy, done! The change is available to all mobile environments instantly!!! If that same code is in my mobile app, it really sucks, because now I have to fix the bug, rebuild, push it to the App Store/Google Play, wait x number of days for it to be approved, and have the users download it... you see where I'm going here.
Take, for example, your
SELECT * FROM allergies WHERE userid=$userid
query. Even though this is a simple query, what if you want to sort it? Maybe add some additional filtering?
These are the kinds of things I think of when deciding where to put the code. Hope this helps!
As a side note, I have also found cloud code very handy when needing to add extra security to my apps.
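If you do decide to put getScript() in Cloud Code, a sketch of the endpoint might look like this (the classic Parse.Cloud.define API; the "Allergies" class and the "name" fields are assumptions about your schema):

Parse.Cloud.define('getScript', function (request, response) {
    var userQuery = new Parse.Query(Parse.User);
    userQuery.get(request.params.userid).then(function (user) {
        var allergyQuery = new Parse.Query('Allergies');
        allergyQuery.equalTo('userid', user.id);
        return allergyQuery.find().then(function (allergies) {
            var names = allergies.map(function (a) { return a.get('name'); });
            response.success('My name is ' + user.get('name') +
                '. I am allergic to ' + names.join(', '));
        });
    }).then(null, function (error) {
        response.error(error);
    });
});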

how to get errors from tileupdatemanager

In my WinRT app, I am trying to update the live tile based on polled URIs. There is currently no update happening, and I can't figure out how to troubleshoot. There are numerous scenarios that could be breaking things, but I can't seem to find any way to get insight into potential errors.
The TileUpdateManager seems to be a bit of a black hole that absorbs information but never lets it out.
Does anyone know of how to view errors from the TileUpdateManager?
If it interests anyone, here is my update code:
TileUpdateManager.CreateTileUpdaterForApplication().EnableNotificationQueue(true);
PeriodicUpdateRecurrence recurrence = PeriodicUpdateRecurrence.HalfHour;
List<Uri> urisToPoll = new List<Uri>();
urisToPoll.Add(new Uri("http://livetileservice2012.azurewebsites.net/api/liveupdate/1"));
urisToPoll.Add(new Uri("http://livetileservice2012.azurewebsites.net/api/liveupdate/2"));
TileUpdateManager.CreateTileUpdaterForApplication().StartPeriodicUpdateBatch(urisToPoll, recurrence);
To expand on Nathan's comment, here are two steps you can take to troubleshoot:
Enter your URI straight into a browser to see the results that are returned, and inspect them for being proper XML (a minimal example of what a tile service should return appears after these steps). As Nathan points out, your URIs are returning JSON, which will be ignored by the tile update manager. As a working example (one that I use in Chapter 13 of my HTML/JS book), try http://programmingwin8-js-ch13-hellotiles.azurewebsites.net/Default.cshtml.
If you feel that your URI is returning proper XML, try it in the Push and Periodic Notifications Sample (Scenarios 4 and 5 for tiles and badges). If this works, then the error would be in your app code and not in the service.
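For reference, "proper XML" here means a tile payload along these lines - TileWideText03 is one of the standard Windows 8 tile templates, and the text is arbitrary:

<tile>
  <visual>
    <binding template="TileWideText03">
      <text id="1">Hello from the update service</text>
    </binding>
  </visual>
</tile>

A response of JSON (or anything else that isn't a valid tile payload) is silently ignored, which matches the "black hole" behavior you're seeing.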
Do note that StartPeriodicUpdate[Batch] will send a request to the service right away, rather than waiting for the first interval to pass.
Also, if you think that you might have a problem with the service, it's possible to step through its code using Visual Studio Express for Web running on the localhost, when the app is also running inside Visual Studio Express for Win8 (where localhost is enabled).
.Kraig