IBM Worklight - JSONStore logic to refresh data from the server and be able to work offline

Currently, the JSONStore API provides a load() method whose documentation says: "This function always stores whatever it gets back from the adapter. If the data exists, it is duplicated in the collection." This means that if you want to avoid duplicates when calling load() on an already populated collection, you need to empty or drop the collection first. But if you want to keep the elements already in the collection, in case connectivity is lost and your application goes into offline mode, you also need to keep track of those existing elements.
Since the API doesn't provide an "overwrite" option that would replace the existing elements when the call to the adapter succeeds, I'm wondering what kind of logic should be put in place to manage both offline availability of data and the ability to refresh it at any time. It is not obvious how to manage all the failure cases by nesting the JS code, due to the promises...
Thanks for your advice!

One approach to achieve this (a code sketch follows the steps below):
1. Use enhance to create your own load method (e.g. loadAndOverwrite). You should have access to all of the variables kept inside a JSONStore instance (collection name, adapter name, adapter load procedure name, etc.) -- you will probably use those variables in the invokeProcedure step below.
2. Call push to make sure there are no local changes.
3. Call invokeProcedure to get the data; all the variables you need should be available in the context of enhance.
4. Check whether each document already exists and, if so, remove it. Use {push: false} so JSONStore won't track that change.
5. Use add to add the new/updated document. Use {push: false} so JSONStore won't track that change.
Alternatively, if the document exists you can use replace to update it.
Alternatively, you can use removeCollection and call load again to refresh the data.
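A minimal sketch of those steps, assuming Worklight 6.2+ (where JSONStore methods and WL.Client.invokeProcedure return promises, and jQuery's $.when is available). The collection name ('people'), the result array under invocationResult.results, and the unique id search field are illustrative assumptions, not part of the original answer:
```javascript
// Hypothetical loadAndOverwrite method built with enhance.
WL.JSONStore.get('people').enhance('loadAndOverwrite', function () {
  var collection = this; // the JSONStore collection instance

  // Step 2: push local changes first so the overwrite does not lose them.
  return collection.push()
    .then(function () {
      // Step 3: fetch fresh data; the adapter details live on the instance.
      return WL.Client.invokeProcedure({
        adapter: collection.adapter.name,
        procedure: collection.adapter.load.procedure
      });
    })
    .then(function (response) {
      var records = response.invocationResult.results || [];
      var pending = records.map(function (record) {
        // Step 4: remove any existing copy, untracked ({push: false}).
        return collection.find({id: record.id}, {exact: true})
          .then(function (docs) {
            return docs.length > 0
              ? collection.remove(docs[0], {push: false})
              : null;
          })
          .then(function () {
            // Step 5: add the fresh record, also untracked.
            return collection.add(record, {push: false});
          });
      });
      // Resolve once every record has been written.
      return $.when.apply($, pending);
    });
});

// Usage:
// WL.JSONStore.get('people').loadAndOverwrite()
//   .then(function () { /* data refreshed */ })
//   .fail(function (err) { /* offline: existing data is kept */ });
```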
There's an example that shows how to use all those API calls here.
Regarding promises, read this from InfoCenter and this from HTML5Rocks. Google can provide more information.

Related

How to queue requests in React Native without Redux?

Let's say I have a notes app. I want to enable the user to make changes while offline, save the changes optimistically in a MobX store, and add a request to save the changes (on the server) to a queue.
Then, when the internet connection is re-established, I want to run the requests in the queue one by one so the data in the app syncs with the data on the server.
Any suggestions would help.
I tried using react-native-job-queue but it doesn't seem to work.
I also considered react-native-queue but the library seems to be abandoned.
You could create a separate store (or an array in AsyncStorage) for pending operations, and add the operations to an array there when the network is disconnected. Tell your existing stores to look there for data, so you can render it optimistically. Then, when you detect a connection, run the updates in array order, and clear the array when done.
You could also use your existing stores, and add something like pending: true to values that haven't posted to your backend. However, you'll have less control over the order of operations, which sounds like it is important.
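A minimal sketch of the first idea, using the community AsyncStorage and NetInfo packages; all names here (QUEUE_KEY, enqueue, flushQueue) and the request shape are illustrative, not from the original answer:
```javascript
import AsyncStorage from '@react-native-async-storage/async-storage';
import NetInfo from '@react-native-community/netinfo';

const QUEUE_KEY = 'pendingOperations';

// Add a request descriptor (URL, method, body) to the persisted queue.
async function enqueue(operation) {
  const raw = await AsyncStorage.getItem(QUEUE_KEY);
  const queue = raw ? JSON.parse(raw) : [];
  queue.push(operation);
  await AsyncStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
}

// Run queued requests in order, stopping at the first failure so
// ordering is preserved for the next reconnect.
async function flushQueue() {
  const raw = await AsyncStorage.getItem(QUEUE_KEY);
  const queue = raw ? JSON.parse(raw) : [];
  while (queue.length > 0) {
    const op = queue[0];
    try {
      await fetch(op.url, { method: op.method, body: JSON.stringify(op.body) });
      queue.shift();
      await AsyncStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
    } catch (e) {
      break; // still offline or server error; retry on the next reconnect
    }
  }
}

// Flush whenever connectivity is restored.
NetInfo.addEventListener(state => {
  if (state.isConnected) {
    flushQueue();
  }
});
```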
As it turns out, I was wrong. The react-native-job-queue library does work; I just made the mistake of trying to pass a function reference (the API call) to the Worker, instead of passing an object containing the request URL and method and implementing the Worker to make the API call based on those parameters.
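For reference, the fix looks roughly like this, based on react-native-job-queue's documented Worker/addJob API (verify the details against the library version you use; the job name and payload shape are made up):
```javascript
import queue, { Worker } from 'react-native-job-queue';

// Register a worker that performs the API call from plain, serializable
// parameters ('saveNote' and the payload fields are illustrative).
queue.addWorker(
  new Worker('saveNote', async (payload) => {
    await fetch(payload.url, {
      method: payload.method,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(payload.body),
    });
  })
);

// Enqueue jobs with a plain object, not a function reference.
queue.addJob('saveNote', {
  url: 'https://example.com/notes/1',
  method: 'PUT',
  body: { text: 'updated note' },
});
```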

Is it good practice to attach an event-related parameter to an object's model as a variable?

This is about an API handling validation when saving an object: the front-end client sends a request to a specific API endpoint, and on the back end the API creates a new object if the right conditions are met.
Right now our regular method is that each model has a ruleset for its fields, and validation is invoked when the save function is called -- so technically the validation is done right before the object is saved to the database.
Then, during today's code review, I came across a solution I wasn't sure was good practice or not. The front end must send a specific parameter to the API every time, because other APIs use our API as well and we need to know whether the request was sent as an API request or a browser request. If this parameter is present, we want to execute an extra validation function on a specific field.
(1) If I had to implement it, I would check the incoming parameter at the service-handler or controller level; if present, I would invoke the validation right away and throw an error if it fails.
(2) The implementation I saw, however, adds an extra variable to the model, sets it when the parameter is present, and validates only when the save function is invoked on the object (which first validates the ruleset defined on the object's fields, then saves the object to the database).
My problem with (2) is that the object has now grown by an extra variable that relates only to a specific event, so I would say it's better to implement (1). But (2) also has an advantage: when you create the object at a different endpoint by parsing the parameters, the validation will work there as well, even if the developer forgets to update the code there.
Now this may seem like a silly question -- why would I care about just one extra variable? -- but this is the bedrock of something good or bad. If I say this is OK, then from now on the models will start growing with extra variables that relate only to specific events, which I think should be handled at the controller/service-handler level. On the other hand, the code is more reliable if it isn't the developer who has to remember all the 6712537 functionalities and keep them in mind when making changes somewhere. Let's say all the devs get a heart attack tomorrow from the excitement of an amazing discovery, and a new developer has to work on the project without knowing these small details; when he then has to change something related to this functionality, the new feature should still be supported by the old one.
So my question is: is there any good practice on this, and what do you think the best approach would be?
So I spent some time thinking about the solution, and I think the best approach is to have an array of acceptable trigger variables in the model class. When the parameters are passed to the model at the controller level, the loader function can be modified to take the trigger variables from the parameters and save them in the model's associative-array variable that stores trigger variables.
By default this array is empty, and no matter how many new variables need to be created, it will only contain the necessary ones when they are used.
Of course, the loader function also needs to be modified so that it can filter out the non-trigger variables, just as it does for the regular fields, and there can even be a validation ruleset on the trigger variables if necessary.
This solves both the problem of overgrowing the object with unnecessary variables and the centralized-validation part, because the validation can now always be done in the model instead of the controller.
And since the loader function is modified to store the trigger variables in the model's trigger-variables array, the developer never has to remember that this functionality exists. That is good, because in the future, when he creates a new related function or endpoint that handles object creation, he will not forget to validate it against the old functionality: the loader function he modified in the past will handle it for him.
It should be noted, though, that since the loader function doesn't differentiate between the parameters, or where to load them, other than by checking the parameter names with the filter functions, these parameter names must be distinct from one another; otherwise buggy functionality can be created accidentally. For example, if you forget that a model attribute with the same name is already used, you can accidentally trigger an event that was programmed to fire when the trigger variable with that name is present. This can be solved by prefixing the trigger variables, for example.
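A hedged sketch of this idea in JavaScript; the class, field names, and the trigger_ prefix are all illustrative, not taken from an actual codebase:
```javascript
class UserModel {
  // Whitelist of accepted trigger variables; prefixed to avoid clashing
  // with regular field names.
  static TRIGGER_VARIABLES = ['trigger_apiRequest'];

  constructor() {
    this.fields = {};        // regular model fields
    this.triggerVars = {};   // event-related trigger variables, empty by default
  }

  // The loader splits incoming parameters into fields and trigger variables,
  // dropping anything that is not recognized by either filter.
  load(params) {
    for (const [name, value] of Object.entries(params)) {
      if (UserModel.TRIGGER_VARIABLES.includes(name)) {
        this.triggerVars[name] = value;
      } else if (name in this.ruleset()) {
        this.fields[name] = value;
      }
    }
  }

  ruleset() {
    return { email: { required: true } }; // per-field validation rules
  }

  save() {
    this.validateRuleset();
    // The extra validation runs only when the trigger variable was loaded,
    // so every endpoint that uses load() gets it automatically.
    if (this.triggerVars.trigger_apiRequest) {
      this.validateEmailForApiRequests();
    }
    // ...persist to the database...
  }

  validateRuleset() { /* validate this.fields against ruleset() */ }
  validateEmailForApiRequests() { /* extra field-specific validation */ }
}
```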

How to do a REST API update right?

I am developing a REST API update method for the user profile resource user/profile. I am unsure which HTTP method I should use. The update contains some required attributes, so it seems more like a PUT request, where the client needs to fill in all the attributes. But then how can the attributes be extended in the future? If I decide to add a new attribute, it will automatically be cleared because the client does not implement it yet.
But what if this new attribute has a default value or is set by another route?
Can I use PUT without a strict set of attributes, keeping the old data when a new value doesn't come in the request? Or how is this normally done?
HTTP is an application whose application domain is the transfer of documents over a network -- Webber, 2011.
PUT is the appropriate method to use when "saving" a new version of a document onto a web server.
how can the attributes be extended in the future?
You design your schemas to be forward and backward compatible; in practice, what this means is that you can add new optional elements with reasonable default values. When you need to add a new required element, you change the name of the schema.
You'll find prior art in this topic by searching XML literature for must ignore.
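For illustration, here is a reader that follows those rules: unknown fields are ignored ("must ignore"), and later-added optional fields fall back to reasonable defaults. The field names are made up:
```javascript
// Reads a profile written by any schema version that keeps the v1
// required fields; forward and backward compatible by construction.
function readProfile(json) {
  return {
    name: json.name,                   // required since v1
    email: json.email,                 // required since v1
    timezone: json.timezone ?? 'UTC',  // optional, added later, with a default
    // any extra fields a newer client sent are simply ignored
  };
}
```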
You understand correctly: PUT is for complete replacement, so values that you don't include would be lost.
Instead, use the PATCH method, which is for making partial updates. You can update only the properties you include values for.
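As a hedged illustration with fetch (only the /user/profile resource comes from the question; the field names are made up):
```javascript
// PUT replaces the whole document, so every attribute must be sent.
await fetch('/user/profile', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'Ada', email: 'ada@example.com', bio: 'Old bio' }),
});

// PATCH sends only the attributes that change; the rest keep their values.
await fetch('/user/profile', {
  method: 'PATCH',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ bio: 'New bio text' }),
});
```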

Worklight - Updatable static content

I have this requirement: my WL application has a set of static pages that might be updated at any time. Originally, the source of all static content is a desktop page that is transformed by XSL into mobile-friendly content. The problem is that I don't want to do that on each request (an HA requirement).
I want some inspiration on how to architect this without using the Direct Update mechanism (I don't want the end user to be notified of these updates).
I should note that the pages will change rarely, maybe every few months.
I'm thinking about two ways of doing this:
1- Making the transformation on the adapter side and relying on WL caching so that the transformation is not made each time (does that exist?). But how will the adapter get notified of a page change and flush the cache? Should I program an advanced Java-based adapter? (Storing in the cache and having some kind of job that scans for content changes every day?)
2- Doing it on the mobile side, but I don't know how to get notified of changes!
Is your only problem with Worklight's Direct Update that the user is notified and required to explicitly approve the transfer?
In that case, why not use the option of Silent Direct Update?
The property you're looking for is updateSilently, set to true in initOptions.js.
For this to work it is obviously also required that connectOnStartup be set to true.
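In initOptions.js that would look like the following; only the two options mentioned above are shown, and the rest of the generated file stays as-is:
```javascript
// initOptions.js -- relevant settings for silent Direct Update.
var wlInitOptions = {
  // Required so the app connects to the server and checks for updates.
  connectOnStartup: true,
  // Apply Direct Update without notifying or prompting the user.
  updateSilently: true
  // ...keep the rest of the generated options...
};
```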
Alternatively, perhaps what is doable is to use an adapter to fetch the HTML (or whatever it is) and save it to the device's local storage, then have the app display that content. This way you do not alter the app's web resources and do not trigger Direct Update.
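A rough sketch of that approach; the adapter name, procedure name, and response field are assumptions, not part of the original answer:
```javascript
// Fetch a page through a (hypothetical) adapter and cache it locally
// so it can still be shown when the device is offline.
WL.Client.invokeProcedure({
  adapter: 'StaticContentAdapter',   // illustrative adapter name
  procedure: 'getPage',
  parameters: ['terms-of-use']
}, {
  onSuccess: function (response) {
    // Persist the fetched HTML so it survives offline restarts.
    localStorage.setItem('page:terms-of-use', response.invocationResult.html);
    document.getElementById('content').innerHTML =
      localStorage.getItem('page:terms-of-use');
  },
  onFailure: function () {
    // Offline: fall back to the last cached copy, if any.
    document.getElementById('content').innerHTML =
      localStorage.getItem('page:terms-of-use') || 'Content unavailable offline.';
  }
});
```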

What REST API definition to use when moving items between collections

I am building a REST/JSON-based service which has in its API a couple of collections that contain items. All these items are of the same type.
As an example: the service closely resembles a TODO list, with collections for items that still need to be done, are in progress, and are finished.
The API would be something like
/todo/new
/todo/inprogress
/todo/finished
So how would one define an instruction to move an item from /todo/new to /todo/inprogress?
Basically, both collections are equally responsible for performing the move. Should one of them be responsible, or should I make another endpoint, say /todo/item, which would receive the move instruction?
Ideally you would use the PATCH method to modify a single item.
PATCH /todos/:id?status=finished
However PATCH is infrequently used and server/client support isn't always present. You may wish to use PUT instead.
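Illustrative requests for the move, following this answer: the PATCH form mirrors the example above, and the PUT fallback sends the item's full representation with the new status. The item id and field names are assumptions:
```javascript
// PATCH: change only the status, moving the item between collections.
await fetch('/todos/42?status=inprogress', { method: 'PATCH' });

// PUT fallback when PATCH support is missing: replace the whole item.
await fetch('/todos/42', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ title: 'Write docs', status: 'inprogress' }),
});
```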