Simple question :
if I add documents with push:false to a collection linked to an adapter and then delete them, will they be marked dirty for deleting?
You can use the getAllDirty (Worklight v6.2) or getPushRequired (Worklight versions before v6.2) APIs after the operations (add, remove) to see their state. If the change is not tracked (an add without tracking the change, then a remove) you will not get a document back. In that case the intent is that instead of telling the backend "add this document, then remove it", the API simply doesn't tell the server about the document at all. That is a bit more efficient than sending a change over the network that would just get removed.
Otherwise, if the change is tracked, as is the case for add(doc), you will get back something like this as one of the elements of the array returned:
{
  _id: 1,
  json: {id: 1, ssn: '111-22-3333', name: 'Carlos'},
  _operation: 'add',
  _dirty: '1395774961,12902'
}
Here _operation is the last operation performed. When using push (deprecated in Worklight v6.2), the framework sends that document to the adapter procedure indicated by the _operation field (e.g. add => add procedure). The documentation here covers how to work with external data in Worklight v6.2. The API documentation is here and here. There are also examples of the various APIs here. Feature requests go here.
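For example, here is a minimal sketch of checking the dirty state after an add (the collection name 'people' and the document fields are illustrative, and the collection is assumed to be initialized already):

var people = WL.JSONStore.get('people');

people.add({id: 1, ssn: '111-22-3333', name: 'Carlos'})   // tracked change
    .then(function () {
        return people.getAllDirty();                       // use getPushRequired() before v6.2
    })
    .then(function (dirtyDocs) {
        // dirtyDocs contains the document above with _operation: 'add';
        // a document added with {push: false} would not appear here.
        WL.Logger.debug(JSON.stringify(dirtyDocs));
    })
    .fail(function (err) {
        WL.Logger.debug('Error: ' + err.toString());
    });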
I'm currently creating an application (let's say a notes app, for instance - a web application plus a mobile app). I wanted to use a RESTful API here, so I read a lot about this topic and found there's a lot of ambiguity out there.
So let's start at the beginning. And the beginning in REST is that we have to first request the / (root), which then returns the list of paths we can retrieve further, etc. Isn't this the first place where REST is completely wasteful? Instead of rigid paths, we have to obtain them each time we want to do something. Nah.
The second problem I encountered was bulk operations. How do you implement them in REST? Let's say the user didn't have access to the internet for a while, made a few changes, and they all have to be made on the server as well. So, let's say the user modified 50 notes, added 30 and removed 20. We have to make 100 separate requests now. A way to make bulk operations would be very helpful - I saw this Stack Overflow topic: Patterns for handling batch operations in REST web services? but I didn't actually find anything interesting there. Everything is okay as long as you want to do one type of operation on one type of resource.
Last, but not least - retrieving a whole collection of items. When writing the example app I mentioned - a notes app - you probably want to retrieve the whole collection of items (notes, tags, available note colors, etc.) at once. With REST, you have to first retrieve the list of item links, then fetch the items one by one. 100 notes = over 100 requests.
Since I'm currently learning all this REST stuff, I may be completely wrong about what I said here. Anyway, the more I read about it, the more gruesome it looks to me. So my question finally is: where am I wrong and/or how do I solve the problems I mentioned?
It's all about resources. Resources are obtained through a uniform interface (usually via URIs and HTTP methods).
You do not have to navigate through the root every time. A good interface keeps its URIs alive forever (if they go stale, it should return HTTP Moved or similar). The root offering pathways to navigate is part of HATEOAS, one of Roy Fielding's defined architectural elements of REST.
Bulk operations are something the architectural style is not strong on. Basically, nothing stops you from POSTing a payload containing multiple items to a specific resource. Again, it's all up to what resource you are using/offering and, ultimately, how your server implementation handles requests. For your case of 100 requests: I would probably stick with 100 requests. It is clean to write and the overhead is not that huge.
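A hypothetical sketch of such a bulk payload (the endpoint and field names are purely illustrative, not from any standard):

POST /api/notes/bulk
{
  "create": [ {"title": "new note", "text": "..."} ],
  "update": [ {"id": 12, "title": "edited title"} ],
  "delete": [17, 23]
}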
Retrieving a collection: it's about which resources the API decides to offer. GET bookCollection/ vs GET book/1, GET book/2 ... or even GET book/all. Or maybe GET book/?includeDetails=true to return all books with the same detail as GETting them one by one.
I think that this link could give you interesting hints for designing a RESTful service: https://templth.wordpress.com/2014/12/15/designing-a-web-api/.
That said, here are my answers to your questions:
There is no need to implement a resource for the root path. With this, I think you are referring to HATEOAS. In addition, links within the payload are not required either. Instead, you can use available formats like Swagger or RAML to document your RESTful service. This shows your end users what is available.
Regarding bulk operations, you are free to use the POST or PATCH methods to implement this on the list resource. I think these two answers could be helpful to you:
REST API - Bulk Create or Update in single request
How to Update a REST Resource Collection
In fact, you are free to decide what content your GET methods return. This means the root element managed by the resources (list resource and element resource) can contain its own hints and also the hints of its dependencies. So you can have several levels in the returned content. For example, you can have something like this for an element Book that references a list of Author:
GET /books
{
  "title": "the title",
  (...)
  "authors": [
    {
      "firstName": "first name",
      "lastName": "last name"
    }, {
      (...)
    },
    (...)
  ]
}
Note that you can leverage query parameters to ask the RESTful service for the expected level of detail. For example, if you want only book hints, or book hints with their corresponding authors:
GET /books
{
  "title": "the title",
  (...)
}

GET /books?include=authors
{
  "title": "the title",
  (...)
  "authors": [
    {
      "firstName": "first name",
      "lastName": "last name"
    }, {
      (...)
    },
    (...)
  ]
}
Note that you can distinguish two concepts here:
The inner data (complex types, inner objects): data that are specific to the element and are embedded in the element itself.
The referenced data: data that reference and correspond to other elements. In this case, you can have either a link or the referenced data itself embedded in the root element.
The OData specification addresses this issue with its "navigation links" feature and its expand query parameter. See the following links for more details:
http://www.odata.org/getting-started/basic-tutorial/ - Section "System Query Option $expand"
https://msdn.microsoft.com/en-us/library/ff478141.aspx - Section "Query and navigation"
Hope it helps you,
Thierry
Let's say I push new code to the Worklight server for the purposes of a Direct Update. Can I allow users to still use the application for a set amount of time before they actually have to accept the update, or is the application essentially unavailable to them until they download the new code?
If you are developing your application using Worklight 6.2, then you as a developer can take over the entire Direct Update flow and can essentially decide how to handle a received update from the server.
Note that by taking full control, you own the flow end-to-end; the default Worklight framework handling will not be available and the full responsibility is on the developer to ensure the validity of every step.
You can read more about customizing Direct Update, here:
Initial reading, starting slide #14: http://public.dhe.ibm.com/software/mobile-solutions/worklight/docs/v620/05_06_Using_Direct_Update_to_quickly_update_your_application.pdf
In depth reading: http://www-01.ibm.com/support/knowledgecenter/SSZH4A_6.2.0/com.ibm.worklight.dev.doc/dev/c_customizing_direct_update_ui_android_wp8_ios.html
In your scenario, I think you could probably go a less extreme way and just make some tweaks before letting the Worklight framework handle the update from the server. That is, you could use the example provided in the training module (slide #18 from the PDF above), where you intercept the update:
wl_directUpdateChallengeHandler.handleDirectUpdate = function (directUpdateData, directUpdateContext) {
    // ... display a message or start a counter here
};
Then display a message and start a counter, and when the time is up, call directUpdateContext.start() to begin the update.
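A minimal sketch of that idea (the dialog text and the 5-minute delay are assumptions for illustration, not values from the Worklight documentation):

wl_directUpdateChallengeHandler.handleDirectUpdate = function (directUpdateData, directUpdateContext) {
    // Tell the user an update is pending, but let them keep working for now.
    WL.SimpleDialog.show(
        'Update available',
        'A new version will be installed in 5 minutes.',
        [{text: 'OK', handler: function () {}}]
    );

    // When the grace period ends, hand control back to the default Direct Update flow.
    setTimeout(function () {
        directUpdateContext.start();
    }, 5 * 60 * 1000);
};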
Currently the JSONStore API provides a load() method whose documentation says: "This function always stores whatever it gets back from the adapter. If the data exists, it is duplicated in the collection." This means that if you want to avoid duplicates by calling load() on an already populated collection, you need to empty or drop the collection first. But if you want to be able to keep the elements you already have in the collection, in case there is no more connectivity and your application goes into offline mode, you also need to keep track of these existing elements.
Since the API doesn't provide an "overwrite" option that would replace the existing elements when the call to the adapter succeeds, I'm wondering what kind of logic should be put in place in order to manage both offline availability of data and the ability to refresh at any time. It is not obvious how to handle all the failure cases when nesting the JS code with promises...
Thanks for your advice!
One approach to achieve this (a rough sketch follows these steps):
Use enhance to create your own load method (e.g. loadAndOverwrite). You should have access to all the variables kept inside a JSONStore instance (collection name, adapter name, adapter load procedure name, etc. -- you will probably use those variables in the invokeProcedure step below).
Call push to make sure there are no local changes.
Call invokeProcedure to get data, all the variables you need should be provided in the context of enhance.
Find if the document already exists and then remove it. Use {push: false} so JSONStore won't track that change.
Use add to add the new/updated document. Use {push: false} so JSONStore won't track that change.
Alternatively, if the document exists you can use replace to update it.
Alternatively, you can use removeCollection and call load again to refresh the data.
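A rough sketch of what that could look like, assuming a collection named 'people' backed by an adapter 'peopleAdapter' with a 'getPeople' procedure, documents matched by an 'id' search field, and jQuery-style deferreds available via $ (all of these names are illustrative, not from the documentation):

var people = WL.JSONStore.get('people');

people.enhance('loadAndOverwrite', function () {
    var accessor = this;               // inside enhance, 'this' is the collection accessor
    var deferred = $.Deferred();

    accessor.push()                    // 1. make sure there are no local changes

        .then(function () {            // 2. fetch fresh data from the backend
            return WL.Client.invokeProcedure({
                adapter: 'peopleAdapter',
                procedure: 'getPeople'
            });
        })

        .then(function (response) {    // 3. overwrite existing docs / add new ones, untracked
            var payload = response.invocationResult.results || [];
            var chain = $.when();

            payload.forEach(function (item) {
                chain = chain.then(function () {
                    return accessor.find({id: item.id}, {exact: true}).then(function (found) {
                        if (found.length > 0) {
                            found[0].json = item;
                            return accessor.replace(found[0], {push: false});
                        }
                        return accessor.add(item, {push: false});
                    });
                });
            });

            return chain;
        })

        .then(function () { deferred.resolve(); })
        .fail(function (err) { deferred.reject(err); });

    return deferred.promise();
});

// Usage: WL.JSONStore.get('people').loadAndOverwrite().then(...).fail(...);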
There's an example that shows how to use all those API calls here.
Regarding promises, read this from InfoCenter and this from HTML5Rocks. Google can provide more information.
Can I extend the server-side Java Code in Worklight?
For example, there is a class called JavaScriptIntegrationLibraryImplementation under com.worklight.integration.js. Inside this class, there is a method broadcastNotification and I would like to override this method. Is it possible to do so?
EDIT
The reason is that:
When I make the subscription on the client side with an option field (e.g. adding customType: A), I would like to retrieve the list of devices that have subscribed to this event source. Based on the option field in deviceSubscription, I would like to have some business logic to determine who to send the notification message to. For example, I will only send the message to the users with customType=A.
However, there is no API in Worklight that can retrieve a list of devices, which is what would let me retrieve the list first, then do the logic in JavaScript and call WL.Server.notifyDevice.
Therefore, I would like to check whether there is any method to retrieve the list of devices (through an API or an adapter that connects to the database) that have subscribed to an event source.
Thanks.
This part of Worklight is not extensible. You can try to override this method as you say, but do note that this is not supported and we cannot help in that case.
Edit
Now that it is clear what you're trying to achieve... what you are looking for is currently not available. I will open a feature request for it and it will get evaluated at some point (if you are a customer of IBM, I suggest getting in touch with your contact...).
My suggestion (somewhat hackish in form): you could perhaps use multiple event sources, where each event source represents an iOS version. On the client side, upon app initialization, you can retrieve the iOS version and use it to register to the correct event source (this would be very generic code to allow re-use). If a new iOS version is released (you will likely know of this in advance), you simply add this event source to the adapter code and re-deploy the adapter. Users of the new iOS version can still register for notifications, because you get the iOS version upon init and use that information to register to the correct event source...
To reiterate:
The adapter contains: ES_iOS5, ES_iOS6
The client:
fetches the iOS version and stores it in some variable;
registers to the event source whose name is ES_${iOSVersion};
if a new iOS version is released, simply create a new event source and re-deploy the adapter; the client is already equipped to handle this.
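A minimal client-side sketch of this approach (the alias 'myPushAlias', the adapter name 'PushAdapter', the 'ES_iOS' prefix and the use of Cordova's device.version for detecting the iOS version are all assumptions for illustration):

// Determine the event source name from the device's iOS version.
var iosMajorVersion = (window.device && device.version) ? device.version.split('.')[0] : 'unknown';
var eventSourceName = 'ES_iOS' + iosMajorVersion;

// Register a callback for the event source that matches this device's iOS version.
WL.Client.Push.registerEventSourceCallback(
    'myPushAlias',      // alias used when subscribing below
    'PushAdapter',      // adapter that declares the ES_iOSx event sources
    eventSourceName,
    function (props, payload) {
        WL.Logger.debug('Notification received: ' + JSON.stringify(payload));
    }
);

// Subscribe once the device is ready to receive push notifications.
WL.Client.Push.onReadyToSubscribe = function () {
    WL.Client.Push.subscribe('myPushAlias', {
        onSuccess: function () { WL.Logger.debug('Subscribed to ' + eventSourceName); },
        onFailure: function (err) { WL.Logger.debug('Subscription failed: ' + JSON.stringify(err)); }
    });
};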
@Red23jordon,
I had a similar case. I created a custom table where, at subscription time, I saved the user ID and event type. When a user unsubscribes, I also remove their details from the custom table.
To send a push to users subscribed to a particular event type, I look in the custom table to get the list of user IDs subscribed to that event type, then go to the notification user/device tables, fetch the corresponding devices, and send the push.
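A rough adapter-side sketch of that idea (the table name EVENT_SUBSCRIPTIONS, the event source id 'PushAdapter.PushEventSource', and the SQL statement are assumptions for illustration, and an SQL adapter connection is assumed):

// Prepared statement created once at adapter load time.
var selectUsersStatement = WL.Server.createSQLStatement(
    'SELECT USER_ID FROM EVENT_SUBSCRIPTIONS WHERE EVENT_TYPE = ?'
);

function notifySubscribersOfType(eventType, messageText) {
    // 1. Look up the user IDs subscribed to this event type in the custom table.
    var result = WL.Server.invokeSQLStatement({
        preparedStatement: selectUsersStatement,
        parameters: [eventType]
    });

    // 2. For each user, fetch their Worklight subscription and notify all of their devices.
    var rows = result.resultSet || [];
    for (var i = 0; i < rows.length; i++) {
        var userSubscription = WL.Server.getUserNotificationSubscription('PushAdapter.PushEventSource', rows[i].USER_ID);
        if (userSubscription) {
            var notification = WL.Server.createDefaultNotification(messageText, 1, {eventType: eventType});
            WL.Server.notifyAllDevices(userSubscription, notification);
        }
    }

    return {sent: rows.length};
}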
Hope it may help you.
Thanks.
Background
I'm building an API that allows clients to manipulate geospatial objects. These objects contain a location in the world (in latitude/longitude) and a good bit of metadata. The actual API is rather large, so I present a simplified version here.
Current API
Consider an API with two objects, features and attributes.
The feature endpoint is /api/feature. A feature looks like:
{
id: 5,
name: "My super cool feature",
geometry: {
type: "Point",
coordinates: [
-88.043355345726,
43.293055846667315
]
}
}
The attribute endpoint is /api/attribute. An attribute looks like:
{
id: 3,
feature_id: 5,
name: "attr-name",
value: "value"
}
You can interact with these objects by issuing HTTP requests to their endpoints using different HTTP methods, like you might expect:
GET /api/feature/5 reads the feature with id 5.
PUT /api/feature/5 updates the feature with id 5.
POST /api/feature creates a new feature.
DELETE /api/feature/5 deletes the feature with id 5.
Same goes for attributes.
Attributes are related to features by foreign key (commonly expressed as "features have many attributes").
The Problem
It would be useful to be able to make a copy of a feature and all its metadata (all the attributes that belong to it). The use case is more or less, "I just made this feature and gave it a bunch of attributes, now I want the same thing... but over there." So the only difference between the two features would be their geometries.
Solution #1: Make the client do it.
My first thought was to just have the client do it. Create a new feature with the same name at a new location, then iterate through all the attributes on the source feature, issuing POST requests to make copies of them on the new feature. This, however, suffers from a few problems. First, it isn't atomic. Should the client's Internet connection flake out during this process, you'd be left with an incomplete copy, which is lame. Second, it'd probably be slow, especially for features with many attributes. Anyway, this is a bad idea.
Solution #2: Add copy functionality to the API.
Doing the copy server-side, in a single API call, would be the better approach. This leads me to https://www.rfc-editor.org/rfc/rfc2518#section-8.8 and the COPY method. Being able to do a deep copy of a feature in a single COPY /api/feature/5 request seems ideal.
The Question
My issue here is that the semantics of COPY don't quite fit the use I envision for it. Issuing a COPY request on a resource executes a copy of that resource to the destination specified in the Destination header. According to the RFC, Destination must be present, and it must be a URI specifying where the copied resource will end up. In my case, the destination for the copied feature is a geometry, which is decidedly not a URI.
So, my questions are: Would stuffing JSON for the geometry into the Destination header of a COPY request be a perversion of the spec? Is COPY even the right thing to use here? If not, what alternatives are there? I just want to be sure I'm implementing this in the most HTTP-kosher way.
Well, you'll need a way to make the Destination a URI then (why is that a problem?). If you're using the Destination header field for something else, you're not using COPY per spec. (And, by the way, the current specification is RFC 4918.)
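For illustration, a spec-compliant COPY would look something like this (the URIs are made up; the geometry for the copy would then be supplied separately, e.g. with a follow-up PUT or PATCH on the new resource):

COPY /api/feature/5 HTTP/1.1
Host: api.example.com
Destination: https://api.example.com/api/feature/6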