Sencha Touch 2 - "quietly" delete and update records in localstorage

I have a Sencha Touch 2 web app that uses a localstorage data source to store a bunch of records.
I am able to perform all the usual CRUD operations fine, but I want to sync data using a web service. Periodically, the Sencha app will poll the web service for data changes and then make the necessary changes to the localstorage data source of my Sencha app.
My approach has been to use the following code block to run my sync process every 60 seconds:
var timerID = setInterval(function() {
    MyApp.app.BackgroundProcessingMain();
}, 60000);
Inside "BackgroundProcessingMain()", I have various method calls to sync the various datastores (5)..
I call the webservice and get the data I require back, and then my approach has been to loop through the returned data, filter my store to the id of the current item of the returned data and then either delete it, or update it as necessary.
This works fine.. BUT, if this background process kicks off and I'm viewing a bound list control, my list which is using a filtered version of my datasource, suddenly drops down to only showing a single item, usually the last one in the returned data that needs to be synchronised since it was the last one that my update process filtered the store to operate on.
I thought I could use store.findById, get the record reference and update/delete that way, but if the particular ID is already being filtered out due to the view my bound list requires, the record isn't found in the store and therefore doesn't get updated..
What I'd like to be able to do is get a temporary copy of the store, unfiltered, be able to modify it, and then when my app then queries the localstorage next time a form is shown, it will just get the new updated data..
That is basically what I'm referring to as "quietly" in the title..
Does anyone have a suggestion as to what process I could take to get this update done..??
If you have example code, that would be awesome, but pseudo-code is fine..
Thanks

You can use suspendEvents() and resumeEvents() to temporarily prevent your store from firing events. You can then clear your filters, apply your updates (using store.findById()), and reapply your filters without your list changing.
var store = Ext.getStore('myStore');
store.suspendEvents();     // stop the store from notifying the bound list
store.clearFilter();       // work against the full, unfiltered data set
doThings(...);             // your updates/deletes, e.g. via store.findById()
store.filter(myFilters);   // restore the filters the list was using
store.resumeEvents();
If you pass true into store.resumeEvents(), the buffered events will be discarded.
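For illustration, a minimal sketch of what a sync pass inside BackgroundProcessingMain() could look like with this approach; the function name and the shape of the changes payload ({ id, action, data }) are assumptions, not Sencha APIs:
function syncStoreQuietly(storeId, changes, myFilters) {
    var store = Ext.getStore(storeId);
    store.suspendEvents();
    store.clearFilter();
    Ext.each(changes, function(change) {
        var record = store.findById(change.id); // visible now, since no filters apply
        if (!record) {
            return; // nothing local to update
        }
        if (change.action === 'delete') {
            store.remove(record);
        } else {
            record.set(change.data);
        }
    });
    store.sync();              // persist the changes to the localstorage proxy
    store.filter(myFilters);   // reapply the view's filters
    store.resumeEvents(true);  // discard the buffered events so the list doesn't jump
}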

Related

How to queue requests in React Native without Redux?

Let's say I have a notes app. I want to let the user make changes while offline, save the changes optimistically in a MobX store, and add a request to save the changes (on the server) to a queue.
Then, when the internet connection is re-established, I want to run the requests in the queue one by one so the data in the app syncs with the data on the server.
Any suggestions would help.
I tried using react-native-job-queue, but it doesn't seem to work.
I also considered react-native-queue, but the library seems to be abandoned.
You could create a separate store (or an array in AsyncStorage) for pending operations, and add the operations to an array there when the network is disconnected. Tell your existing stores to look there for data, so you can render it optimistically. Then, when you detect a connection, run the updates in array order, and clear the array when done.
You could also use your existing stores, and add something like pending: true to values that haven't posted to your backend. However, you'll have less control over the order of operations, which sounds like it is important.
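A rough sketch of the pending-queue idea, assuming the @react-native-async-storage/async-storage and @react-native-community/netinfo packages; the key name and operation shape are illustrative:
import AsyncStorage from '@react-native-async-storage/async-storage';
import NetInfo from '@react-native-community/netinfo';

const QUEUE_KEY = 'pendingOperations';

// Record an operation while offline, e.g. { url, method, body }.
async function enqueue(operation) {
  const queue = JSON.parse((await AsyncStorage.getItem(QUEUE_KEY)) || '[]');
  queue.push(operation);
  await AsyncStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
}

// Replay the queued operations in order once we're back online.
async function flushQueue() {
  const queue = JSON.parse((await AsyncStorage.getItem(QUEUE_KEY)) || '[]');
  while (queue.length > 0) {
    const op = queue[0];
    await fetch(op.url, { method: op.method, body: JSON.stringify(op.body) });
    queue.shift(); // drop the op only after the request succeeded
    await AsyncStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
  }
}

NetInfo.addEventListener(state => {
  if (state.isConnected) {
    flushQueue().catch(() => { /* leave the rest queued for the next retry */ });
  }
});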
As it turns out, I was in the wrong. The react-native-job-queue library does work; I had just made the mistake of trying to pass a function reference (an API call) to the Worker, instead of passing an object containing the request URL and method and then implementing the Worker to make the API call based on those parameters.
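In other words, something along these lines - a hedged sketch based on react-native-job-queue's worker/payload pattern, with the worker name and endpoint made up:
import queue, { Worker } from 'react-native-job-queue';

// The worker rebuilds the API call from a plain, serializable payload...
queue.addWorker(new Worker('apiCall', async (payload) => {
  await fetch(payload.url, { method: payload.method, body: payload.body });
}));

// ...so jobs are enqueued as data, not as function references.
queue.addJob('apiCall', { url: 'https://example.com/notes/1', method: 'PUT', body: '{"text":"hi"}' });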

jQuery DataTables: How can I explicitly set the table instance name / table ID to use with state saving?

Background:
I'm using DataTables in conjunction with a JS library called "Turbolinks", which basically turns your application into a Single Page Application (SPA) without all the overhead of a true client-side framework. It is extremely useful for Ruby on Rails application performance.
It introduces a couple of headaches, though - one is compatibility with DataTables. I've got it working pretty well by basically destroying any DataTable on a Turbolinks navigation and then re-initializing it on the Turbolinks page load again. This method works well and seems to be the generally accepted best practice for getting DataTables to work with Turbolinks.
Question:
One of the last features / finishing touches I'm trying to add to some of my applications is DataTable state saving. The issue I'm facing is that every time a table is destroyed/re-initialized on a page navigation, the... I'm actually not quite sure what to call it, but from inspecting the settings object in the stateSaveCallback, it looks like it's the sInstance and/or the sTableId:
DataTables_Table_0
Then the localStorage key gets set as:
DataTables_DataTables_Table_0_/current_path: "{data: data}"
where current_path is whatever path/page you're on.
Then, when it gets re-initialized upon returning to the page, it gets set as DataTables_Table_1, and so on and so forth - so the state never gets correctly loaded.
Is there a way to override that ID (or some way to set the name of it in the stateSaveCallback / stateLoadCallback) so that it doesn't increment the trailing '0', '1', etc.? That way, when the table is re-initialized, it would pull the saved state from just DataTables_Table/current_path.
The answer is to simply give the table an ID! Then DataTables won't assign it its own ID with the incrementing number, and the stateSave option just works.
Also, the destroy/re-init actually causes the server to be hit twice in the case of an AJAX table.
The better way is to disable the Turbolinks cache for any index pages with DataTables; otherwise you'll end up making two requests to the server when only one is needed.
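Putting both pieces together, a hedged sketch; the table id and selector are illustrative:
// Initialize on each Turbolinks visit; with a stable id, the saved state key
// stays constant instead of incrementing as DataTables_Table_0, _1, ...
$(document).on('turbolinks:load', function () {
  $('#users-table').DataTable({
    stateSave: true
  });
});

// Destroy the instance before Turbolinks caches the page so the next
// initialization starts clean (and an AJAX table isn't loaded twice).
$(document).on('turbolinks:before-cache', function () {
  $('#users-table').DataTable().destroy();
});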

IBM Worklight - JSONStore logic to refresh data from the server and be able to work offline

Currently the JSONStore API provides a load() method whose documentation says: "This function always stores whatever it gets back from the adapter. If the data exists, it is duplicated in the collection." This means that if you want to avoid duplicates by calling load() on an already populated collection, you need to empty or drop the collection first. But if you want to be able to keep the elements you already have in the collection in case connectivity is lost and your application goes into offline mode, you also need to keep track of those existing elements.
Since the API doesn't provide an "overwrite" option that would replace the existing elements when the call to the adapter succeeds, I'm wondering what kind of logic should be put in place to manage both offline availability of data and the ability to refresh at any time? Managing all the failure cases by nesting the JS code is not obvious, even with promises...
Thanks for your advice!
One approach to achieve this (a sketch follows the links below):
1. Use enhance to create your own load method (e.g. loadAndOverwrite). You should have access to all the variables kept inside a JSONStore instance (collection name, adapter name, adapter load procedure name, etc.); you will probably use those variables in the invokeProcedure step below.
2. Call push to make sure there are no local changes.
3. Call invokeProcedure to get the data; all the variables you need should be provided in the context of enhance.
4. Find whether the document already exists, and then remove it. Use {push: false} so JSONStore won't track that change.
5. Use add to add the new/updated document, again with {push: false}. Alternatively, if the document exists, you can use replace to update it.
Alternatively, you can use removeCollection and call load again to refresh the data.
There's an example that shows how to use all those API calls here.
Regarding promises, read this from InfoCenter and this from HTML5Rocks. Google can provide more information.
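For illustration only, a hedged sketch of steps 1-5 above, assuming a 'people' collection keyed by an id field and a 'PeopleAdapter'/'getPeople' adapter procedure; all names are made up, and exact signatures and option names vary between Worklight versions:
WL.JSONStore.get('people').enhance('loadAndOverwrite', function () {
  var accessor = this;

  return accessor.push() // step 2: flush any local changes first
    .then(function () {
      // step 3: fetch fresh data from the adapter
      return WL.Client.invokeProcedure({
        adapter: 'PeopleAdapter',
        procedure: 'getPeople'
      });
    })
    .then(function (response) {
      var items = response.invocationResult.items || [];
      // steps 4-5: overwrite each document without tracking the change,
      // so JSONStore won't try to push it back to the adapter later
      // (sequencing of these async calls is simplified for brevity)
      items.forEach(function (item) {
        accessor.remove({ id: item.id }, { push: false });
        accessor.add(item, { push: false });
      });
    });
});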

Ncqrs recreate the complete ReadModel

Using Ncqrs, is there a way to replay every event that ever happened (across all aggregate types) and feed these through my denormalizers in order to recreate the whole read model from scratch?
Edit:
I thought it would be good to provide a more specific use case. I'm building this inside an ASP.NET MVC application and using Entity Framework (Code First) to work with the read models. To speed up development (and because I'm lazy), I want to use a database initializer that recreates the database schemas whenever a read model changes, and then use the initializer's Seed method to repopulate them.
There is unfortunately nothing built in to do this for you (though I haven't updated the version of Ncqrs I use in quite a while, so perhaps that's changed). It is also somewhat non-trivial, since it depends on exactly what you want to do.
The way I would do it (up to this point I have not had the need) would be to (a sketch follows this list):
1. Call the event store to get all relevant events. Depending on what you are doing, this could be all events, just the events for one aggregate root, or a subset of events for one or more aggregate roots.
2. Re-create the read model in memory from scratch (to avoid slow and unnecessary writes).
3. Store the re-created read model in place of the existing one.
4. Call the event store one more time to get any events that may have been missed.
5. Repeat until no new events are returned.
One thing to note: if you are recreating the entire read-model database from scratch, I would take the service offline temporarily or queue up new events until you finish.
Again, there are different ways you could approach this problem; your architecture and scenarios will probably dictate how best to do it.
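A hedged sketch of that catch-up loop in C#, reusing the GetEventsAfter call and the GetFirstEventIdFromEventStore helper from the next answer; the batch size and checkpoint handling are assumptions, and signatures may differ across Ncqrs versions:
// Catch-up loop: publish events in batches until the store returns nothing new.
var lastEventId = GetFirstEventIdFromEventStore(); // starting checkpoint (assumed helper)
const int batchSize = 1024;                        // illustrative batch size
while (true)
{
    var events = myEventStore.GetEventsAfter(lastEventId, batchSize).ToList();
    if (events.Count == 0) break;    // no new events: the read model is caught up
    myEventBus.Publish(events);      // denormalizers rebuild the read model
    lastEventId = events.Last().EventIdentifier;
}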
We use an MsSqlServerEventStore; to replay all the events, I implemented the following code:
var myEventBus = NcqrsEnvironment.Get<IEventBus>();
if (myEventBus == null) throw new Exception("EventBus is not found in NcqrsEnvironment");

var myEventStore = NcqrsEnvironment.Get<IEventStore>() as MsSqlServerEventStore;
if (myEventStore == null) throw new Exception("MsSqlServerEventStore is not found in NcqrsEnvironment");

var myEvents = myEventStore.GetEventsAfter(GetFirstEventIdFromEventStore(), int.MaxValue);
myEventBus.Publish(myEvents);
This pushes all the events onto the event bus, and the denormalizers process them all. The function GetFirstEventIdFromEventStore just queries the event store and returns the first Id (the row where SequentialId = 1).
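GetFirstEventIdFromEventStore isn't shown above; here is a minimal sketch, assuming the default Ncqrs [Events] table with Id (GUID) and SequentialId columns, and 'connectionString' pointing at the same database as the MsSqlServerEventStore:
using System;
using System.Data.SqlClient;

// Returns the Id of the very first event (the row where SequentialId = 1).
private static Guid GetFirstEventIdFromEventStore()
{
    using (var connection = new SqlConnection(connectionString)) // assumed field
    using (var command = new SqlCommand(
        "SELECT [Id] FROM [Events] WHERE [SequentialId] = 1", connection))
    {
        connection.Open();
        return (Guid)command.ExecuteScalar();
    }
}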
What I ended up doing is the following: at service startup, before any commands are processed, if the read model has changed I throw it away and recreate it from scratch by running all past events through my denormalizers. This is done in the database initializer's Seed method.
This was a trivial task using the MS SQL event store, since there was a method for retrieving all events. However, I'm not sure about other event stores.

How do I start and stop application state in ASP.NET

What seems to be happening is that on a request, some values I have retrieved are stored in application state, but when I make changes to the values, the old values remain in application state for a while before finally going away.
I want a way to refresh the application state on each request.
Use Application.Lock and Application.UnLock to ensure multiple users cannot overwrite each other's changes, and to ensure users are reading the correct Application values.
See: http://msdn.microsoft.com/en-us/library/bf9xhdz4(VS.71).aspx
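A minimal sketch of that locking pattern in an ASP.NET code-behind (the "ItemCount" key is illustrative; note the capital L in HttpApplicationState.UnLock):
// Lock application state while mutating shared values so concurrent
// requests can't interleave their read-modify-write cycles.
Application.Lock();
try
{
    Application["ItemCount"] = (int)(Application["ItemCount"] ?? 0) + 1;
}
finally
{
    Application.UnLock();
}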