What seems to be happening is that on a request some values I retrieve are stored in application state, but when I make changes to those values, the old values remain in application state for a while before finally disappearing.
Is there a way to refresh the application state on each request?
Use Application.Lock and Application.Unlock to ensure multiple users cannot overwrite each other's changes, and also to ensure users are reading the correct Application values.
See: http://msdn.microsoft.com/en-us/library/bf9xhdz4(VS.71).aspx
Let's say I have a notes app. I want to enable the user to make changes while offline, save the changes optimistically in a MobX store, and add a request to save the changes (on the server) to a queue.
Then when the internet connection is re-established I want to run the requests in the queue one by one so the data in the app syncs with data on the server.
Any suggestions would help.
I tried using react-native-job-queue but it doesn't seem to work.
I also considered react-native-queue but the library seems to be abandoned.
You could create a separate store (or an array in AsyncStorage) for pending operations, and add the operations to an array there when the network is disconnected. Tell your existing stores to look there for data, so you can render it optimistically. Then, when you detect a connection, run the updates in array order, and clear the array when done.
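A minimal sketch of that pending-operations store, assuming MobX and @react-native-community/netinfo (the store shape, field names, and request format are illustrative, not from the question):
import { makeAutoObservable, runInAction } from "mobx";
import NetInfo from "@react-native-community/netinfo";

class PendingOperationsStore {
    // Each queued operation is a plain, serializable description of a request,
    // e.g. { url: "/notes/1", method: "PUT", body: {...} }
    operations = [];

    constructor() {
        makeAutoObservable(this);
        // Flush the queue whenever the connection comes back.
        NetInfo.addEventListener((state) => {
            if (state.isConnected) this.flush();
        });
    }

    enqueue(operation) {
        this.operations.push(operation);
    }

    async flush() {
        // Run the queued requests one by one, in insertion order.
        while (this.operations.length > 0) {
            const op = this.operations[0];
            try {
                await fetch(op.url, {
                    method: op.method,
                    headers: { "Content-Type": "application/json" },
                    body: op.body ? JSON.stringify(op.body) : undefined,
                });
                runInAction(() => this.operations.shift()); // drop it only once it succeeded
            } catch (e) {
                break; // still offline or the server failed; retry on the next reconnect
            }
        }
    }
}

export const pendingOperations = new PendingOperationsStore();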
You could also use your existing stores, and add something like pending: true to values that haven't posted to your backend. However, you'll have less control over the order of operations, which sounds like it is important.
As it turns out, I was wrong. The react-native-job-queue library does work; I just made the mistake of trying to pass a function reference (the API call) to the Worker instead of passing an object containing the request URL and method, and then implementing the Worker to make the API call based on those parameters.
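For anyone hitting the same wall, here is a rough sketch of that idea, assuming react-native-job-queue's default queue and Worker exports; the payload field names (url, method, body) are my own choice:
import queue, { Worker } from "react-native-job-queue";

// The payload has to be serializable, so describe the request with data
// rather than passing a function reference.
queue.addWorker(
    new Worker("apiRequest", async (payload) => {
        const response = await fetch(payload.url, {
            method: payload.method,
            headers: { "Content-Type": "application/json" },
            body: payload.body ? JSON.stringify(payload.body) : undefined,
        });
        if (!response.ok) {
            throw new Error("Request failed: " + response.status); // lets the queue treat the job as failed
        }
    })
);

// Enqueue a job instead of calling the API directly:
queue.addJob("apiRequest", { url: "https://example.com/note/1", method: "PUT", body: { text: "..." } });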
VueJS + Quasar + Pinia + Axios
Single page application
I have an entity called user with 4 endpoints associated:
GET /users
POST /user
PUT /user/{id}
DELETE /user/{id}
When I load my page I call the GET and save the returned slice of users inside a store (userStore).
POST and PUT return the created/updated user in the body of the response.
Is it good practice to manually update the slice of users in the store after calling one of these endpoints, or is it better to call the GET immediately after?
If you own the API, or can be sure about what the PUT/POST methods return, you can manipulate the local state directly. Those endpoints should return the same shape as the items inside the GET response; otherwise, you might end up with incomplete or wrong data in the local state.
By mutating the state locally without making an extra GET request, the user can immediately see the change in the browser. It will also be kinder to your server and the user's data usage.
However, if creating the resource (a user, in this case) is a common operation performed by many users, then calling the GET endpoint to refresh the slice would be better, since it has a better chance of including records that were just created by other users. But in that case, listening to real-time events (e.g. over WebSockets) would be even better, to ensure everyone gets accurate, up-to-date data in real time.
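A minimal sketch of the local-state approach with Pinia and Axios, following the endpoints from the question (the store name, action names, and the assumption that users have an id field are mine):
import { defineStore } from "pinia";
import axios from "axios";

export const useUserStore = defineStore("user", {
    state: () => ({ users: [] }),
    actions: {
        async fetchUsers() {
            const { data } = await axios.get("/users");
            this.users = data;
        },
        async createUser(payload) {
            // POST returns the created user, so append it to the slice directly.
            const { data: created } = await axios.post("/user", payload);
            this.users.push(created);
        },
        async updateUser(id, payload) {
            // PUT returns the updated user, so replace it in place.
            const { data: updated } = await axios.put(`/user/${id}`, payload);
            const index = this.users.findIndex((u) => u.id === id);
            if (index !== -1) this.users[index] = updated;
        },
        async deleteUser(id) {
            await axios.delete(`/user/${id}`);
            this.users = this.users.filter((u) => u.id !== id);
        },
    },
});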
Background:
I'm using DataTables in conjunction with a JS library called "Turbolinks", which basically turns your application into a Single Page Application (SPA) without all the overhead of using a true client-side framework. It is extremely useful for Ruby on Rails application performance.
There are a couple of headaches it introduces, though - one is compatibility with DataTables. I've got it working pretty well by destroying any DataTable on a Turbolinks navigation and then re-initializing it again on the Turbolinks page load. This method works well and seems to be the generally accepted best practice for getting DataTables to work with Turbolinks.
Question:
One of the last features / finishing touches I'm trying to add to some of my applications is DataTable state saving. The issue I'm facing is that every time a table is destroyed/re-initialized on a page navigation, the... I'm actually not quite sure what to call it, but from inspecting the settings object in the stateSaveCallback, it looks like it's the sInstance and/or the sTableId:
DataTables_Table_0
Then the localStorage key gets set as:
DataTables_DataTables_Table_0_/current_path: "{data: data}"
where current_path is whatever path/page you're on.
Then when it gets re-initialized upon returning to the page, it gets set as DataTables_Table_1, and so on and so forth - so the state never gets correctly loaded.
Is there a way to override that ID (or some way to set the name of it in the stateSaveCallback / stateLoadCallback) so that it doesn't keep incrementing the trailing '0', '1', etc.? That way, when the table is re-initialized, it will pull the saved state from just DataTables_Table/current_path.
The answer is to simply give the table an ID! Then DataTables won't assign it its own ID with the incrementing number, and the stateSave option just works.
Also, the destroy/re-init actually causes the server to get hit twice in the case of an AJAX table.
The better way to do it is to disable the Turbolinks cache for any index pages with DataTables. Otherwise, you'll end up making two requests to the server when only one is needed.
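A short sketch of the pieces above, assuming a fixed id on the table element and the standard Turbolinks events (the selector and id are illustrative); per-page caching can be disabled with a <meta name="turbolinks-cache-control" content="no-cache"> tag in that page's head:
// Markup: <table id="users-table">...</table> so DataTables stops generating
// incrementing DataTables_Table_0, _1, ... keys for the saved state.

document.addEventListener("turbolinks:load", function () {
    var el = document.getElementById("users-table");
    if (el && !$.fn.DataTable.isDataTable(el)) {
        $(el).DataTable({ stateSave: true }); // state is keyed by the table id and path
    }
});

document.addEventListener("turbolinks:before-cache", function () {
    var el = document.getElementById("users-table");
    if (el && $.fn.DataTable.isDataTable(el)) {
        $(el).DataTable().destroy(); // tear down before Turbolinks snapshots the page
    }
});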
I have a Sencha Touch 2 web app that is using a localstorage datasource to store a bunch of records.
I am able to perform all the usual CRUD operations fine, but I want to sync data using a webservice. Periodically, the Sencha app will poll the webservice for data changes and then make the necessary changes to the localstorage datasource of my Sencha app.
My approach has been to use the following code block to run my sync process every 60 seconds:
// Run the sync process every 60 seconds
var timerID = setInterval(function () {
    MyApp.app.BackgroundProcessingMain();
}, 60000);
Inside "BackgroundProcessingMain()", I have various method calls to sync the various datastores (5)..
I call the webservice and get the data I require back, and then my approach has been to loop through the returned data, filter my store to the id of the current item of the returned data and then either delete it, or update it as necessary.
This works fine.. BUT, if this background process kicks off and I'm viewing a bound list control, my list which is using a filtered version of my datasource, suddenly drops down to only showing a single item, usually the last one in the returned data that needs to be synchronised since it was the last one that my update process filtered the store to operate on.
I thought I could use store.findById, get the record reference and update/delete that way, but if the particular ID is already being filtered out due to the view my bound list requires, the record isn't found in the store and therefore doesn't get updated..
What I'd like to be able to do is get a temporary copy of the store, unfiltered, be able to modify it, and then when my app then queries the localstorage next time a form is shown, it will just get the new updated data..
That is basically what I'm referring to as "quietly" in the title..
Does anyone have a suggestion as to what process I could take to get this update done..??
If you have example code, that would be awesome, but pseudo-code is fine..
Thanks
You can use suspendEvents() and resumeEvents() to temporarily prevent your store from firing events. You can then clear your filters, apply your updates (using store.findById()), and reapply your filters without your list changing.
var store = Ext.getStore('myStore');
store.suspendEvents();   // keep the bound list from reacting to what follows
store.clearFilter();     // expose every record, including ones the view filters out
doThings(/* ... */);     // apply your updates/deletes here (e.g. via store.findById)
store.filter(myFilters); // restore the filters the list expects
store.resumeEvents();
If you pass true into store.resumeEvents(), the buffered events will be discarded.
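Putting that together with the per-record sync loop from the question, one pass over a store might look roughly like this (getChangesFromWebservice is not shown, and the shape of each change object, with id, deleted and data fields, is only an assumption for illustration):
function syncStoreQuietly(storeName, changes, filters) {
    var store = Ext.getStore(storeName);

    store.suspendEvents();   // keep bound lists quiet while we work
    store.clearFilter();     // make every record reachable by id

    Ext.Array.each(changes, function (change) {
        var record = store.findById(change.id);

        if (change.deleted) {
            if (record) { store.remove(record); }
        } else if (record) {
            record.set(change.data);   // update the existing record
        } else {
            store.add(change.data);    // record created on the server since last sync
        }
    });

    store.sync();             // persist the changes to the localstorage proxy
    store.filter(filters);    // restore whatever filters the current view needs
    store.resumeEvents(true); // discard the buffered events
}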
I'm facing a very peculiar issue with sessions being reset without any apparent reason. This happens randomly, once every few tens or hundreds of requests.
My web application is running on Windows Server 2003, IIS 6.0, .NET 1.1. The application has a webpage which populates a bunch of Session variables during its Page_Load event. The data is stored out of process in the ASP.NET State Service.
After the Page_Load event exits and the page is displayed, the user clicks a button, which retrieves the session data and does some work with it.
This Button_Click is where the issue occurs. On some occasions, the session variable is null, raising a NullReferenceException.
Our traces show that the session ID during the Button_Click event belongs to a brand new session, with a different ID than the session of the Page_Load event. Thus, the application fails to retrieve the data that was stored during Page_Load. Our event log shows that the session variables for the problematic requests are indeed populated during Page_Load, and the response is sent without issue, which normally would persist the data.
We have ruled out session timeouts: a timeout would still result in the same NullReferenceException, but the same session ID from Page_Load would be used to retrieve the now-expired data. In our case, the session ID is different from the original.
We are not messing with the ASP.NET cookie in any way, we do not use Session.Abandon, nor do we inadvertently remove items from the session.
My question is: what server-side factors could cause the cardholder's session to be reset like that? The Application event log does not contain any useful info.
Also, is there anything client-side (e.g. cookie tampering) that could force IIS to assign a new session upon subsequent postbacks of the page?
Many thanks in advance.
I'm not sure if this applies to your situation, but it might help others.
I was designing a website and found this out the hard way, meaning I had to redesign a portion of the site I was working on: when you create or delete a folder (from an ASP.NET page) within the active IIS folder, it resets all sessions for the website. Every user currently on the site gets their session instantly deleted.
If you have control of the server, store files outside the IIS folder and stream them in as needed. If you don't have control of the server, you will have to remove any work with folders.