We have a Blazor Server application in production, and the issue is that users often fill in forms while spending time on other browser tabs, leaving the Blazor Server tab open for hours. When they come back to the tab, the connection to the server is lost and they have to reload the page. On refresh they lose the data they entered in the application's inputs, which creates a bad user experience.
I looked into this question, but it seems that the solution involves reloading the page.
Is there any way I can automatically reconnect to the server without refreshing the page?
One solution for this would be to use the browser's storage, either session or local storage (I'm pretty sure session storage would be lost on disconnect, though I'm not 100% sure). Here is Microsoft's documentation on browser storage.
Whenever they fill something out on the form, you push it to the browser's storage, and if for whatever reason they lose the connection and reconnect, you check their browser for any saved data and pull it (see the sketch below).
Just remember that if you use local storage they can open multiple tabs and have access to the same data.
From what I've seen the usual recommendation is to use Blazored Local Storage.
You can also use cookies to accomplish the same effect.
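For illustration, a minimal sketch of that save/restore idea in plain browser TypeScript; the form id, storage key, and field handling are hypothetical, and in a Blazor Server app you would typically do the equivalent from C# via JS interop or a wrapper like Blazored LocalStorage:

```typescript
// Persist every input of a form to localStorage as the user types,
// and restore the saved values after a reconnect/reload.
const FORM_ID = "invoice-form";          // hypothetical form id
const STORAGE_KEY = `draft:${FORM_ID}`;  // one draft per form

function saveDraft(): void {
  const form = document.getElementById(FORM_ID) as HTMLFormElement | null;
  if (!form) return;
  const data: Record<string, string> = {};
  new FormData(form).forEach((value, key) => {
    data[key] = String(value);
  });
  localStorage.setItem(STORAGE_KEY, JSON.stringify(data));
}

function restoreDraft(): void {
  const form = document.getElementById(FORM_ID) as HTMLFormElement | null;
  const raw = localStorage.getItem(STORAGE_KEY);
  if (!form || !raw) return;
  const data = JSON.parse(raw) as Record<string, string>;
  for (const [name, value] of Object.entries(data)) {
    const field = form.elements.namedItem(name);
    if (field instanceof HTMLInputElement || field instanceof HTMLTextAreaElement) {
      field.value = value;
    }
  }
}

// Save on every change, restore once the page has loaded again.
document.addEventListener("input", saveDraft);
window.addEventListener("DOMContentLoaded", restoreDraft);
```

With Blazored LocalStorage the same pattern is usually written against the component's model in C# instead of against the DOM, but the idea is identical: write on every change, read back on load.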
I don't think so. You wouldn't want the server to be maintaining the state of dead pages indefinitely.
If these users can be identified (for example by logging them in), then it's trivial to keep state. You can use a SQL database to store a JSON string of the current state of the input model, for example. Or you can use a singleton service with data keyed to the user.
So we just switched to SQL Managed Instance. As much as it pains me, we had more than a few places where untrained users were querying our production database(s). The switch to a Business Critical SQL Managed Instance was made partially because we could have them connect to a read-only replica of the DB.
Upon digging more, it seems that to connect to the read-only replica (ROR), they're going to need to open SSMS, hit "Advanced Options", go over to the extra parameters, and put ApplicationIntent=ReadOnly. This is a bit of a bummer because 1. many of them will probably mistakenly connect to the "real" DB and potentially cause havoc, and 2. that's a lot of extra steps for a user.
My Questions:
Is there a way to use SSMS to bake a connection into their system somehow that automatically sets the parameters?
If not, is there a way to deny them a connection if they DON'T have those parameters?
Side note: I put a CNAME in my private DNS pointing sqlprod01.mydomain.com to the endpoint. When connecting through the CNAME I get "bad login", but when I keep that same login info and hit the endpoint directly, it works fine. What's up with that?
I have created an authentication system using Db2 for the database and Node.js for the backend code. Yes, I know I could have used Passport, but I wanted to try to build this auth system without it. I have everything working, and the way I am making sure users are logged into the system before it redirects to the desired page is by checking whether a global variable is true, and if it is, rendering the page. The problem is that the value of this variable is not unique to every session. I'm wondering if the best way to go about this is somehow making the variable unique to every session, or maybe using Db2 somehow. Thanks in advance for the help :)
The standard way to handle such cases is to store a session cookie on the client side. Then, on the server side, build a cache map (mapping a session ID to a data structure that includes info about the session).
For this to be robust, to support multiple instances of the service (for scale), and to make sure sessions can be "forgotten" via a TTL so the cache does not grow forever, it is best to use a cache DB such as Redis.
Here is an excellent tutorial on this: How to Manage Session using Node.js and Express
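For illustration, a minimal Express setup along those lines; it assumes the express-session and connect-redis packages (v7-style API) with a local Redis instance, and every name and secret below is a placeholder:

```typescript
import express from "express";
import session from "express-session";
import RedisStore from "connect-redis"; // default export in connect-redis v7
import { createClient } from "redis";

const app = express();

// Shared session store so multiple Node instances see the same sessions.
const redisClient = createClient({ url: "redis://localhost:6379" });
redisClient.connect().catch(console.error);

app.use(
  session({
    store: new RedisStore({ client: redisClient, ttl: 60 * 30 }), // forget after 30 min
    secret: "replace-with-a-real-secret",
    resave: false,
    saveUninitialized: false,
    cookie: { httpOnly: true, maxAge: 1000 * 60 * 30 },
  })
);

app.post("/login", (req, res) => {
  // ...check credentials against Db2 here...
  (req.session as any).userId = "some-user-id"; // per-session state, not a global variable
  res.sendStatus(204);
});

app.get("/protected", (req, res) => {
  if (!(req.session as any).userId) return res.status(401).send("Not logged in");
  res.send("Welcome back");
});

app.listen(3000);
```

The key point is that the logged-in flag lives in req.session (backed by the cookie plus the Redis entry), so each browser session gets its own value instead of sharing one global variable.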
Db2 session global variable values are unique to a connection. See CREATE VARIABLE
Use Case:
Say I wanted to create a realtime-collaborative document editing system.
In this scenario many users could create and collaborate on many documents.
Due to client-device constraints, it's not possible for any client to keep a replica of all documents, only a handful.
There needs to be a central storage server where all documents always live, and this server is always backed up.
Each client can 'subscribe' to any document, and all clients subscribed can see realtime changes of all other clients subscribed/editing the same document.
Questions:
Since each client can't store all documents, there needs to be a way to remove the replicas of 'old' documents from the client, without deleting the document from the central store, ideally based on an automatic least-recently-used approach. How is this handled in gun?
In gun, how can a document be deleted from the central store, so it's then effectively permanently removed from, and no longer accessible to, all clients?
When a document is deleted from the central store, how is the physical storage space ever actually reclaimed for later use?
Great questions, #user2672083. Here is the current layout:
Collaborative realtime document editing is possible with gun. Here is a quick prototype I recorded a long time ago; however, there are no full pre-built examples/implementations yet.
Not all data is stored on every client by default. The browser only keeps the data it requests/gets/subscribes to.
The default server already acts as a backup. I recommend using the S3 storage adapter, because then you do not have to worry about running out of disk space.
Removing old replicas. Currently, if I want the server to act as a central "master", I just put a localStorage.clear() at the top of my browser code. This forces the browser to always load the latest from the server. This is not ideal, though; an LRU-specific feature is coming soon according to the roadmap.
Permanently removing data and reclaiming space. While this should be easy for a centralized setup, gun is P2P by default, so it uses a technique called tombstoning to delete data. Given the number of requests (like yours) for LRU/TTL/GC/deleting, there will be better support for this in the future. Currently, you have to use a mix of rm data.json, localStorage.clear(), and 30-day lifecycles on S3 to get this to work. This will be more integrated/easier in the future.
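For concreteness, a rough sketch of those two workarounds using the public GUN API; the relay URL and the "documents" key are made up:

```typescript
import Gun from "gun";

// Force this browser to always pull the latest state from the server
// by dropping its local replica before GUN starts.
localStorage.clear();

const gun = Gun(["https://my-relay.example.com/gun"]); // hypothetical relay peer

// "Delete" a document: GUN tombstones the node by overwriting it with null,
// so subscribed peers stop seeing it (the tombstone itself remains).
function deleteDocument(docId: string): void {
  gun.get("documents").get(docId).put(null);
}
```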
Now a question for you: What are you working on, and how can I help? Many of the things you asked about are possible (with some work) now, but slated to be the focus of the next version of gun - I'd love to get your feedback as we build this out.
All peers reply to requests for data (#2), meaning that localStorage and the server will both reply. Because localStorage is physically closer to the user, it replies first/fastest, and then replies from the server are merged in. GUN does not try each peer "in sequence" with try/catch cascades; it gets replies from all peers in parallel.
GUN has swappable storage and transport interfaces, so yes, it is easy to build other layers on top or into it.
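Continuing the sketch above, a subscription looks roughly like this; the callback fires with the locally cached value first and again whenever any peer sends newer data, which GUN merges into what you already have:

```typescript
// Subscribe to realtime changes of one document; every peer that holds
// the data answers, and later answers are merged into earlier ones.
gun.get("documents").get("doc-123").on((data, key) => {
  console.log(`update for ${key}:`, data);
});
```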
This is a more general question, so bear with my abstraction of the following problem.
I'm currently developing an application, that is interfacing with a remote server over a public api. The api in question does provide mechanisms for fetching data based on a timestamp (e.g. "get me everything that changed since xxx"). Since the amount of data is quite high, I keep a local copy in a database and check for changes on the remote side every hour.
While this makes the application robust against network problems (remote server in maintenance, network outage, etc.) and enables employees to continue working with the application, there is one big gaping problem:
The api in question also offers write access. E.g. my application can instruct the remote server to create a new object. Currently I'm sending the request via api, and upon success create the object in my local database, too. It will eventually propagate via the hourly data fetching, where my application (ideally) sees that no changes need to be made to the local database.
Now when the api is unreachable, I create the object in my database and cache the request until the api is reachable again. This has multiple problems:
If the request fails (due to errors that could not be validated beforehand), I end up with an object in the database which shouldn't even exist. I could delete it, but it seems hard to explain to the user(s) ("something went wrong with the api, so we deleted that object again").
The problem especially cascades when dependent actions queue up, e.g. creating the object plus two more requests for modifying it. When the initial create fails, so will the modifying requests (since the object does not exist on the remote side).
The worst case is deletion: when an object is deleted locally but will not be deleted on the remote side, I have no way of (easily) restoring it.
One might suggest to never create objects locally, and let them propagate only through the hourly data sync. This unfortunately is not an option. If the api is not accessible, it can be for hours. And it is mandatory that employees can continue working with the application (which they cannot when said objects don't exist locally).
So bottom line:
How do I handle such a scenario, where the api might not be reachable, but certain requests must be cached locally and repeated when the api is reachable again? Especially, how do I handle cases where those requests unpredictably fail?
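To make that concrete, here is roughly what my cached-request handling looks like, heavily simplified; all names (PendingOp, api.send, isNetworkError) are made up for the example:

```typescript
// Pending writes are queued locally and replayed once the api is reachable again.
type PendingOp =
  | { kind: "create"; localId: string; payload: unknown }
  | { kind: "update"; localId: string; payload: unknown }
  | { kind: "delete"; localId: string };

const queue: PendingOp[] = [];

async function flushQueue(api: { send(op: PendingOp): Promise<void> }): Promise<void> {
  while (queue.length > 0) {
    const op = queue[0];
    try {
      await api.send(op);
      queue.shift(); // confirmed remotely, safe to drop
    } catch (err) {
      // Api still unreachable: keep the queue and retry the whole thing later.
      if (isNetworkError(err)) return;
      // Otherwise the request was rejected for a reason that could not be
      // validated beforehand: drop this op and everything depending on the
      // same local object, then roll back the local copy somehow. This
      // cascading cleanup is exactly what I am asking how to handle.
      for (let i = queue.length - 1; i >= 0; i--) {
        if (queue[i].localId === op.localId) queue.splice(i, 1);
      }
    }
  }
}

// Hypothetical helper: distinguish "no connection" from a real rejection.
function isNetworkError(err: unknown): boolean {
  return err instanceof Error && err.message.includes("ECONNREFUSED");
}
```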
I'm curious how AX 2009 handles code propagation when operating in a load balanced environment.
We have recently converted our AX server infrastructure from a single AOS instance to 3 AOS instances, one of which is a dedicated load balancer (effectively 2 user-facing servers). All share the same application files and database. Since then, we have had one user who has been having trouble receiving code updates made to the system. The changes generally take a few days before they can see it, and the changes don't seem to update all at once.
For example, a value was added to an ENUM field, and they were not able to see it on a form where it was used (though others connected to the same instance were). Now, this user can see the field in the dropdown as expected, but when connected to one of the instances it will not flow onto a report as it should. When connected to the other instance it works fine, and for any other user connected to either instance it works properly.
I'm not certain if this is related to the infrastructure changes, but it does seem odd that only one user is experiencing it. My understanding was that with this setup, code changes would propagate across the servers either immediately (due to sharing the Application Files), or at least in a reasonable amount of time (<1 day). Is this correct or have I been misinformed?
As your cache problems seem to be per user, go learn about AUC files.
The files are stored on the client computer and can be tricky to keep in sync. There are other problems as well.
Start AX with a script that deletes the AUC file before starting AX.
There is no cache coherency between AOS instances: import an XPO on one AOS server, and it is not visible on the other. You will either have to flush the cache manually or restart the other AOS. The simplest thing is to import on each server; this is especially true for labels, as that is the only way I know of to bring labels in sync.
I am sort of curious about this as well, but what I do know is that if a user has access to the AOT (a member of the admin group or a group with developer access), the client will cache AOT elements more aggressively than it would for a non-developer.
Elements (like an enum) might be cached at the client level, but also at the AOS level. Restarting the AOS service flushes the memory for that service, forcing it to reload elements upon restart.
I guess what I am suggesting is that you make sure the element is not cached client side. Either restart the client or run "Refresh AOD" from the developer tools menu. If that doesn't help, try restarting the AOS the client connects to and see if that helps.
I think it is safe to say that if you want to be absolutely sure every user has the most recent "copy" of any element, you should not develop on the application files shared by all of these services, but rather develop in an environment with one AOS. When you need to move things to production, take down all AOSes in production and move the changes over while the system is down.
In such cases it is often difficult to find the exact cause for a specific case.
I try to follow some best practices to avoid such situations:
- Use separate environment for developing
- Deploy code changes using layer files, not XPOs
- When deploying, stop all AOSs, deploy the files, delete the index files in the application directory, start one AOS, compile, sync the DB, then start the other AOSs (or even shut everything down and start again)
- Try to have the latest kernel versions for the AOSs and clients