I want to use the BreezeJS API to store data in local storage (IndexedDB or WebSQL) and also to sync the local data with SQL Server.
But I have failed to achieve this, and I have also not been able to find a sample app of this kind built with BreezeJS, Knockout, and MVC Web API.
My requirements are:
1) If the internet is available, the data will come from SQL Server via the MVC Web API.
2) If the internet is down, the application will retrieve data from cached local storage (IndexedDB or WebSQL).
3) As soon as the internet is back, the local data will sync to SQL Server.
Please let me know whether I can achieve this with the BreezeJS API.
If yes, please provide some links and a sample.
If no, what else can we use to meet this type of requirement?
Thanks in advance for any help.
You can do this, but I would suggest simply using localStorage. Basically, every time you read from the server or save to the server, you export the entities and save the result to local storage. Then, when you need to read the data and the server is unreachable, you read it from localStorage, use importEntities to get it into the manager, and then query locally.
function getData() {
    var query = breeze.EntityQuery
        .from("{YourAPI}");
    return manager.executeQuery(query)
        .then(saveLocallyAndReturnPromise)
        .fail(tryLocalRestoreAndReturnPromise);

    // If the query succeeded remotely, save the data in case the
    // connection is lost later.
    function saveLocallyAndReturnPromise(data) {
        // Should add error handling here. This code assumes
        // the local processing will be successful.
        var cacheData = manager.exportEntities();
        window.localStorage.setItem('savedCache', cacheData);
        // Return the queried data as a promise so that this detour is
        // transparent to the viewmodel.
        return Q(data);
    }

    function tryLocalRestoreAndReturnPromise(error) {
        // Assume any error just means the server is inaccessible.
        // Simplified for the example; more robust error handling is
        // warranted.
        var cacheData = window.localStorage.getItem('savedCache');
        // NOTE: should handle an empty saved cache here by throwing an error.
        manager.importEntities(cacheData); // restore the saved cache
        var localQuery = query.using(breeze.FetchStrategy.FromLocalCache);
        return manager.executeQuery(localQuery); // this is a promise
    }
}
This is a code skeleton for simplicity. You should catch and handle errors, add an isConnected function to determine connectivity, and so on.
If you are editing data locally, there are a few more hoops to jump through. Every time you make a change to the cache, you will need to export either the whole cache or just the changes (probably depending on the size of the cache). When a connection is available again, you will need to check for local changes first and, if there are any, save them to the server before requerying it. In addition, any schema changes made while offline complicate matters tremendously, so be aware of that.
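To make that reconnect flow concrete, here is a minimal sketch. It assumes a hypothetical isConnected() helper and reuses the manager and getData() from the skeleton above:

function syncIfConnected() {
    // isConnected() is a hypothetical helper you would implement
    // (e.g. navigator.onLine plus a ping to the server).
    if (!isConnected()) { return Q(); } // still offline; nothing to do
    if (manager.hasChanges()) {
        // Push offline edits to the server before requerying it.
        return manager.saveChanges().then(function () {
            window.localStorage.removeItem('savedCache'); // edits are now on the server
            return getData(); // requery and refresh the saved cache
        });
    }
    return getData();
}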
Hope this helps. A robust implementation is a bit more complex, but this should give you a starting point.
I have a very simple Cosmos DB query that I am making from an ASP.NET Core 3 Razor Pages application. The same query run in the Data Explorer in Azure returns results in 0.02ms. When I run it through the application, using stopwatches to measure the duration of the calls, it can take anywhere from 400ms to 2000ms.
QueryDefinition queryDefinition = new QueryDefinition("SELECT * FROM Cache WHERE Cache.JoinCode = @jc").WithParameter("@jc", JoinCode);
var query = _container.GetItemQueryIterator<HostCache>(queryDefinition);
List<HostCache> results = new List<HostCache>();
while (query.HasMoreResults)
{
    var response = await query.ReadNextAsync();
    results.AddRange(response.ToList());
}
return results.FirstOrDefault();
The long-running call is await query.ReadNextAsync(). Is there anything I can do to speed that up? Maybe I'm doing it wrong?
First, I would highly recommend that you (or anyone using the Cosmos DB .NET SDK) watch this video from the Cosmos DB YouTube channel: https://www.youtube.com/watch?v=McZIQhZpvew. It provides really useful information about the best practices to follow when working with this SDK.
The video also explains why the first request takes so much time and how you can speed it up.
To summarize for the purpose of this answer: creating an instance of CosmosClient (with "Direct" connection mode) does not do much by itself. When you make the first request with that client, initialization happens, and at that point the SDK makes a few network requests to gather the information needed to establish the "Direct" (TCP) connections. That's why the first request takes so long. After that, the information is cached by the SDK, so subsequent requests take much less time.
To perform the initialization while creating the Cosmos client, use the CreateAndInitializeAsync method of CosmosClient. Here's an example from the documentation page:
using Microsoft.Azure.Cosmos;

List<(string, string)> containersToInitialize = new List<(string, string)>
{
    ("DatabaseName1", "ContainerName1"),
    ("DatabaseName2", "ContainerName2")
};

CosmosClient cosmosClient = await CosmosClient.CreateAndInitializeAsync(
    "connection-string-from-portal", containersToInitialize);
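Since the point is to pay that initialization cost only once per application, one way to wire this up (a sketch with hypothetical database and container names, not something from the documentation page) is to build the initialized client at startup and register it as a singleton:

// Startup.cs ConfigureServices (sketch; "MyDatabase"/"Cache" are hypothetical)
services.AddSingleton(sp =>
{
    var containersToInitialize = new List<(string, string)>
    {
        ("MyDatabase", "Cache")
    };
    // Block once at startup so every later request reuses the
    // already-initialized client and its warm TCP connections.
    return CosmosClient.CreateAndInitializeAsync(
            "connection-string-from-portal", containersToInitialize)
        .GetAwaiter().GetResult();
});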
Background:
I'm building an SPA (Single Page Application) PWA (Progressive Web App) using Vue.js. I have a remote PostgreSQL database, serving the tables over HTTP with PostgREST. I have a working Workbox service worker and an IndexedDB database, which holds a local copy of the database tables. I've also registered some routes in my service-worker.js; everything is fine so far.
I'm letting Workbox cache GET calls that return tables from the REST service. For example:
https://www.example.com/api/customers will return a json object of the customers.
workbox.routing.registerRoute('https://www.example.com/api/customers', workbox.strategies.staleWhileRevalidate())
At this point, I need Workbox to follow the stale-while-revalidate pattern, but to:
Not use a cache, but instead return the local version of this table, which I have stored in IndexedDB (the cache part).
Make the REST call and update the local version if it has changed (the network part).
I'm almost certain there is no configurable option for this in this Workbox strategy, so I would write the code for it myself, which should be fairly simple. The retrieval part simply returns the contents of the requested table from IndexedDB. For the update part, I'm thinking of adding a data revision number to compare against, and thus deciding whether I need to update the local database.
Anyway, we're now zooming in on the actual question:
Question:
Is this actually a good way to use Workbox Routes/Caching, or am I now misusing the technology because I use IndexedDB as the cache?
and
How can I make my own version of the StaleWhileRevalidate strategy? I would be happy to understand how to simply make a copy of the existing Workbox version and be able to import it and use it in my Vue.js Service Worker. From there I can make my own necessary code changes.
To make this question a bit easier to answer, these are the underlying subquestions:
First of all, StaleWhileRevalidate.ts (see the link below) is a .ts (TypeScript) file. Can (should) I simply import it as a module? I probably can, but then I get errors:
When I try to import my custom CustomStaleWhileRevalidate.ts in my main.js, I get errors on all of the current import statements because (of course) the workbox-core/_private/ directory doesn't exist.
How should I approach this?
This is the current implementation on Github:
https://github.com/GoogleChrome/workbox/blob/master/packages/workbox-strategies/src/StaleWhileRevalidate.ts
I don't think using the built-in StaleWhileRevalidate strategy is the right approach here. It might be possible to do what you're describing using StaleWhileRevalidate along with a number of custom plugin callbacks to override the default behavior... but honestly, you'd end up changing so much via plugins that starting from scratch would make more sense.
What I'd recommend that you do instead is to write a custom handlerCallback function that implements exactly the logic you want, and returns a Response.
// Your full logic goes here.
async function myCustomHandler({event, request}) {
  event.waitUntil((async () => {
    const idbStuff = ...;
    const networkResponse = await fetch(...);
    // Some IDB operations go here.
  })());
  return finalResponse; // must return (a promise for) a Response
}
workbox.routing.registerRoute(
'https://www.example.com/api/customers',
myCustomHandler
);
You could do this without Workbox as well, but if you're using Workbox to handle some of your unrelated caching needs, it's probably easiest to also register this logic via a Workbox route.
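For concreteness, here is one way that handler might be filled in for the /api/customers route. The readTableFromIDB and writeTableToIDB helpers are hypothetical stand-ins for your own IndexedDB access code, so treat this as a sketch under those assumptions rather than a drop-in implementation:

async function customStaleWhileRevalidate({event, request}) {
  // Network part: revalidate in the background and refresh the local table.
  const revalidate = fetch(request).then(async (response) => {
    if (response.ok) {
      await writeTableToIDB('customers', await response.clone().json());
    }
    return response;
  });
  event.waitUntil(revalidate.catch(() => { /* offline: keep the stale copy */ }));

  // "Cache" part: answer from IndexedDB if a local copy exists.
  const localData = await readTableFromIDB('customers');
  if (localData !== undefined) {
    return new Response(JSON.stringify(localData), {
      headers: { 'Content-Type': 'application/json' },
    });
  }
  // No local copy yet: fall back to the network response itself.
  return revalidate;
}

You would register it for the route exactly as shown above with myCustomHandler.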
I've hooked up Miniprofiler to my local ASP.NET Core project and it works as expected. Now I need it to work in a hosted environment where there are multiple instances of the same website and there are no sticky sessions. It is my understanding that this should be supported if you just set the storage option when configuring the profiler. However, setting the storage does not seem to do anything. I initialize the storage like this:
var redisConnection = "...";
MiniProfiler.DefaultOptions.Storage = new RedisStorage(redisConnection);
app.UseMiniProfiler();
After doing this, I expected that I could open a profiled page and a result would be added to my redis cache. I would then also expect that a new instance of my website would list the original profiling result. However, nothing is written to the cache when generating new profile results.
To test the connection, I tried manually saving a profiler instance (storage.Save()) and it gets saved to the storage. But again, the saved result is not loaded when showing profiler results (and regardless, none of the examples I've seen requires you to do this). I have a feeling that I've missed some point about how the storage is supposed to work.
It turns out that my assumption that MiniProfiler.DefaultOptions.Storage would be used was wrong. After changing my setup code to the following, it works.
// Startup.cs ConfigureServices
var redisConnection = "...";
services.AddMiniProfiler(o =>
{
    o.RouteBasePath = "/profiler";
    o.Storage = new RedisStorage(redisConnection); // This is new
});
// Startup.cs Configure
app.UseMiniProfiler();
I'm currently running a self-hosted Parse Server, up to date, but I'm facing some security issues.
At the moment, calls to the /classes route can retrieve any object in any table, and even though I might want an object to be publicly readable, I wouldn't want to expose all of its fields. In short, I don't want the database to be retrievable at all; I would like to disable "everything" except Parse Cloud Code. That is, I would be able to call my own Cloud functions, but clients (Android, iOS, C#, JavaScript...) would not be able to retrieve data directly.
Is there any way to do this? I've been searching for this in depth, trying to debug some controllers, but I don't have a clue.
Thank you very much in advance.
tl;dr: set the ACL on all objects so they are readable only with the master key, then tell the query in Cloud Code to use the master key when querying your data.
So without changing Parse Server itself, you could make use of ACLs and only allow a specific user to access objects. You would then "log in" as that user in your Cloud Code and be able to access all objects.
As the old method, Parse.Cloud.useMasterKey(), isn't available in the open-source Parse Server, you will have to pass the useMasterKey parameter to the query you are running. That should do the trick for this particular request and will bypass ACLs/CLPs. There is an example in the Parse Server wiki as well.
For convenience, here is a short code example from the Wiki:
Parse.Cloud.define('getTotalMessageCount', function(request, response) {
    var query = new Parse.Query('Messages');
    query.count({
        useMasterKey: true // count() will use the master key to bypass ACLs/CLPs
    }).then(function(count) {
        response.success(count);
    });
});
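And here is a minimal sketch of the ACL half of the tl;dr. A freshly constructed Parse.ACL grants no public read or write access, so objects saved with it are reachable only via the master key (the Messages class and the function name are just examples, not from the wiki):

Parse.Cloud.define('createSecretMessage', function(request, response) {
    var message = new Parse.Object('Messages');
    message.set('text', request.params.text);
    message.setACL(new Parse.ACL()); // no public read/write: master key only
    message.save(null, { useMasterKey: true }).then(function(saved) {
        response.success(saved.id);
    }, function(error) {
        response.error(error);
    });
});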
I have a Web API 2 service that will be deployed across four production servers. When a request doesn't pass validation, a custom response object is generated and returned to the client.
A rudimentary example:
if (!ModelState.IsValid)
{
    var responseObject = responseGenerator.GetResponseForInvalidModelState(ModelState);
    return Ok(responseObject);
}
Currently the responseGenerator is aware of which environment it is in and generates the response accordingly. For example, in development it'll return a lot of detail, but in production it'll only return a simple failure status.
How can I implement a "switch" that turns details on without requiring a round trip to the database each time?
Due to the nature of our environment, using a config file isn't realistic. I've considered using a flag in the database and then caching it at the application layer, but environmental constraints make refreshing the cache on all four servers very painful.
I ended up going with the parameter suggestion and implemented a token system on the back end. If a debug token is present in the request, the service validates it against the database. If the token is valid and active, the service returns the additional detail.
This allows us to control things from our end while keeping things simple for the vendors, and it only adds that extra round trip to the database during debugging.
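As a rough illustration of that token check (the header name, token repository, and detailed-response method below are hypothetical, not from the original code):

// Inside an async Web API 2 action; a sketch only.
if (!ModelState.IsValid)
{
    var includeDetail = false;
    IEnumerable<string> tokens;
    if (Request.Headers.TryGetValues("X-Debug-Token", out tokens)) // hypothetical header
    {
        // Only requests carrying a debug token pay the database round trip.
        includeDetail = await tokenRepository.IsActiveAsync(tokens.First()); // hypothetical repository
    }
    var responseObject = includeDetail
        ? responseGenerator.GetDetailedResponseForInvalidModelState(ModelState) // hypothetical method
        : responseGenerator.GetResponseForInvalidModelState(ModelState);
    return Ok(responseObject);
}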