How to implement an async Ignite cache store?

I am trying to implement 3rd-party persistence using Ignite.NET.
I have implemented a CacheStore, in which I use Dapper as a third-party ORM for database interaction in the Load(), Write(), and Delete() functions.
Can we make the Load(), Write(), and Delete() functions async? Or is there an async CacheStoreAdapter?

You can use the cache store in write-behind mode. In this mode, updates are collected and written to the underlying database asynchronously with respect to cache operations.
To enable it, set the CacheConfiguration.WriteBehindEnabled configuration property to true.
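For example, a minimal Ignite.NET configuration sketch; MyDapperStoreFactory and MyEntity are hypothetical placeholders for your own Dapper-backed store factory and entity types:

using System;
using Apache.Ignite.Core;
using Apache.Ignite.Core.Cache.Configuration;

var cacheCfg = new CacheConfiguration("myCache")
{
    CacheStoreFactory = new MyDapperStoreFactory(), // your IFactory<ICacheStore> (placeholder)
    ReadThrough = true,
    WriteThrough = true,                                 // write-behind is a mode of write-through
    WriteBehindEnabled = true,                           // writes are batched and flushed asynchronously
    WriteBehindFlushFrequency = TimeSpan.FromSeconds(5), // flush at most every 5 seconds...
    WriteBehindFlushSize = 1024                          // ...or once 1024 entries are buffered
};

using (var ignite = Ignition.Start())
{
    var cache = ignite.GetOrCreateCache<int, MyEntity>(cacheCfg);
    cache.Put(1, new MyEntity()); // returns quickly; the DB write happens in the background
}

Note that your Load(), Write(), and Delete() implementations stay synchronous; write-behind simply invokes them from a background flusher thread instead of inline with the cache operation.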


[Question + Discussion]: what are the tradeoffs of using apollo-client in a redux application?

I have a redux application that fetches data from a GraphQL server. I am currently using a lightweight GraphQL client called graphql-request, and all it does is help you send GraphQL queries/mutations, but I would like to get the best out of my APIs. Even though I am using Redux for state management, is it OK to use apollo-client without its built-in cache and use it only for network requests/API calls?
Benefits I know I would get from using apollo-client include:
Better error handling
Better implementation of auto-refreshing tokens
Better integration with my server, since my server is written with apollo-server
Thanks
Apollo-client's built-in cache does pretty much the same job that redux state management would do for your application. Obviously, if you are not comfortable with it, you can use redux to implement the functionality that you need, but the best-case scenario in my opinion would be to drop redux, since its configuration is pretty heavy, and rely purely on the cache provided by apollo-client.

Azure Function API Versioning - How to structure my code?

I have created a demo microservices application implemented with the help of Azure Function Apps. For separation of concerns, I have created an API Layer, Business Layer, and a Data Layer.
The API layer, being the function app, calls the business layer which implements the business logic while the data layer implements logic for storing and retrieving data.
After considerable thought, I have decided to use query-based API versioning for my demo.
The question I have is,
What is the best way to organize my code to facilitate this? Is there any other way to organize my code to accommodate the different versions apart from using different namespaces / repos?
As of now, I've created separate namespaces for each version, but this has created a lot of code duplication. Also, after getting it reviewed by some of my friends, they raised the concern that if separate namespaces are used, I would be forcing legacy systems to change references to the new namespace when they need to update, which is not recommended.
Any help would be appreciated.
The simplest way to implement versioning in Azure Functions is through route-based endpoints. The HttpTrigger attribute allows the definition of a custom route where you can set the expected version.
// Version 1 of get users
[FunctionName(nameof(V1UserList))]
public static IEnumerable<UserModel> V1UserList(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "v1/users")] HttpRequest req,
    ILogger log)
{
    return Enumerable.Empty<UserModel>(); // v1 implementation goes here
}

// Version 2 of get users
[FunctionName(nameof(V2UserList))]
public static IEnumerable<UserModel> V2UserList(
    [HttpTrigger(AuthorizationLevel.Function, "get", Route = "v2/users")] HttpRequest req,
    ILogger log)
{
    return Enumerable.Empty<UserModel>(); // v2 implementation goes here
}
When deploying each version in isolation, a router component is required to direct requests to the correct API endpoint.
The router component can be implemented in Azure using different services, such as:
Azure Function Proxies: you can specify endpoints on your function app that are implemented by another resource. You can use these proxies to break a large API into multiple function apps (as in a microservice architecture), while still presenting a single API surface for clients.
API Management: Azure API Management supports importing Azure Function Apps as new APIs or appending them to existing APIs. The process automatically generates a host key in the Azure Function App, which is then assigned to a named value in Azure API Management.
Sample code for Versioning APIs in Azure Functions
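Since the question settles on query-based versioning, here is a minimal sketch of that approach: a single function reads an api-version query parameter and dispatches to version-specific logic. The GetUsersV1/GetUsersV2 stubs are hypothetical stand-ins for your business-layer calls.

using System.Collections.Generic;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class UserFunctions
{
    [FunctionName("UserList")]
    public static IActionResult UserList(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "users")] HttpRequest req,
        ILogger log)
    {
        // e.g. GET /api/users?api-version=2.0
        string version = req.Query["api-version"];
        log.LogInformation("users requested with api-version={Version}", version);

        return version == "2.0"
            ? new OkObjectResult(GetUsersV2())
            : new OkObjectResult(GetUsersV1()); // default to v1 when no version is given
    }

    // Hypothetical stubs standing in for the business layer.
    private static IEnumerable<string> GetUsersV1() => new[] { "user-a" };
    private static IEnumerable<string> GetUsersV2() => new[] { "user-a", "user-b" };
}

The upside of this shape is that existing clients keep calling the same route and namespace; only the dispatch logic grows when a new version ships.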

XSockets.Net - how to manage NHibernate Session Context

I wonder what the best way is to manage the NHibernate session context when using an NH data layer from an XSockets controller.
In particular, I am referring to a self-hosted Windows service/console application or an Azure worker role, where HttpContext is not available.
Of course, there is always the option to create and dispose a session per call, but that means a performance hit, so it is better to reuse sessions in some way.
My controller provides an API for CRUD operations on the underlying NH repository and pushes updates to the relevant subscribers when certain records are updated in the DB.
Your ideas are appreciated :)
I'm using StructureMap to handle dependencies, and I create a nested container to handle session-per-request. You don't have to mess with CurrentSessionContext or HttpContext anymore for storing the session.
http://structuremap.github.io/the-container/nested-containers/
You could even just create UnitOfWork middleware if you are using OWIN with Web API.
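A minimal sketch of the nested-container pattern; it assumes ISession is registered against the factory in a StructureMap Registry, and Customer is a placeholder entity:

using NHibernate;
using StructureMap;

// In a Registry (placeholder registration):
//   For<ISession>().Use(ctx => ctx.GetInstance<ISessionFactory>().OpenSession());

public class Customer { public virtual int Id { get; set; } } // placeholder entity

public class CustomerWriter
{
    private readonly IContainer _container; // the root StructureMap container

    public CustomerWriter(IContainer container)
    {
        _container = container;
    }

    public void Save(Customer customer)
    {
        // Each call gets its own nested container, and therefore its own session.
        using (var nested = _container.GetNestedContainer())
        {
            var session = nested.GetInstance<ISession>();
            using (var tx = session.BeginTransaction())
            {
                session.SaveOrUpdate(customer);
                tx.Commit();
            }
        } // disposing the nested container disposes the ISession it created
    }
}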
Since XSockets is stateful, it will be bad for your database if you open the connection in the OnOpen event, since the connection will then remain open for as long as the socket is open. Best is to use the repository only in the methods performing the CRUD operations, as briefly as possible; see the sketch below.
Getting the instance of your repository should not be a bottleneck in this case.
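A hedged sketch of such a method-scoped session, assuming an XSockets 3-style controller; SessionFactoryHolder and Customer are placeholders for your own code:

using NHibernate;
using XSockets.Core.XSocket;

public class CustomerController : XSocketController
{
    public void AddCustomer(Customer customer)
    {
        // Open, work, dispose: the session never outlives this single call.
        using (var session = SessionFactoryHolder.Factory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            session.Save(customer);
            tx.Commit();
        }
        // ...then push the saved record to subscribers via your XSockets publish mechanism.
    }
}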
I will be happy to review any code you might have.
Regards
Uffe

dojo store isDirty

The older dojo.data API had an isDirty function to query whether a store, or a selected item, had unsaved changes. I used this in the ItemFileWriteStore.
The new (since 1.7) dojo/store API doesn't seem to have this. I'm looking at the Memory and JsonRest stores.
Is there an easy way to add this functionality, or is it a case of write/mixin your own?
There is no need for these functions in the new API. The dojo/store API is more abstract than the dojo/data API: it has no notion of asynchronous saving, so there is nothing to dirty-check.
The dojo/data write API was meant to be used in combination with a service that would be updated when calling save(). Because there could be a difference between the local and the remote version, they had to add a function like isDirty() to verify that.
The stores you mention don't use deferred saving either, so they don't have such a feature:
dojo/store/Memory is in-memory storage; there is no service behind this store, so persisting it to a service is something you would have to implement yourself, and there is no need for a save() or isDirty() feature.
dojo/store/JsonRest immediately pushes local changes to the RESTful web service behind the store, so there are never dirty objects that aren't saved yet, and no need for save() or isDirty() here either.
If you really need asynchronous saving, you will have to create your own store, which you can extend with your own save() and isDirty() API.
I think the old API was too specific (it was only valid for certain stores); that's probably why they left it out. But nobody is stopping you from creating your own additional API.

NHibernate and potential caching problems

OK, so I have an n-tiered model (WPF, ASP.NET, etc.) talking to backend services via WCF. These services use NHibernate to communicate with the database. Some of these services will be run in InstanceContextMode.Single mode.
Questions:
1. In singleton service instances, should I try to utilize one session object for the entire time the WCF service is alive, to get the most out of my cache?
2. If I use one session instance in this singleton instance and never create new ones, I assume I have to worry about eventually removing cached entities from the session, or dumping it altogether, to avoid performance issues with the session?
3. Is it a good idea at all to use the session in this way for a singleton WCF service? It seems like it would be if I want to utilize caching.
4. Should I utilize the 2nd level cache in a scenario like this?
5. Outside of this scenario, when should I avoid caching? I would assume I would want to avoid it in any sort of batching scenario where a large number of objects are created/updated and never really used again outside of the creation or updates.
6. Are items automatically cached in the session when I create/read/update/delete, or do I need to specify something in the mapping files or configuration?
1-3: As far as I know, ISession objects are supposed to be lightweight, short-lived objects, which live only for the duration for which they're needed. I would advise AGAINST using the same ISession object for the whole lifetime of your service.
What I would suggest instead is using the same ISessionFactory instance and creating new ISessions from it as necessary (you can try something similar to the session-per-request pattern), as in the sketch below.
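A minimal sketch of that approach in a singleton WCF service; BuildSessionFactory(), ICustomerService, and Customer are placeholders for your own configuration, service contract, and entity:

using System.ServiceModel;
using NHibernate;

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class CustomerService : ICustomerService
{
    // Built once: the factory is expensive to create but thread-safe.
    private static readonly ISessionFactory Factory = BuildSessionFactory();

    public Customer GetCustomer(int id)
    {
        // A fresh, short-lived session per operation.
        using (var session = Factory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            var customer = session.Get<Customer>(id);
            tx.Commit();
            return customer;
        }
    }

    private static ISessionFactory BuildSessionFactory()
    {
        // Placeholder: build the factory via your own (Fluent) NHibernate configuration.
        throw new System.NotImplementedException();
    }
}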
4: If you enable the 2nd level cache, you can have all the benefits of caching in this scenario.
5: Yep, pretty much. Also remember that the 2nd level cache is per ISessionFactory instance. That means that if you're using more than one ISessionFactory instance, you'll have a lot of problems with your cache.
6: For the 1st level cache you don't need to define anything.
For the 2nd level cache you need to enable the cache when you configure NHibernate (fluently, in my case):
.Cache(c => c
    .UseQueryCache()
    .ProviderClass(isWeb
        ? typeof(NHibernate.Caches.SysCache2.SysCacheProvider).AssemblyQualifiedName // in a web environment, use SysCache2
        : typeof(NHibernate.Cache.HashtableCacheProvider).AssemblyQualifiedName))    // in a dev environment, use the simple hashtable cache
and enable the cache for each entity and each collection that you want cached:
mapping.Cache.ReadWrite().Region("myRegion");
and for a collection:
mapping.HasMany(x => x.Something)
    .Cache.ReadWrite().Region("myRegion");