How do you perform completely asynchronous operations in ASP.NET Core - asp.net-core

Hello, I am trying to do a trail log for some of my API endpoints. These logs are generated when the endpoint is called. I would like the writing of the logs to be done in an asynchronous manner (as lightweight as possible) so as not to affect the performance of my usual logic.
I was thinking of having a component that is injectable and can be called anywhere in my endpoints when a log is produced. The problem is that I cannot seem to find a suitable async solution:
Important service that must not be obstructed by delays
public interface IImportantInterface
{
    Task DoSomethingUndisturbedAsync(string value);
}
**Wrapper around Redis pub-sub**
public interface IIOService
{
    Task PublishAsync(object obj);
}
Controller
public class Controller
{
    private IImportantInterface importantService;
    private Publisher publisher;

    [HttpPost]
    public async Task SomeEndpointAsync()
    {
        this.publisher.PublishAsync([some_log]); // should not delay the rest of the method
        await this.importantService.DoSomethingUndisturbedAsync([something]);
    }

    public Controller(IImportantInterface importantService, Publisher publisher)
    {
        this.importantService = importantService;
        this.publisher = publisher;
    }
}
Now comes the real problem. How do I give my Publisher the smallest footprint? I came up with three scenarios, but two of them are unfeasible because things go out of scope:
Attempt 1
Transient Service with Task scoped to method:
public class Publisher
{
    private IIOService writeService { get; set; }

    public async Task PublishAsync(object obj)
    {
        Task t1 = Task.Run(async () => await writeService.PublishAsync(obj)); // t1 might not have finished when the method exits
    }
}
Task t1 might not finish by the time the method ends.
Attempt 2
Task embedded in Transient Service
public class Publisher
{
    // the publisher might get discarded when the calling controller goes out of scope
    private Task t1;
    private IIOService writeService { get; set; }

    public async Task PublishAsync(object obj)
    {
        t1 = Task.Run(async () => await this.writeService.PublishAsync(obj));
    }
}
Now the task will not get collected when the method ends, but it might still not finish by the time the calling controller goes out of scope.
Attempt 3
Singleton object with a ConcurrentQueue of Tasks that get enqueued.
This would not go out of scope, but when would I clear the items?
public class Publisher
{
    private ConcurrentQueue<Task> Queue;
    private IIOService writeService { get; set; }

    public async Task PublishAsync(object obj)
    {
        this.Queue.Enqueue(this.writeService.PublishAsync(obj));
    }
}
P.S. I want to publish these logs in a common place. From that place the target is to get them published to a Redis database using the pub-sub functionality.
Should I just write to Redis directly?

Hello, I am trying to do a trail log for some of my API endpoints. These logs are generated when the endpoint is called. I would like the writing of the logs to be done in an asynchronous manner (as lightweight as possible) so as not to affect the performance of my usual logic.
I strongly recommend that you use an existing and exhaustively-tested logging library, of which there are many with modern capabilities such as semantic logging and async-compatible implicit state.
Modern logging libraries generally have a singleton kind of design, where logs are kept in-memory (and logging methods are synchronous). Then there is a separate "processor" which publishes these messages to a collector. If you insist on writing your own logging framework (why?), I would recommend you take the same approach as all the other highly successful logging frameworks.
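If you do go down that road, a rough sketch of that shape in ASP.NET Core could look something like the following, using System.Threading.Channels and a hosted service. LogQueue and LogPublisherService are names I made up for illustration; IIOService is the Redis wrapper from your question:
using System.Collections.Generic;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class LogQueue
{
    private readonly Channel<object> channel =
        Channel.CreateUnbounded<object>(new UnboundedChannelOptions { SingleReader = true });

    // Called from controllers: synchronous and non-blocking, it only hands the entry off.
    public void Publish(object logEntry) => channel.Writer.TryWrite(logEntry);

    public IAsyncEnumerable<object> ReadAllAsync(CancellationToken ct) =>
        channel.Reader.ReadAllAsync(ct);
}

public class LogPublisherService : BackgroundService
{
    private readonly LogQueue queue;
    private readonly IIOService writeService; // the Redis pub-sub wrapper from the question

    public LogPublisherService(LogQueue queue, IIOService writeService)
    {
        this.queue = queue;
        this.writeService = writeService;
    }

    // Drains the in-memory queue off the request path and publishes each entry.
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        await foreach (var entry in queue.ReadAllAsync(stoppingToken))
        {
            await writeService.PublishAsync(entry);
        }
    }
}

// Registration: the queue is a singleton, the processor a hosted service.
// services.AddSingleton<LogQueue>();
// services.AddHostedService<LogPublisherService>();
The controller only does a synchronous TryWrite, so the request path is never blocked by the Redis publish.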

Related

How to integrate TPL Dataflow in ASP NET Core

Hello, I have looked with intense interest lately at TPL Dataflow and I want to integrate it into my ASP.NET Core application.
I want to use it as a pipeline where multiple methods from different parts of the application can post data to this Dataflow chain.
What I do not know is: where do you store your chain of linked blocks if you want them to be callable from multiple places?
Producer
public class Producer
{
    private BufferBlock<int> startBlock { get; set; }
    private ActionBlock<int> ioBlock { get; set; }
    private IOService service;

    private void InitializeChain()
    {
        this.startBlock = new BufferBlock<int>();
        var transformLink = new TransformBlock<int, string>([something]);
        // some chain of blocks here
        this.ioBlock = new ActionBlock<int>(async (x) => await this.service.WriteAsync(x));
        this.startBlock.LinkTo([someBlock]).LinkTo([someOtherBlock])......LinkTo(ioBlock);
    }

    public async Task AddAsync(int data)
    {
        this.startBlock.Post(data);
    }

    public Producer(IOService service)
    {
        this.service = service;
        this.InitializeChain();
    }
}
API Producers
I am envisioning this Producer getting called from multiple parts of my application; I'll use Controllers for brevity:
public class C1 : Controller
{
    private Producer producer;

    [HttpPost]
    [Route([someroute])]
    public async Task SomeRoute(int data)
    {
        await this.producer.AddAsync(data);
    }

    [HttpGet]
    [Route([someotherroute])]
    public async Task SomeOtherRoute(int data)
    {
        await this.producer.AddAsync(data);
    }

    public C1(Producer producer)
    {
        this.producer = producer;
    }
}
Startup
public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<Producer>();
}
This can be extended to a multiple Controller scenario or deeper in the hierarchy.
Now my question would be:
How should the Producer that keeps the Dataflow chain be injected? Should it be transient? Should the blocks be instantiated on every call?
I do not know if this design is OK. I know TPL Dataflow is thread-safe, but can it be used this way?
P.S. I basically do not know in what form to keep my Dataflow pipeline, and what its lifetime should be, if I want it to be available for the entire scope of my ASP.NET Core application.
I want to fetch data from multiple endpoints (directly or deeper in the call hierarchy), batch them, transform them, and control the way they are eventually written to an external source (an async operation).
Does this play nicely with the already existing ThreadPool of ASP.NET Core?
P.S. 2: This question also haunts me for an Rx equivalent.
I recommend not directly linking your controllers to your background processor. For reliability reasons, there should be a persistent queue in between them. This can be an Azure Queue, Amazon Simple Queue, or even something old school like MSMQ or a database.
Your processor can be independent (Azure Function, Amazon Lambda, or old school like Win32 service), or it can be part of your web app (ASP.NET Core hosted service).
Your controller writes to the persistent queue and then returns. Your processor then reads messages from the queue and processes them. Your processor is what would use TPL Dataflow or Rx - whichever is more natural.
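As a very rough sketch of the in-app (hosted service) option: IMessageQueue below is a hypothetical stand-in for whatever persistent queue you pick (Azure Queue, SQS, a database table, ...), and the pipeline loosely mirrors the Producer from your question (batch, transform, write):
using System;
using System.Threading;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;
using Microsoft.Extensions.Hosting;

public interface IMessageQueue
{
    // Hypothetical abstraction over the persistent queue; returns null when nothing is waiting.
    Task<string> DequeueAsync(CancellationToken ct);
}

public class DataflowProcessor : BackgroundService
{
    private readonly IMessageQueue queue;

    public DataflowProcessor(IMessageQueue queue) => this.queue = queue;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Build the pipeline once, for the lifetime of the hosted service.
        var batch = new BatchBlock<string>(batchSize: 10);
        var transform = new TransformBlock<string[], string>(items => string.Join(",", items));
        var write = new ActionBlock<string>(payload => WriteToExternalSourceAsync(payload));
        batch.LinkTo(transform, new DataflowLinkOptions { PropagateCompletion = true });
        transform.LinkTo(write, new DataflowLinkOptions { PropagateCompletion = true });

        try
        {
            while (!stoppingToken.IsCancellationRequested)
            {
                var message = await queue.DequeueAsync(stoppingToken);
                if (message != null)
                    await batch.SendAsync(message, stoppingToken);
            }
        }
        catch (OperationCanceledException)
        {
            // shutting down
        }

        batch.Complete();
        await write.Completion;
    }

    // Placeholder for the real async write to the external source.
    private Task WriteToExternalSourceAsync(string payload) => Task.CompletedTask;
}
The pipeline lives as long as the hosted service (effectively a singleton), which answers the lifetime question: build the blocks once, not per call.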

Project Reactor Schedulers elastic using old threadlocal value

I am using Spring WebFlux to call one service from another via Schedulers.elastic():
Mono<Integer> anaNotificationCountObservable = wrapWithRetryForFlux(wrapWithTimeoutForFlux(
notificationServiceMediatorFlux.getANANotificationCountForUser(userId).subscribeOn(reactor.core.scheduler.Schedulers.elastic())
)).onErrorReturn(0);
In the main thread I am setting an InheritableThreadLocal variable, and in the child thread I am trying to access it, and it works fine.
This is my class for storing the thread-local value:
@Component
public class RequestCorrelation {
    public static final String CORRELATION_ID = "correlation-id";

    private InheritableThreadLocal<String> id = new InheritableThreadLocal<>();

    public String getId() {
        return id.get();
    }

    public void setId(final String correlationId) {
        id.set(correlationId);
    }

    public void removeCorrelationId() {
        id.remove();
    }
}
Now the issue is that the first time it works fine, meaning the value I set in the thread-local is passed to the other services.
But the second time, it is still using the old id (generated in the last request).
I tried using Schedulers.newSingle() instead of elastic(), and then it works fine.
So I think that, since elastic() is re-using threads, it is not able to clear the value / it is re-using it.
How should I resolve this issue?
I am setting the thread-local in my filter and clearing it in the same filter:
requestCorrelation.setId(UUID.randomUUID().toString());
chain.doFilter(req, res);
requestCorrelation.removeCorrelationId();
You should never tie resources or information to a particular thread when leveraging a reactor pipeline. Reactor is itself scheduling agnostic; developers using your library can choose to schedule work on another scheduler - if you decide to force a scheduling model you might lose performance benefits.
Instead you can store data inside the reactor context. This is a map-like structure that’s tied to the subscriber and independent of the scheduling arrangement.
This is how projects like Spring Security and Micrometer store information that usually belongs in a ThreadLocal.

Hangfire job enqueued using interface ignores specified job filters on class/method level

Consider we have the following class:
[AutomaticRetry(Attempts = 3)]
public class EmailSender : IEmailSender
{
    [ErrorReporting(Attempts = 1)]
    public async Task Send()
    {
    }
}

public interface IEmailSender
{
    Task Send();
}
And we enqueue the job in this way:
backgroundJobClient.Enqueue<IEmailSender>(s => s.Send());
Just to mention, I use SimpleInjector and its Hangfire job activator.
First of all, the Attempts property from the AutomaticRetry attribute is not taken into account. When it comes to the ErrorReporting custom attribute, it is not executed at all.
It seems Hangfire checks the attributes defined only on the registered type (the interface in my case), not on the instance type that will be resolved.
In my case IEmailSender is defined in a separate project. I believe one solution would be to keep it together with EmailSender and the custom attribute implementations, plus define the attributes at the interface level, but I would not like to do it this way, since my Hangfire jobs are processed in a Windows Service and the jobs themselves are enqueued by clients (using interfaces), so there is no need for the clients to know anything about the implementation.
Do you have any idea how I could solve this issue in a good way? Can we somehow configure those filters when creating the BackgroundJobServer in the Windows Service?
I solved it in this way:
https://gist.github.com/rwasik/80f1dc1b7bbb8b8a9b47192f0dfd4664
If you have any other ideas please let me know.

The best way to deliver result from Service binding in Android

How do I get a result from the Service if the task runs asynchronously?
If the task is started synchronously on the main thread, there is no problem:
Object result = serviceInstanceInActivity.callMethod();
But if the task runs in another thread, we have a problem:
void asyncMethodInService() {
    new MyTask().execute();
}

class MyTask extends AsyncTask<Void, Void, Result> {
    // implementation of the other methods

    @Override
    protected void onPostExecute(Result result) {
        // We need to send the data to the Activity here
    }
}
Of course, it works via ServiceConnection. In a usual class, I would use interfaces as callbacks, but if I do the same here, the Activity instance will be leaked in the Service via the callback.
So, what is the recommended way to deliver data in these cases?
I would use LocalBroadcastManager to broadcast an Intent containing the result, so that any interested activities can register for and receive the broadcast when the service completes its task.
If the data is complex and is not practical to pack into an Intent, you need to get a bit creative. You should probably use a ContentProvider and put a content URI into the intent which the activities can then use to query for the result. Or you might be able to store the result on a static singleton (or on your application instance) and just have the activity retrieve the updated value when it receives the broadcast. It depends on your requirements.
Hope that helps!

NHibernate + WCF + Windows Service and WcfOperationSessionContext class

I have a Windows Service application in which I create WCF services. One of the services is a data service: add, delete, read, and update data via WCF.
The WCF service uses NHibernate for data manipulation.
So my questions are:
Any advice (best practices) for session management for NHibernate used with WCF?
Does anybody know anything about the WcfOperationSessionContext (NHibernate 3.0) class? How do I use it with WCF?
Well, to make it concrete:
Suppose that I have a WCF service called DataServices:
class WCFDataService .....
{
    void SaveMyEntity(MyEntity entity)
    {
        .....................?? // How to do this? What is the best way?
        // Should I take one session and use it at all times?
        // Should I take a session and dispose of it when the operation finishes, then get
        // a new session for new operations?
        // If many clients call my WCF service function at the same time,
        // what may go wrong?
        // etc....
    }
}
And I need an NHibernateServiceProvider class:
class NHibernateServiceProvider ....
{
    // How do I get the session? What is the best way?
    ISession GetCurrentSession() { .... }
    void DisposeSession() { .... }
}
Best wishes.
P.S.: I have read similar entries here and on other web pages, but cannot see "concrete" answers.
The WcfOperationSessionContext, similar to ThreadStaticSessionContext and WebRequestSessionContext, is an implementation of a session context. The session context is used to bind (associate) an ISession instance to a particular context.
The session in the current context can be retrieved by calling ISessionFactory.GetCurrentSession().
You can find more information about session context here.
The WcfOperationSessionContext represents a context that spans the entire duration of a WCF operation. You still need to handle the binding of the session at the beginning of the operation and the unbinding/committing/disposal of the session at the end of the operation.
To get access to the begin/end actions in the wcf pipeline you need to implement a IDispatchMessageInspector. You can see a sample here.
Also, regarding WCF integration: if you use the ThreadStatic session context it will appear to work in development, but you will hit the wall in production when various components (e.g. authorization, authentication) from the WCF pipeline are executed on different threads.
As for best practices you almost nailed it: Use WcfOperationSessionContext to store the current session and the IDispatchMessageInspector to begin/complete your unit of work.
EDIT - to address the details you added:
If you configured WcfOperationSessionContext and do the binding/unbinding as I explained above, all you have to do is inject the ISessionFactory into your service and just use factory.GetCurrentSession(). I'll post a sample project if time permits.
Here is the sample project
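For reference, a rough sketch of the inspector-based binding could look like the following. It assumes the session factory is configured to use the wcf_operation session context; NHibernateMessageInspector is just an illustrative name, and wiring the inspector up through an endpoint/service behavior is omitted:
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;
using NHibernate;
using NHibernate.Context;

public class NHibernateMessageInspector : IDispatchMessageInspector
{
    private readonly ISessionFactory sessionFactory;

    public NHibernateMessageInspector(ISessionFactory sessionFactory)
    {
        this.sessionFactory = sessionFactory;
    }

    // Begin the unit of work: open a session, start a transaction and bind the
    // session to the current WCF operation context.
    public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
    {
        var session = sessionFactory.OpenSession();
        session.BeginTransaction();
        CurrentSessionContext.Bind(session);
        return null;
    }

    // End the unit of work: unbind, commit and dispose the session.
    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        var session = CurrentSessionContext.Unbind(sessionFactory);
        if (session == null) return;
        try
        {
            if (session.Transaction != null && session.Transaction.IsActive)
                session.Transaction.Commit();
        }
        finally
        {
            session.Dispose();
        }
    }
}
Inside a service operation you would then call sessionFactory.GetCurrentSession(), as described above.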
The model we use for managing NHibernate sessions with WCF is as follows:
1) We have our own ServiceHost class that inherits from System.ServiceModel.ServiceHost which also implements ICallContextInitializer. We add the service host instance to each of the operations in our service as follows:
protected override void InitializeRuntime()
{
    base.InitializeRuntime();
    foreach (ChannelDispatcher cd in this.ChannelDispatchers)
    {
        foreach (EndpointDispatcher ed in cd.Endpoints)
        {
            foreach (DispatchOperation op in ed.DispatchRuntime.Operations)
            {
                op.CallContextInitializers.Add(this);
            }
        }
    }
}

public void AfterInvoke(object correlationState)
{
    // We don't do anything after the invoke
}

public object BeforeInvoke(InstanceContext instanceContext, IClientChannel channel, Message message)
{
    OperationContext.Current.Extensions.Add(new SessionOperationContext());
    return null;
}
The BeforeInvoke simply makes sure that the OperationContext for each WCF call has its own session. We have found problems with IDispatchMessageInspector where the session is not available during response serialisation - a problem if you use lazy loading.
2) Our SessionOperationContext will then be called to attach itself and we use the OperationCompleted event to remove ourselves. This way we can be sure the session will be available for response serialisation.
public class SessionOperationContext : IExtension<OperationContext>
{
    public ISession Session { get; private set; }

    public static SessionOperationContext Current
    {
        get
        {
            OperationContext oc = OperationContext.Current;
            if (oc == null) throw new InvalidOperationException("Must be in an operation context.");
            return oc.Extensions.Find<SessionOperationContext>();
        }
    }

    public void Attach(OperationContext owner)
    {
        // Create the session and do anything else you require
        this.Session = ... // Whatever instantiation method you use

        // Hook into the OperationCompleted event which will be raised
        // after the operation has completed and the response serialised.
        owner.OperationCompleted += new EventHandler(OperationCompleted);
    }

    void OperationCompleted(object sender, EventArgs e)
    {
        // Tell WCF this extension is done
        ((OperationContext)sender).Extensions.Remove(this);
    }

    public void Detach(OperationContext owner)
    {
        // Close our session, do any cleanup, even auto commit
        // transactions if required.
        this.Session.Dispose();
        this.Session = null;
    }
}
We've used the above pattern successfully in high-load applications and it seems to work well.
In summary, this is similar to what the new WcfOperationSessionContext does (it wasn't around when we figured out the pattern above ;-)), but it also overcomes the issues surrounding lazy loading.
Regarding the additional questions asked: If you use the model outlined above you would simply do the following:
void SaveMyEntity(MyEntity entity)
{
    SessionOperationContext.Current.Session.Save(entity);
}
You are guaranteed that the session is always there and that it will be disposed once the WCF operation is completed. You can use transactions if required in the normal way.
Here is a post describing, in detail, all the steps for registering and using the WcfOperationSessionContext. It also includes instructions for using it with the agatha-rrsl project.
OK, after a few days of reading internet posts etc., all of the approaches shown on the internet seem to be wrong. When we use the UnitOfWork pattern with NHibernate 3+ and NHibernate transactions, all of these approaches produce exceptions. To test this and prove it, we need to create a test environment with an MSMQ transactional queue and a special interface with a OneWay operation contract that has transaction-required set on it. This approach should work like this:
1. We put a message in the queue transactionally.
2. The service gets the message from the queue transactionally.
3. Everything works; the queue is empty.
In some cases, not so obvious with the internet approaches, this does not work properly. So here are the approaches we tested that are wrong, and why:
Fabio Maulo's approach: use ICallContextInitializer - open the NH session/transaction in BeforeCall; after that WCF executes the service method, and in AfterCall the context initializer calls session.Flush + transaction.Commit. The session will automatically be saved when the transaction scope commits the operation. In the situation where calling transaction.Complete throws an exception, the WCF service will shut down! You might say: OK, so wrap transaction.Complete in a try/catch clause - great! - NO, wrong! Then the transaction scope will commit the transaction and the message will be taken from the queue, but the data will not be saved!
Another approach is to use IDispatchMessageInspector - yesterday I thought this was the best approach. Here we need to open the session/transaction in the AfterReceiveRequest method; after WCF invokes the service operation, BeforeSendReply is called on the message dispatch inspector. In this method we have information about the [reply], which in a OneWay operation is null, but it is filled with fault information if a fault occurred while invoking the service method. Great, I thought - this is it! But NO! The problem is that at this point in the WCF processing pipeline we have no transaction! So if transaction.Complete or session.Flush throws an error, we will have no data saved in the database and the message will not come back to the queue, which is wrong.
What is the solution?
IOperationInvoker and only this!
You need to implement this interface as a decorator over the default invoker. In the Invoke method, before the call we open the session/transaction, then we invoke the default invoker, and after that we call transaction.Complete; in the finally clause we call session.Flush. What types of problems this solves:
1. We have the transaction scope at this level, so when Complete throws an exception the message will go back to the queue and WCF will not shut down.
2. When the invocation throws an exception, transaction.Complete will not be called, which will not change the database state.
I hope this will clear up everyone's misinformation.
When I have some free time I will try to write an example.
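Until then, a rough sketch of the decorator idea, under the assumptions above (transactional MSMQ binding, ambient transaction, session exposed via the current-session context). NHibernateUnitOfWorkInvoker and the way it receives the factory are made up, only the synchronous path is shown, and here the flush happens just before Complete:
using System;
using System.ServiceModel.Dispatcher;
using System.Transactions;
using NHibernate;
using NHibernate.Context;

public class NHibernateUnitOfWorkInvoker : IOperationInvoker
{
    private readonly IOperationInvoker inner;          // the default invoker being decorated
    private readonly ISessionFactory sessionFactory;

    public NHibernateUnitOfWorkInvoker(IOperationInvoker inner, ISessionFactory sessionFactory)
    {
        this.inner = inner;
        this.sessionFactory = sessionFactory;
    }

    public object[] AllocateInputs() => inner.AllocateInputs();

    public object Invoke(object instance, object[] inputs, out object[] outputs)
    {
        // Open the session and join the ambient transaction (the one the MSMQ receive is
        // part of) before the call; Complete is only reached when the invocation and the
        // flush succeed, so on any failure the message goes back to the queue.
        using (var scope = new TransactionScope(TransactionScopeOption.Required))
        using (var session = sessionFactory.OpenSession())
        {
            CurrentSessionContext.Bind(session);        // lets the service reach the session
            try
            {
                var result = inner.Invoke(instance, inputs, out outputs);
                session.Flush();
                scope.Complete();
                return result;
            }
            finally
            {
                CurrentSessionContext.Unbind(sessionFactory);
            }
        }
    }

    // Async members simply delegate; this sketch only wraps synchronous operations.
    public IAsyncResult InvokeBegin(object instance, object[] inputs, AsyncCallback callback, object state)
        => inner.InvokeBegin(instance, inputs, callback, state);

    public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result)
        => inner.InvokeEnd(instance, out outputs, result);

    public bool IsSynchronous => inner.IsSynchronous;
}
The decorator would be installed by an operation/service behavior that replaces DispatchOperation.Invoker with this wrapper; that wiring is omitted here.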