ASP.NET Core 2.1: Handling ServiceScope usage with dependencies on every incoming WebSocket middleware message - asp.net-core

I have a performance issue using websockets on ASP.NET Core 2.1.
I have an implementation of websockets similar to this example:
https://radu-matei.com/blog/aspnet-core-websockets-middleware/
On every incoming websocket message I parse it, call some services, and send a message back over the websocket.
if (result.MessageType == WebSocketMessageType.Text)
{
    using (var scope = service.CreateScope())
    {
        var communicationService = scope.ServiceProvider.GetSomeService();
        await communicationService.HandleConnection(webSocket, result, buffer);
    }
}
So, as you can see, on every incoming message I am creating a new scope, getting the service provider, and then calling services via communicationService.HandleConnection. But when there are a lot of messages, my Azure Web Service CPU goes up to 100%.
Can someone tell me whether creating a scope like this on every socket message is correct?

It's hard to know for certain with the limited code snippet you provided (i.e. the lifetime and type of communicationService). Having said that, I would look at https://learn.microsoft.com/en-us/azure/app-service/faq-availability-performance-application-issues to capture a snapshot of your app service when the CPU spikes to 100%. You may discover the issue is unrelated to your using (scope).
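For reference, here is a minimal sketch of the per-message scope pattern using IServiceScopeFactory; ICommunicationService and scopeFactory are hypothetical stand-ins for whatever GetSomeService resolves and for an injected IServiceScopeFactory. This is the common way to resolve scoped services outside of an HTTP request, not necessarily a fix for the CPU spike:
// scopeFactory is an injected IServiceScopeFactory (a singleton, safe to capture in middleware).
// GetRequiredService<T>() comes from Microsoft.Extensions.DependencyInjection.
if (result.MessageType == WebSocketMessageType.Text)
{
    using (var scope = scopeFactory.CreateScope())
    {
        var communicationService =
            scope.ServiceProvider.GetRequiredService<ICommunicationService>();
        await communicationService.HandleConnection(webSocket, result, buffer);
    }
}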

Related

ServiceStack: Reinstate pipeline when invoking a Service manually?

As a follow-up to this question, I wanted to understand how my invoking of a Service manually can be improved. This became longer than I wanted, but I feel the background info is needed.
When doing a pub/sub (broadcast), the normal sequence and flow in the Messaging API isn't used, and I instead get a callback when a pub/sub message is received, using IRedisClient, IRedisSubscription:
_subscription.OnMessage = (channel, msg) =>
{
    onMessageReceived(ParseJsonMsgToPoco(msg));
};
The Action onMessageReceived will then, in turn, invoke a normal .NET/C# Event, like so:
protected override void OnMessageReceived(MyRequest request)
{
    OnMyEvent?.Invoke(this, new RequestEventArgs(request));
}
This works and I get my request, but I would like it to be streamlined into the other flow, the one in the Messaging API: the request should find its way into a Service class implementation, and all the normal boilerplate and dependency injection should take place just as it would when using the Messaging API.
So, in my Event handler, I manually invoke the Service:
private void Instance_OnMyEvent(object sender, RequestEventArgs e)
{
    using (var myRequestService = HostContext.ResolveService<MyRequestService>(new BasicRequest()))
    {
        myRequestService.Any(e.Request);
    }
}
and MyRequestService is indeed found, Any is called, and dependency injection works for the Service.
Question 1:
Methods such as OnBeforeExecute, OnAfterExecute etc. are not called unless I call them manually, e.g. myRequestService.OnBeforeExecute(e). What parts of the pipeline are lost? Can they be reinstated in some easy way, so I don't have to call each of them manually, in order?
Question 2:
I think I am messing up the DI system when I do this:
using (var myRequestService = HostContext.ResolveService<MyRequestService>(new BasicRequest()))
{
    myRequestService.OnBeforeExecute(e.Request);
    myRequestService.Any(e.Request);
    myRequestService.OnAfterExecute(e.Request);
}
The effect I see is that the injected dependencies I have registered with container.AddScoped aren't scoped, but behave like singletons. I can tell because I have a Guid inside the injected class, and that Guid is always the same in this case, when it should be different for each request.
container.AddScoped<IRedisCache, RedisCache>();
and the OnBeforeExecute (in a descendant to Service) is like:
public override void OnBeforeExecute(object requestDto)
{
    base.OnBeforeExecute(requestDto);
    IRedisCache cache = TryResolve<IRedisCache>();
    cache?.SetGuid(Guid.NewGuid());
}
So, the IRedisCache Guid should be different each time, but it isn't. It does work fine, however, when I use the Messaging API "from start to finish". It seems that if I call TryResolve in the AppHostBase descendant, AddScoped is ignored: an instance is placed in the container and never removed.
What parts of the pipeline are lost?
None of the request pipeline is executed:
myRequestService.Any(e.Request);
physically only invokes the Any C# method of your MyRequestService class; it doesn't (and can't) do anything else.
The recommended way for invoking other Services during a Service Request is to use the Service Gateway.
But if you want to invoke a Service outside of an HTTP Request you can use the RPC Gateway for executing non-trusted services, as it invokes the full Request Pipeline & converts HTTP Error responses into Typed Error Responses:
HostContext.AppHost.RpcGateway.ExecuteAsync()
For executing internal/trusted Services outside of a Service Request you can use HostContext.AppHost.ExecuteMessage, as used by ServiceStack MQ, which applies the Message Request/Response Filters, Service Action Filters & Events.
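As a rough sketch of that approach (assuming e.Request is your MyRequest DTO; the exact Message<T>/ExecuteMessage overloads may vary by ServiceStack version):
private void Instance_OnMyEvent(object sender, RequestEventArgs e)
{
    // Executes the request through the MQ pipeline (request/response filters,
    // service action filters & events) rather than only calling Any() directly.
    var response = HostContext.AppHost.ExecuteMessage(
        new ServiceStack.Messaging.Message<MyRequest>(e.Request));
}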
I have registered with container.AddScoped
Do not use Request Scoped dependencies outside of an HTTP Request; use Singleton if the dependencies are thread-safe, otherwise register them as Transient. If you need per-request storage, pass it in IRequest.Items.
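A minimal sketch of that registration advice, using the names from the question (whether RedisCache is actually thread-safe is an assumption you would need to verify):
// If RedisCache is thread-safe, register it as a singleton:
container.AddSingleton<IRedisCache, RedisCache>();

// Otherwise register it as transient so each resolution gets a fresh instance:
// container.AddTransient<IRedisCache, RedisCache>();

// Per-request values can travel with the request itself:
// req.Items["RequestGuid"] = Guid.NewGuid();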

protobuf-net.grpc client and .NET Core's gRPC client factory integration

I am experimenting with a gRPC service and client using proto files. The advice is to use gRPC client factory integration in .NET Core (https://learn.microsoft.com/en-us/aspnet/core/grpc/clientfactory?view=aspnetcore-3.1). To do this you register the client derived from Grpc.Core.ClientBase that is generated by the Grpc.Tools package, like this:
Host.CreateDefaultBuilder(args)
    .ConfigureServices((hostContext, services) =>
    {
        services.AddGrpcClient<MyGrpcClientType>(o =>
        {
            o.Address = new Uri("https://localhost:5001");
        });
    })
My understanding is that MyGrpcClientType is registered with DI as a transient client, meaning a new one is created each time it is injected, but that the client is integrated with the HttpClientFactory, allowing the channel to be reused rather than be created each time.
Now, I would like to use protobuf-net.grpc to generate the client from an interface, which appears to be done like this:
GrpcClientFactory.AllowUnencryptedHttp2 = true;
using var http = GrpcChannel.ForAddress("http://localhost:10042");
var calculator = http.CreateGrpcService<ICalculator>();
If I am correct in thinking that channels are expensive to create, but clients are cheap, how do I achieve integration with the HttpClientFactory (and so reuse of the underlying channel) using protobuf-net.grpc? The above appears to create a GrpcChannel each time I want a client, so what is the correct approach to reusing channels?
Similarly, is it possible to register the protobuf-net.grpc generated service class with the below code in ASP.Net Core?
endpoints.MapGrpcService<MyGrpcServiceType>();
(Please correct any misunderstandings in the above)
Note that you don't need AllowUnencryptedHttp2 - that's only needed if you aren't using https, and you seem to be using https.
On the "similarly"; that should already work - the only bit you might be missing is the call to services.AddCodeFirstGrpc() (usually in Startup.cs, via ConfigureServices).
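For illustration, a minimal sketch of that server-side wiring (MyGrpcServiceType stands in for the code-first service implementation from the question):
// In Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddCodeFirstGrpc(); // from the protobuf-net.Grpc.AspNetCore package
}

public void Configure(IApplicationBuilder app)
{
    app.UseRouting();
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapGrpcService<MyGrpcServiceType>();
    });
}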
As for the AddGrpcClient; I would have to investigate. That isn't something that I've explored in the integrations so far. It might be a new piece is needed.
Client Factory support now exists and works exactly as documented there, except that you register the client with:
services.AddCodeFirstGrpcClient<IMyService>(o =>
{
    o.Address = new Uri("...etc...");
});
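Assuming the registration behaves like the standard gRPC client factory (the interface is registered in DI and backed by HttpClientFactory-managed channels), the client can then be injected directly; the consumer class and the MultiplyAsync/MultiplyRequest/MultiplyResult names below are purely illustrative:
public class CalculatorConsumer
{
    private readonly ICalculator _calculator;

    // ICalculator is resolved from DI; the underlying channel is reused by the factory.
    public CalculatorConsumer(ICalculator calculator) => _calculator = calculator;

    public Task<MultiplyResult> MultiplyAsync(MultiplyRequest request)
        => _calculator.MultiplyAsync(request);
}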

Using TAP progress reporting in a WCF service

I (new to WCF) am writing a WCF service that acquires and analyzes an X-ray spectrum - i.e. it is a long-running process, sometimes several minutes. Naturally, this calls for asynchronous calls, so I am using wsDualHttpBinding and defining the following in my ServiceContract:
[ServiceContract(Namespace="--removed--",
    SessionMode=SessionMode.Required, CallbackContract=typeof(IAnalysisSubscriber))]
public interface IAnalysisController
{
    // Simplified - removed other declarations for clarity
    [OperationContract]
    Task<Measurement> StartMeasurement(MeasurementRequest request);
}
And the (simplified) implementation has
public async Task<Measurement> StartMeasurement(MeasurementRequest request)
{
    m_meas = m_config.GetMeasurement(request);
    Spectrum sp = await m_mca.Acquire(m_meas.AcquisitionTime, null);
    UpdateSpectrum(m_meas, sp);
    return m_meas;
}
private void McaProgress(Spectrum sp)
{
    m_client.ReportProgress(sp);
}
Where m_client is the callback object obtained from m_client = OperationContext.Current.GetCallbackChannel<IAnalysisSubscriber>(); in the "Connect" method, which is called when the WCF client first connects. This works as long as I don't use progress reporting, but as soon as I add progress reporting by changing the null in the m_mca.Acquire() call to new Progress<Spectrum>(McaProgress), the client generates an error on the first progress report: "The server did not provide a meaningful reply; this might be caused by a contract mismatch..."
I understand the client is probably awaiting a particular reply of a Task rather than having a callback made into it, but how do I implement this type of progress reporting with WCF? I would like the client to be able to see the live spectrum as it is generated and get an estimate of the time remaining to complete the spectrum acquisition. Any help or pointers to where someone has implemented this type of progress reporting with WCF is much appreciated (I've been searching but find mostly references to EAP or APM and WCF and not much related to TAP).
Thanks, Dave
Progress<T> wasn't really meant for use in WCF. It was designed for UI apps, and may behave oddly depending on your host (e.g., ASP.NET vs self-hosted).
I would recommend writing a simple IProgress<T> implementation that just calls IAnalysisSubscriber.ReportProgress directly. Also make sure that IAnalysisSubscriber.ReportProgress has [OperationContract(IsOneWay = true)].
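A minimal sketch of that idea, reusing the Spectrum and IAnalysisSubscriber types from the question (the class name is illustrative):
// Forwards progress straight to the WCF callback channel, with no SynchronizationContext capture.
public sealed class CallbackProgress : IProgress<Spectrum>
{
    private readonly IAnalysisSubscriber m_client;

    public CallbackProgress(IAnalysisSubscriber client) => m_client = client;

    public void Report(Spectrum value) => m_client.ReportProgress(value);
}

// On the callback contract, mark the progress operation as one-way:
// [OperationContract(IsOneWay = true)]
// void ReportProgress(Spectrum spectrum);

// Usage in StartMeasurement:
// Spectrum sp = await m_mca.Acquire(m_meas.AcquisitionTime, new CallbackProgress(m_client));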

WCF Comet Implementation

I have a requirement for real-time updates on the web client (ASP.NET MVC). The only way I can see to achieve this is to implement the Comet technique (server push / reverse AJAX).
The scenario is:
User A saves a message in one web client. User B then automatically gets the update made by User A in a different browser.
I have implemented a solution with this architecture:
ASP.NET MVC - does a jQuery AJAX (POST) request (long-polled) to the WCF service.
WCF - polls the database (SQL Server) at a 1-second interval. If new data has been added to the database, the polling loop breaks and the data is returned to the client.
WCF COMET Method Pseudo Code:
private Message[] GetMessages(System.Guid userID)
{
    var messages = new List<Message>();
    var found = false;
    /* declare connection, command */
    while (!found)
    {
        try
        {
            /* open connection - connection.Open(); */
            /* do some database access here */
            /* close connection - connection.Close(); */
            /* if returned record > 0 then process the Message and save to messages variable */
            /* sleep thread : System.Threading.Thread.Sleep(1000); */
            found = true;
        }
        finally
        {
            /* just to be sure, in case of exception */
            /* close connection - connection.Close(); */
        }
    }
    return messages.ToArray();
}
My concern and question: is polling in WCF (with a 1-second interval) the best approach?
Reason: I am maximizing the use of database connection pooling, and I expect no issues with that technique.
Note: this is a multi-threaded implementation, using the WCF attributes below.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall, ConcurrencyMode = ConcurrencyMode.Multiple, UseSynchronizationContext = true)]
I'd recommend using a dedicated realtime server (i.e. don't host the WCF service in IIS) or using a hosted service for realtime updates. As Anders says, IIS isn't all that great at handling multiple long-running concurrent requests.
I'd also suggest you look at using a solution which uses WebSockets with support for fallback solutions such as Flash, HTTP streaming, HTTP long-polling and then possibly polling. WebSockets are the first standardised method of full duplex bi-directional communication between a client and a server and will ultimately deliver a better solution to any such problems.
For the moment implementing your own Comet/WebSockets realtime solution is definitely going to be a time consuming task (as you may have already found), especially when building a public facing app where it could be accessed by users with a multitude of different browsers.
With this in mind the XSockets project looks very interesting as does the SuperWebSocket project.
Of the .NET Comet solutions I know of, FrozenMountain has a WebSync server for IIS. There is also PokeIn.
I've compiled a list of realtime web technologies that may also be useful.
Instead of polling the database, can't you have an event sent when updating? That's the way I've implemented pub/sub scenarios and it works great.
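As a rough illustration of that idea (purely a sketch with made-up names; in a real system the notification would need to cross process boundaries, e.g. via a message bus or SQL Server query notifications): writers raise an event when they save, and waiting long-poll requests complete from that event instead of re-querying every second.
public static class MessageNotifier
{
    // Raised by whatever code saves a new message.
    public static event Action<Message> MessageSaved;

    public static void Publish(Message message) => MessageSaved?.Invoke(message);

    // A long-poll request awaits the next message instead of polling the database.
    public static Task<Message> WaitForNextAsync()
    {
        var tcs = new TaskCompletionSource<Message>();
        Action<Message> handler = null;
        handler = m =>
        {
            MessageSaved -= handler;
            tcs.TrySetResult(m);
        };
        MessageSaved += handler;
        return tcs.Task;
    }
}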

Slow MSMQ within a WCF service

this is a weird thing.
I created a simple SOAP-based web service with WCF. When the SubmitTransaction method is called, the transaction is passed on to an application service. If the application service is not available, the transaction is written to an MSMQ queue instead.
Like this:
public void SubmitTransaction(someTransaction)
{
    try
    {
        // pass transaction data to application
    }
    catch(SomeError)
    {
        // write to MSMQ
    }
}
So when an error occurs, the transaction is written to the queue. Now, when using the MSMQ API directly in my WCF service, everything is fine. Each call takes a few milliseconds.
E.g.:
...
catch(SomeError)
{
    // write to MSMQ
    var messageQueue = new MessageQueue(queuePath);
    try
    {
        messageQueue.Send(accountingTransaction, MessageQueueTransactionType.Single);
    }
    finally
    {
        messageQueue.Close();
    }
}
But since I want to use the message queue functionality at some other points of the system as well, I created a new assembly that takes care of the message queue writing.
Like:
...
catch(SomeError)
{
    // write to MSMQ
    var messageQueueService = new MessageQueueService();
    messageQueueService.WriteToQueue(accountingTransaction);
}
Now, with this setup, the web service is suddenly very slow. From the above-mentioned few milliseconds, each call now takes up to 4 seconds, just because the message queue code is encapsulated in a new assembly. The logic is exactly the same. Does anyone know what the problem could be?
Thanks!
Ok, now I know. It has something to do with my logging setup (log4net). I'll have to check that first. Sorry for stealing your time..
You have two new lines of code here:
var messageQueueService = new MessageQueueService();
messageQueueService.WriteToQueue(accountingTransaction);
Do you know which of the two is causing the problem? Perhaps add some logging, or profiling, or step through in a debugger to see which one seems slow.
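For instance, a quick way to see which line is slow (just a sketch using Stopwatch; the variable names mirror the snippet above):
var sw = System.Diagnostics.Stopwatch.StartNew();
var messageQueueService = new MessageQueueService();
var constructMs = sw.ElapsedMilliseconds;   // time spent constructing the service

sw.Restart();
messageQueueService.WriteToQueue(accountingTransaction);
var writeMs = sw.ElapsedMilliseconds;       // time spent writing to the queue

// Log both values (e.g. via log4net) and compare; if construction is the slow part,
// the delay is likely in static initialization of the new assembly (config, logging, etc.).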