I have a requirement for real-time updates on the web client (ASP.NET MVC). The only way I can see to achieve this is to implement the COMET technique (server push / reverse AJAX).
The scenario is:
User A saves a message in one website client. User B then automatically receives the update made by User A in a different browser.
I have actually finished the solution with this architecture:
ASP.NET MVC - makes a jQuery AJAX (POST) request (long-polled) to the WCF service.
WCF - polls the database (SQL Server) at a 1-second interval. If new data has been added to the database, the polling loop breaks and the data is returned to the client.
WCF COMET Method Pseudo Code:
private Message[] GetMessages(System.Guid userID)
{
    var messages = new List<Message>();
    var found = false;
    /* declare connection, command */
    while (!found)
    {
        try
        {
            /* open connection - connection.Open(); */
            /* do some database access here */
            /* if the query returned records, process them and add them to `messages` */
            if (messages.Count > 0)
            {
                /* new data arrived: break out of the long poll and return it */
                found = true;
            }
            else
            {
                /* nothing new yet: wait before the next poll - System.Threading.Thread.Sleep(1000); */
            }
        }
        finally
        {
            /* always close the connection, even when an exception is thrown - connection.Close(); */
        }
    }
    return messages.ToArray();
}
My concern and question is: is polling the database from WCF (at a 1-second interval) the best approach?
Reason: I am relying heavily on database connection pooling, so I expect the repeated open/close cycle itself not to be an issue.
Note: This is a multi-threaded implementation, using the WCF attributes below.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall, ConcurrencyMode = ConcurrencyMode.Multiple, UseSynchronizationContext = true)]
I'd recommend using a dedicated realtime server (i.e. don't host the WCF service in IIS) or using a hosted service for realtime updates. As Anders says, IIS isn't all that great at handling multiple long-running concurrent requests.
I'd also suggest you look at using a solution which uses WebSockets with support for fallback solutions such as Flash, HTTP streaming, HTTP long-polling and then possibly polling. WebSockets are the first standardised method of full duplex bi-directional communication between a client and a server and will ultimately deliver a better solution to any such problems.
For the moment, implementing your own Comet/WebSockets realtime solution is definitely going to be a time-consuming task (as you may have already found), especially when building a public-facing app that could be accessed by users with a multitude of different browsers.
With this in mind the XSockets project looks very interesting as does the SuperWebSocket project.
The .NET Comet solutions I know of are from FrozenMountain, which has a WebSync server for IIS. There is also PokeIn.
I've compiled a list of realtime web technologies that may also be useful.
Instead of polling the database, can't you have an event sent when updating instead? That's the way I've implemented Pub/Sub scenarios anyway, and it works great.
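A minimal in-process sketch of that idea, assuming the writer and the long-poll endpoint live in the same service (MessageBus and its member names are illustrative, not from the original post): the save path signals a wait handle, so GetMessages blocks on the event instead of sleeping and re-querying every second.
using System;
using System.Collections.Concurrent;
using System.Threading;

// Hypothetical in-process signal: writers wake waiting long polls directly.
public static class MessageBus
{
    private static readonly ConcurrentDictionary<Guid, AutoResetEvent> waiters =
        new ConcurrentDictionary<Guid, AutoResetEvent>();

    // Called inside GetMessages in place of Thread.Sleep(1000):
    // blocks until a writer signals, or until the long poll times out.
    public static bool WaitForNewMessages(Guid userId, TimeSpan timeout)
    {
        var handle = waiters.GetOrAdd(userId, _ => new AutoResetEvent(false));
        return handle.WaitOne(timeout);
    }

    // Called right after User A's message is saved to the database.
    public static void Publish(Guid userId)
    {
        AutoResetEvent handle;
        if (waiters.TryGetValue(userId, out handle))
        {
            handle.Set(); // wakes the blocked GetMessages call, which then re-queries once
        }
    }
}
This only works while everything runs in one process; across processes you would need something like SqlDependency or a message broker instead.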
Related
I have a performance issue using WebSockets on ASP.NET Core 2.1.
To start with, I have an implementation of WebSockets similar to this example:
https://radu-matei.com/blog/aspnet-core-websockets-middleware/
On every incoming WebSocket message, I parse it, call some services, and send a message back over the WebSocket.
if (result.MessageType == WebSocketMessageType.Text)
{
    // One DI scope per incoming message, disposed as soon as the message is handled.
    using (var scope = service.CreateScope()) // `service` is the root IServiceProvider
    {
        // GetSomeService() stands in for resolving the actual scoped service,
        // e.g. scope.ServiceProvider.GetRequiredService<ICommunicationService>()
        var communicationService = scope.ServiceProvider.GetSomeService();
        await communicationService.HandleConnection(webSocket, result, buffer);
    }
}
So, as you can see, on every incoming message I create a new scope, get the service provider, and then call the service's HandleConnection method. But when there are a lot of messages, my Azure Web Service CPU goes up to 100%.
Can someone tell me whether creating these scopes on every socket message is correct?
It's hard to know for certain with the limited code snippet you provided (i.e. the lifetime and type of communicationService). Having said that, I would look at https://learn.microsoft.com/en-us/azure/app-service/faq-availability-performance-application-issues to capture a snapshot of your app service when the CPU spikes to 100%. You may discover the issue is unrelated to your using (scope).
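If you want to rule the scope creation in or out before reaching for the Azure diagnostics, one rough way is to time it separately from the service call; a sketch using the question's own names (service and GetSomeService are the asker's placeholders):
var sw = System.Diagnostics.Stopwatch.StartNew();
using (var scope = service.CreateScope())
{
    long scopeTicks = sw.ElapsedTicks; // scope creation is normally microseconds

    var communicationService = scope.ServiceProvider.GetSomeService();
    await communicationService.HandleConnection(webSocket, result, buffer);

    // If the total dwarfs scopeTicks, the CPU cost is in the handler, not the scoping.
    Console.WriteLine($"scope: {scopeTicks} ticks, total: {sw.ElapsedTicks} ticks");
}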
I have a WCF web service hosted on IIS 7. This web service serves up JSON content for use in mobile apps. It uses the Entity Framework on top of MS SQL 2005, and the interface contract looks like this:
[ServiceContract]
public interface MyService
{
[WebInvoke(Method = "GET", UriTemplate = "/GetStuff?skip={skip}&take={take}&loanAmt={loanAmt}&propertyVal={propertyVal}&Term={Term}&MonthlyRent={MonthlyRent}", RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
[OperationContract]
ProductsDTO GetProducts(int skip, int take, decimal loanAmt, decimal propertyVal, int Term, decimal MonthlyRent);
}
The implementation looks like this:
public ProductsDTO GetProducts(int skip, int take, decimal loanAmt, decimal propertyVal, int Term, decimal MonthlyRent)
{
//Some set up code
using (MyEntities context = new MyEntities())
{
//Get our products
}
return ReturnedList;
}
On the first run, it can take anywhere up to 15 seconds (unacceptable for a mobile app); on subsequent runs, the data comes back in under a second. After 5 minutes of inactivity, the WCF service reverts to taking 15 seconds again.
I initially thought the bottleneck was IIS 7 and that my app pool was shutting down. After setting the app pool to never recycle and studying the w3wp.exe processes on the IIS server, I realized this was not the case. It is the database session that is shutting down after those five minutes.
That being the case, I want to hold a SQL session open to immediately serve up requests from the WCF application. However, I don't want to make the Entity context a singleton or leave it open in the service, as I understand this is poor practice. I could pass a SQL connection object TO the context (using (MyEntities context = new MyEntities(MySQLConnection))) and hold that open? Or can someone suggest something else?
I had seen lots of posts with scripts that touch the web service to keep it alive, which gave me the creeping horrors, so I have avoided going down that route.
What are your thoughts?
Update 1
As per Andomar's response, I have added the following initialization code to the WCF service.
// Requires: using System.ComponentModel; using System.Timers;
public Service1()
{
    // Blocking call that initializes the service instance
    this.Initialize();
}

BackgroundWorker KeepSQLAlive = new BackgroundWorker();
Timer pingback; // kept as a field so the timer is not garbage collected after DoWork returns

private void Initialize()
{
    KeepSQLAlive.DoWork += new DoWorkEventHandler(KeepSQLAlive_DoWork);
    KeepSQLAlive.RunWorkerAsync();
}

void KeepSQLAlive_DoWork(object sender, DoWorkEventArgs e)
{
    pingback = new Timer(180000); // fire every 3 minutes
    pingback.Elapsed += new ElapsedEventHandler(pingback_Elapsed);
    pingback.Start();
}

void pingback_Elapsed(object sender, ElapsedEventArgs e)
{
    // Trivial query whose only job is to keep a pooled connection warm
    using (MyEntities context = new MyEntities())
    {
        context.ExecuteStoreCommand("select @@servername");
    }
}
This feels like a bit of a fudge, and I'm a bit worried that, without creating the service as a singleton, it will keep spawning SQL sessions without ever killing any. However, if it works, it is the path of least resistance: hosting this code in a Windows service would take more time, and I am unclear how to integrate that with IIS security (I want to use SSL). I will report back on whether the above works. (Thanks for the assistance, all.)
You could add a background thread that runs select @@servername (or another trivial query) every minute. That should keep the connection pool warm.
A web service shouldn't run background threads, so:
Write a Windows service with a WCF contract as a proxy between IIS and the database.
Make sure you are using connection pooling in your Windows service.
Query something from the DB every 30 seconds / 1 minute / 5 minutes (the right interval has to be tested) from your Windows service. After the query, remember to close your connection (this will not close the real connection, but will return it to the pool). This keeps at least one active connection in the pool, ready for your requests.
Use pipes (NetNamedPipeBinding) between the WCF service on IIS and the Windows service; it's fast (see the sketch after this list).
Think about publishing an already-compiled (precompiled) application to your IIS.
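A minimal self-hosted sketch of that pipe link, assuming a shared contract between the two processes (IDatabaseProxy and the addresses are made up for illustration):
using System;
using System.ServiceModel;

[ServiceContract]
public interface IDatabaseProxy
{
    [OperationContract]
    string GetData(int id);
}

public class DatabaseProxy : IDatabaseProxy
{
    public string GetData(int id)
    {
        // Query the database here; the pooled connection stays warm in this process.
        return "...";
    }
}

class Program
{
    static void Main()
    {
        // Inside the Windows service: expose the proxy over a named pipe.
        using (var host = new ServiceHost(typeof(DatabaseProxy),
            new Uri("net.pipe://localhost/DatabaseProxy")))
        {
            host.AddServiceEndpoint(typeof(IDatabaseProxy),
                new NetNamedPipeBinding(), string.Empty);
            host.Open();
            Console.ReadLine(); // a real Windows service would do this in OnStart/OnStop
        }
    }
}
On the IIS side, the WCF service would then call through a ChannelFactory<IDatabaseProxy> bound to the same net.pipe address.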
I have 2 WCF services implemented in C# that test the client-server interaction of a 3rd-party application. Let's say I have a server-side tester interface for the WCF test service (I skipped the attributes and simplified the interfaces):
interface IServerTester
{
bool Start();
}
And a client side one:
interface IClientTester
{
bool Start();
}
The purpose of these methods is merely to start the server and the client of the 3rd-party application. I am using NUnit to test it. At the top level it looks like a C# transaction script, where I first start a server, then a client, and lastly verify that they are communicating.
Later, I want to easily add more clients (start more than one), so I need to add more WCF calls to IClientTester in my transaction script.
I can do something like this, with each client having its own endpoint:
//Start server
//start client 1
//start client 2
//...
//start client N
I will need to reuse this code in many other tests, but it seems a rather long-winded solution. Is there a better idea, or perhaps a pattern I can adopt? Many thanks!
I'm not sure I completely followed your question, but it sounds to me like a need for Pub/Sub. It sounds like when the server starts, you want 1:M clients to be notified and also start, correct? If so, then the server could publish a message or event that the "clients" all subscribe to. You would not need to modify anything to add new clients, simply subscribe to the message or event in the new client implementation.
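A minimal sketch of that shape with plain C# events, as an in-process illustration of the pattern (ServerStarted and the tester classes are invented names; over WCF you would get the same effect with duplex callback contracts or a message bus):
using System;

// The server-side tester publishes a "started" event.
public class ServerTester
{
    public event EventHandler ServerStarted;

    public bool Start()
    {
        // ... start the 3rd-party server here ...
        ServerStarted?.Invoke(this, EventArgs.Empty);
        return true;
    }
}

// Each client tester subscribes; adding client N is just one more subscription,
// with no change to the transaction script itself.
public class ClientTester
{
    public void SubscribeTo(ServerTester server)
    {
        server.ServerStarted += (s, e) => Start();
    }

    public bool Start()
    {
        // ... start one 3rd-party client here ...
        return true;
    }
}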
I'm putting together a WCF service using net.tcp and netTcpBinding to get duplex comms with my Silverlight client. I call into the service from the Silverlight app, and the service calls out to another server, passing it a callback method in the WCF service class. The remote server calls back several times, and each time it does, the WCF service uses the callback channel to send the data to the Silverlight client. It all works nicely most of the time.
If the user puts in a big request, I get a TimeoutException after a large number of callbacks have already worked. (Clearly, there's some work to do elsewhere to prevent this but I'd like to robustify the service, first.)
I was expecting to do some kind of 'if (client.ConnectionState == faulted)' check before trying to call back to the Silverlight client but I can't seem to find the object that holds the state of the connection. Is there one? Am I approaching this from the wrong side?
This is my first venture into net.tcp and duplex services. I just moved house and my WCF bible is still in a box. Somewhere. :-) So, I can't do my usual background reading.
Any pointers would be gratefully received. Here's some bare code in case my description is too soupy:
private IActiveDirectoryClient client;
private AsyncSearchRunner runner;
public void Search(Request request)
{
this.client = OperationContext.Current.GetCallbackChannel<IActiveDirectoryClient>();
runner = new AsyncSearchRunner();
runner.Run(request.SearchRoot, request.SearchFilter, request.PageSize,
System.DirectoryServices.Protocols.SearchScope.Subtree, SendObjects);
}
private void SendObjects(List<DirectoryObject> items)
{
Response response = new Response();
response.DirectoryObjects = items.ToArray();
client.SendResponse(response);
}
Yes, there is a State property that is defined in the ClientBase<> class (all the proxy classes are derived from ClientBase<>).
There are some proxy wrappers out there that handle fault states of the connection and re-establish connections as needed. Google for "wcf proxy wrapper".
You can also home-brew something if you use some kind of ServiceLocator pattern.
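For the duplex case above, the callback channel returned by GetCallbackChannel<T>() also implements ICommunicationObject, so one way to guard each callback is a state check; a sketch reusing the question's client field and types:
using System.Collections.Generic;
using System.ServiceModel;

private void SendObjects(List<DirectoryObject> items)
{
    // Callback channels implement ICommunicationObject, which exposes State.
    var channel = (ICommunicationObject)client;
    if (channel.State != CommunicationState.Opened)
    {
        // Faulted or closed: stop pushing results to this client.
        return;
    }

    Response response = new Response();
    response.DirectoryObjects = items.ToArray();
    client.SendResponse(response);
}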
I'm building a web application using WCF that will be consumed by other applications as a service. Our app will be installed on a farm of web servers and load balanced for scalability purposes. Occasionally we run into problems specific to one web server, and we'd like to be able to determine from the response which web server processed the request, and possibly timing information as well. For example: this request was processed by WebServer01 and took 200 ms to finish.
The first solution that came to mind was to build an ISAPI filter to add an HTTP header that stores this information in the response. This strikes me as the kind of thing somebody must have done before. Is there a better way to do this or an off-the-shelf ISAPI filter that I can use for this?
Thanks in advance
WCF offers much nicer extension points than ISAPI filters. You could e.g. create a client side message inspector that gets called just before the message goes out to the server, and then also gets called when the response comes back, and thus you could fairly easily measure the time needed for a service call from a client perspective.
Check out the IClientMessageInspector interface - that might be what you're looking for. Also see this excellent blog entry on how to use this interface.
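A rough sketch of such a timing inspector (illustrative only; you attach it to the client endpoint via an IEndpointBehavior that adds it to clientRuntime.MessageInspectors):
using System.Diagnostics;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class TimingInspector : IClientMessageInspector
{
    // Runs just before the request leaves the client.
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        // Whatever is returned here comes back as correlationState below.
        return Stopwatch.StartNew();
    }

    // Runs when the reply comes back from the server.
    public void AfterReceiveReply(ref Message reply, object correlationState)
    {
        var stopwatch = (Stopwatch)correlationState;
        Debug.WriteLine("Service call took {0} ms", stopwatch.ElapsedMilliseconds);
    }
}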
Marc
I don't have a ready-made solution for you but I can point you towards IHttpModule.
See code in Instrument and Monitor Your ASP.NET Apps Using WMI and MOM 2005 for example.
// Excerpt from the article's IHttpModule; the counter collections and threshold fields are defined elsewhere in the article.
private DateTime startTime;

public void Init(HttpApplication context)
{
    // Wire the module into the request pipeline.
    context.BeginRequest += new EventHandler(context_BeginRequest);
    context.EndRequest += new EventHandler(context_EndRequest);
}

private void context_BeginRequest(object sender, EventArgs e)
{
    startTime = DateTime.Now;
}

private void context_EndRequest(object sender, EventArgs e)
{
    // Increment corresponding counter
    string ipAddress = HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"];
    if (HttpContext.Current.Request.IsAuthenticated)
        authenticatedUsers.Add(ipAddress);
    else
        anonymousUsers.Add(ipAddress);

    // Fire excessively long request event if necessary
    int duration = (int)DateTime.Now.Subtract(startTime).TotalMilliseconds;
    if (duration > excessivelyLongRequestThreshold)
    {
        string requestPath = HttpContext.Current.Request.Path;
        new AspNetExcessivelyLongRequestEvent(applicationName, duration, requestPath).Fire();
    }
}
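And on the original ask (stamping which server handled a WCF request), an alternative to an ISAPI filter is a server-side IDispatchMessageInspector that adds HTTP response headers. A rough sketch, with made-up header names:
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class ServerStampInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request,
        IClientChannel channel, InstanceContext instanceContext)
    {
        // Start timing; returned value arrives in BeforeSendReply as correlationState.
        return System.Diagnostics.Stopwatch.StartNew();
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        var stopwatch = (System.Diagnostics.Stopwatch)correlationState;

        // Attach the web server name and elapsed time as HTTP response headers.
        HttpResponseMessageProperty httpResponse;
        object prop;
        if (reply.Properties.TryGetValue(HttpResponseMessageProperty.Name, out prop))
        {
            httpResponse = (HttpResponseMessageProperty)prop;
        }
        else
        {
            httpResponse = new HttpResponseMessageProperty();
            reply.Properties.Add(HttpResponseMessageProperty.Name, httpResponse);
        }

        httpResponse.Headers["X-Served-By"] = System.Environment.MachineName;
        httpResponse.Headers["X-Duration-Ms"] = stopwatch.ElapsedMilliseconds.ToString();
    }
}
The inspector is registered through an IServiceBehavior or endpoint behavior that adds it to each DispatchRuntime's MessageInspectors collection.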