WCF Catastrophic Failure

I've got a real lemon on my hands. I hope someone who has had the same problem, or knows how to fix it, can point me in the right direction.
The Setup
I'm trying to create a WCF data service that uses an ADO.NET Entity Framework model to retrieve data from the DB. I've added the WCF service reference and all seems fine. I have two sets of data service calls. The first retrieves a list of all "users"; this list does not include any dependent data (e.g. address, contact, etc.). The second call is made when a "user" is selected: given a user id, the application requests a few more pieces of dependent information such as address, contact details, messages, etc. This also seems to work fine.
The Lemon
After a few user selection changes, i.e. calls for more dependent data from the data service, the application stops responding.
Crash error:
The request channel timed out while waiting for a reply after 00:00:59.9989999. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout.
I restart the debugging process, but the application will not make any data service calls; after about a minute or so, VS 2008 displays a message box with the error:
Unable to process request from service. 'http://localhost:61768/ConsoleService.svc'. Catastrophic failure.
I've Googled the hell out of this error and related issues but found nothing of use.
Possible Solutions
I've found some leads as to the source of the problem. In the client's app.config:
maxReceivedMessageSize: set to a higher value, e.g. 5242880.
receiveTimeout: set to a higher value, e.g. 00:30:00.
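(For reference, the same values can also be set programmatically on the client binding. A minimal sketch, assuming the default wsHttpBinding and the generated ServiceNS.ServiceClient proxy with the localhost address from the error above:)

var binding = new WSHttpBinding();
binding.MaxReceivedMessageSize = 5242880;          // bytes
binding.ReceiveTimeout = TimeSpan.FromMinutes(30);
binding.SendTimeout = TimeSpan.FromMinutes(5);     // the timeout named in the crash error
var client = new ServiceNS.ServiceClient(binding,
    new EndpointAddress("http://localhost:61768/ConsoleService.svc"));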
I've tried these but all in vain. I suspect there is an underlying problem that cannot be fixed by simply changing some numbers. Any leads would be much appreciated.

I've solved it =P.
Cause
The WCF service works fine; it was the data service calls that were the culprit. Every time I made a call, I instantiated a new reference to the data service but never closed/disposed it. After a couple of calls, the data service reached its maximum number of connections and halted.
Solution
Make sure to close/dispose of any data service reference properly. Best practice is to enclose it in a using statement:
using (var dataService = new ServiceNS.ServiceClient())
{
    // Use the service here.
}
// The client is disposed here and the connection freed.

Glad to see you fixed your problem.
However, you need to be careful with the using statement. Have a look at this article:
http://msdn.microsoft.com/en-us/library/aa355056.aspx
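The gist of that article: Dispose() on a WCF client calls Close(), which can itself throw if the channel is faulted, masking the original exception. A common alternative pattern (a sketch, reusing the ServiceNS.ServiceClient proxy from above):

var client = new ServiceNS.ServiceClient();
try
{
    // Use the service here.
    client.Close();
}
catch (CommunicationException)
{
    client.Abort();   // transitions the faulted channel to closed without throwing
}
catch (TimeoutException)
{
    client.Abort();
}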

Related

Redis-backed session state not saving everything

We are trying to move from server session (IIS) to Redis-backed session. I updated my web.config with the custom sessionState configuration. I'm finding that only SOME of my key/value pairs are being saved. Of the 5 I expect to be in there, there are only 2. I verified all my code is ultimately hitting HttpContext.Current.Session.Add. I verified that my POCOs are marked as serializable. Looking at monitoring, I see that it adds the first two pairs, then everything after that just doesn't make it. No hit, no rejection, no exceptions. Nothing.
Anyone ever see this? Know where I could start to look to resolve?
TIA,
Matt
Update 1: I've switched to using a JSON serializer to store my data. Same thing. Doesn't seem to be a serialization issue.
Update 2: I've now downloaded the source code, compiled it, and am debugging it. The method SetAndReleaseItemExclusive, which seems to send the session items to Redis, is only hit once, though it should be hit more than once as my web site handles SSO and bounces from page to page to load the user and such. Have to investigate why it's only firing once...
Figured it out. Turns out that my AJAX request to an "API" endpoint within my MVC app did not have the appropriate session state attached, so SetAndReleaseItemExclusive was never called. Adding this fixed it:
protected void Application_PostAuthorizeRequest()
{
    // API calls don't get full session state by default; opt this endpoint in.
    if (HttpContext.Current.Request.Url.LocalPath == "/api/user/load")
        HttpContext.Current.SetSessionStateBehavior(System.Web.SessionState.SessionStateBehavior.Required);
    else
        HttpContext.Current.SetSessionStateBehavior(System.Web.SessionState.SessionStateBehavior.ReadOnly);
}

Capture start of long running POST VB.net MVC4

I have a subroutine in my Controller
<HttpPost>
Sub Index(Id, varLotsOfData)
    ' Point B.
    ' By the time execution gets here, all the data has been accepted by the server.
What I would like to do is capture the Id of the inbound POST and mark, for example, a database record to say "Id xx is receiving data".
The POST receive can take a long time as there is lots of data.
When execution gets to point B I can mark the record "All data received".
Where can I place this type of "pre-POST completed" code?
I should add: we are receiving the POST data from clients that we do not control. That is, it is most likely a client's server sending the data, not a web browser client that we have served up from our web server.
UPDATE: This is looking more complex than I had imagined.
I'm thinking that a possible solution would be to inspect the worker processes in IIS programmatically. You can do this via the IIS Manager, for example - How to use IIS Manager to get Worker Processes (w3wp.exe) details information?
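A minimal sketch of the programmatic equivalent, assuming the Microsoft.Web.Administration API (requires referencing IIS's Microsoft.Web.Administration.dll and running with admin rights):

using (var serverManager = new Microsoft.Web.Administration.ServerManager())
{
    foreach (var workerProcess in serverManager.WorkerProcesses)
    {
        // GetRequests(0) lists all currently executing requests, regardless of how long they've run.
        foreach (var request in workerProcess.GetRequests(0))
        {
            Console.WriteLine("{0} {1} ({2} ms)", request.Verb, request.Url, request.TimeElapsed);
        }
    }
}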
From your description, you want the client page to show that the method is executing (you could also show a loading gif), and when the execution has completed, show the user a message saying so.
The answer is simple: use SignalR.
Here you can find some references:
Getting started with signalR 1.x and Mvc4
Creating your first SignalR hub MVC project
Hope this will help you
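If you go that route, a minimal hub sketch to match the SignalR 1.x links above (hub and callback names are illustrative):

using Microsoft.AspNet.SignalR;

// An empty hub is enough when the server only pushes notifications to clients.
public class ProgressHub : Hub { }

// Server side, e.g. from the controller while the POST is being processed:
// var hub = GlobalHost.ConnectionManager.GetHubContext<ProgressHub>();
// hub.Clients.All.uploadStatus(id, "receiving data");   // invokes a client-side 'uploadStatus' callback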
If I understand your goal correctly, it sounds like HttpRequest.GetBufferlessInputStream might be worth a look. It allows you to begin acting on incoming post data immediately and in "pieces" rather than waiting until the entire post has been received.
An excerpt from Microsoft's documentation:
...provides an alternative to using the InputStream property, which waits until the whole request has been received. In contrast, the GetBufferlessInputStream method returns the Stream object immediately. You can use the method to begin processing the entity body before the complete contents of the body have been received and asynchronously read the request entity in chunks. This method can be useful if the request is uploading a large file and you want to begin accessing the file contents before the upload is finished.
So you could grab the beginning of the post, and provided the client sends the ID toward the beginning of its transmission, you may be able to pull it out. Of course, this means reading raw byte data that you would need to decode in order to grab the inbound post's ID. There's also a buffered variant that lets the stream be read in pieces while still building a complete request object for processing once it has been completely received. A sketch of the bufferless approach is below.
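A rough sketch of that approach (hypothetical names; this assumes the Id appears near the start of the body and that nothing else has consumed the request stream):

// Read the incoming POST in chunks as it arrives rather than waiting for the full body.
System.IO.Stream body = HttpContext.Current.Request.GetBufferlessInputStream();
var buffer = new byte[8192];
int read = body.Read(buffer, 0, buffer.Length);                     // first chunk, available before the upload finishes
string head = System.Text.Encoding.UTF8.GetString(buffer, 0, read); // decode to look for the Id
// ...parse the Id out of 'head' and mark the DB record "Id xx is receiving data",
// then keep calling Read until it returns 0 to consume the rest of the upload.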
Create a custom action filter.
Action filters allow executing filtering logic either before or after an action method is called. They are custom attributes that provide a declarative means to add pre-action and post-action behavior to a controller's action methods.
Specifically, you'll want to look at:
OnActionExecuted - this method is called after a controller action is executed.
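A minimal sketch of such a filter (the attribute name and the data access are illustrative):

public class MarkAllDataReceivedAttribute : ActionFilterAttribute
{
    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // Runs after the action body completes, i.e. once "point B" has been reached.
        object id = filterContext.RouteData.Values["Id"];
        // ...update the database record for 'id' to "All data received" here.
        base.OnActionExecuted(filterContext);
    }
}

You would then decorate the Index action with the attribute (<MarkAllDataReceived()> in VB).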
Here are a couple of links:
http://www.infragistics.com/community/blogs/dhananjay_kumar/archive/2016/03/04/how-to-create-a-custom-action-filter-in-asp-net-mvc.aspx
http://www.asp.net/mvc/overview/older-versions-1/controllers-and-routing/understanding-action-filters-vb
Here is a lab, but I think it's C#
http://www.asp.net/mvc/overview/older-versions/hands-on-labs/aspnet-mvc-4-custom-action-filters

WSSecurityTokenSerializer ReadToken method performance

I have a Dispatch MessageInspector which is deserializing a SAML Token contained in the SOAP message header.
To do the deserialization I am using a variation of the following code:
List<SecurityToken> tokens = new List<SecurityToken>();
tokens.Add(new X509SecurityToken(CertificateUtility.GetCertificate()));
SecurityTokenResolver outOfBandTokenResolver = SecurityTokenResolver.CreateDefaultSecurityTokenResolver(new ReadOnlyCollection<SecurityToken>(tokens), true);
SecurityToken token = WSSecurityTokenSerializer.DefaultInstance.ReadToken(xr, outOfBandTokenResolver);
The problem I am seeing is that the performance of the ReadToken call varies depending on the account that is running the windows service (in which the WCF service is hosted).
If the service is running as a Windows domain account, the elapsed time for the ReadToken call is virtually zero. When running as a local machine account, the call takes between 200 and 1000 milliseconds.
Can anyone shed any light on what is going on here and why the account running this bit of code makes a difference as to its performance?
Thanks,
Martin
When the service is running under a local account, there is considerably more activity taking place. Examples of this are:
Accessing and using C:\WINDOWS\system32\certcli.dll
Accessing and using C:\WINDOWS\system32\atl.dll
Attempting to access registry keys, e.g.
HKLM\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration
None of this extra activity appears to occur when running under a domain account.
A quick search on the internet for "certcli.dll domain user" brings up Microsoft Knowledge Base article 948080, which sounds similar.
Unsure how to resolve this, as ultimately a .NET method is being called (WSSecurityTokenSerializer.ReadToken) whose internals you have little to no control over.
This also appears to describe the same problem:
http://groups.google.com/group/microsoft.public.biztalk.general/browse_thread/thread/402a159810661bf6?pli=1

How can I reject a Windows "Service Stop" request in ATL 7?

I have a Windows service built upon ATL 7's CAtlServiceModuleT class. This service serves up COM objects that are used by various applications on the system, and these other applications naturally start getting errors if the service is stopped while they are still running.
I know that ATL DLLs solve this problem by returning S_OK in DllCanUnloadNow() if CComModule's GetLockCount() returns 0. That is, it checks to make sure no one is currently using any COM objects served up by the DLL. I want equivalent functionality in the service.
Here is what I've done in my override of CAtlServiceModuleT::OnStop():
void CMyServiceModule::OnStop()
{
    if (GetLockCount() != 0) {
        return;
    }
    BaseClass::OnStop();
}
Now, when the user attempts to Stop the service from the Services panel, they are presented with an error message:
Windows could not stop the XYZ service on Local Computer.
The service did not return an error. This could be an internal Windows error or an internal service error.
If the problem persists, contact your system administrator.
The Stop request is indeed refused, but it appears to put the service in a bad state. A second Stop request results in this error message:
Windows could not stop the XYZ service on Local Computer.
Error 1061: The service cannot accept control messages at this time.
Interestingly, the service does actually stop this time (although I'd rather it not, since there are still outstanding COM references).
I have two questions:
Is it considered bad practice for a service to refuse to stop when asked?
Is there a polite way to signify that the Stop request is being refused; one that doesn't put the Service into a bad state?
You can't do this. Once the SCM sends a SERVICE_CONTROL_STOP to your service, you have to stop.
If your other apps are also services, you can make them dependencies within the SCM. Of course, if the apps using this service are just regular applications, that approach can't be used.
When ATL's lock count increments to 1, call SetServiceStatus() with the SERVICE_ACCEPT_STOP flag omitted in the SERVICE_STATUS::dwControlsAccepted field. Then you will not receive any SERVICE_CONTROL_STOP requests at all. Any attempt to stop the service will fail immediately. When ATL's lock count falls back to 0, call SetServiceStatus() again with the SERVICE_ACCEPT_STOP flag specified.
I just had to do this in 2 (older) ATL-based services, and it works well. Granted, I was unable to figure out the best way to override Lock() and Unlock() directly, so I just put a small loop inside my service that checks GetLockCount() at frequent intervals and calls SetServiceStatus() when needed.
In your constructor, update m_status.dwControlsAccepted by removing SERVICE_ACCEPT_STOP. For instance:
CMyServiceModule::CMyServiceModule()
    : ATL::CAtlServiceModuleT<CMyServiceModule, IDS_SERVICENAME>()
{
    // Clear the stop flag so the SCM never forwards stop requests to the service.
    m_status.dwControlsAccepted &= ~SERVICE_ACCEPT_STOP;
}

WCF Paged Results & Data Export

I've walked into a project that is using a WCF service for the data tier. Currently, when data is needed for a grid, all rows are returned, the results are bound to the grid, and the dataset is stuffed into a session variable for paging/sorting/rebinding. We've already hit the max message size problem, so I'm thinking it's time to convert from fetch-and-cache to fetching only the current page.
At face value this seems easy enough, but there's a small catch. The user is allowed to export the entire result set at any point. This means that fetching the current page is fine for grid viewing, but when they want to do an export, I still need to make a call for all the data.
This puts me back into the max message size issue. What is the recommended approach for this type of setup?
We are currently using the wsHttpBinding...
Thanks for any assistance.
I think the recommended approach for large files is to use WCF streaming. I'm not sure the exact details for your scenario, but you could take a look at this as a starting point:
http://msdn.microsoft.com/en-us/library/ms789010.aspx
I would probably do something like this in your case:
Create a service with a "paged" GetData() method, where you specify the page index and the page size as additional parameters. This should give you a nice clean interface for "regular" use, and it should not hit the maxReceivedMessageSize limits.
Create a second service (or method) that sends all the data; ideally, you could bundle it up into a ZIP file or something on the server before sending it. If that ZIP file is still too large, you might want to check out WCF streaming for handling large files, as Andy already pointed out (see the sketch below).
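A sketch of what that split might look like as a service contract (interface and type names are illustrative; note that a streamed export needs a binding that supports streaming, e.g. basicHttpBinding, which wsHttpBinding does not):

[ServiceContract]
public interface IGridData
{
    // Regular grid use: one small page per call, comfortably under maxReceivedMessageSize.
    [OperationContract]
    List<UserRow> GetData(int pageIndex, int pageSize);

    // Export: return the full result set as a stream (e.g. a ZIP built on the server),
    // so no single buffered message has to hold every row.
    [OperationContract]
    System.IO.Stream ExportAll();
}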
The maxReceivedMessageSize limit is in place for a good reason: to avoid denial-of-service attacks where a WCF service gets flooded with large messages and brought to its knees. If you can, keep that in mind and don't just jack up maxReceivedMessageSize to 2 GB - it might come back to bite you :-)