Service Bus for Windows Server / SubscriptionDescription MessageCountDetails properties are all zeros! (0's)

I'm writing a 'Service Bus Monitor' [Windows] service so that we can keep an eye on our topic/subscription(s) and have run into an interesting issue. (And of course by "interesting" I mean "super frick'en annoying.")
For each of my topic/subscription pairs, I create a SubscriptionDescription so that I can get the MessageCount. This works just fine.
var subscriptionDescription = namespaceManager.GetSubscription(
busTargetModel.Topic, busTargetModel.Subscription);
var messageCountThisSubscription = subscriptionDescription.MessageCount;
However, in one particular case messageCountThisSubscription == 51, and I happen to know
that all 51 are actually sitting in the dead letter box/queue/whatever. But when I try to get the MessageCountDetails...
// I actually make this call BEFORE the MessageCount call above.
// (In case that matters somehow)
var messageCountDetails = subscriptionDescription.MessageCountDetails;
...all 5 of its properties (ActiveMessageCount, DeadLetterMessageCount, ScheduledMessageCount, TransferDeadLetterMessageCount and TransferMessageCount) have a value of 0 (Zero!)
I cannot for the life of me imagine what I could be doing wrong here; it seems pretty straightforward, yet... ZEROS.
Thoughts, insights, ANY help appreciated!
(This is for Service Bus for Windows Server but I don't see any tags for this except for all the Azure stuff, and from what I've read, they are NOT created equal.. at least not yet, so hoping I got the tags right.)

Scott,
Service Bus 1.0 for Windows Server does not have support for message count details. That feature was implemented after the bits locked down, so these properties do not return the expected values. We have a symmetric (single) client library for both the Server and Service offerings of Service Bus, so you see the properties available, but the values are only populated when targeting the Service or the recently released preview of Service Bus 1.1 for Windows Server. You can install this from WebPI; more details are here: http://msdn.microsoft.com/en-us/library/windowsazure/dn282144(v=azure.10).aspx
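In the meantime, if all you need is the dead letter count, one rough workaround is to peek the dead letter sub-queue and count the messages yourself. This is only a sketch: it assumes your client library exposes SubscriptionClient.FormatDeadLetterPath and MessageReceiver.Peek (the 1.0 bits may not), and that you already have a connection string for the namespace.
using Microsoft.ServiceBus.Messaging;
// Sketch: count dead-lettered messages by peeking the dead letter sub-queue.
// Peek does not lock or remove messages; it just advances an internal cursor.
var factory = MessagingFactory.CreateFromConnectionString(connectionString);
var deadLetterPath = SubscriptionClient.FormatDeadLetterPath(
    busTargetModel.Topic, busTargetModel.Subscription);
var receiver = factory.CreateMessageReceiver(deadLetterPath, ReceiveMode.PeekLock);
var deadLetterCount = 0;
while (receiver.Peek() != null)
{
    deadLetterCount++;
}
receiver.Close();
factory.Close();
Once you are on Service Bus 1.1 (or the Azure service), subscriptionDescription.MessageCountDetails.DeadLetterMessageCount should give you the same number directly.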

Related

.Net 4.0 MemoryCache Clearing

I am using a .Net 4.0 MemoryCache in my WCF service.
I originally was using the Default Cache as below:
var cache = MemoryCache.Default;
Then I follow the usual pattern: try to find something in the cache, return it if found, otherwise fetch it and set it into the cache (code snippet / pseudocode below):
var geoCoordinate = cache.Get(cacheKey) as GeoCoordinate;
if (geoCoordinate == null)
{
    geoCoordinate = LookupGeoCoordinate(cacheKey); // hypothetical lookup standing in for "get it from somewhere"
    cache.Set(cacheKey, geoCoordinate, DateTimeOffset.Now.AddDays(7));
}
I was finding that my entries were disappearing after approx. 2 minutes. Even if my code placed the missing entries back into the cache, subsequent cache Gets would return null.
My WCF Service is being hosted by IIS 7.5. If I recycled the App Pool, everything would work normally for 2 minutes, but then the pattern as described above would repeat.
After doing some research, I replaced:
var cache = MemoryCache.Default;
// WITH NEW CODE TO CREATE CACHE AS BELOW:
var config = new NameValueCollection();
// Hack: set the polling interval to 10 days so it will not clear the cache.
config.Add("pollingInterval", "10:00:00:00");
config.Add("physicalMemoryLimitPercentage", "90");
config.Add("cacheMemoryLimitMegabytes", "2000");
//instantiate cache
var cache = new MemoryCache("GeneralCache", config);
It seems that nothing I place into physicalMemoryLimitPercentage or cacheMemoryLimitMegabytes helps, but setting pollingInterval to a large timespan does.
i.e. if I set it as below:
config.Add("pollingInterval", "00:00:15:00");
Then everything works fine for 15 minutes.
Note: if my WCF service is hosted by IIS Express on my dev environment, I cannot reproduce this.
It only seems to happen when my WCF service is hosted by IIS 7.5.
My app pool on IIS 7.5 is NOT recycling.
Has anybody experienced something like this?
I have seen the below:
MemoryCache does not obey memory limits in configuration
Thanks,
Matt
I too have seen this issue and filed a bug with MS here, with a simple reproducer project.
This has been resolved by MS in the above bug I filed, with a workaround there and an upcoming QFE for .NET 4, as well as confirmation that this isn't a problem in 4.5.
I have not yet tried the workaround.
I can, however, give some more information on the conditions required for me to recreate this. The application pool needed to be in Integrated Pipeline mode for me to see the issue; Classic mode fixes it, though it removes some of the benefits of moving to IIS 7.5.
Equally, when using Integrated mode I also did not see the issue if I used a built-in application pool identity such as ApplicationPoolIdentity. However, my app needs to run as a custom identity using a service account, and it is at that point that I see the behavior. So if you don't need Integrated mode or a custom identity, you can perhaps work around this.
Perhaps the built-in accounts have permissions to do the cache memory statistics gathering initiated by the pollingInterval that my custom identity does not have; I don't know.
Hope this helps or even that someone else can join more of the dots to figure out a better work around.
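If it helps anyone narrow this down further, one small diagnostic (a sketch, not from the bug report above) is to attach a RemovedCallback to the entries so the eviction reason gets logged when they vanish:
using System;
using System.Diagnostics;
using System.Runtime.Caching;
// Diagnostic sketch: log why entries leave the cache (Expired, Evicted, Removed, ...).
var policy = new CacheItemPolicy
{
    AbsoluteExpiration = DateTimeOffset.Now.AddDays(7),
    RemovedCallback = args => Trace.WriteLine(string.Format(
        "Cache entry '{0}' removed: {1}", args.CacheItem.Key, args.RemovedReason))
};
cache.Set(cacheKey, geoCoordinate, policy);
If the reason comes back as Evicted or CacheSpecificEviction on a roughly two-minute cycle, that lines up with the polling/trim behavior described above; Expired or Removed would point somewhere else entirely.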

WSSecurityTokenSerializer ReadToken method performance

I have a Dispatch MessageInspector which is deserializing a SAML Token contained in the SOAP message header.
To do the deserialization I am using a variation of the following code:
List<SecurityToken> tokens = new List<SecurityToken>();
tokens.Add(new X509SecurityToken(CertificateUtility.GetCertificate()));
SecurityTokenResolver outOfBandTokenResolver = SecurityTokenResolver.CreateDefaultSecurityTokenResolver(new ReadOnlyCollection<SecurityToken>(tokens), true);
SecurityToken token = WSSecurityTokenSerializer.DefaultInstance.ReadToken(xr, outOfBandTokenResolver);
The problem I am seeing is that the performance of the ReadToken call varies depending on the account that is running the Windows service (in which the WCF service is hosted).
If the service is running as a windows domain account the elapsed time for the ReadToken call is virtually zero. When running as a local machine account the call takes between 200 and 1000 milliseconds.
Can anyone shed any light on what is going on here and why the account running this bit of code makes a difference as to its performance?
Thanks,
Martin
When the service is running under a local account, there is considerably more activity taking place; examples of this are:
Accessing and using C:\WINDOWS\system32\certcli.dll
Accessing and using C:\WINDOWS\system32\atl.dll
Attempting to access registry keys e.g.
HKLM\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration
None of this extra activity appears to occur when running under a domain account.
A quick search on the internet for "certcli.dll domain user" brings up microsoft knowledge base article 948080 which sounds similar.
I'm unsure how to resolve this, as ultimately a .NET method is being called (WSSecurityTokenSerializer.ReadToken) whose internals you have little to no control over.
This also appears to describe the same problem:
http://groups.google.com/group/microsoft.public.biztalk.general/browse_thread/thread/402a159810661bf6?pli=1

How can I reject a Windows "Service Stop" request in ATL 7?

I have a Windows service built upon ATL 7's CAtlServiceModuleT class. This service serves up COM objects that are used by various applications on the system, and these other applications naturally start getting errors if the service is stopped while they are still running.
I know that ATL DLLs solve this problem by returning S_OK in DllCanUnloadNow() if CComModule's GetLockCount() returns 0. That is, it checks to make sure no one is currently using any COM objects served up by the DLL. I want equivalent functionality in the service.
Here is what I've done in my override of CAtlServiceModuleT::OnStop():
void CMyServiceModule::OnStop()
{
    if( GetLockCount() != 0 ) {
        return;
    }
    BaseClass::OnStop();
}
Now, when the user attempts to Stop the service from the Services panel, they are presented with an error message:
Windows could not stop the XYZ service on Local Computer.
The service did not return an error. This could be an internal Windows error or an internal service error.
If the problem persists, contact your system administrator.
The Stop request is indeed refused, but it appears to put the service in a bad state. A second Stop request results in this error message:
Windows could not stop the XYZ service on Local Computer.
Error 1061: The service cannot accept control messages at this time.
Interestingly, the service does actually stop this time (although I'd rather it not, since there are still outstanding COM references).
I have two questions:
Is it considered bad practice for a service to refuse to stop when asked?
Is there a polite way to signify that the Stop request is being refused; one that doesn't put the Service into a bad state?
You can't do this. Once the SCM sends a SERVICE_CONTROL_STOP to your service, you have to stop.
If your other apps are also services, you can make them dependencies within the SCM. Of course, if the apps using this service are just regular applications, that approach can't be used.
When ATL's lock count increments to 1, call SetServiceStatus() with the SERVICE_ACCEPT_STOP flag omitted in the SERVICE_STATUS::dwControlsAccepted field. Then you will not receive any SERVICE_CONTROL_STOP requests at all. Any attempt to stop the service will fail immediately. When ATL's lock count falls back to 0, call SetServiceStatus() again with the SERVICE_ACCEPT_STOP flag specified.
I just had to do this in 2 (older) ATL-based services, and it works well. Granted, I was unable to figure out the best way to override Lock() and Unlock() directly, so I just put a small loop inside my service that checks GetLockCount() at frequent intervals and calls SetServiceStatus() when needed.
In your constructor, update m_status.dwControlsAccepted, removing SERVICE_ACCEPT_STOP. For instance:
CMyServiceModule::CMyServiceModule()
    : ATL::CAtlServiceModuleT<CMyServiceModule, IDS_SERVICENAME>()
{
    m_status.dwControlsAccepted &= ~SERVICE_ACCEPT_STOP;
}

WCF Catastrophic Failure

I've got a real lemon on my hands. I hope someone who has the same problem or knows how to fix it can point me in the right direction.
The Setup
I'm trying to create a WCF data service that uses an ADO Entity Framework model to retrieve data from the DB. I've added the WCF service reference and all seems fine. I have two sets of data service calls. The first one retrieves and returns a list of all "users"; this list does not include any dependent data (e.g. address, contact, etc.). The second call is made when a "user" is selected: the application requests a few more pieces of dependent information, such as address, contact details, messages, etc., given a user id. This also seems to work fine.
The Lemon
After a number of user selection changes, i.e. calls for more dependent data from the data service, the application stops responding.
Crash error:
The request channel timed out while waiting for a reply after 00:00:59.9989999. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout.
I restart the debugging process, but the application will not make any data service calls; after about a minute or so, VS 08 displays a message box with the error:
Unable to process request from service. 'http://localhost:61768/ConsoleService.svc'. Catastrophic failure.
I've Googled the hell out of this error and related issues but found nothing of use.
Possible Solutions
I've found some leads as to the source of the problem. In the client's app.config:
maxReceivedMessageSize > Set to a higher value, eg. 5242880.
receiveTimeout > Set to a higher value, eg. 00:30:00
I've tried these but all in vain. I suspect there is an underlying problem that cannot be fixed by simply changing some numbers. Any leads would be much appreciated.
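(For reference, here is roughly what those tweaks look like in code rather than app.config. This is only a sketch and assumes a BasicHttpBinding; adjust for whatever binding the generated service reference actually uses. These didn't help here either.)
using System;
using System.ServiceModel;
// Sketch: raising the limits the timeout message points at.
var binding = new BasicHttpBinding
{
    MaxReceivedMessageSize = 5242880,          // maxReceivedMessageSize
    SendTimeout = TimeSpan.FromMinutes(5),     // sendTimeout
    ReceiveTimeout = TimeSpan.FromMinutes(30)  // receiveTimeout
};
var client = new ServiceNS.ServiceClient(
    binding, new EndpointAddress("http://localhost:61768/ConsoleService.svc"));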
I've solved it =P.
Cause
The WCF service works fine. It was the data service calls that were the culprit. Every time I made a call, I instantiated a new reference to the data service but never closed/disposed of it. So after a couple of calls, the data service reached its maximum number of connections and halted.
Solution
Make sure to close/dispose of any data service reference properly. Best practice would be to enclose in a using statement.
using (var dataService = new ServiceNS.ServiceClient())
{
    // Use the service here.
}
// The service will be disposed and the connection freed.
Glad to see you fixed your problem.
However, you need to be careful about using the using statement. Have a look at this article:
http://msdn.microsoft.com/en-us/library/aa355056.aspx
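The short version of that article: Dispose on a WCF proxy calls Close, which can itself throw (for example if the channel is faulted) and mask the original exception. The pattern it suggests is roughly this sketch: close explicitly and abort on failure instead of relying on using.
using System;
using System.ServiceModel;
var client = new ServiceNS.ServiceClient();
try
{
    // Use the service here.
    client.Close();
}
catch (CommunicationException)
{
    client.Abort();   // channel is faulted; Close() would throw again
    throw;
}
catch (TimeoutException)
{
    client.Abort();
    throw;
}
catch (Exception)
{
    client.Abort();
    throw;
}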

Setting USB configuration fails

I'm trying to talk to a USB device using libusb, but I feel like I'm tripping up on the first leg of the race. I know precisely what endpoints I need to talk to, etc., but I can't even get that far. I have, in essence:
usb_device *dev = ...;                   // found via usb_get_busses()
usb_dev_handle *handle = usb_open(dev);  // the device has to be opened first
usb_set_configuration(handle, dev->config[0].bConfigurationValue); // bConfigurationValue is 1
Now, I can look at the device information in debug mode, and I know that the current configuration is 0 (uninitialized / just after restart) and that there's exactly 1 valid configuration, which has a configuration number of 1. But when I set the config to 1, I get a return value of -22, which (passed through the stringifier) translates to "windows api error: bad parameter".
I haven't been able to find other people having a similar problem, and it seems like such a simple thing to do -- I can't even claim the interface, or set the alt-interface, or anything like that, because I have to set the configuration first. What am I missing? (if it matters: this is on WinXP)
Looking at libusb-win32\src\driver\set_configuration.c, there seem to be a bunch of different reasons for returning STATUS_INVALID_PARAMETER.
Use libusb_set_debug (from your user-mode application) to set a verbose debug level, then run Sysinternals DebugView to see the driver's error messages. Hopefully you'll see a clue as to why your set_configuration call fails.