Windows Perf Counters on Docker containers: System.InvalidOperationException: Category does not exist

I am running a .NET application in a Windows Docker container with Docker Desktop for Windows. When my app tries to create perf counters using this code:
_counter = new PerformanceCounter(categoryName, counterName, InstanceName, true);
// categoryName is "Processor", counterName is "% Processor Time", and InstanceName is "_Total".
I am getting this exception:
Unhandled Exception: System.InvalidOperationException: Category does not exist.
   at System.Diagnostics.PerformanceCounterLib.CounterExists(String machine, String category, String counter)
   at System.Diagnostics.PerformanceCounter.InitializeImpl()
   at System.Diagnostics.PerformanceCounter..ctor(String categoryName, String counterName, String instanceName, Boolean readOnly)
This is the base image of my container: https://hub.docker.com/r/microsoft/dotnet-framework/
It is based on Windows Server Core.
I would appreciate any help fixing this. I am not sure whether this is a Windows setting I need to change, something about Docker on Windows, or my code simply not being able to access the performance counter categories.
It works perfectly fine when I run it on my local machine instead of in a container.

As far as I know these should be working.
Can you try using TypePerf to query those counters? I see them on my system when I query them on the Windows Server core base image. Do they break in the container you built?
docker run microsoft/windowsservercore TypePerf "\Processor(*)\% Processor Time"
That does show CPU usage:
"(PDH-CSV 4.0)","\\DF4E02B31BBD\Processor(0)\% Processor Time","\\DF4E02B31BBD\Processor(1)\% Processor Time","\\DF4E02B31BB
D\Processor(_Total)\% Processor Time"
"04/25/2017 09:52:34.412","50.536535","38.170669","44.353602"
"04/25/2017 09:52:35.423","19.583557","2.572386","11.077971"
"04/25/2017 09:52:36.425","39.207660","50.119106","44.663383"
"04/25/2017 09:52:37.453","31.606146","43.765053","37.685600"

Related

Azure Container Instance is immediately killed on Startup

I am trying to run an Azure container instance, but it appears to be getting killed off the second I run it. This works fine in two other resource groups but not in my production resource group, where I see the following:
In events I see 'Successfully pulled image selenium/standalone-chrome:latest' with count 1, then 'Started container', and then 'Killing container' with count 31. The times for started and killed are the same.
In the logs, it just says 'No logs available'.
The metrics for CPU and memory on the container never show any change from zero.
I looked at this article, but the proposed solution didn't work: Azure Container Group Instance. I have tried both adding an empty directory volume and giving it 2 GB of RAM as advised here: https://github.com/SeleniumHQ/docker-selenium, but nothing works.
This is the code I am using to create the container:
containerGroup = await azure.ContainerGroups.Define(containerName)
    .WithRegion("West Europe")
    .WithExistingResourceGroup(configuration.ContainerResourceGroup)
    .WithLinux()
    .WithPublicImageRegistryOnly()
    .WithEmptyDirectoryVolume("devshm")
    .DefineContainerInstance(containerName)
        .WithImage("selenium/standalone-chrome")
        .WithExternalTcpPorts(4444)
        .WithVolumeMountSetting("devshm", "/dev/shm")
        .WithMemorySizeInGB(2)
        .Attach()
    .WithDnsPrefix(configuration.AppServiceName + "container")
    .WithRestartPolicy(ContainerGroupRestartPolicy.OnFailure)
    .CreateAsync(cancellationToken);
How do I debug what is going wrong?
What is wrong with the container?
In case this helps someone: I renamed the "containerName" parameter in the above example from myinstance to myinstance1 and changed the region from West Europe to UK South, which fixed the issue. I can only think that Azure caches instances somehow to reduce start-up times and that the cached image I was using was poisoned somehow.
One issue could be the restart policy. Have a look at the restart policy section of Microsoft's ACI troubleshooting page. Under the "Container continually exits and restarts (no long-running process)" header it says:
Container groups default to a restart policy of Always, so containers in the container group always restart after they run to completion. You may need to change this to OnFailure or Never if you intend to run task-based containers. If you specify OnFailure and still see continual restarts, there might be an issue with the application or script executed in your container.
In your case you may need to adjust the code as follows, using WithStartingCommandLine:
containerGroup = await azure.ContainerGroups.Define(containerName)
    .WithRegion("West Europe")
    .WithExistingResourceGroup(configuration.ContainerResourceGroup)
    .WithLinux()
    .WithPublicImageRegistryOnly()
    .WithEmptyDirectoryVolume("devshm")
    .DefineContainerInstance(containerName)
        .WithImage("selenium/standalone-chrome")
        .WithExternalTcpPorts(4444)
        .WithVolumeMountSetting("devshm", "/dev/shm")
        .WithMemorySizeInGB(2)
        .WithStartingCommandLine("tail")
        .WithStartingCommandLine("-f")
        .WithStartingCommandLine("/dev/null")
        .Attach()
    .WithDnsPrefix(configuration.AppServiceName + "container")
    .WithRestartPolicy(ContainerGroupRestartPolicy.OnFailure)
    .CreateAsync(cancellationToken);
This link is helpful for this issue.
--command-line
linux => "tail -f /dev/null"
windows => "ping -t localhost"
# .yml
command: tail -f /dev/null
This will keep your Azure container instance running, so Azure then has a long-running process it can connect to and analyze.
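To answer the original "how do I debug" question more directly, you can also pull the container's log output through the same fluent SDK used above. This is only a sketch: it assumes the same azure client, resource group, and containerName as in the question, and GetLogContent may differ slightly between SDK versions.

// Fetch whatever the container wrote to stdout/stderr before it was killed.
var group = await azure.ContainerGroups.GetByResourceGroupAsync(
    configuration.ContainerResourceGroup, containerName);
string logs = group.GetLogContent(containerName);
Console.WriteLine(string.IsNullOrEmpty(logs) ? "No logs available" : logs);

If the logs are empty even with a long-running command configured, the entry point is probably exiting immediately, which matches the Started/Killing event pattern described in the question.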

NServiceBus endpoint is not starting on Azure Service Fabric local cluster

I have a .NET Core stateless Web API service running inside a Service Fabric local cluster.
return Endpoint.Start(endpointConfiguration).GetAwaiter().GetResult();
When I try to start the NServiceBus endpoint, I get this exception:
Access to the path 'C:\SfDevCluster\Data_App_Node_0\AppType_App10\App.APIPkg.Code.1.0.0.diagnostics' is denied.
How can this be solved? Visual Studio is running as administrator.
The issue you are having is because the folder you are trying to write to is not meant to be written to by your application.
The package folder is used to store your application binaries and can be recreated dynamically whenever an application is hosted on the node.
Also, the binaries are reused by multiple service instances running on the same node, so different instances might compete for the same files.
You should instead instruct your application to write to the work folder:
public Stateless1(StatelessServiceContext context) : base(context)
{
    string workdir = context.CodePackageActivationContext.WorkDirectory;
}
The code above will give you a path like this:
'C:\SfDevCluster\Data_App_Node_0\AppType_App10\App.APIPkg.Code.1.0.0.diagnostics\work'
This folder is dynamic and will change depending on the node and instance your application is running on; once it is created, your application should already have permission to write to it.
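For example, a minimal sketch of appending diagnostics to a file under the work folder might look like this (the "logs" subdirectory and file name are just illustrative):

using System;
using System.Fabric;
using System.IO;
using Microsoft.ServiceFabric.Services.Runtime;

internal sealed class Stateless1 : StatelessService
{
    public Stateless1(StatelessServiceContext context) : base(context) { }

    private void WriteDiagnostics(string message)
    {
        // The work folder is created per node/instance and the service account can write to it.
        string workDir = Context.CodePackageActivationContext.WorkDirectory;

        string logDir = Path.Combine(workDir, "logs");   // illustrative subdirectory
        Directory.CreateDirectory(logDir);

        File.AppendAllText(Path.Combine(logDir, "diagnostics.txt"),
            message + Environment.NewLine);
    }
}

Anything written here should be treated as disposable, since the folder is specific to the node and instance the service happens to be running on.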
For more info, see:
how-do-i-get-files-into-the-work-directory-of-a-stateless-service?forum=AzureServiceFabric
Open the folder's Properties > Security tab, select ServiceFabricAllowedUsers, and add Write permission.

INSTALL "[AMD/ATI] Tonga XT GL [FirePro S7150]" graphics card on a VM (CentOS 6.9) running on XenServer 7.4

I have just started using XenServer and am doing some experiments for my company. I installed XenServer 7.4 on a box and created a CentOS 6.9 VM using XenCenter.
I got to the point where I can run the virtual operating system, but when I try to use the "Advanced Micro Devices, Inc. [AMD/ATI] Tonga XT GL [FirePro S7150]" graphics card with the command:
xe vgpu-create vm-uuid=xxx-xxx-xxx-xxx gpu-group-uuid=xxx-xxx-xxx-xxx
I receive the following error message:
The use of this feature is restricted.
I have also tried, from the graphical interface (XenCenter) against a licensed XenServer, to enable the AMD card using Tools -> Install Update: I downloaded and selected mxgpu-1.0.5.amd.iso, but I cannot complete the process because I receive the error message:
The attempt to create a VDI failed
I am running out of options. CentOS is running, but I cannot use the machine's AMD graphics card. Can you help?
Could you try running the VM with its virtual disk stored on the Local Storage repository of the host that has the card, with that host removed from any pools? This is the default configuration, but I thought I'd mention these tips in case the box is somehow mixed into a heterogeneous pool. If the machine is part of a pool, make sure you are not selecting a video adapter that belongs to another host to pass through to the VM.

RDO unable to boot VM with disk size specified

I have a packstack all-in-one setup on my RHEL 7.1 trial for the Juno release.
I am facing a problem while launching a VM (for example, cirros) with a disk size specified in the flavor. If the flavor has a 0 GB disk size the VM launches, but not for larger flavor sizes.
I also observe that when I do this, the openstack-nova-compute service goes down, which I see when I check with nova-manage service list: nova-compute shows XXX, and I have to restart the service every time I try this scenario. The compute logs don't show any error; it just gets stuck at "Creating image".
Is there any filesystem configuration I am missing? I am new to this, so please help.
PS: I run all commands as the "root" user.
The problem was with ESXi. ESXi needs to be version 5.5 to support RHEL 7.x; since mine was 5.1, it only supported RHEL 6.x.
After upgrading ESXi from 5.1 to 5.5 it worked fine.

Windows Azure Console for Worker Role Cloud Service

I have a worker role cloud service that I have recently developed on my local machine. The service exposes a WCF interface that receives a file as a byte array, recompiles the file, converts it to the appropriate format, then stores it in Azure Storage. I managed to get everything working using the Azure Compute Emulator on my machine and published the service to Azure and... nothing. Running it on my machine again, it works as expected. When I was working on it on my computer, the Azure Compute Emulator's console output was essential in getting the application running.
Is there a similar functionality that can be tapped into on the Cloud Service via RDP? Such as starting/restarting the role at the command prompt or in PowerShell? If not, what is the best way to debug/log what the worker role is doing (without using IntelliTrace)? I have diagnostics enabled in the project, but it doesn't seem to be giving me the same level of detail as the Compute Emulator console. I've rerun the role and the corresponding .NET application again on localhost and was unable to find any errors in the console.
Edit: The Next Best Thing
Falling back to manual logging, I implemented a class that would feed text files into my Azure Storage account. Here's the code:
public class EventLogger
{
    public static void Log(string message)
    {
        CloudBlobContainer cbc = CloudStorageAccount
            .Parse(RoleEnvironment.GetConfigurationSettingValue("StorageClientAccount"))
            .CreateCloudBlobClient()
            .GetContainerReference("errors");
        cbc.CreateIfNotExist();
        cbc.GetBlobReference(string.Format("event-{0}-{1}.txt",
            RoleEnvironment.CurrentRoleInstance.Id, DateTime.UtcNow.Ticks)).UploadText(message);
    }
}
Calling EventLogger.Log() will create a new text file and record whatever message you put in there. I found an example in the answer below.
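For example, a worker role can call it from Run() like this (a sketch assuming the standard RoleEntryPoint pattern; everything other than EventLogger is illustrative):

using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        EventLogger.Log("Worker role started.");
        try
        {
            while (true)
            {
                // ... receive the file, recompile, convert, and store it in Azure Storage ...
                Thread.Sleep(10000);
            }
        }
        catch (Exception ex)
        {
            // Anything that would have appeared in the emulator console ends up in blob storage instead.
            EventLogger.Log(ex.ToString());
            throw;
        }
    }
}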
There is no console for worker roles that I'm aware of. If diagnostics isn't giving you any help, then you need to get a little hacky. Try tracing out messages and errors to blob storage yourself. Steve Marx has a good example of this here: http://blog.smarx.com/posts/printf-here-in-the-cloud
As he notes in the article, this is not for production, just to help you find your problem.