I have a WCF service running under a service user on my local system. Every time I try to debug it, I get an Attach Security Warning message.
In Visual Studio, by default (even without attaching), I get this error:
Attaching to this process can potentially harm your computer. If the
information below looks suspicious or you are unsure, do not attach to
this process
Name: C:\Windows\System32\inetsrv\w3wp.exe
What is w3wp.exe? From a Google search I gather it is related to IIS, but what does it do? And what setting should be changed so that I don't get this message every time I try to debug on my local system?
An Internet Information Services (IIS) worker process is a Windows
process (w3wp.exe) which runs Web applications, and is responsible for
handling requests sent to a Web server for a specific application
pool.
It is the worker process for IIS. Each application pool creates at least one instance of w3wp.exe, and that is what actually processes requests in your application. It is not dangerous to attach to it; that is just a standard Windows message.
Chris pretty much sums up what w3wp is. In order to disable the warning, go to this registry key:
HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\10.0\Debugger
And set the value DisableAttachSecurityWarning to 1.
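For example, the setting can be applied with a .reg file like the one below. Note that the 10.0 key is specific to Visual Studio 2010; adjust the version number to match your installation.

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\10.0\Debugger]
"DisableAttachSecurityWarning"=dword:00000001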
A worker process runs as an executable file named w3wp.exe.
A worker process is user-mode code whose role is to process requests,
such as requests for a static page.
The worker process is controlled by the WWW service.
Worker processes also run application code, such as ASP.NET
applications and XML Web services.
When the application pool receives a request, it simply passes it to
the worker process (w3wp.exe). The worker process looks up the URL of
the request in order to load the correct ISAPI extension. ISAPI
extensions are the IIS way of handling requests for different
resources. When ASP.NET is installed, it registers its own ISAPI
extension (aspnet_isapi.dll) and adds the mapping into IIS.
When the worker process loads aspnet_isapi.dll, it starts an
HttpRuntime, which is the entry point of the application. HttpRuntime
is a class that calls the ProcessRequest method to start processing.
For more detail, refer to this URL:
http://aspnetnova.blogspot.in/2011/12/how-iis-process-for-aspnet-requests.html
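As an illustration of that last step: the pipeline that HttpRuntime starts ultimately hands each request to a handler's ProcessRequest method, all inside w3wp.exe. A minimal sketch of such a handler (the class name is just an example):

using System.Web;

// Minimal example of the handler contract the ASP.NET pipeline calls into.
// The pipeline invokes ProcessRequest on whichever handler is mapped to the
// requested URL, inside the IIS worker process (w3wp.exe).
public class HelloHandler : IHttpHandler
{
    // The same handler instance may be reused for subsequent requests.
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Handled inside the IIS worker process (w3wp.exe)");
    }
}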
w3wp.exe is a process associated with the application pool in IIS. If you have more than one application pool, you will have more than one instance of w3wp.exe running. This process usually allocates large amounts of resources. It is important for the stable and secure running of your computer and should not be terminated.
You can get more information on w3wp.exe here
http://www.processlibrary.com/en/directory/files/w3wp/25761/
Related
I'm using Simple Injector in my WCF service. While running it from VS2010, everything is fine. However, when I publish it to my server using IIS 7, after some time (20 minutes, I counted) my WCF service loses all the assemblies, modules, and classes registered in the container.
I guess IIS recycles the WCF Service Application Pool and drops my container registrations.
Can anyone help me on this?
While there are many legitimate cases for self-hosting WCF services, moving to self-hosting just because of IIS recycling may be counterproductive.
Hosting in IIS gives you a lot of benefits during development and daily operations, and I am not going to repeat them here, since you can easily find them with a Google search.
When IIS receives the first request for your application, it launches a worker process named "w3wp.exe" according to the settings of the application pool associated with your web app. By default, IIS shuts that worker process down after 20 minutes of idle time. Check the Advanced Settings of the application pool and you will find a lot of settings for the life cycle. You won't get such flexibility and robustness out of the box with self-hosting.
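For example, the idle shutdown can be lengthened or disabled from the command line as well as from the UI. A sketch, assuming the standard appcmd location and an application pool named "MyAppPool" (a value of 00:00:00 disables the idle time-out):

%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /processModel.idleTimeout:00:00:00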
So basically you have a few options, provided you decide to stay with IIS hosting.
Change the Idle Time-out to 24 hours or even a month.
Write a small program or use cURL to ping your application every 10 minutes (see the sketch after this list).
Leave it as it is.
If you want to keep state across recycles, save it to disk, then load it during the next launch triggered by a request.
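Here is a minimal sketch of the second option, assuming the service is reachable over plain HTTP at the placeholder URL below. Run it as a console app or a scheduled task so the application pool never sits idle long enough to shut down.

using System;
using System.Net;
using System.Threading;

class KeepAlive
{
    static void Main()
    {
        // Placeholder address; point it at your own .svc endpoint.
        const string url = "http://localhost/MyService/Service.svc";

        while (true)
        {
            try
            {
                using (var client = new WebClient())
                {
                    // Any successful response resets the application pool's idle timer.
                    client.DownloadString(url);
                }
            }
            catch (WebException ex)
            {
                Console.WriteLine("Ping failed: " + ex.Message);
            }

            Thread.Sleep(TimeSpan.FromMinutes(10)); // ping every 10 minutes
        }
    }
}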
I have a .NET 4.5 WCF web service that consumes messages from a local private MSMQ queue running on Windows Server 2008 R2 with AppFabric installed.
This service reads the messages off the queue and processes the files referenced in each message. I have used AppFabric to throttle the service to 16 concurrent messages, 8 on each AppPool worker process.
The AppPool uses a domain account that has full privileges on the network share where the files to be processed are stored.
This service has been working fine for years, except that in the last week ~90% of the files it has been asked to process have failed with an UnauthorizedAccessException.
This behavior was exhibited across all of the services on that application server, no matter which file server the service was asked to process files from. Even files that had previously been processed successfully were now failing.
After a long and fruitless weekend of searching and hacking at various things, including:
Shared Folder Permissions and Quotas
Windows Licensing (CALs etc.)
Firewalls
Various software patches to the Web app
I eventually discovered the actual issue by accident while redeploying the web app, when I noticed something odd. When I stopped the web app via the WCF menu in IIS, the messages continued to be consumed, so I stopped the app pool running the web service, but the messages continued to be consumed. I thought this might be due to the large latency added to MSMQ message state by the distributed transaction service when lots of messages are rolled back to the poison message queue, so I went to lunch. When I came back the messages were still being consumed, and Process Explorer confirmed the app pool running my service was no longer executing.
Something was clearly up, but it was uncertain whether this was the cause, a symptom, or a coincidence. The clincher came when I throttled my service back to processing only one message at a time, to see if access to the share was hitting some sort of limit: the failure rate went up to ~98%. This suggested that something else was processing the messages and failing, but also reporting those failures into my reporting system in a way only my application could.
A little further investigation revealed that the default application pool used to serve the default web site was also executing my WCF web service, but failing to access the files on the file server because the identity used to run the default application pool had no privileges. The failures took less time than the successful file processes, so the slower I made my service go, the more messages were failed by the default app pool.
The Cause
Whilst I was adjusting the throttling on my web app, I inadvertently set the throttling on the default web site that was the parent of the web application. I noticed this straight away and reset the values back to the defaults. What I hadn't realized at the time was that this had added a <system.serviceModel> section to the web.config of the default web site. The outcome was that my default web site started to behave like a web application and, for reasons I have yet to understand, it started to execute the functionality of its child web application. It may be related to WAS activation; all I know is that this was most certainly not the desired behavior.
The Fix
I removed the <system.serviceModel> section and its contents from the web.config of the default web site, removed net.msmq from its list of enabled protocols, and everything is back to normal.
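For anyone checking the same thing: the enabled protocols sit on the <application> elements in the <sites> section of applicationHost.config (they can also be set via the site's Advanced Settings). A rough sketch, with purely illustrative names and paths, of how it should look once only the child application keeps net.msmq:

<site name="Default Web Site" id="1">
    <!-- The parent site/application should not list net.msmq -->
    <application path="/" applicationPool="DefaultAppPool" enabledProtocols="http">
        <virtualDirectory path="/" physicalPath="%SystemDrive%\inetpub\wwwroot" />
    </application>
    <!-- Only the child application hosting the queued WCF service needs it -->
    <application path="/MyQueuedService" applicationPool="MyServicePool" enabledProtocols="http,net.msmq">
        <virtualDirectory path="/MyQueuedService" physicalPath="D:\Sites\MyQueuedService" />
    </application>
</site>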
I have a WCF web service that has no concurrency configuration in the web.config, so I believe it is running with the default, which is per-session. The service uses a COBOL virtual machine to execute code that pulls data from COBOL Vision files. Per the developer of the COBOL VM, it is a singleton.
When more than one person accesses the service at a time, I get periodic crashes of the web service. What I believe is happening is that while one process is executing, another separate process comes in at about the same time. The first process ends and closes the VM down through normal closing procedures. The second process is still executing and attempting to read/write data, but the VM has been shut down, and it crashes. In the constructor of the web service, an instance of the VM is created, and when a series of methods completes, the service is cleaned up and the VM is closed out.
I have been reading up on singleton concurrency in WCF web services and am thinking I might need to switch to this instead. That way I can open the COBOL VM, keep it alive forever, and eliminate the code that shuts the VM down in my methods. The only data I need to share between requests is the status of the COBOL VM.
The alternative I'm thinking of is creating a server process that manages opening the VM and keeping it alive, and having the web service make read/write requests through that process instead.
Does this sound like the right path? I'm basically looking for a way to keep the virtual machine alive in a WCF web service situation and just keep executing code against it. The COBOL VM system sends me back locking information on the reads/writes, which I can use to handle retries or waits.
Thanks,
Martin
The web service is now marked as:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Single)]
From what I understand, this only allows a single thread to run through the web service at a time. Other requests are queued until the first completes. This was a quick fix that works in my situation because my web service doesn't require high concurrency. There are never more than a handful of requests coming in at a time.
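For context, here is a minimal sketch of how that attribute sits on a service class. All type names here are hypothetical, and InstanceContextMode.Single is added only as an assumption, on the idea that a single long-lived VM instance shared by all callers is what is wanted:

using System.ServiceModel;

// Hypothetical stand-in for the wrapper around the COBOL VM.
public class CobolVm
{
    public string Read(string fileName, string key) { return string.Empty; }
}

[ServiceContract]
public interface ICobolDataService
{
    [OperationContract]
    string ReadRecord(string fileName, string key);
}

// One service instance for the lifetime of the host, and only one request
// executing at a time, so the shared VM is never shut down underneath a caller.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class CobolDataService : ICobolDataService
{
    private readonly CobolVm _vm = new CobolVm();   // opened once, kept alive

    public string ReadRecord(string fileName, string key)
    {
        return _vm.Read(fileName, key);
    }
}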
We have a WCF service that calls some unmanaged code. The unmanaged code uses COM and creates COM objects to do some legacy work. We noticed that when an application pool recycle happens (due to a web.config change), the service is not always able to re-host in a new application domain, and w3wp.exe appears to hang. I am not getting errors or logs, except that I can't communicate with the service anymore. Killing the process and starting a new one works. Is there a way to tell IIS not to use an application domain recycle (because of the legacy/unmanaged code in the w3wp.exe process)? I noticed that when recycling on the private bytes limit there is a process recycle and not an app domain recycle. What else can I do to find out the state of the w3wp.exe process and why it hangs?
I am also noticing that different recycle options recycle in different ways. When using the UI in IIS, a recycle removes the processes and starts new ones; when hitting a memory limit, the same happens. But when changing a web.config, an app domain recycle happens. Is there information on which type of recycle occurs for each trigger, and are there ways to configure it?
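For reference, the process-level recycle triggers behind those UI settings live in the application pool's <recycling> section of applicationHost.config; the values below are purely illustrative. As far as I can tell, the app domain recycle on a web.config change comes from ASP.NET's file change notification rather than from these settings, but logEventOnRecycle at least makes IIS record in the event log which trigger fired.

<add name="MyAppPool">
    <!-- disallowRotationOnConfigChange applies to changes to the pool's own
         settings, not to web.config changes -->
    <recycling logEventOnRecycle="Time, Memory, PrivateMemory, ConfigChange, OnDemand"
               disallowRotationOnConfigChange="true">
        <periodicRestart time="00:00:00" memory="0" privateMemory="1048576" /> <!-- KB -->
    </recycling>
    <processModel idleTimeout="00:00:00" />
</add>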
I have a WCF service that does some document conversions and returns the document to the caller. When developing locally, the service is hosted on the ASP.NET Development Server; a console application invokes the operation and it executes within seconds.
When I host the service in IIS via a .svc file, two of the documents work correctly, but the third one bombs out: it begins to construct the Word document using the Open XML SDK, but then just dies. I think this has something to do with IIS, but I cannot put my finger on it.
There are a total of three types of documents I generate. In a nutshell, this is how it works:
SQL 2005 DB/IBM DB2 -> WCF service written by another developer to expose the data. This service has only one endpoint, using basicHttpBinding.
My service invokes his service, gets the relevant data, uses the Open XML SDK to generate a Microsoft Word document, saves it on a server, and returns the path to the user.
The Word documents are no bigger than 100 KB.
I am also using basicHttpBinding, although I have tried wsHttpBinding with the same results.
What is amazing is how fast it is locally, and even more so that two of the documents generate just fine; it's the third document type that refuses to work.
To the error message:
An error occurred while receiving the HTTP Response to http://myservername.mydomain.inc/MyService/Service.Svc. This could be due to the service endpoint binding not using the HTTP Protocol. This could also be due to an HTTP request context being aborted by the server (possibly due to the server shutting down). See server logs for more details.
I have spent the last two days trying to figure out what is going on. I have tried everything, including changing maxReceivedMessageSize, maxBufferSize, maxBufferPoolSize, etc. to large values. I even included:
<httpRuntime maxRequestLength="2097151" executionTimeout="120"/>
to see if maybe IIS was choking because of that.
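The quota changes themselves look roughly like this in the service's web.config; the binding name and the specific values here are only illustrative:

<system.serviceModel>
    <bindings>
        <basicHttpBinding>
            <!-- "LargeDocumentBinding" is an illustrative name; for buffered
                 transfers maxBufferSize should match maxReceivedMessageSize -->
            <binding name="LargeDocumentBinding"
                     maxReceivedMessageSize="10485760"
                     maxBufferSize="10485760"
                     maxBufferPoolSize="10485760" />
        </basicHttpBinding>
    </bindings>
</system.serviceModel>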
Programmatically the service does nothing special; it just constructs the Word documents from the data using the Open XML SDK, and like I said, locally all three documents work when invoked via a console app running against the ASP.NET dev server, i.e. http://localhost:3332/myService.svc
When I host it in IIS and try to get a Windows Forms application to invoke it, I get the error.
I know you will ask for logs, so yes, I have logging enabled on my host.
And there is no error in the logs; I am logging everything.
Basically I invoke two service operations written by another developer.
MyOperation calls HisOperation1 and then HisOperation2; both of those calls give me complex types. I am going to look at his code tomorrow, because he is using LINQ to SQL and there may be some funny business going on there. He is using a variety of collections etc., but the fact that I can run the exact same document, let's call it "Document 3", within seconds when the service is hosted locally on the ASP.NET WebDev Server is what is most odd. Why would it run on scaled-down Cassini and blow up on IIS?
From the log it seems that after calling HisOperation1 and HisOperation2 the service just goes into la-la land and dies; there is an application pool (w3wp.exe) error in the Windows Event Log.
Faulting application w3wp.exe, version 6.0.3790.1830, stamp 42435be1, faulting module kernel32.dll, version 5.2.3790.3311, stamp 49c5225e, debug? 0, fault address 0x00015dfa.
It's classified as a .NET 2.0 Runtime error.
Any help is appreciated; the lack of sleep is getting to me.
Help me Obi-Wan Kenobi, you're my only hope.
I had this message appearing:
An error occurred while receiving the HTTP Response to http://myservername.mydomain.inc/MyService/Service.Svc. This could be due to the service endpoint binding not using the HTTP Protocol. This could also be due to an HTTP request context being aborted by the server (possibly due to the server shutting down). See server logs for more details.
And the problem was that the object I was trying to transfer was not [Serializable]. The object I was trying to transfer was a DataTable.
I believe the Word documents you were trying to transfer are also not serializable, so that might be the problem.
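As an illustration (the contract and type names are hypothetical), returning an explicitly serializable data contract, or just a primitive such as a path or a byte array, instead of the raw object avoids the problem:

using System.Runtime.Serialization;
using System.ServiceModel;

// A plain, explicitly serializable shape instead of handing the raw
// DataTable/document object to WCF.
[DataContract]
public class ConversionResult
{
    [DataMember] public string DocumentPath { get; set; }
    [DataMember] public byte[] DocumentBytes { get; set; }
}

[ServiceContract]
public interface IDocumentService           // hypothetical contract
{
    [OperationContract]
    ConversionResult Convert(string documentId);
}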
Yes, we'd want logs, or at least some idea of what you're logging. I assume you have both message and transport logging on at the WCF level.
One thing to look at is permissions. When you run under Cassini, the web server is running as the currently logged-in user. This hides any SQL or CAS permission problems (as, let's be honest, your account is usually a local administrator). As soon as you publish to IIS, you are running under the application pool user, which is, by default, a lot more limited.
Try turning on IIS debug dumps and following the steps in KB919789
FYI, I changed IIS 6 to work in IIS 5.0 isolation mode and everything works. Odd.
I had the same error when using an IEnumerable<T> DataMember in my WCF service. Turned out that in some cases I was returning an IQueryable<T> as an IEnumerable<T>, so all I had to do was add .ToList<T>() to my LINQ statements.
I changed the IEnumerable<T> to IList<T> to prevent making the same mistake again.
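A minimal, generic sketch of that change: materialize the IQueryable<T> before it leaves the service, so the query runs while the data context is still alive rather than later during WCF serialization.

using System.Collections.Generic;
using System.Linq;

public static class QueryMaterializer
{
    // Before: the lazy IQueryable<T> was returned as IEnumerable<T>, which only
    // executes (and can fail) during serialization.
    // After: ToList() runs the query immediately, so a concrete list crosses
    // the service boundary.
    public static IList<T> Materialize<T>(IQueryable<T> query)
    {
        return query.ToList();
    }
}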