How to Force a WCF Service Application to Run in 32-bit Mode? - wcf

If I run a WCF service application straight out of the box in 64-bit mode with the service selected, it works fine and gives me the default data contracts.
If I change the app to x86 and build with x86 (not Any CPU), and configure the IIS 8 application pool for this application to allow 32-bit, it fails. How do I make the WCF application work in 32-bit mode? (It must be 32-bit because it needs to be a wrapper for some legacy DLLs.) Note: I haven't referenced the DLLs or anything; it is just the straight-out-of-the-box default WCF application (not a WCF library). Help :-)

Although I am not sure why it is not working in your case, there are two issues to consider when running in 32-bit mode on a 64-bit server:
Setting the platform target in Visual Studio: setting this to x86 forces the target assembly to be built as a 32-bit application. If the assembly that loads your assembly is running in a 64-bit process, it will fail to load it.
However, you do not have to specify x86 to allow your assembly to be loaded in a 32-bit process. If you specify Any CPU as the Platform Target, it can be loaded in either a 32-bit or a 64-bit process.
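As a concrete illustration of the Platform Target point, this is the MSBuild property that Visual Studio writes into the project file when you pick x86 (the surrounding property group and its conditions depend on your build configurations):

```xml
<!-- .csproj: build the assembly as 32-bit -->
<PropertyGroup>
  <PlatformTarget>x86</PlatformTarget>
  <!-- Use AnyCPU instead to allow loading in either a 32-bit or 64-bit process -->
</PropertyGroup>
```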
32-bit IIS process: If your application is running as a web app, (running in an IIS app pool worker process), you’ll want that worker process (w3wp.exe) to be a 32-bit process. That can be specified in the advanced settings of the app pool:
Although it says 'Enable', it actually means "force": the app pool worker process will always be launched as a 32-bit process when this setting is True. Setting it to False will launch a 64-bit app pool worker process.
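For reference, the checkbox in the app pool's advanced settings corresponds to this attribute in applicationHost.config (the pool name here is an assumption):

```xml
<!-- %windir%\System32\inetsrv\config\applicationHost.config -->
<applicationPools>
  <add name="DefaultAppPool" enable32BitAppOnWin64="true" />
</applicationPools>
```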

Related

Understanding ASP.NET Core with Apache

In IIS we had an aspnet_isapi extension that handles the request; IIS spawns a w3wp.exe process, w3wp.exe then loads and starts the CLR, and then the CLR does its job.
Now, Kestrel is configured inside the Main() method, so Main() has to execute first. So who starts the Core CLR? Is it IIS on Windows and Apache on Linux? Do IIS and Apache know how to find and start the Core CLR?
What I know is: when a .NET application is executed, control goes to the operating system, and the OS creates a process to load the CLR.
The program the operating system uses to load the CLR is called the runtime host, and it differs depending on whether the application is a desktop or a web-based application, i.e.:
The runtime host for desktop applications is an API function called CorBindToRuntime.
The runtime host for web-based applications is the ASP.NET worker process (aspnet_wp.exe).
So how is it possible that the Main() method executes first and only then the CLR? I am not able to digest this; please help.
Forget about everything you know about IIS.
For Apache or nginx, just run your ASP.NET Core console application (which initializes the Core CLR) on a local port (http://localhost:5000, for example), and then set up reverse-proxy rules to forward external traffic to that port.
That's roughly what Microsoft documented in this article.
Such a reverse-proxy setup is common; other web stacks (Node.js, Python, Go) use the same approach.
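A minimal sketch of such a reverse-proxy rule for nginx, assuming the app listens on port 5000 as in the example above (the server name is a placeholder):

```nginx
server {
    listen 80;
    server_name example.com;  # assumed host name

    location / {
        # Forward external traffic to the Kestrel-hosted app
        proxy_pass         http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header   Host $host;
    }
}
```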
Because of this specific setup, Linux launches your .NET Core console app like any native executable: it loads the dotnet host (or your own executable for a self-contained deployment) and jumps to its native entry point (not your managed Main).
Apache/nginx is not involved in any way.
Calling into this entry point triggers CoreCLR initialization, which in turn loads your managed assemblies and calls your managed Main.
You might find articles like this helpful.

Do IIS Web Applications that use the same App Pool share DLLs in memory?

I inherited a large web site. To the user, it consists of 20 "modules" with different functionality. Each module can be accessed via a menu from each other module.
Each module has been implemented as a separate Web Application in IIS, all sitting under the Default Web Site. They all use the same App Pool. All implemented in ASP.NET Core (net5).
The modules share about 70% of their code. This library code sits in several projects. The web application projects all have References to the library DLLs. After everything has been built, the bin folder of each web application project has a copy of the library DLLs (so there are then 20 copies of each library DLL on disk).
Assume web application 1 is receiving requests and has been loaded into server memory. If web application 2 then gets loaded into server memory, will the library DLLs be loaded into memory again for web application 2? Or will web application 2 use the library DLLs that have already been loaded into memory for web application 1? In other words, after web applications 1 and 2 have been loaded into memory, will there be 1 copy of the library code in memory or 2 copies?
Reason behind the question is that I need to reduce memory usage on the web server. There are no operational benefits to having separate web applications. They are all deployed together in one go. We never start or stop just one of them, it is always all or nothing. Wondering if I can save memory by having 1 big web application instead of 20 smaller web applications.
Your ASP.NET Core web apps in the same application pool are configured to use out-of-process hosting, so all their assemblies/libraries are loaded into individual .NET Core (Kestrel-based) processes (usually dotnet.exe, or your own executable when self-contained deployment is used).
Diagrams in that Microsoft article make it super easy to understand the relationship among the runtime processes.
In that mode, IIS worker process(es) w3wp.exe only loads the ASP.NET Core module to work as reverse proxy.
Combining the two points above, the answer to your question ("Do IIS Web Applications that use the same App Pool share DLLs in memory?") is clear: nothing is shared, and because of the process boundary you cannot share anything either.
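For context, out-of-process hosting is selected by the aspNetCore element in each app's web.config; the processPath/arguments values below are the usual framework-dependent layout, and MyModule.dll is a placeholder for one module's assembly:

```xml
<!-- web.config: the ASP.NET Core Module proxies requests to a separate dotnet process -->
<system.webServer>
  <aspNetCore processPath="dotnet"
              arguments=".\MyModule.dll"
              hostingModel="OutOfProcess" />
</system.webServer>
```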

WCF Service Application does not work with App pool in integrated Pipeline mode

I am running the Windows Server 2012 Release Candidate. For testing purposes I deployed the WCF Service Application from the VS template, without modifications, on the Default Web Site. Retrieving Service1.svc only works if the app pool is in Classic pipeline mode, not in Integrated mode.
If I switch to Integrated mode I get HTTP Error 404.17 - Not Found.
The ISAPI handler mapping to aspnet_isapi.dll for the *.svc extension is registered for both 32-bit and 64-bit. The service works in Classic mode, so the handler registration seems to be at least partly right.
I already tried setting "Enable 32-Bit Applications" to True. Deploying the application as 32-bit or 64-bit made no difference either.
What am I missing?

Out of process COM server works fine in the unit test harness but not in the real service

We have a WCF service hosted in IIS that currently calls a VB6 DLL to do part of its job. This works fine, but we can't deploy the service to a 64-bit target environment without configuring IIS to run it in a 32-bit worker process.
I am currently investigating ways around that restriction. Porting the VB6 DLL to .NET is not possible for various reasons, so I created an ActiveX EXE wrapper around the DLL in VB6, so that the service can run in 64-bit and the VB6 parts in 32-bit.
When I tested the service I got this error:
Type: System.UnauthorizedAccessException
Message: Retrieving the COM class factory for component with CLSID {9AE7303B-D159-43F6-B1A5-52D297581820} failed due to the following error: 80070005.
After some Googling I found that this is due to either:
Calling an MS Office component
DCOM permissions not being configured
NTFS file permissions not allowing read/exec access to the IIS worker process identity (ASPNET in my environment)
Of these:
Definitely not applicable
Also not applicable; I am not hosting the EXE in DCOM or COM+, just a simple COM out-of-process activation
This looks likely; however, I checked the permissions, and NTFS reports that the Users group (which ASPNET is a member of) does indeed have read/exec access to the file
I tried calling the EXE from a unit test fixture, which runs under my admin-level account rather than the IIS worker process account, and it worked fine, so the error is definitely something to do with permissions. I'm not sure what to do next. Can anyone suggest things I can check?
My test environment is Windows XP / IIS 5.1
UPDATE:
The IIS virtual directory is configured for Anonymous+Windows access; the WCF service uses only Anonymous authentication, the Windows authentication is for the VS debugger. Task Manager reports that the aspnet_wp.exe process is definitely running in the ASPNET account.
I explicitly granted Read and Execute access to the ASPNET and IUSR_<machine> accounts on all the COM exes and dlls involved. This made no difference.
I explicitly granted Local Launch and Local Activation access to the ASPNET and IUSR_<machine> accounts on the relevant interfaces in the DCOM configuration. This made no difference either.
As I see it I have 3 options:
Keep trying to get this working somehow.
Go the whole hog and host the EXE in COM+.
Give up. Tell users that the WCF service must be configured to run in a 32-bit app pool on 64-bit Windows.
Your error is an UnauthorizedAccessException, so the problem is probably rights-related.
You could check what the security context of the 32-bit worker process is.
Also check your event log; there may be information there about which account is being used.

Shared memory of the same DLL in different 32-bit processes is sometimes different in a terminal session on 64-bit Windows Server 2008

We have a 32-bit application consisting of several processes. They communicate through shared memory in a DLL used by every process. The shared memory is built from global variables in C++ via "#pragma data_seg ("Shared")".
When running this application, sometimes when starting a new process alongside an existing (first) process, we observe that the shared memory of the two processes is not the same. None of the newly started processes can communicate with the first process.
After stopping all of our processes and restarting the application (with some processes), everything works fine. But sooner or later, after successfully starting and finishing new processes, the problem occurs again.
On all other Windows versions, and in terminal sessions on Windows Server 2003, our application never had this problem. Is there any new "feature" in Windows Server 2008 that might disturb the harmony of our application?
Windows runs 32-bit programs under a compatibility layer called WOW64. Are your processes all running under the same WOW64 environment? (Use Process Explorer to see the process tree.)
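For readers who have not seen the technique the question relies on, here is a minimal MSVC-only sketch of a shared data segment in a DLL (the segment and variable names are illustrative; the pragmas are Microsoft-specific and ignored by other compilers):

```cpp
// Compiled into the DLL that every process loads (MSVC-specific pragmas).
#pragma data_seg("Shared")
// Variables must be initialized, otherwise the compiler places them in
// the ordinary uninitialized-data section instead of the named segment.
volatile long g_sharedValue = 0;
#pragma data_seg()
// Mark the segment Read/Write/Shared so all processes that load the DLL
// map the same physical pages for it.
#pragma comment(linker, "/SECTION:Shared,RWS")
```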