Shared memory of the same DLL in different 32-bit processes is sometimes different in a terminal session on 64-bit Windows Server 2008

We have a 32-bit application consisting of several processes. They communicate through shared memory provided by a DLL that every process loads. The shared memory is built from global variables in C++ placed in a shared section via "#pragma data_seg ("Shared")".
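For reference, a minimal sketch of that pattern, with a placeholder variable name: in MSVC, every variable in the named segment must be initialized (otherwise the compiler quietly places it in the normal data section), and the linker must be told to mark the section shared:

    // shared_state.cpp -- compiled into the DLL loaded by all processes.
    #pragma data_seg("Shared")
    volatile long g_counter = 0;   // MUST be initialized to land in "Shared"
    #pragma data_seg()
    // Mark the section Read/Write/Shared so every process that loads the
    // DLL maps the same pages.
    #pragma comment(linker, "/SECTION:Shared,RWS")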
When running this application, sometimes when a new process is started in addition to an existing (first) process, we observe that the shared memory of the two processes is not the same. None of the newly started processes can communicate with the first process.
After stopping all of our processes and restarting the application (with several processes), everything works fine. But sooner or later, after new processes have been started and finished successfully, the problem occurs again.
Running on all other Windows versions, or in terminal sessions on Windows Server 2003, our application has never had this problem. Is there any new "feature" in Windows Server 2008 that might disturb the harmony of our application?

Windows runs 32-bit programs under a wrapper called WOW64. Are your processes all running under the same WOW64 wrapper? (Use Process Explorer to see the process tree.)
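If you want to check programmatically rather than in Process Explorer, a small sketch using the documented IsWow64Process API looks like this:

    // wow64check.cpp -- prints whether the current process runs under WOW64.
    #include <windows.h>
    #include <cstdio>

    int main() {
        BOOL isWow64 = FALSE;
        if (IsWow64Process(GetCurrentProcess(), &isWow64))
            std::printf("Running under WOW64: %s\n", isWow64 ? "yes" : "no");
        else
            std::printf("IsWow64Process failed: %lu\n", GetLastError());
        return 0;
    }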

Related

How to Force a WCF Service Application to Run in 32-bit Mode?

If I run a WCF service application straight out of the box in 64-bit mode with the service selected, it works fine and gives me the default data contracts.
If I change the app to x86 and build with x86 (not Any CPU), and configure the IIS 8 application pool for this application to allow 32-bit, it fails. How do I make the WCF application work in 32-bit mode (it must be 32-bit because it needs to be a wrapper for some legacy DLLs)? Note: I haven't referenced the DLLs or anything; it is just a straight out-of-the-box default WCF application (not a WCF library). Help :-)
Although I am not sure why it is not working in your case, there are two issues to consider when running in 32-bit mode on a 64-bit server:
Setting the platform target in Visual Studio: Setting this to x86 will force the target assembly to be built as a 32-bit application. If the assembly that loads the target assembly is running in a 64-bit process, it will fail to load your assembly.
However, you do not have to specify x86 to allow your assembly to be loaded in a 32-bit process. If you specify Any CPU as the Platform Target, it can be loaded into either a 32-bit or a 64-bit process.
32-bit IIS process: If your application is running as a web app (that is, in an IIS app pool worker process), you'll want that worker process (w3wp.exe) to be a 32-bit process. That can be specified in the advanced settings of the app pool:
Although it says 'Enable', it actually means "force": the app pool worker process will always be launched as a 32-bit process when this setting has a value of True. Setting it to False will launch a 64-bit app pool worker process.
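For what it's worth, the same setting can also be flipped from the command line with appcmd (the pool name "MyAppPool" below is a placeholder):

    %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /enable32BitAppOnWin64:true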

Running a Linux application on Windows

How about running a Linux application on the Windows platform without any OS virtualization?
Let's say we have Linux software installed on a Windows machine that could run successfully on Windows with the approach described below:
A normal Windows application runs by creating a virtual address space, as on any operating system. The program loader loads the libraries the application requires from the physical drive into that virtual address space. All the libraries related to the application get loaded on demand using file system APIs.
Now let's go a different way: instead of creating a virtual address space on the local system, we create a process address space on a different machine that is capable of running the application. In our case, we create the address space for the Linux application on a remote Linux machine instead of the local Windows machine. All file system accesses can be intercepted on the remote machine and transferred to the local Windows machine. In this way, a Linux application located on the local Windows machine creates its process address space on a remote Linux machine and accesses the file system on the local Windows machine. All file-system-related APIs can be remoted and routed to the local machine. The Linux application's UI can be captured on the Linux machine and sent for display on the local Windows machine.
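To make the idea concrete, here is a rough sketch of what one remoted file system call might look like. Everything in it (FsRequest, send_request_and_wait, remote_open) is hypothetical, and the network transport is deliberately stubbed out:

    // remote_fs_sketch.cpp -- hypothetical stub that would replace open()
    // on the Linux side and forward the request to the Windows machine.
    #include <cstdint>
    #include <cstdio>
    #include <string>

    struct FsRequest { uint32_t opcode; std::string path; int flags; };
    struct FsReply   { int result; int error; };

    // Placeholder transport: a real version would serialize the request,
    // send it over the network, and block for the reply.
    static FsReply send_request_and_wait(const FsRequest& req) {
        std::printf("would forward opcode %u for '%s'\n",
                    req.opcode, req.path.c_str());
        return FsReply{-1, 38};  // pretend the call is not implemented
    }

    // The stub the Linux application would hit instead of the real open().
    int remote_open(const std::string& path, int flags) {
        FsReply rep = send_request_and_wait(FsRequest{1u, path, flags});
        return rep.result;  // a handle into a table kept on the Windows side
    }

    int main() { remote_open("/etc/hosts", 0); }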
In this way, applications built for one platform could run on another without the need for OS virtualization. What is your opinion on this approach, and how feasible is it? Is there any big flaw that makes this approach non-feasible?
That little word "API" that you have used there means translating the entire set of system calls of one operating system into another's. Calls that go into creating a socket connection, creating a directory, file locking, etc.: EVERYTHING changes. You've discussed just memory here; the GUI has its own calls, and so do drivers and networks.
By the end of six years, that little million lines of code you would have written to achieve all this, when packaged and bundled, will be called, surprise, surprise: a hypervisor.

COM out-of-process server starts multiple instances

How do you force a local COM server to run under a common account (Local System would be good)? The RunAs documentation seems like it's only for DCOM and doesn't work locally.
The problem I face is that my CoCreateInstance is being called from processes that are running on different desktops, and in this scenario the SCM wants to start a new server for each desktop. I only want a single instance, as designed!
What you are describing is a system service, not a COM server. A COM server is designed to run under whatever session runs it, not under "session 0" (services) or any single session. If you need something that runs in only one session and has global access to everything else, you should use a Windows service, not a COM server.
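To illustrate the per-desktop behaviour: even when a local server registers its class object with REGCLS_MULTIPLEUSE, so that one running EXE serves repeated activations, the SCM only reuses it for activations coming from the same desktop. A stripped-down sketch, with a made-up CLSID and a deliberately do-nothing factory:

    // comserver_sketch.cpp -- link with ole32.lib; the CLSID is made up.
    #include <windows.h>
    #include <unknwn.h>

    static const CLSID CLSID_MyServer =
        {0x12345678, 0x1234, 0x1234, {0x12,0x34,0x12,0x34,0x12,0x34,0x12,0x34}};

    // Minimal statically allocated class factory; CreateInstance is a stub.
    struct MinimalFactory : IClassFactory {
        STDMETHODIMP QueryInterface(REFIID riid, void** ppv) {
            if (riid == IID_IUnknown || riid == IID_IClassFactory) {
                *ppv = static_cast<IClassFactory*>(this); return S_OK;
            }
            *ppv = nullptr; return E_NOINTERFACE;
        }
        STDMETHODIMP_(ULONG) AddRef()  { return 2; }  // static lifetime
        STDMETHODIMP_(ULONG) Release() { return 1; }
        STDMETHODIMP CreateInstance(IUnknown*, REFIID, void** ppv) {
            *ppv = nullptr; return E_NOTIMPL;  // real object creation goes here
        }
        STDMETHODIMP LockServer(BOOL) { return S_OK; }
    } g_factory;

    int main() {
        CoInitializeEx(nullptr, COINIT_MULTITHREADED);
        DWORD cookie = 0;
        // One EXE serves all clients of this CLSID -- but only clients
        // activating from the same desktop; another desktop gets its own EXE.
        CoRegisterClassObject(CLSID_MyServer, &g_factory,
                              CLSCTX_LOCAL_SERVER, REGCLS_MULTIPLEUSE, &cookie);
        MSG msg;
        while (GetMessage(&msg, nullptr, 0, 0)) DispatchMessage(&msg);
        CoRevokeClassObject(cookie);
        CoUninitialize();
        return 0;
    }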
If you need the COM server aspect for other reasons, but want to share resources globally or still have "one process" that controls whatever you need to do, you can have your COM server communicate with your service using whatever IPC method you prefer.
Also, in your comments you say "when I run from the command line" -- if you run an EXE from the command-line, it doesn't matter if it is registered as a COM server or not, it just runs like any other EXE/app -- which means it runs as whatever user you run it as, in whatever session you are in. Registering an EXE as a COM server just allows other processes to run that EXE and communicate with it via OLE/COM, but the EXE can still run as a normal app as well. For example, Microsoft Word and Outlook are both COM servers. That is, outlook.exe is a COM server, but of course you can also run it as a normal application.

Client VM not inlining?

I am on a Linux machine and use OpenJDK 7. After finding that my code executed twice as fast with the -server option, I dove deeper into what was happening inside the VM and found that the Server VM inlines my code like crazy, while the Client VM does not inline at all.
Is this normal behavior?
It is normal behaviour.
The server JVM optimises the code more heavily. This uses more CPU on startup and more memory when it is running.
The client VM is designed for quick start-up, e.g. for applets. It is the default only on 32-bit Windows JVMs.
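A quick way to see this for yourself is to ask HotSpot to print its inlining decisions; -XX:+PrintInlining is a diagnostic flag, so it has to be unlocked first (MyBenchmark stands in for whatever class you are measuring):

    java -client -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining MyBenchmark
    java -server -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining MyBenchmark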

IIS process recycle and session variables

Is it a bad idea to set up a web application on client PCs running Windows XP (there are ten workstations) rather than on a server (Windows Server 2003)?
I have an ASP.NET application which has a memory leak and uses session variables. I believe that the session variables may be causing problems when the IIS process recycles. Is there any benefit to installing IIS on a server rather than on workstations?
I was debating whether to ask this question here or on Server Fault. As my question references session variables, I decided to ask it here.
If the problem is session variables or a memory leak, it really doesn't matter where you run the application, because the problem is in the code, not the platform.
The only possible benefit of running a problematic application like this on Windows Server 2003 (IIS 6) rather than on Windows XP (IIS 5.1) is that you can schedule recycles for the application pools under IIS 6, and you may be able to put a band-aid over the problem by recycling often and changing the code to store session state out of process.
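Storing session state out of process is a web.config change; a minimal sketch using the built-in ASP.NET state service, pointing at its default port on the local machine:

    <configuration>
      <system.web>
        <!-- Session data survives worker-process recycles because it lives
             in the separate ASP.NET State Service process. -->
        <sessionState mode="StateServer"
                      stateConnectionString="tcpip=127.0.0.1:42424"
                      timeout="20" />
      </system.web>
    </configuration>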
Bottom line - fix the code and run the application where it makes sense to run it.