WCF services on 64-bit and 32-bit machines - wcf

We created WCF services with high traffic consumption; multiple clients make requests to the services every second to get live data from the server.
When I deployed the services to IIS 7.5 on the machine configured as below, IIS consumed around 30-35% CPU.
Machine configuration -
Windows 7 Professional
Intel(R) Core(TM) i3-2120 CPU @ 3.30GHz
RAM - 4 GB
32-bit operating system
When I deployed the services to IIS 7.5 on the machine configured as below, IIS consumed around 60-70% CPU.
Machine configuration -
Windows Server 2008 R2 Standard
Intel(R) Xeon(R) CPU E5649 @ 2.53GHz (2 processors)
RAM - 6 GB
64-bit operating system
I do not understand why the same code consumes more CPU on the higher-spec 64-bit server. Please advise.
Thanks,
ravi

Is your IIS 7.5 application pool running in 32-bit compatibility mode or not? Try switching it. But I don't think the problem is coming from the OS being 32-bit or 64-bit.
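For reference, the 32-bit compatibility switch can be flipped from the command line with appcmd; this is a config fragment for an elevated prompt on the server, and the pool name "DefaultAppPool" is a placeholder for your own application pool:

```shell
REM Enable 32-bit worker processes for one application pool
%windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /enable32BitAppOnWin64:true

REM Recycle the pool so the change takes effect
%windir%\system32\inetsrv\appcmd.exe recycle apppool "DefaultAppPool"
```

Set the flag back to false to return the pool to 64-bit worker processes.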

Related

IIS8 enable 32bit applications

I have a Windows Server 2012 machine with 8GB RAM.
When I use Enable 32-Bit Applications in IIS, what is the maximum RAM I can use with this option?
What is the difference in RAM usage between setting Enable 32-Bit Applications to true and to false?
Each app pool worker process will be limited to 4GB of memory. A 32-bit process has only a 4GB virtual address space, and on 64-bit Windows a 32-bit process doesn't have any reservation in that 4GB for kernel memory (as it does on 32-bit Windows). Note that an app pool can have multiple worker processes if you want.
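As a quick sanity check of that 4GB figure: a 32-bit pointer can address at most 2^32 bytes. A small Python sketch (Python only because this thread has no code of its own) that computes the limit and reports the bitness of the process it runs under:

```python
import struct

# A 32-bit process has a 32-bit virtual address space: 2**32 bytes total.
ADDRESS_SPACE_32BIT = 2 ** 32
print(ADDRESS_SPACE_32BIT // (1024 ** 3), "GB")  # prints: 4 GB

# Pointer size tells you whether the current process is 32- or 64-bit:
# struct.calcsize("P") is 4 bytes in a 32-bit process, 8 in a 64-bit one.
pointer_bytes = struct.calcsize("P")
print("64-bit process" if pointer_bytes == 8 else "32-bit process")
```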

DotNetZip performance issue, but only on one specific server

I'm having a weird performance problem with the DotNetZip library.
In the application (which runs under ASP.NET), I read a set of files from the database and pack them on the fly into a zip file for the user to download.
Everything works fine on my development laptop: a zip file of about 10MB at the default compression level takes around 5 seconds to finish. However, on the customer's dev server, the same set of files takes around 1-2 minutes to compress; I've even seen times of several minutes. CPU utilization is 100% while the zipping runs but otherwise stays around 0%, so it's not due to overload.
What's even more interesting is that on the production server it takes about 20 seconds to finish.
Where should I start looking?
Some hardware specs:
My Laptop
Development environment running on a virtualbox with 2 cores and 4GB RAM dedicated.
Core i5 M540 2.5GHz
8 GB RAM
Win7
Dev Server
According to properties dialog on My Computer (probably virtualized)
Intel Xeon 5160 3GHz
540MB RAM
Windows 2003 Server
Task Manager Reports Single Core
Production Server
According to properties dialog on My Computer (probably virtualized)
Xeon 5160 3GHz
512MB RAM
Windows 2003 Server
Task Manager Reports Dual Core
Update
The servers are running on a VMWare host. Found the VMWare icon hiding in the taskbar.
As Mitch said, the virus scanner would probably be your best bet. That, combined with the dev server being a single-core machine and the production server being dual-core (and probably without a virus scanner), may explain the delay. It would also be valuable to know the type of disk in those machines: if the production server and your laptop have SSDs while the dev server has a very old standard hard disk with low RPM, for example, that too would explain a delay. Try getting a view of the I/O reads/writes for the zip folder on the dev and production servers; you can use the SysInternals tools for that, and if a virus scanner or any other unexpected process is running, you're probably going to see the difference there. The SysInternals tools can help you find the culprit quickly.
UPDATE: Because you commented that the zip is created in memory, I'd add that you can also use those tools to get a better understanding of what happens in memory. A delay of several minutes, where you'd expect nearly equal results because the dev and production servers are so alike, makes me think of the page file. See if other processes on the dev server have claimed a lot of memory; if there isn't enough left for the zip operation, the dev server will start using the page file, which is very expensive.
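To compare the machines independently of the ASP.NET application, a tiny in-memory compression benchmark helps separate raw CPU speed from disk and virus-scanner effects. This is a Python sketch (no C# from the app is shown in the thread); the 10MB payload mirrors the size in the question:

```python
import io
import os
import time
import zipfile

def bench_inmemory_zip(payload: bytes) -> float:
    """Zip `payload` entirely in memory and return the elapsed seconds."""
    buf = io.BytesIO()
    start = time.perf_counter()
    with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("data.bin", payload)
    return time.perf_counter() - start

# Roughly the 10MB from the question; random data makes DEFLATE do real work.
payload = os.urandom(10 * 1024 * 1024)
print(f"compressed 10MB in {bench_inmemory_zip(payload):.2f}s")
```

Running the same script on the laptop, the dev server, and the production server should show whether the slowdown is CPU-bound compression or something environmental (paging, antivirus, I/O).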
The hardware turned out to be the problem here.
The customer's IT guys have now upgraded the server hardware on which the virtualized dev server runs, and I now see compression times of about 6 seconds for the same package size and number of files as on my local computer.
The specs now shown in the My Computer properties window:
AMD Phenom II X6 1100T
3.83GHz, 1.99 GB RAM

Does VMware or any other virtualization software allow you to set the number of CPU cores?

I am setting up a testing PC to help find a weird bug that occurs on a single-core computer. None of the PCs in our office are single-core any more.
Can I use VMware or something similar to emulate a single-core PC on a multi-core computer?
Can you emulate a 32-bit version of Windows with VMware on a 64-bit PC?
Yes to both questions with VMware.
You can change the number of CPUs (cores) allocated to each virtual machine. If you allocate just one, you'll have your single-core machine.
You can also install a 32-bit guest operating system on a 64-bit VMware host.
Yes. Pretty much all the desktop virtualisation apps default to emulating a single-core 32-bit guest machine. (They may be using multiple host cores to do it.)
VMware and VirtualBox allow multiple cores and 64-bit guests to be configured; VirtualPC currently does not.
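Once the VM is set up, you can verify from inside the guest that it really sees a single core. A generic Python check (not specific to any hypervisor):

```python
import os
import platform

# Number of logical CPUs the guest OS exposes to processes.
cores = os.cpu_count()
print(f"{platform.machine()} guest with {cores} logical core(s)")

# For the single-core repro scenario, you'd expect exactly one.
if cores == 1:
    print("single-core environment confirmed")
```

`os.cpu_count()` reports logical CPUs, so on a host with hyper-threading make sure the VM is configured with one virtual CPU and one core per CPU.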

How much memory does each apppool use by default in IIS 6.0?

How much memory does Windows or IIS 6.0 allocate by default for each web site when each site runs in its own app pool? What is the memory effect of web sites sharing the same app pool versus each running in a separate one? I am asking about when the web sites start up, not when they begin consuming memory as their applications run.
A vanilla Windows Server 2003 IIS 6.0 worker process consumes about 5MB of memory.

Staging, Testing, and Version Control Server Spec Recommendations

I am going to be working with a few outside developers on some ASP.NET projects and wanted to set up a co-located server for version control, testing, and staging client sites until they are ready to deploy.
I already have the ISP and have a 10 megabit connection burstable to 100, so I don't think bandwidth is going to be an issue.
My question is, what specs should the server itself have? I was thinking of getting a Dell server with the following specs:
Dual Core Intel Pentium E2180, 2.0GHz, 1MB Cache, 800MHz FSB
4GB, DDR2, 800MHz, 4x1GB,Dual Ranked DIMM
RAID 1 160GB 7.2K RPM SATA 3Gbps hard drives
Windows Server 2008
Will this suffice?
If the project isn't too big, that looks fine. My experience with version control systems on large projects is that memory tends to be the biggest bottleneck; I'd make sure you can upgrade to 8GB of RAM if the project is going to be large.