What do I need to know about running my own dedicated server (with Windows 2008)?

I'm thinking of getting my own dedicated server with the following stats:
Processor: Celeron 440 2.0 GHz
Memory: 1 GB
Primary Hard Drive : 160 GB SATA II
This will be running Windows. I have some experience with my local IIS and with playing around with servers, but I have never set one up (at least not a Windows one), and I've never dealt with DNS/backup/security issues.
My question has two parts:
Will this server be able to run Windows 2008, SQL Server, and possibly Exchange without trouble? I'm worried about the processor and RAM.
Are there any guides/tutorials that cover how to administer a Windows server from start to finish? (I'm looking for something like the FAQs Slicehost has for *nix-based servers.)

You WILL run into problems with RAM. Refer to the Microsoft documentation on minimum requirements for SQL Server and Exchange. Also bear in mind that recent releases of Exchange run only on 64-bit systems.
Personally, I would recommend installing the Server Core version of Windows Server 2008 if you plan to go with the configuration you described.
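Bear in mind that Server Core has no GUI, so day-to-day administration happens entirely at the command line. As a rough illustration of what that looks like (the role name, interface name, and addresses below are placeholders, so check them against your own machine):

    rem Add a role on Server Core 2008 (role names are case-sensitive):
    start /w ocsetup DNS-Server-Core-Role
    rem Assign a static IP:
    netsh interface ipv4 set address name="Local Area Connection" source=static address=203.0.113.5 mask=255.255.255.0 gateway=203.0.113.1

The upside of going without the GUI is a smaller memory footprint and fewer components to patch, which matters on a 1 GB box.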

It depends on user load. If you have about 1,000 unique users per month, that works out to roughly 30 users per day, or about one per hour. I think you will use more CPU working on this computer yourself. So it really depends on user load.
If I were you, I would add more RAM to get to about 4 GB. RAM is the cheapest upgrade available.

You state "I've never dealt with DNS/backup/security issues."
I would suggest that these are the most important issues. You need to stay on top of security: applying security patches, ensuring firewalls are properly configured, and so on.
Having been called in after the fact for websites that have been hacked, I can tell you it is not pretty. Learn all you can before you stand this server up on the internet.
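For example, one of the first things to verify on a 2008 box is that the built-in firewall is on and only the ports you actually need are reachable. A minimal sketch from an elevated prompt (the rule name and office IP range are made up):

    netsh advfirewall set allprofiles state on
    netsh advfirewall firewall add rule name="RDP from office only" dir=in action=allow protocol=TCP localport=3389 remoteip=203.0.113.0/24

Restricting RDP to a known address range alone eliminates a large class of brute-force attempts.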

Related

High memory usage by Java(TM) Platform SE binary

We are noticing high memory usage by the Java(TM) Platform SE binary process on our IBM MobileFirst Server. Two to three days after the server starts, it reaches up to 6 GB, which causes the server to hang; the only solution is a restart.
In the logs we found the following message:
"No buffer space available (maximum connections reached?): connect"
Environment: IBM Worklight Server 7.1 with Java 1.7 (64-bit) on Windows Server 2012. A hybrid mobile application runs on this server.
It seems that some configuration may be required. Can anyone advise?
Lots of information is missing... this could have any number of causes.
Are you in a cluster? If yes, how many servers? How much memory is available to each machine?
How many adapters do you have deployed? What value did you give the serverSessionTimeout property? That one, for example, can cause connections to stay open for a longer time, meaning the server will not "clean up/remove" connections... and the more you have open, the more memory you will require.
All of these and more can contribute to how much memory you may need.
See also: http://www-01.ibm.com/support/docview.wss?uid=swg21690707
It mentions DB2, but the idea is the same: the more connections, the more memory you will need.
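If the session timeout turns out to be the culprit, it is tunable. The following is only a sketch of the idea, assuming a Liberty profile and that the property is exposed as a JNDI entry named worklight/serverSessionTimeout; verify the exact name and location against the IBM documentation for your 7.1 installation:

    <!-- Illustrative only: shorten the session timeout (in minutes) so idle
         connections are cleaned up sooner. Goes in the Liberty server.xml. -->
    <jndiEntry jndiName="worklight/serverSessionTimeout" value="10"/>

A shorter timeout trades reconnection overhead for faster cleanup of the per-connection memory described above.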

DotNetZip performance issue, but only on one specific server

I'm having a weird performance problem with the DotNetZip library.
In the application (which runs under ASP.NET) I'm reading a set of files from the database and packing them on the fly into a zip file for the user to download.
Everything works fine on my development laptop. A zip file of about 10 MB with the default compression rate takes around 5 seconds to finish. However, on the dev server at the customer, the same set of files takes around 1-2 minutes to compress, and I've seen even longer times, up to several minutes. CPU utilization is 100% while the zipping is running, but otherwise it stays around 0%, so it's not due to overload.
What's even more interesting is that on the production server, it takes about 20 seconds to finish.
Where should I start looking?
Some hardware specs:
My Laptop
Development environment running on a virtualbox with 2 cores and 4GB RAM dedicated.
Core i5 M540, 2.5 GHz
8 GB RAM
Win7
Dev Server
According to properties dialog on My Computer (probably virtualized)
Intel Xeon 5160, 3 GHz
540 MB RAM
Windows 2003 Server
Task Manager Reports Single Core
Production Server
According to properties dialog on My Computer (probably virtualized)
Intel Xeon 5160, 3 GHz
512 MB RAM
Windows 2003 Server
Task Manager Reports Dual Core
Update
The servers are running on a VMware host. Found the VMware icon hiding in the taskbar.
As Mitch said, the virus scanner is probably your best bet. That, combined with the dev server being a single-core machine and the production server a dual-core (and probably without a virus scanner), may explain the delay. It would also be valuable to know the type of disk in those machines: if the production server and your laptop have SSDs and the dev server has a very old standard hard disk with low RPM, for example, that would also explain a delay. Try getting a view of the I/O reads/writes for the zip folder on the dev and production servers; you can use the SysInternals tools for that, and if a virus scanner or any other unexpected process is running, you will probably see a difference there. The SysInternals tools could be of value here in finding the culprit quickly.
UPDATE: Because you commented that the zip is created in memory, I'd add that you can also use those tools to get a better understanding of what happens in memory. A delay of several minutes, where you'd expect nearly equal results because the dev and production servers are a lot alike, has me thinking of the page file. See whether other processes on the dev server have claimed a lot of memory; if there isn't enough left for the zip operation, the dev server will start using the page file, which is very expensive.
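One cheap way to take ASP.NET, the database, and the network out of the equation is a tiny console harness that times nothing but the compression on each machine. A sketch along these lines (the file name and size are made up, and random bytes are nearly incompressible, so compare the numbers between machines rather than to your real files):

    // Times an in-memory zip of ~10 MB of dummy data, mirroring the
    // on-the-fly approach used in the web app. Requires the DotNetZip
    // (Ionic.Zip) assembly.
    using System;
    using System.Diagnostics;
    using System.IO;
    using Ionic.Zip;

    class ZipTimer
    {
        static void Main()
        {
            var payload = new byte[10 * 1024 * 1024];
            new Random(42).NextBytes(payload); // deterministic dummy data

            var sw = Stopwatch.StartNew();
            using (var zip = new ZipFile())
            using (var output = new MemoryStream())
            {
                zip.AddEntry("dummy.bin", payload);
                zip.Save(output);
            }
            sw.Stop();
            Console.WriteLine("Zipped in {0} ms", sw.ElapsedMilliseconds);
        }
    }

If the harness is fast on the dev server while the web app is slow, the problem is environmental (scanner, page file, I/O); if the harness is just as slow, it is raw CPU.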
The hardware seemed to be the problem here.
The customer's IT guys have now upgraded the server hardware that the virtualized dev server runs on, and I now see compression times of about 6 seconds for the same package size and number of files as on my local computer.
The specs now found in the My Computer properties window:
AMD Phenom II X6 1100T
3.83 GHz, 1.99 GB RAM

Does a cloud service like Azure or EC2 exist which can run arbitrary workloads? (e.g. Client SKUs of Windows)

Azure and EC2 are optimized for running servers. Lots and lots of servers. Both platforms attempt to manage tons of things for you -- in Azure's case, it wants to manage even the target operating system.
However, I'd like to use such a service for a different reason: Testing.
I've got a ton of operating systems I need to support. My tests don't actually take that long, but running them on every platform is time consuming. I was going to just use a cloud service for this, thinking that these machines would be running for much less than an hour, and it wouldn't cost all that much.
The problem is that the major cloud services won't run client versions of Windows -- Windows Server only.
Is there a cloud service which would let me run every client and server version, and every service pack level, of Windows released starting with Windows 2000 SP4 to the present day?
Try CloudSigma. You can definitely upload your own ISOs and run any x86 or 64-bit OS you like on it. They have their in-house versions to get you started, but you can bring your own OS versions.
They are based in Switzerland, but they also have servers in the US; I'd expect performance to be quite good.
https://www.cloudsigma.com/
There is also a free trial on at the moment:
https://cs.cloudsigma.com/accounts/signup/
The list of Open Virtualization Alliance members may have some candidates for you.
A search on the page for "operating system" suggests the following possibilities (in addition to the already-mentioned CloudSigma):
ElasticHosts
stepping stone GmbH (I'm less sure about this one)
Sublime IP
No, commercial cloud services like Azure and Amazon EC2 are themselves virtual, so you don't get a great deal of control over the operating system.
An option may be to consider renting a full physical server (colocated or managed) and then using a battery of virtual machines to run the tests. Something like VMware's snapshot feature sounds perfect: spin up a clean virtual machine, deploy the test code, then throw away the changes to the disk once the tests have been completed.
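That loop is easy to script. Roughly, assuming VMware Workstation's vmrun tool (the paths and the snapshot name are placeholders):

    vmrun -T ws snapshot "C:\VMs\w2k-sp4.vmx" clean
    vmrun -T ws start "C:\VMs\w2k-sp4.vmx"
    rem ... deploy the test code and run the suite inside the guest ...
    vmrun -T ws stop "C:\VMs\w2k-sp4.vmx" hard
    vmrun -T ws revertToSnapshot "C:\VMs\w2k-sp4.vmx" clean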
Or, indeed, as @Stuart suggests, run the tests locally.
This definitely isn't something Azure offers - I think all of Azure's images are based on Windows Server 2008 R2.
For EC2 you could set up images for Server 2003 through to 2008 R2 - but nothing else. There are also some services out there to assist with this - e.g. VaasNet http://www.vaasnet.com/catalog
For testing the other Windows operating systems, I simply don't think there's a cloud service that will let you do this. I don't even think there are any cloud services where you can run "Virtual PC"-type applications on top of the hosted operating system, as I believe most of the virtualization APIs are disabled in cloud environments (virtualization within virtualization is not supported!).
Sorry to say it, but your best bet may be local test hardware running Virtual PC images.
It appears that the Xen Cloud Platform might do what you're after. This page ends with:
Guest Operating Systems: the XCP binary distribution is delivered with a wide range of Linux and Windows guests. Check out the release notes for a complete list.
And their PDF document Xen Cloud Platform Virtual Machine Installation Guide (Release 0.1, Published October 2009) says that Windows 2000 Server has "No known issues."
(I don't have any affiliation with Xen)
In conjunction with the above, there is also a list of Xen VirtualPrivateServerProviders, several of which say they include Windows.
Buy time on an EC2 instance and use it to host VirtualBox VMs, with a VM set up for each operating system you want to test. Use an RDP client, VNC, or some other means to control the guest OS. This forum post seems to suggest that is possible. But yes, it is not a cloud service itself, and you would have to do some initial setup and configuration work yourself.
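If you try it, driving VirtualBox headless from the instance looks roughly like this (the OVA file, VM name, and memory size are placeholders; note that remote display via --vrde needs the VirtualBox extension pack, and without hardware virtualization inside the EC2 guest I believe only 32-bit guests will run, slowly):

    VBoxManage import w2k-sp4.ova --vsys 0 --vmname "w2k-sp4"
    VBoxManage modifyvm "w2k-sp4" --memory 512 --vrde on
    VBoxManage startvm "w2k-sp4" --type headless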

Does it make sense to put all development work in the cloud?

Is it possible to have virtual machines in the cloud, install Visual Studio there, and have developers use the 'cloud' to do their day-to-day programming work? Is the cost going to be too high? Is the speed going to be too slow?
Where can I find statistics or numbers to convince people?
I like using remote virtual machines to run development servers, but I don't like using my IDE on a remote server. The latency is noticeable. If you're without an internet connection you can't work. My happy compromise is to have a dev server available (EC2) and sync it with my laptop via git.
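The sync itself is ordinary git plumbing; something like the following, where the host and paths are made up (push to a bare repository on the dev server, then update its working copy over SSH):

    git remote add devbox user@devbox.example.com:repos/app.git
    git push devbox master
    ssh user@devbox.example.com 'cd ~/app && git pull'

Working this way, the laptop stays usable offline and the EC2 box only needs to be running when you want the dev server up.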
It is completely possible to do this. Using a service like Rackspace, you can set up a fairly powerful Windows server for as little as $60 a month:
http://www.rackspacecloud.com/cloud_hosting_products/servers/pricing
In my experience, using Remote Desktop to log into a Rackspace Windows Cloud Server has been snappy and quick (though of course a lot of that depends on the strength of your internet connection). The process of standing up the server is lightning fast, backing it up is even easier, and it can easily be resized down the line if you need more storage/bandwidth.
These days I don't understand why a small to mid-sized organization would actually waste capital on server hardware.
Evan

Stress testing a server and VPS's vs. Dedicated servers

We used to have a dedicated server (1&1) and very infrequently ran into problems with it.
Recently, we migrated to a VPS (Wiredtree.com) with similar specs to our old dedicated server, but we notice frequent problems: running out of memory, MySQL having to restart, etc., both when knowingly running intensive scripts and also just randomly during normal use.
Because of this, we're considering migrating to another VPS, this time at Slicehost, to see if it performs better.
My question is twofold...
Are there straightforward ways we could stress test a VPS at Slicehost to see if the same issues occur, without having to actually migrate everything over?
Also, is it possible that the issues we're facing aren't just because of the provider (Wiredtree) but due to the difference between a dedicated box and a VPS (despite similar specs)?
The best way to stress test an environment is to put it under load. If this VPS is hosting a web application, use one of the many available web server benchmark tools: ab, httperf, Siege or http_load. You don't necessarily care that much about the statistics from the tool itself, but more that it puts a predictable load on the server so that you can tune Apache to handle it, or at least not crash and burn.
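For reference, the invocations are one-liners (the URL and volumes here are illustrative):

    ab -n 5000 -c 50 http://test-slice.example.com/
    siege -c 25 -t 2M http://test-slice.example.com/

Start low and ramp the concurrency up until response times degrade; that knee is the number you compare between providers.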
The one problem you have with testing against Slicehost is that you are at the mercy of the Internet and your bandwidth to Slicehost. You may not be able to put enough load on the server to reach a meaningful conclusion.
Instead, you might find it just as valuable to run one of the many virtualization products on the market and set up a VM with comparable specs to the VPS plan you're considering. Local testing over your LAN will allow you to put a higher and more predictable load on the server.
In either case, you don't need to migrate everything, but you will need to set up an environment for your application to run in, with representative data in your database.
A VPS with similar specs to a dedicated server should perform approximately the same, but in order to get good performance, you still need to tune Apache, MySQL and any other long-lived server processes. In my experience, the out-of-the-box configuration of Apache in many Linux distributions is not ideal and will allow far too many child processes, overcommitting memory and sending the server into a swap-death spiral.
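As a rough sketch of that tuning for Apache 2.2's prefork MPM (the numbers are placeholders; size MaxClients to roughly free RAM divided by the resident size of one Apache child, so the box can never overcommit):

    <IfModule prefork.c>
        StartServers          5
        MinSpareServers       5
        MaxSpareServers      10
        MaxClients           40
        MaxRequestsPerChild 1000
    </IfModule>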