We are seeing some strange behavior on our OpenStack-based servers. We run nginx as a load balancer in front of two Apache web servers. The setup is automated with Ansible and worked fine on our previous cloud.
After moving to another cloud, all static files are served extremely slowly. Slow means 35 seconds for a 140 kB JS file. What makes it really strange is that only Windows devices have this problem.
Downloading that static file in a Windows VM running on a Mac is fast as normal (about 100 ms), while downloading it in a Linux VM on a Windows host is very slow (35 s). So it depends on the physical machine, not the guest OS.
We have no idea where to start searching. Every tip is welcome.
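One way to narrow this down (a rough sketch; the hostnames and file path below are placeholders for the load balancer, the two backends, and the slow JS file) is to time the same download against nginx and against each Apache backend directly, from both an affected Windows machine and an unaffected one:

    # Time the same static file via the load balancer and via each backend directly.
    # Hostnames and path are placeholders; run this from a fast and from a slow client.
    for host in lb.example.com web1.example.com web2.example.com; do
      echo "== $host =="
      curl -o /dev/null -s -w 'time_total: %{time_total}s  size: %{size_download} bytes\n' \
           "http://$host/static/app.js"
    done

If the backends are fast and only the nginx hop is slow (or only from certain clients), that at least tells you which leg of the path to look at.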
Related
I have a CentOS LAMP web server and have finished 97% of the development work. I can test my website from anywhere on the LAN, and it loads correctly in different browsers on different machines.
However, when I run wget on the web server itself, it times out for some reason. Please point me in the right direction, as some functionality is not working the way it was designed because of this.
Thanks!
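A rough way to narrow that down (the addresses and hostname below are placeholders): run the same request against the loopback address, the server's own LAN IP, and its hostname, and check what the hostname resolves to on the server itself:

    # Does the server answer itself on loopback?
    wget -O /dev/null --timeout=10 http://127.0.0.1/

    # Does it answer on its own LAN IP? (placeholder address)
    wget -O /dev/null --timeout=10 http://192.168.1.10/

    # What does the site's name resolve to locally, and does a request by name work?
    getent hosts www.example.local
    wget -O /dev/null --timeout=10 http://www.example.local/

    # Any firewall rules that could treat locally originated traffic differently?
    iptables -L -n -v

If loopback works but the hostname does not, the usual suspects are /etc/hosts entries, DNS, or NAT/hairpin routing rather than Apache itself.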
I'm running some websites on a dedicated Ubuntu web server. If I'm remembering correctly, it has 8 cores and 16 GB of memory and runs 64-bit Ubuntu. Content and files are delivered quickly to web browsers. Everything seems like a dream... until I run gzip or zip to back up an 8.6 GB website.
When running gzip or zip, Apache stops delivering content. Internal server error messages are delivered until the compression process is complete. During the process, I can log in via SSH without delays and run the top command. I can see that the zip process is taking about 50% CPU (I'm guessing that's 50% of a single CPU, not all 8?).
At first I thought this could be a log issue, with Apache logs growing out of control and not wanting to be messed with. The log files are under 5 MB, though, and are rotated when they hit 5 MB. Another current thought is that Apache only wants to run on one CPU and lets any other process take the lead. I'm not sure where to look to address that yet.
Any thoughts on how to troubleshoot this issue? Taking all my sites down while backups run is not an option, and I can't seem to reproduce the issue on my local machines (granted, it's different hardware and configuration). My hope is that this question is not too vague. I'm happy to provide additional details as needed.
Thanks for your brains in advance!
I'd suggest running your backup script under the "ionice" command. It will help keep the backup from starving httpd of I/O.
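For example (the paths are placeholders), wrapping the compression in ionice and nice keeps the backup at idle I/O priority and the lowest CPU priority, so httpd's requests get serviced first:

    # Run the backup at idle I/O class and lowest CPU priority.
    # The archive and site paths are placeholders.
    ionice -c3 nice -n 19 tar -czf /backup/site-$(date +%F).tar.gz /var/www/mysite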
What is your preferred development environment?
Native
WAMP/MAMP/LAMP (Apache, MySQL, PHP) on Windows/MacOS/Linux
Working copy local, SVN/CVS on server
IDE/Editor on the same system (Eclipse, Aptana, Zend...)
Virtual/Native (Server on VM)
LAMP on VirtualBox/VMware
Working copy in the VM
IDE/Editor on the host, accessing the VM via Samba, FTP, or SFTP (possibly mapping a drive with tools like WebDrive)
Virtual (VM)
Complete development environment running in a VM (server, tools, IDE)
Host is only used for special tools not available on the OS running in the VM
All have pros and cons.
With BitNami stacks you can run the exact same *AMP environment locally or remotely (and make sure everybody on your team is running the exact same stack). It is free and works on Windows, Linux, and Mac.
I like having the SVN repository somewhere on a web server.
It's reasonably secure (using Apache WebDAV), and it gives me a good chance of recovering quickly from any disasters that may befall my main development machine. I have the luxury of control over my own web server, but there are lots of cheap hosts that will do the job at low cost.
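As a rough sketch (the paths and URL are placeholders, and the mod_dav_svn <Location> block in Apache is configured separately):

    # On the web server: create the repository that Apache/mod_dav_svn will expose.
    svnadmin create /var/svn/myproject

    # On any development machine: check out a working copy over HTTPS.
    svn checkout https://svn.example.com/svn/myproject ~/work/myproject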
As regards VM or no VM:
Advantages of a VM:
Very fast recovery from screwing up your development environment
Ability to try out different versions or upgrades quickly
If you have many systems running the VM host, ability to quickly move the whole environment
Ability to choose any host OS
Disadvantages of a VM: performance impact and extra setup complexity.
On balance, I go for "no VM" if all the tools are available on my host system, but I do use VM when I need to run a different OS (the host system is a Mac Pro, so if I need Visual Studio, I do it with Parallels).
Here's the problem: I use around three different machines for development. My partner uses two. We have to go through the same freaking setup procedure on all five machines to get to work.
We're working with a PHP project here, so:
Install and configure PDT, a PHP debugger, and some version of XAMPP.
Then possibly install an SVN client and any other tools.
Again, on each of the five machines.
What if, instead, we did all of this once, in a virtual machine that is set up with the same stack, same versions, as the production server? Then each of us could grab a copy of the VM image, run that image on each of the five machines, and do all of our development in that VM. Put Eclipse, Apache, MySQL, the works, all in that VM.
The only negative of this approach, and please correct me on the "only" part, is performance. Is it really that big of an issue, though? The slowest machine of the five is a Samsung NC10 powered by a 1.6 GHz Intel Atom processor.
Do you think this is possible and practically usable? Or am I crazy?
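What we have in mind is roughly the following (a sketch assuming VirtualBox; the VM name and file names are placeholders): export the prepared VM once as an appliance, then import and start it on each machine:

    # On the machine where the VM was prepared: export it as an OVA appliance.
    VBoxManage export dev-lamp --output dev-lamp.ova

    # On each developer machine: import the appliance and start it.
    VBoxManage import dev-lamp.ova
    VBoxManage startvm dev-lamp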
I use a VM for development (running on my laptop) and have never had performance problems. Another approach that you could take would be to image the drive in the state that you want. Use Acronis or Ghost to re-image each machine when you need to. Only takes about 5-10 minutes to restore an image on any modern PC.
I use a VM for all my "work", as it keeps it away from my "play". This setup allows me to use the office VPN without exposing my whole machine to the office environment (which I trust about as much as the internets ;-)). Also, I don't have to worry about messing up my development environment by trying out games or other software. My work VM is currently running inside VirtualBox, but I have used VMware in the past. I have only noticed performance issues when using graphics-intensive programs like WebEx or the Terminal Server Client.
It can certainly be done. What turns me off is the size of the VM image, which would normally be several GB. Having it on a network share means it can take longer to transfer than your current setup process takes. I guess an external hard drive would be the easiest way to move it around.
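If the image does have to go over the network, a resumable copy at least avoids restarting from scratch when the transfer is interrupted (the file and host names are placeholders):

    # Copy the VM image to another machine; --partial keeps partially
    # transferred data so an interrupted copy can resume. Names are placeholders.
    rsync -av --partial --progress dev-vm.ova user@otherbox:/images/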
Performance wouldn't be an issue with any web development.
I have to ask: why do your current machines need to be "re-imaged" each time you sit down to work?
If you're using Windows you'll probably want to use SYSPREP on the master image so that the 'mini-setup' runs when you boot up the virtual machines for the first time.
Otherwise, from Windows' point of view, the machines have the exact same SID, hostname, and other identifiers; running multiple machines with the same SID on the same network can cause tons of headaches, even more so if you want them to communicate with each other.
I've run WebSphere for zSeries on a VMware virtual machine with no problem, and WebSphere is more resource-intensive than any PHP stack. I find that having a multi-core machine, or at least hyper-threading, makes it run a lot faster.
With VMware, disk operations are slower. For PHP development I doubt it would be a problem, but you'd definitely notice it if you were compiling a large C++ project. There is also Sun's VirtualBox, which is free, and the latest version is rather nice (but I haven't looked at how slow its disk operations are yet).
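If you want to put a rough number on the difference, one crude check is to run the same sequential-write test inside the guest and on the host and compare the reported throughput (the test file path is a placeholder):

    # Write 1 GB and flush it to disk before reporting throughput; run this inside
    # the VM and on the host, then compare the MB/s figures. Path is a placeholder.
    dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync
    rm -f /tmp/ddtest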
I am using that idea in practice. Virtual machines are generally great for development:
You can run multiple operating systems and multiple separate development environments.
You can preserve older development environments for later support.
They can be backed up easily, so when a hard drive crashes you don't have to start from the beginning.
They can be copied from one developer to another, so not everyone has to go through the tedious installations and configurations.
Downsides are:
Virtual machines are slower, so you need more powerful computers than you would otherwise. I would recommend at least 4 GB of RAM (preferably more like 16 GB), a fast multi-core processor, and fast hard drives.
When copying Windows virtual machines, each copy in use needs its own product key; every new copy has to be activated with a new key.
Have you thought about a configuration management tool like Ansible, Chef, or Puppet? With such software, automating these tasks is very easy. It can even create a fresh VM and then configure it.
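For example, a few Ansible ad-hoc calls run from the shell can push the same packages to every development VM in one go (the inventory file, group name, and package names below are placeholders and assume Debian/Ubuntu guests):

    # Install the same stack on every host in the "devvms" inventory group.
    # Inventory, group, and package names are placeholders.
    ansible devvms -i hosts.ini --become -m apt -a "name=apache2 state=present"
    ansible devvms -i hosts.ini --become -m apt -a "name=subversion state=present"
    ansible devvms -i hosts.ini --become -m apt -a "name=php state=present"

Writing the same thing as a playbook is usually the next step, since a playbook is easier to version-control and rerun than ad-hoc commands.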
What I would like to do is create a clean virtual machine image as the output of a build of an application.
So a new virtual machine would be created (from a template is fine, with the OS and some base software installed), a new web site would be created in IIS, the web app's build output would be copied to a location on the virtual machine's hard disk, IIS would be configured correctly, and the VM would start up and run.
I know there are MSBuild tasks to script all the administrative actions in IIS, but how do you script all the actions involving virtual machines? Specifically: creating a new virtual machine from a template, naming it uniquely, starting it, configuring it, and so on.
Specifically I was wondering if anyone has successfully implemented any VM scripting as part of a build process.
Update: I assume that with Hyper-V there is a different set of libraries/APIs for scripting virtual machines. Has anyone played around with this? And does anyone have real, practical experience of doing something like this?
Check out the PowerShell Management Library for Hyper-V on CodePlex. Some features:
Finding a VM
Connecting to a VM
Discovering and manipulating Machine states
Backing up, exporting and snapshotting VMs
Adding and removing VMs, configuring motherboard settings.
Manipulating Disk controllers, drives and disk images
Manipulating Network Interface Cards
Working with VHD files
You can actually script a fair number of tasks in MS Virtual Server:
http://www.microsoft.com/technet/scriptcenter/scripts/vs/default.mspx?mfr=true
http://msdn.microsoft.com/en-us/library/aa368876(VS.85).aspx
Also, Virtual PC Guy has a ton of material on his blog about scripting Virtual Server/PC, and now Hyper-V, here:
http://blogs.msdn.com/virtual_pc_guy/default.aspx
VMware has similar capabilities:
http://www.vmware.com/support/developer/scripting-API/
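For command-line use, VMware's vmrun tool (part of the VIX tooling that ships with Workstation/Server) covers the basic lifecycle; a rough sketch with placeholder paths and snapshot names:

    # Start a VM without a console window, snapshot a known-good state,
    # revert to it later, and shut the VM down. Paths and names are placeholders.
    vmrun -T ws start /vms/build-vm/build-vm.vmx nogui
    vmrun -T ws snapshot /vms/build-vm/build-vm.vmx clean-base
    vmrun -T ws revertToSnapshot /vms/build-vm/build-vm.vmx clean-base
    vmrun -T ws stop /vms/build-vm/build-vm.vmx soft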