Apologies if this is the wrong platform for this question.
If I want to migrate 100 VMs onto Azure VMs, what do I need to consider, and how can I migrate them?
This is not a comprehensive answer, but some things to consider are:
- Start with a thorough inventory of the VMs to migrate. Issues to watch out for include (see the sketch after this list):
- Any unsupported OS versions, including 32-bit.
- Large numbers of attached drives.
- Disk drives >1TB.
- Gen 2 VHDs.
- Application and network interdependencies which need to be maintained.
- Specific performance requirements (e.g. any VMs that would need Azure Premium Storage, SSD drives, etc.).
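If the source environment is Hyper-V, a rough inventory sketch along these lines can flag some of the issues above (a hypothetical starting point using the Hyper-V PowerShell module; adapt as needed for VMware):

```powershell
# Hypothetical inventory sketch using the Hyper-V PowerShell module.
# Flags VM generation, attached-disk counts and disks larger than 1 TB.
Get-VM | ForEach-Object {
    $disks = $_ | Get-VMHardDiskDrive | Get-VHD
    [pscustomobject]@{
        Name       = $_.Name
        Generation = $_.Generation        # Gen 2 VHDs need extra attention
        DiskCount  = @($disks).Count      # watch for large numbers of attached drives
        Over1TB    = @($disks | Where-Object { $_.Size -gt 1TB }).Count
    }
} | Format-Table -AutoSize
```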
In developing a migration strategy, some important considerations are:
- How much downtime can you tolerate? To minimize downtime, look at solutions like Azure Site Recovery, which supports rapid switchover. If your downtime requirements are more flexible, there are offline migration tools and scripts available.
- Understand whether to move to the new Azure Resource Manager or the Service Management deployment model. See https://azure.microsoft.com/en-us/documentation/articles/resource-group-overview/.
- Which machines to move first (pick the simplest, with the fewest dependencies).
- Consider cases where it may be easier to migrate the data or application to a new VM rather than migrate the VM itself.
A good forum for specific migration questions is the Microsoft Azure Site Recovery forum.
Appending to sendmarsh's reply:
The things you will have to consider are:
- The virtualization platform, i.e. VMware or Hyper-V.
- OS version, RAM size, OS disk size and count, number of data disks and the capacity of each, disk format, number of processor cores, number of NICs, processor architecture, network configuration (such as IP addresses), and the generation type if the environment is Hyper-V.
I may have missed a few things, like checking whether the VMware Tools are installed. Some configurations are not supported; for example, iSCSI disks will not work. Microsoft does not support every machine naming convention, so be careful when setting names, as that might affect things later.
A full list of prerequisites is over at:
https://azure.microsoft.com/en-us/documentation/articles/site-recovery-best-practices/#azure-virtual-machine-requirements
Update: using PowerShell to automate the migration will make your life easier.
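For example, a minimal sketch using the classic (Service Management) cmdlets - the storage account, cloud service, and file names here are placeholders, and Azure Site Recovery remains the better fit when downtime matters:

```powershell
# Hypothetical sketch with the classic (ASM) Azure PowerShell module;
# all names below are placeholders.
$dest = "https://mystorageacct.blob.core.windows.net/vhds/web01.vhd"

Add-AzureVhd  -Destination $dest -LocalFilePath "D:\Exports\web01.vhd"   # upload the VHD
Add-AzureDisk -DiskName "web01-os" -MediaLocation $dest -OS Windows      # register it as an OS disk

New-AzureVMConfig -Name "web01" -InstanceSize Small -DiskName "web01-os" |
    New-AzureVM -ServiceName "migrated-vms" -Location "West US"          # bring up the VM
```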
I work with a number of different specialized and configured OS environments but I generally only use one at a time. I have a processor-beefy laptop but storage is always an issue. It would also be good to have a running backup of each environment so I can work from other hardware.
Ideally, I could run some kind of VM library server that maintained canonical copies of each environment, from which I could download local execution copies to work with on my machine, then stream changes back to the server image as I worked.
In my research it seems like a number of the virtual machine providers used to have services like this (Citrix Player, VMware Mirage), but they have all been EOL'd.
Is there a way to set something like this up today? I'd love a FOSS solution based on KVM, but I'd be willing to take a free proprietary solution.
I downloaded the Redis server and CLI to my local machine, and it is working well.
I just wanted to know if I can also use it on a production server:
Are there any critical limitations? For example, can I use 100 GB for free? (It will be on my own computer.)
I know that Redis Labs costs money per month, but if I download Redis to my machine and don't use Redis Labs, would it be free (with the only cost being the storage of the machine I'm using)?
Redis is open source software, licensed under BSD. That basically means you can do anything you want with it, without owing anyone anything.
Redis Labs, the home of open source Redis and the provider of commercial products that leverage it, offers a wide spectrum of solutions - hosted, as-a-service, downloadable, remotely managed and so forth. You can (and sometimes should) use them, but that's definitely not a requirement.
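To the 100 GB question specifically: self-hosted Redis has no built-in size cap and no license fee - memory use is bounded only by the host and by what you configure. Bear in mind that Redis keeps its dataset in RAM, so 100 GB of data means roughly 100 GB of memory, not just disk. A rough illustration (assumes redis-server is running locally and redis-cli is on the PATH):

```powershell
# Rough illustration: the only "limit" is the one you configure yourself.
redis-cli CONFIG SET maxmemory 100gb   # cap Redis at ~100 GB (if the host has the RAM)
redis-cli CONFIG GET maxmemory         # verify; 0 means no limit at all
```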
Disclaimer: I work at Redis Labs and with the open source project.
Azure and EC2 are optimized for running servers. Lots and lots of servers. Both platforms attempt to manage tons of things for you -- in Azure's case, it wants to manage even the target operating system.
However, I'd like to use such a service for a different reason: Testing.
I've got a ton of operating systems I need to support. My tests don't actually take that long, but running them on every platform is time consuming. I was going to just use a cloud service for this, thinking that these machines would be running for much less than an hour, and it wouldn't cost all that much.
The problem is that the major cloud services won't run client versions of Windows -- Windows Server only.
Is there a cloud service that would let me run every client and server version, and every service pack level, of Windows released from Windows 2000 SP4 to the present day?
Try CloudSigma. You can definitely upload your own ISOs and run any 32-bit or 64-bit x86 OS you like on it. They have in-house versions to get you started, but you can bring your own OS versions.
They are based in Switzerland but also have servers in the US; I'd expect performance to be quite good.
https://www.cloudsigma.com/
There is also a free trial on at the moment:
https://cs.cloudsigma.com/accounts/signup/
The list of Open Virtualization Alliance members may have some candidates for you.
A search on the page for "operating system" suggests the following possibilities (in addition to the already-mentioned CloudSigma):
ElasticHosts
stepping stone GmbH (I'm less sure about this one)
Sublime IP
No, commercial cloud services like Azure and Amazon EC2 are themselves virtual, so you don't get a great deal of control over the operating system.
An option may be to rent a full physical server (colocated or managed) and then use a battery of virtual machines to run the tests. Something like VMware's snapshot feature sounds perfect: spin up a clean virtual machine, deploy the test code, then throw away the changes to the disk once the tests have completed.
Or, indeed, as #Stuart suggests - run the tests locally.
This definitely isn't something Azure offers - I think all of Azure's images are based on Windows Server 2008 R2.
For EC2 you could set up images for Server 2003 through 2008 R2 - but nothing else. There are also some services out there to assist with this - e.g. VaasNet http://www.vaasnet.com/catalog
For testing the other Windows operating systems, I simply don't think there's a cloud service available to let you do this. I don't even think there are any cloud services where you can run "Virtual PC" type applications on top of the hosted operating system - as I think most of the virtualization APIs are disabled in the cloud environments (virtualization within virtualization not supported!)
Sorry to say this, but your best bet may be local test hardware running VirtualPC images.
It appears that the Xen Cloud Platform might do what you're after. This page ends with:
Guest Operating Systems: the XCP binary distribution is delivered with a wide range of Linux and Windows guests. Check out the release notes for a complete list.
And their PDF document Xen Cloud Platform Virtual Machine Installation Guide (Release 0.1, Published October 2009) says that Windows 2000 Server has "No known issues."
(I don't have any affiliation with Xen)
In conjunction with the above, there is also a list of Xen VirtualPrivateServerProviders, several of which say they include Windows.
Buy time on an EC2 instance and use it to host VirtualBox VMs, with a VM set up for each operating system you want to test. Use an RDP client, VNC, or some other means to control the guest OS. This forum post seems to suggest that this is possible. But no, it is not a cloud service itself, and you would have to do some initial setup and configuration work yourself.
I use Linux as my primary OS. I need some suggestions on how to set up my desktop and development environment. I work mostly on .NET and Drupal, but sometimes on other LAMP products, C/C++, and Qt. I'm also interested in mobile (Android...) and embedded development.
Currently I install everything on my main OS, even if I use it only a little. I use VMs a little (for a LAMP server).
Should I use a separate VM for each kind of development (like one for .NET/Mono, another for C++, one for mobile, one for the DB only, one for xyz things, etc.)?
Or keep the primary development environment on the main OS and move the others into VMs?
My requirements:
- The main OS should not get messed up.
- Things must be easy to organize.
- Performance should be optimal (optimal settings for the best performance of each component).
I'm interested to know how others are doing this.
There are both pros and cons to VMs.
Pros:
- Portability: you can move an image to a different server.
- Easy backup (but lengthy).
- Replication (when a new member joins the team).
Cons:
- Performance.
- Hardware requirements.
- Size of backups (20-40 GB per VM...).
- Management of backed-up images (the differences between them are not obvious).
- Keeping all images up to date (patching / Windows updates).
For your scenario, I would create a base VM with the core OS and shared components (web server, database), replicate it, and install the specific tools into separate VMs. If you combine tools within one VM, you may end up with the same mess as when using the base OS - the advantage is that it is much easier to get rid of it ;-)
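As a rough illustration of the replicate-the-base idea (hypothetical VM names, using VirtualBox's CLI as one example hypervisor):

```powershell
# Hypothetical: clone a base VM into per-purpose development VMs.
VBoxManage clonevm "BaseDev" --name "DotNetDev" --register
VBoxManage clonevm "BaseDev" --name "CppDev"    --register
VBoxManage startvm "DotNetDev" --type headless   # run one without a GUI window
```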
Optimal performance != using VMs
If you need to use VMs anyway, then yes: it could be better to use a separate VM for each thing that needs one, unless you need more than one at once.
Now that OCI containers are stable and well supported, using them through Docker, Podman, or another similar tool is an increasingly popular option.
They are isolated, but run under the same kernel, so:
- they are almost as portable as virtual machines;
- like virtual machines, they can have their own virtual IP addresses, so they can run services that are not visible from the outside and that don't occupy ports on the host; but
- they don't reserve any extra space on disk or in memory the way virtual machines do;
- they are not slowed by any virtualization layer; and
- mounting directories from the host is easy and does not require any special support.
The usual approach is to have the checkout in the developer's normal home directory and mount it into containers for building, testing and running.
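For example (a hypothetical sketch - the image and test command are placeholders for whatever your stack uses):

```powershell
# Hypothetical: mount the checkout from the host into a throwaway container
# for a build/test run; nothing gets installed on the host itself.
docker run --rm -it -v "${PWD}:/src" -w /src mcr.microsoft.com/dotnet/sdk:8.0 dotnet test
```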
Building in containers is also supported by the Remote Development extension for Visual Studio Code.
I'd like to create a VM in Virtual PC 2007 for use as a development environment/sandbox for an existing ASP.NET application in Visual Studio 2005/SQL Server 2005 (and VSS for source control).
I'm thinking that I need to create a 'base' copy of the environment (with the OS, Visual Studio, and SQL Server), and then copy that to a 'work' version that I do actual development in. I would be sharing this VM with one or two other developers who would be working on different parts of the app.
Is this a good idea? What is the best way to get my app/databases in and out of the VM and the changes I make into VSS? Is it just a copy from the host location to the VM share and back again? How do I keep everything synchronized?
Thanks!
I would seriously suggest the following:
Use a "server" solution, rather than a desktop solution. That's far more reasonable if you want to share the VM environment with other developers.
Use VMware's products rather than Microsoft's.
From these two points it follows that you should use VMware ESX Server and related products. If you don't want to or can't invest money in it, there's a free version of the product: http://www.vmware.com/go/getesxi/, though I've never used it.
Whether you choose the enterprise version of ESX Server or the free version, I suggest you get your organization's IT department involved.
It's not a bad idea, if you think there's a need for it.
I do something similar when I need to develop a Windows App because it's just nice to have a clean environment. That way I don't accidentally add a reference to something that's not necessarily included in the .NET Framework. It forces me to install any 3rd party components as I'm developing and documenting. This way, I can anticipate prerequisites, and ensure that I have them documented before I load software to a user's PC and wonder why it doesn't work.
Just make sure the PC it's hosted on can handle the additional load. My main dev PC has a dual-core processor and 4 GB of RAM; I devote 2 GB to any virtual PC I plan on using as a development environment so that I don't hit too much of a performance snag.
As for keeping everything synchronized, you will want to use some sort of source control (as you should even in a normal environment). (I like SVN with TortoiseSVN as my client of choice, but there are plenty of alternatives.) Just treat the virtual PCs as if they were normal PCs. Make sure they can access the network so you can all reach your source code repository.
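For instance (hypothetical repository URL and paths), each developer checks out from the shared repository inside their own VM:

```powershell
# Hypothetical: check out the shared code from inside the virtual PC.
svn checkout https://buildserver.example.com/svn/myapp/trunk C:\Dev\myapp
```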
You can use the snapshot feature (or whatever it is called) - changes to the "system" are saved to a delta file so that you can easily revert to an earlier state of the virtual PC. It has some performance penalty. This way you don't have to keep base and work copies.
I use Virtual PCs for all of my Windows development. The company I work for has legacy products in FoxPro and current products in .NET so I have 2 environments set up:
1 - Windows XP with FoxPro and VSS - I can access VSS directly from this image, and the code never enters other machines in my network (I work remotely).
2 - Windows 7 with VS2008 and all the associated bits and pieces needed to develop our .NET software (including TFS). This is the machine I use every day - I have a meaty desktop PC, so I am able to give the VPC 4 GB of RAM and it runs as fast as a 'normal' PC.
I have my VPCs running in VirtualBox and it is every bit as good as the other offerings. A previous answer mentioned VMware ESX, which is an excellent product for large-scale deployment, but if you want a server solution then VMware Server is free and is a nice virtualisation platform.
If you are looking for ways to experiment with changes and still want to use VPC, then undo disks are excellent - you fire up the machine, hack away to your heart's content, and when you shut down you can choose to save or discard the entire session.
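The rough VirtualBox equivalent of undo disks is snapshots (the VM name is a placeholder):

```powershell
# Hypothetical: take a "clean" snapshot, experiment, then roll back.
VBoxManage snapshot "DevVM" take "clean"
# ... hack away ...
VBoxManage controlvm "DevVM" poweroff       # the VM must be off before restoring
VBoxManage snapshot "DevVM" restore "clean"
```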
For me Virtual PCs are an excellent way to quickly set-up / tear down development environments and I would struggle to return to using a single machine for all my work.