What is the best way to manage code between VMs and a central SVN repository?
To be more specific, I have a desktop with a Linux VM environment, as well as a laptop with a Linux VM environment. Both are running under VMware Workstation. I switch back and forth between desktop and laptop all the time, but I have trouble keeping the two in sync.
The most obvious--yet probably least efficient--choice is to just commit everything before I switch machines. However, this leads to committing code that is partially complete, just so I can work on a different machine.
I've considered using something like rsync to keep my two development environments in sync. I think this would be better because then I can still commit changes to svn when I want to, while keeping both desktop and laptop in sync.
So while I'm tempted to go the rsync route, I'm still concerned that I have to proactively sync things. In my case, I'm picturing a scenario where I'm working on something on my desktop, then leave to go to a coffee shop to do work with my laptop, only to realize that I didn't sync before leaving the house (DOH!).
I don't know if there's really any way around this. Maybe I could rsync everything to a centralized server that's always online? And set up cron jobs to run every few mins or whatever to sync with my various development environments?
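For reference, the kind of thing I'm picturing is a small script plus a crontab entry, roughly like this (the "devbox" host and the paths are just placeholders):

    #!/bin/sh
    # sync-dev.sh -- push the local working copy to an always-on box.
    # "devbox" and both paths are made up for illustration.
    rsync -az --delete ~/projects/myapp/ devbox:/srv/sync/myapp/

    # crontab entry to run it every five minutes:
    # */5 * * * * $HOME/bin/sync-dev.sh >/dev/null 2>&1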
Is there a better option?
You could consider using distributed version control instead. If you don't have the ability to change the central server, there are still wrappers like git-svn that allow you to use git on your end, while interacting with a Subversion server.
The workflow in a DVCS setup:
Make changes on machine #1, committing locally, repeat.
At switch time, commit locally.
Pull or push changesets from machine #1 to machine #2.
Continue work on machine #2.
At switch time, commit locally.
Pull or push changesets from machine #2 to machine #1.
Repeat.
When it's time to actually push to the server, whichever computer you're on should have the latest code and you can push up to the master server (SVN or whatever).
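A rough sketch of that flow with git-svn, assuming the machines can reach each other over ssh (the repository URL, paths and machine names are placeholders):

    # One-time setup on each machine: clone the SVN repo as a local git repo.
    git svn clone https://svn.example.com/repo/trunk myproject

    # On machine #1: commit locally as often as you like, even half-done work.
    git add -A && git commit -m "WIP: half-finished feature"

    # At switch time, on machine #2: pull machine #1's commits over ssh.
    git pull machine1:~/myproject master

    # When the work is actually ready, push it up to the SVN server.
    git svn rebase     # pull in the latest SVN revisions first
    git svn dcommit    # replay your local commits onto the SVN server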
This does make you commit intermediate changes - but I've found that to be more of a benefit of using a DVCS than a burden.
An alternative to this might be to keep your whole dev directory in a Dropbox folder or some equivalent. Then you don't have to deal with rsync or anything yourself, but you have less control over syncing.
syncd may be what you are looking for: https://github.com/drunomics/syncd. It uses inotify and rsync to listen for file system changes and rsync the changed files to a remote server.
It is a one-way sync, though, so you will have to stop it when you stop working on one machine and start it on the other. You will also need an SSH server running on both machines.
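If syncd itself doesn't fit, the underlying idea is easy to sketch with inotifywait from inotify-tools (the hostname and paths below are placeholders):

    #!/bin/sh
    # Watch the project tree and push every change to the other machine.
    # Needs inotify-tools installed locally and sshd running on "otherbox".
    while inotifywait -r -e modify,create,delete,move ~/projects/myapp; do
        rsync -az --delete --exclude='.svn/' \
              ~/projects/myapp/ otherbox:~/projects/myapp/
    done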
I work with a number of different specialized and configured OS environments but I generally only use one at a time. I have a processor-beefy laptop but storage is always an issue. It would also be good to have a running backup of each environment so I can work from other hardware.
What would be ideal would be if I could run some kind of VM library server that maintained canonical copies of each environment, from which I could download working copies to my local machine and then stream changes back to the server image as I did my work.
In my research it seems like a number of the virtual machine providers used to have services like this (Citrix Player, VMware Mirage), but they have all been EOL'd.
Is there a way to set something like this up today? I'd love a FOSS solution based on KVM, but I'd be willing to take a free proprietary solution.
I am very, very new to Docker. Our team has had a very nice deployment lineup where we have different CI engines for different projects, including Jenkins and TeamCity.
Developers usually check in, CI takes over and deploys, and it's perfectly ready for the test team to test. I always thought this to be a perfect model. Of course, some parts of our implementation have their flaws, but it worked very well for what we wanted.
Now, our DevOps team is introducing Docker, where test teams get a Docker image from the Docker registry every time we run a build from TeamCity. While it sounds really fancy, I am still failing to understand the benefit of it.
After my research, my conclusion was that Docker can be a good lightweight replacement for VMs. But that only applies if you are using VMs in the first place, and we are not using any VMs. I just do not understand what the real value is here. Also, while searching I found a relatively good link on Docker:
https://www.ctl.io/developers/blog/post/what-is-docker-and-when-to-use-it/
Where they discuss when you should use Docker and one of the point says that:
Use Docker whenever your app needs to go through multiple phases of development (dev/test/qa/prod, try Drone or Shippable, both do Docker CI/CD)
OK. However, they do not elaborate further on why Docker is useful when my app has to go through multiple phases.
And how is it extremely helpful compared to a regular dev/test setup when the existing setup is already working smoothly?
First, you are right to compare it to VMs; it is similar in many ways. However, Docker is incredibly lightweight, and that property is the one that surprised me most in the beginning. As opposed to virtual machines, containers share resources much more efficiently: virtual machines are fully isolated, while containers can run simultaneously on a host machine with very little overhead. You can configure containers to be able to talk to each other (via volume or port bindings).
Furthermore, in my team, docker brings the following benefits:
our application consists of one big application and several microservices. But we want to release everything as one package with inter-dependencies among the applications, which eliminates problems with figuring out which versions of the application and microservices should be deployed together (compatibility), etc. That is, the image contains all you need, and you can bring the applications up or down, all together or one by one, using docker-compose (see the sketch after this list). You do not need to deploy; you simply pull the image and fire up one or more containers. If you wish to stop one of the microservices, it can be done without affecting the others.
developers in the team can run the very same image on a local machine, for example to troubleshoot a problem that occurred in production, which means troubleshooting can be done in the same environment as production. This brings environment standardization and no more "but it works on my machine" talk.
another benefit it brings to us is the following: we build a docker image, run our tests against it, and push it to the registry once all these phases succeed, which translates into great portability.
Ability to version control the containers. You can easily compare containers between the current version and previous versions. If you wish to roll back, that is done smoothly.
Isolating and securing applications. All containers are isolated and you can easily control what goes in and out.
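As a rough illustration of the docker-compose point above (the service name here is made up):

    # Bring the whole stack up in the background.
    docker-compose up -d

    # Stop just one microservice without touching the others.
    docker-compose stop billing-service

    # After changing its image tag in docker-compose.yml, recreate only
    # that one container and leave its dependencies alone.
    docker-compose up -d --no-deps billing-service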
It took me a year before I got used to the idea, but now it seems simple enough.
I think part of that comes from the fact that people keep calling Docker a "virtual machine", which is not accurate. That's really just a nickname for what's happening behind the scenes. In a lot of ways, Docker will NOT replace a complete virtualization solution, such as VMWare. It does, however, bring forth a new way of thinking about infrastructure. One that many people have a difficult time wrapping their heads around.
You can start asking yourself: What makes a Linux distribution unique?
Aside from the kernel, everything else is just a "standard way" of organizing binaries, libraries, runtime and configuration files. You need your binaries in /bin, your libs in /lib, your configuration in /etc. User installations get placed under /usr...
Most distributions will keep the main structure from the Unix legacy and add their own quirks. Each one will have its own way to manage and distribute packages. Each will maintain its own versions of libraries, drivers, etc.
The key ingredient is the kernel. That's something they all have in common. Nowadays, recent builds of the Linux kernel are compatible with pretty much all major distributions available. So, aside from /boot, most of everything else is just a matter of having the right files in the right place with the right permissions.
Now, imagine you take all that distribution bundle (except the kernel) and place it all in another directory of your running OS. Taking advantage of the same kernel you are already running, you isolate a new process so that it "thinks" that / is now that directory. Bingo! This process now "thinks" it's running all by itself on another operating system.
Docker builds on top of Linux Containers, which allows us to do exactly that, but in a friendlier and easier way. Don't think of it as a virtual machine. Think of it as process isolation. The running kernel will share the machine's resources with this process, while keeping it isolated from the rest of the system. It's like jails on steroids.
That was a broad simplification. But, given the concept, think about the implications of this idea.
You can have, on the same host, multiple processes with completely different environments that might otherwise conflict with each other. One may be a legacy binary that needs old libraries in place (legacy systems that never die). Another may be the most recent build of a bleeding-edge technology. Sharing the same kernel is efficient and valuable resource management.
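You can see this for yourself: the two containers below have completely different userlands, yet uname reports the host's kernel in both (a sketch using the public ubuntu and alpine images):

    # Two different distributions, one shared kernel.
    docker run --rm ubuntu sh -c 'cat /etc/os-release; uname -r'
    docker run --rm alpine sh -c 'cat /etc/os-release; uname -r'
    # /etc/os-release differs, but uname -r prints the host kernel both times.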
The most value I found comes from managing the infrastructure. Once you install Docker on the hosts, configure a swarm, and define a way of deploying containers, you mostly forget about the hosts. Adding users, installing packages, customizing, editing configuration files... All that becomes a development task on your desktop. There's an incentive to script more, to automate more. To keep your hands away from the physical or virtual machines, unless absolutely necessary.
Gone are the days when someone changed some obscure setting on the server to work around some weird application behavior, forgot to tell anyone about it and took a vacation. Changes to the environment can be committed to version control, tracked and improved by everyone on the team. If your datacenter goes through a disaster, recreating the whole environment is a matter of rebuilding images and redeploying containers. Your infrastructure becomes consistent and reproducible, while keeping the doors open to a wide variety of operating systems and customized configurations for each application.
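As a tiny illustration of what "rebuilding images and redeploying containers" looks like day to day (the image and service names are made up, and this assumes swarm mode):

    # The environment lives in version control as a Dockerfile; a change is
    # just a new image build followed by a rolling service update.
    docker build -t registry.example.com/myapp:2.3 .
    docker push registry.example.com/myapp:2.3
    docker service update --image registry.example.com/myapp:2.3 myapp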
Developers can take advantage of Docker with the ability to recreate dev/staging/production environments on their desktops. No need to pollute a dev machine with application servers and database installations, or even to pay the toll of VirtualBox to emulate all that.
Testing can be automated with a higher level of isolation. The Selenium team already has official Docker images. Creating an entire test hub should be a walk in the park with those puppies.
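Something like this, for instance, using the official selenium images (exact tags and wiring may differ between versions, so treat it as a sketch):

    # A Selenium hub with one Chrome node attached to it.
    docker run -d --name selenium-hub -p 4444:4444 selenium/hub
    docker run -d --link selenium-hub:hub selenium/node-chrome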
Building custom software, such as compiling Nginx with third-party modules, can also be done inside containers from specialized images. No need to keep an entire server dedicated to it, or to pollute your desktop with all the dependencies and build packages.
Overall, we've been having a great experience with Docker. We've migrated our staging environment to this new platform, and plan to migrate other parts of the infrastructure as well, eventually into production. So far, so good.
I hope you can convince enough people to take a better look at it. I'll admit, it took me some time to get used to the idea. But once you get it, it's actually worth it.
I'm looking for a good way to push code quickly and securely to my company's Windows web servers for release deployments.
I have a *nix background and in the past have always used rsync in conjunction with ssh for such tasks because it is quick, secure, and scriptable.
Right now our deployment process is very manual and requires logging into each server over remote desktop and using TortoiseHg to pull code from our main repo into the server (obviously this requires the webserver to have credentials into the central Hg repo). Needless to say, this process is very human, and accordingly error prone, not to mention tedious and slow. We also have several servers that we use internally for dev staging, QA team, etc.
What I would like to know is
1) Is there a straightforward way to do this with rsync & ssh (and Cygwin or PowerShell), roughly as sketched below?
2) What is the most accepted way to script pushing code to Windows boxes?
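To illustrate what I mean in (1), I'm picturing something roughly like this, assuming an SSH daemon (e.g. Cygwin's sshd) is running on each server (the hostnames, user and paths are made up):

    #!/bin/sh
    # deploy.sh -- push the built site to each web server over ssh.
    SERVERS="web01 web02 staging01"
    for host in $SERVERS; do
        rsync -az --delete --exclude='web.config' \
              ./build/ "deploy@$host:/cygdrive/c/inetpub/wwwroot/myapp/"
    done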
Thanks,
Jamie
Check out Jon Tørresdal's blog series on No-Click Web Deployment part 1 and part 2.
For testing our product's installer, I maintain a tree of virtual machine snapshots with different previous versions installed. It is a tedious task to do Windows Update, re-snapshot, delete parent snapshot on each VM.
Is there an automated solution for keeping a group of VMs up-to-date? I use VirtualBox but have access to VMware Workstation and would switch if maintenance would improve.
We keep a baseline of VMs in a library of sorts. There are about 20-odd (with mixtures of different versions of Java, DB2, WAS and so on) that the development and test teams can copy out for their own use.
The librarian (developer, doing this part-time) is responsible for keeping them up to date. What they'll do is copy one of the VMs every week or so, boot it and install all updates, then copy the updated VM back over the original. This means it's available for checking out except when the copy operation is being done. Additionally, the number of VMs that need to be updated is minimized by virtue of the fact that they're shared.
That's how we do our snapshots, by copying the directories, partly because it's easier to manage but mostly because we're too tight to buy the Workstation version :-) We use the Player instead.
It's mostly automated, since all the VMs grab their updates from our SUS server and we know when they have updates ready to go. The librarian is notified by a script which VMs need to be updated, and just has to run another script which copies the VM and starts up the copy.
Once the librarian is satisfied the copy is up to date, they shut it down and yet another script copies that VM back into the library and updates its status.
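Stripped of the notification bits, the scripts really just amount to something like this (the paths and VM names are placeholders; we copy directories because we're on the Player):

    #!/bin/sh
    # Check a baseline VM out of the library for updating.
    cp -r /vmlibrary/was7-db2 /scratch/was7-db2-updating

    # ...the librarian boots the copy in the Player, lets it pull updates
    # from the SUS server and shuts it down, then we publish it back:
    rm -rf /vmlibrary/was7-db2
    mv /scratch/was7-db2-updating /vmlibrary/was7-db2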
I don't know of an automated solution for all of your VMs, but I would recommend using Windows Server Update Services to keep track of the update status of every VM and provide a local Windows Update repository to speed up the updating process.
I suppose you could use a combination of WSUS and Group Policy to do these updates, setting up automatic update installation, and just turning on all of your VMs for a given period to make sure they all get the updates.
That doesn't solve the problem of managing snapshots etc., though. I wonder if VMware has an API...
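For what it's worth, VirtualBox exposes the snapshot operations on the command line, so at least that part of the shuffle can be scripted (the VM and snapshot names are placeholders):

    # Boot the VM headless so Windows Update can run, then shut it down.
    VBoxManage startvm "win7-base" --type headless
    # ...wait for updates to finish...
    VBoxManage controlvm "win7-base" acpipowerbutton

    # Re-snapshot and drop the old baseline.
    VBoxManage snapshot "win7-base" take "patched-latest"
    VBoxManage snapshot "win7-base" delete "patched-previous"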
Here's the problem. I use around three different machines for development. My partner is using two. We have to go through the same freaking setup procedure on all five machines to get to work.
Working with a php project here, so:
Install and configure PDT, a PHP debugger, and some version of XAMPP.
Then possibly install an SVN client, and any other tools.
Again, to each of the five machines.
What if, instead, we did all of this once, in a virtual machine that is set up with the same stack, and the same versions, as the production server? Then each of us could grab a copy of the VM image, run that image on each of the five machines, and do all of our development in that VM. Put Eclipse, Apache, MySQL, the works, all in that VM.
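To make it concrete, the one-time setup inside that shared VM would boil down to something like this (assuming a Debian/Ubuntu guest; the package names are from memory and may differ):

    # Done once, inside the shared development VM.
    sudo apt-get update
    sudo apt-get install -y apache2 mysql-server php5 php5-mysql \
                            php5-xdebug subversion
    # Eclipse PDT just gets unpacked into /opt and travels with the image.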
The only negative of this approach, and please correct me if it isn't the only one, is performance. Is it really that big of an issue, though? The slowest machine of the five is a Samsung NC10 powered by an Intel Atom 1.6 GHz processor.
Do you think this is possible and practically usable? Or am I crazy?
I use a VM for development (running on my laptop) and have never had performance problems. Another approach that you could take would be to image the drive in the state that you want. Use Acronis or Ghost to re-image each machine when you need to. Only takes about 5-10 minutes to restore an image on any modern PC.
I use a VM for all my "work" as it keeps it away from my "play". This setup allows me to use the office VPN without exposing my whole machine to the office environment (which I trust about as much as the internets ;-) ). Also, I don't have to worry about messing up my development environment by trying out games or other software. My work VM is currently running inside VirtualBox, but I have used VMware in the past. I have only noticed performance issues when using graphics-intensive programs like Webex or the Terminal Server Client.
It can certainly be done. What turns me off is the size of the VM image, which would normally be several GB. Having it on a network share means it can take longer to transfer than your current setup process takes. I guess an external hard drive would be the easiest way to move it around.
Performance wouldn't be an issue with any web development.
I have to ask why your current machines need to be "re-imaged" each time you sit down for work?
If you're using Windows you'll probably want to use SYSPREP on the master image so that the 'mini-setup' runs when you boot up the virtual machines for the first time.
Otherwise, from Windows' point of view, the machines have the exact same SID, hostname and other things - and running multiple machines with the same SID on the same network can cause tons of headaches. Even more so if you want them to communicate with each other.
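The generalize step itself is a single command run on the master image before you shut it down and copy it (a sketch; check the options for your Windows version):

    REM Run inside the master VM, then copy the image afterwards.
    C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown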
I've run WebSphere for zSeries on a VMware virtual machine with no problem, and WebSphere is more resource-intensive than any PHP stack. I find that having a multi-core machine, or at least hyper-threading, makes it run a lot faster.
With VMware, disk operations are slower. For PHP development I doubt it would be a problem, but you'd definitely notice it if you were compiling a large C++ project. There is also Sun's VirtualBox, which is free, and the latest version is rather nice (but I haven't looked at how slow disk operations are yet).
I am using that idea in practice. Virtual machines are generally great for development.
To run on multiple operating systems and multiple separate development environments.
Preserve older development environments for later support.
Can be easily backed up; when a hard drive crashes there's no need to start from the beginning.
Can be copied from one developer to another, so not everyone has to do the tedious installations and configurations.
Down sides are:
Virtual machines are slower, so you need more powerful computers than you would otherwise. I would recommend having at least 4 GB of RAM, but preferably more like 16, plus fast multi-core processors and fast hard drives.
When copying Windows virtual machines, each copy in use should have its own product key. When you make a copy, it needs to be activated with a new product key.
Have you thought about a software configuration manager like Ansible, Chef, or Puppet? With such software, automating these tasks is very easy! It can even create a fresh VM and then configure it.
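For example, with Ansible even simple ad-hoc commands already cover a lot of the setup described above (the "devvms" inventory group and package names are just for illustration):

    # Install the PHP stack on every development VM in the "devvms" group.
    ansible devvms -b -m apt -a "name=apache2 state=present"
    ansible devvms -b -m apt -a "name=subversion state=present"
    # A playbook checked into version control would be the natural next step.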