Why is it better to use Vagrant instead of sharing the VM's disk itself?

I was searching for the best way to distribute a development environment to my team and found Vagrant.
After some reading and testing, here is what I can say about it:
Pro:
Automates the process of creating a new VirtualBox VM (for a new user).
Con:
Nullifies the only pro I found by requiring the new user to learn a new tool.
The learning curve costs far more time than actually copying the VDI and configuring a new VirtualBox VM to use it.
(Copying isn't exactly quick either, but it still costs less than learning a new tool.)
I really don't understand why using Vagrant makes distributing a development environment so different from just creating a normal VirtualBox VM and sharing the disk.
Maybe I'm missing a point here; can anyone enlighten me?

I can only speak from my own experience: the learning curve is well worth it.
As a programmer, I love being able to set up separate isolated environments for development, testing, and deployment.
Sure, you can copy the VDI around and whatnot, but in my opinion it's easier to run a few command-line programs and have everything ready to go. It just seems dirty to me to copy VDIs or other kinds of images around.
Also, I can make packaged-up Vagrant boxes that I can send to my coworkers or distribute on the Internet.

There are a lot more advantages to Vagrant than just that:
100% reproducible environments, everywhere the same.
Can be stored in source control (git/svn/...) and versioned.
Saves disk space. This can be important on laptops: right now I have 92 Vagrantfiles on my system, not all of them instantiated. Imagine them all being full-blown 20 GB VMs...
Experimentation becomes trivial. A quick vagrant init precise64, vagrant up, try some stuff, vagrant destroy, vagrant up, retry some stuff, vagrant destroy - and done (sketched below).
It takes some discipline not to install required packages manually, or at least to update the shared Vagrantfile when you do, but once you have a workflow worked out, it's simply brilliant.
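A minimal sketch of that experiment loop, with precise64 being the old Ubuntu 12.04 base box:

    # create a Vagrantfile for the Ubuntu 12.04 base box
    vagrant init precise64

    # boot the VM and log in to try some stuff
    vagrant up
    vagrant ssh

    # wipe it and start over from a clean slate, as often as you like
    vagrant destroy -f
    vagrant up

    # the Vagrantfile is a small text file, so it versions like any code
    git add Vagrantfile
    git commit -m "Add development VM definition"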

I think the major advantage of Vagrant is that it lets you have one base package that can be re-configured for different purposes covering dev, testing, management, operations, etc., simply by changing the manifest/cookbook. It's just more convenient to share and start up.
Less major, but still nice, is that it lets you start over effortlessly. I know you can use snapshots in VirtualBox, but sometimes it becomes annoying to keep going back and forth between them. With Vagrant, you can start a box, test some things, and destroy it. Then you can start the same box again and test something else. You can even easily start the same VM twice, test slightly different things in each, and destroy them both.
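For example, a single Vagrantfile can define two machines built from the same base box but provisioned differently (a rough sketch; the box name and provisioning script paths are hypothetical placeholders):

    # write a Vagrantfile defining one base box, two machines
    cat > Vagrantfile <<'VAGRANTEOF'
    Vagrant.configure("2") do |config|
      config.vm.box = "precise64"

      # the everyday development machine
      config.vm.define "dev" do |dev|
        dev.vm.provision "shell", path: "provision/dev.sh"
      end

      # a throwaway machine provisioned like the test environment
      config.vm.define "test" do |test|
        test.vm.provision "shell", path: "provision/test.sh"
      end
    end
    VAGRANTEOF

    vagrant up          # boots both machines
    vagrant destroy -f  # throws them both away when you're done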
Also check out the answer here: https://superuser.com/questions/584100/why-should-i-use-vagrant-instead-of-just-virtualbox
EDIT: In my first answer, I was thinking of Packer, not Vagrant, my mistake.

We are discussing the same question in the office... right now, as it happens!
When I first got in touch with Vagrant I was sceptical, but the advantages I could see beyond personal opinion are:
you can start from a common configuration and automate many "forks" without snapshot chaining (which is a big pain when you need to delete a snapshot or two, I might say);
the process of cloning (packing) a machine is way faster than exporting an appliance (sketched below);
using Vagrant to set up a machine still lets you use the VirtualBox interface, so there's no problem building a starting point on Vagrant's features and letting teammates decide whether or not to use them;
as said, automation.
We are still running some tests... As for my own opinion, the main points for me are replication and the fact that packing and unpacking is faster than exporting.
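The packing step is a single command (a sketch; the box and file names are placeholders):

    # snapshot the current VM into a portable .box file
    vagrant package --output devenv.box

    # a teammate registers the box and boots it - no manual VirtualBox setup
    vagrant box add --name devenv devenv.box
    vagrant init devenv
    vagrant up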

Related

Needed: program for easily switching developer environments

This might be the wrong forum, but it's filled with so many smart people that someone might know a solution.
One of my customers has given me several assignments that require quite similar yet subtly different development setups (different versions of class libraries and such). The problem is that each time I need to switch between the projects, I have to do a lot of configuring to get the compilation to work without pulling in the wrong versions of the tools involved.
If I don't get it right, there can be a lot of cleanup afterwards.
It takes me at least an hour to switch projects. Often several.
Now, I realize the customer has an issue with a lot of branching in their setup, and they are working on that, but that's a long process.
So.. My question is. Is there some tool that allows me to take "snapshots" of working project-environments and switch between them?
I'm working on windows.
There are many options. One of them is virtualization: running an entire operating system on top of your operating system. It will emulate a fake hard drive, fake network card, etc. This will run an entire desktop 'in a box'.
In recent years containerization has become very popular: running a separate environment (file system, OS configuration) while leaving the heavy stuff (networking, hardware, etc.) to the host (in your case, Windows). This is geared towards running a single application 'in a box', though it is flexible.
In your case, since you need to encapsulate an entire development environment, it would seem to be best to go with a virtual machine.
A no-cost solution is VirtualBox. Paid alternatives such as VMware may offer better usability, performance, or features.
If your class libraries live in a specific location, you could containerize the compiler: you work on your PC, then build inside a 'box' with its own environment. Docker is often used for this.
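In practice that looks something like this (a sketch; the image tags and build script are hypothetical placeholders for whatever each project's toolchain needs):

    # mount the current project into a container holding the exact
    # compiler and class-library versions project A needs, and build there
    docker run --rm -v "$(pwd):/src" -w /src project-a-toolchain:1.0 ./build.sh

    # project B gets its own image; switching projects is just switching
    # the image tag, with no host-side reconfiguration
    docker run --rm -v "$(pwd):/src" -w /src project-b-toolchain:2.3 ./build.sh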
Depending on what language you are programming in, there may be a specific solution. Python has VirtualEnv, for example.

How to test a VB program which will run on a network

I am a self-taught programmer and have only delved into new areas of programming as the need arises. I have never done any network programming; everything I have written has been for a single computer. I have written a program for an old board game and it runs great. Now I want to try writing it to run for multiple players across a local network. My idea is to constantly check a specified folder/file for changes.
But how do you test this without building/compiling the program and installing it on another computer every time you make a change? I have tried searching various forms of what I have as the title here, but all that comes up is about testing network connections, socket programming (would that be easier, or even needed?), or FileSystemWatcher (which may be an option too if it runs on Windows 7 and 10). I find nothing about testing programs that actually access the network, or about simulating two copies of the program running. Any suggestions, links, etc. would be greatly appreciated.
I think you will be disappointed by the performance of a file-based network game unless reaction or refresh time is of little consequence for your "board" game. You may also need to work out potential concurrency issues (i.e., someone updating a file you've just read). If you have any desire to write other games in the future, you should be using sockets (most likely UDP, unless you have a good reason not to) to create a client-server system.
As to your question: yes, you should be able to test it. Just run a compiled exe and the source in VS debug mode at the same time, both accessing the same folder on your drive. If you go with the socket-based option, you would use your PC's loopback address, 127.0.0.1 (sometimes known as localhost), but the two different parts will need to communicate on different ports.
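You can also convince yourself that the loopback interface works without a second machine, e.g. with netcat (a sketch; flag syntax varies between netcat builds):

    # terminal 1: listen on the loopback address, port 5000 (the "server")
    nc -l 127.0.0.1 5000

    # terminal 2: connect to it (the "client"); anything typed on either
    # side appears on the other - two processes, one machine, real sockets
    nc 127.0.0.1 5000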

Best Practices for Development Backups

I am an intermediate-level developer (I think). I almost always work alone.
I have always just saved my code to my hard drive and then published it to my server. I almost always overwrite old code. If I make a big mistake, I get a backup restored from my web host. Obviously this can be a pain and cost time.
I know there must be a better way. I guess I could just save copies each time I change a file, but that seems like it could get confusing too if I have 1000 different versions of each file after every minor tweak.
What is the best solution? It seems Git-type services may be more hassle than they are worth in my situation.
You're working the wrong way...
Try a version control system; that kind of software manages versions and changes for you.
Read about how to implement an SVN solution; that will help you a lot.
Git, CodePlex, and other repository hosts are built on that kind of tool.
The problem, in steps:
Source control
Google has a full list of articles; if you use Git, this is my favorite:
http://nvie.com/posts/a-successful-git-branching-model/
and also see
www.bignerdranch.com/blog/you-need-source-code-control-now/
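If you do go with Git, the day-to-day loop for a solo developer is only a few commands (a minimal sketch; the remote URL is a placeholder for wherever you host your code):

    # one-time setup in your project folder
    git init
    git add .
    git commit -m "Initial import"

    # each time you finish a change
    git add -A
    git commit -m "Describe what changed"

    # push to an off-machine copy (GitHub, your own server, etc.)
    git remote add origin https://example.com/you/project.git
    git push -u origin master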
But source control isn't just a backup.
See
Is there a fundamental difference between backups and version control?
Backup your Source
You need a copy of your code on a backup server, site, or external device.
Git or Subversion is for source control, not for backup copies.
see "Backup Best Practices: Read This First!"
and set up a tool to do this work periodically.
Software Configuration Management
You need to set up a system for managing software change, step by step, starting from the users' needs and covering your whole working methodology.
see Redmine...

Software testing with version control

So, as a developer, you probably write a small amount of code and then test it to see whether it works before you move on to something else. This is because you don't want to write thousands of lines of code and find they don't work. Stating the obvious here. Myself and a few others (soon) are working on a PHP application, and I want to implement some form of version control, most likely Subversion since we all know how to use it, somewhat. My question is how to combine version control with the writing-then-testing process stated above.
My idea was to set each developer up with their own workstation including a web server, PHP, MySQL, etc., so they can check out the repo and then test on their own computer as they write. I'm really looking for some direction here. Currently we aren't using version control: there are only two developers, and we simply use a shared directory that's located on the web server. When we make changes, we can view them immediately on the web server. Any input on this? What's the best way to handle multiple developers during the development of an application?
There are a number of different ways to approach this:
1) Each developer has a whole web server stack on their machine, deploys to it, and tests there, then checks in working code.
2) There's a separate test/integration machine. Developers take turns deploying to that machine, do their testing, then check in working code.
3) You use branches in Subversion. Development happens on a branch, and it's OK to check in broken code on a branch. There may be a branch for each developer, a branch for each feature, or whatever. The developer checks code in on the branch, checks it out on the separate test machine, tests, fixes, then merges the working code into the trunk (sketched after this list).
Which one is right depends on how big your team is and how complex your server setup is. Choose one that makes sense for your team.
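For option 3, the Subversion side is just a handful of commands (a sketch; the repository URL and branch name are placeholders):

    # create a private branch (a cheap server-side copy in svn)
    svn copy http://svn.example.com/repo/trunk \
             http://svn.example.com/repo/branches/my-feature \
             -m "Branch for my-feature work"

    # check it out and commit freely - broken code on a branch is fine
    svn checkout http://svn.example.com/repo/branches/my-feature
    cd my-feature
    svn commit -m "Work in progress"

    # once it tests clean on the test machine, merge it back into trunk
    cd ../trunk-working-copy
    svn merge ^/branches/my-feature
    svn commit -m "Merge my-feature into trunk"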
You'll want to start thinking about a build server, using a piece of software like CruiseControl, which monitors for source-code changes and is then able to build, run tests, and deploy your code (in a manner as close to live as possible!).
I'd highly recommend integrating a build server as soon as possible, otherwise you'll find out down the line that automating something that somebody has been doing manually for the last 5 years is somewhat difficult :)
You might also find that each dev ends up with their own deployment method and custom environment; it'd be far better to centralise this in one place and have the other devs use those same scripts if they want to run the deployment process locally.
Configuration management is something you want to get right to begin with!
One important thing with CI: You only want to push working code to the central repository. This requires a private repository for each developer, but has the advantage that you never break trunk.
Git and Mercurial are the most obvious tools, and can work with svn as a central repository.
To keep merge conflicts manageable and avoid pushing broken code to central, there's one trick: always pull/merge from central first, and frequently, prior to pushing:
http://martinfowler.com/bliki/FeatureBranch.html
And have a look at our sponsor: http://hginit.com for examples of workflows with multiple developers.
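With git as the local frontend and Subversion as the central repository, that pull-before-push habit looks roughly like this (a sketch using git-svn; the URL is a placeholder):

    # one-time: clone the central svn repository as a local git repo
    git svn clone -s http://svn.example.com/repo my-project
    cd my-project

    # commit locally as often as you like - nothing hits central yet
    git add -A
    git commit -m "Half-finished work, safe in my private repo"

    # before publishing: pull in everyone else's changes first...
    git svn rebase

    # ...then, once tests pass, push the working code to central
    git svn dcommit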

How do you automate some routine actions for improving productivity? [closed]

Every morning, after logging into your machine, you do a variety of routine tasks.
The list can include things like opening/checking your email client and RSS reader, launching Visual Studio, running some business apps, typing some replies, getting the latest version from source control, compiling, connecting to a different domain, etc. To a large extent, we can automate these using scripting solutions like AutoIt, nightly jobs, and so on.
I would love to hear from you geeks out there: what things have you found yourself doing repeatedly, and how did you solve it by automating? Any cool tips?
I use Linux. I have a bunch of scripts that do anything I want. Typically I write a script whenever a "block" of work can be reused in the future. For example, simple refactorings, deployments, etc...
Over time I started to combine these blocks, hence getting ever more efficient.
Regarding the "load stuff at startup", under Linux that comes out of the box (you can "save your current session" when you log out or turn off the computer).
On Windows, my suggestion is to use programs that can be automated via the command line.
A favorite approach is to leave the computer on at night or, better, if it's a laptop, to put it to sleep. Running a web-browsing virtual machine in VMware or similar also works: you can set the VM to start along with the machine and save its state on shutdown, so your web pages and email client stay open. This works for development too, if you're doing scripting or something similar where the VM's performance hit on large compiles won't negate the benefits.
SlickRun is very handy for this: just a few keys to navigate to anything common, and a very small footprint. With input variables and file-path recognition built in, I can quickly remote-desktop to any machine, search anything, and pull up whatever's needed.
On OS X, I have an Applescript that I run at the beginning of the day. It sets an away message on IM, hides or quits programs that would distract me, gets new mail, and so forth. I also plug in my USB backup disk, so when I'm going home, another script ejects it and quits some programs. When the script is done, so am I.
I invoke these scripts with key combos using Quicksilver.
If you don't have a Mac, by the way, Quicksilver and Applescript are probably the #1 and #2 reasons to switch. Between the two of them, you can tell your computer to do practically anything you want in very short order.
Use a good app launcher such as Quicksilver or Launchy to cut down on the time it takes to perform simple tasks. They're usually not scriptable, but they do let you do each step faster.
Writing shell scripts (AppleScript, Bash, PowerShell, etc.) is a great way to automate most mundane tasks (assuming your apps are scriptable), and to pick up a new language along the way. As you venture further into this practice, you'll find yourself more and more annoyed at the apps you use that aren't scriptable, to the point where it starts to affect your choice of apps ;-)
Also, consider a cron job, Windows scheduled task, or similar OS X analog to automatically run certain tasks at certain times of day/week/month/year. You can use this for anything from the "workday morning" scripts mentioned previously, to reminding you of your wife's birthday and anniversary every year. There's some more info here for *NIX systems, or here for Windows boxes.
Happy automation!
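For example, a morning routine wired into cron might look like this (a minimal sketch; the script path and addresses are placeholders):

    # crontab -e, then add:

    # run a "start my workday" script at 8:50 every weekday (Mon-Fri)
    50 8 * * 1-5  $HOME/bin/workday-morning.sh

    # yearly reminder: at 9:00 on June 12th, mail yourself a note
    0 9 12 6 *  echo "Anniversary today!" | mail -s "Reminder" you@example.com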
I have a hard time wrapping my head around AppleScript, but since Macs run Bash scripts just fine, I just use those instead. I've got a development server on my Mac, so I've got a script I can run to create a new site directory, create a new virtual host in Apache, add a new domain to my /etc/hosts file, and so on.
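A stripped-down version of that kind of script might look like this (a sketch; the paths and Apache config layout are assumptions that vary from setup to setup):

    #!/bin/bash
    # usage: ./newsite.sh mysite.local
    SITE="$1"
    DOCROOT="$HOME/Sites/$SITE"

    # 1. create the site directory
    mkdir -p "$DOCROOT"

    # 2. append a virtual host to Apache's config (path varies by install)
    {
      echo "<VirtualHost *:80>"
      echo "    ServerName $SITE"
      echo "    DocumentRoot \"$DOCROOT\""
      echo "</VirtualHost>"
    } | sudo tee -a /etc/apache2/extra/httpd-vhosts.conf >/dev/null

    # 3. point the hostname at this machine
    echo "127.0.0.1  $SITE" | sudo tee -a /etc/hosts >/dev/null

    # 4. reload Apache so the new vhost is live
    sudo apachectl graceful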
It's especially cool to integrate Bash (or probably AppleScript, although I don't know how) with Growl. That way, you can put a nice message up on the screen, complete with a PNG icon. This is more useful for things your scripts do during the day, though.
I do most of my programming work on a development server at work, so in the evening I simply detach my screen session and re-attach it in the morning; it takes just a few seconds until I'm exactly where I left off the day before.
I have some macros defined in mutt to clean up my inbox (archiving commit mails, etc.), I have a script that mounts some directories from the development server on my notebook via sshfs (it works without interaction, using public keys), and after that all I have to do is start up a browser and get a coffee. :)
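The screen/sshfs part of that routine is just a few commands (a sketch; the hostname and paths are placeholders):

    # evening: detach the screen session - everything keeps running
    # (Ctrl-a d from inside the session, or from another shell:)
    screen -d

    # morning: reattach and pick up exactly where you left off
    screen -r

    # mount a remote directory locally over ssh (no password prompt
    # if public-key authentication is set up)
    mkdir -p ~/mnt/project
    sshfs devserver:/home/me/project ~/mnt/project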