My development team is looking to implement IPv6 on an embedded platform. One of the primary issues we're encountering at this stage is creating our test environment. Currently the only verification suite we have found is the one created by TAHI.org. Running through an initial setup of this suite, it appears to be only for *NIX-based implementations.
Is there an available solution for creating a test environment, other than this suite or going to UNH?
The TAHI tests, while they require a FreeBSD box to run, do not require that the target be UNIX based. In fact, we ran them against a VxWorks-based embedded device.
If memory serves, there are several "remote" scripts that you must implement, however, to (for example) reboot your target device so that compliance can be tested in cases where the IPv6 interface must go down and come back up.
UNH uses essentially the same tests as the TAHI suite. Running the TAHI tests is therefore highly recommended.
I am very, very new to Docker. Our team has had a very nice deployment lineup where we have different CI engines for different projects, including Jenkins and TeamCity.
Developers usually check in, CI takes over and deploys, and it's perfectly ready for the test team to test. I always thought this to be a perfect model. Of course, some parts of our implementation have their flaws, but it worked very well for what we wanted.
Now, our DevOps team is introducing Docker, where the test team gets a Docker image from the Docker registry every time we run a build from TeamCity. While it sounds really fancy, I am still failing to understand the benefit of it.
After my research, my conclusion was that Docker can be a good lightweight replacement for VMs. But that is only the case if you are using VMs, and we are not using any. I just do not understand what the real value is here. Also, while searching I found a relatively good link on Docker:
https://www.ctl.io/developers/blog/post/what-is-docker-and-when-to-use-it/
They discuss when you should use Docker, and one of the points says:
Use Docker whenever your app needs to go through multiple phases of development (dev/test/qa/prod, try Drone or Shippable, both do Docker CI/CD)
OK. However, they do not further elaborate on why Docker is useful when my app has to go through multiple phases.
And how is it extremely helpful over a regular dev/test setup when the existing setup is already working smoothly?
First, you are right to compare it to VMs, in that it is similar to a VM. However, Docker is incredibly lightweight. This property is the one that surprised me most in the beginning. As opposed to virtual machines, containers share the host's resources much more efficiently. Virtual machines are fully isolated; containers can run simultaneously on a host machine with very little overhead. You can configure containers to be able to talk to each other (via volume or port bindings).
Furthermore, in my team, Docker brings the following benefits:
Our application consists of one big application and several microservices, but we want to release everything as one package with the inter-dependencies between the applications resolved, which eliminates problems with figuring out which versions of the application and microservices should be deployed together (compatibility), etc. That is, the image contains everything you need, and you can bring all the applications, or just one at a time, up or down using docker-compose. You do not need to deploy; you simply pull the image and fire up containers (see the sketch after this list). If you wish to stop one of the microservices, it can be done without affecting the others.
Developers in the team can run the very same image on a local machine, for example to troubleshoot a problem that occurred in production, which means troubleshooting can be done in the same environment as production. This brings environment standardization and no more "but it works on my machine" talk.
Another benefit it brings us is the following: we build a Docker image, run our tests against it, and push it to the registry only once all these phases succeed, which translates into great portability.
The ability to version-control the containers. You can easily inspect the differences between the current version and previous versions, and if you wish to roll back, that is done smoothly.
Isolating and securing applications. All containers are isolated and you can easily control what goes in and out.
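To make the "pull the image and fire up containers" point concrete, here is a minimal sketch using the Docker SDK for Python. The registry, image names, and tags are hypothetical; in practice docker-compose drives the same workflow, this just shows the shape of it.

```python
import docker  # Docker SDK for Python ("docker" package)

client = docker.from_env()

# Pull the versioned images that CI built and pushed (names/tags are made up).
client.images.pull("registry.example.com/bigapp", tag="1.4.0")
client.images.pull("registry.example.com/report-service", tag="1.4.0")

# No installation step: just fire up containers from the images.
app = client.containers.run("registry.example.com/bigapp:1.4.0",
                            detach=True, name="bigapp")
reports = client.containers.run("registry.example.com/report-service:1.4.0",
                                detach=True, name="report-service")

# Stop one microservice without affecting the main application.
reports.stop()
```

docker-compose expresses the same thing declaratively in a single file, which is what we actually use to bring the whole package up or down.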
It took me a year before I got used to the idea, but now it seems simple enough.
I think part of that comes from the fact that people keep calling Docker a "virtual machine", which is not accurate. That's really just a nickname for what's happening behind the scenes. In a lot of ways, Docker will NOT replace a complete virtualization solution such as VMware. It does, however, bring forth a new way of thinking about infrastructure, one that many people have a difficult time wrapping their heads around.
You can start asking yourself: What makes a Linux distribution unique?
Aside from the kernel, everything else is just a "standard way" of organizing binaries, libraries, runtime and configuration files. You need your binaries in /bin, your libs in /lib, your configuration in /etc. User installations get placed under /usr...
Most distributions keep the main structure from the Unix legacy and add their own quirks. Each one will have its own way to manage and distribute packages. Each will maintain its own versions of libraries, drivers, etc.
The key ingredient is the kernel. That's something they all have in common. Nowadays, recent builds of the Linux kernel are compatible with pretty much all major distributions available. So, aside from /boot, most of everything else is just a matter of having the right files in the right place with the right permissions.
Now, imagine you take that whole distribution bundle (except the kernel) and place it in another directory of your running OS. Taking advantage of the same kernel you are already running, you isolate a new process so that it "thinks" that / is now that directory. Bingo! This process now "thinks" it's running all by itself on another operating system.
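A bare-bones sketch of that idea in Python, using nothing but chroot (the rootfs path is hypothetical, and this requires root; real containers add namespaces, cgroups, and more on top):

```python
import os

def run_isolated(new_root, command):
    """Run `command` in a child process that sees `new_root` as '/'."""
    pid = os.fork()
    if pid == 0:                       # child process
        os.chroot(new_root)            # from now on, '/' is new_root here
        os.chdir("/")
        os.execv(command[0], command)  # replace the child with the target program
    else:                              # parent just waits for the child
        os.waitpid(pid, 0)

# Hypothetical example: an extracted distro filesystem living in /srv/rootfs
# run_isolated("/srv/rootfs", ["/bin/sh"])
```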
Docker builds on top of Linux Containers, which allow us to do exactly that, but in a friendlier and easier way. Don't think of it as a virtual machine. Think of it as process isolation. The running kernel shares the machine's resources with this process while keeping it isolated from the rest of the system. It's like jails on steroids.
That was a broad simplification. But, given the concept, think about the implications of this idea.
You can have, on the same host, multiple processes with completely different environments that might otherwise conflict with each other. One may be a legacy binary that needs old libraries in place (legacy systems never die). Another may be the most recent build of a bleeding-edge technology. Sharing the same kernel is efficient and valuable resource management.
The most value I found comes from managing the infrastructure. Once you install Docker on the hosts, configure a swarm, and define a way of deploying containers, you mostly forget about the hosts. Adding users, installing packages, customizing, editing configuration files... All that becomes a development task on your desktop. There's an incentive to script more, to automate more. To keep your hands away from the physical or virtual machines, unless absolutely necessary.
Gone are the days when someone changed some obscure setting on the server to work around some weird application behavior, forgot to tell anyone about it, and took a vacation. Changes to the environment can be committed to version control, tracked, and improved by everyone on the team. If your datacenter goes through a disaster, recreating the whole environment is a matter of rebuilding images and redeploying containers. Your infrastructure becomes consistent and reproducible, while keeping the doors open to a wide variety of operating systems and customized configurations for each application.
Developers can take advantage of Docker by recreating dev/staging/production environments on their desktops. No need to pollute a dev machine with application servers and database installations, or even pay the toll of VirtualBox to emulate all that.
Testing can be automated with a higher level of isolation. The Selenium team already has official Docker images. Creating an entire test hub should be a walk in the park with those puppies.
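For instance, with the official selenium/hub and browser-node images running somewhere (a Docker host or a compose file), a test only needs to point a remote WebDriver at the hub. A minimal sketch with the Selenium 4 Python bindings (the hub address and target URL are assumptions):

```python
from selenium import webdriver

# Assumes a Selenium Grid hub container (official selenium/hub image) plus at
# least one browser-node container are already up and reachable on port 4444.
options = webdriver.ChromeOptions()
driver = webdriver.Remote(
    command_executor="http://localhost:4444/wd/hub",
    options=options,
)
driver.get("https://example.com")  # placeholder URL for the app under test
print(driver.title)
driver.quit()
```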
Building custom software, such as compiling Nginx with third-party modules, can also be done inside containers from specialized images. No need to keep an entire server dedicated to it, or to pollute your desktop with all the dependencies and build packages.
Overall, we've been having a great experience with Docker. We've migrated our staging environment to this new platform, and plan to migrate other parts of the infrastructure as well, eventually into production. So far, so good.
I hope you can convince enough people to take a better look at it. I'll admit, it took me some time to get used to the idea. But once you get it, it's actually worth it.
We are looking for automated testing software for our web application. We need to come up with a solution or software with which our non-IT staff, as well as the developers, could write test cases.
For example, I've run through some of them, such as SmartBear, National Instruments, and IBM. Most of these are MS Windows based or target commercial Linux distros, which removes them from our list since we are all Debian based.
Any recommendation or guideline would be much appreciated.
P.S. We don't have any budget limit!
You're going to have a hard time getting tooling for non-technical testers to build test cases with if you limit yourselves to Debian for developing and running the tests. There's no reason you couldn't have a few Windows systems to manage your test suites from; those would run against your web site just fine, regardless of what stack it's hosted on. That would open you up to the tools you mentioned (and Telerik's Test Studio, the tool I help promote).
Those Windows systems could easily be run via whatever virtualization host you prefer, so you wouldn't even need physical systems to deal with that. You could easily share the same source control repository as your devs, too, since nearly every decent SCM has Windows clients.
If you're unwilling to consider having a few Windows boxes around for your testing, then you'll need to look at getting all your testers proficient in APIs and frameworks like WebDriver and Robot Framework. The Pages gem from Jeff Morgan (@chzy) in Ruby would be another option, as would Adam Goucher's Saunter (in Python).
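Those libraries mostly encourage a page-object style, where developers wrap the raw WebDriver calls so that the tests themselves read at a level less technical testers can follow. A rough sketch with the plain Selenium Python bindings (the URL and element IDs are made up):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """Wraps the raw WebDriver calls behind readable, tester-friendly methods."""

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get("https://app.example.com/login")  # hypothetical URL
        return self

    def log_in(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()
        return self

    def error_message(self):
        return self.driver.find_element(By.CSS_SELECTOR, ".error").text

# A test then reads almost like plain English:
driver = webdriver.Firefox()
page = LoginPage(driver).open().log_in("guest", "wrong-password")
assert "Invalid credentials" in page.error_message()
driver.quit()
```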
I heard that the Minecraft server is very leaky and can consume a lot of resources very quickly. People say to use a virtual machine, which is all well and good. I'm making an application to automate server setup, and I'd like my whole application (including Minecraft) to run in an ultra-basic, automatically set up VM (or something similar). I've heard of MineOS, but I'm not sure whether it can be set up very quickly. The VM will be so basic it won't even have a UI. I'm using a Mac, and I'm not planning to distribute the server WITH the application, but to have it downloaded from the Minecraft site, unmodified.
I want it to be a one-click-done solution for the end user: they won't have to worry about the Minecraft server gobbling up resources because it will be in a controllable virtual machine.
Distributing the Minecraft server (Notch's property) could be an issue, but if anyone knows about that I'd be happy to hear.
If you intend for the server to be fully configured, so that your user only has to download and 'open' it, what you're seeking is known as an 'appliance'. VirtualBox supports the open standard for such appliances (OVF/OVA), allowing a single file to be distributed that contains all the virtualized hardware info as well as the OS/filesystem. A number of other formats exist, such as Turnkey.
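Exporting and importing such an appliance is a one-command affair, so it is easy to drive from a setup script. A sketch wrapping the VirtualBox CLI from Python (the VM name and file names are placeholders):

```python
import subprocess

# On your build machine: package the fully configured VM as a single .ova file.
subprocess.run(
    ["VBoxManage", "export", "MinecraftServer", "--output", "MinecraftServer.ova"],
    check=True,
)

# On the end user's machine (or from your setup app): recreate the whole VM,
# virtual hardware included, from that one file.
subprocess.run(
    ["VBoxManage", "import", "MinecraftServer.ova"],
    check=True,
)
```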
In all likelihood, I would find MineOS CRUX to be perfectly suited for this sort of one-click-done setup, since the OS was designed for pretty much exactly what you're trying to do... only without the configure-the-hardware-for-the-user part (it uses an ISO and an installer, the process you would automate for the end user).
That said, this distribution has never at any point packaged Minecraft files, as clearly stated: "this Linux distro does not contain ANY Minecraft files. The scripts are, however, designed to download/update files directly from the source: http://minecraft.net"
Hope this answers all the concerns, despite being an old thread.
Practical uses of virtualization in software development are about as diverse as the techniques to achieve it.
Whether running your favorite editor in a virtual machine, or using a system of containers to host various services, which use cases have proven worth the effort and boosted your productivity, and which ones were a waste of time?
I'll edit my question to provide a summary of the answers given here.
It'd also be interesting to read about the virtualization paradigms employed, as they have gotten quite numerous over the years.
Edit: I'd be particularly interested in hearing about how people virtualize "services" required during development, over the more obvious system-virtualization scenarios mentioned so far, hence the title edit.
Summary of answers:
Development Environment
Allows encapsulation of a particular technology stack, particularly useful for build systems
Testing
Easy switching of OS-specific contexts
Easy mocking of networked workstations in an n-tier application context
We deploy our application into virtual instances at our host (Amazon EC2). It's amazing how easy that makes it to manage our test, QA and production environments.
Version upgrade? Just fire up a few new virtual servers, install the software to be tested/QA'd/used in production, verify the deployment went well, and throw away the old instances (there's a scripted sketch of this below).
Need more capacity? Fire up new virtual servers and deploy the software.
Peak usage over? Just dispose of no-longer-needed virtual servers.
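Scripted against EC2, those steps come down to a couple of API calls. A rough sketch with boto3; the region, AMI ID, instance type, and instance IDs are all placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is a placeholder

# Fire up fresh instances for the new version (AMI and type are placeholders).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.medium",
    MinCount=2,
    MaxCount=2,
)
new_ids = [inst["InstanceId"] for inst in response["Instances"]]

# ... install the software on new_ids and verify the deployment here ...

# Then throw away the old instances (IDs are placeholders).
ec2.terminate_instances(InstanceIds=["i-0aaaaaaaaaaaaaaaa", "i-0bbbbbbbbbbbbbbbb"])
```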
Virtualization is used mainly for various server uses where I work:
Web servers - If we create a new non-production environment, the servers for it tend to be virtual ones so there is a virtual dev server, virtual test server, etc.
Version control and QA applications - Quality Center and SVN are run on virtual servers. The SVN box also runs CC.Net for our CI here.
There may be other uses but those seem to be the big ones at the moment.
We're testing the way our application behaves on a new machine after every development iteration, by installing it onto multiple Windows virtual machines and testing the functionality. This way, we can avoid re-installing the operating system and we're able to test more often.
We needed to test the setup of a collaborative network application in which data produced on some of the nodes was shared amongst cooperating nodes on the network in a setup with ~30 machines, which was logistically (and otherwise) prohibitive to deploy and set up. The test runs could be long, up to 48 hours in some cases. It was also tedious to deploy changes based on the results of our tests because we'd have to go around to each workstation and make the appropriate changes, which was a manual and error-prone process involving several tired developers.
One approach we used with some success was to deploy stripped-down virtual machines containing the software to be tested to various people's PCs and run the software in a simulated data-production/sharing mode on those PCs as a background task in the virtual machine. They could continue working on their day-to-day tasks (which largely consisted of producing documentation, writing email, and/or surfing the web, as near as I could tell) while we could make more productive use of the spare CPU cycles without "harming" their PC configuration. Deployment (and re-deployment) of the software was simplified, since we could essentially just update one image and re-use it on all the PCs. This wasn't the entirety of our testing, but it did make that particular aspect a lot easier.
We put the development environments for older versions of the software in virtual machines. This is particularly useful for Delphi development, as not only do we use different units, but different versions of components. Using the VMs makes managing this much easier, and we can be sure that any updated exes or dlls we issue for older versions of our system are built against the right stuff. We don't waste time changing our compiler setups to point at the right shares, or de-installing and re-installing components. That's good for productivity.
It also means we don't have to keep an old dev machine set up and hanging around just-in-case. Dev machines can be re-purposed as test machines, and it's no longer a disaster if a critical old dev machine expires in a cloud of bits.
What are the key use cases for the use of virtualization -- that is, running one or more "virtual PCs" using software such as VMware and Microsoft Virtual PC -- for software development?
Also -- are there other instances/uses of virtualization that aren't covered by my definition above (use of a tool like MS Virtual PC or VMware), and that are useful to developers?
My impetus for asking is this StackOverflow comment by Metro Smurf asserting "You'll wonder how you ever developed without it!", regarding use of virtualization.
(Please include just one use case per response. Thanks!)
Application testing in multiple environments is one obvious use of virtualization that I'm aware of. Testing your application on other operating systems (without requiring additional physical computers to do so), as well as testing that involves software that generally only allows you to install a single version on a given machine (such as the Internet Explorer browser; running both IE6 and IE7 on the same machine is not an officially supported configuration), are good candidates for virtual machine usage.
If your build server is running in a VM, you can take a snapshot of it for every software release in order to be 100% sure that you can recreate the build environment (in case you want to make patches to old releases, for example).
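Taking and restoring those snapshots is easy to script as part of the release process. A minimal sketch assuming a VirtualBox host and a VM named "build-server" (both assumptions; VMware and other hypervisors have equivalent CLI commands):

```python
import subprocess

def snapshot_build_vm(release):
    """Take a named snapshot of the build VM right after a release is tagged."""
    subprocess.run(
        ["VBoxManage", "snapshot", "build-server", "take", f"release-{release}"],
        check=True,
    )

def restore_build_vm(release):
    """Roll the build VM back to the exact environment used for that release."""
    subprocess.run(
        ["VBoxManage", "snapshot", "build-server", "restore", f"release-{release}"],
        check=True,
    )

# e.g. snapshot_build_vm("2.4.1") from the release pipeline,
# and restore_build_vm("2.4.1") when an old release needs a patch.
```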
If you set up snapshots of your development environment (and back them up), it can be very easy to resume productivity if your computer breaks down. When your machine breaks down right before a release and you can resume immediately with all your tools installed and configured, it can be a lifesaver.
The simplest case which applies to my current situation is that we have a complex client-server environment and with virtualization every developer can very quickly get a baseline set of operating systems to deploy their local build to and verify end to end functionality.
Locally you have your dev box, and N client boxes which get re-initialized as fresh OSes each time you want to try a build. Essentially it's the test environment equivalent of a 'make clean' where even the client workstation gets replaced with a new OS.
Quickly distributing environments between team members is a very nice use case for virtualization, especially if you have a lot of different components, tools, etc. This can save you a ton of time with new hires, contractors, or other individuals who need an environment quickly.
Many presenters use a VM for presentations: it allows them to revert immediately to reset the presentation for the next day, to transfer all presentation materials quickly between computers, and to avoid showing attendees their messy My Documents folder.
Using virtualization for sales activities is also a great use case. You can take a snapshot at a particular time that you can save as your demo baseline. Then once you run through the demonstration and change the data, etc. you can restore back to your previous baseline for future demonstrations. You can also capture multiple baselines and pick and choose which baseline best fits the upcoming demo.
Test environments. If you have more than one setup that a system needs to be targeted at (e.g. Windows and Linux, XP and Vista), then a machine with lots of RAM and VMware (or one of the others) is a good way to manage the environments.
Another is developing on one system and targeting another. For example, at one point I did some J2EE work on a workstation running Linux where the target client was IE 5.5. A VM with Windows 2000 and IE 5.5 let me test the application.
Reasons I use virtual machines for development.
Isolate different development environments.
Testing environments.
Easy recovery due to computer hardware failure/upgrade.
Ability to "roll-back" changes to your development environment if something corrupts it.
Currently, I am using VirtualBox for my VM setup. I used to use Virtual PC, but I REALLY hated not having any kind of snapshot feature (like VMware and VirtualBox have).
We develop software for use in our SaaS application, our production environment has a large number of servers and their software environment needs to be absolutely predictable; we can't have ANYTHING installed extra, or absent from our development machines.
Moreover, our application requires a number of different server types in order to function properly (at least 7 last time I counted); mostly they can't be installed on the same (virtual) machine - at least, not without violating the "same software as production" requirement.
In order to have a consistent environment, it's necessary to use VMs. I don't know how anyone ever manages without them.
Snapshots and rollbacks are nice too, but I use them only occasionally (really useful during installation / upgrade tests).
Suppose you're developing a new version of your software, and checking that the upgrade from the previous version works correctly... how long does it take to do a test cycle without being able to rollback the box? Do you have to reinstall the OS then the old version? Can you guarantee that the uninstall really uninstalls everything?
Being able to test/retest your deployment process is a huge savings.
Developing Add-Ins for different versions of Microsoft Office (using Visual Studio Tools for Office).
My main work machine has Office 2007. When I work with Add-Ins for Office 2003 I use a virtual machine with Visual Studio and Office 2003.
I'm surprised that nobody has mentioned the VMware record/replay feature (awesome video demo), which is great for debugging.
I have a headless server running ESXi which runs various machines for building installers (so I don't have to give up processing power on my desktop), automated testing (the server is faster than any desktop), and various test environments (about 20 different configurations) so that the support team can easily jump onto a configuration that closely matches a customer's system.
When you have one really beefy server running VMs that can be shared between the support, test, and dev teams, you get huge cost savings. In all, we're running ~25 VMs on ESXi (dual quad-core Xeon 2.5 GHz + 8 GB RAM) shared between 5-10 people. Some of the developers use Virtual PC, and I use VMware Workstation on my desktop. All of the Mac users here use VMware Fusion as well.
I am surprised that no one has mentioned the benefit of increased security by isolating, for example, the database server and the web server in different VMs.
Some server applications can use VMs too: when one VM is not heavily used, the host can allocate its resources to other VMs.
Some sort of test environment: if you are working with malware (either writing it or developing an antidote against it), it is not wise to use the real OS. The only possible disadvantage is that viruses can detect that they are running under virtualization. :( One of the ways they can do this is that VM engines emulate only a finite set of hardware.