I have created a multi-job project with a few sub-jobs (each one runs one of my test suites). In the main job I clone my repo from Git and store it in the Jenkins workspace on a Linux machine; all my sub-jobs run on Windows nodes. Is there any way to share the Linux machine's Jenkins workspace with, or make it accessible to, the jobs running on the Windows nodes? Please suggest any other way to achieve this as well.
Is there a Vagrant box, or any other kind of VM, that simulates Open Build Service environments?
I'd like to make sure my package builds fine locally before sending it to the build system. The problem is that my local environments often have more stuff installed, or different versions, than the build environment.
I think having a local VM to simulate those environments would be ideal, but I couldn't find one.
Disclaimer: I don't use OBS nor have I tried this myself.
OBS has an appliance that can likely be run in Vagrant/VirtualBox; see:
https://en.opensuse.org/openSUSE:Build_Service_Appliance
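Same disclaimer applies, but if you want to drive that appliance from the command line, importing and booting it in VirtualBox might look roughly like this (the .ova filename is a placeholder for whatever image the page above currently offers):

# Import the downloaded appliance into VirtualBox
VBoxManage import obs-appliance.ova --vsys 0 --vmname obs-appliance
# Boot it headless, then build/test against it instead of your local setup
VBoxManage startvm obs-appliance --type headless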
I'm trying to set up Jenkins to run tests on a virtual machine, but I'm not too sure how to proceed.
What I'd like is for Jenkins to build the environment on the VM and then execute the test scripts in that environment. After the tests have passed/failed, I'd then like Jenkins to clean the database and pull down the virtual environment.
Server box - Windows 7
Virtual machine - VMware
So what I'm looking for is some information or tutorials on how to implement the above. It would also be helpful if you could recommend which Jenkins plugins I can use, or, if you want to go above and beyond, outline the steps needed to achieve it.
Any help would be appreciated.
I'm doing just that in my environment using the vSphere Cloud Plugin. Here's a basic step-by-step guide:
Install the plugin
Configure your ESX/ESXi server as a new "vSphere Cloud"
Create a new Jenkins node, of type "Slave virtual computer running under vSphere Cloud" (which becomes available after installing the plugin).
When configuring the new node, optionally specify a snapshot name. This will revert the VM to this snapshot when the node launches.
Use the node in a pipeline script: node("node-name-or-label") { ...your code here... }
I use the method above with about 10 Windows nodes, reverting each to a "Clean" snapshot to start each build with a known state.
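To make the last step concrete, here's a minimal pipeline sketch (the node label and the test command are placeholders, not anything the plugin prescribes):

node("windows-vsphere") {
    stage("Checkout") {
        checkout scm        // the VM was just reverted to its snapshot, so fetch fresh sources
    }
    stage("Test") {
        bat "run_tests.bat" // placeholder for whatever runs your suite
    }
}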
Recently I've been dabbling with vagrant and docker. These are quite interesting tools, but I haven't been able to convince myself that it's the way to go quite yet on my OS X machine. Being an old Unix hat, I have to say that I like having a consolidated and sandboxed environment for development purposes.
I've seen a lot of chatter and a number of friends have been using vagrant with just stock vim for editing. I'm not really a fan of that approach and would probably prefer to use the vm provider's sharing mechanism OR, more likely, NFS.
Personally I'd like to be able to edit directly in TextMate, SublimeText, Emacs (on OS X), or even perhaps use RubyMine and its various IDE features, etc.
Is there any way to really get the workflow down so that such an environment will be essentially like working on a local environment without having to pull a lot of additional background strings to make things work out?
I suppose a few well placed scripts could go a long way, but I've not found any solid answers on really making this a seamless environment.
What actually worked for me was to use boot2docker, which makes it easy to install a lightweight virtual machine (with VirtualBox) that hosts your Docker daemon and images. The only thing you need in order to run docker commands is to run $(boot2docker shellinit) when you open a new Terminal.
If you also need to keep your files in an OS X folder and share them with a running Docker container, you need some additional setup, but once you've done it, you won't have to do it again.
Have a look here for a nice walkthrough on how to do it. The steps in short are:
Get a special boot2docker image that allows you to use shared folders for VirtualBox
Configure VirtualBox to share a folder:
VBoxManage sharedfolder add boot2docker-vm -name home -hostpath /Users
This will share your /Users folder with the boot2docker image that hosts docker.
From your Mac, share the folder you need with a folder in a Docker container, like:
docker run -it -v /Users/me/dev/my-project:/root/src:rw ubuntu /bin/bash
One small annoyance that I haven't found a way around is that you can no longer access your software through localhost, because it actually runs on the boot2docker instance. You have to run boot2docker ip and use that IP instead.
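For example (a quick sketch, with nginx standing in for your own software):

# localhost points at OS X itself; the containers live on the boot2docker VM
DOCKER_IP=$(boot2docker ip)
# Run something that publishes a port (nginx is just a placeholder)
docker run -d -p 8080:80 nginx
# Talk to the VM's IP, not localhost
curl "http://$DOCKER_IP:8080/"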
Hope that helps!
I've been using VMWare Player for ages now for both Windows development on my Linux box and (more importantly) automated testing of Windows applications.
Basically what I do is to:
have my development VM running; I build my code there and automatically transfer the install package to Linux.
when the package shows up on the Linux side, automatically copy a "known-state" snapshot VM to my test work area (I say snapshot, but it's really just a backup copy of the whole directory, not a real VMWare snapshot).
also automatically start the VM in the work area once it's copied.
the VM has a single, never-changing startup script which pulls a real startup script from Linux and runs it.
that startup script is responsible for pulling down the install package and doing a silent install.
it then runs a test suite and uploads the results back to Linux, where I have automated scripts which check them.
So, it's basically a one-button test process.
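To make that concrete, the copy-and-boot steps look roughly like this (paths and names changed to placeholders; vmrun comes with VMware's VIX tools, so check that your Player install has it):

# Refresh the work area from the known-state copy of the VM directory
GOLDEN=/vms/golden/win-test
WORK=/vms/work/win-test
rm -rf "$WORK"
cp -a "$GOLDEN" "$WORK"
# Boot the copied VM headlessly; its startup script takes over from here
vmrun -T player start "$WORK/win-test.vmx" nogui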
Now I notice more and more people seem to be using VirtualBox.
First off, I'd like to confirm that it can also do a similar thing, primarily being able to back up and restore whole VMs and to have shared folders between VirtualBox and Linux.
Secondly, and this is the crux: I'd like to know if that has any concrete advantages over VMWare Player, especially for the automated testing jobs.
I switched to VirtualBox because of one concrete advantage: I wasn't able to set up the network the way I wanted in Player. I don't remember whether it was bridging or port forwarding or something else that didn't work, but some part of the network setup needed the paid version, so I switched. Personally, I've found that both have good and bad sides, but I still use VirtualBox because of that network issue.
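On the first question: yes, VirtualBox can do both, and it's all scriptable through VBoxManage. A minimal sketch (the VM and share names here are placeholders):

# Take a known-good snapshot once, then restore it before each test run
VBoxManage snapshot "win-test" take "clean"
VBoxManage snapshot "win-test" restore "clean"
# Share a Linux directory with the guest
VBoxManage sharedfolder add "win-test" --name results --hostpath /srv/test-results
# Boot the VM from a script, and power it off when the run is done
VBoxManage startvm "win-test" --type headless
VBoxManage controlvm "win-test" poweroff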
I am trying to get Hudson to run my Ruby-based Selenium tests. I have installed the Selenium Grid plugin, but I don't want the RCs running as slaves in a Hudson cluster. The reason is that I don't want to waste the next six years of my life trying to configure each of my projects in various Windows environments.
Hudson currently pulls each project from GitHub and builds it just fine. With a regular Selenium Grid setup, I am able to edit the grid_configuration.yml file to represent the various environments I wish to test against, then pass environment variables to the rake task that runs the tests, i.e. which browser/platform to run on and the URL of the application under test -- usually a port on the hub machine running in a specific environment.
In this way, the machines the RCs run on don't need to know anything about the source code of my apps; they just need to have selenium-grid installed and be registered with the hub.
Is there a way of elegantly emulating this with Hudson?
Does Hudson have a concept of build agents? I don't know much about Hudson. We are using Anthill Pro at work and have set up an Anthill Pro agent. The source code is downloaded to the agent, and the agent is responsible for running the Maven goal that runs the tests. It works pretty well for us, as the RC machines are not part of the build environment; the tests are responsible for talking to the Selenium hub, getting new sessions, and doing the testing.
I hope this helps.
Cheers
Haroon
I chose not to use the plugin, in order to take advantage of the newer Grid version. I cloned a few VMs with a startup script that runs ant launch-remote-control from a shared drive they can all access. Hudson doesn't have, and doesn't need, any access to the Grid nodes, and they aren't slaves to Hudson. I also set up my Hudson server to launch the hub on machine startup. This setup allows me to run the grid normally, with or without Hudson.
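For reference, the per-node startup script can be as small as this (a sketch: the -D property names follow the Selenium Grid 1.x ant targets, so double-check them against your version; the path, hostname, and environment string are placeholders):

# Launch an RC from the shared selenium-grid install and register it with the hub
cd /shared/selenium-grid
ant launch-remote-control -Dport=5555 -Dhost=node-01 -DhubURL=http://hudson-box:4444 -Denvironment="Firefox on Windows"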