How do I start an Amazon EC2 VM from a saved AMI using Jenkins?

I'm trying to create a Jenkins job to spin up a VM on Amazon EC2 based on an AMI that I currently have saved. I've done my searching and can't find an easy way to do this other than through Amazon's GUI. That isn't ideal, as there are a lot of manual steps involved and it's time-consuming.
If anyone's had any luck doing this or could point me in the right direction that would be great.
Cheers,
Darwin

Unless I'm misunderstanding the question, this should be possible using the CLI. Assuming you can install and configure the AWS CLI on your Jenkins server, you can just run the command as a shell build step as part of the build.
Create an instance with the CLI.
The command would be something along the lines of:
[path to cli]/aws ec2 run-instances --image-id ami-xyz
If your setup is too complicated for a single CLI command, I would recommend creating a simple CloudFormation template.
If you are unable to install the CLI, you could use any number of SDKs (e.g. the Java SDK) to make a simple application you could run with Jenkins.
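To flesh that out, here is a minimal sketch of what such an "Execute shell" build step could look like, assuming the AWS CLI is installed and credentials are configured on the Jenkins node; the AMI ID, instance type, key pair and security group below are placeholders:

    # Launch an instance from the saved AMI and capture its instance ID.
    # (ami-xyz, t2.micro, my-key and sg-12345678 are placeholders.)
    INSTANCE_ID=$(aws ec2 run-instances \
        --image-id ami-xyz \
        --instance-type t2.micro \
        --key-name my-key \
        --security-group-ids sg-12345678 \
        --query 'Instances[0].InstanceId' \
        --output text)

    # Wait until EC2 reports the instance as running, then print its public IP.
    aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"
    aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
        --query 'Reservations[0].Instances[0].PublicIpAddress' --output text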

There is the Jenkins EC2 Plugin.
Looking at its documentation, it looks like you may be able to reuse your AMI. If not, you can configure it with an init script:
Next, configure AMIs that you want to launch. For this, you need to find the AMI IDs for the OS of your choice. ElasticFox is a good tool for doing that, but there are a number of other ways to do it. Jenkins can work with any Unix AMIs. If using an Ubuntu EC2 or UEC AMI you need to fill out the rootCommandPrefix and remoteAdmin fields under 'advanced'. Windows is currently unsupported.
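If you do end up needing an init script instead of baking everything into the AMI, it is just a shell script the plugin runs on the freshly launched instance. A rough sketch for an Ubuntu AMI (the packages are only examples; the main point is that a JDK must be present so Jenkins can start its agent):

    #!/bin/sh
    # Example init script for the EC2 plugin: install a JDK (needed by the
    # Jenkins agent) plus any build tools; package names are Ubuntu-specific.
    apt-get update
    apt-get install -y openjdk-7-jre-headless git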

Related

Travis config for deploying a static site without any build actions

I'd like to use Travis to push a static HTML/JavaScript website to an Amazon S3 bucket on each commit to master. Is there any way to configure my .travis.yml so it doesn't try to run any sort of build process? Just a deploy?
It seems like this is mainly controlled by the language setting which defaults to Ruby, so Ruby is being (unnecessarily) installed on each build.
I don't know how the Ruby box works (I use the Java box for my work); that being said, I think the Travis CI boxes come with their base language already installed, so you aren't really installing Ruby unnecessarily on each build.
If you want, there is supposedly an undocumented option, language: generic.
That way you can just run the required bash commands to deploy your code to Amazon S3.
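For example, the whole deploy can be a couple of shell commands in the build; a rough sketch, assuming the AWS CLI is available on the build image and the credentials are supplied as encrypted environment variables, with the bucket name and directory as placeholders:

    # Sync the checked-out static site to the S3 bucket (placeholder names).
    # Assumes AWS credentials come from encrypted environment variables.
    aws s3 sync ./public s3://my-static-site-bucket --delete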

Do you run yeoman/gruntjs inside your vagrant (vm)

So I want to start using Yeoman (Grunt.js/RequireJS/Bower), but I was wondering whether this could be done from inside your VM, or would it be better for my workflow to have it installed on my host machine (OS X)? As far as I know you need a couple of dependencies like Node.js.
Is this a subjective thing or is there a guideline?
As @matt-cooper said, it's a subjective thing.
Personally, I run it on my host because that's where Git and my IDE live, and I consider Yeoman etc. to be development tools that belong outside the backend code, whereas I expect my VM to reflect my deployment server, which doesn't need to meet the same requirements as Yeoman.
This is purely a subjective thing... you can do either.
If you are only ever going to use one VM, then you could install Grunt etc. on either the VM or the host and use it; it would mean that you would have to SSH into the VM each time you wanted to run Grunt commands, though.
If, however, you are going to have more than one VM set up, then you might be better off having Grunt etc. installed on your host machine rather than having to maintain multiple versions.
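If you do keep it in the VM, the SSH step can at least be scripted from the host; a small sketch, assuming a standard Vagrant setup with the project in the default /vagrant synced folder (the task name is just an example):

    # Run a Grunt task inside the VM without opening an interactive session.
    vagrant ssh -c "cd /vagrant && grunt build"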

problems deploying openMRS.war to glassfish v.2

I'm trying to deploy openMRS v.1.9.2 to a local VM running CentOS & Glassfish 2 for work. Unfortunately, I could not get it to work. Normally, I just download the standalone found on SourceForge, double-click the jar, and I'm good to go.
I normally just SSH into the VM, so I first tried doing everything through a terminal. Here are the steps I took:
Using wget, retrieve the .zip
Create a dir (I just called it /openmrs), cd into the new directory, and then expand the .zip.
cd into the directory.
At this point, there are two options to start openMRS.
Run the bash script: ./run-on-linux.sh
Run the .JAR: java -jar [insert_jar_name].jar -commandline
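In shell form, those steps were roughly the following (the angle-bracketed parts are placeholders for the actual URL and file names):

    wget <openmrs-standalone-1.9.2 zip URL>          # fetch the .zip
    mkdir /openmrs && cd /openmrs                    # create a dir and move into it
    unzip <downloaded zip>                           # expand the .zip
    cd <extracted directory>
    ./run-on-linux.sh                                # option 1: the bash script
    java -jar <jar name>.jar -commandline            # option 2: the jar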
When I run the .JAR, I get a stack trace.
When I try to run the bash script, I get another error.
Anyway, I thought I had found a potential solution in an openMRS JIRA ticket, but it seems aimed at Glassfish 3, not Glassfish 2 (which is what I need to use).
I then tried deploying the .WAR via the Glassfish admin UI. I thought it would work, but after going through the steps of selecting a language, whether or not to use demo data, etc. I received this.
Does anyone have experience deploying openMRS to Glassfish 2.1.1? Unfortunately Glassfish 3 doesn't seem to be a realistic option. I would really appreciate any help here. Thanks.
Although it doesn't solve my problem of not being able to successfully deploy openMRS to an instance of Glassfish v.2, I did manage to get further by just installing MySQL on the VM. Our work machines are all set up for Postgres, so I think I should have guessed earlier that the missing MySQL server installation was the problem.
Here is a tutorial I used to install MySQL
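For anyone else who hits this, the MySQL install itself was just the standard packages; on CentOS 6 it is roughly the following (package and service names may differ on other releases):

    # Install, start, and enable the MySQL server, then run the hardening script.
    yum install -y mysql-server
    service mysqld start
    chkconfig mysqld on
    mysql_secure_installation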

Automatic Jenkins deployment

I want to be able to automate Jenkins server installation using a script.
Given a Jenkins release version and a list of {(plugin, version)} pairs, I want to run a script that will deploy a new Jenkins server for me and start it using Jetty or Tomcat.
It sounds like a common thing to do (e.g. needing to replicate a Jenkins master environment or create a clean one). Do you know what the best practice is in this case?
Searching Google only gives me examples of how to deploy products with Jenkins, but I want to actually deploy Jenkins itself.
Thanks!
This may require some additional setup at the beginning, but it could save you time in the long run. You could use a product called Puppet (puppetlabs.com) to automatically trigger the script whenever you want. I'm basically using it to trigger build-outs of my development environments. As I find new things that need to be modified, I simply update my Puppet modules and don't need to rediscover, through testing, what needs to be done to recreate the environments for the next go-round.
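Whatever ends up triggering it (Puppet, cron, or a manual run), the script itself can stay fairly small. A rough sketch, assuming the public Jenkins mirror layout for WAR and plugin downloads; the version numbers and plugin list are placeholders:

    #!/bin/sh
    # Fetch a specific Jenkins WAR plus pinned plugin versions, then start it.
    JENKINS_VERSION=1.509.4          # placeholder release version
    JENKINS_HOME=/var/lib/jenkins

    mkdir -p "$JENKINS_HOME/plugins"
    wget -O /opt/jenkins.war \
        "http://mirrors.jenkins-ci.org/war-stable/$JENKINS_VERSION/jenkins.war"

    # One "name/version" entry per plugin you want preinstalled.
    for p in "git/2.0" "ssh-slaves/1.5"; do
        name=${p%/*}; version=${p#*/}
        wget -O "$JENKINS_HOME/plugins/$name.hpi" \
            "http://updates.jenkins-ci.org/download/plugins/$name/$version/$name.hpi"
    done

    # Simplest way to start it; dropping the WAR into Tomcat/Jetty works too.
    JENKINS_HOME="$JENKINS_HOME" java -jar /opt/jenkins.war --httpPort=8080 &

The same script can then be wrapped in a Puppet module or exec so rebuilding the environment becomes a one-liner.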

How Can I Automate Running Pig Batch Jobs on Elastic MapReduce without Amazon GUI?

I have some Pig batch jobs in .pig files that I'd love to automatically run on EMR once every hour or so. I found a tutorial for doing that here, but that requires using Amazon's GUI for every job I set up, which I'd really rather avoid. Is there a good way to do this using Whirr? Or the Ruby elastic-mapreduce client? I have all my files in S3, along with a couple of Pig jars with functions I need to use.
Though I don't know how to run pig scripts with the tools that you mention, I know of two possible ways:
To run files locally: you can use cron.
To run files on the cluster: you can use Oozie.
That being said, most tools with a GUI can be controlled via the command line as well (though setup may be easier if you have the GUI available).
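For the cron option, the moving parts are just a crontab entry and whatever command launches the job. A rough sketch using the Ruby elastic-mapreduce client the question mentions; the flag names are from memory and the bucket paths are placeholders, so verify them against the client's --help output:

    # Crontab entry on the machine that has the client installed:
    #   0 * * * * /home/hadoop/run-pig-job.sh >> /var/log/pig-job.log 2>&1
    #
    # run-pig-job.sh: start a transient job flow that runs one Pig script from S3.
    elastic-mapreduce --create --name "hourly-pig-job" \
        --pig-script s3://my-bucket/scripts/job.pig \
        --args "-p,INPUT=s3://my-bucket/input,-p,OUTPUT=s3://my-bucket/output"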