Every few weeks I have to test some installers that my company produces. I'd like to automate the process, if that is possible. Here are the requirements:
Run on a Macbook.
Access data within AWS's EC2 console.
Access data within AWS's S3 console and download files from the same.
Open a Terminal session and perform scp commands.
In Terminal, connect to an AWS instance and perform commands therein.
Intuitively I'm convinced that I could automate this but I need a tool that would allow me to interact easily with Terminal and a Chrome browser.
Does such a tool exist?
Robert
The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
Hence, by using a combination of the AWS CLI and a shell script, you should be able to automate your tasks very easily on macOS.
The GUI approach you are looking for is not the best one unless you have a very strong reason for it.
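As a rough sketch of what that could look like (the tag value, bucket name, key file, user and remote command below are placeholders, not values from your setup):

    #!/bin/bash
    set -e
    # Look up the public DNS name of the tagged test instance in EC2
    HOST=$(aws ec2 describe-instances \
        --filters "Name=tag:Name,Values=installer-test" \
        --query "Reservations[0].Instances[0].PublicDnsName" --output text)
    # Download the latest installer build from S3
    aws s3 cp s3://my-build-bucket/installers/latest.pkg ./latest.pkg
    # Copy it to the instance and run the install/verify steps there
    scp -i ~/.ssh/test-key.pem ./latest.pkg ec2-user@"$HOST":/tmp/
    ssh -i ~/.ssh/test-key.pem ec2-user@"$HOST" "bash /tmp/run-install-checks.sh"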
All the tasks you want to do can be scripted easily with the help of an automation tool, and given your requirements, Ansible will work just fine for you.
It can perform tasks related to AWS.
It can perform scp commands.
It can ssh to an EC2 instance and run commands.
There are also a couple of things Ansible makes easier (just check out the Ansible modules):
It can manage multiple environments.
It works on Mac.
It is open source.
One other advantage of Ansible is that it is very easy to learn and write because of its simple YAML format; a couple of example invocations are sketched below.
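For a feel of the format, a couple of hedged one-liners (the inventory file, group name and file paths are placeholders):

    # Run an ad-hoc command on every host in the "test" group over SSH
    ansible test -i inventory.ini -m shell -a "uname -a"
    # Push an installer to the remote hosts (an scp-style copy)
    ansible test -i inventory.ini -m copy -a "src=./latest.pkg dest=/tmp/latest.pkg"
    # Run a full playbook written in YAML
    ansible-playbook -i inventory.ini install_and_test.yml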
Is there any way to automatically run regression/functional tests on NiFi flows using a Jenkins pipeline?
Searched for it, without any success.
Thanks for your help.
With the recent release of NiFi 1.5.0 and NiFi Registry 0.1.0, the community has come together to produce a number of SDLC/CICD integration tools to make using things like Jenkins Pipeline easier.
There are both Python (NiPyAPI) and Java (NiFi-Toolkit-CLI) API wrappers being produced by a team of collaborators to allow scripted manipulation of NiFi flows across different environments.
Common functions include interaction with integrated version control, import/export of flows as JSON documents, deployment between environments, start/stop of flows, etc.
So we are working quickly towards supporting things like an integrated wrapper for declarative Jenkins Pipelines. I would add that this is being done fully in a public codebase under the Apache license, so we (I am the lead NiPyAPI author) would welcome your collaboration.
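As a taste of what that looks like from a Jenkins shell step, here is a very rough sketch using the NiFi-Toolkit-CLI (the URLs are placeholders, and the exact sub-commands depend on the toolkit version you have):

    # List the buckets in the NiFi Registry, then the process groups on a NiFi instance
    ./bin/cli.sh registry list-buckets -u http://nifi-registry:18080
    ./bin/cli.sh nifi pg-list -u http://nifi:8080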
I was looking for a uniform configuration management tool for remote installation of an OS on remote servers (similar to Puppet/Chef) with wide platform support. I think we can use PXE/kickstart for remote installation, but I am not sure whether that can be used to install an OS on multiple servers in parallel. Another way is to spin up EC2 instances from AWS and pay Amazon for the usage. I was wondering whether there is any better option for this requirement?
Regards
Bubunia
You can consider Ansible as a strong candidate for this.
Some of its features:
Open source with a large development community
A number of modules which can help you build flexible solutions
Cloud-focused development modules
An inventory which can help you automate things end to end on the basis of tags on your instances
Agentless
An easy-to-write, easy-to-read YAML format
Pre-built modules for many common installations, available from the open-source community
Works with multiple operating systems
It is efficient as well; I have been using it for the last year and have found it very good. A sketch of a parallel ad-hoc run is shown below.
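For the parallelism part, Ansible runs tasks against many hosts concurrently; a rough sketch (the inventory file, group name, package and fork count are placeholders):

    # Install a package on every host in the "new-servers" group, 20 hosts at a time
    ansible new-servers -i hosts.ini -m package -a "name=ntp state=present" --forks 20 --become
    # Apply a full provisioning playbook to the same group
    ansible-playbook -i hosts.ini provision.yml --limit new-servers --forks 20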
Sparrow6 CM supports quite a range of platforms/OSes. You can choose Sparrowdo to run configuration jobs in a push manner over SSH.
I am looking for ways to set up a central 'hub' for Selenium at my work, allowing anyone within the company to access it. For example, Tester A writes test scripts, and Person B can run them without having to manually copy the test scripts over to their local workstation.
So far, I've only thought of installing Selenium in a VM, which would then execute as normal. But if I run Selenium Grid, would it be running VMs within a VM? My only concern with VMs is that they'd run slowly.
If anyone can think of a better solution or recommendation please do give me some advice. Thank you in advance.
One idea. You can create an infrastructure combining Jenkins/Selenium/Amazon.
The following is my solution from another post.
You can do it with a grid.
First of all, you need to create a Selenium hub on an EC2 Ubuntu 14.04 AMI without a UI and link it as a Jenkins slave to your Jenkins master, or use it directly as the master, whichever you want; command line only. Download the Selenium Server standalone JAR (be careful which version you download; if you download the Selenium 3 beta, things could change). Here you can configure the hub. You can also add the Selenium hub as a service and configure it to run automatically at server start. It is important that you open the default Selenium port (or the one that you configured) so the nodes can connect to it. You can do that in the Amazon EC2 console once you have created your instance: just add a security group with an inbound rule for TCP on the port you want, for the IPs you want.
Then you can create a Windows Server 2012 instance (for example; that's what I did) and follow the same process. Download the same Selenium version and the chromedriver (there is no need to download any Firefox driver for Selenium versions before Selenium 3). Generate a txt file containing the Selenium command that links to the hub as a node, and rename it to *.bat in order to execute it. If you want to run the .bat at start-up, you can create a service with the Task Scheduler or use NSSM (https://nssm.cc/). Don't forget to add the rules to the security groups for this machine too!
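For reference, the hub and node start-up commands would look roughly like this (the Selenium version, chromedriver path and hub address are placeholders; adjust them to whatever you actually downloaded):

    # On the Ubuntu hub instance: start the grid hub (listens on port 4444 by default)
    java -jar selenium-server-standalone-2.53.1.jar -role hub
    # On the Windows node (this is the line to put into the .bat file): register with the hub
    java -Dwebdriver.chrome.driver=C:\selenium\chromedriver.exe -jar selenium-server-standalone-2.53.1.jar -role node -hub http://<hub-ip>:4444/grid/register -browser browserName=chrome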
Next, create the Jenkins server. You can use the Selenium Hub as the Jenkins master or as a slave.
The last step is configuring a job to be run on the Jenkins-Selenium machine. This job needs to be linked to your code repository (Git, Mercurial...). Using the Parameterized Build plugin for Jenkins, you can tell that job to pull the revision you want (so every developer can pull the revision with the new changes and new tests) and run the Selenium tests in that build, with the current branch/revision, against one unique Selenium hub. You can use Ant or Maven to run the Selenium tests in Jenkins.
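For instance, the shell step of that job might boil down to something like this (the BRANCH parameter, the hub address and the selenium.hub.url property name are assumptions; they depend on how your test suite reads its configuration):

    # Check out the requested revision and run the suite against the grid hub
    git checkout "$BRANCH"
    mvn clean test -Dselenium.hub.url=http://<hub-ip>:4444/wd/hub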
Maybe it's complicated to understand because there are so many concepts here, but it's robust and it works fine!
If you have doubts, tell me!
If Internet Explorer is not one of the browsers on which you must run your automation tests, I would recommend that you consider docker selenium.
Selenium provides pre-configured Docker images for both the Selenium Hub and Node (refer here for more information). To make use of Docker Selenium, all you need to do is find a machine (preferably a Unix machine), install Docker on it by following the instructions detailed here, and then start the hub and node by starting those containers, as sketched below. With Docker you can literally transform a VM (or a physical machine) into a VM farm and yet not have to worry about slowness etc., because I believe Docker is optimised for this and runs each container as an ordinary process.
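A minimal sketch of that, using the images from the SeleniumHQ docker-selenium project (image tags are omitted here; pin them to a version in practice):

    # Start the hub, exposing the grid registration/console port
    docker run -d -p 4444:4444 --name selenium-hub selenium/hub
    # Start a Chrome node and point it at the hub
    docker run -d --link selenium-hub:hub selenium/node-chrome
    # Tests then use http://<docker-host>:4444/wd/hub as the RemoteWebDriver URL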
Resorting to the Amazon cloud for running your Selenium nodes is all fine, but if you have corporate policies that prevent incoming traffic from the internet into your intranet region, then I am not sure how far the Amazon cloud would be useful.
Also remember that Jenkins is not something that is absolutely required but is more of a good to have part in the setup because it would let anyone run their tests from a web UI. This will however require that all your tests are checked-in and made available in a central version control system in your organization.
PS: The reason I called out Internet Explorer as an exception is that IE runs only on Windows, and there are no Docker images (yet) for Windows. All the Docker images are Unix-based.
Possible ways to accomplish it:
Creating a dedicated WCF service for this purpose (currently my favorite option)
Using the REST API?
Azure PowerShell?
Explanation:
Publishing a web-role cloud-service takes about 10 minutes. It's much too long during development - I try to do as much as I can offline, unit-test-ish and modular, but it's just impossible to completely avoid development cycles altogether with the VM.
Apparently, the long time is mostly a result of the machine being wholly restarted, so I'm trying to find an automatic solution, like uploading and installing the binaries.
What is the best way to accomplish it?
What do you think? would it cut at least 50% of the publishing time?
Do you expect any critical problems?
The solutions proposed below are definitely against best practices and should NEVER-EVER be used in a production environment.
If your objective is to quickly test your changes in your development environment, there are two ways you can go about it.
Enable RDP: You could enable Remote Desktop on your web role and copy your modified binaries or other files manually into the appropriate folders on the VM.
Use Web Deploy: This will only work for web roles in your project but you could enable Web Deploy on your Web Roles and use that to make faster deployment. Please see this link for more details on how to use this feature: https://msdn.microsoft.com/en-us/library/azure/ff683672.aspx.
I have two jobs in Jenkins. One for build and the other for deployment.
Once the build job is successful, I create a build tag and publish it on GitHub.
Next, I take that tag and deploy the artifacts using the Publish Over SSH plugin, selecting the option "Send files or execute commands over SSH" as my post-build step. I also add the already-configured server at this step.
Now my concern is that in some cases the server details, i.e. username/password, are not known well in advance.
Is there a feature in Jenkins which can ask me to enter server name/username/password for deploying? Can I have a parameterized build with these three fields as inputs, so that when I click "Build Now" on the deployment job it asks for these fields?
The Publish Over SSH plugin is designed to use credentials previously set up and managed by Jenkins. This is necessary because Jenkins manages the distribution of credentials when you run builds on slave nodes.
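That said, if you simply want to type those three values in at build time, string and password parameters are exposed to an "Execute shell" build step as environment variables, so something along these lines would work outside the plugin; a rough sketch (the parameter names SERVER_NAME, DEPLOY_USER and DEPLOY_PASS, the artifact path and the restart command are all placeholders you would define yourself, and sshpass must be installed on the build node):

    # Push the artifact and restart the service using the values entered for this build
    sshpass -p "$DEPLOY_PASS" scp -o StrictHostKeyChecking=no target/app.war "$DEPLOY_USER@$SERVER_NAME:/opt/app/"
    sshpass -p "$DEPLOY_PASS" ssh -o StrictHostKeyChecking=no "$DEPLOY_USER@$SERVER_NAME" "sudo service app restart"

Be aware that this bypasses Jenkins' credential management, which is exactly what the plugin is designed to avoid.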
An alternative solution you could consider is the Rundeck plugin. Rundeck is a general-purpose automation tool similar to Jenkins; the advantage is that you can use dedicated tools for build and deployment (useful when you have separate Dev and Ops teams), and Rundeck is better suited to managing large numbers of run-time servers.