Deployment through Jenkins - ssh

I have two jobs in Jenkins. One for build and the other for deployment.
Once the build job succeeds, I create a build tag and publish it on GitHub.
Next, I take that tag and deploy the artifacts using the Publish Over SSH plugin, selecting the option "Send files or execute commands over SSH" as my post-build step. I also add the already configured server at this step.
My concern is that in some cases the server details (server name, username, password) are not known well in advance.
Is there a feature in Jenkins that can prompt me for the server name/username/password when deploying? Can I have a parameterized build with these three fields as inputs, so that when I click "Build Now" on the deployment job it asks for them?

The Publish Over SSH plugin is designed to use credentials previously set up and managed by Jenkins. This is necessary because Jenkins manages the distribution of credentials when you run builds on slave nodes.
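If you do want the prompt-at-build-time behaviour you describe, you can make the deployment job parameterized (String parameters for the server name and username, a Password parameter for the password); Jenkins exposes these to build steps as environment variables. A shell build step could then drive scp/ssh directly instead of the plugin. The sketch below is illustrative only: the parameter names, the artifact path and the use of sshpass are assumptions, not part of the plugin's workflow.

    #!/usr/bin/env bash
    # Illustrative sketch: assumes the job defines DEPLOY_HOST and DEPLOY_USER (String
    # parameters) and DEPLOY_PASS (Password parameter), and that sshpass is installed
    # on the build node. The artifact path and remote commands are placeholders.
    set -euo pipefail

    ARTIFACT="build/output.tar.gz"

    # Copy the artifact to the server entered at "Build Now" time
    sshpass -p "$DEPLOY_PASS" scp -o StrictHostKeyChecking=no \
      "$ARTIFACT" "$DEPLOY_USER@$DEPLOY_HOST:/opt/app/"

    # Unpack it on the remote side
    sshpass -p "$DEPLOY_PASS" ssh -o StrictHostKeyChecking=no \
      "$DEPLOY_USER@$DEPLOY_HOST" "tar -xzf /opt/app/output.tar.gz -C /opt/app"

Keep in mind that passing passwords this way exposes them to anyone who can reconfigure the job, which is part of why the plugin insists on centrally managed credentials.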
An alternative you could consider is the Rundeck plugin. Rundeck is a general-purpose automation tool, similar to Jenkins but focused on operations and deployment rather than builds. The advantage is that you can use dedicated tools for build and for deployment (useful when you have separate Dev and Ops teams), and Rundeck is better suited to managing large numbers of run-time servers.

Related

Rundeck server with Internet clients

I recently started evaluating Rundeck for our runbook automation needs. However, I found that it uses an SSH-based connection method, and the endpoints where we want to perform automation are at our customers' locations. I was hoping it had an agent that we could install on those Windows 10 IoT based endpoints and then perform the runbook automation tasks remotely, but it appears that there is no agent for Rundeck.
Has anyone made it work with such an arrangement?
Rundeck is an agentless solution; for Windows-based machines/servers it uses the PyWinRM (Python WinRM) plugin out of the box. Check the plugin documentation for the requirements, configuration, node definition and a good job example.
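As a rough illustration, a Windows node defined in a resources.yaml model source for that plugin could look something like the sketch below. The hostname, user and key-storage path are placeholders, and the attribute names are quoted from memory of the plugin documentation, so verify them there before use.

    win-iot-node-01:
      nodename: win-iot-node-01
      hostname: 192.168.1.50
      username: Administrator
      osFamily: windows
      node-executor: WinRMPython
      file-copier: WinRMcpPython
      winrm-protocol: https
      winrm-password-storage-path: keys/nodes/win-iot-node-01/password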

Best tool to automate repetitive tasks across multiple environments?

Every few weeks I have to test some installers that my company produces. I'd like to automate the process, if possible. Here are the requirements:
Run on a Macbook.
Access data within AWS's EC2 console.
Access data within AWS's S3 console and download files from the same.
Open a Terminal session and perform scp commands.
In Terminal, connect to an AWS instance and perform commands therein.
Intuitively I'm convinced that I could automate this but I need a tool that would allow me to interact easily with Terminal and a Chrome browser.
Does such a tool exist?
Robert
The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts.
Hence, by using a combination of the AWS CLI and a shell script, you should be able to automate your tasks very easily on macOS.
The GUI approach you are looking for is not the best one unless you have a very strong reason for it.
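A rough sketch of that approach is below; the instance ID, bucket, key file and remote command are placeholders rather than details from the question.

    #!/usr/bin/env bash
    # Assumes the AWS CLI is installed and configured (aws configure) on the Mac.
    set -euo pipefail

    # Look up the instance's public DNS name in EC2 (instance ID is a placeholder)
    HOST=$(aws ec2 describe-instances \
      --instance-ids i-0123456789abcdef0 \
      --query 'Reservations[0].Instances[0].PublicDnsName' --output text)

    # Download the installer under test from S3 (bucket/key are placeholders)
    aws s3 cp s3://my-bucket/installers/latest-installer.pkg .

    # Copy it to the instance and run a command there over SSH
    scp -i ~/.ssh/mykey.pem latest-installer.pkg "ec2-user@${HOST}:/tmp/"
    ssh -i ~/.ssh/mykey.pem "ec2-user@${HOST}" "ls -l /tmp/latest-installer.pkg"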
All the tasks you want to do can be done in a scripted, straightforward way with the help of an automation tool, and given your requirements Ansible will work fine for you:
It can perform tasks related to AWS.
Can perform scp commands.
Can ssh to ec2 instance and perform commands.
There are also a couple of things you can do in a better way, in an easy-to-learn YAML format, using Ansible (just check out the Ansible modules; a small sketch follows this list):
Can manage multiple envs.
Works on mac
Open Source
Another advantage of Ansible is that it is very easy to learn and write because of its simple YAML format.
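For illustration, a playbook covering a couple of the steps above could look roughly like this; the inventory group, bucket, object key and paths are assumptions made up for the example, and the S3 task requires AWS credentials and boto3 on the host it runs on.

    # playbook.yml - illustrative sketch only
    - name: Fetch installer from S3 and check it on an EC2 host
      hosts: ec2_test_hosts          # group defined in your inventory
      tasks:
        - name: Download the installer from S3
          amazon.aws.s3_object:
            bucket: my-bucket
            object: installers/latest-installer.pkg
            dest: /tmp/latest-installer.pkg
            mode: get

        - name: Run a command on the remote instance
          ansible.builtin.command: ls -l /tmp/latest-installer.pkg

Run it with "ansible-playbook -i inventory playbook.yml".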

Visual Studio Team Services Test Running

Apologies if something similar has been asked before; I couldn't seem to find anything, so just point me in the right direction if so.
I'm brand new to test automation, I will be writing selenium tests against a third party website hosted on an internal network. Our source control is provided by Visual Studio Team Services, although it is possible I can install TFS on premise.
Eventually I need to schedule test runs. I believe all this can be done with Team Services; I've seen some demos, all good.
I will be using a URL to access the system under test, which is on our internal network. If Team Services tries to run a Selenium test and connect to that URL, I imagine it will fail, as it's running from wherever Microsoft is holding the code and building it.
I don't think there would be a chance that we would allow Team services any access to our internal network if that was even possible.
So the question is, what are my options? Can the build be moved from VS Team Services onto a local machine to run the tests with the internal URL? Is this a good idea if it can? Am I relying too much on the internet for testing on our internal network, and is this a risk?
I have spent a bit of time on "the google" but I'm struggling to find a great deal of information; it's possible I am asking the wrong questions.
Any help is greatly appreciated, links to articles are fine, don't mind doing the leg work, just need some pointers.
Many thanks for your help, apologies if any of that makes no sense.
You have a few options:
Install a VSTS build agent on-premise and connect it to VSTS. The agent connects to VSTS using an outbound connection, and it will be able to execute Build and Release pipelines and from there orchestrate the execution of tests. You can either put this agent in a specific Agent Pool or Agent Queue, or you can add a Capability to it (e.g. "onprem"). By setting the Build Definition to use the specified Pool/Queue, that agent will be selected; or, by adding the Demand "onprem" to your Build Definition, you ensure that it always requires that capability of any agent (see the example after this list).
Use TFS 2015u3 or TFS 2017 with the same agent, but that would mean you lose all the goodness that VSTS brings with regards to licenses, "free upgrades" and all.
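For the first option, registering the on-premise agent against a specific pool looks roughly like the following; the account URL, personal access token, pool and agent names are placeholders, and the exact commands are shown on the agent download page in VSTS.

    # Run from the unpacked agent directory on the on-premise machine
    # (Linux/macOS shown; on Windows use config.cmd / run.cmd).
    ./config.sh --url https://youraccount.visualstudio.com \
                --auth pat --token <personal-access-token> \
                --pool OnPremPool --agent onprem-agent-01
    ./run.sh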
With regards to security.
Adding an agent to your network that executes commands queued on a cloud service adds a risk. You can minimize that risk by configuring the build agent with a limited account, using Active Directory to limit the machines this user can run processes on or log on to, and limiting access to the agent through permissions on the Queue and Pool as well. You can ensure that the users who have access to this pool and all your VSTS administrators have configured 2-factor authentication on their AAD accounts and, if needed, add IP access control to these accounts as well. It's recommended that users who administer such agent pools/queues do not have alternate credentials configured and that the Personal Access Token used to register the agent is scoped to just the permissions required to do that.
With these extra measures in place you'll have a pretty secure setup, and it beats the hassle of having to install, back up and maintain a couple of TFS servers on-premise.

Vagrant in production

I've been reading about Vagrant, and I find it quite useful for my development. I am currently managing a series of services (mail, web, LDAP, file sharing, etc.), and often one of them goes down and needs to be restored quickly. Is it possible (and recommended) to use Vagrant for these purposes?
So far I have the virtual machines installed and managed like real machines.
I would also like to know about an alternative to Vagrant that would let me write a simple configuration file and bring up a virtual machine (for example, with Zimbra) so I could quickly have an alternate mail server, enable RabbitMQ, etc.
Vagrant should be used more like a staging environment to test your infrastructure changes. It should be your test bed for automated infrastructure changes.
The way we use it at my company is like so:
Create VMs for our managed servers in Vagrant.
Create puppet definitions for each server.
Create cucumber tests for each server.
Make infrastructure changes via puppet and add cucumber tests.
Launch our servers to test for failures.
Fix bugs, release and/or back to step 4.
Basically when we're happy with our changes, we'll pull our puppet changes into production to make it happen.
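As a small sketch of steps 1-2 above, a Vagrantfile that defines a VM and provisions it with a Puppet manifest might look like this; the box name, hostname and manifest layout are assumptions for illustration.

    # Vagrantfile - illustrative sketch
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty64"        # placeholder base box
      config.vm.hostname = "mail-test"

      # Apply the same Puppet definitions used for the managed server
      config.vm.provision "puppet" do |puppet|
        puppet.manifests_path = "puppet/manifests"
        puppet.manifest_file  = "mail.pp"
        puppet.module_path    = "puppet/modules"
      end
    end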
I'd not recommend using Vagrant to manage VMs for real production. I'd use something else like Razor, virsh, OpenStack or one of the many other VM management systems out there.
This page suggests that the Vagrant push command is meant for deploying to production:
https://www.hashicorp.com/blog/vagrant-push-one-command-to-deploy-any-application/
"Additionally, multiple config.push.define declarations can be in a Vagrantfile to define multiple pushes, perhaps one to staging and one to production, for example."
From my experience, Vagrant is mainly used in development environments.
Vagrant's configuration and provisioning options are limited compared to, for example, Terraform.
If you are working in a cloud-based environment, you can use Terraform for infrastructure provisioning.
If your environment is local or your VMs will be hosted in a datacenter, you can use Ansible, Chef or Puppet for your configuration management and automation.
HashiCorp just published Otto, which is meant to be Vagrant's successor. It is designed to support deployment environments.
From their Github page:
The key features of Otto are:
Automatic development environments: Otto detects your application type and builds a development environment tailored specifically for that application, with zero or minimal configuration. If your application depends on other services (such as a database), it'll automatically configure and start those services in your development environment for you.
Built for Microservices: Otto understands dependencies and versioning and can automatically deploy and configure an application and all of its dependencies for any environment. An application only needs to tell Otto its immediate dependencies; dependencies of dependencies are automatically detected and configured.
Deployment: Otto knows how to deploy applications as well as develop them. Whether your application is a modern microservice, a legacy monolith, or something in between, Otto can deploy your application to any environment.
Docker: Otto can use Docker to download and start dependencies for development to simplify microservices. Applications can be containerized automatically to make deployments easier without changing the developer workflow.
Production-hardened tooling: Otto uses production-hardened tooling to build development environments (Vagrant), launch servers (Terraform), configure services (Consul), and more. Otto builds on tools that power the world's largest websites. Otto automatically installs and manages all of this tooling, so you don't have to.
I had the same question and have been investigating the use of Vagrant push. As per the documentation, as of version 1.7 Vagrant is capable of deploying or "pushing" application code in the same directory as your Vagrantfile to a remote such as an FTP server.
I'm considering having Vagrant spin up a VM for developers, while also giving the option to deploy the code to a live server for production environments.
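For illustration, such push definitions in a Vagrantfile might look roughly like the sketch below; the FTP strategy, hosts and credentials are placeholders, so check the Vagrant push documentation for the supported strategies and options.

    # Vagrantfile fragment - illustrative only
    config.push.define "staging", strategy: "ftp" do |push|
      push.host     = "staging.example.com"
      push.username = "deploy"
      push.password = "secret"
    end

    config.push.define "production", strategy: "ftp" do |push|
      push.host     = "ftp.example.com"
      push.username = "deploy"
      push.password = "secret"
    end

    # Then run: vagrant push staging   (or)   vagrant push production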
As mentioned by @andrerpena, Otto is the successor of Vagrant.
From www.ottoproject.io:
Otto can deploy your application. Users of Vagrant for years have wanted a way to deploy their Vagrant environments to production. Unfortunately, the Vagrantfile doesn't contain enough information to build a proper production environment with industry best practices. An Appfile is made to encode this knowledge, and deployment is a single command away.

Publish a web application on build with NAnt, MSBuild or any other tool

I have a scenario where I have to set up a test environment, and I want to be able to tell NAnt or another build tool to create a new IIS web application, put the latest binaries in the newly created IIS web application, and send me an email with the address and port where the new application can be reached. Is this possible, and how? Which tool?
There are several ways to approach this:
Set up a continuous integration (CI) server on the test environment. This is a viable option if your test environment machine doesn't change often and it's a single machine.
Push the installation from your development machine using tools like PsExec
Combination of the two: you have a build CI server which pushes the installation to (multiple) test environments.
Of course, you also need a good build script which will set up the IIS application (NAnt offers tasks for this). Emailing you can be done by the CI server (CruiseControl.NET Email Publisher, Hudson...).
I suggest taking some time to read this excellent article series: Automation for the people: Deployment-automation patterns
Our CruiseControl.NET build server does exactly this as part of its NAnt build-script process...
Once the code is retrieved from source control, it's all built/compiled in turn. Web projects are then handled slightly differently to normal .dlls, as they are deployed to a particular folder (either on the current machine or otherwise) where IIS (also set up by the script) serves the pages.
Admittedly, we're using Virtual Directories instead of creating and disposing of new website instances on the server, as otherwise we'd have to manage the port numbers for each website.
NAnt is capable of doing all of this IIS work, as well as all of the email work too - I'd certainly recommend looking at this avenue of enquiry to solve your problem. Plus, you also get the continuous integration aspect as a side benefit in your case!
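As a rough sketch of that avenue, a NAnt target could combine a copy step, an IIS virtual-directory task and the mail task roughly as follows. The mkiisdir task comes from NAntContrib rather than NAnt core, and the paths, server names and attribute values shown are illustrative assumptions to be checked against the task documentation.

    <target name="deploy-and-notify">
      <!-- Copy the freshly built binaries to the target folder (paths are placeholders) -->
      <copy todir="C:\inetpub\wwwroot\MyTestApp\bin">
        <fileset basedir="build\output\bin">
          <include name="**/*.dll" />
        </fileset>
      </copy>

      <!-- Create/point an IIS virtual directory at that folder (NAntContrib task) -->
      <mkiisdir vdirname="MyTestApp" dirpath="C:\inetpub\wwwroot\MyTestApp" />

      <!-- Mail out the address of the new application (addresses/mailhost are placeholders) -->
      <mail from="build@example.com"
            tolist="dev-team@example.com"
            subject="Test site deployed"
            message="The latest build is available at http://buildserver/MyTestApp"
            mailhost="smtp.example.com" />
    </target>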