I'm using Terraform to automate the build-out of an AWS EC2-based Docker host, and then using its remote-exec provisioner to download a Dockerfile, build it and run it.
I'd hoped to integrate this with Serverspec but am struggling to work out two things:
The best way to pass the external DNS name of the newly created AWS EC2 instance to Serverspec.
How to configure the SSH options for Serverspec so that it executes correctly on an Amazon Linux AMI using the ec2-user account.
I would normally connect to the EC2 instance using a pre-defined key pair and never use a password; however, Serverspec seems to run commands on the server with sudo -p (prompting for a password).
Any advice much appreciated.
Contents of spec_helper.rb
require 'serverspec'
require 'net/ssh'
set :ssh_options, :user => 'ec2-user'
I'm also using an edited Rakefile, as follows, to force the correct EC2 external DNS name (masked):
require 'rake'
require 'rspec/core/rake_task'
hosts = %w(
  ec2-nn-nn-nn-nnn.eu-west-1.compute.amazonaws.com
)

set :ssh_options, :user => 'ec2-user'

task :spec => 'spec:all'

namespace :spec do
  task :all => hosts.map { |h| 'spec:' + h.split('.')[0] }

  hosts.each do |host|
    short_name = host.split('.')[0]
    role = short_name.match(/[^0-9]+/)[0]

    desc "Run serverspec to #{host}"
    RSpec::Core::RakeTask.new(short_name) do |t|
      ENV['TARGET_HOST'] = host
      t.pattern = "spec/Nexus/*_spec.rb"
    end
  end
end
You could expose the instance's public DNS name (or IP address) as a Terraform output. The Terraform documentation gives an example doing just that for an AWS instance, named web in this case:
output "address" {
value = "${aws_instance.web.public_dns}"
}
Then you can get this value from the command line after a terraform apply with terraform output address.
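For example, the Rakefile above could populate its hosts array from that output instead of hardcoding the DNS name. This is only a sketch: it assumes rake runs from the Terraform working directory and that terraform output prints the bare value (newer Terraform versions need terraform output -raw address):
# Sketch: in the Rakefile, read the host from Terraform instead of hardcoding it
hosts = [ `terraform output address`.strip ]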
You can set the sudo password with the config option :sudo_password. If the ec2-user can run sudo without a password, set this to ''. (See this blog post for an example.) Or pass it in the SUDO_PASSWORD environment variable, described here: http://serverspec.org/tutorial.html
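Putting both parts together, a spec/spec_helper.rb along these lines should work for the ec2-user on an Amazon Linux AMI. This is a sketch, not the asker's exact setup; the key path and the optional KEY_PATH variable are placeholders to adjust to your own key pair:
require 'serverspec'
require 'net/ssh'

set :backend, :ssh
set :host, ENV['TARGET_HOST']   # exported by the Rakefile task above
set :ssh_options,
    :user         => 'ec2-user',
    :keys         => [ENV['KEY_PATH'] || '~/.ssh/your-ec2-keypair.pem'],  # placeholder key path
    :auth_methods => ['publickey']
set :sudo_password, ''          # ec2-user can sudo without a password on Amazon Linux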
Related
I'm trying to automate Google Cloud virtual instances remotely, using only the external IP addresses of the virtual machines. I can SSH into the virtual machines from the command line with the user name shishir9159_gmail_com. But if I use any Ansible command like this:
ansible -i hosts -u shishir9159_gmail_com --private-key=~/.ssh/google_compute_engine -m ping all
it results in the following error:
"msg": "Failed to connect to the host via ssh: shishir9159@35.202.219.6: Permission denied (publickey,gssapi-keyex,gssapi-with-mic)."
I've added some parameters in my ansible.cfg:
host_key_checking = False
ssh_args = -o ControlMaster=no
But I don't think they help much, according to this post:
https://serverfault.com/questions/929222/ansible-where-do-preferredauthentications-ssh-settings-come-from
I have tried many methods and recommendations. I have a service account, but it doesn't seem necessary for this simple ping command.
The problem is the underscores in the user name. Try a user name without underscores, or try quoting it.
I solved the problem by adding ansible_ssh_user and ansible_ssh_pass in the hosts file. This post contains the solution:
ansible SSH connection fail
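For reference, a minimal sketch of such an inventory entry (the IP address and key path are taken from the question; ansible_ssh_pass is only needed for password authentication):
[gcp]
35.202.219.6 ansible_ssh_user=shishir9159_gmail_com ansible_ssh_private_key_file=~/.ssh/google_compute_engine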
Airflow version: 1.9.0
I have installed Apache Airflow and, post configuration, I am able to run the sample DAGs with the sequential executor.
I have also created a new sample user, which I can see under Admin > Users.
But I am unable to get the login window/screen: when I visit the webserver address at :8080/, it directly opens the Airflow webserver as the admin user.
It would be a great help if anyone could provide some info on how to activate the login screen/page, so that user credentials can be used to log into the webserver.
Steps followed to enable web user authentication:
https://airflow.apache.org/security.html?highlight=authentication
Check the following in your airflow.cfg file:
[webserver]
authenticate = True
auth_backend = airflow.contrib.auth.backends.password_auth
Also remember to restart the Airflow webserver. If it still doesn't work, run airflow initdb and restart the webserver again.
Also, double-check that the airflow.cfg file does not contain multiple configurations for authenticate or auth_backend. If there is more than one occurrence, it can cause this issue.
If necessary, install the flask_bcrypt package for Python 2.x/3.x.
For instance,
$ python3.7 -m pip install flask_bcrypt
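With authenticate = True and the password_auth backend, the Airflow 1.x security documentation linked above shows creating a login user from a Python shell, roughly as follows (username, email, and password are placeholders):
import airflow
from airflow import models, settings
from airflow.contrib.auth.backends.password_auth import PasswordUser

user = PasswordUser(models.User())
user.username = 'new_user_name'             # placeholder
user.email = 'new_user_email@example.com'   # placeholder
user.password = 'set_the_password'          # placeholder
session = settings.Session()
session.add(user)    # persist the new user
session.commit()
session.close()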
Make sure you have an admin user created:
airflow create_user -r Admin -u admin -e admin@acme.com -f admin -l user -p *****
Edit airflow.cfg, inside the [webserver] section:
change authenticate = True (by default it is set to False)
add auth_backend = airflow.contrib.auth.backends.password_auth
set rbac = True for role-based access control (RBAC)
Then run airflow initdb and restart the Airflow webserver.
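Putting those steps together, the [webserver] section of airflow.cfg would look roughly like this:
[webserver]
authenticate = True
auth_backend = airflow.contrib.auth.backends.password_auth
rbac = True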
Just add rbac = True to airflow.cfg, and you are good to go.
Now all you need to do is restart your Airflow webserver.
And in case you want to add a new user, you can use this command:
airflow create_user -r Admin -u admin -f Ashish -l malgawa -p test123 -e ashishmalgawa@gmail.com
“-r” is the role we want for the user
“-u” is the username
“-f” is the first name
“-l” is the last name
“-e” is the email id
“-p” is the password
For more details, you can follow this article
https://www.cloudwalker.io/2020/03/01/airflow-rbac-role-based-access-control/#:~:text=RBAC%20is%20the%20quickest%20way,access%20to%20DAGs%20as%20well
I have to deploy a Rails app after the server had a problem and its IP address changed.
I've updated the IP address in deploy/production.rb, and also in Git's remotes, to the correct value, namely 192.168.30.24, but as you can see from the following output, the deployment fails because it tries to connect to 192.168.30.23.
Where is Capistrano retrieving 192.168.30.23 from?
INFO [fa83a838] Running /usr/bin/env git remote update as code#192.168.30.24
DEBUG [fa83a838] Command: cd /var/www/paperless_office/repo && ( export RBENV_ROOT="~/.rbenv" RBENV_VERSION="2.3.0" GIT_ASKPASS="/bin/echo" GIT_SSH="/tmp/paperless_office/git-ssh.sh" ; /usr/bin/env git remote update )
DEBUG [fa83a838] Fetching origin
DEBUG [fa83a838] ssh: connect to host 192.168.30.23 port 22: No route to host
Capfile
# Load DSL and set up stages
require 'capistrano/setup'
# Includes default deployment tasks
require 'capistrano/deploy'
require 'capistrano/rbenv'
require 'capistrano/bundler'
require 'capistrano/rails/assets'
require 'capistrano/rails/migrations'
# Loads custom tasks from `lib/capistrano/tasks' if you have any defined.
Dir.glob('lib/capistrano/tasks/*.cap').each { |r| import r }
production.rb as follows:
role :app, %w{192.168.30.24}
role :web, %w{192.168.30.24}
role :db, %w{192.168.30.24}
server '192.168.30.24', user: 'code', roles: %w{web app}
after 'deploy:publishing', 'deploy:restart'
Thanks
Fixed this by removing the remote repo that Capistrano builds on the server, so that on the next deploy it was rebuilt using the correct IP address.
I was deploying to /var/www/app_name, so the repo to remove was /var/www/app_name/repo.
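In concrete terms for this question (a sketch; the host, user, and path are taken from the log output above):
# Remove the cached repo mirror on the server, then redeploy;
# Capistrano re-clones it using the corrected origin URL
ssh code@192.168.30.24 'rm -rf /var/www/paperless_office/repo'
cap production deploy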
I have a Chef (solo) recipe which generates a CSS file and puts it somewhere in the web root directory (/vagrant/css on the VM in my case). The problem is that the recipe needs to know the absolute path to the Vagrant synced directory on the VM: it is the folder where the Vagrantfile lives, and by default it maps to /vagrant inside the VM.
I know how to set that path:
Vagrant.configure("2") do |config|
config.vm.synced_folder ".", "/synced/dir/on/vm"
end
But the problem is how to make that /synced/dir/on/vm path known to the recipe.
Currently I use this:
Vagrant.configure("2") do |config|
config.vm.provision :chef_solo do |chef|
chef.json = {
"base_directory" => "/vagrant" # THIS IS IT
}
end
end
It lets me use node["base_directory"] inside the recipe code, but there is a downside: if I were to write multiple recipes, it would be inconvenient to use node["base_directory"] in every recipe. It is much better than hardcoding the path, but it forces me to use the same key in chef.json for every recipe.
Furthermore, if I wished to share my recipe, I would force users to put that "base_directory" => "/vagrant" key/value pair in their Vagrantfiles.
Is there an API method to get this synced directory path on the VM in the recipe code? Or, more generally: is there a way to get Vagrant-specific properties from Chef recipes?
I scoured the Vagrant docs, but there seems to be just a single page on that topic, and because it is specific to Vagrant, there is no related information in the Chef docs either.
So it seems there's some disconnect in the understanding of how this is meant to work.
When writing a recipe, it's common to use node attributes to define where things will end up, such as your web root directory.
I can conceive of the recipe's attributes file containing:
default['base_directory'] = '/var/www/html'
Which would apply to many production servers out there.
Then, when writing your recipes, use this attribute to send the files where you want them to go, e.g.:
cookbook_file "#{node['base_directory']}/css/myfile.css" do
  owner "root"
  ...
end
When sharing your cookbook, anyone executing this on a server that has the /var/www/html directory will receive your file in the correct place.
In your Vagrantfile, in order to override the node's base_directory attribute to the synced directory, you can do something like this:
SYNCED_FOLDER = "/synced/dir/on/vm"

Vagrant.configure("2") do |config|
  config.vm.synced_folder ".", SYNCED_FOLDER
  config.vm.provision :chef_solo do |chef|
    chef.json = {
      "base_directory" => SYNCED_FOLDER
    }
  end
end
However, you mentioned that you didn't want to have to specify base_directory in your recipes, so I'd ask what node attribute you are using to drive the target location of your web root?
If you're using something like the apache2 cookbook from the community site, then there's already an attribute for this: node['apache']['docroot_dir'], so using that you can control where things are referenced.
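For example, if the web root were driven by that apache2 cookbook attribute, the Vagrantfile override would look roughly like this (a sketch along the lines of the earlier snippet):
Vagrant.configure("2") do |config|
  config.vm.synced_folder ".", "/synced/dir/on/vm"
  config.vm.provision :chef_solo do |chef|
    # Point the community cookbook's docroot at the synced folder
    chef.json = {
      "apache" => { "docroot_dir" => "/synced/dir/on/vm" }
    }
  end
end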
Need some help connecting Resque Web UI (Rack config.ru) to a Redis server with AUTH
Using Resque + Unicorn + Nginx; most of it was installed with apt-get install (Debian) and gem install.
So basically Unicorn loads up resque-web (via Rack) using the standard config.ru:
http://etagwerker.wordpress.com/2011/06/27/how-to-setup-resque-web-with-nginx-and-unicorn/
#!/usr/bin/env ruby
# Put this in /var/www/resque-web/config.ru
require 'logger'
$LOAD_PATH.unshift ::File.expand_path(::File.dirname(__FILE__) + '/lib')
require 'resque/server'
Resque::Server.use Rack::Auth::Basic do |username, password|
  password == '{{password}}' # password
end

# Set the RESQUE_CONFIG env variable if you've a `resque.rb` or similar
# config file you want loaded on boot.
if ENV['RESQUE_CONFIG'] && ::File.exists?(::File.expand_path(ENV['RESQUE_CONFIG']))
  load ::File.expand_path(ENV['RESQUE_CONFIG'])
end
use Rack::ShowExceptions
run Resque::Server.new
I'm trying to find out how to connect this to a Redis server with AUTH per the documentation here: http://redis.io/topics/security (basically in /etc/redis/redis.conf)
This Rack configuration seems to connect only to a "vanilla" Redis server using defaults (localhost with the standard 6379 port). How do I specify the Redis connection so I can pass the user/password in the format below?
redis://user:PASSWORD@redis-server:6379
I've tried using ENV['RESQUE_CONFIG'] to load up a resque.rb file
require 'resque'
Resque.redis = Redis.new(:password => '{{password}}')
This gets pulled in via /etc/unicorn/resque-web.conf:
# Put this in /etc/unicorn/resque-web.conf
RAILS_ROOT=/var/www/resque-web
RAILS_ENV=production
RESQUE_CONFIG=/var/www/resque-web/config/resque.rb
but it's still not really working.
BTW, everything works without Redis AUTH, just using the "vanilla" localhost Redis connection.
Try this:
redis_client = Redis.new(:url => "redis://user:PASSWORD@redis-server:6379")
and then do this:
Resque.redis = redis_client
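Applied to the setup in the question, the resque.rb loaded via RESQUE_CONFIG could then look roughly like this (host and password are placeholders; classic Redis AUTH has no username, so only the password part of the URL matters):
# /var/www/resque-web/config/resque.rb
require 'resque'
require 'redis'

Resque.redis = Redis.new(:url => "redis://:PASSWORD@redis-server:6379")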
Hope this helps.