Vagrantfile setup to allow Ansible to SSH in - ssh

I have a Vagrantfile from a book about Ansible for DevOps. The issue I have is that I can SSH into the servers but Ansible cannot. Here is my Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # General Vagrant VM configuration
  config.vm.box = "geerlingguy/centos7"
  config.ssh.insert_key = false
  config.vm.synced_folder ".", "/vagrant", disabled: true
  config.vm.provider :virtualbox do |v|
    v.memory = 256
    v.linked_clone = true
  end
  # Application server 1
  config.vm.define "app1" do |app|
    app.vm.hostname = "orc-app1.dev"
    app.vm.network :private_network, ip: "192.168.60.4"
  end
  # Application server 2
  config.vm.define "app2" do |app|
    app.vm.hostname = "orc-app2.dev"
    app.vm.network :private_network, ip: "192.168.60.5"
  end
  # Database server
  config.vm.define "db" do |db|
    db.vm.hostname = "orc-db.dev"
    db.vm.network :private_network, ip: "192.168.60.6"
  end
end
And my Ansible hosts file:
# Application servers
[app]
192.168.60.4
192.168.60.5
# Database servers
[db]
192.168.60.6
# Group 'multi' with all servers
[multi:children]
app
db
# Variables that will be applied to all servers
[multi:vars]
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key
I know I can explicitly add ansible_ssh_port=2200 etc., but I'd rather have it set up in the Vagrantfile.

You could try several things:
Set the full ssh key path in the Ansible config.
Try connecting yourself, using ssh.
Check that port 22 is open, using telnet. If it's closed, you can try disabling the firewall in the VM; CentOS has it enabled by default.
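For example, a minimal check from the host (assuming the inventory above is saved as a file named hosts in the current directory; adjust the name if yours differs):
ssh -i ~/.vagrant.d/insecure_private_key vagrant@192.168.60.4
ansible multi -i hosts -m ping
And inside a VM, if port 22 turns out to be filtered, firewalld can be stopped for a quick test:
sudo systemctl stop firewalld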

Related

InfluxDB refuses connection from telegraf when changing from HTTP to HTTPS

On my CentOS 7 server, I have set up Telegraf and InfluxDB. InfluxDB successfully receives data from Telegraf and stores it in the database. But when I reconfigure both services to use HTTPS, I see the following error in Telegraf's logs:
Dec 29 15:13:11 localhost.localdomain telegraf[31779]: 2020-12-29T13:13:11Z E! [outputs.influxdb] When writing to [https://127.0.0.1:8086]: Post "https://127.0.0.1:8086/write?db=GRAFANA": dial tcp 127.0.0.1:8086: connect: connection refused
Dec 29 15:13:11 localhost.localdomain telegraf[31779]: 2020-12-29T13:13:11Z E! [agent] Error writing to outputs.influxdb: could not write any address
InfluxDB doesn't show any errors in its logs.
Below is my telegraf.conf file:
[agent]
  hostname = "local"
  flush_interval = "15s"
  interval = "15s"
# Input Plugins
[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false
[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs"]
[[inputs.io]]
[[inputs.mem]]
[[inputs.net]]
[[inputs.system]]
[[inputs.swap]]
[[inputs.netstat]]
[[inputs.processes]]
[[inputs.kernel]]
# Output Plugin InfluxDB
[[outputs.influxdb]]
  database = "GRAFANA"
  urls = [ "https://127.0.0.1:8086" ]
  insecure_skip_verify = true
  username = "telegrafuser"
  password = "metricsmetricsmetricsmetrics"
And this is the uncommented [http] section of the influxdb.conf
# Determines whether HTTP endpoint is enabled.
enabled = false
# Determines whether the Flux query endpoint is enabled.
flux-enabled = true
# The bind address used by the HTTP service.
bind-address = ":8086"
# Determines whether user authentication is enabled over HTTP/HTTPS.
auth-enabled = false
# Determines whether HTTPS is enabled.
https-enabled = true
# The SSL certificate to use when HTTPS is enabled.
https-certificate = "/etc/ssl/server-cert.pem"
# Use a separate private key location.
https-private-key = "/etc/ssl/server-key.pem"
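A quick way to check whether the InfluxDB HTTP service is listening on 8086 at all (a diagnostic sketch; -k skips certificate verification, mirroring insecure_skip_verify above):
curl -k -i https://127.0.0.1:8086/ping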

Gitlab-ci problems pushing to the private registry with HTTPS

I'm trying to push an image to my registry with the GitLab CI. I can log in without any problems (the before_script). However, I get the following error on the push command:
error parsing HTTP 400 response body: invalid character '<' looking for beginning of value: "<html>\r\n<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>\r\n<body>\r\n<center><h1>400 Bad Request</h1></center>\r\n<center>The plain HTTP request was sent to HTTPS port</center>\r\n<hr><center>nginx</center>\r\n</body>\r\n</html>\r\n"
This is the config.toml from the gitlab-runner in use:
[[runners]]
  name = "e736f9d48a40"
  url = "https://gitlab.domain.com/"
  token = "token"
  executor = "docker"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
This is the relevant part of the gitlab-ci configuration:
image: docker
services:
  - docker:dind
variables:
  BACKEND_PROJECT: "test"
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
containerize:
  stage: containerize
  before_script:
    - "docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY"
  only:
    - master
  script:
    - "cd backend/"
    - "docker build -t $CI_REGISTRY_IMAGE/api:latest ."
    - "docker push $CI_REGISTRY_IMAGE/api:latest"
The GitLab Omnibus registry configuration:
registry_external_url 'https://gitlab.domain.com:5050'
registry_nginx['enable'] = true
registry_nginx['ssl_certificate_key'] = "/etc/letsencrypt/live/gitlab.domain.com/privkey.pem"
registry_nginx['ssl_certificate'] = "/etc/letsencrypt/live/gitlab.domain.com/fullchain.pem"
registry_nginx['port'] = 443
registry_nginx['redirect_http_to_https'] = true
### Settings used by Registry application
registry['enable'] = true
registry_nginx['proxy_set_headers'] = {
  "Host" => "$http_host",
  "X-Real-IP" => "$remote_addr",
  "X-Forwarded-For" => "$proxy_add_x_forwarded_for",
  "X-Forwarded-Proto" => "http",
  "X-Forwarded-Ssl" => "on"
}
Can someone help me with this problem?
Okay, the solution was quite simple. I only had to change the
"X-Forwarded-Proto" => "http",
to
"X-Forwarded-Proto" => "https",

Vagrant provision multiple playbooks with multiple ssh users

I'm trying to provision a VM using Vagrant's Ansible provisioner, but I have two playbooks and they need to use different SSH users. My use case is this: I have a pre-provisioning script that runs under the vagrant SSH user that is set up by default. My pre-provisioning script then adds a different SSH user, provisioner, that is set up to SSH onto the VM with its own key. The actual provisioning script has a task that deletes the insecure vagrant user on the system, so it has to run as a different SSH user, provisioner, the user that the pre-provisioner creates.
I cannot figure out how to change the SSH user in the Vagrantfile. The example below is how far I've gotten. Despite changing config.ssh.username, Vagrant always sets the SSH user to the last value, in this case provisioner, and that doesn't authenticate when running the pre-provisioning script because the user hasn't been created yet.
Can I override the SSH user somehow? Maybe with an Ansible variable inside the do |ansible| block (below)?
Is what I'm trying to achieve possible? It seems so straightforward I'm shocked I'm having this much trouble with it.
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
config.vm.box = "base_box"
config.vm.box_url = "s3://bucket/base-box/base.box"
config.vm.network "private_network", ip: "10.0.3.10"
config.ssh.keep_alive = true
config.vm.define "vagrant_image"
config.vm.provision "ansible" do |ansible_pre|
config.ssh.username = "vagrant"
ansible_pre.playbook = "provisioning/pre_provisioning.yml"
ansible_pre.host_vars = {
"vagrant_image" => {
"ansible_host" => "127.0.0.1",
}
}
ansible_pre.vault_password_file = ENV['ANSIBLE_VAULT_PASSWORD_FILE']
end
config.vm.provision "ansible" do |ansible|
config.ssh.username = "provisioner"
ansible.playbook = "provisioning/provisioning.yml"
ansible.host_vars = {
"vagrant_image" => {
"ansible_host" => "127.0.0.1",
}
}
ansible.vault_password_file = ENV['ANSIBLE_VAULT_PASSWORD_FILE']
end
end
(In case you were wondering, the s3 box URL only works because I've installed the vagrant-s3auth (1.3.2) plugin.)
You can set it in several places. In the Vagrantfile (but not via config.ssh.username, which will be overridden), pass it through Ansible extra vars:
config.vm.provision "ansible" do |ansible_pre|
ansible_pre.playbook = "provisioning/pre_provisioning.yml"
ansible_pre.host_vars = {
"vagrant_image" => {
"ansible_host" => "127.0.0.1",
}
}
ansible_pre.extra_vars = {
ansible_user: "vagrant"
}
ansible_pre.vault_password_file = ENV['ANSIBLE_VAULT_PASSWORD_FILE']
end
config.vm.provision "ansible" do |ansible|
ansible.playbook = "provisioning/provisioning.yml"
ansible.host_vars = {
"vagrant_image" => {
"ansible_host" => "127.0.0.1",
}
ansible.extra_vars = {
ansible_user: "provisioner"
}
ansible.raw_ssh_args = "-i /path/to/private/key/id_rsa"
ansible.vault_password_file = ENV['ANSIBLE_VAULT_PASSWORD_FILE']
end
But you can also write a single playbook and switch users inside. See ansible_user and meta: reset_connection.
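A minimal sketch of that single-playbook approach (the task details are hypothetical; it assumes the play starts as the vagrant user with passwordless sudo, and that the provisioner user's SSH key is set up elsewhere):
- hosts: vagrant_image
  become: true
  vars:
    ansible_user: vagrant
  tasks:
    - name: create the provisioner user (SSH key setup omitted here)
      user:
        name: provisioner
    - name: switch the connection user for the remaining tasks
      set_fact:
        ansible_user: provisioner
    - name: drop the cached SSH connection so the new user takes effect
      meta: reset_connection
    - name: remove the insecure vagrant user
      user:
        name: vagrant
        state: absent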

vagrant up command times out after "SSH auth method: private key" line

I have installed Ubuntu 16.04 in VirtualBox on a Windows 8.1 host.
I boot my VirtualBox VM with Ubuntu 16.04 and, inside it, when I try to run vagrant up, it freezes at the default: SSH auth method: private key line and finally times out after 600 seconds (which I set in the Vagrantfile).
Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period. If you look above, you should be able to see the error(s) that Vagrant had when attempting to connect to the machine. These errors are usually good hints as to what may be wrong. If you're using a custom box, make sure that networking is properly working and you're able to connect to the machine. It is a common problem that networking isn't setup properly in these boxes. Verify that authentication configurations are also setup properly,
as well. If the box appears to be booting properly, you may want to increase the timeout ("config.vm.boot_timeout") value.
I am running the vagrant up command from the project directory:
/var/www/yrc-2017$ vagrant up
My Vagrantfile looks like this:
if Gem::Version.new(Vagrant::VERSION) < Gem::Version.new("1.5.0")
  puts "ERROR: Outdated version of Vagrant"
  puts " Chassis requires Vagrant 1.5.0+ "
  puts
  exit 1
end
if not File.exist?(File.join(File.dirname(__FILE__), "puppet", "modules", "apt", ".git"))
  puts "NOTICE: Submodules not found, updating for you"
  if not system("git submodule update --init", :chdir => File.dirname(__FILE__))
    puts "WARNING: Submodules may be missing, and could not automatically\ndownload them for you."
  end
  # Extra new line, please!
  puts
end
require_relative "puppet/chassis.rb"
CONF = Chassis.config
Chassis.install_extensions(CONF)
base_path = Pathname.new( File.dirname( __FILE__ ) )
module_paths = [ base_path.to_s + "/puppet/modules" ]
module_paths.concat Dir.glob( base_path.to_s + "/extensions/*/modules" )
module_paths.map! do |path|
  pathname = Pathname.new(path)
  pathname.relative_path_from(base_path).to_s
end
Vagrant.configure("2") do |config|
  # Set up potential providers.
  config.vm.provider "virtualbox" do |vb|
    # Use linked clones to preserve disk space.
    vb.linked_clone = true if Vagrant::VERSION =~ /^1.8/
  end
  config.vm.box = "bento/ubuntu-16.04"
  # Adding boot timeout
  config.vm.boot_timeout = 600
  # Enable SSH forwarding
  config.ssh.forward_agent = true
  # Disable updating of Virtual Box Guest Additions for faster provisioning.
  if Vagrant.has_plugin?("vagrant-vbguest")
    config.vbguest.auto_update = false
  end
  # Having access would be nice.
  if CONF['ip'] == "dhcp"
    config.vm.network :private_network, type: "dhcp", hostsupdater: "skip"
  else
    config.vm.network :private_network, ip: CONF['ip'], hostsupdater: "skip"
  end
  config.vm.hostname = CONF['hosts'][0]
  config.vm.network "forwarded_port", guest: 22, host: 2222, host_ip: "127.0.0.1", id: 'ssh'
  preprovision_args = [
    CONF['apt_mirror'].to_s,
    CONF['database']['has_custom_prefix'] ? "" : "check_prefix"
  ]
  config.vm.provision :shell, :path => "puppet/preprovision.sh", :args => preprovision_args
  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "puppet/manifests"
    puppet.manifest_file = "development.pp"
    module_paths.map! { |rel_path| "/vagrant/" + rel_path }
    puppet.options = "--modulepath " + module_paths.join( ':' ).inspect
    puppet.options += " --hiera_config /dev/null"
    puppet.options += " --disable_warnings=deprecations"
  end
  config.vm.provision :shell do |shell|
    shell.path = "puppet/postprovision.sh"
    shell.args = [
      # 0 = hostname
      CONF['hosts'][0],
      # 1 = username
      CONF['admin']['user'],
      # 2 = password
      CONF['admin']['password']
    ]
  end
  synced_folders = CONF["synced_folders"].clone
  synced_folders["."] = "/vagrant"
  mount_opts = CONF['nfs'] ? [] : ["dmode=777","fmode=777"]
  synced_folders.each do |from, to|
    config.vm.synced_folder from, to, :mount_options => mount_opts, :nfs => CONF['nfs']
    if CONF['nfs'] && Vagrant.has_plugin?("vagrant-bindfs")
      config.bindfs.bind_folder to, to
    end
  end
  # Success?
end
I only added the following two lines to the above file:
config.vm.boot_timeout = 600
Reference
and
config.vm.network "forwarded_port", guest: 22, host: 2222, host_ip: "127.0.0.1", id: 'ssh'
Reference
What should I do?
UPDATE
I have the following settings in VirtualBox > System on Windows 8.1
Paravirtualization Interface: Default
Hardware Virtualization:
- Enable VT-x/AMD-V
- Enable Nested Paging
And all the options above are disabled, meaning I cannot change anything.
Try and write "Vagrant ssh" after vagrant up, and if it asks for credentials it is "vagrant", if the issue is not fixed with a vagrant up --provision or a vagrant reload (halt and up), then delete your box and rebuild it from the box you started out with.
it have happen to me some times, when i try to upgrade the boxes i have, then i just have to re-setup the box and everything works.
it happens that the system does not gets the system propperly started.
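In command form, the sequence suggested above looks roughly like this (the destroy step throws the VM away, so treat it as a last resort):
vagrant ssh                       # password, if prompted, is "vagrant"
vagrant up --provision
vagrant reload                    # equivalent to halt + up
vagrant destroy && vagrant up     # rebuild from the base box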

Initializing an ssh session from cap 3?

I used to use cap-ssh as a shortcut to initiate an ssh connection to the server, but it doesn't look like it works in Capistrano 3 anymore.
Does anyone have any suggestions for starting an ssh connection from Capistrano in cap 3?
You can define an ssh task like this:
desc 'Start an ssh session to your servers.'
task :ssh do
  role = (ENV['ROLE'] || :app).to_sym
  on roles(role) do
    hosts = env.instance_variable_get(:@servers).instance_variable_get(:@servers)
    hosts = hosts.select { |h| h.roles.include? role } if role
    if hosts.size > 1
      $stdout.puts "Pick a server to connect to:"
      hosts.each.with_index do |host, i|
        $stdout.puts "\t#{i + 1}: #{host.user}@#{host.hostname} role: #{host.roles.to_a}"
      end
      selected = $stdin.gets.chomp
      selected = 1 if selected.empty?
      host = hosts[selected.to_i - 1]
    else
      host = hosts.first
    end
    fail "No server defined!" unless host
    port = host.netssh_options[:port] || fetch(:ssh_options) && fetch(:ssh_options)[:port] || 22
    system "ssh -t -p #{port} #{host.user}@#{host.hostname} #{host.netssh_options[:forward_agent] ? '-A' : ''} 'cd #{current_path}; bash --login'"
  end
end
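Usage then looks something like this (ROLE is optional and falls back to :app in the task above):
cap production ssh
ROLE=db cap production ssh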