Packer error - sudo: no tty present and no askpass program specified - automation

I've been learning how to use Packer this week on my home lab, where I have an ESXi 7 host.
I'm simply trying to deploy an Ubuntu 18.04 VM, but at the end of the build I get this error in the Packer console:
sudo: no tty present and no askpass program specified
This is what I have done.
Build.json
Preseed.cfg
variables.json
Command I run:
sudo packer build -var-file=variables.json build.json
In ESXi I see the VM build, complete, and reboot; it gets an IP and I briefly get an SSH prompt before Packer deletes the VM after showing the message above.
This is the full error:
==> Ubuntu-18.04: Connecting to VNC over websocket...
==> Ubuntu-18.04: Waiting 10s for boot...
==> Ubuntu-18.04: Typing the boot command over VNC...
==> Ubuntu-18.04: Waiting for SSH to become available...
==> Ubuntu-18.04: Connected to SSH!
==> Ubuntu-18.04: Provisioning with shell script: /tmp/packer-shell382031289
==> Ubuntu-18.04: sudo: no tty present and no askpass program specified
==> Ubuntu-18.04: Provisioning step had errors: Running the cleanup provisioner, if present...
==> Ubuntu-18.04: Stopping virtual machine...
==> Ubuntu-18.04: Destroying virtual machine...
Build 'Ubuntu-18.04' errored after 8 minutes 21 seconds: Script exited with non-zero exit status: 1.Allowed exit codes are: [0]
==> Wait completed after 8 minutes 21 seconds
==> Some builds didn't complete successfully and had errors:
--> Ubuntu-18.04: Script exited with non-zero exit status: 1.Allowed exit codes are: [0]
==> Builds finished but no artifacts were created.
What am I doing wrong?

You need to tell sudo to read the password from stdin, like this:
echo 'password' | sudo -S echo "I am groot"
This way your sudo command should work.
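In a Packer JSON template, the usual place to wire this in is the shell provisioner's execute_command. The snippet below is only a sketch: the ssh_password variable and the inline command are assumptions, so match them to whatever your variables.json and build.json actually define:
{
  "type": "shell",
  "execute_command": "echo '{{ user `ssh_password` }}' | sudo -S -E sh '{{ .Path }}'",
  "inline": ["echo running as root via sudo -S"]
}
Alternatively, give the SSH user a passwordless NOPASSWD sudoers entry in your preseed so no password has to be piped at all.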

Related

aerospike installed using vagrant is not working properly

I'm trying to install Aerospike locally by following the steps here.
mkdir ~/aerospike-vm && cd ~/aerospike-vm
vagrant init aerospike/aerospike-ce
vagrant up
All of the above commands succeed with no errors.
Below is a minimal log of the vagrant up command.
default: Successfully added box 'aerospike/aerospike-ce' (v4.5.0.5) for 'virtualbox'!
.
.
.
Going on, assuming VBoxService is correct...
==> default: Checking for guest additions in VM...
default: The guest additions on this VM do not match the installed version of
default: VirtualBox! In most cases this is fine, but in rare cases it can
default: prevent things such as shared folders from working properly. If you see
default: shared folder errors, please make sure the guest additions within the
default: virtual machine match the version of VirtualBox you have installed on
default: your host and reload your VM.
default:
default: Guest Additions Version: 5.2.12
default: VirtualBox Version: 6.0
==> default: Configuring and enabling network interfaces...
==> default: Mounting shared folders...
default: /vagrant => /Users/rajkumar.natarajan/aerospike-vm
The commands below clearly show that only amc is running, but not aerospike.
BOSM0001-RANATA:aerospike-vm rajkumar.natarajan$ vagrant ssh -c "sudo service aerospike status"
asd is stopped
Connection to 127.0.0.1 closed.
BOSM0001-RANATA:aerospike-vm rajkumar.natarajan$ vagrant ssh -c "sudo service amc status"
amc (pid 1458) is running...
Connection to 127.0.0.1 closed.
BOSM0001-RANATA:aerospike-vm rajkumar.natarajan$ vagrant ssh -c "sudo grep -i cake /var/log/aerospike/aerospike.log"
Connection to 127.0.0.1 closed.
Any idea what is wrong here?
Do:
$ vagrant ssh
That will get you a shell inside the VM. Then see why aerospike did not start.
First try:
$ sudo service aerospike start
then
$ sudo service aerospike status
If it is not running, go through /var/log/aerospike/aerospike.log and see what error the log file is showing.
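If the log is long, something along these lines (a sketch, assuming the default log path from the question) pulls out the most recent errors:
$ sudo tail -n 100 /var/log/aerospike/aerospike.log
$ sudo grep -iE 'error|fail|critical' /var/log/aerospike/aerospike.log | tail -n 20
The failing line usually points at the configuration or resource problem that kept asd from starting.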

Ansible SSH error during play

I get a strange error with Ansible. The first role works fine, but when Ansible tries to execute the second one it fails because of an SSH error.
Environment:
OS: CentOS 7
Ansible version: 2.2.1.0
Python version: 2.7.5
OpenSSH version: OpenSSH_6.6.1p1, OpenSSL 1.0.1e-fips 11 Feb 2013
Ansible command which is executed:
ansible-playbook -vvvv -i inventory/dev playbook_update_system.yml --limit "db[0]"
Playbook:
- name: "HUB Playbook | Updating system packages on {{ ansible_hostname }}"
hosts: release_first_half
roles:
- upgrade_system_package
- reboot_server
Role: upgrade_system_package:
- name: "upgrading CentOS system packages on {{ ansible_hostname }}"
shell: sudo puppet apply -e 'exec{"upgrade-package":command => "/usr/bin/yum clean all; /usr/bin/yum -y update;"}'
when: ansible_distribution == 'CentOS' and 'cassandra' not in group_names
Role: reboot_server:
- name: "reboot CentOS [{{ ansible_hostname }}] server"
shell: sudo puppet apply -e 'exec{"reboot-os":command => "/usr/sbin/reboot"}'
when: ansible_distribution == 'CentOS' and 'cassandra' not in group_names
Current behavior:
Connection to "db1" node and execute role "upgrade system packages" => OK
Try to connect to "db1" and execute role "reboot_server" => failed due to ssh.
Error message returned by Ansible:
fatal: [db1]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: OpenSSH_6.6.1, OpenSSL 1.0.1e-fips 11 Feb 2013\r\ndebug1: Reading configuration data /USR/newtprod/.ssh/config\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 56: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug2: mux_client_hello_exchange: master version 4\r\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\r\ndebug3: mux_client_request_session: entering\r\ndebug3: mux_client_request_alive: entering\r\ndebug3: mux_client_request_alive: done pid = 64994\r\ndebug3: mux_client_request_session: session request sent\r\ndebug1: mux_client_request_session: master session id: 2\r\ndebug3: mux_client_read_packet: read header failed: Broken pipe\r\ndebug2: Control master terminated unexpectedly\r\nShared connection to db1 closed.\r\n",
"unreachable": true
}
I don't understand, because the previous role executed successfully on this node. Moreover, we have a lot of playbooks that use the same inventory file and they work fine. I tried another node too, with the same result.
It's a simple and pretty well-known issue: the shutdown process causes the SSH daemon to quit, which breaks the current SSH session (hence the "broken pipe" error). The server reboots properly, but the Ansible run gets interrupted.
You need to add a delay to your shell command and run the task with the async option, so that Ansible's SSH session can finish cleanly before it gets killed:
shell: sleep 5; sudo puppet apply -e 'exec{"reboot-os":command => "/usr/sbin/reboot"}'
async: 1
poll: 0
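If later plays need the host back up, a common companion task (a sketch, not part of the original answer; adjust delay and timeout to how long your servers take to boot) waits for SSH from the control machine:
- name: wait for the server to come back
  local_action: wait_for host={{ inventory_hostname }} port=22 state=started delay=30 timeout=600
  become: false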

How to check that httpd is enabled and running using InSpec with kitchen-docker on CentOS?

Running my tests with InSpec, I am unable to verify that httpd is enabled and running.
InSpec test
describe package('httpd') do
  it { should be_installed }
end

describe service('httpd') do
  it { should be_enabled }
  it { should be_running }
end

describe port(80) do
  it { should be_listening }
end
The output for kitchen verify is:
System Package
✔ httpd should be installed
Service httpd
✖ should be enabled
expected that `Service httpd` is enabled
✖ should be running
expected that `Service httpd` is running
Port 80
✖ should be listening
expected `Port 80.listening?` to return true, got false
Test Summary: 1 successful, 3 failures, 0 skipped
Recipe for httpd installation:
if node['platform'] == 'centos'
  # do centos installation
  package 'httpd' do
    action :install
  end

  execute 'chkconfig httpd on' do
    command 'chkconfig httpd on'
  end

  execute 'apache start' do
    command '/usr/sbin/httpd -DFOREGROUND &'
    action :run
  end
end
I do not know what I am doing wrong.
More info
CentOS version on docker instance
kitchen exec --command 'cat /etc/centos-release'
-----> Execute command on default-centos-72.
CentOS Linux release 7.2.1511 (Core)
Chef version installed in my host
Chef Development Kit Version: 1.0.3
chef-client version: 12.16.42
delivery version: master (83358fb62c0f711c70ad5a81030a6cae4017f103)
berks version: 5.2.0
kitchen version: 1.13.2
UPDATE 1: kitchen.yml with driver attributes
The platform has the configuration recommended by coderanger:
---
driver:
  name: docker
  use_sudo: false

provisioner:
  name: chef_zero

verifier: inspec

platforms:
  - name: centos-7.2
    driver:
      platform: rhel
      run_command: /usr/lib/systemd/systemd
      provision_command:
        - /bin/yum install -y initscripts net-tools wget

suites:
  - name: default
    run_list:
      - recipe[apache::default]
    verifier:
      inspec_tests:
        - test/integration
    attributes:
And this is the output when running kitchen test:
... some docker steps...
Step 16 : RUN echo ssh-rsa\ AAAAB3NzaC1yc2EAAAADAQABAAABAQDIp1HE9Zbtl3zAH2KKL1mVzb7BU1WxK7mi5xpIxNRBar7EZAAzxi1pVb1JwUXFSCVoAmUyfn/lBsKlgXnUD49pKrqkeLQQW7NoG3uCFiXBUTof8nFVuLYtw4CTiAudplyMvu5J7HQIP1Hve1caY27tFs/kpkQaXHCEuIkqgrM2rreMKK0n8im9b36L2SwWyM/GwqcIS1z9mMttid7ux0\+HOWWHqZ\+7gumOauh6tLRbtjrm3YYoaIAMyv945MIX8BFPXSQixThBVOlXGA9iTwUZWjU6WvZThxVFkKPR9KZtUTuTCT7Y8\+wFtQ/9XCHpPR00YDQvS0Vgdb/LhZUDoNqV\ kitchen_docker_key >> /home/kitchen/.ssh/authorized_keys
---> Using cache
---> c0e6b9e98d6a
Successfully built c0e6b9e98d6a
d486d7ebfe000a3138db06b1424c943a0a1ee7b2a00e8a396cb8c09f9527fb4b
0.0.0.0:32841
Waiting for SSH service on localhost:32841, retrying in 3 seconds
Waiting for SSH service on localhost:32841, retrying in 3 seconds
Waiting for SSH service on localhost:32841, retrying in 3 seconds
Waiting for SSH service on localhost:32841, retrying in 3 seconds
.....
You cannot, at least not out of the box. This is one area where kitchen-docker shows its rough edges. We try to pretend that a container is like a tiny VM, but in reality it isn't, and one notable place where the pretending breaks down is init systems. With CentOS 7, that means systemd. It is possible to get systemd to run inside the container (see https://github.com/poise/yolover-example/blob/master/.kitchen.yml#L17-L33), but not all features are supported and it can generally be a bit odd :-/ That example should be enough to make your tests work, though. For completeness, CentOS 6 uses Upstart, which just flat-out won't run inside Docker, so no love there either.
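Once systemd is actually running in the container, managing httpd through Chef's service resource is the simplest way to make InSpec's be_enabled and be_running matchers pass. A sketch (not from the original answer) that would replace the chkconfig and foreground-httpd execute blocks in the recipe above:
package 'httpd'

# Let systemd own the unit so `systemctl is-enabled httpd` and
# `systemctl is-active httpd` succeed, which is what InSpec checks on CentOS 7.
service 'httpd' do
  action [:enable, :start]
end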

Why does rsync fail with Jenkins?

When rsync is used in a Jenkins "Execute shell" build step on CentOS 6.4, it fails:
[workspace] $ /bin/sh -xe /tmp/hudson3424899639384884888.sh
+ rsync -av /var/lib/jenkins/jobs/myjob/workspace/target/classes/ myuser#myserver.com:/home/myuser/test
rsync: Failed to exec ssh: Permission denied (13)
rsync error: error in IPC code (code 14) at pipe.c(84) [sender=3.0.6]
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in IPC code (code 14) at io.c(600) [sender=3.0.6]
However, it works from the command line:
su jenkins
rsync -av /var/lib/jenkins/jobs/myjob/workspace/target/classes/ myuser#myserver.com:/home/myuser/test
sending incremental file list
sent 17875 bytes received 83 bytes 3990.67 bytes/sec
total size is 1981027 speedup is 110.31
What has to be done to make it work in Jenkins as well?
The problem was with SELinux on CentOS, which for some reason was blocking ssh for rsync.
Here is a line from /var/log/messages which shows that ssh was blocked:
Jun 12 13:45:59 myserver kernel: type=1400 audit(1434109559.911:33346): avc: denied { execute } for pid=11862 comm="rsync" name="ssh" dev=dm-1 ino=11931741 scontext=unconfined_u:system_r:rsync_t:s0 tcontext=system_u:object_r:ssh_exec_t:s0 tclass=file
For now we disabled SELinux on our server; the proper solution would be to create a custom policy module (1).
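The custom-policy route looks roughly like this (a sketch using audit2allow from the policycoreutils-python package; the module name rsync_ssh is arbitrary):
grep rsync /var/log/audit/audit.log | audit2allow -M rsync_ssh
semodule -i rsync_ssh.pp
Review the generated rsync_ssh.te before loading the module so you only allow the specific denial shown above.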
I had a similar problem.
In my case Jenkins was not executing rsync as the expected user (jenkins) but as another one (jboss in my case).
Adding whoami to the script and using a verbose ssh transport:
rsync -e "ssh -v" .......
helped to find the problem.
Note that when you add the jenkins user to a new group, the permissions only take effect after a slave (agent) restart.
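For reference, a debugging version of the Jenkins "Execute shell" step could look like this (a sketch reusing the paths from the question; user@host is the standard rsync remote syntax):
whoami
id
rsync -av -e "ssh -v" /var/lib/jenkins/jobs/myjob/workspace/target/classes/ myuser@myserver.com:/home/myuser/test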

Changing vagrant ssh user creates permission errors

I'm trying to alter a Vagrant box I created for my office. Currently, like most boxes, running vagrant ssh logs me in as the vagrant user, but team members get frustrated having to use su - xxadmin to switch to our primary admin user.
In my Vagrantfile, I added: config.ssh.username = "xxadmin", but then I started receiving the common Vagrant error when running vagrant up:
[default] Configuring and enabling network interfaces...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
sed -e '/^#VAGRANT-BEGIN/,/^#VAGRANT-END/ d' /etc/network/interfaces > /tmp/vagrant-network-interfaces
Stdout from the command:
Stderr from the command:
sudo: no tty present and no askpass program specified
and when running vagrant halt:
[default] Attempting graceful shutdown of VM...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
shutdown -h now
Stdout from the command:
Stderr from the command:
sudo: no tty present and no askpass program specified
What's going on here? Why would simply changing the SSH user create these errors? How do I find a way forward?
Specs:
OS X Mavericks (host)
Vagrant 1.3.5
Virtualbox 4.3.2
Debian 7 Wheezy (vm client)
In your box, you need to modify your sudoers file by running visudo and adding the following:
Defaults !requiretty
I kept running into this error until I made sure that my user's NOPASSWD sudoers entry was not being squashed.
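Putting both pieces together, the relevant sudoers lines for the xxadmin user from the question would look something like this (a sketch; always edit via visudo so a syntax error doesn't lock you out of sudo):
Defaults !requiretty
xxadmin ALL=(ALL) NOPASSWD: ALL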