I am trying to install an old version of RabbitMQ using Chef (cookbook 'rabbitmq', '~> 5.8.5') and Test Kitchen; below is my configuration.
Attributes:
# Erlang
default['erlang']['install_method'] = 'source'
default['erlang']['source']['version'] = 'R13B03'
default['erlang']['source']['checksum'] = 'e7c46c8b2778f22064a3b369c1a1b572a1cc0e8a2198166858d4b9a1b488d662'
# RabbitMQ
default['rabbitmq']['erlang']['enabled'] = true
default['rabbitmq']['version'] = '3.4.4'
default['rabbitmq']['rpm_package'] = 'rabbitmq-server-3.4.4-1.noarch.rpm'
Recipe:
include_recipe 'rabbitmq::default'
When I run kitchen converge, I am getting the following exception:
Running handlers:
[2020-08-22T22:20:07+00:00] ERROR: Running exception handlers
Running handlers complete
[2020-08-22T22:20:07+00:00] ERROR: Exception handlers complete
Chef Infra Client failed. 9 resources updated in 06 minutes 26 seconds
[2020-08-22T22:20:07+00:00] FATAL: Stacktrace dumped to /tmp/kitchen/cache/chef-stacktrace.out
[2020-08-22T22:20:07+00:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[2020-08-22T22:20:07+00:00] FATAL: Mixlib::ShellOut::ShellCommandFailed: rpm_package[/tmp/kitchen/cache/rabbitmq-server-3.4.4-1.noarch.rpm] (rabbitmq::default line 224) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of ["rpm", "-i", "/tmp/kitchen/cache/rabbitmq-server-3.4.4-1.noarch.rpm"] ----
STDOUT:
STDERR: warning: /tmp/kitchen/cache/rabbitmq-server-3.4.4-1.noarch.rpm: Header V4 DSA/SHA1 Signature, key ID 056e8e56: NOKEY
error: Failed dependencies:
erlang >= R13B-03 is needed by rabbitmq-server-3.4.4-1.noarch
---- End output of ["rpm", "-i", "/tmp/kitchen/cache/rabbitmq-server-3.4.4-1.noarch.rpm"] ----
Ran ["rpm", "-i", "/tmp/kitchen/cache/rabbitmq-server-3.4.4-1.noarch.rpm"] returned 1
But when I log in to the VM, I can see that Erlang is installed:
[vagrant@kitchen-rmq-server-centos-7 ~]$ erl
Erlang R13B03 (erts-5.7.4) [source] [64-bit] [rq:1] [async-threads:0] [hipe] [kernel-poll:false]
Eshell V5.7.4 (abort with ^G)
1>
And it is the same version required by RabbitMQ (R13B03).
Any idea how to solve this issue?
Edit: to replicate the issue, see https://github.com/Proximator/chef-rmq
First, make sure Erlang is installed by the rabbitmq cookbook itself, and not by any other means. Your attributes build Erlang from source, so the erl binary is on the PATH but RPM has no erlang package in its database, which is why the rabbitmq-server RPM still reports the dependency as missing. This is the note found on the Chef Supermarket page for the rabbitmq cookbook:
The packages cannot be installed alongside other Erlang packages, for example, those from standard Debian repositories or Erlang Solutions.
To make sure that the community erlang cookbook is not used by rabbitmq::default, leave default['rabbitmq']['erlang']['enabled'] set to true so the rabbitmq cookbook manages its own Erlang.
Also, there is a compatibility matrix of RabbitMQ and Erlang versions; RabbitMQ 3.7.0 is the lowest supported version listed there, and its minimum compatible Erlang version is 19.3.
There are zero-dependency Erlang RPMs, "just enough to run RabbitMQ", documented here:
https://github.com/rabbitmq/erlang-rpm
For example, to install RabbitMQ 3.7.x with a compatible Erlang 19.3.x, you should have these attributes:
default['rabbitmq']['erlang']['enabled'] = true
default['rabbitmq']['version'] = '3.7.6'
default['rabbitmq']['erlang']['yum']['baseurl'] = 'https://dl.bintray.com/rabbitmq-erlang/rpm/erlang/19/el/7'
default['rabbitmq']['erlang']['version'] = '19.3.6.13'
Then include the recipes below:
include_recipe 'rabbitmq::erlang_package'
include_recipe 'rabbitmq::default'
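After a successful converge, you can sanity-check on the node that the RPM-installed Erlang is the one the package manager actually sees; two standard commands (not part of the cookbook) for that:
rpm -q erlang          # should list the erlang RPM pulled in by rabbitmq::erlang_package
rabbitmqctl status     # the broker should now start, since the RPM dependency is satisfied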
Related
I'm trying to build a reactor sls file, which starts running when an event occurs.
The content of the sls file should do the same as the following CLI commands:
sudo salt minion git.add /srv/salt .
sudo salt minion git.commit /srv/salt test
sudo salt minion git.push /srv/salt origin master identity=/home/autogit/.ssh/id_rsa
If I run the code below, triggered by the reactor, I get the following error message.
[DEBUG ] Reactor is populating module client cache
[ERROR ] An un-handled exception from the multiprocessing process 'Reactor-9:1' was caught:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/salt/utils/process.py", line 765, in _run
return self._original_run()
File "/usr/lib/python2.7/dist-packages/salt/utils/reactor.py", line 271, in run
self.call_reactions(chunks)
File "/usr/lib/python2.7/dist-packages/salt/utils/reactor.py", line 228, in call_reactions
self.wrap.run(chunk)
File "/usr/lib/python2.7/dist-packages/salt/utils/reactor.py", line 330, in run
self.populate_client_cache(low)
File "/usr/lib/python2.7/dist-packages/salt/utils/reactor.py", line 324, in populate_client_cache
self.reaction_class[reaction_type](self.opts['conf_file'])
KeyError: u'module'
[CRITICAL] Engine 'reactor' could not be started!
I've tried different syntaxes (old style and new style) but couldn't figure out what the problem is; I always get KeyError: u'module' or u'git'.
I also tried it with a runner function to run it locally on the master.
git pull:
  module.run:
    - git.pull:
      - cwd: /srv/salt
      - remote: git@git.xyz.com:user/sbt.git
      - identity: /home/autogit/.ssh/id_rsa
    - git.add:
      - cwd: /srv/salt
      - filename: .
    - git.commit:
      - cwd: /srv/salt
      - remote: git@git.xyz.com:user/sbt.git
    - git.push:
      - cwd: /srv/salt
      - remote: git@git.xyz.com:user/sbt.git
      - identity: /home/autogit/.ssh/id_rsa
salt --versions-report
Salt Version:
Salt: 2019.2.0
Dependency Versions:
cffi: Not Installed
cherrypy: unknown
dateutil: 2.6.1
docker-py: Not Installed
gitdb: 2.0.3
gitpython: 2.1.8
ioflo: Not Installed
Jinja2: 2.10
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: Not Installed
Mako: 1.0.7
msgpack-pure: Not Installed
msgpack-python: 0.5.6
mysql-python: Not Installed
pycparser: Not Installed
pycrypto: 2.6.1
pycryptodome: Not Installed
pygit2: Not Installed
Python: 2.7.15rc1 (default, Nov 12 2018, 14:31:15)
python-gnupg: 0.4.1
PyYAML: 3.12
PyZMQ: 16.0.2
RAET: Not Installed
smmap: 2.0.3
timelib: Not Installed
Tornado: 4.5.3
ZMQ: 4.2.5
System Versions:
dist: Ubuntu 18.04 bionic
locale: UTF-8
machine: x86_64
release: 4.15.0-46-generic
system: Linux
version: Ubuntu 18.04 bionic
Since I'm quite new to Salt, hopefully you can give me a hint about what I'm doing wrong.
You didn't provide the master config.
About the module.run confusion: add this to your settings (the minion config, and maybe the master too, since I don't know your use case):
use_superseded:
- module.run
That will enable your syntax; there is more documentation about it here: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.module.html#salt.states.module.run
In general: you are calling execution modules from a place where only state modules are allowed (the term "module" is heavily overused in Salt).
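If you'd rather not enable use_superseded, the legacy module.run syntax should also work; here is a minimal sketch of one of your calls rewritten old-style, one function per state ID (paths and remote copied from the question; exact kwarg handling may vary by Salt version):
git_pull:
  module.run:
    - name: git.pull
    - cwd: /srv/salt
    - remote: git@git.xyz.com:user/sbt.git
    - identity: /home/autogit/.ssh/id_rsa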
You didn't provide the full master config either. The Reactor requires dedicated configuration to match events to sls files:
https://docs.saltstack.com/en/latest/ref/configuration/master.html#master-reactor-settings
You can also check the write-up I did some time ago about events and reactors:
https://github.com/kiemlicz/util/wiki/Salt-Events-and-Reactor
Assuming you've configured your event-to-sls-file matching in the master config, your provided sls:
git pull:
  module.run:
    - git.pull:
      - cwd: /srv/salt
      - remote: git@git.xyz.com:user/
...
will not work.
Mind that the reaction happens on the Salt master, so the reaction sls file needs to declare the type of reaction (local, runner, etc.), since it is no longer the 'view of one minion' but possibly of tons of minions!
First create a runner reaction (which delegates to an orchestration sls file containing your logic wrapped with, I think, salt.function), as sketched below.
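A rough sketch of the whole chain, with assumed file paths, event tag, and minion target (adjust everything to your setup):
# /etc/salt/master.d/reactor.conf (assumed path): match the event to a reaction sls
reactor:
  - 'salt/custom/git_sync':        # assumed event tag
    - /srv/reactor/git_sync.sls

# /srv/reactor/git_sync.sls: the reaction runs on the master, so delegate to a runner
invoke_git_orchestration:
  runner.state.orchestrate:
    - args:
      - mods: orch.git_sync

# /srv/salt/orch/git_sync.sls: the orchestration wraps execution modules in salt.function
git_add:
  salt.function:
    - name: git.add
    - tgt: 'minion'                # assumed minion id
    - arg:
      - /srv/salt
      - .

git_commit:
  salt.function:
    - name: git.commit
    - tgt: 'minion'
    - arg:
      - /srv/salt
      - test
    - require:
      - salt: git_add

git_push:
  salt.function:
    - name: git.push
    - tgt: 'minion'
    - arg:
      - /srv/salt
      - origin
      - master
    - kwarg:
        identity: /home/autogit/.ssh/id_rsa
    - require:
      - salt: git_commit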
Help yourself with the aforementioned GitHub link to my attempt at explaining the Reactor.
Refer to the official docs as well: https://docs.saltstack.com/en/latest/topics/reactor/index.html
I was following the instructions given in the link [1] below. I updated my repository list and tried to upgrade the version, but it gives the following error.
Earlier, a two-node cluster of RabbitMQ 3.2.4 was running.
Error:
Reading package lists... Done
Building dependency tree
Reading state information... Done
rabbitmq-server is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 467 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] Y
Setting up rabbitmq-server (3.6.1-1) ...
* Starting message broker rabbitmq-server * FAILED - check /var/log/rabbitmq/startup_{log, _err}
[fail]
invoke-rc.d: initscript rabbitmq-server, action "start" failed.
dpkg: error processing package rabbitmq-server (--configure):
subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
rabbitmq-server
E: Sub-process /usr/bin/dpkg returned an error code (1)
[1]: http://monkeyhacks.com/post/installing-rabbitmq-on-ubuntu-14-04
In the error log I am getting this:
Cluster upgrade needed but other nodes are running [add_ip_to_listener,
exchange_decorators,
exchange_event_serial,gm,
mirrored_supervisor,
policy_apply_to,
queue_decorators,
remove_user_scope,
semi_durable_route,
topic_trie,
topic_trie_node,
user_admin_to_tags]
and I want [add_ip_to_listener,cluster_name,exchange_event_serial,gm,
internal_system_x,mirrored_supervisor,policy_apply_to,
recoverable_slaves,remove_user_scope,semi_durable_route,
topic_trie,topic_trie_node,user_admin_to_tags,
user_password_hashing]
I did an apt-get upgrade because the load times of our production server were about 40 seconds. I don't have a snapshot from before or after the upgrade (although there is a six-month-old snapshot). Load times improved to roughly 15 seconds, but our Erizo service stopped working; Erizo was also running on that instance. Restarting the services didn't help, so I tried downgrading the packages to their previous versions (https://askubuntu.com/questions/138284/how-to-downgrade-a-package-via-apt-get), just like it was, but for almost every package there was an error: the previous package version did not exist (which is strange, because I copied the output of dpkg -l).
Only a few of them were successfully downgraded, but I got a serious error when downgrading e2fslibs to its previous version: The following packages have unmet dependencies:
e2fsprogs: PreDepends: e2fslibs
Somehow that messed up initramfs and/or initramfs-tools, and now the instance is running but I can't get into it.
Connecting to the instance in the Google Cloud Platform console gives: Connecting...
Could not connect, retrying (1/3).
Google Cloud Shell isn't able to gcloud compute ssh: Permission denied (publickey).
Using gcloud locally also says Permission denied (publickey).
I checked the following:
There are project public keys defined; there aren't any instance public keys defined or any other metadata (Google Cloud SSH Keys).
In Google Cloud Platform >> Compute Engine >> VM instances >> permissions, I see 'Compute' is disabled.
I verified whether the daemon is running by navigating to the serial console output page and looking for output lines prefixed with the accounts-from-metadata: string. If you are using a standard image but you do not see these output prefixes in the serial console output, the daemon might be stopped. --> I don't see this, so I expect it's NOT running.
I checked the firewall rules (gcloud compute firewall-rules list):
default-allow-ssh default 0.0.0.0/0 tcp:22 //rule is present
The following packages were upgraded:
apt
apt-transport-https
apt-utils
binutils
cloud-init
cloud-initramfs-growroot
cloud-initramfs-rescuevol
comerr-dev
dosfstools
e2fslibs
e2fsprogs
gce-cloud-config
gce-daemon
gce-imagebundle
gce-startup-scripts
google-cloud-sdk
landscape-client
landscape-common
libapt-inst1.4
libapt-pkg4.12
libcomerr2
libss2
libudev0
mountall
nginx
nginx-common
nginx-full
ntp
ntpdate
procps
python-apt
python-apt-common
python-lazr.restfulclient
udev
unattended-upgrades
update-manager-core
upstart
whoopsie
x11-utils
This is what I get from the serial output:
- mountall: Event failed
- landscape-client is not configured, please run landscape-config.
What to do next?
Apply a startup script to the running instance (following https://cloud.google.com/compute/docs/startupscript) and try to perform apt-get upgrade?
Try to create a new public key (again) in Google Cloud Shell to access the instance?
In Google Cloud Shell, that key file was first generated after typing gcloud compute --project "enduring-palace-762" ssh --zone "europe-west1-c" "tta-media-test-2":
WARNING: The private SSH key file for Google Compute Engine does not exist.
WARNING: You do not have an SSH key for Google Compute Engine.
WARNING: [/usr/bin/ssh-keygen] will be executed to generate a key. This tool needs to create the directory /home/developer/.ssh
The generated public key was stored in /home/developer/.ssh/google_compute_engine.pub. I made a copy of that, prepended the username, and added the content of the public key to Compute Engine >> Metadata >> SSH Keys. *The key is accepted, but the username doesn't show up like it does for all the other username/key pairs.
I still get the Permission denied (publickey) error, though, when using gcloud compute ssh tta-media-test-2 --zone europe-west1-c.
When I provide the SSH key file like this (pwd is the folder where the key file is):
gcloud compute ssh tta-media-test-2 --zone europe-west1-c --ssh-key-file=my-ssh-keys_copy.pub
I get:
WARNING: The public SSH key file for Google Compute Engine does not exist.
WARNING: You do not have an SSH key for Google Compute Engine.
WARNING: [/usr/bin/ssh-keygen] will be executed to generate a key.
I get the same result when I generate a new key with ssh-keygen -t rsa -f my-ssh-keys.
Any other possible solution would be much appreciated.
[Update] I am able to SSH into the 'broken' instance from my local machine using ssh user@externalIpOfInstance. My plan is to bring it to an upgraded, stable state, create a snapshot, and see from there.
sudo apt-get -f install
0 upgraded, 0 newly installed, 0 to remove and 5 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Setting up initramfs-tools (0.99ubuntu13.5) ...
update-initramfs: deferring update (trigger activated)
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-3.13.0-79-generic
E: /usr/share/initramfs-tools/hooks/fixrtc failed with return 1.
update-initramfs: failed for /boot/initrd.img-3.13.0-79-generic with 1.
dpkg: error processing initramfs-tools (--configure):
subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
initramfs-tools
E: Sub-process /usr/bin/dpkg returned an error code (1)
sudo apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages have been kept back:
google-chrome-stable
The following packages will be upgraded:
comerr-dev libcomerr2 libss2 unattended-upgrades
4 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
1 not fully installed or removed.
Need to get 0 B/188 kB of archives.
After this operation, 4,096 B of additional disk space will be used.
Do you want to continue [Y/n]? y
Preconfiguring packages ...
(Reading database ... 178509 files and directories currently installed.)
Preparing to replace comerr-dev 2.1-1.42-1ubuntu2.2 (using .../comerr-dev_2.1-1.42-1ubuntu2.3_amd64.deb) ...
Unpacking replacement comerr-dev ...
Preparing to replace libcomerr2 1.42-1ubuntu2.2 (using .../libcomerr2_1.42-1ubuntu2.3_amd64.deb) ...
Unpacking replacement libcomerr2 ...
Preparing to replace libss2 1.42-1ubuntu2.2 (using .../libss2_1.42-1ubuntu2.3_amd64.deb) ...
Unpacking replacement libss2 ...
Preparing to replace unattended-upgrades 0.76ubuntu1.1 (using .../unattended-upgrades_0.76ubuntu1.2_all.deb) ...
Unpacking replacement unattended-upgrades ...
Processing triggers for install-info ...
Processing triggers for man-db ...
Processing triggers for ureadahead ...
Setting up initramfs-tools (0.99ubuntu13.5) ...
update-initramfs: deferring update (trigger activated)
Setting up libcomerr2 (1.42-1ubuntu2.3) ...
Setting up comerr-dev (2.1-1.42-1ubuntu2.3) ...
Setting up libss2 (1.42-1ubuntu2.3) ...
Setting up unattended-upgrades (0.76ubuntu1.2) ...
Processing triggers for initramfs-tools ...
update-initramfs: Generating /boot/initrd.img-3.13.0-79-generic
E: /usr/share/initramfs-tools/hooks/fixrtc failed with return 1.
update-initramfs: failed for /boot/initrd.img-3.13.0-79-generic with 1.
dpkg: error processing initramfs-tools (--configure):
subprocess installed post-installation script returned error exit status 1
No apport report written because MaxReports is reached already
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place
Errors were encountered while processing:
initramfs-tools
E: Sub-process /usr/bin/dpkg returned an error code (1)
sudo apt-get remove initramfs-tools-bin
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
cron : Depends: adduser but it is not going to be installed
procps : Depends: initscripts
upstart : Depends: initscripts
Depends: mountall
Depends: ifupdown (>= 0.6.10ubuntu5)
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.
What should I do here?
If you were able to SSH into the instance using a given SSH key before, the most likely reason it would stop working is that you somehow removed that SSH key, or that the SSH daemon isn't running or is otherwise broken. It appears as though the downgrade broke this machine.
Why do you need this particular VM instance? Does it have important data? If so, you can shut it off, mount its disk using a fresh VM instance, and copy that data off.
If it runs a service, you should probably cut over to a new machine: even if you're able to get into the instance, there's no telling what still works and what doesn't.
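If you go the data-recovery route, here is a rough gcloud sketch; the instance name and zone are taken from the question, and the disk and device names are assumptions, so adjust them to your project:
# stop the broken instance and move its boot disk to a fresh rescue VM
gcloud compute instances stop tta-media-test-2 --zone europe-west1-c
gcloud compute instances detach-disk tta-media-test-2 --disk tta-media-test-2 --zone europe-west1-c
gcloud compute instances create rescue-vm --zone europe-west1-c
gcloud compute instances attach-disk rescue-vm --disk tta-media-test-2 --zone europe-west1-c

# on the rescue VM, mount the old disk read-only and copy the data off
gcloud compute ssh rescue-vm --zone europe-west1-c
sudo mkdir -p /mnt/broken
sudo mount -o ro /dev/sdb1 /mnt/broken   # device path is an assumption; check with lsblk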
I'm facing an issue with the BigBlueButton installation:
Reading state information...
You might want to run 'apt --fix-broken install' to correct these.
The following packages have unmet dependencies:
bigbluebutton : Depends: bbb-config but it is not going to be installed
gce-compute-image-packages : Depends: google-compute-engine but it is not going to be installed
E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).
I am new to Chef and am following the "Learning Chef" book from O'Reilly to learn the Chef basics.
In chapter 07 it describes installing the httpd service on a Chef client (node) from the Chef host using a cookbook.
This is what my .kitchen.yaml file looks like:
---
driver:
  name: vagrant
provisioner:
  name: chef_zero
platforms:
  - name: centos_apache
    driver:
      box: learningchef/centos65
      boxurl: learningchef/centos65
suites:
  - name: default
    run_list:
      - recipe[my_apache::default]
    attributes:
The recipe for installing the httpd service looks like this:
#
# Cookbook Name:: my_apache
# Recipe:: default
#
# Copyright (c) 2015 The Authors, All Rights Reserved.
#
yum_package 'httpd' do
  source "/home/vipul/Downloads/httpd-2.2.15-39.el6.centos.x86_64.rpm"
  action :install
end
And this is the log I get after executing the command "kitchen converge":
-----> Starting Kitchen (v1.4.0)
-----> Converging <default-centos-apache>...
Preparing files for transfer
Preparing dna.json
Preparing current project directory as a cookbook
Removing non-cookbook files before transfer
Preparing validation.pem
Preparing client.rb
-----> Chef Omnibus installation detected (install only if missing)
Transferring files to <default-centos-apache>
Starting Chef Client, version 12.4.0
[2015-07-08T12:56:06+00:00] WARN: Child with name 'dna.json' found in multiple directories: /tmp/kitchen/dna.json and /tmp/kitchen/dna.json
resolving cookbooks for run list: ["my_apache::default"]
Synchronizing Cookbooks:
- my_apache
Compiling Cookbooks...
Converging 1 resources
Recipe: my_apache::default
================================================================================
Error executing action `install` on resource 'yum_package[httpd]'
================================================================================
Mixlib::ShellOut::ShellCommandFailed
------------------------------------
Expected process to exit with [0], but received '1'
---- Begin output of /usr/bin/python /opt/chef/embedded/apps/chef/lib/chef/provider/package/yum-dump.py --options --installed-provides --yum-lock-timeout 30 ----
STDOUT: [option installonlypkgs] kernel kernel-bigmem installonlypkg(kernel-module) installonlypkg(vm) kernel-enterprise kernel-smp kernel-debug kernel-unsupported kernel-source kernel-devel kernel-PAE kernel-PAE-debug
Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=6&arch=x86_64&repo=os error was
14: PYCURL ERROR 7 - "Failed to connect to 2a02:2498:1:3d:5054:ff:fed3:e91a: Network is unreachable"
STDERR: yum-dump Repository Error: Cannot find a valid baseurl for repo: base
---- End output of /usr/bin/python /opt/chef/embedded/apps/chef/lib/chef/provider/package/yum-dump.py --options --installed-provides --yum-lock-timeout 30 ----
Ran /usr/bin/python /opt/chef/embedded/apps/chef/lib/chef/provider/package/yum-dump.py --options --installed-provides --yum-lock-timeout 30 returned 1
Resource Declaration:
---------------------
# In /tmp/kitchen/cache/cookbooks/my_apache/recipes/default.rb
8: yum_package 'httpd' do
9: source "/home/vipul/Downloads/httpd-2.2.15-39.el6.centos.x86_64.rpm"
10: action :install
11: end
Compiled Resource:
------------------
# Declared in /tmp/kitchen/cache/cookbooks/my_apache/recipes/default.rb:8:in `from_file'
yum_package("httpd") do
action :install
retries 0
retry_delay 2
default_guard_interpreter :default
package_name "httpd"
source "/home/vipul/Downloads/httpd-2.2.15-39.el6.centos.x86_64.rpm"
flush_cache {:before=>false, :after=>false}
declared_type :yum_package
cookbook_name "my_apache"
recipe_name "default"
end
Running handlers:
[2015-07-08T12:56:14+00:00] ERROR: Running exception handlers
Running handlers complete
[2015-07-08T12:56:14+00:00] ERROR: Exception handlers complete
Chef Client failed. 0 resources updated in 11.596431753 seconds
[2015-07-08T12:56:14+00:00] FATAL: Stacktrace dumped to /tmp/kitchen/cache/chef-stacktrace.out
[2015-07-08T12:56:14+00:00] ERROR: yum_package[httpd] (my_apache::default line 8) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of /usr/bin/python /opt/chef/embedded/apps/chef/lib/chef/provider/package/yum-dump.py --options --installed-provides --yum-lock-timeout 30 ----
STDOUT: [option installonlypkgs] kernel kernel-bigmem installonlypkg(kernel-module) installonlypkg(vm) kernel-enterprise kernel-smp kernel-debug kernel-unsupported kernel-source kernel-devel kernel-PAE kernel-PAE-debug
Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=6&arch=x86_64&repo=os error was
14: PYCURL ERROR 7 - "Failed to connect to 2a02:2498:1:3d:5054:ff:fed3:e91a: Network is unreachable"
STDERR: yum-dump Repository Error: Cannot find a valid baseurl for repo: base
---- End output of /usr/bin/python /opt/chef/embedded/apps/chef/lib/chef/provider/package/yum-dump.py --options --installed-provides --yum-lock-timeout 30 ----
Ran /usr/bin/python /opt/chef/embedded/apps/chef/lib/chef/provider/package/yum-dump.py --options --installed-provides --yum-lock-timeout 30 returned 1
[2015-07-08T12:56:14+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
>>>>>> Converge failed on instance <default-centos-apache>.
>>>>>> Please see .kitchen/logs/default-centos-apache.log for more details
>>>>>> ------Exception-------
>>>>>> Class: Kitchen::ActionFailed
>>>>>> Message: SSH exited (1) for command: [sh -c '
sudo -E /opt/chef/bin/chef-client --local-mode --config /tmp/kitchen/client.rb --log_level auto --force-formatter --no-color --json-attributes /tmp/kitchen/dna.json --chef-zero-port 8889
']
>>>>>> ----------------------
I want to install the httpd service using the local RPM package. The Chef client is already installed on the virtual machine.
I have tried various steps, but I always get the same output.
Update: So I did yum update on both my host and the client.
After that the output log changed. It now says that it couldn't find the package at the defined source, while it is present there. Please suggest:
-----> Starting Kitchen (v1.4.0)
-----> Converging <default-centos-apache>...
Preparing files for transfer
Preparing dna.json
Preparing current project directory as a cookbook
Removing non-cookbook files before transfer
Preparing validation.pem
Preparing client.rb
-----> Chef Omnibus installation detected (install only if missing)
Transferring files to <default-centos-apache>
Starting Chef Client, version 12.4.0
[2015-07-09T14:16:57+00:00] WARN: Child with name 'dna.json' found in multiple directories: /tmp/kitchen/dna.json and /tmp/kitchen/dna.json
resolving cookbooks for run list: ["my_apache::default"]
Synchronizing Cookbooks:
- my_apache
Compiling Cookbooks...
Converging 1 resources
Recipe: my_apache::default
================================================================================
Error executing action `install` on resource 'yum_package[httpd]'
================================================================================
Chef::Exceptions::Package
-------------------------
Package httpd not found: /home/vipul/Downloads/httpd-2.2.15-39.el6.centos.x86_64.rpm
Resource Declaration:
---------------------
# In /tmp/kitchen/cache/cookbooks/my_apache/recipes/default.rb
8: package "httpd" do
9: source "/home/vipul/Downloads/httpd-2.2.15-39.el6.centos.x86_64.rpm"
10: action :install
11: end
Compiled Resource:
------------------
# Declared in /tmp/kitchen/cache/cookbooks/my_apache/recipes/default.rb:8:in `from_file'
yum_package("httpd") do
action :install
retries 0
retry_delay 2
default_guard_interpreter :default
package_name "httpd"
source "/home/vipul/Downloads/httpd-2.2.15-39.el6.centos.x86_64.rpm"
flush_cache {:before=>false, :after=>false}
declared_type :package
cookbook_name "my_apache"
recipe_name "default"
end
Running handlers:
[2015-07-09T14:17:01+00:00] ERROR: Running exception handlers
Running handlers complete
[2015-07-09T14:17:01+00:00] ERROR: Exception handlers complete
Chef Client failed. 0 resources updated in 6.822340816 seconds
[2015-07-09T14:17:01+00:00] FATAL: Stacktrace dumped to /tmp/kitchen/cache/chef-stacktrace.out
[2015-07-09T14:17:01+00:00] ERROR: yum_package[httpd] (my_apache::default line 8) had an error: Chef::Exceptions::Package: Package httpd not found: /home/vipul/Downloads/httpd-2.2.15-39.el6.centos.x86_64.rpm
[2015-07-09T14:17:01+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
>>>>>> Converge failed on instance <default-centos-apache>.
>>>>>> Please see .kitchen/logs/default-centos-apache.log for more details
>>>>>> ------Exception-------
>>>>>> Class: Kitchen::ActionFailed
>>>>>> Message: SSH exited (1) for command: [sh -c '
sudo -E /opt/chef/bin/chef-client --local-mode --config /tmp/kitchen/client.rb --log_level auto --force-formatter --no-color --json-attributes /tmp/kitchen/dna.json --chef-zero-port 8889
']
>>>>>> ----------------------
You might want to try following the steps outlined here if you can't work out your proxy/network issues: http://xmodulo.com/how-to-fix-yum-errors-on-centos-rhel-or-fedora.html
The answer to the problem is in the comments on the question, by Mark, so I'm just pasting it here:
Terminal proxies are not enough. Kitchen is running chef client within a virtual machine. See: docs.chef.io/config_yml_kitchen.html#work-with-proxies
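Concretely, that means declaring the proxy in your kitchen YAML so it reaches the Chef run inside the VM, not just your shell; a minimal sketch assuming a proxy at proxy.example.com:3128 (hypothetical host and port):
provisioner:
  name: chef_zero
  http_proxy: http://proxy.example.com:3128
  https_proxy: http://proxy.example.com:3128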
I'm working on a project with Vagrant and Ansible and Virtualbox.
When I try to install Apache on an Ubuntu Precise (14.04) box, Vagrant fails; I have added more details below (see the update).
It seems to be a known bug, but even though I'm installing a newer version, the error still shows up.
I also tried what is stated here, but with no luck.
How can I resolve this issue?
Thank you.
UPDATED ANSWER
This is the Ansible task.
Version 1:
- name: Install Apache
  sudo: yes
  apt: pkg=apache2 state=latest
  register: apache2_apt
Output:
failed: [default] => {"failed": true}
stderr: E: Sub-process /usr/bin/dpkg returned an error code (1)
stdout: Reading package lists...
Building dependency tree...
Reading state information...
Suggested packages:
www-browser apache2-doc apache2-suexec-pristine apache2-suexec-custom
The following NEW packages will be installed:
apache2
0 upgraded, 1 newly installed, 0 to remove and 183 not upgraded.
Need to get 0 B/146 kB of archives.
After this operation, 460 kB of additional disk space will be used.
(Reading database ... 52932 files and directories currently installed.)
Unpacking apache2 (from .../apache2_2.4.12-1+deb.sury.org~precise+5_amd64.deb) ...
dpkg: error processing /var/cache/apt/archives/apache2_2.4.12-1+deb.sury.org~precise+5_amd64.deb (--unpack):
error setting ownership of `/var/www/html.dpkg-new': Operation not permitted
Processing triggers for man-db ...
Processing triggers for ureadahead ...
Errors were encountered while processing:
/var/cache/apt/archives/apache2_2.4.12-1+deb.sury.org~precise+5_amd64.deb
msg: '/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install 'apache2'' failed: E: Sub-process /usr/bin/dpkg returned an error code (1)
FATAL: all hosts have already failed -- aborting
Version 2:
- name: Install Apache
  command: "sudo apt-get install apache2"
  register: apache2_apt
Output:
failed: [default] => {"changed": true, "cmd": ["sudo", "apt-get", "install", "apache2"], "delta": "0:00:07.745095", "end": "2015-06-09 11:08:53.726031", "rc": 100, "start": "2015-06-09 11:08:45.980936", "warnings": []}
stderr: E: Sub-process /usr/bin/dpkg returned an error code (1)
stdout: Reading package lists...
Building dependency tree...
Reading state information...
Suggested packages:
www-browser apache2-doc apache2-suexec-pristine apache2-suexec-custom
The following NEW packages will be installed:
apache2
0 upgraded, 1 newly installed, 0 to remove and 183 not upgraded.
Need to get 0 B/146 kB of archives.
After this operation, 460 kB of additional disk space will be used.
(Reading database ... 52932 files and directories currently installed.)
Unpacking apache2 (from .../apache2_2.4.12-1+deb.sury.org~precise+5_amd64.deb) ...
dpkg: error processing /var/cache/apt/archives/apache2_2.4.12-1+deb.sury.org~precise+5_amd64.deb (--unpack):
error setting ownership of `/var/www/html.dpkg-new': Operation not permitted
Processing triggers for man-db ...
Processing triggers for ureadahead ...
ureadahead will be reprofiled on next reboot
Errors were encountered while processing:
/var/cache/apt/archives/apache2_2.4.12-1+deb.sury.org~precise+5_amd64.deb
FATAL: all hosts have already failed -- aborting
There are a few possible issues here:
You need to disable AppArmor or, better, add a rule to the AppArmor service so the script can access /var/www within the guest machine.
There may be trouble with host machine permissions for the /var/www folder. Check whether the user has access to the local folder that is mapped as a shared folder from host to guest; possibly you need to add permissions for the local user on the host machine (see the sketch after this list).
Try using ansible-galaxy to search for an already-created role that fixes both of the previous issues.
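For the second point: if /var/www is a VirtualBox synced folder, dpkg cannot change ownership of files inside it, which matches the "error setting ownership of /var/www/html.dpkg-new" message above. A hedged Vagrantfile sketch (the host path and owner are assumptions):
# Vagrantfile: give the synced folder the owner/group the apache2 package expects,
# or stop mapping /var/www entirely and let the package manage it.
config.vm.synced_folder "./www", "/var/www",
  owner: "www-data", group: "www-data"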