Facing authentication and permissions issue when building nginx container

Code:
version: '2'
settings:
  conductor_base: centos:7
services:
  ansible.play_container:
    from: "nginx_base"
    roles:
      - nginx_container
    ports:
      - "xxx"
    user: root
    command: ['app/xxx/docker-entrypoint.sh']
registries: {}
OS/Environment:
Ansible Container, version 0.9.2
Linux, 3.10.0-327.13.1.el7.x86_64, #1 SMP Mon Feb 29 13:22:02 EST 2016, x86_64
2.7.5 (default, May 3 2017, 07:55:04)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-14)] /usr/bin/python
Command used:
sudo ansible-container --debug build
Error Log:
fatal: [ansible.nginx-container]: UNREACHABLE! => {
"changed": false,
"msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote temp path in ansible.cfg to a path rooted in \"/tmp\". Failed command was: ( umask 77 && mkdir -p \"` echo ~/.ansible/tmp/ansible-tmp-1512122910.09-221104636739910 `\" && echo ansible-tmp-1512122910.09-221104636739910=\"` echo ~/.ansible/tmp/ansible-tmp-1512122910.09-221104636739910 `\" ), exited with result 1, stderr output: Error response from daemon: Container c94048b2a046a9077fbff0558919ce55704e6b8634af611abe6ec2d58a2ccd18 is not running\n"
Please help in resolving the permissions error.

I'd be happy to try to help; however, there's a lot more information needed to triage your issue. If you'd be so kind as to clean up the formatting of your YAML above, include the output of ansible-container --debug version, and include the full debug output from your build with ansible-container --debug build, I'll be able to be of service. Thanks!

Related

Why Molecule is not able to start a docker container (Failed to create temporary directory)

I found a similar case here. I am using Molecule to test my Ansible roles, but for some reason it is skipping the "create" part and gives an error like:
fatal: [rabbitmq]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo ~/.ansible/tmp `\"&& mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1638541586.6239848-828-250053975102429 `\" && echo ansible-tmp-1638541586.6239848-828-250053975102429=\"` echo ~/.ansible/tmp/ansible-tmp-1638541586.6239848-828-250053975102429 `\" ), exited with result 1", "unreachable": true}
It is skipping the create process: Skipping, instances already created. However, nothing is running:
name@EEW00438:~/.cache$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
name@EEW00438:~/.cache$
What I tried:
molecule destroy
molecule reset
restart
rm -rf ~/.cache/
changed remote_tmp to /tmp/.ansible/ in /etc/ansible/ansible.cfg (see the sketch below)
reinstall molecule
This issue is only with one role.
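For reference, the remote_tmp change from the list above would look roughly like this (a sketch; the system-wide config path is an assumption, and ansible-config can confirm whether the setting actually took effect):
# edit /etc/ansible/ansible.cfg (or a per-project ansible.cfg) so the [defaults]
# section contains a remote tmp rooted in /tmp, e.g.:
#
#   [defaults]
#   remote_tmp = /tmp/.ansible/tmp
#
# then confirm which config file Ansible reads and whether the setting took effect:
ansible --version | grep 'config file'
ansible-config dump --only-changed | grep -i remote_tmp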
UPDATE:
It is failing on this step:
mkdir \"` echo ~/.ansible/tmp/ansible-tmp-1638782939.31706-2913-12516475286623 `\" && echo ansible-tmp-1638782939.31706-2913-12516475286623=
mkdir: cannot create directory ‘"/home/user/.ansible/tmp/ansible-tmp-1638782939.31706-2913-12516475286623"’: No such file or directory
I stumbled upon this issue as well.
When you create the role, you need to create it with molecule init role --driver-name docker ns.myrole to enable Docker. Be sure to install the Docker driver too if you haven't: pip install --upgrade molecule-docker
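Spelled out as commands (ns.myrole is just a placeholder namespace.role_name):
# install (or upgrade) the Docker driver next to Molecule, then verify it is present
pip install --upgrade molecule-docker
pip show molecule-docker
# scaffold a new role wired to the Docker driver
molecule init role --driver-name docker ns.myrole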
So if you need to tweak the container that runs, edit molecule.yml. It defaults to CentOS. I switched to Ubuntu in there, and created a Dockerfile to provision the container with things that need to exist.
molecule.yml
---
dependency:
  name: galaxy
driver:
  name: docker
platforms:
  - name: instance
    image: ubuntu:22.04 # this is required but ignored since I specify a `dockerfile`
    pre_build_image: false
    dockerfile: Dockerfile
provisioner:
  name: ansible
verifier:
  name: ansible
For example, Ubuntu 22.04 no longer ships a python command (only python3), so I added an alias at the end of what Molecule renders so that Ansible can use python and have it redirect to python3:
FROM ubuntu:22.04
RUN if [ $(command -v apt-get) ]; then export DEBIAN_FRONTEND=noninteractive && apt-get update && apt-get install -y python3 sudo bash ca-certificates iproute2 python3-apt aptitude && apt-get clean && rm -rf /var/lib/apt/lists/*; \
elif [ $(command -v dnf) ]; then dnf makecache && dnf --assumeyes install /usr/bin/python3 /usr/bin/python3-config /usr/bin/dnf-3 sudo bash iproute && dnf clean all; \
elif [ $(command -v yum) ]; then yum makecache fast && yum install -y /usr/bin/python /usr/bin/python2-config sudo yum-plugin-ovl bash iproute && sed -i 's/plugins=0/plugins=1/g' /etc/yum.conf && yum clean all; \
elif [ $(command -v zypper) ]; then zypper refresh && zypper install -y python3 sudo bash iproute2 && zypper clean -a; \
elif [ $(command -v apk) ]; then apk update && apk add --no-cache python3 sudo bash ca-certificates; \
elif [ $(command -v xbps-install) ]; then xbps-install -Syu && xbps-install -y python3 sudo bash ca-certificates iproute2 && xbps-remove -O; fi
RUN echo 'alias python=python3' >> ~/.bashrc
It's been years since I last used Molecule, and I must say... it's gone downhill. It used to be easy/clear/direct to get things working. Sigh. I guess I should stick to containers and force the migration off VMs sooner!
The problem may be caused by a Docker context switch performed when Docker Desktop starts. In that case Molecule does create a container, but in an inactive context.
At startup, Docker Desktop automatically switches the context from default to desktop-linux [1]. The active context determines which containers are available from the CLI.
The context cannot be set in Molecule, i.e. the default context is always used to create containers [2].
$ molecule create --scenario-name test
... # The output with the error is skipped because it duplicates the output from the question
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ docker context ls
NAME TYPE DESCRIPTION DOCKER ENDPOINT KUBERNETES ENDPOINT ORCHESTRATOR
default moby Current DOCKER_HOST based configuration unix:///var/run/docker.sock swarm
desktop-linux * moby unix:///home/bkarpov/.docker/desktop/docker.sock
$ docker context use default
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a71bfd28992f geerlingguy/docker-ubuntu2004-ansible "bash -c 'while true…" 5 minutes ago Up 5 minutes some-instance
$ molecule login --scenario-name test
INFO Running test > login
root@some-instance:/#
Solutions
Switch the context back to default manually
docker context use default
This solution is suitable for one-time use, since the context will need to be switched every time Docker Desktop is started. The Docker Desktop service will continue to work using the desktop-linux context.
Issue with the request to add context switching to Docker Desktop - https://github.com/docker/roadmap/issues/47
Stop Docker Desktop
systemctl --user stop docker-desktop
Stopping the Docker Desktop service will automatically switch to the default context.
Set DOCKER_CONTEXT so that Docker Desktop does not change the context in the current shell
export DOCKER_CONTEXT=default
systemctl --user restart docker-desktop
When stopping, the context returns to default, and when starting, it does not switch to desktop-linux.
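A small sketch of making that environment variable stick across shells (assuming bash; adjust for your shell):
# keep the CLI pinned to the default context even while Docker Desktop is running
echo 'export DOCKER_CONTEXT=default' >> ~/.bashrc
source ~/.bashrc
docker context show   # should now print: default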
References
https://docs.docker.com/desktop/install/ubuntu/#launch-docker-desktop
https://github.com/ansible-community/molecule-docker#faq

OpenStack install fails on RabbitMQ - CentOS 8

I am following this post to install Packstack on my CentOS 8 server. Everything goes fine until I reach this install step - "packstack --answer-file /root/openstack-answer.txt". Here is the error:
...
...
Copying Puppet modules and manifests [ DONE ]
Applying 192.168.168.171_controller.pp
192.168.168.171_controller.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
ERROR : Error appeared during Puppet run: 192.168.168.171_controller.pp
Error: Facter: error while resolving custom fact "rabbitmq_nodename": undefined method `[]' for nil:NilClass
You will find full trace in log /var/tmp/packstack/20210515-120855-k817cwco/manifests/192.168.168.171_controller.pp.log
Please check log file /var/tmp/packstack/20210515-120855-k817cwco/openstack-setup.log for more information
Additional information:
* Parameter CONFIG_NEUTRON_L2_AGENT: You have chosen OVN Neutron backend. Note that this backend does not support the VPNaaS or FWaaS services. Geneve will be used as the encapsulation method for tenant networks
* Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
* File /root/keystonerc_admin has been created on OpenStack client host 192.168.168.171. To use the command line tools you need to source the file.
* To access the OpenStack Dashboard browse to http://192.168.168.171/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
Here is the openstack-setup.log
2021-05-15 12:08:56::INFO::shell::100::root:: [localhost] Executing script:
rm -rf /var/tmp/packstack/20210515-120855-k817cwco/manifests/*pp
2021-05-15 12:08:56::INFO::shell::100::root:: [localhost] Executing script:
mkdir -p ~/.ssh
chmod 500 ~/.ssh
grep 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCn8amY2BL11DJlLFjnAgxseuUag93JnVXxmnUpiEvKC2GfYcMq6fEjdqlj5be70V1LRRP4dlHkp2HhkM3dWsp/sDVLUGJIXqwmI08QiEuW7JR35pfnATTf+aw2FgRf/0yvR4uH9oWXw2R909ZEPdqcpD8T72Cz4rAcJjWA3IdWilOIGGxCs3yLN7t2v7RAaIHwEsURiI8DWRo4LcvwMw1dMhd2S4HvFu98uv7Nqd16rdlWR3QpJHZFK/4JLxWtK/7/Bf/o4RFKNlOH+mRmRlaxiT1O//zlKglUtMY/YkhbUhrMGB/jJSq6sSRlyxeLHrhrT3V4AbChH56jEMDOXnGL07FFHvVtWzJv0chyEL1Dav7Ua8N1QfoaHcfskem0rWXgtCs3QZjQWde7rFSGRg1/7cQpb51n9ZdXZagPHhLRNNI/eTKA5C2ed8p/KK1S00PNHSub4BP8Jsw5eVhUZAjZG38YfS536tORo0ciYj42dkAAVIWI44X8psU8BirQotU= root#openstack.thomsoncodes.com' ~/.ssh/authorized_keys > /dev/null 2>&1 || echo ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCn8amY2BL11DJlLFjnAgxseuUag93JnVXxmnUpiEvKC2GfYcMq6fEjdqlj5be70V1LRRP4dlHkp2HhkM3dWsp/sDVLUGJIXqwmI08QiEuW7JR35pfnATTf+aw2FgRf/0yvR4uH9oWXw2R909ZEPdqcpD8T72Cz4rAcJjWA3IdWilOIGGxCs3yLN7t2v7RAaIHwEsURiI8DWRo4LcvwMw1dMhd2S4HvFu98uv7Nqd16rdlWR3QpJHZFK/4JLxWtK/7/Bf/o4RFKNlOH+mRmRlaxiT1O//zlKglUtMY/YkhbUhrMGB/jJSq6sSRlyxeLHrhrT3V4AbChH56jEMDOXnGL07FFHvVtWzJv0chyEL1Dav7Ua8N1QfoaHcfskem0rWXgtCs3QZjQWde7rFSGRg1/7cQpb51n9ZdXZagPHhLRNNI/eTKA5C2ed8p/KK1S00PNHSub4BP8Jsw5eVhUZAjZG38YfS536tORo0ciYj42dkAAVIWI44X8psU8BirQotU= root#openstack.thomsoncodes.com >> ~/.ssh/authorized_keys
chmod 400 ~/.ssh/authorized_keys
restorecon -r ~/.ssh
2021-05-15 12:08:56::INFO::shell::100::root:: [192.168.168.171] Executing script:
rpm -q --whatprovides yum-utils || yum install -y yum-utils
2021-05-15 12:08:56::INFO::shell::49::root:: Executing command:
rpm -qa --qf='%{name}-%{version}-%{release}.%{arch}
' | grep centos-release-openstack
2021-05-15 12:09:10::INFO::shell::100::root:: [192.168.168.171] Executing script:
(rpm -q 'centos-release-openstack-ussuri' || yum -y install centos-release-openstack-ussuri) || true
2021-05-15 12:09:10::INFO::shell::49::root:: Executing command:
rpm -q rdo-release --qf='%{version}-%{release}.%{arch}
'
2021-05-15 12:09:10::INFO::shell::100::root:: [192.168.168.171] Executing script:
rpm -q --whatprovides yum-utils || yum install -y yum-utils
yum clean metadata
2021-05-15 12:09:11::INFO::shell::100::root:: [192.168.168.171] Executing script:
yum install -y puppet hiera openssh-clients tar nc rubygem-json
yum update -y puppet hiera openssh-clients tar nc rubygem-json
rpm -q --whatprovides puppet
rpm -q --whatprovides hiera
rpm -q --whatprovides openssh-clients
rpm -q --whatprovides tar
rpm -q --whatprovides nc
rpm -q --whatprovides rubygem-json
2021-05-15 12:09:38::INFO::shell::100::root:: [192.168.168.171] Executing script:
mkdir -p /var/tmp/packstack
mkdir --mode 0700 /var/tmp/packstack/18227dca781e48cda2db45952d159190
mkdir --mode 0700 /var/tmp/packstack/18227dca781e48cda2db45952d159190/modules
mkdir --mode 0700 /var/tmp/packstack/18227dca781e48cda2db45952d159190/resources
2021-05-15 12:09:38::INFO::shell::100::root:: [192.168.168.171] Executing script:
facter -p
2021-05-15 12:09:42::INFO::shell::100::root:: [192.168.168.171] Executing script:
[[ -f /etc/hiera.yaml ]] && [[ ! -L /etc/puppet/hiera.yaml ]] && ln -s /etc/hiera.yaml /etc/puppet/hiera.yaml || echo "skipping creation of hiera.yaml symlink"
sed -i 's;:datadir:.*;:datadir: /var/tmp/packstack/18227dca781e48cda2db45952d159190/hieradata;g' $(puppet config print hiera_config)
2021-05-15 12:09:43::INFO::shell::100::root:: [192.168.168.171] Executing script:
vgdisplay cinder-volumes
2021-05-15 12:09:43::INFO::shell::100::root:: [localhost] Executing script:
ssh-keygen -t rsa -b 2048 -f "/var/tmp/packstack/20210515-120855-k817cwco/nova_migration_key" -N ""
2021-05-15 12:09:43::INFO::shell::100::root:: [localhost] Executing script:
ssh-keyscan 192.168.168.171
2021-05-15 12:09:43::INFO::shell::100::root:: [192.168.168.171] Executing script:
systemctl
2021-05-15 12:09:43::INFO::shell::100::root:: [192.168.168.171] Executing script:
systemctl is-enabled NetworkManager
2021-05-15 12:09:44::INFO::shell::100::root:: [192.168.168.171] Executing script:
systemctl is-active NetworkManager
2021-05-15 12:09:44::INFO::shell::100::root:: [192.168.168.171] Executing script:
echo $HOME
2021-05-15 12:09:44::INFO::shell::100::root:: [localhost] Executing script:
cd /var/tmp/packstack/20210515-120855-k817cwco/hieradata
tar --dereference -cpzf - ../hieradata | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root#192.168.168.171 tar -C /var/tmp/packstack/18227dca781e48cda2db45952d159190 -xpzf -
cd /usr/lib/python3.6/site-packages/packstack/puppet
cd /var/tmp/packstack/20210515-120855-k817cwco/manifests
tar --dereference -cpzf - ../manifests | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root#192.168.168.171 tar -C /var/tmp/packstack/18227dca781e48cda2db45952d159190 -xpzf -
cd /usr/share/openstack-puppet/modules
tar --dereference -cpzf - aodh apache ceilometer certmonger cinder concat firewall glance gnocchi heat horizon inifile ironic keystone magnum manila memcached mysql neutron nova nssdb openstack openstacklib oslo ovn packstack panko placement rabbitmq redis remote rsync sahara ssh stdlib swift sysctl systemd tempest trove vcsrepo vswitch xinetd | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root#192.168.168.171 tar -C /var/tmp/packstack/18227dca781e48cda2db45952d159190/modules -xpzf -
2021-05-15 12:25:43::ERROR::run_setup::1062::root:: Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/packstack/installer/run_setup.py", line 1057, in main
_main(options, confFile, logFile)
File "/usr/lib/python3.6/site-packages/packstack/installer/run_setup.py", line 681, in _main
runSequences()
File "/usr/lib/python3.6/site-packages/packstack/installer/run_setup.py", line 648, in runSequences
controller.runAllSequences()
File "/usr/lib/python3.6/site-packages/packstack/installer/setup_controller.py", line 81, in runAllSequences
sequence.run(config=self.CONF, messages=self.MESSAGES)
File "/usr/lib/python3.6/site-packages/packstack/installer/core/sequences.py", line 109, in run
step.run(config=config, messages=messages)
File "/usr/lib/python3.6/site-packages/packstack/installer/core/sequences.py", line 50, in run
self.function(config, messages)
File "/usr/lib/python3.6/site-packages/packstack/plugins/puppet_950.py", line 215, in apply_puppet_manifest
wait_for_puppet(currently_running, messages)
File "/usr/lib/python3.6/site-packages/packstack/plugins/puppet_950.py", line 128, in wait_for_puppet
validate_logfile(log)
File "/usr/lib/python3.6/site-packages/packstack/modules/puppet.py", line 107, in validate_logfile
raise PuppetError(message)
packstack.installer.exceptions.PuppetError: Error appeared during Puppet run: 192.168.168.171_controller.pp
Error: Facter: error while resolving custom fact "rabbitmq_nodename": undefined method `[]' for nil:NilClass
You will find full trace in log /var/tmp/packstack/20210515-120855-k817cwco/manifests/192.168.168.171_controller.pp.log
2021-05-15 12:25:43::INFO::shell::100::root:: [192.168.168.171] Executing script:
rm -rf /var/tmp/packstack/18227dca781e48cda2db45952d159190
Here is the controller.pp.log
Error: Facter: error while resolving custom fact "rabbitmq_nodename": undefined method `[]' for nil:NilClass
Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/6.14/deprecated_language.html
(file & line not available)
Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
(file: /etc/puppet/hiera.yaml)
...
...
Notice: /Stage[main]/Swift/Swift_config[swift-hash/swift_hash_path_suffix]/value: value changed 2399ecebcf7a4128 to 00a7d595320749e9
Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/password]/value: value changed dc6fbb7c617a48c0 to e2187def7d184d58
Error: Systemd start for rabbitmq-server failed!
journalctl log for rabbitmq-server:
-- Logs begin at Sat 2021-05-15 11:54:15 CDT, end at Sat 2021-05-15 12:18:53 CDT. --
May 15 12:18:23 openstack systemd[1]: Starting RabbitMQ broker...
May 15 12:18:23 openstack rabbitmq-server[11773]: 2021-05-15 12:18:23 [warning] Both old (.config) and new (.conf) format config files exist.
May 15 12:18:23 openstack rabbitmq-server[11773]: Using the old format config file: /etc/rabbitmq/rabbitmq.config
May 15 12:18:23 openstack rabbitmq-server[11773]: Please update your config files to the new format and remove the old file.
May 15 12:18:53 openstack rabbitmq-server[11773]: ERROR: epmd error for host openstack: timeout (timed out)
May 15 12:18:53 openstack systemd[1]: rabbitmq-server.service: Main process exited, code=exited, status=1/FAILURE
May 15 12:18:53 openstack systemd[1]: rabbitmq-server.service: Failed with result 'exit-code'.
May 15 12:18:53 openstack systemd[1]: Failed to start RabbitMQ broker.
Error: /Stage[main]/Rabbitmq::Service/Service[rabbitmq-server]/ensure: change from 'stopped' to 'running' failed: Systemd start for rabbitmq-server failed!
journalctl log for rabbitmq-server:
-- Logs begin at Sat 2021-05-15 11:54:15 CDT, end at Sat 2021-05-15 12:18:53 CDT. --
May 15 12:18:23 openstack systemd[1]: Starting RabbitMQ broker...
May 15 12:18:23 openstack rabbitmq-server[11773]: 2021-05-15 12:18:23 [warning] Both old (.config) and new (.conf) format config files exist.
May 15 12:18:23 openstack rabbitmq-server[11773]: Using the old format config file: /etc/rabbitmq/rabbitmq.config
May 15 12:18:23 openstack rabbitmq-server[11773]: Please update your config files to the new format and remove the old file.
May 15 12:18:53 openstack rabbitmq-server[11773]: ERROR: epmd error for host openstack: timeout (timed out)
May 15 12:18:53 openstack systemd[1]: rabbitmq-server.service: Main process exited, code=exited, status=1/FAILURE
May 15 12:18:53 openstack systemd[1]: rabbitmq-server.service: Failed with result 'exit-code'.
May 15 12:18:53 openstack systemd[1]: Failed to start RabbitMQ broker.
Notice: /Stage[main]/Swift::Deps/Anchor[swift::config::end]: Triggered 'refresh' from 2 events
Notice: /Stage[main]/Swift::Deps/Anchor[swift::service::begin]: Triggered 'refresh' from 2 events
Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::config::end]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Keystone/Exec[keystone-manage fernet_setup]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_user[keystone_admin@%]/password_hash: changed password
Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_127.0.0.1]/Mysql_user[keystone_admin@127.0.0.1]/password_hash: changed password
Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::db::end]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::begin]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Keystone::Db::Sync/Exec[keystone-manage db_sync]: Triggered 'refresh' from 2 events
Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::end]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Keystone::Bootstrap/Exec[keystone bootstrap]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::service::begin]: Triggered 'refresh' from 4 events
Warning: /Stage[main]/Apache::Service/Service[httpd]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Deps/Anchor[keystone::service::end]: Skipping because of failed dependencies
Warning: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::service::end]: Skipping because of failed dependencies
Warning: /Stage[main]/Aodh::Deps/Anchor[aodh::service::end]: Skipping because of failed dependencies
Warning: /Stage[main]/Placement::Deps/Anchor[placement::service::end]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Cron::Fernet_rotate/Cron[keystone-manage fernet_rotate]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone/Keystone_domain[Default]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone/Exec[restart_keystone]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone/Anchor[default_domain_created]: Skipping because of failed dependencies
Warning: /Stage[main]/Packstack::Keystone/Keystone_role[_member_]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Bootstrap/Keystone_role[admin]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Bootstrap/Keystone_user[admin]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Bootstrap/Keystone_tenant[services]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Bootstrap/Keystone_tenant[admin]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Bootstrap/Keystone_user_role[admin@admin]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Bootstrap/Keystone_service[keystone::identity]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Bootstrap/Keystone_endpoint[RegionOne/keystone::identity]: Skipping because of failed dependencies
Warning: /Stage[main]/Horizon::Deps/Anchor[horizon::service::end]: Skipping because of failed dependencies
Warning: /Stage[main]/Swift::Keystone::Auth/Keystone_role[SwiftOperator]: Skipping because of failed dependencies
Warning: /Stage[main]/Swift::Keystone::Auth/Keystone_role[ResellerAdmin]: Skipping because of failed dependencies
Warning: /Stage[main]/Heat::Keystone::Auth/Keystone_role[heat_stack_user]: Skipping because of failed dependencies
Warning: /Stage[main]/Heat::Keystone::Auth/Keystone_role[heat_stack_owner]: Skipping because of failed dependencies
Warning: /Stage[main]/Heat::Keystone::Domain/Keystone_domain[heat]: Skipping because of failed dependencies
Warning: /Stage[main]/Heat::Keystone::Domain/Keystone_user[heat_admin::heat]: Skipping because of failed dependencies
Warning: /Stage[main]/Heat::Keystone::Domain/Keystone_user_role[heat_admin::heat@::heat]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_user[glance]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_user_role[glance@services]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_service[glance::image]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_endpoint[RegionOne/glance::image]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Deps/Anchor[glance::service::begin]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Api/Service[glance-api]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Registry/Service[glance-registry]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Deps/Anchor[glance::service::end]: Skipping because of failed dependencies
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone::Resource::Service_identity[cinder]/Keystone_user[cinder]: Skipping because of failed dependencies
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone::Resource::Service_identity[cinder]/Keystone_user_role[cinder@services]: Skipping because of failed dependencies
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone::Resource::Service_identity[cinderv2]/Keystone_service[cinderv2::volumev2]: Skipping because of failed dependencies
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone::Resource::Service_identity[cinderv2]/Keystone_endpoint[RegionOne/cinderv2::volumev2]: Skipping because of failed dependencies
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone::Resource::Service_identity[cinderv3]/Keystone_service[cinderv3::volumev3]: Skipping because of failed dependencies
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone::Resource::Service_identity[cinderv3]/Keystone_endpoint[RegionOne/cinderv3::volumev3]: Skipping because of failed dependencies
Warning: /Stage[main]/Cinder::Deps/Anchor[cinder::service::end]: Skipping because of failed dependencies
Error: Could not prefetch cinder_type provider 'openstack': Could not authenticate
Error: Failed to apply catalog: Could not authenticate
I could see that people have experienced similar issues, but those solutions did not work for me. I have also done the steps below.
Verified the hostname and hosts file
Opened the port or disabled firewalld
Disabled SELinux
I changed the hostname to openstack, and here is my hosts file:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
<my-ip> openstack
I am not sure whether this is a hostname issue, a firewall issue, or something else. I have been struggling with this for quite some time, and help would be greatly appreciated.
I met the same problem. Maybe you can modify your /etc/hosts and add your hostname to it, not openstack.
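A hedged sketch of such an /etc/hosts entry, reusing the IP and hostname from the question (substitute your own):
# map the machine's hostname to its real IP so epmd/RabbitMQ can resolve it
echo '192.168.168.171  openstack' | sudo tee -a /etc/hosts
getent hosts openstack   # verify the name now resolves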

Ansible failed to connect to the host via ssh, but ssh command works

From inventory.yml:
myhost:
  ansible_host: myhost # actually it was ansible_ssh_host (see my answer)
  ansible_user: myuser # actually it was ansible_ssh_user (see my answer)
  ansible_pass: mypass # actually it was ansible_ssh_pass (see my answer)
So far, Ansible worked fine. I could also ssh myuser@myhost.
Then, I changed the ssh port from default 22 to 23 and edited the inventory.yml:
myhost:
  ansible_host: myhost
  ansible_user: myuser
  ansible_pass: mypass # THE PROBLEM! Must be ansible_ssh_pass. (see my answer)
  ansible_port: 23
As expected, I can ssh myuser@myhost -p 23, but Ansible gives the error:
fatal: [staging]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: myuser@myhost: Permission denied (publickey,password).", "unreachable": true}
What could be causing the error?
The solution is quite unexpected and slightly embarrassing:
While changing the SSH port, I also read this:
Ansible 2.0 has deprecated the “ssh” from ansible_ssh_user,
ansible_ssh_host, and ansible_ssh_port to become ansible_user,
ansible_host, and ansible_port.
I edited inventory.yml a bit too eagerly, as I also changed ansible_ssh_pass to ansible_pass. Hence: missing password -> permission denied.
So, my question had been phrased in a wrong way. I have updated it accordingly.
The variable for password is ansible_password. See documentation here to create your inventory.yml properly.
Notice that you should never store your password in plain text, but use a vault instead.
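As a minimal sketch (it assumes the same inventory layout as in the question; the vaulted value from encrypt_string would replace the plain-text password):
# produce a vaulted value to paste into the inventory instead of the clear-text password
ansible-vault encrypt_string 'mypass' --name 'ansible_password'
# corrected inventory entry, same shape as in the question, with the changed port
cat > inventory.yml <<'EOF'
myhost:
  ansible_host: myhost
  ansible_user: myuser
  ansible_password: mypass   # better: the !vault value printed by encrypt_string
  ansible_port: 23
EOF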
In my case, where only key-based SSH is allowed, ansible ping was failing with:
"msg": "Failed to connect to the host via ssh: Permission denied (publickey).",
The inventory file had an entry for ansible_user=$user. Removing that entry from the inventory file helped.
My environment:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.7 LTS
Release: 16.04
Codename: xenial
$ ansible --version
ansible 2.9.27
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/USER_NAME/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.12 (default, Mar 1 2021, 11:38:31) [GCC 5.4.0 20160609]

Install zabbix_sender on CentOS

I installed zabbix-sender, but when I execute it I get "command not found".
[root@db1 zabbix]# yum info zabbix-sender
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Installed Packages
Name : zabbix-sender
Arch : x86_64
Version : 3.4.6
Release : 1.el7
Size : 1.0 M
Repo : installed
From repo : _ftp
Summary : Zabbix Sender
URL : http://www.zabbix.com/
License : GPLv2+
Description : Zabbix sender command line utility
When I use this command:
[root@db2 zabbix]# zabbix_sender -z zabbix -s db2 -k mysql[Binlog_cache_disk_use] -o 2000 -vv
-bash: zabbix_sender: command not found
Why does this happen?
[root@db1 zabbix]# !r
rpm -lq zabbix-sender
/usr/bin/zabbix_sender
/usr/share/doc/zabbix-sender-3.4.6
/usr/share/doc/zabbix-sender-3.4.6/AUTHORS
/usr/share/doc/zabbix-sender-3.4.6/COPYING
/usr/share/doc/zabbix-sender-3.4.6/ChangeLog
/usr/share/doc/zabbix-sender-3.4.6/NEWS
/usr/share/doc/zabbix-sender-3.4.6/README
/usr/share/man/man1/zabbix_sender.1.gz
[root@db1 zabbix]# zabbix-sender -z zabbix -s db1 -k mysql[Uptime] -o 1540 -vv
-bash: zabbix-sender: command not found
[root@db1 zabbix]#
checked again
I understand why this happens! I removed zabbix-sender and installed it again. When using zabbix_sender -z zabbix -s db1 -k mysql[Uptime] -o 1540 -vv there is no problem and it works, but after executing perl MySQL-check.pl root password, zabbix_sender seems to be removed. Does anyone know why? I am using this template: link
List the files provided by the RPM package:
$ rpm -ql zabbix-sender
/usr/bin/zabbix_sender
...
Try /usr/bin/zabbix_sender - do you have a correct PATH setting (is /usr/bin included there)?
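A quick way to check (a sketch; the binary path comes from the rpm output above):
echo "$PATH"                # confirm /usr/bin is on the PATH
command -v zabbix_sender    # shows where the shell would find the binary, if anywhere
/usr/bin/zabbix_sender -V   # call it by full path to confirm the file itself works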

ActiveMQ will not start on my Ubuntu VM

I'm trying to run ActiveMQ on my Ubuntu virtual machine but have constantly been running into issues getting it to start up. I've tried downloading the binary and the source with no luck. Currently I have downloaded the source, run "mvn clean install -Dmaven.test.skip=true", and Maven reported a successful installation. I then hunted around in my .m2 folder, found apache-activemq-5.5.1-bin.tar.gz, extracted it to my /home/USERNAME dir, and attempted to run "bash bin/activemq start", only to receive the following error.
INFO: Loading '/etc/default/activemq'
INFO: Using java '/usr/bin/java'
INFO: Starting - inspect logfiles specified in logging.properties and log4j.properties to get details
bin/activemq: line 370: /usr/bin/java -Xms256M -Xmx256M -Dorg.apache.activemq.UseDedicatedTaskRunner=true
-Djava.util.logging.config.file=logging.properties
-Dcom.sun.management.jmxremote
-Dactivemq.classpath="/home/jacob/activeMq1/apache-activemq-5.5.1/conf;"
-Dactivemq.home="/home/jacob/activeMq1/apache-activemq-5.5.1"
-Dactivemq.base="/home/jacob/activeMq1/apache-activemq-5.5.1"
-jar "/home/jacob/activeMq1/apache-activemq-5.5.1/bin/run.jar" start >/dev/null 2>&1 &
RET="$?"; APID="$!";
echo $APID > /home/jacob/activeMq1/apache-activemq-5.5.1/data/activemq.pid;
echo "INFO: pidfile created : '/home/jacob/activeMq1/apache-activemq-5.5.1/data/activemq.pid' (pid '$APID')";
exit $RET: No such file or directory
Has anyone run into this type of error before?
Looks like I'm answering one of my questions again, but maybe this will help someone in the future.
Steps:
I ended up getting ActiveMQ to work by creating a configuration file via the command "./bin/activemq setup newConfig" (exclude the quotes).
I then replaced the current config file "activemq", which was located at /etc/default/. (I made a backup of the original activemq file before overwriting it with newConfig.)
Run "./bin/activemq start" which will create a PID file.
After the file is created re-run "./bin/activemq start" to finally start up the broker.
You can then test the install by navigating to "http://localhost:8161/admin/" or by doing a "netstat -an | grep 61616" if you kept the default ports etc.
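The steps above, condensed into the commands they describe (paths assume the extracted apache-activemq-5.5.1 directory and may differ on your system):
cd ~/apache-activemq-5.5.1
./bin/activemq setup newConfig                            # generate a fresh config file
sudo cp /etc/default/activemq /etc/default/activemq.bak   # back up the original
sudo cp newConfig /etc/default/activemq                   # replace it with the new config
./bin/activemq start                                      # first run creates the PID file
./bin/activemq start                                      # run again to start the broker
netstat -an | grep 61616                                  # broker listening on the default port?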
I installed ActiveMQ 5.13 on Debian: downloaded and unzipped it in /opt, then went to /opt/apache-activemq-5.13.1 and ran "./bin/activemq start", and this error appeared:
xx@debian:/opt/apache-activemq-5.13.1$ ./bin/activemq start
INFO: Loading '/etc/default/activemq'
INFO: Using java '/usr/bin/java'
INFO: Starting - inspect logfiles specified in logging.properties and log4j.properties to get details
./bin/activemq: 330: ./bin/activemq: "/usr/bin/java" -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=/opt/apache-activemq-5.13.1//conf/login.config -Djava.awt.headless=true -Djava.io.tmpdir="/opt/apache-activemq-5.13.1//tmp" -Dactivemq.classpath="/opt/apache-activemq-5.13.1//conf:/opt/apache-activemq-5.13.1//../lib/:" -Dactivemq.home="/opt/apache-activemq-5.13.1/" -Dactivemq.base="/opt/apache-activemq-5.13.1/" -Dactivemq.conf="/opt/apache-activemq-5.13.1//conf" -Dactivemq.data="/opt/apache-activemq-5.13.1//data" -jar "/opt/apache-activemq-5.13.1//bin/activemq.jar" start >/dev/null 2>&1 &
RET="$?"; APID="$!";
echo $APID > /opt/apache-activemq-5.13.1//data/activemq.pid;
echo "INFO: pidfile created : '/opt/apache-activemq-5.13.1//data/activemq.pid' (pid '$APID')";exit $RET: not found
What I did was to check the Debian version using "uname -a":
Linux debian 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt20-1+deb8u3 (2016-01-17) x86_64 GNU/Linux
I found my Debian is a 64-bit system, so I ran:
./bin/linux-x86-64/activemq start
It shows:
Starting ActiveMQ Broker...
Then I can access the site http://localhost:8161/admin/ with username "admin" and password "admin".
With Ubuntu 14.04, I had to create a link in /etc/activemq/instances-enabled:
sudo ln -s ../instances-available/main/
similar to apache2 setup
then started the server with /etc/init.d/activemq start
sudo is necessary.
bin$ sudo ./activemq start
bin$ sudo ./activemq status
INFO: Loading '/opt/runtime/apache-activemq-5.11.1/bin/env'
INFO: Using java '/usr/bin/java'
ActiveMQ is running (pid '29887')