crash/problems with suprove solver - yosys

I have a fairly simple sequential problem that I am trying to formally prove with "mode prove" in SymbiYosys.
I'm using "aiger suprove" as the engine and am getting the following crash:
$ sby -f assert_seq_proof.sby
SBY 9:02:40 [assert_seq_proof] Removing direcory 'assert_seq_proof'.
SBY 9:02:40 [assert_seq_proof] Copy 'assert_seq_proof.sv' to 'assert_seq_proof/src/assert_seq_proof.sv'.
SBY 9:02:40 [assert_seq_proof] engine_0: aiger suprove
SBY 9:02:40 [assert_seq_proof] nomem: starting process "cd assert_seq_proof/src; yosys -ql ../model/design_nomem.log ../model/design_nomem.ys"
SBY 9:02:40 [assert_seq_proof] nomem: finished (returncode=0)
SBY 9:02:40 [assert_seq_proof] aig: starting process "cd assert_seq_proof/model; yosys -ql design_aiger.log design_aiger.ys"
SBY 9:02:40 [assert_seq_proof] aig: finished (returncode=0)
SBY 9:02:40 [assert_seq_proof] engine_0: starting process "cd assert_seq_proof; suprove model/design_aiger.aig"
SBY 9:02:40 [assert_seq_proof] engine_0: finished (returncode=127)
Traceback (most recent call last):
File "/usr/local/bin/sby", line 388, in <module>
retcode |= run_job(t)
File "/usr/local/bin/sby", line 346, in run_job
job.run(setupmode)
File "/usr/local/bin/../share/yosys/python3/sby_core.py", line 634, in run
self.taskloop()
File "/usr/local/bin/../share/yosys/python3/sby_core.py", line 251, in taskloop
task.poll()
File "/usr/local/bin/../share/yosys/python3/sby_core.py", line 170, in poll
self.handle_exit(self.p.returncode)
File "/usr/local/bin/../share/yosys/python3/sby_core.py", line 108, in handle_exit
self.exit_callback(retcode)
File "/usr/local/bin/../share/yosys/python3/sby_engine_aiger.py", line 81, in exit_callback
assert retcode == 0
AssertionError
using the "abc pdr" engine with exactly the same script and design I get:
$ sby -f assert_seq_proof.sby
SBY 9:19:20 [assert_seq_proof] Removing direcory 'assert_seq_proof'.
SBY 9:19:20 [assert_seq_proof] Copy 'assert_seq_proof.sv' to 'assert_seq_proof/src/assert_seq_proof.sv'.
SBY 9:19:20 [assert_seq_proof] engine_0: abc pdr
SBY 9:19:20 [assert_seq_proof] nomem: starting process "cd assert_seq_proof/src; yosys -ql ../model/design_nomem.log ../model/design_nomem.ys"
SBY 9:19:21 [assert_seq_proof] nomem: finished (returncode=0)
SBY 9:19:21 [assert_seq_proof] aig: starting process "cd assert_seq_proof/model; yosys -ql design_aiger.log design_aiger.ys"
SBY 9:19:21 [assert_seq_proof] aig: finished (returncode=0)
SBY 9:19:21 [assert_seq_proof] engine_0: starting process "cd assert_seq_proof; yosys-abc -c 'read_aiger model/design_aiger.aig; fold; strash; pdr; write_cex -a engine_0/trace.aiw'"
SBY 9:19:24 [assert_seq_proof] engine_0: ABC command line: "read_aiger model/design_aiger.aig; fold; strash; pdr; write_cex -a engine_0/trace.aiw".
SBY 9:19:24 [assert_seq_proof] engine_0: Warning: The last 7 outputs are interpreted as constraints.
SBY 9:19:24 [assert_seq_proof] engine_0: Invariant F[80] : 225 clauses with 112 flops (out of 124) (cex = 0, ave = 21.31)
SBY 9:19:24 [assert_seq_proof] engine_0: Verification of invariant with 225 clauses was successful. Time = 0.00 sec
SBY 9:19:24 [assert_seq_proof] engine_0: Property proved. Time = 2.91 sec
SBY 9:19:24 [assert_seq_proof] engine_0: Counter-example is not available.
SBY 9:19:24 [assert_seq_proof] engine_0: finished (returncode=0)
SBY 9:19:24 [assert_seq_proof] engine_0: Status returned by engine: PASS
SBY 9:19:24 [assert_seq_proof] summary: Elapsed clock time [H:MM:SS (secs)]: 0:00:03 (3)
SBY 9:19:24 [assert_seq_proof] summary: Elapsed process time [H:MM:SS (secs)]: 0:00:03 (3)
SBY 9:19:24 [assert_seq_proof] summary: engine_0 (abc pdr) returned PASS
SBY 9:19:24 [assert_seq_proof] DONE (PASS, rc=0)
Any ideas on how to debug or solve the suprove crash?
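Note that the engine process exits with returncode=127, which is the shell's "command not found" status, while the abc flow runs fine. A minimal first check is to re-run the exact engine command from the log by hand (this sketch just reuses the work directory and invocation shown above, and assumes suprove is the wrapper script from a separately installed super_prove package, which may not match your setup):
$ which suprove || echo "suprove not found in PATH"
$ cd assert_seq_proof && suprove model/design_aiger.aig
If which comes back empty, the AssertionError is just sby reacting to a missing executable rather than a solver crash.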

Related

How to ask a remote server over SSH to run a background job?

I'm trying to start a long-running process on a remote server, over SSH:
$ echo Hello | ssh user@host "cat > /tmp/foo.txt; sleep 100 &"
Here, sleep 100 is a simulation of my long-running process. I want this command to exit instantly, but it waits for 100 seconds. It is important to mention that the job needs to receive input from me (Hello in the example above).
Server:
$ sshd -?
OpenSSH_8.2p1 Ubuntu-4ubuntu0.5, OpenSSL 1.1.1f 31 Mar 2020
Saying "I want this command to exit instantly" is incompatible with "long-running". Perhaps you mean that you want the long-running command to run in the background.
If the output is not immediately needed locally (i.e. it can be retrieved by another ssh later), then nohup is simple:
echo hello |
ssh user@host '
cat >/tmp/foo.txt;
nohup </dev/null >cmd.out 2>cmd.err cmd &
'
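With the nohup variant, the output can be collected later with another ssh, for example (cmd.out and cmd.err are just the redirection targets used above, created in the remote home directory):
ssh user@host 'cat cmd.out cmd.err'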
If output must be received locally as the command runs, you can background ssh itself using -f:
echo hello |
ssh -f user@host '
cat >/tmp/foo.txt;
cmd
' >cmd.out 2>cmd.err

valgrind --track-fds=yes exit code 0 even when there are FD leaks

I am trying to set up a CI job that will fail if a file descriptor leak is detected.
Here is a simple test:
$ valgrind --quiet --track-fds=yes --error-exitcode=1 ./hello_world
hello world!
$ echo $?
0
$ valgrind --quiet --track-fds=yes --error-exitcode=1 ./hello_world_leak
hello world!
==889092== FILE DESCRIPTORS: 4 open (3 std) at exit.
==889092== Open file descriptor 3: /tmp/vg-test/main.cpp
==889092== at 0x4B968DB: open (open64.c:48)
==889092== by 0x109249: main (in /tmp/vg-test/hello_world_leak)
==889092==
==889092==
$ echo $?
0
(the --quiet option suppresses the output if the 3 std FDs are the only non-closed FDs on exit, which is fine).
As you can see, even with --error-exitcode=1 and an FD leak in the program, valgrind exits with code 0.
My next idea for making it work was to write the valgrind output to a file, then parse it to check whether it contains the FILE DESCRIPTORS string:
$ valgrind --quiet --track-fds=yes --error-exitcode=1 --log-file=/tmp/valgrind_out.log ./hello_world
hello world!
$ cat /tmp/valgrind_out.log
==891904== FILE DESCRIPTORS: 4 open (3 std) at exit.
==891904== Open file descriptor 3: /tmp/valgrind_out.log
==891904== <inherited from parent>
==891904==
But here comes the next problem: the file opened for the valgrind output (/tmp/valgrind_out.log) is itself considered open at exit! The same happens when I tee valgrind's stderr output to a file.
So the only solution I came up with was to parse the output and check that there are exactly 4 open (3 std) FDs at exit.
Are there any less ugly solutions?
EDIT: I've created a wrapper script that does precisely that: github.com/Roman-/valgrind-wrapper. Still looking forward to more elegant solutions.
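For illustration, a minimal sketch of such a wrapper could look like this (this only shows the parse-the-summary idea; the actual script in the repository may differ). It runs valgrind with --log-file and fails unless the FILE DESCRIPTORS summary reports exactly "4 open (3 std)", i.e. the three std descriptors plus the log file itself:
#!/bin/bash
# Hypothetical usage: ./valgrind_fd_check.sh ./my_binary [args...]
log=$(mktemp)
valgrind --quiet --track-fds=yes --error-exitcode=1 --log-file="$log" "$@"
rc=$?
# "4 open (3 std)" = stdin/stdout/stderr plus the --log-file descriptor; anything else counts as a leak.
if grep -q "FILE DESCRIPTORS:" "$log" && ! grep -q "FILE DESCRIPTORS: 4 open (3 std)" "$log"; then
    cat "$log" >&2
    rc=1
fi
rm -f "$log"
exit $rc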

Openstack install fail on RabbitMQ - Centos8

I am following this post to install Packstack on my CentOS 8 server. Everything goes fine until I reach this install step: "packstack --answer-file /root/openstack-answer.txt". Here is the error:
...
...
Copying Puppet modules and manifests [ DONE ]
Applying 192.168.168.171_controller.pp
192.168.168.171_controller.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
ERROR : Error appeared during Puppet run: 192.168.168.171_controller.pp
Error: Facter: error while resolving custom fact "rabbitmq_nodename": undefined method `[]' for nil:NilClass
You will find full trace in log /var/tmp/packstack/20210515-120855-k817cwco/manifests/192.168.168.171_controller.pp.log
Please check log file /var/tmp/packstack/20210515-120855-k817cwco/openstack-setup.log for more information
Additional information:
* Parameter CONFIG_NEUTRON_L2_AGENT: You have chosen OVN Neutron backend. Note that this backend does not support the VPNaaS or FWaaS services. Geneve will be used as the encapsulation method for tenant networks
* Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
* File /root/keystonerc_admin has been created on OpenStack client host 192.168.168.171. To use the command line tools you need to source the file.
* To access the OpenStack Dashboard browse to http://192.168.168.171/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
Here is the openstack-setup.log
2021-05-15 12:08:56::INFO::shell::100::root:: [localhost] Executing script:
rm -rf /var/tmp/packstack/20210515-120855-k817cwco/manifests/*pp
2021-05-15 12:08:56::INFO::shell::100::root:: [localhost] Executing script:
mkdir -p ~/.ssh
chmod 500 ~/.ssh
grep 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCn8amY2BL11DJlLFjnAgxseuUag93JnVXxmnUpiEvKC2GfYcMq6fEjdqlj5be70V1LRRP4dlHkp2HhkM3dWsp/sDVLUGJIXqwmI08QiEuW7JR35pfnATTf+aw2FgRf/0yvR4uH9oWXw2R909ZEPdqcpD8T72Cz4rAcJjWA3IdWilOIGGxCs3yLN7t2v7RAaIHwEsURiI8DWRo4LcvwMw1dMhd2S4HvFu98uv7Nqd16rdlWR3QpJHZFK/4JLxWtK/7/Bf/o4RFKNlOH+mRmRlaxiT1O//zlKglUtMY/YkhbUhrMGB/jJSq6sSRlyxeLHrhrT3V4AbChH56jEMDOXnGL07FFHvVtWzJv0chyEL1Dav7Ua8N1QfoaHcfskem0rWXgtCs3QZjQWde7rFSGRg1/7cQpb51n9ZdXZagPHhLRNNI/eTKA5C2ed8p/KK1S00PNHSub4BP8Jsw5eVhUZAjZG38YfS536tORo0ciYj42dkAAVIWI44X8psU8BirQotU= root#openstack.thomsoncodes.com' ~/.ssh/authorized_keys > /dev/null 2>&1 || echo ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCn8amY2BL11DJlLFjnAgxseuUag93JnVXxmnUpiEvKC2GfYcMq6fEjdqlj5be70V1LRRP4dlHkp2HhkM3dWsp/sDVLUGJIXqwmI08QiEuW7JR35pfnATTf+aw2FgRf/0yvR4uH9oWXw2R909ZEPdqcpD8T72Cz4rAcJjWA3IdWilOIGGxCs3yLN7t2v7RAaIHwEsURiI8DWRo4LcvwMw1dMhd2S4HvFu98uv7Nqd16rdlWR3QpJHZFK/4JLxWtK/7/Bf/o4RFKNlOH+mRmRlaxiT1O//zlKglUtMY/YkhbUhrMGB/jJSq6sSRlyxeLHrhrT3V4AbChH56jEMDOXnGL07FFHvVtWzJv0chyEL1Dav7Ua8N1QfoaHcfskem0rWXgtCs3QZjQWde7rFSGRg1/7cQpb51n9ZdXZagPHhLRNNI/eTKA5C2ed8p/KK1S00PNHSub4BP8Jsw5eVhUZAjZG38YfS536tORo0ciYj42dkAAVIWI44X8psU8BirQotU= root#openstack.thomsoncodes.com >> ~/.ssh/authorized_keys
chmod 400 ~/.ssh/authorized_keys
restorecon -r ~/.ssh
2021-05-15 12:08:56::INFO::shell::100::root:: [192.168.168.171] Executing script:
rpm -q --whatprovides yum-utils || yum install -y yum-utils
2021-05-15 12:08:56::INFO::shell::49::root:: Executing command:
rpm -qa --qf='%{name}-%{version}-%{release}.%{arch}
' | grep centos-release-openstack
2021-05-15 12:09:10::INFO::shell::100::root:: [192.168.168.171] Executing script:
(rpm -q 'centos-release-openstack-ussuri' || yum -y install centos-release-openstack-ussuri) || true
2021-05-15 12:09:10::INFO::shell::49::root:: Executing command:
rpm -q rdo-release --qf='%{version}-%{release}.%{arch}
'
2021-05-15 12:09:10::INFO::shell::100::root:: [192.168.168.171] Executing script:
rpm -q --whatprovides yum-utils || yum install -y yum-utils
yum clean metadata
2021-05-15 12:09:11::INFO::shell::100::root:: [192.168.168.171] Executing script:
yum install -y puppet hiera openssh-clients tar nc rubygem-json
yum update -y puppet hiera openssh-clients tar nc rubygem-json
rpm -q --whatprovides puppet
rpm -q --whatprovides hiera
rpm -q --whatprovides openssh-clients
rpm -q --whatprovides tar
rpm -q --whatprovides nc
rpm -q --whatprovides rubygem-json
2021-05-15 12:09:38::INFO::shell::100::root:: [192.168.168.171] Executing script:
mkdir -p /var/tmp/packstack
mkdir --mode 0700 /var/tmp/packstack/18227dca781e48cda2db45952d159190
mkdir --mode 0700 /var/tmp/packstack/18227dca781e48cda2db45952d159190/modules
mkdir --mode 0700 /var/tmp/packstack/18227dca781e48cda2db45952d159190/resources
2021-05-15 12:09:38::INFO::shell::100::root:: [192.168.168.171] Executing script:
facter -p
2021-05-15 12:09:42::INFO::shell::100::root:: [192.168.168.171] Executing script:
[[ -f /etc/hiera.yaml ]] && [[ ! -L /etc/puppet/hiera.yaml ]] && ln -s /etc/hiera.yaml /etc/puppet/hiera.yaml || echo "skipping creation of hiera.yaml symlink"
sed -i 's;:datadir:.*;:datadir: /var/tmp/packstack/18227dca781e48cda2db45952d159190/hieradata;g' $(puppet config print hiera_config)
2021-05-15 12:09:43::INFO::shell::100::root:: [192.168.168.171] Executing script:
vgdisplay cinder-volumes
2021-05-15 12:09:43::INFO::shell::100::root:: [localhost] Executing script:
ssh-keygen -t rsa -b 2048 -f "/var/tmp/packstack/20210515-120855-k817cwco/nova_migration_key" -N ""
2021-05-15 12:09:43::INFO::shell::100::root:: [localhost] Executing script:
ssh-keyscan 192.168.168.171
2021-05-15 12:09:43::INFO::shell::100::root:: [192.168.168.171] Executing script:
systemctl
2021-05-15 12:09:43::INFO::shell::100::root:: [192.168.168.171] Executing script:
systemctl is-enabled NetworkManager
2021-05-15 12:09:44::INFO::shell::100::root:: [192.168.168.171] Executing script:
systemctl is-active NetworkManager
2021-05-15 12:09:44::INFO::shell::100::root:: [192.168.168.171] Executing script:
echo $HOME
2021-05-15 12:09:44::INFO::shell::100::root:: [localhost] Executing script:
cd /var/tmp/packstack/20210515-120855-k817cwco/hieradata
tar --dereference -cpzf - ../hieradata | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root#192.168.168.171 tar -C /var/tmp/packstack/18227dca781e48cda2db45952d159190 -xpzf -
cd /usr/lib/python3.6/site-packages/packstack/puppet
cd /var/tmp/packstack/20210515-120855-k817cwco/manifests
tar --dereference -cpzf - ../manifests | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root#192.168.168.171 tar -C /var/tmp/packstack/18227dca781e48cda2db45952d159190 -xpzf -
cd /usr/share/openstack-puppet/modules
tar --dereference -cpzf - aodh apache ceilometer certmonger cinder concat firewall glance gnocchi heat horizon inifile ironic keystone magnum manila memcached mysql neutron nova nssdb openstack openstacklib oslo ovn packstack panko placement rabbitmq redis remote rsync sahara ssh stdlib swift sysctl systemd tempest trove vcsrepo vswitch xinetd | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root#192.168.168.171 tar -C /var/tmp/packstack/18227dca781e48cda2db45952d159190/modules -xpzf -
2021-05-15 12:25:43::ERROR::run_setup::1062::root:: Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/packstack/installer/run_setup.py", line 1057, in main
_main(options, confFile, logFile)
File "/usr/lib/python3.6/site-packages/packstack/installer/run_setup.py", line 681, in _main
runSequences()
File "/usr/lib/python3.6/site-packages/packstack/installer/run_setup.py", line 648, in runSequences
controller.runAllSequences()
File "/usr/lib/python3.6/site-packages/packstack/installer/setup_controller.py", line 81, in runAllSequences
sequence.run(config=self.CONF, messages=self.MESSAGES)
File "/usr/lib/python3.6/site-packages/packstack/installer/core/sequences.py", line 109, in run
step.run(config=config, messages=messages)
File "/usr/lib/python3.6/site-packages/packstack/installer/core/sequences.py", line 50, in run
self.function(config, messages)
File "/usr/lib/python3.6/site-packages/packstack/plugins/puppet_950.py", line 215, in apply_puppet_manifest
wait_for_puppet(currently_running, messages)
File "/usr/lib/python3.6/site-packages/packstack/plugins/puppet_950.py", line 128, in wait_for_puppet
validate_logfile(log)
File "/usr/lib/python3.6/site-packages/packstack/modules/puppet.py", line 107, in validate_logfile
raise PuppetError(message)
packstack.installer.exceptions.PuppetError: Error appeared during Puppet run: 192.168.168.171_controller.pp
Error: Facter: error while resolving custom fact "rabbitmq_nodename": undefined method `[]' for nil:NilClass
You will find full trace in log /var/tmp/packstack/20210515-120855-k817cwco/manifests/192.168.168.171_controller.pp.log
2021-05-15 12:25:43::INFO::shell::100::root:: [192.168.168.171] Executing script:
rm -rf /var/tmp/packstack/18227dca781e48cda2db45952d159190
Here is the controller.pp.log
Error: Facter: error while resolving custom fact "rabbitmq_nodename": undefined method `[]' for nil:NilClass
Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/6.14/deprecated_language.html
(file & line not available)
Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
(file: /etc/puppet/hiera.yaml)
...
...
Notice: /Stage[main]/Swift/Swift_config[swift-hash/swift_hash_path_suffix]/value: value changed 2399ecebcf7a4128 to 00a7d595320749e9
Notice: /Stage[main]/Swift::Proxy::Authtoken/Swift_proxy_config[filter:authtoken/password]/value: value changed dc6fbb7c617a48c0 to e2187def7d184d58
Error: Systemd start for rabbitmq-server failed!
journalctl log for rabbitmq-server:
-- Logs begin at Sat 2021-05-15 11:54:15 CDT, end at Sat 2021-05-15 12:18:53 CDT. --
May 15 12:18:23 openstack systemd[1]: Starting RabbitMQ broker...
May 15 12:18:23 openstack rabbitmq-server[11773]: 2021-05-15 12:18:23 [warning] Both old (.config) and new (.conf) format config files exist.
May 15 12:18:23 openstack rabbitmq-server[11773]: Using the old format config file: /etc/rabbitmq/rabbitmq.config
May 15 12:18:23 openstack rabbitmq-server[11773]: Please update your config files to the new format and remove the old file.
May 15 12:18:53 openstack rabbitmq-server[11773]: ERROR: epmd error for host openstack: timeout (timed out)
May 15 12:18:53 openstack systemd[1]: rabbitmq-server.service: Main process exited, code=exited, status=1/FAILURE
May 15 12:18:53 openstack systemd[1]: rabbitmq-server.service: Failed with result 'exit-code'.
May 15 12:18:53 openstack systemd[1]: Failed to start RabbitMQ broker.
Error: /Stage[main]/Rabbitmq::Service/Service[rabbitmq-server]/ensure: change from 'stopped' to 'running' failed: Systemd start for rabbitmq-server failed!
journalctl log for rabbitmq-server:
-- Logs begin at Sat 2021-05-15 11:54:15 CDT, end at Sat 2021-05-15 12:18:53 CDT. --
May 15 12:18:23 openstack systemd[1]: Starting RabbitMQ broker...
May 15 12:18:23 openstack rabbitmq-server[11773]: 2021-05-15 12:18:23 [warning] Both old (.config) and new (.conf) format config files exist.
May 15 12:18:23 openstack rabbitmq-server[11773]: Using the old format config file: /etc/rabbitmq/rabbitmq.config
May 15 12:18:23 openstack rabbitmq-server[11773]: Please update your config files to the new format and remove the old file.
May 15 12:18:53 openstack rabbitmq-server[11773]: ERROR: epmd error for host openstack: timeout (timed out)
May 15 12:18:53 openstack systemd[1]: rabbitmq-server.service: Main process exited, code=exited, status=1/FAILURE
May 15 12:18:53 openstack systemd[1]: rabbitmq-server.service: Failed with result 'exit-code'.
May 15 12:18:53 openstack systemd[1]: Failed to start RabbitMQ broker.
Notice: /Stage[main]/Swift::Deps/Anchor[swift::config::end]: Triggered 'refresh' from 2 events
Notice: /Stage[main]/Swift::Deps/Anchor[swift::service::begin]: Triggered 'refresh' from 2 events
Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::config::end]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Keystone/Exec[keystone-manage fernet_setup]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_%]/Mysql_user[keystone_admin#%]/password_hash: changed password
Notice: /Stage[main]/Keystone::Db::Mysql/Openstacklib::Db::Mysql[keystone]/Openstacklib::Db::Mysql::Host_access[keystone_127.0.0.1]/Mysql_user[keystone_admin#127.0.0.1]/password_hash: changed password
Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::db::end]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::begin]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Keystone::Db::Sync/Exec[keystone-manage db_sync]: Triggered 'refresh' from 2 events
Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::dbsync::end]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Keystone::Bootstrap/Exec[keystone bootstrap]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Keystone::Deps/Anchor[keystone::service::begin]: Triggered 'refresh' from 4 events
Warning: /Stage[main]/Apache::Service/Service[httpd]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Deps/Anchor[keystone::service::end]: Skipping because of failed dependencies
Warning: /Stage[main]/Gnocchi::Deps/Anchor[gnocchi::service::end]: Skipping because of failed dependencies
Warning: /Stage[main]/Aodh::Deps/Anchor[aodh::service::end]: Skipping because of failed dependencies
Warning: /Stage[main]/Placement::Deps/Anchor[placement::service::end]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Cron::Fernet_rotate/Cron[keystone-manage fernet_rotate]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone/Keystone_domain[Default]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone/Exec[restart_keystone]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone/Anchor[default_domain_created]: Skipping because of failed dependencies
Warning: /Stage[main]/Packstack::Keystone/Keystone_role[_member_]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Bootstrap/Keystone_role[admin]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Bootstrap/Keystone_user[admin]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Bootstrap/Keystone_tenant[services]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Bootstrap/Keystone_tenant[admin]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Bootstrap/Keystone_user_role[admin#admin]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Bootstrap/Keystone_service[keystone::identity]: Skipping because of failed dependencies
Warning: /Stage[main]/Keystone::Bootstrap/Keystone_endpoint[RegionOne/keystone::identity]: Skipping because of failed dependencies
Warning: /Stage[main]/Horizon::Deps/Anchor[horizon::service::end]: Skipping because of failed dependencies
Warning: /Stage[main]/Swift::Keystone::Auth/Keystone_role[SwiftOperator]: Skipping because of failed dependencies
Warning: /Stage[main]/Swift::Keystone::Auth/Keystone_role[ResellerAdmin]: Skipping because of failed dependencies
Warning: /Stage[main]/Heat::Keystone::Auth/Keystone_role[heat_stack_user]: Skipping because of failed dependencies
Warning: /Stage[main]/Heat::Keystone::Auth/Keystone_role[heat_stack_owner]: Skipping because of failed dependencies
Warning: /Stage[main]/Heat::Keystone::Domain/Keystone_domain[heat]: Skipping because of failed dependencies
Warning: /Stage[main]/Heat::Keystone::Domain/Keystone_user[heat_admin::heat]: Skipping because of failed dependencies
Warning: /Stage[main]/Heat::Keystone::Domain/Keystone_user_role[heat_admin::heat#::heat]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_user[glance]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_user_role[glance#services]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_service[glance::image]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Keystone::Auth/Keystone::Resource::Service_identity[glance]/Keystone_endpoint[RegionOne/glance::image]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Deps/Anchor[glance::service::begin]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Api/Service[glance-api]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Registry/Service[glance-registry]: Skipping because of failed dependencies
Warning: /Stage[main]/Glance::Deps/Anchor[glance::service::end]: Skipping because of failed dependencies
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone::Resource::Service_identity[cinder]/Keystone_user[cinder]: Skipping because of failed dependencies
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone::Resource::Service_identity[cinder]/Keystone_user_role[cinder#services]: Skipping because of failed dependencies
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone::Resource::Service_identity[cinderv2]/Keystone_service[cinderv2::volumev2]: Skipping because of failed dependencies
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone::Resource::Service_identity[cinderv2]/Keystone_endpoint[RegionOne/cinderv2::volumev2]: Skipping because of failed dependencies
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone::Resource::Service_identity[cinderv3]/Keystone_service[cinderv3::volumev3]: Skipping because of failed dependencies
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone::Resource::Service_identity[cinderv3]/Keystone_endpoint[RegionOne/cinderv3::volumev3]: Skipping because of failed dependencies
Warning: /Stage[main]/Cinder::Deps/Anchor[cinder::service::end]: Skipping because of failed dependencies
Error: Could not prefetch cinder_type provider 'openstack': Could not authenticate
Error: Failed to apply catalog: Could not authenticate
I can see that people have experienced similar issues, but those solutions did not work for me. I have also done the steps below:
Verified the hostname and hosts file
Opened the ports or disabled firewalld
Disabled SELinux
I changed the hostname to openstack and here is my hosts file:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
<my-ip> openstack
I am not sure whether this is a hostname issue, a firewall issue, or something else. I have been struggling with this for quite some time and any help would be greatly appreciated.
I ran into the same problem. Maybe you can modify your /etc/hosts and add your actual hostname to it, not just openstack.
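As a sketch of what that can look like (assuming openstack.thomsoncodes.com is the machine's real FQDN, as the SSH key comment in the log suggests, and 192.168.168.171 is its address), the /etc/hosts line would be something like:
192.168.168.171 openstack.thomsoncodes.com openstack
With the name resolvable this way, epmd should be able to look up the host instead of timing out.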

I am trying to rebuild my old working project and this error is showing. What should I do?

Error:In FontFamilyFont, unable to find attribute android:fontVariationSettings
Error:java.util.concurrent.ExecutionException: com.android.ide.common.process.ProcessException: Error while executing process /sdk/build-tools/27.0.2/aapt with arguments {package -f --no-crunch -I /sdk/platforms/android-26/android.jar -M /project/kitchen33app/milla/build/intermediates/manifest/androidTest/debug/AndroidManifest.xml -S /project/kitchen33app/milla/build/intermediates/res/merged/androidTest/debug -m -J /project/kitchen33app/milla/build/generated/source/r/androidTest/debug -F /project/kitchen33app/milla/build/intermediates/res/androidTest/debug/resources-debugAndroidTest.ap_ -0 apk --output-text-symbols /project/kitchen33app/milla/build/intermediates/symbols/androidTest/debug --no-version-vectors}
Error:com.android.ide.common.process.ProcessException: Error while executing process /sdk/build-tools/27.0.2/aapt with arguments {package -f --no-crunch -I /sdk/platforms/android-26/android.jar -M /project/kitchen33app/milla/build/intermediates/manifest/androidTest/debug/AndroidManifest.xml -S /project/kitchen33app/milla/build/intermediates/res/merged/androidTest/debug -m -J /project/kitchen33app/milla/build/generated/source/r/androidTest/debug -F /project/kitchen33app/milla/build/intermediates/res/androidTest/debug/resources-debugAndroidTest.ap_ -0 apk --output-text-symbols /project/kitchen33app/milla/build/intermediates/symbols/androidTest/debug --no-version-vectors}
Error:org.gradle.process.internal.ExecException: Process 'command '/sdk/build-tools/27.0.2/aapt'' finished with non-zero exit value 1
Error:Execution failed for task ':milla:processDebugAndroidTestResources'.
Failed to execute aapt
Just replace references to color resources with the actual color values in your vector drawable files and it should work.
In your app's build.gradle, add the following:
defaultConfig {
    vectorDrawables.useSupportLibrary = true
}

Login via Shell Script

My issue is that I want to run a script as root, for which I always have to log in as root manually by typing "su -" on the command line.
What I want is for the script I am executing to log in as root automatically, only prompting me for the password. Help me!
::::::::::Script:::::::::::::
if [ "$(whoami)" != "root" ]; then
echo -e '\E[41m'"\033[1mYou must be root to run this script\033[0m"
**su - # at this line I want to login as root but it is not working**
exit 1
fi
sleep 1
if [ "$(pwd)" != "/root" ]; then
echo -e '\E[41m'"\033[1mCopy this script to /root & then try again\033[0m"
cd /root
exit 1
fi
sleep 1
echo -e '\E[36;45m'"\033[1mDownloading Flash Player from ftp.etilizepak.com\033[0m"
sleep 2
wget -vcxr ftp://soft:S0ft\!#ftp.abc.com/ubuntu/ubuntu\ 12.04/adobe-flashplugin=/install_flash_player_11_linux.i386.tar.gz
cd ftp.abc.com/ubuntu/ubuntu\ 12.04/adobe-flashplugin/
sleep 1
echo -e '\E[42m'"\033[1mUnzipping .tar File...\033[0m"
sleep 1
tar -xzf install_flash_player_11_linux.i386.tar.gz
echo "Unzipping Compeleted"
sleep 2
echo -e '\E[42m'"\033[1mCopying libflashplayer.so\033[0m"
cp libflashplayer.so /usr/lib/mozilla/plugins/
:::::::::::::::END:::::::::::::::::::::
I'm not sure if I understand your question, but I suppose you want to run something inside your script with root privileges; in that case you should use the "sudo" command.
You can also suppress the password prompt; this can be configured in the "sudoers" configuration file.
Some more info here:
https://unix.stackexchange.com/questions/35338/su-vs-sudo-s-vs-sudo-bash
Shell script calls sudo; how do I suppress the password prompt
There are tons of examples; google something like "linux sudo examples" and you will find plenty of examples of how to use the su, sudo and sudoers commands.
According to your comments on my previous answer, here is how I do it:
There are two files in the same directory:
-rwx------ 1 root root 19 Sep 10 13:04 test2.sh
-rwxrwxrwx 1 root root 29 Sep 10 13:06 test.sh
File test.sh:
#!/bin/bash
# put your message here
su -c ./test2.sh
File test2.sh:
#!/bin/bash
echo You run as:
whoami
# put your code here
Result:
> ./test.sh
Password:****
You run as:
root
If you want to suppress the password prompt for this script only, replace "su -c" with "sudo" and configure the sudoers file according to the instructions here: https://askubuntu.com/questions/155791/how-do-i-sudo-a-command-in-a-script-without-being-asked-for-a-password
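As a sketch of the sudoers side (the username and script path below are placeholders, and the drop-in file name is hypothetical; always edit sudoers through visudo, e.g. visudo -f /etc/sudoers.d/test2):
youruser ALL=(root) NOPASSWD: /home/youruser/test2.sh
Then test.sh can call "sudo /home/youruser/test2.sh" and no password prompt appears for that one command.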