How to use Salt pkg.installed module to install local rpm - yum

We have some .rpm applications we want to install on CentOS 6 & 7 machines. The machines don't have access to the internet. How can we write a state that makes sure the application is installed? Here's my code:
Install Nessus Agent:
  pkg.installed:
    - name: NessusAgent
    - sources: '[{"NessusAgent": "salt:///root/NessusAgent-7.1.1-es{{ osmajorrelease }}.x86_64.rpm"}]'
The error I get when I run the state:
Comment: An exception occurred in this state: Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/salt/state.py", line 1913, in call
**cdata['kwargs'])
File "/usr/lib/python2.7/site-packages/salt/loader.py", line 1898, in wrapper
return f(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/salt/states/pkg.py", line 1617, in installed
if next(iter(list(x.keys()))) in targets]
AttributeError: 'unicode' object has no attribute 'keys'
What is the correct way to install local rpm packages with a Salt state?

I got it to work by rewriting the "sources" parameter:
Install Nessus Agent:
  pkg.installed:
    - name: NessusAgent
    - enable: True
    - sources:
      - NessusAgent: salt:///files/nessus/NessusAgent-7.1.1-es7.x86_64.rpm
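To keep the CentOS 6/7 handling from the question, the same state can select the RPM by OS major release. This is only a sketch; the salt:// layout and file names below are assumptions, not part of the answer above:

{# Sketch: assumes the RPMs live under salt://files/nessus/ on the master #}
Install Nessus Agent:
  pkg.installed:
    - name: NessusAgent
    - sources:
      - NessusAgent: salt://files/nessus/NessusAgent-7.1.1-es{{ grains['osmajorrelease'] }}.x86_64.rpm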


[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed on automate-git.py

I'm trying to compile CEF locally on my Ubuntu 20.10 machine, but my automate-git.py can't finish due to a strange error while running hooks:
Apply runhooks.patch in /home/user/code/chromium_git/chromium/src
9 5 build/toolchain/win/setup_toolchain.py
11 0 build/vs_toolchain.py
... successfully applied.
-------- Running "gclient runhooks --jobs 16" in "/home/user/code/chromium_git/chromium"...
Running hooks: 5% ( 6/101) nacltools
________ running 'vpython src/build/download_nacl_toolchains.py --mode nacl_core_sdk sync --extract' in '/home/user/code/chromium_git/chromium'
INFO: --Syncing arm_trusted to revision 2--
INFO: Downloading package archive: emulator_arm_trusted_precise.tgz (1/1)
package_version: Could not download URL (https://storage.googleapis.com/nativeclient-archive2/toolchain/2/emulator_arm_trusted_precise.tgz): <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:727)>
Error: Command 'vpython src/build/download_nacl_toolchains.py --mode nacl_core_sdk sync --extract' returned non-zero exit status 1 in /home/user/code/chromium_git/chromium
Traceback (most recent call last):
File "../automate/automate-git.py", line 1385, in <module>
run("gclient runhooks --jobs 16", chromium_dir, depot_tools_dir)
File "../automate/automate-git.py", line 69, in run
return subprocess.check_call(
File "/usr/lib/python3.8/subprocess.py", line 364, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['gclient', 'runhooks', '--jobs', '16']' returned non-zero exit status 2.
On a restart it succeeds, though, but there are compilation errors further on. I put check_certificate = off in ~/.wgetrc and insecure in ~/.curlrc, but no luck yet. What do I do?
I solved this issue by setting up a VM and compiling CEF inside it, and everything magically started working, so I guess it was an issue with my system.
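A note on why the .wgetrc/.curlrc changes had no effect: the failing download is made by Python's urlopen inside the NaCl hook, so wget and curl settings are never consulted. One hedged workaround (not from the answer above) is to point OpenSSL, and therefore Python's ssl module, at the system CA bundle before re-running the script; the path below is the Debian/Ubuntu default and is an assumption for other distros:

# Assumed Debian/Ubuntu CA bundle location; adjust for your distro
export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt
export SSL_CERT_DIR=/etc/ssl/certs
python3 ../automate/automate-git.py ...   # re-run with the original arguments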

TensorFlow data loading issue

I'm working with an example program that uses the MNIST dataset.
It tries to load the dataset using this line:
dataset = tfds.load(name='mnist', split=split)
However, this yields the following error:
2020-07-30 12:08:17.926262: W tensorflow/core/platform/cloud/google_auth_provider.cc:184] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "Not found: Could not locate the credentials file.". Retrieving token from GCE failed with "Failed precondition: Error executing an HTTP request: libcurl code 6 meaning 'Couldn't resolve host name', error details: Couldn't resolve host 'metadata'".
Traceback (most recent call last):
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/utils/py_utils.py", line 399, in try_reraise
    yield
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/registered.py", line 244, in builder
    return builder_cls(name)(**builder_kwargs)
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/api_utils.py", line 69, in disallow_positional_args_dec
    return fn(*args, **kwargs)
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/dataset_builder.py", line 206, in __init__
    self.info.initialize_from_bucket()
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/dataset_info.py", line 423, in initialize_from_bucket
    data_files = gcs_utils.gcs_dataset_info_files(self.full_name)
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/utils/gcs_utils.py", line 71, in gcs_dataset_info_files
    return gcs_listdir(posixpath.join(GCS_DATASET_INFO_DIR, dataset_dir))
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/utils/gcs_utils.py", line 64, in gcs_listdir
    if is_gcs_disabled() or not tf.io.gfile.exists(root_dir):
  File "/home/tflynn/.local/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 267, in file_exists_v2
    _pywrap_file_io.FileExists(compat.as_bytes(path))
tensorflow.python.framework.errors_impl.FailedPreconditionError: Error executing an HTTP request: libcurl code 77 meaning 'Problem with the SSL CA cert (path? access rights?)', error details: error setting certificate verify locations:
  CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: none
when reading metadata of gs://tfds-data/dataset_info/mnist/3.0.1
I've searched on Google, but couldn't find any other instances of this error with TensorFlow. The node is connected to the internet, if that makes a difference.
I had the same issue on a Fedora 32 system. The directory /etc/ssl/certs/ exists and contains a file ca-bundle.crt. The following command solved the problem:
sudo ln -s /etc/ssl/certs/ca-bundle.crt /etc/ssl/certs/ca-certificates.crt
This is probably the same as [1]. If you are on Linux, run
apt-get update
apt-get install -y ca-certificates
before executing your code (or the commands of similar effect on your OS).
[1] https://github.com/tensorflow/serving/issues/1022
If you are unable to create a symbolic link or install using apt, you can try upgrading to a more recent version of tfds. This issue is not present in the nightly build version 3.2.1.
pip install tfds-nightly: released every day, contains the latest versions of the datasets.
According to the tensorflow/datasets GitHub repository, one commenter suggests downgrading to 3.0.0; however, I have not tried this to see if it works.
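If you do want to try that downgrade, the pin would look something like the following (untested here, as noted above):
pip install tensorflow-datasets==3.0.0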
The error says curl couldn't find the file with root SSL certificates on your computer. On my machine this file is stored at /etc/ssl/certs/ca-bundle.crt.
You can override the path curl looks in by setting the CURL_CA_BUNDLE environment variable. For example, either add this to the top of your Python script/notebook:
import os
os.environ['CURL_CA_BUNDLE'] = "/etc/ssl/certs/ca-bundle.crt"
or you could set the environment variable in a shell, e.g.
export CURL_CA_BUNDLE=/etc/ssl/certs/ca-bundle.crt
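To find out which path to point CURL_CA_BUNDLE at before setting it, a quick check such as the following can help (only a sketch; the candidate paths are common distro defaults, not an exhaustive list):

import os

# Common system CA bundle locations (assumed defaults, not exhaustive)
candidates = [
    "/etc/ssl/certs/ca-certificates.crt",  # Debian/Ubuntu
    "/etc/ssl/certs/ca-bundle.crt",        # Fedora/CentOS symlink
    "/etc/pki/tls/certs/ca-bundle.crt",    # RHEL/CentOS
]
print([p for p in candidates if os.path.exists(p)])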

What is the correct syntax/call to get the sls file working - salt

I'm trying to build a reactor sls file that runs when an event occurs.
The sls file should do the same as the following CLI commands:
sudo salt minion git.add /srv/salt .
sudo salt minion git.commit /srv/salt test
sudo salt minion git.push /srv/salt origin master identity=/home/autogit/.ssh/id_rsa
If I run the code below, triggered by the reactor, I get the following error message:
[DEBUG ] Reactor is populating module client cache
[ERROR ] An un-handled exception from the multiprocessing process 'Reactor-9:1' was caught:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/salt/utils/process.py", line 765, in _run
return self._original_run()
File "/usr/lib/python2.7/dist-packages/salt/utils/reactor.py", line 271, in run
self.call_reactions(chunks)
File "/usr/lib/python2.7/dist-packages/salt/utils/reactor.py", line 228, in call_reactions
self.wrap.run(chunk)
File "/usr/lib/python2.7/dist-packages/salt/utils/reactor.py", line 330, in run
self.populate_client_cache(low)
File "/usr/lib/python2.7/dist-packages/salt/utils/reactor.py", line 324, in populate_client_cache
self.reaction_class[reaction_type](self.opts['conf_file'])
KeyError: u'module'
[CRITICAL] Engine 'reactor' could not be started!
I've tried different syntaxes (old style and new style) but couldn't figure out what the problem is. I always get a KeyError: u'module' or u'git'.
I also tried it with a runner function to run it locally on the master.
git pull:
  module.run:
    - git.pull:
      - cwd: /srv/salt
      - remote: git#git.xyz.com:user/sbt.git
      - identity: /home/autogit/.ssh/id_rsa
    - git.add:
      - cwd: /srv/salt
      - filename: .
    - git.commit:
      - cwd: /srv/salt
      - remote: git#git.xyz.com:user/sbt.git
    - git.push:
      - cwd: /srv/salt
      - remote: git#git.xyz.com:user/sbt.git
      - identity: /home/autogit/.ssh/id_rsa
salt --versions-report
Salt Version:
Salt: 2019.2.0
Dependency Versions:
cffi: Not Installed
cherrypy: unknown
dateutil: 2.6.1
docker-py: Not Installed
gitdb: 2.0.3
gitpython: 2.1.8
ioflo: Not Installed
Jinja2: 2.10
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: Not Installed
Mako: 1.0.7
msgpack-pure: Not Installed
msgpack-python: 0.5.6
mysql-python: Not Installed
pycparser: Not Installed
pycrypto: 2.6.1
pycryptodome: Not Installed
pygit2: Not Installed
Python: 2.7.15rc1 (default, Nov 12 2018, 14:31:15)
python-gnupg: 0.4.1
PyYAML: 3.12
PyZMQ: 16.0.2
RAET: Not Installed
smmap: 2.0.3
timelib: Not Installed
Tornado: 4.5.3
ZMQ: 4.2.5
System Versions:
dist: Ubuntu 18.04 bionic
locale: UTF-8
machine: x86_64
release: 4.15.0-46-generic
system: Linux
version: Ubuntu 18.04 bionic
Since I'm quite new to Salt, hopefully you can give me a hint about what I'm doing wrong.
You didn't provide the master config.
About the module.run confusion: add the following to your settings (on the minion, and maybe on the master too, since I don't know your use case):
use_superseded:
  - module.run
That will enable your syntax; there is more documentation about this here: https://docs.saltstack.com/en/latest/ref/states/all/salt.states.module.html#salt.states.module.run
In general: you are executing execution modules from a place where only state modules are allowed (the term "module" is heavily overused in Salt...).
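As a hedged illustration of where that setting could live on the minion (the drop-in file name is an assumption; any .conf file under /etc/salt/minion.d/ is read):

# /etc/salt/minion.d/use_superseded.conf (file name is an assumption)
use_superseded:
  - module.run

Restart the salt-minion service afterwards so the option is picked up.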
You didn't provide the full master config. The Reactor requires dedicated configuration to match events to sls files:
https://docs.saltstack.com/en/latest/ref/configuration/master.html#master-reactor-settings
You can also check the doc I wrote some time ago about events and reactors:
https://github.com/kiemlicz/util/wiki/Salt-Events-and-Reactor
Assuming you've configured the event-to-sls-file matching in the master config, the sls you provided:
git pull:
  module.run:
    - git.pull:
      - cwd: /srv/salt
      - remote: git#git.xyz.com:user/
...
will not work.
Mind that the reaction happens on the Salt master, so the reaction sls file needs to specify the type of reaction (local, runner, etc.), since it's no longer the view of one minion but possibly of tons of minions!
First create a runner reaction type, which delegates to some orchestration sls file that contains your logic wrapped with (I think) salt.function.
Help yourself with the aforementioned GitHub link to my attempt at explaining the Reactor, and see the sketch below.
Refer to the official doc as well: https://docs.saltstack.com/en/latest/topics/reactor/index.html
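To make that concrete, here is a hedged sketch of the pieces described above; the event tag, file paths, and target name are assumptions, not taken from the question, and git.commit is omitted for brevity:

# /etc/salt/master.d/reactor.conf (assumed path; maps an event tag to a reaction file)
reactor:
  - 'myorg/git/sync':
    - /srv/reactor/git_sync.sls

# /srv/reactor/git_sync.sls (assumed path)
# runner reaction type: delegate the actual logic to an orchestration state on the master
run git sync orchestration:
  runner.state.orchestrate:
    - args:
      - mods: orch.git_sync

# /srv/salt/orch/git_sync.sls (assumed path)
# salt.function wraps execution modules, mirroring the CLI commands from the question
git add:
  salt.function:
    - name: git.add
    - tgt: minion
    - arg:
      - /srv/salt
      - .

git push:
  salt.function:
    - name: git.push
    - tgt: minion
    - arg:
      - /srv/salt
      - origin
      - master
    - kwarg:
        identity: /home/autogit/.ssh/id_rsa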

Plone LDAP add-on installation issue

I am trying to get LDAP authentication to work on Plone 4.2. I have hammered at the issue for several hours without results. I have even tried these steps:
Install python-ldap 2.6 (C:\Python26)
Install Plone 4.2 with the installer (D:\Plone)
Edit buildout.cfg with plone.app.ldap in the EGG and ZCML section
Create a new folder called python_ldap-2.3.12-py2.6.egg in D:\Plone\buildout-cache\eggs\
Copy C:\Python26\lib\site-packages\python_ldap-2.3.12-py2.6.egg-info to D:\Plone\buildout-cache\eggs\python_ldap-2.3.12-py2.6.egg\ and rename to EGG-INFO
Also copy the ldap folder in C:\Python26\lib\site-packages\ to D:\Plone\buildout-cache\eggs\python_ldap-2.3.12-py2.6.egg\
Also copy the file ldapurl.py from C:\Python26\lib\site-packages\ to D:\Plone\buildout-cache\eggs\python_ldap-2.3.12-py2.6.egg\
Next copy:
folder: C:\Python26\lib\site-packages\python_ldap-2.3.12-py2.6.egg-info
folder: C:\Python26\lib\site-packages\ldap
to D:\Plone\python\Lib\site-packages
Start a command prompt and run bin\buildout
Start Plone, log in as admin and go to the extra products section. There you will find the LDAP product. Install it and enter your LDAP details.
None of that really helped. When I try bin/buildout, I get the following message:
Installing instance.
Getting distribution for 'dataflake.fakeldap'.
zip_safe flag not set; analyzing archive contents...
Installed /tmp/easy_install-oISsVG/dataflake.fakeldap-1.0/setuptools_git-0.4.2-py2.6.egg
Got dataflake.fakeldap 1.0.
Generated script '/usr/local/Plone/zinstance/bin/instance'.
Installing zopepy.
Generated interpreter '/usr/local/Plone/zinstance/bin/zopepy'.
Installing zopeskel.
Generated script '/usr/local/Plone/zinstance/bin/zopeskel'.
Generated script '/usr/local/Plone/zinstance/bin/paster'.
Updating backup.
Updating chown.
chown: Running
echo Dummy references to force this to execute after referenced parts
echo /usr/local/Plone/zinstance/var/backups sudo -u plone
chmod 600 .installed.cfg
find /usr/local/Plone/zinstance/var -type d -exec chmod 700 {} \;
chmod 744 /usr/local/Plone/zinstance/bin/*
Dummy references to force this to execute after referenced parts
/usr/local/Plone/zinstance/var/backups sudo -u plone
Updating repozo.
Updating unifiedinstaller.
*************** PICKED VERSIONS ****************
[versions]
Products.LDAPMultiPlugins = 1.14
Products.LDAPUserFolder = 2.23
Products.PloneLDAP = 1.1
collective.sendaspdf = 2.6
dataflake.fakeldap = 1.0
jquery.pyproxy = 0.4.1
plone.app.ldap = 1.2.8
*************** /PICKED VERSIONS ***************
When I run bin/buildout, it says a daemon process started and gives an id, but when I try localhost:8080 it says "Problem loading page" and the page does not load. I tried bin/instance fg to display the errors and got the following message:
bin/instance fg
2012-07-24 08:53:18 INFO ZServer HTTP server started at Tue Jul 24 08:53:18 2012
Hostname: 0.0.0.0
Port: 8080
2012-07-24 08:53:18 INFO Zope Set effective user to "plone"
2012-07-24 08:53:19 WARNING SecurityInfo Conflicting security declarations for "setText"
2012-07-24 08:53:19 WARNING SecurityInfo Class "ATTopic" had conflicting security declarations
2012-07-24 08:53:19 ERROR Application Could not import Products.LDAPMultiPlugins
Traceback (most recent call last):
File "/usr/local/Plone/buildout-cache/eggs/Zope2-2.13.15-py2.6.egg/OFS/Application.py", line 606, in import_product
product=__import__(pname, global_dict, global_dict, silly)
File "/usr/local/Plone/buildout-cache/eggs/Products.LDAPMultiPlugins-1.14-py2.6.egg/Products/LDAPMultiPlugins/__init__.py", line 22, in <module>
from Products.LDAPMultiPlugins.LDAPMultiPlugin import addLDAPMultiPluginForm
File "/usr/local/Plone/buildout-cache/eggs/Products.LDAPMultiPlugins-1.14-py2.6.egg/Products/LDAPMultiPlugins/LDAPMultiPlugin.py", line 29, in <module>
from Products.LDAPUserFolder import manage_addLDAPUserFolder
File "/usr/local/Plone/buildout-cache/eggs/Products.LDAPUserFolder-2.23-py2.6.egg/Products/LDAPUserFolder/__init__.py", line 20, in <module>
from Products.LDAPUserFolder.LDAPUserFolder import LDAPUserFolder
File "/usr/local/Plone/buildout-cache/eggs/Products.LDAPUserFolder-2.23-py2.6.egg/Products/LDAPUserFolder/LDAPUserFolder.py", line 52, in <module>
from Products.LDAPUserFolder.LDAPDelegate import filter_format
File "/usr/local/Plone/buildout-cache/eggs/Products.LDAPUserFolder-2.23-py2.6.egg/Products/LDAPUserFolder/LDAPDelegate.py", line 19, in <module>
import ldap
File "/usr/local/Plone/buildout-cache/eggs/python_ldap-2.3.12-py2.6.egg/ldap/__init__.py", line 22, in <module>
from _ldap import *
ImportError: No module named _ldap
Traceback (most recent call last):
File "/usr/local/Plone/buildout-cache/eggs/Zope2-2.13.15-py2.6.egg/Zope2/Startup/run.py", line 76, in <module>
run()
File "/usr/local/Plone/buildout-cache/eggs/Zope2-2.13.15-py2.6.egg/Zope2/Startup/run.py", line 22, in run
starter.prepare()
File "/usr/local/Plone/buildout-cache/eggs/Zope2-2.13.15-py2.6.egg/Zope2/Startup/__init__.py", line 86, in prepare
self.startZope()
File "/usr/local/Plone/buildout-cache/eggs/Zope2-2.13.15-py2.6.egg/Zope2/Startup/__init__.py", line 259, in startZope
Zope2.startup()
File "/usr/local/Plone/buildout-cache/eggs/Zope2-2.13.15-py2.6.egg/Zope2/__init__.py", line 47, in startup
_startup()
File "/usr/local/Plone/buildout-cache/eggs/Zope2-2.13.15-py2.6.egg/Zope2/App/startup.py", line 67, in startup
OFS.Application.import_products()
File "/usr/local/Plone/buildout-cache/eggs/Zope2-2.13.15-py2.6.egg/OFS/Application.py", line 583, in import_products
import_product(product_dir, product_name, raise_exc=debug_mode)
File "/usr/local/Plone/buildout-cache/eggs/Zope2-2.13.15-py2.6.egg/OFS/Application.py", line 606, in import_product
product=__import__(pname, global_dict, global_dict, silly)
File "/usr/local/Plone/buildout-cache/eggs/Products.LDAPMultiPlugins-1.14-py2.6.egg/Products/LDAPMultiPlugins/__init__.py", line 22, in <module>
from Products.LDAPMultiPlugins.LDAPMultiPlugin import addLDAPMultiPluginForm
File "/usr/local/Plone/buildout-cache/eggs/Products.LDAPMultiPlugins-1.14-py2.6.egg/Products/LDAPMultiPlugins/LDAPMultiPlugin.py", line 29, in <module>
from Products.LDAPUserFolder import manage_addLDAPUserFolder
File "/usr/local/Plone/buildout-cache/eggs/Products.LDAPUserFolder-2.23-py2.6.egg/Products/LDAPUserFolder/__init__.py", line 20, in <module>
from Products.LDAPUserFolder.LDAPUserFolder import LDAPUserFolder
File "/usr/local/Plone/buildout-cache/eggs/Products.LDAPUserFolder-2.23-py2.6.egg/Products/LDAPUserFolder/LDAPUserFolder.py", line 52, in <module>
from Products.LDAPUserFolder.LDAPDelegate import filter_format
File "/usr/local/Plone/buildout-cache/eggs/Products.LDAPUserFolder-2.23-py2.6.egg/Products/LDAPUserFolder/LDAPDelegate.py", line 19, in <module>
import ldap
File "/usr/local/Plone/buildout-cache/eggs/python_ldap-2.3.12-py2.6.egg/ldap/__init__.py", line 22, in <module>
from _ldap import *
ImportError: No module named _ldap
What am I doing wrong? Help will be deeply appreciated.
Your buildout ran successfully; there were no problems there. Some of the packages you picked were not pinned, so buildout reported which versions it chose for you.
Your server itself is indeed not running, because the python-ldap egg seems to have been installed incorrectly: the buildout-cache/eggs/python_ldap-2.3.12-py2.6.egg/ldap/_ldap.so library file is missing.
Remove the whole egg (rm -rf buildout-cache/eggs/python_ldap-2.3.12-py2.6.egg), make sure you have the OpenLDAP 2.x library and headers installed on your system (on Ubuntu and Debian the libldap2-dev package should be enough), then re-run buildout to reinstall the egg.
Alternatively, you could try installing the system python-ldap package (and remove the egg) to see if buildout picks that up instead.
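Putting the first answer together, the sequence would look roughly like this (a sketch only; the paths are taken from the buildout output above and the package names assume Debian/Ubuntu):

# Install the OpenLDAP (and SASL) development headers first
sudo apt-get install libldap2-dev libsasl2-dev

# Remove the broken egg so buildout rebuilds it against those headers
rm -rf /usr/local/Plone/buildout-cache/eggs/python_ldap-2.3.12-py2.6.egg

# Re-run buildout, then start the instance in the foreground to check for errors
cd /usr/local/Plone/zinstance
bin/buildout
bin/instance fg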
You need to install two libraries:
sudo apt-get install libldap2-dev
sudo apt-get install libsasl2-dev
Hope that helps.

Problems with setuptools while setting up Trac on a remote system without an internet connection

I am trying to set up Trac on a remote system running Fedora 12 that doesn't have an internet connection. So I am copying all the files from my system, which has an internet connection, to that system and trying to install from source there.
When I run
python ./setup.py install
I get this:
Traceback (most recent call last):
File "./setup.py", line 17, in <module>
from setuptools import setup, find_packages
ImportError: No module named setuptools
I tried installing setuptools from source. I downloaded setuptools-0.6c11-1.src.rpm and ran
rpm -i setuptools-0.6c11-1.src.rpm
but I get the following warnings and nothing seems to happen:
warning: user pje does not exist - using root
warning: group pje does not exist - using root
warning: user pje does not exist - using root
warning: group pje does not exist - using root
How do I proceed?
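For context on why nothing seems to happen: running rpm -i on a .src.rpm does not install setuptools at all; it only unpacks the sources and spec file into the rpmbuild tree, and the "user pje does not exist" warnings are harmless ownership fallbacks. A hedged sketch of one way forward, assuming the rpm-build package is already available on the offline machine and that the resulting package name matches (both are assumptions):

# Rebuild a binary RPM from the source RPM (needs the rpm-build package)
rpmbuild --rebuild setuptools-0.6c11-1.src.rpm

# The binary package lands under ~/rpmbuild/RPMS/noarch/ (exact file name may differ)
rpm -i ~/rpmbuild/RPMS/noarch/python-setuptools-0.6c11-*.noarch.rpm

# Then retry the Trac install
python ./setup.py install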