I am trying to get Let's Encrypt SSL certificates installed on a CentOS 6 server using certbot-auto, but no matter what I try, it just hangs on:
Apache version is 2.2.15
Command
./certbot-auto -v
When I press Ctrl+C to exit the program, it takes about 15 seconds and then exits with a stack trace:
Exiting abnormally:
Traceback (most recent call last):
File "/opt/eff.org/certbot/venv/bin/letsencrypt", line 9, in <module>
load_entry_point('letsencrypt==0.7.0', 'console_scripts', 'letsencrypt')()
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot/main.py", line 1240, in main
return config.func(config, plugins)
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot/main.py", line 981, in run
installer, authenticator = plug_sel.choose_configurator_plugins(config, plugins, "run")
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot/plugins/selection.py", line 189, in choose_configurator_plugins
authenticator = installer = pick_configurator(config, req_inst, plugins)
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot/plugins/selection.py", line 25, in pick_configurator
(interfaces.IAuthenticator, interfaces.IInstaller))
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot/plugins/selection.py", line 77, in pick_plugin
verified.prepare()
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot/plugins/disco.py", line 248, in prepare
return [plugin_ep.prepare() for plugin_ep in six.itervalues(self._plugins)]
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot/plugins/disco.py", line 248, in <listcomp>
return [plugin_ep.prepare() for plugin_ep in six.itervalues(self._plugins)]
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot/plugins/disco.py", line 130, in prepare
self._initialized.prepare()
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot_apache/configurator.py", line 225, in prepare
self.parser = self.get_parser()
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot_apache/override_centos.py", line 39, in get_parser
self.version, configurator=self)
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot_apache/override_centos.py", line 47, in __init__
super(CentOSParser, self).__init__(*args, **kwargs)
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot_apache/parser.py", line 74, in __init__
if self.find_dir("Define", exclude=False):
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/certbot_apache/parser.py", line 401, in find_dir
"%s//*[self::directive=~regexp('%s')]" % (start, regex))
File "/opt/eff.org/certbot/venv/lib64/python3.4/site-packages/augeas.py", line 413, in match
ctypes.byref(array))
KeyboardInterrupt
Please see the logfiles in /var/log/letsencrypt for more details.
I thought it might be a Python version issue, but when I checked, the server is running Python 2.6.6, which, according to the Certbot system requirements, is acceptable.
Letsencrypt.log
When I checked the log, it contained exactly the same stack trace the script printed above.
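For what it's worth, the trace shows the process stuck inside Augeas's match() call, so the hang is presumably in parsing the Apache config rather than in certbot itself. One way to test that theory is a minimal python-augeas check like the following (a sketch, assuming the stock CentOS Apache config path):

import augeas

# Load the system configs through Augeas, the same library the
# certbot Apache plugin is blocked in above.
aug = augeas.Augeas()
# A stock CentOS 6 Apache config lives here; adjust the path if yours differs.
nodes = aug.match("/files/etc/httpd/conf/httpd.conf//*")
print(len(nodes), "nodes parsed")

If this hangs too, the problem is Augeas choking on the Apache config, not certbot.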
Any ideas?
I am trying to run the following project:
https://github.com/krishrustagi/Accident-Detection-System
I do not know what I am doing wrong. This project seems really interesting and I really want to learn more about it.
How do I get the GitHub repository above to work? Maybe it's a simple fix?
Here is the error I get when running camera.py:
(env) pi@raspberrypi:~/Project/Accident-Detection-System $ python3 camera.py
Traceback (most recent call last):
File "/home/pi/Project/Accident-Detection-System/camera.py", line 6, in <module>
model = AccidentDetectionModel("model.json", 'model_weights.h5')
File "/home/pi/Project/Accident-Detection-System/detection.py", line 15, in __init__
self.loaded_model.load_weights(model_weights_file)
File "/home/pi/Project/Accident-Detection-System/env/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/pi/Project/Accident-Detection-System/env/lib/python3.9/site-packages/h5py/_hl/files.py", line 533, in __init__
fid = make_fid(name, mode, userblock_size, fapl, fcpl, swmr=swmr)
File "/home/pi/Project/Accident-Detection-System/env/lib/python3.9/site-packages/h5py/_hl/files.py", line 226, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 106, in h5py.h5f.open
FileNotFoundError: [Errno 2] Unable to open file (unable to open file: name = 'model_weights.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
The traceback is telling you that the file model_weights.h5 does not exist in the project root directory.
After looking at the GitHub repo, it seems that model.json exists but model_weights.h5 does not, while both are supplied as arguments on line 6 of camera.py:
model = AccidentDetectionModel("model.json", 'model_weights.h5')
The README mentioned this in Section 5:
accident-classification.ipynb: This is a jupyter notebook that generates a model to classify the above data. This file generates two important files model.json and model_weights.h5.
Hence you should run accident-classification.ipynb before running camera.py.
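For context, the loading step that fails is presumably along these lines (a sketch based on the traceback; the repo's detection.py may differ in the details), which is why both generated files must exist before camera.py can start:

from tensorflow.keras.models import model_from_json

# Rebuild the architecture from model.json, then attach the trained
# weights from model_weights.h5 -- both are written by the notebook.
with open("model.json") as f:
    loaded_model = model_from_json(f.read())
loaded_model.load_weights("model_weights.h5")

To generate the two files, run the notebook once (for example with jupyter nbconvert --to notebook --execute accident-classification.ipynb) and make sure model_weights.h5 ends up in the directory you launch camera.py from.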
I am attempting to restart an Ambari-managed cluster and am getting errors when the Timeline Service V2.0 Reader service starts:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/timelinereader.py", line 108, in <module>
ApplicationTimelineReader().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/timelinereader.py", line 51, in start
hbase(action='start')
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/hbase_service.py", line 80, in hbase
createTables()
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/package/scripts/hbase_service.py", line 147, in createTables
logoutput=True)
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 263, in action_run
returns=self.resource.returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 308, in _call
raise ExecuteTimeoutException(err_msg)
resource_management.core.exceptions.ExecuteTimeoutException: Execution of 'ambari-sudo.sh su yarn-ats -l -s /bin/bash -c 'export PATH='"'"'/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/local/texlive/2016/bin/x86_64-linux:/usr/local/texlive/2016/bin/x86_64-linux:/usr/local/texlive/2016/bin/x86_64-linux:/usr/lib64/qt-3.3/bin:/usr/local/texlive/2016/bin/x86_64-linux:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/maven/bin:/root/bin:/opt/maven/bin:/opt/maven/bin:/var/lib/ambari-agent'"'"' ; sleep 10;export HBASE_CLASSPATH_PREFIX=/usr/hdp/3.0.0.0-1634/hadoop-yarn/timelineservice/*; /usr/hdp/3.0.0.0-1634/hbase/bin/hbase --config /usr/hdp/3.0.0.0-1634/hadoop/conf/embedded-yarn-ats-hbase org.apache.hadoop.yarn.server.timelineservice.storage.TimelineSchemaCreator -Dhbase.client.retries.number=35 -create -s'' was killed due timeout after 300 seconds
I have not changed any configs or installed anything new before the restart attempt; I simply stopped the cluster services and attempted to restart them. I am not sure what this error message means. Any debugging tips or fixes?
Found the solution on another community post.
navigate to the host where Timeline Reader is installed and Install Hbase Client in that host
Here is how I installed the HBase client via the Ambari UI:
In the Ambari UI, go to Hosts, then click the host you want to install the HBase client component on.
In the list of components, you will have the option to add more.
From there I installed the HBase client.
Then I stopped and restarted the cluster via the Ambari UI. I got a notification of stale configs (though I am not sure whether this was my problem all along or whether installing the HBase client raised the stale-configs alert).
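For anyone who prefers to script this step, the same add-component flow can be driven through Ambari's REST API. A rough sketch (server, cluster, host, and credentials below are placeholders for your own values; the endpoints follow Ambari's documented host_components API):

import requests

AMBARI = "http://ambari-server.example.com:8080"   # placeholder Ambari server
CLUSTER = "mycluster"                              # placeholder cluster name
HOST = "timeline-reader-host.example.com"          # placeholder host FQDN
auth = ("admin", "admin")                          # placeholder credentials
headers = {"X-Requested-By": "ambari"}
url = f"{AMBARI}/api/v1/clusters/{CLUSTER}/hosts/{HOST}/host_components/HBASE_CLIENT"

# Register the HBASE_CLIENT component on the host, then ask Ambari to install it.
requests.post(url, auth=auth, headers=headers).raise_for_status()
requests.put(url, json={"HostRoles": {"state": "INSTALLED"}},
             auth=auth, headers=headers).raise_for_status()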
I'm attempting to run the Dask-MPI "Getting Started" (http://mpi.dask.org/en/latest/) example in a fresh Anaconda environment.
I set up an environment using
conda create -n dask-mpi -c conda-forge python=3.7 dask-mpi
conda activate dask-mpi
Inside the environment, I run
mpirun -np 4 dask-mpi --scheduler-file ./scheduler.json
Then, from a Python interpreter on the same machine (and in the same folder), I run
from dask.distributed import Client
client = Client(scheduler_file='/path/to/scheduler.json')
This results in the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/distributed/client.py", line 712, in __init__
self.start(timeout=timeout)
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/distributed/client.py", line 858, in start
sync(self.loop, self._start, **kwargs)
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/distributed/utils.py", line 331, in sync
six.reraise(*error[0])
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/six.py", line 693, in reraise
raise value
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/distributed/utils.py", line 316, in f
result[0] = yield future
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/tornado/gen.py", line 729, in run
value = future.result()
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/tornado/gen.py", line 736, in run
yielded = self.gen.throw(*exc_info) # type: ignore
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/distributed/client.py", line 954, in _start
yield self._ensure_connected(timeout=timeout)
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/tornado/gen.py", line 729, in run
value = future.result()
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/tornado/gen.py", line 736, in run
yielded = self.gen.throw(*exc_info) # type: ignore
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/distributed/client.py", line 1015, in _ensure_connected
timedelta(seconds=timeout), self._update_scheduler_info()
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/tornado/gen.py", line 729, in run
value = future.result()
tornado.util.TimeoutError: Timeout
The terminal I ran dask-mpi from shows no output indicating that anything is trying to connect. I have verified that the port in question, 8786, is open. I've also verified via a debugger that the client is getting the correct address from the scheduler file.
I've tried this in quite a few different environments and on a few different machines, including a fresh Ubuntu 18.04 docker container. I'm completely at a loss for what steps I might be missing.
It turns out this was due to a bug in newer versions of dask.distributed (1.25.3) that broke the behavior of dask-mpi. It appears to be fixed as of dask-mpi 1.0.3 (https://github.com/dask/dask-mpi/releases/tag/1.0.3).
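If you hit the same symptom, it is worth confirming which versions the environment actually resolved before digging further, e.g. (assuming both packages expose __version__, which they normally do):

import distributed
import dask_mpi

# The client-side breakage was seen with distributed 1.25.3;
# dask-mpi 1.0.3 contains the corresponding fix.
print("distributed:", distributed.__version__)
print("dask-mpi:", dask_mpi.__version__)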
I am using HDP 2.4.2 and had previously installed the Zeppelin server. It was working fine, but today when I restarted the cluster (the AWS nodes were restarted), Ambari shows that the Zeppelin server is not running and fails to start it with the following error:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package/scripts/master.py", line 235, in <module>
Master().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/2.4/services/ZEPPELIN/package/scripts/master.py", line 169, in start
+ params.zeppelin_log_file, user=params.zeppelin_user)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 154, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 158, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 121, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 238, in action_run
tries=self.resource.tries, try_sleep=self.resource.try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/hdp/current/zeppelin-server/lib/bin/zeppelin-daemon.sh start >> /var/log/zeppelin/zeppelin-setup.log' returned 1. /usr/hdp/current/zeppelin-server/lib/bin/zeppelin-daemon.sh: line 187: /var/run/zeppelin-notebook/zeppelin-zeppelin-ip-10-0-0-11.eu-west-1.compute.internal.pid: Permission denied
cat: /var/run/zeppelin-notebook/zeppelin-zeppelin-ip-10-0-0-11.eu-west-1.compute.internal.pid: No such file or directory
In the Zeppelin logs:
ERROR [2016-06-06 03:20:36,714] ({main} VFSNotebookRepo.java[list]:140) - Can't read note file:///usr/hdp/current/zeppelin-server/lib/notebook/screenshots java.io.IOException: file:///usr/hdp/current/zeppelin-server/lib/notebook/screenshots/note.json not found
ERROR [2016-06-06 03:34:12,795] ({main} Notebook.java[loadNoteFromRepo]:330) - Failed to load 2BHU1G67J java.io.IOException: file:///usr/hdp/current/zeppelin-server/lib/notebook/2BHU1G67J is not a directory
But for some reason the Zeppelin port is listening and, despite these errors, the Zeppelin server is running fine and executing all queries. Please advise on how to correct the issue in Ambari and start the service without errors.
The problem is with the PID file for the Zeppelin service: it is either owned by the wrong user or has the wrong permissions. Manually stop the Zeppelin service, then delete the PID file located at /var/run/zeppelin-notebook/zeppelin-zeppelin-ip-10-0-0-11.eu-west-1.compute.internal.pid. Double-check the owner and permissions on the /var/run/zeppelin-notebook folder as well. You should then be able to restart the service from the Ambari UI.
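If you want to verify the ownership and permissions this refers to before restarting, a quick sketch (the expected owner is the user the Zeppelin service runs as, typically zeppelin on HDP):

import os, pwd, grp

path = "/var/run/zeppelin-notebook"
st = os.stat(path)
print(pwd.getpwuid(st.st_uid).pw_name)    # owning user; should be the zeppelin user
print(grp.getgrgid(st.st_gid).gr_name)    # owning group, typically hadoop on HDP
print(oct(st.st_mode & 0o777))            # the directory must be writable by that user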
I'm trying to run bokeh-server under supervisor with Redis as the backend, and I get this error message on startup:
Traceback (most recent call last):
File "/usr/share/nginx/test-status/flask/bin/bokeh-server", line 7, in <module>
bokeh.server.run()
File "/usr/share/nginx/test-status/flask/lib/python2.7/site-packages/bokeh/server/__init__.py", line 175, in run
start_server(args)
File "/usr/share/nginx/test-status/flask/lib/python2.7/site-packages/bokeh/server/__init__.py", line 179, in start_server
start.start_simple_server(args)
File "/usr/share/nginx/test-status/flask/lib/python2.7/site-packages/bokeh/server/start.py", line 54, in start_simple_server
start_redis()
File "/usr/share/nginx/test-status/flask/lib/python2.7/site-packages/bokeh/server/start.py", line 40, in start_redis
save=redis_save)
File "/usr/share/nginx/test-status/flask/lib/python2.7/site-packages/bokeh/server/services.py", line 81, in start_redis
stdin=subprocess.PIPE
File "/usr/share/nginx/test-status/flask/lib/python2.7/site-packages/bokeh/server/services.py", line 32, in __init__
self.add_to_pidfile()
File "/usr/share/nginx/test-status/flask/lib/python2.7/site-packages/bokeh/server/services.py", line 46, in add_to_pidfile
with open(self.pidfilename, "w+") as f:
IOError: [Errno 13] Permission denied: '/bokehpids.json'
Note that I can run the server under supervisor if I use memory as the backend, and I can run bokeh-server manually with Redis as the backend just fine. Does anyone know which permissions I need to change?
Turns out it was trying to write the pidfile into the root directory...
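That is because a relative pidfile name like bokehpids.json resolves against the process's current working directory, and supervisord typically runs with / as its working directory when daemonized, which child programs inherit unless directory= is set. A two-line illustration:

import os

os.chdir("/")                             # supervisor's default working directory
print(os.path.abspath("bokehpids.json"))  # -> /bokehpids.json, hence Errno 13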
I solved this by changing the directory in the supervisor config file:
[program:bokeh]
...
directory=/usr/share/nginx/test-status
...