twisted Using Processes - process

I am learning to use Twisted (latest 12.3.0 release) as a way to do some simple server-side processing for a mobile app.
My first assignment is essentially to run a 'tail' command on a log file and deliver the post-processed matching lines to the mobile app. That should be easy...
Now in the docs on the TwistedMatrix site there is a 'Using Processes' page, where I got the following code:
from twisted.internet import protocol, utils, reactor
from twisted.python import failure
from cStringIO import StringIO

class CommandRunner(protocol.Protocol):
    #command = "ls /"
    command = "tail -n 100 /var/log/system.log"

    def connectionMade(self):
        output = utils.getProcessOutput(self.command)
        output.addCallbacks(self.writeResponse, self.noResponse)

    def writeResponse(self, resp):
        self.transport.write(resp)
        self.transport.loseConnection()

    def noResponse(self, err):
        print err
        self.transport.write("Houston, we have an error!\n")
        self.transport.loseConnection()

if __name__ == '__main__':
    f = protocol.Factory()
    f.protocol = CommandRunner
    reactor.listenTCP(10999, f)
    reactor.run()
It is 99.9% identical to the code snippet published under 'Doing it the Easy Way'. The only change is the shell command that Twisted should execute (on my Mac I do not seem to have the fortune command).
After launching the sample code, when I try to connect to port 10999 from a second terminal with telnet, I get this error:
[Failure instance: Traceback (failure with no frames): : got stderr: 'Upon
execvpe tail -n 100 /var/log/system.log [\'tail -n 100
/var/log/system.log\'] in environment id 4315532016\n:Traceback (most
recent call last):\n File
"/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-12.3.0-py2.7-macosx-10.6-intel.egg/twisted/internet/process.py",
line 420, in _fork\n executable, args, environment)\n File
"/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Twisted-12.3.0-py2.7-macosx-10.6-intel.egg/twisted/internet/process.py",
line 466, in _execChild\n os.execvpe(executable, args,
environment)\n File
"/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py",
line 353, in execvpe\n _execvpe(file, args, env)\n File
"/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/os.py",
line 368, in _execvpe\n func(file, *argrest)\nOSError: [Errno 2] No
such file or directory\n']
I do not see any obvious reason why the code should fail with an [Errno 2] No such file or directory error.
TIA

The program you want to run is "tail". You want to pass it several arguments: "-n", "100", and "/var/log/system.log".
Instead, what your code does is run the program "tail -n 100 /var/log/system.log", which presumably does not exist on your system (I wouldn't expect it to).
The proper use of getProcessOutput is to pass the program separately from the argument list:
getProcessOutput("tail", ["-n", "100", "/var/log/system.log"])

Related

Python3 replacing Python errors with different output

I have one issue that I would like to fix but have not been able to. I have a small script that requires an SSH key to work properly. When the SSH key is not properly loaded, the following Python error appears:
SSH: Permission denied (publickey).
Traceback (most recent call last):
  File "/path/to/python3file.py", line 117, in <module>
    func.func_check()
  File "/path/to/python3file.py", line 18, in func_check
    ssh = subprocess.check_output(["ssh", "-p22", "{}#{}".format("user", self.host), command])
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/subprocess.py", line 411, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/subprocess.py", line 512, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ssh', '-p22', 'user#chost', ' script func user']' returned non-zero exit status 255.
Basically, I want to replace the entire error output above with something more user-friendly, like:
Please import SSH key
Is that even possible?
Thank you.
Use try/except and specify the type of error you are handling. In this case it is subprocess.CalledProcessError (the exception named at the bottom of your traceback):
try:
    print('my code here')  # the call that can fail goes here
except subprocess.CalledProcessError:
    print('Please import SSH key')
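Applied to the call in your traceback, a minimal sketch (the user, host, and command strings are just placeholders standing in for the question's values):

import subprocess

command = "script func user"  # placeholder remote command
try:
    out = subprocess.check_output(["ssh", "-p22", "user@host", command])
except subprocess.CalledProcessError:
    # ssh exits non-zero when key authentication fails
    # ("Permission denied (publickey)"), which raises this exception.
    print("Please import SSH key")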

python script works interactively but fails with pandas write permission problems as a cron job

I have Anaconda with Python 3.8 on macOS Big Sur. The Python script works just fine within PyCharm or interactively inside a shell script:
/Users/nicholaskalita/opt/anaconda3/bin/python3.8 /Users/nicholaskalita/PycharmProjects/CrpytoScrape/CMCScrape.py
The shell script needs to be launched regularly, which is where the trouble begins. launchd starts it as root (crontab doesn't seem to work on macOS, but that's another story), but the Python script fails with:
Traceback (most recent call last):
File "/Users/nicholaskalita/PycharmProjects/CrpytoScrape/CMCScrape.py", line 241, in
dframe.to_csv(FilePath+NQuotes, index=False)
File "/Users/nicholaskalita/opt/anaconda3/lib/python3.8/site-packages/pandas/core/generic.py", line 3170, in to_csv
formatter.save()
File "/Users/nicholaskalita/opt/anaconda3/lib/python3.8/site-packages/pandas/io/formats/csvs.py", line 185, in save
f, handles = get_handle(
File "/Users/nicholaskalita/opt/anaconda3/lib/python3.8/site-packages/pandas/io/common.py", line 493, in get_handle
f = open(path_or_buf, mode, encoding=encoding, errors=errors, newline="")
PermissionError: [Errno 1] Operation not permitted: '/usr/local/ ...
The destination directory happens to be on a network drive, but neither applying chmod 777 to it nor moving to a local disk solves the problem.
After clearing out the following directories, this problem vanished:
~/Library/LaunchAgents
/Library/LaunchAgents
/Library/LaunchDaemons
/Library/StartupItems

Tensorflow dataloading issue

I'm working with an example program that uses the MNIST dataset.
It tries to load the dataset using this line:
dataset = tfds.load(name='mnist', split=split)
However, this yields the following error:
2020-07-30 12:08:17.926262: W tensorflow/core/platform/cloud/google_auth_provider.cc:184] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "Not found: Could not locate the credentials file.". Retrieving token from GCE failed with "Failed precondition: Error executing an HTTP request: libcurl code 6 meaning 'Couldn't resolve host name', error details: Couldn't resolve host 'metadata'".
Traceback (most recent call last):
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/utils/py_utils.py", line 399, in try_reraise
    yield
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/registered.py", line 244, in builder
    return builder_cls(name)(**builder_kwargs)
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/api_utils.py", line 69, in disallow_positional_args_dec
    return fn(*args, **kwargs)
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/dataset_builder.py", line 206, in __init__
    self.info.initialize_from_bucket()
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/dataset_info.py", line 423, in initialize_from_bucket
    data_files = gcs_utils.gcs_dataset_info_files(self.full_name)
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/utils/gcs_utils.py", line 71, in gcs_dataset_info_files
    return gcs_listdir(posixpath.join(GCS_DATASET_INFO_DIR, dataset_dir))
  File "/home/tflynn/pylocal/lib/python3.7/site-packages/tensorflow_datasets/core/utils/gcs_utils.py", line 64, in gcs_listdir
    if is_gcs_disabled() or not tf.io.gfile.exists(root_dir):
  File "/home/tflynn/.local/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 267, in file_exists_v2
    _pywrap_file_io.FileExists(compat.as_bytes(path))
tensorflow.python.framework.errors_impl.FailedPreconditionError: Error executing an HTTP request: libcurl code 77 meaning 'Problem with the SSL CA cert (path? access rights?)', error details: error setting certificate verify locations:
  CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: none
when reading metadata of gs://tfds-data/dataset_info/mnist/3.0.1
I've searched on Google but couldn't find any other instances of this error with TensorFlow. The node is connected to the internet, if that makes a difference.
I've had the same issue on a Fedora 32 system. The directory /etc/ssl/certs/ exists and there is a file ca-bundle.crt. The following command solved the problem:
sudo ln -s /etc/ssl/certs/ca-bundle.crt /etc/ssl/certs/ca-certificates.crt
Probably the same as [1]. If on Linux, run
apt-get update
apt-get install -y ca-certificates
before executing your code, or run commands with a similar effect on your OS.
[1] https://github.com/tensorflow/serving/issues/1022
If you are unable to create a symbolic link or install using apt, you can try upgrading to a more recent version of tfds. This issue is not present in the nightly build, version 3.2.1.
pip install tfds-nightly: released every day, contains the latest versions of the datasets.
According to the tensorflow/datasets GitHub repository, one commenter suggests downgrading to 3.0.0; however, I have not tried this to see if it works.
The error is saying curl couldn't find the file with the root SSL certificates on your computer. On my machine this file is stored at /etc/ssl/certs/ca-bundle.crt.
You can override the path curl looks at by setting the CURL_CA_BUNDLE environment variable. For example, either add this to the top of your Python script/notebook:
import os
os.environ['CURL_CA_BUNDLE'] = "/etc/ssl/certs/ca-bundle.crt"
or you could set the environment variable in a shell, e.g.
export CURL_CA_BUNDLE=/etc/ssl/certs/ca-bundle.crt
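Putting that together with the question's loading code, a minimal sketch (this assumes the bundle really does live at /etc/ssl/certs/ca-bundle.crt on your machine, as in the answer above):

import os
# Point libcurl at the existing bundle before TensorFlow makes any GCS requests,
# i.e. set it at the very top of the script, before the tfds import.
os.environ['CURL_CA_BUNDLE'] = '/etc/ssl/certs/ca-bundle.crt'

import tensorflow_datasets as tfds
dataset = tfds.load(name='mnist', split='train')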

Cannot run dask-mpi with Python 3.7 -- timeout when connecting client to dask-mpi scheduler

I'm attempting to run the Dask-MPI "Getting Started" (http://mpi.dask.org/en/latest/) example in a fresh Anaconda environment.
I set up an environment using
conda create -n dask-mpi -c conda-forge python=3.7 dask-mpi
conda activate dask-mpi
Inside the environment, I run
mpirun -np 4 dask-mpi --scheduler-file ./scheduler.json
Then, from a python interpreter on the same machine (and in the same folder), I run
from dask.distributed import Client
client = Client(scheduler_file='/path/to/scheduler.json')
This results in the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/distributed/client.py", line 712, in __init__
self.start(timeout=timeout)
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/distributed/client.py", line 858, in start
sync(self.loop, self._start, **kwargs)
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/distributed/utils.py", line 331, in sync
six.reraise(*error[0])
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/six.py", line 693, in reraise
raise value
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/distributed/utils.py", line 316, in f
result[0] = yield future
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/tornado/gen.py", line 729, in run
value = future.result()
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/tornado/gen.py", line 736, in run
yielded = self.gen.throw(*exc_info) # type: ignore
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/distributed/client.py", line 954, in _start
yield self._ensure_connected(timeout=timeout)
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/tornado/gen.py", line 729, in run
value = future.result()
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/tornado/gen.py", line 736, in run
yielded = self.gen.throw(*exc_info) # type: ignore
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/distributed/client.py", line 1015, in _ensure_connected
timedelta(seconds=timeout), self._update_scheduler_info()
File "/home/nleaf/anaconda3/envs/dask-mpi/lib/python3.7/site-packages/tornado/gen.py", line 729, in run
value = future.result()
tornado.util.TimeoutError: Timeout
The terminal that I ran dask-mpi from does not have any output which would indicate that something is trying to connect. I have verified that the port in question, 8786, is open. I've also verified via debugger that the client is getting the correct address from the scheduler file.
I've tried this in quite a few different environments and on a few different machines, including a fresh Ubuntu 18.04 docker container. I'm completely at a loss for what steps I might be missing.
It turns out this was due to a bug in newer versions of dask.distributed (1.25.3) that broke the behavior of dask-mpi. This seems to be fixed as of dask-mpi 1.0.3 (https://github.com/dask/dask-mpi/releases/tag/1.0.3).
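If you hit the same timeout, it may be worth confirming which versions are actually installed before digging deeper; a quick sketch, assuming the packages expose the usual __version__ attribute:

import dask
import dask_mpi
import distributed

# The fix described above shipped in dask-mpi 1.0.3; distributed releases
# around 1.25.3 are the ones where the broken behaviour was reported.
print("dask", dask.__version__)
print("distributed", distributed.__version__)
print("dask-mpi", dask_mpi.__version__)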

Running script in Python command line

I am quite new to Python. I have Python 2.7 installed on my PC (Windows). I am trying to run a script in the Python command line. My script is named "Script" and it contains:
import sys # Load a library module
print(sys.platform)
print(2 ** 100) # Raise 2 to a power
x = 'Spam!'
print(x * 8) # String repetition
and when I import the script by typing import Script.py, it gives this:
win32
1267650600228229401496703205376
Spam!Spam!Spam!Spam!Spam!Spam!Spam!Spam!
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named py
Why does the error message appear here? TIA. :)
Instead of writing import myscript.py, simply use
import myscript
Note that myscript.py should be in the same location.
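For what it's worth, the reason for the specific message is that import Script.py is parsed as "import the submodule py from the package Script": Python imports and runs Script.py first (which is why the three printed lines appear), then fails to find anything called py. A minimal illustration, assuming Script.py is in the current directory:

import Script        # correct: runs Script.py once and prints its output
# import Script.py   # wrong: after running Script.py, Python looks for a
#                    # submodule "py" and raises "ImportError: No module named py"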