twisted threadpool import fails

Context:
I am trying to launch Graphite with the PyPy interpreter.
Error:
Launching Graphite(*) leads to
ImportError: cannot import name 'threadpool',
even though launching a Python (PyPy) interpreter and typing from twisted.python import threadpool works.
Full stacktrace:
15/09/2014 13:29:09 :: File "app_main.py", line 75, in run_toplevel
15/09/2014 13:29:09 :: File "/opt/graphite/bin/carbon-cache.py", line 30, in <module>
15/09/2014 13:29:09 :: run_twistd_plugin(__file__)
15/09/2014 13:29:09 :: File "/opt/graphite/lib/carbon/util.py", line 93, in run_twistd_plugin
15/09/2014 13:29:09 :: runApp(config)
15/09/2014 13:29:09 :: File "/home/vagrant/test2/site-packages/twisted/scripts/twistd.py", line 23, in runApp
15/09/2014 13:29:09 :: File "/home/vagrant/test2/site-packages/twisted/application/app.py", line 380, in run
15/09/2014 13:29:09 :: File "/home/vagrant/test2/site-packages/twisted/scripts/_twistd_unix.py", line 193, in postApplication
15/09/2014 13:29:09 :: File "/home/vagrant/test2/site-packages/twisted/scripts/_twistd_unix.py", line 390, in startApplication
15/09/2014 13:29:09 :: File "/home/vagrant/test2/site-packages/twisted/application/app.py", line 658, in startApplication
15/09/2014 13:29:09 :: File "/home/vagrant/test2/site-packages/twisted/application/service.py", line 282, in startService
15/09/2014 13:29:09 :: File "/home/vagrant/test2/site-packages/twisted/application/service.py", line 282, in startService
15/09/2014 13:29:09 :: File "/opt/graphite/lib/carbon/writer.py", line 191, in startService
15/09/2014 13:29:09 :: reactor.callInThread(writeForever)
15/09/2014 13:29:09 :: File "/home/vagrant/test2/site-packages/twisted/internet/base.py", line 997, in callInThread
15/09/2014 13:29:09 :: File "/home/vagrant/test2/site-packages/twisted/internet/base.py", line 989, in getThreadPool
15/09/2014 13:29:09 :: File "/home/vagrant/test2/site-packages/twisted/internet/base.py", line 954, in _initThreadPool
15/09/2014 13:29:09 :: ImportError: cannot import name 'threadpool'
I am using PyPy (2.3.1-linux_x86_64-portable) on CentOS 6.5
and have run pip to install Twisted and Whisper (and applied an additional patch).
(*) test2/bin/python /opt/graphite/bin/carbon-cache.py --instance=a start
Edit:
The virtualenv is called test2.
test2/site-packages/twisted/python/threadpool.py{,c} exist, so Twisted does ship threadpool.
test2/bin/pypy --info shows [usemodules] thread = True
'twisted.python.threadpool' in sys.modules returns False just before the failing import
Edit2:
Adding from twisted.python import threadpool earlier in the call stack (in /opt/graphite/lib/carbon/util.py, for instance) works and makes Graphite start; a sketch is below.
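For reference, a minimal sketch of that workaround, assuming it goes near the top of /opt/graphite/lib/carbon/util.py (any module that is imported before the reactor needs a thread pool would do):
import sys
# Pre-import the thread pool module so it is already cached in sys.modules by the
# time twisted.internet.base._initThreadPool runs inside the daemonized process.
from twisted.python import threadpool  # imported for its side effect only
assert 'twisted.python.threadpool' in sys.modules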

Does your installation of Twisted have the threadpool module? Look for a file named threadpool.py in the twisted/python/ directory. If it is missing, your installation of Twisted has been corrupted somehow. You might be able to fix it by re-installing Twisted (perhaps by destroying your virtualenv and creating a new one).
Does it have the stdlib threading module? Twisted's threading support only works if the underlying Python runtime supports threads. If this is missing you may need a different Python runtime. PyPy supports threads but perhaps you got a build with threads disabled somehow.
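A quick check for that, as a sketch, run with the same interpreter that carbon uses (test2/bin/pypy in this case):
import threading
# Spawning a trivial thread fails immediately on a runtime built without thread support.
t = threading.Thread(target=lambda: None)
t.start()
t.join()
print("threading works")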
If neither of these is the problem, you might learn more by running Python with import debugging enabled: python -v enables minimal import debugging and python -vv enables more verbose import debugging. I'm not sure whether these behave the same way on PyPy as they do on CPython; hopefully they do, or things get a bit more difficult.
If that doesn't help, you might also try adding a breakpoint with pdb just before the import of the twisted.python.threadpool module and then stepping carefully through it, inspecting state as you go. One thing to check is whether sys.modules already has a 'twisted.python.threadpool' entry and, if so, what value is associated with it: a value of None will prevent the import system from even looking for a module implementation on disk, and the import will simply fail. A sketch of that inspection follows.
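A minimal sketch of that kind of inspection, assuming you temporarily edit a frame just above the failing import (carbon's util.py, or twisted/internet/base.py itself):
import pdb
import sys
pdb.set_trace()                                      # drop into the debugger here
print('twisted.python.threadpool' in sys.modules)    # already cached?
print(sys.modules.get('twisted.python.threadpool'))  # None here would explain the failure
from twisted.python import threadpool                # step into this with 's' in pdb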

I had a similar problem. In carbon.conf's USER setting I had specified my own user, "graphite". When you do this, the carbon-cache daemon runs as that user, and the graphite user did not have permission to read the Twisted modules. The solution was:
chown -R graphite /usr/lib64/python2.6/site-packages/twisted
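If you want to confirm that permissions are the culprit before (or after) the chown, a quick read-access check helps; this is only a sketch, with the path taken from this answer, and it should be run as the user from carbon.conf's USER setting (e.g. via su or sudo -u graphite):
import os
twisted_dir = "/usr/lib64/python2.6/site-packages/twisted"
# Walk the package and report anything the current user cannot read.
unreadable = []
for root, dirs, files in os.walk(twisted_dir):
    for name in files:
        path = os.path.join(root, name)
        if not os.access(path, os.R_OK):
            unreadable.append(path)
print("unreadable files: %d" % len(unreadable))
for path in unreadable[:10]:
    print(path)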

Related

Tensorflow - ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed

I am struggling to use TensorFlow within a Jupyter notebook. I have installed TensorFlow via the Anaconda Prompt (running as admin), but when I import it in my notebook I get the following error (sorry this is long).
Any help or thoughts would be welcome here - should I run a full uninstall and try again?
Kind regards,
Jack
I am running:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
The error message is as follows (there are a number of others, but this is the first and the rest are similar):
Traceback (most recent call last):
File "C:\Users\jjcon\Anaconda3\lib\site-packages\tensorflow_core\python\pywrap_tensorflow.py", line 58, in
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\jjcon\Anaconda3\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 28, in
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\jjcon\Anaconda3\lib\site-packages\tensorflow_core\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "C:\Users\jjcon\Anaconda3\lib\imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "C:\Users\jjcon\Anaconda3\lib\imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: DLL load failed: A dynamic link library (DLL) initialization routine failed.
During handling of the above exception, another exception occurred:
You have to install the Microsoft Visual C++ 2015-2019 Redistributable (x64) from here.
If you are still facing issues, possible reasons are:
Your CPU does not support AVX2 instructions
Your CPU/Python is on 32 bits (a quick check is sketched below)
There is a library that is in a different location, or not installed on your system, and cannot be loaded
Please refer to the tested build configurations for Windows CPU and GPU.
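A quick way to rule out the 32-bit case, as a sketch using only the standard library (it cannot detect AVX2 support; for that, check your CPU model's specifications):
import platform
import struct
# 8 bytes per pointer means a 64-bit Python; the official TensorFlow wheels require 64-bit.
print("Python pointer size:", struct.calcsize("P") * 8, "bit")
print("Machine architecture:", platform.machine())
print("Python version:", platform.python_version())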

PermissionError: [Errno 1] Operation not permitted while using Selenium with Pythonista on iOS

I want to create a program in Pythonista that can control the web browser. I know Selenium is the best tool for this, but when I try it in Pythonista on my iPhone I get an error.
This is the code:
from selenium import webdriver
browser = webdriver.Chrome()
browser.get('http://www.yahoo.com')
Here is the error:
PermissionError: [Errno 1] Operation not permitted
Traceback (most recent call last):
File "/private/var/mobile/Containers/Shared/AppGroup/A2EBDF28-CB6C-4190-8199-7406AA3821A3/Pythonista3/Documents/selen.py", line 3, in <module>
browser = webdriver.Chrome()
File "/private/var/mobile/Containers/Shared/AppGroup/A2EBDF28-CB6C-4190-8199-7406AA3821A3/Pythonista3/Documents/site-packages-3/selenium/webdriver/chrome/webdriver.py", line 68, in __init__
self.service.start()
File "/private/var/mobile/Containers/Shared/AppGroup/A2EBDF28-CB6C-4190-8199-7406AA3821A3/Pythonista3/Documents/site-packages-3/selenium/webdriver/common/service.py", line 76, in start
stdin=PIPE)
File "/var/containers/Bundle/Application/24DD2A57-320E-4E21-9BE2-7C3605830DE0/Pythonista3.app/Frameworks/Py3Kit.framework/pylib/subprocess.py", line 708, in __init__
restore_signals, start_new_session)
File "/var/containers/Bundle/Application/24DD2A57-320E-4E21-9BE2-7C3605830DE0/Pythonista3.app/Frameworks/Py3Kit.framework/pylib/subprocess.py", line 1261, in _execute_child
restore_signals, start_new_session, preexec_fn)
PermissionError: [Errno 1] Operation not permitted
This error message...
PermissionError: [Errno 1] Operation not permitted
...implies that ChromeDriver was unable to create a required new resource (e.g. a logfile) while initializing a new WebDriver and web client session.
As per the discussion Pythonista - Limitations due to iOS, the following are some of the limitations of Pythonista:
No fork/exec for new processes. Impacts the subprocess module.
Due to missing fork, no full cleanup of process resources (memory, threads, file handles).
No file access outside of application directory.
No /dev/null and other special files.
Limited processing power of devices (compared to typical PC/Mac).
Process usually is stopped/killed after a while.
A simple example is as follows:
>>> import subprocess
>>> subprocess.call(["ls", "-l"])
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/private/var/mobile/Containers/Bundle/Application/8C59C68D-71BF-4CBB-90F8-373A1752DEE1/Pythonista.app/pylib/subprocess.py", line 524, in call
return Popen(*popenargs, **kwargs).wait()
File "/private/var/mobile/Containers/Bundle/Application/8C59C68D-71BF-4CBB-90F8-373A1752DEE1/Pythonista.app/pylib/subprocess.py", line 711, in __init__
errread, errwrite)
File "/private/var/mobile/Containers/Bundle/Application/8C59C68D-71BF-4CBB-90F8-373A1752DEE1/Pythonista.app/pylib/subprocess.py", line 1205, in _execute_child
self.pid = os.fork()
OSError: [Errno 1] Operation not permitted
What's wrong in your use case
There can be two issues:
When you invoke the following line of code:
browser = webdriver.Chrome()
ChromeDriver tries to create/modify/access a scoped directory within the file system. For example, on Windows:
"chromedriverVersion": "2.35.528161 (5b82f2d2aae0ca24b877009200ced9065a772e73)",
"userDataDir": "C:\\Users\\username\\AppData\\Local\\Temp\\scoped_dir5188_12717"
Possibly ChromeDriver is unable to perform this operation.
Again, when you invoke the following line of code:
browser = webdriver.Chrome()
as per selenium.webdriver.chrome.webdriver, ChromeDriver tries to create a logfile within the file system, as shown by the constructor:
class selenium.webdriver.chrome.webdriver.WebDriver(executable_path='chromedriver', port=0, options=None, service_args=None, desired_capabilities=None, service_log_path=None, chrome_options=None)
Possibly ChromeDriver is unable to perform this operation either.
For the reasons mentioned above, you are seeing the error:
PermissionError: [Errno 1] Operation not permitted
Solution
In either case, the solution would be to restrict the access to and creation of these resources to the application directory only; a sketch of what that could look like is below.
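As a sketch of keeping those resources inside the application directory with the Python bindings (service_log_path is part of the constructor quoted above; the log location is an assumption, and on Pythonista the missing fork/subprocess support means the chromedriver process still cannot be launched, so this is illustrative only):
import os
from selenium import webdriver
# Point ChromeDriver's logfile at a directory the app is allowed to write to.
log_path = os.path.join(os.path.expanduser("~"), "Documents", "chromedriver.log")
browser = webdriver.Chrome(service_log_path=log_path)
browser.get('http://www.yahoo.com')
browser.quit()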

Tensorflow could not initialize the libcurl library on Mac OS

I have built the web app of this project on Mac OS using conda and TensorFlow v0.12.1. It works well until I try to train; then TensorFlow shows this error message:
W tensorflow/core/platform/cloud/google_auth_provider.cc:151] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "Failed precondition: Could not initialize the libcurl library. Please make sure that libcurl is installed in the OS or statically linked to the TensorFlow binary.". Retrieving token from GCE failed with "Failed precondition: Could not initialize the libcurl library. Please make sure that libcurl is installed in the OS or statically linked to the TensorFlow binary.".
Logs:
2018-04-03 09:33:49,154 - candysorter.views.api - INFO - === Start training: id=9120093671565748, session=20180403_093211_9120093671565748 ===
2018-04-03 09:33:49,154 - candysorter.views.api - INFO - Creating labels file: job_id=candy_sorter_20180403_093211_9120093671565748
2018-04-03 09:33:49,184 - candysorter.views.api - ERROR - Unexpected error.
Traceback (most recent call last):
File "/Users/wubinbin/anaconda3/envs/candy/lib/python2.7/site-packages/flask/app.py", line 1612, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/wubinbin/anaconda3/envs/candy/lib/python2.7/site-packages/flask/app.py", line 1598, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/Users/wubinbin/Developer/FindYourCandy/webapp/candysorter/views/api.py", line 101, in wrapper
return f(*args, **kwargs)
File "/Users/wubinbin/Developer/FindYourCandy/webapp/candysorter/views/api.py", line 337, in train
candy_trainer.create_labels_file(job_id, labels)
File "/Users/wubinbin/Developer/FindYourCandy/webapp/candysorter/models/images/train.py", line 76, in create_labels_file
f.write(json.dumps(labels, separators=(',', ':')))
File "/Users/wubinbin/anaconda3/envs/candy/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 150, in __exit__
self.close()
File "/Users/wubinbin/anaconda3/envs/candy/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 182, in close
pywrap_tensorflow.Set_TF_Status_from_Status(status, ret_status)
File "/Users/wubinbin/anaconda3/envs/candy/lib/python2.7/contextlib.py", line 24, in __exit__
self.gen.next()
File "/Users/wubinbin/anaconda3/envs/candy/lib/python2.7/site-packages/tensorflow/python/framework/errors_impl.py", line 469, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
FailedPreconditionError: Could not initialize the libcurl library. Please make sure that libcurl is installed in the OS or statically linked to the TensorFlow binary.
I used TF 1.7.0 instead of TF 0.12.1, and that fixed the problem. But I still don't know what was happening in TF 0.12.1.
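If you want to check whether a system libcurl is visible to the Python process at all (the error suggests the TF 0.12.1 binary neither found one nor had it statically linked), a quick sketch using only the standard library:
import ctypes
import ctypes.util
# Ask the dynamic loader whether any libcurl is visible on this machine.
name = ctypes.util.find_library("curl")
print("libcurl found:", name)
if name:
    libcurl = ctypes.CDLL(name)  # load it to make sure it actually initializes
    print("loaded:", libcurl)
else:
    print("no system libcurl visible; the TensorFlow binary would need it statically linked")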

breaking change to distributed training moving from TF v1.3 to v1.4: "UnavailableError: Trying to connect an http1.x server"

When creating a managed session to use for distributed training with this line:
with sv.managed_session(server.target, config=config) as sess, sess.as_default():
I get this error (full stack trace at bottom) on the chief worker:
tensorflow.python.framework.errors_impl.UnavailableError: Trying to connect an http1.x server
Everything still seems to be fine on the parameter server which reports:
E1106 11:26:32.844686639 5543 ev_epoll1_linux.c:1051] grpc epoll fd: 8
2017-11-06 11:26:32.851773: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job ps -> {0 -> localhost:12222}
2017-11-06 11:26:32.851863: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job worker -> {0 -> 127.0.0.1:12223}
2017-11-06 11:26:32.856802: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:324] Started server with target: grpc://localhost:12222
I only get this error when using the new v1.4 of TensorFlow built from source (I found the same problem when installing from pip). Everything works fine in v1.3. Does anyone know if there has been a breaking change, presumably in how TensorFlow works with gRPC?
I'm wondering if this has something to do with HTTP/2 vs HTTP/1? I see that gRPC works with protobuf over HTTP/2, and the error seems to indicate it is trying to connect to an HTTP/1 server, but that still doesn't explain why this breaks only when upgrading from v1.3 to v1.4.
Does anyone know more about what the error
UnavailableError: Trying to connect an http1.x server
refers to, or what a fix might be?
I am working on Red Hat Linux and trying to do distributed training across processes on the same localhost... not even going over the network. I'd appreciate any thoughts, and hope this can help others with the same problem as well.
Full stacktrace:
E1106 11:28:24.383745692 5787 ev_epoll1_linux.c:1051] grpc epoll fd: 8
2017-11-06 11:28:24.391084: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job ps -> {0 -> 127.0.0.1:12222}
2017-11-06 11:28:24.391185: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job worker -> {0 -> localhost:12223}
2017-11-06 11:28:24.392285: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:324] Started server with target: grpc://localhost:12223
2017-11-06 11:28:37.875632: E tensorflow/core/distributed_runtime/master.cc:269] Master init: Unavailable: Trying to connect an http1.x server
Traceback (most recent call last):
File "/app/sbtt/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1323, in
_do_call
return fn(*args)
File "/app/sbtt/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1293, in
_run_fn
self._extend_graph()
File "/app/sbtt/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1354, in
_extend_graph
self._session, graph_def.SerializeToString(), status)
File "/app/sbtt/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 473,
in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.UnavailableError: Trying to connect an http1.x server
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/pycharm-community-2017.2.3/helpers/pydev/pydevd.py", line 1599, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "/opt/pycharm-community-2017.2.3/helpers/pydev/pydevd.py", line 1026, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/opt/pycharm-community-2017.2.3/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "worker.py", line 426, in <module>
main()
File "worker.py", line 418, in main
run(args, server)
File "worker.py", line 174, in run
sess.run(trainer.sync)
File "/app/sbtt/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 889, in run
run_metadata_ptr)
File "/app/sbtt/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1120, in
_run
feed_dict_tensor, options, run_metadata)
File "/app/sbtt/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1317, in
_do_run
options, run_metadata)
File "/app/sbtt/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1336, in
_do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.UnavailableError: Trying to connect an http1.x server
If you follow @NoahEisen's suggestion and
export GRPC_VERBOSITY="DEBUG"
you'll see something more informative, like this:
E1108 17:37:57.085195825 17711 ev_epoll1_linux.c:1051] grpc epoll fd: 5
D1108 17:37:57.085309439 17711 ev_posix.c:111] Using polling engine: epoll1
D1108 17:37:57.085380147 17711 dns_resolver.c:301] Using native dns resolver
I1108 17:37:57.085819333 17711 socket_utils_common_posix.c:223] Disabling AF_INET6 sockets because ::1 is not available.
I1108 17:37:57.086001584 17711 tcp_server_posix.c:322] Failed to add :: listener, the environment may not support IPv6: {"created":"#1510180677.085876868","description":"OS Error","errno":97,"file":"external/grpc/src/core/lib/iomgr/socket_utils_common_posix.c","file_line":256,"os_error":"Address family not supported by protocol","syscall":"socket","target_address":"[::]:12223"}
2017-11-08 17:37:57.092525: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job ps -> {0 -> 127.0.0.1:12222}
2017-11-08 17:37:57.092648: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job worker -> {0 -> localhost:12223}
2017-11-08 17:37:57.093435: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:324] Started server with target: grpc://localhost:12223
D1108 17:38:02.607109518 17830 http_proxy.c:70] userinfo found in proxy URI
I1108 17:38:02.611335569 17807 http_connect_handshaker.c:304] Connecting to server 127.0.0.1:12222 via HTTP proxy ipv4:xx.xx.xx.xx:xxxx
2017-11-08 17:38:02.617814: E tensorflow/core/distributed_runtime/master.cc:269] Master init: Unavailable: Trying to connect an http1.x server
I am behind a proxy, but I am only trying to do distributed training on localhost. For some reason it tries to connect via the proxy, even though the IP 127.0.0.1 should be equivalent to localhost, right? Note this part in particular:
Connecting to server 127.0.0.1:12222 via HTTP proxy ipv4:xx.xx.xx.xx:xxxx
I guess this was laziness in my Python code. If I explicitly put "localhost" for the ps in the cluster spec, instead of the IP 127.0.0.1, everything works again in TF 1.4, because it no longer tries to reach localhost via my proxy server (which indeed was HTTP/1.x only, I think). A sketch of the change is below.
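For reference, a sketch of that change with a minimal single-machine cluster spec (ports taken from the logs above; adjust to your setup). The no_proxy line is an alternative workaround that newer gRPC builds are reported to honour, which I have not verified for this TF version:
import os
import tensorflow as tf
# Name the hosts "localhost" so gRPC does not route them through the HTTP proxy.
cluster = tf.train.ClusterSpec({
    "ps":     ["localhost:12222"],
    "worker": ["localhost:12223"],
})
# Alternative workaround: keep 127.0.0.1 but exclude local addresses from proxying.
os.environ["no_proxy"] = "localhost,127.0.0.1"
server = tf.train.Server(cluster, job_name="worker", task_index=0)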
@PeteWaren - does this constitute an actual bug in TensorFlow or gRPC? Shouldn't these be equivalent, localhost = 127.0.0.1? Either way, the way this is handled changed from TF 1.3 to TF 1.4.
Thanks for everyone's help.

ImportError: No module named date_capnp

While trying to understand the NuPIC Minecraft demo code, I am running nupic_client.py from the PyCharm IDE. The Python version is 2.7.8 on Mac OS, with nupic downloaded as package version 0.2.8.
On running nupic_client.py, the following error occurs: ImportError: No module named date_capnp
Any help is appreciated.
Stack Trace:
File "/Users/msghotra/Sandbox/CodeHub/datascience/workspace/nupic_minecraft/nupic_client.py", line 5, in <module>
from nupic.frameworks.opf.modelfactory import ModelFactory
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nupic/frameworks/opf/modelfactory.py", line 32, in <module>
from clamodel import CLAModel
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nupic/frameworks/opf/clamodel.py", line 44, in <module>
from nupic.encoders import MultiEncoder, DeltaEncoder
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nupic/encoders/__init__.py", line 24, in <module>
from date import DateEncoder
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/nupic/encoders/date.py", line 28, in <module>
from nupic.encoders.date_capnp import DateEncoderProto
ImportError: No module named date_capnp
I believe you mean nupic==0.2.3, since 0.2.8 isn't out yet. This should be fixed with https://github.com/numenta/nupic/pull/2231. More details are available in the issue you referenced (the direct link is https://github.com/numenta/nupic/issues/2166). Hope this helps!
EDIT: This should now be resolved in nupic==0.2.6.
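To double-check which nupic version is actually installed (the question mentions 0.2.8, which didn't exist at the time), a quick sketch using pkg_resources, which ships with setuptools:
import pkg_resources
# Print the installed nupic version to confirm whether the fixed release is in use.
print(pkg_resources.get_distribution("nupic").version)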