Twisted ValueError: Unknown ECC curve on Raspbian Stretch - scrapy

I want to use my Raspberry Pi 3, running Raspbian Stretch, for a web scraping project. For Python I use the Berryconda distribution.
When I run my spider, I get
ValueError: Unknown ECC curve
On my laptop (Xubuntu 16.04) everything runs fine. Maybe I need to install an additional library or something?
The full traceback is below.
Traceback (most recent call last):
File "/home/pi/berryconda3/lib/python3.6/site-packages/twisted/internet/defer.py", line 1384, in _inlineCallbacks
result = result.throwExceptionIntoGenerator(g)
File "/home/pi/berryconda3/lib/python3.6/site-packages/twisted/python/failure.py", line 393, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
File "/home/pi/berryconda3/lib/python3.6/site-packages/scrapy/core/downloader/middleware.py", line 43, in process_request
defer.returnValue((yield download_func(request=request,spider=spider)))
File "/home/pi/berryconda3/lib/python3.6/site-packages/scrapy/utils/defer.py", line 45, in mustbe_deferred
result = f(*args, **kw)
File "/home/pi/berryconda3/lib/python3.6/site-packages/scrapy/core/downloader/handlers/__init__.py", line 65, in download_request
return handler.download_request(request, spider)
File "/home/pi/berryconda3/lib/python3.6/site-packages/scrapy/core/downloader/handlers/http11.py", line 63, in download_request
return agent.download_request(request)
File "/home/pi/berryconda3/lib/python3.6/site-packages/scrapy/core/downloader/handlers/http11.py", line 300, in download_request
method, to_bytes(url, encoding='ascii'), headers, bodyproducer)
File "/home/pi/berryconda3/lib/python3.6/site-packages/twisted/web/client.py", line 1633, in request
endpoint = self._getEndpoint(parsedURI)
File "/home/pi/berryconda3/lib/python3.6/site-packages/twisted/web/client.py", line 1617, in _getEndpoint
return self._endpointFactory.endpointForURI(uri)
File "/home/pi/berryconda3/lib/python3.6/site-packages/twisted/web/client.py", line 1494, in endpointForURI
uri.port)
File "/home/pi/berryconda3/lib/python3.6/site-packages/scrapy/core/downloader/contextfactory.py", line 59, in creatorForNetloc
return ScrapyClientTLSOptions(hostname.decode("ascii"), self.getContext())
File "/home/pi/berryconda3/lib/python3.6/site-packages/scrapy/core/downloader/contextfactory.py", line 56, in getContext
return self.getCertificateOptions().getContext()
File "/home/pi/berryconda3/lib/python3.6/site-packages/scrapy/core/downloader/contextfactory.py", line 51, in getCertificateOptions
acceptableCiphers=DEFAULT_CIPHERS)
File "/home/pi/berryconda3/lib/python3.6/site-packages/twisted/python/deprecate.py", line 792, in wrapped
return wrappee(*args, **kwargs)
File "/home/pi/berryconda3/lib/python3.6/site-packages/twisted/internet/_sslverify.py", line 1595, in __init__
self._ecCurve = _OpenSSLECCurve(_defaultCurveName)
File "/home/pi/berryconda3/lib/python3.6/site-packages/twisted/internet/_sslverify.py", line 1744, in __init__
raise ValueError("Unknown ECC curve.")

I dropped Berryconda and pip-installed Scrapy. If you're getting this error on Jessie, moving to Stretch gives you access to the newer OpenSSL libraries, which contain the missing pieces.
After I upgraded to Stretch, I removed Berryconda from my PATH and pip-uninstalled cryptography, twisted, pyopenssl, and scrapy.
Then, with the no-cache option, I pip-installed Scrapy, which brought all those packages back, and now my spider is running.
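For anyone following along, the command sequence I used looked roughly like this (adjust for your own environment):
pip uninstall -y cryptography twisted pyopenssl scrapy
pip install --no-cache-dir scrapy   # re-resolves and rebuilds the packages just removed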

Related

tensorflow.python.framework.errors_impl.NotFoundError while running deep learning model on Google Colaboratory

I'm trying to run this deep learning model in the cloud:
https://github.com/razvanmarinescu/brgm#image-reconstruction-with-pre-trained-stylegan2-generators
What I do is simply use their Colab notebook: https://colab.research.google.com/drive/1G7_CGPHZVGFWIkHOAke4HFg06-tNHIZ4?usp=sharing#scrollTo=qMgE6QFiHuSL
When I try to execute:
!python recon.py recon-real-images --input=/content/drive/MyDrive/boeing/EDGEconnect/val_imgs --masks=/content/drive/MyDrive/boeing/EDGEconnect/val_masks --tag=brains --network=dropbox:brains.pkl --recontype=inpaint --num-steps=1000 --num-snapshots=1
I receive this error:
args: Namespace(command='recon-real-images', input='/content/drive/MyDrive/boeing/EDGEconnect/val_imgs', masks='/content/drive/MyDrive/boeing/EDGEconnect/val_masks', network_pkl='dropbox:brains.pkl', num_snapshots=1, num_steps=1000, recontype='inpaint', superres_factor=4, tag='brains')
Local submit - run_dir: results/00004-brains-inpaint
dnnlib: Running recon.recon_real_images() on localhost...
Processing image 1/4
Loading networks from "dropbox:brains.pkl"...
Setting up TensorFlow plugin "fused_bias_act.cu": Preprocessing... Loading... Failed!
Traceback (most recent call last):
File "recon.py", line 270, in <module>
main()
File "recon.py", line 263, in main
dnnlib.submit_run(sc, func_name_map[subcmd], **kwargs)
File "/content/drive/MyDrive/boeing/brgm/brgm/dnnlib/submission/submit.py", line 343, in submit_run
return farm.submit(submit_config, host_run_dir)
File "/content/drive/MyDrive/boeing/brgm/brgm/dnnlib/submission/internal/local.py", line 22, in submit
return run_wrapper(submit_config)
File "/content/drive/MyDrive/boeing/brgm/brgm/dnnlib/submission/submit.py", line 280, in run_wrapper
run_func_obj(**submit_config.run_func_kwargs)
File "/content/drive/MyDrive/boeing/brgm/brgm/recon.py", line 189, in recon_real_images
recon_real_one_img(network_pkl, img_list[image_idx], masks, num_snapshots, recontype, superres_factor, num_steps)
File "/content/drive/MyDrive/boeing/brgm/brgm/recon.py", line 132, in recon_real_one_img
_G, _D, Gs = pretrained_networks.load_networks(network_pkl)
File "/content/drive/MyDrive/boeing/brgm/brgm/pretrained_networks.py", line 83, in load_networks
G, D, Gs = pickle.load(stream, encoding='latin1')
File "/content/drive/MyDrive/boeing/brgm/brgm/dnnlib/tflib/network.py", line 297, in __setstate__
self._init_graph()
File "/content/drive/MyDrive/boeing/brgm/brgm/dnnlib/tflib/network.py", line 154, in _init_graph
out_expr = self._build_func(*self.input_templates, **build_kwargs)
File "<string>", line 395, in G_synthesis_stylegan2
File "<string>", line 359, in layer
File "<string>", line 106, in modulated_conv2d_layer
File "<string>", line 75, in apply_bias_act
File "/content/drive/MyDrive/boeing/brgm/brgm/dnnlib/tflib/ops/fused_bias_act.py", line 68, in fused_bias_act
return impl_dict[impl](x=x, b=b, axis=axis, act=act, alpha=alpha, gain=gain)
File "/content/drive/MyDrive/boeing/brgm/brgm/dnnlib/tflib/ops/fused_bias_act.py", line 122, in _fused_bias_act_cuda
cuda_kernel = _get_plugin().fused_bias_act
File "/content/drive/MyDrive/boeing/brgm/brgm/dnnlib/tflib/ops/fused_bias_act.py", line 16, in _get_plugin
return custom_ops.get_plugin(os.path.splitext(__file__)[0] + '.cu')
File "/content/drive/MyDrive/boeing/brgm/brgm/dnnlib/tflib/custom_ops.py", line 156, in get_plugin
plugin = tf.load_op_library(bin_file)
File "/tensorflow-1.15.2/python3.7/tensorflow_core/python/framework/load_library.py", line 61, in load_op_library
lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.NotFoundError: /content/drive/MyDrive/boeing/brgm/brgm/dnnlib/tflib/_cudacache/fused_bias_act_237d55aca3e3c3ec0547da06888d8e66.so: undefined symbol: _ZN10tensorflow12OpDefBuilder4AttrENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
I found that the very last part of the error:
tensorflow.python.framework.errors_impl.NotFoundError: /content/drive/MyDrive/boeing/brgm/brgm/dnnlib/tflib/_cudacache/fused_bias_act_237d55aca3e3c3ec0547da06888d8e66.so: undefined symbol: _ZN10tensorflow12OpDefBuilder4AttrENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
can be solved by changing a flag in the CUDA Makefile (https://github.com/mgharbi/hdrnet_legacy/issues/2) or by installing TF 1.14 (Colab runs on 1.15.2, and this change had no positive effect).
My question is: how can I get rid of this error? Is there an option to change something inside Google Colab's CUDA Makefile?
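For context, the mangled symbol demangles to tensorflow::OpDefBuilder::Attr(std::__cxx11::basic_string<...>), which points at a libstdc++ ABI mismatch: the plugin was compiled with the new C++11 ABI, while pre-built TensorFlow 1.x wheels use the old one. The flag referenced in the linked issue is the ABI switch. A hedged sketch of the idea only; the real build command lives in dnnlib/tflib/custom_ops.py and also needs TensorFlow include/library flags:
# Remove the cached plugin so it gets rebuilt (its path appears in the traceback)
rm -rf dnnlib/tflib/_cudacache
# Rebuild the op against the old libstdc++ ABI
nvcc fused_bias_act.cu --shared -o fused_bias_act.so \
    --compiler-options '-fPIC -D_GLIBCXX_USE_CXX11_ABI=0'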

Unable to load web page with seleniumwire

I am unable to load the web page using Selenium Wire; I see this error in the browser:
This page isn't working
xxx.xyz didn't send any data.
ERR_EMPTY_RESPONSE
When I replace seleniumwire with selenium while initializing the webdriver, the issue is no longer observed.
Selenium Wire had been working fine until the error below started occurring a couple of days ago.
Seleniumwire version: 4.4.0
Python 3.9
MacOS Big Sur
AttributeError: module 'lib' has no attribute 'SSL_CTX_get0_param'
ERROR:seleniumwire.server:127.0.0.1:61095: Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/server/server.py", line 113, in handle
root_layer()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/server/modes/http_proxy.py", line 9, in __call__
layer()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/server/protocol/tls.py", line 285, in __call__
layer()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/server/protocol/http1.py", line 100, in __call__
layer()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/server/protocol/http.py", line 206, in __call__
if not self._process_flow(flow):
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/server/protocol/http.py", line 285, in _process_flow
return self.handle_regular_connect(f)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/server/protocol/http.py", line 224, in handle_regular_connect
layer()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/server/protocol/tls.py", line 278, in __call__
self._establish_tls_with_client_and_server()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/server/protocol/tls.py", line 358, in _establish_tls_with_client_and_server
self._establish_tls_with_server()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/server/protocol/tls.py", line 445, in _establish_tls_with_server
self.server_conn.establish_tls(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/connections.py", line 295, in establish_tls
self.convert_to_tls(cert=client_cert, sni=sni, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/net/tcp.py", line 382, in convert_to_tls
context = tls.create_client_context(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/seleniumwire/thirdparty/mitmproxy/net/tls.py", line 285, in create_client_context
param = SSL._lib.SSL_CTX_get0_param(context._context)
AttributeError: module 'lib' has no attribute 'SSL_CTX_get0_param'
This looks like you are using an outdated version of the cryptography library.
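If that is the case, upgrading it should restore the missing OpenSSL binding. A minimal sketch; upgrading pyOpenSSL together with cryptography keeps their bindings in sync:
pip install --upgrade cryptography pyOpenSSL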

Apache Beam pipeline fails when writing TFRecords - AttributeError: 'str' object has no attribute 'iteritems'

The issue started appearing over the weekend; for some reason, it feels like a Dataflow issue.
Previously, I was able to execute the script and write TFRecords just fine. Now, however, I am unable to initialize the computation graph to process the data.
The traceback is:
Traceback (most recent call last):
File "my_script.py", line 1492, in <module>
MyBeamClass()
File "my_script.py", line 402, in __init__
self.run()
File "my_script.py", line 514, in run
transform_fn_io.WriteTransformFn(path=self.JOB_DIR + '/transform/'))
File "/anaconda3/envs/ml27/lib/python2.7/site-packages/apache_beam/pipeline.py", line 426, in __exit__
self.run().wait_until_finish()
File "/anaconda3/envs/ml27/lib/python2.7/site-packages/apache_beam/runners/dataflow/dataflow_runner.py", line 1238, in wait_until_finish
(self.state, getattr(self._runner, 'last_error_msg', None)), self)
apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 649, in do_work
work_executor.execute()
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/executor.py", line 176, in execute
op.start()
File "apache_beam/runners/worker/operations.py", line 531, in apache_beam.runners.worker.operations.DoOperation.start
def start(self):
File "apache_beam/runners/worker/operations.py", line 532, in apache_beam.runners.worker.operations.DoOperation.start
with self.scoped_start_state:
File "apache_beam/runners/worker/operations.py", line 533, in apache_beam.runners.worker.operations.DoOperation.start
super(DoOperation, self).start()
File "apache_beam/runners/worker/operations.py", line 202, in apache_beam.runners.worker.operations.Operation.start
def start(self):
File "apache_beam/runners/worker/operations.py", line 206, in apache_beam.runners.worker.operations.Operation.start
self.setup()
File "apache_beam/runners/worker/operations.py", line 480, in apache_beam.runners.worker.operations.DoOperation.setup
with self.scoped_start_state:
File "apache_beam/runners/worker/operations.py", line 485, in apache_beam.runners.worker.operations.DoOperation.setup
pickler.loads(self.spec.serialized_fn))
File "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", line 247, in loads
return dill.loads(s)
File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 317, in loads
return load(file, ignore)
File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 305, in load
obj = pik.load()
File "/usr/lib/python2.7/pickle.py", line 864, in load
dispatch[key](self)
File "/usr/lib/python2.7/pickle.py", line 1232, in load_build
for k, v in state.iteritems():
AttributeError: 'str' object has no attribute 'iteritems'
I am using tensorflow==1.13.1, tensorflow-transform==0.9.0, and apache_beam==2.7.0. The relevant part of the pipeline is:
with beam.Pipeline(options=self.pipe_opt) as p:
    with beam_impl.Context(temp_dir=self.google_cloud_options.temp_location):
        # rest of the script
        _ = (
            transform_fn
            | 'WriteTransformFn' >>
            transform_fn_io.WriteTransformFn(path=self.JOB_DIR + '/transform/'))
I was experiencing the same error.
It seems to be triggered by a mismatch between the tensorflow-transform version on your local (or master) machine and the one on the workers (specified in the setup.py file).
In my case I was running tensorflow-transform==0.13 on my local machine whereas the workers were running 0.8.
Downgrading the local version to 0.8 fixed the issue.
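For anyone hitting the same mismatch, here is a minimal sketch of the pin, assuming you ship dependencies to the Dataflow workers with a setup.py passed via Beam's --setup_file pipeline option (the project name is hypothetical; pin whatever version your local machine runs):
# setup.py, shipped to the Dataflow workers via --setup_file
import setuptools

setuptools.setup(
    name='my-beam-job',  # hypothetical project name
    version='0.1',
    install_requires=['tensorflow-transform==0.8.0'],  # pin to match the local version
    packages=setuptools.find_packages(),
)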

fetch chromium - No such file or directory

I've cloned depot_tools following the instructions from here, on Ubuntu 17.04.
When I run fetch chromium I get the following error:
./fetch chromium
Running: gclient root
Traceback (most recent call last):
File "./fetch.py", line 299, in <module>
sys.exit(main())
File "./fetch.py", line 294, in main
return run(options, spec, root)
File "./fetch.py", line 280, in run
if not options.force and checkout.exists():
File "./fetch.py", line 82, in exists
gclient_root = self.run_gclient('root').strip()
File "./fetch.py", line 78, in run_gclient
return self.run(cmd_prefix + cmd, **kwargs)
File "./fetch.py", line 68, in run
return subprocess.check_output(cmd, **kwargs)
File "/usr/lib/python2.7/subprocess.py", line 212, in check_output
process = Popen(stdout=PIPE, *popenargs, **kwargs)
File "/usr/lib/python2.7/subprocess.py", line 390, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1024, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
I have come across similar errors on Stack Overflow while searching for a resolution; however, none of them helped resolve the problem.
I'm not clear on whether there are any steps required after cloning to install depot_tools; the README didn't make this obvious to me.
Any thoughts on what is required to solve the problem?
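For what it's worth, an OSError: [Errno 2] raised from subprocess.Popen usually means the program being spawned could not be found at all. Since fetch shells out to gclient, a reasonable first check (assuming the standard depot_tools setup, which expects the checkout directory on PATH) is:
export PATH="$PATH:/path/to/depot_tools"   # use your actual clone location
which gclient                              # should now resolve to the depot_tools copy
fetch chromium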

MemoryError installing numpy on Ubuntu 16.04 on DigitalOcean

I have a Python application that uses numpy, running on my DigitalOcean droplet. I am trying to pip install numpy into my virtual environment, and each time I get an error like this:
Collecting numpy
Downloading numpy-1.13.1-cp27-cp27mu-manylinux1_x86_64.whl (16.6MB)
99% |████████████████████████████████| 16.6MB 40.5MB/s eta 0:00:01
Exception:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 209, in main
status = self.run(options, args)
File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 328, in run
wb.build(autobuilding=True)
File "/usr/lib/python2.7/dist-packages/pip/wheel.py", line 748, in build
self.requirement_set.prepare_files(self.finder)
File "/usr/lib/python2.7/dist-packages/pip/req/req_set.py", line 360, in prepare_files
ignore_dependencies=self.ignore_dependencies))
File "/usr/lib/python2.7/dist-packages/pip/req/req_set.py", line 577, in _prepare_file
session=self.session, hashes=hashes)
File "/usr/lib/python2.7/dist-packages/pip/download.py", line 810, in unpack_url
hashes=hashes
File "/usr/lib/python2.7/dist-packages/pip/download.py", line 649, in unpack_http_url
hashes)
File "/usr/lib/python2.7/dist-packages/pip/download.py", line 871, in _download_http_url
_download_url(resp, link, content_file, hashes)
File "/usr/lib/python2.7/dist-packages/pip/download.py", line 595, in _download_url
hashes.check_against_chunks(downloaded_chunks)
File "/usr/lib/python2.7/dist-packages/pip/utils/hashes.py", line 46, in check_against_chunks
for chunk in chunks:
File "/usr/lib/python2.7/dist-packages/pip/download.py", line 563, in written_chunks
for chunk in chunks:
File "/usr/lib/python2.7/dist-packages/pip/utils/ui.py", line 139, in iter
for x in it:
File "/usr/lib/python2.7/dist-packages/pip/download.py", line 552, in resp_read
decode_content=False):
File "/usr/share/python-wheels/urllib3-1.13.1-py2.py3-none-any.whl/urllib3/response.py", line 344, in stream
data = self.read(amt=amt, decode_content=decode_content)
File "/usr/share/python-wheels/urllib3-1.13.1-py2.py3-none-any.whl/urllib3/response.py", line 301, in read
data = self._fp.read(amt)
File "/usr/share/python-wheels/CacheControl-0.11.5-py2.py3-none-any.whl/cachecontrol/filewrapper.py", line 54, in read
self.__callback(self.__buf.getvalue())
File "/usr/share/python-wheels/CacheControl-0.11.5-py2.py3-none-any.whl/cachecontrol/controller.py", line 224, in cache_response
self.serializer.dumps(request, response, body=body),
File "/usr/share/python-wheels/CacheControl-0.11.5-py2.py3-none-any.whl/cachecontrol/serialize.py", line 81, in dumps
).encode("utf8"),
MemoryError
Is anyone able to help me figure out how to solve this problem? I have tried installing numpy outside the virtual environment, but it still refuses to download and install.
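Judging by the traceback, the failure happens inside pip's CacheControl layer while it buffers the 16.6 MB wheel in memory, so a commonly suggested workaround, assuming the cache is the culprit, is to bypass it:
pip install --no-cache-dir numpy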