I am using a remote interpreter in PyCharm to run code that plots heat maps. Yesterday it worked fine, but today it stopped working, even though I changed neither the code nor my conda environment.
I am trying to run this code:
...
...
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(img.permute(1,2,0))
ax2.imshow(heatmap)
plt.title(f'event: {event}')
plt.show()
but it crashes on plt.show().
The exception I get is this:
...
...
Traceback (most recent call last):
File "/home/<user>/.pycharm_helpers/pycharm_display/datalore/display/display_.py", line 60, in try_empty_proxy
urllib_request.urlopen(url, buffer)
File "/home/<user>/miniconda3/envs/catenv/lib/python3.8/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/home/<user>/miniconda3/envs/catenv/lib/python3.8/urllib/request.py", line 525, in open
response = self._open(req, data)
File "/home/<user>/miniconda3/envs/catenv/lib/python3.8/urllib/request.py", line 542, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
File "/home/<user>/miniconda3/envs/catenv/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/home/<user>/miniconda3/envs/catenv/lib/python3.8/urllib/request.py", line 1383, in http_open
return self.do_open(http.client.HTTPConnection, req)
File "/home/<user>/miniconda3/envs/catenv/lib/python3.8/urllib/request.py", line 1357, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 104] Connection reset by peer>
I searched for this error, and all the questions about it came from people writing HTTP request code, but I'm not writing any network-related code.
By the way: this exact bug happened to me in the past and disappeared out of nowhere about two weeks later.
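For what it is worth, the traceback goes through PyCharm's plot-display proxy (pycharm_display), not through the plotting code itself. A minimal sketch of a workaround, assuming the proxy is the problem, is to bypass the interactive display and write the figure to disk instead (img, heatmap, and event are the variables from the snippet above):
import matplotlib
matplotlib.use('Agg')  # headless backend: no window and no PyCharm display proxy
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(img.permute(1, 2, 0))
ax2.imshow(heatmap)
fig.suptitle(f'event: {event}')
fig.savefig('heatmap.png')  # inspect the saved file instead of calling plt.show()
Note that matplotlib.use() has to run before pyplot is imported.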
Related
I am running the following command in Spyder:
import matplotlib.pyplot as plt
fig, axs = plt.subplots(2, 2)
Traceback (most recent call last):
File "/home/hh/.local/lib/python3.8/site-packages/matplotlib_inline/backend_inline.py", line 41, in show
display(
File "/home/hh/anaconda3/envs/gee/lib/python3.8/site-packages/IPython/core/display.py", line 327, in display
publish_display_data(data=format_dict, metadata=md_dict, **kwargs)
File "/home/hh/anaconda3/envs/gee/lib/python3.8/site-packages/IPython/core/display.py", line 119, in publish_display_data
display_pub.publish(
File "/home/hh/.local/lib/python3.8/site-packages/ipykernel/zmqshell.py", line 138, in publish
self.session.send(
File "/home/hh/anaconda3/envs/gee/lib/python3.8/site-packages/jupyter_client/session.py", line 830, in send
to_send = self.serialize(msg, ident)
File "/home/hh/anaconda3/envs/gee/lib/python3.8/site-packages/jupyter_client/session.py", line 704, in serialize
content = self.pack(content)
File "/home/hh/anaconda3/envs/gee/lib/python3.8/site-packages/jupyter_client/session.py", line 95, in json_packer
return jsonapi.dumps(
File "/home/hh/anaconda3/envs/gee/lib/python3.8/site-packages/zmq/utils/jsonapi.py", line 40, in dumps
s = jsonmod.dumps(o, **kwargs)
File "/home/hh/anaconda3/envs/gee/lib/python3.8/site-packages/simplejson/init.py", line 398, in dumps
return cls(
File "/home/hh/anaconda3/envs/gee/lib/python3.8/site-packages/simplejson/encoder.py", line 296, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/home/hh/anaconda3/envs/gee/lib/python3.8/site-packages/simplejson/encoder.py", line 378, in iterencode
return _iterencode(o, 0)
File "/home/hh/anaconda3/envs/gee/lib/python3.8/site-packages/simplejson/encoder.py", line 44, in encode_basestring
s = str(s, 'utf-8')
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte
I get the same error if I run the same code in JupyterLab. However, if I run the command in a plain terminal, fig, ax = plt.subplots() works fine.
This only started happening recently; I didn't have this issue before. I checked material online but didn't find a solution. I'd appreciate any insights. Thanks.
I had the same problem running plain Jupyter and within VS Code.
Try updating the Jupyter installation, including its required libraries. In my case,
pip3 install --upgrade jupyter_client pyzmq
fixed the problem.
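To confirm that the kernel actually picks up the upgraded packages, a quick sanity check (not part of the fix) is to print the versions from a notebook cell:
import jupyter_client
import zmq

# Both should report the freshly upgraded versions
print(jupyter_client.__version__)
print(zmq.__version__)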
I implemented a custom federated learning GAN training loop with TFF similar to this code by Google Research.
The client data for a particular training round is found using the following code snippet:
def client_dataset_fn():
    # Sample clients and data
    sampled_clients = np.random.choice(
        train_data.client_ids, size=cfg.clients_per_round, replace=False)
    datasets = [(next(client_gen_inputs_iterator),
                 train_data.create_tf_dataset_for_client(client_id).take(cfg.n_critic))
                for client_id in sampled_clients]
    return datasets

client_noise_inputs, client_real_data = zip(*client_dataset_fn())
This works perfectly as long as cfg.clients_per_round is at most 99. When it is set to 100 or a larger value (with the total number of clients being larger, of course), I receive the following error:
Traceback (most recent call last):
File "main.py", line 109, in main
metrics = run_single_trial(train_data, test_data, cfg)
File "/mnt/workspace/tff/GAN/federated/fedgan_main.py", line 73, in run_single_trial
metrics = train_loop(iterative_process, server_dataset_fn, client_dataset_fn, model, eval_hook_fn, cfg)
File "/mnt/workspace/tff/GAN/federated/fedgan_main.py", line 124, in train_loop
client_real_data)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_federated/python/core/impl/computation/function_utils.py", line 525, in __call__
return context.invoke(self, arg)
File "/usr/local/lib/python3.6/dist-packages/retrying.py", line 49, in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
File "/usr/local/lib/python3.6/dist-packages/retrying.py", line 206, in call
return attempt.get(self._wrap_exception)
File "/usr/local/lib/python3.6/dist-packages/retrying.py", line 247, in get
six.reraise(self.value[0], self.value[1], self.value[2])
File "/usr/local/lib/python3.6/dist-packages/six.py", line 703, in reraise
raise value
File "/usr/local/lib/python3.6/dist-packages/retrying.py", line 200, in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_federated/python/core/impl/executors/execution_context.py", line 226, in invoke
_ingest(executor, unwrapped_arg, arg.type_signature)))
File "/usr/lib/python3.6/asyncio/base_events.py", line 484, in run_until_complete
return future.result()
File "/usr/local/lib/python3.6/dist-packages/tensorflow_federated/python/common_libs/tracing.py", line 396, in _wrapped
return await coro
File "/usr/local/lib/python3.6/dist-packages/tensorflow_federated/python/core/impl/executors/execution_context.py", line 111, in _ingest
ingested = await asyncio.gather(*ingested)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_federated/python/core/impl/executors/execution_context.py", line 116, in _ingest
return await executor.create_value(val, type_spec)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_federated/python/common_libs/tracing.py", line 201, in async_trace
result = await fn(*fn_args, **fn_kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_federated/python/core/impl/executors/reference_resolving_executor.py", line 294, in create_value
value, type_spec))
File "/usr/local/lib/python3.6/dist-packages/tensorflow_federated/python/common_libs/tracing.py", line 201, in async_trace
result = await fn(*fn_args, **fn_kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_federated/python/core/impl/executors/thread_delegating_executor.py", line 111, in create_value
self._target_executor.create_value(value, type_spec))
File "/usr/local/lib/python3.6/dist-packages/tensorflow_federated/python/core/impl/executors/thread_delegating_executor.py", line 105, in _delegate
result_value = await _delegate_with_trace_ctx(coro, self._event_loop)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_federated/python/common_libs/tracing.py", line 396, in _wrapped
return await coro
File "/usr/local/lib/python3.6/dist-packages/tensorflow_federated/python/common_libs/tracing.py", line 201, in async_trace
result = await fn(*fn_args, **fn_kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_federated/python/core/impl/executors/federating_executor.py", line 394, in create_value
return await self._strategy.compute_federated_value(value, type_spec)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_federated/python/core/impl/executors/federated_composing_strategy.py", line 279, in compute_federated_value
py_typecheck.check_type(value, list)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_federated/python/common_libs/py_typecheck.py", line 41, in check_type
type_string(type_spec), type_string(type(target))))
TypeError: Expected list, found tuple.
During debugging, I looked at the target variable in the final line of the traceback and found it to be the above-mentioned client_real_data and client_noise_inputs. They are indeed tuples, not lists; however, this does not change with different values of cfg.clients_per_round, whose only usage is in the random choice shown above.
I really cannot explain why this is happening; maybe somebody out there has experienced something similar and can help me out.
My used package versions are as follows:
Python 3.6.9 or 3.8.10 (checked both)
tensorflow 2.5.1
tensorflow-federated 0.19.0
retrying 1.3.3
six 1.15.0
As a workaround I now manually convert client_noise_inputs and client_real_data with list(tuple_var), but I am still curious why a list is required in the first place.
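For reference, the workaround looks like this, using the names from the snippet above:
# zip() yields tuples, but the composing executor expects lists
client_noise_inputs, client_real_data = zip(*client_dataset_fn())
client_noise_inputs = list(client_noise_inputs)
client_real_data = list(client_real_data)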
(Copying and pasting from original on GitHub)
This seems to me to be an implementation distinction between the federated_composing_strategy and the federated_resolving_strategy. IIRC, by default we don't inject a composing executor into your stack until you hit 100 clients--which would be the source of this exciting mystery.
In particular, the composing strategy is programmed against the assumption that the incoming clients-placed value is represented as a list, whereas the resolving strategy codes against a much more flexible set of containers.
It's not wild to coerce your clients-placed value to a list--we could also extend the permitted representation of clients-placed values in the composing executor to match that in the resolving one, possibly pulling the appropriate logic to a shared place like here. I think it's a contribution we'd be very happy to accept if you're up for it!
I'm testing this locally where I have a ~/.aws/config file.
~/.aws/config looks something like:
[profile a]
...
[profile b]
...
I also have an AWS_PROFILE environment variable set to "a".
I would like to use pandas to read a file that is accessible with profile b.
I am able to access it through s3fs by doing:
import s3fs
import pandas as pd

fs = s3fs.S3FileSystem(profile="b")
fs.get("BUCKET/FILE.parquet", "FILE.parquet")  # download a local copy
pd.read_parquet("FILE.parquet")
However, if I try to pass the profile to pd.read_parquet via storage_options, I get PermissionError: Forbidden.
pd.read_parquet(
"s3://BUCKET/FILE.parquet",
storage_options={"profile": "b"},
)
Full traceback below:
Traceback (most recent call last):
File "/home/ray/local/bin/anaconda3/envs/main/lib/python3.8/site-packages/s3fs/core.py", line 233, in _call_s3
out = await method(**additional_kwargs)
File "/home/ray/local/bin/anaconda3/envs/main/lib/python3.8/site-packages/aiobotocore/client.py", line 154, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ray/local/bin/anaconda3/envs/main/lib/python3.8/site-packages/pandas/io/parquet.py", line 459, in read_parquet
return impl.read(
File "/home/ray/local/bin/anaconda3/envs/main/lib/python3.8/site-packages/pandas/io/parquet.py", line 221, in read
return self.api.parquet.read_table(
File "/home/ray/local/bin/anaconda3/envs/main/lib/python3.8/site-packages/pyarrow/parquet.py", line 1672, in read_table
dataset = _ParquetDatasetV2(
File "/home/ray/local/bin/anaconda3/envs/main/lib/python3.8/site-packages/pyarrow/parquet.py", line 1504, in __init__
if filesystem.get_file_info(path_or_paths).is_file:
File "pyarrow/_fs.pyx", line 438, in pyarrow._fs.FileSystem.get_file_info
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/_fs.pyx", line 1004, in pyarrow._fs._cb_get_file_info
File "/home/ray/local/bin/anaconda3/envs/main/lib/python3.8/site-packages/pyarrow/fs.py", line 226, in get_file_info
info = self.fs.info(path)
File "/home/ray/local/bin/anaconda3/envs/main/lib/python3.8/site-packages/fsspec/asyn.py", line 72, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/home/ray/local/bin/anaconda3/envs/main/lib/python3.8/site-packages/fsspec/asyn.py", line 53, in sync
raise result[0]
File "/home/ray/local/bin/anaconda3/envs/main/lib/python3.8/site-packages/fsspec/asyn.py", line 20, in _runner
result[0] = await coro
File "/home/ray/local/bin/anaconda3/envs/main/lib/python3.8/site-packages/s3fs/core.py", line 911, in _info
out = await self._call_s3(
File "/home/ray/local/bin/anaconda3/envs/main/lib/python3.8/site-packages/s3fs/core.py", line 252, in _call_s3
raise translate_boto_error(err)
PermissionError: Forbidden
Note: there is an old question somewhat related to this but it didn't help: How to read parquet file from s3 using dask with specific AWS profile
You just need to add the following argument to the function:
storage_options=dict(profile='your_profile_name')
Hence the read statement is:
pd.read_parquet("s3://your_bucket",storage_options=dict(profile='your_profile_name'))
For the past two days it worked fine for me to connect to BigQuery from PyCharm on my PC with the following steps:
Step 1: gcloud auth application-default login
Step 2: connect to BigQuery from PyCharm on my local PC.
However, when I tried the same method today, the error below occurred.
Since I am behind the "Great Wall" in China, I can only log in to Google Cloud over a VPN, so I am not sure whether this is caused by the VPN or by something in my Google account settings. I also tried with a service account key, and it turned out to be the same issue log:
Traceback (most recent call last):
File "C:/Users/emma/PycharmProjects/GCP/INIT.py", line 134, in <module>
explicit()
File "C:/Users/emma/PycharmProjects/GCP/INIT.py", line 54, in explicit
for dataset in bigquery_client.list_datasets():
File "C:\Python27\lib\site-packages\google\cloud\iterator.py", line 218, in _items_iter
for page in self._page_iter(increment=False):
File "C:\Python27\lib\site-packages\google\cloud\iterator.py", line 247, in _page_iter
page = self._next_page()
File "C:\Python27\lib\site-packages\google\cloud\iterator.py", line 347, in _next_page
response = self._get_next_page_response()
File "C:\Python27\lib\site-packages\google\cloud\iterator.py", line 396, in _get_next_page_response
query_params=params)
File "C:\Python27\lib\site-packages\google\cloud\_http.py", line 299, in api_request
headers=headers, target_object=_target_object)
File "C:\Python27\lib\site-packages\google\cloud\_http.py", line 193, in _make_request
return self._do_request(method, url, headers, data, target_object)
File "C:\Python27\lib\site-packages\google\cloud\_http.py", line 223, in _do_request
body=data)
File "C:\Python27\lib\site-packages\google_auth_httplib2.py", line 187, in request
self._request, method, uri, request_headers)
File "C:\Python27\lib\site-packages\google\auth\credentials.py", line 121, in before_request
self.refresh(request)
File "C:\Python27\lib\site-packages\google\oauth2\service_account.py", line 310, in refresh
request, self._token_uri, assertion)
File "C:\Python27\lib\site-packages\google\oauth2\_client.py", line 143, in jwt_grant
response_data = _token_endpoint_request(request, token_uri, body)
File "C:\Python27\lib\site-packages\google\oauth2\_client.py", line 104, in _token_endpoint_request
method='POST', url=token_uri, headers=headers, body=body)
File "C:\Python27\lib\site-packages\google_auth_httplib2.py", line 116, in __call__
url, method=method, body=body, headers=headers, **kwargs)
File "C:\Python27\lib\site-packages\httplib2\__init__.py", line 1609, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "C:\Python27\lib\site-packages\httplib2\__init__.py", line 1351, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "C:\Python27\lib\site-packages\httplib2\__init__.py", line 1272, in _conn_request
conn.connect()
File "C:\Python27\lib\site-packages\httplib2\__init__.py", line 1075, in connect
raise socket.error, msg
socket.error: [Errno 10060]
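For reference, the failing part of INIT.py boils down to listing datasets with the BigQuery client. A minimal sketch of the service-account variant (the key file path is a placeholder):
from google.cloud import bigquery

# 'key.json' is a placeholder for my service account key file
bigquery_client = bigquery.Client.from_service_account_json('key.json')
for dataset in bigquery_client.list_datasets():
    print(dataset)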
I updated the google-cloud Python library, and everything works fine now.
Hope it stays that way :(
I am trying to use matplotlib to create a pgf file for LaTeX:
from matplotlib.pyplot import subplots
from numpy import linspace
x = linspace(0, 100, 30)
fig, ax = subplots(figsize = (10, 6))
ax.scatter(x, x)
fig.tight_layout()
fig.savefig('/home/mark/dicp/python/figure.pgf')
But I get OSError: [Errno 2] No such file or directory:
Traceback (most recent call last):
File "visualize/latex_figs.py", line 32, in <module>
fig.savefig('/home/mark/dicp/python/figure.pgf')
File "/usr/local/lib/python2.7/dist-packages/matplotlib/figure.py", line 1421, in savefig
self.canvas.print_figure(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/matplotlib/backend_bases.py", line 2220, in print_figure
**kwargs)
File "/usr/local/lib/python2.7/dist-packages/matplotlib/backend_bases.py", line 1957, in print_pgf
return pgf.print_pgf(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/matplotlib/backends/backend_pgf.py", line 818, in print_pgf
self._print_pgf_to_fh(fh, *args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/matplotlib/backends/backend_pgf.py", line 797, in _print_pgf_to_fh
RendererPgf(self.figure, fh),
File "/usr/local/lib/python2.7/dist-packages/matplotlib/backends/backend_pgf.py", line 409, in __init__
self.latexManager = LatexManagerFactory.get_latex_manager()
File "/usr/local/lib/python2.7/dist-packages/matplotlib/backends/backend_pgf.py", line 223, in get_latex_manager
new_inst = LatexManager()
File "/usr/local/lib/python2.7/dist-packages/matplotlib/backends/backend_pgf.py", line 305, in __init__
cwd=self.tmpdir)
File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
It also generates this part of the output file:
%% [whole bunch of comments]
\begingroup%
\makeatletter%
\begin{pgfpicture}%
\pgfpathrectangle{\pgfpointorigin}{\pgfqpoint{10.000000in}{6.000000in}}%
\pgfusepath{use as bounding box}%
I do not understand what OSError: No such file or directory in subprocess.py has to do with anything... The file I am trying to save to is writable. Am I misunderstanding something, or is this a bug I should report?
I also had this problem while trying to run the example scripts. It occurs where backend_pgf.py first tries to run the default LaTeX command: the PGF backend assumes it should use xelatex by default. If the problem is the same for you as for me, then you have two options:
add the key "pgf.texsystem" : "pdflatex" (or lualatex, whatever) to your matplotlib.rcParams. For example, add the following snippet to the top of your script:
import matplotlib
pgf_with_rc_fonts = {"pgf.texsystem": "pdflatex"}
matplotlib.rcParams.update(pgf_with_rc_fonts)
ensure that you have xelatex, that it is on your PATH, and use it as the default LaTeX command (i.e., assuming you're on a Mac or Linux system, running which xelatex should return a path).
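To quickly check the second option, you can ask the shell from Python whether xelatex is reachable (a small diagnostic; subprocess.call works on both Python 2 and 3):
import subprocess

# Prints the path to xelatex if it is on PATH; otherwise prints nothing
# and returns a non-zero exit status.
subprocess.call(['which', 'xelatex'])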