I got an error when I used matplotlib.pyplot to show an image:
5 plt.ylim(-5,6)
6 plt.title('Question 1(c): sample cluster data (10,000 points per cluster)')
----> 7 plt.show()
C:\Users\yashi\Anaconda3\envs\CSC411\lib\site-packages\matplotlib\pyplot.py in show(*args, **kw)
242 In non-interactive mode, display all figures and block until
243 the figures have been closed; in interactive mode it has no
--> 244 effect unless figures were created prior to a change from
245 non-interactive to interactive mode (not recommended). In
246 that case it displays the figures but does not block.
C:\Users\yashi\Anaconda3\envs\CSC411\lib\site-packages\ipykernel\pylab\backend_inline.pyc in show(close, block)
37 display(
38 figure_manager.canvas.figure,
---> 39 metadata=_fetch_figure_metadata(figure_manager.canvas.figure)
40 )
41 finally:
C:\Users\yashi\Anaconda3\envs\CSC411\lib\site-packages\ipykernel\pylab\backend_inline.pyc in _fetch_figure_metadata(fig)
172 """Get some metadata to help with displaying a figure."""
173 # determine if a background is needed for legibility
--> 174 if _is_transparent(fig.get_facecolor()):
175 # the background is transparent
176 ticksLight = _is_light([label.get_color()
C:\Users\yashi\Anaconda3\envs\CSC411\lib\site-packages\ipykernel\pylab\backend_inline.pyc in _is_transparent(color)
193 def _is_transparent(color):
194 """Determine transparency from alpha."""
--> 195 rgba = colors.to_rgba(color)
196 return rgba[3] < .5
AttributeError: 'module' object has no attribute 'to_rgba'
According to the post,
I updated matplotlib to 2.2.3, but it still doesn't work. How can I fix it?
I also encountered this situation; it is caused by the ipykernel version. I changed ipykernel from 4.10.0 to 4.9.0 and the problem was solved.
Open a command prompt on Windows (or a terminal on Mac) and run:
conda install ipykernel=4.9.0
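To confirm which versions are actually active in the environment (useful before and after the downgrade), a small check like the following works; `installed_version` is a hypothetical helper, not part of either library:

```python
# Report the versions of the packages involved in this error; modules that
# are not importable are reported as "not installed" instead of raising.
import importlib

def installed_version(name):
    """Return the module's __version__, or None if it is not importable."""
    try:
        mod = importlib.import_module(name)
        return getattr(mod, "__version__", "unknown")
    except ImportError:
        return None

for name in ("matplotlib", "ipykernel"):
    print(name, installed_version(name) or "not installed")
```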
When executing the code below:
profile = ProfileReport(df, title="Data Profile Report")
profile.to_file("data_profile_report.html")
this exception is thrown:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
c:\Projections 2022-08-16\Projections.py in <cell line: 4>()
102 # %%
103 #Creating EDA of data
104 profile = ProfileReport(df_cdap,title="CDAP Data Profile Report")
----> 105 profile.to_file("cdap_data_profile_report.html")
File c:\Users\fengq\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas_profiling\profile_report.py:257, in ProfileReport.to_file(self, output_file, silent)
254 self.config.html.assets_prefix = str(output_file.stem) + "_assets"
255 create_html_assets(self.config, output_file)
--> 257 data = self.to_html()
259 if output_file.suffix != ".html":
260 suffix = output_file.suffix
File c:\Users\fengq\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas_profiling\profile_report.py:368, in ProfileReport.to_html(self)
360 def to_html(self) -> str:
361 """Generate and return complete template as lengthy string
362 for using with frameworks.
363
(...)
366
367 """
--> 368 return self.html
...
--> 810 fig = manager.canvas.figure
811 if fig_label:
812 fig.set_label(fig_label)
AttributeError: 'NoneType' object has no attribute 'canvas'
I've tried reinstalling Python and the dependencies for pandas-profiling, but nothing has worked so far. I've also tried downgrading Python to 3.9 and matplotlib to an older version; neither changed the error.
I notice that the error seems to come from manager.canvas.figure, but I'm not sure how to resolve it from that point onwards. Any help is greatly appreciated!
The problem was resolved when I set the matplotlib backend to inline, per some comments I found on another forum. I'm still really interested to learn what causes this! Please feel free to answer and suggest other solutions; I would love to try them!
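For what it's worth, the script-level analogue of `%matplotlib inline` is to pin matplotlib to a concrete non-interactive backend before any figure exists, so that every figure gets a real canvas. A minimal sketch of that idea (pandas-profiling itself omitted):

```python
# Select a concrete non-interactive backend before any figure is created;
# this is the script-level analogue of running %matplotlib inline first.
import matplotlib
matplotlib.use("Agg")

import matplotlib.pyplot as plt

fig = plt.figure()
print(matplotlib.get_backend())  # the backend actually in use
print(fig.canvas is not None)    # the figure has a live canvas
plt.close(fig)
```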
I am trying to train a rather large model (Longformer-large with a CNN classification head on top) on Google Cloud Platform. I am using tensorflow-cloud and Colab to run my model. I tried to run this with batch size 4 and 4 P100 GPUs, but I still get an OOM error, so I would like to try it with a TPU. I have increased the batch size to 8 now.
However, I get an error saying that the chief_config cannot be a TPU config.
This is my code:
tfc.run(
    distribution_strategy="auto",
    requirements_txt="requirements.txt",
    docker_config=tfc.DockerConfig(
        image_build_bucket=GCS_BUCKET
    ),
    worker_count=1,
    worker_config=tfc.COMMON_MACHINE_CONFIGS["TPU"],
    chief_config=tfc.COMMON_MACHINE_CONFIGS["TPU"],
    job_labels={"job": JOB_NAME})
This is the error:
Validating environment and input parameters.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-26-e1be60d71623> in <module>()
19 worker_config= tfc.COMMON_MACHINE_CONFIGS["TPU"],
20 chief_config=tfc.COMMON_MACHINE_CONFIGS["TPU"],
---> 21 job_labels={"job": JOB_NAME},
22 )
2 frames
/usr/local/lib/python3.7/dist-packages/tensorflow_cloud/core/run.py in run(entry_point, requirements_txt, docker_config, distribution_strategy, chief_config, worker_config, worker_count, entry_point_args, stream_logs, job_labels, service_account, **kwargs)
256 job_labels=job_labels or {},
257 service_account=service_account,
--> 258 docker_parent_image=docker_config.parent_image,
259 )
260 print("Validation was successful.")
/usr/local/lib/python3.7/dist-packages/tensorflow_cloud/core/validate.py in validate(entry_point, requirements_txt, distribution_strategy, chief_config, worker_config, worker_count, entry_point_args, stream_logs, docker_image_build_bucket, called_from_notebook, job_labels, service_account, docker_parent_image)
78 _validate_distribution_strategy(distribution_strategy)
79 _validate_cluster_config(
---> 80 chief_config, worker_count, worker_config, docker_parent_image
81 )
82 _validate_job_labels(job_labels or {})
/usr/local/lib/python3.7/dist-packages/tensorflow_cloud/core/validate.py in _validate_cluster_config(chief_config, worker_count, worker_config, docker_parent_image)
160 "Invalid `chief_config` input. "
161 "`chief_config` cannot be a TPU config. "
--> 162 "Received {}.".format(chief_config)
163 )
164
ValueError: Invalid `chief_config` input. `chief_config` cannot be a TPU config. Received <tensorflow_cloud.core.machine_config.MachineConfig object at 0x7f5860afe210>.
Can someone tell me how I can run my code on GCP TPUs? I actually don't care too much about training time; I just want some configuration that runs without OOM issues (so a GPU setup is totally fine with me as well, if it works).
Thank you!
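The validator's message itself points at the constraint: only `worker_config` may be a TPU. A sketch of the layout used in tensorflow-cloud's TPU documentation (CPU chief, a single TPU worker) is below; `GCS_BUCKET` and `JOB_NAME` are the values from the question, and this is untested on this exact model:

```python
tfc.run(
    distribution_strategy="auto",
    requirements_txt="requirements.txt",
    docker_config=tfc.DockerConfig(image_build_bucket=GCS_BUCKET),
    chief_config=tfc.COMMON_MACHINE_CONFIGS["CPU"],   # chief must not be a TPU
    worker_count=1,
    worker_config=tfc.COMMON_MACHINE_CONFIGS["TPU"],  # the TPU goes here
    job_labels={"job": JOB_NAME})
```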
My input is:
test=pd.read_csv("/gdrive/My Drive/data-kaggle/sample_submission.csv")
test.head()
It ran as expected.
But for
test.to_csv('submitV1.csv', header=False)
The full error message that I got was:
OSError Traceback (most recent call last)
<ipython-input-5-fde243a009c0> in <module>()
      9 from google.colab import files
     10 print(test)
---> 11 test.to_csv('submitV1.csv', header=False)
     12 files.download('/gdrive/My Drive/data-kaggle/submission/submitV1.csv')

2 frames

/usr/local/lib/python3.6/dist-packages/pandas/core/generic.py in to_csv(self, path_or_buf, sep, na_rep, float_format, columns, header, index, index_label, mode, encoding, compression, quoting, quotechar, line_terminator, chunksize, tupleize_cols, date_format, doublequote, escapechar, decimal)
   3018                        doublequote=doublequote,
   3019                        escapechar=escapechar, decimal=decimal)
-> 3020         formatter.save()
   3021
   3022         if path_or_buf is None:

/usr/local/lib/python3.6/dist-packages/pandas/io/formats/csvs.py in save(self)
    155         f, handles = _get_handle(self.path_or_buf, self.mode,
    156                                  encoding=self.encoding,
--> 157                                  compression=self.compression)
    158         close = True
    159

/usr/local/lib/python3.6/dist-packages/pandas/io/common.py in _get_handle(path_or_buf, mode, encoding, compression, memory_map, is_text)
    422     elif encoding:
    423         # Python 3 and encoding
--> 424         f = open(path_or_buf, mode, encoding=encoding, newline="")
    425     elif is_text:
    426         # Python 3 and no explicit encoding

OSError: [Errno 95] Operation not supported: 'submitV1.csv'
Additional Information about the error:
Before running this command, if I run
df=pd.DataFrame()
df.to_csv("file.csv")
files.download("file.csv")
It runs properly, but the same code produces the "Operation not supported" error if I run it after trying to convert the test data frame to a CSV file.
I am also getting the message "A Google Drive timeout has occurred (most recently at 13:02:43)" just before running the command.
You are currently in a directory in which you don't have write permissions.
Check your current directory with pwd. It is probably gdrive or some directory inside it; that's why you are unable to save there.
Now change the current working directory to some directory where you have write permissions. cd ~ will work fine; it will change the directory to /root.
Now you can use:
test.to_csv('submitV1.csv', header=False)
It will save 'submitV1.csv' to /root.
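To verify this diagnosis from Python before saving, one can probe write permission on a directory with `os.access`; `writable` below is a hypothetical helper, not pandas API:

```python
# Hypothetical helper mirroring the answer: check whether a directory is
# writable before handing a path in it to DataFrame.to_csv.
import os

def writable(path="."):
    """True if this process may create files in `path`."""
    return os.access(path, os.W_OK)

print(writable())                  # current directory (e.g. the gdrive mount)
os.chdir(os.path.expanduser("~"))  # the `cd ~` step from the answer
print(writable())                  # home directory should be writable
```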
I want to run tests on Google Colab to ensure reproducibility but I get a system error at the end, which I do not on my local machine.
I set up TensorFlow in Google Colab with
!pip install tensorflow==1.12.0
import tensorflow as tf
print(tf.__version__)
which, after some lines of installation, prints:
1.12.0
I then want to run a simple test:
import tensorflow as tf

class Tests(tf.test.TestCase):
    def test_gpu(self):
        self.assertEqual(False, tf.test.is_gpu_available())

tf.test.main()
The test passes (along with a default session test) on my local machine, and also on Colab, but after that the kernel returns a system error:
..
----------------------------------------------------------------------
Ran 2 tests in 0.005s
OK
An exception has occurred, use %tb to see the full traceback.
SystemExit: False
After calling %tb, I get the long stack trace pasted below, which gives little indication of the cause. How can I fix it?
The stacktrace is:
SystemExit Traceback (most recent call last)
<ipython-input-20-6a87bf6320f2> in <module>()
7 self.assertEqual(False, tf.test.is_gpu_available())
8
----> 9 tf.test.main()
10
11
/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/test.py in main(argv)
62 """Runs all unit tests."""
63 _test_util.InstallStackTraceHandler()
---> 64 return _googletest.main(argv)
65
66
/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/googletest.py in main(argv)
98 args = sys.argv
99 return app.run(main=g_main, argv=args)
--> 100 benchmark.benchmarks_main(true_main=main_wrapper)
101
102
/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/benchmark.py in benchmarks_main(true_main, argv)
342 app.run(lambda _: _run_benchmarks(regex), argv=argv)
343 else:
--> 344 true_main()
/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/googletest.py in main_wrapper()
97 if args is None:
98 args = sys.argv
---> 99 return app.run(main=g_main, argv=args)
100 benchmark.benchmarks_main(true_main=main_wrapper)
101
/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py in run(main, argv)
123 # Call the main function, passing through any arguments
124 # to the final program.
--> 125 _sys.exit(main(argv))
126
/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/googletest.py in g_main(argv)
68 if ('TEST_TOTAL_SHARDS' not in os.environ or
69 'TEST_SHARD_INDEX' not in os.environ):
---> 70 return unittest_main(argv=argv)
71
72 total_shards = int(os.environ['TEST_TOTAL_SHARDS'])
/usr/lib/python3.6/unittest/main.py in __init__(self, module, defaultTest, argv, testRunner, testLoader, exit, verbosity, failfast, catchbreak, buffer, warnings, tb_locals)
93 self.progName = os.path.basename(argv[0])
94 self.parseArgs(argv)
---> 95 self.runTests()
96
97 def usageExit(self, msg=None):
/usr/lib/python3.6/unittest/main.py in runTests(self)
256 self.result = testRunner.run(self.test)
257 if self.exit:
--> 258 sys.exit(not self.result.wasSuccessful())
259
260 main = TestProgram
SystemExit: False
The error you're seeing comes from unittest trying to exit the Python process, which Jupyter prevents on your behalf. You can avoid that with, e.g.:
import tensorflow as tf

class Tests(tf.test.TestCase):
    def test_gpu(self):
        self.assertEqual(False, tf.test.is_gpu_available())

import unittest
unittest.main(argv=['first-arg-is-ignored'], exit=False)
(note that the last line is different from yours; it is lifted from https://github.com/jupyter/notebook/issues/2746)
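The same pattern can be checked without TensorFlow at all, since it is plain unittest behaviour; a minimal sketch with a dummy test standing in for the GPU check:

```python
# exit=False makes unittest.main return a TestProgram object instead of
# calling sys.exit(), which is what trips up Jupyter.
import unittest

class Tests(unittest.TestCase):
    def test_trivial(self):
        self.assertEqual(2 + 2, 4)

program = unittest.main(argv=['first-arg-is-ignored'], exit=False)
print(program.result.wasSuccessful())
```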
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
tips = sns.load_dataset('tips')
tips.head()
tips['tip_pct'] = 100 * tips['tip'] / tips['total_bill']
grid = sns.FacetGrid(tips, row="sex", col="time", margin_titles=True)
grid.map(plt.hist, "tip_pct", bins=np.linspace(0, 40, 15));
When I run the above code in the Spyder IDE (from Anaconda Navigator), I get the desired results. But when the same code is run in the Jupyter QtConsole (including the line %matplotlib inline), I get the following errors:
Out:
ValueError Traceback (most recent call last)
<ipython-input-47-c7ea1bbe0c80> in <module>()
----> 1 grid.map(plt.hist, "tip_pct", bins=np.linspace(0, 40, 15));
/Users/waqas/anaconda/lib/python3.5/site-packages/seaborn/axisgrid.py in map(self, func, *args, **kwargs)
701
702 # Get the current axis
--> 703 ax = self.facet_axis(row_i, col_j)
704
705 # Decide what color to plot with
/Users/waqas/anaconda/lib/python3.5/site-packages/seaborn/axisgrid.py in facet_axis(self, row_i, col_j)
832
833 # Get a reference to the axes object we want, and make it active
--> 834 plt.sca(ax)
835 return ax
836
/Users/waqas/anaconda/lib/python3.5/site-packages/matplotlib/pyplot.py in sca(ax)
905 m.canvas.figure.sca(ax)
906 return
--> 907 raise ValueError("Axes instance argument was not found in a figure.")
908
909
ValueError: Axes instance argument was not found in a figure.
I don't know what's going on.
Somewhat related: I was getting the same error in a Jupyter notebook because I was running the following lines in separate cells.
g = sns.FacetGrid(data=titanic,col='sex')
g.map(plt.hist,'age')
Once I ran them both in the same cell the image displayed properly.
Since you're using the Qt console, see if it helps to assign the result of your mapping back to grid.
grid = grid.map(plt.hist, "tip_pct", bins=np.linspace(0, 40, 15))
You'll see the same approach used in the documentation for FacetGrid.
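As to the underlying cause: plt.sca only recognises axes whose figure is currently registered with pyplot's figure manager, which is why a figure that has already been closed between cells (or was created behind pyplot's back) triggers this exact ValueError. A minimal reproduction, independent of seaborn (the exact error wording varies across matplotlib versions):

```python
# Reproduce the ValueError: plt.sca() only accepts axes whose figure is
# registered with pyplot; a Figure built directly bypasses that registry.
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
from matplotlib.figure import Figure

fig = Figure()                 # created without pyplot, so pyplot never saw it
ax = fig.add_subplot(111)

raised = False
try:
    plt.sca(ax)
except ValueError as exc:      # message text differs between versions
    raised = True
    print("ValueError:", exc)
print(raised)
```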