When I run TensorFlow training code over SSH, I get the following error:
QXcbConnection: Could not connect to display
This most likely happens because a summary object tries to open a display while saving the model. How do I fix this error?
Try X11 forwarding using the -Y flag, e.g.:
ssh -Y user@server
Do you use matplotlib in your TensorFlow training code?
If so, try adding the following lines at the top of your script, before pyplot is imported:
import matplotlib
matplotlib.use('Agg')  # non-interactive backend that needs no display
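A minimal headless sketch (the file name loss_curve.png is just an example) showing that, with the Agg backend selected before pyplot is imported, figures can be written to disk without any display:

```python
import matplotlib
matplotlib.use('Agg')           # must happen before importing pyplot
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])   # any figure content
fig.savefig('loss_curve.png')   # writes a PNG; no $DISPLAY needed
```

Saving to a file instead of calling plt.show() is the key: Agg can render, it just cannot open a window.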
I am trying to convert a Caffe model. I am using coremltools v5.
This is my code:
import coremltools

caffe_model = ('oxford102.caffemodel', 'deploy.prototxt')
labels = 'flower-labels.txt'

coreml_model = coremltools.converters.caffe.convert(
    caffe_model,
    class_labels=labels,
    image_input_names='data'
)
coreml_model.save('FlowerClassifier.mlmodel')
I run the conversion with the command below:
python3 convert-script.py
and I get an error message like the one below:
error message
Has anybody faced this problem and found a solution?
I just came across this while having the same problem. Caffe support is not available in the newer versions of the coremltools API. To make this code run, an older version of coremltools (such as 3.4) must be used, which in turn requires Python 2.7; that is best done in a virtual environment.
I assume you've solved your issue already, but I'm adding this in case anyone else stumbles onto this question.
There are several solutions depending on your case:
I had the same issue on my M1 Mac. You can resolve it by duplicating your Terminal and running it with Rosetta (this worked for me), then re-signing the virtualenv's Python binary:
cd ~/.virtualenvs/<your venv name here>/bin
mkdir bk; cp python bk; mv -f bk/python .; rmdir bk
codesign -s - --preserve-metadata=identifier,entitlements,flags,runtime -f python
For more solutions, you can follow this issue on GitHub.
I had the same error running Python 3.7 in a virtualenv. The solution is to run:
pip install coremltools==3.0
You don't have to change Python versions; just rerun the script.
Is it possible to generate text from OpenAI GPT-2 using TensorFlow.js?
If not, what is the limitation, such as the model format?
I don't see any reason why not, other than perhaps some operation in GPT-2 that is not supported by TensorFlow.js.
I don't know how to do it end to end, but here's a nice starting point:
install.sh
python3 -m pip install -q git+https://github.com/huggingface/transformers.git
python3 -m pip install tensorflow
save.py
from transformers import TFGPT2LMHeadModel, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# add the EOS token as PAD token to avoid warnings
model = TFGPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id)
model.save("./test_gpt2")
That will give you a SavedModel directory. Now you can try to figure out the input and output nodes, and use tensorflowjs_converter to try and convert it. Pointer: https://www.tensorflow.org/js/tutorials/conversion/import_saved_model.
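A hedged sketch of that conversion step, assuming the SavedModel was written to ./test_gpt2 by save.py above and that ./web_model is an output directory name you choose. GPT-2 is large and may still fail on unsupported ops, so treat this as an experiment rather than a known-good recipe:

```shell
# The converter ships with the tensorflowjs pip package
python3 -m pip install tensorflowjs

# Attempt the SavedModel -> TF.js graph-model conversion
tensorflowjs_converter \
    --input_format=tf_saved_model \
    --saved_model_tags=serve \
    ./test_gpt2 \
    ./web_model
```

If the conversion succeeds, ./web_model will contain a model.json plus binary weight shards that tf.loadGraphModel() can consume in the browser.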
I created a GCP instance from a prebuilt image (c3-deeplearning-tf-ent-1-15-cu100-20200313). I remotely executed script.py using the following command:
$ gcloud compute ssh me_as_user@instance -- 'python script.py'
and I got a TensorFlow import error, even though the package is installed under me_as_user and there is no issue executing script.py in an interactive SSH session.
Any suggestions would be appreciated.
Thank you
I was trying to run the Mask R-CNN repository provided by Matterport on GitHub: https://github.com/matterport/Mask_RCNN. When I run the demo in Anaconda, it shows:
C:\Anaconda\lib\site-packages\matplotlib\figure.py:445: UserWarning: Matplotlib is currently using agg, which is a non-GUI backend, so cannot show the figure. % get_backend())
Has anyone come across a similar problem?
If you are trying to plot on a remote server, ssh X11 forwarding can be used to display matplotlib plots.
Try this:
import matplotlib
matplotlib.use('tkagg')
Make sure you have Xming (on Windows) or XQuartz (on macOS), and connect with -Y:
$ ssh -Y username@serverIP
Alternatively, set the backend in ~/.config/matplotlib/matplotlibrc by adding or changing the line starting with "backend :" to:
backend : tkagg
I have installed TensorFlow in a virtual environment on Ubuntu 16.04. When I enter the command "source ~/tensorflow/bin/activate", the virtualenv activates. But when I then enter the command "import tensorflow as tf", it gives me the following error:
"import: not authorized 'tf' @ error/constitute.c/WriteImages/1028."
How do I solve this?
That error actually comes from ImageMagick's import screenshot utility: the shell executed "import" as a command rather than handing the line to Python. Maybe you forgot to tell the shell which interpreter to use. Two variants:
Add the shebang #!/usr/bin/env python3 at the beginning of your script
OR
Run the script as python3 my_script.py
(If you typed "import tensorflow as tf" directly at the shell prompt, start python3 first and type it inside the interpreter.)
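A minimal sketch of the first variant (the script contents here are just a placeholder):

```python
#!/usr/bin/env python3
# With this shebang, running `./script.py` hands the file to Python 3.
# Without it, the shell may interpret the file itself, and the line
# `import ...` then resolves to ImageMagick's `import` command, producing
# the "not authorized ... error/constitute.c" message above.
message = "interpreted by Python"
print(message)
```

Remember to also make the script executable with chmod +x script.py; otherwise the shebang never gets a chance to take effect.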