I created a GCP instance with a prebuilt image (c3-deeplearning-tf-ent-1-15-cu100-20200313) and remotely executed "script.py" using the following command:
$ gcloud compute ssh me_as_user@instance -- 'python script.py'
and I got a TensorFlow import error, even though the package is installed under me_as_user and there is no issue executing "script.py" from an interactive SSH session.
Please advise if you have any suggestions.
Thank you
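One way to narrow this down is to have the script report which interpreter it actually runs under, since a non-interactive SSH command may resolve python differently than a login shell. A minimal diagnostic sketch (these print lines are debugging additions, not part of the original script.py):

import sys

# Compare the output with an interactive SSH session; a mismatch means the
# remote command picks up a different interpreter than your login shell.
print("interpreter:", sys.executable)
print("search path:", sys.path)

import tensorflow as tf  # the import that fails remotely
print("tensorflow:", tf.__version__)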
I'm using VS Code on Windows with Remote SSH to develop Python code hosted on Linux.
My Python environment is a conda env based on Python 3.7.
During the test discovery stage, the run_adapter.py script is launched and fails with the following log:
python /home/scharlois/.vscode-server/extensions/ms-python.python-2020.2.64397/pythonFiles/testing_tools/run_adapter.py discover pytest -- --rootdir /path/to/my/project -s --cache-clear tests
Test Discovery failed:
Error: ERROR 1: PROJ: proj_create_from_database: Open of /home/scharlois/.conda/envs/conda37/share/proj failed
I get no error when I execute the same command in the conda env on the remote host.
Which interpreter is used to run the run_adapter.py script? Is it the conda Python one?
It is the conda Python interpreter (verified by modifying the run_adapter script to display it).
I found a workaround: insert the following lines in the script before the main execution:
import os
os.environ["PROJ_LIB"] = ""  # stop PROJ from using the broken conda data path
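Note that the variable has to be set before any PROJ-backed library is imported, otherwise the override comes too late. A minimal sketch, where the pyproj import is only an assumption about which package loads PROJ in this environment:

import os

# Clear the override before the PROJ-backed import below, so the library
# does not pick up the broken ~/.conda/envs/conda37/share/proj path.
os.environ["PROJ_LIB"] = ""

import pyproj  # assumption: the package in this env that links against PROJ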
I'm trying to create a package for iOS with Kivy: https://kivy.org/docs/guide/packaging-ios.html
I try to run ./toolchain.py build kivy in a terminal on Mac.
Error: sudo: unable to execute ./toolchain.py: Permission denied.
My Python is set up in Anaconda and runs correctly.
The first line of ./toolchain.py is: #!/anaconda/envs/python2/bin python2.7
Does anyone know how to change the permissions / how to get it to work?
When I set Python to the default (/usr/bin/env) and adjust the first line of ./toolchain.py, it does execute, but with the default Python I'm not able to install pip.
Ensure ./toolchain.py has the execute bit set (chmod a+x toolchain.py).
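If you would rather do this from Python than from the shell, an equivalent sketch:

import os
import stat

# Add the execute bit for user, group, and others (same effect as chmod a+x).
mode = os.stat("toolchain.py").st_mode
os.chmod("toolchain.py", mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)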
The gsutil command in my VM is failing with the following error:
(...)
packages/google/iam/v1/iam_policy_pb2.py", line 296, in
_sym_db.RegisterServiceDescriptor(_IAMPOLICY)
AttributeError: 'SymbolDatabase' object has no attribute 'RegisterServiceDescriptor'
Any ideas?
When did this issue start to appear? Was it after a configuration change to this VM? If it was not caused by a configuration change, the steps below should help:
SSH into the instance and run the following command to see which Cloud SDK and gsutil versions you're using: 'gcloud version'
As it appears to be a gsutil issue, it might help to update gsutil:
'sudo gcloud components update gsutil'
Enter 'N' at the 'Do you want to run install instead (y/N)?' prompt and you should be able to update gsutil. If the Cloud SDK component manager is not enabled, you might have to use 'sudo apt-get install google-cloud-sdk' instead, which should give the same result.
Check whether the above steps help.
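If you want to capture those versions programmatically, for instance to attach them to a bug report, a small sketch assuming the Cloud SDK is on the PATH:

import subprocess

# Run `gcloud version` and print the reported Cloud SDK component versions.
result = subprocess.run(["gcloud", "version"], capture_output=True, text=True)
print(result.stdout)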
I am trying to run the Kubernetes e2e test cases but am facing this issue:
../../cluster/../cluster/gce/util.sh: line 127: gcloud: command not found
I am using this command:
go run hack/e2e.go -- -v --test
What should the fix for this be?
Try prepending:
KUBERNETES_PROVIDER=local KUBE_MASTER=local go run hack/e2e.go -- -v --test
The E2E tests are written to build up and tear down a cluster for you, and the provider is used to do just that. There are providers for, e.g., GCloud and AWS. That is also why you get the gcloud error: the test runner tries to build a new cluster on GCloud and cannot find the CLI binary.
With the local provider this shouldn't happen.
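If you drive the tests from a Python wrapper instead of the shell, the same provider variables can be injected into the child environment; a minimal sketch:

import os
import subprocess

# Same invocation as above, with the provider variables set only for the
# child process; the parent environment is left untouched.
env = dict(os.environ, KUBERNETES_PROVIDER="local", KUBE_MASTER="local")
subprocess.run(["go", "run", "hack/e2e.go", "--", "-v", "--test"],
               env=env, check=True)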
When I run TensorFlow training code through SSH, I get the following error:
QXcbConnection: Could not connect to display
This most likely happens because the summary object is saving the models. How do I fix this error?
Try X11 forwarding using the -Y flag, e.g.:
ssh -Y user@server
Did you use matplotlib in your TensorFlow training code?
If so, you can try adding the following lines to your code:
import matplotlib
matplotlib.use('Agg')  # non-interactive backend that needs no X display
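Note that the backend has to be selected before pyplot is imported; a minimal end-to-end sketch, where the figure contents and filename are only illustrations:

import matplotlib
matplotlib.use('Agg')  # must come before the pyplot import

import matplotlib.pyplot as plt

plt.plot([0, 1, 2], [0, 1, 4])
plt.savefig('training_curve.png')  # rendered straight to a file, no display needed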