My Colab instance works fine for training deep learning models, but when I call remocolab.setupVNC() to set up SSH and VNC, Colab disconnects about 5 minutes after the call starts running.
I've used this setup to run the CARLA simulator on Colab and it worked, but for the past three weeks it hasn't. How can I solve this problem?
I've been using this notebook: https://colab.research.google.com/github/MichaelBosello/carla-colab/blob/master/carla-simulator.ipynb
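For reference, the part of that notebook that triggers the disconnect is essentially the standard remocolab setup cell, roughly like this (this is the usage from remocolab's README; the notebook's exact cell contents may differ slightly):

!pip install git+https://github.com/demotomohiro/remocolab.git
import remocolab
# Prompts for a tunnel authtoken, creates an SSH user,
# and starts a VNC server on the Colab VM
remocolab.setupVNC()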
Related
I'm using Colab and it disconnects me every 12 hours while using the GPU. I bought a Colab Pro subscription and still have the same problem. What can I do to fix this? I need help quickly.
I left a TensorFlow model training overnight, but in the morning I saw that the code had stopped running even though the laptop was still on.
Can someone please explain how to keep the TensorFlow code running while I am away from the laptop?
I'm running Windows 10 on my laptop (with a compatible GPU) and just started using tensorflow-gpu 2 days ago. Every time I close the lid to put the laptop to sleep, it restarts when I wake it up. This never happened when I was running plain tensorflow. I thought I'd check here whether this is normal or a known issue before running it up the flagpole on their git repo.
I tried to install Stellarium by following the http://projectable.me/3d-printed-raspberry-pi-powered-planetarium-projector-nightlight-part-1/ tutorial. When I ran
bzr co lp:stellarium stellarium
my internet connection dropped after a few hours and the checkout failed. It is hard to simply rerun the command because it takes more than five hours and about 4 GB of data. How can I fix this? Thank you.
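One hedged suggestion (untested here): if only the current source tree is needed rather than the full revision history, bzr's lightweight checkout transfers far less data, so an interrupted run is much cheaper to retry:

bzr checkout --lightweight lp:stellarium stellarium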
I am using Google Cloud (4 CPUs, 15 GB RAM) to host TensorFlow Serving (branch 0.5.1). The model is a pre-trained ResNet which I imported using Keras and converted to the .pb format using SavedModelBuilder. I followed the TensorFlow Serving installation and compilation steps as described in the installation docs, and did a bazel build using:
bazel build tensorflow_serving/...
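For reference, the Keras-to-.pb conversion was done with SavedModelBuilder along these lines (a minimal sketch using the TF 1.x-era API; the export path, signature name, and tensor keys below are illustrative placeholders, not necessarily the ones I used):

import tensorflow as tf
from keras import backend as K
from keras.applications.resnet50 import ResNet50

model = ResNet50(weights='imagenet')
sess = K.get_session()

# Serving expects a numeric version subdirectory, e.g. .../1
builder = tf.saved_model.builder.SavedModelBuilder('/tmp/resnet_serving/1')
signature = tf.saved_model.signature_def_utils.predict_signature_def(
    inputs={'images': model.input},
    outputs={'scores': model.output})
builder.add_meta_graph_and_variables(
    sess,
    tags=[tf.saved_model.tag_constants.SERVING],
    signature_def_map={'predict': signature})
builder.save()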
Doing inference on an image from my local machine using a Python client (a sketch of that client follows the build command below) gave me results in approximately 23 seconds. I was able to tune this a bit by following the advice here: I replaced the bazel build with the command below to enable CPU optimizations, which brought the response time down to 12 seconds.
bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma \
    --copt=-msse4.2 //tensorflow_serving/model_servers:tensorflow_model_server
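The Python client used for the timing test is along these lines (TF Serving 0.x gRPC API; the host, port, model name, signature name, and input are illustrative placeholders):

import numpy as np
import tensorflow as tf
from grpc.beta import implementations
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2

# Placeholder input: one preprocessed 224x224 RGB image
image_batch = np.zeros((1, 224, 224, 3), dtype=np.float32)

channel = implementations.insecure_channel('my.server.ip', 9000)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'resnet'
request.model_spec.signature_name = 'predict'
request.inputs['images'].CopyFrom(
    tf.contrib.util.make_tensor_proto(image_batch))

result = stub.Predict(request, 30.0)  # 30-second timeout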
Other things I tried that made no difference to response times:
1. Increasing from a 4-CPU to an 8-CPU machine
2. A GPU (Tesla K80) + 4-CPU machine
I haven't tried batch optimization, as I am currently just testing with a single inference request. The setup doesn't use Docker or Kubernetes.
I'd appreciate any pointers that could help bring down the inference time. Thanks!
Solved; closing this issue. I am now able to get sub-second prediction times. There were multiple problems.
One was the image upload/download time, which was playing a role.
The second was that when I ran on the GPU, TensorFlow Serving wasn't compiled with GPU support. The GPU issue was resolved using the approaches outlined in these links: https://github.com/tensorflow/serving/issues/318 and https://github.com/tensorflow/tensorflow/issues/4841
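For anyone who lands here: the essence of the fix from those threads was rebuilding the model server with CUDA enabled (hedged from memory; the exact flags depend on the serving/TF version and a configured CUDA toolchain), roughly:

bazel build -c opt --config=cuda //tensorflow_serving/model_servers:tensorflow_model_server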