I have a model (based on Mask_RCNN) which I have exported to a servable. I can run it with TF Serving in a Docker container locally on my MacBook Pro, and using the JSON API it responds in 15-20 s, which is not fast, but I didn't really expect it to be.
I've tried to serve it on various AWS machines based off the DLAMI, and also tried some Ubuntu AMIs, specifically using a p2.xlarge with a GPU, 4 vCPUs, and 61 GB of RAM. When I do this, the same model responds in about 90 s. The configurations are identical, since I've built a Docker image with the model inside it.
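To compare raw inference latency between the two environments, a minimal timing client along these lines can be pointed at each container; the servable name "mask_rcnn", the 8501 port mapping, and the dummy input are placeholders, not my actual payload:

```python
# Rough sketch of a timing client for the TF Serving REST API.
# Substitute your servable's name and a real image tensor with the shape your
# SavedModel expects; the zero-filled 64x64 "image" below is only a placeholder.
import json
import time

import numpy as np
import requests

SERVER = "http://localhost:8501"   # container started with -p 8501:8501
MODEL = "mask_rcnn"                # hypothetical servable name

payload = {"instances": [np.zeros((64, 64, 3), dtype=np.uint8).tolist()]}

start = time.time()
resp = requests.post(f"{SERVER}/v1/models/{MODEL}:predict", data=json.dumps(payload))
print(f"status={resp.status_code}  latency={time.time() - start:.1f}s")
```

Running the same script against the local container and the EC2 one at least separates the model's :predict latency from anything the surrounding network or client is doing.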
I also get a timeout using the AWS example here: https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-tfserving.html
Has anyone else experienced anything similar to this or have any ideas on how I can fix or isolate the problem?
I want to deploy Odoo as cheaply as possible. I tried Cloud SQL (15-30 €/month) + Cloud Run, but after a few minutes the Odoo interface shows me a white screen, with many logs in the console similar to this:
GET 404 1.04 KB24 ms Chrome 91 https://bf-dev3-u7raxlu3nq-ew.a.run.app/web/content/290-f328144/1/website.assets_editor.css
My interpretation is that, since Cloud Run is stateless and the static web files seem to be stored in the core module, this information is lost once the container is killed. I have spent a month looking for a solution, so before trying yet another way of deploying I ask the community: have you found a way of persisting the Odoo core modules in v14 other than a volume? In other words, is it possible to deploy Odoo on Cloud Run?
Here are all the ideas I have tried:
First, I thought these CSS files were stored in the Werkzeug session, so I tried two addons that store the session somewhere other than the filestore: camptocamp odoo-cloud-platform-14.0/session-redis and misc-addons-13.0/base_session_store_psql. The problem persisted.
Then I read that the static CSS and JS files generated by the web editor are stored in Odoo as attachments, and that the addon misc-addons-13.0/ir_attachment_s3 can store these files in S3. Although I configured this addon, the problem persisted.
Next, I found a link describing the need to regenerate the assets so that they are stored in the database. Although I did that, the problem persisted.
Finally, I considered deploying Odoo in other ways. Deploying directly on a VM seems the most minimalistic and standard approach, and therefore the most likely to work, although it would make GitOps difficult to implement. Containers could be run on the VM through Docker Compose, which would help with rolling out updates. GKE Anthos seems to support GitOps and to persist volumes, but its description says it is stateless. Finally, there is deploying on a Kubernetes cluster, which would use containers and allow autoscaling compared with the Docker Compose-on-a-VM approach, but it seems more expensive and harder to set up. Regarding the cost, the idea would be to use small worker-node machines so the cost stays low during the night. Regarding the difficulty, the goal is to implement GitOps, so Argo or something similar would need to be added. I have also heard that GKE Autopilot has a good free tier and is easier to deploy.
Thanks in advance :)
Cloud Run isn't a good solution for this. If the Werkzeug session is persisted in memory, the same client isn't guaranteed to reach the same instance each time, and so the files can be lost even in the middle of a session.
The best solution is to use VMs with a sticky-session configuration. You can use an old-school deployment on Compute Engine, or a cloud-native solution with GKE/Kubernetes. It's more or less the same cost if you have only one cluster (the first one is free).
Just a correction about GKE Anthos: I think you mean Cloud Run on Anthos, and yes, it's like Cloud Run but uses Knative on GKE to manage the containers, and it's also serverless. Plain GKE, however, can handle stateful deployments, as you need for Odoo.
I noticed that whenever I open a Google Colab notebook my system fans spin up and all four of my cores show heavy usage (on my Ubuntu laptop). Clearly a lot of JavaScript is running on my system.
However, when I host a Jupyter notebook on another machine and use that from my laptop, all the resource usage is normal.
Q: Is there a way to make Google Colab use minimal resources on my PC?
While Google Colab is an awesome way to share my code (and ask questions), the fan noise annoys me a lot.
P.S. If this is not the right place to ask this, kindly let me know where I can ask it.
Check whether your Google Colab notebook is running on a local runtime. By default it runs on its own Compute Engine instance, but you do have the option to change that.
P.S. It could also simply be Google Chrome using too many resources when running Colab. Try Edge or another less power-hungry browser.
I'm using Heroku (free) to try to deploy a relatively simple neural network I made using Django. The problem is that when I import tensorflow to load the saved model, the import takes longer than 30 seconds, causing my single web worker to time out and kill the page load.
Looking around on the internet, I found that using another worker thread might help with my slow import and model-loading I/O. However, I'm not sure what the best way to do this is, as the import chain from simply loading the page view reaches all the way down to the tensorflow import. Trying basic Python threading within my app to put the imports and model loading in a different thread didn't seem to help Heroku load the page.
It turns out that using Python threading DID help get past the R12 Heroku timeout, but I then had a confounding error: my Django ALLOWED_HOSTS setting did not have the correct localhost URL listed to run the site.
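For anyone hitting the same thing, the deferred-import approach can look roughly like this; the app label, module layout, and model path below are illustrative, not my exact code:

```python
# predictor/apps.py -- hypothetical sketch: start the slow TensorFlow import and
# model load in a background thread so the web worker can bind its port in time.
import threading

from django.apps import AppConfig

model = None  # filled in by the background thread once loading finishes


def _load_model():
    global model
    import tensorflow as tf  # the ~30 s import now happens off the request path
    model = tf.keras.models.load_model("saved_model/")


class PredictorConfig(AppConfig):
    name = "predictor"

    def ready(self):
        # Views must check that `model` is not None (or wait) before predicting.
        threading.Thread(target=_load_model, daemon=True).start()
```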
I would like to understand how feasible it would be to spin up my own instance of a Colaboratory server that I could run within a closed network. Using the public version is unfortunately not yet an option in my company. I would really like to have something equivalent that I could use internally, which has all of the nice features such as collaborative editing.
Has anyone tried doing this? Is it even possible?
There's no way to spin up a full instance of the Colab service; i.e., the bits that integrate with GSuite / Docs / GCP / TPUs.
But, you can run local backends using the instructions here:
http://research.google.com/colaboratory/local-runtimes.html
I am using a Google Colab Jupyter notebook for algorithm training and have been struggling with an annoying problem. Since Colab runs in a VM environment, all my variables become undefined if my session is idle for a few hours. I come back from lunch and the training DataFrame that takes a while to load is undefined, so I have to call read_csv again to reload my DataFrames.
Does anyone know how to rectify this?
If the notebook is idle for some time, it might get recycled: "Virtual machines are recycled when idle for a while" (see the Colaboratory FAQ).
There is also an imposed hard limit on how long a virtual machine can run (reportedly up to about 12 hours).
What could also happen is that your notebook gets disconnected from the internet or from Google Colab, which could be an issue with your network. Read more about this here or here.
There is no way to "rectify" this, but if you have processed some data you can add a step that saves it to Google Drive before the session goes idle.
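For example, the save step could look roughly like this (the DataFrame name and the Drive path are just placeholders):

```python
# Sketch: persist an expensive-to-load DataFrame to Google Drive so it survives
# the VM being recycled. Assumes `df` is the DataFrame already in memory.
from google.colab import drive

drive.mount('/content/drive')
df.to_csv('/content/drive/My Drive/training_data_checkpoint.csv', index=False)

# After reconnecting, reload it instead of rebuilding it:
# import pandas as pd
# df = pd.read_csv('/content/drive/My Drive/training_data_checkpoint.csv')
```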
You can use a local runtime with Google Colab. That way the Colab notebook uses your own machine's resources and you won't have these limits. More on this: https://research.google.com/colaboratory/local-runtimes.html
There are various ways to save your data in the process:
you can save to the notebook VM's filesystem, e.g. df.to_csv("my_data.csv")
you can import sqlite3, Python's built-in interface to the popular SQLite database; the difference between SQLite and other SQL databases is that the DBMS runs inside your application, and the data is saved to a file on that application's filesystem (see the short sketch after this list). Info: https://docs.python.org/2/library/sqlite3.html
you can save to your Google Drive, download to your local file system through your browser, upload to GCP... more info here: https://colab.research.google.com/notebooks/io.ipynb#scrollTo=eikfzi8ZT_rW
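As a small illustration of the sqlite3 option above (the table and column names are made up):

```python
# The database is just a file on the VM's filesystem (or on a mounted Drive
# folder), so it can be copied elsewhere and re-opened after a reconnect.
import sqlite3

conn = sqlite3.connect("experiments.db")   # creates the file if it doesn't exist
conn.execute("CREATE TABLE IF NOT EXISTS results (run_id TEXT, accuracy REAL)")
conn.execute("INSERT INTO results VALUES (?, ?)", ("run-001", 0.93))
conn.commit()

for row in conn.execute("SELECT run_id, accuracy FROM results"):
    print(row)
conn.close()
```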