Tensorflow imports cause Heroku timeout (Django Python)

I'm using Heroku (free tier) to try to deploy a relatively simple neural network I made, using Django. The problem is that when I import tensorflow to load the saved model, tf takes longer than 30 seconds to import, causing my single web worker to time out and kill the page load.
Looking around online, I found that moving the slow import and model-loading I/O to another worker thread might help. However, I'm not sure of the best way to do this, because the import chain that starts from simply loading the page view eventually reaches the tensorflow import. Trying basic Python threading within my app to move the imports and model loading to a different thread didn't appear to help Heroku load the page.

It turns out that using Python threading DID help get past the R12 Heroku timeout, but I then hit a second, confusing error: my Django ALLOWED_HOSTS setting did not include the correct localhost URL needed to run the site.
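For reference, here is a minimal sketch of one way to defer the TensorFlow import and model load to a background thread in a Django view module so the worker can finish booting first. The names MODEL_PATH and predict_view (and the Keras load call) are placeholders, not from the original question:

    # views.py - defer the heavy import/load so it doesn't block worker boot
    import threading

    from django.http import JsonResponse

    _model = None
    _model_lock = threading.Lock()
    MODEL_PATH = "saved_model/"  # hypothetical path to the saved model


    def _load_model():
        global _model
        import tensorflow as tf  # deferred import: not paid during startup
        with _model_lock:
            if _model is None:
                _model = tf.keras.models.load_model(MODEL_PATH)


    # start loading as soon as the module is imported, without blocking boot
    threading.Thread(target=_load_model, daemon=True).start()


    def predict_view(request):
        if _model is None:
            # model still warming up; respond instead of timing out
            return JsonResponse({"status": "warming up"}, status=503)
        # ... build the input from the request and call _model.predict(...) here ...
        return JsonResponse({"status": "ready"})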

Related

Mlflow UI is taking forever to install and still not done

I am running my code on Google Colab to bring up the MLflow dashboard, and whenever I run !mlflow ui it takes forever to execute. The last text on my screen is "Booting worker with pid". This is my first time working with MLflow; can anyone tell me why this is happening and what I can do to fix it?
You can use mlflow ui to see your logs; it doesn't install anything. In fact, it hosts a server using gunicorn. To connect to the tracking server created on Colab, the linked thread could be useful (also the linked doc).
I recommend running the mlflow ui command on your local machine and then opening the listening address to see what happens. (Runs tracked on Colab won't show up there!)
aimlflow might be helpful; it runs a beautiful UI on top of MLflow logs.
the code: https://github.com/aimhubio/aimlflow
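As a sanity check, here is a minimal sketch (the run name, parameter, and paths are illustrative) of logging runs to a local ./mlruns directory and then viewing them with mlflow ui on your own machine rather than inside Colab:

    # log a toy run to a local file store
    import mlflow

    mlflow.set_tracking_uri("file:./mlruns")  # store runs in a local directory

    with mlflow.start_run(run_name="demo"):
        mlflow.log_param("lr", 0.01)
        mlflow.log_metric("loss", 0.42)

    # Then, from a local terminal (not inside the notebook):
    #   mlflow ui --backend-store-uri ./mlruns
    # and open http://127.0.0.1:5000 in your browser.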

Tensorflow serving performance slow

I have a model (based on Mask_RCNN) which I have exported to a servable. I can run it with tf serving in a docker container locally on my Macbook pro and using the json API it will respond in 15-20s, which is not fast but I didn't really expect it to be.
I've tried to serve it on various AWS machines based on the DLAMI and also tried some Ubuntu AMIs, specifically using a p2.xlarge with a GPU, 4 vCPUs and 61GB of RAM. When I do this, the same model responds in about 90s. The configurations are identical, since I've built a docker image with the model inside it.
I also get a timeout using the AWS example here: https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-tfserving.html
Has anyone else experienced anything similar to this or have any ideas on how I can fix or isolate the problem?
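One way to narrow it down (a rough sketch; the URL, model name, and dummy payload below are placeholders) is to time the raw REST round trip from both the EC2 instance itself and from your laptop, which separates network latency from actual inference time:

    # time a single predict call against the TF Serving REST endpoint
    import json
    import time
    import urllib.request

    URL = "http://localhost:8501/v1/models/mask_rcnn:predict"  # hypothetical model name
    # replace the dummy tensor with a real encoded image for your model
    payload = json.dumps({"instances": [[[0.0, 0.0, 0.0]]]}).encode()

    start = time.time()
    req = urllib.request.Request(URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=300) as resp:
        resp.read()
    print(f"round trip: {time.time() - start:.1f}s")

If the call is fast on the instance itself but slow from outside, the problem is the network or payload size rather than the model.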

optimize build size vuejs

When I create a production build, the combined size of the CSS+JS comes to 3.8MB.
The only large dependency I can see is Bootstrap, which accounts for about half of that 3.8MB.
The app has CRUD functionality in an admin module, where I use Bootstrap heavily; the other module is a set of static pages, where I use only Bootstrap's grid.
Can you advise on how I can optimize this further?
This is expected when using all of Bootstrap, and there's not much you can do about it directly. If you had instead used bootstrap-vue, you could import only the specific parts of the modules that you need (JavaScript), which would significantly reduce the size of your bundle.
With that said, there's nothing really wrong here. The gzipped size of these assets is 252KB at most, and that's quite cheap.
If you serve your site over HTTP/2 and the browser supports it, your requests will be multiplexed over a single connection, which is a big improvement over HTTP/1.x in that:
a single TCP connection to your server is opened
requests and responses on that connection are split into frames and interleaved asynchronously, whereas HTTP/1.x browsers open only a small number of parallel connections and each connection handles one request at a time
the connection does not wait for one asset to finish before requesting the next, so requests for assets keep flowing and page load improves considerably
So to summarize: serve your assets gzipped and make sure your web server uses HTTP/2, and this issue becomes a minor one.
Consider using purgecss plugin to get rid of all unused bootstrap classes: https://www.purgecss.com/guides/vue

Google Colab variable values lost after VM recycling

I am using a Google Colab Jupyter notebook for algorithm training and have been struggling with an annoying problem. Since Colab runs in a VM environment, all my variables become undefined if my session is idle for a few hours. I come back from lunch, the training dataframe that takes a while to load is undefined, and I have to call read_csv again to reload my dataframes.
Does anyone know how to rectify this?
If the notebook is idle for some time, it might get recycled: "Virtual machines are recycled when idle for a while" (see colaboratory faq)
There is also a hard limit on how long a virtual machine can run (up to about 12 hours).
What could also happen is that your notebook gets disconnected from the internet / Google Colab; this could also be an issue with your own network.
There is no way to "rectify" this, but if you have processed some data you can add a step to save it to Google Drive before the session goes idle.
You can use local runtime with Google Colab. Doing so, the Colab notebook will use your own machine's resources, and you won't have any limits. More on this: https://research.google.com/colaboratory/local-runtimes.html
There are various ways to save your data along the way (a short sketch follows this list):
you can save to the notebook VM's filesystem, e.g. df.to_csv("my_data.csv")
you can import sqlite3, the Python interface to the popular SQLite database. The difference between SQLite and other SQL databases is that the DBMS runs inside your application, and the data is saved to that application's file system. Info: https://docs.python.org/2/library/sqlite3.html
you can save to your Google Drive, download to your local file system through your browser, upload to GCP... more info here: https://colab.research.google.com/notebooks/io.ipynb#scrollTo=eikfzi8ZT_rW
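Here is a short sketch of those options, assuming the training data is in a pandas DataFrame named df; all file names and the Drive path are illustrative:

    import sqlite3

    import pandas as pd

    df = pd.DataFrame({"x": [1, 2, 3]})  # stand-in for your training dataframe

    # 1) save to the VM's local filesystem (quick, but lost when the VM is recycled)
    df.to_csv("my_data.csv", index=False)

    # 2) save into a SQLite database file on the same filesystem
    with sqlite3.connect("my_data.db") as conn:
        df.to_sql("training_data", conn, if_exists="replace", index=False)

    # 3) mount Google Drive (Colab only) and save there so it survives recycling
    from google.colab import drive
    drive.mount("/content/drive")
    df.to_csv("/content/drive/MyDrive/my_data.csv", index=False)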

Where do I put a TensorFlow model in a web application and how do I access it?

I have a TensorFlow DNN model that I trained in Python, but I saved it so that I can also access it from a Java program. I have no issues using the model in a regular Java application. I loaded the model using the following code:
SavedModelBundle bundle = SavedModelBundle.load("/scitweetsInfo/NeuralNetwork/scitweetsJavaModel/1514675940", "serve");
However, I'm not sure where to put the folder that contains the saved model in my Java web project. I'm also not sure what path I have to use to access it. Thanks for the help!
EDIT: I found that it works if I use the absolute disk path ("D:/Github/...."), so now I want to know how to do this without having to hardcode the disk path.
You can build an API using Flask (Python). When you bring the API up, make sure you load the DNN model once at startup. When the API is hit, run a forward pass through the network and return the results (or a path to the results). The entire model is now behind a REST API, and you can hit that API from your Java web application.
Send the input to the Flask API, get the result (or result path) back, and render it on the web page.
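A minimal sketch of such a Flask wrapper (the endpoint name, model path, and input format are assumptions, not from the original answer):

    from flask import Flask, jsonify, request
    import tensorflow as tf

    app = Flask(__name__)

    # load the SavedModel once at startup, not on every request
    model = tf.saved_model.load("scitweetsJavaModel/1514675940")  # hypothetical relative path
    infer = model.signatures["serving_default"]


    @app.route("/predict", methods=["POST"])
    def predict():
        # expects JSON like {"inputs": [[...], [...]]}; adapt to your model's signature
        inputs = tf.constant(request.get_json()["inputs"], dtype=tf.float32)
        outputs = infer(inputs)
        # convert tensors to plain lists so they can be serialized for the Java client
        return jsonify({name: t.numpy().tolist() for name, t in outputs.items()})


    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)

The Java web application then only needs the server's URL, so no disk path has to be hardcoded on the Java side.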