I'm trying to run several neural network models in TensorFlow in parallel; each model is independent of the rest. Is it necessary to create a session for each execution I launch with TensorFlow, or could I reuse the same session for all of the models? Thank you
A session is tied to a specific TensorFlow Graph instance. If you want one session for everything, you need to put all your models in the same graph. This can cause naming problems for tensors and is, IMO, generally a bad idea (things that are unrelated should be kept separate).
Having everything in the same graph also raises your resource requirements (you always load the whole graph even if you only run a sub-graph), which is another reason to split things into independent graphs. With independent graphs, you'll have to use multiple sessions.
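To make that concrete, here is a minimal TF 1.x sketch of the "one graph plus one session per model" setup (the tiny dense layer and shapes are just placeholders, not a real model):

    import numpy as np
    import tensorflow as tf

    def build_and_run(data):
        """Each model lives in its own Graph and gets its own Session."""
        graph = tf.Graph()
        with graph.as_default():
            x = tf.placeholder(tf.float32, [None, 4], name="x")
            y = tf.layers.dense(x, 1, name="out")      # stand-in for a real model
            init = tf.global_variables_initializer()
        sess = tf.Session(graph=graph)
        sess.run(init)
        return sess.run(y, feed_dict={x: data})

    # Independent models can then be launched e.g. from separate threads or processes.
    results = [build_and_run(np.random.rand(2, 4)) for _ in range(3)]

Nothing is shared between the graphs, so there are no tensor-name collisions and each session only loads what its own model needs.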
Related
I'm training a neural network using keras but I'm not sure how to feed the training data into the model in the way that I want.
My training data set is effectively infinite: I have code that generates training examples as needed, so I just want to pipe a continuous stream of novel data into the network. keras seems to want me to specify my entire dataset in advance by creating a NumPy array with everything in it, but this obviously won't work with my approach.
I've experimented with creating a generator class based on keras.utils.Sequence, which seems like a better fit, but it still requires me to specify a length via the __len__ method, which makes me think it will only create that many examples before recycling them. Can someone suggest a better approach?
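Roughly, my Sequence-based attempt looks like the sketch below (using tf.keras here; the shapes and batch size are made up for illustration):

    import numpy as np
    from tensorflow import keras

    class ExampleStream(keras.utils.Sequence):
        """Generates fresh training examples on demand instead of holding a fixed dataset."""
        def __init__(self, batch_size=32, steps_per_epoch=1000):
            self.batch_size = batch_size
            self.steps_per_epoch = steps_per_epoch

        def __len__(self):
            # The length Keras insists on -- is this a cap on how many batches it will ask for?
            return self.steps_per_epoch

        def __getitem__(self, index):
            # Stand-in for my real example-generation code.
            x = np.random.rand(self.batch_size, 10)
            y = np.random.rand(self.batch_size, 1)
            return x, y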
I would like to serve roughly 600 models with TensorFlow Serving.
I am trying to find a solution to eventually reduce the number of models:
My models have the same architecture; only the weights change.
Is it possible to load only one model and swap the weights?
Would it be possible to aggregate all those models together so that, effectively, the first layer of the combined model takes a model ID along with the input features for that model?
Has anyone tried running a couple of hundred models on one machine? I found this Cortex solution, but I wanted to avoid using another tech.
https://towardsdatascience.com/how-to-deploy-1-000-models-on-one-cpu-with-tensorflow-serving-ec4297bff54b
If the models have the same architecture but different weights, you can try merging all those models into a "super model" (see the sketch below). However, I would need to know more about the task to see whether that's possible.
To serve 600 models, you would need a very powerful machine and a lot of memory (depending on how big your models are and how heavily you use them in parallel).
You can either run TensorFlow Serving yourself or use a provider such as Inferrd.com, Google, or AWS.
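To sketch the "super model" idea: stack the weights of all models into one layer and let a model ID select which slice to use. The layer below is only an illustration (a single dense layer with assumed shapes), not your actual architecture:

    import tensorflow as tf

    class MultiModelDense(tf.keras.layers.Layer):
        """One dense layer holding the weights of every model; a model ID picks the slice."""
        def __init__(self, num_models, in_dim, out_dim):
            super().__init__()
            self.w = self.add_weight("w", shape=(num_models, in_dim, out_dim),
                                     initializer="glorot_uniform")
            self.b = self.add_weight("b", shape=(num_models, out_dim),
                                     initializer="zeros")

        def call(self, inputs):
            features, model_id = inputs            # features: [batch, in_dim], model_id: [batch]
            w = tf.gather(self.w, model_id)        # [batch, in_dim, out_dim]
            b = tf.gather(self.b, model_id)        # [batch, out_dim]
            return tf.einsum("bi,bio->bo", features, w) + b

Each request would then carry a model ID alongside its features, so a single served model can cover all 600.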
I need to put a TensorFlow model into production behind a simple API endpoint. The model should be shared among processes/workers/threads so as not to waste too much memory.
I already tried multiple gunicorn workers with the --preload option, loading the model before the resources are defined, but after a while I get a timeout error and the model stops responding. I know that TensorFlow Serving exists for this purpose, but the problem is that the model is not yet available at deployment time and the ML pipeline is composed of many different components (the TensorFlow model is just one of them). The end user (customer) is the one who trains/saves the model. (I am using Docker.) Thanks in advance
The issue is that the TensorFlow runtime and its global state are not fork-safe.
See this thread: https://github.com/tensorflow/tensorflow/issues/5448 or this one: https://github.com/tensorflow/tensorflow/issues/51832
You could attempt to load the model only in the workers using the post_fork and post_worker_init server hooks.
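A minimal sketch of that approach, assuming a gunicorn config file; the application module name and model path are placeholders:

    # gunicorn.conf.py -- load the model inside each worker, after the fork,
    # so no TensorFlow state is inherited from the parent process.
    import tensorflow as tf

    def post_worker_init(worker):
        """Gunicorn server hook: runs in the worker process once it is initialized."""
        import myapp                               # placeholder for your application module
        myapp.model = tf.keras.models.load_model("/models/current")  # placeholder path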
I am trying to write code that is compatible with both eager and graph execution. However, there is very little information online about how to do this; it is literally a footnote on TensorFlow's website. Furthermore, what they have written is confusing:
The same code written for eager execution will also build a graph during graph execution. Do this by simply running the same code in a new Python session where eager execution is not enabled.
This implies that a same-code solution is possible, where the only change required is adding or removing tf.enable_eager_execution().
Currently I use tf.keras to define my model and tf.data for my input pipeline. However, many eager operations don't work in graph mode, and the opposite is also true.
For example, I keep track of the number of epochs using tf.train.Checkpoint(). In eager mode, after restoring, I can read it with epochs.numpy() and assign its value to a local variable. This does not work in graph mode, however, which would instead require sess.run(epochs), since the value isn't available until the graph is executed.
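Concretely, the two access patterns I'm trying to reconcile look roughly like this (the epochs variable is just an illustration):

    import tensorflow as tf

    epochs = tf.Variable(0, dtype=tf.int64, name="epochs")
    ckpt = tf.train.Checkpoint(epochs=epochs)

    if tf.executing_eagerly():
        current_epoch = epochs.numpy()             # value is available immediately
    else:
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            current_epoch = sess.run(epochs)       # value only exists once the session runs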
Similarly, to compute my gradients in eager mode I need some form of autograd, in my case tf.GradientTape(). This is not compatible with graphs, as "tf.GradientTape.gradients() does not support graph control flow."
I see that tfe.py_func exists, but once again it only works when eager is not enabled, so it doesn't help here.
So how do I write a same-code solution when so many aspects of eager and graph execution directly conflict with each other?
The benefit would be that I can store and load individual models using tf.train.export_meta_graph(), but I'm not sure whether this usage is what TensorFlow was designed for. Does using multiple graphs in parallel have any negative impact on parallelism/performance, functionality, etc., as long as I don't need to share data between them?
It's not a good idea, because passing data between models would require fetching the result from one session and feeding the Python object back into the other session. Locally, that means unnecessary copy operations, and it's worse in the distributed setting.
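A minimal sketch of that round trip (the two tiny models are placeholders): the intermediate result leaves TensorFlow as a NumPy array and then has to be fed back in.

    import numpy as np
    import tensorflow as tf

    g1, g2 = tf.Graph(), tf.Graph()
    with g1.as_default():
        x1 = tf.placeholder(tf.float32, [None, 4])
        y1 = tf.layers.dense(x1, 8)                # first model (placeholder)
        init1 = tf.global_variables_initializer()
    with g2.as_default():
        x2 = tf.placeholder(tf.float32, [None, 8])
        y2 = tf.layers.dense(x2, 1)                # second model (placeholder)
        init2 = tf.global_variables_initializer()

    sess1, sess2 = tf.Session(graph=g1), tf.Session(graph=g2)
    sess1.run(init1)
    sess2.run(init2)

    inp = np.random.rand(2, 4).astype(np.float32)
    hidden = sess1.run(y1, feed_dict={x1: inp})    # fetched out as a NumPy array...
    out = sess2.run(y2, feed_dict={x2: hidden})    # ...then copied back into the other graph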
There are now export_scoped_meta_graph() and import_scoped_meta_graph() in tf.contrib.framework.meta_graph to save and load parts of a graph, so using a single global graph is recommended.