TensorFlow optimisation while running the model: speed up Predict - tensorflow

I want to disable the computation of several filters during the Predict call with TensorFlow 2 and Keras.
Do I have to modify the source code of TensorFlow to achieve that?

Short answer: No, you don't have to modify the TensorFlow source code.
Long answer with an example is detailed here.
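For a rough illustration of the idea (a minimal sketch, not the linked answer; the model and layer names are made up), one common way to avoid running certain filters during predict() is to build a Keras sub-model that stops before, or routes around, the layers you want to skip, so they are simply never executed:

import tensorflow as tf
from tensorflow import keras

# Hypothetical trained model; the layer names are purely illustrative.
inputs = keras.Input(shape=(64, 64, 3))
x = keras.layers.Conv2D(32, 3, activation="relu", name="cheap_block")(inputs)
x = keras.layers.Conv2D(64, 3, activation="relu", name="expensive_block")(x)
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(10, activation="softmax")(x)
full_model = keras.Model(inputs, outputs)

# Sub-model that stops at "cheap_block": everything after it, including the
# expensive filters, is never computed during predict().
fast_model = keras.Model(inputs=full_model.input,
                         outputs=full_model.get_layer("cheap_block").output)

features = fast_model.predict(tf.zeros((1, 64, 64, 3)))

Note that skipping a layer in the middle of the graph (rather than truncating at it) requires rewiring the functional graph around it, and the downstream layers must accept the resulting tensor shapes.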

Related

Evaluate the TensorFlow object detection model

How can I evaluate my object detection model in a simple and understandable way? I used TensorFlow's Object Detection API, but I didn't understand the TensorBoard graphs. Can I evaluate it manually?
Any help? :(
Welcome to StackOverflow!
In short, yes you can, but it could be quite time-consuming to achieve your goal.
Here are the steps you might want to follow (assuming you have some basic understanding of TensorFlow graphs and sessions; otherwise, please update your question):
1. Export your model to a frozen graph (*.pb file) via HERE. This step gives you an out-of-the-box model that you can load without any dependency on the Object Detection API.
2. Write a script to load your model (the frozen graph) and perform the evaluation; a loading sketch follows these steps. Some instructions can be found HERE. Make sure you use a tool such as Netron to check the input and output node names of your frozen graph.
3. Once you can run inference, write metrics for your own dataset, such as mAP, and loop through all images to get the desired evaluation.
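A minimal sketch of step 2, assuming a TF1-style frozen graph exported by the Object Detection API (the file path and the node names below are assumptions; verify yours with Netron):

import numpy as np
import tensorflow as tf

GRAPH_PATH = "frozen_inference_graph.pb"   # assumed path to your exported graph
INPUT_NODE = "image_tensor:0"              # assumed node names; check with Netron
OUTPUT_NODES = ["detection_boxes:0", "detection_scores:0", "detection_classes:0"]

# Load the frozen graph (a TF1-era graph, loaded through the compat API in TF2).
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile(GRAPH_PATH, "rb") as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

with tf.compat.v1.Session(graph=graph) as sess:
    image = np.zeros((1, 300, 300, 3), dtype=np.uint8)  # replace with a real image batch
    boxes, scores, classes = sess.run(OUTPUT_NODES,
                                      feed_dict={INPUT_NODE: image})
    # From here, loop over your own images and accumulate metrics such as mAP.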
You could use a confusion matrix to evaluate your model on the test dataset.
After training the model on your dataset, export the inference graph for evaluation.
The attached confusion_matrix link walks you through the evaluation step by step.
Best of luck!
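For the confusion-matrix computation itself, here is a hedged sketch (the label vectors are dummies): for detection you first have to match predicted boxes to ground-truth boxes by IoU, which is the detection-specific part not shown here, and then the matrix over the matched class labels can be built with tf.math.confusion_matrix:

import tensorflow as tf

# Assume each ground-truth box has already been matched to a predicted box by IoU,
# giving aligned per-box class vectors (dummy values below).
y_true = tf.constant([0, 1, 1, 2, 2, 2])   # ground-truth class ids
y_pred = tf.constant([0, 1, 2, 2, 2, 0])   # predicted class ids for the matched boxes

cm = tf.math.confusion_matrix(y_true, y_pred, num_classes=3)
print(cm.numpy())   # rows = ground-truth classes, columns = predicted classes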

Is there a Tensorflow or Keras equivalent to fastai's interp.plot_top_losses?

Is there a Tensorflow or Keras equivalent to fastai's interp.plot_top_losses? If not, how can I manually obtain the predictions with the greatest loss?
Thank you.
I found the answer: it is ktrain! It comes with a learning rate finder, learning rate schedules, ready-to-use pre-trained models, and many more features inspired by fastai.
https://github.com/amaiya/ktrain
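If you prefer to stay in plain Keras, one way to reproduce the spirit of plot_top_losses is to compute the per-sample loss yourself and sort by it. This is a minimal sketch with a tiny stand-in model and random data; substitute your own trained model and validation set:

import numpy as np
import tensorflow as tf

# Tiny stand-in model and data, just so the sketch runs end to end.
x_val = np.random.rand(100, 20).astype("float32")
y_val = np.random.randint(0, 3, size=100)
model = tf.keras.Sequential(
    [tf.keras.layers.Dense(3, activation="softmax", input_shape=(20,))])

probs = model.predict(x_val)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
    reduction=tf.keras.losses.Reduction.NONE)   # keep one loss value per sample
per_sample_loss = loss_fn(y_val, probs).numpy()

# Indices of the k predictions with the greatest loss, analogous to plot_top_losses.
top_k = 9
worst = np.argsort(per_sample_loss)[::-1][:top_k]
for i in worst:
    print(f"index={i}  loss={per_sample_loss[i]:.3f}  "
          f"true={y_val[i]}  pred={np.argmax(probs[i])}")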

Distributed Tensorflow Using fit_generator()

Is it possible to use TensorFlow in a distributed manner and use fit_generator()? In my research so far I have not seen anything on how to do this, or whether it is possible. If it is not possible, then what are some possible solutions for using distributed TensorFlow when all the data will not fit in memory?
Using fit_generator() is not possible under a TensorFlow distribution scope.
Have a look at tf.data. I rewrote all my Keras ImageDataGenerators as a TensorFlow data pipeline. It doesn't take much time, is more transparent, and is remarkably faster.
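For reference, a minimal sketch of the kind of rewrite described above: an image list loaded through tf.data and trained with plain fit() under MirroredStrategy, replacing the ImageDataGenerator/fit_generator combination. The file names, image size, and model are placeholders:

import tensorflow as tf

IMG_SIZE = (224, 224)
BATCH = 32

def load_image(path, label):
    img = tf.io.read_file(path)
    img = tf.image.decode_jpeg(img, channels=3)
    img = tf.image.resize(img, IMG_SIZE) / 255.0
    return img, label

# Placeholder file names and labels; point these at your real dataset listing.
paths = tf.constant(["a.jpg", "b.jpg"])
labels = tf.constant([0, 1])

ds = (tf.data.Dataset.from_tensor_slices((paths, labels))
      .shuffle(1000)
      .map(load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE)
      .batch(BATCH)
      .prefetch(tf.data.experimental.AUTOTUNE))

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu",
                               input_shape=(*IMG_SIZE, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

model.fit(ds, epochs=5)   # plain fit() works with tf.data under the strategy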

Is it possible to train pytorch and tensorflow model together on one GPU?

I have a PyTorch model and a TensorFlow model, and I want to train them together on one GPU, following the process below: input --> pytorch model --> output_pytorch --> tensorflow model --> output_tensorflow --> pytorch model.
Is it possible to do this? If the answer is yes, are there any problems I will encounter?
Thanks in advance.
I haven't done this myself, but it is possible; implementing it can be a little tricky.
You can consider each network as a function. You want to, in some sense, compose these functions to form your network. To do this you can compute the final function by feeding the result of one network into the other, and then use the chain rule to compute the derivatives (using automatic differentiation from both packages).
I think a good way to implement this would be to wrap the TF model as a PyTorch Function and use tf.gradients to compute the backward pass.
Doing gradient updates can get really hard (because some variables exist in TF's computation graph). You could turn the TF variables into PyTorch Variables, turn them into placeholders in the TF computation graph, feed them via feed_dict, and update them using PyTorch mechanisms, but I think that would be really hard to do. Instead, if you do your updates inside the backward method of the Function, you might be able to get it to work (it is ugly, but it might do the job).
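A minimal sketch of the wrapping idea, using TF2 eager execution and tf.GradientTape instead of the feed_dict/tf.gradients mechanics described above. All model definitions here are hypothetical, and this only propagates gradients through the TF block back to the PyTorch side; it does not update the TF model's own weights:

import tensorflow as tf
import torch

# Hypothetical middle block implemented in TensorFlow.
tf_block = tf.keras.Sequential([tf.keras.layers.Dense(8, activation="relu")])

class TFBridge(torch.autograd.Function):
    """Runs the TF block in forward() and pulls its input gradient in backward()."""

    @staticmethod
    def forward(ctx, x):
        x_tf = tf.convert_to_tensor(x.detach().cpu().numpy())
        with tf.GradientTape() as tape:
            tape.watch(x_tf)
            y_tf = tf_block(x_tf)
        ctx.tape, ctx.x_tf, ctx.y_tf = tape, x_tf, y_tf
        return torch.from_numpy(y_tf.numpy()).to(x.device)

    @staticmethod
    def backward(ctx, grad_out):
        g_tf = tf.convert_to_tensor(grad_out.detach().cpu().numpy())
        dx_tf = ctx.tape.gradient(ctx.y_tf, ctx.x_tf, output_gradients=g_tf)
        return torch.from_numpy(dx_tf.numpy()).to(grad_out.device)

# pytorch --> TF block --> pytorch, as in the question.
torch_in = torch.nn.Linear(4, 4)
torch_out = torch.nn.Linear(8, 1)

x = torch.randn(2, 4)
y = torch_out(TFBridge.apply(torch_in(x)))
y.sum().backward()          # gradients flow back into both PyTorch modules

Note the numpy round-trips move data between the two frameworks' memory, which adds host/device copies even when both run on the same GPU.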

How to predict using Tensorflow?

This is a newbie question for the TensorFlow experts:
I am reading a lot of data from a power transformer connected to an array of solar panels using Arduinos. My question is: can I use TensorFlow to predict the power generation in the future?
I am completely new to TensorFlow. If you can point me to something similar, I can start with that, or with any GitHub repo doing similar predictive modeling.
Edit: Kyle pointed me to the MNIST data, which I believe is an image dataset. Again, I am not sure if TensorFlow is the right computation library for this problem, or whether it only works on image datasets?
thanks, Rajesh
Surely you can use TensorFlow to solve your problem.
TensorFlow™ is an open source software library for numerical
computation using data flow graphs.
So it works not only on image datasets but also on other kinds of data. Don't worry about this.
As for prediction: first you need to train a model (such as linear regression) on your dataset, then predict. Tutorial code can be found on the TensorFlow homepage.
Get your hands dirty, and you will find that it works on your dataset.
Good luck.
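As a starting point, here is a minimal sketch of the kind of model mentioned above: a single-unit linear regression in Keras. The feature layout and synthetic targets are made up and stand in for your real sensor readings:

import numpy as np
import tensorflow as tf

# Made-up features: [hour of day, irradiance, panel temperature] -> power output.
x = np.random.rand(1000, 3).astype("float32")
y = (2.0 * x[:, 1] - 0.1 * x[:, 2] + 0.05).astype("float32")

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=20, verbose=0)

print(model.predict(x[:5]))   # predicted power for the first five readings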
You can absolutely use TensorFlow to predict time series. There are plenty of examples out there, like this one. And this is a really interesting one on using RNN to predict basketball trajectories.
In general, TF is a very flexible platform for solving problems with machine learning. You can create any kind of network you can think of in it, and train that network to act as a model for your process. Depending on what kind of costs you define and how you train it, you can build a network to classify data into categories, predict a time series forward a number of steps, and other cool stuff.
There is, sadly, no short answer for how to do this, but that's just because the possibilities are endless! Have fun!
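If you want to take the time-series route mentioned above, here is a hedged sketch of a windowed LSTM forecaster, with a synthetic daily-cycle signal standing in for your transformer readings:

import numpy as np
import tensorflow as tf

# Synthetic periodic power signal; replace with your logged measurements.
t = np.arange(0, 2000, dtype="float32")
series = np.sin(2 * np.pi * t / 96) + 0.1 * np.random.randn(len(t)).astype("float32")

WINDOW = 48   # use the last 48 readings to predict the next one
xs = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
ys = series[WINDOW:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(WINDOW, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(xs[..., None], ys, epochs=5, verbose=0)

next_value = model.predict(series[-WINDOW:][None, :, None])
print(next_value)   # one-step-ahead forecast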