Can Keras code written with the TensorFlow backend run on Keras with the Theano backend? - tensorflow

Can code written in Keras using TensorFlow as the backend run in another Keras environment where Theano is the backend? Are there any computational benefits to running on either of them?

To avoid dimensional errors, just add
"image_data_format": "channels_last"
to the keras.json configuration file at
~/.keras
If there is no such file, you can create it yourself.
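For reference, a minimal keras.json could look like the following (a sketch; the other fields shown are the usual Keras defaults, and "backend" should name whichever backend you actually run):
{
    "image_data_format": "channels_last",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "tensorflow"
}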

Related

Deploy TensorFlow probability regression model as Sagemaker endpoint

I would like to develop a TensorFlow Probability regression model locally and deploy it as a SageMaker endpoint. I have deployed standard XGB models like this previously and understand that one can deploy a TensorFlow model like so:
from sagemaker.tensorflow.model import TensorFlowModel

tensorflow_model = TensorFlowModel(
    name=tensorflow_model_name,
    source_dir='code',
    entry_point='inference.py',
    model_data=<TENSORFLOW_MODEL_S3_URI>,
    role=role,
    framework_version='<TENSORFLOW_VERSION>')
tensorflow_model.deploy(
    endpoint_name=<ENDPOINT_NAME>,
    initial_instance_count=1,
    instance_type='ml.m5.4xlarge',
    wait=False)
However, I do not think this will cover, for example, the dependency:
import tensorflow_probability as tfp
Do I need to use script mode or Docker instead? Any pointers would be very much appreciated. Thanks.
Another way would be to add "tensorflow-probability" to requirements.txt and pass both your code and the requirements file via dependencies if you are not using source_dir:
Model(entry_point='inference.py',
      dependencies=['code', 'requirements.txt'])
You can create a requirements.txt in your source_dir (code) and place tensorflow-probability in it. SageMaker will install the dependencies listed in requirements.txt before running your script.
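As a sketch of that approach, assuming the entry point lives in a source_dir named code, the layout would be:
code/
    inference.py
    requirements.txt    <- contains the single line: tensorflow-probability
SageMaker then pip-installs everything in requirements.txt on the endpoint before inference.py is imported.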

Using Huggingface pipeline transformers on Mac M1, fresh PyTorch install errors

I am running a very basic sentiment analysis pipeline utilising the XLM-RoBERTa model on Hugging Face. I am trying to ensure I am utilising the M1 chip, as I will be looping over ~10e7 entries.
So as to be consistent, I am running a fresh install of PyTorch following the yml file and steps outlined in this (very useful) video; I subsequently pip install sentencepiece and protobuf (version 3.2.0) to deal with a few subsequent errors. When running a simple pipeline model, however, I am faced with the below:
# Imports
import pandas as pd
import datetime as dt
import itertools
from transformers import pipeline, AutoTokenizer
sentiment_model = pipeline(model="cardiffnlp/twitter-xlm-roberta-base-sentiment", return_all_scores=True)
ValueError: google.__spec__ is None
Interestingly, following the install methods for TensorFlow from the same channel runs fine, but it does not access the M1 chip and simply runs on the CPU.
Has anyone faced this before, or does anyone have a method by which I can get PyTorch running?
Many thanks in advance.
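For reference, once the install works, a minimal check to confirm the M1 GPU is visible and to pin the pipeline to it could look like this (a sketch, assuming PyTorch 1.12+ built with MPS support and a transformers version that accepts a device string):
import torch
from transformers import pipeline

print(torch.backends.mps.is_available())  # True if the Apple Silicon GPU backend is usable
sentiment_model = pipeline(
    model="cardiffnlp/twitter-xlm-roberta-base-sentiment",
    return_all_scores=True,
    device="mps",  # run on the M1 GPU instead of the CPU
)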

Grappler optimization failed. Error: Op type not registered 'FusedBatchNormV3' in Tensorflow Serving

I am serving a model using Tensorflow Serving.
TensorFlow ModelServer: 1.13.0-rc1+dev.sha.fd92d2f
TensorFlow Library: 1.13.0-rc1
I sanity-tested with load_model and predict(...) in a notebook, and it makes the expected predictions. The model is a ResNet50 with a custom head (fine-tuned).
If I try to submit the request as instructed in
https://www.tensorflow.org/tfx/tutorials/serving/rest_simple
I get the error:
2022-02-10 22:22:09.120103: W external/org_tensorflow/tensorflow/core/kernels/partitioned_function_ops.cc:197] Grappler optimization failed. Error: Op type not registered 'FusedBatchNormV3' in binary running on tensorflow-no-gpu-20191205-rec-eng. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
2022-02-10 22:22:09.137225: W external/org_tensorflow/tensorflow/core/framework/op_kernel.cc:1401] OP_REQUIRES failed at partitioned_function_ops.cc:118 : Not found: Op type not registered 'FusedBatchNormV3' in binary running on tensorflow-no-gpu-20191205-rec-eng. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
Any idea how to resolve? Will provide more details upon request.
I found out what was wrong: the TensorFlow ModelServer version was too old. FusedBatchNormV3 is not registered in the 1.13 binary, so the server needs to be at least as new as the TensorFlow version the model was exported with. Just ensure it is:
TensorFlow ModelServer: 2.8.0-rc1+dev.sha.9400ef1
TensorFlow Library: 2.8.0
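If you serve with the stock Docker image, pulling a matching tag is one way to get the newer binary (a sketch; the model path and name here are placeholders):
docker pull tensorflow/serving:2.8.0
docker run -p 8501:8501 \
    --mount type=bind,source=/path/to/saved_model,target=/models/my_model \
    -e MODEL_NAME=my_model tensorflow/serving:2.8.0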

Is there any way to load FaceNet model as a tf.keras.layers.Layer using Tensorflow 2.3?

I want to use FaceNet as an embedding layer (which won't be trainable).
I tried loading FaceNet like so:
tf.keras.models.load_model('./path/tf_facenet')
where the directory ./path/tf_facenet contains the 4 files that can be downloaded at https://drive.google.com/file/d/0B5MzpY9kBtDVZ2RpVDYwWmxoSUk/edit
but an error message shows up:
OSError: SavedModel file does not exist at: ./path/tf_facenet/{saved_model.pbtxt|saved_model.pb}
And the h5 files downloaded from https://github.com/nyoki-mtl/keras-facenet don't seem to work either (they were exported with TensorFlow 1.3).
I had an issue like yours when loading the FaceNet Keras model. Your Python environment may be missing the h5py module.
You should install it with: conda install h5py
Hope you succeed!
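For reference, once load_model succeeds on the .h5 file, wiring FaceNet in as a frozen embedding layer could look like this (a sketch, assuming the nyoki-mtl .h5 export and FaceNet's usual 160x160 RGB input; the file name is a placeholder):
import tensorflow as tf

facenet = tf.keras.models.load_model('facenet_keras.h5')  # hypothetical local path
facenet.trainable = False  # freeze it so it acts as a fixed embedding layer

inputs = tf.keras.Input(shape=(160, 160, 3))
embeddings = facenet(inputs)  # per-face embedding vectors
model = tf.keras.Model(inputs, embeddings)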

Is there any way to run keras on GPU with tensorflow backend in many jupyter notebooks simultaneously?

The scenario is:
I run keras in notebookA.
I run keras in notebookB.
I get InternalError: Blas GEMV launch failed when I start training my model in notebookB.
I know there is a similar question here, TensorFlow: InternalError: Blas SGEMM launch failed, and I can solve this by closing the other notebook (notebookA) that was using the GPU.
But I'd like to know: is there any way to run Keras with the TensorFlow backend in multiple notebooks simultaneously? Or is there no way to achieve this for now?
I tried to use the settings below in each notebook, but it didn't work (I got the same error).
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.3
set_session(tf.Session(config=config))
Thank you very much!
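For reference, another commonly suggested variant of the same TF1-era API is to let TensorFlow grow GPU memory on demand rather than reserving a fixed fraction (a sketch; it reduces contention but still cannot guarantee two notebooks fit on one GPU at once):
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory only as needed
set_session(tf.Session(config=config))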