Use SageMaker Clarify with TensorFlow - amazon-sagemaker-clarify

We want to use SageMaker Clarify with TensorFlow or Keras, but we do not know whether that is possible. If it is, are there any guides or examples?

Related

Pretrained alexnet in tensorflow [closed]

I want to use a pre-trained AlexNet for transfer learning, but I don't see it available in the Keras library.
Am I missing something here?
The other alternative I see is to create the model myself and then either load pre-trained weights or train it from scratch.
Training from scratch on the ImageNet dataset is not possible for me due to resource constraints, but loading pre-trained weights would work.
Could you provide any pointers for getting the pre-trained weights for AlexNet?
Thanks,
As of right now, Keras does not (officially) seem to offer a pre-trained AlexNet model, but PyTorch does. If you are willing to use a different framework for the task, you can retrieve a pre-trained AlexNet like so:
import torchvision.models as models

# Downloads AlexNet with weights pre-trained on ImageNet
alexnet = models.alexnet(pretrained=True)
You can find the list of available pre-trained models here, and a transfer learning tutorial for image classification here.
Hope that answers your question!
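For illustration, a minimal transfer-learning sketch along those lines might look like the following (the 10-class output layer is just a placeholder; adjust it to your own dataset):
import torch.nn as nn
import torchvision.models as models

# Load AlexNet with weights pre-trained on ImageNet
alexnet = models.alexnet(pretrained=True)

# Freeze the convolutional feature extractor so only the new head is trained
for param in alexnet.features.parameters():
    param.requires_grad = False

# Replace the final fully connected layer (1000 ImageNet classes -> 10 classes)
alexnet.classifier[6] = nn.Linear(alexnet.classifier[6].in_features, 10)
From there you would train just the new layer on your own data, much as the linked transfer learning tutorial does.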

need guidance on using pre-trained weights in segmentation_models API [closed]

I want to use a pre-trained Unet model from the segmentation_models API for the Cityscapes dataset, but I need the pre-trained weights for it. Where can I find pre-trained weights for a Unet model trained on the Cityscapes dataset?
Please guide me on this.
UNet is absent from the Cityscapes benchmark, so I assume it is not well suited to this dataset (probably too slow and not accurate enough). However, I advise you to start with DeepLabv3+ from Google, which is not too complicated and is better suited to this dataset.
You can use this repository, where it is implemented, well documented, and usable with pre-trained weights from the Cityscapes dataset (and also the PascalVOC dataset).
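As a side note, as far as I know the segmentation_models library itself only ships ImageNet-pretrained encoder weights, not Cityscapes-trained Unet weights. A minimal sketch of loading those encoder weights might look like this (backbone and class count are illustrative):
import segmentation_models as sm

# Unet with an ImageNet-pretrained encoder; the decoder is still randomly
# initialized, so this is not a Cityscapes-trained model.
model = sm.Unet(
    backbone_name="resnet34",    # encoder architecture
    encoder_weights="imagenet",  # pre-trained encoder weights
    classes=19,                  # Cityscapes uses 19 evaluation classes
    activation="softmax",
)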

How to add hugging face lib to a Kaggle notebook [closed]

How do I add the Hugging Face library to a Kaggle notebook? I want to add this one to my notebook, but the code sample below does not work in the notebook I have. Is there some additional step I have missed?
All models of the Hugging Face model hub are available once you install the library. This one in particular is only available in PyTorch, so you should install both transformers and torch:
!pip install transformers torch
You can then use the model:
from transformers import BartForConditionalGeneration
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
This model is a summarization model so I would recommend reading the summarization tutorial on Hugging Face's website.
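For illustration, once both packages are installed, an end-to-end summarization call might look roughly like this (the input text is just a placeholder):
from transformers import BartForConditionalGeneration, BartTokenizer

# Load the tokenizer and model for facebook/bart-large-cnn
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

text = "Your article text goes here."  # placeholder input
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)

# Generate and decode a short summary
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))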

What is the advantage of using tensorflow instead of scikit-learn for doing regression? [closed]

I am new to machine learning and I want to start doing basic regression analysis. I saw that scikit-learn provides a simple way to do this, but why do people use TensorFlow for regression instead? Thanks!
If the only thing you are doing is regression, scikit-learn is good enough and will definitely do the job. TensorFlow is more of a deep learning framework for building deep neural networks.
There are people using TensorFlow for regression perhaps just out of personal interest, or because they think TensorFlow is more famous or "advanced".
TensorFlow is a deep learning framework and involves far more complex decisions concerning algorithm design.
As a first step, I recommend using scikit-learn, because you will get a first ML model working faster. Later you can build a deep learning model with TensorFlow. :-)
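To make the comparison concrete, here is a small sketch of the same toy linear regression fitted both ways (data and hyperparameters are purely illustrative):
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LinearRegression

# Toy data: y = 3x + 2 with a little noise
X = np.random.rand(200, 1).astype("float32")
y = 3 * X[:, 0] + 2 + 0.1 * np.random.randn(200).astype("float32")

# scikit-learn: one line to fit a linear regression
sk_model = LinearRegression().fit(X, y)

# TensorFlow/Keras: the same model expressed as a (trivial) neural network
tf_model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
tf_model.compile(optimizer="sgd", loss="mse")
tf_model.fit(X, y, epochs=50, verbose=0)
The scikit-learn version is a single line; the Keras version only starts to pay off once the model grows beyond a linear layer.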

Tensorflow 2.0 : frozen graph support [closed]

Will support for frozen graphs continue in TensorFlow 2.0, or will it be deprecated?
I mean the scripts and APIs to create/optimize a frozen graph from a saved_model, and also the APIs to run inference on it.
Assuming it will be supported in the future, what is the recommended way to run inference on a frozen graph in TensorFlow 2.0?
The freeze-graph APIs - freeze_graph.py and convert_variables_to_constants - will not be supported in TensorFlow 2.0.
In 2.0, the primary export format is SavedModel, so the APIs are built to directly support SavedModels.
Inference on existing frozen graphs can be run using the tf.compat.v1 path.
Now, freeze_graph is officially gone with the TensorFlow 2.0 stable release.
Check here.
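For reference, a sketch of running inference on an existing frozen .pb graph through the tf.compat.v1 path in TF 2.x might look like the following (the file path and tensor names are placeholders that depend on how the graph was exported):
import tensorflow as tf

def load_frozen_graph(pb_path, inputs, outputs):
    # Read the serialized GraphDef from disk
    with tf.io.gfile.GFile(pb_path, "rb") as f:
        graph_def = tf.compat.v1.GraphDef()
        graph_def.ParseFromString(f.read())

    # Wrap the imported graph in a callable ConcreteFunction
    def _import():
        tf.compat.v1.import_graph_def(graph_def, name="")

    wrapped = tf.compat.v1.wrap_function(_import, [])
    return wrapped.prune(
        tf.nest.map_structure(wrapped.graph.as_graph_element, inputs),
        tf.nest.map_structure(wrapped.graph.as_graph_element, outputs),
    )

# Example call:
# infer = load_frozen_graph("model.pb", "inputs:0", "outputs:0")
# predictions = infer(tf.zeros([1, 512, 512, 1]))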
If you use an Estimator to build a model, you can use tf.estimator.Estimator.export_saved_model to export your model as a SavedModel.
import tensorflow as tf

# model_fn and model_saved_dir are assumed to be defined elsewhere
model = tf.estimator.Estimator(
    model_fn=model_fn,
    model_dir=model_saved_dir)

def serving_input_receiver_fn():
    # in here, my input is a 512 x 512 single-channel image
    feature = tf.compat.v1.placeholder(tf.float32, shape=[None, 512, 512, 1], name="inputs")
    return tf.estimator.export.TensorServingInputReceiver(feature, feature)

model.export_saved_model(model_saved_dir, serving_input_receiver_fn)
This code works in TensorFlow 2.0.
Or, if you use Keras, you can refer to the steps on the official website:
https://www.tensorflow.org/tutorials/keras/save_and_load#savedmodel_format
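In short, the Keras route from that tutorial boils down to something like this (the model and path are placeholders):
import tensorflow as tf

# Any Keras model will do; this one is just a placeholder
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

# Passing a directory (no .h5 extension) saves in the SavedModel format in TF 2.x
model.save("saved_model/my_model")

# Reload later for inference or further training
restored = tf.keras.models.load_model("saved_model/my_model")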