Getting model metadata from a SageMaker model endpoint - TensorFlow 2.0

I have a TensorFlow model deployed to a SageMaker endpoint, and it works fine when I invoke it to make a prediction. On its own, the model file contains key attributes that are accessible if I open it with h5py.File(), like this:
with h5py.File(model_path2, 'r') as f:
    labels = [n.decode("ascii", "ignore") for n in f['labels']]
    img_norm_vec = np.array(f['norm_vector'])
My question is: can I access these metadata attributes from a SageMaker endpoint? I searched through the SageMaker documentation and didn't see anything related to this.

Sorry for the delayed response. SageMaker endpoints don't provide a way to extract model metadata attributes at this time. The following SDK documentation outlines the calls you can make against an endpoint: https://sagemaker.readthedocs.io/en/stable/api/inference/predictors.html.
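One workaround is to read the attributes from the model artifact itself rather than from the endpoint: the artifact the endpoint serves lives in S3 as a model.tar.gz, and its location can be looked up with boto3. A rough sketch, assuming a single-container model and that the archive contains your HDF5 file (the model name 'my-model' and the member name 'model.h5' are placeholders):

import tarfile
import boto3
import h5py
import numpy as np

# Look up where the model artifact lives in S3
sm = boto3.client('sagemaker')
model_data = sm.describe_model(ModelName='my-model')['PrimaryContainer']['ModelDataUrl']
bucket, key = model_data[len('s3://'):].split('/', 1)

# Download and unpack the archive, then read the metadata as before
boto3.client('s3').download_file(bucket, key, 'model.tar.gz')
with tarfile.open('model.tar.gz') as tar:
    tar.extractall('model_dir')

with h5py.File('model_dir/model.h5', 'r') as f:  # placeholder file name
    labels = [n.decode('ascii', 'ignore') for n in f['labels']]
    img_norm_vec = np.array(f['norm_vector'])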


Ways to provide an authorisation header to use Google ML Engine

I'm currently involved in a project using GCP ML Engine. It's already set up and ready, so my task is to use its predict command to leverage the API. The whole project lives in a VM instance, so I want to know: is there a more concise way to get an access token, such as an SDK? I didn't find anything useful. If not, what are my options here? JWT?
You might find this useful. https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/ml_engine/online_prediction/predict.py
Especially these lines:
# Create the ML Engine service object.
# To authenticate set the environment variable
# GOOGLE_APPLICATION_CREDENTIALS=<path_to_service_account_file>
service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/{}/models/{}'.format(project, model)

if version is not None:
    name += '/versions/{}'.format(version)

response = service.projects().predict(
    name=name,
    body={'instances': instances}
).execute()
You can create the service account on the project's IAM page and download its JSON key file onto the VM.
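If what you want is the raw access token itself, a short sketch with the google-auth library: on a GCE VM, google.auth.default() resolves to the instance's service account via the metadata server, so no key file is needed there.

import google.auth
import google.auth.transport.requests

# Falls back to GOOGLE_APPLICATION_CREDENTIALS when not running on GCE
credentials, project = google.auth.default(
    scopes=['https://www.googleapis.com/auth/cloud-platform'])

# Tokens are fetched lazily; refresh() populates credentials.token
credentials.refresh(google.auth.transport.requests.Request())
print(credentials.token)  # a bearer token for an Authorization header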

Load a model into a TensorFlow Serving container and use protobufs to communicate with it

I know how to load models into the TensorFlow Serving container and communicate with it via HTTP requests, but I am a little confused about how to use protobufs. What are the steps for using protobufs? Shall I just load a model into the container and use something like this:
from tensorflow_serving.apis import predict_pb2
request = predict_pb2.PredictRequest()
request.model_spec.name = 'resnet'
request.model_spec.signature_name = 'serving_default'
Or do I have to do some extra steps before/after loading the model?
Here is sample code for making an inference call to the gRPC endpoint in Python:
resnet_client_grpc.py
In the same folder, you will find an example of calling the REST endpoint.
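No extra steps are needed on the model side; the gRPC request just needs an input tensor and a stub to send it through. A minimal sketch, assuming the server's gRPC port is 8500 and the signature's input is named 'image_bytes' (check the real name with saved_model_cli show):

import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

channel = grpc.insecure_channel('localhost:8500')  # 8500 is the default gRPC port
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'resnet'
request.model_spec.signature_name = 'serving_default'

with open('cat.jpg', 'rb') as f:  # hypothetical input file
    data = f.read()
request.inputs['image_bytes'].CopyFrom(tf.make_tensor_proto(data, shape=[1]))

result = stub.Predict(request, 10.0)  # 10-second timeout
print(result.outputs)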

Update BigQuery dataset access from Java

We have a requirement to give a particular user group access to a BigQuery dataset that contains views created by Java code. I found that the datasets.patch method can help me do it, but I am not able to find documentation on what needs to be passed in the HTTP request.
You can find the complete documentation on how to update BigQuery dataset access controls in the linked documentation page. Given that you are already creating the views in your dataset programmatically, I would advise using the BigQuery client library, which may be more convenient than calling the datasets.patch API method directly. In any case, if you are still interested in calling the API directly, you should provide the relevant portions of a dataset resource in the body of the request.
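For instance, a sketch of the relevant portion of a patch request body granting read access to a group (the email is a placeholder; note that the access array you send replaces the existing one, so include any entries you want to keep):

{
  "access": [
    {"role": "READER", "groupByEmail": "your_group@example.com"}
  ]
}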
The first link I shared provides a good example of updating dataset access using the Java client library, but in short, this is what you should do:
public List<Acl> updateDatasetAccess(DatasetInfo dataset) {
    // Make a copy of the ACLs in order to modify them (adding the required group)
    List<Acl> previousACLs = dataset.getAcl();
    ArrayList<Acl> ACLs = new ArrayList<>(previousACLs);
    ACLs.add(Acl.of(new Acl.Group("your_group@gmail.com"), Acl.Role.READER));
    DatasetInfo.Builder builder = dataset.toBuilder();
    builder.setAcl(ACLs);
    return bigquery.update(builder.build()).getAcl();
}
EDIT:
The way to define the dataset object is the following:
BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
Dataset dataset = bigquery.getDataset(DatasetId.of("YOUR_DATASET_NAME"));
Take into account that if you do not specify credentials when constructing the client object bigquery, the client library will look for credentials in the GOOGLE_APPLICATION_CREDENTIALS environment variable.

How do I make an authenticated REST call to the Google machine learning predict endpoint?

I want to make a simple HTTP REST call to a Google machine learning predict endpoint, but I can't find any information on how to do that. As far as I can tell from the limited documentation, you have to use either the Java or Python library (or figure out how to properly handle all the auth yourself when using the REST auth endpoints) and get a credentials object. Then the instructions end, and I have no idea how to actually use my credentials object. This is my code so far:
import urllib2
from google.oauth2 import service_account
# Constants
ENDPOINT_URL = 'ml.googleapis.com/v1/projects/{project}/models/{model}:predict?access_token='
SCOPES = ['https://www.googleapis.com/auth/cloud-platform']
SERVICE_ACCOUNT_FILE = 'service.json'
credentials = service_account.Credentials.from_service_account_file(SERVICE_ACCOUNT_FILE, scopes=SCOPES)
access_token=credentials.token
opener = urllib2.build_opener(urllib2.HTTPHandler)
request = urllib2.Request(ENDPOINT_URL + access_token)
request.get_method = lambda: 'POST'
result = opener.open(request).read()
print(str(result))
If I print credentials.valid it returns False, so I think there is an issue with initializing the credentials object, but I don't know what, since no errors are reported, the fields inside the credentials object all look correct, and I did everything according to the instructions. Also, my service.json is the same one our mobile team successfully uses to get an access token, so I know the JSON file has the correct data.
How do I get an access token for the machine learning service that I can use to call the predict endpoint?
It turns out the easiest way to make a simple query is to use the gcloud command-line tool. I ended up following the instructions here to set up my environment: https://cloud.google.com/sdk/docs/quickstart-debian-ubuntu
Then the instructions here to actually hit the endpoint (with some help from the person who originally set up the model):
https://cloud.google.com/sdk/gcloud/reference/ml-engine/predict
It was far easier than trying to use the Python library, and I highly recommend it to anyone who just needs to hit the predict endpoint.
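For anyone who still wants the raw REST call, the missing step in the question's code is refreshing the credentials: tokens are fetched lazily, so credentials.token stays None (and credentials.valid stays False) until you refresh. The token also belongs in an Authorization header rather than a query parameter. A sketch using the requests library (project and model names are placeholders):

import requests
import google.auth.transport.requests
from google.oauth2 import service_account

SCOPES = ['https://www.googleapis.com/auth/cloud-platform']
credentials = service_account.Credentials.from_service_account_file(
    'service.json', scopes=SCOPES)
credentials.refresh(google.auth.transport.requests.Request())  # populates credentials.token

url = ('https://ml.googleapis.com/v1/projects/my-project/'
       'models/my-model:predict')  # placeholder project and model
response = requests.post(
    url,
    headers={'Authorization': 'Bearer ' + credentials.token},
    json={'instances': [[1.0, 2.0, 3.0]]})  # payload shape depends on your model
print(response.json())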

Generate interactive API docs from Tornado web server code

I have a Tornado web server that exposes some endpoints in its API.
I want to be able to document my handlers (endpoints) in code, including descriptions, parameters, examples, response structure, etc., and afterwards generate interactive documentation that lets users "play" with my API: easily make requests and explore the responses in a sandbox environment.
I know Swagger, and particularly their Swagger UI solution, is one of the best tools for that, but I'm confused about how it works. I understand that I need to feed the Swagger UI engine some .yaml file that defines my API, but how do I generate that from my code?
Many GitHub libraries I found aren't good enough or only support Flask...
Thanks
To my understanding, Swagger UI depends on the Swagger specification, so it boils down to generating that specification in a clean and elegant manner.
Did you get a chance to look at apispec? I find it to be an active project, with a plugin for Tornado.
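For reference, a rough sketch of the apispec route using its Tornado plugin from the apispec-webframeworks package (the handler and metadata here are made up):

from apispec import APISpec
from apispec_webframeworks.tornado import TornadoPlugin
import tornado.web

class ItemHandler(tornado.web.RequestHandler):
    def get(self, itemid):
        """Item view.
        ---
        description: Get an item
        responses:
            200:
                description: the item to be returned
        """
        pass

spec = APISpec(
    title='My API',
    version='1.0.0',
    openapi_version='2.0',
    plugins=[TornadoPlugin()],
)
# Register a Tornado URLSpec; the plugin reads the YAML in the docstring
spec.path(urlspec=(r'/item/(?P<itemid>\d+)', ItemHandler))
print(spec.to_yaml())  # the YAML you feed to Swagger UI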
Here's how we are doing it in our project: we wrote our own module, which we are still actively developing. For more info: https://pypi.org/project/tornado-swirl/
import tornado.ioloop
import tornado.web
import tornado_swirl as swirl

@swirl.restapi(r'/item/(?P<itemid>\d+)')
class ItemHandler(tornado.web.RequestHandler):
    def get(self, itemid):
        """Get Item data.

        Gets Item data from database.

        Path Parameter:
            itemid (int) -- The item id
        """
        pass

@swirl.schema
class User(object):
    """This is the user class

    Your usual long description.

    Properties:
        name (string) -- required. Name of user
        age (int) -- Age of user
    """
    pass

def make_app():
    return swirl.Application(swirl.api_routes())

if __name__ == "__main__":
    app = make_app()
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()