How to specify the model version label in a REST API request? - tensorflow-serving

As described in the documentation, you can use the version_labels field to assign a label to a model version, in order to handle canary deployments.
https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/serving_config.md#assigning-string-labels-to-model-versions-to-simplify-canary-and-rollback
For example, you can have model 43 labeled as stable and model 44 labeled as canary.
That feature sounds really neat, but I did not find in the docs how to adapt my POST request to specify the label I want to use.
Until now, I was using something of the sort:
curl -d '{"instances": <<my input data>>}' -X POST http://localhost:8501/v1/models/<<my model name>>:predict
Any ideas?

Update:
Based on comments on this GitHub issue, @misterpeddy states that, as of August 14th, 2019:
Re: not being able to access the version using labels via HTTP - this is something that's not possible today (AFAIR) - only through the grpc interface can you declare labels :(
To the best of my knowledge, this feature is yet to be implemented.
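In the meantime, labels do work over gRPC. Below is a minimal sketch, assuming the tensorflow-serving-api pip package, a server listening for gRPC on port 8500, and a model named my_model with a stable label already assigned (all of these names are placeholders):

# Hedged sketch: querying a model version by label over gRPC.
# "my_model", "stable", the input name and the port are placeholders.
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel("localhost:8500")  # gRPC port, not the REST port 8501
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "my_model"
request.model_spec.version_label = "stable"  # instead of a numeric version
request.inputs["input"].CopyFrom(tf.make_tensor_proto([[1.0, 2.0]]))

response = stub.Predict(request, timeout=10.0)
print(response.outputs)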
Original Answer:
It looks like the current implementation of the HTTP API Handler expects the version to be numeric.
You can see the regular expression that attempts to parse the URL here.
prediction_api_regex_(
    R"((?i)/v1/models/([^/:]+)(?:/versions/(\d+))?:(classify|regress|predict))")
The \d defines an expectation for a numeric version indicator rather than a text label.
I've opened a corresponding TensorFlow Serving issue here.

The REST API for TensorFlow Serving is defined here: https://www.tensorflow.org/tfx/serving/api_rest#url_4
For the predict method the URL is:
http://host:port/v1/models/${MODEL_NAME}[/versions/${MODEL_VERSION}|/labels/${LABEL}]:predict
where ${LABEL} would be stable or canary. Note that /versions/ expects a numeric version; the /labels/ path is documented for newer releases, while older builds accept only numeric versions over HTTP (see the regex above).
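If your TensorFlow Serving release does expose the /labels/ path, a hedged sketch of the call from Python (model name, label and input data are placeholders) would be:

# Hedged sketch: calling the REST API with a version label rather than a number.
import requests

url = "http://localhost:8501/v1/models/my_model/labels/stable:predict"
payload = {"instances": [[1.0, 2.0]]}  # placeholder input data

response = requests.post(url, json=payload)
response.raise_for_status()
print(response.json())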

Related

Graph Store protocol support in GraphDB

I'm having trouble using the Graph Store protocol, as documented in GraphDB's help section (the REST API docs). Specifically, I have two issues:
The Graph Store protocol is supposed to support PUT requests (see https://rdf4j.org/documentation/reference/rest-api/), but the GraphDB REST API documentation only indicates GET, DELETE and POST operations (when listing all operations under the "graph-store" section of the docs).
The notion of a "directly referenced graph" does not seem to work; I'm not sure if I'm doing something wrong. What I tried:
Step 1. I created a repository myrepo and included a named graph with the IRI http://example.org/graph1
Step 2. I tried to access the graph by including various forms of its IRI in the URL. None of the following works:
http://localhost:7200/repositories/myrepo/rdf-graphs/graph1
http://localhost:7200/repositories/myrepo/rdf-graphs/http://example.org/graph1
http://localhost:7200/repositories/myrepo/rdf-graphs/http%3A%2F%2Fexample.org%2Fgraph1
Also, the "Try it out!" button provided in the REST API docs under each operation reports Bad Request if I try to fill in the boxes (repository=myrepo, graph=graph1).
Any ideas how this feature can actually be used?
Is there a specific way of writing the "directly referenced named graph" in the request URL? (Perhaps GraphDB generates some resolvable identifiers for each named graph? What would they look like?)
I confirm your observations and have posted a bug, GDB-5486.
Instead of PUT you could use DELETE then POST.
For the time being, use "indirectly referenced" graphs.
For the record, "indirectly referenced graph" works, and returns various formats, e.g.:
> curl -HAccept:text/turtle 'http://localhost:7200/repositories/myrepo/rdf-graphs/service?graph=http%3A%2F%2Fexample.org%2Fgraph1'
<http://example.org/s> <http://example.org/p> <http://example.org/o> .
> curl -HAccept:application/trig 'http://localhost:7200/repositories/myrepo/rdf-graphs/service?graph=http%3A%2F%2Fexample.org%2Fgraph1'
<http://example.org/graph1> {
<http://example.org/s> <http://example.org/p> <http://example.org/o> .
}
> curl -HAccept:text/nquads 'http://localhost:7200/repositories/myrepo/rdf-graphs/service?graph=http%3A%2F%2Fexample.org%2Fgraph1'
<http://example.org/s> <http://example.org/p> <http://example.org/o> <http://example.org/graph1> .
> curl -HAccept:application/ld+json 'http://localhost:7200/repositories/myrepo/rdf-graphs/service?graph=http%3A%2F%2Fexample.org%2Fgraph1'
[ {
  "@graph" : [ {
    "@id" : "http://example.org/s",
    "http://example.org/p" : [ {
      "@id" : "http://example.org/o"
    } ]
  } ],
  "@id" : "http://example.org/graph1"
} ]
The SPARQL 1.1 Graph Store HTTP protocol is often misunderstood, particularly the notion of a "directly referenced graph". When you call the protocol with a URL like http://localhost:7200/repositories/myrepo/rdf-graphs/graph1, you literally address a named graph identified by that whole URL, i.e. your named graph would be "http://localhost:7200/repositories/myrepo/rdf-graphs/graph1" and not just "graph1". Consequently, you can't use a URL like "http://localhost:7200/repositories/myrepo/rdf-graphs/http://example.org/graph1" and expect the protocol to interpret it as addressing the named graph "http://example.org/graph1".
The protocol also supports "indirectly referenced graphs", which is the only way to use a graph URI that isn't derived from the URL used to call the protocol. Please see https://www.w3.org/TR/sparql11-http-rdf-update/#direct-graph-identification for a more detailed explanation.
Because of the above confusion, I recommend avoiding the Graph Store protocol entirely and instead using the SPARQL 1.1 Protocol, which can do everything the Graph Store protocol can, except for the convoluted notion of directly referenced graphs. Admittedly, the REST API doc "Try it out" feature is broken for some of the Graph Store protocol endpoints.
E.g. to fetch all statements in the named graph http://example.org/graph1, you could do this with curl:
curl -H 'Accept: text/turtle' 'http://localhost:7200/repositories/myrepo/statements?context=%3Chttp%3A%2F%2Fexample.org%2Fgraph1%3E'
To add data to a named graph, simply send the data using POST; to replace the data, use PUT; and to delete the data, issue a DELETE request.
This is available in the REST API doc section of the GraphDB Workbench, under "repositories". Note that in the SPARQL 1.1 Protocol, URIs must be enclosed in < >, unlike in the SPARQL 1.1 Graph Store protocol.
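For completeness, here is a hedged Python sketch of the same SPARQL 1.1 Protocol usage (repository name, graph IRI and data are placeholders, and a local GraphDB on port 7200 is assumed):

# Hedged sketch: named-graph reads and writes via the statements endpoint.
import requests

base = "http://localhost:7200/repositories/myrepo/statements"
graph = "<http://example.org/graph1>"  # note the < > around the IRI

# Fetch all statements in the named graph as Turtle.
r = requests.get(base, params={"context": graph}, headers={"Accept": "text/turtle"})
print(r.text)

# POST appends data to the graph; PUT would replace it; DELETE removes it.
data = "<http://example.org/s> <http://example.org/p> <http://example.org/o> ."
requests.post(base, params={"context": graph}, data=data,
              headers={"Content-Type": "text/turtle"}).raise_for_status()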

recommended way of profiling distributed tensorflow

Currently, I am using the TensorFlow Estimator API to train my model. I am using distributed training with roughly 20-50 workers and 5-30 parameter servers, depending on the training data size. Since I do not have access to the session, I cannot use RunMetadata with a full trace to look at the Chrome trace. I see there are two other approaches:
1) tf.profiler.profile
2) tf.train.ProfilerHook
I am specifically using
tf.estimator.train_and_evaluate(estimator, train_spec, test_spec)
where my estimator is a prebuilt estimator.
Can someone give me some guidance on the recommended way to profile an Estimator (concrete code samples and pointers would be really helpful, since I am very new to TensorFlow)? Do the two approaches produce different information, or do they serve the same purpose? Is one recommended over the other?
There are two things you can try:
ProfileContext
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/profiler/profile_context.py
Example usage:
with tf.contrib.tfprof.ProfileContext('/tmp/train_dir') as pctx:
    train_loop()
ProfilerService
https://www.tensorflow.org/tensorboard/r2/tensorboard_profiling_keras
You can start a ProfilerServer via tf.python.eager.profiler.start_profiler_server(port) on all workers and parameter servers, and then use TensorBoard to capture a profile.
Note that this is a very new feature; you may want to use tf-nightly.
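A hedged sketch of that setup, with the port as a placeholder (the module path follows the answer above and may move between releases):

# Hedged sketch: start a profiler server on every worker and parameter server
# so a profile can be captured later from TensorBoard.
from tensorflow.python.eager import profiler

profiler.start_profiler_server(6009)  # placeholder port

# ...then run training as usual; TensorBoard's Profile tab can connect to
# grpc://<host>:6009 while training is in progress and capture a trace.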
TensorFlow has recently added a way to sample multiple workers.
Please have a look at the API:
https://www.tensorflow.org/api_docs/python/tf/profiler/experimental/client/trace?version=nightly
The parameter of the above API that is important in this context is:
service_addr: A comma-delimited string of gRPC addresses of the workers to profile, e.g.
service_addr='grpc://localhost:6009'
service_addr='grpc://10.0.0.2:8466,grpc://10.0.0.3:8466'
service_addr='grpc://localhost:12345,grpc://localhost:23456'
Also, please look at the API,
https://www.tensorflow.org/api_docs/python/tf/profiler/experimental/ProfilerOptions?version=nightly
The parameter of the above API that is important in this context is:
delay_ms: Requests that all hosts start profiling at a timestamp that is delay_ms away from the current time. delay_ms is in milliseconds. If zero, each host will start profiling immediately upon receiving the request. The default value is None, which lets the profiler guess the best value.
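Putting the two APIs together, a hedged sketch (worker addresses, log directory and durations are placeholders):

# Hedged sketch: capture a profile from several workers at once.
import tensorflow as tf

options = tf.profiler.experimental.ProfilerOptions(
    delay_ms=10000,  # ask all hosts to start tracing 10 s from now
)

tf.profiler.experimental.client.trace(
    service_addr="grpc://10.0.0.2:8466,grpc://10.0.0.3:8466",
    logdir="/tmp/profile-logs",  # placeholder; must be visible to TensorBoard
    duration_ms=2000,
    options=options,
)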

zabbix 1.8 api get event information

How can I get the event information shown in the picture from the API?
I have a connection to the API, but I couldn't find the method to get these event details in the API documentation.
Thank you.
You need the event.get method. See this page in the Zabbix manual: https://www.zabbix.com/documentation/3.0/manual/api/reference/event/get
Specifically, you probably need the second example, "Retrieving events by time period".
If you want to get trigger names, use the selectRelatedObject flag. As the default source is triggers, you should not get discovery, internal and auto-registration events.
If you really use the old 1.8 version, the flag for trigger names is select_triggers - see https://www.zabbix.com/documentation/1.8/api/event/get.
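A hedged sketch of such a call over the JSON-RPC endpoint (URL, auth token and timestamps are placeholders; on 1.8 the select_triggers flag would replace selectRelatedObject):

# Hedged sketch: event.get over JSON-RPC, roughly following the
# "Retrieving events by time period" example in the Zabbix manual.
import requests

payload = {
    "jsonrpc": "2.0",
    "method": "event.get",
    "params": {
        "output": "extend",
        "time_from": "1349797228",  # placeholder Unix timestamps
        "time_till": "1350661228",
        "selectRelatedObject": "extend",  # trigger details; select_triggers on 1.8
        "sortfield": ["clock", "eventid"],
        "sortorder": "DESC",
    },
    "auth": "0424bd59b807674191e7d77572075f33",  # placeholder session token
    "id": 1,
}

r = requests.post("http://zabbix.example.com/api_jsonrpc.php", json=payload)
print(r.json())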

quickblox special update operators for custom objects rest api

Has anyone tried to use the special update operators for custom objects in QuickBlox? In the docs at
Custom object documentation
there are a few examples provided, but none of them work when typed in the terminal. For example, when I try to remove an element from the answers array in my InvitationStore custom object with id 56165731a28f9af6df000236 and session token d536d482c4637beb8fef79eeb8d45c0473dae9aa, I type the following command in the terminal:
curl -X PUT -H "QB-Token: d536d482c4637beb8fef79eeb8d45c0473dae9aa" -d "pop[answers]=1" https://api.quickblox.com/data/InvitationStore/56165731a28f9af6df000236.json
but it has no effect: in the response I receive a custom object with the same structure as before the operation. The id and session token are correct, because standard operations work - for example, setting the answers array to the value [@"YES", @"NO"] does change the structure. It only has no effect for the special update operators.
I am on the starter plan, so my question is: is this functionality unavailable on my current plan (I can't find this in the docs), is something wrong with my code, or is it a QuickBlox error?
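For reference, the same request expressed with Python's requests (this only restates the curl command above; the token, object id and operator come from the question):

# Hedged sketch: the PUT from the question, re-expressed with requests.
import requests

url = "https://api.quickblox.com/data/InvitationStore/56165731a28f9af6df000236.json"
headers = {"QB-Token": "d536d482c4637beb8fef79eeb8d45c0473dae9aa"}
data = {"pop[answers]": 1}  # special update operator from the QuickBlox docs

r = requests.put(url, headers=headers, data=data)
print(r.status_code, r.json())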

How do I use the LookbackAPI for burnup charts?

I need a good example of using the LookbackAPI to get the data for a burn-up chart. I see some limited questions and responses on the API, but no examples of how I would use it to do so. I need to get the current scope in story points and the story points completed.
Sorry for the scarcity of available examples. More and better examples will be coming as the LBAPI beta matures. I'd definitely recommend that you become familiar with the Lookback API (LBAPI) Documentation, as there are good examples there for formulating queries.
For a burnup, let's say you want to get the state Snapshots for an Iteration going from 15-Jan-2013 through 30-Jan-2013, and that the Iteration applies to a Project hierarchy that is four deep. The following LBAPI query would obtain the PlanEstimate, ToDo, and ScheduleState for Stories scheduled into that Iteration:
{
  find: {
    _TypeHierarchy: "HierarchicalRequirement",
    Children: null,
    _ValidFrom: {
      $gte: "2013-01-15TZ",
      $lt: "2013-01-30TZ"
    },
    Iteration: {
      $in: [
        12345678910,
        12345678911,
        12345678912,
        12345678913
      ]
    }
  },
  fields: [
    "PlanEstimate",
    "ToDo",
    "ScheduleState"
  ]
}
where:
$in: [
  12345678910,
  12345678911,
  12345678912,
  12345678913
]
are the ObjectIDs of the Iteration called "Iteration 1". It's probably easiest to get these ObjectIDs from a standard WSAPI query on Iterations: (Name = "Iteration 1"). For Iterations copied into a four-deep project hierarchy, we would see the four Iteration OIDs similar to the above.
For charting, the toughest part right now is finding an easy way to deal with the time-series data. The most robust way to query and process LBAPI data currently is to work directly against the REST endpoint and process the returned JSON results in your own code.
For JavaScript apps, the preferred toolkit for processing the data and turning it into a chart is AppSDK2, specifically the SnapshotStore.
The Lumenize JavaScript library is separate from the LBAPI, but was developed by Rally's director of analytics and is bundled in the SDK. You can find some examples of using the LBAPI and Lumenize to produce charts as part of some Rally-internal and Rally-customer Hackathon projects here:
https://github.com/RallyHackathon
Please be cautious with these examples for a couple of reasons:
Several aspects of the Lumenize namespace will be changed/renamed for clarity.
There's a bug in the current version of Lumenize where its timeSeriesCalculator does not correctly account for stories that are deleted or reparented.
Hopefully an updated version of AppSDK2 will be bundled and released soon to consolidate the Lumenize namespace and resolve the bug, so that there's better glue between AppSDK2 and the LBAPI for JavaScript app development.
Unfortunately, the .NET, Java and Python toolkits have not yet been updated to support the Lookback API. As a result, you'll have to do an HTTP POST to the Lookback API's REST endpoint directly, with a body similar to the one Mark W listed above and Content-Type 'application/json'.
I'd recommend using the Chrome extension 'XHR Poster' to experiment with what you're sending from a browser:
https://chrome.google.com/webstore/detail/xhr-poster/akdbimilobjkfhgamdhneckaifceicen
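If a browser extension is not an option, here is a hedged Python sketch of that POST (hostname, workspace OID and credentials are placeholders; the endpoint shape follows the LBAPI documentation of the time):

# Hedged sketch: POST the snapshot query from the answer above to the LBAPI.
import requests

url = ("https://rally1.rallydev.com/analytics/v2.0/service/rally/"
       "workspace/12345678901/artifact/snapshot/query.js")  # placeholder workspace OID

query = {
    "find": {
        "_TypeHierarchy": "HierarchicalRequirement",
        "Children": None,
        "_ValidFrom": {"$gte": "2013-01-15TZ", "$lt": "2013-01-30TZ"},
        "Iteration": {"$in": [12345678910, 12345678911, 12345678912, 12345678913]},
    },
    "fields": ["PlanEstimate", "ToDo", "ScheduleState"],
}

r = requests.post(url, json=query,
                  auth=("user@example.com", "password"),  # placeholder credentials
                  headers={"Content-Type": "application/json"})
print(r.json())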