QuickBlox special update operators for custom objects REST API

Has anyone tried to use the special update operators for custom objects in QuickBlox? In the docs at
Custom object documentation
a few examples are provided, but none of them work when typed in the terminal. For example, when I try to remove an element from the answers array in my InvitationStore custom object with id 56165731a28f9af6df000236 and session token d536d482c4637beb8fef79eeb8d45c0473dae9aa, I type the following command in the terminal:
curl -X PUT -H "QB-Token: d536d482c4637beb8fef79eeb8d45c0473dae9aa" \
  -d "pop[answers]=1" https://api.quickblox.com/data/InvitationStore/56165731a28f9af6df000236.json
but it has no effect: in the response I receive the custom object with the same structure as before the operation. The id and session token are correct, because standard operations work; for example, setting the answers array to the value ["YES", "NO"] does change the structure. It is only the special update operators that have no effect.
I am on the starter plan, so my question is: is this functionality not available on my current plan (I can't find that in the docs), is something wrong with my command, or is it a QuickBlox error?
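One thing worth ruling out is how the brackets in the operator name are form-encoded. As a minimal debugging sketch (whether QuickBlox expects percent-encoded brackets is an assumption, not a confirmed fix), here is the same request with the brackets escaped:
curl -X PUT -H "QB-Token: d536d482c4637beb8fef79eeb8d45c0473dae9aa" \
  -d "pop%5Banswers%5D=1" https://api.quickblox.com/data/InvitationStore/56165731a28f9af6df000236.json
Also note that pop[answers]=1 removes the last element and pop[answers]=-1 the first, following MongoDB's $pop semantics, which these operators appear to mirror.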

Related

Zabbix Media Type Parameters not replaced correctly

I'm trying to call an API from Zabbix when there is a security warning, but when I receive the warning, some elements are not replaced.
The HOSTID parameter does seem to exist in the Zabbix documentation, yet in the object I receive, the macro comes through unsubstituted.
{HOST.HOSTID} is a non-existent macro; there isn't a direct mapping between the API fields and the macros.
You should use {HOST.ID} instead.
You can find the complete list of supported macros by location here.
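For illustration, a mapping in the media type's Parameters section might look like this (the parameter name hostid is hypothetical; the macro on the right is what Zabbix substitutes when the alert fires):
hostid = {HOST.ID}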

Graph Store protocol support in GraphDB

I'm having trouble using the Graph Store protocol as documented in GraphDB's help section (the REST API docs). Specifically, I have two issues:
The Graph Store protocol is supposed to support PUT requests (see https://rdf4j.org/documentation/reference/rest-api/), but the GraphDB REST API documentation only lists GET, DELETE and POST operations (under the "graph-store" section of the docs).
The notion of a "directly referenced graph" does not seem to be working; I'm not sure if I'm doing something wrong. What I tried:
Step 1. I created a repository myrepo and included a named graph with the IRI http://example.org/graph1
Step 2. I tried to access the graph by including various forms of its IRI in the URL. None of the following works:
http://localhost:7200/repositories/myrepo/rdf-graphs/graph1
http://localhost:7200/repositories/myrepo/rdf-graphs/http://example.org/graph1
http://localhost:7200/repositories/myrepo/rdf-graphs/http%3A%2F%2Fexample.org%2Fgraph1
Also, the "Try it out!" button provided in the REST API docs under each operation reports Bad Request if I try to fill those boxes (repository=myrepo, graph=graph1)
Any ideas how this feature can actually be used?
Is there a specific way of writing the "directly referenced named graph" in the request URL? (perhaps GraphDB generates some resolvable identifiers for each named graph? how would they look like?)
I confirm your observations and posted a bug, GDB-5486.
Instead of PUT you could use DELETE and then POST.
For the time being, use "indirectly referenced" graphs.
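A sketch of that workaround against the indirectly referenced endpoint, emulating the replace semantics of PUT (assuming a Turtle file data.ttl with the new graph contents):
curl -X DELETE 'http://localhost:7200/repositories/myrepo/rdf-graphs/service?graph=http%3A%2F%2Fexample.org%2Fgraph1'
curl -X POST -H 'Content-Type: text/turtle' --data-binary @data.ttl 'http://localhost:7200/repositories/myrepo/rdf-graphs/service?graph=http%3A%2F%2Fexample.org%2Fgraph1'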
For the record, "indirectly referenced graph" works, and returns various formats, eg:
> curl -HAccept:text/turtle 'http://localhost:7200/repositories/myrepo/rdf-graphs/service?graph=http%3A%2F%2Fexample.org%2Fgraph1'
<http://example.org/s> <http://example.org/p> <http://example.org/o> .
> curl -HAccept:application/trig 'http://localhost:7200/repositories/myrepo/rdf-graphs/service?graph=http%3A%2F%2Fexample.org%2Fgraph1'
<http://example.org/graph1> {
    <http://example.org/s> <http://example.org/p> <http://example.org/o> .
}
> curl -HAccept:text/nquads 'http://localhost:7200/repositories/myrepo/rdf-graphs/service?graph=http%3A%2F%2Fexample.org%2Fgraph1'
<http://example.org/s> <http://example.org/p> <http://example.org/o> <http://example.org/graph1> .
> curl -HAccept:application/ld+json 'http://localhost:7200/repositories/myrepo/rdf-graphs/service?graph=http%3A%2F%2Fexample.org%2Fgraph1'
[ {
  "@graph" : [ {
    "@id" : "http://example.org/s",
    "http://example.org/p" : [ {
      "@id" : "http://example.org/o"
    } ]
  } ],
  "@id" : "http://example.org/graph1"
} ]
The SPARQL 1.1 Graph Store HTTP protocol is often misunderstood, particularly the notion of a "directly referenced graph". When you call the protocol with a URL like http://localhost:7200/repositories/myrepo/rdf-graphs/graph1, you literally address a named graph identified by the whole URL, i.e. your named graph would be "http://localhost:7200/repositories/myrepo/rdf-graphs/graph1" and not just "graph1". Consequently, you can't use a URL like "http://localhost:7200/repositories/myrepo/rdf-graphs/http://example.org/graph1" and expect the protocol to interpret it as addressing the named graph "http://example.org/graph1".
The protocol also supports "indirectly referenced graphs", which is the only way to use a graph URI that isn't derived from the URL used to call the protocol. Please see https://www.w3.org/TR/sparql11-http-rdf-update/#direct-graph-identification for a more detailed explanation.
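As a sketch of what direct referencing means in practice (the graph name mygraph and the Turtle file data.ttl are illustrative):
curl -X POST -H 'Content-Type: text/turtle' --data-binary @data.ttl 'http://localhost:7200/repositories/myrepo/rdf-graphs/mygraph'
The data ends up in the named graph <http://localhost:7200/repositories/myrepo/rdf-graphs/mygraph>, i.e. the graph IRI is the full request URL, not just mygraph.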
Because of the above confusion I recommend avoiding the Graph Store protocol entirely and instead using the SPARQL 1.1 Protocol, which can do everything the Graph Store protocol can, except for the convoluted notion of directly referenced graphs. Admittedly, the REST API doc "Try it out" feature is broken for some of the Graph Store protocol endpoints.
E.g. to fetch all statements in the named graph http://example.org/graph1 you could do this with curl:
curl -H 'Accept: text/turtle' 'http://localhost:7200/repositories/myrepo/statements?context=%3Chttp%3A%2F%2Fexample.org%2Fgraph1%3E'
To add data to a named graph, simply send the data using POST; to replace the data, use PUT; and to delete the data, issue a DELETE request. Sketches of all three follow below.
This is available in the REST API doc section of the GraphDB Workbench, under "repositories". Note that in the SPARQL 1.1 Protocol, URIs must be enclosed in < >, unlike in the SPARQL 1.1 Graph Store protocol.
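As a sketch against the same statements endpoint (assuming a Turtle file data.ttl; the context parameter carries the graph URI, angle brackets included, percent-encoded):
curl -X POST -H 'Content-Type: text/turtle' --data-binary @data.ttl 'http://localhost:7200/repositories/myrepo/statements?context=%3Chttp%3A%2F%2Fexample.org%2Fgraph1%3E'
curl -X PUT -H 'Content-Type: text/turtle' --data-binary @data.ttl 'http://localhost:7200/repositories/myrepo/statements?context=%3Chttp%3A%2F%2Fexample.org%2Fgraph1%3E'
curl -X DELETE 'http://localhost:7200/repositories/myrepo/statements?context=%3Chttp%3A%2F%2Fexample.org%2Fgraph1%3E'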

How to specify the model version label in a REST API request?

As described in the documentation, using the version_labels field you can assign a label to a model version, in order to handle canary deployments.
https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/serving_config.md#assigning-string-labels-to-model-versions-to-simplify-canary-and-rollback
For example, you can have model 43 labeled as stable and model 44 labeled as canary.
That feature sounds really neat, but I did not find in the docs how to adapt my POST request to specify the label I want to use.
Until now, I was using something of the sort:
curl -d '{"instances": <<my input data>>}' -X POST http://localhost:8501/v1/models/<<my model name>>:predict
Any idea?
Update:
Based on comments on this GitHub issue, @misterpeddy states that, as of August 14th, 2019:
Re: not being able to access the version using labels via HTTP - this is something that's not possible today (AFAIR) - only through the grpc interface can you declare labels :(
To the best of my knowledge, this feature is yet to be implemented.
Original Answer:
It looks like the current implementation of the HTTP API Handler expects the version to be numeric.
You can see the regular expression that attempts to parse the URL here:
prediction_api_regex_(
    R"((?i)/v1/models/([^/:]+)(?:/versions/(\d+))?:(classify|regress|predict))")
The \d+ in the version group only matches a numeric version indicator, not a text label.
I've opened a corresponding TensorFlow Serving issue here.
The REST API for TensorFlow Serving is defined here: https://www.tensorflow.org/tfx/serving/api_rest#url_4
For the predict method the URL scheme is:
http://host:port/v1/models/${MODEL_NAME}[/versions/${VERSION}|/labels/${LABEL}]:predict
where ${LABEL} would be stable or canary. Note that the /versions/ path only accepts numeric versions; labels go under /labels/.
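A sketch of such a request with curl (assuming a model named my_model that has the label canary assigned, and a TensorFlow Serving build recent enough to accept labels over HTTP):
curl -d '{"instances": [1.0, 2.0]}' -X POST http://localhost:8501/v1/models/my_model/labels/canary:predict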

AWS Lambda function trigger on object creation in S3 does not work

I am doing an upload like this:
curl -v -X PUT -T "test.xml" -H "Host: my-bucket-upload.s3-eu-central-1.amazonaws.com" -H "Content-Type: application/xml" https://my-bucket-upload.s3-eu-central-1.amazonaws.com/test.xml
The file gets uploaded and I can see it in my S3 bucket.
The trick is, when I create a Lambda function to be triggered on object creation, it never gets invoked. If I upload the file using the S3 web interface, it works fine. What am I doing wrong? Is there any clear recipe for how to do it?
Amazon S3 APIs such as PUT, POST, and COPY can create an object. Using these event types, you can enable notification when an object is created using a specific API, or you can use the s3:ObjectCreated:* event type to request notification regardless of the API that was used to create an object.
Check the notification event setup on the bucket:
Go to the bucket in the AWS Management Console
Click the Properties tab of the bucket
Click Events to check the notification event setup
Case 1: s3:ObjectCreated:* - Lambda should be invoked regardless of PUT, POST or COPY.
Other case: if the event is set up for a specific HTTP method, use that method in your curl command to create the object in the S3 bucket; that way it should trigger the Lambda function. A CLI sketch follows below.
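The same setup can be inspected and applied from the CLI; a sketch (the bucket name matches the question, while the account id and function ARN are placeholders):
aws s3api get-bucket-notification-configuration --bucket my-bucket-upload
aws s3api put-bucket-notification-configuration --bucket my-bucket-upload --notification-configuration '{"LambdaFunctionConfigurations": [{"LambdaFunctionArn": "arn:aws:lambda:eu-central-1:123456789012:function:my-function", "Events": ["s3:ObjectCreated:*"]}]}'
If the trigger is created outside the console, the function also needs a resource-based permission allowing S3 to invoke it (aws lambda add-permission).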
Check the prefix in bucket/properties.
If there is a value like foo/, that means that only objects inside the foo folder will trigger the event to Lambda.
Make sure the prefix you're adding contains only the safe special characters mentioned here. As per the AWS documentation, some characters require special handling. Please be mindful of that.
Also, I noticed that modifying the trigger on the Lambda page doesn't get applied until you delete the trigger and create a new one (even if it is the same). Learned that the hard way; AWS does behave weirdly sometimes.
I faced similar issues and figured out that the folder names should not have spaces.

How to define API keys in RESTful Meteor?

I am new to REST APIs in Meteor. I am trying to run the example explained in The Meteor Chef, but I am getting this error:
ReferenceError: APIKeys is not defined
at Object.API.authentication (api/config/api.js:4:19)
at Object.API.connection (api/config/api.js:16:34)
at Object.API.handleRequest (api/config/api.js:28:26)
at [object Object].Router.route.where (api/pizza.js:9:9)
at boundNext (packages/iron_middleware-stack/lib/middleware_stack.js:251:1)
at runWithEnvironment (packages/meteor/dynamics_nodejs.js:110:1)
at packages/meteor/dynamics_nodejs.js:123:1
at [object Object].urlencodedParser (/Users/mac/.meteor/packages/iron_router/.1.0.12.13720an++os+web.browser+web.cordova/npm/node_modules/body-parser/lib/types/urlencoded.js:84:40)
at packages/iron_router/lib/router.js:277:1
at [object Object]._.extend.withValue (packages/meteor/dynamics_nodejs.js:56:1)
The code is the same as explained in the example.
It is because you didn't create the APIKeys Mongo collection, as shown here: https://github.com/themeteorchef/writing-an-api/blob/master/code/collections/api-keys.js
Add this file (a one-line declaration of the APIKeys collection) to your project and then it'll work.
Here is the explanation from the post you linked:
Next, we try to insert a new key for our user into the APIKeys collection. Wait a minute! Where did this come from?! This collection was setup beforehand, but let’s talk about why we have a separate collection to begin with. The reason we want to separate our API key storage from the more predictable location of our user’s profile (the writable portion of a user’s record in the Meteor.users() collection) is that by default, the profile object is writable.
So you just missed the part of the tutorial where the APIKeys collection is created.