I'm trying to configure a music service on Sonos. I have been following the Sonos guide for programmed radio:
https://developer.sonos.com/build/content-service-add-features/add-programmed-radio/
But I'm not sure what the SMAPI server should return to make the player use the endpoints declared in the manifest.
That would be step three in this graphic.
https://developer-assets.ws.sonos.com/doc-assets/prog_radio_seq10_review.png
I've tried adding radio as an itemType, and using some of the existing types, but so far I haven't gotten the player to make any requests to the cloud queue server.
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Header/>
  <SOAP-ENV:Body>
    <ns2:getMetadataResponse xmlns:ns2="http://www.sonos.com/Services/1.1">
      <ns2:getMetadataResult>
        <ns2:index>0</ns2:index>
        <ns2:count>2</ns2:count>
        <ns2:total>2</ns2:total>
        <ns2:mediaCollection>
          <ns2:id>smapicontainer:31</ns2:id>
          <ns2:itemType>radio</ns2:itemType>
          <ns2:title>radio collection</ns2:title>
        </ns2:mediaCollection>
        <ns2:mediaMetadata>
          <ns2:id>smapicontainer:32</ns2:id>
          <ns2:itemType>radio</ns2:itemType>
          <ns2:title>radio metadata</ns2:title>
        </ns2:mediaMetadata>
      </ns2:getMetadataResult>
    </ns2:getMetadataResponse>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
I'm expecting to see some calls to the endpoint that is declared for the radio type in the manifest. The manifest itself seems to be configured correctly, as it does get calls to /radio/timePlayed when playing the sample tracks.
{
  "schemaVersion": "1.0",
  "endpoints": [
    {
      "type": "radio",
      "uri": "https://13467fb8.ngrok.io/flight/radio"
    },
    {
      "type": "reporting",
      "uri": "https://13467fb8.ngrok.io/flight/radio"
    }
  ],
  "presentationMap": {
    "uri": "https://13467fb8.ngrok.io/flight/assets/presentationmap.xml",
    "version": 2
  },
  "strings": {
    "uri": "https://13467fb8.ngrok.io/flight/assets/strings.xml",
    "version": 2
  }
}
I updated the SMAPI response to return mediaMetadata with itemType program. Something still seems to be missing: the manifest "radio" endpoint does prevent calls to the SMAPI server, but the player still doesn't make any requests to the endpoint associated with radio, and I get "unable to play selected item" alerts when the items are selected.
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Header/>
  <SOAP-ENV:Body>
    <ns2:getMetadataResponse xmlns:ns2="http://www.sonos.com/Services/1.1">
      <ns2:getMetadataResult>
        <ns2:index>0</ns2:index>
        <ns2:count>3</ns2:count>
        <ns2:total>3</ns2:total>
        <ns2:mediaMetadata>
          <ns2:id>prad:32</ns2:id>
          <ns2:itemType>program</ns2:itemType>
          <ns2:title>radio channel a</ns2:title>
        </ns2:mediaMetadata>
        <ns2:mediaMetadata>
          <ns2:id>smapicontainer:33</ns2:id>
          <ns2:itemType>program</ns2:itemType>
          <ns2:title>radio channel b</ns2:title>
        </ns2:mediaMetadata>
        <ns2:mediaMetadata>
          <ns2:id>radio:34</ns2:id>
          <ns2:itemType>program</ns2:itemType>
          <ns2:title>radio channel c</ns2:title>
        </ns2:mediaMetadata>
      </ns2:getMetadataResult>
    </ns2:getMetadataResponse>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
Below is the only traffic I can generate to the endpoints in the manifest file: nothing for type radio, but I do get some traffic for reporting if I play one of the sample tracks included in the SMAPI sample server.
[image of traffic to the endpoint]
You need to add the version number somewhere in the radio and reporting endpoints. See Sonos Music API service reporting and manifest file for details.
You should be returning an array of mediaMetadata objects for the getMetadataResponse, with itemType of program. See https://musicpartners.sonos.com/node/286
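Combining both points, a sketch of a corrected manifest is below. I'm assuming the per-endpoint "version" field follows the same pattern as presentationMap and strings, so double-check the exact field name against the manifest documentation:

```json
{
  "schemaVersion": "1.0",
  "endpoints": [
    {
      "type": "radio",
      "uri": "https://13467fb8.ngrok.io/flight/radio",
      "version": 1
    },
    {
      "type": "reporting",
      "uri": "https://13467fb8.ngrok.io/flight/radio",
      "version": 1
    }
  ]
}
```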
I want to control the soundbar over LAN.
I can turn the soundbar on and off with Postman and the published Sony APIs.
When I want to change the input, error 12 appears in Postman.
I don't understand why, because other APIs like getInformation are working fine (network, URL, port, connection, and library should also be OK).
From the getInformation API, I could read the names of the inputs.
But when I paste them into setActiveTerminal, error 12 occurs.
Does anyone know where the problem is?
Here is the code I used:
http://169.254.75.11:10000/sony/avContent
{
  "method": "setActiveTerminal",
  "id": 55,
  "params": [
    {
      "active": "active",
      "uri": "extInput:hdmi?port=1"
    }
  ],
  "version": "1.0"
}
setActiveTerminal is for activating or deactivating output ports (extOutput), or "zones" as they are called on the STR-DN1080. Since the ZF9 doesn't have multi-zone capabilities, the use of this method is very limited.
I'm guessing you want to set the input port, and that is done via setPlayContent (without the output parameter in the JSON for the ZF9); see
Sony Audio Control API - Can't Change Input on AV Receiver for more info.
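A sketch of the corresponding setPlayContent request for the ZF9, reusing the uri from the question and omitting the output parameter. The version value here is an assumption; check what your device actually supports (e.g. via getVersions):

```json
{
  "method": "setPlayContent",
  "id": 56,
  "params": [
    {
      "uri": "extInput:hdmi?port=1"
    }
  ],
  "version": "1.2"
}
```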
I'm using Wit.ai for a bot and I think it's amazing. However, I must provide the customer with screens in my web app to train and manage the app, and here I found a big problem (or maybe I'm just lost). The documentation of the REST API is not enough to design a client that acts like the Wit console (not even close). It's like a tutorial on which endpoints you can hit and an overview of the parameters, but there is no clean explanation of the structure of the response.
For example, there is no endpoint to get the insights edge. Also, and most importantly, there is no clear documentation about the response structure when hitting the message endpoints (i.e. the structure of the returned entities: are they prebuilt or not, and if they are, is the value a string, an object, or an array, and what might the object contain [e.g. datetime]). There's also the problem of the deprecated guide versus the new guide (the new guide should be done and complete by now). I'm building parts of the code based on my own testing. Sometimes when I test something new (like adding a range in the datetime entity instead of just a value), I get an error when I try to set the values for the user, since I haven't parsed the response right, and the new info I get sometimes makes me modify the DB structure at my end.
So, bottom line: is there a complete reference from which I can implement a full client in my web app (my web app is in Java, by the way, and I couldn't find a client library that handles the latest version of the API)? Again, the tool is AWESOME, but the documentation is not enough, or maybe I'm missing something.
The documentation is not enough, of course, but I think it's pretty straightforward. From what I read, there is a response structure under "Return the meaning of a sentence".
The response is in JSON format, so you need to decode the response first.
Example Request:
$ curl -XGET 'https://api.wit.ai/message?v=20170307&q=how%20many%20people%20between%20Tuesday%20and%20Friday' \
-H "Authorization: Bearer $TOKEN"
Example Response:
{
  "msg_id": "387b8515-0c1d-42a9-aa80-e68b66b66c27",
  "_text": "how many people between Tuesday and Friday",
  "entities": {
    "metric": [
      {
        "metadata": "{'code': 324}",
        "value": "metric_visitor",
        "confidence": 0.9231
      }
    ],
    "datetime": [
      {
        "value": {
          "from": "2014-07-01T00:00:00.000-07:00",
          "to": "2014-07-02T00:00:00.000-07:00"
        },
        "confidence": 1
      },
      {
        "value": {
          "from": "2014-07-04T00:00:00.000-07:00",
          "to": "2014-07-05T00:00:00.000-07:00"
        },
        "confidence": 1
      }
    ]
  }
}
You can read more about the response structure under "Return the meaning of a sentence".
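As a minimal sketch of decoding such a response, here is plain JSON parsing of the example above. The string-versus-range check is the part that bites in practice, since a datetime value can be either shape:

```javascript
// Decode a /message response and normalize the datetime entity values.
// The payload below is the example response from above, trimmed down.
const body = JSON.stringify({
  "_text": "how many people between Tuesday and Friday",
  "entities": {
    "datetime": [
      {
        "value": {
          "from": "2014-07-01T00:00:00.000-07:00",
          "to": "2014-07-02T00:00:00.000-07:00"
        },
        "confidence": 1
      }
    ]
  }
});

const response = JSON.parse(body);

// A datetime value can be a single string or a {from, to} range object,
// so normalize both shapes into ranges before using them.
const ranges = (response.entities.datetime || []).map(function (e) {
  return typeof e.value === "string" ? { from: e.value, to: e.value } : e.value;
});
```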
I'm taking my first experimental steps with Google's pre-built templates in Google Cloud Dataflow (Cloud Pub/Sub to BigQuery).
As a milestone toward my final goal (having physical gadgets report a data stream to Google Cloud Pub/Sub), my wish is to achieve something like this:
POSTMAN (make an authenticated POST request with a JSON message to a Google Cloud Platform, GCP, endpoint) --> GCP Pub/Sub --> GCP Dataflow --> GCP BigQuery.
Right now I am following the tutorial found in Executing Templates, https://cloud.google.com/dataflow/docs/templates/executing-templates, "Example 2: Custom template, streaming job". This section states:
...This example projects.templates.launch request creates a streaming job
from a template that reads from a Pub/Sub topic and writes to a
BigQuery table. The BigQuery table must already exist with the
appropriate schema. If successful, the response body contains an
instance of LaunchTemplateResponse. ...
and furthermore how to do the POST:
https://dataflow.googleapis.com/v1b3/projects/[YOUR_PROJECT_ID]/templates:launch?gcsPath=gs://[YOUR_BUCKET_NAME]/templates/TemplateName
{
  "jobName": "[JOB_NAME]",
  "parameters": {
    "topic": "projects/[YOUR_PROJECT_ID]/topics/[YOUR_TOPIC_NAME]",
    "table": "[YOUR_PROJECT_ID]:[YOUR_DATASET].[YOUR_TABLE_NAME]"
  },
  "environment": {
    "tempLocation": "gs://[YOUR_BUCKET_NAME]/temp",
    "zone": "us-central1-f"
  }
}
There are two things that confuse me. Let's, for the sake of a simple example, say that I have multiple vehicles that should continuously report their current status. I have already created my MQTT topic: VEHICLE_STATUS. Each of my vehicles should be able to report its:
Position [String]
Speed [Float]
Time [String]
VehicleID [Integer]
I'm aware of the prototype for a PubsubMessage:
{
  "data": string,
  "attributes": {
    string: string,
    ...
  },
  "messageId": string,
  "publishTime": string,
}
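To make the question concrete, here is my current guess at one status report, with the vehicle fields JSON-encoded into the base64 data field. The encoding choice and attribute name are my own assumptions, not something from the docs:

```javascript
// One hypothetical vehicle status, using the fields listed above.
const status = {
  Position: "57.7089,11.9746", // assumed "lat,lng" string format
  Speed: 12.5,
  Time: "2019-01-01T12:00:00Z",
  VehicleID: 42
};

// A PubsubMessage "data" field must be base64-encoded; attributes are optional.
const message = {
  data: Buffer.from(JSON.stringify(status)).toString("base64"),
  attributes: { source: "vehicle-gateway" } // hypothetical attribute
};

// Round-trip check: a subscriber (e.g. the Dataflow template) decodes data back.
const decoded = JSON.parse(Buffer.from(message.data, "base64").toString());
```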
My questions:
How should my BigQuery table schema look (which columns do I need to create)?
How should the entire corresponding JSON message look? What should my vehicle report to the endpoint each time?
I'm using the DocuSign REST API, and in the create envelope request I am requesting event notifications for "voided"; see below. The callback occurs, but voidedReason is not present in the XML, so to fetch voidedReason I have to make a separate API call to get the status of the envelope, as suggested in: DocuSign - getting void envelope reason.
Is there some reason (no pun intended) that voidedReason is not included in the webhook callback XML for docusignenvelopeinformation.envelopestatus? It seems inconsistent in that declineReason is provided in the recipientstatuses.recipientstatus object. It would be nice to not have to make the additional API call.
eventNotification: {
  url: docusignCallbackUrl,
  loggingEnabled: "true",
  includeDocumentFields: "true",
  requireAcknowledgment: "true",
  envelopeEvents: [
    {envelopeEventStatusCode: "completed"},
    {envelopeEventStatusCode: "declined"},
    {envelopeEventStatusCode: "voided"},
  ],
  recipientEvents: [
    {recipientEventStatusCode: "Completed"},
  ],
}
The DocuSign Connect configuration offers a way to "Include Envelope Voided Reason" in the DocuSign Connect XML payload/notification. This was added in the October 2016 timeframe.
Add:
includeEnvelopeVoidReason: "true"
to your eventNotification.
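Applied to the eventNotification from the question, that looks like this (docusignCallbackUrl is a placeholder here, as in the question):

```javascript
// The question's eventNotification with the void-reason flag added.
const docusignCallbackUrl = "https://example.com/docusign/webhook"; // placeholder

const eventNotification = {
  url: docusignCallbackUrl,
  loggingEnabled: "true",
  includeDocumentFields: "true",
  requireAcknowledgment: "true",
  includeEnvelopeVoidReason: "true", // new: puts voidedReason in the callback XML
  envelopeEvents: [
    { envelopeEventStatusCode: "completed" },
    { envelopeEventStatusCode: "declined" },
    { envelopeEventStatusCode: "voided" }
  ],
  recipientEvents: [
    { recipientEventStatusCode: "Completed" }
  ]
};
```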
I am using Worklight 6.1 and I'm trying to send logs that are created in my client to the server, in order to be able to view the logs in case the application crashes. What I have done (based on this link http://pic.dhe.ibm.com/infocenter/wrklight/v5r0m6/index.jsp?topic=%2Fcom.ibm.worklight.help.doc%2Fdevref%2Fc_using_client_log_capture.html) is:
Set the below in wlInitOptions.js
logger : {
  enabled: true,
  level: 'debug',
  stringify: true,
  pretty: false,
  tag: {
    level: false,
    pkg: true
  },
  whitelist: [],
  blacklist: [],
  nativeOptions: {
    capture: true
  }
},
In the client, I have added the following where I want to send a log:
WL.Logger.error("test");
WL.Logger.send();
I implemented the necessary adapter WLClientLogReceiver-impl.js with the log function, based on the link.
Unfortunately, I can't see the log in messages.log. Does anyone have any ideas?
I have also tried to send the log to the analytics DB based on this link http://www-01.ibm.com/support/knowledgecenter/SSZH4A_6.2.0/com.ibm.worklight.monitor.doc/monitor/c_op_analytics_data_capture.html.
What I did is:
WL.Analytics.log( { "_activity" : "myCustomActivity" }, "My log" );
However, no new entry is added to the app_Activity_Report table. Is there something I am missing?
A couple of things:
Follow Idan's advice in his comments and be sure you're looking at the correct docs. He's right; this feature has changed quite a bit between versions.
You got 90% of the configuration, but you're missing the last little bit. Simply sending logs to your adapter is not enough for them to show up in your messages.log. You need to do one of the following to get them into messages.log:
set audit="true" attribute in the <procedure> tag of the WLClientLogReceiver.xml file, or
log the uploaded data explicitly in your adapter implementation. Beware, however, that the WL.Logger API on the server is subject to the application server level configuration.
Also, WL.Analytics.log data does not go into the reports database. The only public API that populates the database is WL.Client.logActivity. I recommend sticking with the WL.Logger and WL.Analytics APIs.
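If you go the explicit-logging route (the second option above), a sketch of WLClientLogReceiver-impl.js might look like this. The log(deviceInfo, logMessages) signature is my assumption based on the linked documentation's sample, so verify it against your exact Worklight version:

```javascript
// Format the uploaded client log batch into one line per message.
// deviceInfo and logMessages arrive as parsed JSON from WL.Logger.send().
function formatClientLogs(deviceInfo, logMessages) {
  var device = (deviceInfo && deviceInfo.deviceId) || "unknown-device";
  return logMessages.map(function (msg) {
    return "[client " + device + "] " + JSON.stringify(msg);
  });
}

// Adapter procedure invoked when the client uploads captured logs.
function log(deviceInfo, logMessages) {
  formatClientLogs(deviceInfo, logMessages).forEach(function (line) {
    // Server-side WL.Logger writes to messages.log, subject to the
    // application server's logging configuration (as noted above).
    WL.Logger.info(line);
  });
  return true;
}
```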