Sonos: Create playbackSession that automatically ends after finishing the playlist (no repeat)

When I create a playbackSession and load a track/playlist with loadStreamUrl, the playlist repeats from the start after the last song. I want it to stop at the end instead.
I looked for a property similar to playOnComplete (in the loadStreamUrl payload), which starts playback automatically after buffering the track, but for ending playback after the track has played.
I also tried playback->setPlayModes to forbid repeating, but it is simply ignored:
{
  "playModes": {
    "repeat": false
  }
}
I know this is possible by setting up an event callback and processing the playbackStatus events, but I am looking for a simple "fire-and-forget" solution.

The loadStreamUrl command is for streaming radio. Since you're playing a playlist, you should use loadCloudQueue.
Use loadCloudQueue with a mediaUrl for the track instead of a SMAPI MusicObjectId if you don't want to set up a SMAPI server. See loadCloudQueue and Play audio (cloud queue) for details.
Alternatively, you can try the undocumented loadQueue command. loadQueue works like loadCloudQueue but it doesn't require a cloud queue. To play a track without a cloud queue, send the following calls:
createSession
loadQueue (described below)
skipToItem
loadQueue
Initializes the Sonos queue with custom metadata and playback policies. Use this command with skipToItem to send a track to the player. The player stops playing at the end of the track.
Parameters
Name | Type | Description
metadata | container | Container metadata describing the queue. This could be a programmed radio station, an album, a playlist, etc.
policies | playbackPolicy | Playback policies for the session.
Sample requests
POST [base URL]/groups/{groupId}/playbackSession
{...}
POST [base URL]/playbackSessions/{sessionId}/playbackSession/queue
{...}
POST [base URL]/playbackSessions/{sessionId}/playbackSession/skipToItem
{...}
See Control API list for the base URL.

Related

Is it possible to call track.stop() on a remote participant's track?

I tried the following code:
// stop remote participant tracks
vm.tc.currentVideoRoom.participants.forEach((remoteParticipant) => {
  remoteParticipant.tracks.forEach((track) => {
    console.log(track); // here I see the remote participant's video and audio tracks
    track.stop(); // but here I get a "track.stop is not a function" error
  });
});
I checked the Twilio Video documentation but didn't find a solution there. I also checked a GitHub issue (link), where someone mentions that stopping a remote participant's tracks is not possible.
So how can I stop a remote participant's tracks using Twilio?
You can't stop a remote track like that. A remote track object is a representation of the track as a stream coming from the remote participant. A local track is a representation of a track as a stream coming from the device's camera or microphone, so when you call stop on it, it stops interacting with the hardware.
So you cannot call stop on a remote track, because that would imply stopping the track at the remote participant's camera or microphone.
If you want to stop seeing or hearing a remote track, you can detach the track from the page. If you want to unsubscribe from the track so that you stop receiving its stream, you can use the Track Subscriptions API. And if you want to actually stop the remote participant's device, you would have to send a message to the remote participant (for example via the DataTrack API) and have them call track.stop() locally.
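For the detach option, here is a minimal sketch, assuming the twilio-video 2.x JS SDK, where participant.tracks is a Map of RemoteTrackPublication objects (the function name is mine):

```javascript
// Detach every subscribed track of a remote participant from the DOM.
// Only publications with a non-null .track are currently subscribed.
function detachParticipantTracks(participant) {
  const detachedElements = [];
  participant.tracks.forEach((publication) => {
    if (publication.track) {
      // detach() returns the media elements the track was attached to
      publication.track.detach().forEach((element) => {
        element.remove();
        detachedElements.push(element);
      });
    }
  });
  return detachedElements;
}
```

Note this only removes the media elements on your side; the remote participant keeps publishing, and you keep receiving the stream until you unsubscribe.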

What's the Sonos Cloud Queue GET /version periodic polling interval?

We've recently started adding programmed radio to our existing SMAPI implementation. I followed the Sonos developer documentation and (eventually) got it working as expected. I'm just seeking some clarification around the auto-updating based on the queueVersion value.
The schedules feeding the programmed radio can change from time to time, and these changes should be reflected on the Sonos players as soon as possible. From what I understand, this should be possible by modifying the queueVersion property returned by GET /context, GET /itemWindow and GET /version.
Looking at the GET /version documentation I see that Players "[...] are responsible for periodically polling this [QueueVersion] value to detect changes in the cloud queue track list, [...]".
I've monitored our API logs for about 15 minutes in which I would expect at least a GET /version request, but none showed up. The only calls I'm seeing are POST /timePlayed.
Can anyone (from the Sonos team perhaps?) clarify what this interval is set to, or how it can be controlled?
Given that you aren't seeing GET /version requests, there may be an error in your configuration.
The player sends a GET /version request every 5 minutes when paused and every 10 minutes when playing. This is by design and doesn't depend on any setting that you can control. However, players fetch new tracks as needed using GET /itemWindow, and since the player requires a queue version in that response, it doesn't need to send a separate GET /version request in that case. After the player gets a new item window, it resets the polling interval to another 10 minutes.
See the Play audio (cloud queue) page for details.
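As an illustration of the version being carried in the item window, a GET /itemWindow response might look roughly like the sketch below. This is an assumption based on the cloud queue API; the exact schema and field names should be checked against the reference, and the version strings and track values here are made up:

```json
{
  "includesBeginningOfQueue": true,
  "includesEndOfQueue": false,
  "contextVersion": "context-v1",
  "queueVersion": "queue-v42",
  "items": [
    {
      "id": "item-0001",
      "track": {
        "name": "Example Track",
        "mediaUrl": "https://example.com/track.mp3",
        "durationMillis": 180000
      }
    }
  ]
}
```

Bumping the queueVersion you return here is what signals the player that the track list has changed.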

Sonos event subscription without cloud service to receive events

I'm trying to wrap my head around how to subscribe to events in the new Sonos API for an iOS app.
It seems like a cloud service is needed to receive events from the Sonos Cloud.
As described here:
Subscribing to events with Sonos API (https://developer.sonos.com/build/direct-control/connect)
Is there any way for an iOS app to subscribe to events (volume and grouping change) without having to run a cloud service?
If not, any features based on event subscriptions will not work if there is trouble connecting to the cloud for whatever reason.
No, there's no way to run without a cloud service. You must have a reliable cloud service for events and subscriptions.
However, each device also has a fast, local, undocumented UPnP service that also supports events.
This answer should give you some pointers on how to get it working in Node.
In a nutshell:
Set up an HTTP endpoint on the device (not sure how that works in Swift)
Tell the speaker (in Node) to start sending events for a specific service
Handle the received XML events.
Sample event from the RenderingControl service (yes, it has nested encoded XML in the <LastChange> property):
<e:propertyset xmlns:e="urn:schemas-upnp-org:event-1-0">
  <e:property>
    <LastChange>
      <Event xmlns="urn:schemas-upnp-org:metadata-1-0/RCS/">
        <InstanceID val="0">
          <Volume channel="Master" val="15"/>
          <Volume channel="LF" val="100"/>
          <Volume channel="RF" val="100"/>
          <Mute channel="Master" val="0"/>
          <Mute channel="LF" val="0"/>
          <Mute channel="RF" val="0"/>
          <Bass val="0"/>
          <Treble val="0"/>
          <Loudness channel="Master" val="1"/>
          <OutputFixed val="0"/>
          <HeadphoneConnected val="0"/>
          <SpeakerSize val="3"/>
          <SubGain val="0"/>
          <SubCrossover val="0"/>
          <SubPolarity val="0"/>
          <SubEnabled val="1"/>
          <SonarEnabled val="1"/>
          <SonarCalibrationAvailable val="1"/>
          <PresetNameList val="FactoryDefaults"/>
        </InstanceID>
      </Event>
    </LastChange>
  </e:property>
</e:propertyset>
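A minimal sketch of pulling a value out of such an event in Node: the <LastChange> payload usually arrives entity-encoded, so decode it before matching. A real implementation would use a proper XML parser; a regex is enough to illustrate the nesting (the function name is mine):

```javascript
// Extract the Master volume from a RenderingControl event body.
// The LastChange payload may arrive entity-encoded (&lt;Event ...&gt;),
// so decode the common entities before matching.
function masterVolume(eventXml) {
  const decoded = eventXml
    .replace(/&lt;/g, '<')
    .replace(/&gt;/g, '>')
    .replace(/&quot;/g, '"')
    .replace(/&amp;/g, '&');
  const match = decoded.match(/<Volume channel="Master" val="(\d+)"/);
  return match ? parseInt(match[1], 10) : null;
}
```

Applied to the sample event above, this returns 15.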

Stopping own audio in stream

I am trying to implement video chat in my application with WebRTC.
I am attaching the stream like this:
navigator.getUserMedia(
  {
    // Permissions to request
    video: true,
    audio: true
  },
  function (stream) {
    // success: attach the stream locally and pass it over WebRTC
  },
  function (error) {
    console.error(error);
  }
);
I am passing that stream to the remote client via WebRTC, and I am able to see both videos on my screen (mine as well as the client's).
The issue is that I also hear my own voice in the stream, which I don't want; I want the audio of the other party only.
Can you let me know what the issue might be?
Did you add the "muted" attribute to your local video tag, as follows?
<video muted="true" ... >
Try setting the echoCancellation flag to true in your constraints.
From the Media Capture and Streams spec (w3.org), section 4.3.5 MediaTrackSupportedConstraints:
When one or more audio streams is being played in the processes of
various microphones, it is often desirable to attempt to remove the
sound being played from the input signals recorded by the microphones.
This is referred to as echo cancellation. There are cases where it is
not needed and it is desirable to turn it off so that no audio
artifacts are introduced. This allows applications to control this
behavior.
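A sketch of building constraints that request echo cancellation (the helper function is mine; the promise-based getUserMedia in the usage note is the modern replacement for the callback form above):

```javascript
// Build getUserMedia constraints that ask the browser to remove
// locally played-back audio from the microphone input.
function buildConstraints(cancelEcho) {
  return {
    video: true,
    audio: { echoCancellation: !!cancelEcho }
  };
}
```

Usage: navigator.mediaDevices.getUserMedia(buildConstraints(true)).then(stream => ...). Note that echoCancellation is a hint; the browser may not be able to honor it on every device.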

Set AudioAttributes Volume

After searching for a very long time for a way to play notification sounds only through the headphones (when plugged in), on a stream separate from STREAM_MUSIC, in a way that could interrupt and be completely audible over any background music, Android finally came out with the AudioAttributes API. Using the following code, I'm able to achieve exactly what I want for notifications, at least on API 21 or higher (STREAM_MUSIC is the best option I've found for lower versions):
AudioAttributes audioAttributes = new AudioAttributes.Builder()
        .setUsage(AudioAttributes.USAGE_ASSISTANCE_SONIFICATION)
        .setContentType(AudioAttributes.CONTENT_TYPE_SONIFICATION)
        .build();
Unfortunately, there doesn't appear to be any way to adjust the volume of the sonification in my app's settings. I currently use the AudioManager in the following way, but it only allows volume adjustments to streams, and none of STREAM_ALARM, STREAM_NOTIFICATION, STREAM_RING, or STREAM_MUSIC applies to whatever routing strategy is used for the sonification:
audioManager.setStreamVolume(AudioManager.STREAM_NOTIFICATION, originalVolume, 0);
Does anyone have any suggestion on how to set the volume corresponding to the AudioAttributes output? Keep in mind that the audio is actually played in a BroadcastReceiver that's used for the actual notification, and the audio setting would be specified in just some settings Activity.
Well, it appears that I missed a critical table in the API documentation:
https://source.android.com/devices/audio/attributes.html
It seems that STREAM_SYSTEM is the equivalent of what I was attempting to do with AudioAttributes. Basically, the code above is sufficient for API 21 and later, and using STREAM_SYSTEM with the AudioManager does everything necessary for APIs prior to 21.