Is it possible to call track.stop() on a remote participant's track? - vue.js

I tried the following code:
// remove participant tracks
vm.tc.currentVideoRoom.participants.forEach((remoteParticipant) => {
    remoteParticipant.tracks.forEach((track) => {
        console.log(track); // here I can see the remote participant's video and audio tracks
        track.stop(); // but this throws "track.stop() is not a function"
    });
});
I checked the Twilio Video documentation but couldn't find a solution there.
I also checked a GitHub issue (link here) where someone mentions that stopping a remote participant's tracks is not possible.
So how can I stop a remote participant's tracks using Twilio?

You can't stop a remote track like that. The remote track object is a representation of the stream coming from the remote participant. A local track, by contrast, represents the stream coming from the device's own camera or microphone, so calling stop on it stops the interaction with that hardware.
So you cannot call stop on a remote track, because that would imply stopping the track at the remote participant's camera or microphone.
If you want to stop seeing or hearing a remote track, you can detach the track from the page. If you want to unsubscribe from the track so that you stop receiving its stream, you can use the Track Subscriptions API. And if you want to actually stop the remote participant's device, you would have to send a message to the remote participant (possibly via the DataTrack API) and have them execute track.stop() locally.
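A minimal sketch of the first and last options, assuming the twilio-video v2.x JavaScript SDK (where participant.tracks is a Map of RemoteTrackPublications); the 'stop-tracks' message name is just a convention made up for this example:

// Option 1: stop rendering a remote track locally by detaching its media elements.
vm.tc.currentVideoRoom.participants.forEach((remoteParticipant) => {
    remoteParticipant.tracks.forEach((publication) => {
        if (publication.track && publication.track.kind !== 'data') {
            publication.track.detach().forEach((element) => element.remove());
        }
    });
});

// Option 2: ask the remote participant to stop their own device via a DataTrack.
// Sender side: connect with the data track, then send a message.
const dataTrack = new Twilio.Video.LocalDataTrack();
// ...connect with { tracks: [dataTrack, localAudioTrack, localVideoTrack] }, then:
dataTrack.send(JSON.stringify({ action: 'stop-tracks' }));

// Remote participant's side: react to the message by stopping their local tracks.
participant.on('trackSubscribed', (track) => {
    if (track.kind === 'data') {
        track.on('message', (message) => {
            if (JSON.parse(message).action === 'stop-tracks') {
                room.localParticipant.tracks.forEach((publication) => {
                    if (publication.track.kind !== 'data') publication.track.stop();
                });
            }
        });
    }
});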

Related

Cumulocity - managedObject Event - detect device first connection

Looking to understand whether there is a bulletproof event from the managedObject side of c8y that tells us the device has just connected.
I have a microservice that listens for events in real time and I want to trigger a process once we know a device has connected to send its payload.
We have used:
"c8y_Connection": {"status":"CONNECTED"}
We had the microservice log all managedObject events to Slack, and for three days we saw the "status":"CONNECTED" value in the payload of our demo devices at their reporting times.
But after three days we no longer see this "CONNECTED" state (all payloads show "DISCONNECTED").
What I am trying to achieve from the inventoryObject event is to know when a device has connected and sent its payload, so I know when data has arrived. I then go and get the data and process it externally. This is post-registration and part of the daily data-send cycle for my type of device.
What would be the best way to know, in a microservice, that a device has sent its payload? I want to notify an external application with either "data is arriving for id 35213" or, even better, "data has arrived for device 35213, and here's the $payload".
Just as some general information up front:
The c8y_Connection fragment showing CONNECTED indicates an active MQTT connection or an active long-polling connection, and it is only evaluated once every minute.
So if the client just sends data and immediately disconnects afterwards, this might not be picked up.
If you want to see that the device has sent something to Cumulocity, the c8y_Availability fragment is probably a better fit, as it holds the timestamp of when the device last sent something:
{ "lastMessage": "2022-10-11T14:49:50.201+09:00", "status": "UNAVAILABLE" }
Here, too, the evaluation (or rather the update to the database) only happens every minute.
Both c8y_Availability and c8y_Connection, however, are only generated if availability monitoring has been activated for the device (by defining a required interval for the device).
So if you have activated availability monitoring and you see a "lastMessage", you can reliably say that the device has already sent something to Cumulocity.
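A minimal sketch of reading that fragment from a Node.js microservice via the inventory REST API; deviceId, the credentials, and notifyExternalApp are placeholders for this example:

// Fetch the managed object and check its c8y_Availability fragment.
const response = await fetch(`${process.env.C8Y_BASEURL}/inventory/managedObjects/${deviceId}`, {
    headers: {
        Authorization: 'Basic ' + Buffer.from(`${tenant}/${user}:${password}`).toString('base64'),
        Accept: 'application/json'
    }
});
const managedObject = await response.json();
const availability = managedObject.c8y_Availability; // only present if availability monitoring is activated
if (availability && availability.lastMessage) {
    // the device has already sent something to Cumulocity
    notifyExternalApp(`data has arrived for device ${deviceId}, last message at ${availability.lastMessage}`);
}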

Agora RTC - Agora-SDK [DEBUG]: Ignoring event undefined

I'm currently trying to implement Agora's RTC and so far it has been working. However, there is an inconsistent error that only sometimes occurs: when joining a channel with an already existing user, the following error is shown, the stream is not added, and consequently it can't be played.
Agora-SDK [DEBUG]: Ignoring event undefined {uid: xyz}
In this case, xyz is the user id of that given existing user. I thought that it might be an issue with the code but it works sporadically and it doesn't seem to be a time lag or something like that either. Did anybody encounter this and know a solution?
Ba, I was experiencing the same issue. I am not sure of the setup you are using to call certain Agora functions, but the problem I was running into was that sometimes my publish-local-stream function would be called before Agora had finished capturing the user's media. I found this error in the console accompanied by "No track in stream" or something close to that. My fix was to ensure that the Agora init() function, which captures the user's media, had completed before calling for the local stream to be published.
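A minimal sketch of that ordering, assuming the legacy Agora Web SDK with Stream objects (v2/v3); APP_ID, TOKEN and CHANNEL are placeholders:

const client = AgoraRTC.createClient({ mode: 'rtc', codec: 'vp8' });

client.init(APP_ID, () => {
    client.join(TOKEN, CHANNEL, null, (uid) => {
        const localStream = AgoraRTC.createStream({ streamID: uid, audio: true, video: true });
        // Publish only after init() has finished capturing the user's media;
        // publishing earlier can trigger "Ignoring event undefined" / "No track in stream".
        localStream.init(() => {
            client.publish(localStream);
        });
    });
});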

Sonos: Create playbackSession that automatically ends after finishing the playlist. No repeat

When I create a playbackSession and load a track/playlist with loadStreamUrl, the playlist repeats from the start after the last song is reached. I instead want it to stop at the end.
I tried to find a property similar to playOnComplete (a payload field of loadStreamUrl, which starts playback automatically after buffering the track), just for ending playback after the track has been played.
I also tried to use playback->setPlayModes to forbid repeating, but this is just ignored:
{
    "playModes": {
        "repeat": false
    }
}
I know this is possible by setting up an Event Callback and processing the playbackStatus events, but I am looking for a simple "fire-and-forget" solution.
The loadStreamUrl command is for streaming radio. Since you're playing a playlist, you should use loadCloudQueue.
Use loadCloudQueue with a mediaUrl for the track instead of a SMAPI MusicObjectId if you don't want to set up a SMAPI server. See loadCloudQueue and Play audio (cloud queue) for details.
Alternatively, you can try the undocumented loadQueue command. loadQueue works like loadCloudQueue but it doesn't require a cloud queue. To play a track without a cloud queue, send the following calls:
createSession
loadQueue (described below)
skipToItem
loadQueue
Initializes the Sonos queue with custom metadata and playback policies. Use this command with skipToItem to send a track to the player. The player stops playing at the end of the track.
Parameters
Name | Type | Description
metadata | container | Container metadata describing the queue. This could be a programmed radio station, an album, a playlist, etc.
policies | playbackPolicy | Playback policies for the session.
Sample requests
POST [base URL]/groups/{groupId}/playbackSession
{...}
POST [base URL]/playbackSessions/{sessionId}/playbackSession/queue
{...}
POST [base URL]/playbackSessions/{sessionId}/playbackSession/skipToItem
{...}
See Control API list for the base URL.
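A minimal sketch of that sequence using fetch; the undocumented loadQueue endpoint path follows the sample requests above, and baseUrl, groupId, accessToken and all request-body fields are assumptions made for this example rather than documented payloads:

const headers = {
    Authorization: `Bearer ${accessToken}`,
    'Content-Type': 'application/json'
};

// 1) createSession
const session = await fetch(`${baseUrl}/groups/${groupId}/playbackSession`, {
    method: 'POST',
    headers,
    body: JSON.stringify({ appId: 'com.example.app', appContext: 'example' })
}).then((response) => response.json());

// 2) loadQueue with container metadata and playback policies
await fetch(`${baseUrl}/playbackSessions/${session.sessionId}/playbackSession/queue`, {
    method: 'POST',
    headers,
    body: JSON.stringify({
        metadata: { name: 'My playlist', type: 'playlist' },
        policies: { repeat: false }
    })
});

// 3) skipToItem to send the track to the player; playback stops when the track ends
await fetch(`${baseUrl}/playbackSessions/${session.sessionId}/playbackSession/skipToItem`, {
    method: 'POST',
    headers,
    body: JSON.stringify({ itemId: 'track-1', playOnCompletion: true })
});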

What is the secret behind multiple webRTC?

I want to create a conference room using Pusher and pure JavaScript. I managed to create a peer-to-peer connection, but a many-to-many connection is proving to be a hard nut to crack. What I am trying to do is connect a specific user to a room so that any participant who subscribes to the channel shares their stream with it and can also access all the streams already present in the channel.
participant = new RTCPeerConnection();
// add a stream fetched from the user's camera before creating the offer
participant.addStream(stream);
participant.createOffer().then(function (desc) {
    participant.setLocalDescription(new RTCSessionDescription(desc));
    // using Pusher, send an event that carries the desc, room and sender id
    channel.trigger('client-sdp', {
        sdp: desc,
        roomEvent: rooms,
        from: id
    });
});
I can share the whole code if required; all I need to know is how to handle a many-to-many connection in WebRTC with RTCPeerConnection().
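One common approach is a mesh topology: keep one RTCPeerConnection per remote peer, keyed by their id, and run the offer/answer exchange over the Pusher channel for every pair of participants. A minimal sketch, where peers, callPeer and the 'client-candidate' event name are assumptions made for this example:

const peers = {}; // one RTCPeerConnection per remote peer id

function getOrCreatePeer(remoteId, localStream) {
    if (!peers[remoteId]) {
        const pc = new RTCPeerConnection();
        localStream.getTracks().forEach((track) => pc.addTrack(track, localStream));
        pc.onicecandidate = (event) => {
            if (event.candidate) {
                channel.trigger('client-candidate', { candidate: event.candidate, to: remoteId, from: id });
            }
        };
        pc.ontrack = (event) => {
            // render event.streams[0] for this remote peer
        };
        peers[remoteId] = pc;
    }
    return peers[remoteId];
}

// when a new participant joins the channel, every existing participant offers to them
async function callPeer(remoteId, localStream) {
    const pc = getOrCreatePeer(remoteId, localStream);
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    channel.trigger('client-sdp', { sdp: offer, to: remoteId, from: id });
}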

Stopping own audio in stream

I am trying to implement video chat in my application with webrtc.
I am attaching the stream via this:
getUserMedia(
    {
        // Permissions to request
        video: true,
        audio: true
    },
    function (stream) {
        // ...attach the stream and add it to the peer connection
    }
);
I am passing that stream to the remote client via WebRTC.
I am able to see both videos on my screen (mine as well as the client's).
The issue is that I am also getting my own voice in the stream, which I don't want. I want the audio of the other party only.
Can you let me know what the issue might be?
Did you add the "muted" attribute to your local video tag, as follows:
<video muted="true" ... >
Try setting the echoCancellation flag to true in your constraints (example below the spec excerpt):
4.3.5 MediaTrackSupportedConstraints (W3.org, Media Capture and Streams):
When one or more audio streams is being played in the processes of
various microphones, it is often desirable to attempt to remove the
sound being played from the input signals recorded by the microphones.
This is referred to as echo cancellation. There are cases where it is
not needed and it is desirable to turn it off so that no audio
artifacts are introduced. This allows applications to control this
behavior.
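A minimal sketch combining both suggestions, using the promise-based getUserMedia; the element id is a placeholder for this example:

navigator.mediaDevices.getUserMedia({
    video: true,
    audio: { echoCancellation: true }
}).then(function (stream) {
    const localVideo = document.getElementById('local-video');
    localVideo.srcObject = stream;
    localVideo.muted = true; // mute local playback so you don't hear your own voice
    // then add the stream's tracks to the RTCPeerConnection as before
});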