WebRTC: Have multiple tracks (or streams) and identify them on the other side - webrtc

I'm using WebRTC to build a Skype-like application. I want one party to be able to send a feed from their webcam, while sharing their screen at the same time.
On the receiving end, however, I can't find any way to identify what type of stream I'm receiving: the label and ID are reset to new values (bummer, I was hoping to identify it by its source ID), and I can't find any option for adding my own metadata to the streams or tracks. How does the receiving client know what type of media I'm sending it?
Any ideas? Thanks in advance!

As it turns out, MediaStreamTracks get a new ID assigned on the other side. MediaStreams, however, keep their IDs, so pass the stream when calling addTrack, and then use a DataChannel to send information about each stream keyed by its ID.
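For illustration, here is a minimal sketch of that approach in browser TypeScript: one MediaStream per source is passed to addTrack, and a DataChannel message maps each stream's ID to its type. Names like metaChannel and the "webcam"/"screen" labels are just examples, and signaling is assumed to be handled elsewhere.

    // Sender side: assumes `pc` (RTCPeerConnection) and signaling already exist.
    async function publishSources(pc: RTCPeerConnection) {
      const metaChannel = pc.createDataChannel("stream-meta");
      const webcam = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
      const screen = await navigator.mediaDevices.getDisplayMedia({ video: true });

      // Passing the stream as the second argument lets the receiver see the same MediaStream id.
      webcam.getTracks().forEach((t) => pc.addTrack(t, webcam));
      screen.getTracks().forEach((t) => pc.addTrack(t, screen));

      // Advertise which stream id is which over the DataChannel.
      metaChannel.onopen = () =>
        metaChannel.send(JSON.stringify({ [webcam.id]: "webcam", [screen.id]: "screen" }));
    }

    // Receiver side: match incoming tracks against the advertised ids.
    function watchSources(pc: RTCPeerConnection) {
      const streamTypes: Record<string, string> = {};
      pc.ondatachannel = (e) => {
        e.channel.onmessage = (msg) => Object.assign(streamTypes, JSON.parse(msg.data));
      };
      pc.ontrack = (e) => {
        // In real code, handle the case where the metadata has not arrived yet.
        const type = streamTypes[e.streams[0]?.id ?? ""]; // "webcam" or "screen"
        console.log("got", e.track.kind, "track for", type);
      };
    }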

Related

Retrieving messages in chat application

I am working on a chat application and I am wondering how to design the API for fetching messages from the server.
When the application loads the chat window, I display the last 20 messages from the server using the endpoint:
/messages?user1={user1Id}&user2={user2Id}&page=0
Secondly, I allow the user to load and display previous messages when they scroll to the top, using the same endpoint but with a different page (1):
/messages?user1={user1Id}&user2={user2Id}&page=1
But this design doesn't work correctly once users start to send messages to each other. The reason is that the endpoint returns the messages in descending order, so invoking it before and after a message is sent or received gives different result sets.
My goal is to always get the previous 20 messages when a user scrolls back through the conversation history.
How would you implement this in a clean way (including the REST API design)? I can imagine some solutions, but they seem dirty to me.
REST does not scale well for real-time applications; opening an HTTP connection again and again takes too many resources. Better to use WebSockets for this.
As for the problem above: if you start pagination from the latest message rather than from the first one, it is no surprise that the pages change. To keep the pages stable you need to send the ID of the message you started with. So for example: GET ...&count=20&latest=1; after that you get a list of 20 messages, you take the ID of the latest one and paginate with GET ...&count=20&basePoint={messageId}&page=0 to always get exactly the same page no matter how many new messages have arrived. After that you continue with GET ...&count=20&basePoint={messageId}&page=1 and so on. You will have negative pages, GET ...&count=20&basePoint={messageId}&page=-1, if there are newer messages. Though I don't know any chat application which paginates this way.
What I would do instead is GET ...&count=20&latest=1, take the 20th message, and do GET ...&count=20&before={messageId_20th}; after that, take the last message of that list again, which is the 40th, and do GET ...&count=20&before={messageId_40th}, and so on. To get the new messages you can do GET ...&count=20&latest=1 again, or GET ...&count=20&after={messageId_1st}.
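A minimal client-side sketch of that before-cursor style, assuming a /messages endpoint that accepts count, latest and before parameters as in the examples above (all names here are illustrative, not a real API):

    interface Message { id: string; text: string; sentAt: string; }

    let oldestId: string | undefined; // id of the oldest message loaded so far

    async function loadOlderMessages(count = 20): Promise<Message[]> {
      const params = new URLSearchParams({ count: String(count) });
      if (oldestId) params.set("before", oldestId); // subsequent pages
      else params.set("latest", "1");               // first page
      const res = await fetch(`/messages?${params}`);
      const messages: Message[] = await res.json(); // newest first
      if (messages.length > 0) oldestId = messages[messages.length - 1].id;
      return messages;
    }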
None of the above is really good if you are using a caching mechanism on the client. I would add something like key frames in video, so key messages. For example, every 20th message could be a key message. I would do caching based on the key message IDs so the responses stay cacheable: something like GET ...&count=20&latest=1 would return messages, and one in the middle of the list would have a property of keyMessage=true. I would start the next request from the last key message, something like GET ...&count=20&before={messageId_lastKey}, so the response will be cacheable.
Another way of doing this is starting pagination from the very first message. So when you do GET ...&count=20&latest=1 it also returns the index of the message, e.g. that it is the 1234th message. You can do caching based on every 20th message, just like in the key-message solution, but instead of message IDs you can do GET ...&count=20&to=1220 and merge the pages on the client. Or if you really want to save on data, then GET ...&from=1201&to=1220&before=1215 or GET ...&from=1201&to=1220&last=1214 or GET ...&from=1201&to=1214, etc. And continue normal pagination with GET ...&count=20&to=1200 or GET ...&from=1181&to=1200.
What is good about the fixed-pages approach above is that you can use Range headers too. For example GET .../latest would return the last 20 messages with the header Content-Range: messages 1215-1234/1234. After that you can do GET .../ with Range: messages=1201-1214, then GET .../ with Range: messages=1181-1200, and so on. When the total message count in a response changes, e.g. Content-Range: messages 1181-1200/1245, you can automagically download the new messages with Range: messages=1235-1245, or with Range: messages=1235- if you expect the total of 1245 to change in the meantime. You can do the same thing with the URI too, but if you have a standard solution like Range headers it is better to use it instead of reinventing the wheel.
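Purely as an illustration of that Range-header variant (a custom "messages" range unit as proposed above, not a standard one), a request could look like this:

    // Hypothetical: request messages 1181-1200 and read the total from Content-Range.
    async function fetchMessageRange(from: number, to: number) {
      const res = await fetch("/messages", {
        headers: { Range: `messages=${from}-${to}` },
      });
      // e.g. "messages 1181-1200/1245" -> 1245 is the current total
      const contentRange = res.headers.get("Content-Range");
      return { messages: await res.json(), contentRange };
    }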
As for the &user1={user1Id}&user2={user2Id} part, I would order the IDs alphabetically, so that user1 always comes before user2 in alphabetical order, e.g. &user1=B&user2=A -> &user1=A&user2=B. That way it is properly cacheable too, not necessarily on the client, but you can add a server-side cache as well, so if the two participating users request the same conversation again shortly afterwards it won't be queried from the database again. Another way of doing this is adding a conversation ID, which can be randomly generated or derived from the two user IDs, but it must always be the same for the two users, e.g. /conversations/A+B or /conversations/x23542asda. I would do the latter, because if you start supporting conversations with multiple users it is better to have a dedicated conversation identifier. Another thing you can support is having multiple topic-related conversations between the two users. In that case don't use /conversations/A+B/{topic}; just use a unique ID and search for existing conversations before creating a new one, so /conversations?participants[]=A&participants[]=B&topic={topic} will give you either an empty list or a list with a single conversation. This could also be used to list the conversations of a single user, /conversations?participants[]=A, or the conversations of a single user in a certain topic, /conversations?participants[]=A&topic={topic}.
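A tiny sketch of that canonical-ordering idea (the path format is just the /conversations/A+B example from above):

    // Build one canonical conversation path for a pair of users so that
    // (A, B) and (B, A) map to the same cacheable URL.
    function conversationPath(userA: string, userB: string): string {
      const [first, second] = [userA, userB].sort(); // alphabetical order
      return `/conversations/${first}+${second}`;
    }

    conversationPath("B", "A"); // "/conversations/A+B"
    conversationPath("A", "B"); // "/conversations/A+B"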
Generally speaking, for "system design" questions, there isn't any one correct answer. Multiple approaches can be considered, based on their pros and cons. You've mentioned you want a clean way to retrieve messages, so your expectation might be different from mine.
Here is a simple approach/idea to get you started:
Have a field in your database that can be used to order messages. It could be a timestamp, a message_id, or something else that you want. Let's call this m_id for now.
On your client, have a field that tracks the m_id of the earliest message currently available locally on the client. This key (let's call it earliest_m_id) will be used to query the database, because you can simply fetch x number of messages before the earliest_m_id.
This would solve your issue of incorrect messages being fetched. Using paging will not work, since pages will continuously change with the exchange of messages.
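A rough server-side sketch of this idea, assuming an Express-style handler and a generic db.query helper (all names, including the beforeMId parameter, are illustrative):

    import express from "express";

    const app = express();
    // Placeholder for whatever database client you actually use.
    declare const db: { query(sql: string, params: unknown[]): Promise<unknown[]> };

    app.get("/messages", async (req, res) => {
      const conversationId = String(req.query.conversationId ?? "");
      const beforeMId = String(req.query.beforeMId ?? "");
      const limit = Number(req.query.limit ?? 20);
      const rows = await db.query(
        `SELECT m_id, sender_id, body, sent_at
           FROM messages
          WHERE conversation_id = $1 AND m_id < $2
          ORDER BY m_id DESC
          LIMIT $3`,
        [conversationId, beforeMId, limit]
      );
      // The client updates its earliest_m_id from the last row it receives.
      res.json(rows);
    });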

How to know the count or last message of a MediaGroup in Telegram

I want to attach a button to a media group.
To do this, I intercept each message and check whether it has the same mediaGroup_id; if so, I save its file_id to the database.
After all messages from the media group have been received, I send them to a group in a separate channel, and here is the problem: how can I determine that a given message is the last one of the media group? I had the naive idea of creating a job with a delay of a few seconds, enough time to receive the entire media group, and then sending the whole media group from that job. However, I am worried about the reliability of this method, and it will surely become buggy if I ever have to go asynchronous.
Then, in the main channel, I send a message containing a link to the media group and a button, as I wanted
Is there some way to do this more elegantly?
That actually sounds rather reasonable, and in fact I know a bot that does something very similar. The reason this works is that Telegram apparently first uploads all the media files and then sends all the messages at once, rather than looping over "upload, then send".
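For what it's worth, a small debounce sketch of the delayed-job idea: reset a short timer every time another message with the same media_group_id arrives, and forward the whole group when the timer fires. The 2-second window and the forwardAlbum function are assumptions, not Telegram API guarantees.

    // Collect file_ids per media_group_id and flush once no new item has
    // arrived for ~2 seconds (an assumed, not guaranteed, window).
    const pending = new Map<string, { fileIds: string[]; timer: ReturnType<typeof setTimeout> }>();

    function onMediaGroupMessage(mediaGroupId: string, fileId: string) {
      const entry = pending.get(mediaGroupId);
      if (entry) clearTimeout(entry.timer);
      const fileIds = entry ? [...entry.fileIds, fileId] : [fileId];
      const timer = setTimeout(() => {
        pending.delete(mediaGroupId);
        forwardAlbum(mediaGroupId, fileIds);
      }, 2000);
      pending.set(mediaGroupId, { fileIds, timer });
    }

    function forwardAlbum(mediaGroupId: string, fileIds: string[]) {
      // Placeholder: call sendMediaGroup (or your bot framework's equivalent)
      // with the collected file_ids here.
      console.log(`forwarding ${fileIds.length} items from media group ${mediaGroupId}`);
    }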

PubNub - Is there a way to find Channel Groups that contain a channel?

Is there any way other than getting all Channel Groups and checking each one individually? I want the server to go through and remove the channel from every Channel Group that contains it.
I have the client unsubscribing as it should, but I need to handle the case where it does not unsubscribe properly (browser crash, loss of connection, etc.). I know that using presence and detecting leave/timeout events is probably the best way to handle this, but it is just not something we can do in this first implementation.
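As far as I know there is no reverse lookup from a channel to its Channel Groups, so the brute-force version looks roughly like this with the JavaScript SDK, assuming you maintain your own list of group names (the keys and allGroupNames here are placeholders):

    import PubNub from "pubnub";

    const pubnub = new PubNub({
      subscribeKey: "your-sub-key",   // placeholder keys
      publishKey: "your-pub-key",
      userId: "server",
    });

    // Check every known group and remove the channel wherever it is present.
    async function removeChannelEverywhere(channel: string, allGroupNames: string[]) {
      for (const channelGroup of allGroupNames) {
        const { channels } = await pubnub.channelGroups.listChannels({ channelGroup });
        if (channels.includes(channel)) {
          await pubnub.channelGroups.removeChannels({ channelGroup, channels: [channel] });
        }
      }
    }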

Forwarding data from SigFox to Cumulocity

Currently we are trying to forward data from a SigFox device to Cumulocity by configuring a callback in the SigFox admin panel, but we always receive a 400 Bad Request HTTP response.
If I change the forwarding URL to RequestBin, there is no problem: I get a 200 HTTP status code.
If I use Postman to send the request with the same header params and the same body, it works too.
Do you know what the issue might be?
Moreover, can you tell us the best way to manage the mapping between SigFox device IDs and Cumulocity ones? Callbacks are created for a group of devices in SigFox, so we can't hardcode the Cumulocity ID as the "source" in the body of each request.
Maybe it's possible to use the Identity API to register the mapping between the SigFox device ID and the Cumulocity device ID? I thought it would be possible to write a processing task in CEL that listens for EventCreated, extracts the SigFox ID from the received object, and queries the database to get the internal ID for that external ID with one of your pre-defined functions found here: http://www.cumulocity.com/guides/reference/cumulocity-event-language/
But there is no function to query the identity documents. So if you have already solved this specific use case, can you please share the best approach?
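One hedged sketch of the ID mapping, done in a small middleware between the SigFox callback and Cumulocity rather than in CEL: look the device up through the Identity API (GET /identity/externalIds/{type}/{externalId}) and use the returned managed-object ID as the source. The external-ID type "c8y_Serial", the event type, and the tenant URL are assumptions for illustration; Node 18+ global fetch is assumed.

    // Resolve a SigFox device id to its Cumulocity internal id, then post an event.
    const C8Y = "https://yourtenant.cumulocity.com";   // placeholder tenant
    const AUTH = "Basic " + Buffer.from("tenant/user:password").toString("base64");

    async function resolveInternalId(sigfoxId: string): Promise<string> {
      const res = await fetch(`${C8Y}/identity/externalIds/c8y_Serial/${sigfoxId}`, {
        headers: { Authorization: AUTH, Accept: "application/json" },
      });
      if (!res.ok) throw new Error(`identity lookup failed: ${res.status}`);
      const body = await res.json();
      return body.managedObject.id;
    }

    async function forwardSigfoxPayload(sigfoxId: string, data: string) {
      const source = await resolveInternalId(sigfoxId);
      await fetch(`${C8Y}/event/events`, {
        method: "POST",
        headers: { Authorization: AUTH, "Content-Type": "application/json" },
        body: JSON.stringify({
          source: { id: source },
          type: "c8y_SigfoxUplink",          // assumed event type
          text: "SigFox uplink",
          time: new Date().toISOString(),
          payload: data,
        }),
      });
    }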

Multiple push channels (WNS) for a single app?

Basically I have two distinct services I wish to use, my own WCF back-end service and an Azure Mobile Service, that both use push notifications. They're associated with the same app in the Windows Store.
In my code, I have two separate modules that both call:
var newChannel = await PushNotificationChannelManager.CreatePushNotificationChannelForApplicationAsync();
This all seemed like fun and games, and unless I horribly misread the documentation, having multiple channels for one app should be OK.
However, when I sent a notification from the WCF service to the app, it went to the AMS handler and naturally threw an invalid-format exception, given that I'm using my own raw push notification format.
So my question is this: do I need to re-engineer the structure to have only one push channel handler that dispatches messages to the correct handlers based on their format, or what methodology do I need to follow in order to get multiple push channels for a single app?
The only formats supported for WNS push notifications are the XML-based format and the JSON data format. If you send some other format while communicating with WNS, it is bound to throw exceptions. Go through the demo from the link:
Push notification sample
If this does not solve the problem, please leave a comment.
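If you do end up with a single channel and dispatch yourself, as the question suggests, a minimal framework-agnostic sketch could look like this (handler names are invented; it simply assumes the Azure Mobile Services payload is JSON and anything else is the custom WCF raw format):

    // Route a raw push payload to the right handler based on its shape.
    function handleRawNotification(payload: string) {
      try {
        const parsed = JSON.parse(payload);
        handleAzureMobileServicePush(parsed);  // assumed: AMS sends JSON
      } catch {
        handleWcfRawPush(payload);             // anything else: custom WCF raw format
      }
    }

    function handleAzureMobileServicePush(message: unknown) { /* ... */ }
    function handleWcfRawPush(raw: string) { /* ... */ }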