mediasoup - mismatch between payload types - webrtc

I'm trying to use mediasoup to forward RTP streams with room.createRtpStreamer.
My problem is that the payload type (for OPUS) I get from producer.rtpParameters.codecs[i].payloadType is 111,
while the one I see on the actual RTP packets is 100 (observed in Wireshark).
I tried to set preferredPayloadType in my server's config, but it seems to make no difference.
Note:
if I hardcode 100 as the payload type for the OPUS stream, I can view/hear the stream using ffplay.
I'm using Chrome 55 (latest) and mediasoup 2.0.5 (latest).
Any help will be appreciated.

The Producer has the RTP parameters decided by the client (browser), so the PT of OPUS is 111 (the default value generated by Chrome).
But, once in the mediasoup server, the Consumers associated with that Producer use the RTP parameters given during room creation. So, if the codecs given to room = new server.Room(codecs) [1] have a preferredPayloadType field, that value will be used within the Consumers (otherwise it will be randomly chosen by the server).
So, when you call room.createRtpStreamer() you provide a Producer, and the generated RtpStreamer [2] has an associated Consumer and PlainRtpTransport. You should therefore read rtpStreamer.consumer.rtpParameters rather than the producer's ones (a short sketch follows the links below).
[1] https://mediasoup.org/documentation/mediasoup/api/#server-Room
[2] https://mediasoup.org/documentation/mediasoup/api/#RtpStreamer
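For illustration, a minimal sketch assuming the v2 API from the links above; the remoteIP/remotePort option names and the PT value of 100 are assumptions, not taken from the question, so check the RtpStreamer docs for the exact option names:

// Pinning preferredPayloadType makes the Consumer-side PT predictable.
const roomCodecs = [
  { kind: 'audio', name: 'opus', clockRate: 48000, channels: 2, preferredPayloadType: 100 }
];
const room = new server.Room(roomCodecs);

// Later, when forwarding a producer as plain RTP:
room.createRtpStreamer(producer, { remoteIP: '127.0.0.1', remotePort: 5004 })
  .then((rtpStreamer) => {
    // Read the PT from the Consumer, not from the Producer.
    const opus = rtpStreamer.consumer.rtpParameters.codecs
      .find((c) => c.name === 'opus');
    console.log('payload type on the wire:', opus.payloadType); // 100
  });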

You should have a look at the SDP in the call setup message and check whether you get 111 or 100 for the OPUS payload.
From there you can decide which side has the bug (Chrome or mediasoup).
In the call setup message (initiating the call), check the payload of the OPUS codec.
The called party should respond with the same payload number if it accepts OPUS, and then both parties should use that payload number in the RTP packets.
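For reference, the OPUS payload mapping appears in the SDP as an rtpmap attribute; a typical Chrome offer contains lines like these (the port and the other payload numbers will vary):

m=audio 9 UDP/TLS/RTP/SAVPF 111 103 104
a=rtpmap:111 opus/48000/2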

So I found that the payload type I get from producer.rtpParameters.codecs[i].payloadType is the original one, and that room.createRtpStreamer changes the payload type.
I ended up doing the following to resolve the issue:
// Get the payload type from the room's rtpCapabilities:
// take the preferredPayloadType of the codec matching the producer's codec.
let payload = this.room.rtpCapabilities.codecs.find((c) => {
  return c.name === producer.rtpParameters.codecs[i].name;
}).preferredPayloadType;

Related

gRPC and C#: receive message bigger than maximum allowed

I am doing some tests to request data from a remote database from a client. For that, I have a gRPC client that calls a method on the gRPC server; the server uses EF to get the data and sends the result to the client.
Well, in my case, I get about 3 MB of data, which is larger than the default maximum message size allowed for the channel.
I know that I can resolve the problem when I create the channel on the client, in this way, for example, raising it to 60 MB:
var channel = GrpcChannel.ForAddress("http://localhost:5223",
    new GrpcChannelOptions
    {
        MaxReceiveMessageSize = 62914560, // 60 MB
        MaxSendMessageSize = 62914560,
    });
But although I can increase the limit when I create the channel, I can't guarantee that no query will ever return more data than the maximum allowed.
So I would like to know how I can handle this.
In this case, the method is unary, not a stream.
Thanks.
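Since the question notes the call is unary, one hedged option (not from the original thread) is to make the RPC server-streaming, so each record travels as its own small message and no single response can hit the size cap. A minimal sketch; the GetRecords method, the Query/Record messages, and the QueryDatabase/Process helpers are all hypothetical:

// Hypothetical .proto: rpc GetRecords (Query) returns (stream Record);
using Grpc.Core;

// Server side: stream rows one at a time instead of one 3 MB reply.
public override async Task GetRecords(Query request,
    IServerStreamWriter<Record> responseStream, ServerCallContext context)
{
    foreach (var record in QueryDatabase(request))   // hypothetical data access
    {
        await responseStream.WriteAsync(record);
    }
}

// Client side: consume the stream incrementally.
using var call = client.GetRecords(new Query());
await foreach (var record in call.ResponseStream.ReadAllAsync())
{
    Process(record);   // hypothetical per-record handler
}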

What does an MPNS response with status code 200 and notification status 'Dropped' mean?

For some push messages sent using MPNS I am getting a response with the following values:
statusCode = 200
notificationStatus = Dropped
deviceConnectionStatus = Connected
subscriptionStatus = Active
Looking at the only documentation I found, it seems the meaning of this particular combination is not explained:
https://msdn.microsoft.com/library/windows/apps/ff941100(v=vs.105).aspx
What I want to know is whether I should treat this as an error, and if so, should I retry later or just give up?
Even if we cannot find specific documentation covering the particular combination you describe, we can still analyze it based on common experience:
200 OK means your request was received successfully
Dropped means the notification itself was dropped rather than delivered to the device
Connected refers to your device's connection status when the request was sent
The last value indicates whether the channel is still valid (Active) or not (Expired)
Thus, I think you can retry later, since your channel is still valid.

Pyspark: how to stream data from a given API URL

I was given an API URL and a method getUserPosts() which returns the data needed for my data processing function. I am able to get the data using Client from suds.client as follows:
from suds.client import Client
from suds.xsd.doctor import ImportDoctor, Import

url = 'url'
imp = Import('http://schemas.xmlsoap.org/soap/encoding/')
imp.filter.add('filter')
d = ImportDoctor(imp)
client = Client(url, doctor=d)
tempResult = client.service.getUserPosts(
    user_ids='', date_from='2016-07-01 03:19:57',
    date_to='2016-08-01 03:19:57', limit=100, offset=0)
Now, each tempResult will contain 100 records. I want to stream the data from the given API URL into an RDD for parallelized processing. However, after reading the pyspark.streaming documentation, I can't find a streaming method for a custom data source. Could anyone give me an idea of how to do so?
Thank you.
After a while digging, I found out how to solve the problem. I employed Kafka streaming: basically, you create a producer that polls the given API and publishes to a topic and port for communication, and then a consumer that listens to that specific topic and port to start streaming the data (a sketch follows below).
Note that the producer and consumer must run as different threads in order to achieve real-time streaming.
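A minimal sketch of that setup, assuming kafka-python, the Spark 1.x/2.x Kafka integration, and a local broker on localhost:9092; the topic name user_posts is made up, and client is the suds client from the question:

import threading
from kafka import KafkaProducer                     # kafka-python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

def produce():
    """Poll the SOAP API in pages and publish each batch to Kafka."""
    producer = KafkaProducer(bootstrap_servers='localhost:9092')
    offset = 0
    while True:
        result = client.service.getUserPosts(
            user_ids='', date_from='2016-07-01 03:19:57',
            date_to='2016-08-01 03:19:57', limit=100, offset=offset)
        producer.send('user_posts', str(result).encode('utf-8'))
        offset += 100

# The producer runs on its own thread, as noted above.
threading.Thread(target=produce, daemon=True).start()

# Consumer side: a Spark stream subscribed to the same topic.
sc = SparkContext(appName='user-posts')
ssc = StreamingContext(sc, 10)   # 10-second micro-batches
stream = KafkaUtils.createDirectStream(
    ssc, ['user_posts'], {'metadata.broker.list': 'localhost:9092'})
stream.map(lambda kv: kv[1]).pprint()   # records arrive as (key, value) pairs
ssc.start()
ssc.awaitTermination()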

Failed to respond to incoming message using data source row correlation

I am totally new to Parasoft Virtualize. I created a virtual asset and added three fields in my data source correlation; my request XML has 4 fields. I am getting this error after processing the request:
<Reason>Failed to respond to incoming message using data source row correlation</Reason>
<Details>Values in incoming message did not match values in the data source "GetSubscriptionOperationsRequest"</Details>
Any suggestions on what might be the problem here?
The message which you are getting explains your problem:
the virtual asset cannot correlate your request with data in your data source to build a response.
Try sending a request with one of the values from a column used for the data source correlation in your responder.
That value should be in the request field used by the correlation.
Maybe you should try adding a catch-all responder to see whether you have a correct correlation between incoming requests and your responder.
You can also ask Parasoft's technical support for help.

How to write to different HBase tables in Apache Flume

I have configured Apache Flume to receive messages (JSON type) on an HTTP source. My sinks are MongoDB and HBase.
How can I write a message to different collections and tables according to a specified field?
For example: let's assume we have T_1 and T_2, and an incoming message arrives that should be saved in T_1. How can I handle those messages and assign where they are to be saved?
Try using the Multiplexing Channel Selector. The default one (the Replicating Channel Selector) copies the Flume event produced by the source to all of its configured channels. The multiplexing one, by contrast, puts the event into a specific channel depending on the value of a header within the Flume event.
In order to create such a header according to your application logic, you will need to create a custom handler for the HTTPSource. This can be done by implementing the HTTPSourceHandler interface of the API, as sketched below.
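A rough sketch of such a handler; the class name, the header name table, and the crude string matching on the JSON body are made up for illustration (a real handler would parse the JSON properly):

import java.io.BufferedReader;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.servlet.http.HttpServletRequest;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.event.EventBuilder;
import org.apache.flume.source.http.HTTPSourceHandler;

public class TableRoutingHandler implements HTTPSourceHandler {

  @Override
  public List<Event> getEvents(HttpServletRequest request) throws Exception {
    // Read the raw JSON body of the POST.
    StringBuilder body = new StringBuilder();
    BufferedReader reader = request.getReader();
    String line;
    while ((line = reader.readLine()) != null) {
      body.append(line);
    }
    // Decide the target from a field in the message and expose it as a header
    // that the multiplexing channel selector can match on.
    String table = body.toString().contains("\"table\":\"T_1\"") ? "t1" : "t2";
    Map<String, String> headers = new HashMap<>();
    headers.put("table", table);
    Event event = EventBuilder.withBody(body.toString().getBytes("UTF-8"), headers);
    return Collections.singletonList(event);
  }

  @Override
  public void configure(Context context) {
    // No configuration needed for this sketch.
  }
}

The selector configuration would then map the header values to channels, e.g. agent.sources.s1.selector.header=table and agent.sources.s1.selector.mapping.t1=c1, much like the regex-based example in the next answer.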
You can use a regex interceptor for tagging the message type, plus multiplexing for sending it to the right destination.
For example, based on the message "TEST1":
Extract the string/field with a regex:
agent.sources.s1.interceptors=i1
agent.sources.s1.interceptors.i1.type=regex_extractor
agent.sources.s1.interceptors.i1.regex=(TEST1)
Assign the interceptor's capture to serializer SE1, which stores it in the header named Test:
agent.sources.s1.interceptors.i1.serializers=SE1
agent.sources.s1.interceptors.i1.serializers.SE1.name=Test
Send to the required channel; the channels (c1, c2) can be mapped to different sinks:
agent.sources.s1.selector.type=multiplexing
agent.sources.s1.selector.header=Test
agent.sources.s1.selector.mapping.TEST1=c1
All events matching the regex will go to channel c1; the others are defaulted to c2:
agent.sources.s1.selector.default=c2