RTP delay in concurrent calls - webrtc

This is the call scenario:
DTLS=true, earlymedia=false
Call A: make a first web-to-web call; it is established OK and call media is OK. Call ongoing.
Call B: make a second web-to-web call; it is established OK and call media is OK. Call ongoing.
Call C: make a third web-to-web call; it is established with media delayed by 3 seconds. Call ongoing.
While call C is established, the other two calls, A and B, also start having a 3-second media delay.
When call C is hung up, after 10-15 seconds calls A and B return to a media path without delays.
When I check the RTP streams in Wireshark, I see 0 jitter and 0 delta. I also checked the graph; it showed a spike, but I couldn't figure out the relationship.
How can I investigate further?

Related

Marketo API - Maximum of 10 concurrent API calls

I'd like to know what Marketo means by 10 concurrent API calls. If, for example, 20 people use the API at the same time, is it going to crash? And if I make the script sleep for X seconds when I get that limit response and try the API call again, will it work?
Thanks,
Best Regards,
Martin
Maximum of 10 concurrent API calls means that Marketo will process at most 10 simultaneous API requests per subscription.
So, for example, if you have a service that directly queries the API every time it is used, and this service gets called 11 or more times at the same time, then Marketo will respond with an error message for the eleventh call and the rest; the first 10 calls should be processed fine. According to the docs, the error message those subsequent requests receive will have an error code of 615.
If your script is single-threaded (like standard PHP), makes more than 10 API calls, and runs as one instance, then you are fine, since the calls are performed one after another (so they are not concurrent). However, if your script can run in multiple instances, you can hit the limit easily. In that case a fixed sleep won't necessarily help, but you can always check the response code in your script and retry the call if it received an error. This retry process is often called Exponential Backoff. Here is a great article on this topic.
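To illustrate the retry idea, here is a minimal Python sketch of exponential backoff. It assumes the response is a JSON-like dict and that 615 is the rate-limit error code; `make_request` is a hypothetical stand-in for your actual API call:

```python
import random
import time

RATE_LIMIT_ERROR = 615  # Marketo's "concurrency limit exceeded" code, per the docs

def call_with_backoff(make_request, max_retries=5, base_delay=1.0):
    """Retry make_request with exponential backoff while it is rate-limited."""
    for attempt in range(max_retries):
        response = make_request()
        if response.get("errorCode") != RATE_LIMIT_ERROR:
            return response
        # Wait base_delay, 2x, 4x, ... plus a little jitter so parallel
        # instances don't all retry at the same moment.
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))
    raise RuntimeError("still rate-limited after %d retries" % max_retries)
```

The jitter matters when several instances hit the limit together; without it they would all retry in lockstep and collide again.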

IMX6 USB Host controller details

I have a system running WinCE7 on a board with an IMX6 processor. Occasionally, when the system is heavily loaded, I have seen on the USB tracer that for about 2 seconds the IN tokens are not seen (only SOFs are seen, indicating the bus is alive).
Somewhere in the driver a call to the function "IssueBulkTransfer" is made, which I believe goes through the Microsoft library and reaches the BSP.
My question is: if I tell the host controller to send an IN token, will the controller's microcode keep sending IN tokens if it receives NAKs, without my driver having to resend the IN tokens every time (thus using CPU time)?
Thanks
From the description I am assuming that you are talking about an EHCI controller.
Answer in two points:
1 - Yes, the controller will continuously send IN tokens on NAKs until the NAK counter reaches 0 for that endpoint.
2 - The idle period you are seeing is also expected, I suppose. Please see this quote from the EHCI specification, Section 4.9:
Note that when all queue heads in the Asynchronous Schedule either
exhausts all transfers or all NakCnt's go to zero, then the host
controller will detect an empty Asynchronous Schedule and idle
schedule traversal (see Section 4.8.3).
So the controller will stop traversing the schedule list, which might be the 2-second idle you are seeing. The moment the controller starts traversal again, it reloads the NAK counter and starts sending IN tokens again.

RestKit network limits blocks other calls when parallel requests are running

We are facing a problem.
We have background requests that are constantly downloading files (up to 5 MB each). Meanwhile, we have a UI where most navigations require REST calls.
We limited the number of background downloads so they won't suffocate the operation queue that RestKit uses.
When several files are downloading in the background, we see network usage of 1-2 MB (which is understandable).
The problem is: the user navigates through the app, and each navigation makes a quick REST call that should return very little data, but because of the background downloads, the UI call takes forever (~10 seconds).
Priority did not help. I saw that the UI call I make is handled instantly by the operation queue (because we limited the download count, the NSOperationQueue had more room to fulfill other requests).
When we limited the concurrent background download calls to 5, the REST calls from the UI took 10 seconds.
When we limited them to 2, everything worked fine.
The issue here is that because we let only 2 downloads occur in the background, the whole background operation of downloading the files will take forever.
The best scenario would be that every UI call is treated as the most important network-wise and even pauses the background operations, letting only the UI call be handled, and then resumes the background operation - but I'm not sure that's possible.
Any other ideas to address this issue?
You could use 2 RKObjectManagers so that you have 2 separate queues, then use one for 'UI' and the other for 'background'. On top of that you can set the concurrent limits for each queue differently and you could suspend the background queue. Note that suspending the queue doesn't mean already running operations are paused, it just stops new operations from being started.
By doing this you can gain some control, but better options really are to limit the data flow, particularly when running on a mobile data network, and to inform the user what is happening so they can accept the situation or pause it till later.
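As a rough illustration of the two-queue idea (sketched in Python rather than Objective-C/RestKit), here is one pool per "manager" with different concurrency limits, plus a gate that mimics suspending the background queue: already-running jobs finish, but new ones wait before starting:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Two separate pools, mirroring two RKObjectManagers with their own queues.
ui_pool = ThreadPoolExecutor(max_workers=4)          # snappy, for navigation calls
background_pool = ThreadPoolExecutor(max_workers=2)  # throttled file downloads

background_enabled = threading.Event()
background_enabled.set()  # background starts out running

def background_job(work):
    # Like a suspended NSOperationQueue: jobs that already started keep
    # going, but new jobs block here until the queue is resumed.
    background_enabled.wait()
    return work()

def suspend_background():
    background_enabled.clear()

def resume_background():
    background_enabled.set()
```

A UI request handler could call `suspend_background()` on entry and `resume_background()` when its response arrives, which approximates the "pause downloads while the user navigates" behavior described above.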

WCF Service calling an external web service results in timeouts in heavy load environment

I have got the following scenario:
Our .NET client calls our WCF service - which in turn calls an external third party service to retrieve some data. Once the data is retrieved, our WCF service sets some values and then returns the control back to the client. The process of calling the external service has to be synchronous.
My problem is that this all works in a low-load environment, but when load gets high and we start queueing multiple requests, the WCF service starts timing out. We have set the binding's "sendTimeout" property to 5 seconds, and it times out after that.
I've tried replacing the external service with a mocked-out local version, and that handles the load OK; on the other hand, the call to the external service on its own is very quick, around 0.5 seconds. I can only presume that the timeouts happen because too many requests were queued and the WCF service couldn't respond within the allocated 5 seconds.
I have tried the following:
Set the values of maxConcurrentCalls, maxConcurrentSessions & maxConcurrentInstances to very high numbers
Set the value of system.net - connectionManagement - maxconnection to a very high number
Does anyone have any ideas about what we can do in this scenario?
Does your CPU peak during these high-load times? If not, then you might be running out of threads. Make the WCF service that receives the original call asynchronous, and then call the external service asynchronously.
You will have to use the async pattern throughout your call chain to make sure nothing is blocking the thread.
http://msdn.microsoft.com/en-us/library/ms731177.aspx
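The end-to-end async idea can be sketched in Python with asyncio (a stand-in for WCF's async pattern, not actual WCF code): while a request awaits the external service, no thread is pinned, so many requests can be in flight at once:

```python
import asyncio

async def call_external_service(payload):
    # Stand-in for the third-party call; awaiting releases the worker
    # instead of blocking it while the remote service responds.
    await asyncio.sleep(0.2)  # simulated external latency
    return {"data": payload}

async def handle_request(payload):
    # The service method itself is async, so no thread is held
    # for the duration of the outbound call.
    result = await call_external_service(payload)
    result["processed"] = True
    return result

async def main():
    # 50 concurrent requests complete in roughly the latency of one,
    # because none of them holds a thread while waiting.
    return await asyncio.gather(*(handle_request(i) for i in range(50)))
```

In WCF terms this corresponds to task-based (or Begin/End) asynchronous service operations all the way down the call chain, as the linked MSDN article describes.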

WCF call order in single concurrency mode

Assume a WCF service with ServiceBehavior.ConcurrencyMode = Single.
When exactly does the service start blocking for concurrent calls?
For example, say we have two clients: Slow and Fast.
At time 0 Slow starts a slow service call that includes a huge chunk of data.
At time 1 Fast makes a fast service call.
At time 2 the slow data finally arrives and the service code is executed on the server.
Assuming buffers configured in WCF to be larger than the huge chunk, which call will get executed first?
In other words, does blocking start when all call data has been received at the server side or when the client initiates the call?
Is the service blocked during the data transfer or only during code execution?
Unless you configure InstanceContextMode to Single as well, both calls will be executed concurrently. So suppose that you have InstanceContextMode set to Single.
I didn't test it, but I would expect the following behavior. Concurrency mode is a service behavior, so it takes effect once the service instance / instance context is resolved. In buffered mode that happens after the whole message is received; in streaming mode it should happen after the message headers are received. So with a buffered transport I would expect the fast client to be processed first, and with a streamed transport it depends on whether the message headers from the slow client were already received.
But as I wrote before, this is only my expectation.
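That expectation for the buffered case can be illustrated with a small Python simulation (an analogy, not WCF itself): the lock stands in for ConcurrencyMode.Single, and it is only taken once the "message" has fully arrived, so the slow upload does not block the fast client:

```python
import threading
import time

service_lock = threading.Lock()  # models the single-concurrency service
execution_order = []

def client_call(name, transfer_seconds):
    # Message transfer happens BEFORE the concurrency lock is taken:
    # in buffered mode the service only sees the call once the whole
    # message has arrived at the server.
    time.sleep(transfer_seconds)
    with service_lock:
        execution_order.append(name)

slow = threading.Thread(target=client_call, args=("slow", 0.5))
fast = threading.Thread(target=client_call, args=("fast", 0.1))
slow.start()  # time 0: slow client starts sending its huge message
fast.start()  # shortly after: fast client sends a small message
slow.join()
fast.join()
```

Here `fast` executes first even though `slow` initiated its call earlier, matching the buffered-transport expectation above; moving the sleep inside the lock would model the streamed case where dispatch can begin before the body arrives.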