Agora RTC - Agora-SDK [DEBUG]: Ignoring event undefined

I'm currently trying to implement Agora's RTC, and so far it has been working. However, there is an inconsistent error that only sometimes occurs: when joining a channel that already contains an existing user, the following error is shown, the stream is not added, and consequently it can't be played.
Agora-SDK [DEBUG]: Ignoring event undefined {uid: xyz}
In this case, xyz is the user ID of that existing user. I thought it might be an issue with my code, but it only fails sporadically, and it doesn't seem to be a timing lag or anything like that either. Has anybody encountered this and found a solution?

Ba, I was experiencing the same issue. I am not sure of the setup you are using to call certain Agora functions, but the problem I ran into was that sometimes my publish-local-stream function would be called before Agora was done capturing the user's media. I found this error in the console accompanied by "No track in stream" or something close to that. My fix was to ensure that the Agora init() function, which captures the user's media, completed before publishing the local stream; a sketch of that ordering is below.
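For reference, a minimal sketch of that ordering, assuming the legacy callback-based Agora Web SDK (v3.x); appId, token, and channel are placeholders:

const client = AgoraRTC.createClient({ mode: "rtc", codec: "vp8" });

client.init(appId, () => {
  client.join(token, channel, null, (uid) => {
    // createStream() only describes the stream; init() is what actually
    // captures the user's media via getUserMedia.
    const localStream = AgoraRTC.createStream({ streamID: uid, audio: true, video: true });
    localStream.init(
      () => {
        // Publish only once capture has completed; publishing earlier is
        // what produced "No track in stream" and the ignored event here.
        client.publish(localStream);
      },
      (err) => console.error("Media capture failed", err)
    );
  });
});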

Related

Why do successive calls to ImageCapture.takePhoto() fail in my PWA?

I am writing a PWA that does some light computer vision on successive frames from a mobile device's camera. When calling takePhoto() a second (or sometimes third) time, I get the following error: Uncaught (in promise) DOMException: The associated Track is in an invalid state.
It is noted in the spec that when calling takePhoto(), devices "MAY temporarily stop streaming data, reconfigure themselves with the appropriate photo settings, take the photo, and then resume streaming." It is also noted that "If the operation cannot be completed for any reason (for example, upon invocation of multiple takePhoto() method calls in rapid succession), then reject p with a new DOMException whose name is UnknownError, and abort these steps."
Why? Is there a way around this with grabFrame()? Will it change in the future? Is there a discussion within the Chrome team?
I note that I am able to take photos in quick succession successfully using a webcam attached to my PC, but NOT using the camera on my phone.
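One possible workaround is to serialize the calls so a new takePhoto() is never issued before the previous one settles, and to fall back to grabFrame() when the track is in an invalid state. A minimal TypeScript sketch, assuming an existing MediaStream named stream (takePhotoQueued is a hypothetical helper name):

const track = stream.getVideoTracks()[0];
const capture = new ImageCapture(track);

let busy: Promise<void> = Promise.resolve();

// Queue each shot behind the previous one so takePhoto() never runs while
// the track may still be reconfiguring; if it rejects anyway, fall back to
// grabFrame(), which returns the current frame without reconfiguring.
function takePhotoQueued(): Promise<Blob | ImageBitmap> {
  const result = busy.then(() =>
    capture.takePhoto().catch(() => capture.grabFrame())
  );
  busy = result.then(() => undefined, () => undefined);
  return result;
}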

Error Domain=NSPOSIXErrorDomain Code=28 “No space left on device” UserInfo={_kCFStreamErrorCodeKey=28, _kCFStreamErrorDomainKey=1}

Lately, numerous network requests made with Alamofire from our iOS devices have been failing with the following error:
Error Domain=NSPOSIXErrorDomain Code=28 "No space left on device"
UserInfo={_NSURLErrorFailingURLSessionTaskErrorKey=LocalDataTask .<3>,
_kCFStreamErrorDomainKey=1,
_NSURLErrorRelatedURLSessionTaskErrorKey=("LocalDataTask .<3>"),
_kCFStreamErrorCodeKey=28}
Our app has a mechanism to send a network request whenever the user has moved ±10 meters. This is checked every 5 seconds, so in theory a call can be made every five seconds. The request occasionally fails with this message, returning no status code, only the error above.
The message implies the error has to do with the available disk or memory space on the device. However, after checking both, there is no connection to be found, since there is plenty of space available. Also, the error occurs on multiple devices, all running iOS 14.4 or higher.
Is there information available about error code 28 and what the culprit could be on iOS devices? Even better: how can this error be prevented?
To address the occurrence of the error itself:
NSPOSIXErrorDomain Code=28 "No space left on device"
With logs in the Xcode terminal:
2021-05-07 15:56:50.873428+0200 MYAPP[21757:7406020] [] nw_path_evaluator_create_flow_inner NECP_CLIENT_ACTION_ADD_FLOW 05CD829A-810D-412F-B86E-7524369359E8 [28: No space left on device]
2021-05-07 15:56:50.877243+0200 MYAPP[21757:7400322] Task <5504BCDF-7DFE-4045-BD4B-E75054636D5B>.<1> finished with error [28] Error Domain=NSPOSIXErrorDomain Code=28 "No space left on device" UserInfo={_NSURLErrorFailingURLSessionTaskErrorKey=LocalUploadTask <5504BCDF-7DFE-4045-BD4B-E75054636D5B>.<1>, _kCFStreamErrorDomainKey=1, _NSURLErrorRelatedURLSessionTaskErrorKey=(
"LocalUploadTask <5504BCDF-7DFE-4045-BD4B-E75054636D5B>.<1>"
), _kCFStreamErrorCodeKey=28}
It appears to be raised when too many NSURLSessions are created without being maintained or closed properly, hitting a limit of (in our tests) 600-700 sessions. The error only started being thrown as of iOS 14, so it would be interesting to know whether a limit was introduced there.
Linked below is a GitHub issue reporting the same problem on JetBrains' Ktor microservices framework, pointing in the same direction and mentioning the invalidation of sessions as the way to prevent it:
https://github.com/ktorio/ktor/issues/1341
In our own project, the origin of the problem turned out to be our use of the StarScream websocket library. This may not be relevant to the issues others are having, but it is explained anyway to give a complete picture; it is the cause of, and fix for, our specific situation.
At first we assumed it had something to do with the URLSession created by Alamofire (the networking library we use), since POST requests started to get cancelled and killing the app seemed to be the only way to make requests again.
However, we also use websocket connections via the StarScream library, which attempts to connect to a socket and, on failure, retries every two seconds for a maximum of two hours. This means that for two hours, every two seconds, we connect to the socket -> receive a connection failure -> disconnect the socket -> connect again. Because we used a singleton for the socket, we assumed multiple URLSessions could not be created, since the socket was only initiated once. However, every call to connect created a new nw_connection object, because the library did not handle the disconnect properly.
[Image: NWConcrete_nw_connection objects generated during the socket connection]
We validated this by using the Instruments app to check for the creation of new nw_connection objects. They were logged there as a "memory leak", and the solution was to make sure we disconnect the socket (invalidating the session) properly before connecting again; the pattern is sketched below.
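For illustration, here is the same teardown-before-reconnect pattern in a TypeScript sketch using a browser WebSocket (in Swift/StarScream the equivalent is to disconnect the socket and invalidate the session before the next connect):

let socket: WebSocket | null = null;

function connect(url: string): void {
  // Tear down any previous connection first; skipping this step is what
  // leaked a new connection object on every retry in the case above.
  if (socket) {
    socket.onclose = null; // stop the old socket from scheduling retries
    socket.close();
    socket = null;
  }
  socket = new WebSocket(url);
  socket.onclose = () => {
    // Retry every two seconds, but always through connect(), so the old
    // connection is closed before a new one is created.
    setTimeout(() => connect(url), 2000);
  };
}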
I hope this answers a big part of the issue, and I will mark my own question as answered since this was the solution to the problem at hand. I think Apple should consider reporting accurately that the number of created objects is limited, instead of raising "No space left on device".
Just wanted to chime in with more info, since we're experiencing the same issue.
Based on our analytics, this issue only started happening with iOS 14; we've verified it occurring on 14.2, 14.4, and 14.5. Naturally, the most straightforward cause for this error would be low memory or disk storage, but we've excluded that option with additional logging, as you seem to have done as well.
A possibly related SO post has attributed the issue to a network inspecting framework that was enabled in their release build. It's worth checking if you use a similar tool.
Another report of this issue, this time on the GitHub of AFNetworking (the predecessor to the Alamofire library you use), says they were able to fix it by limiting the creation of URLSession objects.
For us personally, neither of these did the trick. We created a support ticket with Apple, but this hasn't led to a solution. They requested a small sample project that reproduces the issue, but the error only manifested after 7 days of continuous use in our app. If you have a faster way to reproduce it, it may be worth submitting your own support ticket.
Hopefully this helps you find a solution, if you do please add this to your post to help others!

Stream Analytics - No such host is known

We are in Australia East.
We have an Event Hub receiving events from an application. On the morning of Friday 19 March I created a Stream Analytics job to try to read one of the event streams. This worked successfully against the Event Hub and returned results in the "Input preview" window during setup. This seems to match the timings in the message below (we are about 12 hours ahead of UTC).
However, by Friday afternoon it started failing with one of the error messages "InternalServerError" or "No such host is known". I was working through the drop-down boxes available when creating a new input after selecting "Select Event Hub from your subscriptions", so I know I haven't got anything wrong in the setup.
When trying to submit a support request, we get this slightly cryptic message:
The link doesn't work; it claims Stream Analytics is not supported in Resource Health, even though it is. Does this mean "It's down, sorry, we are working on it" (as in, actively working on it), or is it a canned response we should escalate?
Or is anyone else having trouble creating Stream Analytics jobs, meaning we are suffering an outage? The Azure Status monitor shows they are in good health.
It looks like this was a permission error. I needed to go into the IAM settings of the Event Hub and grant the Stream Analytics job Reader permissions. For some reason this wasn't added automatically when setting up the Stream Analytics job, as I had assumed it would be.
Once the permission was set, the job started successfully.

EventHub Triggered FunctionApp Locally - Where are messages stored?

While developing an Azure Function App with an Event Hub trigger locally, something weird caught my attention. When I started debugging, my consumer function app would occasionally be triggered automatically with a previous message from the Event Hub, even though I hadn't fired my Event Hub publisher at that time! It felt like some event messages were stored in a cache somewhere, unknown to me, from which they kept trying to trigger my function app in the background again and again.
My function's app settings use UseDevelopmentStorage=true and are not related to any of my storage accounts. In addition, the scenario above did not happen every time, but it concerned me because I had no idea why the same message was being triggered multiple times outside of my control. Once a message has been published and consumed by the function app, it should disappear from the Event Hub queue, right?
Can anyone please let me know where I can check my messages, whether stored locally or published in the Azure portal? Thank you very much!
"Can anyone please let me know where I can check my messages, stored locally or when published in the Azure portal?"
Firstly, I'm afraid Azure Functions won't save your messages into a cache. From the official documentation:
When all function execution completes (with or without errors),
checkpoints are added to the associated storage account. When
check-pointing succeeds, all 1,000 messages are never retrieved again.
The above describes the Event Hub checkpointing mechanism. Besides that, you could refer to this blog. AzureWebJobsStorage is set to UseDevelopmentStorage=true when you debug the function locally, so I suggest checking the data in the local storage emulator; when you run it in the portal, the associated storage account is used. A sketch of the local configuration follows.
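For local debugging, that means the checkpoints live in the storage emulator configured in local.settings.json; a minimal sketch (the worker runtime value is just an example, use whatever your project targets):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}

The Event Hubs extension keeps these checkpoints in a blob container of that account (named azure-webjobs-eventhub in the extension versions we have seen), so clearing or recreating the emulator's blob data resets the checkpoint, and previously processed events can then be delivered again.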
Here are some similar issues for your reference:
1.https://github.com/Azure/azure-functions-host/issues/2796
2.https://github.com/Azure/Azure-Functions/issues/589
3.https://github.com/Azure/azure-event-hubs-dotnet/issues/358
Of course, you could open a support ticket to get more help.

Track won't play even though track is streamable

I'm using the SoundCloud API, and so far it was working fine until I hit this track:
https://soundcloud.com/katyperryofficial/roar
I don't know what's wrong with this track, but it just won't play. I can get all of its info, just not the stream part. I checked the Chrome network tab and it gives me this; the request is simply canceled without any error:
Name Method Status Type Initiator Size Time
stream?consumer_key=### GET (canceled) Other 13B 1.02s
Any ideas? Have I missed something?
The SoundCloud devs made some changes in their code and, I don't know why, they are switching back to the RTMP protocol.
Even though the response says the track is streamable, it can't be streamed with the regular stream_url.
After some digging in the dev tools, I noticed that some tracks use the RTMP protocol instead of HTTP/HTTPS.
Anyway, you can find the track's streams at:
http://api.soundcloud.com/tracks/TrackID/streams?consumer_key=XXX
From here, you're on your own. From my research, only Flash (why?) can play RTMP streams; a sketch for picking out a non-RTMP stream, when one exists, is below.
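If you only need streams that play in the browser, here is a minimal TypeScript sketch for reading that endpoint and picking a non-RTMP URL (trackId and consumerKey are placeholders, and getHttpStreamUrl is a hypothetical helper; the exact keys in the response vary, so this just filters by URL scheme):

// Fetch the alternative stream URLs for a track and pick one that is
// playable over HTTP(S) rather than RTMP.
async function getHttpStreamUrl(
  trackId: string,
  consumerKey: string
): Promise<string | undefined> {
  const res = await fetch(
    `http://api.soundcloud.com/tracks/${trackId}/streams?consumer_key=${consumerKey}`
  );
  const streams: Record<string, string> = await res.json();
  // Values are URLs; RTMP streams start with "rtmp://", so keep the
  // first entry whose URL uses http(s).
  const entry = Object.entries(streams).find(([, url]) =>
    url.startsWith("http")
  );
  return entry?.[1];
}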