Unable to stream in stereo - agora.io

I am using a 3D mic that works like a charm on the iPhone via a 1/8-inch jack into an adapter. It works great with the Camera app, so I know the hardware is able to capture stereo.
However, in my agora.io iPhone app I have the following settings:
audio.setChannelProfile(.liveBroadcasting)
audio.setAudioProfile(.musicHighQualityStereo, scenario: .showRoom)
Is there anything else I need to do for it to work?

I was able to reach Agora support; here is the answer I received:
iOS devices do not support stereo audio capture. You would need to use an external video source that supports stereo audio to do the capture.
I wish this were included in the iOS documentation.
For my use case, a Mac app would be better, so I'm just going to go with that!
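For anyone who still needs stereo from the phone itself, the external-source route that support describes would look roughly like the sketch below. This is a hedged sketch only: the method names follow the Agora iOS SDK, but the 48 kHz rate, the capture side, and the push call are assumptions to verify against the SDK version you ship:
import AgoraRtcKit  // older SDK releases import AgoraRtcEngineKit instead

// Same profile settings as in the question above.
let audio = AgoraRtcEngineKit.sharedEngine(withAppId: "YOUR_APP_ID", delegate: nil)
audio.setChannelProfile(.liveBroadcasting)
audio.setAudioProfile(.musicHighQualityStereo, scenario: .showRoom)

// Declare an external stereo source instead of the built-in (mono) capture.
// NOTE: the exact name varies by SDK version, e.g.
// enableExternalAudioSource(withSampleRate:channelsPerFrame:) in older
// releases; check the headers of the version you use.
audio.setExternalAudioSource(true, sampleRate: 48000, channels: 2)

// Then push every interleaved 16-bit PCM buffer you capture yourself,
// e.g. from AVAudioEngine or an external interface (again, the exact
// signature varies by SDK version):
// audio.pushExternalAudioFrameRawData(buffer, samples: sampleCount, timestamp: ts)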

Related

Using DJI Windows SDK to display/decode a video streamed over UDP/RDP

Does anybody know whether it's possible to use the DJI Windows SDK to decode a video in real time (render video frames as the video is retrieved frame by frame)?
I don't see anything relevant in the documentation or API reference sections of the DJI Windows SDK.
At this point I'll have to dig into the samples and see if there is anything useful there; otherwise, the online documentation seems rather useless.
Here is the DJI Windows SDK documentation.
I agree with you that the DJI documentation is lacking, but what you are asking is unclear.
"Use the DJI Windows SDK to decode a video" — so you have an online video and you want to decode it? Why not use ffmpeg and ffplay? We use those for the DJI Tello and IP cameras all the time.
If you want to grab the feed from the drone, there is a DJI GitHub sample that shows you how: https://github.com/DJI-Windows-SDK-Tutorials/Windows-FPVDemo/tree/master/DJIWSDKFPVDemo
So I'm not 100% sure what your use case is.
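For what it's worth, the ffplay route is a one-liner if the drone is pushing raw H.264 over UDP; the port below is the Tello's default video port and is only an example:
ffplay -fflags nobuffer -i udp://0.0.0.0:11111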

Xcode & Objective-C: Is the TookBox SDK able to make a SIP video call between a video entry-phone and an iOS device?

Does anyone know whether the TookBox SDK is able to make a SIP video call between a video entry-phone and an iOS device?
Thanks!
Yes. If you use WebRTC, then definitely yes.

Android Things - Raspberry Pi - Google Mobile Vision Support

I'm trying to run an Android app on the Android Things OS.
The app uses facial detection as a first-step filter before facial recognition.
The recognition is handled by a third-party (remote) API, so there is nothing to worry about there, but the detection is carried out by the Google Mobile Vision API for Android. The problem I'm facing is that the app crashes every time a camera preview is about to start.
The code of the app is derived from this example: https://github.com/googlesamples/android-vision (Face tracking). So if this code runs, my app runs.
I also know that there is a known issue with the Raspberry Pi camera when trying to create more than one output surface.
My questions are:
(1) Is there a way to successfully run the code in the example https://github.com/googlesamples/android-vision (Face tracking)?
(2) When is that known issue going to be resolved?
Thanks in advance.
Regards,
Gersain Castañeda.
The latest version of Android Things (DP6), which is on API 27, supports the new Camera2 API, as explained here.
The Camera2 API supports more than one output surface, and it runs on Android Things.
For more inspiration on how to do this, check this tutorial (how to use Camera2) and this very useful sample (how to use Google Vision).
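For a concrete feel of that path, here is a trimmed sketch of opening the camera through Camera2 (permissions, threading, and the actual session setup are omitted, and the class and handler names are placeholders, not from the linked sample):
import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CameraManager;
import android.os.Handler;

public class VisionCamera {
    // Opens the first camera via Camera2. Once opened, this device can
    // host multiple output surfaces in a single capture session (e.g. a
    // preview surface plus an image reader for Mobile Vision), which the
    // legacy camera API on the Pi could not do.
    public void open(Context context, Handler backgroundHandler) throws CameraAccessException {
        CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        String cameraId = manager.getCameraIdList()[0];
        manager.openCamera(cameraId, new CameraDevice.StateCallback() {
            @Override public void onOpened(CameraDevice camera) {
                // createCaptureSession(...) with your output surfaces goes here.
            }
            @Override public void onDisconnected(CameraDevice camera) { camera.close(); }
            @Override public void onError(CameraDevice camera, int error) { camera.close(); }
        }, backgroundHandler);
    }
}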

getUserMedia: access system output stream

I know how to access audio input devices via getUserMedia() and route them to the WebAudio API. This works fine for microphones and such. But in my use-case, I would rather like to hook into the audio stream of an output device. The use case is that I want to create a spectrum analyser for audio coming from a digital audio workstation (DAW) running on the same PC.
I tried to enumerate the devices and call getUserMedia() with the device id of an audio device, but the stream returned only showed silence data. The only solution I found so far is to install an audio loopback device (like Soundflower on Macs) to route the DAW's output to and then use this as an input device for getUserMedia(). But this will require the user to install 3rd party software.
Is there any way to hook directly into the audio stream of an output device instead, before it is actually sent to the physical device (speaker or external soundcard)?
This can be achieved using the desktop capture APIs (chrome.desktopCapture.chooseDesktopMedia). An example for Chrome is included here.
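The API named above requires a Chrome extension; from a plain web page, the closely related navigator.mediaDevices.getDisplayMedia() can capture tab or system audio in Chrome when the user ticks "share audio" (support varies by browser and OS). A minimal sketch that routes the captured stream into an AnalyserNode:
async function analyzeSystemAudio() {
  // Ask for a display capture that includes audio; a video track must be
  // requested even if we only care about the sound.
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: true });

  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  source.connect(analyser);

  const bins = new Uint8Array(analyser.frequencyBinCount);
  function draw() {
    analyser.getByteFrequencyData(bins); // 0-255 magnitude per frequency bin
    requestAnimationFrame(draw);         // feed your spectrum display here
  }
  draw();
}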

How to use a camera in Corona Simulator?

I'm wondering how to use a camera in the Corona Simulator. I want to test my app (which requires a camera) in the simulator, but I don't know how to enable camera usage. Is there any way to set up my webcam to act as the simulator's camera? I just want to know whether this is possible, or whether I'd have to put my app onto a device to test it with a camera.
Thanks in advance
To access the device camera or photo library you would call something like:
media.show( mediaSource, listener [, filetable] )
In order to test this, you'd have to load the app onto your phone; Corona doesn't let you use a webcam in the simulator. For more information, check out the Corona docs at http://docs.coronalabs.com/api/library/media/show.html
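Putting that together, a minimal listener-based call on the device looks like this (a sketch following the media.show() docs linked above; the positioning is only an example):
local function onComplete( event )
    -- event.target is the display object for the captured photo
    local photo = event.target
    if photo then
        photo.x = display.contentCenterX
        photo.y = display.contentCenterY
    end
end

media.show( media.Camera, onComplete )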