Problem of Command Window with LIVE555 testRTSPClient project - live555

I would like to capture audio from the microphone of an IP (network) camera and stream it in real time so that I can listen to it live.
I downloaded and built the LIVE555 library.
I compiled the testRTSPClient.cpp project from the testProgs directory of live555 (without modifying the code).
The compile succeeds, but the command window opens and then closes immediately. What's the problem?

The testRTSPClient application expects a URL argument.
From the documentation:
RTSP client
testRTSPClient is a command-line program that shows you how to open and receive media streams that are specified by a RTSP URL - i.e., a URL that begins with rtsp://
You should open a console and execute something like:
testRTSPClient rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov
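Launched with no arguments (for example by double-clicking the built executable), the program prints its usage text and returns immediately, which is why the console window closes at once. A rough, simplified sketch of that kind of startup check (not the actual testRTSPClient source; see testProgs/testRTSPClient.cpp for the real code):

#include <cstdio>

int main(int argc, char** argv) {
    // With no rtsp:// URL on the command line there is nothing to stream,
    // so print a usage line and exit; a console window opened just for this
    // process disappears with it.
    if (argc < 2) {
        std::fprintf(stderr, "Usage: %s <rtsp-url-1> ... <rtsp-url-N>\n", argv[0]);
        return 1;
    }
    // ... open and receive the media streams named by argv[1..argc-1] ...
    return 0;
}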

Related

MacOS Catalina 10.15.7 AudioUnit Microphone notification callback not invoked

I am building a command-line tool, using only command-line tools (mainly clang), in ObjC++ with the AudioUnit v2 (C) API. Output to the speakers works fine, but the microphone input callback is never invoked. The iTerm or Terminal hosts have access according to Settings. The executable also has an embedded Info.plist, although I do not think this is relevant.
The precise security model is not clear to me; it would look like a major security hole if it worked (anything run from the terminal would have access). My guess is that the process launched by an "App" has permissions which then propagate to any child process. However, this view is confused by another case in which an executable I generate does network access (as it happens, only to localhost, because it is a regression test); there it is the executable that asks for network access, not the terminal.
The source code is actually written in Felix, which is translated to C++ and then compiled and linked by clang with the -ObjC option, so embedded Objective-C is supported. The translator is mature enough to have reasonable confidence in its correctness in this kind of simple application. The AudioUnit configuration for the microphone input is:
// configure
var outputElement = 0u32;   // element (bus) 0 = output
var inputElement  = 1u32;   // element (bus) 1 = input

// establish the input callback on the output audio unit
status = AudioUnitSetProperty(
  outputAudioUnit,
  kAudioOutputUnitProperty_SetInputCallback,
  kAudioUnitScope_Global,
  inputElement,
  (&inputCallback).address,
  C_hack::sizeof[AURenderCallbackStruct].uint32
);
assert noErr == status;
The inputElement is enabled and the outputElement disabled. A second audio unit is constructed later with similar technology; it pumps a sine wave to the speakers, and that works fine. The actual callback just prints a diagnostic and exits, but the diagnostic is never seen. Originally the terminal had no permissions, and we guessed the code was correct but failed due to lack of permission to access the microphone. The executable still has no permission, but the terminal does now (if I run the executable from the file manager, a terminal pops up).
No errors are reported at any stage. The callback simply isn't invoked.
To get a callback, you need to:
1. enable IO
2. set the audio unit input device
Number 2 trips people up because it's not necessary for output [which sensibly defaults to the default output device], nor is it necessary on iOS, probably because there is no concept of an audio device there, at least not in the AudioUnit API.
Surprisingly, both of these requirements are actually documented! Technote 2091 covers the steps needed to record audio using AudioUnits, and code listings 3 and 4 have sample code that enables IO and sets the input device. Listing 4 sets the audio unit input device to whatever the default input device was, but any input device will do.
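A minimal sketch of both steps using the plain C AudioUnit/CoreAudio API (callable from C++ or ObjC++); "unit" is assumed to be the already-created output audio unit from the question, the bus numbers match the question's inputElement/outputElement, and error handling is reduced to asserts:

#include <AudioUnit/AudioUnit.h>
#include <CoreAudio/CoreAudio.h>
#include <cassert>

static void configureMicrophoneInput(AudioUnit unit) {
    UInt32 enable = 1, disable = 0;

    // Step 1: enable IO on the input element (bus 1) and disable it on the
    // output element (bus 0).
    OSStatus status = AudioUnitSetProperty(unit, kAudioOutputUnitProperty_EnableIO,
                                           kAudioUnitScope_Input, 1, &enable, sizeof enable);
    assert(status == noErr);
    status = AudioUnitSetProperty(unit, kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Output, 0, &disable, sizeof disable);
    assert(status == noErr);

    // Step 2: set the audio unit's input device; here the system default
    // input device is looked up, but any input device will do.
    AudioDeviceID inputDevice = kAudioObjectUnknown;
    UInt32 size = sizeof inputDevice;
    AudioObjectPropertyAddress addr = {
        kAudioHardwarePropertyDefaultInputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    status = AudioObjectGetPropertyData(kAudioObjectSystemObject, &addr,
                                        0, nullptr, &size, &inputDevice);
    assert(status == noErr);
    status = AudioUnitSetProperty(unit, kAudioOutputUnitProperty_CurrentDevice,
                                  kAudioUnitScope_Global, 0, &inputDevice, sizeof inputDevice);
    assert(status == noErr);
}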
Since macOS Mojave (10.14), you need an NSMicrophoneUsageDescription string in your Info.plist. Without this, your app is aborted with an exception. With this, the user is shown a prompt requesting permission to access input devices. You can control when this happens using code found here.
For a command line tool, you can embed an Info.plist file during the link stage.
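One way to do that embedding from source instead of a separate linker step (a sketch; the bundle identifier and description text below are made up) is to place the plist text in the binary's __TEXT,__info_plist section, which has the same effect as passing -sectcreate __TEXT __info_plist Info.plist at link time:

// Embeds a minimal Info.plist in the __TEXT,__info_plist section of the
// executable so a plain command-line tool can carry
// NSMicrophoneUsageDescription without an app bundle.
__attribute__((used, section("__TEXT,__info_plist")))
static const char kInfoPlist[] =
    "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
    "<plist version=\"1.0\"><dict>\n"
    "  <key>CFBundleIdentifier</key><string>com.example.mictool</string>\n"
    "  <key>NSMicrophoneUsageDescription</key>\n"
    "  <string>Records audio from the default input device.</string>\n"
    "</dict></plist>\n";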
On Catalina you also seem to need to opt into audio-input-enabled sandboxing or the hardened runtime (or both!). Without one of these your callback is called, but with silence! Both of these runtime environments are enabled using "entitlements", which are metadata embedded in your app via codesigning, so you will need some form of codesigning. I don't think this necessarily means you will need a certificate from Apple; there is "local/ad-hoc" code signing, which seems to embed entitlements without a certificate, although I'm not sure how distributable the resulting binaries will be.

getUserMedia: access system output stream

I know how to access audio input devices via getUserMedia() and route them to the WebAudio API. This works fine for microphones and such. But in my use-case, I would rather like to hook into the audio stream of an output device. The use case is that I want to create a spectrum analyser for audio coming from a digital audio workstation (DAW) running on the same PC.
I tried to enumerate the devices and call getUserMedia() with the device ID of an audio device, but the returned stream contained only silence. The only solution I have found so far is to install an audio loopback device (like Soundflower on Macs) to route the DAW's output to, and then use that as an input device for getUserMedia(). But this requires the user to install third-party software.
Is there any way to hook directly into the audio stream of an output device instead, before it is actually sent to the physical device (speaker or external soundcard)?
This can be achieved using the desktop capture APIs (chrome.desktopCapture.chooseDesktopMedia). An example for Chrome is included here.

How do I test customized stream playback paths in Red5

I have a Red5 application that I inherited from a previous developer and I am trying to get it running correctly. I am able to start the Red5 server and stream video files from my /webapps/myapp/streams/ directory. I can test this by going to http://localhost:8080/myApps/streams/testVid.mp4, and the video plays normally.
However, I need to be able to stream my videos from any directory on my machine. The application already has an Application.java, a CustomFilenameGenerator.java, the needed bean for CustomFilenameGenerator in red5-web.xml, and a playbackPath and recordPath in my red5-web.properties file.
From my research it seems like the previous developer that worked on this project was probably able to get the streaming working from a custom path, but I am not sure how to get it running.
Assuming that everything with the application/configurations is correct how do I actually test it, if the files I want to stream are in C:\Users\myUser\Desktop\StreamRecordings?
I have tried going to http://localhost:8080/myApp/StreamRecordings/testVid.mp4 but I get a 404 error (probably because StreamRecordings is not a subdirectory of myApp).
Thanks!

cocoa ffmpeg streaming created improper file

I am trying to capture a video stream and upload it to an RTMP server using FFmpeg in my Cocoa app. I am able to play the stream via ffplay, but when I stop streaming the file is not created properly.
The file size shows as 1 KB even when streaming ran for 5 minutes.
However, if I save to my local system it works fine: a proper file is created and I am able to view it.
The RTMP server is also fine, because files streamed from Windows are created and saved on that server.
Thanks in advance. :)
Updating FFmpeg to version 3.1.4 solved my problem of streaming via the terminal.

Creating a WebRTC receiver

I am new to WebRTC and am trying to figure out how to create a program outside a browser that receives a WebRTC audio stream and outputs it to the speakers.
Are there any WebRTC libraries for Java or C#?
The receiver will be running on a Linux machine.
--
I've been thinking about using getUserMedia() to access the microphone. But then:
In what format will such a stream be transmitted?
Let's say I use WebRTC2SIP and build a Java endpoint using JSIP;
or I just use a socket and send the stream over HTTP.
What audio format will I get on the receiver side? So far I have read that WebRTC compresses the stream somehow.
I guess there are two ways for you:
1. Build the whole WebRTC voice engine for Android/iOS or Mac etc., and just use the API provided by the voice engine (VoE).
2. Build standalone NS/VAD/AECM/AGC modules and use them in your project. For example, to use a standalone NS (noise suppression) module on an Android phone, you record sound from the microphone with AudioRecord (Java layer, Android side), run the noise-suppression processing on that data (JNI layer, WebRTC side), and finally play back the processed data using AudioTrack (Java layer, Android side).
EDIT:
For the 2nd situation, the format is raw PCM data.
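A minimal sketch of the JNI bridge implied by option 2, assuming a frame of raw 16-bit PCM recorded with AudioRecord on the Java side; the package, class, and method names (com.example.audio.NativeProcessor.processFrame) are made up, and the actual WebRTC module call is left as a comment because its exact signature depends on the WebRTC revision you build against:

#include <jni.h>
#include <vector>

// Receives one frame of raw 16-bit PCM from the Java layer, processes it in
// native code, and returns the processed frame for playback via AudioTrack.
extern "C" JNIEXPORT jshortArray JNICALL
Java_com_example_audio_NativeProcessor_processFrame(JNIEnv* env, jobject,
                                                    jshortArray frameIn) {
    jsize n = env->GetArrayLength(frameIn);
    std::vector<jshort> pcm(n);
    env->GetShortArrayRegion(frameIn, 0, n, pcm.data());

    // ... run the standalone NS/VAD/AECM/AGC processing on pcm here ...

    jshortArray frameOut = env->NewShortArray(n);
    env->SetShortArrayRegion(frameOut, 0, n, pcm.data());
    return frameOut;
}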
Check out the working Audio demo and code at demo.easyrtc.com
The code is all open source and can be checked out at https://github.com/priologic/easyrtc
You can look for any known issues around easyRTC at our forum at
https://groups.google.com/forum/#!forum/easyrtc
Also check out our main site at easyrtc.com