NAudio - Detect Audio via Application

Windows Volume Mixer shows audio output for individual applications.
Using NAudio, what is the right way for me to tap into this information? I essentially want to be able to make my application say:
Always record all audio input/output. Unless otherwise specified, only keep a buffer of the last 30 seconds and throw the rest away. (I know how to do this)
When Skype, Vonage, or Ring Central plays audio for more than 5 seconds, ask the user if they want to start saving the audio. (How would I do this?)
If so, save the 30-second buffer to a file and then start recording live. (I know how to do this)
Thanks for the help!

Windows won't let you capture audio from individual applications. You can use NAudio's WasapiLoopbackCapture to capture audio from all applications.
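For reference, here's a minimal sketch of that approach: capturing whatever the default output device is rendering with WasapiLoopbackCapture. The rolling 30-second buffer you already have would plug into the DataAvailable handler.

    using System;
    using NAudio.Wave;

    class LoopbackCaptureSketch
    {
        static void Main()
        {
            // Captures the mix of everything the default render (output) device is playing.
            using var capture = new WasapiLoopbackCapture();

            capture.DataAvailable += (sender, e) =>
            {
                // e.Buffer holds raw audio in capture.WaveFormat (typically 32-bit IEEE float).
                // This is where a rolling 30-second buffer would consume the bytes.
                Console.WriteLine($"Got {e.BytesRecorded} bytes");
            };

            capture.StartRecording();
            Console.WriteLine($"Capturing in {capture.WaveFormat}. Press Enter to stop.");
            Console.ReadLine();
            capture.StopRecording();
        }
    }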
If you just want to see audio output levels for all apps, that can be achieved with the IMMDevice APIs, which NAudio has wrappers for. NAudio doesn't come with a specific demo showing that, but there's another open-source project, EarTrumpet, that you could explore to see how it's done.
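As a rough illustration of that per-application idea (the EarTrumpet approach), the sketch below polls each audio session's peak meter on the default render device and maps the session back to a process name. The 0.01 threshold, the 500 ms polling interval, and the idea of watching for names like "Skype" are assumptions you'd adapt for your 5-second rule; the NAudio types themselves (MMDeviceEnumerator, AudioSessionManager, AudioMeterInformation) are the wrappers mentioned above.

    using System;
    using System.Diagnostics;
    using System.Threading;
    using NAudio.CoreAudioApi;

    class SessionPeakMonitorSketch
    {
        static void Main()
        {
            var enumerator = new MMDeviceEnumerator();
            var device = enumerator.GetDefaultAudioEndpoint(DataFlow.Render, Role.Multimedia);

            while (true)
            {
                // The session list is cached; depending on your NAudio version you may
                // need to refresh it to see applications that started playing later.
                var sessions = device.AudioSessionManager.Sessions;

                for (int i = 0; i < sessions.Count; i++)
                {
                    var session = sessions[i];
                    float peak = session.AudioMeterInformation.MasterPeakValue; // 0.0 to 1.0

                    string processName = "unknown";
                    try
                    {
                        processName = Process.GetProcessById((int)session.GetProcessID).ProcessName;
                    }
                    catch (ArgumentException)
                    {
                        // The process that owned this session may have exited.
                    }

                    if (peak > 0.01f) // arbitrary "is playing" threshold
                    {
                        Console.WriteLine($"{processName} is playing audio (peak {peak:F2})");
                        // Track how long e.g. Skype stays above the threshold here,
                        // and prompt the user once it has been playing for 5 seconds.
                    }
                }

                Thread.Sleep(500); // arbitrary polling interval
            }
        }
    }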

Related

How to measure the performance of my site's video streaming and playback?

I have developed a site that hosts user videos. I store the video files in AWS S3, deliver them through AWS CloudFront, and use video.js as the site's player, with HTML5 as the default and Flash as the fallback.
Generally the video streaming seems to work fine, but in some cases I receive complaints from users about slow or choppy video playback. I want to create some tests to measure streaming performance so I can distinguish between problems on the user's side (e.g. a slow connection) and problems with my service.
Are there any best practices or tools for collecting video delivery metrics? I'm interested in open-source solutions or something I can implement myself, because it's just a personal project, but I don't want to reinvent the wheel.
Testing progressive download implies checking the transmission bandwidth and its continuity. For example, with a high transmission rate the initial client buffer fills faster and playback starts sooner. However, losing that transmission capacity later on can cause re-buffering. The total transmission time of your file must be lower than the video duration.
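As a quick back-of-the-envelope illustration of that last constraint (the figures, and the choice of C# for the sketch, are purely for illustration):

    using System;

    class ProgressiveDownloadCheck
    {
        static void Main()
        {
            // Hypothetical figures: a 100 MB file, 10 minutes of video,
            // and a client whose measured bandwidth is 2 Mbit/s.
            const double fileSizeBytes = 100_000_000;
            const double videoDurationSeconds = 600;
            const double bandwidthBitsPerSecond = 2_000_000;

            double transmissionTimeSeconds = fileSizeBytes * 8 / bandwidthBitsPerSecond; // 400 s

            Console.WriteLine($"Transmission time: {transmissionTimeSeconds:F0} s, video duration: {videoDurationSeconds:F0} s");
            Console.WriteLine(transmissionTimeSeconds < videoDurationSeconds
                ? "The download stays ahead of playback."
                : "The download is slower than playback; expect re-buffering.");
        }
    }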
To identify potential issues, you can start with the S3 bucket logs and the CloudFront cache statistics and access logs.
There's a load-testing tool written in Java called Apache JMeter. It doesn't execute JavaScript, so it must be configured to request the video files directly rather than drive the player.
The disadvantage of using a load-testing tool from a single location is pretty evident: different geographical areas and carriers have different characteristics, so test results will differ.
There are also online, non-open-source tools that can load test from multiple locations, but they are generally paid, though some offer free trials.
Here's another way to look at this.
"...but in some cases I receive complaints from users about slow or choppy video playback."
If you're using an adaptive HLS stream, and you're using CloudFront, and the video is still choppy for some users, that's probably because of their own internet connection speeds.
In that case, you can encode your video in multiple resolutions (using just one AWS MediaConvert job, btw), like 1080p, 720p, 360p, 240p, 144p, etc.
And then Videojs has a stream-switcher plugin that will 1) automatically start playing the highest resolution that's right for the viewer's connection, and no higher, and 2) give the user a "Settings" (gear) icon in the control bar that they can use to switch resolutions manually.
That way, even those with really poor internet connections should be able to watch your video.
Of course, the other alternative is to use progressive-download videos, where the viewer can simply click play, immediately click pause, wait for the video to buffer, and then play it after it's fully downloaded.
Check out the Videojs Resolution Switcher demo here.
-- Ravi Jayagopal

Stream audio from microphone in iOS

I am trying to get audio from the microphone and stream it as data through a socket to a Node server. Since I haven't really done this before, I am confused about how to do this properly.
First of all, how do you actually stream data from a specific microphone? I need the audio to be streamed, not recorded and then sent. In other words, it needs to be like a call, where the words you say are automatically sent to the recipient rather than recorded and then sent as some sort of recording.
Second of all, how do I specify where the audio comes in? I have seen some questions about this, but I couldn't find a good solution, especially for my case, where the audio input comes from a Lightning audio accessory.
I couldn't find a good example of how to do this using AVAudioSession. Are there any good resources (examples, tutorials) that I can use to help me?
Thanks!

Can you obtain the audio stream data sent to the system output device using CoreAudio?

Is it possible to obtain a stream of audio data arriving at the system output (speakers, headphones, etc.) using CoreAudio or another framework?
Example: You're listening to a song on iTunes while watching a YouTube video, all while playing a computer game that makes sounds of its own, all of which are being played through your computer's speakers (Probably terribly annoying). My app would need to receive the entire mix as streaming data.
Thanks in advance.
Not at the level of Core Audio or any other app framework available to a user application. Some audio output capture/snoop apps may do this with a kernel extension (kext), or perhaps a replacement audio hardware driver.

Are there OSMF concepts for scheduled live events?

Lots of ancient, non-negotiable history due to mergers and acquisitions, so I realize there are better ways to do all of this, however... I am faced with the following:
I have an OSMF-based video player where a particular playlist item (for a live video) must do the following:
play a preRoll prior to displaying a countdown
display a countdown until the video start time (synched with the server time)
play another preRoll prior to video playback
play the live stream until the server time reaches the stream end time
then play a post roll video
I've gotten this working for the most part, but I'm running into walls with the arbitrary insertion of "ads", since I don't want to trigger events associated with loading new media. If I try to inject a new ad (particularly after the stream has played), the live stream will display again. While I could figure out some horrible way to make this work, I just wanted to make sure first that I'm not missing something critical about OSMF and live events. I'm also a bit uncertain as to what is native OSMF in the architecture I'm working with and what is homegrown.
1) Does OSMF have a concept of a scheduled live event that might make this easier?
2) Does OSMF have an option to arbitrarily insert a video into playback, based on some external call, without changing the playlist index or returning to the beginning of the video?

What is the fastest way to send a small video from one iOS device to another?

I'd like to make an application in which one person takes a short video, maybe five to twenty seconds in length, and sends it to another user as quickly as possible. An example would be an instant replay at a sporting event. What would be the fastest and most reliable way to transfer a video of that size? I am considering the following two options, but am open to other suggestions.
uploading the video to my own server and performing some kind of push operation
performing a direct transfer over a shared wifi network (what about long distance?)
I'd take your first option:
Record your video and compress it to a reasonable size on the source device.
Upload the video to an external server.
From the server, send push notifications to the particular users who should be able view the video.
If recipients/consumers of the video could stream the uploaded video from the server, it would be a pretty reasonable user experience.