I would like to use an API that checks pronunciation. Its input is audio. I would like to achieve the following: the user speaks into the microphone, an audio file is generated, and it is sent to the API. The API sends back the answer: the evaluation of the pronunciation. How can I achieve this?
I would also be interested in how to display a microphone control to the users.
My main aim is to make it work in a browser.
Thank you very much for your answer.
I haven't been able to set up the audio recording yet.
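Here is a minimal sketch of the browser side using getUserMedia and MediaRecorder. The /api/evaluate-pronunciation endpoint and the response handling are placeholders, since your pronunciation API will define its own contract:

```typescript
// Minimal sketch: record microphone audio in the browser with MediaRecorder,
// then POST the resulting blob to a (hypothetical) pronunciation endpoint.
async function recordAndEvaluate(durationMs: number): Promise<void> {
  // Requesting the microphone triggers the browser's permission prompt
  // and its built-in recording indicator.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks: Blob[] = [];

  recorder.ondataavailable = (e) => chunks.push(e.data);
  const stopped = new Promise<void>((resolve) => {
    recorder.onstop = () => resolve();
  });

  recorder.start();
  setTimeout(() => recorder.stop(), durationMs);
  await stopped;

  // Release the microphone so the browser's recording indicator goes away.
  stream.getTracks().forEach((t) => t.stop());

  const audio = new Blob(chunks, { type: recorder.mimeType });
  const form = new FormData();
  form.append('audio', audio, 'speech.webm');

  // Hypothetical endpoint; replace with your pronunciation API's URL and auth.
  const response = await fetch('/api/evaluate-pronunciation', {
    method: 'POST',
    body: form,
  });
  console.log('Pronunciation evaluation:', await response.json());
}
```

Note that MediaRecorder typically produces WebM/Opus audio; if your API expects WAV or MP3 you will need to convert the recording first. As for displaying the microphone: getUserMedia already shows the browser's own permission prompt and recording indicator, so in your own UI you usually just wire a record button to a function like the one above.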
If I try to load back-to-back clips, they appear to interrupt each other. The documentation seems to suggest that audioClipStatus returns an array of audioClips, implying they can be queued.
Does anyone have any info on this?
Secondly, I like the "Chime" audio clip and would love to be able to play an audioClip preceded by a chime to introduce an announcement. It would be great if there were an option like that.
One suggestion for the Sonos Dev team: perhaps allow loadAudioClip to take an array of clips as opposed to a single clip.
Any reaction from the Sonos Dev team?
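In the meantime, one workaround is to serialize the clips from the client side. Below is a rough sketch against the public loadAudioClip endpoint; the player ID, token, app ID, and the fixed chime delay are placeholders, and a robust version would wait for the clip's DONE status event instead of sleeping:

```typescript
// Rough client-side workaround: send loadAudioClip requests one at a time
// so clips don't interrupt each other. Endpoint and body follow the public
// Sonos Control API; IDs, token, and the sleep duration are assumptions.
const BASE = 'https://api.ws.sonos.com/control/api/v1';

async function loadClip(playerId: string, token: string, body: object): Promise<void> {
  await fetch(`${BASE}/players/${playerId}/audioClip`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify(body),
  });
}

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function chimeThenAnnounce(playerId: string, token: string, streamUrl: string): Promise<void> {
  // Built-in chime first...
  await loadClip(playerId, token, { appId: 'com.example.app', name: 'chime', clipType: 'CHIME' });
  await sleep(2000); // assumed chime length; better: wait for the DONE status event

  // ...then the actual announcement clip.
  await loadClip(playerId, token, {
    appId: 'com.example.app',
    name: 'announcement',
    clipType: 'CUSTOM',
    streamUrl,
  });
}
```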
Windows Volume Mixer shows audio output for individual applications.
Using NAudio, what is the right way for me to tap into this information? I essentially want my application to do the following:
Always record all audio input/output. Unless otherwise specified, only keep a buffer of the last 30 seconds; throw the rest away. (I know how to do this.)
When Skype, Vonage, or Ring Central plays audio for more than 5 seconds, ask the user if they want to start saving the audio. (How would I do this?)
If so, save the 30-second buffer to a file and then start recording live. (I know how to do this)
Thanks for the help!
Windows won't let you capture audio from individual applications. You can use NAudio's WasapiLoopbackCapture to capture audio from all applications.
If you just want to see audio output levels for all apps, that can be achieved with the IMMDevice APIs, which NAudio has wrappers for. NAudio doesn't come with a specific demo showing that, but there's another open-source project, EarTrumpet, that you could explore to see how it's done.
What is required to use a SMIL file for adaptive streaming in a video.js player? I have created the SMIL file in my Wowza application, and it is creating my four separate streams and making them available. However, I cannot get my webpage, which uses video.js, to correctly play the SMIL file. Hints on that coding, or on where to find the correct documentation, would be greatly appreciated.
There aren't many implementations of SMIL players. I'm sure I've seen Wowza URLs suggesting it will output the SMIL as other formats, something like whatever.smil/playlist.m3u8. That's HLS, which can be played natively on mobile and Safari, and with videojs-contrib-hls elsewhere.
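If that URL pattern works on your server, wiring it into video.js might look something like this (server address, application, and file names are placeholders; check the URLs your Wowza install actually exposes):

```typescript
// Sketch: hand the HLS rendition of the Wowza SMIL file to video.js.
import videojs from 'video.js';

const player = videojs('my-player'); // a <video id="my-player"> element on the page
player.src({
  // Typical Wowza pattern for the HLS output of a SMIL file.
  src: 'http://wowza.example.com:1935/myApp/smil:myStream.smil/playlist.m3u8',
  type: 'application/x-mpegURL', // HLS; needs videojs-contrib-hls outside Safari/mobile
});
player.play();
```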
I know the question is old, but I've been struggling with this recently, so I want to share my experience in case anyone is interested. My scenario is very similar: I want to deliver adaptive bitrate streaming from Wowza to clients using video.js.
There is a master link that explains how to set up and run Wowza Transcoder for live streaming, and how to set up your adaptive bitrate streams using a SMIL file. Following the video there, you can get a stream that uses ABS, but the SMIL file is tied to the stream name, so it is not a solution if you have streams that come to Wowza from another media-server origin and need to be transcoded before being served to the clients. The article mentions a few key things (like Stream Name Groups), but somehow things didn't seem entirely clear, at least to me. So here is some clarification of what I understood from all the articles I read and what I did to achieve ABS:
You can achieve ABS in Wowza either with SMIL files or with Stream Name Groups (NGRPs). An NGRP refers to a block of streams defined in the Transcoder template that can be played back using multi-bitrate streaming (dynamically); this is what I used. SMIL files, on the other hand, are used to create a "static" list of streams for multi-bitrate VOD streaming. If you are using Wowza origin-edge delivery you'll need the .smil file, because NGRPs do not get forwarded to the edge. (Source for all this information: here.)
In case you need the SMIL file, you probably need to generate a new one for every stream, and you probably want to do that in an automated way, so the best approach is an HTTP request (the link above explains how to achieve this).
In case you can live with NGRP, things are a bit easier:
You need to enable the Wowza Transcoder (this is pretty easy and the steps are in the video I mention above).
You should create your own Transcoder template with the different stream presets you want to deliver; as an example, you can check the default ones that are already there. The more presets you add, the more work Wowza has to do whenever a stream comes in, since it needs to generate a new stream for every preset you have defined.
Now is when we generate the NGRPs. In your Transcoder template, you can generate as many NGRPs as you want. (To clarify: these are like groups of streams that you can set in your clients' video player; each NGRP contains the streams the player will be able to use when doing adaptive bitrate streaming.) For instance, these are the default NGRPs:
If you play the NGRP "_mobile" in the client's video player, the ABS algorithm in the player will be able to adapt itself to play either the 240p or the 160p stream, based on the client's capabilities.
So imagine you have these two NGRPs. In order to play them in video.js, you will need to set the source to:
http://[wowza-ip-address]:1935/<name-of-your-application>/ngrp:myStream_all/playlist.m3u8
or
http://[wowza-ip-address]:1935/<name-of-your-application>/ngrp:myStream_mobile/playlist.m3u8
... depending on how many options you want to give the client player for ABS. (For instance, if your targets are old mobile devices, you probably just want to offer a couple of low-bitrate streams.)
(This is for an HLS stream. For other formats the extension changes; for instance, a DASH stream would use "/manifest.mpd" instead of "/playlist.m3u8".)
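Putting it together, a video.js setup along these lines could pick an NGRP per client and hand over the corresponding HLS URL (host, application, and NGRP names mirror the examples above; the user-agent check is just one crude way to choose the mobile group):

```typescript
// Sketch: choose an NGRP based on the client and set it as the video.js source.
import videojs from 'video.js';

const isMobile = /Mobi/i.test(navigator.userAgent);
const ngrp = isMobile ? 'myStream_mobile' : 'myStream_all';

const player = videojs('my-player');
player.src({
  src: `http://wowza.example.com:1935/myApp/ngrp:${ngrp}/playlist.m3u8`,
  type: 'application/x-mpegURL', // for DASH you would use /manifest.mpd and application/dash+xml
});
```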
That's all. There is also a very helpful link in the video.js documentation explaining how it does the bitrate switching: here.
I hope it helps someone! At least clarifying things! :)
I want to record and play back video. I found the article below to capture this, but the recorded video plays back too fast and doesn't show all of the actions; after 32 seconds it resets to the starting position.
http://html5-demos.appspot.com/static/getusermedia/record-user-webm.html
I used the code from http://www.html5rocks.com/en/tutorials/getusermedia/intro/#toc-history. I can share it if required.
Any clues to resolve this?
Thanks
Off the top of my head, one reason the video plays back too fast may be that you are recording at a resolution that is too high for the current API to handle.
I've tested similar attempts to record video directly in the browser and had the same issue with playback of recordings made at high resolutions.
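If that's the cause, one thing worth trying is to request a lower capture resolution up front via constraints. A quick sketch; the exact dimensions are just an example:

```typescript
// Sketch: ask for a modest resolution and frame rate before recording,
// which in my experience avoids the sped-up/truncated playback.
async function startLowResCapture(): Promise<MediaStream> {
  return navigator.mediaDevices.getUserMedia({
    video: {
      width: { ideal: 640 },
      height: { ideal: 480 },
      frameRate: { ideal: 30 },
    },
    audio: true,
  });
}

// Feed the stream to MediaRecorder instead of the legacy recorder
// from the html5rocks article.
startLowResCapture().then((stream) => {
  const recorder = new MediaRecorder(stream);
  recorder.start();
});
```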
Maybe this link could help:
https://www.webrtc-experiment.com/RecordRTC/
Or, for an extensive overview of the current state of WebRTC, you can read the following article:
http://hdfvr.com/html5-video-recording