In the Microsoft Speaker Recognition API, what data do we have to use for the body parameter?
It's documented as "binary data". Does this imply that we have to convert the audio file to binary data and then paste it there?
Currently, it's not possible to use this API testing console for posting application/octet-stream or multipart/form-data. I believe there's some work on this, and it should be available soon.
As an alternative, you can use Postman or Fiddler. Postman might be easier to use. Give it a try and let me know if you have a problem.
The binary data needs to be a WAV file of a specific format:
Container: WAV
Encoding: PCM
Rate: 16K
Sample Format: 16 bit
Channels: Mono
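If you want to sanity-check a recording against those settings before posting it, Python's standard-library wave module can read the header (the function names here are my own, not part of any SDK):

```python
import io
import wave

def make_test_wav(seconds=1):
    """Generate an in-memory WAV in the format the API expects:
    PCM encoding, 16 kHz sample rate, 16-bit samples, mono."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)      # mono
        w.setsampwidth(2)      # 16 bit = 2 bytes per sample
        w.setframerate(16000)  # 16 kHz
        w.writeframes(b"\x00\x00" * 16000 * seconds)  # silence
    return buf.getvalue()

def check_wav_format(data):
    """Return True if the WAV bytes match the required settings."""
    with wave.open(io.BytesIO(data)) as w:
        return (w.getnchannels() == 1
                and w.getsampwidth() == 2
                and w.getframerate() == 16000)
```

The resulting bytes are exactly what goes in the request body, unmodified.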
You can check out a working example web page over here - I've used an altered copy of recorderjs (altered by reverse-engineering the Speaker Recognition API examples page) to produce a WAV with the right sample rate and bit depth:
https://rposbo.github.io/speaker-recognition-api/
You could potentially use the test console, since you can send base64 encoded audio data (as the official demo page does):
https://azure.microsoft.com/en-gb/services/cognitive-services/speaker-recognition/
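If you go the base64 route, the encoding step is trivial; a sketch in Python (the helper name is mine):

```python
import base64

def audio_to_base64(wav_bytes):
    """Base64-encode raw WAV bytes so they can be sent as text,
    e.g. inside a JSON body or pasted into the testing console."""
    return base64.b64encode(wav_bytes).decode("ascii")
```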
I'm trying to upload videos to my Vimeo account via their API using the TUS approach.
I've managed to get the first step working fine, using POST https://api.vimeo.com/me/videos to create the video placeholder in Vimeo and getting a response which includes the video's upload.upload_link.
The second step requires the binary data of the video to be PATCHed to the upload link returned in step one (PATCH {upload.upload_link}) along with some specific headers, which is fine. What I'm struggling to work out is where and how exactly to include the binary data, as the Vimeo API documentation doesn't really say.
Do I just put the binary data in the body, on its own? Or do I need to insert it between some code in the body? Or do I set a parameter and add it as a key/value pair, and if so, what is the key?
Also, I'm assuming it should be a binary string and not base64, is that correct?
Any help or guidance on this would be much appreciated.
You put the binary data directly in the request body. Vimeo API uploading uses the tus upload protocol. There is more information about the PATCH request at https://tus.io/protocols/resumable-upload.html#patch
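As a sketch of what that PATCH looks like under tus 1.0 (the URL is a placeholder and the helper is illustrative, not part of any SDK): the chunk bytes go straight into the body, with the offset and content type carried in headers.

```python
def build_tus_patch(upload_link, chunk, offset):
    """Build the pieces of a tus 1.0 PATCH request. The raw bytes go
    directly in the body - not base64'd and not wrapped in a form."""
    headers = {
        "Tus-Resumable": "1.0.0",
        "Upload-Offset": str(offset),  # byte position this chunk starts at
        "Content-Type": "application/offset+octet-stream",  # required by the tus spec
    }
    return upload_link, headers, chunk  # method is PATCH
```

On success the server answers 204 with an Upload-Offset header giving the new offset, which is what makes the upload resumable.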
The big picture is that I want to live upload recorded audio from the browser directly to google drive.
This is a pet project so I am happy to play with experimental web technologies.
I currently have the browser fetching a feed from the microphone using MediaDevices.getUserMedia() and encoding it to mp3 in a WebWorker. The encoder returns an rxjs.Observable<Int16Array> that will produce chunks of the encoded file on subscribe.
I would like to use a resumable upload to upload the file, preferably in the "single request" style. The challenge is in uploading the file as it is produced by the encoder.
I appreciate that I could probably achieve a similar result by using their "multiple chunks" style and collecting the results of the encoder into Blobs and sending them on an interval. My problem with this is that the more "live" the upload is (smaller chunks) the more POST requests I will be making.
The Fetch API does specify that I can provide a ReadableStream as the body, BUT it appears that this experimental tech does not yet support byte streams.
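For what it's worth, the "multiple chunks" fallback mostly comes down to getting the Content-Range header right on each request; my understanding (worth verifying against the current Drive docs) is that you can send * for the total while recording is still in progress and only supply the real size with the final chunk. A sketch:

```python
def content_range(offset, chunk_len, total=None):
    """Content-Range header value for one chunk of a resumable upload.
    While recording is in progress the total size is unknown, so send
    '*' and only give the real total with the final chunk."""
    end = offset + chunk_len - 1
    size = "*" if total is None else str(total)
    return f"bytes {offset}-{end}/{size}"
```

Until the server sees a concrete total, it treats the upload as still open, which maps nicely onto an encoder that emits chunks until the user stops recording.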
Looking first for links to good documentation that correctly explains pseudostreaming, byte range requests and mp4 fragmenting. Note, I will be using only the mp4 container (h264 codec) and HTML5 video (no flash).
My understanding of pseudostreaming is that the client can send a start parameter that the server "seeks" to in its response. The MOOV data must be up front, and it implicitly means that buffering of the original source stops in favor of a new response starting at the "start"/seek position. How is the client forced to make pseudo calls? Does the MP4 have to be formatted in a special way?
Byte-range requests are different: rather than just a start parameter, a byte range is sent, which sounds more like progressive downloading. How would "seeking" work? Does it work with byte ranges? Can the segment size be pre-determined from the movie box information?
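For reference, seeking with byte ranges is plain HTTP: the player asks for an offset with a Range header, and the server answers 206 Partial Content with a Content-Range describing what it returned. A minimal sketch of both sides of that exchange (helper names are mine):

```python
def range_header(start, end=None):
    """HTTP Range request header for seeking into a file.
    end=None means 'from start to the end of the file'."""
    value = f"bytes={start}-" if end is None else f"bytes={start}-{end}"
    return {"Range": value}

def parse_content_range(value):
    """Parse a 206 response's 'Content-Range: bytes start-end/total'."""
    unit, _, rng = value.partition(" ")
    span, _, total = rng.partition("/")
    start, _, end = span.partition("-")
    return int(start), int(end), int(total)
```

This only works for seeking if the player can map a time offset to a byte offset, which is exactly what the up-front index (the MOOV box) provides.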
How does MP4 fragmentation fit in? It looks like a construct originally designed by Microsoft for Silverlight, but is it applicable to other browsers' HTML5 video implementations?
I'm finding it difficult to sort out information on the web. I'm looking to both live-feed and take historical segments of h264 files produced from RTP camera streams. I've got a bunch of files time-ordered in MongoDB. I've created my own h264 decoder in JavaScript and can construct MPEG-DASH boxes on the fly from a range query, using Chrome's support for MSE to append segments. It works great but isn't a universal solution; I want to fall back on techniques other than Flash, but still with HTML5 video.
I have some podcast feeds already up and running in iTunes for my client, but we're thinking of switching their audio from self-hosted to SoundCloud.
Is it possible to use the SoundCloud API to get an mp3 download link and the file length for uploaded tracks?
As an example, here's the enclosure tag from the existing feed for a recent podcast episode:
<enclosure url="http://marfapublicradio.org/wp-content/uploads/2013/08/TLK-130813-Steve-Murdoch-WEB.mp3" length="28077244" type="audio/mpeg" />
If I could just insert SoundCloud track data for the url and length attributes I'd be good to go, but after a brief look through the API documentation I'm not sure whether it can be done.
Any input would be greatly appreciated.
Seeing this in the related questions list:
SoundCloud, download or stream file via api
led me to further examination of the API docs ( http://developers.soundcloud.com/docs/api/reference#tracks ), where I found that the track properties do include download_url and duration.
So the answer to my question is "yes, it is possible".
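For anyone assembling the enclosure once a direct mp3 URL is in hand, note one gotcha: the RSS length attribute is the file size in bytes, while SoundCloud's duration property is in milliseconds, so duration is not the value to drop in there. A sketch, assuming the track exposes a byte-size field such as original_content_size (field names here are taken from the API reference but worth double-checking):

```python
def enclosure_tag(track):
    """Build an RSS <enclosure> element from a SoundCloud track dict.
    'length' must be the file size in BYTES; SoundCloud's 'duration'
    (milliseconds) is a different thing entirely."""
    return '<enclosure url="{0}" length="{1}" type="audio/mpeg" />'.format(
        track["download_url"], track["original_content_size"])
```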
Edit as of September 2, 2013:
I was able to make a download link, but only one that initiated a download dialog, and not a URL for an mp3 file that would be appropriate for my feed's enclosure tag.
I tried e-mailing the SoundCloud API support e-mail address, but got no response. I then tried their general support e-mail, and did receive a reply telling me that the answer to my question is NO.
SoundCloud's API does NOT provide .mp3 URLs to drop into the enclosure tags in my pre-existing feeds. It was instead suggested that I apply for their podcasting beta, which I will now investigate.
I'm dealing with this myself at the moment. While I'm not immediately seeing a direct link to an mp3, I have noticed that the value for the waveform (a random BBC stream is used here) can be used to form a working mp3 URL:
First get the track info:
https://api.soundcloud.com/resolve.json?url=https://soundcloud.com/bbc-media-show/nikkei-buys-financial-times&client_id=[yourClientIdHere]
Notice the waveform url:
7Yp3d9EHloKg_m.png
Use that identifier (remove the _m) to form the working stream URL.
http://media.soundcloud.com/stream/7Yp3d9EHloKg.mp3
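The string manipulation is trivial; a sketch (the waveform host below is illustrative, and since this naming scheme is undocumented it may break at any time):

```python
def stream_url_from_waveform(waveform_url):
    """Derive the (undocumented) stream URL from a track's waveform_url,
    e.g. '.../7Yp3d9EHloKg_m.png' -> '.../stream/7Yp3d9EHloKg.mp3'.
    Relies on an internal naming scheme; may stop working at any time."""
    track_id = waveform_url.rsplit("/", 1)[-1]  # '7Yp3d9EHloKg_m.png'
    track_id = track_id.replace("_m.png", "")   # drop the size suffix and extension
    return f"http://media.soundcloud.com/stream/{track_id}.mp3"
```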
Forgive my poor text formatting here... I have never used Stack Overflow before.
We have an application where a user records video, which is encoded by Adobe Flash Media Server 4. We now need to put that file in an S3 bucket to get it into our CDN. Ideally we would simply PUT the file to the bucket using the RESTful interface once the encoding is complete, but it looks like LoadVars does not support PUT. So now our two options are:
Use multipart/form-data in the RESTful interface, but doing all that boundary stuff looks complicated.
Use PutObjectInline in the SOAP interface, but now I have to base64 encode the file, and I don't see how to do that.
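On option 1: the boundary mechanics are tedious but entirely mechanical, and the body layout is the same in any language. A Python sketch of how it is assembled (the S3 POST policy fields are omitted here; note that S3's browser-POST upload expects the file part to come last):

```python
def multipart_body(fields, file_field, filename, file_bytes,
                   boundary="----upload-boundary"):
    """Assemble a multipart/form-data body by hand. 'fields' are the plain
    form fields (for an S3 POST these would include key, policy, signature,
    etc.); the file part goes last, as S3 requires."""
    b = boundary.encode()
    lines = []
    for name, value in fields.items():
        lines += [b"--" + b,
                  f'Content-Disposition: form-data; name="{name}"'.encode(),
                  b"",
                  value.encode()]
    lines += [b"--" + b,
              f'Content-Disposition: form-data; name="{file_field}"; filename="{filename}"'.encode(),
              b"Content-Type: application/octet-stream",
              b"",
              file_bytes,
              b"--" + b + b"--",   # closing boundary has trailing dashes
              b""]
    return f"multipart/form-data; boundary={boundary}", b"\r\n".join(lines)
```

The returned content type (with the boundary) goes in the Content-Type header, and the joined bytes are the request body.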
You would think that 'encode video, put it on the world wide web' is a common enough problem, but apparently not.
Any suggestions would be appreciated!