The big picture is that I want to live-upload recorded audio from the browser directly to Google Drive.
This is a pet project so I am happy to play with experimental web technologies.
I currently have the browser capturing a feed from the microphone using MediaDevices.getUserMedia() and encoding it to MP3 in a Web Worker. The encoder returns an rxjs.Observable<Int16Array> that produces chunks of the encoded file on subscribe.
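For context, the capture-and-encode pipeline looks roughly like the sketch below (the worker file name and its message shape are my own placeholders, and ScriptProcessorNode is used only to keep the example short):

import { Observable } from "rxjs";

// Hypothetical wiring: capture the microphone, ship raw PCM to an encoder
// worker, and surface the worker's encoded chunks as an Observable<Int16Array>.
async function captureAndEncode(): Promise<Observable<Int16Array>> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);
  const processor = ctx.createScriptProcessor(4096, 1, 1); // deprecated, but keeps the sketch short
  const worker = new Worker("encoder.worker.js"); // assumed worker file

  source.connect(processor);
  processor.connect(ctx.destination);
  processor.onaudioprocess = (e) =>
    worker.postMessage(e.inputBuffer.getChannelData(0));

  return new Observable<Int16Array>((subscriber) => {
    // assumed message shape: the worker posts back each encoded chunk
    worker.onmessage = (msg: MessageEvent<Int16Array>) => subscriber.next(msg.data);
  });
}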
I would like to use a resumable upload to upload the file, preferably in the "single request" style. The challenge is in uploading the file as it is produced by the encoder.
I appreciate that I could probably achieve a similar result by using their "multiple chunks" style, collecting the results of the encoder into Blobs and sending them on an interval. My problem with this is that the more "live" the upload is (the smaller the chunks), the more HTTP requests I will be making.
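For reference, kicking off a resumable session against the Drive v3 REST endpoint looks roughly like this (a minimal sketch; the file name and access-token handling are assumptions):

// Minimal sketch: start a Drive v3 resumable upload session; the session URI
// for all subsequent uploads comes back in the Location response header.
async function startResumableSession(accessToken: string): Promise<string> {
  const res = await fetch(
    "https://www.googleapis.com/upload/drive/v3/files?uploadType=resumable",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json; charset=UTF-8",
      },
      // assumed metadata for the file being created
      body: JSON.stringify({ name: "recording.mp3", mimeType: "audio/mpeg" }),
    }
  );
  return res.headers.get("Location")!;
}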
XMLHttpRequest.send() does specify that I can provide a ReadableStream as the body, BUT it appears that this experimental tech does not yet support byte streams.
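As far as I can tell, streaming request bodies currently live in the Fetch API rather than XHR, and only some browsers support them. A sketch under that assumption (fetch with duplex: "half"), reusing the session URI from above:

// Sketch: stream encoded chunks straight into the session as they are produced.
// Requires a browser that supports streaming request bodies (experimental).
async function streamUpload(sessionUri: string, chunks: ReadableStream<Uint8Array>) {
  await fetch(sessionUri, {
    method: "PUT",
    headers: { "Content-Type": "audio/mpeg" },
    body: chunks,
    // @ts-expect-error "duplex" is not yet in every TypeScript lib definition
    duplex: "half",
  });
}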
Related
In the Microsoft Speaker Recognition API, what data do we have to use for the body parameter?
It is described as "binary data". Does this mean that we have to convert the audio file to binary data and then paste it there?
Currently, it's not possible to use this API testing console for posting application/octet-stream or application/form-data. I believe there's some work on this, and it should be available soon.
As an alternative, you can use Postman or Fiddler. Postman might be easier to use. Give it a try and let me know if you have a problem.
The binary data needs to be a WAV file of a specific format:
Container: WAV
Encoding: PCM
Rate: 16K
Sample Format: 16 bit
Channels: Mono
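For the record, wrapping raw samples in that exact container only takes a 44-byte header. A minimal sketch (not the code used on the example page linked below):

// Sketch: wrap raw 16-bit mono PCM samples at 16 kHz in a minimal WAV header.
function pcmToWav(samples: Int16Array, sampleRate = 16000): Blob {
  const header = new ArrayBuffer(44);
  const view = new DataView(header);
  const writeStr = (offset: number, s: string) =>
    [...s].forEach((c, i) => view.setUint8(offset + i, c.charCodeAt(0)));

  const dataSize = samples.length * 2;      // 16-bit samples = 2 bytes each

  writeStr(0, "RIFF");
  view.setUint32(4, 36 + dataSize, true);   // RIFF chunk size
  writeStr(8, "WAVE");
  writeStr(12, "fmt ");
  view.setUint32(16, 16, true);             // fmt chunk size
  view.setUint16(20, 1, true);              // audio format: PCM
  view.setUint16(22, 1, true);              // channels: mono
  view.setUint32(24, sampleRate, true);     // sample rate: 16K
  view.setUint32(28, sampleRate * 2, true); // byte rate (mono, 16-bit)
  view.setUint16(32, 2, true);              // block align
  view.setUint16(34, 16, true);             // bits per sample
  writeStr(36, "data");
  view.setUint32(40, dataSize, true);       // data chunk size

  return new Blob([header, samples], { type: "audio/wav" });
}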
You can check out a working example web page here. I've used an altered copy of recorderjs (altered by reverse-engineering the Speaker Recognition API examples page) to produce a WAV with the right sample rate and sample format:
https://rposbo.github.io/speaker-recognition-api/
You could potentially use the test console, since you can send base64 encoded audio data (as the official demo page does):
https://azure.microsoft.com/en-gb/services/cognitive-services/speaker-recognition/
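If you go the base64 route, the browser can do the encoding for you. A small sketch, assuming the recording is already available as a Blob:

// Sketch: base64-encode a recorded WAV Blob for the test console, which
// expects base64 text rather than raw binary.
function blobToBase64(blob: Blob): Promise<string> {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    // readAsDataURL yields "data:audio/wav;base64,...."; strip the prefix
    reader.onload = () => resolve((reader.result as string).split(",")[1]);
    reader.onerror = reject;
    reader.readAsDataURL(blob);
  });
}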
I realize that similar questions have been asked before. However, none of them was answered.
My problem is the following:
To upload a file to Google Drive, you need to create DriveContents.
You either do this by creating them out of thin air:
DriveApi.DriveContentsResult driveContentsResult = Drive.DriveApi.newDriveContents(getGoogleApiClient()).await();
DriveContents contents = driveContentsResult.getDriveContents();
Or you do this by opening an already existing file:
DriveApi.DriveContentsResult driveContentsResult = driveFileResult.getDriveFile().open(getGoogleApiClient(), DriveFile.MODE_WRITE_ONLY, null).await();
DriveContents contents = driveContentsResult.getDriveContents();
You are now ready to fill the DriveContents with data. You do this by obtaining an OutputStream and by writing to this OutputStream:
FileOutputStream fileOutputStream = new FileOutputStream(driveContentsResult.getDriveContents().getParcelFileDescriptor().getFileDescriptor());
Now this is where the problem starts: by filling this OutputStream, Google Play services simply copies the file I want to upload, creating a local copy. If you have 0.5 GB of free space on your phone and you want to upload a 1.3 GB file, this is not going to work: there is not enough storage space.
So how is it done? Is there a way to directly upload to Google Drive via the GDAA that does not involve creating a local copy first, and THEN uploading it?
Does the Google REST API handle these uploads any differently? Can it be done via the Google REST API?
EDIT:
It seems this cannot be done via the GDAA. For people looking for a way to do resumable uploads with the Google REST API, have a look at my example here on StackOverflow.
I'm not sure if it can, but you can certainly try to use the Google REST API to upload your file.
You could use Multipart upload or Resumable upload:
Multipart upload
If you have metadata that you want to send along with the data to upload, you can make a single multipart/related request. This is a good choice if the data you are sending is small enough to upload again in its entirety if the connection fails.
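To illustrate the wire format, a single multipart/related request against the Drive v3 endpoint could be built like this (a hedged sketch; the metadata fields are placeholders):

// Sketch: one multipart/related request carrying the JSON metadata and the
// media bytes together (Drive v3 shown; the metadata fields are placeholders).
async function multipartUpload(accessToken: string, file: Blob): Promise<Response> {
  const boundary = "drive_upload_boundary";
  const metadata = JSON.stringify({ name: "backup.mp4", mimeType: file.type });
  const body = new Blob([
    `--${boundary}\r\nContent-Type: application/json; charset=UTF-8\r\n\r\n${metadata}\r\n`,
    `--${boundary}\r\nContent-Type: ${file.type}\r\n\r\n`,
    file,
    `\r\n--${boundary}--`,
  ]);
  return fetch(
    "https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": `multipart/related; boundary=${boundary}`,
      },
      body,
    }
  );
}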
Resumable upload
To upload data files more reliably, you can use the resumable upload protocol. This protocol allows you to resume an upload operation after a communication failure has interrupted the flow of data. It is especially useful if you are transferring large files and the likelihood of a network interruption or some other transmission failure is high, for example, when uploading from a mobile client app. It can also reduce your bandwidth usage in the event of network failures because you don't have to restart large file uploads from the beginning.
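The chunked flavour of that protocol boils down to PUT requests with a Content-Range header against the session URI. A rough sketch of one chunk (the session URI comes from the initial uploadType=resumable request):

// Sketch: push one chunk of a resumable upload and report how much the server
// has committed so far.
async function uploadChunk(
  sessionUri: string,
  chunk: Blob,
  offset: number,
  totalSize: number
): Promise<number> {
  const end = offset + chunk.size - 1;
  const res = await fetch(sessionUri, {
    method: "PUT",
    headers: { "Content-Range": `bytes ${offset}-${end}/${totalSize}` },
    body: chunk,
  });
  if (res.status === 308) {
    // 308 Resume Incomplete: the Range header reports the last byte received
    const range = res.headers.get("Range"); // e.g. "bytes=0-262143"
    return range ? Number(range.split("-")[1]) + 1 : offset;
  }
  return totalSize; // 200/201 means the upload is complete
}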
You must remember, as discussed in this SO question:
The GDAA's main identifier, the DriveId, lives in GDAA (GooPlaySvcs) only and does not exist in the REST API.
The ResourceId can be obtained from the DriveId only after GDAA has committed (uploaded) the file/folder.
You will run into a lot of timing issues caused by the fact that GDAA 'buffers' network requests on its own schedule (system optimized), whereas the REST API lets your app control the waiting for the response.
Lastly, you can check this related SO question regarding tokens and authentication in HTTP requests on Android. There are also some examples by seanpj for both GDAA and the REST API that might help you.
Hope this helps.
I'm writing a fairly involved application for working with Sony cameras.
I can list the contents of the camera and copy image files no problem at all, but I can't seem to figure out the size of the files before I start to download them.
I'm receiving the file list using the standard getContentList API, and finding the files using the originals array in the response. That response seems to have no file size information in it.
Is this possible? Knowing the file size before downloading is important for a good user experience, and all the other camera APIs support it.
I do get the size when I start to download in the HTTP Content-Length header, but performing HEAD requests to hundreds of URLs in a row seems very inefficient!
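For what it's worth, the per-file HEAD request I'm currently falling back on is only a few lines; the problem is repeating it hundreds of times in a row. A sketch:

// Sketch: read a remote file's size from the Content-Length header
// without downloading the body.
async function remoteFileSize(url: string): Promise<number | null> {
  const res = await fetch(url, { method: "HEAD" });
  const length = res.headers.get("Content-Length");
  return length !== null ? Number(length) : null;
}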
Unfortunately the API does not support getting the file size.
I'm looking first for links to good documentation that correctly explains pseudostreaming, byte-range requests, and MP4 fragmenting. Note: I will be using only the MP4 container (H.264 codec) and HTML5 video (no Flash).
My understanding of pseudostreaming is that the client can send a start parameter that the server "seeks" to in its response. The MOOV data must be up front, and the implication is that buffering of the original source stops in favor of the new response starting at the "start"/seek position. How is the client forced to make pseudo calls? Does the MP4 have to be formatted in a special way?
Byte-range requests are different: rather than just a start parameter, a byte range is sent. That sounds more like progressive downloading. How would "seeking" work? Does it work with byte ranges? Can the segment size be pre-determined from the movie box information?
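As a concrete illustration of a byte-range request (the server has to answer 206 Partial Content for this to be usable for seeking):

// Sketch: request a specific byte range of an MP4; a server that supports
// range requests answers 206 Partial Content with just those bytes.
async function fetchRange(url: string, start: number, end: number): Promise<ArrayBuffer> {
  const res = await fetch(url, { headers: { Range: `bytes=${start}-${end}` } });
  if (res.status !== 206) throw new Error(`Expected 206 Partial Content, got ${res.status}`);
  return res.arrayBuffer();
}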
How does MP4 fragmentation fit in? It looks like a construct originally designed by Microsoft for Silverlight, but is it applicable to other browsers' HTML5 video implementations?
I'm finding it difficult to sort out information on the web. I'm looking to both live-feed and serve historical segments of H.264 files produced from RTP camera streams. I've got a bunch of files time-ordered in MongoDB. I've created my own H.264 decoder in JavaScript and can construct MPEG-DASH boxes on the fly from a range query, using Chrome's support for MSE to append segments. It works great, but it is not a universal solution. I want to fall back on techniques other than Flash, but still with HTML5 video.
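In case it helps, the MSE side of that pipeline is roughly the following (the codec string and the segment source are assumptions and must match the fMP4 boxes actually being built):

// Sketch: feed fragmented-MP4 segments to a <video> element via MSE.
function playSegments(video: HTMLVideoElement, segments: AsyncIterable<ArrayBuffer>) {
  const mediaSource = new MediaSource();
  video.src = URL.createObjectURL(mediaSource);

  mediaSource.addEventListener("sourceopen", async () => {
    // the codec string is an assumption; it must match the fMP4 being built
    const buffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E"');
    for await (const segment of segments) {
      buffer.appendBuffer(segment);
      // wait for the previous append to finish before queueing the next one
      await new Promise((r) => buffer.addEventListener("updateend", r, { once: true }));
    }
    mediaSource.endOfStream();
  });
}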
I'm able to upload videos to YouTube using their XML input/output format, but their documentation on how to implement uploading with JSON-C is frustratingly sparse. For instance, what is the 'key' for the JSON data I'm sticking in the body? Or, put a different way, how is the JSON string added to the body of the request?
Here are instructions for uploading a video using JSON-C:
https://developers.google.com/youtube/2.0/developers_guide_jsonc#Add_Video
The upload is done in two parts: 1) First you upload the metadata in JSON format. The response will contain an upload URL. 2) Upload the actual video to that upload URL.
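A rough sketch of that two-step flow is below. The endpoint, headers, and metadata field names are assumptions based on the historical v2 resumable-upload documentation, so double-check them against the linked guide:

// Rough sketch of the two-step flow. The endpoint, headers, and metadata
// field names are assumptions from the historical v2 docs; verify them
// against the linked guide before use.
async function uploadVideo(token: string, devKey: string, video: Blob, title: string) {
  // Step 1: post the metadata as JSON; the upload URL comes back in Location.
  const init = await fetch(
    "https://uploads.gdata.youtube.com/resumable/feeds/api/users/default/uploads",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "X-GData-Key": `key=${devKey}`,
        "GData-Version": "2",
        Slug: "my-video.mp4",
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ data: { title, description: "test upload" } }),
    }
  );
  const uploadUrl = init.headers.get("Location")!;

  // Step 2: send the actual video bytes to the upload URL.
  return fetch(uploadUrl, { method: "PUT", headers: { "Content-Type": video.type }, body: video });
}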
However, @Alexander is right: the Objective-C client may be a better route, since it handles all the upload details for you:
http://code.google.com/p/gdata-objectivec-client/