How to upload video to Vimeo through Swift - Alamofire

I need to upload a video file to Vimeo from my iOS app.
Vimeo's iOS library is deprecated, so I'm trying to upload a video using the API described on the Vimeo developer site:
https://developer.vimeo.com/api/upload/videos
I'm using the resumable approach.
There are 3 steps in total. Step 1 was successful and step 2 is still failing.
Here's the method I tried in step 2:
private func uploadVideoToVimeo(uploadLink: String) {
    let urlString = uploadLink
    let headers: HTTPHeaders = [
        "Tus-Resumable": "1.0.0",
        "Upload-Offset": "0",
        "Content-Type": "application/offset+octet-stream",
        "Accept": "application/vnd.vimeo.*+json;version=3.4"
    ]
    var request = URLRequest(url: URL(string: urlString)!)
    request.headers = headers
    request.method = .patch
    AF.upload(multipartFormData: { multipartFormData in
        let timestamp = NSDate().timeIntervalSince1970
        do {
            let data = try Data(contentsOf: self.videoLocalURL, options: .mappedIfSafe)
            print("data size: \(data)")
            multipartFormData.append(data, withName: "\(timestamp)")
        } catch {}
    }, with: request).responseString { response in
        switch response.result {
        case .success:
            print("response: \(response)")
        case let .failure(error):
            print("ERROR: \(error)")
        }
    }
}
When I do this, the response is “missing or invalid Content-Type header”.
Any help would be greatly appreciated.

Alamofire, and Apple's network frameworks in general, don't support the TUS protocol for uploads. You either need to implement that manually and upload a stream, or switch to using the form-based approach outlined in the Vimeo docs.
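For the manual route, the key detail is that the PATCH body must be the raw video bytes, not a multipart form: wrapping the data in multipartFormData replaces the Content-Type header with multipart/form-data, which is why Vimeo rejects it. Here's a minimal sketch with Alamofire 5, assuming uploadLink is the upload link returned by step 1 and videoLocalURL points at the file on disk:
private func uploadVideoToVimeo(uploadLink: String) {
    let headers: HTTPHeaders = [
        "Tus-Resumable": "1.0.0",
        "Upload-Offset": "0",
        "Content-Type": "application/offset+octet-stream",
        "Accept": "application/vnd.vimeo.*+json;version=3.4"
    ]
    // Send the file itself as the request body via PATCH, as the tus protocol expects.
    AF.upload(videoLocalURL, to: uploadLink, method: .patch, headers: headers)
        .validate()
        .response { response in
            switch response.result {
            case .success:
                // Vimeo reports the new offset; when it equals the file size,
                // the upload is complete and you can move on to step 3.
                // value(forHTTPHeaderField:) requires iOS 13+.
                let offset = response.response?.value(forHTTPHeaderField: "Upload-Offset")
                print("Upload-Offset is now: \(offset ?? "unknown")")
            case let .failure(error):
                print("ERROR: \(error)")
            }
        }
}
If the transfer is interrupted, the tus flow is to ask the upload link for its current offset with a HEAD request and issue another PATCH with that value in the Upload-Offset header.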

Related

Access encodingResult when uploading with Alamofire 5

I'm trying to update my app to Alamofire 5 and I'm having difficulties, due to the hack-ish way I'm using it, I guess.
Anyhow, I need background uploads, and Alamofire is not really designed for them. Even so, I was using it to create a properly formatted file containing the multipart form data, so I can hand that file to the OS to upload in the background later.
I'll post the code that does this in Alamofire 4; my question is how I can get the URL of the file I was previously getting from encodingResult.
// We're not actually going to upload the photo via Alamofire. It does not offer support for background uploads.
// Still, we can use it to create a request and, more importantly, a properly formatted file containing the multipart form.
Api.alamofire.upload(
    multipartFormData: { multipartFormData in
        multipartFormData.append(imageData, withName: "photo[image]", fileName: filename, mimeType: "image/jpg")
    },
    to: "http://", // if we give it a real url sometimes alamofire will attempt the first upload. I don't want to let it get to our servers but it fails if I feed it ""
    usingThreshold: UInt64(0), // force alamofire to always write to file no matter how small the payload is
    method: .post,
    headers: Api.requestHeaders,
    encodingCompletion: { encodingResult in
        switch encodingResult {
        case .success(let alamofireUploadTask, _, let url):
            alamofireUploadTask.suspend()
            defer { alamofireUploadTask.cancel() }
            if let alamofireUploadFileUrl = url {
                // we want to own the multipart file to avoid alamofire deleting it when we tell it to cancel its task
                let fileUrl = ourFileUrl
                do {
                    try FileManager.default.copyItem(at: alamofireUploadFileUrl, to: fileUrl)
                    // use the file we just created for a background upload
                } catch {
                }
            }
        case .failure:
            // alamofire failed to encode the request file for some reason
            break
        }
    }
)
Multipart encoding is fully integrated into the now-asynchronous request pipeline in Alamofire 5, so there's no longer a separate encoding step to hook into. However, you can use the MultipartFormData type directly, just like you would in the request closure.
let data = MultipartFormData()
data.append(Data(), withName: "dataName")
try data.encode()
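For the background-upload use case specifically, you can also skip the copy step: MultipartFormData can write the encoded body straight to a file you own, and its contentType property gives you the header for the request you hand to the background session. A rough sketch under the same assumptions as your snippet (imageData, filename, ourFileUrl, Api.requestHeaders, and the background URLSession are yours; uploadURL is a placeholder):
import Alamofire

let form = MultipartFormData()
form.append(imageData, withName: "photo[image]", fileName: filename, mimeType: "image/jpg")

// Write the encoded multipart body to a file we control, so nothing gets
// cleaned up behind our back when an Alamofire request is cancelled.
try form.writeEncodedData(to: ourFileUrl)

var request = URLRequest(url: uploadURL) // your real endpoint
request.method = .post
request.headers = Api.requestHeaders
request.headers.add(.contentType(form.contentType))

// Hand the file to the OS for a background upload.
let task = backgroundSession.uploadTask(with: request, fromFile: ourFileUrl)
task.resume()
Note that writeEncodedData(to:) throws if a file already exists at the destination, so clear out any leftover file first.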

Stream api with fetch in a react-native app

I was trying to use the Streams API with fetch in a react-native app. I implemented it with the help of a great example at jakearchibald.com. The code is something like this:
fetch('https://html.spec.whatwg.org/').then(function(response) {
    console.log('response::-', response)
    var reader = response.body.getReader();
    var bytesReceived = 0;
    reader.read().then(function processResult(result) {
        if (result.done) {
            console.log("Fetch complete");
            return;
        }
        bytesReceived += result.value.length;
        console.log(`Received ${bytesReceived} bytes of data so far`);
        return reader.read().then(processResult);
    });
});
The Streams API reference is:
https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API
But it seems the fetch implementation in react-native is a little different from the browsers', and it is not easy to use streams the same way as on the web.
There is already an unresolved issue about this on react-native:
https://github.com/facebook/react-native/issues/12912
On the web we can access the stream via response.body.getReader(), where response is just the normal result returned from a fetch call to the stream URL, but in react-native there is no way to access body, and hence getReader, on the fetch response.
To overcome this I tried the rn-fetch-blob npm package, because it supports streams, but that seems to support only local file paths: its readStream function doesn't appear to accept Authorization and other necessary headers. So I tried RNFetchBlob.fetch with the remote URL and the necessary headers and then called readStream on the response, but it always tells me there is no stream for the current response.
RNFetchBlob.fetch('GET', 'https://html.spec.whatwg.org/')
    .progress((received, total) => {
        console.log('progress', received / total);
    })
    .then((resp) => {
        const path = resp.path();
        console.log('resp success:-', resp);
        RNFetchBlob.fs.readStream(path, 'utf8').then((stream) => {
            let data = '';
            stream.open();
            stream.onData((chunk) => {
                data += chunk;
            });
            stream.onEnd(() => {
                console.log('readStream::-', data);
            });
        });
    })
    .catch((err) => {
        console.log('trackAppointmentStatus::-', err);
    });
I may be doing something wrong in both of my approaches, so a little guidance may help me or someone else in the future. Or I may need to find a way to do it natively by writing a bridge.

Microsoft Azure Cognitive Services - Bing Text to Speech API - Play audio using javascript

I am following this documentation to convert text to speech using the Text to Speech REST API.
I'm successfully able to get a valid response using Postman, and I can play the audio in Postman. But I am not able to play the audio using JavaScript. Below is my JavaScript code. I'm not sure what to do with the response.
function bingSpeech(message) {
    var authToken = "TokenToCommunicateWithRestAPI";
    var http = new XMLHttpRequest();
    var params = `<speak version='1.0' xml:lang='en-US'><voice xml:lang='en-US' xml:gender='Female' name='Microsoft Server Speech Text to Speech Voice (en-US, JessaRUS)'>${message}</voice></speak>`;
    http.open('POST', 'https://speech.platform.bing.com/synthesize', true);
    // Send the proper header information along with the request
    http.setRequestHeader("Content-Type", "application/ssml+xml");
    http.setRequestHeader("Authorization", "bearer " + authToken);
    http.setRequestHeader("X-Microsoft-OutputFormat", "audio-16khz-32kbitrate-mono-mp3");
    http.onreadystatechange = function () {
        if (http.readyState == 4 && http.status == 200) {
            // I am getting the response, but I'm not sure how to play the audio file. Need help here
        }
    }
    http.send(params);
}
Thanks.
I referred to the following repository for my code in Java. It plays the audio in IDE and saves the audio file to your system.
https://github.com/Azure-Samples/Cognitive-Speech-TTS/tree/master/Samples-Http/Java/TTSSample/src/com/microsoft/cognitiveservices/ttssample

Google Apps Script: Salesforce API Call

Just finished breakfast and already hit a snag. I'm trying to call the Salesforce REST API from my Google Sheets. I've written a working script locally in Python, but something went wrong converting it to JS:
function authenticateSF(){
    var url = 'https://login.salesforce.com/services/oauth2/token';
    var options = {
        grant_type: 'password',
        client_id: 'XXXXXXXXXXX',
        client_secret: '111111111111',
        username: 'ITSME#smee.com',
        password: 'smee'
    };
    var results = UrlFetchApp.fetch(url, options);
}
Here is the error response:
Request failed for https://login.salesforce.com/services/oauth2/token returned code 400. Truncated server response: {"error_description":"grant type not supported","error":"unsupported_grant_type"} (use muteHttpExceptions option to examine full response) (line 12, file "Code")
Mind you, these exact parameters work fine in my local Python script (with the key values inside quotation marks).
Here are the relevant docs:
Google Script: Connecting to external API's
Salesforce: REST API guide
Thank you all!
Google's UrlFetchApp object defaults to a GET request. To authenticate, you have to explicitly set the method to "post" in the options and pass the form fields as the payload:
function authenticateSF(){
    var url = 'https://login.salesforce.com/services/oauth2/token';
    var payload = {
        'grant_type': 'password',
        'client_id': 'XXXXXXXXXXX',
        'client_secret': '111111111111',
        'username': 'ITSME#smee.com',
        'password': 'smee'
    };
    var options = {
        'method': 'post',
        'payload': payload
    };
    var results = UrlFetchApp.fetch(url, options);
}

Firefox add-on SDK: Get http response headers

I'm new to add-on development and I've been struggling with this issue for a while now. There are some questions here that are somewhat related, but they haven't helped me find a solution yet.
So, I'm developing a Firefox add-on that reads one particular header whenever a web page is loaded in any tab in the browser.
I'm able to observe tab loads, but I don't think there is a way to read HTTP headers inside the following (simple) code, only the URL. Please correct me if I'm wrong.
var tabs = require("sdk/tabs");
tabs.on('open', function(tab){
    tab.on('ready', function(tab){
        console.log(tab.url);
    });
});
I'm also able to read response headers by observing HTTP events like this:
var {Cc, Ci} = require("chrome");
var httpRequestObserver =
{
    init: function() {
        var observerService = Cc["@mozilla.org/observer-service;1"].getService(Ci.nsIObserverService);
        observerService.addObserver(this, "http-on-examine-response", false);
    },

    observe: function(subject, topic, data)
    {
        if (topic == "http-on-examine-response") {
            subject.QueryInterface(Ci.nsIHttpChannel);
            this.onExamineResponse(subject);
        }
    },

    onExamineResponse: function (oHttp)
    {
        try
        {
            var header_value = oHttp.getResponseHeader("<the_header_that_i_need>"); // Works fine
            console.log(header_value);
        }
        catch(err)
        {
            console.log(err);
        }
    }
};
The problem (and a major source of personal confusion) is that when I'm reading the response headers, I don't know which request the response belongs to. I want to somehow map the request (especially the request URL) to the response header ("the_header_that_i_need").
You're pretty much there; take a look at the sample code here for more things you can do.
onExamineResponse: function (oHttp)
{
    try
    {
        var header_value = oHttp.getResponseHeader("<the_header_that_i_need>");
        // URI is the nsIURI of the response you're looking at
        // and spec gives you the full URL string
        var url = oHttp.URI.spec;
    }
    catch(err)
    {
        console.log(err);
    }
}
Also, people often need to find the related tab, which this answers: Finding the tab that fired an http-on-examine-response event