We're using signed URLs to upload from the browser, but I haven't been able to figure out how to set the Cache-Control header while uploading.
We're using the gcloud-node library to sign the URLs:
var bucket = gcs.bucket('mybucket');
var file = bucket.file('image.jpg');
var expireDate = new Date();
expireDate.setDate(expireDate.getDate() + 1);

file.getSignedUrl({
  action: 'write',
  expires: expireDate,
  contentType: 'image/jpeg'
}, function (err, signedUrl) {
  if (err) {
    console.error('SignedUrl error', err);
  } else {
    console.log(signedUrl);
  }
});
How do I set the Cache-Control headers while uploading a file to GCS?
The code to upload is running in the browser:
var signedUrl = ...; // get from nodejs server
var fileList = this.files;
var file = fileList[0];

jQuery.ajax({
  url: signedUrl,
  type: 'PUT',
  data: file,
  processData: false,
  contentType: 'image/jpeg'
})
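For context, the signed URL reaches the browser from a Node endpoint; a minimal sketch of such a route (the Express app and the '/sign-upload' path are assumptions of mine, not from the question):

// Hypothetical Express route that signs the URL on the server
// and hands it to the browser.
app.get('/sign-upload', function (req, res) {
  file.getSignedUrl({
    action: 'write',
    expires: expireDate,
    contentType: 'image/jpeg'
  }, function (err, signedUrl) {
    if (err) { return res.status(500).send('signing failed'); }
    res.json({ signedUrl: signedUrl });
  });
});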
This is possible, but the documentation is terrible. First you need to set up CORS on the bucket you're uploading to with:
gsutil cors set cors.json gs://bucket-name
Where cors.json contains something like:
[{
  "maxAgeSeconds": 3600,
  "method": ["GET", "PUT", "POST"],
  "origin": [
    "http://localhost:3000"
  ],
  "responseHeader": ["Content-Type", "Cache-Control"]
}]
"Cache-Control" needs to be listed in the "responseHeader" field. Then upload like you normally would, but set the Cache-Control header. Using fetch it would be:
fetch(uploadUrl, {
  method: 'PUT',
  body: blob,
  headers: {
    'Content-Type': blob.type,
    'Cache-Control': 'public, max-age=31536000',
  },
});
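If you would rather keep the jQuery call from the question, the same header goes in its headers option; a sketch, assuming the signed URL was generated with contentType 'image/jpeg' as above and that "Cache-Control" is listed in the bucket's CORS responseHeader:

// Sketch: the original jQuery upload with a Cache-Control header added.
jQuery.ajax({
  url: signedUrl,
  type: 'PUT',
  data: file,
  processData: false,
  contentType: 'image/jpeg',
  headers: {
    'Cache-Control': 'public, max-age=31536000'
  }
});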
The snippet you have is only getting a signed URL. When you upload (insert) the object into GCS, you should be able to set it via the API:
https://cloud.google.com/storage/docs/json_api/v1/objects/insert
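As a sketch of that alternative (assuming a recent @google-cloud/storage client, and that writing the object from Node rather than from the browser is acceptable), cacheControl can be supplied as object metadata at write time:

// Sketch: write the object from Node and store cacheControl as metadata.
// file is the same File object as in the question; names are illustrative.
var fs = require('fs');
file.save(fs.readFileSync('image.jpg'), {
  contentType: 'image/jpeg',
  metadata: { cacheControl: 'public, max-age=31536000' }
}, function (err) {
  if (err) { console.error('upload error', err); }
});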
Related
I am new to the aws-sdk and I want to sign the Cache-Control, Content-Type and x-amz-acl headers for a presigned URL. Is it possible to do this with s3-request-presigner? I can't find any examples.
var command = new PutObjectCommand({
  Bucket: 'mybucket',
  Key: 'file.txt',
  ACL: 'public-read',
  CacheControl: 'public, max-age=1000',
  ContentType: 'text/plain',
});

var signedUrl = await getSignedUrl(s3Client, command, {
  expiresIn: 3600,
  signableHeaders: new Set(['Cache-Control', 'Content-Type', 'x-amz-acl'])
});
The resulting URL contains X-Amz-SignedHeaders=host; that is, my headers don't get signed. What do I have to do?
Also, can I constrain the size of the upload with something like content-length-range and sign that as well?
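For reference, a sketch of the browser-side PUT that would accompany such a URL (signedUrl and fileBlob are placeholders): whichever headers end up signed, or hoisted into the query string, generally have to be sent with exactly the same values, or S3 rejects the request.

// Sketch: upload against the presigned URL, repeating the values that
// were passed to PutObjectCommand above.
await fetch(signedUrl, {
  method: 'PUT',
  body: fileBlob,
  headers: {
    'Content-Type': 'text/plain',
    'Cache-Control': 'public, max-age=1000',
    'x-amz-acl': 'public-read'
  }
});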
You have probably solved this by now, but what XHR client were you using?
I had the same issue using axios because I was calling axios like this:
axios({
  url: signedRequest,
  method: 'put',
  data: Body,
  headers: {
    'x-amz-acl': 'public-read-write',
    'Content-Type': 'application/pdf'
  },
  maxContentLength: Infinity,
  maxBodyLength: Infinity
})
I needed to remove the headers property from this axios call because I was already setting the ACL and ContentType when getting the signed request. Once I removed the headers property, it started working.
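In other words, the working call looked roughly like this (a sketch; signedRequest and Body are the same variables as above):

// Sketch of the corrected call: no headers property, because ACL and
// ContentType were already specified when the URL was signed.
axios({
  url: signedRequest,
  method: 'put',
  data: Body,
  maxContentLength: Infinity,
  maxBodyLength: Infinity
})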
I have the following method in a VueJS application:
const downloadImageBase64 = async imageUrl => {
  try {
    var result = await axios({
      method: "get",
      url: imageUrl,
      responseType: 'blob',
      crossdomain: true
    });
    return blobToBase64(result.data);
  }
  catch (err) {
    console.log("err: ", err);
    return "";
  }
};
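blobToBase64 is not shown in the question; a minimal sketch of such a helper, assuming it only needs to wrap FileReader in a Promise, might be:

// Hypothetical helper assumed by the snippet above: resolves with a
// data-URL/base64 string for the given Blob.
const blobToBase64 = blob =>
  new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onloadend = () => resolve(reader.result);
    reader.onerror = reject;
    reader.readAsDataURL(blob);
  });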
I am downloading images and returning them as base64 strings because I'm embedding them into PDFs that I'm creating with jsPDF. The images themselves are hosted in AWS S3, and I have the CORS policy set up on the appropriate S3 bucket:
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "GET"
    ],
    "AllowedOrigins": [
      "https://mydomain.co.za",
      "http://localhost:8082"
    ],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 3000
  }
]
When running the app on localhost, the image download succeeds in both Firefox and Chrome.
However, the moment I deploy the app to my staging environment, the download starts to fail with CORS issues, but only in Chrome. Looking at the request headers, no CORS info is even being sent.
I don't know if there's a preflight that Chrome is not showing in the network traffic, but the console gives me the following error:
Access to XMLHttpRequest at 'https://my-bucket.s3-eu-west-1.amazonaws.com/my-image-path.png' from origin 'https://mydomain.co.za' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
It turns out the issue is that I display the images on the web page where the download-PDF button is placed, which means Chrome caches them; when the PDF code then downloads the images again, Chrome returns the cached responses, which lack the CORS headers. I more or less got the answer from here:
https://www.hacksoft.io/blog/handle-images-cors-error-in-chrome
So the solution is to append a throw-away query parameter to the URL when downloading the image for the PDF:
const downloadImageBase64 = async imageUrl => {
  try {
    var result = await axios({
      method: "get",
      url: `${imageUrl}?not-from-cache-please`,
      responseType: 'blob',
      crossdomain: true
    });
    return blobToBase64(result.data);
  }
  catch (err) {
    console.log("err: ", err);
    return "";
  }
};
Note the:
url: `${imageUrl}?not-from-cache-please`
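A variant of the same idea (my own addition, not from the linked post) is to make the cache-buster unique per request, so a shared or long-lived cache can never hand back the stale, CORS-less copy:

// Sketch: unique throw-away query parameter per download.
const bustedUrl = `${imageUrl}?not-from-cache-please=${Date.now()}`;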
I am generating a presigned URL from a hosted Lambda. I get the presigned URL, but when I do a PUT using the following:
axios.put(response.data, acceptedFiles[0], {
  headers: {
    'Access-Control-Allow-Origin': '*',
    'Content-Type': 'application/json',
    'X-Amz-ACL': 'public-read'
  }
}).then((response) => {
  console.log('put', response);
});
I get a SignatureDoesNotMatch error.
The Lambda that generates the presigned URL is:
var AWS = require('aws-sdk');
var s3 = new AWS.S3({
  signatureVersion: 'v4',
});

exports.handler = (event, context, callback) => {
  const url = s3.getSignedUrl('putObject', {
    Bucket: 'bucketname',
    Key: 'test.json',
    Expires: 600,
    ACL: 'public-read',
    ContentType: 'application/json'
  });
  const res = {
    "statusCode": 200,
    "headers": {
      "Content-Type": "application/json",
      "Access-Control-Allow-Origin": "*",
      "ACL": 'public-read',
    }
  };
  res.body = url;
  callback(null, res);
};
I am a bit stuck on this now, as I have been trying to work it out for the past few days.
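One thing worth checking, sketched below under the assumption that the request must repeat exactly what was signed: Access-Control-Allow-Origin is a response header and should not be sent on the PUT at all, while the ACL that was signed into the URL is sent as the x-amz-acl request header with the same value.

// Sketch: a PUT that only sends headers matching what getSignedUrl signed
// (ContentType 'application/json', ACL 'public-read').
axios.put(response.data, acceptedFiles[0], {
  headers: {
    'Content-Type': 'application/json',
    'x-amz-acl': 'public-read'
  }
}).then((res) => {
  console.log('put', res);
});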
I got Watson Speech-to-Text working on the web. I am now trying to do it in React Native but am getting errors on the file upload part.
I am using the HTTPS Watson API. I need to set the Content-Type, otherwise Watson returns an error response. However, in React Native, for the file upload to work, we seem to need to set 'Content-Type' to 'multipart/form-data'. Is there any way to upload a file in React Native while setting the Content-Type to 'audio/aac'?
The error the Watson API gives me if I set 'Content-Type': 'multipart/form-data' is:
{
  type: "default",
  status: 400,
  ok: false,
  statusText: undefined,
  headers: Object,
  url: "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize?continuous=true",
  _bodyInit: Blob,
  _bodyBlob: Blob
}
The response body is:
{
  "code_description": "Bad Request",
  "code": 400,
  "error": "No JSON object could be decoded"
}
Here is my code (full code is here: gist.github.com):
const ext = 'aac';
const file_path = '/storage/emulated/0/Music/enter-the-book.aac';

data.append('file', {
  uri: `file://${file_path}`,
  name: `recording.${ext}`,
  type: `audio/${ext}`
}, `recording.${ext}`);

const response = await fetch('https://stream.watsonplatform.net/speech-to-text/api/v1/recognize?continuous=true', {
  method: 'POST',
  headers: {
    // 'Content-Type': `audio/${ext}`,
    'Content-Type': 'multipart/form-data',
    'X-Watson-Authorization-Token': token
  },
  body: data
});

console.log('watson-stt::getResults - response:', response);

if (response.status !== 200) {
  const error = await response.text();
  throw new Error(`Got bad response "status" (${response.status}) from Watson Speech to Text server, error: "${error}"`);
}
Here is a screenshot of the error I get when I set 'Content-Type': 'audio/aac'.
Thanks so much to DanielBolanos and NikolayShmyrev; this is the solution I used:
This code is for iOS, so I recorded the audio as blah.ulaw, BUT the part_content_type is audio/mulaw;rate=22050. It is very important to use mulaw here even though the file extension is ulaw. An interesting note: I couldn't play the blah.ulaw file on my macOS desktop.
Also note that you MUST NOT set Content-Type to multipart/form-data; that destroys the boundary.
Also, Bluemix requires the rate in the part_content_type for mulaw.
const body = new FormData();

let metadata = {
  part_content_type: 'audio/mulaw;rate=22050' // notice "mulaw" here, "ulaw" DOES NOT work here
};
body.append('metadata', JSON.stringify(metadata));

body.append('upload', {
  uri: `file://${file_path}`,
  name: `recording.ulaw`, // notice the use of "ulaw" here
  type: `audio/ulaw`      // and here it is also "ulaw"
});

const response = await fetch('https://stream.watsonplatform.net/speech-to-text/api/v1/recognize?continuous=true', {
  method: 'POST',
  headers: {
    // 'Content-Type': 'multipart/form-data' // DO NOT SET THIS!! It destroys the boundary and messes up the request
    'Authorization': `Basic ${btoa(`${USERNAME}:${PASSWORD}`)}`
  },
  body
});
According to the documentation for multipart requests, the request should be:
curl -X POST -u "{username}":"{password}" \
  --header "Transfer-Encoding: chunked" \
  --form metadata="{
    \"part_content_type\":\"audio/flac\",
    \"timestamps\":true,
    \"continuous\":true}" \
  --form upload="@audio-file1.flac" \
  "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize"
So the Content-Type should be multipart/form-data, and you can specify aac as "part_content_type": "audio/aac".
The big problem you have is that audio/aac is not among the supported formats, so you will probably need another codec.
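Translated into the fetch/FormData style used above (a sketch; flac is only an example of a supported format, and the field names mirror the curl example):

// Sketch: multipart request matching the curl example, with a supported codec.
const body = new FormData();
body.append('metadata', JSON.stringify({
  part_content_type: 'audio/flac',
  timestamps: true,
  continuous: true
}));
body.append('upload', {
  uri: `file://${file_path}`,   // file_path as in the question
  name: 'audio-file1.flac',
  type: 'audio/flac'
});

const response = await fetch('https://stream.watsonplatform.net/speech-to-text/api/v1/recognize', {
  method: 'POST',
  headers: {
    // no Content-Type here; fetch adds the multipart boundary itself
    'X-Watson-Authorization-Token': token // token as in the question
  },
  body
});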
I am using ng-file-upload to send a file to AWS S3 in my Angular app.
Upload.http({
  url: '/presignedurl',
  headers: {
    'Content-Type': file.type
  },
  data: file
})
It is giving me a 403 Forbidden error saying:
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
AWS S3 expects the body as binary/octet-stream, so you can use the FileReader class in JavaScript to convert the file data to binary/octet-stream.
Replace your code with this:
var reader = new FileReader();
var xhr = new XMLHttpRequest();
xhr.open("PUT", $scope.url);

reader.onload = function (evt) {
  xhr.send(evt.target.result);
};
reader.readAsArrayBuffer($files[file]);
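If you also need to know when the transfer finishes (an addition of mine, not part of the original answer), XMLHttpRequest exposes onload/onerror handlers:

// Sketch: basic completion handling for the XHR above.
xhr.onload = function () {
  console.log('upload finished with status', xhr.status);
};
xhr.onerror = function () {
  console.error('upload failed');
};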
You can try something like this:
var config = {
  url: result.signed_request,
  headers: {
    "Content-Type": files[0].type != '' ? files[0].type : 'application/octet-stream'
  },
  method: 'PUT',
  data: files[0]
};

Upload.http(config);