Axios CORS not working in Chrome on deployed site - amazon-s3

I have the following method in a VueJS application:
const downloadImageBase64 = async imageUrl => {
  try {
    var result = await axios({
      method: "get",
      url: imageUrl,
      responseType: 'blob',
      crossdomain: true
    });
    return blobToBase64(result.data);
  }
  catch (err) {
    console.log("err: ", err);
    return "";
  }
};
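The blobToBase64 helper is not shown in the question; a typical FileReader-based implementation (my sketch, not the original code) would look like this:

const blobToBase64 = blob =>
  new Promise((resolve, reject) => {
    const reader = new FileReader();
    // reader.result is a data URL ("data:image/png;base64,...")
    reader.onloadend = () => resolve(reader.result);
    reader.onerror = reject;
    reader.readAsDataURL(blob);
  });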
I am downloading images and returning them as base64 strings because I'm embedding them into PDFs that I create with jsPDF. The images themselves are hosted in AWS S3, and I have the CORS policy set up on the appropriate S3 bucket:
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "GET"
    ],
    "AllowedOrigins": [
      "https://mydomain.co.za",
      "http://localhost:8082"
    ],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 3000
  }
]
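For reference, the same rules can also be applied programmatically with the AWS SDK for JavaScript; this is only a sketch, with my-bucket standing in for the real bucket name:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Mirrors the JSON policy above; CORSRules uses the same field names.
s3.putBucketCors({
  Bucket: 'my-bucket', // placeholder
  CORSConfiguration: {
    CORSRules: [{
      AllowedHeaders: ['*'],
      AllowedMethods: ['GET'],
      AllowedOrigins: ['https://mydomain.co.za', 'http://localhost:8082'],
      ExposeHeaders: [],
      MaxAgeSeconds: 3000
    }]
  }
}, (err) => {
  if (err) console.error('putBucketCors failed', err);
});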
When running the app on my localhost, the image download succeeds in both Firefox and Chrome.
However, the moment I deploy the app to my staging environment, the download starts to fail with CORS issues, but only in Chrome. Looking at the request headers, no CORS info is even being sent.
I don't know if there's a preflight that Chrome is not showing in the network traffic, but the console gives me the following error:
Access to XMLHttpRequest at 'https://my-bucket.s3-eu-west-1.amazonaws.com/my-image-path.png' from origin 'https://mydomain.co.za' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.

It turns out the issue is that I display the images on the web page where the download-PDF button is placed, which means Chrome caches them; when the images are then requested again for the PDF, Chrome returns the cached responses, which were stored without the CORS headers. I more or less got the answer from here:
https://www.hacksoft.io/blog/handle-images-cors-error-in-chrome
So the solution is to append a throwaway query parameter to the URL when downloading for the PDF:
const downloadImageBase64 = async imageUrl => {
  try {
    var result = await axios({
      method: "get",
      url: `${imageUrl}?not-from-cache-please`,
      responseType: 'blob',
      crossdomain: true
    });
    return blobToBase64(result.data);
  }
  catch (err) {
    console.log("err: ", err);
    return "";
  }
};
Note the:
url: `${imageUrl}?not-from-cache-please`
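If the same image might be downloaded for a PDF more than once in a session, a unique value such as a timestamp guarantees the request never matches a cached entry. A sketch of that variant (the cache-bust parameter name is arbitrary; blobToBase64 is the same helper as above):

const downloadImageBase64 = async imageUrl => {
  try {
    // Date.now() makes each URL unique, so Chrome cannot reuse the
    // cached (CORS-header-less) copy of the image.
    const result = await axios({
      method: "get",
      url: `${imageUrl}?cache-bust=${Date.now()}`,
      responseType: 'blob'
    });
    return blobToBase64(result.data);
  } catch (err) {
    console.log("err: ", err);
    return "";
  }
};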

Related

Next.js 404 fail to load resource /api/ on Vercel

I'm building a Next.js website that I deploy on Vercel.
I made a Next.js API route, /api/contact, which sends a mail via nodemailer. It works fine when I try the code on my PC, but when I deploy to Vercel (with the GitHub integration) I get a "404 failed to load resource" for /api/contact in the console and it doesn't work.
Is there any more configuration needed for a Next.js API route to work on Vercel?
Here is the code for the API call:
fetch("/api/contact", {
method: "POST",
headers: {
Accept: "application/json, text/plain, */*",
"Content-Type": "application/json",
},
body: JSON.stringify(data),
}).then((res) => {
contact.js in api folder
So here are the answers that worked:
The "failed to load resource" error disappeared after several deployments, but I still had a 404.
The problem was that transporter.sendMail needed to be async (awaited), and I also had issues with Gmail, so I ended up using another mail provider (Zoho). For anyone facing the same issues, here is working code (maybe not the best, but it works):
export default async (req, res) => {
  require('dotenv').config()
  const nodemailer = require('nodemailer');

  async function mail() {
    console.log('enter async function');
    const transporter = nodemailer.createTransport({
      name: "smtp.zoho.com",
      port: 465,
      host: "smtp.zoho.com",
      auth: {
        user: process.env.mailsender,
        pass: process.env.mailpw,
      },
      secure: true,
    })
    let mail = await transporter.sendMail({
      from: process.env.mailsender,
      to: process.env.mailreceive,
      subject: `${req.body.message}`,
      text: `${req.body.message}`,
      html: `<div><p>${req.body.message}</p></div>`
    });
  }

  try {
    console.log('sending mail');
    await mail();
    res.status(200);
    console.log('mail should be sent');
  } catch (error) {
    console.log(error);
    console.log('error sending mail');
    res.status(404);
  } finally {
    res.end();
  }
}
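Since the handler above only sets a status code and ends the response without a body, the client side just needs to check res.ok. A sketch of how the fetch call from the question could finish (data is whatever the contact form collected; the logging is mine, not part of the original code):

fetch("/api/contact", {
  method: "POST",
  headers: {
    Accept: "application/json, text/plain, */*",
    "Content-Type": "application/json",
  },
  body: JSON.stringify(data),
}).then((res) => {
  if (res.ok) {
    console.log("mail sent");                   // handler responded with 200
  } else {
    console.log("mail failed with status", res.status); // handler responded with 404
  }
});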

Browser Cancels PUT requests

I've got a Rails API and a React front end that uses axios to interact with the API. I have enabled CORS in Rails, but the request below gets cancelled by the browser, and I can't find the reason for it.
Request copied as "fetch" call:
fetch("http://localhost:3000/api/v1/profile",
{
"credentials":"include",
"headers":{
"accept":"application/json, text/plain, */*",
"accept-language":"en-GB,en-US;q=0.9,en;q=0.8",
"authorization":"Bearer eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiIxIiwic2NwIjoiYWNjb3VudCIsImF1ZCI6bnVsbCwiaWF0IjoxNTg2OTY4MjIwLCJleHAiOjE1ODcwNTQ2MjAsImp0aSI6IjhkNTE2YjIzLTA1MGQtNGU2MS04ZWE1LWM3ZGIwMzkxNTg0NCJ9.KuwtVNB5minrOs3lvfNjt7lVQSWNRXdqZsbErb6SrGM",
"content-type":"application/json",
"sec-fetch-dest":"empty",
"sec-fetch-mode":"cors",
"sec-fetch-site":"same-site"
},
"referrer":"http://localhost:3001/profile?",
"referrerPolicy":"no-referrer-when-downgrade",
"body":"{\"first_name\":\"testsdasd\",\"last_name\":\"asadeesdfsfs\"}",
"method":"PUT",
"mode":"cors"
});
Is this getting cancelled due to CORS? BTW, other POST requests are getting through.
Thanks a lot.
Below is the code using axios.
async updateProfileData(profile) {
  try {
    let axiosResponse = await AxiosClient.instance().put('http://localhost:3000/api/v1/profile', {
      first_name: profile.first_name,
      last_name: profile.last_name
    }, {
      headers: {
        "Content-Type": "application/json"
      }
    });
    return axiosResponse;
  } catch (e) {
    return e.response;
  }
}
Found the answer. Firefox showed the underlying CORS error:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at ‘http://localhost:3000/api/v1/profile’. (Reason: Credential is not supported if the CORS header ‘Access-Control-Allow-Origin’ is ‘*’).
https://developer.mozilla.org/docs/Web/HTTP/CORS/Errors/CORSNotSupportingCredentials
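The fix is server-side: with credentialed requests, the server has to echo the specific origin and allow credentials rather than answer with a wildcard. The API in the question is Rails, so purely as a JavaScript illustration of the same rule, here is how an Express server using the cors middleware would be configured (the origin is taken from the referrer above; the route body is a placeholder):

const express = require('express');
const cors = require('cors');

const app = express();
app.use(express.json());

// A credentialed request is rejected if Access-Control-Allow-Origin is '*',
// so the exact origin must be named and credentials enabled.
app.use(cors({
  origin: 'http://localhost:3001', // the front end from the question's referrer
  credentials: true,               // adds Access-Control-Allow-Credentials: true
}));

app.put('/api/v1/profile', (req, res) => {
  res.json({ updated: req.body }); // placeholder handler
});

app.listen(3000);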

How to Fix '422 Unprocessable Entity' when sending a POST request to Redmine API?

I am trying to create a wiki page using the Redmine REST API.
Authentication succeeded; however, the wiki page is not being created because of a 422 error.
The Redmine documentation says: "When trying to create or update an object with invalid or missing attribute parameters, you will get a 422 Unprocessable Entity response. That means that the object could not be created or updated."
But I can't seem to find out where I messed up. The problem came up when I did the second request (the PUT request), so we know the problem is somewhere in that section.
My guess is that it is either the file path or the content type.
This is what I have so far:
const wordDocument = "C:\Users\adasani\Desktop\practice\RedmineApi/RedmineText.txt";
creatingWikiPage_Request(wordDocument);

function creatingWikiPage_Request(wordDocument) {
  axios({
    method: 'post',
    url: '<redmine_url>/uploads.json',
    headers: { 'Content-Type': 'application/octet-stream' },
    params: { 'key': '<api-key>' },
    data: wordDocument
  })
    .then(function (response) {
      console.log("succeeed---> ");
      console.log(response.data.upload.token)
      axios({
        method: 'put',
        url: '<redmine_url>/projects/Testing/wiki/WikiTesting.json',
        headers: { 'Content-Type': 'application/octet-stream' },
        params: { 'key': '<api-key>' },
        data: {
          "wiki_page": {
            "text": "This is a wiki page with images, and other files.",
            "uploads": [
              { "token": response.data.upload.token, "filename": "RedmineText.txt", "content-type": "text/plain" }
            ]
          }
        }
      })
        .then(response => {
          console.log("PUT is Succeed-->>>")
          console.log(response)
        })
        .catch(error => {
          console.log("Error-->>")
          console.log(error.response)
        })
    })
    .catch(function (error) {
      console.log("failed-----> ");
      console.log(error.response.statusText, "-->", error.response.status);
      console.log(error.response.headers)
      console.log(error.message)
      console.log("failed-----> ");
    })
}
I am supposed to see a wiki page being created in my Redmine dashboard, but instead I am getting a 422 error.
You are sending the update request to the JSON API, i.e. <redmine_url>/projects/Testing/wiki/WikiTesting.json, with Content-Type: application/octet-stream. Because of this, Redmine is unable to parse the PUT payload, since it doesn't know what format the data is in.
To solve this, always make sure to set the correct content type when posting data. In this case, set the Content-Type header to application/json when sending any JSON-formatted data to Redmine.
Note that in principle you can send XML data to Redmine and get JSON back. The output format is determined by the file ending in the URL (.json or .xml); the format of the data you send is always identified by the Content-Type header.
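Applied to the code in the question, only the Content-Type on the second request needs to change. A minimal sketch, keeping the <redmine_url> and <api-key> placeholders; uploadToken stands for the response.data.upload.token value returned by the first request:

axios({
  method: 'put',
  url: '<redmine_url>/projects/Testing/wiki/WikiTesting.json',
  headers: { 'Content-Type': 'application/json' }, // the body is JSON, so say so
  params: { 'key': '<api-key>' },
  data: {
    wiki_page: {
      text: "This is a wiki page with images, and other files.",
      uploads: [
        { token: uploadToken, filename: "RedmineText.txt", "content-type": "text/plain" }
      ]
    }
  }
});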
I had a similar issue while uploading multiple files to a server from my Flutter app. The issue is that some servers need the [] format on the key to receive multiple files.
=> Change from:
formData.files.add(MapEntry(
  "videos",
  await MultipartFile.fromFile(curPost.url, filename: getFileNameByFullPath(curPost.url)),
));
=> To:
formData.files.add(MapEntry(
  "videos[]",
  await MultipartFile.fromFile(curPost.url, filename: getFileNameByFullPath(curPost.url)),
));
Here I just changed the key from videos to videos[].
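For comparison, the same naming trick in browser JavaScript with FormData would look like this (a sketch only; files is assumed to be a FileList from an <input type="file" multiple> element and /upload is a placeholder endpoint, neither of which appears in the question):

const formData = new FormData();
for (const file of files) {
  // The "[]" suffix lets servers that expect array-style keys
  // collect every entry under a single "videos" field.
  formData.append("videos[]", file, file.name);
}
axios.post("/upload", formData);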

Downloading images from AWS S3 via Lambda and API Gateway, using the fetch API

I'm trying to use the JavaScript fetch API, AWS API Gateway, AWS Lambda, and AWS S3 to create a service that allows users to upload and download media. The server is using Node.js 8.10; the browser is Google Chrome Version 69.0.3497.92 (Official Build) (64-bit).
In the long term, allowable media would include audio, video, and images. For now, I'd be happy just to get images to work.
The problem I'm having: my browser-side client, implemented using fetch, is able to upload JPEGs to S3 via API Gateway and Lambda just fine. I can use curl or the S3 Console to download the JPEG from my S3 bucket and then view the image in an image viewer just fine.
But, if I try to download the image via the browser-side client and fetch, I get nothing that I'm able to display in the browser.
Here's the code from the browser-side client:
fetch(
  'path/to/resource',
  {
    method: 'post',
    mode: "cors",
    body: an_instance_of_file_from_an_html_file_input_tag,
    headers: {
      Authorization: user_credentials,
      'Content-Type': 'image/jpeg',
    },
  }
).then((response) => {
  return response.blob();
}).then((blob) => {
  const img = new Image();
  img.src = URL.createObjectURL(blob);
  document.body.appendChild(img);
}).catch((error) => {
  console.error('upload failed', error);
});
Here's the server-side code, using Claudia.js:
const AWS = require('aws-sdk');
const ApiBuilder = require('claudia-api-builder');
const api = new ApiBuilder();

api.corsOrigin(allowed_origin);

api.registerAuthorizer('my authorizer', {
  providerARNs: ['arn of my cognito user pool']
});

api.get(
  '/media',
  (request) => {
    'use strict';
    const s3 = new AWS.S3();
    const params = {
      Bucket: 'name of my bucket',
      Key: 'name of an object that is confirmed to exist in the bucket and to be properly encoded as and readable as a JPEG',
    };
    return s3.getObject(params).promise().then((response) => {
      return response.Body;
    });
  }
);

module.exports = api;
I inspected the initial OPTIONS request and response headers, and the subsequent GET request and response headers, in Chrome's Network panel.
What's interesting to me is that the image size is reported as 699873 (with no units) in the S3 Console, but the response body of the GET transaction is reported in Chrome as roughly 2.5 MB (again, with no units).
The resulting image is a 16x16 broken-image placeholder. I get no errors or warnings whatsoever in the browser's console or in CloudWatch.
I've tried a lot of things; would be interested to hear what anyone out there can come up with.
Thanks in advance.
Claudia requires that the client specify which MIME type it will accept on binary payloads. So, keep the 'Content-Type' entry in the headers object client-side:
fetch(
  'path/to/resource',
  {
    method: 'post',
    mode: "cors",
    body: an_instance_of_file_from_an_html_file_input_tag,
    headers: {
      Authorization: user_credentials,
      'Content-Type': 'image/jpeg', // <-- This is important.
    },
  }
).then((response) => {
  return response.blob();
}).then((blob) => {
  const img = new Image();
  img.src = URL.createObjectURL(blob);
  document.body.appendChild(img);
}).catch((error) => {
  console.error('upload failed', error);
});
Then, on the server side, you need to tell Claudia that the response should be binary and which MIME type to use:
const AWS = require('aws-sdk');
const ApiBuilder = require('claudia-api-builder');
const api = new ApiBuilder();

api.corsOrigin(allowed_origin);

api.registerAuthorizer('my authorizer', {
  providerARNs: ['arn of my cognito user pool']
});

api.get(
  '/media',
  (request) => {
    'use strict';
    const s3 = new AWS.S3();
    const params = {
      Bucket: 'name of my bucket',
      Key: 'name of an object that is confirmed to exist in the bucket and to be properly encoded as and readable as a JPEG',
    };
    return s3.getObject(params).promise().then((response) => {
      return response.Body;
    });
  },
  /** Add this. **/
  {
    success: {
      contentType: 'image/jpeg',
      contentHandling: 'CONVERT_TO_BINARY',
    },
  }
);

module.exports = api;
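With that in place, the download side can be a plain GET that declares the binary type it will accept and turns the response body into an object URL. This is only a sketch using the same placeholders as the question (path/to/resource, user_credentials); the Accept header value is my assumption about which MIME type the client declares it will accept:

fetch('path/to/resource', {
  method: 'get',
  mode: 'cors',
  headers: {
    Authorization: user_credentials,
    Accept: 'image/jpeg', // declare the binary type the client will accept
  },
}).then((response) => {
  return response.blob();
}).then((blob) => {
  const img = new Image();
  img.src = URL.createObjectURL(blob);
  document.body.appendChild(img);
}).catch((error) => {
  console.error('download failed', error);
});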

Google Cloud Storage set cache-control with signed urls upload

We're using signed URLs to upload from the browser. I haven't been able to figure out how to set the Cache-Control header while uploading.
We're using the gcloud-node library to sign the URLs:
var bucket = gcs.bucket('mybucket');
var file = bucket.file('image.jpg');

var expireDate = new Date();
expireDate.setDate(expireDate.getDate() + 1);

file.getSignedUrl({
  action: 'write',
  expires: expireDate,
  contentType: 'image/jpeg'
}, function (err, signedUrl) {
  if (err) {
    console.error('SignedUrl error', err);
  } else {
    console.log(signedUrl);
  }
});
How do I set the Cache-Control headers while uploading a file to GCS?
The code to upload is running in the browser:
var signedUrl = ...; // get from nodejs server
var fileList = this.files;
var file = fileList[0];

jQuery.ajax({
  url: signedUrl,
  type: 'PUT',
  data: file,
  processData: false,
  contentType: 'image/jpeg'
});
This is possible, but the documentation is terrible. First you need to set up CORS on the bucket you're uploading to with:
gsutil cors set cors.json gs://bucket-name
Where cors.json contains something like:
[{
  "maxAgeSeconds": 3600,
  "method": ["GET", "PUT", "POST"],
  "origin": [
    "http://localhost:3000"
  ],
  "responseHeader": ["Content-Type", "Cache-Control"]
}]
"Cache-Control" needs to be listed in the "responseHeader" field. Then upload like you normally would, but set the Cache-Control header. Using fetch it would be:
fetch(uploadUrl, {
  method: 'PUT',
  body: blob,
  headers: {
    'Content-Type': blob.type,
    'Cache-Control': 'public, max-age=31536000',
  },
});
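If you prefer to keep the jQuery upload from the question, the same header can be passed through jQuery.ajax's headers option (a sketch reusing the signedUrl and file variables from the question):

jQuery.ajax({
  url: signedUrl,
  type: 'PUT',
  data: file,
  processData: false,
  contentType: 'image/jpeg',
  headers: {
    // Stored by GCS as the object's Cache-Control metadata
    'Cache-Control': 'public, max-age=31536000'
  }
});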
The snippet you have is only getting a signed URL. When you upload (insert) the object into GCS, you should be able to set Cache-Control via the API:
https://cloud.google.com/storage/docs/json_api/v1/objects/insert