Direct Upload to S3 from the Browser with AWS Signature Version 4

I need to upload a file to S3 directly from the browser. Initially I created a script that works, but to authorize the requests I had to embed my credentials (accessKeyId and secretAccessKey), which is not secure.
I figured out that I can use the Authorization signature header (AWS Signature Version 4) instead.
It seems great, but I can't find where to attach this authorization header to the request in the upload() method.
An example of my authorization header:
Authorization: AWS4-HMAC-SHA256
Credential=/20151016//s3/aws4_request,
SignedHeaders=content-type;host;x-amz-date,
Signature=4eee344a71a58623febc4079024a27cb62f3d26546695422244fcefe50d0168d
Thanks for your advice.

I have found a solution for this issue. My solution is based on an example from this site.
The final solution does not use the JavaScript SDK; it uses a POST form whose authorization fields are sent as POST parameters.

You can enclose a signed policy document with your POST request in order to authenticate securely with AWS Signature Version 4.
If you're on Node, you can use the aws-s3-form package on the server to generate the necessary form data your client requires in order to send a successful request to S3.
You might want to read my blog post on the subject for full insight.
Example Server Side Code (Node)
let AwsS3Form = require('aws-s3-form')
[...]
// A hapi.js server route
server.route({
  method: ['GET',],
  path: '/api/s3Settings',
  config: {
    auth: 'session',
    handler: (request, reply) => {
      let {key,} = request.query
      let keyPrefix = `u/${request.auth.credentials.username}/`
      let region = process.env.S3_REGION
      // `bucket` was undefined in the original snippet; assumed here to come from the environment
      let bucket = process.env.S3_BUCKET
      let s3Form = new AwsS3Form({
        accessKeyId: process.env.AWS_ACCESS_KEY,
        secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
        region,
        bucket,
        keyPrefix,
        successActionStatus: 200,
      })
      let url = `https://s3.${region}.amazonaws.com/${bucket}/${keyPrefix}${key}`
      let formData = s3Form.create(key)
      reply({
        bucket,
        region,
        url,
        fields: formData.fields,
      })
    },
  },
})
Example Client Side Code
let R = require('ramda')
let ajax = require('./ajax')

class S3Uploader {
  constructor({folder,}) {
    this.folder = folder
  }

  send(file) {
    let key = `${this.folder}/${file.name}`
    return ajax.getJson(`s3Settings`, {key,})
      .then((s3Settings) => {
        let formData = new FormData()
        R.forEach(([key, value,]) => {
          formData.append(key, value)
        }, R.toPairs(s3Settings.fields))
        formData.append('file', file)
        return new Promise((resolve, reject) => {
          let request = new XMLHttpRequest()
          request.onreadystatechange = () => {
            if (request.readyState === XMLHttpRequest.DONE) {
              if (request.status === 200) {
                resolve(s3Settings.url)
              } else {
                reject(request.responseText)
              }
            }
          }
          let url = `https://s3.${s3Settings.region}.amazonaws.com/${s3Settings.bucket}`
          request.open('POST', url, true)
          request.send(formData)
        })
      }, (error) => {
        throw new Error(`Failed to receive S3 settings from server`)
      })
  }
}
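For illustration, a hypothetical way to wire this class to a file input (the 'avatars' folder name and the selector are made up, not part of the original code):
let uploader = new S3Uploader({folder: 'avatars',})
document.querySelector('input[type=file]').addEventListener('change', (event) => {
  uploader.send(event.target.files[0])
    .then((url) => console.log(`Uploaded to ${url}`))
})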

Related

How to use Nuxt 3 server as a passthrough API with FormData to hide external endpoints

I'm trying to get my head around the Nuxt /server API and can't seem to figure out how to send a POST request with form data (i.e. files) to the Nuxt server to forward on to an external service:
In my pages.vue file I have this method:
async function onSubmit() {
  const formData = new FormData();
  for (let file of form.files) {
    formData.append("image", file); // append() is synchronous; no await needed
  }
  await $fetch("/api/send", {
    method: "POST",
    body: formData
  });
}
and then in /server/api/send.js I have:
export default defineEventHandler(async (event) => {
  const { method } = event.node.req;
  // I THINK THE ISSUE IS HERE
  const body =
    method !== "GET" && method !== "HEAD"
      ? await readMultipartFormData(event)
      : undefined;
  const response = await $fetch.raw("https://*******", {
    method,
    baseURL: "*********", // external endpoint redacted in the question
    headers: {
    },
    body: body
  });
  return response._data;
});
I'm effectively creating a passthrough API using Nuxt so that the external endpoint isn't exposed to the end user. Just can't figure out how to access the formData in the correct format to pass through on the server side. I don't think I am supposed to use readMultipartFormData() because that seems to be parsing the data somehow whereas I just want to pass the formData straight through to the external API. Any tips?
I've tried using both readMultipartFormData() and readBody() and neither seem to work. I don't actually need to read the body but rather get it and pass it through without any formatting...
If you want to pass the data as FormData to the endpoint, try this library:
https://www.npmjs.com/package/object-to-formdata
code:
import { serialize } from 'object-to-formdata';

// `body` is assumed to be the request body read earlier in the handler
const formData = serialize(body);
const response = await $fetch.raw("https://*******", {
  method,
  baseURL: "*********",
  headers: {
  },
  body: formData
});
I managed to make it work with an ugly solution. First you have to update Nuxt to at least version 3.2.0; then here is my front side:
let jobApplicationDTO = {
  firstName: values.firstName,
  lastName: values.lastName,
  email: values.email,
  phoneNumber: values.phoneNumber,
  company: values.company,
  shortDescription: values.shortDescription
};
const formData = new FormData();
formData.append("application", new Blob([JSON.stringify(jobApplicationDTO)], {type: "application/json"}));
formData.append("file", values.file);
await useFetch("/api/application", {
  method: "POST",
  body: formData,
  onResponse({request, response, options}) {
    // Process the response data
    if (response.status === 200) {
      errorMessage.value = "";
      successMessage.value = "Your application was sent successfully, you will be contacted soon!";
    }
  },
  onResponseError({request, response, options}) {
    console.debug(response);
    if (response.status === 400) {
      successMessage.value = "";
      errorMessage.value = "There may be an issue with our server. Please try again later, or send an email to support@mantiq.com";
    } else {
      successMessage.value = "";
      errorMessage.value = "Sorry we couldn't send the message, there may be an issue with our server. Please try again later, or send an email to support@mantiq.com";
    }
  },
});
and server side
import {FormData} from "node-fetch-native";

export default defineEventHandler(async (event) => {
  const {BACKEND_REST_API, ENQUIRY_TOKEN} = useRuntimeConfig();
  // Retrieve the form data POSTed by the frontend
  const form = await readMultipartFormData(event);
  const applicationUrl = BACKEND_REST_API + '/job/apply';
  console.log("url used for enquiry rest call: " + applicationUrl);
  console.log("Job application token: " + ENQUIRY_TOKEN);
  const formData = new FormData();
  console.log(form);
  if (form) {
    // Rebuild the multipart body: part 0 is the JSON DTO, part 1 is the file
    formData.append(form[0].name, new Blob([JSON.stringify(JSON.parse(form[0].data))], {type: form[0].type}));
    formData.append(form[1].name, new Blob([form[1].data], {type: form[1].type}), form[1].filename);
  }
  return await $fetch(applicationUrl, {
    method: "POST",
    body: formData,
    headers: {
      Authorization: ENQUIRY_TOKEN,
    },
  });
})
What is funny is that on the frontend you have to create a FormData, then on the server you have to read its content and recreate a FormData from your previous FormData converted into MultiFormPart[]. I created a ticket on Nuxt to see how to do it properly.
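A cleaner passthrough may be possible with h3's proxyRequest helper (auto-imported in Nuxt 3 server routes), which streams the raw request body onward without parsing it at all. This is only a sketch under that assumption, reusing the route and config names from above:
// server/api/application.post.js
export default defineEventHandler((event) => {
  const {BACKEND_REST_API, ENQUIRY_TOKEN} = useRuntimeConfig();
  // Forward the incoming multipart body as-is to the external API
  return proxyRequest(event, BACKEND_REST_API + '/job/apply', {
    headers: {
      Authorization: ENQUIRY_TOKEN,
    },
  });
});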

Trying to set a cookie established on a web session as a header back to API

I am trying to log in via the web front end, intercept a cookie, and then use that cookie in a subsequent API request. I am having trouble getting the cookie into the GET request. Code posted below.
import https from 'https';
import { bitbucketUser } from "../userRole.js"
import { ClientFunction } from 'testcafe';

fixture `Request/Response API`
  // .page `https://myurl.company.com/login`
  .beforeEach(async t => {
    await t.useRole(bitbucketUser)
  });

test('test', async t => {
  const getCookie = ClientFunction(() => {
    return document.cookie;
  });
  var mycookie = await getCookie()
  const setCookie = ClientFunction(mycookie => {
    document.cookie = mycookie;
  });
  var validatecookie = await getCookie()
  console.log(validatecookie)
  const executeRequest = () => {
    return new Promise(resolve => {
      const options = {
        hostname: 'myurl.company.com',
        path: '/v1/api/policy',
        method: 'GET',
        headers: {
          'accept': 'application/json;charset=UTF-8',
          'content-type': 'application/json'
        }
      };
      const req = https.request(options, res => {
        console.log('statusCode:', res.statusCode);
        console.log('headers:', res.headers);
        let body = "";
        res.on("data", data => {
          body += data;
        });
        res.on("end", () => {
          body = JSON.parse(body);
          console.log(body);
        });
        resolve();
      });
      req.on('error', e => {
        console.error(e);
      });
      req.end();
    });
  };
  await setCookie(mycookie)
  await executeRequest();
});
I have tried several examples but am not quite able to figure out what I am missing.
When you call the setCookie method, you modify cookies in your browser using the ClientFunction.
However, when you call your executeRequest method, it runs on the server side using the Node.js https library. Setting cookies on the client will not affect a request sent from the server side. You need to add the cookie information directly to your options object, as described in the following thread: How do I create a HTTP Client Request with a cookie?
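For example, a sketch based on the question's own options object (mycookie is the string captured in the browser):
const options = {
  hostname: 'myurl.company.com',
  path: '/v1/api/policy',
  method: 'GET',
  headers: {
    'accept': 'application/json;charset=UTF-8',
    'content-type': 'application/json',
    // Forward the cookie captured in the browser to the server-side request
    'cookie': mycookie
  }
};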
In TestCafe v1.20.0 and later, you can send HTTP requests in your tests using the t.request method. You can also use the withCredentials option to attach all cookies to a request.
Please also note that TestCafe also offers a cookie management API to set/get/delete cookies including HTTPOnly.
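A rough sketch of the t.request approach, assuming TestCafe v1.20.0 or later (the URL is taken from the question):
test('policy api', async t => {
  // withCredentials attaches the browser session's cookies to the request
  const response = await t.request('https://myurl.company.com/v1/api/policy', {
    method: 'GET',
    withCredentials: true
  });
  console.log(response.status, response.body);
});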

Bing Ads Script to change shared campaign budget on multiple accounts using Google Sheets

I have a Google Ads script running that changes campaign budgets, but implementing the same script in Bing Ads is more difficult for me. I'm having problems with the code that connects Google Sheets with the Bing Ads Script. I have the clientId, clientSecret and refresh token to authorize the Google service in Bing, but am struggling with the code that lets the script read my Google Sheets file.
I attached the code responsible for connecting the Google Sheets file to the Bing script. It should allow the script to read the file's content and later change budgets to whatever values I provided in that file.
const credentials = {
  accessToken: '', // not sure if I need this when I have a refresh token
  clientId: 'HIDDEN',
  clientSecret: 'HIDDEN',
  refreshToken: 'HIDDEN'
};

function main() {
  var SPREADSHEET_URL = 'HIDDEN';
  var GoogleApis;
  (function (GoogleApis) {
    GoogleApis.readSheetsService = credentials => readService("https://sheets.googleapis.com/$discovery/rest?version=v4", credentials);

    // Creation logic based on https://developers.google.com/discovery/v1/using#usage-simple
    function readService(SPREADSHEET_URL, credentials) {
      const content = UrlFetchApp.fetch(SPREADSHEET_URL).getContentText();
      const discovery = JSON.parse(content);
      const accessToken = getAccessToken(credentials);
      const standardParameters = discovery.parameters;
    }

    function getAccessToken(credentials) {
      if (credentials.accessToken) {
        return credentials.accessToken;
      }
      const tokenResponse = UrlFetchApp.fetch('https://www.googleapis.com/oauth2/v4/token', {
        method: 'post',
        contentType: 'application/x-www-form-urlencoded',
        muteHttpExceptions: true,
        payload: {
          client_id: credentials.clientId,
          client_secret: credentials.clientSecret,
          refresh_token: credentials.refreshToken,
          grant_type: 'refresh_token'
        }
      });
      const responseCode = tokenResponse.getResponseCode();
      const responseText = tokenResponse.getContentText();
      if (responseCode >= 200 && responseCode <= 299) {
        const accessToken = JSON.parse(responseText)['access_token'];
        return accessToken;
      }
      throw new Error(responseText);
  })(GoogleApis || (GoogleApis = {}));
It throws a syntax error on the last line of the code:
})(GoogleApis || (GoogleApis = {}));
but I think there is more to it than that (for one, neither the IIFE's function body nor main() is ever closed).
Please try declaring var GoogleApis outside main(), as this example shows: https://learn.microsoft.com/en-us/advertising/scripts/examples/calling-google-services
I hope this helps.
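A rough sketch of that structure, carrying over the details from the question (the missing closing braces noted above would also need to be added):
var GoogleApis;
(function (GoogleApis) {
  GoogleApis.readSheetsService = credentials => readService("https://sheets.googleapis.com/$discovery/rest?version=v4", credentials);
  function readService(SPREADSHEET_URL, credentials) {
    // ... as in the question ...
  }
  function getAccessToken(credentials) {
    // ... as in the question ...
  }
})(GoogleApis || (GoogleApis = {}));

function main() {
  var SPREADSHEET_URL = 'HIDDEN';
  var sheetsService = GoogleApis.readSheetsService(credentials);
  // ... read the spreadsheet and update budgets ...
}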

Downloading images from AWS S3 via Lambda and API Gateway, using fetch

I'm trying to use the JavaScript fetch API, AWS API Gateway, AWS Lambda, and AWS S3 to create a service that allows users to upload and download media. The server runs Node.js 8.10; the browser is Google Chrome Version 69.0.3497.92 (Official Build) (64-bit).
In the long term, allowable media would include audio, video, and images. For now, I'd be happy just to get images to work.
The problem I'm having: my browser-side client, implemented using fetch, can upload JPEGs to S3 via API Gateway and Lambda just fine. I can use curl or the S3 Console to download the JPEG from my S3 bucket and then view the image in an image viewer without issue.
But if I try to download the image via the browser-side client and fetch, I get nothing that I'm able to display in the browser.
Here's the code from the browser-side client:
fetch(
  'path/to/resource',
  {
    method: 'post',
    mode: "cors",
    body: an_instance_of_file_from_an_html_file_input_tag,
    headers: {
      Authorization: user_credentials,
      'Content-Type': 'image/jpeg',
    },
  }
).then((response) => {
  return response.blob();
}).then((blob) => {
  const img = new Image();
  img.src = URL.createObjectURL(blob);
  document.body.appendChild(img);
}).catch((error) => {
  console.error('upload failed', error);
});
Here's the server-side code, using Claudia.js:
const AWS = require('aws-sdk');
const ApiBuilder = require('claudia-api-builder');
const api = new ApiBuilder();

api.corsOrigin(allowed_origin);

api.registerAuthorizer('my authorizer', {
  providerARNs: ['arn of my cognito user pool']
});

api.get(
  '/media',
  (request) => {
    'use strict';
    const s3 = new AWS.S3();
    const params = {
      Bucket: 'name of my bucket',
      Key: 'name of an object that is confirmed to exist in the bucket and to be properly encoded as and readable as a JPEG',
    };
    return s3.getObject(params).promise().then((response) => {
      return response.Body;
    });
  }
);

module.exports = api;
Chrome's Network Panel shows the initial OPTIONS request and response headers, followed by the consequent GET request and response headers (screenshots omitted).
What's interesting to me is that the image size is reported as 699873 (with no units) in the S3 Console, but the response body of the GET transaction is reported in Chrome as roughly 2.5 MB (again, with no units).
The resulting image is a 16x16 square and a dead link. I get no errors or warnings whatsoever in the browser's console or in CloudWatch.
I've tried a lot of things and would be interested to hear what anyone out there can come up with.
Thanks in advance.
Claudia requires that the client specify which MIME type it will accept for binary payloads. So, keep the 'Content-Type' entry in the headers object client-side:
fetch(
  'path/to/resource',
  {
    method: 'post',
    mode: "cors",
    body: an_instance_of_file_from_an_html_file_input_tag,
    headers: {
      Authorization: user_credentials,
      'Content-Type': 'image/jpeg', // <-- This is important.
    },
  }
).then((response) => {
  return response.blob();
}).then((blob) => {
  const img = new Image();
  img.src = URL.createObjectURL(blob);
  document.body.appendChild(img);
}).catch((error) => {
  console.error('upload failed', error);
});
Then, on the server side, you need to tell Claudia that the response should be binary and which MIME type to use:
const AWS = require('aws-sdk');
const ApiBuilder = require('claudia-api-builder');
const api = new ApiBuilder();

api.corsOrigin(allowed_origin);

api.registerAuthorizer('my authorizer', {
  providerARNs: ['arn of my cognito user pool']
});

api.get(
  '/media',
  (request) => {
    'use strict';
    const s3 = new AWS.S3();
    const params = {
      Bucket: 'name of my bucket',
      Key: 'name of an object that is confirmed to exist in the bucket and to be properly encoded as and readable as a JPEG',
    };
    return s3.getObject(params).promise().then((response) => {
      return response.Body;
    });
  },
  /** Add this. **/
  {
    success: {
      contentType: 'image/jpeg',
      contentHandling: 'CONVERT_TO_BINARY',
    },
  }
);

module.exports = api;

Convert byte array into blob (pdf file) and download using angular 5

I'm receiving a byte array from the server side and have converted it successfully to a blob. However, when I try to download it, the file is reported as corrupted. Below is my code:
// In client side controller
this.contractsService.downloadPdf(id)
  .then((result) => {
    var blob = new Blob([result], { type: "application/pdf" });
    var link = document.createElement('a');
    link.href = window.URL.createObjectURL(blob);
    link.download = "testing.pdf";
    link.click();
  });
And,
// In client side service
private headers = new HttpHeaders({ 'Content-Type': 'application/json' });

downloadPdf(id: number) {
  return this.http.get(this.apiRoutes.download + "/" + id, { headers: this.headers })
    .map((res: any) => res)
    .toPromise();
}
Any sort of help will be very much appreciated.
Thank you.
Install file-saver:
npm i --save file-saver@latest
Your service method
downloadPdf(id: number) {
  return this.http
    .get(this.apiRoutes.download + "/" + id, { responseType: 'blob' })
    .toPromise();
}
Now in your component
import { saveAs } from 'file-saver'

this.contractsService.downloadPdf(id)
  .then(blob => {
    saveAs(blob, 'testing.pdf');
  });
This should do the trick. The HttpClient will now hand you the response as a Blob. Also have a look at the documentation on requesting blobs with the HttpClient.
In the client-side service, try explicitly setting the response type of the GET request:
downloadPdf(id: number) {
  return this.http.get(this.apiRoutes.download + "/" + id, { headers: this.headers, responseType: 'arraybuffer' })
    .map((res: any) => res)
    .toPromise();
}
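With the arraybuffer response type, the component code from the question should then work as written, since the Blob constructor accepts an ArrayBuffer:
this.contractsService.downloadPdf(id)
  .then((result: ArrayBuffer) => {
    var blob = new Blob([result], { type: "application/pdf" });
    var link = document.createElement('a');
    link.href = window.URL.createObjectURL(blob);
    link.download = "testing.pdf";
    link.click();
  });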