Generate signed URL for R2 from Cloudflare Worker

I am trying to replicate the result from this guide: https://developers.cloudflare.com/r2/data-access/s3-api/presigned-urls/
However, with the exact same code, the signedUrl variable returns an empty object ({}).
If anyone has any insight, it would be much appreciated!
According to my understanding, and as written in the docs, I don't have to pass any accessKeyId or secretAccessKey.
However, the S3 URL that I got from my Cloudflare dashboard is https://<ACCOUNT ID>.r2.cloudflarestorage.com/<DROPBOX BUCKET>, which is different from the docs.
const r2 = new AwsClient({
  accessKeyId: "",
  secretAccessKey: "",
  service: "s3",
  region: "auto",
});
app.get("/getUploadUrl", async (c) => {
const timeOut = +c.req.queries("timeOut") ?? 60 * 5;
const fileName = c.req.queries("fileName") ?? crypto.randomUUID();
const bucketName = c.req.header("bucket");
const bucketUrl = bucketNameToUrl[bucketName];
console.log("fileName", fileName);
const signedUrl = await r2.sign(
new Request(`${bucketUrl}/${fileName}`, {
method: "PUT",
}),
{
aws: { signQuery: true },
headers: {
"X-Amz-Expires": timeOut.toString(),
},
}
);
return c.json(signedUrl);
});

You can also use the URL format:
https://<bucket_name>.<account_id>.r2.cloudflarestorage.com
Both the path-style version with the bucket after the slash (as shown in your dashboard) and this one should work.
The Access Key and Secret Access Key need to be generated with the button on the right of the R2 console labelled "R2 API Keys". There's also a Discord server where you can get help and learn a lot about R2, and Cloudflare in general.
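Putting those pieces together, a minimal sketch of the Worker route could look like the following. It assumes the key pair from "R2 API Keys" is exposed through hypothetical bindings R2_ACCESS_KEY_ID / R2_SECRET_ACCESS_KEY and that the bucket/account placeholders are filled in. Note that sign() resolves to a Request, which JSON-serializes to {}; returning signed.url gives the actual presigned URL string:
import { Hono } from "hono";
import { AwsClient } from "aws4fetch";

const app = new Hono();

const BUCKET_NAME = "my-bucket";      // placeholder
const ACCOUNT_ID = "my-account-id";   // placeholder

app.get("/getUploadUrl", async (c) => {
  // Hypothetical environment bindings holding the R2 API token's key pair.
  const r2 = new AwsClient({
    accessKeyId: c.env.R2_ACCESS_KEY_ID,
    secretAccessKey: c.env.R2_SECRET_ACCESS_KEY,
    service: "s3",
    region: "auto",
  });

  const fileName = c.req.query("fileName") ?? crypto.randomUUID();
  const url = new URL(`https://${BUCKET_NAME}.${ACCOUNT_ID}.r2.cloudflarestorage.com/${fileName}`);
  url.searchParams.set("X-Amz-Expires", "300"); // lifetime of the presigned URL, in seconds

  const signed = await r2.sign(new Request(url, { method: "PUT" }), {
    aws: { signQuery: true },
  });

  // signed is a Request; its .url property is the presigned URL string.
  return c.text(signed.url);
});

export default app;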

Related

Websocket fails after implementing CloudFlare

I have implemented Cloudflare on a live website. The website has a socket server set up with socket.io and Express, and everything was working fine before implementing Cloudflare.
Currently I'm using port 2053, which I've allowed access to through Laravel Forge.
socket.js
var app = require('express')();
const fs = require('fs');
var server = require('https').createServer({
  key: fs.readFileSync('/etc/nginx/ssl/mywebsite.com/1234/server.key'),
  cert: fs.readFileSync('/etc/nginx/ssl/mywebsite.com/1234/server.crt'),
}, app);
var io = require('socket.io')(server, {
  cors: {
    origin: function(origin, fn) {
      if (origin === "http://mywebsite.test" || origin === "https://mywebsite.com") {
        return fn(null, origin);
      }
      return fn('Error Invalid domain');
    },
    methods: ['GET', 'POST'],
    'reconnect': true
  },
});
var Redis = require('ioredis');
var redis = new Redis();
redis.subscribe('asset-channel', () => {
  console.log('asset-channel: started');
});
redis.on('message', function(channel, message) {
  var message = JSON.parse(message);
  io.to(message.data.id).emit(channel + ':' + message.event + ':' + message.data.id, message.data);
});
io.on("connection", (socket) => {
  socket.on("join:", (data) => {
    socket.join(data.id);
  });
  socket.on("leave:", (data) => {
    socket.leave(data.id);
  });
});
server.listen(2053, () => {
  console.log('Server is running!');
});
app.js
if (! window.hasOwnProperty('io')) {
  // if (
  //   window.origin === "http://mywebsite.test" ||
  //   window.origin === "https://mywebsite.com" ||
  //   window.origin == "https://mywebsite.test"
  // ) {
  window.io = io.connect(`${window.origin}:2053`);
  window.io.on('connection');
  // }
}
As mentioned before, everything was working fine before implementing Cloudflare, and I have tried reading some documentation, such as:
https://developers.cloudflare.com/cloudflare-one/policies/zero-trust/cors
https://socket.io/docs/v4/handling-cors/
I found many similar problems online and tried several solutions, but nothing seems to make the socket connection work.
I tried to allow all CORS origins like so:
var io = require('socket.io')(server, {
  cors: {
    origin: "*",
    methods: ['GET', 'POST'],
    'reconnect': true
  },
});
That didn't work either, and I tried configuring some things in nginx, which didn't work either.
Error
Access to XMLHttpRequest at 'https://mywebsite.com:2053/socket.io/?EIO=4&transport=polling&t=NurmHmi' from origin 'https://mywebsite.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
I think I might have to configure something in the Cloudflare dashboard; I just don't know what, and my googling skills could not take me to the finish line this time around.
I'm not too experienced with sockets, so it would be awesome if some skilled socket expert who has had this issue before could guide me in the right direction. :)
I made it run by adding this to app.js:
window.io = io.connect(`${window.origin}:2053`, { transports: ["websocket"] });
Apparently socket.io will try to use polling first instead of WebSocket.
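If you never want the HTTP polling fallback at all, socket.io's server accepts the same transports option, so it can also be restricted on the server side. This is an optional complement, not part of the fix above; a sketch against the socket.js from the question:
// socket.js -- restrict the server to the WebSocket transport as well.
var io = require('socket.io')(server, {
  transports: ['websocket'], // skip HTTP long-polling entirely
  cors: {
    origin: '*',
    methods: ['GET', 'POST'],
  },
});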

Get uploaded object URL with JavaScript 'aws-sdk' v3

Currently, we are using aws-sdk v2 and extract the uploaded file's URL this way:
const res = await S3Client
  .upload({
    Body: body,
    Bucket: bucket,
    Key: key,
    ContentType: contentType,
  })
  .promise();
return res.Location;
Now we have to upgrade to aws-sdk v3, and the new way to upload files looks like this:
const command = new PutObjectCommand({
  Body: body,
  Bucket: bucket,
  Key: key,
  ContentType: contentType,
});
const res = await S3Client.send(command);
Unfortunately, the res object doesn't contain a Location property now.
The getSignedUrl SDK function doesn't look suitable, because it just generates a URL with an expiration date (it could probably be set to some very long duration, but we still need to be able to analyze the URL path).
Building the URL manually does not look like a good idea or a stable solution to me.
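For reference, the getSignedUrl route would look roughly like this (a sketch using @aws-sdk/s3-request-presigner); it always yields a query-signed URL with an expiry of at most seven days, which is why it doesn't fit here:
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const client = new S3Client({});
// Produces something like
// https://<bucket>.s3.<region>.amazonaws.com/<key>?X-Amz-Algorithm=...&X-Amz-Expires=604800&X-Amz-Signature=...
const signedUrl = await getSignedUrl(
  client,
  new GetObjectCommand({ Bucket: bucket, Key: key }),
  { expiresIn: 7 * 24 * 3600 } // SigV4 presigned URLs cap out at 7 days
);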
Answering myself: I don't know whether a better solution exists, but here is how I do it
const command = new PutObjectCommand({
  Body: body,
  Bucket: bucket,
  Key: key,
  ContentType: contentType,
});
const [res, region] = await Promise.all([
  s3Client.send(command),
  s3Client.config.region(),
]);
const url = `https://${bucket}.s3.${region}.amazonaws.com/${key}`;
You can use the Upload helper from "@aws-sdk/lib-storage", with sample code as below.
import { Upload } from "@aws-sdk/lib-storage";
import { S3Client } from "@aws-sdk/client-s3";

const target = { Bucket, Key, Body };
try {
  const parallelUploads3 = new Upload({
    client: new S3Client({}),
    tags: [...], // optional tags
    queueSize: 4, // optional concurrency configuration
    leavePartsOnError: false, // optional manually handle dropped parts
    params: target,
  });
  parallelUploads3.on("httpUploadProgress", (progress) => {
    console.log(progress);
  });
  await parallelUploads3.done();
} catch (e) {
  console.log(e);
}
Make sure you return the parallelUploads3.done() result; the object it resolves to contains the Location, as shown in the S3 upload response screenshot.
Reference: https://stackoverflow.com/a/70159394/16729176
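To make that concrete, a minimal sketch of reading the location from the value that done() resolves to (bucket, key and body here are placeholders):
import { Upload } from "@aws-sdk/lib-storage";
import { S3Client } from "@aws-sdk/client-s3";

const upload = new Upload({
  client: new S3Client({}),
  params: { Bucket: "my-bucket", Key: "my-key", Body: "hello" },
});

// done() resolves once the upload finishes; per the answer above, the
// resolved object carries the object's Location (plus Bucket, Key and ETag).
const result = await upload.done();
console.log(result.Location); // e.g. https://my-bucket.s3.<region>.amazonaws.com/my-key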

Bing Ads Script to change shared campaign budget on multiple accounts using Google Sheets

I have a Google Ads script running that changes campaign budgets, but implementing the same script in Bing Ads is more difficult for me. I'm having problems with the code that connects Google Sheets with the Bing Ads script. I got a clientId, clientSecret and refresh token to authorize the Google service in Bing, but I am struggling with the code that lets the script read my Google Sheets file.
I attached the code responsible for connecting the Google Sheets file to the Bing script. It should allow the script to read the sheet's content and later change it to whatever values I provide in that file.
const credentials = {
  accessToken: '', // not sure if i needed it if I got refresh token
  clientId: 'HIDDEN',
  clientSecret: 'HIDDEN',
  refreshToken: 'HIDDEN'
};

function main() {
  var SPREADSHEET_URL = 'HIDDEN';
  var GoogleApis;
  (function (GoogleApis) {
    GoogleApis.readSheetsService = credentials => readService("https://sheets.googleapis.com/$discovery/rest?version=v4", credentials);

    // Creation logic based on https://developers.google.com/discovery/v1/using#usage-simple
    function readService(SPREADSHEET_URL, credentials) {
      const content = UrlFetchApp.fetch(SPREADSHEET_URL).getContentText();
      const discovery = JSON.parse(content);
      const accessToken = getAccessToken(credentials);
      const standardParameters = discovery.parameters;
    }

    function getAccessToken(credentials) {
      if (credentials.accessToken) {
        return credentials.accessToken;
      }
      const tokenResponse = UrlFetchApp.fetch('https://www.googleapis.com/oauth2/v4/token', { method: 'post', contentType: 'application/x-www-form-urlencoded', muteHttpExceptions: true, payload: { client_id: credentials.clientId, client_secret: credentials.clientSecret, refresh_token: credentials.refreshToken, grant_type: 'refresh_token' } });
      const responseCode = tokenResponse.getResponseCode();
      const responseText = tokenResponse.getContentText();
      if (responseCode >= 200 && responseCode <= 299) {
        const accessToken = JSON.parse(responseText)['access_token'];
        return accessToken;
      }
      throw new Error(responseText);
  })(GoogleApis || (GoogleApis = {}));
It throws a syntax error on the last line of the code:
})(GoogleApis || (GoogleApis = {}));
but I think there is more to it than that.
Please try declaring var GoogleApis outside main(), as this example shows: https://learn.microsoft.com/en-us/advertising/scripts/examples/calling-google-services
I hope this helps.
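In other words, something along these lines (a rough sketch of that restructuring, following the linked Microsoft example; credential values remain placeholders):
const credentials = {
  clientId: 'HIDDEN',
  clientSecret: 'HIDDEN',
  refreshToken: 'HIDDEN'
};

// Declared at the top level, outside main().
var GoogleApis;
(function (GoogleApis) {
  GoogleApis.getAccessToken = function (credentials) {
    const tokenResponse = UrlFetchApp.fetch('https://www.googleapis.com/oauth2/v4/token', {
      method: 'post',
      contentType: 'application/x-www-form-urlencoded',
      muteHttpExceptions: true,
      payload: {
        client_id: credentials.clientId,
        client_secret: credentials.clientSecret,
        refresh_token: credentials.refreshToken,
        grant_type: 'refresh_token'
      }
    });
    if (tokenResponse.getResponseCode() >= 200 && tokenResponse.getResponseCode() <= 299) {
      return JSON.parse(tokenResponse.getContentText())['access_token'];
    }
    throw new Error(tokenResponse.getContentText());
  };
})(GoogleApis || (GoogleApis = {}));

function main() {
  // main() now only uses the namespace, and every brace above is balanced.
  const accessToken = GoogleApis.getAccessToken(credentials);
  // ... call the Sheets API with UrlFetchApp, passing the access token ...
}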

Downloading images from AWS S3 via Lambda and API Gateway using the fetch API

I'm trying to use the JavaScript fetch API, AWS API Gateway, AWS Lambda, and AWS S3 to create a service that allows users to upload and download media. The server is running Node.js 8.10; the browser is Google Chrome Version 69.0.3497.92 (Official Build) (64-bit).
In the long term, allowable media would include audio, video, and images. For now, I'd be happy just to get images to work.
The problem I'm having: my browser-side client, implemented using fetch, is able to upload JPEGs to S3 via API Gateway and Lambda just fine. I can use curl or the S3 Console to download the JPEG from my S3 bucket and then view the image in an image viewer just fine.
But if I try to download the image via the browser-side client and fetch, I get nothing that I'm able to display in the browser.
Here's the code from the browser-side client:
fetch(
  'path/to/resource',
  {
    method: 'post',
    mode: "cors",
    body: an_instance_of_file_from_an_html_file_input_tag,
    headers: {
      Authorization: user_credentials,
      'Content-Type': 'image/jpeg',
    },
  }
).then((response) => {
  return response.blob();
}).then((blob) => {
  const img = new Image();
  img.src = URL.createObjectURL(blob);
  document.body.appendChild(img);
}).catch((error) => {
  console.error('upload failed', error);
});
Here's the server-side code, using Claudia.js:
const AWS = require('aws-sdk');
const ApiBuilder = require('claudia-api-builder');
const api = new ApiBuilder();

api.corsOrigin(allowed_origin);

api.registerAuthorizer('my authorizer', {
  providerARNs: ['arn of my cognito user pool']
});

api.get(
  '/media',
  (request) => {
    'use strict';
    const s3 = new AWS.S3();
    const params = {
      Bucket: 'name of my bucket',
      Key: 'name of an object that is confirmed to exist in the bucket and to be properly encoded as and readable as a JPEG',
    };
    return s3.getObject(params).promise().then((response) => {
      return response.Body;
    });
  }
);

module.exports = api;
Here are the initial OPTIONS request and response headers in Chrome's Network Panel:
Here are the consequent GET request and response headers:
What's interesting to me is that the image size is reported as 699873 (with no units) in the S3 Console, but the response body of the GET transaction is reported in Chrome at roughly 2.5 MB (again, with no units).
The resulting image is a 16x16 broken-image square. I get no errors or warnings whatsoever in the browser's console or in CloudWatch.
I've tried a lot of things; would be interested to hear what anyone out there can come up with.
Thanks in advance.
Claudia requires that the client specify which MIME type it will accept for binary payloads. So, keep the 'Content-Type' config in the headers object client-side:
fetch(
  'path/to/resource',
  {
    method: 'post',
    mode: "cors",
    body: an_instance_of_file_from_an_html_file_input_tag,
    headers: {
      Authorization: user_credentials,
      'Content-Type': 'image/jpeg', // <-- This is important.
    },
  }
).then((response) => {
  return response.blob();
}).then((blob) => {
  const img = new Image();
  img.src = URL.createObjectURL(blob);
  document.body.appendChild(img);
}).catch((error) => {
  console.error('upload failed', error);
});
Then, on the server side, you need to tell Claudia that the response should be binary and which MIME type to use:
const AWS = require('aws-sdk');
const ApiBuilder = require('claudia-api-builder');
const api = new ApiBuilder();

api.corsOrigin(allowed_origin);

api.registerAuthorizer('my authorizer', {
  providerARNs: ['arn of my cognito user pool']
});

api.get(
  '/media',
  (request) => {
    'use strict';
    const s3 = new AWS.S3();
    const params = {
      Bucket: 'name of my bucket',
      Key: 'name of an object that is confirmed to exist in the bucket and to be properly encoded as and readable as a JPEG',
    };
    return s3.getObject(params).promise().then((response) => {
      return response.Body;
    });
  },
  /** Add this. **/
  {
    success: {
      contentType: 'image/jpeg',
      contentHandling: 'CONVERT_TO_BINARY',
    },
  }
);

module.exports = api;

Direct Upload to S3 from the browser with Authorization Signature Ver 4

I need to upload a file to S3 directly from the browser. In the beginning I created a script that works, but to authorize I had to include my accessKeyId and secretAccessKey credentials, which is not secure.
I figured out that I can use the "Authorization Signature" for authorization.
It seems great, but I can't find where I can add this authorization header to the request in the upload() method.
An example of my authorization header:
Authorization: AWS4-HMAC-SHA256
Credential=/20151016//s3/aws4_request,
SignedHeaders=content-type;host;x-amz-date,
Signature=4eee344a71a58623febc4079024a27cb62f3d26546695422244fcefe50d0168d
Thanks for your advice.
I have found a solution for this issue. My solution is based on the example from this site.
In the final solution I don't use the JavaScript SDK; it uses a POST form whose authorization fields are sent as POST parameters.
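For illustration, the browser-side POST that approach boils down to looks roughly like this (a sketch; the policy, credential scope, date and signature values come from the server, and all names here are placeholders):
// Standard Signature Version 4 POST-policy fields, sent as ordinary form
// parameters alongside the file (which must be the last field).
const formData = new FormData();
formData.append('key', 'uploads/photo.jpg');
formData.append('policy', base64Policy);               // base64-encoded policy document
formData.append('x-amz-algorithm', 'AWS4-HMAC-SHA256');
formData.append('x-amz-credential', credentialScope);  // <access key>/<date>/<region>/s3/aws4_request
formData.append('x-amz-date', amzDate);                // e.g. 20151016T000000Z
formData.append('x-amz-signature', signature);         // HMAC-SHA256 signature of the policy
formData.append('file', fileInput.files[0]);

fetch(`https://${bucket}.s3.${region}.amazonaws.com/`, { method: 'POST', body: formData });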
You can enclose a signed policy document with your POST request in order to authenticate securely, with AWS Signature Version 4.
If you're on Node, you can use the aws-s3-form package on the server to generate the necessary form data your client requires in order to send a successful request to S3.
You might want to read my blog post on the subject for full insight.
Example Server Side Code (Node)
let AwsS3Form = require('aws-s3-form')

[...]

// A hapi.js server route
server.route({
  method: ['GET',],
  path: '/api/s3Settings',
  config: {
    auth: 'session',
    handler: (request, reply) => {
      let {key,} = request.query
      let keyPrefix = `u/${request.auth.credentials.username}/`
      let region = process.env.S3_REGION
      let s3Form = new AwsS3Form({
        accessKeyId: process.env.AWS_ACCESS_KEY,
        secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
        region,
        bucket,
        keyPrefix,
        successActionStatus: 200,
      })
      let url = `https://s3.${region}.amazonaws.com/${bucket}/${keyPrefix}${key}`
      let formData = s3Form.create(key)
      reply({
        bucket,
        region,
        url,
        fields: formData.fields,
      })
    },
  },
})
Example Client Side Code
let R = require('ramda')
let ajax = require('./ajax')

class S3Uploader {
  constructor({folder,}) {
    this.folder = folder
  }

  send(file) {
    let key = `${this.folder}/${file.name}`
    return ajax.getJson(`s3Settings`, {key,})
      .then((s3Settings) => {
        let formData = new FormData()
        R.forEach(([key, value,]) => {
          formData.append(key, value)
        }, R.toPairs(s3Settings.fields))
        formData.append('file', file)
        return new Promise((resolve, reject) => {
          let request = new XMLHttpRequest()
          request.onreadystatechange = () => {
            if (request.readyState === XMLHttpRequest.DONE) {
              if (request.status === 200) {
                resolve(s3Settings.url)
              } else {
                reject(request.responseText)
              }
            }
          }
          let url = `https://s3.${s3Settings.region}.amazonaws.com/${s3Settings.bucket}`
          request.open('POST', url, true)
          request.send(formData)
        })
      }, (error) => {
        throw new Error(`Failed to receive S3 settings from server`)
      })
  }
}