AWS Node SDK: How to generate a signed S3 getObject URL that doesn't include AccessKeyId

If one of my Selenium tests running in CircleCI fails, I upload a browser screenshot to S3 and print a signed getObject URL for it to the console, so that I can look up that screenshot quickly.
The problem is, S3.getSignedUrl adds my AWS AccessKeyId to the URL, and CircleCI is censoring it to ******************** since that value is in my environment variables, so the URL doesn't work:
https://s3.us-west-2.amazonaws.com/<bucket>/ERROR_3_reset_password_workflow_works.png
?AWSAccessKeyId=********************
&Expires=1612389785
&Signature=...
I don't see any options to output a different kind of URL in the getSignedUrl API docs. However, I noticed that when I open an image directly from the S3 console, the URL has a totally different form:
https://s3.us-west-2.amazonaws.com/<bucket>/ERROR_3_reset_password_workflow_works.png
?response-content-disposition=inline
&X-Amz-Security-Token=...
&X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Date=20210128T222508Z
&X-Amz-SignedHeaders=host
&X-Amz-Expires=300
&X-Amz-Credential=...
&X-Amz-Signature=...
Is there a way I can generate this type of URL with the S3 Node SDK? It doesn't use any values that CircleCI would censor, so it would work for what I'm trying to do.
I'm also looking into using CircleCI artifacts for the error screenshots, but I'd still like to understand how the S3 console is building the latter URL.

The Amazon S3 presigned URL examples here yield the format you're looking for, e.g.
[BUCKET]/[OBJECT]?X-Amz-Algorithm=[]&X-Amz-Content-Sha256=[]&X-Amz-Credential=[]&X-Amz-Date=[]&X-Amz-Expires=[]&X-Amz-Signature=[]&X-Amz-SignedHeaders=[]&x-amz-user-agent=[]
Note: These examples use V3 of the AWS SDK for JavaScript.
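As a minimal sketch (the bucket and key below are placeholders taken from the question, and the package names assume the modular v3 SDK):
// Sketch using AWS SDK for JavaScript v3; yields a SigV4-style URL with X-Amz-* query params.
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");

const client = new S3Client({ region: "us-west-2" });
const command = new GetObjectCommand({
  Bucket: "<bucket>",                                  // placeholder
  Key: "ERROR_3_reset_password_workflow_works.png",    // example key from the question
});

getSignedUrl(client, command, { expiresIn: 300 })      // 300 s, like the console URL above
  .then((url) => console.log(url));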

Related

Programmatically generating web console URL to s3 object

Is there a way to generate urls to the s3 web console (https://s3.console.aws.amazon.com/s3/object/...) within python using botocore or boto3? I know boto can be used to generate presigned urls, but is there a way to just generate a URL to the web console?
There is no way in boto3 to generate the URL, but you can easily generate one yourself with variables and string formatting.
print("https://s3.console.aws.amazon.com/s3/buckets/{}?region={}&tab=objects".format("bucket_name", "us-east-1"))

How to integrate API Gateway with S3 AWS Service using greedy path proxy resource

I have files in an S3 bucket that I am serving using AWS API Gateway.
I have a resource GET /{file} that maps to mybucket:/{file},
i.e. http://myapigateway.com/test1.txt correctly returns mybucket:/test1.txt.
Now I want to serve files with directory paths:
http://myapigateway.com/dirA/test2.txt should return the file mybucket:/dirA/test2.txt
I can not get this to work. The problem is that when I set the new API Gateway resource to use greedy path matches as a "proxy resource":
GET /{proxy+}
I no longer have the option to integrate with s3 (see image below)
I tried to pass the method.request.querystring.proxy var instead of method.request.path.proxy to prevent the slashes from being stripped but that didn't help.
If I can't integrate with S3 directly, is there any way I can work around this with a Lambda function?
Screenshot: no option to integrate with AWS Service
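For what it's worth, a hedged sketch of that Lambda workaround: a proxy-integration handler that reads the greedy path parameter and returns the matching object. The bucket name, SDK version, and response shape are assumptions, not something from the question.
// Sketch of a Lambda proxy integration: API Gateway passes the greedy path in
// event.pathParameters.proxy, and the function fetches that key from S3.
const AWS = require("aws-sdk");                  // v2 SDK, assumed available in the runtime
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const key = event.pathParameters.proxy;        // e.g. "dirA/test2.txt"
  const obj = await s3.getObject({ Bucket: "my-bucket", Key: key }).promise();
  return {
    statusCode: 200,
    headers: { "Content-Type": obj.ContentType || "application/octet-stream" },
    body: obj.Body.toString("base64"),
    isBase64Encoded: true,                       // needs binary media types enabled on the API
  };
};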

How to find the source for the SignatureDoesNotMatch error on Minio

For more than a year we have been running a single-page application (SPA with Angular) which receives JSON objects with presigned URLs from a .NET Core API. The SPA displays a list and uses the presigned URL to display the image/video (downloaded directly from the Minio server).
Suddenly, some of the presigned URLs in the list cause a SignatureDoesNotMatch error when the image/video is embedded, while the others still work.
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>...
Maybe somebody has experience with Minio/S3 and could help me build a checklist for finding the source of this error.
So far I have:
Config (access key, secret key, host): since most URLs work and only some don't, this should be valid.
URL generation: both the working and the non-working URLs are generated using the Minio .NET SDK (3.0.2):
await _minio.PresignedGetObjectAsync(bucket, key, ttl);
await _minio.PresignedPutObjectAsync(bucket, key, ttl);
Mixing GET and PUT URLs: could that be a reason? The screenshots in the bug report showed the presigned URLs, but I haven't seen an indicator in the URL of whether it was generated as a PUT or a GET URL.
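On that last point: the HTTP verb is part of what gets signed, so a URL presigned for PUT will return SignatureDoesNotMatch if it is fetched with GET. A rough sketch with the Minio JavaScript client (not the .NET SDK from the question; endpoint and credentials are placeholders):
// The two presign calls produce URLs that look almost identical, but the
// signatures differ, and nothing in the query string reveals the signed verb.
const Minio = require("minio");
const client = new Minio.Client({
  endPoint: "minio.example.com",                 // placeholder
  useSSL: true,
  accessKey: "ACCESS_KEY",
  secretKey: "SECRET_KEY",
});

(async () => {
  const ttl = 3600;
  const getUrl = await client.presignedGetObject("bucket", "key", ttl);  // valid only for GET
  const putUrl = await client.presignedPutObject("bucket", "key", ttl);  // valid only for PUT
  console.log(getUrl, putUrl);
})();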
#monty I do not have enough information to root-cause this. It may be caused by incorrect encoding of the object name, which might have been fixed in newer versions of Minio and the Minio .NET SDK.
What version of the Minio server are you using? I see that you are using version 3.0.2 of the Minio .NET SDK.
Is it happening with certain file and object names?

Is an upload (put) object to AWS S3 from web browser possible?

This is a bit of a random question and no one should ever do it this way, but is it possible to execute a PUT API call to Amazon S3 from the web browser, using only query params?
For instance, ignoring authentication params, I know you can open https://s3.amazonaws.com/~some bucket~ to list the files in the bucket. Is there a way to upload?
Have a look at Browser-Based Uploads Using POST.
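To sketch the browser side of that POST-based upload (everything below is an assumption: the signed.* values would have to come from a server that holds the secret key, and the key and bucket are placeholders):
// Browser-side sketch of an S3 POST upload; policy, credential, date, and
// signature are assumed to be generated server-side and handed to the page.
const form = new FormData();
form.append("key", "uploads/example.txt");            // placeholder object key
form.append("x-amz-algorithm", "AWS4-HMAC-SHA256");
form.append("x-amz-credential", signed.credential);
form.append("x-amz-date", signed.date);
form.append("policy", signed.policy);                 // base64-encoded policy document
form.append("x-amz-signature", signed.signature);
form.append("file", fileInput.files[0]);              // the file field must come last
fetch("https://<bucket>.s3.amazonaws.com/", { method: "POST", body: form });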

How to Upload PhantomJS Page Content to S3

I am using PhantomJS 1.9.7 to scrape a web page. I need to send the returned page content to S3. I am currently using the filesystem module included with PhantomJS to save to the local file system and using a php script to scan the directory and ship the files off to S3. I would like to completely bypass the local filesystem and send the files directly from PhantomJS to S3. I could not find a direct way to do this within PhantomJS.
I toyed with the idea of using the child_process module and passing in the content as an argument, like so:
var execFile = require("child_process").execFile;
var page = require('webpage').create();
var content = page.content;
execFile('php', ['path/to/script.php', content], null, function (err, stdout, stderr) {
    console.log("execFileSTDOUT:", JSON.stringify(stdout));
    console.log("execFileSTDERR:", JSON.stringify(stderr));
});
which would call a php script directly to accomplish the upload. This will require using an additional process to call a CLI command. I am not comfortable with having another asynchronous process running. What I am looking for is a way to send the content directly to S3 from the PhantomJS script similar to what the filesystem module does with the local filesystem.
Any ideas as to how to accomplish this would be appreciated. Thanks!
You could just create and open another page and point it to your S3 service. Amazon S3 has a REST API and a SOAP API, and REST seems easier.
For SOAP you would have to build the request manually. The only problem might be a wrong content-type; it looks as if that was implemented, but I cannot find a reference in the documentation.
You could also create a form in the page context and send the file that way.
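A rough sketch in the same spirit, though using an XHR PUT from the page context to a presigned URL rather than a form (the URL, the target page, and the cross-origin setup are all assumptions, not part of the question):
// Sketch only: upload page.content straight from PhantomJS. Assumes a presigned
// PUT URL was generated beforehand and that cross-origin requests are allowed
// (e.g. PhantomJS started with --web-security=false, or a permissive bucket CORS policy).
var page = require('webpage').create();
var presignedPutUrl = 'https://<bucket>.s3.amazonaws.com/<key>?X-Amz-...';   // placeholder

page.open('http://example.com', function (status) {
    page.evaluate(function (url, body) {
        var xhr = new XMLHttpRequest();
        xhr.open('PUT', url, false);                  // synchronous to keep the sketch simple
        xhr.setRequestHeader('Content-Type', 'text/html');
        xhr.send(body);
    }, presignedPutUrl, page.content);
    phantom.exit();
});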