How to find the source of the SignatureDoesNotMatch error on Minio - amazon-s3

For more than a year we have been running a single page application (SPA with Angular) which receives JSON objects with presigned URLs from a .NET Core API. The SPA displays a list and uses the presigned URL to display the image/video (downloaded directly from the Minio server).
Suddenly, some of the presigned URLs in the list still work while others cause a SignatureDoesNotMatch error when the image/video is embedded.
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>...
Maybe somebody has experience with Minio/S3 and could help me build a checklist for finding the source of this error.
So far I have:
Config (access key, secret key, host): since most URLs work and only some don't, the configuration should be valid.
URL generation: both working and non-working URLs are generated with the Minio .NET SDK (3.0.2).
// Presigned download (GET) and upload (PUT) URLs, each valid for ttl seconds:
var getUrl = await _minio.PresignedGetObjectAsync(bucket, key, ttl);
var putUrl = await _minio.PresignedPutObjectAsync(bucket, key, ttl);
Mixing GET and PUT URLs: could that be a reason? The screenshots in the bug report showed the presigned URLs, but I haven't seen an indicator in the URL itself that tells whether it was generated as a PUT or a GET URL.
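On that last point: a presigned URL is only valid for the HTTP method it was signed for, so embedding a PUT-presigned URL in an <img> or <video> tag (which issues a GET) produces exactly this SignatureDoesNotMatch error. Below is a minimal sketch of such a check, shown with the Minio JavaScript client for brevity (the question uses the .NET SDK); the endpoint and credentials are placeholders.

import * as Minio from "minio";

// Placeholder endpoint and credentials; adjust to your deployment.
const client = new Minio.Client({
  endPoint: "minio.example.com",
  useSSL: true,
  accessKey: "ACCESS_KEY",
  secretKey: "SECRET_KEY",
});

async function checkMethodMismatch(bucket: string, key: string): Promise<void> {
  const ttl = 60 * 60; // expiry in seconds
  const getUrl = await client.presignedGetObject(bucket, key, ttl);
  const putUrl = await client.presignedPutObject(bucket, key, ttl);

  // A presigned URL is bound to the HTTP method it was signed for, so
  // downloading (GET) via a PUT-signed URL yields 403 SignatureDoesNotMatch.
  // (fetch is built into Node 18+.)
  console.log("GET via GET-signed URL:", (await fetch(getUrl)).status); // expected 200
  console.log("GET via PUT-signed URL:", (await fetch(putUrl)).status); // expected 403
}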

#monty I do not have enough information to find the root cause. This could be caused by incorrect encoding of the object name, which might have been fixed in newer versions of MinIO and the Minio .NET SDK.
Which version of the MinIO server are you using? I see that you are using version 3.0.2 of the Minio .NET SDK.
Is it happening with certain file and object names?
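If object names are the suspect, one way to narrow it down is to upload a handful of representative names (spaces, '+', '#', non-ASCII characters, nested prefixes), presign them and see which ones fail. A rough probe in the same vein as the sketch above; the client is passed in and the names are just examples.

import * as Minio from "minio";

// Object names that commonly expose encoding bugs in older SDK versions.
const candidates = [
  "plain.png",
  "with space.png",
  "plus+sign.png",
  "hash#tag.png",
  "ümlaut.png",
  "nested/prefix/file.png",
];

async function probeObjectNames(client: Minio.Client, bucket: string): Promise<void> {
  for (const name of candidates) {
    await client.putObject(bucket, name, Buffer.from("test"));
    const url = await client.presignedGetObject(bucket, name, 3600);
    const status = (await fetch(url)).status;
    console.log(status === 200 ? "OK  " : "FAIL", name);
  }
}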

Related

AWS Node SDK: How to generate a signed S3 getObject URL that doesn't include AccessKeyId

If one of my Selenium tests running in CircleCI fails, I upload a browser screenshot to S3 and print a signed getObject URL for it to the console, so that I can look up that screenshot quickly.
The problem is, S3.getSignedUrl adds my AWS AccessKeyId to the URL, and CircleCI is censoring it to ******************** since that value is in my environment variables, so the URL doesn't work:
https://s3.us-west-2.amazonaws.com/<bucket>/ERROR_3_reset_password_workflow_works.png
?AWSAccessKeyId=********************
&Expires=1612389785
&Signature=...
I don't see any options to output a different kind of URL in the getSignedUrl API docs. However, I noticed that when I open an image directly from the S3 console, the URL has a totally different form:
https://s3.us-west-2.amazonaws.com/<bucket>/ERROR_3_reset_password_workflow_works.png
?response-content-disposition=inline
&X-Amz-Security-Token=...
&X-Amz-Algorithm=AWS4-HMAC-SHA256
&X-Amz-Date=20210128T222508Z
&X-Amz-SignedHeaders=host
&X-Amz-Expires=300
&X-Amz-Credential=...
&X-Amz-Signature=...
Is there a way I can generate this type of URL with the S3 Node SDK? It doesn't use any values that CircleCI would censor, so it would work for what I'm trying to do.
I'm also looking into using CircleCI artifacts for the error screenshots, but I'd still like to understand how the S3 console is building the latter URL.
The Amazon S3 presigned URL examples here yield the format you're looking for, e.g.
[BUCKET]/[OBJECT]?X-Amz-Algorithm=[]&X-Amz-Content-Sha256=[]&X-Amz-Credential=[]&X-Amz-Date=[]&X-Amz-Expires=[]&X-Amz-Signature=[]&X-Amz-SignedHeaders=[]&x-amz-user-agent=[]
Note: These examples use V3 of the AWS SDK for JavaScript.
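For reference, a minimal sketch of generating that X-Amz-* style URL with the v3 modular packages (@aws-sdk/client-s3 plus @aws-sdk/s3-request-presigner); the region, bucket, key and expiry below are placeholders taken from the question.

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-west-2" }); // placeholder region

// Produces a SigV4 URL with X-Amz-Algorithm, X-Amz-Credential, X-Amz-Signature, ...
const url = await getSignedUrl(
  s3,
  new GetObjectCommand({
    Bucket: "<bucket>", // placeholder bucket
    Key: "ERROR_3_reset_password_workflow_works.png",
  }),
  { expiresIn: 300 } // seconds, mirrors X-Amz-Expires=300
);
console.log(url);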

Download VichBundle file with API Platform

I'm using API Platform with VichBundle to store files on the back end and React Native on the front end.
I've followed the API Platform documentation and the upload part works well, but I don't know how to download the document.
When I make a GET request I get the entity with the URL of the file, but I can't make a GET request to that URL because there is no route to the file.
Can somebody give me an example of how to download a file with API Platform and VichBundle?
Thanks
If you are following API Platform's documentation, your files should be uploaded to your project's ./app/public/media/ folder and be available via an HTTP GET request to http(s)://<yourdomain>/public/media/<filename>.<extension>. Just open the URL in your browser.
To get the exact URL, query your API for the mediaObject information (for example, /api/media_objects/{id}) and check the contentUrl property.
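On the React Native side that boils down to two requests: one for the MediaObject resource, one for the file behind its contentUrl. A rough sketch, assuming a hypothetical API base URL and resource id:

// Hypothetical base URL; adjust to your API.
const API_BASE = "https://api.example.com";

async function downloadMedia(id: number): Promise<Blob> {
  // 1. Fetch the MediaObject resource to read its contentUrl.
  const res = await fetch(`${API_BASE}/api/media_objects/${id}`, {
    headers: { Accept: "application/ld+json" },
  });
  const media = await res.json();

  // 2. contentUrl is a path to the file itself; resolve it against the host and fetch it.
  const fileRes = await fetch(`${API_BASE}${media.contentUrl}`);
  return await fileRes.blob();
}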

CrossDomain Access, HLS through CloudFront with Signed URL (JWPlayer)

I am using HLS streaming with Amazon S3 and CloudFront through JW Player (with Rails).
I used signed URLs to protect the content and created an Origin Access Identity as described in the Amazon CloudFront documentation.
The signed URLs are generated fine.
I also have a 'crossdomain.xml' file in my bucket which allows all origins (I have given '*').
Now, when I try to play my HLS video files from my bucket, I get a cross-domain access denied error.
I think JW Player is trying to access the 'crossdomain.xml' file without the signed hash, so it gets that error.
I have tested my file in the demo JW Player stream tester and this is the error I get in the console:
Fetch API cannot load http://xxxxxxxx.cloudfront.net/xxx/1/1m_test.ts.
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'http://demo.jwplayer.com' is therefore not allowed access.
The response had HTTP status code 403.
If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
Here is the screenshot.
Please help me out. Thank You.
This is the link I followed to configure my CloudFront Distribution
I just had the same problem (but with the Flowplayer). I am not sure yet about security risks (and if all steps are needed), but I got it running with:
adding permissions on the crossdomain.xml for everyone to open/download it
adding a behaviour in the CloudFront distribution only for crossdomain.xml without restricting access (above the behaviour for * with restricted access)
and then I noticed that in the bucket, the link to the crossdomain.xml was something like "https://some-server.amazonaws.com/bucket.name/%1Fcrossdomain.xml" (notice the weird %1F), and that when I went to rename the crossdomain.xml, I could delete one invisible character at the first position of the name (I didn't create the crossdomain.xml, so I am not sure how this happened)
Edit:
I also got hls.js running with this setup, and making the crossdomain.xml accessible somehow disabled the CORS request. I am still looking into this.
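The Fetch error above is a CORS failure, so besides the crossdomain.xml route (which Flash-based players use), it may be worth giving the bucket a CORS configuration and having CloudFront forward/whitelist the Origin header, so that browser-based players such as hls.js get an Access-Control-Allow-Origin header back. A sketch of the bucket side with the AWS SDK for JavaScript; the region and bucket name are placeholders.

import { S3Client, PutBucketCorsCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

// Allow browser players to GET playlists/segments cross-origin.
await s3.send(new PutBucketCorsCommand({
  Bucket: "my-hls-bucket", // placeholder bucket name
  CORSConfiguration: {
    CORSRules: [{
      AllowedOrigins: ["*"],          // or the specific player origins
      AllowedMethods: ["GET", "HEAD"],
      AllowedHeaders: ["*"],
      MaxAgeSeconds: 3000,
    }],
  },
}));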

AWS CDN: create a signed URL for a specific S3 object version

Is it possible to create a signed URL for an S3 object with a particular version?
The idea is to keep the same image name but have a different signed URL for each version of the image.
Yes.
Here are some sample pre-signed URLs that point to a particular object version, with old and new signature format:
http://mybucket.s3-ap-southeast-2.amazonaws.com/cat.jpg?versionId=XXX&AWSAccessKeyId=YYY&Expires=1458463363&Signature=ZZZ
https://s3-ap-southeast-2.amazonaws.com/mybucket/cat.jpg?versionId=XXX&X-Amz-Date=20160319T084413Z&X-Amz-Expires=300&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Signature=VVV3&X-Amz-Credential=YYY/20160319/ap-southeast-2/s3/aws4_request&X-Amz-SignedHeaders=Host&x-amz-security-token=ZZZ
You can see this in action in the S3 console -- just create a versioned file, then choose Actions/Open. It will generate a signed URL for the given version of the object.
As to how to code this... I'm not sure! However, I did verify that a signature for one version will not work with another version.
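As for how to code it: with the AWS SDK for JavaScript (v3), the version can be passed as VersionId on the GetObjectCommand and the presigner signs it into the URL, much like the second sample above. A minimal sketch with placeholder region, bucket, key and version id:

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "ap-southeast-2" }); // placeholder region

// Each versionId is part of the signed query string, so every version
// of the object gets its own signature.
const url = await getSignedUrl(
  s3,
  new GetObjectCommand({ Bucket: "mybucket", Key: "cat.jpg", VersionId: "XXX" }),
  { expiresIn: 300 }
);
console.log(url);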

Getting a pre-authenticated URL to an S3 bucket

I am attempting to use an S3 bucket as a deployment location for an internal, auto-updating application's files. It would be the location where the new version's files are dumped for the application to pick up on an update. Since this is an internal application, I was hoping to keep it private, but still be able to access it using only a URL. I was hoping to look into using third-party auto-updating software, which means I can't use the Amazon API to access it.
Does anyone know a way to get a URL to a private bucket on S3?
You probably want to use one of the available AWS Software Development Kits (SDKs), which all implement a method for generating these URLs (e.g. Java: generatePresignedUrl(), C#: GetPreSignedURL()):
The GetPreSignedURL operation creates a signed HTTP request. Query string authentication is useful for giving HTTP or browser access to resources that would normally require authentication. When using query string authentication, you create a query, specify an expiration time for the query, sign it with your signature, place the data in an HTTP request, and distribute the request to a user or embed the request in a web page. A PreSigned URL can be generated for GET, PUT and HEAD operations on your bucket, keys, and versions.
There are already a couple of related questions, and e.g. "Why is my S3 pre-signed request invalid when I set a response header override that contains a “+”?" contains a working sample in C# (aside from the content type issue Ragesh is experiencing, of course).
Good luck!