FilePicker / FileStack Error: The specified policy does not allow the call remove - amazon-s3

I am using Filepicker.io (now called Filestack) and am trying to delete a file through the API, but I am getting this error:
This action has been secured by the developer of this website. Error: The specified policy does not allow the call remove
Has anyone seen this before? I am sending files through Filepicker to Amazon S3, so my questions are:
How can I resolve this?
Is this an issue on the Amazon S3 side or the Filepicker side?

This is a Filepicker API error message, and it indicates that the owner of the file (the Filepicker app owner) has security mode enabled.
When security is enabled, every action requires a policy and a signature to be appended to the call.
If you are the app owner, you can generate the proper policy and signature based on your Filepicker app secret key.
See more:
https://www.filestack.com/docs/security/
https://www.filestack.com/docs/file-ingestion/javascript-api/remove
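For reference, this is roughly what generating the policy and signature looks like on your server. The sketch below is in Python and assumes the standard Filestack scheme described in the security docs (a base64url-encoded JSON policy signed with HMAC-SHA256 using the app secret); the secret and file handle are placeholders, not values from the question.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical values -- substitute your own app secret and file handle.
APP_SECRET = b'YOUR_FILESTACK_APP_SECRET'
FILE_HANDLE = 'YOUR_FILE_HANDLE'

# Policy: a base64url-encoded JSON document describing what the caller may do.
policy_json = json.dumps({
    'expiry': int(time.time()) + 600,   # valid for 10 minutes
    'call': ['remove'],                 # allow the "remove" call
    'handle': FILE_HANDLE,              # restrict it to one file (optional)
})
policy = base64.urlsafe_b64encode(policy_json.encode('utf-8')).decode('utf-8')

# Signature: HMAC-SHA256 of the encoded policy, keyed with the app secret.
signature = hmac.new(APP_SECRET, policy.encode('utf-8'), hashlib.sha256).hexdigest()

print('policy=', policy)
print('signature=', signature)
```

You then append the resulting policy and signature as parameters to the remove call, as shown in the Filestack docs linked above.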

Related

How to hide specific folder using AWS STS AssumeRole session policy?

I have created an STS AssumeRole session token with a policy document that allows List only on specific folders, but how can I hide the remaining folders that the token does not have access to?
Example:
Let's say I have AWS S3 object paths s3://<bucketName>/folder1/{files…} and s3://<bucketName>/folder2/{files…}.
I generated an STS token with an Action of s3:List* and a filtering Condition policy (i.e. "StringEquals": "folder1/*").
My application uses the AWS SDK (JavaScript) with the STS session token generated above. If I try to list objects under the bucket root (s3://<bucketName>/), the response returns both folder1/ and folder2/.
How can I hide folder2/ based on the current STS session policy?
(Note: even though List access into folder2/ is restricted, I don't want my SDK to show folder2/ in the frontend.)
s3:ListBucket is a bucket-level operation, so it will list all the contents as long as the permissions allow it.
You can deny access to the contents of folder2/ by adding a condition like you said. However, the folder itself will still be visible if ListBucket is called on the directory above it.
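One practical pattern is to scope the session policy to the allowed prefix and then only ever list with that prefix, so folder2/ never appears in the response. The sketch below is a Python/boto3 illustration under assumed names (role ARN and bucket are hypothetical); it conditions s3:ListBucket on the folder1/ prefix and lists with Prefix set accordingly.

```python
import json
import boto3

# Hypothetical identifiers -- replace with your own role and bucket.
ROLE_ARN = 'arn:aws:iam::123456789012:role/app-reader'
BUCKET = 'my-bucket'

# Session policy: ListBucket is only allowed when the request's prefix
# starts with folder1/, so a root-level listing is denied outright.
session_policy = {
    'Version': '2012-10-17',
    'Statement': [
        {
            'Effect': 'Allow',
            'Action': 's3:ListBucket',
            'Resource': f'arn:aws:s3:::{BUCKET}',
            'Condition': {'StringLike': {'s3:prefix': ['folder1/*']}},
        },
        {
            'Effect': 'Allow',
            'Action': 's3:GetObject',
            'Resource': f'arn:aws:s3:::{BUCKET}/folder1/*',
        },
    ],
}

sts = boto3.client('sts')
creds = sts.assume_role(
    RoleArn=ROLE_ARN,
    RoleSessionName='folder1-only',
    Policy=json.dumps(session_policy),
)['Credentials']

s3 = boto3.client(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)

# Listing with the allowed prefix succeeds and only shows folder1/ contents.
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix='folder1/', Delimiter='/')
print([obj['Key'] for obj in resp.get('Contents', [])])
```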

Google Drive API: Can't delete file

I want to use the Google Drive API to manage files programmatically.
To do that, I use an OAuth 2.0 Web application client ID. In the OAuth consent screen (Edit App), I checked the .../auth/drive and .../auth/drive.readonly scopes.
I'm logged in and have a token.json file. I can list and download files.
Now, when I want to delete a file, I do this in Python: service.files().delete(fileId=item['id']).execute(). But I get this issue:
An error occurred: <HttpError 403 when requesting https://www.googleapis.com/drive/v3/files/<fileID>? returned "The user has not granted the app <appID> write access to the file <fileID>.". Details: "[{'message': 'The user has not granted the app <appID> write access to the file <fileID>.', 'domain': 'global', 'reason': 'appNotAuthorizedToFile', 'location': 'Authorization', 'locationType': 'header'}]">
What am I doing wrong?
Thanks in advance!
The scopes declared in the Google Cloud console are mainly used for verification of your application. It's your code that defines which scopes your application requests from the user.
I am going to assume that you are following the quickstart for Python or the quickstart for Node.js; both use token.json for credential storage. The answer is the same either way.
That sample shows you how to use the files.list method, which allows for a readonly scope.
files.delete, however, does not allow a readonly scope; it requires write access.
Fix
In the code you can see that the scope is readonly
SCOPES = ['https://www.googleapis.com/auth/drive.metadata.readonly']
This means that when the user authorized your app, they authorized it with a readonly scope, giving you access to read files only. Those credentials are now stored in token.json.
To fix your issue, change the scope to
SCOPES = ['https://www.googleapis.com/auth/drive']
Then delete the token.json file; your app will now request full drive access, giving you the ability to make changes, such as deleting files.
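Putting it together, here is a minimal sketch of the re-authorization plus delete flow, following the Python quickstart pattern; credentials.json and the file ID are placeholders:

```python
import os

from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build

# Full Drive scope so files.delete is permitted.
SCOPES = ['https://www.googleapis.com/auth/drive']

creds = None
if os.path.exists('token.json'):
    creds = Credentials.from_authorized_user_file('token.json', SCOPES)

# If token.json was deleted (or was issued for readonly scopes),
# run the consent flow again so the user grants write access.
if not creds or not creds.valid:
    if creds and creds.expired and creds.refresh_token:
        creds.refresh(Request())
    else:
        flow = InstalledAppFlow.from_client_secrets_file('credentials.json', SCOPES)
        creds = flow.run_local_server(port=0)
    with open('token.json', 'w') as token:
        token.write(creds.to_json())

service = build('drive', 'v3', credentials=creds)
service.files().delete(fileId='YOUR_FILE_ID').execute()  # placeholder file ID
```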

S3 Access Denied with boto for private bucket as root user

I am trying to access a private S3 bucket that I've created in the console with boto3. However, when I try any action, e.g. listing the bucket contents, I get:
boto3.setup_default_session()
s3Client = boto3.client('s3')
blist = s3Client.list_objects(Bucket=f'{bucketName}')['Contents']
ClientError: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
I am using my default profile (no need for IAM roles). The Access Control List on the browser states that the bucket owner has list/read/write permissions. The canonical id listed as the bucket owner is the same as the canonical id I get when I go to 'Your Security Credentials'.
In short, it feels like the account permissions are ok, but boto is not logging in with the right profile. In addition, running similar commands from the command line e.g.
aws s3api list-buckets
also gives Access Denied. I have no problem running these commands at work, where I have a work log-in and IAM roles. It's just running them on my personal 'default' profile.
Any suggestions?
It appears that your credentials have not been stored in a configuration file.
You can run this AWS CLI command:
aws configure
It will prompt you for an Access Key and Secret Key, then store them in the ~/.aws/credentials file. That file is automatically used by the AWS CLI and boto3.
It is a good idea to confirm that it works via the AWS CLI first, then you will know that it should work for boto3 also.
I would highly recommend that you create IAM credentials and use them instead of root credentials. It is quite dangerous if the root credentials are compromised. A good practice is to create an IAM User for each specific application, then limit the permissions granted to that application. This avoids situations where a programming error (or a security compromise) could lead to unwanted behaviour (e.g. resources being used or data being deleted).
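A quick way to confirm which identity boto3 is actually picking up (and that the expected profile is being used) is to call STS GetCallerIdentity before touching S3. The snippet below is an illustrative sketch; the profile and bucket names are placeholders.

```python
import boto3

# Explicitly pick a profile from ~/.aws/credentials instead of relying on defaults.
session = boto3.Session(profile_name='default')

# Confirm which account/user these credentials actually belong to.
identity = session.client('sts').get_caller_identity()
print('Account:', identity['Account'])
print('Arn:', identity['Arn'])

# If the identity is the one you expect, the listing should now succeed.
s3 = session.client('s3')
resp = s3.list_objects_v2(Bucket='my-bucket')  # placeholder bucket name
for obj in resp.get('Contents', []):
    print(obj['Key'])
```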

AWS [SES]: the reason(s) behind getting ProductionAccessNotGrantedException

To use SES within AWS to handle my site's subscriptions, I did the following:
Verified the email address and domain that I want to send the email from. Also used IAM and granted the proper access to send a customized email.
Using the AWS CLI, created a CustomVerificationEmailTemplate.
Created a configuration set and linked it to an SNS topic.
Using the Java SDK, created a client of type AmazonSimpleEmailService and a sendCustomVerificationEmailRequest, and used the sendCustomVerificationEmail method to send the invitation email.
However, I get the following exception:
[ProductionAccessNotGrantedException: null (Service: AmazonSimpleEmailService; Status Code: 400; Error Code: ProductionAccessNotGranted; Request ID: *****
Any idea why I get this exception? Where should I request production access?
The SES service is in sandbox status by default in all AWS accounts. In sandbox status, we cannot do certain things, such as use a CustomVerificationEmailTemplate.
We need to raise a service request to take it out of sandbox status. More documentation on the process for getting out of sandbox status is here.
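If you want to check the current status programmatically, the SESv2 GetAccount API reports whether production access has been granted. A small Python sketch (the region is a placeholder); the actual request to leave the sandbox is made through the SES console or a support case, not through this call:

```python
import boto3

# SESv2 exposes account-level sending status, including the sandbox state.
sesv2 = boto3.client('sesv2', region_name='us-east-1')  # placeholder region

account = sesv2.get_account()
if account.get('ProductionAccessEnabled'):
    print('Account has production access.')
else:
    print('Account is still in the SES sandbox; request production access '
          'from the SES console before using custom verification emails.')
```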

Amazon S3 authentication model

What is the proper way of delegating file access authentication from S3 to our authentication service?
For example: a website user (who has our session ID in their headers) sends a request to S3 to get a file by URL. S3 sends a request to our authentication service asking whether a user with those headers can access that file, and if our auth service allows it, the file is downloaded.
There is a lot of information about presigned requests, but absolutely nothing about querying S3 with "hidden" authentication.
If a file has been made public on S3, then of course anyone can download it, using a direct link to the file.
If the file is not public, then there needs to be some type of authentication. There are really only two ways a file that is not public can be obtained from S3: one is via a pre-signed URL, and the other is to be an AWS user who has access to S3. This is how it works when you yourself want to access an object on S3: you must provide your access key and a signature in the header of the GET request. You can grant other users access to S3 via Amazon IAM, which is closer to the 'hidden' authentication you mentioned. Via the IAM route, there are different ways of providing access, including federated users. Visit this link to learn more:
http://docs.aws.amazon.com/AmazonS3/latest/dev/MakingAuthenticatedRequests.html
If you are simply trying to provide an authenticated user access to a file, the best and easiest way to do that would be to create a pre-signed URL with an expiration time. The expiration time can be something short, like 10 minutes or even 1 minute, to prevent the user from passing the link to others.
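For reference, generating such a short-lived pre-signed URL from your backend, after your own auth service has approved the request, is a one-liner with boto3; the bucket and key below are placeholders:

```python
import boto3

s3 = boto3.client('s3')

# Your backend calls this only after its own session/auth check passes.
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'private/report.pdf'},  # placeholders
    ExpiresIn=60,  # link is valid for one minute
)
print(url)  # hand this URL back to the authenticated user
```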