How to download files from an AWS S3 bucket to a Spring Boot app on OpenShift?

My Spring Boot application is going to be deployed on OpenShift, and from my application I need to download files from an AWS S3 bucket on another network.
What is the best way to connect to S3 and get the files? I am trying to use the AmazonS3 client. Do I need to do any configuration at the OpenShift infrastructure level? Is there any other way to download the files?

My suggested method is to use IAM roles. The guidance below is taken from this AWS blog post:
https://aws.amazon.com/blogs/compute/a-guide-to-locally-testing-containers-with-amazon-ecs-local-endpoints-and-docker-compose/
Scenario: Testing using Task IAM Role credentials
The endpoints container image can also vend credentials from an IAM Role; this allows you to test your application locally using a Task IAM Role.
NOTE: You should not use your production Task IAM Role locally. Instead, create a separate testing role, with equivalent permissions scoped to testing resources. Modifying the trust boundary of a production role will expand its scope.
In order to use a Task IAM Role locally, you must modify its trust policy. First, get the ARN of the IAM user defined by your default AWS CLI Profile (replace default with a different Profile name if needed):
aws --profile default sts get-caller-identity
Then modify your Task IAM Role so that its trust policy includes the following statement. You can find instructions for modifying IAM Roles in the IAM Documentation.
{
    "Effect": "Allow",
    "Principal": {
        "AWS": <ARN of the user found with get-caller-identity>
    },
    "Action": "sts:AssumeRole"
}
To use your Task IAM Role in your docker compose file for local testing, simply change the value of the AWS container credentials relative URI environment variable on your application container:
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI: "/role/"
For example, if your role is named ecs_task_role, then the environment variable should be set to "/role/ecs_task_role". That is all that is required; the ecs-local-endpoints container will now vend credentials obtained from assuming the task role. You can use this to validate that the permissions set on your Task IAM Role are sufficient to run your application.
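Once credentials are available inside the pod (via a role or injected access keys), the download itself is simple. Below is a minimal sketch using the AWS SDK for Java v1 AmazonS3 client that the question mentions; the region, bucket, key and target file are placeholders, and the default credentials provider chain is assumed to resolve whatever credentials your OpenShift setup provides:
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import java.io.File;

public class S3DownloadExample {
    public static void main(String[] args) {
        // The default chain checks environment variables, ~/.aws/credentials,
        // and container/instance metadata, so the same code works locally
        // and inside the OpenShift pod.
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion("eu-west-1") // placeholder region
                .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
                .build();

        // Download s3://my-example-bucket/reports/data.csv to a local file;
        // bucket, key and path are placeholders.
        s3.getObject(new GetObjectRequest("my-example-bucket", "reports/data.csv"),
                new File("/tmp/data.csv"));
    }
}
If the pod cannot be given an IAM role, the same code also works with access keys supplied through environment variables or an OpenShift secret, since the default provider chain picks those up.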

Related

How do I edit a bucket policy deployed by organizational-level CloudTrail

We have a multi-account setup where we deployed an organizational-level CloudTrail in our root account's Control Tower.
Organizational-level CloudTrail allows us to deploy CloudTrail in each of our respective accounts and provides them the ability to send logs to CloudWatch in our Root account and to an S3 logging bucket in our central logging account.
Now I have AWS Athena set up in our logging account to try and run queries on the logs generated through our organizational-level CloudTrail deployment. So far, I have managed to create the Athena Table that is built on the mentioned logging bucket and I also created a destination bucket for the query results.
When I try to run a simple "preview table" query, I get the following error:
Permission denied on S3 path: s3://BUCKET_NAME/PREFIX/AWSLogs/LOGGING_ACCOUNT_NUMBER/CloudTrail/LOGS_DESTINATION
This query ran against the "default" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: f72e7dbf-929c-4096-bd29-b55c6c41f582
I figured that the error is caused by the logging bucket's policy lacking any statement allowing Athena access, but when I try to edit the bucket policy I get the following error:
Your bucket policy changes can’t be saved:
You either don’t have permissions to edit the bucket policy, or your bucket policy grants a level of public access that conflicts with your Block Public Access settings. To edit a bucket policy, you need s3:PutBucketPolicy permissions. To review which Block Public Access settings are turned on, view your account and bucket settings. Learn more about Identity and access management in Amazon S3
This is strange since the role I am using has full admin access to this account.
Please advise.
Thanks in advance!
I see this is a follow-up question to your previous one: S3 Permission denied when using Athena
Control Tower automatically deploys a guardrail (a service control policy) that prohibits updating the aws-controltower bucket policy.
In your master account, go to AWS Organizations, then to your Security OU, then to the Policies tab. You should see two guardrail policies.
One of them will contain this statement:
{
    "Condition": {
        "ArnNotLike": {
            "aws:PrincipalARN": "arn:aws:iam::*:role/AWSControlTowerExecution"
        }
    },
    "Action": [
        "s3:PutBucketPolicy",
        "s3:DeleteBucketPolicy"
    ],
    "Resource": [
        "arn:aws:s3:::aws-controltower*"
    ],
    "Effect": "Deny",
    "Sid": "GRCTAUDITBUCKETPOLICYCHANGESPROHIBITED"
},
Add these principals below AWSControlTowerExecution:
arn:aws:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_AWSAdministratorAccess*
arn:aws:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_AdministratorAccess*
Your condition should look like this:
"Condition": {
"ArnNotLike": {
"aws:PrincipalArn": [
"arn:aws:iam::*:role/AWSControlTowerExecution",
"arn:aws:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_AWSAdministratorAccess*",
"arn:aws:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_AdministratorAccess*"
]
}
},
You should be able to update the bucket policy after this change is applied.

S3 Access Denied with boto for private bucket as root user

I am trying to access a private S3 bucket that I've created in the console with boto3. However, when I try any action, e.g. listing the bucket contents with
import boto3
boto3.setup_default_session()
s3Client = boto3.client('s3')
blist = s3Client.list_objects(Bucket=bucketName)['Contents']
I get:
ClientError: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
I am using my default profile (no need for IAM roles). The Access Control List on the browser states that the bucket owner has list/read/write permissions. The canonical id listed as the bucket owner is the same as the canonical id I get when I go to 'Your Security Credentials'.
In short, it feels like the account permissions are OK, but boto is not logging in with the right profile. In addition, running similar commands from the command line, e.g.
aws s3api list-buckets
also gives Access Denied. I have no problem running these commands at work, where I have a work log-in and IAM roles; it's just when running them under my personal 'default' profile.
Any suggestions?
It appears that your credentials have not been stored in a configuration file.
You can run this AWS CLI command:
aws configure
It will prompt you for an Access Key and Secret Key, then store them in the ~/.aws/credentials file. That file is automatically used by the AWS CLI and boto3.
It is a good idea to confirm that it works via the AWS CLI first (for example, aws sts get-caller-identity will show which identity your default profile resolves to); then you will know that it should also work for boto3.
I would highly recommend that you create IAM credentials and use them instead of root credentials. It is quite dangerous if the root credentials are compromised. A good practice is to create an IAM User for each specific application, then limit the permissions granted to that application. This avoids situations where a programming error (or a security compromise) could lead to unwanted behaviour (e.g. resources being used or data being deleted).

Assume Role for IAM user to do s3 upload from Jenkins CI

I am trying to use s3Upload from Jenkins CI. I have added the IAM user S3_User credentials in the Jenkins console and am using withAWS(region: s3Region, credentials: s3User). But my IAM user S3_User doesn't have an S3 RW policy; the IAM user has to assume the role S3_UserRoleWithRWpolicy. How do I do that?
I provided the S3_User access and secret key in the Jenkins IAM credentials and added S3_UserRoleWithRWpolicy as the IAM Role to use under IAM Role Support, but I am still not able to do the S3 upload from Jenkins. How can I configure the Jenkinsfile to use the role?
Finally figured out the solution:
I was using this in my Jenkinsfile:
withAWS(region: 's3Region', credentials: 'iamUser') {
    s3Upload(file: 'jar', bucket: s3Bucket, path: s3Path)
}
It worked fine when iamUser had direct access to S3, but it failed when iamUser had to assume a role to access S3 (after adding the IAM Role to Assume in the credentials).
The following worked instead:
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', credentialsId: 'iamUser']]) {
    withAWS(region: 's3Region') {
        s3Upload(file: 'jar', bucket: s3Bucket, path: s3Path)
    }
}
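As a side note, if you ever need the same assume-role-then-upload flow from application code rather than through the Jenkins steps above, a rough sketch with the AWS SDK for Java v1 could look like the following; the role ARN, region, bucket, key and file are placeholders:
import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import java.io.File;

public class AssumeRoleUploadExample {
    public static void main(String[] args) {
        // Assume the role that carries the S3 RW policy; the base IAM
        // user's keys are picked up from the default credentials chain.
        STSAssumeRoleSessionCredentialsProvider provider =
                new STSAssumeRoleSessionCredentialsProvider.Builder(
                        "arn:aws:iam::123456789012:role/S3_UserRoleWithRWpolicy", // placeholder ARN
                        "jenkins-upload-session")
                        .build();

        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion("us-east-1") // placeholder region
                .withCredentials(provider)
                .build();

        // Upload the build artifact; bucket, key and local path are placeholders.
        s3.putObject("my-artifact-bucket", "builds/app.jar", new File("target/app.jar"));
    }
}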

Amazon S3 Write Only access

I'm backing up files from several customers directly into an Amazon S3 bucket - each customer to a different folder. I'm using a simple .Net client running under a Windows task once a night. To allow writing to the bucket, my client requires both the AWS access key and the secret key (I created a new pair).
My problem is:
How do I make sure none of my customers could potentially use the pair to peek into the bucket, or into a folder not their own? Can I create a "write-only" access key pair?
Am I approaching this the right way? Should this be solved through AWS access settings, or should I client-side encrypt the files on each customer's machine (each customer with a different key) prior to uploading, and avoid the above-mentioned cross-access?
I just created a write-only policy like this and it seems to be working:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME/*"
            ]
        }
    ]
}
I think creating a write-only drop location like that is a much neater solution.
Use IAM to create a separate user for each customer (not just an additional key pair), then give each user access to only their S3 folder. For instance, if the bucket is called everybodysbucket, and customer A's files all start with userA/ (and customer B's with userB/), then you can grant permission to everybodysbucket/userA/* to the user for customer A, and to everybodysbucket/userB/* for customer B.
That will prevent each user from seeing any resources not their own.
You can also control the specific S3 operations each user can perform, not just the resources they can access. So yes, you can grant write-only permission to the users if you want.
As a variation on the approach recommended in Charles' answer, you can also manage access control at the user level via SFTP. These users can all share the same global IAM policy:
SFTP does support user-specific home directories (akin to "chroot").
SFTP allows you to manage user access via service managed authentication or via your own authentication provider. I am unsure if service managed authentication has user limits.
If you wish to allow users to access uploaded files using a client, SFTP provides it very cleanly.

Multiple access_keys for different privileges with same S3 account?

I have a single S3/AWS account. I have several websites each which use their own bucket on S3 for reading/writing storage. I also host a lot of personal stuff (backups, etc) on other buckets on S3, which are not publicly accessible.
I would like these websites (some of which may have other people accessing their source code and configuration properties, and therefore seeing the S3 keys) not to have access to my private data!
It seems from reading Amazon's docs that I need to partition privileges by Amazon user per bucket, not by access key per bucket. But that's not going to work. It also seems like I only get two access keys. I need one access key that is the master key, and several others with much more circumscribed permissions, limited to certain buckets only.
Is there any way to do that, or to approximate that?
You can achieve your goal by means of AWS Identity and Access Management (IAM):
AWS Identity and Access Management (IAM) enables you to securely control access to AWS services and resources for your users. IAM enables you to create and manage users in AWS, and it also enables you to grant access to AWS resources for users managed outside of AWS in your corporate directory. IAM offers greater security, flexibility, and control when using AWS. [emphasis mine]
As emphasized, using IAM is strongly recommended for all things AWS anyway, i.e. ideally you should never use your main account credentials for anything but setting up IAM initially (as mentioned by Judge Mental already, you can generate as many access keys as you want like so).
You can use IAM just fine via the AWS Management Console (i.e. there is no need for third-party tools to use all of the available functionality in principle).
Generating the required policies can be a bit tricky at times, but the AWS Policy Generator is extremely helpful to get you started and explore what's available.
For the use case at hand you'll need a S3 Bucket Policy, see Using Bucket Policies in particular and Access Control for a general overview of the various available S3 access control mechanisms (which can interfere in subtle ways, see e.g. Using ACLs and Bucket Policies Together).
Good luck!
Yes, to have different logins with different permissions under the same AWS account, you can use AWS IAM. As a developer of Bucket Explorer, I suggest trying Bucket Explorer Team Edition if you are looking for a tool that provides a GUI with different logins and different access permissions; see http://team20.bucketexplorer.com
Simply create a custom IAM group policy that limits access to a particular bucket, such as:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListAllMyBuckets"],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my.bucket",
                "arn:aws:s3:::my.bucket/*"
            ]
        }
    ]
}
The first action, s3:ListAllMyBuckets, allows the users to list all of the buckets. Without it, their S3 client will show nothing in the bucket listing when the users log on.
The second action grants full S3 privileges to the user for the bucket named 'my.bucket'. That means they're free to create/list/delete bucket resources and user ACLs within the bucket.
Granting s3:* access is pretty lax. If you want tighter controls on the bucket just look up the relevant actions you want to grant and add them as a list.
"Action": "s3:Lists3:GetObject, s3:PutObject, s3:DeleteObject"
I suggest you create this policy as a group (e.g. my.bucket_User) so you can assign it to every user who needs access to this bucket without any unnecessary copypasta.