Access issue when uploading SSL file to Amazon EC2 server - ssl

I am using this command to upload the SSL file:
aws iam upload-server-certificate --server-certificate-name CertificateName --certificate-body file://public_key_certificate_file --private-key file://privatekey.pem
I also placed a config file at ~/.aws/config with the following values:
[default]
aws_access_key_id = with my own key
aws_secret_access_key = with my own key
region = ********
but it is giving me this error:
A client error (AccessDenied) occurred: User: arn:aws:iam::419351825566:user/** is not authorized to perform: iam:UploadServerCertificate on resource: arn:aws:iam::419351825566:server-certificate/**.crt
Am I not writing the AWS credentials properly? Or do I not have access? I am also not sure if I am setting the region correctly.

As of Nov 2015, having an IAM user with the 'IAMFullAccess' policy will make this work. You can create a new user with that sole policy, or you can use an existing user and just add the policy.
Note: after uploading the SSL file, you can remove the IAMFullAccess policy if you'd like to tighten down permissions/security again (see the sketch after the workflow below for a narrower alternative).
New user workflow:
1. In the jumbo Services menu in AWS, go to IAM.
2. In the left sidebar, click on Users.
3. Click the blue "Create New Users" button.
4. Type in a name for the user, e.g. "ssl-uploader", and create the user.
5. Make note of the keys that AWS gives you. You can't retrieve these later (you'd have to go back to step 3 and create a different user).
6. Assign the IAMFullAccess policy to the new user.
7. In the command line, run aws configure and answer the questions:
AWS Access Key ID: the access key from step 5
AWS Secret Access Key: the secret key from step 5
Default region name: didn't matter in my case; I accepted the default (None)
Default output format: didn't matter in my case; I accepted the default (None)
8. Run the command as mentioned in the question, and it should work. You may want to take note of the JSON it returns in case you need it later.
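If you'd rather not grant full IAM access even temporarily, an inline policy scoped to just the action named in the error message should also work. A minimal sketch, assuming the "ssl-uploader" user from the workflow above (the policy name here is made up):
aws iam put-user-policy --user-name ssl-uploader --policy-name AllowUploadServerCertificate --policy-document '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Action": "iam:UploadServerCertificate", "Resource": "*"}]}'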

Related

How to GET data from s3 using postman

I am trying to get data from a file in an S3 bucket. In Postman I set up a GET request to https://s3.eu-west-1.amazonaws.com/test-bucket-name/file-name and entered Access Key: xxx123, Secret Access Key: xxxx12322, AWS Region: eu-west-1, Service Name: s3.
In the response I am getting Access Denied. Do I need to do any pre-work to get the access keys working, as they were generated for users and are used for CLI commands from my machine?
Many thanks in advance.
For this you have to make sure that you have an IAM user with the AmazonS3ReadOnlyAccess policy to gain access to the objects in the bucket.
I had a similar problem when I first started with AWS, and this link helped me:
http://raaviblog.com/how-to-get-aws-s3-bucket-object-data-using-postman/
Have a look, cheers.
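If the IAM user doesn't have that policy yet, it can also be attached from the CLI; a quick sketch (the user name here is hypothetical):
aws iam attach-user-policy --user-name postman-user --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess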
I was able to get the list of files in the bucket; the issue was with permissions on the file itself.

S3 Access Denied with boto for private bucket as root user

I am trying to access a private S3 bucket that I've created in the console with boto3. However, when I try any action, e.g. listing the bucket contents, I get:
import boto3

boto3.setup_default_session()
s3Client = boto3.client('s3')
# bucketName is the name of the private bucket created in the console
blist = s3Client.list_objects(Bucket=bucketName)['Contents']
ClientError: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
I am using my default profile (no need for IAM roles). The Access Control List shown in the browser states that the bucket owner has list/read/write permissions. The canonical ID listed as the bucket owner is the same as the canonical ID I get when I go to 'Your Security Credentials'.
In short, it feels like the account permissions are OK, but boto is not logging in with the right profile. In addition, running similar commands from the command line, e.g.
aws s3api list-buckets
also gives Access Denied. I have no problem running these commands at work, where I have a work log-in and IAM roles; it's just running them on my personal 'default' profile.
Any suggestions?
It appears that your credentials have not been stored in a configuration file.
You can run this AWS CLI command:
aws configure
It will then prompt you for the Access Key and Secret Key, and will store them in the ~/.aws/credentials file. That file is automatically used by the AWS CLI and boto3.
It is a good idea to confirm that it works via the AWS CLI first, then you will know that it should work for boto3 also.
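A quick way to confirm which identity the CLI is actually using is a standard STS call (not from the question itself):
aws sts get-caller-identity
If this returns your account ID and user ARN as expected, boto3 should pick up the same credentials.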
I would highly recommend that you create IAM credentials and use them instead of the root credentials. It is quite dangerous if the root credentials are compromised. A good practice is to create an IAM User for each specific application, then limit the permissions granted to that application. This avoids situations where a programming error (or a security compromise) could lead to unwanted behaviour (e.g. resources being used or data being deleted).
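If you go that route, the IAM user and its keys can also be created from the CLI; a minimal sketch with a made-up user name:
aws iam create-user --user-name my-app-user
aws iam create-access-key --user-name my-app-user
The second command returns the access key ID and secret access key to feed into aws configure.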

S3 access denied when trying to run aws cli

Using the AWS CLI, I'm trying to run:
aws cloudformation create-stack --stack-name FullstackLambda --template-url https://s3-us-west-2.amazonaws.com/awsappsync/resources/lambda/LambdaCFTemplate.yam --capabilities CAPABILITY_NAMED_IAM --region us-west-2
but I get the error:
An error occurred (ValidationError) when calling the CreateStack operation: S3 error: Access Denied
I have already set my credentials with
aws configure
PS: I got the create-stack command from the AppSync docs (https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-lambda-resolvers.html).
Looks like you accidentally skipped the l letter at the end of the template file name:
LambdaCFTemplate.yam -> LambdaCFTemplate.yaml
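With only that extension fixed, the command from the question becomes:
aws cloudformation create-stack --stack-name FullstackLambda --template-url https://s3-us-west-2.amazonaws.com/awsappsync/resources/lambda/LambdaCFTemplate.yaml --capabilities CAPABILITY_NAMED_IAM --region us-west-2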
First, make sure the S3 URL is correct. But since this is a 403, I doubt that's the case.
Yours could result from a few different scenarios:
1. If both the APIs and the IAM user are MFA-protected, you have to generate temporary credentials using aws sts get-session-token and use them (see the sketch after the snippet below).
2. Use a role to give CloudFormation read access to the template object in S3. First create an IAM role with read access to S3. Then create a parameter like the one below and reference it in the resource properties' IamInstanceProfile block:
"InstanceProfile":{
"Description":"Instance Profile Name",
"Type":"String",
"Default":"iam-test-role"
}
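For scenario 1, the temporary credentials come from STS; a minimal sketch (the MFA device ARN and token code are placeholders for your own):
aws sts get-session-token --serial-number arn:aws:iam::123456789012:mfa/my-user --token-code 123456
The response contains a temporary access key, secret key, and session token to export before re-running create-stack.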

Download from Requester Pays S3 bucket using EC2 identity

I'm trying to list and download files from a Requester Pays S3 bucket:
aws s3 ls --request-payer requester s3://requester-pays-bucket/
I'm running this command from an EC2 instance, but it fails:
Unable to locate credentials. You can configure credentials by running "aws configure".
The error is clear; however, I'm still a little surprised. The goal of a Requester Pays bucket is to offload the cost of S3 data transfers to the requester. Since I'm initiating my request from EC2, my identity as the requester should already be clear to S3, no?
Can S3 or the AWS CLI somehow automatically pick up my identity from the EC2 instance I'm running on? Or do I have to provide credentials in some explicit way?
You have to explicitly provide the credentials of an IAM user that has access to your S3 bucket. Just go to the IAM dashboard of your AWS account and create a new user with programmatic access to S3. After this you will be provided with a secret access key and an access key ID.
Then log in to your EC2 instance and run the command "aws configure" in your terminal. You will be asked for the access key ID, the secret access key, and a default region if you want to provide one; just enter these details and you are good to go with your command.
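If you want to limit the new user to just this bucket, an inline policy can be scoped down. A minimal sketch, with a made-up user and policy name and the bucket name taken from the question:
aws iam put-user-policy --user-name requester-pays-user --policy-name requester-pays-read --policy-document '{"Version": "2012-10-17", "Statement": [{"Effect": "Allow", "Action": ["s3:ListBucket", "s3:GetObject"], "Resource": ["arn:aws:s3:::requester-pays-bucket", "arn:aws:s3:::requester-pays-bucket/*"]}]}'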

Back-end access and secret keys required?

Are Docker Registry S3 back-end access and secret keys required? I don't understand why.
I use an IAM role and can't get an access key and secret key from it. Previously I didn't have to provide an access key and secret key in the Docker registry's S3 settings, and it worked automatically, since the IAM role granted the server access to the S3 resources. Now the keys are required in the YAML settings (I use Docker Compose to spin up the registry) and it won't start without them.
Is there some way to get around this without having to add an IAM user?
I got it working. All I did was take out the following fields and let the Docker registry do its magic (picking up the access/secret key from the IAM role, I assume).
Delete these 2 entries from docker-compose.yml:
REGISTRY_STORAGE_S3_ACCESSKEY: xxx
REGISTRY_STORAGE_S3_SECRETKEY: xxx
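For reference, a minimal sketch of what the S3 storage section can look like when relying on the IAM role; the bucket and region values are placeholders, and the variable names follow the same REGISTRY_STORAGE_S3_* pattern as the two entries removed above:
environment:
  REGISTRY_STORAGE: s3
  REGISTRY_STORAGE_S3_REGION: us-east-1
  REGISTRY_STORAGE_S3_BUCKET: my-registry-bucket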