Unable to start the environment. To retry, refresh the browser or restart by selecting Actions, Restart AWS CloudShell

I am unable to use AWS CloudShell. I operate in a supported region (Ireland) and my user has the right permissions (AWSCloudShellFullAccess):
{ "Version": "2012-10-17", "Statement": [ { "Action": [ "cloudshell:*" ], "Effect": "Allow", "Resource": "*" } ] }
Why is it disabled?
I tried to follow this guide, but the advice there doesn't work:
AWS CloudShell troubleshooting

So I was able to resolve this issue. Here are a few things to try when creating a CloudShell environment:
Time synchronization: Make sure your machine's clock is accurate, i.e. correct against world time. Have you tried from another machine to see whether it works there? It may be a time-sync issue.
Try a different region.
Check the AWSCloudShellFullAccess policy to ensure it contains the JSON below.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "cloudshell:*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Try a different browser to see whether it works there. https://docs.aws.amazon.com/cloudshell/latest/userguide/troubleshooting.html
Did you delete the CloudShell home directory? Try again after resetting the home directory, but be aware this can DELETE all the data in your home directory. https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#deleting-home-directory
Check whether any explicit Deny policy has been created for CloudShell, and remove it (see the policy-simulation sketch after these steps).
It is possible your account is not fully verified. To test this, try creating a CloudFront distribution. If you get the error below, or you can create two distributions but not a third, that confirms your account is unverified:
Your account must be verified before you can add new CloudFront
resources. To verify your account, please contact AWS Support
(https://console.aws.amazon.com/support/home#/) and include this
error message.
Click the support link and navigate to:
Support / New case / Service limit increase
Limit type: CloudFront Distributions
In Requests, select:
Limit: Web Distributions per Account
New limit value: <TYPE_YOUR_NEW_VALUE_HERE>
My case: I had two distributions and wanted to create a third, but couldn't, so for <TYPE_YOUR_NEW_VALUE_HERE> I put 10.
Note: If nothing else works, treat this option as a last resort to confirm your account is verified.
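For the explicit-Deny check above, a minimal sketch using the IAM policy simulator via boto3, assuming boto3 is installed, the calling credentials are allowed iam:SimulatePrincipalPolicy, and the user ARN is a placeholder you would replace:

import boto3

iam = boto3.client("iam")

# Placeholder ARN: substitute the user or role that cannot start CloudShell.
principal_arn = "arn:aws:iam::123456789012:user/example-user"

response = iam.simulate_principal_policy(
    PolicySourceArn=principal_arn,
    ActionNames=["cloudshell:CreateEnvironment", "cloudshell:CreateSession"],
)

for result in response["EvaluationResults"]:
    # "explicitDeny" means a Deny statement is blocking CloudShell;
    # "implicitDeny" means no attached policy grants the action at all.
    print(result["EvalActionName"], "->", result["EvalDecision"])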

Related

AccessDenied error from CreateBucket: permissions for pandas to_csv to S3

I have a script running on an EC2 box that finishes by running df.to_csv('s3://<my_bucket_name>/<file_path>').
Run locally with my AWS admin credentials, this script runs fine and deposits the CSV into the right bucket.
My S3 permissions for the EC2 instance are copied and pasted out of AWS' documentation: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_rw-bucket.html
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListObjectsInBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::<my_bucket_name>"]
    },
    {
      "Sid": "AllObjectActions",
      "Effect": "Allow",
      "Action": "s3:*Object*",
      "Resource": ["arn:aws:s3:::<my_bucket_name>/*"]
    }
  ]
}
When run on the EC2 instance, my error is botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the CreateBucket operation: Access Denied.
I don't understand why pandas/s3fs is trying to create a bucket when mine already exists. Suggestions elsewhere were to just grant s3:* access to the EC2 role, but I'd prefer to be a little more restrictive than that.
Any thoughts on how to resolve this?
Turns out this was more of an issue with the AWS Batch role that was running the EC2 instance. The write permissions are good enough to write to S3 without bucket-listing privileges. The AccessDenied error was a red herring for the more general problem that no privileges were being passed to the instance.
A quick look at the Pandas codebase didn't show me anything concrete, but my guess would be that it's checking to see if the bucket exists before listing/updating the objects and failing because it doesn't have the s3:ListAllMyBuckets permission.
You could confirm or deny this theory by giving your role that action (in its own statement), which would hopefully avoid having to grant it s3:*.
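Since the real problem turned out to be that no credentials reached the instance, a quick sanity check along these lines might save time; a sketch assuming boto3 is available on the instance, with <my_bucket_name> left as a placeholder:

import boto3
from botocore.exceptions import ClientError, NoCredentialsError

# First: confirm which identity (if any) the instance is actually using.
try:
    identity = boto3.client("sts").get_caller_identity()
    print("Running as:", identity["Arn"])  # should be the instance/Batch role
except NoCredentialsError:
    print("No credentials are reaching the instance at all")
    raise

# Then test the exact permission to_csv needs: PutObject on the bucket.
s3 = boto3.client("s3")
try:
    s3.put_object(Bucket="<my_bucket_name>", Key="permission-test.txt", Body=b"ok")
    print("PutObject succeeded; the write policy is sufficient")
except ClientError as err:
    print("PutObject failed:", err.response["Error"]["Code"])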

Read Only Bucket Policy Settings for Amazon S3 - For Streaming Audio Snippets

I want to offload audio snippets from my shop page to Amazon S3.
My goal: the public/everyone can read, but only the owner/me can write.
Under Permissions - Bucket Policy I'm using the following code:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*"
    }
  ]
}
But the permissions I get are confusing me (see screenshot).
And when I click on the relevant file I get this.
Do I have to click on "everyone" and add "read"?
Here is another window where I had to change the policy to false (on the right side), because otherwise I was getting "Access denied".
And then there is a third permission window (kind of global, outside the bucket itself).
I guess what I'm asking is: is this how you do it if you want to set up files as "read only" for the public and "read and write" for the owner?
Can someone confirm that this is set up and looking right?
Help is very much appreciated. Thanks.
I'm not 100% sure this is the best answer, but what comes to mind is having a private read/write S3 bucket that syncs with your public bucket. AWS is strict about public vs. private buckets, so I don't imagine they would allow owner-only write access, though I could be wrong. Basically, keep a personal private S3 bucket that syncs to your public bucket for everyone else.
Along the lines of this:
Automatically sync two Amazon S3 buckets, besides s3cmd?
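For the sync idea, a minimal sketch with boto3 (bucket names are hypothetical; in practice "aws s3 sync" from the AWS CLI does the same job incrementally):

import boto3

s3 = boto3.client("s3")
SOURCE = "my-private-audio"  # hypothetical private bucket (owner read/write)
DEST = "my-public-audio"     # hypothetical public bucket with the read policy above

# Server-side copy of every object; no data passes through this machine.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=SOURCE):
    for obj in page.get("Contents", []):
        s3.copy_object(
            Bucket=DEST,
            Key=obj["Key"],
            CopySource={"Bucket": SOURCE, "Key": obj["Key"]},
        )
        print("synced", obj["Key"])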

How to redirect the Root of an Amazon S3 Bucket created just for downloads?

We have an Amazon S3 bucket set up simply for downloads (we send a lot of traffic to download PDFs etc.). The issue is that anyone can access the root folder and see everything in there.
The links are set up like this:
https://s3.amazonaws.com/bucket-name/file-name.pdf
The bucket is set up to have Public Access.
The Access Control List has just "Write objects" checked; otherwise we can't upload to it.
To make the bucket public, we have this in our Permissions section:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-name/*"
    }
  ]
}
We'd like to tidy it up so that anyone who hits the bucket root gets redirected to a place we choose.
I have an index.html file set up that redirects; however, the root doesn't load this file by default.
Can anyone point me to the solution for this? Or, if it's not possible with our current setup, what steps should I take? We already have the links throughout our site, so redoing them all isn't really the best option. I have been through a lot of threads trying to find a solution and really appreciate any input!
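One possibility, assuming static website hosting fits this setup: S3 can serve an index document (or apply redirect rules) at the bucket root, though only through the website endpoint (bucket-name.s3-website-<region>.amazonaws.com), not the https://s3.amazonaws.com/bucket-name/ REST URLs used above. A sketch with boto3, bucket name kept as the placeholder from the policy:

import boto3

s3 = boto3.client("s3")
s3.put_bucket_website(
    Bucket="bucket-name",  # placeholder, matching the policy above
    WebsiteConfiguration={
        # Serve index.html when the root (or any "folder") is requested.
        "IndexDocument": {"Suffix": "index.html"},
        # Or, instead of an index document, redirect every request:
        # "RedirectAllRequestsTo": {"HostName": "example.com", "Protocol": "https"},
    },
)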

Amazon AWS - different access permissions for files in the same bucket

Does anyone know whether there is an option to set permissions for a certain file on AWS S3 to restrict access to that file only?
Here is the thing: I have a bucket with a public-read policy, as below:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket-name/*"
    }
  ]
}
There are a bunch of files there which relate to records in my database. When I delete such a record, the file is not actually deleted, so I want to make the file (which relates to a deleted record) publicly inaccessible within this bucket.
I have two not-very-pretty ideas for resolving this:
Copy all that data into another bucket with a different policy.
Rename the file and update the policy to disable access to files with a certain prefix or suffix (not sure if this is possible).
But all of that requires write/delete actions, which I'd like to avoid. So the question is: is there a way to set some kind of permission on a single file to prevent access?
Thanks,
Ante
Check whether AWS S3 ACLs provide what you are looking for: http://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html
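A sketch of the ACL route via boto3 (the key name is hypothetical). One caveat worth checking: an object ACL cannot revoke access that the bucket policy above already grants on my-bucket-name/*, so an explicit Deny statement for the specific key may be needed instead:

import boto3

s3 = boto3.client("s3")

# Make a single object owner-only at the ACL level (hypothetical key).
s3.put_object_acl(
    Bucket="my-bucket-name",
    Key="deleted-record-123.mp3",
    ACL="private",
)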

Creating an IAM policy for Amazon S3

I am trying to implement an IAM policy where a user can only access the folder he is entitled to. I got this code from the Amazon docs:
Allow a user to list only the objects in his or her home directory in the corporate bucket
This example builds on the previous example that gives Bob a home directory. To give Bob the ability to list the objects in his home directory, he needs access to ListBucket. However, we want the results to include only objects in his home directory, and not everything in the bucket. To restrict his access that way, we use the policy condition key called s3:prefix with the value set to home/bob/. This means that only objects with a prefix home/bob/ will be returned in the ListBucket response.
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my_corporate_bucket",
      "Condition": {
        "StringLike": {
          "s3:prefix": "home/bob/*"
        }
      }
    }
  ]
}
This is not working for me. When I run my code, I am able to see all the folders and subfolders. My modified code looks something like this:
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::Test-test",
      "Condition": {
        "StringLike": {
          "s3:prefix": "Test/*"
        }
      }
    }
  ]
}
When I run my code in C# using the credentials of the user attached to the above policy, I get all the folders, not just the one under "Test"...
Would really appreciate some help!
I finally got it working, although I think there is a bug in the AWS Management Console, or at least it seems like one. My policy was right all along, but it behaved differently when I accessed it through the AWS Management Console than through software like CloudBerry. One other thing I had to modify was the ACL settings for objects and buckets; that too would have been done earlier had the AWS console worked properly. Anyway, here is my policy:
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "arn:aws:s3:::*",
      "Condition": {}
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketVersions"
      ],
      "Resource": "arn:aws:s3:::pa-test",
      "Condition": {
        "StringLike": {
          "s3:prefix": "test/*"
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::pa-test/test/*",
      "Condition": {}
    }
  ]
}
1) The problem is that when I access the Management Console as this IAM user, I get access denied when I click on my bucket, although when I log in through CloudBerry I can see my folders.
2) I had to modify the ACL settings for my bucket and objects (folders).
For my bucket:
Owner: Full Control
Authenticated Users: Read only
For my folders:
Owner: Full Control
Now the issue is that you cannot set ACL settings for folders (objects) in the AWS console; you can set them for files (objects). For example, if you right-click on a folder (object) inside a bucket and then click Properties, it won't show you a Permissions tab. But if you right-click on a bucket or a file (say test.html) and click Properties, it will show a Permissions tab.
I am not sure if anyone else has noticed this issue. Anyway, that is my setup and it's working now.
The result you are expecting from ListBucket does not happen like that, because the policy only allows or denies access to objects according to the bucket policy.
ListBucket will list all the objects, but you will have access only to the prefix folder and its contents.
If you want to list only that folder, you have to code for it: read the IAM policy, get the prefix string, and then list with that prefix; then you will get only the desired folder. So far, Amazon S3 provides no built-in option for this.
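A sketch of what this describes, assuming boto3: list with the policy's prefix (and a delimiter) so the user sees only their folder:

import boto3

s3 = boto3.client("s3")
prefix = "test/"  # the prefix granted in the policy above

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="pa-test", Prefix=prefix, Delimiter="/"):
    for folder in page.get("CommonPrefixes", []):
        print("folder:", folder["Prefix"])
    for obj in page.get("Contents", []):
        print("object:", obj["Key"])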