I am building some small devices running Debian. They need to sync an S3 bucket to a local folder. I have installed S3Tools, and s3cmd sync seems to be the perfect tool. But I have to supply the access credentials, and that seems VERY insecure. I will not be controlling the units once they ship, so I need to somehow use the tool without supplying the credentials - AND I need to make sure the credentials cannot delete anything in the bucket.
Does anyone have an idea as to how I go about this?
Regards, Jacob
Use IAM. It allows creation of AWS credentials with predefined permissions, which are under your control.
So you would create one identity per device. You are free to restrict its access to only certain buckets or keys.
You will not be able to update the "device" credentials on your devices (that is simply your constraint), but if any of those credentials turn out to be compromised, you still have the option of revoking them via IAM.
And for your primary "root" identity, I strongly recommend using two-factor authentication (and of course never put it on a device you do not control).
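To make the IAM suggestion concrete, here is a minimal sketch using boto3; the user name and bucket name are placeholders, and depending on your s3cmd version you may need one or two more read-only actions (such as s3:GetBucketLocation). It creates one user per device, attaches an inline policy that can list and read the bucket but not write or delete, and issues the access key pair you would bake into the device's .s3cfg.

    import json
    import boto3

    iam = boto3.client("iam")

    user_name = "device-001"      # placeholder: one IAM user per shipped device
    bucket = "my-sync-bucket"     # placeholder: the bucket the devices pull from

    # Read-only policy: list the bucket and fetch objects, nothing else.
    # With no s3:PutObject or s3:DeleteObject, a leaked key cannot modify the bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": f"arn:aws:s3:::{bucket}",
            },
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": f"arn:aws:s3:::{bucket}/*",
            },
        ],
    }

    iam.create_user(UserName=user_name)
    iam.put_user_policy(
        UserName=user_name,
        PolicyName="s3-sync-read-only",
        PolicyDocument=json.dumps(policy),
    )

    # The generated key pair goes into the device's .s3cfg before it ships.
    key = iam.create_access_key(UserName=user_name)["AccessKey"]
    print(key["AccessKeyId"], key["SecretAccessKey"])

If a device is later compromised, deleting that one user's access key in IAM cuts it off without affecting the others.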
I want to use OS Login with GCP because we use IAM for scoping access to all other resources within GCP (storage buckets, SQL, Redis, etc.). I understand how to restrict users from accessing machines using service accounts and roles.
But, I don't understand how to restrict the possible usernames that someone can use to SSH into our Compute Engine machines. Assume we have a VM configured with OS Login. The problem is that everyone connects using a CLI string like
gcloud compute ssh $MACHINE_NAME which (possibly creates and then) logs in to a /home/$USER_DOMAIN_SUFFIX directory. So, the team's shell history, relevant home directory contents (downloaded files, created scripts, etc.), and running processes are all in a different scope (UID) per user. We could soft-enforce that everyone does something like gcloud compute ssh $SPECIAL_USERNAME@$MACHINE_NAME where everyone uses the same $SPECIAL_USERNAME value. But that doesn't prevent new home directories from being provisioned. It's a convention, not a software policy.
Is there a way to accomplish what I want, where I can freely choose the value of $SPECIAL_USERNAME? I don't want to be locked in to the generated usernames based on the user/service account email.
Using root for everything is unacceptable for a number of reasons (we want to use a non-root container runtime and we want to limit potential damage done by this $SPECIAL_USERNAME).
I want to publish a tutorial where data from a sample TSV file in S3 is used by Redshift. Ideally I want following the exercises to be a simple copy-paste operation, step by step, similar to what's in Load Sample Data from Amazon S3. The problem is with the first data-import task using the COPY command, as it only supports S3- or EMR-based loads.
This seems like a simple requirement, but there is no hassle-free way to do it with Redshift COPY (I can make the file available for browser download without any problem, but COPY requires a CREDENTIALS parameter…)
The variety of options for Redshift COPY authorization parameters is quite rich:
Should I ask the user to Create an IAM Role for Amazon Redshift themselves?
Should I create it myself and publish the IAM role ARN? That sounds the most hassle-free (copy-paste), but security-wise it doesn't sound right…? Do I need to restrict the S3 permissions to limit the access to only that particular file for that role?
Should I try temporary access instead?
You are correct:
Data can be imported into Amazon Redshift from Amazon S3 via the COPY command
The COPY command requires permission to access the data stored in Amazon S3. This can be granted either via:
Credentials (Access Key + Secret Key) associated with an IAM User, or
An IAM Role
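Purely for reference, the two styles look like this in the COPY statement itself; the table, bucket, role ARN, and key values below are placeholders, and either statement can be pasted into any SQL client connected to the cluster (they are wrapped in Python strings here only for presentation).

    # Sketch of the two COPY authorization styles; all identifiers are placeholders.

    copy_with_iam_role = """
    COPY tutorial_table
    FROM 's3://tutorial-bucket/sample.tsv'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    DELIMITER '\\t';
    """

    copy_with_credentials = """
    COPY tutorial_table
    FROM 's3://tutorial-bucket/sample.tsv'
    CREDENTIALS 'aws_access_key_id=<access-key-id>;aws_secret_access_key=<secret-access-key>'
    DELIMITER '\\t';
    """

    print(copy_with_iam_role)
    print(copy_with_credentials)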
You cannot create a Role for people and let them use it, because their Amazon Redshift cluster will be running in a different AWS Account than your IAM Role. You could possibly grant trust access so that other accounts can use the role, but this is not necessarily a wise thing to do.
As for credentials, they could either use their own or ones that you supply. They can access their own Access Key + Secret Key in the IAM console.
If you wish to supply credentials for them to use, you could create an IAM User that has permission only to access the Amazon S3 files they need. It is normally unwise to publish your AWS credentials because they might expose a security hole, so you should think carefully before doing this.
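If you do supply such a user, its policy can be scoped down to just the tutorial file. A rough sketch of what that policy might look like (bucket and key names are hypothetical):

    import json

    # Hypothetical bucket/key: read access to the single sample file and nothing else.
    single_file_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": "arn:aws:s3:::tutorial-bucket/sample.tsv",
            }
        ],
    }

    print(json.dumps(single_file_policy, indent=2))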
At the end of the day, it's probably best to show them the correct process so they understand how to obtain their own credentials. Security is very important in the cloud, so you would also be teaching them good security practice, in addition to Amazon Redshift itself.
I have tried looking for an answer to this but I think I am perhaps using the wrong terminology so I figure I will give this a shot.
I have a Rails app where a company can have an account with multiple users each with various permissions etc. Part of the system will be the ability to upload files and I am looking at S3 for storage. What I want is the ability to say that users from Company A can only download the files associated with that company?
I get the impression I can't unless I restrict the downloads to my deployment servers' IP range (which will be Heroku) and then feed the files through a controller and a send_file() call. This would work, but then I am reading data from S3 to Heroku and then back to the user vs. directly from S3 to the user.
If I went with the send_file method, could I close off my S3 bucket to the outside world and have my Heroku app send the file directly?
A less secure idea I had was to create a unique slug for each file and store it under that name to prevent random guessing of files i.e. http://mys3server/W4YIU5YIU6YIBKKD.jpg etc. This would be quick and dirty but not 100% secure.
Amazon S3 buckets support policies for granting or denying access based on different conditions. You could probably use those to protect your files from different user groups. Have a look at the policy documentation to get an idea of what is possible. After that, you can switch over to the AWS policy generator to generate a valid policy depending on your needs.
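As a rough illustration of the kind of bucket policy that could back the approach described in the question (restricting downloads to the app servers' IP range and proxying everything else through send_file), here is a sketch applied with boto3; the bucket name and CIDR range are placeholders, and the IP condition is only one of many conditions such a policy could use.

    import json
    import boto3

    s3 = boto3.client("s3")

    bucket = "company-uploads"        # placeholder bucket name
    allowed_cidr = "203.0.113.0/24"   # placeholder: the app servers' egress IP range

    # Deny object reads from anywhere outside the allowed range, so the only path
    # to a file is through the application's controller and its send_file call.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
                "Condition": {"NotIpAddress": {"aws:SourceIp": allowed_cidr}},
            }
        ],
    }

    s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))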
I'm storing all my content in AWS S3 and I would like to know which is the best approach to retrieve my images:
should I use the account access keys or should I create a user with the correct policies and then use the access keys for that "user"?
Always always always create users with their own IAM policies. You should never use the root account credentials to do anything if you can help it.
It's like permanently running commands on your local machine as the root user. The account-level access and secret access keys are the absolute keys to the kingdom. With them, a hacker, malicious employee, or well-intentioned-but-prone-to-accidents administrator could completely destroy every AWS resource you have, download anything off them, and in general cause chaos and discord. Even machines with pem files aren't safe. A root-level user could just cut an AMI off an existing machine.
Take a look at the IAM policy generator. Writing JSON policies is no fun and error-prone, but tools like that one will help you get most of the way there.
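As a small sketch of the "access keys for that user" side of this (the key values, bucket, and object name below are placeholders): once the limited IAM user exists and has a suitable read policy attached, the application builds its S3 client from that user's keys rather than the account-level ones.

    import boto3

    # Placeholder credentials belonging to the limited IAM user, never the root account.
    session = boto3.Session(
        aws_access_key_id="AKIAEXAMPLEKEYID",
        aws_secret_access_key="example-secret-access-key",
    )
    s3 = session.client("s3")

    # Fetch an image with the restricted credentials; a hypothetical bucket and key.
    obj = s3.get_object(Bucket="my-image-bucket", Key="images/photo-001.jpg")
    image_bytes = obj["Body"].read()

That way a leaked application key can only do whatever its attached policy allows, instead of handing over the keys to the kingdom.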
I want to upload pictures to AWS S3 from the iPhone. Every user should be able to upload pictures, but the pictures must remain private to each user.
My question is very simple. Since I have no real experience with servers I was wondering which of the following two approaches is better.
1) Use some kind of token vending machine system to grant the user access to the S3 bucket so they can upload directly.
2) Send the picture to the EC2 Servlet and have the virtual server place it on the S3 storage.
Edit: I would also need to retrieve the pictures. Should I do it directly or through the servlet?
Thanks in advance.
Hey, personally I don't think it's a good idea to use a token vending machine to upload the data directly from the iPhone, because it's much harder to control the access privileges, etc. If you have the chance, use EC2 and a servlet, but that will add costs to your solution.
Also, when dealing with S3 you need to take into consideration that some files are not available right after you save them. Look at this answer from the S3 FAQ.
For retrieving data directly from S3 you will need to deal with the privileges issue again. Check the access model for S3, but again it's probably easier to manage access for non-public files via the servlet. The good news is that there is no data transfer charge for data transferred between EC2 and S3 within the same region.
Another important point in favor of the latter solution: high performance in handling load and network speeds within the Amazon ecosystem. With direct uploads, the client would have to handle complex asynchronous operations such as multipart uploads instead of focusing on the presentation and rendering of the image.
The servlet hosted on EC2 would be way more powerful than what you can do on your phone.
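The answer assumes a Java servlet on EC2; purely as an illustration of the same proxy flow, here is a rough server-side sketch in Python (the framework, bucket name, and key scheme are all assumptions, and real authentication of the calling user is omitted). Each upload is stored under a per-user prefix and streamed back on request, so the objects themselves never need to be public and the S3 credentials never leave the server.

    import boto3
    from flask import Flask, Response, request

    app = Flask(__name__)
    s3 = boto3.client("s3")
    BUCKET = "my-photo-bucket"  # hypothetical bucket; its objects stay private


    def object_key(user_id: str, picture_id: str) -> str:
        # A per-user prefix keeps each user's pictures separate within the bucket.
        return f"users/{user_id}/{picture_id}.jpg"


    @app.route("/users/<user_id>/pictures/<picture_id>", methods=["PUT"])
    def upload(user_id, picture_id):
        # The phone sends the image bytes to the server; the server writes them to S3.
        s3.put_object(Bucket=BUCKET, Key=object_key(user_id, picture_id), Body=request.data)
        return "", 204


    @app.route("/users/<user_id>/pictures/<picture_id>", methods=["GET"])
    def download(user_id, picture_id):
        # Retrieval also goes through the server, keeping the S3 access keys off the phone.
        obj = s3.get_object(Bucket=BUCKET, Key=object_key(user_id, picture_id))
        return Response(obj["Body"].read(), mimetype="image/jpeg")


    if __name__ == "__main__":
        app.run()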