A developer of mine wants to be able to see the entire contents of the S3 bucket that I've given him to develop with. It seems the only way to do this is to give him a limited version of the AWS console so he can watch objects as they enter the bucket.
Is this even possible? Is there any other way to allow him to see objects as they populate the bucket?
You can use IAM policies (attached to a user or role) to control access to resources at a granular level, even down to individual objects contained in an S3 bucket.
You can read more about IAM here https://aws.amazon.com/iam/
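As an illustration, a read-only policy scoped to a single bucket could be created and attached with boto3 roughly as in the sketch below. This is only a sketch: the bucket name, policy name, and user name are placeholders, and it assumes you run it with credentials that are allowed to manage IAM.

```python
import json
import boto3

iam = boto3.client("iam")

# Read-only access to one bucket: list its contents and download objects.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::example-dev-bucket",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-dev-bucket/*",
        },
    ],
}

response = iam.create_policy(
    PolicyName="ExampleDevBucketReadOnly",
    PolicyDocument=json.dumps(policy_document),
)

# Attach the policy to the developer's IAM user so he can browse the bucket
# (for example through the S3 section of the AWS console or the AWS CLI).
iam.attach_user_policy(
    UserName="example-developer",
    PolicyArn=response["Policy"]["Arn"],
)
```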
I am consuming a document-sharing API in my application which, when used, returns a "downloadUrl" for a given file located in Azure Blob Storage.
I want to take that Azure Blob Storage URL and stream the document into an Amazon S3 bucket.
How would I go about doing this? I see similar questions such as Copy from Azure Blob to AWS S3 using C#, but in that example they seem to have access to the stream of the document itself. Is there any way for me to simply provide S3 with the link and have it do the rest? Or do I need to fetch the file on my server and stream it, as in the example above?
Thanks in advance for the help.
There is only one case where S3 can be directed to fetch content "into" an object, and that is when the source is also an existing S3 object. It can be in the same bucket or a different bucket, or even a different AWS region or account, as long as the calling user has permissions on both source and target.
Any other case -- such as what you are contemplating -- requires that you fetch the object from the source, yourself, and then upload it into S3... as in the example.
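The question is in C#, but as a rough illustration of that fetch-then-upload flow, a minimal Python sketch might look like the following. The URL, bucket, and key are placeholders, and it assumes the requests and boto3 libraries.

```python
import boto3
import requests

def copy_url_to_s3(download_url: str, bucket: str, key: str) -> None:
    """Stream a file from a source URL (e.g. an Azure Blob downloadUrl)
    into an S3 object without writing it to local disk."""
    s3 = boto3.client("s3")
    with requests.get(download_url, stream=True) as response:
        response.raise_for_status()
        # response.raw is a file-like object; upload_fileobj streams it
        # to S3 in chunks (using multipart upload for large files).
        s3.upload_fileobj(response.raw, bucket, key)

copy_url_to_s3(
    "https://example.blob.core.windows.net/container/document.pdf",
    "my-target-bucket",
    "documents/document.pdf",
)
```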
We have a requirement to get .csv files from a bucket at a client location (they would provide the S3 bucket info and the other required information). Every day we need to pull this data into our S3 bucket so we can process it further. Please suggest the best way/technology that we can use to achieve this.
I am planning to do it with Python boto (or Pandas or PySpark) or Spark; the reason being that once we get this data, it might be processed further.
You can try cross-account object copy using the S3 COPY operation. This is more secure and the recommended approach. Please go through the link below for more details; it also works for different buckets in the same account. After copying, you can trigger a Lambda function with custom Python code to process the .csv files.
How to copy Amazon S3 objects from one AWS account to another by using the S3 COPY operation
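As a rough sketch, the copy itself can be done with boto3 as below. The bucket names and keys are placeholders, and it assumes the cross-account bucket policy / IAM permissions from the linked article are already in place.

```python
import boto3

s3 = boto3.client("s3")

# Server-side copy: S3 moves the bytes itself, nothing passes through your
# machine. Requires s3:GetObject on the source (granted by the client's
# bucket policy) and s3:PutObject on your own bucket.
s3.copy(
    CopySource={"Bucket": "client-source-bucket", "Key": "exports/data.csv"},
    Bucket="my-destination-bucket",
    Key="incoming/data.csv",
)
```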
If your customer keeps the data in an S3 bucket to which your account has been granted access, then it should be possible to use the .csv files as a direct source of data for a Spark job. Use s3a://theirbucket/nightly/*.csv as the RDD source and save it to s3a://mybucket/somewhere, ideally in a format other than CSV (Parquet, ORC, ...). This lets you do some basic transformation of the format into one that is easier to work with.
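A minimal PySpark sketch of that approach (using the DataFrame API rather than raw RDDs) might look like this; the paths come from the example above, and it assumes the S3A connector and credentials are already configured.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("nightly-csv-ingest").getOrCreate()

# Read the client's nightly CSV drop directly from their bucket...
df = spark.read.csv("s3a://theirbucket/nightly/*.csv", header=True, inferSchema=True)

# ...apply any basic cleanup/transformation here...

# ...and persist it to your own bucket in a columnar format.
df.write.mode("overwrite").parquet("s3a://mybucket/somewhere/")
```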
If you just want the raw CSV files, that S3 COPY operation is what you need, as it copies the data within S3 itself (6+ MiB/s if in the same S3 location) without needing any of your own VMs involved.
Use Case:
Upload multiple files into a cloud storage bucket, and then use that data as a source to a bigquery import. Use the name of the bucket as the metadata to drive which sharded table the data should go into.
Question:
In order to prevent a partial import to the BigQuery table, ideally I would like to do the following:
Upload the files into a staging bucket
Verify all files have been uploaded correctly
Rename the staging bucket to its final name (for example, gs://20130112)
Trigger the bigquery import to load the bucket into a sharded table
Since gsutil does not seem to support bucket rename, what are the alternative ways to accomplish this?
Google Cloud Storage does not support renaming buckets, or more generally an atomic way to operate on more than one object at a time.
If your main concern is that all objects were uploaded correctly (as opposed to needing to ensure the bucket content is only visible once all objects are uploaded), gsutil cp supports that -- if any object fails to upload, it will report the number that failed to upload and exit with a non-zero status.
So, a possible implementation would be a script that runs gsutil cp to upload all your files, and then checks the gsutil exit status before creating the BigQuery table load job.
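A sketch of that script in Python, assuming the gsutil and bq command-line tools are installed; the local directory, bucket, and dataset/table names are placeholders.

```python
import subprocess
import sys

# Upload everything; gsutil exits non-zero if any object failed to upload.
upload = subprocess.run(
    ["gsutil", "-m", "cp", "-r", "./staging/", "gs://20130112/"],
)
if upload.returncode != 0:
    sys.exit("Some objects failed to upload; not starting the BigQuery load.")

# All objects made it, so kick off the load job.
subprocess.run(
    ["bq", "load", "--autodetect", "--source_format=CSV",
     "mydataset.mytable_20130112", "gs://20130112/*.csv"],
    check=True,
)
```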
Mike Schwartz, Google Cloud Storage team
Object names are actually flat in Google Cloud Storage; from the service's perspective, '/' is just another character in the name. The folder abstraction is provided by clients, like gsutil and various GUI tools. Renaming a folder requires clients to request a sequence of copy and delete operations on each object in the folder. There is no atomic way to rename a folder.
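For example, "renaming" a folder with the google-cloud-storage Python client boils down to something like the sketch below; the bucket and prefixes are placeholders, and note that the loop is not atomic.

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")

old_prefix, new_prefix = "staging/", "20130112/"

# "Rename" the folder by copying every object under the old prefix to the
# new prefix and then deleting the original -- one object at a time.
for blob in client.list_blobs("my-bucket", prefix=old_prefix):
    new_name = new_prefix + blob.name[len(old_prefix):]
    bucket.copy_blob(blob, bucket, new_name)
    blob.delete()
```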
Mike Schwartz, Google Cloud Storage team
Context
I want to have a machine upload a file dump.rdb to s3/blahblahblah/YEAR-MONTH-DAY-HOUR.rdb on the hour.
Thus, I need this machine to have the ability to upload new files to S3.
However, I don't want this machine to have the ability to (1) delete existing files or (2) overwrite existing files.
In a certain sense, it can only "append" -- it can only add in new objects.
Question:
Is there a way to configure an S3 setup like this?
Thanks!
I cannot comment yet, so here is a refinement to @Viccari's answer...
The answer is misleading because it only addresses #1 in your requirements, not #2. In fact, it appears that it is not possible to prevent overwriting existing files, using either method, although you can enable versioning. See here: Amazon S3 ACL for read-only and write-once access.
Because you add a timestamp to your file names, you have more or less worked around the problem. (The same would be true of other schemes that encode the "version" of each file in the file name: timestamps, UUIDs, hashes.) However, note that you are not truly protected: a bug in your code, or two uploads in the same hour, would result in an overwritten file.
Yes, it is possible.
There are two ways to add permissions to a bucket and its contents: Bucket policies and Bucket ACLs. You can achieve what you want by using bucket policies. On the other hand, Bucket ACLs do not allow you to give "create" permission without giving "delete" permission as well.
1-Bucket Policies:
You can create a bucket policy (see some common examples here), allowing, for example, a specific IP address to have specific permissions.
For example, you can allow s3:PutObject and not allow s3:DeleteObject (see the policy sketch below).
More on S3 actions in bucket policies can be found here.
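A sketch of such a bucket policy, applied with boto3; the account ID, user, and bucket name are placeholders.

```python
import json
import boto3

s3 = boto3.client("s3")

# Allow uploads of new objects, but grant no delete permission at all.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUploadOnly",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/uploader"},
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::my-dump-bucket/*",
        }
    ],
}

s3.put_bucket_policy(Bucket="my-dump-bucket", Policy=json.dumps(policy))
```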
2-Bucket ACLs:
Using Bucket ACLs, you can only give the complete "write" permission, i.e. if a given user is able to add a file, he is also able to delete files.
This is NOT possible! S3 is a key/value store and thus inherently doesn't support append-only writes. A PUT (or cp) to S3 can always overwrite a file. By enabling versioning on your bucket, you are still safe in case the account uploading the files gets compromised.
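Enabling versioning is a one-off call; a boto3 sketch, with the bucket name as a placeholder:

```python
import boto3

s3 = boto3.client("s3")

# With versioning on, an overwriting PUT creates a new version instead of
# destroying the old data, and a delete just adds a delete marker.
s3.put_bucket_versioning(
    Bucket="my-dump-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)
```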
If someone goes to the url of my bucket, they are able to see every single file listed.
Although I want the files in my bucket to be able to be seen by the public, I'd prefer not to have this list view available. Is there a way to prevent "directory listings" like this?
You should remove read access for the "All Users" built-in group from the bucket's ACL. You can do that using a tool like the CloudBerry Explorer freeware.
Make sure you keep read access on the files you want to serve from S3.
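If you prefer to script it instead of using a GUI tool, a boto3 sketch of the same idea might look like this; the bucket and object key are placeholders, and it assumes ACLs are enabled on the bucket.

```python
import boto3

s3 = boto3.client("s3")

# Make the bucket itself private so anonymous users can no longer list it...
s3.put_bucket_acl(Bucket="my-public-assets", ACL="private")

# ...but keep the individual files you want to serve publicly readable.
s3.put_object_acl(Bucket="my-public-assets", Key="images/logo.png", ACL="public-read")
```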
Thanks
Andy