aws-sdk delete_object deletes entire bucket contents? - amazon-s3

Learning the ropes of AWS S3 and the SDK. I'm trying to use it in my Rails app via the aws-sdk gem. I was following this post:
Remove entire object directory tree using AWS-SDK
Luckily for me, I was playing around in my staging bucket. I created a folder test and uploaded an example.jpg image. From the Rails console I ran (note the lack of {} brackets):
s3.delete_object(bucket: 'mystagingbucket', key: '/test/example.jpg')
=> #<struct Aws::S3::Types::DeleteObjectOutput delete_marker=true, version_id=nil, request_charged=nil>
Then I go back into the web console to find my entire bucket empty. All my previously uploaded files, static assets, etc. are gone.
So I realize that I should turn versioning on and try this again to duplicate the issue. After some Googling I see the docs show the {} brackets.
Now I get:
s3.delete_object({bucket: 'mystagingbucket', key: '/test/example.jpg'})
=> #<struct Aws::S3::Types::DeleteObjectOutput delete_marker=true, version_id="blohAaXlnG2.RERnj_JT3zvQmAr8io48", request_charged=nil>
Except that nothing happens and the file is not deleted. I have done some more Googling, and I now see that it can take up to a few hours for files to actually be deleted, so I will check back on the recent deletes.
As for the bucket contents getting erased, am I missing something here?
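For what it's worth, one way to double-check what a call actually touched in a versioning-enabled bucket is to list the object versions and delete markers under the prefix. This is only a rough sketch reusing the question's s3 client and bucket; note that S3 keys normally do not start with "/", so 'test/example.jpg' and '/test/example.jpg' are two different keys:
# List everything (current versions and delete markers) under the test/ prefix
resp = s3.list_object_versions(bucket: 'mystagingbucket', prefix: 'test/')
resp.versions.each       { |v| puts "version:       #{v.key} #{v.version_id}" }
resp.delete_markers.each { |m| puts "delete marker: #{m.key} #{m.version_id}" }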

Related

Directus - AWS S3 Upload - error message: Cannot set property 'value' of undefined

I just set up Directus with S3 storage as described here: https://docs.directus.io/extensions/storage-adapters.html#core-adapters
It seems to be working, but there is one issue:
When I upload an image, it gives me an error: Cannot set property "value" of undefined
But when I refresh the page, everything is done and seems fine. The files are on S3 and I can see them in the admin panel. But the error message and the need to refresh the page are a real problem, especially when handing the site over to my clients.
Is it possible that this could be a bug in the source code of directus?
But if that's the case, why am I the only person having this issue...?
Here are my config details:
It could be a bug... perhaps related to a migrations issue (which would be why it's not seen by more people). You could open a ticket on GitHub and the developers will check it out. Just try to make sure it's not a config issue first. :)

Create an app in Wit.ai from zip

I'm trying to build a sample app in Wit.AI with a lot of entity values and expressions, so creating the app manually is not an option.
I've tried their "import" feature, but it seems it doesn't work very well, or it might be very capricious about the zip. Here is what I've done, none of which gave a result:
Download a zip from another app in my account
Change the zip command in order to work for the new app
The changed files are expressions.json and a single file in the entities folder, describing a user-defined entity.
Zip the whole folder in order to preserve the structure of the ZIP
No matter how many approaches I've tried (formatting the JSON, etc.), nothing worked! The server returns a 400 Bad Request response.
Furthermore, I've tried their Web API, but to no avail again. When I update the values of an entity, the server responds with success, but the response doesn't contain the new values...
I've checked this article Error importing app from backup on wit.ai and many others, as well as some issues on GitHub, but again...nothing helped ;)
So, if anyone could help on that...He/she gets a beer! :)
When you create a new app, you can simply upload the zip file under Import your app from a backup, and create the new app.
To be sure not to include any redundant files in the app zip file, it is important to use the following command to zip the app files:
zip AppName.zip AppName/app.json AppName/entities/*.json AppName/expressions.json
and upload AppName.zip.
Note that the name of the new app, the zip file, and the app name in the app.json file should all be the same (here AppName).
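Putting the command together with the naming rule, the folder you zip is expected to look roughly like this (the entity file name is just an example):
AppName/
  app.json
  entities/
    my_entity.json
  expressions.json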

Cloud Storage Transfer "Failed"

I've tried repeatedly to use the Google Developers Console tools to Create a Transfer that works, but haven't had any luck. My source is in S3.
I tried with the "S3://" URL, but when trying to accept the transfer settings, I consistently get "source bucket doesn't exist". I test my URL by placing it in a browser, and I do get it to resolve, so I don't know what's up.
Even more puzzling is when I try using a text file of URLs. These URLs are all http:// strings, and each of them loads properly in a browser. I figured this would be even more straightforward as there are really no permissions to deal with, since each file in the S3 bucket already has read permissions.
Instead, all I get in the Transfer history is "Failed", with no other information at all.
At first, I was greedy and included all my files. When I got nowhere with that, I cut it down to a single file. Still no go.
Here is the text file.
Any clues, por favor?
It looks like your text file doesn't follow the specified format. You should add the header and size/MD5 of each file as described at https://cloud.google.com/storage/transfer/#urls
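For illustration, a URL list in the format described there starts with a TsvHttpData-1.0 header line, followed by one tab-separated line per file giving the URL, the size in bytes, and the Base64-encoded MD5 (the entries below are made-up placeholders):
TsvHttpData-1.0
http://example.com/images/photo1.jpg	21504	1B2M2Y8AsgTpgAmY7PhCfg==
http://example.com/images/photo2.jpg	84921	XrY7u+Ae7tCTyyK7j1rNww==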

Restricting access to Paperclip :original files in S3

How do I restrict any access to the :original styled files in S3 but keep access to the rest of the styles's folders in the bucket?
I saw implementations of how to limit all access and then check attributes on a model. I just want to limit access to the :original style.
I did notice this line in Paperclip; I just don't know how to use it (if that's even possible).
You can limit access by serving the files through a controller action. This way you can control which files a user can access and which they cannot.
Simply making the S3 bucket private won't help you, as a user with a valid key can access any file in the bucket. If you really have files that need to be protected, you have only a few ways to do it (as I see it):
Restrict access to the bucket and serve the files through a controller action (there is no real way to work around this)
Rename the specific files so they are not easy to predict (e.g. 32 or more random numbers and letters). This is quite simple to achieve, and you can still serve the files directly from S3
Save the files somewhere else (maybe in another S3 bucket) so nobody can predict their location
For renaming files you can use this stackoverflow question: Paperclip renaming files after they're saved
The answer I am looking for (I think, didn't test it yet) can be found here
http://rdoc.info/github/thoughtbot/paperclip/Paperclip/Storage/S3
s3_permissions: This is a String that should be one of the "canned" access policies that S3 provides (more information can be found here: docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?RESTAccessPolicy.html) The default for Paperclip is :public_read.
You can set permissions on a per-style basis by doing the following:
:s3_permissions => {
:original => :private
}
Or globally:
:s3_permissions => :private
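For context, here is a rough sketch of how the per-style setting could sit in a model (the model name, attachment name, styles, and credential keys are illustrative assumptions, not from the question):
class Photo < ActiveRecord::Base
  has_attached_file :image,
    :storage => :s3,
    :s3_credentials => {
      :bucket            => ENV['S3_BUCKET'],
      :access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
      :secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
    },
    :styles => { :thumb => '100x100>', :medium => '300x300>' },
    # keep the untouched upload private; leave the resized styles public
    :s3_permissions => {
      :original => :private,
      :thumb    => :public_read,
      :medium   => :public_read
    }

  validates_attachment_content_type :image, :content_type => /\Aimage\/.*\z/
end
A private :original can then still be handed out with a time-limited link via Paperclip's expiring_url, e.g. photo.image.expiring_url(60, :original), while the public styles keep their normal S3 URLs.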

How can I update files on Amazon's CDN (CloudFront)?

Is there any way to update files stored on Amazon CloudFront (Amazon's CDN service)?
It seems like CloudFront won't pick up any update we make to a file (e.g. removing the file and storing a new one with the same file name as before).
Do I have to explicitly trigger an update process to remove the files from the edge servers to get the new file contents published?
Thanks for your help
Here is how I do it using the CloudFront control panel.
Select CloudFront from the list of services.
Make sure Distributions from the top left is selected.
Next click the link for the associated distribution from the list (under id).
Select the Invalidations tab.
Click the Create Invalidation button and enter the location of the files you want to be invalidated (updated).
For example, a path such as /images/logo.png, or /* to invalidate everything.
Then click the Invalidate button and you should now see InProgress under status.
It usually takes 10 to 15 minutes to complete your invalidation
request, depending on the size of your request.
Once it says completed you are good to go.
Tip:
Once you have created a few invalidations, if you come back and need to invalidate the same files, use the select box and the Copy link will become available, making it even quicker.
Amazon added an invalidation feature. Here is the API reference.
Sample Request from the API Reference:
POST /2010-08-01/distribution/[distribution ID]/invalidation HTTP/1.0
Host: cloudfront.amazonaws.com
Authorization: [AWS authentication string]
Content-Type: text/xml
<InvalidationBatch>
<Path>/image1.jpg</Path>
<Path>/image2.jpg</Path>
<Path>/videos/movie.flv</Path>
<CallerReference>my-batch</CallerReference>
</InvalidationBatch>
Set TTL=1 hour and replace the file when it changes. See also: http://developer.amazonwebservices.com/connect/ann.jspa?annID=655
Download the CloudBerry Explorer freeware version to do this on single files:
http://blog.cloudberrylab.com/2010/08/how-to-manage-cloudfront-object.html
Cyberduck for Mac & Windows provides a user interface for object invalidation. Refer to http://trac.cyberduck.ch/wiki/help/en/howto/cloudfront.
I seem to remember seeing this on serverfault already, but here's the answer:
By "Amazon CDN" I assume you mean "CloudFront"?
It's cached, so if you need it to be updated right now (as opposed to "the new version will be visible in 24 hours") you'll have to choose a new name. Instead of "logo.png", use "logo.png--0", and then update it using "logo.png--1", and change your HTML to point to that.
There is no way to "flush" amazon cloudfront.
Edit: This was not possible, it is now. See comments to this reply.
CloudFront's user interface offers this under the [i] button > "Distribution Settings", tab "Invalidations": https://console.aws.amazon.com/cloudfront/home#distribution-settings
In Ruby, using the fog gem:
require 'fog'  # with the newer split gems this may be require 'fog/aws' instead

# Credentials and the CloudFront distribution ID come from the environment
AWS_ACCESS_KEY      = ENV['AWS_ACCESS_KEY_ID']
AWS_SECRET_KEY      = ENV['AWS_SECRET_ACCESS_KEY']
AWS_DISTRIBUTION_ID = ENV['AWS_DISTRIBUTION_ID']

# Connect to CloudFront through fog's CDN interface
conn = Fog::CDN.new(
  :provider              => 'AWS',
  :aws_access_key_id     => AWS_ACCESS_KEY,
  :aws_secret_access_key => AWS_SECRET_KEY
)

# Paths to invalidate, relative to the distribution root
images = ['/path/to/image1.jpg', '/path/to/another/image2.jpg']
conn.post_invalidation AWS_DISTRIBUTION_ID, images
Even with an invalidation, it still takes 5-10 minutes for the invalidation to process and propagate to all Amazon edge servers.
CrossFTP for Windows, Mac, and Linux provides a user interface for CloudFront invalidation; check this for more details: http://crossftp.blogspot.com/2013/07/cloudfront-invalidation-with-crossftp.html
I am going to summarize possible solutions.
Case 1: One-time update: Use Console UI.
You can manually go through the console's UI as per #CoalaWeb's answer and initiate an "invalidation" on CloudFront that usually takes less than one minute to finish. It's a single click.
Additionally, you can manually update the path it points to in S3 there in the UI.
Case 2: Frequent update, on the Same path in S3: Use AWS CLI.
You can use the AWS CLI to simply run the same thing from the command line.
The command is:
aws cloudfront create-invalidation --distribution-id E1234567890 --paths "/*"
Replace the E1234567890 part with the DistributionId that you can see in the console. You can also limit this to certain files instead of /* for everything.
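For instance, a hypothetical invalidation of just two objects instead of everything might look like:
# Placeholder distribution ID and paths; substitute your own
aws cloudfront create-invalidation --distribution-id E1234567890 --paths "/index.html" "/images/logo.png"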
An example of how to put it in package.json for a Node/JavaScript project as a target can be found in this answer. (different question)
Notes:
I believe the first 1000 invalidations per month are free right now (April 2021).
The user that performs AWS CLI invalidation should have CreateInvalidation access in IAM. (Example in the case below.)
Case 3: Frequent update, the Path on S3 Changes every time: Use a Manual Script.
If you are storing different versions of your files in S3 (i.e. the path contains the version-id of the files/artifacts) and you need to change that in CloudFront every time, you need to write a script to perform that.
Unfortunately, AWS CLI for CloudFront doesn't allow you to easily update the path with one command. You need to have a detailed script. I wrote one, which is available with details in this answer. (different question)
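As a reference for the IAM note above, a minimal policy statement granting that access could look roughly like this (the statement id is made up, and allowing it on all distributions via "Resource": "*" is an assumption; scope it down where possible):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontInvalidation",
      "Effect": "Allow",
      "Action": "cloudfront:CreateInvalidation",
      "Resource": "*"
    }
  ]
}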