Migrating Paperclip database to Active Storage Amazon S3 - amazon-s3

We are in the process of migrating from Paperclip to Active Storage, but we also want to move the Paperclip storage from the database to Active Storage on Amazon S3. Can anyone point me to documentation for this? I can't seem to find any.

Maybe this article can help you. The code is missing due to a GitHub username update, but the main idea is there.
Or simply the Paperclip documentation, as you wish.
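For what it's worth, here is a rough sketch of the copy step, assuming Active Storage is already configured against an S3 service in config/storage.yml and the Paperclip attachment is a User avatar. The model, attachment, and task names below are illustrative (they are not from the article), and the Active Storage attachment is declared under a temporary name (has_one_attached :new_avatar) so it doesn't clash with the existing Paperclip method:

# lib/tasks/migrate_paperclip.rake -- a rough sketch, not the official migration
require "open-uri"

namespace :paperclip do
  desc "Copy Paperclip avatars into Active Storage on S3"
  task migrate_to_active_storage: :environment do
    User.where.not(avatar_file_name: nil).find_each do |user|
      # Paperclip keeps the original filename and content type in columns.
      # Assumes the existing file is reachable via user.avatar.url; use
      # File.open(user.avatar.path) instead if it lives on local disk.
      io = URI.open(user.avatar.url)
      user.new_avatar.attach(
        io: io,
        filename: user.avatar_file_name,
        content_type: user.avatar_content_type
      )
      puts "migrated avatar for user ##{user.id}"
    end
  end
end

Once everything is copied and verified, you can rename the Active Storage attachment and drop the old Paperclip columns.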

Related

How to transfer tokbox archive videos into my own server

I am very new to TokBox and interested to know whether I can download and store the archived videos/data from TokBox onto my own server through the REST API, without using Microsoft Azure or Amazon S3.
Thanks in advance.
You have two options here. You can provide your own S3/Azure bucket, or you can use the default OpenTok bucket, where your archives will be available for download for 72 hours. If you don't want to use your own bucket, just let OpenTok store it and, when you have finished the archiving session, use the API to get the S3 URL and download it wherever you want.
If you want the archive to be created directly on your own server, the answer is simple: you cannot.
I hope this helps.
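For the OpenTok-hosted case (no bucket of your own), a minimal sketch using the opentok Ruby gem looks roughly like this; the archive id and environment variable names are placeholders:

require "opentok"   # the official OpenTok Ruby SDK (gem "opentok")

opentok = OpenTok::OpenTok.new(ENV["OPENTOK_API_KEY"], ENV["OPENTOK_API_SECRET"])

# After stopping the archive, look it up and grab the temporary download URL
# (available for roughly 72 hours when OpenTok stores the archive for you).
archive = opentok.archives.find("ARCHIVE_ID")   # placeholder archive id
if archive.status == "available"
  puts archive.url   # pre-signed S3 URL; fetch it and save it on your own server
end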

AWS S3 stops uploading from my Lenovo® ix2-dl

I have a Lenovo® ix2-dl NAS drive that I set up to back up to AWS S3. It connected fine, but for some reason it only uploads 5% of my data. How can I get it to upload everything?
I updated my NAS to the latest firmware, 4.1.218.34037.
I recently had issues with the S3 backup feature, where the uploads simply stopped working. No errors, nothing in the logs to indicate an issue. I tested my AWS S3 access key and secret with another method and was able to upload files just fine.
To resolve the issue, I had to create a new AWS S3 bucket, then go into the Lenovo's S3 setup and provide the required info. I think what made this work for me was making sure the bucket name contained nothing other than letters and numbers. My old bucket name was similar to lastname.family.pics; my new bucket, which works, is similar to lastname123.
Hope this helps. This feature had worked fine for a long time; perhaps an update came down with different requirements for the API.
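If you'd rather script the new-bucket step than click through the console, a minimal sketch with the aws-sdk-s3 gem; the region and bucket name are just examples, the point being to stick to lowercase letters and numbers:

require "aws-sdk-s3"

# Avoid dots and other punctuation in the name; some older S3 clients choke on them.
s3 = Aws::S3::Client.new(region: "us-east-1")
s3.create_bucket(bucket: "lastname123")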

AWS elasticbeanstalk automating deletion of logs published to S3

I have enabled publishing of logs from AWS elasticbeanstalk to AWS S3 by following these instructions: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.loggingS3.title.html
This is working fine. My question is how do I automate the deletion of old logs from S3, say over one week old? Ideally I'd like a way to configure this within AWS but I can't find this option. I have considered using logrotate but was wondering if there is a better way. Any help is much appreciated.
I eventually discovered how to do this. You can create an S3 Lifecycle rule to delete particular files, or all files under a prefix, once they are more than N days old. Note: you can also archive instead of delete, or archive for a while before deleting, among other things; it's a great feature.
Reference: http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectExpiration.html
and http://docs.aws.amazon.com/AmazonS3/latest/dev/manage-lifecycle-using-console.html
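If you'd rather set the rule up in code than in the console, here is a rough sketch using the aws-sdk-s3 gem; the bucket name and the logs/ prefix are assumptions about where Elastic Beanstalk publishes your logs, so adjust them to match your setup:

require "aws-sdk-s3"

s3 = Aws::S3::Client.new(region: "us-east-1")

# Expire (delete) everything under the logs/ prefix once it is a week old.
# Swap `expiration` for `transitions` if you prefer to archive to Glacier first.
s3.put_bucket_lifecycle_configuration(
  bucket: "my-elasticbeanstalk-logs",        # placeholder bucket name
  lifecycle_configuration: {
    rules: [
      {
        id: "expire-old-eb-logs",
        status: "Enabled",
        filter: { prefix: "logs/" },         # assumed prefix for the published logs
        expiration: { days: 7 }
      }
    ]
  }
)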

Heroku Photos stored on server

I use my server to store user-uploaded pictures. This works, however whenever I deploy a code change with
git push heroku master
the pictures stored on the server are deleted. How do I prevent this?
Heroku's dyno filesystem is ephemeral, so you can't (and shouldn't) store uploaded files on your dynos.
If you think about it, it makes sense. You can have multiple dynos running your app so you can't guarantee which dyno is receiving the pictures.
Dynos should be stateless anyway, so you can easily scale your application up or down.
The preferred way to do file uploads on Heroku is to use Amazon S3 as outlined in their DevCenter.
As leonardoborges said, Heroku's filesystem is ephemeral. Since you are using Rails, you can use a gem like CarrierWave, which helps with handling images in your app and is easy to set up with Amazon S3.
Other helpful links
Carrierwave Railscast
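A minimal sketch of what that setup can look like with CarrierWave and the fog-aws gem; the bucket name and credential environment variables are placeholders for your own values:

# config/initializers/carrierwave.rb
require "carrierwave"

CarrierWave.configure do |config|
  config.fog_provider    = "fog/aws"          # requires the fog-aws gem
  config.fog_credentials = {
    provider:              "AWS",
    aws_access_key_id:     ENV["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"],
    region:                ENV.fetch("AWS_REGION", "us-east-1")
  }
  config.fog_directory = ENV["S3_BUCKET"]     # your bucket name
end

# app/uploaders/picture_uploader.rb
class PictureUploader < CarrierWave::Uploader::Base
  storage :fog                                # store on S3 instead of the dyno's disk

  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{model.id}"
  end
end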

Allowing users to download files as a batch from AWS s3 or Cloudfront

I have a website that allows users to search for music tracks and download the ones they select as MP3s.
I have the site on my server and all of the MP3s on S3, distributed via CloudFront. So far so good.
The client now wishes for users to be able to select a number of music tracks and then download them all in bulk, as a batch, instead of one at a time.
Usually I would place all the files in a zip and then present the user a link to that new zip file to download. In this case, as the files are on S3, that would require me to first copy all the files from S3 to my web server, process them into a zip, and then serve the download from my server.
Is there any way I can create a zip on S3 or CloudFront, or is there some way to batch/group files into a zip?
Maybe I could set up an EC2 instance to handle this?
I would greatly appreciate some direction.
Best
Joe
I am afraid you won't be able to create the batches without additional processing. Firing up an EC2 instance might be an option to create a batch per user.
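If you do go the EC2 (or background job) route, here is a rough sketch of the zipping step using the aws-sdk-s3 and rubyzip gems; the bucket name and key list are placeholders for whatever tracks the user selected:

require "aws-sdk-s3"
require "zip"   # rubyzip gem

BUCKET = "my-music-bucket"                          # placeholder
keys   = ["tracks/song1.mp3", "tracks/song2.mp3"]   # the tracks the user selected

s3 = Aws::S3::Client.new(region: "us-east-1")

# Stream each selected object from S3 into a single zip archive,
# then push the zip back to S3 so the user gets one download link.
Zip::File.open("batch.zip", Zip::File::CREATE) do |zip|
  keys.each do |key|
    zip.get_output_stream(File.basename(key)) do |entry|
      s3.get_object(bucket: BUCKET, key: key) { |chunk| entry.write(chunk) }
    end
  end
end

File.open("batch.zip", "rb") do |file|
  s3.put_object(bucket: BUCKET, key: "batches/batch.zip", body: file)
end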
I am facing the exact same problem. So far the only thing I was able to find is the AWS CLI's s3 sync command:
https://docs.aws.amazon.com/cli/latest/reference/s3/sync.html
In my case, I am using Rails + its Paperclip addon which means that I have no way to easily download all of the user's images in one go, because the files are scattered in a lot of subdirectories.
However, if you can group your user's files in a better way, say like this:
/users/<ID>/images/...
/users/<ID>/songs/...
...etc., then you can solve your problem right away with:
aws s3 sync s3://<your_bucket_name>/users/<user_id>/songs /cache/<user_id>
Do keep in mind you'll have to give your server the proper credentials so the S3 CLI tools can work without prompting for them.
And that should sort you out.
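For the Rails + Paperclip case, one way to get that kind of per-user layout is to customize the attachment path. The Song model and the :user_id interpolation below are our own illustration (not something Paperclip provides out of the box):

# :user_id is a custom Paperclip interpolation we define ourselves.
Paperclip.interpolates(:user_id) do |attachment, _style|
  attachment.instance.user_id
end

class Song < ActiveRecord::Base
  belongs_to :user

  has_attached_file :audio,
    storage: :s3,
    s3_credentials: {
      bucket:            ENV["S3_BUCKET"],
      access_key_id:     ENV["AWS_ACCESS_KEY_ID"],
      secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"]
    },
    # Files land under users/<user_id>/songs/..., so `aws s3 sync` can
    # fetch everything for one user with a single prefix.
    path: "users/:user_id/songs/:id/:filename"

  validates_attachment_content_type :audio, content_type: %r{\Aaudio/}
end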
Additional discussion here:
Downloading an entire S3 bucket?
S3 serves each object over a single HTTP request, so the way to speed up a bulk download is to use multiple threads.
The Java API offers TransferManager for this:
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/transfer/TransferManager.html
You can get great performance with multiple threads, but there is no true bulk-download API, sorry.
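For reference, the same threads idea sketched in Ruby with the aws-sdk-s3 gem; the bucket, prefix, and thread count are arbitrary choices:

require "aws-sdk-s3"

BUCKET  = "my-bucket"        # placeholder
THREADS = 8                  # tune to taste

s3   = Aws::S3::Client.new(region: "us-east-1")
# list_objects_v2 returns up to 1000 keys per call; paginate for more.
keys = s3.list_objects_v2(bucket: BUCKET, prefix: "songs/").contents.map(&:key)

queue = Queue.new
keys.each { |k| queue << k }

# Each object is a separate HTTP request, so fan them out across threads.
workers = THREADS.times.map do
  Thread.new do
    client = Aws::S3::Client.new(region: "us-east-1")  # one client per thread
    loop do
      key = begin
        queue.pop(true)      # non-blocking pop; raises ThreadError when empty
      rescue ThreadError
        break
      end
      client.get_object(bucket: BUCKET, key: key,
                        response_target: File.basename(key))
    end
  end
end
workers.each(&:join)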