I would like to ask a simple beginner's question. I have a Rails app running on Heroku, and the app uses Amazon S3 for storing images.
For uploading images I use the Paperclip plugin.
Here is what I don't understand. I deploy my app from localhost to Heroku. On Heroku the app works fine: I upload an image, the image is stored on S3, and it is displayed correctly in my app.
But if I upload an image on my localhost version, will the image be uploaded to the S3 bucket, or will it be stored on my hard drive?
Are these two environments separate, or does setting up S3 support in my model once mean that all images will be uploaded to S3 (both from Heroku and from localhost)?
#phs is correct. The images will be stored on S3 regardless of where you run the app. This can cause you some grief if the record :id is embedded in the image location (which it probably is) and your development database has different IDs than your production/Heroku database.
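As a minimal sketch of that setup (the model name, the attachment name :image, and the ENV variable names are placeholders, not taken from your app), you can keep local uploads on disk and only use S3 in production:

    # app/models/photo.rb - a sketch only; adapt the model, attachment name,
    # and credential sources to whatever your app actually uses.
    class Photo < ActiveRecord::Base
      if Rails.env.production?
        has_attached_file :image,
          storage: :s3,
          s3_credentials: {
            bucket:            ENV['S3_BUCKET_NAME'],
            access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
            secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
          }
      else
        # Development/test: keep uploads on the local filesystem instead of S3
        has_attached_file :image, storage: :filesystem
      end
    end

With a split like this, images uploaded from localhost stay on your hard drive and only production uploads go to the bucket; if you configure the S3 branch unconditionally, every environment (Heroku and localhost) will upload to S3.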
I'm working on an application in React Native that needs to display images from the network. These images are stored in an S3 bucket that I own.
When I display the images on my simulator everything works perfectly. However, when I test my build via TestFlight (same behavior on Android), no image is displayed.
Here are the leads I have already explored:
The images I store have no file extension. Is this a problem?
My guess is that the problem is due to an issue with the HTTPS certificate. I've also tested it with Amazon CloudFront and got the same result.
Do you have any idea what could cause this error, and any idea for a solution?
Thank you for your time,
I'm developing an application that involves (lots of) image processing.
The general overview of the system is:
User uploads photos to the server (raw photo, at FULL resolution)
Server fetches the new photos and applies image processing to them
Server resizes the images and serves those photos (deleting the full one?)
My current situation is that I have almost no expertise in image hosting or in uploading and managing large amounts of data.
What I plan to do is:
User uploads directly from the browser to Amazon S3 (full image)
The client notifies my server, which adds the uploaded file to the queue for my workers
When a worker receives a job, it downloads the full image (from Amazon) and processes it, updates the database, and then re-uploads the image to Cloudinary (resizing on the server?)
Use the hosted image on Cloudinary from then on.
My doubts are regarding processing time. I don't want to upload directly to my server, because that would require a lot of traffic and create a bottleneck, so uploading straight to Amazon S3 would reduce it (a minimal presigned-URL sketch of this direct upload follows after this question). Hosting images on Amazon alone would not be that good, since they don't provide image-specific APIs the way Cloudinary does.
Is it OK to work with a separate service for uploading and only notify my server once the browser has finished the upload? Does using Cloudinary for hosting the images also make sense? Should sending files directly to my own server (instead of to Amazon) be avoided?
(This is more a guidance/design question)
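For the browser-to-S3 upload in the plan above, here is a minimal server-side sketch, assuming the aws-sdk-s3 gem; the bucket name, key pattern, and region are illustrative placeholders:

    # Sketch: generate a short-lived presigned URL so the browser can PUT the
    # full-resolution photo straight to S3; the app server never sees the bytes.
    require 'aws-sdk-s3'
    require 'securerandom'

    s3  = Aws::S3::Resource.new(region: 'us-east-1')
    key = "raw/#{SecureRandom.uuid}.jpg"
    object = s3.bucket('my-uploads-bucket').object(key)

    # Hand this URL to the browser; once the upload completes, the client
    # notifies your server with the key so a worker job can be enqueued.
    upload_url = object.presigned_url(:put, expires_in: 15 * 60)

Your server then only handles the small signing request and the completion notification, which is exactly the traffic reduction you are after.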
Why wouldn't you prefer uploading directly to Cloudinary?
The image can be uploaded directly from the browser to your Cloudinary account, without any further servers involved. Cloudinary then notifies you about the uploaded image and its details, and you can perform all the image processing in the cloud via Cloudinary. You can either manipulate the image while keeping the original, or choose to replace the original with the manipulated one.
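As a rough sketch of the server-side piece with the cloudinary gem (the public_id, transformation, and webhook URL are assumptions for illustration), an eager transformation lets Cloudinary generate the resized version at upload time while keeping the original:

    # Sketch: upload (or re-process) an image via the cloudinary gem and ask
    # Cloudinary to pre-generate an 800px-wide derivative. Values are illustrative.
    require 'cloudinary'

    Cloudinary.config do |config|
      config.cloud_name = ENV['CLOUDINARY_CLOUD_NAME']
      config.api_key    = ENV['CLOUDINARY_API_KEY']
      config.api_secret = ENV['CLOUDINARY_API_SECRET']
    end

    result = Cloudinary::Uploader.upload(
      'https://example.com/raw/photo.jpg',        # a remote URL, local file, or IO
      public_id: 'photos/photo-1234',
      eager: [{ width: 800, crop: :fit }],        # resized derivative built up front
      notification_url: 'https://myapp.example.com/cloudinary/webhook'
    )

    result['secure_url']   # URL of the stored original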
I am building a photo sharing site and using Amazon S3 for my storage. Everything is working great, except that the pages render slowly.
When I have over 100 images on the page, and requests that look like mysite/s3/bucket/image.jpg?w=200, does this mean that every image is first downloaded, and then resized? If so, how do I configure caching of thumbnails? I can't seem to find that info in the documentation.
You need the DiskCache (and possibly SourceDiskCache) plugins installed. DiskCache will cache the resized images to disk, while SourceDiskCache will cache the S3 images to disk.
If you only have a couple of versions of each S3 image, output caching is sufficient, but some form of caching is definitely needed.
It's also important to think about the bandwidth requirements between the ImageResizer server and S3. If you're using EC2, make sure you're in the same region as the S3 bucket. If you're using a VM, make sure that you have a big pipe.
The bottleneck is always I/O.
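For reference, a rough Web.config sketch of the plugin registration described above (the plugin names are the ones mentioned in this answer; exact attributes and the S3 reader configuration vary by ImageResizer version, so check the documentation):

    <!-- Web.config sketch: register the caching plugins in the <resizer> section.
         DiskCache caches resized output; SourceDiskCache caches the S3 originals. -->
    <resizer>
      <plugins>
        <add name="DiskCache" />
        <add name="SourceDiskCache" />
      </plugins>
    </resizer>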
I have a Rails app deployed to Heroku. It has image uploads. After deploying to Heroku again, I am unable to see the old images that were uploaded.
Does Heroku reset the images folder when the app is re-deployed? Please tell me the reason behind this.
Background
Heroku uses an 'ephemeral file system', which from an application architecture point of view should be considered read-only: it is discarded as soon as the dyno is stopped or restarted (which, among other occasions, occurs after each push), and it is not shared between multiple dynos.
This is fine for executing code, as most application data is stored in a database that is independent of the dynos. For file uploads, however, this presents a problem, so uploads should not be stored directly in the dyno filesystem.
Solution
The simplest solution is to use something like Amazon S3 as your file upload storage, and if you are using a gem like Paperclip this is natively supported within the gem. There is a great overview article in the Heroku Dev Center about using S3 with Heroku (https://devcenter.heroku.com/articles/s3), which leads into an article contributed by Thoughtbot (the developers of Paperclip) on implementation specifics within a Rails app (https://devcenter.heroku.com/articles/paperclip-s3).
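As a small companion sketch, assuming your Paperclip S3 configuration reads its credentials from environment variables (the variable names here are illustrative and must match whatever your model actually reads), the credentials belong in Heroku config vars rather than in the repository:

    # Sketch: set the S3 credentials as Heroku config vars
    heroku config:set AWS_ACCESS_KEY_ID=xxxxxxxx \
                      AWS_SECRET_ACCESS_KEY=xxxxxxxx \
                      S3_BUCKET_NAME=my-bucket-name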
I'm currently serving static images for my site via Amazon CloudFront, and for some reason my images won't update when I try to overwrite them with an updated image. The old image continues to display.
I've even tried deleting the entire images folder and uploading the newest ones without success. The only thing that works is renaming the image.
Anyone else experience this?
Amazon CloudFront recently announced a new feature called object invalidation. It allows you to invalidate a file with a single API call. Check the CloudFront API reference for more details.
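As a rough sketch of issuing such an invalidation from Ruby with the aws-sdk-cloudfront gem (the distribution ID and image path are placeholders):

    # Sketch: invalidate an updated image path on a CloudFront distribution.
    require 'aws-sdk-cloudfront'

    cloudfront = Aws::CloudFront::Client.new(region: 'us-east-1')

    cloudfront.create_invalidation(
      distribution_id: 'E1XXXXXXXXXXXX',        # placeholder distribution ID
      invalidation_batch: {
        paths: { quantity: 1, items: ['/images/header.jpg'] },
        caller_reference: Time.now.to_i.to_s    # must be unique per request
      }
    )

Note that invalidations take a few minutes to propagate, and versioned file names (effectively the renaming workaround you already found) remain a common alternative.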