What is the difference between Cloudinary and Carrierwave, and if they're different, how does one complement the other? (I am planning to use these in a Rails 5.0.2 application.)
Cloudinary is a service for storing images and other media files, and accepts various upload parameters, as well as URL parameters for on-the-fly processing.
CarrierWave is a Ruby library for attaching files to records: it uploads the given files to a storage backend (filesystem, S3, Google Cloud Storage, etc.) and writes only the file identifier into the record's column.
CarrierWave can use Cloudinary as just another storage backend and take advantage of Cloudinary's on-the-fly processing and other features, which is useful if you don't want to process images yourself. CarrierWave can also use other storage backends (filesystem, S3, Google Cloud Storage, etc.), but most of them are just "dumb" object stores without processing capabilities. Conversely, you can use Cloudinary without CarrierWave, but then you need to implement the behaviour for attaching uploaded files to database records yourself.
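As a rough illustration (a minimal sketch, assuming the carrierwave and cloudinary gems are installed and CLOUDINARY_URL is configured), a CarrierWave uploader backed by Cloudinary could look like this:

```ruby
# app/uploaders/image_uploader.rb
class ImageUploader < CarrierWave::Uploader::Base
  include Cloudinary::CarrierWave   # use Cloudinary as the storage backend

  # Let Cloudinary produce a 100x100 thumbnail on the fly instead of
  # processing the image locally.
  version :thumbnail do
    cloudinary_transformation width: 100, height: 100, crop: :fill
  end
end

# app/models/post.rb
class Post < ApplicationRecord
  # Only the file identifier is written to the posts.image column.
  mount_uploader :image, ImageUploader
end
```

With that in place, `post.image.thumbnail.url` returns a Cloudinary URL whose transformation parameters do the resizing on Cloudinary's side.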
Okay, I have working apps that use Amazon S3 multipart uploads; they call CreateMultipartUpload, UploadPart and CompleteMultipartUpload.
Now we are migrating to Google Cloud Storage and we have a problem with multipart. As far as I understood, Google doesn't support S3-style multipart uploads; I got that from Google Cloud Storage support of S3 multipart upload.
So I see that Google's closest method is Compose (https://cloud.google.com/storage/docs/composite-objects), where I upload separate objects and then send a request to combine them. I could also use uploadType=multipart (https://cloud.google.com/storage/docs/json_api/v1/how-tos/upload#resumable), but that seems to be something completely different from S3 multipart. And there are resumable uploads (https://cloud.google.com/storage/docs/resumable-uploads), which seem to allow uploading a file in chunks, but without a complete-multipart step.
What is the best option to use? Some services already call CreateMultipartUpload, UploadPart and CompleteMultipartUpload, and I need to write an "adapter" for these services to make them compatible with Google Cloud Storage.
Update: the answer below is no longer correct. GCS does support multipart uploads: https://cloud.google.com/storage/docs/xml-api/post-object-multipart
You are correct. Google Cloud Storage does not currently support multipart upload.
The main benefits of multipart upload are allowing multiple streams to upload in parallel from one or more machines and allowing a partial upload failure not to ruin the whole upload. The best way to get those same benefits with GCS would be to upload the parts as separate objects and then use Compose to combine them into a final object. Indeed, this is exactly what the gsutil command-line utility does when uploading in parallel.
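For example (a sketch using the google-cloud-storage gem; project, bucket and file names are placeholders), the parallel-parts-plus-Compose approach looks roughly like this:

```ruby
require "google/cloud/storage"

storage = Google::Cloud::Storage.new(project_id: "my-project")
bucket  = storage.bucket "my-bucket"

# Upload each chunk as its own object; these uploads can run in parallel
# from one or more machines.
part_names = (0..2).map { |i| "big-file.part-#{i}" }
part_names.each_with_index do |name, i|
  bucket.create_file "/tmp/big-file.part-#{i}", name
end

# Combine the parts into the final object (Compose accepts up to 32 sources
# per call), then delete the intermediate part objects.
bucket.compose part_names, "big-file"
part_names.each { |name| bucket.file(name).delete }
```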
Resumable uploads are a great tool if you want to upload a single object in a single stream, in order, and you want the ability to resume if the connection is lost.
"uploadtype=multipart" uploads are a bit different. They are a way to specify an object's complete metadata and also its data in a single upload operation, using an HTTP multipart request.
I know I can download the image to my server and then upload it again to S3 or any other cloud hosting service, but is there any way to store the image asset directly on S3 by supplying the URL of the asset instead of a file? I don't want to add an unnecessary download and upload on my server.
Note: I am assured that the URI will be up 99.9% of the time and that the image file will be there. I am also OK with using services other than S3 if they have such a feature.
No. There is no API call for Amazon S3 that will retrieve content from another location.
You must supply the content as part of the API call.
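Since you have to supply the content yourself, the closest you can get is streaming the remote file straight through your process to S3 instead of saving and re-uploading it by hand. A minimal sketch with the aws-sdk-s3 gem (region, bucket, key and source URL are placeholders):

```ruby
require "open-uri"
require "aws-sdk-s3"

source_url = "https://example.com/assets/picture.jpg"
object = Aws::S3::Resource.new(region: "us-east-1")
                          .bucket("my-bucket")
                          .object("images/picture.jpg")

# open-uri yields an IO for the remote file; hand it straight to S3
# so there is no manual save-then-upload step.
URI.open(source_url) do |remote|
  object.put(body: remote, content_type: remote.content_type)
end
```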
My media storage is OpenStack Object Storage (Swift) in the cloud (OVH).
Regarding user rights on the uploaded media:
Images [A] are viewable by all users, but only deletable by the user-owner/uploader.
Images [B] are very private: CRUD by the user-owner/uploader, and viewable by some other users.
I looked around for solutions and came across pre-signed (temporary) URLs; see also this article.
I was wondering whether this provides an acceptable security level. An alternative I could think of is authenticating all users via OpenStack's authentication module, Keystone, but maybe that's just completely stupid and/or overkill. I started looking in that direction because it might be similar to AWS S3's use of IAM policies.
My questions:
Is the pre-signed URL solution the way to go? And if not, why not? (See the signing sketch after these questions.)
What would processing images (creating thumbnails) look like? You grab the image from storage, process it, store it back, and delete the local versions, I suppose?
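For reference, a Swift temporary URL is just an HMAC-SHA1 signature over the request method, expiry time and object path. A minimal Ruby sketch (the host, path and key are placeholders; the key must match the X-Account-Meta-Temp-URL-Key set on the account):

```ruby
require "openssl"

temp_url_key = ENV["SWIFT_TEMP_URL_KEY"]   # X-Account-Meta-Temp-URL-Key
method  = "GET"
expires = Time.now.to_i + 600              # link valid for 10 minutes
path    = "/v1/AUTH_myproject/private-images/photo_b.jpg"

# Swift's TempURL middleware verifies a signature over "METHOD\nEXPIRES\nPATH".
hmac_body = "#{method}\n#{expires}\n#{path}"
signature = OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new("SHA1"), temp_url_key, hmac_body)

temp_url = "https://swift.example.com#{path}" \
           "?temp_url_sig=#{signature}&temp_url_expires=#{expires}"
puts temp_url
```

Anyone holding such a URL can access the object until it expires, so the security level comes down to how narrowly the links are scoped and how short their lifetime is.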
I was checking the ImageResizer S3Reader2 plugin, and I have the following question.
My app is basically a C# REST API whose functionality includes serving photos (resized photos).
Would it be possible to use ImageResizer + Amazon S3 with this REST API, so that I can resize photos with ImageResizer in C# before serving them, without transferring the original photo over the network?
You'll have to transfer the original photo from S3 to your server (at least once) in order to resize it. The S3Reader2 plugin does this automatically. If you want to prevent repeat requests, look into SourceDiskCache.
Otherwise, that's exactly how ImageResizer+S3Reader2 functions.
I'm evaluating potential content management systems I want to use for a project. Many of the users will need to upload static files and include links to them in their posts.
In the Admin UI I can only see the ability to upload an image in a post. Does anyone know if it is possible to upload files to Keystone through the Admin UI?
You could use their Amazon S3 storage adapter. Depending on which version of Keystone you're using (3 or 4), you'll have to do things slightly differently. Either way, you need to create credentials for Amazon S3's service and configure Keystone to work with them. From there, you can use Types.S3File to allow a certain part of your MongoDB model to be a reference to an S3 object. See this page for more info on the S3File type in Keystone.