How to reuse S3 image? - vue.js

I uploaded images to S3 with the carrierwave gem (Ruby on Rails, Vue.js).
But I want to reuse an uploaded image as a file, and I have no idea how to reuse images that are already on S3 that way.
To be specific,
I made a model "reaction",
and the image is saved as a column of "reaction".
I can access the image object like @reaction.image
(@reaction is an instance of "reaction").
What I have tried
Nothing so far; I totally don't know how to deal with it.

Once you have the image URL from your "Reaction" table, you can download the image at that URL and save it as a local file:
require 'net/http'

def download_aws_s3(url_aws_s3, filename)
  uri = URI(url_aws_s3)
  response = Net::HTTP.get_response(uri)
  File.open(filename, 'wb') { |f| f.write(response.body) }
end
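For example, assuming reaction is a "reaction" record as described in the question (the tmp path here is just an illustration), you could pass CarrierWave's URL straight to the helper:
download_aws_s3(reaction.image.url, Rails.root.join('tmp', 'reaction_image.jpg').to_s)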
You can use "open-uri" or the "down" gem as an alternative.
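If you go the open-uri route, a rough sketch (untested) of the same helper would be:
require 'open-uri'

def download_aws_s3(url_aws_s3, filename)
  URI.open(url_aws_s3) do |remote|
    File.open(filename, 'wb') { |f| IO.copy_stream(remote, f) }
  end
end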

Related

Scrapy upload files to dynamically created directories in S3 based on field

I've been experimenting with Scrapy for some time now and recently have been trying to upload files (data and images) to an S3 bucket. If the directory is static, it is pretty straightforward and I didn't hit any roadblocks. But what I want to achieve is to dynamically create directories based on a certain field from the extracted data and place the data & media in those directories. The template path, if you will, is below:
s3://<bucket-name>/crawl_data/<account_id>/<media_type>/<file_name>
For example if the account_id is 123, then the images should be placed in the following directory:
s3://<bucket-name>/crawl_data/123/images/file_name.jpeg
and the data file should be placed in the following directory:
s3://<bucket-name>/crawl_data/123/data/file_name.json
I have been able to achieve this for the media downloads (kind of a crude way to segregate media types, as of now), with the following custom File Pipeline:
import os
from urllib.parse import urlparse

from itemadapter import ItemAdapter
from scrapy.pipelines.files import FilesPipeline


class CustomFilepathPipeline(FilesPipeline):
    def file_path(self, request, response=None, info=None, *, item=None):
        adapter = ItemAdapter(item)
        account_id = adapter["account_id"]
        file_name = os.path.basename(urlparse(request.url).path)
        if ".mp4" in file_name:
            media_type = "video"
        else:
            media_type = "image"
        file_path = f"crawl_data/{account_id}/{media_type}/{file_name}"
        return file_path
The following settings have been configured at a spider level with custom_settings:
custom_settings = {
    'FILES_STORE': 's3://<my_s3_bucket_name>/',
    'FILES_RESULT_FIELD': 's3_media_url',
    'DOWNLOAD_WARNSIZE': 0,
    'AWS_ACCESS_KEY_ID': <my_access_key>,
    'AWS_SECRET_ACCESS_KEY': <my_secret_key>,
}
So, the media part works flawlessly and I have been able to download the images and videos into their separate directories in the S3 bucket, based on the account_id. My question is:
Is there a way to achieve the same results with the data files as well? Maybe another custom pipeline?
I have tried to experiment with the first example on the Item Exporters page but couldn't make any headway. One thing that I thought might help is to use boto3 to establish a connection and then upload the files, but that might require me to segregate the files locally and upload them together, using a combination of Pipelines (to split the data) and Signals (to upload the files to S3 once the spider is closed).
Any thoughts and/or guidance on this or a better approach would be greatly appreciated.

Google Drive - use WebViewLink vs thumbnailLink

I'm using the Google Drive API, where I can gain access to 2 pieces of data that I need to display a jpg file in my program. WebViewLink is the "large" size image while thumbnailLink is the "thumb" smaller size of the same image.
I'm having an issue with downloading the WebViewLink that I do not have with the thumbnailLink. Part of my code calls either exif_imagetype($filename) or getimagesize($filename) so I can retrieve the type, height & width, etc. for the $filename. This is successful for the thumbnailLink but not the WebViewLink...
code snippet...
$WebViewLink = "https://drive.google.com/a/treering.com/file/d/blablabla";
$type = exif_imagetype($WebViewLink);
--- results in the error
"PHP Warning: exif_imagetype(): stream does not support seeking..."
whereas...
$thumbnailLink = "https://lh6.googleusercontent.com/blablabla";
$type = exif_imagetype($thumbnailLink);
--- successful
where $type = 2 // a .jpg file
Not sure what I need to do to gain a usable WebViewLink... maybe use the "export" function to copy to a file on my server that is accessible, then use that exported file for the functions that fail above?
Thanks for any help.
John
I think you are using the wrong property to get the image of the file.
WebViewLink
A link for opening the file in a relevant Google editor or viewer in a browser.
thumbnailLink
A short-lived link to the file's thumbnail, if available. Typically lasts on the order of hours.
You can try using the iconLink property:
A static, unauthenticated link to the file's icon.
Sample image of thumbnailLink:
Sample image of an iconLink:
It will still show relevant image about the file.
Hope it helps!

How can I reorganize an existing folder hierarchy with CarrierWave?

I am trying to move files around my S3 bucket using CarrierWave to reorganize the folder structure.
I came to an existing Rails application where all images for a class are being uploaded into a folder called /uploads. This is causing problems where if two users upload different images with the same file-name, the second image overwrites the first. To solve this, I want to reorganize the folders to place each image in its own directory according to the ActiveRecord object instance. We are using CarrierWave to manage file uploads.
The old uploader code had the following method:
def store_dir
  "uploads"
end
I modified the method to reflect my new file storage scheme:
def store_dir
  "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end
This works great for new images, but breaks the url for old images. Existing images report their URL to be inside the new folder immediately when I change the model, while the image files are still stored in /uploads.
> object.logo.store_dir
=> "uploads/object/logo/133"
This is not correct. This object should report its logo in /uploads.
My solution is to write a script to move the image files, but I haven't found the correct methods in CarrierWave to move the files. My script would look something like this:
MyClass.all.each do |image|
  filename = file.name # This method exists in my uploader; returns the file name
  # Move the file from "/uploads" to "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end
What should I do in line three of my script to move the file to a new location?
WARNING: This is untested, so please don't use it in production before testing it out.
Here's the thing: once you change the contents of 'store_dir', all your old uploads will go missing. You know this already. Interacting with S3 directly seems like the most obvious way of solving this, since carrierwave doesn't have a move function.
One thing that might work would be to re-'store' your uploads and change the 'store_dir' path in the 'before :store' callback.
In your uploader:
# Use the old uploads directory so carrierwave knows where the original upload is
def store_dir
  'uploads'
end

before :store, :swap_out_store_dir

def swap_out_store_dir
  self.class_eval do
    def store_dir
      "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
    end
  end
end
And then run a script like this:
MyClass.all.each do |image|
  image.image.cache! # create a local cache so that store! has something to store
  image.image.store!
end
After this, verify that the files have been copied to the correct locations. You'll then have to delete the old upload files. Also, remove the one-time-use uploader code above and replace it with your new store_dir path:
def store_dir
  "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end
I haven't tested this out, so I can't guarantee it will work. Please use test data first to see if it works and comment here if you've had any success.
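If re-storing through CarrierWave proves fragile, the "interact with S3 directly" route mentioned above can be sketched with the aws-sdk-s3 gem. This is untested; the bucket name, region, and the MyClass/logo names are placeholders taken from the examples above, and reading the raw column assumes CarrierWave stores just the filename in the mounted column:
require 'aws-sdk-s3'

s3     = Aws::S3::Client.new(region: 'us-east-1') # region is an assumption
bucket = 'my-bucket'                              # placeholder bucket name

MyClass.find_each do |record|
  filename = record.read_attribute(:logo)         # the stored filename identifier
  old_key  = "uploads/#{filename}"
  new_key  = "uploads/my_class/logo/#{record.id}/#{filename}"
  s3.copy_object(bucket: bucket, copy_source: "#{bucket}/#{old_key}", key: new_key)
  # s3.delete_object(bucket: bucket, key: old_key) # only after verifying the copy
end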

Bad URI(is not URI?) on updating user model if avatar URL has a whitespace

I'm using carrierwave to upload images to S3 as an avatar for users. The image is uploaded correctly, but when I try to update the user model I get an error if the URL of the uploaded image has a whitespace:
URI::InvalidURIError
bad URI(is not URI?): https://files.s3.amazonaws.com/avatar/110/111134a0-25d6-0130-f023-60eb69762222/photo copy.jpg
What is the best way to solve this?
I'm using carrierwave, fog, carrierwave_direct and rmagick to upload images.
UPDATE:
After reading "carrierwave fails to load certain url", I added this to AvatarUploader < CarrierWave::Uploader::Base:
def process_uri(uri)
  URI.parse(URI.escape(URI.unescape(uri)))
end
But it didn't work. The approach seems correct, but it keeps saving the image URL in the DB with the whitespace instead of "%20".
Use URI.escape to clean up the URL before you validate it.
1.9.3p327 > URI.escape "https://files.s3.amazonaws.com/avatar/110/111134a0-25d6-0130-f023-60eb69762222/photo copy.jpg"
=> "https://files.s3.amazonaws.com/avatar/110/111134a0-25d6-0130-f023-60eb69762222/photo%20copy.jpg"

Dropbox API working, now trying to copy images to Amazon S3 with carrierwave

I have an iPad app that uses Dropbox to sync images to the cloud so that I can access them with a web app and process them, etc.
The part I'm having trouble with is getting the file from Dropbox to S3 via carrierwave. I have a photo model and I can create a new photo and upload an image successfully. I can also put a file on Dropbox. However, when I try to get a file off of Dropbox and put it on S3, the contents of the file are just text.
Are there some sort of MIME types I need to set or something?
I am using dropbox_sdk and the get_file_and_metadata method. It returns the file object successfully, but the contents are all just text. Here I'm hard-coding the image file so I can be sure it exists...
dbsession = DropboxSession.deserialize(session[:dropbox_session])
@client = DropboxClient.new(dbsession, ACCESS_TYPE) # raises an exception if the session is not authorized
@info = @client.account_info # look up account information
@photo = Photo.new(params[:photo])
@db_image, metadata = @client.get_file_and_metadata('/IMG_1575.jpg')
The part I don't know how to do is take this image, @db_image, and use that file when creating a new photo so it gets stored on S3.
I'm thinking it might be a MIME type issue, but I've read that that is only based on the file extension.
Any insight you could share would really help me get past this hurdle. Thanks!
Figured this out. Instead I used the direct_url.url method from the dropbox-api gem together with the carrierwave gem. The direct_url.url method returns a secure, full URL path to the file that you can use as the remote_url value for carrierwave.
@client = Dropbox::API::Client.new(:token => 'derp', :secret => 'derp')
@dropbox_files = @client.ls "images/#{@event.keyword}/#{photo_size}/"

@dropbox_files.each do |f|
  photo_exists = Photo.where(:dropbox_path => f.direct_url.url).count
  if photo_exists == 0
    @photo = Photo.create(:remote_filename_url => f.direct_url.url,
                          :dropbox_path => f.direct_url.url,
                          :event_id => @event.id)
  end
end
Now, I'm pretty new at Ruby, so I'll be posting a better way to step through the results, as this seems pretty slow and clunky.
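One possible cleanup (untested, assuming the same Photo columns as above): memoize the URL and use exists? so each file needs only one direct_url call and one lightweight query:
@dropbox_files.each do |f|
  url = f.direct_url.url # compute the direct URL once per file
  next if Photo.exists?(:dropbox_path => url)
  Photo.create(:remote_filename_url => url,
               :dropbox_path => url,
               :event_id => @event.id)
end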