I'm having a hard time getting Paperclip uploads to Amazon S3 working via the fog gem. I'm using Heroku, so that's the way to go.
The problem is: I configured the app to send uploads to my Amazon S3 bucket, and the images are being sent and all, but the object is saved with the picture attribute set to /pictures/original/missing.png, and I don't know why.
Here is some information to help diagnose the problem:
I have a Work model, where I define a Paperclip attachment for images called 'picture':
has_attached_file :picture,
  path: "app/public/system/images/:class/:id/:attachment/:style/img.:extension",
  styles: { large: "500x500>", medium: "300x300>", thumb: "100x100>" }
and I have created an initializer file called paperclip_defaults.rb to tell Rails that I want the uploads to go to Amazon S3 (actual keys hidden):
Paperclip::Attachment.default_options.update({
  path: "app/public/system/images/:class/:id/:attachment/:style/img.:extension",
  storage: :fog,
  fog_credentials: {
    provider: 'AWS',
    aws_access_key_id: 'lovelovelove',
    aws_secret_access_key: 'dr_strangelove',
    region: 'sa-east-1'
  },
  fog_directory: 'florencioassets',
  fog_public: true,
  fog_host: "http://florencioassets.s3-website-sa-east-1.amazonaws.com/"
})
And my form has the multipart attribute set to true and all.
Here is a sample file that was uploaded but, I guess, not properly associated with the Work object: https://s3-sa-east-1.amazonaws.com/florencioassets/app/public/system/images/works/8/pictures/original/img.JPG
So, any idea why the object is being saved without the actual picture? I guess it's not an upload problem, per se.
Thank you for your time!
UPDATE:
I really wish someone could help me solve this the elegant way, but anyway, I made a quick workaround, if anyone is interested:
%w(original large medium thumb).each do |meth|
  define_method("#{meth}_picture_url") do
    "https://s3-sa-east-1.amazonaws.com/florencioassets/app/public/system/images/works/#{self.id}/pictures/#{meth}/img.JPG"
  end
end
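For context, the loop above defines instance methods such as original_picture_url and thumb_picture_url on the model, so a rough usage sketch (assuming a persisted Work with id 8, matching the sample file above) would be:
# the generated reader simply interpolates the id and style into the S3 URL
work = Work.find(8)
work.thumb_picture_url
# => "https://s3-sa-east-1.amazonaws.com/florencioassets/app/public/system/images/works/8/pictures/thumb/img.JPG"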
Rails newbie here. I am trying to upload a file to S3 via Active Storage through a background job. I set up everything as described in this article (https://keithpblog.org/post/active-storage-on-amazon-s3/).
When I run the background job, I get an error. I searched Google but couldn't find anything relevant.
Can someone please help me out?
This is how my model looks:
class ReportOutput < ApplicationRecord
  has_one_attached :output_file
end
This is how I am calling it (not sure if this is the correct way):
ReportOutput.new.output_file.attach(io: File.open('./test.xlsx'), filename: 'test.xlsx')
This is the error I get:
Job exception: #<URI::GID::MissingModelIdError: Unable to create a Global ID for ReportOutput without a model id.>
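The message itself suggests the record has no database id yet (ReportOutput.new is unsaved), and the background job serializes the record via GlobalID, which needs an id. A minimal sketch that persists the record before attaching (ReportOutput.create! here is an assumption; the model may require other attributes) would be:
# persist first so the record has an id that GlobalID can reference
report_output = ReportOutput.create!
report_output.output_file.attach(
  io: File.open('./test.xlsx'),
  filename: 'test.xlsx'
)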
Can I safely use self.image.staged_path to access a file that is uploaded to Amazon S3 using Paperclip? I noticed that I can use self.image.url (which returns https...s3....file) to read EXIF from the file on S3 in the production or development environments. I can't use the same approach in test, though.
I found the staged_path method, which allows me to read EXIF from the file in all environments (it returns something like: /var/folders/dv/zpc...-6331-fq3gju).
I couldn't find more information about this method, so the question is: does anyone have experience with this and can advise on the reliability of this approach? I'm reading the EXIF data in a before_post_process callback:
before_post_process :load_date_from_exif

def load_date_from_exif
  ...
  EXIFR::JPEG.new(self.image.staged_path).date_time
  ...
end
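For reference, the URL-based approach mentioned in the question could be sketched like this for a file that already exists on S3 (assuming open-uri is available on a recent Ruby and a version of exifr that accepts an IO):
require 'open-uri'
require 'exifr/jpeg'

# stream the image from its S3 URL and read the EXIF capture date
URI.open(self.image.url) do |io|
  EXIFR::JPEG.new(io).date_time
end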
I have an existing FineUploader implementation for small files using the Traditional (upload-to-server) version, which is working great. However, I'd also like to allow Direct S3 uploads from a different part of the application that deals with large attachments, without rewriting the existing code for small files.
Is there some way to allow both Direct S3 and Traditional uploads to work alongside each other? This is a single-page application, so I can't just load one or the other Fine Uploader version depending on which page I'm on.
I tried just including both Fine Uploader JS files, but it seemed to break my existing code.
Client-side code:
$uploadContainer = this.$('.uploader')
$uploadButton = this.$('.upload-button')

$uploadContainer.fineUploader(
  request:
    endpoint: @uploadUrl
    inputName: @inputName
    params:
      authenticity_token: $('meta[name="csrf-token"]').attr('content')
  button: $uploadButton
).on 'complete', (event, id, fileName, response) =>
  @get('controller').receiveUpload(response)
Good find, @Melinda.
Fine Uploader lives within a custom namespace so that it does not conflict with other potential global variables; this is the qq namespace (historically named). What is happening is that each custom build redeclares this namespace, along with all member objects, when you include it in the <script> tags on your page.
I've opened up an issue on our bug tracker that explains this in more technical detail, and we're looking to prioritize a fix to the customize page so that in the future no one will have this issue.
I have Cloudinary and CarrierWave set up on Heroku.
I need to upload SVG files, and I was told to set "resource_type" to "raw" (see the Cloudinary docs).
I tried setting it in my CarrierWave uploader:
process :resource_type

def resource_type
  return :resource_type => "raw"
end
And it didn't work. Can you help?
The method resource_type is used internally by the Cloudinary CarrierWave code, and overriding it like this will break the code.
If you need to override the resource_type to raw, you can use:
cloudinary_transformation :resource_type=>:raw
However, this is usually not necessary as Cloudinary detects image vs. raw correctly.
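For illustration, a sketch of where that call could sit (the uploader class name and its placement in the class body are assumptions on my part, not something the answer spells out):
class SvgUploader < CarrierWave::Uploader::Base
  include Cloudinary::CarrierWave

  # force the raw resource type so SVG files are stored as-is (placement assumed)
  cloudinary_transformation :resource_type => :raw
end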
I am trying to upload files to AWS S3 using Ruby on Rails. The code works great for smaller uploads, but for uploads greater than 3-4 MB I get a timeout error. I am uploading files to S3 with this code:
AWS::S3::S3Object.store(filename, params[:file].read, @BUCKET_NAME, :access => :private)
How can I resolve this issue for larger uploads? Can I increase the timeout interval for Ruby scripts to allow larger uploads?
Please help...
I would suggest taking advantage of the recent CORS support. I tried to detail clearly how to use it here: http://pjambet.github.com/blog/direct-upload-to-s3/
Assuming you are using the aws-s3 gem:
When you are dealing with large files, you have to use an I/O stream so that the file is read in segments. Instead, you might use something like this:
S3Object.store('roots.mpeg', open('roots.mpeg'), @BUCKET_NAME, :access => :private)
More details can be found at http://amazon.rubyforge.org/
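Applied to the code from the question, a rough sketch of the streaming variant (assuming params[:file] is a standard Rails uploaded file and @BUCKET_NAME is defined elsewhere) could look like:
# open the uploaded tempfile and pass the IO so S3Object.store reads it in segments
File.open(params[:file].tempfile.path, 'rb') do |io|
  AWS::S3::S3Object.store(filename, io, @BUCKET_NAME, :access => :private)
end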
I would suggest using HTTP streaming for long requests.