Every time I re-seed my database locally, duplicate images are being created in my Amazon S3 bucket. I think this is happening because I am not seeding correctly, but I don't know the proper way to do it. I've been using the method shown here. I'm using Rails 4, Ruby 2, paperclip 3.5.2, and aws-sdk 1.20.0.
As you can see below in my seeds.rb file, I'm trying to set the image to the URL of an image that has already been uploaded to the correct folder in my bucket. However, I think using open() here causes a new, identical file to be saved to the same folder, usually under a name like http://s3.amazonaws.com/BUCKET_NAME/restaurants/images/1/original/open-uri20131111-22904-xvzitl.?1384211739.
EDIT: so my bucket ends up storing both this file and http://s3.amazonaws.com/BUCKET_NAME/restaurants/images/1/original/NAME.jpg
Would really appreciate any help!
model
has_attached_file :image,
  :styles => { :medium => "300x300>", :thumb => "100x100>" }
seeds.rb
require 'open-uri' # open() on a URL comes from open-uri

Restaurant.create!( name: ...,
                    description: ...,
                    image: open('https://s3.amazonaws.com/<BUCKET NAME>/restaurants/images/1/original/<NAME>.jpg') )
config/initializers/paperclip.rb
Paperclip::Attachment.default_options[:storage] = :s3
Paperclip::Attachment.default_options[:s3_credentials] = {
  :access_key_id => ENV['AWS_ACCESS_KEY_ID'],
  :secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
}
Paperclip::Attachment.default_options[:bucket] = ENV['AWS_BUCKET']
Paperclip::Attachment.default_options[:url] = ":s3_path_url"
Paperclip::Attachment.default_options[:path] = "/:class/:attachment/:id/:style/:basename.:extension"
Paperclip::Attachment.default_options[:default_url] = "https://s3.amazonaws.com/<BUCKET NAME>/images/missing.png"
I'm pretty late to the party on this one, but I figure others may still be having the same problem. If you set the attachments on your models to nil before deleting the records, Paperclip will delete the files from S3.
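A minimal sketch of what that could look like at the top of a seed run, assuming the Restaurant model and :image attachment from the question:

# Clear each attachment before destroying the record so Paperclip removes the S3 objects too.
Restaurant.find_each do |restaurant|
  restaurant.image = nil # marks the attached files (including the S3 copies) for deletion
  restaurant.save!       # Paperclip deletes the files on save
  restaurant.destroy
end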
Related
I have a User model and I have generated a Paperclip attachment: profile_pic
user.rb:
has_attached_file :profile_pic,
  styles: { medium: "300x300>", thumb: "100x100>" },
  default_url: "/images/:style/missing.png"
controller:
image_base = params[:manager][:profile_pic]
if image_base != nil
  image = Paperclip.io_adapters.for(image_base)
  image.original_filename = params[:manager][:file_name]
  current_user.profile_pic = image
  current_user.errors.delete(:profile_pic)
  current_user.save
end
config/initializers/paperclip.rb:
Paperclip::DataUriAdapter.register
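For context, a sketch of the kind of payload this controller appears to expect (the params shape here is an assumption based on the code above; the base64 content is a made-up placeholder):

# Assumed request params: the client sends the image as a base64 data URI plus a file name.
# Registering Paperclip::DataUriAdapter (above) is what lets Paperclip.io_adapters.for handle the data URI string.
params = {
  manager: {
    profile_pic: "data:image/png;base64,iVBORw0KGgo...", # truncated placeholder
    file_name:   "icon_new.png"
  }
}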
It's not showing any errors, but when I try to display the image, it gives me the following error:
ActionController::RoutingError (No route matches [GET] "/system/managers/profile_pics/000/000/008/original/icon_new.png"):
When I try it in the console:
user.profile_pic.display
/system/managers/profile_pics/000/000/008/original/icon_new.png?1556082410
=> nil
The picture was saved in the public folder.
The default folder is the following, so Paperclip uploaded to the right place:
:rails_root/public/system/:class/:attachment/:id_partition/:style/:filename
But it looks like you are having trouble accessing the right location. Have you tried user.profile_pic.url? I tried on my working example; both .display and .url return the same thing, but I use an AWS S3 bucket.
user.profile_pic.url should be where to find the file on the web.
user.profile_pic.path should be where to find the file on the file system.
The problem was with production mode. I solved it by adding the following line in production.rb:
config.public_file_server.enabled = true
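For context, a sketch of where that setting lives (assuming a standard config/environments/production.rb; newer Rails versions key it off the RAILS_SERVE_STATIC_FILES environment variable rather than hard-coding true):

# config/environments/production.rb (sketch)
Rails.application.configure do
  # Serve files from public/ directly from the Rails app, so attachments
  # stored under public/system are reachable in production.
  config.public_file_server.enabled = true
end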
I'm using Paperclip 4.2.0 and fog 1.24.0, and hosting files on S3. I want to generate an expiring URL that has the "Content-Disposition" header set to "attachment".
Paperclip has an option to pass additional parameters to S3 expiring URLs, but I can't get it working when using Paperclip with Paperclip::Storage::Fog.
This fog issue gives the following solution:
file.url(60.seconds.from_now, { :query => { 'response-content-disposition' => 'attachment' } })
but it does not work for me. My Rails model ResourceDocument has has_attached_file :target. document.target.url(60.seconds.from_now, { :query => { 'response-content-disposition' => 'attachment' } }) returns the same URL as document.target.url(60.seconds.from_now), i.e. no content-disposition is included in the generated URL: "xxx.s3.amazonaws.com/uploads/resource_documents/targets/40/2014-12-01%2017:26:20%20UTC/my_file.csv"
I am using the aws-sdk gems and this works fine for me; I hope it is helpful for you.
gem 'aws-sdk-core'
gem 'aws-sdk'
and model's method:
def download_url
  s3 = AWS::S3.new
  s3_videos_bucket = 'xxxx' # bucket name goes right here
  bucket = s3.buckets[s3_videos_bucket]
  object_path = 'xxxx' # file path goes right here
  object = bucket.objects[object_path]
  object.url_for(:get, {
    expires: 60.minutes,
    response_content_disposition: 'attachment;'
  }).to_s
end
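To tie this back to the Paperclip attachment in the question, a rough sketch (download_url_for, the ENV variable, and the assumption that the attachment's :path doubles as the S3 key are all mine, not part of the original answer):

def download_url_for(document)
  s3     = AWS::S3.new
  bucket = s3.buckets[ENV['AWS_BUCKET']]          # assumed source of the bucket name
  key    = document.target.path.sub(%r{\A/}, '')  # assumes the Paperclip path is the S3 key
  bucket.objects[key].url_for(:get,
    expires: 60.minutes.to_i,
    response_content_disposition: 'attachment'
  ).to_s
end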
If you have Paperclip + AWS S3 working in your Rails 3 application and you want to zip the attachments related to a model, how do you proceed?
Note: some questions on Stack Overflow are outdated, and some Paperclip methods are gone.
Let's say we have a User and it has_many :user_attachments
GC.disable

@user = User.find(params[:user_id])

zip_filename = "User attachments - #{@user.id}.zip" # the file name
tmp_filename = "#{Rails.root}/tmp/#{zip_filename}"  # the path

Zip::ZipFile.open(tmp_filename, Zip::ZipFile::CREATE) do |zip|
  @user.user_attachments.each { |e|
    attachment = Paperclip.io_adapters.for(e.attachment) # has_attached_file :attachment (,...)
    zip.add("#{e.attachment.original_filename}", attachment.path)
  }
end

send_data(File.open(tmp_filename, "rb+").read, :type => 'application/zip', :disposition => 'attachment', :filename => zip_filename)
File.delete tmp_filename

GC.enable
GC.start
The trick is to disable the GC in order to avoid an Errno::ENOENT exception: otherwise the GC may delete the tempfile downloaded from S3 before it gets zipped.
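A gentler variant (just a sketch, not tested against every Paperclip version) is to keep references to the io adapters alive until the zip is written, so their tempfiles cannot be garbage collected mid-way, instead of disabling GC globally:

# Hold the adapters (and therefore their tempfiles) in an array until zipping is done.
adapters = @user.user_attachments.map do |e|
  [e.attachment.original_filename, Paperclip.io_adapters.for(e.attachment)]
end

Zip::ZipFile.open(tmp_filename, Zip::ZipFile::CREATE) do |zip|
  adapters.each { |name, adapter| zip.add(name, adapter.path) }
end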
Sources:
to_file broken in master?
io_adapters.for(object.attachment).path failing randomly
I'm trying to post to S3 using AWS in development, but it can't find my SSL bundle. I have it installed for OAuth, and once I tell it where it is, that works fine. I can't seem to configure AWS to see it properly, though.
OpenSSL::SSL::SSLError:
SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
Here's my config from my model:
has_attached_file :image,
  :styles => { ... },
  :storage => :s3,
  :s3_credentials => {
    :access_key_id => ACCESS_KEY,
    :secret_access_key => SECRET_KEY,
    :bucket => BUCKET,
    :ssl_ca_file => '/opt/local/share/curl/curl-ca-bundle.crt'
  }
I have attempted to add :ssl_verify_peer => false and :use_ssl => false. Neither of them works, which makes me think that I'm configuring the AWS gem in the wrong place. Any suggestions on where/how I should be doing this?
I'm using paperclip 2.4.0, and aws-sdk 1.3.8
I should also mention that the error occurs in testing with rspec.
Figured it out with help from the aws-sdk GitHub page: https://github.com/amazonwebservices/aws-sdk-for-ruby
In short, I had to create a specific config/initializers/aws.rb that looks like...
# load the libraries
require 'aws'
# log requests using the default rails logger
AWS.config(:logger => Rails.logger)
# load credentials from a file
config_path = File.expand_path(File.dirname(__FILE__)+"/../aws.yml")
AWS.config(YAML.load(File.read(config_path)))
All I had to do then was move my config/s3.yml file to config/aws.yml, and then change my model to use that yml file...
has_attached_file :image,
  :styles => { ... },
  :storage => :s3,
  :s3_credentials => "#{Rails.root.to_s}/config/aws.yml"
And that took care of it. As I suspected, setting the SSL properties via Paperclip's s3_credentials didn't work because the AWS object had already been loaded.
Just for completeness, here's the yml file...
development:
  access_key_id: ...
  secret_access_key: ...
  bucket: bucket_name
  ssl_ca_file: /opt/local/share/curl/curl-ca-bundle.crt

test:
  access_key_id: ...
  secret_access_key: ...
  bucket: bucket_name
  ssl_ca_file: /opt/local/share/curl/curl-ca-bundle.crt

production:
  access_key_id: ...
  secret_access_key: ...
  bucket: bucket_name
What's your bucket name?
If you use something like foo.domain.com as the bucket, paperclip will use that as a prefix for the host name (foo.domain.com.s3.amazonaws.com), which will cause problems with SSL verification, since Amazon's wildcard certificate only covers a single subdomain level.
Try using a bucket name that doesn't resemble a host name, like mydomain-photos.
The code for determining the url is in fog.rb:
if fog_credentials[:provider] == 'AWS'
  if @options[:fog_directory].to_s =~ Fog::AWS_BUCKET_SUBDOMAIN_RESTRICTON_REGEX
    "https://#{@options[:fog_directory]}.s3.amazonaws.com/#{path(style)}"
  else
    # directory is not a valid subdomain, so use path style for access
    "https://s3.amazonaws.com/#{@options[:fog_directory]}/#{path(style)}"
  end
else
  directory.files.new(:key => path(style)).public_url
end
and that regex is:
AWS_BUCKET_SUBDOMAIN_RESTRICTON_REGEX = /^(?:[a-z]|\d(?!\d{0,2}(?:\.\d{1,3}){3}$))(?:[a-z0-9]|\.(?![\.\-])|\-(?![\.])){1,61}[a-z0-9]$/
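A quick check of that regex shows why a dotted bucket name still takes the subdomain branch above, which is what breaks the wildcard SSL certificate (a sketch for illustration; the bucket names are made up):

regex = /^(?:[a-z]|\d(?!\d{0,2}(?:\.\d{1,3}){3}$))(?:[a-z0-9]|\.(?![\.\-])|\-(?![\.])){1,61}[a-z0-9]$/
"foo.domain.com"  =~ regex # => 0, so the URL becomes https://foo.domain.com.s3.amazonaws.com/... and *.s3.amazonaws.com no longer matches
"mydomain-photos" =~ regex # => 0 as well, but with no dots the wildcard certificate still covers it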
I'm building a Rails 3 app on Heroku, and I'm using the aws-s3 gem to manipulate files stored in an Amazon S3 EU bucket.
When I try to perform an AWS::S3::S3Object.delete filename, 'mybucketname' command, I get the following error:
AWS::S3::PermanentRedirect (The bucket you are attempting to access
must be addressed using the specified endpoint. Please send all future
requests to this endpoint.):
I have added the following to my application.rb file:
AWS::S3::Base.establish_connection!(
  :access_key_id => "myAccessKey",
  :secret_access_key => "mySecretAccessKey"
)
and the following code to my controller:
def destroy
  song = tape.songs.find(params[:id])
  AWS::S3::S3Object.delete song.filename, 'mybucket'
  song.destroy
  respond_to do |format|
    format.js { render :nothing => true }
  end
end
I found a proposed solution somewhere to add AWS_CALLING_FORMAT: SUBDOMAIN to my amazon_s3.yml file, as supposedly aws-s3 should handle EU buckets differently from US ones.
However, this did not work; the same error is received.
Could you please provide any assistance?
Thank you very much for your help.
The problem is that you need to type SUBDOMAIN as an uppercase string in the config; try this out.
You can specify a custom endpoint when initializing the connection:
AWS::S3::Base.establish_connection!(
  :access_key_id => 'myAccessKey',
  :secret_access_key => 'mySecretAccessKey',
  :server => 's3-website-us-west-1.amazonaws.com'
)
You can find the actual endpoint through the AWS console.
The full list of valid options is here: https://github.com/marcel/aws-s3/blob/master/lib/aws/s3/connection.rb#L252
VALID_OPTIONS = [:access_key_id, :secret_access_key, :server, :port, :use_ssl, :persistent, :proxy].freeze
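For the EU bucket in the question, that would look something like the following (s3-eu-west-1.amazonaws.com is an assumption on my part; check your bucket's region in the console):

AWS::S3::Base.establish_connection!(
  :access_key_id => 'myAccessKey',
  :secret_access_key => 'mySecretAccessKey',
  :server => 's3-eu-west-1.amazonaws.com' # assumed EU (Ireland) endpoint
)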
My solution is to set the constant to the actual service link at initialization time.
In config/initializers/aws_s3.rb:
AWS::S3::DEFAULT_HOST = "s3-ap-northeast-1.amazonaws.com"
AWS::S3::Base.establish_connection!(
  :access_key_id => 'access_key_id',
  :secret_access_key => 'secret_access_key'
)