Carrierwave + s3 + fog (Excon::Errors::SocketError) - ruby-on-rails-3

I'm currently getting the following error: Excon::Errors::SocketError - Broken pipe (Errno::EPIPE) when uploading images bigger than about 150 KB. Images under 150 KB upload correctly. Research indicates that others have also run into this problem, but I have yet to find a solution.
Error message
Excon::Errors::SocketError at /photos
Message Broken pipe (Errno::EPIPE)
File /Users/thmsmxwll/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/openssl/buffering.rb
Line 375
image_uploader.rb
class ImageUploader < CarrierWave::Uploader::Base
  include CarrierWave::RMagick

  storage :fog

  include CarrierWave::MimeTypes
  process :set_content_type

  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end

  version :large do
    process :resize_to_limit => [800, 600]
  end
end
carrierwave.rb
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
    :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'],
    :region                => 'us-east-1'
  }
  config.fog_directory  = 'abcd'
  config.fog_public     = true
  config.fog_attributes = { 'Cache-Control' => 'max-age=315576000' }
end

For me, the solution was to recreate the bucket in the US Standard region. Originally the bucket was in the Oregon region, and although I wasn't specifying a region in my CarrierWave settings, I could not get an upload to complete, even with very small files.
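Alternatively, if the bucket has to stay in a non-default region such as Oregon, matching the region in the fog credentials should achieve the same thing. A minimal sketch, assuming an Oregon bucket ('us-west-2' is Oregon's region identifier):

CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
    :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'],
    :region                => 'us-west-2' # must match the region the bucket actually lives in
  }
  config.fog_directory = 'abcd'
end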

I'm having the same issue. I noticed it only happens when I upload big files (around 400 KB); with a smaller one (100 KB) it works fine.

Related

Carrierwave uploads to S3 using fog/aws gem result in "No such file or directory @ rb_sysopen"

I am using Carrierwave version 1.0.0rc to upload and process files to an AWS S3 bucket. Here is my environment:
Rails 4.2.0
Ruby 2.1.1
MiniMagick 4.5.1
ImageMagick 6.9.7-0
My uploader determines whether the original image is landscape or portrait and applies processing rules accordingly. The file uploads to the AWS S3 bucket, but then I get the following error:
Errno::ENOENT in SponsorsController#create
No such file or directory @ rb_sysopen - uploads/sponsor/logo/30/Breen_Electrical_Logo.jpg
and the extracted source shows this code highlighted:
image = MiniMagick::Image.open(picture.path)
Here is my uploader code:
class LogoUploader < CarrierWave::Uploader::Base
  include CarrierWave::MiniMagick

  storage :fog

  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end

  version :landscape, if: :is_landscape?
  version :portrait, if: :is_portrait?

  version :landscape do
    process resize_to_fit: [200, 50]
  end

  version :landscape_sm, from_version: :landscape do
    process resize_to_fit: [100, 25]
  end

  version :portrait do
    process resize_to_fit: [50, 200]
  end

  version :portrait_sm, from_version: :portrait do
    process resize_to_fit: [25, 100]
  end

  private

  def is_landscape?(picture)
    image = MiniMagick::Image.open(picture.path)
    image[:width] > image[:height]
  end

  def is_portrait?(picture)
    image = MiniMagick::Image.open(picture.path)
    image[:width] < image[:height]
  end
end
The private methods open the file to compare its width and height. This worked just fine when I was storing the files in the local public folder. I am guessing that picture.path is not pointing at the S3 bucket path, so the file cannot be opened.
Here is my /config/initializers/carrierwave.rb file
require 'fog/aws'
CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws'
  config.fog_credentials = {
    provider:              'AWS',
    aws_access_key_id:     <access_key_id>,
    aws_secret_access_key: <secret_access_key>,
    region:                'us-west-2',
    path_style:            true
  }
  config.fog_directory = <bucketname>
end
I can't seem to find others having the same issue. Any ideas? Thanks in advance.
I think you'll need to refer to the file directly, instead of just the path, since it wouldn't be local. Something like:
image = MiniMagick::Image.open(picture.file)
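If that still fails, another option (a sketch, untested) is to guard the dimension check so it only runs while the upload is still on local disk. The version conditions are also evaluated when URLs are generated for already-stored files, at which point picture.path refers to the remote store path:

def is_landscape?(picture)
  # Skip the check when the file isn't local (e.g. when the condition
  # is re-evaluated for a file already stored on S3).
  return false unless picture.path && File.exist?(picture.path)
  image = MiniMagick::Image.open(picture.path)
  image[:width] > image[:height]
end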

Carrierwave Fog (S3) Heroku

I am struggling with this setup; I have read through the Carrierwave docs and am still pulling my hair out.
I'm getting an error when I try to start the server, even in dev mode.
Exiting
/Users/my-machine/.rvm/gems/ruby-1.9.3-p327/gems/carrierwave-0.7.1/lib/carrierwave/uploader/configuration.rb:73:in `eval': can't convert nil into String (TypeError)
Here is my setup.
config/initializers/carrierwave.rb
S3_CONFIG = YAML.load_file(Rails.root.join('config', 'amazon_s3.yml'))[Rails.env]

CarrierWave.configure do |config|
  config.storage = :s3
  config.s3_access_policy = :public_read
  config.s3_access_key_id = S3_CONFIG['access_key_id']
  config.s3_secret_access_key = S3_CONFIG['secret_access_key']
  config.s3_bucket = S3_CONFIG['bucket']
  config.s3_region = 'us-east-1'
end
config/amazon_s3.yml
development:
  access_key_id: xxxxxxxxxxxxxxxxxxx
  secret_access_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  bucket: dev-bucket

test:
  access_key_id: xxxxxxxxxxxxxxxxxxx
  secret_access_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  bucket: test-bucket

production:
  access_key_id: xxxxxxxxxxxxxxxxxxx
  secret_access_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  bucket: prod-bucket
app/uploaders/image_uploader.rb
class ImageUploader < CarrierWave::Uploader::Base
  # Choose what kind of storage to use for this uploader:
  # storage :file
  storage :fog

  # Override the directory where uploaded files will be stored.
  # This is a sensible default for uploaders that are meant to be mounted:
  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end

  # fix for Heroku; unfortunately, it disables caching,
  # see: https://github.com/jnicklas/carrierwave/wiki/How-to%3A-Make-Carrierwave-work-on-Heroku
  def cache_dir
    "#{Rails.root}/tmp/uploads"
  end
end
How are you establishing your S3_CONFIG values in your development environment? It seems as though they're missing.
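Another likely culprit, assuming carrierwave 0.7.x behaves the way the error suggests: :s3 is no longer a registered storage engine in that version (the built-in S3 storage was replaced by fog), so looking it up yields nil and the eval at configuration.rb:73 raises exactly this TypeError. A sketch of the initializer converted to the fog-style settings; the uploader's existing storage :fog line then matches:

S3_CONFIG = YAML.load_file(Rails.root.join('config', 'amazon_s3.yml'))[Rails.env]

CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => S3_CONFIG['access_key_id'],
    :aws_secret_access_key => S3_CONFIG['secret_access_key'],
    :region                => 'us-east-1'
  }
  config.fog_directory = S3_CONFIG['bucket'] # the bucket, per environment
  config.fog_public    = true                # replaces s3_access_policy :public_read
end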

Issues setting up S3 with carrierwave and fog on Heroku

I've been trying to upload files to Amazon S3 through a form, but I'm getting errors I do not know how to fix.
Has anybody experienced this type of error?
Heroku log errors:
response => #<Excon::Response:0x00000007294a30 @body="<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
Excon::Errors::Forbidden (Expected(200) <=> Actual(403 Forbidden))
config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
    :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
  }
  config.fog_directory = ENV['AWS_S3_BUCKET']
end
app/uploaders/file_uploader.rb
class FileUploader < CarrierWave::Uploader::Base
  storage :fog

  include CarrierWave::MimeTypes
  process :set_content_type

  def extension_white_list
    %w(jpg jpeg gif png pdf)
  end

  # Override the directory where uploaded files will be stored.
  # This is a sensible default for uploaders that are meant to be mounted:
  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end
I have set the environment variables on Heroku, and I'm using the US Standard region for the bucket.
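One common cause of SignatureDoesNotMatch on Heroku is a stray space or newline copied into a config var along with the key; the signature is computed from the secret byte for byte, so invisible whitespace is enough to break it. heroku config prints the stored values, but whitespace is easy to miss there, so stripping in the initializer is a cheap safeguard. A defensive sketch (the .strip calls are the only change from the configuration above):

CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    # A trailing newline pasted into the config var will corrupt the
    # computed signature; strip defensively.
    :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'].to_s.strip,
    :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'].to_s.strip
  }
  config.fog_directory = ENV['AWS_S3_BUCKET']
end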

Paperclip + S3 massive zipping

If you have Paperclip + AWS S3 working in your Rails 3 application and you want to zip the attachments related to a model, how do you proceed?
Note: some questions on Stack Overflow are outdated; some Paperclip methods are gone.
Let's say we have a User that :has_many => user_attachments.
GC.disable

@user = User.find(params[:user_id])

zip_filename = "User attachments - #{@user.id}.zip" # the file name
tmp_filename = "#{Rails.root}/tmp/#{zip_filename}"  # the path

Zip::ZipFile.open(tmp_filename, Zip::ZipFile::CREATE) do |zip|
  @user.user_attachments.each { |e|
    attachment = Paperclip.io_adapters.for(e.attachment) # has_attached_file :attachment (,...)
    zip.add("#{e.attachment.original_filename}", attachment.path)
  }
end

send_data(File.open(tmp_filename, "rb+").read, :type => 'application/zip', :disposition => 'attachment', :filename => zip_filename)

File.delete tmp_filename

GC.enable
GC.start
The trick is to disable the GC in order to avoid an Errno::ENOENT exception: otherwise the GC can finalize (and delete) the tempfile downloaded from S3 before it gets zipped.
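A less invasive alternative to toggling the GC globally (a sketch, untested) is to keep the adapter objects referenced until the zip has been written, so their tempfiles cannot be collected early:

adapters = @user.user_attachments.map do |e|
  # Holding the adapter in this array keeps its tempfile alive
  # until we are done zipping.
  [e.attachment.original_filename, Paperclip.io_adapters.for(e.attachment)]
end

Zip::ZipFile.open(tmp_filename, Zip::ZipFile::CREATE) do |zip|
  adapters.each { |name, adapter| zip.add(name, adapter.path) }
end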
Sources:
to_file broken in master?
io_adapters.for(object.attachment).path failing randomly

rails carrierwave / s3 signature doesn't match. Wrong private key

I'm trying to get carrierwave working with S3, and I am getting a "signature doesn't match" error. The weird thing is that it doesn't look like carrierwave is sending the right secret key. In the error output, the Authorization header is given as:
"Authorization"=>"AWS AKIAIOLWQKSJFH6E3I5Q:vKgyAw2z4c8zzqGWoxLUbw7I5oI="
which I assume is supposed to be my publickey:privatekey. The thing is, vKgyAw2z4c8zzqGWoxLUbw7I5oI= is not the private key I have stored in fog.rb. Is that right?
Any help is appreciated.
Request/Response:
request => {:chunk_size=>1048576, :connect_timeout=>60, :headers=>{"Content-Length"=>1557, "Content-Type"=>"image/gif", "x-amz-acl"=>"public-read", "Date"=>"Wed, 24 Oct 2012 12:45:17 +0000", "Authorization"=>"AWS AKIAIOLWQKSJFH6E3I5Q:vKgyAw2z4c8zzqGWoxLUbw7I5oI=", "Host"=>"s3.amazonaws.com:443"}, :instrumentor_name=>"excon", :mock=>false, :nonblock=>true, :read_timeout=>60, :retry_limit=>4, :ssl_ca_file=>"/home/tim/.rvm/gems/ruby-1.9.3-p194/gems/excon-0.16.5/data/cacert.pem", :ssl_verify_peer=>true, :write_timeout=>60, :host=>"myeasybnb.s3.amazonaws.com", :host_port=>"s3.amazonaws.com:443", :path=>"/images%2Fb1bb6639-dc08-4981-9a9b-7175093ac970.gif", :port=>"443", :query=>nil, :scheme=>"https", :body=>#<File:/home/tim/Dropbox/myeasybnb/tmp/uploads/20121024-0845-20225-1170/240x240.gif>, :expects=>200, :idempotent=>true, :method=>"PUT"}
response => #<Excon::Response:0xa7a1098 @body="<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
fog.rb:
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',                  # required
    :aws_access_key_id     => 'AKIAIOLWQKSJFH6E3I5Q', # required
    :aws_secret_access_key => '[my secret key]'       # required
  }
  config.fog_directory = 'myeasybnb' # required
  config.fog_public    = true       # optional, defaults to true
end
Uploader.rb:
# encoding: utf-8
class PhotoUploader < CarrierWave::Uploader::Base
  # Include RMagick or MiniMagick support:
  # include CarrierWave::RMagick
  include CarrierWave::MiniMagick

  # Include the Sprockets helpers for Rails 3.1+ asset pipeline compatibility:
  include Sprockets::Helpers::RailsHelper
  include Sprockets::Helpers::IsolatedHelper

  # Choose what kind of storage to use for this uploader:
  # storage :file
  storage :fog

  # Override the directory where uploaded files will be stored.
  # This is a sensible default for uploaders that are meant to be mounted:
  def store_dir
    "images"
  end

  # Provide a default URL if there hasn't been a file uploaded:
  def default_url
    asset_path("seeds/" + [version_name, "default.png"].compact.join('_'))
  end

  # Process files as they are uploaded:
  # process :scale => [200, 300]
  #
  # Create different versions of your uploaded files:
  process :convert => 'jpg'

  version :sidecarousel, :if => :is_side_carousel? do
    process :resize_to_fit => [2000, 1000]
  end

  version :thumbnail, :if => :is_thumbnail? do
    process :resize_to_fill => [240, 240]
  end

  version :linetext, :if => :is_line? do
    process :resize_to_fill => [400, 200]
  end

  version :carousel, :if => :is_carousel? do
    process :resize_to_fit => [2200, 1000]
  end

  version :phone do
    process :resize_to_fit => [900, 900]
  end

  # def scale(width, height)
  #   # do something
  # end

  def is_side_carousel?(photo)
    model.location == 1
  end

  def is_thumbnail?(photo)
    model.location == 2
  end

  def is_line?(photo)
    model.location == 3
  end

  def is_carousel?(photo)
    model.location == 4
  end

  # Add a white list of extensions which are allowed to be uploaded.
  # For images you might use something like this:
  def extension_white_list
    %w(jpg jpeg gif png)
  end

  # Override the filename of the uploaded files:
  # Avoid using model.id or version_name here, see uploader/store.rb for details.
  def filename
    @filename = file.nil? ? nil : "#{secure_token}.#{file.extension}"
  end

  def secure_token
    var = :"@#{mounted_as}_secure_token"
    model.instance_variable_get(var) or model.instance_variable_set(var, SecureRandom.uuid)
  end

  def cache_dir
    "#{Rails.root}/tmp/uploads"
  end
end
I managed to fix this. I'm not exactly sure which change did it, but there were a couple of issues. (1) I think I was getting the auth error because I was specifying a folder in my uploader. Now I set the directory in the uploader to "" and specify the folder through the fog configuration.
(2) Another error I was getting was a time mismatch. I run Mint in a virtual machine for development, and its clock had drifted from the real time; Amazon rejects requests whose timestamp is too far off. Once I set the correct time, that error went away.
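As a footnote on the original confusion: with AWS signature version 2, the Authorization header has the form AWS <access_key_id>:<signature>, where the signature is a Base64-encoded HMAC-SHA1 of a canonical description of the request, keyed by the secret key. The secret itself is never sent, which is why the value after the colon did not match the key stored in fog.rb. A minimal sketch of the computation:

require 'openssl'
require 'base64'

# AWS v2 request signing: the secret key is only used as the HMAC key;
# what travels in the header is the resulting digest, Base64-encoded.
def aws_v2_signature(secret_access_key, string_to_sign)
  hmac = OpenSSL::HMAC.digest(OpenSSL::Digest.new('sha1'), secret_access_key, string_to_sign)
  Base64.strict_encode64(hmac)
end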