Fog error: nodename nor servname provided - ruby-on-rails-3

I have the following set up for Fog, basically right out of the Fog website:
def fog_save_file_for(filename, file)
  # create a connection
  connection = Fog::Storage.new({
    :provider              => 'AWS',
    :aws_access_key_id     => '##',
    :aws_secret_access_key => '##'
  })
  directory = connection.directories.get('upload_dir')
  # list directories
  # p connection.directories
  # upload that resume
  file = directory.files.create(
    :key    => filename,
    :body   => File.open("cv_uploads/provider_cvs/" + filename),
    :public => true
  )
end
At runtime I get the following error:
getaddrinfo: nodename nor servname provided, or not known (SocketError)

This problem is probably caused by using an incorrect region. Carrierwave/Fog defaults to "us-east-1", which is not necessarily your bucket's region. To fix this, look up your region on AWS (this will not be a country name like "Ireland" but a region identifier like "eu-west-1"), then modify your config file to include the following:
:region => 'eu-west-1', # or whatever your region is
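Applied to the connection from the question, a minimal sketch (the region value is an assumption; substitute your bucket's actual region):

connection = Fog::Storage.new({
  :provider              => 'AWS',
  :aws_access_key_id     => '##',
  :aws_secret_access_key => '##',
  :region                => 'eu-west-1' # must match the region the bucket was created in
})

With a wrong region, fog can end up pointing at a hostname that does not resolve, which is exactly what the getaddrinfo SocketError indicates.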

Related

Amazon S3 Upload error SSL certificate issues

I'm trying to test Laravel Amazon S3 on my localhost but keep getting the same error:
S3Exception in WrappedHttpHandler.php line 192: Error executing
"ListObjects" on
"https://s3-us-west-2.amazonaws.com/app?prefix=appimages%2FIMG-1469840859-j.jpg%2F&max-keys=1&encoding-type=url";
AWS HTTP error: cURL error 60: SSL certificate problem: unable to get
local issuer certificate (see
http://curl.haxx.se/libcurl/c/libcurl-errors.html)
My code:
$s3 = \Storage::disk('s3');
$filePath = '/images/' . $filename;
$s3->put($filePath, file_get_contents($image), 'public');
You have to tweak the php.ini file. Download this file http://curl.haxx.se/ca/cacert.pem, set its path in php.ini like this, and then restart the server.
;;;;;;;;;;;;;;;;;;;;
; php.ini Options ;
;;;;;;;;;;;;;;;;;;;;
curl.cainfo = "C:\xampp\php\extras\ssl\cacert.pem"
The above path is common for XAMPP.
And that will fix your issue.
$s3 = new S3Client([
    'version' => 'latest',
    'scheme'  => 'http',
    'region'  => $this->config->item('s3_region'),
    'credentials' => [
        'key'    => $this->config->item('s3_access_key'),
        'secret' => $this->config->item('s3_secret_key')
    ],
]);
Add 'scheme' => 'http' for development.
I had the same problem.
The error occurs because you are working locally or on a server without a verified SSL certificate.
You just need to add the following line to config/filesystems.php:
'scheme' => 'http' // to disable SSL verification on local development
Your s3 entry in filesystems.php should look like this:
's3' => [
    'driver' => 's3',
    'key'    => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
    'url'    => env('AWS_URL'),
    'scheme' => 'http' // to disable SSL verification on local development
],
When you run it on a server that has proper SSL verification, you need to comment out the 'scheme' line.
Try it and you will see it works.
Enjoy your coding!

Getting 'undefined method batch' for Google Directory API with service account

So I'm writing a script to batch delete users from a Google Apps for Education domain. The code looks like this:
#!/usr/bin/env ruby
require 'google/api_client'
require 'csv'

service_account_email = 'XXXXXXX@developer.gserviceaccount.com'
key_file = 'key.p12'
key_secret = 'notasecret'
admin_email = 'XXX@xxx'

# Build the API Client object
client = Google::APIClient.new(
  :application_name => 'XXX',
  :application_version => '0.1'
)
key = Google::APIClient::KeyUtils.load_from_pkcs12(key_file, key_secret)
client.authorization = Signet::OAuth2::Client.new(
  :token_credential_uri => 'https://accounts.google.com/o/oauth2/token',
  :audience => 'https://accounts.google.com/o/oauth2/token',
  :scope => 'https://www.googleapis.com/auth/admin.directory.user',
  :issuer => service_account_email,
  :signing_key => key,
  :person => admin_email,
)
client.authorization.fetch_access_token!
directory = client.discovered_api('admin', 'directory_v1')

# Reads and parses CSV input into an array of hashes
# Takes a file path as an argument
def import_csv(file)
  csv = CSV.new(
    File.open(file).read,
    :headers => true,
    :header_converters => :symbol
  )
  return csv.to_a.map { |row| row.to_hash }
end

users_to_delete = import_csv('accounts.csv')
puts 'Preparing to delete users...'
users_to_delete.each_slice(1000) do |chunk|
  directory.batch do |directory|
    chunk.each do |user|
      client.execute!(
        :api_method => directory.users.delete,
        :parameters => { :userKey => user[:emailaddress].downcase }
      )
    end
  end
end
puts 'Users successfully deleted!'
When I run the script without the two outer batch blocks, the script runs perfectly (although incredibly slowly).
What do I need to change to stop getting the undefined method error on the 'batch' method of the directory API? In the examples in Google's documentation, I've noticed that they call the API differently (zoo = Google::Apis::ZooV1::ZooService.new instead of zoo = client.discovered_api('zoo', 'v1')). I don't see how that would make a difference, though.
You can achieve it this way:
client = Google::APIClient.new(
  :application_name => 'XXX',
  :application_version => '0.1'
)
directory = client.discovered_api('admin', 'directory_v1')
batch = Google::APIClient::BatchRequest.new do |result|
  puts result.data
end
batch.add(
  :api_method => directory.users.delete,
  :parameters => { :userKey => user[:emailaddress].downcase }
)
client.execute(batch)
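Plugged back into the loop from the question, a sketch (assuming the same CSV hash keys as above; note that Google caps how many calls a single batch request may contain, so keep the slice size within the documented limit):

users_to_delete.each_slice(1000) do |chunk|
  # build one BatchRequest per chunk instead of calling batch on the API object
  batch = Google::APIClient::BatchRequest.new do |result|
    puts result.data
  end
  chunk.each do |user|
    batch.add(
      :api_method => directory.users.delete,
      :parameters => { :userKey => user[:emailaddress].downcase }
    )
  end
  client.execute(batch)
end

The object returned by discovered_api does not define batch, which is presumably why the original code raised the undefined method error; batching lives in Google::APIClient::BatchRequest.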

Rails + paperclip + S3 + OSX = OpenSSL error

I'm trying to post to S3 using the AWS gem in development, but it can't find my SSL bundle. I have it installed for OAuth, and once I tell it where it is, it works fine. I can't seem to configure AWS to see it properly, though.
OpenSSL::SSL::SSLError:
SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
Here's my config from my model:
has_attached_file :image,
  :styles => { ... },
  :storage => :s3,
  :s3_credentials => {
    :access_key_id     => ACCESS_KEY,
    :secret_access_key => SECRET_KEY,
    :bucket            => BUCKET,
    :ssl_ca_file       => '/opt/local/share/curl/curl-ca-bundle.crt'
  }
I have attempted to add :ssl_verify_peer => false and :use_ssl => false. Neither works, which makes me think that I'm configuring the AWS gem in the wrong place. Any suggestions on where/how I should be doing this?
I'm using paperclip 2.4.0 and aws-sdk 1.3.8.
I should also mention that the error occurs in testing with RSpec.
Figured it out with help from the GitHub aws-sdk page: https://github.com/amazonwebservices/aws-sdk-for-ruby
In short, I had to create a specific config/initializers/aws.rb that looks like...
# load the libraries
require 'aws'
# log requests using the default rails logger
AWS.config(:logger => Rails.logger)
# load credentials from a file
config_path = File.expand_path(File.dirname(__FILE__) + "/../aws.yml")
AWS.config(YAML.load(File.read(config_path)))
All I had to do then was move my config/s3.yml file to config/aws.yml, and then change my model to use that YAML file:
has_attached_file :image,
  :styles => { ... },
  :storage => :s3,
  :s3_credentials => "#{Rails.root.to_s}/config/aws.yml"
And that took care of it. As I suspected, setting the SSL properties via paperclip's s3_credentials didn't work because the AWS object had already been loaded.
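Alternatively, a sketch of setting the bundle directly in the initializer (assuming aws-sdk 1.x, whose AWS.config accepts an :ssl_ca_file option):

# config/initializers/aws.rb
require 'aws'
# point the SDK at the CA bundle before anything opens an S3 connection
AWS.config(:ssl_ca_file => '/opt/local/share/curl/curl-ca-bundle.crt')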
Just for completeness, here's the yml file...
development:
  access_key_id: ...
  secret_access_key: ...
  bucket: bucket_name
  ssl_ca_file: /opt/local/share/curl/curl-ca-bundle.crt
test:
  access_key_id: ...
  secret_access_key: ...
  bucket: bucket_name
  ssl_ca_file: /opt/local/share/curl/curl-ca-bundle.crt
production:
  access_key_id: ...
  secret_access_key: ...
  bucket: bucket_name
What's your bucket name?
If you use something like foo.domain.com as the bucket, paperclip will use that as a prefix for the host name (foo.domain.com.s3.amazonaws.com), which will cause problems with SSL verification.
Try using a bucket name that doesn't resemble a host name, like mydomain-photos
The code that determines the URL is in fog.rb:
if fog_credentials[:provider] == 'AWS'
  if @options[:fog_directory].to_s =~ Fog::AWS_BUCKET_SUBDOMAIN_RESTRICTON_REGEX
    "https://#{@options[:fog_directory]}.s3.amazonaws.com/#{path(style)}"
  else
    # directory is not a valid subdomain, so use path style for access
    "https://s3.amazonaws.com/#{@options[:fog_directory]}/#{path(style)}"
  end
else
  directory.files.new(:key => path(style)).public_url
end
and that regex is:
AWS_BUCKET_SUBDOMAIN_RESTRICTON_REGEX = /^(?:[a-z]|\d(?!\d{0,2}(?:\.\d{1,3}){3}$))(?:[a-z0-9]|\.(?![\.\-])|\-(?![\.])){1,61}[a-z0-9]$/
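A quick irb check (a sketch; the regex is copied verbatim from fog) shows that a dotted bucket name still passes this test, so fog happily builds a subdomain-style host:

regex = /^(?:[a-z]|\d(?!\d{0,2}(?:\.\d{1,3}){3}$))(?:[a-z0-9]|\.(?![\.\-])|\-(?![\.])){1,61}[a-z0-9]$/
"foo.domain.com" =~ regex   # => 0, so the host becomes foo.domain.com.s3.amazonaws.com
"mydomain-photos" =~ regex  # => 0, host mydomain-photos.s3.amazonaws.com

The wildcard certificate *.s3.amazonaws.com only covers a single label, so it is the extra dots in foo.domain.com.s3.amazonaws.com that make verification fail.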

rails aws-s3 delete file throws AWS::S3::PermanentRedirect error - EU bucket problem?

I'm building a Rails 3 app on Heroku, and I'm using the aws-s3 gem to manipulate files stored in an Amazon S3 EU bucket.
When I try to perform an AWS::S3::S3Object.delete filename, 'mybucketname' command, I get the following error:
AWS::S3::PermanentRedirect (The bucket you are attempting to access
must be addressed using the specified endpoint. Please send all future
requests to this endpoint.):
I have added the following to my application.rb file:
AWS::S3::Base.establish_connection!(
  :access_key_id     => "myAccessKey",
  :secret_access_key => "mySecretAccessKey"
)
and the following code to my controller:
def destroy
  song = tape.songs.find(params[:id])
  AWS::S3::S3Object.delete song.filename, 'mybucket'
  song.destroy
  respond_to do |format|
    format.js { render :nothing => true }
  end
end
I found a proposed solution somewhere to add AWS_CALLING_FORMAT: SUBDOMAIN to my amazon_s3.yml file, as aws-s3 supposedly handles EU buckets differently from US ones.
However, this did not work; the same error is received.
Could you please provide any assistance?
Thank you very much for your help.
The problem is that you need to type SUBDOMAIN as an uppercase string in the config; try this out.
You can specify a custom endpoint when you initialize the connection:
AWS::S3::Base.establish_connection!(
  :access_key_id     => 'myAccessKey',
  :secret_access_key => 'mySecretAccessKey',
  :server            => 's3-website-us-west-1.amazonaws.com'
)
You can find the actual endpoint through the AWS console.
The full list of valid options is here: https://github.com/marcel/aws-s3/blob/master/lib/aws/s3/connection.rb#L252
VALID_OPTIONS = [:access_key_id, :secret_access_key, :server, :port, :use_ssl, :persistent, :proxy].freeze
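For the EU bucket in the question, that would look something like this (the endpoint is an assumption; check your bucket's properties in the console for the exact host):

AWS::S3::Base.establish_connection!(
  :access_key_id     => 'myAccessKey',
  :secret_access_key => 'mySecretAccessKey',
  :server            => 's3-eu-west-1.amazonaws.com' # assumed endpoint for an eu-west-1 bucket
)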
My solution is to set the constant to the actual service endpoint at initialization time.
in config/initializers/aws_s3.rb
AWS::S3::DEFAULT_HOST = "s3-ap-northeast-1.amazonaws.com"
AWS::S3::Base.establish_connection!(
  :access_key_id     => 'access_key_id',
  :secret_access_key => 'secret_access_key'
)

Carrierwave and s3 with heroku error undefined method `fog_credentials='

I'm trying to set up carrierwave and S3 with Heroku. I'm following the carrierwave docs exactly: https://github.com/jnicklas/carrierwave
I've set up a bucket named testbucket in AWS, then I installed fog and created a new initializer with this inside:
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',                       # required
    :aws_access_key_id     => 'my_key_inside_here',        # required
    :aws_secret_access_key => 'my_secret_access_key_here', # required
    :region                => 'eu-west-1'                  # optional, defaults to 'us-east-1'
  }
  config.fog_directory = 'testbucket' # required
end
Then inside my image_uploader.rb I set
storage :fog
Is there something else I am missing? Thanks for any help.
If you're using carrierwave 0.5.2, you have to look in the docs within the gem; they are different from what you see on GitHub. Specifically, check out this file in the gem: lib/carrierwave/storage/s3.rb
Also, set storage to :s3, not :fog.
You'll see this section:
# CarrierWave.configure do |config|
# config.s3_access_key_id = "xxxxxx"
# config.s3_secret_access_key = "xxxxxx"
# config.s3_bucket = "my_bucket_name"
# end
#
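With that configuration in place, the uploader would then use the built-in s3 storage (a minimal sketch, assuming carrierwave 0.5.2):

# app/uploaders/image_uploader.rb
class ImageUploader < CarrierWave::Uploader::Base
  storage :s3 # on 0.5.2 use :s3 here, not :fog
end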