I've been following the excellent RailsCasts episode by Ryan Bates on uploading files to S3 (Episode 383). Things work fine, but...
I'd like to use the images' HTTP URLs instead of HTTPS.
I tried looking in the CarrierWave documentation, but could not find whether this was an option.
I also tried to see if this was an S3 setting, but by default S3 seems to support both HTTP and HTTPS.
Any help would be appreciated.
Thank you.
You can do this by setting the asset_host config parameter:
CarrierWave.configure do |config|
  ...
  config.fog_directory = 'yourbucket'
  # Forcing use of HTTP
  config.asset_host = "http://#{config.fog_directory}.s3.amazonaws.com"
  ...
end
If your bucket is in a region other than US Standard you might need to add that part to the host as well.
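For example, a hedged sketch for a bucket hosted in eu-west-1 (the bucket name and region here are placeholders):

  config.asset_host = "http://yourbucket.s3-eu-west-1.amazonaws.com"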
CarrierWave 0.9.0 added a configuration param fog_use_ssl_for_aws to disable SSL for public_url.
CarrierWave.configure do |config|
  ...
  config.fog_use_ssl_for_aws = false
  ...
end
Not sure if this is what you are looking for, but if you want to allow users to download files from your S3 bucket, you will need to grant everyone permission to list and download files.
That can be done in your S3 bucket configuration panel, under the "Permissions" tab. By default, S3 files are private, so you would need an authenticated URL to access them.
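If you do want to keep the objects private instead, an authenticated (presigned) URL can be generated server-side. A minimal sketch using the aws-sdk-s3 gem (v3); the bucket name, key and region are placeholders:

require 'aws-sdk-s3'

# Build a time-limited GET URL for a private object
s3  = Aws::S3::Resource.new(region: 'us-east-1')
obj = s3.bucket('yourbucket').object('uploads/whatever.jpg')
url = obj.presigned_url(:get, expires_in: 3600) # valid for one hour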
Related
I deployed a pretty standard Rails 5 app with AWS Elastic Beanstalk.
My /robots.txt is not reachable and requests to its URL return a 404 error.
I put it in the /public folder along with 404.html, 422.html and 500.html pages, which are correctly served by nginx.
Any clue about what might be wrong? What shall I check?
EB CLI 3.14.6 (Python 2.7.1)
Ruby 2.4.3 / Rails 5.1.4 / Puma (gem) 3.7
Looks like a very similar question was asked 4 years ago on the official AWS forum: https://forums.aws.amazon.com/thread.jspa?threadID=150904
Only 4 years later did a brave guy from AWS step in with a reply! Here is the quoted reply:
Hello hello! I'm Chris, the new Ruby platforms person at Elastic
Beanstalk. Visiting this thread today, it looks like there's been a
lot of pain (and also confusion!) from Beanstalk's Ruby+Puma's
handling of static files.
Quick summary: When this thread was created (in 2014), Beanstalk was
essentially using the default Nginx that comes with Amazon Linux, with
only some logging modifications to support the health monitoring. That
spawned this thread, as static files are generally expected to be
served by the web server when one is present.
So, the folks here went and fixed the /assets folder. Great!
Unfortunately, there was a misunderstanding with the request to fix
serving the /public folder - Beanstalk's Puma platform instead serves
things in '/public' from '/public', not from '/'. This is definitely
an issue, so here's some workarounds:
Workaround 1: Turning on serve static assets. Yes, this wastes some
application threads here or there, but if your use case is only
robots.txt and favicon.ico, you're only robbing a couple of appserver
threads. I'd pick this one unless I was running my application servers
hot.
Workaround 2: Write an .ebextension to modify the Nginx configuration
to serve /public at /. I'm in the process of writing one, so I'll tack
it as a reply to this when I've given it the thought it deserves. Some
of the current ones may serve your app's code, so double check the
configuration if you've already done this workaround.
I've created a tracking issue for the team with this level of detail,
so we'll work to get this corrected. Thank you all for your feedback -
we'd love to serve you and your apps better.
Since then, no further replies; if anybody knows the "AWS-approved way" to edit the nginx config with .ebextensions, please post it here! :)
In AWS EB with Puma, static files under the public folder are served under the /public/ URL, while web crawlers expect the file to be available at /robots.txt.
I struggled trying to implement routing to these files and settled instead on a more 'Rails' way of implementing this.
1) config/routes.rb
get "/robots.txt", to: "robots#show"
2) app/controllers/robots_controller.rb
class RobotsController < ApplicationController
  def show
    render "show", layout: false, content_type: "text/plain"
  end
end
3) app/views/robots/show.erb
User-agent: *
Disallow: /
The above link to the AWS forums is erroring with a 400 right now, so here's how I fixed this issue, on Ruby 2.7 running on the Amazon Linux 2 platform:
Static files in a sub-directory of /public:
Create a file under the .ebextensions folder called static-files.conf. Its content should look similar to:
option_settings:
  aws:elasticbeanstalk:environment:proxy:staticfiles:
    /w3c: public/w3c
    /images: public/images
This will ensure that all requests to domain.com/images and domain.com/w3c are served from the appropriate /public sub-directory.
Static files in the top level of the /public directory:
For top-level files like robots.txt or sitemap.xml, add an appropriate entry to routes.rb to serve the static content directly:
get '/robots.txt', to: proc { |env| [200, { 'Content-Type' => 'text/plain' }, [File.read(Rails.root.join('public', 'robots.txt'))]] }
get '/sitemap.xml', to: proc { |env| [200, { 'Content-Type' => 'application/xml' }, [File.read(Rails.root.join('public', 'sitemap.xml'))]] }
Ensure production.rb has static files config set properly:
config.serve_static_files = false
This last part is most important.
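Note that serve_static_files was deprecated in Rails 5 in favour of the public_file_server settings, so on newer apps the equivalent line would be (a hedged sketch; check your Rails version):

  # Rails 5+ equivalent of config.serve_static_files = false
  config.public_file_server.enabled = false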
I am using the Rails 3.2 asset pipeline to serve my assets (images, JavaScript, CSS).
I have added Paperclip for photo uploads. Paperclip by default stores files in public/system.
When I use the URL generated by Paperclip, which is something like
/system/users/avatar/000/000/thumb/whatever.jpg
It gives me a "no route" error. The file is there at the above location, but I think it may be an issue with the asset pipeline.
Any ideas what might be going wrong?
Just like user451893 said, you should configure your web server (nginx, Apache, etc.) to deliver all static assets!
If you don't, then you need to turn on static asset serving in Rails:
config.serve_static_assets = true
Have a look at this issue for more details: https://github.com/thoughtbot/paperclip/issues/667
I have an application on Heroku that uses the Carrierwave gem to upload images to S3.
I have set the S3 configuration in an initializer called carrierwave.rb:
CarrierWave.configure do |config|
  config.s3_access_key_id = 'XXXXXXXXXXXXXXXXXXXX'
  config.s3_secret_access_key = 'XXXXXXXXXXXXXXXXX'
  config.s3_bucket = 'XXXXX'
  config.storage = :s3
end
This works fine in development on my local machine; however, once I deploy to Heroku I get the following error:
A Errno::EACCES occurred in events#update:
Permission denied - /app/public/uploads
/usr/ruby1.8.7/lib/ruby/1.8/fileutils.rb:243:in `mkdir'
Obviously it's trying to write to the Heroku filesystem, which is read-only, and it's not picking up my S3 settings.
Does anyone know how I can get Heroku to send my files to S3?
From the CarrierWave wiki:
Heroku has a read-only filesystem, so uploads must be stored on S3 and cannot be cached in the public directory.
You can work around this by setting the cache_dir in your Uploader classes to the tmp directory:
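The override the wiki describes looks roughly like this (a sketch; the uploader class name is illustrative):

class AvatarUploader < CarrierWave::Uploader::Base
  # Cache uploads under tmp/ (writable on Heroku) instead of public/uploads
  def cache_dir
    "#{Rails.root}/tmp/uploads"
  end
end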
Check out https://github.com/jnicklas/carrierwave/wiki and scroll to the bottom section labeled "CarrierWave on Heroku" to see how they set this up. Hope this helps someone.
Have you looked at this demo app?
In particular, the uploader class here.
How do I configure Plupload properly so that it will upload files directly to Amazon S3?
In addition to conditions for bucket, key, and acl, the policy document must contain rules for name, Filename, and success_action_status. For instance:
["starts-with", "$name", ""],
["starts-with", "$Filename", ""],
["starts-with", "$success_action_status", ""],
Filename is a field that the Flash backend sends, but the HTML5 backend does not.
The multipart setting must be True, but that is the default these days.
The multipart_params setting must be a dictionary with the following fields:
key
AWSAccessKeyId
acl = 'private'
policy
signature
success_action_status = '201'
Setting success_action_status to 201 causes S3 to return an XML document with HTTP status code 201. This is necessary to make the flash backend work. (The flash upload stalls when the response is empty and the code is 200 or 204. It results in an I/O error if the response is a redirect.)
S3 does not understand chunks, so remove the chunk_size config option.
unique_names can be either True or False, both work.
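For reference, here's a hedged Ruby sketch of how the policy and signature values handed to multipart_params might be generated server-side, using the classic signature-v2 browser POST scheme; the bucket name and credential lookup are placeholders:

require 'base64'
require 'json'
require 'openssl'

bucket     = 'yourbucket'
access_key = ENV['AWS_ACCESS_KEY_ID']
secret_key = ENV['AWS_SECRET_ACCESS_KEY']

# Policy document containing the extra rules Plupload's runtimes need
policy_doc = {
  expiration: (Time.now.utc + 3600).strftime('%Y-%m-%dT%H:%M:%SZ'),
  conditions: [
    { bucket: bucket },
    { acl: 'private' },
    ['starts-with', '$key', ''],
    ['starts-with', '$name', ''],
    ['starts-with', '$Filename', ''],
    ['starts-with', '$success_action_status', '']
  ]
}

policy    = Base64.strict_encode64(policy_doc.to_json)
signature = Base64.strict_encode64(OpenSSL::HMAC.digest('sha1', secret_key, policy))

multipart_params = {
  'key'                   => '${filename}',
  'AWSAccessKeyId'        => access_key,
  'acl'                   => 'private',
  'policy'                => policy,
  'signature'             => signature,
  'success_action_status' => '201'
}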
The latest Plupload release includes an illustrative example that shows nicely how one might use Plupload to upload files to Amazon S3, using the Flash and Silverlight runtimes.
Here is the fresh write-up: Upload to Amazon S3
The official Plupload tutorial, much more detailed than the answers here: https://github.com/moxiecode/plupload/wiki/Upload-to-Amazon-S3
If you are using Rails 3, please check out my sample projects:
Sample project using Rails 3, Flash and MooTools-based FancyUploader to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-FancyUploader
Sample project using Rails 3, Flash/Silverlight/GoogleGears/BrowserPlus and jQuery-based Plupload to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-Plupload
I want to note: don't forget to upload crossdomain.xml to your S3 host, and if you have a success_action_redirect URL, you need to have a crossdomain.xml file on that domain too. I spent a day fighting with that problem before finally finding out what was wrong. So next time, think about how Flash works under the hood.
Hope I saved someone some time.
Is there a way to make S3 default to an index.html page? E.g.: My bucket object listing:
/index.html
/favicon.ico
/images/logo.gif
A call to www.example.com/index.html works great! But if one were to call www.example.com/ we'd either get a 403 or a REST object listing XML document depending on how bucket-level ACL was configured.
So, the question: Is there a way to have index.html functionality with content hosted on S3?
For people still struggling with this after 3 years, let me add some important information:
The URL for your website (and to which you have to point your DNS) is not
<bucket_name>.s3-us-west-2.amazonaws.com, but
<bucket_name>.s3-website-us-west-2.amazonaws.com.
If you use the first, it will not work as intended, no matter how you configure the Index document.
For a specific example, consider:
http://www-example-com.s3.amazonaws.com/index.html works.
http://www-example-com.s3.amazonaws.com/ fails with AccessDenied.
http://www-example-com.s3-website-us-west-2.amazonaws.com/ works!
To get your true website address, go to your S3 Management Console, select the target bucket, then Properties, then Static Website Hosting. It will show the website URL that will work.
Amazon S3 now supports Index Documents
The index document for a bucket can be set to something like index.html. When accessing the root of the site, or a sub-directory containing a document of that name, that document is returned.
It is extremely easy to do using the aws cli:
aws s3 website s3://$MY_BUCKET_NAME --index-document index.html
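Or, if you'd rather do it from Ruby, here's a hedged sketch with the aws-sdk-s3 gem (the bucket name, region and error document are placeholders):

require 'aws-sdk-s3'

# Equivalent of the CLI command above, using the Ruby SDK
Aws::S3::Client.new(region: 'us-west-2').put_bucket_website(
  bucket: 'www-example-com',
  website_configuration: {
    index_document: { suffix: 'index.html' },
    error_document: { key: 'error.html' }
  }
)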
You can also set the index document from the AWS Management Console, under the bucket's Properties in the Static Website Hosting section.
You can easily solve it with an Amazon CloudFront distribution: in CloudFront you can set the default root object. You can download the manager here: m1.mycloudbuddy.com/downloads.html.
Since it's been a long time since this question was asked and Amazon S3 has changed its interface, I would like to answer with updated steps.
We need to enable 'static website hosting' for the S3 bucket to serve it as a website.
- Go to Properties -> click on Static website hosting -> select 'Use this bucket to host a website'.
- Enter the index document (index.html by default), error document and redirection rules, if any.
As answered in this answer on Stack Overflow, the web hosting link would be: http://bucket-name.s3-website-region.amazonaws.com
I would suggest reading this thread from 2006 (on the Amazon Web Services developer connection). It seems there's no easy solution to this.
Yes, using AWS CloudFront lets you assign a default file.
You can do it using DNS web forwards and cloaking. Just forward to the complete path of the index.html:
www.example.com forwards to http://www.example.com.s3.amazonaws.com and make sure you cloak the output.