I've recently implemented caching with the Memcached Heroku add-on, using the Dalli gem, for my Rails application. What I'm finding, though, is that when deployed to Heroku it also caches all my static assets, including images, which quickly blows up my memcached size. A sample of the Heroku logs looks like this:
cache: [GET /assets/application.css] fresh
app[web.1]: cache: [GET /assets/sign-in-twitter.gif] fresh
app[web.1]: cache: [GET /assets/ajax-loader.gif] fresh
app[web.1]: cache: [GET /assets/sign-in-facebook.gif] fresh
Specifically for index pages, the cache size increases by about 5MB for every different request. Is this behavior configurable? Can I configure memcached to cache only my fragment caches and not proactively cache every image on every page?
Using the Dalli gem, in config/environments/production.rb:
config.action_dispatch.rack_cache = {
  :metastore    => Dalli::Client.new,
  :entitystore  => 'file:tmp/cache/rack/body',
  :allow_reload => false
}
The above configuration stores the metastore info in memcached but writes the actual bodies of the assets to the file system.
In config/application.rb:
if !Rails.env.development? && !Rails.env.test?
  config.middleware.insert_before Rack::Cache, Rack::Static, urls: [config.assets.prefix], root: 'public'
end
Rack::Static usage:
The Rack::Static middleware serves URLs with a matching prefix from a root directory. Here I'm giving config.assets.prefix as my URL prefix, which defaults to '/assets'. This serves any assets directly out of the public/assets directory instead of hitting Rails::Cache. Note that this only works if you run rake assets:precompile in production; otherwise there will be no precompiled assets in public/assets.
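As for caching only fragments: fragment caches go through Rails.cache, not Rack::Cache, so you can point Rails.cache at memcached while keeping the Rack::Cache entity store on disk as above. A minimal sketch, assuming the Dalli gem's :dalli_store cache store and a MEMCACHE_SERVERS environment variable (the variable name depends on which Heroku memcached add-on you use):
# config/environments/production.rb
# Fragment caches (Rails.cache) go to memcached via Dalli;
# Rack::Cache keeps asset bodies on the file system per the config above.
config.cache_store = :dalli_store, ENV['MEMCACHE_SERVERS']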
Related
I am using the Rocket.Chat REST API and everything works well, but when I upload a file through the REST API it fails with error 413 Request Entity Too Large, even though uploads of any size succeed from the website.
After testing various scenarios, I concluded that files of 1 MB or less upload successfully, while anything larger returns 413 Request Entity Too Large.
I upload the file from Postman using this URL:
https://rocket.chat.url/api/v1/rooms.upload/RoomId
Headers:
Content-Type:application/x-www-form-urlencoded
X-Auth-Token:User-Token
X-User-Id:User-Id
Form-Data:
file - selected file
The HTML error result:
<html>
<head><title>413 Request Entity Too Large</title></head>
<body bgcolor="white">
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>nginx/1.10.3 (Ubuntu)</center>
</body>
</html>
When the file uploads successfully, the response is:
{
  "success": true
}
After testing many scenarios and searching many URLs, I found the solution. I am running Rocket.Chat in Docker, and I appended one line to the nginx config file.
Solution:
Log in to the Ubuntu server.
Open the config with sudo nano /etc/nginx/nginx.conf.
Add or update client_max_body_size in the http block:
http {
    client_max_body_size 8M;  # use a limit that covers your uploads instead of 8M
    # other lines...
}
Restart nginx with service nginx restart or systemctl restart nginx.
Upload the larger file again; it now succeeds.
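For reference, a curl sketch of the same upload (the token, user id, room id, and file path are placeholders taken from the request above; curl sets the multipart Content-Type itself):
curl -X POST \
  -H "X-Auth-Token: User-Token" \
  -H "X-User-Id: User-Id" \
  -F "file=@/path/to/large-file.pdf" \
  https://rocket.chat.url/api/v1/rooms.upload/RoomId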
I have an issue with my OpenCart install. I am currently using 2.0.3.1 on a dedicated server running Plesk.
I installed a Let's Encrypt SSL certificate. The website is running great and I have no issues with OpenCart requesting insecure pages, except when I click on a filter on the category page: it just hangs.
This is the error I get via Chrome developer tools. I apologize for having to blur out the domain; the site is for a customer and I can't release it.
Here is my catalog config.php:
// HTTP
define('HTTP_SERVER', 'http://www.xxxxx.com/');
// HTTPS
define('HTTPS_SERVER', 'https://www.xxxxx.com/');
// DIR
define('DIR_APPLICATION', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/catalog/');
define('DIR_SYSTEM', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/system/');
define('DIR_LANGUAGE', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/catalog/language/');
define('DIR_TEMPLATE', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/catalog/view/theme/');
define('DIR_CONFIG', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/system/config/');
define('DIR_IMAGE', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/image/');
define('DIR_CACHE', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/system/cache/');
define('DIR_DOWNLOAD', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/system/download/');
define('DIR_UPLOAD', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/system/upload/');
define('DIR_MODIFICATION', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/system/modification/');
define('DIR_LOGS', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/system/logs/');
// DB
define('DB_DRIVER', 'mysql');
define('DB_HOSTNAME', 'localhost');
define('DB_USERNAME', 'xxx');
define('DB_PASSWORD', 'xxx');
define('DB_DATABASE', 'xxx');
define('DB_PORT', '3306');
define('DB_PREFIX', 'oc_');
Here is the admin config.php:
// HTTP
define('HTTP_SERVER', 'http://www.xxxxx.com/admin/');
define('HTTP_CATALOG', 'http://www.xxxxx.com/');
// HTTPS
define('HTTPS_SERVER', 'https://www.xxxxx.com/admin/');
define('HTTPS_CATALOG', 'https://www.xxxxx.com/');
// DIR
define('DIR_APPLICATION', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/admin/');
define('DIR_SYSTEM', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/system/');
define('DIR_LANGUAGE', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/admin/language/');
define('DIR_TEMPLATE', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/admin/view/template/');
define('DIR_CONFIG', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/system/config/');
define('DIR_IMAGE', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/image/');
define('DIR_CACHE', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/system/cache/');
define('DIR_DOWNLOAD', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/system/download/');
define('DIR_UPLOAD', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/system/upload/');
define('DIR_LOGS', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/system/logs/');
define('DIR_MODIFICATION', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/system/modification/');
define('DIR_CATALOG', '/var/www/vhosts/xxxxx.com/httpdocs/xxxxx/catalog/');
// DB
define('DB_DRIVER', 'mysql');
define('DB_HOSTNAME', 'localhost');
define('DB_USERNAME', 'xxx');
define('DB_PASSWORD', 'xxx');
define('DB_DATABASE', 'xxx');
define('DB_PORT', '3306');
define('DB_PREFIX', 'oc_');
I do not have an .htaccess file set up; with the Plesk install I haven't needed one.
Change your HTTP_SERVER for admin:
define('HTTP_SERVER', 'https://www.xxxxx.com/admin/');
There's no reason to serve anything from admin over HTTP.
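For clarity, a sketch of the admin URL block served entirely over HTTPS (extending HTTP_CATALOG the same way is my assumption, following the same reasoning):
// admin config.php
define('HTTP_SERVER', 'https://www.xxxxx.com/admin/');
define('HTTP_CATALOG', 'https://www.xxxxx.com/');
define('HTTPS_SERVER', 'https://www.xxxxx.com/admin/');
define('HTTPS_CATALOG', 'https://www.xxxxx.com/');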
It should be fixed in this commit: Category links (canonical, prev, next) points to https if it is enabled
If I upload an artifact (~23MB file size) in Apache Archiva 2.2.1, I get an upload error in the UI:
fileupload.errors.Request Entity Too Large
This also occurs with mvn deploy:
Return code is: 413, ReasonPhrase: Request Entity Too Large
Archiva is running on Tomcat 9.0.0.M21, deployed as a WAR.
So, how can I increase the upload file size limit in Apache Archiva? I can't find any appropriate property to set in archiva.xml.
So I finally resolved this problem. It was not an Archiva setting: increasing the upload limit in nginx.conf to client_max_body_size 64M; fixed it (Tomcat is running behind an SSL proxy).
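A sketch of the nginx change, assuming nginx is the SSL proxy in front of Tomcat (pick a value above your largest artifact):
# /etc/nginx/nginx.conf
http {
    client_max_body_size 64M;  # must exceed the ~23MB artifact size
    # other lines...
}
Restart nginx afterwards (service nginx restart or systemctl restart nginx).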
Thanks for reading.
I just started with RefineryCMS, so sorry for the newbie question. It runs fine locally and, luckily, deployed fine on the Heroku Cedar stack. I created a page called Home, and /pages/home responds fine.
routes.rb
root :to => 'pages#home'
and it works on localhost:3000, but on Heroku it gives an error.
The app is here:
http://refkocedar.herokuapp.com/home works
http://refkocedar.herokuapp.com/ does not work
How do I set the Home page as root on Heroku? Thanks for the help!
$ heroku logs
2012-04-03T02:19:57+00:00 heroku[router]: GET refkocedar.herokuapp.com/assets/application-ddce3db0fc667014faf95d85d24c71d4.js dyno=web.1 queue=0 wait=0ms service=4ms status=304 bytes=0
2012-04-03T02:19:58+00:00 heroku[router]: GET refkocedar.herokuapp.com/favicon.ico dyno=web.1 queue=0 wait=0ms service=4ms status=304 bytes=0
2012-04-03T02:19:58+00:00 app[web.1]: cache: [GET /favicon.ico] miss
2012-04-03T02:20:04+00:00 app[web.1]:
2012-04-03T02:20:04+00:00 app[web.1]:
2012-04-03T02:20:04+00:00 app[web.1]: Started GET "/" for 80.98.142.244 at 2012-04-03 02:20:04 +0000
2012-04-03T02:20:04+00:00 app[web.1]: cache: [GET /] miss
2012-04-03T02:20:04+00:00 app[web.1]: cache: [GET /] miss
2012-04-03T02:20:04+00:00 app[web.1]: cache: [GET /] miss
I was trying Refinery recently in my local workspace and had a similar issue. I'm not sure what is different on Heroku, as I didn't try anything on it. This solution worked for me:
http://groups.google.com/group/refinery-cms/browse_thread/thread/504b72ec2f1575d5
On the Refinery admin page you have an option called "Forward this page" under Advanced Options. Set it to /.
Here I explain step by step how to set your Home page as the root path (localhost:3000).
1. Go to http://localhost:3000/refinery/login and log in with your username and password.
2. Click Pages.
3. Click the edit link of the page you want to show when localhost:3000 loads.
4. Click Advanced Options.
5. Enter / in the "Forward this page to another website or page" text box, then click Save.
6. Your home page will now show at localhost:3000.
What Sonu linked to from Google Groups is correct. You need to add the following to your routes.rb:
root :to => 'pages#home'
Then change the setting on your home page (under Advanced Options) that says "Forward this page to another website or page" and have it forward to /.
This worked for me.
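Putting the route in context, a minimal routes.rb sketch in the Rails 3 syntax the question uses (the application module name is a placeholder):
# config/routes.rb
Refkocedar::Application.routes.draw do
  root :to => 'pages#home'
end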
Very short answer: under Advanced Options, set "Forward this page to another website or page" to /. It doesn't sound like it should work, but it does.
I'd say this is almost certainly an error in your routes.rb file. I was working through the Rails Tutorial by Michael Hartl to set up my new app and ran into this error over and over again. Check to make sure that Heroku knows the correct root path, e.g. root 'application#hello'.
I am trying to upload data to an Amazon S3 bucket.
I am using the aws-s3 gem for this purpose.
I am giving the right access key and secret key, but I am still not able to execute S3Object.store/Bucket calls, even though the connection is established. They return the error "AWS::S3::SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method."
Interestingly, I am running another Rails app with the Paperclip plugin to upload images to S3, and that works like a charm with the same access key and secret key!
I have tried some links mentioning the same problem, but with no luck:
[ https://forums.aws.amazon.com/thread.jspa?threadID=16020&tstart=0 ]
Any pointers/help/suggestions would be great. :)
I just got this problem because I did not supply the correct region in the request.
I am using fog and CarrierWave as per the Railscast here, and I had to configure the region in the CarrierWave initializer:
CarrierWave.configure do |config|
  config.fog_credentials = {
    provider:              'AWS',          # required
    aws_access_key_id:     '[redacted]',   # required unless using use_iam_profile
    aws_secret_access_key: '[redacted]',   # required unless using use_iam_profile
    # use_iam_profile:     false,          # optional, defaults to false
    region:                'eu-central-1', # optional, defaults to 'us-east-1'
    # host:                's3.example.com',              # optional, defaults to nil
    # endpoint:            'https://s3.example.com:8080'  # optional, defaults to nil
  }
  config.fog_directory = 'xxx' # required
  # config.fog_public = false # optional, defaults to true
  # config.fog_attributes = { cache_control: "public, max-age=#{365.days.to_i}" } # optional, defaults to {}
end
Interestingly, fog was redirected by Amazon to the correct endpoint with the correct region; however, the redirected request still failed authentication, which may be a problem with fog in this situation. Fog did give a nice warning in the log:
[fog][WARNING] fog: followed redirect to calm4-files.s3.amazonaws.com, connecting to the matching region will be more performant
To be more accurate, though, the warning should say not just that it will be more performant, but that it will actually work at all.
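For anyone hitting the same signature error outside CarrierWave, a minimal sketch of passing the region to a raw fog connection (the keys and region here are placeholders; the region must match where the bucket actually lives):
require 'fog'

# Connect fog directly to the bucket's region so requests are signed correctly.
storage = Fog::Storage.new(
  provider:              'AWS',
  aws_access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
  aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
  region:                'eu-central-1' # the bucket's region
)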