I've just started using the Dalli gem for memory caching.
I'm wondering: do I need to create a Dalli::Client instance in every request when I want to grab some data from the cache, or can I declare a singleton that I can reuse?
In your config/environments/production.rb:
config.cache_store = :dalli_store, 'cache-1.example.com',
  { :namespace => NAME_OF_RAILS_APP, :expires_in => 1.day, :compress => true }
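With that configured you don't need to build a Dalli::Client per request: Rails.cache goes through a single shared client. A minimal sketch of both approaches (ExpensiveService and the key names are illustrative, not from the question):
# Through Rails.cache, which holds one shared Dalli client under the hood.
value = Rails.cache.fetch('expensive-lookup', :expires_in => 10.minutes) do
  ExpensiveService.lookup  # hypothetical; only runs on a cache miss
end

# Using Dalli directly: create one client and reuse it. Dalli clients are
# threadsafe by default, so sharing a single instance is fine.
CACHE = Dalli::Client.new('cache-1.example.com', :namespace => 'myapp')
CACHE.set('greeting', 'hello')
CACHE.get('greeting')  # => "hello"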
I am building a WebDAV server using sabre/dav, and I want the file storage to live in Wasabi, which is S3-compatible. I did some research and found something that looks like AWS.php, but I don't know how to use it. If anyone knows how to do this specifically, please respond.
What we tried:
- Downloaded s3dav (https://github.com/audionamix/s3dav) and installed it.
- Wrote server.php as follows:
<?php
use Sabre\DAV;
use Aws\S3\S3Client;
// The autoloader
require 'vendor/autoload.php';
$config = array(
    'credentials' => array(
        'key'    => '<insert-access-key>',
        'secret' => '<insert-secret-key>'
    ),
    //'profile' => 'wasabi',
    // Wasabi is S3-compatible; point the client at its endpoint.
    'endpoint' => 'https://s3.wasabisys.com',
    'region'   => 'us-east-1',
    'version'  => 'latest',
    // Wasabi generally needs path-style addressing (bucket in the path).
    'use_path_style_endpoint' => true,
    // 'use_path_style', 'use_ssl', 'port', 'hostname' and 'bucket' are not
    // aws-sdk-php v3 client options, so they are dropped here; the bucket
    // name is passed to S3Directory below instead.
);
// Establish an S3 client.
$s3 = S3Client::factory($config);
// Now we're creating a whole bunch of objects
//$rootDirectory = new DAV\FS\Directory('public');
$rootDirectory = new DAV\FS\S3Directory("/", '<bucket-name>', $s3);
// The server object is responsible for making sense out of the WebDAV protocol
$server = new DAV\Server($rootDirectory);
// If your server is not on your webroot, make sure the following line has the
// correct information
$server->setBaseUri('/server.php/');
// The lock manager is responsible for making sure users don't overwrite
// each other's changes.
$lockBackend = new DAV\Locks\Backend\File('data/locks');
$lockPlugin = new DAV\Locks\Plugin($lockBackend);
$server->addPlugin($lockPlugin);
// This ensures that we get a pretty index in the browser, but it is
// optional.
$server->addPlugin(new DAV\Browser\Plugin());
// All we need to do now, is to fire up the server
$server->exec();
Result:
The file name list is displayed, but every file shows as 0 bytes.
Uploading works (the file size is correct on Wasabi), but other operations do not.
"4.4.0 Exception: Cannot traverse an already closed generator" is displayed.
I'm getting an error that I'm exceeding the number of requests allowed per session (30) when using this query (using Include instead of Customize):
ApplicationServer appServer = QuerySingleResultAndSetEtag(session => session
.Include<ApplicationServer>(x => x.CustomVariableGroupIds)
.Include<ApplicationServer>(x => x.ApplicationIdsForAllAppWithGroups)
.Include<ApplicationServer>(x => x.CustomVariableGroupIdsForAllAppWithGroups)
.Include<ApplicationServer>(x => x.CustomVariableGroupIdsForGroupsWithinApps)
.Include<ApplicationServer>(x => x.InstallationEnvironmentId)
.Load<ApplicationServer>(id))
as ApplicationServer;
Note: the error occurs on this line, which is called for each AppWithGroup within an application:
appGroup.Application = QuerySingleResultAndSetEtag(session =>
session.Load<Application>(appGroup.ApplicationId)) as Application;
However, this query (using Customize) doesn't create extra requests:
ApplicationServer appServer = QuerySingleResultAndSetEtag(session =>
session.Query<ApplicationServer>()
.Customize(x => x.Include<ApplicationServer>(y => y.CustomVariableGroupIds))
.Customize(x => x.Include<ApplicationServer>(y => y.ApplicationIdsForAllAppWithGroups))
.Customize(x => x.Include<ApplicationServer>(y => y.CustomVariableGroupIdsForAllAppWithGroups))
.Customize(x => x.Include<ApplicationServer>(y => y.CustomVariableGroupIdsForGroupsWithinApps))
.Customize(x => x.Include<ApplicationServer>(y => y.InstallationEnvironmentId))
.Where(server => server.Id == id).FirstOrDefault())
as ApplicationServer;
However, the above query causes an error:
Attempt to query by id only is blocked, you should use call
session.Load("applications/2017"); instead of
session.Query().Where(x=>x.Id == "applications/2017");
You can turn this error off by specifying
documentStore.Conventions.AllowQueriesOnId = true;, but that is not
recommend and provided for backward compatibility reasons only.
I had to set AllowQueriesOnId = true because it was the only way I could get this to work.
What am I doing wrong in the first query to cause the includes not to work?
By the way, another post had the same issue where he had to use Customize. I'd like to do this correctly though.
I'm not sure why the Load isn't doing this for you. What version of Raven are you on? I just tested this in Raven 2.5 build 2700, and the Include brings back the information for me in a single request.
Anyway, with the Load not working quite as I would expect, I would switch to a set of lazy queries to get what you want in two server round trips: http://ravendb.net/docs/2.5/client-api/querying/lazy-operations
Another option that might work out better for you (depending on what you are really doing with all of that data) is a transformer: http://ravendb.net/docs/2.5/client-api/querying/results-transformation/result-transformers?version=2.5
Hope that helps.
What happens when I use one memcached instance for both session_store and cache_store?
For example:
config.cache_store = :dalli_store, 'localhost:11211'
config.session_store = :dalli_store, 'localhost:11211'
Can a full cache_store evict sessions from memcached?
Thank you.
If you fear that the cache can affect sessions (and vice versa), you can use namespaces:
config.cache_store = :dalli_store, 'localhost:11211', :namespace => 'cache'
config.session_store = :dalli_store, 'localhost:11211', :namespace => 'sessions'
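A namespace simply prefixes every key, so the two stores cannot collide on key names. A small illustrative sketch (key names and values are made up):
cache    = Dalli::Client.new('localhost:11211', :namespace => 'cache')
sessions = Dalli::Client.new('localhost:11211', :namespace => 'sessions')
cache.set('token', 'a')     # stored under the key "cache:token"
sessions.set('token', 'b')  # stored under the key "sessions:token"
cache.get('token')          # => "a", unaffected by the session entry
Note that namespaces only separate key names: memcached's LRU eviction still operates on the instance as a whole, so a very full cache can still push session entries out.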
I am using action caching on my Rails 3 app on Heroku with the :expires_in option. I've tried calling expire_action directly in the controller upon update, and within a sweeper. Nothing seems to expire the cache entry properly.
In my controller:
caches_action :embed, :if => Proc.new { |c| c.request.format.js? || c.request.format.rss? }, :expires_in => 5.minutes
In my action:
expire_action :action => :embed, :format => :js
And I've also tried it in a sweeper, using the URL generator to get the exact key:
expire_action obj_embed_url(@obj.unique_token)
I wonder if it is Heroku using the Varnish cache layer, which you can't expire. (The cache clearly expires after the 5 minutes, because I can see the content update.) It appears that I have the memcached add-on configured correctly (using the Dalli gem; config.cache_store = :dalli_store), and I can see the appropriate environment variables...
$ heroku config | grep MEM
MEMCACHE_PASSWORD => xxxxxxxxxxxxxxxxx
MEMCACHE_SERVERS => xxx.xxx.northscale.net
MEMCACHE_USERNAME => appxxxxxx%40heroku.com
What am I missing here?
Finally figured this out.
Heroku's paths must not have been matching up between the cache-creation and expire calls. If you specify the path when creating the cache entry, and call that exact path in the expire, it will work. I also had to use "expire_fragment" instead of "expire_action". Here's my code:
In your controller:
caches_action :load, :up, :juice, :fresh, :cache_path => :custom_cache_path.to_proc
def custom_cache_path
  path = "#{params[:controller]}/#{params[:action]}"
  path += "/#{params[:id]}" if params[:id]
  path += "/#{params[:sha]}" if params[:sha]
  path
end
In the expiring method:
expire_fragment "serve/up/#{#site.id}"
expire_fragment "serve/fresh/#{#site.secret}"
I'm building a Rails 3 app on Heroku, and I'm using the aws-s3 gem to manipulate files stored in an Amazon S3 EU bucket.
When I try to perform an AWS::S3::S3Object.delete filename, 'mybucketname' call, I get the following error:
AWS::S3::PermanentRedirect (The bucket you are attempting to access
must be addressed using the specified endpoint. Please send all future
requests to this endpoint.):
I have added the following to my application.rb file:
AWS::S3::Base.establish_connection!(
:access_key_id => "myAccessKey",
:secret_access_key => "mySecretAccessKey"
)
and the following code to my controller:
def destroy
  song = tape.songs.find(params[:id])
  AWS::S3::S3Object.delete song.filename, 'mybucket'
  song.destroy
  respond_to do |format|
    format.js { render :nothing => true }
  end
end
I found a proposed solution somewhere suggesting I add AWS_CALLING_FORMAT: SUBDOMAIN to my amazon_s3.yml file, since aws-s3 supposedly handles EU buckets differently from US ones.
However, this did not work; the same error is received.
Could you please provide any assistance?
Thank you very much for your help.
The problem is that you need to type SUBDOMAIN as an uppercase string in the config. Try this out.
You can specify a custom endpoint when initializing the connection:
AWS::S3::Base.establish_connection!(
:access_key_id => 'myAccessKey',
:secret_access_key => 'mySecretAccessKey',
:server => 's3-website-us-west-1.amazonaws.com'
)
You can find the actual endpoint in the AWS console.
The full list of valid options is here: https://github.com/marcel/aws-s3/blob/master/lib/aws/s3/connection.rb#L252
VALID_OPTIONS = [:access_key_id, :secret_access_key, :server, :port, :use_ssl, :persistent, :proxy].freeze
My solution is to set the constant to the actual service endpoint at initialization time.
In config/initializers/aws_s3.rb:
AWS::S3::DEFAULT_HOST = "s3-ap-northeast-1.amazonaws.com"
AWS::S3::Base.establish_connection!(
:access_key_id => 'access_key_id',
:secret_access_key => 'secret_access_key'
)
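With the host pointed at the bucket's own region, the original call from the question should no longer be redirected. A quick sanity check, assuming the connection above (bucket and key names are illustrative):
AWS::S3::Bucket.find('mybucket')                      # no PermanentRedirect
AWS::S3::S3Object.delete 'some/file.mp3', 'mybucket'  # delete now succeeds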