I want to prepopulate my cache with an initializer, but I don't need this code to run every time I run rake or rails g, etc. Rake and Bundler are easy to deal with, but a similar solution does not work for the generators:
# config/initializers/prepop_cache.rb
if !defined?(::Bundler) and !defined?(::Rake) and !defined?(Rails::Generators)
  # do stuff
end
This must be because rails/generators (or something similar) is required at runtime. How can I check whether the command being run is rails g xyz?
Update:
Two solutions can be found here: Rails 3 initializers that run only on `rails server` and not `rails generate`, etc
Still would like to know if it's possible in the manner I've tried above.
In Rails 3, what you're looking to do is conceivably possible, but in a hacky way. Here's how:
When you make a rails generate call, the callpath looks like this:
bin/rails is called, which eventually routes you to execute script/rails
script/rails is executed which requires rails/commands
rails/commands is loaded, which is the main point of focus.
Within rails/commands, this is the code that runs for generate:
ARGV << '--help' if ARGV.empty?

aliases = {
  "g"  => "generate",
  "c"  => "console",
  "s"  => "server",
  "db" => "dbconsole"
}

command = ARGV.shift # <= #1
command = aliases[command] || command

case command
when 'generate', 'destroy', 'plugin', 'benchmarker', 'profiler'
  require APP_PATH
  Rails.application.require_environment! # <= #2
  require "rails/commands/#{command}" # <= #3
The points of interest are numbered above. At point #1, the command that you're running is shifted off of ARGV, which in your case means generate is removed from the command line args.
At point #2 your environment gets loaded, at which point your initializers are executed. And herein lies the tough part: because nothing command-specific has been loaded yet (that happens at #3), there is no information available to determine that a generator is being run!
Let's insert a script into config/initializers/debug.rb to see what is available if we run rails generate model meep:
puts $0 #=> "script/rails"
puts ARGV #=> ["model", "meep"]
As you can see, there is no direct information that a generator is being run. That said, there is indirect information, namely ARGV[0] #=> "model". Conceivably you could create a list of possible generators and check whether ARGV[0] matches one of them. The responsible developer in me says this is a hack and may break in ways you'd not expect, so I'd use this cautiously.
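For illustration, here is a minimal sketch of that ARGV-based check in an initializer. The generator list is purely illustrative; it would need to cover every generator you actually use, which is exactly why this approach is fragile:

# config/initializers/prepop_cache.rb
# Hack: by this point 'generate' has been shifted off ARGV, so ARGV[0]
# holds the generator name (e.g. "model"). This list is an assumption.
KNOWN_GENERATORS = %w(model controller scaffold migration resource mailer helper)

unless KNOWN_GENERATORS.include?(ARGV[0])
  # prepopulate the cache here
end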
The only other option is to modify script/rails like you suggested -- which isn't too bad a solution, but would likely break when you upgrade to Rails 4.
In Rails 4, you've got more hope! By the time the application environment is being loaded, the generators namespace has already been loaded. This means that in an initializer you could do the following:
if defined? Rails::Generators #=> "constant"
  # code to run if generators loaded
else
  # code to run if generators not loaded
end
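Tying that back to the original initializer, the Rails 4 check could slot into the guard the question started with (a sketch; the Bundler and Rake checks are unchanged from the question):

# config/initializers/prepop_cache.rb (Rails 4)
unless defined?(::Bundler) || defined?(::Rake) || defined?(Rails::Generators)
  # do stuff
end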
TL;DR
In Puppet Enterprise, how do I run a manifest (testpp.pp) from a task or plan (not via Bolt)?
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    class { 'base_windows::testpp': }
  }
  $apply_results.each | $result | {
    notice($result.report)
  }
}
apply_prep seems to succeed, but apply is failing with the following error:
{
"msg" : "Evaluation Error: Unknown function: 'report'. (file: /opt/puppetlabs/server/data/orchestration-services/code/environments/development/modules/base_windows/plans/testplan.pp, line: 16, column: 19)",
"kind" : "bolt/plan-failure",
"details" : {
"class" : "Bolt::PAL::PALError"
}
}
If I change the code to:
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    # Is this how to call a class? I cannot find an example.
    class { 'base_windows::testpp': }
  }
  $apply_results.each |$result| {
    $target = $result.target.name
    if $result.ok {
      out::message("${target} returned a value: ${result.value}")
    } else {
      out::message("${target} errored with a message: ${result.error.message}")
    }
  }
}
The plan tells me it has failed, but there are no errors in the node's report. In fact, there is no entry for the time the plan was executed.
I cannot find any examples on how to call a class from a plan, so the above apply() is a guess, based on this documentation.
I have installed the puppetlabs_reboot module and successfully ran a plan using it; therefore, I conclude my system is set up correctly and it's just my code that is wrong.
Background
I may be going about this all wrong, so here is some background to the problem. Currently, I have a series of manifests that install various packages from the public Chocolatey repository depending on a node's classification. Package definitions are stored in Hiera data and each package's version is set to latest. At the end of the Package{} resource, some manifests include a reboot.
These manifests are used to provision new nodes and keep existing nodes up-to-date with the latest package version.
The Puppet agent is set to run once per hour and if the source package is updated in the Chocolatey repo, on the next Puppet run, the manifest will update the package, rebooting the node, if required.
Goal
New nodes are provisioned with the latest package version.
Prevent package updates at undetermined times on existing nodes.
Continue to allow Puppet agent runs every hour.
Make use of existing manifests.
Ideas
Split out the package{} code from the profile manifest and place them in tasks / plans, allowing packages to be updated out-of-hours.
Specify the actual package version in Hiera. Although this is more declarative and idempotent, it means keeping an eye on over 100 package versions. I guess it would be fairly simple to interrogate the Chocolatey repos with code to pull the latest version number, but even so I am no better off.
Create a task with a script that runs choco upgrade all; however, the next Puppet run would revert package versions according to the version defined in Hiera, meaning Hiera still needs to be kept up-to-date.
Problems
As per the main crux of this question, how do I run manifests (classes) from plans? If I understand correctly, tasks are for ad-hoc scripts, whereas plans can run tasks and manifests. As a lot of time has been invested in writing manifests, I would prefer not to rewrite all my manifests as scripts.
I am confused by the Puppet documentation as it seems to switch between PE and Bolt syntax. I am using Puppet Enterprise, where Puppet says they don't recommend using Bolt, but their examples seem to cite Bolt commands.
There are no errors in the node's report. apply_prep() reports it executed successfully (albeit taking far longer to execute than the puppetlabs_reboot module), but apply() results in a failure and nothing is logged in the node's reports.
Using the puppetlabs_reboot module as a reference, their plan appears to use a bunch of tasks rather than apply() to run their reboot{} class. Is this not duplicating the work?
If anyone has any suggestions or ideas, I'd be grateful if you could share.
I've got it to work. The class I was trying to run required parameters that I hadn't provided!
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    class { 'base_windows::testpp':
      filename => $filename,
      contents => $contents,
    }
  }
  # Output the whole result_set in the PE console
  return $apply_results
}
I found this out using the logs.
Turn on debug level logging in /etc/puppetlabs/puppetserver/logback.xml (root level="debug")
Tail the following logs:
tail -f /var/log/puppetlabs/bolt-server/bolt-server.log
tail -f /var/log/puppetlabs/puppetserver/puppetserver.log | grep -B 5 -A 5 'testplan'
tail -f /var/log/puppetlabs/orchestration-services/orchestration-services.log
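For reference, once the plan works, it can be started from the PE console, or from the command line if the PE client tools are installed. The invocation looks something like this (node name and parameter values are placeholders):

puppet plan run base_windows::testplan targets=win-node1.example.com filename='C:/Temp/test.txt' contents='hello'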
I'm currently writing tests for my application and therefore I have to test some click.group commands I defined.
Let's say I defined them like:
@click.group(cls=MyGroup)
@click.pass_context
def myapp(ctx):
    init_stuff()

@myapp.command()
@click.option('--myOption')
def foo(myOption: str) -> None:
    do_stuff()  # change some files, print, create other files
I know that I could use the CliRunner from click.testing. However, I just want to make sure that the command is called, but I DON'T WANT it to execute any code (for example via CliRunner.invoke()).
How could this be done?
I couldn't come up with a solution using mocking with foo, for example. Or do I have to execute code, let's say using the isolated_filesystem() which CliRunner provides?
So the question is: What would be the most efficient way to test my commands when defined like shown above?
Many thanks in advance
You could add a --dry-run flag to your group or to some commands and save it inside the context. If the flag is enabled, do not execute any code. Then you can use CliRunner.invoke() with the --dry-run flag enabled and just check that your invocations have happened, without actually executing the code.
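Here is a minimal sketch of that idea, assuming a simplified version of the group from the question (do_stuff and the option names are placeholders from the question, not a real API):

# test_myapp.py - a sketch of the --dry-run approach
import click
from click.testing import CliRunner


def do_stuff():
    """Stands in for the real work (changing files, creating files, etc.)."""


@click.group()
@click.option('--dry-run', is_flag=True, default=False)
@click.pass_context
def myapp(ctx, dry_run):
    # Save the flag in the context so subcommands can read it.
    ctx.ensure_object(dict)
    ctx.obj['dry_run'] = dry_run


@myapp.command()
@click.option('--myoption')
@click.pass_context
def foo(ctx, myoption):
    if ctx.obj['dry_run']:
        click.echo('foo invoked (dry run)')
        return
    do_stuff()


def test_foo_dry_run():
    runner = CliRunner()
    # Group-level options go before the subcommand name.
    result = runner.invoke(myapp, ['--dry-run', 'foo', '--myoption', 'x'])
    assert result.exit_code == 0
    assert 'foo invoked (dry run)' in result.output

This still goes through CliRunner.invoke(), but the dry-run guard means none of the command's real work is executed.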
I am using Rails 3.2.12/Ruby 1.9.3 and am trying to set up multiple loggers so I can log both to a file and to a Graylog server which we have set up. I have got close using this solution, but with a Gelf logger: http://railsware.com/blog/2014/08/07/rails-logging-into-several-backends/
So I have backported the ActiveSupport::Logger to my config/initializers and set up the Gelf logger as below
(development.rb)
gelf_logger = GELF::Logger.new("greylogserver", 12201, "WAN", { :host => 'host', :facility => "railslog"})
Rails.logger.extend(ActiveSupport::Logger.broadcast(gelf_logger))
However, I am finding that I only get errors logged to the Graylog server:
ArgumentError: short_message is missing. Options version, short_message and host must be set.
When I debug the code, I can see that the args passed into the Gelf logger's add method (below) always have the level as the 1st element, nil as the 2nd, and the message in the 3rd. This is confusing, as the 2nd arg should be the message and the 3rd the progname. The only solution I have come up with is to alter the gelf-rb gem (as below), changing the 6th line to use args[1] for the message, and then it works; however, this is not ideal and there must be a way to fix this in my code.
def add(level, *args)
  raise ArgumentError.new('Wrong arguments.') unless (0..2).include?(args.count)

  # Ruby Logger's author is a maniac.
  message, progname = if args.count == 2
    [args[1], args[1]]
  elsif args.count == 0
    [yield, default_options['facility']]
  elsif block_given?
    [yield, args[0]]
  else
    [args[0], default_options['facility']]
  end
  ....
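For what it's worth, the nil second argument is not your code's fault; it comes straight from Ruby's stdlib Logger, whose calls the broadcast module forwards verbatim. The severity helpers put the string in the progname slot (simplified from the stdlib source):

# ruby stdlib logger.rb (simplified)
def info(progname = nil, &block)
  add(INFO, nil, progname, &block)  # logger.info("msg") => add(INFO, nil, "msg")
end

So when Rails.logger.info("msg") is broadcast, the Gelf logger receives add(INFO, nil, "msg"), which is why the message shows up as the third argument.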
Just to note, when I directly set my Rails logger to use the Gelf logger in development.rb, it works fine:
Rails.logger = GELF::Logger.new("greylogserver", 12201, "WAN", { :host => 'host', :facility => "railslog"})
So it has to be something to do with my implementation of ActiveSupport::Logger which is from here - https://github.com/rails/rails/blob/6329d9fa8b2f86a178151be264cccdb805bfaaac/activesupport/lib/active_support/logger.rb
Any help would be much appreciated
As @Leons mentions, the same issue was reported to the project as issue #26. The poster wrote a patch with test cases and logged issue #27, a pull request with a fix that makes the interface of the add method match the usual Logger definitions.
This was merged on Dec 22nd, 2014. Since then, no new release has been made.
I think it is best to install directly from the GitHub repo with something like:
$ echo "gem 'gelf', :git => 'https://github.com/Graylog2/gelf-rb.git'" >>Gemfile
$ bundle install
or similar.
Good luck.
I'm using deploy.rb to precompile Rails assets only when they change, but this task always skips compiling my assets :(
namespace :assets do
  task :precompile, :roles => :web, :except => { :no_release => true } do
    from = source.next_revision(current_revision)
    if capture("cd #{latest_release} && #{source.local.log(from)} vendor/assets/ app/assets/ | wc -l").to_i > 0
      run %Q{cd #{latest_release} && #{rake} RAILS_ENV=#{rails_env} #{asset_env} assets:precompile}
    else
      logger.info "Skipping asset pre-compilation because there were no asset changes"
    end
  end
end
What could be causing this task to never compile the assets? It always thinks there are no asset changes and logs that message.
I also never really understood the task; for example, what does source.local.log below refer to?
source.local.log
Could anyone clarify what the task commands do, and give some pointers as to why it never sees any asset changes? Thank you
What it does:
from = source.next_revision(current_revision)
source is a reference to your source code, as seen through your SCM (git, svn, whatever). This sets from as (essentially) the currently deployed version of your source code.
capture("cd #{latest_release} && #{source.local.log(from)} vendor/assets/ app/assets/ | wc -l")
capture means 'execute this command in the shell, and return its output'. The command in question references the log of changes to your source comparing the deployed version to the current version (specifying the paths where assets live as the ones that 'matter'), and passes that into the word count tool (wc -l). The -l option means that it returns a count of the number of lines in the output. Thus, the output (which is returned by capture) is the number of filenames that have changes between these two versions.
If that number is zero, then no files have changed in any of the specified paths, so we skip precompiling.
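Concretely, with git as the SCM, the captured command expands to something along these lines (illustrative; the exact revision and flags depend on your Capistrano version and SCM):

cd /var/www/app/releases/20130101000000 && git log abc1234.. vendor/assets/ app/assets/ | wc -l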
Why it doesn't work:
I don't know. There doesn't seem to be anything wrong with the code itself - it's the same snippet I use, more or less. Here are a couple of things that you can check:
Does Capistrano even know you're using the asset pipeline? Check your Capfile. If you don't have load 'deploy/assets' in there, deploying won't even consider compiling your assets.
Have you, in fact, enabled the asset pipeline? Check application.rb for config.assets.enabled = true
Do you have incorrect asset paths specified? The code is checking for changes in vendor/assets/ and app/assets/. If your assets live somewhere else (lib/assets, for instance), they won't be noticed. (If this is the case, you can just change that line to include the correct paths.)
Have you, in fact, changed any assets since the last deployment? I recommend bypassing the check for changed assets and forcing precompile to run once, then turning the check back on and seeing if the problem magically resolves itself. In my example below, setting force_compile = true would do that.
What I use:
Here's the version of this I currently use. It may be helpful. Or not. Changes from the original:
A more readable way to specify asset paths (if you use it, remember to set asset_locations to the places your assets live)
An easy way to force precompile to run (set force_compile = true to still attempt the check, but run precompile regardless)
It prints out the count whether or not precompile runs. I appreciate getting some output just to be sure the check is running.
If an error occurs when trying to compare the files (as will often happen with brand new projects, for instance), it prints the error but runs precompile anyway.
namespace :assets do
  task :precompile, :roles => :web, :except => { :no_release => true } do
    # Check if assets have changed. If not, don't run the precompile task - it takes a long time.
    force_compile = false
    changed_asset_count = 0
    begin
      from = source.next_revision(current_revision)
      asset_locations = 'app/assets/ lib/assets vendor/assets'
      changed_asset_count = capture("cd #{latest_release} && #{source.local.log(from)} #{asset_locations} | wc -l").to_i
    rescue Exception => e
      logger.info "Error: #{e}, forcing precompile"
      force_compile = true
    end
    if changed_asset_count > 0 || force_compile
      logger.info "#{changed_asset_count} assets have changed. Pre-compiling"
      run %Q{cd #{latest_release} && #{rake} RAILS_ENV=#{rails_env} #{asset_env} assets:precompile}
    else
      logger.info "#{changed_asset_count} assets have changed. Skipping asset pre-compilation"
    end
  end
end
I feel like this should be a simple problem, but I'm pulling my hair out trying to track it down. I've installed the chargify_api_ares gem, but can't do even basic things such as
Chargify::Subscription.create
as I get this path error. I feel like this must be a gem issue somehow, but I don't know where to go from here.
UPDATE: bundle show chargify_api_ares shows the correct path; I just somehow can't access it. Still trying random environment-related things.
Looks like this is the source of the problem, in active_resource/base.rb:
# Gets the \prefix for a resource's nested URL (e.g., <tt>prefix/collectionname/1.json</tt>)
# This method is regenerated at runtime based on what the \prefix is set to.
def prefix(options={})
  default = site.path
  default << '/' unless default[-1..-1] == '/'
  # generate the actual method based on the current site path
  self.prefix = default
  prefix(options)
end
As I understand it, Chargify.subdomain should be setting the site.path, but I don't understand activeresource well enough yet to know what's happening and will continue to dig.
I too had the same issue.
I executed the following in the console:
Chargify.configure do |c|
  c.api_key = "<<api_key>>"
  c.subdomain = "<<subdomain>>"
end
After that, performing any Chargify console commands went through fine.
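Putting it together: running Chargify.configure before the first request sets ActiveResource's site (and hence the site.path used in the prefix method above), so the call from the question:

Chargify::Subscription.create

no longer raises the path error.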