hubot-auth not authenticating - authorization

I have just installed hubot and I'm trying some basic tests.
I have this basic script in /scripts:
module.exports = (robot) ->
  robot.respond /myscript status/i, (msg) ->
    if robot.auth.hasRole(msg.envelope.user, 'test')
      msg.reply "Success"
    else
      msg.reply "Sorry. Need 'test' role."
I issue the appropriate Slack commands:
schroeder has test role
"OK, schroeder has the 'test' role."
myscript status
"Sorry. Need 'test' role."
I have:
tried to reverse the logic (if vs unless)
verified that the scripts are being updated (by changing responses)
verified that the redis backend is storing the role (connected via redis-cli and inspected the key).
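For reference, the redis-cli check looked roughly like this (hubot-redis-brain keeps everything under a single key, hubot:storage by default):
redis-cli GET hubot:storage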
After re-reading all the documentation and looking up bug reports I still cannot see what I'm missing. It has got to be something simple, but I'm out of ideas. It is almost as though the script is not able to view the stored role (hubot-auth can, but my script cannot).

Even though on start, hubot says that it is connecting to a local Redis server:
INFO hubot-redis-brain: Using default redis on localhost:6379
It isn't... at least not in the way you would expect.
If Redis is, in fact, running, you should get an extra message:
INFO hubot-redis-brain: Data for hubot brain retrieved from Redis
That message does not appear and there is no warning or error that Redis is not running.
If you have not set up hubot-redis-brain properly, you will get strange errors and inconsistencies, like hubot-auth role check functions failing.
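Two quick sanity checks (assuming a default local setup; hubot-redis-brain also honors a REDIS_URL environment variable if Redis is not on localhost:6379):
redis-cli ping                              # should answer PONG if Redis is actually running
export REDIS_URL="redis://localhost:6379"   # set explicitly if the default connection isn't picked up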
In addition, I found that even after I set Redis up properly, my test script did not work. Even though all the tutorials I found test msg.envelope.user, this did not work for me. I needed to use msg.message.user.name and resolve the user through the brain:
module.exports = (robot) ->
  robot.respond /fbctf status/i, (msg) ->
    user = robot.brain.userForName(msg.message.user.name)
    if robot.auth.hasRole(user, 'test')
      msg.reply "Success"
    else
      msg.reply "Sorry. Need 'test' role."

Running manifests (classes) from a task or plan in Puppet Enterprise

TL;DR
In Puppet Enterprise, how do I run a manifest (testpp.pp) from a task or plan (not from Bolt)?
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    class { 'base_windows::testpp': }
  }
  $apply_results.each | $result | {
    notice($result.report)
  }
}
apply_prep seems to succeed, but apply is failing with the following error:
{
  "msg" : "Evaluation Error: Unknown function: 'report'. (file: /opt/puppetlabs/server/data/orchestration-services/code/environments/development/modules/base_windows/plans/testplan.pp, line: 16, column: 19)",
  "kind" : "bolt/plan-failure",
  "details" : {
    "class" : "Bolt::PAL::PALError"
  }
}
If I change the code to:
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    # Is this how to call a class? I cannot find an example.
    class { 'base_windows::testpp': }
  }
  $apply_results.each |$result| {
    $target = $result.target.name
    if $result.ok {
      out::message("${target} returned a value: ${result.value}")
    } else {
      out::message("${target} errored with a message: ${result.error.message}")
    }
  }
}
The plan tells me it has failed, but there are no errors in the node's report. In fact, there is no entry for the time the plan was executed.
I cannot find any examples on how to call a class from a plan, so the above apply() is a guess, based on this documentation.
I have installed the puppetlabs_reboot module and successfully ran a plan using it, therefore, I conclude my system is set up correctly, it's just my code that is wrong.
Background
I may be going about this all wrong, so here is some background to the problem. Currently, I have a series of manifests that install various packages from the public Chocolatey repository depending on a node's classification. Package definitions are stored in Hiera data and each package's version is set to latest. At the end of the Package{} resource, some manifests include a reboot.
These manifests are used to provision new nodes and keep existing nodes up-to-date with the latest package version.
The Puppet agent is set to run once per hour and if the source package is updated in the Chocolatey repo, on the next Puppet run, the manifest will update the package, rebooting the node, if required.
Goal
New nodes are provisioned with the latest package version.
Prevent package updates at undetermined times on existing nodes.
Continue to allow Puppet agent runs every hour.
Make use of existing manifests.
Ideas
Split out the package{} code from the profile manifests and place it in tasks / plans, allowing packages to be updated out-of-hours.
Specify the actual package version in Hiera. Although this is more declarative and idempotent, it means keeping an eye on over 100 package versions. I guess it would be fairly simple to interrogate the Chocolatey repos with code to pull the latest version number (see the example after this list), but even so I am no better off.
Create a task with a script that runs choco upgrade all; however, the next Puppet run would revert package versions according to the version defined in Hiera, meaning Hiera still needs to be kept up-to-date.
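For what it's worth, asking the public Chocolatey repository for the current version of a package is a one-liner (flags as in recent Chocolatey versions):
choco search <packagename> --exact --limit-output    # prints <packagename>|<version>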
Problems
As per the main crux of this question, how do I run manifests (classes) from plans? If I understand correctly, tasks are for ad-hoc scripts, whereas plans can run tasks and manifests. As a lot of time has been invested in writing manifests, I would prefer not to rewrite all my manifests as scripts.
I am confused by the Puppet documentation as it seems to switch between PE and Bolt syntax. I am using Puppet Enterprise, where Puppet says they don't recommend using Bolt, but their examples seem to cite Bolt commands.
No errors in the node's report. apply_prep() reports that it executed successfully (albeit taking far longer than the puppetlabs_reboot module), but apply() results in a failure and nothing is logged in the node's reports.
Using the puppetlabs_reboot module as a reference, their plan appears to use a bunch of tasks, and they don't appear to use apply() to run their reboot{} class. Is this not duplicating the work?
If anyone has any suggestions or ideas, I'd be grateful if you could share.
I've got it to work. The class I was trying to run required parameters that I hadn't provided!
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    class { 'base_windows::testpp':
      filename => $filename,
      contents => $contents,
    }
  }
  # Output the whole result_set in the PE console
  return $apply_results
}
I found this out using the logs.
Turn on debug level logging in /etc/puppetlabs/puppetserver/logback.xml (root level="debug")
Tail the following logs:
tail -f /var/log/puppetlabs/bolt-server/bolt-server.log
tail -f /var/log/puppetlabs/puppetserver/puppetserver.log | grep -B 5 -A 5 'testplan'
tail -f /var/log/puppetlabs/orchestration-services/orchestration-services.log
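For completeness, the plan can then be invoked from the PE client tools with something like the following (syntax from memory, so check puppet plan run --help; the node name and parameter values are made up):
puppet plan run base_windows::testplan targets=win01.example.com filename='C:\temp\test.txt' contents='hello'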

How to roll back changes made in the database by Wallaby browser automation

I have a long test that calls a lot of functions that test the whole application.
I am looking for a way so that, in case of a failure in the test, I can roll back the specific changes that the automation made during the test.
For example:
test "add user and login", session do
session
|> add_user()
# There can be more functions here...
end
def add_user(session, loops // 2) do
try do
session
|> visit("example.com")
|> fill_in(css("#user_name", with "John Doe")
|> click(css("#add_user_button"))
|> assert_has(css("#user_added_successfully_message")
rescue
msg -> if loops > 0, do: add_user(session, loops - 1), else: raise msg
end
end
In case of a failure in the assert_has function (the user is added but the message doesn't show up),
I want to roll back all the changes that happened in the database before add_user is called again in the rescue.
In case you use Ecto to access the DB everywhere, you can use its sandbox mode.
Most likely you want to configure the sandbox pool in your config/test.exs (if not already there):
config :my_app, Repo, # my_app being the name of the application that holds the Repo
  pool: Ecto.Adapters.SQL.Sandbox
Then in your test_helper or tests do something like this:
setup do
  :ok = Ecto.Adapters.SQL.Sandbox.checkout(Repo)
  Ecto.Adapters.SQL.Sandbox.mode(Repo, {:shared, self()})
end
With this, all your tests run in separate transactions and are rolled back afterwards. Also the above makes sure that all the processes use the same connection and see the same transaction data (see :shared mode in the docs).
This example is taken from the docs, where there is more information on that.
If you cannot use Ecto's sandbox mode for whatever reason, a good option could be to start a database transaction yourself and share the connection between your test and the code under test. That way you can manually roll back after each test.
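A very rough sketch of that idea, assuming the code under test runs in the test process itself (sharing one connection with other processes, e.g. a server driven by Wallaby, is exactly the hard part the Sandbox handles for you); add_user_directly and user_exists? are hypothetical helpers:
test "add user (manually rolled back)" do
  Repo.transaction(fn ->
    add_user_directly("John Doe")    # hypothetical helper that writes to the DB
    assert user_exists?("John Doe")  # hypothetical helper that reads it back
    Repo.rollback(:test_cleanup)     # discard everything written inside this transaction
  end)
end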

getting a "need project id error" in Keen

I get the following error when I call:
Keen.delete(:iron_worker_analytics, filters: [{:property_name => 'start_time', :operator => 'eq', :property_value => '0001-01-01T00:00:00Z'}])
Keen::ConfigurationError: Keen IO Exception: Project ID must be set
However, when I set the value, I get the following:
warning: already initialized constant KEEN_PROJECT_ID
iron.io/env.rb:36: warning: previous definition of KEEN_PROJECT_ID was here
Keen works fine when I run the app and load the values from an env.rb file, but from the console I cannot get past this.
I am using the ruby gem.
I figured it out. The documentation is confusing. Per the documentation:
https://github.com/keenlabs/keen-gem
The recommended way to set keys is via the environment. The keys you can set are KEEN_PROJECT_ID, KEEN_WRITE_KEY, KEEN_READ_KEY and KEEN_MASTER_KEY. You only need to specify the keys that correspond to the API calls you'll be performing. If you're using foreman, add this to your .env file:
KEEN_PROJECT_ID=aaaaaaaaaaaaaaa
KEEN_MASTER_KEY=xxxxxxxxxxxxxxx
KEEN_WRITE_KEY=yyyyyyyyyyyyyyy
KEEN_READ_KEY=zzzzzzzzzzzzzzz
If not, make a script to export the variables into your shell or put it before the command you use to start your server.
But I had to set it explicitly via Keen.project_id (which I found by inspecting Keen.methods).
It's sort of confusing, since from the docs I assumed I just needed to set the environment variables. Maybe I am misunderstanding the docs, but it was confusing to me at least.
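For reference, a minimal sketch of the explicit setup from the console, assuming the standard accessors exposed by the keen gem (the values are placeholders):
Keen.project_id = 'aaaaaaaaaaaaaaa'
Keen.master_key = 'xxxxxxxxxxxxxxx'
Keen.write_key  = 'yyyyyyyyyyyyyyy'
Keen.read_key   = 'zzzzzzzzzzzzzzz'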

How to log correctly with Mocha/Velocity (Meteor testing)?

What's the correct way to go about logging out information about tests using the velocity framework with Meteor?
I have some mocha tests that I'd like to output some values from, I guess it'd be good if the output could end up in the logs section of the velocity window... but there doesn't seem to be any documentation anywhere?
I haven't seen it documented either.
I don't know how to log messages into the Velocity window, though I don't like the idea of logging to the UI anyway.
What I've done is create a simple Logger object that wraps all of my console.{{method}} calls and suppresses logging if process.env.IS_MIRROR is set. That way only test framework messages show up on the terminal. If I need to debug a specific test, I temporarily re-enable logging output on Logger.
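A rough sketch of such a wrapper in CoffeeScript (the naming here is mine, not the actual code from that project):
Logger =
  enabled: not process.env.IS_MIRROR    # false inside the Velocity mirror
  log: (args...) -> console.log(args...) if @enabled
  error: (args...) -> console.error(args...) if @enabled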
This is a terrible hack. It will expose an unprotected method that writes to your DB.
But it works.
I was really annoyed to lack this feature, so I dug into the Velocity code to find out that they have a VelocityLogs collection that is globally accessible. But you need to access it from your production, not testing, instance to see it in the web reporter.
So it then took me a good while to get Meteor CORS enabled, but I finally managed - even for Firefox - to create a new route within IronRouter to POST log messages to. (CORS could be nicer with this suggestion - but you really shouldn't expose this anyway.)
You'll need to meteor add http for this.
Place outside of /tests:
if Meteor.isServer
  Router.route 'log', ->
    if @request.method is 'OPTIONS'
      @response.setHeader 'Access-Control-Allow-Origin', '*'
      @response.setHeader 'Access-Control-Allow-Methods', 'POST, OPTIONS'
      @response.setHeader 'Access-Control-Max-Age', 1000
      @response.setHeader 'Access-Control-Allow-Headers', 'origin, x-csrftoken, content-type, accept'
      @response.end()
      return
    if @request.method is 'POST'
      logEntry = @request.body
      logEntry.level ?= 'unspecified'
      logEntry.framework ?= 'log hack'
      logEntry.timestamp ?= moment().format("HH:mm:ss.SSS")
      _id = VelocityLogs.insert(logEntry)
      @response.setHeader 'Access-Control-Allow-Origin', '*'
      @response.end(_id)
      return
  , where: 'server'
Within tests/mocha/lib or similar, as a utility function:
@log = (message, framework, level) ->
  HTTP.post "http://localhost:3000/log",
    { data: { message: message, framework: framework, level: level } },
    (error) -> console.dir error
For coffee haters: coffeescript.org > TRY NOW > Paste the code to convert > Get your good old JavaScript.

What is the correct way to launch your server from vows for testing?

I have an express server which I am testing using vows. I want to run the server from within the vows test suite, so that I don't need to have it running in the background for the test suite to work; then I can just create a cake task which runs the server and tests it in isolation.
In server.coffee I have created the (express) server, configured it, set up routes and called app.listen(port) like this:
# Express - setup
express = require 'express'
app = module.exports = express.createServer()
# Express - configure and set up routes
app.configure ->
  app.set 'views', etc....
  ....
# Express - start
app.listen 3030
In my simple routes-test.js I have:
var vows = require('vows'),
    assert = require('assert'),
    server = require('../app/server/server');

// Create a Test Suite
vows.describe('routes').addBatch({
  'GET /'    : respondsWith(200),
  'GET /401' : respondsWith(401),
  'GET /403' : respondsWith(403),
  'GET /404' : respondsWith(404),
  'GET /500' : respondsWith(500),
  'GET /501' : respondsWith(501)
}).export(module);
where respondsWith(code) is similar in functionality to the one in the vows doc...
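For context, here is a sketch of such a macro, loosely adapted from the macros example in the vows documentation (it reuses the assert required above; the port and the exact shape may differ from what I actually use):
var http = require('http');

function respondsWith(status) {
  var context = {
    topic: function () {
      // Context names look like "GET /401"; take the path part.
      var path = this.context.name.split(/ +/)[1],
          callback = this.callback;
      http.get({ host: 'localhost', port: 3030, path: path }, function (res) {
        callback(null, res);
      }).on('error', callback);
    }
  };
  context['should respond with a ' + status] = function (err, res) {
    assert.equal(res.statusCode, status);
  };
  return context;
}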
When I require the server in the above test, it automatically begins running the server and the tests run and pass, which is great, but I don't feel like I am doing it the 'right' way.
I don't have much control over when the server begins, and what happens if I want to configure the server to point to a 'test' environment rather than the default one, or change the default logging level while I'm testing?
PS I am going to convert my vows to CoffeeScript, but for now it's all in JS as I'm in learning mode from the docs!
That is an interesting question, because just last night I did exactly what you want to do. I have a little CoffeeScript Node.js app which happened to be written like the one you showed. Then I refactored it, creating the following app.coffee:
# ... Imports
app = express.createServer()
# Create a helper function
exports.start = (options = {port: 3000, logfile: undefined}) ->
  # A function defined in another module which configures the app
  conf.configure app, options
  app.get '/', index.get
  # ... Other routes
  console.log 'Starting...'
  app.listen options.port
Now I have an index.coffee (equivalent to your server.coffee) as simple as:
require('./app').start port:3000
Then, I wrote some tests using Jasmine-node and Zombie.js. The test framework is different but the principle is the same:
app = require('../../app')
# ...
# To avoid annoying logging during tests
logfile = require('fs').createWriteStream 'extravagant-zombie.log'
# Use the helper function to start the app
app.start port: 3000, logfile: logfile

describe "GET '/'", ->
  it "should have no blog if no one was registered", ->
    zombie.visit 'http://localhost:3000', (err, browser, status) ->
      expect(browser.text 'title').toEqual 'My Title'
      asyncSpecDone()
    asyncSpecWait()
The point is: what I did and I would suggest is to create a function in a module which starts the server. Then, call this function wherever you want. I do not know if it is "good design", but it works and seems readable and practical to me.
Also, I suspect there is no "good design" in Node.js and CoffeeScript yet. Those are brand new, very innovative technologies. Of course, we can "feel something is wrong" - like this situation, where two different people didn't like the design and changed it. We can feel the "wrong way", but it does not mean there is a "right way" already. Summing up, I believe we will have to invent some "right ways" in your development :)
(But it is good to ask about good ways of doing things, too. Maybe someone has a good idea and the public discussion will be helpful for other developers.)