What is the correct way to launch your server from vows for testing?

I have an Express server which I am testing using Vows. I want to run the server from within the Vows test suite, so that I don't need to have it running in the background for the test suite to work; then I can just create a Cake task which runs the server and tests it in isolation.
In server.coffee I have created the (express) server, configured it, set up routes and called app.listen(port) like this:
# Express - setup
express = require 'express'
app = module.exports = express.createServer()

# Express - configure and set up routes
app.configure ->
    app.set 'views', etc....
    ....

# Express - start
app.listen 3030
In my simple routes-test.js I have:
var vows = require('vows'),
    assert = require('assert'),
    server = require('../app/server/server');

// Create a Test Suite
vows.describe('routes').addBatch({
    'GET /'    : respondsWith(200),
    'GET /401' : respondsWith(401),
    'GET /403' : respondsWith(403),
    'GET /404' : respondsWith(404),
    'GET /500' : respondsWith(500),
    'GET /501' : respondsWith(501)
}).export(module);
where respondsWith(code) is similar in functionality to the one in the vows doc...
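For reference, here is a minimal sketch of such a macro, assuming the server from server.coffee is listening on port 3030 (this is a paraphrase of the macro in the vows guidance, using Node's http module directly, not a verbatim copy):

var http = require('http');

function respondsWith(status) {
    var context = {
        topic: function () {
            // The context name, e.g. "GET /404", carries the method and path
            var req = this.context.name.split(/ +/);
            http.request({
                method: req[0],
                host: 'localhost',
                port: 3030,
                path: req[1]
            }, this.callback.bind(this, null)).end();
        }
    };
    // Create and assign the vow to the context
    context['should respond with a ' + status] = function (e, res) {
        assert.equal(res.statusCode, status);
    };
    return context;
}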
When I require server in the above test, it automatically starts the server, and the tests run and pass, which is great, but I don't feel like I am doing it the 'right' way.
I don't have much control over when the server starts, and what happens if I want to point the server at a 'test' environment rather than the default one, or change the default logging level while I'm testing?
PS: I am going to convert my vows to CoffeeScript, but for now it's all in JS as I'm in learning mode from the docs!

That is an interesting question, because just last night I did what you want to do. I have a little CoffeeScript Node.js app which happened to be written like the one you showed. Then I refactored it, creating the following app.coffee:
# ... Imports
app = express.createServer()

# Create a helper function
exports.start = (options = {port: 3000, logfile: undefined}) ->
    # A function defined in another module which configures the app
    conf.configure app, options
    app.get '/', index.get
    # ... Other routes
    console.log 'Starting...'
    app.listen options.port
Now I have an index.coffee (equivalent to your server.coffee) as simple as:
require('./app').start port:3000
Then, I wrote some tests using Jasmine-node and Zombie.js. The test framework is different but the principle is the same:
app = require('../../app')
# ...

# To avoid annoying logging during tests
logfile = require('fs').createWriteStream 'extravagant-zombie.log'

# Use the helper function to start the app
app.start port: 3000, logfile: logfile

describe "GET '/'", ->
    it "should have no blog if no one was registered", ->
        zombie.visit 'http://localhost:3000', (err, browser, status) ->
            expect(browser.text 'title').toEqual 'My Title'
            asyncSpecDone()
        asyncSpecWait()
The point is: what I did, and what I would suggest, is to create a function in a module which starts the server, then call that function wherever you want. I do not know if it is "good design", but it works and seems readable and practical to me.
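One small companion to that idea (my sketch, not part of the refactor above): since express.createServer() returns a plain Node http.Server, the same module can also export a stop helper, giving the test suite control over shutdown as well:

# app.coffee - sketch of a companion to exports.start
exports.stop = ->
    console.log 'Stopping...'
    app.close()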
Also, I suspect there is no established "good design" in Node.js and CoffeeScript yet. These are brand new, very innovative technologies. Of course, we can "feel something is wrong" - like this situation, where two different people didn't like the design and changed it. We can feel the "wrong way", but it does not mean there is a "right way" already. Summing up, I believe we will have to invent some "right ways" as we develop :)
(But it is good to ask about good ways of doing things, too. Maybe someone has a good idea and the public discussion will be helpful for other developers.)

Related

Enable Impala Impersonation on Superset

Is there a way to make the logged-in user (on Superset) run the queries on Impala?
I tried to enable the "Impersonate the logged on user" option on Databases, but with no success: all the queries run on Impala as the superset user.
I'm trying to achieve the same! This will not completely answer the question, since it still does not work, but I want to share my research in order to maybe help another soul trying to use this tool outside very basic use cases.
I went deep into the code and found out that impersonation is not implemented for Impala, so you cannot achieve this from the UI. I found this PR https://github.com/apache/superset/pull/4699 that for whatever reason was never merged into the codebase, and tried to copy and paste its code into my Superset version (1.1.0), but it didn't work. By adding some logs I can see that the impersonation configuration is updated, but the actual Impala query still runs as the user I used to start the process.
As you can imagine, I am a complete noob at this. However, I found out that the impersonation happens when you create a cursor, and there is a constructor parameter in which you can pass the impersonation configuration.
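To illustrate the idea with the impyla client directly (a minimal sketch; the host, port, and user are placeholders):

from impala.dbapi import connect

conn = connect(host='impala-host', port=21050)
# The `configuration` kwarg on cursor() is where per-session overrides
# such as the delegation user can be passed:
cursor = conn.cursor(configuration={'impala.doas.user': 'end_user'})
cursor.execute('SELECT 1')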
I managed to correctly (at least to my understanding) implement impersonation for the SQL Lab part.
In sql_lab.py you have to add the following lines to the execute_sql_statements method:
with closing(engine.raw_connection()) as conn:
    # closing the connection closes the cursor as well
    cursor = conn.cursor(**database.cursor_kwargs)
where cursor_kwargs is defined in db_engine_specs/impala.py as follows:
@classmethod
def get_configuration_for_impersonation(cls, uri, impersonate_user, username):
    logger.info(
        'Passing Impala execution_options.cursor_configuration for impersonation')
    return {'execution_options': {
        'cursor_configuration': {'impala.doas.user': username}}}

@classmethod
def get_cursor_configuration_for_impersonation(cls, uri, impersonate_user,
                                               username):
    logger.debug('Passing Impala cursor configuration for impersonation')
    return {'configuration': {'impala.doas.user': username}}
Finally, in models/core.py you have to add the following bit to the get_sqla_engine method:
params = extra.get("engine_params", {})  # that was already there, just for you to find the line
self.cursor_kwargs = self.db_engine_spec.get_cursor_configuration_for_impersonation(
    str(url), self.impersonate_user, effective_username)  # this is the line I added
...
params.update(self.get_encrypted_extra())  # already there

# new stuff
configuration = {}
configuration.update(
    self.db_engine_spec.get_configuration_for_impersonation(
        str(url),
        self.impersonate_user,
        effective_username))
if configuration:
    params.update(configuration)
As you can see, I just shamelessly pasted the code from the PR. However, as I already said, this only kind of works for SQL Lab. The dashboards query Impala in an entirely different way that I have not yet figured out, and their queries are not handled through something like this:
with closing(engine.raw_connection()) as conn:
    # closing the connection closes the cursor as well
    cursor = conn.cursor(**database.cursor_kwargs)
My gut (and debugging) feeling is that you need to first understand the SQLAlchemy part and extend a new ImpalaEngine class that uses a custom cursor with the impersonation configuration. Or something like that; however, it is not as simple (if we want to call this simple) as the sql_lab part. So the trick is to find out where the query is executed and create a cursor with the impersonation configuration. Easy, isn't it?
I hope this sheds some light for you and the others who have this issue. Let me know if you find another way to solve it, or if this comment was useful.
Update: something really useful
A colleague of mine successfully implemented impersonation with Impala without touching anything Superset-related, instead working directly with the impyla lib. A PR was opened with the code to change; you can apply the patch directly to the impyla source used by Superset. You have to edit both dbapi.py and hiveserver2.py.
As a reminder: we are still testing this, and we do not know if it works with different accounts using the same Superset instance.

express define routes confusion app.use()

app.use('/api/users', require('./routes/api/users'));
app.use('/api/auth', require('./routes/api/auth'));
app.use('/api/profile', require('./routes/api/profile'));
app.use('/api/posts', require('./routes/api/posts'));
When I change the routes from 'api' to any other word, the server keeps giving me a 404 Not Found error, even though I also changed the axios calls to the corresponding paths.
For example: app.use('/ddd/posts', require('./routes/api/posts')) with the corresponding axios call: const res = await axios.get('/ddd/posts');
Please help.
If this:
app.use('/api/posts',require('./routes/api/posts'))
works with:
axios.get('/api/posts');
Then, this:
app.use('/ddd/posts',require('./routes/api/posts'))
will work just fine with:
axios.get('/ddd/posts');
Unless there is something interfering with your modified server. Things that could be interfering:
A proxy configured to only allow certain paths through (see the sketch below)
Your new server didn't get started properly, perhaps because the prior generation of the server is still running
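For example, a dev-server proxy like this (a hypothetical webpack-dev-server snippet; the port is invented) would forward /api/* to the Express server but answer /ddd/* itself with a 404:

// webpack.config.js (devServer section) - hypothetical
devServer: {
    proxy: {
        '/api': 'http://localhost:5000'
    }
}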

Code coverage in SimpleTest

Is there any way to generate a code coverage report when using SimpleTest, similar to PHPUnit?
I have read the documentation of SimpleTest on their website but cannot find a clear explanation of how to do it!
I came across this website that says
we can add require_once (dirname(__FILE__).'/coverage.php')
to the intended file and it should generate the report, but it did not work!
If there is a helpful website on how to generate code coverage, please share it here.
Thanks a lot.
I could not get it to work in the officially supported way either, but here is something I was able to hack together by examining their code. This works for v1.1.7 of SimpleTest, not their master code. At the time of this writing, v1.1.7 is the latest release, and it works with newer versions of PHP 7 even though it is an old release.
First off, you have to make sure you have Xdebug installed, configured, and working. On my system there are both CLI and Apache versions of the php.ini file that have to be configured properly, depending on whether I am trying to use PHP through Apache or directly from the terminal. There are alternatives to Xdebug, but most people use Xdebug.
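For reference, the relevant php.ini lines might look like this (a sketch; the extension path and settings depend on your Xdebug version):

zend_extension=xdebug.so
; Xdebug 2 enables coverage by default once loaded;
; Xdebug 3 needs the coverage mode switched on explicitly:
xdebug.mode=coverage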
Then, you have to make the PHP_CodeCoverage library accessible from your code. I recommend adding it to your project as a composer package.
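Something like this should do it (pick a version constraint that matches your PHP version):

composer require --dev phpunit/php-code-coverage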
Now you just have to manually use that library to capture code coverage and generate a report. How exactly you do that will depend on how you run your tests. Personally, I run my tests on the terminal, and I have a bootstrap file that php runs before it starts the script. At the end of the bootstrap file, I include the SimpleTest autorun file so it will automatically run the tests in any test classes that get included like so:
require_once __DIR__.'/vendor/simpletest/simpletest/autorun.php';
Somewhere inside your bootstrap file you will need to create a filter, whitelist the directories and files you want reported, create a coverage object passing the filter to the constructor, start coverage, and create and register a shutdown function that changes the way SimpleTest executes the tests, to make sure it also stops the coverage and generates the coverage report. Your bootstrap file might look something like this:
<?php

require __DIR__.'/vendor/autoload.php';

$filter = new \SebastianBergmann\CodeCoverage\Filter();
$filter->addDirectoryToWhitelist(__DIR__."/src/");

$coverage = new \SebastianBergmann\CodeCoverage\CodeCoverage(null, $filter);
$coverage->start('<name of test>');

function shutdownWithCoverage($coverage)
{
    $autorun = function_exists('\run_local_tests'); // provided by simpletest
    if ($autorun) {
        $result = \run_local_tests(); // this actually runs the tests
    }

    $coverage->stop();

    $writer = new \SebastianBergmann\CodeCoverage\Report\Html\Facade;
    $writer->process($coverage, __DIR__.'/tmp/code-coverage-report');

    if ($autorun) {
        // prevent tests from running twice:
        exit($result ? 0 : 1);
    }
}

register_shutdown_function('\shutdownWithCoverage', $coverage);

require_once __DIR__.'/vendor/simpletest/simpletest/autorun.php';
It took me some time to figure this out, as - to put it mildly - the documentation for this feature is not really complete.
Once you have your test suite up and running, just include these lines before the lines that are actually running it:
require_once ('simpletest/extensions/coverage/coverage.php');
require_once ('simpletest/extensions/coverage/coverage_reporter.php');
$coverage = new CodeCoverage();
$coverage->log = 'coverage/log.sqlite'; // This folder should exist
$coverage->includes = ['.*\.php$']; // Modify these as you wish
$coverage->excludes = ['simpletest.*']; // Or it is even better to use a setting file
$coverage->maxDirectoryDepth = '1';
$coverage->resetLog();
$coverage->startCoverage();
Then run your tests, for instance:
$test = new ProjectTests(); //It is an extension of the class TestSuite
$test->run(new HtmlReporter());
Finally, generate your reports:
$coverage->stopCoverage();
$coverage->writeUntouched();
$handler = new CoverageDataHandler($coverage->log);
$report = new CoverageReporter();
$report->reportDir = 'coverage/report'; // This folder should exist
$report->title = 'Code Coverage Report';
$report->coverage = $handler->read();
$report->untouched = $handler->readUntouchedFiles();
$report->summaryFile = $report->reportDir . '/index.html';
And that's it. Based on your setup, you might need to make some small adjustments to make it work. For instance, if you are using the autorun.php from SimpleTest, it might be a bit trickier.

Protractor - How to separate each test into one file and separate variables

I have some complex Protractor tests written, but everything is in one file.
At the top of it I load all the variables, like:
var userLogin = "John";
and later in the code I use them together.
What I need to do is:
1. Separate all variables into an additional file (some config file)
2. Move each test into its own file
1 - I tried to make a config.js where I added all the variables, and I required it in protractor.conf.js. It loads correctly; the problem is that when I use any of these variables in a test, it does not work (the test fails with "userName is not defined").
I know there is a way to require the config file in each test script, but that's really not the best option in my eyes.
2 - How can I know what happened in the last script if they are separate? For example, how do I know I am logged in?
Thanks.
There are multiple things you can make use of.
2) How can I know what happened in the last script if they are separate? For example, how do I know I am logged in?
This is where beforeEach(), afterEach() can help:
To help a test suite DRY up any duplicated setup and teardown code,
Jasmine provides the global beforeEach and afterEach functions. As the
name implies, the beforeEach function is called once before each spec
in the describe is run, and the afterEach function is called once
after each spec.
There are also beforeAll(), afterAll() available in Jasmine 2, or via the third-party jasmine-beforeAll for Jasmine 1:
The beforeAll function is called only once before all the specs in
describe are run, and the afterAll function is called after all specs
finish. These functions can be used to speed up test suites with
expensive setup and teardown.
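For example, a spec file could log in once before all of its tests (a sketch; the login helper and selectors are invented, and browser.params is explained below):

var loginHelper = require('./helpers/login'); // hypothetical shared helper

describe('user dashboard', function () {
    // Runs once for this spec file, not before every single test:
    beforeAll(function () {
        loginHelper.loginAs(browser.params.user);
    });

    it('should show the logged-in user', function () {
        browser.get('/dashboard');
        expect(element(by.css('.username')).getText())
            .toEqual(browser.params.user.login);
    });
});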
1) I tried to make a config.js where I added all the variables, and I
required it in protractor.conf.js. It loads correctly; the problem is
that when I use any of these variables in a test, it does not work
(the test fails with "userName is not defined"). I know there is a way
to require the config file in each test script, but that's really not
the best option in my eyes.
One option which I've personally used would be to create a config.js file with all the reusable configuration variables you would need in multiple tests and require the file once - in the protractor config - then set it as a params configuration key value:
var config = require("./config.js");

exports.config = {
    ...
    params: config,
    ...
};
where config.js is, for example:
var config;

config = {
    user: {
        login: "user",
        password: "password"
    }
};

module.exports = config;
Then, you would not need to require config.js in every test, but instead, you'll use browser.params. For example:
expect(browser.params.user.login).toEqual("user");
Also, if you need some sort of a global test preparation step, you can do it in onPrepare() function, see Setting Up the System Under Test. Example configuration that performs a "global" login step is available here.
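For instance, a global login in onPrepare might look roughly like this (a sketch with an invented URL and selectors; browser.driver is used directly in case the login page is not an Angular page):

onPrepare: function () {
    browser.driver.get('http://localhost:8080/login');
    browser.driver.findElement(by.id('username')).sendKeys(browser.params.user.login);
    browser.driver.findElement(by.id('password')).sendKeys(browser.params.user.password);
    browser.driver.findElement(by.id('submit')).click();
    // Returning a promise makes Protractor wait for the login
    // to finish before any spec runs:
    return browser.driver.wait(function () {
        return browser.driver.getCurrentUrl().then(function (url) {
            return /home/.test(url);
        });
    }, 10000);
}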
And another quick note: you can have custom globally defined variables (like the built-in browser or protractor); set them using global in onPrepare. For example, I've defined protractor.ExpectedConditions as a custom global variable:
onPrepare: function () {
    global.EC = protractor.ExpectedConditions;
}
Then, in tests, you don't need to require anything; the EC variable is available in scope, e.g.:
browser.wait(EC.invisibilityOf(scope.page.dropdown), 5000)
Also, organizing your tests using "Page Object Pattern" would also help to solve the reusability and modularity problem.
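For instance, the hypothetical login helper used in the beforeAll() sketch above could itself be a page object (names and selectors invented):

// helpers/login.js - a minimal page object sketch
var LoginPage = function () {
    this.username = element(by.id('username'));
    this.password = element(by.id('password'));
    this.submit = element(by.id('submit'));

    this.loginAs = function (user) {
        browser.get('/login');
        this.username.sendKeys(user.login);
        this.password.sendKeys(user.password);
        this.submit.click();
    };
};

module.exports = new LoginPage();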

How to log correctly with Mocha/Velocity (Meteor testing)?

What's the correct way to go about logging information from tests when using the Velocity framework with Meteor?
I have some Mocha tests that I'd like to output some values from. I guess it'd be good if the output could end up in the logs section of the Velocity window... but there doesn't seem to be any documentation anywhere?
I haven't seen it documented either.
I don't know how to log messages into the Velocity window, though I don't like the idea of logging into the UI anyway.
What I've done is create a simple Logger object that wraps all of my console.{{method}} calls and suppresses logging if process.env.IS_MIRROR is set. That way only test framework messages show up on the terminal. If I need to debug a specific test, I temporarily turn logging back on in Logger.
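Such a wrapper might look roughly like this (a sketch of the idea described above, not the author's actual code):

// logger.js - silence console output inside Velocity mirror runs
var enabled = !process.env.IS_MIRROR;

var Logger = {};
['log', 'info', 'warn', 'error'].forEach(function (method) {
    Logger[method] = function () {
        if (enabled) {
            console[method].apply(console, arguments);
        }
    };
});

// Flip logging back on temporarily when debugging a specific test:
Logger.force = function () { enabled = true; };

module.exports = Logger;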
This is a terrible hack. It will expose an unprotected method that writes to your DB.
But it works.
I was really annoyed by the lack of this feature, so I dug into the Velocity code and found out that they have a VelocityLogs collection that is globally accessible. But you need to access it from your production instance, not your testing instance, to see it in the web reporter.
It then took me a good while to get Meteor CORS enabled, but I finally managed - even for Firefox - to create a new route within IronRouter to POST log messages to. (CORS could be nicer with this suggestion - but you really shouldn't expose this anyway.)
You'll need to meteor add http for this.
Place outside of /tests:
if Meteor.isServer
    Router.route 'log', ->
        if @request.method is 'OPTIONS'
            @response.setHeader 'Access-Control-Allow-Origin', '*'
            @response.setHeader 'Access-Control-Allow-Methods', 'POST, OPTIONS'
            @response.setHeader 'Access-Control-Max-Age', 1000
            @response.setHeader 'Access-Control-Allow-Headers', 'origin, x-csrftoken, content-type, accept'
            @response.end()
            return
        if @request.method is 'POST'
            logEntry = @request.body
            logEntry.level ?= 'unspecified'
            logEntry.framework ?= 'log hack'
            logEntry.timestamp ?= moment().format("HH:mm:ss.SSS")
            _id = VelocityLogs.insert(logEntry)
            @response.setHeader 'Access-Control-Allow-Origin', '*'
            @response.end(_id)
            return
    , where: 'server'
Within tests/mocha/lib or similar, as a utility function:
@log = (message, framework, level) ->
    HTTP.post "http://localhost:3000/log",
        data: { message: message, framework: framework, level: level }
        (error) -> console.dir error
For coffee haters: coffeescript.org > TRY NOW > Paste the code to convert > Get your good old JavaScript.
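For reference, the utility above converts to roughly this JavaScript:

this.log = function (message, framework, level) {
    return HTTP.post("http://localhost:3000/log", {
        data: { message: message, framework: framework, level: level }
    }, function (error) {
        return console.dir(error);
    });
};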