Sorry for a newbie question, I am still new to mocha. I have an existing app that I've been tasked with writing mocha test cases for. This app uses passport-auth0 and passport for user login. How do I write a mocha test such that I can log in as a dummy user to test restricted functions?
Since Passport uses strategies that can be swapped in, one option is to put a mock in place of the actual auth0 strategy when running tests. That would look something like:
function MockStrategy() {
}
MockStrategy.prototype.authenticate = function(req) {
  // Passport augments the strategy with success/fail at authenticate time.
  this.success({ id: 1, username: 'joe' });
};
// In test setup, mock out the strategy with one that returns
// dummy data.
passport.use('auth0', new MockStrategy());
Now you can write test cases where req.user will always be the dummy user joe supplied by the mock strategy. You can extend that to cover authentication failures in a similar way.
That is my preferred approach, as it is least intrusive. Depending on how the application code is structured, there may be other dependencies that get required and need to be mocked out. For those situations, I've found proxyquire to be useful.
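For example, a mocha test against the mocked strategy might look something like the sketch below. It assumes supertest is installed, that the Express app is exported from ../app, that some route (here /login) calls passport.authenticate('auth0'), and that /restricted is one of the protected routes; adjust the names to your project.
var request = require('supertest');
var passport = require('passport');
var app = require('../app'); // assumption: the app module exports its Express instance

describe('restricted routes', function() {
  var agent;

  before(function() {
    // Swap the real auth0 strategy for the MockStrategy defined above.
    passport.use('auth0', new MockStrategy());
    // An agent keeps the session cookie between requests.
    agent = request.agent(app);
  });

  it('lets the dummy user reach a restricted page', function(done) {
    // Hitting the login route triggers the mock strategy, which "logs in" joe.
    agent.get('/login').end(function(err) {
      if (err) return done(err);
      agent.get('/restricted').expect(200, done);
    });
  });
});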
I'm dealing with a problem I have in a Symfony 4 API functional tests. My functional tests consists in making requests to the API and analyze the response given. I've been working like this and works fine.
The problem comes with a new API method I'm implementing which needs the perform a request to an external service. I want to mock during my tests, but I don't know how can I create a mock that persists when the API receives the request from the functional test.
I've been thinking about something like create mocks which are always used in the test environment but I haven't found anything...
You can mock the HTTP client service, check the URL it is called with, and return a canned response whenever it matches your external API URL. It would look something like this:
$guzzleServiceMock = $this
    ->getMockBuilder(GuzzleHttp\Client::class)
    ->disableOriginalConstructor()
    ->setMethods(['get'])
    ->getMock();

$guzzleServiceMock
    ->expects($this->any())
    ->method('get')
    ->with(
        $this->stringContains('/external/api/route')
    )
    ->willReturnCallback(
        function ($uri, $options = []) {
            // Return a canned response instead of calling the external API.
            return new Response(
                200,
                [],
                '{"result": {"status": "success", "data": "fake data"}}'
            );
        }
    );
The next step is to inject the mocked service into the container. For that, take a look at this repo, which has good examples of how it can be done: https://github.com/peakle/symfony-4-service-mock-examples/blob/master/tests/Util/BaseServiceTest.php
I'm not sure if I understand correctly, but if you'd like to mock an external API (meaning your application is connecting to another application on another server), one solution would be to actually launch a mock server to replace your actual external server.
Either you implement such a mock server yourself or you use an existing solution such as http://wiremock.org/
I've been searching for a while for how I should test my data access layer, without a lot of success. Let me list my concerns:
Unit tests
This guy (here: Using InMemoryConnection to test ElasticSearch) says that:
Although asserting the serialized form of a request matches your expectations in the SUT may be sufficient.
Is it really worth asserting the serialized form of requests? Do these kinds of tests have any value? It seems unlikely that someone would change a function in a way that should not change the serialized request.
If it is worth it, what is the correct way to assert these requests?
Unit tests once again
Another guy (here: ElasticSearch 2.0 Nest Unit Testing with MOQ) shows a good-looking example:
void Main()
{
    var people = new List<Person>
    {
        new Person { Id = 1 },
        new Person { Id = 2 },
    };

    var mockSearchResponse = new Mock<ISearchResponse<Person>>();
    mockSearchResponse.Setup(x => x.Documents).Returns(people);

    var mockElasticClient = new Mock<IElasticClient>();
    mockElasticClient.Setup(x => x
        .Search(It.IsAny<Func<SearchDescriptor<Person>, ISearchRequest>>()))
        .Returns(mockSearchResponse.Object);

    var result = mockElasticClient.Object.Search<Person>(s => s);

    Assert.AreEqual(2, result.Documents.Count());
}

public class Person
{
    public int Id { get; set; }
}
Probably I'm missing something, but I can't see the point of this code snippet. First he mocks an ISearchResponse to always return the people list, then he mocks an IElasticClient to return that search response for any search request he makes.
Well, it doesn't really surprise me that the assertion is true after that. What is the point of these kinds of tests?
Integration tests
Integration tests make more sense to me for testing a data access layer. After a little searching I found this package (https://www.nuget.org/packages/elasticsearch-inside/). If I'm not mistaken, it is essentially an embedded JVM plus Elasticsearch. Is it good practice to use it? Shouldn't I use my already running instance?
If anyone has good experience with testing that I didn't include I would happily hear those as well.
Each of the approaches that you have listed may be a reasonable approach to take, depending on exactly what it is you are trying to achieve with your tests, which you haven't specified in your question :)
Let's go over the options that you have listed
1. Asserting the serialized form of the request to Elasticsearch may be a sufficient approach if you build a request to Elasticsearch based on a varying number of inputs. You may have tests that provide different input instances and assert the form of the query that will be sent to Elasticsearch for each. These kinds of tests are fast to execute, but they assume that the generated query whose form you are asserting is actually going to return the results that you expect.
2. This is another form of unit test that stubs out the interaction with the Elasticsearch client. The system under test (SUT) in this example is not the client but another component that internally uses the client, so the interaction with the client is controlled through the stub object to return an expected response. The example is contrived in that, in a real test, you wouldn't assert on the results of the client call as you point out, but rather on the output of the SUT.
3. Integration/behavioural tests against a known data set within an Elasticsearch cluster may provide the most value and go beyond points 1 and 2, as they will not only incidentally test the generated queries sent to Elasticsearch for a given input, but will also test the interaction and produce an expected result. No doubt these types of tests are harder to set up than 1 and 2, but the investment in setup may be outweighed by their benefit for your project.
So, you need to ask yourself what kinds of tests are sufficient to achieve the level of assurance that you require to assert that your system is doing what you expect it to do; it may be a combination of all three different approaches for different elements of the system.
You may want to check out how the .NET client itself is tested; there are components within the Tests project that spin up an Elasticsearch cluster with different plugins installed, seed it with known generated data and make assertions on the results. The source is open and licensed under Apache 2.0 license, so feel free to use elements within your project :)
I have an API that I've written and now I'm in the middle of writing an SDK for 3rd parties to more easily interact with my API.
When writing tests for my SDK, it's my understanding that it's best not to simply call all of the API endpoints because:
1. The tests in the API will be responsible for making sure that the API works.
2. If the SDK tests did directly call the API, my tests would be really slow.
As an example, let's say my API has this endpoint:
/account
In my API test suite I actually call this endpoint to verify that it returns the proper data.
What approach do I take to test this in my SDK? Should I be mocking a request to /account? What else would I need to do to give my SDK good coverage?
I've looked to other SDKs to see how they're handling this (Stripe, Algolia, AWS), but in some cases it does look like they're calling a sandbox version of the actual API.
(I'm currently working with PHPUnit, but I'll be writing SDKs in other languages as well.)
I ended up taking this approach:
I have both unit tests AND integration tests.
My integration tests call the actual API. I usually run this much less frequently — like before I push code to a remote. (Anyone who consumes my code will have to supply their own API credentials)
My unit tests — which I run very frequently — just make sure that the responses from my code are what I expect them to look like. I trust the 3rd party API is going to give me good data (and I still have the integration tests to back that up).
I've accomplished this by mocking Guzzle, using Reflection to replace the client instance in my SDK code, and then using Mock Handlers to mock the actual response I expect.
Here's an example:
/** @test */
public function it_retrieves_an_account()
{
    $account = $this->mockClient()->retrieve();

    $this->assertEquals(json_decode('{"id": "9876543210"}'), $account);
}

protected function mockClient()
{
    // Queue a canned response on a Guzzle mock handler.
    $stream = Psr7\stream_for('{"id": "9876543210"}');
    $mock = new MockHandler([new Response(
        200,
        ['Content-Type' => 'application/json'],
        $stream
    )]);
    $handler = HandlerStack::create($mock);
    $mockClient = new Client(['handler' => $handler]);

    $account = new SparklyAppsAccount(new SparklyApps('0123456789'));

    // Swap the real client inside the SDK object for the mock via reflection.
    $reflection = new \ReflectionClass($account);
    $reflection_property = $reflection->getProperty('client');
    $reflection_property->setAccessible(true);
    $reflection_property->setValue($account, $mockClient);

    return $account;
}
When writing tests for the SDK you assume that your api DOES work exactly like it should (and you write tests for your api to assure that).
So using some kind of sandbox or even a complete mock of your api is sufficient.
I would recommend mocking your API using something like WireMock and then writing your unit tests around that mock API to make sure everything works as desired.
This way, when your production app breaks, you can at least make sure (by running the unit tests) that nothing is broken on your application side and that the problem is likely with the actual API (e.g. the response format changed).
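For example, a WireMock stub for an endpoint like /account from your question could be a JSON mapping along these lines (the URL and body are placeholders; the file goes in WireMock's mappings directory):
{
  "request": {
    "method": "GET",
    "url": "/account"
  },
  "response": {
    "status": 200,
    "headers": { "Content-Type": "application/json" },
    "jsonBody": { "id": "9876543210" }
  }
}
Your unit tests then point the SDK's base URL at the WireMock server instead of the real API.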
I am currently researching ways to integrate a testsuite for an application based on ember.js into travis-ci. So first off, we're not on the open-source service, we use it for private repositories, etc..
I looked at how several open-source projects run their ember.js test suite and it looks like they set up a server with their project which probably gets updated whenever someone pushes to the repository. Then PhantomJS is used to run the tests on that server (and actually not on travis-ci itself).
The problem I have with this approach is that this adds another step (and ultimately complexity): I have to update and maintain a server with the latest code so I can use PhantomJS to run the test suite.
Another drawback is that I don't see how it would enable us to test PRs (pull requests) either. The server would have to be updated with code from the PR. Testing PRs before they are merged is one of the great things about travis-ci.
I couldn't find much/anything about running ember.js tests only through the CLI – I am hoping someone tackled this issue before me.
I can't speak to your questions about travis-ci ... but I can offer some thoughts about unit testing ember.js code with jasmine.
Before I started using ember.js I was unit testing with jasmine and a simple node.js module called jasmine-node. This allowed me to quickly run a suite of jasmine unit tests from the command line without having to open a browser or hack around with "js-test runner" / etc
That worked great when I had jasmine, jQuery and the simple JavaScript modules I used to keep my code human readable. But the moment I needed to use ember/handlebars/etc, the jasmine-node module fell down, because it expects everything to be available on both global and window, and because ember is just a browser library, not everything was on global.
I started looking at PhantomJS and like yourself couldn't see myself adding the complexity. So instead of hacking around this I decided to take a weekend and write what was missing from the jasmine test runner space. I wanted the same power of jasmine-node (meaning all I would need on my CI box was a recent version of node.js and a simple npm module to run the tests)
I wrote an npm module called jasmine-phantom-node. At its core it uses node.js to run PhantomJS, which in turn fires up a regular jasmine HTML runner and scrapes the page for test results using a very basic express web app.
I spent the time to put 2 different examples in the github project so others could quickly see how it works. It's opinionated, so you will need an html file in your project root that the plugin uses to execute your tests. It also requires jasmine and jasmine-html, along with a recent jQuery.
It solved this issue for me personally, and now I can write tests against ember using simple jasmine and run them from the command line without a browser.
Here is a sample jasmine unit test that I wrote against an ember view recently while spiking around with this test runner. Here is a link to the full ember / django project if you want to see how the view under test is used in the app.
require('static/script/vendor/filtersortpage.js');
require('static/script/app/person.js');

describe("PersonApp.PersonView Tests", function() {

  var sut, router, controller;

  beforeEach(function() {
    sut = PersonApp.PersonView.create();
    router = new Object({ send: function() {} });
    controller = PersonApp.PersonController.create({});
    controller.set("target", router);
    sut.set("controller", controller);
  });

  it("does not invoke send on router when username does not exist", function() {
    var event = { 'context': { 'username': '', 'set': function() {} } };
    var sendSpy = spyOn(router, 'send');
    sut.addPerson(event);
    expect(sendSpy).not.toHaveBeenCalledWith('addPerson', jasmine.any(String));
  });

  it("invokes send on router with username when exists", function() {
    var event = { 'context': { 'username': 'foo', 'set': function() {} } };
    var sendSpy = spyOn(router, 'send');
    sut.addPerson(event);
    expect(sendSpy).toHaveBeenCalledWith('addPerson', 'foo');
  });

  it("does not invoke set context when username does not exist", function() {
    var event = { 'context': { 'username': '', 'set': function() {} } };
    var setSpy = spyOn(event.context, 'set');
    sut.addPerson(event);
    expect(setSpy).not.toHaveBeenCalledWith('username', jasmine.any(String));
  });

  it("invokes set context to empty string when username exists", function() {
    var event = { 'context': { 'username': 'foo', 'set': function() {} } };
    var setSpy = spyOn(event.context, 'set');
    sut.addPerson(event);
    expect(setSpy).toHaveBeenCalledWith('username', '');
  });
});
Here is the production ember view that I'm unit testing above
PersonApp.PersonView = Ember.View.extend({
  templateName: 'person',

  addPerson: function(event) {
    var username = event.context.username;
    if (username) {
      this.get('controller.target').send('addPerson', username);
      event.context.set('username', '');
    }
  }
});
Working on my first EmberJS app. The entire app requires that a user be logged in. I'm trying to wrap my head around the best way to enforce that a user is logged in now (when the page is initially loaded) and in the future (when user is logged out and there is no refresh).
I have the user authentication hooks handled - right now I have an ember-data model and associated store that handle authorizing a user and creating a user "session" (using sessionStorage).
What I don't know how to do is enforce that a user is authenticated when transitioning across routes, including the initial transition in the root route. Where do I put this logic? If I have an authentication statemanager, how do I hook that in to the routes? Should I have an auth route that is outside of the root routes?
Note: let me know if this question is poorly worded or I need to explain anything better, I will be glad to do so.
Edit:
I ended up doing something that I consider a little more ember-esque, albeit possibly a messy implementation. I have an auth statemanager that stores the current user's authentication key, as well as the current state.
Whenever something needs authentication, it simply asks the authmanager for it and passes a callback function to run with the authentication key. If the user isn't logged in, it pulls up a login form, holding off the callback function until the user logs in.
Here's some select portions of the code I'm using. Needs cleaning up, and I left out some stuff. http://gist.github.com/3741751
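Roughly, the idea looks like this (an illustrative sketch only; the names are made up for this write-up and the real code is in the gist above):
// Illustrative only -- the real implementation lives in the gist linked above.
var AuthManager = {
  authKey: null,
  pendingCallback: null,

  // Anything that needs authentication calls this with a callback.
  withAuth: function(callback) {
    if (this.authKey) {
      callback(this.authKey);          // already logged in: run immediately
    } else {
      this.pendingCallback = callback; // hold the callback until login succeeds
      this.showLoginForm();
    }
  },

  // Called by the login form once the user has authenticated.
  didLogIn: function(key) {
    this.authKey = key;
    if (this.pendingCallback) {
      this.pendingCallback(key);
      this.pendingCallback = null;
    }
  },

  showLoginForm: function() {
    // e.g. render the login view / transition to a login state
  }
};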
If you need to perform a check before initial state transition, there is a special function on the Ember.Application class called deferReadiness(). The comment from the source code:
By default, the router will begin trying to translate the current URL into
application state once the browser emits the DOMContentReady event. If you
need to defer routing, you can call the application's deferReadiness() method.
Once routing can begin, call the advanceReadiness() method.
Note that at the time of writing, this function is available only in ember-latest.
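A minimal sketch of how that is used (restoreSession here stands in for whatever asynchronous check you run against your stored session):
window.App = Ember.Application.create();

// Hold off routing until we know whether the user is logged in.
App.deferReadiness();

restoreSession(function(loggedIn) { // restoreSession is a placeholder for your own check
  App.loggedIn = loggedIn;
  App.advanceReadiness();           // routing begins only after this call
});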
In terms of rechecking authentication between route transitions, you can add hooks to the enter and exit methods of Ember.Route:
var redirectToLogin = function(router) {
  // Do your login check here.
  if (!App.loggedIn) {
    Ember.run.next(this, function() {
      if (router.currentState.name != "login") {
        router.transitionTo('root.login');
      }
    });
  }
};

// Define the routes.
App.Router = Ember.Router.extend({
  root: Ember.Route.extend({
    enter: redirectToLogin,
    login: Ember.Route.extend({
      route: 'login',
      exit: redirectToLogin,
      connectOutlets: function(router) {
        router.get('applicationController').connectOutlet('login');
      }
    }),
    ....
  })
});
The problem with such a solution is that Ember will actually transition to the new Route (and thus load all data, etc) before then transitioning back to your login route. So that potentially exposes bits of your app you don't want them seeing any longer. However, the reality is that all of that data is still loaded in memory and accessible via the JavaScript console, so I think this is a decent solution.
Also remember that since Ember.Route.extend returns a new object, you can create your own wrapper and then reuse it throughout your app:
App.AuthenticatedRoute = Ember.Route.extend({
  enter: redirectToLogin
});

App.Router = Ember.Router.extend({
  root: Ember.Route.extend({
    index: App.AuthenticatedRoute.extend({
      ...
    })
  })
});
If you use the above solution then you can cherry pick exactly which routes you authenticate. You can also drop the "check if they're transitioning to the login screen" check in redirectToLogin.
I put together a super simple package to manage session and auth called Ember.Session https://github.com/andrewreedy/ember-session
Please also take a look at:
http://www.embercasts.com/
There are two screencasts there about authentication.
Thanks.