How to use Stub object with tinytest and meteorjs?

This weekend I tried to test a package "A" from my Meteor app.
This package depends on another package "B" that defines all the collections, so package "B" exposes every required collection.
Package "A" exposes a main object whose methods use the collections exposed by "B".
I want to replace some collections with code like this:
myCol = {
  findOne: function () {
    return {_id: 1, name: 'ben'};
  }
};
But it fails. This code works from within the tinytest.add code, but the methods of package "A" still use the original collection variables. I've seen in the build folder that everything is rewritten by the build system, so I wonder what the best way is to test my code without depending on those collection variables.
One idea I have is to store those variables in a main object with get/set methods; that would let me swap everything out when I test (see the sketch below).
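For illustration, here's a rough sketch of that get/set idea (the Collections registry and the Posts name are just placeholders, not anything the packages actually expose):
// Hypothetical sketch: package "B" exposes a registry instead of bare
// collection variables, so tests can swap in fakes.
Collections = {
  _store: {},
  set: function (name, collection) { this._store[name] = collection; },
  get: function (name) { return this._store[name]; }
};

// Package "A" code looks collections up at call time:
// var post = Collections.get('Posts').findOne();

// A test can then replace the real collection with a stub:
Collections.set('Posts', {
  findOne: function () { return {_id: 1, name: 'ben'}; }
});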
Thanks for the help.
Here is the sample app: https://github.com/MeteorLyon/tutorial-package-dependancy-testing
Follow the README.md to run the different tests.
If you find a solution, that would be great.

If you are looking for stubs, I'd highly recommend using sinon. Specifically, have a look at the stubs and the sandbox portions of the docs. You can find atmosphere packages here. Here's a quick example:
Tinytest.add('my test', sinon.test(function (test) {
  // this is a sandboxed stub - we are writing to a global object,
  // but it will be restored at the end of the test
  // (sinon.test exposes the sandbox as `this`)
  this.stub(Meteor, 'userId', function () {
    return USER_ID; // USER_ID is assumed to be defined elsewhere in the test file
  });

  // let's do the same thing with a collection
  this.stub(Posts, 'findOne', function () {
    return {_id: 1, name: 'ben'};
  });

  var post = Posts.findOne();
  test.equal(post.name, 'ben');
}));
Keep in mind that tinytest is an integration test framework, so you may get better tests by fully utilizing both packages' APIs. With respect to testing collection interactions, we've found it's better not to stub very much and just insert and clean up as needed. But that's pretty general advice - there may be some specific reason why it can't work in your particular use case.
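For instance, a minimal insert-and-clean-up test might look like this (assuming package "B" exposes a Posts collection):
Tinytest.add('posts - findOne returns an inserted post', function (test) {
  var id = Posts.insert({name: 'ben'});
  try {
    test.equal(Posts.findOne(id).name, 'ben');
  } finally {
    Posts.remove(id); // clean up so later tests start from a known state
  }
});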

Related

How would you redirect calls to the top object in Cypress?

In my application code there are a lot of calls (100+) to the "top object", meaning window.top, such as top.$("title") and so forth. Now I've run into a problem using Cypress to perform end-to-end testing. When trying to log into the application, there are some calls to top.$(...), but the DevTools shows an Uncaught TypeError: top.$ is not a function. This led my team and me to discover that the "top" our application is reaching is the Cypress environment itself.
The things I've tried before coming here are:
1) Stubbing window.top with the window object that references our app. This resulted in us being told window.top is a read-only object.
2) Researching whether Cypress has some kind of configuration that would smartly redirect calls to top in our code to the top-most environment within our app. We figured we probably weren't the only ones to come across this issue.
If there were articles, I couldn't find any, so I came here to ask whether there is a way to do that, or whether anyone knows of an alternative solution.
Another solution we considered: naming our window objects so we can reference them by name instead of via "window" or "top" (see the sketch below). If there isn't a way to do what I'm trying to do through Cypress, we're willing to use this as a last resort, but hopefully we won't have to, since we're not sure how much of the app it would break.
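As a rough illustration of that idea (untested; the appTop helper and the data-our-app marker are made up for this sketch, and it assumes all frames involved are same-origin):
// Hypothetical helper the app could use instead of referencing `top` directly.
// Under Cypress the app runs inside an iframe, so window.top is the test
// runner; this walks up only through frames that belong to the app itself.
function appTop() {
  var win = window;
  while (win.parent !== win && win.parent.document.querySelector('[data-our-app]')) {
    win = win.parent;
  }
  return win;
}

// Call sites would then change from top.$('title') to appTop().$('title').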
#Mikkel I'm not really sure what code I can provide that would be useful, but here's the code that causes Cypress to throw the uncaught exception:
if (sample_condition) {
  top.$('title').text(...).find('content') // Our iframe
} else {
  top.$('title').text(page_title)
}
There are more instances in our code where we access the top object, but they are generally similar. We found that the root cause of the issue is that, within Cypress, calls to "top" actually reach Cypress itself instead of their intended environment, which is our app.
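One possible direction (untested, and assuming the app exposes jQuery globally as $) would be to hand the runner's top a $ before the app loads:
// Sketch only: point top.$ at the app window's jQuery so legacy
// top.$(...) calls resolve; note they will operate on the app
// document via the closed-over `win`.
cy.visit('/login', {
  onBeforeLoad(win) {
    win.top.$ = function (selector) {
      return win.$(selector);
    };
  }
});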
This may not be a direct answer to your question; it's just expanding on your request for more information about the technique I used to pass info from one script to another. I tried to do it within the same script without success - basically because the async nature of .then() stopped it from working.
This snippet is where I read a couple of ids from sessionStorage and save them to a JSON file.
//
// At this point the cart is set up, and in sessionStorage
// So we save the details to a fixtures file, which is read
// by another test script (e2e-purchase.js)
//
cy.window().then(window => {
  const contents = {
    memberId: window.sessionStorage.getItem('memberId'),
    cartId: window.sessionStorage.getItem('mycart')
  }
  cy.writeFile(`tests/cypress/fixtures/cart.json`, contents)
})
Another script loads the file as a fixture (fixtures/cart.json) to pull in the two ids:
cy.fixture(`cart`).then(cart => {
  cy.visit(`/${cart.memberId}/${cart.cartId}`)
})

Changing window.navigator within puppeteer to bypass antibot system

I'm trying to make my online bot undetectable. I read a number of articles on how to do it, gathered all the tips together, and applied them. One of them is to change window.navigator.webdriver.
I managed to change window.navigator.webdriver within puppeteer with this code:
await page.evaluateOnNewDocument(() => {
  Object.defineProperty(navigator, 'webdriver', {
    get: () => undefined
  });
});
I'm bypassing one detection test just fine. However, another test is still laughing at me somehow. Why is WEBDRIVER reported inconsistently between the two?
Try this. First, remove your property definition - the prototype deletion below will not work if you both define the property and delete it:
Object.defineProperty(navigator, 'webdriver', ()=>{}) // <-- delete this part
Then replace your code with this one:
delete navigator.__proto__.webdriver;
Why does it work?
Deleting the property directly from navigator only removes it from that object instance; the getter and setter are still there on the prototype, so the browser can find them.
However, if you remove it from the actual prototype, it will not exist on any instance anymore.
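Putting the two together, the deletion can be applied from puppeteer's new-document hook (a minimal sketch of the same fix as above):
await page.evaluateOnNewDocument(() => {
  // Remove the accessor from the prototype itself (equivalent to
  // navigator.__proto__), so no navigator instance exposes webdriver.
  delete Object.getPrototypeOf(navigator).webdriver;
});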
Additional Tips
You mentioned you want to make your bot undetectable. There are many plugins that achieve the same thing; for example, the package puppeteer-extra-plugin-stealth includes some cool anti-bot-detection techniques. Sometimes it's better to reuse an existing package than to re-create a solution over and over again.
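Roughly, usage looks like this (adapted from the plugin's documented example; the target URL is just a placeholder):
const puppeteer = require('puppeteer-extra');
const StealthPlugin = require('puppeteer-extra-plugin-stealth');

// Register the stealth evasions before launching the browser.
puppeteer.use(StealthPlugin());

puppeteer.launch({ headless: true }).then(async browser => {
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await browser.close();
});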
PS: I might be wrong about the above explanation; feel free to correct me so I can improve the answer.

Elasticsearch testing (unit/integration) best practices in C# using NEST

I've been searching for a while for how I should test my data access layer, without a lot of success. Let me list my concerns:
Unit tests
This guy (here: Using InMemoryConnection to test ElasticSearch) says that:
"Although asserting the serialized form of a request matches your expectations in the SUT may be sufficient."
Is it really worth asserting the serialized form of requests? Do these kinds of tests have any value? It seems unlikely that a function which should not change would start producing a different serialized request. If it is worth it, what is the correct way to assert these requests?
Unit tests once again
Another guy (here: ElasticSearch 2.0 Nest Unit Testing with MOQ) shows a good-looking example:
void Main()
{
    var people = new List<Person>
    {
        new Person { Id = 1 },
        new Person { Id = 2 },
    };

    var mockSearchResponse = new Mock<ISearchResponse<Person>>();
    mockSearchResponse.Setup(x => x.Documents).Returns(people);

    var mockElasticClient = new Mock<IElasticClient>();
    mockElasticClient.Setup(x => x
        .Search(It.IsAny<Func<SearchDescriptor<Person>, ISearchRequest>>()))
        .Returns(mockSearchResponse.Object);

    var result = mockElasticClient.Object.Search<Person>(s => s);
    Assert.AreEqual(2, result.Documents.Count());
}

public class Person
{
    public int Id { get; set; }
}
Probably I'm missing something, but I can't see the point of this code snippet. First he mocks an ISearchResponse to always return the people list. Then he mocks an IElasticClient to return this previous search response to any search request he makes.
Well, it doesn't really surprise me that the assertion is true after that. What is the point of these kinds of tests?
Integration tests
Integration tests make more sense to me for testing a data access layer. After a little searching I found this package (https://www.nuget.org/packages/elasticsearch-inside/). If I'm not mistaken, it just spins up an embedded JVM with an embedded Elasticsearch. Is it good practice to use it? Shouldn't I use my already-running instance instead?
If anyone has good experience with testing approaches I didn't include, I would happily hear about those as well.
Each of the approaches that you have listed may be reasonable, depending on exactly what it is you are trying to achieve with your tests; you haven't specified this in your question :)
Let's go over the options you have listed:
1) Asserting the serialized form of the request to Elasticsearch may be a sufficient approach if you build a request based on a varying number of inputs. You might have tests that provide different input instances and assert the form of the query that will be sent to Elasticsearch for each. These kinds of tests are fast to execute, but they assume that the generated query whose form you assert is going to return the results that you expect.
2) This is another form of unit test, one that stubs out the interaction with the Elasticsearch client. The system under test (SUT) in this example is not the client but another component that internally uses the client, so the interaction with the client is controlled through the stub object to return an expected response. The example is contrived in that, in a real test, you wouldn't assert on the results of the client call as you point out, but rather on the output of the SUT.
3) Integration/behavioural tests against a known data set within an Elasticsearch cluster may provide the most value and go beyond points 1 and 2: they will not only incidentally test the generated queries sent to Elasticsearch for a given input, but will also test the interaction and produce an expected result. No doubt these types of tests are harder to set up than 1 and 2, but the investment in setup may be outweighed by their benefit for your project.
So, you need to ask yourself what kinds of tests are sufficient to achieve the level of assurance that you require to assert that your system is doing what you expect it to do; it may be a combination of all three different approaches for different elements of the system.
You may want to check out how the .NET client itself is tested; there are components within the Tests project that spin up an Elasticsearch cluster with different plugins installed, seed it with known generated data and make assertions on the results. The source is open and licensed under Apache 2.0 license, so feel free to use elements within your project :)

How to use hapi-swaggered without a running server

I have a working hapi service, complete with hapi-swaggered and hapi-swaggered-ui. This is useful in many cases, but I want to add a build step to my CI that can get the JSON generated by hapi-swaggered (which, if changed, would be compiled into a .NET assembly that gets stored in a local ProGet).
I know that if I really wanted to, I could start an instance of my server on the build server, curl localhost:3000/swagger, kill the server, and proceed, but that seems a little risky (i.e., what if two builds run at the same time?).
Has anyone developed a way to directly call the hapi-swaggered API to get the raw JSON?
Well, that didn't take too much longer, but I think I found a solution. In this case, internals is my server. It does not auto-start if it's loaded (require'd) from another file, and it exposes a compose method that uses hapi's Glue.compose to assemble the service. It seems that I can then use the inject method to simulate a call:
'use strict';
var internals = require('./');

internals.compose(function (err, server) {
  server.inject({ method: 'GET', url: '/swagger' }, function (response) {
    console.log(JSON.stringify(response.result));
    process.exit();
  });
});
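In CI, a script like this could hypothetically be run standalone with its output redirected to a file (e.g. node print-swagger.js > swagger.json, where print-swagger.js is whatever you name the script above), so no HTTP port ever needs to listen.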
If there's anything that I'm missing about this technique, I'd like to hear about it.

SpecFlow - How to use data driven tests like NUnits TestCaseSource property?

I'm a QA who decided to use SpecFlow for my test automation after some consideration. I think it's brilliant, but it's missing one feature I often used with other test runners such as NUnit: something similar to NUnit's TestCaseSource property, which lets you specify a potentially dynamic set of data for tests to run against at run time.
I often have different data in each environment the tests run in, so I cannot hardcode values for test parameters. A trivial example is checking that each type of user account is able to log in; the user account credentials can be retrieved using a DB query to populate each test case dynamically in NUnit:
public List<User> GetTestData()
{
    List<User> testData = new List<User>();
    testData = MyDatabase.GetAllUsersInfo().ToList();
    return testData;
}

[Test, TestCaseSource("GetTestData")]
public void CallLoginService(User user)
{
    var response = LoginController.TryLogin(user.UserName, user.Password);
    if (response.Error != null)
    {
        Assert.Fail("Failed to Login: {0}", response.Error);
    }
    Assert.AreEqual("Logged in ok", response.Message, "Login message not as expected");
}
Obviously this is a simple example of the feature, but I think it describes it well enough. I know we can use a Scenario Outline with a table of test input data in SpecFlow, but that is still static, so it doesn't fit the bill.
I've been looking for a while and have not found anything like this in SpecFlow yet. Does anybody know of anything similar to the above that can be used (or is planned, if anyone who works on the project reads this)?
Thanks :)
I have no idea whether anything like this is planned, but for now the problem is that there is a background code-generation step when you edit your feature file in Visual Studio.
When the file is saved in Visual Studio, it is parsed and converted into a feature.cs file, and that is what gets compiled and used for testing.
So your process would become:
1) edit your data source
2) export to a feature file
3) get SpecFlow's VS plugin to convert it to feature.cs
4) run msbuild
5) run the tests via NUnit or similar
I wouldn't do this. Instead, I'd focus on making my tests better examples. It sounds like you are trying to exhaustively cover every possibility; rather than coming up with examples to cover every possible case, cover as much logic as possible with fewer tests.