We use TestCafe for e2e tests, and we are currently fixing flaky tests. While doing this, we noticed that we use duplicate names for some fixtures and tests. Should we use unique names for fixtureName and testName, or are duplicate names allowed?
Duplicate names are allowed, but we don't recommend them, as it is bad practice.
There is nothing in the documentation against it, and the practice is allowed, but it's always best to keep names concise and unique: https://testcafe.io/documentation/402831/guides/basic-guides/organize-tests#fixtures
Is it possible to link multiple requirements to a single test case to create a test scenario? I am aware it is possible to link several test cases together to create a test plan; however, the test lead on one of my projects does not want to draft single test cases.
It is possible. A test case can contain several requirements and several links, and a test plan can contain several test cases. So what you want is completely achievable, and your test lead should not worry about it. Here is a document about creating test plans and test suites.
In our Behave-based BDD tests we see a need to add some metadata to the scenarios (for the purpose of test reports). The data is in the form of key/value pairs with a handful of keys and values are typically numbers. The structure will be parsed by our custom test report generator during and/or after the test run.
Is there a canonical way to do this in Gherkin? We considered adding them to the text of the scenario itself, e.g.
Scenario: Some scenario (somekey=42)
When ...
Behave also supports tags:
@sometag(42)
Scenario: Some scenario
When ...
but since tags have side effects (test selection), this seems messy. Another option we have is to do e.g.
#sometag(42)
Scenario: Some scenario
Given something
When something
Then assert
Then report somekey 42
but no solution feels "clean". Is there a canonical way in Gherkin to accomplish what we are trying to do?
There is no canonical way to associate metadata with scenarios or features. The closest thing you have is tags, and there is nothing particularly wrong with using them. Sure, you can filter your tests by tags, but that doesn't mean you cannot make up your own tag format for metadata. You can then do additional processing in a before- or after-scenario callback/hook.
If you do not need to do any processing during a test run, you can always use comments and make up your own format for them. You can then write a custom script to parse the comments out of the feature files. As long as you are consistent, it should be fine.
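As a minimal sketch of the tag approach: a small helper that pulls key/value metadata out of behave tags. The `meta:key=value` tag format here is an assumption of this sketch, not a behave or Gherkin convention, and the hook wiring at the bottom is just one way to use it.

```python
import re

# Hypothetical "meta:key=value" tag format -- an assumption of this sketch,
# not a behave or Gherkin convention.
META_TAG = re.compile(r"^meta:(?P<key>\w+)=(?P<value>-?\d+)$")

def extract_metadata(tags):
    """Return a {key: int_value} dict from tags such as 'meta:somekey=42'."""
    meta = {}
    for tag in tags:
        match = META_TAG.match(tag)
        if match:
            meta[match.group("key")] = int(match.group("value"))
    return meta

# In behave's environment.py, a hook could then stash the metadata for the
# report generator (behave strips the leading '@' from tag strings):
# def before_scenario(context, scenario):
#     context.meta = extract_metadata(scenario.tags)

print(extract_metadata(["meta:somekey=42", "slow"]))  # -> {'somekey': 42}
```

Tags that don't match the format (like `slow` above) are left alone, so ordinary selection tags keep working unchanged.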
Gherkin language reference: https://cucumber.io/docs/gherkin/reference/#keywords
I read Bob Martin's brilliant article on how "Given-When-Then" can actually be compared to an FSM. It got me thinking: is it OK for a BDD test to have multiple "When"s?
For example:
GIVEN my system is in a defined state
WHEN an event A occurs
AND an event B occurs
AND an event C occurs
THEN my system should behave in this manner
I personally think these should be three different tests for good separation of intent. But other than that, are there any compelling reasons for or against this approach?
When multiple steps (WHEN) are needed before you reach your actual assertion (THEN), I prefer to group them in the initial-condition part (GIVEN) and keep only one in the WHEN section. This shows that the event that really triggers the "action" of my SUT is that one, and that the previous ones are just steps to get there.
Your test would become:
GIVEN my system is in a defined state
AND an event A occurs
AND an event B occurs
WHEN an event C occurs
THEN my system should behave in this manner
but this is more of a personal preference I guess.
If you truly need to test that a system behaves in a particular manner under those specific conditions, it's a perfectly acceptable way to write a test.
I found that another limiting factor can appear in an E2E testing scenario where you would like to reuse a step multiple times. In my case, the BDD framework of my choice (pytest_bdd) is implemented so that a given step can have a single return value, which it maps to step input parameters automagically by the name of the function bound to the given step. This design prevents reusability, whereas in my case I wanted it: I needed to create objects and add them to a sequence object provided by another given step. I worked around the limitation with a test fixture (which I named test_context) that was a Python dictionary (a hash map), and used when steps, which don't have the same single-return-value restriction. The '(when) add object to sequence' step looked up the sequence in the context and appended the object in question to it, so I could reuse the add-object-to-sequence action multiple times.
This requirement was tricky because BDD aims to be descriptive. I could have used a single given step with a pickled memory map of the sequence object that I wanted to perform the test action on, BUT would it have been useful? I think not. I needed to get the sequence constructed first, and that needed reusable steps. And although this is not in the BDD bible, I think it is in the end a practical and pragmatic solution to a very real E2E descriptive-testing problem.
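A minimal sketch of that shared-context workaround, with plain functions standing in for pytest_bdd steps. The decorators and fixture wiring are omitted so the idea runs standalone; in real code `test_context` would be a pytest fixture and each function would carry a `@given`/`@when`/`@then` decorator, with the quoted step phrases below being hypothetical.

```python
# Plain-Python sketch of the shared-context workaround; the quoted phrases
# in the comments are the hypothetical Gherkin steps each function would
# be bound to in pytest_bdd.

def given_an_empty_sequence(test_context):
    # "Given an empty sequence" -- seeds the shared context
    test_context["sequence"] = []

def when_add_object_to_sequence(test_context, obj):
    # "When I add <obj> to the sequence" -- reusable any number of times,
    # because it looks the sequence up in the context instead of relying
    # on a given step's single return value
    test_context["sequence"].append(obj)

def then_sequence_is(test_context, expected):
    # "Then the sequence is <expected>"
    assert test_context["sequence"] == expected

# Simulated scenario run (in real code the dict comes from a fixture):
ctx = {}
given_an_empty_sequence(ctx)
when_add_object_to_sequence(ctx, "a")
when_add_object_to_sequence(ctx, "b")
then_sequence_is(ctx, ["a", "b"])
```

The point is that every step communicates through one mutable dictionary, so the same when step can run as often as the scenario needs it.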
I'm currently using FactoryGirl and RSpec to test my models, which is great but incredibly slow: the hundreds of tests I have for each model take about 30 seconds per model to run.
The core issue is that when I create an object to test, I'm using the FactoryGirl.create() method. That hits the DB and is definitely slower than using build or build_stubbed. But if I just use build, then I'll never find out about database errors (such as trying to write a null value to a column that I've specified as non-null), right?
Is there any way to get the best of both worlds? Or should I test the DB-integration part explicitly somewhere outside of the model/unit tests?
I don't know if this is applicable in your case, but have you considered tweaking your spec_helper.rb to get your suite to run faster?
I documented the evolution of my spec_helper.rb file in this StackOverflow answer (see specifically Edit 4), and the links to other SO answers and blogs listed there helped me a lot in reducing the running time of the suite.
I tend to use FactoryGirl.build, or just .new to create instances in model specs, and then save them only if the test needs to check some behavior that requires a persisted instance.
This can be problematic when using associations or joins where the row ID must be present. It's something of a tradeoff: speedy tests vs. tests that are easy to write.
You should use build most of the time. If you want to be sure that some value won't be saved as null, write a spec just for that; it makes no sense to always create the objects in the DB.
If you test once that the factory creates a valid object, you can trust it to always create valid objects.
Also, always use presence validations on the fields that can't be null/nil; if your field is not nil, you can be sure the DB won't end up with a null value.
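The build-versus-create tradeoff discussed above can be sketched with a plain in-memory stand-in. This is illustrative Python, not FactoryGirl's actual API; in the Ruby code the calls would be FactoryGirl.build and FactoryGirl.create, and the model and helper names here are hypothetical.

```python
# Illustrative stand-in (not FactoryGirl): a cheap in-memory "build" next
# to a validating "create", mirroring the build-vs-create tradeoff.

class User:
    def __init__(self, name=None):
        self.name = name
        self.persisted = False

    def valid(self):
        # analogue of a presence validation on `name`
        return bool(self.name)

def build_user(name="alice"):
    """build-style: in-memory only, fast -- the default for model specs."""
    return User(name=name)

def create_user(name="alice"):
    """create-style: validate and 'persist' -- reserve this for the few
    specs that actually need a saved record."""
    user = build_user(name)
    if not user.valid():
        raise ValueError("name can't be blank")  # stand-in for a NOT NULL constraint
    user.persisted = True
    return user

# One dedicated spec exercises the constraint path...
try:
    create_user(name=None)
except ValueError:
    pass
# ...while everything else stays in memory and never touches the "DB":
assert not build_user().persisted
```

The shape of the suite follows the advice above: one spec that proves the constraint/validation path works, and everything else built in memory.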
Is there a way to modularize JMeter tests?
I have recorded several use cases for our application. Each of them is in a separate thread group in the same test plan. To control the workflow I wrote some primitives (e.g. postprocessor elements) that are used in many of these thread groups.
Is there a way not to copy these elements into each thread group but to use some kind of referencing within the same test plan? What would also be helpful is a way to reference elements from a different file.
Does anybody have any solutions or workarounds? I guess I am not the only one trying to follow the DRY principle...
I think this post from Atlassian describes what you're after, using Module Controllers. I've not tried it myself yet, but I have it on my list of things to do :)
http://blogs.atlassian.com/developer/2008/10/performance_testing_with_jmete.html
You can't do this with JMeter; the UI doesn't support it. The Workbench would be a perfect place to store those common elements, but it's not saved in the JMX file.
However, you can parameterize just about anything so you can achieve similar effects. For example, we use the same regex post processor in several thread groups. Even though we can't share the processor, the whole expression is a parameter defined in the test plan, which is shared. We only need to change one place when the regex changes.
They are talking about saving the Workbench in a future version of JMeter. Once that's done, it will be trivial to add some UI to reference elements in the Workbench.
Module controllers are useful for executing the same samples in different thread groups.
It is possible to use the same assertions in multiple thread groups very easily.
At the Test Plan level, create a set of User Defined Variables with names like "Expected_Result_x". Then, in your Response Assertion, simply reference the variable as ${Expected_Result_x}. You would still need to add the assertion manually to every page you want a particular assertion on, but now you only have to change one place if the assertion changes.