I'm writing some integration tests with plone.app.testing.
Sometimes I want to print something to the console, but it seems that output only appears once a particular test has finished.
Does plone.app.testing, or one of the packages behind it, have a logging facility I can use?
To support testing of logging output, in other words tests that check that your code is logging what it should, I think zope.testing intercepts all logging. Additionally, depending on how you're testing, stdout may be replaced or intercepted as well, as with doctests for example, so printing to stdout may not work either.
Personally, I use pdb.set_trace(), or I temporarily force a failure (without committing it) at the point I want to inspect and run the tests with "-D" for pdb.post_mortem() debugging.
You may, however, be able to make some use of zope.testing.loggingsupport to collect the information you want; then you can use pdb.set_trace() or "-D" to get a pdb prompt at which you can inspect whatever handler you created to capture the logging output.
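For example, a minimal sketch of how that might look (the logger name and the function being exercised are placeholders):
from zope.testing.loggingsupport import InstalledHandler

# 'my.package' is a placeholder for whatever logger name your code uses.
handler = InstalledHandler('my.package')
try:
    run_code_under_test()            # placeholder for the code being exercised
    for record in handler.records:   # standard logging.LogRecord objects
        print(record.name, record.levelname, record.getMessage())
finally:
    handler.uninstall()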
Take a look at this as well: Redirecting log output to sys.stdout on tests.
I have a series of tests that depend on a step in the middle (such as creating an account). The API I'm using for this is kind of brittle (which is a separate problem) and sometimes fails. I'd like to be able to just quit the tests right there when that step fails, instead of waiting for TestCafe to fail the initial assertions of the next few tests that follow. Is there a way to get the test controller to stop, or to signal to the fixture that the tests should stop? I immediately thought of Spock's @Stepwise annotation, but I can't find anything like that in the TestCafe docs.
The Stop on First Fail option stops the entire run as soon as a test fails. If I understand your scenario correctly, you could add an assertion for successful account creation and, if it fails, end the entire run with this option.
CLI Documentation
API Documentation (under Parameters)
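For reference, this is roughly how the option is used from the CLI and from the runner API (the browser name and test path are placeholders):
testcafe chrome tests/ --stop-on-first-fail

const createTestCafe = require('testcafe');

(async () => {
    const testcafe = await createTestCafe();
    const failedCount = await testcafe.createRunner()
        .src('tests/')
        .browsers('chrome')
        .run({ stopOnFirstFail: true });  // end the whole run at the first failed test
    console.log('Failed tests:', failedCount);
    await testcafe.close();
})();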
Is there a way to send an email with the result file (I set this file in the console command with the --result option) after the run finishes?
I have run my Selenium test cases in the following way:
How to Schedule Selenium Web Drivers Tests in C#
The result file is created after the OneTimeTearDown function runs.
If I send the e-mail from within OneTimeTearDown, the result file is incomplete.
Thanks in advance
Sangeetha P.
I'm not sure I'd actually recommend doing this - but I think it's possible. Personally, I'd instead handle the email sending outside of the NUnit console, in a separate script in your CI System.
Anyway, you could achieve this by writing your own ResultWriter extension. Take a look at the implementation of the standard NUnit3XmlResultWriter for ideas - you'd essentially want the same thing, except that it sends the file by email rather than writing a file. (You may even want to make your ResultWriter inherit from the NUnit3XmlResultWriter class.)
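A rough sketch of what that could look like - the interface and attribute names follow NUnit.Engine.Extensibility as I understand it, and the SMTP details are placeholders, so treat the exact signatures as assumptions and check them against the NUnit3XmlResultWriter source:
// Sketch only: assumes the NUnit.Engine.Extensibility IResultWriter extension point.
using System.IO;
using System.Net.Mail;
using System.Xml;
using NUnit.Engine.Extensibility;

[Extension]
[ExtensionProperty("Format", "email")]  // would be selected with --result=TestResult.xml;format=email
public class EmailResultWriter : IResultWriter
{
    public void CheckWritability(string outputPath)
    {
        // Nothing to check - we send mail instead of writing to outputPath.
    }

    public void WriteResultFile(XmlNode resultNode, string outputPath)
    {
        using (var writer = new StringWriter())
        {
            WriteResultFile(resultNode, writer);
            // Placeholder SMTP host and addresses - replace with your own.
            using (var client = new SmtpClient("smtp.example.com"))
            {
                client.Send(new MailMessage("ci@example.com", "team@example.com",
                    "NUnit test results", writer.ToString()));
            }
        }
    }

    public void WriteResultFile(XmlNode resultNode, TextWriter writer)
    {
        // Minimal version: dump the raw result XML; NUnit3XmlResultWriter does more.
        writer.Write(resultNode.OuterXml);
    }
}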
We are using Gherkin/Behave (in Python) to test an embedded application. The Gherkin code is executed on a server while the actual activity is performed by an application on the device, communicating over the network. The application on the device needs to be started manually.
I need a test to reboot the device. I can get the test application to perform a reboot, but then I need the code on the server to prompt the user to restart the test app so that the test can continue with the subsequent steps. However, I cannot get the Python code in the "steps" file to output any text.
I appreciate that Gherkin/Behave is meant to provide fully automated testing, but real world restrictions apply here.
# context._runner is a private behave attribute; formatters is the list of active formatters.
for formatter in context._runner.formatters:
    formatter.stream.write("Your message here\n")
    formatter.stream.write("\n")
The extra newline is needed because Behave prints a description of the step first and then overwrites it in green if it passed; the extra newline ensures that this overwrite lands on a blank line rather than on your text.
Note that when I tested this I was using the default "pretty" formatter. I don't know how well it will work with other formatters.
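For illustration, the snippet could live inside a step definition that pauses for manual intervention. The step text and the use of input() to wait for the operator are assumptions for this example:
from behave import when

@when('the operator has restarted the test app')
def step_wait_for_manual_restart(context):
    # Write the prompt through every active formatter so it reaches the console.
    for formatter in context._runner.formatters:
        formatter.stream.write("Please restart the test app on the device, then press Enter\n")
        formatter.stream.write("\n")
    input()  # block the run until the operator confirms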
I've just started exploring automated testing, specifically Codeception, as part of my QA work at a web design studio. The biggest issue I'm experiencing is having Codeception fail a test as soon as an assert fails, no matter where it's placed in the code. If my internet connection hiccups or is too slow, things can become difficult. I was wondering if there were methods to provide more control over when Codeception will fail and terminate a test session, or even better, a way to retry or execute a different block or loop of commands when an assert does fail. For example, I would like to do something similar to the following:
if ( $I->see('Foo') )
{
echo 'Pass';
}
else
{
echo 'Fail';
}
Does anyone have any suggestions that could help accomplish this?
You can use a conditional assertion:
$I->canSeeInCurrentUrl('/user/miles');
$I->canSeeCheckboxIsChecked('#agree');
$I->cantSeeInField('user[name]', 'Miles');
The Codeception documentation says:
Sometimes you don't want the test to be stopped when an assertion fails. Maybe you have a long-running test and you want it to run to the end. In this case you can use conditional assertions. Each see method has a corresponding canSee method, and dontSee has a cantSee method.
I'm not sure if I understand it correctly, but I think you should try to use a Cest.
$ php codecept.phar generate:cest suitename CestName
That way you can write each test in its own test method. If a test fails, it will abort. You can also configure Codeception so that it does not abort and only shows the failing test in a summary at the end of all tests.
See here in the documentation: https://github.com/Codeception/Codeception/blob/2.0/docs/07-AdvancedUsage.md
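For illustration, such a Cest might look roughly like this (the class, method, and page names are placeholders):
<?php
class HomepageCest
{
    // Each method is an independent test; a failing assertion aborts only that method,
    // and the other methods in the suite still run.
    public function seeFooOnHomepage(AcceptanceTester $I)
    {
        $I->amOnPage('/');
        $I->canSee('Foo');  // conditional assertion: recorded as a failure, the test keeps going
    }
}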
Maybe it's better to use:
$I->dontSee('Foo');
Regards
We're using WatiN to test our web portals. During the course of an E2E test, we'll occasionally see client-side script errors on the IE status bar. I'd like to chain a handler onto the script error event and record the error for later analysis and bug filing.
The problem is, I don't know whether there's a global script error event or how to chain into it. And if there's no browser-agnostic way to accomplish this, I can create MyIE and MyFF subclasses, but then this becomes two browser-specific questions.
In essence, I'm thinking of something like this entirely made-up call:
browser.ScriptEngine.SetCustomErrorHandler(LogScriptingError);
... where LogScriptingError is my code that does the obvious.
Many of our client-side scripting errors don't necessarily prevent the test from continuing (a pretty UI element didn't animate, for example, but the underlying form is still submittable), so I'd like to log the error and forge ahead in most cases.
You're probably looking for this:
window.onerror = function (message, url, line) { logError(message, url, line); };
You can add this code to your pages and handle the errors in logError(). This may not work in all browsers (it does work in IE); check this page for browser compatibility:
http://www.quirksmode.org/dom/events/error.html
Or you may try this commercial product:
exceptionhub.com/
You could maybe co-opt the ability to inject eval code (described under "Added Eval functionality") to add a script that catches all errors, not just errors from the eval'd script. I'm not sure if this would work, but it's an area to explore. Another resource might be this blog post, which discusses how to evaluate JavaScript in WatiN.
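A speculative sketch along those lines - it assumes WatiN's browser.Eval() method, uses a made-up window.__scriptErrors array as the collection point, and the injected handler would need to be re-installed after every page navigation:
// Speculative: assumes Browser.Eval() and that window.onerror can be set from test code.
using WatiN.Core;

public static class ScriptErrorLogger
{
    public static void Install(Browser browser)
    {
        browser.Eval(
            "window.__scriptErrors = [];" +
            "window.onerror = function (message, url, line) {" +
            "  window.__scriptErrors.push(message + ' (' + url + ':' + line + ')');" +
            "  return true; };");  // returning true suppresses IE's script error dialog
    }

    public static string[] Collect(Browser browser)
    {
        // Read the collected messages back as one newline-separated string.
        var raw = browser.Eval("(window.__scriptErrors || []).join('\\n')");
        return string.IsNullOrEmpty(raw) ? new string[0] : raw.Split('\n');
    }
}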