We have some Spock tests that compare XML as strings. This mostly works fine, but for one test where the strings have ~130k characters, IntelliJ's test runner seems to hang: the terminal output shows the test's status, but the left-hand pane with the test tree just spins forever. For now we're breaking the strings up into 1000-character chunks and comparing those, but is there anything we can do to get IntelliJ to complete the test when run?
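For reference, here's a rough sketch of the chunking workaround, written as a plain Java helper for illustration (the actual tests are Spock/Groovy; the helper below is an assumption, not our real test code):

public class ChunkedAssert {
    static final int CHUNK = 1000;

    // Compare two large strings chunk by chunk so a failure reports a small
    // 1000-character diff instead of one ~130k-character assertion, which is
    // what the IDE's test runner appears to choke on.
    static void assertXmlEquals(String expected, String actual) {
        int longest = Math.max(expected.length(), actual.length());
        for (int start = 0; start < longest; start += CHUNK) {
            String e = slice(expected, start);
            String a = slice(actual, start);
            if (!e.equals(a)) {
                throw new AssertionError("XML differs in chunk at offset " + start
                        + ": expected <" + e + "> but was <" + a + ">");
            }
        }
    }

    // Return the chunk starting at the given offset, or "" past the end.
    static String slice(String s, int start) {
        if (start >= s.length()) return "";
        return s.substring(start, Math.min(start + CHUNK, s.length()));
    }
}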
I have a test that I'm running in Comma IDE from a Raku distro downloaded from GitHub.
The tests passed last night, but after rebooting this morning, the test no longer passes. The test runs the raku binary on the machine. After some investigation, I discovered that the binary was not getting found in the test:
say (run 'which', 'raku', :out).out.slurp; # outputs nothing
But if I run the test directly with prove6 from the command line, I get the full path to raku.
I'm using rakubrew.
I can easily fix this by adding the full path in the test, but I'm curious to know why Comma IDE suddenly can't find the path to the raku binary.
UPDATE: I should also mention that I reimported the project this morning, which caused some problems, so I invalidated caches. It may have been this, and not the reboot, that caused the problem. I'm unsure.
UPDATE 2: No surprise, but
my $raku-path = (shell 'echo $PATH', :out).out.slurp;
yields only /usr/bin:/bin:/usr/sbin:/sbin
My best guess: in the situation where it worked, Comma was started from a shell where rakubrew had set up the environment. Then, after the reboot, Comma was started again, but from a shell where that was not the case.
Unless you choose to do otherwise, environment variables are passed on from parent process to child process. Comma inherits those from the process that starts it, and those are passed on to any Raku process that is spawned from Comma. Your options:
Make your Raku program more robust by using $*EXECUTABLE instead of shelling out to which raku; this variable holds the path to the currently executing Raku implementation (see the one-liner after this list)
Make sure to start Comma from a shell where rakubrew has tweaked the path.
Tweak the environment variables in the Run Configuration in Comma.
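A minimal sketch of the first option ($*EXECUTABLE is an IO::Path object pointing at the running interpreter):

say $*EXECUTABLE;                      # full path to the running raku binary
my $raku-path = $*EXECUTABLE.absolute; # no dependence on $PATH or which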
In Selenium I often find myself writing tests like ...
// Test #1
login();
// Test #2
login();
goToPageFoo();
// Test #3
login();
goToPageFoo();
doSomethingOnPageFoo();
// ...
In a unit testing environment, you'd want separate tests for each piece (i.e. one for login, one for goToPageFoo, etc.) so that when a test fails you know exactly what went wrong. However, I'm not sure this is a good practice in Selenium.
It seems to result in a lot of redundant tests, and the "know what went wrong" problem doesn't seem so bad, since it's usually clear what went wrong by looking at what step the test was on. And it certainly takes longer to run a bunch of "build-up" tests than it does to run just the last ("built-up") test.
Am I missing anything, or should I just have a single long test and skip all the shorter ones building up to it?
I have built a large test suite in Selenium using a lot of smaller tests (like in your code example). I did it for exactly the same reasons you did. To know "what went wrong" on a test failure.
This is a common best practice for standard unit tests, but if I had to do it over again, I would mostly go with the second approach: larger built-up tests, with some smaller tests where needed.
The reason is that Selenium tests take an order of magnitude longer to run than standard unit tests, particularly for longer scenarios. This makes the whole test suite unbearably long, with most of the time spent running the same redundant code over and over again.
When you do get an error, say in a step that is repeated at the beginning of 20+ different tests, it does not really help to know you got the same error 20+ times. My test runner runs my tests out of order, so the first failure isn't even the first incremental test of the "build-up" series; I end up looking at the first test failure and its error message to see where the failure came from, which is exactly what I would do if I had used larger "built-up" tests.
Coverage is a plugin for IntelliJ IDEA (going back many releases). It captures code coverage statistics for a given run configuration.
According to the documentation, we should be able to append the results for multiple runs, either by selecting that as the default behavior or by having IntelliJ prompt for the settings before applying coverage to the editor.
But the settings never seem to get applied. If I choose to be prompted, I'm never prompted. If I choose to append the results, they're never appended. One member of my team says they are prompted, but the results do not reflect their choice.
I've tried everything I can think of:
Manually changed settings for Coverage in my workspace.xml file
Deactivated and reactivated Coverage
Uninstalled and reinstalled Coverage
Tried using the other runners for Coverage (Emma and JaCoCo)
Even uninstalled and reinstalled IntelliJ in the hope that I was carrying around faulty settings from a previous install.
Nothing works.
Am I missing something obvious? How do I configure Coverage to append coverage suites? I'm thinking it's a bug, but is there perhaps some workaround?
There's a workaround, but it doesn't involve appending the suites, and it's a bit ugly.
I can't find a way to make appending suites work, but coverage is applied per run configuration. So if you have an existing suite you want to add to, you can create another run configuration and run it with coverage to generate a suite for that run.
What you end up with is a number of suites that you then have to merge; the merging functionality in Coverage does work. Note that no coverage suites are appended and no new files are generated; it simply merges the results into the coverage view, allowing a total report to be generated.
To view merged coverage data:
Press Ctrl+Alt+F6
Choose one or more coverage suites to merge
Click "Show selected"
A view of the merged suite data should appear in the editor.
I am using Selenium WebDriver and TestNG to create an automated test case. I am running the same test case multiple times for different sets of data. The execution slows down after each iteration, and at some point it becomes very slow and the process stops.
The code is very straightforward: iterating over the same TestNG method containing Selenese scripts (for example: driver.findElement(By.id(target)).click();).
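Here is a minimal sketch of that structure (the class name, data rows, and element ids are placeholders, not the real test code):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class RepeatedScenarioTest {
    private WebDriver driver;

    @BeforeClass
    public void setUp() {
        driver = new FirefoxDriver(); // one browser instance reused for every iteration
    }

    // The same @Test method runs once per row of the data provider,
    // repeating the same Selenese-style steps for each data set.
    @DataProvider(name = "targets")
    public Object[][] targets() {
        return new Object[][] { { "link1" }, { "link2" }, { "link3" } };
    }

    @Test(dataProvider = "targets")
    public void runScenario(String target) {
        driver.findElement(By.id(target)).click();
        // ... more steps per iteration
    }

    @AfterClass
    public void tearDown() {
        driver.quit();
    }
}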
Any idea why the execution gets slower and, after multiple iterations, stops?
@Anna Clearing temp files solved a similar issue for me. My test was generating a lot of log files, screenshots, and Windows temp files, among other things. Now I make the automation clear my temp files, and my results have been way better.
If that does not solve your issue, please share more information on how your automation is set up (TestNG, Jenkins, Maven, etc.) and the code that initiates the runs.
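For what it's worth, here is a rough sketch of that kind of cleanup, assuming the artifacts accumulate under the default java.io.tmpdir (adjust the path to wherever your logs and screenshots actually land):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.stream.Stream;

public class TempCleaner {
    public static void main(String[] args) throws IOException {
        Path tmp = Paths.get(System.getProperty("java.io.tmpdir"));
        try (Stream<Path> entries = Files.walk(tmp)) {
            entries.sorted(Comparator.reverseOrder())   // delete children before parents
                   .filter(p -> !p.equals(tmp))         // keep the temp dir itself
                   .forEach(p -> p.toFile().delete());  // best effort: locked files just fail
        }
    }
}

Run it before each suite (or from a setup hook) so each run starts from a clean slate.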
I just started working on an existing Grails project where there is a lot of code written and not much is covered by tests. The project uses Hudson with the Cobertura plugin, which is nice. As I'm going through things, I'm noticing that even though there are no specific test classes written for some code, it is still being covered. Is there any easy way to see which tests are covering the code? It would save me a bit of time to know that information.
Thanks
What you want to do is collect test coverage data per test. Then, when some block of code is exercised, you can trace it back to the test that exercised it.
You need a test coverage tool that will do that; AFAIK this is straightforward to organize: just run one test at a time and collect the coverage data for each run.
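As a concrete illustration with an open-source tool (JaCoCo, not the tools this answer refers to), here is a rough sketch; the test class names, jar paths, and classpath entries are placeholders. Each test class runs in its own JVM with the JaCoCo agent attached, so each run writes its own .exec file that maps coverage back to that one test:

import java.util.List;

public class PerTestCoverage {
    public static void main(String[] args) throws Exception {
        List<String> tests = List.of("com.example.FooTests", "com.example.BarTests");
        for (String test : tests) {
            // One JVM per test class; the agent's destfile option gives
            // every test its own coverage output file.
            new ProcessBuilder(
                    "java",
                    "-javaagent:jacocoagent.jar=destfile=coverage/" + test + ".exec",
                    "-cp", "build/classes:build/test-classes:junit.jar",
                    "org.junit.runner.JUnitCore", test)
                .inheritIO()
                .start()
                .waitFor();
        }
    }
}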
However, most folks also want to know: what is the coverage of the application given all the tests? You could run the tests twice, once to get what-does-this-test-cover information, and then the whole batch to get what-does-the-batch-cover. Some tools (ours included) will let you combine the coverage from the individual tests to produce coverage for the set, so you don't have to run them twice.
Our tools have one nice extra: if you collect test-specific coverage, then when you modify the code, the tool can tell you which individual tests need to be re-run. This takes a bit of straightforward scripting, comparing the instrumentation data for the changed sources against the results for each test.