"[TestNG] No tests found. Nothing was run" message - testing

Last week I was running this script and it was running fine. When I ran it again a week later, it threw "[TestNG] No tests found. Nothing was run", as in the picture below.
Why is this happening? Is there any solution to this problem?

I had to downgrade the TestNG version to 6.10; after that it runs.
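If you need to pin that working version in your build, here is a minimal sketch for a Gradle Kotlin DSL build script (assuming a Gradle project; a Maven pom.xml would pin the same org.testng:testng coordinates):

// build.gradle.kts - pin TestNG to 6.10, the version reported as working above
dependencies {
    testImplementation("org.testng:testng:6.10")
}
tasks.test {
    useTestNG() // run the test task with TestNG instead of the JUnit default
}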

Related

Cypress giving false positive and not failing test despite finding wrong text

As mentioned in the title, the Cypress client proceeds to other tests without marking a test as failed even though it has in fact found a mismatch in the expectation. This behavior can be seen in the attached image:
The problem is intermittent, but it can go completely unnoticed when the tests run in a CI environment.
How am I supposed to debug and fix this issue?

Ktor - gradle test task fails

I have a strange problem. When I run my tests in IDEA they work, but if I run them from the console with 'gradle test' I get:
com.easythings.teessstttt.service.ProductServiceTest > initializationError FAILED
org.jetbrains.exposed.exceptions.ExposedSQLException at ProductServiceTest.kt:29
Caused by: org.h2.jdbc.JdbcBatchUpdateException at ProductServiceTest.kt:29
Why?
I fixed it. When I started tests in IDEA I ran a single file, so there was only one connection to the database. The 'gradle test' command starts all the tests at the same time. I appended the current time to the database name and that solved the problem.
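A minimal sketch of that fix, assuming an in-memory H2 database opened through Exposed (the function and database names here are illustrative):

import org.jetbrains.exposed.sql.Database

// Give each test run its own in-memory H2 database by appending the
// current time to the database name, so tests started in parallel by
// 'gradle test' don't collide on one shared database.
fun connectToTestDatabase(): Database {
    val dbName = "test_${System.currentTimeMillis()}"
    return Database.connect(
        url = "jdbc:h2:mem:$dbName;DB_CLOSE_DELAY=-1",
        driver = "org.h2.Driver"
    )
}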

NUnit tests are not restarted through VSTest in DevOps

I am using VSTest to run tests via Azure DevOps. The tests run successfully, but with the option rerun failed tests: true, an error appears during the rerun phase.
NUnit 3.12.0 ;
NUnit Adapter 3.16.1.0 (Checked with 4.0.0.0)
vstest.console.exe "C:\agent2.172.2\_work\r1\a\UITest\drop\Tests.Web\bin\Release\netcoreapp3.1\Tests.Web.dll"
/Settings:"C:\agent2.172.2\_work\_temp\3utv233tymm.runsettings"
/Logger:"trx"
/TestAdapterPath:"C:\agent2.172.2\_work\r1\a\UITest\drop\Tests.Web\bin\Release\netcoreapp3.1"
/TestCaseFilter:"FullyQualifiedName=Tests.Web.Tests._5.CourierModuleTest.N1_SendingTest.Id_5_1_01_TransferToCourierModule(Chrome)|FullyQualifiedName=Tests.Web.Tests._3.IssuanceOfDocuments.InformationOnTheApplicationIdentificationOfRecipient.N2_RecipientIdentificationTest.Id_3_2_13_RegisterAddressByFiasTest(Chrome)"
NUnit Adapter 3.16.1.0: Test execution started
An exception occurred while invoking executor 'executor://nunit3testexecutor/': Incorrect format for TestCaseFilter Missing Operator '|' or '&'. Specify the correct format and try again. Note that the incorrect format can lead to no test getting executed.
How can I fix this error and successfully restart the tests in DevOps?
I tried googling for a similar issue but didn't find anything that would work. Any help is really appreciated
Here's a similar discussion; according to comments from the contributors of the azure-pipelines-task repo:
1. You should update your VS and the VSTest component within it to the latest version, since rerun of data-driven tests is not supported in older versions; it is available with the VS 15.8 release and higher.
2. You should configure your VSTest task following this solution:
It is recommended to give the total number of your tests as the input (Number of tests per batch).
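As a sketch, those recommendations map onto the VSTest task inputs roughly like this (Azure DevOps YAML syntax; the batch size value is a placeholder you'd replace with your total test count):

# Hypothetical VSTest@2 configuration: rerun failed tests, and batch
# by test count with the batch size set to the total number of tests.
- task: VSTest@2
  inputs:
    testAssemblyVer2: '**\Tests.Web.dll'
    rerunFailedTests: true
    distributionBatchType: 'basedOnTestCases'
    batchingBasedOnAgentsOption: 'customBatchSize'
    customBatchSizeValue: '200'  # placeholder: total number of tests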

Protractor test times out randomly in Docker on Jenkins, works fine in Docker locally

When using the APIs defined by Protractor & Jasmine (the default/supported runner for Protractor), the tests always work fine on individual developer laptops. For some reason, when they run on the Jenkins CI server, they fail, despite running in the same Docker containers on both hosts, which was wildly frustrating.
This error occurs: A Jasmine spec timed out. Resetting the WebDriver Control Flow.
This error also appears: Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
Setting getPageTimeout & allScriptsTimeout to 30 seconds had no effect on this.
I tried changing jasmine.DEFAULT_TIMEOUT_INTERVAL to 60 seconds for all tests in this suite; once the first error appears, every subsequent test waits the full 60 seconds and then times out.
I've read and reread Protractor's page on timeouts but none of that seems relevant to this situation.
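For reference, here is where those settings live in a Protractor config - a sketch of what was tried above, not a fix:

// conf.js - the timeout knobs mentioned above
exports.config = {
  getPageTimeout: 30000,      // page-load timeout for browser.get()
  allScriptsTimeout: 30000,   // async-script timeout per command
  jasmineNodeOpts: {
    // Jasmine's per-spec timeout (jasmine.DEFAULT_TIMEOUT_INTERVAL)
    defaultTimeoutInterval: 60000,
  },
};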
Even stranger still, it seems like some kind of buffer issue - at first the tests would always fail on a particular spec, and nothing about that spec looked wrong. While debugging I upgraded the selenium docker container from 2.53.1-beryllium to 3.4.0-einsteinium; the tests still failed, but a couple of specs further down - suggesting that maybe some optimization in the update let the run get more done before it gave out.
I confirmed that by rearranging the order of the specs - the specs that had failed consistently before now passed, and a test that previously passed began to fail (but at around the same point in the test duration as the earlier failures before the reorder).
Environment:
protractor - 5.1.2
selenium/standalone-chrome-debug - 3.4.0-einsteinium
docker - 1.12.5
The solution ended up being simple - I first found it in a Chrome bug report, and it turned out it was also listed right on the front page of the docker-selenium repo, but the text wasn't clear about what it was for when I read it the first time. (It says that Selenium will crash without it, but the errors I was getting from Jasmine only talked about timeouts, which was quite misleading.)
Chrome uses /dev/shm for shared memory, and Docker's default /dev/shm is fairly small (64 MB). The docker-selenium README links workarounds for Chrome and Firefox that explain how to resolve the issue.
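The README's workarounds boil down to either enlarging the container's shared memory or mounting the host's (a sketch; the image tag matches the one used above):

docker run -d --shm-size=2g selenium/standalone-chrome-debug:3.4.0-einsteinium
# or mount the host's /dev/shm into the container:
docker run -d -v /dev/shm:/dev/shm selenium/standalone-chrome-debug:3.4.0-einsteinium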
I had a couple of test suites fail after applying the fix, but all the test suites have been running and passing for the last day, so I think that was actually the problem and that this solution works. Hope this helps!

StatLight hangs when run from TeamCity as single command

I'm running TeamCity 6.5 on a Windows server, with a couple of build agents on the same server (all running as the system user, as services). I had been building Silverlight projects and running the StatLight (v1.4.4147) tests under Jenkins with no problems. On Jenkins, I called the StatLight tests in a custom script as follows:
StatLight.exe -x="Tests.xap"
StatLight.exe -x="MoreTests.xap"
StatLight.exe -x="EvenMoreTests.xap"
... etc. But when I migrated my build jobs to TeamCity, I also combined these into a single command-line step as follows:
StatLight.exe --teamcity -x="Tests.xap" -x="MoreTests.xap" -x="EvenMoreTests.xap"
This works about 50% of the time, but when it fails, there's no output in the build log to tell me why - I just get:
[11:41:18]: [MyProject\bin\Release\MoreTests.xap] Tests.ExtensionsTests.WatchObservableCollection
[11:41:18]: [MyProject\bin\Release\MoreTests.xap] Tests.SubscribingModelBaseTests.DisposeIsCalled
[11:41:18]: [MyProject\bin\Release\MoreTests.xap] --- Completed Test Run at: 28/09/2011 11:41:18. Total Run Time: 00:00:11.8125000
[11:41:19]: [MyProject\bin\Release\MoreTests.xap] Test run results: Total 6, Successful 6, Failed 0,
[11:41:19]: [Step 5/6] MyProject\bin\Release\EvenMoreTests.xap (9m:42s)
... and then nothing more. The time reported in that last line just goes up and up until I kill the build job. Adding the --debug switch to StatLight doesn't improve the above output either.
Right now, I've switched the TeamCity build step to calling each test individually, as I did in Jenkins, but this is more of a workaround than a proper solution. And of course, I may still run into the above problem - I've yet to find out.
What I'd like to know is what steps I can take to debug this issue properly, or whether there are known issues that can cause the above behaviour?
There was one issue fixed in version 1.5 relating to TeamCity: http://statlight.codeplex.com/workitem/13654
I'm not sure it will fix your issue, but would you mind upgrading, trying it, and reporting back?