Apologies if it has been answered before, but I can't seem to find a good answer.
What is the difference in the context in which @QuarkusTest runs versus @QuarkusIntegrationTest?
So far, all I've got is that the integration test runs against a packaged form of the app (.jar, native executable), whereas the plain @QuarkusTest doesn't? But that doesn't explain much, and apologies if this comes from a lack of understanding of test runtimes.
To start a test instance of Quarkus (via @QuarkusTest), does it not compile and package into a jar? It would make sense not to, I suppose, and just test against the running compiled classes, but I would rather get the real answer than assume.
https://quarkus.io/guides/getting-started-testing#native-executable-testing
Besides the difference you mention, there's another crucial difference between @QuarkusTest and @QuarkusIntegrationTest. With @QuarkusTest, the test runs in the same process as the tested application, so you can inject the application's beans into the test instance, etc. With @QuarkusIntegrationTest, the tested application runs in an external process, so you can only interact with it over the network.
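To make that concrete, here is a minimal sketch (in Groovy, to match the other snippets on this page; in a typical Quarkus project these would be Java classes, but the annotations behave the same). GreetingService and the /hello endpoint are made-up examples, and on Quarkus 2.x the Inject import is javax.inject.Inject rather than jakarta.inject.Inject:

import io.quarkus.test.junit.QuarkusIntegrationTest
import io.quarkus.test.junit.QuarkusTest
import io.restassured.RestAssured
import jakarta.inject.Inject
import org.junit.jupiter.api.Test

@QuarkusTest
class GreetingServiceTest {
    // Works: the test runs in the same JVM as the application,
    // so the application's CDI beans can be injected into the test.
    @Inject
    GreetingService greetingService

    @Test
    void greets() {
        assert greetingService.greet() == 'hello'
    }
}

@QuarkusIntegrationTest
class GreetingResourceIT {
    // No injection possible here: the packaged app (jar or native
    // executable) runs in a separate process, so the test can only
    // reach it over the network, e.g. with REST Assured.
    @Test
    void helloEndpointResponds() {
        RestAssured.when().get('/hello').then().statusCode(200)
    }
}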
I inherited an Android project to set up code coverage for. Not having done much with Android and almost as little with Gradle, I embarked on a quest to find a helpful tutorial. To my surprise, the first few tutorials were very helpful, and I was able to include the JaCoCo Gradle plugin and enable code coverage. Using Jenkins, I even generated a coverage report. So far everything looked fine.
However, upon setting my eyes on the report, I smelled something fishy. The test-to-coverage ratio seemed far too small. Further investigation revealed the culprit.
The tests themselves are written more as functional than unit tests. That would be OK. However, the library has no tests in its own module. Instead, the library tests are written in the GUI module (as that is where the library is used).
Therefore, even though most of the library functionality is covered by tests, coverage is generated for the GUI module's sources only.
Project
-- Gui module
---- gui sources
---- all the tests
-- Library module
---- library sources
Now I have been looking for a working solution for quite some time. Unfortunately, all I was able to find was how to combine unit and integration .exec test coverage results into one report (or other unit-test-based solutions, none of which worked for the instrumentation tests).
What I need is to generate coverage for the Library module's sources based on the Gui module's tests.
As I am stumbling in the dark here: is anything like that even remotely possible?
For anyone reading this... if you have the same issue, it is time to start banging your head against the wall...
Today I was lucky enough to stumble upon this: https://issuetracker.google.com/issues/37004446#comment12
The actual "problem" seems to be, that library projects are "always" of release type. Therefore they do not contain "necessary instrumentation setup" (unless you enable code coverage for release as well, although I haven't tested it).
So the solution is to specifically enable, in the library to be published, "debug" build (as mentioned, default is the release type):
android {
    publishNonDefault true
}
Then, in the project that uses the library, specify a debugCompile dependency on the debug configuration (releaseCompile can use the "default" release configuration):
dependencies {
    // Gradle project paths start with ':'
    debugCompile project(path: ':library', configuration: 'debug')
    releaseCompile project(':library')
}
And of course (this one I take for granted), remember to enable test coverage for the library:
android {
    buildTypes {
        debug {
            testCoverageEnabled true
        }
    }
}
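For completeness, here is a rough sketch of the kind of aggregate report task this enables, in the Gui module's build.gradle. This is my own illustration, not from the linked issue; the task name, source paths, and the locations of the .exec/.ec execution-data files are assumptions that vary between Android Gradle plugin versions, so adjust them to your build:

apply plugin: 'jacoco'

// Hypothetical report task that attributes coverage recorded by the Gui
// module's tests to the Library module's sources as well.
task combinedCoverageReport(type: JacocoReport) {
    // Sources and classes from both modules (paths are assumptions)
    sourceDirectories = files(
            'src/main/java',
            project(':library').file('src/main/java'))
    classDirectories = files(
            fileTree(dir: 'build/intermediates/classes/debug',
                     excludes: ['**/R*.class', '**/BuildConfig.class']),
            fileTree(dir: project(':library').file('build/intermediates/classes/debug'),
                     excludes: ['**/R*.class', '**/BuildConfig.class']))
    // Execution data written by the unit and instrumentation test runs
    executionData = fileTree(dir: buildDir, includes: [
            'jacoco/testDebugUnitTest.exec',
            'outputs/code-coverage/connected/*.ec'])
    reports {
        xml.enabled true
        html.enabled true
    }
}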
I am running Geb functional tests in my Grails app through Eclipse "Run As JUnit..."
This normally works great and allows me to keep my test server running with grails run-app, and I get fast test execution times.
However, it doesn't allow me to use GORM domain objects in my setup/teardown methods. Those only work if I run with grails test-app, which requires a much longer cycle time.
Is there another way I can access the DB from my functional tests without GORM? I would be perfectly OK accessing the DB directly through the groovy.sql.Sql class, as long as I don't have to duplicate configuration.
The question you linked to in your comment actually does contain a solution in this answer: you should use the Grails Remote Control plugin to change the state of your application under test from your functional tests. Some reasons why are outlined in this answer to another question.
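For illustration, a minimal sketch of what that looks like from a functional test. RemoteControl comes from the remote-control plugin (package name as I recall it), and Book is a hypothetical domain class:

import grails.plugin.remotecontrol.RemoteControl

class BookFunctionalTests {
    // Sends closures over HTTP to the app started with `grails run-app`,
    // where they execute inside the application's JVM with full GORM access.
    def remote = new RemoteControl()

    void setUp() {
        remote {
            new Book(title: 'The Stand').save(flush: true)
            null // the closure's return value must be serializable
        }
    }

    void tearDown() {
        remote {
            Book.findByTitle('The Stand')?.delete(flush: true)
            null
        }
    }
}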
I have a Grails application that, when run on my local Windows machine, passes all tests in my integration test suite. When I deploy my app to my Test environment in Jenkins, and run the same suite of tests, a few of them are failing for inexplicable reasons.
I think the Test box is Linux, but I am not sure. I am using mocks in my Grails app and am wondering if that may be causing inconsistencies in the values returned.
Has anyone any ideas?
EDIT:
My app translates an XML document into a new XML document. One of the elements in the returned XML document is supposed to be PRODUCT but comes back as product.
The value for this element comes from an in-memory database that is populated from a DB script. It is the same DB script that is used both locally and in my Test environment.
The app does not read any config files that would be different in different environments.
Like the others have stated, there really isn't enough information here to help give a solid answer. A couple of things that I would look at:
If it's integration tests that are failing, maybe you've got some "bad tests" that depend on data that does not exist in the test environment that Jenkins is running against.
There is no guaranteed consistency of test execution order across machines/platforms. So it's entirely possible that the tests pass for you locally just because they run in a certain order, leaving things mocked out or data set up by one test that is needed in another. I wrote a plugin a while ago (http://grails.org/plugin/random-test-order) to help identify these problems. I haven't updated the plugin since Grails 1.3.7, so it may not work with 2.0+ Grails apps.
If the steps above don't identify the problem, knowing any differences in how you are invoking the tests on Jenkins vs. locally would be helpful; for example, whether you specify a particular Grails environment (http://grails.org/doc/latest/guide/conf.html#environments) when running on Jenkins, and how it differs from the Grails environment used locally.
I've just started some MonoTouch development and I've tried, and failed, to get Moq working for my unit tests. The binary version fails because it's looking for System v2.0, which I assume is down to its Castle requirements, and building it from source crashes the compiler!
My question is: has anyone gotten Moq to work on Mono (the Touch part should be irrelevant, since I'm not deploying it to the phone!), or had any joy with any of the other mocking frameworks? Failing that, I'm back to rolling my own, which is a bit of a pain.
I'm using Moq right now with MonoDevelop to test the non-MonoTouch parts of a MonoTouch app, and I haven't had any trouble. For the target runtime, my test project and the code under test both use Mono / .NET 3.5, and the references are:
System, Version=2.0.0
nunit.core, Version=2.4.8
nunit.framework, Version=2.4.8
[code under test]
System.Core, Version=3.5.0
Moq.dll
System, nunit.core and nunit.framework are all as provided by MonoDevelop.
The Moq I'm using is Moq.4.0.10827/NET35/Moq.dll.
(I haven't had any luck NUnit-testing the MonoTouch parts -- I assume because when the tests are running, there's no phone or simulator, so the native code MonoTouch is wrapping can't run. I've had to separate out the non-iOS-specific parts of the app and set up two separate solutions: one for real builds and one for unit testing the parts that can be unit tested. If you've gotten farther than that, let me know!)
Are there any differences between the original CruiseControl and the .NET port? I've compared the two, but can't find any big differences except the language each has been developed in. I want to use one of them for (automated) testing of web applications, using Selenium and Subversion, perhaps even Groovy, but don't know which to choose.
[edit]
After looking at CC and Hudson, I've chosen Hudson for its simplicity; it already has plugins to run Groovy scripts and Selenium as well.
Choose me, choose me! (I work on the original CruiseControl.)
I've never used CC.NET but from what I know I agree that they are pretty comparable. Probably the most important difference is cross-platform vs. Windows only.
Now I wonder how long until someone comes by and says they're both crap and you should try Hudson? ;)
(And of course there are lots of other choices...)
CruiseControl.NET (cc.net henceforth) has build queues (http://confluence.public.thoughtworks.org/display/CCNET/Project+Configuration+Block), which allow you to serialize builds that depend on a certain build order. I'm in the process of emulating this behavior in the Java version of CruiseControl, but the functionality doesn't map one to one. The reason I'm moving from the .NET to the Java version at all, however, is that the .NET version core-dumps with Mono (cc.net nightly build and Mono nightly build as of two months ago). The fault lies with Mono's thread handling, but it defeats any attempt to get cc.net up and running.
The documentation on this can be tricky to find if you don't notice the version numbers that the configuration examples/documentation adhere to (confluence.public.thoughtworks.org has the updated configuration documentation, whereas ccnet.sourceforge.net does not; I know ccnet.sourceforge.net is most likely a dead site, but if you're not carefully reading the date stamps on every page you visit, this may bite you).
Furthermore, the source-control blocks for CVS and SVN in cc.net are more granular and feature-rich than their counterparts in the Java version, but this has not been a problem in my work. The Java version is also easy to extend/modify with regard to plugin behavior, but you would really rather see this kind of work going upstream instead of forking.
I'm fairly impressed with both the Java version and the fork in .NET (modulo the Mono runtime behavior), but you really do not want to try any of the other forks of CruiseControl. I've had peripheral experience with Hudson, and the features were just not compelling enough to veer me away from CruiseControl. Hudson has a (somewhat coloured) comparison of Hudson and CruiseControl (Java) at http://hudson.gotdns.com/wiki/display/HUDSON/Home
A viable alternative is the Python-implemented Buildbot (http://buildbot.net/trac). It does not have fancy GUI dashboards, and the setup is somewhat more command-line-bound, but if you're doing distributed builds, it's very easy to set up and get running.
I think for you it will come down to operating system: the original can run on *nix, and the .NET version runs on Windows.
There are other automated build utilities that can do this as well, such as TeamCity in the Windows space, and cruisecontrol.rb in the Ruby world.
There is also a PowerShell-based build utility called psake that can poll Subversion and perform tasks.