Anyone had any luck integrating SQL unit tests to run with CI build tool? - sql

The goal is to run a few queries against a database on each new build. Has anyone had any luck without having to put SQL in Java classes or create entire new schemas to hold stored procs? Ideally you could include some SQL in separate files that gets run as soon as the build completes.
We might be using Maven and Bamboo, but I would love to hear any experiences/successes/difficulties that people have encountered.
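To make it concrete, something along these lines is roughly what I have in mind (just a sketch; the directory, JDBC URL and credentials are placeholders): a JUnit test, run as part of the build, that picks up .sql files from a directory and executes them over JDBC.

```java
import java.nio.file.*;
import java.sql.*;
import java.util.*;
import org.junit.Test;

public class SqlSmokeTest {

    // Placeholder connection details -- in practice these would come from the CI environment.
    private static final String JDBC_URL = "jdbc:postgresql://localhost:5432/testdb";

    @Test
    public void runSqlFiles() throws Exception {
        // Collect the .sql files from a conventional directory and run them in name order.
        List<Path> scripts = new ArrayList<>();
        try (DirectoryStream<Path> dir = Files.newDirectoryStream(Paths.get("src/test/sql"), "*.sql")) {
            dir.forEach(scripts::add);
        }
        Collections.sort(scripts);

        try (Connection conn = DriverManager.getConnection(JDBC_URL, "user", "password");
             Statement stmt = conn.createStatement()) {
            for (Path script : scripts) {
                // Each file holds a single statement/query; any SQLException fails the test and the build.
                stmt.execute(new String(Files.readAllBytes(script)));
            }
        }
    }
}
```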

You don't say what tools you use for writing your SQL unit tests. If you're using Steven Feuerstein's utPLSQL tool, you should read this article on Continuous Integration with Oracle PL/SQL, utPLSQL and Hudson. Even if you're not, it might provide some useful insights.

Maybe TeamCity (JetBrains) is what you're looking for. It has various build runners, including but not limited to Ant, MSBuild, NUnit, Maven and Command Line.
Just configure a TeamCity project to listen to your svn/git/hg repository for changes, then run a build: compilation first and, if that succeeds, Maven (or whatever). Or whichever way you want to do it.
/mikkel

Related

How are test scripts run for a specific project after all the test scripts are complete?

I am working on automation in Selenium with Java and TestNG. I have completed all my test scripts, but I don't have practical experience of working with Selenium in the IT industry.
My question is: once the test scripts for a specific project are complete, how are they run for regression?
1. Using Eclipse (or any other IDE) on a regular basis, or
2. Making a jar file to run on a regular basis, or
3. Any other means?
Please let me know how this is handled from a company's point of view.
It really depends on the company's point of view. In my experience we had been doing regression via Selenium (from the Eclipse IDE), but if the company runs a continuous integration system, the tests usually have to be run by a machine, so it's probably better to use a jar which writes the test results to some kind of file.
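For example (just a sketch; the suite file path is an assumption): a small main class like the one below can be packaged into a runnable jar, so the CI machine can trigger the whole suite and archive the result files it writes.

```java
import java.util.Collections;
import org.testng.TestNG;

public class RegressionRunner {

    public static void main(String[] args) {
        TestNG testng = new TestNG();
        // Point TestNG at the suite definition checked into the project (path is an assumption).
        testng.setTestSuites(Collections.singletonList("testng.xml"));
        // TestNG writes its usual HTML/XML reports here, where the CI job can archive them.
        testng.setOutputDirectory("test-output");
        testng.run();
        // A non-zero exit code lets the CI job mark the build as failed.
        System.exit(testng.hasFailure() ? 1 : 0);
    }
}
```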
Different companies follow different approaches to regression testing with automated scripts. There are no set standards or an SOP (Standard Operating Procedure) for this. The higher levels of the testing department's hierarchy (the test lead etc.) have to decide which practice will yield the maximum ROI.
For example, in my current organisation I have been running a CI system using Jenkins, which runs all the automation scripts at a specified time on the day the regression is supposed to begin, or we trigger it and the CI takes care of the rest.
In my previous organisation, for regression purposes, we had a dedicated system where we would copy all the scripts, make all the necessary updates/system changes and then trigger the scripts to run the tests.
I believe not many big companies follow the practice of running their regression tests individually via the Eclipse IDE, since a whole sprint (or a whole project) would have hundreds of test cases involving a lot of scripts, and running them via Eclipse would be tedious and time consuming. Plus, every single test script run would generate a separate report, which would be too complex to store and debug in case of any failure.
However, as I said, this depends entirely on how the company sees the ROI and the effort involved.

How to display a short test report/counters in travis-ci?

It would be very useful if I could see how many tests passed/failed on a single line, without reading the build logs.
I use Karma as the test runner. It has a lot of reporters, but which one should I use?
TeamCity, for example, shows this kind of one-line summary.
This seems like a useful feature but the current user interface doesn't seem to support it.
You can file it as a feature request on Travis CI's GitHub page using the link below:
https://github.com/travis-ci/travis-ci/issues
Although Travis CI doesn't have its own interface for counting the number of tests passed, it does work with Code Climate, which has its own interface and metrics for test coverage. It shows overall test coverage for the whole project and coverage for each file. There's some more info on that here, though it looks like their free version allows local testing only.
There are other tools out there for tracking and analyzing coverage as well, including Coveralls, which is also pretty good. Like Travis CI, they have a free version for open source, which can be a plus. They also show coverage as a percentage and file by file.

How would you handle incremental SQL patches using Gradle

I am currently working on a product that relies heavily on database logic/functions to realize certain business cases. After having a hard time with quarterly live releases, we decided to integrate our projects into a CI environment and to set up a continuous delivery process as the final goal.
At the moment the database-related projects rely heavily on shell scripts. These scripts are triggered on each release and take care of the incremental import of SQL patches (e.g. projectX_v_4_0.sql, projectX_v_4_1.sql, ... projectX_v_4_n.sql).
Regretfully, this approach is very error prone, and the script logic is not verified/tested at all. Since our experience with Gradle has been very good in the past, we decided to evaluate Gradle as an alternative to the existing shell scripts.
My question now is: how would you handle the sequential import of these SQL patches? Is there a framework you could recommend, or would you prefer to execute the psql command from inside Gradle, as the shell scripts did before?
Thanks for any hints/recommendations and general thoughts!
Have a look at Liquibase and Flyway. A Gradle plugin is available for both tools.
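To give a rough idea of the mechanism (the URL and credentials below are placeholders): Flyway applies versioned migration scripts in order and records each one in a schema history table, so every patch runs exactly once. The existing files would need to follow its naming convention, e.g. V4_0__projectX.sql instead of projectX_v_4_0.sql. The Gradle plugin essentially drives the same call as this Java API sketch:

```java
import org.flywaydb.core.Flyway;

public class MigrateDatabase {

    public static void main(String[] args) {
        // Placeholder URL and credentials -- in a Gradle build these would come from project properties.
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:postgresql://localhost:5432/projectx", "user", "password")
                .locations("filesystem:sql/patches") // directory holding V4_0__*.sql, V4_1__*.sql, ...
                .load();

        // Applies any pending versioned patches in order and records them in the schema history table.
        flyway.migrate();
    }
}
```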

Is there a way to 'test run' an ant build?

Is there a way to run an Ant build such that you get output of what the build would do, but without actually doing it?
That is to say, it would list all of the commands that would be submitted to the system, output the expansion of all filesets, etc.
When I search for 'ant' and 'test', I get overwhelming hits for running tests with Ant. Any suggestions on actually testing Ant build files?
It seems that you are looking for a "dry run".
I googled it a bit and found no evidence that this is supported.
Here's a Bugzilla request for that feature that explains things a bit:
https://issues.apache.org/bugzilla/show_bug.cgi?id=35464
This is impossible in theory and in practice. In theory, you cannot test a program meaningfully without actually running it (basically the halting problem).
In practice, since individual Ant tasks very often depend on each other's output, this would be quite pointless for the vast majority of Ant scripts. Most of them compile some source code and build JARs from the class files - but what would the fileset for the JAR contain if the compiler didn't actually run?
The proper way to test an Ant script is to run it regularly, but on a test system, possibly a VM image that you can restore to its original state easily.
Here's a problem: You have target #1 that builds a bunch of stuff, then target #2 that copies it.
You run your Ant script in test mode and it pretends to do target #1. Now it comes to target #2 and there's nothing to copy. What should target #2 do? Things can get even more confusing when you have if and unless clauses in your Ant targets.
I know that Make has a command line parameter that tells it to run without doing a build, but I never found it all that useful. Maybe that's why Ant doesn't have one.
Ant does have a -k parameter to tell it to keep going if something failed. You might find that useful.
As Michael already said, that's what test systems (VMs come in handy here) are for.
From my Ant bookmarks: some years ago a tool called "Virtual Ant" was announced. I never tried it, so don't regard it as a tip, just as something someone heard of.
From what the site says:
"With Virtual Ant you no longer have to get your hands dirty with XML to create or edit Ant build scripts. Work in a completely virtualized environment similar to Windows Explorer and run your tasks on a Virtual File System to see what they do, in real time, without affecting your real file system*. The actual Ant build script is generated in the background."
Hm, sounds too good to be true ;-)
"..without affecting your real file system.." might be what you asked for!?
They provide a 30-day trial license, so you won't lose any money, only the time it takes to have a look.

Is there a tool for creating a historical report out of JUnit/NUnit results

Looking for a way to get a visual report about:
overall test success percentage over time (information about whether and how quickly tests are going greener)
visualised single test results over time (to easily notice a test that has gone red after being green for a long time, or, vice versa, to pay attention to a test that has just gone green)
any other visual statistics that would benefit testers and the project as a whole
Basically, a tool that would generate results from the whole test results directory, not just from a single (daily) run.
Generally it seems it could be done using XSLT, but it doesn't seem to have much flexibility to work with multiple files at the same time.
Does such a tool exist already?
I'll be bold enough to claim that most continuous integration engines, such as Hudson (for Java), provide such a capability either natively or through plugins. In Hudson's case there are a few code coverage plugins available already, and I think it produces basic graphs from unit tests automatically by itself.
Oh, and remember to configure the CI properly; for example, our Hudson polls CVS every 10 minutes and, if it sees any changes, it does all the associated tricks (get the updated .java files, compile, run tests, verify dependencies etc.) to see whether the build is still OK.
Hudson will do this, and it will work with NUnit (here), JUnit (natively), and MSTest.exe tests using the steps I outline here. It does all that you require and more. Even if you want it ONLY to run tests and give you feedback on those, it can.
There's a new report supporting NUnit/JUnit called Allure. To retrieve information from NUnit you need to use the NUnit adapter; for JUnit, read the following wiki page. You can use it with Jenkins via the respective plugin.