I am currently working on a product that relies heavily on database logic/functions to implement certain business cases. After having a hard time with quarterly live releases, we decided to integrate our projects into a CI environment, with a continuous delivery process as the final goal.
At the moment the database-related projects rely heavily on shell scripts. These scripts are triggered on each release and take care of the incremental import of SQL patches (e.g. projectX_v_4_0.sql, projectX_v_4_1.sql, ... projectX_v_4_n.sql).
Unfortunately this approach is very error-prone, and the script logic is not verified/tested at all. Since our experience with Gradle has been very good in the past, we decided to evaluate Gradle as an alternative to the existing shell scripts.
My question now is: how would you handle the sequential import of these SQL patches? Is there a framework you could recommend, or would you prefer to execute the psql command from inside Gradle, as the shell scripts did before?
Thanks for any hints/recommendations and general thoughts!
Have a look at Liquibase and Flyway. A Gradle plugin is available for both tools.
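Flyway in particular picks up versioned scripts from a configured location and applies the missing ones in order, which maps directly onto the projectX_v_4_n.sql scheme. A minimal sketch of the Gradle setup (the plugin version, JDBC URL, credentials and patch directory below are placeholder assumptions, not taken from the question):

    // build.gradle.kts - a sketch only; connection details and paths are illustrative.
    plugins {
        id("org.flywaydb.flyway") version "9.22.3"
    }

    flyway {
        url = "jdbc:postgresql://localhost:5432/projectx"   // placeholder connection
        user = "projectx"
        password = System.getenv("DB_PASSWORD") ?: ""
        // Flyway applies versioned scripts (V4_0__patch.sql, V4_1__patch.sql, ...) in order
        locations = arrayOf("filesystem:sql/patches")
    }

With that in place, ./gradlew flywayMigrate applies only the patches that have not been run yet, and Flyway records the applied versions in its schema history table, so the "which patch comes next" logic no longer lives in shell scripts.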
I am working on automation in Selenium with Java and TestNG. I have completed all my test scripts, but I don't have practical experience of working with Selenium in the IT industry.
My question is: once the test scripts for a specific project are complete, how are they run for regression?
1. Using Eclipse (or any IDE) on a regular basis, or
2. Building a jar file to run on a regular basis, or
3. Some other means?
Please let me know what typically happens from a company's point of view.
It really depends on the company's point of view. In my experience we did regression via Selenium (Eclipse IDE), but if the company practices continuous integration, the tests usually have to be run by a machine, so it is probably better to build a jar that writes the test results to some kind of file.
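As a rough illustration of that "runnable jar" idea (a sketch only; SmokeTest is a placeholder standing in for the real suite), the jar's entry point could drive TestNG programmatically and signal failure through the exit code:

    import org.testng.Assert
    import org.testng.TestNG
    import org.testng.annotations.Test
    import kotlin.system.exitProcess

    // Placeholder test class standing in for the real regression suite.
    class SmokeTest {
        @Test
        fun homePageLoads() {
            Assert.assertTrue(true)   // real checks (e.g. Selenium calls) would go here
        }
    }

    // Entry point of the runnable jar; the CI machine just calls `java -jar regression.jar`.
    fun main() {
        val testng = TestNG()
        testng.setTestClasses(arrayOf<Class<*>>(SmokeTest::class.java))
        testng.setOutputDirectory("test-output")   // HTML/XML results are written here
        testng.run()
        if (testng.hasFailure()) exitProcess(1)    // non-zero exit tells the CI the run failed
    }

The CI server can then schedule that command and archive the test-output directory as the "results file" mentioned above.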
Different companies follow different approaches to regression testing with automated scripts. There are no set standards or an SOP (Standard Operating Procedure) for this. The higher levels of the testing department's hierarchy (test lead etc.) have to decide which practice will give the maximum ROI.
For example, in my current organisation I run a CI system using Jenkins, which runs all the automation scripts at a specified time on the day the regression is supposed to begin - or we trigger it manually and the CI takes care of the rest.
In my previous organisation, for regression purposes, we have had a dedicated system, where we would copy all the scripts, make all the necessary updates/system changes and then trigger the scripts to have the tests run.
I believe not many big companies would follow the practice of running their regression tests individually via the Eclipse IDE, since for a whole sprint (or a whole project) there would be hundreds of test cases involving a lot of scripts, and running them via Eclipse would be pretty tedious and time-consuming. Plus, every single test script run would generate a separate report, which would be too complex to store and to debug in case of any failure.
However, as I said, this depends entirely on how the company sees the ROI and effort to be made for this.
The goal is to run a few queries against a database on each new build. Has anyone had any luck doing this without having to put SQL in Java classes or create entire new schemas to hold stored procs? Ideally you could keep the SQL in separate files that get run as soon as the build completes.
We might be using Maven and Bamboo, but I would love to hear any experiences/successes/difficulties that people have encountered.
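For context, the kind of thing I am picturing is a small standalone runner that the build invokes as its last step - a rough sketch only (written in Kotlin purely for brevity), where the JDBC URL, credentials and directory are placeholders and each file is assumed to hold a single statement:

    import java.io.File
    import java.sql.DriverManager

    // Run every *.sql file in a directory right after the build completes.
    fun main() {
        val patchDir = File("sql/post-build")   // placeholder directory of SQL files
        DriverManager.getConnection("jdbc:postgresql://localhost:5432/appdb", "app", "secret").use { conn ->
            patchDir.listFiles { f -> f.extension == "sql" }
                ?.sortedBy { it.name }          // deterministic execution order
                ?.forEach { file ->
                    println("Executing ${file.name}")
                    // Assumes one statement per file; multi-statement files would need splitting.
                    conn.createStatement().use { it.execute(file.readText()) }
                }
        }
    }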
You don't say what tools you use for writing your SQL unit tests. If you're using Steven Feuerstein's utPLSQL tool, you should read this article on Continuous Integration with Oracle PL/SQL, utPLSQL and Hudson. And even if you're not, it might provide some useful insights.
Maybe TeamCity (JetBrains) is what you're looking for. It has various build runners, including but not limited to Ant, MSBuild, NUnit, Maven and Command Line.
Just configure a TeamCity project to listen to your svn/git/hg repository for changes and then run a build: compile first and, if that succeeds, run Maven (or whatever) - or whichever way you want to do it.
/mikkel
I love the maven-versions-plugin but sometimes I forget to run it for a while. Is there a way to make a maven build fail (and thus have a continuous build fail) when certain important dependencies are out of date?
I think you're approaching this incorrectly. Mail yourself the output of the maven-versions-plugin if you want, but don't fail the build due to changes outside of your control.
Even more, why would you want to needlessly update to the latest versions? I have seen many tricky problems appear due to upgrades which have brought slight changes to previous behaviour.
This, in general, is a bad practice - updating versions automatically. There is no practical reason to use the latest version of any package. If the library you're using satisfies your requirements, you should stay with that version for security/stability reasons. And forever.
I think that maven-versions-plugin is an anti-pattern itself.
PS: When and if you want to test modules developed by different teams/programmers together, that is "integration testing". Even in this case I still think that on-the-fly version updating is the wrong approach. The root project should not do this integration testing; instead, every sub-module (or JAR, in your case) has to be responsible for integration testing itself together with the rest of the system. When a sub-module increases its version, it has to validate that everything is still fine, and only then release the new version to the repository. And while the sub-module is doing that validation, it has to depend on statically specified version numbers.
Where I am working we have the following issue:
Our current test procedure is that our business analysts test the release based on their specifications/tests. If it passes these tests, it is given to the quality department, where they test the new release and the entire system to check whether something else was broken.
Just to mention that we outsource our development. Unfortunately the releases given to us are rarely tested by the developers, and that's "the relationship" we have had with them for the last 7 years....
As a result, if the patch/release fails the tests at the functional-testing level or at the quality level, then with each new patch we need to test the whole thing again, not just the release.
Is there a way we can prevent this from happening?
You have two options:
Separate the code into independent modules so that a patch/change in one module only means you have to re-test that one module. However, due to dependencies this is effective only to a very limited degree.
Introduce automated tests so that re-testing is not as expensive. It takes some more work at first, but will definitely pay off in your scenario. You don't have to do unit tests or TDD - integration tests based on capture-replay tools are often easier to introduce in your scenario (an established project with a manual testing process). See the sketch below for what a single automated check might look like.
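As a rough illustration of option 2 (sketched with Selenium WebDriver, since a UI-level check is usually the easiest starting point for a manually tested system; the URL and element ids are placeholders, not taken from the question):

    import org.openqa.selenium.By
    import org.openqa.selenium.chrome.ChromeDriver

    // One automated regression check replacing one manual test step.
    fun main() {
        val driver = ChromeDriver()
        try {
            driver.get("https://example.internal/login")          // placeholder URL
            driver.findElement(By.id("username")).sendKeys("demo")
            driver.findElement(By.id("password")).sendKeys("demo")
            driver.findElement(By.id("loginButton")).click()
            check("Dashboard" in driver.title) {
                "Regression: expected the dashboard after login, got '${driver.title}'"
            }
            println("Login smoke check passed")
        } finally {
            driver.quit()
        }
    }

Once a handful of such checks exist, re-running them after every patch costs minutes instead of a full manual regression cycle.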
Implement a continuous testing framework that you and the developers can access. Something like CruiseControl.NET and NUnit to automate the functional tests.
Given access, they'll be able to see the nightly test results on the build. Heck, they don't even need to test it themselves: your tests will be run every night (or regularly), and they'll know straight away which faults they've caused, or fixed, if any.
Define a 'Quality SLA' - namely that all unit tests must pass, all new code must have a certain level of coverage, all new code must have a certain score in some static analysis checker.
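One way to make such a 'Quality SLA' executable rather than aspirational is to let the build itself enforce the thresholds. A sketch using Gradle with JaCoCo (a Java-side equivalent of the .NET tools mentioned above; the 80% line-coverage figure is just an example value):

    // build.gradle.kts
    plugins {
        java
        jacoco
    }

    tasks.jacocoTestCoverageVerification {
        violationRules {
            rule {
                limit {
                    counter = "LINE"
                    value = "COVEREDRATIO"
                    minimum = "0.80".toBigDecimal()   // example threshold; agree on the real number in the SLA
                }
            }
        }
    }

    // Make the standard check task fail when the coverage rule is violated.
    tasks.check {
        dependsOn(tasks.jacocoTestCoverageVerification)
    }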
Of course anything like this can be gamed, so have regular post release debriefs where you discuss areas of concern and put in place contingency to avoid it in future.
Set up a Go server with its dashboard and run a Go agent GUI at your end.
http://www.thoughtworks-studios.com/forms/form/go/download
Looking for a way to get a visual report about:
overall test success percentage over time (information about whether and how quickly tests are getting greener)
visualised per-test results over time (to easily notice a test that has gone red after being green for a long time, or, vice versa, to pay attention to a test that has just gone green)
any other visual statistics that would benefit testers and the project as a whole
Basically, a tool that would generate results from the whole test-results directory, not just from a single (daily) run.
In general it seems this could be done using XSLT, but XSLT doesn't seem to offer much flexibility for working with multiple files at the same time.
Does such a tool exist already?
I feel fairly confident claiming that most continuous integration engines, such as Hudson (for Java), provide such a capability either natively or through plugins. In Hudson's case there are a few code coverage plugins available already, and I think it produces basic graphs from unit test results automatically by itself.
Oh, and remember to configure the CI properly: for example, our Hudson polls CVS every 10 minutes and, if it sees any changes, does all the associated tricks (gets the updated .java files, compiles, runs the tests, verifies dependencies etc.) to see whether the build is still OK.
Hudson will do this, and it will work with NUnit (here), JUnit (natively), and MSTest.exe tests using the steps I outline here. It does all that you require and more. Even if you want it to ONLY run tests and give you feedback on those, it can.
There's a new report of that kind, supporting NUnit/JUnit, called Allure. To retrieve information from NUnit you need to use the NUnit adapter; for JUnit, read the following wiki page. You can use it with Jenkins via the respective plugin.