How to connect to MySQL from Hudson? - selenium

I am executing regression tests created with Selenium and triggered from Hudson. After these tests I need to clean up the DB. Is there any option in Hudson to connect to the DB and execute a script? Or what is the best way to do this?
Thanks in advance
by Mani

There is no built-in plugin in Hudson/Jenkins that I'm aware of, but you can have the Hudson build process execute a shell script/batch file that in turn can do whatever any script can do:
Shell scripts and Windows Batch commands
Depending on your situation it might be preferable to add this step to an overall build script (as an <exec> task in Ant, for example).
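For example, a minimal sketch of an "Execute shell" build step that runs a cleanup script against MySQL (the host, credentials, database name, and the cleanup.sql file are all placeholders for your own setup):

    mysql -h dbhost -u test_user -p"$DB_PASSWORD" regression_db < cleanup.sql

The same idea works as a Windows batch command with mysql.exe, or wrapped in an Ant <exec> task as mentioned above.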

You can do as stated above, or if you connect to the database using JPA or Hibernate you can set those up so the database is recreated each time. That's how I do it in my case. From the question it's hard to tell which method you use to connect to the database.
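For instance, with Hibernate the standard schema-management property can be set so the schema is dropped and recreated on each run (where you put it depends on whether you configure via hibernate.cfg.xml, persistence.xml, or Spring):

    hibernate.hbm2ddl.auto=create-drop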

My tests are being invoked through TestNG and before they run, I clean up the DB via JDBC.
Since you didn't say which DB you are using, I recommend Googling for "[DB] JDBC example", changing [DB] to whatever DBMS you are using :)
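For illustration, a minimal sketch of that JDBC cleanup hooked into TestNG (the MySQL URL, credentials, and table names are placeholders; the appropriate JDBC driver must be on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import org.testng.annotations.BeforeSuite;

    public class DbCleanup {
        // Placeholder connection details; adjust for your DBMS and schema
        private static final String URL = "jdbc:mysql://localhost:3306/regression_db";

        @BeforeSuite
        public void cleanDatabase() throws Exception {
            try (Connection con = DriverManager.getConnection(URL, "test_user", "secret");
                 Statement st = con.createStatement()) {
                st.executeUpdate("DELETE FROM test_orders");    // hypothetical tables
                st.executeUpdate("DELETE FROM test_customers");
            }
        }
    }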

Related

Automating SQL scripts in Toad for Oracle

I'm not a DBA, so bear with me.
I've scheduled tasks through Toad's Automation Designer, but this uses the Windows Task Scheduler. The database is Oracle 12c.
How can I ensure my scripts execute on time without running a local machine with scheduled tasks? Can this be done without additional software? Thanks in advance.
I don't know what those scripts do.
However, if they can be rewritten as stored procedures, then you could schedule them as database jobs using the DBMS_SCHEDULER package (or DBMS_JOB, which is simpler but still quite usable).
As a matter of fact, DBMS_SCHEDULER can even run your operating system scripts, using job_type => 'external_script'. Read the documentation for more info.
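For instance, a minimal sketch of a DBMS_SCHEDULER job that runs a stored procedure nightly (the job and procedure names are made up for illustration; adjust the schedule to your needs):

    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'NIGHTLY_CLEANUP',        -- hypothetical job name
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'MYSCHEMA.CLEANUP_PROC',  -- hypothetical procedure
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY; BYHOUR=2',
        enabled         => TRUE);
    END;
    /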
See if it helps.

How to set up Redgate SQL Source Control with Continuous Integration

My question is this:
What is the best setup for managing SQL changes in a development team?
Our team consists of 4 developers, each with their own copy of a database.
When committing SQL/Application changes to our TFS server, we wish to ensure that any build errors do not get propagated to other developers. So, we are going to implement continuous integration to assist with this.
The idea is that:
1. SQL and application code changes are committed to TFS.
2. A central database gets the SQL updates, and we build the application.
3. Unit tests are executed on the build server.
4. If any of these steps fail, the check-in is rejected and the database is rolled back to the state it was in before the commit.
What is the best way to set up Redgate SQL Source Control to implement this?
If you want to use SQL Source Control, here is a possible setup to consider based on your requirements.
For each developer machine:
Install SQL Source Control
Link each developer database to your TFS repository using the Dedicated database development model
Install SQL Prompt to write your SQL more easily
Configure SQL Test for writing unit tests for SQL Server
On the build server:
Install Redgate DLM Automation (there are Add-Ons to simplify setup)
Configure a Validate build task to validate the schema by checking the database can be built successfully from scratch
Configure a Test build task to run SQL tests
Configure a Sync build task to update your central database with the SQL updates
Run your application unit tests
If the tests fail, you can run a custom script that reverts the last check-in, use the Sync build task again to roll back the database changes, and trigger a new build. You can use the Redgate DLM Automation PowerShell cmdlets to do this, as in the sketch below.
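A rough sketch of that rollback step (cmdlet names as documented for the DLM Automation PowerShell module; verify them against your installed version, and note the server, database, and scripts-folder paths are placeholders):

    # Connect to the central database (placeholder connection details)
    $central = New-DlmDatabaseConnection -ServerInstance "buildsrv\sql2014" -Database "CentralDb"
    # Sync the schema back to the last known-good source-controlled scripts folder
    Sync-DlmDatabaseSchema -Source "C:\state\last-good-scripts" -Target $central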
The last step could be tricky. I honestly prefer, and recommend, using branches instead of relying on a single central database. That way each developer can work fully independently, and you merge new changes into the master branch only when the work has been validated on each individual branch.
If you want to go further and also implement deployment you can use Redgate DLM Automation Deployment to create a release database package and deploy your database changes to production directly from your build server or using a release tool like Octopus Deploy.
Finally, I would also advise you to have a look at Redgate ReadyRoll, especially if you are considering a migration-first approach to database changes.
As you can see, there are different ways of using Redgate tools to manage database changes, and there is no single best way of setting them up. It always depends on the specific requirements and problems you need to solve.
Hope this helps.
You can use a Database Project. It can contain the entire database schema plus stored procedures. During a build, it will verify that the stored procedures match the schema.
Then enable the Gated Check-in option in the build definition; it accepts check-ins only if the submitted changes merge and build successfully.
As for the data written to the database, it depends on your test method: you can have the method delete the data if the test fails, or, better, you shouldn't be writing to a real database at all. Instead you should mock the database classes. This way you don't actually have to connect to and modify the database, and therefore no cleanup is needed.
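As a minimal sketch of the mocking approach (using Mockito; CustomerRepository and Customer are hypothetical types standing in for your real data-access classes):

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    // Hypothetical data-access interface the code under test depends on
    CustomerRepository repo = mock(CustomerRepository.class);
    when(repo.findByName("Alice")).thenReturn(new Customer("Alice"));
    // Hand the mock to the code under test: no real database is touched,
    // so there is nothing to clean up afterwards.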
For more information you can refer to the articles below:
How To Unit Test – Interacting with the Database
Database cleanup after Junit tests

What are the different approaches for deploying DB changes using TFS 2015?

Currently, we are manually running DB scripts (SQL Server 2012) outside of our CI/CD deployment. What are some ways (including toolsets) we can automate deployment of DB changes using TFS 2015 Update 3?
There are really two approaches here, both of which work with TFS. TFS just facilitates the execution of whatever scripting you use to update your database, including your custom, handcrafted scripts.
There is the state-based approach, which uses a comparison technology to look at your VCS/dev/test/staging database and compare it to production. SQL Source Control and the DLM Automation Suite from Redgate Software do this, as do other comparison tools. What you would do is use a command-line or programmatic interface to set your source and target, capture the output, and then use this as an artifact in your release process. I might include a review of that artifact as a step in your flow.
Note that there are some changes state-based comparisons don't handle well: renames, splits, merges, data movement, and a few others. Some comparison tools have ways around this, some do not. Be aware this may be an issue. If you have a more mature database, perhaps not, but you should consider this. SQL Source Control allows custom migration scripts, which can handle these issues.
The other approach is a script runner or migration strategy where each change you make to a dev database is captured as an ordered script and a framework executes these in order, if they are needed. This is preferred by some people since you can see exactly what code will be executed at dev and deployment time. ReadyRoll from Redgate Software, Liquibase, Rails Migrations, DBUp, FlywayDB, all use this strategy.
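For example, with the migrations approach your repository ends up holding an ordered set of scripts, and the framework records which ones have already been applied to each environment and runs only the new ones, in order. The naming convention shown here is Flyway's, and the file names are invented for illustration:

    V001__create_customer_table.sql
    V002__add_email_column.sql
    V003__backfill_email_values.sql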
Neither of these is better or worse. Both work, both have pros and cons, but really the choice comes down to your comfort level and preference.
Disclosure: I work for Redgate Software.
If deploying DB changes just means using SQL Server Database Projects (.sqlproj files) with Team Foundation Build in Team Foundation Server, there are several ways to achieve this:
Use an MSBuild task with some arguments to publish your SQL project during the build.
Add a deploy target in your .sqlproj file and run the target after the build completes.
Or add a "Batch Script" step in your build definition to run SqlPackage.exe to publish the .dacpac file (see the sketch below).
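For example, a batch-script sketch of the SqlPackage.exe option (the install path, .dacpac file, and server/database names are placeholders for your own environment):

    "C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe" ^
      /Action:Publish ^
      /SourceFile:"MyDatabase.dacpac" ^
      /TargetServerName:"dbserver" ^
      /TargetDatabaseName:"MyDatabase"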
For more details, please refer to this blog: Deploying SSDT During Local and Server Build.
As for using TFS 2015, you can also try the SQL Server Database Deployment task:
Use this task to deploy a SQL Server database to an existing SQL Server instance. The task uses a DACPAC and SqlPackage.exe, which provides fine-grained control over database creation and upgrades.

Exporting In-Memory HSQL Database Content as SQL During Maven Build

During a Maven build's integration test phase I am populating an in-memory HSQL database based on the performed tests. Afterwards, I would like to capture this state by exporting the database content as SQL statements (for later import).
Is there some Maven plugin or command line tool suitable for this task? For MySQL we are using mysqldump, so I am basically looking for an equivalent for HSQL.
With HSQLDB use:
SCRIPT <filepath>
Example:
SCRIPT '/opt/dump/mydb.script'
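If you want to trigger the export from the build itself, one option is to issue the SCRIPT statement over JDBC at the end of the integration-test phase (a sketch; the in-memory URL, credentials, and dump path are placeholders matching a typical HSQLDB test setup):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class HsqlDump {
        public static void main(String[] args) throws Exception {
            // Connect to the same in-memory database the tests populated
            try (Connection con = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "SA", "");
                 Statement st = con.createStatement()) {
                // SCRIPT writes the schema and data out as SQL statements
                st.execute("SCRIPT '/opt/dump/mydb.script'");
            }
        }
    }

You could run this via the exec-maven-plugin, or from a test listener, in the post-integration-test phase.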
There's the dbunit plugin, for one, which should work for a variety of databases.

Creating database (not populating it) through Ant build file

I've managed to write an Ant script to populate my databases (a simple script that runs some .sql files, like 'create', 'populate', 'drop', etc.).
Is there any way in hell that an Ant script can create the database itself from scratch? This is for JavaDB/Derby (from the GlassFish bundle). Since it's a project for university, we find ourselves recreating the database on different machines all the time, and I would like to avoid this. Also it'd be great to know.
Normally I would create a database through NetBeans; it asks for a name, location, and username, and then creates the connection URL jdbc:derby://localhost:1527/DBName
I understand this is probably a bit too DB-related, but since Ant seems like a good tool, maybe it could help... or if not, maybe some other way (maybe another .sql file?)
Thanks for any replies.
Apache Derby's JDBC driver lets you create a database using the ;create=true flag in the connection string:
jdbc:derby:MyDatabase;create=true
You can do this from Ant by running the ij tool from the command line (or as a Java app). Here's a link to the documentation.
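Alternatively, since Ant's built-in <sql> task just opens a JDBC connection, a minimal sketch like this creates the database as a side effect of connecting (the driver jar location, database name, and credentials are placeholders):

    <target name="create-db">
      <sql driver="org.apache.derby.jdbc.ClientDriver"
           url="jdbc:derby://localhost:1527/MyDatabase;create=true"
           userid="app"
           password="app"
           classpath="lib/derbyclient.jar">
        VALUES 1;
      </sql>
    </target>

Any harmless statement will do as the task body; it's connecting with create=true that creates the database.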
I've created databases via Ant. I don't recall having a problem executing DDL with the SQL task. You might want to check out DBUnit.