Adding Plain SQL Tables to Grails App instead of using ORM?

In Grails, how can plain SQL/DDL be used to create / drop tables in the same manner they would be if one were using GORM / ORM?
For example, when using GORM / ORM, the tables used for persistence are regularly created, dropped, and inserted into during integration tests and during execution of the application.
I know there is a way to do this using just Groovy as shown in the example named "Advanced Usage" here, but I'm looking for something more along the lines of being built into the framework already, something where I can specify an SQL file with DDL to be loaded.

As far as I know, there is no such support built into Grails, so you'll have to write it yourself. Luckily it shouldn't be too difficult. Here's an implementation plan (a sketch follows the list):
Store your DDL file in the conf directory
In BootStrap.groovy, dependency-inject the DataSource Spring bean
In the init closure of BootStrap.groovy, use the DataSource to get a Connection to the DB
Using the Connection, create the database and execute the SQL statements in the DDL file against it
In the destroy closure of BootStrap.groovy, drop the database
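A minimal sketch of that plan, assuming the DDL lives in grails-app/conf/create.sql with a matching grails-app/conf/drop.sql (both file names are illustrative, not a Grails convention):

import groovy.sql.Sql

class BootStrap {

    def dataSource   // Spring injects the DataSource bean by name

    // Naive script runner: assumes ';' appears only between statements
    private void runScript(String path) {
        def sql = new Sql(dataSource)
        new File(path).text.split(';').each { stmt ->
            if (stmt.trim()) {
                sql.execute(stmt.trim())
            }
        }
        sql.close()
    }

    def init = { servletContext ->
        runScript('grails-app/conf/create.sql')
    }

    def destroy = {
        runScript('grails-app/conf/drop.sql')
    }
}

Note that the destroy closure only runs on a clean shutdown, so a killed process can leave the schema behind.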

Related

Use Liquibase autogenerated xml for Corda Enterprise DB migration

I switched to Corda Enterprise mainly to see how it handles automated database migration.
The documentation here says that tools-database-manager generates only the SQL version of the Liquibase script for the initial DB, and that the SQL version is database-specific, so it should not be used for production.
But it is also possible to generate the XML with the liquibase command-line tool, using this command:
/snap/bin/liquibase --url="jdbc:h2:tcp://localhost:10039/node" --driver=org.h2.Driver --classpath=/home/corda/Downloads/h2.jar generateChangeLog
I did that, then removed all the changelogs related to Corda's internal tables, leaving only the ones for my own tables, and everything seems to work.
So the question is: might this approach have hidden dangers that I don't know about? Why else did the Corda team develop tools-database-manager, and why don't they yet support XML generation with it?
And this leads to another question: what if, for example, I forget to include one of my tables in the initial script? Corda does not seem to complain about it. Won't my table be created? Will I ever be able to migrate that table if it is missing from the initial script?
Firstly, tools-database-manager is a helper tool available to make it easy for developers to perform database migration.
Let's say you have 2 nodes in your network, each using a different database. PartyA uses PostgreSQL and PartyB uses Oracle. If PartyA uses this tool to create the migration script by connecting to PostgreSQL, it will output SQL statements specific to PostgreSQL.
Hence the generated script is not portable, which is why it is said to be database-specific.
Also, you do not want to trust a script blindly and fire it directly at your production database - it contains DDL statements. So it is strongly recommended that every time a script is generated, you look through it manually and make sure you know what it is doing.
There are a lot of enhancements going on in this space, supporting XML for migration script being one of them.
As mentioned earlier, you should manually look at the migration script. If you forget to add one of your tables, Corda will not complain; it will fail sometime later, when your code tries to access that table.
Yes, you can stop the node and create the table again by adding a create table script.
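For example, a changeset for a table missing from the initial script might look like this in Liquibase XML (a minimal sketch; the id, author, table and column names are all illustrative):

<changeSet id="add-missing-table" author="me">
    <createTable tableName="MY_MISSING_TABLE">
        <column name="ID" type="INT">
            <constraints primaryKey="true" nullable="false"/>
        </column>
        <column name="NAME" type="VARCHAR(255)"/>
    </createTable>
</changeSet>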

AS400 SQL query similar to CLRLIB (clear library) in native AS400

I'm working on an AS400 database and I need to manipulate libraries/collections with SQL.
I need to recreate something similar to the CLRLIB command, but I can't find a good way to do this.
Is there a way to delete all the tables from a library with an SQL query?
Maybe I can drop the collection and create a new one with the same name, but I don't know if this is a good way to clear the library.
RESOLVED:
Thanks to Buck Calabro for his solution.
I use the following statement to call CLRLIB from SQL:
CALL QSYS.QCMDEXC('CLRLIB LIB_NAME ASPDEV(ASP_NAME)', 0000000032.00000)
Where LIB_NAME is the name of the library I want to clear, ASP_NAME is the name of the ASP where the library is, and 0000000032.00000 is the command length.
(note that the term COLLECTION has been deprecated, SCHEMA is the current term)
Since a library can contain both SQL and non-SQL objects, there's no SQL way to delete every possible object type.
Dropping the schema and recreating it might work. But note that if the library is in a job's library list, it will have a lock on it and you will not be able to drop it. Also, unless the library was originally created via CREATE SCHEMA (or CREATE COLLECTION) you're going to end up with differences.
CRTLIB creates an empty library, CREATE SCHEMA creates a library plus objects needed for automatic journaling and a dozen or so SQL system views.
Read Charles' answer - there may be objects in your schema that you want to keep (data areas, programs, display and printer files, etc.) If the problem is to delete all of the tables so you can re-build them, then look at the various system catalog tables: SYSTABLES, SYSVIEWS, SYSINDEXES, etc. The system catalog 'knows' about all of the SQL tables, indexes, views, stored procedures, triggers and so on. You could read the catalog and issue the appropriate SQL DROP statements, as in the sketch below.
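A minimal sketch of that catalog-driven approach over JDBC (assuming the jt400 driver; the host and library names are illustrative):

import groovy.sql.Sql

def sql = Sql.newInstance(
        'jdbc:as400://my-as400-host', 'user', 'password',
        'com.ibm.as400.access.AS400JDBCDriver')

// The catalog lists every SQL table in the library (TABLE_TYPE 'T');
// views, indexes, etc. would need their own passes over SYSVIEWS,
// SYSINDEXES, and so on.
def tables = sql.rows(
        "SELECT TABLE_NAME FROM QSYS2.SYSTABLES " +
        "WHERE TABLE_SCHEMA = 'MYLIB' AND TABLE_TYPE = 'T'")

tables.each { row ->
    // 'as String' stops Groovy treating the GString as a parameterized query
    sql.execute("DROP TABLE MYLIB.${row.TABLE_NAME}" as String)
}
sql.close()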

How to avoid manually writing/managing SQL

My team and I are rapidly developing a webapp backed by an Oracle DB. We use Maven's Flyway plugin to manage our DB creation and population from SQL INSERT scripts. Typically we add 3-4 tables per sprint and/or modify the existing table structures.
We model the schema in an external tool that generates the schema including the constraints; we run that in first, followed by the SQL INSERTs, to ensure the integrity of all the data.
We spend too much time managing the changes to the SQL to cover the new tables - by this I mean adding the extra column data to the existing SQL INSERT statements, not to mention manually creating the new SQL INSERT data, particularly when it references a foreign key.
Surely there is another way - maybe maintaining the raw data in Excel and passing it through a parser to the DB. Does anyone have any ideas?
10 tables so far and up to 1000 SQL statements; the DB is not live, so we tear it down on every build.
Thanks
Edit: The inserted data is static reference data the platform depends on to function - menus etc.
The architecture is Tomcat, JSF, Spring, JPA, Oracle
Please store your raw data in tables in the database - hey, why on earth would you want to use Excel for this? You have Oracle Database - the best tool for the job!
Load your unpolished data into regular tables in the database using SQL*Loader or external tables.
From there you have SQL - the most powerful RDBMS tool for manipulating your data.
NEVER do slow-by-slow (row-by-row) inserts - that is what 1000 individual SQL statements are. Use CTAS instead.
Add/enable the constraints AFTER you have loaded all the data.
create table t as select * from raw_data;
or
insert into t (x,y,z) select x,y,z from raw_data;
Using this method, you can bypass conventional-path processing and do direct-path inserts (CTAS is direct-path by default; for the INSERT ... SELECT form, add the /*+ APPEND */ hint). This can even be done in parallel to make your data go into the database super fast!
Do all of your data manipulation in SQL or PL/SQL (not in the application).
Please invest time learning the Oracle Database. It is full of features for you to use!
Don't just use it as a data dump (a place where you merely store your data). Create packages - interfaces to your application - your API to the database.
Don't just throw around thousands of statements compiled into your application. It will get messy.
Build your business logic inside the database in PL/SQL - use your application for presentation.
Best of luck!
Alternatively, you also have the option to implement a Java-based migration. It could read whatever input data you have (Excel, CSV, ...) and do the proper inserts; a sketch follows.
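A minimal sketch of such a migration, written here in Groovy against Flyway's Java-based migration API (BaseJavaMigration, in recent Flyway versions); the CSV file, table and column names are illustrative:

import org.flywaydb.core.api.migration.BaseJavaMigration
import org.flywaydb.core.api.migration.Context

class V2__Load_reference_data extends BaseJavaMigration {
    @Override
    void migrate(Context context) throws Exception {
        def stmt = context.connection.prepareStatement(
                'INSERT INTO menu_item (id, label) VALUES (?, ?)')
        // One JDBC batch instead of a thousand individual INSERT statements
        getClass().getResourceAsStream('/menu_items.csv').eachLine { line ->
            def parts = line.split(',', 2)
            stmt.setLong(1, parts[0] as long)
            stmt.setString(2, parts[1])
            stmt.addBatch()
        }
        stmt.executeBatch()
        stmt.close()
    }
}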

How can I provide users with the functionality of the DBUnit DatabaseOperation methods from a web interface?

I am currently updating a Java-based web application which allows database developers to create stored procedure regression test suites for database testing.
Currently, for the test setup, execution and clean-up stages, the user is provided with text boxes where they can enter SQL code, which is executed by the isql command.
I would like to extend the application to use DBUnit's DatabaseOperation methods to provide more ways to set up the state of the database than just SQL statements. The main reason for using DBUnit rather than plain SQL statements is to be able to create and store XML and XLS DataSets on a server, where they can be associated with their test cases and used for data setup.
My question is:
How can I provide users with the functionality of the DBUnit DatabaseOperation methods from a web interface?
I have considered:
Creating a simple programming language and a parser for some simple syntax built around the DBUnit method names, each taking as a parameter the file location of an XML or XLS DataSet. I was thinking of allowing the user to register the files they need with the web app, which would catalogue them and give each file an identifier that could be passed as a parameter to the methods in this simple language.
Creating an XML DTD which lets the user specify operations and parameters. If I went with this approach, how could I execute the methods and their parameters that I parse from the XML document?
Creating a table in the database which stores the method and an FK relation to a catalogued DataSet file. However, I don't think this would be a good solution, because data entry would be tedious.
Thanks for your help.
This actually seems like a rather simple problem when I think about it again.
DBUnit has plugins for Maven and Ant integration which run tests written in XML in the Maven POM file.
I'm going to take a similar approach and go ahead with the XML option, using the Xerces-J parser to build a collection of Operation, Export and Compare objects which are run in order (a sketch of the execution side is below).
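A minimal sketch of the execution side in Groovy, assuming DBUnit's standard API; the operation names are DBUnit's built-in constants, while runOperation and the cataloged-file handling are placeholders for whatever the XML parsing produces:

import org.dbunit.database.DatabaseConnection
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder
import org.dbunit.operation.DatabaseOperation

// Map the operation names parsed from the XML document onto
// DBUnit's built-in DatabaseOperation constants
def operations = [
        CLEAN_INSERT: DatabaseOperation.CLEAN_INSERT,
        INSERT      : DatabaseOperation.INSERT,
        REFRESH     : DatabaseOperation.REFRESH,
        DELETE_ALL  : DatabaseOperation.DELETE_ALL,
]

def runOperation = { String name, File catalogedDataSet, java.sql.Connection jdbc ->
    def connection = new DatabaseConnection(jdbc)
    def dataSet = new FlatXmlDataSetBuilder().build(catalogedDataSet)
    operations[name].execute(connection, dataSet)
}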

Will I be able to create dynamic classes at runtime in Oslo?

For instance, will I be able to create an application that allows users to create and modify existing types at runtime? Will I be able to persist instances of those types in SQL without having to worry about the user who adds 100,000 records and expects a (really) fast query on them?
Think SharePoint Content Types... but on steroids. Oslo steroids - Possible or not?
That would be awesome!
In the demos, they create the new extent, and then an (updatable) view with the old name.
But I haven't heard of a feature that would automatically merge existing data into the new structure. For now, they suggest using SQL Server Integration Services for that part - but then it's a DB admin task.
Regarding performance, after MSchema is compiled to SQL statements, it's all plain SQL Server performance.