Liquibase - advantages of using createTable over CREATE TABLE

In my current Liquibase project I usually use plain SQL to create tables, because I believe it gives better control over the DDL syntax and I can paste exactly what I modeled. Of course I'm losing the automatic rollback functionality, but apart from that - are there any other benefits of using createTable elements over plain SQL?
I thought it might have some advantages when you switch to a different database server, but even in that case I would probably just create a different version of the SQL manually (again, for better control over the DDL syntax).
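For what it's worth, the rollback gap can be narrowed even with plain SQL: Liquibase's formatted SQL changelogs let you keep hand-written DDL and declare the rollback yourself. A minimal sketch (author, changeset id, and table are made up):

    --liquibase formatted sql
    --changeset myname:create-person-table
    CREATE TABLE person (
        id INT NOT NULL PRIMARY KEY,
        name VARCHAR(100) NOT NULL
    );
    --rollback DROP TABLE person;

The createTable element gives you that rollback without the extra comment, which is essentially the trade-off being asked about here.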

Related

Why would you create an SQL table via a query?

In most tutorials on database design, you are shown how to create and manipulate tables via queries. Sorry for a newbie question, but when using SQL Server Management Studio, why would you create a table using a query and not just use the built-in functions to create tables and add attributes to them? (e.g. right-click > Create Table, go to design view and add columns and specify domains, indexes, keys, etc.)
In any development effort, multiple environments are used: a development environment at the coding stage, then QA, then Model Office/UAT/Production.
Using scripts ensures that changes can be promoted automatically. It also ensures that manual errors are either eliminated or kept to a minimum.
Hand-coding the tables in each environment would be expensive and error prone. Scripts make it possible to have the same table structure everywhere.
I create tables using queries (and I store them in .sql files) because that way I can re-run them at a later time to recreate the full database structure.
This sounds more useful in a development/testing environment than in production, where I guess you wouldn't drop and re-create the entire database that often.
To add a reason not already mentioned - it allows the scripts to be audited/reviewed and potentially stored in a version control or issue tracking system. This will be necessary in complex or secure scenarios, especially in a fast-changing environment.
It looks more professional to write queries in tutorials :). In real life, it's simpler to alter a table through the UI, but then again, you forget the SQL syntax that way. If you're not a database admin, it's not that important to know SQL syntax from A to Z, in my opinion.

Creating a table conditionally in SQLite

This is probably very basic stuff. I want to create a table if a certain condition is met. Basically I have a db with a version number, and if the version is as expected, a new table is created.
This is pretty straightforward to do, say, in Python, but is there a pure SQL way, or a pure SQLite way, of doing this? Basically I want to know if my update scripts could be free of any programming language other than SQL (or SQLite's SQL).
I looked at the CASE clause but it seems I can't use it as a top-level switch statement.
No, there's no way to do this. SQLite has no flow control statements and the only if condition you can specify on the CREATE TABLE command is IF NOT EXISTS.
You will have to use a scripting language to execute this logic.
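To make the limitation concrete: the only guard pure SQLite gives you is on the statement itself, and reading the schema version still has to happen in the calling code. A rough sketch (the table name is made up, and this assumes the version lives in PRAGMA user_version rather than a version table):

    -- the only built-in condition: create only if the table isn't there yet
    CREATE TABLE IF NOT EXISTS new_feature (
        id INTEGER PRIMARY KEY,
        value TEXT
    );

    -- the host program reads the version and decides whether to run the DDL
    PRAGMA user_version;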

Normalization of an existing SQL database

I have a single-table database I inherited and migrated to SQL Server, and then I normalized it by creating, linking, and filling a whole bunch of lookup-type tables that represented items in the main table. I now want to replace those items in the original table with their foreign keys. Am I stuck writing a bunch of queries or UDFs and then a giant INSERT statement to accomplish this, or is there a tool I can use to point at the various fields and have it handle the grunt work for me?
Redgate SQL Refactor comes with a 14-day evaluation period and has a "Split Table" refactoring which sounds like it might do what you need?
The feature is described thus:
Split Table splits a table into two tables, and automatically rewrites the referencing stored procedures, views, and so on. You can also use this refactoring to introduce referential integrity tables. You can select this feature from the context menu in Management Studio's Object Explorer.
I have had similar experiences. I once inherited a fairly large database that required serious overhaul to the schema before I would look at it without scorn.
Because the upgrade was fairly significant, I designed an SSIS package to migrate data from the old schema to the new. Lookup activities were helpful to map old text values to the new keys. I kept a script of my old schema and data handy and would repeatedly restore the database in a sandbox and re-migrate until I could satisfy the powers-that-be that the migration was reliable.
I found there was only a moderate learning curve to getting started with SSIS. If the tool is available to you, I recommend giving it a try.
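For reference, the hand-written route typically boils down to one UPDATE per lookup column rather than a giant INSERT: add the key column, join the main table to the lookup on the old text value, then drop the text column. A sketch in SQL Server syntax with made-up table and column names:

    -- add a column to hold the new foreign key
    ALTER TABLE main_table ADD color_id INT NULL;

    -- populate it by matching the old text value against the lookup table
    UPDATE m
    SET m.color_id = c.color_id
    FROM main_table AS m
    JOIN color_lookup AS c ON c.color_name = m.color;

    -- once verified, drop the old text column and enforce the relationship
    ALTER TABLE main_table DROP COLUMN color;
    ALTER TABLE main_table
        ADD CONSTRAINT fk_main_table_color
        FOREIGN KEY (color_id) REFERENCES color_lookup (color_id);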

Defining table structure for a database?

Up until now, my experience with databases has always been working with an intermediate definition layer that we have where I work. i.e. SQL wasn't directly written for the table definitions, but generated from an intermediate file which wrote out SQL scripts for creating the appropriate tables, upgrade scripts between schema changes, and helper functions for doing simple queries/updates/inserts/deletes from the database.
Now I'm in a situation where I don't have access to that, for reasons I won't get into, and I find myself somewhat lost at sea regarding what to do. I need to have a small number of tables in a database, and I'm unsure what's usually done to manage the table definitions.
Do people normally just use the SQL script that does the table creation as their definition, or does everyone just use an IDE that manages the definition in a separate file and regenerates the SQL script to create the tables?
I'd really prefer not to have to introduce a dependence on a specific IDE, because as we all know, developers are whiners that are prone to religious debates over small things.
Open your favorite text editor -> Start writing CREATE scripts -> Save -> Put in Source Control
That script now becomes the basis for your database. Anytime there are schema changes, they get put back into the scripts so that they don't get lost.
These become your definition.
I find it more reliable than depending on any specific IDE/Platform generating those scripts for you.
We write the scripts ourselves and store them in source control like any other code. Then the scripts that are appropriate for a particular version are all grouped together and promoted to prod together. Make sure to use ALTER TABLE when changing existing tables, because you don't want to drop and recreate them if they have data! I use a drop and recreate for all other objects though. If you need to add records to a particular table (usually a lookup of some type) we do that in scripts as well. Then that too gets promoted with the rest of the version's code.
For me, putting the scripts in source control, however they are generated, is the key step. This is how you know what you have changed for the next release. This is how you can see earlier versions and revert back easily if there is a problem. Treat database code the same way you treat all other code.
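As a small illustration of what one of those versioned scripts might contain (SQL Server syntax, invented names): existing tables are altered in place, and any lookup rows for the release travel in the same script so they get promoted with it.

    -- existing table: change it in place, never drop and recreate
    ALTER TABLE customer ADD preferred_contact VARCHAR(20) NULL;

    -- lookup data for this version is scripted too, so it promotes with the code
    INSERT INTO contact_method (code, description)
    VALUES ('SMS', 'Text message');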
You could use one of the data modelling tools that create the scripts for you, if you are starting out on a database design and eventually want the tool to generate the DDL for you. Some tools for that are Erwin, Fabforce, etc. (though not free).
If you have access to an IDE like SQL Server Management Studio, you can create them using a GUI that's pretty simple.
If you are writing your own code, it's always better to write your own scripts based on a good template, so that you cover all the properties of the table definition, like the filegroup, collation and so on. Hope this helps.
Once you do create a base copy, generate scripts from it and keep a base reference copy, so that you can make "incremental" changes to them and manage them in source control.
Though I use TOAD for Oracle, I always write the scripts to create my database objects by hand. It gives you (and your DBAs) more control over, and knowledge of, what's being created and how.
If your schema is too difficult to describe in SQL, you probably have other issues more pressing than which IDE. Use modelling documentation if you need a graphical representation, but yeah, you don't need an IDE.
There are multiple ways out there for what you are asking.
The old, traditional way is to ship a script file with your application that contains the CREATE TABLE statements.
If you are a developer, and especially a Java enterprise developer, you could generate the complete schema using a persistence library called Hibernate. Here is a how to
If you are a DBA-level user, you could take a schema export from one environment and import it into your current environment. This is standard practice among DBAs, but as you can see it requires admin access. Also, the methods depend on the database you are using (Oracle, DB2, etc.).

How should I organize my master ddl script

I am currently creating a master ddl for our database. Historically we have used backup/restore to version our database, and not maintained any ddl scripts. The schema is quite large.
My current thinking:
Break script into parts (possibly in separate scripts):
table creation
add indexes
add triggers
add constraints
Each script would get called by the master script.
I might need a script to drop constraints temporarily for testing
There may be orphaned tables in the schema, I plan to identify suspect tables.
Any other advice?
Edit: Also if anyone knows good tools to automate part of the process, we're using MS SQL 2000 (old, I know).
I think the basic idea is good.
The nice thing about building all the tables first and then building all the constraints is that the tables can be created in any order. When I've done this I had one file per table, which I put in a directory called "Tables", and then a script which executed all the files in that directory. Likewise I had a folder for constraint scripts (which handled foreign keys and indexes too), which were executed after the tables were built.
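A tiny example of why the split removes ordering problems (names are invented): the table files contain no foreign keys, so they can run in either order, and the constraint script wires them together afterwards.

    -- Tables/order_header.sql and Tables/order_line.sql can run in either order
    CREATE TABLE order_header (order_id INT NOT NULL PRIMARY KEY);
    CREATE TABLE order_line (line_id INT NOT NULL PRIMARY KEY, order_id INT NOT NULL);

    -- Constraints/order_line.sql runs only after every table exists
    ALTER TABLE order_line
        ADD CONSTRAINT fk_order_line_order_header
        FOREIGN KEY (order_id) REFERENCES order_header (order_id);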
I would separate the build of the triggers and stored procedures, and run these last. The point about these is they can be run and re-run on the database without affecting the data. This means you can treat them just like ordinary code. You should include "if exists...drop" statements at the beginning of each trigger and procedure script, to make them re-runnable.
So the order would be
table creation
add indexes
add constraints
Then
add triggers
add stored procedures
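A sketch of the "if exists...drop" pattern mentioned above, in SQL Server 2000 syntax (procedure and table names are made up), so the script can be run and re-run safely:

    IF EXISTS (SELECT * FROM sysobjects
               WHERE id = OBJECT_ID(N'dbo.usp_get_orders') AND type = 'P')
        DROP PROCEDURE dbo.usp_get_orders
    GO

    CREATE PROCEDURE dbo.usp_get_orders
    AS
        SELECT order_id, order_date FROM orders
    GO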
On my current project we are using MSBuild to run the scripts. There are some extension targets that you can get for it which allow you to call SQL scripts. In the past I have used Perl, which was fine too (and batch files... which I would not recommend - they're too limited).
#Adam
Or how about just by domain -- a useful grouping of related tables in the same file, but separate from the rest?
Only problem is if some domains (in this somewhat legacy system) are tightly coupled. Plus you have to maintain the dependencies between your different sub-scripts.
If you are looking for an automation tool, I have often worked with EMS SQLManager, which allows you to automatically generate a DDL script from a database.
Data inserts into reference tables might be mandatory before putting your database online. This can even be considered part of the DDL script. EMS can also generate scripts for data inserts from existing databases.
The need for indexes might not be properly estimated at the DDL stage. You will just need to declare them for primary/foreign keys. Other indexes should be created later, once views and queries have been defined.
What you have there seems to be pretty good. My company has on occasion, for large enough databases, broken it down even further, perhaps to the individual object level. In this way each table/index/... has its own file. Can be useful, can be overkill. Really depends on how you are using it.
#Justin
By domain is almost always sufficient. I agree that there are some complexities to deal with when doing it this way, but that should be easy enough to handle.
I think this method provides a little more separation (which in a large database you will come to appreciate) while still being pretty manageable. We also write Perl scripts that do a lot of the processing of these DDL files, so that might be a good way to handle that.
There is a neat tool that will iterate through the entire SQL Server instance and extract all the table, view, stored procedure and UDF definitions to the local file system as SQL scripts (text files). I have used this with 2005 and 2008; not sure how it will work with 2000 though. Check out http://www.antipodeansoftware.com/Home/Products
Invest the time to write a generic "drop all constraints" script, so you don't have to maintain it.
A cursor over the following statements does the trick.
Select * From Information_Schema.Table_Constraints
Select * From Information_Schema.Referential_Constraints
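A rough sketch of that cursor in SQL Server syntax - here dropping only the foreign keys reported by Information_Schema, which is usually the part that blocks reloading tables (untested; adjust for your schema):

    DECLARE @sql NVARCHAR(4000)

    DECLARE constraint_cursor CURSOR FOR
        SELECT 'ALTER TABLE [' + TABLE_SCHEMA + '].[' + TABLE_NAME + ']'
             + ' DROP CONSTRAINT [' + CONSTRAINT_NAME + ']'
        FROM Information_Schema.Table_Constraints
        WHERE CONSTRAINT_TYPE = 'FOREIGN KEY'

    OPEN constraint_cursor
    FETCH NEXT FROM constraint_cursor INTO @sql

    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC (@sql)
        FETCH NEXT FROM constraint_cursor INTO @sql
    END

    CLOSE constraint_cursor
    DEALLOCATE constraint_cursor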
I previously organised my DDL code as one file per entity and made a tool that combined them into a single DDL script.
My former employer used a scheme where all table DDL was in one file (stored in Oracle syntax), indices in another, constraints in a third and static data in a fourth. A change script was kept in parallel with this (again in Oracle). The conversion to SQL Server was manual. It was a mess. I actually wrote a handy tool that would convert Oracle DDL to SQL Server (it worked 99.9% of the time).
I have recently switched to using Visual Studio Team System for Database professionals. So far it works fine, but there are some glitches if you use CLR functions within the database.