Jeff and others have convinced me that GUIDs are preferable to auto-increment ids. I have a Postgres DB that is keyed by auto-increment ids, so I'd like to "refactor" those keys to UUIDs. Is there some general (or specific) approach to doing this, besides writing functions that traverse the tables and check for key matches across tables?
Update
Note: the database is not currently in production, so performance and transactional integrity are non-issues.
I'm not able to find anything that will do this automatically for you, so it looks like it's up to you to do it. Good thing the world still needs database developers, eh?
The best way, arguably, is to have the entire change scripted out. The best way to create that script is probably with another script or tool (code that writes code), which doesn't seem to be available for this particular scenario. Of course each of these adds another layer of software which must be constructed and tested. If I thought that I would want to repeat this process some time, or needed some level of audit trail (e.g. change scripts), I would probably bite the bullet and write the script that writes this script.
If this really is just a one-shot deal, and you can prevent DB access while you're doing it, then it might save time and effort to just manually make the changes, sort of like when you initially develop a database. By this, I mean adding UUID columns via your preferred method (diagrammer, SQL DDL, etc.), filling them with data (probably with ad-hoc SQL DML), setting keys and constraints, and then eventually removing the old foreign keys and columns (again, using whatever method you like).
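To make the manual route concrete, here is a minimal Postgres sketch, assuming hypothetical parent/child tables keyed by an integer id and using gen_random_uuid() from the pgcrypto extension (built in from Postgres 13):

CREATE EXTENSION IF NOT EXISTS pgcrypto;  -- not needed on Postgres 13+

-- 1. Add the new UUID columns.
ALTER TABLE parent ADD COLUMN uuid_id uuid NOT NULL DEFAULT gen_random_uuid();
ALTER TABLE child  ADD COLUMN parent_uuid uuid;

-- 2. Fill the new foreign-key column by joining on the old integer key.
UPDATE child c SET parent_uuid = p.uuid_id FROM parent p WHERE c.parent_id = p.id;

-- 3. Swap the keys and constraints, then drop the old columns.
ALTER TABLE child  DROP CONSTRAINT child_parent_id_fkey;  -- constraint names will differ
ALTER TABLE parent DROP CONSTRAINT parent_pkey;
ALTER TABLE parent ADD PRIMARY KEY (uuid_id);
ALTER TABLE child  ADD FOREIGN KEY (parent_uuid) REFERENCES parent (uuid_id);
ALTER TABLE child  DROP COLUMN parent_id;
ALTER TABLE parent DROP COLUMN id;

Since the database isn't in production yet, the whole thing can run in one transaction without worrying about concurrent access.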
If you have multiple environments (dev, test, prod), you can potentially do this in dev and then use a DB compare tool to script the changes, though you'll need the new FK values scripted.
An example
Here is a working script example on SQL Fiddle, though it's in SQL Server (my easiest DB), just to give you an idea about what you'll have to script (unfortunately, not how). It's still not completely transactionally consistent, as someone could modify something during one particular operation.
I realize this isn't by any means a complete answer, so feel free to vote me down (and provide a better answer).
Good luck, this is actually a fun problem.
Background
I have a software component that writes data to a postgres database (into several tables) and I want to write an automatic functional test for this component. I already have a host of unit tests in place that check the subcomponents, but I'd like a test that checks the whole system end-to-end.
For each test run, I use a clean database (actually a completely new, this-test-run-only database). The software component is stable in the sense that given the same input, it will always write the same user data to the database.
The database design is relational, such that most tables contain foreign keys. Obviously, I don't want to check the value of these keys, because I don't want to rely on these keys being generated in a predictable manner by postgres.
Assume that there are no problems regarding user rights on the database, connection issues, etc. Also disregard development/production disparities.
I currently use a number of select statements to produce a textual "dump" of the database and compare it to a reference dump (ignoring whitespace and so on), but this seems rather clumsy. Also, this doesn't take into account the relationships between the tables. Extending the current approach to deal with this doesn't strike me as maintainable at all, should the database layout ever change.
My software, as well as the testing framework, is written in C++; the testing scripts are simple bash scripts. I'm open to using any language to achieve this.
Question
How can I automatically verify the database contents in "the database way"?
Even better would be an approach that doesn't rely on postgres as the backend.
pgTap is a testing framework for PostgreSQL. You can use it to test both the structure and the content of a PostgreSQL database. I've used it on projects that had to meet certain contractual standards for seeded data (data for "lookup" tables like state codes and abbreviations, delivery carriers, user roles, etc.). It has worked well for that purpose.
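For example, a pgTap test for seeded lookup data might look something like this (pgTap must be installed in the test database; the table and expected values here are invented for illustration):

BEGIN;
SELECT plan(3);

SELECT has_table('state_codes');
SELECT is(
    (SELECT count(*) FROM state_codes)::int,
    50,
    'state_codes is fully seeded'
);
SELECT results_eq(
    $$ SELECT code::text FROM state_codes WHERE name = 'Oregon' $$,
    $$ VALUES ('OR') $$,
    'Oregon has the expected abbreviation'
);

SELECT * FROM finish();
ROLLBACK;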
But I don't yet see a compelling reason to abandon your current method, which is already written and working. Text dumps of single tables are supported by all current SQL dbms, as far as I know. If you move to a different dbms, you'll have to change the name of the dump program and the arguments to it. I can't imagine why you'd need to change the reference file, but I suppose that could happen.
The "database way" is really just to select the data you expect to be in the database, and see if it's really there. That's pretty much what you're doing now, and what pgTap does with perhaps greater flexibility.
To increase maintainability (to reduce duplication), you could generate the INSERT statements from the reference data, or you could generate the reference data from the INSERT statements. I can imagine development environments where that would be a wise thing to do, but I don't know whether yours is one of them.
The project I work on has undergone a transformation at the database level. For the better, about 40% of the SQL layout has been changed. Some columns were eliminated, others moved. I am now tasked with developing a data migration strategy.
What migration methods or even tools are available so that I don't have to figure out each and every individual dependency and manually script a key change when IDs (for instance) change?
I realize this question is a bit obtuse and open ended, but I assume others have had to do this before and I would appreciate any advice.
I'm on MS SQL Server 2008
@OMG Ponies: "Not obtuse, but vague."
Great point. I guess this helps me reconsider what I am asking, at least make it more specific. How do you insert from multiple tables to multiple tables keeping the relationships established by the foreign keys intact? I now realize I could drop the ID key constraint during the insert and re-enable it after, but I guess I have to figure out what depends on what myself and make sure it goes smoothly.
I'll start there, but will leave this open in case anyone else has other recommendations.
You should create an upgrade script that morphs the current schema into the v.next schema, applying the appropriate operations (ALTER TABLE, SELECT INTO, UPDATE, DELETE, etc). While this may seem tedious, it is the only process that is testable: start from a backup of the current db, apply the upgrade script, and test the resulting db for conformance with the desired schema. You can test and debug your upgrade script until it is hammered into correctness. You can test it on real data sizes so that you get a correct estimate of the downtime due to size-of-data operations.
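As a rough illustration, the skeleton of such an upgrade script might look like the following (T-SQL, with hypothetical Customer/Order tables; the legacy key is carried along so child rows can be remapped):

BEGIN TRANSACTION;

-- New-shape parent table keeps the legacy key so children can be remapped.
CREATE TABLE dbo.CustomerNew (
    CustomerId       int IDENTITY(1,1) PRIMARY KEY,
    LegacyCustomerId int NOT NULL UNIQUE,
    Name             nvarchar(200) NOT NULL
);

INSERT INTO dbo.CustomerNew (LegacyCustomerId, Name)
SELECT CustomerId, Name
FROM   dbo.Customer;

-- New-shape child table: old IDs are translated through the legacy key.
CREATE TABLE dbo.OrderNew (
    OrderId    int IDENTITY(1,1) PRIMARY KEY,
    CustomerId int NOT NULL
        CONSTRAINT FK_OrderNew_CustomerNew REFERENCES dbo.CustomerNew (CustomerId),
    Total      decimal(18,2) NOT NULL
);

INSERT INTO dbo.OrderNew (CustomerId, Total)
SELECT cn.CustomerId, o.Total
FROM   dbo.[Order] o
JOIN   dbo.CustomerNew cn ON cn.LegacyCustomerId = o.CustomerId;

-- ...drop or rename the old tables once everything is verified...
COMMIT TRANSACTION;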
While there are tools out there that can copy data or transform schemas (like SQL Compare), I believe approaching this as a development project, with a script deliverable that can be tested repeatedly and validated, is a much saner approach.
In the future, you can account for this upgrade step in your development and start with it, rather than trying to squeeze it in at the end.
There are tons of commercial tools around that claim to solve this; I wouldn't buy that...
I think your best bet is to model domain classes that represent your data and write adapters that read in/serialize to the old/new schemas.
If you haven't got a model of your domain, you should build one now.
IDs will change, so ideally they should not carry any meaning to users of your database.
I've shown up at a new job and discovered a database which is in dire need of some help. There are many, many things wrong with it, including:
No foreign keys...anywhere. They're faked by using ints and managing the relationship in code.
Practically every field can be NULL, even though that doesn't reflect the real data
Naming conventions for tables and columns are practically non-existent
Varchars which are storing concatenated strings of relational information
Folks can argue, "It works", and it does. But moving forward, it's a total pain to manage all of this with code, and it opens us up to bugs, IMO. Basically, the DB is being used as a flat file, since it's not doing a whole lot of work.
I want to fix this. The issues I see now are:
We have a lot of data (migration, possibly tricky)
All of the DB logic is in code (migration will mean big code changes)
I'm also tempted to do something "radical" like moving to a schema-free DB.
What are some good strategies when faced with an existing DB built upon a poorly designed schema?
Enforce Foreign Keys: If a relationship exists in the domain, then it should have a Foreign Key.
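For instance, turning one of those faked int relationships into a real constraint is usually a one-liner, once orphaned rows have been cleaned up or repointed (hypothetical table and column names):

ALTER TABLE orders
    ADD CONSTRAINT fk_orders_customer
    FOREIGN KEY (customer_id) REFERENCES customers (id);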
Renaming existing tables/columns is fraught with danger, especially if there are many systems accessing the Database directly. Gotchas include tasks that run only periodically; these are often missed.
Of Interest: Scott Ambler's article: Introduction To Database Refactoring
and Catalog of Database Refactorings
Views are commonly used to transition between changing data models because of the encapsulation. A view looks like a table, but does not exist as a finite object in the database - you can change what column is being returned for a given column alias as desired. This allows you to set up your codebase to use a view, so you can move from the old table structure to the new one without the application needing to be updated. But it means the view has to return the data in the existing format. For example, your current data model has:
SELECT t.`column` -- a list of concatenated strings, assuming comma-separated
FROM OLD_TABLE t
...so the first version of the view would be the query above, but once you created the new table that uses 3NF, the query for the view would use:
SELECT GROUP_CONCAT(t.`column` SEPARATOR ',')
FROM NEW_TABLE t
...and the application code would never know that anything changed.
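Written out as actual view definitions (hypothetical names; val is the per-row value in the new 3NF table), the transition might look like:

-- First version: the view simply mirrors the legacy table.
CREATE VIEW v_legacy AS
SELECT t.id, t.`column`
FROM OLD_TABLE t;

-- Later: redefine the view over the normalized table, same output shape.
CREATE OR REPLACE VIEW v_legacy AS
SELECT t.parent_id AS id,
       GROUP_CONCAT(t.val ORDER BY t.val SEPARATOR ',') AS `column`
FROM NEW_TABLE t
GROUP BY t.parent_id;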
The problem with MySQL is that its view support is limited - you can't use variables within them, nor can they contain subqueries in the FROM clause.
The reality of the changes you wish to make is effectively a rewrite of the application from the ground up. Moving logic from the codebase into the data model will drastically change how the application gets its data. Model-View-Controller (MVC) is an ideal pattern to implement alongside changes like these, to minimize the cost of similar changes in the future.
I'd say leave it alone until you really understand it. Then make sure you don't start with one of the Things You Should Never Do.
Read Scott Ambler's book on Refactoring Databases. It covers a good many techniques for how to go about improving a database - including the transitional measures needed to allow both old and new programs to work with the changing design.
Create a completely new schema and make sure that it is fully normalized and contains any unique, check and not null constraints etc that are required and that appropriate data types are used.
Prepopulate each table that fills the parent role in a foreign key relationship with a single 'Unknown' record.
Create an ETL (Extract, Transform, Load) process (I can recommend SSIS (SQL Server Integration Services), but there are plenty of others) that you can use to refill the new schema from the existing one on a regular basis. Use the 'Unknown' record as the parent of any orphaned records - there will be plenty ;) - a quick sketch of this follows these steps. You will need to put some thought into how you will consolidate duplicate records; this will probably need to be handled on a case-by-case basis.
Use as many iterations as are necessary to refine your new schema (ensure that the ETL Process is maintained and run regularly).
Create views over the new schema that match the existing schema as closely as possible.
Incrementally modify any clients to use the new schema making temporary use of the views where necessary. You should be able to gradually turn off parts of the ETL process and eventually disable it completely.
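For illustration, the 'Unknown' parent trick and the orphan handling from the steps above might look something like this (hypothetical names; it assumes customer ids carry over from the old schema):

-- Each parent table gets a well-known placeholder row.
INSERT INTO new_schema.customer (customer_id, name)
VALUES (0, 'Unknown');

-- During the load, orphaned child rows fall back to the placeholder parent.
INSERT INTO new_schema.orders (order_id, customer_id, total)
SELECT o.order_id,
       COALESCE(c.customer_id, 0),  -- orphan? attach it to 'Unknown'
       o.total
FROM old_schema.orders o
LEFT JOIN new_schema.customer c ON c.customer_id = o.customer_id;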
First, see how bad the code related to the DB is. If it is all mixed in with no DAO layer, you shouldn't think about a rewrite; but if there is a DAO layer, then it would be time to rewrite that layer and the DB along with it. If possible, make the migration tool based on using the two DAOs.
But my guess is there is no DAO, so you need to find what areas of the code you are going to be changing and what parts of the DB they relate to; hopefully you can cut it up into smaller parts that can be updated as you maintain. The biggest deal is to get FKs in there and to start checking for proper indexes; there is a good chance those aren't being done correctly.
I wouldn't worry too much about naming until the rest of the DB is under control. As for the NULLs: if the program chokes on a value being NULL, don't let it be NULL; but if the program can handle it, I wouldn't worry about it at this point. In the future, if the program is applying a default value, move that to the DB, but that is way down the line from the sound of things.
Do something about the varchars sooner rather than later. If anything, make that the first pure background fix to the program.
The other thing to do is estimate the effort of each area's change and then add that price to the cost of new development on that section of code. That way you can fix the parts as you add new features.
And how do you keep them in synch between test and production environments?
When it comes to indexes on database tables, my philosophy is that they are an integral part of writing any code that queries the database. You can't introduce new queries or change a query without analyzing the impact to the indexes.
So I do my best to keep my indexes in synch between all of my environments, but to be honest, I'm not doing very well at automating this. It's a sort of haphazard, manual process.
I periodically review index stats and delete unnecessary indexes. I usually do this by creating a delete script that I then copy back to the other environments.
But here and there indexes get created and deleted outside of the normal process and it's really tough to see where the differences are.
I've found one thing that really helps is to go with simple, numeric index names, like
idx_t_01
idx_t_02
where t is a short abbreviation for a table. I find index maintenance impossible when I try to get clever with all the columns involved, like,
idx_c1_c2_c5_c9_c3_c11_5
It's too hard to differentiate indexes like that.
Does anybody have a really good way to integrate index maintenance into source control and the development lifecycle?
Indexes are a part of the database schema and hence should be source-controlled along with everything else. Nobody should go around creating indexes on production without going through the normal QA and release process, particularly performance testing.
There have been numerous other threads on schema versioning.
The full schema for your database should be in source control right beside your code. When I say "full schema" I mean table definitions, queries, stored procedures, indexes, the whole lot.
When doing a fresh installation, then you do:
- check out version X of the product.
- from the "database" directory of your checkout, run the database script(s) to create your database.
- use the codebase from your checkout to interact with the database.
When you're developing, every developer should be working against their own private database instance. When they make schema changes, they check in a new set of schema definition files that work against their revised codebase.
With this approach you never have codebase-database sync issues.
Yes, any DML or DDL changes are scripted and checked in to source control, mostly through ActiveRecord migrations in Rails. I hate to continually toot Rails' horn, but in many years of building DB-based systems I find the migration route to be so much better than any home-grown system I've used or built.
However, I do name all my indexes (don't let the DBMS come up with whatever crazy name it picks). Don't prefix them, that's silly (because you have type metadata in sysobjects, or in whatever db you have), but I do include the table name and columns, e.g. tablename_col1_col2.
That way if I'm browsing sysobjects I can easily see the indexes for a particular table (also, it's a force of habit; wayyyy back in the day, on some DBMS I used, index names were unique across the whole DB, so the only way to ensure that was to use distinctive names).
I think there are two issues here: the index naming convention, and adding database changes to your source control/lifecycle. I'll tackle the latter issue.
I've been a Java programmer for a long time now, but have recently been introduced to a system that uses Ruby on Rails for database access for part of the system. One thing that I like about RoR is the notion of "migrations". Basically, you have a directory full of files that look like 001_add_foo_table.rb, 002_add_bar_table.rb, 003_add_blah_column_to_foo.rb, etc. These Ruby source files extend a parent class, overriding methods called "up" and "down". The "up" method contains the set of database changes that need to be made to bring the previous version of the database schema to the current version. Similarly, the "down" method reverts the change back to the previous version. When you want to set the schema for a specific version, the Rails migration scripts check the database to see what the current version is, then finds the .rb files that get you from there up (or down) to the desired revision.
To make this part of your development process, you can check these into source control, and season to taste.
There's nothing specific or special about Rails here, just that it's the first time I've seen this technique widely used. You can probably use pairs of SQL DDL files, too, like 001_UP_add_foo_table.sql and 001_DOWN_remove_foo_table.sql. The rest is a small matter of shell scripting, an exercise left to the reader.
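Such a pair of files can be as simple as this (hypothetical table, of course):

-- 001_UP_add_foo_table.sql
CREATE TABLE foo (
    id   integer PRIMARY KEY,
    name varchar(100) NOT NULL
);

-- 001_DOWN_remove_foo_table.sql
DROP TABLE foo;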
I always source-control SQL (DDL, DML, etc). It's code like any other. It's good practice.
I am not sure indexes should be the same across different environments since they have different data sizes. Unless your test and production environments have the same exact data, the indexes would be different.
As to whether they belong in source control, I'm not really sure.
I do not put my indexes in source control, but rather the scripts that create the indexes. ;-)
Index-naming:
IX_CUSTOMER_NAME for the field "name" in the table "customer"
PK_CUSTOMER_ID for the primary key,
UI_CUSTOMER_GUID, for the GUID-field of the customer which is unique (therefore the "UI" - unique index).
On my current project, I have two things in source control - a full dump of an empty database (using pg_dump -c so it has all the ddl to create tables and indexes) and a script that determines what version of the database you have, and applies alters/drops/adds to bring it up to the current version. The former is run when we're installing on a new site, and also when QA is starting a new round of testing, and the latter is run at every upgrade. When you make database changes, you're required to update both of those files.
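The version-detection part of that script can stay fairly small; here is a minimal Postgres sketch, with a hypothetical schema_version table and made-up changes:

CREATE TABLE IF NOT EXISTS schema_version (version integer NOT NULL);

DO $$
DECLARE
    v integer;
BEGIN
    SELECT coalesce(max(version), 0) INTO v FROM schema_version;

    IF v < 1 THEN
        ALTER TABLE customer ADD COLUMN email text;          -- example change
        INSERT INTO schema_version VALUES (1);
    END IF;

    IF v < 2 THEN
        CREATE INDEX idx_customer_email ON customer (email); -- example change
        INSERT INTO schema_version VALUES (2);
    END IF;
END
$$;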
With a Grails app, the indexes are stored in source control by default, since you define the index inside the file that represents your domain object. Just offering the Grails perspective as an FYI.
As a database architect, developer, and consultant, there are many questions I can answer. One that I was asked recently, though, and still can't answer well, is...
"What is one of, or some of, the best methods or techniques to keep database changes documented, organized, and yet able to roll out effectively either in a single-developer or multi-developer environment."
This may involve stored procedures and other object scripts, but especially schemas - from documentation, to the new physical update scripts, to rollout, and then full circle. There are applications that make this happen, but they require schema hooks and overhead. I would rather learn about techniques that work without a lot of extra third-party involvement.
The easiest way I have seen this done without the aid of an external tool is to create a "schema patch", if you will. The schema patch is just a simple T-SQL script. Each patch is given a version number within the script, and that number is stored in a table in the database receiving the changes.
Any new change to the database involves creating a new schema patch. The patches can then be run in sequence: the process detects what version the database is currently on and runs all the schema patches in between. Afterwards, the schema version table is updated with the date/time each patch was executed, ready for the next run.
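A single schema patch in that style might look like this (a T-SQL sketch; the SchemaVersion table and the change itself are hypothetical):

-- Schema patch 4: assumes a dbo.SchemaVersion bookkeeping table already exists.
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE PatchNumber = 4)
BEGIN
    ALTER TABLE dbo.Customer ADD MiddleName nvarchar(50) NULL;  -- the actual change

    INSERT INTO dbo.SchemaVersion (PatchNumber, AppliedAt)
    VALUES (4, GETDATE());
END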
A good book that goes into details like this is called Refactoring Databases.
If you do wish to use an external tool, you can look at Ruby on Rails' migrations or a similar tool in C# called Migrator.NET. These tools work by creating C#/Ruby classes with a "Forward" and a "Backward" migration. They are more feature-rich because they know how to go forward as well as backwards through the schema patches. As you stated, however, you are not interested in an external tool, but I thought I would add this for other readers anyway.
I rather liked this series:
http://odetocode.com/Blogs/scott/archive/2008/02/03/11746.aspx
In my case, I generate a script every time I change the database. I name the scripts like 00001.sql, ..., n.sql, and I have a table with the number of the last script I have executed. You can also see Database Documentation.
As long as you are only adding columns/tables to your database, it is an easy task: script these changes in advance in SQL files and just execute them, perhaps in a particular order.
A good solution would be to make one file per table, so that all changes belonging to that table are visible to whoever is working on it (it's like working on a class). The same goes for stored procedures and views.
A more difficult task (and one where tools might help) is stepping back. As long as you have only added tables/columns, this may not be a big issue. But if you dropped columns in an update and now have to undo that update, the data is not there anymore; you will need to get it from a backup. Keep in mind that if you have more than a few tables this can be a big task, and in the normal case you need to undo your update very quickly!
If you can simply restore the backup, then you are fine for the moment. But if you update on Monday, your clients work until Wednesday, and only then notice that some data is missing (data you dropped from a table), you cannot just restore the old database.
I have a model-based approach in mind (sorry, not implemented at the moment) in which schema changes are "modeled" (e.g. in XML) and, during an update, a processor (e.g. a C# program) creates all the necessary SQL and, for example, moves data to a "dropDatabase". The data can reside there, and if for some reason I need to restore some of the dropped data, I can do it with the processor. I think that over time (years) this approach pays off, because otherwise developers stop touching "old" tables since they no longer know whether a table or column is really necessary. With this approach you don't risk too much if you drop something!
What I do is:
All the DDL commands required to recreate the schema (and the stored procedures and the indexes, etc) are in a script.
To be sure the script is OK, it is tested from time to time (create a database, run the script, restore the backup, and check that the database works well).
For change control, the script is kept in a Version Control System (I typically use Subversion).
The trick is that, if the database cannot be torn down and recreated when, say, a column is added, I have two changes to make: an ALTER TABLE plus a modification to the script. A bit more work but, in the long term, it wins.
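For what it's worth, that double bookkeeping tends to look like this in practice (hypothetical column and table):

-- 1. The upgrade path, for databases that already exist:
ALTER TABLE customer ADD COLUMN loyalty_points integer NOT NULL DEFAULT 0;

-- 2. The canonical creation script kept in version control, for fresh installs:
CREATE TABLE customer (
    id             integer PRIMARY KEY,
    name           varchar(200) NOT NULL,
    loyalty_points integer NOT NULL DEFAULT 0   -- added here too
);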