Rails update old database - sql

Right now I'm working on updating a Rails app and the database has some issues. It's also being converted from MySQL to PostgreSQL.
There are three columns being used to track one time value: the time the facility opens on Monday is recorded as monday_open_hour, monday_open_minute, and monday_open_ampm. I'd like to merge these into a single time field.
There are also several fields being used for only 1% of the 3000+ records, so I'd like to break those out into a separate table.
What would be the best way to do this? I imagine it could probably be done in SQL with some kind of stored procedures/cursors. Is there a way to do it with Ruby/Rails?

The Rails way to deal with incremental database changes is to use migrations. Migrations let you apply incremental changes to your schema or database contents in an orderly fashion, even as you're collaborating with a team. There are nice helpers for common tasks like creating and dropping tables, renaming columns, and simple things like that, but you can drop to arbitrary SQL if you need to (although be aware that doing so will most likely tie you to your current database, and make further moves more difficult).
Basically, you can generate a new migration with rails generate migration ConsolidateDateColumns (for example). This will create a template for you in the db/migrate directory; see the Rails Guides entry to get started on writing them. When you're ready to apply it, run rake db:migrate.
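For the consolidation in the question, the migration body might look roughly like the sketch below. This is only a sketch: it assumes a facilities table, that the hour/minute columns hold integers, and that the AM/PM flag is stored as a string.

class ConsolidateDateColumns < ActiveRecord::Migration
  def self.up
    add_column :facilities, :monday_open, :time

    # Backfill the new column from the three old ones, converting 12-hour to 24-hour time.
    select_all("SELECT id, monday_open_hour, monday_open_minute, monday_open_ampm FROM facilities").each do |row|
      hour = row["monday_open_hour"].to_i % 12
      hour += 12 if row["monday_open_ampm"].to_s.strip.upcase == "PM"
      time_value = format("%02d:%02d", hour, row["monday_open_minute"].to_i)
      execute("UPDATE facilities SET monday_open = '#{time_value}' WHERE id = #{row['id'].to_i}")
    end

    remove_column :facilities, :monday_open_hour
    remove_column :facilities, :monday_open_minute
    remove_column :facilities, :monday_open_ampm
  end

  def self.down
    # Splitting the time back into three columns is possible but omitted from this sketch.
    raise ActiveRecord::IrreversibleMigration
  end
end

The rarely used fields could be moved the same way in another migration: a create_table for the new table, an INSERT ... SELECT to copy over the rows that use them, then remove_column on the original.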
The advantages of doing it this way are that it lets you easily apply the same changes to different environments (development, test, production, staging, or across your development team) and keep them in sync, and it encourages you to keep things reversible whenever possible, so you maintain some degree of freedom to migrate back and forth if you need to.
One more thing: it sounds like you're going to be doing a lot of major changes in quick succession. Make sure that you take a backup of your original database before you begin, and thoroughly test your work against a reduced test set in a separate environment before you run it against the real thing!

Related

Multi System Database structure based copying/updating best practice

After searching and not finding similar cases, I want to open a new question.
So here is the case:
We are working with a large database with a very complicated data structure. We also work on multiple systems to ensure stability (development, testing, quality, and production), and it's always a struggle to move data between those systems. As I said, the data structure is very large, and there is also a lot of logic inside the database. Customers are able to add new data parts as configuration, and there is also a steady inflow of data that is used for statistics and monitoring. So let me explain the problem with a small example:
Let's take this database as an example. We have some families holding contests with each other, and they create statistics about the points they score.
The Purple Tables are fixed configurations. They are created once and they can only be changed via an Operator. Those changes will be done and tested in the development system first.
The Yellow Tables are changing configurations. Each Family is able to create or delete multiple Contests and assign their kids.
The Red Table is just plain data. Each time a kid scores points, a new row is added with the amount, the current time, and the relation to the kid and the contest.
This table will be the base for the later statistics.
This database runs on two systems: a production one used by the families and a development one used by the programmers/operators.
While developing, the programmers add test data like kids, families, contests, and points. In production, the families create new contests, assign new kids, and fill up the points table.
It's necessary to copy new/tested/fixed families from the development to the production system.
It's also necessary to copy Contests, Contest-Kid-Assignments, and Points from the production to the development system to find new errors.
It must also be possible to change the table structure on the development system and transmit this change to the production system. (This shouldn't be the main topic here; sometimes the changes are so large that there is just no easy way, so let's keep this point simple but keep it in mind.)
I want to copy parts of the tables to another system but be able to ignore some tables (for example: Points), and I want to make sure not to copy kids without their parent family, so there are no "parentless" objects in the database.
Question: What would be a good and safe way to do this?
I don't need a solution for a specific database type or some scripts. I'm looking for tools, libraries, or good practice. (But just as a note, we're using MSSQL.)
We are currently building a tool for this problem (it's not going well: unstable, overly complicated, slow, and possibly reinventing the wheel).
Also, a lot of devs I know just copy the whole database (making a backup and restoring it on another server), but this also causes problems: users are copied and their GUIDs change, so they lose permissions, etc. I don't think this is a good solution. Also, the database is down for quite a long time, and it's never a smooth process.
Doing it manually is sometimes the easiest way, but considering the size of our data structure, it's not just a huge piece of work; there is also a large chance of mistakes.
So I'm hoping someone knows a tool or something similar to help me out.
Welcome to the pains of development for a stateful entity like a database. :) RedGate makes a tool called SQL Source Control that is good for moving changed data and schema into production, and it can interface with source control solutions such as Git. It's a bit pricey, but it's the best I've found. One option for keeping dev up to date with prod data and dev changes is one I concocted at my last place of employment, which was... not 100% perfect, but better than nothing, and free. It was developed in PowerShell, and it went something like this:
1. Create Pre-restore, Pre-dacpac, and Post-dacpac SQL scripts to store data and permission diffs between dev and prod
2. Use SQLPackage.EXE to make a DacPac of Dev (a DacPac is basically an XML schema of the DB, no data)
3. Execute the Pre-restore script (often copying out test data that needs to be persisted)
4. Restore Prod over Dev
5. Execute the Pre-dacpac script (any DDL that could cause data loss may need to go here)
6. Use SQLPackage.EXE to apply the DacPac made in step 2 to the newly restored database
7. Execute the Post-dacpac script (permissions, restoration of data copied in step 3)
Again, like I said, it worked and automated the restoration of prod data into our dev environment while keeping our dev changes intact, but it required a good bit of upkeep and maintenance. Also, keep in mind, once your DB reaches a certain size, doing a nightly restore is no longer a viable option due to the time it takes to restore.

How to scale Play2 Evolutions

I recently started using Play2 on a project and read the section on evolutions. While the example they cite seems fine if my project had one table, it seems like it would get very messy if I had 10-20 tables in 1.sql and then changes to them split up over 2.sql, 3.sql, and so on.
In Ruby on Rails, Symfony, and others, you define your up/down migrations per entity.
My question is, what is the best way to set up your evolutions in Play2? Should I have all my tables in 1.sql and then make little changes to them over 2.sql and so on? Or is there a way to have a separate .sql file for each table?
Also, are there any examples of large, open source Play2 projects where I could see how it would look?
Actually, Play has no way to divide evolutions by entity.
IMHO it's rather a matter of taste. You could add each entity in its own evolution; the only difference would be that the evolution counter grows faster, and I don't think that would help you keep your evolutions cleaner.
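For reference, each evolution is just a plain SQL script with an Ups and a Downs section, and a single file can hold as many statements as you need. A minimal conf/evolutions/default/1.sql sketch (the table names here are made up):

# --- !Ups

CREATE TABLE users (
  id   BIGINT NOT NULL PRIMARY KEY,
  name VARCHAR(255) NOT NULL
);

CREATE TABLE tasks (
  id      BIGINT NOT NULL PRIMARY KEY,
  user_id BIGINT NOT NULL REFERENCES users(id),
  title   VARCHAR(255) NOT NULL
);

# --- !Downs

DROP TABLE tasks;
DROP TABLE users;

Later changes then go into 2.sql, 3.sql, and so on, each with its own Ups/Downs pair.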
A typical workflow starts from ... good planning. Just create a graph representation of your schema and try to add as many things to it as you need. It helps a lot during project startup and also in the next steps of development.
If you are going to use Ebean, create all the models from your graph and let the plugin generate the first evolution file automatically; you will probably save a lot of time on writing evolutions for relations, constraints, etc. Spend some time fixing and checking the initial schema before further development.
After that you need to disable automatic updates, as they drop the whole DB and recreate the tables from scratch (there is no diff-based schema update in Ebean).
It's also a matter of taste, but I prefer to combine several changes into a single evolution (so, again, planning...) instead of creating file after file for every single change ad hoc.

What is db/development_structure.sql in a rails project?

There is a development_structure.sql inside my /db folder of my rails application (rails 2.3.4, ruby 1.8.7) and I am not sure exactly what it does.
Is it needed for some specific environment? (I think I read somewhere that it's used for tests)
Do I need to add it to my git repository?
This post has been used as a reference by a coworker of mine, but the two answers are not exact or informative enough.
development_structure.sql is a low-level dump of the schema, which is necessary when you start to use proprietary database features - whether you want to or not, you're going to use them at some point.
Regarding the question of storing it or not, there's some debate. Here is an informative post: http://www.saturnflyer.com/blog/jim/2010/09/14/always-check-in-schema-rb/.
And my take on this follows.
The objective of the development_structure.sql is to sync, for any given commit, the database structure with the code, without having previous knowledge of the schema structure, that is, without having to rely on a pre-existing state of the schema to get the new one.
In a nutshell, by having a schema structure available, whenever you change branch/commit, you load it directly and forget it.
This is mostly valid for dynamic and "crowded" projects, where different branches have differences in the underlying schema structure.
Without having the schema structure stored, you would need to always use an existing reference schema in your database, and migrate it back or forward every time you change branch/commit; several real-world cases can make this process inefficient (e.g. when another branch doesn't have some migrations you currently have, or some migrations can't be rolled back).
Another problem is automated builds, which suffer from the same problems, and even worse, they can't apply manual changes.
The only downside is that it requires a certain habit, which is, to store it every time you run a migration. Easy to say, but also easy to forget.
I don't say you can't live without development_structure.sql - of course you can.
But if you have it, when changing branch/commit you just load-and-forget; if you don't, you [may] have to go through a series of manual steps.
You should not add it to your git repository.
It is a file created automatically by Rails when you run migrations with your database.yml configured to connect to a MySQL database.
You can view it as an alternative to schema.rb.
I believe you can force Rails to create it by adding this to your environment.rb:
config.active_record.schema_format = :sql
When present, this file is used, for example, by:
rake db:test:clone_structure
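For what it's worth, in a Rails 2.3 app that setting lives in the standard initializer block in config/environment.rb; a minimal sketch:

Rails::Initializer.run do |config|
  # Dump the schema as raw SQL (db/development_structure.sql) instead of schema.rb
  config.active_record.schema_format = :sql
end

With that in place, rake db:migrate should write db/development_structure.sql, and rake db:test:clone_structure loads it into the test database.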
Edit
Relevant section in Ruby On Rails Guides.
http://guides.rubyonrails.org/migrations.html#schema-dumping-and-you
They recommend checking it into source control on the wiki.
I personally like to keep it out of source control. I like to be able to run all migrations very quickly; for me, that's a good sign. If migrations become slow, I feel like I am not in total control of my environment anymore. Slowness in migrations generally means I have a lot of data in my development database, which feels wrong to me.
However, it seems to be a matter of personal taste nowadays.
Follow your instincts on this one.
It's created when you run a rake task to clone your development database to your test database. The development database is dumped to SQL, which is then read into your test DB. You can safely delete it.
In Rails 3, you don't even have to write this line:
config.active_record.schema_format = :sql
You can generate this structure.sql file by simply running the rake command mentioned above.

Do you put your indexes in source control?

And how do you keep them in synch between test and production environments?
When it comes to indexes on database tables, my philosophy is that they are an integral part of writing any code that queries the database. You can't introduce new queries or change a query without analyzing the impact to the indexes.
So I do my best to keep my indexes in sync between all of my environments, but to be honest, I'm not doing very well at automating this. It's a sort of haphazard, manual process.
I periodically review index stats and delete unnecessary indexes. I usually do this by creating a delete script that I then copy back to the other environments.
But here and there indexes get created and deleted outside of the normal process and it's really tough to see where the differences are.
I've found one thing that really helps is to go with simple, numeric index names, like
idx_t_01
idx_t_02
where t is a short abbreviation for a table. I find index maintenance impossible when I try to get clever with all the columns involved, like,
idx_c1_c2_c5_c9_c3_c11_5
It's too hard to differentiate indexes like that.
Does anybody have a really good way to integrate index maintenance into source control and the development lifecycle?
Indexes are a part of the database schema and hence should be source controlled along with everything else. Nobody should go around creating indexes on production without going through the normal QA and release process, particularly performance testing.
There have been numerous other threads on schema versioning.
The full schema for your database should be in source control right beside your code. When I say "full schema" I mean table definitions, queries, stored procedures, indexes, the whole lot.
When doing a fresh installation, you do:
- check out version X of the product.
- from the "database" directory of your checkout, run the database script(s) to create your database.
- use the codebase from your checkout to interact with the database.
When you're developing, every developer should be working against their own private database instance. When they make schema changes, they check in a new set of schema definition files that work against their revised codebase.
With this approach you never have codebase-database sync issues.
Yes, any DML or DDL changes are scripted and checked into source control, mostly through ActiveRecord migrations in Rails. I hate to continually toot Rails' horn, but in many years of building DB-based systems I find the migration route to be so much better than any home-grown system I've used or built.
However, I do name all my indexes (don't let the DBMS come up with whatever crazy name it picks). Don't prefix them, that's silly (because you have type metadata in sysobjects, or in whatever DB you have), but I do include the table name and columns, e.g. tablename_col1_col2.
That way, if I'm browsing sysobjects I can easily see the indexes for a particular table (it's also a force of habit: wayyyy back in the day, on some DBMS I used, index names were unique across the whole DB, so the only way to ensure that was to use unique names).
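In an ActiveRecord migration, that convention just means passing an explicit :name to add_index instead of accepting the generated one; a small sketch with made-up table and column names:

class AddOrdersCustomerIndex < ActiveRecord::Migration
  def self.up
    # tablename_col1_col2 naming, spelled out explicitly
    add_index :orders, [:customer_id, :created_at], :name => "orders_customer_id_created_at"
  end

  def self.down
    remove_index :orders, :name => "orders_customer_id_created_at"
  end
end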
I think there are two issues here: the index naming convention, and adding database changes to your source control/lifecycle. I'll tackle the latter issue.
I've been a Java programmer for a long time now, but have recently been introduced to a system that uses Ruby on Rails for database access for part of the system. One thing that I like about RoR is the notion of "migrations". Basically, you have a directory full of files that look like 001_add_foo_table.rb, 002_add_bar_table.rb, 003_add_blah_column_to_foo.rb, etc. These Ruby source files extend a parent class, overriding methods called "up" and "down". The "up" method contains the set of database changes that need to be made to bring the previous version of the database schema to the current version. Similarly, the "down" method reverts the change back to the previous version. When you want to set the schema for a specific version, the Rails migration scripts check the database to see what the current version is, then find the .rb files that get you from there up (or down) to the desired revision.
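To make that concrete, one of those files (say 003_add_blah_column_to_foo.rb from the list above) would look roughly like this:

class AddBlahColumnToFoo < ActiveRecord::Migration
  def self.up
    add_column :foo, :blah, :string
  end

  def self.down
    remove_column :foo, :blah
  end
end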
To make this part of your development process, you can check these into source control, and season to taste.
There's nothing specific or special about Rails here, just that it's the first time I've seen this technique widely used. You can probably use pairs of SQL DDL files, too, like 001_UP_add_foo_table.sql and 001_DOWN_remove_foo_table.sql. The rest is a small matter of shell scripting, an exercise left to the reader.
I always source-control SQL (DDL, DML, etc.). It's code like any other. It's good practice.
I am not sure indexes should be the same across different environments since they have different data sizes. Unless your test and production environments have the same exact data, the indexes would be different.
As to whether they belong in source control, I'm not really sure.
I do not put my indexes in source control, but I do put the scripts that create them. ;-)
Index-naming:
IX_CUSTOMER_NAME for the field "name" in the table "customer"
PK_CUSTOMER_ID for the primary key,
UI_CUSTOMER_GUID for the GUID field of the customer, which is unique (hence the "UI": unique index).
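Spelled out as DDL, and assuming an id and a guid column on the customer table, those names correspond to something like:

CREATE INDEX IX_CUSTOMER_NAME ON customer (name);
ALTER TABLE customer ADD CONSTRAINT PK_CUSTOMER_ID PRIMARY KEY (id);
CREATE UNIQUE INDEX UI_CUSTOMER_GUID ON customer (guid);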
On my current project, I have two things in source control - a full dump of an empty database (using pg_dump -c so it has all the ddl to create tables and indexes) and a script that determines what version of the database you have, and applies alters/drops/adds to bring it up to the current version. The former is run when we're installing on a new site, and also when QA is starting a new round of testing, and the latter is run at every upgrade. When you make database changes, you're required to update both of those files.
In a Grails app, the indexes are stored in source control by default, since you define the index inside the file that represents your domain object. Just offering the Grails perspective as an FYI.

Best Database Change Control Methodologies

As a database architect, developer, and consultant, I can answer many questions. One that I was asked recently and still can't answer well, though, is...
"What is one of, or some of, the best methods or techniques to keep database changes documented, organized, and yet able to roll out effectively either in a single-developer or multi-developer environment."
This may involve stored procedures and other object scripts, but especially schemas - from documentation, to the new physical update scripts, to rollout, and then full circle. There are applications to make this happen, but they require schema hooks and overhead. I would rather know about techniques that don't need a lot of extra third-party involvement.
The easiest way I have seen this done without the aid of an external tool is to create a "schema patch", if you will. The schema patch is just a simple T-SQL script. It is given a version number within the script, and this number is stored in a table in the database that receives the changes.
Any new change to the database involves creating a new schema patch. The patches can then be run in sequence: the process detects what version the database is currently on and runs all the schema patches in between. Afterwards, the schema version table is updated with the date/time the patch was executed, to be used on the next run.
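A minimal T-SQL sketch of one such patch (the SchemaVersion table and the column being added are made up for illustration):

-- Schema patch 2: apply only if the database is still at version 1
IF (SELECT MAX(Version) FROM SchemaVersion) = 1
BEGIN
    ALTER TABLE Customer ADD MiddleName VARCHAR(50) NULL;

    INSERT INTO SchemaVersion (Version, AppliedAt)
    VALUES (2, GETDATE());
END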
A good book that goes into details like this is called Refactoring Databases.
If you wish to use an external tool, you can look at Ruby's Migrations project or a similar tool in C# called Migrator.NET. These tools work by creating C#/Ruby classes with a "Forward" and a "Backward" migration. They are more feature-rich because they know how to go forward as well as backward through the schema patches. As you stated, however, you are not interested in an external tool, but I thought I would add that for other readers anyway.
I rather liked this series:
http://odetocode.com/Blogs/scott/archive/2008/02/03/11746.aspx
In my case, I generate a script every time I change the database. I name the scripts like 00001.sql, ..., n.sql, and I have a table with the number of the last script I executed. You can also see Database Documentation.
As long as you only add columns/tables to your database, it's an easy task: script these changes in advance in SQL files and just execute them, perhaps in a defined order.
A good solution would be to make one file per table, so that all changes belonging to that table are visible to whoever is working on the table (it's like working on a class). The same goes for stored procedures or views.
A more difficult task (and therefore one where tools might help) is stepping back. As long as you only added tables/columns, this may not be a big issue. But if you dropped columns in an update and now have to undo that update, the data is not there anymore; you will need to get it from a backup. Keep in mind that if you have more than a few tables this can be a big task, and in the normal case you should be able to undo your update very fast!
If you can just restore the backup, then you're fine for the moment. But if you update on Monday, your clients work until Wednesday, and only then do they notice that some data is missing (data you just dropped from a table), you cannot simply restore the old database.
I have a model-based approach in mind (sorry, not implemented at the moment) in which schema changes are "modeled" (e.g. in XML), and during an update a processor (e.g. a C# program) creates all the necessary SQL and, for example, moves data to a "dropDatabase". The data can reside there, and if for some reason I need to restore some of the dropped data, I can do it with the processor. I think over time (years) this approach pays off, because otherwise developers stop touching "old" tables, since they no longer know whether a table or column is really necessary. With this approach you don't risk much if you drop something!
What I do is:
All the DDL commands required to recreate the schema (and the stored procedures and the indexes, etc) are in a script.
To be sure the script is OK, it is tested from time to time (create a database, run the script, restore the backup, and check that the database works well).
For change control, the script is kept in a Version Control System (I typically use Subversion).
The trick is that, if the database cannot be brought down and recreated when, say, a column is added, I have two changes to make: an ALTER TABLE plus a modification to the script. A bit more work but, in the long term, it wins.
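As an illustration of those two changes (table and column names made up): the running database gets the incremental ALTER, and the full creation script in version control gets the matching edit, so both describe the same schema.

-- 1) Applied to the existing database:
ALTER TABLE customer ADD COLUMN middle_name VARCHAR(50);

-- 2) Mirrored in the versioned creation script:
CREATE TABLE customer (
    id          INTEGER PRIMARY KEY,
    name        VARCHAR(100) NOT NULL,
    middle_name VARCHAR(50)
);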