I am creating a new Rails application which will work with an existing schema. I have been given the schema SQL, but I want to create Rails migrations to populate the database in development. The schema is not overly complicated, with around 20 tables, but I don't want to waste time or risk typos by creating the migrations by hand.
Is there a way to generate Rails migrations given a schema's SQL?
Sure, connect your application to your database, then run
rake db:schema:dump
This will give you a db/schema.rb containing all of your table definitions. Now that you have that db/schema.rb, simply copy the contents of the ActiveRecord::Schema.define block into a new migration. I've done this before, and it works just great.
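For example, if the dumped db/schema.rb defined a products table (a hypothetical name; your 20 tables will differ), the copied migration might look like this:

class InitialSchema < ActiveRecord::Migration
  def up
    # Copied from the create_table block in db/schema.rb (illustrative table)
    create_table "products", :force => true do |t|
      t.string   "name"
      t.text     "description"
      t.datetime "created_at"
      t.datetime "updated_at"
    end
  end

  def down
    drop_table "products"
  end
end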
I prefer to simply write the initial migration's up method with SQL execute calls:
class InitialDbStructure < ActiveRecord::Migration
  def up
    execute "CREATE TABLE abouts (
      id INTEGER UNSIGNED AUTO_INCREMENT,
      updated_at TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
      created_at TIMESTAMP,
      title VARCHAR(125),
      body MEDIUMTEXT,
      CONSTRAINT PK_id PRIMARY KEY (id),
      INDEX ORDER_id (id ASC)
    ) ENGINE=InnoDB;"
  end
end
NOTES
You will find, particularly if you are often rebuilding and repopulating tables (rake db:drop db:create db:schema:load db:fixtures:load), that execute statements run far faster than interpreted Ruby syntax. For example, it takes over 55 seconds for our tables to rebuild from Rails migrations in Ruby syntax, whereas execute statements regenerate and repopulate our tables in 20 seconds. This is of course a substantial issue in projects where initial content or table specifications are regularly revised.
Perhaps of equal importance, you can retain this rebuild-and-repopulate speed by maintaining a single original migration in executed SQL syntax and re-running that single file: first gut your schema.rb, then run rake db:reset before repopulating your tables. Make sure you set :version => 0, so that you get a new schema, faithful to your migration:
ActiveRecord::Schema.define(:version => 0) do
end
In my Express Sequelize Postgres app, I created a migration to create a model/table with some attributes.
After the migration (status: up), I dropped the table.
Now I cannot undo the migration - the migration file exists, but I get the following error:
ERROR: relation "public.CustomerAddresses" does not exist
How do I undo the migration, so I can remigrate?
Here are two possible options:
As mentioned, you can undo your manual drop by manually recreating the table; as long as the table exists, you'll be able to undo your migration (see the sketch after these options).
You can remove the entry for your migration from the migrations table to make the state of your schema match the state of the migrations table, since you've already performed the drop manually:
DELETE FROM "SequelizeMeta" WHERE name='<your migration name>';
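For the first option, any table with the right name is usually enough, because a generated down migration typically just calls dropTable. A minimal Postgres sketch (the column is a placeholder, not your real schema):

CREATE TABLE "CustomerAddresses" (
    id SERIAL PRIMARY KEY  -- placeholder column; dropTable only needs the table to exist
);

After recreating it, npx sequelize-cli db:migrate:undo should run cleanly.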
I have several tables which are worked on within a development environment, then moved to production. If they don't already exist in production, it's fine to just generate the table creation script from SSMS and run it. However, there are occasions where the table already exists in production but all that's needed is an extra column or constraint. The problem is knowing exactly what has changed.
Is there a way to get SQL to compare my CREATE TABLE statement against the existing table and only apply what has changed? Essentially I am trying to do the below, and SQL correctly complains that the table already exists.
I would have to manually write an ALTER query, which on a real example would be difficult due to the sheer volume of columns. Is there a better or easier way to see what has changed? Note that this involves two separate database servers.
CREATE TABLE suppliers
( supplier_id int NOT NULL,
supplier_name char(50) NOT NULL,
contact_name char(50),
CONSTRAINT suppliers_pk PRIMARY KEY (supplier_id)
);
CREATE TABLE suppliers
( supplier_id int NOT NULL,
supplier_name char(50) NOT NULL,
contact_name char(50),
contact_number char(20), --this has been added
CONSTRAINT suppliers_pk PRIMARY KEY (supplier_id)
);
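(For this toy example the manual ALTER would just be the following; the real tables have far more columns.)

ALTER TABLE suppliers
    ADD contact_number char(20);  -- the single added column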
Also, dropping and recreating wouldn't be a possibility because data would be lost.
SSMS can generate the schema change script if you make the change in the table designer (right-click the table in Object Explorer and select Design). Then, instead of applying the change immediately, select Table Designer-->Generate Change Script from the menu. Note that, depending on the change, SSMS may need to recreate the table, although data will be retained. SSMS requires that you uncheck the option to "prevent saving changes that require table re-creation" under Tools-->Options-->Designers-->Table and Database Designers. Review the script to make sure you're good with it.
SQL Server Data Tools (SSDT) and third-party tools (e.g. from Red-Gate and ApexSQL) have schema-compare features to generate the needed DDL after the fact. There are also features like migration scripts to facilitate continuous integration and source control integration. I suggest you keep database objects under source control and leverage database tooling as part of your development process.
Typically we use something like database migrations for this, as a feature outside of the database. For example, in several of our C# apps we have a tool called FluentMigrator. We write a script in code that adds the new columns we need to the dev database. When the project is debugged, FM runs the script and modifies the dev db, the dev code uses the new columns, and all is well. FM knows not to run the script again.
When the time comes to put something live, the FM script is part of the release: the app is deployed to the website, the migrations run again updating the live db, so the live code uses the new columns and still all is well.
If there is nothing outside of your SQL Server (not sure how you manage that, but..), then surely you must be writing scripts (or using the GUI to generate scripts) that alter the DB, right? So just keep those scripts and run them as part of the process of going live.
If you are looking at this from the perspective that these DBs already exist, created by someone else who threw away the scripts, then you can catch up one time using a database schema compare tool. Microsoft have one in SSDT - see here for more info on how it is used:
https://msdn.microsoft.com/en-us/library/hh272690(v=vs.103).aspx
If you don't have many constraints, I suggest you create a dynamic script to cast and import the data into your new tables. If this doesn't fail, then you just drop the old tables and rename the newly created ones.
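A rough sketch of that approach, using the suppliers example from above and assuming a suppliers_new table has already been created with the new structure:

BEGIN TRANSACTION;

-- Cast and import the data into the new table
INSERT INTO suppliers_new (supplier_id, supplier_name, contact_name, contact_number)
SELECT supplier_id, supplier_name, contact_name, NULL  -- new column has no source data yet
FROM suppliers;

-- If nothing failed, drop the old table and rename the new one
DROP TABLE suppliers;
EXEC sp_rename 'suppliers_new', 'suppliers';

COMMIT TRANSACTION;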
Before seeding test data into a DB table I need to truncate the table (I need to reset the primary key). I am trying to do that this way:
ActiveRecord::Base.connection.execute("TRUNCATE users")
but when I print out data from the DB, I still don't see the primary key counting from 1.
What am I doing wrong?
EDIT:
Also, I've tried running this manually in the terminal against the PostgreSQL database:
truncate users
But the primary key count still doesn't start from 1.
SOLUTION:
In Postgres, run:
ALTER SEQUENCE users_id_seq RESTART WITH 1;
In MySQL, TRUNCATE table; deletes all rows and resets the auto-increment counter.
In PostgreSQL it does not do this automatically; use TRUNCATE TABLE table RESTART IDENTITY; instead.
Just for the record: in SQLite there is no TRUNCATE statement; instead, it's
DELETE FROM table;
DELETE FROM sqlite_sequence WHERE name='table';
In case your db is postgres, you can do something like this to truncate the db tables of your models:
[
  MyModel1,
  MyModel2,
  MyModel3
].each do |m|
  ActiveRecord::Base.connection.execute("TRUNCATE TABLE #{m.table_name} RESTART IDENTITY;")
end
I'm answering this question late, but I hope it helps someone else.
You have to install a gem called Database Cleaner (add gem 'database_cleaner' to your Gemfile), which helps clean your database without affecting your database schema.
To clean your database each time you run rake db:seed, paste
DatabaseCleaner.clean_with(:truncation)
at the top of your seed file. It'll clear your database and start the count from 1 again.
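A sketch of db/seeds.rb with the cleaner at the top (the User seed line is just an illustration, not part of the original answer):

# db/seeds.rb
require 'database_cleaner'

# Truncate all tables and reset their primary key sequences before seeding
DatabaseCleaner.clean_with(:truncation)

User.create!(:name => 'First user')  # hypothetical seed data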
Disclaimer: this answer is tested and working perfectly on my system.
From within Rails, in a csv_upload.rb, I used the following and it worked:
ActiveRecord::Base.connection.execute('TRUNCATE model_name RESTART IDENTITY')
I recently added Devise and CanCan to my Rails 3.2.3 app and need to run rake db:migrate to get them working properly. I have a migration file for links that I created earlier, and it somehow conflicts when I run rake db:migrate:
== CreateLinks: migrating ====================================================
-- create_table(:links)
rake aborted!
An error has occurred, this and all later migrations canceled:
SQLite3::SQLException: table "links" already exists: CREATE TABLE "links" ("id" INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, "url" varchar(255), "description" varchar(255), "created_at" datetime NOT NULL, "points" integer, "updated_at" datetime NOT NULL)
I tried running rake db:migrate:reset but this seems to do nothing to help my situation. I still can't run a db migration for my new gems. How can I get around this? Can I omit the links migration somehow?
Did you create the links table manually before running the migration? Somehow you seem to have gotten your migrations out of sync with your database.
If you are not concerned about any of the data in the database, you can do a rake db:drop first, then do the rake db:migrate. This blows away all the tables in the database and runs all the migrations again from the beginning.
If you do need to maintain the existing database tables, then you could wrap the create_table :links statement in an unless table_exists? :links check.
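A sketch of the guarded migration, with columns taken from the error message above:

class CreateLinks < ActiveRecord::Migration
  def change
    # Skip creation if the table already exists in the database
    unless table_exists? :links
      create_table :links do |t|
        t.string  :url
        t.string  :description
        t.integer :points
        t.timestamps
      end
    end
  end
end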
I need help writing a T-SQL script to modify two columns' data types.
We are changing two columns:
uniqueidentifier -> varchar(36) (this column has a primary key constraint)
xml -> nvarchar(4000)
My main concern is production deployment of the script...
The table is actively used by a public website that gets thousands of hits per hour. Consequently, we need the script to run quickly, without affecting service on the front end. Also, we need to be able to automatically rollback the transaction if an error occurs.
Fortunately, the table only contains about 25 rows, so I am guessing the update will be quick.
This database is SQL Server 2005.
(FYI - the type changes are required because of a 3rd-party tool which is not compatible with SQL Server's xml and uniqueidentifier types. We've already tested the change in dev and there are no functional issues with the change.)
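(For the automatic-rollback requirement, SQL Server's SET XACT_ABORT ON is the usual building block; a minimal sketch, with the actual type-change statements left as a placeholder:)

-- With XACT_ABORT ON, any runtime error aborts the batch
-- and rolls back the open transaction automatically.
SET XACT_ABORT ON;
BEGIN TRANSACTION;

-- ... type-change / table-swap statements go here ...

COMMIT TRANSACTION;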
As David said, executing a script against a production database without taking a backup or stopping the site is not the best idea. That said, if you only need to change one table with a small number of rows, you can prepare a script to:
Begin a transaction
Create a new table with the final structure you want
Copy the data from the original table to the new table
Rename the old table to, for example, original_name_old
Rename the new table to original_table_name
End the transaction
This leaves you with a table named like the original one but with the new structure you want, and in addition you keep the original table under a backup name, so if you want to roll back the change you can simply drop the new table and rename the original one back.
If the table has foreign keys the script will be a little more complicated, but it is still possible without much work.
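A minimal sketch of those steps for this question's two columns (table and column names are hypothetical):

BEGIN TRANSACTION;

-- 1. New table with the final structure
CREATE TABLE dbo.Settings_new (
    SettingId varchar(36)    NOT NULL PRIMARY KEY,  -- was uniqueidentifier
    Payload   nvarchar(4000) NULL                   -- was xml
);

-- 2. Copy and convert the data
INSERT INTO dbo.Settings_new (SettingId, Payload)
SELECT CONVERT(varchar(36), SettingId), CONVERT(nvarchar(4000), Payload)
FROM dbo.Settings;

-- 3. and 4. Swap the names, keeping the old table as a backup
EXEC sp_rename 'dbo.Settings', 'Settings_old';
EXEC sp_rename 'dbo.Settings_new', 'Settings';

COMMIT TRANSACTION;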
Consequently, we need the script to run quickly, without affecting service on the front end.
This is just an opinion, but it's based on experience: that's a bad idea. It's better to have a short (pre-announced if possible) scheduled downtime than to take the risk.
The only exception is if you really don't care if the data in these tables gets corrupted, and you can be down for an extended period.
In this situation, based on the types of changes you're making and the testing you've already performed, it sounds like the risk is very minimal, since you've tested the changes and you SHOULD be able to do it safely, but nothing is guaranteed.
First, you need to have a fall-back plan in case something goes wrong. The short version of a MINIMAL reasonable plan would include:
Shut down the website
Make a backup of the database
Run your script
Test the DB for integrity
Bring the website back online
It would be very unwise to attempt such an update while the website is live. You run the risk of being down for an extended period if something goes wrong.
A GOOD plan would also have you testing this against a copy of the database and a copy of the website (a test/staging environment) first and then taking the steps outlined above for the live server update. You have already done this. Kudos to you!
There are even better methods for making such an update, but the trade-off of down time for safety is a no-brainer in most cases.
And if you absolutely need to do this in live then you might consider this:
1) Build an offline version of the table with the new data types and copied data.
2) Build all the required keys and indexes on the offline table.
3) Swap the tables out in a transaction; you could rename the old table to something else as an emergency backup.
sp_help 'sp_rename'
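For example, the swap in step 3 might look like this (table names are hypothetical):

BEGIN TRANSACTION;
EXEC sp_rename 'dbo.Settings', 'Settings_backup';   -- keep the old table as an emergency backup
EXEC sp_rename 'dbo.Settings_offline', 'Settings';  -- promote the rebuilt table
COMMIT TRANSACTION;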
But TEST FIRST all of this in a prod-like environment. And make sure your backups are up to date. AND do this when you are least busy.