Rails seed: How to truncate DB table? - sql

Before seeding test data into a DB table I need to truncate the table (I need to reset the primary key). I'm trying to do that this way:
ActiveRecord::Base.connection.execute("TRUNCATE users")
but when I print out the data from the DB, the primary key still doesn't start counting from 1.
What am I doing wrong?
EDIT:
Also, I've tried running this manually in a terminal against the PostgreSQL database:
truncate users
But the primary key count still doesn't start from 1.
SOLUTION:
In Postgres, run:
ALTER SEQUENCE users_id_seq RESTART WITH 1;

In MySQL, TRUNCATE table; deletes all rows and resets the auto increment counter.
In PostgreSQL it does not do this automatically. You can use TRUNCATE TABLE table RESTART IDENTITY;.
Just for the record: in SQLite there is no TRUNCATE statement; instead, it's:
DELETE FROM table;
DELETE FROM sqlite_sequence WHERE name='table';

If your DB is Postgres, you can do something like this to truncate the tables of your models:
[MyModel1, MyModel2, MyModel3].each do |m|
  ActiveRecord::Base.connection.execute("TRUNCATE TABLE #{m.table_name} RESTART IDENTITY;")
end

I'm answering this question late, but I hope it will help someone else.
You have to install a gem called Database Cleaner (add gem 'database_cleaner' to your Gemfile), which helps clean your database without affecting your database schema.
To clean your database every time you run rake db:seed, paste
DatabaseCleaner.clean_with(:truncation)
at the top of your seed file. It'll clear your database and start counting from 1 again.
Disclaimer: this answer is tested and working perfectly on my system.

From within Rails, in a csv_upload.rb, I used the following and it worked:
ActiveRecord::Base.connection.execute('TRUNCATE model_name RESTART IDENTITY')

Related

Delete a broken PostgreSQL table

I have a table of ~20 million rows in PostgreSQL and I want to delete it.
But none of these operations works (each has been running for more than 12 hours without success):
- DELETE
- TRUNCATE
- VACUUM
- ANALYZE
I can't do anything with this table...
A few days ago I tried to regenerate the id (BIGSERIAL) of each row with:
ALTER SEQUENCE "data_id_seq" RESTART WITH 1
UPDATE data SET id=nextval('data_id_seq')
And I think this operation broke the table...
If someone knows how I can delete this table, thanks for your help!
Try this...
DROP TABLE table_name;
See the doc

mysqldump not creating table, then attempting to LOCK / ALTER the non-existing table

Having several issues with a mysqldump on mysql Ver 14.14 Distrib 5.1.66, for unknown-linux-gnu (x86_64), the first being:
A standard mysqldump is not creating a table, then attempts to LOCK / ALTER the non-existent table.
SQL query:
-- Dumping data for table `catalog_product_index_price_cfg_opt_tmp`
LOCK TABLES `catalog_product_incatalog_category_entitydex_price_cfg_opt_tmp` WRITE;
MySQL said:
#1146 - Table 'group_high.catalog_product_incatalog_category_entitydex_price_cfg_opt_tmp' doesn't exist.
Any idea on how this could be happening?
I've had similar issues in the past, and this answer might amount to "did you try turning it off and on". But the way I've fixed it before is to close the connection to the DB, reconnect, make sure all the tables are there, and confirm I can query that specific table. Then start the mysqldump again. If it's still not outputting, you've narrowed the problem down to the dump itself.

Generate Rails migrations from a schema

I am creating a new Rails application which will work with an existing schema. I have been given the schema SQL, but I want to create Rails migrations to populate the database in development. The schema is not overly complicated, with around 20 tables, but I don't want to waste time and risk typos by manually creating the migrations.
Is there a way to generate Rails migrations given a schema's SQL?
Sure, connect your application to your database, then run
rake db:schema:dump
This will give you a db/schema.rb ready with all of your definitions. Now that you have that db/schema.rb, simply copy the contents of the ActiveRecord::Schema.define block into a new migration. I've done this before, and it works great.
I prefer to simply write the initial migration's up method with SQL execute calls:
class InitialDbStructure < ActiveRecord::Migration
  def up
    execute "CREATE TABLE abouts (
      id INTEGER UNSIGNED AUTO_INCREMENT,
      updated_at TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
      created_at TIMESTAMP,
      title VARCHAR(125),
      body MEDIUMTEXT,
      CONSTRAINT PK_id PRIMARY KEY (id),
      INDEX ORDER_id (id ASC)
    ) ENGINE=InnoDB;"
  end
end
NOTES
You will find, particularly if you are often rebuilding and repopulating tables (rake db:drop db:create db:schema:load db:fixtures:load), that execute statements run far faster than interpreted Ruby syntax. For example, it takes over 55 seconds for our tables to rebuild from Rails migrations in Ruby syntax, whereas execute statements re-generate and re-populate our tables in 20 seconds. This of course is a substantial issue in projects where initial content is regularly revised, or table specifications are regularly revised.
Perhaps of equal importance, you can retain this rebuild-and-repopulate speed by maintaining a single original migration in executed SQL syntax and re-running that single migration: first gut your schema.rb, then run rake db:reset before re-populating your tables. Make sure you set :version => 0 so that you get a new schema, faithful to your migration:
ActiveRecord::Schema.define(:version => 0) do
end

How to efficiently remove all rows from a table in DB2

I have a table that has something like half a million rows and I'd like to remove all rows.
If I do a simple delete from tbl, the transaction log fills up. I don't care about transactions in this case; I do not want to roll back under any circumstances. I could delete the rows in many transactions, but are there any better ways to do this?
How do I efficiently remove all rows from a table in DB2? Can I somehow disable transactions for this command, or is there a special command to do this (like TRUNCATE in MySQL)?
After I have deleted the rows, I will repopulate the database with a similar amount of new data.
It seems that the following command works in newer versions of DB2:
TRUNCATE TABLE someschema.sometable IMMEDIATE
To truncate a table in DB2, simply write:
alter table schema.table_name activate not logged initially with empty table
From what I was able to read, this will delete the table's content without doing any kind of logging, which will go much easier on your server's I/O.

Why 'delete from table' takes a long time when 'truncate table' takes 0 time?

(I've tried this in MySQL.)
I believe they're semantically equivalent. Why not identify this trivial case and speed it up?
TRUNCATE TABLE cannot be rolled back; it is like dropping and recreating the table.
...just to add some detail.
Calling the DELETE statement tells the database engine to generate a transaction log of all the records deleted. In the event the delete was done in error, you can restore your records.
Calling the TRUNCATE statement is a blanket "all or nothing" that removes all the records with no transaction log to restore from. It is definitely faster, but should only be done when you're sure you don't need any of the records you're going to remove.
DELETE FROM table deletes rows from the table one at a time and adds a record to the transaction log so that the operation can be rolled back. The time taken to delete is also proportional to the number of indexes on the table, and to whether there are any foreign key constraints (for InnoDB).
TRUNCATE effectively drops the table and recreates it, and cannot be performed within a transaction. It therefore requires fewer operations and executes quickly. TRUNCATE also does not fire any ON DELETE triggers.
Exact details about why this is quicker in MySql can be found in the MySql documentation:
http://dev.mysql.com/doc/refman/5.0/en/truncate-table.html
Your question was about MySQL, and I know little to nothing about MySQL as a product, but I thought I'd add that in SQL Server a TRUNCATE statement can be rolled back. Try it for yourself:
create table test1 (col1 int)
go
insert test1 values(3)
begin tran
truncate table test1
select * from test1
rollback tran
select * from test1
In SQL Server, TRUNCATE is logged; it's just not logged as verbosely as DELETE. I believe it's referred to as a minimally logged operation. Effectively the data pages still contain the data, but their extents have been marked for deallocation. As long as the data pages still exist, you can roll back the truncate. Hope this is helpful. I'd be interested to know the results if somebody tries it on MySQL.
For MySQL 5 using InnoDB as the storage engine, TRUNCATE acts just like DELETE without a WHERE clause: i.e. for large tables it takes ages because it deletes rows one by one. This is changing in version 6.x.
See
http://dev.mysql.com/doc/refman/5.1/en/truncate-table.html
for 5.1 info (row-by-row with InnoDB) and
http://blogs.mysql.com/peterg/category/personal-opinion/
for changes in 6.x.
Editor's note
This answer is clearly contradicted by the MySQL documentation:
"For an InnoDB table before version 5.0.3, InnoDB processes TRUNCATE TABLE by deleting rows one by one. As of MySQL 5.0.3, row by row deletion is used only if there are any FOREIGN KEY constraints that reference the table. If there are no FOREIGN KEY constraints, InnoDB performs fast truncation by dropping the original table and creating an empty one with the same definition, which is much faster than deleting rows one by one."
Truncate works at the table level, while Delete works at the row level. If you were to translate this into SQL with a different syntax, truncate would be:
DELETE FROM table
thus deleting all rows at once, while the DELETE statement (in phpMyAdmin) goes like:
DELETE FROM table WHERE id = 1
DELETE FROM table WHERE id = 2
and so on, until the table is empty. Each query takes a number of (milli)seconds, which adds up to longer than a truncate.