Are there libraries that focus on taking two database exports, finding the differences, and creating UPDATE/ALTER statements for them? Basically an update script from export A to export B.
For instance this:
-- Version 1
CREATE TABLE IF NOT EXISTS `mytable` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`title` varchar(255) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;
-- Version 2
CREATE TABLE IF NOT EXISTS `mytable` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`title` varchar(255) NOT NULL,
`description` text,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ;
-- Would result in this:
ALTER TABLE `mytable`
ADD `description` text;
Edit: this question is related to libraries for MySQL, not tools.
There are a few MySQL comparison tools out there.
SQLyog
Redgate MySQL Compare
RedGate (http://www.red-gate.com/products/sql-development/sql-compare/index-b) offers a very good and stable solution to this.
I believe the Ultimate edition of Visual Studio 2010 can also compare schemas; however, I'm not sure whether it will generate the ALTER scripts for you.
Edit:
I just remembered http://opendbiff.codeplex.com/ too; however, I didn't have much luck when I last looked at it.
This Node module could be useful. It diffs live databases, but it should be simple to create a live database from an SQL dump.
https://github.com/contra/dbdiff
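If you end up rolling your own, a low-tech starting point is to import dump A and dump B into two scratch schemas on the same server and compare them via information_schema. A minimal sketch, assuming the scratch schemas are named v1 and v2 (both names are assumptions):
-- Columns present in v2 but missing from v1: candidates for ALTER TABLE ... ADD.
SELECT b.TABLE_NAME, b.COLUMN_NAME, b.COLUMN_TYPE
FROM information_schema.COLUMNS AS b
LEFT JOIN information_schema.COLUMNS AS a
  ON a.TABLE_SCHEMA = 'v1'
  AND a.TABLE_NAME = b.TABLE_NAME
  AND a.COLUMN_NAME = b.COLUMN_NAME
WHERE b.TABLE_SCHEMA = 'v2'
  AND a.COLUMN_NAME IS NULL;
This only finds added columns; changed types and dropped columns need similar queries, so it is a starting point rather than a full diff tool.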
I apologize for asking a question related to this topic; I know it has been discussed in a few other posts and is something of a duplicate, but the posts I have been through did not help me fix my issue.
I have the following SQL code to add two tables:
CREATE TABLE IF NOT EXISTS `opa`.`iddocumenttype`(
`iddocumenttypeid` INT(11) UNSIGNED NOT NULL AUTO_INCREMENT,
`name` VARCHAR(50) NOT NULL,
PRIMARY KEY(`iddocumenttypeid`),
KEY (`iddocumenttypeid`)
) ENGINE=INNODB CHARSET=utf8;
and
CREATE TABLE IF NOT EXISTS `opa`.`identificationdocumentdetails`(
`identificationdocumentdetailsid` INT(11) UNSIGNED NOT NULL AUTO_INCREMENT,
`IDdocumenttypeid` INT(11) UNSIGNED NOT NULL,
`IDdocumentnumber` VARCHAR(128),
`name` VARCHAR(200) COMMENT 'Exact name as on idDoc?',
`dateofissue` DATE CHECK (dateofissue < CURRENT_DATE()),
`placeofissue` VARCHAR(128),
`validtill` DATE CHECK (validtill > CURRENT_DATE()),
PRIMARY KEY (`identificationdocumentdetailsid`),
CONSTRAINT `FK_identificationdocumentdetails_iddocumenttype` FOREIGN KEY (`IDdocumenttypeid`) REFERENCES `opa`.`iddocumenttype`(`iddocumenttypeid`)) ENGINE=INNODB CHARSET=utf8;
Now, when I run the query to create the second table, identificationdocumentdetails, I get the following error:
Error Code: 1005
Can't create table 'opa.identificationdocumentdetails' (errno: 150)
I don't understand why it's happening. I am sure it has something to do with the CONSTRAINT line, because when I remove this line:
CONSTRAINT `FK_identificationdocumentdetails_iddocumenttype` FOREIGN KEY (`IDdocumenttypeid`) REFERENCES `opa`.`iddocumenttype`(`iddocumenttypeid`)
the stored procedure works fine. I think I am missing what's going wrong here; can someone please point out what I am not seeing?
`IDdocumenttypeid` INT(11)
in the identificationdocumentdetails table needs to be exactly the same type as the column it references, which is
`iddocumenttypeid` INT(11) UNSIGNED NOT NULL
http://sqlfiddle.com/#!9/66b6d
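A hedged sketch of the fix: make the referencing column's type exactly match the referenced one, then add the constraint separately (this assumes the details table has already been created without the foreign key):
-- Align the column type with the referenced column, then re-add the constraint.
ALTER TABLE `opa`.`identificationdocumentdetails`
  MODIFY `IDdocumenttypeid` INT(11) UNSIGNED NOT NULL;
ALTER TABLE `opa`.`identificationdocumentdetails`
  ADD CONSTRAINT `FK_identificationdocumentdetails_iddocumenttype`
  FOREIGN KEY (`IDdocumenttypeid`)
  REFERENCES `opa`.`iddocumenttype` (`iddocumenttypeid`);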
I want to create a friend list for my website, stored in a database table. The following is the table structure that I think should best serve the purpose.
CREATE TABLE `sdt_friend_graph` (
`user` INT(11) NOT NULL,
`friend` INT(11) NOT NULL,
`status` ENUM('requested','accepted') COLLATE utf8_unicode_ci DEFAULT NULL,
`requested_on` DATETIME DEFAULT NULL,
`accepted_on` DATETIME DEFAULT NULL,
PRIMARY KEY (`user`,`friend`)
)
I just want to find out whether my approach is OK, or whether there is a better, more efficient way to do this. I'm open to suggestions.
Regards,
Your table structure looks fine. I would just make user an AUTO_INCREMENT field and change the name to friendid... just for semantics.
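One thing to watch with this design: if each relationship is stored as a single row, finding someone's friends means checking both columns. A minimal sketch of such a lookup (user id 42 is an assumption):
-- All accepted friends of user 42, whichever side initiated the request.
SELECT CASE WHEN `user` = 42 THEN `friend` ELSE `user` END AS friend_id
FROM `sdt_friend_graph`
WHERE 42 IN (`user`, `friend`)
  AND `status` = 'accepted';
-- The primary key only covers lookups by `user`; an index on `friend` helps the reverse side.
ALTER TABLE `sdt_friend_graph` ADD INDEX `idx_friend` (`friend`);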
Imagine that we have a website where users can read articles, view photos, watch videos, and more. Every "item" may be commented on, so we need somewhere to store those comments. Let's discuss the storage possibilities for this case.
Distributed solution
We can obviously create separate tables for each "item", so that we have tables like:
CREATE TABLE IF NOT EXISTS `article_comments` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`createdBy` int(11) DEFAULT NULL,
`createdAt` int(11) DEFAULT NULL,
`article` int(11) DEFAULT NULL,
`content` text,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
and then obviously photo_comments, video_comments, and so on. The advantages of this approach are as follows:
we can specify a foreign key to every "item" table,
the database is divided into logical parts,
there is no problem with exporting such data.
Disadvantages:
many tables
probably hard to maintain (adding fields, etc.)
Centralized solution
On the other hand, we can merge all those tables into two:
CREATE TABLE IF NOT EXISTS `comment_types` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
and
CREATE TABLE IF NOT EXISTS `comments` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`createdBy` int(11) DEFAULT NULL,
`createdAt` int(11) DEFAULT NULL,
`type` int(11) DEFAULT NULL,
`content` text,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
The comment_types table is a dictionary: it contains key-value pairs of the commented item "type" and its name, for example:
1:Articles
2:Photos
3:Videos
The comments table stores the usual data, with an additional type field.
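For illustration, populating the dictionary and adding a comment might look like this (the user id and content are assumptions):
INSERT INTO `comment_types` (`name`) VALUES ('Articles'), ('Photos'), ('Videos');
-- A comment on an article (type 1), created by user 7 right now.
INSERT INTO `comments` (`createdBy`, `createdAt`, `type`, `content`)
VALUES (7, UNIX_TIMESTAMP(), 1, 'Great article!');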
Advantages:
Easier maintenance (adding / removing fields),
New comment types can be added "on the fly".
Disadvantages:
Harder to migrate / export,
Possible performance drop when querying large dataset.
Discussion:
Which storage option will be better in terms of query performance (assume the dataset IS big enough for that to matter)?
Again on performance: will adding an INDEX on type remove or drastically reduce that performance drop?
Which storage option will be better in terms of management and possible future migration? (The distributed one will be better, of course, but let's see whether the centralized one isn't far behind.)
I'm not sure either of the disadvantages you list for option 2 is serious: data export is easily accomplished with a simple WHERE clause, and I wouldn't worry about performance. Option 2 is properly normalised, and in a modern relational database performance should be excellent (and can be tweaked further with appropriate indexes, etc., if necessary).
I would only consider the first option if I could prove that it was necessary for performance, scalability, or other reasons - but it must be said that this seems unlikely.
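On the index question from the discussion: an index on type lets MySQL read only the matching rows, so per-type queries (including the WHERE-clause export mentioned above) stay cheap. A sketch using the schema above:
ALTER TABLE `comments` ADD INDEX `idx_type` (`type`);
-- Per-type listing or export becomes a simple indexed filter.
SELECT `id`, `createdBy`, `createdAt`, `content`
FROM `comments`
WHERE `type` = 2; -- 2 = Photos in the example dictionary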
I have this table:
CREATE TABLE `forum_rank` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`user_id` int(11) NOT NULL DEFAULT '0',
`rank` int(11) NOT NULL DEFAULT '0',
`forum_id` int(11) NOT NULL DEFAULT '0',
PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=2 DEFAULT CHARSET=latin1;
Now I ask what performs best: * or all the fields listed, like these two examples:
select * from forum_rank;
or
select id, user_id, rank, forum_id from forum_rank;
You should explicitly specify the columns. Otherwise the database engine first has to find out what the table's columns are (resolve the * operator) and only then perform the actual query.
I don't think performance will be a problem here. There's a better reason to prefer the second idiom: your code is less likely to break if you add additional columns.
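To make that concrete: if a column is added later, the explicit list keeps returning the same shape, while select * silently changes. A sketch (the points column is hypothetical):
-- Hypothetical schema change some time later.
ALTER TABLE `forum_rank` ADD COLUMN `points` INT NOT NULL DEFAULT 0;
select id, user_id, `rank`, forum_id from forum_rank; -- result shape unchanged
select * from forum_rank; -- now also returns `points`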
Just wanted to know what would happen if, in my book database, I had two different authors with the same name. How could I redesign my database to solve this problem? Do I have to assign primary and secondary keys or something? By the way, this question is related to my previous one.
An AUTHORS table would help your book database - you could store the author info once, but associate it with multiple books:
DROP TABLE IF EXISTS `example`.`authors`;
CREATE TABLE `example`.`authors` (
`author_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`firstname` varchar(45) NOT NULL,
`lastname` varchar(45) NOT NULL,
PRIMARY KEY (`author_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Books can have multiple authors, so you'd need a many-to-many table to relate authors to books:
DROP TABLE IF EXISTS `example`.`book_authors_map`;
CREATE TABLE `example`.`book_authors_map` (
`book_id` int(10) unsigned NOT NULL,
`author_id` int(10) unsigned NOT NULL,
PRIMARY KEY (`book_id`,`author_id`),
KEY `FK_authors` (`author_id`),
CONSTRAINT `FK_books` FOREIGN KEY (`book_id`) REFERENCES `books` (`book_id`),
CONSTRAINT `FK_authors` FOREIGN KEY (`author_id`) REFERENCES `authors` (`author_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
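With the map in place, queries can tell same-named authors apart by author_id. A hedged example, assuming a books table with book_id and title columns (that table isn't shown here):
SELECT b.`title`, a.`author_id`, a.`firstname`, a.`lastname`
FROM `example`.`books` b
JOIN `example`.`book_authors_map` m ON m.`book_id` = b.`book_id`
JOIN `example`.`authors` a ON a.`author_id` = m.`author_id`
WHERE a.`firstname` = 'John' AND a.`lastname` = 'Smith';
-- Two authors named John Smith stay distinct because they have different author_id values.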
You should almost always use your own in-house ID system, even if it's never displayed to your users. In your database each book will have its own 'id' attribute, which you can just auto-increment by 1 each time.
The reason for doing this, other than the example in your question, is that even if you use a seemingly unique identifier (like an ISBN), that standard could change at some point (and has), leaving you with a lot of work to update your database.
If you have two different authors with the exact same name, each author should have some sort of unique ID to differentiate them, either a GUID or an autonumber.
Use natural keys where they exist - in this case, ISBNs.