Set the AUTO_INCREMENT starting value of an InnoDB table to zero? - sql

Is there any way to get an AUTO_INCREMENT field of an InnoDB table to start counting from 0 instead of 1?
CREATE TABLE `df_mainevent` (
`idDf_MainEvent` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`idDf_MainEvent`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

MySQL documentation:
If a user specifies NULL or 0 for the AUTO_INCREMENT column in an INSERT, InnoDB treats the row as if the value had not been specified and generates a new value for it.
So 0 is a 'special' value that is treated like NULL. Even when you use AUTO_INCREMENT = 0, the initial value will be set to 1.
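You can see this special treatment directly (a minimal sketch, table name hypothetical, assuming the default sql_mode):
CREATE TABLE demo (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY) ENGINE=InnoDB;
INSERT INTO demo (id) VALUES (0);  -- 0 is treated like NULL here
SELECT id FROM demo;               -- returns 1, not 0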
Beginning with MySQL 5.0.3, InnoDB supports the AUTO_INCREMENT = N table option in CREATE TABLE and ALTER TABLE statements, to set the initial counter value or alter the current counter value. The effect of this option is canceled by a server restart, for reasons discussed earlier in this section.

CREATE TABLE `df_mainevent` (
`idDf_MainEvent` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`idDf_MainEvent`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=latin1;
works with MySQL >= 5.0.3.
EDIT:
Just noticed that MySQL in general does not accept an auto-increment value of 0 - that's independent of the storage engine used. MySQL simply uses 1 as the first auto-increment value. So to answer the question: NO, that's not possible, and it does not depend on the storage engine.

This works in both InnoDB and MyISAM, and the second insert gets a 1, not a 2:
CREATE TABLE ex1 (id INT AUTO_INCREMENT PRIMARY KEY) ENGINE=MyISAM;
SET sql_mode='NO_AUTO_VALUE_ON_ZERO';
INSERT INTO ex1 SET id=0;
INSERT INTO ex1 SET id=NULL;
SELECT * FROM ex1;
+----+
| id |
+----+
| 0 |
| 1 |
+----+
2 rows in set (0.00 sec)
CREATE TABLE ex2 (id INT AUTO_INCREMENT PRIMARY KEY) ENGINE=InnoDB;
SET sql_mode='NO_AUTO_VALUE_ON_ZERO';
INSERT INTO ex2 SET id=0;
INSERT INTO ex2 SET id=NULL;
SELECT * FROM ex2;
+----+
| id |
+----+
| 0 |
| 1 |
+----+
2 rows in set (0.00 sec)
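Note that SET sql_mode='NO_AUTO_VALUE_ON_ZERO' only changes the mode for the current session; any other connection that needs to insert an explicit 0 has to set it too (or it has to be set globally). As far as I know this is also why mysqldump writes that mode at the top of its dumps: so that restored rows keep their zeros instead of getting fresh auto-increment values.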

Daren Schwenke's technique works. Too bad the next record inserted will get ID 2.
For example:
CREATE TABLE IF NOT EXISTS `table_name` (
`ID` INT UNSIGNED NOT NULL AUTO_INCREMENT,
`Name` VARCHAR(100) NOT NULL,
PRIMARY KEY( `ID` )
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=latin1;
INSERT INTO `table_name` (`Name`) VALUES ('Record0');
UPDATE `table_name` SET `ID`=0 WHERE `ID`=1;
INSERT INTO `table_name` (`Name`) VALUES ('Record1');
SELECT * FROM `table_name`;
ID  Name
0   Record0
2   Record1
This isn't a big deal, it's just annoying.
Tim

I have not been able to have autoincrement start at 0, but starting at 1 and then setting it to 0 via an UPDATE works fine.
I commonly use this trick to detect deletes in a table.
On update of any row, I set that row's last update time.
On deletes, I set the last update time of row 0.
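A minimal sketch of that setup (table and column names are hypothetical): row 0 acts as a sentinel whose timestamp records the most recent delete, so a poller only has to compare timestamps.
CREATE TABLE items (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    last_updated TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
                 ON UPDATE CURRENT_TIMESTAMP
) ENGINE=InnoDB;

-- seed the sentinel: it is created as id 1, then moved to 0
INSERT INTO items (name) VALUES ('sentinel');
UPDATE items SET id = 0 WHERE id = 1;

-- whenever rows are deleted, touch row 0 so its timestamp records the deletion
DELETE FROM items WHERE id = 42;
UPDATE items SET last_updated = CURRENT_TIMESTAMP WHERE id = 0;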

Related

How to declare "nextval('testing_thing_thing_id_seq'::regclass)" as default value for column "thing_id" in postgres table "testing_thing"?

In my postgres db there is a table called testing_thing, which (as I can see by running \d testing_thing at my psql prompt) is defined as
Table "public.testing_thing"
   Column   |       Type        | Collation | Nullable |                     Default
------------+-------------------+-----------+----------+--------------------------------------------------
 thing_id   | integer           |           | not null | nextval('testing_thing_thing_id_seq'::regclass)
 thing_num  | smallint          |           | not null | 0
 thing_desc | character varying |           | not null |
Indexes:
"testing_thing_pk" PRIMARY KEY, btree (thing_num)
I want to drop it and re-create it exactly as it is, but I don't know how to reproduce the
nextval('testing_thing_thing_id_seq'::regclass)
part for column thing_id.
This is the query I put together to create the table:
CREATE TABLE testing_thing(
thing_id integer NOT NULL, --what else should I put here?
thing_num smallint NOT NULL PRIMARY KEY DEFAULT 0,
thing_desc varchar(100) NOT NULL
);
What is it missing?
Add a DEFAULT to the column you want to increment and call nextval():
CREATE SEQUENCE testing_thing_thing_id_seq START WITH 1;
CREATE TABLE testing_thing(
thing_id integer NOT NULL DEFAULT nextval('testing_thing_thing_id_seq'),
thing_num smallint NOT NULL PRIMARY KEY DEFAULT 0,
thing_desc varchar(100) NOT NULL
);
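If you also want the sequence dropped automatically when the table or column is dropped, the way a serial column behaves, you can tie it to the column:
ALTER SEQUENCE testing_thing_thing_id_seq OWNED BY testing_thing.thing_id;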
Side note: keep in mind that attaching a sequence to a column does not prevent users from manually filling it with arbitrary data, which can create really nasty problems with primary keys. If you want to prevent that and do not necessarily need a sequence, consider creating an identity column instead, e.g.
CREATE TABLE testing_thing(
thing_id integer NOT NULL GENERATED ALWAYS AS IDENTITY,
thing_num smallint NOT NULL PRIMARY KEY DEFAULT 0,
thing_desc varchar(100) NOT NULL
);
Demo: db<>fiddle
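One thing to be aware of with GENERATED ALWAYS: explicit values for thing_id are rejected unless you override them, e.g.
INSERT INTO testing_thing (thing_id, thing_num, thing_desc)
VALUES (999, 1, 'x');                          -- rejected with an error
INSERT INTO testing_thing (thing_id, thing_num, thing_desc)
OVERRIDING SYSTEM VALUE VALUES (999, 1, 'x');  -- explicitly allowed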

How to UPDATE or INSERT in PostgreSQL

I want to UPDATE or INSERT a column in PostgreSQL instead of doing INSERT or UPDATE using INSERT ... ON CONFLICT ..., because there will be more updates than inserts. Also, I have an auto-incrementing id column defined using SERIAL, and it increments the id every time an INSERT or UPDATE is attempted, which is not what I want: I want the id to increase only on an actual INSERT, so that all the ids stay in order.
The table is created like this
CREATE TABLE IF NOT EXISTS table_name (
id SERIAL PRIMARY KEY,
user_id varchar(30) NOT NULL,
item_name varchar(50) NOT NULL,
code_uses bigint NOT NULL,
UNIQUE(user_id, item_name)
)
And the query I used was
INSERT INTO table_name
VALUES (DEFAULT, 'some_random_id', 'some_random_name', 1)
ON CONFLICT (user_id, item_name)
DO UPDATE SET code_uses = table_name.code_uses + 1;
Thanks :)
Upserts in PostgreSQL do exactly what you described.
Consider this table and records
CREATE TABLE t (id SERIAL PRIMARY KEY, txt TEXT);
INSERT INTO t (txt) VALUES ('foo'),('bar');
SELECT * FROM t ORDER BY id;
id | txt
----+-----
1 | foo
2 | bar
(2 rows)
Using upserts, the id will only increment if a new record is inserted:
INSERT INTO t VALUES (1,'foo updated'),(3,'new record')
ON CONFLICT (id) DO UPDATE SET txt = EXCLUDED.txt;
SELECT * FROM t ORDER BY id;
id | txt
----+-------------
1 | foo updated
2 | bar
3 | new record
(3 rows)
EDIT (see comments): this is the expected behaviour of a serial column, since it is nothing but a fancy way to use sequences. Long story short: with upserts, gaps are inevitable. If you're worried the value might become too big, use bigserial instead and let PostgreSQL do its job.
Related thread: serial in postgres is being increased even though I added on conflict do nothing

Why does MySQL return a message like 'returned an empty result set' or 'n row(s) affected'?

Why does MySQL return "# MySQL returned an empty result set (i.e. zero rows)." and "3 row(s) affected."? Is there anything wrong with my SQL statements?
CREATE TABLE IF NOT EXISTS `test` (
`id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
`order` mediumint(8) NOT NULL,
`url` varchar(70) COLLATE utf8_unicode_ci NOT NULL,
`title` varchar(70) COLLATE utf8_unicode_ci NOT NULL,
`content` text COLLATE utf8_unicode_ci,
PRIMARY KEY (`id`),
UNIQUE KEY `url` (`url`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
# MySQL returned an empty result set (i.e. zero rows).
INSERT INTO `test` (`id`, `order`, `url`, `title`, `content`) VALUES
(52338, 1, '', 'Home', 'content'),
(70104, 2, 'about', 'About', 'content'),
(27034, 3, 'portfolio', 'Portfolio', 'content');
# 3 row(s) affected.
The number of affected rows and the length of the result set are two different things.
Generally, INSERT, UPDATE and DELETE statements affect rows, while SELECT returns a result set which may be empty if no rows were matched according to the condition.
Insert queries don't return any rows. The affected rows is basically how many rows were inserted. If one of the value sets you included had failed for some reason, you'd see "2 rows affected" instead of 3.
The same applies for delete and update queries - you're not FETCHING information from the database, you're just adding or changing data that was already there.
Only in the case of a SELECT query would rows be returned, and then only if any rows matched the conditions (where/having/joins) you set.
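A trivial illustration of the difference, run against the test table above (`order` is quoted because it is a reserved word):
SELECT * FROM test WHERE id = 0;    -- Empty set: a result set with zero rows
UPDATE test SET `order` = `order`;  -- Query OK, 0 rows affected: nothing was changed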
I just cut and pasted your code directly into a test database and it works fine.
lwdba@localhost (DB information_schema) :: create database test1;
Query OK, 1 row affected (0.02 sec)
lwdba#localhost (DB information_schema) :: use test1
Database changed
lwdba@localhost (DB test1) :: CREATE TABLE IF NOT EXISTS `test` (
`id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
`order` mediumint(8) NOT NULL,
`url` varchar(70) COLLATE utf8_unicode_ci NOT NULL,
`title` varchar(70) COLLATE utf8_unicode_ci NOT NULL,
`content` text COLLATE utf8_unicode_ci,
PRIMARY KEY (`id`),
UNIQUE KEY `url` (`url`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
Query OK, 0 rows affected (0.08 sec)
The CREATE TABLE command is what echoed 0 rows affected if that's your concern.
lwdba@localhost (DB test1) :: INSERT INTO `test`
(`id`, `order`, `url`,`title`, `content`) VALUES
(52338, 1, '', 'Home', 'content'),
(70104, 2, 'about', 'About', 'content'),
(27034, 3, 'portfolio', 'Portfolio', 'content');
Query OK, 3 rows affected (0.00 sec)
Records: 3 Duplicates: 0 Warnings: 0
lwdba@localhost (DB test1) :: select * from test;
+-------+-------+-----------+-----------+---------+
| id | order | url | title | content |
+-------+-------+-----------+-----------+---------+
| 52338 | 1 | | Home | content |
| 70104 | 2 | about | About | content |
| 27034 | 3 | portfolio | Portfolio | content |
+-------+-------+-----------+-----------+---------+
3 rows in set (0.00 sec)
My MySQL database returned 0 rows for this insert:
insert into studentfinace (Name, mother, class_id, level, fee, blance)
select studentfinace.Name, studentfinace.mother, class.Name, level.Name, level.fee, studentfinace.blance
from studentfinace
join class on class.ID = studentfinace.class_id
join level on level.level_id = class.level_id
where 1

MySQL: Which indexes to use for a simple range select?

I have a table with ~30 million rows (and growing!) and currently I have some problems with a simple range select.
The query looks like this:
SELECT SUM( CEIL( dlvSize / 100 ) ) as numItems
FROM log
WHERE timeLogged BETWEEN 1000000 AND 2000000
AND user = 'example'
It takes minutes to finish, and I think the solution lies in the indexes I'm using. Here is the result of EXPLAIN:
+----+-------------+-------+-------+---------------------------------+---------+---------+------+----------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+-------+---------------------------------+---------+---------+------+----------+-------------+
| 1 | SIMPLE | log | range | PRIMARY,timeLogged | PRIMARY | 4 | NULL | 11839754 | Using where |
+----+-------------+-------+-------+---------------------------------+---------+---------+------+----------+-------------+
My table structure is this one ( reduced to make it fit better on the problem ):
CREATE TABLE IF NOT EXISTS `log` (
`origDomain` varchar(64) NOT NULL default '0',
`timeLogged` int(11) NOT NULL default '0',
`orig` varchar(128) NOT NULL default '',
`rcpt` varchar(128) NOT NULL default '',
`dlvSize` varchar(255) default NULL,
`user` varchar(255) default NULL,
PRIMARY KEY (`timeLogged`,`orig`,`rcpt`),
KEY `timeLogged` (`timeLogged`),
KEY `orig` (`orig`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Any ideas on what I can do to optimize this query or the indexes on my table?
You may want to try adding a composite index on (user, timeLogged):
CREATE TABLE IF NOT EXISTS `log` (
...
KEY `user_timeLogged` (user, timeLogged),
...
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
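Since the table already exists, you would add the index with ALTER TABLE instead; expect the build to take a while on ~30 million rows:
ALTER TABLE log ADD INDEX user_timeLogged (user, timeLogged);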
Related Stack Overflow post:
Database: When should I use a composite index?
In addition to the suggestions made by the other answers, I note that you have a column user in the table which is a varchar(255). If this refers to a column in a table of users, then 1) it would most likely be far more efficient to add an integer ID column to that table, and use that as the primary key and as a referencing column in other tables; 2) you are using InnoDB, so why not take advantage of the foreign key capabilities it offers?
Consider that if you index by a varchar(n) column, it is treated like a char(n) in the index, so each row of your current primary key takes up 4 + 128 + 128 = 260 bytes in the index.
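A sketch of what that change could look like (the users table and the constraint name are hypothetical):
CREATE TABLE users (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    UNIQUE KEY (name)
) ENGINE=InnoDB;
-- log then stores a 4-byte id instead of repeating the name in every row
ALTER TABLE log
    ADD COLUMN user_id INT UNSIGNED,
    ADD CONSTRAINT fk_log_user FOREIGN KEY (user_id) REFERENCES users (id);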
Add an index on user.

Table sync and copy into other table

I have two tables, Table A and Table B. They are identical. Every 10 minutes I need to check whether any changes (new or updated rows) have happened in Table A and copy them into Table B, and also enter a record in Table C whenever I see a difference or something new.
I also need to log any new records that go from Table A into Table B and Table C.
I am planning to do a join and compare the records, but if I do that I might miss the new records. Is there any better way to do this kind of sync? It has to be done in SQL; I cannot use any other tools like SSIS.
Here's what I came up with in making some simple tables in SQL:
# create some sample tables and data
DROP TABLE alpha;
DROP TABLE beta;
DROP TABLE charlie;
CREATE TABLE `alpha` (
`id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
`data` VARCHAR(32) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=MYISAM DEFAULT CHARSET=latin1;
CREATE TABLE `beta` (
`id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
`data` VARCHAR(32) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=MYISAM DEFAULT CHARSET=latin1;
CREATE TABLE `charlie` (
`id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
`data` VARCHAR(32) DEFAULT NULL,
`type` VARCHAR(16) DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=MYISAM DEFAULT CHARSET=latin1;
INSERT INTO alpha (data) VALUES ("a"), ("b"), ("c"), ("d"), ("e");
INSERT INTO beta (data) VALUES ("a"), ("b"), ("c");
# note new records of A, log in C
INSERT INTO charlie (data, type)
(SELECT data, "NEW"
FROM alpha
WHERE id NOT IN
(SELECT id
FROM beta));
# insert new records of A into B
INSERT INTO beta (data)
(SELECT data
FROM alpha
WHERE id NOT IN
(SELECT id
FROM beta));
# make a change in alpha only
UPDATE alpha
SET data = "x"
WHERE data = "c";
# note changed records of A, log in C
INSERT INTO charlie (data, type)
(SELECT alpha.data, "CHANGE"
FROM alpha, beta
WHERE alpha.data != beta.data
AND alpha.id = beta.id);
# update changed records of A in B
UPDATE beta, alpha
SET beta.data = alpha.data
WHERE alpha.data != beta.data
AND alpha.id = beta.id;
You would of course have to expand this for the type of data, number of fields, etc. but this is a basic concept if it helps.
It's a pity that you can't use SSIS (not allowed?) because it's built for this kind of thing. Anyway, using pure SQL you should be able to do something like the following: if your tables have a created/updated timestamp column, then you could query Table B for the highest one and get all records from Table A with timestamps higher than that one.
If there's no timestamp to use, hopefully there's a PK like an int that can be used in the same way.
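A sketch of the timestamp approach (assuming both tables have a last_updated column; the column name is hypothetical):
SELECT a.*
FROM TableA a
WHERE a.last_updated > (SELECT COALESCE(MAX(b.last_updated), '1900-01-01') FROM TableB b);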
Hope that helps?
Valentino.
I would try using a trigger or transactional replication.
Hopefully you have a good unique key that is used in the tables. To get new records you can do the following:
SELECT * FROM tableA
WHERE NOT EXISTS (SELECT * FROM tableB WHERE tableA.pkey = tableB.pkey)
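For changed records, the same idea with a join that compares the non-key columns (a sketch; substitute your real key and columns):
SELECT a.*
FROM tableA a
JOIN tableB b ON a.pkey = b.pkey
WHERE a.some_column <> b.some_column;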