When I ran desc features:
+-------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| name | varchar(128) | YES | UNI | NULL | |
+-------+--------------+------+-----+---------+----------------+
I have a table like the one below:
+-----------+----------------------------------------------------------+
| id | name |
+-----------+----------------------------------------------------------+
| 105314659 | latitude |
| 105314658 | final |
+-----------+----------------------------------------------------------+
When I ran INSERT INTO features (name) VALUES ('test'), it did not give the next auto-increment primary key:
+-----------+----------------------------------------------------------+
| id | name |
+-----------+----------------------------------------------------------+
| 109728684 | test |
| 105314659 | latitude |
| 105314658 | final |
+-----------+----------------------------------------------------------+
After a few more inserts (test2 through test5), the table looked like this:
+-----------+----------------------------------------------------------+
| id | name |
+-----------+----------------------------------------------------------+
| 109728690 | test5 |
| 109728688 | test4 |
| 109728687 | test3 |
| 109728686 | test2 |
| 109728684 | test |
| 105314659 | latitude |
| 105314658 | final |
+-----------+----------------------------------------------------------+
Does anyone have any thoughts? Thanks!
*** BTW, I constantly delete from and insert into this table. But when I ran a single insert, it did not produce a sequential auto-increment id. ***
This issue comes from
INSERT INTO nlp_request_feature_types (name) VALUES ('test4');
which causes an exc.IntegrityError:
ERROR 1062 (23000): Duplicate entry 'test4' for key 'features.ix_features'
BUT the id 109728689 is still allocated.
So when INSERT INTO nlp_request_feature_types (name) VALUES ('test5'); is run next, it is saved into the table successfully, with primary key id = 109728690.
As such, it gives the illusion that the auto-increment primary key does not work. However, it does work; our ERROR 1062 mistake simply consumed one id.
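A minimal sketch that reproduces the gap (hypothetical table; InnoDB's default behavior of not reusing auto-increment values allocated to failed inserts is assumed):
CREATE TABLE features (
  id   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(128) UNIQUE
);
INSERT INTO features (name) VALUES ('test4');  -- succeeds, id = 1
INSERT INTO features (name) VALUES ('test4');  -- ERROR 1062, but id 2 is consumed
INSERT INTO features (name) VALUES ('test5');  -- succeeds, id = 3 (gap at id 2)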
I have SQL, for example:
show tables from mydb;
It shows the list of tables:
|table1|
|table2|
|table3|
Then I use a SQL statement for each table,
such as "show full columns from table1;"
+----------+--------+-----------+------+-----+---------+----------------+---------------------------------+---------+
| Field | Type | Collation | Null | Key | Default | Extra | Privileges | Comment |
+----------+--------+-----------+------+-----+---------+----------------+---------------------------------+---------+
| id | bigint | NULL | NO | PRI | NULL | auto_increment | select,insert,update,references | |
| user_id | bigint | NULL | NO | MUL | NULL | | select,insert,update,references | |
| group_id | int | NULL | NO | MUL | NULL | | select,insert,update,references | |
+----------+--------+-----------+------+-----+---------+----------------+---------------------------------+---------+
So in this case I can use a programming language, such as the following (this is not correct code, just showing the flow):
tables = cmd.execute("show tables from mydb;")
for t in tables:
    cmd.execute(f"show full columns from {t};")
However, is it possible to do this in SQL only?
If you are using MySQL, you can use the system view INFORMATION_SCHEMA.COLUMNS.
It contains the table name and column name (and other details). No loop is required, and you can easily filter by other information, too.
SELECT *
FROM INFORMATION_SCHEMA.COLUMNS
If you are using Microsoft SQL Server, you can use the above command as well.
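For example, to get what the loop above produces in one query (assuming the schema is named mydb as in the question; the COLUMN_KEY and EXTRA columns are MySQL-specific):
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, COLUMN_KEY, EXTRA
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'mydb'
ORDER BY TABLE_NAME, ORDINAL_POSITION;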
I am getting a BadSQLGrammarException, but I cannot see where the error is in the following SQL statement:
insert into comment(comment_date, name, comment) values '4/27/2013', 'Frank', 'Test';
Here is the description of my table. (Yes, I am converting a date into a String. There are probably better ways to do it with SQL, but that's not the question here.)
+--------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+--------------+--------------+------+-----+---------+-------+
| id | int(11) | YES | | NULL | |
| comment_date | varchar(10) | YES | | NULL | |
| name | varchar(200) | YES | | NULL | |
| comment | varchar(200) | YES | | NULL | |
+--------------+--------------+------+-----+---------+-------+
INSERT INTO table (fieldlist) VALUES (valueslist)
                  ^--                ^--
You forgot the indicated parentheses.
Try using parentheses around the values, like this:
insert into comment(comment_date, name, comment) values ('4/27/2013', 'Frank', 'Test');
I've read some information about the ugly side of just setting a deleted_at field in your tables to signify a row has been deleted.
Namely
http://richarddingwall.name/2009/11/20/the-trouble-with-soft-delete/
Are there any potential problems with taking a row from a table you want to delete and pivoting it into some EAV tables?
For instance:
Let's say I have two tables, deleted and deleted_rows, described as follows.
mysql> describe deleted;
+------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| tablename | varchar(255) | YES | | NULL | |
| deleted_at | timestamp | YES | | NULL | |
+------------+--------------+------+-----+---------+----------------+
mysql> describe deleted_rows;
+--------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+--------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| entity | int(11) | YES | MUL | NULL | |
| name | varchar(255) | YES | | NULL | |
| value | blob | YES | | NULL | |
+--------+--------------+------+-----+---------+----------------+
Now, when you want to delete a row from any table, you delete it from that table and then insert it into these two tables, like so:
deleted
+----+-----------+---------------------+
| id | tablename | deleted_at |
+----+-----------+---------------------+
| 1 | products | 2011-03-23 00:00:00 |
+----+-----------+---------------------+
deleted_rows
+----+--------+-------------+-------------------------------+
| id | entity | name | value |
+----+--------+-------------+-------------------------------+
| 1 | 1 | Title | A Great Product |
| 2 | 1 | Price | 55.00 |
| 3 | 1 | Description | You guessed it... it's great. |
+----+--------+-------------+-------------------------------+
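In raw SQL, archiving one products row might look roughly like this (the id 1 and the column names come from the example data above; the per-column pivot is what would normally live in application code):
START TRANSACTION;
-- Record which table the row came from and when it was deleted
INSERT INTO deleted (tablename, deleted_at) VALUES ('products', NOW());
SET @entity = LAST_INSERT_ID();
-- Pivot each column of the doomed row into name/value pairs
INSERT INTO deleted_rows (entity, name, value)
SELECT @entity, 'Title', Title FROM products WHERE id = 1
UNION ALL
SELECT @entity, 'Price', Price FROM products WHERE id = 1
UNION ALL
SELECT @entity, 'Description', Description FROM products WHERE id = 1;
DELETE FROM products WHERE id = 1;
COMMIT;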
A few things I see off the bat:
- You'll need to use application logic to do the pivot (Ruby, PHP, Python, etc.)
- The table could grow pretty big because I'm using a blob to handle the unknown size of the row value
Do you see any other glaring problems with this type of soft delete?
Why not mirror your tables with archive tables?
create table mytable(
col_1 int
,col_2 varchar(100)
,col_3 date
,primary key(col_1)
);
create table mytable_deleted(
delete_id int not null auto_increment
,delete_dtm datetime not null
-- All of the original columns
,col_1 int
,col_2 varchar(100)
,col_3 date
,index(col_1)
,primary key(delete_id)
);
Then simply add on-delete triggers to your tables that insert the current row into the mirrored table before the deletion. That would give you a dead-simple and very performant solution.
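A minimal sketch of such a trigger, using the mytable / mytable_deleted pair above:
DELIMITER $$
CREATE TRIGGER mytable_before_delete
BEFORE DELETE ON mytable
FOR EACH ROW
BEGIN
  -- Copy the row being deleted into the archive, stamped with the deletion time
  INSERT INTO mytable_deleted (delete_dtm, col_1, col_2, col_3)
  VALUES (NOW(), OLD.col_1, OLD.col_2, OLD.col_3);
END$$
DELIMITER ;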
You could actually generate the tables and trigger code using the data dictionary.
Note that you might not want a unique index on the original primary key (col_1) in the archive table, because you may actually end up deleting the same row twice over time if you are using natural keys. Unless you plan to hook the archive tables up in your application (for undo purposes), you can drop the index entirely. Also, I added the time of delete (delete_dtm) and a surrogate key that can be used to delete the deleted (hehe) rows.
You may also consider range partitioning the archive table on delete_dtm. This makes it pretty much effortless to purge data from the tables.
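A sketch of what that could look like (note: MySQL requires the partitioning column to be part of every unique key, so the archive table's primary key would have to become (delete_id, delete_dtm) for this to work):
CREATE TABLE mytable_deleted (
  delete_id  INT NOT NULL AUTO_INCREMENT,
  delete_dtm DATETIME NOT NULL,
  col_1 INT,
  col_2 VARCHAR(100),
  col_3 DATE,
  INDEX (col_1),
  PRIMARY KEY (delete_id, delete_dtm)
)
PARTITION BY RANGE (TO_DAYS(delete_dtm)) (
  PARTITION p2011q1 VALUES LESS THAN (TO_DAYS('2011-04-01')),
  PARTITION p2011q2 VALUES LESS THAN (TO_DAYS('2011-07-01')),
  PARTITION pmax    VALUES LESS THAN MAXVALUE
);
-- Purging a whole quarter is then a cheap metadata operation:
-- ALTER TABLE mytable_deleted DROP PARTITION p2011q1;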
I am trying to build a new table containing the values from an existing table that are NOT contained in another table (though obviously the query below checks for contained). Here is my table structure:
mysql> explain t1;
+-----------+---------------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-----------+---------------------+------+-----+---------+-------+
| id | int(11) | YES | | NULL | |
| point | bigint(20) unsigned | NO | MUL | 0 | |
+-----------+---------------------+------+-----+---------+-------+
mysql> explain whitelist;
+-------------+---------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+---------------------+------+-----+---------+----------------+
| id | bigint(20) unsigned | NO | PRI | NULL | auto_increment |
| x | bigint(20) unsigned | YES | | NULL | |
| y | bigint(20) unsigned | YES | | NULL | |
| geonetwork | linestring | NO | MUL | NULL | |
+-------------+---------------------+------+-----+---------+----------------+
My query looks like this:
SELECT point
FROM t1
WHERE EXISTS(SELECT source
FROM whitelist
WHERE MBRContains(geonetwork, GeomFromText(CONCAT('POINT(', t1.point, ' 0)'))));
Explain:
+----+--------------------+--------------------+-------+-------------------+-----------+---------+------+------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+--------------------+--------------------+-------+-------------------+-----------+---------+------+------+--------------------------+
| 1 | PRIMARY | t1 | index | NULL | point | 8 | NULL | 1001 | Using where; Using index |
| 2 | DEPENDENT SUBQUERY | whitelist | ALL | _geonetwork | NULL | NULL | NULL | 3257 | Using where |
+----+--------------------+--------------------+-------+-------------------+-----------+---------+------+------+--------------------------+
The query takes 6 seconds to execute for 1000 records in t1, which is unacceptable for me. How can I rewrite this query using joins (or perhaps a faster approach, if one exists) given that I don't have a column to join on? Even a stored procedure is acceptable, I guess, in the worst case. My goal is to finally create a new table containing the entries from t1. Any suggestions?
Unless the query optimizer is failing, a WHERE EXISTS construct should result in the same plan as a join with a GROUP clause. Look at optimizing MBRContains(geonetwork, GeomFromText(CONCAT('POINT(', t1.point, ' 0)'))); that's probably where your query is spending all its time. I don't have a suggestion for that, but here's your query written with a JOIN:
Select t1.point
from t1
join whitelist on MBRContains(whitelist.geonetwork, GeomFromText(CONCAT('POINT(', t1.point, ' 0)')))
group by t1.point
;
or to get the points in t1 not in whitelist:
Select t1.point
from t1
left join whitelist on MBRContains(whitelist.geonetwork, GeomFromText(CONCAT('POINT(', t1.point, ' 0)')))
where whitelist.id is null
;
This seems like a case where de-normalizing t1 might be beneficial. Adding a GeomFrmTxt column with a value of GeomFromText(CONCAT('POINT(', t1.point, ' 0)')) could speed up the query you already have.
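A sketch of that denormalization (the geom column name is my own; GeomFromText is ST_GeomFromText in MySQL 8+):
ALTER TABLE t1 ADD COLUMN geom POINT;
UPDATE t1 SET geom = GeomFromText(CONCAT('POINT(', point, ' 0)'));
-- The join condition then becomes MBRContains(whitelist.geonetwork, t1.geom),
-- avoiding the per-row string building and parsing.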
I have two tables that look like this
Train
+----------+-------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+----------+-------------+------+-----+---------+-------+
| TrainID | varchar(11) | NO | PRI | NULL | |
| Capacity | int(11) | NO | | 50 | |
+----------+-------------+------+-----+---------+-------+
Reservations
+---------------+-------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------------+-------------+------+-----+---------+----------------+
| ReservationID | int(11) | NO | PRI | NULL | auto_increment |
| FirstName | varchar(30) | NO | | NULL | |
| LastName | varchar(30) | NO | | NULL | |
| DDate | date | NO | | NULL | |
| NoSeats | int(2) | NO | | NULL | |
| Route | varchar(11) | NO | | NULL | |
| Train | varchar(11) | NO | | NULL | |
+---------------+-------------+------+-----+---------+----------------+
Currently, I'm trying to create a query that will increment the capacity on a Train when a reservation is cancelled. I know I have to perform a join, but I'm not sure how to do it in an UPDATE statement. For example, I know how to get the capacity of a Train given a certain ReservationID, like so:
select Capacity
from Train
Join Reservations on Train.TrainID = Reservations.Train
where ReservationID = "15";
But I'd like to construct the query that does this:
Increment Train.Capacity by Reservations.NoSeats, given a ReservationID
If possible, I'd also like to know how to increment by an arbitrary number of seats. As an aside, I'm planning on deleting the reservation after I perform the increment in a Java transaction. Will the delete affect the transaction?
Thanks for the help!
MySQL supports a multi-table UPDATE syntax, which would look approximately like this:
UPDATE Reservations r JOIN Train t ON (r.Train = t.TrainID)
SET t.Capacity = t.Capacity + r.NoSeats
WHERE r.ReservationID = ?;
You can update the Train table and delete from the Reservations table in the same transaction. As long as you do the update first and then do the delete second, it should work.
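For example, the whole cancellation might look like this (using ReservationID 15 from the question):
START TRANSACTION;
UPDATE Reservations r JOIN Train t ON (r.Train = t.TrainID)
SET t.Capacity = t.Capacity + r.NoSeats
WHERE r.ReservationID = 15;
DELETE FROM Reservations
WHERE ReservationID = 15;
COMMIT;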
Here is another example of an UPDATE statement that uses joins to determine the value being updated. In this case, I want to update transactions.payee_id with the related account's payment id if the payee_id is zero (i.e., it wasn't assigned).
UPDATE transactions t
JOIN account a ON a.id = t.account_id
JOIN account ap ON ap.id = a.pmt_act_id  -- second join ensures the payment account exists
SET t.payee_id = a.pmt_act_id
WHERE t.payee_id = 0;