I am reading the post Improve INSERT-per-second performance of SQLite? to improve the performance of my SQLite.
One question is: If I need to perform the following queries:
INSERT INTO
INSERT INTO
...
INSERT INTO(more than 10000 times)
SELECT ...
SELECT
UPDATE ...
If I want to improve performance, should I put "BEGIN TRANSACTION" and "END TRANSACTION" at the very beginning and end of all the code, like this:
BEGIN TRANSACTION
INSERT INTO
INSERT INTO
...
INSERT INTO(more than 10000 times)
SELECT ...
SELECT
UPDATE ...
UPDATE ...
END TRANSACTION
Or should I wrap BEGIN/END TRANSACTION around just the insert operations?
BEGIN TRANSACTION
INSERT INTO
INSERT INTO
...
INSERT INTO(more than 10000 times)
END TRANSACTION
SELECT ...
SELECT
UPDATE ...
UPDATE ...
If the INSERTs are for the same table, with the same columns inserted, combining them into one multi-row INSERT will improve performance significantly. That's because each separate INSERT command involves a round trip to the database, which costs far more than the actual query time.
Based on the limits of the server (other processes logged in, etc.), I would cap the number of rows inserted per statement, for example 1000 rows at a time.
INSERT INTO table (col1, col2, col3,...) VALUES
{(v1, v2, v3,...), }X 1000;
is much faster than
{
INSERT INTO table (col1, col2, col3,...) VALUES
(v1, v2, v3,...);
}
X 1000
Hope that helps.
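As a rough illustration of the original question, here is a minimal sketch using Python's built-in sqlite3 module (the table and column names are made up for the example). Wrapping all the INSERTs in one explicit transaction means a single commit instead of one per statement; on a disk-backed database each autocommit also implies an fsync, so the gap there is much larger.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # autocommit mode: we manage transactions ourselves
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")

# One explicit transaction around all 10000 INSERTs: a single commit
# (one fsync on a disk-backed database) instead of one per row.
conn.execute("BEGIN TRANSACTION")
for i in range(10000):
    conn.execute("INSERT INTO t (id, val) VALUES (?, ?)", (i, "x"))
conn.execute("END TRANSACTION")  # END TRANSACTION is a synonym for COMMIT in SQLite

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 10000
```

Whether the SELECTs and UPDATEs sit inside the same transaction matters much less; the per-commit overhead on the 10000 INSERTs dominates.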
Related
I am using the query below to insert records into a table.
DELETE FROM Table1;
INSERT into Table1 (F1,f2,f3...) SELECT * FROM TABLE2 WHERE......
The problem is that the SELECT query takes some time to fetch because of its many conditions. Since the records are already deleted from Table1, live records may not be available for display on the client side while it waits for the SELECT result to be inserted into Table1.
I need to SELECT the records first, then DELETE from Table1, then insert the selected records into Table1. Can anyone help me please?
You can use a transaction like this, and also delete after the insert, but add criteria somewhere:
BEGIN TRY
BEGIN TRAN
WHILE (i < 0) -- while-loop condition as needed
BEGIN
DELETE FROM Table1 ;
INSERT into Table1 (F1,f2,f3...) SELECT * FROM TABLE2 WHERE......
END
COMMIT TRAN
END TRY
BEGIN CATCH
ROLLBACK TRAN
END CATCH
You can use a row-count variable:
DECLARE @row_count int
SELECT @row_count = count(*) FROM TABLE2 WHERE......
INSERT into Table1 (F1,f2,f3...) SELECT * FROM TABLE2 WHERE......
DELETE TOP(@row_count) FROM Table1;
First get the count of how many records you want to insert.
The data is inserted first;
if the insertion succeeds, the old data is deleted,
otherwise nothing is deleted.
So data is always available in the table to show the client.
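The insert-first, delete-afterwards idea can be sketched as follows, using SQLite via Python's sqlite3 rather than SQL Server, and hypothetical table names: remember which rows are the old ones, insert the new data, then delete only the old rows, all inside one transaction.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (f1 INTEGER);
CREATE TABLE table2 (f1 INTEGER);
INSERT INTO table1 VALUES (1), (2);            -- stale rows currently shown to clients
INSERT INTO table2 VALUES (10), (20), (30);    -- fresh rows to copy over
""")

# Record which rows are the old ones, insert the new data first, and only
# then delete the old rows -- all in one transaction, so table1 is never
# observed empty (readers see either the old set or the new set).
with conn:  # begins a transaction, commits on success, rolls back on error
    old_ids = [r[0] for r in conn.execute("SELECT rowid FROM table1")]
    conn.execute("INSERT INTO table1 SELECT f1 FROM table2 WHERE f1 > 0")
    conn.executemany("DELETE FROM table1 WHERE rowid = ?",
                     [(i,) for i in old_ids])

rows = [r[0] for r in conn.execute("SELECT f1 FROM table1 ORDER BY f1")]
print(rows)  # [10, 20, 30]
```

The same shape works in T-SQL with a keyed delete instead of `DELETE TOP(n)`, which removes arbitrary rows when no ORDER BY is involved.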
Trigger with Insert into (select * ...)
I'm trying this:
INSERT INTO T_USERS SELECT * FROM USERS WHERE ID = :new.ID;
It's not working. This works:
INSERT INTO T_USERS(ID) VALUES(:new.ID);
Trigger
create or replace trigger "TRI_USER"
AFTER
insert on "USER"
for each row
begin
INSERT INTO T_USER SELECT * FROM USER WHERE ID = :new.ID;
end;
So if it fits your case, try this:
INSERT INTO T_USER(ID) SELECT ID FROM USER WHERE ID = :new.ID;
If you want to select one or more rows from another table, you have to use this syntax:
insert into <table>(<col1>,<col2>,...,<coln>)
select <col1>,<col2>,...,<coln>
from ...;
Perhaps you could post the actual error you are experiencing?
Also, I suggest that you rethink your approach. Triggers that contain DML introduce all sorts of issues. Keep in mind that Oracle Database may need to restart a trigger, and could therefore execute your DML multiple times for a particular row.
Instead, put all your related DML statements together in a PL/SQL procedure and invoke that.
It's not about your trigger; it's the INSERT statement.
An INSERT statement works as follows:
INSERT INTO <TABLE>(COL1,COL2,COL3) VALUES (VAL1,VAL2,VAL3); --> if populating values one by one
INSERT INTO <TABLE>(COL1,COL2,COL3) --> if inserting multiple rows at a time
SELECT VAL1,VAL2,VAL3 FROM <TABLE2>;
The number of values must match the number of columns listed.
Hope this helps you understand.
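The column-count rule can be demonstrated with Python's sqlite3 module (the `users`/`t_users` tables here are lowercase stand-ins for the question's USERS/T_USERS):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users   (id INTEGER, name TEXT, role TEXT);
CREATE TABLE t_users (id INTEGER);
INSERT INTO users VALUES (1, 'alice', 'admin'), (2, 'bob', 'user');
""")

# The SELECT list must supply exactly as many values as the target has
# columns. t_users has one column, so select exactly one column:
conn.execute("INSERT INTO t_users (id) SELECT id FROM users WHERE id = 1")
inserted = conn.execute("SELECT id FROM t_users").fetchall()
print(inserted)  # [(1,)]

# "SELECT *" yields three columns here, which cannot fit t_users'
# single column, so the statement is rejected:
failed = False
try:
    conn.execute("INSERT INTO t_users SELECT * FROM users WHERE id = 2")
except sqlite3.OperationalError:
    failed = True
print(failed)  # True
```

The same rule applies in Oracle; the trigger merely surfaces the error.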
I am using SQL Server 2012
The query is:
drop table x
create table x(id int primary key)
insert into x values(5)
insert into x values(6)
begin tran
insert into x values(1),(2),(3),(3),(4)--Primary key violation
commit tran
select* from x
This returns
5
6
and another query
drop table x
create table x(id int primary key)
insert into x values(5)
insert into x values(6)
begin tran
insert into x values(1)
insert into x values(2)
insert into x values(3)
insert into x values(3) --Primary key violation
insert into x values (4)
commit tran
select * from x
This returns
1
2
3
4
5
6
So what is the difference in inserting values in SQL Server?
Between those 2 queries and why the different result sets?
Sample 1 has a single insert statement for 1, 2, 3, 3, 4. This is a multi-row insert (though SQL Server uses the term "bulk insert" for something different). Essentially it means all the inserts on this line are executed as one single action.
Sample 2 has separate insert statements. Since there is no exception handling in place, there is no reason for the transaction to abort: the error is ignored, the other records are added, and the result is what you see.
SQL Server executes the queries as batches. If an error occurs in the batch, according to MSDN, one of the following is possible:
No statements in the batch are executed.
No statements in the batch are executed and the transaction is rolled back.
All of the statements before the error statement are executed.
All of the statements except the error statement are executed.
In your first case, "No statements in the batch are executed." In your second case, "All of the statements except the error statement are executed."
For more about SQL batches, please refer to the following MSDN articles:
Batches of SQL Statements
Executing Batches
Errors and Batches
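The statement-level atomicity in the first case is not unique to SQL Server. The same contrast can be reproduced in SQLite via Python's sqlite3 module: a single multi-row INSERT either fully applies or fully backs out, while separate statements fail individually.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # autocommit, one implicit transaction per statement

# Case 1: one multi-row INSERT. The statement is atomic: the duplicate
# key aborts the whole statement, so none of 1, 2, 3, 4 survive.
conn.execute("CREATE TABLE x (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO x VALUES (5)")
conn.execute("INSERT INTO x VALUES (6)")
try:
    conn.execute("INSERT INTO x VALUES (1), (2), (3), (3), (4)")
except sqlite3.IntegrityError:
    pass  # primary key violation on the second 3
case1 = [r[0] for r in conn.execute("SELECT id FROM x ORDER BY id")]
print(case1)  # [5, 6]

# Case 2: separate INSERT statements. Only the failing one is rejected;
# the others are applied individually.
conn.execute("DROP TABLE x")
conn.execute("CREATE TABLE x (id INTEGER PRIMARY KEY)")
for v in (5, 6, 1, 2, 3, 3, 4):
    try:
        conn.execute("INSERT INTO x VALUES (?)", (v,))
    except sqlite3.IntegrityError:
        pass
case2 = [r[0] for r in conn.execute("SELECT id FROM x ORDER BY id")]
print(case2)  # [1, 2, 3, 4, 5, 6]
```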
I have been trying to create triggers to lessen the client-side code that needs to be written. I have written the following two T-SQL triggers and they both seem to produce the same results; I'm just wondering which one is the more proper way to do it. I'm using SQL Server 2012, if that makes any difference.
i.e.
which one uses less resources
executes faster
is more secure against attacks
etc...
CREATE TRIGGER tr_ProductGroup_INSERT_GroupMap
ON [qmgmt].[dbo].[ProductGroup]
After INSERT
AS
BEGIN
if (
select count([inserted].[groupID])
from [inserted]
where [inserted].[groupID] = 1
) = 0
begin
insert into [qmgmt].[dbo].[GroupMap]([parentGroupID], [childGroupID])
select 1, [inserted].[groupID]
from [inserted]
end
END
GO
OR
CREATE TRIGGER tr_ProductGroup_INSERT_GroupMap
ON [qmgmt].[dbo].[ProductGroup]
After INSERT
AS
BEGIN
insert into [qmgmt].[dbo].[GroupMap]([parentGroupID], [childGroupID])
select 1, [inserted].[groupID]
from [inserted]
where [inserted].[groupID] in
(
select [inserted].[groupID]
from [inserted]
where [inserted].[groupID] <> 1
)
END
GO
UPDATE:
Based on some of the comments here are the inserts I am using. The GroupMap table has the same results no matter which trigger I use.
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('root', 'The root of all groups')
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('orphans', 'This is where the members of deleted groups go')
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('SMGMT', 'Support Management')
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('ST1', 'Support Tier 1')
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('ST2', ' Support Tier 2')
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('ST3', 'Support Tier 3')
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('SaaSMGMT', 'Express Management')
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription]) values ('SaaSSup', 'Support Express')
Since a comment is a bit too small for this example, I'm putting it in an answer. The reason people say your triggers are functionally different is that, although your tests insert row by row, a trigger also fires just once when you insert multiple rows into the table in one single operation. Based on your examples, you could try the following:
insert into [qmgmt].[dbo].[ProductGroup]([groupName], [groupDescription])
SELECT 'root', 'The root of all groups'
UNION ALL
SELECT 'orphans', 'This is where the members of deleted groups go'
UNION ALL
SELECT 'SMGMT', 'Support Management'
When running this query, the inserted table will hold 3 rows, and (depending on the data) the two trigger code examples might give different results.
Don't worry, this is a common misconception. The rule of thumb with SQL is to always think in record-sets, never in 'a single record with fields'.
As for your question (yes, I'm going for a real answer =)
I would suggest a variation on the second one.
CREATE TRIGGER tr_ProductGroup_INSERT_GroupMap
ON [qmgmt].[dbo].[ProductGroup]
After INSERT
AS
BEGIN
insert into [qmgmt].[dbo].[GroupMap]([parentGroupID], [childGroupID])
select 1, [inserted].[groupID]
from [inserted]
where [inserted].[groupID] <> 1
END
This way the server only needs to run over inserted once, decide which records to 'keep' and then store them right-away into the destination table.
The question now is, if this does what you want it to do...
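For comparison, here is a sketch of the same idea in SQLite via Python's sqlite3 (table and column names follow the question, but note that SQLite triggers fire per row, so the set-based filter over `inserted` becomes a WHEN clause on NEW):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ProductGroup (groupID INTEGER PRIMARY KEY, groupName TEXT);
CREATE TABLE GroupMap (parentGroupID INTEGER, childGroupID INTEGER);

-- SQLite triggers are per-row, so "every inserted group except group 1"
-- is expressed as a WHEN clause instead of a WHERE over inserted:
CREATE TRIGGER tr_ProductGroup_INSERT_GroupMap
AFTER INSERT ON ProductGroup
WHEN NEW.groupID <> 1
BEGIN
    INSERT INTO GroupMap (parentGroupID, childGroupID) VALUES (1, NEW.groupID);
END;

-- A single multi-row insert; groupIDs are assigned 1, 2, 3.
INSERT INTO ProductGroup (groupName) VALUES ('root'), ('orphans'), ('SMGMT');
""")

rows = conn.execute(
    "SELECT parentGroupID, childGroupID FROM GroupMap ORDER BY childGroupID"
).fetchall()
print(rows)  # [(1, 2), (1, 3)]
```

Group 1 ('root') is skipped by the WHEN clause; the other two rows are mapped under parent 1.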
I'm trying to write a trigger for SQLite and just running into all kinds of problems. In truth, I think my real problem is my poor background in the SQL language. Anyway, here goes…
I have two tables, Table1 and Table2. Table1 has a column named time (a 64-bit integer time). I need a trigger that watches for a new row being inserted into Table1. If there are 3 or more rows in Table1 that have time greater than X (a hard-coded value, 120 seconds in the example below), I need to insert a new row into Table2.
Here is what I have so far (note this does not work)
CREATE TRIGGER testtrigger AFTER
INSERT ON Table1 WHEN
(
SELECT COUNT() AS tCount FROM
(
SELECT * FROM Table1 WHERE
time > (NEW.time - 120)
) WHERE tCount > 3
)
BEGIN
INSERT INTO Table2 (time, data) VALUES
(NEW.time, 'data1');
END
Any kind souls out there who are better in SQL than I?
This works because the WHEN clause needs an expression:
sqlite> .schema Table1
CREATE TABLE Table1 (time int);
CREATE TRIGGER testtrigger AFTER INSERT ON Table1
WHEN 3<(SELECT Count() FROM Table1 WHERE time>(NEW.time-120))
BEGIN
INSERT INTO Table2 (time, data) VALUES (NEW.time,'data1');
END;
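The corrected trigger can be exercised end-to-end with Python's sqlite3 module (the inserted times are arbitrary test values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (time INTEGER);
CREATE TABLE Table2 (time INTEGER, data TEXT);

CREATE TRIGGER testtrigger AFTER INSERT ON Table1
WHEN 3 < (SELECT COUNT(*) FROM Table1 WHERE time > (NEW.time - 120))
BEGIN
    INSERT INTO Table2 (time, data) VALUES (NEW.time, 'data1');
END;
""")

# The first three inserts leave at most 3 rows inside the 120-second
# window, so the WHEN clause is false; the fourth makes the count 4.
for t in (100, 110, 120, 130):
    conn.execute("INSERT INTO Table1 (time) VALUES (?)", (t,))
conn.commit()

rows = conn.execute("SELECT time, data FROM Table2").fetchall()
print(rows)  # [(130, 'data1')]
```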
Have you looked at this reference page? From what I can tell, this is a "misuse of aggregate" error, which probably stems from the statement in the WHEN section. You had this:
sqlite> .tables
Table1 Table2
sqlite> .schema Table1
CREATE TABLE Table1 (time int);
CREATE TRIGGER testtrigger AFTER
INSERT ON Table1 WHEN
(
SELECT COUNT() AS tCount FROM
(
SELECT * FROM Table1 WHERE
time > (NEW.time - 120)
) WHERE tCount > 3
)
BEGIN
INSERT INTO Table2 (time, data) VALUES
(NEW.time, 'data1');
END;
sqlite> .schema Table2
CREATE TABLE Table2 (time int,data string);
sqlite> insert into Table1 VALUES (5);
SQL error: misuse of aggregate:
sqlite>
I tried deleting "WHERE tCount" to turn it into an expression, but then I got a syntax error at the operator.
So instead I switched things around for the solution above.
Your WHEN clause in the trigger should be a comparison expression which returns true or false, instead of returning a number. Try dlamblin's idea.
Maybe a different syntactical approach?
CREATE TRIGGER testtrigger ON Table1
FOR INSERT
AS
BEGIN
DECLARE @timeNum int
SET @timeNum = (SELECT count(*) FROM Table1 WHERE time > (New.time - 120))
IF @timeNum > 3
BEGIN
INSERT INTO Table2 (time, data) VALUES
(NEW.time, 'data1');
END
END
But also, try some debugging statements. When I was debugging my last trigger for a web service, I put INSERT statements into a debugging table I set up. You could output @timeNum every time the trigger is called, and put another debug INSERT inside the branch to see whether you actually reach your Table2 INSERT logic.
UPDATE:
Sorry! Looks like SQLite is more limited than I realized; I did not know it lacked some of this syntax. Nonetheless, if you are not getting any answers, consider some debugging statements to make sure your code paths are being called under the right conditions.