Overwrite values in database by a unique column - sql-server-2005

I have a DataTable coming from code that contains a set of columns (ID, the primary key, and Name, a varchar). My SQL table contains the same kind of data (ID primary key, Name varchar) and has 200,000+ rows. I want to check whether an ID in the SQL table matches an ID in the DataTable (i.e. the DataTable's ID column value matches the SQL table's ID field). If the two IDs are the same, the record in the SQL table should be overwritten.

My assumption is that the DataTable rows all have a state of "Added", so each will result in the InsertCommand being executed.
Option 1
- create a stored proc that checks whether a record with the ID already exists first. If it exists, do an UPDATE, else do an INSERT. Assign this sproc as the InsertCommand
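A minimal sketch of such a proc, assuming a hypothetical table dbo.Items with columns ID (int, primary key) and Name (varchar); rename everything to match your actual schema and assign it as the InsertCommand with @ID and @Name parameters:

CREATE PROCEDURE dbo.UpsertItem
    @ID   INT,
    @Name VARCHAR(100)
AS
BEGIN
    SET NOCOUNT ON;
    -- Check-then-write upsert: UPDATE if the key exists, otherwise INSERT
    IF EXISTS (SELECT 1 FROM dbo.Items WHERE ID = @ID)
        UPDATE dbo.Items SET Name = @Name WHERE ID = @ID;
    ELSE
        INSERT INTO dbo.Items (ID, Name) VALUES (@ID, @Name);
END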
Option 2 (assumes SQL 2005+)
- create a stored proc that tries an INSERT within a TRY block. In the CATCH block, if the error is a PK constraint error (ERROR_NUMBER() = 2627), then it means a record with that ID already exists, so do an UPDATE on it. Assign this sproc as the InsertCommand
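A hedged sketch of that insert-first variant, again using the hypothetical dbo.Items table from the sketch above:

CREATE PROCEDURE dbo.InsertOrUpdateItem
    @ID   INT,
    @Name VARCHAR(100)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        INSERT INTO dbo.Items (ID, Name) VALUES (@ID, @Name);
    END TRY
    BEGIN CATCH
        -- 2627 = violation of a PRIMARY KEY constraint: the row already exists
        IF ERROR_NUMBER() = 2627
            UPDATE dbo.Items SET Name = @Name WHERE ID = @ID;
        ELSE
        BEGIN
            -- Re-raise anything unexpected (SQL 2005 has no THROW)
            DECLARE @msg NVARCHAR(2048);
            SET @msg = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);
        END
    END CATCH
END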
Option 3
- load all the data in the datatable in to a new table (use the SqlBulkCopy class for this). Then UPDATE records in the real table from this table where the ID already exists. Then INSERT records into the real table where they don't already exist.
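A sketch of the set-based step, assuming the bulk-copied rows landed in a staging table dbo.Items_Staging with the same ID/Name shape (the staging table name is an assumption):

-- Overwrite existing rows from the staged data
UPDATE t
SET    t.Name = s.Name
FROM   dbo.Items AS t
       INNER JOIN dbo.Items_Staging AS s ON s.ID = t.ID;

-- Then add the rows that are genuinely new
INSERT INTO dbo.Items (ID, Name)
SELECT s.ID, s.Name
FROM   dbo.Items_Staging AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Items AS t WHERE t.ID = s.ID);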
Option 1 has the overhead of checking each time whether a record exists before doing anything and this could be relatively expensive over the whole activity.
Option 2 is better when you know that most of the records will be new most of the time (i.e. if there are quite a few updates, you will take the hit of the PK errors).
Option 3 can work and perform really well. The SqlBulkCopy class is a fast way to bulk load data into the database.


PostgreSQL, add row to table when a row is created in another table

I am trying to create a trigger function that creates a new row in one table when a value is created or modified in another table. The problem is that I need to insert into the other table the primary key of the row that fired the trigger function.
Is there a way to do it?
Basically, when an insert or update is done on table 1, I want to see a new row in table 2, with one field filled with the value of the primary key of the row in table 1 that fired the trigger.
BEGIN
    INSERT INTO resultados_infocorp(id_user, Procesado)
    VALUES (<PRIMARY_KEY>, false);
    RETURN NEW;
END;
This is because if Procesado is false, then thanks to the id_user I will make some validations, but the ID of the user is necessary and I can't do it from the backend of my project, because I have many DB inputs.
P.S.: The primary key of the new table is a sequence, which is why I am not passing it as an argument.
CREATE TRIGGER resultados_infocorp_actualizar
AFTER INSERT OR UPDATE OF id_user, fb_id, numdocumento, numtelefono, tipolicencia, trabajoaplicativo
ON public.usuarios
FOR EACH ROW
EXECUTE PROCEDURE public.update_solicitudes_infocorp();
If you want the PK value of the row that fired the trigger, then use something like:
INSERT INTO resultados_infocorp(id_user, Procesado)
VALUES (NEW.pk_fld, false);
Where pk_fld is the name of your PK field. Take a look here:
https://www.postgresql.org/docs/current/plpgsql-trigger.html
for what is available to a trigger function. For the purpose of this question the important part is:
NEW
Data type RECORD; variable holding the new database row for INSERT/UPDATE operations in row-level triggers. This variable is null in statement-level triggers and for DELETE operations.
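Putting those pieces together, a minimal sketch of the whole trigger function might look like this (assuming the primary key column of public.usuarios is id_user; substitute your actual PK column):

CREATE OR REPLACE FUNCTION public.update_solicitudes_infocorp()
RETURNS trigger AS
$$
BEGIN
    -- NEW holds the usuarios row that fired the trigger
    INSERT INTO resultados_infocorp (id_user, Procesado)
    VALUES (NEW.id_user, false);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;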

primary key of newly added row in sql table, vb.net

I have a VB.NET application where a DataGridView is data-bound to a SQL table at design time (with a table adapter and dataset). All I want is to get the newly added row's primary key (int) when I add a new row. I have searched, but what comes up are SQL commands like the following:
INSERT INTO #Customers
(Name)
VALUES ('Customer 1')
SELECT SCOPE_IDENTITY()
so using SCOPE_IDENTITY(). But is there any way I can do it with the adapter (or dataset) in VB.NET, or can the same command be issued through the adapter or similar?
Thanks in advance!
try this!
SCOPE_IDENTITY() will give you the last identity value inserted into any table directly within the current scope (scope = batch, stored procedure, etc. but not within, say, a trigger that was fired by the current scope)
Use the OUTPUT clause if you are inserting multiple rows and need to retrieve the set of IDs that were generated.
INSERT #a(x) OUTPUT inserted.identity_column VALUES('b'),('c');
-- result will be:
1
2
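To get the key back through the table adapter itself, one hedged approach is the typed-dataset designer's "Refresh the data table" advanced option, which appends a SELECT based on SCOPE_IDENTITY() to the generated InsertCommand, so the adapter writes the new key back into the DataRow after Update(). The command text ends up looking roughly like this (the Customers table and its columns are assumptions):

INSERT INTO Customers (Name) VALUES (@Name);
SELECT Id, Name FROM Customers WHERE (Id = SCOPE_IDENTITY());

After calling Update() on the adapter you can then read the generated key straight from the DataRow.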

SQLiteException: database disk image is malformed

I have a weird error with an SQLite database: you can download it here.
Every time I try to insert something into the table "CurrencyTransactions", it fails because a new column called 7 appeared for no reason.
I tried to drop the table, but that did not work either.
I ran PRAGMA integrity_check, but then I get this error.
Then I tried to export a .sql file and to import it again into a fresh new database, but:
1) If I import the structure only, it works fine and I don't have the 7 column anymore.
2) If I import the entries as well, it fails with this error:
It means something like: "Error in process #74: not an error".
To finish, I also tried this solution but the new database created is empty.
What can I do? I really need to save the entries.
What I suggest is in DB Browser:
1) File/Export/Database to SQL file.
2) Select All (for all tables).
3) Other options up to you, other than Export Everything.
4) Save the file.
5) Close the database.
6) Open a new database, e.g. nadekobotfix.db (could be the same name but a different location).
Note 1-6 takes a minute or so (just under 60k).
7) Do the hard work according to :-
You may need to remove/ignore the first and last lines (BEGIN TRANSACTION; and the subsequent COMMIT;)
You would probably not be able to run the generated SQL directly due to constraints (I tried this and failed because of the constraints).
You need to copy sections from the file and run according to the hierarchy as imposed by the constraints (foreign keys). If you have CHECK constraints these may need to be considered. (no Triggers to worry about).
Running SELECT * FROM sqlite_master WHERE type = 'table' AND instr(sql,'CHECK'); returns nothing so there are no CHECK constraints.
Indexes could/should be left till last (as they are in the generated SQL).
A section would consist of a table's create statement along with the insert statements.
You may wish to create a spreadsheet of the tables(sections) marking them off when they have been done.
The following query could assist, as those flagged NA could be done first:
SELECT CASE WHEN instr(sql,'FOREIGN KEY') THEN 'FK' ELSE 'NA' END AS fkey, name,sql
FROM sqlite_master
WHERE type = 'table' AND name NOT LIKE 'sqlite%' ORDER BY instr(sql,'FOREIGN KEY')
Alternatively, you could export individual tables from DB Browser for SQLite, marking them off when done.
You may wish to do an integrity_check at regular intervals.
If this works (you might have to make adjustments to the SQL) then you can rename the old db and then rename the new (or move the old and the copy the new if using the same database name).
Note you may still have to determine how the corruption occurred.
You may wish to backup the database regularly.
You may wish to have a look at How To Corrupt An SQLite Database File
You may wish to heed :-
With few exceptions, analysis of a corrupt database does not normally
help to determine what went wrong. A better approach to avoiding
"danger", we have found, is to read and understand
https://www.sqlite.org/howtocorrupt.html
* in database main *
Page 10628: btreeInitPage() returns error code 11
This indicates that the page header is so badly corrupted that SQLite
cannot interpret this page at all. One possible reason: page 10628
has been zeroed. Can you look at a hex dump of that page? (Remember
that SQLite numbers pages beginning with 1, so the start of the page
is pgsz*10627 where pgsz is the page size.)
-- D. Richard Hipp
“btreeInitPage() returns error code 11”
Sample adjustment required
The Reminders table has a column called When; this is an SQL keyword (an inadvisable column name IMO), so the generated SQL for the INSERT doesn't wrap the column name and you will get an error.
i.e. :-
CREATE TABLE IF NOT EXISTS `Reminders` (
`Id` INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
`ChannelId` INTEGER NOT NULL,
`IsPrivate` INTEGER NOT NULL,
`Message` TEXT,
`ServerId` INTEGER NOT NULL,
`UserId` INTEGER NOT NULL,
`When` TEXT NOT NULL,
`DateAdded` TEXT
);
INSERT INTO `Reminders` (Id,ChannelId,IsPrivate,Message,ServerId,UserId,When,DateAdded) VALUES (1270,367886754973351936,1,'Birthday Day',318127386367623170,367886754973351936,'2018-05-03 22:07:48.1860996','2018-03-18 22:07:48.186101'),
(1271,248278722656993281,1,'to remind Chanmi to remind Jayos to DeagleMomoka',318127386367623170,248278722656993281,'2018-05-05 22:08:58.4915565','2018-03-18 22:08:58.4915582'),
(1376,170240129414201344,1,'timely',318127386367623170,170240129414201344,'2018-03-29 09:00:29.4476776','2018-03-28 09:00:29.447679'),
(1377,373301201158144000,1,'timely',318127386367623170,373301201158144000,'2018-03-29 09:50:14.1631563','2018-03-28 09:50:14.1631577'),
(1378,248278722656993281,1,'timely',318127386367623170,248278722656993281,'2018-03-29 11:24:27.0250275','2018-03-28 11:24:27.025029'),
(1379,421433212716318721,1,'to timely',318127386367623170,421433212716318721,'2018-03-29 19:21:17.7465563','2018-03-28 19:21:17.7465584'),
(1380,346513954966863872,1,'t',318127386367623170,346513954966863872,'2018-03-29 19:42:23.4758798','2018-03-28 19:42:23.4758816'),
(1381,272735316002209792,1,'t!daily',318127386367623170,272735316002209792,'2018-03-29 21:01:47.5616218','2018-03-28 21:01:47.5616236'),
(1382,298272937243312132,1,'timely',318127386367623170,298272937243312132,'2018-03-29 23:18:02.8826873','2018-03-28 23:18:02.8826891'),
(1383,332340162774302720,1,'t',318127386367623170,332340162774302720,'2018-03-30 01:55:21.4704139','2018-03-29 01:55:21.4704156'),
(1384,367165474246754314,1,'tatyahaksodoeo',318127386367623170,367165474246754314,'2018-03-30 03:46:18.8805182','2018-03-29 03:46:18.8805196'),
(1385,290086674761908225,1,'timely',318127386367623170,290086674761908225,'2018-03-30 07:02:33.4115303','2018-03-29 07:02:33.4115321'),
(1386,168064128500367360,1,'timely',318127386367623170,168064128500367360,'2018-03-30 07:19:09.1915867','2018-03-29 07:19:09.1915885');
would have to be changed so that the offending keyword is enclosed/wrapped/quoted (square brackets, single or double quotes, or grave accents can be used) :-
.......INSERT INTO `Reminders` (Id,ChannelId,IsPrivate,Message,ServerId,UserId,[When],DateAdded) ......
Likewise table SelfAssignableRoles has the GROUP keyword as a column name.
Likewise table Permissionv2 and table StartupCommand have the INDEX keyword as a column name.
Potential Issue
As an exercise I've tried doing the above and have managed to get 67 out of the 71 tables (66 out of 70 of your tables as sqlite_sequence is automatically created).
However, there appears to be an issue between the Clubs table and the DiscordUser table: I believe there is a circular reference between them. As WaifuInfo and WaifuUpdates rely on the DiscordUser table, and WaifuItem relies on WaifuInfo, these tables have not been successfully copied either.
A word of warning. If you attempt to create Clubs and or DiscordUser using the existing constraints you may end up in a situation where one always has to exist.
e.g. if DiscordUser exists but Clubs doesn't then
DROP TABLE IF EXISTS `DiscordUser`;
results in :-
no such table: main.Clubs: DROP TABLE IF EXISTS `DiscordUser`;
If you then create a very basic Clubs table (no constraints) and try the DROP using :-
CREATE TABLE IF NOT EXISTS `Clubs` (ID INTEGER PRIMARY KEY);
DROP TABLE IF EXISTS `DiscordUser`;
The result is good as per :-
Query executed successfully: DROP TABLE IF EXISTS `DiscordUser`; (took 1ms)
Now try to DROP Clubs using :-
--CREATE TABLE IF NOT EXISTS `Clubs` (ID INTEGER PRIMARY KEY);
--DROP TABLE IF EXISTS `DiscordUser`;
DROP TABLE IF EXISTS `Clubs`;
and you can't as DiscordUser doesn't exist as per :-
no such table: main.DiscordUser: DROP TABLE IF EXISTS `Clubs`;
I've tried closing the database in case it was a caching issue but the behaviour remains.
As such, I'd strongly suggest having a good look at the constraint usage and being sure of correcting the issues before trying to copy all of the tables (I guess that there is a chance that this could be part of the cause of the corruption, however why/how is way beyond me).
P.S. The method I used for steps 1-6 was as described above. Then for 7 :-
1) Run the sqlite_master query from above, select all cells and copy, then drop the results into a spreadsheet (you could drop the sql column as the create gets truncated unless you try to fiddle with the delimiters).
2) Open the exported file (I used Notepad++) in your editor.
3) Open a new DB in DB Browser for SQLite (referred to as DBB from now on).
4) In DBB, in the Execute SQL tab, input PRAGMA integrity_check and run it to check.
5) Create a new tab (for the next SQL).
6) Switch to the spreadsheet and copy the first table name that isn't marked as done.
7) Switch to the editor and do a find on EXISTS copied_table_name.
8) Select the section (i.e. the CREATE statement along to and including the last row to be inserted; note this can be a pain for the larger tables, so it might be easier to create a separate export for those). Copy the selection to the clipboard.
9) Paste into the empty tab and run.
If OK then:
- in DBB click to create a new tab for the next table
- switch to the spreadsheet and mark the table as DONE
- go to 5.
If not OK then:
- if you can fix the issue by altering the SQL (e.g. a column name needs enclosing/wrapping/quoting), fix it and then go to 9.
- if the issue is due to constraints, go to 5 but select the table causing the constraint.
OK, the issue with the DiscordUser/Clubs tables is that Clubs.OwnerId requires a DiscordUser, so clubs cannot be added without the relevant DiscordUser rows (Ids 1, 2, 7, 14 and 32). Some DiscordUsers are club members, so they require a club to exist.
What I have done is to load the DiscordUser rows for the club owners, changing their ClubId to null; then load the Clubs; then update the ClubIds of those DiscordUsers so they are members of the club they were in before (i.e. undo the null); and finally load the rest of the nearly 600 DiscordUser rows (excluding those already loaded).
Here's the SQL I used for that part (note except for the Discorduser, Clubs and the 3 waifu tables, all other tables have been successfully created and loaded).
INSERT INTO `DiscordUser` (Id,AvatarId,Discriminator,UserId,DateAdded,Username,ClubId,LastLevelUp,NotifyOnLevelUp,LastXpGain,TotalXp,IsClubAdmin,CurrencyAmount) VALUES
-- ClubId was 6 changed to null
(1,'6d5212a0f5e862d57c8ffc6f254a2e85','1458',299779864045682689,'2017-10-07 18:02:04.8287878','Anubis',NULL,'2018-03-27 02:22:26.362966',0,'2017-11-17 01:19:14.0313957',7056,1,280),
-- Owns a club but not in a club
(2,'3b37e0f635706f81fdde2b6de9889283','9810',181200115539640321,'2017-10-07 18:04:39.767728','AnnaHime',NULL,'2018-01-02 02:27:38.8011863',0,'2017-11-16 01:29:49.0371488',429,0,360),
-- ClubId null was 3
(7,'612c67b6eb57d8806dcc92ed45b3a6d0','0396',177502331582021639,'2017-10-07 18:11:09.7830603','Tsuchimursu',NULL,'2018-03-28 17:45:53.7399883',0,'2017-11-17 15:53:59.084885',18156,1,4725),
-- ClubId null was 4
(14,'b2dd362171277337294de325bf92ad6a','3267',215597863441268737,'2017-10-07 18:45:54.8092675','LaLa☆Star',NULL,'2018-01-14 20:52:15.7531274',0,'2017-11-08 19:00:22.7778305',2061,1,286),
-- ClubId null was 5
(32,'667f4d802b977c4d4be974e35ae63c55','2593',251689019929395200,'2017-10-08 00:58:16.6089546','username',NULL,'2018-03-28 07:27:34.9348084',0,'2017-11-17 20:02:14.0283998',4704,1,1188),
-- ClubId was 2 changed to NULL
(91,'0adb399c9f2cd94370038e2452ab8c8d','6790',346513954966863872,'2017-10-13 05:48:51.7788964','mayoi',NULL,'2018-03-24 02:50:06.8970518',0,'2017-11-17 20:01:29.0692552',7635,1,515)
;
INSERT INTO `Clubs` (Id,DateAdded,Discrim,ImageUrl,MinimumLevelReq,Name,OwnerId,Xp,Description) VALUES
(2,'2017-11-14 07:39:57.5091592',1,'https://lh3.googleusercontent.com/_7WKFouxTx1fdFpnmmuykDAd5SoiiJOPzHdRmXKOmRRZhV5Ba4V_kZct5ooVjQ9BuzU=w300',5,'We ⤠waifus',91,40137,'Love your waifus short & tall, big & small, cute as dolls, we love ''em all!'),
(3,'2017-12-11 07:00:59.3762914',1,'',30,'Den of Faes',7,11607,NULL),
(4,'2017-12-11 07:03:59.093402',1,'',5,'Skeleton Enthusiasts',14,657,NULL),
(5,'2017-12-11 07:05:56.9111719',1,'',5,'Saki''s Juice',32,2610,NULL),
(6,'2017-12-22 04:46:24.7271709',1,'',5,'nap pile',1,24870,'For the sleeping beauties and the wandering insomniacs who enjoy a good night sleep.')
;
UPDATE `DiscordUser` SET ClubId = 6 WHERE Id=1;
UPDATE `DiscordUser` SET ClubId = 3 WHERE Id=7;
UPDATE `DiscordUser` SET ClubId = 4 WHERE Id=14;
UPDATE `DiscordUser` SET ClubId = 5 WHERE Id=32;
UPDATE `DiscordUser` SET ClubId = 2 WHERE Id=91;
-- LOAD Remaining DiscordUser rows (note incomplete)
INSERT INTO `DiscordUser` (Id,AvatarId,Discriminator,UserId,DateAdded,Username,ClubId,LastLevelUp,NotifyOnLevelUp,LastXpGain,TotalXp,IsClubAdmin,CurrencyAmount) VALUES
--(1,'6d5212a0f5e862d57c8ffc6f254a2e85','1458',299779864045682689,'2017-10-07 18:02:04.8287878','Anubis',6,'2018-03-27 02:22:26.362966',0,'2017-11-17 01:19:14.0313957',7056,1,280),
--(2,'3b37e0f635706f81fdde2b6de9889283','9810',181200115539640321,'2017-10-07 18:04:39.767728','AnnaHime',NULL,'2018-01-02 02:27:38.8011863',0,'2017-11-16 01:29:49.0371488',429,0,360),
(3,'a3cd92d397ad357834d0e6c9f10bfc59','0429',145356302347010048,'2017-10-07 18:04:49.786657','Rebel Lucy',NULL,'2018-03-26 12:55:21.1149964',0,'2017-11-17 22:21:24.0263741',6876,0,3600),
(4,'7225dccaab1c93896657a61e18595378','5286',84689434536050688,'2017-10-07 18:05:44.765554','scarletflame234',NULL,'2018-03-28 22:56:28.7427437',0,'2017-11-17 23:21:41.4446535',13368,0,288),
(5,'c1316bc0673f4a2709b3ce550ed54395','0760',303279191116480514,'2017-10-07 18:06:39.7664015','zachary',NULL,'2018-03-02 03:48:43.4817755',0,'2017-11-17 18:44:14.1082867',210,0,50),
(6,'2ed95eae7c3088c46b23e71578dacc42','8801',161369834314137601,'2017-10-07 18:07:04.7672808','Kou',NULL,'2018-03-07 06:24:32.3405246',0,'2017-11-17 23:20:00.0648699',2640,0,55),
--(7,'612c67b6eb57d8806dcc92ed45b3a6d0','0396',177502331582021639,'2017-10-07 18:11:09.7830603','Tsuchimursu',3,'2018-03-28 17:45:53.7399883',0,'2017-11-17 15:53:59.084885',18156,1,4725),
(8,'5b1d239935ab4dd6d3eee98954601d52','9859',179093512610906113,'2017-10-07 18:13:54.7939334','TheCorty',NULL,'2017-11-12 12:07:36.4752178',0,'2017-11-12 23:47:26.4744132',2460,0,205), ...........
NOTE: the SQL from -- LOAD Remaining DiscordUser rows onwards will not work as shown; it is only intended to show how Ids 1, 2 and 7 have been commented out. Rows 14, 32 and 91 should also be commented out, as they have already been loaded, and the other close to 600 rows should be included.
Note I've just also loaded the outstanding 3 waifu tables so all data can be retrieved (assuming that none has been lost due to the corruption). PRAGMA integrity_check; returns OK.

Inserting new rows and generating a new ID based on the current last row

The primary key of my table is an identity column (an ID). I want to be able to insert a new row and have it know what the last ID in the table currently is and add one to it. I know I can use SCOPE_IDENTITY to get the last inserted ID from my code, but I am worried about people manually adding entries to the database, because they do this quite often. Is there a way I can look at the last ID in the table and not just the last ID my code inserted?
With a SQL Identity column, you don't need to do anything special. This is the default behavior. SQL Server will handle making sure you don't have collisions regardless of where the inserts come from.
@@IDENTITY will pull the latest identity value generated in the current session (in any scope), and SCOPE_IDENTITY() will grab the identity from the current scope.
A scope is a module: a stored procedure, trigger, function, or batch. Therefore, if two statements are in the same stored procedure, function, or batch, they are in the same scope.
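A small runnable illustration of that difference (all table and trigger names here are made up for the demo): an AFTER INSERT trigger on one table writes into a second table that has its own identity column, so the two functions return different values.

CREATE TABLE t_Main  (Id INT IDENTITY(1,1) PRIMARY KEY, Name VARCHAR(50));
CREATE TABLE t_Audit (AuditId INT IDENTITY(100,1) PRIMARY KEY, Note VARCHAR(50));
GO
CREATE TRIGGER trg_Main_Audit ON t_Main AFTER INSERT
AS
    INSERT INTO t_Audit (Note) SELECT Name FROM inserted;
GO
INSERT INTO t_Main (Name) VALUES ('example');
-- SCOPE_IDENTITY() returns the t_Main Id (generated in the current scope);
-- @@IDENTITY returns the t_Audit AuditId generated by the trigger in the same session.
SELECT SCOPE_IDENTITY() AS ScopeIdentity, @@IDENTITY AS SessionIdentity;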
If you don't want to allow manual entries to the primary column, then you can add Identity constraint to it along with primary key constraint.
Example, while creating a table,
CREATE Table t_Temp(RowID Int Primary Key Identity(1,1), Name Varchar(50))
INSERT Into t_Temp values ('UserName')
INSERT Into t_Temp values ('UserName1')
SELECT * from t_Temp
You can query the table and get the next available code in one SQL query:
SELECT COALESCE(MAX(CAST("RowID" AS INT)),0) +1 as 'NextRowID' from <tableName>
The "0" here is a default, meaning if there are no rows found, the first code returned would be (0+1) =1
Generally I have 999 instead of the 0 as I like my RowID/primary key etc. to start at 1000.

How to fix this stored procedure problem

I have 2 tables. The following is just a stripped-down version of them.
TableA
Id <pk> incrementing
Name varchar(50)
TableB
TableAId <pk> non incrementing
Name varchar(50)
Now these tables have a relationship to each other.
Scenario
User 1 comes to my site and does some actions (in this case, adds rows to Table A). So I use SqlBulkCopy to load all this data into Table A.
However I also need to add the data to Table B, but I don't know the newly created Ids from Table A, as SqlBulkCopy won't return them.
So I am thinking of having a stored procedure that finds all the Ids that don't exist in Table B and then inserts them:
INSERT INTO TableB (TableAId , Name)
SELECT Id,Name FROM TableA as tableA
WHERE not exists( ...)
However this comes with a problem. A user can delete something from Table B at any time, so if a user deletes a row and then another user (or even the same user) comes along and does something to Table A, my stored procedure will bring back that deleted row into Table B, since the row will still exist in Table A but not in Table B and thus satisfy the stored procedure's condition.
So is there a better way of dealing with two tables that need to be updated when using bulk insert?
SQLBulkCopy complicates this so I'd consider using a staging table and an OUTPUT clause
Example, in a mixture of client pseudo code and SQL
create SQLConnection
Create #temptable
Bulkcopy to #temptable
Call proc on same SQLConnection
proc:
INSERT tableA (..)
OUTPUT INSERTED.key, .. INTO TableB
SELECT .. FROM #temptable
close connection
Notes:
temptable will be local to the connection and be isolated
the writes to A and B will be atomic
overlapping or later writes don't care about what happens later to A and B
emphasising the last point, A and B will only ever be populated from the set of rows in #temptable
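To make the proc half of that sketch concrete, one hedged way to write it is shown below (column names are assumptions; because OUTPUT ... INTO cannot target a table that participates in a foreign key constraint, the generated keys are captured in a table variable first and copied to TableB inside the same transaction):

CREATE PROCEDURE dbo.CopyStagedRows
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @new TABLE (Id INT, Name VARCHAR(50));

    BEGIN TRANSACTION;

    -- #temptable was created and bulk-loaded by the client on this same connection
    INSERT INTO TableA (Name)
    OUTPUT INSERTED.Id, INSERTED.Name INTO @new (Id, Name)
    SELECT Name FROM #temptable;

    INSERT INTO TableB (TableAId, Name)
    SELECT Id, Name FROM @new;

    COMMIT TRANSACTION;
END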
Alternative:
Add another column to A and B called sessionid and use that to identify row batches.
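A sketch of that alternative, assuming the client stamps every bulk-copied TableA row with a batch value it generated (here a @SessionId parameter; all names are assumptions):

-- @SessionId identifies the batch the client just bulk copied into TableA
INSERT INTO TableB (TableAId, Name, SessionId)
SELECT Id, Name, SessionId
FROM   TableA
WHERE  SessionId = @SessionId;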
One option would be to use SQL Servers output clause:
INSERT YourTable (name)
OUTPUT INSERTED.*
VALUES ('NewName')
This will return the id, name of the inserted rows to the client, so you can use them in the insert operation for the second table.
Just as an alternative solution you could use database triggers to update the second table.
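A hedged sketch of such a trigger (names are assumptions); note that SqlBulkCopy only fires triggers when the SqlBulkCopyOptions.FireTriggers option is specified:

CREATE TRIGGER trg_TableA_CopyToB ON TableA
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- inserted contains every row added by the statement, including the new identity values
    INSERT INTO TableB (TableAId, Name)
    SELECT Id, Name FROM inserted;
END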