I am working on a stored procedure that will bulk insert data from a .csv file into a table, but the procedure should not insert duplicates. Some of the rows might already be in the table but with changed values, so I guess an UPSERT would be good here; but what about deleting records that already exist in the table but don't exist in the .csv?
The end goal is exactly the same as wiping the entire table and inserting all the .csv data into it, but as the .csv has millions of records and only a few hundred of them change from time to time, I assume "updating" the table is faster than wiping it.
DECLARE @ssql NVARCHAR(4000) = 'BULK INSERT gl_ip_range FROM '''
    + @psTempFilePath
    + ''' WITH ( FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'', FORMAT = ''CSV'', FIRSTROW = 2 )';
The code above is what I currently use together with clearing the table prior to the insert.
Will clearing the table and inserting the millions of records be faster than doing what I am trying to achieve?
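For reference, the staging-plus-MERGE pattern I have in mind would look roughly like this, where gl_ip_range_staging holds the bulk-inserted CSV and ip_start / ip_value are placeholder column names:
-- Sketch only: the staging table and column names are assumptions.
MERGE gl_ip_range AS target
USING gl_ip_range_staging AS source
    ON target.ip_start = source.ip_start
WHEN MATCHED AND target.ip_value <> source.ip_value THEN
    UPDATE SET target.ip_value = source.ip_value
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ip_start, ip_value) VALUES (source.ip_start, source.ip_value)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;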
I'm trying to bulk insert the contents of a CSV into a table in SQL (which I then merge into another table to update data, or add new rows).
The CSV has 5.4M rows, and my SQL bulk insert returns no errors or warnings but only loads the first 1,048,575 rows of data. (If I use the SSIS wizard it consumes all 5.4M rows, but I need this to be T-SQL that runs daily.)
bulk insert destination_table
from 'path\filename.csv'
with (
format = 'CSV',
firstrow = 2,
fieldterminator = ',',
rowterminator = '0x0A'
)
Any ideas why it stops at the Excel row limit?
I'm new to SQL and I'm attempting to do a bulk insert into a view; however, when I execute the script the message says (0 row(s) affected).
This is what I'm executing:
BULK INSERT vwDocGeneration
FROM '\\servername\Data\Doc\Test.csv'
WITH
(
Fieldterminator = '|',
Rowterminator = '\r\n'
)
I've confirmed the row terminators in my source file and they end with CRLF. The view and the file being imported have the same number of columns. I'm stuck! Any ideas would be greatly appreciated!
Per Mike K's suggestion I started looking at key constraints, and after I adjusted one of them I was able to use the bulk insert! FYI, I did insert into the view because the table had an additional field that wasn't included in my CSV file. Thanks for confirming it's possible, @Gordon Linoff.
If you are looking for the number of rows affected by that operation, then use this:
DECLARE @Rows int
DECLARE @TestTable table (col1 int, col2 int)
-- your bulk insert operation
SELECT @Rows = @@ROWCOUNT
SELECT @Rows AS Rows, @@ROWCOUNT AS [ROWCOUNT]
Or you can first bulk insert into a table, then create an appropriate view from that table.
The following articles might be useful:
http://www.w3schools.com/sql/sql_view.asp
http://www.codeproject.com/Articles/236425/How-to-insert-data-using-SQL-Views-created-using-m
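Roughly, that table-first approach would look like this (the import table name and column list are placeholders, not from the question):
-- Placeholder schema; adjust the columns to match the CSV.
CREATE TABLE dbo.DocGenerationImport
(
    DocId   int,
    DocName varchar(200),
    DocData varchar(max)
);
GO

BULK INSERT dbo.DocGenerationImport
FROM '\\servername\Data\Doc\Test.csv'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\r\n');
GO

-- Then expose whatever shape you need through a view.
CREATE VIEW dbo.vwDocGeneration AS
    SELECT DocId, DocName, DocData
    FROM dbo.DocGenerationImport;
GO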
I want to read all of the text in a file and insert it into a table's column. One suggested way was to use BULK INSERT. Because of the syntax, I thought it would be better to BULK INSERT into a temp table, then eventually, I would SELECT from the temp table along with other values to fill the main table's row.
I tried:
USE [DB]
CREATE TABLE #ImportText
(
[XSLT] NVARCHAR(MAX)
)
BULK INSERT #ImportText
FROM 'C:\Users\me\Desktop\Test.txt'
SELECT * FROM #ImportText
DROP TABLE #ImportText
But it is creating a new row in #ImportText for every newline in the file. I don't want it split at all, and I could not find a FIELDTERMINATOR that would allow for this (i.e. an end-of-file character).
Try the null character as the row terminator; no newline in the file is then treated as a row break, so the whole file loads as a single value:
BULK INSERT #ImportText
FROM 'C:\Users\me\Desktop\Test.txt'
WITH (ROWTERMINATOR = '\0')
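Then, to move the file contents into the main table alongside other values, something like this (MainTable and its Name/LoadedOn columns are hypothetical):
-- MainTable, Name and LoadedOn are assumed names; only XSLT comes from the temp table.
INSERT INTO MainTable (Name, LoadedOn, XSLT)
SELECT 'Test.txt', GETDATE(), XSLT
FROM #ImportText;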
I have a table which has more than 1 million records, and I have created a stored procedure to insert data into that table. Before inserting the data I need to truncate the table, but the truncate is taking too long.
I have read in some places that if a table is being used by someone else, or locks are held on it, then a truncate can take a long time, but here I am the only user and I have applied no locks on it.
Also, no other transactions were open when I tried to truncate the table.
As my database is on SQL Azure I am not supposed to drop the indexes, as it does not allow me to insert the data without an index.
Drop all the indexes from the table and then truncate. If you want to insert the data, insert it, and after inserting the data recreate the indexes.
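A minimal sketch of that sequence, with assumed table, column and index names (BigTable, Value, IX_BigTable_Value) and the new data assumed to be staged in BigTable_Staging:
-- Sketch only: every name below is a placeholder.
DROP INDEX IX_BigTable_Value ON dbo.BigTable;    -- drop the nonclustered indexes first

TRUNCATE TABLE dbo.BigTable;

INSERT INTO dbo.BigTable (Id, Value)             -- reload from wherever the new data lives
SELECT Id, Value
FROM dbo.BigTable_Staging;

CREATE NONCLUSTERED INDEX IX_BigTable_Value      -- recreate the indexes afterwards
    ON dbo.BigTable (Value);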
When deleting from Azure you can get into all sorts of trouble, but a slow truncate is almost always a locking issue. If you can't fix that, you can always use this trick to delete in batches instead.
declare @iDeleteCounter int = 1
while @iDeleteCounter > 0
begin
    begin transaction deletes;

    with deleteTable as
    (
        select top 100000 * from mytable where mywhere
    )
    delete from deleteTable;

    commit transaction deletes;

    select @iDeleteCounter = count(1) from mytable where mywhere;
    print 'deleted 100000 from table';
end
I need to read the CSV file information record by record: if the customer in the file already exists in the Customer table, insert the row into the detail table; otherwise insert it into the error table. So I can't use a plain bulk insert.
How do I read records one by one from the CSV file? How do I give the path?
The bulk insert method on its own is not going to work here.
One option is to use an INSTEAD OF INSERT trigger to selectively put the row in the correct table, and then use your normal BULK INSERT with the option FIRE_TRIGGERS.
Something close to:
CREATE TRIGGER bop ON MyTable INSTEAD OF INSERT AS
BEGIN
INSERT INTO MyTable
SELECT inserted.id,inserted.name,inserted.otherfield FROM inserted
WHERE inserted.id IN (SELECT id FROM customerTable);
INSERT INTO ErrorTable
SELECT inserted.id,inserted.name,inserted.otherfield FROM inserted
WHERE inserted.id NOT IN (SELECT id FROM customerTable);
END;
BULK INSERT MyTable FROM 'c:\temp\test.sql'
WITH (FIELDTERMINATOR=',', FIRE_TRIGGERS);
DROP TRIGGER bop;
If you're importing files regularly, you can create a table (ImportTable) with the same schema, set the trigger on that, and do the imports into MyTable through a bulk insert into ImportTable. That way you can keep the trigger, and as long as you're importing into ImportTable you don't need any special setup or procedure for each import; a sketch of that setup is shown below.
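A rough sketch of that permanent staging-table setup, reusing the hypothetical names from the trigger above (MyTable, ErrorTable, customerTable); ImportTable's columns are assumed to mirror MyTable:
-- ImportTable mirrors MyTable's schema and exists only as a bulk-load target.
CREATE TABLE ImportTable (id int, name varchar(100), otherfield varchar(100));
GO
-- The routing trigger now lives permanently on ImportTable instead of MyTable.
CREATE TRIGGER bop_import ON ImportTable INSTEAD OF INSERT AS
BEGIN
    INSERT INTO MyTable
    SELECT id, name, otherfield FROM inserted
    WHERE id IN (SELECT id FROM customerTable);

    INSERT INTO ErrorTable
    SELECT id, name, otherfield FROM inserted
    WHERE id NOT IN (SELECT id FROM customerTable);
END;
GO
-- Each regular import then only needs this; no trigger create/drop per run.
BULK INSERT ImportTable FROM 'c:\temp\test.csv'
WITH (FIELDTERMINATOR = ',', FIRE_TRIGGERS);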
CREATE TABLE #ImportData
(
CVECount varchar(MAX),
ContentVulnCVE varchar(MAX),
ContentVulnCheckName varchar(MAX)
)
BULK INSERT #ImportData
FROM 'D:\test.csv'
WITH
(
FIRSTROW = 2,
FIELDTERMINATOR = ',', --CSV field delimiter
ROWTERMINATOR = '\n', --Use to shift the control to next row
TABLOCK
)
select * from #ImportData
-- Here you can write your script to read the data row by row
DROP TABLE #ImportData
Use bulk insert to load into a staging table and then process it line by line.
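A rough sketch of that row-by-row processing step, assuming the #ImportData staging table above plus hypothetical Customer, Detail and ErrorTable tables keyed on the check name:
-- Process the staged rows one at a time with a cursor; table/column names are assumptions.
DECLARE @cveCount varchar(MAX), @cve varchar(MAX), @checkName varchar(MAX);

DECLARE staging_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT CVECount, ContentVulnCVE, ContentVulnCheckName FROM #ImportData;

OPEN staging_cursor;
FETCH NEXT FROM staging_cursor INTO @cveCount, @cve, @checkName;

WHILE @@FETCH_STATUS = 0
BEGIN
    IF EXISTS (SELECT 1 FROM Customer WHERE CustomerName = @checkName)
        INSERT INTO Detail (CVECount, ContentVulnCVE, ContentVulnCheckName)
        VALUES (@cveCount, @cve, @checkName);
    ELSE
        INSERT INTO ErrorTable (CVECount, ContentVulnCVE, ContentVulnCheckName)
        VALUES (@cveCount, @cve, @checkName);

    FETCH NEXT FROM staging_cursor INTO @cveCount, @cve, @checkName;
END

CLOSE staging_cursor;
DEALLOCATE staging_cursor;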