SQL Server: Bulk Insert 0 Rows Affected

I'm new to SQL and I'm attempting to do a bulk insert into a view; however, when I execute the script the message says (0 row(s) affected).
This is what I'm executing:
BULK INSERT vwDocGeneration
FROM '\\servername\Data\Doc\Test.csv'
WITH
(
Fieldterminator = '|',
Rowterminator = '\r\n'
)
I've confirmed the row terminators in my source file and they end with CRLF. The view and the file being imported have the same number of columns. I'm stuck! Any ideas would be greatly appreciated!

Per Mike K's suggestion I started looking at key constraints, and after I adjusted one of them I was able to use the bulk insert! FYI, I did insert into the view because the table had an additional field that wasn't included in my CSV file. Thanks for confirming it's possible, @Gordon Linoff.
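For anyone hitting the same thing, a minimal sketch of the working setup (names are illustrative): BULK INSERT can target a view as long as the view maps onto a single table and every column the view leaves out is nullable or has a default.
-- Illustrative schema: the table carries one extra column not present in the CSV
CREATE TABLE dbo.DocGeneration
(
    DocId     int          NOT NULL,
    DocName   varchar(100) NOT NULL,
    CreatedOn datetime     NULL DEFAULT GETDATE() -- the extra field
);
GO
CREATE VIEW dbo.vwDocGeneration AS
SELECT DocId, DocName FROM dbo.DocGeneration;
GO
-- The bulk insert from the question then works against the view
BULK INSERT vwDocGeneration
FROM '\\servername\Data\Doc\Test.csv'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\r\n');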

If you are looking for the number of rows affected by that operation, then use this:
DECLARE @Rows int
DECLARE @TestTable table (col1 int, col2 int)
-- your bulk insert operation
SELECT @Rows = @@ROWCOUNT
SELECT @Rows AS Rows, @@ROWCOUNT AS [ROWCOUNT]
Or you can first bulk insert into a table, then create an appropriate view from that table.
The following articles might be useful:
http://www.w3schools.com/sql/sql_view.asp
http://www.codeproject.com/Articles/236425/How-to-insert-data-using-SQL-Views-created-using-m

Related

SQL Server Bulk import data from .csv into new table

I can't seem to find the answer to this quite trivial question.
I would like to bulk import data from a .csv file (with an unknown number of columns, comma separated) into a new SQL Server table within an existing database. The BULK INSERT statement works fine if the table is predefined, but since I don't know the number of columns of the .csv file upfront, this won't work.
I tried BULK INSERT in combination with OPENROWSET, but couldn't get it working.
By the way: SSIS won't be an option in this case, since I would like to incorporate the query within R (sqlquery) or Python.
Help would be highly appreciated!
I have found a workaround, using R, to solve the problem above. The same logic can be applied in other languages. I advise everyone using this function to keep in mind the useful comments above.
I wrote a small function to capture the steps in R:
library(RODBC) # for sqlSave()/sqlQuery(); dbhandle comes from odbcConnect()

SQLSave <- function(dbhandle, data, tablename) {
  # Export the data to a temp path, for example within your SQL Server directory.
  write.csv2(data, file = "\\\\pathToSQL\\temp.csv", row.names = FALSE, na = "")
  # Write the first 100 rows to SQL Server, to create the table structure.
  sqlSave(dbhandle, head(data, 100), tablename = tablename, rownames = FALSE, safer = FALSE)
  # Remove the seed rows again; the table structure remains:
  sqlQuery(dbhandle, paste("DELETE FROM [", tablename, "]", sep = ""))
  # Bulk insert all data from the temp .csv into SQL Server.
  # Note: write.csv2 writes ';'-separated values, matching FIELDTERMINATOR below.
  sqlQuery(dbhandle, paste("BULK INSERT [", tablename, "]
    FROM '\\\\pathToSQL\\temp.csv'
    WITH
    (
        FIELDTERMINATOR = ';',
        ROWTERMINATOR = '\\n',
        FIRSTROW = 2,
        KEEPNULLS
    )", sep = ""))
  # Delete the temp file from the file directory.
  file.remove("\\\\pathToSQL\\temp.csv")
}
I am currently struggling with the same problem. I first read the header row using BULK INSERT and created the table from it, then imported the data from row 2 onwards with a second BULK INSERT. Note that you may have to adjust the data types after checking the imported data.
CREATE TABLE #Header (HeadString nvarchar(max))
DECLARE @TableName nvarchar(100) = 'byom.DenormReportingData_100_100'
DECLARE @Columns nvarchar(max) = ''
DECLARE @Query nvarchar(max) = ''
DECLARE @Query2 nvarchar(max) = ''

-- Read only the header line into #Header
BULK INSERT #Header
FROM 'F:/Data/BDL_BI_Test.csv'
WITH (FIRSTROW = 1, LASTROW = 1)

-- Build a column list from the header, defaulting every column to VARCHAR(500)
SELECT @Columns = (SELECT quotename(value) + ' VARCHAR(500)' + ','
                   FROM #Header
                   CROSS APPLY string_split(HeadString, ',')
                   FOR XML PATH(''))

IF isnull(@Columns, '') <> ''
BEGIN
    SET @Columns = left(@Columns, len(@Columns) - 1) -- drop the trailing comma
    SELECT @Query = 'CREATE TABLE ' + @TableName + ' (' + @Columns + ')'
    EXEC (@Query)
END

-- Import the data rows (from row 2) into the newly created table
SELECT @Query2 = 'bulk insert ' + @TableName + ' from ''F:/Data/BDL_BI_Test.csv''
with (firstrow=2, FORMAT=''csv'', FIELDTERMINATOR='','', ROWTERMINATOR=''\n'')'
EXEC (@Query2)
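Since every column is created as VARCHAR(500), a typical follow-up is to tighten the types once you have inspected the data, for example (the column name here is illustrative):
-- Hypothetical follow-up: convert a column after eyeballing the imported data
ALTER TABLE byom.DenormReportingData_100_100 ALTER COLUMN [SomeNumericColumn] int;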

How do I take all of the text from a file and insert it into one row's column?

I want to read all of the text in a file and insert it into a table's column. One suggested way was to use BULK INSERT. Because of the syntax, I thought it would be better to BULK INSERT into a temp table, then eventually, I would SELECT from the temp table along with other values to fill the main table's row.
I tried:
USE [DB]
CREATE TABLE #ImportText
(
[XSLT] NVARCHAR(MAX)
)
BULK INSERT #ImportText
FROM 'C:\Users\me\Desktop\Test.txt'
SELECT * FROM #ImportText
DROP TABLE #ImportText
But it creates a new row in #ImportText for every newline in the file. I don't want it split at all, and I could not find a FIELDTERMINATOR that would allow for this (i.e. an end-of-file character).
Try this:
BULK INSERT #ImportText
FROM 'C:\Users\me\Desktop\Test.txt'
WITH (ROWTERMINATOR = '\0') -- the null character never occurs in a text file, so the whole file is read as one row
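An alternative that skips the temp table entirely: OPENROWSET in SINGLE_CLOB/SINGLE_NCLOB mode reads the whole file as a single value. A minimal sketch, assuming a hypothetical target table dbo.MainTable:
-- SINGLE_NCLOB expects a Unicode (UTF-16) file and yields nvarchar(max);
-- use SINGLE_CLOB to read an ANSI file as varchar(max) instead.
INSERT INTO dbo.MainTable ([XSLT])
SELECT BulkColumn
FROM OPENROWSET(BULK 'C:\Users\me\Desktop\Test.txt', SINGLE_NCLOB) AS f;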

To read CSV file data one by one from SQL Stored proc

I need to read the CSV file information record by record: if the customer in the file exists in the Customer table, insert into the detail table, otherwise insert into the error table. So I can't use a plain bulk insert.
How do I read records one by one from the CSV file, and how do I give the path?
One option is to use an INSTEAD OF INSERT trigger to selectively put each row in the correct table, and then use your normal BULK INSERT with the FIRE_TRIGGERS option.
Something close to:
CREATE TRIGGER bop ON MyTable INSTEAD OF INSERT AS
BEGIN
    INSERT INTO MyTable
    SELECT inserted.id, inserted.name, inserted.otherfield FROM inserted
    WHERE inserted.id IN (SELECT id FROM customerTable);

    INSERT INTO ErrorTable
    SELECT inserted.id, inserted.name, inserted.otherfield FROM inserted
    WHERE inserted.id NOT IN (SELECT id FROM customerTable);
END;

BULK INSERT MyTable FROM 'c:\temp\test.sql'
WITH (FIELDTERMINATOR = ',', FIRE_TRIGGERS);

DROP TRIGGER bop;
If you're importing files regularly, you can create a table (ImportTable) with the same schema, set the trigger on that and do the imports to MyTable through bulk import to ImportTable. That way you can keep the trigger and as long as you're importing to ImportTable, you don't need to do any special setup/procedure for each import.
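A hedged sketch of that reusable setup (schema and names are illustrative, mirroring the trigger above):
CREATE TABLE ImportTable (id int, name varchar(50), otherfield varchar(50));

CREATE TRIGGER bop_import ON ImportTable INSTEAD OF INSERT AS
BEGIN
    -- Route valid rows to MyTable and the rest to ErrorTable;
    -- ImportTable itself stays empty.
    INSERT INTO MyTable
    SELECT id, name, otherfield FROM inserted
    WHERE id IN (SELECT id FROM customerTable);

    INSERT INTO ErrorTable
    SELECT id, name, otherfield FROM inserted
    WHERE id NOT IN (SELECT id FROM customerTable);
END;

-- Every subsequent import fires the trigger; no per-import setup needed.
BULK INSERT ImportTable FROM 'c:\temp\test.csv'
WITH (FIELDTERMINATOR = ',', FIRE_TRIGGERS);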
CREATE TABLE #ImportData
(
CVECount varchar(MAX),
ContentVulnCVE varchar(MAX),
ContentVulnCheckName varchar(MAX)
)
BULK INSERT #ImportData
FROM 'D:\test.csv'
WITH
(
FIRSTROW = 2,
FIELDTERMINATOR = ',', --CSV field delimiter
ROWTERMINATOR = '\n', --CSV row delimiter; moves to the next row
TABLOCK
)
select * from #ImportData
-- Here you can write your script to read the data row by row
DROP TABLE #ImportData
Use bulk insert to load into a staging table and then process it line by line.
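If the per-row rule really is just "existing customer or not", the processing after staging can even be set-based rather than literally line by line; a sketch with illustrative table and column names:
-- #Staging is assumed to be already loaded via BULK INSERT
INSERT INTO DetailTable (CustomerId, Amount)
SELECT s.CustomerId, s.Amount
FROM #Staging s
WHERE EXISTS (SELECT 1 FROM Customer c WHERE c.CustomerId = s.CustomerId);

INSERT INTO ErrorTable (CustomerId, Amount)
SELECT s.CustomerId, s.Amount
FROM #Staging s
WHERE NOT EXISTS (SELECT 1 FROM Customer c WHERE c.CustomerId = s.CustomerId);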

Import CSV file into SQL Server 2012 as SQL function

So I am basing my code on the Import to mapped columns question asked by another user.
Here is the code...
CREATE TABLE #TempTable (Name nvarchar(max))

BULK INSERT #TempTable
FROM 'C:\YourFilePath\file.csv'
WITH ( FIELDTERMINATOR = ',',
       ROWTERMINATOR = '\n'
)

INSERT INTO YourTable ([Name], [TypeId])
SELECT Name, '99E05902-1F68-4B1A-BC66-A143BFF19E37' FROM #TempTable
Do I put this code into a stored procedure or a function to run it from my ASP script?
Is your TypeId a constant on the last line?
You can use either an SP or a table-valued function, but since table-valued function support depends on the version of SQL Server you have, it might make more sense to use an SP.
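A stored procedure also sidesteps a hard limit: BULK INSERT modifies database state, which user-defined functions are not allowed to do. A minimal sketch of the wrapper, reusing the corrected code above (the procedure name and YourTable are illustrative):
CREATE PROCEDURE dbo.usp_ImportNames
AS
BEGIN
    -- Stage the file, then copy into the real table with the constant TypeId
    CREATE TABLE #TempTable (Name nvarchar(max));

    BULK INSERT #TempTable
    FROM 'C:\YourFilePath\file.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

    INSERT INTO YourTable ([Name], [TypeId])
    SELECT Name, '99E05902-1F68-4B1A-BC66-A143BFF19E37'
    FROM #TempTable;
END;
From the ASP script you would then simply run EXEC dbo.usp_ImportNames;.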

Execute SQL statements while looping a table

I want to create a table with a few records in it and then run a set of SQL statements for every record in that table. I would use the data in the table to set values in the SQL statement.
This should allow me to write the SQL just once and then run it for whatever data I put in the table.
But, I'm not sure how to go about doing this. Should I use a cursor to loop the table? Some other way?
Thanks for any help or advice you can give me.
A CURSOR will have some overhead associated with it, but it can be a good method to walk through your table. Cursors are not a totally unnecessary evil and have their place.
With the limited information that WilliamB2 provided, it sounds like a CURSOR may be a good solution for this problem: walk through his data and generate the multiple downstream INSERTs.
Yes, you can use a cursor. You can also use a WHILE loop:
declare @table as table (col1 int, col2 varchar(20))
declare @col1 int
declare @col2 varchar(20)
declare @sql varchar(max)

insert into @table
SELECT col1, col2 FROM OriginalTable

while (exists (select top 1 'x' from @table)) -- as long as @table contains records, continue
begin
    select top 1 @col1 = col1, @col2 = col2 from @table
    SET @sql = 'INSERT INTO TargetTable VALUES (' + cast(@col1 as varchar(20)) + ')'
    exec (@sql) -- execute the generated statement
    delete top (1) from @table -- remove the processed row; also ensures no infinite loop
end
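For comparison, here is what the cursor version mentioned above could look like; a minimal sketch that assumes the same OriginalTable and the illustrative TargetTable from the loop:
DECLARE @col1 int, @col2 varchar(20), @sql varchar(max)

DECLARE row_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT col1, col2 FROM OriginalTable

OPEN row_cursor
FETCH NEXT FROM row_cursor INTO @col1, @col2
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Use the row's values to build and run the statement
    SET @sql = 'INSERT INTO TargetTable VALUES (' + cast(@col1 as varchar(20)) + ')'
    EXEC (@sql)
    FETCH NEXT FROM row_cursor INTO @col1, @col2
END
CLOSE row_cursor
DEALLOCATE row_cursor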
A cursor does have some overhead attached to it.
With this second approach you are not working on the original table, only on the copy in @table.
Maybe you could use INSERT...SELECT instead of the loop:
INSERT INTO target_table
SELECT
some_col,
some_other_col,
'Some fixed value',
NULL,
42,
you_get_the_idea
FROM source_table
WHERE source_table.you_get_the_idea = 1
The columns on your SELECT should match the structure of the target table (you can omit an int/identity pk like id if you have one).
Whether this or the loop is the better option depends on how many tables you want to populate inside the loop. If it's just a few, I usually stick with INSERT...SELECT.