SQL Server Bulk Import ROWTERMINATOR vbCrLf (\n) not working

I know there are other threads on here regarding this problem and none of the recommended fixes are working. I have verified that the CRLF at the end of each record is a hex '0D0A'. I can do a replace in VBS on vbCrLf and it replaces every one of them.
Here is a sample of my tab-delimited text file:
01/16/2013 11:00 HS01 DocLast, DocFirst PA-C Occurred ML 11/20/2012 15:31
01/07/2013 09:40 HS01 DocLast, DocFirst PA-C Canceled ML 11/20/2012 15:36 Patient Canceled 20130103-57935
I am executing this code against the text file in a stored procedure in SQL Server 2008:
set @sqlcmd = '
BULK INSERT #temp_import_records
FROM ''' + @import_file + '''
WITH
(
    ROWTERMINATOR = ''\n''
)'
I am trying to insert this text into a temp table with 20 columns; this data has only 10 fields. With my code, both of these records end up in the same row of the temp table. I have tried setting the ROWTERMINATOR to '0D0A' and '0x0A', and neither worked.
What am I doing wrong?

For anybody landing here now, you can try using hex codes:
ROWTERMINATOR = '0x0d0a' or ROWTERMINATOR = '0x0a'
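For example, a minimal sketch against the asker's temp table (the file path here is a placeholder):
BULK INSERT #temp_import_records
FROM 'C:\import\records.txt'
WITH
(
    ROWTERMINATOR = '0x0d0a' -- hex for the two-byte CR+LF terminator
)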

The \n character is only the LF (0A) part of the two-character CR+LF (0D0A) row terminator. Use \r for the first part:
ROWTERMINATOR = ''\r\n''
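Applied to the dynamic SQL from the question, a corrected sketch (reusing the asker's @sqlcmd and @import_file variables; the exec at the end is assumed) would be:
set @sqlcmd = '
BULK INSERT #temp_import_records
FROM ''' + @import_file + '''
WITH
(
    ROWTERMINATOR = ''\r\n''
)'
exec (@sqlcmd)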

If the command string is built in a host language that processes backslash escapes (C#, for example), I suggest also escaping the terminator like so:
ROWTERMINATOR = ''\\n''

Related

SQL bulk insert csv with currency sign separator ¤

I have a .csv file with a currency sign (¤) as the field separator; when I execute this query to bulk load it into a table, it raises an error.
The file is UTF-8 encoded.
BULK INSERT dbo.test
FROM 'file.csv'
WITH (DATA_SOURCE = 'MyAzureBlobStorage',
FIRSTROW = 2,
CODEPAGE = 65001, --UTF-8 encoding
FIELDTERMINATOR = '¤', --CSV field delimiter
ROWTERMINATOR = '\n' --used to move to the next row
);
The error I get is:
The bulk load failed. The column is too long in the data file for row 1, column 1. Verify that the field terminator and row terminator are specified correctly.
This is working fine with a semicolon as the separator.

Bulk insert SQL command cannot insert first row

I'm using a bulk insert command for SQL Server, but for some reason the first row isn't being inserted. Why can't I insert data from the first row? Does BULK INSERT expect headers by default, and how can I circumvent this? If I add a dummy row and set FIRSTROW = 2 in the WITH clause, the first row is inserted without a problem, but I don't think this is a nice solution.
Error code:
Msg 4864, Level 16, State 1, Line 1
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 1 (table_id).
Command:
BULK INSERT TableData
FROM 'C:\Users\Oscar\file.csv'
WITH (FIELDTERMINATOR = ';',
ROWTERMINATOR = '\n',
KEEPNULLS,
KEEPIDENTITY)
Sample data:
1;Text 1;1;0;;
2;Text 2;1;0;;
3;Text 3;1;0;;
4;Text 4;1;0;;
5;Text 5;1;0;;
The file is probably UTF-8 and you're trying to load it as cp-1252 or something similar; the UTF-8 BOM at the beginning trips up the parser, which is why the very first row fails.
Look at the file with a hex editor and you'll see it (the bytes EF BB BF at the start).
Save the file as ANSI and try again.
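Alternatively, if re-saving the file is not an option, SQL Server 2016 and later can read UTF-8 directly via CODEPAGE; a sketch of the same command under that assumption:
BULK INSERT TableData
FROM 'C:\Users\Oscar\file.csv'
WITH (FIELDTERMINATOR = ';',
ROWTERMINATOR = '\n',
CODEPAGE = '65001', -- UTF-8; not supported before SQL Server 2016
KEEPNULLS,
KEEPIDENTITY)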
I've had problems with line feed vs. carriage return + line feed; that might be the issue here as well. For a line-feed-only file I had to use a row terminator of 0x0a:
BULK INSERT TableData
FROM 'C:\Users\Oscar\file.csv'
WITH (
FIELDTERMINATOR = ';',
ROWTERMINATOR = '0x0a',
KEEPNULLS,
KEEPIDENTITY)

SQL: Insert a linebreak in varchar string

I've searched Stack Overflow for all the possible solutions concerning how to insert a line break in a SQL text string. I've referred to this link, but to no avail: How to insert a line break in a SQL Server VARCHAR/NVARCHAR string
But none of the solutions are working for me.
This is what I'm trying to do:
insert into sample (dex, col)
values (2, 'This is line 1.' + CHAR(13)+CHAR(10) + 'This is line 2.')
But this is the output generated: (Select Col from sample where dex = 2)
This is line 1. This is line 2.
This is the output that I desire:
This is line 1.
This is line 2.
I'm using SQL server and SSMS if that helps.
Any ideas why it isn't working?
Well, your query works perfectly fine.
SSMS by default shows all query output in the grid view, which does not display the line break character.
To see it, switch to text view with the Ctrl+T shortcut, or via Query > Results To > Results to Text.
It works perfectly:
CREATE TABLE sample(dex INT, col VARCHAR(100));
INSERT INTO sample(dex, col)
VALUES (2, 'This is line 1.' + CHAR(13)+CHAR(10) + 'This is line 2.');
SELECT *
FROM sample;
The "problem" is SSMS grid view that skips newline characters (and others too). Otherwise you will get different rows height like in Excel.
You could observe the same behaviour in SEDE.
LiveDemo-SEDELiveDemo-SEDE-TextView
Output:
You can compare the two behaviours using:
SELECT 'This is line 1.' + CHAR(13)+CHAR(10) + 'This is line 2.';
PRINT 'This is line 1.' + CHAR(13)+CHAR(10) + 'This is line 2.';
The SELECT collapses the break in the grid, while PRINT writes to the Messages tab, where the line break is visible.
The CR/LF chars are there, it's just that in the format of your output, they are being ignored.
I've created a fiddle to illustrate this, with two VARCHAR columns: in the first one I insert the text with no CR/LF, in the second I include them.
CREATE TABLE sample (dex INT, colnocr VARCHAR(50), col VARCHAR(50)) ;
insert into sample (dex, colnocr, col) values
(2,
'This is line 1.' + 'This is line 2.',
'This is line 1.' + CHAR(13) + CHAR(10) + 'This is line 2.'
);
If you run the query
SELECT * FROM sample
the results in plain text are:
| dex | colnocr | col |
|-----|--------------------------------|----------------------------------|
| 2 | This is line 1.This is line 2. | This is line 1.
This is line 2. |
but if you run it in tabular (grid) form:
dex colnocr col
2 This is line 1.This is line 2. This is line 1. This is line 2.
A bit late to this discussion, but in SSMS 2016, there is an option on the Tools | Options menu under Query Results / SQL Server / Results to Grid called "Retain CR/LF on copy or save". Checking this box will allow you to copy values from a cell in a grid result to, say, another query window and still have the line breaks.

SQL Server Bulk Insert with condition if the line is empty, do not read it

I just want to ask if there is a way to use the bulk insert feature but to check whether the last line is empty and skip it.
I have a text file that is being populated with data, but the last line will always be empty, because when the file is repopulated, writing starts there, at the end of the previous line that is already populated.
My query so far looks like this:
BULK INSERT #TEMP
FROM 'C:\Test\Test.txt'
WITH (FIELDTERMINATOR ='\t',
ROWTERMINATOR = '\r',
FIRSTROW = 2, KEEPNULLS)
The data is then loaded into a temp table, but the query never gets that far because of the last line in the text file. Is there a setting to skip the last line if it's empty?
If you're expecting one line to be non-importable, then include
MAXERRORS = 1
in the command, and BULK INSERT should import the other rows fine.
BULK INSERT #TEMP
FROM 'C:\Test\Test.txt'
WITH (FIELDTERMINATOR ='\t',
ROWTERMINATOR = '\r',
MAXERRORS = 1,
FIRSTROW = 2, KEEPNULLS)
Unless there's an unexpected problem in another line (in which case you'll want to trap the error anyway)
https://msdn.microsoft.com/en-us/library/ms188365.aspx
Well, I found my answer here: Difference between CR LF, LF and CR line break types?
That link explains the differences between the end-of-line characters, which is what solved my issue. I hope this works for others as well! Best of luck!
What I did was change ROWTERMINATOR = '\r' to ROWTERMINATOR = '0x0a'.
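With that one change, the statement from the question becomes:
BULK INSERT #TEMP
FROM 'C:\Test\Test.txt'
WITH (FIELDTERMINATOR = '\t',
ROWTERMINATOR = '0x0a',
FIRSTROW = 2, KEEPNULLS)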

Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 4 (Year)

I'm getting the conversion error when I try to import a text file to my database. Below is the error message I received:
Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 4 (Year).
Here is my query code:
CREATE TABLE Students
(
StudentNo Integer NOT NULL Primary Key,
FirstName VARCHAR(40) NOT NULL,
LastName VARCHAR(40) NOT NULL,
Year Integer,
GPA Float NULL
);
Here is the sample data from text file:
100,Christoph,Van Gerwen,2011
101,Anar,Cooke,2011
102,Douglis,Rudinow,2008
I think I know what the problem is. Below is my bulk insert code:
use xta9354
bulk insert xta9354.dbo.Students
from 'd:\userdata\xta9_Students.txt'
with (fieldterminator = ',',rowterminator = '\n')
With the sample data, there is no ',' after the Year attribute, even though there is still another attribute (GPA) after Year, which is NULL.
Can someone please tell me how to fix this?
Try using a format file since your data file only has 4 columns. Otherwise, try OPENROWSET or use a staging table.
myTestFormatFiles.Fmt may look like:
9.0
4
1 SQLCHAR 0 3   ","    1 StudentNo ""
2 SQLCHAR 0 100 ","    2 FirstName SQL_Latin1_General_CP1_CI_AS
3 SQLCHAR 0 100 ","    3 LastName  SQL_Latin1_General_CP1_CI_AS
4 SQLCHAR 0 4   "\r\n" 4 Year      ""
Each row lists the host field order, host data type, prefix length, maximum field length, terminator, destination column order, destination column name, and collation. Since the data file is plain text, every field uses SQLCHAR.
This tutorial on skipping a column with BULK INSERT may also help.
Your statement then would look like:
USE xta9354
GO
BULK INSERT xta9354.dbo.Students
FROM 'd:\userdata\xta9_Students.txt'
WITH (FORMATFILE = 'C:\myTestFormatFiles.Fmt')
In my case, I was dealing with a file that was generated by Hadoop on a Linux box. When I tried to import it into SQL Server I had this issue. The fix wound up being to use the hex value for line feed, 0x0a. It also worked for BULK INSERT:
BULK INSERT table FROM 'file'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '0x0a')
We use BULK INSERT as well. The file we upload is sent by an external party. After a while of troubleshooting, I realized that their file had columns containing commas. Just another thing to look for...
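If the offending fields are quoted, SQL Server 2017 and later can parse real CSV and keep embedded commas intact; a minimal sketch under that assumption (table name and path are placeholders):
BULK INSERT dbo.ExternalData
FROM 'C:\import\external.csv'
WITH (FORMAT = 'CSV', -- honors quoted fields, so embedded commas survive
FIELDQUOTE = '"', -- this is the default quote character
FIRSTROW = 2)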
The above options work for Google BigQuery files also. I exported table data to Google Cloud Storage and downloaded it from there. While loading it into SQL Server I faced this issue, and I could successfully load the file after specifying the row delimiter as
ROWTERMINATOR = '0x0a'
Pay attention to the header record as well and specify
FIRSTROW = 2
My final block for the data file exported from Google BigQuery looks like this:
BULK INSERT TABLENAME
FROM 'C:\ETL\Data\BigQuery\In\FILENAME.csv'
WITH
(
FIRSTROW = 2,
FIELDTERMINATOR = ',', --CSV field delimiter
ROWTERMINATOR = '0x0a', --Files are generated with this row terminator in Google BigQuery
TABLOCK
)
My guess is that it's an encoding problem: for instance, your file is UTF-8, but SQL Server does not read it the way it should, so it attempts to insert 100ÿ or something along those lines into your table.
Possible fixes:
Specify the code page
Change the encoding of the source file using PowerShell
Code samples:
1.
BULK INSERT myTable FROM 'c:\Temp\myfile.csv' WITH (
FIELDTERMINATOR = '£',
ROWTERMINATOR = '\n',
CODEPAGE = 'ACP' -- ACP corresponds to ANSI; also try '65001' for UTF-8
);
2.
get-content "myfile.csv" | Set-content -Path "myfile.csv" -Encoding String
# String = ANSI, also try Ascii, Oem, Unicode, UTF7, UTF8, UTF32
I added MSSQLSERVER full access to the folder, plus the diskadmin and bulkadmin server roles.
In my C# application, when preparing the bulk insert command, I had:
string strsql = "BULK INSERT PWCR_Contractor_vw_TEST FROM '" + strFileName + "' WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')";
And I got this error: Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1, column 8 (STATUS).
I looked at my log file and found that the terminator had become ' ' instead of '\n'.
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error:
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)". Query: BULK INSERT PWCR_Contractor_vw_TEST FROM 'G:\NEWSTAGEWWW\CalAtlasToPWCR\Results\parsedRegistration.csv' WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '')
So I added an extra escape to the row terminator:
string strsql = "BULK INSERT PWCR_Contractor_vw_TEST FROM '" + strFileName + "' WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n')";
And now it inserts successfully.
Bulk Insert SQL ---> BULK INSERT PWCR_Contractor_vw_TEST FROM 'G:\\NEWSTAGEWWW\\CalAtlasToPWCR\\Results\\parsedRegistration.csv' WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')
Bulk Insert to PWCR_Contractor_vw_TEST successful... ---> clsDatase.PerformBulkInsert