SQL Bulk Insert skipping last 849 Lines from Text File

Good day all! For some reason my bulk insert statement is skipping the last 849 lines of the text file I am reading. I know this because when I manually add my own last line, I don't see it in the table after the insert is done, and when debugging I see the message (133758 row(s) affected), while the text file has 134607 lines, excluding the first 2.
My query looks like this:
BULK INSERT #TEMP FROM 'C:\Test\Test.txt'
WITH (FIELDTERMINATOR ='\t', ROWTERMINATOR = '0x0a', FIRSTROW = 2, MAXERRORS = 50, KEEPNULLS)
I have checked whether there are more columns than the table has, and that's not the case. I have changed MAXERRORS from 10 to 20 to 30 to 40 to 50 to see if there are any changes, but the row(s) affected count stays the same. Is there maybe something I haven't handled or am missing?
Thanks awesome people.
P.S. I am using this insert for another text file and table, with different column headers and fewer columns in the text file, and it works perfectly.
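A diagnostic sketch (the error file path below is just a placeholder): BULK INSERT can write the rows it rejects to an error file via the ERRORFILE option, which makes it easier to see which lines are being dropped and why.
BULK INSERT #TEMP FROM 'C:\Test\Test.txt'
WITH (FIELDTERMINATOR ='\t', ROWTERMINATOR = '0x0a', FIRSTROW = 2, MAXERRORS = 50, KEEPNULLS, ERRORFILE = 'C:\Test\Test_errors.log')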

Related

How to generate insert SQL statement in excel with other records

I have some data shown in the picture below. I also have an insert SQL statement extracted from the database.
How can I write the insert SQL statement so that I can include the other records as well?
INSERT INTO "tblcompany" ("fldComID", "fldComCode", "fldComName", "fldComChiName", "fldComTaxNo", "fldMPFMemberID", "fldCreateDate", "fldCreateEmpName", "fldLastMDate", "fldLastMEmpName")
VALUES (1, 'Code 1', 'Company 1', 'Company Chi. 1', '1D1-20978121', 'MPFMemberID 1', '2020-06-12 09:52:27.000', 'E001', '2020-06-12 09:52:27.000', 'E001');
Your image isn't showing up, so I'm guessing at what you want here.
If you're asking how to load a bunch of data from Excel into a table, then I would recommend converting the file to CSV and using psql, if it's supported and you have thousands of records. You'll need a machine that supports it; log into that machine via a command prompt and run these two commands. Replace all bracketed values with your own names and remove the brackets.
psql -h [myserver.address.com] -d [database_name] -U [user_name]
\copy [my_table_name_on_database] from '[my_csv_file.csv]' csv header delimiter ','
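\copy loads into a table that already exists, so you would create the target up front. A minimal sketch, assuming the column names from the sample data and guessing at the types:
CREATE TABLE my_table_name_on_database (
    fldComID      integer,
    fldComCode    varchar(50),
    fldComName    varchar(100),
    fldComChiName varchar(100)
);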
But if you're old fashioned and just looking to manually insert a few rows from Excel into a database table, then here's the format of the formula you want to write in Excel. This sample uses Excel data with 4 columns; the first column is numeric, while the others are strings.
So, excel data looks like this:
fldComID fldComCode fldComName fldComChiName
1 Code_1 Company1 CompanyChi.g
2 Code_x Company2 CompanyChi.me
3 Code_44 Company3 CompanyChi.u
In column E, place this formula:
=CONCAT("(",A2,", '",B2,"', '",C2,"', '",D2,"'),")
The output will look like this:
(1, 'Code_1', 'Company1', 'CompanyChi.g'),
(2, 'Code_x', 'Company2', 'CompanyChi.me'),
(3, 'Code_44', 'Company3', 'CompanyChi.u'),
So, your insert statement will just be:
insert into my_table values
(1, 'Code_1', 'Company1', 'CompanyChi.g'),
(2, 'Code_x', 'Company2', 'CompanyChi.me'),
(3, 'Code_44', 'Company3', 'CompanyChi.u')
You just need to repeat the formula pattern for each additional column you want to include. Single quotes are needed for varchar columns; leave them out for integer/numeric columns. Just remember to drop the trailing comma from the last generated row before running the insert.

Add specific column length padding to an already populated column

I have a table column filled with values. The column is varchar(100). The problem I have is that I need to take whatever data is there and expand it with padding until each value is 100 characters.
All of this goes into a fixed-width flat file.
Each entry in the column is between 20 and 30 characters right now. The padding I will be adding is blank spaces.
My problem is that I am not sure how to write an update statement to handle updating the existing data. I have already fixed the future data.
Any suggestions?
You can do this with an update. The simplest method, though, is probably to change the type to char(100):
alter table t alter column col char(100);
char() pads values with spaces on the right.
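As a quick throwaway check of that behaviour (the variable is made up):
declare @c char(10) = 'abc';
select datalength(@c);   -- returns 10, because char stores the trailing spaces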
Alternatively, you can just do:
update t
set col = col + replicate(' ', 100);
I don't think SQL Server complains about the size difference. If you are concerned about a potential error, you can do:
update t
set col = col + replicate(' ', 100 - len(col));
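Putting it together, a self-contained sketch (the temp table and sample values are made up). Note that len() ignores trailing spaces, so datalength() is the better way to confirm the stored width:
create table #padded (col varchar(100));
insert into #padded (col) values ('twenty characters xx'), ('roughly thirty characters here');

update #padded
set col = col + replicate(' ', 100 - len(col));

-- visible_length keeps the original length, stored_length should now be 100
select col, len(col) as visible_length, datalength(col) as stored_length
from #padded;

drop table #padded;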

Update varbinary(MAX) field in SQLServer 2012 Lost Last 4 bits

Recently I needed to do some data patching and tried to update a column of type varbinary(MAX). The update value is like this:
0xFFD8F...6DC0676
However, after update query run successfully, the value becomes:
0x0FFD8...6DC067
It seems the last 4 bits are lost, or the whole value has been shifted right by half a byte...
I tried deleting the entire row and running an insert query; the same thing happens!
Can anyone tell me why is this happening & how can I solve it? Thanks!
I have tried several varying lengths of binary; up to a maximum of 43658 characters (each representing 4 bits, around 21 KB in total) the update query runs normally. One more character makes the above "bug" appear...
PS1: For a shorter length varbinary as update value, everything is okay
PS2: I can post whole binary string out if it helps, but it is really long and I am not sure if it's suitable to post here
EDITED:
Thanks for any help!
As someone suggested, the inserted value may have an odd number of hex digits, so a 0 is appended in front of it. Here is my updated information on the value:
The value is 43677 characters long excluding "0x", which means yes, it is odd.
It does explain why a '0' is inserted at the front, but does not explain why the last character disappears...
Then I do an experiment:
I inserted an even-length value, manually adding a '0' before the original value.
Now the value to be updated is
0x0FFD8F...6DC0676
which is 43678 characters long, excluding "0x"
No luck; the updated value is still
0x0FFD8...6DC067
It seems that the binary constant 0xFFD8F...6DC0676 that you used for the update contains an odd number of hex digits, so SQL Server added a half-byte at the beginning of the pattern so that it represents a whole number of bytes.
You can see the same effect running the following simple query:
select 0x1, 0x104
This will return 0x01 and 0x0104.
The truncation may be due to a limitation in SSMS, which can be observed in the following experiment:
declare @b varbinary(max)
set @b = 0x123456789ABCDEF0
set @b = convert(varbinary(max), replicate(@b, 65536/datalength(@b)))
select datalength(@b) DataLength, @b Data
The results returned are 65536 and 0x123456789ABCDEF0...EF0123456789ABCD; however, if I copy the Data column in SSMS, I get a pattern 43677 characters long (without the leading 0x), which is effectively 21838.5 bytes. So it seems you should not rely on long binary data values obtained via copy/paste in SSMS.
The reliable alternative can be using intermediate variable:
declare @data varbinary(max)
select @data = DataXXX from Table_XXX where ID = XXX
update Table_YYY set DataYYY = @data where ID = YYY
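A hedged follow-up check, reusing the placeholder names above: comparing byte counts is more reliable than eyeballing long hex strings in the SSMS grid.
-- both counts should match if nothing was lost along the way
select datalength(DataXXX) as SourceBytes from Table_XXX where ID = XXX
select datalength(DataYYY) as TargetBytes from Table_YYY where ID = YYY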

Bulk insert issue

During a bulk insert from a csv file, a row in the file has the value 00000100008; both the source (from which the csv file is created) and the destination temptable have the same field type (char(11)).
When I try to insert, I get the following error:
Bulk load data conversion error (truncation) for row 1, column 1 (fieldname)
If I remove the leading 0s and change this value to 100008 in the csv file and then bulk insert, the destination table temptable shows '++ 100008 as the inserted value. Why is that? How can I get the value in without the leading double plus signs?
Here is the script:
BULK
INSERT temptable
FROM 'c:\TestFile.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
Edit: Some sample records from csv file.
100008,111122223333,Mr,ForeName1,SurName1,1 Test Lane,London,NULL,NULL,NULL,wd25 123,test@email.com.com,NULL
322,910315715845,Ms,G,Dally,17 Elsie Street,NULL,NULL,GOOLE,NULL,DN146DU,test1@email1.com,
323,910517288401,Mrs,G,Tom,2 White Mead,NULL,NULL,YEOVIL,NULL,BA213RS,test3@tmail2.com,
My first thought is that the file was saved on a Unix system and that you may have incompatibilities with the different style line breaks.
My first advice would be to analyze the text file using a hex editor to try to determine what character is getting put there.
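If the hex editor shows bare LF (0x0A) line endings rather than CR+LF, here is a hedged sketch of the same bulk insert with an explicit line-feed row terminator:
BULK INSERT temptable
FROM 'c:\TestFile.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '0x0a'
)
GO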
++ 100008 basically means that the row format is inconsistent with the page header. To solve this problem, run DBCC CHECKTABLE.
I hope that this is going to help you.
Regards,

Import fixed width text to SQL

We have records in this format:
99 0882300 25 YATES ANTHONY V MAY 01 12 04 123456 12345678
The width is fixed and we need to import it into SQL. We tried bulk import, but it didn't work because the file isn't ',' or '\t' separated. It's separated by runs of spaces of various lengths, which is where our dilemma lies.
Any suggestions on how to handle this? Thanks!
The question is pretty old but might still be relevant.
I had exactly the same problem as you.
My solution was to use BULK INSERT, together with a FORMAT file.
This would allow you to:
keep the code much leaner
have the mapping for the text file to upload in a separate file that you can easily tweak
skip columns if you fancy
To cut to the chase, here is my data format (that is one line)
608054000500SS001 ST00BP0000276AC024 19980530G10379 00048134501283404051N02912WAC 0024 04527N05580WAC 0024 1998062520011228E04ST 04856 -94.769323 26.954832
-94.761114 26.953626G10379 183 1
And here is my SQL code:
BULK INSERT dbo.TARGET_TABLE
FROM 'file_to_upload.dat'
WITH (
BATCHSIZE = 2000,
FIRSTROW = 1,
DATAFILETYPE = 'char',
ROWTERMINATOR = '\r\n',
FORMATFILE = 'formatfile.Fmt'
);
Please note the ROWTERMINATOR parameter set there, and the DATAFILETYPE.
And here is the format file
11.0
6
1 SQLCHAR 0 12 "" 1 WELL_API SQL_Latin1_General_CP1_CI_AS
2 SQLCHAR 0 19 "" 2 SPACER1 SQL_Latin1_General_CP1_CI_AS
3 SQLCHAR 0 8 "" 3 FIELD_CODE SQL_Latin1_General_CP1_CI_AS
4 SQLCHAR 0 95 "" 4 SPACER2 SQL_Latin1_General_CP1_CI_AS
5 SQLCHAR 0 5 "" 5 WATER_DEPTH SQL_Latin1_General_CP1_CI_AS
6 SQLCHAR 0 93 "" 6 SPACER3 SQL_Latin1_General_CP1_CI_AS
I put documentation links below, but what you must note is the following:
the ""s in the 5th column, which indicates the separator (for a .csv would be obviously ","), which in our case is set to just "";
column 2 is fully "SQLCHAR", as it's a text file. This must stay so even if the destination field in the data table is for example an integer (it is my case)
Bonus note: in my case I only needed three fields, so the stuff in the middle I just called "spacer", and in my format file gets ignored (you change numbers in column 6, see documentation).
Hope it answers your needs, works fine for me.
Cheers
Documentation here:
https://msdn.microsoft.com/en-us/library/ms178129.aspx
https://msdn.microsoft.com/en-us/library/ms187908.aspx
When you feel more at home with SQL than with import tools, you could bulk import the file into a single VARCHAR(255) column in a staging table, then process all the records with SQL and transform them into your destination table:
CREATE TABLE #DaTable(MyString VARCHAR(255))
INSERT INTO #DaTable(MyString) VALUES ('99 0882300 25 YATES ANTHONY V MAY 01 12 04 123456 12345678')
INSERT INTO FInalTable(Col1, Col2, Col3, Name)
SELECT CAST(SUBSTRINg(MyString, 1, 3) AS INT) as Col1,
CAST(SUBSTRING(MyString, 4, 7) AS INT) as Col2,
CAST(SUBSTRING(MyString, 12, 3) AS INT) as Col3,
SUBSTRING(MyString, 15, 6) as Name
FROM #DaTable
result: 99 882300 25 YATES
To import from TXT to SQL:
CREATE TABLE #DaTable (MyString VARCHAR(MAX));
And to import from a file
BULK INSERT #DaTable
FROM 'C:\Users\usu...IDA_S.txt'
WITH
(
CODEPAGE = 'RAW'
)
3rd party edit
The sqlite docs on importing files have an example of inserting records into a pre-existing temporary table from a file which has column names in its first row:
sqlite> .import --csv --skip 1 --schema temp C:/work/somedata.csv tab1
My advice is to import the whole file into a new table (TestImport) with 1 column, like this:
sqlite> .import C:/yourFolder/text_flat.txt TestImport
and save it to a db file
sqlite> .save C:/yourFolder/text_flat_out.db
And now you can do all sorts of ETL with it.
I did this for a client a while back and, sad as it may seem, Microsoft Access was the best tool for the job for his needs. It's got support for fixed width files baked in.
Beyond that, you're looking at writing a script that translates the file's rows into something SQL can understand in an insert/update statement.
In Ruby, you could use the String#slice method, which takes an index and a length, just as fixed-width file definitions are usually expressed. Read the file in, parse the lines, and write them back out as SQL statements.
Use SSIS instead. It is much clearer and has various options for the import of (text) files.