How to Bulk Insert via T-SQL without using FIELDTERMINATOR - sql-server-2000

I want to bulk insert data from a .dat file, but the problem is the file doesn't contain any character by which I could separate the different values.
Actually, the file contains codes generated by an attendance machine; the codes look like this:
31201201100915000100000043210001
31201205301806000200000043210011
The above 2 lines are one day's attendance for employee 4321. The 1st line is the TimeIn entry and the second line is the TimeOut entry;
the details are below:
31 - Machine Code
2012 - Year
01 - Month
10 - Day
09 - Hour
15 - Min
0001 - In or Out (0001 for In & 0002 for Out)
0000004321 - EmployeeCode
0001 - Terminal No (0001 for Terminal In & 0011 for Terminal Out)
Can I bulk import this file? If yes, then how? Can anyone tell me how I can solve this problem?
Thanks
I'm using SQL Server 2000 :(

You will need to get the data into a staging table by using BULK INSERT or BCP and then parse out the columns using the SUBSTRING() and CAST()/CONVERT() functions.
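A minimal sketch of that approach (the staging and destination table names here are made up, and the SUBSTRING offsets follow the layout described in the question):
-- Stage each fixed-width line into a single-column table; no FIELDTERMINATOR is needed
-- because the whole line is treated as one field.
CREATE TABLE dbo.AttendanceStaging (RawLine VARCHAR(50))

BULK INSERT dbo.AttendanceStaging
FROM 'c:\attendance.dat'
WITH (ROWTERMINATOR = '\n')

-- Parse the fixed positions out of each staged line.
INSERT INTO dbo.Attendance (MachineCode, PunchTime, InOutFlag, EmployeeCode, TerminalNo)
SELECT SUBSTRING(RawLine, 1, 2),
       -- '20120110 09:15' is an unambiguous yyyymmdd hh:mm literal
       CAST(SUBSTRING(RawLine, 3, 8) + ' ' +
            SUBSTRING(RawLine, 11, 2) + ':' + SUBSTRING(RawLine, 13, 2) AS DATETIME),
       SUBSTRING(RawLine, 15, 4),
       CAST(SUBSTRING(RawLine, 19, 10) AS INT),
       SUBSTRING(RawLine, 29, 4)
FROM dbo.AttendanceStaging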

Read the .dat file into a string array, one element per line:
string[] lines = System.IO.File.ReadAllLines("dat.file");
Then for each line, do something along the lines of:
int machineCode = int.Parse(lines[0].Substring(0, 2));
int year = int.Parse(lines[0].Substring(2, 4));

Related

Additional 0 in varbinary insert in SSMS

I have a problem when I am trying to move a varbinary(max) field from one DB to another.
If I insert like this:
0xD0CF11E0A1B11AE10000000
It results the beginning with an additional '0':
0x0D0CF11E0A1B11AE10000000
And I cannot get rid of this. I've tried many tools, like the SSMS export tool or BCP, but without any success. And it would be better for me to solve it in a script anyway.
I don't have much knowledge about varbinary (a program generates it); my only goal is to copy it :)
0xD0CF11E0A1B11AE10000000
This value contains an odd number of characters. Varbinary stores bytes, and each byte is represented by exactly two hexadecimal characters. You're either missing a character, or you're not storing whole bytes.
Here, SQL Server is guessing that the most significant digit is a zero, which would not change the numeric value of the string. For example:
select 0xD0C "value"
,cast(0xD0C as int) "as_integer"
,cast(0x0D0C as int) "leading_zero"
,cast(0xD0C0 as int) "trailing_zero"
value       as_integer   leading_zero   trailing_zero
----------  -----------  -------------  --------------
0d0c        3340         3340           53440
Or:
select 1 "test"
where 0xD0C = 0x0D0C
test
-------
1
It all comes down to SQL Server assuming that a varbinary value always represents whole bytes.
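If the goal is simply to script the value from one database to the other, one option is to let CONVERT build the hex literal instead of typing it by hand, so the digit count always comes out even (a sketch; it assumes SQL Server 2008 or later, where the binary styles of CONVERT are available):
DECLARE @source VARBINARY(MAX) = 0x0D0CF11E0A1B11AE10000000;

-- Style 1 renders the value as an '0x...' string with exactly two hex digits per byte.
DECLARE @literal VARCHAR(MAX) = CONVERT(VARCHAR(MAX), @source, 1);
SELECT @literal AS script_literal;

-- The same style converts the string back to varbinary with no padding surprises.
SELECT CONVERT(VARBINARY(MAX), @literal, 1) AS round_trip;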

SQL Server: REPLACE script error gives me "String or binary data would be truncated"

I am self-taught with SQL Server. I have been using REPLACE in scripts lately. I have to replace a string inside of a varchar column.
An example of what I need to change is 15151500001500000000, where I need to change the last 00 to 10.
This is the script I am using:
UPDATE xxxxx
SET craftname = REPLACE(craftname, 00, 10)
WHERE craftname like '%00'
However, it gives me this error every time:
String or binary data would be truncated.
I searched around the net and from what I can see, the most common reason is that the column is getting too big, but here I am replacing 2 digits with 2 digits, so that shouldn't happen.
Any ideas?
Try using strings instead of integers:
UPDATE xxxxx
SET craftname = REPLACE(craftname, '00', '10')
WHERE craftname like '%00';
The problem with your version is that every 0 is replaced by 10, which increases the length of the string. The integers are turned into strings using "reasonable" representations, so 00 becomes '0', rather than '00'.
The above still won't do what you want, because it will replace every occurrence of 00 with 10. I included it to show you how to fix the run-time error.
If you just want to change the last two characters, don't use REPLACE(). Instead:
UPDATE xxxxx
SET craftname = LEFT(craftname, len(craftname) - 2) + '10'
WHERE craftname like '%00';
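A quick way to see the length problem with the sample value from the question (just a sketch):
DECLARE @craftname VARCHAR(20)
SET @craftname = '15151500001500000000'

-- This mimics what the integer arguments end up doing: every single '0' becomes '10',
-- so the 20-character value grows and no longer fits the column on UPDATE.
SELECT REPLACE(@craftname, '0', '10') AS every_zero_replaced,
       LEN(REPLACE(@craftname, '0', '10')) AS new_length

-- Changing only the trailing characters keeps the length at 20.
SELECT LEFT(@craftname, LEN(@craftname) - 2) + '10' AS last_two_changed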

Bulk insert issue

During a bulk insert from a CSV file, a row in the file has the value 00000100008; both the source (from which the CSV file is created) and the destination temptable have the same field type (char(11)).
When I try to insert, I get the following error:
Bulk load data conversion error (truncation) for row 1, column 1 (fieldname)
If I remove the leading 0s and change this value to 100008 in the CSV file and then bulk insert, the destination table temptable shows ++ 100008 as the inserted value. Why is that? How can I get the value in without the leading double plus signs?
Here is the script:
BULK
INSERT temptable
FROM 'c:\TestFile.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
Edit: Some sample records from the CSV file:
100008,111122223333,Mr,ForeName1,SurName1,1 Test Lane,London,NULL,NULL,NULL,wd25 123,test#email.com.com,NULL
322,910315715845,Ms,G,Dally,17 Elsie Street,NULL,NULL,GOOLE,NULL,DN146DU,test1#email1.com,
323,910517288401,Mrs,G,Tom,2 White Mead,NULL,NULL,YEOVIL,NULL,BA213RS,test3#tmail2.com,
My first thought is that the file was saved on a Unix system and that you may have incompatibilities with the different style line breaks.
My first advice would be to analyze the text file using a hex editor to try to determine what character is getting put there.
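If it does turn out to be Unix-style line endings, one thing worth trying (a sketch; the hexadecimal row terminator notation is only available on more recent SQL Server versions) is to tell BULK INSERT that rows end with a bare line feed:
BULK
INSERT temptable
FROM 'c:\TestFile.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '0x0a'   -- LF only, with no carriage return
)
GO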
++ 100008 basically means that the row format is inconsistent with the page header. To solve this problem, run DBCC CHECKTABLE.
I hope that this is going to help you.
Regards,

Transfer from table to notepad with separator

Goal:
Retrieve data in a notepad file with ';' as a separator between columns.
The data in the notepad should be:
2001-11-11 00:00:000;1
2001-11-11 00:00:000;2
2001-11-11 00:00:000;0
Problem:
How should I transfer the data below from the table into notepad?
(datetime)            (int)
date                  Number
--------------------  ------
2001-11-11 00:00:000  1
2001-11-11 00:00:000  2
2001-11-11 00:00:000  0
2001-11-11 00:00:000  4
Go into Tools > Options, open the Query Results tree, then SQL Server, Results to Text. In there you will see the output format, and you should be able to choose a custom delimiter, which you can then set to a semicolon.
If you now change your output to results to text (Ctrl+T) or results to file (Ctrl+Shift+F), you should get the output you desire.
If your database is MySQL for example, you just dump the database using:
mysqldump --fields-terminated-by=str databasename
where str is ";".
select "results to file" on the menu and run:
select CAST(date as varchar(20))+';'+CAST(number as varchar(20))
from yourTable
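If you would rather script it than click through the options dialog, bcp with a custom field terminator does the same thing from the command line (a sketch; the server name, database, table and output path are placeholders):
bcp "SELECT [date], [Number] FROM YourDatabase.dbo.yourTable" queryout "C:\output.txt" -c -t ";" -S yourServer -T
Here -c writes plain character data, -t ";" sets the field terminator, -S names the server and -T uses Windows authentication.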

Import fixed width text to SQL

We have records in this format:
99 0882300 25 YATES ANTHONY V MAY 01 12 04 123456 12345678
The width is fixed and we need to import it into SQL Server. We tried bulk import, but it didn't work because it's not ',' or '\t' separated. The fields are separated by runs of spaces of various lengths in the text file, which is where our dilemma lies.
Any suggestions on how to handle this? Thanks!
The question is pretty old, but it might still be relevant.
I had exactly the same problem as you.
My solution was to use BULK INSERT, together with a FORMAT file.
This would allow you to:
keep the code much leaner
have the mapping for the text file to upload in a separate file that you can easily tweak
skip columns if you fancy
To cut to the chase, here is my data format (that is one line)
608054000500SS001 ST00BP0000276AC024 19980530G10379 00048134501283404051N02912WAC 0024 04527N05580WAC 0024 1998062520011228E04ST 04856 -94.769323 26.954832
-94.761114 26.953626G10379 183 1
And here is my SQL code:
BULK INSERT dbo.TARGET_TABLE
FROM 'file_to_upload.dat'
WITH (
BATCHSIZE = 2000,
FIRSTROW = 1,
DATAFILETYPE = 'char',
ROWTERMINATOR = '\r\n',
FORMATFILE = 'formatfile.Fmt'
);
Please note the ROWTERMINATOR parameter set there, and the DATAFILETYPE.
And here is the format file
11.0
6
1 SQLCHAR 0 12 "" 1 WELL_API SQL_Latin1_General_CP1_CI_AS
2 SQLCHAR 0 19 "" 2 SPACER1 SQL_Latin1_General_CP1_CI_AS
3 SQLCHAR 0 8 "" 3 FIELD_CODE SQL_Latin1_General_CP1_CI_AS
4 SQLCHAR 0 95 "" 4 SPACER2 SQL_Latin1_General_CP1_CI_AS
5 SQLCHAR 0 5 "" 5 WATER_DEPTH SQL_Latin1_General_CP1_CI_AS
6 SQLCHAR 0 93 "" 6 SPACER3 SQL_Latin1_General_CP1_CI_AS
I put documentation links below, but what you must note is the following:
the ""s in the 5th column, which indicates the separator (for a .csv would be obviously ","), which in our case is set to just "";
column 2 is fully "SQLCHAR", as it's a text file. This must stay so even if the destination field in the data table is for example an integer (it is my case)
Bonus note: in my case I only needed three fields, so the stuff in the middle I just called "spacer", and in my format file gets ignored (you change numbers in column 6, see documentation).
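To make that bonus note concrete (a sketch based on the format file above; in a non-XML format file, setting the server column order in the 6th column to 0 tells BULK INSERT to skip that field, so the target table only needs the three real columns):
11.0
6
1 SQLCHAR 0 12 "" 1 WELL_API SQL_Latin1_General_CP1_CI_AS
2 SQLCHAR 0 19 "" 0 SPACER1 SQL_Latin1_General_CP1_CI_AS
3 SQLCHAR 0 8 "" 2 FIELD_CODE SQL_Latin1_General_CP1_CI_AS
4 SQLCHAR 0 95 "" 0 SPACER2 SQL_Latin1_General_CP1_CI_AS
5 SQLCHAR 0 5 "" 3 WATER_DEPTH SQL_Latin1_General_CP1_CI_AS
6 SQLCHAR 0 93 "" 0 SPACER3 SQL_Latin1_General_CP1_CI_AS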
Hope it answers your needs, works fine for me.
Cheers
Documentation here:
https://msdn.microsoft.com/en-us/library/ms178129.aspx
https://msdn.microsoft.com/en-us/library/ms187908.aspx
When you feel more at home with SQL than importing tools, you could bulk import the file into a single VARCHAR(255) column in a staging table. Then process all the records with SQL and transform them to your destination table:
CREATE TABLE #DaTable(MyString VARCHAR(255))
INSERT INTO #DaTable(MyString) VALUES ('99 0882300 25 YATES ANTHONY V MAY 01 12 04 123456 12345678')
INSERT INTO FinalTable(Col1, Col2, Col3, Name)
SELECT CAST(SUBSTRING(MyString, 1, 3) AS INT) as Col1,
       CAST(SUBSTRING(MyString, 4, 7) AS INT) as Col2,
       CAST(SUBSTRING(MyString, 12, 3) AS INT) as Col3,
       SUBSTRING(MyString, 15, 6) as Name
FROM #DaTable
result: 99 882300 25 YATES
To import from TXT to SQL:
CREATE TABLE #DaTable (MyString VARCHAR(MAX));
And to import from a file:
BULK INSERT #DaTable
FROM 'C:\Users\usu...IDA_S.txt'
WITH
(
CODEPAGE = 'RAW'
)
3rd party edit
The sqlite docs on importing files have an example of inserting records into a pre-existing temporary table from a file that has column names in its first row:
sqlite> .import --csv --skip 1 --schema temp C:/work/somedata.csv tab1
My advice is to import the whole file into a new table (TestImport) with 1 column, like this:
sqlite> .import C:/yourFolder/text_flat.txt TestImport
and save it to a db file
sqlite> .save C:/yourFolder/text_flat_out.db
And now you can do all sorts of ETL with it.
I did this for a client a while back and, sad as it may seem, Microsoft Access was the best tool for the job for his needs. It's got support for fixed-width files baked in.
Beyond that, you're looking at writing a script that translates the file's rows into something SQL can understand in an insert/update statement.
In Ruby, you could use the String#slice method, which takes an index and a length, just as fixed-width file definitions are usually expressed. Read the file in, parse the lines, and write them back out as SQL statements.
Use SSIS instead.
It is much clearer and has various options for importing (text) files.