ignore bcp right truncation - sql

I have a file with stock information, such as ticker and stock price. The file was loaded into a database table using freebcp. The stock price format in the file looks like 23.125, while the stock price column in the database table is [decimal](28, 2). freebcp loaded the data into the table without any problem by ignoring the last digit: 23.12 was loaded into the column for that record. We are now using Microsoft SQL Server's bcp utility (version 11.0) to load the data. However, we now encounter an issue: bcp considers loading 23.125 into decimal(28, 2) an error (## Row 783, Column 23: String data, right truncation ##) and rejects the record.
I don't want to modify the input file, because there are a lot of columns in the file that would need to be fixed by removing their last digit.
Is there any way to configure bcp or Microsoft SQL Server to ignore the right truncation error?

A common workaround back in the day was to BCP into a secondary/temp table, then do a SELECT (columnlist) INTO the base table with the necessary conversion. Another option is to use the OPENROWSET Bulk Rowset Provider, which lets you cast/convert as needed.
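A minimal sketch of the OPENROWSET approach (the table, file, and format-file names here are hypothetical, and the format file would describe the source columns):
INSERT INTO dbo.StockPrices (Ticker, Price)
SELECT src.Ticker,
       CAST(src.Price AS decimal(28, 2))   -- explicit conversion; note CAST rounds rather than truncates
FROM OPENROWSET(BULK 'C:\data\stocks.dat',
                FORMATFILE = 'C:\data\stocks.fmt') AS src;
The same CAST works if you BCP into a wider staging table first and then INSERT ... SELECT into the base table.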

I encountered this error today and fixed it by using the -m parameter (SQL Server version 15):
bcp dbo.<table> in <csv file> -S <server> -d <db> -U <user> -P <psw> -m 999999 -q -c -t ,
Reference: https://learn.microsoft.com/en-us/sql/tools/bcp-utility?view=sql-server-ver15#m
Note:
The -m option also does not apply to converting the money or bigint data types.

Related

How to fix BCP file with widechar support that causes Pentaho data integration to fail on insert of values from first character of first row?

I have a bat file that collects data with a bcp extract call that executes a stored procedure (SP) with the -w flag. When the data from that file is consumed by our Pentaho transformation, there is an additional character added to the first value in any row. The CSV input step uses "UTF-16LE", but the first field has garbage characters before the value (e.g. "1" arrives with an invisible prefix, likely a byte-order mark, instead of plain "1"). Is there an additional option to bcp that can add a header row, or is there something that can cleanse this character on the Pentaho side?
Sample BCP command :
bcp "exec [companyschema].[collectdataprocedure] %SESSIONID%" queryout collectedoutput.csv -t "," -w -T -S
The issue occurs when I try to load into the database within the transformation.
I have tried skipping the first row of the data, but I do need to have that data loaded into the db.
Found an answer to the issue: use the Replace in String step with a Search Pattern of "([^A-Za-z0-9\-])" and Set empty string set to "Y" to replace the first field in your row, keeping the same field name.
This resolved the issue of losing the first row of data.

MySQL mysqldump command error (bug in MySQL 5.5)

I am working on exporting a table from my server DB, which is about a few thousand rows, and phpMyAdmin is unable to handle it, so I switched to the command-line option.
But I am running into this error after executing the mysqldump command. The error is:
Couldn't execute 'SET OPTION SQL_QUOTE_SHOW_CREATE=1': You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'OPTION SQL_QUOTE_SHOW_CREATE=1' at line 1 (1064)
After doing some searching on this, I found it reported as a bug, with MySQL 5.5 not supporting the SET OPTION command.
I am running an EC2 instance with CentOS on it. My MySQL version is 5.5.31 (from my phpinfo).
I would like to know if there is a fix for this, as it won't be possible to upgrade the entire database for this error.
Or, if there is any other alternative way to do an export or dump, please suggest it.
An alternative to mysqldump is the SELECT ... INTO form of SELECT, which allows results to be written to a file (http://dev.mysql.com/doc/refman/5.5/en/select-into.html).
Some example syntax from the above help page is:
SELECT a,b,a+b INTO OUTFILE '/tmp/result.txt'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM test_table;
Data can then be loaded back in using LOAD DATA INFILE (http://dev.mysql.com/doc/refman/5.5/en/load-data.html).
Again the page gives an example:
LOAD DATA INFILE '/tmp/test.txt' INTO TABLE test
FIELDS TERMINATED BY ',' LINES STARTING BY 'xxx';
And here is a complete worked example pair from that page:
When you use SELECT ... INTO OUTFILE in tandem with LOAD DATA INFILE
to write data from a database into a file and then read the file back
into the database later, the field- and line-handling options for both
statements must match. Otherwise, LOAD DATA INFILE will not interpret
the contents of the file properly. Suppose that you use SELECT ...
INTO OUTFILE to write a file with fields delimited by commas:
SELECT * INTO OUTFILE 'data.txt' FIELDS TERMINATED BY ','
FROM table2;
To read the comma-delimited file back in, the correct statement would
be:
LOAD DATA INFILE 'data.txt' INTO TABLE table2 FIELDS TERMINATED BY ',';
Not tested, but something like this:
cat yourdumpfile.sql | grep -v "SET OPTION SQL_QUOTE_SHOW_CREATE" | mysql -u user -p -h host databasename
This pipes the dump into your database but removes the lines containing "SET OPTION SQL_QUOTE_SHOW_CREATE"; the -v flag inverts the match, so those lines are excluded.
I couldn't find the English manual entry for SQL_QUOTE_SHOW_CREATE to link here, but you don't need this option at all when your table and database names don't include special characters or the like (meaning they don't need to be quoted).
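For reference, a minimal illustration of what the option controls, using SET SESSION rather than the SET OPTION form the server rejects (behaviour as I understand it; verify against your server version):
SET SESSION sql_quote_show_create = 1;  -- SHOW CREATE output wraps identifiers in backticks
SHOW CREATE TABLE test_table;           -- emits CREATE TABLE `test_table` (...) instead of CREATE TABLE test_table (...)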
UPDATE:
mysqldump -u user -p -h host database | grep -v "SET OPTION SQL_QUOTE_SHOW_CREATE" > yourdumpfile.sql
Then, when you insert the dump into the database, you don't have to do anything special.
mysql -u user -p -h host database < yourdumpfile.sql
I used a quick and dirty hack for this.
Download MySQL 5.6 (from https://downloads.mariadb.com/archive/signature/p/mysql/f/mysql-5.6.13-linux-glibc2.5-x86_64.tar.gz/v/5.6.13).
Untar it and use the newly downloaded mysqldump.

Fastest way to copy table content from one server to another

I'm looking for the fastest way to copy some tables from one Sybase server (ASE 12.5) to another. Currently I'm using the bcp tool, but it takes time to create a proper bcp.fmt file.
The tables have the same structure. There are about 25K rows in every table, and I have to copy about 40 tables.
I tried to use the -c parameter for bcp, but I get errors while importing:
CSLIB Message: - L0/O0/S0/N24/1/0:
cs_convert: cslib user api layer: common library error: The conversion/operation
was stopped due to a syntax error in the source field.
My standard bcp in/out commands:
bcp.exe SPEPL..VSoftSent out VSoftSent.csv -U%user% -P%pass% -S%srv% -c
bcp.exe SPEPL..VSoftSent in VSoftSent.csv -U%user2% -P%pass2% -S%srv2% -e import.err -c
Since you are copying between different servers, BCP is the way to go!
If it were on the same server, it would be a different story.
Are you saying it's from one Sybase ASE host to another Sybase ASE host?
If you don't want to mess with BCP or I/O on the file system, you could create a CIS proxy table in your destination database that references either a stored procedure with a select statement or a physical table in your source database.
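A rough sketch of setting one up (server, login, and proxy names here are purely illustrative; check the ASE CIS documentation for the exact arguments on your version):
exec sp_addserver SRCSRV, ASEnterprise                            -- SRCSRV must resolve via the interfaces file
exec sp_addexternlogin SRCSRV, locallogin, remotelogin, remotepwd -- map a local login to a remote one
create proxy_table proxytablename at "SRCSRV.SPEPL.dbo.VSoftSent" -- local proxy over the remote table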
Then you could just
insert into destinationtable (col1, col2...)
select
col1, col2...
from proxytablename
CIS proxy is fairly resource intensive, so I'd be very careful about how much work you're doing here.

Moving results of T-SQL query to a file without using BCP?

What I want to do is output some query results to a file. Basically, when I query the table I'm interested in, my results look like this:
HTML_ID HTML_CONTENT
1 <html>...
2 <html>...
3 <html>...
4 <html>...
5 <html>...
6 <html>...
7 <html>...
The field HTML_CONTENT is of type ntext, and each record's value is around 500+ characters of HTML content.
I can create a cursor to move each record's content to a temp table or whatever.
But my question is this: instead of a temp table, how would I move this out without using BCP?
BCP isn't an option, as our sysadmin has blocked access to sys.xp_cmdshell.
Note: I want to store each record's HTML content in individual files.
My version of sql is: Microsoft SQL Server 2008 (SP1) - 10.0.2531.0
You can make use of SSIS to read the table data and write the content of each table row out as a file. The Export Column transformation, available within the Data Flow Task of SSIS packages, can help you do that.
Here is an example: The Export Column Transformation
MSDN documentation about the Export Column transformation.
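A sketch of the kind of source query you could feed into the Export Column transformation, which needs both a data column and a column holding each target file path (the table name and output folder are hypothetical):
SELECT HTML_ID,
       HTML_CONTENT,
       N'C:\export\html_' + CAST(HTML_ID AS nvarchar(20)) + N'.html' AS FilePath
FROM dbo.HtmlPages;   -- map HTML_CONTENT as the extract column and FilePath as the file path column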
This answer would have worked until you added the requirement for Individual Files.
You can run the SQL from command line and dump the output into a file. The following utilities can be used for this.
SQLCMD
OSQL
Here is an example with SQLCMD with an inline query
sqlcmd -S ServerName -E -Q "Select GetDate() CurrentDateAndTime" > output.txt
You can save the query to a file (QueryString.sql) and use -i instead
sqlcmd -S ServerName -E -i QueryString.sql > output.txt
Edit
Use SSIS
Create a package
Create a variable called RecordsToOutput of type Object at the package level
Use an Execute SQL task and get the dataset back into RecordsToOutput (a query sketch for this step appears after these steps)
Use a For-Each loop to go through the RecordsToOutput dataset
In the loop, create a variable for each column in the dataset (give it the same name)
Add a Data Flow task
Use an OLE DB source with a SQL statement that produces one row (from the variable data you already have)
Use a flat-file destination to write out the row.
Use expressions on the flat file connection to change the name of the destination file for each row in the loop.
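As mentioned above, a minimal sketch of the query the Execute SQL task might run to fill RecordsToOutput, using the columns from the question (the table name is hypothetical, and the task's result set would be mapped to the object variable):
SELECT HTML_ID, HTML_CONTENT
FROM dbo.HtmlPages;   -- hypothetical source table; return the full result set into RecordsToOutput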

Unable to update a SQL Server table with the BCP utility

We have a database table that we pre-populate with data as part of our deployment procedure. Since one of the columns is binary (it's a binary serialized object), we use BCP to copy the data into the table.
So far this has worked very well; however, today we tried this technique on a Windows Server 2008 machine for the first time and noticed that not all of the rows were being populated correctly. Out of the 31 rows that are normally inserted as part of this operation, only 2 actually had their binary column populated correctly; the other 29 simply had NULL in that column. This is the first time we've seen an issue like this, and it is the same .dat file that we use for all of our deployments.
Has anyone else encountered this issue before, or does anyone have any insight into what the cause could be?
Thanks in advance,
Jeremy
My guess is that you're using -c or -w to dump as text, and it's choking on a particular combination of characters it doesn't like and subbing in a NULL. This can also happen in Native mode if there's no format file. Try the following and see if it helps. (Obviously, you'll need to add the server and login switches yourself.)
bcp MyDatabase.dbo.MyTable format nul -f MyTable.fmt -n
bcp MyDatabase.dbo.MyTable out MyTable.dat -f MyTable.fmt -k -E -b 1000 -h "TABLOCK"
This'll dump the table data as straight binary with a format file, NULLs, and identity values to make absolutely sure everything lines up. In addition, it'll use batches of 1000 to optimize the data dump. Then, to insert it back:
bcp MySecondData.dbo.MyTable in MyTable.dat -f MyTable.fmt -n -b 1000
...which will use the format file, data file, and set batching to increase the speed a little. If you need more speed than that, you'll want to look at BULK INSERT, FirstRow/LastRow, and loading in parallel, but that's a bit beyond the scope of this question. :)