SqlBulkCopy exception, find the colid - sql

Using SqlBulkCopy and getting this exception:
Received an invalid column length from the bcp client for colid 30.
I've been banging my head against this one for hours. I know which row is having the issue, but I don't know which column "colid 30" refers to. There are 178 columns in the data table. All values seem to be correct, and I don't see any that are longer than what the column data types in my database allow.
This database holds property listings and currently has over 3 million records, all of which are just fine.
Is there a way to pinpoint what colid 30 is? Or is there a way to view the actual SQL that the bcp is submitting to the database?

I hope this helps solve someone else's issues as well.
The error was because one of the string/varchar fields in the datatable had a semicolon ";" in its value. Apparently you need to manually escape these before doing the insert!
I looped through all rows/columns and did:
row[column] = row[column].ToString().Replace(";", "CHAR(59)");
After that, everything inserted smoothly.

Check the size of the columns in the table you are doing the bulk insert into. The varchar or other string columns might need to be extended.
For example, to increase a size from 30 to 50:
ALTER TABLE [dbo].[TableName]
ALTER COLUMN [ColumnName] Varchar(50)
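If you are not sure which string columns might be too small, you can list their defined lengths and compare them against your source data. Here is a sketch, assuming SQL Server and a destination table named dbo.TableName:
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'TableName'
  AND CHARACTER_MAXIMUM_LENGTH IS NOT NULL
ORDER BY ORDINAL_POSITION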

In my case, the error message reported colid 53, but the actual issue was on column 54. You should also check the very next column when comparing sizes with the target table. I hope it helps.
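To translate a colid into an actual column name, you can look at the destination table's column metadata. Here is a sketch, assuming SQL Server and a destination table named dbo.TableName (the colid is generally a 1-based position in the column list the bulk copy uses, though, as noted above, it can appear off by one):
SELECT column_id, name, max_length
FROM sys.columns
WHERE object_id = OBJECT_ID('dbo.TableName')
ORDER BY column_id
Count down to row 30 (or 31) in the result and compare that column's max_length against the longest value you are sending to it.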


"Numeric value '' is not recognized" - what column?

I am trying to insert data from a staging table into the master table. The table has nearly 300 columns and is a mix of data types: varchars, integers, decimals, dates, etc.
Snowflake gives the unhelpful error message "Numeric value '' is not recognized".
I have gone through and cut out various parts of the query to try to isolate where it is coming from. After several hours of cutting out every column, it is still happening.
Does anyone know of a Snowflake diagnostic query (like Redshift has) which can tell me the specific column where the issue is occurring?
Unfortunately not at the point you're at. If you went back to the COPY INTO that loaded the data, you'd be able to use the VALIDATE() function to get better information, down to the record and byte-offset level.
I would query your staging table for just the numeric fields and look for blanks, or you can wrap all of the fields destined for numeric columns with try_to_number() functions. A bit tedious, but it might not be too bad if you don't have a lot of numbers.
https://docs.snowflake.com/en/sql-reference/functions/try_to_decimal.html
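For example, to scan one suspect column for values that will not convert, here is a sketch assuming a staging table STAGING_TABLE with a VARCHAR column AMOUNT_RAW that feeds a numeric column:
-- rows that are present but not parseable as numbers (including empty strings)
SELECT AMOUNT_RAW
FROM STAGING_TABLE
WHERE AMOUNT_RAW IS NOT NULL
  AND TRY_TO_NUMBER(AMOUNT_RAW) IS NULL;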
As a note, when you stage, you should try to use the NULL_IF options to get rid of bad characters, and/or try to load the data into the stage using the actual data types of your stage table, so you can leverage the VALIDATE() function to make sure the data types are correct before loading into Snowflake.
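Here is a sketch of that load-time approach, assuming a stage named @my_stage and CSV files; NULL_IF turns empty strings into NULLs as they load, ON_ERROR = CONTINUE lets the load skip bad rows, and VALIDATE() then reports what the last COPY rejected:
COPY INTO STAGING_TABLE
  FROM @my_stage
  FILE_FORMAT = (TYPE = CSV NULL_IF = ('', 'NULL'))
  ON_ERROR = CONTINUE;
-- show the rows the previous COPY could not load
SELECT * FROM TABLE(VALIDATE(STAGING_TABLE, JOB_ID => '_last'));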
Query your staging table using try_to_number() and/or try_to_decimal() for the number and decimal fields of the table, and then use MINUS to get the difference:
Select $1, $2, ...$300 from #stage
minus
Select $1, try_to_number($2), ...$300 from #stage
If any number field has a string that cannot be converted, it will be null, and the MINUS should then return the rows which have a problem. Once you get the rows, analyze the columns in the result set for errors.

String or binary data would be truncated, field find

I understand the error and how to fix it; I am just interested in finding the field to fix.
Let me start from the top. I am running a daily scheduled task which executes a process that, at some points, runs some sprocs in SQL which run insert statements. Unfortunately, after checking my logs, I am getting the error in question and therefore my sprocs aren't working. I could update every field to a bigger length and this would probably fix it, but I'd rather not. Is there any way of knowing (without manually checking, as there are many fields and thousands of rows) which field contains the value that is too big for the field it is being inserted into?
Import the data into a new table using VARCHAR(MAX) as the datatype for the columns. Then you can use DATALENGTH to get the maximum size of each column.
SELECT MAX(DATALENGTH(col1)) AS col1, MAX(DATALENGTH(col2)) AS col2, etc.
FROM newTable
This will tell you which column(s) exceed the size defined in your destination table.
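Once you know which column is the culprit, you can pull the offending rows out of that staging copy. A sketch, assuming col1 turned out to be the problem and the destination column is VARCHAR(50):
SELECT *
FROM newTable
WHERE DATALENGTH(col1) > 50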

Why am I getting a "[SQL0802] Data conversion of data mapping error" exception?

I am not very familiar with iseries/DB2. However, I work on a website that uses it as its primary database.
A new column was recently added to an existing table. When I view it via AS400, I see the following data type:
Type: S
Length: 9
Dec: 2
This tells me it's a numeric field with 7 digits before the decimal point and 2 digits after the decimal point.
When I query the data with a simple SELECT (SELECT MYCOL FROM MYTABLE), I get back all the records without a problem. However, when I try using a DISTINCT, GROUP BY, or ORDER BY on that same column I get the following exception:
[SQL0802] Data conversion of data mapping error
I've deduced that at least one record has invalid data - what my DBA calls "blanks" or "4 O". How is this possible though? Shouldn't the database throw an exception when invalid data is attempted to be added to that column?
Is there any way I can get around this, such as filtering out those bad records in my query?
"4 O" means 0x40 which is the EBCDIC code for a space or blank character and is the default value placed into any new space in a record.
Legacy programs / operations can introduce the decimal data error. For example if the new file was created and filled using the CPYF command with the FMTOPT(*NOCHK) option.
The easiest way to fix it is to write an HLL program (RPG) to read the file and correct the records.
The only solution I could find was to write a script that checks for blank values in the column and then updates them to zero when they are found.
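Here is a sketch of that kind of cleanup, assuming the column is the 9-digit zoned decimal MYCOL from the question and assuming HEX() can read the field's raw bytes without tripping the same mapping error (a blank zoned-decimal field is all 0x40 bytes, as described above):
-- a 9-digit zoned decimal filled with blanks is nine 0x40 bytes
UPDATE MYTABLE
SET MYCOL = 0
WHERE HEX(MYCOL) = '404040404040404040'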
If the file has record format level checking turned off [i.e. LVLCHK(*NO)], or is overridden to that, then an HLL program (e.g. RPG, COBOL, etc.) that was not recompiled with the new record format might write out records with invalid data in this column, especially if the new column is not at the end of the record.
Make sure that all programs that use native I/O to write or update records on this file are recompiled.
I was able to solve this error by force-casting the key columns to integer. I changed the join from this...
FROM DAILYV INNER JOIN BXV ON DAILYV.DAITEM=BXV.BXPACK
...to this...
FROM DAILYV INNER JOIN BXV ON CAST(DAILYV.DAITEM AS INT)=CAST(BXV.BXPACK AS INT)
...and I didn't have to make any corrections to the tables. This is a very old, very messy database with lots of junk in it. I've made many corrections, but it's a work in progress.

Changing column datatype from DECIMAL(9,0) to DECIMAL(15,0)

Can you please help me concerning this matter (I didn't find it in the Teradata documentation, which is honestly a little overwhelming): My table had this column - BAN DECIMAL(9,0) - and now I want to change it to - BAN DECIMAL(15,0) COMPRESS 0. - How can I do it? What does the COMPRESS 0 constraint (or any other) mean anyway?
I hope this is possible and I don't have to create a new table and then copy the data from the old table. The table is very, very big; when I do a COUNT(*) on that table I get this error: 2616 numeric overflow occurred during computation
The syntax diagram for ALTER TABLE doesn't seem to support directly changing a column's data type. (Teradata SQL DDL Documentation). COMPRESS 0 compresses zeroes. Teradata supports a lot of different kinds of compression.
Numeric overflow here probably means you've exceeded the range of an integer. To make that part work, just try casting to a bigger data type. (You don't need to change the column's data type to do this.)
select cast(count(*) as bigint)
from table_name;
You asked three different questions:
You cannot change the data type of a column from DECIMAL(9,0) to DECIMAL(15,0). Your best bet would be to create a new column (NEW_BAN), assign values from your old column, drop the old column, and rename NEW_BAN back to BAN (see the sketch after this answer).
COMPRESS 0 is not a constraint. It means that values of zero are compressed out of the table, saving disk space.
Your COUNT(*) is returning that error because the table has more than 2,147,483,647 rows (the max value of an INTEGER). Cast the result as BIGINT (as shown by Catcall).
And I agree, the documentation can be overwhelming. But be patient and focus only on the SQL titles for your exact release. They really are well written.
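Here is a sketch of that add-and-swap approach, assuming the table is MYDB.MYTABLE; this is Teradata syntax, so check the DDL manual for your release, particularly the column RENAME form:
ALTER TABLE MYDB.MYTABLE ADD NEW_BAN DECIMAL(15,0) COMPRESS 0;
UPDATE MYDB.MYTABLE SET NEW_BAN = BAN;
ALTER TABLE MYDB.MYTABLE DROP BAN;
ALTER TABLE MYDB.MYTABLE RENAME NEW_BAN TO BAN;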
You cannot use ALTER TABLE to change the data type from DECIMAL(9,0) to DECIMAL(15,0) because it crosses the byte boundary required to store the values in the table. For Teradata 13.10, see the Teradata manual SQL Data Definition Language Detailed Topics, pages 61-65, for more details on using ALTER TABLE to change column data types.

Insert zero before a numeric datatype in SQL Server

I was trying to insert a value into a column in SQL Server which is of type numeric(18, 0).
I am unable to insert a zero at the beginning. For example, 022223 gets inserted as 22223.
I think changing the column type to varchar would work, but I don't want to alter the table structure.
Is there any way to do this without changing the table structure? Please help.
There is no point in having this IN the database. You will, after all, select the data, won't you? So do the formatting while selecting. I looked in Google for "mssql numeric leading zeros"; all of the solutions are like the one in the link I have mentioned :) Or obviously, use varchar if you, for some reason, must have data like that in a table :)
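For example, to pad the value back out to six digits at query time, here is a sketch assuming the column is BanNumber in dbo.MyTable and you always want six characters:
SELECT RIGHT('000000' + CAST(BanNumber AS VARCHAR(18)), 6) AS PaddedBan
FROM dbo.MyTable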
You can apply a transformation when reading and inserting the value, like this:
When inserting the value:
string s = "00056";
double val = double.Parse("0." + s);
And when querying the value, use:
double val = 0.00056; // the stored value from your db field
string s = val.ToString().Remove(0, 2);
I think it will work in any case, whether you have a leading zero in your value or not.
Well, I don't think it's possible, since it's being stored as a number, so simply add leading zeros when printing it.
There are multiple approaches.
http://sqlusa.com/bestpractices2005/padleadingzeros/