SQL Error: Cannot be converted to a PACKED DECIMAL value

I have a DB2 import statement which reads from a file and writes to a database.
The data type for column 18 (where I am getting the error) is DECIMAL(18,2).
The value coming in the file for that column is -502.47.
However, I am getting the below error:
SQL3123W The field value in row "1" and column "18" cannot be converted to a PACKED DECIMAL value. A null was loaded.
And the value is not going into the database.
What is the reason for this error? What is the solution?

There was an issue with the number of columns. I was passing more columns than the program expected, so you can get the above error in that case as well.
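For the column-count cause, the mapping can be made explicit with the METHOD P clause of IMPORT, which loads only the listed field positions. A minimal sketch, with made-up file, table, and column names and only three fields shown for brevity:
-- Sketch only: names and positions are illustrative, not from the question.
-- METHOD P loads just the listed field positions, so extra columns in the
-- file can no longer shift a non-numeric value into the DECIMAL(18,2) column.
IMPORT FROM 'input.del' OF DEL
   METHOD P (1, 2, 18)
   INSERT INTO myschema.mytable (id, description, amount)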

It was because of the double quotes in the loaded CSV file at the particular cell mentioned in the error.
You should try opening the file in Notepad++ or any other text editor, remove the double quotes, save, and load the file back into the DB.
Your error should be resolved.

how do I filter out errant integer data in pentaho data integration

I have a fixed position input.txt file like this:
4033667 70040118401401
4033671 70040/8401901 < not int because of "/"
4033669 70040118401301
4033673 70060118401101
I'm using a Text file input step to pull the data in, and I'd like to load the data into a database as ints and have the errant data go to a log file.
I've tried using the Filter Rows step and the Data Validator step, but I can't seem to get either to work. I've even tried using the Text file input step to bring the field in as a string and then converting it to an int with the Select/Rename values step, changing the data type in the meta-data section.
A typical error I keep running into is "String : couldn't convert String to Integer"
Any suggestions?
Thanks!
So I ended up using...
Text file input > Filter Rows (regex \d+) > select values (to cast string to int) > table output
...and the error log comes off of the false result of the regex filter.
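For reference, the same keep/route logic can be written on the database side once the raw lines are staged as text. This is only an illustration of the filter, not part of the Kettle solution above, and the table and column names are invented (PostgreSQL syntax):
-- Rows whose second field is all digits are cast and loaded...
INSERT INTO clean_codes (id, code)
SELECT id_text::bigint, code_text::bigint
FROM raw_input
WHERE code_text ~ '^[0-9]+$';
-- ...and everything else (the "false" branch of the regex filter) is logged.
INSERT INTO error_log (id_text, code_text)
SELECT id_text, code_text
FROM raw_input
WHERE code_text !~ '^[0-9]+$';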
I understand your problem. Let's keep it simple.

Teradata SQL - Replacing special characters

I'm using Report Builder 3.0 for my reports. My report runs, however, if a user exports the results to Excel (xlsx) instead of Excel 2003 (xls), they get an "illegal xml character" message when the file is open.
Four of the columns contain "&" and/or "'", so I'm trying to replace these special characters, which I believe are causing the issue.
I've tried to update this line:
j.journal_desc AS "Jrnl Description",
with this line:
oreplace(oreplace(j.journal_desc, '&', 'and'),'''','') AS "Jrnl Description",
and it works fine. However when I do this on a second line I get the message: "SELECT Failed. [9804] Response Row size or Constant Row size overflow".
I've tried "otranslate" and it works on 2 columns. However, when I try it on the 3rd column, I get the same overflow message.
Is it possible to use oreplace or otranslate on multiple columns? Am I doing something wrong? Is there a better way to replace these special characters?
Thanks for the help......
When OREPLACE and OTRANSLATE are used, the result string is typed as a VARCHAR of 8000 characters in the Unicode character set, so each additional call adds another 8000 characters to the response row size. Casting the result to a smaller length should fix the problem.
CAST(oreplace(journal_desc,'&','and') AS VARCHAR(100))
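Applied to more than one column, the same cast keeps each expression's contribution to the response row small. A sketch only; the alias j and journal_desc are from the question, but the table name and the second column are made up:
SELECT
  CAST(OREPLACE(OREPLACE(j.journal_desc, '&', 'and'), '''', '') AS VARCHAR(100)) AS "Jrnl Description",
  CAST(OTRANSLATE(j.journal_ref, '&''', '') AS VARCHAR(100)) AS "Jrnl Reference"  -- strips & and ' in one pass
FROM journal j;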

Line contains invalid enclosed character data or delimiter at position

I was trying to load data from a CSV file into Oracle using SQL Developer, and when inserting the data I encountered an error which says:
Line contains invalid enclosed character data or delimiter at position
I am not sure how to tackle this problem!
For Example:
INSERT INTO PROJECT_LIST (Project_Number, Name, Manager, Projects_M,
Project_Type, In_progress, at_deck, Start_Date, release_date, For_work, nbr,
List, Expenses) VALUES ('5770','"Program Cardinal
(Agile)','','','','','',to_date('', 'YYYY-MM-DD'),'','','','','');
The Error shown were:
--Insert failed for row 4
--Line contains invalid enclosed character data or delimiter at position 79.
--Row 4
I've had success converting the CSV file to Excel with "Save As", changing the format to .xlsx. I then load the .xlsx version in SQL Developer. I think the conversion forces some of the bad formatting out. It worked on at least my last 2 files.
I fixed it by using the CONCATENATE function in my CSV file first and then uploading it to SQL, which worked.
My guess is that it doesn't like to_date('', 'YYYY-MM-DD'). It's missing a date to format. Is that an actual input of your data?
But it could also possibly be the double quote in "Program Cardinal (Agile). Though I don't see why that would get picked up as an invalid character.
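If those are indeed the culprits, a cleaned-up version of row 4 would look something like this, a sketch only, with the stray quote removed and the empty strings and the empty to_date() passed as NULL:
INSERT INTO PROJECT_LIST (Project_Number, Name, Manager, Projects_M,
Project_Type, In_progress, at_deck, Start_Date, release_date, For_work, nbr,
List, Expenses) VALUES ('5770', 'Program Cardinal (Agile)', NULL, NULL, NULL,
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);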

PSQL: Invalid input syntax for integer on COPY

I have a .txt file with a plain list of words (one word on each line) that I want to copy into a table. The table was created in Rails with one column: t.string "word".
The same file loaded into another database/table worked fine, but in this case I get:
pg_fix_development=# COPY dictionaries FROM '/Users/user/Documents/en.txt' USING DELIMITERS ' ' WITH NULL as '\null';
ERROR: invalid input syntax for integer: "aa"
CONTEXT: COPY dictionaries, line 1, column id: "aa"
I did some googling on this but can't figure out how to fix it. I'm not well versed in SQL. Thanks for your help!
If you created that table in Rails then you almost certainly have two columns, not one. Rails will add an id serial column behind your back unless you tell it not to; this also explains your "input syntax for integer" error: COPY is trying to use the 'aa' string from your text file as a value for the id column.
You can tell COPY which column you're importing so that the default id values will be used:
copy dictionaries(word) from ....
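With the file from the question, that looks something like this; the id column is then filled from its serial default (use \copy from psql instead if the file lives on the client machine rather than the server):
COPY dictionaries(word) FROM '/Users/user/Documents/en.txt';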

SSIS Bulk Insert where fields contain commas?

My bulk insert in SSIS is failing when a field contains a comma character. My flat file source is tab delimited and there are many instances in which a text field will contain commas. For example, a UserComment may have a comma. This causes the bulk insert to fail.
How can I tell SSIS to ignore the commas? I thought it would happen automatically since the row delimiter is {CR}{LF} and the column delimiter is "Tab". Why does it bark at the comma? Also please note that I am NOT currently using a format file.
Thanks in advance.
UPDATE:
Here is the error I get in SSIS:
Error: 0xC002F304 at Bulk Insert Task, Bulk Insert Task: An error occurred with the following error message: "Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 183, column 5 (EmailAddress).Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 182, column 5 (EmailAddress).Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 181, column 5 (EmailAddress).".
Task failed: Bulk Insert Task
It seems to fail on record 131988 which is why I think it's because of the "something,something" email with no space. Many records before 131988 come across fine.
131988 01 MEMPHIS, TN someone#somewhere.com
131988 02 NORTH LITTLE ROCK, AR someone#somewhere.com,someone1#somewhere1.com
131988 03 HOUSTON, TX someone#somewhere.com,someone1#somewhere1.com
I doubt the comma or the # sign is being called an "invalid character".
I see there are two tabs in the input record just before the field that contains the email addresses, so that email address column would be the fifth column. But when the error message refers to "column 5" it's presumably using zero-based indexing, so the email column is only index 4. Is there another tab and another column after it? Maybe the invalid character is there.
I suspect there is an invisible bad character embedded in whatever column is causing the error. I often pick up bad characters when cutting and pasting out of email address lines, so that's a likely suspect.
Run the failing line by itself to make sure it still fails.
Then copy it into, say, Notepad, and do a "Save As" with the Encoding set to ANSI. (It may complain at that point if there's a bad character.) Use the "Save As" file as the new import file. At this point you should be able to be reasonably confident that "what you see is what you get", and that there are no invisible characters embedded in the import file.
If this turns out to be the problem, you'll need some way to verify that future import files are clean, or else handle them somehow during the import process.
(I presume you've checked the destination column length is okay. That would definitely be a showstopper.)
"Type mismatch or invalid character for the specified codepage" is a misleading error message. The source table's field length exceeded the destination table's specified length and thus the error. After adjusting lengths, everything worked properly.