SQL> desc FLIGHTS;
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
FLNO                                               NUMBER(38)
FROM                                               VARCHAR2(64)
TO                                                 VARCHAR2(64)
DISTANCE                                           NUMBER(38)
DEPARTS                                            DATE
ARRIVES                                            DATE
PRICE                                              FLOAT(63)
data file:
99,Los Angeles,Washington D.C.,2308,2005/04/12 09:30,2005/04/12 21:40,235.98
13,Los Angeles,Chicago,1749,2005/04/12 08:45,2005/04/12 20:45,220.98
346,Los Angeles,Dallas,1251,2005/04/12 11:50,2005/04/12 19:05,225.43
387,Los Angeles,Boston,2606,2005/04/12 07:03,2005/04/12 17:03,261.56
and sqlldr control file:
LOAD DATA INFILE 'flights.txt'
INTO TABLE Flights
FIELDS TERMINATED BY ","
( FLNO
, FROM
, TO
, DISTANCE
, DEPARTS
, ARRIVES
, PRICE)
An excerpt from the error log:
Table FLIGHTS, loaded from every logical record.
Insert option in effect for this table: INSERT
Column Name                    Position   Len  Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
FLNO                           FIRST      *    ,         CHARACTER
FROM                           NEXT       *    ,         CHARACTER
TO                             NEXT       *    ,         CHARACTER
DISTANCE                       NEXT       *    ,         CHARACTER
DEPARTS                        NEXT       *    ,         CHARACTER
ARRIVES                        NEXT       *    ,         CHARACTER
PRICE                          NEXT       *    ,         CHARACTER
Record 1: Rejected - Error on table FLIGHTS, column FROM.
ORA-01747: invalid user.table.column, table.column, or column specification
I am not sure what is wrong with my SQL, but I'm assuming it is because of the FROM entry?
Firstly, calling columns from and to is a bad idea: they're keywords. Something like origin and destination would be better.
Secondly, the FLOAT really isn't needed. FLOAT(63) gives you 63 bits of binary precision (roughly 18-19 significant decimal digits), far more than a price will ever need. Something like NUMBER(18,2) should be more than sufficient (ridiculous, in fact), but if you want the absolute maximum use NUMBER(38,2).
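To illustrate both points, here's a minimal sketch of how the table could look; the column names origin and destination are suggestions of mine, not part of your existing schema:
CREATE TABLE flights (
  flno        NUMBER(38),
  origin      VARCHAR2(64),
  destination VARCHAR2(64),
  distance    NUMBER(38),
  departs     DATE,
  arrives     DATE,
  price       NUMBER(18,2)
);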
My last pre-answer point is your data file. If at all possible, get your supplier to change it. A comma-delimited file is just asking for trouble: there are far too many ways for a comma to end up in the data. If you can have it | or ¬ delimited, so much the better, as those characters hardly ever appear in text.
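For what it's worth, if the supplier did switch to, say, pipe-delimited data, the only control-file change needed would be the field terminator; a fragment, not a complete control file:
FIELDS TERMINATED BY "|"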
Depending on your NLS parameters, there's no guarantee that the dates in the file will be converted into the dates you need in your table. It's best to specify the format explicitly on the way into the database, like this:
LOAD DATA
INFILE 'flights.txt'
INTO TABLE Flights
FIELDS TERMINATED BY ","
( FLNO
, FROM
, TO
, DISTANCE
, DEPARTS "to_date(:departs,'yyyy/mm/dd hh24:mi')"
, ARRIVES "to_date(:arrives,'yyyy/mm/dd hh24:mi')"
, PRICE DECIMAL EXTERNAL
)
Notice that I've also changed PRICE into a decimal. If you look at your log file, every field is supposedly a character, which means you're relying on an implicit conversion, and that isn't guaranteed to behave the way you want.
As for why you got your specific error message: I don't actually know. I also suspect it's because you've got a column called FROM. According to the control file documentation there's no SQL*Loader keyword FROM, so I can only posit that it's some problem in SQL*Loader's communication with Oracle.
I have a problem when I am trying to move a varbinary(max) field from one DB to another.
If I insert like this:
0xD0CF11E0A1B11AE10000000
The result has an additional '0' at the beginning:
0x0D0CF11E0A1B11AE10000000
And I cannot get rid of this. I've tried many tools, like the SSMS export tool or BCP, but without any success. And it would be better for me to solve it in a script anyway.
I don't have much knowledge about varbinary (a program generates it); my only goal is to copy it :)
0xD0CF11E0A1B11AE10000000
This value contains an odd number of characters. Varbinary stores bytes, and each byte is represented by exactly two hexadecimal characters. You're either missing a character, or you're not storing whole bytes.
Here, SQL Server is guessing that the most significant digit is a zero, which would not change the numeric value of the string. For example:
select 0xD0C "value"
,cast(0xD0C as int) "as_integer"
,cast(0x0D0C as int) "leading_zero"
,cast(0xD0C0 as int) "trailing_zero"
value      as_integer leading_zero trailing_zero
---------- ---------- ------------ -------------
0d0c       3340       3340         53440
Or:
select 1 "test"
where 0xD0C = 0x0D0C
test
-------
1
It comes down to SQL Server assuming that a varbinary value always represents whole bytes.
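As a minimal sketch, using the literal from the question with the 0x prefix stripped, you can check whether the hex string describes a whole number of bytes:
DECLARE @hex varchar(max) = 'D0CF11E0A1B11AE10000000';  -- value from the question, without the 0x
SELECT LEN(@hex)     AS hex_chars,    -- 23 characters: one short of 12 whole bytes
       LEN(@hex) % 2 AS odd_length;   -- 1 means the string is a nibble short (or one too long)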
I have an existing database with a table with a string[16] key field.
There are rows whose key ends with a space: "16 ".
I need to allow user to change from "16 " to e.g. "16" but also do a unique key check (i.e. the table does not have already a record with key="16").
I run the following query:
select * from plu__ where store=100 and plu_num = '16'
It returns the row with key="16 "!
How do I check for unique key so that keys with trailing spaces are not included?
EDIT: The DDL and the char_length
CREATE TABLE PLU__
(
PLU_NUM Varchar(16),
CAPTION Varchar(50),
...
string[16] - there is no such datatype in Firebird. There are CHAR(16) and VARCHAR(16) (and BLOB SUB_TYPE TEXT, but that is improbable here). So you are omitting some crucial points about your system: you are not working with Firebird directly, but through some undisclosed intermediate layer, and no one knows how opaque or transparent it is.
I suspect you or your system chose the CHAR datatype instead of VARCHAR, where all data is right-padded with spaces to the maximum length. Or maybe the COLLATION of the column/table/database is such that trailing spaces do not matter.
Additionally, you may simply be wrong. You claim that the row being selected contains the trailing blank, but I do not see that demonstrated. For example, add CHAR_LENGTH(plu_num) to the columns in your SELECT and see what it reports.
Additionally, if plu_num is number - should it not be integer or int64 rather than text?
The bottom of your screenshot shows "(NONE)". I suspect that is the "connection charset". This is allowed for backward compatibility with programs made 20 years ago, but it is quite dangerous today. You have to consult your system documentation on how to set the connection charset to UTF-8 or Windows-1250 or something meaningful.
"How do I check for unique key so that keys with trailing spaces are not included?" you do not. You just can not do it reliably, because of different transactions and different programs making simultaneous connections. You would check it, decide you are clear, but right before you would insert your row - some other computer would insert it too. That gap can not be crossed that way, between your two commands of checking and inserting - anyone else can do it too. It is called race conditions.
You have to ask the server to do the checks.
For example, you could introduce a UNIQUE constraint on the pair of columns (store, plu_num). That way the server will refuse to store two rows with the same values in those columns visible within the same transaction.
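A minimal sketch of such a constraint (the constraint name UQ_PLU_STORE_NUM is invented; pick whatever fits your naming convention):
ALTER TABLE PLU__
  ADD CONSTRAINT UQ_PLU_STORE_NUM UNIQUE (STORE, PLU_NUM);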
Additionally, is it even normal to have values with spaces? Convert the field to an integer datatype and be safe.
Or, if you want to keep it textual and non-numeric, you still can:
Introduce a CHECK constraint that trim(plu_num) is not distinct from plu_num (or, if plu_num is declared as a NOT NULL column on the server, simply trim(plu_num) = plu_num). That way the server will refuse to store any value with spaces before or after the text.
In case the datatype or the collation of the column makes no difference when comparing texts with and without trailing spaces (and in case you cannot change that datatype or collation), you may try adding tokens, like ('+' || trim(plu_num) || '+') = ('+' || plu_num || '+').
Or, instead of that CHECK constraint, you can proactively remove those spaces: add a BEFORE INSERT OR UPDATE trigger on the table that does NEW.plu_num = TRIM(NEW.plu_num). Both options are sketched below.
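A minimal sketch of both options, assuming Firebird 2.x syntax; the constraint and trigger names are invented:
ALTER TABLE PLU__
  ADD CONSTRAINT CHK_PLU_NO_SPACES CHECK (TRIM(PLU_NUM) IS NOT DISTINCT FROM PLU_NUM);

SET TERM ^ ;
CREATE TRIGGER PLU__TRIM_BIU FOR PLU__
  ACTIVE BEFORE INSERT OR UPDATE POSITION 0
AS
BEGIN
  NEW.PLU_NUM = TRIM(NEW.PLU_NUM);
END^
SET TERM ; ^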
Documentation:
https://www.firebirdsql.org/refdocs/langrefupd20-distinct.html
http://www.firebirdtest.com/file/documentation/reference_manuals/fblangref25-en/html/fblangref25-ddl-tbl.html#fblangref25-ddl-tbl-constraints
http://www.firebirdtest.com/file/documentation/reference_manuals/fblangref25-en/html/fblangref25-ddl-tbl.html#fblangref25-ddl-tbl-altradd
http://www.firebirdtest.com/file/documentation/reference_manuals/fblangref25-en/html/fblangref25-ddl-trgr.html
http://www.firebirdtest.com/file/documentation/reference_manuals/fblangref25-en/html/fblangref25-datatypes-chartypes.html
Also, via http://www.translate.ru a bit more verbose:
http://firebirdsql.su/doku.php?id=constraint
http://firebirdsql.su/doku.php?id=alter_table
You may also check http://www.firebirdfaq.org/cat3/
Additionally, if you add the constraints to an existing table holding non-valid data entered before you introduced those checks, you might trap yourself in a "non-restorable backup" situation. You have to check for that and sanitize your old data so it abides by the newly introduced constraints.
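A sketch of such a pre-flight check, matching the CHECK constraint sketched earlier (adjust the predicate to whichever rule you actually choose):
SELECT STORE, PLU_NUM, CHAR_LENGTH(PLU_NUM)
FROM PLU__
WHERE TRIM(PLU_NUM) IS DISTINCT FROM PLU_NUM;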
Option #4 is explained in detail below. But this just seems to be bad database design! One should not "let people edit the number to remove trailing blanks"; one should design the database so that there are no numbers with trailing blanks and no way to insert them in the first place.
CREATE TABLE "_NEW_TABLE" (
ID INTEGER NOT NULL,
TXT VARCHAR(10)
);
Select id, txt, '_'||txt||'_', char_length(txt) from "_NEW_TABLE"
ID TXT CONCATENATION CHAR_LENGTH
1 1 _1_ 1
2 2 _2_ 1
4 1 _1 _ 2
5 2 _2 _ 2
7 1 _ 1_ 2
8 2 _ 2_ 2
Select id, txt, '_'||txt||'_', char_length(txt) from "_NEW_TABLE"
where txt = '2'
ID TXT CONCATENATION CHAR_LENGTH
2 2 _2_ 1
5 2 _2 _ 2
Select id, txt, '_'||txt||'_', char_length(txt) from "_NEW_TABLE"
where txt || '+' = '2+' -- WARNING - this PROHIBITS index use on txt column, if there is any
ID TXT CONCATENATION CHAR_LENGTH
2 2 _2_ 1
Select id, txt, '_'||txt||'_', char_length(txt) from "_NEW_TABLE"
where txt = '2' and char_length(txt) = char_length('2')
I tried loading data into a table using SQL*Loader.
The log shows the actual length of the string is 101 whereas 100 is the maximum (so the record is rejected). But when I checked, I found the length is 99.
The data type of the string is varchar2(100) in the table.
I didn't specify anything about length in the control file.
What would be the exact problem?
Your data value only has 99 characters, but it seems some are multibyte characters - from a comment at least one is the symbol ½.
There are two related ways to see this behaviour, depending on how your table is defined and what is in your control file.
You're probably seeing the effect of character length semantics. Your column is defined as 100 bytes; you're trying to insert 99 characters, but as some characters require multiple bytes for storage, the total number of bytes required for your string is 101 - too many for the column definition.
You can see that effect here:
create table t42 (str varchar2(10 byte));
Then if I have a data file with two rows, the second of which has a multibyte character:
This is 10
This is 9½
and a simple control file:
LOAD DATA
CHARACTERSET UTF8
TRUNCATE INTO TABLE T42
FIELDS TERMINATED BY ','
TRAILING NULLCOLS
(
STR
)
Then trying to load that gets:
Record 2: Rejected - Error on table T42, column STR.
ORA-12899: value too large for column "MYSCHEMA"."T42"."STR" (actual: 11, maximum: 10)
Total logical records read: 2
Total logical records rejected: 1
If I recreate my table with character semantics:
drop table t42 purge;
create table t42 (str varchar2(10 char));
then loading with the same data and control file now gets no errors, and:
Total logical records read: 2
Total logical records rejected: 0
However, even when the table is defined with character semantics, you could still see this; if I remove the CHARACTERSET UTF8 line then my environment default (via NLS_LANG, which happens to set my character set to WE8ISO8859P1) leads to a character set mismatch, and I again see:
Record 2: Rejected - Error on table T42, column STR.
ORA-12899: value too large for column "STACKOVERFLOW"."T42"."STR" (actual: 11, maximum: 10)
(Without that control file line, and with byte semantics for the column, the error reports actual length as 13 not 11).
So you need the table to be defined to hold the maximum number of characters you expect, and you need the control file to specify the character set if your NLS_LANG is defaulting it to something that doesn't match the database character set.
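If dropping and recreating the table isn't an option, switching an existing column to character semantics should work too; a minimal sketch against the demo table above:
alter table t42 modify (str varchar2(10 char));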
You can see the default semantics a new table will get by querying, for the database default and your current session default:
select value from nls_database_parameters where parameter = 'NLS_LENGTH_SEMANTICS';
select value from nls_session_parameters where parameter = 'NLS_LENGTH_SEMANTICS';
For an existing table you can check which was used by looking at the user_tab_columns.char_used column, which will be B for byte semantics and C for character semantics.
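For example, against the demo table (these are standard columns of the USER_TAB_COLUMNS view):
select column_name, data_length, char_length, char_used
from user_tab_columns
where table_name = 'T42';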
I'm trying to get a percentage to display as a decimal in my database.
I have the following set up to convert the percentage columns into decimals:
---------------- ---------------- ------------
excel source ---------> data conversion ----------> db output
---------------- ---------------- ------------
I've tried to strictly convert the input to decimal and numeric.
Neither of these have changed my results.
In my columns in the database I'm getting just 0's and 1's.
Forgive my crude drawing; I do not have enough rep to post pictures yet.
Hope this is what you are looking for
An Excel sheet like this is the source.
I just tested it in my system. It is working fine. This is what I did.
I created an SSIS package with just one data flow task (DFT).
The data flow is given below. Please note that the value which appeared as 40% in the Excel sheet arrives as 0.40, so I added two derived columns: one passing the value through as-is, and the next multiplying it by 100.
The derived column structure is shown below.
The destination table structure is:
Create table Destination
(
id int,
name varchar(15),
hike decimal(8,2)
)
I am getting the result as expected.
Select * from Destination
There are many ways to accomplish this. Here's one:
1) Save your excel file as a tab delimited text file.
2) Create a New Flat File Connection in SSIS
a) Set File Name = .txt file
b) Go to Advanced tab and click on the column with the percentages
c) Set the Data Type to match the target field in your database (e.g., numeric(10,5))
3) In the SSIS workflow, create a derived column of your percent field to convert from percent to decimal (e.g., newfield = oldfield/100). Make sure to check that the data type has not changed in the Derived Column Transformation Editor.
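The conversion itself is just a divide by 100; as a sanity check you can run the same arithmetic in plain SQL, outside the package:
select cast(40 as decimal(10,5)) / 100 as as_decimal;  -- 40% becomes 0.40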
Query:
Select To_Number(qty) From my_table Where Id=12345;
Output:
ORA-01722: invalid number
01722. 00000 - "invalid number"
Query: Select qty From my_table Where Id=12345;
Output: 0.00080
Query:
Select To_Number(0.00080) From Dual;
Output:
0.00080 (no error)
This is an odd situation I am facing in Oracle. Can anybody suggest why it happens? The column qty is of NUMBER type, so it is very hard to imagine that it contains an invalid number, but it happened.
I want to clarify that it happened only for this specific value, although we have thousands of records in the same column.
Added more: The same error appears if I use the TO_CHAR(qty) function. The qty column is of NUMBER type, not VARCHAR2. In fact we were using SUM(qty), which showed the error; I went digging and found this row to be the culprit.
I'm assuming that qty is defined as a varchar2 in my_table-- otherwise, there would be no purpose served by calling to_number. If that assumption is correct, I'll wager that there is some other row in the table where qty has non-numeric data in it.
SQL is a set-based language so Oracle (or any other database) is perfectly free to evaluate things in whatever order it sees fit. That means that Oracle is perfectly free to evaluate the to_number(qty) expression before applying the id=12345 predicate. If Oracle happens to encounter a row where the qty value cannot be converted to a number, it will throw an error.
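If that is the case, and you are on Oracle 12.2 or later, you can also make the conversion fail gracefully instead of throwing an error; a sketch, assuming the varchar2 theory is right:
select to_number(qty default null on conversion error) as qty_num
from my_table
where id = 12345;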
It is also possible that there is some non-numeric data in the particular row where id = 12345 that happens not to be displaying (control characters for example). You can check that by running the query
SELECT dump(qty, 1016)
FROM my_table
WHERE id = 12345
(if you want decimal rather than hexadecimal, use 1010 as the second parameter to dump) and checking to see whether there is anything unexpected in the data.
The only way I can see you could get the results you've shown, given that qty really is a NUMBER field, is if it holds corrupt data (which is why there has been scepticism about that assumption). I'm also assuming your client is formatting the value with a leading zero but not forcing the trailing zero, which wouldn't normally appear; you can of course force it with to_char(.0008, '0.00000'), but you don't appear to be doing that. Still, the leading zero makes me wonder.
Anyway, to demonstrate corruption you can force an invalid value into the field via PL/SQL - don't try this with real data or a table you care about:
create table t42(qty number);
table T42 created.
declare
n number;
begin
dbms_stats.convert_raw_value('bf0901', n);
insert into t42 (qty) values (n);
end;
/
anonymous block completed
select qty from t42;
QTY
----------
.00080
select to_number(qty) from t42;
Error starting at line : 12 in command -
select to_number(qty) from t42
Error report -
SQL Error: ORA-01722: invalid number
01722. 00000 - "invalid number"
Note the plain query shows the number as expected - though with a trailing zero, and no leading zero - and running it through to_number() throws ORA-01722. Apart from the leading zero, that is what you've shown.
It also fails with to_char(), as in your question title:
select to_char(qty) from t42;
Error starting at line : 13 in command -
select to_char(qty) from t42
Error report -
SQL Error: ORA-01722: invalid number
... which makes sense; your to_number() is doing an implicit conversion, so it's really to_number(to_char(qty)), and it's the implicit to_char() that actually generates the error, I think.
Your comments suggest you have a process that is loading and removing data. It would be interesting to see exactly what that is doing, and if it could be introducing corruption. This sort of effect can be achieved through OCI as the database will trust that the data it's passed is valid, as it does in the PL/SQL example above. There are bug reports suggesting imp can also cause corruption. So the details of your load process might be important, as might the exact database version and platform.
I encountered nearly the same problem, and I found that the mysterious number behaved differently from a normal number after dump(). For example, assuming my qty = 500 (datatype: number(30,2)):
select dump(qty) from my_table where Id=12345;
Typ=2 Len=3: 194,6,1
select dump(500.00) from dual;
Typ=2 Len=2: 194,6
If we know how the NUMBER datatype is stored internally (if not, please visit http://translate.google.com/translate?langpair=zh-CN%7Cen&hl=zh-CN&ie=UTF8&u=http%3A//www.eygle.com/archives/2005/12/how_oracle_stor.html ), we can see that there is an extra trailing zero digit (the last "1" in Typ=2 Len=3: 194,6,1) in the mysterious number.
So I used a trick to eliminate the trailing zero, and it works around the problem.
select dump(trunc(qty+0.001,2)) from my_table where Id=12345;
Typ=2 Len=2: 194,6
I hope someone can explain the underlying mechanism.
Try this:
Select To_Number(trim(qty)) From my_table Where Id=12345;