I have a column X which is full of floats with decimals places ranging from 0 (no decimals) to 6 (maximum). I can count on the fact that there are no floats with greater than 6 decimal places. Given that, how do I make a new column such that it tells me how many digits come after the decimal?
I have seen some threads suggesting that I use CAST to convert the float to a string, then parse the string to count the length of the string that comes after the decimal. Is this the best way to go?
You can use something like this:
declare @v sql_variant
set @v=0.1242311
select SQL_VARIANT_PROPERTY(@v, 'Scale') as Scale
This will return 7.
I tried to make the above query work with a float column but couldn't get it working as expected. It only works with a sql_variant column as you can see here: http://sqlfiddle.com/#!6/5c62c/2
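For what it's worth, here's a minimal sketch of what I believe is going on (the variable name is just for illustration): once the value is typed as float, the variant's base type is float, whose reported scale is 0, so the trick no longer counts the decimal digits.
declare @f float = 0.1242311
-- hedged illustration: 'Scale' reflects the variant's base type (float), not the digits,
-- so this should return 0 rather than 7
select SQL_VARIANT_PROPERTY(cast(@f as sql_variant), 'Scale') as Scale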
So, I proceeded to find another way and building upon this answer, I got this:
SELECT value,
LEN(
CAST(
CAST(
REVERSE(
CONVERT(VARCHAR(50), value, 128)
) AS float
) AS bigint
)
) as Decimals
FROM Numbers
Here's a SQL Fiddle to test this out: http://sqlfiddle.com/#!6/23d4f/29
One little quirk: the query above breaks down when the float value has no decimal part. Here's a modified version that handles that case:
SELECT value,
Decimals = CASE Charindex('.', value)
WHEN 0 THEN 0
ELSE
Len (
Cast(
Cast(
Reverse(CONVERT(VARCHAR(50), value, 128)) AS FLOAT
) AS BIGINT
)
)
END
FROM numbers
Here's the accompanying SQL Fiddle: http://sqlfiddle.com/#!6/10d54/11
This thread is also using CAST, but I found the answer interesting:
http://www.sqlservercentral.com/Forums/Topic314390-8-1.aspx
DECLARE @Places INT
SELECT TOP 1000000 @Places = FLOOR(LOG10(REVERSE(ABS(SomeNumber)+1)))+1
FROM dbo.BigTest
and in ORACLE:
SELECT FLOOR(LOG(10,REVERSE(CAST(ABS(.56544)+1 as varchar(50))))) + 1 from DUAL
A float just represents a real number, and there is no meaning to the number of decimal places of a real number. In particular, the real number 3 can have six decimal places, 3.000000; it's just that all the decimal places are zero.
You may have a display conversion which is not showing the rightmost zero values in the decimal.
Note also that the reason there is a maximum of 6 decimal places is that the seventh is imprecise, so the display conversion will not commit to a seventh decimal place value.
Also note that floats are stored in binary, and they actually have binary places to the right of a binary point. The decimal display is an approximation of the binary rational in the float storage which is in turn an approximation of a real number.
So the point is, there really is no sense of how many decimal places a float value has. If you do the conversion to a string (say using the CAST) you could count the decimal places. That really would be the best approach for what you are trying to do.
I answered this before, but I can tell from the comments that it's a little unclear. Over time I found a better way to express this.
Consider pi as
(a) 3.141592653590
This appears to show pi to 11 decimal places (the 12th is a zero). It is really pi rounded to 12 decimal places, since pi to 16 decimal places is
(b) 3.1415926535897932
A computer or database stores values in binary. For a single precision float, pi would be stored as
(c) 3.1415927410125732421875
This is actually rounded up to the closest value that a single precision can store, just as we rounded in (a). The next lowest number a single precision can store is
(d) 3.141592502593994140625
So, when you try to count the number of decimal places, you are really trying to find the place after which all remaining decimal digits are zero. But because the number may have had to be rounded just to store it, the stored value no longer represents the original value exactly.
Rounding error is also introduced as mathematical operations are performed, including converting from decimal to binary when the number is entered and from binary back to decimal when the value is displayed.
You cannot reliably find the number of decimal places a number in a database has, because it was approximated so it would fit into a limited amount of storage. The difference between the real value (or even the exact binary value in the database) and its decimal display is itself rounded away, so there could always be further non-zero digits hiding beyond the point where the display shows zeros.
This is a solution for Oracle, but you get the idea. trunc() removes the decimal part in Oracle.
select *
from your_table
where (your_field*1000000 - trunc(your_field*1000000)) <> 0;
The idea of the query: will any decimals be left after you multiply by 1,000,000?
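Riffing on the same idea, here's a hedged sketch (your_table/your_field are the same placeholders as above) for testing a specific number of decimals, e.g. flagging values that have more than 4 decimal places:
select your_field
from your_table
-- shift 4 places; anything left after trunc() means there were more than 4 decimals
where (your_field*10000 - trunc(your_field*10000)) <> 0;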
Another way I found is
SELECT 1.110000 , LEN(PARSENAME(Cast(1.110000 as float),1)) AS Count_AFTER_DECIMAL
I've noticed that Kshitij Manvelikar's answer has a bug. If there are no decimal places, instead of returning 0, it returns the total number of characters in the number.
So improving upon it:
Case When (SomeNumber = Cast(SomeNumber As Integer)) Then 0 Else LEN(PARSENAME(Cast(SomeNumber as float),1)) End
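To see it in action, here's a quick hedged test harness (the inline VALUES list is just made-up sample data):
SELECT SomeNumber,
       CASE WHEN SomeNumber = CAST(SomeNumber AS INTEGER) THEN 0
            ELSE LEN(PARSENAME(CAST(SomeNumber AS FLOAT), 1))
       END AS Decimals
FROM (VALUES (3.0), (1.25), (0.123456)) AS t(SomeNumber);
-- expected: 0, 2, 6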
Here's another Oracle example. As I always warn non-Oracle users before they start screaming at me and downvoting etc... SUBSTR and INSTR have close equivalents in most SQL dialects, and the DUAL table can be replaced with any other table or created. Here's the link to the SQL Server blog where I copied the DUAL table code from: http://blog.sqlauthority.com/2010/07/20/sql-server-select-from-dual-dual-equivalent/
CREATE TABLE DUAL
(
DUMMY VARCHAR(1)
)
GO
INSERT INTO DUAL (DUMMY)
VALUES ('X')
GO
This query returns the length after the dot (decimal point).
The str can be converted with to_number(str) if required. You can also get the length of the string before the decimal point: change the code to LENGTH(SUBSTR(str, 1, dot_pos))-1 and remove the +1 in the INSTR part:
SELECT str, LENGTH(SUBSTR(str, dot_pos)) str_length_after_dot FROM
(
SELECT '000.000789' as str
, INSTR('000.000789', '.')+1 dot_pos
FROM dual
)
/
SQL>
STR STR_LENGTH_AFTER_DOT
----------------------------------
000.000789 6
You already have answers and examples about casting etc...
This question asks about regular SQL, but I needed a solution for SQLite. SQLite has neither a log10 function nor a reverse string function built in, so most of the answers here don't work. My solution is similar to Art's answer and, as a matter of fact, similar to what phan describes in the question body. It works by converting the floating point value (in SQLite, a "REAL" value) to text, and then counting the characters after the decimal point.
For a column named "Column" from a table named "Table", the following query will produce the count of each row's decimal places:
select
length(
substr(
cast(Column as text),
instr(cast(Column as text), '.')+1
)
) as "Column-precision" from "Table";
The code will cast the column as text, then get the index of a period (.) in the text, and fetch the substring from that point on to the end of the text. Then, it calculates the length of the result.
Remember to add LIMIT 100 if you don't want it to run over the entire table!
It's not a perfect solution; for example, it considers "10.0" as having 1 decimal place, even if it's only a 0. However, this is actually what I needed, so it wasn't a concern to me.
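If the column can also hold values with no decimal point at all (e.g. values stored as integers), here's a hedged variation, using the same hypothetical "Table"/"Column" names, that returns 0 in that case instead of the full length of the text:
select
  case
    when instr(cast(Column as text), '.') = 0 then 0  -- no '.', so no decimals
    else length(
      substr(
        cast(Column as text),
        instr(cast(Column as text), '.')+1
      )
    )
  end as "Column-precision" from "Table";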
Hopefully this is useful to someone :)
It probably doesn't work well for floats, but I used this approach as a quick and dirty way to find the number of significant decimal places in a decimal type in SQL Server. If the last parameter of the ROUND function is non-zero, it truncates rather than rounds.
CASE
WHEN col = round(col, 1, 1) THEN 1
WHEN col = round(col, 2, 1) THEN 2
WHEN col = round(col, 3, 1) THEN 3
...
ELSE null END
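A fuller hedged sketch of the same idea, assuming a hypothetical table dbo.SomeTable with a decimal column col of scale 6 (extend or shorten the ladder to match your column's declared scale):
SELECT col,
       CASE
            WHEN col = ROUND(col, 0, 1) THEN 0
            WHEN col = ROUND(col, 1, 1) THEN 1
            WHEN col = ROUND(col, 2, 1) THEN 2
            WHEN col = ROUND(col, 3, 1) THEN 3
            WHEN col = ROUND(col, 4, 1) THEN 4
            WHEN col = ROUND(col, 5, 1) THEN 5
            ELSE 6
       END AS decimal_places
FROM dbo.SomeTable;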
Related
I am trying to tidy up decimals in Presto: when everything after the decimal point is zero, the number should be displayed truncated (with no fractional part), and when there are digits 1 to 9 after the decimal point, the full decimal should be displayed. I have used the following query, but it does not do the job and I still end up with numbers having zeroes after the decimal point.
select column1,case when right(cast(column1 as varchar),7)='.000000' then truncate(column1) else column1 end from table1;
Casting to varchar pads extra zeroes to the right, which is why I used the extra zeroes after the decimal point in the expression above.
Please let me know what has to be done to truncate the decimal only when its fractional digits are all zeroes.
The thing is, truncate(x) → double returns x rounded to an integer by dropping the digits after the decimal point, but the result is still a double, not an integer. And displaying a double without non-significant zeroes is the GUI's job; it either displays them all or hides the non-significant zeroes. For example, when I am using Presto on Qubole, it does not display .000000 if there is nothing except 0s after the dot. So the problem is probably in the tool you are using.
For example, this works fine in Presto on Qubole:
with mydata as (
select 123.00000 as figure union all
select 123.0123 )
select case when regexp_like(cast(figure as varchar),'\d+\.0+$') then truncate(figure) else figure end
from mydata
Result:
123.0123
123
But in your GUI it may not work the same, because the value in the second row is not an integer; it is a decimal(8,5) (wrap it in the typeof() function and you will see), and the GUI decides how to display a decimal(8,5).
You said:
Using varchar pads extra zeroes to the right and hence are the extra
zeroes I have used in the above expression after the decimal point
No, the result of your expression is not varchar; the varchar is being implicitly converted back to decimal or double. Check it using typeof().
If you want it to work not depending on tool you are using, convert to varchar and transform explicitly:
select case when regexp_like(cast(figure as varchar),'\d\.0+$') --all zeroes, change according to your requirements
then regexp_replace(cast(figure as varchar),'\.0+$','') --remove fractional part
else cast(figure as varchar) --we need same type in case
end as result
from mydata
This is guaranteed to work because the result is varchar and is displayed as is.
All that expression can be simplified:
--remove .0+ if no 1-9 after dot:
select regexp_replace(cast(figure as varchar),'\.0+$','')
from mydata
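Putting it together with the sample data from above, a hedged end-to-end check:
with mydata as (
  select 123.00000 as figure union all
  select 123.0123 )
select regexp_replace(cast(figure as varchar), '\.0+$', '') as result
from mydata
-- expected: '123' and '123.0123', both returned as varchar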
SQL Server DECIMAL cast not working as intended.
To test with sample data, I created a table and inserted values into it.
Then I tried to cast these values to DECIMAL.
CREATE TABLE TEST_VAL
(
VAL float
)
SELECT * FROM TEST_VAL
Output:
VAL
----------
16704.405
20382.135
2683.135
SELECT CAST(VAL AS DECIMAL(15, 2)) AS NEWVAL
FROM TEST_VAL;
Output:
NEWVAL
-------------
16704.40
20382.13
2683.14
I expected the same formatting for all 3 values, but the third value comes back rounded up instead of down.
This is due to the nature of floating point numbers, which are inexact and stored in binary. But I want to demonstrate how this works.
The issue is that a decimal such as 0.135 cannot be represented exactly. As the floating point representation, it would typically be something like:
0.134999999234243423
(Note that these numbers as with all representations of values in this answer are made up. They are intended to be representative to make the point.)
The number of 9s is actually larger. And the subsequent digits are just representative. In this representation, we wouldn't see a problem with truncating the value. After all 0.1349999 should round to the same value as 0.13499.
In binary, this looks different:
0.11101000010101 10011 10011 10011 10011 . . .
---------------- --------------
~0.135 "arbitrary" repeating pattern
(Note: The values are made up!)
That is, the "infinite" portion of the binary representation is not a bunch of repeating 1s or repeating 0s; it has a pattern. This is analogous to the reciprocals of most integers in base 10. For instance, 1/7 has a repeating component of six digits, 142857. We tend to forget this because common reciprocals are either exact (1/5 = 0.2) or have a single repeating digit (1/6 = 0.166666...). 1/7 is the first case that is not so simple -- and almost all fractions are like this. For rational numbers there is always a repeating sequence regardless of base, and it is never longer than the divisor (the number on the bottom) minus 1.
We can think of this as: any representation, regardless of base, has some number of digits that repeat. For an exact representation the repeating portion is 0. For others it is rarely one digit; usually it is multiple digits, and it is a fun exercise in mathematics to characterize this. But all that matters here is that in binary the repeating portion is made of 1s and 0s.
Now, what is happening. A floating point number has three parts:
a magnitude. This is a number of bits that represent the exponent.
an integer portion, which is the number before the decimal point.
a fractional portion, which is the number after the decimal point.
(Actually, the last two are really one integer, but I find it much easier to explain this by splitting them into two components.)
Only a fixed number of bits are available for the last two portions. What does this look like? Once again, the representative patterns are something like this:
0.135 0 11101000010101100111001110
1.135 1 11101000010101100111001110
2.135 10 1110100001010110011100111
4.135 100 111010000101011001110011
8.135 1000 11101000010101100111001
16.135 10000 1110100001010110011100
-----------^ part before the decimal
------------------^ part after the decimal
Note: This is leaving off the magnitude (exponent) portion of the representation.
As you can see, digits get chopped off from the end. But sometimes it is 0 that gets chopped off -- so there is no change in the value being represented. And sometimes it is a 1. And there is a change.
With this, you might be able to see how the values essentially fluctuate, say:
0.135 --> 0.135000000004
1.135 --> 0.135000000004
2.135 --> 0.135000000004
4.135 --> 0.135000000001
8.135 --> 0.134999999997
16.135 --> 0.134999999994
These are then rounded differently, which is what you are seeing.
I put together this little db<>fiddle, so you can see how the rounding changes around powers of two.
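If you want to see the effect without the fiddle, here's a small hedged repro in T-SQL; the values come straight from the question, and some of them round up while others round down:
SELECT v AS original,
       CAST(v AS DECIMAL(15,2)) AS rounded
FROM (VALUES (CAST(16704.405 AS FLOAT)),
             (CAST(20382.135 AS FLOAT)),
             (CAST(2683.135  AS FLOAT))) AS t(v);
-- per the question, these come back as 16704.40, 20382.13 and 2683.14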
Perhaps this could be explained if we extend the precision of the three numbers in the first query:
16704.4050
20382.1349
2683.1351
Rounding each of the above to only two decimal places, which is what a cast to DECIMAL(10,2) would do, would yield:
16704.40
20382.13
2683.14
Would this be of use:
select CONVERT(DECIMAL(15,2), ROUND(VAL, 2, 1)) AS NEWVAL
from TEST_VAL;
Here is the DEMO for SQLServer 2012 : DEMO
First question: why are they not the same value?
Because their types are different. CAST(VAL AS DECIMAL(15,2)) formats to two decimal places (##.##), not three (##.###), so in your case the third value comes back rounded up.
Why not use the same type?
CREATE TABLE T
(
[VAL] DECIMAL(8,3)
);
INSERT INTO T ([VAL])
VALUES (16704.405), (20382.135), (2683.135);
SELECT * FROM T
Output:
VAL
-----------
16704.405
20382.135
2683.135
db<>fiddle here
or you can cast AS DECIMAL(8, 3)
SELECT CAST(VAL AS DECIMAL(8,3)) AS NEWVAL
FROM T;
We are doing some validation of data which has been migrated from one SQL Server to another SQL Server. One of the things that we are validating is that some numeric data has been transferred properly. The numeric data is stored as a float datatype in the new system.
We are aware that there are a number of issues with float datatypes, that exact numeric accuracy is not guaranteed, and that one cannot use exact equality comparisons with float data. We don't have control over the database schemas nor data typing and those are separate issues.
What we are trying to do in this specific case is verify that some ratio values were transferred properly. One of the specific data validation rules is that all ratios should be transferred with no more than 4 digits to the right of the decimal point.
So, for example, valid ratios would look like:
.7542
1.5423
Invalid ratios would be:
.12399794301
12.1209377
What we would like to do is count the number of digits to the right of the decimal point and find all cases where the float values have more than four digits to the right of it. We've been using the SUBSTRING, LEN, STR, and a couple of other functions to achieve this, and I am sure it would work if we had numeric fields typed as decimal which we were casting to char.
However, what we have found when attempting to convert a float to a char value is that SQL Server seems to always convert to decimal in between. For example, the field in question shows this value when queried in SQL Server Enterprise Manager:
1.4667
Attempting to convert to a string using the recommended function for SQL Server:
LTRIM(RTRIM(STR(field_name, 22, 17)))
Returns this value:
1.4666999999999999
The value which I would expect if SQL Server were directly converting from float to char (which we could then trim trailing zeroes from):
1.4667000000000000
Is there any way in SQL Server to convert directly from a float to a char without going through what appears to be an intermediate conversion to decimal along the way? We also tried the CAST and CONVERT functions and received similar results to the STR function.
SQL Server Version involved: SQL Server 2012 SP2
Thank you.
Your validation rule seems to be misguided.
An SQL Server FLOAT, or FLOAT(53), is stored internally as a 64-bit floating-point number according to the IEEE 754 standard, with 53 bits of mantissa ("value") plus an exponent. Those 53 binary digits correspond to approximately 15 decimal digits.
Floating-point numbers have limited precision, which does not mean that they are "fuzzy" or inexact in themselves, but that not all numbers can be exactly represented, and instead have to be represented using another number.
For example, there is no exact representation for your 1.4667, and it will instead be stored as a binary floating-point number that (exactly) corresponds to the decimal number 1.466699999999999892708046900224871933460235595703125. Correctly rounded to 16 decimal places, that is 1.4666999999999999, which is precisely what you got.
Since the "exact character representation of the float value that is in SQL Server" is 1.466699999999999892708046900224871933460235595703125, the validation rule of "no more than 4 digits to the right of the decimal point" is clearly flawed, at least if you apply it to the "exact character representation".
What you might be able to do, however, is to round the stored number to fewer decimal places, so that the small error at the end of the decimals is hidden. Converting to a character representation rounded to 15 instead of 16 places (remember those "15 decimal digits" mentioned at the beginning?) will give you 1.466700000000000, and then you can check that all decimals after the first four are zeroes.
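Here's a hedged sketch of that check (table and column names are placeholders): format the float to 15 decimal places with STR(), then require that everything after the first four decimals is zero.
SELECT field_name,
       CASE WHEN STR(field_name, 40, 15) LIKE '%.____' + REPLICATE('0', 11)
            THEN 'valid (4 decimals or fewer)'
            ELSE 'too many decimals'
       END AS validation
FROM dbo.ratios;  -- hypothetical table name
-- note: as discussed above, for values with more digits before the decimal
-- point you may need to round to fewer than 15 decimal places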
You can try using cast to varchar.
select case when
len(
substring(cast(col as varchar(100))
,charindex('.',cast(col as varchar(100)))+1
,len(cast(col as varchar(100)))
)
) = 4
then 'true' else 'false' end
from tablename
where charindex('.',cast(col as varchar(100))) > 0
For this particular number, don't use STR(), and use a convert or cast to varchar. But, in general, you will always have precision issues when storing in float... it's the nature of the storage of that datatype. The best you can do is normalize to a NUMERIC type and compare with threshold ranges (+/- .0001, for example). See the following for a breakdown of how the different conversions work:
declare @float float = 1.4667
select @float,
convert(numeric(18,4), @float),
convert(nvarchar(20), @float),
convert(nvarchar(20), convert(numeric(18,4), @float)),
str(@float, 22, 17),
str(convert(numeric(18,4), @float)),
convert(nvarchar(20), convert(numeric(18,4), @float))
Instead of casting to a VarChar you might try this: cast to a decimal with 4 fractional digits and check if it's the same value as before.
case when field_name <> convert(numeric(38,4), field_name)
then 1
else 0
end
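For the validation described in the question, the same comparison drops straight into a WHERE clause; a hedged usage example with placeholder names:
-- list rows whose value changes when forced to 4 decimal places
SELECT field_name
FROM dbo.ratios          -- hypothetical table name
WHERE field_name <> CONVERT(NUMERIC(38,4), field_name);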
The issue you have here is that float is an approximate-number data type, with an accuracy of about 7 digits for float(24) and about 15 digits for float(53). That means it approximates the value while using less storage than a decimal/numeric. That's why you don't use float for values that require exact precision.
Check this example:
DECLARE @t TABLE (
col FLOAT
)
INSERT into @t (col)
VALUES (1.4666999999999999)
,(1.4667)
,(1.12399794301)
,(12.1209377);
SELECT col
, CONVERT(NVARCHAR(MAX),col) AS chr
, CAST(col as VARBINARY) AS bin
, LTRIM(RTRIM(STR(col, 22, 17))) AS rec
FROM @t
As you can see, the floats 1.4666999999999999 and 1.4667 are binary-equal. For your stated needs, I think this query would fit:
SELECT col
, RIGHT(CONVERT(NVARCHAR(MAX),col), LEN(CONVERT(NVARCHAR(MAX),col)) - CHARINDEX('.',CONVERT(NVARCHAR(MAX),col))) AS prec
from @t
I am using LIKE to return matching numeric results against a float field. It seems that once there are more than 4 digits to the left of the decimal, values that match my search item on the right side of the decimal are not returned. Here's an example illustrating the situation:
CREATE TABLE number_like_test (
num [FLOAT] NULL
)
INSERT INTO number_like_test (num) VALUES (1234.56)
INSERT INTO number_like_test (num) VALUES (3457.68)
INSERT INTO number_like_test (num) VALUES (13457.68)
INSERT INTO number_like_test (num) VALUES (1234.76)
INSERT INTO number_like_test (num) VALUES (23456.78)
SELECT num FROM number_like_test
WHERE num LIKE '%68%'
That query does not return the record with the value of 13457.68, but it does return the record with the value of 3457.68. Also, running the query with 78 instead of 68 does not return the 23456.78 record, but using 76 returns the 1234.76 record.
So, to get to the question: why does having a larger number cause these results to change? How can I change my query to get the expected results?
The like operator requires a string as a left-hand value. According to the documentation, a conversion from float to varchar can use several styles:
Value Output
0 (default) A maximum of 6 digits. Use in scientific notation, when appropriate.
1 Always 8 digits. Always use in scientific notation.
2 Always 16 digits. Always use in scientific notation.
The default style will work fine for the six digits in 3457.68, but not for the seven digits in 13457.68. To use 16 digits instead of 6, you could use convert and specify style 2. Style 2 represents a number like 3.457680000000000e+003, but then a search involving the first two digits would fail, because the decimal point now sits between them, and you get an unexpected +003 exponent for free.
The best approach is probably a conversion from float to decimal. That conversion allows you to specify the precision and scale. Using precision 20 and scale 10, the float is represented as 3457.6800000000:
where convert(decimal(20,10), num) like '%68%'
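Applied to the sample table from the question, the full query would be something like this (hedged, not tested against your data):
SELECT num
FROM number_like_test
WHERE convert(decimal(20,10), num) LIKE '%68%'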
When you compare a number with LIKE, it is implicitly converted to a string and then matched.
The problem here is that float numbers are not precise, and when one is converted you can get
13457.679999999999999 instead of 13457.68
So to avoid this, explicitly format the number in an appropriate format (I'm not sure how to do this in SQL Server, but it will be something like):
SELECT num FROM number_like_test
WHERE Format("0.##",num) LIKE '%68%'
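In SQL Server specifically, a hedged guess at the equivalent would be the FORMAT function, available from SQL Server 2012 on (note the argument order: value first, then the format string):
SELECT num FROM number_like_test
WHERE FORMAT(num, '0.##') LIKE '%68%'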
The conversion to string is rounding your values. Both CONVERT and CAST have the same behavior.
SELECT cast(num as nvarchar(50)) as s
FROM number_like_test
Or
SELECT convert(nvarchar(50), num) as s
FROM number_like_test
provide the results:
1234.56
3457.68
13457.7
1234.76
23456.8
You'll have to use the STR function and correct format parameters to try to get your results. For example,
SELECT STR(num, 10, 2) as s
FROM number_like_test
gives:
1234.56
3457.68
13457.68
1234.76
23456.78
Pretty well solved already, but you only need to CAST once, not twice like the other answer suggests; LIKE takes care of the string conversion:
SELECT *
FROM number_like_test
WHERE CAST(num AS DECIMAL(12,6)) LIKE '%68%'
And here's a SQL Fiddle showing the rounding behavior: SQL Fiddle
It's probably because a FLOAT data type represents a floating point number which is an approximation of the number and should not be relied on for exact comparisons.
If you need to do a search that includes the float value you would need to either store it in a decimal data type (which will hold the exact number) or convert it to a varchar using something like the STR() function
What if somebody made a column as VARCHAR2(256 CHAR) and there are only numbers in this column? I would like to get the highest number. The problem is: the numbers go above 999999, but MAX on this varchar column always gives me 999999.
I tried to_number(max(numbers), '9999999999999') but I still get 999999 back, and that can't be right. Any ideas? Thank you.
The best way is to:
First solution
convert the column to a numeric type
or
Second solution
convert the data to numbers inside your query and then get the max...
Example
select max(col1) from(
select to_number(numbers) as col1 from table_name ) d
It has to be this way because if you call MAX() before TO_NUMBER(), it sorts alphabetically, and alphabetically 999999 is bigger than 100000000000. Note that applying TO_NUMBER() to a varchar2 column incurs the risk of an INVALID_NUMBER exception should the column contain any non-numeric characters. This is why the first proposed solution is to be preferred.
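A tiny hedged illustration of the alphabetic-versus-numeric comparison (runnable against DUAL):
SELECT GREATEST('999999', '100000000000') AS as_strings,  -- '999999' wins alphabetically
       GREATEST(999999, 100000000000)     AS as_numbers   -- 100000000000 wins numerically
FROM dual;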
In Oracle, the NUMBER type contains base 100 floating point values which have a precision of 38 significant digits, and a max value of 9999...(38 9's) x 10^125. There are two questions at issue - the first is whether a NUMBER can contain a value converted from a 256 character string, and the second is if two such values which are 'close' in numeric terms can be distinguished.
Let's start with taking a 256 character string and trying to convert it to a number. The obvious thing to do is:
SELECT TO_NUMBER('9999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999') AS VAL
FROM DUAL;
Executing the above we get:
ORA-01426: numeric overflow
which, having paid attention earlier, we expected. The largest exponent that a NUMBER can handle is 125 - and here we're trying to convert a value with 256 significant digits. NUMBER's can't handle this. If we cut the number of digits down to 125, as follows:
SELECT TO_NUMBER('99999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999') AS VAL
FROM DUAL;
It works fine, and our answer is 1E125.
<blink>
WHOA! WAIT!! WHAT??? The answer is 1 x 10^125??? What about all those 9's?!?!?!?
Remember earlier I'd mentioned that an Oracle NUMBER is a floating point value with a maximum precision of 38 and a maximum exponent of 125. From the point of view of TO_NUMBER 125 9's all strung together can't be exactly represented - too many digits (remember, max. precision of 38 (more on this later)). So it does the absolute best it can - it converts the first 38 digits (all of which are 9's) and then says "How should I best round this off to make the result A) representative of the input and B) as close as I can get to what I was given?". In this case it looks at digit 39, sees that it's a 9, and decides to round upward. As all the other digits are also 9's, it continues rounding neatly until it ends up with 1 as the remaining mantissa digit.
* Later, back at the ranch... *
OK, earlier I'd mentioned that NUMBER has a precision of 38 digits. That's not entirely true - it can actually differentiate between values with up to 40 digits of precision, at least sometimes, if the wind is right, and you're going downhill. Here's an example:
SELECT CASE
WHEN to_number('9999999999999999999999999999999999999999') >
to_number('9999999999999999999999999999999999999998')
THEN 'Greater'
ELSE 'Not greater'
END AS VAL
FROM DUAL;
Those two values each have 40 digits (counting is left as an exercise to the extremely bored reader :-). If you execute the above you'll get back 'Greater', showing that the comparison of two 40 digit values succeeded.
Now for some fun. If you add an additional '9' to each string, making for a 41 digit value, and re-execute the statement it'll return 'Not greater'.
<blink>
WAIT! WHAT?? WHOA!!! Those values are obviously different! Even a TotalFool (tm) can see that!!
The problem here is that a 41 digit number exceeds the precision of the NUMBER type, and thus when TO_NUMBER finds it has a value this long it starts discarding digits on the right side. Thus, even though those two really big numbers are clearly different to you and me, they're not different at all once they've been folded, spindled, mutilated, and converted.
So, what are the takeaways here?
1 - To the OP's original question - you'll have to come up with another way to compare your number strings besides using NUMBER because Oracle's NUMBER type can't hold 256 digit values. I suggest that you normalize the strings by making sure ALL the values are 256 digits long, adding zeroes on the left as needed, and then a string comparison should work OK.
2 - Floating point numbers prove the existence of (your favorite deity/deities here) by negation, as they are clearly the work of (your favorite personification of evil here). Whenever you work with them (as we all have to, sooner or later) you should remember that they are the foul byproducts of malignant evil, waiting to lash out at you when you least expect it.
3 - There is NO point three! (And extra credit for those who can identify without resorting to an extra-cranial search engine where this comes from :-)
Share and enjoy.
If you mean that the numbers in the column can be that big (256 digits), you could try something like this:
SELECT numbers
FROM (
SELECT numbers
FROM table_name
ORDER BY LPAD(numbers, 256) DESC
)
WHERE rownum = 1
or like this:
SELECT LTRIM(MAX(LPAD(numbers, 256))) AS numbers
FROM table_name