I have a COM object with a function with an optional last argument. The IDL is a bit like this:
interface ICWhatever: IDispatch
{
[id(96)] HRESULT SomeFunction([in, defaultvalue(50.6)] float parameter);
};
This works fine: if I don't specify the parameter, 50.6 is filled in.
But in several development environments (Excel VBA, VB6) the default value is rounded before display. After typing the opening parenthesis, I see:
SomeFunction([parameter As Single = 51])
Does anyone know why this is? Is it a bug? This will confuse client programmers...
I was able to reproduce the problem you experienced (in VBA), and it does indeed appear to be a bug in the treatment of the Single type by (specifically) the VB IDEs. Namely, the VB IDEs improperly convert the Single default value to int before printing it out again as part of the method signature, so what gets displayed is that integer rather than the original single-precision value.
This problem does not exist in the Microsoft Script Editor, nor does it exist in OleView.exe etc.
To test, try the following Single default value: 18446744073709551615.0. In my case, this value is properly encoded in the TLB and properly displayed by OleView.exe and by Microsoft Script Editor as 1.844674E+19. However, it gets displayed as -2.147484E+09 in the VB IDEs. Indeed, casting (float)18446744073709551615.0 to int produces -2147483648 which, displayed as float, produces the observed (incorrect) VB IDE output -2.147484E+09.
Similarly, 50.6 gets converted to int, producing 51, which is then printed out as the default.
To work around this issue use Double instead of Single, as Double is converted and displayed properly by all IDEs I was able to test.
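For example, a minimal sketch of the same IDL with the parameter widened to double (everything else unchanged):

interface ICWhatever : IDispatch
{
    [id(96)] HRESULT SomeFunction([in, defaultvalue(50.6)] double parameter);
};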
On a tangent, you are probably already aware that certain floating point values (such as 0.1) do not have an exact IEEE 754 representation and cannot be distinguished from nearby values (e.g. 0.1000000015). Thus a default value of 0.1 will be displayed in most IDEs as 0.100000001490116. One way to alleviate this precision issue is to choose a different scale for your parameters: for example, switch from seconds to milliseconds, so that 0.1 seconds becomes 100 milliseconds, which is unambiguously representable as a single- or double-precision floating point value as well as an integer.
SBLineEntry is a proxy object in the LLDB Python interface. SBLineEntry.GetColumn() returns a position within a line, but I am not sure what it actually means.
On the C++ side it resolves to the LineEntry.column value, but that likewise doesn't say what unit the column is measured in.
At first I assumed it was a UTF-8 code unit offset, but it seems it isn't: when I measure it, it looks like a UTF-16 code unit offset. I still couldn't find any definition for this value.
What is this value?
Raw byte offset in source code file?
UTF-8 code unit offset?
UTF-16 code unit offset?
Something else?
That's a good question! If the debug information is DWARF (which it is everywhere except on Windows systems), lldb provides the DW_LNS_set_column data from the DWARF line table as the number returned by SBLineEntry::GetColumn(). The DWARF 5 specification doesn't say what this integer is counting -- it only says:
The DW_LNS_set_column opcode takes a single unsigned LEB128 operand and stores it in the column register of the state machine.
You're probably seeing that clang puts the UTF-16 code unit offset in the DWARF, but the standard doesn't require that. This would be a reasonable clarification request to file with the DWARF standards committee, http://dwarfstd.org
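For reference, here is a minimal sketch of reading that value through the LLDB Python API; "./a.out" and "main" are placeholder names, and it simply prints whatever column number the line table recorded:

import lldb

# Minimal sketch: print the file, line, and column recorded for each breakpoint location.
lldb.SBDebugger.Initialize()
debugger = lldb.SBDebugger.Create()
target = debugger.CreateTarget("./a.out")          # placeholder binary
bp = target.BreakpointCreateByName("main")         # placeholder function
for loc in bp:
    entry = loc.GetAddress().GetLineEntry()        # SBLineEntry
    print(entry.GetFileSpec(), entry.GetLine(), entry.GetColumn())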
For Rust programs, I think it's a Unicode scalar value offset.
Here's an open issue about the column number. It says the span_start function produces the column number.
span_start calls lookup_char_pos, and lookup_char_pos calls bytepos_to_file_charpos.
These names keep repeating the word "char", and in Rust, char means a Unicode scalar value.
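To make the distinction concrete, here is a small Python sketch with a made-up source line, showing how the three candidate offsets diverge for the same position:

line = 'let crab = "🦀"; foo(crab)'                 # hypothetical source line
prefix = line[:line.index("foo")]
print("Unicode scalar offset:", len(prefix))                           # counts 🦀 as 1
print("UTF-8 byte offset:", len(prefix.encode("utf-8")))               # counts 🦀 as 4
print("UTF-16 unit offset:", len(prefix.encode("utf-16-le")) // 2)     # counts 🦀 as 2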
Edit/Update:
Thank you all for responding. I understand I was being too vague, but wasn't sure if posting naked lines of code would be useful in this case.
In my .vb file I have a pulldown control with its validation values as:
TempUnit.DataSource = {"°C", "°F", "°R", "K"}
...which is stored in a variable:
Dim unit As String = TempUnit.SelectedItem.ToString
...which gets passed into a function along with other variables:
Function xxx(..., ByVal unitT As String) As Double
... which finally calls the .fs file and gets evaluated using:
let tempConv t u =
    match u with
    | "°C" -> t * 9.0 / 5.0 + 32.0
    | "°R" -> t - 459.67
    | "K"  -> t * 9.0 / 5.0 - 459.67
    | _    -> t
If any temperature unit other than Kelvin is selected, the match fails and defaults to the else case (which is Fahrenheit in this context). I ended up bypassing the degree symbol entirely by evaluating the substring instead:
Dim unit As String = TempUnit.SelectedItem.ToString.Substring(1)
The program is working again, but I have no idea what I changed, if anything, to make the string match stop working. The first thing I tried was to copy/paste from one file to the other to ensure they were identical strings, in addition to trying other symbols, but to no avail. The degree symbol is what caught my attention, but then I checked the pressure units and found the exact same issue with the micro prefix.
Thank you, Hans Passant. I had Unicode in mind as a possible solution, but it didn't seem like an easy fix in the heat of the moment. I appreciate your link.
Original Post:
I have a VB program referencing a function stored in an F# library file whose arguments include unit of measure strings containing special characters (e.g. "°C" "µBar").
The strings are identical in the .vb and .fs files; and there was no issue until the F# library file stopped recognizing the Alt-Code characters for reasons unbeknownst to me.
The program works as intended if I remove the offending Alt-Code character from the string definitions in the F# and VB files.
What would cause a match to fail between two identical strings that happen to contain an Alt-Code character?
What is the proper way to handle Alt-Code characters in F# (and VB for that matter)?
The µ glyph is a bit infamous. Unicode has two codepoints that look like that: U+03BC = "Greek small letter Mu" and U+00B5 = "Micro sign". One is a letter in the Greek alphabet, the other is a symbol that often appears in math and units.
Compare μ and µ. They look almost identical in most fonts (you can see the difference with Segoe UI) and very easily fool the human eye. Typographers insist they are not the same, particularly if they are Greek, I'd imagine. A computer doesn't consider them the same either, which is surely the problem you are dealing with.
Copy/paste or re-type to fix. The Charmap.exe applet in Windows is very handy to get this right.
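To see this in code, here is a minimal F# sketch with the codepoints written as escapes so the editor cannot silently substitute one for the other; the Normalize call shows one possible workaround, assuming compatibility normalization (NFKC) is acceptable for your input:

open System.Text

let microSign = "\u00B5Bar"   // MICRO SIGN, what Alt+0181 / Charmap typically inserts
let greekMu   = "\u03BCBar"   // GREEK SMALL LETTER MU
printfn "%b" (microSign = greekMu)                                       // false: different codepoints
printfn "%b" (microSign.Normalize(NormalizationForm.FormKC) = greekMu)   // true: NFKC folds U+00B5 into U+03BC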
I'm using VB.NET, writing a winforms application where I'm trying to convert a denary real number to a signed floating-point binary number, as a string representation. For example, 9.125 would become "0100100100000100" (the first ten digits are the significand and the last six the exponent).
I can write a function for this if I have to, but I'd rather not waste time if there's a built-in functionality available. I know there's some ToString overload or something that works on Integers, but I haven't been able to find anything that works on Doubles.
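There is no built-in conversion for a custom 16-bit layout like this, so a hand-rolled function is probably unavoidable. Below is an unverified sketch that reproduces the 9.125 example above; it assumes the significand is a 10-bit two's-complement fixed-point fraction (sign bit plus nine fraction bits) and the exponent a 6-bit two's-complement integer, which may not match the exact format you need:

' Hedged sketch: Double -> 16-character bit string in the assumed format
' (10-bit two's-complement significand followed by a 6-bit two's-complement exponent).
Function ToCustomFloat(ByVal value As Double) As String
    If value = 0 Then Return New String("0"c, 16)

    Dim negative As Boolean = value < 0
    Dim mag As Double = Math.Abs(value)

    ' Choose the exponent so that the fraction lies in [0.5, 1)
    Dim e As Integer = CInt(Math.Ceiling(Math.Log(mag, 2)))
    If mag / Math.Pow(2, e) >= 1 Then e += 1
    Dim frac As Double = mag / Math.Pow(2, e)

    ' Nine fraction bits after the sign bit
    Dim fracBits As Integer = CInt(Math.Floor(frac * 512))
    Dim significand As Integer = If(negative, 1024 - fracBits, fracBits)
    Dim exponent As Integer = e And 63

    Return Convert.ToString(significand, 2).PadLeft(10, "0"c) & Convert.ToString(exponent, 2).PadLeft(6, "0"c)
End Function

For 9.125 this should produce "0100100100000100", matching the example; negative values and rounding behaviour would need checking against the real format definition.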
Please forgive me, I haven't used this site very much! I am working in Visual Studio with Visual Basic. I finished programming my project with Option Strict Off, then when I turned Option Strict on, I was alerted that this code was wrong:
Const TAX_Decimal As Decimal = 0.07
The explanation was that "Option Strict On disallows implicit conversions from 'Double' to 'Decimal'"
But I thought I had declared it as a decimal! It made me change it to:
Const TAX_Decimal As Decimal = CDec(0.07)
The only thing I did with this constant was multiply it by a decimal and save the result in a variable declared as a decimal!
Can someone tell me why this is happening?
Double is 8 bytes and Decimal is 16 bytes. Option Strict prevents automatic type conversion. By default, if you write a number with a decimal point in VB.NET it is treated as a Double, not a Decimal. To say Decimal you have to append a literal type character (for Decimal in VB.NET it is D), so if you declare
Const VAR As Decimal = 0.07D
then you won't require the cast.
When the compiler sees a numeric literal, it selects a type based upon the size of the number, punctuation marks, and suffix (if any), and then translates the sequence of characters in it to that type; all of this is done without regard for what the compiler is going to do with the number. Once this is done, the compiler will only allow the number to be used as its own type, explicitly cast to another type, or implicitly converted to another type in the two cases described below.
If the number is interpreted as any integer type (int, long, etc.) the compiler will allow it to be used to initialize any integer type in which the number is representable, as well as any binary or decimal floating-point type, without regard for whether or not the number can be represented precisely in that type.
If the number is type Single [denoted by an f suffix], the compiler will allow it to be used to initialize a Double, without regard for whether the resulting Double will accurately represent the literal with which the Single was initialized.
Numeric literals of type Double [including a decimal point, but with no suffix] or Decimal [a "D" suffix not followed immediately by a plus or minus] cannot be used to initialize a variable of any other type, even if the number would be representable precisely in the target type, or the result would be the target type's best representation of the numeric literal in question.
Note that conversions between type Decimal and the other floating-point types (double and float) should be avoided whenever possible, since the conversion methods are not very accurate. While there are many double values for which no exact Decimal representation exists, there is a wide numeric range in which Decimal values are more tightly packed than double values. One might expect that converting a double would choose the closest Decimal value, or at least one of the Decimal values which is between that number and the next higher or lower double value, but the normal conversion methods do not always do so. In some cases the result may be off by a significant margin.
If you ever find yourself having to convert Double to Decimal, you're probably doing something wrong. While there are some operations which are available on Double that are not available on Decimal, the act of converting between the two types means whatever Decimal result you end up with is apt to be less precise than if all computations had been done in Double.
Can anyone please help me get the float value exactly as entered in a text box?
For example, I entered 40.7:
float rate = [[rateField text] floatValue];
I am getting the value as 40.7000008 but I want 40.7 only.
Please help me.
Thanks in advance.
Thanks everybody,
I tried all the possibilities but I am not able to get what I want. I am not looking to print the value or convert it into a string; I want to use the value for computation. If I use a number formatter and then convert the number back to a float, it gives the same problem. So I want a float value only, but it should be exactly what I typed in the text box, not padded with any extra digits. This is my requirement. Please help me.
Thanks & regards,
Balu
This is OK. There is no guarantee that you will get exactly 40.7, even if you use a double.
If you want to output 40.7 you can use %.1f or an NSNumberFormatter.
Try using a double instead. Usually solves that issue. Has to do with the storage precision.
double dbl = [rateField.text doubleValue];
When using floating point numbers, these things can happen because of the way the numbers are stored in binary format in the computer's memory.
It's similar to the way 1/3 = 0.33333333333333... in decimal numbers.
The best way to deal with this is to use number formatters in the textbox that displays the value.
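As a concrete sketch (assuming rateField is a UITextField): keep the float for computation, and only format it when putting it back on screen:

float rate = [rateField.text floatValue];                 // stored as the nearest float, ~40.700001

NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init];
formatter.numberStyle = NSNumberFormatterDecimalStyle;
formatter.maximumFractionDigits = 1;
rateField.text = [formatter stringFromNumber:@(rate)];    // displays "40.7"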
You already have the float value as precisely as a float can hold it; 40.7 simply has no exact binary representation.
Floating point numbers have limited precision. Although it depends on the system, float relative error due to rounding will be around 1.1e-8. Non-elementary arithmetic operations may give larger errors, and, of course, error propagation must be considered when several operations are compounded.

Additionally, rational numbers that are exactly representable as floating point numbers in base 10, like 0.1 or 0.7, do not have an exact representation as floating point numbers in base 2, which is used internally, no matter the size of the mantissa. Hence, they cannot be converted into their internal binary counterparts without a small loss of precision. This can lead to confusing results: for example, floor((0.1+0.7)*10) will usually return 7 instead of the expected 8, since the internal representation will be something like 7.9999999999999991118....
So if you're using those numbers for output, you should use some rounding mechanism, even for double values.
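For instance, a quick check of the floor((0.1 + 0.7) * 10) example quoted above (plain C; the exact digits printed may differ slightly by platform):

#include <math.h>
#include <stdio.h>

int main(void) {
    double x = (0.1 + 0.7) * 10.0;                   // slightly less than 8 in IEEE 754 doubles
    printf("%.17g -> floor = %g\n", x, floor(x));    // prints floor = 7, not 8
    return 0;
}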