Represent Double as floating point binary string using built-in functions - vb.net

I'm using VB.NET, writing a winforms application where I'm trying to convert from a denary real number to a signed floating point binary number, as a string representation. For example, 9.125 would become "0100100100000100" (the first ten digits are the significand and the last six digits the exponent).
I can write a function for this if I have to, but I'd rather not waste time if there's a built-in functionality available. I know there's some ToString overload or something that works on Integers, but I haven't been able to find anything that works on Doubles.
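As far as I know there is no single built-in call that produces that custom ten-bit significand / six-bit exponent layout, but the following minimal VB.NET sketch (the module and variable names are just for illustration) shows how BitConverter.DoubleToInt64Bits and Convert.ToString can at least expose the raw IEEE 754 bit pattern of a Double, which you could then repack into your own format:
Imports System

Module BitPatternSketch
    Sub Main()
        Dim value As Double = 9.125
        ' Reinterpret the eight bytes of the Double as a 64-bit integer.
        Dim bits As Long = BitConverter.DoubleToInt64Bits(value)
        ' Format as a 64-character binary string: sign, 11-bit exponent, 52-bit mantissa.
        Console.WriteLine(Convert.ToString(bits, 2).PadLeft(64, "0"c))
    End Sub
End Module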

Related

Kotlin: Convert Hex String to signed integer via signed 2's complement?

Long story short, I am trying to convert strings of hex values to signed 2's complement integers. I was able to do this in a single line of code in Swift, but for some reason I can't find anything analogous in Kotlin. String.toInt or String.toUInt just gives the straight base 16 to base 10 conversion. That works for some positive values, but not for any negative numbers.
How do I know I want the signed 2's complement? I've used this online converter and according to its output, what I want is the decimal from signed 2's complement, not the straight base 16 to base 10 conversion that's easy to do by hand.
So, "FFD6" should go to -42 (correct, confirmed in Swift and C#), and "002A" should convert to 42.
I would appreciate any help or even any leads on where to look. Because yes, I've searched; I've googled the problem a bunch, and no, I haven't found a good answer.
I actually tried writing my own code to do the signed 2's complement, but so far it's not giving me the right answers and I'm pretty much at a loss. I'd really hope for a built-in command that does it instead; I feel like if other languages have that capability, Kotlin should too.
For 2's complement, you need to know how big the type is.
Your examples of "FFD6" and "002A" both have 4 hex digits (i.e. 2 bytes).  That's the same size as a Kotlin Short.  So a simple solution in this case is to parse the hex to an Int and then convert that to a Short.  (You can't convert it directly to a Short, as that would give an out-of-range error for the negative numbers.)
"FFD6".toInt(16).toShort() // gives -42
"002A".toInt(16).toShort() // gives 42
(You can then convert back to an Int if needed.)
You could similarly handle 8-digit (4-byte) values as Ints, and 2-digit (1-byte) values as Bytes.
For other sizes, you'd need to do some bit operations.  Based on this answer for Java, if you have e.g. a 3-digit hex number, you can do:
("FD6".toInt(16) xor 0x800) - 0x800 // gives -42
(Here 0x800 is the three-digit number with the top bit (i.e. sign bit) set.  You'd use 0x80000 for a five-digit number, and so on.  Also, for 9–16 digits, you'd need to start with a Long instead of an Int.  And if you need >16 digits, it won't fit into a Long either, so you'd need an arbitrary-precision library that handled hex…)

Option Strict On and Constant in Visual Basic?

Please forgive me, I haven't used this site very much! I am working in Visual Studio with Visual Basic. I finished programming my project with Option Strict Off, then when I turned Option Strict on, I was alerted that this code was wrong:
Const TAX_Decimal As Decimal = 0.07
The explanation was that "Option Strict On disallows implicit conversions from 'Double' to 'Decimal'"
But I thought I had declared it as a decimal! It made me change it to:
Const TAX_Decimal As Decimal = CDec(0.07)
The only thing I did with this constant was multiply it by a decimal and save the result to a variable declared as a decimal!
Can someone tell me why this is happening?
Double is 8 bytes and Decimal is 16 bytes. Option Strict On prevents implicit type conversions. By default, if you write a number with a decimal point in VB.NET it is treated as a Double, not a Decimal. To make the literal a Decimal you have to add a type suffix (in VB.NET the suffix for Decimal is D), so if you declare
Const VAR As Decimal = 0.07D
then you won't require casting.
When the compiler sees a numeric literal, it selects a type based upon the size of the number, punctuation marks, and suffix (if any), and then translates the sequence of characters in it to that type; all of this is done without regard for what the compiler is going to do with the number. Once this is done, the compiler will only allow the number to be used as its own type, explicitly cast to another type, or in the two cases defined below implicitly converted to another type.
If the number is interpreted as any integer type (int, long, etc.) the compiler will allow it to be used to initialize any integer type in which the number is representable, as well as any binary or decimal floating-point type, without regard for whether or not the number can be represented precisely in that type.
If the number is type Single [denoted by an f suffix], the compiler will allow it to be used to initialize a Double, without regard for whether the resulting Double will accurately represent the literal with which the Single was initialized.
Numeric literals of type Double [including a decimal point, but with no suffix] or Decimal [a "D" suffix not followed immediately by a plus or minus] cannot be used to initialize a variable of any other type, even if the number would be representable precisely in the target type, or the result would be the target type's best representation of the numeric literal in question.
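To make the cases above concrete, here is a minimal sketch (the variable names are mine) of what Option Strict On accepts and rejects:
' Assumes Option Strict On is in effect.
Dim d1 As Double = 123          ' OK: integer literal widens to Double
Dim m1 As Decimal = 123         ' OK: integer literal widens to Decimal
Dim d2 As Double = 0.5F         ' OK: Single literal widens to Double
' Dim m2 As Decimal = 0.07      ' Error: implicit conversion from Double to Decimal
Dim m3 As Decimal = 0.07D       ' OK: the literal is already a Decimal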
Note that conversions between type Decimal and the other floating-point types (double and float) should be avoided whenever possible, since the conversion methods are not very accurate. While there are many double values for which no exact Decimal representation exists, there is a wide numeric range in which Decimal values are more tightly packed than double values. One might expect that converting a double would choose the closest Decimal value, or at least one of the Decimal values which is between that number and the next higher or lower double value, but the normal conversion methods do not always do so. In some cases the result may be off by a significant margin.
If you ever find yourself having to convert Double to Decimal, you're probably doing something wrong. While there are some operations which are available on Double that are not available on Decimal, the act of converting between the two types means whatever Decimal result you end up with is apt to be less precise than if all computations had been done in Double.

What is the rationale behind "0xHHHHHHHH" formatted Microsoft error codes?

Why does Microsoft tend to report "error codes" as hexadecimal values?
Error codes are 32-bit double-word values (4-byte values). This is likely the raw integer return code of whatever C-style function has reported an error.
However, why report the error to a user in hexadecimal? The "0x" prefix is worthless, and the savings in character length is minimal. These errors end up displayed to end users in Microsoft software and even on Microsoft websites.
For example:
0x80302010 is 10 characters long, and very cryptic.
2150637584 is the decimal equivalent, and much more user friendly.
Is there any description of the "standard" use of a 32-bit field as an error code mechanism (possibly dividing the field into multiple fields for developer interpretation) or of the logic behind presenting a hexadecimal code to end users?
We can only guess about the reason, so this question cannot be answered for sure. But let's guess:
One reason might be that with hex numbers, you know the number will have 8 digits. If it has more or fewer digits, the number is "corrupt" (for example, the customer mistyped it). With decimal numbers, the number of digits for the same value varies.
Also, to a developer, hex numbers are more convenient and natural than decimal numbers. For example, if some info is coded as bit flags you can decipher them manually easily in hex numbers but not in decimal numbers.
It is a little bit subjective as to whether hexadecimal or decimal error codes are more user friendly. Here is a scenario where the hexadecimal error codes are significantly more convenient, which could be part of the reason that hexadecimal error codes are used in the first place.
Consider the documentation for Win32 Error Codes for Active Directory Service Interfaces: ADSI uses error codes with the format 0x8007XXXX, where the XXXX corresponds to a DWORD value that maps to a Win32 error code.
This makes it extremely easy to get the corresponding Win32 error code, because you can just read off the last four hex digits. This would not be possible with a decimal representation of the error code.
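As a rough illustration (VB.NET here, matching the main question; the specific value is just an example of the 0x8007XXXX pattern), the same digit-stripping is a one-line mask in code, and it only reads naturally because the constant is written in hex:
' Sketch: extract the Win32 error code from an 0x8007XXXX-style HRESULT.
Dim hresult As Integer = &H80070005       ' an access-denied failure HRESULT
Dim win32Code As Integer = hresult And &HFFFF
Console.WriteLine(win32Code)              ' prints 5 (ERROR_ACCESS_DENIED)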
The middle ground answer to this would be that formatting the number like an IPv4 address would be more luser-friendly while preserving some sort of formatting that helps the dev guys.
Although TBH I think hex is fine, the hypothetical non-technical user has no more idea what 0x1234ABCD means than 1234101112 or "Cracked gangle pin on fwip valve".

Use of byte arrays and hex values in Cryptography

When we use cryptography, we always see byte arrays being used instead of String values. But when we look at the techniques of most cryptography algorithms, they use hex values for their operations. E.g. AES: MixColumns, SubBytes; all these techniques (I suppose) use hex values in those operations.
Can you explain how these byte arrays are used in these operations as hex values?
I have an assignment to develop an encryption algorithm, so any related sample code would be much appreciated.
Every four binary digits make one hexadecimal digit, so you can convert back and forth quite easily (see: http://en.wikipedia.org/wiki/Hexadecimal#Binary_conversion).
I don't think I fully understand what you're asking, though.
The most important thing to understand about hexadecimal is that it is a system for representing numeric values, just like binary or decimal. It is nothing more than notation. As you may know, many computer languages allow you to specify numeric literals in a few different ways:
int a = 42;
int a = 0x2A;
These store the same value into the variable 'a', and a compiler should generate identical code for them. The difference between these two lines will be lost very early in the compilation process, because the compiler cares about the value you specified, and not so much about the representation you used to encode it in your source file.
Main takeaway: there is no such thing as "hex values" - there are just hex representations of values.
That all said, you also talk about string values. Obviously the string "42" is not the number 42, and the string "2A" is not the number 0x2A (even though 42 and 0x2A are the same value). If you have a string, you'll need to parse it to a numeric value before you do any computation with it.
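For example, a short sketch of that parsing step (VB.NET here, matching the main question at the top; the variable names are mine):
' Sketch: two different string representations parse to the same numeric value.
Dim fromDecimal As Integer = Integer.Parse("42")       ' 42
Dim fromHex As Integer = Convert.ToInt32("2A", 16)     ' also 42
Console.WriteLine(fromDecimal = fromHex)               ' True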
Bytes, byte arrays and/or memory areas are normally displayed within an IDE (integrated development environment) and debugger as hexadecimal. That is because it is the most efficient and clear representation of a byte. An experienced programmer can convert hex to bits in their head fairly easily, and you can clearly see how XOR and shifts work, for instance. Those (and addition) are the most common operations when doing symmetric encryption/hashing.
So it's unlikely that the program performs this kind of conversion; it's probably the environment you are in. That, and source code (which is converted to bytes at compile time) probably uses a lot of literals in hexadecimal notation as well.
Cryptography in general (hash functions aside) is a method of converting data from one form to another, mostly referred to as cipher text, using a secret key. The secret key can be applied to the cipher text to get back the original data, also referred to as plain text. In this process data is handled at the byte level, though it can be at the bit level as well. The point here is that the text or strings we are referring to fall within the limited range of a byte; for example, ASCII is defined over byte values 0 - 255. In practice, when a crypto operation is performed, each character is converted to its equivalent byte and the process is carried out using the key. The resulting byte or bytes will most probably fall outside the range of human-readable, defined text such as ASCII. For this reason, any data to which a crypto function needs to be applied is converted to a byte array first. For example, say the text to be enciphered is "Hello how are you doing?". The following steps would be followed:
1. byte[] data = "Hello how are you doing?".getBytes()
2. Encipher data using the key, which is also a byte[]
3. The output blob is referred to as cipherTextBytes[]
4. Encryption is complete
5. For decryption, a process using the key is performed over cipherTextBytes[], which returns the original data bytes
6. A simple new String(data) will return the string value "Hello how are you doing?"
This is just some simple info which might help you understand reference code and manuals better. In no way am I trying to explain the core of cryptography here.
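Purely as an illustration of the steps above, here is a minimal sketch (VB.NET, matching the main question; the key handling is simplified and not meant for real use) of the byte-array round trip using the built-in System.Security.Cryptography AES classes:
Imports System
Imports System.Security.Cryptography
Imports System.Text

Module AesRoundTripSketch
    Sub Main()
        Dim plainText As String = "Hello how are you doing?"
        ' Step 1: the string becomes a byte array before any crypto is applied.
        Dim data As Byte() = Encoding.UTF8.GetBytes(plainText)

        Using aes As Aes = Aes.Create()
            aes.GenerateKey()   ' the secret key is itself just a byte array
            aes.GenerateIV()

            ' Steps 2-4: encipher the data bytes; the result is not readable text.
            Dim cipherTextBytes As Byte()
            Using encryptor As ICryptoTransform = aes.CreateEncryptor()
                cipherTextBytes = encryptor.TransformFinalBlock(data, 0, data.Length)
            End Using

            ' Step 5: apply the same key to get the original bytes back.
            Dim decrypted As Byte()
            Using decryptor As ICryptoTransform = aes.CreateDecryptor()
                decrypted = decryptor.TransformFinalBlock(cipherTextBytes, 0, cipherTextBytes.Length)
            End Using

            ' Step 6: turn the bytes back into a string.
            Console.WriteLine(Encoding.UTF8.GetString(decrypted))
        End Using
    End Sub
End Module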

Why do IDL defaultvalue values look rounded?

I have a COM object with a function with an optional last argument. The IDL is a bit like this:
interface ICWhatever: IDispatch
{
[id(96)] HRESULT SomeFunction([in,defaultvalue(50.6)]float parameter);
};
This works fine: if I don't specify the parameter, 50.6 is filled in.
But in several development environments (Excel VBA, VB6) the default value is rounded before display. After typing the open brace, I see:
SomeFunction([parameter As Single = 51])
Does anyone know why this is? Is it a bug? This will confuse client programmers...
I was able to reproduce the problem you experienced (VBA), and it appears to be indeed a bug in the treatment of the Single type by (specifically) VB IDEs. Namely, the VB IDEs will improperly cast the Single default value to int before printing it out again (as part of the method signature) as a (truncated) single-precision floating-point value.
This problem does not exist in the Microsoft Script Editor, nor does it exist in OleView.exe etc.
To test, try the following Single default value: 18446744073709551615.0. In my case, this value is properly encoded in the TLB and properly displayed by OleView.exe and by Microsoft Script Editor as 1.844674E+19. However, it gets displayed as -2.147484E+09 in the VB IDEs. Indeed, casting (float)18446744073709551615.0 to int produces -2147483648 which, displayed as float, produces the observed (incorrect) VB IDE output -2.147484E+09.
Similarly, 50.6 gets cast to int to produce 51, which is then printed out as 51.
To work around this issue use Double instead of Single, as Double is converted and displayed properly by all IDEs I was able to test.
On a tangent, you are probably already aware of the fact that certain floating-point values (such as 0.1) do not have a corresponding exact IEEE 754 representation and cannot be distinguished from nearby values (e.g. 0.1000000015). Thus, specifying a default double-precision value of 0.1 will be displayed in most IDEs as 0.100000001490116. One way to alleviate this precision issue is to choose a different scale for your parameters (e.g. switch from seconds to milliseconds, so that 0.1 seconds becomes 100 milliseconds, unambiguously representable as both single- and double-precision floating point, as well as integral values/parameters).