Long story short, I am trying to convert strings of hex values to signed 2's complement integers. I was able to do this in a single line of code in Swift, but for some reason I can't find anything analogous in Kotlin. String.toInt and String.toUInt just give the straight base-16-to-base-10 conversion. That works for some positive values, but not for any negative numbers.
How do I know I want the signed 2's complement? I've used an online converter, and according to its output, what I want is the decimal from signed 2's complement, not the straight base-16-to-base-10 conversion that's easy to do by hand.
So, "FFD6" should go to -42 (correct, confirmed in Swift and C#), and "002A" should convert to 42.
I would appreciate any help or even any leads on where to look. Because yes, I have searched; I've googled the problem a bunch, and no, I haven't found a good answer.
I actually tried writing my own code to do the signed 2's complement conversion, but so far it's not giving me the right answers and I'm pretty much at a loss. I'd really prefer a built-in function that does it instead; I feel like if other languages have that capability, Kotlin should too.
For 2's complement, you need to know how big the type is.
Your examples of "FFD6" and "002A" both have 4 hex digits (i.e. 2 bytes). That's the same size as a Kotlin Short. So a simple solution in this case is to parse the hex to an Int and then convert that to a Short. (You can't convert it directly to a Short, as that would give an out-of-range error for the negative numbers.)
"FFD6".toInt(16).toShort() // gives -42
"002A".toInt(16).toShort() // gives 42
(You can then convert back to an Int if needed.)
You could similarly handle 8-digit (4-byte) values as Ints, and 2-digit (1-byte) values as Bytes.
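For example (the hex strings here are just made-up values showing the same widen-then-narrow trick):
"FFFFFFD6".toLong(16).toInt() // 8 digits: parse as a Long, then narrow to an Int; gives -42
"D6".toInt(16).toByte()       // 2 digits: parse as an Int, then narrow to a Byte; gives -42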
For other sizes, you'd need to do some bit operations. Based on a similar answer for Java, if you have e.g. a 3-digit hex number, you can do:
("FD6".toInt(16) xor 0x800) - 0x800 // gives -42
(Here 0x800 is the three-digit number with the top bit (i.e. sign bit) set. You'd use 0x80000 for a five-digit number, and so on. Also, for 9–16 digits, you'd need to start with a Long instead of an Int. And if you need >16 digits, it won't fit into a Long either, so you'd need an arbitrary-precision library that handled hex…)
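If you need this for several widths, the same trick could be wrapped in a small helper. This is only a sketch: hexToSigned is a made-up name, not a standard-library function, and it handles at most 15 hex digits so the parse still fits in a Long.
fun hexToSigned(hex: String): Long {
    require(hex.length in 1..15) { "use BigInteger(hex, 16) for longer values" }
    val signBit = 1L shl (hex.length * 4 - 1)  // top (sign) bit of an n-digit hex number
    return (hex.toLong(16) xor signBit) - signBit
}

hexToSigned("FD6")  // gives -42
hexToSigned("002A") // gives 42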
I am a beginner to Unix programming and C, and I have two questions regarding the stat struct and its field st_mode:
When accessing the st_mode field as below, what kind of number is returned (octal, decimal, etc.)?
struct stat file;
stat( someFilePath, &file);
printf("%d", file.st_mode );
I thought the number was in octal, but when I ran this code I got the value 33188. What base is that?
I found out that st_mode encodes a 16-bit binary number that represents the file type and file permissions. How do I get the 16-bit number from the above output (especially when it doesn't seem to be in octal)? And which parts of the 16-bit number encode which information?
Thanks for any help.
The actual type behind mode_t and how it encodes information is implementation-defined. The only thing that's certain is that it's a bitmask.
To work with st_mode, use the flags and macros defined in the sys/stat.h header. For a list of those defines, consult:
man 2 stat
If you truly need to know what each bit represents, or are simply curious, read the header or use printf to inspect the flags.
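If you just want to see what 33188 contains, you can also decode it by hand. The arithmetic below is shown in Kotlin only because it is language-independent; the 0xF000 type mask and the low permission bits match the usual S_IFMT/S_IFREG values on typical Linux systems, but, as noted above, the exact encoding is implementation-defined.
fun main() {
    val mode = 33188                                 // the value from the question
    println(Integer.toOctalString(mode))             // 100644
    println(Integer.toOctalString(mode and 0xF000))  // 100000 -> regular file (S_IFREG)
    println(Integer.toOctalString(mode and 0xFFF))   // 644    -> rw-r--r-- permissions
}
So the mysterious 33188 is just 0100644 in octal: a regular file with rw-r--r-- permissions.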
I'm using VB.NET, writing a WinForms application where I'm trying to convert a denary (base-10) real number to a signed floating-point binary number, as a string representation. For example, 9.125 would become "0100100100000100" (the first ten digits are the significand and the last six the exponent).
I can write a function for this if I have to, but I'd rather not waste time if there's a built-in functionality available. I know there's some ToString overload or something that works on Integers, but I haven't been able to find anything that works on Doubles.
Why does Microsoft tend to report "error codes" as hexadecimal values?
Error codes are 32-bit double-word values (4-byte values). This is likely the raw integer return code of whatever C-style function reported the error.
However, why report the error to a user in hexadecimal? The "0x" prefix is worthless, and the savings in character length is minimal. These errors end up displayed to end users in Microsoft software and even on Microsoft websites.
For example:
0x80302010 is 10 characters long, and very cryptic.
2150637584 is the decimal equivalent, and much more user friendly.
Is there any description of the "standard" use of a 32-bit field as an error code mechanism (possibly dividing the field into multiple fields for developer interpretation) or of the logic behind presenting a hexadecimal code to end users?
We can only guess about the reason, so this question cannot be answered for sure. But let's guess:
One reason might be that with hex numbers, you know the number will have 8 digits. If it has more or fewer digits, the number is "corrupt" (for example, the customer mistyped it). With decimal numbers, the number of digits for the same value varies.
Also, to a developer, hex numbers are more convenient and natural than decimal numbers. For example, if some information is encoded as bit flags, you can decipher it by hand easily in hex but not in decimal.
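For instance (a sketch with made-up flag names and values):
const val FLAG_READONLY = 0x0001
const val FLAG_HIDDEN   = 0x0002
const val FLAG_SYSTEM   = 0x8000

fun main() {
    val status = 0x8003                      // top bit plus two low bits: easy to read in hex
    println((status and FLAG_SYSTEM) != 0)   // true
    println((status and FLAG_HIDDEN) != 0)   // true
    // the same value in decimal is 32771, where none of this is obvious
}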
It is a little bit subjective as to whether hexadecimal or decimal error codes are more user friendly. Here is a scenario where the hexadecimal error codes are significantly more convenient, which could be part of the reason that hexadecimal error codes are used in the first place.
Consider the documentation for Win32 error codes for Active Directory Service Interfaces: ADSI uses error codes with the format 0x8007XXXX, where the XXXX corresponds to a DWORD value that maps to a Win32 error code.
This makes it extremely easy to get the corresponding Win32 error code, because you can just strip off the last 4 digits. This would not be possible with a decimal error code representation.
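In code, "strip off the last 4 digits" is just a bit mask. A small sketch; 0x80070005 is one well-known example of such a code, the HRESULT wrapping of Win32 error 5 (ERROR_ACCESS_DENIED):
fun main() {
    val adsiError = 0x80070005L           // example 0x8007XXXX-style code
    val win32Code = adsiError and 0xFFFF  // keep only the last four hex digits
    println(win32Code)                    // 5
    println("0x%04X".format(win32Code))   // 0x0005
}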
The middle-ground answer would be that formatting the number like an IPv4 address would be more user-friendly while preserving some sort of formatting that helps the developers.
Although TBH I think hex is fine, the hypothetical non-technical user has no more idea what 0x1234ABCD means than 1234101112 or "Cracked gangle pin on fwip valve".
When we use cryptography, we always see byte arrays being used instead of String values. But when we look at the techniques in most cryptographic algorithms, they use hex values for their operations. E.g. in AES, the MixColumns and SubBytes steps (I suppose) use hex values in those operations.
Can you explain how these byte arrays are used as hex values in these operations?
I have an assignment to develop an encryption algorithm, so any related sample code would be much appreciated.
Every four binary digits make one hexadecimal digit, so you can convert back and forth quite easily (see: http://en.wikipedia.org/wiki/Hexadecimal#Binary_conversion).
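For example (a tiny sketch in Kotlin, though the point is language-independent):
fun main() {
    val b: Byte = 0x2A                          // one byte
    println(Integer.toBinaryString(b.toInt()))  // 101010 (i.e. 0010 1010 with leading zeros)
    println("%02X".format(b))                   // 2A     (one hex digit per group of four bits)
}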
I don't think I fully understand what you're asking, though.
The most important thing to understand about hexadecimal is that it is a system for representing numeric values, just like binary or decimal. It is nothing more than notation. As you may know, many computer languages allow you to specify numeric literals in a few different ways:
int a = 42;
int a = 0x2A;
These store the same value into the variable 'a', and a compiler should generate identical code for them. The difference between these two lines will be lost very early in the compilation process, because the compiler cares about the value you specified, and not so much about the representation you used to encode it in your source file.
Main takeaway: there is no such thing as "hex values" - there are just hex representations of values.
That all said, you also talk about string values. Obviously 42 != "42" != "2A" != 0x2A. If you have a string, you'll need to parse it to a numeric value before you do any computation with it.
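A quick illustration of both points (shown in Kotlin here, but any language with hex literals behaves the same way):
fun main() {
    val a = 42        // decimal notation
    val b = 0x2A      // hex notation, same value
    println(a == b)   // true: the program only ever sees the value 42
    println("2A".toInt(16))  // 42: a *string* has to be parsed before you can compute with it
}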
Bytes, byte arrays and/or memory areas are normally displayed as hexadecimal in an IDE (integrated development environment) and debugger. This is because it is the most efficient and clear representation of a byte. An experienced programmer can convert hex to bits in their head fairly easily. You can also clearly see how XOR and shifts work, for instance. Those (and addition) are the most common operations when doing symmetric encryption/hashing.
So it's unlikely that the program performs this kind of conversion; it's probably the environment you are in. That, and source code (which is converted to bytes at compile time) probably uses a lot of literals in hexadecimal notation as well.
Cryptography in general (hash functions aside) is a method of converting data from one format to another, mostly referred to as cipher text, using a secret key. The secret key can be applied to the cipher text to get back the original data, also referred to as plain text. In this process data is handled at the byte level, though it can be at the bit level as well. The point here is that the text or strings we are referring to fall within the limited range of a byte; ASCII, for example, is defined within the 0-255 range of byte values.

In practice, when a crypto operation is performed, each character is converted to its equivalent byte and the process is carried out using the key. The resulting byte or bytes will most probably fall outside the range of human-readable text such as ASCII. For this reason, any data that a crypto function needs to be applied to is converted to a byte array first. For example, suppose the text to be enciphered is "Hello how are you doing?". The following steps would be followed:
1. byte[] data = "Hello how are you doing?".getBytes()
2. Encipher data using the key, which is also a byte[]
3. The output blob is referred to as cipherTextBytes[]
4. Encryption is complete
5. Using the key, a decryption process is performed over cipherTextBytes[], which returns the data bytes
6. A simple new String(data) will return the string value "Hello how are you doing?"
This is just a simple overview that might help you understand reference code and manuals better. I am in no way trying to explain the core of cryptography here.
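Since you asked for related sample code, here is a minimal sketch of the steps above written in Kotlin against the standard JCA API. The AES/CBC/PKCS5Padding choice, the key size and the variable names are assumptions made for illustration, not a recommendation for your assignment.
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.spec.IvParameterSpec
import java.security.SecureRandom

fun main() {
    val key = KeyGenerator.getInstance("AES").apply { init(128) }.generateKey()
    val iv = ByteArray(16).also { SecureRandom().nextBytes(it) }

    val plainText = "Hello how are you doing?".toByteArray()    // step 1

    val cipher = Cipher.getInstance("AES/CBC/PKCS5Padding")
    cipher.init(Cipher.ENCRYPT_MODE, key, IvParameterSpec(iv))  // step 2
    val cipherTextBytes = cipher.doFinal(plainText)             // steps 3-4

    cipher.init(Cipher.DECRYPT_MODE, key, IvParameterSpec(iv))  // step 5
    val data = cipher.doFinal(cipherTextBytes)

    println(String(data))                                       // step 6
}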
Can anyone please help me get the float value exactly as entered from a text box?
For example, I have entered 40.7:
rateField=[[rateField text] floatValue];
I am getting the rateField value as 40.7000008, but I want 40.7 only.
Please help me.
Thanks in advance.
Thanks everybody,
I tried all the possibilities but I am not able to get what I want. I am not looking to print the value or convert it into a string; I want to use that value for computation. If I use a number formatter and then convert from the number back to a float, it gives the same problem. So I want a float value only, but it should be whatever I entered in the text box; it should not be padded with any extra digits. This is my requirement. Please help me.
Thanks and regards,
Balu
This is OK. There is no guarantee that you will get exactly 40.7, even if you use a double.
If you want to output 40.7 you can use %.1f or an NSNumberFormatter.
Try using a double instead; that usually solves the issue. It has to do with the storage precision.
double dbl = [rateField.text doubleValue];
When using floating-point numbers, these things can happen because of the way the numbers are stored in binary format in the computer's memory.
It's similar to the way 1/3 = 0.33333333333333... in decimal numbers.
The best way to deal with this is to use number formatters in the textbox that displays the value.
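To see what is actually stored, you can print the nearest value to 40.7 that a 32-bit float can hold. This is a sketch in Kotlin, since the behaviour comes from IEEE 754 floating point rather than from Objective-C; the same thing happens with Objective-C's float.
fun main() {
    val rate = "40.7".toFloat()
    println(rate.toDouble())      // prints something close to 40.70000076293945: the nearest float to 40.7
    println("%.1f".format(rate))  // 40.7: round only when displaying the value
}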
The float value you have is already as close to 40.7 as a float can get.
Floating point numbers have limited precision. Although it depends on the system, the relative error due to rounding for a float will be around 1.1e-8. Non-elementary arithmetic operations may give larger errors, and, of course, error propagation must be considered when several operations are compounded.

Additionally, rational numbers that are exactly representable as floating point numbers in base 10, like 0.1 or 0.7, do not have an exact representation as floating point numbers in base 2, which is used internally, no matter the size of the mantissa. Hence, they cannot be converted into their internal binary counterparts without a small loss of precision. This can lead to confusing results: for example, floor((0.1+0.7)*10) will usually return 7 instead of the expected 8, since the internal representation will be something like 7.9999999999999991118....
So if you're using those numbers for output, you should use some rounding mechanism, even for double values.
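The floor((0.1+0.7)*10) example from the quote is easy to reproduce (a sketch in Kotlin; the exact digits printed may vary slightly with the formatting used):
import kotlin.math.floor

fun main() {
    println(floor((0.1 + 0.7) * 10))    // 7.0, not the expected 8.0
    println("%.20f".format(0.1 + 0.7))  // roughly 0.79999999999999993339
    println("%.1f".format(0.1 + 0.7))   // 0.8 once rounded for display
}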