How do I output the Int size in bits, min, and max in Kotlin?

I already have an example with Byte, here it is:
8
-128
127
I need to do the same with type Integer.

Adding onto the above, you can also print the size of an Int in bits by doing:
println(Int.SIZE_BITS)
Full answer:
println(Int.SIZE_BITS)
println(Int.MIN_VALUE)
println(Int.MAX_VALUE)

Related

pandas astype gives a strange result

I'm trying to cast a long column (2M rows) of floats (read as float by read_csv) into integers.
if I try to substitute the column, I get a wrong result:
df3[['DISPATCHINTERVAL']]=df3[['DISPATCHINTERVAL']].astype(int)
20190930241.0 becomes -2147483648
Instead, if I use applymap (elementwise), it works:
df3[['DISPATCHINTERVAL']]=df3[['DISPATCHINTERVAL']].applymap(np.int64)
20190930241.0 becomes 20190930241
but it's slower...
why?
On your system, int is 32 bits. You can try int64:
df3[['DISPATCHINTERVAL']]=df3[['DISPATCHINTERVAL']].astype('int64')
or
df3[['DISPATCHINTERVAL']]=df3[['DISPATCHINTERVAL']].astype(np.int64)
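A quick sketch of why the int32 cast wraps (nothing here is specific to pandas; it is the 32-bit range that matters):

```python
import numpy as np

# 20190930241 does not fit in a signed 32-bit integer, so a float -> int32
# cast is out of range (the result is undefined; many platforms wrap to INT32_MIN).
value = 20190930241.0

print(np.iinfo(np.int32).max)          # 2147483647, far below the value
print(value > np.iinfo(np.int32).max)  # True

# int64 comfortably holds it (max is 9223372036854775807)
print(np.float64(value).astype(np.int64))  # 20190930241
```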

Bitwise operators: How do I clear the most significant bit?

I'm working on a problem where I need to convert an integer into a special text encoding. The requirements state that I pack the int into bytes and then clear the most significant bit. I am using bitwise operators, but I am unsure how to clear the most significant bit. Here is the problem and the method I'm working with so far:
PROBLEM:
For this task, you need to write a small program including a pair of functions that can
convert an integer into a special text encoding
The Encoding Function
This function needs to accept a signed integer in the 14-bit range [-8192..+8191] and return a 4 character string.
The encoding process is as follows:
1. Add 8192 to the raw value, so its range is translated to [0..16383]
2. Pack that value into two bytes such that the most significant bit of each is cleared
Unencoded intermediate value (as a 16-bit integer):
00HHHHHH HLLLLLLL
Encoded value:
0HHHHHHH 0LLLLLLL
3. Format the two bytes as a single 4-character hexadecimal string and return it.
Sample values:
Unencoded (decimal) | Intermediate (decimal) | Intermediate (hex) | Encoded (hex)
0 | 8192 | 2000 | 4000
-8192 | 0 | 0000 | 0000
8191 | 16383 | 3fff | 7F7F
2048 | 10240 | 2800 | 5000
-4096 | 4096 | 1000 | 2000
My function
-(NSString *)encodeValue{
    // get the input value
    int decValue = [_inputValue.text intValue];
    char* bytes = (char*)&decValue;
    NSNumber *number = @(decValue+8192); //Add 8192 so that the number can't be negative, because we're about to lose the sign.
    u_int16_t shortNumber = [number unsignedShortValue]; //Convert the integer to an unsigned short (2 bytes) using NSNumber.
    shortNumber = shortNumber << 1; // !!!! This is what I'm doing to clear the MSB !!!!!!!
    NSLog(@"%hu", shortNumber);
    NSString *returnString = [NSString stringWithFormat:@"%x", shortNumber]; //Convert the 2 byte number to a hex string using format specifiers
    return returnString;
}
I'm using the shift bitwise operator to clear the MSB and I get the correct answer for a couple of the values, but not every time.
If I am understanding you correctly then I believe you are after something like this:
u_int16_t number;
number = 0xFFFF;
number &= ~(1 << ((sizeof(number) * 8) - 1));
NSLog(@"%x", number); // Output will be 7fff
How it works:
sizeof(number) * 8 gives you the number of bits in the input number (eg. 16 for a u_int16_t)
1 << (number of bits in number - 1) gives you a mask with only the MSB set (eg. 0x8000)
~(mask) gives you the bitwise NOT of the mask (eg. 0x7fff)
ANDing the mask with your number then clears only the MSB leaving all others as they were
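For comparison, the same mask construction can be sketched in Python, where the bit width has to be spelled out explicitly because Python integers are unbounded:

```python
# Clear the most significant bit of a 16-bit value with a mask,
# mirroring the C approach above.
BITS = 16
number = 0xFFFF

mask = ~(1 << (BITS - 1)) & ((1 << BITS) - 1)  # 0x7FFF: every bit set except the MSB
number &= mask

print(hex(number))  # 0x7fff
```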
You are misunderstanding your task.
You are not supposed to clear the most significant bit anywhere. You have 14 bits. You are supposed to separate these 14 bits into two groups of seven bits. And since a byte has 8 bits, storing 7 bits into a byte will leave the most significant bit cleared.
PS. Why on earth are you using an NSNumber? If this is homework, I would fail you for the use of NSNumber alone, no matter what the rest of the code does.
PS. What is this char* bytes supposed to be good for?
PS. You are not clearing any most significant bit anywhere. You have an unsigned short containing 14 significant bits, so the two most significant bits are cleared. You shift the number to the left, so the most significant bit, which was always cleared, remains cleared, but the second most significant bit isn't. And all this has nothing to do with your task.
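Since the answers disagree about "clearing" versus "packing", a short sketch of the packing approach may help. Python is used for brevity; `encode` is a hypothetical helper, not the asker's Objective-C method:

```python
def encode(value):
    """Encode a value in [-8192, 8191] as two 7-bit bytes, hex-formatted."""
    v = value + 8192          # 1. translate the range to [0, 16383]
    high = (v >> 7) & 0x7F    # 2. upper 7 bits -> byte with MSB already clear
    low = v & 0x7F            #    lower 7 bits -> byte with MSB already clear
    return '%02x%02x' % (high, low)  # 3. 4-character hex string

# The sample values from the problem statement:
print(encode(0))      # 4000
print(encode(-8192))  # 0000
print(encode(8191))   # 7f7f
print(encode(2048))   # 5000
print(encode(-4096))  # 2000
```

Note that nothing is ever "cleared": each byte holds only 7 bits of payload, so its top bit is zero by construction.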

Why does the CLR overflow an Int32.MaxValue -> Single -> Int32, where the JVM does not?

I ran into an unexpected result in round-tripping Int32.MaxValue into a System.Single:
Int32 i = Int32.MaxValue;
Single s = i;
Int32 c = (Int32)s;
Debug.WriteLine(i); // 2147483647
Debug.WriteLine(c); // -2147483648
I realized that it must be overflowing, since Single doesn't have enough bits in the significand to hold the Int32 value, and it rounds up. When I changed the conv.r4 to conv.r4.ovf in the IL, an OverflowException is thrown. Fair enough...
However, while I was investigating this issue, I compiled this code in java and ran it and got the following:
int i = Integer.MAX_VALUE;
float s = (float)i;
int c = (int)s;
System.out.println(i); // 2147483647
System.out.println(c); // 2147483647
I don't know much about the JVM, but I wonder how it does this. It seems much less surprising, but how does it retain the extra digit after rounding to 2.14748365E9? Does it keep some kind of internal representation around and then replace it when casting back to int? Or does it just round down to Integer.MAX_VALUE to avoid overflow?
This case is explicitly handled by §5.1.3 of the Java Language Specification:
A narrowing conversion of a floating-point number to an integral type T takes two steps:

In the first step, the floating-point number is converted either to a long, if T is long, or to an int, if T is byte, short, char, or int, as follows:

If the floating-point number is NaN (§4.2.3), the result of the first step of the conversion is an int or long 0.

Otherwise, if the floating-point number is not an infinity, the floating-point value is rounded to an integer value V, rounding toward zero using IEEE 754 round-toward-zero mode (§4.2.3). Then there are two cases: if T is long, and this integer value can be represented as a long, then the result of the first step is the long value V; otherwise, if this integer value can be represented as an int, then the result of the first step is the int value V.

Otherwise, one of the following two cases must be true: the value must be too small (a negative value of large magnitude or negative infinity), and the result of the first step is the smallest representable value of type int or long; or the value must be too large (a positive value of large magnitude or positive infinity), and the result of the first step is the largest representable value of type int or long.
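The rounding step itself is easy to observe. Here is a Python sketch, using struct to force a round-trip through single precision (Python's own float is a double):

```python
import struct

# IEEE 754 single precision has a 24-bit significand, so 2147483647
# (which needs 31 bits) cannot be represented exactly; it rounds up
# to 2147483648.0 = 2**31.
i = 2147483647  # Int32.MaxValue / Integer.MAX_VALUE
s = struct.unpack('f', struct.pack('f', i))[0]

print(s)           # 2147483648.0 -- one past Int32.MaxValue
print(s == 2**31)  # True: the int32 cast must then saturate (JVM) or wrap/throw (CLR)
```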

first 16 bit of a 32 bit hex

Please excuse my lack of knowledge here, but could someone let me know how I can get the first 16 bits of a 32-bit hex number.
That depends on what you mean by "first". Given a number, such as 0xdeadbeef, would you consider 0xdead or 0xbeef to be "first"?
If the former, divide the number by 65536 (as an integer). If the latter, compute the modulus to 65536.
This is of course also doable with binary operators such as shift/and; I'm just not sure how to express that in your desired language. I'm sure there will be other answers with more precise details.
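Since the question doesn't name a language, here is a Python sketch of both readings, showing the divide/modulus and shift/mask variants side by side:

```python
n = 0xDEADBEEF

# "First" as the high-order 16 bits: integer division by 65536, or a right shift.
print(hex(n // 65536))  # 0xdead
print(hex(n >> 16))     # 0xdead

# "First" as the low-order 16 bits: modulus 65536, or masking with 0xFFFF.
print(hex(n % 65536))   # 0xbeef
print(hex(n & 0xFFFF))  # 0xbeef
```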
Assuming by "first" you mean the least significant 16 bits?
If My32BitNumber is an int:
Dim f16 As Integer = &HFFFF And My32BitNumber
If you're actually looking at a 32 bit number e.g. Gee, what are the first 16 bits of DEADBEEF
that would be the last four hex digits BEEF
& it with 0xffff.
int input = 0xabcd;
int first2Bytes = input & 0xffff;
Dim i As Integer = &HDEADBEEF
Dim s16 As UShort
s16 = i And &HFFFF 'BEEF
'or
s16 = (i >> 16) And &HFFFF 'DEAD
This will get you the 32 bit number as a four byte array:
Dim bytes As Byte() = BitConverter.GetBytes(number)
To get the first two bytes as a signed 16 bit number:
Dim first As Int16 = BitConverter.ToInt16(bytes, 0)
To get the first two bytes as an unsigned 16 bit number:
Dim first As UInt16 = BitConverter.ToUInt16(bytes, 0)
This is of course a bit slower than using bit shifts or division, but it handles the sign bit (the most significant bit) correctly, which you may have problems with using bit shift or division.
You can also get the first two bytes as a 16 bit unsigned number and assign it to an Integer:
Dim first As Integer = BitConverter.ToUInt16(bytes, 0)
(Getting a signed 16 bit number and assign to an Integer means that the sign bit would also be copied to the top 16 bits of the Integer, which is probably not desired.)
If you want the last two bytes (least significant) instead of the first two (most significant), just change the index in the ToUInt16/ToInt16 call from 0 to 2.

varchar(255) v tinyblob v tinytext

My side question: is there really any difference between tinyblob & tinytext?
But my real question is: what reason, if any, would I have to choose varchar(255) over tinyblob or tinytext?
Primarily storage requirements and memory handling/speed:
In the following table, M represents the declared column length in characters for nonbinary string types and bytes for binary string types. L represents the actual length in bytes of a given string value.
VARCHAR(M), VARBINARY(M): L + 1 bytes if column values require 0–255 bytes, L + 2 bytes if values may require more than 255 bytes
TINYBLOB, TINYTEXT: L + 1 bytes, where L < 2^8
Additionally, see this post:
For each table in use, MySQL allocates memory for 4 rows. For each of these rows, a CHAR(X)/VARCHAR(X) column takes up the X characters.
A TEXT/BLOB, on the other hand, is represented by an 8-byte pointer + a 1–4 byte length (depending on the BLOB/TEXT type). The BLOB/TEXT is allocated dynamically on use. This will use less memory, but in some cases it may fragment your memory in the long run.
Edit: As an aside, BLOBs store binary data and TEXT stores character data; that's the only difference between TINYBLOB and TINYTEXT.
VARCHAR(255) is more SQL standard than tinyblob or tinytext. So your script, and application would be more portable across database vendors.
You can't apply a CHARACTER SET to TINYBLOB, but you can to VARCHAR(255) and TINYTEXT.