Two's complement on an unsigned integer in VB.NET - vb.net

How can I implement a two's complement in VB.NET using unsigned integer types such as Byte, UShort, UInteger and ULong? Can I cast a UInteger to an Integer?

No, you can't cast. That will result in an overflow exception for large values.
You can, however, do this:
intValue = BitConverter.ToInt32(BitConverter.GetBytes(uintValue), 0)
But what stops you from doing the math with the unsigned values without casting them to something? It just works.

Related

Difference between Objective-C primitive numbers

What is the difference between objective-c C primitive numbers? I know what they are and how to use them (somewhat), but I'm not sure what the capabilities and uses of each one is. Could anyone clear up which ones are best for some scenarios and not others?
int
float
double
long
short
What can I store with each one? I know that some can store more precise numbers and some can only store whole numbers. Say for example I wanted to store a latitude (possibly retrieved from a CLLocation object), which one should I use to avoid losing any data?
I also noticed that there are unsigned variants of each one. What does that mean and how is it different from a primitive number that is not unsigned?
Apple has some interesting documentation on this, however it doesn't fully satisfy my question.
Well, first off types like int, float, double, long, and short are C primitives, not Objective-C. As you may be aware, Objective-C is sort of a superset of C. The Objective-C NSNumber is a wrapper class for all of these types.
So I'll answer your question with respect to these C primitives, and how Objective-C interprets them. Basically, each numeric type can be placed in one of two categories: Integer Types and Floating-Point Types.
Integer Types
short
int
long
long long
These can only store, well, integers (whole numbers), and are characterized by two traits: size and signedness.
Size means how much physical memory in the computer a type requires for storage, that is, how many bytes. Technically, the exact memory allocated for each type is implementation-dependent, but there are a few guarantees: (1) char will always be 1 byte; (2) sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long).
Signedness simply means whether or not the type can represent negative values. So a signed integer, or int, can represent a certain range of negative and positive numbers (traditionally -2,147,483,648 to 2,147,483,647), and an unsigned integer, or unsigned int, can represent a range of the same size, but all non-negative (0 to 4,294,967,295).
Floating-Point Types
float
double
long double
These are used to store decimal values (i.e., fractions) and are also categorized by size. Again, the only real guarantee you have is that sizeof(float) <= sizeof(double) <= sizeof(long double). Floating-point types are stored using a rather peculiar memory model that can be difficult to understand, and that I won't go into, but there is an excellent guide here.
There's a fantastic blog post about C primitives in an Objective-C context over at RyPress. Lots of intro CS textbooks also have good resources.
First I would like to point out the difference between an unsigned int and an int. Say that you have a very high number and you write a loop iterating with an unsigned int:
for(unsigned int i=0; i< N; i++)
{ ... }
If N is a number defined with #define, it may be higher than the maximum value storable in an int, though not in an unsigned int. If i overflows, it wraps back around to zero and you end up in an infinite loop; that's why I prefer to use an int for loops.
The same happens if by mistake you iterate with an int while comparing it to a long. If N is a long you should iterate with a long; but if N is an int you can still safely iterate with a long.
Another pitfall that may occur is using the shift operator with an integer constant and then assigning the result to an int or long. Maybe you also log sizeof(long), notice that it returns 8, and since you don't care about portability, you think you wouldn't lose precision here:
long i= 1 << 34;
But instead, 1 isn't a long, so the shift overflows, and by the time the result is cast to a long the precision is already lost. Instead you should write:
long i= 1l << 34;
Newer compilers will warn you about this.
Taken from this question: Converting Long 64-bit Decimal to Binary.
About float and double, there is one thing to consider: they use a mantissa and an exponent to represent the number. It's something like:
value= 2^exponent * mantissa
So the higher the exponent, the less exact the representation of the floating point number. A number may even be so large that its representation is inaccurate enough that, surprisingly, printing it gives you a different number:
float f = 9876543219124567;
NSLog(@"%.0f", f); // On my machine it prints 9876543585124352
If I use a double it prints 9876543219124568, and if I use a long double with the %.0Lf format it prints the correct value. Always be careful when using floating-point numbers; unexpected things may happen.
For example, two floating-point numbers may have almost the same value, so that you expect them to be equal, but a subtle difference makes the equality comparison fail. This has been treated hundreds of times on Stack Overflow, so I will just post this link: What is the most effective way for float and double comparison?.

List of Scalar Data Types

I'm looking for a list of all the scalar data types in Objective-C, complete with their ranges (max/min values etc.).
Sorry for the simple question; I'm just really struggling to find anything like this.
int An integer value between –2,147,483,648 and 2,147,483,647.
unsigned int An integer value between 0 and 4,294,967,295.
float A single-precision floating point value; integers are only exact up to +/– 16,777,216.
double A double-precision floating point value; integers are only exact up to +/– 9,007,199,254,740,992.
long An integer value varying in size from 32 bit to 64 bit depending on architecture.
long long A 64-bit integer.
char A single character, 1 byte. Technically character literals are represented as ints.
BOOL A boolean value, can be either YES or NO.
NSInteger When compiling for 32-bit architecture, same as an int; when compiling for 64-bit architecture, a 64-bit signed integer (same as long).
NSUInteger When compiling for 32-bit architecture, same as an unsigned int; when compiling for 64-bit architecture, a value between 0 and 2^64 - 1.
Source.
char : A character, 1 byte
int : An integer (a whole number), 4 bytes
float : Single-precision floating point number, 4 bytes
double : Double-precision floating point number, 8 bytes
short : A short integer, 2 bytes
long : A long integer, 4 bytes (8 bytes on most 64-bit platforms)
long long : A double-length long, 8 bytes
BOOL : Boolean (signed char), 1 byte
For more on sizes check this post
Integer types are signed two's complement or unsigned, and the standard C variations are provided (char, short, int, long, long long and unsigned variants of these; see C types on Wikipedia). Sizes may vary between 32-bit and 64-bit environments; see 64-bit computing.
BOOL is an Objective-C special and is defined as signed char; while it can take any value a signed char can, the constants NO and YES are defined for use. The C99 type _Bool (aka bool) is also provided.
float & double are IEEE 32-bit & 64-bit floating point - see Wikipedia for ranges.
Standard macro constants are provided for the minimum and maximum of all the types, e.g. INT_MAX for int; again see C types on Wikipedia for these.

Is it safe to assume an Integer will always be 32 bits in VB.Net?

Related:
Is it safe to assume an int will
always be 32 bits in C#?
The linked question asks whether it is "safe to assume that an int will always be 32 bits in C#". The accepted answer states that "the C# specification rigidly defines that int is an alias for System.Int32 with exactly 32 bits".
My question is this: does this hold true for VB.Net's Integer? Is it safe to assume that Integer will always be an alias for int32?
Yes.
The Integer type will never change.
The spec (7.3 Primitive Types) says:
The integral value types Byte (1-byte unsigned integer), Short (2-byte signed integer), Integer (4-byte signed integer), and Long (8-byte signed integer). These types map to System.Byte, System.Int16, System.Int32, and System.Int64, respectively. The default value of an integral type is equivalent to the literal 0.
VB.Net doesn't have an "int", it has an "Integer" type. The Integer type is an alias for System.Int32. So no, this will not change.

Pass Byte as SmallInt?

I have an Informix stored procedure which takes an int and a "smallint" as parameters. I'm trying to call this SP from a .net4 Visual Basic program.
As far as I know, "smallint" is a byte. Unfortunately, when loading up the IfxCommand.Parameters collection with an Integer and a Byte, I get an ArgumentException thrown of {"The parameter data type of Byte is invalid."} with the following stack trace:
at IBM.Data.Informix.TypeMap.FromObjectType(Type dataType, Int32 length)
at IBM.Data.Informix.TypeMap.FromObjectType(Type dataType)
at IBM.Data.Informix.IfxParameter.GetTypeMap()
at IBM.Data.Informix.IfxParameter.GetOutputValue(IntPtr stmt, CNativeBuffer valueBuffer, CNativeBuffer lenIndBuffer)
at IBM.Data.Informix.IfxDataReader.Dispose(Boolean disposing)
at IBM.Data.Informix.IfxDataReader.System.IDisposable.Dispose()
at IBM.Data.Informix.IfxCommand.ExecuteReaderObject(CommandBehavior behavior, String method)
at IBM.Data.Informix.IfxCommand.ExecuteReader(CommandBehavior behavior)
at IBM.Data.Informix.IfxCommand.ExecuteReader()
Presumably I need to cast the Byte I have to a smallint, somehow, but google isn't giving me any relevant answers, just at the moment.
I have tried using:
cmd.Parameters.Add(New IfxParameter("myVal", IBM.Data.Informix.IfxType.SmallInt)).Value = myByte
but I still get the same ArgumentException when executing the reader.
Can someone tell me what I'm doing wrong?
An Informix SmallInt is a 16 bit signed integer, byte is an 8 bit unsigned integer. A better equivalent would be Int16 or Short which is a 16 bit signed integer, just like SmallInt. I suspect that will work.
Informix has no analogue for an unsigned 8 bit integer like the .Net Byte or the TSQL TinyInt.
Int16 should work since it has the same range as SmallInt (-32,767 to 32,767)
Informix has 4 types that are related: BYTE and TEXT (since 1990), BLOB and CLOB (since 1996). Collectively, they are all large objects. A BYTE type is absolutely NOT a small integer type.
You may be able to use BYTE in a language that thinks it is a small integer if the language or driver fixes up the types.
But the native BYTE type is a large object. It requires a 56-byte descriptor in the main row of data, and then uses other storage (possibly IN TABLE, possibly in a blobspace) for the actual data storage.

What is the UInt32 data type in Visual Basic .NET?

What is the UInt32 datatype in VB.NET?
Can someone inform me about its bit length and the differences between UInt32 and Int32? Is it an integer or floating point number?
It's an unsigned 32 bit integer:
U for unsigned
Int for integer
32 for 32
Or you could just look at the documentation:
Represents a 32-bit unsigned integer.
It's a 32-bit unsigned integer.
Data types in VB.NET notes the following:
UInt32 - 32 bit unsigned integer
Thus, it is 32 bits long, an integer.
A UInt32 is an unsigned integer of 32 bits.
A 32 bit integer is capable of holding values from -2,147,483,648 to 2,147,483,647.
However, as you have specified an unsigned integer, it will only be capable of storing positive values. The range of an unsigned 32-bit integer is from 0 to 4,294,967,295.
Attempts to assign values to an Int32 or UInt32 outside of its range will result in a System.OverflowException.
Obviously, both UInt32 and Int32 are integers (not floating point), meaning no decimal portion is permitted or stored.
It may also be interesting to note that Integer and System.Int32 are the same in .NET.
For performance reasons you should always try to use Int32 for 32 bit processors and Int64 for 64 bit processors as loading these types to and from memory will be faster than other options.
Finally, try to avoid use of unsigned integers as they are not CLS compliant. If you need a positive-only integer with the upper range of a UInt32, it is better to use an Int64 instead. Unsigned integers are usually only used for API calls and the like.