What is meant by "natural unsigned int" - cil

In the documentation for ldlen and localloc the size type is described as "natural unsigned int". Although I have an idea what is meant (on x64: 64-bit unsigned; on x86: 32-bit unsigned), I haven't found any documentation for it.
So is there an "official" documentation about the "natural unsigned int"?

The term natural refers, as you thought, to what is native on the hardware. Although it's not specified explicitly in a glossary, you can make it out from other uses of the word natural in the spec:
The native size types (native int, native unsigned int, and &) are always naturally aligned (4 bytes or 8 bytes, depending on the architecture).
or
autochar indicates a platform-specific representation that is “natural” for the underlying platform
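A rough C analogy (not the CLI spec itself, just an illustration): size_t and uintptr_t play the same "natural width" role in C, and their size follows whatever target you compile for.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* On a 32-bit build these typically print 4; on a 64-bit build, 8. */
    printf("sizeof(size_t)    = %zu\n", sizeof(size_t));
    printf("sizeof(uintptr_t) = %zu\n", sizeof(uintptr_t));
    printf("sizeof(void *)    = %zu\n", sizeof(void *));
    return 0;
}

CIL's native unsigned int behaves the same way: 32 bits wide under a 32-bit runtime, 64 bits under a 64-bit one.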

Related

Is there any reason to use NSInteger instead of uint8_t with NS_ENUM?

The general standard appears to use NS_ENUM with NSInteger as the base type. Why is this the case? Assuming less than 256 cases (which covers almost any enumeration), is there any reason to use that instead of uint8_t, which could use less memory space? Either imports into Swift fine.
This is different from NS_OPTIONS, where a larger type makes sense; you shouldn't be doing any bit math with plain enumerations, and you can use every number representable by the base type as a value.
The answer to the question in the title:
Is there any reason to use NSInteger instead of uint8_t with NS_ENUM?
is probably not.
When declaring an enum in C, if no underlying type is specified, the compiler is free to choose any suitable type from char and the signed and unsigned integer types that can represent all the values required. The current Xcode/Clang compiler picks a 4-byte integer. One could reasonably assume the compiler writers made an informed choice - some balance of performance and storage.
Smaller types, such as uint8_t, will usually be aligned on smaller boundaries in memory (or on disk) - but that is only of benefit if the adjacent field matches the alignment, e.g. if a 2-byte field follows a 1-byte field then, unless otherwise specified (e.g. with #pragma pack), there will probably be an intervening unused byte.
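A minimal C sketch of that padding effect (the struct and field names are made up for illustration):

#include <stdint.h>
#include <stdio.h>

struct SmallEnumField {
    uint8_t  mode;   /* 1-byte field, e.g. an enum with a uint8_t underlying type */
    uint16_t length; /* 2-byte field: aligned to 2, so 1 padding byte is inserted before it */
};

struct IntEnumField {
    int      mode;   /* 4-byte field, the compiler's default enum choice */
    uint16_t length; /* followed by 2 bytes of tail padding to keep 4-byte alignment */
};

int main(void) {
    printf("%zu\n", sizeof(struct SmallEnumField)); /* typically 4 */
    printf("%zu\n", sizeof(struct IntEnumField));   /* typically 8 */
    return 0;
}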
Whether any performance or storage differences are significant will be heavily dependent on the application. Follow the usual rule of thumb - don't optimise until an issue is found.
However, if you find semantic benefit in limiting the size then certainly do so - there is no general reason you shouldn't. The choice is similar to picking signed vs. unsigned integers: some programmers avoid unsigned types for values that will be ≥ 0 unless the extra range is absolutely required, while others appreciate the semantic benefit.
Summary: There is no right answer; it's largely a subjective issue.
HTH
First of all: the memory footprint is close to completely meaningless. You are talking about 1 byte vs. 4/8 bytes (if memory alignment does not force the use of 4/8 bytes regardless of what you chose). How many NS_ENUM (C) objects do you want to have in your running app?
I guess the reason is pretty simple: NSInteger is the "catch-all" integer type in Cocoa. That makes assignments easier; in particular, you do not have to care about assigning a bigger integer type to a smaller one, which without casting would lead to warnings.
Having more than one integer type in a desktop app with a 32/64-bit model is something of an anachronism. Neither a Mac nor a MacBook nor an iPhone is an embedded microcontroller …
You can use any integer data type, including uint8_t, with NS_ENUM, for example:
typedef NS_ENUM(uint8_t, eEnumAddEditViewMode) {
    eWBEnumAddMode,
    eWBEnumEditMode
};
In old C-style code NSInteger is the default, because NSInteger is a kind of "catch-all" integer type in Objective-C, and developers can easily box and unbox it with their own variables. This is just a developer-friendly best practice.

Why does the C standard provide unsized types (int, long long, char vs. int32_t, int64_t, uint8_t etc.)?

Why weren't the contents of stdint.h made the standard when they were included in the standard (no int, no short, no float, but int32_t, int16_t, float32_t, etc.)? What advantage did/do ambiguous type sizes provide?
In Objective-C, why was it decided that CGFloat, NSInteger, and NSUInteger have different sizes on different platforms?
When C was designed, there were computers with different word sizes. Not just multiples of 8, but other sizes like the 18-bit word size on the PDP-7. So sometimes an int was 16 bits, but maybe it was 18 bits, or 32 bits, or some other size entirely. On a Cray-1 an int was 64 bits. As a result, int meant "whatever is convenient for this computer, but at least 16 bits".
That was about forty years ago. Computers have changed, so it certainly looks odd now.
NSInteger is used to denote the computer's word size, since it makes no sense to ask for the 5 billionth element of an array on a 32-bit system, but it makes perfect sense on a 64-bit system.
I can't speak for why CGFloat is a double on 64-bit systems. That baffles me.
C is meant to be portable from embedded devices, through your phone, to desktops, mainframes and beyond. These don't have the same base types; e.g. the latter may have uint128_t where others don't. Writing code with fixed-width types would severely restrict portability in some cases.
This is why, by preference, you should use neither uintX_t nor int, long, etc., but rather the semantic typedefs such as size_t and ptrdiff_t. These are really the ones that make your code portable.
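For example, a small sketch in plain C (the function is hypothetical) showing size_t for sizes and indices and ptrdiff_t for pointer differences:

#include <stddef.h>
#include <stdio.h>

/* size_t for object sizes and indices, ptrdiff_t for pointer differences:
 * both are defined to match whatever the platform actually needs. */
static size_t count_nonzero(const int *data, size_t n) {
    size_t hits = 0;
    for (size_t i = 0; i < n; ++i)
        if (data[i] != 0)
            ++hits;
    return hits;
}

int main(void) {
    int values[] = {0, 3, 0, 7, 9};
    size_t n = sizeof values / sizeof values[0];
    printf("%zu of %zu are nonzero\n", count_nonzero(values, n), n);

    ptrdiff_t span = &values[4] - &values[0]; /* pointer difference has type ptrdiff_t */
    printf("span = %td\n", span);
    return 0;
}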

Using data type like uint64_t in cgal's exact kernel

I am beginning with CGAL. What I would like to do is to create a point whose coordinates are numbers around 2^51.
typedef CGAL::Exact_predicates_exact_constructions_kernel K;
typedef K::Point_2 P;
uint_64 x,y;
//init them somehow
P sp0(x,y);
Then I get a long template error. Could someone help?
I guess you realize that changing the kernel may have other effects on your program.
Concerning your original question, if your integer values are smaller than 2^51, then they fit exactly in doubles (with their 53-bit mantissa), so one simple option is to cast them to double, as in:
P sp0((double)x,(double)y);
Otherwise, the Exact_predicates_exact_constructions_kernel should have its main number type be able to read your uint64 values (maybe cast them to unsigned long long if that's OK on your platform):
typedef K::FT FT;
P sp0((FT)x,(FT)y);
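As a quick check of the 53-bit mantissa argument, independent of CGAL (plain C sketch):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t below = (1ULL << 51) + 1;  /* needs 52 significant bits: fits a double exactly */
    uint64_t above = (1ULL << 53) + 1;  /* needs 54 significant bits: cannot be exact */

    printf("%d\n", (uint64_t)(double)below == below);  /* prints 1: round-trips exactly */
    printf("%d\n", (uint64_t)(double)above == above);  /* prints 0: rounded to 2^53 */
    return 0;
}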
CGAL number types are only documented to interoperate with int and double. I recently added some code so we can construct more numbers from long (required for Eigen), so your code will work in the next version of CGAL (except that you typo-ed uint64_t) on platforms where uint64_t is unsigned int or unsigned long (not Windows). As for long long support, since many of our number types are based on other libraries (GMP) that do not support long long themselves yet, it may have to wait a bit.
OK, I think I found a solution. The problem was that I used the exact kernel, which supports only double; switching to the inexact kernel solved the problem. It was also possible to just use double (one of the requirements was to use a data type that supports integers up to 2^48).

Can an Objective-C program be compiled so that int is 64 bit?

If I remember correctly, on some machines int was 16-bit, and when we moved onto 32-bit platforms, int became 32-bit.
Now that Snow Leopard and Lion are 64-bit, can a C or Objective-C program be compiled in Xcode so that int is 64-bit (and "%d" or "%i" will take a 64-bit integer as well)? Or is int kept as 32-bit for compatibility reasons?
(And if using a 64-bit int, will it be faster than 32-bit because 64-bit is native?)
Update: I just found out that sizeof(NSInteger), printed in a console app by Xcode on Lion, is 8 (it is typedef'd as long), while on iOS 5.1.1 it is 4 (typedef'd as int). sizeof(int) is 4 on both platforms. So it looks like, in a way, int moved from 16-bit to 32-bit before, but now we want to stop it at 32-bit.
Under the LP64 data model, as adopted by Apple, an int will always be 32-bits. Some compilers might allow you to use the ILP64 data model instead, as explained in the comments here. However, you will likely break compatibility with precompiled libraries.
can a C or Objective-C program be compiled on Xcode so that int is 64 bit?
I have been unable to find a clang option that will make int 64 bits wide. In fact, the platform headers seem to assume that it can't be. Also, you would be unable to use any platform library functions / methods that take an int as a parameter (that includes things like printf() that specify int types in the format string).
and if using 64 bit int, will it be faster than 32 bit because 64 bit is native?
I see no reason why using 32 bit ints would be slower than using 64 bit ints. In fact, possibly, they might be faster since you can fit twice as many of them in the cache.
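If the real goal is just to work with 64-bit values while int stays 32-bit, the usual route is explicit 64-bit types with matching format specifiers rather than a different int (a minimal sketch):

#include <inttypes.h>
#include <stdio.h>

int main(void) {
    int32_t small = 2147483647;      /* plain-int territory under LP64 */
    int64_t big   = 5000000000LL;    /* does not fit in a 32-bit int */

    printf("%" PRId32 "\n", small);  /* %d would also do, since int is 32-bit here */
    printf("%" PRId64 "\n", big);    /* never hand this to a bare %d */
    return 0;
}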
To get a compiler where int = 64 bit, you'd have to download the Clang source code and modify it by hand. So with existing gcc or Clang compilers, there's no way.
But there is a good reason for int being 32 bits: lots of code needs, for compatibility reasons, the ability to have 8-bit, 16-bit, 32-bit, and 64-bit items. The types available in C are char, short, int, long, and long long. You wouldn't want long or long long to be smaller than int. If int were 64-bit, you'd only have two types (char and short) for three sizes (8, 16 and 32 bits), so you'd have to give up one of them.
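Under LP64 (what Apple uses) that ladder comes out as one type per smaller width plus two 64-bit ones, which a quick sketch can confirm:

#include <stdio.h>

int main(void) {
    /* Typical LP64 output: 1 2 4 8 8 for char, short, int, long, long long. */
    printf("%zu %zu %zu %zu %zu\n",
           sizeof(char), sizeof(short), sizeof(int),
           sizeof(long), sizeof(long long));
    return 0;
}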

MUL operator Vs IMUL operator in NASM

Is there any reason why the MUL operator exists only in single-operand form?
The IMUL operator can take three different forms (with one, two, or three operands), which is much more convenient. From a technical point of view I don't see any reason why the MUL operator can't come in two/three-operand forms.
It has to do with the bytecodes that are output. In the pre-80286 world there were too many opcodes, so the Intel engineers were finding ways to overcome the problem. One solution was to extend the portion of the bytecode that specifies the operation (multiplication in this case) into the portion of the bytecode that encoded the first operand. This obviously meant that only one operand could be supported when executing the MUL opcode. Because a multiplication requires two operands, they solved the problem by hard-coding into the processor that the first operand would always be the eax register. Later processors supported bytecodes that were of multiple lengths, which allowed them to encode more data into a single command. This allowed them to make the IMUL opcode much more useful.
There is an interesting parallel today with the IP addresses running out.
It's not that NASM doesn't support it - on the CPU, the signed version of the instruction simply supports more variants than the unsigned version.
Three-operand imul, as well as two-operand forms with an immediate operand (which is an alias of the three-operand form), were introduced with the 186 instruction set. Later, the 386 added two-operand forms with a register and an r/m operand.
All of these new forms have in common that the multiplication is done either 16 bits times 16 bits (possibly with a sign-extended 8-bit immediate) with a 16-bit result, or 32 bits times 32 bits with a 32-bit result. In these cases, the low 16 or low 32 bits of the result are the same for imul as they would be for an equivalent mul; only the flags (CF and OF) may differ. So a separate multi-operand mul isn't needed. Presumably the designers went with the mnemonic imul because the forms with an 8-bit immediate do sign-extend that immediate.
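That low-bits equivalence is easy to demonstrate in C (a sketch with 32-bit operands; compilers are free to emit imul for both the signed and the unsigned product here):

#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint32_t a = 0xFFFFFFF0u;  /* reinterpreted as signed, this is -16 */
    uint32_t b = 3u;

    uint32_t low_unsigned = a * b;                                /* truncated unsigned product */
    uint32_t low_signed   = (uint32_t)((int32_t)a * (int32_t)b);  /* truncated signed product */

    /* Both print 0xffffffd0: the low 32 bits agree; only the flags and upper half differ. */
    printf("%#" PRIx32 " %#" PRIx32 "\n", low_unsigned, low_signed);
    return 0;
}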