#define vs const in Objective-C

I'm new to Objective-C, and I have a few questions regarding const and the preprocessing directive #define.
First, I found that it's not possible to define the type of the constant using #define. Why is that?
Second, are there any advantages to using one of them over the other?
Finally, which way is more efficient and/or more secure?

First, I found that it's not possible to define the type of the constant using #define. Why is that?
Why is what? It's not true:
#define MY_INT_CONSTANT ((int) 12345)
Second, are there any advantages to using one of them over the other?
Yes. #define defines a macro which is replaced even before compilation starts. const merely modifies a variable so that the compiler will flag an error if you try to change it. There are contexts in which you can use a #define but you can't use a const (although I'm struggling to find one using the latest clang). In theory, a const takes up space in the executable and requires a reference to memory, but in practice this is insignificant and may be optimised away by the compiler.
consts are much more compiler and debugger friendly than #defines. In most cases, this is the overriding point you should consider when making a decision on which one to use.
Just thought of a context in which you can use #define but not const. If you have a constant that you want to use in lots of .c files, with a #define you just stick it in a header. With a const you have to put a definition in one C file and a declaration in a header:
// in a C file
const int MY_INT_CONST = 12345;
// in a header
extern const int MY_INT_CONST;
Note also that MY_INT_CONST can't be used as the size of a static or global scope array: in C, a const variable is not an integer constant expression, not even in the file that defines it.
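A minimal sketch of the difference (the buffer names are hypothetical):
// With the const, even the defining file can't do this:
extern const int MY_INT_CONST;
static int buffer[MY_INT_CONST];       // error: size is not a constant expression in C

// With the #define, the preprocessor substitutes a literal:
#define MY_INT_CONSTANT ((int) 12345)
static int buffer2[MY_INT_CONSTANT];   // fine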
However, for integer constants you can use an enum. In fact that is what Apple does almost invariably. This has all the advantages of both #defines and consts but only works for integer constants.
// In a header
enum
{
    MY_INT_CONST = 12345,
};
Finally, which way is more efficient and/or more secure?
#define is more efficient in theory, although, as I said, modern compilers probably ensure there is little difference. #define is more secure in that it is always a compiler error to try to assign to it:
#define FOO 5
// ....
FOO = 6; // Always a syntax error
consts can be tricked into being assigned to, although the compiler might issue warnings:
const int FOO = 5;
// ...
*(int *)&FOO = 6; // Can be made to compile, by casting away const
Depending on the platform, the assignment might still fail at run time if the constant is placed in a read-only segment, and it is officially undefined behaviour according to the C standard.
Personally, for integer constants I always use enums; for constants of other types, I use const unless I have a very good reason not to.

From a C coder:
A const is simply a variable whose content cannot be changed.
#define name value, however, is a preprocessor command that replaces all instances of the name with value.
For instance, if you #define defTest 5, all instances of defTest in your code will be replaced with 5 when you compile.
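For instance, a sketch of what the compiler ends up seeing after substitution:
#define defTest 5

int total = defTest + 2;   // after preprocessing, the compiler sees: int total = 5 + 2;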

It is important to understand the difference between #define and const, because they are not meant for the same things.
const
const is used to create an object of the requested type that, once initialised, is constant. The object lives in the program's memory and is read-only.
The object is generated every time the program is launched.
#define
#define is used to ease code readability and future modification. A define simply masks a value behind a name, so when working with a rectangle you can define the width and height with their corresponding values. The code then becomes easier to read, since names appear instead of bare numbers.
If you later decide to change the value of the width, you only have to change it in the define, instead of doing a tedious and dangerous find/replace across your whole file.
When compiling, the preprocessor replaces every defined name with its value in the code, so there is no runtime cost to using them.
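A minimal sketch of the rectangle idea (the names and values are hypothetical):
#import <CoreGraphics/CoreGraphics.h>

// The rectangle's dimensions live in one place; change them here only.
#define RECT_WIDTH  320.0
#define RECT_HEIGHT 480.0

// Inside some method or function:
CGRect rect = CGRectMake(0.0, 0.0, RECT_WIDTH, RECT_HEIGHT);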

In addition to other people's comments, errors involving #define are notoriously difficult to debug, as the pre-processor gets hold of them before the compiler.

Since pre-processor directives are frowned upon, I suggest using a const. You can't specify a type with the pre-processor, because a pre-processor directive is resolved before compilation. Well, you can approximate it with something like:
#define DEFINE_INT(name,value) const int name = value;
and use it as
DEFINE_INT(x,42)
which would be seen by the compiler as
const int x = 42;
First, I found that it's not possible to define the type of the constant using #define. Why is that?
You can, see my first snippet.
Second, are there any advantages to using one of them over the other?
Generally having a const instead of a pre-processor directive helps with debugging, not as much in this case (but still does).
Finally, which way is more efficient and/or more secure?
Both are equally efficient. I'd say the macro can potentially be more secure, as it can't be changed at run time, whereas a variable could be.

I have used #define before to help create several entry points to a single method. For example, suppose I have something like this:
// This method takes up to 4 numbers; we don't care what it does with them.
- (void)doSomeCalculationWithMultipleNumbers:(NSNumber *)num1 Number2:(NSNumber *)num2 Number3:(NSNumber *)num3 Number4:(NSNumber *)num4;
But I also want a method that takes only 3 numbers, and one that takes 2, so instead of writing two new methods I reuse the same one via #define, like so:
#define doCalculationWithFourNumbers(num1, num2, num3, num4) \
    [self doSomeCalculationWithMultipleNumbers:(num1) Number2:(num2) Number3:(num3) Number4:(num4)]
#define doCalculationWithThreeNumbers(num1, num2, num3) \
    [self doSomeCalculationWithMultipleNumbers:(num1) Number2:(num2) Number3:(num3) Number4:nil]
#define doCalculationWithTwoNumbers(num1, num2) \
    [self doSomeCalculationWithMultipleNumbers:(num1) Number2:(num2) Number3:nil Number4:nil]
I think this is a pretty cool thing to have. I know you can go straight to the method and just pass nil for the arguments you don't want, but if you are building a library it is very useful. It is also how
NSLocalizedString(<#key#>, <#comment#>)
NSLocalizedStringFromTable(<#key#>, <#tbl#>, <#comment#>)
NSLocalizedStringFromTableInBundle(<#key#>, <#tbl#>, <#bundle#>, <#comment#>)
are done.
I don't believe you can do this with constants. But constants do have their benefits over #define: you can't specify a type with a #define, because it is a pre-processor directive resolved before compilation, and errors involving a #define are harder to debug than ones involving constants. Both have their benefits and downsides, and in the end it comes down to the programmer which one to use. I have written a library that uses both: #define to do what I have shown above, and constants to declare constant variables that need a specific type.

Related

Objective-C macro in preprocessor IF

I have a project with some macros that are defined using Objective-C statements, like this:
#define TEST [someObject someNumber] == 500
I need to define another value based on this result, like this:
#if TEST
#define THING 1
#else
#define THING 2
#endif
But, this doesn't work. And I can't use #ifdef TEST because the value is always defined. Even if it's false, it's still defined.
TEST is based on an ObjC statement, and it seems like the preprocessor has no way of evaluating it. So, is there no way to test for this?
In the comments you wrote:
Since TEST must be evaluated during runtime, there's no way to know the value of it during build time. Because the preprocessor can't know the value, it can't test it. Is this correct?
Yes.
The preprocessor runs (at least logically) before the rest of the compiler. It is essentially language and syntax agnostic, and does not even have access to constants defined in your code. The conditional constructs operate solely with preprocessor tokens.
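A minimal sketch of the contrast (TEST_VALUE is a hypothetical, purely textual constant):
// Works: both sides are plain tokens the preprocessor can compare.
#define TEST_VALUE 500

#if TEST_VALUE == 500
#define THING 1
#else
#define THING 2
#endif

// Does not work: [someObject someNumber] only has a value at run time,
// so #if cannot evaluate the original TEST macro.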

How to access a constant using a String holding its name

I have a .h file that defines a few hundred constants. Let's assume this to be one of them:
#define KDSomeItem 1
I know that the Objective-C runtime API can be used to retrieve a list of instance variable names, as detailed in this question: How do I list all fields of an object in Objective-C?
I also know that the getter [object valueForKey:theName] can be used to access the ivars as found in an earlier question of mine: How to access a property/variable using a String holding its name
My question is: can something similar be done with constants? That is, can I:
a) get a list of all constants programmatically
b) access a constant if I have a string holding its name
e.g. if I had a String like this: NSString *theName = @"KDSomeItem"; could I evaluate my constant using this string?
You would not be able to do this: unlike instance variables, #define-d constants do not leave a trace after the preprocessor is done with them. They leave no metadata behind; all instances of KDSomeItem in the body of your program will be replaced with 1 even before the Objective-C compiler proper gets to analyze your code.
If you need the constant names to be available at run time, you will need to build all the necessary metadata yourself. To do that, you may want to look into the "stringizing" operator of the preprocessor:
#define STR(X) #X
This macro can be applied to a preprocessor constant, and produce a C string literal with its name:
const char *nameOfKDSomeItem = STR(KDSomeItem); // same as "KDSomeItem"
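Building on that, a minimal sketch of hand-built metadata (the table, entry macro, and lookup function are all hypothetical):
#include <string.h>

#define KDSomeItem  1
#define KDOtherItem 2                  // hypothetical second constant

// #X stringizes the name as written; a bare X expands to the value.
// (STR can't be used inside another macro here, because the argument
// would be expanded to its value before being stringized.)
#define CONST_ENTRY(X) { #X, (X) }

static const struct { const char *name; int value; } kConstTable[] = {
    CONST_ENTRY(KDSomeItem),
    CONST_ENTRY(KDOtherItem),
};

// Resolve a constant's value from its name at run time; -1 if unknown.
static int valueForConstantName(const char *name) {
    for (size_t i = 0; i < sizeof kConstTable / sizeof kConstTable[0]; i++) {
        if (strcmp(kConstTable[i].name, name) == 0)
            return kConstTable[i].value;
    }
    return -1;
}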
Nope. Your constant is a preprocessor macro. It is textually substituted into your source code where you use it.

enum or define, which one should I use?

enum and #define appear to be able to do the same thing for the example below, defining a style. I understand that #define is macro substitution by the compiler's preprocessor. Are there any circumstances in which one is preferred over the other?
typedef enum {
    SelectionStyleNone,
    SelectionStyleBlue,
    SelectionStyleRed
} SelectionStyle;
vs
#define SELECTION_STYLE_NONE 0
#define SELECTION_STYLE_BLUE 1
#define SELECTION_STYLE_RED 2
Don't ever use defines unless you MUST have the functionality of the preprocessor. For something as simple as an integral enumeration, use an enum.
An enum is best if you want type safety. Enum values are also exported as symbols, so some debuggers can display them inline, which they can't do for defines.
The main problem with enums, of course, is that they can only contain integers. For strings, floats, etc. you might be better off with a const or a define.
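A minimal sketch of that split (the names are hypothetical):
#import <Foundation/Foundation.h>

// Integers: an enum gives named, type-checked values.
typedef enum {
    SelectionStyleNone,
    SelectionStyleBlue,
    SelectionStyleRed
} SelectionStyle;

// Everything else: const carries the type an enum can't.
static NSString * const kSelectionStyleKey = @"selectionStyle";
static const float kSelectionAlpha = 0.75f;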
The short answer is that it probably doesn't matter a lot. This article provides a pretty good long answer:
http://www.embedded.com/columns/programmingpointers/9900402?_requestid=345959
When there's a built-in language feature supporting what you want to do (in this case, enumerating items), you should probably use that feature.
Defines are probably slightly faster (at runtime) than enums, but the benefit is probably only a handful of cycles, so it's negligible unless you're doing something that really requires that. I'd go with enum, since using the preprocessor is harder to debug.
#define DEFINED_VALUE 1
#define DEFINED_VALUE 2 // Warning
enum { ENUM_VALUE = 1 };
enum { ENUM_VALUE = 2 }; // Error
With #define, there is a higher probability of introducing subtle bugs.

Objective C - Why do constants start with k

Why do constants in all examples I've seen always start with k? And should I #define constants in header or .m file?
I'm new to Objective C, and I don't know C. Is there some tutorial somewhere that explains these sorts of things without assuming knowledge of C?
Starting constants with a "k" is a legacy of the pre-Mac OS X days. In fact, I think the practice might even come from way back when the Mac OS was written mostly in Pascal and Pascal was the predominant development language. In C, #define'd constants are typically written in ALL CAPS, rather than prefixed with a "k".
As for where to #define constants: #define them where you're going to use them. If you expect people who #import your code to use the constants, put them in the header file; if the constants are only going to be used internally, put them in the .m file.
Current recommendations from Apple for naming constants don't include the 'k' prefix, but many organizations adopted that convention and still use it, so you still see it quite a lot.
The question of what the "k" means is answered in this question.
And if you intend for files other than that particular .m to use these constants, you have to put the constants in the header, since they can't import the .m file.
You might be interested in Cocoa Dev Central's C tutorial for Cocoa programmers. It explains a lot of the core concepts.
The k prefix comes from a time when many developers loved to use Hungarian notation in their code. In Hungarian notation, every variable has a prefix that tells you what type it is. pSize would be a pointer named "size", whereas iSize would be an integer named "size". Just looking at the name, you know the type of a variable. This can be pretty helpful in the absence of modern IDEs that can show you the type of any variable at any time; otherwise you'd always have to search for the declaration to know it. Following the trend of the time, Apple wanted to have a common prefix for all constants.
Okay, why not c then, as in "constant"? Because c was already taken: in Hungarian notation, c is for "counter" (cApple means "count of apples"). There's a similar problem with the word class, which is a keyword in many languages; how do you name a variable that points to a class? You will find tons of code naming this variable klass, and thus k was chosen, k as in "konstant". In many languages this word actually does start with a k, see here.
Regarding your second question: you should not use #define for constants at all if you can avoid it, as #define is typeless.
const int x = 10; // Type is int
const short y = 20; // Type is short
const uint64_t z = 30; // Type is for sure UInt64
const double d = 5000; // Type is for sure double
const char * str = "Hello"; // Type is for sure char *
#define FOO 90
What type is FOO? It's some kind of number. But what kind of number? So far, any type, or no type at all. The type will depend on how and where you use FOO in your code.
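A small sketch of how FOO's type shifts with context (the variables are hypothetical):
#define FOO 90

int    a = FOO;       // here FOO behaves as an int
double b = FOO / 7;   // integer division happens first: b is 12.0, not ~12.86
char   c = FOO;       // FOO becomes a char just as happily; nothing pins its type down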
Also if you have a fixed set of numbers, use an enum as then the compiler can verify you are using a valid value and enum values are always constant.
If you have to use a define, it won't matter where you define it. Header files are files you share among multiple code files, so if you need the same define in more than one place, you write it into a header file and include that header file wherever the define is needed. What you write into a code file is only visible within that code file, except for non-static functions and Obj-C classes, which are both globally visible by default.
But unless a function is declared in a header file, and that header file is included into the code file where you want to use the function, the compiler does not know what the function looks like (what parameters it expects, what result value it returns), so it cannot check any of this and must trust that you call it correctly (usually it will issue a warning in that case). Obj-C classes cannot be used at all unless you tell the current code file at least that the name is the name of a class; and if you want to actually do something with the class (other than just passing it around), the compiler needs to know its interface. That is why interfaces go into header files. (If the class is only used within the current code file, writing the interface and the implementation into that file is legal and will work, too.)
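A minimal sketch of that layout (file and symbol names are hypothetical):
// Shared.h -- imported by every code file that needs these
#define kMaxRetries 3                   // visible wherever this header is imported
extern const double kTimeoutSeconds;    // declaration only
void logRetry(int attempt);             // lets the compiler check callers' arguments

// Shared.m -- exactly one definition for the whole program
const double kTimeoutSeconds = 2.5;
void logRetry(int attempt) { /* ... */ }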
k for "konvention". Seriously; it is just convention.
You can put a #define wherever you like; in a header, in the .m at the top, in the .m right next to where you use it. Just put it before any code that uses it.
The "intro to objective-c" documentation provided with the Xcode tool suite is actually quite good. Read it a few times (I like to re-read it once every 2 to 5 years).
However, neither it nor any of the C books that I'm aware of will answer these particular questions. The answers sort of become obvious through experience.
I believe it is because of the former prevalence of Hungarian notation; k was chosen because c stood for character. ( http://en.wikipedia.org/wiki/Hungarian_notation )

const vs enum in D

Check out this quote from here, towards the bottom of the page. (I believe the quoted comment about consts applies to invariants as well.)
Enumerations differ from consts in that they do not consume any space in the final outputted object/library/executable, whereas consts do.
So apparently value1 will bloat the executable, while value2 is treated as a literal and doesn't appear in the object file.
const int value1 = 0xBAD;
enum int value2 = 42;
Back in C++ I always assumed this was for legacy reasons, and old compilers that couldn't optimize away constants. But if this is still true in D, there must be a deeper reason behind this. Anyone know why?
Just like in C++, an enum in D seems to be a "conserved integer literal" (edit: amazing, D2 even supports floats and strings). Its enumerators have no location. They are just immaterial as values without identity.
Using enum this way is new in D2. It does not define a new variable; it is not an lvalue (so you also cannot take its address). An
enum int a = 10; // new in D2
is like
enum : int { a = 10 }
If I can trust my poor D knowledge, that is. So a here is not an lvalue (it has no location, and you can't take its address). A const, however, has an address. If you have a global (not sure whether this is the right D terminology) const variable, the compiler usually can't optimize it away, because it doesn't know which modules can access the variable or take its address. So it has to allocate storage for it.
I think if you have a local const, the compiler can still optimize it away just as in C++, because the compiler knows by looking at its scope whether or not anyone is interested in its address or whether everyone just takes its value.
Your actual question (why enum and const behave the same in D as in C++) seems to be unanswered. Sadly, there exists no good reason for this choice whatsoever. I believe it was just an unintentional side effect in C++ that became a de facto pattern. In D the same pattern was needed, and Walter Bright decided it should be done as in C++, so that those coming from there would recognize what to do... In fact, before this rather (IMHO) silly decision, the keyword manifest was used instead of enum for this use case.
I think a good compiler/linker should still remove the constant. It's just that with the enum, it's actually guaranteed in the spec. The difference is primarily a matter of semantics. (Also keep in mind that 2.0 isn't complete yet)
The real purpose of enum being expanded syntactically to support single manifest constants, from what I understand, is that Don Clugston, a D template guru, was doing some crazy stuff with templates. He kept running into long build times, ridiculous compiler memory usage, etc. because the compiler kept creating internal data structures for const variables. One key thing about const/immutable variables, compared to enums, is that const/immutable variables are lvalues and can have their addresses taken. This means there is some extra overhead for the compiler. This usually doesn't matter, but when you're executing really complicated compile-time metaprograms, it is still significant overhead at compile time, even if the const variables are optimized away.
It sounds like the enum value will be used "inline" in expressions, whereas the const will actually take storage, and any expression referencing it will load the value from that memory.
This sounds similar to the difference between const and readonly in C#. The former is a compile-time constant and the latter is a run-time constant. This definitely affected versioning of assemblies (since an assembly referencing a const would receive a copy of the value at compile time, and would not see a change to the value if the referenced assembly was rebuilt with a different value).