enum and #define appear to be able to do the same thing, as in the example below defining a style. I understand that #define is macro substitution performed by the compiler's preprocessor. Are there any circumstances in which one is preferred over the other?
typedef enum {
SelectionStyleNone,
SelectionStyleBlue,
SelectionStyleRed
} SelectionStyle;
vs
#define SELECTION_STYLE_NONE 0
#define SELECTION_STYLE_BLUE 1
#define SELECTION_STYLE_RED 2
Don't ever use defines unless you MUST have the functionality of the preprocessor. For something as simple as an integral enumeration, use an enum.
An enum is best if you want type safety. Enum constants are also exported as symbols, so some debuggers can display them inline, which they can't do for defines.
The main problem with enums, of course, is that they can only contain integers. For strings, floats, etc. you might be better off with a const or a define.
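For illustration, a minimal sketch of those non-integer cases (the names and values are hypothetical):
/* Hypothetical constants: an enum cannot hold these types. */
static const double kAnimationDuration = 0.25;        /* typed, visible to the debugger */
static const char *const kDefaultGreeting = "hello";  /* typed string constant */
#define DEFAULT_RATIO 1.618                           /* untyped text, substituted by the preprocessor */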
The short answer is that it probably doesn't matter a lot. This article provides a pretty good long answer:
http://www.embedded.com/columns/programmingpointers/9900402?_requestid=345959
When there's a built-in language feature supporting what you want to do (in this case, enumerating items), you should probably use that feature.
Defines are probably slightly faster (at runtime) than enums, but the benefit is probably only a handful of cycles, so it's negligible unless you're doing something that really requires that. I'd go with enum, since using the preprocessor is harder to debug.
#define DEFINED_VALUE 1
#define DEFINED_VALUE 2 // Warning
enum { ENUM_VALUE = 1 };
enum { ENUM_VALUE = 2 }; // Error
With #define, there is a higher probability of introducing subtle bugs.
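One illustrative example of such a bug (the names are made up): an unparenthesised macro body silently changes a computation, while the enum behaves like a real value.
#define SCALE 2 + 3          /* intended to mean 5 */
enum { ScaleEnum = 2 + 3 };  /* evaluated as 5 */
int bad  = SCALE * 10;       /* expands to 2 + 3 * 10, i.e. 32 */
int good = ScaleEnum * 10;   /* 5 * 10, i.e. 50 */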
Assuming I'm including a header file in my precompiled header that includes a bunch of inline functions to be used as helpers wherever needed in any of the project's TUs -- what would be the correct way to write those inlines?
1) as static inlines? e.g.:
static inline BOOL doSomethingWith(Foo *bar)
{
// ...
}
2) as extern inlines? e.g.:
in Shared.h
extern inline BOOL doSomethingWith(Foo *bar);
in Shared.m
inline BOOL doSomethingWith(Foo *bar)
{
// ...
}
My intention with inlines is to:
make the code less verbose by encapsulating common instructions
to centralize the code they contain to aid with future maintenance
to use them instead of macros for the sake of type safety
to be able to have return values
So far I have only seen variant 1) in the wild.
I have read (sadly I can't find it anymore) that variant 1) does not actually move the inline function's body into the callers but rather creates a new function, and that only extern inline ensures that kind of behavior.
Skipping whether you should be inlining at all for the reasons you give, the standard way to inline in Cocoa is to use the predefined macro NS_INLINE - use it either in the source file using the function or in an imported header. So your example becomes:
NS_INLINE BOOL doSomethingWith(Foo *bar)
For GCC/Clang the macro uses static and the always_inline attribute.
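For reference, the GCC/Clang branch of that macro is defined roughly like this (paraphrased from Foundation's NSObjCRuntime.h; check your SDK headers for the exact form):
#define NS_INLINE static __inline__ __attribute__((always_inline))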
Most (maybe all) compilers won't inline an extern inline function, as they operate on a single compilation unit at a time - a source file along with all its includes.
make the code less verbose by encapsulating common instructions
Non-inline functions do that as well...
to centralize the code they contain to aid with future maintenance
then you should have non-inline functions, don't you think?
to use them instead of macros for the sake of type safety
to be able to have return values
those seem OK to me.
Well, when I write inline functions, I usually make them static - that's typically how it's done. (Else you can get all sorts of mysterious linker errors if you're not careful enough.) It's important to note that inline does not affect the visibility of a function, so if you want to use it in multiple files, you need the static modifier.
An extern inline function does not make a lot of sense. If you have only one implementation of the function, that defeats the purpose of inline. If you use link-time optimization (where cross-file inlining is done by the linker), then the inline hint for the compiler is not very useful anyway.
only extern inline ensures that kind of behavior.
It doesn't "ensure" anything at all. There's no portable way to force inlining - in fact, most modern compilers ignore the keyword completely and use heuristics instead to decide when to inline. In GNU C, you can force inlining using the __attribute__((always_inline)) attribute, but unless you have a very good reason for that, you shouldn't be doing it.
I'm new to Objective-C, and I have a few questions regarding const and the preprocessing directive #define.
First, I found that it's not possible to define the type of the constant using #define. Why is that?
Second, are there any advantages to use one of them over the another one?
Finally, which way is more efficient and/or more secure?
First, I found that it's not possible to define the type of the constant using #define. Why is that?
Why is what? It's not true:
#define MY_INT_CONSTANT ((int) 12345)
Second, are there any advantages to use one of them over the another one?
Yes. #define defines a macro which is replaced even before compilation starts. const merely modifies a variable so that the compiler will flag an error if you try to change it. There are contexts in which you can use a #define but you can't use a const (although I'm struggling to find one using the latest clang). In theory, a const takes up space in the executable and requires a reference to memory, but in practice this is insignificant and may be optimised away by the compiler.
consts are much more compiler and debugger friendly than #defines. In most cases, this is the overriding point you should consider when making a decision on which one to use.
I just thought of a context in which you can use #define but not const. If you have a constant that you want to use in lots of .c files, with a #define you just stick it in a header. With a const you have to have a definition in a C file and a declaration in a header:
// in a C file
const int MY_INT_CONST = 12345;
// in a header
extern const int MY_INT_CONST;
MY_INT_CONST can't be used as the size of a static or global scope array in any C file except the one it is defined in.
However, for integer constants you can use an enum. In fact that is what Apple does almost invariably. This has all the advantages of both #defines and consts but only works for integer constants.
// In a header
enum
{
MY_INT_CONST = 12345,
};
Finally, which way is more efficient and/or more secure?
#define is more efficient in theory although, as I said, modern compilers probably ensure there is little difference. #define is more secure in that it is always a compiler error to try to assign to it:
#define FOO 5
// ....
FOO = 6; // Always a syntax error
consts can be tricked into being assigned to although the compiler might issue warnings:
const int FOO = 5;
// ...
*(int *)&FOO = 6; // Casting away const can make this compile
Depending on the platform, the assignment might still fail at run time if the constant is placed in a read only segment and it's officially undefined behaviour according to the C standard.
Personally, for integer constants I always use enums; for constants of other types, I use const unless I have a very good reason not to.
From a C coder:
A const is simply a variable whose content cannot be changed.
#define name value, however, is a preprocessor command that replaces all instances of the name with value.
For instance, if you #define defTest 5, all instances of defTest in your code will be replaced with 5 when you compile.
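A tiny sketch of that substitution (the variable is hypothetical):
#define defTest 5
int total = defTest + 2;   /* after preprocessing the compiler sees: int total = 5 + 2; */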
It is important to understand the difference between the #define and const instructions, which are not meant for the same things.
const
const is used to create an object of the requested type that, once initialised, is constant. It is a real object in program memory that can be used read-only.
The object is created every time the program is launched.
#define
#define is used to improve code readability and to ease future modifications. When using a define, you only mask a value behind a name. Hence, when working with a rectangle you can define the width and height with their corresponding values; the code then becomes easier to read, since there are names instead of bare numbers.
If you later decide to change the value of the width, you only have to change it in the define instead of doing a tedious and dangerous find/replace across your whole file.
When compiling, the preprocessor replaces every occurrence of a defined name with its value in the code. Hence, there is no time lost using them at run time.
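A small sketch of the rectangle example described above (the names and values are made up):
#define RECT_WIDTH  320
#define RECT_HEIGHT 480
int area = RECT_WIDTH * RECT_HEIGHT;   /* preprocessed to: int area = 320 * 480; */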
In addition to other people's comments, errors involving #define are notoriously difficult to debug, as the preprocessor gets hold of them before the compiler does.
Since pre-processor directives are frowned upon, I suggest using a const. You can't specify a type with a pre-processor directive because it is resolved before compilation. Well, you can, but only with something like:
#define DEFINE_INT(name,value) const int name = value;
and use it as
DEFINE_INT(x,42)
which would be seen by the compiler as
const int x = 42;
First, I found that it's not possible to define the type of the constant using #define. Why is that?
You can, see my first snippet.
Second, are there any advantages to use one of them over the another one?
Generally, having a const instead of a pre-processor directive helps with debugging, though not as much in this case (but it still helps).
Finally, which way is more efficient and/or more secure?
Both are equally efficient. I'd say the macro can potentially be more secure, as it can't be changed during run time, whereas a variable could be.
I have used #define before to help create more "methods" out of one method. For example, if I have something like:
// This method takes up to 4 numbers; we don't care what it does with these numbers.
void doSomeCalculationWithMultipleNumbers(NSNumber *num1, NSNumber *num2, NSNumber *num3, NSNumber *num4);
But I also want to have a version that only takes 3 numbers and one that takes 2, so instead of writing two new methods I am going to reuse the same one via #define, like so:
#define doCalculationWithFourNumbers(num1, num2, num3, num4) \
doSomeCalculationWithMultipleNumbers((num1), (num2), (num3), (num4))
#define doCalculationWithThreeNumbers(num1, num2, num3) \
doSomeCalculationWithMultipleNumbers((num1), (num2), (num3), nil)
#define doCalculationWithTwoNumbers(num1, num2) \
doSomeCalculationWithMultipleNumbers((num1), (num2), nil, nil)
I think this is a pretty cool thing to have. I know you can go straight to the method and just put nil in the spaces you don't want, but if you are building a library it is very useful. This is also how
NSLocalizedString(<#key#>, <#comment#>)
NSLocalizedStringFromTable(<#key#>, <#tbl#>, <#comment#>)
NSLocalizedStringFromTableInBundle(<#key#>, <#tbl#>, <#bundle#>, <#comment#>)
are done.
Whereas I don't believe you can do this with constants. But constants do have their benefits over #define: you can't specify a type with a #define because it is a pre-processor directive that is resolved before compilation, and if you get an error with a #define it is harder to debug than with a constant. Both have their benefits and downsides, but I would say it all depends on the programmer which one you decide to use. I have written a library with both of them in it, using #define to do what I have shown above and constants for declaring constant variables for which I need to specify a type.
void Foo(Type^ type)
{
    System::Guid id = type->GUID;
    switch (id)
    {
    case System::Byte::typeid->GUID:
        ...
        break;
    ...
    }
}
Obviously case expressions are not constant, but I'd like to know why GUIDs cannot be known at compile time (silly question, I guess).
At the end of the day it looks like you have to use nested if/else blocks for testing against typeid, and that's the only way to go, right?
Simply put: the CLR has no metadata representation of a Guid... or indeed DateTime or Decimal, as the other obvious candidates. That means there isn't a constant representation of Guid, and switch cases have to be constants, at least in C# and I suspect in C++/CLI too.
Now that doesn't have to be a blocker... C# allows const decimal values via a fudge, and languages could do the same thing for Guids, and then allow you to switch on them. The language can decide how it's going to implement switching, after all.
I suspect that the C++/CLI designers felt that it would be a sufficiently rare use-case that it wasn't worth complicating the language and the compiler to support it.
Only strings, integral types and enums can be used in .NET in a switch statement.
Why do constants in all examples I've seen always start with k? And should I #define constants in header or .m file?
I'm new to Objective C, and I don't know C. Is there some tutorial somewhere that explains these sorts of things without assuming knowledge of C?
Starting constants with a "k" is a legacy of the pre-Mac OS X days. In fact, I think the practice goes back even further, to when the Mac OS was written mostly in Pascal and Pascal was the predominant development language. In C, #define'd constants are typically written in ALL CAPS rather than prefixed with a "k".
As for where to #define constants: #define them where you're going to use them. If you expect people who #import your code to use the constants, put them in the header file; if the constants are only going to be used internally, put them in the .m file.
Current recommendations from Apple for naming constants don't include the 'k' prefix, but many organizations adopted that convention and still use it, so you still see it quite a lot.
The question of what the "k" means is answered in this question.
And if you intend for files other than that particular .m to use these constants, you have to put the constants in the header, since they can't import the .m file.
You might be interested in Cocoa Dev Central's C tutorial for Cocoa programmers. It explains a lot of the core concepts.
The k prefix comes from a time when many developers loved to use Hungarian notation in their code. In Hungarian notation, every variable has a prefix that tells you its type: pSize would be a pointer named "size", whereas iSize would be an integer named "size". Just looking at the name, you know the type of a variable. This can be pretty helpful in the absence of modern IDEs that can show you the type of any variable at any time; otherwise you would always have to search for the declaration to know it. Following the trend of the time, Apple wanted a common prefix for all constants.
Okay, so why not c then, as in "constant"? Because c was already taken: in Hungarian notation, c is for "counter" (cApple means "count of apples"). There's a similar problem with the word class, which is a keyword in many languages, so how do you name a variable that points to a class? You will find tons of code naming this variable klass, and thus k was chosen, k as in "konstant". In many languages this word actually does start with a k, see here.
Regarding your second question: you should not use #define for constants at all if you can avoid it, as #define is typeless.
const int x = 10; // Type is int
const short y = 20; // Type is short
const uint64_t z = 30; // Type is for sure UInt64
const double d = 5000; // Type is for sure double
const char * str = "Hello"; // Type is for sure char *
#define FOO 90
What type is FOO? It's some kind of number, but what kind? So far it has any type, or no type at all; the type will depend on how and where you use FOO in your code.
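A short illustration of that point, continuing the FOO example with a hypothetical typed counterpart kFoo: the define takes whatever type the context gives it, while the const carries its own type.
const double kFoo = 90;    /* carries its own type */
double a = FOO / 4;        /* integer division: a == 22.0 */
double b = kFoo / 4;       /* floating-point division: b == 22.5 */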
Also, if you have a fixed set of numbers, use an enum, as the compiler can then verify that you are using a valid value, and enum values are always constant.
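As a sketch of that checking (with a made-up enum): Clang and GCC will warn, with -Wswitch or -Wall, when a switch over an enum type misses one of its values, which a set of #defines cannot give you.
typedef enum { ColorRed, ColorGreen, ColorBlue } Color;

const char *NameForColor(Color c)
{
    switch (c) {               /* -Wswitch: warns that ColorBlue is not handled */
        case ColorRed:   return "red";
        case ColorGreen: return "green";
    }
    return "unknown";
}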
If you have to use a define, it doesn't matter where you define it. Header files are files you share among multiple code files, so if you need the same define in more than one place, you write it into a header file and include that header file wherever the define is needed. What you write into a code file is only visible within that code file, except for non-static functions and Obj-C classes, which are both globally visible by default.
But unless a function is declared in a header file and that header file is included in the code file where you want to use it, the compiler will not know what the function looks like (what parameters it expects, what result it returns), so it cannot check any of this and must rely on you calling it correctly (usually this will produce a warning). Obj-C classes cannot be used at all unless you at least tell the current code file that the name is the name of a class; and if you want to actually do something with that class (other than just passing it around), the compiler needs to know the interface of the class. That is why interfaces go into header files (if a class is only used within the current code file, writing the interface and implementation into that file is legal and will work, too).
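A minimal sketch of that declaration/definition split, with made-up names:
/* helper.h - the declaration tells the compiler the signature */
int AddNumbers(int a, int b);

/* helper.c - the definition is compiled exactly once */
#include "helper.h"
int AddNumbers(int a, int b)
{
    return a + b;
}

/* main.c - includes the header, so the compiler can check the call */
#include "helper.h"
int UseIt(void)
{
    return AddNumbers(2, 3);
}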
k for "konvention". Seriously; it is just convention.
You can put a #define wherever you like; in a header, in the .m at the top, in the .m right next to where you use it. Just put it before any code that uses it.
The "intro to objective-c" documentation provided with the Xcode tool suite is actually quite good. Read it a few times (I like to re-read it once every 2 to 5 years).
However, neither it nor any of the C books that I'm aware of will answer these particular questions. The answers sort of become obvious through experience.
I believe it is because of the former prevalence of Hungarian Notation, so k was chosen because c stood for character. ( http://en.wikipedia.org/wiki/Hungarian_notation )
Check out this quote from here, towards the bottom of the page. (I believe the quoted comment about consts applies to invariants as well.)
Enumerations differ from consts in that they do not consume any space
in the final outputted object/library/executable, whereas consts do.
So apparently value1 will bloat the executable, while value2 is treated as a literal and doesn't appear in the object file.
const int value1 = 0xBAD;
enum int value2 = 42;
Back in C++ I always assumed this was for legacy reasons, and old compilers that couldn't optimize away constants. But if this is still true in D, there must be a deeper reason behind this. Anyone know why?
Just like in C++, an enum in D seems to be a "conserved integer literal" (edit: amazing, D2 even supports floats and strings). Its enumerators have no location. They are immaterial: just values without identity.
Placing enum there is new in D2. Despite the syntax, it does not define a new variable: it is not an lvalue (so you also cannot take its address). An
enum int a = 10; // new in D2
is like
enum : int { a = 10 }
If I can trust my poor D knowledge, that is. So, a here is not an lvalue (it has no location and you can't take its address). A const, however, has an address. If you have a global (not sure whether this is the right D terminology) const variable, the compiler usually can't optimize it away, because it doesn't know which modules can access that variable or take its address. So it has to allocate storage for it.
I think if you have a local const, the compiler can still optimize it away just as in C++, because the compiler knows by looking at its scope whether or not anyone is interested in its address or whether everyone just takes its value.
Your actual question (why enum/const works the same way in D as in C++) seems to be unanswered. Sadly, there exists no good reason for this choice whatsoever. I believe it was just an unintentional side effect in C++ that became a de facto pattern. In D the same pattern was needed, and Walter Bright decided that it should be done as in C++ so that those coming from that language would recognize what to do... In fact, before this rather (IMHO) silly decision, the keyword manifest was used instead of enum for this use case.
I think a good compiler/linker should still remove the constant. It's just that with the enum, it's actually guaranteed in the spec. The difference is primarily a matter of semantics. (Also keep in mind that 2.0 isn't complete yet)
The real purpose of enum being expanded syntactically to support single manifest constants, from what I understand, is that Don Clugston, a D template guru, was doing some crazy stuff with templates. He kept running into long build times, ridiculous compiler memory usage, etc. because the compiler kept creating internal data structures for const variables. One key thing about const/immutable variables compared to enums is that const/immutable variables are lvalues and can have their address taken. This means there is some extra overhead for the compiler. This usually doesn't matter, but when you're executing really complicated compile-time metaprograms, even if const variables are optimized away, this is still significant overhead at compile time.
It sounds like the enum value will be used "inline" in expressions, whereas the const will actually take storage and any expression referencing it will load the value from that memory.
This sounds similar to the difference between const and readonly in C#. The former is a compile-time constant and the latter is a run-time constant. This definitely affects versioning of assemblies, since an assembly referencing a const receives a copy of the value at compile time and would not pick up a change to the value if the referenced assembly were rebuilt with a different value.