I'm trying to save some time and make my code more readable. I have a lot of 'isXXX' messages that return BOOL, and I am constantly adding more of them. Is it possible to make a macro that expands at pre-compile time from a list?
I want to specify:
isMacro(1, 2, 3).
And for each of those, I want the macro to expand it into a full -(BOOL)is1 {...}, -(BOOL)is2...
It seems like this would be a good use of preprocessor macro expansion, but I'm not sure how to implement the isMacro(...) part. (Specifically, the ... that expands into full methods before compilation.)
--- Update:
The 'is' methods are all dynamically computed, but they are all common.
I'm testing a single variable against an enum value and determining whether it's equal. I can't @synthesize them, because they're dynamic. I have all of them declared as @property for convenience. I just want something like @synthesize, where I can list all of them and generate a dynamic response for each isXXX method.
Also, I don't want an isCheck:(opMode)mode method, since there is no compile-time checking to make sure the argument is a valid enum value.
All of the is functions are in the following format:
-(BOOL)isTurtle { return operationMode == Turtle; }
The key is that I want it to function as a property for simplicity. Therefore, I don't want something like is:(opMode)mode, where I have to write BOOL res = [obj is:Turtle];
If you could use Boost.Preprocessor, the BOOST_PP_REPEAT_FROM_TO macro should suit your need.
#include <boost/preprocessor/repetition/repeat_from_to.hpp>
#define IS_METHODS(depth, n, aux) -(BOOL)is ## n { return aux == n; }
@implementation Foo
BOOST_PP_REPEAT_FROM_TO(1, 31, IS_METHODS, operationMode)
@end
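For n = 1, for example, that repetition expands to exactly the shape asked for:
-(BOOL)is1 { return operationMode == 1; }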
If you can't, you're out of luck. Implementing BOOST_PP_REPEAT_FROM_TO is about the same effort as just writing the 30+ functions directly.
Also I don't see how [obj isMode:12] is really worse than obj.is12. The former also allows a variable mode, and is less cryptic to other programmers (think about the maintenance effort).
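That said, if pulling in Boost is not an option, a hand-rolled X-macro comes close to the isMacro(...) idea using only plain preprocessor features. A minimal sketch (Turtle is from your example; Rabbit is a hypothetical second mode, and operationMode is assumed to be an ivar of Foo):
// One master list of mode names; add new modes here only.
#define OP_MODES \
    OP_MODE(Turtle) \
    OP_MODE(Rabbit)
@implementation Foo
// Expand the list into one -(BOOL)isXXX method per mode.
#define OP_MODE(name) -(BOOL)is##name { return operationMode == name; }
OP_MODES
#undef OP_MODE
@end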
Related
I want to modify the behavior of some function without being the author of that function. What I do control is that I can ask the author to follow some pattern, e.g. use a base class, a certain decorator, a property, etc.
In Python, I would use a decorator to change the behavior of a method.
As an example, my goal: improve code coverage by automatically testing over multiple input data sets.
Pseudo code:
@implementation SomeTestSuiteClass
// In Python, I would add a decorator here to change the behavior of the following method
-(void)testSample1 {
input = SpecialProvider();
output = FeatureToTest(input);
SpecialAssert(output);
}
@end
What I want: During test, the testSample1 method will be called multiple times. Each time, the SpecialProvider will emit a different input data. Same for the SpecialAssert, which can verify the output corresponding to the given input.
SpecialProvider and SpecialAssert will be API under my control/ownership (i.e. I write them).
The SomeTestSuiteClass together with the testSample1 will be written by the user (i.e. test writer).
Is there a way for Objective-C to achieve "what I want" above?
You could mock objects and/or their methods using the Objective-C runtime or some third-party frameworks. I discourage it, though; that is a sign of poor architecture choices in the first place. The main problem with your approach is the hidden dependencies in your code: it references the SpecialProvider and SpecialAssert symbols directly.
A much better way to do this would be:
-(void)testSample1:(SpecialProvider *)provider assert:(BOOL (^)(id output))assertBlock {
    id input = provider;
    id output = FeatureToTest(input);
    if (assertBlock != nil) {
        assertBlock(output);
    }
}
Since Objective-C does not support default argument values like Swift does, you could emulate them with:
-(void)testSample1 {
[self testSample1:DefaultSpecialProvider() assert:DefaultAssert()];
}
so you don't have to spell out the explicit testSample1:assert: variant all the time. In tests, however, you would always use the explicit two-argument variant, to allow substituting the implementation(s) not under test.
Further improvement idea:
Put SpecialProvider and SpecialAssert behind protocols (the equivalent of interfaces in other programming languages) so you can easily exchange the implementations.
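A minimal sketch of that idea (the protocol and method names here are hypothetical, and the assertion uses XCTest purely for illustration):
@protocol SpecialProvider <NSObject>
- (id)nextInput; // emits a different input on each call
@end

@protocol SpecialAssert <NSObject>
- (BOOL)verifyOutput:(id)output forInput:(id)input;
@end

// The test depends only on the protocols, so any implementation can be swapped in:
- (void)testSample1:(id<SpecialProvider>)provider assert:(id<SpecialAssert>)assert {
    id input = [provider nextInput];
    id output = FeatureToTest(input);
    XCTAssertTrue([assert verifyOutput:output forInput:input]);
}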
For example:
... some code
int sizeOfSomeObject = someObject.length();
... some code, sizeOfSomeObject is not needed anymore
Now I need another int variable for another purpose (for example, for a position in some object), and I have a dilemma: create a new variable, or reuse sizeOfSomeObject? In the first case I keep readability but lose performance; in the second case, the opposite. What do programmers usually do in this situation?
In the first case I keep readability but lose performance; in the second case, the opposite.
So did you benchmark it? I suspect not. Most modern compilers do a lot of aggressive analysis during register allocation, so if the optimizer sees that a variable is no longer used and there's a new variable of the same type, it will simply assign the two variables to the same memory region or processor register. No need to worry about performance penalties.
And anyway, don't do premature optimization (which this is). In 90% of the cases, readability is more important than "performance".
All in all, go ahead and create a new variable with an appropriate, different, descriptive name. And just for fun, compile this version and the version in which you used the same variable name, and look at the generated assembly (or bytecode, or...) - and find out that they're identical.
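For instance, in a sketch like the following (the names are hypothetical, and exact codegen naturally varies), an optimizing compiler is free to put both locals in the same register, because their live ranges never overlap:
#include <stdio.h>
#include <string.h>

int example(const char *someObject) {
    int sizeOfSomeObject = (int)strlen(someObject);
    printf("size = %d\n", sizeOfSomeObject); // last use of sizeOfSomeObject

    int positionInObject = 42; // new, descriptively named variable; its live range
    return positionInObject;   // starts after the first one ends, so the compiler
                               // can reuse the same register
}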
I would use different named variables for different things.
For something like this, I don't think one extra variable would cause a massive performance hit. In most languages you have the option to clear variables from memory (or let them go out of scope) when they are no longer in use, so I would recommend doing that, so that the code still means something to you or others when read at a later date.
In C++, you can use block scopes so that objects are destroyed as soon as they are no longer needed:
void some_function () {
    {
        MyClass c;
        // ... here we use c ...
    }
    // now c has been destroyed
    {
        MyClass d;
        // ... here we use d ...
    }
    // now d has been destroyed
}
In your example (with int variables), there is no reason to worry about performance. The worst thing that could happen is memory for two variables being used instead of one, but (i) that's negligible and (ii) the ints will probably live in CPU registers anyway. If you really worry, use the block approach for your int example.
It depends how often such an int would be initialized. If it's not in some hugely nested for loop, most (if not all) programmers will go for the first option. Besides, most modern programming languages have a garbage collector, which cleans up leftover objects.
A decent compiler will optimize out your second variable, so that shouldn't be an issue.
That said, there are situations where variable reuse makes sense. E.g., you might have a variable that holds generic output populated by calls to some external API. Depending on the context and the parameters passed to the API you'll process the data differently, but it's probably better (more readable, etc.) to reuse the same data variable.
For example, something like this:
void* data = getSomeData(params);
//process data
//change params
data = getSomeData(params);
//process data
//change params
data = getSomeData(params);
I'm new to Objective-C, and I have a few questions regarding const and the preprocessing directive #define.
First, I found that it's not possible to define the type of the constant using #define. Why is that?
Second, are there any advantages to using one of them over the other?
Finally, which way is more efficient and/or more secure?
First, I found that it's not possible to define the type of the constant using #define. Why is that?
Why is what? It's not true:
#define MY_INT_CONSTANT ((int) 12345)
Second, are there any advantages to using one of them over the other?
Yes. #define defines a macro which is replaced even before compilation starts. const merely modifies a variable so that the compiler will flag an error if you try to change it. There are contexts in which you can use a #define but you can't use a const (although I'm struggling to find one using the latest clang). In theory, a const takes up space in the executable and requires a reference to memory, but in practice this is insignificant and may be optimised away by the compiler.
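One such context in C (and hence Objective-C, though not C++) is a case label, which must be an integer constant expression; a #define'd literal qualifies, but a const int does not. A sketch with hypothetical names:
#define KIND_A 1
static const int kKindB = 2;

void handle(int kind) {
    switch (kind) {
        case KIND_A: // fine: the macro expands to the literal 1
            break;
        case kKindB: // error in C: a const int is not an integer
            break;   // constant expression (C++ would accept it)
    }
}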
consts are much more compiler and debugger friendly than #defines. In most cases, this is the overriding point you should consider when making a decision on which one to use.
Just thought of a context in which you can use #define but not const. If you have a constant that you want to use in lots of .c files, with a #define you just stick it in a header. With a const you have to have a definition in a C file and a declaration in a header:
// in a C file
const int MY_INT_CONST = 12345;
// in a header
extern const int MY_INT_CONST;
Note that MY_INT_CONST can't be used as the size of a static or global-scope array in any C file except the one it is defined in.
However, for integer constants you can use an enum. In fact, that is what Apple does almost invariably. This has all the advantages of both #defines and consts, but it only works for integer constants.
// In a header
enum
{
MY_INT_CONST = 12345,
};
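For what it's worth, modern Objective-C usually wraps the same idiom in NS_ENUM (from Foundation), which additionally pins down the underlying type:
// In a header; the constant now has an explicit NSInteger underlying type.
typedef NS_ENUM(NSInteger, MyConstant) {
    MY_INT_CONST = 12345,
};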
Finally, which way is more efficient and/or more secure?
#define is more efficient in theory, although, as I said, modern compilers probably ensure there is little difference. #define is more secure in that it is always a compile-time error to try to assign to it:
#define FOO 5
// ....
FOO = 6; // Always an error: this expands to 5 = 6
consts can be tricked into being assigned to, although the compiler might issue warnings:
const int FOO = 5;
// ...
*(int *)&FOO = 6; // Casting away the const can make this compile
Depending on the platform, the assignment might still fail at run time if the constant is placed in a read-only segment, and it's officially undefined behaviour according to the C standard.
Personally, for integer constants I always use enums; for constants of other types, I use const unless I have a very good reason not to.
From a C coder:
A const is simply a variable whose content cannot be changed.
#define name value, however, is a preprocessor command that replaces all instances of name with value.
For instance, if you #define defTest 5, all instances of defTest in your code will be replaced with 5 when you compile.
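To make that concrete, a two-line sketch:
#define defTest 5

int x = defTest + 1; // after preprocessing, the compiler sees: int x = 5 + 1;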
It is important to understand the difference between the #define and const instructions, which are not meant for the same purposes.
const
const is used to create an object of the requested type that, once initialised, is constant. It is a real object in program memory that can be used read-only.
The object is created every time the program is launched.
#define
#define is used to ease code readability and future modification. When using a define, you only mask a value behind a name. Hence, when working with a rectangle you can define the width and height with their corresponding values. Then the code is easier to read, since instead of bare numbers there are names.
If you later decide to change the value for the width, you only have to change it in the define, instead of doing a boring and dangerous find/replace in your whole file.
When compiling, the preprocessor replaces all the defined names with their values in the code. Hence, there is no run-time cost to using them.
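A small sketch of the rectangle example (names and values are hypothetical):
// Change RECT_WIDTH here once, and every use below picks it up.
#define RECT_WIDTH  320
#define RECT_HEIGHT 240

int area = RECT_WIDTH * RECT_HEIGHT;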
In addition to other people's comments: errors involving #define are notoriously difficult to debug, as the pre-processor gets hold of them before the compiler does.
Since pre-processor directives are frowned upon, I suggest using a const. You can't specify a type with a pre-processor directive because it is resolved before compilation. Well, you can, but only with something like:
#define DEFINE_INT(name,value) const int name = value;
and use it as
DEFINE_INT(x,42)
which would be seen by the compiler as
const int x = 42;
First, I found that it's not possible to define the type of the constant using #define. Why is that?
You can, see my first snippet.
Second, are there any advantages to using one of them over the other?
Generally, having a const instead of a pre-processor directive helps with debugging; not as much in this case (but it still does).
Finally, which way is more efficient and/or more secure?
Both are as efficient. I'd say the macro can potentially be more secure, as it can't be changed during run time, whereas a variable could be.
I have used #define before to help create more method variants out of one method. For example, if I have something like:
// This method takes up to 4 numbers; we don't care what the method does with these numbers.
- (void)doSomeCalculationWithMultipleNumbers:(NSNumber *)num1 Number2:(NSNumber *)num2 Number3:(NSNumber *)num3 Number4:(NSNumber *)num4;
But I also want a method that takes only 3 numbers or 2 numbers, so instead of writing two new methods I am going to reuse the same one through #define, like so:
// These macros assume they are used inside the implementing class, so that
// self responds to doSomeCalculationWithMultipleNumbers:Number2:Number3:Number4:.
#define doCalculationWithFourNumbers(num1, num2, num3, num4) \
    [self doSomeCalculationWithMultipleNumbers:(num1) Number2:(num2) Number3:(num3) Number4:(num4)]
#define doCalculationWithThreeNumbers(num1, num2, num3) \
    [self doSomeCalculationWithMultipleNumbers:(num1) Number2:(num2) Number3:(num3) Number4:nil]
#define doCalculationWithTwoNumbers(num1, num2) \
    [self doSomeCalculationWithMultipleNumbers:(num1) Number2:(num2) Number3:nil Number4:nil]
I think this is a pretty cool thing to have. I know you can call the full method directly and just pass nil in the slots you don't want, but if you are building a library it is very useful. Also, this is how
NSLocalizedString(<#key#>, <#comment#>)
NSLocalizedStringFromTable(<#key#>, <#tbl#>, <#comment#>)
NSLocalizedStringFromTableInBundle(<#key#>, <#tbl#>, <#bundle#>, <#comment#>)
are done.
Whereas I don't believe you can do this with constants. But constants do have their benefits over #define: you can't specify a type with a #define, because it is a pre-processor directive that is resolved before compilation, and errors involving a #define are harder to debug than ones involving constants. Both have their benefits and downsides, and in the end it comes down to the programmer which one to use. I have written a library with both of them in it, using #define to do what I have shown and constants for declaring constant variables that need a type.
If you take this method, for instance (from another post):
- (int)methodName:(int)arg1 withArg2:(int)arg2
{
// Do something crazy!
return someInt;
}
Is withArg2 actually ever used for anything inside this method?
withArg2 is part of the method name (it is usually written without arguments as methodName:withArg2: if you want to refer to the method in the documentation), so no, it is not used for anything inside the method.
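A quick way to see this is to look at the selector, which contains every argument label:
SEL sel = @selector(methodName:withArg2:);
NSLog(@"%@", NSStringFromSelector(sel)); // prints "methodName:withArg2:"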
As Tamás points out, withArg2 is part of the method name. If you write a function with the exact same name in C, it will look like this:
int methodNamewithArg2(int arg1, int arg2)
{
// Do something crazy!
return someInt;
}
Coming from other programming languages, the Objective-C syntax at first might appear weird, but after a while you will start to understand how it makes your whole code more expressive. If you see the following C++ function call:
anObject.subString("foobar", 2, 3, true);
and compare it to a similar Objective-C method invocation
[anObject subString:"foobar" startingAtCharacter:2 numberOfCharacters:3 makeResultUpperCase:YES];
it should become clear what I mean. The example may be contrived, but the point is to show that embedding the meaning of the next parameter into the method name allows you to write very readable code. Even if you choose horrible variable names or use literals (as in the example above), you will still be able to make sense of the code without having to look up the method documentation.
You would call this method as follows:
int i = [self methodName:arg1 withArg2:arg2];
This is just Objective-C's way of making the code easier to read.
In my language I can use a class variable in my method even when its definition appears below the method. The method can also call methods defined below it, and so on. There are no 'headers'. Take this C# example:
class A
{
    public void callMethods() { print(); B b = new B(); b.notYetSeen(); }
    public void print() { Console.Write("v = {0}", v); }
    int v = 9;
}
class B
{
    public void notYetSeen() { Console.Write("notYetSeen()\n"); }
}
How should I compile that? What I was thinking is:
pass 1: convert everything to an AST
pass 2: go through all classes and build a list of defined classes/variables/etc.
pass 3: go through the code, check for errors such as undefined variables or wrong usage, and create my output
But it seems like, for this to work, I have to do passes 1 and 2 for ALL files before doing pass 3. It also feels like a lot of work to do before I can report an error (other than the obvious ones that can be caught at parse time, such as forgetting to close a brace or writing 0xLETTERS instead of a hex value). My gut says there is some other way.
Note: I am using bison/flex to generate my compiler.
My understanding of languages that handle forward references is that they typically use the first pass just to build a list of valid names: something along the lines of putting an entry in a table (without filling out the definition) so you have something to point to later, when you do your real pass to generate the definitions.
If you try to actually build full definitions as you go, you end up having to rescan repeatedly, each time saving any references to undefined things for the next pass. Even that would fail if there are circular references.
I would go through on pass one and collect all of your class/method/field names and types, ignoring the method bodies. Then in pass two check the method bodies only.
I don't know that there can be any other way than traversing all the files in the source.
I think you can get it down to two passes: on the first pass, build the AST, and whenever you find a variable name, add it to a list containing that block's symbols (it would probably be useful to add that list to the corresponding scope in the tree). Step two is to linearly traverse the tree and make sure that each symbol used references a symbol in that scope or a scope above it.
My description is oversimplified, but the basic answer is: lookahead requires at least two passes.
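As a sketch of those two passes (all the AST and symbol-table names here are hypothetical):
/* Pass 1: walk only the declarations; bodies are skipped, so nothing
   can be "not yet seen" at this stage. */
void collectDeclarations(Ast *root, SymbolTable *table) {
    for (AstNode *decl = firstDeclaration(root); decl != NULL; decl = nextDeclaration(decl)) {
        symbolTableInsert(table, decl->name, decl); /* record the name only */
    }
}

/* Pass 2: every name is now known; walk the method bodies and flag
   any identifier missing from the table. */
void checkBodies(Ast *root, SymbolTable *table) {
    for (AstNode *use = firstIdentifier(root); use != NULL; use = nextIdentifier(use)) {
        if (symbolTableLookup(table, use->name) == NULL) {
            reportUndefined(use);
        }
    }
}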
The usual approach is to save B as "unknown". It's probably some kind of type (because of the place where you encountered it). So you can just reserve the memory (a pointer) for it even though you have no idea what it really is.
For the method call, you can't do much. In a dynamic language, you'd just save the name of the method somewhere and check whether it exists at runtime. In a static language, you can record it under "unknown methods" somewhere in your compiler, along with the unknown type B. Since method calls eventually translate to a memory address, you can again reserve the memory.
Then, when you encounter B and the method, you can clear up your unknowns. Since you know a bit about them, you can say whether they behave like they should or if the first usage is now a syntax error.
So you don't have to read all the files twice, but this bookkeeping does make the compiler itself more complex.
Alternatively, you can generate such header-like summary files as you encounter the sources and save them somewhere you can find them again. This also speeds up compilation, since you won't have to reprocess unchanged files in the next compilation run.
Lastly, if you are writing a new language, you shouldn't use bison and flex anymore; there are much better tools by now. ANTLR, for example, can produce a parser that recovers after an error, so it can still parse the whole file. Or check Wikipedia's comparison of parser generators for more options.