Which is faster? Switch statement or dictionary? - objective-c

I see a lot of the following enum-to-string conversion in Objective-C/C at work. Stuff like:
typedef NS_ENUM(NSInteger, MyAnimal) {
    MyAnimalDog,
    MyAnimalCat,
    MyAnimalFish,
};

static NSString *_TranslateMyAnimalToNSString(MyAnimal animal)
{
    switch (animal) {
        case MyAnimalDog:
            return @"my_animal_dog";
        case MyAnimalCat:
            return @"my_animal_cat";
        case MyAnimalFish:
            return @"my_animal_fish";
    }
}
Wouldn't it be faster and/or smaller to have a static dictionary? Something like:
static NSDictionary *animalsAndNames = @{@(MyAnimalCat) : @"my_animal_cat",
                                         @(MyAnimalDog) : @"my_animal_dog",
                                         @(MyAnimalFish) : @"my_animal_fish"};
The difference is small, but I'm trying to optimize the binary size and speed, which makes me inclined toward the latter.
Thanks for helping clarify.

Answer
The dictionary should be faster for a large number of cases. A dictionary is a hashmap, which offers O(1) lookup. A switch statement, on the other hand, will have to go through all entries, thus requiring Θ(n) time.
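For illustration, a minimal sketch of what a dictionary-backed lookup could look like, assuming a lazily-built table via dispatch_once (the helper name and the nil-for-unknown behaviour are illustrative choices, not part of the original question):

static NSString *_TranslateMyAnimalToNSStringViaDictionary(MyAnimal animal)
{
    // Built once on first use; enum values must be boxed with @(...) to serve as keys.
    static NSDictionary<NSNumber *, NSString *> *animalsAndNames;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        animalsAndNames = @{@(MyAnimalDog)  : @"my_animal_dog",
                            @(MyAnimalCat)  : @"my_animal_cat",
                            @(MyAnimalFish) : @"my_animal_fish"};
    });
    return animalsAndNames[@(animal)];   // nil for values not in the table
}

Note that, unlike the switch, this boxes the enum value and performs a hash lookup on every call, which is where the O(1) lookup mentioned above comes from.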
A quick explanation of big O/Θ/Ω notation
Big-O notation is used to give an asymptotic upper bound on a particular function. That is, f(n) = O(g(n)) means that f(n) does not grow faster than g(n) as n goes to infinity (up to a constant factor). Similarly, big Ω denotes a lower bound. Therefore, the function f(n) = n + 3 is both in Ω(1) and in O(n^4), which is not very useful.
Big Θ, then, denotes a tight bound: if f(n) = O(g(n)) and f(n) = Ω(g(n)), then f(n) = Θ(g(n)) as well.
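Spelled out formally, these are the standard definitions (restated here for reference):

f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \le c \cdot g(n) \text{ for all } n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \ge c \cdot g(n) \text{ for all } n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))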
Often using big O suffices, as it makes no sense to advertise a linear algorithm as being in O(n^3), even though that is technically correct. In the case above, however, the emphasis is on the relative slowness of the switch statement, which big O alone cannot express, hence the use of Θ/Ω.
(However, I'm not sure sacrificing readability for correctness was the right choice.)

Related

How is 'n' always the 'input size' in O(n)?

Consider the following code:
void counterMethod(int n)
{
    int count = 0;
    for (int i = 0; i < n; i++)
    {
        count++;
    }
}
The time complexity of this function would be O(n), but here 'n' refers to the 'value of the input', not the 'size of the input'. Please clarify why it is 'input size' according to the formal definition:
"Time complexity is the amount of time taken by an algorithm to run, as a function of the length of the input"
In edge cases like this, the formal definition of "input size" which theorists use does not agree with the practical definition which most programmers use when thinking about actual code.
The formal definition of "input size" is the number of bits (or sometimes machine words) required to represent the input. In most cases this is proportional to e.g. the length of an array, the size of a dictionary, and so on, so the two definitions agree. Formally, your counting method's time complexity is O(2^N), where N is the number of bits required to represent the input number. (Usually you would write lowercase n for the number of bits and uppercase N for the actual numerical value, for readability, so that the uppercase letter is the "bigger" quantity.) The formal definition is like this so that terms like "polynomial time" and "NP" have exact meanings which make sense for a variety of different algorithm inputs.
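To see how the two quantities relate (using the same convention as the sentence above: N for the number of bits, n for the numerical value):

N = \lfloor \log_2 n \rfloor + 1 \quad\Longrightarrow\quad n = \Theta(2^N)

so a loop that iterates n times performs \Theta(2^N) iterations when measured against the bit length N.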
The intuitive, practical definition is that you can measure "input size" by any variable or quantity that matters for your application. Often, the numerical value of an integer is more important to your application than the number of bits required to represent it; typically you don't care about the number of bits. In that case your counting method takes O(n) time where n is the value of the integer.
Most programmers (e.g. on Stack Overflow) will talk about time complexity using this practical definition, simply because it's easier and more useful for real programming. So in your case, O(n) isn't a time complexity according to the formal definition, but if the reason you want to know the time complexity is because you want to estimate how long the code will take to run, or compare it with another algorithm to see which should be faster, then you won't care about the formal definition.

Does initialising an auxiliary array to 0 count as n time complexity already?

I'm very new to big-O complexity, and I was wondering: in an algorithm where you are given an array and you initialise an auxiliary array with the same number of indices, does that already count as O(n) time, or do you just assume it is O(1), or nothing at all?
TL;DR: Ignore it
Long answer: This will depend on the rest of your algorithm as well as on what you want to achieve. Typically you will do something useful with the array afterwards, which has at least the same time complexity as filling the array, so the array-filling does not contribute to the overall time complexity. Furthermore, filling an array with 0 feels like something you do to initialize the array so that your "real" algorithm can work properly. Nevertheless, there are some cases you could consider.
Please note that I use pseudocode in the following examples; I hope it's clear what each algorithm should do. Also note that none of the examples do anything useful with the array. They are just there to illustrate the point.
Let's say you have the following code:
A = Array[n]
for(i=0, i<n, i++)
    A[i] = 0
print "Hello World"
Then obviously the runtime of your algorithm is highly dependent on the value of n, and it should thus be counted as having linear complexity, O(n).
On the other hand, if you have a much more complicated function, say this one:
A = Array[n]
for(i=0, i<n, i++)
    A[i] = 0
for(i=0, i<n, i++)
    for(j=n-1, j>=0, j--)
        print "Hello World"
Then even if you take the complexity of filling the array into account, you will end up with a complexity of O(n^2 + 2n), which is in the same class as O(n^2), so it does not matter in this case.
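One way to see why the lower-order term can be dropped: for all n \ge 1,

n^2 + 2n \;\le\; n^2 + 2n^2 \;=\; 3n^2, \qquad\text{so}\qquad n^2 + 2n = O(n^2)

(and it is trivially \Omega(n^2), hence \Theta(n^2)).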
The most interesting case is surely when you have different options for what to use as the basic operation. Say we have the following code (someFunction being an arbitrary function):
A = Array[n*n]
for(i=0, i<n*n, i++)
    A[i] = 0
for(i=0, i*i<n, i++)
    someFunction(i)
Now it depends on what you choose as the basic operation, and which one you choose is highly dependent on what you want to achieve. Let's say someFunction is a very cheap function (regarding time complexity) and accessing the array A is more expensive. Then you would probably go with O(n^2), since accessing the array is done n^2 times. If on the other hand someFunction is expensive compared to filling the array, you would probably choose it as the basic operation and go with O(sqrt(n)).
Please be aware that one could also argue that since the first part (array-filling) is executed far more often than the other part (someFunction), it does not matter which of the two operations takes longer per call, because at some point the array-filling will need more time overall. Thus you could argue that the complexity has to be quadratic, O(n^2). This may be right from a theoretical point of view, but in real life you will usually have one operation you want to count and not care about the other operations.
Actually, you could consider either ignoring the array-filling or taking it into account in all of the examples I provided above, depending on whether print or accessing the array is more expensive. But I hope that in the first two examples it is obvious which one adds more runtime and thus should be considered the basic operation.

What is the time complexity of my code?

#include <stdio.h>
int main()
{
    int T, i, sum, n;   // Here T is the number of test cases
    scanf("%d", &T);
    while (T--)
    {
        scanf("%d", &n);
        sum = 0;
        for (i = 1; i <= n; i++)
            sum = sum + i;
        printf("%d\n", sum);
    }
    return 0;
}
If I give input with T=50 test cases and n=100, which is correct: time complexity O(n)=100, or time complexity O(n)=100*50?
The concept of Big-O analysis is not tied to specific values. Time complexity, which is commonly expressed in Big-O, excludes coefficients and lower-order terms. Here in your code, the time complexity would be O(T*N). It will never be O(50*100) or O(100); there is no such notation. Any algorithm which runs in constant time (50*100 in your code) would be expressed as O(1).
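Concretely, the total number of inner-loop iterations across all test cases is the sum of the individual n values; if N denotes the largest n that is read, this is bounded by

\sum_{t=1}^{T} n_t \;\le\; T \cdot N

which is where the O(T*N) above comes from.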
In one line: time complexity will never be a value; it is expressed as a function of the input size.
Also, to get a clearer understanding, I'd suggest you go through this tutorial: Time Complexity Analysis by MyCodeSchool.

Analyzing time complexity (Poly log vs polynomial)

Say an algorithm runs in time
5n^3 + 8n^2 (lg n)^4
Which is the leading (first-order) term? Would it be the one with the polylog or the plain polynomial?
For any two constants a>0, b>0, log(n)^a is in o(n^b) (note the small-o notation here).
One way to prove this claim is to examine what happens when we apply a monotonically increasing function to both sides: the log function.
log(log(n)^a) = a * log(log(n))
log(n^b) = b * log(n)
Since we know we can ignore constants when it comes to asymptotic notation, the answer to "which is bigger: log(n)^a or n^b?" is the same as the answer to "which is bigger: log(log(n)) or log(n)?", and that question is much easier to answer: log(log(n)) grows strictly slower than log(n).
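Applied to the expression from the question (with a = 4, b = 1): since (\lg n)^4 = o(n), the polylog term is dominated by the cubic term, so 5n^3 is the leading term:

8n^2 (\lg n)^4 = o(8n^2 \cdot n) = o(n^3) \quad\Longrightarrow\quad 5n^3 + 8n^2 (\lg n)^4 = \Theta(n^3)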

Calculating the expected running time of function

I have a question about calculating the expected running time of a given function. I understand just fine how to analyse code fragments with loops in them (for/while/if, etc.), but functions without them seem a bit odd to me. For example, let's say that we have the following code fragment:
public void Add(T item)
{
    var newArr = new T[this.arr.Length + 1];
    Array.Copy(this.arr, newArr, this.arr.Length);
    newArr[newArr.Length - 1] = item;
    this.arr = newArr;
}
If my logic works correctly, the function Add has a complexity of O(1), because in the best/worst/average case it will just read every line of code once, right?
You always have to consider the time complexity of the function calls, too. I don't know how Array.Copy is implemented, but I'm going to guess it's O(N), making the whole Add function O(N) as well. Your intuition is right, though - the rest of it is in fact O(1).
If you have multiple sub-operations, e.g. O(n) + O(log(n)), the costliest step determines the cost of the whole operation; by default, big O refers to the worst case. Here, since you copy the array, it is an O(n) operation.
Complexity is calculated following these 2 rules:
- Calling a method (complexity + 1)
- Encountering one of the following keywords: if, while, repeat, for, &&, ||, catch, case, etc. (complexity + 1)
In your case, given you are copying an array and not a single value, the algorithm will complete N copy operations, giving you an O(N) operation.