Time complexity of log

I was studying runtime analysis.
One of the questions I encountered is this:
Which function would grow the slowest?
1. n^0.5
2. log(n^0.5)
3. (log(n))^0.5
4. log(n) * log(n)
I thought the answer was 2 after I drew the graphs.
However, the answer is 3. I am not sure why this is the answer.
Could someone explain why this is the case?
Thank you

2. log(n^0.5) = 0.5 * log(n), which is Θ(log(n)) once the constant factor 0.5 is dropped.
3. (log(n))^0.5 = sqrt(log(n)), which grows more slowly than log(n), and therefore more slowly than option 2.

That is the explanation in fairly layman terms, with a little mathematics.
Tell me if you are still confused.
PS: Both options 2 and 3 are logarithmic, so in practice the difference between them rarely matters for running time.
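As a quick sanity check (my own addition, not part of the original answer), the following Python snippet prints all four candidates for a few values of n; sqrt(log(n)) is clearly the slowest grower:

    # Print the four candidate functions for growing n (base-2 logs).
    import math

    for n in (10**2, 10**4, 10**8, 10**16):
        lg = math.log2(n)
        print(f"n = {n:.0e}: "
              f"n^0.5 = {n**0.5:.3g}, "
              f"log(n^0.5) = {0.5 * lg:.3g}, "
              f"(log n)^0.5 = {lg**0.5:.3g}, "
              f"log(n)*log(n) = {lg * lg:.3g}")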

Related

Is lg*(n) time complexity better than lg(n)?

I am trying to understand the time complexity of lg*(n) [the iterated logarithm, log*(n) base 2] in comparison to lg(n), and I wonder which of them is faster. Can someone explain it, please? Thanks in advance.
According to Wikipedia, the iterated logarithm (log*) is one of the slowest-growing time complexities. In fact, of all the commonly used complexities, it is the second slowest, beaten only by the inverse Ackermann function. This means it grows significantly slower, and as a result completes much faster, than the log function.
Source: https://en.wikipedia.org/wiki/Iterated_logarithm#Analysis_of_algorithms
I've never seen lg*(n) notation before, but I assume you're referring to log base 2 vs log base 10. It turns out that log2(N) == log10(N) * 3.32192809489..., which is a constant factor difference, and we drop constant factors when analyzing algorithmic complexity. As a result, all logarithms are considered equal, and we do not need to bother specifying the base in algorithmic complexity.
When studying actual runtimes, a log10(N) algorithm is faster than a log2(N) one, but developers very rarely analyze runtimes in this manner; they usually use a profiler.
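For intuition, here is a short Python sketch (my own illustration, not from the answers above) of how the iterated logarithm is computed and how slowly it grows:

    import math

    def log_star(n):
        # Iterated logarithm base 2: how many times log2 must be applied until n <= 1.
        count = 0
        while n > 1:
            n = math.log2(n)
            count += 1
        return count

    print(log_star(2))         # 1
    print(log_star(16))        # 3
    print(log_star(65536))     # 4
    print(log_star(2**65536))  # 5 -- an astronomically large input, yet log* is only 5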

What is the difference between SAT and linear programming

I have an optimization problem that is subject to linear constraints.
How do I know which method is better for modelling and solving the problem?
I am generally asking about solving a problem as a satisfiability problem (SAT or SMT) vs. solving it as a linear programming problem (ILP or MILP).
I don't have much knowledge of either, so please simplify your answer if you have one.
Generally speaking, the difference is that SAT is only trying for feasible solutions, while ILP is trying to optimize something subject to constraints. I believe some ILP solvers actually use SAT solvers to get an initial feasible solution. The sensor array problem you describe in a comment is formulated as an ILP: "minimize this subject to that." A SAT version of that would instead pick a maximum acceptable number of sensors and use that as a constraint. Now, this is a satisfiability problem, but not one that's easily expressed in conjunctive normal form. I'd recommend using a solver with a theory of integers. My favorite is Z3.
However, before you give up on optimizing, you should try GMPL / GLPK. You might be surprised by how tractable your problem is. If you're not so lucky, turn it into a satisfiability problem and bring out Z3.
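To make the contrast concrete, here is a hedged Python sketch using Z3's Python API (the z3-solver package); the tiny sensor/zone data is invented for illustration and is not the asker's actual model:

    from z3 import Bool, If, Sum, Or, Optimize, Solver, sat

    use = [Bool(f"use_{i}") for i in range(4)]      # whether each candidate sensor is placed
    covers = {0: [0, 1], 1: [1, 2], 2: [2, 3]}      # toy data: which sensors cover each zone
    num_used = Sum([If(u, 1, 0) for u in use])

    # ILP-style formulation: minimize the sensor count subject to full coverage.
    opt = Optimize()
    for zone, sensors in covers.items():
        opt.add(Or([use[i] for i in sensors]))      # every zone must be covered
    opt.minimize(num_used)
    if opt.check() == sat:
        print("optimal placement:", opt.model())

    # SAT/SMT-style formulation: fix a budget and only ask whether it is feasible.
    sol = Solver()
    for zone, sensors in covers.items():
        sol.add(Or([use[i] for i in sensors]))
    sol.add(num_used <= 2)                          # at most 2 sensors as a hard constraint
    print("feasible with <= 2 sensors:", sol.check() == sat)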

Why does GLPSOL (GLPK) take a long time to solve a large MIP?

I have a large MIP problem, and I use GLPSOL in GLPK to solve it. However, solving the LP relaxation takes many iterations, and in each iteration the obj and infeas values stay the same. I think it has found the optimal solution, but it won't stop and has continued running for many hours. Will this happen with every large-scale MIP/LP problem? How can I deal with such cases? Can anyone give me any suggestions about this? Thanks!
The problem of solving MIPs is NP-complete in general, which means there are instances that can't be solved efficiently. But our problems often have enough structure that heuristics can help solve these models; this has allowed huge gains in solving capabilities over the last decades (overview).
To understand the basic approach, and to pin down what exactly the problem is in your case (no progress in the upper bound, no progress in the lower bound, ...), read Practical Guidelines for Solving Difficult Mixed Integer Linear Programs.
Keep in mind that, in general, there are huge gaps between commercial solvers like Gurobi/CPLEX and non-commercial ones, especially in MIP solving. There is a huge set of benchmarks here.
There are also a lot of parameters to tune. Gurobi, for example, has different parameter templates: one targets finding feasible solutions quickly; another targets proving the bounds.
My personal opinion: compared to CBC (open source) and SCIP (open source but not free for commercial use), GLPK is quite bad.
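GLPK doesn't ship Gurobi-style parameter templates, but glpsol does accept a time limit and a relative MIP-gap tolerance, which is often the practical answer to "it keeps running for hours". A minimal sketch (the file names are placeholders for your own model and output):

    import subprocess

    subprocess.run(
        [
            "glpsol",
            "--model", "model.mod",     # placeholder: your GMPL model file
            "--tmlim", "600",           # stop after 600 seconds
            "--mipgap", "0.01",         # accept any solution within 1% of the best bound
            "--output", "solution.txt"  # placeholder: where to write the solution
        ],
        check=True,
    )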

Scaling of objective function in optimization problems

In a lot of optimization algorithms I have seen that they use a scaling factor to scale the objective function. I don't understand the reason behind this. Why do we need to scale the objective function in optimization algorithms? Does it work without scaling? Logically it should work, but I am a little bit confused now.
I hope you can answer me, and thank you indeed.
Scaling is often used in optimization problems to either:
A) Make better answers look MUCH better (log scaling)
or
B) Make the difference between an OK answer and a slightly better answer come closer together (squashing functions).
Most algorithms should work on a non-scaled objective function, but will likely work better on the scaled version.
To see the effect I recommend running your meta-heuristic on both versions.
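As a toy illustration (my own numbers, not from the answer above), the two effects look like this in Python:

    import math

    # A) Log scaling of a minimisation objective: every 10x reduction in error
    #    becomes one whole unit of improvement, so a much better answer
    #    actually looks much better.
    errors = [1e-1, 1e-3, 1e-5]
    print([round(math.log10(e), 2) for e in errors])        # [-1.0, -3.0, -5.0]

    # B) A squashing function (tanh) on a maximisation objective: an OK score
    #    and a somewhat larger score end up close together near the top.
    scores = [10.0, 20.0, 40.0]
    print([round(math.tanh(s / 10.0), 3) for s in scores])  # [0.762, 0.964, 0.999]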

Need Help Studying Running Times

At the moment, I'm studying for a final exam for a Computer Science course. One of the questions that will be asked is most likely about how to combine running times, so I'll give an example.
I was wondering: if I created a program that preprocessed inputs using insertion sort and then searched for a value "X" using binary search, how would I combine the running times to find the best, worst, and average case time complexities of the overall program?
For example...
Insertion Sort
Worst Case O(n^2)
Best Case O(n)
Average Case O(n^2)
Binary Search
Worst Case O(logn)
Best Case O(1)
Average Case O(logn)
Would the Worst case be O(n^2 + logn), or would it be O(n^2), or neither?
Would the Best Case be O(n)?
Would the Average Case be O(nlogn), O(n+logn), O(logn), O(n^2+logn), or none of these?
I tend to over-think solutions, so if I can get any guidance on combining running times, it would be much appreciated.
Thank you very much.
You usually don't "combine" (as in add) the running times to determine the overall efficiency class; rather, you take the one that takes the longest for each of the worst, average, and best cases.
So if you're going to perform insertion sort and then do a binary search after to find an element X in an array, the worst case is O(n^2) and the best case is O(n) -- all from insertion sort since it takes the longest.
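For concreteness, here is a small Python sketch (my own, not from either answer) of the sort-then-search pipeline, with the combined bounds noted in the comments:

    def insertion_sort(a):
        # In-place insertion sort: O(n^2) worst/average case, O(n) best case.
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key

    def binary_search(a, x):
        # Binary search on a sorted list: O(log n) worst/average case, O(1) best case.
        lo, hi = 0, len(a) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if a[mid] == x:
                return mid
            if a[mid] < x:
                lo = mid + 1
            else:
                hi = mid - 1
        return None

    data = [5, 2, 9, 1, 7]
    insertion_sort(data)            # dominates: O(n^2) worst, O(n) best
    print(binary_search(data, 7))   # the O(log n) term is absorbed: overall O(n^2) worst / O(n) best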
Based on my limited study (we haven't reached amortization yet, so that might be where Jim has the rest correct), basically you just go with whichever part of the overall algorithm is slowest.
This seems to be a good book on the subject of algorithms (I don't have much to compare it to):
http://www.amazon.com/Introduction-Algorithms-Third-Thomas-Cormen/dp/0262033844/ref=sr_1_1?ie=UTF8&qid=1303528736&sr=8-1
Also, MIT has a full course on algorithms on their site; here is the link for that too:
http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-046j-introduction-to-algorithms-sma-5503-fall-2005/
I've actually found it helpful. It might not answer your question specifically, but I think seeing some of the topics explained a few times will help you feel more confident.