Math-grounded technique to translate recursion into while() loop? - oop

What I am looking for is some mathematical theory explaining how one can translate an arbitrary finite recursion into some kind of while(...) loop, as is traditional in OOP. Or, failing that, how one can prove that a given recursion cannot be translated into a while(...) statement.
Hopefully, someone can help me out.
Thanks in advance.

You can find the relevant context under Dynamic Programming or Tail Recursion. In Dynamic Programming, you can prove this by induction, since recursive algorithms define the function's value at n in terms of the function's values at smaller arguments.
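As a concrete illustration of the tail-recursion case (a hypothetical sketch, not taken from the question): a tail-recursive factorial and its mechanical translation into a while loop, where the recursive call becomes a reassignment of the parameters.

// Hypothetical sketch: tail-recursive factorial and its while-loop translation.
public class RecursionToLoop {
    // Tail-recursive form: the recursive call is the last action, carrying an accumulator.
    static long factRec(long n, long acc) {
        if (n <= 1) return acc;
        return factRec(n - 1, acc * n);
    }

    // Mechanical translation: the recursive call becomes reassignment of n and acc.
    static long factLoop(long n) {
        long acc = 1;
        while (n > 1) {
            acc = acc * n;
            n = n - 1;
        }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(factRec(10, 1));   // 3628800
        System.out.println(factLoop(10));     // 3628800
    }
}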


I'm in the middle of a computer science class, learning time complexity, and stuck on a problem (Java)

So, first, here's the code:
code from our presentation
We are learning time complexity, and according to our teacher, the run time for this one is n^2 + 4n.
But to me and my friends this seems wrong. It's a for loop, inside a for loop, inside a for loop, so doesn't this need to be n^3? What I need is an explanation of the runtime of this code; we literally sat on it for an hour with the teacher (the teacher is not that smart, because she didn't really learn this subject, she just had a couple of classes about it) and all she could say was that you don't count the first for loop. Please, any explanation?
There are several points to make here:
Having three nested loops doesn't imply O(n^3) complexity. It depends on the loops and what happens within the loops. It could be anything between O(1) and never halting.
Your teacher should specify more precisely what should be measured: the number of steps for the entire piece of code? Or just the two inner loops, ignoring the outer loop?
You need to specify what the variable parameters are. In this case it's pretty clear what it is (probably n, and not i or j), but it helps to be precise.
The code doesn't run because there is a syntax error: b[i] ) reader.nextInt();. So strictly speaking, there is no time complexity to reason about here.
OK, now assuming that the line should actually read b[i] = reader.nextInt();, the variable parameter is n, and the whole code should be measured: your teacher is right that the entire code runs in O(n^2), because the outer and the second loops share the same index variable i. So the outer loop iterates exactly once.
It doesn't really make sense to be much more precise than O(n^2) unless you define precisely what counts as 1. So stating something that looks precise like n^2 +4n is pretty much meaningless without much more context.
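To illustrate the shared-index point, here is a hypothetical sketch of the pattern being described (NOT the actual code from the presentation, which is not reproduced here):

import java.util.Scanner;

// Hypothetical sketch only. The middle loop reuses the outer loop's index variable i,
// so when it finishes, i == n and the outer loop exits after a single pass:
// roughly n * n inner iterations, i.e. O(n^2), not O(n^3).
public class SharedIndex {
    public static void main(String[] args) {
        Scanner reader = new Scanner(System.in);
        int n = reader.nextInt();
        int[] b = new int[n];
        for (int i = 0; i < n; i++) {           // outer loop
            for (i = 0; i < n; i++) {           // reuses i, drives it up to n
                b[i] = reader.nextInt();
                for (int j = 0; j < n; j++) {   // inner loop: n iterations for each i
                    // constant work here
                }
            }
        }
    }
}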

Example for an algorithm with greater than n splits

I was doing the derivation for masters theorem using the tree method and I noticed something.
So we have:
$T(n)=a*T(n/b) + n^c$
From this we notice that the last level of the tree will have $a^{\log_b n}$ splits, which equals $n^{\log_b a}$.
Now, if $a = b$, I get $n$ splits in the last level, which is what I've seen used in quicksort and merge sort, and if $a > b$ I get more than $n$ splits.
Is there a practical example for greater than n splits?
Where we actually repeat operations for elements?
*Also, math overflow formatting doesn't seem to work here. I would appreciate it if anyone could help.
The classical matrix multiplication by divide and conquer would be such an example. The recurrence relation is $T(n) = 8T(n/2) + \Theta(n^2)$. Another would be Strassen's algorithm.
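To see why that gives more than $n$ splits, a quick check of the arithmetic using the formula above: here $a = 8$ and $b = 2$, so the last level of the tree has $a^{\log_b n} = 8^{\log_2 n} = n^{\log_2 8} = n^3$ splits, far more than $n$. Each input block is reused in several of the eight subproblems at every level, which is where the repeated work on elements comes from.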
Math notation is (sadly) limited to only a few stackexchange sites.

Are there any steps or rules to draw a DFA?

In my first lecture of "Theory of Automata", after introducing some concepts (Alphabet, Language, transition function, etc.) and a couple of simple automata for an electric circuit with one and two switches, this question came up.
I understand what the Alphabet and the Language of a DFA are, but are there any rules or steps to be followed to reach a correct automaton for a given Language? Or do we just have to imagine and think it through in our minds until we get to a solution that satisfies the given Language?
Note:- Please keep your language as simple as you can, since this is my first lecture and I am not yet aware of concepts like regular expressions or any other thing in the subject for that matter.
If you are given a description of the language in words, think about all the possible strings that can belong to this language. Then try to come up with a DFA that handles most of those strings. Then look into the boundary conditions, generate some strings, and try to accommodate them in the DFA. This might be a good starting point for you:
http://www.cse.chalmers.se/~coquand/AUTOMATA/o2.pdf
I am a novice, but in my experience the boundary conditions of the language to be accepted should be drawn first, and then the complexities can be added while looking, step by step, at the conditions that will be rejected. As a start, if the figure in the question were for a DFA which accepts L = {01*0}, then the bare minimum string would be "010", and eventually the DFA can be constructed keeping in mind the trap states and some analysis. Hope this helps!
Steps to construct a DFA:
The following steps are used to construct a DFA for problems of this type:
Step-01: Determine the minimum number of states required in the DFA.
Draw those states.
Step-02: Decide the strings for which DFA will be constructed.
Step-03: Construct a DFA for the strings decided in Step-02.
Step-04: Send all the remaining (unspecified) input combinations to the dead state.
Do not send the remaining combinations back to the starting state. (A small worked example follows below.)
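As a worked example of these steps (a hypothetical sketch using the language L = {01*0} mentioned above, with invented state names): the minimum states are a start state, a "saw the leading 0" state, an accepting state, and a dead state that absorbs every remaining combination.

// Hypothetical sketch: a DFA for L = { 0 1^k 0 : k >= 0 }, i.e. the 01*0 language above.
public class Dfa010 {
    static final int START = 0, SAW_ZERO = 1, ACCEPT = 2, DEAD = 3;

    static boolean accepts(String input) {
        int state = START;
        for (char c : input.toCharArray()) {
            switch (state) {
                case START:    state = (c == '0') ? SAW_ZERO : DEAD; break;  // must begin with 0
                case SAW_ZERO: state = (c == '1') ? SAW_ZERO
                                     : (c == '0') ? ACCEPT : DEAD;   break;  // loop on 1s, a 0 accepts
                case ACCEPT:   state = DEAD;                         break;  // nothing may follow the final 0
                default:       state = DEAD;                                 // dead state absorbs everything
            }
        }
        return state == ACCEPT;
    }

    public static void main(String[] args) {
        System.out.println(accepts("010"));   // true: the bare minimum string
        System.out.println(accepts("0110"));  // true
        System.out.println(accepts("0101"));  // false: trapped in the dead state
    }
}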

translation from Datalog to SQL

I am still thinking about how to translate the recursion of a Datalog program into SQL, such as:
P(x,y) <- Q(x,y).
Q(x,y) <- P(x,z), A(y).
where A/1 is an EDB predicate. Thus, there is a co-dependency between P and Q. For longer queries, how can this problem be solved?
Moreover, is there any system that completely implements the translation? If there is, may I know which system, or which paper, I may refer to?
If you adopt an approach of "tabling" previous conclusions and forward-chaining on these to infer new conclusions, no recursive "depth" is required.
Bear in mind that Datalog requires some restrictions on rules and variables that ensure finite termination and hence finitely many conclusions. Variables must have a finite range of possible values, for example.
Let's assume your example refers to constants rather than to variables:
P(x,y) <- Q(x,y).
Q(x,y) <- P(x,z), A(y).
One wrinkle is that you want A/1 to be implemented as an extended stored procedure or external code. For that I would propose tabling all the results of calling A on all possible arguments (finitely many). These are after all among the conclusions (provable statements) of your system.
Once that is done, the forward-chaining inference proceeds iteratively rather than recursively. At each step, consider each rule, applying it with premises (right-hand sides) drawn from previously obtained (tabled) conclusions whenever that yields a new conclusion. If no rule produces a new conclusion in the current step, halt; the proof procedure is then complete.
In your example the proofs stop after all the A facts are adduced, because there are no conclusions sufficient to apply either rule to get new conclusions.
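As a rough sketch of that iterative procedure (with invented seed facts and names, since the question gives none): table the known facts in sets and loop until no rule adds anything new.

import java.util.*;

// Hypothetical sketch of forward chaining over the two rules above,
// P(x,y) <- Q(x,y) and Q(x,y) <- P(x,z), A(y), with invented seed facts.
public class ForwardChain {
    record Pair(int x, int y) {}

    public static void main(String[] args) {
        Set<Integer> A = Set.of(3, 4);                        // EDB facts A(3), A(4)
        Set<Pair> P = new HashSet<>();
        Set<Pair> Q = new HashSet<>(Set.of(new Pair(1, 2)));  // invented seed fact Q(1,2)

        boolean changed = true;
        while (changed) {                                      // iterate to a fixpoint
            changed = false;
            for (Pair q : new ArrayList<>(Q))                  // rule: P(x,y) <- Q(x,y)
                changed |= P.add(q);
            for (Pair p : new ArrayList<>(P))                  // rule: Q(x,y) <- P(x,z), A(y)
                for (int y : A)
                    changed |= Q.add(new Pair(p.x(), y));
        }
        System.out.println("P = " + P);                        // halted: no new conclusions
        System.out.println("Q = " + Q);
    }
}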
A possible approach is to use recursive CTEs in SQL, which provide the power of transitive closure. Relational algebra + transitive closure = Datalog.
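For example, a transitive-closure query as a recursive CTE (a sketch with assumed table and column names, not taken from the question):

-- Sketch: transitive closure of an assumed edge(x, y) relation via a recursive CTE.
WITH RECURSIVE reachable(x, y) AS (
    SELECT x, y FROM edge                        -- base case: direct edges
  UNION
    SELECT r.x, e.y
    FROM reachable r JOIN edge e ON r.y = e.x    -- recursive case: extend a known path by one edge
)
SELECT * FROM reachable;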
Logica does something like this. It translates a datalog-like language into SQL for Google BigQuery, PostgreSQL and SQLite.

Optimization of Function Calls in Haskell

Not sure what exactly to google for this question, so I'll post it directly to SO:
Variables in Haskell are immutable
Pure functions should result in same values for same arguments
From these two points it's possible to deduce that if you call somePureFunc somevar1 somevar2 in your code twice, it only makes sense to compute the value during the first call. The resulting value can be stored in some sort of a giant hash table (or something like that) and looked up during subsequent calls to the function. I have two questions:
Does GHC actually do this kind of optimization?
If it does, what is the behaviour in the case when it's actually cheaper to repeat the computation than to look up the results?
Thanks.
GHC doesn't do automatic memoization. See the GHC FAQ on Common Subexpression Elimination (not exactly the same thing, but my guess is that the reasoning is the same) and the answer to this question.
If you want to do memoization yourself, then have a look at Data.MemoCombinators.
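For example, a minimal sketch with Data.MemoCombinators (assuming the integral combinator from the data-memocombinators package; fib' is just an illustrative helper name):

-- Sketch: memoizing Fibonacci with Data.MemoCombinators.
import qualified Data.MemoCombinators as Memo

fib :: Integer -> Integer
fib = Memo.integral fib'
  where
    fib' 0 = 0
    fib' 1 = 1
    fib' n = fib (n - 1) + fib (n - 2)  -- recursive calls go through the memoized fib

main :: IO ()
main = print (fib 80)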
Another way to get memoization is to take advantage of laziness. For example, you can define a list in terms of itself. The definition below is an infinite list of all the Fibonacci numbers (taken from the Haskell Wiki):
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
Because the list is realized lazily, it's similar to having precomputed (memoized) the previous values: e.g. fibs !! 10 will force the first eleven elements (indices 0 through 10), so that fibs !! 11 is then much faster.
Saving every function call result (cf. hash consing) is valid but can be a giant space leak and in general also slows your program down a lot. It often costs more to check if you have something in the table than to actually compute it.