Consider proving the correctness of the following while loop, i.e., I want to show that, given the loop condition holds to start with, the loop will eventually terminate and the final assertion will be true.
int x = 0;
while (x >= 0 && x < 10) {
    x = x + 1;
}
assert x==10;
What would be the correct translation into SMT-LIB for checking the correctness, without using loop unwinding?
Hoare logic and loop-invariants
A typical proof of such a statement is done via classic Hoare logic, which I assume you're already familiar with. If not, see: https://en.wikipedia.org/wiki/Hoare_logic
The idea is to come up with an invariant for your loop. This invariant must be true before the loop starts, it must be maintained by the loop body, and it must imply the final result when the loop condition is no longer true. Additionally, you also need to prove that the loop will eventually terminate, by means of a measure function. (More on that later.)
You can convince yourself why this would be sufficient: An invariant is something that's "always" true. And if it implies your final result, then your proof is complete. The proof steps I outlined above ensure that the invariant is indeed an invariant, i.e., its truth is always maintained by your program.
Coming up with the invariant
What would be a good invariant for your loop here? Let's give this invariant the name I. A moment's thought reveals that a good choice for I is:
I = x >= 0 && x <= 10
Note how similar (but not exactly the same!) this is to your loop-condition, and this is not by accident. Loop-invariants are not unique, and coming up with a good one can be really difficult. It's an active area of research (since the '60s) to synthesize loop-invariants automatically. See the plethora of research out there. https://en.wikipedia.org/wiki/Loop_invariant is a good starting point.
Proof using SMT
Now that we "magically" came up with the loop invariant, let's use SMT to prove that it is indeed correct. Instead of writing SMTLib (which is verbose and mostly intended for machines only), I'll use the z3-python interface as a close-enough substitute. To finish the proof, I need to show 4 things:
The invariant holds before the loop starts
The invariant is maintained by the loop body
The invariant and the negation of the loop-condition implies the desired post-condition
The loop terminates
Let's look at each in turn.
(0) Preliminaries
Since we'll use z3's python interface, we'll have to do a little bit of leg-work to get us started. Here's the skeleton we need:
from z3 import *

def C(p):
    return And(p >= 0, p < 10)

def I(p):
    return And(p >= 0, p <= 10)

x = Int('x')
Note that we parameterized the loop-condition (C) and the invariant (I) with a parameter so it's easy to call them with different arguments. This is a common trick in programming, abstracting away the control from the data. This way of coding will simplify our life later on.
(1) The invariant holds before the loop starts
This one is easy. Right before the loop, we know that x = 0. So we need to ask the SMT solver if x == 0 implies our invariant:
>>> prove (Implies(x == 0, I(x)))
proved
Voila! If you want to see the SMTLib for the proof obligation, you can ask z3 to print it for you:
>>> print(Implies(x == 0, I(x)).sexpr())
(=> (= x 0) (and (>= x 0) (<= x 10)))
(2) The invariant is maintained by the loop-body
The loop body is only run when the loop condition (C) is true. The body increments x by one. So, what we need to show is that if our invariant (I) holds, the loop condition (C) holds, and we increment x by one, then I still holds. Let's ask z3 exactly that:
>>> prove(Implies(And(I(x), C(x)), I(x+1)))
proved
Almost too easy!
(3) The invariant implies the result when the loop condition is false
This time, all we need to ask the solver is to prove the required conclusion when I holds, but C doesn't:
>>> prove(Implies(And(I(x), Not(C(x))), x == 10))
proved
And we have now completed what's known as the partial-correctness claim. That is, if the loop terminates, then x will indeed be 10 at the end. This is what you were trying to prove to start with.
(4) The loop terminates
What we've done so far is known as partial-correctness. It says if the loop terminates, then your post-condition (i.e., x == 10) holds. But it does not make any guarantees that the loop will always terminate.
To get a full proof, we have to prove termination. This is done by coming up with a measure function: a measure function is a function that assigns (typically) a numeric value to the set of program variables, and is bounded from below. Then we show that it goes down in each iteration and has an initial value that's above its lower bound. Then we know that the loop cannot continue forever: the measure has to go down in each iteration, but it cannot do so indefinitely, since it's bounded from below.
Termination proofs are usually harder, and coming up with a good measure can be tricky. But in this case, it's easy to come up with it:
def M(x):
    return 10 - x
The claim is that the measure is always non-negative in this case. Let's prove that before the loop starts, i.e., when x == 0:
>>> prove (Implies(x == 0, M(x) >= 0))
proved
It goes down in each iteration:
>>> prove (Implies(C(x), M(x) > M(x+1)))
proved
And finally, it's always non-negative if the loop executes:
>>> prove (Implies(C(x), M(x) >= 0))
proved
Now we know that the loop will terminate, so our proof is complete.
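If you'd like to see everything in one place, here is a self-contained sketch that bundles all of the proof obligations above into a single z3-python script (same C, I, and M as before; each call should print proved):

from z3 import Int, And, Not, Implies, prove

x = Int('x')

def C(p):                      # loop condition
    return And(p >= 0, p < 10)

def I(p):                      # loop invariant
    return And(p >= 0, p <= 10)

def M(p):                      # termination measure
    return 10 - p

# (1) The invariant holds before the loop starts
prove(Implies(x == 0, I(x)))

# (2) The invariant is maintained by the loop body
prove(Implies(And(I(x), C(x)), I(x + 1)))

# (3) The invariant and the negated loop condition imply the post-condition
prove(Implies(And(I(x), Not(C(x))), x == 10))

# (4) Termination: the measure is non-negative while looping and strictly decreases
prove(Implies(C(x), M(x) >= 0))
prove(Implies(C(x), M(x) > M(x + 1)))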
But wait!
You might wonder if I pulled a rabbit out of a hat here. How do we know that the above steps are sufficient? Or that I didn't make a mistake in my coding as I waved my hand over your program and magically translated it to z3-python?
For the first question: There's established research that for traditional imperative program semantics, Hoare-logic style reasoning is sound. Here's a good slide deck to start with: https://www.cl.cam.ac.uk/teaching/1617/HLog+ModC/slides/lecture2.pdf
For the second question: This is where the rubber hits the road. You have to put my argument to peer review, possibly using an established theorem prover to code the whole thing up and trust that the mechanization is correct. Why3 (https://why3.lri.fr) is a good platform to get started for this style of reasoning.
Picking the invariant
The trickiest part of this proof is coming up with the right invariant. A "good" invariant is one that's not only true, but one that allows you to prove the result you want. For instance, consider the following invariant:
def I(p):
    return True
This invariant is manifestly true for all programs as well! But if you attempt to run the proofs we had with this version of I, you'll see that they won't go through and you'll get a counter-example. (It's quite instructive to do so; see the sketch after the two cases below.) In general, you can:
Pick an "invariant" that's not really enforced by your program, i.e., it doesn't stay true at all times as described above. Hopefully the counter-example you get from the solver will be helpful to identify what goes wrong.
Or, and this is way more likely, the invariant you picked is indeed an invariant of the program, but it is not strong enough to prove the result you want. In this case the counter-example will be less useful, and for complicated programs it can be hard to track down the reason why.
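Here's a small sketch of the second case, using the trivially-true invariant from above: rerunning obligation (3) with it, z3 refuses and hands back a counter-example (the exact model it picks may vary):

from z3 import Int, And, Not, Implies, BoolVal, prove

x = Int('x')

def C(p):
    return And(p >= 0, p < 10)

def I(p):
    return BoolVal(True)   # the trivially-true "invariant"

# This obligation no longer holds: any x outside [0, 10) other than 10
# (e.g. x = -1 or x = 11) is a counter-example, and z3 will print one.
prove(Implies(And(I(x), Not(C(x))), x == 10))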
An invariant that allows you to prove the final result is called an "inductive invariant." The process of "improving" the invariant to get to a proof is known as "strengthening the invariant." There's a plethora of research on all of these topics, especially in the realm of model-checking. A good paper to read on these topics is Bradley's "Understanding IC3": https://theory.stanford.edu/~arbrad/papers/Understanding_IC3.pdf
Summary
The strategy outlined here is a "meta"-level proof: it's equivalent to a paper proof that identifies the proof goals and ships them to an SMT solver (z3 in this case) to finish the job. This is common practice in modern-day proofs, i.e., coming up with sub-goals and using an automated solver to discharge them. Theorem provers like ACL2, Isabelle, Coq, etc. mechanize the "coming up with subgoals" part to a large extent, making sure the whole proof is sound with respect to a trusted (but typically very small) set of core axioms. (This is the so-called LCF methodology; see https://www.cl.cam.ac.uk/~jrh13/slides/manchester-12sep01/slides.pdf for a nice slide-deck on it.)
Hopefully this answer is detailed enough to get you started in program verification with SMT solvers. Perhaps it's more than what you asked for; but the rule of thumb is that there is no free lunch in verification. It is a lot of work! However, you get pretty close to push-button reasoning these days (at least for certain kinds of programs) with the advances in automated theorem provers, SMT solvers, and other frameworks that many people have built over the years. Best of luck, but be warned that program verification remains the holy grail of computer science after almost seven decades of work on it. Things always get better/easier, but there's much more work to be done in the field.
I was trying the time-complexity MCQ questions on CodeChef under practice for Data Structures and Algorithms. One of the questions had a line a*< a[i]. What does that line mean?
I know that if there wasn't an and condition the complexity would have been O(n^2). But the a*< is completely alien to me. I searched for it on the internet, but all I got was the A* algorithm and asterisks! I tried running the program in Python with a print statement, but it says that * is invalid. Does it mean something like a pointer to the 1st element in the array or something?
Find the time complexity of the following function
n = len(a)
j = 0
for i = 0 to n-1:
    while (j < n and a* < a[j]):
        j += 1
The answer is given as O(n), but there are nested loops, so it should be O(n^2). Help required! Thanks
It doesn't actually matter what a* means. The question is to determine the time complexity of the algorithm. Notice that although there are two nested loops, the inner while loop isn't a full independent loop. Its index is j, which starts at 0 and is only ever incremented, with an upper bound of n. So, across all iterations of the outer loop, the inner loop can only run a maximum of n times in total. This means that the overall complexity is only O(n).
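If you want to convince yourself empirically, here is a small instrumented Python sketch of the loop (it doesn't matter what a* stands for; purely as an assumption, it is read as a[i] below) that counts the total number of inner-loop iterations:

import random

def count_inner_steps(a):
    # "a*" read as a[i] here -- purely an assumption for illustration;
    # the counting argument does not depend on it.
    n = len(a)
    j = 0
    total = 0
    for i in range(n):
        while j < n and a[i] < a[j]:
            j += 1
            total += 1
    return total

# j never resets, so the inner loop body runs at most n times in total,
# no matter what the input looks like:
for _ in range(1000):
    a = [random.randint(0, 9) for _ in range(random.randint(0, 20))]
    assert count_inner_steps(a) <= len(a)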
I was recently asked an interview question about testing the validity of a Sudoku board. A basic answer involves for loops. Essentially:
for(int x = 0; x != 9; ++x)
    for(int y = 0; y != 9; ++y)
        // ...
Use nested for loops like this to check the rows. Do it again to check the columns. Do one more pass for the sub-squares, but that one is funkier because we're dividing the sudoku board into sub-boards, so we end up with more than two nested loops, maybe three or four.
I was later asked the complexity of this code. Frankly, as far as I'm concerned, all the cells of the board are visited exactly three times, so O(3n). To me, the fact that we have nested loops doesn't mean this code is automatically O(n^2) or even O(n^highest-nesting-level-of-loops). But I have a suspicion that that's the answer the interviewer expected...
Posed another way, what is the complexity of these two pieces of code:
for(int i = 0; i != n; ++i)
// ...
and:
for(int i = 0; i != sqrt(n); ++i)
    for(int j = 0; j != sqrt(n); ++j)
        // ...
Your general intuition is correct. Let's clarify a bit about Big-O notation:
Big-O gives you an upper bound for the worst-case (time) complexity for your algorithm, in relation to n - the size of your input. In essence, it is a measurement of how the amount of work changes in relation to the size of the input.
When you say something like
all the cells of the board are visited exactly three times so O(3n).
you are implying that n (the size of your input) is the number of cells in the board, and therefore visiting all cells three times would indeed be an O(3n) (which is O(n)) operation. If this is the case, you would be correct.
However usually when referring to Sudoku problems (or problems involving a grid in general), n is taken to be the number of cells in each row/column (an n x n board). In this case, the runtime complexity would be O(3n²) (which is indeed equal to O(n²)).
In the future, it is perfectly valid to ask your interviewer what n is.
As for the question in the title (Is a nested for loop automatically O(n^2)?) the short answer is no.
Consider this example:
for(int i = 0 ; i < n ; i++) {
    for(int j = 1 ; j < n ; j *= 2) {
        ... // some constant time operation
    }
}
The outer loop makes n iterations while the inner loop makes log2(n) iterations - therefore the time complexity will be O(n log n).
In your examples, in the first one you have a single for-loop making n iterations, therefore a complexity of (at least) O(n) (the operation is performed an order of n times).
In the second one you have two nested for loops, each making sqrt(n) iterations, therefore a total runtime complexity of (at least) O(n) as well. The second function isn't automatically O(n^2) simply because it contains a nested loop. The amount of operations being made is still of the same order (n), therefore these two examples have the same complexity - since we assume n is the same for both examples.
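As a sanity check, here is a rough Python sketch (hypothetical, just counting iterations) of both shapes: the n-times-log2(n) loop above and the sqrt(n)-by-sqrt(n) nesting from your question:

import math

def count_log_loop(n):
    # outer loop: n iterations; inner loop: about log2(n) iterations
    count = 0
    for i in range(n):
        j = 1
        while j < n:
            count += 1
            j *= 2
    return count

def count_sqrt_loops(n):
    # two nested loops of sqrt(n) iterations each: about n iterations in total
    count = 0
    r = math.isqrt(n)
    for i in range(r):
        for j in range(r):
            count += 1
    return count

print(count_log_loop(1024))    # 10240 = 1024 * log2(1024)
print(count_sqrt_loops(1024))  # 1024  = 32 * 32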
This is the most crucial point to drive home. To compare the performance of two algorithms, you must be using the same input to make the comparison. In your sudoku problem you could have defined n in a few different ways, and the way you did would directly affect the complexity calculation of the problem - even if the amount of work is all the same.
NOTE - this is unrelated to your question, but in the future avoid using != in loop conditions. In your second example, if sqrt(n) is not a whole number, the loop could run forever, depending on the language and how it is defined. It is therefore recommended to use < instead.
It depends on how you define the so-called N.
If the size of the board is N-by-N, then yes, the complexity is O(N^2).
But if you say the total number of cells is N (i.e., the board is sqrt(N)-by-sqrt(N)), then the complexity is O(N), or O(3N) if you mind the constant.
I started learning Dafny and I just learned invariants. I've got this code:
function pot(m:int, n:nat): int
{
  if n==0 then 1
  else if n==1 then m
  else if m==0 then 0
  else pot(m,n-1) * m
}

method Pot(m:int, n:nat) returns (x:int)
  ensures x == pot(m,n)
{
  x:=1;
  var i:=0;
  if n==0 {x:=1;}
  while i<=n
    invariant i<=n;
  {
    x:=m*x;
    i:=i+1;
  }
}
And the given error is the following: "This loop invariant might not be maintained by the loop." I think I might need another invariant, but I think my code is correct other than that (I guess). Any help is appreciated. Thanks in advance.
A loop invariant must hold whenever the loop branch condition is evaluated. But after the last iteration of your loop, i will actually be n+1, so the loop invariant is not true at that point.
Changing the loop invariant to i <= n + 1 or changing the loop branch condition to i < n will fix this particular problem.
After that, you will still have some work to do to finish proving the method correct. Feel free to ask further questions if you get stuck.
So I'm trying to teach myself Haskell. I am currently on the 11th chapter of Learn You a Haskell for Great Good and am doing the 99 Haskell Problems as well as the Project Euler Problems.
Things are going alright, but I find myself constantly doing something whenever I need to keep track of "variables". I just create another function that accepts those "variables" as parameters and recursively feed it different values depending on the situation. To illustrate with an example, here's my solution to Problem 7 of Project Euler, Find the 10001st prime:
answer :: Integer
answer = nthPrime 10001

nthPrime :: Integer -> Integer
nthPrime n
    | n < 1 = -1
    | otherwise = nthPrime' n 1 2 []

nthPrime' :: Integer -> Integer -> Integer -> [Integer] -> Integer
nthPrime' n currentIndex possiblePrime previousPrimes
    | isFactorOfAnyInThisList possiblePrime previousPrimes = nthPrime' n currentIndex theNextPossiblePrime previousPrimes
    | otherwise =
        if currentIndex == n
            then possiblePrime
            else nthPrime' n currentIndexPlusOne theNextPossiblePrime previousPrimesPlusCurrentPrime
    where currentIndexPlusOne = currentIndex + 1
          theNextPossiblePrime = nextPossiblePrime possiblePrime
          previousPrimesPlusCurrentPrime = possiblePrime : previousPrimes
I think you get the idea. Let's also just ignore the fact that this solution can be made to be more efficient, I'm aware of this.
So my question is kind of a two-part question. First, am I going about Haskell all wrong? Am I stuck in the imperative programming mindset and not embracing Haskell as I should? And if so, as I feel I am, how do I avoid this? Is there a book or source you can point me to that might help me think more Haskell-like?
Your help is much appreciated,
-Asaf
Am I stuck in the imperative programming mindset and not embracing Haskell as I should?
You are not stuck, at least I hope not. What you experience is absolutely normal. While you were working with imperative languages you learned (maybe without knowing) to see programming problems from a very specific perspective - namely in terms of the von Neumann machine.
If you have the problem of, say, making a list that contains some sequence of numbers (let's say we want the first 1000 even numbers), you immediately think of: a linked list implementation (perhaps from the standard library of your programming language), a loop, and a variable that you'd set to a starting value; then you would loop for a while, updating the variable by adding 2 and appending it to the end of the list.
See how you mostly think to serve the machine? Memory locations, loops, etc.!
In imperative programming, one thinks all the time about how to manipulate certain memory cells in a certain order to arrive at the solution. (This is, btw, one reason why beginners find learning (imperative) programming hard. Non-programmers are simply not used to solving problems by reducing them to a sequence of memory operations. Why should they? But once you've learned that, you have the power - in the imperative world. For functional programming you need to unlearn that.)
In functional programming, and especially in Haskell, you merely state the construction law of the list. Because a list is a recursive data structure, this law is of course also recursive. In our case, we could, for example say the following:
constructStartingWith n = n : constructStartingWith (n+2)
And almost done! To arrive at our final list we only have to say where to start and how many we want:
result = take 1000 (constructStartingWith 0)
Note that a more general version of constructStartingWith is available in the library; it is called iterate, and it takes not only the starting value but also the function that makes the next list element from the current one:
iterate f n = n : iterate f (f n)
constructStartingWith = iterate (2+) -- defined in terms of iterate
Another approach is to assume that we had another list our list could be made from easily. For example, if we had the list of the first n integers, we could easily make it into the list of even integers by multiplying each element by 2. Now, the list of the first 1000 (non-negative) integers in Haskell is simply
[0..999]
And there is a function map that transforms lists by applying a given function to each element. The function we want is to double the elements:
double n = 2*n
Hence:
result = map double [0..999]
Later you'll learn more shortcuts. For example, we don't need to define double, but can use a section: (2*), or we could write our list directly as a sequence: [0,2..1998].
But not knowing these tricks yet should not make you feel bad! The main challenge you are facing now is to develop a mentality where you see that the problem of constructing the list of the first 1000 even numbers is a two-staged one: a) define what the list of all even numbers looks like and b) take a certain portion of that list. Once you start thinking that way you're done, even if you still use hand-written versions of iterate and take.
Back to the Euler problem: Here we can use the top-down method (and a few basic list manipulation functions one should indeed know about: head, drop, filter, any). First, if we had the list of primes already, we can just drop the first 10000 and take the head of the rest to get the 10001st one:
result = head (drop 10000 primes)
We know that after dropping any number of elements from an infinite list, there will still remain a nonempty list to pick the head from; hence, the use of head is justified here. When you're unsure if there are more than 10000 primes, you should write something like:
result = case drop 10000 primes of
    []    -> error "The ancient Greeks were wrong! There are fewer than 10001 primes!"
    (r:_) -> r
Now for the hard part. Not knowing how to proceed, we could write some pseudo code:
primes = 2 : {-an infinite list of numbers that are prime-}
We know for sure that 2 is the first prime, the base case, so to speak, thus we can write it down. The unfilled part gives us something to think about. For example, the list should start at some value that is greater than 2, for obvious reasons. Hence, refined:
primes = 2 : {- something like [3..] but only the ones that are prime -}
Now, this is the point where there emerges a pattern that one needs to learn to recognize. This is surely a list filtered by a predicate, namely prime-ness (it does not matter that we don't know yet how to check prime-ness; the logical structure is the important point. And, we can be sure that a test for prime-ness is possible!). This allows us to write more code:
primes = 2 : filter isPrime [3..]
See? We are almost done. In 3 steps, we have reduced a fairly complex problem in such a way that all that is left to write is a quite simple predicate.
Again, we can write in pseudocode:
isPrime n = {- false if any number in 2..n-1 divides n, otherwise true -}
and can refine that. Since this is almost Haskell already, it is too easy:
isPrime n = not (any (divides n) [2..n-1])
divides n p = n `rem` p == 0
Note that we did not do optimization yet. For example we can construct the list to be filtered right away to contain only odd numbers, since we know that even ones are not prime. More important, we want to reduce the number of candidates we have to try in isPrime. And here, some mathematical knowledge is needed (the same would be true if you programmed this in C++ or Java, of course), that tells us that it suffices to check if the n we are testing is divisible by any prime number, and that we do not need to check divisibility by prime numbers whose square is greater than n. Fortunately, we have already defined the list of prime numbers and can pick the set of candidates from there! I leave this as exercise.
You'll learn later how to use the standard library and syntactic sugar like sections, list comprehensions, etc., and you will gradually stop writing your own basic functions.
Even later, when you have to do something in an imperative programming language again, you'll find it very hard to live without infinite lists, higher order functions, immutable data, etc.
This will be as hard as going back from C to Assembler.
Have fun!
It's ok to have an imperative mindset at first. With time you will get more used to things and start seeing the places where you can have more functional programs. Practice makes perfect.
As for working with mutable variables, you can kind of keep them for now if you follow the rule of thumb of converting variables into function parameters and iteration into tail recursion.
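To make that rule of thumb concrete, here is a tiny sketch (in Python, purely for contrast with the imperative original) of turning a loop over mutable variables into a helper whose parameters carry the state - the same shape as the nthPrime' helper in the question:

# Imperative style: mutable variables updated inside a loop.
def sum_to_imperative(n):
    total = 0
    i = 1
    while i <= n:
        total += i
        i += 1
    return total

# The same computation with the variables turned into parameters and the
# loop turned into (tail) recursion -- the shape that maps directly onto
# a recursive helper function in Haskell.
def sum_to_recursive(n, i=1, total=0):
    if i > n:
        return total
    return sum_to_recursive(n, i + 1, total + i)

assert sum_to_imperative(10) == sum_to_recursive(10) == 55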
Off the top of my head:
Typeclassopedia. The official v1 of the document is a pdf, but the author has moved his v2 efforts to the Haskell wiki.
What is a monad? This SO Q&A is the best reference I can find.
What is a Monad Transformer? Monad Transformers Step by Step.
Learn from masters: Good Haskell source to read and learn from.
More advanced topics such as GADTs. There's a video, which does a great job explaining it.
And last but not least, the #haskell IRC channel. Nothing can even come close to talking to real people.
I think the big change from your code to more Haskell-like code is using higher-order functions, pattern matching, and laziness better. For example, you could write the nthPrime function like this (using a similar algorithm to what you did, again ignoring efficiency):
nthPrime n = primes !! (n - 1) where
    primes = filter isPrime [2..]
    isPrime p = isPrime' p [2..p - 1]
    isPrime' p [] = True
    isPrime' p (x:xs)
        | (p `mod` x == 0) = False
        | otherwise = isPrime' p xs
E.g., nthPrime 4 returns 7. A few things to note:
The isPrime' function uses pattern matching to implement the function, rather than relying on if statements.
The primes value is an infinite list of all primes. Since Haskell is lazy, this is perfectly acceptable.
filter is used rather than reimplementing that behaviour using recursion.
With more experience you will find you write more idiomatic Haskell code - it sort of happens automatically with experience. So don't worry about it, just keep practicing, and reading other people's code.
Another approach, just for variety! Strong use of laziness...
module Main where

nonmults :: Int -> Int -> [Int] -> [Int]
nonmults n next [] = []
nonmults n next l@(x:xs)
    | x < next  = x : nonmults n next xs
    | x == next = nonmults n (next + n) xs
    | otherwise = nonmults n (next + n) l

select_primes :: [Int] -> [Int]
select_primes [] = []
select_primes (x:xs) =
    x : (select_primes $ nonmults x (x + x) xs)

main :: IO ()
main = do
    let primes = select_primes [2 ..]
    putStrLn $ show $ primes !! 10000 -- the first prime is index 0 ...
I want to try to answer your question without using ANY functional programming or math, not because I don't think you will understand it, but because your question is very common and maybe others will benefit from the mindset I will try to describe. I'll preface this by saying I am not a Haskell expert by any means, but I have gotten past the mental block you have described by realizing the following:
1. Haskell is simple
Haskell, and other functional languages that I'm not so familiar with, are certainly very different from your 'normal' languages, like C, Java, Python, etc. Unfortunately, the way our psyche works, humans prematurely conclude that if something is different, then A) they don't understand it, and B) it's more complicated than what they already know. If we look at Haskell very objectively, we will see that these two conjectures are totally false:
"But I don't understand it :("
Actually you do. Everything in Haskell and other functional languages is defined in terms of logic and patterns. If you can answer a question as simple as "If all Meeps are Moops, and all Moops are Moors, are all Meeps Moors?", then you could probably write the Haskell Prelude yourself. To further support this point, consider that Haskell lists are defined in Haskell terms, and are not special voodoo magic.
"But it's complicated"
It's actually the opposite. Its simplicity is so naked and bare that our brains have trouble figuring out what to do with it at first. Compared to other languages, Haskell actually has considerably fewer "features" and much less syntax. When you read through Haskell code, you'll notice that almost all the function definitions look the same stylistically. This is very different from, say, Java, which has constructs like classes, interfaces, for loops, try/catch blocks, anonymous functions, etc... each with their own syntax and idioms.
You mentioned $ and ., again, just remember they are defined just like any other Haskell function and don't necessarily ever need to be used. However, if you didn't have these available to you, over time, you would likely implement these functions yourself when you notice how convenient they can be.
2. There is no Haskell version of anything
This is actually a great thing, because in Haskell, we have the freedom to define things exactly how we want them. Most other languages provide building blocks that people string together into a program. Haskell leaves it up to you to first define what a building block is, before building with it.
Many beginners ask questions like "How do I do a For loop in Haskell?" and innocent people who are just trying to help will give an unfortunate answer, probably involving a helper function, an extra Int parameter, and tail recursing until you get to 0. Sure, this construct can compute something like a for loop, but in no way is it a for loop, it's not a replacement for a for loop, and in no way is it really even similar to a for loop if you consider the flow of execution. Similar is the State monad for simulating state. It can be used to accomplish similar things as static variables do in other languages, but in no way is it the same thing. Most people leave off the last tidbit about it not being the same when they answer these kinds of questions, and I think that only confuses people more until they realize it on their own.
3. Haskell is a logic engine, not a programming language
This is probably the least true point I'm trying to make, but hear me out. In imperative programming languages, we are concerned with making our machines do stuff, perform actions, change state, and so on. In Haskell, we try to define what things are and how they are supposed to behave. We are usually not concerned with what something is doing at any particular time. This certainly has benefits and drawbacks, but that's just how it is. This is very different from what most people think of when you say "programming language".
So that's my take on how to leave an imperative mindset and move to a more functional one. Realizing how sensible Haskell is will help you not look at your own code funny anymore. Hopefully thinking about Haskell in these ways will help you become a more productive Haskeller.