How to find the fixpoint of a loop and why do we need this? [closed] - testing

I know that in static analysis of programs, we need to find a fixpoint to analyze the information a loop provides.
I have read the wiki as well as related materials in the book Secure Programming with Static Analysis.
But I am still confused by the concept of a fixpoint, so my questions are:
Could anyone give me some explanation of the concept of a fixpoint?
What are the practical ways to find the fixpoint in static analysis?
What information can we get after finding the fixpoint?
Thank you!

Conceptually, the fixpoint corresponds to the most information you obtain about the loop by repeatedly iterating it on some set of abstract values. I'm going to guess that by "static analysis" you're referring here to "data flow analysis" or the version of "abstract interpretation" that most closely follows data flow analysis: a simulation of program execution using abstractions of the possible program states at each point. (Model checking follows a dual intuition in that you're simulating program states using an abstraction of possible execution paths. Both are approximations of concrete program behavior. )
Given some knowledge about a program point, this "simulation" corresponds to the effect that we know a particular program construct must have on what we know. For example, at some point in a program, we may know that x could (a) be uninitialized, or else have its value from statements (b) x = 0 or (c) x = f(5), but after (d) x = 42, its value can only have come from (d). On the other hand, if we have
if ( foo() ) {
    x = 42;    // (d)
    bar();
} else {
    baz();
    x = x - 1; // (e)
}
then the value of x afterwards might have come from either of (d) or (e).
Now think about what can happen with a loop:
while ( x != 0 ) {
    if ( foo() ) {
        x = 42;    // (d)
        bar();
    } else {
        baz();
        x = x - 1; // (e)
    }
}
On entry, we have possible definitions of x from {a,b,c}. One pass through the loop means that the possible definitions are instead drawn from {d,e}. But what happens if foo() fails initially so that the loop does not run at all? What are the possibilities for x then? Well, in this case, the loop body has no effect, so the definitions of x would come from {a,b,c}. But if it ran, even once, then the answer is {d,e}. So what we know about x at the end of the loop is that the loop either ran or it didn't, which means that the assignment to x could be any one of {a,b,c,d,e}: the only safe answer here is the union of the property known at loop entry ({a,b,c}) and the property known at the end of one iteration ({d,e}).
But this also means that we must associate x with {a,b,c,d,e} at the beginning of the loop body, too, since we have no way of determining whether this is the first or the four thousandth time through the loop. So we have to consider again what we can have on loop exit: the union of the loop body's effect with the property assumed to hold on entry to the last iteration. Happily, this is just {a,b,c,d,e} ∪ {d,e} = {a,b,c,d,e}. In other words, we've not obtained any additional information through this second simulation of the loop body, and thus we can stop, since no further simulated iterations will change the result.
That's the fixpoint: the abstraction of the program state that will cause simulation to produce exactly the same result.
Now as for ways to find it, there are many, though the most straightforward ("chaotic iteration") simply runs the simulation of every program point (according to some fair strategy) until the answer doesn't change. A good starting point for learning better algorithms can be found in most any compilers textbook, though it isn't usually taught in a first course. Steven Muchnick's Advanced Compiler Design and Implementation is a more thorough and very readable treatment of the subject. If you can find a copy, Matthew Hecht's Flow Analysis of Computer Programs is another classic treatment. Both books focus on the "data flow analysis" technique for static analysis. You might also try out Principles of Program Analysis, by Nielson/Nielson/Hankin, though the technical details in the book can be pretty hairy. On the other hand, it offers a more general treatment of static analysis overall.
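To make the "chaotic iteration" idea concrete, here is a toy sketch (my own encoding in Python, not taken from any of those books) of the reaching-definitions example above: keep re-simulating the loop and unioning the results until the abstract state at the loop head stops changing.
# Toy reaching-definitions analysis for the example loop above.
# Abstract value: the set of definitions of x that may reach a program point.

def loop_body(defs_at_entry):
    # One pass through the body: whichever branch runs, x is redefined
    # at (d) or (e), so any earlier definitions are killed.
    return {"d", "e"}

def analyze_loop(defs_before_loop):
    # Chaotic iteration: the state at the loop head is the union of what
    # holds before the loop and what one more simulated iteration produces.
    at_head = set(defs_before_loop)
    while True:
        new_at_head = defs_before_loop | loop_body(at_head)
        if new_at_head == at_head:   # nothing changed: the fixpoint is reached
            return at_head
        at_head = new_at_head

print(analyze_loop({"a", "b", "c"}))  # prints {'a', 'b', 'c', 'd', 'e'} (in some order)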

Related

Without unwinding, translate a simple while loop iteration into SMT-LIB formula to prove correctness

Consider proving correctness of the following while loop, i.e., I want to show that, given that the loop condition holds to start with, it will eventually terminate and result in the final assertion being true.
int x = 0;
while (x >= 0 && x < 10) {
    x = x + 1;
}
assert x == 10;
What would be the correct translation into SMT-LIB for checking the correctness, without using loop unwinding?
Hoare logic and loop-invariants
Typical proof of such a statement would be done via the classic Hoare logic, which I assume you're already familiar with. If not, see: https://en.wikipedia.org/wiki/Hoare_logic
The idea is to come up with an invariant for your loop. This invariant must be true before the loop starts, it must be maintained by the loop body, and it must imply the final result when the loop condition is no longer true. Additionally, you also need to prove that the loop will eventually terminate, by means of a measure function. (More on that later.)
You can convince yourself why this would be sufficient: An invariant is something that's "always" true. And if it implies your final result, then your proof is complete. The proof steps I outlined above ensure that the invariant is indeed an invariant, i.e., its truth is always maintained by your program.
Coming up with the invariant
What would be a good invariant for your loop here? Let's give this invariant the name I. A moment of thinking reveals a good choice for I is:
I = x >= 0 && x <= 10
Note how similar (but not exactly the same!) this is to your loop condition, and this is not by accident. Loop invariants are not unique, and coming up with a good one can be really difficult. Synthesizing loop invariants automatically has been an active area of research since the 1960s; see the plethora of research out there. https://en.wikipedia.org/wiki/Loop_invariant is a good starting point.
Proof using SMT
Now that we "magically" came up with the loop invariant, let's use SMT to prove that it is indeed correct. Instead of writing SMTLib (which is verbose and mostly intended for machines only), I'll use z3-python interface as a close enough substitute. To finish the proof, I need to show 4 things:
The invariant holds before the loop starts
The invariant is maintained by the loop body
The invariant and the negation of the loop-condition implies the desired post-condition
The loop terminates
Let's look at each in turn.
(0) Preliminaries
Since we'll use z3's python interface, we'll have to do a little bit of leg-work to get us started. Here's the skeleton we need:
from z3 import *

def C(p):
    return And(p >= 0, p < 10)

def I(p):
    return And(p >= 0, p <= 10)

x = Int('x')
Note that we parameterized the loop-condition (C) and the invariant (I) with a parameter so it's easy to call them with different arguments. This is a common trick in programming, abstracting away the control from the data. This way of coding will simplify our life later on.
(1) The invariant holds before the loop starts
This one is easy. Right before the loop, we know that x = 0. So we need to ask the SMT solver if x == 0 implies our invariant:
>>> prove (Implies(x == 0, I(x)))
proved
Voila! If you want to see the SMTLib for the proof obligation, you can ask z3 to print it for you:
>>> print(Implies(x == 0, I(x)).sexpr())
(=> (= x 0) (and (>= x 0) (<= x 10)))
(2) The invariant is maintained by the loop-body
The loop body is only run when the loop condition (C) is true. The body increments x by one. So, what we need to show is that if our invariant (I) is true, if the loop condition (C) is true, and if I increment x by one, then I remains true. Let's ask z3 exactly that:
>>> prove(Implies(And(I(x), C(x)), I(x+1)))
proved
Almost too easy!
(3) The invariant implies the result when loop condition is false
This time, all we need to ask the solver is to prove the required conclusion when I holds, but C doesn't:
>>> prove(Implies(And(I(x), Not(C(x))), x == 10))
proved
And we have now completed what's known as the partial-correctness claim. That is, if the loop terminates, then x will indeed be 10 at the end. This is what you were trying to prove to start with.
(4) The loop terminates
What we've done so far is known as partial-correctness. It says if the loop terminates, then your post-condition (i.e., x == 10) holds. But it does not make any guarantees that the loop will always terminate.
To get a full-proof, we have to prove termination. This is done by coming up with a measure function: A measure function is a function that assigns (typically) a numeric value to the set of program variables, which is bounded from below. Then we show that it goes down in each iteration and has an initial value that's above its lower-bound. Then we know that the loop cannot continue forever: The measure has to go down in each iteration, but it cannot do so since it's bounded below.
Termination proofs are usually harder, and coming up with a good measure can be tricky. But in this case, it's easy to come up with it:
def M(x):
    return 10 - x
The claim is that the measure is always non-negative in this case. Let's prove that before the loop starts, i.e., when x == 0:
>>> prove (Implies(x == 0, M(x) >= 0))
proved
It goes down in each iteration:
>>> prove (Implies(C(x), M(x) > M(x+1)))
proved
And finally, it stays non-negative as long as the loop executes:
>>> prove (Implies(C(x), M(x) >= 0))
proved
Now we know that the loop will terminate, so our proof is complete.
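For reference, here are all of the proof obligations above collected into one runnable z3-python script (nothing new here, just the same definitions gathered in a single place):
from z3 import *

def C(p):               # loop condition
    return And(p >= 0, p < 10)

def I(p):               # loop invariant
    return And(p >= 0, p <= 10)

def M(p):               # measure function for termination
    return 10 - p

x = Int('x')

prove(Implies(x == 0, I(x)))                    # (1) invariant holds before the loop
prove(Implies(And(I(x), C(x)), I(x + 1)))       # (2) invariant maintained by the loop body
prove(Implies(And(I(x), Not(C(x))), x == 10))   # (3) invariant + exit implies the post-condition
prove(Implies(x == 0, M(x) >= 0))               # (4a) measure non-negative initially
prove(Implies(C(x), M(x) > M(x + 1)))           # (4b) measure decreases in each iteration
prove(Implies(C(x), M(x) >= 0))                 # (4c) measure bounded below while the loop runs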
But wait!
You might wonder if I pulled a rabbit out of a hat here. How do we know that the above steps are sufficient? Or that I didn't make a mistake in my coding as I waved my hand over your program and magically translated it to z3-python?
For the first question: There's established research that for traditional imperative program semantics, Hoare-logic style reasoning is sound. Here's a good slide deck to start with: https://www.cl.cam.ac.uk/teaching/1617/HLog+ModC/slides/lecture2.pdf
For the second question: This is where the rubber hits the road. You have to put my argument to peer review, possibly using an established theorem prover to code the whole thing up and trust that the mechanization is correct. Why3 (https://why3.lri.fr) is a good platform to get started with this style of reasoning.
Picking the invariant
The trickiest part of this proof is coming up with the right invariant. A "good" invariant is one that's not only true, but one that allows you to prove the result you want. For instance, consider the following invariant:
def I(p):
    return True
This invariant is manifestly true for all programs as well! But if you attempt to run the proofs we had with this version of I, you'll see that it won't go through and you'll get a counter-example. (It's quite instructive to do so.) In general, you can:
Pick an "invariant" that's not really enforced by your program, i.e., it doesn't stay true at all times as described above. Hopefully the counter-example you get from the solver will be helpful to identify what goes wrong.
Or, and this is way more likely, the invariant you picked is indeed an invariant of the program, but it is not strong enough to prove the result you want. In this case the counter-example will be less useful, and for complicated programs it can be hard to track down the reason why.
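To see the "too weak" failure mode concretely, here is obligation (3) rerun with the trivial invariant (a sketch only; the exact counter-example z3 prints may vary between versions):
from z3 import *

def C(p):
    return And(p >= 0, p < 10)

def I(p):
    return BoolVal(True)    # the trivial "invariant": true in every state

x = Int('x')

# Obligations (1) and (2) still pass, but (3) no longer goes through:
# knowing only "True and not C(x)" does not force x == 10.
prove(Implies(And(I(x), Not(C(x))), x == 10))
# z3 answers "counterexample" and prints a model, e.g. some x outside [0, 10].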
An invariant that allows you to prove the final result is called an "inductive invariant." The process of "improving" the invariant to get to a proof is known as "strengthening the invariant." There's a plethora of research in all of these topics, especially in the realm of model-checking. A good paper to read in these topics is Bradley's "Understanding IC3:" https://theory.stanford.edu/~arbrad/papers/Understanding_IC3.pdf.
Summary
The strategy outlined here is a "meta"-level proof: It's equivalent to a paper-proof which identified the proof goals, and shipped them to an SMT solver (z3 in this case), to finish the job. This is common practice in modern day proofs, i.e., coming up with sub-goals and using an automated-solver to discharge them. Theorem-provers like ACL2, Isabelle, Coq, etc. mechanize the "coming up with subgoals" part to a large extent, making sure the whole proof is sound with respect to a trusted (but typically very small) set of core-axioms. (This is the so called LCF methodology, see https://www.cl.cam.ac.uk/~jrh13/slides/manchester-12sep01/slides.pdf for a nice slide-deck on it.)
Hopefully this is a detailed enough answer to get you started in program verification with SMT solvers. Perhaps it's more than what you asked for; but the rule of thumb is that there is no free lunch in verification. It is a lot of work! However, you get pretty close to push-button reasoning these days (at least for certain kinds of programs) with the advances in automated theorem provers, SMT solvers, and other frameworks that many people have built over the years. Best of luck, but be warned that program verification remains the holy grail of computer science after almost seven decades of work on it. Things keep getting better and easier, but there's much more work to be done in the field.

Writing a computer program that will analyse the quality of another computer program?

I'm interested in knowing the possibilities of this. I'm working on a project that validates the skills of a software engineer, currently we validate skills based on code reviews by credentialed developers.
I know the answer is far more complicated than the question; I can't imagine how complex a program would have to be to analyse complex code, but I am starting with basic programming interview questions.
For example, the classic FizzBuzz question:
Write a program that prints the numbers from 1 to 20. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”.
and below is the solution in python:
for num in range(1, 21):
    string = ""
    if num % 3 == 0:
        string = string + "Fizz"
    if num % 5 == 0:
        string = string + "Buzz"
    if num % 5 != 0 and num % 3 != 0:
        string = string + str(num)
    print(string)
Question is, can we programmatically analyse the validity of this solution?
I would like to know if anyone has attempted this, and if there are current implementations I can take a look at. Also if anyone has used z3, and if it is something I can use to solve this problem.
As Vilx- mentioned, correctness of programs (including whether or not they terminate) is in general known to be undecidable. However, tools such as Z3 show that relevant concrete cases can still be reasoned about, despite the general undecidability of the problem.
Static analysers typically look for "simple" problems (e.g. null dereferences, out-of-bounds accesses, numerical overflows), but are comparably fast and require little user guidance (think of guidance in the spirit of adding type annotations to your code).
A non-exhaustive (and biased) list of keywords to search for: "static analysers", "abstract interpretation"; "facebook infer", "airbus absint", "juliasoft".
Verifiers attempt to prove much richer properties, in particular functional correctness, e.g. "does this sort-implementation really sort my array (and not do anything else, e.g. deallocate some global memory or update an element reachable from the array)?" or "does that crypto-implementation really implement the crypto protocol it promises to implement?". This is a much harder task and tools from that line of research are typically rather slow, require expert users with a background in formal verification and significant user guidance.
A non-exhaustive (and biased) list of keywords to search for: "verification", "hoare logic", "separation logic"; "eth viper", "microsoft dafny", "kuleuven verifast", "microsoft f*".
Other formal methods exist, e.g. refinement (or correct-by-construction), but with even less tool support and, as far as I know, industry acceptance.
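To make the z3 remark above a bit more concrete for this particular FizzBuzz exercise, here is a rough sketch (my own encoding in z3's Python API, not an off-the-shelf checker): it models one loop iteration of the submitted solution and a direct transcription of the problem statement as z3 string terms, and asks whether they agree for every number in range.
from z3 import *

n = Int('n')

def candidate(n):
    # Mirrors the submitted code: build up a string, fall back to the number.
    s = StringVal("")
    s = If(n % 3 == 0, Concat(s, StringVal("Fizz")), s)
    s = If(n % 5 == 0, Concat(s, StringVal("Buzz")), s)
    return If(And(n % 3 != 0, n % 5 != 0), IntToStr(n), s)

def spec(n):
    # The problem statement, written down directly.
    return If(And(n % 3 == 0, n % 5 == 0), StringVal("FizzBuzz"),
           If(n % 3 == 0, StringVal("Fizz"),
           If(n % 5 == 0, StringVal("Buzz"), IntToStr(n))))

# Expected output: proved
prove(Implies(And(n >= 1, n <= 20), candidate(n) == spec(n)))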
Let's put it this way: it's been mathematically proven that you CANNOT determine if a program will ever terminate. So if you want a mathematically perfect answer of if the target program is correct, you're doomed.
That said, you can still do unit tests and "linting", which will give you plenty of interesting insights.
But for simple pieces of code like the FizzBuzz, I think that eyeballing by an experienced dev will probably bring the best results.

Where does the KeY verification tool shine?

What are some code examples demonstrating KeY’s strength?
Details
With so many Formal Method tools available, I was wondering where KeY is better than its competition, and how? Some readable code examples would be quite helpful for comparison and understanding.
Updates
Searching through the KeY website, I found code examples from the book — is there a suitable code example in there somewhere?
Furthermore, I found a paper about the bug that KeY found in Java 8’s mergeCollapse in TimSort. What is a minimal code from TimSort that demonstrates KeY’s strength? I do not understand, however, why model checking supposedly cannot find the bug — a bit array with 64 elements should not be too large to handle. Are other deductive verification tools just as capable of finding the bug?
Is there an established verification competition with suitable code examples?
This is a very hard question, which is why it hasn't yet been answered after having already been asked more than one year ago (and although we from the KeY community are well aware of it...).
The Power of Interaction
First, I'd like to point out that KeY is basically the only tool out there allowing for interactive proofs of Java programs. Although many proofs work automatically and we have quite powerful automatic strategies at hand, sometimes interaction is required to understand why a proof fails (too weak or even wrong specifications, wrong code or "just" a prover incapacity) and to add suitable corrections or strengthenings.
Feedback from Proof Inspection
Especially in the case of a prover incapacity (specification and program are OK, but the problem is too hard for the prover to succeed automatically), interaction is a powerful feature. Many program provers (like OpenJML, Dafny, Frama-C etc.) rely on SMT solvers in the backend which they feed with many more or less small verification conditions. The verification status for these conditions is then reported back to the user, basically as pass or fail -- or timeout. When an assertion fails, a user can change the program or refine the specifications, but cannot inspect the state of the proof to deduce information about why something went wrong; this style is sometimes called "auto-active" as opposed to interactive. While this can be quite convenient in many cases (especially when proofs pass, since the SMT solvers can be really quick in proving something), it can be hard to mine SMT solver output for information. Not even the SMT solvers themselves know why something went wrong (although they can produce a counterexample), as they are just fed a set of formulas for which they attempt to find a contradiction.
TimSort: A Complicated Algorithmic Problem
For the TimSort proof which you mentioned, we had to use a lot of interaction to make them pass. Take, for instance, the mergeHi method of the sorting algorithm which has been proven by one of the most experienced KeY power users known to me. In this proof of 460K proof nodes, 3K user interactions were necessary, consisting of quite a lot of simple ones like the hiding of distracting formulas, but also of 478 quantifier instantiations and about 300 cuts (on-line lemma introduction). The code of that method features many difficult Java features like nested loops with labeled breaks, integer overflows, bit arithmetic and so on; especially, there are a lot of potential exceptions and other reasons for branching in the proof tree (which is why in addition, also five manual state merging rule applications have been used in the proof). The workflow for proving this method basically was to give the strategies a try for some time, check the proof state afterward, prune back the proof and introduce a helpful lemma to reduce the overall proof work and to start again; occasionally, quantifiers were instantiated manually if the strategies failed to find the right instantiation directly by themselves, and proof tree branches were merged to tackle state explosion. I would just claim here that proving this code is (at least currently) not possible with auto-active tools, where you cannot guide the prover in that way, and also cannot obtain the right feedback for knowing how to guide it.
Strength of KeY
Concluding, I'd say that KeY's strong in proving hard algorithmic problems (like sorting etc.) where you have complicated quantified invariants and integer arithmetic with overflows, and where you need to find quantifier instantiations and small lemmas on the fly by inspecting and interacting with the proof state. The KeY approach of semi-interactive verification also excels in general for cases where SMT solvers time out, such that a user cannot tell whether something is wrong or an additional lemma is required.
KeY can of course also prove "simple" problems; however, there you need to take care that your program does not contain an unsupported Java feature like floating point numbers or multithreading; also, library methods can be quite a problem if they're not yet specified in JML (but this problem applies to other approaches as well).
Ongoing Developments
As a side remark, I also would like to point out that KeY is now more and more being transformed to a platform for static analysis of different kinds of program properties (not only functional correctness of Java programs). On the one hand, we have developed tools such as the Symbolic Execution Debugger which can be used also by non-experts to examine the behavior of a sequential Java program. On the other hand, we are currently busy in refactoring the architecture of the system for making it possible to add frontends for languages different than Java (in our internal project "KeY-RED"); furthermore, there are ongoing efforts to modernize the Java frontend such that also newer language features like Lambdas and so on are supported. We are also looking into relational properties like compiler correctness. And while we already support the integration of third-party SMT solvers, our integrated logic core will still be there to support understanding proof situations and manual interactions for cases where SMT and automation fails.
TimSort Code Example
Since you asked for a code example... I cannot right now think of "the" code example showing KeY's strength, but maybe to give you a flavor of the complexity of mergeHi in the TimSort algorithm, here is a shortened excerpt with some comments (the full method has about 100 lines of code):
private void mergeHi(int base1, int len1, int base2, int len2) {
    // ...
    T[] tmp = ensureCapacity(len2);           // Method call by contract
    System.arraycopy(a, base2, tmp, 0, len2); // Manually specified library method
    // ...
    a[dest--] = a[cursor1--]; // potential overflow, NullPointerException, ArrayIndexOutOfBoundsException
    if (--len1 == 0) {
        System.arraycopy(tmp, 0, a, dest - (len2 - 1), len2);
        return; // Proof branching
    }
    if (len2 == 1) {
        // ...
        return; // Proof branching
    }
    // ...
    outer: // Loop labels...
    while (true) {
        // ...
        do { // Nested loop
            if (c.compare(tmp[cursor2], a[cursor1]) < 0) {
                // ...
                if (--len1 == 0)
                    break outer; // Labeled break
            } else {
                // ...
                if (--len2 == 1)
                    break outer; // Labeled break
            }
        } while ((count1 | count2) < minGallop); // Bit arithmetic
        do { // 2nd nested loop
            // That's one complex statement below...
            count1 = len1 - gallopRight(tmp[cursor2], a, base1, len1, len1 - 1, c);
            if (count1 != 0) {
                // ...
                if (len1 == 0)
                    break outer;
            }
            // ...
            if (--len2 == 1)
                break outer;
            count2 = len2 - gallopLeft(a[cursor1], tmp, 0, len2, len2 - 1, c);
            if (count2 != 0) {
                // ...
                if (len2 <= 1)
                    break outer;
            }
            a[dest--] = a[cursor1--];
            if (--len1 == 0)
                break outer;
            // ...
        } while (count1 >= MIN_GALLOP | count2 >= MIN_GALLOP);
        // ...
    } // End of "outer" loop
    this.minGallop = minGallop < 1 ? 1 : minGallop; // Write back to field
    if (len2 == 1) {
        // ...
    } else if (len2 == 0) {
        throw new IllegalArgumentException(
            "Comparison method violates its general contract!");
    } else {
        System.arraycopy(tmp, 0, a, dest - (len2 - 1), len2);
    }
}
Verification Competition
VerifyThis is an established competition for logic-based verification tools which will have its 7th iteration in 2019. The concrete challenges for past events can be downloaded from the "archive" section of the website I linked. Two KeY teams participated there in 2017. The overall winner that year was Why3. An interesting observation is that there was one problem, Pair Insertion Sort, which came as a simplified and as an optimized Java version, for which no team succeeded in verifying the real-world optimized version on site. However, a KeY team finished that proof in the weeks after the event. I think that highlights my point: KeY proofs of difficult algorithmic problems take their time and require expertise, but they're likely to succeed due to the combined power of strategies and interaction.

What is difference between functional and imperative programming languages?

Most of the mainstream languages, including object-oriented programming (OOP) languages such as C#, Visual Basic, C++, and Java were designed to primarily support imperative (procedural) programming, whereas Haskell/gofer like languages are purely functional. Can anybody elaborate on what is the difference between these two ways of programming?
I know it depends on user requirements to choose the way of programming but why is it recommended to learn functional programming languages?
Here is the difference:
Imperative:
Start
Put on your shoes, size 9 1/2.
Make room in your pocket to keep an array[7] of keys.
Put the keys in the room for the keys in the pocket.
Enter garage.
Open garage.
Enter Car.
... and so on and on ...
Put the milk in the refrigerator.
Stop.
Declarative, whereof functional is a subcategory:
Milk is a healthy drink, unless you have problems digesting lactose.
Usually, one stores milk in a refrigerator.
A refrigerator is a box that keeps the things in it cool.
A store is a place where items are sold.
By "selling" we mean the exchange of things for money.
Also, the exchange of money for things is called "buying".
... and so on and on ...
Make sure we have milk in the refrigerator (when we need it - for lazy functional languages).
Summary: In imperative languages you tell the computer how to change bits, bytes and words in its memory and in what order. In functional ones, we tell the computer what things, actions etc. are. For example, we say that the factorial of 0 is 1, and the factorial of every other natural number is the product of that number and the factorial of its predecessor. We don't say: To compute the factorial of n, reserve a memory region and store 1 there, then multiply the number in that memory region with the numbers 2 to n and store the result at the same place, and at the end, the memory region will contain the factorial.
Definition:
An imperative language uses a sequence of statements to determine how to reach a certain goal. These statements are said to change the state of the program as each one is executed in turn.
Examples:
Java is an imperative language. For example, a program can be created to add a series of numbers:
int total = 0;
int number1 = 5;
int number2 = 10;
int number3 = 15;
total = number1 + number2 + number3;
Each statement changes the state of the program, from assigning values to each variable to the final addition of those values. Using a sequence of five statements the program is explicitly told how to add the numbers 5, 10 and 15 together.
Functional languages:
The functional programming paradigm was explicitly created to support a pure functional approach to problem solving. Functional programming is a form of declarative programming.
Advantages of Pure Functions:
The primary reason to implement functional transformations as pure functions is that pure functions are composable: that is, self-contained and stateless. These characteristics bring a number of benefits, including the following:
Increased readability and maintainability. This is because each function is designed to accomplish a specific task given its arguments. The function does not rely on any external state.
Easier reiterative development. Because the code is easier to refactor, changes to design are often easier to implement. For example, suppose you write a complicated transformation, and then realize that some code is repeated several times in the transformation. If you refactor through a pure method, you can call your pure method at will without worrying about side effects.
Easier testing and debugging. Because pure functions can more easily be tested in isolation, you can write test code that calls the pure function with typical values, valid edge cases, and invalid edge cases.
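A minimal sketch of that contrast (in Python for brevity; the function names are mine):
total = 0

def add_to_total(x):    # impure: reads and mutates state outside the function
    global total
    total += x
    return total

def add(x, y):          # pure: the result depends only on the arguments
    return x + y

# The pure version is trivial to test in isolation and to compose:
assert add(2, 3) == 5
assert add(add(1, 2), 3) == 6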
For OOP people or imperative languages:
Object-oriented languages are good when you have a fixed set of operations on things and as your code evolves, you primarily add new things. This can be accomplished by adding new classes which implement existing methods and the existing classes are left alone.
Functional languages are good when you have a fixed set of things and as your code evolves, you primarily add new operations on existing things. This can be accomplished by adding new functions which compute with existing data types and the existing functions are left alone.
Cons:
It depends on the user requirements to choose the way of programming, so there is harm only when users don’t choose the proper way.
When evolution goes the wrong way, you have problems:
Adding a new operation to an object-oriented program may require editing many class definitions to add a new method
Adding a new kind of thing to a functional program may require editing many function definitions to add a new case.
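A small, hypothetical Python sketch of this trade-off:
# Object-oriented style: each "thing" is a class. Adding a new thing (Square)
# means adding one class; adding a new operation (perimeter) means editing
# every existing class.
class Circle:
    def __init__(self, r): self.r = r
    def area(self): return 3.14159 * self.r ** 2

class Square:   # new kind of thing: just add a class
    def __init__(self, s): self.s = s
    def area(self): return self.s ** 2

# Functional style: each operation is a function over a fixed set of things.
# Adding a new operation (perimeter) means adding one function; adding a new
# kind of thing means editing every existing function.
def area(shape):
    kind, size = shape
    return 3.14159 * size ** 2 if kind == "circle" else size ** 2

def perimeter(shape):   # new operation: just add a function
    kind, size = shape
    return 2 * 3.14159 * size if kind == "circle" else 4 * size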
Most modern languages are both imperative and functional to varying degrees, but to better understand functional programming it is best to take an example of a pure functional language like Haskell, in contrast with imperative code in a not-so-functional language like Java/C#. I believe it is always easier to explain by example, so below is one.
Functional programming: calculate the factorial of n, i.e. n! = n × (n-1) × (n-2) × ... × 2 × 1
-- | A Haskell comment looks like this
-- | the below 2 lines are code to calculate the factorial and the 3rd is its execution
factorial 0 = 1
factorial n = n * factorial (n - 1)
factorial 3
-- | for brevity let's call factorial as f; And x => y shows order execution left to right
-- | above executes as := f(3) as 3 x f(2) => f(2) as 2 x f(1) => f(1) as 1 x f(0) => f(0) as 1
-- | 3 x (2 x (1 x (1)) = 6
Notice that Haskell allows function overloading down to the level of the argument value. Now below is an example of imperative code in increasing degree of imperativeness:
// somewhat functional way
function factorial(n) {
    if (n < 1) {
        return 1;
    }
    return n * factorial(n - 1);
}
factorial(3);
// somewhat more imperative way
function imperativeFactor(n) {
    int f = 1;
    for (int i = 1; i <= n; i++) {
        f = f * i;
    }
    return f;
}
This can be a good reference to understand how imperative code focuses more on the "how" part: the state of the machine (i in the for loop), the order of execution, and flow control.
The latter example can be seen roughly as Java/C# code, and the first part as a limitation of the language itself, in contrast to Haskell's ability to overload the function by value (zero); hence one can say Java/C# is not a purist functional language, while on the other hand you can say it supports functional programming to some extent.
Disclosure: none of the above code is tested/executed but hopefully should be good enough to convey the concept; also I would appreciate comments for any such correction :)
Functional programming is a form of declarative programming, which describes the logic of a computation while the order of execution is completely de-emphasized.
Problem: I want to change this creature from a horse to a giraffe.
Lengthen neck
Lengthen legs
Apply spots
Give the creature a black tongue
Remove horse tail
Each item can be run in any order to produce the same result.
Imperative Programming is procedural. State and order is important.
Problem: I want to park my car.
Note the initial state of the garage door
Stop car in driveway
If the garage door is closed, open garage door, remember new state; otherwise continue
Pull car into garage
Close garage door
Each step must be done in order to arrive at desired result. Pulling into the garage while the garage door is closed would result in a broken garage door.
//The IMPERATIVE way
int a = ...
int b = ...
int c = 0; //1. there is mutable data
c = a+b;   //2. statements (our +, our =) are used to update existing data (variable c)
An imperative program = sequence of statements that change existing data.
Focus on WHAT = our mutating data (modifiable values aka variables).
To chain imperative statements = use procedures (and/or oop).
//The FUNCTIONAL way
const int a = ... //data is always immutable
const int b = ... //data is always immutable
//1. declare pure functions; we use statements to create "new" data (the result of our +), but nothing is ever "changed"
int add(x, y)
{
    return x + y;
}
//2. usage = call functions to get new data
const int c = add(a,b); //c can only be assigned (=) once (const)
A functional program = a list of functions "explaining" how new data can be obtained.
Focus on HOW = our function add.
To chain functional "statements" = use function composition.
These fundamental distinctions have deep implications.
Serious software has a lot of data and a lot of code.
So same data (variable) is used in multiple parts of the code.
A. In an imperative program, the mutability of this (shared) data causes issues
code is hard to understand/maintain (since data can be modified in different locations/ways/moments)
parallelizing code is hard (only one thread can mutate a memory location at a time), which means mutating accesses to the same variable have to be serialized: the developer must write additional code to enforce this serialized access to shared resources, typically via locks/semaphores
As an advantage: data is really modified in place, less need to copy. (some performance gains)
B. On the other hand, functional code uses immutable data which does not have such issues. Data is readonly so there are no race conditions. Code can be easily parallelized. Results can be cached. Much easier to understand.
As a disadvantage: data is copied a lot in order to get "modifications".
See also: https://en.wikipedia.org/wiki/Referential_transparency
Imperative programming style was practiced in web development from 2005 all the way to 2013.
With imperative programming, we wrote out code that listed exactly what our application should do, step by step.
The functional programming style produces abstraction through clever ways of combining functions.
There is mention of declarative programming in the answers and regarding that I will say that declarative programming lists out some rules that we are to follow. We then provide what we refer to as some initial state to our application and we let those rules kind of define how the application behaves.
Now, these quick descriptions probably don’t make a lot of sense, so lets walk through the differences between imperative and declarative programming by walking through an analogy.
Imagine that we are not building software, but instead we bake pies for a living. Perhaps we are bad bakers and don’t know how to bake a delicious pie the way we should.
So our boss gives us a list of directions, what we know as a recipe.
The recipe will tell us how to make a pie. One recipe is written in an imperative style like so:
Mix 1 cup of flour
Add 1 egg
Add 1 cup of sugar
Pour the mixture into a pan
Put the pan in the oven for 30 minutes and 350 degrees F.
The declarative recipe would do the following:
1 cup of flour, 1 egg, 1 cup of sugar - initial State
Rules
If everything mixed, place in pan.
If everything unmixed, place in bowl.
If everything in pan, place in oven.
So imperative approaches are characterized by step by step approaches. You start with step one and go to step 2 and so on.
You eventually end up with some end product. So making this pie, we take these ingredients mix them, put it in a pan and in the oven and you got your end product.
In a declarative world, it's different. In the declarative recipe we would separate our recipe into two separate parts, starting with one part that lists the initial state of the recipe, like the variables. So our variables here are the quantities of our ingredients and their type.
We take the initial state or initial ingredients and apply some rules to them.
So we take the initial state and pass them through these rules over and over again until we get a ready to eat rhubarb strawberry pie or whatever.
So in a declarative approach, we have to know how to properly structure these rules.
So the rules we might want to examine our ingredients or state, if mixed, put them in a pan.
With our initial state, that doesn’t match because we haven’t yet mixed our ingredients.
So rule 2 says, if they are not mixed, then mix them in a bowl. Okay, this rule applies.
Now we have a bowl of mixed ingredients as our state.
Now we apply that new state to our rules again.
So rule 1 says if ingredients are mixed place them in a pan, okay yeah now rule 1 does apply, lets do it.
Now we have this new state where the ingredients are mixed and in a pan. Rule 1 is no longer relevant, rule 2 does not apply.
Rule 3 says if the ingredients are in a pan, place them in the oven, great that rule is what applies to this new state, lets do it.
And we end up with a delicious hot apple pie or whatever.
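As a toy sketch of that rule-driven flow (hypothetical names; Python used only for illustration): the state is a set of facts, and we keep applying whichever rule matches until the pie is in the oven.
def bake(state):
    while "in oven" not in state:
        if "mixed" in state and "in pan" not in state:
            state.add("in pan")     # rule 1: if everything is mixed, place in pan
        elif "mixed" not in state:
            state.add("mixed")      # rule 2: if unmixed, mix in a bowl
        elif "in pan" in state:
            state.add("in oven")    # rule 3: if everything is in the pan, place in oven
    return state

print(bake({"1 cup of flour", "1 egg", "1 cup of sugar"}))   # the initial state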
Now, if you are like me, you may be thinking, why are we not still doing imperative programming. This makes sense.
Well, for simple flows yes, but most web applications have more complex flows that cannot be properly captured by imperative programming design.
In a declarative approach, we may have some initial ingredients or initial state like textInput=“”, a single variable.
Maybe text input starts off as an empty string.
We take this initial state and apply it to a set of rules defined in your application.
If a user enters text, update text input. Well, right now that doesn’t apply.
If template is rendered, calculate the widget.
If textInput is updated, re render the template.
Well, none of this applies so the program will just wait around for an event to happen.
So at some point a user updates the text input and then we might apply rule number 1.
We may update that to “abcd”
So we just updated our text and textInput updates. Rule number 2 does not apply. Rule number 3 says if the text input is updated, which just occurred, then re-render the template; then we go back to rule 2, which says if the template is rendered, calculate the widget. Okay, let's calculate the widget.
In general, as programmers, we want to strive for more declarative programming designs.
Imperative seems more clear and obvious, but a declarative approach scales very nicely for larger applications.
I think it's possible to express functional programming in an imperative fashion:
Using a lot of state check of objects and if... else/ switch statements
Some timeout/wait mechanism to take care of asynchrony
There are huge problems with such approach:
Rules/ procedures are repeated
Statefulness leaves chances for side-effects/ mistakes
Functional programming, treating functions/ methods like objects and embracing statelessness, was born to solve those problems I believe.
Example of usages: frontend applications like Android, iOS or web apps' logics incl. communication with backend.
Other challenges when simulating functional programming with imperative/ procedural code:
Race condition
Complex combination and sequence of events. For example, a user tries to send money in a banking app. Step 1) Do all of the following in parallel, and only proceed if all is good: a) check if the user is still good (fraud, AML); b) check if the user has enough balance; c) check if the recipient is valid and good (fraud, AML); etc. Step 2) Perform the transfer operation. Step 3) Show the update on the user's balance and/or some kind of tracking. With RxJava for example, the code is concise and sensible. Without it, I can imagine there'd be a lot of messy and error-prone code.
I also believe that at the end of the day, functional code will get translated into assembly or machine code which is imperative/ procedural by the compilers. However, unless you write assembly, as humans writing code with high level/ human-readable language, functional programming is the more appropriate way of expression for the listed scenarios
There seem to be many opinions about what functional programs and what imperative programs are.
I think functional programs can most easily be described as "lazy evaluation" oriented. Instead of having a program counter iterate through instructions, the language by design takes a recursive approach.
In a functional language, the evaluation of a function would start at the return statement and backtrack, until it eventually reaches a value. This has far reaching consequences with regards to the language syntax.
Imperative: Shipping the computer around
Below, I've tried to illustrate it by using a post office analogy. The imperative language would be mailing the computer around to different algorithms, and then have the computer returned with a result.
Functional: Shipping recipes around
The functional language would be sending recipes around, and when you need a result - the computer would start processing the recipes.
This way, you ensure that you don't waste too many CPU cycles doing work that is never used to calculate the result.
When you call a function in a functional language, the return value is a recipe that is built up of recipes which in turn is built of recipes. These recipes are actually what's known as closures.
// helper function, to illustrate the point
function unwrap(val) {
    while (typeof val === "function") val = val();
    return val;
}
function inc(val) {
    return function() { return unwrap(val) + 1; };
}
function dec(val) {
    return function() { return unwrap(val) - 1; };
}
function add(val1, val2) {
    return function() { return unwrap(val1) + unwrap(val2); };
}
// lets "calculate" something
let thirteen = inc(inc(inc(10)))
let twentyFive = dec(add(thirteen, thirteen))
// MAGIC! The computer still has not calculated anything.
// 'thirteen' is simply a recipe that will provide us with the value 13
// lets compose a new function
let doubler = function(val) {
return add(val, val);
}
// more modern syntax, but it's the same:
let alternativeDoubler = (val) => add(val, val)
// another function
let doublerMinusOne = (val) => dec(add(val, val));
// Will this be calculating anything?
let twentyFive = doubler(thirteen)
// no, nothing has been calculated. If we need the value, we have to unwrap it:
console.log(unwrap(thirteen)); // 26
The unwrap function will evaluate all the functions to the point of having a scalar value.
Language Design Consequences
Some nice features in imperative languages, are impossible in functional languages. For example the value++ expression, which in functional languages would be difficult to evaluate. Functional languages make constraints on how the syntax must be, because of the way they are evaluated.
On the other hand, with imperative languages can borrow great ideas from functional languages and become hybrids.
Functional languages have great difficulty with unary operators like for example ++ to increment a value. The reason for this difficulty is not obvious, unless you understand that functional languages are evaluated "in reverse".
Implementing a unary operator would have to be implemented something like this:
let value = 10;
function increment_operator(value) {
    return function() {
        return unwrap(value) + 1;
    };
}
value++ // would "under the hood" become value = increment_operator(value)
Note that the unwrap function I used above, is because javascript is not a functional language, so when needed we have to manually unwrap the value.
It is now apparent that applying increment a thousand times would cause us to wrap the value with 10000 closures, which is worthless.
The more obvious approach, is to actually directly change the value in place - but voila: you have introduced modifiable values a.k.a mutable values which makes the language imperative - or actually a hybrid.
Under the hood, it boils down to two different approaches to come up with an output when provided with an input.
Below, I'll try to make an illustration of a city with the following items:
The Computer
Your Home
The Fibonaccis
Imperative Languages
Task: Calculate the 3rd fibonacci number.
Steps:
Put The Computer into a box and mark it with a sticky note:
    Mail Address:    The Fibonaccis
    Return Address:  Your Home
    Parameters:      3
    Return Value:    undefined
and send off the computer.
The Fibonaccis will upon receiving the box do as they always do:
Is the parameter < 2?
Yes: Change the sticky note, and return the computer to the post office:
    Mail Address:    The Fibonaccis
    Return Address:  Your Home
    Parameters:      3
    Return Value:    0 or 1 (returning the parameter)
and return to sender.
Otherwise:
Put a new sticky note on top of the old one:
    Mail Address:    The Fibonaccis
    Return Address:  Otherwise, step 2, c/o The Fibonaccis
    Parameters:      2 (passing parameter-1)
    Return Value:    undefined
and send it.
Take off the returned sticky note. Put a new sticky note on top of the initial one and send The Computer again:
    Mail Address:    The Fibonaccis
    Return Address:  Otherwise, done, c/o The Fibonaccis
    Parameters:      2 (passing parameter-2)
    Return Value:    undefined
By now, we should have the initial sticky note from the requester, and two used sticky notes, each having their Return Value field filled. We sum the two return values and put the result in the Return Value field of the final sticky note.
    Mail Address:    The Fibonaccis
    Return Address:  Your Home
    Parameters:      3
    Return Value:    2 (returnValue1 + returnValue2)
and return to sender.
As you can imagine, quite a lot of work starts immediately after you send your computer off to the functions you call.
The entire programming logic is recursive, but in truth the algorithm happens sequentially as the computer moves from algorithm to algorithm with the help of a stack of sticky notes.
Functional Languages
Task: Calculate the 3rd fibonacci number. Steps:
Write the following down on a sticky note:
    Instructions:    The Fibonaccis
    Parameters:      3
That's essentially it. That sticky note now represents the computation result of fib(3).
We have attached the parameter 3 to the recipe named The Fibonaccis. The computer does not have to perform any calculations, unless somebody needs the scalar value.
Functional Javascript Example
I've been working on designing a programming language named Charm, and this is how fibonacci would look in that language.
fib: (n) => if (
    n < 2               // test
    n                   // when true
    fib(n-1) + fib(n-2) // when false
)
print(fib(4));
This code can be compiled both into imperative and functional "bytecode".
The imperative javascript version would be:
let fib = (n) =>
    n < 2 ?
        n :
        fib(n-1) + fib(n-2);
The HALF functional javascript version would be:
let fib = (n) => () =>
    n < 2 ?
        n :
        fib(n-1) + fib(n-2);
The PURE functional javascript version would be much more involved, because javascript doesn't have functional equivalents.
let unwrap = ($) =>
    typeof $ !== "function" ? $ : unwrap($());
let $if = ($test, $whenTrue, $whenFalse) => () =>
    unwrap($test) ? $whenTrue : $whenFalse;
let $lessThen = (a, b) => () =>
    unwrap(a) < unwrap(b);
let $add = ($value, $amount) => () =>
    unwrap($value) + unwrap($amount);
let $sub = ($value, $amount) => () =>
    unwrap($value) - unwrap($amount);
let $fib = ($n) => () =>
    $if(
        $lessThen($n, 2),
        $n,
        $add( $fib( $sub($n, 1) ), $fib( $sub($n, 2) ) )
    );
I'll manually "compile" it into javascript code:
"use strict";
// Library of functions:
/**
* Function that resolves the output of a function.
*/
let $$ = (val) => {
    while (typeof val === "function") {
        val = val();
    }
    return val;
}
/**
* Functional if
*
* The $ suffix is a convention I use to show that it is "functional"
* style, and I need to use $$() to "unwrap" the value when I need it.
*/
let if$ = (test, whenTrue, otherwise) => () =>
$$(test) ? whenTrue : otherwise;
/**
* Functional lt (less then)
*/
let lt$ = (leftSide, rightSide) => () =>
$$(leftSide) < $$(rightSide)
/**
* Functional add (+)
*/
let add$ = (leftSide, rightSide) => () =>
$$(leftSide) + $$(rightSide)
// My hand compiled Charm script:
/**
* Functional fib compiled
*/
let fib$ = (n) => if$(                  // fib: (n) => if(
    lt$(n, 2),                          //   n < 2
    () => n,                            //   n
    () => add$(fib$(n-2), fib$(n-1))    //   fib(n-1) + fib(n-2)
)                                       // )
// This takes a microsecond or so, because nothing is calculated
console.log(fib$(30));
// When you need the value, just unwrap it with $$( fib$(30) )
console.log( $$( fib$(5) ))
// The only problem that makes this not truly functional is that
console.log(fib$(5) === fib$(5)) // is false, while it should be true
// but that should be solvable
https://jsfiddle.net/819Lgwtz/42/
I know this question is older and others have already explained it well; I would like to give an example problem which explains the same in simple terms.
Problem: Writing the 1's table.
Solution: -
By Imperative style: =>
1*1=1
1*2=2
1*3=3
.
.
.
1*n=n
By Functional style: =>
1
2
3
.
.
.
n
Explanation: In the imperative style, we write the instructions more explicitly, spelling out every step.
Whereas in the functional style, things which are self-explanatory are left out.

If I come from an imperative programming background, how do I wrap my head around the idea of no dynamic variables to keep track of things in Haskell?

So I'm trying to teach myself Haskell. I am currently on the 11th chapter of Learn You a Haskell for Great Good and am doing the 99 Haskell Problems as well as the Project Euler Problems.
Things are going alright, but I find myself constantly doing something whenever I need to keep track of "variables". I just create another function that accepts those "variables" as parameters and recursively feed it different values depending on the situation. To illustrate with an example, here's my solution to Problem 7 of Project Euler, Find the 10001st prime:
answer :: Integer
answer = nthPrime 10001

nthPrime :: Integer -> Integer
nthPrime n
  | n < 1     = -1
  | otherwise = nthPrime' n 1 2 []

nthPrime' :: Integer -> Integer -> Integer -> [Integer] -> Integer
nthPrime' n currentIndex possiblePrime previousPrimes
  | isFactorOfAnyInThisList possiblePrime previousPrimes = nthPrime' n currentIndex theNextPossiblePrime previousPrimes
  | otherwise =
      if currentIndex == n
        then possiblePrime
        else nthPrime' n currentIndexPlusOne theNextPossiblePrime previousPrimesPlusCurrentPrime
  where currentIndexPlusOne = currentIndex + 1
        theNextPossiblePrime = nextPossiblePrime possiblePrime
        previousPrimesPlusCurrentPrime = possiblePrime : previousPrimes
I think you get the idea. Let's also just ignore the fact that this solution can be made to be more efficient, I'm aware of this.
So my question is kind of a two-part question. First, am I going about Haskell all wrong? Am I stuck in the imperative programming mindset and not embracing Haskell as I should? And if so, as I feel I am, how do I avoid this? Is there a book or source you can point me to that might help me think more Haskell-like?
Your help is much appreciated,
-Asaf
Am I stuck in the imperative programming mindset and not embracing
Haskell as I should?
You are not stuck, at least I hope not. What you are experiencing is absolutely normal. While you were working with imperative languages you learned (maybe without knowing it) to see programming problems from a very specific perspective - namely in terms of the von Neumann machine.
If you have the problem of, say, making a list that contains some sequence of numbers (let's say we want the first 1000 even numbers), you immediately think of: a linked list implementation (perhaps from the standard library of your programming language), a loop and a variable that you'd set to a starting value and then you would loop for a while, updating the variable by adding 2 and putting it at the end of the list.
See how you mostly think to serve the machine? Memory locations, loops, etc.!
In imperative programming, one thinks about how to manipulate certain memory cells in a certain order to arrive at the solution all the time. (This is, btw, one reason why beginners find learning (imperative) programming hard. Non programmers are simply not used to solve problems by reducing it to a sequence of memory operations. Why should they? But once you've learned that, you have the power - in the imperative world. For functional programming you need to unlearn that.)
In functional programming, and especially in Haskell, you merely state the construction law of the list. Because a list is a recursive data structure, this law is of course also recursive. In our case, we could, for example say the following:
constructStartingWith n = n : constructStartingWith (n+2)
And almost done! To arrive at our final list we only have to say where to start and how many we want:
result = take 1000 (constructStartingWith 0)
Note that a more general version of constructStartingWith is available in the library, it is called iterate and it takes not only the starting value but also the function that makes the next list element from the current one:
iterate f n = n : iterate f (f n)
constructStartingWith = iterate (2+) -- defined in terms of iterate
Another approach is to assume that we had another list our list could be made from easily. For example, if we had the list of the first n integers we could make it easily into the list of even integers by multiplying each element with 2. Now, the list of the first 1000 (non-negative) integers in Haskell is simply
[0..999]
And there is a function map that transforms lists by applying a given function to each argument. The function we want is to double the elements:
double n = 2*n
Hence:
result = map double [0..999]
Later you'll learn more shortcuts. For example, we don't need to define double, but can use a section: (2*) or we could write our list directly as a sequence [0,2..1998]
But not knowing these tricks yet should not make you feel bad! The main challenge you are facing now is to develop a mentality where you see that the problem of constructing the list of the first 1000 even numbers is a two staged one: a) define how the list of all even numbers looks like and b) take a certain portion of that list. Once you start thinking that way you're done even if you still use hand written versions of iterate and take.
Back to the Euler problem: Here we can use the top-down method (and a few basic list manipulation functions one should indeed know about: head, drop, filter, any). First, if we had the list of primes already, we can just drop the first 1000 and take the head of the rest to get the 1001st one:
result = head (drop 1000 primes)
We know that after dropping any number of elements from an infinite list, there will still remain a nonempty list to pick the head from; hence, the use of head is justified here. When you're unsure if there are more than 1000 primes, you should write something like:
result = case drop 1000 primes of
           []    -> error "The ancient greeks were wrong! There are less than 1001 primes!"
           (r:_) -> r
Now for the hard part. Not knowing how to proceed, we could write some pseudo code:
primes = 2 : {-an infinite list of numbers that are prime-}
We know for sure that 2 is the first prime, the base case, so to speak, thus we can write it down. The unfilled part gives us something to think about. For example, the list should start at some value that is greater than 2, for obvious reasons. Hence, refined:
primes = 2 : {- something like [3..] but only the ones that are prime -}
Now, this is the point where there emerges a pattern that one needs to learn to recognize. This is surely a list filtered by a predicate, namely prime-ness (it does not matter that we don't know yet how to check prime-ness, the logical structure is the important point. (And, we can be sure that a test for prime-ness is possible!)). This allows us to write more code:
primes = 2 : filter isPrime [3..]
See? We are almost done. In 3 steps, we have reduced a fairly complex problem in such a way that all that is left to write is a quite simple predicate.
Again, we can write in pseudocode:
isPrime n = {- false if any number in 2..n-1 divides n, otherwise true -}
and can refine that. Since this is almost Haskell already, it is easy enough:
isPrime n = not (any (divides n) [2..n-1])
divides n p = n `rem` p == 0
Note that we have not done any optimization yet. For example, we could construct the list to be filtered so that it contains only odd numbers in the first place, since we know that the even ones are not prime. More importantly, we want to reduce the number of candidates we have to try in isPrime. And here some mathematical knowledge is needed (the same would be true if you programmed this in C++ or Java, of course): it suffices to check whether the n we are testing is divisible by any prime number, and we do not need to check divisibility by prime numbers whose square is greater than n. Fortunately, we have already defined the list of prime numbers and can pick the set of candidates from there! I leave this as an exercise.
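For reference, one possible sketch of that optimization, reusing the divides helper above (this spoils the exercise, so look away if you want to solve it yourself; these definitions would replace the earlier primes and isPrime):
primes = 2 : filter isPrime [3,5..]    -- only odd candidates after 2
isPrime n = not (any (divides n) (takeWhile (\p -> p * p <= n) primes))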
You'll learn later how to use the standard library and syntactic sugar like sections, list comprehensions, etc., and you will gradually stop writing your own basic functions.
Even later, when you have to do something in an imperative programming language again, you'll find it very hard to live without infinite lists, higher-order functions, immutable data, etc.
This will be as hard as going back from C to Assembler.
Have fun!
It's OK to have an imperative mindset at first. With time you will get more used to things and start seeing the places where you can write more functional programs. Practice makes perfect.
As for working with mutable variables: you can kind of keep them for now if you follow the rule of thumb of converting variables into function parameters and iteration into tail recursion, as in the sketch below.
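A minimal sketch of that rule of thumb, using a made-up example of summing the numbers from 1 to n: the mutable accumulator and loop counter of the imperative version become parameters of a helper function, and the loop body becomes a tail call:
sumTo :: Int -> Int
sumTo n = go 0 1                          -- acc = 0, i = 1: like initialising the variables
  where
    go acc i
      | i > n     = acc                   -- loop condition fails: return the accumulator
      | otherwise = go (acc + i) (i + 1)  -- "update the variables", then loop again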
Off the top of my head:
Typeclassopedia. The official v1 of the document is a pdf, but the author has moved his v2 efforts to the Haskell wiki.
What is a monad? This SO Q&A is the best reference I can find.
What is a Monad Transformer? Monad Transformers Step by Step.
Learn from masters: Good Haskell source to read and learn from.
More advanced topics such as GADTs. There's a video, which does a great job explaining it.
And last but not least, the #haskell IRC channel. Nothing can even come close to talking to real people.
I think the big change from your code to more Haskell-like code is making better use of higher-order functions, pattern matching and laziness. For example, you could write the nthPrime function like this (using a similar algorithm to yours, again ignoring efficiency):
nthPrime n = primes !! (n - 1) where
  primes = filter isPrime [2..]
  isPrime p = isPrime' p [2..p - 1]
  isPrime' p [] = True
  isPrime' p (x:xs)
    | (p `mod` x == 0) = False
    | otherwise        = isPrime' p xs
E.g. nthPrime 4 returns 7. A few things to note:
The isPrime' function uses pattern matching to implement the function, rather than relying on if statements.
The primes value is an infinite list of all primes. Since Haskell is lazy, this is perfectly acceptable.
filter is used rather than reimplementing that behaviour using recursion.
With more experience you will find yourself writing more idiomatic Haskell code - it sort of happens automatically with experience. So don't worry about it; just keep practicing and reading other people's code.
Another approach, just for variety! Strong use of laziness...
module Main where

-- nonmults n next l removes the multiples of n from the ascending list l,
-- where next is the next multiple of n we expect to encounter.
nonmults :: Int -> Int -> [Int] -> [Int]
nonmults n next [] = []
nonmults n next l@(x:xs)
  | x < next  = x : nonmults n next xs
  | x == next = nonmults n (next + n) xs
  | otherwise = nonmults n (next + n) l

-- keep the head of the list as a prime and strike out its multiples
-- from the rest, recursively.
select_primes :: [Int] -> [Int]
select_primes [] = []
select_primes (x:xs) =
  x : (select_primes $ nonmults x (x + x) xs)

main :: IO ()
main = do
  let primes = select_primes [2 ..]
  putStrLn $ show $ primes !! 10000   -- the first prime is index 0 ...
I want to try to answer your question without using ANY functional programming or math, not because I don't think you will understand it, but because your question is very common and maybe others will benefit from the mindset I will try to describe. I'll preface this by saying I am not a Haskell expert by any means, but I have gotten past the mental block you have described by realizing the following:
1. Haskell is simple
Haskell, and other functional languages that I'm not so familiar with, are certainly very different from your 'normal' languages, like C, Java, Python, etc. Unfortunately, the way our psyche works, humans prematurely conclude that if something is different, then A) they don't understand it, and B) it's more complicated than what they already know. If we look at Haskell very objectively, we will see that these two conjectures are totally false:
"But I don't understand it :("
Actually you do. Everything in Haskell and other functional languages is defined in terms of logic and patterns. If you can answer a question as simple as "If all Meeps are Moops, and all Moops are Moors, are all Meeps Moors?", then you could probably write the Haskell Prelude yourself. To further support this point, consider that Haskell lists are defined in Haskell terms, and are not special voodoo magic.
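To make that last point concrete, here is a sketch of a hand-rolled list type; the built-in one behaves as if it were declared this way, just with the special [] and : syntax (myMap is a made-up name):
data List a = Nil | Cons a (List a)

-- the familiar functions are then ordinary Haskell definitions over it:
myMap :: (a -> b) -> List a -> List b
myMap _ Nil         = Nil
myMap f (Cons x xs) = Cons (f x) (myMap f xs)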
"But it's complicated"
It's actually the opposite. Its simplicity is so naked and bare that our brains have trouble figuring out what to do with it at first. Compared to other languages, Haskell actually has considerably fewer "features" and much less syntax. When you read through Haskell code, you'll notice that almost all the function definitions look the same stylistically. This is very different from, say, Java, which has constructs like classes, interfaces, for loops, try/catch blocks, anonymous functions, etc., each with their own syntax and idioms.
You mentioned $ and .; again, just remember they are defined like any other Haskell function and don't necessarily ever need to be used. However, if you didn't have them available, over time you would likely implement these functions yourself once you noticed how convenient they can be.
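In fact, these are essentially their Prelude definitions (reproduced here only as an illustration; the hiding import is just so the sketch compiles on its own):
import Prelude hiding (($), (.))

infixr 0 $
infixr 9 .

($) :: (a -> b) -> a -> b
f $ x = f x

(.) :: (b -> c) -> (a -> b) -> a -> c
f . g = \x -> f (g x)

-- so the following all mean the same thing:
--   print (sum (map (2*) [1..10]))
--   print $ sum $ map (2*) [1..10]
--   (print . sum . map (2*)) [1..10]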
2. There is no Haskell version of anything
This is actually a great thing, because in Haskell, we have the freedom to define things exactly how we want them. Most other languages provide building blocks that people string together into a program. Haskell leaves it up to you to first define what a building block is, before building with it.
Many beginners ask questions like "How do I do a for loop in Haskell?" and innocent people who are just trying to help will give an unfortunate answer, probably involving a helper function, an extra Int parameter, and tail recursion until you get to 0. Sure, this construct can compute something like a for loop, but in no way is it a for loop, it is not a replacement for a for loop, and it is not really even similar to a for loop if you consider the flow of execution. Similar is the State monad for simulating state: it can be used to accomplish things similar to what static variables do in other languages, but in no way is it the same thing. Most people leave off the last tidbit about it not being the same thing when they answer these kinds of questions, and I think that only confuses people more until they realize it on their own.
3. Haskell is a logic engine, not a programming language
This is probably the least true of the points I'm trying to make, but hear me out. In imperative programming languages, we are concerned with making our machines do stuff, perform actions, change state, and so on. In Haskell, we try to define what things are and how they are supposed to behave. We are usually not concerned with what something is doing at any particular time. This certainly has benefits and drawbacks, but that's just how it is. It is very different from what most people think of when you say "programming language".
So that's my take on how to leave an imperative mindset and move to a more functional one. Realizing how sensible Haskell is will help you stop looking at your own code funny. Hopefully thinking about Haskell in these ways will help you become a more productive Haskeller.