Why are we using i as a counter in loops? [closed] - variables

I know this might seem like an absolutely silly question to ask, yet I am too curious not to ask...
Why did "i" and "j" become THE variables to use as counters in most control structures?
Although common sense tells me they are just like X, which is used for representing unknown values, I can't help but think that there must be a reason why everyone gets taught the same way over and over again.
Is it because it is actually recommended for best practices, or a convention, or does it have some obscure reason behind it?
Just in case, I know I can give them whatever name I want and that variable names are not relevant.

It comes ultimately from mathematics: the summation notation traditionally uses i for the first index, j for the second, and so on. Example (from http://en.wikipedia.org/wiki/Summation):
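\sum_{i=1}^{n} a_i = a_1 + a_2 + \cdots + a_n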
It's also used that way for collections of things, like if you have a bunch of variables x1, x2, ... xn, then an arbitrary one will be known as xi.
As for why it's that way, I imagine SLaks is correct and it's because I is the first letter in Index.

I believe it dates back to Fortran. Variables starting with I through N were integer by default; the others were real. This meant that I was the first integer variable and J the second, etc., so they naturally fell into use in loops.

Mathematicians were using i, j, k to designate integers in algebra (subscripts, series, summations, etc.) long before computers were around (e.g. in 1836 or 1816); this is the origin of the FORTRAN variable-type defaults. The habit of using letters from the end of the alphabet (..., x, y, z) for unknown variables and from the beginning (a, b, c, ...) for constants is generally attributed to René Descartes, so I assume i, j, k, ..., n (in the middle of the alphabet) for integers is likely due to him too.

i = integer
Comes from Fortran, where variables starting with the letters I through N were integers by default and variables starting with the other letters were reals. Thus I was the first and shortest integer variable name. Fortran was one of the earliest programming languages in widespread use, and the habits developed by programmers using it carried over to other languages.
EDIT: I have no problem with the answer that it derives from mathematics. Undoubtedly that is where the Fortran designers got their inspiration. The fact is, for me anyway, when I started to program in Fortran we used I, J, K, ... for loop counters because they were short and the first legally allowed variable names for integers. As a sophomore in H.S. I had probably heard of Descartes (and a very few others), but made very little connection to mathematics when programming. In fact, the first course I took was called "Fortran for Business" and was taught not by the math faculty, but the business/econ faculty.
For me, at least, the naming of variables had little to do with mathematics and everything to do with the habits I picked up writing Fortran code, which I carried into other languages.

i stands for Index.
j comes after i.

These symbols were used as matrix indexes in mathematics long before electronic computers were invented.

I think it's most likely derived from index (in the mathematical sense) - it's used commonly as an index in sums or other set-based operations, and most likely has been used that way since before there were programming languages.

There's a preference in maths for using consecutive letters in the alphabet for "anonymous" variables used in a similar way. Hence, not just "i, j, k", but also "f, g, h", "p, q, r", "x, y, z" (rarely with "u, v, w" prepended), and "α, β, γ".
Now "f, g, h" and "x, y, z" are not used freely: the former is for functions, the latter for dimensions. "p, q, r" are also often used for functions.
Then there are other constraints on available sequences: "l" and "o" are avoided, because they look too much like "1" and "0" in many fonts. "t" is often used for time, "d & δ" for differentials, and "a, s, m, v" for the physical measures of acceleration, displacement, mass, and velocity. That doesn't leave many runs of three consecutive letters without unwanted mathematical associations to use for indices.
Then, as several others have noticed, conventions from mathematics had a strong influence on early programming conventions, and "α, β, γ" weren't available in many early character sets.

Another possible answer is that i, j, and k come from Hamilton's quaternions.
Euler picked i for the imaginary unit.
Hamilton needed two more square roots of -1:
i^2 = j^2 = k^2 = ijk = -1
Hamilton was really influential, and quaternions were the standard way to do 3D analysis before 1900. By then, mathematicians were used to thinking of (ijk) as a matched set.
Vector calculus replaced quaternionic analysis in the 1890s because it was a better way to write Maxwell's equations. But people tended to write vector quantities like this: (3i - 2j + k) instead of (3, -2, 1). So (i, j, k) became the standard basis vectors in R^3.
Finally, physicists started using group theory to describe symmetries in systems of differential equations. So (ijk) started to connote "vectors that get swapped around by permutation groups," then drifted towards "index-like things that take on all possible values in some specified set," which is basically what they mean in a for loop.

By process of elimination (a little biased):
a seems an array
b seems another array
c seems a language name
d seems another language name
e seems exception
f looks bad in combination with "for" (for f, a pickup?)
g seems g force
h seems height
i seems an index
j seems i (another index)
k seems a constant k
l seems a number one (1)
m seems a matrix
n seems a node
o seems an output
p sounds like a pointer
q seems a queue
r seems a return value
s seems a string
t looks like time
u reserved for UVW mapping or electric phase
v reserved for UVW mapping or electric phase or a vector
w reserved for UVW mapping or electric phase or a weight
x seems an axis (or an unknown variable)
y seems an axis
z seems a third axis

One sunny afternoon, Archimedes was pondering (as was usual for sunny afternoons) and ran into his buddy Eratosthenes.
Archimedes said, "Archimedes to Eratosthenes greeting! I'm trying to come up with a solution to the ratio of several spherical rigid bodies in equilibrium. I wish to iterate over these bodies multiple times, but I'm having a frightful time keeping track of how many iterations I've done!"
Eratosthenes said, "Why Archimedes, you ripe plum of a kidder, you could merely mark successive rows of lines in the sand, each keeping track of the number of iterations you've done within iteration!"
Archimedes cried out to the world that his great friend was undeniably a shining beacon of intelligence for coming up with such a simple solution. But Archimedes remarked that he likes to walk in circles around his sand pit while he ponders. Thus, there was risk of losing track of which row was on top, and which was on bottom.
"Perhaps I should mark these rows with a letter of the alphabet just off to the side so that I will always know which row is which! What think you of that?" he asked, then added, "But Eratosthenes... whatever letters shall I use?"
Eratosthenes was sure he didn't know which letters would be best, and said as much to Archimedes. But Archimedes was unsatisfied and continued to prod the poor librarian to choose, at least, the two letters that he would require for his current sphere equilibrium solution.
Eratosthenes, finally tired of the incessant request for two letters, yelled, "I JUST DON'T KNOW!!!"
So Archimedes chose the first two letters in Eratosthenes' exclamatory sentence, and thanked his friend for the contribution.
These symbols were quickly adopted by ancient Greek Java developers, and the rest is, well... history.

I think it's because a lot of loops use an Int type variable to do the counting, like
for (int i = 0; etc
and when you type, you actually speak it out in your head (like when you read), so in your mind, you say 'int....'
and when you have to make up a letter right after that 'int....' , you say / type the 'i' because that is the first letter you think of when you've just said 'int'
It's like when you spell a word for kids who are just learning to read: you spell it using names, like this:
WORD spells William W, Ok O, Ruby R, Done D
So you say Int i, Double d, Float f, String s, etc., based on the first letter.
And j is used because it follows right after i.

I think it's a combination of the other mentioned reasons :
For starters, 'i' was commonly used by mathematicians in their notation, and in the early days of computing with languages that weren't binary (i.e. they had to be parsed and lexed in some fashion), the vast majority of computer users were also mathematicians (... and scientists and engineers), so the notation fell into use in computer languages for programming loops, and it has kind of just stuck around ever since.
Combine this with the fact that screen space in those very early days was very limited, as was memory, and it made sense to keep variable names short.

Possibly historical?
FORTRAN, arguably the first high-level language, defined variables starting with I through N as integer datatypes by default, and loops could only be controlled by integer variables, so perhaps the convention continues from there?
e.g.:
    do 100 i = j, 100, 5
      ....
100 continue
....

i = iterator, i = index, i = integer
Whichever you figure "i" stands for, it still "fits the bill".
Also, unless you have only a single line of code within that loop, you should probably name the iterator/index/integer variable something more meaningful, like employeeIndex.
BTW, I usually use "i" in my simple iterator loops; unless of course it contains multiple lines of code.

i = iota, j = jot; both small changes.
iota is the smallest letter in the Greek alphabet; in English its meaning is linked to small changes, as in "not one iota" (from a phrase in the New Testament: "until heaven and earth pass away, not an iota, not a dot, will pass from the Law" (Mt 5:18)).
A counter represents a small change in a value.
And from iota comes jot (iot), which is also a synonym for a small change.
cf. http://en.wikipedia.org/wiki/Iota

Well, from mathematics (for Latin letters):
a,b: used as constants or as integers for a rational number
c: a constant
d: derivative
e: Euler's number
f,g,h: functions
i,j,k: are indexes (also unit vectors and the quaternions)
l: generally not used. looks like 1
m,n: are rows and columns of matrices or as integers for rational numbers
o: also not used (unless you're in little o notation)
p,q: often used as primes
r: sometimes a spatial change of variable, other times related to prime numbers
s,t: spatial and temporal variables or s is used as a change of variable for t
u,v,w: change of variable
x,y,z: variables

Many possible main reasons, I guess:
mathematicians use i and j for natural numbers in formulas (at least the ones that rarely use complex numbers), so this carried over to programming
from C, i hints at int. And if you need another int, then i2 is just way too long, so you decide to use j.
there are languages where the first letter decides the type, and i is then an integer.

It comes from Fortran, where i,j,k,l,m,n are implicitly integers.

It definitely comes from mathematics, which long preceded computer programming.
So, where did it come from in math? My completely uneducated guess is that it's as one fellow said: mathematicians like to use alphabetic clusters for similar things -- f, g, h for functions; x, y, z for numeric variables; p, q, r for logical variables; u, v, w for other sets of variables, especially in calculus; a, b, c for a lot of things. i, j, k comes in handy for iterative variables, and that about exhausts the possibilities. Why not m, n? Well, they are used for integers, but more often for the endpoints of iterations rather than the iterative variables themselves.
Someone should ask a historian of mathematics.

Counters are so common in programs, and in the early days of computing, everything was at a premium...
Programmers naturally tried to conserve pixels, and the 'i' required fewer pixels than any other letter to represent. (Mathematicians, being lazy, picked it for the same reason - as the smallest glyph).
As stated previously, 'j' just naturally followed...
:)

I use it for a number of reasons.
Usually my loops are int based, so you make a complete triangle on the keyboard typing "int i", with the exception of the space, which I handle with my thumb. This is a very fast sequence to type.
The "i" could stand for iterator, integer, increment, or index, each of which makes logical sense.
With my personal uses set aside, the theory that it derives from FORTRAN, where integer variables used the letters I through N, is correct.

I learned FORTRAN on a Control Data Corp. 3100 in 1965. Variables starting with 'I' through 'N' were implied to be integers. Ex: 'IGGY' and 'NORB' were integers, 'XMAX' and 'ALPHA' were floating-point. However, you could override this through explicit declaration.

Related

Octave minimization for a many-body Hamiltonian with non-linear constraint

I work in theoretical physics, and I have come upon a problem that requires the minimization of a particular Hamiltonian operator for a system of 8 particles, with one non-linear constraint. Due to the complexity of the system, I cannot define the entire Hamiltonian "in one go", nor the constraint.
By this I mean that the quantity I am searching for is defined recurrently, depending on complex summations over quantities calculated for systems of 7 particles, which in turn depend on quantities calculated for systems of 6, and so on, until it reaches a one- or two-particle system, for which said quantities are given as initial values, dependent on the elements of a column vector (the argument/minimization parameters). The constraint itself is also of this form, requiring the "overlap" between the states of 8 particles to be exactly 1 (i.e. the state must be normalized).
I have been thinking of a way to use fmincon for this, but I've come up short, since my function has an implicit dependence on the parameters and I can't write the whole thing explicitly. For a better understanding, here is some of the code:
for m=3:npairs+1
  for n=3:npairs+1
    for i=1:nsps
      for j=1:nsps
        overlap(m,n)=overlap(m,n)+x(i)*x(j)*(delta(i,j)*(overlap(m-1,n-1)-N(m-1,n-1,i))+p0p(m-1,n-1,j,i));
        p(m,n,i)=(n-1)*x(i)*overlap(m,n-1)-(n-2)*(n-1)*x(i)*x(i)*((m-1)*x(i)*overlap(m-1,n-1)-(m-2)*(m-1)*x(i)*x(i)*p(m-1,n-1,i));
        N(m,n,i)=2*(n-1)*x(i)*p(n-1,m,i);
        p0p(m,n,i,j)=(m-1)*(n-1)*x(i)*x(j)*overlap(m-1,n-1)-(m-1)*(n-1)*(m-2)*x(i)*x(i)*x(j)*p(m-2,n-1,i)-(m-1)*(n-1)*(n-2)*x(i)*x(j)*x(j)*p0(m-1,n-2,j)-(m-1)*(n-1)*(m-2)*(n-2)*x(i)*x(i)*x(j)*x(j)*(delta(i,j)*(overlap(m-2,n-2)-N(m-2,n-2,i))+p0p(m-2,n-2,j,i));
      endfor
    endfor
  endfor
endfor

function [E]=H(x)
  E=summation over all i and j of N and p0p for m=n=8  % not actual code
endfunction
overlap(9,9)=1 %constraint
It's hard to give a specific answer, but I would advise the following to get you started.
First, note that the inner two steps of the nested loop can be vectorised, since i and j always appear as indices (whereas m and n make backreferences, so they cannot be vectorised). So your 4-level loop can be reduced to a 2-level loop containing 4 functions operating over i-by-j matrices.
Second, note that the whole construct can be expressed as a recursive function. If you have suitable base cases for m = 0, n = 0, you can iteratively obtain all i,j matrices for all cases up to m=9,n=9. In particular, you can try to 'memoize' the early steps, and plug them into higher steps, rather than rely on actual recursion.
Assuming you need to sum with the first two indices fixed to 8 (if I understood correctly), you can do this easily with anonymous functions:
https://octave.org/doc/v6.1.0/Anonymous-Functions.html#Anonymous-Functions
# creating some data
A=ones(8,8,4,4);
B=2*ones(8,8,4,4);
# defining 2 versions of sums
f = @(A,B) [sum(sum(A(8,8,:,:))), sum(sum(B(8,8,:,:)))];
g = @(A,B) sum(sum(A(8,8,:,:))) + sum(sum(B(8,8,:,:)));
E1=f(A,B)
E2=g(A,B)
the output will be:
octave:21> E1=f(A,B)
E1 =
16 32
octave:22> E2=g(A,B)
E2 = 48

J's # operator: why not reversed?

I've been studying J for the last few weeks, and something that has really buggered me is the dyadic case of the # operator: the only way I've used it yet is similar to the following:
(1 p: a) # a
If this were reversed, the parentheses could be omitted:
a #~ 1 p: a
Why was it chosen not to take the reverse of the current arguments? Backwards familiarity with APL, or something I'm completely overlooking?
In general, J's primitives are designed to take "primary data" on the right, and "control data" on the left.
The distinction between "primary" and "control" data is not sharp, but in general one would expect "primary" data to vary more often than "control" data. That is, one would expect that the "control" data is less likely to be calculated than the "primary" data.
The reason for that design choice is exactly as you point out: if the data that is more likely to be calculated (as opposed to fixed in advance) appears on the right, then more J phrases can be expressed as simple trains or pipelines of verbs, without excessive parenthesization (given that J executes right-to-left).
Now, in the case of #, which data is more likely to be calculated? You're 100% right that the filter (or mask) is very likely to be calculated. However, the data to be filtered is almost certain to be calculated. Where did you get your a, for example?
QED.
PS: if your a can be calculated by some J verb, as in a=: ..., then your whole result, filter and all, can be expressed with primeAs =: 1&p: # ... .
PPS: Note the 1&p:, there. That's another example of "control" vs "primary": the 1 is control data - you can tell because it's forever bound to p: - and it's fixed. And so, uncoincidentally, p: was designed to take it as a left argument.
PPPS: This concept of "control data appears on the left" has been expressed many different ways. You can find one round-up of veteran Jers' explanations here: http://www.jsoftware.com/pipermail/general/2007-May/030079.html .

If I come from an imperative programming background, how do I wrap my head around the idea of no dynamic variables to keep track of things in Haskell?

So I'm trying to teach myself Haskell. I am currently on the 11th chapter of Learn You a Haskell for Great Good and am doing the 99 Haskell Problems as well as the Project Euler Problems.
Things are going alright, but I find myself constantly doing something whenever I need to keep track of "variables". I just create another function that accepts those "variables" as parameters and recursively feed it different values depending on the situation. To illustrate with an example, here's my solution to Problem 7 of Project Euler, Find the 10001st prime:
answer :: Integer
answer = nthPrime 10001

nthPrime :: Integer -> Integer
nthPrime n
  | n < 1 = -1
  | otherwise = nthPrime' n 1 2 []

nthPrime' :: Integer -> Integer -> Integer -> [Integer] -> Integer
nthPrime' n currentIndex possiblePrime previousPrimes
  | isFactorOfAnyInThisList possiblePrime previousPrimes = nthPrime' n currentIndex theNextPossiblePrime previousPrimes
  | otherwise =
      if currentIndex == n
        then possiblePrime
        else nthPrime' n currentIndexPlusOne theNextPossiblePrime previousPrimesPlusCurrentPrime
  where currentIndexPlusOne = currentIndex + 1
        theNextPossiblePrime = nextPossiblePrime possiblePrime
        previousPrimesPlusCurrentPrime = possiblePrime : previousPrimes
I think you get the idea. Let's also just ignore the fact that this solution can be made to be more efficient, I'm aware of this.
So my question is kind of a two-part question. First, am I going about Haskell all wrong? Am I stuck in the imperative programming mindset and not embracing Haskell as I should? And if so, as I feel I am, how do I avoid this? Is there a book or source you can point me to that might help me think more Haskell-like?
Your help is much appreciated,
-Asaf
Am I stuck in the imperative programming mindset and not embracing
Haskell as I should?
You are not stuck, or at least I hope not. What you are experiencing is absolutely normal. While you were working with imperative languages you learned (maybe without knowing it) to see programming problems from a very specific perspective - namely in terms of the von Neumann machine.
If you have the problem of, say, making a list that contains some sequence of numbers (let's say we want the first 1000 even numbers), you immediately think of a linked list implementation (perhaps from the standard library of your programming language), a loop, and a variable that you'd set to a starting value; then you would loop for a while, updating the variable by adding 2 and appending it to the end of the list.
See how you mostly think to serve the machine? Memory locations, loops, etc.!
In imperative programming, one thinks all the time about how to manipulate certain memory cells in a certain order to arrive at the solution. (This is, btw, one reason why beginners find learning (imperative) programming hard. Non-programmers are simply not used to solving problems by reducing them to a sequence of memory operations. Why should they be? But once you've learned that, you have the power - in the imperative world. For functional programming you need to unlearn that.)
In functional programming, and especially in Haskell, you merely state the construction law of the list. Because a list is a recursive data structure, this law is of course also recursive. In our case, we could, for example, say the following:
constructStartingWith n = n : constructStartingWith (n+2)
And almost done! To arrive at our final list we only have to say where to start and how many we want:
result = take 1000 (constructStartingWith 0)
Note that a more general version of constructStartingWith is available in the library; it is called iterate, and it takes not only the starting value but also the function that makes the next list element from the current one:
iterate f n = n : iterate f (f n)
constructStartingWith = iterate (2+) -- defined in terms of iterate
Another approach is to assume that we had another list from which our list could easily be made. For example, if we had the list of the first n integers, we could easily turn it into the list of even integers by multiplying each element by 2. Now, the list of the first 1000 (non-negative) integers in Haskell is simply
[0..999]
And there is a function map that transforms lists by applying a given function to each element. The function we want doubles its argument:
double n = 2*n
Hence:
result = map double [0..999]
Later you'll learn more shortcuts. For example, we don't need to define double but can use a section, (2*), and we could write our list directly as the sequence [0,2..1998].
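If you want to convince yourself that these three constructions really produce the same list, here is a tiny self-contained check (the names evens1, evens2, and evens3 are invented for this sketch):

evens1 = take 1000 (iterate (+2) 0)   -- built from the recursive construction law
evens2 = map (2*) [0..999]            -- built by transforming a simpler list
evens3 = [0,2..1998]                  -- written directly as a sequence

main :: IO ()
main = print (evens1 == evens2 && evens2 == evens3)   -- prints True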
But not knowing these tricks yet should not make you feel bad! The main challenge you are facing now is to develop a mentality where you see that the problem of constructing the list of the first 1000 even numbers is a two-stage one: a) define what the list of all even numbers looks like, and b) take a certain portion of that list. Once you start thinking that way, you're done, even if you still use hand-written versions of iterate and take.
Back to the Euler problem: here we can use the top-down method (and a few basic list manipulation functions one should indeed know about: head, drop, filter, any). First, if we had the list of primes already, we could just drop the first 1000 and take the head of the rest to get the 1001st one:
result = head (drop 1000 primes)
We know that after dropping any number of elements from an infinite list, there will still remain a nonempty list to pick the head from; hence the use of head is justified here. When you're unsure if there are more than 1000 primes, you should write something like:
result = case drop 1000 primes of
  []    -> error "The ancient greeks were wrong! There are less than 1001 primes!"
  (r:_) -> r
Now for the hard part. Not knowing how to proceed, we could write some pseudo code:
primes = 2 : {-an infinite list of numbers that are prime-}
We know for sure that 2 is the first prime, the base case so to speak, thus we can write it down. The unfilled part gives us something to think about. For example, the list should start at some value greater than 2, for obvious reasons. Hence, refined:
primes = 2 : {- something like [3..] but only the ones that are prime -}
Now, this is the point where a pattern emerges that one needs to learn to recognize. This is surely a list filtered by a predicate, namely prime-ness. (It does not matter that we don't yet know how to check prime-ness; the logical structure is the important point. And we can be sure that a test for prime-ness is possible!) This allows us to write more code:
primes = 2 : filter isPrime [3..]
See? We are almost done. In 3 steps, we have reduced a fairly complex problem in such a way that all that is left to write is a quite simple predicate.
Again, we can write in pseudocode:
isPrime n = {- false if any number in 2..n-1 divides n, otherwise true -}
and can refine that. Since this is almost Haskell already, it is too easy:
isPrime n = not (any (divides n) [2..n-1])
divides n p = n `rem` p == 0
Note that we did not do any optimization yet. For example, we could construct the list to be filtered right away to contain only odd numbers, since we know that even ones are not prime. More importantly, we want to reduce the number of candidates we have to try in isPrime. And here some mathematical knowledge is needed (the same would be true if you programmed this in C++ or Java, of course), which tells us that it suffices to check whether the n we are testing is divisible by any prime number, and that we do not need to check divisibility by prime numbers whose square is greater than n. Fortunately, we have already defined the list of prime numbers and can pick the set of candidates from there! I leave this as an exercise.
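If you want to compare notes after trying the exercise, here is one possible sketch of that optimization (not the only formulation; takeWhile is a standard Prelude function not introduced above):

primes = 2 : filter isPrime [3..]
-- only test divisibility by the primes p with p*p <= n
isPrime n = not (any (divides n) (takeWhile (\p -> p * p <= n) primes))
divides n p = n `rem` p == 0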
You'll learn later how to use the standard library and syntactic sugar like sections, list comprehensions, etc., and you will gradually stop writing your own basic functions.
Even later, when you have to do something in an imperative programming language again, you'll find it very hard to live without infinite lists, higher-order functions, immutable data, etc.
This will be as hard as going back from C to Assembler.
Have fun!
It's ok to have an imperative mindset at first. With time you will get more used to things and start seeing the places where you can have more functional programs. Practice makes perfect.
As for working with mutable variables, you can kind of keep them for now if you follow the rule of thumb of converting variables into function parameters and iteration into tail recursion.
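As a minimal illustration of that rule of thumb (the names here are invented): an imperative running total becomes an accumulator parameter, and the loop becomes a tail call:

-- imperative pseudocode: total = 0; for x in xs: total = total + x
sumLoop :: [Int] -> Int
sumLoop = go 0
  where
    go acc []     = acc               -- loop exit: return the accumulator
    go acc (x:xs) = go (acc + x) xs   -- loop body: update and tail-recurse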
Off the top of my head:
Typeclassopedia. The official v1 of the document is a pdf, but the author has moved his v2 efforts to the Haskell wiki.
What is a monad? This SO Q&A is the best reference I can find.
What is a Monad Transformer? Monad Transformers Step by Step.
Learn from masters: Good Haskell source to read and learn from.
More advanced topics such as GADTs. There's a video, which does a great job explaining it.
And last but not least, the #haskell IRC channel. Nothing comes close to talking to real people.
I think the big change from your code to more Haskell-like code is using higher-order functions, pattern matching, and laziness better. For example, you could write the nthPrime function like this (using a similar algorithm to what you did, again ignoring efficiency):
nthPrime n = primes !! (n - 1)
  where
    primes = filter isPrime [2..]
    isPrime p = isPrime' p [2..p - 1]
    isPrime' p [] = True
    isPrime' p (x:xs)
      | p `mod` x == 0 = False
      | otherwise      = isPrime' p xs
E.g. nthPrime 4 returns 7. A few things to note:
The isPrime' function uses pattern matching to implement the function, rather than relying on if statements.
The primes value is an infinite list of all primes. Since Haskell is lazy, this is perfectly acceptable.
filter is used rather than reimplementing that behaviour using recursion.
With more experience you will find you write more idiomatic Haskell code - it sort of happens automatically with experience. So don't worry about it, just keep practicing, and keep reading other people's code.
Another approach, just for variety! Strong use of laziness...
module Main where
nonmults :: Int -> Int -> [Int] -> [Int]
nonmults n next [] = []
nonmults n next l@(x:xs)
  | x < next  = x : nonmults n next xs
  | x == next = nonmults n (next + n) xs
  | otherwise = nonmults n (next + n) l

select_primes :: [Int] -> [Int]
select_primes [] = []
select_primes (x:xs) =
  x : (select_primes $ nonmults x (x + x) xs)

main :: IO ()
main = do
  let primes = select_primes [2 ..]
  putStrLn $ show $ primes !! 10000 -- the first prime is index 0 ...
I want to try to answer your question without using ANY functional programming or math, not because I don't think you will understand it, but because your question is very common and maybe others will benefit from the mindset I will try to describe. I'll preface this by saying I am not a Haskell expert by any means, but I have gotten past the mental block you describe by realizing the following:
1. Haskell is simple
Haskell, and other functional languages that I'm not so familiar with, are certainly very different from your 'normal' languages, like C, Java, Python, etc. Unfortunately, the way our psyche works, humans prematurely conclude that if something is different, then A) they don't understand it, and B) it's more complicated than what they already know. If we look at Haskell very objectively, we will see that these two conjectures are totally false:
"But I don't understand it :("
Actually you do. Everything in Haskell and other functional languages is defined in terms of logic and patterns. If you can answer a question as simple as "If all Meeps are Moops, and all Moops are Moors, are all Meeps Moors?", then you could probably write the Haskell Prelude yourself. To further support this point, consider that Haskell lists are defined in Haskell terms, and are not special voodoo magic.
"But it's complicated"
It's actually the opposite. Its simplicity is so naked and bare that our brains have trouble figuring out what to do with it at first. Compared to other languages, Haskell actually has considerably fewer "features" and much less syntax. When you read through Haskell code, you'll notice that almost all the function definitions look the same stylistically. This is very different from, say, Java, which has constructs like classes, interfaces, for loops, try/catch blocks, anonymous functions, etc., each with its own syntax and idioms.
You mentioned $ and .; again, just remember they are defined just like any other Haskell function and don't necessarily ever need to be used. However, if you didn't have them available, over time you would likely implement these functions yourself when you noticed how convenient they can be.
2. There is no Haskell version of anything
This is actually a great thing, because in Haskell, we have the freedom to define things exactly how we want them. Most other languages provide building blocks that people string together into a program. Haskell leaves it up to you to first define what a building block is, before building with it.
Many beginners ask questions like "How do I do a For loop in Haskell?" and innocent people who are just trying to help will give an unfortunate answer, probably involving a helper function, an extra Int parameter, and tail recursing until you get to 0. Sure, this construct can compute something like a for loop, but in no way is it a for loop, it's not a replacement for a for loop, and in no way is it really even similar to a for loop if you consider the flow of execution. Similarly with the State monad for simulating state: it can be used to accomplish similar things to what static variables do in other languages, but in no way is it the same thing. Most people leave off the last tidbit about it not being the same when they answer these kinds of questions, and I think that only confuses people more until they realize it on their own.
3. Haskell is a logic engine, not a programming language
This is probably the least true point I'm trying to make, but hear me out. In imperative programming languages, we are concerned with making our machines do stuff: perform actions, change state, and so on. In Haskell, we try to define what things are and how they are supposed to behave. We are usually not concerned with what something is doing at any particular time. This certainly has benefits and drawbacks, but that's just how it is. This is very different from what most people think of when you say "programming language".
So that's my take on how to leave an imperative mindset and move to a more functional one. Realizing how sensible Haskell is will help you stop looking at your own code funny. Hopefully thinking about Haskell in these ways will help you become a more productive Haskeller.

Is the language of all strings over the alphabet "a,b,c" with the same number of substrings "ab" & "ba" regular?

I believe the answer is NO, but it is hard to give a formal demonstration of it, or even an informal one.
Any ideas on how to approach this?
It's clearly not regular. How is an FA going to recognize (abc)^n c (cba)^n? Strings like this are in your language, right? The argument is a simple one based on the fact that there are infinitely many equivalence classes under the indistinguishability relation I_l.
The most common way to prove a language is NOT regular is using one of the pumping lemmas.
Using the lemma is a little tricky, since it has all those "exists" and so on. To prove a language L is not regular using the pumping lemma you have to prove that
for any integer p,
there is a word w in L of length n, with n>=p, such that
for all possible ways to decompose w as xyz, with len(xy) <= p and y non-empty,
there exists an i such that x(y^i)z (repeating the y bit i times) is NOT in L
whooo!
I'll show how the proof looks for the "same number of a's and b's" language. It should be straightforward to convert to your case:
for any given p, we can make a word of length n = 2*p:
a^p b^p (p a's followed by p b's)
any way you decompose this into xyz with |xy| <= p, y will contain only a's.
Thus, pumping the y part will make the word have more a's than b's,
thus NOT belonging to L.
If you need intuition on why this works, it follows from how you need to be able to count to arbitrarily large numbers to verify whether a word belongs to one of these languages. However, regular languages are described by finite automata, and no finite automaton can represent the infinite number of states required to represent all the numbers. (The Wikipedia article should have a formal proof.)
EDIT: It looks like you can't use the pumping lemma directly in this particular case: if you always make y one character long, you can never make a word stop being accepted (aba becoming abbbba makes no difference, and so on).
Just do the equivalence class approach suggested by Patrick87 - it will probably turn out to be cleaner than any of the dirty hacks you would need to do to make the pumping lemma applicable here.

Quick divisibility check in ZX81 BASIC

Since many of the Project Euler problems require you to do a divisibility check quite a number of times, I've been trying to figure out the fastest way to perform this task in ZX81 BASIC.
So far I've compared (N/D) to INT(N/D) to check whether N is divisible by D or not.
I have been thinking about doing the test in Z80 machine code, but I haven't yet figured out how to use the BASIC variables from the machine code.
How can it be achieved?
You can do this very fast in machine code by subtracting repeatedly. Basically you have a procedure like:
set accumulator to N
subtract D
if carry flag is set then it is not divisible
if zero flag is set then it is divisible
otherwise repeat subtraction until one of the above occurs
The 8-bit version would be something like:
DIVISIBLE_TEST:
LD B,10              ; divisor D (this operand byte gets POKEd from BASIC)
LD A,100             ; dividend N (this operand byte gets POKEd from BASIC)
DIVISIBLE_TEST_LOOP:
SUB B                ; A = A - D
JR C, END_DIVISIBLE_TEST   ; dropped below zero: not divisible
JR Z, END_DIVISIBLE_TEST   ; hit exactly zero: divisible
JR DIVISIBLE_TEST_LOOP     ; otherwise keep subtracting
END_DIVISIBLE_TEST:
LD B,A               ; result into B (0 means divisible) ...
LD C,0               ; ... and BC is what USR returns to BASIC
RET
Now, you can call from basic using USR. What USR returns is whatever's in the BC register pair, so you would probably want to do something like:
REM poke the memory addresses with the operands to load the registers
POKE X+1, D
POKE X+3, N
LET r = USR X
IF r = 0 THEN GOTO isdivisible
IF r <> 0 THEN GOTO isnotdivisible
This is an introduction I wrote to Z80 which should help you figure this out. This will explain the flags if you're not familiar with them.
There's a load more links to good Z80 stuff from the main site although it is Spectrum rather than ZX81 focused.
A 16 bit version would be quite similar but using register pair operations. If you need to go beyond 16 bits it would get a bit more convoluted.
How you load this is up to you - but the traditional method is using DATA statements and POKEs. You may prefer to have an assembler figure out the machine code for you though!
Your existing solution may be good enough. Only replace it with something faster if you find it to be a bottleneck in profiling.
(Said with a straight face, of course.)
And anyway, on the ZX81 you can just switch to FAST mode.
I don't know if RANDOMIZE USR is available on the ZX81, but I think it can be used to call routines in assembly. To pass arguments you might need to use POKE to set some fixed memory locations before executing RANDOMIZE USR.
I remember finding a list of routines implemented in the ROM to support ZX BASIC. I'm sure there are a few that perform floating-point operations.
An alternative to floating point is to use fixed-point math. It's a lot faster in these kinds of situations where there is no math coprocessor.
You might also find more information in Sinclair User issues. They published some articles related to programming on the ZX Spectrum.
You should place the values in some pre-known memory locations first, then use the same locations from within Z80 assembler. There is no parameter passing between the two.
This is based on what I (still) remember of ZX Spectrum 48. Good luck, but you might consider upgrading your hw. ;/
The problem with Z80 machine code is that it has no floating point ops (and no integer divide or multiply, for that matter). Implementing your own FP library in Z80 assembler is not trivial. Of course, you can use the built-in BASIC routines, but then you may as well just stick with BASIC.