Best way solving optimization with multiple variables in Matlab? - optimization

I am trying to numerically compute the solutions to a system of many equations and variables (100+). So far I have tried three things:
I know that the vector of p(i) (which contains most of the endogenous variables) is decreasing. So I simply gave some starting points and then increased (decreased) my guess whenever the specific p turned out too low (high). Of course this is always conditional on the others being fixed, which is not actually the case. This should eventually work, but it is neither efficient, nor obvious that I reach a solution in finite time. It did work when the system was reduced to 4-6 variables, though.
I could create 100+ loops around each other and use bisection for each loop. This would eventually lead me to the solution, but would take ages both to program (I have no idea how to create n nested loops without writing them all out by hand, which is also bad because I would like to change the number of variables easily) and to execute.
I tried fminsearch, but as expected for that vast number of variables, it got nowhere.
I would appreciate any ideas... Here is the code (this is the fminsearch version I tried):
This is the run file:
clear all
clc
% parameter
z=1.2;
w=20;
lam=0.7;
tau=1;
N=1000;
t_min=1;
t_max=4;
M=6;
a_min=0.6;
a_max=0.8;
t=zeros(1,N);
alp=zeros(1,M);
p=zeros(1,M);
p_min=2;
p_max=1;
for i=1:N
    t(i) = t_min + (i-1)*(t_max - t_min)/(N-1);
end
for i=1:M
    alp(i) = a_min + (i-1)*(a_max - a_min)/(M-1);
    p(i)   = p_min + (i-1)*(p_max - p_min)/(M-1);
end
fun = @(p) david(p, z, w, lam, tau, N, M, t, alp);
p0=p;
fminsearch(fun,p0)
And this is the program-file:
function crit=david(p, z,w,lam,tau,N,M,t,alp)
X = zeros(M,N);
pi = zeros(M,N);
C = zeros(1,N);
Xa=zeros(1,N);
Z=zeros(1,M);
rl=0.01;
rh=1.99;
EXD=140;
while (abs(EXD) > 100)
    r1 = rl + 0.5*(rh - rl);
    for i=1:M
        for j=1:N
            X(i,j) = min(w*(1+lam), (alp(i) * p(i) / r1)^(1/(1-alp(i))) * t(j)^((z-alp(i))/(1-alp(i))));
            pi(i,j) = p(i) * t(j)^(z-alp(i)) * X(i,j)^(alp(i)) - r1*X(i,j);
        end
    end
    [C,I] = max(pi);
    Xa(1) = X(I(1),1);
    for j=2:N
        Xa(j) = X(I(j),j);
    end
    EXD = sum(Xa) - N*w;
    if (abs(EXD)>100 && EXD>0)
        rl = r1;
    elseif (abs(EXD)>100 && EXD<0)
        rh = r1;
    end
end
Ya = zeros(M,N);
for j=1:N
    Ya(I(j),j) = t(j)^(z-alp(I(j))) * X(I(j),j)^(alp(I(j)));
end
Yi = sum(Ya,2);
if (Yi(1)==0)
    Z(1) = -50;
end
for j=2:M
    if (Yi(j)==0)
        Z(j) = -50;
    else
        Z(j) = (p(1)/p(j))^tau - Yi(j)/Yi(1);
    end
end
zz=sum(abs(Z))
crit=(sum(abs(Z)));

First of all my recommendation: use your brain.
What do you know about the function? Can you use a gradient approach, linearize the problem, or perhaps fix most of the variables? If not, think twice before you decide that you are really interested in all 100 variables, and perhaps simplify the problem.
Now, if that is not possible read this:
If you found a way to quickly get a local optimum, you could simply wrap a loop around it to try different starting points and hope you will find a good optimum (see the sketch at the end of this answer).
If you really need to make lots of nested loops (and a variable number of them), I suppose it can be done with recursion, but it is not easily explained.
If you just quickly want to make a fixed number of loops inside each other, this can easily be done in Excel (hint: loop variables can be called t1, t2, ...).
If you really need to evaluate a function at a lot of points, probably creating all the points first using ndgrid and then evaluating them all at once is preferable. (Needless to say this will not be a nice solution for 100 nontrivial variables)
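To make the multi-start idea concrete, here is a minimal sketch (illustrative only, reusing fun, p_min, p_max and M from the run file above): rerun the local solver from several random starting points and keep the best result.
best_crit = Inf;
for k = 1:20
    p0_k = p_min + (p_max - p_min).*rand(1, M);   % random start between p_max and p_min
    [p_k, crit_k] = fminsearch(fun, p0_k);
    if crit_k < best_crit
        best_crit = crit_k;
        p_best    = p_k;
    end
end
This does not fix the dimensionality problem, but it is cheap to try once a single local run is fast enough.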

Related

Root finding with a kinked function using NLsolve in Julia

I am currently trying to solve a complementarity problem with a function that features a downward discontinuity, using the mcpsolve() function of the NLsolve package in Julia. The function is reproduced here for specific parameters, and the numbers below refer to the three panels of the figure.
Unfortunately, the algorithm does not always return the interior solution, even though it exists:
In (1), when starting at 0, the algorithm stays at 0, thinking that the boundary constraint binds,
In (2), when starting at 0, the algorithm stops right before the downward jump, even though the solution lies to the right of this point.
These problems occur regardless of the method used - trust region or Newton's method. Ideally, the algorithm would look for potential solutions in the entire set before stopping.
I was wondering if some of you had worked with similar functions, and had found a clever solution to bypass these issues. Note that
Starting to the right of the solution would not solve these problems, as they would also occur for other parametrizations - see (3) this time,
It is not known a priori where in the parameter space the particular cases occur.
As an illustrative example, consider the following piece of code. Note that the function is smoother, and yet here as well the algorithm cannot find the solution.
function f!(x, fvec)
    if x[1] <= 1.8
        fvec[1] = 0.1 * (sin(3*x[1]) - 3)
    else
        fvec[1] = 0.1 * (x[1]^2 - 7)
    end
end
NLsolve.mcpsolve(f!,[0.], [Inf], [0.], reformulation = :smooth, autodiff = true)
Once more, setting the initial value to something other than 0 would only postpone the problem. Also, as pointed out by Halirutan, fzero from the Roots package would probably work, but I'd rather use mcpsolve() as the problem is initially a complementarity problem.
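For reference, a rough sketch of that Roots-based fallback (my own illustration, not part of the original question): bracket the root of the same kinked function and bisect.
using Roots

f(x) = x <= 1.8 ? 0.1 * (sin(3x) - 3) : 0.1 * (x^2 - 7)

x_star = fzero(f, 0, 10)   # f(0) < 0 and f(10) > 0, so a bracketing method converges near sqrt(7)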
Thank you very much in advance for your help.

Is there a macro for creating fast Iterators from generator-like functions in julia?

Coming from Python 3 to Julia, one would love to be able to write fast iterators as functions with produce/yield syntax or something like that.
Julia's macros seem to suggest that one could build a macro which transforms such a "generator" function into a Julia iterator.
[It even seems like you could easily inline iterators written in function style, which is a feature the Iterators.jl package also tries to provide for its specific iterators https://github.com/JuliaCollections/Iterators.jl#the-itr-macro-for-automatic-inlining-in-for-loops ]
Just to give an example of what I have in mind:
@asiterator function myiterator(as::Array)
    b = 1
    for (a1, a2) in zip(as, as[2:end])
        try
            @produce a1[1] + a2[2] + b
        catch exc
        end
    end
end

for i in myiterator([(1,2), (3,1), 3, 4, (1,1)])
    @show i
end
where myiterator should ideally create a fast iterator with as low overhead as possible. And of course this is only one specific example. I ideally would like to have something which works with all or almost all generator functions.
The currently recommended way to transform a generator function into an iterator is via Julia's Tasks, at least to my knowledge. However, they also seem to be way slower than pure iterators. For instance, if you can express your function with simple iterators like imap, chain and so on (provided by the Iterators.jl package), this seems to be highly preferable.
Is it theoretically possible in julia to build a macro converting generator-style functions into flexible fast iterators?
Extra-Point-Question: If this is possible, could there be a generic macro which inlines such iterators?
Some iterators of this form can be written like this:
myiterator(as) = (a1[1] + a2[2] + 1 for (a1, a2) in zip(as, as[2:end]))
This code can (potentially) be inlined.
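As a small usage check (my own illustration, assuming the input tuples line up so that no indexing error occurs):
for i in myiterator([(1, 2), (3, 4), (5, 6)])
    @show i          # 6, then 10
end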
To fully generalize this, it is in theory possible to write a macro that converts its argument to continuation-passing style (CPS), making it possible to suspend and restart execution, giving something like an iterator. Delimited continuations are especially appropriate for this (https://en.wikipedia.org/wiki/Delimited_continuation). The result is a big nest of anonymous functions, which might be faster than Task switching, but not necessarily, since at the end of the day it needs to heap-allocate a similar amount of state.
I happen to have an example of such a transformation here (in femtolisp though, not Julia): https://github.com/JeffBezanson/femtolisp/blob/master/examples/cps.lsp
This ends with a define-generator macro that does what you describe. But I'm not sure it's worth the effort to do this for Julia.
Python-style generators – which in Julia would be closest to yielding from tasks – involve a fair amount of inherent overhead. You have to switch tasks, which is non-trivial and cannot straightforwardly be eliminated by a compiler. That's why Julia's iterators are based on functions that transform one typically immutable, simple state value into another. Long story short: no, I do not believe that this transformation can be done automatically.
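To make that concrete, here is a minimal hand-written iterator (my own illustration, using the iteration protocol of current Julia versions), where each iterate call maps one immutable state value to the next:
# counts 1..n with no Task switching; the state is simply the next integer
struct Count
    n::Int
end

Base.iterate(c::Count, i::Int = 1) = i > c.n ? nothing : (i, i + 1)

for x in Count(3)
    @show x          # 1, 2, 3
end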
After thinking a lot about how to translate Python generators to Julia without losing much performance, I implemented and tested a library of higher-level functions which implement Python-like/Task-like generators in a continuation style. https://github.com/schlichtanders/Continuables.jl
Essentially, the idea is to regard Python's yield / Julia's produce as a function which we take from the outside as an extra parameter. I called it cont for continuation. Look for instance on this reimplementation of a range
crange(n::Integer) = cont -> begin
    for i in 1:n
        cont(i)
    end
end
You can simply sum up all integers by the following code
function sum_continuable(continuable)
    a = Ref(0)
    continuable() do i
        a.x += i
    end
    a.x
end

# which simplifies with the macro Continuables.@Ref to
@Ref function sum_continuable(continuable)
    a = Ref(0)
    continuable() do i
        a += i
    end
    a
end
sum_continuable(crange(4)) # 10
As you hopefully agree, you can work with continuables almost like you would have worked with generators in python or tasks in julia. Using do notation instead of for loops is kind of the one thing you have to get used to.
This idea takes you really really far. The only standard method which is not purely implementable using this idea is zip. All the other standard higher-level tools work just like you would hope.
The performance is unbelievably faster than Tasks and even faster than Iterators in some cases (notably the naive implementation of Continuables.cmap is orders of magnitude faster than Iterators.imap). Check out the Readme.md of the github repository https://github.com/schlichtanders/Continuables.jl for more details.
EDIT: To answer my own question more directly, there is no need for a macro @asiterator, just use continuation style directly.
mycontinuable(as::Array) = cont -> begin
    b = 1
    for (a1, a2) in zip(as, as[2:end])
        try
            cont(a1[1] + a2[2] + b)
        catch exc
        end
    end
end

mycontinuable([(1,2), (3,1), 3, 4, (1,1)]) do i
    @show i
end

Eigen: Computation and incrementation of only Upper part with selfAdjointView?

I do something like this to get:
bMRes += MatrixXd(n, n).setZero()
.selfadjointView<Eigen::Upper>().rankUpdate(bM);
This gets me an incrementation of bMRes by bM * bM.transpose() but twice as fast.
Note that bMRes and bM are of type Map<MatrixXd>.
To optimize things further, I would like to skip the copy (and incrementation) of the Lower part.
In other words, I would like to compute and write only the Upper part.
Again, in other words, I would like to have my result in the Upper part and 0's in the Lower part.
If it is not clear enough, feel free to ask questions.
Thanks in advance.
Florian
If your bMRes is self-adjoint originally, you could use the following code, which only updates the upper half of bMRes.
bMRes.selfadjointView<Eigen::Upper>().rankUpdate(bM);
If not, I think you have to accept that .selfadjointView<>() will always copy the other half when assigned to a MatrixXd.
Compared to A*A.transpose() or .rankUpdate(A), the cost of copying half of A can be ignored when A is reasonably large. So I guess you don't need to optimize your code further.
If you just want to evaluate the difference, you could use low-level BLAS APIs. A*A.transpose() is equivalent to gemm(), and .rankUpdate(A) is equivalent to syrk(), but syrk() doesn't copy the other half automatically.
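For illustration, a rough sketch of the syrk() route (my own, not from the answer; it assumes column-major, contiguous Map storage): only the upper triangle of the result is written, and the lower triangle is left untouched.
#include <cblas.h>
#include <Eigen/Dense>

// C(upper) := A * A^T + C(upper); the lower triangle of C is not modified
void rank_update_upper(Eigen::Map<Eigen::MatrixXd>& C,
                       const Eigen::Map<Eigen::MatrixXd>& A)
{
    const int n = static_cast<int>(A.rows());
    const int k = static_cast<int>(A.cols());
    cblas_dsyrk(CblasColMajor, CblasUpper, CblasNoTrans,
                n, k, 1.0, A.data(), n, 1.0, C.data(), n);
}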

If I come from an imperative programming background, how do I wrap my head around the idea of no dynamic variables to keep track of things in Haskell?

So I'm trying to teach myself Haskell. I am currently on the 11th chapter of Learn You a Haskell for Great Good and am doing the 99 Haskell Problems as well as the Project Euler Problems.
Things are going alright, but I find myself constantly doing something whenever I need to keep track of "variables". I just create another function that accepts those "variables" as parameters and recursively feed it different values depending on the situation. To illustrate with an example, here's my solution to Problem 7 of Project Euler, Find the 10001st prime:
answer :: Integer
answer = nthPrime 10001

nthPrime :: Integer -> Integer
nthPrime n
  | n < 1     = -1
  | otherwise = nthPrime' n 1 2 []

nthPrime' :: Integer -> Integer -> Integer -> [Integer] -> Integer
nthPrime' n currentIndex possiblePrime previousPrimes
  | isFactorOfAnyInThisList possiblePrime previousPrimes = nthPrime' n currentIndex theNextPossiblePrime previousPrimes
  | otherwise =
      if currentIndex == n
        then possiblePrime
        else nthPrime' n currentIndexPlusOne theNextPossiblePrime previousPrimesPlusCurrentPrime
  where currentIndexPlusOne = currentIndex + 1
        theNextPossiblePrime = nextPossiblePrime possiblePrime
        previousPrimesPlusCurrentPrime = possiblePrime : previousPrimes
I think you get the idea. Let's also just ignore the fact that this solution can be made more efficient; I'm aware of that.
So my question is kind of a two-part question. First, am I going about Haskell all wrong? Am I stuck in the imperative programming mindset and not embracing Haskell as I should? And if so, as I feel I am, how do I avoid this? Is there a book or source you can point me to that might help me think more Haskell-like?
Your help is much appreciated,
-Asaf
Am I stuck in the imperative programming mindset and not embracing
Haskell as I should?
You are not stuck, at least I hope not. What you experience is absolutely normal. While you were working with imperative languages you learned (maybe without knowing it) to see programming problems from a very specific perspective - namely in terms of the von Neumann machine.
If you have the problem of, say, making a list that contains some sequence of numbers (let's say we want the first 1000 even numbers), you immediately think of: a linked list implementation (perhaps from the standard library of your programming language), a loop, and a variable that you'd set to a starting value; then you would loop for a while, updating the variable by adding 2 and appending it to the end of the list.
See how you mostly think to serve the machine? Memory locations, loops, etc.!
In imperative programming, one thinks all the time about how to manipulate certain memory cells in a certain order to arrive at the solution. (This is, btw, one reason why beginners find learning (imperative) programming hard. Non-programmers are simply not used to solving problems by reducing them to a sequence of memory operations. Why should they be? But once you've learned that, you have the power - in the imperative world. For functional programming you need to unlearn that.)
In functional programming, and especially in Haskell, you merely state the construction law of the list. Because a list is a recursive data structure, this law is of course also recursive. In our case, we could, for example say the following:
constructStartingWith n = n : constructStartingWith (n+2)
And almost done! To arrive at our final list we only have to say where to start and how many we want:
result = take 1000 (constructStartingWith 0)
Note that a more general version of constructStartingWith is available in the library; it is called iterate, and it takes not only the starting value but also the function that makes the next list element from the current one:
iterate f n = n : iterate f (f n)
constructStartingWith = iterate (2+) -- defined in terms of iterate
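A tiny check of this definition (my own addition):
take 5 (constructStartingWith 0)   -- [0,2,4,6,8]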
Another approach is to assume that we had another list our list could be made from easily. For example, if we had the list of the first n integers we could make it easily into the list of even integers by multiplying each element with 2. Now, the list of the first 1000 (non-negative) integers in Haskell is simply
[0..999]
And there is a function map that transforms lists by applying a given function to each element. The function we want is to double the elements:
double n = 2*n
Hence:
result = map double [0..999]
Later you'll learn more shortcuts. For example, we don't need to define double, but can use a section: (2*) or we could write our list directly as a sequence [0,2..1998]
But not knowing these tricks yet should not make you feel bad! The main challenge you are facing now is to develop a mentality where you see that the problem of constructing the list of the first 1000 even numbers is a two staged one: a) define how the list of all even numbers looks like and b) take a certain portion of that list. Once you start thinking that way you're done even if you still use hand written versions of iterate and take.
Back to the Euler problem: here we can use the top-down method (and a few basic list manipulation functions one should indeed know about: head, drop, filter, any). First, if we had the list of primes already, we could just drop the first 1000 and take the head of the rest to get the 1001st one:
result = head (drop 1000 primes)
We know that after dropping any number of elements from an infinite list, there will still remain a nonempty list to pick the head from; hence, the use of head is justified here. When you're unsure if there are more than 1000 primes, you should write something like:
result = case drop 1000 primes of
  []    -> error "The ancient greeks were wrong! There are less than 1001 primes!"
  (r:_) -> r
Now for the hard part. Not knowing how to proceed, we could write some pseudo code:
primes = 2 : {-an infinite list of numbers that are prime-}
We know for sure that 2 is the first prime, the base case so to speak, thus we can write it down. The unfilled part gives us something to think about. For example, the list should start at some value that is greater than 2, for obvious reasons. Hence, refined:
primes = 2 : {- something like [3..] but only the ones that are prime -}
Now, this is the point where there emerges a pattern that one needs to learn to recognize. This is surely a list filtered by a predicate, namely prime-ness. (It does not matter that we don't know yet how to check prime-ness; the logical structure is the important point. And we can be sure that a test for prime-ness is possible!) This allows us to write more code:
primes = 2 : filter isPrime [3..]
See? We are almost done. In 3 steps, we have reduced a fairly complex problem in such a way that all that is left to write is a quite simple predicate.
Again, we can write in pseudocode:
isPrime n = {- false if any number in 2..n-1 divides n, otherwise true -}
and can refine that. Since this is almost Haskell already, it is too easy:
isPrime n = not (any (divides n) [2..n-1])
divides n p = n `rem` p == 0
Note that we did not do any optimization yet. For example, we can construct the list to be filtered right away to contain only odd numbers, since we know that even ones are not prime. More importantly, we want to reduce the number of candidates we have to try in isPrime. And here, some mathematical knowledge is needed (the same would be true if you programmed this in C++ or Java, of course), which tells us that it suffices to check whether the n we are testing is divisible by any prime number, and that we do not need to check divisibility by prime numbers whose square is greater than n. Fortunately, we have already defined the list of prime numbers and can pick the set of candidates from there! I leave this as an exercise.
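(For readers who want to check their own solution to that exercise, one possible refinement looks like this; this sketch is mine, not the answerer's, and reuses the divides helper and the primes list defined above:)
-- test only prime divisors p with p*p <= n, taken from the primes list itself
isPrime n = not (any (divides n) (takeWhile (\p -> p*p <= n) primes))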
You'll learn later how to use the standard library and syntactic sugar like sections, list comprehensions, etc., and you will gradually stop writing your own basic functions.
Even later, when you have to do something in an imperative programming language again, you'll find it very hard to live without infinite lists, higher-order functions, immutable data, etc.
This will be as hard as going back from C to Assembler.
Have fun!
It's ok to have an imperative mindset at first. With time you will get more used to things and start seeing the places where you can have more functional programs. Practice makes perfect.
As for working with mutable variables you can kind of keep them for now if you follow the rule of thumb of converting variables into function parameters and iteration into tail recursion.
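As a minimal sketch of that rule of thumb (my own example): the imperative "total = 0; for i in 1..n: total += i" becomes a helper whose parameters play the role of the mutable variables.
sumTo :: Integer -> Integer
sumTo n = go 1 0
  where
    go i total
      | i > n     = total
      | otherwise = go (i + 1) (total + i)   -- "update" by passing new values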
Off the top of my head:
Typeclassopedia. The official v1 of the document is a pdf, but the author has moved his v2 efforts to the Haskell wiki.
What is a monad? This SO Q&A is the best reference I can find.
What is a Monad Transformer? Monad Transformers Step by Step.
Learn from masters: Good Haskell source to read and learn from.
More advanced topics such as GADTs. There's a video, which does a great job explaining it.
And last but not least, the #haskell IRC channel. Nothing can come close to talking to real people.
I think the big change from your code to more haskell like code is using higher order functions, pattern matching and laziness better. For example, you could write the nthPrime function like this (using a similar algorithm to what you did, again ignoring efficiency):
nthPrime n = primes !! (n - 1) where
  primes = filter isPrime [2..]
  isPrime p = isPrime' p [2..p - 1]
  isPrime' p [] = True
  isPrime' p (x:xs)
    | p `mod` x == 0 = False
    | otherwise      = isPrime' p xs
E.g. nthPrime 4 returns 7. A few things to note:
The isPrime' function uses pattern matching to implement the function, rather than relying on if statements.
the primes value is an infinite list of all primes. Since Haskell is lazy, this is perfectly acceptable.
filter is used rather than reimplementing that behaviour using recursion.
With more experience you will find you will write more idiomatic Haskell code - it sort of happens automatically with experience. So don't worry about it, just keep practicing, and reading other people's code.
Another approach, just for variety! Strong use of laziness...
module Main where

nonmults :: Int -> Int -> [Int] -> [Int]
nonmults n next [] = []
nonmults n next l@(x:xs)
  | x < next  = x : nonmults n next xs
  | x == next = nonmults n (next + n) xs
  | otherwise = nonmults n (next + n) l

select_primes :: [Int] -> [Int]
select_primes [] = []
select_primes (x:xs) =
  x : (select_primes $ nonmults x (x + x) xs)

main :: IO ()
main = do
  let primes = select_primes [2 ..]
  putStrLn $ show $ primes !! 10000 -- the first prime is index 0 ...
I want to try to answer your question without using ANY functional programming or math, not because I don't think you will understand it, but because your question is very common and maybe others will benefit from the mindset I will try to describe. I'll preface this by saying I am not a Haskell expert by any means, but I have gotten past the mental block you have described by realizing the following:
1. Haskell is simple
Haskell, and other functional languages that I'm not so familiar with, are certainly very different from your 'normal' languages, like C, Java, Python, etc. Unfortunately, the way our psyche works, humans prematurely conclude that if something is different, then A) they don't understand it, and B) it's more complicated than what they already know. If we look at Haskell very objectively, we will see that these two conjectures are totally false:
"But I don't understand it :("
Actually you do. Everything in Haskell and other functional languages is defined in terms of logic and patterns. If you can answer a question as simple as "If all Meeps are Moops, and all Moops are Moors, are all Meeps Moors?", then you could probably write the Haskell Prelude yourself. To further support this point, consider that Haskell lists are defined in Haskell terms, and are not special voodoo magic.
"But it's complicated"
It's actually the opposite. Its simplicity is so naked and bare that our brains have trouble figuring out what to do with it at first. Compared to other languages, Haskell actually has considerably fewer "features" and much less syntax. When you read through Haskell code, you'll notice that almost all the function definitions look the same stylistically. This is very different from, say, Java, which has constructs like classes, interfaces, for loops, try/catch blocks, anonymous functions, etc... each with their own syntax and idioms.
You mentioned $ and ., again, just remember they are defined just like any other Haskell function and don't necessarily ever need to be used. However, if you didn't have these available to you, over time, you would likely implement these functions yourself when you notice how convenient they can be.
2. There is no Haskell version of anything
This is actually a great thing, because in Haskell, we have the freedom to define things exactly how we want them. Most other languages provide building blocks that people string together into a program. Haskell leaves it up to you to first define what a building block is, before building with it.
Many beginners ask questions like "How do I do a for loop in Haskell?" and innocent people who are just trying to help will give an unfortunate answer, probably involving a helper function, an extra Int parameter, and tail recursing until you get to 0. Sure, this construct can compute something like a for loop, but in no way is it a for loop, it's not a replacement for a for loop, and in no way is it really even similar to a for loop if you consider the flow of execution. Similar is the State monad for simulating state. It can be used to accomplish similar things as static variables do in other languages, but in no way is it the same thing. Most people leave off the last tidbit about it not being the same when they answer these kinds of questions, and I think that only confuses people more until they realize it on their own.
3. Haskell is a logic engine, not a programming language
This is probably the least true point I'm trying to make, but hear me out. In imperative programming languages, we are concerned with making our machines do stuff, perform actions, change state, and so on. In Haskell, we try to define what things are and how they are supposed to behave. We are usually not concerned with what something is doing at any particular time. This certainly has benefits and drawbacks, but that's just how it is. This is very different from what most people think of when you say "programming language".
So that's my take on how to leave an imperative mindset and move to a more functional mindset. Realizing how sensible Haskell is will help you not look at your own code funny anymore. Hopefully thinking about Haskell in these ways will help you become a more productive Haskeller.

Process to pass from problem to code. How did you learn?

I'm teaching/helping a student to program.
I remember the following process always helped me when I started; it looks pretty intuitive and I wonder if someone else has had a similar approach.
Read the problem and understand it (of course).
Identify possible "functions" and variables.
Write down how I would do it step by step (the algorithm).
Translate it into code; if there is something you cannot do, create a function that does it for you and keep moving.
With the time and practice I seem to have forgotten how hard it was to pass from problem description to a coding solution, but, by applying this method I managed to learn how to program.
So for a project description like:
A system has to calculate the price of an Item based on the following rules ( a description of the rules... client, discounts, availability etc.. etc.etc. )
The first step is to understand what the problem is.
Then identify the item, the rules, the variables, etc.
Pseudocode it, something like:
function getPrice( itemPrice, quantity, clientAge, hourOfDay ) : int
    if( hourOfDay > 18 ) then
        discount = 5%
    if( quantity > 10 ) then
        discount = 5%
    if( clientAge > 60 or < 18 ) then
        discount = 5%
    return item_price - discounts...
end
And then translate it into the programming language:
public class Problem1 {
    public int getPrice( int itemPrice, int quantity, int hourOfDay ) {
        int discount = 0;
        if( hourOfDay > 18 ) {
            // uh uh.. you don't know how to calculate a percentage...
            // create a function and move on.
            discount += percentOf( 5, itemPrice );
            .
            .
            .
            you get the idea..
        }
    }
    public int percentOf( int percent, int i ) {
        // ....
    }
}
Did you take a similar approach? Did someone teach you a similar approach, or did you discover it yourself (as I did :( )?
I go via the test-driven approach.
1. I write down (on paper or plain text editor) a list of tests or specification that would satisfy the needs of the problem.
- simple calculations (no discounts and concessions) with:
- single item
- two items
- maximum number of items that doesn't have a discount
- calculate for discounts based on number of items
- buying 10 items gives you a 5% discount
- buying 15 items gives you a 7% discount
- etc.
- calculate based on hourly rates
- calculate morning rates
- calculate afternoon rates
- calculate evening rates
- calculate midnight rates
- calculate based on buyer's age
- children
- adults
- seniors
- calculate based on combinations
- buying 10 items in the afternoon
2. Look for the items that I think would be the easiest to implement and write a test for it. E.g. single items look easy.
The sample uses NUnit and C#.
[Test]
public void SingleItems()
{
    Assert.AreEqual(5, GetPrice(5, 1));
}
Implement that using:
public decimal GetPrice(decimal amount, int quantity)
{
    return amount * quantity; // easy!
}
Then move on to the two items.
[Test]
public void TwoItems()
{
    Assert.AreEqual(10, GetPrice(5, 2));
}
The implementation still passes the test so move on to the next test.
3. Always be on the lookout for duplication and remove it. You are done when all the tests pass and you can no longer think of any more tests.
This doesn't guarantee that you will create the most efficient algorithm, but as long as you know what to test for and it all passes, it will guarantee that you are getting the right answers.
the old-school OO way:
write down a description of the problem and its solution
circle the nouns, these are candidate objects
draw boxes around the verbs, these are candidate messages
group the verbs with the nouns that would 'do' the action; list any other nouns that would be required to help
see if you can restate the solution using the form noun.verb(other nouns)
code it
[this method precedes CRC cards, but it's been so long (over 20 years) that I don't remember where I learned it]
When learning programming, I don't think TDD is helpful. TDD is good later on when you have some concept of what programming is about, but for starters, having an environment where you write code and see the results in the quickest possible turnaround time is the most important thing.
I'd go from problem statement to code instantly. Hack it around. Help the student see different ways of composing software / structuring algorithms. Teach the student to change their minds and rework the code. Try and teach a little bit about code aesthetics.
Once they can hack around code... then introduce the idea of formal restructuring in terms of refactoring. Then introduce the idea of TDD as a way to make the process a bit more robust. But only once they are feeling comfortable manipulating code to do what they want. Being able to specify tests is then somewhat easier at that stage. The reason is that TDD is about design. When learning, you don't really care so much about design but about what you can do, what toys you have to play with, how they work, and how you combine them together. Once you have a sense of that, then you want to think about design, and that's when TDD really kicks in.
From there I'd start introducing micro patterns leading into design patterns
I did something similar.
Figure out the rules/logic.
Figure out the math.
Then try and code it.
After doing that for a couple of months it just gets internalized. You don't realize you're doing it until you come up against a complex problem that requires you to break it down.
I start at the top and work my way down. Basically, I'll start by writing a high level procedure, sketch out the details inside of it, and then start filling in the details.
Say I had this problem (yoinked from Project Euler):
The sum of the squares of the first ten natural numbers is 1^2 + 2^2 + ... + 10^2 = 385.
The square of the sum of the first ten natural numbers is (1 + 2 + ... + 10)^2 = 55^2 = 3025.
Hence the difference between the sum of the squares of the first ten natural numbers and the square of the sum is 3025 - 385 = 2640.
Find the difference between the sum of the squares of the first one hundred natural numbers and the square of the sum.
So I start like this:
(display (- (sum-of-squares (list-to 10))
            (square-of-sums (list-to 10))))
Now, in Scheme, there is no sum-of-squares, square-of-sums or list-to functions. So the next step would be to build each of those. In building each of those functions, I may find I need to abstract out more. I try to keep things simple so that each function only really does one thing. When I build some piece of functionality that is testable, I write a unit test for it. When I start noticing a logical grouping for some data, and the functions that act on them, I may push it into an object.
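For example, the three helpers might end up looking something like this (a rough sketch of my own, not from the answer):
(define (list-to n)                       ; the list (1 2 ... n)
  (let loop ((i n) (acc '()))
    (if (< i 1) acc (loop (- i 1) (cons i acc)))))

(define (sum lst) (apply + lst))

(define (sum-of-squares lst)
  (sum (map (lambda (x) (* x x)) lst)))

(define (square-of-sums lst)
  (let ((s (sum lst))) (* s s)))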
I've enjoyed TDD ever since it was introduced to me. It helps me plan out my code, and it just puts me at ease having all my tests return with "success" every time I modify my code, letting me know I'm going home on time today!
Wishful thinking is probably the most important tool to solve complex problems. When in doubt, assume that a function exists to solve your problem (create a stub, at first). You'll come back to it later to expand it.
A good book for beginners looking for a process: Test Driven Development: By Example
My dad had a bunch of flowchart stencils that he used to make me use when he was first teaching me about programming. To this day I draw squares and diamonds to build out a logical process for how to analyze a problem.
I think there are about a dozen different heuristics I know of when it comes to programming and so I tend to go through the list at times with what I'm trying to do. At the start, it is important to know what is the desired end result and then try to work backwards to find it.
I remember an Algorithms class covering some of these ways like:
Reduce it to a known problem or trivial problem
Divide and conquer (MergeSort being a classic example here)
Use Data Structures that have the right functions (HeapSort being an example here)
Recursion (Knowing trivial solutions and being able to reduce to those)
Dynamic programming
Organizing a solution, as well as testing it for odd situations (e.g. if someone thinks L should be a number), is what I'd usually use to test out the idea in pseudocode before writing it up.
Design patterns can be a handy set of tools to use for specific cases like where an Adapter is needed or organizing things into a state or strategy solution.
Yes.. well, TDD didn't exist (or was not that popular) when I began. Would TDD be the way to go to pass from problem description to code?... Isn't that a little bit advanced? I mean, when a "future" developer hardly understands what a programming language is, wouldn't it be counterproductive?
What about Hamcrest to make the transition from algorithm to code?
I think there's a better way to state your problem.
Instead of defining it as 'a system,' define what is expected in terms of user inputs and outputs.
"On a window, a user should select an item from a list, and a box should show him how much it costs."
Then, you can give him some of the factors determining the costs, including sample items and what their costs should end up being.
(this is also very much a TDD-like idea)
Keep in mind, if you get 5% off and then another 5% off, you don't get 10% off. Rather, you pay 95% of 95%, which is 90.25%, or 9.75% off. So you shouldn't add the percentages.
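A tiny sketch of that point in C# (my own illustration; the method name is made up): multiply the remaining fractions instead of summing the percentages.
decimal ApplyDiscounts(decimal price, params decimal[] discounts)
{
    foreach (var d in discounts)
        price *= (1 - d);              // e.g. 0.05m for a 5% discount
    return price;
}
// ApplyDiscounts(100m, 0.05m, 0.05m) == 90.25m, i.e. 9.75% off rather than 10%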