NetLogo: How to calculate infinite series (or partial sum) with NetLogo?

How to calculate infinite series with NetLogo?
For example, how to calculate the sum of x^i, where i is from 0 to 10?
(the lower bound of summation is i=0 and the upper bound of summation is i=10 (or infinite), x is an indexed variable representing each successive term in the series)

You can't do a true infinite series; NetLogo has no symbolic algebra. However, you can construct a reporter that takes the relevant powers and adds them together.
I think this is what you want:
to-report sum-power [#x #upper]
  ;; sum of #x ^ i for i = 0 through #upper
  report sum map [ i -> #x ^ i ] (range (#upper + 1))
end
For example, sum-power 2 3 reports 15, that is 2^0 + 2^1 + 2^2 + 2^3 = 1 + 2 + 4 + 8.

Related

What is definition of truncated polynomial?

In NTRU encryption, I have seen truncated polynomials, but I cannot understand how the truncated polynomial calculation works.
So, could anyone tell me how we calculate with truncated polynomials?
The polynomials are truncated in the sense that they only have coefficients up to a certain degree.
Here is how you truncate the product of two truncated polynomials (the sum is trivial):
Assume you have two truncated polynomials, i.e. two polynomials of degree no greater than n-1
a = a[0] + a[1]X + ... + a[n-1]X^(n-1)
b = b[0] + b[1]X + ... + b[n-1]X^(n-1)
Then their "truncated" product is defined as the polynomial
a * b = c[0] + c[1]X + ... +c[n-1]X^(n-1)
where the c[k] coefficients are computed as follows:
Reverse b[0]..b[n-1] to get b[n-1]..b[0].
Rotate the result of step 1 k+1 positions to the right to get b[k]..b[0] b[n-1]..b[k+1].
Denote the array calculated in step 2 by b_k[0]..b_k[n-1].
Now define
c[k] = a[0]b_k[0] + a[1]b_k[1] + ... + a[n-1]b_k[n-1].
This operation can also be performed by multiplying the polynomials a and b in the usual way and then reducing the result modulo X^n - 1, i.e. folding each coefficient of degree n+k back onto the coefficient of degree k (that is what the wrap-around in step 2 achieves). The reason for the algorithm above is to avoid materializing the full degree-(2n-2) product before folding it back.
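To make the recipe concrete, here is a minimal Python sketch of the same product (the function name is mine; this is the cyclic convolution c[k] = sum of a[i]*b[j] over all i, j with i + j = k mod n, which is exactly what the reverse-and-rotate steps compute):

def truncated_product(a, b):
    # a and b are coefficient lists of the same length n (degree <= n-1)
    n = len(a)
    c = [0] * n
    for i in range(n):
        for j in range(n):
            # fold the degree i+j back into the range 0..n-1
            c[(i + j) % n] += a[i] * b[j]
    return c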

How do I find the number of times x = x + 1 is executed, in terms of n?

I'm also having trouble finding Ω() and Θ() as appropriate.
x = 0;
for k = 1 to n
    for j = 1 to n - k
        x = x + 1;
The inner loop runs (n-1) + (n-2) + (n-3) + ... + 1 + 0 times. Use the formula for the sum of an arithmetic series to find the solution. The outer loop is obviously just n.
This will be the big-Theta. The big-O will be the same as big-Theta here; you get it by dropping everything but the leading term and removing the multiplier, e.g. Theta(2*log(n) + 5) becomes O(log(n)). Omega is the same as big-O in this case, because the best case and worst case are identical; or you can cheat and say that big-Omega is constant, because every function is trivially Ω(1).
First, look at your boundaries. k=1 and k=n.
For k=1, the inside loop is executed (n-1) times.
For k=n the inside loop is executed 0 times.
So, 0 + 1 + ... + (n-1) is an arithmetic sum => (n-1)(n)/2 times.
Now, test it on a few small values :)
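A quick way to run that test (a small Python sketch; it counts the increments directly and checks them against the closed form):

def count_steps(n):
    x = 0
    for k in range(1, n + 1):
        for j in range(1, n - k + 1):  # the inner loop runs n - k times
            x += 1
    return x

for n in range(1, 8):
    assert count_steps(n) == n * (n - 1) // 2  # the arithmetic-series formula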
the answer is like this:
(n-1) + (n-2) + (n-3) + ... + 1 + 0 = n*n - (1 + 2 + 3 + ... + n) = n^2 - n(n+1)/2 = n(n-1)/2

Smooth Coloring Mandelbrot Set Without Complex Number Library

I've coded a basic Mandelbrot explorer in C#, but I have those horrible bands of color, and it's all greyscale.
I have the equation for smooth coloring:
mu = N + 1 - log (log |Z(N)|) / log 2
where N is the escape count and |Z(N)| is the modulus of the complex number after the value has escaped; it's this value that I'm unsure of.
My code is based off the pseudo code given on the wikipedia page: http://en.wikipedia.org/wiki/Mandelbrot_set#For_programmers
The complex number is represented by the real values x and y, using this method, how would I calculate the value of |Z(N)| ?
|Z(N)| means the distance to the origin, so you can calculate it via sqrt(x*x + y*y).
If you run into an error with the logarithm, check the iteration count first. If the point is part of the Mandelbrot set (iteration = max_iteration), the inner logarithm can come out zero or negative and the outer logarithm will then fail.
So just use this snippet in place of your old return code:
if (i < iterations)
{
    return i + 1 - Math.Log(Math.Log(Math.Sqrt(x * x + y * y))) / Math.Log(2);
}
return i;
Later, you can divide the result by max_iterations and multiply it by 255. This will give you a nice RGB value.
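Putting the pieces together, here is a minimal Python sketch of the same escape loop with smooth coloring (the escape radius of 2 and the max_iter default are my assumptions, not taken from the original code):

import math

def smooth_iteration(cx, cy, max_iter=256):
    x = y = 0.0
    for i in range(max_iter):
        if x * x + y * y > 4.0:                    # escaped: |Z| > 2
            modulus = math.sqrt(x * x + y * y)     # this is |Z(N)|
            return i + 1 - math.log(math.log(modulus)) / math.log(2)
        x, y = x * x - y * y + cx, 2 * x * y + cy  # Z = Z^2 + C
    return float(max_iter)                         # treated as inside the set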

Recognizing when to use the modulus operator

I know the modulus (%) operator calculates the remainder of a division. How can I identify a situation where I would need to use the modulus operator?
I know I can use the modulus operator to see whether a number is even or odd and prime or composite, but that's about it. I don't often think in terms of remainders. I'm sure the modulus operator is useful, and I would like to learn to take advantage of it.
I just have problems identifying where the modulus operator is applicable. In various programming situations, it is difficult for me to see a problem and realize "Hey! The remainder of division would work here!".
Imagine that you have an elapsed time in seconds and you want to convert this to hours, minutes, and seconds:
h = s / 3600;
m = (s / 60) % 60;
s = s % 60;
Another way to see the pattern is to take successive integers mod 3:
0 % 3 = 0
1 % 3 = 1
2 % 3 = 2
3 % 3 = 0
Did you see what it did? At the last step it went back to zero. This can be used in situations like:
Checking whether N is divisible by M (for example, whether a number is odd or even), or whether N is a multiple of M.
Putting a cap on a value, in this case 3, so it wraps back to zero.
Getting the last M digits of a number -> N % (10^M).
I use it for progress bars and the like that mark progress through a big loop. The progress is only reported every nth time through the loop, or when count%n == 0.
I've used it when restricting a number to a certain multiple:
temp = x - (x % 10); // round x down to the nearest multiple of 10
Wrapping values (like a clock).
Provide finite fields to symmetric key algorithms.
Bitwise operations.
And so on.
One use case I saw recently was when you need to reverse a number. So that 123456 becomes 654321 for example.
int number = 123456;
int reversed = 0;
while (number > 0) {
    // number % 10 retrieves the last digit of number:
    // 6 on the first iteration of this loop, then 5, and so on.
    // Multiplying reversed by 10 first shifts its digits one place to the left.
    // On the second iteration, reversed is 6 and number is 12345,
    // so reversed * 10 + number % 10 => 60 + 5.
    reversed = reversed * 10 + number % 10;
    number = number / 10;
}
Example: you have a message of X bytes, but your protocol's maximum packet size is Y, and Y < X. Try to write a small app that splits the message into packets and you will run into mod :)
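A sketch of that exercise in Python (the function and names are mine): the final short packet's length is exactly len(message) % max_size.

def split_into_packets(message, max_size):
    packets = []
    for start in range(0, len(message), max_size):
        packets.append(message[start:start + max_size])
    return packets

parts = split_into_packets(b"x" * 10, 4)
assert [len(p) for p in parts] == [4, 4, 2]  # 10 % 4 == 2 bytes left over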
There are many instances where it is useful.
If you need to restrict a number to be within a certain range you can use mod. For example, to generate a random number between 0 and 99 you might say:
num = MyRandFunction() % 100;
Any time you have division and want to express the remainder other than in decimal, the mod operator is appropriate. Things that come to mind are generally when you want to do something human-readable with the remainder. Listing how many items you could put into buckets and saying "5 left over" is good.
Also, if you're ever in a situation where you may be accruing rounding errors, modulo division is good. If you're dividing by 3 quite often, for example, you don't want to be passing .33333 around as the remainder. Passing the remainder and divisor (i.e. the fraction) is appropriate.
As jweyrich says, wrapping values. I've found mod very handy when I have a finite list and I want to iterate over it in a loop - like a fixed list of colors for some UI elements, such as chart series, where I want all the series to be different, to the extent possible, but when I've run out of colors, to just start over at the beginning. This can also be combined with, say, patterns, so that the second time red comes around it's dashed, the third time dotted, etc. - but mod by itself just gives red, green, blue, red, green, blue, forever.
Calculation of prime numbers
Modulo can be useful for converting and splitting total minutes into "hours and minutes":
hours = minutes / 60
minutes_left = minutes % 60
For the hours part we need to strip the decimal portion; how to do that depends on the language you are using.
We can then arrange the output accordingly.
Converting linear data structure to matrix structure:
where a is index of linear data, and b is number of items per row:
row = a/b
column = a mod b
Note the above is simplified logic: a must be offset by -1 before dividing, and the result must be normalized by +1.
Example: (3 rows of 4)
1 2 3 4
5 6 7 8
9 10 11 12
(7 - 1)/4 + 1 = 2
7 is in row 2
(7 - 1) mod 4 + 1 = 3
7 is in column 3
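The same logic as a small Python sketch, using the 1-based offsets just described:

def row_and_column(a, b):
    # a = 1-based linear index, b = number of items per row
    row = (a - 1) // b + 1
    column = (a - 1) % b + 1
    return row, column

assert row_and_column(7, 4) == (2, 3)  # 7 is in row 2, column 3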
Another common use of modulus: hashing a number by place. Suppose you want to store a year and month in a six-digit number, say 195810. Then month = 195810 mod 100: every digit from the third-from-the-right onward contributes a multiple of 100, so the remainder is the two rightmost digits; in this case the month is 10. To extract the year, 195810 / 100 yields 1958.
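In Python, the built-in divmod performs the division and the modulus in one step, which fits this place-hashing pattern nicely:

year, month = divmod(195810, 100)
assert (year, month) == (1958, 10)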
Modulus is also very useful if for some crazy reason you need to do integer division and get a decimal out, and you can't convert the integer into a number that supports decimal division, or if you need to return a fraction instead of a decimal.
I'll be using % as the modulus operator.
For example, with integer division 2 / 4 = 0, while 2 % 4 = 2.
So you can be really crazy and, say, allow the user to input a numerator and a divisor, and then show them the result as a whole number followed by a fractional number:
wholeNumber = numerator / divisor
fractionNumerator = numerator % divisor
fractionDenominator = divisor
Another case where modulus division is useful is if you are increasing or decreasing a number and you want to contain it to a certain range, but when you get to the top or bottom you don't want to just stop - you want to wrap around to the bottom or top of the list, respectively.
Imagine a function where you are looping through an array:
Function IncreaseOrDecrease(variable As Integer)
    n = (n + variable) % (listString.maxIndex + 1)
    Print listString[n]
End Function
The reason it is n = (n + variable) % (listString.maxIndex + 1) is so that the maximum index itself is included in the range.
Those are just a few of the things that I have had to use modulus for in my programming of not just desktop applications, but in robotics and simulation environments.
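A Python version of that wrap-around navigation (a sketch; note that Python's % already returns a non-negative result for a positive divisor, so stepping backwards past index 0 wraps to the end automatically):

colors = ["red", "green", "blue"]

def step(n, delta):
    # new index after moving delta steps, wrapping at both ends
    return (n + delta) % len(colors)

n = step(0, -1)  # wraps from the front to the back
assert colors[n] == "blue"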
Computing the greatest common divisor
Determining if a number is a palindrome
Determining if a number consists of only ...
Determining how many ... a number consists of...
My favorite use is for iteration.
Say you have a counter you are incrementing and want to grab a corresponding item from a known list, but you only have n items to choose from and you want to repeat the cycle.
var indexFromB = (counter-1)%n+1;
Results (counter=indexFromB) given n=3:
`1=1`
`2=2`
`3=3`
`4=1`
`5=2`
`6=3`
...
The best use of the modulus operator I have seen so far is checking whether the array we have is a rotated version of the original array.
A = [1,2,3,4,5,6]
B = [5,6,1,2,3,4]
Now, how do we check whether B is a rotated version of A?
Step 1: If A's length is not the same as B's length, then it is certainly not a rotated version.
Step 2: Find the index of A's first element in B. Here the first element of A is 1, and its index in B is 2 (assuming your programming language has zero-based indexing).
Let's store that index in a variable "Key".
Step 3: Now, how do we check that B is a rotated version of A?
This is where the modulus function rocks:
for (int i = 0; i < A.length; i++)
{
    // The modulus wraps the index around; Key here is 2, which we got in step 2.
    int j = (Key + i) % A.length;
    if (A[i] != B[j])
    {
        return false;
    }
}
return true;
It's an easy way to tell if a number is even or odd. Just do n mod 2: if the result is 0 it is even, if it is 1 it is odd.
Often, in a loop, you want to do something every k'th iteration, where k is 0 < k < n, assuming 0 is the start index and n is the length of the loop.
So, you'd do something like:
int k = 5;
int n = 50;
for(int i = 0;i < n;++i)
{
if(i % k == 0) // true at 0, 5, 10, 15..
{
// do something
}
}
Or, you want to keep something within a certain bound. Remember, when you take a non-negative number mod m, it produces a value between 0 and m - 1.
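One caveat, shown as a small Python sketch: languages disagree on the sign of the result when the left operand is negative, so index-wrapping code sometimes needs to normalize.

# Python's % follows the sign of the divisor:
assert -7 % 3 == 2
# C, C++, Java, and C# give -1 for -7 % 3 instead, so a portable
# wrap for a possibly negative index i is:
def wrap(i, n):
    return (i % n + n) % n  # always in 0 .. n-1

assert wrap(-7, 3) == 2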

Computational complexity of Fibonacci Sequence

I understand Big-O notation, but I don't know how to calculate it for many functions. In particular, I've been trying to figure out the computational complexity of the naive version of the Fibonacci sequence:
int Fibonacci(int n)
{
    if (n <= 1)
        return n;
    else
        return Fibonacci(n - 1) + Fibonacci(n - 2);
}
What is the computational complexity of the Fibonacci sequence and how is it calculated?
You model the time function to calculate Fib(n) as sum of time to calculate Fib(n-1) plus the time to calculate Fib(n-2) plus the time to add them together (O(1)). This is assuming that repeated evaluations of the same Fib(n) take the same time - i.e. no memoization is used.
T(n<=1) = O(1)
T(n) = T(n-1) + T(n-2) + O(1)
You solve this recurrence relation (using generating functions, for instance) and you'll end up with the answer.
Alternatively, you can draw the recursion tree, which will have depth n, and intuitively figure out that this function is asymptotically O(2^n). You can then prove your conjecture by induction.
Base: n = 1 is obvious.
Assume T(n-1) = O(2^(n-1)); therefore
T(n) = T(n-1) + T(n-2) + O(1), which is equal to
T(n) = O(2^(n-1)) + O(2^(n-2)) + O(1) = O(2^n)
However, as noted in a comment, this is not the tight bound. An interesting fact about this function is that the T(n) is asymptotically the same as the value of Fib(n) since both are defined as
f(n) = f(n-1) + f(n-2).
The leaves of the recursion tree will always return 1. The value of Fib(n) is the sum of all values returned by the leaves in the recursion tree, which is equal to the count of leaves. Since each leaf takes O(1) to compute, T(n) is equal to Fib(n) x O(1). Consequently, the tight bound for this function is the Fibonacci sequence itself (~Θ(1.6^n)). You can find this tight bound by using generating functions as I'd mentioned above.
Just ask yourself how many statements need to execute for F(n) to complete.
For F(1), the answer is 1 (the first part of the conditional).
For F(n), the answer is F(n-1) + F(n-2).
So what function satisfies these rules? Try a^n (a > 1):
a^n == a^(n-1) + a^(n-2)
Divide through by a^(n-2):
a^2 == a + 1
Solve for a and you get (1+sqrt(5))/2 = 1.6180339887, otherwise known as the golden ratio.
So it takes exponential time.
I agree with pgaur and rickerbh: recursive Fibonacci's complexity is O(2^n).
I came to the same conclusion by rather simplistic but, I believe, still valid reasoning.
First, it's all about figuring out how many times the recursive Fibonacci function (F() from now on) gets called when calculating the Nth Fibonacci number. If it gets called once per number in the sequence 0 to n, then we have O(n); if it gets called n times for each number, then we get O(n*n), i.e. O(n^2), and so on.
So, when F() is called for a number n, the number of times F() is called for a given number between 0 and n-1 grows as we approach 0.
As a first impression, it seems to me that if we put it in a visual way, drawing a unit per time F() is called for a given number, we get a sort of pyramid shape (that is, if we center the units horizontally). Something like this:
n *
n-1 **
n-2 ****
...
2 ***********
1 ******************
0 ***************************
Now, the question is, how fast is the base of this pyramid enlarging as n grows?
Let's take a real case, for instance F(6)
F(6) * <-- only once
F(5) * <-- only once too
F(4) **
F(3) ****
F(2) ********
F(1) **************** <-- 16
F(0) ******************************** <-- 32
We see F(0) gets called 32 times in this picture, which is 2^5, which for this sample case is 2^(n-1). (The pyramid doubles at every level, which over-counts the real call tree, since branches bottom out at different depths; so this reasoning gives an upper bound rather than an exact count.)
Now, we want to know how many times F(x) gets called at all, and we can see the number of times F(0) is called is only a part of that.
If we mentally move all the *'s from F(6) to F(2) lines into F(1) line, we see that F(1) and F(0) lines are now equal in length. Which means, total times F() gets called when n=6 is 2x32=64=2^6.
Now, in terms of complexity:
O( F(6) ) = O(2^6)
O( F(n) ) = O(2^n)
There's a very nice discussion of this specific problem over at MIT. On page 5, they make the point that, if you assume that an addition takes one computational unit, the time required to compute Fib(N) is very closely related to the result of Fib(N).
As a result, you can skip directly to the very close approximation of the Fibonacci series:
Fib(N) = (1/sqrt(5)) * 1.618^(N+1) (approximately)
and say, therefore, that the worst case performance of the naive algorithm is
O((1/sqrt(5)) * 1.618^(N+1)) = O(1.618^(N+1))
PS: There is a discussion of the closed form expression of the Nth Fibonacci number over at Wikipedia if you'd like more information.
You can expand it and get a visualization:
T(n) = T(n-1) + T(n-2)
     < T(n-1) + T(n-1) = 2*T(n-1)
     < 2*2*T(n-2)
     < 2*2*2*T(n-3)
     ...
     < 2^i * T(n-i)
     ...
==> T(n) = O(2^n)
A recursive algorithm's time complexity can be better estimated by drawing the recursion tree. In this case the recurrence relation for drawing the recursion tree is T(n) = T(n-1) + T(n-2) + O(1).
Note that each step takes O(1), meaning constant time, since it does only one comparison to check the value of n in the if block. The recursion tree would look like:
n
(n-1) (n-2)
(n-2)(n-3) (n-3)(n-4) ...so on
Here, let's say each level of the above tree is denoted by i; hence:
i
0 n
1 (n-1) (n-2)
2 (n-2) (n-3) (n-3) (n-4)
3 (n-3)(n-4) (n-4)(n-5) (n-4)(n-5) (n-5)(n-6)
Let's say the tree ends at a particular value of i; that happens when n-i = 1, hence i = n-1, meaning the height of the tree is n-1.
Now let's see how much work is done at each of the n levels of the tree. Note that each step takes O(1) time, as stated in the recurrence relation.
2^0=1 n
2^1=2 (n-1) (n-2)
2^2=4 (n-2) (n-3) (n-3) (n-4)
2^3=8 (n-3)(n-4) (n-4)(n-5) (n-4)(n-5) (n-5)(n-6) ..so on
2^i for ith level
Since i = n-1 is the height of the tree, the work done at each level will be:
i work
1 2^1
2 2^2
3 2^3..so on
Hence the total work done is the sum of the work done at each level: 2^0 + 2^1 + 2^2 + 2^3 + ... + 2^(n-1), since i = n-1.
By the geometric series, this sum is 2^n - 1, hence the total time complexity here is O(2^n).
The proof answers are good, but I always have to do a few iterations by hand to really convince myself. So I drew out a small calling tree on my whiteboard, and started counting the nodes. I split my counts out into total nodes, leaf nodes, and interior nodes. Here's what I got:
IN | OUT | TOT | LEAF | INT
1 | 1 | 1 | 1 | 0
2 | 1 | 1 | 1 | 0
3 | 2 | 3 | 2 | 1
4 | 3 | 5 | 3 | 2
5 | 5 | 9 | 5 | 4
6 | 8 | 15 | 8 | 7
7 | 13 | 25 | 13 | 12
8 | 21 | 41 | 21 | 20
9 | 34 | 67 | 34 | 33
10 | 55 | 109 | 55 | 54
What immediately leaps out is that the number of leaf nodes is fib(n). What took a few more iterations to notice is that the number of interior nodes is fib(n) - 1. Therefore the total number of nodes is 2 * fib(n) - 1.
Since you drop the coefficients when classifying computational complexity, the final answer is θ(fib(n)).
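To reproduce that table without a whiteboard, here is a small Python sketch that counts nodes in the call tree (using the same convention as the table, where inputs 1 and 2 are leaves):

def count_nodes(n):
    # returns (total, leaf, interior) node counts for the call tree of fib(n)
    if n <= 2:
        return (1, 1, 0)
    t1, l1, i1 = count_nodes(n - 1)
    t2, l2, i2 = count_nodes(n - 2)
    return (t1 + t2 + 1, l1 + l2, i1 + i2 + 1)

assert count_nodes(10) == (109, 55, 54)  # matches the last row of the table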
It is bounded on the lower end by 2^(n/2) and on the upper end by 2^n (as noted in other comments). And an interesting fact of that recursive implementation is that it has a tight asymptotic bound of Fib(n) itself. These facts can be summarized:
T(n) = Ω(2^(n/2)) (lower bound)
T(n) = O(2^n) (upper bound)
T(n) = Θ(Fib(n)) (tight bound)
The tight bound can be reduced further using its closed form if you like.
It is simple to calculate by diagramming function calls. Simply add the function calls for each value of n and look at how the number grows.
The Big O is O(Z^n) where Z is the golden ratio or about 1.62.
Both the Leonardo numbers and the Fibonacci numbers approach this ratio as we increase n.
Unlike other Big O questions there is no variability in the input and both the algorithm and implementation of the algorithm are clearly defined.
There is no need for a bunch of complex math. Simply diagram out the function calls below and fit a function to the numbers.
Or if you are familiar with the golden ratio you will recognize it as such.
This answer is more correct than the accepted answer which claims that it will approach f(n) = 2^n. It never will. It will approach f(n) = golden_ratio^n.
2 (2 -> 1, 0)
4 (3 -> 2, 1) (2 -> 1, 0)
8 (4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
14 (5 -> 4, 3) (4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
(3 -> 2, 1) (2 -> 1, 0)
22 (6 -> 5, 4)
(5 -> 4, 3) (4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
(3 -> 2, 1) (2 -> 1, 0)
(4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
The naive recursion version of Fibonacci is exponential by design due to repetition in the computation:
At the root you are computing:
F(n) depends on F(n-1) and F(n-2)
F(n-1) depends on F(n-2) again and F(n-3)
F(n-2) depends on F(n-3) again and F(n-4)
then at each level you have 2 recursive calls that repeat a lot of work, and the time function will look like this:
T(n) = T(n-1) + T(n-2) + C, with C constant
T(n-1) = T(n-2) + T(n-3) > T(n-2) then
T(n) > 2*T(n-2)
...
T(n) > 2^(n/2) * T(1) = O(2^(n/2))
This is just a lower bound that for the purpose of your analysis should be enough but the real time function is a factor of a constant by the same Fibonacci formula and the closed form is known to be exponential of the golden ratio.
In addition, you can find optimized versions of Fibonacci using dynamic programming like this:
static int fib(int n)
{
    /* base cases, so the array accesses below are safe */
    if (n < 2)
        return n;
    /* memory */
    int[] f = new int[n + 1];
    int i;
    /* init */
    f[0] = 0;
    f[1] = 1;
    /* fill */
    for (i = 2; i <= n; i++)
    {
        f[i] = f[i - 1] + f[i - 2];
    }
    return f[n];
}
That is optimized and does only n steps, but it is also exponential.
Cost functions are defined from input size to the number of steps needed to solve the problem. When you see the dynamic version of Fibonacci (n steps to compute the table) or the easiest algorithm to decide whether a number is prime (sqrt(n) to analyze the valid divisors of the number), you may think these algorithms are O(n) or O(sqrt(n)), but this is simply not true, for the following reason:
The input to your algorithm is a number n, so using binary notation the input size for an integer n is log2(n). Doing the variable change
m = log2(n) // your real input size
let's find the number of steps as a function of the input size:
m = log2(n)
2^m = 2^log2(n) = n
then the cost of your algorithm as a function of the input size is:
T(m) = n steps = 2^m steps
and this is why the cost is an exponential.
In my view it is O(2^n), since in this function only the recursion takes considerable time (divide and conquer). We see that the above function continues in a tree until the leaves are reached, at the level F(n-(n-1)), i.e. F(1). So when we jot down the time complexity encountered at each depth of the tree, the summation series is:
1 + 2 + 4 + ... + 2^(n-1)
= 1 * ((2^n) - 1) / (2 - 1)
= 2^n - 1
which is of order 2^n [O(2^n)].
No answer emphasizes probably the fastest and most memory-efficient way to calculate the sequence. There is an exact closed-form expression for the Fibonacci sequence. It can be found by using generating functions or by using linear algebra, as I will now do.
Let f_1, f_2, ... be the Fibonacci sequence with f_1 = f_2 = 1. Now consider the sequence of two-dimensional vectors
v_1 = (f_1, f_2), v_2 = (f_2, f_3), v_3 = (f_3, f_4), ...
Observe that the next element v_{n+1} in the vector sequence is M.v_{n}, where M is the 2x2 matrix
M = [0 1]
    [1 1]
due to f_{n+1} = f_{n+1} and f_{n+2} = f_{n} + f_{n+1}.
M is diagonalizable over the complex numbers (in fact over the reals as well, but this is not usually the case). There are two distinct eigenvectors of M,
(1, x_1) and (1, x_2),
where x_1 = (1+sqrt(5))/2 and x_2 = (1-sqrt(5))/2 are the distinct solutions of the polynomial equation x*x - x - 1 = 0. The corresponding eigenvalues are x_1 and x_2. Think of M as a linear transformation and change your basis to see that it is equivalent to
D = [x_1 0]
    [0  x_2]
In order to find f_n find v_n and look at the first coordinate. To find v_n apply M n-1 times to v_1. But applying M n-1 times is easy, just think of it as D. Then using linearity one can find
f_n = 1/sqrt(5)*(x_1^n-x_2^n)
Since the norm of x_2 is smaller than 1, the corresponding term vanishes as n tends to infinity; therefore, rounding (x_1^n)/sqrt(5) to the nearest integer is enough to find the answer exactly. By making use of the trick of repeated squaring, this can be done using only O(log_2(n)) multiplication (and addition) operations. Memory complexity is even more impressive, because it can be implemented in a way that always holds at most one number in memory, whose value is smaller than the answer. However, since this number is not a natural number, the memory complexity changes depending on whether you use fixed bits to represent each number (and hence do calculations with error), giving O(1) memory complexity, or use a better model like Turing machines, in which case some more analysis is needed.
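For illustration, here is a minimal Python sketch of the repeated-squaring idea, via the fast-doubling identities F(2m) = F(m) * (2*F(m+1) - F(m)) and F(2m+1) = F(m)^2 + F(m+1)^2 (which follow from the matrix form above), kept in exact integer arithmetic so no floating-point error creeps in:

def fib(n):
    # doubling(k) returns the pair (F(k), F(k+1)); k halves at each step,
    # so only O(log n) multiplications are performed.
    def doubling(k):
        if k == 0:
            return (0, 1)
        a, b = doubling(k // 2)   # a = F(m), b = F(m+1) with m = k // 2
        c = a * (2 * b - a)       # F(2m)
        d = a * a + b * b         # F(2m+1)
        return (d, c + d) if k % 2 else (c, d)
    return doubling(n)[0]

assert fib(10) == 55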