Optimizing division/exponential calculation - vb.net

I've inherited a Visual Studio/VB.NET numerical simulation project that has a likely inefficient calculation. Profiling indicates that the function is called a lot (over a million times) and that about 50% of the overall calculation time is spent in it. Here is the problematic portion:
Result = (A * (E ^ C)) / (D ^ C * B)
(where A, B and C are local Double variables, and D and E are global Double variables)
Result is then compared to a threshold, which might offer additional improvements as well, but I'll leave those for another day.
Any thoughts or help would be appreciated.
Steve

The exponent operator (Math.Pow) isn't very fast; there is no dedicated CPU instruction for calculating it. You mentioned that D and E are global variables. That offers a glimmer of hope to make it faster, if you can isolate their changes. Rewriting the equation using logarithms:
log(r) = log((a * e^c) / (b * d^c))
       = log(a * e^c) - log(b * d^c)
       = log(a) + log(e^c) - log(b) - log(d^c)
       = log(a) + c * log(e) - log(b) - c * log(d)
       = log(a) - log(b) + c * (log(e) - log(d))
result = exp(log(r))
Which provides this function to calculate the result:
Function calculate(ByVal a As Double, ByVal b As Double, ByVal c As Double, ByVal d As Double, ByVal e As Double) As Double
    Dim logRes = Math.Log(a) - Math.Log(b) + c * (Math.Log(e) - Math.Log(d))
    Return Math.Exp(logRes)
End Function
I timed it with the Stopwatch class; it is exactly as fast as your original expression. Not a coincidence, of course. You'll only get ahead if you can somehow pre-calculate the Math.Log(e) - Math.Log(d) term.
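For illustration, here is a minimal sketch of that idea in Java (the same shape as the benchmark code in the next answer; the names are hypothetical and the idea carries straight over to VB.NET): cache the term that depends only on the globals and refresh it only when they change.

// Hypothetical sketch: cachedLogEoverD holds Math.log(E) - Math.log(D) and is refreshed
// only when the globals change, so each call performs two logs instead of four.
static double cachedLogEoverD;

static void onGlobalsChanged(double d, double e) {
    cachedLogEoverD = Math.log(e) - Math.log(d);
}

static double calculate(double a, double b, double c) {
    // result = exp(log(a) - log(b) + c * (log(e) - log(d)))
    return Math.exp(Math.log(a) - Math.log(b) + c * cachedLogEoverD);
}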

One easy speed-up is to note that
Result = (A/B) * (E/D)^C
At least you are then doing one fewer exponentiation.
Depending on what C is, there might be faster ways, for example if C is a small integer (see the sketch below).
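For instance, if C is known to be a small non-negative integer, here is a hedged sketch (in Java, with hypothetical names) of exponentiation by squaring that avoids Math.pow entirely:

// Sketch under the assumption that C is a small non-negative integer:
// exponentiation by squaring needs only O(log c) multiplications.
static double powInt(double base, int exp) {
    double result = 1.0;
    while (exp > 0) {
        if ((exp & 1) == 1) result *= base;
        base *= base;
        exp >>= 1;
    }
    return result;
}

// Result = (a / b) * powInt(e / d, c);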
Edit: adding a benchmark to show that this is faster.
public static void main(String[] args) {
    StopWatch sw = new StopWatch();
    float e = 1.123F;
    float d = 4.456F;
    float c = 453;
    sw.start();
    int max = 5000;
    double result = 0;
    for (int a = 1; a < max; a++) {
        for (float b = 1; b < max; b++) {
            result = (a * (Math.pow(e, c))) / (Math.pow(d, c) * b);
        }
    }
    sw.split();
    System.out.println("slow: " + sw.getSplitTime() + " result: " + result);
    sw.stop();
    sw.reset();

    sw.start();
    result = 0;
    for (int a = 1; a < max; a++) {
        for (float b = 1; b < max; b++) {
            result = a / b * Math.pow(e/d, c);
        }
    }
    sw.split();
    System.out.println("fast: " + sw.getSplitTime() + " result: " + result);
    sw.stop();
    sw.reset();
}
This is the output
slow: 26062 result: 7.077390271736578E-272
fast: 12661 result: 7.077392136525382E-272
There is some skew in the numbers. I would think that the faster version is more exact (but that's just a feeling, since I can't think of exactly why).

Well done for profiling. I would also check that A-C are different on every call. In other words, is it possible the caller is actually calculating the same value over and over again? If so, change it so it caches the answer.
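For instance, a minimal memoization sketch in Java (names and key format are hypothetical; the same pattern works in VB.NET with a Dictionary):

import java.util.HashMap;
import java.util.Map;

static double D, E;  // the two global doubles from the question, assumed accessible here
static final Map<String, Double> cache = new HashMap<>();

// Hypothetical sketch: reuse the result when the caller repeats the same (A, B, C) inputs.
static double calculateCached(double a, double b, double c) {
    String key = a + "|" + b + "|" + c;   // simple composite key
    Double cached = cache.get(key);
    if (cached != null) return cached;
    double result = (a * Math.pow(E, c)) / (Math.pow(D, c) * b);
    cache.put(key, result);
    return result;
}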


Related

Is write_image atomic? Is it better to use atomic_max?

Full disclosure: I am cross-posting from the Khronos OpenCL forums, since I have not received any reply there so far:
https://community.khronos.org/t/is-write-image-atomic-is-it-better-than-atomic-max/106418
I'm writing a connected-components labelling algorithm for images (2D and 3D); I found no existing implementations and decided to write one based on pointer jumping and a "recollection step". (By the way: if you are aware of an easy-to-use, production-ready connected-component labelling implementation, let me know.)
The "recollection" step kernel pseudocode for 2D images is as follows:
1) global_id = (x,y)
2) read v from img[x,y], decode it to a pair (tx,ty)
3) read v1 from img[tx,ty]
4) do some calculations to extract a boolean value C and a target value T from v1, v, and the neighbours of (x,y) and (tx,ty)
5) IF (C) THEN WRITE T INTO (tx,ty).
Q1: all the kernels where "C" is true will compete for writing. Suppose it does not matter which one wins (writes last). I've done some tests on an Intel GPU, and (with filtering disabled and clamping enabled) there seems to be no issue at all: write_image seems to be atomic, there is a winning value, and my algorithm converges very fast. Can I safely assume that write_image on "unfiltered" images is atomic?
Q2: What I really need is to write into (tx,ty) the maximum T obtained from each kernel. That would involve using buffers instead of images, doing the clamping myself (or using a larger buffer padded with zeroes), and using atomic_max in each kernel. I have not done this yet, out of laziness, since I would need to change my code to use a buffer just to test it, but I believe it would be far slower. Am I right?
For completeness, here is my actual kernel (to be optimized, any suggestions welcome!)
```
__kernel void color_components2(/* base image */ __read_only image2d_t image,
                                /* uint32 */ __read_only image2d_t inputImage1,
                                __write_only image2d_t outImage1) {
  int2 gid = (int2)(get_global_id(0), get_global_id(1));
  int x = gid.x;
  int y = gid.y;
  int lock = 0;
  int2 size = get_image_dim(inputImage1);
  const sampler_t sampler =
      CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP | CLK_FILTER_NEAREST;
  uint4 base = read_imageui(image, sampler, gid);
  uint4 ui4a = read_imageui(inputImage1, sampler, gid);
  int2 t = (int2)(ui4a[0] % size.x, ui4a[0] / size.x);
  unsigned int m = ui4a[0];
  unsigned int n = ui4a[0];
  if (base[0] > 0) {
    for (int a = -1; a <= 1; a++)
      for (int b = -1; b <= 1; b++) {
        uint4 tmpa = read_imageui(inputImage1, sampler, (int2)(t.x + a, t.y + b));
        m = max(tmpa[0], m);
        uint4 tmpb = read_imageui(inputImage1, sampler, (int2)(x + a, y + b));
        n = max(tmpb[0], n);
      }
  }
  if (n > m) write_imageui(outImage1, t, (uint4)(n, 0, 0, 0));
}
```

When can an algorithm have square root(n) time complexity?

Can someone give me an example of an algorithm that has square root (O(sqrt(n))) time complexity? What does square root time complexity even mean?
Square root time complexity means that the algorithm requires O(N^(1/2)) evaluations, where N is the size of the input.
As an example of an algorithm which takes O(sqrt(n)) time, Grover's algorithm is one: a quantum algorithm for searching an unsorted database of n entries in O(sqrt(n)) time.
Let us take an example to understand how we can arrive at O(sqrt(N)) runtime complexity for a given problem. This is going to be elaborate, but interesting. (The following example, in the context of answering this question, is taken from Coding Contest Byte: The Square Root Trick, a very interesting problem with an interesting trick to arrive at O(sqrt(n)) complexity.)
Given an array A containing n elements, implement a data structure for point updates and range sum queries.
update(i, x)-> A[i] := x (Point Updates Query)
query(lo, hi)-> returns A[lo] + A[lo+1] + .. + A[hi]. (Range Sum Query)
The naive solution uses an array. It takes O(1) time for an update (array-index access) and O(hi - lo) = O(n) for the range sum (iterating from start index to end index and adding up).
A more efficient solution splits the array into length k slices and stores the slice sums in an array S.
The update takes constant time, because we have to update the value in A and the value of the corresponding slice sum in S. In update(6, 5) we have to change A[6] to 5, which results in changing the value of S[1] (with slice length k = 4) to keep S up to date.
The range-sum query is interesting. The elements of the first and last slice (partially contained in the queried range) have to be traversed one by one, but for slices completely contained in our range we can use the values in S directly and get a performance boost.
In query(2, 14), again with k = 4, we get:
query(2, 14) = A[2] + A[3]+ (A[4] + A[5] + A[6] + A[7]) + (A[8] + A[9] + A[10] + A[11]) + A[12] + A[13] + A[14] ;
query(2, 14) = A[2] + A[3] + S[1] + S[2] + A[12] + A[13] + A[14] ;
query(2, 14) = 0 + 7 + 11 + 9 + 5 + 2 + 0;
query(2, 14) = 34;
The code for update and query is:
def update(S, A, i, k, x):
    S[i // k] = S[i // k] - A[i] + x
    A[i] = x

def query(S, A, lo, hi, k):
    s = 0
    i = lo
    # Section 1: partial first slice, summed from the array A itself
    while i % k != 0 and i <= hi:
        s += A[i]
        i += 1
    # Section 2: full slices, summed directly from the slice sums S
    while i + k - 1 <= hi:
        s += S[i // k]
        i += k
    # Section 3: partial last slice, summed from the array A itself
    while i <= hi:
        s += A[i]
        i += 1
    return s
Let us now determine the complexity.
On average, each query takes:
Section 1: about k/2 time (at most k - 1 elements of the first partial slice).
Section 2: about n/k time, basically the number of slices.
Section 3: about k/2 time (at most k - 1 elements of the last partial slice).
So, in total, we get k/2 + n/k + k/2 = k + n/k time.
By the AM-GM inequality, k + n/k >= 2*sqrt(n), with equality when k = sqrt(n): sqrt(n) + n/sqrt(n) = 2*sqrt(n).
So we get an O(sqrt(n)) time complexity per query.
Prime numbers
As mentioned in some other answers, some basic computations related to prime numbers take O(sqrt(n)) time (a short sketch follows this list):
Find number of divisors
Find sum of divisors
Find Euler's totient
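As a concrete illustration of the pattern behind these, here is a minimal sketch (in Java) of counting divisors in O(sqrt(n)): every divisor d with d <= sqrt(n) pairs with the divisor n / d, so it is enough to scan up to sqrt(n).

// Counts the divisors of n in O(sqrt(n)); each divisor d <= sqrt(n) pairs with n / d.
static int countDivisors(long n) {
    int count = 0;
    for (long d = 1; d * d <= n; d++) {
        if (n % d == 0) {
            count += 2;                  // d and n / d
            if (d * d == n) count--;     // perfect square: d == n / d, count it once
        }
    }
    return count;
}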
Below I mention two advanced algorithms which also bear a sqrt(n) term in their complexity.
Mo's Algorithm
Try this problem: Powerful Array
My solution:
#include <bits/stdc++.h>
using namespace std;

const int N = 1E6 + 10, k = 500;

struct node {
    int l, r, id;
    bool operator<(const node &a) {
        if (l / k == a.l / k) return r < a.r;
        else return l < a.l;
    }
} q[N];

long long a[N], cnt[N], ans[N], cur_count;

void add(int pos) {
    cur_count += a[pos] * cnt[a[pos]];
    ++cnt[a[pos]];
    cur_count += a[pos] * cnt[a[pos]];
}

void rm(int pos) {
    cur_count -= a[pos] * cnt[a[pos]];
    --cnt[a[pos]];
    cur_count -= a[pos] * cnt[a[pos]];
}

int main() {
    int n, t;
    cin >> n >> t;
    for (int i = 1; i <= n; i++) {
        cin >> a[i];
    }
    for (int i = 0; i < t; i++) {
        cin >> q[i].l >> q[i].r;
        q[i].id = i;
    }
    sort(q, q + t);
    memset(cnt, 0, sizeof(cnt));
    memset(ans, 0, sizeof(ans));
    int curl(0), curr(0), l, r;
    for (int i = 0; i < t; i++) {
        l = q[i].l;
        r = q[i].r;
        /* This part takes O(n * sqrt(n)) time */
        while (curl < l)
            rm(curl++);
        while (curl > l)
            add(--curl);
        while (curr > r)
            rm(curr--);
        while (curr < r)
            add(++curr);
        ans[q[i].id] = cur_count;
    }
    for (int i = 0; i < t; i++) {
        cout << ans[i] << '\n';
    }
    return 0;
}
Query Buffering
Try this problem: Queries on a Tree
My solution:
#include <bits/stdc++.h>
using namespace std;

const int N = 2e5 + 10, k = 333;

vector<int> t[N], ht;
int tm_, h[N], st[N], nd[N];

inline void hei(int v, int p) {
    for (int ch : t[v]) {
        if (ch != p) {
            h[ch] = h[v] + 1;
            hei(ch, v);
        }
    }
}

inline void tour(int v, int p) {
    st[v] = tm_++;
    ht.push_back(h[v]);
    for (int ch : t[v]) {
        if (ch != p) {
            tour(ch, v);
        }
    }
    ht.push_back(h[v]);
    nd[v] = tm_++;
}

int n, tc[N];
vector<int> loc[N];
long long balance[N];
vector<pair<long long, long long>> buf;

inline long long cbal(int v, int p) {
    long long ans = balance[h[v]];
    for (int ch : t[v]) {
        if (ch != p) {
            ans += cbal(ch, v);
        }
    }
    tc[v] += ans;
    return ans;
}

inline void bal() {
    memset(balance, 0, sizeof(balance));
    for (auto arg : buf) {
        balance[arg.first] += arg.second;
    }
    buf.clear();
    cbal(1, 1);
}

int main() {
    int q;
    cin >> n >> q;
    for (int i = 1; i < n; i++) {
        int x, y; cin >> x >> y;
        t[x].push_back(y); t[y].push_back(x);
    }
    hei(1, 1);
    tour(1, 1);
    for (int i = 0; i < ht.size(); i++) {
        loc[ht[i]].push_back(i);
    }
    vector<int>::iterator lo, hi;
    int x, y, type;
    for (int i = 0; i < q; i++) {
        cin >> type;
        if (type == 1) {
            cin >> x >> y;
            buf.push_back(make_pair(x, y));
        } else if (type == 2) {
            cin >> x;
            long long ans(0);
            for (auto arg : buf) {
                hi = upper_bound(loc[arg.first].begin(), loc[arg.first].end(), nd[x]);
                lo = lower_bound(loc[arg.first].begin(), loc[arg.first].end(), st[x]);
                ans += arg.second * (hi - lo);
            }
            cout << tc[x] + ans / 2 << '\n';
        } else assert(0);
        if (i % k == 0) bal();
    }
}
There are many cases. These are a few problems which can be solved in O(sqrt(n)) complexity (better may be possible as well):
Find whether a number is prime or not.
Grover's algorithm: allows search (in a quantum context) on an unsorted input in time proportional to the square root of the size of the input.
Factorization of a number.
There are many problems you will face which will demand the use of an O(sqrt(n)) algorithm.
As an answer to the second part: sqrt(n) complexity means that if the input size to your algorithm is n, then the algorithm performs approximately sqrt(n) basic operations (like comparisons in the case of sorting). Then we can say that the algorithm has sqrt(n) time complexity.
Let's analyze the third problem (factorization) and it will become clear.
Let n be a positive integer. Then there exist two positive integers x and y such that
x * y = n
Whatever the values of x and y, at least one of them is at most sqrt(n). For if both were greater than sqrt(n), i.e. x > sqrt(n) and y > sqrt(n), then x * y > sqrt(n) * sqrt(n) = n, contradicting x * y = n.
So if we check every candidate from 2 to sqrt(n), we will have considered all the factors (1 and n are the trivial factors).
Code snippet:
int n;
cin >> n;
cout << 1 << " " << n << " ";        // the trivial factors
for (int i = 2; i <= sqrt(n); i++)   // or: for (int i = 2; i * i <= n; i++)
    if ((n % i) == 0)
        cout << i << " ";
Note: You might think that, ignoring duplicates, we could also achieve the above behaviour by looping from 1 to n. Yes, that is possible, but who wants to run in O(n) a program that can run in O(sqrt(n))? We always look for the best option.
Go through Cormen's Introduction to Algorithms.
I would also suggest reading the following Stack Overflow questions and answers; they will clear up the doubts for sure :)
Are there any O(1/n) algorithms?
Plain english explanation Big-O
Which one is better?
How do you calculate big-O complexity?
This link provides a very basic, beginner-level understanding of O(sqrt(n)) time complexity. It is the last example in the video, but I would suggest that you watch the whole video.
https://www.youtube.com/watch?v=9TlHvipP5yA&list=PLDN4rrl48XKpZkf03iYFl-O29szjTrs_O&index=6
The simplest example of an O(sqrt(n)) time complexity algorithm in the video is:
p = 0;
for (i = 1; p <= n; i++) {
    p = p + i;
}
Mr. Abdul Bari is renowned for his simple explanations of data structures and algorithms.
Primality test
Solution in JavaScript
const isPrime = n => {
    for (let i = 2; i <= Math.sqrt(n); i++) {
        if (n % i === 0) return false;
    }
    return true;
};
Complexity
O(N^(1/2)), because for a given value of n you only need to check whether it is divisible by the numbers from 2 up to its square root.
JS Primality Test
O(sqrt(n))
A slightly more performant version, thanks to Samme Bae for enlightening me with this. 😉
function isPrime(n) {
    if (n <= 1)
        return false;
    if (n <= 3)
        return true;

    // Skip 4, 6, 8, 9, and 10
    if (n % 2 === 0 || n % 3 === 0)
        return false;

    for (let i = 5; i * i <= n; i += 6) {
        if (n % i === 0 || n % (i + 2) === 0)
            return false;
    }
    return true;
}
isPrime(677);

How much time will the Fibonacci series take to compute?

I have created the recursive call tree by applying the brute-force technique, but when I give this algorithm a value of 100 it takes trillions of years to compute.
What do you suggest I do so that it runs fast for a value of 100?
Here is what I have done so far:
function fib(n) {
    if (n <= 1) {
        return n;
    } else {
        return fib(n - 1) + fib(n - 2);
    }
}
You can also do it with a loop:
int a = 1;
int b = 1;
for (int i = 2; i < 100; i++) {
    int temp = a + b;
    a = b;
    b = temp;
}
System.out.println("Fib 100 is: " + b);
The runtime is linear and avoids the overhead caused by the recursive calls.
EDIT: Please note that the printed result is wrong. Since Fib(100) is bigger than Integer.MAX_VALUE, you have to use BigInteger or similar to get the correct output, but the "logic" stays the same.
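For reference, here is a minimal sketch of the same loop using java.math.BigInteger, so that Fib(100) no longer overflows:

import java.math.BigInteger;

// Same iterative idea, but with arbitrary-precision arithmetic so Fib(100) fits.
BigInteger a = BigInteger.ONE;
BigInteger b = BigInteger.ONE;
for (int i = 2; i < 100; i++) {
    BigInteger temp = a.add(b);
    a = b;
    b = temp;
}
System.out.println("Fib 100 is: " + b);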
You could have a "cache", where you save already computed Fibonacci numbers. Every time you try to compute
fib(n-1) /* or */ fib(n-2) ;
You would first look into your array of already computed numbers. If it's there, you save a whole lot of time.
So every time you do compute a fibonacci number, save it into your array or list, at the corresponding index.
const fiboList = [];  // cache of already computed Fibonacci numbers

function fib(n)
{
    if (n <= 1)
    {
        return n;
    }
    if (fiboList[n] !== undefined)
    {
        return fiboList[n];
    }
    else
    {
        const fibo = fib(n - 1) + fib(n - 2);
        fiboList[n] = fibo;
        return fibo;
    }
}
You can also do it by dynamic programming:
def fibo(n):
    dp = [0, 1] + ([0] * n)

    def dpfib(n):
        return dp[n - 1] + dp[n - 2]

    for i in range(2, n + 2):
        dp[i] = dpfib(i)

    return dp[n]

Is there a linspace() like method in Math.Net

Is there a function in Math.Net like (MATLAB/Octave/NumPy)'s linspace(), which takes three parameters (min, max, length) and creates a vector/array of evenly spaced values between min and max? It is not hard to implement, but if there is already such a function I would prefer to use it.
There is none exactly like linspace, but the signal generator comes quite close and creates an array:
SignalGenerator.EquidistantInterval(x => x, min, max, len)
I'm not fresh on the VB.net syntax, but I guess it's very close to C#.
In case you need a vector:
new DenseVector(SignalGenerator.EquidistantInterval(x => x, min, max, len))
Or you could implement it e.g. using the static Create function (in practice you may want to precompute the step):
DenseVector.Create(len, i => min + i*(max-min)/(len - 1.0))
Update 2013-12-14:
Since v3.0.0-alpha7 this is covered by two new functions:
Generate.LinearSpaced(length, a, b) -> MATLAB linspace(a, b, length)
Generate.LinearRange(a, [step], b) -> MATLAB a:step:b
I used this C# code to replicate the functionality of linspace (the way NumPy does it); feel free to use it.
public static float[] linspace(float startval, float endval, int steps)
{
    float interval = (endval / MathF.Abs(endval)) * MathF.Abs(endval - startval) / (steps - 1);
    return (from val in Enumerable.Range(0, steps)
            select startval + (val * interval)).ToArray();
}
Here is the VB Translation I made.
Public Function linspace(startval As Single, endval As Single, Steps As Integer) As Single()
    Dim interval As Single = (endval / Math.Abs(endval)) * (Math.Abs(endval - startval)) / (Steps - 1)
    Return (From val In Enumerable.Range(0, Steps) Select startval + (val * interval)).ToArray()
End Function
Usage examples:
C#
float[] arr = linspace(-4,4,5)
VB
Dim arr as Single() = linspace(-4,4,5)
Result:
-4,-2,0,2,4
I checked the result from the code shown below against MATLAB's linspace, and it matches exactly. I use it myself in my research work on Monte Carlo implementations.
Below is the actual code.
static double[] LINSPACE(double StartValue, double EndValue, int numberofpoints)
{
    double[] parameterVals = new double[numberofpoints];
    double increment = Math.Abs(StartValue - EndValue) / Convert.ToDouble(numberofpoints - 1);
    int j = 0; // will keep a track of the numbers
    double nextValue = StartValue;
    for (int i = 0; i < numberofpoints; i++)
    {
        parameterVals.SetValue(nextValue, j);
        j++;
        if (j > numberofpoints)
        {
            throw new IndexOutOfRangeException();
        }
        nextValue = nextValue + increment;
    }
    return parameterVals;
}

Number of possible combinations

How many combinations of the variables a, b, c, d, e are possible if I know that:
a+b+c+d+e = 500
and that they are all integers >= 0, so I know they are finite?
@Torlack, @Jason Cohen: Recursion is a bad idea here, because there are "overlapping subproblems". I.e., if you choose a as 1 and b as 2, then you have 3 variables left that should add up to 497; you arrive at the same subproblem by choosing a as 2 and b as 1. (The number of such coincidences explodes as the numbers grow.)
The traditional way to attack such a problem is dynamic programming: build a table bottom-up of the solutions to the sub-problems (starting with "how many combinations of 1 variable add up to 0?") then building up through iteration (the solution to "how many combinations of n variables add up to k?" is the sum of the solutions to "how many combinations of n-1 variables add up to j?" with 0 <= j <= k).
public static long getCombos(int n, int sum) {
    // tab[i][j] is how many combinations of (i+1) vars add up to j
    long[][] tab = new long[n][sum + 1];
    // # of combos of 1 var for any sum is 1
    for (int j = 0; j < tab[0].length; ++j) {
        tab[0][j] = 1;
    }
    for (int i = 1; i < tab.length; ++i) {
        for (int j = 0; j < tab[i].length; ++j) {
            // # combos of (i+1) vars adding up to j is the sum of the #
            // of combos of i vars adding up to k, for all 0 <= k <= j
            // (choosing i vars forces the choice of the (i+1)st).
            tab[i][j] = 0;
            for (int k = 0; k <= j; ++k) {
                tab[i][j] += tab[i - 1][k];
            }
        }
    }
    return tab[n - 1][sum];
}
$ time java Combos
2656615626
real 0m0.151s
user 0m0.120s
sys 0m0.012s
The answer to your question is 2656615626.
Here's the code that generates the answer:
public static long getNumCombinations(int summands, int sum)
{
    if (summands <= 1)
        return 1;
    long combos = 0;
    for (int a = 0; a <= sum; a++)
        combos += getNumCombinations(summands - 1, sum - a);
    return combos;
}
In your case, summands is 5 and sum is 500.
Note that this code is slow. If you need speed, cache the results for (summands, sum) pairs.
I'm assuming you want numbers >= 0. If you want > 0, replace the loop initialization with a = 1 and the loop condition with a < sum. I'm also assuming you want permutations (e.g. counting 1+2+3+4+5 and 2+1+3+4+5 as different). You could change the for-loop if you wanted a >= b >= c >= d >= e.
I solved this problem for my dad a couple of months ago... extend it for your use. These tend to be one-time problems, so I didn't go for the most reusable solution...
a + b + c + d = sum
i = number of combinations

for (a = 0; a <= sum; a++)
{
    for (b = 0; b <= (sum - a); b++)
    {
        for (c = 0; c <= (sum - a - b); c++)
        {
            // d = sum - a - b - c;
            i++;
        }
    }
}
This would actually be a good question to ask in an interview, as it is simple enough that you could write it up on a whiteboard, but complex enough that it might trip someone up if they don't think carefully enough about it. Also, you can ask for two different answers, which cause the implementation to be quite different.
Order Matters
If the order matters, then any solution needs to allow zero to appear for any of the variables; thus, the most straightforward solution would be as follows:
public class Combos {
    public static void main(String[] args) {
        long counter = 0;
        for (int a = 0; a <= 500; a++) {
            for (int b = 0; b <= (500 - a); b++) {
                for (int c = 0; c <= (500 - a - b); c++) {
                    for (int d = 0; d <= (500 - a - b - c); d++) {
                        counter++;
                    }
                }
            }
        }
        System.out.println(counter);
    }
}
Which returns 2656615626.
Order Does Not Matter
If the order does not matter, then the solution is not that much harder: you just need to make sure that zero isn't allowed unless the sum has already been reached.
public class Combos {
    public static void main(String[] args) {
        long counter = 0;
        for (int a = 1; a <= 500; a++) {
            for (int b = (a != 500) ? 1 : 0; b <= (500 - a); b++) {
                for (int c = (a + b != 500) ? 1 : 0; c <= (500 - a - b); c++) {
                    for (int d = (a + b + c != 500) ? 1 : 0; d <= (500 - a - b - c); d++) {
                        counter++;
                    }
                }
            }
        }
        System.out.println(counter);
    }
}
Which returns 2573155876.
One way of looking at the problem is as follows:
First, a can be any value from 0 to 500. Then it follows that b+c+d+e = 500-a. This reduces the problem by one variable. Recurse until done.
For example, if a is 500, then b+c+d+e=0 which means that for the case of a = 500, there is only one combination of values for b,c,d and e.
If a is 300, then b+c+d+e=200, which is in fact the same problem as the original problem, just reduced by one variable.
Note: As Chris points out, this is a horrible way of actually trying to solve the problem.
If they are real numbers, then infinite... otherwise it is a bit trickier.
(OK, for any computer representation of a real number there would be a finite count ... but it would be big!)
There is a general formula: if
a + b + c + d = N
then the number of non-negative integral solutions is C(N + number_of_variables - 1, N).
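A hedged sketch (in Java) of evaluating that formula directly, using an overflow-safe iterative binomial coefficient; for the original question (5 variables summing to 500) it gives C(504, 4) = 2656615626, matching the brute-force answers above.

// Stars and bars: the number of non-negative integer solutions of
// x1 + x2 + ... + xk = n is C(n + k - 1, k - 1).
static long binomial(long n, long r) {
    r = Math.min(r, n - r);
    long result = 1;
    for (long i = 1; i <= r; i++) {
        result = result * (n - r + i) / i;  // exact at every step
    }
    return result;
}

// binomial(500 + 5 - 1, 5 - 1) == 2656615626L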
@Chris Conway's answer is correct. I have tested it with simple code that is suitable for smaller sums.
long counter = 0;
int sum = 25;
for (int a = 0; a <= sum; a++) {
    for (int b = 0; b <= sum; b++) {
        for (int c = 0; c <= sum; c++) {
            for (int d = 0; d <= sum; d++) {
                for (int e = 0; e <= sum; e++) {
                    if ((a + b + c + d + e) == sum) counter = counter + 1L;
                }
            }
        }
    }
}
System.out.println("counter e " + counter);
The answer in math is 504!/(500! * 4!).
Formally, for x1 + x2 + ... + xk = n, the number of combinations of nonnegative integers x1, ..., xk is the binomial coefficient C(n + k - 1, k - 1): the number of (k-1)-combinations out of a set containing (n + k - 1) elements.
The intuition is to choose (k-1) separator points from (n + k - 1) points and use the number of points between two chosen separators to represent a number in x1, ..., xk.
Sorry about the poor math formatting; this is my first time answering on Stack Overflow.
Including negatives? Infinite.
Including only positives? In that case they wouldn't be called "integers" but "naturals" instead. In that case... I can't really solve this; I wish I could, but my math is too rusty. There is probably some crazy integral way to solve this. I can give some pointers for the math-skilled around.
With x being the end result:
the range of a would be from 0 to x,
the range of b would be from 0 to (x - a),
the range of c would be from 0 to (x - a - b),
and so forth, until e.
The answer is the sum of all those possibilities.
I am trying to find a more direct formula on Google, but my Google-fu is really low today...