Maximum contiguous subsequence sum of x elements

So I came up with a question that I've looked and searched for but found no answer to... What's the best (and by "the best" I mean the fastest) way to get the maximum contiguous subsequence sum of x elements?
Imagine that I have: A[] = {2, 4, 1, 10, 40, 50, 22, 1, 24, 12, 40, 11, ...}.
And then I ask:
"What is the maximum contigous subsequence on array A with 3 elements?"
Please imagine this on an array with more than 100,000 elements... Can someone help me?
Thank you for your time and your help!

I Googled it and found this:
Using the Divide and Conquer approach, we can find the maximum subarray sum in O(n log n) time. Kadane's algorithm, however, takes O(n) time, so Kadane's algorithm is better than the Divide and Conquer approach.
See the code:
Initialize:
    max_so_far = 0
    max_ending_here = 0

Loop for each element of the array:
    (a) max_ending_here = max_ending_here + a[i]
    (b) if (max_ending_here < 0)
            max_ending_here = 0
    (c) if (max_so_far < max_ending_here)
            max_so_far = max_ending_here

return max_so_far
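
For what it's worth, that pseudocode solves the unrestricted maximum-subarray problem, while the question asks about a fixed window of x elements, which a simple O(n) sliding window handles directly. A minimal Python sketch of both (the function names are my own):

def kadane(a):
    # Kadane's algorithm: maximum contiguous subarray sum of any length.
    # Note: this variant returns 0 for all-negative input.
    max_so_far = 0
    max_ending_here = 0
    for v in a:
        max_ending_here += v
        if max_ending_here < 0:
            max_ending_here = 0
        if max_so_far < max_ending_here:
            max_so_far = max_ending_here
    return max_so_far

def max_window_sum(a, x):
    # Sliding window: maximum sum over all contiguous runs of exactly x elements.
    window = sum(a[:x])              # sum of the first x elements
    best = window
    for i in range(x, len(a)):
        window += a[i] - a[i - x]    # slide the window one position to the right
        best = max(best, window)
    return best

A = [2, 4, 1, 10, 40, 50, 22, 1, 24, 12, 40, 11]
print(max_window_sum(A, 3))  # 112 (the run 40, 50, 22)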


Why does the following algorithm have runtime log(log(n))?

I don't understand how the runtime of the algorithm can be log(log(n)). Can someone help me?
s = 1
while s <= (log n)^2 do
    s = 3s
Notation note: log(n) indicates log2(n) throughout the solution.
Well, I suppose (log n)^2 indicates the square of log(n), which means log(n)*log(n). Let us try to analyze the algorithm.
It starts from s=1 and goes like 1,3,9,27...
Since it goes by the exponents of 3, after each iteration s can be shown as 3^m, m being the number of iterations starting from 1.
We will do these iterations until s becomes bigger than log(n)*log(n). So at some point 3^m will be equal to log(n)*log(n).
Solve the equation:
3^m = log(n) * log(n)
m = log3(log(n) * log(n))
Time complexity of the algorithm can be shown as O(m). We have to express m in terms of n.
log3(log(n) * log(n)) = log3(log(n)) + log3(log(n))
= 2 * log3(log(n)). For Big-Oh notation, constants do not matter, so let us get rid of the 2.
Time complexity = O(log3(log(n)))
Well, OK, here is the deal: by definition, Big-Oh notation gives an upper bound on a function's runtime. Therefore O(n) ⊆ O(n^2).
Notice that log3(a) < log2(a) after a point.
By the same logic we can conclude that O(log3(log(n))) ⊆ O(log(log(n))).
So the time complexity of the algorithm is O(log(log n)).
Not the most scientific explanation, but I hope you got the point.
This follows as a special case of a more general principle. Consider the following loop:
s = 1
while s < k:
    s = 3s
How many times will this loop run? Well, the values s takes on will be 1, 3, 9, 27, 81, ... = 3^0, 3^1, 3^2, 3^3, ... . And more generally, on the ith iteration of the loop, the value of s will be 3^i.
This loop stops running as soon as 3^i overshoots k. To figure out where that is, we can equate and solve:
3^i = k
i = log3 k
So this loop will run a total of log3 k times.
Now, what do you think would happen if we used this loop instead?
s = 1
while s < k:
    s = 4s
Using similar logic, the number of loop iterations would be log4 k. And more generally, if we have the following loop:
s = 1
while s < k:
    s = c * s
Then assuming c > 1, the number of iterations will be log_c k.
Given this, let's look at your loop:
s = 1
while s <= (log n)^2 do
    s = 3s
Using the reasoning from above, the number of iterations of this loop works out to log3((log n)^2). Using properties of logarithms, we can simplify this to
log3((log n)^2)
= 2 log3(log n)
= O(log log n).
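
To sanity-check this empirically, here is a small Python counter (my own illustration; log is taken base 2, per the notation note above):

import math

def count_iterations(n):
    # Runs s = 1; while s <= (log2 n)^2: s = 3*s, and counts the iterations.
    s, count = 1, 0
    limit = math.log2(n) ** 2
    while s <= limit:
        s *= 3
        count += 1
    return count

# The count grows like log3((log2 n)^2) = 2*log3(log2 n), i.e. O(log log n).
for n in [2**8, 2**16, 2**32, 2**64]:
    predicted = math.log(math.log2(n) ** 2, 3)
    print(n, count_iterations(n), round(predicted, 2))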

Optimization, time complexity and flowchart (Scilab)

I tried to optimize this code, but I couldn't find any way to optimize it further.
Please help with building a flowchart for this algorithm.
A = [-1,0,1,2,3,5,6,8,10,13,19,23,45];
B = [0,1,3,6,7,8,9,12,45];
N1 = length(A);
N2 = length(B);
t = 1;
m = 10;
C = [];
for i=1:N1
    for j=1:N2
        if A(i)==B(j)
            break
        else
            if j==N2
                C(t)=A(i);
                t=t+1;
            end
        end
    end
end
disp(C);
N3=length(C);
R = [];
y = 1;
for l=1:N3
    if C(l)>m
        R(y)=C(l);
        y=y+1;
    end
end
disp(R);
How do I find the time complexity of this algorithm?
I think it should be O(n).
Dominant (elementary) operation: the comparison A(i)==B(j).
But I am not sure yet, and I can't work out the complexity function (worst case), i.e. the worst-case computational complexity F(N).
"Optimization" depends of your goal for exmple you may want to minimize the number of floating point operation or to minimize the number of Scilab instruction or minimize the execution time of the algorithm.
As Scilab is an intepreted langage it is possible to reduce the execution time ans code length applying vectorization.
For example your code
N3=length(C);
R = [];
y = 1;
for l=1:N3
    if C(l)>m
        R(y)=C(l);
        y=y+1;
    end
end
may be rewritten:
R=C(C>m)
Here the number of computer operations is more or less the same as in the original code, but the execution time is many times faster:
Let C=rand(1,1000);m=0.5;
--> timer();R=C(C>0.5);timer()
ans =
0.000137
--> timer();
--> N3=length(C);
--> R = [];
--> y = 1;
--> for l=1:N3
> if C(l)>m
> R(y)=C(l);
> y=y+1;
> end
> end
--> timer()
ans =
0.388749
This seems like it's probably homework ;p
As for time complexity, it looks more like it would be O(n²), since you have a for loop inside another for loop.
I recently started watching a course about algorithms and data structures on Udemy that I highly recommend. The author explains Big-O notation quite well :)
Link to course. (No affiliation with the author)
As far as the optimization is concerned, you should consider that Scilab (like its fellow mathematical programming languages MATLAB and Octave) is intrinsically vectorized. This means that if you are using many for loops, you should probably go back and read some documentation and tutorials. The canonical version of the code you wrote can be implemented as:
A = [-1, 0, 1, 2, 3, 5, 6, 8, 10, 13, 19, 23, 45];
B = [0, 1, 3, 6, 7, 8, 9, 12, 45];
C = setdiff(A, B);
disp(C);
m = 10;
R = C(C > m);
disp(R);
Result:
-1. 2. 5. 10. 13. 19. 23.
13. 19. 23.
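
For comparison, the same set-difference-then-filter idea carries over to other vectorized environments. A sketch in Python with numpy, where np.setdiff1d plays the role of the setdiff call above:

import numpy as np

A = np.array([-1, 0, 1, 2, 3, 5, 6, 8, 10, 13, 19, 23, 45])
B = np.array([0, 1, 3, 6, 7, 8, 9, 12, 45])

C = np.setdiff1d(A, B)   # elements of A not present in B (sorted)
print(C)                 # [-1  2  5 10 13 19 23]

m = 10
R = C[C > m]             # boolean-mask filtering, like C(C > m) above
print(R)                 # [13 19 23]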

Optimizing specific numbers to reach value

I'm trying to make a program that, when given specific values (let's say 1, 4 and 10), will work out how many of each value are needed to reach a certain amount, say 19.
It should always try to use as many high values as possible, so in this case the result should be 10*1, 4*2, 1*1.
I tried thinking about it, but couldn't come up with an algorithm that works...
Any help or hints would be welcome!
Here is a Python solution that tries all the choices until one is found. If you pass the values it can use in descending order, the first solution found will be the one that uses as many high values as possible:
def solve(left, idx, nums, used):
    if left == 0:
        return True
    for i in range(idx, len(nums)):
        # try j copies of nums[i], from as many as fit down to one
        j = left // nums[i]
        while j > 0:
            used.append((nums[i], j))
            if solve(left - j * nums[i], i + 1, nums, used):
                return True
            used.pop()
            j -= 1
    return False

solution = []
solve(19, 0, [10, 4, 1], solution)
print(solution)  # will print [(10, 1), (4, 2), (1, 1)]
If anyone needs a simple algorithm, one way I found was:
- sort the values in descending order
- keep track of how many of each value are kept
- for each value, do:
    - if the sum is equal to the target, stop
    - if it isn't the first value, remove one of the previous values
    - while the total sum of values is smaller than the objective:
        - add the current value once
Have a nice day!
(As juviant mentioned, this won't work if it skips larger numbers and only uses smaller ones! I'll try to improve it and post a new version when I get it to work.)

Octave: summing indexed elements

The easiest way to describe this is via example:
data = [1, 5, 3, 6, 10];
indices = [1, 2, 2, 2, 4];
result = zeros(1, 5);
I want result(1) to be the sum of all the elements of data whose corresponding entry in indices is 1, result(2) to be the sum of all the elements whose corresponding entry is 2, etc.
This works but is really slow when applied (changing 5 to 65535) to 64K element vectors:
result = result + arrayfun(@(x) sum(data(indices==x)), 1:5);
I think it's creating 64K vectors of 64K elements each, and that's what's taking up the time. Is there a faster way to do this, or do I need to figure out a completely different approach?
for i = [1:5]
    idx = indices(i);
    result(idx) = result(idx) + data(i);
endfor
But that's a very non-octave-y way to do it.
Seeing how MATLAB is very similar to Octave, I will provide an answer that was tested on MATLAB R2016b. Looking at the documentation of Octave 4.2.1 the syntax should be the same.
All you need to do is this:
result = accumarray(indices(:), data(:), [5 1]).'
Which gives:
result =
1 14 0 10 0
Reshaping to a column vector (arrayName(:)) is necessary because of the inputs accumarray expects. Specifying the size as [5 1] and then transposing the result was done to avoid a MATLAB error.
accumarray is also described in depth in the MATLAB documentation
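
If it helps to see the same accumulation pattern outside Octave/MATLAB, numpy's bincount does the equivalent job (a sketch; bincount bins are 0-based, so bin 0 is dropped from the result):

import numpy as np

data = np.array([1, 5, 3, 6, 10])
indices = np.array([1, 2, 2, 2, 4])

# bincount sums data[k] into bin indices[k]; minlength=6 guarantees bins 0..5
result = np.bincount(indices, weights=data, minlength=6)[1:]
print(result)  # [ 1. 14.  0. 10.  0.]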

How to solve Project Euler Problem 303 faster?

The problem is:
For a positive integer n, define f(n) as the least positive multiple of n that, written in base 10, uses only digits ≤ 2.
Thus f(2)=2, f(3)=12, f(7)=21, f(42)=210, f(89)=1121222.
To solve it in Mathematica, I wrote a function f which calculates f(n)/n :
f[n_] := Module[{i}, i = 1;
While[Mod[FromDigits[IntegerDigits[i, 3]], n] != 0, i = i + 1];
Return[FromDigits[IntegerDigits[i, 3]]/n]]
The principle is simple: enumerate all numbers whose digits are 0, 1 and 2 (using the ternary numeral system) until one of them is divisible by n.
It correctly gives 11363107 for n = 1 to 100, and I tested it for 1 to 1000 (the calculation took roughly a minute and gave 111427232491), so I started to calculate the answer to the problem.
However, this method is too slow. The computer has been calculating the answer for two hours and hasn't finished.
How can I improve my code to calculate faster?
hammar's comment makes it clear that the calculation time is disproportionately spent on values of n that are multiples of 99. I would suggest finding an algorithm that targets those cases (I have left this as an exercise for the reader) and using Mathematica's pattern matching to direct the calculation to the appropriate one.
f[n_Integer?Positive]/; Mod[n,99]==0 := (* magic here *)
f[n_] := (* case for all other numbers *) Module[{i}, i = 1;
While[Mod[FromDigits[IntegerDigits[i, 3]], n] != 0, i = i + 1];
Return[FromDigits[IntegerDigits[i, 3]]/n]]
Incidentally, you can speed up the fast, easy ones by doing it a slightly different way, but that is of course a second-order improvement. You could perhaps set the code up to use ff initially, breaking the While loop if i reaches a certain point, and then switching to the f function you have already provided. (Notice I'm returning n i, not i, here; that was just for illustrative purposes.)
ff[n_] :=
Module[{i}, i = 1; While[Max[IntegerDigits[n i]] > 2, i++];
Return[n i]]
Table[Timing[ff[n]], {n, 80, 90}]
{{0.000125, 1120}, {0.001151, 21222}, {0.001172, 22222}, {0.00059,
11122}, {0.000124, 2100}, {0.00007, 1020}, {0.000655,
12212}, {0.000125, 2001}, {0.000119, 2112}, {0.04202,
1121222}, {0.004291, 122220}}
This is at least a little faster than your version (reproduced below) for the short cases, but it's much slower for the long cases.
Table[Timing[f[n]], {n, 80, 90}]
{{0.000318, 14}, {0.001225, 262}, {0.001363, 271}, {0.000706,
134}, {0.000358, 25}, {0.000185, 12}, {0.000934, 142}, {0.000316,
23}, {0.000447, 24}, {0.006628, 12598}, {0.002633, 1358}}
A simple thing that you can do is compile your function to C and make it parallelizable.
Clear[f, fCC]
f[n_Integer] := f[n] = fCC[n]
fCC = Compile[{{n, _Integer}}, Module[{i = 1},
While[Mod[FromDigits[IntegerDigits[i, 3]], n] != 0, i++];
Return[FromDigits[IntegerDigits[i, 3]]]],
Parallelization -> True, CompilationTarget -> "C"];
Total[ParallelTable[f[i]/i, {i, 1, 100}]]
(* Returns 11363107 *)
The problem is that eventually your integers will be larger than a long integer, and Mathematica will revert to the non-compiled arbitrary-precision arithmetic. (I don't know why the Mathematica compiler does not include an arbitrary-precision C library...)
As ShreevatsaR commented, Project Euler problems are often designed to run quickly if you write smart code (and think about the math), but to take forever if you try to brute-force them. See the about page. Also, spoilers posted on their message boards are removed, and it's considered bad form to post spoilers on other sites.
Aside:
You can test that the compiled code is using 32-bit longs by running
In[1]:= test = Compile[{{n, _Integer}}, {n + 1, n - 1}];
In[2]:= test[2147483646]
Out[2]= {2147483647, 2147483645}
In[3]:= test[2147483647]
During evaluation of In[53]:= CompiledFunction::cfn: Numerical error encountered at instruction 1; proceeding with uncompiled evaluation. >>
Out[3]= {2147483648, 2147483646}
In[4]:= test[2147483648]
During evaluation of In[52]:= CompiledFunction::cfsa: Argument 2147483648 at position 1 should be a machine-size integer. >>
Out[4]= {2147483649, 2147483647}
and similar for the negative numbers.
I am sure there must be better ways to do this, but this is as far as my inspiration got me.
The following code finds all values of f[n] for n from 1 to 10,000 except the most difficult one, which happens to be n = 9999. I stop the loop when we get there.
ClearAll[f];
i3 = 1;
divNotFound = Range[10000];
While[Length[divNotFound] > 1,
i10 = FromDigits[IntegerDigits[i3++, 3]];
divFound = Pick[divNotFound, Divisible[i10, divNotFound]];
divNotFound = Complement[divNotFound, divFound];
Scan[(f[#] = i10) &, divFound]
] // Timing
Divisible can work on lists in both arguments, and we make good use of that here. The whole routine takes about 8 minutes.
For 9999 a bit of thinking is necessary. It is not brute-forceable in a reasonable time.
Let P be the factor we are looking for and T (consisting only of 0's, 1's and 2's) the result of multiplying P by 9999; that is,
9999 P = T
then
P(10,000 - 1) = 10,000 P - P = T
==> 10,000 P = P + T
Let P_1, ..., P_L be the digits of P and T_i the digits of T. Writing out the long addition 10,000 P = P + T digit by digit gives a scheme (shown as a figure in the original post, not reproduced here).
The last four zeros in the sum originate, of course, from the multiplication by 10,000. Hence T_(L+1), ..., T_(L+4) and P_(L-3), ..., P_L are each other's complement. Where the former consist only of 0's, 1's and 2's, the latter allows:
last4 = IntegerDigits[#][[-4 ;; -1]] & /@ (10000 - FromDigits /@ Tuples[{0, 1, 2}, 4])
==> {{0, 0, 0, 0}, {9, 9, 9, 9}, {9, 9, 9, 8}, {9, 9, 9, 0}, {9, 9, 8, 9},
{9, 9, 8, 8}, {9, 9, 8, 0}, {9, 9, 7, 9}, ..., {7, 7, 7, 9}, {7, 7, 7, 8}}
There are only 81 allowable endings, made up of 0's, 7's, 8's and 9's (and not all possible combinations of them), instead of 10,000 numbers: a speed gain of a factor of about 120.
One can see that P_1 to P_4 can only be ternary digits (0, 1 or 2), each being the sum of a ternary digit and a zero. Also, there can be no carry-over from the addition of T_5 and P_1. A further reduction can be gained by realizing that P_1 cannot be 0 (the first digit must be something), and if it were a 2, the multiplication by 9999 would cause an 8 or a 9 (if a carry occurs) in the result T, which is not allowed either. It must then be a 1. Two's may likewise be excluded for P_2 to P_4.
Since P_5 = P_1 + T_5, it follows that P_5 < 4, as T_5 < 3; the same holds for P_6 to P_8.
Since P_9 = P_5 + T_9, it follows that P_9 < 6; the same holds for P_10 and P_11.
In all these cases the additions need not include a carry-over, as none can occur (P_i + T_i is always < 8). This may not be true for P_12 if L = 16: in that case we can have a carry-over from the addition of the last 4 digits, so P_12 < 7. This also excludes P_12 from being in the last block of 4 digits. The solution must therefore be at least 16 digits long.
Combining all this we are going to try to find a solution for L=16:
Do[
If[Max[IntegerDigits[
9999 FromDigits[{1, 1, 1, 1, i5, i6, i7, i8, i9, i10, i11, i12}~
Join~l4]]
] < 3,
Return[FromDigits[{1, 1, 1, 1, i5, i6, i7, i8, i9, i10, i11, i12}~Join~l4]]
],
{i5, 0, 3}, {i6, 0, 3}, {i7, 0, 3}, {i8, 0, 3}, {i9, 0, 5},
{i10, 0, 5}, {i11, 0, 5}, {i12, 0, 6}, {l4,last4}
] // Timing
==> {295.372, 1111333355557778}
and indeed 1,111,333,355,557,778 x 9,999 = 11,112,222,222,222,222,222
We could have guessed this as
f[9] = 12,222
f[99] = 1,122,222,222
f[999] = 111,222,222,222,222
The pattern apparently being that the number of 1's increases by 1 each step and the number of consecutive 2's by 4.
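The quoted values make the pattern easy to check mechanically (a small Python verification of just these four cases; whether the pattern continues for larger k is not established here):

# f(10^k - 1) appears to be k ones followed by 4k twos (checked for k = 1..4).
for k in range(1, 5):
    n = 10**k - 1
    candidate = int('1' * k + '2' * (4 * k))
    assert candidate % n == 0
    print(n, candidate, candidate // n)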
At 13 minutes, this is over the one-minute limit for Project Euler. Perhaps I'll look into it some time soon.
Try something smarter.
Build a function F(N) which finds the smallest number whose digits are all in {0, 1, 2} and which is divisible by N.
So for a given N, the number we are looking for can be written as SUM = 10^n * d_n + 10^(n-1) * d_(n-1) + ... + 10^1 * d_1 + d_0 (where the d_i are the digits of the number).
So you have to find the digits such that SUM % N == 0.
Basically, each digit contributes (10^i * d_i) % N to SUM % N.
I am not giving any more hints, but the next hint would be to use DP (dynamic programming). Try to figure out how to use DP to find the digits.
For all numbers between 1 and 10,000 it took under 1 second in C++ (in total).
Good luck.
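
For readers who want to follow the hint, the DP can be phrased as a breadth-first search over remainders mod N: generate candidates with digits 0, 1 and 2 in increasing numeric order, and keep only the first candidate seen for each remainder. A minimal Python sketch of that idea (my own illustration, not the answerer's C++):

from collections import deque

def f(n):
    # Smallest positive multiple of n whose decimal digits are all <= 2.
    seen = set()
    queue = deque()
    for d in (1, 2):                  # a leading digit can't be 0
        queue.append((d % n, str(d)))
        seen.add(d % n)
    while queue:
        r, s = queue.popleft()
        if r == 0:                    # first remainder 0 reached = smallest multiple
            return int(s)
        for d in (0, 1, 2):           # append one more digit
            nr = (r * 10 + d) % n
            if nr not in seen:        # only the first (smallest) prefix per remainder matters
                seen.add(nr)
                queue.append((nr, s + str(d)))

print(f(2), f(3), f(7), f(42), f(89))  # 2 12 21 210 1121222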