Problem understanding the flowchart for the sum of the first n natural numbers

While drawing the flowchart for the sum of the first n natural numbers, I ran into a problem.
At the step where
val <= n [the "No" branch is taken],
isn't it incorrect that sum is printed next? The value doesn't seem to get added to sum in between, so please help me with this problem.
I tried to work it out on my own, but failed.
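Since the flowchart itself isn't shown, here is a minimal Python sketch of the loop such a flowchart usually describes (the names val, sum and n are taken from the question; the accumulate-then-increment order is my assumption):

n = int(input())
total = 0              # the flowchart's "sum"; renamed to avoid shadowing Python's sum()
val = 1
while val <= n:        # decision box: val <= n?
    total = total + val    # "Yes" branch: add the current value first...
    val = val + 1          # ...then move on to the next value
print(total)           # "No" branch: total already holds 1 + 2 + ... + n

The key point: the addition happens inside the loop body, on the "Yes" branch. By the time val <= n becomes false, every value from 1 to n has already been added, so printing the sum right after the "No" branch is correct.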

Related

Multiplying big Theta?

I'm doing some practice problems and I'm confused by this question. Where did O(n^2.5) come from? Are they multiplying the big Theta somehow? I'm lost.
Think about it this way: (x*y)*(x/y) is x^2, right? And sqrt(x) is x^0.5. So add the exponents together and you get x^2.5.
In the first case, the log n cancels out, since it is both multiplied by and divided by.
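The original expression isn't quoted in the question, but assuming it was a product like $\Theta(n \log n) \cdot \Theta(n / \log n) \cdot \Theta(\sqrt{n})$, the exponent arithmetic is:

$n \log n \cdot \frac{n}{\log n} \cdot n^{0.5} = n \cdot n \cdot n^{0.5} = n^{2.5}$

hence the $O(n^{2.5})$ bound.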

Dealing with rounding errors in SAS

I'm working on a large data application and have run into a rounding error that I can't seem to fix. The code is basically like this:
proc sql;
select round(amount,.01) - round(amount * (percentage/100),.01) as amount
from data;
quit;
I've tried various ways of fixing it, but they all seem to lead to other rounding errors cropping up in the other direction. For the row that produces the error, amount = 56.45 and percentage = 10; I get a result of 50.80, but I'm hoping for 50.81. I cannot accept the rounding error because a separate process reverses the transactions without a rounding error, and in the end the reversals plus the part producing the rounding error must add up to zero.
Code I've tried:
select round((((100-percentage)/100)*amount), .01)
select round(amount,.01) - round(amount * (percentage/100),.001) as amount
the second of which fixes the issue but creates three rounding errors in the other direction.
Any help is greatly appreciated.
Thank you.
Without knowing your datatypes, I can't say for certain, but here are some changes that should help resolve your issue:
Make sure you are working with decimal data types, not floats.
Round after you finish the math (see the sketch after this list). You are rounding each step of your calculation in two of your code snippets, which is likely to produce incorrect results.
Be very careful with your order of operations/parentheses. For example, 100-percentage/100 evaluates to 100-10/100 = 100-0.1 = 99.9, which I think is not what you want. Similarly, you have one more close parenthesis than open on that line.
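A small illustration of the round-once advice, in Python's decimal module rather than SAS (my sketch, using the asker's values amount = 56.45 and percentage = 10):

from decimal import Decimal, ROUND_HALF_UP

amount = Decimal("56.45")
percentage = Decimal("10")
cent = Decimal("0.01")

# Rounding each step: the intermediate 5.645 is forced to cents (5.65) first.
stepwise = amount.quantize(cent, ROUND_HALF_UP) \
           - (amount * percentage / 100).quantize(cent, ROUND_HALF_UP)

# Rounding once, after the math is finished: 50.805 rounds to 50.81.
once = (amount * (100 - percentage) / 100).quantize(cent, ROUND_HALF_UP)

print(stepwise)  # 50.80, the reported error
print(once)      # 50.81, the expected result

Even with exact decimal arithmetic the round-each-step version produces 50.80, because 5.645 rounds up to 5.65 before the subtraction; rounding only the final 50.805 gives 50.81.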

Example of an algorithm with more than n splits

I was working through the derivation of the master theorem using the tree method and noticed something.
So we have:
$T(n) = a \, T(n/b) + n^c$
From this we notice that the last level of the tree will have $a^{\log_b n}$ splits, which equals $n^{\log_b a}$.
Now, if $a = b$, I get $n$ splits in the last level, which is what I've seen used in quicksort and merge sort, and if $a > b$, I get more than $n$ splits in the last level.
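To make the case split concrete (a worked example of my own, not from the original post): with $a = b = 2$ the leaf level has $2^{\log_2 n} = n$ nodes, while with $a = 4$, $b = 2$ it has $4^{\log_2 n} = n^{\log_2 4} = n^2$ nodes, i.e. far more than $n$.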
Is there a practical example with more than $n$ splits, where we actually repeat operations on elements?
*Also, the math formatting doesn't seem to work here. I would appreciate it if anyone could help.
Classical matrix multiplication by divide and conquer would be such an example. The recurrence relation is $T(n) = 8 \, T(n/2) + \Theta(n^2)$, so the last level has $n^{\log_2 8} = n^3$ splits. Another would be Strassen's algorithm, whose recurrence is $T(n) = 7 \, T(n/2) + \Theta(n^2)$.
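A minimal Python sketch of that recurrence (my illustration, assuming square matrices whose size is a power of two): the eight recursive calls below are exactly the $a = 8$ splits, and stitching the blocks back together is the $\Theta(n^2)$ work per level.

def mat_add(A, B):
    n = len(A)
    return [[A[i][j] + B[i][j] for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def block(M, r, c):   # extract an h x h block starting at row r, column c
        return [row[c:c + h] for row in M[r:r + h]]
    A11, A12, A21, A22 = block(A, 0, 0), block(A, 0, h), block(A, h, 0), block(A, h, h)
    B11, B12, B21, B22 = block(B, 0, 0), block(B, 0, h), block(B, h, 0), block(B, h, h)
    # Eight recursive multiplications: the a = 8 in T(n) = 8 T(n/2) + Theta(n^2).
    C11 = mat_add(mat_mul(A11, B11), mat_mul(A12, B21))
    C12 = mat_add(mat_mul(A11, B12), mat_mul(A12, B22))
    C21 = mat_add(mat_mul(A21, B11), mat_mul(A22, B21))
    C22 = mat_add(mat_mul(A21, B12), mat_mul(A22, B22))
    # Reassemble the four result blocks: the Theta(n^2) combine step.
    return [r1 + r2 for r1, r2 in zip(C11, C12)] + \
           [r1 + r2 for r1, r2 in zip(C21, C22)]

Strassen's trick is to get away with seven multiplications instead of eight, which drops the leaf count from $n^3$ to $n^{\log_2 7} \approx n^{2.81}$.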
Math notation is (sadly) limited to only a few Stack Exchange sites.

Number of ways of getting sum K with a pair of 6-sided dice

Given a pair of 6-sided dice, in how many ways can I get the sum K?
Can someone give me a detailed explanation of how to use dynamic programming to solve this problem?
I have done some reading on it (link), but it is still hard to understand.
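The thread has no answer, but here is a minimal sketch of the usual dynamic program (my illustration, written for a general number of dice d so the recurrence is visible; d = 2 gives the pair):

def ways(d, k, sides=6):
    # reach[s] = number of ways to make sum s with the dice thrown so far
    reach = [1] + [0] * (d * sides)      # with zero dice, only sum 0 is reachable
    for _ in range(d):                   # throw one more die
        nxt = [0] * len(reach)
        for s, count in enumerate(reach):
            if count:
                for face in range(1, sides + 1):
                    if s + face < len(nxt):
                        nxt[s + face] += count
        reach = nxt
    return reach[k] if 0 <= k < len(reach) else 0

print(ways(2, 7))   # 6: (1,6), (2,5), (3,4), (4,3), (5,2), (6,1)

The recurrence being tabulated is ways(d, k) = sum over face = 1..6 of ways(d - 1, k - face).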

Longest Common Subsequence

Consider two sequences X[1..m] and Y[1..n]. The memoization algorithm computes the LCS in O(m*n) time. Is there any better algorithm with respect to time? I guess that memoizing diagonally could give us O(min(m,n)) time complexity.
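For reference, the O(m*n) dynamic program the question refers to looks like this (a standard sketch, not taken from the thread):

def lcs_length(X, Y):
    m, n = len(X), len(Y)
    # c[i][j] = length of an LCS of X[:i] and Y[:j]
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[m][n]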
Gene Myers in 1986 came up with a very nice algorithm for this, described here: An O(ND) Difference Algorithm and Its Variations.
This algorithm takes time proportional to the edit distance between sequences, so it is much faster when the difference is small. It works by looping over all possible edit distances, starting from 0, until it finds a distance for which an edit script (in some ways the dual of an LCS) can be constructed. This means that you can "bail out early" if the difference grows above some threshold, which is sometimes convenient.
I believe this algorithm is still used in many diff implementations.
If you know a priori an upper bound on the maximum size k you care about, you can force the LCS algorithm to exit early by adding an extra check in the inner loop. This means that when k << min(m,n) you can get small running times in spite of the fact that you are doing LCS.
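A minimal sketch of the greedy core of Myers' algorithm (my reconstruction from the paper, computing just the edit distance, with an optional cap max_d implementing the bail-out described above; the LCS length is then (m + n - d) / 2):

def edit_distance(a, b, max_d=None):
    n, m = len(a), len(b)
    if max_d is None:
        max_d = n + m
    V = {1: 0}   # V[k]: furthest x reached on diagonal k = x - y
    for d in range(max_d + 1):
        for k in range(-d, d + 1, 2):
            # Extend from whichever neighbouring diagonal reaches further.
            if k == -d or (k != d and V[k - 1] < V[k + 1]):
                x = V[k + 1]          # step down: insert a character of b
            else:
                x = V[k - 1] + 1      # step right: delete a character of a
            y = x - k
            while x < n and y < m and a[x] == b[y]:
                x += 1                # "snake": free diagonal moves on matches
                y += 1
            V[k] = x
            if x >= n and y >= m:
                return d              # shortest edit script has length d
    return None                       # distance exceeds max_d: bailed out early

print(edit_distance("ABCABBA", "CBABAC"))  # 5, so the LCS length is (7 + 6 - 5) / 2 = 4

The loop over d is why the running time is proportional to the edit distance: it is cheap when the sequences are similar, and the cap turns the threshold check into a single extra loop bound.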
Yes, we could create a better algorithm than O(m*n) to find the length, i.e. O(min(m,n)): just compare the diagonal elements, and whenever an increment occurs, say at c[2,2], increment all the values in c[2,2..] and c[2..,2] by 1, and proceed until c[m,m] (suppose m <= n).