How do I set x = x + 1 for y amount of times without a for or do loop? [closed] - vb.net

In VB I am trying to set the variable x = x + 10, y times.
What I have now is this:
For z As Double = 0 To Y
    x = x + 1
Next
I don't want Do or For loops because they take up a lot of time and resources for larger values of y.

This is a job for (cue dramatic music) Mathman!
I think you'll find multiplication is the key, something along the lines of
x = x + y * 10
if you want to add ten each time, or the even simpler
x = x + y
if you're only adding one.
Although you may have to use (y + 1) instead of y if your loop truly is 0 through y inclusive. It's a little unclear from the question exactly what you're after.
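For concreteness, here is a minimal sketch of the idea in Python (the function names and the step of 10 are just for illustration), showing that the closed form matches the loop:

def add_by_loop(x, y, step=10):
    # The O(y) way: run the loop body y + 1 times (0 through y inclusive).
    for _ in range(y + 1):
        x += step
    return x

def add_closed_form(x, y, step=10):
    # The O(1) way: the body runs y + 1 times, so just multiply.
    return x + (y + 1) * step

assert add_by_loop(7, 1000) == add_closed_form(7, 1000)   # both give 10017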

Related

Prolog: how to optimize this code (solving the 123456789 = 100 puzzle)

So there was a puzzle:
This equation is incomplete: 1 2 3 4 5 6 7 8 9 = 100. One way to make
it accurate is by adding seven plus and minus signs, like so: 1 + 2 +
3 – 4 + 5 + 6 + 78 + 9 = 100.
How can you do it using only 3 plus or minus signs?
I'm quite new to Prolog. I solved the puzzle, but I wonder how to optimize it:
% makeInt(S, F, FinInt): FinInt is the integer formed by the digits S..F.
makeInt(S, F, FinInt) :-
    getInt(S, F, 0, FinInt).

getInt(Start, Finish, Acc, FinInt) :-
    0 =< Finish - Start,
    NewAcc is Acc*10 + Start,
    NewStart is Start + 1,
    getInt(NewStart, Finish, NewAcc, FinInt).
getInt(Start, Finish, A, A) :-
    0 > Finish - Start.

itCounts(X, Y, Z, Q) :-
    member(XLastDigit, [1,2,3,4,5,6]),
    FromY is XLastDigit + 1,
    numlist(FromY, 7, ListYLastDigit),
    member(YLastDigit, ListYLastDigit),
    FromZ is YLastDigit + 1,
    numlist(FromZ, 8, ListZLastDigit),
    member(ZLastDigit, ListZLastDigit),
    FromQ is ZLastDigit + 1,
    member(YSign, [-1,1]),
    member(ZSign, [-1,1]),
    member(QSign, [-1,1]),
    % prune: the signed last digits must sum to 0
    0 is XLastDigit + YSign*YLastDigit + ZSign*ZLastDigit + QSign*9,
    makeInt(1, XLastDigit, FirstNumber),
    makeInt(FromY, YLastDigit, SecondNumber),
    makeInt(FromZ, ZLastDigit, ThirdNumber),
    makeInt(FromQ, 9, FourthNumber),
    X is FirstNumber,
    Y is YSign*SecondNumber,
    Z is ZSign*ThirdNumber,
    Q is QSign*FourthNumber,
    100 =:= X + Y + Z + Q.
Not sure this counts as an optimization; the code is just shorter (this is SWI-Prolog, using library(yall) lambdas):

sum_123456789_eq_100_with_3_sum_or_sub(L) :-
    append([G1,G2,G3,G4], [0'1,0'2,0'3,0'4,0'5,0'6,0'7,0'8,0'9]),
    maplist([X]>>(length(X,N), N>0), [G1,G2,G3,G4]),
    maplist([G,F]>>(member(Op, [0'+,0'-]), F=[Op|G]), [G2,G3,G4], [F2,F3,F4]),
    append([G1,F2,F3,F4], L),
    read_term_from_codes(L, T, []),
    100 is T.
It took me a while, but I got what your code is doing. It's something like this:
itCounts(X,Y,Z,Q) :-   % generate X, Y, Z, and Q s.t. X+Y+Z+Q=100, etc.
    generate X as a list of digits
    do the same for Y, Z, and Q
    pick the signs for Y, Z, and Q
    convert all those lists of digits into numbers
    verify that, with the signs, they add to 100.
The inefficiency here is that the testing is all done at the last minute. You can improve the efficiency if you can throw out some possible solutions as soon as you pick one of your numbers, that is, testing earlier.
itCounts(X,Y,Z,Q) :-   % generate X, Y, Z, and Q s.t. X+Y+Z+Q=100, etc.
    generate X as a list of digits, and convert it to a number
    if it's so big or small the rest can't possibly bring the sum back to 100, fail
    generate Y as a list of digits, convert it to a number, and pick its sign
    if it's so big or so small the rest can't possibly bring the sum to 100, fail
    do the same for Z
    do the same for Q
Your function is running pretty fast already, even if I search all possible solutions. It only picks 6 X's; 42 Y's; 224 Z's; and 15 Q's. I don't think optimizing will be worth your while.
But if you really wanted to: I tested this by putting a testing goal immediately after selecting an X. It reduced the 6 X's to 3 (all before finding the solution); the 42 Y's to 30; the 224 Z's to 184; and the 15 Q's to 11. I believe we could reduce it further by testing immediately after a Y is picked, to see whether X + YSign*Y is already so large or so small that there can be no solution.
In Prolog programs that are more computationally intensive, moving parts of the 'test' earlier in 'generate and test' algorithms can help a lot.
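For comparison, here is the same generate-and-test idea as a brute-force sketch, in Python rather than Prolog (purely illustrative; all names are made up): split the digit string into four groups, pick a sign for each group after the first, and keep the splits that evaluate to 100.

from itertools import combinations, product

DIGITS = "123456789"

def solutions():
    # Choose 3 cut positions, splitting the digits into 4 groups,
    # then choose a +/- sign for each group after the first.
    for a, b, c in combinations(range(1, 9), 3):
        groups = [DIGITS[:a], DIGITS[a:b], DIGITS[b:c], DIGITS[c:]]
        for signs in product((1, -1), repeat=3):
            total = int(groups[0]) + sum(s * int(g)
                                         for s, g in zip(signs, groups[1:]))
            if total == 100:
                yield groups, signs

for groups, signs in solutions():
    print(groups, signs)   # e.g. 123 - 45 - 67 + 89 = 100

This enumerates only C(8,3) × 2³ = 448 candidates, which is why the Prolog version is already fast; early pruning mainly pays off when the search space is much larger.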

Stuck on estimation for the time complexity [closed]

Time complexity O(n*m) is estimated for
for i ← 0 to n do
    for j ← 0 to m do
        STATEMENT1;
    end for
end for
Now, what about this algorithm?
for i ← 0 to n do
    for j ← 0 to l do
        STATEMENT1;
    end for
    for k ← 0 to m-l do
        STATEMENT2;
    end for
end for
The time requirements for processing STATEMENT1 and STATEMENT2 are different. If we define the time for processing STATEMENT1 as O(1) and the time for processing STATEMENT2 as Q(1),
can we estimate the time complexity of this algorithm as n[O(l) + Q(m-l)], i.e. O(nl) + Q(n(m-l))?
Please check my solution, or help me make it simpler!
It is O(n * l + n * (m - l)) = O(n * m)
I assume that l < m; otherwise this reduces to your first example.
Let's say STATEMENT1 takes k1 (constant) time to execute, and similarly STATEMENT2 takes k2 (constant) time to execute.
Now, let's do some maths:
STATEMENT1:
Total number of times STATEMENT1 will be executed: n.l. Therefore, the total time taken by it = n.l.k1.
STATEMENT2:
Total number of times STATEMENT2 will be executed: n.(m-l). Thus, the total time taken by it = n.(m-l).k2.
Now, the total time taken by the algorithm will be:
=> n.l.k1 + n.(m-l).k2
=> n.(l.k1 + (m-l).k2)
=> n.(l.k1 + m.k2 - l.k2)
=> n.(m.k2 + l.(k1 - k2))
Now separate these two terms and apply big-O notation to them:
=> n.m.k2 + n.l.(k1 - k2)
=> O(n.m) + O(n.l)    (since k2 and k1 - k2 are constants)    -- eq(i)
Time to split eq(i) into 3 cases as follows:
CASE 1: m >>> l
Then (n.m) takes over (n.l),
and the final result becomes O(n.m).
CASE 2: l >>> m
Then (n.l) takes over (n.m),
and the final result becomes O(n.l).
CASE 3: l ≈ m
=> O(n.m) + O(n.l)
=> O(n.m) + O(n.m)    since l ≈ m
=> 2.O(n.m)
=> O(n.m)             since big-O lets us drop the constant factor 2
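A quick empirical check of the two execution counts, as a throwaway Python sketch (the loop bounds are treated as exclusive here; the pseudocode's inclusive bounds only add lower-order terms):

def count_statements(n, m, l):
    # Count how many times each statement body runs in the nested loops.
    s1 = s2 = 0
    for i in range(n):
        for j in range(l):
            s1 += 1          # STATEMENT1
        for k in range(m - l):
            s2 += 1          # STATEMENT2
    return s1, s2

print(count_statements(10, 8, 3))   # (30, 50) == (n*l, n*(m-l))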

If f(n) = O(h(n)) then c*f(n) = O(h(n)) for all c > 0 - proof challenged? [closed]

I have been asked to prove or disprove the following conjecture:
For any given constant c > 0: if f(n) = O(h(n)), then c*f(n) = O(h(n)).
I came up with the following counterexample:
Let f(n) = n and c = n+1. Then c*f(n) = (n+1)n = n^2+n = O(n^2),
while f(n) = n = O(n)
Therefore, the conjecture is not true because O(n^2) != O(n) when f(n) = n and c = n+1.
Then I came across the following theorem:
Theorem: Any constant value is O(1).
Aside: You will often hear a constant running time algorithm described
as O(1).
Corollary: Given f(x) which is O(g(x)) and a constant a, we
know that af(x) is O(g(x)).
That is, if we have a function multiplied by a constant, we can ignore
the constant in the big-O.
Why is that the case, and why am I wrong?
You're not thinking of this at large enough scale:
If you have a variable n multiplied by a constant k, then as n grows the effect of k is minimised, and it becomes a limit problem from grade-school calculus.
c is a constant and n is a variable, thus n+1 is not a constant; your counterexample never satisfies the hypothesis.
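For completeness, here is the short proof the corollary is pointing at, written out in LaTeX (a standard argument; nothing here is specific to the question's f and h):

f(n) = O(h(n)) \;\Longrightarrow\; \exists\, M > 0,\ n_0 \ \text{such that}\ f(n) \le M\,h(n) \ \ \forall\, n \ge n_0
\Longrightarrow\; c\,f(n) \le (cM)\,h(n) \ \ \forall\, n \ge n_0
\Longrightarrow\; c\,f(n) = O(h(n)), \ \text{since}\ cM\ \text{is again a constant.}

The counterexample fails precisely because c = n+1 is not a constant, so the second step does not apply.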

Percentage Plus and Minus [closed]

I'm having an issue calculating money after percentages have been added and then subtracted. I know that 5353.29 + 18% = 6316.88, which I need to compute in my T-SQL, but I also need to do the reverse and take the 18% off 6316.88 to get back to 5353.29, all in T-SQL. I might just have been looking at this too long, but I can't get the figures to calculate properly. Any help please?
newVal  = 5353.29 * (1 + 0.18)
origVal = newVal / (1 + 0.18)
6316.88 is 118%, so to get back you need to divide 6316.88 by 118, then multiply by 100.
(6316.88/118)*100=5353.29
To add n% to X:
Result = X * (1 + (n / 100.0))
To do the reverse:
X = Result / (1 + (n / 100.0))
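A quick check of those two formulas, shown in Python for brevity (in T-SQL the same arithmetic applies; just mind DECIMAL precision and rounding):

orig = 5353.29
rate = 18 / 100.0

new_val = round(orig * (1 + rate), 2)     # add 18%   -> 6316.88
back    = round(new_val / (1 + rate), 2)  # reverse   -> 5353.29
print(new_val, back)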

Basic Gauss Elimination yields wrong result? [closed]

I was reading a book and got stuck at a particular point. I am stuck at the following and want to know how x + (y + z) = 0.0 was calculated.
Further on, in the following example of the Gauss elimination method, I couldn't see how 2.5/(-0.005) = -2500 was calculated. Where did they get this "-0.005" value from?
Computers do not do arithmetic in the same way that people do on pen and paper. Numbers have limited precision. Imagine you had a number system where you could only have 4 digits after the decimal point and also a factor of 10 to some power, and so numbers looked like:
±0._ _ _ _ × 10ⁿ
Now, add these two numbers:
0.1234 × 10⁸
0.5678 × 10⁰
You are adding

    12340000
and
    00000000.5678

The real sum is 12340000.5678, but the theoretical computer here can store only the first four significant digits, giving

    12340000 = 0.1234 × 10⁸
That is why y+z in the textbook problem is equal to y, and x + (y + z) = 0 ≠ (x + y) + z.
x = 0.1 × 10¹⁰
y = -0.1 × 10¹⁰
z = 0.1 × 10¹
x + y = 0.0
(x + y) + z = 0.0 + 0.1 × 10¹ = 0.1 × 10¹
(Single-precision) floats have only about 7 decimal digits of precision in IEEE arithmetic; these correspond to the C float datatype. But y = -10⁹ · z, and z disappears when you add y and z. So,
y + z = -999999999 = -0.1 × 10¹⁰ after rounding
x + (y + z) = 0.0
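The same cancellation is easy to reproduce; here is a minimal Python sketch (Python floats are IEEE doubles, so the magnitudes are bumped to 10^16 to exceed their roughly 16 significant digits):

x = 1.0e16
y = -1.0e16
z = 1.0

print((x + y) + z)   # 1.0 -- x and y cancel exactly, then z survives
print(x + (y + z))   # 0.0 -- z is absorbed when added to y: y + z rounds back to y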
The book also has a typographical error. The quotient should have been 2.5/(-0.001), from rows 2 and 3, column 2 of the matrix.
This is why computer algorithms for matrix algebra are tricky; they seek to minimize the effect of roundoff error and underflow. Unfortunately, any flaw in an algorithm can lead to very bad problems. One test is to look at the Hilbert matrix
H_n = (1/(i+j-1)),   1 ≤ i, j ≤ n
The inverse of this matrix has integer entries, but the matrix and its inverse are spectacularly ill-conditioned. Any numerical error in computing the inverse leads to wildly wrong values. Twenty years ago, I tested the inverse routine for the then-current version of Matlab. It was acceptable for H_10, but too poor to use for H_12.