Multiplication of 849 using 3 shifts and 3 add/subtract operations [closed]

Multiplication by 849 must be achieved using 3 add/subtract and 3 shift operations.
I have got it down to 4 shifts and 4 add/subtract operations: 849 * X = (X << 10) - (X << 7) - (X << 6) + (X << 4) + X. How can I reduce it further?

Here's the solution. Note that you have to use an intermediate variable a; otherwise it's impossible. It uses ((17 << 6) + 17) - 256 = 849 and the fact that 17 = (1 << 4) + 1.
a = (X << 4) + X
return (a << 6) + a - (X << 8)
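As a quick sanity check (a sketch, not part of the original answer), the identity and the operation budget can be verified directly:
def mul849(x):
    a = (x << 4) + x                # a = 17*x (1 shift, 1 add)
    return (a << 6) + a - (x << 8)  # 1088*x + 17*x - 256*x = 849*x (2 shifts, 1 add, 1 subtract)

assert all(mul849(x) == 849 * x for x in range(10000))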
I found it by searching possible programs on a stack machine using the code at the end of the answer. The solution was given in the form of a stack program, which I manually translated into the expression above:
['push X', '<<4', 'push X', '+', 'dup 0', '<<6', '+', 'push X', '<<8', '-']
Here's the program... it takes a few minutes to run.
import heapq

def moves(adds, shifts, stack, just_shifted):
    # Shift the top of the stack (two consecutive shifts are never needed,
    # since they could always be merged into one).
    if stack and shifts and not just_shifted:
        for shv in range(1, 10):
            yield (adds, shifts - 1), '<<%d' % shv, stack[:-1] + [stack[-1] << shv]
    # Duplicate any stack element that is not a bare X (only worthwhile if
    # enough adds remain to combine the extra element later).
    if len(stack) <= adds:
        for i in range(len(stack)):
            if stack[i] != 1:
                yield (adds, shifts), 'dup %d' % i, stack + [stack[i]]
    # Combine the top two elements with + or -.
    if len(stack) > 1 and adds:
        yield (adds - 1, shifts), '+', stack[:-2] + [stack[-2] + stack[-1]]
        yield (adds - 1, shifts), '-', stack[:-2] + [stack[-2] - stack[-1]]
    # Push a fresh X (represented by the value 1).
    if len(stack) <= adds:
        yield (adds, shifts), 'push X', stack + [1]

def find(target):
    # Best-first search over stack programs with a budget of
    # 3 add/subtract and 3 shift operations.
    work = []
    heapq.heappush(work, (0, [], 3, 3, []))
    while work:
        # it = (cost, stack, adds left, shifts left, program so far)
        it = heapq.heappop(work)
        if len(it[1]) == 1 and it[1][-1] == target:
            return it[4]
        just_shifted = bool(it[4]) and '<<' in it[4][-1]
        for nsum, desc, nst in moves(it[2], it[3], it[1], just_shifted):
            nit = (it[0] + 1, nst, nsum[0], nsum[1], it[4] + [desc])
            heapq.heappush(work, nit)

print(find(849))

Related

Julia JuMP StackOverflow Error when defining a constraint on linear matrix inequalities

I was inspired by this post to switch from MATLAB's LMI toolbox to a Julia implementation, and I tried to adapt the suggested script in the post to a problem I am trying to solve. However, I have failed to do this: Julia consistently crashes or gives a StackOverflowError while trying to solve the following LMI problem with JuMP (stated in the original as an image captioned "Lyapunov stability of hybrid systems"), for the unknown symmetric matrices Ui, Wi, M (and Pi), where Ui and Wi are positive semidefinite.
I have run the model set-up line by line and found that it occurs when I define the last constraint:
@SDconstraint(model, Ai'*Pi + Pi*Ai + Ei'*Ui*Ei ⪯ 0)
I saw other posts about this with JuMP, and they were usually the result of improper variable definitions, but that knowledge has not allowed me to figure out the issue. Here is a minimal form of the code that produces the error:
using JuMP
using MosekTools
using LinearAlgebra
E1p = [1 0; 0 1];
A1 = [-1 10; -100 -1];
Ei = E1p
Ai = A1
n=2
model = Model(Mosek.Optimizer)
@variable(model, Pi[i=1:n, j=1:n], Symmetric)
@variable(model, Ui[i=1:n, j=1:n], Symmetric)
@variable(model, Wi[i=1:n, j=1:n], Symmetric)
@variable(model, M[i=1:n, j=1:n], Symmetric)
@SDconstraint(model, Ui ⪰ 0)
@SDconstraint(model, Wi ⪰ 0)
@SDconstraint(model, Ei'*M*Ei ⪰ 0)
@SDconstraint(model, Pi - Ei'*Wi*Ei ⪰ 0)
@SDconstraint(model, Ai'*Pi + Pi*Ai + Ei'*Ui*Ei ⪯ 0)
optimize!(model)
Pi = value.(Pi)
The first line of the error, which repeats at great length, is:
in mutable_operate! at MutableArithmetics/bPWR4/src/linear_algebra.jl:132
That looks like a bug.
Here's a MWE
using JuMP
model = Model()
E = [1 1; 1 1]
@variable(model, P[1:size(E, 1), 1:size(E, 2)], Symmetric)
julia> @SDconstraint(model, E' * P + P * E + E' * P * E <= 0)
ERROR: StackOverflowError:
Stacktrace:
[1] mutable_operate!(::typeof(MutableArithmetics.sub_mul), ::Matrix{AffExpr}, ::LinearAlgebra.Symmetric{VariableRef, Matrix{VariableRef}}, ::Matrix{Int64}) (repeats 79984 times)
# MutableArithmetics ~/.julia/packages/MutableArithmetics/bPWR4/src/linear_algebra.jl:132
If I drop the first term, I get
julia> @SDconstraint(model, P * E + E' * P * E <= 0)
ERROR: MethodError: no method matching *(::Matrix{AffExpr})
Closest candidates are:
*(::StridedMatrix{T} where T, ::LinearAlgebra.AbstractQ) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/qr.jl:680
*(::StridedMatrix{T} where T, ::LinearAlgebra.Adjoint{var"#s832", var"#s831"} where {var"#s832", var"#s831"<:LinearAlgebra.AbstractQ}) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/qr.jl:720
*(::StridedVecOrMat{T} where T, ::LinearAlgebra.Adjoint{var"#s832", var"#s831"} where {var"#s832", var"#s831"<:LinearAlgebra.LQPackedQ}) at /Users/julia/buildbot/worker/package_macos64/build/usr/share/julia/stdlib/v1.6/LinearAlgebra/src/lq.jl:254
...
Stacktrace:
[1] mutable_operate!(::typeof(MutableArithmetics.sub_mul), ::Matrix{AffExpr}, ::LinearAlgebra.Adjoint{Int64, Matrix{Int64}}, ::Matrix{AffExpr}) (repeats 2 times)
# MutableArithmetics ~/.julia/packages/MutableArithmetics/bPWR4/src/linear_algebra.jl:132
[2] operate_fallback!(::MutableArithmetics.IsMutable, ::Function, ::Matrix{AffExpr}, ::LinearAlgebra.Adjoint{Int64, Matrix{Int64}}, ::LinearAlgebra.Symmetric{VariableRef, Matrix{VariableRef}}, ::Matrix{Int64})
# MutableArithmetics ~/.julia/packages/MutableArithmetics/bPWR4/src/interface.jl:330
[3] operate!(::typeof(MutableArithmetics.sub_mul), ::Matrix{AffExpr}, ::LinearAlgebra.Adjoint{Int64, Matrix{Int64}}, ::LinearAlgebra.Symmetric{VariableRef, Matrix{VariableRef}}, ::Matrix{Int64})
# MutableArithmetics ~/.julia/packages/MutableArithmetics/bPWR4/src/rewrite.jl:80
[4] macro expansion
# ~/.julia/packages/MutableArithmetics/bPWR4/src/rewrite.jl:276 [inlined]
[5] macro expansion
# ~/.julia/packages/JuMP/y5vgk/src/macros.jl:447 [inlined]
[6] top-level scope
# REPL[15]:1
So it's probably related to https://github.com/jump-dev/MutableArithmetics.jl/issues/84.
I'll open an issue. (Edit: https://github.com/jump-dev/MutableArithmetics.jl/issues/86)
As a work-around, split it into an expression first:
julia> @expression(model, ex, E' * P + P * E + E' * P * E)
2×2 Matrix{AffExpr}:
3 P[1,1] + 4 P[1,2] + P[2,2] 4 P[1,2] + 2 P[2,2] + 2 P[1,1]
2 P[1,1] + 4 P[1,2] + 2 P[2,2] 4 P[1,2] + 3 P[2,2] + P[1,1]
julia> @SDconstraint(model, ex <= 0)
[-3 P[1,1] - 4 P[1,2] - P[2,2] -2 P[1,1] - 4 P[1,2] - 2 P[2,2];
-2 P[1,1] - 4 P[1,2] - 2 P[2,2] -P[1,1] - 4 P[1,2] - 3 P[2,2]] ∈ PSDCone()
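Presumably the @expression macro materializes the whole sum as a plain Matrix{AffExpr} before any constraint rewriting happens, so @SDconstraint never enters the MutableArithmetics code path that recurses without terminating; see the linked issues for the actual root cause.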

Theoretical time complexity calculation of nested dependent for loops [closed]

How do I calculate the big-O time complexity of the following nested for loop with dependent indices:
void function1(int n)
{
    int x = 0;
    for (int i = 0; i <= n/2; i += 3)
        for (int j = i; j <= n/4; j += 2)
            x++;
}
The complexity of the code is determined by how many times the innermost statement executes for a given n.
There are two ways to do it.
Simulation: run the code for different values of n and count the executions; in this case that count is just the final value of x.
Theoretical:
Let's first check, for each i, how many times the inner loop body runs.
Using the arithmetic progression formula a_k = a_1 + (k-1)*d, with first term a_1 = i, step d = 2, and last term a_k = n/4, solved for the number of terms k:
i=0 => n/4 = 0 + (k-1)*2 => n/8 + 1 times
i=3 => n/4 = 3 + (k-1)*2 => (n-12)/8 + 1 times
i=6 => n/4 = 6 + (k-1)*2 => (n-24)/8 + 1 times
i=9 => n/4 = 9 + (k-1)*2 => (n-36)/8 + 1 times
Let's check the last i's now:
i=n/4 => n/4 = n/4 + (k-1)*2 => 1 time
i=n/4 - 3 => n/4 = (n/4-3) + (k-1)*2 => 3/2 + 1 times
i=n/4 - 6 => n/4 = (n/4-6) + (k-1)*2 => 6/2 + 1 times
So the total number of times the inner loop runs is:
= (1) + (3/2 + 1) + (6/2 + 1) + (9/2 + 1) ... + ((n-12)/8 + 1)+ (n/8 + 1)
=> (0/2 + 1) + (3/2 + 1) + (6/2 + 1) + (9/2 + 1) ... + ((n-12)/8 + 1)+ (n/8 + 1)
Can be written as:
=> (0/2 + 3/2 + 6/2 + ... (n-12)/8 + n/8) + (1 + 1 + 1 ... 1 + 1)
Let's assume there are P terms in total in the series, and find P:
n/8 = (0/2) + (P-1)*(3/2) => P = (n+12)/12
Now summing up the above series:
= [(P/2) (0/2 + (P-1) * 3/2)] + [P]
= P(3P+1)/4
= (n+12)(3(n+12)+12)/(4*12*12)
= (n^2 + 28n + 192)/192
So the final complexity of the code is
= (number of operations in each iteration) * (n^2 + 28n + 192)/192
Now look at the term (n^2 + 28n + 192)/192: for a very large n this grows like ~n^2.
Following is the complexity comparison (plot not reproduced here). A linear scale was difficult to analyse, so I plotted it on a log scale; for small n you don't yet see the complexity converging to n^2.
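As an empirical cross-check (a sketch; Python standing in for the C loops, function name mine), the exact count stays within a few percent of (n^2 + 28n + 192)/192 for moderate n:
def count_ops(n):
    # Mirror the two C loops and count how many times x++ executes.
    x = 0
    for i in range(0, n // 2 + 1, 3):
        for j in range(i, n // 4 + 1, 2):
            x += 1
    return x

for n in (96, 960, 9600):
    print(n, count_ops(n), (n * n + 28 * n + 192) / 192)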
Using a more relaxed approach, one can say that:
for (int i = 0; i <= n/2; i += 3) {
    for (int j = i; j <= n/4; j += 2) {
        x++;
    }
}
is the same as:
for (int i = 0; i <= n/4; i += 3) {
    for (int j = i; j <= n/4; j += 2) {
        x++;
    }
}
since with i > n/4 the inner loop does not execute. Moreover, to simplify the math, you can say that the code is approximately the same as:
for (int i = 0; i < n/4; i += 3) {
    for (int j = i; j < n/4; j += 2) {
        x++;
    }
}
since in a big-O context this makes no difference to the upper bound of the double loop. The number of iterations of a loop of the form:
for (int j = a; j < b; j += c)
can be approximated by the formula (b - a)/c. Hence, the inner loop runs approximately ((n/4) - i)/2 times, or n/8 - i/2 times.
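A tiny sketch (Python, with arbitrarily chosen values) illustrating the (b - a)/c iteration-count formula:
def loop_count(a, b, c):
    # Iterations of: for (int j = a; j < b; j += c)
    count, j = 0, a
    while j < b:
        count += 1
        j += c
    return count

print(loop_count(0, 24, 2), (24 - 0) / 2)  # 12 12.0: exact when c divides b - a
print(loop_count(5, 24, 2), (24 - 5) / 2)  # 10 9.5: otherwise off by less than one iteration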
The outer loop can be thought of as running for k = 0 until n/12 (writing i = 3k; only i <= n/4 contributes). So with both loops we have
the summation [k=0 to n/12] of (n/8 - 3k/2),
which is equivalent to
the summation [k=0 to n/12] of n/8 - the summation [k=0 to n/12] of 3k/2.
Hence,
(n^2)/96 - the summation [k=0 to n/12] of 3k/2 ≈ (n^2)/96 - (n^2)/192,
which is approximately (n^2)/192. Therefore, the upper bound is O(n^2).

Minimum number of jumps to reach end (dynamic programming)

Given an array, determine how many jumps are needed to get from the first element to the end, where each value is the maximum jump length from that position.
Example: arr = [1, 3, 5, 8, 4, 2, 6, 7, 0, 7, 9]
1 -> 3 -> 8 (this is the shortest path)
3 steps.
So far, I have this code from GeeksforGeeks:
def jumpCount(x, n):
    jumps = [0 for i in range(n)]
    if (n == 0) or (x[0] == 0):
        return float('inf')
    jumps[0] = 0
    for i in range(1, n):
        jumps[i] = float('inf')
        for j in range(i):
            if (i <= j + x[j]) and (jumps[j] != float('inf')):
                jumps[i] = min(jumps[i], jumps[j] + 1)
                break
    return jumps[n-1]

def jumps(x):
    n = len(x)
    return jumpCount(x, n)

x = [1, 3, 5, 8, 4, 2, 6, 7, 0, 7, 9]
print(jumps(x))
But I want to print out what numbers made the shortest path (1-3-8). How can I adapt the code to do it?
I tried to create a list of the j's, but since 5 is tested in the loop, it gets appended too.
Link to the problem:
https://www.geeksforgeeks.org/minimum-number-of-jumps-to-reach-end-of-a-given-array/
The essential idea is that you need an auxiliary structure to help you keep track of the minimum path. Such structures are usually called "backpointers" (in our case you could call them "forward pointers", since we are going forward). My code solves the problem recursively, but the same could be done iteratively. The strategy is as follows:
import math

jumps_vector = [1, 3, 5, 8, 4, 2, 6, 7, 0, 7, 9]

"""
fwdpointers holds the relative jump size to reach the minimum number of jumps
for every component of the original vector
"""
fwdpointers = {}

def jumps(start):
    if start == len(jumps_vector) - 1:
        # Reached the end
        return 0
    if start > len(jumps_vector) - 1:
        # Cannot go through that path
        return math.inf
    if jumps_vector[start] == 0:
        # Cannot go through that path (infinite loop with itself)
        return math.inf
    # Get the minimum in a traditional way, starting with a relative jump of 1
    current_min = jumps(start + 1)
    fwdpointers[start] = 1
    for i in range(2, jumps_vector[start] + 1):
        aux_min = jumps(start + i)
        if current_min > aux_min:
            # Better path. Update minimum and fwdpointers
            current_min = aux_min
            # Store the (relative!) index of where I jump to
            fwdpointers[start] = i
    return 1 + current_min
In this case, the variable fwdpointers stores the relative indexes of where I jump to. For instance, fwdpointers[0] = 1, since I jump to the adjacent number, but fwdpointers[1] = 2, since from there the next jump skips two positions ahead.
Having done that, it's only a matter of postprocessing things a bit in the main block:
if __name__ == "__main__":
    min_jumps = jumps(0)
    print(min_jumps)
    # Holds the index of the jump given such that
    # the sequence of jumps is the minimum
    i = 0
    # Remember that the contents of fwdpointers[i] are the relative indexes
    # of the jump, not the absolute ones
    print(fwdpointers[0])
    while i in fwdpointers and i + fwdpointers[i] < len(jumps_vector):
        print(jumps_vector[i + fwdpointers[i]])
        # Get the index of where I jump to
        i += fwdpointers[i]
        jumped_to = jumps_vector[i]
I hope this answered your question.
EDIT: I think the iterative version is more readable:
results = {}
backpointers = {}

def jumps_iter():
    results[0] = 0
    backpointers[0] = -1
    for i in range(len(jumps_vector)):
        for j in range(1, jumps_vector[i] + 1):
            if (i + j) in results:
                results[i + j] = min(results[i] + 1, results[i + j])
                if results[i + j] == results[i] + 1:
                    # Update where I come from
                    backpointers[i + j] = i
            elif i + j < len(jumps_vector):
                results[i + j] = results[i] + 1
                # Set where I come from
                backpointers[i + j] = i
    return results[len(jumps_vector) - 1]
And the postprocessing:
i = len(jumps_vector) - 1
print(jumps_vector[len(jumps_vector) - 1], end=" ")
while backpointers[i] >= 0:
    print(jumps_vector[backpointers[i]], end=" ")
    i = backpointers[i]
print()
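For the example vector, jumps_iter() returns 3, and the backpointer walk then prints the path in reverse: 9 8 3 1, i.e. the forward path 1 -> 3 -> 8 -> 9 (traced by hand against the code above).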

Is it a bug in Z3? Incorrect answer with Real and ForAll

I'm trying to find the minimal value of the parabola y = (x+2)**2 - 3; the answer should be y == -3, at x == -2.
But z3 gives the answer [x = 0, y = 1], which doesn't satisfy the ForAll assertion (taking z = -2 gives (z + 2)**2 - 3 = -3, and 1 <= -3 is false).
Am I assuming something wrongly?
Here is the python code:
from z3 import *
x, y, z = Reals('x y z')
print(Tactic('qe').apply(And(y == (x + 2) ** 2 - 3,
                             ForAll([z], y <= (z + 2) ** 2 - 3))))
solve(y == x * x + 4 * x + 1,
      ForAll([z], y <= z * z + 4 * z + 1))
And the result:
[[y == (x + 2)**2 - 3, True]]
[x = 0, y = 1]
The result shows that the 'qe' tactic eliminated the ForAll assertion to True, although it is NOT always true.
Is that the reason the solver gives a wrong answer?
What should I code to find the minimal (or maximal) value of such an expression?
BTW, the Z3 version is 4.3.2 for Mac.
I referred to
How does Z3 handle non-linear integer arithmetic?
and found a partial solution, using the 'qfnra-nlsat' and 'smt' tactics.
from z3 import *
x, y, z = Reals('x y z')

s1 = Then('qfnra-nlsat', 'smt').solver()
print(s1.check(And(y == (x + 2) ** 2 - 3,
                   ForAll([z], y <= (z + 2) ** 2 - 3))))
print(s1.model())

s2 = Then('qe', 'qfnra-nlsat', 'smt').solver()
print(s2.check(And(y == (x + 2) ** 2 - 3,
                   ForAll([z], y <= (z + 2) ** 2 - 3))))
print(s2.model())
And the result:
sat
[x = -2, y = -3]
sat
[x = 0, y = 1]
Still, the 'qe' tactic and the default solver seem buggy; they don't give the correct result. Further comments and discussion are welcome.
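To package the working combination for reuse (a sketch built on the same tactics found above; the helper name is mine), one can wrap it in a small function:
from z3 import Reals, ForAll, Then, sat

def minimize_parabola():
    # Minimize y subject to y == (x + 2)**2 - 3 by asserting that y is a
    # lower bound of the parabola, using the qfnra-nlsat + smt combination
    # that handles the quantifier correctly.
    x, y, z = Reals('x y z')
    s = Then('qfnra-nlsat', 'smt').solver()
    s.add(y == (x + 2) ** 2 - 3)
    s.add(ForAll([z], y <= (z + 2) ** 2 - 3))
    if s.check() == sat:
        return s.model()
    return None

print(minimize_parabola())  # expected: [x = -2, y = -3]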

Running Time Calculation/Complexity of an Algorithm

I have to calculate the time complexity, or theoretical running time, of an algorithm (given the pseudocode) line by line as T(n). I've given it a try, but a couple of things are confusing me. For example, what is the time complexity of an "if" statement? And how do I deal with nested loops? The code is below, along with my attempt in the comments.
length[A] = n
for i = 0 to length[A] - 1          // n - 1
    k = i + 1                        // n - 2
    for j = i + 2 to length[A]       // (n - 1)(n - 3)
        if A[k] > A[j]               // 1(n - 1)(n - 3)
            k = j                    // 1(n - 1)(n - 3)
    if k != i + 1                    // 1(n - 1)
        temp = A[i + 1]              // 1(n - 1)
        A[i + 1] = A[k]              // 1(n - 1)
        A[k] = temp                  // 1(n - 1)
Blender is right, the result is O(n^2): two nested loops that each have an iteration count dependent on n.
A longer explanation:
The if, in this case, does not really matter: since O-notation only looks at the worst-case execution time of an algorithm, you simply choose the execution path that is worse for the overall execution time. Since, in your example, both execution paths (k != i + 1 being true or false) have no further implication for the runtime, you can disregard the if. If there were a third nested loop, also running to n, inside the if, you'd end up with O(n^3).
A line-by-line overview:
for i = 0 to length[A] - 1          // n + 1 [1]
    k = i + 1                        // n
    for j = i + 2 to length[A]       // (n)(n - 3 + 1) [1]
        if A[k] > A[j]               // (n)(n - 3)
            k = j                    // (n)(n - 3)*x [2]
    if k != i + 1                    // n
        temp = A[i + 1]              // n*y [2]
        A[i + 1] = A[k]              // n*y
        A[k] = temp                  // n*y
[1] The for loop statement will be executed n+1 times with the following values for i: 0 (true, continue loop), 1 (true, continue loop), ..., length[A] - 1 (true, continue loop), length[A] (false, break loop)
[2] Without knowing the data, you have to guess how often the if conditions are true. This guess can be expressed mathematically by introducing a variable 0 <= x <= 1 (and similarly y). This is in line with what I said before: x is independent of n and therefore influences the overall runtime complexity only as a constant factor; to pin it down you would need to look at the actual execution paths.
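To make the counting concrete, here is a rough sketch (a Python translation of the pseudocode, assuming 0-based indexing) that counts the comparisons and shows the quadratic growth:
def count_comparisons(a):
    # Selection-sort-style pass from the pseudocode, counting how often
    # the comparison A[k] > A[j] runs.
    n = len(a)
    comparisons = 0
    for i in range(n - 1):
        k = i + 1
        for j in range(i + 2, n):
            comparisons += 1
            if a[k] > a[j]:
                k = j
        if k != i + 1:
            a[i + 1], a[k] = a[k], a[i + 1]
    return comparisons

for n in (10, 100, 1000):
    print(n, count_comparisons(list(range(n, 0, -1))))
    # Prints (n - 1)(n - 2)/2 comparisons: 36, 4851, 498501 -- O(n^2)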