I am new to Julia and trying to understand how things work.
Below is the sample code I just wrote.
(This is the baseline code and I am planning to add other lines one by one.)
I expected to see something like 1 2 3 4 5 6 7... from test = check(m)
However, I don't see any result.
Any help will be very much appreciated.
using Pkg
using Optim
using Printf
using LinearAlgebra, Statistics
using BenchmarkTools, Optim, Parameters, QuantEcon, Random
using Optim: converged, maximum, maximizer, minimizer, iterations
using Interpolations
using Distributions
using SparseArrays
using Roots
# ================ 1. Parameters and Constants ============================
mutable struct Model
    # Model Parameters and utility function
    δ::Float64
    function Model(; δ = 0.018)
        new(δ)
    end
end

function check(m)
    it = 0
    tol = 1e-8
    itmax = 1000
    dif = 0
    # Iteration
    while it < itmax && dif >= tol
        it = it + 1;
        V = Vnew;
        println(it)
    end
    return itmax
end
m=Model()
test = check(m)
Look at what you set up just before the loop:
dif = 0
tol = 1e-8
while it < itmax && dif >= tol
Now ask how dif >= tol could ever be true: dif is 0 and tol is 1e-8, so the condition is false on the very first check, the loop body never runs, and nothing is printed. Initialize dif to something larger than tol (for example dif = Inf) if the loop is meant to execute at least once. Note also that once the loop does run, the line V = Vnew will fail, because Vnew is never defined.
I saw this asked but I couldn't understand the answers!
I got 4 vector2s, P1 & P2 for line 1, P3 & P4 for line 2.
Code for intersection position works, but how do I check if that intersection is happening?
More specifically, I want to check what side of a polygon an imaginary line is passing through/colliding with
...
...I had something like it working in an old test script, but I made no annotations and I can't adapt it. I don't know if there's anything here that could be used, but I thought I'd share it:
if rotation_angle > PI/2 && rotation_angle < 3*PI/2:
    if rad_overflow(($Position2D.position-position).angle()-PI/2) < rad_overflow(rotation_angle-PI/2) or rad_overflow(($Position2D.position-position).angle()-PI/2) > rad_overflow(rotation_angle+PI/2):
        actives.x = 1
    else:
        actives.x = 0
    if rad_overflow(($Position2D2.position-position).angle()-PI/2) < rad_overflow(rotation_angle-PI/2) or rad_overflow(($Position2D2.position-position).angle()-PI/2) > rad_overflow(rotation_angle+PI/2):
        actives.y = 1
    else:
        actives.y = 0
else:
    if rad_overflow(($Position2D.position-position).angle()-PI/2) < rad_overflow(rotation_angle-PI/2) && rad_overflow(($Position2D.position-position).angle()-PI/2) > rad_overflow(rotation_angle+PI/2):
        actives.x = 1
    else:
        actives.x = 0
    if rad_overflow(($Position2D2.position-position).angle()-PI/2) < rad_overflow(rotation_angle-PI/2) && rad_overflow(($Position2D2.position-position).angle()-PI/2) > rad_overflow(rotation_angle+PI/2):
        actives.y = 1
    else:
        actives.y = 0
var point1 = $Position2D.position
var point2 = $Position2D2.position
var limit3 = Vector2(0,1).rotated(rotation_angle+PI/2)
var limit4 = Vector2(0,1).rotated(rotation_angle-PI/2)
var det = (point1.x - point2.x)*(limit3.y - limit4.y) - (point1.y - point2.y)*(limit3.x - limit4.x)
var new_position = Vector2(
((point1.x*point2.y - point1.y*point2.x) * (limit3.x-limit4.x) - (point1.x-point2.x) * (limit3.x*limit4.y - limit3.y*limit4.x))/det,
((point1.x*point2.y - point1.y*point2.x) * (limit3.y-limit4.y) - (point1.y-point2.y) * (limit3.x*limit4.y - limit3.y*limit4.x))/det)
if actives.x != actives.y:
    print("hit")
else:
    print("miss")
You ask:
I got 4 vector2s, P1 & P2 for line 1, P3 & P4 for line 2. Code for intersection position works, but how do I check if that intersection is happening?
If you are using Godot, you can use the Geometry class for this, in particular the line_intersects_line_2d method. I quote from the documentation:
Variant line_intersects_line_2d ( Vector2 from_a, Vector2 dir_a, Vector2 from_b, Vector2 dir_b )
Checks if the two lines (from_a, dir_a) and (from_b, dir_b) intersect. If yes, return the point of intersection as Vector2. If no intersection takes place, returns null.
Note: The lines are specified using direction vectors, not end points.
So that tells you both whether they intersect (a null return means they don't) and where (a non-null return is a Vector2 with the position of the intersection).
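If you also need to know whether the crossing actually falls within both of your segments P1-P2 and P3-P4 (rather than anywhere along the infinite lines), the usual parametric test works in any engine. Here is a minimal sketch in plain Python (segments_intersect is just an illustrative name, not a Godot API): solve P1 + t*(P2-P1) = P3 + u*(P4-P3) and require both t and u to lie in [0, 1].

# Minimal sketch: do segments P1-P2 and P3-P4 actually cross?
# Solve P1 + t*(P2-P1) = P3 + u*(P4-P3); a crossing happens iff 0 <= t, u <= 1.
def segments_intersect(p1, p2, p3, p4):
    r = (p2[0] - p1[0], p2[1] - p1[1])
    s = (p4[0] - p3[0], p4[1] - p3[1])
    denom = r[0] * s[1] - r[1] * s[0]      # 2D cross product r x s
    if denom == 0:
        return None                        # parallel (or collinear): no single crossing point
    qp = (p3[0] - p1[0], p3[1] - p1[1])
    t = (qp[0] * s[1] - qp[1] * s[0]) / denom
    u = (qp[0] * r[1] - qp[1] * r[0]) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (p1[0] + t * r[0], p1[1] + t * r[1])   # intersection point lies on both segments
    return None                            # the infinite lines cross, but outside the segments

print(segments_intersect((0, 0), (2, 2), (0, 2), (2, 0)))  # (1.0, 1.0)
print(segments_intersect((0, 0), (1, 1), (2, 0), (3, 1)))  # None (parallel)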
I'm having trouble defining the objective function in an SMT problem with z3py.
Long story short, I have to optimize the placement of smaller blocks inside a board that has a fixed width but variable height.
I have an array of coordinates (each represented by an array of integers of length 2) and a list of integers (representing the height of each block to place).
# [x,y] list of integer variables
P = [[Int("x_%s" % (i + 1)), Int("y_%s" % (i + 1))]
     for i in range(blocks)]
y = [int(b) for a, b in data[2:]]
I defined the objective function like this:
obj= Int(max([P[i][1] + y[i] for i in range(blocks)]))
It calculates the max height of the board given the starting coordinate of the blocks and their heights.
I know it could be better, but I think the problem would be the same even with a different definition.
Anyway, if I run my code, the following error occurs on the line of the objective function:
" raise Z3Exception("Symbolic expressions cannot be cast to concrete Boolean values.") "
While debugging I've seen that it is P[i][1] that gives the error, and I think it's because the program reads "y_i + 3" (for example) and the two can't be added together.
The point is: obviously the objective function depends on the variables of the problem, so how can I get rid of this error? Is there another place where I should define the objective function, so that it waits until the P array is instantiated before doing anything?
Full code:
from z3 import *
from math import ceil
width = 8
blocks = 4
x = [3,3,5,5]
y = [3,5,3,5]
height = ceil(sum([x[i] * y[i] for i in range(blocks)]) / width) + 1
# [blocks x 2] list of integer variables
P = [[Int("x_%s" % (i + 1)), Int("y_%s" % (i + 1))]
     for i in range(blocks)]
# value/domain constraint
values = [And(0 <= P[i][0], P[i][0] <= width - 1, 0 <= P[i][1], P[i][1] <= height - 1)
          for i in range(blocks)]
obj = Int(max([P[i][1] + y[i] for i in range(blocks)]))
board_problem = values # other constraints I've not included for brevity
o = Optimize()
o.add(board_problem)
o.minimize(obj)
if (o.check == 'unsat'):
    print("The problem is unsatisfiable")
else:
    print("Solved")
The problem here is that you're calling Python's built-in max on symbolic values; it is not designed to work on symbolic expressions. Instead, define a symbolic version of max and use that:
# Return maximum of a vector; error if empty
def symMax(vs):
    m = vs[0]
    for v in vs[1:]:
        m = If(v > m, v, m)
    return m

obj = symMax([P[i][1] + y[i] for i in range(blocks)])
With this change your program will go through and print Solved when run.
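If you also want to inspect the actual solution, note that the final if compares the method o.check itself to the string 'unsat', which is always false; call o.check() and compare the result against sat/unsat, then read values off the model. A small sketch using the names from the question's code:

if o.check() == sat:
    m = o.model()
    print("height used:", m.evaluate(obj))
    print("positions:", [(m.evaluate(P[i][0]), m.evaluate(P[i][1])) for i in range(blocks)])
else:
    print("The problem is unsatisfiable")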
I was asked to calculate Pi using the Leibniz formula for Pi with a given accuracy (eps).
The formula looks like this: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
Initially, I wrote the following code:
fun main() {
    val eps = 0.005
    var n = 2
    var r = row(n) // current row
    var r0 = row(n-1)
    var s = r0 + r
    while (Math.abs(r) > eps) {
        n++
        r = row(n)
        s += r
    }
    println(r.toString() + " <-- Leibniz(" + n.toString() + ")")
    println(Math.abs(s*4).toString() + " <-- our evaluation with eps")
    println(Math.PI.toString() + " <-- real Pi")
    println((Math.abs(s*4)) in (Math.PI-eps..Math.PI+eps))
}

fun row(n: Int) = ((Math.pow(-1.0, n.toDouble()))/(2*n-1))
Then I found out that it doesn't work correctly, because
println((Math.abs(s*4)) in (Math.PI-eps..Math.PI+eps)) printed false.
I dug deeper with the debugger and realised that if I use
while (Math.abs(r) > eps/2)
instead of
while (Math.abs(r) > eps)
everything works fine.
Could someone please explain what I did wrong, or why I have to divide eps by 2 if that is indeed the correct fix?
Thanks.
Each term r_i in that series ends up contributing to PI with a factor of 4, because sum(r_0, ..., r_n) approximates PI/4. So when you stop at the first r_i <= eps, that only means that sum(r_0, ..., r_(i-1)) has an accuracy of eps, i.e. it lies somewhere in [PI/4 - eps/2, PI/4 + eps/2]. But PI itself is 4*sum, so the accuracy is 4*eps, i.e. the approximation lies somewhere in [PI - 2*eps, PI + 2*eps].
For your value of eps = 0.005:
r_100 = 0.00497512... is the first r <= eps
sum(r_0, ..., r_99) = 0.7828982, so PI at that point would be approximated as 3.1315929
EDIT
Also, you are actually calculating -PI, because you are flipping the sign of each term in the series. What you call r0 in your code (it should rather be called r1, because it is the result of row(1)) is -1 instead of +1.
When you check Math.abs(r) > eps you're looking at the size of the n-th element of the series.
The distance of your current approximation from PI is the sum of all the terms in the series after that one.
As far as I know, the relationship between the size of the n-th element of a convergent series and how good an approximation you have depends on the specific series you are summing.
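To see the effect of the stopping threshold concretely, here is a quick numerical sketch (Python rather than Kotlin, and using the standard +1 - 1/3 + 1/5 - ... signs rather than the flipped ones in the question): it sums the series until the term that was just added is at most the threshold, then reports how far 4*sum is from Pi.

import math

# Sum the Leibniz series with the same stopping rule as the question:
# the first term whose magnitude is <= threshold is still added, then we stop.
def leibniz_error(threshold):
    s, n, term = 0.0, 0, 1.0
    while abs(term) > threshold:
        term = (-1.0) ** n / (2 * n + 1)   # n-th Leibniz term
        s += term
        n += 1
    return abs(4 * s - math.pi)

eps = 0.005
print(leibniz_error(eps))      # roughly 2*eps: stopping at eps is not tight enough for PI
print(leibniz_error(eps / 2))  # just under eps, which is why eps/2 happens to work here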
I am trying to find three parameters (a, b, c) to fit my experimental data, using an ODE solver and least-squares optimization with Scilab's built-in functions.
However, I keep getting the message "submatrix incorrectly defined" at the line "y_exp(:,1) = [0.135 ...".
When I try another series of data (t, y_exp) such as the one used in the original template, I get no error messages. The template I use was found here: https://wiki.scilab.org/Non%20linear%20optimization%20for%20parameter%20fitting%20example
function dy = myModel ( t , y , a , b, c )
    // The right-hand side of the Ordinary Differential Equation.
    dy(1) = -a*y(1) - b*y(1)*y(2)
    dy(2) = a*y(1) - b*y(1)*y(2) - c*y(2)
endfunction

function f = myDifferences ( k )
    // Returns the difference between the simulated differential
    // equation and the experimental data.
    global MYDATA
    t = MYDATA.t
    y_exp = MYDATA.y_exp
    a = k(1)
    b = k(2)
    c = k(3)
    y0 = y_exp(1,:)
    t0 = 0
    y_calc = ode(y0',t0,t,list(myModel,a,b,c))
    diffmat = y_calc' - y_exp
    // Make a column vector
    f = diffmat(:)
    MYDATA.funeval = MYDATA.funeval + 1
endfunction
// Experimental data
t = [0,20,30,45,75,105,135,180,240]';
y_exp(:,1) = [0.135,0.0924,0.067,0.0527,0.0363,0.02445,0.01668,0.012,0.009]';
y_exp(:,2) = [0,0.00918,0.0132,0.01835,0.0261,0.03215,0.0366,0.0393,0.0401]';
// Store data for future use
global MYDATA;
MYDATA.t = t;
MYDATA.y_exp = y_exp;
MYDATA.funeval = 0;
function val = L_Squares ( k )
    // Computes the sum of squares of the differences.
    f = myDifferences ( k )
    val = sum(f.^2)
endfunction
// Initial guess
a = 0;
b = 0;
c = 0;
x0 = [a;b;c];
[fopt ,xopt]=leastsq(myDifferences, x0)
Does anyone know how to approach this problem?
Just rewrite lines 28 and 29 (the two y_exp assignments) as
y_exp = [0.135,0.0924,0.067,0.0527,0.0363,0.02445,0.01668,0.012,0.009
         0,0.00918,0.0132,0.01835,0.0261,0.03215,0.0366,0.0393,0.0401]';
or insert a clear at line 1 of your script (you may have defined y_exp before with a different size).
Suppose x, y, z are int variables and A is a matrix. I want to express a constraint like:
z == A[x][y]
However this leads to an error:
TypeError: object cannot be interpreted as an index
What would be the correct way to do this?
=======================
A specific example:
I want to select 2 items with the best combination score,
where the score is given by the value of each item and a bonus on the selection pair.
For example,
for 3 items: a, b, c with related value [1,2,1], and the bonus on pairs (a,b) = 2, (a,c)=5, (b,c) = 3, the best selection is (a,c), because it has the highest score: 1 + 1 + 5 = 7.
My question is how to represent the constraint of selection bonus.
Suppose CHOICE[0] and CHOICE[1] are the selection variables and B is the bonus variable.
The ideal constraint should be:
B = bonus[CHOICE[0]][CHOICE[1]]
but it results in TypeError: object cannot be interpreted as an index
I know another way is to use nested for loops to instantiate CHOICE first and then represent B, but this is really inefficient for large amounts of data.
Could any expert suggest a better solution, please?
If someone wants to play a toy example, here's the code:
from z3 import *
items = [0,1,2]
value = [1,2,1]
bonus = [[1,2,5],
         [2,1,3],
         [5,3,1]]
choices = [0,1]
# selection score
SCORE = [ Int('SCORE_%s' % i) for i in choices ]
# bonus
B = Int('B')
# final score
metric = Int('metric')
# selection variable
CHOICE = [ Int('CHOICE_%s' % i) for i in choices ]
# variable domain
domain_choice = [ And(0 <= CHOICE[i], CHOICE[i] < len(items)) for i in choices ]
# selection implication
constraint_sel = []
for c in choices:
    for i in items:
        constraint_sel += [Implies(CHOICE[c] == i, SCORE[c] == value[i])]
# choice not the same
constraint_neq = [CHOICE[0] != CHOICE[1]]
# bonus constraint. uncomment it to see the issue
# constraint_b = [B == bonus[val(CHOICE[0])][val(CHOICE[1])]]
# metric definition
constraint_sumscore = [metric == sum([SCORE[i] for i in choices ]) + B]
constraints = constraint_sumscore + constraint_sel + domain_choice + constraint_neq + constraint_b
opt = Optimize()
opt.add(constraints)
opt.maximize(metric)
s = []
if opt.check() == sat:
    m = opt.model()
    print [ m.evaluate(CHOICE[i]) for i in choices ]
    print m.evaluate(metric)
else:
    print "failed to solve"
Turns out the best way to deal with this problem is to actually not use arrays at all, but simply create integer variables. With this method, the 317x317 item problem originally posted actually gets solved in about 40 seconds on my relatively old computer:
[ 0.01s] Data loaded
[ 2.06s] Variables defined
[37.90s] Constraints added
[38.95s] Solved:
c0 = 19
c1 = 99
maxVal = 27
Note that the actual "solution" is found in about a second! But adding all the required constraints takes the bulk of the 40 seconds spent. Here's the encoding:
from z3 import *
import sys
import json
import sys
import time
start = time.time()
def tprint(s):
    global start
    now = time.time()
    etime = now - start
    print "[%ss] %s" % ('{0:5.2f}'.format(etime), s)
# load data
with open('data.json') as data_file:
    dic = json.load(data_file)
tprint("Data loaded")
items = dic['items']
valueVals = dic['value']
bonusVals = dic['bonusVals']
vals = [[Int("val_%d_%d" % (i, j)) for j in items if j > i] for i in items]
tprint("Variables defined")
opt = Optimize()
for i in items:
    for j in items:
        if j > i:
            opt.add(vals[i][j-i-1] == valueVals[i] + valueVals[j] + bonusVals[i][j])
c0, c1 = Ints('c0 c1')
maxVal = Int('maxVal')
opt.add(Or([Or([And(c0 == i, c1 == j, maxVal == vals[i][j-i-1]) for j in items if j > i]) for i in items]))
tprint("Constraints added")
opt.maximize(maxVal)
r = opt.check ()
if r == unsat or r == unknown:
    raise Z3Exception("Failed")
tprint("Solved:")
m = opt.model()
print " c0 = %s" % m[c0]
print " c1 = %s" % m[c1]
print " maxVal = %s" % m[maxVal]
I think this is as fast as it'll get with Z3 for this problem. Of course, if you want to maximize multiple metrics, then you can probably structure the code so that you can reuse most of the constraints, thus amortizing the cost of constructing the model just once, and incrementally optimizing afterwards for optimal performance.
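For instance, here is a rough sketch of how the tail end of the script could be restructured for that (reusing opt, maxVal, c0 and c1 from above, and relying on z3's Optimize push/pop scopes): keep the expensive constraints in the solver and give each objective its own scope.

# Rough sketch: the constraints added above stay in the solver; each metric
# gets its own push/pop scope, so the model is built only once.
for name, objective in [("maxVal", maxVal)]:   # further objectives could be appended here
    opt.push()
    opt.maximize(objective)
    if opt.check() == sat:
        m = opt.model()
        print("%s = %s (c0 = %s, c1 = %s)" % (name, m[objective], m[c0], m[c1]))
    opt.pop()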