Until now x has had two columns and there was no problem, but now x has a varying number of columns, and I don't know how to write the analogous code for a dynamic number of columns in x. Here is the two-column version:
min_x = min(x);
max_x = max(x);
step = (max_x - min_x)/50;
[X, Y] = ndgrid(min_x(1):step(1):max_x(1), min_x(2):step(2):max_x(2));
You can use cell arrays to generate a comma-separated list:
%# sample data
x = rand(10,3); %# you can change the column numbers here
%# calculate step sizes
mn = min(x);
mx = max(x);
step = (mx-mn)/50;
%# vec{i} = mn(i):step(i):mx(i)
vec = arrayfun(@(a,s,b)a:s:b, mn,step,mx, 'UniformOutput',false);
%# [X,Y,...] = ndgrid(vec{1},vec{2},...)
C = cell(1,numel(vec));
[C{:}] = ndgrid( vec{:} );
%# result = [X(:),Y(:),...]
result = cell2mat( cellfun(@(v)v(:), C, 'UniformOutput',false) );
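For comparison, here is a minimal numpy sketch of the same idea (my own translation, not part of the MATLAB answer above): np.meshgrid with indexing='ij' plays the role of ndgrid over a variable number of axes, and the Fortran-order ravel mimics MATLAB's X(:):

import numpy as np

x = np.random.rand(10, 3)  # sample data; any number of columns

mn, mx = x.min(axis=0), x.max(axis=0)
step = (mx - mn) / 50

# one axis vector per column, like vec{i} = mn(i):step(i):mx(i)
# (endpoint handling differs slightly from MATLAB's colon operator)
vecs = [np.arange(lo, hi + s, s) for lo, hi, s in zip(mn, mx, step)]

# [X, Y, ...] = ndgrid(vec{1}, vec{2}, ...)
grids = np.meshgrid(*vecs, indexing='ij')

# result = [X(:), Y(:), ...]
result = np.column_stack([g.ravel(order='F') for g in grids])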
First, a picture (screenshot not reproduced here):
Column A is my source data, 50 points.
Column C and D are the SMA calculated with numpy and mathdotnet.com, respectively, with a window of 15.
Column F is the delta.
As we can see, about halfway, the data becomes identical, but the first half is not. I do not understand why, and, more importantly, do not know what to trust.
So I took an optimized version of the SMA from SO and ran the data through it.
The code is here:
private static NDArray SMA(this NDArray Data, int Period)
{
    var Length = Data.len;
    // calculate the moving average
    var Buffer = new double[Period];
    var Output = new double[Length];
    var CurrentIndex = 0;
    for (var i = 0; i < Length; i++)
    {
        Buffer[CurrentIndex] = Data.GetDouble(i) / Period;
        var MA = 0.0;
        for (var j = 0; j < Period; j++)
        {
            MA += Buffer[j];
        }
        Output[i] = MA;
        CurrentIndex = (CurrentIndex + 1) % Period;
    }
    var R = new ArraySegment<double>(Output, Period - 1, Length - Period + 1);
    return new NDArray(R.ToArray());
}
It uses NumSharp, the .NET port of numpy, to hold the source array.
While the code is entirely different, the C# and Python numpy versions output the same results (differences appear after the 12th decimal place, so we can consider them identical).
This points to mathdotnet.com being the odd one out, so I guess I can trust the numpy / C# versions more.
Are there different variations of the SMA that could cause this, or something obvious I don't see?
I have put all the data here: https://pastebin.com/WgYJUUJF
Edit:
Here is the numpy code:
import numpy as np

def calcSma(data, smaPeriod):
    j = next(i for i, x in enumerate(data) if x is not None)
    our_range = range(len(data))[j + smaPeriod - 1:]
    empty_list = [None] * (j + smaPeriod - 1)
    sub_result = [np.mean(data[i - smaPeriod + 1: i + 1]) for i in our_range]
    return np.array(empty_list + sub_result)

def calcSma2(data_set, periods=3):
    weights = np.ones(periods) / periods
    return np.convolve(data_set, weights, mode='valid')
a = np.array([1.1282553063375, 1.13157696082132, 1.13275406120136, 1.1332879715733, 1.12761933580452, 1.12621836040801, 1.12282485875706, 1.12265572041877, 1.13094386506532, 1.12320520490577, 1.12427293064877, 1.1328332027022, 1.13099445663901, 1.12843355605048, 1.13002750724853, 1.12843355605048, 1.13099445663901, 1.12709476494142, 1.12684879712348, 1.12672349888807, 1.12600933402474, 1.13112070248549, 1.12985951088976, 1.12822416032659, 1.12471789559362, 1.12651004224413, 1.12442669033881, 1.12334638977164, 1.12714333124378, 1.1312233808195, 1.12713229372575, 1.128255040952, 1.12585669781931, 1.12763457442902, 1.12470631424376, 1.12223443223443, 1.12506842815956, 1.12691187181355, 1.12385654130971, 1.13026344596074, 1.12237927400894, 1.1245915922457, 1.13088395780284, 1.13211944646759, 1.12590649028825, 1.12829127560895, 1.11876736364966, 1.12222667492441, 1.12169543369019, 1.12199031071285])
b = calcSma(a, 15)
c = calcSma2(a, 15)
print b
print "----------------------------------"
print c
and here is the mathdotnet one:
var data = Vector<double>.Build.Dense(new[] { 1.1282553063375, 1.13157696082132, 1.13275406120136, 1.1332879715733, 1.12761933580452, 1.12621836040801, 1.12282485875706, 1.12265572041877, 1.13094386506532, 1.12320520490577, 1.12427293064877, 1.1328332027022, 1.13099445663901, 1.12843355605048, 1.13002750724853, 1.12843355605048, 1.13099445663901, 1.12709476494142, 1.12684879712348, 1.12672349888807, 1.12600933402474, 1.13112070248549, 1.12985951088976, 1.12822416032659, 1.12471789559362, 1.12651004224413, 1.12442669033881, 1.12334638977164, 1.12714333124378, 1.1312233808195, 1.12713229372575, 1.128255040952, 1.12585669781931, 1.12763457442902, 1.12470631424376, 1.12223443223443, 1.12506842815956, 1.12691187181355, 1.12385654130971, 1.13026344596074, 1.12237927400894, 1.1245915922457, 1.13088395780284, 1.13211944646759, 1.12590649028825, 1.12829127560895, 1.11876736364966, 1.12222667492441, 1.12169543369019, 1.12199031071285 });
var sma = Vector<double>.Build.Dense(data.MovingAverage(15).Skip(14).ToArray());
var s = sma.Aggregate(string.Empty, (Current, v) => Current + $"{v}, ");
Console.WriteLine(s);
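To make the "different variations" question concrete, here is a small Python sketch (my own, not taken from either library) contrasting a full-window SMA with an incrementally updated one. A library that averages fewer than Period samples at the start of the series, or seeds its running sum differently, would diverge from the others exactly in the first part of the output, which matches the pattern in the delta column:

import numpy as np

def sma_full_window(data, period):
    # every output is the mean of exactly `period` samples
    return np.convolve(data, np.ones(period) / period, mode='valid')

def sma_running(data, period):
    # running sum updated incrementally, like the C# circular buffer above
    out, acc = [], 0.0
    for i, v in enumerate(data):
        acc += v
        if i >= period:
            acc -= data[i - period]
        if i >= period - 1:
            out.append(acc / period)
    return np.array(out)

data = np.random.rand(50)
# on identical windows the two definitions agree to floating-point noise
print(np.max(np.abs(sma_full_window(data, 15) - sma_running(data, 15))))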
I am trying to find three parameters (a, b, c) to fit my experimental data, using an ODE solver and least-squares optimization with Scilab's built-in functions.
However, I keep getting the message "submatrix incorrectly defined" at the line "y_exp(:,1) = [0.135 ...".
When I try another series of data (t, y_exp), such as the one used in the original template, I get no error messages. The template I use was found here: https://wiki.scilab.org/Non%20linear%20optimization%20for%20parameter%20fitting%20example
function dy = myModel ( t , y , a , b , c )
    // The right-hand side of the Ordinary Differential Equation.
    dy(1) = -a*y(1) - b*y(1)*y(2)
    dy(2) = a*y(1) - b*y(1)*y(2) - c*y(2)
endfunction

function f = myDifferences ( k )
    // Returns the difference between the simulated differential
    // equation and the experimental data.
    global MYDATA
    t = MYDATA.t
    y_exp = MYDATA.y_exp
    a = k(1)
    b = k(2)
    c = k(3)
    y0 = y_exp(1,:)
    t0 = 0
    y_calc = ode(y0', t0, t, list(myModel, a, b, c))
    diffmat = y_calc' - y_exp
    // Make a column vector
    f = diffmat(:)
    MYDATA.funeval = MYDATA.funeval + 1
endfunction
// Experimental data
t = [0,20,30,45,75,105,135,180,240]';
y_exp(:,1) = [0.135,0.0924,0.067,0.0527,0.0363,0.02445,0.01668,0.012,0.009]';
y_exp(:,2) = [0,0.00918,0.0132,0.01835,0.0261,0.03215,0.0366,0.0393,0.0401]';
// Store data for future use
global MYDATA;
MYDATA.t = t;
MYDATA.y_exp = y_exp;
MYDATA.funeval = 0;
function val = L_Squares ( k )
    // Computes the sum of squares of the differences.
    f = myDifferences ( k )
    val = sum(f.^2)
endfunction
// Initial guess
a = 0;
b = 0;
c = 0;
x0 = [a;b;c];
[fopt ,xopt]=leastsq(myDifferences, x0)
Does anyone know how to approach this problem?
Just rewrite the two lines defining the columns of y_exp as
y_exp = [0.135,0.0924,0.067,0.0527,0.0363,0.02445,0.01668,0.012,0.009
0,0.00918,0.0132,0.01835,0.0261,0.03215,0.0366,0.0393,0.0401]';
or insert a clear at the top of the script (you may have defined y_exp before with a different size).
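For readers who want to reproduce the fit outside Scilab, here is a minimal SciPy sketch of the same ODE-plus-least-squares setup (my own translation of the code above; the small nonzero initial guess is my choice, since starting at [0, 0, 0] can stall the optimizer):

import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

t = np.array([0, 20, 30, 45, 75, 105, 135, 180, 240], dtype=float)
y_exp = np.column_stack([
    [0.135, 0.0924, 0.067, 0.0527, 0.0363, 0.02445, 0.01668, 0.012, 0.009],
    [0, 0.00918, 0.0132, 0.01835, 0.0261, 0.03215, 0.0366, 0.0393, 0.0401],
])

def my_model(y, t, a, b, c):
    # right-hand side of the ODE, mirroring myModel above
    return [-a*y[0] - b*y[0]*y[1], a*y[0] - b*y[0]*y[1] - c*y[1]]

def my_differences(k):
    y_calc = odeint(my_model, y_exp[0], t, args=tuple(k))
    return (y_calc - y_exp).ravel()

fit = least_squares(my_differences, x0=[0.01, 0.01, 0.01])
print(fit.x)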
I have often found solutions to my problems here, but this time I am totally baffled. I don't know what's wrong with my code.
I wrote a program with VPython that creates a box with charged particles inside. When I launch it, I get only a grey screen and the program crashes. No error message, nothing.
from visual import *
from random import *

def electronizer(num):
    list = []
    electron_charge = -1.60217662e-19
    electron_mass = 9.10938356e-31
    for i in range(num):
        another_list = []
        e = sphere(pos=(random(), random(), random()), radius=2.818e-15,
                   color=color.cyan)
        e.v = vector(random(), random(), random())
        another_list.append(e)
        another_list.append(e.v)
        another_list.append(electron_charge)
        another_list.append(electron_mass)
        list.append(another_list)
    return list

def protonizer(num):
    list = []
    proton_charge = 1.60217662e-19
    proton_mass = 1.6726219e-27
    for i in range(num):
        another_list = []
        p = sphere(pos=(random(), random(), random()), radius=0.8408739e-15, color=color.red)
        p.v = vector(random(), random(), random())
        another_list.append(p)
        another_list.append(p.v)
        another_list.append(proton_charge)
        another_list.append(proton_mass)
        list.append(another_list)
    return list
def cross(a, b):
    c = vector(a[1]*b[2] - a[2]*b[1],
               a[2]*b[0] - a[0]*b[2],
               a[0]*b[1] - a[1]*b[0])
    return c
def positioner(work_list):
    k = 8.9875517873681764e3  # Nm2/C2
    G = 6.674e-11  # Nm2/kg2
    vac_perm = 1.2566370614e-6  # H/m
    pi = 3.14159265
    dt = 0.1e-3
    constant = 1
    force = vector(0, 0, 0)
    for i in range(len(work_list)):
        for j in range(len(work_list)):
            if i != j:
                r = work_list[i][0].pos - work_list[j][0].pos
                r_mag = mag(r)
                r_norm = norm(r)
                F = k * ((work_list[i][2] * work_list[j][2]) / (r_mag**2)) * r_norm
                force += F
                B = constant*(vac_perm / 4*pi) * (cross(work_list[j][2] * work_list[j][1], norm(r)))/r_mag**2
                F = cross(work_list[i][2] * work_list[i][1], B)
                force += F
                F = -(G * work_list[i][3] * work_list[j][3]) / r_mag**2 * r_norm
                force += F
        acceleration = force / work_list[i][3]
        difference_in_velocity = acceleration * dt
        work_list[i][1] += difference_in_velocity
        difference_in_position = work_list[i][1] * dt
        work_list[i][0].pos += difference_in_position
        if abs(work_list[i][0].pos[0]) > 2.5e-6:
            work_list[i][1][0] = -work_list[i][1][0]
        elif abs(work_list[i][0].pos[1]) > 2.5e-6:
            work_list[i][1][1] = -work_list[i][1][1]
        elif abs(work_list[i][0].pos[2]) > 2.5e-6:
            work_list[i][1][2] = -work_list[i][1][2]
    return work_list
box = box(pos=(0, 0, 0), length = 5e-6, width = 5e-6, height = 5e-6, opacity = 0.5)
protons_num = raw_input("number of protons: ")
electrons_num = raw_input("number of electrons: ")
list_of_electrons = electronizer(int(electrons_num))
list_of_protons = protonizer(int(protons_num))
work_list = list_of_electrons + list_of_protons
while True:
    work_list = positioner(work_list)
You should ask your question on the VPython.org forum, where the VPython experts hang out and will be able to answer it. Mention which operating system and which version of Python you are using. From your code I can see that you are using classic VPython; note that the newer VPython 7 just came out, but its syntax has changed.
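One concrete thing worth checking first (my own guess, not something the forum answer above covers): in classic VPython, an animation loop that never calls rate() can starve the renderer and produce exactly the kind of grey, frozen window you describe. A minimal sketch of the change:

while True:
    rate(100)  # yield to the renderer about 100 times per second
    work_list = positioner(work_list)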
I have an image that has been processed through:
//UIImage to Mat
cv::Mat originalMat = [self cvMatFromUIImage:inputImage];
//Grayscale
cv::Mat grayMat;
cv::cvtColor(originalMat, grayMat, CV_RGB2GRAY);
//Blur
cv::Mat gaussMat;
cv::GaussianBlur( grayMat , gaussMat, cv::Size(9, 9), 2, 2 );
//Threshold
cv::Mat tMat;
cv::threshold(grayMat, tMat, 100, 255, cv::THRESH_BINARY);
Then I want to analyze (count the white and black points of) the pixels that belong to a line. For instance: I have a 100x120 px image and I want to check the line x = 5, y = 0..119, and vice versa x = 0..99, y = 5.
So I expected the Mat to store x in Mat.cols and y in Mat.rows, but it looks like it saves the data another way. For example, I tried to change the color of the pixels that belong to those lines, but I didn't get 2 lines:
for( int x = 0; x < tMat.cols; x++ ){
    tMat.at<cv::Vec4b>(5,x)[0] = 100;
}
for( int y = 0; y < tMat.rows; y++ ){
    tMat.at<cv::Vec4b>(y,5)[0] = 100;
}
return [self UIImageFromCVMat:tMat];
Result for a white image (screenshot omitted):
Why didn't I get 2 lines? Is it possible to draw/check lines in a Mat directly? And what if I want to check a line computed via y = kx + b?
You are accessing the pixel values in the wrong way. You are working with an image that has only one channel, which is why you should access the pixel values like this:
for (int x = 0; x < tMat.cols; x++){
    tMat.at<unsigned char>(5, x) = 100;
}
for (int y = 0; y < tMat.rows; y++){
    tMat.at<unsigned char>(y, 5) = 100;
}
The Mat element's type is defined by two properties - the number of channels and the underlying type of data. If you do not know the meaning of those terms, I strongly suggest that you read the documentation for methods cv::Mat::type(), cv::Mat::channels() and cv::Mat::depth().
Two more examples:
mat.at<float>(row, col) = 1.0f; // if mat type is CV_32FC1
mat.at<cv::Vec3b>(row, col) = cv::Vec3b(1, 2, 3); // if mat type is CV_8UC3
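For readers using the Python bindings, here is a hedged equivalent of the fix (my own sketch; numpy's ndarray interface makes the single-channel, row-major layout explicit):

import cv2
import numpy as np

img = np.full((120, 100), 255, dtype=np.uint8)  # white 100x120 px, one channel
_, t = cv2.threshold(img, 100, 255, cv2.THRESH_BINARY)

# rows index y and columns index x, so a pixel is t[y, x]
t[5, :] = 100  # horizontal line at y = 5
t[:, 5] = 100  # vertical line at x = 5

# counting white/black points along a line is a reduction over that slice
print((t[6, :] == 255).sum(), (t[6, :] == 0).sum())  # 99 white, 0 black on row 6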
Probably an issue with your Mat data types. The output of threshold is a single channel image that is 8-bit or 32-bit (http://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html?highlight=threshold#threshold), so you probably should not be setting values with Mat.at<Vec4b>[0].
Here's a function to return the type of your matrix. Usage is in the commented out part. Copied from How to find out what type of a Mat object is with Mat::type() in OpenCV.
std::string type2str(int type){
    //string ty = type2str( comparisonResult.type() );
    //printf("Matrix: %s %dx%d \n", ty.c_str(), comparisonResult.cols, comparisonResult.rows );
    std::string r;
    uchar depth = type & CV_MAT_DEPTH_MASK;
    uchar chans = 1 + (type >> CV_CN_SHIFT);
    switch ( depth ) {
        case CV_8U:  r = "8U";  break;
        case CV_8S:  r = "8S";  break;
        case CV_16U: r = "16U"; break;
        case CV_16S: r = "16S"; break;
        case CV_32S: r = "32S"; break;
        case CV_32F: r = "32F"; break;
        case CV_64F: r = "64F"; break;
        default:     r = "User"; break;
    }
    r += "C";
    r += (chans + '0');
    return r;
}
Suppose x, y, z are int variables and A is a matrix. I want to express a constraint like:
z == A[x][y]
However this leads to an error:
TypeError: object cannot be interpreted as an index
What would be the correct way to do this?
=======================
A specific example:
I want to select 2 items with the best combined score,
where the score is given by the value of each item plus a bonus on the selected pair.
For example,
for 3 items a, b, c with values [1,2,1] and pair bonuses (a,b) = 2, (a,c) = 5, (b,c) = 3, the best selection is (a,c), because it has the highest score: 1 + 1 + 5 = 7.
My question is how to represent the constraint of selection bonus.
Suppose CHOICE[0] and CHOICE[1] are the selection variables and B is the bonus variable.
The ideal constraint should be:
B = bonus[CHOICE[0]][CHOICE[1]]
but it results in TypeError: object cannot be interpreted as an index
I know another way is to use nested for loops to instantiate CHOICE first and then express B, but this is really inefficient for large amounts of data.
Could any expert suggest a better solution?
If someone wants to play a toy example, here's the code:
from z3 import *

items = [0, 1, 2]
value = [1, 2, 1]
bonus = [[1, 2, 5],
         [2, 1, 3],
         [5, 3, 1]]
choices = [0, 1]

# selection score
SCORE = [Int('SCORE_%s' % i) for i in choices]
# bonus
B = Int('B')
# final score
metric = Int('metric')
# selection variable
CHOICE = [Int('CHOICE_%s' % i) for i in choices]
# variable domain
domain_choice = [And(0 <= CHOICE[i], CHOICE[i] < len(items)) for i in choices]
# selection implication
constraint_sel = []
for c in choices:
    for i in items:
        constraint_sel += [Implies(CHOICE[c] == i, SCORE[c] == value[i])]
# choice not the same
constraint_neq = [CHOICE[0] != CHOICE[1]]
# bonus constraint. uncomment it to see the issue
# constraint_b = [B == bonus[val(CHOICE[0])][val(CHOICE[1])]]
constraint_b = []  # placeholder so the script runs; swap in the line above to reproduce the error
# metric definition
constraint_sumscore = [metric == sum([SCORE[i] for i in choices]) + B]
constraints = constraint_sumscore + constraint_sel + domain_choice + constraint_neq + constraint_b
opt = Optimize()
opt.add(constraints)
opt.maximize(metric)
if opt.check() == sat:
    m = opt.model()
    print [m.evaluate(CHOICE[i]) for i in choices]
    print m.evaluate(metric)
else:
    print "failed to solve"
Turns out the best way to deal with this problem is to actually not use arrays at all, but simply create integer variables. With this method, the 317x317 item problem originally posted actually gets solved in about 40 seconds on my relatively old computer:
[ 0.01s] Data loaded
[ 2.06s] Variables defined
[37.90s] Constraints added
[38.95s] Solved:
c0 = 19
c1 = 99
maxVal = 27
Note that the actual "solution" is found in about a second! But adding all the required constraints takes the bulk of the 40 seconds spent. Here's the encoding:
from z3 import *
import json
import time

start = time.time()

def tprint(s):
    global start
    now = time.time()
    etime = now - start
    print "[%ss] %s" % ('{0:5.2f}'.format(etime), s)

# load data
with open('data.json') as data_file:
    dic = json.load(data_file)
tprint("Data loaded")

items = dic['items']
valueVals = dic['value']
bonusVals = dic['bonusVals']
vals = [[Int("val_%d_%d" % (i, j)) for j in items if j > i] for i in items]
tprint("Variables defined")

opt = Optimize()
for i in items:
    for j in items:
        if j > i:
            opt.add(vals[i][j-i-1] == valueVals[i] + valueVals[j] + bonusVals[i][j])
c0, c1 = Ints('c0 c1')
maxVal = Int('maxVal')
opt.add(Or([Or([And(c0 == i, c1 == j, maxVal == vals[i][j-i-1]) for j in items if j > i]) for i in items]))
tprint("Constraints added")

opt.maximize(maxVal)
r = opt.check()
if r == unsat or r == unknown:
    raise Z3Exception("Failed")
tprint("Solved:")
m = opt.model()
print "     c0 = %s" % m[c0]
print "     c1 = %s" % m[c1]
print " maxVal = %s" % m[maxVal]
I think this is about as fast as it will get with Z3 for this problem. Of course, if you want to maximize multiple metrics, you can probably structure the code so that most of the constraints are reused, amortizing the cost of constructing the model once and then optimizing incrementally for the best performance.
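As a sketch of that amortization idea (my own illustration, assuming z3py's Optimize supports push/pop the way Solver does): build the expensive shared constraints once, then push and pop only the per-query parts.

from z3 import Optimize, Int, sat

opt = Optimize()
x, y = Int('x'), Int('y')
opt.add(x >= 0, y >= 0, x + y <= 10)  # shared constraints, built once

for bound in (3, 7):  # per-query constraints come and go
    opt.push()
    opt.add(y <= bound)
    opt.maximize(x + y)
    if opt.check() == sat:
        print(bound, opt.model())
    opt.pop()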