Computing the fitness function value in binary particle swarm optimisation (BPSO)

I have a BPSO with function1 and don't know how to compute its value. I know how to update the velocity, and I know the values of the vector x, where x[i] = 0 or 1.
I want to optimize function1. A single particle contains two vectors: velocity (each component in [-Vmax, Vmax]) and x (each component 0 or 1). First I initialize my particles with random numbers, then I update the velocities, and next I update x. The last step is to compute the function value, and this is the main problem: I don't know which values I should use to do this.
Thanks
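In case a sketch helps, here is a minimal MATLAB-style BPSO loop (my illustration; nothing here comes from the original post except the names function1, x, velocity and Vmax, and it assumes you are minimizing function1 — flip the comparisons for maximization). The key point: the fitness of a particle is simply function1 evaluated on its current binary vector x; if the bits encode something else, decode them first and pass the decoded value to function1.
nParticles = 20; nDims = 10; Vmax = 4; maxIter = 100;   % illustrative sizes
v = -Vmax + 2*Vmax*rand(nParticles, nDims);   % velocities in [-Vmax, Vmax]
x = double(rand(nParticles, nDims) > 0.5);    % binary positions, x(i,j) is 0 or 1
pbest = x; pbestVal = inf(nParticles, 1);     % personal bests
gbest = x(1,:); gbestVal = inf;               % global best
for iter = 1:maxIter
    for i = 1:nParticles
        % The fitness is simply function1 evaluated on the particle's
        % current binary vector; decode the bits first if they stand for
        % something else (feature subset, integer, etc.).
        f = function1(x(i,:));
        if f < pbestVal(i), pbestVal(i) = f; pbest(i,:) = x(i,:); end
        if f < gbestVal, gbestVal = f; gbest = x(i,:); end
    end
    % velocity update (the coefficients 2 are just the usual textbook choice)
    r1 = rand(nParticles, nDims); r2 = rand(nParticles, nDims);
    v = v + 2*r1.*(pbest - x) + 2*r2.*(repmat(gbest, nParticles, 1) - x);
    v = max(min(v, Vmax), -Vmax);             % clamp to [-Vmax, Vmax]
    % binary position update via the sigmoid transfer function
    s = 1./(1 + exp(-v));
    x = double(rand(nParticles, nDims) < s);
end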

Related

Peculiar output with LIBLINEAR

I am facing a very peculiar problem with the LIBLINEAR package.
I have two class labels (+1, -1).
Say I have only one feature, which takes values $x_1$, $x_2$, ..., $x_n$ for $n$ points. It classifies well, giving some positive weight $w^*$ and some cost $C$, say.
Now if I stack a constant $1$ onto the previous feature to make new feature vectors $[1\ x_i]$, $i = 1, 2, \dots, n$, LIBLINEAR gives the following for this new problem:
a weight vector $[w_1\ {-w_2}]$ with $w_i > 0$, i.e. the weight given to the constant $1$ is $w_1$ and the weight given to $x$ is $-w_2$;
a cost $C_1$ much greater than the previous cost $C$.
I understand that the new feature (the constant $1$) has no variation at all, and hence its weight should automatically go to zero.
It is a minimization problem, so it should give $w_1 \approx 0$, so that the new cost $C_1$ is at most equal to $C$.
Can anyone help?
Since you have a constant input dimension, its contribution to the decision function will also be constant. LIBLINEAR's decision function is
$f(x) = \mathrm{sign}(w^T x - \rho)$
My guess is that your new model corrects for the extra term (due to the non-zero $w_1$) through $\rho$. I can't say I have a good idea as to why $w_1$ was not minimized to zero, though. Are the predictions of both models equal?
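To spell out the answer's point with the question's notation ($w_1$ on the constant feature, $-w_2$ on $x$): $f([1\ x]) = \mathrm{sign}(w_1 \cdot 1 - w_2 x - \rho) = \mathrm{sign}(-w_2 x - (\rho - w_1))$, so a non-zero $w_1$ acts exactly like a shift of $\rho$, and the two models can make identical predictions even with $w_1 \neq 0$; presumably only the regularization term, not the data fit, pulls $w_1$ toward zero.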

Fitting curves to a set of points

Basically, I have a set of up to 100 coordinates, along with the desired tangents to the curve at the first and last point.
I have looked into various methods of curve fitting, by which I mean an algorithm which takes the input data points and tangents and outputs the equation of the curve, such as the Gaussian method and interpolation, but I really struggled to understand them.
I am not asking for code (if you choose to give it, that's acceptable though :) ); I am simply looking for help with this algorithm. It will eventually be converted to Objective-C for an iPhone app, if that changes anything.
EDIT:
I know the order of all of the points. They are not too close together, so passing through all points is necessary, i.e. interpolation (unless anyone can suggest something else). And as far as I know, an algebraic curve is what I'm looking for. This is all being done on a 2D plane, by the way.
I'd recommend considering cubic splines. There is some explanation and code for computing them in plain C in the Numerical Recipes book (chapter 3.3).
Most interpolation methods fundamentally work with functions: given a set of x and y values, they compute a function which produces a y value for every x value while meeting the specified constraints. As a function can only ever return a single y value for each x value, such a curve cannot loop back on itself.
To turn this into a real 2D setup, you want two functions which compute x and y values, respectively, based on some parameter that is conventionally called t. So the first step is computing t values for your input data. You can usually get a good approximation by summing Euclidean distances: think of a polyline connecting all your points with straight segments. The parameter for each input pair is then the distance along this polyline.
So now you have two interpolation problems: one to compute x from t and the other to compute y from t. You can formulate each as a spline interpolation, e.g. using cubic splines. That gives you a large system of linear equations which you can solve iteratively up to the desired precision.
The result of a spline interpolation will be a piecewise description of a suitable curve. If you want a single equation, a Lagrange interpolation would fit that bill, but the result might have odd twists and turns for many sets of input data.
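To make the parametric approach concrete, here is a small MATLAB sketch (MATLAB simply because it is used elsewhere on this page; the idea ports directly to C or Objective-C). The coordinates px, py are made up for illustration, and the clamped-end-slope variant in the trailing comment is one way the question's end tangents could be imposed.
px = [0; 1; 3; 4; 6];   % made-up example x coordinates
py = [0; 2; 3; 1; 0];   % made-up example y coordinates

% parameter t: cumulative Euclidean distance along the connecting polyline
t = [0; cumsum(sqrt(diff(px).^2 + diff(py).^2))];

% two independent 1-D cubic spline interpolations: x(t) and y(t)
tq = linspace(t(1), t(end), 200);    % query parameters for a smooth curve
xq = spline(t, px, tq);
yq = spline(t, py, tq);

plot(px, py, 'o', xq, yq, '-');      % original points and interpolated curve

% To impose the end tangents from the question, spline() accepts a clamped
% form: passing two extra values uses them as end slopes (here dx/dt), e.g.
% xq = spline(t, [dxdt_start; px; dxdt_end], tq);   % similarly for y(t)
% (dxdt_start and dxdt_end are placeholder names for your tangent data.)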

fmincon : impose vector greater than zero constraint

How do you impose a constraint that all values in a vector you are trying to optimize for are greater than zero, using fmincon()?
According to the documentation, I need parameters A and b, where A*x ≤ b, but I think that if I make A a vector of -1's and b = 0, then I will only have constrained the sum of x to be greater than 0, instead of each element of x being greater than 0.
Just in case you need it, here is my code. I am trying to optimize over a vector (x) such that the (componentwise) product of x and a matrix (called multiplierMatrix) makes a matrix for which the sum of the columns is x.
function [sse] = myfun(x) % this is a nested function
    bigMatrix = repmat(x,1,120) .* multiplierMatrix;
    answer = sum(bigMatrix,1)';
    sse = sum((expectedAnswer - answer).^2);
end
xGuess = ones(120,1);
[sse xVals] = fmincon(@myfun,xGuess,???);
Let me know if I need to explain my problem better. Thanks for your help in advance!
You can use the lower bound:
xGuess = ones(120,1);
lb = zeros(120,1);
[sse xVals] = fmincon(@myfun,xGuess, [],[],[],[], lb);
Note that xVals and sse should probably be swapped (if their names mean anything): fmincon returns the solution first and the objective value second.
The lower bound lb means that elements in your decision variable x will never fall below the corresponding element in lb, which is what you are after here.
The empties ([]) indicate you're not using linear constraints (e.g., A,b, Aeq,beq), only the lower bounds lb.
Some advice: fmincon is a pretty advanced function. You'd better memorize the documentation on it, and play with it for a few hours, using many different example problems.
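Putting it together, here is the corrected call in one place (my sketch, not part of the original answer; the ??? placeholder from the question is filled only with the bound arguments discussed above, and the outputs are in fmincon's actual order):
xGuess = ones(120,1);
lb = zeros(120,1);                      % elementwise lower bound: x >= 0
% fmincon(fun, x0, A, b, Aeq, beq, lb, ub, ...)
[xVals, sse] = fmincon(@myfun, xGuess, [], [], [], [], lb);

% Equivalently (but less directly), elementwise nonnegativity as a linear
% inequality A*x <= b needs A = -eye(120) and b = zeros(120,1); a single
% row of -1's would only constrain the sum, as suspected in the question.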

simple detection algorithm

I'm working on a simple multiplayer game that receives a random 4x4 matrix from a server and extracts a shape from it.
For example, here are two such matrices:
XXOO
XXOX
XOOX
XXXX

OXOO
XXOO
XOOO
OXXX
So in the first matrix the shape I want to parse is this:
oo
o
oo
and the 2nd:
oo
oo
ooo
I know there must be an algorithm for this because I have seen this kind of behavior in some puzzle games, but I have no idea how to go about detecting the shapes, or even where to start.
So my question is: how do I detect what shape is in the matrix, and how do I differentiate between multiple colors? (It doesn't come only in X and O; there can be a maximum of 4 colors.)
Note: the shape must be a minimum of 4 blocks.
It's not fully clear what you want to do, but it seems like you want to extract some sort of shape (based on the second example, where the top-left O is not included). I'd think about traversing the array and, for each cell, checking for neighboring Os. Discount the duplicates you're likely to count (maybe by only looking at the neighbors below and to the right of the cell you're examining). Then check whether the group meets whatever criteria you have for a 'shape'. Maybe if you give more examples or a better description, we can be more accurate. Or give it a shot yourself and post where you get stuck.
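To make the neighbor-checking idea concrete, here is a small MATLAB flood-fill sketch (my illustration, not from the original answer; MATLAB only because it is the language used elsewhere on this page). It labels each connected group of equal non-'X' cells, so it handles several colors, and it drops groups smaller than the 4-block minimum from the question.
board = ['XXOO'; 'XXOX'; 'XOOX'; 'XXXX'];   % the first example grid
[nr, nc] = size(board);
labels = zeros(nr, nc);                     % 0 = background / unvisited
shapeId = 0;
for r = 1:nr
    for c = 1:nc
        if board(r,c) ~= 'X' && labels(r,c) == 0
            shapeId = shapeId + 1;
            labels(r,c) = shapeId;
            stack = [r c];                  % depth-first flood fill
            while ~isempty(stack)
                p = stack(end,:);
                stack(end,:) = [];
                for d = [0 1; 0 -1; 1 0; -1 0]'   % 4-connected neighbors
                    q = p + d';
                    if q(1) >= 1 && q(1) <= nr && q(2) >= 1 && q(2) <= nc ...
                            && labels(q(1),q(2)) == 0 ...
                            && board(q(1),q(2)) == board(r,c)   % same color
                        labels(q(1),q(2)) = shapeId;
                        stack(end+1,:) = q; %#ok<AGROW>
                    end
                end
            end
            if nnz(labels == shapeId) < 4   % question: minimum of 4 blocks
                labels(labels == shapeId) = -1;   % too small: mark discarded
                shapeId = shapeId - 1;
            end
        end
    end
end
% Each value 1..shapeId in labels now marks one connected same-color shape;
% -1 marks groups that were smaller than 4 cells.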
Maybe you can try this:
1. Assign integer values to each position in the matrix, like this:
 1  2  3  4
 5  6  7  8
 9 10 11 12
13 14 15 16
2. Read the positions of the 'O' cells (I guess shapes correspond to 'O's that are horizontally or vertically adjacent) and store them in an array. So for the first case your array will read [3, 4, 7, 11, 10].
3. Then start 'drawing' the shape:
   1. First value 3, so the shape is O.
   2. Next value 4. Check whether it is consecutive to any other value in the array. That value is 3, so the shape is OO.
   3. Next value 7. Is it consecutive? No. Is it 4 more than any other value (i.e. directly below it, since each row has 4 cells)? Yes, 3. So it goes below 3:
      OO
      O
   4. Next 11. As in step 3, it goes below 7:
      OO
      O
      O
   5. Next 10. It is one less than 11, so it is placed before 11:
      OO
      O
      OO
You can do some optimizations: for the consecutive check, only compare against the previous value in the array, and for the vertical check, only look at the previous four values.
Also, don't forget special conditions for boundary values like 4 and 5: they are numerically consecutive but not adjacent in the grid, so I don't think they make a shape.

Optimize MATLAB code (nested for loop to compute similarity matrix)

I am computing a similarity matrix based on Euclidean distance in MATLAB. My code is as follows:
for i=1:N % x is M-by-N (N points in M dimensions), the matrix whose columns I am comparing
    for j=1:N
        D(i,j) = sqrt(sum((x(:,i)-x(:,j)).^2)); % D is the N-by-N similarity matrix
    end
end
Can anyone help with optimizing this, i.e. reducing the for loops? My matrix x is of dimension 256x30000.
Thanks a lot!
--Aditya
The function to do this in MATLAB is called pdist. Unfortunately it is painfully slow and doesn't take advantage of MATLAB's vectorization abilities.
The following is code I wrote for a project. Let me know what kind of speed up you get.
Qx=repmat(dot(x,x,2),1,size(x,1));
D=sqrt(Qx+Qx'-2*x*x');
Note though that this will only work if your data points are in the rows and your dimensions in the columns. So, for example, say I have 256 data points and 100000 dimensions; then on my Mac, using x=rand(256,100000), the above code produces a 256x256 matrix in about half a second.
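If instead the points sit in the columns, as in the question (x is 256x30000 and x(:,i) is one point), the same trick transposes to something like this (my adaptation, untested; note that the resulting 30000x30000 double matrix needs roughly 7 GB of memory on its own):
Qx = repmat(sum(x.^2,1)', 1, size(x,2));   % squared norms of the columns
D  = sqrt(max(Qx + Qx' - 2*(x'*x), 0));    % max(.,0) guards against tiny
                                           % negative values from round-off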
There's probably a better way to do it, but the first thing I noticed was that you could cut the runtime in half by exploiting the symmetry D(i,j)==D(j,i).
You can also use the function norm(x(:,i)-x(:,j),2)
I think this is what you're looking for.
D=zeros(N);
jIndx=repmat(1:N,N,1); iIndx=jIndx';
D(:)=sqrt(sum((x(iIndx(:),:)-x(jIndx(:),:)).^2,2));
Here, I have assumed that the data matrix x is initialized as an NxM array, where M is the number of dimensions of the system and N is the number of points. So if your ordering is different, you'll have to make changes accordingly.
To start with, you are computing twice as much as you need to here, because D will be symmetric. You don't need to calculate the (i,j) entry and the (j,i) entry separately. Change your inner loop to for j=1:i, and add D(j,i)=D(i,j); to the body of that loop.
After that, there's really not much redundancy left in what the code does, so your only remaining room for improvement is to parallelize it: if you have the Parallel Computing Toolbox, convert your outer loop to a parfor and, before you run it, call matlabpool(n), where n is the number of workers to use.
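For what it's worth, a rough parfor sketch (mine, not part of the answer above). Inside a parfor, assigning both D(i,j) and D(j,i) breaks the sliced-variable rules, so this version fills a whole row per iteration: it gives up the symmetry saving in exchange for clean parallelization.
% x is M-by-N with points in the columns, as in the question
N = size(x,2);
D = zeros(N);                 % note: for N = 30000 this alone needs ~7 GB
matlabpool(4);                % open a pool of 4 workers (adjust as needed)
parfor i = 1:N
    % distances from point i to every point; bsxfun expands x(:,i) over columns
    D(i,:) = sqrt(sum(bsxfun(@minus, x, x(:,i)).^2, 1));
end
matlabpool close;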