I am trying to program the following mixed-integer program:
[image: the mathematical formulation of the program]
where both Xo and Xi are meant to be boolean, though Xi was declared as a float since it is the initial condition of the system and CPLEX does not allow boolean variables that are not decision variables.
The indices cc and p are both strings read from the Excel file.
CPLEX consistently reports an error on the C2 declaration, as seen below:
//Standard Variables Initialization
{string} C_PN = ...;
{string} C_CC = ...;
{string} D_PN = ...;
{string} W_CC = ...;
{string} Xi_PN = ...;
{string} Xi_CC = ...;
float u = 0.9; //utilization factor
float C[C_PN][C_CC] = ...;
float D[Xi_PN] = ...;
float Profit[D_PN] = ...;
float Hours[D_PN] = ...;
float W[W_CC] = ...;
float Eff[W_CC] = ...;
int Xi[Xi_PN][Xi_CC]= ...;
float mu[D_PN] = ...;
execute{
for (p in D_PN) mu[p] = Hours[p]/Demand[p] ;//Hours/piece
}
//Decision Variables Initialization
dvar float+ h[Xi_PN][Xi_CC] ; //Produced Parts
dvar boolean Xo[Xi_PN][Xi_CC] ; //flag for pn x cc opening
//Linear optimization problem (linear program)
minimize
sum(p in C_PN, cc in C_CC) 1 + //(D[p] - h[p,cc])*Profit[p] +
sum(p in C_PN, cc in C_CC) 1;//(Xo[p,cc] + Xi[p,cc])*C[p,cc];
//Linear optimization problem (linear program)
subject to{
C1:forall(p in Xi_PN){
sum(cc in Xi_CC){ (Xo[p,cc] + Xi[p,cc])*h[p,cc]} <= D[p] ;} //Demand Constraint
C2:forall(cc in C_CC){
sum(p in C_PN) (Xo[p,cc] + Xi[p,cc])*h[p,cc]*mu[p] <= W[cc]*u*Eff[cc] ;} //Capacity Constraint
C3:forall(p in C_PN, cc in C_CC) Xo[p,cc] + Xi[p,cc] <= 1 ; //Only one tool per CC for each PN
}
As the error message is not helpful for understanding the root cause, would anyone know what could be causing it?
Thanks a lot,
You should change
C1:forall(p in Xi_PN){
sum(cc in Xi_CC){ (Xo[p,cc] + Xi[p,cc])*h[p,cc]} <= D[p] ;} //Demand Constraint
into
C1:forall(p in Xi_PN){
sum(cc in Xi_CC) (Xo[p,cc] + Xi[p,cc])*h[p,cc] <= D[p] ;} //Demand Constraint
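The root cause: in OPL, the body of sum is a plain expression (parentheses are optional); a braced block after sum does not parse. A minimal sketch, with a hypothetical set S and data w:
{string} S = {"a","b"}; // hypothetical set
float w[S] = [1.0, 2.0]; // hypothetical data
float total = sum(i in S) w[i]; // valid: sum applies to an expression
// float bad = sum(i in S) { w[i] }; // syntax error: braces open a block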
The distance matrix in my noOverlap constraint does not seem to affect my model's outcome. I formulated the distance matrix as a tuple set, in two different ways, as can be seen in the code. Both tuple sets seem correct, and the distance matrix is passed to the noOverlap constraint for the dvar sequence.
Nevertheless, I do not see the added transition distance between products in the optimal results: jobs start as soon as the previous job finishes, instead of waiting for the transition time. I would like this transition matrix to hold for both machine 1 and machine 2.
Could someone tell me what I did wrong in my model formulation? I have looked at the examples, but they seem to be constructed in the same way, so I do not know what I am doing wrong.
The model (.mod)
using CP;
// Number of Machines (Packing + Manufacturing)
int nbMachines = ...;
range Machines = 1..nbMachines;
// Number of Jobs
int nbJobs = ...;
range Jobs = 1..nbJobs;
int duration[Jobs,Machines] = ...;
int release = ...;
int due = ...;
tuple Matrix { int job1; int job2; int value; };
//{Matrix} transitionTimes ={<1,1,0>,<1,2,6>,<1,3,2>,<2,1,2>,<2,2,0>,<2,3,1>,<3,1,2>,<3,2,3>,<3,3,0>};
{Matrix} transitionTimes ={ <i,j, ftoi(abs(i-j))> | i in Jobs, j in Jobs };
dvar interval task[j in Jobs] in release..due;
dvar interval opttask[j in Jobs][m in Machines] optional size duration[j][m];
dvar sequence tool[m in Machines] in all(j in Jobs) opttask[j][m];
execute {
cp.param.FailLimit = 5000;
}
// Minimize the max timespan
dexpr int makespan = max(j in Jobs, m in Machines)endOf(opttask[j][m]);
minimize makespan;
subject to {
// Each job needs one unary resource of the alternative set s (28)
forall(j in Jobs){
alternative(task[j], all(m in Machines) opttask[j][m]);
}
forall(m in Machines){
noOverlap(tool[m],transitionTimes);
}
};
execute {
writeln(task);
};
The data (.dat)
nbMachines = 2;
nbJobs = 3;
duration = [
[5,6],
[3,4],
[5,7]
];
release = 1;
due = 30;
You should specify interval types for each sequence.
In your case, the type is the job id:
int JobId[j in Jobs] = j;
dvar sequence tool[m in Machines] in all(j in Jobs) opttask[j][m] types JobId;
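A note on why this matters: each transitionTimes entry is read as <type of the interval before, type of the interval after, minimal distance>, and pairs absent from the tuple set count as distance 0. If no types are given, the interval types generally do not line up with the job ids used in transitionTimes, so the lookups fall back to 0 and no delay is enforced. Putting the pieces together (a sketch; the forall belongs inside the subject to block, as in your model):
int JobId[j in Jobs] = j; // interval type = job id, matching the tuples
dvar sequence tool[m in Machines] in all(j in Jobs) opttask[j][m] types JobId;
forall(m in Machines)
noOverlap(tool[m], transitionTimes); // now finds a <JobId,JobId,distance> entry per pair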
I am trying to add an endBeforeStart constraint to my constraint programming problem. However, I receive an error saying that my endBeforeStart is not an array type. I do not understand this, as I almost copied the constraint and data from the sched_seq example in CPLEX; I only changed it to integers.
What I am trying to accomplish with the constraint is that tasks 3 and 1 are performed before task 2 starts.
How can I fix the array error for this constraint?
Please find below the relevant parts of my code:
tuple Precedence {int pre;int post;};
{Precedence} Precedences = {<3,2>,<1,2>};
dvar interval task[j in Jobs] in release..due;
dvar interval opttask[j in Jobs][m in Machines] optional size duration[j][m];
dvar sequence tool[m in Machines] in all(j in Jobs) opttask[j][m]
dexpr int makespan = max(j in Jobs, m in Machines)(endOf(opttask[j][m]));
minimize makespan;
subject to {
// Each job needs one unary resource of the alternative set s (28)
forall(j in Jobs){
alternative(task[j], all(m in Machines) opttask[j][m]);
}
// No overlap on machines
forall(j in Jobs)
forall(p in Precedences)
endBeforeStart(opttask[j][p.pre],opttask[j][p.post]);
forall(m in Machines){
noOverlap(tool[m],transitionTimes);
}
};
execute {
writeln(task);
}
The data (.dat)
nbMachines = 2;
nbJobs = 3;
duration = [
[5,6],
[4,4],
[5,8]
];
release = 1;
due = 30;
There are several errors in your model, involving ranges and inverted indices.
Also, next time, please post a complete program showing the problem, not just a partial one; this may help you get quicker answers.
A corrected program:
using CP;
int nbMachines = 2;
int nbJobs = 3;
range Machines = 0..nbMachines-1;
range Jobs = 0..nbJobs-1;
int duration[Jobs][Machines] = [
[5,6],
[4,4],
[5,8]
];
int release = 1;
int due = 30;
tuple Precedence {int pre;int post;};
{Precedence} Precedences = {<2,1>,<0,1>};
dvar interval task[j in Jobs] in release..due;
dvar interval opttask[j in Jobs][m in Machines] optional size duration[j][m];
dvar sequence tool[m in Machines] in all(j in Jobs) opttask[j][m];
dexpr int makespan = max(j in Jobs, m in Machines)(endOf(opttask[j][m]));
minimize makespan;
subject to {
// Each job needs one unary resource of the alternative set s (28)
forall(j in Jobs){
alternative(task[j], all(m in Machines) opttask[j][m]);
}
// No overlap on machines
forall(m in Machines)
forall(p in Precedences)
endBeforeStart(opttask[p.pre][m],opttask[p.post][m]);
};
execute {
writeln(task);
}
You must have had values in p.pre or p.post outside of the array indexing range: your original endBeforeStart(opttask[j][p.pre], opttask[j][p.post]) uses the precedence ids as the machine index, which overflows the Machines range.
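As a guard against this class of error, an assert can check each precedence tuple against the job range before solving (a sketch, assuming the 0-based ranges of the corrected program above):
// Fail early if a precedence tuple references a job outside Jobs
assert forall(p in Precedences)
0 <= p.pre && p.pre < nbJobs && 0 <= p.post && p.post < nbJobs;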
I am trying to solve an optimization problem that is very similar to the knapsack problem, but it cannot be solved using dynamic programming.
The problem I want to solve is very similar to this one.
Indeed, you may solve this with CPLEX.
Let me show you how in OPL.
The model (.mod)
{string} categories=...;
{string} groups[categories]=...;
{string} allGroups=union (c in categories) groups[c];
{string} products[allGroups]=...;
{string} allProducts=union (g in allGroups) products[g];
float prices[allProducts]=...;
int Uc[categories]=...;
float Ug[allGroups]=...;
float budget=...;
dvar boolean z[allProducts]; // product out or in ?
dexpr int xg[g in allGroups]=(1<=sum(p in products[g]) z[p]);
dexpr int xc[c in categories]=(1<=sum(g in groups[c]) xg[g]);
maximize
sum(c in categories) Uc[c]*xc[c]+
sum(c in categories) sum(g in groups[c]) Uc[c]*Ug[g]*xg[g];
subject to
{
ctBudget:
sum(p in allProducts) z[p]*prices[p]<=budget;
}
{string} solution={p | p in allProducts : z[p]==1};
execute
{
writeln("solution = ",solution);
}
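The two dexpr lines are the key trick: OPL coerces a comparison such as (1 <= sum(...)) to a 0/1 value, so xg[g] is 1 exactly when at least one product of group g is picked, and xc[c] is 1 when at least one of its groups is open. A tiny standalone sketch of the same coercion, with hypothetical names pick and anyPicked:
dvar boolean pick[1..3];
dexpr int anyPicked = (1 <= sum(i in 1..3) pick[i]); // 1 iff some pick[i] is chosen
maximize anyPicked;
subject to {
ctAtMostOne: sum(i in 1..3) pick[i] <= 1; // allow at most one pick
}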
The data (.dat)
categories={Carbs,Protein,Fat};
groups=[{Meat,Milk},{Pasta,Bread},{Oil,Butter}];
products=[
{Product11,Product12},{Product21,Product22,Product23},
{Product31,Product32},{Product41,Product42},
{Product51},{Product61,Product62}];
prices=[1,4,1,3,2,4,2,1,3,1,2,1];
// User 1
Uc=[1,1,0];
Ug=[0.8,0.2,0.1,1,0.01,0.6];
budget=3;
//User 2
//Uc=[1,1,0];
//Ug=[0.8,0.2,0.1,1,0.01,0.6];
//budget=2;
and this gives
solution = {"Product11" "Product21" "Product41"}
I'm making a bidirectional path tracer and I'm having some trouble.
To be clear:
1) One point light
2) All objects are diffuse
3) All objects are spheres, even walls (they are very large)
4) NO MIS WEIGHTING
The light emission is a 3D vector. The BRDF of a sphere is a 3D vector. Both are hard-coded.
In the main function below I generate the EyePath and the LightPath, then I connect them. At least I try to.
In this post I will talk about the main function, then the EyePath, then the LightPath. The discussion of the connecting function will come once the EyePath and LightPath are good.
First questions:
Is the generation of the first light point correct?
Do I need to compute this point according to the emission of the light source, or is it just the emission? The relevant line is commented where I fill the Vertices structure.
Do I need to translate fromLight in order to put it on the sphere?
The code below is excerpted from the main function. Above it, two for loops iterate over all pixels; camera.o is the eye and cameraRayDir is the direction to the current pixel.
//The path light starting point is at the same position as the light
Ray fromLight(Vec(0, 24.3, 0), Vec());
Sphere light = spheres[7];
#define PDF 0.15915494309 // 1 / (2 * PI)
for(int i = 0; i < samps; ++i)
{
std::vector<Vertices> PathEye;
std::vector<Vertices> PathLight;
Vec cameraRayDir = cx * (double(x) / w - .5) + cy * (double(y) / h - .5) + camera.d;
Ray rayEye(camera.o, cameraRayDir.norm());
// Hemisphere oriented towards the top
fromLight.d = generateRayInHemisphere(fromLight.o,Vec(0,1,0)).d;
double f = clamp(n.dot(fromLight.d.norm()));
Vertices vert;
vert.d = fromLight.d;
vert.x = fromLight.o;
vert.id = 7;
vert.cos = f;
vert.n = Vec(0,1,0).norm();
// this one ?
//vert.couleur = spheres[7].e * f / PDF;
// Or this one ?
vert.couleur = spheres[7].e;
PathLight.push_back(vert);
int sizeEye = generateEyePath(PathEye, rayEye, maxDepth);
int sizeLight = generateLightPath(PathLight, fromLight, maxDepth);
for (int s = 0; s < sizeLight; ++s)
{
for (int t = 1; t < sizeEye; ++t)
{
int depth = t + s - 1;
if ((s == 0 && t == 0) || depth < 0 || depth > maxDepth)
continue;
pixelValue = pixelValue + connectPaths(PathEye, PathLight, s, t);
}
}
}
For the EyePath I intersect the geometry, then I compute the illumination according to the distance to the light. The colour is black if the point is in shadow.
Second question: for the eye path and the direct illumination, is the computation correct? I have seen in many implementations that people use the pdf even for direct illumination, but I am only using a point light and spheres.
int generateEyePath(std::vector<Vertices>& v, Ray eye, int maxDepth)
{
double t;
int id = 0;
Vertices vert;
int RussianRoulette;
while(v.size() <= maxDepth)
{
if(distribRREye(generatorRREye) < 10)
break;
// Intersect all the geometry
// id is the id of the intersected geometry in an array
intersect(eye, t, id);
const Sphere& obj = spheres[id];
// Intersection point
Vec x = eye.o + eye.d * t;
// normal
Vec n = (x - obj.p).norm();
Vec direction = light.p - x;
// Shadow ray
Ray RaytoLight = Ray(x, direction.norm());
const float distance = direction.length();
// shadow
const bool visibility = intersect(RaytoLight, t, id);
const Sphere &lumiere = spheres[id];
float degree = clamp(n.dot((lumiere.p - x).norm()));
// If the intersected geometry is not a light, then in shadow
if(lumiere.e.x == 0)
{
vert.couleur = Vec();
}
else // else we compute the colour
// obj.c is the brdf, lumiere.e is the emission
vert.couleur = (obj.c).mult(lumiere.e / (distance * distance)) * degree;
vert.x = x;
vert.id = id;
vert.n = n;
vert.d = eye.d.norm();
vert.cos = degree;
v.push_back(vert);
eye = generateRayInHemisphere(x,n);
}
return v.size();
}
For the LightPath, I compute each point from the previous one and the values at that point, as in ordinary path tracing.
Third question: is the colour computation correct?
int generateLightPath(std::vector<Vertices>& v, Ray fromLight, int maxDepth)
{
double t;
int id = 0;
Vertices vert;
Vec previous;
while(v.size() <= maxDepth)
{
if(distribRRLight(generatorRRLight) < 10)
break;
previous = v.back().couleur;
intersect(fromLight, t, id);
// intersected geometry
const Sphere& obj = spheres[id];
// Intersection point
Vec x = fromLight.o + fromLight.d * t;
// normal
Vec n = (x - obj.p).norm();
double f = clamp(n.dot(fromLight.d.norm()));
// obj.c is the brdf
vert.couleur = previous.mult(((obj.c / M_PI) * f) / PDF);
vert.x = x;
vert.id = id;
vert.n = n;
vert.d = fromLight.d.norm();
vert.cos = f;
v.push_back(vert);
fromLight = generateRayInHemisphere(x,n);
}
return v.size();
}
For the moment I get this result.
[image: current render result]
The connecting function will come once EyePath and LightPath are good.
Thank you all
Try the spherical reference scene mentioned in this paper; I think you can then work out most of your questions by yourself, since it has an analytical solution.
https://www.researchgate.net/publication/221546261_Testing_Monte-Carlo_Global_Illumination_Methods_with_Analytically_Computable_Scenes
It would save you time to implement and verify your understanding with path tracing and light tracing separately first, and then try to combine them with weights.
I am working with the Kinect for my research project. I have previously calculated joint angles and joint coordinates from the Kinect, and I would now like to calculate the center of mass of the body being tracked.
Any idea would be appreciated, and code snippets would be immensely helpful.
I owe a lot to Stack Overflow; without the community's help it would not have been possible to do such a thing.
Thanks in advance.
Please find below the code where I want to include this center of mass function; this function tracks the skeleton.
Skeleton GetFirstSkeleton(AllFramesReadyEventArgs e)
{
using (SkeletonFrame skeletonFrameData = e.OpenSkeletonFrame())
{
if (skeletonFrameData == null)
{
return null;
}
skeletonFrameData.CopySkeletonDataTo(allSkeletons);
//get the first tracked skeleton
Skeleton first = (from s in allSkeletons
where s.TrackingState == SkeletonTrackingState.Tracked
select s).FirstOrDefault();
return first;
}
}
I have tried using the code below in mine, but it is not fitting in; can anyone please help me include the center of mass code?
foreach (SkeletonData data in skeletonFrame.Skeletons) {
SkeletonFrame allskeleton = e.SkeletonFrame;
// Count passive and active person up to six in the group
int numberOfSkeletonsT = (from s in allskeleton.Skeletons
where s.TrackingState == SkeletonTrackingState.Tracked select s).Count();
int numberOfSkeletonsP = (from s in allskeleton.Skeletons
where s.TrackingState == SkeletonTrackingState.PositionOnly select s).Count();
// Count passive and active person up to six in the group
int totalSkeletons = numberOfSkeletonsP + numberOfSkeletonsT;
//Console.WriteLine("TotalSkeletons = " + totalSkeletons);
//======================================================
if (data.TrackingState == SkeletonTrackingState.PositionOnly)
{
foreach (Joint joint in data.Joints)
{
if (joint.Position.Z != 0)
{
double centerofmassX = com.Position.X;
double centerofmassY = com.Position.Y;
double centerofmassZ = com.Position.Z;
Console.WriteLine( centerofmassX + centerofmassY + centerofmassZ );
}
}
}
}
See a couple of resources here:
http://mathwiki.ucdavis.edu/Calculus/Vector_Calculus/Multiple_Integrals/Moments_and_Centers_of_Mass#Three-Dimensional_Solids
http://www.slideshare.net/GillianWinters/center-of-mass-presentation
http://en.wikipedia.org/wiki/Locating_the_center_of_mass
Basically, no matter what, you are going to need the mass of your user. This can be a simple input; then you can determine how much weight the person puts on each foot and use the equations described in all of these sources. Another option may be to use plumb lines on a planar 2D representation of the user; however, that won't be an accurate 3D center of mass.
Here is an example of how to find the amount of mass on each foot, using the equation found at http://www.vitutor.com/geometry/distance/line_plane.html:
Vector3 v = new Vector3(skeleton.Joints[JointType.Head].Position.X, skeleton.Joints[JointType.Head].Position.Y, skeleton.Joints[JointType.Head].Position.Z);
double mass = 80.0; // total mass of the user, the "simple input" mentioned above (example value)
double leftM, rightM;
double A = sFrame.FloorClipPlane.X,
B = sFrame.FloorClipPlane.Y,
C = sFrame.FloorClipPlane.Z;
// angle between the head vector and the floor plane, converted to degrees
double angle = Math.Asin(Math.Abs(A * v.X + B * v.Y + C * v.Z) / (Math.Sqrt(A * A + B * B + C * C) * Math.Sqrt(v.X * v.X + v.Y * v.Y + v.Z * v.Z))) * 180.0 / Math.PI;
if (angle == 90.0)
{
leftM = mass / 2.0;
rightM = mass / 2.0;
}
else
{
double distanceFrom90 = 90.0 - angle;
if (distanceFrom90 > 0)
{
double leftMultiple = distanceFrom90 / 90.0;
leftM = mass * leftMultiple;
rightM = mass - leftM;
}
else
{
double rightMultiple = distanceFrom90 / 90.0;
rightM = rightMultiple * mass;
leftM = mass - rightM;
}
}
This is of course assuming that the user is standing on both feet, but you could modify the code to create a new plane based on the user's feet instead of the one automatically generated by the Kinect.
For the code to then find the center of mass, you have to choose a datum. I would choose the head, as that is the top of the person and you can measure down from it easily. Using the steps found here:
double distanceFromDatumLeft = Math.Sqrt(Math.Pow(headX - footLeftX, 2) + Math.Pow(headY - footLeftY, 2) + Math.Pow(headZ - footLeftZ, 2));
double distanceFromDatumRight = Math.Sqrt(Math.Pow(headX - footRightX, 2) + Math.Pow(headY - footRightY, 2) + Math.Pow(headZ - footRightZ, 2));
double momentLeft = distanceFromDatumLeft * leftM;
double momentRight = distanceFromDatumRight * rightM;
double momentSum = momentLeft + momentRight;
//measured in units from the datum
double centerOfGravity = momentSum / mass;
You can then of course show this on the screen by plotting a point centerOfGravity units below the head.