Metapost Equations - metapost

I was given a homework assignment in one of my courses that asked us to google the MetaPost language and find the use for its equation-solving feature.
After going through the first dozen or so pages of the MetaPost user manual, the only justification I found is that it
"allows many programs to be written in a largely declarative style."
Besides making programs more "declarative" (which, as I understand it, means we tell the language what to do rather than how to do it), I can't think of any other reason why equation solving is useful.
Can anyone help me out?

Here is an illustration of how solving equations in MetaPost — and declarative programming, for that matter — might be useful.
Suppose we want to draw a die:
To do that, let us first define a macro which will draw a single face of the die: a square with number s on it.
def face (expr s) = image (begingroup
pickup pencircle scaled 1pt;
draw (0.5, 0.5) -- (0.5, 9.5) -- (9.5, 9.5) -- (9.5, 0.5) -- cycle;
label (s, (5, 5));
endgroup) scaled 10 enddef;
Now we can draw it and get the picture:
draw face ("1");
Next, we need an upper face and a right face. To draw them, we will have to compose an affine transformation to skew them. This can be tricky, since the only readily available primitive transformation for skewing is slanted a, which transforms a point (x, y) into (x + ay, y). Here is our picture slanted 1:
draw face ("2") slanted 1;
We will then (or rather, before that) have to scale by one of the coordinates:
draw face ("2") yscaled 0.35 slanted 1;
The same approach does not work for the third face right away:
draw face ("3") xscaled 0.35 slanted 1;
After a bit of experimentation, we find the right code:
draw face ("3") rotated 90 yscaled 0.35 slanted -1 rotated -90;
But why all the tedium? We know exactly what transformation we need. One natural way to express it is by using primitives. But if that proves unintuitive, as our last line was, it may be more comfortable to just specify which points of the plane transform to which.
transform t;
(0, 0) transformed t = (0, 0);
(0, 1) transformed t = (0, 1);
(1, 0) transformed t = (0.35, 0.35);
draw face ("3") transformed t;
This basically tells MetaPost: there is a transform t under which the three points we specified move to the other three points we specified. It turns out this uniquely determines a plane transformation, and we get the same picture:
Putting all that together (the code is in beginfig (7) at the end of the post) finally lets us see our die:
In this simple case, the “coordinates and equations” approach is comparable in difficulty to the “primitive transformations” approach. Now, imagine we wanted a slight tilt for our cube. With the same declarative approach, it would still be possible without invoking three-dimensional geometry (the code is in beginfig (8) at the end of the post):
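Outside MetaPost, the same idea can be reproduced with any linear solver: three point correspondences give six linear equations in the six unknowns of an affine transform. A minimal pure-Python sketch (solve_affine, src, dst are illustrative names, not MetaPost API):

```python
def solve_affine(src, dst):
    """Solve for the affine transform mapping src[i] to dst[i]:
    x' = a*x + c*y + e,  y' = b*x + d*y + f
    (six unknowns, six equations from three point correspondences)."""
    M, rhs = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        M.append([x, y, 1, 0, 0, 0]); rhs.append(xp)
        M.append([0, 0, 0, x, y, 1]); rhs.append(yp)
    n = 6
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            if M[r][col]:
                k = M[r][col] / M[col][col]
                M[r] = [mr - k * mc for mr, mc in zip(M[r], M[col])]
                rhs[r] -= k * rhs[col]
    # back substitution
    sol = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = rhs[r] - sum(M[r][c] * sol[c] for c in range(r + 1, n))
        sol[r] = s / M[r][r]
    a, c, e, b, d, f = sol
    return lambda x, y: (a * x + c * y + e, b * x + d * y + f)
```

Feeding it the three correspondences from beginfig (6) recovers the same transform MetaPost deduces for the third face.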
The complete program is below.
prologues := 3;
def face (expr s) = image (begingroup
pickup pencircle scaled 1pt;
draw (0.5, 0.5) -- (0.5, 9.5) -- (9.5, 9.5) -- (9.5, 0.5) -- cycle;
label (s, (5, 5));
endgroup) scaled 10 enddef;
beginfig (1)
draw face ("1");
endfig;
beginfig (2)
draw face ("2") slanted 1;
endfig;
beginfig (3)
draw face ("2") yscaled 0.35 slanted 1;
endfig;
beginfig (4)
draw face ("3") xscaled 0.35 slanted 1;
endfig;
beginfig (5)
draw face ("3") rotated 90 yscaled 0.35 slanted -1 rotated -90;
endfig;
beginfig (6)
transform t;
(0, 0) transformed t = (0, 0);
(0, 1) transformed t = (0, 1);
(1, 0) transformed t = (0.35, 0.35);
draw face ("3") transformed t;
endfig;
beginfig (7)
transform t [];
draw face ("1");
(0, 0) transformed t[1] = (0, 0);
(0, 1) transformed t[1] = (0.35, 0.35);
(1, 0) transformed t[1] = (1, 0);
draw face ("2") transformed t[1] shifted (0, 100);
(0, 0) transformed t[2] = (0, 0);
(0, 1) transformed t[2] = (0, 1);
(1, 0) transformed t[2] = (0.35, 0.35);
draw face ("3") transformed t[2] shifted (100, 0);
endfig;
beginfig (8)
transform t [];
pair Ox, Oy, Oz;
Ox = (0.86, -0.21);
Oy = (0.21, 0.86);
Oz = (0.29, 0.44);
(0, 0) transformed t[1] = (0, 0);
(1, 0) transformed t[1] = Ox;
(0, 1) transformed t[1] = Oy;
draw face ("4") transformed t[1];
(0, 0) transformed t[2] = (0, 0);
(1, 0) transformed t[2] = Ox;
(0, 1) transformed t[2] = Oz;
draw face ("5") transformed t[2] shifted (Oy scaled 100);
(0, 0) transformed t[3] = (0, 0);
(1, 0) transformed t[3] = Oz;
(0, 1) transformed t[3] = Oy;
draw face ("6") transformed t[3] shifted (Ox scaled 100);
endfig;
end

Related

Obtaining a quadrilateral from four points then splitting it into two triangles

Given 4 unordered points, how do I obtain two triangles from those points WITHOUT forming an hourglass shape or having the triangles overlap? Convex quadrilaterals are fine, but I'd prefer a method that would remove the point near the center bounded by the other points within a single triangle. I have a semi-working solution, but it isn't pretty. I have previously tried Delaunay triangulation, forming 4 triangles via a center point and moving around it radially adding points to create triangles, amongst other methods. I cannot seem to find any information on this topic besides splitting triangles.
So this is what I did, and it seems to work well. For the first triangle, take the first 3 points and make that a triangle. Then store the midsection (midpoint) of each edge. The midsection closest to the fourth point lies on the edge whose two endpoints, together with the fourth point, make your second triangle.
pseudo code
func getMidsection(Point a, Point b) -> Point
{
Point midsection = Point(
(a.x + b.x) / 2,
(a.y + b.y) / 2,
(a.z + b.z) / 2
);
return midsection;
}
func getTrianglesFromPoints(Point[4] points) -> Triangle[2]
{
// define first triangle as first 3 points
Triangle tri1 = Triangle(points[0], points[1], points[2]);
Point[3] midsections;
float recordDist = -1;
int closestMidsection;
// loop through each edge in the first triangle
for(i : [0, 3) )
{
// get the midsection using the current point and next point in the first triangle
midsections[i] = getMidsection(points[i], points[(i+1)%3]);
// if the 4th point's distance to the midsection is smaller than past values, set the smallest dist to that point
if(dist(points[3], midsections[i]) < recordDist or recordDist == -1)
{
recordDist = dist(points[3], midsections[i]);
closestMidsection = i;
}
}
// define triangle2 from the closest midsection
Triangle tri2 = Triangle(points[closestMidsection], points[(closestMidsection + 1) % 3], points[3]);
// return the triangles
return [tri1, tri2];
}
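For reference, the same heuristic as runnable Python (2-D points; split_quad is a hypothetical name):

```python
def split_quad(points):
    """Split 4 unordered 2-D points into two triangles using the
    closest-midsection rule described above."""
    def mid(a, b):
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    # first triangle: the first three points
    tri1 = (points[0], points[1], points[2])
    # edge of tri1 whose midsection lies closest to the fourth point
    i = min(range(3),
            key=lambda k: dist2(points[3], mid(points[k], points[(k + 1) % 3])))
    tri2 = (points[i], points[(i + 1) % 3], points[3])
    return tri1, tri2
```

For the unit square given as (0,0), (1,0), (0,1), (1,1), the shared edge is (1,0)-(0,1), so no hourglass is produced.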

Drawing triangles on a slope

I am writing an objective-c method that draws a series of triangles on a slope. In order to complete this, I need to calculate the vertex point of each triangle (C,D). The position starting and ending points are variable.
This seems like it should be an easy math problem. But so far I haven't been able to work it out on paper. Can anyone point me in the right direction?
No trigonometry involved.
Let D = sqrt(X12^2 + Y12^2) be the Euclidean distance between P1 and P2 (X12 = X2 - X1 and Y12 = Y2 - Y1), and let p = P/D, a = A/D.
If P1P2 were the line segment (0, 0)-(1, 0), the vertices would be at (0, 0), (p/2, a), (p, 0), (3p/2, a), (2p, 0)...
The transform below scales and rotates the segment (0, 0)-(1, 0) onto P1P2:
X = X1 + X12*x - Y12*y
Y = Y1 + Y12*x + X12*y
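In code, the no-trig transform is one line per point; a sketch (map_to_segment is an illustrative name):

```python
def map_to_segment(p1, p2, x, y):
    """Map (x, y), given in a frame where p1p2 is the segment (0,0)-(1,0),
    onto the actual p1p2. The map scales by |p1p2|, hence the p = P/D,
    a = A/D normalization above."""
    x12, y12 = p2[0] - p1[0], p2[1] - p1[1]
    # rotation + scaling by the vector (x12, y12), then shift by p1
    return (p1[0] + x12 * x - y12 * y, p1[1] + y12 * x + x12 * y)
```

By construction (1, 0) lands exactly on P2, whatever the slope.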
Set triangle at origin horizontally:
(0, 0), (p, 0), (p/2, a)
Rotate to get needed slope alpha:
(0, 0), (p*cos(alpha), p*sin(alpha)), (p/2 * cos(alpha) - a * sin(alpha), p/2 * sin(alpha) + a*cos(alpha))
Shift by adding (x1, y1) to all of the coordinates.
The third coordinate is your vertex:
(Cx, Cy) = (p/2 * cos(alpha) - a * sin(alpha) + x1, p/2 * sin(alpha) + a*cos(alpha) + y1)
To find other vertices use the fact that they are shifted by p from each other, under the angle alpha:
(Cx_i, Cy_i) = (Cx, Cy) + i*(p * cos(alpha), p * sin(alpha))
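The set-rotate-shift steps above can be folded into one helper (triangle_apexes is a hypothetical name; base and height play the roles of p and a as actual lengths):

```python
import math

def triangle_apexes(p1, p2, base, height, count):
    """Apex points C_i of `count` triangles of the given base and height
    laid along the line from p1 toward p2."""
    alpha = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    ca, sa = math.cos(alpha), math.sin(alpha)
    # first apex: rotate (base/2, height) by alpha, then shift by p1
    cx = base / 2 * ca - height * sa + p1[0]
    cy = base / 2 * sa + height * ca + p1[1]
    # each subsequent apex is shifted by `base` along the slope
    return [(cx + i * base * ca, cy + i * base * sa) for i in range(count)]
```

On a horizontal "slope" from (0, 0) to (10, 0) with base 2 and height 1, the apexes come out at (1, 1), (3, 1), (5, 1), ...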

Getting all points in vector (between two points)

Let's say I have 2 points, (x1,y1) and (x2,y2), and I can draw a vector from point (x1,y1) to point (x2,y2). How can I get all the points between them at, for example, every 10 pixels?
Simple visualization:
The vector from point A to point B is B - A = (x2 - x1, y2 - y1).
If you normalize that vector and multiply it by the factor you want (you want a distance of 10 px, so your factor is 10), you can get all the points by repeatedly adding it to the current point (which is initially A) until you reach the end point B.
You can take a smaller stepVector and add it step by step.
PseudoCode:
stepVector = yourVector / 10
Point1 = basePoint + stepVector
Point2 = Point1 + stepVector
...
or something like
stepVector = yourVector / 10
Point1 = basePoint + stepVector
Point2 = basePoint + (stepVector * 2)
Point3 = basePoint + (stepVector * 3)
...
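The normalize-and-step idea above can be sketched as follows (points_between and spacing are illustrative names; assumes the two points differ):

```python
import math

def points_between(a, b, spacing):
    """Points stepping from a toward b every `spacing` units.
    Includes b itself when the distance is an exact multiple."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    dist = math.hypot(dx, dy)
    ux, uy = dx / dist, dy / dist          # unit direction vector
    n = int(dist // spacing)               # whole steps that fit
    return [(a[0] + ux * spacing * i, a[1] + uy * spacing * i)
            for i in range(1, n + 1)]
```

For a 50 px vertical segment with 10 px spacing, this yields five evenly spaced points.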

Error with GL_TRIANGLE_STRIP and vertex array

I have this method that prepares the coordinates in the posCoords array. It works properly about 30% of the time; the other 70%, the first few triangles are messed up in the grid.
The entire grid is drawn using GL_TRIANGLE_STRIP.
I'm pulling my hair out trying to figure out what's wrong. Any ideas?
if(!ES2) {
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
}
int cols = floor(SCREEN_WIDTH/blockSize);
int rows = floor(SCREEN_HEIGHT/blockSize);
int cells = cols*rows;
NSLog(@"Cells: %i", cells);
coordCount = /*Points per coordinate*/2 * /*Coordinates per cell*/ 2 * cells + /* additional coord per row */2*2*rows;
NSLog(@"Coord count: %i", coordCount);
if(texCoords) free(texCoords);
if(posCoords) free(posCoords);
if(dposCoords) free(dposCoords);
texCoords = malloc(sizeof(GLfloat)*coordCount);
posCoords = malloc(sizeof(GLfloat)*coordCount);
dposCoords = malloc(sizeof(GLfloat)*coordCount);
int index = 0;
float lowY, hiY = 0;
int x,y = 0;
BOOL drawLeftToRight = YES;
for(y=0;y<SCREEN_HEIGHT;y+=blockSize) {
lowY = y;
hiY = y + blockSize;
// Draw a single row
for(x=0;x<=SCREEN_WIDTH;x+=blockSize) {
CGFloat px,py,px2,py2 = 0;
// Top point of triangle
if(drawLeftToRight) {
px = x;
py = lowY;
// Bottom point of triangle
px2 = x;
py2 = hiY;
}
else {
px = SCREEN_WIDTH-x;
py = lowY;
// Bottom point of triangle
px2 = SCREEN_WIDTH-x;
py2 = hiY;
}
// Top point of triangle
posCoords[index] = px;
posCoords[index+1] = py;
// Bottom point of triangle
posCoords[index+2] = px2;
posCoords[index+3] = py2;
texCoords[index] = px/SCREEN_WIDTH;
texCoords[index+1] = py/SCREEN_HEIGHT;
texCoords[index+2] = px2/SCREEN_WIDTH;
texCoords[index+3] = py2/SCREEN_HEIGHT;
index+=4;
}
drawLeftToRight = !drawLeftToRight;
}
With a triangle strip, the last vertex you add replaces the oldest vertex used, so you're using bad vertices along the edge. It's easier to explain with your drawing.
Triangle 1 uses vertices 1, 2, 3 - valid triangle
Triangle 2 uses vertices 2, 3, 4 - valid triangle
Triangle 3 uses vertices 4, 5, 6 - valid triangle
Triangle 4 uses vertices 5, 6, 7 - straight line, nothing will be drawn
Triangle 5 uses vertices 6, 7, 8 - valid
etc.
If you want your strips to work, you'll need to pad them with degenerate triangles or break them up into one strip per row.
I tend to draw left to right, add a degenerate triangle at the end of the row, then draw the next row left to right again.
e.g. [1, 2, 3, 4, 5, 6; 6, 10; 10, 11, 8, 9, 6, 7]
The middle part forms degenerate triangles (i.e. triangles of zero area).
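One way to generate such an index list for a row-major grid of vertices, padding each row seam with repeated indices (grid_strip_indices is a hypothetical helper, not from the question's code):

```python
def grid_strip_indices(cols, rows):
    """Index list drawing a (cols x rows) row-major vertex grid as one
    GL_TRIANGLE_STRIP, always left to right, with two repeated indices
    between rows (zero-area triangles keep the strip connected)."""
    idx = []
    for r in range(rows - 1):
        if r > 0:
            # repeat the previous index and this row's first index:
            # the resulting degenerate triangles are invisible
            idx += [idx[-1], r * cols]
        for c in range(cols):
            idx += [r * cols + c, (r + 1) * cols + c]
    return idx
```

Adding exactly two indices per seam also preserves the strip's winding parity, so face culling stays correct.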
Also, if I had to take a guess at why you are seeing various kinds of corruption, I'd check to make sure that your vertices and indices are exactly what you expect them to be - normally you see that kind of corruption when you don't specify indices correctly.
Found the issue: the texture buffer was overflowing into the vertex buffer. It appeared random because some background tasks were shuffling memory around on a timer (sometimes).

Discrete Wavelet Transform on images and watermark embedding in LL band coefficients, data is lost when IDWT-DWT is performed again?

I'm writing an image watermarking system to hide a watermark in an image's low frequency band by transforming the image's luminance channel with a Discrete Wavelet Transform, then modifying coefficients in the LL band of the DWT output. I then do an Inverse DWT and rebuild my image.
The problem I'm having is when I modify coefficients in the DWT output, then inverse-DWT, and then DWT again, the modified coefficients are radically different.
For example, one of the output coefficients in the LL band of the 2-scale DWT was -0.10704, I modified this coefficient to be 16.89, then performed the IDWT on my data. I then took the output of the IDWT and performed a DWT on it again, and my coefficient which was modified to be 16.89 became 0.022.
I'm fairly certain that the DWT and IDWT code is correct because I've tested it against other libraries and the output from each transform matches when the filter coefficients and other parameters are the same. (Within what can be expected due to rounding error)
The main problem I have is that I perhaps don't understand the DWT all that well, I thought DWT and IDWT were supposed to be reasonably lossless (Aside from rounding error and such), yet this doesn't seem to be the case here.
I'm hoping someone more familiar with the transform can point me at a possible issue, is it possible that because the coefficients in my other subbands (LH, HL, HH) for that position are insignificant I'm losing data? If so, how can I determine which coefficients this may happen to?
My embedding function is below, coefficients are chosen in the LL band, "strong" is determined to be true if the absolute value of the LH, HH, or HL band for the selected location is larger than the mean value of the corresponding subband.
//If this evaluates to true, then the texture is considered strong.
if ((Math.Abs(LH[i][w]) >= LHmean) || (Math.Abs(HL[i][w]) >= HLmean) || (Math.Abs(HH[i][w]) >= HHmean))
static double MarkCoeff(int index, double coeff,bool strong)
{
int q1 = 16;
int q2 = 8;
int quantizestep = 0;
byte watermarkbit = binaryWM[index];
if(strong)
quantizestep = q1;
else
quantizestep = q2;
coeff /= (double)quantizestep;
double coeffdiff = coeff - (int)coeff; // signed fractional part, valid for either sign
if (1 == ((int)coeff % 2))
{
//odd
if (watermarkbit == 0)
{
if (Math.Abs(coeffdiff) > 0.5)
coeff += 1.0;
else
coeff -= 1.0;
}
}
else
{
//even
if (watermarkbit == 1)
{
if (Math.Abs(coeffdiff) > 0.5)
coeff += 1.0;
else
coeff -= 1.0;
}
}
coeff *= (double)quantizestep;
return coeff;
}
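On the "DWT should be lossless" point: with an orthogonal filter pair, the analysis/synthesis round trip is exact up to floating-point error, even after coefficients are modified. One common culprit for large changes is rounding or clamping the reconstructed image to integer pixel values between the IDWT and the second DWT. A single-level 1-D Haar sketch (not the asker's filter bank) demonstrates the exact round trip:

```python
def haar_dwt(x):
    """Single-level orthonormal Haar analysis: returns (approx, detail).
    Assumes len(x) is even."""
    s = 0.5 ** 0.5
    pairs = list(zip(x[0::2], x[1::2]))
    return ([s * (a + b) for a, b in pairs],
            [s * (a - b) for a, b in pairs])

def haar_idwt(approx, detail):
    """Inverse of haar_dwt: perfect reconstruction up to float rounding."""
    s = 0.5 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out += [s * (a + d), s * (a - d)]
    return out
```

Modifying an approximation coefficient, reconstructing, and transforming again returns the modified value to within rounding error, which suggests the 16.89 to 0.022 jump comes from a step between the transforms rather than the transforms themselves.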