Without getting into too many details: I'm getting parameters (x1, x2, y1, y2, a, b, α) from a web tool, and I need to use the Zend_PDF library to generate a PDF document containing a green image rotated and positioned precisely at those coordinates.
Now, what confuses me is that Zend does not allow elements to be rotated; instead, it rotates the page. So I assume the rotation needs to be done like this:
$page->rotate($x1 + ($x2 - $x1) / 2, $y1 + ($y2 - $y1) / 2, - deg2rad($rotation));
because we want the center of the image to be the rotation point, and we rotate in the opposite direction so that the resulting image ends up with the proper rotation.
The tricky part I'm having trouble with is drawing it. With the simple call
$page->drawImage($image, $x1, $y1, $x2, $y2);
I'm getting the result displayed in the diagram. The resulting image needs to be translated as well, since (x1,y1) and (x2,y2) are no longer the exact coordinates, but I'm not sure how to calculate the new ones. Any ideas?
The OP confirmed in a comment that he used the same values for (x1,y1) and (x2,y2) in his rotate and drawImage calls. It is pretty obvious from his sketches, though, that the coordinates for the latter call must differ.
Fortunately, we know from the way the green rectangle is inscribed in the rectangle from (x1,y1) to (x2,y2) that it has the same center point as that rectangle. Furthermore, we have the dimensions a and b of the green rectangle.
Thus, the drawImage parameters must be changed to:
$page->drawImage($image, $x1 + ($x2 - $x1) / 2 - $a / 2
, $y1 + ($y2 - $y1) / 2 - $b / 2
, $x1 + ($x2 - $x1) / 2 + $a / 2
, $y1 + ($y2 - $y1) / 2 + $b / 2);
Related
I am developing a GPS waypoint application. I have started by drawing my compass, but I am finding it difficult to place the degree text around the circle. Can anyone help me with a solution? The compass image I am working on shows the circle of the compass I have drawn.
The second image shows what I want to achieve: degree text around the compass.
Assuming you're doing this in a custom view, you need to use one of the drawText methods on the Canvas passed into onDraw.
You'll have to do a little trigonometry to get the x, y position of each label: if r is the radius of the circle you're placing the text origins on (i.e. how far out from the centre they are), and you're placing one at angle θ, then:
x = r * cosθ
y = r * sinθ
The sin and cos functions take a value in radians, so you'll have to convert that if you're using degrees:
val radians = (degrees.toDouble() / 360.0) * (2.0 * Math.PI) // equivalent to Math.toRadians(degrees.toDouble())
Also, 0 degrees is at 3 o'clock on the circle, not 12, so you'll have to subtract 90 degrees from your usual compass positions (e.g. 90 degrees on the compass is 0 degrees in the local coordinates). The negative values you get are fine; -90 is the same as 270. If you're trying to replicate the image you posted (where the numbers and everything else rotate while the needle stays at the top), you'll have to apply an angle offset anyway!
These x and y values are distances from the centre of the circle, which probably needs to be the centre of your view (which you've probably already calculated to draw your circle). You'll also need to account for the extra space needed to draw the labels, scaling everything so it all fits in the View.
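A minimal sketch of how the labels could be drawn, assuming Kotlin in a custom Android View; CompassView, the 30-degree step, and the Paint settings are illustrative choices rather than anything from the original question:

import android.content.Context
import android.graphics.Canvas
import android.graphics.Paint
import android.view.View

class CompassView(context: Context) : View(context) {

    private val textPaint = Paint(Paint.ANTI_ALIAS_FLAG).apply {
        textSize = 32f
        textAlign = Paint.Align.CENTER
    }

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        val cx = width / 2f
        val cy = height / 2f
        // Keep the label circle inside the view, leaving room for the text
        val r = minOf(cx, cy) - textPaint.textSize * 1.5f

        // One label every 30 compass degrees
        for (deg in 0 until 360 step 30) {
            // Shift by -90 so 0 degrees sits at 12 o'clock instead of 3 o'clock
            val radians = Math.toRadians((deg - 90).toDouble())
            val x = cx + r * Math.cos(radians).toFloat()
            val y = cy + r * Math.sin(radians).toFloat()
            // drawText places the text baseline at y; nudge down a little
            // to visually centre each label on the circle
            canvas.drawText(deg.toString(), x, y + textPaint.textSize / 3f, textPaint)
        }
    }
}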
I have a lot of tutorials and books, but I'm unable to understand how my viewport, my near and far distances, etc. are used to calculate the perspective / frustum matrix.
I have the learningwebgl lessons, but I don't understand what viewport and 3D-space adjustments are made. What is my initial window projection size? Why do I see the triangle and square placed at z = -7?
Another thing I don't understand: does a near plane of 0.001 create the projection window just in front of my nose? So what are my projection window's dimensions?
I need help with the very basics.
Can anybody help me? Some really useful links would be appreciated; I need graphical examples showing and teaching how the frustum is calculated.
Thanks
There's this
http://games.greggman.com/game/webgl-3d-perspective/
Imagine you're in 2D. You have a canvas that's 200x100 pixels. If you draw at x = 201 it will be off the canvas. Similarly at x = -1 it will be off the canvas.
In WebGL you work in a 3D space that goes from -1 to +1 in x, y, and z. The perspective / frustum matrix is the matrix that takes your 3D scene and converts it to this -1/+1 space. The near and far values define which range of distances in front of the camera gets converted to the -1/+1 "clip space". Anything outside that range gets clipped, just like in the 2D example: if you set near to 10 and far to 100, then something at Z = 9 is clipped because it's too near, and something at Z = 101 is clipped because it's too far. More specifically, the near and far settings form a matrix such that a point at Z = near becomes -1 when multiplied by the matrix, and a point at Z = far becomes +1.
The viewport setting tells WebGL how to convert from the -1 to +1 space back into pixels.
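To make the near/far mapping concrete, here is a sketch of the standard perspective projection applied to a single point (written in Kotlin, since the maths is language-neutral); the fovY, aspect, and sample values are arbitrary:

// A standard OpenGL/WebGL-style perspective projection, written out as
// the operation the matrix performs on one point (x, y, z) in eye space.
// fovY is the vertical field of view in radians; aspect = width / height.
fun project(x: Double, y: Double, z: Double,
            fovY: Double, aspect: Double,
            near: Double, far: Double): Triple<Double, Double, Double> {
    val f = 1.0 / Math.tan(fovY / 2.0)
    val clipX = f / aspect * x
    val clipY = f * y
    val clipZ = (far + near) / (near - far) * z + 2.0 * far * near / (near - far)
    val clipW = -z  // the camera looks down -Z in eye space
    // Perspective divide: clip space -> the -1..+1 cube WebGL draws
    return Triple(clipX / clipW, clipY / clipW, clipZ / clipW)
}

fun main() {
    val near = 10.0
    val far = 100.0
    // A point on the near plane lands on z = -1, one on the far plane on z = +1
    println(project(0.0, 0.0, -near, Math.PI / 4, 16.0 / 9.0, near, far).third) // -1.0 (up to rounding)
    println(project(0.0, 0.0, -far, Math.PI / 4, 16.0 / 9.0, near, far).third)  // +1.0 (up to rounding)
}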
The best example I can give is located at:
http://www.mathopenref.com/arclength.html
In that Java applet, imagine C is the object to be rotated around and A is the camera. I wish to move the camera to point B, but I do not know how to work out B's co-ordinates. How do you do it? In my case, I know the positions of C and A, and the angle theta to rotate.
I know you can use:
x = Xcentre + radius * sin(theta)
y = Ycentre + radius * cos(theta)
but this fails to take into account the camera's current position.
I can't help but feel there's some simple solution I'm missing.
Solved by using the equations listed and reversing the calculation to derive theta first, then applying a check so that full 360-degree rotations are possible (otherwise only 180 degrees would be).
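A sketch of that solution, assuming a 2D orbit in the XY plane and keeping the asker's sin-for-x / cos-for-y convention; Point and orbit are hypothetical names:

import kotlin.math.atan2
import kotlin.math.cos
import kotlin.math.hypot
import kotlin.math.sin

data class Point(val x: Double, val y: Double)

// Rotate the camera by deltaTheta radians around centre, preserving its
// current radius and starting from its current angle rather than from zero.
fun orbit(camera: Point, centre: Point, deltaTheta: Double): Point {
    val radius = hypot(camera.x - centre.x, camera.y - centre.y)
    // Recover the camera's current angle. Because x uses sin and y uses cos
    // here, the inverse is atan2(dx, dy); atan2 covers the full circle,
    // which is what makes rotations beyond 180 degrees possible.
    val currentTheta = atan2(camera.x - centre.x, camera.y - centre.y)
    val theta = currentTheta + deltaTheta
    return Point(centre.x + radius * sin(theta),
                 centre.y + radius * cos(theta))
}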
Imagine I want to draw a custom view in a given rectangle (e.g. 100 x 100 pixels). My custom view's contents might be bigger than 100 x 100. Instead of having some content not drawn, I'd like to draw all content inside the 100 x 100 area. For example, a point that would normally be located at (125, 140) would now be drawn at point (25, 40).
Is there any way to do this without having to (majorly) modify the drawing code? Keep in mind that I also draw more complex shapes, like bezier paths.
Perhaps you could scale your drawing space via CGContextScaleCTM(...).
e.g.
CGFloat sx, sy;
// Ratio between the view's actual size and the size the drawing code assumes
sx = self.frame.size.width / desiredWidth;
sy = self.frame.size.height / desiredHeight;
// Scale the current transformation matrix so everything drawn afterwards is shrunk to fit
CGContextScaleCTM(context, sx, sy);
EDIT:
As Codo suggests below, you may be looking for CGContextTranslateCTM(...) which will offset your context's coordinate space by some x/y value.
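For comparison, the same two transforms have direct counterparts on Android's Canvas; a hedged sketch, with FittingView and the content dimensions as hypothetical stand-ins:

import android.content.Context
import android.graphics.Canvas
import android.view.View

class FittingView(context: Context) : View(context) {
    // Hypothetical natural size of the content being drawn
    private val contentWidth = 200f
    private val contentHeight = 200f

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        canvas.save()
        // Scale the coordinate space so all content fits the view;
        // this is the Canvas counterpart of CGContextScaleCTM
        canvas.scale(width / contentWidth, height / contentHeight)
        // canvas.translate(dx, dy) would be the CGContextTranslateCTM
        // counterpart, shifting e.g. (125, 140) to (25, 40)
        // ...existing drawing code runs here, unchanged...
        canvas.restore()
    }
}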
I was wondering whether there are examples online that cover how to detect shapes in touch gestures,
for example a rectangle or a circle (or, more complex, a heart),
or how to determine the speed of a swipe (over time, like "I'm swiping my iPhone at 50 mph").
For very simple gestures (horizontal vs. vertical swipe), calculate the difference in x and y between two touches.
dy = abs(y2 - y1)
dx = abs(x2 - x1)
f = dy/dx
An f close to zero is a horizontal swipe, an f close to 1 is a diagonal swipe, and a very large f is a vertical swipe (keep in mind that dx could be zero, so the above won't yield valid results for all x and y).
If you're interested in speed, Pythagoras can help. The distance travelled between the two touches is:
l = sqrt(dx*dx + dy*dy)
If the touches happened at times t1 and t2, the speed is:
tdiff = abs(t2 - t1)
s = l/tdiff
It's up to you to determine which value of s you interpret as fast or slow.
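Put together as a sketch in plain Kotlin (no framework types; the 0.5 and 2.0 thresholds for "close to zero" and "very large" are arbitrary choices):

import kotlin.math.abs
import kotlin.math.sqrt

// Classify a two-point gesture and compute its speed.
// Coordinates are in pixels, times in seconds.
fun classifySwipe(x1: Float, y1: Float, x2: Float, y2: Float,
                  t1: Double, t2: Double): String {
    val dx = abs(x2 - x1)
    val dy = abs(y2 - y1)

    // Guard against dx == 0 before dividing
    val direction = when {
        dx == 0f       -> "vertical"
        dy / dx < 0.5f -> "horizontal" // f close to zero
        dy / dx > 2.0f -> "vertical"   // f very large
        else           -> "diagonal"   // f close to 1
    }

    val length = sqrt(dx * dx + dy * dy) // Pythagoras
    val speed = length / abs(t2 - t1)    // pixels per second

    return "$direction swipe at $speed px/s"
}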
You can extend this approach for more complex figures, e.g. your square shape could be a horizontal/vertical/horizontal/vertical swipe with start/end points where the previous swipe stopped.
For more complex figures, it's probably better to work with an idealized shape. One could consider a polygon as the ideal and check that a range of touches
don't have too high a distance to their closest point on the polygon's outline (a point-to-segment check, sketched after this list), and
all touches follow the same direction along the polygon's outline.
You can refine things further from there.
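For the distance test in the first point above, the main ingredient is a point-to-segment distance; a sketch using the standard projection formula (distanceToSegment is a hypothetical helper name):

import kotlin.math.hypot

// Distance from point (px, py) to the segment (ax, ay)-(bx, by),
// used to check a touch against one edge of the idealized polygon.
fun distanceToSegment(px: Double, py: Double,
                      ax: Double, ay: Double,
                      bx: Double, by: Double): Double {
    val dx = bx - ax
    val dy = by - ay
    val lengthSquared = dx * dx + dy * dy
    if (lengthSquared == 0.0) return hypot(px - ax, py - ay) // degenerate segment
    // Project the point onto the segment, clamping to the endpoints
    val t = (((px - ax) * dx + (py - ay) * dy) / lengthSquared).coerceIn(0.0, 1.0)
    return hypot(px - (ax + t * dx), py - (ay + t * dy))
}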
Other methods exist for detecting non-simple touches on a touchscreen. Check out the $1 Unistroke Recognizer from the University of Washington: http://depts.washington.edu/aimgroup/proj/dollar/
It basically works like this:
Resample the recorded path into a fixed number of points that are evenly spaced along the path.
Rotate the path so that the first point is directly to the right of the path's center of mass.
Scale the path (non-uniformly) to a fixed height and width.
For each reference path, calculate the average distance between corresponding points in the input path and the reference. The reference with the lowest average point distance is the match.
What’s great is that the output of steps 1-3 is a reference path that can be added to the array of known gestures. This makes it extremely easy to give your application gesture support and create your own set of custom gestures, as you see fit.
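As an illustration of step 1, here is a sketch of the resampling, following the description in the $1 paper; n = 64 is the point count the paper suggests, and Pt is a local helper type:

import kotlin.math.hypot

data class Pt(val x: Double, val y: Double)

fun pathLength(points: List<Pt>): Double =
    points.zipWithNext { a, b -> hypot(b.x - a.x, b.y - a.y) }.sum()

// Resample a recorded path into n points evenly spaced along its length
// (step 1 of the $1 recognizer).
fun resample(points: List<Pt>, n: Int = 64): List<Pt> {
    val interval = pathLength(points) / (n - 1)
    var accumulated = 0.0
    val src = points.toMutableList()
    val out = mutableListOf(src.first())
    var i = 1
    while (i < src.size) {
        val d = hypot(src[i].x - src[i - 1].x, src[i].y - src[i - 1].y)
        if (accumulated + d >= interval) {
            // Interpolate a new point exactly one interval along the path
            val t = (interval - accumulated) / d
            val q = Pt(src[i - 1].x + t * (src[i].x - src[i - 1].x),
                       src[i - 1].y + t * (src[i].y - src[i - 1].y))
            out.add(q)
            src.add(i, q) // q becomes the start of the next segment
            accumulated = 0.0
        } else {
            accumulated += d
        }
        i++
    }
    while (out.size < n) out.add(src.last()) // guard against rounding shortfall
    return out
}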
The $1 recognizer has been ported to iOS by Adam Preble; the repo is on GitHub:
http://github.com/preble/GLGestureRecognizer