I need a 2D coordinate system that maps a user-space coordinate system onto Swing components on the screen. That is exactly what Java2D does. But what I need beyond that is to move the relative position of the screen and the coordinate system, to get a kind of scrolling.
In Java 2D the default origin (0,0) is in the upper-left corner, which is common in computer graphics.
Is it possible to move the point?
If yes: How can I do it?
Thanks in advance.
You can change your coordinate system using the translate() function. For example:
Graphics2D g; // Assume this is already initialized
g.drawLine(100, 100, 200, 200); // Draw in the default coordinate system
g.translate(100.0, 100.0); // Move the origin down and to the right
g.drawLine(0, 0, 100, 100); // Draw the same line relative to new origin
You can also use scale(), rotate() and shear() for more powerful transformations of the coordinate system.
For more information check this page: http://docstore.mik.ua/orelly/java-ent/jfc/ch04_03.htm
Yes, I know it is possible, but it has been a while since I've used Java.
Use this Google query: Search
I am using VB.NET to write a game that runs in a Windows Form and uses collision detection. In order to achieve this, I have to be able to understand the positioning system. I know that Windows Forms coordinates start at the top left and don't include the bottom or right edges. But at what numbers do the coordinates start and stop? (What I mean is: what is the top-left corner coordinate, and what is the coordinate just inside the bottom-right corner?)
The coordinate system depends on whether you're talking about client coordinates or screen coordinates. This is basic Windows UI manager behavior, and the WinForms wrappers follow the same pattern.
When you're dealing with client coordinates, the origin (top-left) point has coordinates (0, 0). Always. The extent is defined by the width and height of your form, accessible via Me.ClientSize.Width and Me.ClientSize.Height, respectively. The client rectangle is, therefore:
{ (0, 0) × (ClientSize.Width, ClientSize.Height) }, also retrievable using the ClientRectangle property.
The unique thing about the client area is that it excludes the non-client areas of the form—the borders, the title bars, and other system-dependent properties.
(Image taken for illustrative purposes from Jose Menendez Póo's article on creating an Aero ToolStrip)
You don't have to worry about calculating these sizes (and you shouldn't, either, since they're subject to change). You just work in client coordinates, and the framework will take care of the rest. You use client coordinates when positioning child objects (such as controls) on their parent form, and you can even resize the form by specifying a client size. Its actual size will be calculated automatically, taking into account the non-client area.
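Under the hood, this is roughly the calculation the window manager performs for you. A hedged Win32 C sketch (the style flags are just example values, not necessarily what WinForms passes):

#include <windows.h>

/* Sketch: given a desired client size, ask Windows how big the whole
   window must be, including the title bar and borders for this style. */
void WindowSizeForClientSize(int clientW, int clientH, int *winW, int *winH)
{
    RECT rc = { 0, 0, clientW, clientH };
    AdjustWindowRectEx(&rc, WS_OVERLAPPEDWINDOW, FALSE, 0); /* example style */
    *winW = rc.right - rc.left;
    *winH = rc.bottom - rc.top;
}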
It is quite rare that you will ever have to deal in screen coordinates. You only need those if you want to move a form (window) around on the screen (which should also be rare, because you have no idea what size screen the user has nor should you try to control where she places her windows). In screen coordinates, the top-left corner of the primary monitor has coordinates (0, 0). The rest of the coordinate system is based on the virtual screen, which takes into account multiple-monitor configurations.
A form's Location and Size properties give you values in screen coordinates. Should you need to map (convert) between client and screen coordinates, there are PointToClient and PointToScreen methods. Pass these a location defined either in terms of screen or client coordinates, respectively, and they will convert it to the other coordinate system.
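These wrappers correspond to the client/screen mapping the underlying Win32 API exposes. A rough C sketch, assuming hwnd is a valid window handle:

#include <windows.h>

void DemoCoordinateMapping(HWND hwnd)
{
    RECT client;
    GetClientRect(hwnd, &client);   /* always { 0, 0, clientWidth, clientHeight } */

    POINT p = { 10, 10 };           /* a point in client coordinates            */
    ClientToScreen(hwnd, &p);       /* -> the same point in screen coordinates  */
    ScreenToClient(hwnd, &p);       /* -> back to client coordinates            */
}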
The only other complication to note is that Windows uses endpoint-exclusive rectangles. The WinForms wrapper retains that convention in its Rectangle structure. You hardly ever have to worry about this, since this is really a very natural system once you understand it. Plus, all of the pieces and parts of the WinForms framework use the convention, so if you're just passing around points and sizes and rectangles, you aren't likely to run into trouble. But it is something to be aware of. Think of it this way: your client area has the rectangle { (0, 0) × (ClientSize.Width, ClientSize.Height) }, as we saw earlier. If you were to fill in this rectangle with a solid color, the fill would extend from point (0, 0) to point (ClientSize.Width - 1, ClientSize.Height - 1).
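The native RECT structure follows the same endpoint-exclusive convention; a tiny C illustration (the device context and brush are assumed to exist):

#include <windows.h>

void FillClientExample(HDC hdc, HBRUSH brush)
{
    RECT rc = { 0, 0, 640, 480 };   /* endpoint-exclusive rectangle              */
    /* width = 640, height = 480, but the last painted pixel is (639, 479) */
    FillRect(hdc, &rc, brush);
}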
If you stay within your form, you can work it out from its Width and Height. You also have Left and Top. The starting point is (Left = 0, Top = 0), and it ends at the bottom right at the coordinates given by the Width and Height values.
A Windows Forms application specifies the position of a window on the screen in screen coordinates. For screen coordinates, the origin is the upper-left corner of the screen. The full position of a window is often described by a Rectangle structure containing the screen coordinates of two points that define the upper-left and lower-right corners of the window. (MSDN)
So the upper-left corner is (0, 0) and the lower-right corner is (Form1.Width, Form1.Height).
Normally the 0,0 coordinate refers to the top left corner of a view. Higher x coordinates are further right. A frame / rectangle in the view has its leftmost point being its x coordinate and its rightmost point being its x coordinate plus its width.
Is it possible to reverse that, or better yet, reverse just the x axis? Make (0, 0) be the top right. Make higher x coordinates be further to the left. AND make it so a frame / rectangle in the view has its rightmost point as its x coordinate and its leftmost point as its x coordinate plus its width.
I know I could transform this stuff myself with pure math, but I was wondering if iOS offers this capability.
Not really.
iOS 9 has some new flipping stuff for supporting right-to-left languages, but I don't think you can force it.
You can flip the drawing of a view by setting its transform property to CGAffineTransformMakeScale(-1, 1), but that won't change the underlying coordinate system.
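If all you need is mirrored drawing rather than a mirrored layout/touch coordinate system, the same effect can be expressed at the Core Graphics level inside your drawing code. A minimal C sketch (the context and view width are assumed to come from your own drawRect: implementation):

#include <CoreGraphics/CoreGraphics.h>

/* Mirror the x axis of a graphics context so that x = 0 lands at the right
   edge and x increases toward the left. Call this before any drawing. */
static void FlipXAxis(CGContextRef ctx, CGFloat viewWidth)
{
    CGContextTranslateCTM(ctx, viewWidth, 0); /* move the origin to the right edge */
    CGContextScaleCTM(ctx, -1, 1);            /* then mirror the x axis            */
}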
SpriteKit has a different coordinate system than normal UIViews, but its coordinate system isn't what you want.
You may want to read Coordinate Systems and Transforms, which discusses some techniques for mapping points between different coordinate systems. This MSDN article covers mapping points using matrices, which can help you on a theoretical level.
It's not documented, but UIView has an instance variable, _flipsHorizontalAxis, that does exactly what it sounds like it would do. It looks like it just passes through to the CALayer instance variable of the same name.
Is it possible to change the origin of an NSImage? If so, how would I go about doing this? I have coordinates in a regular Cartesian system, some of them with negative values, and I am trying to draw them at the corresponding points in the NSImage, but since the origin is at (0, 0), some of them are missing.
EDIT: Say I have a drawing operation that needs to be done to the image at the point (-10, -10); currently this doesn't show up. Is there a way to fix that?
If it's like on iOS (you may have to adapt the code a little), and if my memory serves, you have to do this, since origin is read-only:
CGRect myFrame = yourImage.frame;
myFrame.origin.x = newX;
myFrame.origin.y = newY;
yourImage.frame = myFrame;
I think you are confusing an NSImage with its container. An NSImage has no bounds or frame, and thus no origin. It does have a size, which may represent the pixel dimensions of its bitmap representation (if it has one) or otherwise its bounding box (if it is a vector image). Drawing into an image at a pixel location of (-10, -10) doesn't really make sense.
An NSImage is displayed in a container ( typically an NSImageView), and the container's bounds.origin will dictate the placement of the image relative to the imageView, but you can't modify pixels beyond the edge of the bitmap plane.
In any case, you probably want to be using a subclassed NSView in which you override the drawRect: method for your custom drawing. NSView does have a bounds.origin, but this is not relevant to your in-drawing coordinates; rather, it positions the drawn content as a whole relative to the view's bounding box. The coordinate system you will be drawing into is referenced to your graphics context, which will (usually) pin the origin (0, 0) to the bottom-left corner (OS X) or the top-left corner (iOS). If you are trying to represent negative points on a Cartesian plane, you will need to apply a translation transform to map your points into this positive coordinate space.
I'm trying to explain in a few words, badly, something which Apple explains in great detail in their Quartz 2D Programming Guide.
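As a minimal sketch of that last point, assuming you draw through a CGContextRef obtained from the current graphics context, and that minX/minY describe how far your data extends into negative territory:

#include <CoreGraphics/CoreGraphics.h>

/* Shift the origin so that Cartesian points with negative coordinates
   (down to minX, minY) land inside the view's positive drawing space. */
static void MapCartesianIntoView(CGContextRef ctx, CGFloat minX, CGFloat minY)
{
    CGContextTranslateCTM(ctx, -minX, -minY);
    /* After this call, a data point at (minX, minY), e.g. (-10, -10),
       is drawn at the context origin (0, 0) instead of being clipped away. */
}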
I am trying to develop an iOS app that warps any given image (UIImage) at selected locations.
So, to accomplish this task, what would be the right way forward? For now I'm doing some research on doing this with OpenGL (frankly, any heads-up on the framework would be nice too).
So, in short, the requirement is to warp the UIImage at some given locations (given their x, y coordinates).
If you're sufficiently familiar with (or willing to learn) OpenGL, then you could do this:
Create a flat, rectangular grid of points to be a mesh that will be displayed with OpenGL.
Apply the image to the mesh as a texture.
When distorting the image at a particular location, you can just decide which points on the mesh will be affected by the distortion, and move them.
You can push points out from the center, or in toward a center, or shift them all in the same direction. If the distortion affects a large area, then you change a lot of points (possibly changing those in the center by more than those near the edges of the affected area).
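A minimal CPU-side sketch of that mesh idea in C (the grid size and falloff function are illustrative choices; the positions/texcoords arrays would then be uploaded to OpenGL and drawn as a textured triangle grid):

#include <math.h>

#define GRID_W 32   /* vertices per row (example value)    */
#define GRID_H 32   /* vertices per column (example value) */

typedef struct { float x, y; } Vec2;

static Vec2 positions[GRID_W * GRID_H]; /* warped vertex positions           */
static Vec2 texcoords[GRID_W * GRID_H]; /* fixed texture coordinates in 0..1 */

/* Build a flat, regular grid covering the image. */
static void buildGrid(float imageW, float imageH)
{
    for (int j = 0; j < GRID_H; j++) {
        for (int i = 0; i < GRID_W; i++) {
            int k = j * GRID_W + i;
            float u = (float)i / (GRID_W - 1);
            float v = (float)j / (GRID_H - 1);
            texcoords[k] = (Vec2){ u, v };
            positions[k] = (Vec2){ u * imageW, v * imageH };
        }
    }
}

/* Push vertices within `radius` of (cx, cy) away from that point.
   Vertices near the centre move the most; those near the edge barely move. */
static void warpAt(float cx, float cy, float radius, float strength)
{
    for (int k = 0; k < GRID_W * GRID_H; k++) {
        float dx = positions[k].x - cx;
        float dy = positions[k].y - cy;
        float d  = sqrtf(dx * dx + dy * dy);
        if (d > 0.0f && d < radius) {
            float falloff = 1.0f - d / radius;  /* 1 at the centre, 0 at the edge */
            positions[k].x += (dx / d) * strength * falloff;
            positions[k].y += (dy / d) * strength * falloff;
        }
    }
}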
Not sure what you mean by 'warp'. Do you mean skew it in three dimensions? If so, you can adjust the CGAffineTransform of the UIImageView you are displaying it in to get that effect.
If you mean some kind of image processing warp, and you are using iOS 5, you can use Core Image for that.
I am trying to draw some text via Quartz onto an NSView via CGContextShowTextAtPoint(). This worked well until I overrode (BOOL)isFlipped to return YES in my NSView subclass in order to position the origin in the upper-left for drawing. The text draws in the expected area but the letters are all inverted. I also tried the (theoretically, at least) equivalent of flipping my CGContext and translating by the context's height.
e.g.
// drawRect:
CGContextScaleCTM(theContext, 1, -1);
CGContextTranslateCTM(theContext, 0, -dirtyRect.size.height);
This yields the same result.
Many suggestions for similar problems have pointed to modifying the text matrix. I've set the text matrix to the identity matrix, performed an additional inversion on it, and done both, respectively. All these solutions have led to even stranger rendering of the text (often just a fragment shows up).
Another suggestion I saw was to simply steer clear of this function in favor of other means of drawing text (e.g. NSString's drawing methods). However, this is being done amongst mostly C++ / C code and I'd like to stay at those levels if possible.
Any suggestions are much appreciated and I'd be happy to post more code if needed.
Thanks,
Sam
This question has been answered here.
Basically, it's because the coordinate system in Core Graphics on iOS is flipped (x:0, y:0 in the top left), as opposed to the one on the Mac (where x:0, y:0 is the bottom left). The solution for this is setting the text transform matrix like this:
CGContextSetTextMatrix(context, CGAffineTransformMake(1.0,0.0, 0.0, -1.0, 0.0, 0.0));
You need to use the view's bounds rather than the dirtyRect and perform the translation before the scale:
CGContextTranslateCTM(theContext, 0, -NSHeight(self.bounds));
CGContextScaleCTM(theContext, 1, -1);
Turns out the answer was to modify the text matrix. The weird "fragments" that were showing up instead of the text appeared because the font size (set via CGContextSelectFont()) was too small once the "default" text matrix was replaced. The initial matrix had, for some reason, a large scale transform, so smaller text sizes looked fine while the matrix was unmodified; when it was replaced with an inverse scale of (1, -1) or an identity matrix, however, the text became unreadably small.
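For reference, a minimal sketch of that combination, using the same (now-deprecated) CGContextSelectFont/CGContextShowTextAtPoint API as the question; the font name and 14-point size are just example values:

#include <CoreGraphics/CoreGraphics.h>

/* Draw upright text in a flipped (top-left origin) context. The CTM flip
   inverts everything, so the text matrix is flipped back with a y scale of -1,
   and the font size is set explicitly in points rather than relying on
   whatever scale the previous text matrix carried. */
static void DrawLabel(CGContextRef ctx, const char *text, size_t len,
                      CGFloat x, CGFloat y)
{
    CGContextSelectFont(ctx, "Helvetica", 14.0, kCGEncodingMacRoman);
    CGContextSetTextMatrix(ctx, CGAffineTransformMake(1.0, 0.0, 0.0, -1.0, 0.0, 0.0));
    CGContextSetTextDrawingMode(ctx, kCGTextFill);
    CGContextShowTextAtPoint(ctx, x, y, text, len);
}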