I'm working on an iPad application, and here is my problem:
I have developed an algorithm to determine whether a point is inside a polygon in an image. So when the image is touched, I need to get the coordinates of the touched point and then perform an action using those coordinates (an NSLog, to keep the example simple). The problem is that I can't attach an IBAction to a UIImageView, and so I can't retrieve the point's coordinates. Thanks for any help.
I think you first have to construct a polygon that fits your image. Then you can use touchesBegan:withEvent: to get the coordinates of the touch point and judge whether the point is inside the polygon or not.
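As a rough sketch of that approach (PolygonImageView and polygonPath are hypothetical names; the point-in-polygon test here uses CGPathContainsPoint, which you could swap for your own algorithm):

// A minimal sketch: a UIImageView subclass that reports touches inside a polygon.
@interface PolygonImageView : UIImageView
@property (nonatomic) CGPathRef polygonPath; // built from your polygon's vertices
@end

@implementation PolygonImageView

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self]; // coordinates in the image view
    if (CGPathContainsPoint(self.polygonPath, NULL, point, NO)) {
        NSLog(@"Touched inside the polygon at %@", NSStringFromCGPoint(point));
    }
}

@end

Note that userInteractionEnabled defaults to NO on UIImageView, so you must set it to YES for the touch methods to fire at all.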
Here is a similar question to yours:
How to get particular touch Area?
I think this is somewhat difficult work, so it may be better to use the cocos2d library, which has collision-detection functionality.
http://box2d.org/forum/viewtopic.php?f=9&t=7487
That said, I think iOS is well designed for handling touch, so implementing this yourself is a worthwhile effort.
Basically, I am trying to change not only the appearance of the button (that's the easy part) but also the frame that detects the touch, so that it is a rhombus rather than a square.
HTML example: http://irwinproject.com
I've tried CGAffineTransform, but it doesn't allow me to make non-rectangular objects. Is there a way to skew a view?
I'm just wondering if this is possible; if not, could someone point me in a direction? The only viable answer I've found resorts to something along the lines of SpriteKit.
I found this, however this implementation leaves dead spots on buttons where they overlap:
custom UIButton with skewed area in iPhone
Is there a way to message people on here? There was a gentleman who said he figured out how to do the transform but never posted his solution.
In order to modify the touch area of an oddly shaped button, you could use a solution similar to OBShapedButton. I have used this particular project in the past myself for adjacent hexagonal buttons, and it worked perfectly. That said, you may have to modify it a bit to work with drawn shapes instead of images.
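The core of that approach is overriding pointInside:withEvent: so the button only accepts touches inside its shape. A minimal sketch for a rhombus (RhombusButton is a hypothetical name, not OBShapedButton's actual code):

// Shape-based hit testing: only touches inside the rhombus count.
@interface RhombusButton : UIButton
@end

@implementation RhombusButton

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    CGRect b = self.bounds;
    // Build a rhombus from the midpoints of the bounds' edges.
    UIBezierPath *path = [UIBezierPath bezierPath];
    [path moveToPoint:CGPointMake(CGRectGetMidX(b), CGRectGetMinY(b))];
    [path addLineToPoint:CGPointMake(CGRectGetMaxX(b), CGRectGetMidY(b))];
    [path addLineToPoint:CGPointMake(CGRectGetMidX(b), CGRectGetMaxY(b))];
    [path addLineToPoint:CGPointMake(CGRectGetMinX(b), CGRectGetMidY(b))];
    [path closePath];
    return [path containsPoint:point];
}

@end

Since a touch outside the shape falls through to whatever view lies underneath, overlapping buttons implemented this way shouldn't leave dead spots.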
I would really love it if someone could tell me how to detect whether two objects (an image or a button) have touched. I know how to make them draggable, but not how to tell if they have touched and how to do something when they touch!
Thanks!
If you never rotate the objects, you can use CoreGraphics functions.
BOOL objectsTouch = CGRectIntersectsRect(object1.frame, object2.frame);
This of course requires that the two objects be in the same superview. Otherwise you first have to convert the frames into a common coordinate system using the view's coordinate-conversion methods (such as convertRect:toView:).
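For example, a quick sketch that converts both frames to window coordinates first (assuming both objects live in the same window):

CGRect frame1 = [object1.superview convertRect:object1.frame toView:nil]; // window coordinates
CGRect frame2 = [object2.superview convertRect:object2.frame toView:nil];
BOOL objectsTouch = CGRectIntersectsRect(frame1, frame2);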
The classical approach is to calculate a minimal circle that encompasses each object, then calculate the distance between the circle centers (Pythagorean theorem) and see whether it's less than R(object1's circle) + R(object2's circle). If it's less, you have to get down and dirty with bitmap testing or some other scheme, but if it's greater, you can assume the objects don't touch.
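A sketch of that quick rejection test (radius1 and radius2 stand for the radii of the two bounding circles you computed):

// If the bounding circles don't overlap, the objects can't possibly touch.
CGFloat dx = object2.center.x - object1.center.x;
CGFloat dy = object2.center.y - object1.center.y;
CGFloat distance = hypot(dx, dy); // Pythagorean theorem
if (distance > radius1 + radius2) {
    // Definitely not touching.
} else {
    // Possibly touching; fall back to a finer-grained (e.g. bitmap) test.
}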
So I want to have a view (NSView, NSOpenGLView, something CG-related?) that basically displays a map, such as:
http://dump.tanaris4.com/map.png
Obviously that looks horrible, but I did it using an NSView, and it draws SO slowly. Clearly it's not designed for this.
I just need to allow users to click on the individual (x,y) coordinates to make changes, and zoom into a certain area (to see it better).
Should I go the OpenGL route? And if so - any suggestions as to how to get started? (I was able to follow the guide to draw a triangle, so that's good).
I did find this post on zooming in an NSView: How to implement zoom/scale in a Cocoa AppKit-application
My concern is that if I'm drawing over 6000 coordinates and the lines connecting them, this won't be efficient at all.
I don't think OpenGL would do much good here. The problem does not seem to be the actual painting, but rather the rendering strategy. You would need a scene graph of some kind to dynamically handle level of detail and culling.
Qt packages all of this in a nice class, QGraphicsScene (see http://doc.qt.nokia.com/latest/qgraphicsscene.html for reference, and http://doc.qt.nokia.com/main-snapshot/demos-chip.html for an example).
Some basic concepts you should consider using:
http://en.wikipedia.org/wiki/Scene_graph
http://en.wikipedia.org/wiki/Quadtree
http://en.wikipedia.org/wiki/Level_of_detail
Try using Core Graphics for this; really, there is so much that can be done with it. Watch the video Practical Drawing for iOS Developers from WWDC 2011; it should give an overview of what can be done with CG.
I believe even Core Graphics will suffice for what you want to achieve, and it should work within a UIView if you do all of your view's drawing in the drawRect: method of your UIView (you must override this method). Please see the UIView Class Reference. I have a mobile application that logs points on a map with MapKit, kind of like Nike+, and it certainly works well for massive amounts of points/line segments. There is no reason why this simple approach cannot work for you as well.
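A minimal sketch of that approach (MapView, points, and pointCount are hypothetical names; in practice the coordinates would come from your data model):

// A UIView subclass that draws the whole map in drawRect:.
@interface MapView : UIView
@property (nonatomic) CGPoint *points;      // your 6000+ coordinates
@property (nonatomic) NSUInteger pointCount;
@end

@implementation MapView

- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetStrokeColorWithColor(ctx, [UIColor blackColor].CGColor);
    CGContextSetLineWidth(ctx, 1.0);

    // Connect the points with line segments in a single path.
    CGContextMoveToPoint(ctx, self.points[0].x, self.points[0].y);
    for (NSUInteger i = 1; i < self.pointCount; i++) {
        CGContextAddLineToPoint(ctx, self.points[i].x, self.points[i].y);
    }
    CGContextStrokePath(ctx);
}

@end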
I'm not sure how else I should approach this, but if I were to have (in my Mac application) a grid of NSViews, each of which the user can change the colour of, is it possible to then translate this, now that the user has given me a colour for each pixel, into an exportable image?
I honestly can't think of how else to do this. I don't want to go ahead and realise I have taken a rather foolish path.
The idea is that I will have a grid of squares which the user can paint, one colour in each square, each square representing a pixel in the final image. They paint by filling each square, as with a paint bucket, and then export the result into an actual image file.
Any help much appreciated, thanks.
A grid of NSViews sounds really heavy for what you're doing. Why not write one single custom view that checks the mouse position and modifies the data appropriately? Then you'd write a custom drawing method to fill the custom view, and you could use the same exact draw method to write to an NSImage which you could export.
You'll need to do a bit o' math. For each "pixel", call -set on the appropriate NSColor, then use NSBezierPath's +fillRect: class method. It may help you to get out a pencil & paper to figure out the math for the rect origins & sizes.
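A sketch of what the export step could look like (gridRows, gridColumns, cellSize, and gridColors are hypothetical stand-ins for your model):

// Render the user's grid into an NSImage, one filled rect per "pixel".
NSSize imageSize = NSMakeSize(gridColumns * cellSize, gridRows * cellSize);
NSImage *image = [[NSImage alloc] initWithSize:imageSize];
[image lockFocus]; // drawing now goes into the image

for (NSInteger row = 0; row < gridRows; row++) {
    for (NSInteger col = 0; col < gridColumns; col++) {
        [gridColors[row][col] set]; // the colour the user picked for this cell
        NSRect cell = NSMakeRect(col * cellSize, row * cellSize,
                                 cellSize, cellSize);
        [NSBezierPath fillRect:cell];
    }
}

[image unlockFocus];
// image can now be written out, e.g. via its TIFFRepresentation.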
Check http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/CocoaDrawingGuide/Introduction/Introduction.html for help if you've never done custom drawing before. It's really not that bad, just takes a little reading. :)
I'm working on an iPhone app that has some non-rectangular UI elements. Currently, I'm subclassing UIView, and in drawRect: I'm using a CGPathRef to draw a black border and a color-filled interior.
I'd like to make these items look more like "buttons", though, so I'd like to have some of the same sort of "glass effects" that are used on e.g. the icons for an iPhone app (when you don't set UIPrerenderedIcon to true), or in other buttons.
I hunted around, and found this, which seems to be close to what I need:
Gradients on UIView and UILabels On iPhone
But I'm having difficulty figuring out how to clip the gradient to my shape.
It seems like the mask property on the view's layer would be the right place to go, which seems like it would call for me to create a new CALayer object with the clipping somehow applied to it.
I'm hoping there's some nice convenience function for doing this, though if I need to write something more complicated, that's OK too. I'm just having difficulty figuring out how to apply the path as a mask. I'm unsure whether I need to create a new drawing context and draw the path into it, and then use CGContextClip.
I think I've got a lot of the right pieces figured out, I'm just having difficulty understanding how to assemble them.
Could someone please point me in the right direction? (I'm happy to read more in the docs, just point me in the right direction, please.)
You can create a CAShapeLayer and set its path to your CGPathRef. Then set the mask property of a CAGradientLayer to your shapeLayer.
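A minimal sketch, assuming path is your existing CGPathRef, QuartzCore is imported, and this code runs inside your UIView subclass (the gradient colors are placeholders):

// Clip a gradient to an arbitrary shape by using a CAShapeLayer as a mask.
CAShapeLayer *shapeLayer = [CAShapeLayer layer];
shapeLayer.path = path; // your existing CGPathRef

CAGradientLayer *gradientLayer = [CAGradientLayer layer];
gradientLayer.frame = self.bounds;
gradientLayer.colors = @[(id)[UIColor whiteColor].CGColor,
                         (id)[UIColor grayColor].CGColor];
gradientLayer.mask = shapeLayer; // only pixels inside the path are visible

[self.layer addSublayer:gradientLayer];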