Using CoreData with drawing (CGPoint) - objective-c

I would like to use Core Data for a drawing application where the user draws in a UIImageView using CGPoints, has the option to save, and can later load different drawings (and add to them if desired, i.e. mutability).
I've read through some Core Data tutorials and skimmed the Core Data Programming Guide, but I can't figure out what needs to be done to save the drawing to Core Data, i.e. what type of attribute to use (transformable, I suppose) and how exactly to get the CGPoints (again via a transformer, I suppose) to work with the managed object I'll need to create.
To draw, I'm using these methods:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {...}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {...}
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {...}
In the iPhoneCoreDataRecipes project, Apple converts the images to data for Core Data (so you don't have to download the project, their transformer code from the implementation file is reproduced at the bottom of this post). In that case it's converting PNGs to data, so I'm not sure how to apply this to CGPoints in a UIImageView. I'm happy to supply more of my code if necessary.
Thanks in advance,
James
Apple's Recipe code:
@implementation Recipe
@dynamic name, image, overview, thumbnailImage, instructions, ingredients, type, prepTime;
@end

@implementation ImageToDataTransformer

+ (BOOL)allowsReverseTransformation {
    return YES;
}

+ (Class)transformedValueClass {
    return [NSData class];
}

- (id)transformedValue:(id)value {
    NSData *data = UIImagePNGRepresentation(value);
    return data;
}

- (id)reverseTransformedValue:(id)value {
    UIImage *uiImage = [[UIImage alloc] initWithData:value];
    return [uiImage autorelease];
}

@end

As advised by TechZen:
Do you need to query for the coordinates in CoreData or do you only want to fetch a drawing and draw all points composing the drawing?
If you need to query for coordinates, create a Core Data entity representing each CGPoint with two attributes (the coordinates); otherwise create a single model object for each drawing with a transformable attribute and use NSValue to store all the points.
CGPoint -> NSValue -> NSMutableArray -> (transformable) NSManagedObject attribute
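A hedged sketch of that pipeline follows; the transformer class name and the use of NSStringFromCGPoint/NSKeyedArchiver are my own illustrative choices, not anything from Apple's sample:

// Illustrative sketch: stores an NSArray of NSValue-wrapped CGPoints in a
// transformable Core Data attribute by round-tripping through strings.
@interface DrawingPointsTransformer : NSValueTransformer
@end

@implementation DrawingPointsTransformer

+ (BOOL)allowsReverseTransformation {
    return YES;
}

+ (Class)transformedValueClass {
    return [NSData class];
}

// value is the NSArray of NSValue-wrapped CGPoints making up one drawing
- (id)transformedValue:(id)value {
    NSMutableArray *strings = [NSMutableArray array];
    for (NSValue *pointValue in value) {
        [strings addObject:NSStringFromCGPoint([pointValue CGPointValue])];
    }
    return [NSKeyedArchiver archivedDataWithRootObject:strings];
}

- (id)reverseTransformedValue:(id)value {
    NSArray *strings = [NSKeyedUnarchiver unarchiveObjectWithData:value];
    NSMutableArray *points = [NSMutableArray arrayWithCapacity:[strings count]];
    for (NSString *pointString in strings) {
        [points addObject:[NSValue valueWithCGPoint:CGPointFromString(pointString)]];
    }
    return points;
}

@end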

Related

Storing information, like alphas and which methods were called, in Spritekit

I am still an amateur. I want to save information in the SettingsScene when I press the home button. This information includes the alpha of each button, and the game should know that methods like defaultModePlay were called (check sendInformationAboutDefaultChallenge and SettingsScene.h). I hope you can follow my code below. Please bear with me if this code looks silly; your advice is highly welcome.
In simpler terms: I want to store information (maybe using NSUserDefaults) from the SettingsScene. Is there any way I can do that?
This is how the delegate is set in the SettingsScene...
@class SettingsScene;

@protocol SettingsSceneDelegate <NSObject>
@required
- (void)defaultModePlay;
- (void)mediumModePlay;
- (void)hardModePlay;
@end

@interface SettingsScene : SKScene
@property (nonatomic, weak) id <SettingsSceneDelegate, SKSceneDelegate> delegate;
@end
This is a method in SettingsScene.m:
-(void)sendInformationAboutDefaultChallenge
{
[self.delegate defaultModePlay];
}
Here is the touchesBegan: method in SettingsScene.m. These are buttons whose alphas change when they are pressed, and they should call the methods declared in SettingsScene.h. Those methods send information to the gameScene, which controls the speed of the particles.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint location = [touch locationInNode:self];
    SKNode *node = [self nodeAtPoint:location];
    if ([node.name isEqualToString:@"DefaultChallengeMode"]) {
        defaultChallengeButton.alpha = 0.5;
        mediumChallengeButton.alpha = 1.0;
        hardChallengeButton.alpha = 1.0;
        // the game should play the default way of playing.
        [self sendInformationAboutDefaultChallenge];
    }
    if ([node.name isEqualToString:@"MediumChallengeMode"]) {
        mediumChallengeButton.alpha = 0.5;
        defaultChallengeButton.alpha = 1.0;
        hardChallengeButton.alpha = 1.0;
    }
    if ([node.name isEqualToString:@"hardChallengingMode"]) {
        hardModeButton.alpha = 0.5;
        defaultModeButton.alpha = 1.0;
        mediumModeButton.alpha = 1.0;
    }
    if ([node.name isEqualToString:@"home"]) {
        // save whatever happened in this scene
    }
}
In the gameScene, I have the method from SettingsScene...
-(void)defaultModePlay
{
defaultGame = YES;
}
Maybe I should store information from the gameScene, I don't know.
I have not shown these methods here..., but I hope you can follow...
-(void)mediumModePlay;
-(void)hardModePlay;
Instead of trying to capture the state of every UI element, why not simply store the difficulty mode and use that to determine how the UI should be displayed? I would first put the difficulty modes into an enum (see this article for the best way to do that) and then store the current one in NSUserDefaults. That will look something like [[NSUserDefaults standardUserDefaults] setInteger:difficultyLevel forKey:@"difficultyLevel"].
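A minimal sketch of that idea; the enum and key names here are illustrative, not taken from the question:

// Illustrative enum and key; adjust names to your own project.
typedef NS_ENUM(NSInteger, DifficultyLevel) {
    DifficultyLevelDefault,
    DifficultyLevelMedium,
    DifficultyLevelHard
};

// Save when the home button is tapped:
[[NSUserDefaults standardUserDefaults] setInteger:DifficultyLevelMedium forKey:@"difficultyLevel"];
[[NSUserDefaults standardUserDefaults] synchronize];

// Restore when the scene is created, e.g. in -didMoveToView:
DifficultyLevel level = [[NSUserDefaults standardUserDefaults] integerForKey:@"difficultyLevel"];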

Non-rectangular 'clickable' area

I have a 3D map with many different shaped buildings. I would like users to click a building on the map, which would act as a segue to a new view.
I planned to go about this by creating invisible buttons the same shapes as the buildings, and then laying them on top of the imageview (which holds the map).
After doing some reading I found that it isn't as simple as I thought to create custom-shaped buttons (I think I would have to do a lot of subclassing and customization, which doesn't seem reasonable when I have 50+ differently shaped buttons to make), so I was wondering if there is another approach I could use to solve this problem.
Edit: I should add that right now that all functionality is working, but I am having to use default rectangular buttons, with alpha set to 0.1.
To do this you will have a UIView with your map image in the background. You can do this with either a UIImageView or by rendering the image yourself in drawRect:.
You will then define several CGPath references, one for each building, by doing something like this: How to create CGPathRef from Array of points. The points will be the corners of each building.
Now store these paths in an array somehow. You will need one path for every "clickable" building.
I would store the paths inside a Building object or something...
@interface Building : NSObject
@property (nonatomic) CGPathRef path;
@end
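A hedged sketch of building one such path from corner points (the corner values here are made up for illustration):

// Sketch: build a closed path from a building's corner points.
NSArray *corners = @[[NSValue valueWithCGPoint:CGPointMake(10, 10)],
                     [NSValue valueWithCGPoint:CGPointMake(80, 10)],
                     [NSValue valueWithCGPoint:CGPointMake(80, 60)],
                     [NSValue valueWithCGPoint:CGPointMake(10, 60)]];

CGMutablePathRef path = CGPathCreateMutable();
CGPoint firstCorner = [corners[0] CGPointValue];
CGPathMoveToPoint(path, NULL, firstCorner.x, firstCorner.y);
for (NSUInteger i = 1; i < corners.count; i++) {
    CGPoint corner = [corners[i] CGPointValue];
    CGPathAddLineToPoint(path, NULL, corner.x, corner.y);
}
CGPathCloseSubpath(path);

Building *building = [[Building alloc] init];
building.path = path; // remember to CGPathRelease(path) once the building no longer needs it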
Now in the UIView subclass you override - (void)touchesBegan.... You can then get the touch point and iterate through your paths to find which one is touched...
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *touch = [touches anyObject];
CGPoint touchPoint = [touch locationInView:self];
for (Building *building in self.buildings) {
if (CGPathContainsPoint(building.path, CGAffineTransformIdentity, touchPoint, YES)) {
//the touch was inside this building!
//now do something with this knowledge.
}
}
}
I once had to implement a weather application.
We made a map with several clickable regions.
The easiest way to define those non-rectangular areas is to define them as UIBezierPaths and use a UITapGestureRecognizer.
UIBezierPath wraps a CGPath, but it's pure Objective-C.
Another advantage over a raw CGPath: you can easily fill / stroke these paths using a random color, which can be neat for seeing them while debugging (a sketch of that follows after the code).
So
// in your .h, or in a class extension:
// an array of UIBezierPath objects you create;
// each bezier path represents one area.
@property (nonatomic, strong) NSArray *myShapes;

// in your .m
- (void)viewDidLoad
{
    [super viewDidLoad];
    // load stuff
    UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
    [self.view addGestureRecognizer:tap];
    // ...
}

- (void)handleTap:(UITapGestureRecognizer *)tap
{
    UIBezierPath *recognizedArea = nil;
    // if your shapes are defined in self.view coordinates:
    CGPoint hitPoint = [tap locationInView:self.view];
    for (UIBezierPath *regionPath in self.myShapes) {
        if ([regionPath containsPoint:hitPoint]) {
            recognizedArea = regionPath;
            break;
        }
    }
    // handle the touch for THIS specific area when recognizedArea isn't nil
    if (recognizedArea) {
        // YOUR CODE HERE
    }
    // you might also iterate with a normal integer increment
    // and use the path index to retrieve region-specific data from another array
}
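And the fill/stroke debugging trick mentioned above, as a hedged sketch; it assumes the shapes are drawn by (or handed to) a view whose drawRect: you control:

// Debug-only: paint each clickable region in a random translucent color
// so you can see the areas while developing.
- (void)drawRect:(CGRect)rect
{
    for (UIBezierPath *regionPath in self.myShapes) {
        CGFloat hue = arc4random_uniform(256) / 255.0;
        [[UIColor colorWithHue:hue saturation:1.0 brightness:1.0 alpha:0.4] setFill];
        [regionPath fill];
    }
}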

How do I clear a drawn UIView?

I am currently trying to refresh or clear the drawing in my UIView, but a lot of methods aren't working. I tried:
self.Draw.clearsContextBeforeDrawing
[self.Draw setNeedsDisplay];
self.Draw.removeFromSuperview;
My code:
-(void)drawPoint:(UITouch *)touch
{
PointLocation *currentLoc = [[PointLocation alloc] init];
currentLoc.location = [touch locationInView:self];
self.previousPoint = self.point;
self.point = currentLoc;
[self drawToBuffer];
[self setNeedsDisplay];
}
Is there any way to clear my drawing view? Or at least reload it?
There are a few basic problems here.
How is Draw declared? Follow Cocoa naming conventions: only class names should be capitalized; instances of a class should start with a lowercase letter.
I think what you want is:
@interface Draw : UIView
@end

@implementation Draw

- (void)drawRect:(CGRect)r {
    // here you will draw whatever you want in your view.
}

@end

Draw *draw;
in your drawPoint method:
[draw setNeedsDisplay];
For more details see The View Drawing Cycle:
http://developer.apple.com/library/ios/documentation/UIKit/Reference/UIView_Class/UIView/UIView.html#//apple_ref/doc/uid/TP40006816-CH3-SW136
For bonus points, name your view something more descriptive than Draw. It is a view of something, a window containing some content. It is not a verb.
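A hedged sketch of what "clearing" usually looks like with this pattern, assuming the view accumulates its points in a mutable array (here called points) and renders them in drawRect:

// Illustrative: empty the model that drawRect: renders, then redraw.
- (void)clear
{
    [self.points removeAllObjects]; // forget everything drawn so far
    [self setNeedsDisplay];         // drawRect: runs again and draws nothing
}

If you also draw into an offscreen buffer (as drawToBuffer suggests), reset that buffer in the same place.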

Call touch events on some event in Objective-C

I'm trying to call these methods on some event:
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
But I don't know how to fill in the parameters:
1 - (NSSet *)touches
2 - withEvent:(UIEvent *)event
Can anyone provide me with sample parameters to call touchesEnded:, touchesMoved:, and touchesBegan: with?
There are no public constructors for UITouch and UIEvent, so you cannot generate the parameters required by the touch methods.
As dasblinkenlight said, there is surely another approach to your problem. But the current approach is simply not possible.
Those methods are called automatically by the system. You never call them directly.
What you should do, well that depends on your goal. If you just want to "simulate" touches, just have helper methods. Then, have touchesXXXX call those directly.
We do not get an API to create UITouch objects, so you have to mirror them if you want to keep their information around. Something like...
@interface MyTouch : NSObject {
    UITouchPhase phase;
    NSUInteger tapCount;
    NSTimeInterval timestamp;
    UIView *view;
    UIWindow *window;
    CGPoint locationInView;
    CGPoint locationInWindow;
}
@end
Then, you could have a method like this...
- (void)handleTouches:(NSSet*)touches {
}
Where touches is a set of MyTouch objects.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // For each UITouch in touches, create a MyTouch, and add it to your own
    // collection of touches.
    [self handleTouches:myTouches];
}
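Expanded, that comment might become something like this (a sketch, assuming MyTouch exposes the fields above as readwrite properties):

// Sketch: mirror each UITouch into a MyTouch we own and can keep around.
NSMutableSet *myTouches = [NSMutableSet setWithCapacity:[touches count]];
for (UITouch *touch in touches) {
    MyTouch *myTouch = [[[MyTouch alloc] init] autorelease];
    myTouch.phase = touch.phase;
    myTouch.tapCount = touch.tapCount;
    myTouch.timestamp = touch.timestamp;
    myTouch.view = touch.view;
    myTouch.window = touch.window;
    myTouch.locationInView = [touch locationInView:touch.view];
    myTouch.locationInWindow = [touch locationInView:nil];
    [myTouches addObject:myTouch];
}
[self handleTouches:myTouches];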
If you did this for each touchesXXXX method, each one of them would pass handling off to your own method.
Now, by doing this, you have control over creating your own touch objects. You can make them be configured for any view, or pattern you want. Then, in any other piece of code, you can create a collection of MyTouch objects, and call handleTouches: with them.
As long as the touchesXXXX methods just create your special objects, and pass them on to your method, you can have the exact same behavior whether the system generates the touches, or you simulate them.
Now, what you DO NOT get is automatic generation of system events.
If you are getting killed by the performance of that, there are simple ways to avoid the creation of the extra set and objects, but that's not your question.

Allowing interaction with a UIView under another UIView

Is there a simple way of allowing interaction with a button in a UIView that lies under another UIView - where there are no actual objects from the top UIView on top of the button?
For instance, at the moment I have a UIView (A) with an object at the top and an object at the bottom of the screen and nothing in the middle. This sits on top of another UIView (B) that has buttons in the middle. However, I cannot seem to interact with the buttons in the middle of B.
I can see the buttons in B - I've set the background of A to clearColor - but the buttons in B do not seem to receive touches despite the fact that there are no objects from A actually on top of those buttons.
EDIT - I still want to be able to interact with the objects in the top UIView
Surely there is a simple way of doing this?
You should create a UIView subclass for your top view and override the following method:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
// UIView will be "transparent" for touch events if we return NO
return (point.y < MIDDLE_Y1 || point.y > MIDDLE_Y2);
}
You may also look at the hitTest:event: method.
While many of the answers here will work, I'm a little surprised to see that the most convenient, generic and foolproof answer hasn't been given here. @Ash came closest, except that there is something strange going on with returning the superview... don't do that.
This answer is taken from an answer I gave to a similar question, here.
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
UIView *hitView = [super hitTest:point withEvent:event];
if (hitView == self) return nil;
return hitView;
}
[super hitTest:point withEvent:event] will return the deepest view in that view's hierarchy that was touched. If hitView == self (i.e. if there is no subview under the touch point), return nil, specifying that this view should not receive the touch. The way the responder chain works means that the view hierarchy above this point will continue to be traversed until a view is found that will respond to the touch. Don't return the superview, as it is not up to this view whether its superview should accept touches or not!
This solution is:
convenient, because it requires no references to any other views/subviews/objects;
generic, because it applies to any view that acts purely as a container for touchable subviews, and the configuration of the subviews does not affect the way it works (as it does if you override pointInside:withEvent: to return a particular touchable area).
foolproof, there's not much code... and the concept isn't difficult to get your head around.
I use this often enough that I have abstracted it into a subclass to save pointless view subclasses for one override. As a bonus, add a property to make it configurable:
@interface ISView : UIView
@property (nonatomic, assign) BOOL onlyRespondToTouchesInSubviews;
@end

@implementation ISView

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
    UIView *hitView = [super hitTest:point withEvent:event];
    if (hitView == self && self.onlyRespondToTouchesInSubviews) return nil;
    return hitView;
}

@end
Then go wild and use this view wherever you might use a plain UIView. Configuring it is as simple as setting onlyRespondToTouchesInSubviews to YES.
There are several ways you could handle this. My favorite is to override hitTest:withEvent: in a view that is a common superview (maybe indirectly) to the conflicting views (sounds like you call these A and B). For example, something like this (here A and B are UIView pointers, where B is the "hidden" one, that is normally ignored):
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
CGPoint pointInB = [B convertPoint:point fromView:self];
if ([B pointInside:pointInB withEvent:event])
return B;
return [super hitTest:point withEvent:event];
}
You could also modify the pointInside:withEvent: method as gyim suggested. This lets you achieve essentially the same result by effectively "poking a hole" in A, at least for touches.
Another approach is event forwarding, which means overriding touchesBegan:withEvent: and similar methods (like touchesMoved:withEvent: etc) to send some touches to a different object than where they first go. For example, in A, you could write something like this:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
if ([self shouldForwardTouches:touches]) {
[B touchesBegan:touches withEvent:event];
}
else {
// Do whatever A does with touches.
}
}
However, this will not always work the way you expect! The main thing is that built-in controls like UIButton will always ignore forwarded touches. Because of this, the first approach is more reliable.
There's a good blog post explaining all this in more detail, along with a small working Xcode project to demo the ideas, available here:
http://bynomial.com/blog/?p=74
You have to set upperView.userInteractionEnabled = NO;, otherwise the upper view will intercept the touches.
The Interface Builder version of this is a checkbox at the bottom of the View Attributes panel called "User Interaction Enabled". Uncheck it and you should be good to go.
Custom implementation of pointInside:withEvent: indeed seemed like the way to go, but dealing with hard-coded coordinates seemed odd to me. So I ended up checking whether the CGPoint was inside the button CGRect using the CGRectContainsPoint() function:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
return (CGRectContainsPoint(disclosureButton.frame, point));
}
Recently I wrote a class that will help with just that. Using it as a custom class for a UIButton or UIView will pass on touch events that land on a transparent pixel.
This solution is somewhat better than the accepted answer because you can still click a UIButton that is under a semi-transparent UIView, while the non-transparent part of the UIView will still respond to touch events.
As you can see in the GIF, the Giraffe button is a simple rectangle but touch events on transparent areas are passed on to the yellow UIButton underneath.
Link to class
I guess I'm a bit late to this party, but I'll add this possible solution:
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
UIView *hitView = [super hitTest:point withEvent:event];
if (hitView != self) return hitView;
return [self superview];
}
If you use this code to override a custom UIView's standard hitTest function, it will ignore ONLY the view itself. Any subviews of that view will return their hits normally, and any hits that would have gone to the view itself are passed up to its superview.
-Ash
Just riffing on the Accepted Answer and putting this here for my reference. The Accepted Answer works perfectly. You can extend it like this to allow your view's subviews to receive the touch, OR pass it on to any views behind us:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
// If one of our subviews wants it, return YES
for (UIView *subview in self.subviews) {
CGPoint pointInSubview = [subview convertPoint:point fromView:self];
if ([subview pointInside:pointInSubview withEvent:event]) {
return YES;
}
}
// otherwise return NO, as if userInteractionEnabled were NO
return NO;
}
Note: You don't even have to do recursion on the subview tree, because each pointInside:withEvent: method will handle that for you.
This approach is quite clean and allows that transparent subviews are not reacting to touches as well. Just subclass UIView and add the following method to its implementation:
@implementation PassThroughUIView

- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event {
    for (UIView *v in self.subviews) {
        CGPoint localPoint = [v convertPoint:point fromView:self];
        if (v.alpha > 0.01 && ![v isHidden] && v.userInteractionEnabled && [v pointInside:localPoint withEvent:event])
            return YES;
    }
    return NO;
}

@end
Setting the userInteractionEnabled property to NO might help. E.g.:
UIView *topView = [[UIView alloc] initWithFrame:[self bounds]];
[self addSubview:topView];
[topView setUserInteractionEnabled:NO];
(Note: In the code above, 'self' refers to a view)
This way, you only display the topView, but it won't get user input. All those user touches will pass through this view and the bottom view will respond to them. I'd use this topView for displaying transparent images, or animating them.
My solution here:
-(UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
CGPoint pointInView = [self.toolkitController.toolbar convertPoint:point fromView:self];
if ([self.toolkitController.toolbar pointInside:pointInView withEvent:event]) {
self.userInteractionEnabled = YES;
} else {
self.userInteractionEnabled = NO;
}
return [super hitTest:point withEvent:event];
}
Hope this helps
There's something you can do to intercept the touch in both views.
Top view:
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
// Do code in the top view
[bottomView touchesBegan:touches withEvent:event]; // And pass them on to bottomView
// You have to implement the code for touchesBegan, touchesEnded, touchesCancelled in top/bottom view.
}
But that's the idea.
Here is a Swift version:
override func pointInside(point: CGPoint, withEvent event: UIEvent?) -> Bool {
return !CGRectContainsPoint(buttonView.frame, point)
}
Swift 3
override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
for subview in subviews {
if subview.frame.contains(point) {
return true
}
}
return false
}
I have never built a complete user interface using the UI toolkit, so I don't have much experience with it. Here is what I think should work though.
Every UIView, and thus the UIWindow, has a subviews property, which is an NSArray containing all of its subviews.
The first subview you add to a view will receive index 0, the next index 1, and so forth. You can also replace addSubview: with insertSubview:atIndex: or insertSubview:aboveSubview: and similar methods that determine the position of your subview in the hierarchy.
So check your code to see which view you add first to your UIWindow. That will be 0, the other will be 1.
Now, from one of your subviews, to reach another you would do the following:
UIView * theOtherView = [[[self superview] subviews] objectAtIndex: 0];
// or using the properties syntax
UIView * theOtherView = [self.superview.subviews objectAtIndex:0];
Let me know if that works for your case!
(below this marker is my previous answer):
If views need to communicate with each other, they should do so via a controller (that is, using the popular MVC model).
When you create a new view, you can make sure it registers itself with a controller.
So the technique is to make sure your views register with a controller (which can store them by name or whatever you prefer in a Dictionary or Array). Either you can have the controller send a message for you, or you can get a reference to the view and communicate with it directly.
If your view doesn't have a link back to the controller (which may be the case) then you can make use of singletons and/or class methods to get a reference to your controller.
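A minimal sketch of that registration idea (ViewRegistry and its method names are purely illustrative, not an existing API):

@interface ViewRegistry : NSObject
+ (ViewRegistry *)sharedRegistry;
- (void)registerView:(UIView *)view forName:(NSString *)name;
- (UIView *)viewForName:(NSString *)name;
@end

@implementation ViewRegistry {
    NSMutableDictionary *_viewsByName;
}

+ (ViewRegistry *)sharedRegistry {
    static ViewRegistry *shared = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{ shared = [[ViewRegistry alloc] init]; });
    return shared;
}

- (void)registerView:(UIView *)view forName:(NSString *)name {
    if (_viewsByName == nil) _viewsByName = [[NSMutableDictionary alloc] init];
    [_viewsByName setObject:view forKey:name];
}

- (UIView *)viewForName:(NSString *)name {
    return [_viewsByName objectForKey:name];
}

@end

// A view registers itself (e.g. in its initializer), and any other object
// can later look it up through the shared controller:
[[ViewRegistry sharedRegistry] registerView:self forName:@"mapView"];
UIView *other = [[ViewRegistry sharedRegistry] viewForName:@"overlayView"];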
I think the right way is to use the view chain built into the view hierarchy.
For your subviews that are pushed onto the main view, do not use the generic UIView, but instead subclass UIView (or one of its variants like UIImageView) to make MyView : UIView (or whatever supertype you want, such as UIImageView). In the implementation for MyView, implement the touchesBegan: method. This method will then get invoked when that view is touched. All you need to have in that implementation is an instance method:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    // cannot handle this event here; pass it off to the superview
    [self.superview touchesBegan:touches withEvent:event];
}
This touchesBegan: is a responder API, so you don't need to declare it in your public or private interface; it's one of those magic APIs you just have to know about. The self.superview call will bubble the request up, eventually reaching the view controller. In the view controller, then, implement touchesBegan: to handle the touch.
Note that the touch location (CGPoint) is automatically adjusted relative to the encompassing view for you as it is bounced up the view hierarchy chain.
Just want to post this, because I had a somewhat similar problem and spent a substantial amount of time trying to implement the answers here without any luck. What I ended up doing:
for(UIGestureRecognizer *recognizer in topView.gestureRecognizers)
{
recognizer.delegate=self;
[bottomView addGestureRecognizer:recognizer];
}
topView.abView.userInteractionEnabled=NO;
and implementing UIGestureRecognizerDelegate :
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldRecognizeSimultaneouslyWithGestureRecognizer:(UIGestureRecognizer *)otherGestureRecognizer
{
return YES;
}
Bottom view was a navigation controller with number of segues and I had sort of a door on top of it that could close with pan gesture. Whole thing was embedded in yet another VC. Worked like a charm. Hope this helps.
Swift 4 implementation of the hitTest-based solution:
override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
    let hitView = super.hitTest(point, with: event)
    if hitView == self { return nil }
    return hitView
}
Derived from Stuart's excellent, and mostly foolproof answer, and Segev's useful implementation, here is a Swift 4 package that you can drop into any project:
extension UIColor {
static func colorOfPoint(point:CGPoint, in view: UIView) -> UIColor {
var pixel: [CUnsignedChar] = [0, 0, 0, 0]
let colorSpace = CGColorSpaceCreateDeviceRGB()
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
let context = CGContext(data: &pixel, width: 1, height: 1, bitsPerComponent: 8, bytesPerRow: 4, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
context!.translateBy(x: -point.x, y: -point.y)
view.layer.render(in: context!)
let red: CGFloat = CGFloat(pixel[0]) / 255.0
let green: CGFloat = CGFloat(pixel[1]) / 255.0
let blue: CGFloat = CGFloat(pixel[2]) / 255.0
let alpha: CGFloat = CGFloat(pixel[3]) / 255.0
let color = UIColor(red:red, green: green, blue:blue, alpha:alpha)
return color
}
}
And then with hitTest:
override func hitTest(_ point: CGPoint, with event: UIEvent?) -> UIView? {
guard UIColor.colorOfPoint(point: point, in: self).cgColor.alpha > 0 else { return nil }
return super.hitTest(point, with: event)
}