I have a transparent NSWindow with a simple icon in it that can be dragged around the screen.
My code is:
.h:

@interface CustomView : NSWindow {
}
@property (assign) NSPoint initialLocation;
@end

.m:

@synthesize initialLocation;
- (id)initWithContentRect:(NSRect)contentRect
                styleMask:(NSUInteger)aStyle
                  backing:(NSBackingStoreType)bufferingType
                    defer:(BOOL)flag {
    self = [super initWithContentRect:contentRect
                            styleMask:NSBorderlessWindowMask
                              backing:bufferingType
                                defer:flag];
    if (self) {
        [self setBackgroundColor:[NSColor clearColor]];
        [self setOpaque:NO];
        [NSApp activateIgnoringOtherApps:YES];
    }
    return self;
}
- (void)mouseDragged:(NSEvent *)theEvent {
    NSRect screenVisibleFrame = [[NSScreen mainScreen] visibleFrame];
    NSRect windowFrame = [self frame];
    NSPoint newOrigin = windowFrame.origin;

    // Get the mouse location in window coordinates.
    NSPoint currentLocation = [theEvent locationInWindow];

    // Update the origin with the difference between the new mouse location and the old mouse location.
    newOrigin.x += (currentLocation.x - initialLocation.x);
    newOrigin.y += (currentLocation.y - initialLocation.y);

    // Don't let the window get dragged up under the menu bar.
    if ((newOrigin.y + windowFrame.size.height) > (screenVisibleFrame.origin.y + screenVisibleFrame.size.height)) {
        newOrigin.y = screenVisibleFrame.origin.y + (screenVisibleFrame.size.height - windowFrame.size.height);
    }

    // Move the window to the new location.
    [self setFrameOrigin:newOrigin];
}
- (void)mouseDown:(NSEvent *)theEvent {
    // Get the mouse location in window coordinates.
    self.initialLocation = [theEvent locationInWindow];
}
I want to display an NSPopover when the user clicks the image in the transparent window. But, as you can see in the code, the mouseDown event is already used to record the mouse location (the code above was taken from an example).
How can I tell whether the user pressed the icon to drag it around or simply clicked it to display the NSPopover?
Thank you
This is the classic situation of receiving the defining event after you need it in order to begin the action. Specifically, you can't know if the mouseDown is the beginning of a drag until after the drag starts. However, you want to act upon that mouseDown if a drag doesn't start.
In iOS (I realize that's not directly relevant to the code here, but it is instructional), there's an entire API built around letting iOS attempt to make these kinds of decisions for you. The entire Gesture system is based on the idea that the user starts to do something that might be one of many different actions, and thus needs to be resolved over time, possibly resulting in cancelled actions during the tracking period.
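For comparison, a minimal sketch of that iOS mechanism (someView and the handler selectors are placeholders, not names from the question):

// Two recognizers compete for the same touches; UIKit tracks the touch and
// resolves the ambiguity itself: the tap "fails" once the finger moves far
// enough, at which point the pan takes over.
UITapGestureRecognizer *tap =
    [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleTap:)];
UIPanGestureRecognizer *pan =
    [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];
[someView addGestureRecognizer:tap];
[someView addGestureRecognizer:pan];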
On OS X, we don't have many systems to help out with this, so if something needs to handle a click and a drag differently, you'll need to defer the next action until a guard time has passed; if that time passes without a drag, you can perform the original action. In this case, you will likely want to do the following (a sketch follows these steps):
In mouseDown, start an NSTimer set for an appropriate guard time (not so long that people will accidentally move the pointer first, and not so short that it fires before the user drags) that will call you back later to trigger the popover.
In mouseDragged, use a guard area to make sure that a little twitch from the user doesn't count as a drag. This can be irritating, as it sometimes means dragging something farther than seems necessary before the drag begins, so you'll want to either find a magic constant somewhere or do some experimentation. When the guard area is exceeded, cancel the NSTimer with [timer invalidate] and begin your legitimate drag operation.
In the timer's callback, display your popover. If the user dragged, the NSTimer will have been invalidated, so it won't fire and the popover won't be displayed.
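A minimal sketch of those three steps, assuming the window class above gains a clickTimer property and shows the popover from the timer callback (the property name, guard time, and guard distance are illustrative, not from the original code):

static const NSTimeInterval kClickGuardTime = 0.2; // tune experimentally
static const CGFloat kDragGuardDistance = 4.0;     // points; tune experimentally

- (void)mouseDown:(NSEvent *)theEvent {
    self.initialLocation = [theEvent locationInWindow];
    // Step 1: arm a timer; if it fires, the mouseDown was a plain click.
    self.clickTimer = [NSTimer scheduledTimerWithTimeInterval:kClickGuardTime
                                                       target:self
                                                     selector:@selector(clickTimerFired:)
                                                     userInfo:nil
                                                      repeats:NO];
}

- (void)mouseDragged:(NSEvent *)theEvent {
    NSPoint currentLocation = [theEvent locationInWindow];
    CGFloat dx = currentLocation.x - self.initialLocation.x;
    CGFloat dy = currentLocation.y - self.initialLocation.y;
    // Step 2: ignore twitches inside the guard area.
    if (self.clickTimer && hypot(dx, dy) < kDragGuardDistance) {
        return;
    }
    // A real drag began: cancel the pending click action.
    [self.clickTimer invalidate];
    self.clickTimer = nil;
    // ... the window-moving code from the question goes here ...
}

- (void)clickTimerFired:(NSTimer *)timer {
    // Step 3: no drag happened within the guard time, so treat it as a click.
    self.clickTimer = nil;
    // Show the NSPopover here.
}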
Related
I have a scene (SKScene) in which, whenever a click is performed, a ball (SKSpriteNode) is dropped from that point.
Now what I want to do is, whenever a click on the ball is performed, the ball should bounce or something.
What I have in GameScene.m is
- (void)mouseDown:(NSEvent *)theEvent {
    CGPoint location = [theEvent locationInNode:self];
    [self addBallAtLocation:location];
}

- (void)addBallAtLocation:(CGPoint)location {
    Ball *ball = [Ball new];
    ball.position = location;
    [self addChild:ball];
}
And in Ball.m I add the bounce action to the mouseDown method:
- (void)mouseDown:(NSEvent *)theEvent {
    CGPoint point = [theEvent locationInNode:self];
    CGVector impulse = CGVectorMake(point.x * 5.0, point.y * 5.0);
    [self.physicsBody applyImpulse:impulse];
}
Right now a new ball is created even when I click on an existing ball. I thought the ball's mouseDown method would be called since I clicked on it, and that the scene's mouseDown method would be called only if the ball didn't handle it.
P.S. I have a feeling this could be solved with a delegate. I could very easily be wrong, but since I am not totally clear on how to use them, I didn't try. If you think that might be a good way to resolve this issue, please do use one, as it may help me understand them.
By default, SKSpriteNode has its userInteractionEnabled property set to NO (presumably for performance reasons). You simply have to set it to YES on the ball and its event-handling methods will be called :)
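For instance, a sketch of Ball's initializer (the texture name is made up):

// In Ball.m -- opt the node into receiving mouse events.
- (instancetype)init {
    self = [super initWithImageNamed:@"ball"]; // illustrative texture name
    if (self) {
        self.userInteractionEnabled = YES; // without this, Ball's mouseDown: is never called
    }
    return self;
}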
I have been trying to make a simple drawing program. Recently, I figured out how to draw shapes in a custom view for this purpose. My problem is that I have to draw everything at a single point in time. I don't know if that actually makes sense, but it seems to me that the drawRect method is called only once, and that "once" is on startup.
Here is my code so far:
Header file.
NSBezierPath *thePath;
NSColor *theColor;
NSTimer *updateTimer;
NSPoint *mousePoint;
int x = 0;
int y = 0;

@interface test : NSView {
    IBOutlet NSView *myView;
}
@property (readwrite) NSPoint mousePoint;
@end
Then, the implementation in the .m file.
@implementation test
@synthesize mousePoint;

- (void)mouseDown:(NSEvent *)someEvent {
    CGEventRef ourEvent = CGEventCreate(NULL);
    mousePoint = CGEventGetLocation(ourEvent);
    NSLog(@"Location: x = %f, y = %f", (float)mousePoint.x, (float)mousePoint.y);
    thePath = [NSBezierPath bezierPathWithRect:NSMakeRect(mousePoint.x, mousePoint.y, 10, 10)];
    theColor = [NSColor blackColor];
}

- (void)mouseDragged:(NSEvent *)someEvent {
    mousePoint = [someEvent locationInWindow];
    NSLog(@"Location: x = %f, y = %f", (float)mousePoint.x, (float)mousePoint.y);
    x = mousePoint.x;
    y = mousePoint.y;
    [myView setNeedsDisplay:YES];
}
- (void)drawRect:(NSRect)rect {
    NSLog(@"oisudfghio");
    thePath = [NSBezierPath bezierPathWithRect:NSMakeRect(x, y, 10, 10)];
    theColor = [NSColor blackColor];
    [theColor set];
    [thePath fill];
}

@end
On startup, it draws a rectangle in the bottom left corner, like it should. The problem is that the drawRect method is only called on startup; it just won't fire again no matter what I do.
EDIT: I have just updated the code. I hope it helps.
SECOND EDIT: I have really simplified the code. I hope this helps a bit more.
Short Answer:
When your view's state is changed such that it would draw differently, you need to invoke -[NSView setNeedsDisplay:]. That will cause your view's drawRect: method to be called in the near future. You should never call drawRect: yourself. That's a callback that's invoked on your behalf.
When events occur in your application that should change your drawing, capture what happened into instance variables, invoke setNeedsDisplay:, and then do the new drawing later, when drawRect: is called.
Long Answer:
In Cocoa, window drawing is done with a pull/invalidation model. That means the window keeps track of whether or not it needs to draw, and when it thinks it needs to draw, it draws once per pass through the event loop.
If you're not familiar with event loops, you can read about them on Wikipedia.
At the top level of the application you can imagine that Cocoa is doing this:
while (1) {
    NSArray *events = [self waitForEvents];
    [self doEvents:events];
}
Where events are things like the mouse moving, the keyboard being pressed, and timers going off.
NSView has a method, -[NSView setNeedsDisplay:], which takes a BOOL parameter. When that method is invoked with YES, the window invalidates the drawing region for that view and schedules a future redrawing event - but only if there isn't already one scheduled.
The next time the run loop spins, the views that were marked with setNeedsDisplay: are redrawn. This means you can call setNeedsDisplay: several times in a row and the drawing will be batched into one future call of drawRect:. This is important for performance, and it means you can, for example, change the frame of a view several times in one method, yet it will only be drawn once, at the final location.
The code in your example has a couple of problems. The first is that all drawing code must be in the drawRect: method, or in a method called from drawRect:, so the drawing code you've placed in your other methods has no effect at runtime. The second is that your code should never call drawRect: directly; the framework calls it automatically (if necessary) once per event cycle.
Instead of hardcoding all the values, use instance variables for the things you want to change at runtime, such as the drawing color and rectangle. Then, in your mouseDragged: method, send the view (myView in your example) a setNeedsDisplay: message. If you pass YES as the argument, the framework will call drawRect: for you; a sketch of the resulting pattern follows.
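A condensed sketch of that pattern applied to the view above (since the event handlers live in the view itself, self is the view to invalidate; this assumes mousePoint is a plain NSPoint property):

// Event handlers only record state and invalidate.
- (void)mouseDragged:(NSEvent *)someEvent {
    self.mousePoint = [someEvent locationInWindow];
    [self setNeedsDisplay:YES]; // schedule a redraw; don't draw here
}

// All drawing happens in drawRect:, based on the recorded state.
- (void)drawRect:(NSRect)rect {
    NSBezierPath *path = [NSBezierPath bezierPathWithRect:
        NSMakeRect(self.mousePoint.x, self.mousePoint.y, 10, 10)];
    [[NSColor blackColor] set];
    [path fill];
}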
I am making an image editor (just a simple editor for a program I am making), and I need to find the position of the mouse. Is it possible to do this in Objective-C? If so, how?
EDIT: I just thought I should mention that I have done some research on this and I haven't found anything that works. The code I have in my header file is as follows:
#import <Cocoa/Cocoa.h>
@interface test : NSWindow <NSWindowDelegate> {
}
@end
I can handle any outlets and actions that are needed; I just need to know how to find the position of the mouse.
If you are catching it through an event, such as mouseDown, it will look like this:
- (void)mouseDown:(NSEvent *)theEvent {
    NSPoint mouseDownPos = [theEvent locationInWindow];
}
Otherwise, use:
[NSEvent mouseLocation];
EDIT: (Sorry, I wrote NSPoint *, which is wrong, since it's a struct)
Inside a mouse event handler (mouseDown:, mouseUp:, mouseMoved:, etc.), you can ask the event for its locationInWindow. If you need the mouse location at some arbitrary time (generally you won't want to, since it's rare for a program to have a one-off need to discover the mouse location), you can call [NSEvent mouseLocation], which returns the mouse's location at that moment, in screen coordinates.
If you want the coordinates from the origin of the view itself, use:
NSPoint locationInView = [self convertPoint:[theEvent locationInWindow] fromView:nil];
You can use this NSPoint directly to draw in the NSView's coordinate system. Otherwise, [theEvent locationInWindow] gives you the coordinates of the mouse in the window, which is probably not what you want.
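If you start from [NSEvent mouseLocation] instead of an event, the point arrives in screen coordinates and needs one extra conversion through the window. A sketch, assuming self is an NSView sitting in a window (convertRectFromScreen: is available on 10.7 and later):

NSPoint screenPoint = [NSEvent mouseLocation];
// NSWindow converts rects, not points, from screen coordinates, so wrap the
// point in a zero-size rect and take the origin back out afterwards.
NSRect screenRect = NSMakeRect(screenPoint.x, screenPoint.y, 0, 0);
NSPoint windowPoint = [self.window convertRectFromScreen:screenRect].origin;
NSPoint viewPoint = [self convertPoint:windowPoint fromView:nil];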
Sorry to be a nuisance, but I have yet ANOTHER question. How would I do something like DeskLock from MacRabbit's DeskShade app? I've made the little window, and that's as far as I've come. I know how to "lock" the screen in 10.6 with PresentationOptions, but I don't want to risk it because last time it wouldn't let me back in ;]
EDIT: The DeskShade app is actually meant to cover your desktop, hiding all icons. It also lets you randomize wallpapers with several fades/swipes. One extra feature, DeskLock, presents a translucent black bezel (similar to the app switcher built into the Mac) with a lock icon, and you can place personal text. When you click the lock icon, it presents a modal dialog that asks for a password you can set. You can also just type this password without clicking anything, followed by the Enter key, and it unlocks the screen. This uses DeskShade's desktop-hiding feature as well.
Thanks!
To create the overlay window you have to subclass NSWindow and set its style mask and background color:
@implementation BigTransparentWindow

- (id)initWithContentRect:(NSRect)contentRect
                styleMask:(NSUInteger)windowStyle
                  backing:(NSBackingStoreType)bufferingType
                    defer:(BOOL)deferCreation
{
    self = [super initWithContentRect:contentRect
                            styleMask:NSBorderlessWindowMask // borderless, so the window can be transparent
                              backing:bufferingType
                                defer:deferCreation];
    if(self)
    {
        [self setOpaque:NO];
        [self setHasShadow:NO];
        [self setBackgroundColor:[[NSColor blackColor] colorWithAlphaComponent:0.5]];
    }
    return self;
}

@end
You then need to set the window's frame so that it covers all screens, and you need to set its window level appropriately:
- (IBAction)showWindow:(id)sender
{
    // Set the window frame so it covers all available screens.
    NSRect screensRect = NSZeroRect;
    for(NSScreen *screen in [NSScreen screens])
    {
        screensRect = NSUnionRect(screensRect, [screen frame]);
    }
    [yourWindow setFrame:screensRect display:YES];

    if(coverScreen)
    {
        // Put the window above all other windows.
        [yourWindow setLevel:kCGMaximumWindowLevel];
    }
    else
    {
        // Put the window just above the desktop icons.
        [yourWindow setLevel:kCGDesktopIconWindowLevel + 1];
    }
}
As you've mentioned, you can use the NSApplicationPresentationOptions settings for NSApp to control how the user can interact with the system. An easy way to test this without locking yourself out is to set an NSTimer that calls a method that pulls the app out of kiosk mode after a timeout period.
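A sketch of that safety hatch (the option set and the 30-second timeout are illustrative choices, not the only valid ones):

- (void)enterKioskMode {
    [NSApp setPresentationOptions:(NSApplicationPresentationHideDock |
                                   NSApplicationPresentationHideMenuBar |
                                   NSApplicationPresentationDisableProcessSwitching)];
    // While testing, schedule an automatic escape so a bug can't lock you out.
    [NSTimer scheduledTimerWithTimeInterval:30.0
                                     target:self
                                   selector:@selector(leaveKioskMode:)
                                   userInfo:nil
                                    repeats:NO];
}

- (void)leaveKioskMode:(NSTimer *)timer {
    [NSApp setPresentationOptions:NSApplicationPresentationDefault];
}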
In one of my iPhone projects, I have three views that you can move around by touching and dragging. However, I want to stop the user from moving two views at the same time with two fingers. I have therefore tried to experiment with UIView's exclusiveTouch property, without any success.
To understand how the property works, I created a brand new project with the following code in the view controller:
- (void)loadView {
    self.view = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 320, 460)];

    UIButton *a = [UIButton buttonWithType:UIButtonTypeInfoDark];
    [a addTarget:self action:@selector(hej:) forControlEvents:UIControlEventTouchUpInside];
    a.center = CGPointMake(50, 50);
    a.multipleTouchEnabled = YES;

    UIButton *b = [UIButton buttonWithType:UIButtonTypeInfoDark];
    [b addTarget:self action:@selector(hej:) forControlEvents:UIControlEventTouchUpInside];
    b.center = CGPointMake(200, 50);
    b.multipleTouchEnabled = YES;

    a.exclusiveTouch = YES;

    [self.view addSubview:a];
    [self.view addSubview:b];
}
- (void)hej:(id)sender
{
    NSLog(@"hej: %@", sender);
}
When running this, hej: gets called, with different senders, when pressing either of the buttons - even though one of them has exclusiveTouch set to YES. I've tried commenting out the multipleTouchEnabled lines, to no avail. Can somebody explain what I'm missing here?
Thanks,
Eli
From The iPhone OS Programming Guide:
Restricting event delivery to a single view:
By default, a view’s exclusiveTouch property is set to NO. If you set the property to YES, you mark the view so that, if it is tracking touches, it is the only view in the window that is tracking touches. Other views in the window cannot receive those touches. However, a view that is marked “exclusive touch” does not receive touches that are associated with other views in the same window. If a finger contacts an exclusive-touch view, then that touch is delivered only if that view is the only view tracking a finger in that window. If a finger touches a non-exclusive view, then that touch is delivered only if there is not another finger tracking in an exclusive-touch view.
It states that the exclusive touch property does NOT affect touches outside the frame of the view.
In the past, I've handled this by using the main view to track ALL touches on screen, instead of letting each subview track its own touches. The check looks like this:
if (CGRectContainsPoint(thesubviewIcareAbout.frame, theLocationOfTheTouch)) {
    // the subview has been touched, do what you want
}
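Fleshed out slightly into a handler on the main view (a sketch; thesubviewIcareAbout is the same placeholder as above):

// The parent view tracks every touch and hit-tests against the one
// subview we care about, instead of letting subviews track themselves.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint theLocationOfTheTouch = [touch locationInView:self];
    if (CGRectContainsPoint(thesubviewIcareAbout.frame, theLocationOfTheTouch)) {
        // the subview has been touched; start dragging it, and ignore
        // touches that begin elsewhere until this one ends
    }
}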
I was encountering an issue like this where taps on my UIButtons were getting passed through to a tap gesture recognizer attached to self.view, even though I was setting isExclusiveTouch to true on the buttons. After reviewing the material here, I added code to my tap gesture handler that checks whether the tap location falls inside any UIButton's frame while that button is visible on screen. If both conditions are true, the UIButton will already have handled the tap, and the event in my gesture recognizer can be ignored as a pass-through. The logic loops over all subviews, checking whether each is a UIButton and, if so, whether the tap was inside that view while it was visible.
@objc func singleTapped(tap: UITapGestureRecognizer)
{
    anyControlsBreakpoint()
    let tapPoint = tap.location(in: self.view)
    // Prevent taps inside of buttons from passing through to the singleTapped logic
    for subview in self.view.subviews
    {
        if subview is UIButton {
            if pointIsInFrameAndThatFrameIsVisible(view: subview, point: tapPoint)
            {
                return // Completely ignore pass-through events that were already handled by a UIButton
            }
        }
    }
    // ... the rest of the single-tap handling goes here ...
}
Below is the code that checks whether the point was inside a visible button. Note that I hide my buttons by setting their alpha to zero; if you use the isHidden property instead, your check should test that.
func pointIsInFrameAndThatFrameIsVisible(view: UIView, point: CGPoint) -> Bool
{
    return view.frame.contains(point) && view.alpha == 1
}