In Xcode's Interface Builder, when one sets the UIBarStyle of a UIToolbar (to BlackTranslucent, for example), the UIBarButtonItems match their background images to it. How does a UIBarButtonItem know which style it should use?
I'm trying to put a custom colored image on top of the regular background tile (programmatically merge a given image on top of the background image). I want the code to be generalizable enough to handle all UIBarStyles. That means I want to know when the UIBarButtonItem decides which background to use, and intercept it so I can compose the button image on the fly.
Without being able to see Apple's implementation of UIBarButtonItem, I posit that the items do not, in fact, know the style of their UIToolbar. If you look closely, they have the same alpha as their toolbar, and the same overlay (indicative of a subview). Therefore, any image that sits below this highlighted layer, added as a subview, should conform to the UIToolbar's style. If you want to use multiple images, though (one for each barStyle), you can determine the style with self.myToolbar.barStyle and plan accordingly at -viewDidLoad time. As for true image drawing, subclass UIToolbar, override -drawRect:, and draw with UIImage's -drawInRect:.
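A minimal sketch of that last suggestion, assuming hypothetical image assets named "toolbar-bg-default" and "toolbar-bg-black" in the bundle:

#import <UIKit/UIKit.h>

@interface BackgroundToolbar : UIToolbar
@end

@implementation BackgroundToolbar

- (void)drawRect:(CGRect)rect
{
    // Pick a background tile based on the toolbar's current bar style,
    // then draw it across the toolbar's bounds.
    NSString *imageName = (self.barStyle == UIBarStyleDefault)
        ? @"toolbar-bg-default"
        : @"toolbar-bg-black";
    [[UIImage imageNamed:imageName] drawInRect:self.bounds];
}

@end

You could composite your custom colored image over that tile inside -drawRect: as well, which keeps the result correct for whatever barStyle the toolbar is given.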
I'm trying to get my NSScrollView (and thus the NSOutlineView it contains) to use a blurred NSVisualEffectView with the behind-window blending effect.
I've successfully made the NSVisualEffectView the container view and placed my scroll view inside it as a subview. This seems to work fine (as long as I make all my table cells, the table, the NSScrollView, etc. transparent).
However, I've now turned 'Reduce transparency' ON under the Accessibility options, and all of a sudden I have a black background behind my NSScrollView. I tried subclassing the visual effect view in order to override the drawRect: method so that I could draw my own background, but I've just learned this isn't possible or recommended.
How do I detect that Reduce Transparency is ON and how do I make my scrollview opaque dynamically?
Took me a while to find it, but there are a couple of new methods on NSWorkspace that you can use to find out about the preferences for OS X Yosemite’s new accessibility features. -[NSWorkspace accessibilityDisplayShouldReduceTransparency] is the one you want.
By listening for NSWorkspaceAccessibilityDisplayOptionsDidChangeNotification you can find out when that preference changes. Note that you'll have to register for that notification on the correct NSNotificationCenter, namely [[NSWorkspace sharedWorkspace] notificationCenter].
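A minimal sketch of both pieces, assuming an NSViewController subclass with a hypothetical -applyOpaqueBackground: helper that swaps between the vibrant and the opaque look:

- (void)viewDidLoad
{
    [super viewDidLoad];

    // Check the preference once up front...
    [self applyOpaqueBackground:
        [[NSWorkspace sharedWorkspace] accessibilityDisplayShouldReduceTransparency]];

    // ...and watch for changes on NSWorkspace's own notification center,
    // not the default NSNotificationCenter.
    [[[NSWorkspace sharedWorkspace] notificationCenter]
        addObserver:self
           selector:@selector(displayOptionsDidChange:)
               name:NSWorkspaceAccessibilityDisplayOptionsDidChangeNotification
             object:nil];
}

- (void)displayOptionsDidChange:(NSNotification *)note
{
    [self applyOpaqueBackground:
        [[NSWorkspace sharedWorkspace] accessibilityDisplayShouldReduceTransparency]];
}

Remember to remove the observer from that same notification center when the controller goes away.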
For now I've ended up subclassing a parent NSView that contains everything and setting a background color on it. This way, when Reduce Transparency is ON, the NSVisualEffectView becomes transparent and the color I end up seeing is the one visible below it. This seems to work fine for now.
Even though we have SwiftUI nowadays, in classic Cocoa you can still subclass NSScrollView and use ...
// Report a clear background so whatever is behind the scroll view shows through.
- (NSColor *)backgroundColor {
    return NSColor.clearColor;
}

// Tell AppKit not to draw a background at all.
- (BOOL)drawsBackground {
    return NO;
}
or set the properties of your NSScrollView accordingly if you don't want to subclass, like...
yourscrollview.drawsBackground = NO;
yourscrollview.backgroundColor = NSColor.clearColor;
This forces your scroll view to show what is below it, and with it the blur effect or opaque color of the view or window that encloses your NSScrollView.
This solution has the benefit that you do not have to observe NSWorkspace for any notification or watch the vibrancy effect.
I have a UICollectionView, and it works great, but I have a question.
In my view I have a UICollectionViewCell, and inside that I have a UIImageView.
The cell is linked to a blank view controller, and inside that I have a UIScrollView (for managing the zoom) and a UIImageView (the full-size image).
I wondered if there was some delegate or something that could handle the image opening process automatically (with zoom, etc.).
Right now I'm handling the zoom effect with the UIScrollViewDelegate method viewForZoomingInScrollView:... but the result is very poor, definitely not fluid!
There's no built-in view for doing what you want, no. You're doing the right thing with a UIImageView inside a UIScrollView. If it's not very fluid, that's likely because your image is huge. The way to get around that is to load different images for different zoom levels: as you zoom in, listen to the UIScrollViewDelegate method scrollViewDidZoom: and swap in a better-resolution image. Or take a look at CATiledLayer.
Note that this has nothing to do with your UICollectionView.
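A rough sketch of the resolution-swapping idea, assuming the controller keeps an imageView outlet, a showingFullImage BOOL property, and "photo-medium"/"photo-large" assets (all hypothetical names):

- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
    return self.imageView;
}

- (void)scrollViewDidZoom:(UIScrollView *)scrollView
{
    // Past some zoom threshold, swap in the full-resolution image once;
    // swap back to the smaller one when zoomed out again.
    if (scrollView.zoomScale > 1.5 && !self.showingFullImage) {
        self.imageView.image = [UIImage imageNamed:@"photo-large"];
        self.showingFullImage = YES;
    } else if (scrollView.zoomScale <= 1.5 && self.showingFullImage) {
        self.imageView.image = [UIImage imageNamed:@"photo-medium"];
        self.showingFullImage = NO;
    }
}

The 1.5 threshold is illustrative; in practice you'd pick it based on the image's pixel size versus the screen.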
I'm trying to write my first iPhone app, and I'm running into a sort of design struggle. What I want to do is have a grid of icons and when you touch one, all the icons above and to the left will "activate" and all the ones below and to the right will "deactivate." If an icon is activated it shows one picture, and if it's not activated it shows another.
The problem I have is that I want to assign a gesture recognizer to each one of these individual icons, and when an icon is tapped, it needs to call a function that updates my grid of icons. But in order to update properly, the function needs to receive the location of the image in the grid as arguments, and there's no way to call a function with arguments as part of a gesture recognizer.
So really all I need to do is extend UIImageView to hold two extra integers and the grid it's contained in, and then I could have the following code:
imageView.userInteractionEnabled = YES;
UITapGestureRecognizer *tapgr = [[UITapGestureRecognizer alloc]
    initWithTarget:self action:@selector(handleTap)];
[imageView addGestureRecognizer:tapgr];
:
:
- (void)handleTap
{
    [self.grid updateTableFromRow:self.row andCol:self.col];
}
So I suppose this is one way of doing it, but I'm told that I'm not supposed to extend classes in Objective-C, that I should build them from the ground up. In that case, I would just make a custom view with all the properties and/or instance variables I need and fill this custom view with the UIImageView.
This is mostly fine, except when it comes to building my Storyboard. I put all the code that manages and creates this table of icons (programmatically) in another custom view, GridOfIconsView. So on the Storyboard I drag out a custom view and set it to be a GridOfIconsView, but then I just see a big white rectangle, and I really want to be able to visualize my app in the Storyboard. I know that I can drag out the actual image files I use for the icons onto the Storyboard and set them to be a custom view, but then how does that work? Is that image just a background to the custom view? Would I be able to change it programmatically? So if the activated image was a green square but the deactivated one was red, and I initially dragged out red squares to the Storyboard, would I have access to that red square image?
And a more concerning issue is that I want to manage all these icons in a data structure, either as a 2D array (id icons[][]) or an NS(Mutable?)Array of NS(Mutable?)Arrays. Either way, how could I initialize the data structure to contain links to all of them? The grid will probably be 8x8 or 10x10, and there's no way I'm going to have 64-100 @property declarations connecting these icons. I'm thinking the only way to organize this sensibly is programmatically, but then how can I still visualize it in the Storyboard?
First, it's completely fine to extend classes in Objective-C, and it's done all the time. UIView, UIViewController, UIControl, etc. were all designed specifically to be subclassed and extended.
However, there are two ways you can do this that are much simpler than extending the class. First, you can have your grid as you already do, where each view has a gesture recognizer attached that calls back to a method on the view controller. Then, you can set a tag on each view (or even just use the view's frame for identification), and read that from the callback method (the gesture recognizer is passed to the callback method). For example, say you had a grid of 4x4 views and you simply numbered them starting in the top-left, advancing each column to the right and then each row, from 0 to 15; you could easily identify the view like this:
// The system automatically passes the gesture recognizer as the only parameter
- (void)handleTap:(UIGestureRecognizer *)gestureRecognizer
{
    NSInteger viewNumber = [[gestureRecognizer view] tag];
    // do something with this view
}
The other way you can do it is to have a single gesture recognizer on the parent view, and then in your -handleTap: callback, query the position of the tap in that view. If the position is within the frame of one of your icon views, you know which one and what to do with it. If not, you can ignore it. This solution requires slightly more math, but also requires far less maintenance and far fewer gesture recognizers to wire up. I would recommend it over tagging your views; a sketch follows.
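A sketch of that single-recognizer approach, assuming the icons are direct subviews of a gridView and are kept in an iconViews array (those names, kGridColumns, and updateGridFromRow:andCol: are all hypothetical):

// One recognizer on the parent view that holds the whole grid.
UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc]
    initWithTarget:self action:@selector(handleGridTap:)];
[self.gridView addGestureRecognizer:tap];

- (void)handleGridTap:(UIGestureRecognizer *)gestureRecognizer
{
    CGPoint location = [gestureRecognizer locationInView:self.gridView];
    for (UIView *icon in self.iconViews) {
        // Frames of direct subviews are in gridView's coordinate space,
        // so they can be compared against the tap location directly.
        if (CGRectContainsPoint(icon.frame, location)) {
            NSUInteger index = [self.iconViews indexOfObject:icon];
            [self updateGridFromRow:index / kGridColumns
                             andCol:index % kGridColumns];
            break;
        }
    }
}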
I am working on an app which actually works like MS Paint (something to draw lines, etc...).
I have a white UIView, basically, where the user draws. On top of this UIView I set up a UIImageView, which is gray, with 0.4 alpha. I want this UIImageView to be used as blotting paper. The idea is to disable touch on this area when the user puts the palm of his hand on it, so it's more comfortable to draw (multitouch is disabled, and with this "blotting paper" you won't draw something accidentally with your palm...).
Even if I bring the UIImageView to the front, on top of the view, and even if I disable user interaction on this UIImageView, it is still possible to draw on the UIView behind it (kind of strange!).
I do not understand what's happening: it seems that the image is transparent and that the UIView "behind" is still active, even though it's overlaid by the UIImageView?!
Any help/indication/idea would be much appreciated! Thank you :-)
Have you set the "userInteractionEnabled" property of the UIImage to "NO"?
You may actually want to do the opposite. When you disable user interaction or touches, the view basically becomes invisible to touches and they are passed on to the next view.
In your case you do want userInteractionEnabled because you want the view to catch those touches.
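A small sketch of that, assuming the gray overlay is a UIImageView (blotterFrame and the "gray-tile" asset are hypothetical):

// UIImageView has userInteractionEnabled == NO by default, which is why
// touches fall straight through to the drawing view underneath.
// Turning it ON makes the overlay swallow them.
UIImageView *blotter = [[UIImageView alloc] initWithFrame:blotterFrame];
blotter.image = [UIImage imageNamed:@"gray-tile"];
blotter.alpha = 0.4;
blotter.userInteractionEnabled = YES;
[self.view addSubview:blotter];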
You have to set the user interaction flag on the UIImageView, not the UIImage, and it should work.
Edit:
Or you could be sneaky and just add an empty view over it. Use the same frame size so it overlaps perfectly, and that's it. You'll be able to see everything you need, and since it's not a subview of the drawing view, there will be no interaction: its touches will get registered but won't have any effect. :P
No better ideas unless you post some of your code...
OK, so I managed to do what I wanted to! YAY!
I have 3 different classes:
StrokesViewController (UIViewController) - the view controller
StrokesView (UIView) - the view where the user draws the strokes.
BlottingPaper (UIView) - the blotting paper.
I have an XIB file "linked" to all three.
I created this new class, called "BlottingPaper", a subclass of UIView. The .h and .m files are actually empty (apart from #import <Foundation/Foundation.h>).
User interaction is enabled on BlottingPaper.
I do not use exclusive touch on this class.
In the XIB file, I just put a view on top of StrokesView and link it to BlottingPaper (modifying the alpha as I want, blablabla...)
And that's it! When I put the palm of my hand on it, nothing is drawn in the area under my hand, but I can still draw with my finger on the rest of the StrokesView!
In addition to Dancreek's response, you should be setting buvard.userInteractionEnabled = YES; so that it captures interaction.
You should also set buvard.exclusiveTouch = YES; so that buvard is the only view which will receive touch events.
When you remove buvard you should set buvard.exclusiveTouch = NO; so that other views will regain their ability to receive touches.
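Sketched out (buvard is the blotting-paper view from the question; the surrounding view controller context is assumed):

// Showing the blotting paper
buvard.userInteractionEnabled = YES; // catch touches instead of letting them fall through
buvard.exclusiveTouch = YES;         // no other view receives touches while it's up
[self.view addSubview:buvard];

// Removing it later
buvard.exclusiveTouch = NO;          // let other views receive touches again
[buvard removeFromSuperview];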
I'm not that great with Core Graphics, but I am drawing text to the screen in my CGContext. I do this immediately after I add a standard, opaque UIView to my user interface.
Does anyone know why the text I draw after I add my UIView is still at the "bottom" of the user interface?
Thanks in advance.
iOS, like OS X, uses a compositing window manager. Adding and removing UIViews sets their position in the view hierarchy; when and how they're drawn is managed separately. There is no guaranteed relation between when a view is added and when it'll be drawn, and no reason to guarantee one. The content of a view is cached and composited as required from that copy.
If you want to do custom drawing, create a custom UIView subclass, add it to the hierarchy according to where you want it to appear, and do your drawing in drawRect: (or one of the other override points if you want to render off the main thread).
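A minimal sketch of such a subclass, using the modern NSString drawing API (the text, font, and position are placeholders):

// A view that draws its own text. Add an instance above the opaque view
// in the hierarchy; UIKit calls -drawRect: whenever it needs repainting.
@interface TextOverlayView : UIView
@end

@implementation TextOverlayView

- (void)drawRect:(CGRect)rect
{
    NSDictionary *attributes = @{
        NSFontAttributeName: [UIFont systemFontOfSize:18.0],
        NSForegroundColorAttributeName: [UIColor blackColor]
    };
    [@"Hello, world" drawAtPoint:CGPointMake(10.0, 10.0)
                  withAttributes:attributes];
}

@end

Because the view itself is part of the hierarchy, its content stays above or below other views strictly according to its position among its siblings, not according to when you happened to draw.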