Move back button to the right in UINavigationController - objective-c

I need to move the stock back button a few pixels to the right: it sits so far to the left that, when the iPad is in certain cases, you cannot see the button at all.
What I've tried:
[self.navigationController.navigationItem.backBarButtonItem setImageInsets:UIEdgeInsetsMake(0, 20, 0, 0)];
[self.navigationItem.backBarButtonItem setImageInsets:UIEdgeInsetsMake(0, 20, 0, 0)];
[self.navigationController.navigationBar.backItem.backBarButtonItem setImageInsets:UIEdgeInsetsMake(0, 20, 0, 0)];
Is this possible?

For some reason, that solution isn't working in the latest versions of iOS (11+). What I did to move the back button image was write an extension:
extension UIImage {
    func withInsets(_ insets: UIEdgeInsets) -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(
            CGSize(width: size.width + insets.left + insets.right,
                   height: size.height + insets.top + insets.bottom),
            false,
            self.scale)
        let origin = CGPoint(x: insets.left, y: insets.top)
        self.draw(at: origin)
        let imageWithInsets = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return imageWithInsets
    }
}
Then, you can call your implementation like:
UINavigationBar.appearance().backIndicatorImage = UIImage(named: "icon-back")?.withInsets(UIEdgeInsets(top: 0, left: 10.0, bottom: 0, right: 0 ))
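Since the question itself is tagged objective-c, here is a rough Objective-C equivalent of the Swift extension above. This is an untested sketch: "icon-back" is the asset name taken from the answer, and the 10-point left inset is the same padding used there.

```objc
// Pad the back-indicator image on the left so it sits further right.
UIImage *icon = [UIImage imageNamed:@"icon-back"];
UIEdgeInsets insets = UIEdgeInsetsMake(0, 10.0, 0, 0);
CGSize paddedSize = CGSizeMake(icon.size.width + insets.left + insets.right,
                               icon.size.height + insets.top + insets.bottom);
UIGraphicsBeginImageContextWithOptions(paddedSize, NO, icon.scale);
[icon drawAtPoint:CGPointMake(insets.left, insets.top)];
UIImage *padded = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[UINavigationBar appearance].backIndicatorImage = padded;
// UIKit requires the transition mask image to be set alongside the indicator image.
[UINavigationBar appearance].backIndicatorTransitionMaskImage = padded;
```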


Why is setTitleEdgeInsets not working on iOS 13? Objective-C

I have a custom profile UIButton that has an image and the user's profile name as its title. I set the button's image and title, and also set the title and image edge insets. Before iOS 13 my code worked perfectly, but on iOS 13 devices setTitleEdgeInsets has no effect. When I debugged, the title inset value is correct, but the button does not apply the value I give it. You can find the code and the screenshot below.
@implementation NYTProfileButton

- (void)layoutSubviews
{
    [super layoutSubviews];
    self.imageView.frame = CGRectMake(0, 0, self.frame.size.height, self.frame.size.height);
    self.imageView.layer.masksToBounds = YES;
    self.imageView.layer.cornerRadius = self.frame.size.height / 2;
    UIEdgeInsets imageInsets = UIEdgeInsetsMake(0.0, 0.0, 0.0, self.frame.size.width - self.frame.size.height);
    self.imageEdgeInsets = imageInsets;
    UIEdgeInsets titleInsets = UIEdgeInsetsMake(0, self.frame.size.height - self.imageView.image.size.height + 5.0, 0, 0);
    self.titleEdgeInsets = titleInsets;
}

- (void)awakeFromNib
{
    [super awakeFromNib];
    self.imageView.contentMode = UIViewContentModeScaleAspectFill;
}

@end
I had the same issue: the image insets were not working for me on iOS 13.
I did not fix the issue itself, but I found a workaround that might be useful.
In my case the image was not visible, so I added a UIImageView subview to the button in which I wanted to display the image. (In your case you can add a UILabel subview at the required position.)
Code I added to display my image correctly:
if #available(iOS 13.0, *) {
    let advanceSearchChevronImage = UIImage(named: "General/chevronUp")
    advanceSearchChevronImageView = UIImageView(frame: CGRect(x: self.view.bounds.width - 50, y: 0, width: 50.0, height: advanceSearchButton.bounds.height))
    advanceSearchChevronImageView.contentMode = .center
    advanceSearchChevronImageView.image = advanceSearchChevronImage
    advanceSearchButton.addSubview(advanceSearchChevronImageView)
}
In your case you can set "" (an empty string) as the button title, add a label to the button as a subview, and position it as required.
Make these changes only on iOS 13.
Hopefully you can take a clue from this and solve the problem you are facing, if you are looking for a very quick fix.
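For the title case, the suggestion above might look something like the following. This is an untested Objective-C sketch: `profileButton`, the frame values, and the label text are placeholders you would adapt to your own view.

```objc
if (@available(iOS 13.0, *)) {
    // Clear the real title and position a plain UILabel in its place.
    [self.profileButton setTitle:@"" forState:UIControlStateNormal];
    CGFloat imageWidth = self.profileButton.frame.size.height; // square image area on the left
    UILabel *nameLabel = [[UILabel alloc] initWithFrame:
        CGRectMake(imageWidth + 5.0, 0,
                   self.profileButton.frame.size.width - imageWidth - 5.0,
                   self.profileButton.frame.size.height)];
    nameLabel.text = @"Profile name"; // placeholder text
    [self.profileButton addSubview:nameLabel];
}
```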

Is it possible to convert CGRect into CGPathRef? - Objective-C

I have a CGRect, and based on that I want to make a CGPathRef.
Is that doable in Objective-C?
let rect = CGRect(x: 1, y: 2, width: 3, height: 4)
let path = CGPath(rect: rect, transform: nil)
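Since the question asks about Objective-C: Core Graphics is a plain C API, so the same call is available directly. A minimal sketch:

```objc
CGRect rect = CGRectMake(1, 2, 3, 4);
// CGPathCreateWithRect builds an immutable path from the rect.
CGPathRef path = CGPathCreateWithRect(rect, NULL);
// ... use the path (e.g. assign it to a CAShapeLayer's path) ...
CGPathRelease(path); // Core Graphics objects are not managed by ARC
```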

OpenGL picking does not work in NSOpenGLView

I am trying to implement picking in an NSOpenGLView, but it does not work; this is the code.
I render only the objects that I need to pick, with no lights, and I render the scene the same way as in the normal render pass.
- (void)drawSeleccion
{
    NSSize size = [self bounds].size;
    GLuint selectBuf[16 * 4] = {0};
    GLint hits;
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glColor4f(1.0, 1.0, 1.0, 1.0);
    glSelectBuffer(16 * 4, selectBuf);
    glRenderMode(GL_SELECT);
    /// *** Start ***
    glInitNames();
    glPushName(0);
    // Viewport.
    glViewport(0, 0, size.width, size.height);
    // Projection matrix.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    float dist = 534;
    aspectRatio = size.width / size.height;
    nearDist = MAX(10, dist - 360.0);
    farDist = dist + 360.0;
    GLKMatrix4 m4 = GLKMatrix4MakePerspective(zoom, aspectRatio, nearDist, farDist);
    glMultMatrixf(m4.m);
    // Model view.
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // Look-at.
    GLKMatrix4 glm = GLKMatrix4MakeLookAt(0, dist, 0, 0, 0, 0, 0, 0, 1);
    glMultMatrixf(glm.m);
    // Rotate viewport.
    glRotated(rotate_x, 1, 0, 0);
    glRotated(rotate_z, 0, 0, 1);
    glTranslated(translate_x - frente * 0.5, fondo * -0.5, translate_z - alto * 0.5);
    /// Render model...
    glPushMatrix();
    for (int volNo = 0; volNo < [self.modeloOptimo.arr_volumenes count]; volNo++) {
        VolumenOptimo *volOp = self.modeloOptimo.arr_volumenes[volNo];
        glLoadName(volNo);
        volOp->verProblemas = false;
        [volOp drawVolumen];
    }
    glPopName();
    glPopMatrix();
    // Flush view.
    glFlush();
    hits = glRenderMode(GL_RENDER);
    processHits(hits, selectBuf);
} // End of drawSeleccion.
hits is always 0, and selectBuf is empty.
Any ideas? Thanks.

My UILabel is not shown over my CALayer in the Simulator, but appears in the view debugger

I have made a conversation box like the image below.
I make this unique shape with the following code and assign it as a layer:
func bubblePathForContentSize(contentSize: CGSize, left: Bool) -> UIBezierPath {
    let borderWidth: CGFloat = 4 // Should be less than or equal to the `radius` property
    let radius: CGFloat = 10
    let triangleHeight: CGFloat = 15
    let rect: CGRect
    let path = UIBezierPath()
    let radius2 = radius - borderWidth / 2 // Radius adjusted for the border width
    if left {
        rect = CGRect(x: 0, y: 0, width: contentSize.width, height: contentSize.height).offsetBy(dx: radius, dy: radius + triangleHeight)
    } else {
        rect = CGRect(x: self.containerView.width - contentSize.width - 8 - radius2, y: 0, width: contentSize.width, height: contentSize.height).offsetBy(dx: radius, dy: radius + triangleHeight)
    }
    path.addArc(withCenter: CGPoint(x: rect.maxX, y: rect.minY), radius: radius2, startAngle: CGFloat(-M_PI_2), endAngle: 0, clockwise: true)
    path.addArc(withCenter: CGPoint(x: rect.maxX, y: rect.maxY), radius: radius2, startAngle: 0, endAngle: CGFloat(M_PI_2), clockwise: true)
    path.addArc(withCenter: CGPoint(x: rect.minX, y: rect.maxY), radius: radius2, startAngle: CGFloat(M_PI_2), endAngle: CGFloat(M_PI), clockwise: true)
    path.addArc(withCenter: CGPoint(x: rect.minX, y: rect.minY), radius: radius2, startAngle: CGFloat(M_PI), endAngle: CGFloat(-M_PI_2), clockwise: true)
    if left {
        path.move(to: CGPoint(x: self.containerView.width/3, y: rect.maxY + radius2))
        path.addLine(to: CGPoint(x: self.containerView.width/3 + (triangleHeight/2), y: rect.maxY + radius2 + triangleHeight))
        path.addLine(to: CGPoint(x: self.containerView.width/3 + triangleHeight, y: rect.maxY + radius2))
    } else {
        path.move(to: CGPoint(x: self.containerView.width*2/3, y: rect.maxY + radius2))
        path.addLine(to: CGPoint(x: self.containerView.width*2/3 + (triangleHeight/2), y: rect.maxY + radius2 + triangleHeight))
        path.addLine(to: CGPoint(x: self.containerView.width*2/3 + triangleHeight, y: rect.maxY + radius2))
    }
    let label = UILabel(frame: rect)
    label.textAlignment = .center
    label.text = "Great! Now double tap the image to zoom in."
    label.font = label.font.withSize(11)
    label.textColor = UIColor.white
    self.containerView.addSubview(label)
    path.close()
    return path
}
Then I want to add text inside the layer as a UILabel. The problem is that the UILabel does not appear; when I check it via the view debugger in Xcode 8, the UILabel is actually there, in front of the layer.
(Screenshot: the view debugger shows the UILabel in front of the CALayer.)
Any idea how to show the label? What attribute or method am I missing?

Stitch/Composite multiple images vertically and save as one image (iOS, objective-c)

I need help writing an Objective-C function that will take an array of UIImages/PNGs and return/save one tall image of all the images stitched together vertically, in order. I am new to this, so please go slow and take it easy :)
My ideas so far:
Draw a UIView, then addSubview each of the images to its parent,
and then ???
Below are Swift 3 and Swift 2 examples that stitch images together vertically or horizontally. They use the dimensions of the largest image in the array provided by the caller to determine the common frame size that each individual image is stitched into.
Note: the Swift 3 example preserves each image's aspect ratio, while the Swift 2 example does not. See the inline note below about that.
UPDATE: Added Swift 3 example
Swift 3:
import UIKit
import AVFoundation

func stitchImages(images: [UIImage], isVertical: Bool) -> UIImage {
    var stitchedImages: UIImage!
    if images.count > 0 {
        var maxWidth = CGFloat(0), maxHeight = CGFloat(0)
        for image in images {
            if image.size.width > maxWidth {
                maxWidth = image.size.width
            }
            if image.size.height > maxHeight {
                maxHeight = image.size.height
            }
        }
        var totalSize: CGSize
        let maxSize = CGSize(width: maxWidth, height: maxHeight)
        if isVertical {
            totalSize = CGSize(width: maxSize.width, height: maxSize.height * (CGFloat)(images.count))
        } else {
            totalSize = CGSize(width: maxSize.width * (CGFloat)(images.count), height: maxSize.height)
        }
        UIGraphicsBeginImageContext(totalSize)
        for image in images {
            let offset = (CGFloat)(images.index(of: image)!)
            let rect = AVMakeRect(aspectRatio: image.size, insideRect: isVertical ?
                CGRect(x: 0, y: maxSize.height * offset, width: maxSize.width, height: maxSize.height) :
                CGRect(x: maxSize.width * offset, y: 0, width: maxSize.width, height: maxSize.height))
            image.draw(in: rect)
        }
        stitchedImages = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
    return stitchedImages
}
Note: the original Swift 2 example below does not preserve the aspect ratio (in the Swift 2 example all images are stretched to fit the bounding box defined by the maximum width and height across the images, so any non-square image can be stretched disproportionately in one of its dimensions). If you're using Swift 2 and want to preserve the aspect ratio, apply the AVMakeRect() modification from the Swift 3 example. Since I no longer have access to a Swift 2 playground and can't test it to ensure there are no errors, I'm not updating the Swift 2 example here.
Swift 2: (doesn't preserve aspect ratio; fixed in the Swift 3 example above)
import UIKit
import AVFoundation

func stitchImages(images: [UIImage], isVertical: Bool) -> UIImage {
    var stitchedImages: UIImage!
    if images.count > 0 {
        var maxWidth = CGFloat(0), maxHeight = CGFloat(0)
        for image in images {
            if image.size.width > maxWidth {
                maxWidth = image.size.width
            }
            if image.size.height > maxHeight {
                maxHeight = image.size.height
            }
        }
        var totalSize: CGSize, maxSize = CGSizeMake(maxWidth, maxHeight)
        if isVertical {
            totalSize = CGSizeMake(maxSize.width, maxSize.height * (CGFloat)(images.count))
        } else {
            totalSize = CGSizeMake(maxSize.width * (CGFloat)(images.count), maxSize.height)
        }
        UIGraphicsBeginImageContext(totalSize)
        for image in images {
            var rect: CGRect, offset = (CGFloat)((images as NSArray).indexOfObject(image))
            if isVertical {
                rect = CGRectMake(0, maxSize.height * offset, maxSize.width, maxSize.height)
            } else {
                rect = CGRectMake(maxSize.width * offset, 0, maxSize.width, maxSize.height)
            }
            image.drawInRect(rect)
        }
        stitchedImages = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
    return stitchedImages
}
The normal way is to create a bitmap image context, draw your images into it at the required positions, and then get the image from the image context.
You can do this with UIKit, which is somewhat easier, but it isn't thread safe, so it will need to run on the main thread and will block the UI.
There is loads of example code around for this, but if you want to understand it properly, you should look at UIGraphicsBeginImageContext, UIGraphicsGetCurrentContext, UIGraphicsGetImageFromCurrentImageContext and -[UIImage drawInRect:]. Don't forget UIGraphicsEndImageContext.
You can also do this with Core Graphics, which is, AFAIK, safe to use on background threads (I've not had a crash from it yet). Efficiency is about the same, as UIKit just uses CG under the hood.
Key words for this are CGBitmapContextCreate, CGContextDrawImage, CGBitmapContextCreateImage, CGContextTranslateCTM, CGContextScaleCTM and CGContextRelease (no ARC for Core Graphics). The scaling and translating are needed because CG has its origin in the bottom left corner, with Y increasing upwards.
There is also a third way, which is to use CG for the context but save yourself all the coordinate pain by using a CALayer: set your CGImage (UIImage.CGImage) as the contents and then render the layer into the context. This is still thread safe and lets the layer take care of all the transformations. The keyword for this is renderInContext:.
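As a sketch of the Core Graphics route described above (this is my own illustrative code, not from the original answer; `images`, `width`, and `height` are assumed to be defined by the caller):

```objc
// Create an RGBA bitmap context and flip its coordinate system so
// UIKit-style top-left-origin drawing can be used inside it.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0,
                                         colorSpace,
                                         kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
// CG's origin is bottom-left with Y increasing upwards; flip to top-left.
CGContextTranslateCTM(ctx, 0, height);
CGContextScaleCTM(ctx, 1.0, -1.0);
UIGraphicsPushContext(ctx);
CGFloat y = 0;
for (UIImage *image in images) {
    [image drawInRect:CGRectMake(0, y, image.size.width, image.size.height)];
    y += image.size.height;
}
UIGraphicsPopContext();
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
UIImage *stitched = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CGContextRelease(ctx);
```

Because only Core Graphics owns the context here, this version can run off the main thread.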
I know I'm a bit late here, but hopefully this can help someone out. If you're trying to create one large image out of an array, you can use this method:
- (UIImage *)mergeImagesFromArray:(NSArray *)imageArray {
    if ([imageArray count] == 0) return nil;
    UIImage *exampleImage = [imageArray firstObject];
    CGSize imageSize = exampleImage.size;
    CGSize finalSize = CGSizeMake(imageSize.width, imageSize.height * [imageArray count]);
    UIGraphicsBeginImageContext(finalSize);
    for (UIImage *image in imageArray) {
        [image drawInRect:CGRectMake(0, imageSize.height * [imageArray indexOfObject:image],
                                     imageSize.width, imageSize.height)];
    }
    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return finalImage;
}
Swift 4
import UIKit
import AVFoundation

extension Array where Element: UIImage {
    func stitchImages(isVertical: Bool) -> UIImage {
        let maxWidth = self.compactMap { $0.size.width }.max()
        let maxHeight = self.compactMap { $0.size.height }.max()
        let maxSize = CGSize(width: maxWidth ?? 0, height: maxHeight ?? 0)
        let totalSize = isVertical ?
            CGSize(width: maxSize.width, height: maxSize.height * (CGFloat)(self.count))
            : CGSize(width: maxSize.width * (CGFloat)(self.count), height: maxSize.height)
        let renderer = UIGraphicsImageRenderer(size: totalSize)
        return renderer.image { (context) in
            for (index, image) in self.enumerated() {
                let rect = AVMakeRect(aspectRatio: image.size, insideRect: isVertical ?
                    CGRect(x: 0, y: maxSize.height * CGFloat(index), width: maxSize.width, height: maxSize.height) :
                    CGRect(x: maxSize.width * CGFloat(index), y: 0, width: maxSize.width, height: maxSize.height))
                image.draw(in: rect)
            }
        }
    }
}
Try this piece of code; I tried stitching two images together and displayed them in an image view:
UIImage *bottomImage = [UIImage imageNamed:@"bottom.png"]; // first image
UIImage *image = [UIImage imageNamed:@"top.png"]; // foreground image
CGSize newSize = CGSizeMake(209, 260); // size of the image view
UIGraphicsBeginImageContext(newSize);
// Draw the 1st image.
[bottomImage drawInRect:CGRectMake(0, 0, newSize.width/2, newSize.height/2)];
// Draw the 2nd image below the 1st.
[image drawInRect:CGRectMake(0, newSize.height/2, newSize.width/2, newSize.height/2)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
join.image = newImage;
join is the name of the image view, and you see the two images as a single image.
The solutions above were helpful but had a serious flaw for me: if the images are of different sizes, the resulting stitched image can have large gaps between the parts. The solution I came up with combines all images directly below each other, so that the result looks more like a single image no matter the individual image sizes.
For Swift 3.x
static func combineImages(images: [UIImage]) -> UIImage
{
    var totalHeight: CGFloat = 0.0 // sum of all image heights
    var maxWidth: CGFloat = 0.0
    for image in images
    {
        totalHeight += image.size.height
        if image.size.width > maxWidth
        {
            maxWidth = image.size.width
        }
    }
    let finalSize = CGSize(width: maxWidth, height: totalHeight)
    UIGraphicsBeginImageContext(finalSize)
    var runningHeight: CGFloat = 0.0
    for image in images
    {
        image.draw(in: CGRect(x: 0.0, y: runningHeight, width: image.size.width, height: image.size.height))
        runningHeight += image.size.height
    }
    let finalImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return finalImage!
}