Screen bounds do not change with orientation - iOS 9.1

Starting with iOS 8, screen bounds are supposed to depend on orientation. But when I print the values in iOS 9.1 while changing the simulator orientation, they stay the same!
let h = UIScreen.mainScreen().bounds.height
let w = UIScreen.mainScreen().bounds.width
dbpr("orient: \(devOrient.rawValue) w: \(w), h: \(h)") // dbpr is a custom debug-print helper
//the above prints this:
orient: 1 w: 320.0, h: 548.0
orient: 3 w: 320.0, h: 548.0
orient: 2 w: 320.0, h: 548.0
orient: 4 w: 320.0, h: 548.0
Reinstalling Xcode did not help. Any ideas what's going on?

It turns out the cause was that I had Device Orientation under Target/General set to Portrait only. Enabling the other orientations fixed it.
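Once the other orientations are enabled, a quick way to confirm the bounds actually change is to log the new size in `viewWillTransition(to:with:)`, which UIKit calls on every rotation (a minimal sketch in a view controller):

```swift
override func viewWillTransition(to size: CGSize,
                                 with coordinator: UIViewControllerTransitionCoordinator) {
    super.viewWillTransition(to: size, with: coordinator)
    // `size` already reflects the post-rotation screen bounds
    print("new size: \(size.width) x \(size.height)")
}
```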


iOS Core Graphics how to optimize incremental drawing of very large image?

I have an app written with RxSwift which processes 500+ days of HealthKit data to draw a chart for the user.
The chart image is drawn incrementally using the code below. Starting from a black canvas, the previous image is drawn into the graphics context, then a new segment is drawn over it at a given offset. The combined image is saved and the process repeats 70+ times; the image is saved each time so the user sees the update. The result is a single chart image which the user can export from the app.
Even with an autorelease pool, I see memory spikes of up to 1 GB, which prevents me from doing other resource-intensive processing.
How can I optimize incremental drawing of very large (1440 × 5000 pixels) image?
When the image is displayed or saved at 3x scale, it is actually 4320 × 15360.
Is there a better way than trying to draw over an image?
autoreleasepool {
    // activeEnergyCanvas is a custom data-processing class
    let newActiveEnergySegment = activeEnergyCanvas.draw(
        in: CGRect(x: 0, y: 0, width: 1440, height: days * 10),
        with: energyPalette)
    let size = CGSize(width: 1440, height: height)
    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
    // draw the existing image
    self.activeEnergyImage.draw(in: CGRect(origin: .zero, size: size))
    // calculate where to draw the smaller image over the larger one
    let offsetRect = CGRect(origin: CGPoint(x: 0, y: offset * 10),
                            size: newActiveEnergySegment.size)
    newActiveEnergySegment.draw(in: offsetRect)
    // get the combined image
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    // assign the combined image to be displayed
    if let unwrappedImage = newImage {
        self.activeEnergyImage = unwrappedImage
    }
}
It turns out my mistake was passing a drawing scale of 0.0 when creating the graphics context, which is documented to default to the device's native screen scale.
On an iPhone 8 that is 3.0. The result is needing extreme amounts of memory to draw, zoom, and export these images: even if all debug logging prints that the image is 1440 pixels wide, the actual canvas ends up being 1440 * 3.0 = 4320.
Passing 1.0 as the drawing scale makes the image fuzzier, but reduces memory usage to less than 200 MB.
// UIGraphicsBeginImageContext() <- also used the @3x scale here, even when all display-size printouts showed 1440
let drawingScale: CGFloat = 1.0
UIGraphicsBeginImageContextWithOptions(size, true, drawingScale)
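On iOS 10+, an alternative is `UIGraphicsImageRenderer`, which makes the scale an explicit property of its format object instead of a positional argument. A sketch under the same assumptions (here `existingImage`, `newSegment`, `offsetRect`, and `size` are placeholders for the values in the code above):

```swift
import UIKit

// Render at an explicit 1.0 scale so the backing canvas matches the
// requested point size instead of the device's native screen scale.
let format = UIGraphicsImageRendererFormat()
format.scale = 1.0
format.opaque = true
let renderer = UIGraphicsImageRenderer(size: size, format: format)
let combined = renderer.image { _ in
    existingImage.draw(in: CGRect(origin: .zero, size: size))
    newSegment.draw(in: offsetRect)
}
```

The renderer also ends the context for you, so there is no `UIGraphicsEndImageContext()` call to forget inside the loop.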

How do I get the frame of visible content from SKCropNode?

It appears that, in SpriteKit, when I use a mask in an SKCropNode to hide some content, the frame calculated by calculateAccumulatedFrame does not shrink to match. I'm wondering if there's any way to calculate the visible frame.
A quick example:
import SpriteKit
let par = SKCropNode()
let bigShape = SKShapeNode(rect: CGRect(x: 0, y: 0, width: 100, height: 100))
bigShape.fillColor = UIColor.redColor()
bigShape.strokeColor = UIColor.clearColor()
par.addChild(bigShape)
let smallShape = SKShapeNode(rect: CGRect(x: 0, y: 0, width: 20, height: 20))
smallShape.fillColor = UIColor.greenColor()
smallShape.strokeColor = UIColor.clearColor()
par.maskNode = smallShape
par.calculateAccumulatedFrame() // returns (x=0, y=0, width=100, height=100)
I expected par.calculateAccumulatedFrame() to return (x=0, y=0, width=20, height=20) based on the crop node mask.
I thought maybe I could write the function myself as an extension that reimplements calculateAccumulatedFrame with support for SKCropNodes and their masks, but it occurred to me that I would need to consider the alpha of the mask to determine whether there's actual content that grows the frame. That sounds difficult.
Is there an easy way to calculate this?
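If alpha-level accuracy isn't required, one rough approximation is to intersect the node's accumulated frame with the mask's accumulated frame. This is a sketch, not a full solution: it treats the mask's bounding box as opaque, so fully transparent regions inside that box still count as visible.

```swift
import SpriteKit

extension SKCropNode {
    /// Approximate visible frame: the accumulated frame clipped to the
    /// mask's bounding box. Ignores transparent areas inside the mask.
    func approximateVisibleFrame() -> CGRect {
        let full = calculateAccumulatedFrame()
        guard let mask = maskNode else { return full }
        return full.intersection(mask.calculateAccumulatedFrame())
    }
}
```

For the example above, this would clip the 100 × 100 accumulated frame to the 20 × 20 mask rect.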

Xcode UILabel upside down

I want to have a label that is displayed upside down. That means that after creating the label I want to rotate it by 90 degrees. The rotation works, but afterwards the label ends up in the wrong place, and I don't understand HOW the label is rotated. Maybe someone can help me. The code is the following:
let label = CreatorClass.createLabelWithFrame(CGRect(x: 10, y: 10, width: 150, height: 15), text: "aString", size: 12.0, bold: false, textAlignment: .Left, textColor: UIColor.whiteColor(), addToView: self)
label.transform = CGAffineTransformMakeRotation(CGFloat(M_PI_2))
CreatorClass creates a label and adds it to a certain view (it adds it to self because this code is called in a subclass of UIView). It's fairly self-explanatory, I think.
Your label rotates around its center. That center is at:
x = 10 + 75 = 85
y = 10 + 7.5 = 17.5
This point remains the center after the rotation, which swaps the sides: the rotated label is 15.0 wide and 150.0 tall. So the new frame of your view after the transform is:
x = 85 - 7.5 = 77.5
y = 17.5 - 75 = -57.5
width = 15, height = 150
That can land outside the bounds of self. If you want to move the label back to its initial position, just set its frame there:
label.frame = CGRect(x: 10, y: 10, width: 15, height: 150)
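Note that UIKit documents `frame` as undefined when a view's `transform` is not the identity, so positioning via `center` is the safer route. A sketch of the same repositioning (modern Swift syntax, same numbers as above):

```swift
label.transform = CGAffineTransform(rotationAngle: .pi / 2)
// The rotated bounding box is 15 wide and 150 tall; to place it at
// (10, 10), move the center to the middle of that box.
label.center = CGPoint(x: 10 + 15 / 2, y: 10 + 150 / 2)
```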

Repositioning of a UIView after CGAffineRotation

I'm drawing a map of hexagons inside a UIView, and the next step is to present it in an isometric view. To do this I transformed the map with a CGAffineTransform (rotation and scaling), reaching my goal.
Now that the map has become bigger, when I rotate it the lower-left corner goes off screen. This is the frame before and after the transformation:
2012-03-07 17:08:06.160 PratoFiorito[941:f803] X: 0.000000 - Y: 0.000000 || Width: 1408.734619 - Height: 1640.000000
2012-03-07 17:08:06.163 PratoFiorito[941:f803] X: -373.523132 - Y: 281.054779 || Width: 2155.781006 - Height: 1077.890503
I simply can't understand what the new origin is and how I can calculate it to reposition the view correctly. Can somebody help me?
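After a transform is applied, `frame` reports the axis-aligned bounding box of the transformed view in its superview's coordinates, which is why the logged origin went negative. One way to pull the view back on screen is to shift its `center` by the bounding box's origin (a sketch, assuming `mapView` is the transformed view):

```swift
// After applying the rotation/scale transform:
let box = mapView.frame  // bounding box, e.g. origin (-373.5, 281.1)
mapView.center = CGPoint(x: mapView.center.x - box.origin.x,
                         y: mapView.center.y - box.origin.y)
// mapView.frame.origin is now (0, 0); the transform itself is untouched.
```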

How do I use the scanCrop property of a ZBar reader?

I am using the ZBar SDK for iPhone to scan a barcode. I want the reader to scan only a specific rectangle instead of the whole view; to do that, the scanCrop property of the reader has to be set to the desired rectangle.
I'm having a hard time understanding the rectangle parameter that has to be set.
Can someone please tell me what rect I should pass as an argument if, in portrait view, its coordinates would be CGRectMake(A, B, C, D)?
From ZBar's ZBarReaderView class documentation:
CGRect scanCrop
The region of the video image that will be scanned, in normalized image coordinates. Note that the video image is in landscape mode (default {{0, 0}, {1, 1}})
The coordinates of all arguments are normalized floats from 0 to 1. So, in normalized values, theView.width is 1.0 and theView.height is 1.0, which is why the default rect is {{0,0},{1,1}}.
So, for example, suppose I have a transparent UIView named scanView as the scanning region for my readerView. Rather than doing:
readerView.scanCrop = scanView.frame;
we should normalize every argument first:
CGFloat x,y,width,height;
x = scanView.frame.origin.x / readerView.bounds.size.width;
y = scanView.frame.origin.y / readerView.bounds.size.height;
width = scanView.frame.size.width / readerView.bounds.size.width;
height = scanView.frame.size.height / readerView.bounds.size.height;
readerView.scanCrop = CGRectMake(x, y, width, height);
It works for me. Hope that helps.
You can set the scan crop area like this:
reader.scanCrop = CGRectMake(x, y, width, height);
For example:
reader.scanCrop = CGRectMake(0.25, 0.25, 0.5, 0.45);
I used this and it works for me.
This is the right way to adjust the crop area; I wasted tons of time on it. Because the video image is in landscape, the view-space rect has to be rotated into image coordinates:
readerView.scanCrop = [self getScanCrop:cropRect readerViewBounds:contentView.bounds];

- (CGRect)getScanCrop:(CGRect)rect readerViewBounds:(CGRect)rvBounds {
    CGFloat x, y, width, height;
    x = rect.origin.y / rvBounds.size.height;
    y = 1 - (rect.origin.x + rect.size.width) / rvBounds.size.width;
    width = rect.size.height / rvBounds.size.height;
    height = rect.size.width / rvBounds.size.width;
    return CGRectMake(x, y, width, height);
}
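For completeness, the same landscape mapping as a Swift sketch (a hypothetical helper mirroring the Objective-C method above; axes swap and the x axis flips because the video frame is landscape):

```swift
import CoreGraphics

// Map a view-space rect into ZBar's normalized, landscape image coordinates.
func scanCrop(for rect: CGRect, in readerBounds: CGRect) -> CGRect {
    let x = rect.origin.y / readerBounds.height
    let y = 1 - (rect.origin.x + rect.width) / readerBounds.width
    let w = rect.height / readerBounds.height
    let h = rect.width / readerBounds.width
    return CGRect(x: x, y: y, width: w, height: h)
}
```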