sizeof crashing on iPad 1 but not on newer iPads - Objective-C

Consider this code:
CGFloat largerLineSpacing = kStreamCellParagraphSpacing;
CTParagraphStyleSetting paragraphSettings[1] = {
    { kCTParagraphStyleSpecifierParagraphSpacing, sizeof(CGFloat), &largerLineSpacing }
};
CTParagraphStyleRef paragraphStyle = CTParagraphStyleCreate(paragraphSettings, sizeof(*paragraphSettings));
This code crashes with an EXC_BAD_ACCESS when running on an iPad 1 (iOS 5.1), but not in the 5.1 simulator or on an iPad 3 (iOS 6.0). My C is weak - am I making a dumb mistake with sizeof?

The docs for CTParagraphStyleCreate say that its second argument is the number of CTParagraphStyleSetting instances in the paragraphSettings array (1 in your case). But sizeof(*paragraphSettings) gives the size in bytes of a single setting (12 bytes on a 32-bit device like the iPad 1), not the count, so Core Text tries to read that many settings and runs off the end of your one-element array.
If you change your code to
CTParagraphStyleRef paragraphStyle = CTParagraphStyleCreate(paragraphSettings, 1);
it should work. Or, if you want to cope with adding more settings in future, you could try
size_t numElems = sizeof(paragraphSettings) / sizeof(paragraphSettings[0]);
CTParagraphStyleRef paragraphStyle = CTParagraphStyleCreate(paragraphSettings, numElems);

Alternatively, name the count once and use it for both the array size and the call:
static CFIndex const settingCount = 1;
CTParagraphStyleSetting paragraphSettings[settingCount] = {
    { kCTParagraphStyleSpecifierParagraphSpacing, sizeof(CGFloat), &largerLineSpacing }
};
CTParagraphStyleRef paragraphStyle = CTParagraphStyleCreate(paragraphSettings, settingCount);

Related

Processing speed of Swift vs. Objective-C in Metal

I have written a demo in Swift based on Apple's official documentation, and I found that CPU usage is lower in Objective-C than in Swift.
Does this mean Objective-C is much more efficient than Swift for Metal apps?
I'm confused, because many people say Swift is generally faster than Objective-C. Or is this just an exception?
The demo involves pointer management. I know handling pointers in Swift is troublesome; maybe that is why the app uses so many more resources in Swift? I'm still investigating.
The demo is a triple-buffering model that renders hundreds of small quads, updates their positions at the start of each frame, and writes them into a vertex buffer. It uses semaphores to wait for frame completion in case the CPU runs too far ahead of the GPU.
This is the relevant part of the code from Apple's official sample:
- (void)updateState
{
    AAPLVertex *currentSpriteVertices = _vertexBuffers[_currentBuffer].contents;
    NSUInteger currentVertex = _totalSpriteVertexCount - 1;
    NSUInteger spriteIdx = (_rowsOfSprites * _spritesPerRow) - 1;
    for (NSInteger row = _rowsOfSprites - 1; row >= 0; row--)
    {
        float startY = _sprites[spriteIdx].position.y;
        for (NSInteger spriteInRow = _spritesPerRow - 1; spriteInRow >= 0; spriteInRow--)
        {
            vector_float2 updatedPosition = _sprites[spriteIdx].position;
            if (spriteInRow == 0)
            {
                updatedPosition.y = startY;
            }
            else
            {
                updatedPosition.y = _sprites[spriteIdx - 1].position.y;
            }
            _sprites[spriteIdx].position = updatedPosition;
            for (NSInteger vertexOfSprite = AAPLSprite.vertexCount - 1; vertexOfSprite >= 0; vertexOfSprite--)
            {
                currentSpriteVertices[currentVertex].position = AAPLSprite.vertices[vertexOfSprite].position + _sprites[spriteIdx].position;
                currentSpriteVertices[currentVertex].color = _sprites[spriteIdx].color;
                currentVertex--;
            }
            spriteIdx--;
        }
    }
}
- (void)drawInMTKView:(nonnull MTKView *)view
{
    dispatch_semaphore_wait(_inFlightSemaphore, DISPATCH_TIME_FOREVER);
    _currentBuffer = (_currentBuffer + 1) % MaxBuffersInFlight;
    [self updateState];
    id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
    commandBuffer.label = @"MyCommand";
    __block dispatch_semaphore_t block_sema = _inFlightSemaphore;
    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> buffer)
    {
        dispatch_semaphore_signal(block_sema);
    }];
    MTLRenderPassDescriptor *renderPassDescriptor = view.currentRenderPassDescriptor;
    if (renderPassDescriptor != nil)
    {
        id<MTLRenderCommandEncoder> renderEncoder =
            [commandBuffer renderCommandEncoderWithDescriptor:renderPassDescriptor];
        renderEncoder.label = @"MyRenderEncoder";
        [renderEncoder setCullMode:MTLCullModeBack];
        [renderEncoder setRenderPipelineState:_pipelineState];
        [renderEncoder setVertexBuffer:_vertexBuffers[_currentBuffer]
                                offset:0
                               atIndex:AAPLVertexInputIndexVertices];
        [renderEncoder setVertexBytes:&_viewportSize
                               length:sizeof(_viewportSize)
                              atIndex:AAPLVertexInputIndexViewportSize];
        [renderEncoder drawPrimitives:MTLPrimitiveTypeTriangle
                          vertexStart:0
                          vertexCount:_totalSpriteVertexCount];
        [renderEncoder endEncoding];
        [commandBuffer presentDrawable:view.currentDrawable];
    }
    [commandBuffer commit];
}
and this is the Swift version:
func updateSprite() {
    let currentSpriteVertices = vertexBuffer[currentBuffer].contents().bindMemory(to: DFVertex.self, capacity: totalSpriteVertexCount * MemoryLayout<DFVertex>.size)
    var currentVertex = totalSpriteVertexCount - 1
    var spriteIdx = (rowsOfSprites * spritePerRow) - 1
    for _ in stride(from: rowsOfSprites - 1, through: 0, by: -1) {
        let startY = sprites[spriteIdx].position.y
        for spriteInRow in stride(from: spritePerRow - 1, through: 0, by: -1) {
            var updatePosition = sprites[spriteIdx].position
            if spriteInRow == 0 {
                updatePosition.y = startY
            } else {
                updatePosition.y = sprites[spriteIdx - 1].position.y
            }
            sprites[spriteIdx].position = updatePosition
            for vertexOfSprite in stride(from: DFSprite.vertexCount - 1, through: 0, by: -1) {
                currentSpriteVertices[currentVertex].position = DFSprite.vertices[vertexOfSprite].position + sprites[spriteIdx].position
                currentSpriteVertices[currentVertex].color = sprites[spriteIdx].color
                currentVertex -= 1
            }
            spriteIdx -= 1
        }
    }
}
func draw(in view: MTKView) {
    inFlightSemaphore.wait()
    currentBuffer = (currentBuffer + 1) % maxBufferInFlight
    updateSprite()
    let commandBuffer = commandQueue.makeCommandBuffer()
    if commandBuffer == nil {
        print("create command buffer failed.")
    }
    commandBuffer!.label = "command buffer"
    commandBuffer!.addCompletedHandler { (buffer) in
        self.inFlightSemaphore.signal()
    }
    if let renderPassDescriptor = view.currentRenderPassDescriptor,
       let renderEncoder = commandBuffer!.makeRenderCommandEncoder(descriptor: renderPassDescriptor) {
        renderEncoder.setCullMode(.back)
        renderEncoder.setRenderPipelineState(pipelineState!)
        renderEncoder.setVertexBuffer(vertexBuffer[currentBuffer],
                                      offset: 0,
                                      index: DFVertexInputIndex.vertex.rawValue)
        renderEncoder.setVertexBytes(&viewportSize,
                                     length: MemoryLayout.size(ofValue: viewportSize),
                                     index: DFVertexInputIndex.viewportSize.rawValue)
        renderEncoder.drawPrimitives(type: .triangle,
                                     vertexStart: 0,
                                     vertexCount: totalSpriteVertexCount)
        renderEncoder.endEncoding()
        commandBuffer!.present(view.currentDrawable!)
    }
    commandBuffer!.commit()
}
The result: the app written in Objective-C uses about 40% CPU, while the Swift version uses about 100%. I thought Swift would be faster.

How to ignore my app's window in a screenshot? (Swift 2.0)

I wanted to capture an image of the screen while ignoring my app's own window.
I found a code example in Objective-C and tried to convert it to Swift.
Objective-C snippet:
// Get onscreen windows
CGWindowID windowIDToExcude = (CGWindowID)[myNSWindow windowNumber];
CFArrayRef onScreenWindows = CGWindowListCreate(kCGWindowListOptionOnScreenOnly, kCGNullWindowID);
CFMutableArrayRef finalList = CFArrayCreateMutableCopy(NULL, 0, onScreenWindows);
for (long i = CFArrayGetCount(finalList) - 1; i >= 0; i--) {
    CGWindowID window = (CGWindowID)(uintptr_t)CFArrayGetValueAtIndex(finalList, i);
    if (window == windowIDToExcude)
        CFArrayRemoveValueAtIndex(finalList, i);
}
// Get the composite image
CGImageRef ref = CGWindowListCreateImageFromArray(myRectToGrab, finalList, kCGWindowListOptionAll);
My Swift version (as far as I managed to get):
// Get onscreen windows
let windowIDToExcude = myNSWindow.windowNumber!
let onScreenWindows = CGWindowListCreate(kCGWindowListOptionOnScreenOnly, kCGNullWindowID)
let finalList = CFArrayCreateMutableCopy(nil, 0, onScreenWindows)
for var i = CFArrayGetCount(finalList) - 1; i >= 0; i -= 1 {
    var window: CGWindowID = (uintptr_t(CFArrayGetValueAtIndex(finalList, i)) as! CGWindowID)
    if window == windowIDToExcude {
        CFArrayRemoveValueAtIndex(finalList, i)
    }
}
// Get the composite image
var ref = CGWindowListCreateImageFromArray(myRectToGrab, finalList, kCGWindowListOptionAll)
But it does not compile in Swift 2.0 and I have no idea why.
In particular, this line fails to compile:
CGWindowListCreate(kCGWindowListOptionOnScreenOnly, kCGNullWindowID)
Apparently CGWindowListCreate, kCGWindowListOptionOnScreenOnly, and kCGNullWindowID no longer exist under those names.
Can you try this:
let imageRef = CGWindowListCreateImage(self.view.frame, CGWindowListOption.OptionOnScreenBelowWindow, CGWindowID(self.view.window!.windowNumber), CGWindowImageOption.Default)
let image = NSImage(CGImage: imageRef!, size: self.view.frame.size)
self.imageView.image = image
That does the trick for me.

Converting MKPolygon Objective-C code to Swift

I have the following Objective-C method:
- (void)addPolygonToMap {
    NSInteger numberOfPoints = [self.coordinates count];
    if (numberOfPoints > 4) {
        CLLocationCoordinate2D points[numberOfPoints];
        for (NSInteger i = 0; i < numberOfPoints; i++) {
            points[i] = [self.coordinates[i] MKCoordinateValue];
        }
        self.polygon = [MKPolygon polygonWithCoordinates:points count:numberOfPoints];
        [self.mapView addOverlay:self.polygon];
    }
    self.isDrawingPolygon = NO;
    [self.drawPolygonButton setTitle:@"draw" forState:UIControlStateNormal];
    self.canvasView.image = nil;
    [self.canvasView removeFromSuperview];
}
My attempt at converting it to Swift:
func addPolygonToMap() {
    var numberOfPoints: NSInteger = self.coordinates.count
    if (numberOfPoints > 4) {
        var points: [CLLocationCoordinate2D] = []
        var coordsPointer = UnsafeMutablePointer<CLLocationCoordinate2D>.alloc(numberOfPoints)
        for i in 0..<numberOfPoints {
            points.append(coordsPointer[i])
        }
        self.polygon = MKPolygon(coordinates: &points, count: numberOfPoints)
        self.mapView.addOverlay(self.polygon)
        coordsPointer.dealloc(numberOfPoints)
    }
    self.isDrawingPolygon = false
    self.drawPolygonButton.setTitle("Draw", forState: .Normal)
    self.canvasView.image = nil
    self.canvasView.removeFromSuperview()
}
Finally, when the delegate method is called it's not actually adding the overlay to the mapView. I can't see anything.
I'm assuming the problem is in my addPolygonToMap() method, but I'm not 100% sure. The whole scenario works fine in my Objective-C project.
func mapView(mapView: MKMapView!, rendererForOverlay overlay: MKOverlay!) -> MKOverlayRenderer! {
    if (overlay is MKPolygon) {
        var overlayPathView = MKPolygonRenderer(overlay: overlay)
        overlayPathView.fillColor = UIColor.cyanColor().colorWithAlphaComponent(0.2)
        overlayPathView.strokeColor = UIColor.cyanColor().colorWithAlphaComponent(0.2)
        overlayPathView.lineWidth = 5
        return overlayPathView
    } else if (overlay is MKPolyline) {
        var overlayPathView = MKPolylineRenderer(overlay: overlay)
        overlayPathView.strokeColor = UIColor.blueColor().colorWithAlphaComponent(0.7)
        overlayPathView.lineWidth = 5
        return overlayPathView
    }
    return nil
}
UPDATE:
I just noticed that in my Objective-C version the points[i].latitude values come through okay, but when I do the following I get strange output:
println(coordsPointer[i].latitude)
Output:
1.63041663127611e-321
1.64523860065135e-321
1.65511991356818e-321
1.68970450877706e-321
1.7045264781523e-321
1.72922976044436e-321
This would explain why I don't see the overlay, however my experience with UnsafeMutablePointer<> is limited.
Fixed by modifying the addPolygonToMap() method:
func addPolygonToMap() {
    var numberOfPoints: NSInteger = self.coordinates.count
    if (numberOfPoints > 4) {
        var points: [CLLocationCoordinate2D] = []
        for i in 0..<numberOfPoints {
            points.insert(self.coordinates[i].MKCoordinateValue, atIndex: i)
        }
        self.polygon = MKPolygon(coordinates: &points, count: numberOfPoints)
        self.mapView.addOverlay(self.polygon)
    }
    self.isDrawingPolygon = false
    self.drawPolygonButton.setTitle("Draw", forState: .Normal)
    self.canvasView.image = nil
    self.canvasView.removeFromSuperview()
}
Thanks to @Volker for the help.

Swift physics not working on iOS 7

I have a simple SpriteKit game that uses physics, written in Swift. It works great in the iOS 8 simulator:
the node stops at the physics-world edge.
But when running on iOS 7, it falls right through. I think it has something to do with the category, contact, and collision bitmasks.
Any clue?
Defining the categories here
struct PhysicsCategory {
    static let None: UInt32 = 0
    static let Edge: UInt32 = 0b1     // 1
    static let Player: UInt32 = 0b10  // 2
    static let Enemy: UInt32 = 0b100  // 4
}
Setup World
physicsBody = SKPhysicsBody(edgeLoopFromRect: self.frame)
physicsWorld.contactDelegate = self
physicsBody!.categoryBitMask = PhysicsCategory.Edge
physicsWorld.gravity = CGVectorMake(0, -9.81)
Setup Player/Ball/Node
playerNode.physicsBody = SKPhysicsBody(polygonFromPath: path)
playerNode.physicsBody!.contactTestBitMask = PhysicsCategory.Player
playerNode.physicsBody!.dynamic = true
playerNode.physicsBody!.mass = 0.50
playerNode.physicsBody!.categoryBitMask = PhysicsCategory.Player
playerNode.physicsBody!.collisionBitMask = PhysicsCategory.Enemy | PhysicsCategory.Edge
Hmmm, I'm not having that problem. I wrote your code almost verbatim; I assumed playerNode is an SKShapeNode and used its path in polygonFromPath. Can you try running this on iOS 7 and see if you still have the problem?
import SpriteKit

struct PhysicsCategory {
    static let None: UInt32 = 0
    static let Edge: UInt32 = 0b1     // 1
    static let Player: UInt32 = 0b10  // 2
    static let Enemy: UInt32 = 0b100  // 4
}

class GameScene: SKScene, SKPhysicsContactDelegate {
    let playerNode = SKShapeNode(ellipseInRect: CGRect(origin: CGPointZero, size: CGSize(width: 10, height: 10)))

    override func didMoveToView(view: SKView) {
        physicsBody = SKPhysicsBody(edgeLoopFromRect: self.frame)
        physicsWorld.contactDelegate = self
        physicsBody!.categoryBitMask = PhysicsCategory.Edge
        physicsWorld.gravity = CGVectorMake(0, -9.81)
        self.addChild(playerNode)
        playerNode.position = CGPoint(x: self.size.width/2, y: self.size.height/2)
        playerNode.physicsBody = SKPhysicsBody(polygonFromPath: playerNode.path)
        playerNode.physicsBody!.dynamic = true
        playerNode.physicsBody!.mass = 0.50
        playerNode.physicsBody!.categoryBitMask = PhysicsCategory.Player
        playerNode.physicsBody!.collisionBitMask = PhysicsCategory.Enemy | PhysicsCategory.Edge
    }
}
Finally got it working!
I updated to Yosemite 10.10.1 and Xcode 6.1.1 and created a new project. Strange, but it works great now.

iOS 8 video dimensions: CMVideoDimensions returns 0,0

In iOS 8 the dimensions returned are 0,0:
CMVideoDimensions dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription);
This worked on iOS 7. How can I find the supported video dimensions? I need to know the video aspect ratio.
You need to wait for the AVCaptureInputPortFormatDescriptionDidChangeNotification:
- (void)avCaptureInputPortFormatDescriptionDidChangeNotification:(NSNotification *)notification {
    AVCaptureInput *input = [self.recorder.captureSession.inputs objectAtIndex:0];
    AVCaptureInputPort *port = [input.ports objectAtIndex:0];
    CMFormatDescriptionRef formatDescription = port.formatDescription;
    if (formatDescription) {
        CMVideoDimensions dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription);
        if ((dimensions.width == 0) || (dimensions.height == 0)) {
            return;
        }
        CGFloat aspect = (CGFloat)dimensions.width / (CGFloat)dimensions.height;
        if (floor(NSFoundationVersionNumber) > NSFoundationVersionNumber_iOS_7_1) {
            // since iOS 8 the aspect ratio is inverted
            // remove this check if iOS 7 will not be supported
            aspect = 1.f / aspect;
        }
    }
}
Provided you're tracking the device being used, you can access the current format from activeFormat: https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVCaptureDevice_Class/index.html#//apple_ref/occ/instp/AVCaptureDevice/activeFormat
I recently ran into this particular issue; here's the Swift 5 version for those who need it too:
import Foundation
import AVFoundation

class MySessionManager: NSObject {
    static let notificationName = "AVCaptureInputPortFormatDescriptionDidChangeNotification"

    let session: AVCaptureSession
    var videoCaptureDimensions: CMVideoDimensions?

    init(session: AVCaptureSession) {
        self.session = session
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(formatDescription(didChange:)),
            name: .init(Self.notificationName),
            object: nil
        )
    }

    deinit { NotificationCenter.default.removeObserver(self) }

    @objc func formatDescription(didChange notification: NSNotification) {
        guard
            let input = session.inputs.first,
            let port = input.ports.first,
            let formatDesc = port.formatDescription
        else { return }
        var dimensions = CMVideoFormatDescriptionGetDimensions(formatDesc)
        // ... perform any necessary dimension adjustments ...
        videoCaptureDimensions = dimensions
    }
}