I have a simple SpriteKit game that uses physics. It's written in Swift and works great in the iOS 8 Simulator:
the node stops at the physics world's edge.
But when running on iOS 7 it falls right through. I think it has something to do with the category, contact and collision bit masks.
Any clue?
Defining the categories here:
struct PhysicsCategory {
    static let None: UInt32 = 0
    static let Edge: UInt32 = 0b1    // 1
    static let Player: UInt32 = 0b10 // 2
    static let Enemy: UInt32 = 0b100 // 4
}
Setup World
physicsBody = SKPhysicsBody(edgeLoopFromRect: self.frame)
physicsWorld.contactDelegate = self
physicsBody!.categoryBitMask = PhysicsCategory.Edge
physicsWorld.gravity = CGVectorMake(0, -9.81)
Setup Player/Ball/Node
playerNode.physicsBody = SKPhysicsBody(polygonFromPath: path)
playerNode.physicsBody!.contactTestBitMask = PhysicsCategory.Player
playerNode.physicsBody!.dynamic = true
playerNode.physicsBody!.mass = 0.50
playerNode.physicsBody!.categoryBitMask = PhysicsCategory.Player
playerNode.physicsBody!.collisionBitMask = PhysicsCategory.Enemy | PhysicsCategory.Edge
Hmm, I'm not having that problem. I wrote your code almost verbatim; I assumed playerNode is an SKShapeNode and used its path in polygonFromPath. Can you try running this on iOS 7 and see if you still have the problem?
struct PhysicsCategory {
    static let None: UInt32 = 0
    static let Edge: UInt32 = 0b1    // 1
    static let Player: UInt32 = 0b10 // 2
    static let Enemy: UInt32 = 0b100 // 4
}
import SpriteKit

class GameScene: SKScene, SKPhysicsContactDelegate {
    let playerNode = SKShapeNode(ellipseInRect: CGRect(origin: CGPointZero, size: CGSize(width: 10, height: 10)))

    override func didMoveToView(view: SKView) {
        physicsBody = SKPhysicsBody(edgeLoopFromRect: self.frame)
        physicsWorld.contactDelegate = self
        physicsBody!.categoryBitMask = PhysicsCategory.Edge
        physicsWorld.gravity = CGVectorMake(0, -9.81)

        self.addChild(playerNode)
        playerNode.position = CGPoint(x: self.size.width/2, y: self.size.height/2)
        playerNode.physicsBody = SKPhysicsBody(polygonFromPath: playerNode.path)
        playerNode.physicsBody!.dynamic = true
        playerNode.physicsBody!.mass = 0.50
        playerNode.physicsBody!.categoryBitMask = PhysicsCategory.Player
        playerNode.physicsBody!.collisionBitMask = PhysicsCategory.Enemy | PhysicsCategory.Edge
    }
}
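Not related to the fall-through itself, but since the scene already conforms to SKPhysicsContactDelegate and assigns contactDelegate, a minimal contact handler (in the same Swift 1.x style, living inside GameScene) might look like the sketch below. It assumes you set the player's contactTestBitMask to the categories you want callbacks for (e.g. PhysicsCategory.Edge | PhysicsCategory.Enemy, rather than PhysicsCategory.Player as in the original snippet):
func didBeginContact(contact: SKPhysicsContact) {
    // Sketch only: contact callbacks fire when one body's contactTestBitMask
    // overlaps the other body's categoryBitMask.
    let combined = contact.bodyA.categoryBitMask | contact.bodyB.categoryBitMask
    if combined & PhysicsCategory.Edge != 0 {
        println("player touched the edge")
    }
}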
Finally got it working!
I updated to Yosemite 10.10.1 and Xcode 6.1.1 and created a new project. Strange, but it works great now.
I am looking for ideas regarding an optimal/minimal structure for the inner render loop in Dart 2, for a 2D game (if that part matters).
Clarification/explanation: every framework/language has an efficient way to:
1) Deal with time.
2) Render to the screen (via memory, a canvas, an image, or whatever).
For example, here is someone who answered this for C#. Being new to Flutter/Dart, my first attempt (below) is failing to work, and as of right now I cannot tell where the problem is.
I have searched high and low without finding any help on this, so if you can assist, you have my eternal gratitude.
There is a post on Reddit by u/inu-no-policemen (a bit old) that I used to get started. I suspect that it is crushing the garbage collector or leaking memory.
This is what I have so far, but it crashes pretty quickly (at least in the debugger):
import 'dart:ui';
import 'dart:typed_data';
import 'dart:math' as math;
import 'dart:async';
main() async {
  var deviceTransform = new Float64List(16)
    ..[0] = 1.0 // window.devicePixelRatio
    ..[5] = 1.0 // window.devicePixelRatio
    ..[10] = 1.0
    ..[15] = 1.0;
  var previous = Duration.zero;
  var initialSize = await Future<Size>(() {
    if (window.physicalSize.isEmpty) {
      var completer = Completer<Size>();
      window.onMetricsChanged = () {
        if (!window.physicalSize.isEmpty) {
          completer.complete(window.physicalSize);
        }
      };
      return completer.future;
    }
    return window.physicalSize;
  });
  var world = World(initialSize.width / 2, initialSize.height / 2);
  window.onBeginFrame = (now) {
    // we rebuild the screenRect here since it can change
    var screenRect = Rect.fromLTWH(0.0, 0.0, window.physicalSize.width, window.physicalSize.height);
    var recorder = PictureRecorder();
    var canvas = Canvas(recorder, screenRect);
    var delta = previous == Duration.zero ? Duration.zero : now - previous;
    previous = now;
    var t = delta.inMicroseconds / Duration.microsecondsPerSecond;
    world.update(t);
    world.render(t, canvas);
    var builder = new SceneBuilder()
      ..pushTransform(deviceTransform)
      ..addPicture(Offset.zero, recorder.endRecording())
      ..pop();
    window.render(builder.build());
    window.scheduleFrame();
  };
  window.scheduleFrame();
  window.onPointerDataPacket = (packet) {
    var p = packet.data.first;
    world.input(p.physicalX, p.physicalY);
  };
}
class World {
  static var _objectColor = Paint()..color = Color(0xa0a0a0ff);
  static var _s = 200.0;
  static var _objectRect = Rect.fromLTWH(-_s / 2, -_s / 2, _s, _s);
  static var _rotationsPerSecond = 0.25;
  var _turn = 0.0;
  double _x;
  double _y;

  World(this._x, this._y);

  void input(double x, double y) { _x = x; _y = y; }

  void update(double t) { _turn += t * _rotationsPerSecond; }

  void render(double t, Canvas canvas) {
    var tau = math.pi * 2;
    canvas.translate(_x, _y);
    canvas.rotate(tau * _turn);
    canvas.drawRect(_objectRect, _objectColor);
  }
}
Well, after a month of beating my face against this, I finally figured out the right question and that got me to this:
Flutter Layers / Raw
// Copyright 2015 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// This example shows how to perform a simple animation using the raw interface
// to the engine.
import 'dart:math' as math;
import 'dart:typed_data';
import 'dart:ui' as ui;
void beginFrame(Duration timeStamp) {
  // The timeStamp argument to beginFrame indicates the timing information we
  // should use to clock our animations. It's important to use timeStamp rather
  // than reading the system time because we want all the parts of the system to
  // coordinate the timings of their animations. If each component read the
  // system clock independently, the animations that we processed later would be
  // slightly ahead of the animations we processed earlier.

  // PAINT
  final ui.Rect paintBounds = ui.Offset.zero & (ui.window.physicalSize / ui.window.devicePixelRatio);
  final ui.PictureRecorder recorder = ui.PictureRecorder();
  final ui.Canvas canvas = ui.Canvas(recorder, paintBounds);
  canvas.translate(paintBounds.width / 2.0, paintBounds.height / 2.0);

  // Here we determine the rotation according to the timeStamp given to us by
  // the engine.
  final double t = timeStamp.inMicroseconds / Duration.microsecondsPerMillisecond / 1800.0;
  canvas.rotate(math.pi * (t % 1.0));

  canvas.drawRect(ui.Rect.fromLTRB(-100.0, -100.0, 100.0, 100.0),
      ui.Paint()..color = const ui.Color.fromARGB(255, 0, 255, 0));
  final ui.Picture picture = recorder.endRecording();

  // COMPOSITE
  final double devicePixelRatio = ui.window.devicePixelRatio;
  final Float64List deviceTransform = Float64List(16)
    ..[0] = devicePixelRatio
    ..[5] = devicePixelRatio
    ..[10] = 1.0
    ..[15] = 1.0;
  final ui.SceneBuilder sceneBuilder = ui.SceneBuilder()
    ..pushTransform(deviceTransform)
    ..addPicture(ui.Offset.zero, picture)
    ..pop();
  ui.window.render(sceneBuilder.build());

  // After rendering the current frame of the animation, we ask the engine to
  // schedule another frame. The engine will call beginFrame again when it's time
  // to produce the next frame.
  ui.window.scheduleFrame();
}

void main() {
  ui.window.onBeginFrame = beginFrame;
  ui.window.scheduleFrame();
}
I've been racking my brain and searching here and all over, trying to find out how to generate a random position on screen to spawn a circle. I'm hoping someone here can help, because I'm completely stumped. Basically, I'm trying to create a shape that always spawns in a random spot on screen when the user touches.
override func touchesBegan(touches: Set<NSObject>, withEvent event: UIEvent) {
    let screenSize: CGRect = UIScreen.mainScreen().bounds
    let screenHeight = screenSize.height
    let screenWidth = screenSize.width
    let currentBall = SKShapeNode(circleOfRadius: 100)
    currentBall.position = CGPointMake(CGFloat(arc4random_uniform(UInt32(Float(screenWidth)))), CGFloat(arc4random_uniform(UInt32(Float(screenHeight)))))
    self.removeAllChildren()
    self.addChild(currentBall)
}
If you need more of my code, there really isn't any more. But thank you for whatever help you can give! (Just to reiterate, this code kind of works... but the majority of the spawned balls seem to spawn offscreen.)
The problem there is that your scene is bigger than your screen bounds:
let viewMidX = view!.bounds.midX
let viewMidY = view!.bounds.midY
print(viewMidX)
print(viewMidY)
let sceneHeight = view!.scene!.frame.height
let sceneWidth = view!.scene!.frame.width
print(sceneWidth)
print(sceneHeight)
let currentBall = SKShapeNode(circleOfRadius: 100)
currentBall.fillColor = .green
let x = view!.scene!.frame.midX - viewMidX + CGFloat(arc4random_uniform(UInt32(viewMidX*2)))
let y = view!.scene!.frame.midY - viewMidY + CGFloat(arc4random_uniform(UInt32(viewMidY*2)))
print(x)
print(y)
currentBall.position = CGPoint(x: x, y: y)
view?.scene?.addChild(currentBall)
self.removeAllChildren()
self.addChild(currentBall)
First: determine the area that will be valid. It might not be the frame of the superview, because the ball (let's call it ballView) might get cut off at the edges. The valid area will likely be (in pseudocode):
CGSize( width of the superview - width of ballView , height of the superview - height of ballView )
Once you have a view of that size, just place it on screen with its origin at 0, 0.
Second: now you have a range of valid coordinates. Just use a random function (like the one you are using) to select one of them; a combined sketch follows after the extension below.
Create a Swift file with the following:
extension Int
{
    static func random(range: Range<Int>) -> Int
    {
        var offset = 0
        if range.startIndex < 0 // allow negative ranges
        {
            offset = abs(range.startIndex)
        }
        let mini = UInt32(range.startIndex + offset)
        let maxi = UInt32(range.endIndex + offset)
        return Int(mini + arc4random_uniform(maxi - mini)) - offset
    }
}
And now you can specify a random number as follows:
Int.random(1...1000) // generates a random integer from 1 to 1000
You can generate the values for the x and y coordinates now using this function.
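Putting those two steps together, here is a rough sketch that uses the Int.random extension above; containerSize and ballSize are hypothetical stand-ins for your own values:
// Hypothetical helper: pick an origin so a ball of ballSize stays fully inside containerSize.
func randomOrigin(containerSize: CGSize, ballSize: CGSize) -> CGPoint {
    let maxX = max(0, Int(containerSize.width - ballSize.width))
    let maxY = max(0, Int(containerSize.height - ballSize.height))
    return CGPoint(x: Int.random(0...maxX), y: Int.random(0...maxY))
}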
Given the following random generators:
public extension CGFloat {
    public static var random: CGFloat { return CGFloat(arc4random()) / CGFloat(UInt32.max) }

    public static func random(between x: CGFloat, and y: CGFloat) -> CGFloat {
        let (start, end) = x < y ? (x, y) : (y, x)
        return start + CGFloat.random * (end - start)
    }
}

public extension CGRect {
    public var randomPoint: CGPoint {
        var point = CGPoint()
        point.x = CGFloat.random(between: origin.x, and: origin.x + width)
        point.y = CGFloat.random(between: origin.y, and: origin.y + height)
        return point
    }
}
You can paste the following into a playground:
import XCPlayground
import SpriteKit
let view = SKView(frame: CGRect(x: 0, y: 0, width: 500, height: 500))
XCPShowView("game", view)
let scene = SKScene(size: view.frame.size)
view.presentScene(scene)
let wait = SKAction.waitForDuration(0.5)
let popIn = SKAction.scaleTo(1, duration: 0.25)
let popOut = SKAction.scaleTo(0, duration: 0.25)
let remove = SKAction.removeFromParent()
let popInAndOut = SKAction.sequence([popIn, wait, popOut, remove])
let addBall = SKAction.runBlock { [unowned scene] in
    let ballRadius: CGFloat = 25
    let ball = SKShapeNode(circleOfRadius: ballRadius)
    var popInArea = scene.frame
    popInArea.inset(dx: ballRadius, dy: ballRadius)
    ball.position = popInArea.randomPoint
    ball.xScale = 0
    ball.yScale = 0
    ball.runAction(popInAndOut)
    scene.addChild(ball)
}
scene.runAction(SKAction.repeatActionForever(SKAction.sequence([addBall, wait])))
(Just make sure to also paste in the random generators, or copy them into the playground's Sources, and open the assistant editor so you can see the animation.)
I wanted to get an image of the screen, ignoring my app's own window.
I found a code example in Objective-C and tried to convert it to Swift.
Objective-C snippet:
// Get onscreen windows
CGWindowID windowIDToExcude = (CGWindowID)[myNSWindow windowNumber];
CFArrayRef onScreenWindows = CGWindowListCreate(kCGWindowListOptionOnScreenOnly, kCGNullWindowID);
CFMutableArrayRef finalList = CFArrayCreateMutableCopy(NULL, 0, onScreenWindows);
for (long i = CFArrayGetCount(finalList) - 1; i >= 0; i--) {
    CGWindowID window = (CGWindowID)(uintptr_t)CFArrayGetValueAtIndex(finalList, i);
    if (window == windowIDToExcude)
        CFArrayRemoveValueAtIndex(finalList, i);
}
// Get the composite image
CGImageRef ref = CGWindowListCreateImageFromArray(myRectToGrab, finalList, kCGWindowListOptionAll);
My version in Swift (this is as far as I managed to get):
// Get onscreen windows
let windowIDToExcude = myNSWindow.windowNumber!
let onScreenWindows = CGWindowListCreate(kCGWindowListOptionOnScreenOnly, kCGNullWindowID)
let finalList = CFArrayCreateMutableCopy(nil, 0, onScreenWindows)
for var i = CFArrayGetCount(finalList) - 1; i >= 0; i-=1 {
    var window: CGWindowID = (uintptr_t(CFArrayGetValueAtIndex(finalList, i)) as! CGWindowID)
    if window == windowIDToExcude {
        CFArrayRemoveValueAtIndex(finalList, i)
    }
}
// Get the composite image
var ref = CGWindowListCreateImageFromArray(myRectToGrab, finalList, kCGWindowListOptionAll)
But it does not work in Swift 2.0 and I have no idea why.
In particular, this line can't be compiled:
CGWindowListCreate(kCGWindowListOptionOnScreenOnly, kCGNullWindowID)
Apparently CGWindowListCreate, kCGWindowListOptionOnScreenOnly and kCGNullWindowID no longer exist.
Can you try this:
let imageRef = CGWindowListCreateImage(self.view.frame, CGWindowListOption.OptionOnScreenBelowWindow, CGWindowID(self.view.window!.windowNumber), CGWindowImageOption.Default)
let image = NSImage(CGImage: imageRef!, size: self.view.frame.size)
self.imageView.image = image
That does the trick for me.
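On newer Swift toolchains the same call is just spelled a little differently. Here is a sketch under that assumption (the window parameter is whatever window you want to exclude, mirroring the snippet above); note that recent macOS releases steer this kind of capture toward ScreenCaptureKit, so treat this as the legacy path:
import Cocoa

// Capture everything on screen below the given window, i.e. excluding it and anything above it.
func captureScreen(excluding window: NSWindow, rect: CGRect) -> NSImage? {
    guard let cgImage = CGWindowListCreateImage(rect,
                                                .optionOnScreenBelowWindow,
                                                CGWindowID(window.windowNumber),
                                                []) else { return nil }
    return NSImage(cgImage: cgImage, size: rect.size)
}

// Hypothetical usage from a view controller, as in the answer above:
// imageView.image = captureScreen(excluding: view.window!, rect: view.frame)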
In iOS 8 the dimensions returned are 0,0:
CMVideoDimensions dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription);
This was working on iOS 7, so how can I find out the supported video dimensions? I need to know the video aspect ratio.
You need to wait for the AVCaptureInputPortFormatDescriptionDidChangeNotification
- (void)avCaptureInputPortFormatDescriptionDidChangeNotification:(NSNotification *)notification {
    AVCaptureInput *input = [self.recorder.captureSession.inputs objectAtIndex:0];
    AVCaptureInputPort *port = [input.ports objectAtIndex:0];
    CMFormatDescriptionRef formatDescription = port.formatDescription;
    if (formatDescription) {
        CMVideoDimensions dimensions = CMVideoFormatDescriptionGetDimensions(formatDescription);
        if ((dimensions.width == 0) || (dimensions.height == 0)) {
            return;
        }
        CGFloat aspect = (CGFloat)dimensions.width / (CGFloat)dimensions.height;
        if (floor(NSFoundationVersionNumber) > NSFoundationVersionNumber_iOS_7_1) {
            // since iOS 8 the aspect ratio is inverted
            // remove this check if iOS 7 will not be supported
            aspect = 1.f / aspect;
        }
    }
}
Provided you're tracking the device being used, you can access the current format from activeFormat: https://developer.apple.com/library/ios/documentation/AVFoundation/Reference/AVCaptureDevice_Class/index.html#//apple_ref/occ/instp/AVCaptureDevice/activeFormat
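If you go that route, here is a minimal Swift sketch, assuming you grab the default video device (use whatever device you actually configured; error handling omitted):
import AVFoundation

// Read the dimensions of the device's currently active format.
if let device = AVCaptureDevice.default(for: .video) {
    let dimensions = CMVideoFormatDescriptionGetDimensions(device.activeFormat.formatDescription)
    let aspect = Double(dimensions.width) / Double(dimensions.height)
    print("Active format: \(dimensions.width)x\(dimensions.height), aspect \(aspect)")
}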
I recently ran into this particular issue; here's the Swift 5 version for those who need it too:
import Foundation
import AVFoundation
class MySessionManager: NSObject {
    static let notificationName = "AVCaptureInputPortFormatDescriptionDidChangeNotification"

    let session: AVCaptureSession
    var videoCaptureDimensions: CMVideoDimensions?

    init(session: AVCaptureSession) {
        self.session = session
        super.init()
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(formatDescription(didChange:)),
            name: .init(Self.notificationName),
            object: nil
        )
    }

    deinit { NotificationCenter.default.removeObserver(self) }

    @objc func formatDescription(didChange notification: NSNotification) {
        guard
            let input = session.inputs.first,
            let port = input.ports.first,
            let formatDesc = port.formatDescription
        else { return }
        var dimensions = CMVideoFormatDescriptionGetDimensions(formatDesc)
        // ... perform any necessary dimension adjustments ...
        videoCaptureDimensions = dimensions
    }
}
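And a minimal usage sketch for the class above, assuming the session's inputs are configured elsewhere (the names here are hypothetical):
// Keep a strong reference to the manager for as long as you need the dimensions.
let captureSession = AVCaptureSession()
let sessionManager = MySessionManager(session: captureSession)
// ... add inputs/outputs to captureSession, then:
captureSession.startRunning()
// Once the notification fires, sessionManager.videoCaptureDimensions holds the current size.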
Consider this code:
CGFloat largerLineSpacing = kStreamCellParagraphSpacing;
CTParagraphStyleSetting paragraphSettings[1] = {
    { kCTParagraphStyleSpecifierParagraphSpacing, sizeof(CGFloat), &largerLineSpacing }
};
CTParagraphStyleRef paragraphStyle = CTParagraphStyleCreate(paragraphSettings, sizeof(*paragraphSettings));
This code crashes with an EXC_BAD_ACCESS when running on an iPad 1 (iOS 5.1), but not in a 5.1 simulator or on an iPad 3 (iOS 6.0). My C is weak: am I making a dumb mistake with sizeof?
The docs for CTParagraphStyleCreate say that its second argument is the number of CTParagraphStyleSetting instances in the paragraphSettings array (1 in your case), not a size in bytes. Passing sizeof(*paragraphSettings) hands it the byte size of one setting (12 on a 32-bit device), so Core Text tries to read that many settings and runs off the end of your one-element array.
If you change your code to
CTParagraphStyleRef paragraphStyle = CTParagraphStyleCreate(paragraphSettings, 1);
it should work. Or, if you want to cope with adding more settings in the future, you could try:
int numElems = sizeof(paragraphSettings) / sizeof(paragraphSettings[0]);
CTParagraphStyleRef paragraphStyle = CTParagraphStyleCreate(paragraphSettings, numElems);
// Alternative: give the count a name so the array declaration and the call can't drift apart.
static CFIndex const settingCount = 1;
CTParagraphStyleSetting paragraphSettings[settingCount] = {
    { kCTParagraphStyleSpecifierParagraphSpacing, sizeof(CGFloat), &largerLineSpacing }
};
CTParagraphStyleRef paragraphStyle = CTParagraphStyleCreate(paragraphSettings, settingCount);
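If you ever need the same call from Swift, the count is explicit there as well. A rough sketch, where spacing is a hypothetical stand-in for kStreamCellParagraphSpacing:
import CoreGraphics
import CoreText

var spacing: CGFloat = 12
let style: CTParagraphStyle = withUnsafePointer(to: &spacing) { ptr in
    // One setting, and the count passed to CTParagraphStyleCreate is 1, not a byte size.
    var setting = CTParagraphStyleSetting(spec: .paragraphSpacing,
                                          valueSize: MemoryLayout<CGFloat>.size,
                                          value: ptr)
    return CTParagraphStyleCreate(&setting, 1)
}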