I'm using the code below:
- (void)panGestureDetected:(UIPanGestureRecognizer *)recognizer
{
static CGRect originalFrame;
if (recognizer.state == UIGestureRecognizerStateBegan)
{
originalFrame = recognizer.view.frame;
}
else if (recognizer.state == UIGestureRecognizerStateChanged)
{
CGPoint translate = [recognizer translationInView:recognizer.view.superview];
CGRect newFrame = CGRectMake(fmin(recognizer.view.superview.frame.size.width - originalFrame.size.width, fmax(originalFrame.origin.x + translate.x,0.0)),
fmin(recognizer.view.superview.frame.size.height - originalFrame.size.height, fmax(originalFrame.origin.y + translate.y, 0.0)),
originalFrame.size.width,
originalFrame.size.height);
recognizer.view.frame = newFrame;
} else if (recognizer.state == UIGestureRecognizerStateEnded)
{
[self performSelectorInBackground:@selector(ChangeViewAlphaTo1) withObject:nil];
}
}
The main problem is that you are trying to solve two constraints in one line.
Try to break things down when you have an issue. Massive statements are hard to read and even harder to debug.
CGRect newFrame = CGRectMake(fmin(recognizer.view.superview.frame.size.width - originalFrame.size.width, fmax(originalFrame.origin.x + translate.x,0.0)),
fmin(recognizer.view.superview.frame.size.height - originalFrame.size.height, fmax(originalFrame.origin.y + translate.y, 0.0)),
originalFrame.size.width,
originalFrame.size.height);
is barely readable and when you revisit this in 6 months time you or another coder will not be able to tell what you were trying to do.
Try replacing that monster with…
[self translateView:recognizer.view limitedBySuperViewBounds:translate];
All of a sudden, the method name tells you what you are doing.
-(void)translateView:(UIView *)aview limitedBySuperViewBounds:(CGPoint)translate
{
CGRect frameToTranslate = aview.frame;
CGRect superviewBounds = aview.superview.bounds;
frameToTranslate.origin.x = frameToTranslate.origin.x + translate.x;
frameToTranslate.origin.y = frameToTranslate.origin.y + translate.y;
if (frameToTranslate.origin.x < 0) {
frameToTranslate.origin.x = 0;
}
else if (CGRectGetMaxX(frameToTranslate) > CGRectGetWidth(superviewBounds)){
frameToTranslate.origin.x = CGRectGetWidth(superviewBounds) - CGRectGetWidth(frameToTranslate);
}
if (frameToTranslate.origin.y < 0) {
frameToTranslate.origin.y = 0;
}
else if (CGRectGetMaxY(frameToTranslate) > CGRectGetHeight(superviewBounds)){
frameToTranslate.origin.y = CGRectGetHeight(superviewBounds) - CGRectGetHeight(frameToTranslate);
}
aview.frame = frameToTranslate;
}
This isn't just my opinion; this is Clean Code.
I am creating a game using Sprite Kit but I seem to have trouble with bodyWithTexture when using it with collisions. bodyWithRectangle and circleOfRadius work fine, but when I use bodyWithTexture it looks like the didBeginContact method is being called more than once.
Here is an example of the code I'm using:
-(SKNode *) createPlayer
{
level3Player = [SKNode node];
player3Sprite = [SKSpriteNode spriteNodeWithImageNamed:@"character.png"];
level3Player.physicsBody = [SKPhysicsBody bodyWithCircleOfRadius:player3Sprite.size.width/2];
level3Player.physicsBody.categoryBitMask = playerCategory3;
level3Player.physicsBody.contactTestBitMask = platformCategory3 | rockCategory;
[level3Player setPosition:CGPointMake(self.size.height/2, screenHeightL3 *11)];
level3Player.physicsBody.affectedByGravity = NO;
player3Sprite.physicsBody.dynamic = YES;
level3Player.zPosition = 2;
[level3Player setScale:0.6];
[level3Player addChild:player3Sprite];
return level3Player;
}
-(void) addRocksL3
{
int randomNumber = arc4random_uniform(300);
rock1 = [SKSpriteNode spriteNodeWithImageNamed:@"AsteroidFire.png"];
rock1.physicsBody = [SKPhysicsBody bodyWithCircleOfRadius:rock1.size.width/2];
rock1.position = CGPointMake(self.size.width * 3, randomNumber);
rock1.physicsBody.categoryBitMask = rockCategory;
rock1.physicsBody.contactTestBitMask = playerCategory3;
rock1.physicsBody.dynamic = NO;
rock1.physicsBody.affectedByGravity = NO;
rock1.zPosition = 2;
[rock1 setScale:0.3];
[foregroundLayerL3 addChild:rock1];
[self addChild:rock1];
}
-(void) didBeginContact:(SKPhysicsContact*) contact
{
SKPhysicsBody *firstBody, *secondBody;
if(contact.bodyA.categoryBitMask < contact.bodyB.categoryBitMask)
{
firstBody = contact.bodyA;
secondBody = contact.bodyB;
}
else
{
firstBody = contact.bodyB;
secondBody = contact.bodyA;
}
if((secondBody.categoryBitMask == platformCategory3) | redPlatformCategory)
{
level3Player.physicsBody.velocity = CGVectorMake(0, 100);
level3Player.physicsBody.affectedByGravity = YES;
player3Sprite.texture = [SKTexture textureWithImageNamed:@"goo5.png"];
SKAction *sound1 = [SKAction playSoundFileNamed:@"squish.wav" waitForCompletion:NO];
[self runAction:sound1];
gestureRec3.enabled = YES;
}
if(secondBody.categoryBitMask == rockCategory)
{
gestureRec3.enabled = YES;
playerL3.physicsBody.velocity = CGVectorMake(0, 200);
SKAction *playSound = [SKAction playSoundFileNamed:@"Hurt.wav" waitForCompletion:NO];
[self runAction:playSound];
hitCountL3++;
}
switch (hitCountL3)
{
case 1:
[health1Level3 removeFromParent];
[self healthNodelevel31];
break;
case 2:
[hit1L3 removeFromParent];
[self healthNodeLevel32];
break;
case 3:
[hit2L3 removeFromParent];
player3Sprite.texture = [SKTexture textureWithImageNamed:@"splat.png"];
[self gameOverSplatLevel3];
didDie3 = true;
SKAction *playSplat = [SKAction playSoundFileNamed:@"splat.wav" waitForCompletion:NO];
[self runAction:playSplat];
break;
}
}
When I use this code my character will sometimes take 1 hit and sometimes take all 3 hits when I collide with the rock. I could use circleOfRadius, which works fine, but it's not what I am really looking for. Is there any way I could use bodyWithTexture so my character only takes 1 hit each time?
If you are experiencing multiple collisions, e.g. didBeginContact is called multiple times, you have a few options.
Without looking at your code, let's say you have a player and a rock. Each time the player collides with the rock you want to remove the rock, so you remove it from its parent. But before that, make this change in your code (pseudo code):
if([rockNode parent]){
[rockNode removeFromParent];
}
The other way would be to subclass SKSpriteNode into a Rock class and add a custom boolean property that changes its value once the first collision happens. But this is just unnecessary complication:
if(rockNode.canCollide){
[rockNode removeFromParent];
rockNode.canCollide = NO;
}
I had many problems getting the correct number of collisions; sometimes it would register one, sometimes none. So I tried this and it works. The only thing to change is the didBeginContact method.
I will presume that you declared categories like this:
//define collision categories
static const uint32_t category1 = 0x1 << 0;
static const uint32_t category2 = 0x1 << 1;
static const uint32_t category3 = 0x1 << 2;
Try replacing your code in didBeginContact with this one. I remember that correct collisions finally started working after I did this.
-(void)didBeginContact:(SKPhysicsContact *)contact
{
SKNode *newFirstBody = contact.bodyA.node;
SKNode *newSecondBody = contact.bodyB.node;
uint32_t collision = newFirstBody.physicsBody.categoryBitMask | newSecondBody.physicsBody.categoryBitMask;
if (collision == (category1 | category2))
{
NSLog(@"hit");
}
}
Hope it helps
I don't know where to start with this one. Obviously CGRectIntersectsRect will not work in this case, and you'll see why.
I have a subclass of UIView that has a UIImageView inside it that is placed in the exact center of the UIView:
I then rotate the custom UIView to maintain the frame of the inner UIImageView while still being able to perform a CGAffineRotation. The resulting frame looks something like this:
I need to prevent users from making these UIImageViews intersect, but I have no idea how to check intersection between the two UIImageViews, since not only do their frames not apply to the parent UIView, but also, they are rotated without it affecting their frames.
The only results from my attempts have been unsuccessful.
Any ideas?
The following algorithm can be used to check if two (rotated or otherwise transformed) views overlap:
Use [view convertPoint:point toView:nil] to convert the 4 boundary points of both views
to a common coordinate system (the window coordinates).
The converted points form two convex quadrilaterals.
Use the SAT (Separating Axis Theorem) to check if the quadrilaterals intersect.
This: http://www.geometrictools.com/Documentation/MethodOfSeparatingAxes.pdf is another description of the algorithm containing pseudo-code, more can be found by googling for "Separating Axis Theorem".
Update: I have tried to create an Objective-C method for the "Separating Axis Theorem", and this is what I got. So far I have done only a few tests, so I hope that there are not too many errors.
- (BOOL)convexPolygon:(CGPoint *)poly1 count:(int)count1 intersectsWith:(CGPoint *)poly2 count:(int)count2;
tests if 2 convex polygons intersect. Both polygons are given as a CGPoint array of the vertices.
- (BOOL)view:(UIView *)view1 intersectsWith:(UIView *)view2
tests (as described above) if two arbitrary views intersect.
Implementation:
- (void)projectionOfPolygon:(CGPoint *)poly count:(int)count onto:(CGPoint)perp min:(CGFloat *)minp max:(CGFloat *)maxp
{
CGFloat minproj = MAXFLOAT;
CGFloat maxproj = -MAXFLOAT;
for (int j = 0; j < count; j++) {
CGFloat proj = poly[j].x * perp.x + poly[j].y * perp.y;
if (proj > maxproj)
maxproj = proj;
if (proj < minproj)
minproj = proj;
}
*minp = minproj;
*maxp = maxproj;
}
-(BOOL)convexPolygon:(CGPoint *)poly1 count:(int)count1 intersectsWith:(CGPoint *)poly2 count:(int)count2
{
for (int i = 0; i < count1; i++) {
// Perpendicular vector for one edge of poly1:
CGPoint p1 = poly1[i];
CGPoint p2 = poly1[(i+1) % count1];
CGPoint perp = CGPointMake(- (p2.y - p1.y), p2.x - p1.x);
// Projection intervals of poly1, poly2 onto perpendicular vector:
CGFloat minp1, maxp1, minp2, maxp2;
[self projectionOfPolygon:poly1 count:count1 onto:perp min:&minp1 max:&maxp1];
[self projectionOfPolygon:poly2 count:count2 onto:perp min:&minp2 max:&maxp2];
// If projections do not overlap then we have a "separating axis"
// which means that the polygons do not intersect:
if (maxp1 < minp2 || maxp2 < minp1)
return NO;
}
// And now the other way around with edges from poly2:
for (int i = 0; i < count2; i++) {
CGPoint p1 = poly2[i];
CGPoint p2 = poly2[(i+1) % count2];
CGPoint perp = CGPointMake(- (p2.y - p1.y), p2.x - p1.x);
CGFloat minp1, maxp1, minp2, maxp2;
[self projectionOfPolygon:poly1 count:count1 onto:perp min:&minp1 max:&maxp1];
[self projectionOfPolygon:poly2 count:count2 onto:perp min:&minp2 max:&maxp2];
if (maxp1 < minp2 || maxp2 < minp1)
return NO;
}
// No separating axis found, then the polygons must intersect:
return YES;
}
- (BOOL)view:(UIView *)view1 intersectsWith:(UIView *)view2
{
CGPoint poly1[4];
CGRect bounds1 = view1.bounds;
poly1[0] = [view1 convertPoint:bounds1.origin toView:nil];
poly1[1] = [view1 convertPoint:CGPointMake(bounds1.origin.x + bounds1.size.width, bounds1.origin.y) toView:nil];
poly1[2] = [view1 convertPoint:CGPointMake(bounds1.origin.x + bounds1.size.width, bounds1.origin.y + bounds1.size.height) toView:nil];
poly1[3] = [view1 convertPoint:CGPointMake(bounds1.origin.x, bounds1.origin.y + bounds1.size.height) toView:nil];
CGPoint poly2[4];
CGRect bounds2 = view2.bounds;
poly2[0] = [view2 convertPoint:bounds2.origin toView:nil];
poly2[1] = [view2 convertPoint:CGPointMake(bounds2.origin.x + bounds2.size.width, bounds2.origin.y) toView:nil];
poly2[2] = [view2 convertPoint:CGPointMake(bounds2.origin.x + bounds2.size.width, bounds2.origin.y + bounds2.size.height) toView:nil];
poly2[3] = [view2 convertPoint:CGPointMake(bounds2.origin.x, bounds2.origin.y + bounds2.size.height) toView:nil];
return [self convexPolygon:poly1 count:4 intersectsWith:poly2 count:4];
}
Swift version (added this behaviour to UIView via an extension):
extension UIView {
func projection(of polygon: [CGPoint], perpendicularVector: CGPoint) -> (CGFloat, CGFloat) {
var minproj = CGFloat.greatestFiniteMagnitude
var maxproj = -CGFloat.greatestFiniteMagnitude
for j in 0..<polygon.count {
let proj = polygon[j].x * perpendicularVector.x + polygon[j].y * perpendicularVector.y
if proj > maxproj {
maxproj = proj
}
if proj < minproj {
minproj = proj
}
}
return (minproj, maxproj)
}
func convex(polygon: [CGPoint], intersectsWith polygon2: [CGPoint]) -> Bool {
//
let count1 = polygon.count
for i in 0..<count1 {
let p1 = polygon[i]
let p2 = polygon[(i+1) % count1]
let perpendicularVector = CGPoint(x: -(p2.y - p1.y), y: p2.x - p1.x)
let m1 = projection(of: polygon, perpendicularVector: perpendicularVector)
let minp1 = m1.0
let maxp1 = m1.1
let m2 = projection(of: polygon2, perpendicularVector: perpendicularVector)
let minp2 = m2.0
let maxp2 = m2.1
if maxp1 < minp2 || maxp2 < minp1 {
return false
}
}
//
let count2 = polygon2.count
for i in 0..<count2 {
let p1 = polygon2[i]
let p2 = polygon2[(i+1) % count2]
let perpendicularVector = CGPoint(x: -(p2.y - p1.y), y: p2.x - p1.x)
let m1 = projection(of: polygon, perpendicularVector: perpendicularVector)
let minp1 = m1.0
let maxp1 = m1.1
let m2 = projection(of: polygon2, perpendicularVector: perpendicularVector)
let minp2 = m2.0
let maxp2 = m2.1
if maxp1 < minp2 || maxp2 < minp1 {
return false
}
}
//
return true
}
func intersects(with someView: UIView) -> Bool {
//
var points1 = [CGPoint]()
let bounds1 = bounds
let p11 = convert(bounds1.origin, to: nil)
let p21 = convert(CGPoint(x: bounds1.origin.x + bounds1.size.width, y: bounds1.origin.y), to: nil)
let p31 = convert(CGPoint(x: bounds1.origin.x + bounds1.size.width, y: bounds1.origin.y + bounds1.size.height) , to: nil)
let p41 = convert(CGPoint(x: bounds1.origin.x, y: bounds1.origin.y + bounds1.size.height), to: nil)
points1.append(p11)
points1.append(p21)
points1.append(p31)
points1.append(p41)
//
var points2 = [CGPoint]()
let bounds2 = someView.bounds
let p12 = someView.convert(bounds2.origin, to: nil)
let p22 = someView.convert(CGPoint(x: bounds2.origin.x + bounds2.size.width, y: bounds2.origin.y), to: nil)
let p32 = someView.convert(CGPoint(x: bounds2.origin.x + bounds2.size.width, y: bounds2.origin.y + bounds2.size.height) , to: nil)
let p42 = someView.convert(CGPoint(x: bounds2.origin.x, y: bounds2.origin.y + bounds2.size.height), to: nil)
points2.append(p12)
points2.append(p22)
points2.append(p32)
points2.append(p42)
//
return convex(polygon: points1, intersectsWith: points2)
}
}
I have a PlayerView class for displaying AVPlayer's playback. The code is from the documentation.
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
@interface PlayerView : UIView
@property (nonatomic) AVPlayer *player;
@end
@implementation PlayerView
+ (Class)layerClass {
return [AVPlayerLayer class];
}
- (AVPlayer*)player {
return [(AVPlayerLayer *)[self layer] player];
}
- (void)setPlayer:(AVPlayer *)player {
[(AVPlayerLayer *)[self layer] setPlayer:player];
}
@end
I set up my AVPlayer (containing a 320x240 video asset) in this PlayerView (with frame.size.width = 100, frame.size.height = 100) and my video is resized. How can I get the size of the video after adding it to the PlayerView?
iOS 7.0 added a new feature: AVPlayerLayer has a videoRect property.
This worked for me when you don't have the AVPlayerLayer:
- (CGRect)videoRect {
// @see http://stackoverflow.com/a/6565988/1545158
AVAssetTrack *track = [[self.player.currentItem.asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
if (!track) {
return CGRectZero;
}
CGSize trackSize = [track naturalSize];
CGSize videoViewSize = self.videoView.bounds.size;
CGFloat trackRatio = trackSize.width / trackSize.height;
CGFloat videoViewRatio = videoViewSize.width / videoViewSize.height;
CGSize newSize;
if (videoViewRatio > trackRatio) {
newSize = CGSizeMake(trackSize.width * videoViewSize.height / trackSize.height, videoViewSize.height);
} else {
newSize = CGSizeMake(videoViewSize.width, trackSize.height * videoViewSize.width / trackSize.width);
}
CGFloat newX = (videoViewSize.width - newSize.width) / 2;
CGFloat newY = (videoViewSize.height - newSize.height) / 2;
return CGRectMake(newX, newY, newSize.width, newSize.height);
}
Found a solution:
Add to PlayerView class:
- (CGRect)videoContentFrame {
AVPlayerLayer *avLayer = (AVPlayerLayer *)[self layer];
// AVPlayerLayerContentLayer
CALayer *layer = (CALayer *)[[avLayer sublayers] objectAtIndex:0];
CGRect transformedBounds = CGRectApplyAffineTransform(layer.bounds, CATransform3DGetAffineTransform(layer.sublayerTransform));
return transformedBounds;
}
Here's the solution that's working for me; it takes into account the positioning of the AVPlayer within the view. I just added this to the PlayerView custom class. I had to solve this because videoRect doesn't appear to work in 10.7.
- (NSRect) videoRect {
NSRect theVideoRect = NSMakeRect(0,0,0,0);
NSRect theLayerRect = self.playerLayer.frame;
NSSize theNaturalSize = NSSizeFromCGSize([[[self.movie asset] tracksWithMediaType:AVMediaTypeVideo][0] naturalSize]);
float movieAspectRatio = theNaturalSize.width/theNaturalSize.height;
float viewAspectRatio = theLayerRect.size.width/theLayerRect.size.height;
if (viewAspectRatio < movieAspectRatio) {
theVideoRect.size.width = theLayerRect.size.width;
theVideoRect.size.height = theLayerRect.size.width/movieAspectRatio;
theVideoRect.origin.x = 0;
theVideoRect.origin.y = (theLayerRect.size.height/2) - (theVideoRect.size.height/2);
}
else if (viewAspectRatio > movieAspectRatio) {
theVideoRect.size.width = movieAspectRatio * theLayerRect.size.height;
theVideoRect.size.height = theLayerRect.size.height;
theVideoRect.origin.x = (theLayerRect.size.width/2) - (theVideoRect.size.width/2);
theVideoRect.origin.y = 0;
}
return theVideoRect;
}
Here is Eric Badros' answer ported to iOS. I also added preferredTransform handling. This assumes _player is an AVPlayer:
- (CGRect) videoRect {
CGRect theVideoRect = CGRectZero;
// Replace this with whatever frame your AVPlayer is playing inside of:
CGRect theLayerRect = self.playerLayer.frame;
AVAssetTrack *track = [_player.currentItem.asset tracksWithMediaType:AVMediaTypeVideo][0];
CGSize theNaturalSize = [track naturalSize];
theNaturalSize = CGSizeApplyAffineTransform(theNaturalSize, track.preferredTransform);
theNaturalSize.width = fabs(theNaturalSize.width);
theNaturalSize.height = fabs(theNaturalSize.height);
CGFloat movieAspectRatio = theNaturalSize.width / theNaturalSize.height;
CGFloat viewAspectRatio = theLayerRect.size.width / theLayerRect.size.height;
// Note change this *greater than* to a *less than* if your video will play in aspect fit mode (as opposed to aspect fill mode)
if (viewAspectRatio > movieAspectRatio) {
theVideoRect.size.width = theLayerRect.size.width;
theVideoRect.size.height = theLayerRect.size.width / movieAspectRatio;
theVideoRect.origin.x = 0;
theVideoRect.origin.y = (theLayerRect.size.height - theVideoRect.size.height) / 2;
} else if (viewAspectRatio < movieAspectRatio) {
theVideoRect.size.width = movieAspectRatio * theLayerRect.size.height;
theVideoRect.size.height = theLayerRect.size.height;
theVideoRect.origin.x = (theLayerRect.size.width - theVideoRect.size.width) / 2;
theVideoRect.origin.y = 0;
}
return theVideoRect;
}
Here is a Swift solution based on Andrey Banshchikov's answer. His solution is especially useful when you don't have access to the AVPlayerLayer.
func currentVideoFrameSize(playerView: AVPlayerView, player: AVPlayer) -> CGSize {
// See https://stackoverflow.com/a/40164496/1877617
let track = player.currentItem?.asset.tracks(withMediaType: .video).first
if let track = track {
let trackSize = track.naturalSize
let videoViewSize = playerView.bounds.size
let trackRatio = trackSize.width / trackSize.height
let videoViewRatio = videoViewSize.width / videoViewSize.height
var newSize: CGSize
if videoViewRatio > trackRatio {
newSize = CGSize(width: trackSize.width * videoViewSize.height / trackSize.height, height: videoViewSize.height)
} else {
newSize = CGSize(width: videoViewSize.width, height: trackSize.height * videoViewSize.width / trackSize.width)
}
return newSize
}
return CGSize.zero
}
Andrey's answer worked for file/local/downloaded video, but didn't work for streamed HLS video. On a second attempt, I was able to find the video track right inside the "tracks" property of currentItem. Also rewritten in Swift.
private func videoRect() -> CGRect {
// Based on https://stackoverflow.com/a/40164496 - originally objective-c
var trackTop: AVAssetTrack? = nil
if let track1 = self.player?.currentItem?.asset.tracks(withMediaType: AVMediaType.video).first {
trackTop = track1
}
else {
// For some reason the above way wouldn't find the "track" for streamed HLS video.
// This seems to work for streamed HLS.
if let tracks = self.player?.currentItem?.tracks {
for avplayeritemtrack in tracks {
if let assettrack = avplayeritemtrack.assetTrack {
if assettrack.mediaType == .video {
// Found an assetTrack here?
trackTop = assettrack
break
}
}
}
}
}
guard let track = trackTop else {
print("Failed getting track")
return CGRect.zero
}
let trackSize = track.naturalSize
let videoViewSize = self.view.bounds.size
let trackRatio = trackSize.width / trackSize.height
let videoViewRatio = videoViewSize.width / videoViewSize.height
let newSize: CGSize
if videoViewRatio > trackRatio {
newSize = CGSize.init(width: trackSize.width * videoViewSize.height / trackSize.height, height: videoViewSize.height)
}
else {
newSize = CGSize.init(width: videoViewSize.width, height: trackSize.height * videoViewSize.width / trackSize.width)
}
let newX: CGFloat = (videoViewSize.width - newSize.width) / 2
let newY: CGFloat = (videoViewSize.height - newSize.height) / 2
return CGRect.init(x: newX, y: newY, width: newSize.width, height: newSize.height)
}
Much simpler approach
I found another way. It seems to be much easier if you're using AVPlayerViewController. Why didn't I find this earlier?
return self.videoBounds
2022 SwiftUI
struct PlayerViewController: UIViewControllerRepresentable {
private let avController = AVPlayerViewController()
var videoURL: URL?
private var player: AVPlayer {
return AVPlayer(url: videoURL!)
}
// >>> HERE <<<
func getVideoFrame() -> CGRect {
self.avController.videoBounds
}
// >>> HERE <<<
func makeUIViewController(context: Context) -> AVPlayerViewController {
avController.modalPresentationStyle = .fullScreen
avController.player = player
avController.player?.play()
return avController
}
func updateUIViewController(_ playerController: AVPlayerViewController, context: Context) {}
}
Is there a method in Cocos2d like CGRectIntersectsRect, except instead of limiting it to one sprite, it checks for ALL objects?
You can do it in a loop. This example is a method for a sprite instance (i.e. the player) to test against an array of other sprites.
- (BOOL) checkForCollision
{
BOOL didCollide = NO;
CGRect myRect;
CGRect testRect;
// assuming anchor point is in the center
myRect = CGRectMake(self.position.x - self.contentSize.width /2,
self.position.y - self.contentSize.height /2,
self.contentSize.width,
self.contentSize.height);
for (CCSprite * currSprite in listOfWallSprites) {
testRect = CGRectMake(currSprite.position.x - currSprite.contentSize.width /2,
currSprite.position.y - currSprite.contentSize.height /2,
currSprite.contentSize.width,
currSprite.contentSize.height);
if ( CGRectIntersectsRect(myRect, testRect) ) {
didCollide = YES;
return didCollide;
}
}
return didCollide;
}
You can set up a check like this:
CGRect sprite1Rect = CGRectMake(
sprite1.position.x - (sprite1.contentSize.width/2),
sprite1.position.y - (sprite1.contentSize.height/2),
sprite1.contentSize.width,
sprite1.contentSize.height);
CGRect somethingRect = CGRectMake(
something.position.x - (something.contentSize.width/2),
something.position.y - (something.contentSize.height/2),
something.contentSize.width,
something.contentSize.height);
CGRect something2Rect = CGRectMake(
something2.position.x - (something2.contentSize.width/2),
something2.position.y - (something2.contentSize.height/2),
something2.contentSize.width,
something2.contentSize.height);
if (CGRectIntersectsRect(sprite1Rect, somethingRect) || CGRectIntersectsRect(sprite1Rect, something2Rect)) {
// do something
}
or
if (CGRectIntersectsRect(sprite1.frame, something.frame) || CGRectIntersectsRect(sprite1.frame, something2.frame)) {
// do something
}
This means that if sprite1 intersects something OR something2, then it will do something.
Hi, I'm pretty new to both frameworks, but maybe someone can point me in the right direction. Basically I'm trying to bounce a ball off a shape (works fine), but it would be great if the ball would rotate, too. Here is my (copy & paste) code:
// BallLayer.m
#import "BallLayer.h"
void updateShape(void* ptr, void* unused){
cpShape* shape = (cpShape*)ptr;
Sprite* sprite = shape->data;
if(sprite){
cpBody* body = shape->body;
[sprite setPosition:cpv(body->p.x, body->p.y)];
}
}
@implementation BallLayer
-(void)tick:(ccTime)dt{
cpSpaceStep(space, 1.0f/60.0f);
cpSpaceHashEach(space->activeShapes, &updateShape, nil);
}
-(void)setupChipmunk{
cpInitChipmunk();
space = cpSpaceNew();
space->gravity = cpv(0,-2000);
space->elasticIterations = 1;
[self schedule:@selector(tick:) interval:1.0f/60.0f];
cpBody* ballBody = cpBodyNew(200.0, cpMomentForCircle(100.0, 10, 10, cpvzero));
ballBody->p = cpv(150, 400);
cpSpaceAddBody(space, ballBody);
cpShape* ballShape = cpCircleShapeNew(ballBody, 20.0, cpvzero);
ballShape->e = 0.8;
ballShape->u = 0.8;
ballShape->data = ballSprite;
ballShape->collision_type = 1;
cpSpaceAddShape(space, ballShape);
cpBody* floorBody = cpBodyNew(INFINITY, INFINITY);
floorBody->p = cpv(0, 0);
cpShape* floorShape = cpSegmentShapeNew(floorBody, cpv(0,0), cpv(320,160), 0);
floorShape->e = 0.5;
floorShape->u = 0.1;
floorShape->collision_type = 0;
cpSpaceAddStaticShape(space, floorShape);
floorShape = cpSegmentShapeNew(floorBody, cpv(0,200), cpv(320,0), 0);
cpSpaceAddStaticShape(space, floorShape);
}
-(id)init{
self = [super init];
if(nil != self){
ballSprite = [Sprite spriteWithFile:@"ball2.png"];
[ballSprite setPosition:CGPointMake(150, 400)];
[self add:ballSprite];
[self setupChipmunk];
}
return self;
}
@end
please help me out.
Well, while I was writing up this post I found the solution :)
void updateShape(void* ptr, void* unused)
{
cpShape* shape = (cpShape*)ptr;
Sprite* sprite = shape->data;
if(sprite){
cpBody* body = shape->body;
[sprite setPosition:cpv(body->p.x, body->p.y)];
[sprite setRotation: (float) CC_RADIANS_TO_DEGREES( -body->a )];
}
}