Detect transparent parts of a sprite in cocos2d? - objective-c

I'm a beginner in Cocos2d. I have a sprite, and I want to ignore touches on the transparent areas of that sprite.
I'm aware of this answer, Cocos2d 2.0 - Ignoring touches to transparent areas of layers/sprites, and also this great article: http://www.learn-cocos2d.com/2011/12/fast-pixelperfect-collision-detection-cocos2d-code-1of2/.
I was able to make it work with KKPixelMaskSprite, but only when the sprite is created from a file, not from a batch node. Whenever I use a batch node (sprite sheet) to get the sprite, it stops working.
I have several sprites stacked on top of each other, and I want to detect it this way: if the touch is inside the current sprite's bounding box, is that part of the sprite transparent or not?
P.S. I'm using cocos2d 1.0. I don't want to use any physics engine for now; I just want to ignore touches on transparent areas of a sprite that was created using a batch node. How can I do that? Or is there any tool that could be helpful?
Thanks a lot in advance.

You can use a CGMutablePathRef to do non-rectangular collision detection for a sprite.
// Checking: convert the touch to the sprite's node space, then test it.
CGPoint loc = [mySprite convertToNodeSpace:touchPoint];
if ([mySprite isPointInsideMap:loc])
{
    // touched inside..
}

// Add this method to your MySprite class derived from CCSprite.
- (bool)isPointInsideMap:(CGPoint)inPoint
{
    return CGPathContainsPoint(mCollisionPath, NULL, inPoint, NO);
}
// Create the collision path once, e.g. in the sprite's init; mCollisionPath is a
// CGMutablePathRef instance variable of MySprite, and the points are in the
// sprite's node space. Release it later with CGPathRelease(mCollisionPath).
mCollisionPath = CGPathCreateMutable();
CGPathMoveToPoint(mCollisionPath, NULL, 0, 0);
CGPathAddLineToPoint(mCollisionPath, NULL, 11, 82);
CGPathAddLineToPoint(mCollisionPath, NULL, 42, 152);
CGPathAddLineToPoint(mCollisionPath, NULL, 86, 202);
CGPathAddLineToPoint(mCollisionPath, NULL, 169, 13);
CGPathCloseSubpath(mCollisionPath);

This answer is less concrete than you might expect, as I won't give you a code example, but this is how I'd implement it:
You have the location of the sprite's bounding box (the corner of the sprite, including the transparent area if applicable), and the position of the touch on screen. Using this information, you can work out the location of the touch inside the sprite. In other words, you can find the pixel touched, independent of the game screen.
Now that you have that pixel location (x and y), open the image (presumably a PNG) and get the RGBA value for that pixel. A PNG carries its transparency in the alpha channel: if the pixel at (x, y) is marked as fully transparent there, the touch landed on a transparent part of the sprite.
In short: if you can get the alpha value for the pixel in question and it is equal to 0, then the pixel is transparent.
edit: semantics ("alpha channel")
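To make that concrete, here is a minimal sketch of such a per-pixel alpha check using plain CoreGraphics. It assumes you already have the sprite's source image as a CGImageRef with an alpha channel and have converted the touch into the image's pixel coordinates; note that CoreGraphics measures y from the bottom of the image.
#include <CoreGraphics/CoreGraphics.h>
#include <stdint.h>

// Draws the single pixel of interest into a 1x1 alpha-only bitmap context and
// inspects the result. pixelX/pixelY are in image pixels, with y measured from
// the bottom of the image (CoreGraphics convention).
static bool pixelIsTransparent(CGImageRef image, int pixelX, int pixelY)
{
    uint8_t alpha = 0;
    CGContextRef ctx = CGBitmapContextCreate(&alpha, 1, 1, 8, 1, NULL, kCGImageAlphaOnly);
    // Offset the image so that (pixelX, pixelY) lands on the context's only pixel.
    CGContextDrawImage(ctx,
                       CGRectMake(-pixelX, -pixelY,
                                  CGImageGetWidth(image), CGImageGetHeight(image)),
                       image);
    CGContextRelease(ctx);
    return alpha == 0;  // an alpha of 0 means the pixel is fully transparent
}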

I would try to change the bounding box in your "is this touch for me" check, and reduce it for the different sprites...
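If it helps, here is a minimal sketch of that idea using plain CoreGraphics geometry calls; the inset amount is made up, and the bounding box and touch location are assumed to already be in the same coordinate space.
#include <CoreGraphics/CGGeometry.h>

// Shrink the sprite's bounding box before hit-testing so the transparent
// border around the artwork no longer registers as a touch.
static bool touchHitsShrunkenBox(CGRect spriteBoundingBox, CGPoint touchLocation)
{
    CGRect hitBox = CGRectInset(spriteBoundingBox, 10.0f, 10.0f); // trim 10 points from every side
    return CGRectContainsPoint(hitBox, touchLocation);
}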

Related

How does the viewport work in libgdx and how to set it up correctly?

I am learning libgdx and I'm confused by the viewport and how objects are arranged on the screen. Let's assume my 2D world is 2x2 units wide and high. Now I create a camera whose viewport is 1x1, so I should see 25% of my world. Displays are usually not square, so I would expect libgdx to squish and stretch this square to fit the display.
For a side scroller you would set the viewport height the same as the world height and adjust the viewport width according to the aspect ratio. Independent of the aspect ratio of your display, you always see the full height of the world but a different expanse along the x-axis. Somebody with a wider-than-high display can see further along the x-axis than somebody with a square display, but proportions are maintained and there is no distortion. Up to this point I thought I had mastered how the viewport logic works.
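For comparison, here is a small sketch of the computation described above. It is written in plain C++ purely to show the arithmetic (libgdx itself is Java), and the function name and parameters are made up for illustration.
// Keep the visible world height fixed and derive the viewport width from the
// window's aspect ratio, so proportions are preserved on any display.
float viewportWidthFor(float worldHeight, int screenWidthPx, int screenHeightPx)
{
    float aspectRatio = static_cast<float>(screenWidthPx) / static_cast<float>(screenHeightPx);
    return worldHeight * aspectRatio; // e.g. a height of 5.0f on a 16:9 screen gives roughly 8.9 units of width
}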
I am working with the book "Learning LibGDX Game Development", in which you develop the game "Canyon Bunny". The source code can be found here:
Canyon Bunny - GitHub
In the WorldRenderer class you find the initialization of the camera:
private void init() {
    batch = new SpriteBatch();
    camera = new OrthographicCamera(Constants.VIEWPORT_WIDTH, Constants.VIEWPORT_HEIGHT);
    camera.position.set(0, 0, 0);
    camera.update();
}
The viewport constants are stored in a separate Constants class:
public class Constants {
    // Visible game world is 5 meters wide
    public static final float VIEWPORT_WIDTH = 5.0f;
    // Visible game world is 5 meters tall
    public static final float VIEWPORT_HEIGHT = 5.0f;
}
As you can see, the viewport is 5x5. But the game objects have the right proportions on my phone (16:9), and even on a desktop, when you change the window size, the game maintains the correct proportions. I don't understand why. I would expect the game to paint a square-shaped cutout of the world onto a rectangular display, which would lead to distortion. Why is that not the case? And why don't you need to adapt the width or height of the viewport to the aspect ratio?
The line:
cameraGUI.setToOrtho(true);
overrides the values you gave when you called:
cameraGUI = new OrthographicCamera(Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);
Here's the LibGDX code that shows why/how the viewport sizes you set were ignored:
/** Sets this camera to an orthographic projection using a viewport fitting the screen resolution, centered at
 * (Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight()/2), with the y-axis pointing up or down.
 * @param yDown whether y should be pointing down */
public void setToOrtho (boolean yDown) {
    setToOrtho(yDown, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
}

/** Sets this camera to an orthographic projection, centered at (viewportWidth/2, viewportHeight/2), with the y-axis pointing up
 * or down.
 * @param yDown whether y should be pointing down.
 * @param viewportWidth
 * @param viewportHeight */
public void setToOrtho (boolean yDown, float viewportWidth, float viewportHeight) {
    if (yDown) {
        up.set(0, -1, 0);
        direction.set(0, 0, 1);
    } else {
        up.set(0, 1, 0);
        direction.set(0, 0, -1);
    }
    position.set(zoom * viewportWidth / 2.0f, zoom * viewportHeight / 2.0f, 0);
    this.viewportWidth = viewportWidth;
    this.viewportHeight = viewportHeight;
    update();
}
So you would need to do this instead:
cameraGUI.setToOrtho(true, Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);
Also, don't forget to call update() right after wherever you change the position, viewport dimensions, or other properties of your camera.
I found the reason. If you take a look at the WorldRenderer class, there is a method resize(). In this method the viewport is adapted to the aspect ratio. I am just wondering, because until now I thought the resize method was only called when resizing the window. Apparently it's also called at startup. Can anybody clarify?

Is there any way to add a regular ARAnchor on a detected image?

I'm trying to put a 3D object on top of the detected image, and it worked. But when I moved the camera around the image, the object didn't stay at the center of the image. Is there any way to add a regular anchor at the center of the image to help me keep the 3D object at the right position? The following code is what I've tried, but it didn't work.
- (void)renderer:(id<SCNSceneRenderer>)renderer didAddNode:(SCNNode *)node forAnchor:(ARAnchor *)anchor
{
    if ([anchor isKindOfClass:[ARImageAnchor class]]) {
        ARAnchor *newAnchor = [[ARAnchor alloc] initWithTransform:anchor.transform];
        [self.sceneView.session addAnchor:newAnchor];
    }
}
I detect an image and put a plane on it, and at first it looks correctly centered.
But when I move the camera to another position, the plane is no longer located at the center of the image.
There's no need to create a new anchor because there's already one provided by ARKit. You should add your 3D content to the node provided by this method.
According to the renderer:didAddNode:forAnchor: documentation:
You can provide visual content for the anchor by attaching geometry (or other SceneKit features) to this node or by adding child nodes.
So, in this method:
- (void)renderer:(id<SCNSceneRenderer>)renderer didAddNode:(SCNNode *)node forAnchor:(ARAnchor *)anchor
{
    [node addChildNode:your3DObjectNode];
}
Then it should stay at the center of your image.

How to use mask with transparency on QWidget?

I am trying to use a mask on my QWidget. I want to overlay an existing widget with a row of buttons, similar to Skype.
Notice that these buttons don't have jagged edges; they are nicely antialiased, and the widget below them is still visible.
I tried to accomplish that using Qt style sheets, but the pixels that should have been "masked out" were just black: I got a round button on a black, rectangular background.
Then I tried to do this using QWidget::setMask(). I used the following code:
QImage alpha_mask(QSize(50, 50), QImage::Format_ARGB32);
alpha_mask.fill(Qt::transparent);
QPainter painter(&alpha_mask);
painter.setBrush(Qt::black);
painter.setRenderHint(QPainter::Antialiasing);
painter.drawEllipse(QPoint(25,25), 24, 24);
QPixmap mask = QPixmap::fromImage(alpha_mask);
widget.setMask(mask.mask());
Sadly, it results in the following effect:
The "edges" are jagged where they should be smooth. I saved the generated mask so I could check whether it was the problem;
it wasn't.
I know that the Linux version of Skype uses Qt, so it should be possible to reproduce this. But how?
One possible approach I see is the following:
Prepare a nice high-resolution pixmap with the circular button icon over a transparent background.
Paint the pixmap on a square widget.
Then mask the widget, leaving a little bit of margin beyond the border of the circular icon so that the jaggedness of the widget mask won't touch the smooth border of the icon.
I managed to get a nice circular button without much code.
Here is the constructor of my custom button:
Button::Button(Type t, QWidget *parent) : QPushButton(parent) {
    setIcon(getIcon(t));
    resize(30, 30);
    setMouseTracking(true);
    // here I apply a centered mask that is 2 pixels bigger than the button
    setMask(QRegion(QRect(-1, -1, 32, 32), QRegion::Ellipse));
}
and in the style sheet I have the following:
Button {
    border-radius: 15px;
    background-color: rgb(136, 0, 170);
}
With border-radius I get the visual circle and the mask doesn't corrupt the edges because it is 1 pixel away.
You are using the wrong approach for generating masks. I would generate them from the button images themselves:
QImage image(widget.size(), QImage::Format_Alpha8);
widget.render(&image);
widget.setMask(QBitmap::fromImage(image.createMaskFromColor(qRgba(0, 0, 0, 0))));

(C++/CLI) Testing Mouse Position in a Rectangle Relative to Parent

I've been messing around with the Graphics class to draw some things on a panel. So far, to draw, I've just been using the Rectangle structure. Clicking a button creates a rectangle at a random place on the panel and adds it to an array of other rectangles (they're actually instances of a class called UIElement, which contains a Rectangle member). When the panel is clicked, it runs a test over all the elements to see if the mouse is inside any of them, like this:
void GUIDisplay::checkCollision()
{
    Point cursorLoc = Cursor::Position;
    for(int a = 0; a < MAX_CONTROLS; a++)
    {
        if(elementList[a] != nullptr)
        {
            if(elementList[a]->bounds.Contains(cursorLoc))
            {
                elementList[a]->Select();
                //MessageBox::Show("Click!", "Event");
                continue;
            }
            elementList[a]->Deselect();
        }
    }
    m_pDisplay->Refresh();
}
The problem is, when I click the rectangle, nothing happens.
The UIElement class draws its rectangles in the following bit of code. However, I've modified it a bit: in this example it uses the DrawReversibleFrame method to do the actual drawing, whereas I had been using the Graphics.FillRectangle method. When I changed it, I noticed DrawReversibleFrame drew in a different place than FillRectangle. I believe this is because DrawReversibleFrame draws with its positions relative to the window, while FillRectangle draws relative to whatever Paint event it's in (mine is in a panel's Paint method). So let me just show the code:
void UIElement::render(Graphics^ g)
{
    if(selected)
    {
        Pen^ line = gcnew Pen(Color::Black, 3);
        //g->FillRectangle(gcnew SolidBrush(Color::Red), bounds);
        ControlPaint::DrawReversibleFrame(bounds, SystemColors::Highlight, FrameStyle::Thick);
        g->FillRectangle(gcnew SolidBrush(Color::Black), bounds);
        //g->DrawLine(line, bounds.X, bounds.Y, bounds.Size.Width, bounds.Size.Height);
    }
    else
    {
        ControlPaint::DrawReversibleFrame(bounds, SystemColors::ControlDarkDark, FrameStyle::Thick);
        //g->FillRectangle(gcnew SolidBrush(SystemColors::ControlDarkDark), bounds);
    }
}
I added both DrawReversibleFrame and FillRectangle so that I could see the difference. This is what it looked like when I clicked the frame drawn by DrawReversibleFrame:
The orange frame is where I clicked; the black one is where it's rendering. This shows me that the Rectangle's Contains() method is looking at the rectangle relative to the window, not the panel. That's what I need fixed :)
I'm wondering if this is happening because the collision is tested outside of the panel's Paint method. But I don't see how I could implement this collision testing inside the Paint method.
UPDATE:
OK, so I just discovered that what DrawReversibleFrame and FillRectangle draw always appears to be a certain distance apart. I don't quite understand this, but someone else might.
Both Cursor::Position and DrawReversibleFrame operate in screen coordinates, that is, coordinates for the entire screen (everything on your monitor), not just your window. FillRectangle, on the other hand, operates in window coordinates, that is, positions within your window.
If you take your example where you were drawing with both and the two boxes were always the same distance apart, then move your window on the screen and click again, you will see that the offset between the two boxes changes. It will be the difference between the top-left corner of your window and the top-left corner of the screen.
This is also why your check for which rectangle you clicked isn't hitting anything: you are testing the cursor position in screen coordinates against rectangle coordinates in window space. It might hit one of the rectangles, but it probably won't be the one you actually clicked on.
You always have to know what coordinate system your variables are in. This is related to the original intention of Hungarian notation, which Joel Spolsky talks about in his entry Making Wrong Code Look Wrong.
Update:
PointToScreen and PointToClient should be used to convert between screen and client (window) coordinates.
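For example, here is a sketch of how that could look in the checkCollision method from the question. It assumes m_pDisplay is the panel whose Paint handler draws the rectangles, as the question's code suggests; the only change is converting the cursor position into that panel's client coordinates.
void GUIDisplay::checkCollision()
{
    // PointToClient maps the screen-space cursor position into m_pDisplay's
    // coordinate space, which is the space the rectangles were drawn in.
    Point cursorLoc = m_pDisplay->PointToClient(Cursor::Position);
    for(int a = 0; a < MAX_CONTROLS; a++)
    {
        if(elementList[a] != nullptr)
        {
            if(elementList[a]->bounds.Contains(cursorLoc))
            {
                elementList[a]->Select();
                continue;
            }
            elementList[a]->Deselect();
        }
    }
    m_pDisplay->Refresh();
}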

Non-deprecated replacement for NSCalibratedBlackColorSpace?

I'm implementing my own NSBitmapImageRep (to draw PBM image files). To draw them, I'm using NSDrawBitmap() and passing it the NSCalibratedBlackColorSpace (as the bits are 1 for black, 0 for white).
Trouble is, I get the following warning:
warning: 'NSCalibratedBlackColorSpace' is deprecated
However, I couldn't find a good replacement for it. NSCalibratedWhiteColorSpace gives me an inverted image, and there seems to be no way to get NSDrawBitmap() to use a CGColorSpaceRef or NSColorSpace that I could create as a custom equivalent to NSCalibratedBlackColorSpace.
I've found a (hacky) way to shut up the warning (so I can still build warning-free until a replacement becomes available) by just passing @"NSCalibratedBlackColorSpace" instead of the symbolic constant, but I'd rather apply a correct fix.
Anybody have an idea?
OK, so I tried
NSData *pixelData = [[NSData alloc] initWithBytes: bytes + imgOffset length: [theData length] - imgOffset];
CGFloat black[3] = { 0, 0, 0 };
CGFloat white[3] = { 100, 100, 100 };
CGColorSpaceRef calibratedBlackCS = CGColorSpaceCreateCalibratedGray( white, black, 1.8 );
CGDataProviderRef provider = CGDataProviderCreateWithCFData( UKNSToCFData(pixelData) );
mImage = CGImageCreate( size.width, size.height, 1, 1, rowBytes, calibratedBlackCS, kCGImageAlphaNone,
                        provider, NULL, false, kCGRenderingIntentDefault );
CGColorSpaceRelease( calibratedBlackCS );
but it's still inverted. I also tried swapping black/white above; it didn't change a thing. Am I misinterpreting the CIE tristimulus color value thing? Most docs seem to assume you either know what it is or are willing to work through a bit of matrix math to figure out which color is which. Or something.
I'd kinda like to avoid having to touch all the data once and invert it, but right now it seems like the best choice (/me unpacks his loops and xor operators).
It seems to have just been removed with no replacement. The fix is probably to just invert all the bits and use NSCalibratedWhiteColorSpace.
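If you do go that route, a minimal sketch of the inversion might look like this (plain C; it assumes the 1-bit-per-pixel raster sits in a mutable buffer you own, and the names are illustrative):
#include <stddef.h>
#include <stdint.h>

// Flip every bit of the 1-bpp raster so black (1) becomes white (0) and vice
// versa; the inverted buffer can then be drawn in NSCalibratedWhiteColorSpace.
static void invertBits(uint8_t *buffer, size_t length)
{
    for (size_t i = 0; i < length; i++) {
        buffer[i] ^= 0xFF;
    }
}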
I'd create a calibrated gray CGColorSpace with the white and black points exchanged, then create a CGImage with the raster data and that color space. If you want a bitmap image rep, it's easy to create one of those around the CGImage.
Another way (since you need to support planar data and you can require 10.6) would be to create an NSBitmapImageRep in the usual way in the NSCalibratedWhiteColorSpace, and then replace its color space with an NSColorSpace created with the CGColorSpace whose creation I suggested in my other answer.