Alpha channel in OpenGL shaders - opengl-es-2.0

I have a texture with an alpha channel (from a .png file, for example) and I want to mask it with another texture that has complex alpha painting. With .jpg files it works great, but if the texture has alpha itself, I get some sort of "ghosty" colors, like this:
http://i.gyazo.com/5ba2f6594a0027584a6eaf57356588c5.png
So, my question is: why isn't gl_FragColor = vec4(r, g, b, 0.0) transparent when one of the r, g, b components is above zero? Or is there another way to achieve my task?
Working on iOS with cocos2d-x v3.5, by the way.

It seems the problem was in the blend function. I set the blend function to {GL_SRC_ALPHA, GL_ONE} and it worked great.

Related

OpenGL ES blend func so color always shows against background

I am using OpenGL ES 1.1 to draw lines in my iPad app. I want to make sure that the drawn lines are always visible on the screen regardless of the background colors, and without allowing the user to choose a color. Is there a blend function that will create this effect? So the color of the line drawn will change based on the colors already drawn beneath it and therefore always be visible.
Sadly, the final blending of fragments into the framebuffer is still fixed-function. Furthermore, glLogicOp isn't implemented in ES, so you can't do something cheap like XOR drawing.
I think the net effect is that:
you want the output colour to be a custom function of the colour already in the frame buffer;
but the frame buffer can't be read in a shader (it'd break the pipeline and lead towards concurrency issues).
You're therefore going to have to implement a ping pong pipeline.
You have two off-screen buffers. One represents what you output last frame, the other represents what you output the frame before that.
To generate a new frame you render using the one that represents the frame before as an input. Because it's an input you can sample it wherever you want and make whatever calculations you like on it. You render to the other buffer that you have (i.e., the even older one) because you no longer care about its contents.
Then you copy all that to the screen and swap the two over, meaning that what you just drew is still in a texture to refer to as what you drew last frame. What you just referred to becomes your next drawing target because it's something you conveniently already have lying around.
So you'll be immediately interested in rendering to a texture. You'll also need to decide what function you want to use to pick a suitable 'different' colour to the existing background. Maybe just inverting it will do?
I think this could work:
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);
Draw your lines with a white color, and then the result will be rendered as
[1,1,1,1] * ( 1 - [DstR, DstG, DstB, DstA]) + ([DstR, DstG, DstB, DstA] * 0)
This should render a black pixel where the background is white, a white pixel where the background is black, a yellow pixel where the background is blue, etc.

Programmatically, how does hue blending work in photoshop?

In Photoshop you can set a layer's blending mode to be "Hue". If that layer is, for example, filled with blue then it seems to take the layer below and makes it all blue wherever a non-whiteish color exists.
I'm wondering what it's actually doing though. If I have a background layer with a pixel aarrggbb and the layer on top of that is set to blend mode "Hue" and there's a pixel aarrggbb on that layer, how are those two values combined to give the result that we see?
It doesn't just drop the rrggbb from the layer below. If it did that it'd color white and black as well. It also wouldn't allow color variations through.
If a background pixel is 0xff00ff00 and the corresponding hue layer pixel is 0xff0000ff then I'm assuming the end result will just be 0xff0000ff because the ff blue replaces the ff green. But, if the background pixel is 0x55112233 and the hue layer pixel is 0xff0000ff, how does it come up with the shade of blue that it comes up with?
The reason I ask is that I'd like to take various images and change the hue of the image programmatically in my app. Rather than storing 8 different versions of the same image with different colors, I'd like to store one image and color it as needed.
I've been researching a way to replicate that blending mode in javascript/canvas but I've only come up with the "colorize" filter/blend mode. (Examples below)
Colorize algorithm:
convert the colors from RGB to HSL;
change the Hue value to the wanted one (in my case 172° or 0.477);
convert the updated HSL back to RGB
Note: this is ok on the desktop but it's noticeably slow on a smartphone, I found.
You can see the difference by comparing these three images. Original:
colorize:
Fireworks' "blend hue" algorithm (which I think is the same as Photoshop's):
The colorize filter might be a good substitute.
RGB/HSL conversion question
Hue/Chroma and HSL on Wikipedia
I found an algorithm to convert RGB to HSV here:
http://www.cs.rit.edu/~ncs/color/t_convert.html
Of course, at the bottom of that page it mentions that the Java Color object already has methods for converting between RGB and HSV, so I just used that.

Coloring a CCSprite without only using its primary colors

I have a CCSprite that was created from a png with transparent background.
I want to be able to apply colors to this sprite in a way that I'm free to define which color it is, without the actual color of the sprite affecting the amount of each color I have to add.
I've tried this:
mySprite.color = ccc3(200,200,255);
In an attempt to add a little blue-ish feel to my sprite. But since it works by setting the amount of tint that's going to be displayed based on the existing color of the sprite, and my sprite has virtually no blue in it (most of it is yellow), the resulting effect is pretty sketchy: everything gets really dark, and there is one slight blue-ish coloring, but not what I wanted.
The ideal effect for me on this case would be to ADD a light blue mask to it with very low alpha.
Is there an easy way to do that without composing sprites?
I've tried using CCTexture2D, but had no luck, as there is no built in method for working with colors, and most tutorials only teach you how to build textures out of image files.
This is deceptively hard to do in code with the original sprite. Another option would be:
create a new sprite, which is just a white outline version of your original sprite
the color property of this white sprite will now respond exactly to the RGB values you pass in
so pass in your light blue value to the white sprite and set the opacity correctly
then overlay it on your original sprite
Any good?
The only way you can achieve this is by overlaying (masking) the sprite with the proper OpenGL blend functions. When you say "add a light blue mask", that's what you need to do. You may find this visual blendfunc tool helpful, if only to understand how blending with mask sprites works and what you can achieve with it. There's no built-in support for this in Cocos2D, however, so you'll have to resort to pure OpenGL inside a cocos2d node's -(void) draw {} method.
Tinting (changing the color property) will only modify the RGB channels of the entire image by changing the vertex colors of all 4 vertices.

Using Objective C I need to combine 4 CMYK colors into one.

Being an Objective-C newbie, and not much of a graphics programmer, I am struggling with the various graphics libraries Apple supplies with the Xcode SDK ver 4.1. There are Cocoa, OpenGL ES, Quartz 2D, and various other layers (frameworks) of high- and low-level graphics functions. I just want to combine 4 specific colors and show them on the iPad, iPhone, and iPod Touch.
I realize I can convert CMYK colors to RGBA without much difference from an optical point of view, but I guess combining colors into one color object in Objective-C is my problem. I've looked at layering of views, blend modes of images in Quartz, etc. I'm not sure if I need to create BMP files first, or what. I guess it's all just a little overwhelming for a newbie, and I would appreciate some guidance on figuring out the graphics concepts/terminology, and then perhaps help choosing which graphics framework can do the job.
There is no reason to convert from CMYK to RGB. You just need to create your color using CGColor and a CMYK color space.
CGColorSpaceRef cmykColorSpace = CGColorSpaceCreateDeviceCMYK();
CGFloat colors[5] = {1, 1, 1, 1, 1}; // CMYK+Alpha
CGColorRef cgColor = CGColorCreate(cmykColorSpace, colors);
UIColor *uiColor = [UIColor colorWithCGColor:cgColor];
CGColorRelease(cgColor);
CGColorSpaceRelease(cmykColorSpace);
You can now use that color to do whatever you want.
Rob's answer is pretty cool - I never knew about that CGColorCreate function.
If you want to tweak the way the colors combine, you can use my category on UIColor - you can change the mixing algorithm in the class methods if you choose (the ones I coded are pretty good, I hope). This category also supports combining in RGB or RYB.
https://github.com/ddelruss/UIColor-Mixing
Enjoy,
Damien

Quartz 2D animating text?

I have to animate text around a circle. The text will also scale up/down. What's the best approach to accomplish this? (I am using Quartz 2D.)
My approach is:
-- Calculate a point using the sin and cos methods.
-- Move the pen there and draw the text with alpha and size.
-- Clear the screen.
-- Calculate the next point using the sin and cos methods.
-- Move the pen there and draw the text with alpha and size.
-- Clear the screen.
so on...
Any better way?
If you're targeting iPhone OS ≥3.2, use CATextLayer.
Otherwise, you could create a UILabel and modify its transformation value, or control a CALayer directly.
If you're constrained to Core Graphics, the method you described is already the best approach, but CG is not an animation framework. You should use Core Animation for UI animations.