I have some functions that use OpenGL to draw lines on the screen (health bars), and I recently moved from OpenGL ES 1.1 to OpenGL ES 2.0. I was using glColor4ub(50,160,50,255); to change the line color to green before rendering it on the screen, but that function does not appear to exist in 2.0. It says it is not valid and just renders all lines white.
Is there a different way I should be changing line colors? I've tried looking it up, but there doesn't seem to be anyone with the same question. It may be something simple I'm not seeing.
My game uses cocos2d 2.0, and the health bars are pretty much the only thing I render directly with OpenGL. Everything else uses sprite sheets and such. Thanks for any help.
As with everything else in OpenGL ES 2.0, you need to send the line colour to your fragment shader (via the vertex shader as a vertex attribute, or directly as a uniform) and output that colour from your fragment shader.
In the simplest case, you could use a static value in the fragment shader:
void main(void) {
    gl_FragColor = vec4(50.0/255.0, 160.0/255.0, 50.0/255.0, 255.0/255.0);
}
If you want to vary the colour at run time, send a value to the fragment shader as a uniform:
GLfloat color[4];
color[0] = 50.0/255.0;
color[1] = 160.0/255.0;
color[2] = 50.0/255.0;
color[3] = 255.0/255.0;

GLint lineColorSlot = glGetUniformLocation(shaderProgram, "LineColor");
glUniform4fv(lineColorSlot, 1, color);
The fragment shader:
uniform lowp vec4 LineColor;

void main(void) {
    gl_FragColor = LineColor;
}
I am learning the use of libgdx and I got confused by the viewport and how objects are arranged on the screen. Let's assume my 2D world is 2x2 units wide and high. Now I create a camera whose viewport is 1x1, so I should see 25% of my world. Usually displays are not square shaped, so I would expect libgdx to squish and stretch this square to fit the display.
For a side scroller you would set the viewport height to the world height and adjust the viewport width according to the aspect ratio. Independent of the aspect ratio of your display, you always see the full height of the world but a different extent on the x-axis. Somebody with a wider-than-high display can see further along the x-axis than somebody with a square display, but proportions are maintained and there is no distortion. Up to this point I thought I had mastered how the viewport logic works.
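For example, this is roughly how I would set up such a side-scroller camera myself (WORLD_HEIGHT is just a made-up constant for the visible world height in units):
float aspectRatio = (float) Gdx.graphics.getWidth() / (float) Gdx.graphics.getHeight();
// fix the visible world height, derive the visible width from the display's aspect ratio
OrthographicCamera camera = new OrthographicCamera(WORLD_HEIGHT * aspectRatio, WORLD_HEIGHT);
camera.position.set(camera.viewportWidth / 2f, camera.viewportHeight / 2f, 0);
camera.update();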
I am working with the book "Learning LibGDX Game Development", in which you develop the game "Canyon Bunny". The source code can be found here:
Canyon Bunny - GitHub
In the WorldRenderer class you find the initialization of the camera:
private void init() {
    batch = new SpriteBatch();
    camera = new OrthographicCamera(Constants.VIEWPORT_WIDTH, Constants.VIEWPORT_HEIGHT);
    camera.position.set(0, 0, 0);
    camera.update();
}
The viewport constants are stored in a separate Constants class:
public class Constants {
    // Visible game world is 5 meters wide
    public static final float VIEWPORT_WIDTH = 5.0f;
    // Visible game world is 5 meters tall
    public static final float VIEWPORT_HEIGHT = 5.0f;
}
As you can see, the viewport is 5x5. But the game objects have the right proportions on my phone (16:9), and even on the desktop, when you change the window size, the game maintains the correct proportions. I don't understand why. I would expect the game to paint a square-shaped cutout of the world onto a rectangular display, which would lead to distortion. Why is that not the case? And why don't you need to adapt the width or height of the viewport to the aspect ratio?
The line:
cameraGUI.setToOrtho(true);
overrides the values you gave when you called:
cameraGUI = new OrthographicCamera(Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);
Here's the LibGDX code that shows why/how the viewport sizes you set were ignored:
/** Sets this camera to an orthographic projection using a viewport fitting the screen resolution, centered at
 * (Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight()/2), with the y-axis pointing up or down.
 * @param yDown whether y should be pointing down */
public void setToOrtho (boolean yDown) {
    setToOrtho(yDown, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
}

/** Sets this camera to an orthographic projection, centered at (viewportWidth/2, viewportHeight/2), with the y-axis pointing up
 * or down.
 * @param yDown whether y should be pointing down.
 * @param viewportWidth
 * @param viewportHeight */
public void setToOrtho (boolean yDown, float viewportWidth, float viewportHeight) {
    if (yDown) {
        up.set(0, -1, 0);
        direction.set(0, 0, 1);
    } else {
        up.set(0, 1, 0);
        direction.set(0, 0, -1);
    }
    position.set(zoom * viewportWidth / 2.0f, zoom * viewportHeight / 2.0f, 0);
    this.viewportWidth = viewportWidth;
    this.viewportHeight = viewportHeight;
    update();
}
So you would need to do this instead:
cameraGUI.setToOrtho(true, Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);
Also, don't forget to call update() right after you change the position, viewport dimensions, or other properties of your camera.
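For example, a minimal sketch of the corrected GUI camera setup (reusing the Constants values from the book; the later position change is only there to illustrate the update() rule):
cameraGUI = new OrthographicCamera(Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);
// keep the GUI viewport size you chose instead of the screen resolution
cameraGUI.setToOrtho(true, Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);

// ...later, whenever a camera property changes:
cameraGUI.position.set(Constants.VIEWPORT_GUI_WIDTH / 2, Constants.VIEWPORT_GUI_HEIGHT / 2, 0);
cameraGUI.update(); // recomputes the camera's combined matrix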
I found the reason. If you take a look at the WorldRenderer class, there is a resize() method in which the viewport is adapted to the aspect ratio. I am just wondering because until now I thought the resize() method is only called when resizing the window. Apparently it is also called at start-up. Can anybody clarify?
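For reference, that resize() method does roughly the following (a sketch of the idea, not the exact book code):
@Override
public void resize(int width, int height) {
    // keep the world height fixed and scale the viewport width to the window's aspect ratio
    camera.viewportWidth = (Constants.VIEWPORT_HEIGHT / height) * width;
    camera.update();
}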
I'm developing a game using Sprite Kit and LiquidFun (Google's extension of Box2D with liquid particle simulation).
After I applied a threshold shader to make the liquid look more realistic, I stumbled upon the strange behavior that can be seen in this video: Current Behavior
The way it should behave can be seen in this video: Normal Behavior
I'm applying the shader to an SKEffectNode that contains all the particles (SKSpriteNodes).
This is the code of my threshold shader:
void main()
{
    vec4 color = texture2D(u_texture, v_tex_coord);
    if (color.w > 0.4) {
        color = vec4(18.0/255.0, 122.0/255.0, 232.0/255.0, 1.0);
    } else {
        color = vec4(0.0, 0.0, 0.0, 0.0);
    }
    gl_FragColor = color;
}
Does anyone have any clue of what is going on?
I am new to OpenGL ES, and I can't seem to figure out how you would change the alpha/opacity of a texture loaded with GLKTextureLoader.
Right now I just draw the texture with the following code.
self.texture.effect.texture2d0.enabled = YES;
self.texture.effect.texture2d0.name = self.texture.textureInfo.name;
self.texture.effect.transform.modelviewMatrix = [self modelMatrix];
[self.texture.effect prepareToDraw];

glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);

NKTexturedQuad _quad = self.texture.quad;
long offset = (long)&_quad;

glVertexAttribPointer(GLKVertexAttribPosition,
                      2,
                      GL_FLOAT,
                      GL_FALSE,
                      sizeof(NKTexturedVertex),
                      (void *)(offset + offsetof(NKTexturedVertex, geometryVertex)));

glVertexAttribPointer(GLKVertexAttribTexCoord0,
                      2,
                      GL_FLOAT, GL_FALSE,
                      sizeof(NKTexturedVertex),
                      (void *)(offset + offsetof(NKTexturedVertex, textureVertex)));

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Any advice would be very helpful :)
I am no GL expert, but drawing with a changed alpha value does not seem to work as described by rickster.
As far as I understand, the values passed to glBlendColor are only used with glBlendFunc constants like GL_CONSTANT_…
This will override the texture's alpha values and draw with a constant value instead:
glEnable(GL_BLEND);
glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
glBlendColor(1.0, 1.0, 1.0, yourAlphaValue);
glDraw... // the draw operations
Further reference can be found here: http://www.opengl.org/wiki/Blending#Blend_Color
As long as you're in the OpenGL ES 1.1 world (or the emulated-1.1 world of GLKBaseEffect), alpha is a property either of the (per-pixel) bitmap data in the texture or of the (complete) OpenGL ES state you're drawing with. You can't set an opacity level for a texture as a whole, on its own. So, you have three options:
1. Change the alpha of the texture. This means changing the texture bitmap data itself -- use the 2D image context of your choice to draw the image at half (or whatever) alpha, and read the resulting image into an OpenGL ES texture. Probably not a great idea unless the alpha you want will be constant for the life of your app, in which case you might as well just go back to Photoshop (or whatever you're using to create your image assets) and set the alpha there.
2. Change the alpha you're drawing with. After you prepareToDraw, set up blending in GL:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
glBlendColor(1.0, 1.0, 1.0, myDesiredAlphaValue);
glDraw... // whatever you're drawing
Don't forget to A) draw your partially transparent content after any content you want it blended on top of and B) disable blending before rendering opaque content again on the next frame.
3. Ditch GLKBaseEffect and write your own shaders. Shaders that work like the 1.1 fixed-function pipeline are a dime a dozen -- you can even get started by using the shaders packaged with the Xcode "OpenGL Game" project template or looking at the shaders GLKit writes in the Xcode Frame Capture tool. Once you have such shaders, changing the alpha of a color you got out of a texel lookup is a simple operation:
vec4 color = texture2D(texUnit, texCoord);
color.a = myDesiredAlphaValue;
gl_FragColor = color;
Is there a way, possibly a ccBlendFunc, that will allow me to dynamically color sprites without affecting the pure white (255, 255, 255), pure black (0, 0, 0), and fully transparent (alpha = 0) pixels?
Using the default blend function and setting sprite.color on a CCSprite re-colors the white pixels to whatever ccColor3B value is specified, and that is undesirable for me.
Use a shader. If you are using cocos2d version 2.1, start with ccShader_PositionTextureColor_frag (used by CCSprite and other classes to render textures), copied here:
#ifdef GL_ES
precision lowp float;
#endif

varying vec4 v_fragmentColor;
varying vec2 v_texCoord;
uniform sampler2D CC_Texture0;

void main()
{
    gl_FragColor = v_fragmentColor * texture2D(CC_Texture0, v_texCoord);
}
You want to change that line in main() to skip the fragments you want to skip. CCSprite writes the sprite.color property into v_fragmentColor (look at the code; there are 'premultiplied alpha' variants). You want to modify v_fragmentColor when texture2D(CC_Texture0, v_texCoord).a == 0, and in the other circumstances you listed (pure white and pure black).
I would extend CCSprite to use this new shader (i.e. avoid toying directly with the shaders built into cocos2d; have your own place for trial and error). Once you have the shader doing what you want, add the logic in your class to place the new shader program in CCShaderCache and retrieve it from there.
I have a simple GLSL texture renderer:
Vertex shader:
varying vec2 UV;

void main() {
    UV = gl_MultiTexCoord0.xy;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Fragment shader:
varying vec2 UV;
uniform sampler2D diffuseMap;

void main(void) {
    gl_FragColor = texture2D(diffuseMap, UV);
}
And I have a texture made of solid colors. I need to render this texture without any interpolation or antialiasing (which seems to happen at the edges of the solid colors). For me it would be better to just take the nearest pixel rather than try to interpolate.
I'm not sure I was clear. Imagine it like this: I want to texture a ball with a chess pattern, and I want the result to be pure black and white. But the renderer creates a little bit of gray where black and white meet.
Set GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER to GL_NEAREST (via glTexParameteri), and make sure that you do not have mipmaps.