I am trying to render some simple solid shapes in JOGL (in Eclipse) and then step through them 'layer' by 'layer', but when I add the glClear method all I get are wireframes, not the filled shape. If I comment that line out (as below), it displays the solid shape, but the shape 'fills' to the largest it will be and then does not shrink down again; e.g. with a sphere, the front half is fine but the back half comes out as a solid cylinder, if that makes sense.
public void render(GLAutoDrawable gLDrawable)
{
    GL2 gl = gLDrawable.getGL().getGL2();
    //gl.glClear(GL2.GL_COLOR_BUFFER_BIT | GL2.GL_DEPTH_BUFFER_BIT);
    gl.glColor3b((byte) 0, (byte) 127, (byte) 0);
    gl.glLoadIdentity();
    gl.glTranslatef(500, 350, -300);
    glut.glutSolidSphere(300.0, 20, 16);
    gl.glTranslatef(-500, -350, 300);
    gl.glFlush();
}
Any help would be much appreciated; I can post more of the code if needed.
Thanks
Tim
EDITED to make more sense
Insert this before you create your sphere. It will tell OpenGL to fill the next polygons that are drawn:
gl.glPolygonMode(gl.GL_FRONT_AND_BACK, gl.GL_FILL);
If you want the wireframe again, just call:
gl.glPolygonMode(gl.GL_FRONT_AND_BACK, gl.GL_LINE);
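Applied to the render() method from the question, it would look something like this (a sketch; it also restores the question's glClear call, since clearing the color and depth buffers each frame is what lets the shape shrink back down between frames):
public void render(GLAutoDrawable gLDrawable)
{
    GL2 gl = gLDrawable.getGL().getGL2();
    // clear both buffers so fragments from previous, larger frames don't linger
    gl.glClear(GL2.GL_COLOR_BUFFER_BIT | GL2.GL_DEPTH_BUFFER_BIT);
    // fill the polygons of everything drawn from here on
    gl.glPolygonMode(GL2.GL_FRONT_AND_BACK, GL2.GL_FILL);
    gl.glColor3b((byte) 0, (byte) 127, (byte) 0);
    gl.glLoadIdentity();
    gl.glTranslatef(500, 350, -300);
    glut.glutSolidSphere(300.0, 20, 16);
    gl.glFlush();
}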
:)
I am learning the use of libgdx and I got confused by the viewport and how objects are arranged on the screen. Let's assume my 2D world is 2x2 units wide and high. Now I create a camera whose viewport is 1x1, so I should see 25% of my world. Usually displays are not square, so I would expect libgdx to squish and stretch this square to fit the display.
For a side scroller you would set the viewport height to the world height and adjust the viewport width according to the aspect ratio (see the sketch below). Independent of the aspect ratio of your display, you always see the full height of the world but a different expanse of the x-axis. Somebody with a wider-than-high display can look further along the x-axis than somebody with a square display, but proportions are maintained and there is no distortion. So far, I thought I had mastered how the viewport logic works.
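For concreteness, a minimal sketch of that side-scroller setup (the names are illustrative, not from any particular source):
// Fix the viewport height to the world height (2 units here) and
// derive the viewport width from the window's aspect ratio.
float worldHeight = 2f;
float aspectRatio = (float) Gdx.graphics.getWidth() / Gdx.graphics.getHeight();
OrthographicCamera camera = new OrthographicCamera(worldHeight * aspectRatio, worldHeight);
camera.update();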
I am working through the book "Learning LibGDX Game Development", in which you develop the game "Canyon Bunny". The source code can be found here:
Canyon Bunny - GitHub
In the WorldRenderer class you find the initialization of the camera:
private void init() {
    batch = new SpriteBatch();
    camera = new OrthographicCamera(Constants.VIEWPORT_WIDTH, Constants.VIEWPORT_HEIGHT);
    camera.position.set(0, 0, 0);
    camera.update();
}
The viewport constants are kept in a separate Constants class:
public class Constants {
    // Visible game world is 5 meters wide
    public static final float VIEWPORT_WIDTH = 5.0f;
    // Visible game world is 5 meters tall
    public static final float VIEWPORT_HEIGHT = 5.0f;
}
As you can see, the viewport is 5x5. But the game objects have the right proportions on my phone (16:9), and even on the desktop, when you change the window size, the game maintains the correct proportions. I don't understand why. I would expect the game to paint a square cutout of the world onto a rectangular display, which would lead to distortion. Why is that not the case? And why don't you need to adapt the width or height of the viewport to the aspect ratio?
The line:
cameraGUI.setToOrtho(true);
overrides the values you gave when you called:
cameraGUI = new OrthographicCamera(Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);
Here's the LibGDX code that shows why/how the viewport sizes you set were ignored:
/** Sets this camera to an orthographic projection using a viewport fitting the screen resolution, centered at
 * (Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight()/2), with the y-axis pointing up or down.
 * @param yDown whether y should be pointing down */
public void setToOrtho (boolean yDown) {
    setToOrtho(yDown, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
}
/** Sets this camera to an orthographic projection, centered at (viewportWidth/2, viewportHeight/2), with the y-axis pointing up
 * or down.
 * @param yDown whether y should be pointing down.
 * @param viewportWidth
 * @param viewportHeight */
public void setToOrtho (boolean yDown, float viewportWidth, float viewportHeight) {
    if (yDown) {
        up.set(0, -1, 0);
        direction.set(0, 0, 1);
    } else {
        up.set(0, 1, 0);
        direction.set(0, 0, -1);
    }
    position.set(zoom * viewportWidth / 2.0f, zoom * viewportHeight / 2.0f, 0);
    this.viewportWidth = viewportWidth;
    this.viewportHeight = viewportHeight;
    update();
}
So you would need to do this instead:
cameraGUI.setToOrtho(true, Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);
Also, don't forget to call update() right after you change the position or viewport dimensions of your camera (or other properties).
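For instance (a trivial sketch using the GUI camera from above):
cameraGUI.position.set(Constants.VIEWPORT_GUI_WIDTH / 2, Constants.VIEWPORT_GUI_HEIGHT / 2, 0);
cameraGUI.update(); // recomputes the camera's projection and view matrices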
I found the reason. If you take a look at the WorldRenderer class, there is a method resize(), in which the viewport is adapted to the aspect ratio. I am just wondering because, until now, I thought the resize method was only called when resizing the window. Apparently it is also called at start-up. Can anybody clarify?
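(In libGDX, resize() is indeed also invoked once right after create(), before the first frame is rendered, so the aspect-ratio correction runs at start-up too. The relevant part of WorldRenderer.resize() looks roughly like this; paraphrased, so treat it as a sketch rather than the exact book source:)
public void resize(int width, int height) {
    // keep the world height fixed and derive the viewport width
    // from the window's aspect ratio
    camera.viewportWidth = (Constants.VIEWPORT_HEIGHT / height) * width;
    camera.update();
}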
When I am using QScatterSeries, I can very easily draw a point at (x, y). However, instead of points I would like to draw short lines, like in the figure below. How can I go about doing so?
I tried using RectangleMarker, but it just draws a fat square. I would prefer a thin line, about 2px wide and 20px in height.
Is there a way I can add custom marker shapes?
Here are the code and settings I use to transform my points into lines:
//create scatter series to draw point
m_pSeries1 = new QtCharts::QScatterSeries();
m_pSeries1->setName("trig");
m_pSeries1->setMarkerSize(100.0);
//draw a thin vertical line (from (50, 0) to (50, 100))
QPainterPath linePath;
linePath.moveTo(50, 0);
linePath.lineTo(50, 100);
linePath.closeSubpath();
//adapt the size of the image to the size of your rectangle
QImage line1(100, 100, QImage::Format_ARGB32);
line1.fill(Qt::transparent);
QPainter painter1(&line1);
painter1.setRenderHint(QPainter::Antialiasing);
painter1.setPen(QColor(0, 0, 0));
painter1.setBrush(painter1.pen().color());
painter1.drawPath(linePath);
//attach your image of rectangle to your series
m_pSeries1->setBrush(line1);
m_pSeries1->setPen(QColor(Qt::transparent));
//then use the classic QtChart pipeline...
You can play with the marker size, the dimensions of the image, and the drawing pattern in the painter to adapt the size and shape of the rectangle to obtain a line.
In the picture, it's the black line. As you can see, you can repeat the process for other series.
Keep in mind that you cannot use OpenGL acceleration (custom marker shapes are not supported when it is enabled):
m_pSeries0->setUseOpenGL(true);
My work is based on the QtCharts/QScatterSeries example: QScatterSeries example
Hope it will help you.
Florian
I am trying to use a mask on my QWidget. I want to overlay an existing widget with a row of buttons, similar to Skype.
Notice that these buttons don't have jagged edges; they are nicely antialiased, and the widget below them is still visible.
I tried to accomplish that using Qt Style Sheets, but the pixels that should be "masked out" were just black: a round button on a black, rectangular background.
Then I tried to do this using QWidget::mask(). I used the following code
QImage alpha_mask(QSize(50, 50), QImage::Format_ARGB32);
alpha_mask.fill(Qt::transparent);
QPainter painter(&alpha_mask);
painter.setBrush(Qt::black);
painter.setRenderHint(QPainter::Antialiasing);
painter.drawEllipse(QPoint(25,25), 24, 24);
QPixmap mask = QPixmap::fromImage(alpha_mask);
widget.setMask(mask.mask());
Sadly, it results in the following effect
"Edges" are jagged, where they should be smooth. I saved generated mask so I could investigate if it was the problem
it wasn't.
I know that the Linux version of Skype uses Qt, so it should be possible to reproduce this. But how?
One possible approach I see is the following.
Prepare a nice high-resolution pixmap with the circular button icon over a transparent background.
Paint the pixmap on a square widget.
Then mask the widget, leaving a little margin beyond the border of the circular icon so that the mask's jaggedness won't touch the icon's smooth border.
I managed to get a nice circular button with not so much code.
Here is the constructor of my custom button:
Button::Button(Type t, QWidget *parent) : QPushButton(parent) {
    setIcon(getIcon(t));
    resize(30, 30);
    setMouseTracking(true);
    // apply a centered mask, 2 pixels bigger than the button
    setMask(QRegion(QRect(-1, -1, 32, 32), QRegion::Ellipse));
}
and in the style sheet I have the following:
Button {
    border-radius: 15px;
    background-color: rgb(136, 0, 170);
}
With border-radius I get the visual circle, and the mask doesn't corrupt the edges because it is 1 pixel away.
You are using the wrong approach for generating masks. I would generate them from the button images themselves:
// render the widget into an alpha-only image, then derive the mask
// from its fully transparent pixels
QImage image(widget.size(), QImage::Format_Alpha8);
widget.render(&image);
widget.setMask(QBitmap::fromImage(image.createMaskFromColor(qRgba(0, 0, 0, 0))));
I am new to OpenGL ES, and I can't seem to figure out how to change the alpha/opacity on a texture loaded with GLKTextureLoader.
Right now I just draw the texture with the following code:
self.texture.effect.texture2d0.enabled = YES;
self.texture.effect.texture2d0.name = self.texture.textureInfo.name;
self.texture.effect.transform.modelviewMatrix = [self modelMatrix];
[self.texture.effect prepareToDraw];
glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
NKTexturedQuad _quad = self.texture.quad;
long offset = (long)&_quad;
glVertexAttribPointer(GLKVertexAttribPosition,
2,
GL_FLOAT,
GL_FALSE,
sizeof(NKTexturedVertex),
(void *)(offset + offsetof(NKTexturedVertex, geometryVertex)));
glVertexAttribPointer(GLKVertexAttribTexCoord0,
2,
GL_FLOAT, GL_FALSE,
sizeof(NKTexturedVertex),
(void *)(offset + offsetof(NKTexturedVertex, textureVertex)));
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Any advice would be very helpful :)
I am no GL expert, but drawing with a changed alpha value does not seem to work as described by rickster.
As far as I understand, the values passed to glBlendColor are only used with glBlendFunc constants like GL_CONSTANT_….
This will override the texture's alpha values and use the defined value to draw with:
glEnable(GL_BLEND);
glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
glBlendColor(1.0, 1.0, 1.0, yourAlphaValue);
glDraw... // the draw operations
Further reference can be found here: http://www.opengl.org/wiki/Blending#Blend_Color
As long as you're in the OpenGL ES 1.1 world (or the emulated-1.1 world of GLKBaseEffect), alpha is a property either of the (per-pixel) bitmap data in the texture or of the (complete) OpenGL ES state you're drawing with. You can't set an opacity level for a texture as a whole, on its own. So, you have three options:
Change the alpha of the texture. This means changing the texture bitmap data itself -- use the 2D image context of your choice to draw the image at half (or whatever) alpha, and read the resulting image into an OpenGL ES texture. Probably not a great idea unless the alpha you want will be constant for the life of your app. In which case you might as well just go back to Photoshop (or whatever you're using to create your image assets) and set the alpha there.
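For illustration, a minimal sketch of that approach using UIKit to redraw the image at reduced alpha before handing it to GLKTextureLoader (the asset name and alpha value are placeholders):
// redraw the source image at 50% alpha into a new image
UIImage *original = [UIImage imageNamed:@"sprite"]; // hypothetical asset
UIGraphicsBeginImageContextWithOptions(original.size, NO, original.scale);
[original drawAtPoint:CGPointZero blendMode:kCGBlendModeNormal alpha:0.5];
UIImage *faded = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// load the pre-faded image as an OpenGL ES texture
NSError *error = nil;
GLKTextureInfo *info = [GLKTextureLoader textureWithCGImage:faded.CGImage options:nil error:&error];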
Change the alpha you're drawing with. After you prepareToDraw, set up blending in GL:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
glBlendColor(1.0, 1.0, 1.0, myDesiredAlphaValue);
glDraw... // whatever you're drawing
Don't forget to A) draw your partially transparent content after any content you want it blended on top of, and B) disable blending before rendering opaque content again on the next frame.
Ditch GLKBaseEffect and write your own shaders. Shaders that work like the 1.1 fixed-function pipeline are a dime a dozen -- you can even get started by using the shaders packaged with the Xcode "OpenGL Game" project template or looking at the shaders GLKit writes in the Xcode Frame Capture tool. Once you have such shaders, changing the alpha of a color you got out of a texel lookup is a simple operation:
vec4 color = texture2D(texUnit, texCoord);
color.a = myDesiredAlphaValue;
gl_FragColor = color;
I have an ortho setup at the moment for 2D. When I resize the window, it stretches anything that is drawn in the window. Is there a way to either just have black bars show when the window is resized, or at least maintain the aspect ratio of the contents so they don't stretch at all? I have tried a few implementations that I have seen on here, but nothing really works.
EDIT: Sorry guys, had a bit of a blonde moment
Protected Overrides Sub OnResize(ByVal e As EventArgs)
    MyBase.OnResize(e)
    GL.Viewport(0, 0, Width, Height)
    GL.MatrixMode(MatrixMode.Projection)
    GL.LoadIdentity()
    GL.Ortho(-1.0, testvalue, testvalue, 1.0, 0.0, 4.0)
End Sub
testvalue is 5000 at the moment; the window size is 800x800.
I figured out what I had to do. Rather than applying the aspect ratio in the Ortho call, I used it on the viewport, as such:
Dim ar As Single = Width / Height
GL.Viewport(0, 0, 800 * ar, 800 * ar)
This prevents all stretching and simply places a black bar on the right-hand side when the width is greater than the height.
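A more general variant of the same idea (a sketch, not from the original post) centers a square viewport so the unused area is split evenly; testvalue is the variable from the question:
Protected Overrides Sub OnResize(ByVal e As EventArgs)
    MyBase.OnResize(e)
    ' keep a centered square viewport; the rest of the window stays black
    Dim side As Integer = Math.Min(Width, Height)
    GL.Viewport((Width - side) \ 2, (Height - side) \ 2, side, side)
    GL.MatrixMode(MatrixMode.Projection)
    GL.LoadIdentity()
    GL.Ortho(-1.0, testvalue, testvalue, 1.0, 0.0, 4.0)
End Sub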
When you resize, Windows runs its own message pump to handle events, bypassing your message pump. There are workarounds (hacks) to get it to render whilst sizing, including running your update/rendering on a thread. Note that this is a problem for D3D as well as OpenGL.
There's a discussion here on an old Gamedev thread.
The ortho command takes the following parameters: left, right, bottom, top, zNear, zFar. Further info: http://www.opengl.org/sdk/docs/man2/xhtml/glOrtho.xml
You will want to plug your 'Width' into right and your 'Height' into bottom, and ensure the Width and Height values reflect the new window size.
E.g.:
GL.Viewport(0, 0, Width, Height)
GL.MatrixMode(MatrixMode.Projection)
GL.LoadIdentity()
GL.Ortho(0.0, Width, Height, 0.0, 1.0, 2.0)
Since you're only after 2D, your zNear and zFar can be quite small. Just make sure to render between zNear and zFar, and do not use 0.0 as your zNear; I would recommend using 0.1 or larger.