The Right Way To Draw Normals In OpenGL? - objective-c

Right now I am writing a program in OpenGL. I'm rendering somewhat complex 3D figures from files. After triple-checking the code, I know that all the values are being read correctly. The only thing that's acting weird is the normals. I'm drawing them like this:
glVertex3fv(vert1);
glVertex3fv(vert2);
glVertex3fv(vert3);
glNormal3fv(norm1);
glNormal3fv(norm2);
glNormal3fv(norm3);
The values are read into GLfloats. Tell me the right way to do this, or at least what I'm doing wrong.

When you call glVertex, that finishes the vertex, so you need to set all other per-vertex state, including the normal, before that call.
It should look like this:
glNormal3fv(norm1);
glVertex3fv(vert1);
glNormal3fv(norm2);
glVertex3fv(vert2);
glNormal3fv(norm3);
glVertex3fv(vert3);
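Spelled out for a whole triangle in immediate mode, it would look something like this (just a sketch, assuming vert1..vert3 and norm1..norm3 are GLfloat[3] arrays loaded from your file and that the calls sit inside a glBegin/glEnd pair):
glBegin(GL_TRIANGLES);
    glNormal3fv(norm1);
    glVertex3fv(vert1);
    glNormal3fv(norm2);
    glVertex3fv(vert2);
    glNormal3fv(norm3);
    glVertex3fv(vert3);
glEnd();
If the lighting still looks wrong after that, check that the normals are unit length, or call glEnable(GL_NORMALIZE) so OpenGL normalizes them for you.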

Related

GODOT: What is an efficient calculation for the AABB of a simple 3D model from a camera's view

I am attempting to come up with a quick and efficient means of translating a 3D mesh into a projected AABB. In the end, I would like to accomplish something similar to figure 1, wherein only the area of the screen covered by the cube is located inside the bounding box highlighted in red. (If it is at all possible, getting the area as small as possible, as highlighted in blue, would increase efficiency down the road.)
Figure 1. https://i.imgur.com/pd0E20C.png
Currently, I have tried:
Calculating the point positions on the screen using camera.unproject_position(). This failed largely due to my inability to wrap my head around the pixel positions trending towards infinity. I understand it has something to do with tan, but frankly, it is too late for my brain to function anymore.
Getting the area of collision between the view frustum and the AABB of the mesh instance. This method seems convoluted, and to get it in a usable format I would need to project the result into 2D coordinates again.
Using the MeshInstance VisualInstance to create a texture wherein a pixel is white if it contains the mesh instance, and black otherwise. Visual instances in general just baffle me, and I did not think it would be efficient to have another viewport just to output this texture.
What I am looking for:
An output that can be passed to a shader informing where to complete certain calculations. Right now this is set up to use a bounding box, but it could easily be rewritten to also use a texture. It also could be rewritten to use polygons, but I am trying to keep calculations to a minimum in the shader.
Certain solutions I have tried before have worked, slightly, but this must be robust. The camera interfacing with the 3D object will be able to move completely around and through it, meaning at times the view will be completely surrounded by the 3D model, with points both in front of and behind the camera.
Thank you for any help you can provide.
I will try my best to update this post with information if needed.
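As a rough, non-Godot-specific sketch of the first approach above (projecting the eight corners of the mesh's world-space AABB and taking the 2D min/max), in plain C with made-up math types; the key detail is that a corner behind the camera, i.e. with clip-space w <= 0, is exactly what makes the projected coordinates run off toward infinity, so this sketch falls back to the full screen in that case:
typedef struct { float x, y, z; } vec3;
typedef struct { float x, y, z, w; } vec4;

/* Column-major 4x4 matrix * vector; stands in for your math library. */
static vec4 mat4_mul_vec4(const float m[16], vec4 v)
{
    vec4 r;
    r.x = m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12]*v.w;
    r.y = m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13]*v.w;
    r.z = m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14]*v.w;
    r.w = m[3]*v.x + m[7]*v.y + m[11]*v.z + m[15]*v.w;
    return r;
}

/* Projects the 8 corners of an AABB (lo..hi) through a view-projection
   matrix and returns the screen-space bounding rectangle. Falls back to
   the full screen (and returns 0) if any corner is behind the camera,
   because that is the case where the projection blows up. */
static int project_aabb(const float view_proj[16], vec3 lo, vec3 hi,
                        float screen_w, float screen_h,
                        float *min_x, float *min_y,
                        float *max_x, float *max_y)
{
    *min_x = screen_w; *min_y = screen_h; *max_x = 0.0f; *max_y = 0.0f;
    for (int i = 0; i < 8; ++i) {
        vec4 corner = { (i & 1) ? hi.x : lo.x,
                        (i & 2) ? hi.y : lo.y,
                        (i & 4) ? hi.z : lo.z, 1.0f };
        vec4 clip = mat4_mul_vec4(view_proj, corner);
        if (clip.w <= 0.0f) {                 /* corner behind the camera */
            *min_x = 0.0f; *min_y = 0.0f;
            *max_x = screen_w; *max_y = screen_h;
            return 0;
        }
        float sx = (clip.x / clip.w * 0.5f + 0.5f) * screen_w;
        float sy = (1.0f - (clip.y / clip.w * 0.5f + 0.5f)) * screen_h;
        if (sx < *min_x) *min_x = sx;
        if (sy < *min_y) *min_y = sy;
        if (sx > *max_x) *max_x = sx;
        if (sy > *max_y) *max_y = sy;
    }
    return 1;
}
In Godot itself the per-corner projection is what Camera.unproject_position() does, and a check like Camera.is_position_behind() should cover the behind-the-camera case.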

java3d simple way to translate object

I am making my first program using Java3D. I have set up some TransformGroups that I now need to move in calculated directions. When I looked this up, I found interpolators and alpha objects and waveforms and couldn't understand a word of it. I have done this in the past in OpenGL using simple vectors and frame refreshing. Is there a similarly simple way in Java3D? Thanks.
There's no reason you couldn't do it with vectors and frame refresh in Java3D as well.
A simple way would be to attach a behavior to the scenegraph with a WakeupOnElapsedFrames(0) condition, and then have it update the needed transform every frame.
At its simplest, that is what the interpolators are doing for you. Once you get that working, it will probably make more sense how you could do it with interpolators.
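Stripped of the Java3D plumbing, each frame the behavior (or interpolator) just integrates a velocity vector into a position and writes it back into the transform. A rough, language-neutral sketch of that per-frame step in C (the names are made up; in Java3D this would live in the behavior's processStimulus() and end with TransformGroup.setTransform()):
typedef struct { float x, y, z; } Vec3;

typedef struct {
    Vec3 position;   /* current translation of the object */
    Vec3 velocity;   /* calculated direction * speed, in units per second */
} Movable;

/* Called once per rendered frame. */
static void update_movable(Movable *m, float dt_seconds)
{
    m->position.x += m->velocity.x * dt_seconds;
    m->position.y += m->velocity.y * dt_seconds;
    m->position.z += m->velocity.z * dt_seconds;
    /* ...then rebuild the object's transform from m->position and hand it
       back to the scene graph (in Java3D: Transform3D.setTranslation()
       followed by TransformGroup.setTransform()). */
}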

Bubbles like smoke... 2D effect. How to?

I'm new to game programming. What I want to achieve is an effect like bubbles that behave like smoke in 2D. I will explain: I don't want a realistic effect or fog. I want to do something like bubbles in the background which fly in the sky, become bigger and bigger, and move as if they are suspended in space. They grow until they reach a certain size.
Something like this
What is the best way to achieve this? Is there something out there on the net, some examples or ready-made effects? Where do I start? I program in Java, but even if the examples are in C++ or other languages it really doesn't matter.
I assume you already have a way to draw, like OpenGL or Canvas.
You probably want to create the balls as objects with variables like x, y, size, etc. Then, each time you draw one, make sure you have updated these variables, for example by increasing the size if you want it to get bigger, or the x if you want it to move to the right.
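A rough sketch of that per-object idea in plain C (all names are made up, and the actual drawing is left to whatever API you use):
/* One bubble: where it is, how big it is, and how it changes per frame. */
typedef struct {
    float x, y;             /* position on screen */
    float size;             /* current radius */
    float max_size;         /* stop growing once this is reached */
    float grow_rate;        /* size increase per second */
    float drift_x, drift_y; /* slow movement, "suspended in space" */
} Bubble;

static void update_bubble(Bubble *b, float dt)
{
    if (b->size < b->max_size)
        b->size += b->grow_rate * dt;
    b->x += b->drift_x * dt;
    b->y += b->drift_y * dt;
}

/* Each frame: call update_bubble() on every bubble, then draw each one
   with your own routine, e.g. draw_circle(b->x, b->y, b->size). */
Spawning new bubbles at random positions and removing the ones that have reached max_size (or faded out) gives the continuous, smoke-like background.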
The other option is, of course, to create an animated image like a .gif.

Which pixels did that drawmesh operation just draw to?

Ok, it's a relatively simple problem: I want to know where, in screen space, a particular mesh was just drawn. I plan on then storing that information in a data store of some kind so that when I interact with something in screen space, I can look it up in the register and find the object, i.e., click on the spaceship drawn on the screen and then select it as a target, etc.
I can't find any way of finding out which pixels the mesh was drawn to though...
Alternatively, if I'm missing something obvious regarding what it is that I want to do, please let me know!
There is no easy way to do that. But you can use another texture as a render target and render those meshes with unique colors.
So, for example, you give #FF0000 to your mesh A and also draw it to your second render target with that color. Now when you select a pixel from the second render target and look at its color, if it is #FF0000 you know that pixel is part of mesh A. Thus you can easily pick the mesh drawn at a certain pixel when you click on it.
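A rough sketch of that ID-as-color picking pass, shown here with C-style OpenGL calls for concreteness (the off-screen render-target setup and the flat-color draw are left as placeholders; the same idea carries over to DirectX with a second render target and a pixel readback):
#include <GL/gl.h>

/* Encode a mesh ID into an RGB color and decode it again (IDs < 2^24). */
static void id_to_color(unsigned id, unsigned char rgb[3])
{
    rgb[0] = (unsigned char)((id >> 16) & 0xFF);
    rgb[1] = (unsigned char)((id >> 8) & 0xFF);
    rgb[2] = (unsigned char)(id & 0xFF);
}

static unsigned color_to_id(const unsigned char rgb[3])
{
    return ((unsigned)rgb[0] << 16) | ((unsigned)rgb[1] << 8) | rgb[2];
}

/* Picking: with the ID render target bound and every mesh drawn in its
   flat ID color (no lighting, no textures), read back the pixel under
   the mouse and decode which mesh it belongs to. */
static unsigned pick_mesh(int mouse_x, int mouse_y, int viewport_height)
{
    unsigned char pixel[3];
    glReadPixels(mouse_x, viewport_height - mouse_y - 1, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, pixel);
    return color_to_id(pixel);
}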
Why don't you unproject your screen-space coordinates into 3D space? The only complication I had was the fact that I'd be left with a plane; I could check if a mesh intersected that plane, but I often had multiple candidates for 'picking'.
Search Google for "DirectX Unproject"; there are various articles discussing it. It can be tricky to implement, but done well it's actually pretty nifty; don't get put off by the people online who say it doesn't work, it does work!
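For reference, the OpenGL-side equivalent of that unprojection (rather than the DirectX call the answer refers to) is gluUnProject. A rough sketch that turns a mouse position plus the depth-buffer value under it into a world-space point, using the fixed-function matrices:
#include <GL/gl.h>
#include <GL/glu.h>

/* Returns 1 on success and writes the world-space point under the cursor. */
static int unproject_mouse(int mouse_x, int mouse_y,
                           double *wx, double *wy, double *wz)
{
    GLdouble model[16], proj[16];
    GLint viewport[4];
    GLfloat depth;

    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    /* Window coordinates have y = 0 at the top, OpenGL at the bottom. */
    GLdouble win_y = viewport[3] - mouse_y - 1;
    glReadPixels(mouse_x, (GLint)win_y, 1, 1,
                 GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

    return gluUnProject((GLdouble)mouse_x, win_y, (GLdouble)depth,
                        model, proj, viewport, wx, wy, wz) == GL_TRUE;
}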

OpenGL ES 2.0 multiple meshes? (just Real World Examples)

I'm a little confused about this point.
Everything I have found in books, blogs, forums and even in the OpenGL specs only talks about very abstract techniques. Nothing about real-world examples.
And I'm going crazy with this: how do you put and manage multiple objects (meshes) with OpenGL ES 2.x?
In theory it seems simple. You have a vertex shader (VSH) and a fragment shader (FSH), and you attach both to one program (glCreateProgram, glUseProgram, ...). In every render cycle, that program runs its VSH on each vertex and then its FSH on every "pixel" of that 3D object, and finally sends the result to the buffer (leaving aside the per-vertex processing, rasterization, and other steps in the pipeline).
OK, seems simple...
All this is fired by a call to the draw function (glDrawArrays or glDrawElements).
OK again.
Now here is where things get confusing for me.
What if you have several objects to render?
Let's talk about a real world example.
Imagine that you have a landscape with trees, and a character.
The grass of the landscape has one texture, the trees have a texture for the trunk and the leaves (a texture atlas), and finally the character has another texture (also a texture atlas) and is animated too.
After imagine this scene, my question is simple:
How do you organize this?
Do you create a separate program (with one VSH and FSH) for each element in the scene? Like one program for the grass and the soil's relief, one for the trees, and one for the character?
I've tried that, but... when I create multiple programs and try to use glVertexAttribPointer(), the textures and colors of the objects conflict with each other, because the attribute locations (the indices) of the first program are repeated in the second program.
Let me explain: I used glGetAttribLocation() in the class that controls the floor of the scene, and OpenGL returned the indices 0, 1 and 2 for the vertex attributes.
Afterwards, in the class for the trees, I created another program and other shaders, and when I used glGetAttribLocation() again, this time OpenGL returned the indices 0, 1, 2 and 3.
Then, in the render cycle, I started by setting the first program with glUseProgram(), made the changes to its vertex attributes with glVertexAttribPointer(), and finally called glDrawElements(). After this, I called glUseProgram() again for the second program, used glVertexAttribPointer() again, and finally called glDrawElements().
But at this point things conflict, because the vertex attribute indices of the second program affect the vertices of the first program too.
I've tried a lot of things, searched a lot, asked a lot... I'm exhausted. I can't find what is wrong.
So I started to think that I'm doing everything wrong!
So I repeat my question: how do you work with multiple meshes (with different textures and behavior) in OpenGL ES 2.x? Using multiple programs? How?
To draw multiple meshes, just call glDrawElements/glDrawArrays multiple times. If those meshes require different shaders, just set them. ONE, and only ONE, shader program is active at a time.
So each time you change your shader program (specifically the vertex shader), you need to reset all the vertex attributes and pointers.
It's as simple as that.
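A stripped-down sketch of that render loop in C (OpenGL ES 2.0); the struct, the buffer handles, and the attribute names ("a_position", "a_texcoord") are made-up placeholders. The point is simply that after every glUseProgram() you query that program's attribute locations and re-issue every glEnableVertexAttribArray()/glVertexAttribPointer() before its draw call:
#include <GLES2/gl2.h>

/* One drawable mesh: the program it uses plus its buffers. */
typedef struct {
    GLuint program;
    GLuint vbo, ibo;
    GLsizei index_count;
} Mesh;

/* Interleaved layout assumed here: 3 floats position + 2 floats texcoord. */
static void draw_mesh(const Mesh *mesh)
{
    glUseProgram(mesh->program);

    /* Attribute locations belong to THIS program; fetch them after switching. */
    GLint pos = glGetAttribLocation(mesh->program, "a_position");
    GLint uv  = glGetAttribLocation(mesh->program, "a_texcoord");

    glBindBuffer(GL_ARRAY_BUFFER, mesh->vbo);
    glEnableVertexAttribArray((GLuint)pos);
    glVertexAttribPointer((GLuint)pos, 3, GL_FLOAT, GL_FALSE,
                          5 * sizeof(GLfloat), (const void *)0);
    glEnableVertexAttribArray((GLuint)uv);
    glVertexAttribPointer((GLuint)uv, 2, GL_FLOAT, GL_FALSE,
                          5 * sizeof(GLfloat), (const void *)(3 * sizeof(GLfloat)));

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mesh->ibo);
    glDrawElements(GL_TRIANGLES, mesh->index_count, GL_UNSIGNED_SHORT, (const void *)0);
}

/* Per frame: draw_mesh(&grass); draw_mesh(&trees); draw_mesh(&character); */
Uniforms (textures, matrices) are handled the same way: set them per program, between glUseProgram() and the draw call.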
Thanks for the answer,
but I think you just repeated my own words... about the draw methods, about one program being active, about everything.
Whatever.
The point is that your words gave me an insight!
You said: "you need to reset all vertex attributes and pointers".
Well... not exactly reset, but the problem was that I was not updating ALL the vertex attributes every render cycle, like the texture coordinates. I was only updating the attributes whose values changed, and when I cleared the buffers I lost the old values.
Now that I update ALL the attributes, whether their values changed or not, everything works!
See, what I had before is:
glCreateProgram();
...
glAttachShader();
glAttachShader();
...
glLinkProgram();
glUseProgram();
...
glGetAttribLocation();
glVertexAttribPointer();
glEnableVertexAttribArray();
...
glDrawElements();
I repeated the process for the second program, but only called glVertexAttribPointer() for some of the attributes.
Now what I have is a call to glVertexAttribPointer() for ALL the attributes.
What drove me crazy is that if I removed the first block of code (for the first program), the whole second program worked fine.
And if I removed the second block of code (for the second program), the first one worked fine.
Now it seems so obvious.
Of course: since the VSH runs per vertex, it will work with nulled values if I don't update ALL the attributes and uniforms.
I used to think of OpenGL more like a 3D engine that works with 3D objects and has a scene where you place your objects and set up lights. But no... OpenGL only knows about triangles, lines and points, nothing more. I think about it differently now.
Anyway, the point is that now I can move forward!
Thanks