iOS inserting 3D models into OpenGL Game - objective-c

I want to make my first 3D game. I decided to make it in OpenGL, and I am using Apple's code that comes with the new OpenGL Game project template. I know how to import my own objects and manipulate them. I am wondering if someone can point me to a good tutorial, or knows how the objects are displayed. I know for a fact that in the default project both cubes are the same model displayed twice. Sadly, I could not find the part that displays them and applies different colors.

Sadly, I could not find the part that displays them and applies different colors.
This just tells you that you need to learn the basics of OpenGL first.
We could point out the lines responsible, but all you'd see are "strange" assignments of attribute pointers, index arrays, and weird things called uniforms being passed around.
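For what it's worth, the usual pattern behind "one model, two cubes" is a single set of vertex data drawn twice, with a different transform (and color) uniform uploaded before each draw call. A minimal sketch of the per-instance data in C; the names are illustrative, not taken from Apple's template:

```c
#include <string.h>

/* Sketch: one cube mesh drawn twice with different per-instance data.
   In GL you'd upload 'model' and 'color' as uniforms before each
   glDrawArrays/glDrawElements call; here we only build the values. */
typedef struct { float model[16]; float color[4]; } Instance;

/* 4x4 identity with a translation in the last column
   (column-major, as OpenGL expects). */
static void translation(float m[16], float x, float y, float z)
{
    memset(m, 0, sizeof(float) * 16);
    m[0] = m[5] = m[10] = m[15] = 1.0f;
    m[12] = x; m[13] = y; m[14] = z;
}

/* Two instances of the same mesh: same vertices, different
   transform and tint. */
void make_instances(Instance out[2])
{
    translation(out[0].model, -1.5f, 0.0f, 0.0f);
    float red[4] = {1, 0, 0, 1};
    memcpy(out[0].color, red, sizeof red);

    translation(out[1].model, 1.5f, 0.0f, 0.0f);
    float green[4] = {0, 1, 0, 1};
    memcpy(out[1].color, green, sizeof green);
}
```

The draw loop then becomes: bind the one vertex buffer once, and for each instance set the uniforms and issue a draw call.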

Related

Three.js Visual indication or Effect to show when an object is occluded

I’m building a program where you control a small avatar (a basic circle geometry or plane) that traverses a scene filled with 3D models and shapes. I’d like to achieve an effect similar to those found in many video games, where you can see some indication that the avatar is behind the various models and shapes. For example, here is an image to explain what I mean:
Example image to show desired effect
It doesn’t necessarily need to be the outline of the shape as in the example image. I’m open to any effect, really, that gives some indication that the avatar is behind something, but it also can't be too performance-heavy, as I'd like to get this program running on mobile. Being able to customise the effect somewhat (e.g. color, thickness, etc.) is also highly desirable. Any advice or suggestions would be greatly appreciated; there really doesn't seem to be much information that I can find on achieving an effect like this.
Also, I thought it was worth mentioning that thus far I have attempted two things on my own. One was simply rendering the avatar above everything, which turned out to look really silly and confusing. The other was using an Outline post-processing effect (from this library: https://github.com/vanruesc/postprocessing), which actually looked pretty great but proved too performance-heavy to run optimally at all times (not to mention other problems with color blending and transparent/see-through shapes and models).
I understand this is kind of a shot in the dark but thought it didn't hurt to ask.
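Absent a full outline pass, one cheap and widely used trick is a second "X-ray" draw of the avatar with the depth test inverted; in three.js this maps to cloning the avatar's material with depthFunc set to THREE.GreaterDepth and depthWrite off. Below is a GL-style sketch of the pass order; the GL calls here are stand-ins that merely record the sequence, so the sketch is self-contained:

```c
#include <string.h>

/* Stand-ins for the GL/WebGL state calls used below; they only record
   the call sequence.  In three.js the same switches live on
   THREE.Material (depthTest, depthFunc, depthWrite). */
#define GL_LESS    0x0201
#define GL_GREATER 0x0204
static char trace[256];
static void glDepthFunc(int f)  { strcat(trace, f == GL_GREATER ? "depth:GREATER " : "depth:LESS "); }
static void glDepthMask(int on) { strcat(trace, on ? "mask:on " : "mask:off "); }
static void drawScene(void)     { strcat(trace, "scene "); }
static void drawAvatar(const char *style) { strcat(trace, style); strcat(trace, " "); }

/* The classic "X-ray" trick: draw the scene, then the avatar twice.
   The first avatar pass inverts the depth test (GL_GREATER), so it
   only shows where geometry is IN FRONT of the avatar -- a flat,
   tintable silhouette.  The second pass draws the avatar normally. */
void renderFrame(void)
{
    trace[0] = '\0';
    drawScene();
    glDepthMask(0);               /* don't write depth for the overlay */
    glDepthFunc(GL_GREATER);      /* passes only where occluded        */
    drawAvatar("silhouette");     /* solid color; scale it up slightly
                                     for a thickness-like control      */
    glDepthFunc(GL_LESS);         /* restore the normal test           */
    glDepthMask(1);
    drawAvatar("avatar");
}
```

This costs one extra draw of the (small) avatar geometry per frame, no post-processing, so it should stay mobile-friendly; color is whatever the silhouette material uses.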

SharpDX How To Render a 3D Environment

I just started coding some basics in SharpDX (VB.NET) and I already got it to render a 2D triangle. I know how to render other 2D stuff, but I want to create something in 3D where I'm able to rotate the camera around some cubes. I tried it, but failed at converting the 3D space to screen coordinates. Here are my questions:
How can I calculate a matrix for perspective projection?
How can I pass that matrix to my vertex shader?
How can I make the camera rotate around the objects when I drag the mouse over the screen?
Please explain these things to me and give some code examples. I'm just a beginner in SharpDX, and everything I found was simply not understandable for me.
A few things you can do when you first start.
Firstly, there are some great examples you can learn from (they're in C#, but translate readily to VB).
I suggest you look at the SharpDX Direct3D 11 samples within the SharpDX repository.
These examples (especially the triangle example) go through the basics, including setting up the device, creating the simple resources to bind to your GPU, and compiling the shader bytecode.
The samples use the Effects framework, though, which is deprecated, so once you become familiar with compiling shader code I would advise moving away from that paradigm.
The more advanced examples will show you how to set up your matrices.
The last item you wanted to know about is mouse movement. I would advise having a look at MSDN's documentation on mouse-move events. You will need to bind one to your window/control, read the deltas, and use those deltas to drive your rotation/movement. Look into Vector3 (SharpDX): basically, you do all of this in vector space and then create the various translation/rotation matrices from it.
Hope this is a start.

Create mock 3D "space" with forwards and backwards navigation

In iOS, I'd like to have a series of items in "space", similar to the way Time Machine works. The "space" would be navigated by a scroll-bar-like feature on the side of the page. So if the person scrolls up, it would essentially zoom in on the space, and objects that were further away would come closer to the reference point. If one zooms out, those objects would fade into the back, and whatever is behind the frame of reference would come into view. Kind of like this.
I'm open to a variety of solutions. I imagine there's a relatively easy solution within openGL, I just don't know where to begin.
Check out Nick Lockwood's iCarousel on github. It's a very good component. The example code he provides uses a custom carousel style very much like what you describe. You should get there with just a few tweaks.
As you said, accomplishing what you ask is relatively easy in OpenGL (ES); however, it may not be equally easy to explain to someone who is not confident with OpenGL :)
First of all, I'd suggest you take a look at The Red Book, the reference guide to OpenGL, or at the OpenGL Wiki.
To begin, you could practice using GLUT; it will help you gain confidence with OpenGL by providing a high-level API that lets you skip the boring side of setting up an OpenGL context and go directly to the drawing part.
OpenGL ES is a subset of OpenGL, so it has essentially the same structure. Once you've understood how to use OpenGL, it shouldn't be difficult to use OpenGL ES. Of course, Apple's documentation is a very important resource.
Once you know your way around OpenGL, you should be able to see how your program could be structured.
You may, for example, keep your viewpoint fixed and translate the world (or vice versa). There is (of course) no universal solution, especially because the only thing that matters is the final result.
Another solution (maybe equally good, depending on your needs) may be to simply scale images (representing the objects of your world) up and down to simulate movement through the space.
For example, you could use an array to store all of your images and a slider to set (increase/decrease) the size of each image. Once an image becomes too large for the display, you could gradually decrease its alpha, so that the image behind it slowly appears. Take a look at the UIImageView reference; it contains all the APIs you need for this.
This may cost you some of the 3-dimensionality, but it's probably a simpler/faster solution than learning OpenGL.
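The scale/alpha idea above can be sketched as a small mapping function; the formula and names are illustrative (in the app you'd apply the scale to each UIImageView's transform and the alpha to its alpha property, driven by the slider):

```c
/* Sketch of the slider-driven zoom: each image sits at a depth d
   (larger = further away), and the slider value z acts as a "camera"
   position.  An item's on-screen scale grows as the camera approaches
   it, and its alpha fades out once it scales past full size, so the
   image behind it appears.  Purely illustrative constants. */
void item_transform(float depth, float z, float *scale, float *alpha)
{
    float dist = depth - z;            /* camera-to-item distance      */
    if (dist <= 0.0f) {                /* behind the viewer: invisible */
        *scale = 0.0f;
        *alpha = 0.0f;
        return;
    }
    *scale = 1.0f / dist;              /* classic perspective divide   */
    /* fade from opaque at scale 1 down to transparent at scale 2 */
    float a = 2.0f - *scale;
    *alpha = a < 0.0f ? 0.0f : (a > 1.0f ? 1.0f : a);
}
```

Calling this for every image whenever the slider moves gives the zoom-in/fade-out behaviour described, with no OpenGL involved.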

Zoom-able/ resizable grid with Objective-C

Hi, I'm thinking about making a MIDI step sequencer, and I need to make a note grid/matrix that resizes/adapts when you zoom. I've been searching for different ways of doing this but can't figure out a way that works well.
I thought about drawing cell objects (NSRect), but I couldn't figure out how to get the right interaction when resizing.
This is my first "biggish" Obj-C project, so please don't kill me; I'm still battling with the frameworks, and the syntax is so foreign to me.
You could use Core Animation layers to create your grid.
Take a look at Apple's Geek Game Board sample code project:
http://developer.apple.com/library/mac/#samplecode/GeekGameBoard/Introduction/Intro.html
The code shows a way to display different kinds of card/board games using CALayer.
The Checkers game looks to be the closest to the grid you want to create.
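A minimal sketch of the layout math, assuming square cells and a uniform zoom factor (names are illustrative; with CALayer you would assign each cell layer's frame from something like this whenever the zoom changes, and Core Animation handles the redraw):

```c
/* Sketch: lay out rows x cols note cells, scaled by a zoom factor.
   base_cell is the cell size at zoom 1.0; gap is the spacing between
   cells.  Recompute every cell's rect when the zoom changes. */
typedef struct { float x, y, w, h; } Rect;

Rect cell_rect(int row, int col, float base_cell, float zoom, float gap)
{
    float size = base_cell * zoom;       /* cells grow with zoom */
    Rect r;
    r.x = col * (size + gap);
    r.y = row * (size + gap);
    r.w = size;
    r.h = size;
    return r;
}
```

Because each cell is its own layer, hit-testing a tap back to a (row, col) is just the inverse division, which gives you the note-toggling interaction for free.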

OpenGL ES 2.0 multiple meshes? (just Real World Examples)

I'm a little confused about this point.
Everything I've found in books, blogs, forums, and even in the OpenGL specs talks about very abstract techniques, nothing about real-world examples.
And I'm going crazy over this: how do you put and manage multiple objects (meshes) with OpenGL ES 2.x?
In theory it seems simple. You have a vertex shader (VSH) and a fragment shader (FSH), and you attach both to one Program (glCreateProgram, glAttachShader, glLinkProgram, glUseProgram, ...). In every render cycle, that Program runs its VSH for each vertex and then its FSH on every "pixel" of the 3D object, and finally sends the result to the buffer (leaving aside the per-vertex stages, rasterization, and the other steps in the pipeline).
OK, seems simple...
All this is fired by a call to the draw function (glDrawArrays or glDrawElements).
OK again.
Now, here is where things get confusing for me.
What if you have several objects to render?
Let's talk about a real world example.
Imagine that you have a landscape with trees, and a character.
The grass of the landscape has one texture, the trees have a texture for the trunk and leaves (a texture atlas), and finally the character has another texture (also an atlas) and is animated, too.
After imagine this scene, my question is simple:
How do you organize this?
Do you create a separate Program (with one VSH and FSH) for each element in the scene? Like one Program for the grass and terrain, one Program for the trees, and one Program for the character?
I've tried that, but... when I create multiple Programs and try to use glVertexAttribPointer(), the textures and colors of the objects conflict with each other, because the attribute locations (the indexes) of the first Program are repeated in the second Program.
Let me explain: I used glGetAttribLocation() in the class that controls the floor of the scene, and OpenGL returned the indexes 0, 1, and 2 for the vertex attributes.
Then, in the tree class, I created another Program and other shaders, and when I called glGetAttribLocation() again, OpenGL returned the indexes 0, 1, 2, and 3.
In the render cycle, I started by setting the first Program with glUseProgram(), made the changes to its vertex attributes with glVertexAttribPointer(), and finally called glDrawElements(). After that, I called glUseProgram() for the second Program, used glVertexAttribPointer() again, and finally glDrawElements().
But at this point things conflict, because the vertex-attribute indexes of the second Program affect the vertexes of the first Program, too.
I've tried a lot of things, searched a lot, asked a lot... I'm exhausted. I can't find what is wrong.
So I started to think that I'm doing everything wrong!
Now I repeat my question: how do you work with multiple meshes (with different textures and behavior) in OpenGL ES 2.x? Using multiple Programs? How?
To draw multiple meshes, just call glDrawElements/glDrawArrays multiple times. If those meshes require different shaders, just set them: ONE, and only ONE, shader program is active at a time.
So each time you change your shader program (specifically the VS), you need to reset all vertex attributes and pointers.
As simple as that.
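The answer above boils down to a call sequence. In this sketch the GL entry points are stand-ins that merely record the order of calls (a real app gets them from &lt;OpenGLES/ES2/gl.h&gt;); the point is that every glUseProgram is followed by a full re-specification of that mesh's attributes before its draw call, because attribute state is global, not saved and restored per program:

```c
#include <stdio.h>
#include <string.h>

typedef unsigned int GLuint;
static char trace[512];
static void log_call(const char *s) { strcat(trace, s); strcat(trace, " "); }

/* Stand-ins recording the call order. */
static void glUseProgram(GLuint p)         { char b[32]; sprintf(b, "use(%u)", p);     log_call(b); }
static void setAllAttribPointers(GLuint p) { char b[32]; sprintf(b, "attribs(%u)", p); log_call(b); }
static void drawMesh(GLuint p)             { char b[32]; sprintf(b, "draw(%u)", p);    log_call(b); }

/* One frame: two meshes, two programs.  setAllAttribPointers stands for
   EVERY glVertexAttribPointer, glEnableVertexAttribArray and glUniform*
   call the mesh needs -- all of them, every time, not only the ones
   whose values changed since the last frame. */
void render(void)
{
    trace[0] = '\0';
    for (GLuint prog = 1; prog <= 2; prog++) {
        glUseProgram(prog);           /* select this mesh's shaders */
        setAllAttribPointers(prog);   /* re-point all attributes    */
        drawMesh(prog);               /* glDrawElements(...)        */
    }
}
```

Skipping the middle step for attributes that "didn't change" is exactly the trap described further down in this thread: the second program's glVertexAttribPointer calls silently redirect the state the first program was relying on.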
Thanks for the answer,
but I think you just repeated my own words... about the draw methods, about one Program being active, about everything.
Whatever.
The point is that your words gave me an insight!
You said: "you need to reset all vertex attributes and pointers".
Well... not exactly reset; the problem was that I was not updating ALL the vertex attributes in the render cycle (like texture coordinates). I was updating only the attributes whose values changed, and when I cleared the buffers, I lost the older values.
Now that I update ALL attributes, whether or not their values changed, everything works!
See, what I had before was:
glCreateProgram();
...
glAttachShader();
glAttachShader();
...
glLinkProgram();
glUseProgram();
...
glGetAttribLocation();
glVertexAttribPointer();
glEnableVertexAttribArray();
...
glDrawElements();
I repeated the process for the second Program, but called glVertexAttribPointer() for only a few of the attributes.
Now what I have is a call to glVertexAttribPointer() for ALL attributes.
What drove me crazy is that if I removed the first block of code (for the first Program), the whole second Program worked fine.
And if I removed the second block (for the second Program), the first one worked fine.
Now it seems so obvious.
Of course: since the VSH is a per-vertex operation, it will run with nulled values if I don't update ALL the attributes and uniforms.
I used to think of OpenGL as more like a 3D engine that works with 3D objects, with a scene where you place your objects and set lights. But no... OpenGL only knows about triangles, lines, and points, nothing more. I think differently now.
Anyway, the point is that now I can move forward!
Thanks