Create Block that changes textures like End Portal Frame - minecraft

I'm trying to create my first mod in Minecraft. I already have an ore block, a brick block, and an item that I get when breaking the ore block. Now I want to go further and create a block that changes its texture when right-clicked with an item, like the end portal frame where the eye of ender gets inserted into the frame. At first I don't want to try 3D textures, only a texture switch. Once that works I'll try 3D textures, but I haven't found a good tutorial for either of these things.

Related

GML line of sight not working

I am creating a game where the character will be able to run around a prison, only able to see walls and characters within its line of sight. The following screenshot shows the desired effect.
[Screenshot: desired effect]
However, I have a problem where some walls are covered up because their top corner is not visible, as seen here:
[Screenshot: the bottom-right corner wall is covered]
I am using the following code:
checkPlayersx = x
checkPlayersy = y
// true when no obj_wall instance blocks the line from this object to the player
if (!collision_line(checkPlayersx, checkPlayersy, obj_player.x, obj_player.y, obj_wall, 1, 0))
When looking at the bottom right block, the field of view is obstructed by the top and left tiles. You may have to find another way to trigger this block.
You might encounter this issue with some other angular blocks too. What you could do is create a new object that has the standard wall block as its parent, but takes the IDs of the adjacent blocks and lights up if one of those adjacent blocks is lit up.
To detect the adjacent blocks during creation, it is essential to create this block after its neighbours. You can use the instance_nearest() function to detect them.
But the technique I would use is slightly different. I would create a cross-shaped sprite that, when put in place of the block, covers the centres of the adjacent blocks, and use precise collision checking. At the creation of the wall, I would swap the wall sprite for the cross and detect collisions with the adjacent walls. All adjacent wall IDs would be stored and checked every step, so that when one of them lights up, this wall lights up too. Then I would switch back to the normal wall sprite, and voilà!
Hoping this helps.

OpenGL-ES: how to draw object twice using different shaders

I'm trying to make an app that simulates dice rolls, and at the moment everything works fine. I'm trying to add a shader so that when the user selects a die, an outline is drawn around the selected die. The way I'm going about this is to render that particular die scaled up slightly and completely black, then draw the textured die on top of it so it looks like it has an outline.
The problem I'm having is that when the shader for drawing the object black is applied first, it draws the black die fine, but when the textured die is then drawn over it, it is drawn in the wrong place, and it's the wrong die. The odd thing is that it draws it inside one of the other dice on the screen.
If I apply the same shader to the object twice, everything draws the way it's supposed to for that particular shader (either all black from the outline shader, or all textured and lit from the normal shader), but when I apply both shaders to the same model, things go wrong.
This class loads the vertices and stuff, and draws the object:
http://pastebin.com/N5aYAtBC
This class manages the shaders:
http://pastebin.com/0bT7ABRu
I've left out a lot of code that I feel has nothing to do with the problem, but if you need more, just leave a comment.
And when I click on different dice, this is what happens (the first pic is normal):
http://imgur.com/a/ikZVX
glUniformMatrix4fv(toon_mvp, 1, 0, modelViewProjectionMatrix.m);
glUniform4fv(toon_outline, 1, oc);
glUseProgram(Toon);
I think you're handling the uniforms wrong. I pulled out this snippet from your source as an example.
When you upload a uniform, it only affects the currently bound program (each program has its own internal storage for uniforms).
If you're expecting 'toon_mvp' and 'toon_outline' to be set for the program 'Toon', they will not be. You need to bind the program you want to modify first, and only then set its uniforms, not the other way around.
You'll probably have to fix this in other places in your code as well.
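For illustration, here's the same snippet with the order swapped (the names are just the ones from your code; everything else around it stays as it is):
glUseProgram(Toon);                 // bind the program first
// uniforms set now go into Toon's own uniform storage
glUniformMatrix4fv(toon_mvp, 1, 0, modelViewProjectionMatrix.m);
glUniform4fv(toon_outline, 1, oc);
// ... issue the outline draw call here, then glUseProgram() the textured
// program and set *its* uniforms before the second draw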

OpenGL ES Graphics issue when not calling glClear()

I'm working on an iPad app that has a few thousand particles that the user can manipulate with touches. To produce interesting designs, I want the drawing from one frame to persist into the next instead of being cleared, which creates a sort of "trails" effect. At the moment I do this by not calling glClear() each frame while "trails" is turned on, so each frame's drawing is added on top of the previous frame's. This works fine in the iPad simulator, but for some reason, when I run it on an actual device and turn trails on, the particle trails flicker as if something weird is going on with the buffers.
Is there a better way to produce trails, and why does this graphics problem only occur on the device?
Thanks!
glClear() is called between frames so that you can begin drawing the next one on a clean slate; you really do need to clear the buffer between frames. It's not good practice to keep accumulating into the buffer, as you can start producing artifacts (as you are noticing). On a device the drawable is typically double-buffered and its contents aren't guaranteed to be preserved after presenting, so "the previous frame" alternates between two different buffers, which is why you see flicker there and not in the simulator.
To produce the trailing effect, you would probably want to use additional particles. Keep track of the particle's position or velocity, and then draw additional particles on the trail.
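Here's a rough sketch of that idea in plain C (the names trail, drawParticle() and TRAIL_LENGTH are made up for the example; your particle code will look different):
#define TRAIL_LENGTH 16

typedef struct { float x, y; } TrailPoint;

extern void drawParticle(float x, float y, float alpha);   // hypothetical draw helper

static TrailPoint trail[TRAIL_LENGTH];   // ring buffer of recent positions
static int trailHead = 0;

// call once per frame with the particle's current position
static void recordPosition(float x, float y) {
    trail[trailHead].x = x;
    trail[trailHead].y = y;
    trailHead = (trailHead + 1) % TRAIL_LENGTH;
}

// draw the stored positions, oldest first with the lowest alpha, so the trail fades out
static void drawTrail(void) {
    for (int i = 0; i < TRAIL_LENGTH; i++) {
        int   idx   = (trailHead + i) % TRAIL_LENGTH;
        float alpha = (float)(i + 1) / TRAIL_LENGTH;
        drawParticle(trail[idx].x, trail[idx].y, alpha);
    }
}
You still call glClear() every frame; the trail comes entirely from the extra particles you draw.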

OpenGL ES 2.0 multiple meshes? (just Real World Examples)

I'm a little confused about this point.
Everything I found in books, blogs, forums and even in the OpenGL specs talks about very abstract techniques. Nothing about real-world examples.
And I'm going crazy with this: how do you put and manage multiple objects (meshes) with OpenGL ES 2.x?
In theory it seems simple. You have a vertex shader (VSH) and a fragment shader (FSH), then you attach both to one program (glCreateProgram, glUseProgram, ...). In every render cycle, that program runs its VSH for each vertex and then the FSH for every "pixel" of that 3D object, and finally sends the result to the buffer (without going into the per-vertex stages, rasterization, and the other steps of the pipeline).
OK, seems simple...
All this is fired by a call to the draw function (glDrawArrays or glDrawElements).
OK again.
Now this is where things get confusing for me.
What if you have several objects to render?
Let's talk about a real world example.
Imagine that you have a landscape with trees, and a character.
The grass of the landscape has one texture, the trees have a texture for the trunk and the leaves (a texture atlas), and finally the character has another texture (also a texture atlas) and is animated too.
With this scene in mind, my question is simple:
How do you organize this?
Do you create a separate program (with one VSH and FSH) for each element in the scene? Like one program for the grass and the terrain relief, one program for the trees, and one program for the character?
I've tried it, but... when I create multiple programs and try to use glVertexAttribPointer(), the textures and colors of the objects conflict with each other, because the attribute locations (the indexes) of the first program are repeated in the second program.
Let me explain: I used glGetAttribLocation() in the class that controls the floor of the scene, and OpenGL returned the indexes 0, 1 and 2 for the vertex attributes.
Then, in the tree class, I created another program and other shaders, used glGetAttribLocation() again, and this time OpenGL returned the indexes 0, 1, 2 and 3.
Then, in the render cycle, I set the first program with glUseProgram(), changed its vertex attributes with glVertexAttribPointer(), and finally called glDrawElements(). After that, I called glUseProgram() for the second program, used glVertexAttribPointer() again, and finally glDrawElements().
But at this point things conflict, because the vertex attribute indexes of the second program also affect the vertices of the first program.
I've tried a lot of things, searched a lot, asked a lot... I'm exhausted. I can't find what's wrong.
So I started to think that I'm doing everything wrong!
Now I'll repeat my question: how do you work with multiple meshes (with different textures and behavior) in OpenGL ES 2.x? Using multiple programs? How?
To draw multiple meshes, just call glDrawElements/glDrawArrays multiple times. If those meshes require different shaders, just switch to them. ONE, and only ONE, shader program is active at a time.
So each time you change your shader program (specifically the VS), you need to reset all vertex attributes and pointers.
It's as simple as that.
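To make that concrete, here is a minimal sketch of such a render loop in C (the Mesh struct and all of its fields are made up for the example; the GL calls are the standard ES 2.0 ones):
#include <GLES2/gl2.h>   // or the equivalent ES 2.0 header on your platform

typedef struct {                        // hypothetical per-mesh record
    GLuint  program, vbo, ibo, texture;
    GLint   positionLoc, texCoordLoc;   // from glGetAttribLocation() at load time
    GLsizei stride, indexCount;
} Mesh;

void drawScene(const Mesh *meshes, int meshCount) {
    for (int i = 0; i < meshCount; i++) {
        const Mesh *m = &meshes[i];

        glUseProgram(m->program);       // only one program is ever active

        glBindBuffer(GL_ARRAY_BUFFER, m->vbo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m->ibo);

        // every attribute this program uses must be re-specified for this mesh
        glEnableVertexAttribArray(m->positionLoc);
        glVertexAttribPointer(m->positionLoc, 3, GL_FLOAT, GL_FALSE, m->stride, (const void *)0);
        glEnableVertexAttribArray(m->texCoordLoc);
        glVertexAttribPointer(m->texCoordLoc, 2, GL_FLOAT, GL_FALSE, m->stride, (const void *)(3 * sizeof(GLfloat)));

        glBindTexture(GL_TEXTURE_2D, m->texture);
        glDrawElements(GL_TRIANGLES, m->indexCount, GL_UNSIGNED_SHORT, 0);
    }
}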
Thanks for the answer,
But I think you just repeated my own words... about the draw methods, about one program being active, about everything.
Whatever.
The point is that your words gave me an insight!
You said: "you need to reset all vertex attributes and pointers".
Well... not exactly reset, but the thing is that I was not updating ALL the vertex attributes in the render cycle, like texture coordinates. I was only updating the attributes whose values changed. And when I cleared the buffers, I lost the older values.
Now that I update ALL the attributes, whether or not their values changed, everything works!
See, what I had before is:
glCreateProgram();              // one-time setup for this program
...
glAttachShader();               // vertex shader
glAttachShader();               // fragment shader
...
glLinkProgram();
glUseProgram();
...
glGetAttribLocation();          // query the attribute locations
glVertexAttribPointer();        // point EVERY attribute the shader uses at its data
glEnableVertexAttribArray();
...
glDrawElements();               // draw with whatever state is currently set
I repeated the process for the second program, but only called glVertexAttribPointer() a few times.
Now, what I have is a call to glVertexAttribPointer() for ALL attributes.
What drove me crazy is that if I removed the first block of code (for the first program), the whole second program worked fine.
If I removed the second block of code (for the second program), the first one worked fine.
Now seems so obvious.
Of course: since the VSH is a per-vertex operation, it will run with nulled values if I don't update ALL the attributes and uniforms.
I used to think of OpenGL more like a 3D engine that works with 3D objects and has a scene where you place your objects and set up lights. But no... OpenGL only knows about triangles, lines and points, nothing more. I think differently now.
Anyway, the point is that now I can move forward!
Thanks

How to generate graphs using integer values in iPhone

I want to show a graph/bar chart on the iPhone. How do I do this without custom APIs?
You may want to investigate the Core Plot project [code.google.com]. Core Plot was the subject of this year's scientific coding project at WWDC and is pretty usable for some cases already. From its inception, Core Plot was intended for both OS X and iPhone use. The source distribution (there hasn't been a binary release yet) comes with both OS X and iPhone example applications, and there's info on the project wiki for using it as a library in an iPhone app. Here's an example of its current plotting capabilities.
[Image: example Core Plot output (source: googlecode.com)]
Write your own. It's not easy; I'm in the process of doing the same thing right now. Here's how I'm doing it:
First, ignore any desire you may have to try using a UIScrollView if you want to allow zooming. It's totally not worth it.
Second, create something like a GraphElement protocol. I have a hierarchy that looks something like this:
GraphElement
GraphPathElement
GraphDataElement
GraphDataSupplierElement
GraphElement contains the basic necessary methods for a graph element, including how to draw, a maximum width (for zooming in), whether a point is within that element (for touches) and the standard touchBegan, touchMoved, and touchEnded functions.
GraphPathElement contains a CGPath, a line color and width, a fill color and a drawing mode. Whenever it's prompted to draw, it simply adds the path to the context, sets the colors and line width, and draws the path with the given drawing mode.
GraphDataElement, as a subclass of GraphPathElement, takes in a set of data in x-y coordinates, a graph type (bar or line), a frame, and a bounds. The frame is the actual size of the created output CGPath. The bounds is the size of the data in input coordinates. Essentially, it lets you scale the data to the screen size.
It creates a graph by first calculating an affine transform to transform the bounds to the frame, then it loops through each point and adds it as data to a path, applying that transform to the point before adding it. How it adds data depends on the type.
If it's a bar graph, it creates a rectangle of width 0, origin at (x,frame.size.height-y), and height=y. Then it "insets" the graph by -3 pixels horizontally, and adds that to the path.
If it's a line graph, it's much simpler. It just moves to the first point, then for each other point, it adds a line to that point, adds a circle in a rect around that point, then moves back to that point to go on to the next point.
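To make that concrete, here's a stripped-down sketch of the path construction in plain Core Graphics (the function name and parameters are placeholders, the bounds are assumed to start at the origin, and I've left out the little circle markers on the line graph):
#include <CoreGraphics/CoreGraphics.h>
#include <stdbool.h>

CGPathRef createGraphPath(const CGPoint *data, size_t count,
                          CGRect bounds, CGRect frame, bool isBarGraph) {
    CGMutablePathRef path = CGPathCreateMutable();

    // affine transform that scales data coordinates (bounds) into screen coordinates (frame)
    CGAffineTransform t = CGAffineTransformMakeScale(frame.size.width  / bounds.size.width,
                                                     frame.size.height / bounds.size.height);

    for (size_t i = 0; i < count; i++) {
        CGPoint p = CGPointApplyAffineTransform(data[i], t);
        if (isBarGraph) {
            // zero-width rect from the baseline up to the value, widened by 3 px on each side
            CGRect bar = CGRectMake(p.x, frame.size.height - p.y, 0, p.y);
            CGPathAddRect(path, NULL, CGRectInset(bar, -3, 0));
        } else if (i == 0) {
            CGPathMoveToPoint(path, NULL, p.x, p.y);
        } else {
            CGPathAddLineToPoint(path, NULL, p.x, p.y);
        }
    }
    return path;   // caller is responsible for CGPathRelease()
}
The resulting CGPath can then simply be added to the context and drawn with the stored colors and line width, as described above for GraphPathElement.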
GraphDataSupplierElement is the interface to my database that actually contains all the data. It determines what kind of graph it should be, formats the data into the required type for GraphDataElement, and passes it on, with the color to use for that particular graph.
For me, the x-axis is time, and is represented as NSTimeIntervals. The GraphDataSupplierElement contains a minDate and maxDate so that a GraphDateElement can draw the x-axis labels as required.
Once all this is done, you need to create the actual graph. You can go about it several ways. One option is to keep all the elements in an NSArray and whenever drawRect: is called, loop through each element and draw it. Another option is to create a CALayer for each element, and use the GraphPathElement as the CALayer's delegate. Or you could make GraphPathElement extend from CALayer directly. It's up to you on this one. I haven't gotten as far as trying CALayers yet, I'm still stuck in the simple NSArray stage. I may move to CALayers at some point, once I'm satisfied with how everything looks.
So, all in all, the idea is that you create the graph as one or many CGPaths beforehand, and just draw that when you need to draw the graph, rather than trying to actually parse data whenever you get a drawRect: call.
Scaling can be done by keeping the source data in your GraphDataElement and just changing the frame, so that scaling the bounds to the frame creates a CGPath wider than the screen, or whatever your needs are. I basically re-implemented my own pinch-zoom for my Graph UIView subclass that only scales horizontally by changing its transform; then, on completion, I get the current frame, reset the transform to identity, set the frame to the saved value, and set the frames of all the GraphElements to the new frame as well, to make them scale. Then just call [self setNeedsDisplay] to draw.
Anyway, that's a bit ramble-ish, but it's an outline of how I made it happen. If you have more specific questions, feel free to comment.