UV mapping in Blender not showing properly when unwrapping

I am trying to UV map a cube in Blender 2.74, but even though all six faces are placed on the image on the left-hand side, only two of them actually show it on the cube on the right-hand side. I have tried unwrapping in different ways and moving the squares on the left-hand side around, but still only two sides show the image.
When I try more complicated shapes (a tree, for example), none of the faces show the texture, no matter how I unwrap.
However, when I export as a .obj file and draw it with OpenGL, all sides are textured, with the texture coordinates in the places where I UV mapped them.
So my problem is that I don't know what it is going to look like until I actually export the file.
How do I get all faces of an object to show textured as I do the mapping?

Simple solution: duplicate a couple of those light sources (the black sphere outlined with a dashed line that always appears in the default cube startup scene) and place them around your object. Also try switching between the viewport shading options.
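If you prefer to script it, here is a minimal sketch using the Blender 2.7x Python API; the lamp count, distance, and placement are arbitrary choices for illustration, not anything Blender requires:

```python
# Surround the active object with point lamps so textured faces
# are lit from all sides in the viewport (Blender 2.7x API).
import bpy

def add_lamps_around(obj, distance=5.0):
    cx, cy, cz = obj.location
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        bpy.ops.object.lamp_add(
            type='POINT',
            location=(cx + dx * distance, cy + dy * distance, cz + distance))

add_lamps_around(bpy.context.active_object)
```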

Related

Godot: What is an efficient calculation for the AABB of a simple 3D model from a camera's view?

I am attempting to come up with a quick and efficient means of translating a 3D mesh into a projected AABB. In the end, I would like to accomplish something similar to Figure 1, wherein only the area of the screen covered by the cube is located inside the bounding box highlighted in red. (If it is at all possible, getting the area as small as possible, as highlighted in blue, would increase efficiency down the road.)
Figure 1. https://i.imgur.com/pd0E20C.png
Currently, I have tried:
Calculating each point's position on the screen using camera.unproject_position(). This failed largely due to my inability to wrap my head around the pixel positions trending towards infinity for points near the camera plane. I understand it has something to do with the tangent in the projection, but frankly, it is too late for my brain to function anymore.
Getting the area of intersection between the view frustum and the AABB of the mesh instance. This method seems convoluted, and to get the result in a usable format I would need to project it back into 2D coordinates anyway.
Using the MeshInstance's VisualInstance to create a texture in which a pixel is white if it contains the mesh instance and black otherwise. Visual instances in general just baffle me, and I did not think it would be efficient to have another viewport just to output this texture.
What I am looking for:
An output that can be passed to a shader telling it where to perform certain calculations. Right now this is set up to use a bounding box, but it could easily be rewritten to use a texture as well. It could also be rewritten to use polygons, but I am trying to keep the calculations in the shader to a minimum.
Certain solutions I have tried before have worked, slightly, but this one must be robust. The camera interacting with the 3D object will be able to move completely around and through it, meaning that at times the view will be completely surrounded by the 3D model, with points both in front of and behind the camera.
Thank you for any help you can provide.
I will try my best to update this post with information if needed.
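For what it's worth, the first approach (projecting points with camera.unproject_position()) can be made robust against the trending-to-infinity problem: project all eight corners of the mesh's world-space AABB, and if any corner lands on or behind the near plane, fall back to the full viewport, which also covers the camera-inside-the-model case. Below is a minimal sketch in plain Python, assuming a hypothetical row-major 4x4 view-projection matrix `vp`; in Godot you would use camera.unproject_position() and camera.is_position_behind() instead:

```python
import itertools

def screen_aabb(vp, aabb_min, aabb_max, width, height, eps=1e-4):
    """Screen-space bounding rectangle of a world-space AABB."""
    xs, ys = [], []
    # All eight corners of the box.
    for x, y, z in itertools.product(*zip(aabb_min, aabb_max)):
        cx = vp[0][0]*x + vp[0][1]*y + vp[0][2]*z + vp[0][3]
        cy = vp[1][0]*x + vp[1][1]*y + vp[1][2]*z + vp[1][3]
        cw = vp[3][0]*x + vp[3][1]*y + vp[3][2]*z + vp[3][3]
        if cw <= eps:
            # Corner on or behind the near plane: the camera may be inside
            # the model, so fall back to the whole viewport (robust, not tight).
            return (0.0, 0.0, float(width), float(height))
        # Perspective divide, then NDC [-1, 1] -> pixel coordinates (y flipped).
        xs.append((cx / cw * 0.5 + 0.5) * width)
        ys.append((1.0 - (cy / cw * 0.5 + 0.5)) * height)
    return (max(0.0, min(xs)), max(0.0, min(ys)),
            min(float(width), max(xs)), min(float(height), max(ys)))
```

A tighter rectangle (the blue box in Figure 1) would need the actual mesh vertices, or clipping of the box edges against the near plane, rather than the full-viewport fallback.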

How to deal with non-manifold vertices for 3D printing

I am working on modeling a shape for 3D printing, but I'm running into a problem with it having non-manifold vertices. I've ensured that the faces are oriented properly, but it still says these vertices are non-manifold. Anyone have ideas?
Figured it out. The problem was that I was trying to make the long top, outside, and bottom piece one single face. The vertices where the center section met the inside of the outer part of the design were being overlapped by the edge of that single long face. Instead, I added vertices (points in red) in that spot, at the top, and in both places on the mirrored side of the shape. After that I recreated the faces between the new vertices, checked for non-manifold vertices again, and got none.
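For anyone hunting similar problems, Blender can select the offending geometry for you; a small sketch using the built-in operator (run with the mesh as the active object):

```python
import bpy

bpy.ops.object.mode_set(mode='EDIT')        # the operator needs Edit Mode
bpy.ops.mesh.select_mode(type='VERT')       # work on vertices
bpy.ops.mesh.select_all(action='DESELECT')  # clear the old selection
bpy.ops.mesh.select_non_manifold()          # highlight non-manifold geometry
```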

Overlapping of geometries in SceneKit

I have put a plane at the same height as the edges of the cube. Everything you see was created in Blender, and you can download the Blender file here. The plane is a little bigger than the hole so that they overlap.
The whole rendering is a little funny. I get this frame around the hole because the plane and the cube edge are at the same height. I only want the plane to be visible. How can I fix this?
EDIT: I can always change the height by a tiny bit, but I would prefer a different approach because of shadows, reflections, and the like.
I'm a little confused, because you are referring to a hole while it seems that your cube does not have any hole and you are adding a plane on top of it.
What you are seeing is called depth fighting (z-fighting), and it happens because both surfaces have the same depth value, yes.
SCNMaterial exposes properties like writesToDepthBuffer and readsFromDepthBuffer that can help with that. Also check SCNNode's renderingOrder property.

In OpenGL ES 2.0, how can I draw a wireframe of triangles except for the lines on adjacent coplanar faces?

I vaguely remember seeing something in OpenGL (not ES, which was still at v1.0 on the iPhone when I came across this, which is why I never used it) that let me specify which edges of my polygons were considered outlines versus those that made up the interior of faces. As such, this isn't the same as the outline of the entire model (which I know how to do), but rather the outline of a planar face with all its tris basically blended into one poly. For instance, in a cube made of tris, each face is actually two tris. I want to render the outline of the square, but not the diagonal across the face. Same thing with a hexagon: that takes four tris, but just one outline for the face.
Now yes, I know I can simply test all the edges to see if they share coplanar faces, but I could have sworn I remember seeing somewhere, when you're defining the tri mesh data, a way to say 'this line outlines a face whereas this one is inside a face.' That way, when rendering, you could set a flag that basically says 'Give me a wireframe, but only the wires around the edges of complete faces, not around the tris that make them up.'
BTW, my target is all platforms that support OpenGL ES 2.0, but my dev platform is iOS. Again, I'm pretty sure this was originally in OpenGL and may have been deprecated once shaders came on the scene, but I can't even find a reference to the feature to check whether that's the case.
The only way I know of now is to have one set of vertices but two separate sets of indices: one for rendering the tris, and another for rendering the wireframes of the faces. It's a real pain, since I end up hand-coding a lot of it, which, again, I'm 99% sure you used to be able to declare when defining the lines.
GL_QUADS, glEdgeFlag and glPolygonMode are not supported in OpenGL ES.
You could use GL_LINES to draw the wireframe. To get hidden-line removal, first draw filled triangles in black (with depth testing on) and then draw the edges you are interested in with GL_LINES.
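Since glEdgeFlag is gone, that second index set has to be built by hand, but it doesn't have to be hand-coded per model: the coplanar-edge test the question mentions is cheap to precompute once at load time. A minimal sketch in plain Python; the resulting index list is what you would upload and draw with GL_LINES. Note the coplanarity epsilon is scale-dependent here because the normals are not normalized:

```python
def tri_normal(verts, tri):
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (verts[i] for i in tri)
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

def coplanar(n1, n2, eps=1e-6):
    # Cross product of the two face normals is ~zero for coplanar faces.
    cx = n1[1] * n2[2] - n1[2] * n2[1]
    cy = n1[2] * n2[0] - n1[0] * n2[2]
    cz = n1[0] * n2[1] - n1[1] * n2[0]
    return cx * cx + cy * cy + cz * cz < eps

def outline_indices(verts, tris):
    # Map each undirected edge to the triangles that use it.
    edge_faces = {}
    for tri in tris:
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            edge_faces.setdefault((min(a, b), max(a, b)), []).append(tri)
    outline = []
    for (a, b), faces in edge_faces.items():
        # Keep boundary edges and edges where the two faces actually bend.
        if len(faces) != 2 or not coplanar(tri_normal(verts, faces[0]),
                                           tri_normal(verts, faces[1])):
            outline += [a, b]
    return outline  # index pairs for a GL_LINES draw call
```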

Which pixels did that drawmesh operation just draw to?

OK, it's a relatively simple problem: I want to know where, in screen space, a particular mesh was just drawn. I plan on storing that information in a data store of some kind so that when I interact with something in screen space, I can look it up in the register and find the object, i.e., click on the spaceship drawn on the screen and then select it as a target, etc.
I can't find any way of finding out which pixels the mesh was drawn to though...
Alternatively, if I'm missing something obvious regarding what it is that I want to do, please let me know!
There is no easy way to do that. But you can use another texture as a render target and render those meshes into it with unique colors.
So, for example, you assign #FF0000 to your mesh A and also draw it to your second render target with that color. Now, when you select a pixel from the second render target and look at its color, if it is #FF0000 you know that the pixel is part of mesh A. Thus you can easily pick the mesh drawn at a certain pixel when you click one of those pixels.
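A minimal sketch of that ID-to-color mapping, assuming an 8-bit-per-channel target and that the ID pass is drawn with lighting, blending, and anti-aliasing disabled so the colors read back exactly:

```python
def id_to_color(mesh_id):
    """Pack a mesh ID (< 2**24) into an (r, g, b) triple for the ID pass."""
    return ((mesh_id >> 16) & 0xFF, (mesh_id >> 8) & 0xFF, mesh_id & 0xFF)

def color_to_id(r, g, b):
    """Recover the mesh ID from a pixel read back from the ID render target."""
    return (r << 16) | (g << 8) | b

# Round trip: the pixel under the cursor maps straight back to the mesh.
assert color_to_id(*id_to_color(123456)) == 123456
```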
Why don't you unproject your screen-space coords into 3D space? The only complication I had was that I'd be left with a ray (a whole line of candidate depths) rather than a single point; I could check whether a mesh intersected it, but I often had multiple candidates for 'picking'.
Search for 'DirectX unproject' and there are various articles discussing it. It's sometimes complicated to implement, but done well it's actually pretty nifty; don't get put off by the people online who say it doesn't work, because it does work!
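For the record, the core of the unproject approach is: map the click to normalized device coordinates, push it through the inverse of the view-projection matrix at the near and far planes, and intersect the resulting ray with your meshes, taking the nearest hit to resolve multiple candidates. A minimal sketch in plain Python, assuming a hypothetical row-major 4x4 inverse view-projection matrix `inv_vp` and OpenGL-style NDC (DirectX's NDC z runs from 0 to 1 instead of -1 to 1):

```python
def unproject(inv_vp, sx, sy, ndc_z, width, height):
    """Map a screen pixel plus an NDC depth back into world space."""
    x = sx / width * 2.0 - 1.0
    y = 1.0 - sy / height * 2.0        # pixel y grows downward
    v = (x, y, ndc_z, 1.0)
    o = [sum(inv_vp[r][c] * v[c] for c in range(4)) for r in range(4)]
    return (o[0] / o[3], o[1] / o[3], o[2] / o[3])

def pick_ray(inv_vp, sx, sy, width, height):
    """Ray from the near plane towards the far plane under the cursor."""
    near = unproject(inv_vp, sx, sy, -1.0, width, height)
    far = unproject(inv_vp, sx, sy, 1.0, width, height)
    direction = tuple(f - n for n, f in zip(near, far))
    return near, direction  # intersect with mesh bounds; nearest hit wins
```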