Blender UV Mapping, coherence of textures

I created a connected surface (several deformed and bent planes). Long story short, the UV mapping looked like this:
I could not find any short tutorial on how to produce a connected UV map. I am aware of the mathematics of curves, and I don't need absolute control over the look of the texture; it's just that, the way it is right now, the texture looks very roughly pixelated.
Is it possible to pull a texture over the surface and get a more connected UV map?
I also think this question is important so that others don't have to buy a whole Udemy course, as there has to be a simpler way.

How to apply depth test to diffuse lighting?

I've been fiddling my way through Vulkan and have tried out some basic diffuse lighting, which only takes the surface normals into account. On the side of the model facing the light, things look fine -
On the opposite side of the model, though, there's a part of the model which is shaded as if it were illuminated even though it shouldn't be -
I know this happens because I'm only considering the surface normals: the shader doesn't care where the vertex is as long as its normal points towards the light. But how do I fix it? I feel like I need a way to do a depth test to figure out whether a part of the model should be lit or not. How would I go about doing this if that is the case? What should I be doing otherwise?
Sounds like you want to implement shadows.
A standard way is shadow mapping. You render the scene from the light's point of view and keep only the depth buffer. You then pass that depth buffer as a texture to the fragment shader, sample it based on where the fragment is in the world (transformed into the light's clip space), and compare the sampled depth with the fragment's distance to the light.
However, there are various caveats with this technique. The most common is shadow acne, where quantization error leads to fragments shadowing themselves, resulting in speckled lighting; you can fix that by adding a small bias to the depth comparison. The next is peter panning, where the bias you just added lets light bleed through where a thin wall meets a floor; you fix that by not making walls so thin that the bias pushes through them.
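A minimal sketch of that depth comparison, done on the CPU with NumPy purely for illustration; in a real Vulkan renderer this logic lives in the fragment shader, and the matrix, texture, and bias names here are assumptions:

```python
import numpy as np

def shadow_factor(world_pos, light_view_proj, shadow_map, bias=0.005):
    """Return 0.0 if the fragment is shadowed, 1.0 if it can see the light."""
    # Transform the world-space position into the light's clip space.
    p = light_view_proj @ np.append(np.asarray(world_pos, float), 1.0)
    p = p / p[3]                              # perspective divide

    # Map x/y from [-1, 1] NDC to shadow-map texel coordinates.
    h, w = shadow_map.shape
    u = int((p[0] * 0.5 + 0.5) * (w - 1))
    v = int((p[1] * 0.5 + 0.5) * (h - 1))
    if not (0 <= u < w and 0 <= v < h):
        return 1.0                            # outside the map: treat as lit

    # Depth stored when rendering from the light vs. this fragment's depth
    # (assuming a projection that maps depth to [0, 1], as Vulkan does).
    stored_depth = shadow_map[v, u]
    fragment_depth = p[2]

    # The small bias is the shadow-acne fix mentioned above.
    return 1.0 if fragment_depth - bias <= stored_depth else 0.0
```

The diffuse term is then multiplied by the returned factor, so fragments whose normals face the light but which are occluded by geometry closer to the light end up dark.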

Live streaming model of a person in a VR environment

Given that a user is static in a VR environment, which of the two camera types below would be better for creating a more 'real'-looking representation of a live-streamed presenter in the VR world?
1) Kinect (can measure depth)
2) Normal 2D camera, such as a high-end webcam (maybe something like the pointgrey Flea3), with a software-assisted 3D illusion from a static angle
I would be grateful if anyone with experience in the relevant technologies or fields could help out!
Your question lacks the necessary information to provide a single correct answer. Is it your intent to provide a full 3D VR experience, or are you content with just 2D content? Is the presenter static, or are they moving around the viewer? Towards them? Away from them? Will you be using full spherical projection or something less complete, like cylindrical projection? And what sort of lighting do you think you'll need? These are all nontrivial questions, because the answers determine the best camera package to get your content.
You also fail to consider capturing with a 360º camera, which would be advantageous if the presenter is indeed moving around in the 360º space. My personal bias is towards capturing with these, but there's no single production solution unless you constrain the problem more thoroughly.

Rendering a 'backlit' effect for many individual textures

I was wondering if I could get some advice on the best way to approach this.
I'm in the process of writing an emulator that runs old UK arcade fruit machine games that have 'feature boards'. The machines are similar to US slots. The actual board consists of many semi-transparent squares that are lit from behind (see the image for an example).
What I'm looking to do is render a 3D representation of a machine, preferably using an open-source 3D engine. What I'm not sure of is how best to approach the 'backlighting' effect of the individual squares of the feature board. A square can be individually turned on or off and dimmed to any level. I'm very experienced with C++ and assembly but fairly new to DirectX/OpenGL.
Bearing in mind there could be up to 512 lamps flashing/dimming individually, I'm guessing that using 'normal' lights behind semi-transparent textures would be too intensive? I've read up on pixel and vertex shaders and was wondering whether this would be the best way to approach the effect (e.g. split the feature board up into individually textured polygons for each square, but join them all together so it looks like one surface).
Thanks for any advice
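As a minimal sketch of the per-square idea described in the question: keep one brightness value per lamp and combine it with the square's texel colour in a tiny 'shader-style' function. The names, colours, and 0-1 conventions here are assumptions; in a real engine this arithmetic would run in a pixel shader, with the lamp brightnesses uploaded as a uniform/constant buffer each frame.

```python
# Hypothetical lamp state for the feature board: 0.0 = off, 1.0 = fully lit.
NUM_LAMPS = 512
lamp_brightness = [0.0] * NUM_LAMPS

def shade_square(texel_rgb, lamp_index,
                 backlight_rgb=(1.0, 0.9, 0.7),   # warm bulb colour (assumed)
                 ambient=0.15):                   # how visible an unlit square is
    """Colour of one semi-transparent square, given its texel and its lamp."""
    b = lamp_brightness[lamp_index]
    return tuple(
        min(1.0, t * (ambient + b * l))           # texel tinted by ambient + backlight
        for t, l in zip(texel_rgb, backlight_rgb)
    )

# Example: lamp 42 dimmed to half brightness behind a reddish square.
lamp_brightness[42] = 0.5
print(shade_square((0.8, 0.2, 0.2), 42))
```

Driving the effect this way avoids having 512 actual light sources, which is the cost the question is worried about; per frame it is only an array of brightness values and a multiply per pixel.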

Bones in 3ds Max or Blender

I'm pretty much grabbing at straws here because I have no idea what I'm asking, but here is the question.
I've been looking at 3D modeling out of pure interest and came across the concept of bones.
Now, I am not too sure what bones are even after looking them up on the wiki, but they seem like an abstraction of real-life skeletons and whatnot, so in a model of, say, a human I just think of them as the skeleton.
To my understanding, a bone is defined by a translation, a rotation, and a scale on the x, y and z axes. (Isn't that just a single point?)
I am interested in taking a model in Blender or Max and exporting the information (whatever it may be) that is used to define these bones. I can definitely see the bones in these programs, but I want to get that data out into a text file. Is there a way to export this?
I think you need to separate these ideas:
Bones - which, as you correctly say, have a position and rotation. They are the objects that you can control and that will affect the skin of the model. They are usually in a hierarchy, so that if you move one bone it will affect all of the bones connected to it, like a human skeleton.
Skin - this is the polygonal mesh that you can usually see. You give it a base position in the editor, and the skeleton operates on the skin to move it around.
Animation - this is data passed to the bones, usually a rotation; for example, to make an arm bend.
http://gpwiki.org/index.php/OpenGL:Tutorials:Basic_Bones_System gives a good explanation.
Hope that helps :3
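On the export part of the question: in Blender the bone data can be dumped to a text file with a short script. A minimal sketch, assuming Blender's Python API (bpy), an armature selected as the active object, and a hypothetical output path:

```python
import bpy

obj = bpy.context.object                      # the currently selected object
assert obj is not None and obj.type == 'ARMATURE', "select an armature first"

with open("/tmp/bones.txt", "w") as out:
    for bone in obj.data.bones:               # rest-pose bones of the armature
        parent = bone.parent.name if bone.parent else "<root>"
        out.write(f"bone {bone.name} parent {parent}\n")
        out.write(f"  head {tuple(bone.head_local)}\n")   # start point, object space
        out.write(f"  tail {tuple(bone.tail_local)}\n")   # end point, object space
        # 4x4 rest transform of the bone relative to the armature object
        for row in bone.matrix_local:
            out.write("  " + " ".join(f"{v: .4f}" for v in row) + "\n")
```

This writes the rest pose; animated per-frame transforms live on obj.pose.bones instead, and 3ds Max exposes similar data through MaxScript.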

Detect Collision point between a mesh and a sphere?

I am writing a physics simulation using Ogre and MOC.
I have a sphere that I shoot from the camera's position and it travels in the direction the camera is facing by using the camera's forward vector.
I would like to know how I can detect the point of collision between my sphere and another mesh.
How would I be able to check for a collision point between the two meshes using MOC or OGRE?
Update: Should have mentioned this earlier. I am unable to use a 3rd-party physics library, as I need to develop this myself (uni project).
The accepted solution here flat out doesn't work. It will only sort of work if the mesh density is generally high enough that no two points on the mesh are farther apart than the diameter of your collision sphere. Imagine a tiny sphere launched at short range on a random vector at a huge cube mesh. The cube mesh only has 8 vertices. What are the odds that the sphere is actually going to hit one of those 8 vertices?
This really needs to be done with per-polygon collision. You need to be able to check the intersection of a polygon and a sphere (and additionally a cylinder if you want to avoid tunneling, as reinier mentioned). There are quite a few resources for this online and in book form, but http://www.realtimerendering.com/intersections.html might be a useful starting point.
The comments about optimization are good. Early out opportunities (perhaps a quick check against a bounding sphere or an axis aligned bounding volume for the mesh) are essential. Even once you've determined that you're inside a bounding volume, it would probably be a good idea to be able to weed out unlikely polygons (too far away, facing the wrong direction, etc.) from the list of potential candidates.
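A minimal sketch of the sphere-vs-triangle test this answer calls for, using the standard closest-point-on-triangle construction; the NumPy arrays and function names are assumptions, not part of Ogre or MOC:

```python
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Closest point to p on triangle abc (all NumPy 3-vectors)."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = ab.dot(ap), ac.dot(ap)
    if d1 <= 0 and d2 <= 0:
        return a                                        # vertex region A
    bp = p - b
    d3, d4 = ab.dot(bp), ac.dot(bp)
    if d3 >= 0 and d4 <= d3:
        return b                                        # vertex region B
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + ab * (d1 / (d1 - d3))                # edge AB
    cp = p - c
    d5, d6 = ab.dot(cp), ac.dot(cp)
    if d6 >= 0 and d5 <= d6:
        return c                                        # vertex region C
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + ac * (d2 / (d2 - d6))                # edge AC
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        return b + (c - b) * ((d4 - d3) / ((d4 - d3) + (d5 - d6)))  # edge BC
    denom = 1.0 / (va + vb + vc)
    return a + ab * (vb * denom) + ac * (vc * denom)    # interior of the face

def sphere_hits_triangle(center, radius, a, b, c):
    """Return (hit, contact_point) for a sphere against triangle abc."""
    center, a, b, c = (np.asarray(x, float) for x in (center, a, b, c))
    q = closest_point_on_triangle(center, a, b, c)
    d = center - q
    return d.dot(d) <= radius * radius, q               # compare squared distance
```

Loop this over the candidate triangles that survive the broad-phase checks above; the returned contact point is the collision point the question asks for.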
I think the best would be to use a specialized physics library.
That said, if I think about this problem, I suspect it's not that hard:
The sphere has a center and a radius. For every point in the mesh, do the following:
check if the point lies inside the sphere.
if it does, check whether it is closer to the sphere's center than the previously found point (if any).
if it is, store this point as the collision point.
Of course, this routine will be fairly slow.
A few things to speed it up:
for a first trivial reject, see whether the bounding sphere of the mesh collides
don't calculate square roots when checking distances; use the squared lengths instead (much faster)
instead of comparing against every point of the mesh, use a spatial subdivision structure (quadtree/BSP) for the mesh to quickly rule out groups of points
Ah... and this routine only works if the sphere doesn't travel too fast relative to the mesh. If it travels very fast and you sample it X times per second, chances are the sphere will have flown right through the mesh without ever colliding. To overcome this, you must use 'swept volumes', which basically turn your sphere into a tube, making the math considerably more complicated.
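A minimal sketch of the per-vertex routine above, including the squared-distance and bounding-sphere shortcuts (NumPy, with the parameter names being assumptions):

```python
import numpy as np

def closest_vertex_in_sphere(center, radius, vertices,
                             mesh_bound_center, mesh_bound_radius):
    """Return the mesh vertex inside the sphere closest to its center, or None."""
    center = np.asarray(center, float)

    # Trivial reject: sphere vs. bounding sphere of the whole mesh.
    gap = center - np.asarray(mesh_bound_center, float)
    if gap.dot(gap) > (radius + mesh_bound_radius) ** 2:
        return None

    best, best_d2 = None, radius * radius       # compare squared lengths, no sqrt
    for v in np.asarray(vertices, float):
        d = v - center
        d2 = d.dot(d)
        if d2 <= best_d2:                       # inside the sphere and closest so far
            best, best_d2 = v, d2
    return best
```

As noted in the other answer, this only finds the nearest mesh vertex inside the sphere, not a true contact point, so it behaves well only on reasonably dense meshes.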