Render texture and normalized view rect in Unity - camera

I'm using Unity 3D 3.5 Pro.
I've got a scene with two cameras in it. One of them is looking at a plane that has a render texture on it; the other is recording to that render texture. When the recording camera has a 1:1 (square) normalized viewport rect, everything is fine. But when the rect is anything else, weird stuff happens: the render texture's image becomes distorted. I've tried releasing and discarding the render texture's contents in an Update function, but nothing changes. It's completely blocking the project I'm working on. I have pictures here to explain the situation in detail.

The reason this is a problem is that I need to be able to place non-rectangular objects in front of the square without their scales appearing distorted, which happens because the plane showing the render texture is not square. What could I be doing wrong?
I also posted a similar question on Unity Answers, but received no usable help there. Here is the thread:
http://answers.unity3d.com/questions/389094/rendertexture-normalized-view-rect.html

I figured it out. I needed to adjust the offset and tiling of the render texture's material. Matching the material's tiling and offset to the camera's normalized viewport rect makes the plane sample only the region the camera actually renders into, so the image is no longer stretched. Silly rabbit!

Related

Flickering material in Blender

I have some problems with a material on a large plane.
It is a simple plane with a texture on it. When I add a simple image texture, the material looks fine. But if I then animate the camera moving, the material flickers and acts weird near the horizon. I figure it is because the image texture gets very small far away, so it renders slightly differently on each frame. I can also see that the flickering stops if I disconnect the displacement, so maybe it's a displacement problem and not an image problem.. I don't know :) But is there a way to make this stop? Maybe a way to render with less detail when it's far away, or a way to make the image texture only appear close to the camera? Or something else?
Best
Michael

Metal multisampling results in darkened textures

So I'm trying to implement full-screen MSAA in my Metal app. I have it working, and when drawing solid-filled polygons the edges appear smooth as expected. However, my textured polygons appear dark, and they get darker as I increase the number of samples, which suggests the shader might be taking only one sample of the texture per fragment and blending it with n - 1 samples of black, thereby darkening the result.
However, in my app I also have textures that I render to and then draw to the screen. These textures show up perfectly fine. I can't really see a difference between the two kinds of textures that would change the behavior of multisampling.
Anyway, if anyone could maybe give me any clues as to what's going on, I would greatly appreciate it. I'm pretty stumped on this one.
EDIT:
Here is how I am setting up all my pipeline state(s)
Here is how the texture pipeline state is set up specifically
I figured it out. The problem was that I hadn't set my stencil draw pipeline state to be multisampled, so it was only reading the stencil buffer's value for 1 out of n samples, hence darkening the output. Works fine now.
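For anyone who hits the same symptom, here is a minimal sketch of the fix, assuming Swift and a 4x MSAA pass; the shader function names are hypothetical placeholders. The point is that every MTLRenderPipelineState used in a multisampled render pass, including a stencil-only one, must be created with the same sample count as the pass's MSAA attachments:

```swift
import Metal

let msaaSampleCount = 4  // must match the pass's multisampled color/stencil textures

func makeStencilPipelineState(device: MTLDevice,
                              library: MTLLibrary) throws -> MTLRenderPipelineState {
    let descriptor = MTLRenderPipelineDescriptor()
    // "stencilVertex" / "stencilFragment" are placeholder shader names.
    descriptor.vertexFunction = library.makeFunction(name: "stencilVertex")
    descriptor.fragmentFunction = library.makeFunction(name: "stencilFragment")
    descriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
    descriptor.colorAttachments[0].writeMask = []        // stencil-only draw: no color writes
    descriptor.stencilAttachmentPixelFormat = .stencil8
    descriptor.sampleCount = msaaSampleCount             // the line that was missing
    return try device.makeRenderPipelineState(descriptor: descriptor)
}
```

If your stencil lives in a combined depth/stencil texture (e.g. depth32Float_stencil8), set depthAttachmentPixelFormat to that same combined format as well.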

Blender border render internals

I would like to know how Blender's border render works internally. How can Blender compute lighting if it has no information about the lights in the tiles it won't render? I have not found any reference (source code excluded) on how this feature of Blender works. Can somebody explain it (or give me some reference)?
The render border setting only alters what part of the image is rendered; it does not alter what data is sent to the render engine to generate the image.
You can test this by placing an object with a reflective surface in front of the camera and another object behind the camera: the object behind the camera will show in the reflection. The border setting doesn't change the reflection in the object; it only changes what part of the image is rendered.
Rendering an image starts at the pixel that will be visible in the final image and sends a "ray" into the scene to determine what colour the specific pixel will be. Each ray will bounce around in the scene from object to object to light source based on render settings to calculate the final result. While the render border will reduce the pixels used as the starting point for each ray, it does not reduce the objects or lights in the scene that each ray may come into contact with. Each ray going through the scene will see every visible object and light in the scene that can influence the final result for each pixel.
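As a rough illustration of that last point (a toy Swift sketch, not Blender's actual implementation), notice that the border only restricts which pixels launch rays, while the intersection test still walks the entire scene:

```swift
import simd

struct Sphere {
    var center: SIMD3<Double>
    var radius: Double
    var brightness: Double
}

// Every ray tests every object, regardless of which pixel launched it.
func trace(origin: SIMD3<Double>, direction: SIMD3<Double>, scene: [Sphere]) -> Double {
    var nearestT = Double.infinity
    var shade = 0.0
    for sphere in scene {
        // Standard ray-sphere intersection (direction assumed normalized).
        let oc = origin - sphere.center
        let b = simd_dot(oc, direction)
        let c = simd_dot(oc, oc) - sphere.radius * sphere.radius
        let discriminant = b * b - c
        guard discriminant >= 0 else { continue }
        let t = -b - discriminant.squareRoot()
        if t > 0 && t < nearestT {
            nearestT = t
            shade = sphere.brightness
        }
    }
    return shade
}

// Only pixels inside `border` spawn rays; the scene list is untouched.
func renderWithBorder(width: Int, height: Int,
                      border: (x: Range<Int>, y: Range<Int>),
                      scene: [Sphere]) -> [Double] {
    var image = [Double](repeating: 0, count: width * height)
    for py in border.y {
        for px in border.x {
            // Orthographic camera for brevity: one ray per pixel, looking down -z.
            let origin = SIMD3(Double(px), Double(py), 10.0)
            let direction = SIMD3(0.0, 0.0, -1.0)
            image[py * width + px] = trace(origin: origin, direction: direction, scene: scene)
        }
    }
    return image
}
```

Shrinking `border` changes only how many rays are launched; `scene` still contains every object, which is why reflections of objects outside the border (or behind the camera) stay correct.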
This conference video explains ray types and may give you a better grasp of how a ray travels through a scene to produce the final image.

Unexpected behavior after zooming into a 3D object with OrbitControls + Three.js

So, I have this code. It's a small 3D scene with a ground, a red box, a custom loaded building, and a rotating "sun". I'm delegating camera navigation to the OrbitControls script, as it best fits the way I want the camera to behave. However, there is a weird problem: after I zoom in on a 3D object in this scene, rotate a little, then zoom out to "leave" the object, the zoom-out takes a billion scrolls. I'm sorry if I'm not being clear enough; once I'm in, I have to scroll practically forever, and each scroll seems to move out of the object very slowly, as if the camera state were somehow screwed up.
I'm sorry if this exact question has already been asked; I looked for this issue and tried suggestions from other topics that seemed the same, but they didn't work.
EDIT:
Wow, something even weirder. I tested zooming in on this example indefinitely, and eventually the zoom-in started progressing VERY slowly (just like in my code). Am I misunderstanding something? It looks as if the accumulated zoom-ins somehow blocked rendering or something.
WestLangley's tip actually solved my problem. OrbitControls zooms by scaling the camera-to-target distance, so once that distance becomes tiny, each scroll step moves the camera only a tiny amount, even though the view barely appears to step into the scene. Setting minDistance prevented the camera from zooming in infinitely.

Flipboard style page turn animation

I'm trying to write a fairly simple animation using Core Animation to simulate a book cover being opened. The book cover is an image, and I'm applying the animations to its layer object.
As well as animating a rotation around the y axis (with the anchorPoint property set to the left edge of the screen), I need to scale the right-hand edge of the image up so it appears to "get closer" to the user and creates the illusion of depth. Flipboard, for example, does this scaling really well.
I can't find any way of scaling an image like this, where only one edge is scaled and the image ends up non-rectangular.
All help appreciated with this one!
Core Animation, by default, "flattens" its 3D layer hierarchy into a 2D world at z = 0, which prevents perspective and the like from working properly. You need to host your layer in a CATransformLayer, which renders its sublayers as a true 3D layer hierarchy.
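Here is a minimal sketch of that setup, assuming UIKit; "cover" is a placeholder image asset name and the geometry values are arbitrary. With a perspective sublayerTransform on the transform layer, the y-axis rotation alone makes the right edge grow as it swings toward the viewer, so no separate one-edge scaling is needed:

```swift
import UIKit

let hostView = UIView(frame: CGRect(x: 0, y: 0, width: 320, height: 480))

// Host layer that preserves true 3D positioning of its sublayers.
let container = CATransformLayer()
container.frame = hostView.bounds

// Apply perspective to everything inside the container;
// m34 = -1 / eyeDistance controls how strong the effect is.
var perspective = CATransform3DIdentity
perspective.m34 = -1.0 / 500.0
container.sublayerTransform = perspective

// The book cover, hinged on its left edge.
let cover = CALayer()
cover.contents = UIImage(named: "cover")?.cgImage
cover.bounds = CGRect(x: 0, y: 0, width: 200, height: 300)
cover.anchorPoint = CGPoint(x: 0, y: 0.5)   // rotate around the left edge
cover.position = CGPoint(x: 60, y: 240)
container.addSublayer(cover)
hostView.layer.addSublayer(container)

// Swing the cover open around the y axis; perspective supplies the
// "right edge gets closer" look automatically.
let flip = CABasicAnimation(keyPath: "transform.rotation.y")
flip.fromValue = 0
flip.toValue = -CGFloat.pi / 2
flip.duration = 1.0
cover.add(flip, forKey: "open")
cover.transform = CATransform3DMakeRotation(-.pi / 2, 0, 1, 0)  // keep the final pose
```

Note that CATransformLayer only composites its sublayers; it ignores its own contents, backgroundColor, and masking, so the visible image must live on the sublayer.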