Blender border render internals - rendering

I would like to know how Blender's border render works internally. How can Blender compute lighting if it has no information about the lights in the tiles it won't render? I have not found any reference (source code excluded) on how this feature of Blender works. Can somebody explain it (or give me some reference)?

The render border setting only alters what part of the image is rendered; it does not alter what data is sent to the render engine to generate the image.
You can test this by placing an object with a reflective surface in front of the camera and another object behind the camera; the object behind the camera will show up in the reflection. The border setting doesn't change the reflection in the object, it only changes what part of the image is rendered.
Rendering an image starts at each pixel that will be visible in the final image and sends a "ray" into the scene to determine what colour that pixel will be. Each ray bounces around the scene from object to object to light source, based on the render settings, to calculate the final result. While the render border reduces the pixels used as starting points for rays, it does not reduce the objects or lights in the scene that each ray may come into contact with. Each ray going through the scene sees every visible object and light that can influence the final result for its pixel.
This conference video explains ray types and might give you a better grasp of how a ray goes through a scene to get the final image.
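To make the distinction concrete, here is a purely conceptual sketch in plain C (none of this is Blender's actual code; the names, sizes and shading formula are made up for illustration). The border only shrinks the loop over pixels; the scene data that every shaded pixel samples stays complete, so a light far outside the border still contributes.
#include <stdio.h>

#define W 8
#define H 6

typedef struct { float x, y, intensity; } Light;

/* Hypothetical shading: every evaluated pixel sums the falloff of every light in the scene. */
static float shade(int px, int py, const Light *lights, int count) {
    float value = 0.0f;
    for (int i = 0; i < count; i++) {
        float dx = px - lights[i].x, dy = py - lights[i].y;
        value += lights[i].intensity / (1.0f + dx * dx + dy * dy);
    }
    return value;
}

int main(void) {
    /* The second light sits far outside the border, yet still lights the bordered pixels. */
    Light lights[] = { { 1.0f, 1.0f, 4.0f }, { 20.0f, 20.0f, 100.0f } };
    int border_min_x = 2, border_max_x = 6, border_min_y = 2, border_max_y = 5;

    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            int inside = (x >= border_min_x && x < border_max_x &&
                          y >= border_min_y && y < border_max_y);
            /* Only pixels inside the border are shaded; the scene data is never cropped. */
            printf("%6.2f ", inside ? shade(x, y, lights, 2) : 0.0f);
        }
        printf("\n");
    }
    return 0;
}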

Related

Render texture and normalized view rect in Unity

I'm using Unity 3D 3.5 pro.
I've got a scene with two cameras in it. One of them is looking at a plane that has a render texture on it. The other is recording the render texture. When the camera that's recording the render texture has a 1:1 normalized view rect width and height, everything is fine. But when it's something different, some weird stuff happens: the render texture's image becomes distorted. I've tried releasing and discarding the render texture's contents in an update function, but nothing changes. It's completely blocking the project I'm working on. I have pictures here to explain the situation in detail. The reason it's a problem is that I need to be able to place non-rectangular objects in front of the square and not have their scales appear distorted, due to the plane the render texture is shown on not being square. What could I be doing wrong?
I also placed a similar question on unity answers, but received no usable help there. Here was the thread:
http://answers.unity3d.com/questions/389094/rendertexture-normalized-view-rect.html
I figured it out. I needed to mess with the offset and tiling of the rendertexture. Silly rabbit!

Collision detection of uneven shapes in iOS

I am working on a drag-and-drop activity for iPad. I have a rectangular PNG image (see the image named obj2). When I drag obj1, it should react only when it is over the black portion of the rectangle.
if (CGRectIntersectsRect(obj1.frame, obj2.frame))
{
    NSLog(@"hit test done!!");
}
Right now, this piece of code registers a hit even on the transparent area. How can I prevent that from happening?
For something as simple as your specific example (triangle and circle), the link that David Rönnqvist gives is very useful. You should definitely look at it to see some available tools. But for the general case, the best bet is clipping, drawing, and searching.
For some background, see Clipping a CGRect to a CGPath.
First, create an alpha-only bitmap image. This is explained in the above link.
Next, clip your context to one of your images using CGContextClipToMask().
Now, draw your other image onto the context.
Finally, search the bitmap data for any colored pixels (see the above link for example code).
If any of the pixels are colored, then there is some overlap.
Another, similar approach (which might actually be faster) is to draw each image into its own alpha-only CGBitmapContext, then walk the pixels in each context and see if both are ever >128 at the same time.
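Here is a minimal sketch of that second approach (assumptions: obj1 and obj2 are UIImageViews in the same superview, and each image view shows its image scaled to fill its frame; this is illustrative code, not a drop-in solution).
- (BOOL)imageView:(UIImageView *)obj1 overlapsImageView:(UIImageView *)obj2
{
    CGSize size = obj1.superview.bounds.size;
    size_t width = (size_t)size.width, height = (size_t)size.height;

    // One alpha-only bitmap per image, both covering the shared superview's bounds.
    CGContextRef ctx1 = CGBitmapContextCreate(NULL, width, height, 8, width,
                                              NULL, (CGBitmapInfo)kCGImageAlphaOnly);
    CGContextRef ctx2 = CGBitmapContextCreate(NULL, width, height, 8, width,
                                              NULL, (CGBitmapInfo)kCGImageAlphaOnly);

    // Draw each image at its view's frame so both buffers share one coordinate space.
    // (Both are vertically flipped the same way, which doesn't change the overlap result.)
    CGContextDrawImage(ctx1, obj1.frame, obj1.image.CGImage);
    CGContextDrawImage(ctx2, obj2.frame, obj2.image.CGImage);

    const uint8_t *alpha1 = (const uint8_t *)CGBitmapContextGetData(ctx1);
    const uint8_t *alpha2 = (const uint8_t *)CGBitmapContextGetData(ctx2);

    BOOL overlap = NO;
    for (size_t i = 0; i < width * height && !overlap; i++) {
        // "Mostly opaque" in both images at the same pixel counts as a real hit.
        overlap = (alpha1[i] > 128 && alpha2[i] > 128);
    }

    CGContextRelease(ctx1);
    CGContextRelease(ctx2);
    return overlap;
}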

glReadPixels read "out of frames" area

I am drawing a 3200x2000 textured quad in OpenGL. The OpenGLView frame size is set to 940x560, and the quad draws as it should. But when I try to save it as an image (using glReadPixels) and set the glReadPixels area from (0,0) to (3200,2000), it creates 3200x2000 of pixel data, but when I save it to a file I only see a small part of the image (940x560 from the bottom-left corner) and the whole other area is black. So how can I read the off-screen area? I tried using a framebuffer object, but it's complicated and I get errors while creating it. Is there any other solution?
Situation visualization:
Original image looks like this (3200x2000):
OpenGLView looks like this (940x560):
Saved image looks like this (3200x2000):
So you're rendering to the window. Well, the window has a particular size. And nothing exists outside of that size.
This is part of something OpenGL calls the "pixel ownership test". If a pixel is not owned by the context, then its contents are undefined. Pixels outside the window are not owned by the context, and therefore their contents are undefined.
This is one reason why framebuffer objects exist: so that you can render outside the size of your window. Though be advised: there is a maximum viewport size limit.
Alternatively, you can render in screen-sized pieces, where you download each piece after each rendering, then move the camera to render the next piece.
You haven't given many details in terms of code or platform.
But I think you should be using offscreen rendering, rather than just reading from the rendered window. If you are unfamiliar with using frame buffer objects, here is a minimal example:
https://github.com/datenwolf/codesamples/tree/master/samples/OpenGL/minimalfbo
Edit #1:
Since OP mentioned that the platform is OS X, I am posting my code below, which shows a minimal FBO example in iOS:
https://github.com/glman74/simpleFBO
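For reference, a minimal sketch of the FBO route on desktop OpenGL (assumptions: a GL 3.x context, headers such as <OpenGL/gl3.h> on OS X plus <stdlib.h> for malloc, error handling mostly omitted; the sizes are the ones from the question).
/* Create an off-screen framebuffer the size of the full image. */
GLuint fbo, colorRb;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenRenderbuffers(1, &colorRb);
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 3200, 2000);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRb);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* The requested size may exceed GL_MAX_RENDERBUFFER_SIZE on some GPUs. */
}

/* The viewport must match the FBO size, not the 940x560 window. */
glViewport(0, 0, 3200, 2000);
/* ... draw the textured quad here ... */

/* Read back the full-size result; every pixel is now owned by the FBO. */
unsigned char *pixels = malloc(3200 * 2000 * 4);
glReadPixels(0, 0, 3200, 2000, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
/* ... save `pixels` to a file, then free(pixels) ... */

/* Return to rendering into the window and clean up. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDeleteRenderbuffers(1, &colorRb);
glDeleteFramebuffers(1, &fbo);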

How to add a shadow to a UIImageView which fits the shape of the image content but with some rotation and shift effect

I have been looking for the solution on the web for a long time. Most tutorials are fairly simple about adding a shadow to a UIView. I also noticed that if we add a shadow to a UIImageView, the shadow shape can perfectly fit the shape of the content image if the image itself has an alpha channel. Say, for example, if the image is an animal with a transparent background, the shadow shape is also the same as that animal (not a rectangular shadow matching the UIImageView frame).
But this is not enough. What I need to do is alter the shadow so it has some rotation angle and a compressed (squeezed or shifted) effect, so that it looks like the sunlight comes from a certain spot.
To demonstrate what I need, I have uploaded 2 images below, which I captured from the Google Maps app created by Apple. You can imagine the annotation pin is an image which has the pin shape, so the shadow is also "pin shaped", but it is not simply "offset" with a CGSize: you can see the top of the shadow is shifted right by about 35 degrees and its height is slightly squeezed.
When we tap and hold the pin, the shadow is also animated away from the pin, so I believe such a shadow can be made programmatically.
The best shadow tutorial I have found so far is http://nachbaur.com/blog/fun-shadow-effects-using-custom-calayer-shadowpaths but unfortunately it cannot create this effect.
If anyone knows the answer or any better search terms, please let me know. Thank you.
(Please note that the shape of the image is dynamic in the App, so using any tool like Photoshop to pre-render the shadow is not an option.)
In order to create dynamic effects like this, you have to use Core Graphics. It's incredibly powerful once you know how to use it. Basically you need to set a skew transform on the context, set up a shadow and draw the image. You will probably have to use transparency layers as well.
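As a rough sketch of that idea (this variant builds the slanted silhouette by clipping to the image's own alpha mask and filling it under a shear transform, rather than using a context shadow; the canvas size, shear factors and shadow color are arbitrary and would need tuning for a real pin).
- (UIImage *)imageWithSlantedShadow:(UIImage *)image
{
    CGSize canvas = CGSizeMake(image.size.width * 2, image.size.height);
    UIGraphicsBeginImageContextWithOptions(canvas, NO, image.scale);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect imageRect = CGRectMake(0, 0, image.size.width, image.size.height);

    // 1. Shadow pass: shear and squash the context, clip to the image's alpha,
    //    and fill with translucent black to get a slanted silhouette.
    CGContextSaveGState(ctx);
    CGContextTranslateCTM(ctx, 0, canvas.height);            // flip to CG coordinates
    CGContextScaleCTM(ctx, 1, -1);
    CGAffineTransform shear = CGAffineTransformMake(1, 0, 0.7, 0.5, 0, 0); // lean right, half height
    CGContextConcatCTM(ctx, shear);
    CGContextClipToMask(ctx, imageRect, image.CGImage);
    CGContextSetFillColorWithColor(ctx, [UIColor colorWithWhite:0 alpha:0.4].CGColor);
    CGContextFillRect(ctx, imageRect);
    CGContextRestoreGState(ctx);

    // 2. Image pass: draw the unmodified image on top of its shadow.
    [image drawInRect:imageRect];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}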
It doesn't sound like you can use CALayer shadows, since that is meant to solve a specific use-case. The approach Apple takes with the pin marks on the map is to have two separate images that are created ahead of time (e.g. in Photoshop) and they position them within the map relative to a reference point.
If you really do need to do this at run-time, it should still be possible using either Core Graphics or Core Image. To get a blurred shadow appearance, you can use a blur CIFilter (see the kCICategoryBlur category). You can then convert the image to grayscale, and to get that compressed look you just need to resize and skew the image.
Once you have two separate images, you can either take the CGImageRef for the shadow image and set it as the contents of another sublayer, or you can add it as a separate view.
If you know what all the shapes are, you could just render a shadow image in Photoshop or something.

Flipboard style page turn animation

I'm trying to write a fairly simple animation using Core Animation to simulate a book cover being opened. The book cover is an image, and I'm applying the animations to its layer object.
As well as animating a rotation around the y axis (with the anchorPoint property set to the left of the screen), I need to scale the right-hand edge of the image up so it appears to "get closer" to the user and creates the illusion of depth. Flipboard, for example, does this scaling really well.
I can't find any way of scaling an image like this, so that only one edge is scaled and the image ends up non-rectangular.
All help appreciated with this one!
Core Animation, by default, "flattens" its 3D hierarchy into a 2D world at z=0. This causes perspective and the like not to work properly. You need to host your layer in a CATransformLayer, which will render its sublayers as a true 3D layer hierarchy.
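A minimal sketch of that setup (assumptions: containerView is the view hosting the animation and coverLayer already has the cover image as its contents; the perspective and timing values are arbitrary).
// Host the cover in a CATransformLayer so its 3D transform isn't flattened at z = 0.
CATransformLayer *bookLayer = [CATransformLayer layer];
bookLayer.frame = containerView.bounds;

// Perspective: a small negative m34 makes the edge nearer to the viewer render larger.
CATransform3D perspective = CATransform3DIdentity;
perspective.m34 = -1.0 / 500.0;
containerView.layer.sublayerTransform = perspective;

coverLayer.anchorPoint = CGPointMake(0.0, 0.5);   // hinge on the left edge
coverLayer.frame = bookLayer.bounds;
[bookLayer addSublayer:coverLayer];
[containerView.layer addSublayer:bookLayer];

// Rotate the cover open around the y axis; the perspective scales the free edge.
CABasicAnimation *open = [CABasicAnimation animationWithKeyPath:@"transform.rotation.y"];
open.fromValue = @0.0;
open.toValue = @(-M_PI_2);
open.duration = 1.0;
[coverLayer addAnimation:open forKey:@"open"];
coverLayer.transform = CATransform3DMakeRotation(-M_PI_2, 0, 1, 0);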