Currently using Blender V2.67
I'm following a tutorial at https://www.youtube.com/watch?v=SUcpgDVBLDQ&list=SP9FE4ACC7E521FBBF. Around 17:00 it starts explaining what I'm attempting to do, and 20:05-22:30 shows how to make a low-poly placeholder. At 22:24 he changes the mesh of the object from the low-poly placeholder to the mesh of the palm tree.
When I change the mesh of the low-res placeholder object to the hi-res palm tree mesh, the palm tree ends up at roughly 1/100 of its original size, small enough that it's almost invisible. Conversely, when I assign the low-res placeholder's mesh to a hi-res object, I get the low-res mesh but it is gigantic. How do I fix this? In the video, the object keeps the same dimensions when he swaps the mesh.
It turned out I had scaled the object in Object Mode instead of Edit Mode, which changed the object's scale factor instead of the mesh dimensions.
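In case anyone else hits this: after scaling in Object Mode the placeholder's Scale values in the N panel will be something other than 1.0, and that scale gets applied to whatever mesh you link to the object. Applying the scale (Ctrl+A > Scale) bakes it into the mesh so a mesh swap keeps its dimensions. A minimal sketch with the Blender 2.6x Python API, where the object name is just an example:

# Apply the object-mode scale so the object's scale returns to 1.0
# while the mesh keeps its current dimensions.
import bpy

placeholder = bpy.data.objects["Placeholder"]   # hypothetical object name
bpy.context.scene.objects.active = placeholder  # 2.6x way to set the active object
placeholder.select = True

# Equivalent to Ctrl+A > Scale in the 3D View
bpy.ops.object.transform_apply(location=False, rotation=False, scale=True)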
I would like to know how Blender's border render works internally. How can Blender compute the lighting correctly if it has no information about the lights in the tiles it won't render? I have not found any reference (source code excluded) on how this feature of Blender works. Can somebody explain it (or give me some reference)?
The render border setting only alters what part of the image is rendered; it does not alter what data is sent to the render engine to generate the image.
You can test this by placing an object with a reflective surface in front of the camera and another object behind the camera: the object behind the camera will show up in the reflection. The border setting doesn't change the reflection in the object, it only changes what part of the image is rendered.
Rendering an image starts at the pixel that will be visible in the final image and sends a "ray" into the scene to determine what colour the specific pixel will be. Each ray will bounce around in the scene from object to object to light source based on render settings to calculate the final result. While the render border will reduce the pixels used as the starting point for each ray, it does not reduce the objects or lights in the scene that each ray may come into contact with. Each ray going through the scene will see every visible object and light in the scene that can influence the final result for each pixel.
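For reference, the border is just a handful of render settings that restrict which pixels get traced; it doesn't touch the scene data at all. A small sketch of those settings in Blender's Python API (the 0.25-0.75 values are arbitrary):

# The border only restricts the rendered pixel region; the full scene
# is still available to every ray.
import bpy

render = bpy.context.scene.render
render.use_border = True           # "Border" checkbox in the Dimensions panel
render.use_crop_to_border = False  # keep the full-size image, black outside the border
render.border_min_x = 0.25         # border coordinates are fractions of the image (0..1)
render.border_max_x = 0.75
render.border_min_y = 0.25
render.border_max_y = 0.75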
This conference video explains ray types and might give you a better grasp of how a ray goes through a scene to get the final image.
I understand that CalculateFrustumPlanes() in Unity3D returns an array of Plane objects, each representing a different frustum plane, but I can't find any documentation that says which element is which.
For example:
[0] = Front
[1] = Back
etc.
I need to calculate whether a point in space (like the centre point of a Bounding volume) is in the camera frustum, for a Quad tree system.
The exact order of the Planes in the returned array is not documented (and I don't know it).
Anyway, I think you can figure it out without much effort: you just need to put the camera in a well-known orientation and check the normal values of each Plane.
I need to calculate whether a point in space (like the centre point of a Bounding volume) is in the camera frustum, for a Quad tree system.
For a Quad tree system, I think the intersection between the frustum and a GameObject's AABB is enough, so you don't even need to know the exact order of the Planes in the array to compute it. You can just use GeometryUtility.TestPlanesAABB.
Order: left, right, bottom, top, near, far.
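If you do need a per-point test rather than TestPlanesAABB, the check itself is just six signed-distance comparisons. Here is a minimal sketch in Python, assuming each plane is stored as an inward-pointing normal plus a distance d, matching Unity's convention where a point p is on the positive side of a Plane when dot(normal, p) + d >= 0:

# A point is inside the frustum when it lies on the positive side of all
# six planes (normals assumed to point into the frustum).

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def point_in_frustum(point, planes):
    # planes: list of (normal, d) tuples in the order
    # left, right, bottom, top, near, far
    for normal, d in planes:
        if dot(normal, point) + d < 0.0:  # behind this plane -> outside
            return False
    return True

# Hypothetical usage with planes converted from CalculateFrustumPlanes():
# inside = point_in_frustum((0.0, 1.0, 5.0), planes)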
I'm really new to 3d modeling, blender, etc.
I created a model with Blender (a room). Now I have exported it (as .obj) so that I can import it into CopperCube (a tool to create 3D scenes).
The problem is, that the walls are only visible from outside. Take a look into the pictures:
Blender:
http://imageshack.us/photo/my-images/341/blenderg.png/
CopperCube:
http://imageshack.us/photo/my-images/829/coppercube.png/
I asked on the CopperCube forum and they said that the polygons are only one-sided (or their normals are flipped). Is there a way to change this? Sorry, but I am a total beginner with this...
Here's the answer of the CopperCube forum:
I don't know Blender, but are there any options you can change for exporting? It looks like your model just has one-sided polygons, or the normals are flipped for some of them.
Make sure you have the normals checkbox checked in the OBJ export options (at the left side, it's off by default).
You will need to model your room to have slim cubes instead of planes whenever they should be visible from both sides.
You can display the normals in Blender in Edit Mode. In the Properties region (N), scroll down to Mesh Display and check the type of normals you want to see and set their length.
To recalculate the normals or flip their direction, go to the Normals section of the Tool Shelf (T).
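The same steps can also be scripted. A small sketch for the Blender 2.6x Python API, where the file path is just an example and use_normals is assumed to correspond to the exporter's Normals checkbox:

# Recalculate the normals so they point outwards, then export to OBJ
# with normals included.
import bpy

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.normals_make_consistent(inside=False)  # "Recalculate Outside"
bpy.ops.object.mode_set(mode='OBJECT')

bpy.ops.export_scene.obj(filepath="room.obj", use_normals=True)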
We want to render a parametrized surface in front of a grid plane and observe the transformation of the grid due to refraction at the surface. In this simple example the surface is a 2D normal distribution, which we view directly from above, and the grid plane is placed below it:
The surface is given as many triangle directives, which we put together into a mesh and used with
object {
  fovea
  scale <1,1,3>
  texture { pigment { color rgbt <0,0,1,0.5> } }
  interior { ior 1.4 }
}
The scale here is not necessary and is used only to amplify the artifacts. What you can see in the image below is that the refraction does not happen smoothly but creates sharp artifacts in the underlying grid pattern.
This image was created with POV-Ray 3.6.1 under Mac OS X 10.5.6 with the settings +Q9, +A and -J. Can anyone give me a hint? Thanks.
This was a stupid mistake. Since the surface looked really smooth in Mathematica, I assumed that it had created a large number of triangle faces. This assumption was wrong. The rendering engine Mathematica uses seems to interpolate the normals given for each vertex, and therefore the surface only looks as if it has a high resolution.
A check of the underlying polygons reveals the truth:
Therefore, what looks like refraction artifacts in the rendered image above is actually correct behavior, because the face-normals of neighboring triangles really change that much.
Increasing the resolution of the surface grid solves the problem.
I have created an experimental fast rectangular object tracking system; it will be used for head tracking and for controlling objects in a 3D engine (Ogre3D).
For now I am able to show the webcam any kind of brightly colored rectangle (text markers are good objects), and the system registers the basic properties of this object (hue/value/lightness and the initial width and height at 0 degrees rotation).
After I have registered the trackable object, I do some simple frame processing to create a grayscale probability map.
So now I have 2 known things:
1) the 4 corners of the last object position (it's always a rectangle, but it may be rotated)
2) a fairly rectangular (but still far from perfect) blob which is the brightest region in the frame. I can get the coordinates of any point of the blob without problems; point detection is stable enough.
I can find a bounding rectangle of the object without problems, but I have trouble detecting the object's corners themselves.
I need the simplest possible (quick & dirty would be great) algorithm that scans the image starting from some known coordinates (a point inside the blob) and detects the new 4 (x, y) coordinates of the blob's corners (not the corners of a bounding box, but the corners of the rectangular blob itself).
A ready-to-use C++ function would be awesome, but somehow Google doesn't like me today :(
I think it would be overkill to use some complicated function from the OpenCV library just to extract 4 points of a single rectangular blob. But if you know a quick and efficient way to do it using OpenCV (it must be real-time and light on the CPU because I'll be running the 3D engine at the same time), then I would be really grateful.
You can apply a Hough transform to the segmented image to detect lines. From the detected lines you can calculate their intersections to find the corner coordinates of the blob.
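To make that concrete, here is a rough sketch of the idea using OpenCV's Python bindings (the same calls exist in the C++ API). It assumes prob_map is your 8-bit grayscale probability map and that the blob's four edges dominate the Hough accumulator; in practice you would also merge near-duplicate lines before intersecting them:

# Threshold the probability map, detect the blob's edges with a standard
# Hough transform, then intersect the detected lines to get corner candidates.
import cv2
import numpy as np

def blob_corners(prob_map):
    # prob_map: 8-bit single-channel image where the tracked blob is brightest
    _, mask = cv2.threshold(prob_map, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    edges = cv2.Canny(mask, 50, 150)

    # Each detected line is (rho, theta) in Hough normal form:
    # x*cos(theta) + y*sin(theta) = rho
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 60)
    if lines is None:
        return []
    lines = [l[0] for l in lines[:8]]   # keep only the strongest few lines

    h, w = prob_map.shape[:2]
    corners = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            rho1, theta1 = lines[i]
            rho2, theta2 = lines[j]
            if abs(theta1 - theta2) < np.pi / 6:
                continue  # nearly parallel: opposite sides of the rectangle, no corner
            a = np.array([[np.cos(theta1), np.sin(theta1)],
                          [np.cos(theta2), np.sin(theta2)]])
            b = np.array([rho1, rho2])
            x, y = np.linalg.solve(a, b)   # intersection of the two lines
            if 0 <= x < w and 0 <= y < h:
                corners.append((float(x), float(y)))
    return corners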