How can I get shadows in the Blender game engine?

I'm making a very simple game in Blender and for some reason shadows don't work. I tried every kind of light source, including objects with emission. The 'Cast Shadow' and 'Receive Shadow' check boxes are checked for all the objects. I tried all the display methods for the objects. Is there an easy way to get shadows in the game?

A way to create shadows is as follows:
- Switch Multitexture to GLSL
Now, you must understand that only certain lights cast shadows. I believe the only two are the Sun and the Spotlight; however, the Spotlight only casts partial shadows.
While in GLSL mode, you must change Solid mode to Textured mode for lighting to work. Then select the Sun (angled at your preferred angle) and scroll down in the object tab. Look for Shadow and make sure the box is checked. Then play it. The shadows should automatically appear in the scene view as well, because GLSL supports real-time shadows.
WAY NUMBER 2:
Another way is to bake a scene or object. This means that you place lighting in render mode, capture all the lighting and textures (with lighting), and bake the result into a texture. This works really well, but doesn't give real-time shadows. Look it up for more information.
Hope this helped!

In the viewport, press N and in the rendering options switch from Multitexture to GLSL, then switch to Texture mode.
You don't have to tweak the settings. You can try this by creating a new blend file.
Hope this helps.

Related

Orthographic camera and zoom controls in SceneKit

When using a camera with myCameraNode.camera.usesOrthographicProjection = YES;, as far as I can tell you can't zoom as you normally would using the default controls given by myScene.allowsCameraControl = YES; – the only way to zoom that I can find is by changing myCameraNode.scale = (SCNVector3*)...
Is there a way to somehow bind the scroll wheel to this scale parameter, while retaining the standard camera controls for rotation/translation? Or to otherwise 'fix' the default camera controls?
Edit: I think I'm misunderstanding how the camera works. The two-finger zoom gesture does still work with orthographic projection, but it seems like it only lets me zoom out, not zoom in any further. I suspect it may be related to myCameraNode.scale, but if I don't set that parameter the objects in my scene are huge and I only see a tiny fraction of it (and the larger the scale, the smaller my objects get).
The built-in camera controls on SCNView are pretty basic, probably best used only for debugging. For a production app, it's better to control the camera yourself, especially if you're using orthographic projection. Set up your own event handling that controls the orthographicScale property of the camera, and you're set.
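For example, here is a minimal sketch of that idea, assuming a macOS app and a custom SCNView subclass (the class name, the 0.1 zoom speed, and the clamp value are all made up):

@interface MyOrthoView : SCNView
@end

@implementation MyOrthoView
- (void)scrollWheel:(NSEvent *)event
{
    // The camera on the node the view is currently rendering from.
    SCNCamera *camera = self.pointOfView.camera;
    // Smaller orthographicScale means zoomed in, larger means zoomed out.
    double scale = camera.orthographicScale - event.scrollingDeltaY * 0.1;
    camera.orthographicScale = MAX(scale, 0.1);   // clamp so the view can't collapse
}
@end

You could keep allowsCameraControl on for rotation and only intercept scrollWheel:, but the built-in controls may also react to the event, so treat this as a starting point rather than a finished solution.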
Followup on comment:
The scale property on SCNNode controls how big a node's content is relative to its parent node — it's a coordinate space transformation, just like rotation and position. It's not really appropriate for implementing camera zoom. In a perspective projection, you use the camera's xFov and/or yFov properties to zoom (and I presume that's what the built-in camera controls do). The API doesn't define what the controls do for an orthographic camera, so anything you observe about its behavior is probably undefined and might be a bug... you might not be able to rely on it staying that way.
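As a rough illustration of that difference (zoomFactor is a made-up variable), a perspective zoom would look something like:

// Narrowing the field of view zooms a perspective camera in; 0 means "use the default".
myCameraNode.camera.yFov = 60.0 / zoomFactor;   // zoomFactor > 1 zooms in, < 1 zooms out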
If there's more you'd like the built-in camera controls to handle, I'd recommend filing an enhancement request.

Three.JS, any idea to create sliding for Three.js?

I am thinking of developing some tools for my prototype system. I saw a function at the following link and it seems very useful for me as well. But I don't know how to implement it. Any ideas?
http://www.arcgis.com/home/item.html?id=9c0e319bfaff4d33a0fe2da97c2c3fd7
Thanks!!!
This may not be the most efficient solution, but you could create two viewports, both rendering the scene from the same camera. Before you render the viewport for the right-side half, set the visible flag of the walls (and other Mesh objects as necessary) to false, and after rendering the right-side scene reset those flags to true. This wouldn't implement the slider itself, however.

OpenglES - Transparent texture blocking objects behind

I have some quads that have a texture with transparency, and some objects behind these quads. However, the objects behind them don't seem to be shown. I know it's something about GL_BLEND, but I can't manage to make the objects behind show through.
I've tried with:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
but it's still not working. What I basically have is:
// I paint the object
draw_ac3d_file([actualObject getCurrentObject3d]);
// I paint the quad
paintQuadWithAlphaTexture();
There are two common scenarios that create this situation, and it is difficult to tell which one your program is doing, if either at all.
Draw Order
First, make sure you are drawing your objects in the correct order. You must draw from back-to-front or else the models will not be blended properly.
http://www.opengl.org/wiki/Transparency_Sorting
Note: as Arne Bergene Fossaa pointed out, front-to-back is the proper way to render objects that are not transparent from a performance standpoint. Because of this, most renderers first draw all the models that have no transparency front-to-back, and then go back and render all models that have transparency back-to-front. This is covered in most 3D graphics texts out there.
[Figures: back-to-front vs. front-to-back draw order; image credit: Geoff Leach, RMIT University]
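In code, the usual two-pass pattern looks roughly like this (drawOpaqueObjects and drawTransparentQuadsBackToFront stand in for your own drawing calls, e.g. draw_ac3d_file and paintQuadWithAlphaTexture):

// Pass 1: opaque geometry, depth writes on (front-to-back order is fastest).
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
drawOpaqueObjects();

// Pass 2: transparent quads sorted back-to-front, blending on, depth writes off
// so the quads don't block each other while still being occluded by opaque geometry.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);
drawTransparentQuadsBackToFront();
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);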
Lighting
The second most common issue is improper use of lighting. Normally in this case if you were using the fixed-function pipeline, people would advise you to simply call glDisable(GL_LIGHTING);
Now this should work (if it is the cause at all) but what if you want lighting? Then you would either have to employ custom shaders or set up proper material settings for the models.
A discussion of using the material properties can be found at http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=285889
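A rough sketch of both options with the ES 1.x fixed-function pipeline:

// Option 1: turn lighting off entirely.
glDisable(GL_LIGHTING);

// Option 2: keep lighting, but give the geometry a simple material so the
// texture isn't darkened or washed out (the values here are just an example).
glEnable(GL_LIGHTING);
GLfloat white[] = { 1.0f, 1.0f, 1.0f, 1.0f };
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE, white);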

How should I design displaying a dynamic map? (Coordinates + Lines)

So I want to have a view (NSView, NSOpenGLView, something CG related?) which basically displays a map. Such as:
http://dump.tanaris4.com/map.png
Obviously that looks horrible, but I did it using an NSView, and it draws SO slow. Clearly not designed for this.
I just need to allow users to click on the individual (x,y) coordinates to make changes, and zoom into a certain area (to see it better).
Should I go the OpenGL route? And if so - any suggestions as to how to get started? (I was able to follow the guide to draw a triangle, so that's good).
I did find this post on zooming in an NSView: How to implement zoom/scale in a Cocoa AppKit-application
My concern is if I'm drawing over 6000 coordinates and the lines connecting them, this isn't efficient at all.
I don't think using OpenGL would do any good here. The problem does not seem to be the actual painting, but rather the rendering strategy. You would need a scene graph of some kind to dynamically handle level of detail and culling.
Qt has all this packaged in a nice class, QGraphicsScene (see http://doc.qt.nokia.com/latest/qgraphicsscene.html for reference, and http://doc.qt.nokia.com/main-snapshot/demos-chip.html for an example).
Some basic concepts you should consider using:
http://en.wikipedia.org/wiki/Scene_graph
http://en.wikipedia.org/wiki/Quadtree
http://en.wikipedia.org/wiki/Level_of_detail
Try using Core Graphics for this; there is really a lot that can be done with it. Watch the video Practical Drawing for iOS Developers from WWDC 2011, which should give an overview of what can be done with CG.
I believe even Core Graphics will suffice for what you want to achieve, and it should work in a UIView if you do all of your drawing in the view's drawRect: method (you must override this method). Please see the UIView Class Reference. I have a mobile application that logs points on a MapKit map view, kind of like Nike+, and it certainly works well for massive amounts of points/line segments. There is no reason why this simple approach cannot work for you as well.
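As a minimal sketch (mapPoints is a made-up property holding NSValue-wrapped CGPoints; adapt it to however you store your coordinates), the drawing could look roughly like this:

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(ctx, 1.0);
    CGContextSetStrokeColorWithColor(ctx, [UIColor blackColor].CGColor);

    // Connect consecutive coordinates with line segments.
    // For thousands of segments, you could skip any pair whose bounding
    // box doesn't intersect rect to avoid needless work.
    NSArray *points = self.mapPoints;
    for (NSUInteger i = 1; i < points.count; i++) {
        CGPoint a = [points[i - 1] CGPointValue];
        CGPoint b = [points[i] CGPointValue];
        CGContextMoveToPoint(ctx, a.x, a.y);
        CGContextAddLineToPoint(ctx, b.x, b.y);
    }
    CGContextStrokePath(ctx);
}

Applying CGContextScaleCTM to the context before stroking is one simple way to handle zooming into an area.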

UIViews laid out in Interface Builder changes position during execution (UIViewContentMode)

I have some graphics that is already scaled and cut correctly for my project.
I chose to build the UI in IB and positioned everything correctly; under Size & Position I left it at "Frame" (instead of "Layout").
At runtime my graphics are moved and stretched according to which UIViewContentMode I set.
If I were doing this completely in code and set a frame with no UIViewContentMode, Cocoa would respect this and leave the graphics alone. However, IB does things a bit differently.
I think my problem is that I don't precisely understand what the different UIViewContentModes do, and I can't find the correct one to "turn off" the manipulation of the graphics at runtime.
Can someone give me a little help on this one? :)
Thanks in advance.
If you don't set the value, the default is UIViewContentModeScaleToFill. If you do it in code and then call setNeedsDisplay, does it "scale to fill"? I think it depends on how you "do it in code" and when the content mode is enforced - I assume IB is doing some extra stuff in the init to apply the content mode that you are not doing in code.
Anyway, if you don't want it to "scale" you can pick any of the UIViewContentMode operations that don't have the word "Scale" in the name. Review the UIViewContentMode enum for details on what each one does.
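For example (myImageView stands in for whatever view holds your graphics):

// Keep the content at its natural size and position instead of the default scale-to-fill.
myImageView.contentMode = UIViewContentModeCenter;   // or UIViewContentModeTopLeft, etc.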