When using a camera with myCameraNode.camera.usesOrthographicProjection = YES;, as far as I can tell you can't zoom the way you normally would with the default controls enabled by myScene.allowsCameraControl = YES;. The only way to zoom that I can find is by changing myCameraNode.scale = (SCNVector3*)...
Is there a way to somehow bind the scroll wheel to this scale parameter, while retaining the standard camera controls for rotation/translation? Or to otherwise 'fix' the default camera controls?
Edit: I think I'm misunderstanding how the camera works. The two-finger zoom gesture does still work with orthographic projection, but it seems to let me zoom out and not zoom in any further. I suspect it may be related to myCameraNode.scale, but if I don't set that parameter the objects in my scene are huge and I only see a tiny fraction of the scene (and the larger the scale, the smaller my objects get).
The built-in camera controls on SCNView are pretty basic, probably best used only for debugging. For a production app, it's better to control the camera yourself, especially if you're using orthographic projection. Set up your own event handling that controls the orthographicScale property of the camera, and you're set.
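For example, on OS X you could subclass SCNView and map the scroll wheel to that property yourself. A minimal sketch (the clamp bounds are arbitrary, and this assumes you manage your own camera node rather than relying on allowsCameraControl):

    @interface MySCNView : SCNView
    @end

    @implementation MySCNView
    - (void)scrollWheel:(NSEvent *)event {
        SCNCamera *camera = self.pointOfView.camera;
        if (camera.usesOrthographicProjection) {
            // A larger orthographicScale shows more of the scene (zooms out).
            CGFloat scale = camera.orthographicScale - event.deltaY * 0.1;
            camera.orthographicScale = MAX(1.0, MIN(100.0, scale)); // clamp the zoom range
        } else {
            [super scrollWheel:event]; // keep the default behavior for perspective cameras
        }
    }
    @end

On iOS the same idea works with a pinch gesture recognizer driving orthographicScale instead of the scroll wheel.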
Followup on comment:
The scale property on SCNNode controls how big a node's content is relative to its parent node — it's a coordinate space transformation, just like rotation and position. It's not really appropriate for implementing camera zoom. In a perspective projection, you use the camera's xFov and/or yFov properties to zoom (and I presume that's what the built-in camera controls do). The API doesn't define what the controls do for an orthographic camera, so anything you observe about its behavior is probably undefined and might be a bug... you might not be able to rely on it staying that way.
If there's more you'd like the built-in camera controls to handle, I'd recommend filing an enhancement request.
Related
I'm making a very simple game in Blender and for some reason shadows don't work. I tried every kind of light source, including objects with emissions. The 'Cast Shadow' and 'Receive Shadow' checkboxes are checked for all the objects. I tried all the methods of displaying objects. Is there an easy way to get shadows in the game?
A way to create shadows is as follows:
- Switch Multitexture to GLSL
Now, you must understand that only certain lights cast shadows. I believe the only two are the Sun and the Spotlight; however, the Spotlight only casts partial shadows.
While in GLSL mode, you must change Solid mode to Textured mode for lighting to work. Then, select the Sun (angled at your preferred angle) and scroll down in the Object tab. Look for Shadows, and make sure the box is checked. Then play it. The shadows should automatically appear in the scene view as well, because GLSL supports realtime shadows.
WAY NUMBER 2:
Another way is to bake a scene or object. This means that you place lighting in render mode, capture all the lighting and textures (with lighting), and bake the result into a texture. This works really well, but doesn't have realtime shadows. Look it up for more information.
Hope this helped!
In the Viewport press N, and in the rendering options switch from Multitexture to GLSL, then switch to Texture mode.
You don't have to tweak the settings. You can try this by creating a new blend file.
Hope this helps
Is there a way I can let SceneKit's camera zoom but not rotate? And how can I limit the maximum and minimum zoom the user can apply to the camera?
It depends on what you mean by zoom – if you mean the same thing as 'zooming' a camera lens, you want to modify the yFov and xFov (field of view) attributes of the SCNCamera object. The camera stays in exactly the same location but changes its field of view, like a zoom lens.
I can't see how the camera could rotate while zooming this way – I'd need to see more context for how you're using the camera. If you don't touch the SCNNode the camera is attached to, it can't possibly rotate.
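For example, assuming you've created your own camera node (cameraNode here is a placeholder) and attached a UIPinchGestureRecognizer to the SCNView, something like this clamps the zoom to a range. Note that yFov defaults to 0 (which means 'automatic'), so set it to an explicit value such as 60 before using this:

    - (void)handlePinch:(UIPinchGestureRecognizer *)gesture {
        SCNCamera *camera = self.cameraNode.camera;
        // Pinching out (scale > 1) narrows the field of view, which zooms in.
        double fov = camera.yFov / gesture.scale;
        camera.yFov = MAX(10.0, MIN(90.0, fov)); // min/max zoom limits
        gesture.scale = 1.0; // reset so each callback applies only the new delta
    }

Since only the camera's field of view changes, the camera node itself never rotates.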
You're talking about user camera movement with allowsCameraControl, right? I don't think that's really meant to be the basis for a sophisticated user camera movement scheme, more of a simple debugging aid. If you really want fine control over how the user can move the camera, you're best served by creating your own camera node and moving it / changing its properties in response to whatever user input you want to handle (gesture recognizers, game controllers, etc).
I suppose you might be able to constrain the automatic user camera by implementing a scene renderer delegate willRenderScene method. You'd have to get the current pointOfView node, check its position and camera parameters, and change them if they're outside whatever bounds you want. But A) I'm not sure this would work, and B) it's probably not a great idea — it's sort of like messing with the internal view hierarchy of a system control class.
I'm developing an iPad application for 2D drawing.
I need a UIView with a frame size of 4000x4000, but if I set a frame that big the application crashes because I get a memory warning.
Right now I'm using a 1600x1000 frame, and the user can add new objects (rectangles) to it. The user can also translate the frame along the x and y axes using a pan gesture in order to see or add new objects.
Do you have any suggestions? How can I tackle this problem?
Thanks
Well, I would suggest what video games have used for a long time - a tiled LOD (level of detail) mechanism, where tiles are rendered at increasing resolution only as you zoom in toward them, and at lower resolution when zoomed out.
If the drawing is based on shapes (rectangles, points, lines, or anything that can be represented by simple vector data), there is no reason to create a UIView the size of the entire drawing. You just redraw the currently visible view from the stored vector data as the user pans across the drawing. There is no persistent bitmapped representation of the drawing.
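A minimal sketch of that idea, assuming the shapes are plain CGRects boxed in NSValue and that a pan gesture updates the offset property and calls setNeedsDisplay:

    @interface CanvasView : UIView
    @property (nonatomic, strong) NSArray *rects;  // NSValue-wrapped CGRects in document space
    @property (nonatomic, assign) CGPoint offset;  // current pan offset into the document
    @end

    @implementation CanvasView
    - (void)drawRect:(CGRect)rect {
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSetFillColorWithColor(ctx, [UIColor blueColor].CGColor);
        for (NSValue *value in self.rects) {
            // Translate each shape from document space into view space.
            CGRect r = CGRectOffset([value CGRectValue], -self.offset.x, -self.offset.y);
            if (CGRectIntersectsRect(r, rect)) { // skip anything not currently visible
                CGContextFillRect(ctx, r);
            }
        }
    }
    @end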
If using bitmap data for drawing (i.e. a Photoshop type of app) then you'll likely need to use a mechanism that caches off-screen data into secondary storage and loads it back onto the screen as the user pans across it. In either case, the UIView only needs to be as big as the physical screen size.
Sorry I don't have any iOS code examples for any of this - take this as a high-level abstraction and work from there.
Sounds like you want to be using UIScrollView.
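If the canvas has to stay bitmap-like, one common pairing is a UIScrollView whose content view is backed by a CATiledLayer, so the full 4000x4000 area is never held as one bitmap. A rough sketch (TiledCanvasView is a made-up name):

    #import <QuartzCore/QuartzCore.h>

    @interface TiledCanvasView : UIView
    @end

    @implementation TiledCanvasView
    + (Class)layerClass {
        return [CATiledLayer class]; // content is drawn on demand, tile by tile
    }
    - (void)drawRect:(CGRect)rect {
        // Draw only the content that intersects this tile's rect.
    }
    @end

    // Setup, e.g. in a view controller:
    UIScrollView *scrollView = [[UIScrollView alloc] initWithFrame:self.view.bounds];
    TiledCanvasView *canvas = [[TiledCanvasView alloc]
        initWithFrame:CGRectMake(0, 0, 4000, 4000)];
    scrollView.contentSize = canvas.bounds.size;
    [scrollView addSubview:canvas];
    [self.view addSubview:scrollView];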
I have been looking for a solution on the web for a long time. Most tutorials cover fairly simply adding a shadow to a UIView. I also noticed that if we add a shadow to a UIImageView, the shadow shape can perfectly fit the shape of the content image if the image itself has an alpha channel. Say, for example, the image is an animal with a transparent background: the shadow shape is also the same as that animal (not a rectangular shadow matching the UIImageView frame).
But this is not enough. What I need is to change the shadow so it has some rotation angle and a compressed (squeezed or shifted) effect, so that it looks like the sunlight comes from a certain spot.
To demonstrate what I need, I've uploaded 2 images below, captured from the Google Maps app created by Apple. You can imagine the annotation pin is an image with a pin shape, so the shadow is also "pin shaped", but it is not simply "offset" by a CGSize: you can see the top of the shadow is shifted right by about 35 degrees and the height is slightly squeezed.
When we tap and hold the pin, the shadow is also animated away from the pin, so I believe such a shadow can be created programmatically.
The best shadow tutorial I have found so far is http://nachbaur.com/blog/fun-shadow-effects-using-custom-calayer-shadowpaths but unfortunately it cannot produce this effect.
If anyone knows the answer or knows better search terms, please let me know. Thank you.
(Please note that the shape of the image is dynamic in the App, so using any tool like Photoshop to pre-render the shadow is not an option.)
In order to create dynamic effects like this, you have to use Core Graphics. It's incredibly powerful once you know how to use it. Basically you need to set a skew transform on the context, set up a shadow and draw the image. You will probably have to use transparency layers as well.
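A rough sketch of the idea follows. It takes a slightly different route for the shadow itself: rather than CGContextSetShadow, it skews the context and fills the image's alpha mask with translucent black, which yields the same kind of squeezed silhouette (the image variable, shear factors, and alpha are all placeholders):

    CGSize size = image.size;
    UIGraphicsBeginImageContextWithOptions(
        CGSizeMake(size.width + size.height, size.height), NO, 0.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // Pass 1: the shadow. Skew and squeeze the context, then fill the image's
    // alpha channel with translucent black to get the "pin-shaped" silhouette.
    CGContextSaveGState(ctx);
    CGContextConcatCTM(ctx, CGAffineTransformMake(1, 0, 0.7, 0.5, 0, size.height * 0.5));
    CGContextTranslateCTM(ctx, 0, size.height);
    CGContextScaleCTM(ctx, 1, -1); // flip into Core Graphics coordinates for the mask
    CGContextClipToMask(ctx, CGRectMake(0, 0, size.width, size.height), image.CGImage);
    CGContextSetFillColorWithColor(ctx, [UIColor colorWithWhite:0 alpha:0.35].CGColor);
    CGContextFillRect(ctx, CGRectMake(0, 0, size.width, size.height));
    CGContextRestoreGState(ctx);

    // Pass 2: the image itself, un-skewed, drawn on top of the shadow.
    [image drawAtPoint:CGPointZero];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();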
It doesn't sound like you can use CALayer shadows, since that is meant to solve a specific use-case. The approach Apple takes with the pin marks on the map is to have two separate images that are created ahead of time (e.g. in Photoshop) and they position them within the map relative to a reference point.
If you really do need to do this at run-time, it should still be possible using either Core Graphics or Core Image. To get a blurred shadow appearance, you can use one of the blur CIFilters (the kCICategoryBlur category, e.g. CIGaussianBlur). You can then convert the image to grayscale. And to get that compressed look you just need to resize and skew the image.
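A rough sketch of the blur step, assuming silhouette is a pre-rendered CGImageRef of the pin shape in black:

    CIImage *input = [CIImage imageWithCGImage:silhouette];
    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setValue:input forKey:kCIInputImageKey];
    [blur setValue:@4.0 forKey:kCIInputRadiusKey];

    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef shadowImage = [context createCGImage:blur.outputImage
                                           fromRect:input.extent];
    // shadowImage can now be squeezed and skewed with a CGAffineTransform when
    // drawn, or set as the contents of a sublayer behind the pin. Remember to
    // CGImageRelease(shadowImage) when you're done with it.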
Once you have two separate images, you can either take the CGImageRef for the shadow image and set it as the contents of another sublayer, or you can add it as a separate view.
If you know what all the shapes are, you could just render a shadow image in Photoshop or something.
I would like to create a custom NSView that takes a layered approach to painting. I imagine the majority of the layers would be the same width and height as the backing view.
Is it appropriate to use the Core Animation classes like CALayer for this task, even though I don't expect to need much animation? Is there a more appropriate approach?
To clarify, the view is not meant to be like a canvas in a Photoshop-like application. It's more of a data display that should allow for user interaction (selecting, moving, scrolling, etc.).
If it's display and layout you're after, I'd say that a CALayer-based architecture is a good choice. For the open source Core Plot framework, we construct all of our graphs and plot elements out of CALayers, and organize them in a regular hierarchy. CALayers are lightweight and use almost identical APIs between Mac and iPhone. They can even be made to respond to touch or mouse events.
For another example of a CALayer-based user interface, my iPhone application's entire equation entry interface is composed of CALayers, including the menu that slides up from below. Performance is slightly better than that of my previous UIView-based implementation, but the same code also works within my preliminary desktop version of the application.
For a drawing program, I would imagine it would be important to hold a buffer of the bitmap data. The only issue with using a CALayer is that the contents property is a CGImageRef. To turn that back into a graphics context for doing further drawing can be a bit of a pain. You'd have to initialize a new context, draw the bitmap data into it, then do whatever drawing operations you wanted to do, and finally turn that back into a CGImageRef. You probably wouldn't be able to avoid doing a number of pretty large memory allocations, which is virtually guaranteed to slow your program way down.
I would consider holding an off-screen buffer for each layer. Take a look at the Quartz CGLayerRef object. I think it probably does what you want to do: it's an off-screen buffer that holds things you might want to draw repeatedly. You can also quickly get a CGContextRef whenever you need it so you can do additional drawing. And you can always use that CGContextRef with NSGraphicsContext if you want to use Cocoa drawing methods.
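A minimal sketch of that pattern inside an NSView subclass (_cachedLayer is a hypothetical CGLayerRef ivar, and the drawing is just a stand-in):

    - (void)drawRect:(NSRect)dirtyRect {
        CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];

        if (_cachedLayer == NULL) {
            // Create the off-screen buffer once, sized to match the view.
            _cachedLayer = CGLayerCreateWithContext(ctx, NSSizeToCGSize(self.bounds.size), NULL);
            CGContextRef layerCtx = CGLayerGetContext(_cachedLayer);
            // Do the expensive drawing a single time, into the buffer.
            CGContextSetRGBFillColor(layerCtx, 0.2, 0.4, 0.8, 1.0);
            CGContextFillEllipseInRect(layerCtx, CGRectMake(10, 10, 100, 100));
        }
        // Stamping the cached buffer on each redraw is cheap.
        CGContextDrawLayerAtPoint(ctx, CGPointZero, _cachedLayer);
    }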