Does anyone know why graphics objects such as polygons, points, picture markers, etc. are re-rendered
from scratch when zooming or panning an Esri map?
For example, in the following link the brackets disappear and render again from the start on each zoom change or map move: example.
Thanks in advance,
Gal
Yes, the graphics redraw from scratch when panning and zooming because they sit in a graphics layer that has been added to the map control; a graphics layer is drawn client-side, so its graphics are re-rendered on every extent change.
// Create a client-side graphics layer and add it to the map;
// its contents are redrawn whenever the extent changes (pan/zoom).
GraLay.SelectedFeature = new ags.GraphicsLayer();
map.addLayer(GraLay.SelectedFeature);
Related
I would like to know how Blender's border render works internally. How can Blender compute lighting if it has no information about the lights in the tiles it won't render? I haven't found any reference (source code excluded) on how this feature of Blender works. Can somebody explain it (or point me to some reference)?
The render border setting only alters what part of the image is rendered; it does not alter what data is sent to the render engine to generate the image.
You can test this by placing an object with a reflective surface in front of the camera and another object behind the camera; the object behind the camera will show in the reflection. The border setting doesn't change the reflection in the object, it only changes what part of the image is rendered.
Rendering an image starts at a pixel that will be visible in the final image and sends a "ray" into the scene to determine what colour that pixel will be. Each ray bounces around the scene from object to object to light source, based on the render settings, to calculate the final result. While the render border reduces the set of pixels used as starting points for rays, it does not reduce the objects or lights in the scene that each ray may come into contact with. Every ray travelling through the scene sees every visible object and light that can influence the final result for its pixel.
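The idea can be sketched with a toy ray caster (this is not Blender's actual code; the scene setup and names are invented for illustration). Notice that the border only trims the pixel loops, while every ray still tests the full scene list:

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return the ray parameter t of the nearest hit, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c          # direction is unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def render(width, height, scene, border=None):
    """border = (xmin, xmax, ymin, ymax) in pixels; None renders everything."""
    xmin, xmax, ymin, ymax = border or (0, width, 0, height)
    image = {}
    for y in range(ymin, ymax):           # the border trims these loops only
        for x in range(xmin, xmax):
            # Camera at the origin looking down -z; one ray per pixel.
            dx = (x + 0.5) / width - 0.5
            dy = (y + 0.5) / height - 0.5
            norm = math.sqrt(dx * dx + dy * dy + 1)
            ray = (dx / norm, dy / norm, -1 / norm)
            # Every ray is tested against *all* objects in the scene,
            # whether or not they lie inside the border.
            hits = [t for c, r in scene
                    if (t := hit_sphere((0, 0, 0), ray, c, r)) is not None]
            image[(x, y)] = min(hits) if hits else None
    return image

scene = [((0, 0, -5), 1.0), ((10, 0, -5), 1.0)]   # second sphere is far off-border
full = render(20, 20, scene)
bordered = render(20, 20, scene, border=(5, 15, 5, 15))
```

The bordered image contains fewer pixels, but every pixel it does contain is computed against the same scene data as the full render, which is why reflections of out-of-border objects still appear.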
This conference video explains ray types and might give you a better grasp of how a ray goes through a scene to get the final image.
When painting textures in Blender, I would like to add existing images to the texture image, but Blender does not seem to provide such a function.
I tried external editing in Photoshop, but the UV-unwrapped vertices are lost, so there are no reference points available.
Thanks!
This question would be better suited to blender.stackexchange, ideally with a little more info on what steps you are trying.
In the image editor, after you unwrap, you can use UVs->Export UV Layout to save the UVs for use in an external image editor.
When using Texture painting mode you also have options to use an image as a brush texture.
While looking through the tutorials I've seen the Ogre::Camera::getCameraToViewportRay method being used. I was trying understand what it does.
First, I imagine a viewport placed somewhere in the 3D scene, say on the screen of a TV object. I can easily imagine how to transform a 2D coordinate on that viewport to a 3D coordinate in the scene, and then make a ray from the camera position through that point on the viewport.
But I can't understand how it's done when the viewport is on the RenderWindow (on my monitor). I mean, where is the render window in the scene? Where is the point on the render window's viewport in the scene? How is the point on the render window's viewport transformed into a 3D point of the scene?
Thanks for answer!
The viewport shows what you see through a camera, and the viewport lies in front of the camera.
There is a stackoverflow post with information about the relation of camera and viewport and a nice visual illustration: https://stackoverflow.com/a/7125486/2168872
The camera to viewport ray is a worldspace ray, starting from your camera and intersecting the viewport at a certain point, e.g. where your mouse cursor points to.
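A simplified sketch of what such a helper does (not Ogre's actual implementation, which works with the full inverse view-projection matrix): map the viewport coordinate to normalized device coordinates, then build a direction on the view plane one unit in front of the camera. This assumes an unrotated camera at `position` looking down -z, with vertical field of view `fovy` and aspect ratio `aspect`; all names here are invented:

```python
import math

def camera_to_viewport_ray(position, fovy, aspect, screen_x, screen_y):
    """Return (origin, unit direction) of the ray through a viewport point.

    screen_x, screen_y are in [0, 1], with (0, 0) at the top-left corner.
    """
    # Map [0, 1] viewport coords to normalized device coords in [-1, 1];
    # viewport y grows downward, NDC y grows upward, hence the flip.
    ndc_x = 2.0 * screen_x - 1.0
    ndc_y = 1.0 - 2.0 * screen_y
    # Half-size of the view plane at distance 1 in front of the camera.
    half_h = math.tan(fovy / 2.0)
    half_w = half_h * aspect
    direction = (ndc_x * half_w, ndc_y * half_h, -1.0)
    norm = math.sqrt(sum(c * c for c in direction))
    return position, tuple(c / norm for c in direction)

# A ray through the viewport centre points straight down the view axis.
origin, direction = camera_to_viewport_ray(
    (0, 0, 0), math.radians(60), 16 / 9, 0.5, 0.5)
```

So the render window never needs a position "in the scene": the viewport point is treated as a point on the camera's near/view plane, and the ray is expressed in world space from there.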
I want to create a bubbles-like game in Android and I'm not sure how to draw the graphics.
Should I use a canvas? Should every bubble be a bitmap, or maybe an ImageView?
Another thing: I want to be able to rotate/scale the bubbles.
I've tried with bitmaps and a canvas but wasn't able to rotate the bubbles.
An ImageView for every bubble looks like a mess to me.
Your help is appreciated.
Thanks
If you want to make a game, I would suggest using a Canvas, and put the Canvas over most, or all, of your layout. Creating anything but the most basic game using the regular UI structures would be a nightmare.
It sounds like you've gotten to the point where you can load the bubble images and draw them to the canvas, which is good. As for rotating the bubbles, use this:
Matrix rotator = new Matrix();
// Rotates about the bitmap's top-left corner by default; pass a pivot
// point (postRotate(90, px, py)) to rotate about the centre instead.
rotator.postRotate(90);
canvas.drawBitmap(bitmap, rotator, paint);
That was from an answer to this SO question, which was more specifically about rotating bitmaps on a Canvas.
For more information on making games in Android, this book was pretty helpful for me.
Hope that helps!
I'm trying to write a fairly simple animation using Core Animation to simulate a book cover being opened. The book cover is an image, and I'm applying the animations to its layer object.
As well as animating a rotation around the y axis (with the anchorPoint property set to the left of the screen), I need to scale the right-hand edge of the image up so it appears to "get closer" to the user and creates the illusion of depth. Flipboard, for example, does this scaling really well.
I can't find any way of scaling an image like this, so only one edge is scaled and the image ends up nonrectangular.
All help appreciated with this one!
CoreAnimation, by default, "flattens" its 3D hierarchy into a 2D world at z=0. This causes perspective and the like to not work properly. You need to host your layer in a CATransformLayer, which will render its sublayers as a true 3D layer hierarchy.