CAShapeLayer - bounds - core-graphics

Do you have to set the bounds of a CAShapeLayer?
I'm creating a shape layer and assigning it a path via a UIBezierPath; the shape is a simple circle the size of the view.
I'm not setting any position or bounds on the layer. Is that wrong?
class View: UIView {
    ...
    var backgroundLayer: CAShapeLayer!

    func setup() {
        // call from init
        backgroundLayer = CAShapeLayer()
        backgroundLayer.strokeColor = UIColor.redColor().CGColor // strokeColor expects a CGColor
        backgroundLayer.lineWidth = 3
        backgroundLayer.fillColor = UIColor.clearColor().CGColor
        layer.addSublayer(backgroundLayer)
        ...
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        backgroundLayer?.path = circlePath(100)
        ...
    }

    func circlePath(progress: Int) -> CGPath {
        let path = UIBezierPath()
        let inverseProgress = 1 - CGFloat(progress) / 100
        let endAngleOffset = CGFloat(2 * M_PI) * inverseProgress
        // localCenter and radius are defined elsewhere in the view (elided)
        path.addArcWithCenter(localCenter, radius: radius, startAngle: CGFloat(-M_PI), endAngle: CGFloat(M_PI) - endAngleOffset, clockwise: true)
        return path.CGPath
    }
    ...
}

As you've already seen, the layer will display just fine even without setting the bounds. So you don't "have to" set it, but not having a bounds (or having a bounds that is different from the path's bounding box) can sometimes be confusing when doing layout or transforms.
When it comes to layout, positioning, and transformation, there are a few different coordinate systems to consider:
The layer is positioned relative to its parent's coordinate system.
The path is positioned relative to the shape layer's coordinate system.
Each point in the path is relative to the origin (0,0) of the path.
The shape layer is transformed relative to its center, and the position of the shape layer also corresponds to the center of its bounds. This means that if the shape layer has a zero-size (0×0) bounds, then any transformation (e.g. rotation) happens around the origin of the path (0,0), as opposed to the center of the path. It also means that when setting the position of the shape layer, you are conceptually positioning the origin of the path, as opposed to the center of the path. However, if the origin of the path happens to be the center of the path's bounding box (for example a circle centered around (0,0)), then this isn't really an issue.
So, to recap: you don't have to set a bounds, but sometimes (depending on the path) positioning or transformation might be clearer when it's set.
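As a rough sketch (not the original code; the path, sizes, and rotation angle here are made up), this is what giving the shape layer a bounds equal to its path's bounding box looks like, so that position and transform refer to the center of the drawn shape rather than the path's origin:
// Inside a UIView subclass, e.g. called from layoutSubviews()
let circle = UIBezierPath(ovalInRect: CGRect(x: 0, y: 0, width: 100, height: 100))
let shape = CAShapeLayer()
shape.path = circle.CGPath
shape.bounds = CGPathGetBoundingBox(circle.CGPath)          // (0, 0, 100, 100)
shape.position = CGPoint(x: bounds.midX, y: bounds.midY)    // centers the shape in the view
shape.transform = CATransform3DMakeRotation(CGFloat(M_PI_4), 0, 0, 1) // pivots around the circle's center
layer.addSublayer(shape)
Without the bounds line, the same rotation would pivot around the path's (0,0) origin, which for this particular path is the circle's top-left corner rather than its center.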

Related

Jetpack Compose Canvas not accurate

I am having an issue with the Jetpack Compose Canvas when using @Preview: I would like the content to fill the canvas in the preview, but it currently doesn't.
I am setting a 375dp width and then a Rect with 375f. I understand that dp is different from a plain float, but how can I set the width so the green rect fills the canvas, without using canvas.width as the rect width, for example?
When drawing on Canvas you're operating with pixels, not Dp. onDraw is called on DrawScope, which has a size property that is already converted to pixels:
Canvas(Modifier) {
    drawRect(color = Color.Green, size = size)
}
Also, DrawScope inherits from Density, so you can convert any Dp value to pixels with 375.dp.toPx().

How does the viewport work in libgdx and how to set it up correctly?

I am learning libgdx and I got confused by the viewport and how objects are arranged on the screen. Let's assume my 2D world is 2x2 units wide and high. Now I create a camera whose viewport is 1x1, so I should see 25% of my world. Usually displays are not square, so I would expect libgdx to squish and stretch this square to fit the display.
For a side scroller you would set the viewport height the same as the world height and adjust the viewport width according to the aspect ratio. Independent of the aspect ratio of your display you always see the full height of the world but different expansions on the x-axis. Somebody with a wider than high display could look further on the x-axis than somebody with a square shaped display. But proportions will be maintained and there is no distortion. So far I thought I mastered how the viewport logic works.
I am working with the book "Learning LibGDX Game Development", in which you develop the game "Canyon Bunny". The source code can be found here:
Canyon Bunny - GitHub
In the WorldRenderer class you find the initialization of the camera:
private void init() {
    batch = new SpriteBatch();
    camera = new OrthographicCamera(Constants.VIEWPORT_WIDTH, Constants.VIEWPORT_HEIGHT);
    camera.position.set(0, 0, 0);
    camera.update();
}
The viewport constants are kept in a separate Constants class:
public class Constants {
    // Visible game world is 5 meters wide
    public static final float VIEWPORT_WIDTH = 5.0f;
    // Visible game world is 5 meters tall
    public static final float VIEWPORT_HEIGHT = 5.0f;
}
As you can see, the viewport is 5x5. But the game objects have the right proportions on my phone (16:9), and even on desktop, when you change the window size, the game maintains the correct proportions. I don't understand why. I would expect the game to paint a square-shaped cutout of the world onto a rectangular display, which would lead to distortion. Why is that not the case? And why don't you need to adapt the width or height of the viewport to the aspect ratio?
The line:
cameraGUI.setToOrtho(true);
overrides the values you gave when you called:
cameraGUI = new OrthographicCamera(Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);
Here's the LibGDX code that shows why/how the viewport sizes you set were ignored:
/** Sets this camera to an orthographic projection using a viewport fitting the screen resolution, centered at
 * (Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight()/2), with the y-axis pointing up or down.
 * @param yDown whether y should be pointing down */
public void setToOrtho (boolean yDown) {
    setToOrtho(yDown, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
}

/** Sets this camera to an orthographic projection, centered at (viewportWidth/2, viewportHeight/2), with the y-axis pointing up
 * or down.
 * @param yDown whether y should be pointing down.
 * @param viewportWidth
 * @param viewportHeight */
public void setToOrtho (boolean yDown, float viewportWidth, float viewportHeight) {
    if (yDown) {
        up.set(0, -1, 0);
        direction.set(0, 0, 1);
    } else {
        up.set(0, 1, 0);
        direction.set(0, 0, -1);
    }
    position.set(zoom * viewportWidth / 2.0f, zoom * viewportHeight / 2.0f, 0);
    this.viewportWidth = viewportWidth;
    this.viewportHeight = viewportHeight;
    update();
}
So you would need to do this instead:
cameraGUI.setToOrtho(true, Constants.VIEWPORT_GUI_WIDTH, Constants.VIEWPORT_GUI_HEIGHT);
Also, don't forget to call update() right after wherever you change the position or viewport dimensions (or other properties) of your camera.
I found the reason. If you take a look at the WorldRenderer class, there is a method resize(). In this method the viewport is adapted to the aspect ratio. I am just wondering, because until now I thought the resize method was only called when resizing the window. Apparently it's also called at startup. Can anybody clarify?

ArcGIS API: Drawing circle fixing center

I am using the ArcGIS API v4.8 and the drawing tools to draw a circle on my map.
One issue I notice is that when I draw a circle, its center moves as I move my mouse to resize the circle, rather than staying fixed at the point where the first mouse click started:
How do I fix the center regardless of how I change the radius of the circle? What is missing in my code?
const options = {view, layer: tempGraphicsLayer, pointSymbol, polylineSymbol, polygonSymbol}
let sketchViewModel = new SketchViewModel(options)
let drawCircleButton = document.getElementById('circleButton')
drawCircleButton.onclick = function () {
    clear()
    isDrawLine = false
    sketchViewModel.create('polygon', {mode: 'click'})
    sketchViewModel.create('circle')
}
EDIT:
I have found a similar sample: choose the Draw Circle tool and start drawing a circle on the map, and you will notice that the center of the circle moves as you move your mouse. I want the center to stay fixed instead.
The problem with the center moving along with the mouse is that the drawn circle is not accurate: I want to start from the center I choose, and while the circle can expand outward, the center should not move.
That is because the circle, in the given example, is being drawn inside a square object. Basically, your start and end points represent corners, not the center point and outer edge of the circle. So every time you expand the circle object, it expands from one corner while the rest drags along with your mouse.
Visual example:
There are workarounds for this of course. I've made a small sample code of one of the possible ways to draw a circle from a fixed center point.
https://jsfiddle.net/wLd46g8k/9/
Basically I used an ArcGIS JS API 4.x constructor called Circle, where you pass a starting point and a radius. In my example I've calculated the radius from these two points.
function drawCircle() { // draws the circle
    graphicsLayer.graphics.removeAll();
    var graphic = new Graphic({
        geometry: new Circle({ // circle constructor
            center: startPoint, // pass the pointer-down event X Y as a starting point
            radius: Math.floor(Math.sqrt(Math.pow(startPoint.x - endPoint.x, 2) + Math.pow(startPoint.y - endPoint.y, 2)))
        }), // calculates the endpoint's distance from the startpoint and passes it as the radius
        symbol: { // circle design
            type: "simple-fill",
            color: "orange",
            style: "solid",
            outline: {
                color: "darkorange",
                width: 4
            }
        }
    });
    graphicsLayer.graphics.add(graphic); // adds the circle
};

Flash CC CreateJS hitTest works without hit

The rect should become alpha = 0.1 once the circle touches it, but the if statement is not working: the rect becomes 0.1 opacity without any hit.
var circle = new lib.mycircle();
stage.addChild(circle);
var rect = new lib.myrect();
stage.addChild(rect);
rect.x = 200;
rect.y = 300;
circle.addEventListener('mousedown', downF);
function downF(e) {
    stage.addEventListener('stagemousemove', moveF);
    stage.addEventListener('stagemouseup', upF);
}
function upF(e) {
    stage.removeAllEventListeners();
}
function moveF(e) {
    circle.x = stage.mouseX;
    circle.y = stage.mouseY;
}
if (circle.hitTest(rect)) {
    rect.alpha = 0.1;
}
stage.update();
The way you have used hitTest is incorrect. The hitTest method does not check object to object. It takes an x and y coordinate, and determines if that point in its own coordinate system has a filled pixel.
I modified your example to make it more correct, though it doesn't actually do what you are expecting:
circle.addEventListener('pressmove', moveF);
function moveF(e) {
    circle.x = stage.mouseX;
    circle.y = stage.mouseY;
    if (rect.hitTest(circle.x, circle.y)) {
        rect.alpha = 0.1;
    } else {
        rect.alpha = 1;
    }
    stage.update();
}
Key points:
Reintroduced the pressmove. It works fine.
Moved the circle update above the hitTest check. Otherwise you are checking where it was last time.
Moved the stage update to last. It should be the last thing you update. Note however that you can remove it completely, because you have a Ticker listener on the stage in your HTML file, which constantly updates the stage.
Added the else statement to turn the alpha back to 1 if the hitTest fails.
Then, the most important point is that I changed the hitTest to be on the rectangle instead. This essentially says: "Is there a filled pixel at the supplied x and y position inside the rectangle?" Since the rectangle bounds are -49.4, -37.9, 99, 76, this will be true when the circle's coordinates are within those ranges - which is just when it is at the top left of the canvas. If you replace your code with mine, you can see this behaviour.
So, to get it working more like you want, you can do a few things.
Transform your coordinates. Use localToGlobal, or just cheat and use localToLocal. This takes [0,0] in the circle, and converts that coordinate to the rectangle's coordinate space.
Example:
var p = circle.localToLocal(0, 0, rect);
if (rect.hitTest(p.x, p.y)) {
    rect.alpha = 0.1;
} else {
    rect.alpha = 1;
}
Don't use hitTest. Use getObjectsUnderPoint, pass the circle's x/y coordinate, and check if the rectangle is in the returned list.
Hope that helps. As I mentioned in a comment above, you can not do full shape collision, just point collision (a single point on an object).

Keeping an object made in OpenGL within the boundaries of a window

I have been working on a game using Objective-C and OpenGL. I know how to create the object and how to make it move the way I want, but I cannot keep it within the window. How do you keep the object within the window?
OpenGL FAQ, section 8.070: How can I automatically calculate a view that displays my entire model?:
The following is from a posting by Dave Shreiner on setting up a basic
viewing system:
First, compute a bounding sphere for all objects in your scene. This
should provide you with two bits of information: the center of the
sphere (let ( c.x, c.y, c.z ) be that point) and its diameter (call it
"diam").
Next, choose a value for the zNear clipping plane. General guidelines
are to choose something larger than, but close to 1.0. So, let's say
you set:
zNear = 1.0;
zFar = zNear + diam;
Structure your matrix calls in this order (for an Orthographic projection):
GLdouble left = c.x - diam;
GLdouble right = c.x + diam;
GLdouble bottom = c.y - diam;
GLdouble top = c.y + diam;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(left, right, bottom, top, zNear, zFar);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
This approach should center your objects in the middle of the window and stretch them to fit (i.e., it's assuming that you're using a
window with aspect ratio = 1.0). If your window isn't square, compute
left, right, bottom, and top, as above, and put in the following logic
before the call to glOrtho():
GLdouble aspect = (GLdouble) windowWidth / windowHeight;
if (aspect < 1.0) { // window taller than wide
    bottom /= aspect;
    top /= aspect;
} else {
    left *= aspect;
    right *= aspect;
}
The above code should position the objects in your scene appropriately. If you intend to manipulate (i.e. rotate, etc.), you
need to add a viewing transform to it.
A typical viewing transform will go on the ModelView matrix and might
look like this:
gluLookAt(0., 0., 2. * diam,
          c.x, c.y, c.z,
          0.0, 1.0, 0.0);
a "look at" camera is a very convenient technique to make a viewer track an object (hence however the object or camera point moves, the object will stay onscreen). see 'gluLookAt' for a commonly provided implementation, most 3d helper code libraries will have one. You give it a desired camera point, object of interest, and 'up-vector', and it will create an appropriate world (camera transformation) matrix.
Otherwise if you're in control of the object, just don't move it outside of the initial frustum.