How to specify a rotation angle in Wolfram Alpha

I want to draw a pentagon rotated to an angle of 45 degrees. How do I write code for this shape?

Since this question is tagged 'wolframalpha', I assume that the 'code' you refer to should be entered at the Wolfram Alpha site. Entering "Pentagon rotated 45 degrees" there gives me a figure of a rotated pentagon.
In Mathematica:
WolframAlpha["Pentagon rotated 45 degrees", {{"VisualRepresentation",1}, "Content"}]

Related

How can I implement degrees around the drawn compass

I am developing a GPS waypoint application. I have started by drawing my compass but am finding it difficult to implement degree text around the circle. Can anyone help me with a solution? [Image 1: the circle of the compass I have drawn so far]
This image shows what I want to achieve, that is, degree text around the compass: [Image 2: what I want to achieve]
Assuming you're doing this in a custom view, you need to use one of the drawText methods on the Canvas passed in to onDraw.
You'll have to do a little trigonometry to get the x, y position of the text - basically if there's a circle with radius r you're placing the text origins on (i.e. how far out from the centre they are), and you're placing one at angle θ:
x = r * cosθ
y = r * sinθ
The sin and cos functions take a value in radians, so you'll have to convert that if you're using degrees:
val radians = (degrees.toDouble() / 360.0) * (2.0 * Math.PI) // or simply Math.toRadians(degrees.toDouble())
Note that 0 degrees is at 3 o'clock on the circle, not 12, so you'll have to subtract 90 degrees from your usual compass positions (e.g. 90 degrees on the compass is 0 degrees in the local coordinates). The negative values you get are fine; -90 is the same as 270. And if you're trying to replicate the image you posted (where the numbers and everything else rotate while the needle stays at the top) you'll have to apply an angle offset anyway!
These x and y values are distances from the centre of the circle, which probably needs to be the centre of your view (and which you've probably already calculated to draw your circle). You'll also need to account for the extra space needed to draw the labels, scaling everything so it all fits in the view.
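Putting it together, a minimal sketch of such a custom view, assuming a label every 30 degrees; the class name, text size, and labelRadius choice are all mine, not from the question:

import android.content.Context
import android.graphics.Canvas
import android.graphics.Paint
import android.view.View
import kotlin.math.cos
import kotlin.math.sin

class CompassView(context: Context) : View(context) {
    private val textPaint = Paint(Paint.ANTI_ALIAS_FLAG).apply {
        textSize = 32f
        textAlign = Paint.Align.CENTER
    }

    override fun onDraw(canvas: Canvas) {
        super.onDraw(canvas)
        val cx = width / 2f
        val cy = height / 2f
        // Pull the label circle in from the view edge so the text fits
        val labelRadius = minOf(cx, cy) - textPaint.textSize

        for (degrees in 0 until 360 step 30) {
            // Subtract 90 so compass 0 (north) lands at 12 o'clock
            val radians = Math.toRadians((degrees - 90).toDouble())
            val x = cx + (labelRadius * cos(radians)).toFloat()
            val y = cy + (labelRadius * sin(radians)).toFloat()
            // y is the text baseline; good enough for a sketch
            canvas.drawText(degrees.toString(), x, y, textPaint)
        }
    }
}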

Calculate angle on a plane in 3D space from a 2D image

I have 2 input images of a plane where the (static) camera is at an unknown angle. I managed to extract edges and points of interest using OpenCV. But I'm stuck calculating real angles from the images.
From image #1 I need to calculate the camera angle relative to the plane. I know 3 points on the plane that form an equilateral triangle (angles of 60 degrees). The center point of the triangle is also the center point of the plane. However, the plane's center point in the image is covered by another object.
From image #2 I need to calculate the real angle of an object (point C) on the plane relative to one of the 3 points and the plane's center point (= line A to B).
How can I calculate the real angle β as if the camera had no angle towards the plane?
Update:
I was looking for a solution for my problem at https://docs.opencv.org/3.4/d9/d0c/group__calib3d.html
There is a number of functions but I couldn't figure out how to apply them to my specific problem.
There is a function to calculate a homography using two images with keypoints, but I do not have images of the scene from different camera angles.
Then there is cv::findHomography, which finds a perspective transformation between two planes. I know 4 source points, but what are my 4 destination points?
Another one I was looking at is cv::solvePnP and cv::solvePnPRansac, but again I only know 4 source points on the plane. I don't know their 3D correspondence points.
What am I missing?
@Micka: Thanks for your input. I have 4 points for processing the image (the 3 static base points + the object at point C). I can assume these points are all located on the plane at z=0. However, I have neither coordinates on a second plane nor the (x, y) of the corresponding 3D points.
Your description does not explicitly say it, but if you can assume that segment AB bisects the base of the triangle, then you have 4 point correspondences between the plane and its image, so you can use cv::findHomography.
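As a sketch of that idea in Kotlin with OpenCV's Java bindings (this thread never fixes a language, and every coordinate below is invented purely for illustration): map the 4 known image points to their plane positions, warp point C through the homography, then measure β in undistorted plane coordinates.

import org.opencv.calib3d.Calib3d
import org.opencv.core.Core
import org.opencv.core.Mat
import org.opencv.core.MatOfPoint2f
import org.opencv.core.Point
import kotlin.math.atan2

fun main() {
    System.loadLibrary(Core.NATIVE_LIBRARY_NAME)

    // The 4 known points as seen in the image (all pixel values invented)
    val imagePoints = MatOfPoint2f(
        Point(412.0, 118.0),  // apex corner A
        Point(130.0, 520.0),  // base corner
        Point(700.0, 540.0),  // base corner
        Point(414.0, 531.0)   // base midpoint: where the imaged line from A
                              // through the plane center crosses the base
    )

    // The same 4 points in plane coordinates (z = 0), for a unit side length:
    // the base midpoint (0.5, 0) lies on line AB because AB bisects the base
    val planePoints = MatOfPoint2f(
        Point(0.5, 0.866),
        Point(0.0, 0.0),
        Point(1.0, 0.0),
        Point(0.5, 0.0)
    )

    val h: Mat = Calib3d.findHomography(imagePoints, planePoints)

    // Warp the object's image position (point C, also invented) onto the plane
    val cImage = MatOfPoint2f(Point(585.0, 400.0))
    val cPlane = MatOfPoint2f()
    Core.perspectiveTransform(cImage, cPlane, h)

    // beta at the plane center B (= triangle centroid) between BA and BC
    val a = Point(0.5, 0.866)
    val b = Point(0.5, 0.866 / 3.0)
    val c = cPlane.toArray()[0]
    val beta = atan2(c.y - b.y, c.x - b.x) - atan2(a.y - b.y, a.x - b.x)
    println("beta = ${Math.toDegrees(beta)} degrees")
}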

How to get the angle of an image? (iOS)

OK, so I rotated an image by 25 degrees like this:
MyImage.layer.affineTransform = CGAffineTransformMakeRotation(25 * M_PI / 180); // the angle is in radians, so convert from degrees
Now I want to rotate the image again by 25 more degrees (50 degrees in total).
So my question is: how do I rotate an image by X more degrees?
If you have another or better way to rotate an image, please include your code.
There is no need to know the current angle! Instead of setting the affineTransform, which is only a shortcut, apply an actual 3D transform. Then you can call CATransform3DRotate, which rotates an existing transform and is therefore additive.
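As a one-function sketch (written with Kotlin/Native's iOS bindings to keep one language across this page; the underlying call is the plain QuartzCore CATransform3DRotate, and the function name is mine):

import platform.QuartzCore.CATransform3DRotate
import platform.UIKit.UIImageView
import kotlin.math.PI

// Rotate the layer's existing transform by `degrees` more around the z axis.
// Because it starts from the current transform, repeated calls accumulate.
fun rotateMore(view: UIImageView, degrees: Double) {
    val radians = degrees * PI / 180.0
    view.layer.transform =
        CATransform3DRotate(view.layer.transform, radians, 0.0, 0.0, 1.0)
}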

OpenGL texture mapping off by 5-8 pixels

I've got a bunch of thumbnails/icons packed right up next to each other in a texture map / sprite sheet. In pixel terms, they are being scaled up from 145 pixels square to 238 screen pixels square. I was expecting +-1 or 2 pixels of accuracy on the edges of the box when accessing the texture coordinates, so I'm also drawing a 4 pixel outline on top of the thumbnail to hide this probable artifact. But I'm seeing huge variations in accuracy. Sometimes it's off in one direction, sometimes the other.
I've checked over the math and I can't figure out what's happening.
The thumbnail is being scaled up about 1.64 times, so a single pixel off in the source texture coordinate could result in around 2 pixels off on screen. The 4 pixel white frame on top is being drawn at a 1-to-1 pixel-to-fragment relationship and is supposed to cover about 2 pixels on either side of the edge of the box. That part is working. Here I've turned off the border to show how far off the texture coordinates are...
I can tweak the numbers manually to make it go away. But I have to shrink the texture coordinate width/height by several source pixels and in some cases add (or subtract) 5 or 6 pixels to the starting point. I really just want the math to work out or to figure out what I'm doing wrong here. This sort of stuff drives me nuts!
A bunch of crap to know.
The texture coordinate offsetting is done in the vertex shader...
// Scale the unit texcoords down to one thumbnail's share of the atlas,
// then shift them to that thumbnail's corner
v_fragmentTexCoord0 = vec2((a_vertexTexCoord0.x * u_texScale) + u_texOffset.s,
                           (a_vertexTexCoord0.y * u_texScale) + u_texOffset.t);
gl_Position = u_modelViewProjectionMatrix * vec4(a_vertexPosition, 1.0);
This object is a box which is a triangle strip with 2 tris.
Not that it should matter, but the matrix applied to the model isn't doing any scaling. The box is at screen scale. The scaling is happening only in the texture coordinates being supplied.
The texture coordinates of the object as seen above are 0.00 - 0.07, and the shader then adds an offset amount that is different per thumbnail. 0.07 out of 2048 is about 143 pixels. Originally I had it at 0.0708, which should be closer to 145, but it was worse and showed more like 148 pixels of the texture. To get it to show only 145 source pixels I have to make it 0.06835, which is 140 pixels.
I've tried doing the math in a calculator and typing the numbers in directly. I've also tried writing them as fractions like 1305.0/2048.0. These are going into GLfloats, not doubles.
This texture map image is PNG and is loaded with these settings:
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
but I've also tried GL_LINEAR with no apparent difference.
I'm not having any accuracy problems on other textures (in the same texture map) where I'm not doing the texture scaling.
It doesn't get farther off as the coords get higher. In the image above, the NEG MAP thumb is right next to the HEAT MAP thumb; they're off in different directions but correct at the seam.
Here's the offset data for those two:
filterTypes[FT_gradientMap20].thumbTexOffsetS = 0.63720703125;
filterTypes[FT_gradientMap20].thumbTexOffsetT = 0.1416015625;
filterTypes[FT_gradientMap21].thumbTexOffsetS = 0.7080078125;
filterTypes[FT_gradientMap21].thumbTexOffsetT = 0.1416015625;
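(These offsets are exact pixel positions over the 2048 atlas width; a quick check, in Kotlin:)

fun main() {
    println(0.63720703125 * 2048)  // 1305.0 -> the first thumb starts at pixel 1305
    println(0.7080078125 * 2048)   // 1450.0 = 1305 + 145, the adjacent thumb
    println(0.1416015625 * 2048)   // 290.0  -> the shared T offset
}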
==== UPDATE ====
Off the bat, I realized a couple of things I was doing wrong; they're discussed over here: OpenGL Texture Coordinates in Pixel Space
The width of a single thumbnail is 145 pixels. But that spans pixels 0-144, with 145 starting the next one, and I was using a width of 145, so that's 1 pixel too big. Using the center-of-pixel math described there, we should actually go from the center of pixel 0 to the center of pixel 144: 144.5 - 0.5 = 144.
Using his formula of (2i + 1)/(2N) I made new offset amounts for each of the starting points and used 144/2048 as the width. That made things better, but still off in some areas, and again off in one direction sometimes and in the other at other times, although consistently so for each x or y position.
Using a width of 143 gives better results. But I can fix them all just by adjusting the numbers manually until they work; I want the math to make it work out right.
...or maybe it has something to do with min/mag filtering, although I read up on that and what I'm doing seems right for this case.
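For reference, here is that center-of-pixel mapping as a tiny Kotlin helper, using the 2048 atlas and 145-pixel thumbnails from this question (the starting pixel is the one worked out above):

// Texture coordinate of the center of pixel i in an N-pixel texture:
// u = (2i + 1) / (2N)
fun texelCenter(i: Int, n: Int): Float = (2 * i + 1).toFloat() / (2 * n)

fun main() {
    val atlas = 2048
    // A 145-px thumb starting at pixel 1305 covers pixels 1305..1449
    val u0 = texelCenter(1305, atlas)        // left sample center
    val u1 = texelCenter(1305 + 144, atlas)  // right sample center
    println("offset = $u0, width = ${u1 - u0}")  // width = 144/2048
}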
After a lot of experiments and having to create a grid-lined guide texture so I could see exactly how far off each texture was... I finally got it!
It's pretty simple actually.
// The mediump qualifiers on the texture-coordinate inputs are the fix
uniform mat4 u_modelViewProjectionMatrix;
uniform mediump vec2 u_texOffset;
uniform mediump float u_texScale;
attribute vec3 a_vertexPosition;
attribute mediump vec2 a_vertexTexCoord0;
It was the precision of the texture coordinates. By specifying mediump, it just fixed itself. I suspect this would also help solve the problem I was having in this question:
Why is a texture coordinate of 1.0 getting beyond the edge of the texture?
Once I did that, I had to go back to my original 145 width (which still seems wrong, but oh well). And for what it's worth, I then ended up going back to all my original math on all the texture coordinates. The "center of pixel" method was showing more of the neighboring pixels than the straight /2048 did.

Finding specific points in UIView

Hey. I'm building an iPad app and I need to mark specific points on top of a map (UIImageView). I have the coordinates in inches, but I'm having trouble mapping the inch-based coords to iOS points. I've googled for hours and found nothing too conclusive.
Any ideas?
Thank you.
So you have actual units and want to convert those to points?
The iPad screen has a 9.7" diagonal. In points, it's 1024x768. So, applying Pythagoras, that's 1280 points on the diagonal. Therefore the points per inch is 1280 / 9.7, which is very close to 132, a figure I've also seen given elsewhere without computation from first principles.
So, a distance of x inches is x * 132 points. E.g. 1.5" is 198 points.
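The whole conversion, as a throwaway Kotlin sketch of the arithmetic above (the question itself is about iOS):

import kotlin.math.hypot

fun main() {
    // Original iPad: 1024x768 points on a 9.7" diagonal
    val pointsPerInch = hypot(1024.0, 768.0) / 9.7  // 1280 / 9.7, ~131.96

    fun inchesToPoints(inches: Double) = inches * pointsPerInch
    println(inchesToPoints(1.5))  // ~197.9, i.e. about 198 points
}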