CGRectGetWidth vs CGRect.size.width - objective-c

Which is better to use? I prefer CGRect.size.width because it looks nicer, but my colleague says CGRectGetWidth is better.

CGRectGetWidth/Height will normalize the width or height before returning them. Normalization is basically just checking if the width or height is negative, and negating it to make it positive if so.

A rect's width and height can be negative. I have no idea when this would be true in practice, but according to Apple docs:
CGGeometry Reference defines structures for geometric primitives and functions that operate on them. The data structure CGPoint represents a point in a two-dimensional coordinate system. The data structure CGRect represents the location and dimensions of a rectangle. The data structure CGSize represents the dimensions of width and height.

The height and width stored in a CGRect data structure can be negative. For example, a rectangle with an origin of [0.0, 0.0] and a size of [10.0, 10.0] is exactly equivalent to a rectangle with an origin of [10.0, 10.0] and a size of [-10.0, -10.0]. Your application can standardize a rectangle—that is, ensure that the height and width are stored as positive values—by calling the CGRectStandardize function. All functions described in this reference that take CGRect data structures as inputs implicitly standardize those rectangles before calculating their results. For this reason, your applications should avoid directly reading and writing the data stored in the CGRect data structure. Instead, use the functions described here to manipulate rectangles and to retrieve their characteristics.
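To make the difference concrete, here is a minimal sketch (nothing beyond Foundation and CoreGraphics) using a deliberately negative size:
CGRect r = CGRectMake(10.0, 10.0, -10.0, -10.0); // same area as {{0, 0}, {10, 10}}
CGFloat raw = r.size.width;              // -10.0: the stored field, read directly
CGFloat normalized = CGRectGetWidth(r);  //  10.0: the getter standardizes first
CGRect std = CGRectStandardize(r);       // {{0, 0}, {10, 10}}: stored values made positive
NSLog(@"raw = %g, normalized = %g, std width = %g", raw, normalized, std.size.width);
For geometry you build yourself with non-negative sizes (a view's bounds, say), reading size.width directly is harmless; the getters simply make the negative-size case a non-issue.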

Related

Scaling down an NSImage results in pixel changes?

I'm using the following code to scale down my image:
NSImage *smallImage = [[NSImage alloc] initWithSize:CGSizeMake(width, height)];
[smallImage lockFocus];
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];
[image drawInRect:CGRectMake(0, 0, width, height)
         fromRect:NSZeroRect
        operation:NSCompositeCopy
         fraction:1.0];
[smallImage unlockFocus];
Basically this works fine, but if I set the width and height to exactly the original size and compare the images pixel by pixel, some pixels still change.
And since my app is pixel-sensitive, I need every pixel to be correct, so I'm wondering: how can I keep the pixels unchanged during such a scale-down? Is it possible?
Yes, NSImage will change the image data in various ways. It attempts to optimize the "payload" image data according to the size needed for its graphical representation on the UI.
Scaling it down and up again is generally not a good idea.
AFAIK you can only avoid that by keeping the original image data somewhere else (e.g. on disk, or in a separate NSData container or the like).
If you need to apply calculations or manipulations to the image data that must be 100% accurate down to each pixel, then work with NSData or C byte arrays only. Avoid NSImage unless
a) the result is for presentation on the device only, or
b) you really need functionality that only comes with NSImage objects.
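As a minimal sketch of that approach (assuming the original image comes from a file on disk; path is just a placeholder), keep the untouched bytes around and read pixels through NSBitmapImageRep instead of drawing through NSImage:
NSData *originalData = [NSData dataWithContentsOfFile:path];                   // the untouched bytes, kept for later
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc] initWithData:originalData];  // decode once for pixel access
NSColor *pixel = [rep colorAtX:10 y:20];                                       // read a pixel; no NSImage drawing involved
NSLog(@"pixel at (10, 20): %@", pixel);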
I'll explain the problems in principle rather than scientifically.
Pixels have a fixed size, for technical reasons.
No, you can't keep your pixels intact when scaling down.
An example to explain: say a pixel is 0.25 inches square, and you want to fill a square that is 1.1 inches wide. It's impossible to do exactly. How many pixels should be used? Four is too few, five is too many. Somewhere in the Cocoa libraries (or wherever it happens) a decision is made: either use more pixels and enlarge the square, or use fewer and shrink it. That decision is out of your control.
Another thing out of your control is how measurements are computed.
An example: 1 inch is nearly 2.54 cm, so 1.27 cm is 0.5 inch, but what is 1.25 cm? Values, not only measurements, are computed internally using one unit of measure: I think it's inches (as a DOUBLE, with a fixed number of digits after the decimal point). When you use centimeters, the value is converted to inches internally, some mathematical operations are done (e.g. how many pixels are necessary for the square?), and the result is sent back, possibly converted back to centimeters. The same happens when you use INTEGERs: they are computed internally as DOUBLEs and returned as INTEGERs. Funny things, i.e. unexpected values, fall out of this, especially after divisions, which are exactly what scaling down uses!
By the way: when an image is scaled, new pixels are often created for the scaled image. For example, if you have 4 pixels, 2 red and 2 blue, the new single pixel has a mixed color, somewhere around violet. There is no way back. So always work on copies of an image!
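As a toy illustration of that mixing (a naive box filter is assumed here; real resamplers use fancier kernels, but the information loss is the same in kind):
// Average a 2x2 block (2 red, 2 blue pixels) into one output pixel.
unsigned char red[3]  = {255, 0, 0};
unsigned char blue[3] = {0, 0, 255};
unsigned char mixed[3];
for (int c = 0; c < 3; c++) {
    mixed[c] = (unsigned char)((2 * red[c] + 2 * blue[c]) / 4);  // {127, 0, 127}: a violet
}
// Nothing in mixed tells you whether the inputs were red and blue or two violets: the originals are gone.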

Three.js camera tilt up or down and keep horizon level

camera.rotation.y pans left or right in a predictable manner.
camera.rotation.x looks up or down predictably when camera.rotation.y is at 180 degrees.
But when I change camera.rotation.y to some new value and then change camera.rotation.x, the horizon rotates.
I've looked for an algorithm to adjust for the horizon rotation after camera.rotation.x is changed, but haven't found one.
In three.js, an object's orientation can be specified by its Euler rotation vector object.rotation. The three components of the rotation vector represent the rotation in radians around the object's internal x-axis, y-axis, and z-axis respectively.
The order in which the rotations are performed is specified by object.rotation.order. The default order is "XYZ" -- rotation around the x-axis occurs first, then the y-axis, then the z-axis.
Rotations are performed with respect to the object's internal coordinate system -- not the world coordinate system. This is important. So, for example, after the x-rotation occurs, the object's y- and z- axes will generally no longer be aligned with the world axes. Rotations specified in this way are not unique.
So, for example, if in code you specify,
camera.rotation.y = y_radians; // Y first
camera.rotation.x = x_radians; // X second
camera.rotation.z = 0;
the rotations are applied in the object's rotation.order, not in the order you specified them.
In your case, you may find it more intuitive to set rotation.order to "YXZ", which is equivalent to "heading, pitch, and roll".
For more information about Euler angles, see the Wikipedia article. Three.js follows the Tait–Bryan convention, as explained in the article.
three.js r.61
I've been looking for the same info for a few days now. The trick is: use the regular rotateX to look up/down, but use rotateOnWorldAxis(new THREE.Vector3(0.0, 1.0, 0.0), angle) for the horizontal turn (https://discourse.threejs.org/t/vertical-camera-rotation/15334).

Draw tiled images in CGContext with a scale transformation gives precision errors

I want to draw tiled images and then transform them with the usual panning and zooming gestures. The problem that brings me here is that, whenever the scaling transformation has a large number of decimal places, a thin line of pixels (1 or 2 wide) appears in the middle of the tiles. I managed to isolate the problem like this:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
CGContextSetFillColor(ctx, CGColorGetComponents([UIColor redColor].CGColor));
CGContextFillRect(ctx, rect); // rect from drawRect:
float scale = 0.7;
CGContextScaleCTM(ctx, scale, scale);
CGContextDrawImage(ctx, CGRectMake(50, 50, 100, 100), testImage);
CGContextDrawImage(ctx, CGRectMake(150, 50, 100, 100), testImage);
CGContextRestoreGState(ctx);
With a 0.7 scale, the two images appear correctly tiled:
With a 0.777777 scale (changing the scale line to "float scale = 0.777777;"), the visual artifact appears:
Is there any way to avoid this problem? It happens with CGImage, CGLayer and primitive forms such as a rectangle. It also happens on OS X.
Thanks for the help!
edit: Added that this also happens with a primitive form, like CGContextFillRect
edit2: It also happens on OS X!
Quartz has a floating point coordinate system, so scaling may result in values that are not on pixel boundaries, resulting in visible antialiasing at the edges. If you don't want that, you have two options:
Adjust your scale factor so that all your scaled coordinates are integral. This may not always be possible, especially if you're drawing lots of things.
Disable anti-aliasing for your graphics context using CGContextSetShouldAntialias(UIGraphicsGetCurrentContext(), false);. This will give you crisp pixel boundaries, but anything other than straight lines might not look very good.
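If you go the second route, you may want to scope the change so the rest of your drawing keeps antialiasing; a minimal sketch:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
CGContextSetShouldAntialias(ctx, false);   // crisp pixel edges, no blending at the seams
// ... draw the scaled tiles here ...
CGContextRestoreGState(ctx);               // antialiasing restored for everything else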
When all is said and done, iOS is dealing with discrete pixels on integer boundaries. When your frames are scaled by 0.7, the 50 becomes 35, right on a pixel boundary. At 0.777777 it does not, so iOS adapts by moving, shrinking or blending as it sees fit.
You really have two choices. If you want to scale the context, then round the desired scale up or down so that it produces integral scaled frame values (your code uses 50 as the coordinate being multiplied).
Otherwise, don't scale the context; scale the content item by item instead, and use CGRectIntegral to round all dimensions up or down as needed.
EDIT: If my suspicion is right, there is yet another option for you. Let's say you want a scale factor of 0.777777 and a frame of 50,50,100,100. Take the 50, multiply it by the scale, and round the result up or down. Then recompute the frame using that rounded value divided by 0.777777, which gives a fractional coordinate that, when scaled by 0.777777, yields an integer. Quartz is really good at figuring out that you mean an integral value, so small rounding errors are ignored. I'd bet anything this will work just fine for you.
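A sketch of that idea in code (plain arithmetic; ctx and testImage as in the question, and the same adjustment would apply to y, width and height):
CGFloat desiredScale = 0.777777;
CGFloat x = 50.0;
CGFloat snappedX = round(x * desiredScale);     // the pixel boundary you actually want to hit
CGFloat adjustedX = snappedX / desiredScale;    // fractional coordinate that scales back to an integer
CGContextScaleCTM(ctx, desiredScale, desiredScale);
CGContextDrawImage(ctx, CGRectMake(adjustedX, 50, 100, 100), testImage);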

OpenGL GL_POINTS size

How can I make my GL_POINTS bigger? I'm using glPointSize, but it only works up to some size. So
glPointSize(100);
gives the same size as
glPointSize(500);
How can I make the points as big as I need?
The OpenGL wiki says:
There is an implementation-defined range for point sizes, and the size given by either method is clamped to that range. You can query the range with GL_POINT_SIZE_RANGE​ (returns 2 floats). There is also a point granularity that you can query with GL_POINT_SIZE_GRANULARITY​; the implementation will clamp sizes to its granularity as needed.
If the point size you want isn't in the allowable range, consider using a textured quad or even a TRIANGLE_FAN to make a (nearly) circular polygon of whatever size you desire.
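A quick sketch of the query from the quote (classic fixed-function OpenGL; assumes a current GL context plus the usual GL and stdio headers):
GLfloat sizeRange[2] = {0.0f, 0.0f};
GLfloat granularity  = 0.0f;
glGetFloatv(GL_POINT_SIZE_RANGE, sizeRange);          // [min, max] point size the implementation supports
glGetFloatv(GL_POINT_SIZE_GRANULARITY, &granularity); // step between usable sizes
printf("point sizes: %f to %f, granularity %f\n", sizeRange[0], sizeRange[1], granularity);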
Alternatively, you can draw a view-aligned quad of whatever size you want at the point location.
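For the quad route, a hedged sketch in immediate-mode GL (assumes an orthographic projection in the same units as your point coordinates; px, py and the half-size s are placeholders):
// Replace one oversized point at (px, py) with a screen-aligned quad of half-size s.
glBegin(GL_QUADS);
glVertex2f(px - s, py - s);
glVertex2f(px + s, py - s);
glVertex2f(px + s, py + s);
glVertex2f(px - s, py + s);
glEnd();
// Texture it with a circle sprite (or use a GL_TRIANGLE_FAN) if it needs to look like a round point.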

Draw a scatterplot matrix using glut, opengl

I am new to GLUT and OpenGL. I need to draw a scatterplot matrix for an n-dimensional array.
I have loaded the data from a CSV into a vector of vectors, where each inner vector corresponds to a row. I have plotted just one scatterplot so far, and used GL_LINES to draw the grid. My question:
1. How do I draw points inside a particular grid cell? Using GL_POINTS I can only draw points relative to the entire window.
Please let me know if you need any further info to answer this question.
Thanks
What you need to do is be able to transform your data's (x,y) coordinates into screen coordinates. The most straightforward way to do it actually does not rely on OpenGL or GLUT. All you have to do is use a little math. Determine the screen (x,y) coordinates of the place where you want a datapoint for (0,0) to be on the screen, and then determine how far apart you want one increment to be on the screen. Simply take your original data points, apply the offset, and then scale them, to get your screen coordinates, which you then pass into glVertex2f() (or whatever function you are using to specify points in your API).
For instance, you might decide you want point (0,0) in your data to be at location (200,0) on your screen, and the distance between 0 and 1 in your data to be 30 pixels on the screen. This operation will look like this:
int x = 0, y = 0; //Original data points
int scaleX = 30, scaleY = 30; //Scaling values for each component
int offsetX = 100, offsetY = 100; //Where you want the origin of your graph to be
// Apply the scaling values and offsets:
int screenX = x * scaleX + offsetX;
int screenY = y * scaleY + offsetY;
// Calls to your drawing functions using screenX and screenY as your coordinates
You will have to determine values that make sense for the scaling and offsets. You can also have your program use different values for different sets of data, so you can display multiple graphs on the same screen. But this is a simple way to do it.
There are also other ways to go about this. OpenGL has very powerful coordinate transformation functions and matrix math capabilities. Those become more useful as you develop increasingly elaborate programs. They matter most if you're going to be moving things around the screen in real time, or operating on very large data sets, since they let the graphics hardware do the math much faster than the CPU. However, for simple calculations like these, done once or infrequently on limited sets of data, the CPU is more than fast enough on today's computers.
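If you do want to lean on those transformation functions, here is a hedged sketch of the idea (classic fixed-function GL/GLU; the function name drawCell, the cell layout, and the data arrays are assumptions for illustration): give each cell of the scatterplot matrix its own viewport and data range, then pass the raw data values straight to glVertex2f.
void drawCell(int cellCol, int cellRow, int cellSize,
              float minX, float maxX, float minY, float maxY,
              const float *dataX, const float *dataY, int n)
{
    // Map this cell's pixels...
    glViewport(cellCol * cellSize, cellRow * cellSize, cellSize, cellSize);

    // ...to this cell's data range.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(minX, maxX, minY, maxY);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    // Now the raw data values can be passed through without manual scaling.
    glBegin(GL_POINTS);
    for (int i = 0; i < n; i++)
        glVertex2f(dataX[i], dataY[i]);
    glEnd();
}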