Metal Sampler ::linear doesn't work as expected - core-graphics

My source image is a 512x512 pixel checkerboard (see source image). When I render it at 1/3 of its size (170.6… x 170.6…), the result looks like it was downsized with the ::nearest filter. I expect the resulting image to be an approximation of the texels (colors) sampled by the sampler in my texture shader, but it is not. I tried to do the same using CALayer, and the result was identical. However, resizing an NSImage created from my source image (512x512) to 1/3 of its size produced the expected result (see image below). Could you please explain how the sampler in Metal works and what I need to change to get the result I expect?
Thank you.
(I render to a CAMetalLayer's drawable; contentsScale 1x, displayScale 1x.)
My texture shader sampler: constexpr sampler textureSampler (mag_filter::linear, min_filter::linear);
Source image 512x512:
Result 170.6… x 170.6…:
Expected result 170.6… x 170.6…:

I made two mistakes:
I didn't generate mipmaps from my source texture.
I didn't use the mip_filter::linear parameter for the sampler.
Generating mipmaps and using:
constexpr sampler samplerLinear(min_filter::linear, mag_filter::linear, mip_filter::linear);
renders the correct output texture.
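In case it is useful, here is a minimal Swift sketch of generating the mipmap chain with a blit encoder (the device and queue objects, and the upload of the 512x512 checkerboard into level 0, are assumed to exist elsewhere):

import Metal

// Assumes `device: MTLDevice` and `queue: MTLCommandQueue` are already set up.
let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                          width: 512, height: 512,
                                                          mipmapped: true)  // allocate the full mip chain
let texture = device.makeTexture(descriptor: descriptor)!
// ... upload the 512x512 checkerboard into mip level 0 with texture.replace(region:...) ...

let commandBuffer = queue.makeCommandBuffer()!
let blit = commandBuffer.makeBlitCommandEncoder()!
blit.generateMipmaps(for: texture)  // fills levels 1...9 by successive downsampling
blit.endEncoding()
commandBuffer.commit()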

Related

DirectX 11 What is a Fragment?

I have been learning DirectX 11, and the book I am reading states that the Rasterizer outputs Fragments. It is my understanding that these Fragments are the output of the Rasterizer (which takes geometric primitives as input), and are in fact just 2D positions (on your 2D Render Target View).
Here is what I think I understand, please correct me.
The Rasterizer takes Geometric Primitives (spheres, cubes or boxes, toroids, cylinders, pyramids, triangle meshes or polygon meshes) (https://en.wikipedia.org/wiki/Geometric_primitive). It then translates these primitives into pixels (or dots) that are mapped to your Render Target View (which is 2D). This is what a Fragment is. For each Fragment, it executes the Pixel Shader to determine its color.
However, I am only assuming, because there is no simple explanation of what it is (that I can find).
So my questions are ...
1: What is a Rasterizer? What are the inputs, and what is the output?
2: What is a fragment in relation to the Rasterizer output?
3: Why is a fragment a float4 value (SV_Position) if it is just a 2D position in screen space on the Render Target View?
4: How does it correlate to the Render Target Output (the 2D Screen Texture)?
5: Is this why we clear the Render Target View (to whatever color)? Because the Rasterizer and Pixel Shader will not execute on every X,Y location of the Render Target View?
Thank you!
I do not use DirectX but OpenGL instead; however, the terminology should be similar if not the same. My understanding is this:
(scene geometry) -> [Vertex shader] -> (per vertex data)
(per vertex data) -> [Geometry & Tessellation shader] -> (per primitive data)
(per primitive data) -> [rasterizer] -> (per fragment data)
(per fragment data) -> [Fragment shader] -> (fragment)
(fragment) -> [depth/stencil/alpha/blend...]-> (pixels)
So in the Vertex shader you can perform any per-vertex operations, like coordinate system transforms, pre-computation of needed parameters, etc.
In the geometry and tessellation shaders you can compute normals from geometry, emit/convert primitives, and much more.
The Rasterizer then converts geometry (primitives) into fragments. This is done by interpolation. It basically divides the visible part of any primitive into fragments; see convex polygon rasterizer.
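To make "interpolation" concrete, here is a toy Swift sketch (an illustration only, not real GPU code; real hardware also does this perspective-correctly) of how per-vertex attributes are blended into one per-fragment value using barycentric weights inside a triangle:

import simd

struct Vertex {
    var position: SIMD4<Float>  // homogeneous position from the vertex stage
    var color: SIMD4<Float>     // any other per-vertex attribute is handled the same way
}

// For one covered sample the rasterizer computes barycentric weights (w.x + w.y + w.z == 1)
// and blends the three vertices' attributes with them to produce the fragment's input data.
func interpolate(_ a: Vertex, _ b: Vertex, _ c: Vertex, weights w: SIMD3<Float>) -> Vertex {
    Vertex(position: a.position * w.x + b.position * w.y + c.position * w.z,
           color:    a.color    * w.x + b.color    * w.y + c.color    * w.z)
}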
Fragments are not pixels nor super-pixels, but they are close. The difference is that they may or may not be output, depending on the circumstances and pipeline configuration (pixels are visible outputs). You can think of them as possible super-pixels.
The Fragment shader converts per-fragment data into final fragments. Here you compute per-fragment/pixel lighting and shading, do all the texture work, compute colors, etc. The output is also a fragment, which is basically a pixel plus some additional info, so it does not have just a position and color but can have other properties as well (like more colors, depth, alpha, stencil, etc.).
This goes into the final combiner, which performs the depth test and any other enabled tests or functionality, like blending. Only that output goes into the framebuffer as a pixel.
I think that answers #1, #2, and #4.
Now #3 (I may be wrong here due to my lack of knowledge about DirectX): in per-fragment data you often need the 3D position of the fragment for proper lighting or whatever other computations, and as homogeneous coordinates are used, we need a 4D (x,y,z,w) vector for it. The fragment itself has 2D coordinates, but the 3D position is a value interpolated from the geometry passed from the Vertex shader. So it may not contain the screen position but world coordinates instead (or any other).
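As an illustration of why a 4D homogeneous position is carried around, here is a small Swift sketch (my own toy code, not a DirectX or OpenGL API) of the perspective divide and viewport mapping that turn a clip-space (x,y,z,w) vector into a 2D pixel position plus a depth value:

import simd

// Clip space (x, y, z, w) -> normalized device coordinates -> pixel coordinates.
func toScreen(_ clip: SIMD4<Float>, viewportWidth: Float, viewportHeight: Float) -> SIMD3<Float> {
    let ndc = SIMD3<Float>(clip.x, clip.y, clip.z) / clip.w  // perspective divide
    let sx = (ndc.x * 0.5 + 0.5) * viewportWidth             // -1..1 -> 0..width
    let sy = (1.0 - (ndc.y * 0.5 + 0.5)) * viewportHeight    // flip y for a top-left origin
    return SIMD3<Float>(sx, sy, ndc.z)                       // z is kept for the depth test
}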
#5: Yes, the scene may not cover the whole screen, and/or you need to preset buffers like depth, stencil, and alpha so the rendering works as it should and is not invalidated by the previous frame's results. So we usually need to clear the framebuffers at the start of a frame. Some techniques require multiple clears per frame; others (like a glow effect) clear once per several frames...

What are the commands to get and set the contrast Gamma setting of raster image displays?

I am trying to overlay two images, but I also want to be able to pass the gamma from each of the images to the final image. I know that one can get and set contrast limits as well as adjust the intensity transformation (ITT), but I have not found commands to access the Gamma value.
Am I just missing something? It would be helpful to be able to set the gamma for both images separately before overlaying them.
The corresponding commands are
Number ImageDisplayGetGammaCorrection( ImageDisplay imgDisp )
and
void ImageDisplaySetGammaCorrection( ImageDisplay imgDisp, Number gamma )
and they are used as in the following example:
image img1:=RealImage("test1",4,256,256)
img1 = icol
ShowImage(img1)
img1.ImageGetImageDisplay(0).ImageDisplaySetGammaCorrection(0.6)

ImageMagick losing aspect ratio

Very simply I have a script that calls imagemagick on my photos.
The original image is 320 x 444, and I want to create a few scaled-down versions while keeping the same aspect ratio.
I call imagemagick using
convert oldfile.png -resize 290x newfile.png
I want to scale it to my set widths and have the heights scale accordingly.
I do 80x, 160x and 290x in 3 separate commands.
The smallest two produce images with the original aspect ratio; the 290x one does not.
The size of the image it produces is 290 x 402
I have no idea why that one fails to keep the aspect ratio while the other two sizes maintain it.
Any ideas?
I think that the problem in the third command is the requested size itself:
in the first command both dimensions are divided by 4: 320/4=80 and 444/4=111
in the second command both dimensions are divided by 2: 320/2=160 and 444/2=222
444 and 320 have only two common divisors greater than 1: 2 and 4. You already used those divisors in your first two commands, so any other (width, height) pair will give you a slightly different aspect ratio: it is impossible to obtain exactly the same aspect ratio with a width of 290.
In fact, while your original image has an aspect ratio of 1.3875, with a 290x403 image you would obtain an aspect ratio of 1.389655172, and with a 290x402 image you would get a ratio of 1.386206897: with the width fixed at 290, there is no height that gives you the desired aspect ratio.
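To see why only the 80 and 160 widths work out exactly, here is the arithmetic from the question as a tiny Swift snippet (just a check, not ImageMagick code):

let originalWidth = 320.0, originalHeight = 444.0
for targetWidth in [80.0, 160.0, 290.0] {
    let exactHeight = targetWidth * originalHeight / originalWidth
    print("\(Int(targetWidth))x -> exact height \(exactHeight)")
}
// 80x  -> exact height 111.0   (integer, ratio preserved exactly)
// 160x -> exact height 222.0   (integer, ratio preserved exactly)
// 290x -> exact height 402.375 (must round to 402 or 403, so the ratio is only approximate)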
In general, however, ImageMagick always tries to preserve the aspect ratio of the image, as you can read in the ImageMagick documentation:
The argument to the resize operator is the area into which the image should be fitted. This area is not the final size of the image but the maximum size for the image. That is because IM tries to preserve the aspect ratio of the image more than the final 'box' size (unless a '!' flag is given), but at least one (if not both) of the final dimensions should match the argument given. So let me be clear... Resize will fit the image into the requested size. It does NOT fill the requested box size.
For further reference see here

How to convert 32 bit PNG to RGB565?

How can I accomplish this? A programmatic solution (Objective-C) is great, but even a non-programmatic one is good.
I have Pixelmator, but that doesn't give you the option. I can't seem to do it with Preview either.
I have tried googling, but haven't been able to find a solution so far. The only tool I have been able to use to do this is TexturePacker, but that creates a sprite sheet.
You can use libpng to convert the PNG image to three-byte (8:8:8) RGB. Then you can downsample to the 5:6:5 16-bit color values of RGB565. If r, g, and b are the respective 8-bit colors (stored in an unsigned char type), then the 16-bit RGB565 value is:
((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)
You can improve a tad on this by rounding instead of chopping, being careful to not overflow the values. You can also force the green value to be equal to the blue and red values when they are all equal in the original 8-bit values. Otherwise it is possible to have colors that were originally gray inadvertently take on color after conversion.
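Here is a small Swift sketch of that packing, including the rounding suggested above (the helper name is mine, and the 8-bit RGB values are assumed to have already been read out of the PNG, e.g. with libpng):

// Pack 8:8:8 RGB into a 5:6:5 16-bit value, rounding each channel instead of truncating.
func rgb565(r: UInt8, g: UInt8, b: UInt8) -> UInt16 {
    // Add half of the dropped range before shifting, then clamp so 255 cannot overflow the field.
    let r5 = min((UInt16(r) + 4) >> 3, 31)
    let g6 = min((UInt16(g) + 2) >> 2, 63)
    let b5 = min((UInt16(b) + 4) >> 3, 31)
    return (r5 << 11) | (g6 << 5) | b5
}

// Example: pure white stays pure white: rgb565(r: 255, g: 255, b: 255) == 0xFFFF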
Create a bitmap context with RGB565 color using Quartz, paint your PNG onto this context, then save the bitmap context to a file.
PNG does not support RGB565 packing. You can always apply a posterize to the image (programmatically, or with ImageMagick, or with any image editor), which amounts to discarding the least significant bits in each channel. When saving to PNG, you will still be saving 8 bits per channel (unless you use a palette), but you will still get an appreciable reduction in size because of the PNG compression.
A quick example: original:
After a simple posterize with 32 levels (equivalent to RGB555) applied with XnView:
The size goes from 89KB to 47KB, with a small quality loss.
In the case of synthetic images with gradients, the quality loss could be much more noticeable (banding).
I received this answer from the creator of TexturePacker:
you can do it from command line - see
http://www.texturepacker.com/uncategorized/batch-converting-images-to-pvr-or-pvr-ccz/
Just adjust the opt and set output to .png instead of pvr.ccz
Make sure that you do not overwrite your source images.
According to Wikipedia, which is always right, the only 16-bit PNG is a greyscale PNG. http://en.wikipedia.org/wiki/Portable_Network_Graphics
If you just add your 32-bit (alpha) or 24-bit (no alpha) PNG to your project as normal, and then set the texture format in Cocos2D, all should be fine. The code for that is:
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGB565];

Create a new image with only the masked part (without the transparent area), with a new size

I have a mask and an image; the mask is applied to the image to get a portion of that image.
The problem is that when I apply the mask to the image, the resulting masked image is the same size as the original image, although the unmasked part is transparent. What I need is an image that contains only the masked part of the original image; I don't want the transparent part to be in the image, so that the resulting image is smaller and contains only the masked part.
Thanks
You can:
Draw the image to a new CGBitmapContext at actual size, providing a buffer for the bitmap. CGBitmapContextCreate
Read alpha values from the bitmap to determine the transparent boundaries. You will have to determine how to read this based on the pixel data you have specified.
Create a new CGBitmapContext providing the external buffer, using some variation or combination of: a) a pixel offset, b) offset bytes per row, or c) manually moving the bitmap's data (in place to reduce memory usage, if possible). CGBitmapContextCreate
Create a CGImage from the second bitmap context. CGBitmapContextCreateImage
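A rough Swift sketch of those steps, assuming you already have the masked CGImage; for brevity it finishes with cropping(to:) instead of the second, offset bitmap context described in step 3, and you may need to adjust for coordinate flipping depending on how your image was produced:

import CoreGraphics

// Steps 1 and 2: render into a bitmap context and scan the alpha channel for the
// bounding box of the non-transparent pixels.
func opaqueBounds(of image: CGImage) -> CGRect? {
    let width = image.width, height = image.height
    guard let ctx = CGContext(data: nil, width: width, height: height,
                              bitsPerComponent: 8, bytesPerRow: 0,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    ctx.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))

    guard let data = ctx.data else { return nil }
    let pixels = data.assumingMemoryBound(to: UInt8.self)
    let bytesPerRow = ctx.bytesPerRow

    var minX = width, minY = height, maxX = -1, maxY = -1
    for y in 0..<height {
        for x in 0..<width {
            let alpha = pixels[y * bytesPerRow + x * 4 + 3]  // RGBA layout: alpha is the 4th byte
            guard alpha > 0 else { continue }
            minX = min(minX, x); maxX = max(maxX, x)
            minY = min(minY, y); maxY = max(maxY, y)
        }
    }
    guard maxX >= minX else { return nil }  // the image is fully transparent
    return CGRect(x: minX, y: minY, width: maxX - minX + 1, height: maxY - minY + 1)
}

// Usage: crop the masked image down to just the visible part.
// if let bounds = opaqueBounds(of: maskedImage),
//    let trimmed = maskedImage.cropping(to: bounds) { /* trimmed has the smaller size */ }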