How should I tile images for CATiledLayer? - objective-c

I know how to tile images; I just don't understand how the tiles should turn out in terms of sizes and resolutions.
The names should be Image_size_row_column, and one of Apple's tile images is:
Lake_125_0_0.png
I use TileCutter to tile the images, but I don't know whether I should tile my original image into 512x512 px tiles, then make a lower-resolution version of the original (from ≈7000x6000 down to ≈5000x4000), tile that into 512x512 px tiles as well, and so on. I just don't get the whole setup.
The class reads images like this:
NSString *tileName = [NSString stringWithFormat:@"%@_%d_%d_%d",
                      imageName, (int)(scale * 1000), row, col];
And with the first of Apple's tiles being named Lake_125_0_0.png, that tells me nothing. I just don't get it. Anyone?
Thanks.

The tiles are by default always 256 x 256 pixels (although in the Apple example some tiles at the border of the image are cropped).
Lake_1000_1_2: full-resolution tile at scale 1, row 1, col 2.
Lake_500_1_2: half resolution; the tile is still 256 x 256 pixels, but it shows an area of the image that is actually 512 x 512 pixels (so you lose quality).
Lake_250_1_2: quarter resolution.
Lake_125_1_2: shows 8*256 x 8*256 pixels of the original image inside a 256 x 256 pixel tile.
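As a minimal sketch (assuming the naming scheme and 256 x 256 tiles described above), the name for any tile can be built straight from the question's own format string; Lake_125_0_0.png is then simply row 0, column 0 of the 1/8-resolution level:

#import <UIKit/UIKit.h>

// scale 1.0   -> Name_1000_row_col (full resolution)
// scale 0.5   -> Name_500_row_col  (half resolution)
// scale 0.125 -> Name_125_row_col  (1/8 resolution, e.g. Lake_125_0_0)
static NSString *TileName(NSString *imageName, CGFloat scale, int row, int col) {
    return [NSString stringWithFormat:@"%@_%d_%d_%d",
            imageName, (int)(scale * 1000), row, col];
}

So you don't pick one tile size per level: you cut 256 x 256 tiles from the full image for scale 1.0, then shrink the whole image by half and cut 256 x 256 tiles again for scale 0.5, and so on.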
I hope this helps.

Related

How does the pixel size change after the aspect ratio is changed?

I am using an NI PCI-1411 frame grabber card, and the signal is an RS-170 signal. According to the discussion at https://www.cs.rochester.edu/~nelson/courses/vision/resources/video_signals.html:
"The aspect (width to height) ratio for typical RS-170 signal rectangle is 4:3. The vertical resolution of video is limited to 485 pixels, as determined by the number of scan lines. The RS-170 standard specifies the aspect ratio (ratio of vertical/horizontal dimensions) of the video display as 3:4"
The CCD I use is a Hitachi KP-M1AN. It has 768 (H) x 494 (V) pixels, a pixel size of 11.64 um x 13.5 um, a sensing area of 8.91 x 6.67 mm, and a horizontal/vertical TV resolution of 570/485.
Here are my questions:
1. I present a square object to the CCD. Given the CCD's pixel size (11.64 um x 13.5 um), the horizontal pixel count should differ from the vertical pixel count. However, I notice the pixel counts in both directions are the same. So I wonder: what is the "real" pixel size now, after the ratio change?
For example, suppose a square object is 143 um x 143 um and the CCD pixel size is 11 um x 13 um. Going by the pixel size, it should appear as 13 pixels x 11 pixels (143/11 = 13 horizontally, 143/13 = 11 vertically). But what I actually see is 12 pixels x 12 pixels.
2. Following on from question 1: if the pixel size changes owing to the ratio change, how does the NI software change the pixel size? I mean, does it stretch it or compress it?

Can SoOffscreenRenderer use tiles bigger than 1024

The Coin3D offscreen rendering class SoOffscreenRenderer is capable of rendering big images (e.g. 4000 x 2000 pixels) that don't fit on the screen or in a rendering buffer. This is done by partitioning the image into tiles that are rendered one after the other, where the default size of these tiles is 1024 x 1024.
I looked at the code of SoOffscreenRenderer and CoinOffscreenGLCanvas and found the environment variables COIN_OFFSCREENRENDERER_TILEWIDTH and COIN_OFFSCREENRENDERER_TILEHEIGHT. I could change the tile size using these variables, but only to sizes smaller than 1024: I could create tiles of 512 x 512 pixels, and also 768 x 768. When I used values bigger than 1024, the resulting tiles were always 1024 x 1024.
Is it possible to use bigger tile sizes like 2048 x 2048 or 4096 x 4096, and how would I do that?
It is possible to use larger tiles, and Coin does it automatically: it finds out which tile sizes work by querying the graphics card driver.
From CoinOffscreenGLCanvas.cpp:
// getMaxTileSize() returns the theoretical maximum gathered from
// various GL driver information. We're not guaranteed that we'll be
// able to allocate a buffer of this size -- e.g. due to memory
// constraints on the gfx card.
The reason it did not work was that the environment variable COIN_OFFSCREENRENDERER_MAX_TILESIZE was set somewhere in our application using coin_setenv("COIN_OFFSCREENRENDERER_MAX_TILESIZE", "1024", 1);. Removing this call allowed bigger tile sizes to be used.
In CoinOffscreenGLCanvas::getMaxTileSize(void), the variable COIN_OFFSCREENRENDERER_MAX_TILESIZE is read and the tile size is clamped accordingly.
On my older computer it generated tiles of size 1024, but on a newer machine the tiles were of size 4096.
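For reference, a minimal sketch of how the cap could be lifted before the renderer is created, using the variable names above (setenv/unsetenv are plain POSIX; whether a given size actually works still depends on the GL limits Coin queries):

#include <stdlib.h>

// Remove any clamp, then request bigger tiles via the documented variables.
unsetenv("COIN_OFFSCREENRENDERER_MAX_TILESIZE");
setenv("COIN_OFFSCREENRENDERER_TILEWIDTH",  "4096", 1);
setenv("COIN_OFFSCREENRENDERER_TILEHEIGHT", "4096", 1);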

Can't get MKTileOverlay to support 1024 x 1024 sized tiles...

If I set the tileSize property of MKTileOverlay with tileSize = CGSizeMake(1024, 1024),
the map view shows my tiles all over the view, with gaps and in the wrong order.
I have produced tiles with dimensions 1024 x 1024:
zoom level 1 has 4 PNG tiles,
zoom level 2 has 16 PNG tiles,
zoom level 3 has 64 PNG tiles.
If I set MKTileOverlay.tileSize = CGSizeMake(256, 256) and provide 256x256 tiles,
everything works fine (see the sketch below).
What could be the problem?
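For comparison, here is a sketch of the working 256 px setup described above (the URL template and self.mapView are placeholders):

// Works as expected; switching both the images and tileSize to
// 1024 x 1024 is what scatters the tiles.
MKTileOverlay *overlay = [[MKTileOverlay alloc]
    initWithURLTemplate:@"https://tiles.example.com/{z}/{x}/{y}.png"];
overlay.tileSize = CGSizeMake(256, 256);
[self.mapView addOverlay:overlay level:MKOverlayLevelAboveLabels];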

OpenGL texture mapping off by 5-8 pixels

I've got a bunch of thumbnails/icons packed right up next to each other in a texture map / sprite sheet. In pixel terms, they are being scaled up from 145 pixels square to 238 screen pixels square. I was expecting +-1 or 2 pixel accuracy on the edges of the box when accessing the texture coordinates, so I'm also drawing a 4 pixel outline on top of each thumbnail to hide this probable artifact. But I'm seeing huge variations in accuracy: sometimes it's off in one direction, sometimes the other.
I've checked over the math and I can't figure out what's happening.
The thumbnail is being scaled up about 1.64 times, so a single pixel off in the source texture coordinate could result in around 2 pixels off on the screen. The 4 pixel white frame on top is being drawn at a 1:1 pixel-to-fragment relationship and is supposed to cover about 2 pixels on either side of the edge of the box. That part is working. Here I've turned off the border to show how far off the texture coordinates are...
I can tweak the numbers manually to make it go away, but I have to shrink the texture coordinate width/height by several source pixels and in some cases add (or subtract) 5 or 6 pixels to the starting point. I really just want the math to work out, or to figure out what I'm doing wrong here. This sort of stuff drives me nuts!
A bunch of crap to know.
The texture coordinate offsetting is done in the vertex shader...
v_fragmentTexCoord0 = vec2((a_vertexTexCoord0.x * u_texScale) + u_texOffset.s, (a_vertexTexCoord0.y * u_texScale) + u_texOffset.t);
gl_Position = u_modelViewProjectionMatrix * vec4(a_vertexPosition,1.0);
This object is a box which is a triangle strip with 2 tris.
Not that it should matter, but matrix applied to the model isn't doing any scaling. The box is to screen scale. The scaling is happening only in the texture coordinates that are being supplied.
The texture coordinates of the object as seen above run from 0.00 to 0.07, and the shader then adds an offset that differs per thumbnail. 0.07 of 2048 is about 143 pixels (0.07 x 2048 = 143.36). Originally I had it at 0.0708, which should be closer to 145, but that was worse and showed more like 148 pixels from the texture. To get it to show only 145 source pixels I have to make it 0.06835, which is 140 pixels.
I've tried doing the math in a calculator and typing the numbers in directly. I've also tried things like =1305/2048. These are going into GLfloats, not doubles.
This texture map image is a PNG and is loaded with these settings:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
but I've also tried GL_LINEAR with no apparent difference.
I'm not having any accuracy problems on other textures (in the same texture map) where I'm not doing the texture scaling.
It doesn't get farther off as the coords get higher. In the image above, the NEG MAP thumb is right next to the HEAT MAP thumb; they are off in different directions but correct at the seam.
Here's the offset data for those two:
filterTypes[FT_gradientMap20].thumbTexOffsetS = 0.63720703125;
filterTypes[FT_gradientMap20].thumbTexOffsetT = 0.1416015625;
filterTypes[FT_gradientMap21].thumbTexOffsetS = 0.7080078125;
filterTypes[FT_gradientMap21].thumbTexOffsetT = 0.1416015625;
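(Those offsets are exact multiples of the 145 px grid in the 2048 px atlas: 0.63720703125 x 2048 = 1305 = 9 x 145, and 0.7080078125 x 2048 = 1450 = 10 x 145. A hypothetical helper that derives them:)

// Hypothetical: offset for the thumbnail in grid slot `index`, e.g.
// thumbOffset(9) == 0.63720703125 and thumbOffset(10) == 0.7080078125.
static float thumbOffset(int index) {
    return (float)(index * 145) / 2048.0f;
}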
==== UPDATE ====
A couple of things off the bat I realized I was doing wrong and are discussed over here: OpenGL Texture Coordinates in Pixel Space
The width of a single thumbnail is 145 pixels. But that spans pixels 0-144, with 145 starting the next one, so using a width of 145 is 1 pixel too big. Using the center-of-pixel math above, we should actually go from the center of pixel 0 to the center of pixel 144: 144.5 - 0.5 = 144.
Using his formula of (2i + 1)/(2N), I made new offset amounts for each of the starting points and used 144/2048 as the width. That made things better, but still off in some areas, and again off in one direction sometimes and in the other at other times, although consistently for each x or y position.
Using a width of 143 gives better results, but I can fix them all just by adjusting the numbers manually. I want the math to work out right.
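Here is a hypothetical helper for that center-of-pixel math, mapping thumbnail slot i (each w texels wide in an N-texel atlas) to coordinates running from the center of its first texel to the center of its last:

// Hypothetical (2i + 1)/(2N)-style helper: texelCenterRange(9, 145, 2048, &s0, &s1)
// samples from the center of texel 1305 to the center of texel 1449.
static void texelCenterRange(int i, int w, int N, float *s0, float *s1) {
    int first = i * w;
    *s0 = (2.0f * first + 1.0f) / (2.0f * N);            // center of first texel
    *s1 = (2.0f * (first + w - 1) + 1.0f) / (2.0f * N);  // center of last texel
}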
... or.. maybe it has something to do with min/mag filtering - although I read up on that and what I'm doing seems right for this case.
After a lot of experiments and having to create a grid-lined guide texture so I could see exactly how far off each texture was... I finally got it!
It's pretty simple actually.
uniform mat4 u_modelViewProjectionMatrix;
uniform mediump vec2 u_texOffset;
uniform mediump float u_texScale;
attribute vec3 a_vertexPosition;
attribute mediump vec2 a_vertexTexCoord0;
It was the precision of the texture coordinates. By specifying mediump, it just fixed itself. I suspect this would also help solve the problem I was having in this question:
Why is a texture coordinate of 1.0 getting beyond the edge of the texture?
Once I did that, I had to go back to my original 145 width (which still seems wrong, but oh well). And for what it's worth, I then went back to all my original math on all the texture coordinates. The "center of pixel" method was showing more of the neighboring pixels than the straight /2048 did.

ImageMagick losing aspect ratio

Very simply, I have a script that calls ImageMagick on my photos.
The original image is 320 x 444, and I want to create a few scaled-down versions keeping the same aspect ratio.
I call ImageMagick using:
convert oldfile.png -resize 290x newfile.png
I want to scale to my set widths and have the heights scale accordingly.
I do 80x, 160x and 290x in 3 separate commands.
The smallest two produce images with the original aspect ratio; the 290x one does not.
The size of the image it produces is 290 x 402.
I have no idea why that one fails to keep the aspect ratio while the other two maintain it.
Any ideas?
I think the problem in the third command is the requested size itself:
in the first command both dimensions are divided by 4: 320/4 = 80 and 444/4 = 111;
in the second command both dimensions are divided by 2: 320/2 = 160 and 444/2 = 222.
444 and 320 have only two common divisors: 2 and 4. You already used those divisors in your first two commands, so any other (width, height) pair will give you a slightly different aspect ratio: it is impossible to obtain exactly the same aspect ratio with the width fixed at 290.
In fact, while your original image has an aspect ratio of 1.3875, a 290x403 image would give you an aspect ratio of 1.389655172 and a 290x402 image a ratio of 1.386206897: with the width fixed at 290, no height yields the desired aspect ratio.
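To see where the 402 comes from, a quick check of the arithmetic (a plain C snippet; the exact rounding mode is an assumption, though 402.375 lands on 402 whether truncated or rounded):

#include <stdio.h>

int main(void) {
    double h = 444.0 * 290.0 / 320.0;  // height preserving 320:444 at width 290
    printf("%f\n", h);                 // prints 402.375 -> ImageMagick outputs 402
    return 0;
}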
In general however Imagemagick always tries to preserve the aspect ratio of the image, as you can read in Imagemagick documentation:
The argument to the resize operator is the area into which the image
should be fitted. This area is not the final size of the image but the
maximum sizes for the image. that is because IM tries to preserve the
aspect ratio of the image more than the final (unless a '!' flag is
given), but at least one (if not both) of the final dimensions should
match the argument given image. So let me be clear... Resize will fit
the image into the requested size. It does NOT fill, the requested box
size.