Can't get MKTileOverlay to support 1024 x 1024 sized tiles... - ios7

If I set the tileSize property of MKTileOverlay to CGSizeMake(1024, 1024), the map view scatters my tiles across the view, with gaps and in the wrong order. I have produced tiles with dimensions 1024 x 1024:
zoom level 1 has 4 PNG tiles,
zoom level 2 has 16 PNG tiles,
zoom level 3 has 64 PNG tiles.
If I set MKTileOverlay.tileSize = CGSizeMake(256, 256) and provide 256 x 256 tiles, everything works fine. What could be the problem?
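For reference, a minimal Swift sketch of the setup the question describes (the URL template and the mapView variable are hypothetical placeholders):

import MapKit

// Minimal sketch of the setup described in the question.
let overlay = MKTileOverlay(urlTemplate: "https://example.com/tiles/{z}/{x}/{y}.png")
overlay.tileSize = CGSize(width: 1024, height: 1024) // default on iOS is 256 x 256
mapView.addOverlay(overlay, level: .aboveLabels)

// MKMapViewDelegate: hand the tile overlay to a tile renderer.
func mapView(_ mapView: MKMapView, rendererFor overlay: MKOverlay) -> MKOverlayRenderer {
    if let tiles = overlay as? MKTileOverlay {
        return MKTileOverlayRenderer(tileOverlay: tiles)
    }
    return MKOverlayRenderer(overlay: overlay)
}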

Related

Using GPU to rasterize image with 128 color channels

I need to rasterize a multispectral image where each pixel contains the intensity (8 bits) at 128 different wavelengths, for a total of 1024 bits/pixel.
Currently I am using OpenGL and rasterizing in 43 passes, each producing an image with 3 of the 128 channels, but this is too slow.
Is it possible to do it in a single pass by somehow telling the GPU to rasterize a 128-color-component image (not necessarily using OpenGL)?
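Some back-of-the-envelope arithmetic on the pass counts (the multiple-render-target figures below are an assumption on my part, not something the question states; OpenGL caps simultaneous color attachments at GL_MAX_DRAW_BUFFERS, which is commonly 8):

// Pass-count arithmetic for the scheme described above.
let channels = 128
let perPassRGB = 3                            // one RGB image per pass, as in the question
let rgbPasses = (channels + perPassRGB - 1) / perPassRGB   // = 43 passes

// Hypothetical alternative: pack channels into several RGBA render targets per pass.
let drawBuffers = 8                           // GL_MAX_DRAW_BUFFERS is commonly 8
let perPassMRT = drawBuffers * 4              // 8 RGBA targets = 32 channels per pass
let mrtPasses = (channels + perPassMRT - 1) / perPassMRT   // = 4 passes, not 1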

BufferedImage drawString corrupted letters

I'm writing a program in Kotlin to run on a Raspberry Pi, with a small 128x64 pixel OLED display.
To display text, I draw it on a BufferedImage and then display that image:
import java.awt.Color
import java.awt.Font
import java.awt.image.BufferedImage

val bufferedImage = BufferedImage(128, 64, BufferedImage.TYPE_INT_RGB)
val g = bufferedImage.createGraphics()
g.paint = Color.WHITE
g.font = Font("PixelMix", Font.BOLD, 8) // font size 8
g.drawString("IP: 192.168.1.12", 0, 24)
g.dispose()
display.drawImage(bufferedImage, 0, 0) // display is the OLED driver object
link to font: https://www.dafont.com/pixelmix.font
Because screen space is very limited, I use a small font.
But here comes the problem: the code above produces an image in which the first 1 lacks its vertical line, while the last 1 is a square.
When I do g.font = Font("PixelMix", Font.PLAIN, 10), the 2 in 192 lacks its right-most pixel column, and similarly at font size 12. At font size 16 the P lacks its vertical line, and so on. Only at font size 24 does everything look acceptable, but 24 is way too big for a screen of that size.
My question now is how do I draw a string on BufferedImage so that I don't get artifacts like that?
EDIT:
At font size 10, exactly 8 pixels (vertically) are used to display a character.
At font size 8 it's 6 pixels vertically, and at font size 16 it's 12 pixels.
I tried g.setRenderingHint(RenderingHints.KEY_TEXT_ANTIALIASING, RenderingHints.VALUE_TEXT_ANTIALIAS_OFF), but it did not help.

How does the pixel size change after the aspect ratio is changed?

I am using an NI PCI-1411 frame grabber card, and the signal is an RS-170 signal. According to the discussion at https://www.cs.rochester.edu/~nelson/courses/vision/resources/video_signals.html:
"The aspect (width to height) ratio for typical RS-170 signal rectangle is 4:3. The vertical resolution of video is limited to 485 pixels, as determined by the number of scan lines. The RS-170 standard specifies the aspect ratio (ratio of vertical/horizontal dimensions) of the video display as 3:4."
The CCD I use is a Hitachi KP-M1AN. Its pixel count is 768 (H) x 494 (V), the pixel size is 11.64 µm x 13.5 µm, the sensing area is 8.91 x 6.67 mm, and its horizontal/vertical TV resolution is 570/485.
Here are my questions:
1. I gave a square object to the CCD. Given the pixel size on the CCD, the horizontal pixel count should differ from the vertical pixel count; however, I notice the pixel counts in both directions are the same. So I wonder what the "real" pixel size is now, after the ratio change.
For example, say the square object is 143 µm x 143 µm and the CCD pixel size is 11 µm x 13 µm. Going by the pixel size it should come out as 13 pixels x 11 pixels, but what I see is 12 pixels x 12 pixels.
2. Following on from question 1, if the pixel size changes owing to the ratio change, how does the NI software change the pixel size (I mean, does it stretch it or compress it)?
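A quick worked version of the numbers in question 1 (the square-pixel reading below is an assumption on my part, not something taken from the NI documentation):

// Worked arithmetic for the 143 µm square example in question 1.
let objectSize = 143.0                   // µm
let cellWidth = 11.0                     // µm, CCD photosite width (from the example)
let cellHeight = 13.0                    // µm, CCD photosite height

let expectedH = objectSize / cellWidth   // = 13 pixels across, if the native grid were kept
let expectedV = objectSize / cellHeight  // = 11 pixels down

// Observed: 12 x 12. One reading (an assumption): the frame grabber resamples
// each scan line so the digitized pixels come out square, which would imply an
// effective pixel pitch of roughly
let effectivePitch = objectSize / 12.0   // ≈ 11.9 µm in both directions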

Can SoOffscreenRenderer use tiles bigger than 1024

The coin3d offscreen rendering class SoOffscreenRenderer is capable of rendering big images (e.g. 4000 x 2000 pixels), that don't fit on the screen or in a rendering buffer. This is done by partitioning the image into tiles that are rendered one after the other, where the default size of these tiles is 1024 x 1024.
I looked at the code of SoOffscreenRenderer and CoinOffscreenGLCanvas and found the environment variables COIN_OFFSCREENRENDERER_TILEWIDTH and COIN_OFFSCREENRENDERER_TILEHEIGHT. I could change the tile size using these variables, but only to sizes smaller than 1024: I could create tiles of 512 x 512 pixels, and also 768 x 768, but when I used values bigger than 1024, the resulting tiles were always 1024 x 1024.
Is it possible to use bigger tile sizes like 2048 x 2048 or 4096 x 4096, and how would I do that?
It is possible to use larger tiles, and Coin does it automatically: it finds out which tile sizes work by querying the graphics card driver.
From CoinOffscreenGLCanvas.cpp:
// getMaxTileSize() returns the theoretical maximum gathered from
// various GL driver information. We're not guaranteed that we'll be
// able to allocate a buffer of this size -- e.g. due to memory
// constraints on the gfx card.
The reason it did not work was that the environment variable COIN_OFFSCREENRENDERER_MAX_TILESIZE was set somewhere in our application via coin_setenv("COIN_OFFSCREENRENDERER_MAX_TILESIZE", "1024", 1);. Removing this call allowed bigger tile sizes to be used.
In CoinOffscreenGLCanvas::getMaxTileSize(void), the COIN_OFFSCREENRENDERER_MAX_TILESIZE variable is read and the tile size is clamped accordingly.
On my older computer this generated tiles of size 1024; on a newer machine the tiles came out at 4096.

How should I tile images for CATiledLayer?

I know how to tile images; I just don't get how the images should turn out, with sizes and so on.
The names should be Image_size_row_column, and one of the Apple tile images is:
Lake_125_0_0.png
I use TileCutter to tile the images, but I don't know whether I should tile my original image into 512x512 px tiles, then make a lower-resolution version of the original (from ≈7000x6000 down to ≈5000x4000) and tile that into 512x512 px as well, or what. I just don't get the whole setup.
The class reads images like this:
NSString *tileName = [NSString stringWithFormat:@"%@_%d_%d_%d",
                      imageName, (int)(scale * 1000), row, col];
And with the first of Apple's tiles named Lake_125_0_0.png, that tells me nothing. I just don't get it. Anyone?
Thanks.
The tiles are by default always 256 x 256 pixels (although in the Apple example some tiles at the border of the image are cropped).
Lake_1000_1_2: full-resolution tile at scale 1, row 1, col 2.
Lake_500_1_2: half resolution; the tile is still 256 x 256 pixels, but it shows an area of the image that is actually 512 x 512 pixels (so you lose quality).
Lake_250_1_2: quarter resolution.
Lake_125_1_2: shows 8*256 x 8*256 pixels of the original image inside a 256 x 256 pixel tile.
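In code, the naming scheme amounts to the following (a Swift sketch mirroring the format string quoted in the question; Lake is Apple's sample image):

// Builds a tile name like "Lake_125_0_0" from the scheme described above.
func tileName(imageName: String, scale: Double, row: Int, col: Int) -> String {
    return "\(imageName)_\(Int(scale * 1000))_\(row)_\(col)"
}

// Usage: scale 0.125 is the "125" level, i.e. one-eighth resolution.
let name = tileName(imageName: "Lake", scale: 0.125, row: 0, col: 0) // "Lake_125_0_0"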
I hope this helps.