Resizable image resource with embedded cap insets - objective-c

This is by far not a showstopper problem, just something I've been curious about for some time.
There is this well-known -[UIImage resizableImageWithCapInsets:] API for creating resizable images, which comes in really handy when texturing variable-size buttons and frames, especially on the retina iPad, and especially if you have lots of those and want to avoid bloating the app bundle with image resources.
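For reference, this is the call I mean (the image name, cap values, and myButton are just placeholders):

// One stretchable centre region; the 12-point edges are never scaled.
UIImage *background = [[UIImage imageNamed:@"buttonBackground"]
    resizableImageWithCapInsets:UIEdgeInsetsMake(12.0, 12.0, 12.0, 12.0)];
[myButton setBackgroundImage:background forState:UIControlStateNormal];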
The cap insets are typically constant for a given image, no matter what size we want to stretch it to. We can also put it this way: the cap insets are characteristic of a given image. So here is the thing: if they logically belong to the image, why don't we store them together with the image (as some kind of metadata), instead of having to specify them everywhere we create a new instance?
In daily practice, this could have serious benefits, mainly by eliminating the possibility of human error in the process. If the designer who creates the images could embed the appropriate cap values in the image file itself upon export, then developers would no longer have to write magic numbers in the code and keep them updated each time the image changes. The resizableImage API could read and apply the caps automatically. Heck, even a category on UIImage would do.
Thus my question is: is there any reliable way of embedding metadata in images?
I'd like to emphasize these two words:
reliable: I have already seen some entries on the optional PNG chunks but I'm afraid those are wiped out of existence once the iOS PNG optimizer kicks in. Or is there a way to prevent that? (along with letting the optimizer do its job)
embedding: I have thought of including the metadata in the filename, similarly to what Apple does with "@2x", "~ipad", etc., but having kilometer-long names like "image-20.0-20.0-40.0-20.0@2x.png" just doesn't seem to be the right way.
Can anyone come up with a smart solution to this?

Android has a file type called nine-patch that is basically the image plus embedded metadata describing how to stretch it. Perhaps a class could be made to replicate it. http://developer.android.com/reference/android/graphics/NinePatch.html
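In the same spirit, here is a rough sketch of what a UIImage category doing this on iOS could look like. It is only an illustration: the convention (the designer writes "top,left,bottom,right" in points into the PNG's Description text field on export) is made up, @2x/~ipad name resolution is ignored, and the question's caveat about the build-time PNG optimizer possibly stripping such metadata still applies.

#import <UIKit/UIKit.h>
#import <ImageIO/ImageIO.h>

@interface UIImage (EmbeddedCapInsets)
+ (UIImage *)cappedImageNamed:(NSString *)name;
@end

@implementation UIImage (EmbeddedCapInsets)

+ (UIImage *)cappedImageNamed:(NSString *)name
{
    UIImage *image = [UIImage imageNamed:name];
    NSURL *url = [[NSBundle mainBundle] URLForResource:name withExtension:@"png"];
    if (!image || !url) return image;

    // Read the PNG metadata without decoding the pixels.
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);
    if (!source) return image;
    NSDictionary *props = CFBridgingRelease(CGImageSourceCopyPropertiesAtIndex(source, 0, NULL));
    CFRelease(source);

    // Hypothetical convention: "top,left,bottom,right" in the Description field.
    NSString *caps = props[(NSString *)kCGImagePropertyPNGDictionary][(NSString *)kCGImagePropertyPNGDescription];
    NSArray *parts = [caps componentsSeparatedByString:@","];
    if (parts.count != 4) return image;   // no (surviving) cap metadata; plain image

    UIEdgeInsets insets = UIEdgeInsetsMake([parts[0] floatValue], [parts[1] floatValue],
                                           [parts[2] floatValue], [parts[3] floatValue]);
    return [image resizableImageWithCapInsets:insets];
}

@end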

What are layout transitions in graphics programming

I'm following vulkan-tutorial.com, and in the images portion of the tutorial it mentions layout transitions but doesn't elaborate on what they are. I don't like to copy and paste code without knowing exactly what it does, and I can't find a sufficient explanation in the tutorial or on Google.
A "layout transition" is exactly what those words mean. It's when you transition the layout of an image sub-resource from one layout to another. So your question really seems to be... what is a layout?
In the Vulkan abstraction, images are composed of sub-resources. These represent distinct sections of an image which can be manipulated independently of other sections. For example, each mipmap level of a mipmapped image is a sub-resource.
At any particular time that an image sub-resource is being used by a GPU process, that sub-resource has a layout. This is part of the Vulkan abstraction of GPU operations, so exactly what it means to the GPU will vary from chip to chip.
The important part is this: layouts restrict how you can use an image sub-resource. Or more to the point, in order to use an image sub-resource in a particular way, it must be in a layout which permits that usage.
When a sub-resource is in the VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL layout, you can only perform operations which read from the sub-resource within a shader. The shader cannot write to the image, nor can the image be used as a render target.
Now, the general layout allows pretty much any use at any time while within that layout. However, this also can represent less optimal performance. Any of the more restricted layouts can make those accesses to the image more performance-friendly (depending on hardware).
So it is your job to keep track of the layout of any image sub-resources you plan to use. Now, for most images, you're going to use the transfer-destination layout to upload to them and then just leave them as shader-read-only, because you don't generally use most images in more arbitrary ways. So in practice, this means keeping track of render targets that you want to read from, as well as swapchain images (you have to transition them to the present layout before presenting them) and storage images.
Layout transitions typically happen as part of an explicit dependency between two operations. This makes sense; if you're uploading data to an image, and you later want to read from it, you need a dependency between the upload and the read. You may as well do the layout transition then, since the transition can modify the way the bytes of the image are stored, so you need the transfer to be done first.
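As a concrete illustration, the upload-then-read case described above typically records a barrier like the following. This is a minimal C sketch: the command buffer and image are assumed to already exist, and the stage and access masks depend on how you actually use the image.

#include <vulkan/vulkan.h>

/* Record a transition of the first mip level of `image` from the layout used
   as a copy destination to the layout required for shader sampling. */
static void transitionToShaderRead(VkCommandBuffer cmdBuf, VkImage image)
{
    VkImageMemoryBarrier barrier = {0};
    barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
    barrier.srcAccessMask = VK_ACCESS_TRANSFER_WRITE_BIT;          /* the upload must finish...  */
    barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;             /* ...before shaders read it  */
    barrier.oldLayout = VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL;      /* layout used for the copy   */
    barrier.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;  /* layout needed for sampling */
    barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.image = image;
    barrier.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
    barrier.subresourceRange.baseMipLevel = 0;
    barrier.subresourceRange.levelCount = 1;                       /* one sub-resource: mip 0    */
    barrier.subresourceRange.baseArrayLayer = 0;
    barrier.subresourceRange.layerCount = 1;

    vkCmdPipelineBarrier(cmdBuf,
                         VK_PIPELINE_STAGE_TRANSFER_BIT,           /* producer: the copy         */
                         VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,    /* consumer: fragment reads   */
                         0, 0, NULL, 0, NULL, 1, &barrier);
}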

How to return acquired SwapChain image back to the SwapChain?

I can currently acquire a swap chain image, draw to it, and then present it. After vkQueuePresentKHR the image is returned to the swap chain. Is there another way to return the image? I do not want to display the rendered data on screen.
You can probably do what you want here by simply not presenting the images. But the number of images you can acquire at once depends on the VkSurfaceCapabilitiesKHR of your device.
The maximum number of images that the application can simultaneously acquire from this swapchain is derived by subtracting VkSurfaceCapabilitiesKHR::minImageCount from the number of images in the swapchain and adding 1.
On my device, I can have an 8-image swapchain and the minImageCount is 2, letting me acquire 7 images at once.
If, for whatever reason, you really want to scrap the frame, just do not Present the Image and reuse it next iteration (do not Acquire a new Image; use the one you already have).
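In loop form, that reuse pattern looks roughly like this. It is only a sketch: all synchronization and error handling are omitted (and, as noted further down, you do still have to synchronize), as is the actual submit step.

#include <vulkan/vulkan.h>
#include <stdbool.h>

/* Sketch of "don't present, reuse": keep the acquired image across frames
   until one is actually worth showing. Synchronization omitted. */
static void renderLoop(VkDevice device, VkSwapchainKHR swapchain,
                       VkQueue presentQueue, VkSemaphore acquireSemaphore,
                       bool (*frameShouldBeShown)(void))
{
    uint32_t imageIndex = 0;
    bool haveImage = false;

    for (;;) {
        if (!haveImage) {
            vkAcquireNextImageKHR(device, swapchain, UINT64_MAX,
                                  acquireSemaphore, VK_NULL_HANDLE, &imageIndex);
            haveImage = true;
        }

        /* ... record and submit rendering into swapchain image imageIndex ... */

        if (frameShouldBeShown()) {
            VkPresentInfoKHR presentInfo = {0};
            presentInfo.sType = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR;
            presentInfo.swapchainCount = 1;
            presentInfo.pSwapchains = &swapchain;
            presentInfo.pImageIndices = &imageIndex;
            vkQueuePresentKHR(presentQueue, &presentInfo);  /* image goes back to the swapchain */
            haveImage = false;                              /* must acquire again next frame    */
        }
        /* else: keep the image and render into it again next iteration */
    }
}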
If there's a possibility you are never going to use some Swapchain Image, you still do not need to worry about it. Acquired Images will be reclaimed (unpresented) when a Swapchain is destroyed.
Seeing your usage comment now, I must add that you still need to synchronize, that image reuse is not guaranteed to be round-robin, and that the whole idea sounds misguided. Creating a Swapchain is about the same amount of programming work as creating an Image and binding memory to it, and the result is not how a Swapchain is meant to be used.
From a practical standpoint, you will probably not have a good choice of Swapchain Image formats, types, and usage flags, and you may be limited in the sizes and number of images you can use. It will probably not work well across platforms, and it may come with a performance hit too.
TL;DR Swapchains are only for interaction with the windowing system (or lack thereof) of the OS. For other uses there are appropriate non-Swapchain commands and objects.
Admittedly, Vulkan is sometimes less than terse to write in (a product of it being C-based, reasonably low-level, and abstracting a wide range of GPU-like hardware), but your proposed technique is not a viable way around that. You need to get used to it and, where appropriate, make your own abstractions (or use a library that does this for you).
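To make the TL;DR concrete: for off-screen rendering, the usual route is a plain VkImage that you create yourself instead of a swapchain image. A rough C sketch follows; memory allocation and binding, image-view creation, and error handling are all omitted, and the format and size choices are illustrative.

#include <vulkan/vulkan.h>

/* Create an ordinary image to render into and later sample from,
   with no swapchain or surface involved. */
static VkImage createOffscreenTarget(VkDevice device, uint32_t width, uint32_t height)
{
    VkImageCreateInfo info = {0};
    info.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
    info.imageType = VK_IMAGE_TYPE_2D;
    info.format = VK_FORMAT_R8G8B8A8_UNORM;             /* free choice of format here  */
    info.extent.width = width;
    info.extent.height = height;
    info.extent.depth = 1;
    info.mipLevels = 1;
    info.arrayLayers = 1;
    info.samples = VK_SAMPLE_COUNT_1_BIT;
    info.tiling = VK_IMAGE_TILING_OPTIMAL;
    info.usage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT     /* render into it...           */
               | VK_IMAGE_USAGE_SAMPLED_BIT;             /* ...then read it in a shader */
    info.sharingMode = VK_SHARING_MODE_EXCLUSIVE;
    info.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;

    VkImage image = VK_NULL_HANDLE;
    vkCreateImage(device, &info, NULL, &image);          /* then allocate and bind memory */
    return image;
}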

Huge images in iOS app without CATiledLayer?

I have an image of about 7000x6000 px. I need this to be in a scrollview/imageview in my app, but it is way too huge to display as-is. It is supposed to be a kind of map. I was hoping to keep the size of the app to a minimum, and the image is only about 13 MB as a .jpg. As a .png it is over 100 MB, which is unacceptable. Many have suggested CATiledLayer as an option, but I believe this would result in even bigger file sizes. Anyway, I tried to do it with CATiledLayer and created my own tiles in TileCutter (tiles in .jpg), and the size wasn't too bad. But I am having errors all over the place. The iOS version of CATiledLayer is a mystery to me, and I can't find a way to solve this. I get an error that amounts to the Java equivalent of "index out of bounds of array", even though the array has content at that specific index.
There is a method which returns an array. The array contains data from a .plist. Before the return I log the contents of the array, which shows good data. The caller then tries to access
[array objectAtIndex:0]
and put the result in a dictionary, but it throws the out-of-bounds exception. When I log the whole array, I can clearly see the content, but when I log a single element with
NSLog(@"%@", [array objectAtIndex:0]); I get the same exception.
Anyway, CATiledLayer has given me nothing but problems. I have been reverse-engineering the PhotoScroller project with no luck. Anyone have any other solutions?
Thanks.
Apple has this really neat project, PhotoScroller, that uses CATiledLayer and lets you scroll through several images and zoom them. This seemed really neat until I found out that Apple "cheated" and pre-tiled the images (about 800 tiles saved as files in the bundle!).
I needed a similar capability, but had to download the images from the network. Thus came about PhotoScrollerNetwork. With the TiledImageBuilder you can download (or read from disk) massive images - I even tested an 18000x18000 image - and it works.
What this class does is start tiling the image as it downloads (when using libjpeg-turbo), or it can save the file and then tile it (which takes longer). The class figures out how many levels of detail are needed to show the image both at full resolution and sized to fit the containing view (a scrollview).
The class uses the disk cache to hold the tiles, but uses an old Unix trick of creating a file, opening it, then unlinking it, so that the tiles never really get saved - once the class is dealloced (and the file descriptor closed) the tiles are freed (and if your app crashes, they get freed too).
Someone had problems on an iPad 1 with its quite limited memory, so the class now throttles its use of the file system when concurrently loading multiple images. I had a discussion with the iOS kernel manager at WWDC this year, and after explaining the throttling technique to him, he said the algorithm (on managing the amount of disk cache usage) was probably the best technique that could be used.
I think those who suggested CATiledLayer are right. You should really use it! If you need a sample project that displays a huge bitmap using that technology, look here: http://www.cimgf.com/2011/03/01/subduing-catiledlayer/
Many technologies we use as Cocoa/Cocoa Touch developers stand untouched by the faint of heart because often we simply don’t understand them and employing them can seem a daunting task. One of those technologies is found in Core Animation and is referred to as the CATiledLayer. It seems like a magical sort of technology because so much of its implementation is a bit of a black box and this fact contributes to it being misunderstood. CATiledLayer simply provides a way to draw very large images without incurring a severe memory hit. This is important no matter where you’re deploying, but it especially matters on iOS devices as memory is precious and when the OS tells you to free up memory, you better be able to do so or your app will be brought down. This blog post is intended to demonstrate that CATiledLayer works as advertised and implementing it is not as hard as it may have once seemed.
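To give a feel for what that involves, here is a minimal Objective-C sketch in the spirit of that post. The class name, tile naming scheme, and tile size are made up; the usual approach is to pre-cut the map into small JPG tiles (as the question already does with TileCutter) and let the tiled layer request only the visible ones.

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface TiledMapView : UIView
@end

@implementation TiledMapView

+ (Class)layerClass {
    return [CATiledLayer class];        // back this view with a tiled layer
}

- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *layer = (CATiledLayer *)self.layer;
        layer.tileSize = CGSizeMake(256.0, 256.0);
        layer.levelsOfDetail = 4;       // zoomed-out representations available
        layer.levelsOfDetailBias = 2;   // extra detail when zooming in
    }
    return self;
}

// Called once per visible tile (on background threads) with the clip already
// set to that tile's rect, so only small images are ever decoded at a time.
// This sketch ignores zoom levels; see the linked post for the full treatment.
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect tileRect = CGContextGetClipBoundingBox(ctx);
    NSInteger col = (NSInteger)(tileRect.origin.x / 256.0);
    NSInteger row = (NSInteger)(tileRect.origin.y / 256.0);
    NSString *tileName = [NSString stringWithFormat:@"map_tile_%ld_%ld", (long)col, (long)row];
    UIImage *tile = [UIImage imageNamed:tileName];   // hypothetical pre-cut tile
    [tile drawInRect:tileRect];
}

@end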

Should I make different images for each folder in an Android application? (hdpi, mdpi, ldpi)

I want to develop a game but I'm not sure about the sizes of the pics.
I checked the app after I finished one animation, on my phone and on my emulator; the pics changed their size by themselves. So should I make the same pic in a different size for each folder, or make one pic in one size and put it in each of the folders?
Thanks!
You don't have to, but it can be useful: sometimes it's better to use a simpler image at small resolutions, and that's where the different densities are helpful.
It also allows more fine-grained control over scaling: if you only provide an hdpi version, a lot of detail can be lost when the system has to scale that image down significantly. By providing an ldpi version, you supply a clean base picture which won't need to be scaled as much, resulting in fewer potential scaling artifacts (though unless the other versions are cleaned up a bit, you probably won't see much of a difference in the end).

Ajax request to SQL returns hundreds of images to JSON -> CSS Sprites? Base64?

We are writing a Web-based events calendar with thousands of theatre shows, festivals, and concerts in a SQL database.
The user goes to the Website, performs a search, the SQL server returns JSON, and jQuery code on the client side displays the 200+ events.
Our problem is that each event has an image. If I return URLs, the browser has to make 200+ HTTP GET requests for these tiny (2-4 KB) images.
It is tempting to pack all the images into one CSS sprite, but since the user searches can return any combination of images, this would have to be done dynamically and we'd lose control over caching. Every search would have to return every image in a brand new CSS sprite. Also, we want to avoid any solution that requires dynamically reading or writing from a filesystem: that would be way too slow.
I'd rather return binary images as part of the JSON, and keep track of which images we already have cached and which we really need. But I can't figure out how. It seems to require converting to Base64? Is there any example code on this?
Thanks for any help you can give! :)
The web site you are using (StackOverflow) has to provide 50 avatars for the 50 questions shown on the questions page. How does it do it? The browser makes 50 requests.
I would say you had better implement pagination so that the list does not get too big. Also, you can load images in the background, or as the user scrolls down to them.
Keeping track of downloaded images is not our job, it is the browser's. Our job is to make sure the URL is unique and consistent and that the response headers allow caching.
Also, base64 will make the stream at least 33% longer, while its processing on the client side is not trivial - I have never seen an implementation, but probably someone has written some JavaScript for it.
I believe all you need is just pagination.
It looks like the original poster has proceeded with essentially this solution on their own; however, based on their comment about the 33% space overhead, I don't think they observed an unexpected phenomenon you get when base64-ing a bunch of images into one response entity and then gzipping it: believe it or not, this can easily produce an overall transfer size that is smaller than the total of the unencoded image files, even if each unencoded image was also gzipped separately.
I've covered this JSON technique for accelerated image presentation in depth here as an alternative to CSS sprites, complete with a couple of live samples. It aims to show that it is often a superior technique to CSS sprites.
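The encoding step itself is tiny in any language. Since the backend language isn't specified in the question, here is an illustration of the data-URI idea in Objective-C, simply to match the rest of this page; the JSON field name is made up.

#import <Foundation/Foundation.h>

// Turn raw JPEG bytes into a data URI that can be embedded directly in the
// JSON response and assigned to an <img> src on the client.
static NSString *DataURIForJPEGData(NSData *jpegData)
{
    NSString *base64 = [jpegData base64EncodedStringWithOptions:0];
    return [NSString stringWithFormat:@"data:image/jpeg;base64,%@", base64];
}

// One event entry in the JSON payload might then look like:
//   { "title": "Some show", "image": "data:image/jpeg;base64,/9j/4AAQ..." }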
Data is never random. For example you could name your sprites
date_genre_location.jpg
or however you organise your searches. This might be worth it. Maybe.
You'll need to do the math.
Here is what we decided.
Creating sprites has some downsides:
To reduce image loss, we need to store the originals as PNGs instead of JPEGs on the server. This is going to add to the slowness, and there's already quite some slowness in creating dynamic CSS sprites with ImageMagick.
To reduce image size, the final CSS sprite must be a JPG, but JPGs are lossy and with a grid of images, things get weird at the edges as JPG tries to blend and compress. We can fix that by putting a blank border around all the images but even so, this adds complexity.
With CSS sprites, it becomes harder to do intelligent caching: we have to pack up all the images regardless of what we've sent in the past (or we have to keep all the old CSS sprites regardless of whether they contain many or few images we still need).
I'm afraid we have too many combinations of date-category-location to precompute CSS sprites, although surely we could handle part of the data this way.
Thus our solution is to Base64-encode the images and actually send each image one by one. Despite the 33% overhead, it is far less complex to code and manage, and when you take caching issues into account, it may even mean less data transfer.
Thank you all for your advice on this!