Best cache size for iOS apps - objective-c

I'm currently developing an application that loads lots of images from the internet and saves them locally (I'm using SDURLCache). However, old images eventually have to be removed from disk again, so I was wondering what the best cache size is.
The advantage of a big cache is obviously that more images stay cached, which leads to a better UX.
The disadvantage is that images take up a lot of space and the user will run out of disk space faster. The size I am thinking of is 20MB. That seems quite big to me, though, so I'm asking what your opinion is.
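For reference, the setup I have in mind looks roughly like this (just a sketch: I'm assuming SDURLCache can be dropped in through NSURLCache's initWithMemoryCapacity:diskCapacity:diskPath: initializer and installed with setSharedURLCache:, and the 20MB disk figure is the value I'm unsure about):

    #import <Foundation/Foundation.h>
    #import "SDURLCache.h"   // from the SDURLCache project

    // In application:didFinishLaunchingWithOptions: (sketch only; the 20MB disk
    // capacity is the number in question, not a recommendation).
    NSString *cachePath = [[NSSearchPathForDirectoriesInDomains(NSCachesDirectory,
                                                                 NSUserDomainMask, YES)
                               objectAtIndex:0]
                              stringByAppendingPathComponent:@"URLCache"];

    SDURLCache *urlCache = [[SDURLCache alloc] initWithMemoryCapacity:2 * 1024 * 1024   // 2MB in RAM
                                                         diskCapacity:20 * 1024 * 1024  // 20MB on disk
                                                             diskPath:cachePath];
    [NSURLCache setSharedURLCache:urlCache];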

The best way to decide on an appropriate cache size is to test. Run the app under Instruments to measure both performance and battery usage. Keep increasing the cache size until you can't discern a difference in performance. That's the largest size you'd need, at least under the test conditions. Once you've established that size, reduce the size until performance is just barely acceptable to determine the smallest acceptable size.
The right size is somewhere between those two sizes, depending on what you think is important. If you can't determine a right size, then either pick a size or add a slider to the app's settings to let the user decide. (I'd avoid making it user-adjustable if you can -- users shouldn't have to think about such things.)

Considering that the smallest iDevices have 8GB of storage, I don't think a 20MB cache is too big, especially if it significantly improves the performance of the app. Also, keep in mind the huge advantage a network cache can have for battery life, since network usage is very expensive in terms of battery.
Determining the ideal size, however, is hard without some more information. How often is the same picture accessed? How large is each picture (i.e., how many pictures can 20MB hold)? How often will images need to be removed from the cache to make room for new ones?
If you are constantly changing the images in the cache, it could actually have an adverse effect on the battery life due to the increased disk usage.

Related

Does the RavenDB compression bundle provide benefits with many small documents?

I am trying to better understand how RavenDB uses disk space.
My application has many small documents (approximately 140 bytes each). Presently, there are around 81,000 documents which would give a total data size of around 11MB. However, the size of the database is just over 70MB.
Is most of the actual space being used by indexes?
I had read somewhere else that there may be a minimum overhead of around 600 bytes per document. This would consume around 49MB, which is more in the ballpark of the actual use I am seeing.
Would using the compression bundle provide much benefit in this scenario (many small documents), or is it targeted towards helping reduce the size of databases with very large documents?
I have done some further testing on my own and determined, in answer to my own question, that:
Indexes are not the main consumer of disk space in my scenario. In this case, indexes represent < 25% of the disk space used.
Adding the compression bundle for a database with a large number of small documents does not really reduce the total amount of disk space used. This is likely due to some minimum data overhead that each document requires. Compression would benefit documents that are very large.
Is most of the actual space being used by indexes?
Yes, that's likely. Remember that Raven creates indexes for the different queries you make. You can fire up Raven Studio to see which indexes it has created for you.
Would using the compression bundle provide much benefit in this scenario (many small documents), or is it targeted towards helping reduce the size of databases with very large documents?
Probably wouldn't benefit your scenario of small documents. The compression bundle works on individual documents, not on indexes. But it might be worth trying to see what results you get.
Bigger question: since hard drive space is cheap and only getting cheaper, and 70MB is a speck on the map, why are you concerned about hard drive space? Databases often trade disk space for speed (e.g. multiple indexes, like Raven), and this is usually a good trade-off for most apps.

Does CGBitmapContextCreate() have a size limit?

I want to make an extremely large bitmap (250,000 pixels on each side, to be eventually written out as BigTIFF). I don't see a memory size or dimensional limit anywhere in the docs; can Core Graphics handle it?
CG is not designed for that kind of workload.
(I'd be surprised if you found any general-purpose graphics framework that is, frankly. If you're pushing images that big, you're going to have to write your own code to get anything done in a reasonable amount of time.)
In my experience, images started to fail once dimensions got over 32767 or so. Not in any organized way, just crashes and hard-to-repro failures; certain parts of the API would work, others wouldn't. Things may be better in 64-bit but I wouldn't count on it.
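A quick back-of-the-envelope check shows the scale of the problem. This sketch assumes a plain 4-bytes-per-pixel RGBA context, which is the usual thing to ask CGBitmapContextCreate() for:

    #import <Foundation/Foundation.h>

    int main(void) {
        @autoreleasepool {
            // Dimensions from the question, assuming 4 bytes per pixel (RGBA8888).
            size_t side        = 250000;
            size_t bytesPerRow = side * 4;             // 1,000,000 bytes per row
            size_t totalBytes  = bytesPerRow * side;   // 250,000,000,000 bytes
            NSLog(@"Backing buffer required: ~%.0f GB", totalBytes / 1e9);  // ~250 GB
        }
        return 0;
    }

That's roughly a quarter of a terabyte for the raw pixel buffer alone, before Core Graphics does anything with it, which is why tiling and your own out-of-core code are unavoidable at that scale.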

Strange thing in memory management for ios development

I have an app on my iPod.
1. Open the app and look at the memory in Instruments (Activity Monitor template): it's 8.95MB.
2. Click a button that adds a UIImageView with a large image to the screen: the memory is now 17.8MB.
3. Remove the UIImageView from the screen and wait a second: the memory is now 9.09MB.
I am sure the UIImageView is released after it is removed from the screen; it's very simple code.
So once it is removed, the state of the app should be the same as it was before the UIImageView was added to the screen, right? Why, then, is the memory at 9.09MB rather than 8.95MB? If you add a more complex view to the screen, the difference is even more obvious.
This is normal. It's due to a "lazy grow, lazy shrink" algorithm. What that means is that you have a data structure that can be sized for small numbers of items or large numbers of items. The sizing for small numbers of items uses very little memory but isn't efficient when handling large numbers of items. The sizing for large numbers is very efficient for managing large collections of things, but uses more memory to index the objects.
A "lazy grow, lazy shrink" algorithm tries to avoid the cost of resizing a structure's index by only growing the index if it's much too small and only shrinking it if it's much too big. For example, a typical algorithm might grow the index only if its ideal size is at least three times bigger than it is and shrink it only if it's more than three times its ideal size. This is also needed to prevent large numbers of resize operations if an application rapidly allocates and frees collections of resources -- you want the index size to be a bit 'sticky'.
When you open the large object and consume GUI objects, you make the index much too small, and it grows. But when you close the large object, you make the index only a bit too big, so it doesn't shrink.
If the device comes under memory pressure, the index will shrink. If the application continues to reduce its use of UI resources, the index will shrink. If the application uses more UI resources, the index will not need to grow again quite as soon.
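If it helps to see the policy written down, here is a hypothetical sketch of that "sticky" resize rule; the factor of three is just the example number from above, not something Apple documents:

    #import <Foundation/Foundation.h>

    // Hypothetical illustration of a "lazy grow, lazy shrink" policy.
    // idealSize is what the index would pick if rebuilt from scratch for the
    // current item count; currentSize is what it is using right now.
    static NSUInteger nextIndexSize(NSUInteger currentSize, NSUInteger idealSize) {
        if (idealSize > currentSize * 3) {
            return idealSize;   // much too small: pay the cost of growing
        }
        if (currentSize > idealSize * 3) {
            return idealSize;   // much too big: finally give memory back
        }
        return currentSize;     // close enough: stay "sticky" and avoid re-indexing
    }

    int main(void) {
        @autoreleasepool {
            NSLog(@"%lu", (unsigned long)nextIndexSize(4, 64));   // grows to 64
            NSLog(@"%lu", (unsigned long)nextIndexSize(64, 40));  // stays at 64 (only a bit too big)
            NSLog(@"%lu", (unsigned long)nextIndexSize(64, 8));   // shrinks to 8
        }
        return 0;
    }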
A good analogy might be stacks of paper on your desk. If you have 30 papers you might need to find, you might keep them in 4 stacks. But if you have 5,000 papers, 4 stacks will make searching tedious. You'll need more stacks in that case. So when the number of papers gets too big for 4 stacks, you need to re-index into a greater number of stacks. But then when the number gets small, you won't bother to constantly re-index until you have way too many stacks, because searching is still pretty fast.
When you're done handling all those papers, your desk has a few extra stacks. That saves it from re-indexing the next time it needs to handle a lot of papers.

Performance overhead for frequent (5Hz) Core Data saves

For an iPhone app that plays audio files, I'm working on a system to track the user's progress in any episode they've listened to (e.g., they listen to the first 4:35 of file1, then start another file, and later go back to file1 and it resumes at 4:35).
I've set up a Core Data model to store the metadata, but I'm wondering how aggressively I could/should cache the current location during playback.
Currently I have just stuck the save: call in a method that was previously being used to update the time labels and the UISlider playhead. That method is called by an NSTimer every 0.2 seconds.
0.2 seconds is much more precision than I need to keep track of for the progress cache. The values are rounded to the nearest second anyway, so essentially 4/5 of every save is redundant.
Given, though, that this is pretty much all Core Data is doing (it's only ever dealing with a single value for a single record at any given time), I'm wondering whether it makes more sense to just do the extra, unnecessary save: calls, or to manage a second timer that does the update less frequently.
As is, Instruments reports the Save Duration of each event as ~800, peaking around 2000. I'm not really sure how to interpret those results. Actual app performance in the simulator doesn't appear to be significantly impacted.
If this kind of save is so cheap that it makes sense to keep code complexity low (only managing a single timer), I would keep it as is, but my gut instinct is that that's a lot of operations, no matter how cheap.
You shouldn't see as much of a difference in performance as you may see in battery consumption.
Writing to the flash storage in an iOS device is much faster than writing to a spinning-platter HDD in a computer. Also, a write to an HDD does not cost much electricity compared to just keeping the platter spinning anyway. However, writing to flash storage takes noticeably more power than a read, or than leaving the flash alone.
In other words, the power consumption of a write on an iOS device is not negligible. If you can drop the four redundant saves each second, that could easily result in a notable improvement in battery consumption for your app.
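To make that concrete, here is a rough sketch of skipping the redundant saves inside the existing timer callback; the episode object, the lastSavedSecond property, and the method names are made up for illustration, not taken from your code:

    // Sketch only: property and entity names are hypothetical.
    - (void)playbackTimerFired:(NSTimer *)timer {
        NSTimeInterval current = self.audioPlayer.currentTime;
        [self updateTimeLabelsAndSlider:current];               // still runs at 5Hz for the UI

        NSInteger roundedSecond = (NSInteger)(current + 0.5);   // progress is stored to the nearest second
        if (roundedSecond == self.lastSavedSecond) {
            return;                                             // skip the 4 out of 5 redundant saves
        }
        self.lastSavedSecond = roundedSecond;

        [self.episode setValue:[NSNumber numberWithInteger:roundedSecond]
                        forKey:@"lastPosition"];
        NSError *error = nil;
        if (![self.managedObjectContext save:&error]) {
            NSLog(@"Progress save failed: %@", error);
        }
    }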

How important is size in an application?

When creating applications (Java, running on a normal computer), how important is program size for users? For example, would it be necessary to replace .png's with .jpg's, convert .wav's to .midi's, or strip down libraries to save space, or do users generally not care if my program is 5MB when it could be 50KB if stripped down?
Thanks.
That depends on the delivery mechanism.
Size is generally only relevant in terms of the bandwidth required to download it. If you download it often, then it matters a lot. If it's only downloaded once, it matters less, and you have to weigh up the time involved in reducing it versus how much space you save.
After that, nobody cares until you get into gigabytes. Well, mobile applications will probably start caring at about 10MB+.
Users definitely care (after all, not only does space cost money, but it also affects program load time). However, the question becomes how much you optimize. I suggest the 80/20 rule: 80% of your benefit comes from the first 20% of the effort.
If you use a utility like TreePie you might be able to see what parts of a large application are consuming most of your resources. If you find it's just a few large images, or one big DLL with a bunch of embedded resources, it's probably worth taking a look at reducing the size, if it's easy.
But there's a cost/benefit trade-off. I just saw a terabyte drive for $100 the other day. Saving the user 1 gig is about 10 cents' worth of storage space, plus some hard-to-quantify amount of time saved on every load. If you have 100,000 users, it's probably worth your time to optimize a bit, but if you're writing custom software for one user it's probably not worth it unless they're complaining.
As mentioned by Graham Lee, a great deal of this is very dependent on your users. If you are writing something that needs to be optimized to fit on the chip of a 68000 processor, then you'd better believe that program size matters. Assuming you're not programming 30 years ago, you probably won't run across that particular issue.
But in general, you should be making your application as small as possible while still achieving the quality you want. That is to say, if your application is likely to be viewed on a 640x480 screen, then you don't need hi-res 6MB PNGs for all your images. On the other hand, if your application is designed to be blown up on a big screen at conferences, then you probably want to upsize your images.
Another option that is very common is creating installers with separate options ranging from full to minimal. That way you can allow your users to decide whether size matters to them. It allows you to create the pretty version of your app, and a scaled-back version that doesn't include tutorials or mp3 files of a soothing woman's voice telling you that you've pushed the wrong button.
Know your users. And if you don't, then let them decide for themselves.
Consider yourself: what would you use? Would you rather save space with 5KB programs or waste it with 5MB programs?
I think that smaller is better, especially if the program doesn't use/need much graphics and can be optimized.
I would say not important at all, unless it's obscenely large.
I would argue that startup time is far more important to users than application size.
However if you include a lot of media files with your system it is logical to optimise this data as much as possible. But don't compromise the quality - switching to jpeg might be okay for photos, but it sucks for technical diagrams. A .wav could be an .aac or .mp3, but not if you're writing a professional audio application.