I want to make an extremely large bitmap (250,000 pixels on each side, eventually to be written out as BigTIFF). I don't see a memory or dimension limit anywhere in the docs; can Core Graphics handle it?
CG is not designed for that kind of workload.
(I'd be surprised if you found any general-purpose graphics framework that is, frankly. If you're pushing images that big, you're going to have to write your own code to get anything done in a reasonable amount of time.)
In my experience, images started to fail once dimensions got over 32767 or so. Not in any organized way, just crashes and hard-to-repro failures; certain parts of the API would work, others wouldn't. Things may be better in 64-bit but I wouldn't count on it.
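If you want to see where the limit falls on your own machine, one rough way (a sketch of mine, not from the answer above; written in Swift, and the exact failure mode may vary by OS version) is simply to try creating a bitmap context at the target size and check whether Core Graphics gives you one back:

```swift
import CoreGraphics

// Rough probe: can Core Graphics create a square RGBA bitmap context of the
// given side length? Note this allocates real memory when it succeeds, so
// don't loop it up to enormous sizes. At 250,000 x 250,000 the pixel buffer
// alone would be roughly 250 GB at 4 bytes per pixel.
func canCreateBitmapContext(side: Int) -> Bool {
    let context = CGContext(data: nil,
                            width: side,
                            height: side,
                            bitsPerComponent: 8,
                            bytesPerRow: 0,   // let Core Graphics pick the row stride
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    return context != nil
}

print(canCreateBitmapContext(side: 16_384))   // likely fine
print(canCreateBitmapContext(side: 65_536))   // already a ~16 GB allocation
```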
I have an image at about 7000x6000px. I need it in a scroll view/image view in my app, but it is far too large to display directly. It is supposed to be a kind of map. I was hoping to keep the size of the app to a minimum; the image is only about 13 MB as a .jpg, but over 100 MB as a .png, which is unacceptable. Many have suggested CATiledLayer as an option, though I believe that would result in even bigger file sizes. Anyway, I tried CATiledLayer and cut my own tiles in TileCutter (tiles in .jpg), and the size wasn't too bad. But I am getting errors all over the place. The iOS version of CATiledLayer is a mystery to me, and I can't find a way to solve this. I get an error that amounts to the Java equivalent of an "index out of bounds of array" exception, even though the array has content at that specific index.
There is a method that returns an array; the array contains data from a .plist. Just before the return I log the contents of the array, and the data looks fine. The calling code then tries to access
[array objectAtIndex:0]
and put it in a dictionary, but it throws an out-of-bounds exception. When I log the whole array I can clearly see its contents, but when I log the single element with
NSLog(@"%@", [array objectAtIndex:0]);
I get the same exception.
Anyway, CATiledLayer has given me nothing but problems. I have been reverse-engineering the PhotoScroller project with no luck. Anyone have any other solutions?
Thanks.
Apple has this really neat project, PhotoScroller, that uses CATiledLayer and lets you scroll through several images and zoom them. This seemed great until I found out that Apple "cheated" and pre-tiled the images (about 800 tiles saved as files in the bundle!).
I had need for a similar capability, but had to download the images from the network. Thus came about PhotoScrollerNetwork. With its TiledImageBuilder you can download (or read from disk) massive images - I even tested an 18000x18000 image - and it works.
What this class does is start tiling the image as it downloads (when using libjpeg-turbo), or it can save the file first and then tile it (which takes longer). The class figures out how many levels of detail are needed to show the image both at full resolution and sized to fit the containing view (a scroll view).
The class uses the disk cache to hold the tiles, but uses an old Unix trick of creating a file, opening it, then unlinking it, so the tiles never really get saved: once the class is dealloced (and the file descriptor closed) the tiles are freed, and if your app crashes they get freed too.
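For what it's worth, the create/open/unlink trick itself is easy to reproduce; here is a minimal sketch (Swift; the function name and the use of FileManager are mine, not taken from PhotoScrollerNetwork):

```swift
import Foundation

// Create a scratch file, open it, then unlink it. The storage stays alive as
// long as the file descriptor is open, but the file has no directory entry,
// so it is reclaimed automatically when the descriptor closes, or when the
// process dies, including after a crash.
func openAnonymousScratchFile() -> Int32? {
    let path = (NSTemporaryDirectory() as NSString)
        .appendingPathComponent("tile-scratch-\(getpid())")
    guard FileManager.default.createFile(atPath: path, contents: nil) else { return nil }
    let fd = open(path, O_RDWR)
    guard fd >= 0 else { return nil }
    unlink(path)   // remove the name; the open descriptor keeps the data alive
    return fd      // read/write tiles through this descriptor (write/pread/mmap)
}

// Usage sketch:
// if let fd = openAnonymousScratchFile() {
//     // ... ftruncate(fd, totalTileBytes), then write or mmap tiles ...
//     close(fd)  // storage is freed here, or by the kernel if the app crashes
// }
```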
Someone had problems on an iPad 1 with its quite limited memory, so the class now throttles its use of the file system when concurrently loading multiple images. I had a discussion with the iOS kernel manager at WWDC this year, and after explaining the throttling technique to him, he said the algorithm (on managing the amount of disk cache usage) was probably the best technique that could be used.
I think those who suggested CATiledLayer are right. You should really use it! If you need a sample project that displays a huge bitmap using that technology, look here: http://www.cimgf.com/2011/03/01/subduing-catiledlayer/
Many technologies we use as Cocoa/Cocoa Touch developers stand untouched by the faint of heart because often we simply don't understand them and employing them can seem a daunting task. One of those technologies is found in Core Animation and is referred to as the CATiledLayer. It seems like a magical sort of technology because so much of its implementation is a bit of a black box and this fact contributes to it being misunderstood. CATiledLayer simply provides a way to draw very large images without incurring a severe memory hit. This is important no matter where you're deploying, but it especially matters on iOS devices as memory is precious and when the OS tells you to free up memory, you better be able to do so or your app will be brought down. This blog post is intended to demonstrate that CATiledLayer works as advertised and implementing it is not as hard as it may have once seemed.
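For a rough idea of what that approach boils down to, here is a minimal sketch of a CATiledLayer-backed view (written in Swift rather than the Objective-C of the linked post; the class name and the placeholder drawing are mine):

```swift
import UIKit

// A minimal view backed by CATiledLayer. The drawing here just outlines each
// tile so you can watch the tiling happen; a real map view would load and
// draw the pre-cut tile image that covers `rect` instead.
final class TiledMapView: UIView {

    // Back the view with a CATiledLayer instead of a plain CALayer.
    override class var layerClass: AnyClass { CATiledLayer.self }

    override init(frame: CGRect) {
        super.init(frame: frame)
        if let tiledLayer = layer as? CATiledLayer {
            tiledLayer.tileSize = CGSize(width: 256, height: 256)
            tiledLayer.levelsOfDetail = 4       // zoomed-out levels of detail
            tiledLayer.levelsOfDetailBias = 2   // zoomed-in levels of detail
        }
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // Called once per tile, on a background thread, with `rect` clipped to that tile.
    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.setStrokeColor(UIColor.red.cgColor)
        context.stroke(rect.insetBy(dx: 1, dy: 1))
    }
}
```

In a real setup you would put this view inside a UIScrollView and draw the tile image appropriate to the current zoom level inside draw(_:).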
I have a lot of images used as backgrounds and other decorations.
If I reduce their resolution and size, will the memory used by the application go down, or will I see no advantage?
Thank you.
If you reduce the images' size (their pixel dimensions), they will use less memory once loaded. One thing you could do is use sprites instead of separate images, though using sprites in Objective-C is a bit more difficult than it is on the web.
You could also try to replace images with custom drawing code in subclasses of views.
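To make the sprite suggestion above a bit more concrete, here is a small sketch (Swift; the function and asset names are hypothetical) of packing decorations into one sheet and cropping out the piece you need at draw time:

```swift
import UIKit

// Crop one "sprite" out of a larger sheet image. The sheet is decoded once
// and shared; each decoration is just a small cropped CGImage taken from it.
// cropping(to:) works in pixel coordinates, so the frame is scaled by the
// sheet's scale factor.
func sprite(fromSheetNamed sheetName: String, frame: CGRect) -> UIImage? {
    guard let sheet = UIImage(named: sheetName),
          let cgSheet = sheet.cgImage else { return nil }
    let pixelFrame = CGRect(x: frame.origin.x * sheet.scale,
                            y: frame.origin.y * sheet.scale,
                            width: frame.width * sheet.scale,
                            height: frame.height * sheet.scale)
    guard let cropped = cgSheet.cropping(to: pixelFrame) else { return nil }
    return UIImage(cgImage: cropped, scale: sheet.scale, orientation: .up)
}

// Usage: an icon at (0, 64) in a hypothetical "decorations" sheet.
// let icon = sprite(fromSheetNamed: "decorations",
//                   frame: CGRect(x: 0, y: 64, width: 32, height: 32))
```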
Yes, if you reduce image resolution, there will be less data to keep in memory and total memory usage will go down. But remember the basic rule of optimization and measure first, so that you know for sure that reducing image size will really make a difference given the rest of your application. (As an example, if you have serious leaks in your application, there’s not much to be gained by resampling images.)
When creating applications (Java, run on a normal computer), how important is program size to users? For example, would it be necessary to replace .pngs with .jpgs, convert .wavs to .midis, or strip down libraries to save space, or do users generally not care if my program is 5 MB when it could be 50 KB if stripped down?
Thanks.
That depends on the delivery mechanism.
Size is generally only relevant in terms of the bandwidth required to download it. If you download it often, then it matters a lot. If it's only once, it matters less, and you have to weigh the time involved in reducing it against how much space you save.
After that, nobody cares until you get into gigabytes. Well, mobile applications will probably start caring at about 10MB+.
Users definitely care (after all, not only does space cost money, it also affects program load time). However, the question becomes how much you optimize. I suggest the 80/20 rule: 80% of your benefit comes from the first 20% of the effort.
If you use a utility like TreePie you might be able to see what parts of a large application are consuming most of your resources. If you find it's just a few large images, or one big DLL with a bunch of embedded resources, it's probably worth taking a look at reducing the size, if it's easy.
But there's a cost/benefit tradeoff. I just saw a terabyte drive for $100 the other day. Saving the user 1 GB is about 10 cents in terms of storage space, plus some hard-to-quantify amount of time spent loading every time they run it. If you have 100,000 users, it's probably worth your time to optimize a bit, but if you're writing custom software for one user it's probably not worth it unless they're complaining.
As mentioned by Graham Lee, a great deal of this is very dependent on your users. If you are writing something that needs to be optimized to fit on the chip of a 68000 processor, then you'd better believe that program size matters. Assuming you're not programming 30 years ago, you probably won't run across that particular issue.
But in general, you should be making your application as small as possible while still achieving the quality you want. That is to say, if your application is likely to be viewed on a 640x480 screen, then you don't need hi-res 6 MB PNGs for all your images. On the other hand, if your application is designed to be blown up on a big screen at conferences, then you probably want to upsize your images.
Another option that is very common is creating installers with separate options ranging from full to minimal. That way you let your users decide whether size matters to them. It allows you to create the pretty, pretty version of your app, and a scaled-back version that doesn't include tutorials or MP3 files of a soothing woman's voice telling you that you've pushed the wrong button.
Know your users. And if you don't, then let them decide for themselves.
Consider yourself: what would you use? Would you rather save space with 5 KB programs or waste it with 5 MB programs?
I think that smaller is better, especially if the program doesn't use/need much graphics and can be optimized.
I would say not important at all, unless it's obscenely large.
I would argue that startup time is far more important to users than application size.
However, if you include a lot of media files with your system, it makes sense to optimise that data as much as possible. But don't compromise the quality: switching to JPEG might be okay for photos, but it sucks for technical diagrams. A .wav could be a .aac or .mp3, but not if you're writing a professional audio application.
Initial tests indicate that GDI+ (writing in VB.NET) is not fast enough for my purposes. My application needs to draw tens of thousands of particles (coloured circles, preferably anti-aliased) at full-screen resolution at 20+ frames per second.
I'm hesitant to step away from GDI+ since I also require many of the other advanced drawing features (dash patterns, images, text, paths, fills) of GDI+.
Looking for good advice about using OpenGL, DirectX or other platforms to speed up particle rendering from within VB.NET. My app is strictly 2D.
Goodwill,
David
If you want to use VB.NET, then you can go with XNA or SlimDX.
I have some experience in creating games with GDI+ and XNA, and I can understand that GDI+ is giving you trouble.
If I were you I'd check out XNA: it's much faster than GDI+ because it actually uses your video card for drawing, and it has a lot of good documentation and examples online.
SlimDX also looks good but I don't have any experience with it. SlimDX is basically the DirectX API for .NET.
The only way to get the speed you need is to move away from software rendering to hardware rendering... and unfortunately that does mean moving to OpenGL or DirectX.
The alternative is to try and optimise your graphics routines to only draw the particles that need to be drawn, not the whole screen/window.
I would agree with JaredPar that you're better off profiling first to determine if your existing codebase can be improved before making a huge switch to a new framework. DirectX is not the easiest framework if you're unfamiliar with it.
The most significant speed increase I found, when writing a game maker with GDI+, was to convert my bitmaps to Format32bppPArgb:
SuperFastBitmap = ConvertImagePixelFormat(SlowBitmap, Imaging.PixelFormat.Format32bppPArgb)
If they are not in this format already, you'll see the difference immediately when you convert.
It's possible the problem is in your algorithm and not GDI+. Profiling is the only way to know for sure. Without a profile it's very possible you will switch to a new GUI framework and hit the exact same problems.
If you did profile, what part of GDI+ was causing a problem?
As Jared said, it could be that a significant fraction of your cycles are not going into GDI+, and you might be able to reduce those.
A simple way to find them is to halt the program at random a few times and examine the stack: the chance that you catch it in the act of wasting time is equal to the fraction of time being wasted.
Any instruction or call instruction that appears on more than one such sample is something that, if you could replace it, would give you a speedup.
In general, that is the method.
As you're working in VB.NET, have you tried WPF (part of .NET since 3.0)? As WPF is based on DirectX rather than GDI+, it should give you the speed you need, although developing with WPF is not straightforward at all.
GDI+ is not accelerated by the graphics card; it renders on the CPU, which is why it's slow. You could use DirectX or SlimDX instead.
See these:
http://msdn.microsoft.com/en-us/library/windows/desktop/ff729480%28v=vs.85%29.aspx
http://www.codeproject.com/Articles/159586/Starting-DirectX-with-Visual-Basic-NET
My compact framework application creates a smooth-scrolling list by rendering all the items to a large bitmap surface, then copying that bitmap to an offset position on the screen so that only the appropriate items show. Older versions only rendered the items that should appear on screen at the time, but this approach was too slow for a smooth scrolling interface.
It occasionally generates an OutOfMemoryException when initially creating the large bitmap. If the user performs a soft-reset of the device and runs the application again, it is able to perform the creation without issue.
It doesn't look like this bitmap is being generated in program memory, since the application uses approximately the same amount of program memory as it did before the new smooth-scrolling methods.
Is there some way I can prevent this exception? Is there any way I can free up the memory I need (wherever it is) before the exception is thrown?
I'd suggest going back to the old mechanism of rendering only part of the data, as the size of the fully-rendered data is obviously an issue. To help prevent rendering problems I would probably pre-render a few rows above and below the current view so they can be "scrolled" in with limited impact.
And just as soon as I posted, I thought of something you can do to fix your problem with the new version. The problem is that the Compact Framework has to find one contiguous block of memory for the huge bitmap, and occasionally it can't.
Instead of creating one big bitmap, you can instead create a collection of smaller bitmaps, one for each item, and render each item onto its own little bitmap. During display, you then just copy over the bitmaps you need. CF will have a much easier time creating a bunch of little bitmaps than one big one, and you shouldn't have any memory problems unless this is a truly enormous bunch of items.
I should avoid expressions like "there is no fix".
One other important point: make sure you call Dispose() on each bitmap when you're finished with it.
Your bitmap definitely is being created in program memory. How much memory the bitmap needs depends on how big it is, and whether or not that required size generates the OutOfMemoryException depends on how much memory is available on the PDA (which makes this a randomly-occurring error).
Sorry, but this is generally an inadvisable control rendering technique (especially on the Compact Framework) for which there is no fix short of increasing the physical memory on the PDA, which isn't usually possible (and often won't fix the problem anyway, since a CF process is limited to 32MB no matter how much the device has available).
Your best bet is to go back to the old version and improve its rendering speed. There is also a simple technique available on CF for making a control double-buffered to eliminate flicker.
Since it appears you've run into a device limitation that is restricting the total size of Bitmap space you can create (these are apparently created in video RAM rather than general program memory), one alternative is to replace the big Bitmap object used here with a plain-old block of Windows memory, accessing it for reading and writing by PInvoking the BitBlt API function.
Initially creating the memory block is tricky, and you'd probably want to ask another SO question about that (GCHandle.Alloc can be used here to create a "pinned" object, which means .NET isn't allowed to move it around in memory, which is critical here). I know how to do it, but I'm not sure I do it correctly and I'd rather have an expert's input.
Once you've created the big block, you'd iterate through your items, render each to one small bitmap that you keep re-using (using your existing .NET code), and BitBlt it to the appropriate spot in your memory block.
After creating the entire cache, your rendering code should work just like before, with the difference that instead of copying from the big bitmap to your rendering surface, you BitBlt from your cache block. The arguments for BitBlt are essentially the same as for DrawImage (destination, source, coordinates and sizes etc.).
Since you're creating the cache out of regular memory this way instead of specialized video RAM, I don't think you'll run into the same problem. However, I would definitely get the block creation code working first and test to make sure it can create a big enough block every time.
Update: actually, the ideal approach would be to have a collection of smaller memory blocks rather than one big one (like I thought was the problem with the Bitmap approach), but you already have enough to do. I've worked with CF apps that deal with 5 and 10 MB objects and it's not a huge problem anyway (although it might be a bigger problem when that chunk is pinned - I dunno). BTW, I've always been surprised by the OOMEs on Bitmap creation because I knew the bitmaps were much smaller than the available memory, as did you - now I know why. Sorry I thought this was an easy fix at first.