I'm using the awesome ImageResizing component and am experiencing an "Out of memory" error when trying to upload and read images that are about 100MB in size. That may seem large, but we're a printing company, so many customers genuinely need to provide images of that size.
The line of code that fails is:
ImageResizer.ImageBuilder.Current.Build(Server.MapPath(strImagePath), Server.MapPath(strThumbPath), new ResizeSettings("maxheight=150&maxwidth=238"));
This is probably GDI itself failing, but is there any workaround other than detecting that the error occurred and letting the user know?
Thanks in advance
Al
A 100MB JPEG generally decompresses to around 8 gigabytes in bitmap form. Your only chance of getting that to work is to get 16 GB of RAM and run the process in 64-bit mode.
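To give a rough sense of where a figure like that comes from (illustrative numbers, not from the original answer): an uncompressed 32-bit bitmap takes 4 bytes per pixel, so an image in the region of 2 gigapixels works out to roughly 2,000,000,000 px × 4 B ≈ 8 GB once decoded, before any resizing even starts.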
Alternatively, you could try libvips - it's designed for gigantic image files. There's no .NET wrapper yet, but I really want to make one and get some ImageResizer integration going! Of course, without anyone interested in funding that, it probably won't happen for a while....
As mentioned by Lilith River, libvips is capable of resizing large images with low memory needs. Fortunately, there is now a full libvips binding for .NET available: https://github.com/kleisauke/net-vips/.
It should be able to process 100MB JPEG files without any problems.
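A minimal sketch of what that could look like with net-vips, assuming the same 238×150 bounding box as the ImageResizer call above (file names are placeholders):

    // Minimal sketch using the NetVips package; paths are placeholders.
    // libvips' thumbnail path is designed to keep memory use low even for
    // very large source files, rather than decoding the whole image at once.
    using NetVips;

    class ThumbnailExample
    {
        static void Main()
        {
            // Fit within a 238x150 box while preserving the aspect ratio.
            using (var thumb = Image.Thumbnail("huge-print-scan.jpg", 238, height: 150))
            {
                thumb.WriteToFile("thumb.jpg");
            }
        }
    }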
I want to plot a large amount of data within a Metro application and I need to buffer it. To work out how far I can buffer, it would be great to know how much memory is still available (to my app); this shouldn't include virtual memory.
Is there any way in a Metro app to get this information? I only found GlobalMemoryStatusEx, but that can only be used in desktop apps.
Thank you
I just had to deal with this and got hold of the right people at Microsoft to answer it. Unfortunately the answer was: no, you can't do that, except by using the restricted calls you found, and using those prevents you from getting certified for publication in the Store.
How about just trying to allocate a lot of memory in chunks? When the first allocation fails, add up the sizes of the chunks and either release them or use them for your operations.
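A rough sketch of that idea in C# (the chunk size is an arbitrary illustrative value, and the result is only approximate, since the GC and address-space fragmentation both affect what you can actually get):

    // Probe roughly how much memory is obtainable by allocating fixed-size
    // chunks until an allocation fails, then releasing them again.
    using System;
    using System.Collections.Generic;

    static class MemoryProbe
    {
        public static long EstimateAvailableBytes(int chunkSize = 16 * 1024 * 1024)
        {
            var chunks = new List<byte[]>();
            try
            {
                while (true)
                {
                    chunks.Add(new byte[chunkSize]); // eventually throws OutOfMemoryException
                }
            }
            catch (OutOfMemoryException)
            {
                // The total we managed to allocate is our rough estimate.
            }

            long estimate = (long)chunks.Count * chunkSize;
            chunks.Clear();   // drop the probe allocations
            GC.Collect();     // hand the memory back before doing real work
            return estimate;
        }
    }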
I am trying to improve the workflow of my app deployment. From building to signing to deployment, it can take anywhere up to 40 minutes. What advice can somebody give me on:
1) Speeding up compile time
2) Speeding up the archive process
3) Speeding up the code signing
thanks
For reference, my early 2009 2.93GHz C2D iMac with 8GB RAM can archive and sign a 2GB application in approximately 15-20 minutes. My late 2011 1.8GHz i7 MacBook Air can do it significantly faster. 40 minutes for a 500MB application seems far too slow unless there is something else bogging down your system. Try checking your disk with Disk Utility and seeing what else is running with Activity Monitor.
Things to consider are the size of resources. Can any resources, such as videos or images, be compressed and still be usable? Are there a large number of files that could be compressed into a zip file and then unzipped on first launch? Also check that you do not have any long-running custom scripts in the build process. Once you've determined that resources or a build configuration setting are not the issue, I would advise investing in a faster computer (more RAM and processing power) if you are running on older hardware.
Rarely changed code can be moved into libraries (perhaps with the help of additional projects, so as not to produce too many targets); that dramatically increases compilation speed, while signing and archiving are usually faster than the build itself.
I need to read and process a text file. My processing would be easier if I could use the File.ReadAllLines method, but I'm not sure what the maximum file size is that can be read with this method without reading in chunks.
I understand that it depends on the computer's memory. But are there still any recommendations for an average machine?
On a 32-bit operating system, you'll get at most a contiguous chunk of memory of around 550 megabytes, allowing you to load a file of half that size. That goes downhill quickly after your program has been running for a while and the virtual memory address space gets fragmented; 100 megabytes is about all you can hope for then.
This is not an issue on a 64-bit operating system.
Since reading a text file one line at a time is just as fast as reading all lines, this should never be a real problem.
I've done stuff like this with 1-2GB files before, albeit in Python. I do not think .NET would have a problem with it, though. But I would only do this for one-off processing.
If you are doing this on a regular basis, you might want to read the file line by line instead.
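A minimal sketch of the line-by-line approach (the file name and the filter are placeholders); File.ReadLines enumerates lazily, so memory use stays roughly constant regardless of file size:

    using System;
    using System.IO;
    using System.Linq;

    class Program
    {
        static void Main()
        {
            // Stream the file one line at a time instead of loading it all
            // into memory with File.ReadAllLines.
            int matches = File.ReadLines("huge-input.txt")            // placeholder path
                              .Count(line => line.Contains("ERROR")); // placeholder processing
            Console.WriteLine($"Matching lines: {matches}");
        }
    }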
It's bad design unless you know the file sizes versus the memory that would be available to the running app.
A better solution would be to consider memory-mapped files. They use the file itself as their backing storage rather than the page file.
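A minimal sketch of reading through a memory-mapped view in C# (the file name and the 16-byte read are placeholders; error handling omitted):

    using System;
    using System.IO;
    using System.IO.MemoryMappedFiles;

    class Program
    {
        static void Main()
        {
            using (var mmf = MemoryMappedFile.CreateFromFile("huge-data.bin", FileMode.Open))
            using (var accessor = mmf.CreateViewAccessor())
            {
                // Only the pages that are actually touched get brought into memory.
                var buffer = new byte[16];
                accessor.ReadArray(0, buffer, 0, buffer.Length);
                Console.WriteLine(BitConverter.ToString(buffer));
            }
        }
    }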
Lately I've been having problems reading big files on a network drive and I just can't pinpoint what I may be doing wrong. I tried both unmanaged C++ and C# and got about the same performance in both... which was somewhat abysmal.
Sometimes it will read a file on the network at 4 KB/s, yet if the same file is on the local HD it easily reaches the maximum data rate the drive can output. That is when reading 64 KB chunks at a time; I tried bigger buffers, up to insane sizes, as well as smaller ones, and it doesn't make much difference.
I tried async I/O in C# with BeginRead on the FileStream, and OVERLAPPED I/O in C++, as well as synchronous reads, and they all had the same problem: they are slow over the network.
The only solution we came up with is to copy the file to the local HD using the OS CopyFile function before actually reading it, but I'm not too satisfied with this approach. It just seems like CopyFile is doing something we are not, which makes it incredibly faster than our approach.
Does anyone have a clue as to why this is?
We would have to guess, since you aren't showing us your code. My guess is that the Windows file copy opens the file with the FILE_FLAG_SEQUENTIAL_SCAN flag, which in turn causes the file system/cache to choose optimal block sizes and submit read requests in anticipation of read calls that haven't been issued yet.
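If that guess is right, the managed equivalent is FileOptions.SequentialScan, which maps to FILE_FLAG_SEQUENTIAL_SCAN. A minimal sketch (the UNC path and buffer size are placeholders):

    using System;
    using System.IO;

    class Program
    {
        static void Main()
        {
            const string path = @"\\server\share\bigfile.dat"; // placeholder UNC path
            var buffer = new byte[64 * 1024];
            long total = 0;

            // SequentialScan hints the cache manager to read ahead aggressively.
            using (var stream = new FileStream(
                path, FileMode.Open, FileAccess.Read, FileShare.Read,
                bufferSize: buffer.Length, options: FileOptions.SequentialScan))
            {
                int read;
                while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    total += read; // process the chunk here
                }
            }
            Console.WriteLine($"Read {total} bytes");
        }
    }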
We can only assume that you have really tried all possible methods of reading/writing. Have you been reading synchronously or asynchronously? Did you try I/O completion ports, or the ReadFileEx() function? I would guess that the Windows CopyFile() function detects that you are reading a file from the network and uses a different method than it would for local disk access.
If you have really exhausted all possible reading methods, and you really need this solved, then I would suggest looking into what the CopyFile() function is actually doing. There are numerous tools for doing that, e.g. this one (or some other -- there are links on the same page).
We've developed a good iPhone application and I'm now in its last phase, i.e. profiling, and I've encountered a few problems. The application has a few leaks and objects occupying large chunks of memory. We've noticed that the application does not lower its memory usage, and blocks stay occupied with the creation of each view controller.
Some of the views I really don't need after they disappear, but they are not deallocated.
We also download large files to the iPhone through the app, but once we download a very large file (> 10 MB), it crashes. After the download we run thumbnail generation logic in which a UIImage is created with 'contentsOfFile', so the app generally crashes after handling large files. We've used UIWebView for thumbnails.
My real problems are downloading, thumbnailing, and previewing larger files, and clearing unnecessary memory (objects) once a view is no longer in focus.
Can anyone help me get rid of these problems easily? I'd really rather not go through lots and lots of code.
Thank you!
As has been written hundreds of times on SO, use ASIHTTPRequest for networking, especially for large files. It can stream big files directly to disk so you don't run out of memory. As for creating a thumbnail of a > 10 MB file, it sounds like you would do yourself a favor by storing a thumbnail on the server instead.
If your views don't unload, something is wrong with your retain/release cycles. Have you implemented viewDidUnload on all your view controllers? Without more details, it's very hard to help.