GDAL CreateCopy is very slow

I am processing some 4096x4096 JPEG2000 images with OpenCV, and the resulting image is missing all the GDAL metadata. I have implemented a copy method in my C++ program that uses GDAL CreateCopy to copy the metadata from the original image to the destination. So far I am seeing a copy time of over 97 seconds for a basic metadata copy. This is longer than all my image processing put together! I have tried increasing the GDAL cache to 200, and even to 10485760. Setting either value makes the time even worse!
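For reference, the copy boils down to something like this minimal sketch (the file names and the JP2OpenJPEG driver are placeholders for my actual setup):

#include <cstdio>
#include "gdal_priv.h"

int main() {
    GDALAllRegister();

    GDALDataset *src = static_cast<GDALDataset *>(
        GDALOpen("original.jp2", GA_ReadOnly));
    GDALDriver *drv = GetGDALDriverManager()->GetDriverByName("JP2OpenJPEG");
    if (!src || !drv) return 1;

    // CreateCopy pushes the entire raster through the target driver's
    // encoder; it is not a cheap metadata-only operation.
    GDALDataset *dst = drv->CreateCopy("processed.jp2", src, FALSE,
                                       nullptr, GDALDummyProgress, nullptr);
    if (dst) GDALClose(dst);
    GDALClose(src);
    return 0;
}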

Related

Why can't we just use an ArrayBuffer and convert it to an int array to upload files?

I have a silly question which originates from a college assignment.
Basically, what I was trying to do at the time was upload an image to a Flask backend in a RESTful way, where the backend would use OpenCV to do image recognition. Because JSON does not support binary data, I followed some online instructions and used base64, which is of course feasible (it seems to be used a lot for file uploading over REST, though I'm not sure of the underlying reason). But later I realized I could read the image into an ArrayBuffer, convert it to an int array, and then post that to the backend. I tried it today and it worked. That way the encoding overhead is avoided on both sides, and the payload size is also reduced, since base64 increases size by around 33%.
My question: since we can avoid base64, why do we still use it? Is it just because it avoids issues with line-ending encodings across systems? That seems unrelated to binary data uploading.
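For what it's worth, the ~33% figure follows directly from the encoding: base64 emits 4 output characters for every 3 input bytes. A quick sketch of the arithmetic (C++ here purely for illustration; the sizes are hypothetical):

#include <cstddef>
#include <cstdio>

// base64 rounds the input up to whole 3-byte groups and emits 4
// characters per group, so the output is roughly 4/3 the raw size.
static size_t base64Length(size_t rawBytes) {
    return ((rawBytes + 2) / 3) * 4;
}

int main() {
    const size_t raw = 3 * 1024 * 1024; // e.g. a hypothetical 3 MiB image
    std::printf("raw: %zu bytes, base64: %zu chars\n", raw, base64Length(raw));
    return 0; // prints a ratio of roughly 4:3, i.e. ~33% growth
}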

Check if an image is fully loaded (without Vision Tools)

I am currently developing a LabVIEW application whose function (amongst others) is to copy and display images that are automatically updated every half a second or so. Depending on when the program copies the picture, it might not be fully generated yet, leaving me with an incomplete picture, as opposed to when the update is finished and I obtain a full picture.
I would like a way to check whether the image is complete or not. Using the file size is not a viable option, as the amount of information and the colors in the images can vary. I don't have access to the Vision tools, by the way, which makes my task more difficult than it should be.
Thank you for your help,
NFM
As a general solution, if you have access to the code that generates the image, I would strongly suggest adding logic that only replaces the image to be copied once it is complete.
Without using image processing, you have to rely on some additional knowledge about the properties of image files. Once you have loaded the file into memory, you can perform a range of checks. If the image is a PNG, you have a couple of options:
Decode chunks and check for CRC validity
Requires looping through the whole file, but ensures 100% validity of the PNG.
Search for the 'IEND' chunk
A quick and easy search for a matching 4-byte value that should be located near the end of the file. Not a perfect confirmation of validity if the file generation is not linear. (A sketch of this check follows below.)
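For example, a minimal sketch of the IEND check, assuming the file is already written to disk (the function name is illustrative):

#include <cstdio>
#include <cstring>

// A complete PNG ends with the fixed 12-byte IEND chunk; its last 8
// bytes are the type "IEND" followed by the constant CRC AE 42 60 82.
bool pngLooksComplete(const char *path) {
    static const unsigned char tailWanted[8] = {
        'I', 'E', 'N', 'D', 0xAE, 0x42, 0x60, 0x82};

    FILE *f = std::fopen(path, "rb");
    if (!f) return false;

    unsigned char tail[8];
    bool ok = std::fseek(f, -8, SEEK_END) == 0 &&
              std::fread(tail, 1, 8, f) == 8 &&
              std::memcmp(tail, tailWanted, 8) == 0;
    std::fclose(f);
    return ok;
}

Note that this only confirms the writer reached the final chunk; it says nothing about the chunks in between, which is what the CRC walk above buys you.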

Reading compressed image data from PNG using libpng

How can I read the compressed image data from the IDAT chunk of a PNG using libpng? I have not found a method to do this in the libpng documentation, but I may have overlooked it. Is it somehow possible to use the "unknown chunk" facility for the IDAT chunk?
The purpose of this is that I want to write a very fast PNG-to-PDF converter. Because PDF supports the PNG data format (with each scanline prefixed by a filter-type byte), it should be possible to just copy over the contents of the (concatenated) IDAT chunks and slap the right PDF headers around it (also copying the palette if necessary). This saves a decompression/re-compression step.
If libpng does not provide such low-level access, does any other library provide this functionality? Otherwise I'll just write a PNG chunk reader myself...
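If it comes to that, the chunk reader itself is not much code. A minimal sketch of the idea (no CRC verification, minimal error handling; names are illustrative):

#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

// Walk the chunk list and concatenate the raw, still-compressed IDAT
// payloads; this is the byte stream a PDF stream object would embed.
std::vector<unsigned char> readIdatData(const char *path) {
    std::vector<unsigned char> idat;
    FILE *f = std::fopen(path, "rb");
    if (!f) return idat;

    unsigned char sig[8]; // 8-byte PNG signature
    if (std::fread(sig, 1, 8, f) != 8) { std::fclose(f); return idat; }

    unsigned char hdr[8]; // 4-byte big-endian length + 4-byte chunk type
    while (std::fread(hdr, 1, 8, f) == 8) {
        uint32_t len = (uint32_t(hdr[0]) << 24) | (uint32_t(hdr[1]) << 16) |
                       (uint32_t(hdr[2]) << 8) | uint32_t(hdr[3]);
        if (std::memcmp(hdr + 4, "IDAT", 4) == 0) {
            size_t old = idat.size();
            idat.resize(old + len);
            if (std::fread(idat.data() + old, 1, len, f) != len) break;
        } else if (std::fseek(f, static_cast<long>(len), SEEK_CUR) != 0) {
            break;
        }
        std::fseek(f, 4, SEEK_CUR); // skip the chunk's CRC
    }
    std::fclose(f);
    return idat;
}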

Loading images (graphics) with VisualWorks very slow

I am trying to load image files such as JPEGs into VW as part of an application. This seems to take a very long time and sometimes even crashes VW. The file is roughly 3.5 MB and is a simple JPEG picture. This is what causes the problem:
ImageReader fromFile:'pic.jpg'.
This operation takes about 5-10 seconds to complete. It happens in 32-bit and 64-bit projects alike.
Any ideas or suggestions as to how I can solve this problem? The same thing in Pharo seems to work okay.
Thanks!
ImageReader will automatically choose the correct subclass, like JPEGImageReader. Picking the subclass is not the slow part; decoding the JPEG data is.
A JPEG file, unlike a PNG, doesn't use zlib compression but instead uses discrete cosine transforms (see https://en.wikipedia.org/wiki/JPG#JPEG_compression). This compression requires a lot of number crunching, which is slower in VisualWorks than it would be in C. The PNG reader, on the other hand, uses Zlib, so its number-crunching part is done in C, which is why it is so much faster.
You could use Cairo or GDI or whatever other C API you have at hand to speed this up.
Try calling the JPEGImageReader directly:
JPEGImageReader fromFile:'pic.jpg'
If that's fast, then the slowdown is in finding the proper image reader to use for the file. What ImageReaders do you have installed and how do they implement the class method canRead:?
If the JPEGImageReader is still slow, then we can investigate from there.

How to avoid resizing if width and height are the same as the original?

Is there a way (a param) to stop ImageResizer from processing an image if the height and width are the same as the original?
If not, where and how do I cancel the scaling process in a plugin?
The ImageResizer does not scale images that are already the appropriate size. It does, however, decode them, strip metadata, and re-encode them into a web-compatible and web-efficient format (usually JPEG or PNG).
If you want the ImageResizer to serve the original file, skipping the entire process, that's a different question, which I'll try to answer below.
Here's the primary challenge with that goal: To discover the width and height of the source image file, you must decode it - at least partially.
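To make that concrete, here is a minimal sketch of such a partial decode for PNG (the IHDR chunk is required to come first, so width and height sit at fixed byte offsets 16 and 20; JPEG would instead need a scan for its SOFn marker; names are illustrative):

#include <cstdint>
#include <cstdio>
#include <cstring>

// Read only the first 24 bytes: the 8-byte signature, the IHDR length
// and type, then the big-endian 4-byte width and height fields.
bool pngDimensions(const char *path, uint32_t &width, uint32_t &height) {
    FILE *f = std::fopen(path, "rb");
    if (!f) return false;

    unsigned char head[24];
    bool ok = std::fread(head, 1, 24, f) == 24 &&
              std::memcmp(head + 12, "IHDR", 4) == 0;
    std::fclose(f);
    if (!ok) return false;

    width  = (uint32_t(head[16]) << 24) | (uint32_t(head[17]) << 16) |
             (uint32_t(head[18]) << 8) | uint32_t(head[19]);
    height = (uint32_t(head[20]) << 24) | (uint32_t(head[21]) << 16) |
             (uint32_t(head[22]) << 8) | uint32_t(head[23]);
    return true;
}

Even this cheap parse is an extra read on every request, which is exactly the uniform overhead described further down.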
This optimization is only useful (or possible) in limited circumstances:
The format of the source file allows you to parse the width & height without loading the entire file into memory: JPEG/PNG, yes; TIFF, no; the 30+ formats supported by the FreeImageDecoder, no.
The source file is on local, low-latency disk storage and is accessible to IIS; this rules out UNC paths and plugins like S3Reader, SqlReader, AzureReader, RemoteReader, MongoReader, etc.
No URL rewriting rules are in place.
No custom plugins are being used.
The image is already in a web-optimized format with proper compression settings, with metadata removed.
No other URL commands are being used.
No watermarking rules are in place.
You do not need to control cache headers.
You are 100% sure the image is not malicious (without re-encoding, you can't ensure the file can't be both a script and a bitmap).
In addition, unless you cached the result, this 'optimization' wouldn't actually improve response time or server-side performance. Since the dimension data would need to be decoded separately, it would add uniform, significant overhead to all requests, whether or not they happened to have a dimension match.
The only situation in which I can see this being useful is if you spend a lot of time optimizing compression in Photoshop and don't want the ImageResizer to touch the result unless needed. If you're that concerned, just don't apply the URL commands in that scenario. Or set process=no to keep the original bytes as-is.
It's definitely possible to make a plugin to do this; but it's not something that many people would want to use, and I can't envision a usage scenario where it would be a net gain.
If you want to plunge ahead, just handle the Config.Current.Pipeline.PreHandleImage event and replace e.ResizeImageToStream with code that parses the stream returned by e.GetSourceImage(), applies your dimension logic (comparing against Config.Current.GetImageBuilder().GetFinalSize()), then resets the stream and copies it verbatim if desired, like this:
using (Stream source = e.GetSourceImage()) {
    // Copy the original bytes through to the output untouched (4KiB buffer).
    StreamExtensions.CopyToStream(source, stream);
}
That might not handle certain scenarios (say, the image actually needs to be resized 1px smaller, but you're adding a 1px border), but it's close. If you're picky, look at the source code for GetFinalSize and return the image bounds instead of the canvas bounds.