Check if an image is fully loaded (without Vision Tools) - labview

I am currently developing a LabVIEW application whose function (amongst others) is to copy and display images that are automatically updated every half a second or so. Depending on when the program copies the picture, it might not be fully generated yet, giving me an incomplete picture, as opposed to when the update is finished and I obtain a full picture.
I would like to have a way to check whether the image is complete or not. Using the file size is not a viable option, as the amount of information and the colors in the images can vary. I don't have access to the Vision tools, by the way, which makes my task more difficult than it should be.
Thank you for your help,
NFM

As a general solution, if you have access to the code that generates the image, I would strongly suggest adding some logic there so that the file to be copied is only replaced once a complete image is available.
Without using image processing you have to rely on some additional knowledge about the properties of the image file format. Once you have loaded the file into memory, you can run a number of checks. If the image is a PNG, you have a couple of options:
Decode the chunks and check their CRCs
This requires looping through the whole file, but it guarantees the PNG is 100% valid.
Search for the 'IEND' chunk
A quick and easy search for a matching 4-byte value that should be located near the end of the file. Not a perfect confirmation of validity if the file generation is not linear.
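For illustration, here is a minimal Python sketch of both checks (the same byte-level reads can be done with LabVIEW's file I/O primitives; the file path is a placeholder):

import struct
import zlib

def png_has_iend(path):
    # Quick check: a complete PNG ends with the 12-byte IEND chunk
    # (4-byte zero length, the bytes 'IEND', then a 4-byte CRC).
    with open(path, "rb") as f:
        f.seek(0, 2)                  # jump to the end of the file
        if f.tell() < 12:
            return False
        f.seek(-12, 2)
        tail = f.read(12)
    return tail[4:8] == b"IEND"

def png_chunks_valid(path):
    # Thorough check: walk every chunk and verify its CRC.
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\x89PNG\r\n\x1a\n"):
        return False
    pos = 8
    while pos + 12 <= len(data):
        length = struct.unpack(">I", data[pos:pos + 4])[0]
        if pos + 12 + length > len(data):
            return False                               # truncated chunk
        chunk_type = data[pos + 4:pos + 8]
        chunk_data = data[pos + 8:pos + 8 + length]
        crc = struct.unpack(">I", data[pos + 8 + length:pos + 12 + length])[0]
        if zlib.crc32(chunk_type + chunk_data) & 0xFFFFFFFF != crc:
            return False                               # corrupted chunk
        if chunk_type == b"IEND":
            return True                                # reached the final chunk intact
        pos += 12 + length
    return False                                       # never reached IEND

If the generator writes the file from start to finish, the quick IEND check is usually enough; the full chunk walk also catches partially written data in the middle of the file.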

Related

Load files into Photoshop layers as vector smart objects

Bridge is packaged with a script that will load multiple files as their own layer in a Photoshop file. There are two problems when you do this with a vector file:
It converts the files to raster layers, and since you don't get to choose the size of the file beforehand, if they're too small you can't scale them up without losing quality.
It doesn't preserve antialiasing, leaving ugly jagged edges on whatever art you imported.
Is there a way to import multiple files into Photoshop as vector smart objects? Then you'd have full control over the quality. Alternatively, is there a way to define the size of the vector files you're loading into layers and/or preserve their antialiasing?
I found a script that loads files into Photoshop as smart objects, but this has the same two problems the factory Bridge script has. It appears to do the exact same thing, but converts the layers to smart objects after they are imported.
The only way I currently know of to get vector smart objects into Photoshop is to do so manually one by one by copying from Illustrator or by dragging the files to an open Photoshop file. I'm looking for a way to automate the process.
I'm afraid doing it manually is the only way to get where you want to go. I've wrestled with this same issue for years and hope with every PS/Bridge update they'll add the option to load a stack of smart objects, but so far it's still old-school drag n' drop.
Hit the Adobe suggestion box... maybe with enough requests they'll finally add this as a native feature.

Do PDFs sizes change depending on the program that opens it?

I work for a company designing t-shirts. We get transfers printed by another company. The transfers we received for a recent design were too small, as I'm guessing they were printed portrait instead of landscape. The representative from the company we order prints from is claiming that it isn't their fault and that...
"Sometimes the images we receive are not the size you send. This is due to the different formats of the files and the way our computers convert them."
He says it's our fault because we didn't send the design dimensions to him along with the design. The file sent was a PDF. Am I correct in understanding that a PDF will always open at its intended size? I thought the size was embedded within the PDF. I'm fairly certain I'm correct, but don't want to basically call him out on it without knowing I'm correct. Do computers really convert PDFs so that they're a completely different size? That would be a terrible way for PDFs to operate, if that's the case.
The size is definitely embedded in the PDF, and the whole purpose of the PDF format is to serve as the final document format before printing. My advice is to send a note with the desired dimensions anyway; perhaps it's their machine that resizes the picture.
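For what it's worth, the embedded page size is easy to verify; here is a small Python sketch, assuming the third-party pypdf library and a placeholder file name:

from pypdf import PdfReader                 # third-party library: pip install pypdf

reader = PdfReader("design.pdf")            # placeholder file name
page = reader.pages[0]
width_pts = float(page.mediabox.width)      # PDF page sizes are stored in points
height_pts = float(page.mediabox.height)    # 72 points = 1 inch
print(f"{width_pts / 72:.2f} x {height_pts / 72:.2f} inches")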

How to avoid imageresizing if width and height is same as original?

Is there a way (a parameter) to stop ImageResizer from processing an image if the height and width are the same as the original?
If not, where and how do I cancel the scaling process in a plugin?
The ImageResizer does not scale images that are already the appropriate size. It does, however, decode them, strip metadata, and re-encode them into a web-compatible and web-efficient format (usually jpg or png).
If you want the ImageResizer to serve the original file, skipping the entire process, that's a different question, which I'll try to answer below.
Here's the primary challenge with that goal: To discover the width and height of the source image file, you must decode it - at least partially.
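For PNG, that partial decode is cheap because the dimensions sit at a fixed offset in the IHDR chunk; a minimal Python sketch just to illustrate the byte layout (not ImageResizer code):

import struct

def png_dimensions(path):
    # Width and height live in the IHDR chunk, which always follows the
    # 8-byte PNG signature: 4-byte length, the bytes 'IHDR', then width
    # and height as big-endian 32-bit integers.
    with open(path, "rb") as f:
        header = f.read(24)
    if header[:8] != b"\x89PNG\r\n\x1a\n" or header[12:16] != b"IHDR":
        raise ValueError("not a PNG")
    return struct.unpack(">II", header[16:24])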
This optimization is only useful (or possible) in limited circumstances:
The format of the source file allows you to parse the width & height without loading the entire file into memory: JPG/PNG, yes; TIFF, no; the 30+ formats supported by the FreeImageDecoder, no.
The source file is on local, low-latency disk storage and is accessible to IIS - this rules out UNC paths and the S3Reader, SqlReader, AzureReader, RemoteReader, MongoReader, etc. plugins.
No URL rewriting rules are in place.
No custom plugins are being used.
The image is already in a web-optimized format with proper compression settings, with metadata removed.
No other URL commands are being used.
No watermarking rules are in place.
You do not need to control cache headers.
You are 100% sure the image is not malicious (without re-encoding, you can't ensure the file can't be both a script and a bitmap).
In addition, unless you cached the result, this 'optimization' wouldn't, in fact, improve response time or server-side performance. Since the dimension data would need to be decoded separately, it would add uniform, significant overhead to all requests whether or not they happened to have a dimension match.
The only situation in which I see this being useful is if you spend a lot of time optimizing compression in Photoshop and don't want the ImageResizer to touch it unless needed. If you're that concerned, just don't apply the URL in that scenario. Or, set process=no to keep the original bytes as-is.
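For example, with the process command mentioned above appended to a (hypothetical) image URL:

/images/photo.jpg?process=no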
It's definitely possible to make a plugin to do this; but it's not something that many people would want to use, and I can't envision a usage scenario where it would be a net gain.
If you want to plunge ahead, just handle the Config.Current.Pipeline.PreHandleImage event and replace e.ResizeImageToStream with code that parses the stream returned by e.GetSourceImage(), applies your dimension logic (comparing against Config.Current.GetImageBuilder().GetFinalSize()), then resets the stream and copies it verbatim if desired, like this:
using (Stream source = e.GetSourceImage())
{
    StreamExtensions.CopyToStream(source, stream); // copies the source verbatim using a 4 KiB buffer
}
That might not handle certain scenarios (say, if the image actually needs to be resized 1 px smaller, but you're adding a 1 px border), but it's close. If you're picky, look at the source code for GetFinalSize and return the image bounds instead of the canvas bounds.

Resizable image resource with embedded cap insets

This is by no means a showstopper problem, just something I've been curious about for some time.
There is this well-known -[UIImage resizableImageWithCapInsets:] API for creating resizable images, which comes in really handy when texturing variable-size buttons and frames, especially on the retina iPad, and especially if you have lots of those and want to avoid bloating the app bundle with image resources.
The cap insets are typically constant for a given image, no matter what size we want to stretch it to. We can also put it this way: the cap insets are characteristic of a given image. So here is the thing: if they logically belong to the image, why don't we store them together with the image (as some kind of metadata), instead of having to specify them every time we create a new instance?
In daily practice, this could have serious benefits, mainly by eliminating the possibility of human error in the process. If the designer who creates the images could embed the appropriate cap values in the image file itself upon export, then developers would no longer have to write magic numbers in the code and keep them updated each time the image changes. The resizableImage API could read and apply the caps automatically. Heck, even a category on UIImage would do.
Thus my question is: is there any reliable way of embedding metadata in images?
I'd like to emphasize these two words:
reliable: I have already seen some entries on the optional PNG chunks but I'm afraid those are wiped out of existence once the iOS PNG optimizer kicks in. Or is there a way to prevent that? (along with letting the optimizer do its job)
embedding: I have thought of including the metadata in the filename similarly to what Apple does, i.e. "#2x", "~ipad" etc. but having kilometer-long names like "image-20.0-20.0-40.0-20.0#2x.png" just doesn't seem to be the right way.
Can anyone come up with a smart solution to this?
Android has a file type called nine-patch that is basically the image plus the metadata needed to construct it. Perhaps a class could be made to replicate it. http://developer.android.com/reference/android/graphics/NinePatch.html
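To illustrate the embedding idea itself (keeping in mind the caveat above that the iOS PNG optimizer may strip optional chunks), custom metadata can be written into a PNG text chunk. A minimal sketch using Python and the Pillow library, with placeholder file names and inset values:

from PIL import Image
from PIL.PngImagePlugin import PngInfo        # Pillow: pip install Pillow

# Write cap insets (top, left, bottom, right) into a custom text chunk.
img = Image.open("button.png")                # placeholder file name
meta = PngInfo()
meta.add_text("CapInsets", "20,20,40,20")     # placeholder values
img.save("button_with_caps.png", pnginfo=meta)

# Read it back; Pillow exposes PNG text chunks via the .text dictionary.
caps = Image.open("button_with_caps.png").text.get("CapInsets")
print(caps)                                   # "20,20,40,20"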

Ajax request to SQL returns hundreds of images to JSON -> CSS Sprites? Base64?

We are writing a Web-based events calendar with thousands of theatre shows, festivals, and concerts in a SQL database.
The user goes to the Website, performs a search, the SQL server returns JSON, and jQuery code on the client side displays the 200+ events.
Our problem is that each event has an image. If I return URLs, the browser has to make 200+ HTTP GET requests for these tiny (2-4Kb) images.
It is tempting to pack all the images into one CSS sprite, but since the user searches can return any combination of images, this would have to be done dynamically and we'd lose control over caching. Every search would have to return every image in a brand new CSS sprite. Also, we want to avoid any solution that requires dynamically reading or writing from a filesystem: that would be way too slow.
I'd rather return binary images as part of the JSON, and keep track of which images we already have cached and which we really need. But I can't figure out how. It seems to require converting to Base64? Is there any example code on this?
Thanks for any help you can give! :)
The web site you are using (StackOverflow) has to provide 50 avatars for the 50 questions shown on the questions page. How does it do it? The browser makes 50 requests.
I would say you had better implement pagination so that the list does not get too big. You can also load images in the background, or lazily as the user scrolls down to them.
Keeping track of downloaded images is not our job, it is the browser's. Our job is to make sure the URL is unique and consistent and the response headers allow caching.
Also, Base64 will make the stream at least 33% longer, and processing it on the client side is not trivial. I have never seen an implementation, but probably someone has written some JavaScript for it.
I believe all you need is just pagination.
It looks like the original poster has proceeded with essentially this solution on their own; however, based on their comment about 33% space overhead, I don't think they noticed an unexpected phenomenon you get when Base64-ing a bunch of images into one response entity and then gzipping it: believe it or not, this can easily produce an overall transfer size that is smaller than the total of the unencoded image files, even if each unencoded image was also separately gzipped.
I've covered this JSON technique for accelerated image presentation in depth here as an alternative to CSS sprites, complete with a couple of live samples. It aims to show that it is often a superior technique to CSS sprites.
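As a rough server-side sketch of the idea (Python purely for illustration; the function and field names are made up): Base64-encode each image into the JSON payload and gzip the whole response, so a single request carries all the images.

import base64
import gzip
import json

def build_image_payload(image_paths):
    # Pack several small images into a single JSON response: each entry
    # carries an id and the Base64-encoded bytes, so the browser needs
    # only one request. Names and structure are made up for illustration.
    events = []
    for path in image_paths:
        with open(path, "rb") as f:
            events.append({
                "id": path,
                "image": base64.b64encode(f.read()).decode("ascii"),
            })
    body = json.dumps({"events": events}).encode("utf-8")
    return gzip.compress(body)   # gzip claws back much of the 33% Base64 overhead

# On the client, each entry can be displayed with a data URI, e.g.
#   img.src = "data:image/jpeg;base64," + entry.image;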
Data is never random. For example, you could name your sprites
date_genre_location.jpg
or however you organise your searches. This might be worth it. Maybe.
You'll need to do the math.
Here is what we decided.
Creating sprites has some downsides:
To reduce image loss, we need to store the originals as PNGs instead of JPEGs on the server. This adds to the slowness, and there's already quite a bit of slowness in creating dynamic CSS sprites with ImageMagick.
To reduce image size, the final CSS sprite must be a JPG, but JPGs are lossy and with a grid of images, things get weird at the edges as JPG tries to blend and compress. We can fix that by putting a blank border around all the images but even so, this adds complexity.
With CSS sprites, it becomes harder to do intelligent caching; we have to pack up all the images regardless of what we've sent in the past (or we have to keep all the old CSS sprites regardless of whether they contain many or few images we still need).
I'm afraid we have too many combinations of date, category, and location to precompute CSS sprites, although surely we could handle part of the data this way.
Thus our solution is to Base64-encode the images and actually send each image one by one. Despite the 33% overhead, it is far less complex to code and manage, and when you take caching issues into account, it may even mean less data transfer.
Thank you all for your advice on this!