I have an NSMutableDictionary containing two NSStrings and one NSImage, which I am trying to write to a file. However, the output file is huge compared to the input image file (88 kB vs. 1.1 MB). The image is loaded via an Image Well drag and drop.
My first try was:
NSImage * image = _imageView.image;
[dictionary setObject:image forKey:@"Image"];
[NSKeyedArchiver archiveRootObject:dictionary toFile:filePath];
Result: 1.1MB
Second try:
NSImage * image = _imageView.image;
NSData * imageData = [image TIFFRepresentation];
NSBitmapImageRep * encodedImage = [NSBitmapImageRep imageRepWithData:imageData];
[dictionary setObject:encodedImage forKey:@"Image"];
[NSKeyedArchiver archiveRootObject:dictionary toFile:filePath];
Result: still 1.1MB
At first I thought that NSBitmapImageRep would keep the image in an (at least relatively) compressed format, but the file size does not differ one bit.
How should I write the NSMutableDictionary to a file so that it does not exceed the initial image file size by so much? Or how should I store the image efficiently in the dictionary in the first place?
I don't know if it matters, but the original format is png.
When you drop an image into an Image View, it gets loaded into an NSImage as a bitmap (color data for every pixel). It is no longer compressed the way the original image you dropped into it (PNG in your case) was. PNG uses lossless compression, and its data footprint will in most cases be smaller than an uncompressed bitmap or TIFF representation of the image.
Then when you request the TIFF representation of the image, it no longer has anything to do with the original PNG you loaded, apart from the pixel data itself.
If you want to store back the image in a compressed format or keep the original image, you can try one of the following:
Ask for a compressed TIFF representation using TIFFRepresentationUsingCompression:factor:. The result will still be a different format, and you will lose all metadata.
Re-compress the image in your desired format, e.g. PNG, JPEG, etc. (for instance with the Quartz API), and store the compressed data in your dictionary. You will still need to preserve the metadata yourself if you need it.
Keep a reference to the original image file, or store its raw data. By overriding the image view's drag/drop methods you can keep track of the original image's URL, or load the file and store its bytes in the original format, then save that into your dictionary. You get an exact copy of the original image and don't have to deal with format conversion or metadata copying.
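A minimal sketch of the second option, re-encoding the bitmap as PNG before archiving (the `image`, `dictionary`, and `filePath` variables are assumed from the question's code):

```objc
// Re-encode the bitmap as PNG and store the compressed NSData
// instead of the NSImage/NSBitmapImageRep object itself.
NSData *tiffData = [image TIFFRepresentation];
NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:tiffData];
NSData *pngData = [rep representationUsingType:NSPNGFileType properties:@{}];
[dictionary setObject:pngData forKey:@"Image"];
[NSKeyedArchiver archiveRootObject:dictionary toFile:filePath];
```

The archiver then only wraps the already-compressed PNG bytes, so the file size should end up close to the PNG's size rather than the uncompressed bitmap's.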
I'm reading a DICOM image and accessing its pixel array, then saving that array back to DICOM format using sitk.WriteImage. But there is a difference between the original image that was read and the same image after being written. How can I get the same image on display? I'm using RadiAnt Viewer for visualization of DICOM images, and I want the output to match the input. The code, along with the input and output images, is given below:
import pydicom
import SimpleITK as sitk

# Reading a DICOM image
Image = pydicom.dcmread('Input.dcm')
output = Image.pixel_array

# Saving the image into another folder
img = sitk.GetImageFromArray(output)
sitk.WriteImage(img, 'output.dcm')
The DICOM files themselves are too large to attach, so I'm including screenshots of the input and output images instead.
I'm going to answer my own question here. The Photometric Interpretation of my original image was MONOCHROME1, but after converting the image to a pixel array and saving it again in .dcm format, some of its attributes were changed; one of them was the Photometric Interpretation, which changed from MONOCHROME1 to MONOCHROME2. I changed this attribute on the saved image and then saved it again, as follows:
image = pydicom.dcmread('output.dcm')
elem = image[0x0028, 0x0004]
elem.value = 'MONOCHROME1'
image.save_as('P1_L_CC.dcm', write_like_original=False)
I am using PDFBox 2.0.8 to replace an image in my application. I am able to extract the image and replace it with another image of the same dimensions. However, the PDF does not shrink when the image does. For example, refer to the documents/images in the links below. The original PDF is 93 KB, the extracted image is 91 KB, and the replacement image is 54 KB, yet the PDF after image replacement is still 92 KB.
Original Document = http://35.200.192.44/download?fileName=/outbox/pdf/10_cert.pdf
Extracted Image = http://35.200.192.44/download?fileName=/outbox/pdf/image0.jpg
Replacement image = http://35.200.192.44/download?fileName=/outbox/pdf/image1.jpg
PDF after replacement = http://35.200.192.44/download?fileName=/outbox/pdf/10_cert1.pdf.
The change in the size of the PDF after replacement is not in proportion to the change in image size. The code snippet used for image replacement is:
BufferedImage buffered_replacement_image_file = ImageIO.read(new File(replacement_image_file));
PDImageXObject replacement_img = JPEGFactory.createFromImage(doc, buffered_replacement_image_file);
resources.put(xObjectName, replacement_img);
The images in your two example PDFs are identical. This most likely is due to the way you load the image data: first creating a BufferedImage from the file and then creating a PDImageXObject from that BufferedImage. This expands the input image data to a plain bitmap, which JPEGFactory.createFromImage then re-compresses to JPEG, producing identical output in both cases.
To use the JPEG data as they initially are, try this approach instead:
PDImageXObject replacement_img = JPEGFactory.createFromStream(doc, new FileInputStream(replacement_image_file));
resources.put(xObjectName, replacement_img);
or, if the replacement_image_file is not necessarily a JPEG file, like this
PDImageXObject replacement_img = PDImageXObject.createFromFileByExtension(new File(replacement_image_file), doc);
resources.put(xObjectName, replacement_img);
If this doesn't help, you most likely have other issues in your code, too, and need to show more of it.
I have calculated the image size in bytes by converting the image into NSData, but its data length gives an unexpected value.
NSData *data = UIImageJPEGRepresentation(image,0.5);
NSLog(@"image size in bytes %lu", (unsigned long)data.length);
Actually, data.length here is not returning a wrong value; it's just the result of the lossy conversion/reversion from a UIImage to NSData.
Setting compressionQuality to 1.0, the lowest compression possible, in UIImageJPEGRepresentation will not return the original image. Although the image metadata is stripped in this process, the function can and usually will yield an object larger than the original. Note that this increase in file size does NOT increase the quality of the image over the compressed original, either. JPEGs are highly compressed to begin with, which is why they are used so often, and the function is uncompressing and then recompressing the image. It's kind of like getting botox after age has stretched your body out: it might look similar to the original, but the insides are just not as good as they used to be.
You could conditionally use a lower compressionQuality on larger files, staying close to 1.0, since quality drops off quickly below that. Other than that, depending on the final purpose of your images, the only remaining option is to resize the image or adjust its resolution, perhaps in addition to adjusting the compression ratio. That change cuts data usage dramatically. Web and mobile usage typically doesn't need the resolution of images meant for digital print.
You can write code that compresses each image's NSData representation only as much as needed to fit its individual size constraint.
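A minimal sketch of that idea, assuming a target byte budget (the helper name, the 0.1 step, and the quality floor are made up for illustration):

```objc
// Lower the JPEG quality stepwise until the data fits maxBytes,
// or until a floor quality is reached. Purely illustrative.
NSData *JPEGDataWithMaxLength(UIImage *image, NSUInteger maxBytes) {
    CGFloat quality = 1.0;
    NSData *data = UIImageJPEGRepresentation(image, quality);
    while (data.length > maxBytes && quality > 0.15) {
        quality -= 0.1;
        data = UIImageJPEGRepresentation(image, quality);
    }
    return data;
}
```

If even the floor quality overshoots the budget, the next lever would be scaling the image down before re-encoding.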
I have a file of raw data; it's image data. Now I need to convert the raw data to a JPEG image in Objective-C.
STEPS:
1) Read the file containing the raw data into NSString.
2) Encode the string to JPEG encoder
3) Create an JPEG image
Can you please guide me how to achieve this?
Is there any library available in iPhone OS to encode into JPEG.
This is how you create a UIImage from JPEG data:
UIImage *img = [UIImage imageWithData:[NSData dataWithContentsOfFile:…]];
And this is how you create a JPEG representation of a UIImage:
NSData *data = UIImageJPEGRepresentation(anImage, compressionQuality);
…but only after writing this I realized you probably want to create a JPEG image from raw image sample data? In that case you'll probably have to create a CGImage with the correct sample format, supplying the bitmap data through a data provider; see CGImageCreate. (Hopefully somebody can correct me or come up with sample code.) Then you can easily create a UIImage from the CGImage and use the code above.
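A sketch of that CGImageCreate route, assuming the raw data is 32-bit RGBA with known dimensions (the `rawBytes`, `width`, and `height` variables are assumptions):

```objc
// Wrap the raw bytes in a data provider and describe their layout.
CGDataProviderRef provider =
    CGDataProviderCreateWithData(NULL, rawBytes, width * height * 4, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef cgImage = CGImageCreate(width, height,
                                   8,          // bits per component
                                   32,         // bits per pixel
                                   width * 4,  // bytes per row
                                   colorSpace, kCGImageAlphaPremultipliedLast,
                                   provider, NULL, false, kCGRenderingIntentDefault);
// Wrap it in a UIImage and re-encode as JPEG.
NSData *jpegData = UIImageJPEGRepresentation([UIImage imageWithCGImage:cgImage], 0.9);
CGImageRelease(cgImage);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
```

The bits-per-component, bytes-per-row, and alpha settings must match however the raw file was actually produced.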
Objective-C / Cocoa:
I need to load the image from a JPG file into a two dimensional array so that I can access each pixel. I am trying (unsuccessfully) to load the image into a NSBitmapImageRep. I have tried several variations on the following two lines of code:
NSString *filePath = [NSString stringWithFormat:@"%@%@", @"/Users/adam/Documents/phoneimages/", [outLabel stringValue]]; //this coming from a window control
NSImageRep *controlBitmap = [[NSImageRep alloc] imageRepWithContentsOfFile:filePath];
With the code shown, I get a runtime error: -[NSImageRep imageRepWithContentsOfFile:]: unrecognized selector sent to instance 0x100147070.
I have tried replacing the second line of code with:
NSImage *controlImage = [[NSImage alloc] initWithContentsOfFile:filePath];
NSBitmapImageRep *controlBitmap = [[NSBitmapImageRep alloc] initWithData:controlImage];
But this yields a compiler error ('incompatible type') saying that initWithData: wants an NSData argument, not an NSImage.
I have also tried various other ways to get this done, but all are unsuccessful either due to compiler or runtime error. Can someone help me with this? I will eventually need to load some PNG files in the same way (so it would be nice to have a consistent technique for both).
And if you know of an easier / simpler way to accomplish what I am trying to do (i.e., get the images into a two-dimensional array), rather than using NSBitmapImageRep, then please let me know! And by the way, I know the path is valid (confirmed with fileExistsAtPath) -- and the filename in outLabel is a file with .jpg extension.
Thanks for any help!
Easy!
NSImage *controlImage = [[NSImage alloc] initWithContentsOfFile:filePath];
NSBitmapImageRep *imageRep = (NSBitmapImageRep *)[[controlImage representations] objectAtIndex:0];
Then to get the actual bitmap pixel data:
unsigned char *pixelData = [imageRep bitmapData];
If your image has multiple representations (it probably doesn't), you can get them out of that same array. The same code will work for your .png images.
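To treat that buffer as a two-dimensional array, the rep's own geometry has to be used (the `x` and `y` variables are assumed):

```objc
// Address the sample at (x, y). Rows may be padded, so use
// bytesPerRow rather than assuming width * samplesPerPixel.
NSInteger bytesPerRow = [imageRep bytesPerRow];
NSInteger samplesPerPixel = [imageRep samplesPerPixel];
unsigned char *p = pixelData + y * bytesPerRow + x * samplesPerPixel;
// For an RGB(A) rep, p[0], p[1], p[2] are the red, green, and blue samples.
```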
Edit:
Carl has a better answer. This is only good if you also want to manipulate the image in some way, like scaling or changing color mode.
Original:
I would probably use Core Graphics for this. NSImage and most NSImageRep subclasses were not designed to keep a pixel array sitting around; they convert source image data into pixels only when drawn.
When you create a CGBitmapContext, you can pass it a buffer of pixels to use. That is your two dimensional array, with whatever row bytes, color depth, pixel format and other properties you specify.
You can initialize a CGImage with JPG or PNG data using CGImageCreateWithJPEGDataProvider or CGImageCreateWithPNGDataProvider respectively.
Once you draw the image into the context with CGContextDrawImage the original buffer you passed to CGBitmapContextCreate is now filled with the pixel data of the image.
1. Create a CGDataProvider with the image data.
2. Create a CGImageRef with the data provider.
3. Create a buffer large enough for the image pixels.
4. Create a CGBitmapContext that is the size of the image with the pixel buffer.
5. Draw the image into the bitmap context.
6. Access the pixels in the buffer.
If you want to use NSImage instead of CGImage, you can create an NSGraphicsContext from a CGContext with graphicsContextWithGraphicsPort:flipped: and set it as the current context. That basically replaces steps 1 and 2 above with whatever code you want to use to make an NSImage.
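Those steps might look something like this for an RGBA buffer (the `jpegData` variable and layout choices are assumptions):

```objc
// 1-2. Data provider and CGImage from JPEG data.
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)jpegData);
CGImageRef image = CGImageCreateWithJPEGDataProvider(provider, NULL, false,
                                                     kCGRenderingIntentDefault);
// 3. Pixel buffer: 4 bytes per pixel (RGBA).
size_t width = CGImageGetWidth(image), height = CGImageGetHeight(image);
unsigned char *pixels = calloc(width * height * 4, 1);
// 4. Bitmap context backed by that buffer.
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                         space, kCGImageAlphaPremultipliedLast);
// 5. Draw; the buffer is now filled with decoded pixel data.
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image);
// 6. pixels[(y * width + x) * 4] is the first component at (x, y).
```

Remember to release the context, color space, image, provider, and buffer when done.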
imageRepWithContentsOfFile: is a class method, not an instance method, which is why calling it on an alloc'ed instance raises "unrecognized selector".
The correct way to use it:
[NSBitmapImageRep imageRepWithContentsOfFile:filePath];