I have a file of raw image data, and I need to convert that raw data to a JPEG image in Objective-C.
STEPS:
1) Read the file containing the raw data into an NSString.
2) Pass the string to a JPEG encoder.
3) Create a JPEG image.
Can you please guide me on how to achieve this?
Is there any library available in iPhone OS to encode to JPEG?
This is how you create a UIImage from JPEG data:
UIImage *img = [UIImage imageWithData:[NSData dataWithContentsOfFile:…]];
And this is how you create a JPEG representation of a UIImage:
NSData *data = UIImageJPEGRepresentation(anImage, compressionQuality);
…but only after writing this did I realize you probably want to create a JPEG image from raw image sample data? In that case you'll probably have to create a CGImage with the correct image sample format and supply the bitmap data using a data provider; see CGImageCreate. (Hopefully somebody can correct me or come up with sample code.) Then you can easily create a UIImage from the CGImage and use the code above.
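Along those lines, here is a minimal sketch of the CGImageCreate route. It assumes the raw file contains tightly packed 32-bit RGBA samples and that the width and height are known in advance (the path and dimensions below are placeholders); adjust the bitmap flags to match your actual sample format.

```objc
// Sketch: wrap raw pixel data in a CGImage, then JPEG-encode via UIImage.
// Assumptions: 8 bits per component, 4 components (RGBA), no row padding.
NSData *raw = [NSData dataWithContentsOfFile:@"/path/to/rawfile"];
size_t width = 640, height = 480;           // assumed dimensions
size_t bitsPerComponent = 8, bitsPerPixel = 32;
size_t bytesPerRow = width * 4;

CGDataProviderRef provider =
    CGDataProviderCreateWithCFData((__bridge CFDataRef)raw);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef cgImage = CGImageCreate(width, height,
                                   bitsPerComponent, bitsPerPixel, bytesPerRow,
                                   colorSpace,
                                   kCGBitmapByteOrderDefault | kCGImageAlphaLast,
                                   provider, NULL, false,
                                   kCGRenderingIntentDefault);

UIImage *image = [UIImage imageWithCGImage:cgImage];
NSData *jpeg = UIImageJPEGRepresentation(image, 0.9); // the JPEG bytes

CGImageRelease(cgImage);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
```

From there, writing `jpeg` to disk with `writeToFile:atomically:` gives you the JPEG file.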
I have an NSMutableDictionary containing two NSStrings and one NSImage, which I am trying to write to a file. However, the output file is huge compared to the input image file (1.1 MB vs. 88 kB). The image is loaded via an Image Well drag and drop.
My first try was:
NSImage *image = _imageView.image;
[dictionary setObject:image forKey:@"Image"];
[NSKeyedArchiver archiveRootObject:dictionary toFile:filePath];
Result: 1.1MB
Second try:
NSImage *image = _imageView.image;
NSData *imageData = [image TIFFRepresentation];
NSBitmapImageRep *encodedImage = [NSBitmapImageRep imageRepWithData:imageData];
[dictionary setObject:encodedImage forKey:@"Image"];
[NSKeyedArchiver archiveRootObject:dictionary toFile:filePath];
Result: still 1.1MB
At first I thought that NSBitmapImageRep would keep the image in an (at least relatively) compressed format, but the file size does not differ one bit.
How should I write the NSMutableDictionary to a file so that it does not exceed the initial image file size by so much? Or how should I store the image efficiently in the dictionary in the first place?
I don't know if it matters, but the original format is png.
When you drop an image into an Image View, it gets loaded into an NSImage as a bitmap (color data for every pixel). It is no longer compressed the way the original image you dropped into it (PNG in your case) was. PNG uses lossless compression, and its data footprint will in most cases be smaller than an uncompressed bitmap or TIFF representation of the image.
Then when you request the TIFF representation of the image, it no longer has anything to do with the original PNG image you loaded, apart from the pixel data itself.
If you want to store back the image in a compressed format or keep the original image, you can try one of the following:
Ask for a compressed TIFF representation using TIFFRepresentationUsingCompression:factor:. It will still be a different format, and you will lose all metadata.
Re-compress the image in your desired format, e.g. PNG, JPEG, etc. (for instance with the Quartz API), and store the compressed data in your dictionary. You will still need to preserve the metadata yourself if you need it.
Keep a reference to the original image file, or store its data: override the Image View's drag/drop methods and keep track of the original image's URL, or load the file and store the raw data in its original format, then save that into your dictionary. You will get an exact copy of the original image and won't have to deal with format conversions and metadata copying.
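The re-compression option can be sketched like this; it stores the PNG-encoded NSData in the dictionary instead of the NSImage itself. The `dictionary` and `filePath` variables are the ones from the question, and "Image" is just the key used there.

```objc
// Sketch: re-encode the bitmap as PNG before archiving, so the archive
// holds compressed data rather than raw pixels. Metadata is not preserved.
NSImage *image = _imageView.image;
NSBitmapImageRep *rep =
    [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];
NSData *pngData = [rep representationUsingType:NSPNGFileType
                                    properties:@{}];
[dictionary setObject:pngData forKey:@"Image"];   // store NSData, not NSImage
[NSKeyedArchiver archiveRootObject:dictionary toFile:filePath];
```

When reading the archive back, recreate the image with `[[NSImage alloc] initWithData:pngData]`.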
I have calculated the image size in bytes by converting the image into NSData, and its data length got the wrong value.
NSData *data = UIImageJPEGRepresentation(image, 0.5);
NSLog(@"image size in bytes %lu", (unsigned long)data.length);
Actually, data.length is not returning the wrong value here; it's just the result of the lossy conversion and reversion from a UIImage to NSData.
Setting the compressionQuality to the lowest compression possible, 1.0, in UIImageJPEGRepresentation will not return the original image. Although the image metadata is stripped in this process, the function can and usually will yield an object larger than the original. Note that this increase in file size does NOT increase the quality of the image over the compressed original either. JPEGs are highly compressed to begin with, which is why they are used so often, and the function is uncompressing the image and then recompressing it. It's kind of like getting botox after age has stretched your body out: it might look similar to the original, but the insides are just not as good as they used to be.
You could use a lower compressionQuality conditionally on larger files, staying close to 1.0, as the quality drops off quickly below that. Other than that, depending on the final purpose of your images, the only other option would be to resize the image or adjust its resolution, perhaps in addition to adjusting the compression ratio. Such changes can dramatically curtail data usage. Web and mobile usage typically doesn't need as high a resolution as images meant for digital print.
You can write some code that adjusts each image and NSData representation only as much as needed to fit its individual data constraint.
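As a minimal sketch of that idea, the loop below lowers the quality stepwise until the encoded data fits a byte budget. The `maxBytes` limit and the step size are assumptions for illustration, not values from the original post.

```objc
// Sketch: reduce JPEG quality until the data fits a size constraint.
NSUInteger maxBytes = 500 * 1024;          // assumed budget: 500 kB
CGFloat quality = 1.0;
NSData *data = UIImageJPEGRepresentation(image, quality);
while (data.length > maxBytes && quality > 0.1) {
    quality -= 0.1;                        // step down and re-encode
    data = UIImageJPEGRepresentation(image, quality);
}
```

If the loop bottoms out and the data is still too large, resizing the image before re-encoding is the next lever to pull.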
• Background :
We are developing an AFP-to-PDF tool. It involves conversion of an AFP (Advanced Function Presentation) file to PDF.
• Detailed Problem statement :
We have an AFP file with an embedded TIFF image. The image object is described in Function Set 45, represented somewhat like this:
Image Content
  Begin Tile
    Image Encoding Parameter – TIFF LZW
    Begin Transparency Mask
      Image Encoding Parameter – G4MMR
      Image Data Elements
    End Transparency Mask
    Image Data Elements (IDE Size 32) – 4 bands: CMYK
  End Tile
End Image Content
We want to write this tiled image to PDF using the Java/iText API.
As of now, we can write the G4MMR image, but we are not able to apply the CMYK color band data (blue color) to this image.
• Solution tried :
The code to write G4MMR image goes as follows –
ByteArrayOutputStream decode = saveAsTIFF(<width>, <height>, <imageByteData>);
RandomAccessFileOrArray ra = new RandomAccessFileOrArray(decode.toByteArray());
int pages = TiffImage.getNumberOfPages(ra);
Image img1 = null;
for (int i1 = 1; i1 <= pages; i1++) {
    img1 = TiffImage.getTiffImage(ra, i1);
}
img1.scaleAbsolute(256, 75);
document.add(img1);
The saveAsTIFF method is given here:
http://www.jpedal.org/PDFblog/2011/08/ccitt-encoding-in-pdf-files-converting-pdf-ccitt-data-into-a-tiff/
As mentioned, we are not able to apply the CMYK 4-band image color data to this G4MMR image.
• Technology stack with versions of each component :
1. JDK 1.6
2. itextpdf-5.1
-- Umesh Pathak
The AFP resource you're showing is a TIFF CMYK image compressed with LZW. This image also uses a "transparency mask", which is compressed with G4MMR (a slightly different encoding than the traditional fax-style G4).
So the image data is already in the CMYK colorspace; each band (C, M, Y, K) is compressed separately using simple LZW encoding and should not be too difficult to extract and store as a basic TIFF CMYK file. You'll also have to convert the transparency mask to G4 or raw data in order to use it in a PDF file to mask the CMYK image.
If you want better control over the PDF output, I suggest you take a look at PDFlib.
You need to add a CMYK colorspace to your image before adding it to the PDF file. However, I am afraid this might not be fully supported in iText. A workaround for you could be to convert your image to the default RGB colorspace before adding it to the PDF file; however, this will probably imply some quality loss for your image.
I have a UIImagePickerController, and I'm saving my image to my app just with UIImagePickerControllerOriginalImage and:
[self.fileManager createFileAtPath:aPath contents:UIImageJPEGRepresentation(image, 1.f) attributes:nil];
Result :
original image from Photo.app -> 2.2 MB
new saved image from my app -> 5.3 MB with the JPEG representation & 10.8 MB with the PNG representation!
So my question is quite simple : why ? And how to reach the Photo.app size ?
Thanks for your help :)
As you probably know, the second parameter passed to UIImageJPEGRepresentation defines the compression quality (1 being the highest). Because the image is decompressed back to raw data and then re-compressed (JPEG is a compressed image format), the result may be worse compression (a larger file), and of course the image quality will not get any better. Try lowering the parameter to something between 0.0 and 1.0 and see where you get the best match in file size (it will be unique for each image processed, so try to find a good value in the middle).
Objective-C / Cocoa:
I need to load the image from a JPG file into a two-dimensional array so that I can access each pixel. I am trying (unsuccessfully) to load the image into an NSBitmapImageRep. I have tried several variations on the following two lines of code:
NSString *filePath = [NSString stringWithFormat:@"%@%@", @"/Users/adam/Documents/phoneimages/", [outLabel stringValue]]; //this coming from a window control
NSImageRep *controlBitmap = [[NSImageRep alloc] imageRepWithContentsOfFile:filePath];
With the code shown, I get a runtime error: -[NSImageRep imageRepWithContentsOfFile:]: unrecognized selector sent to instance 0x100147070.
I have tried replacing the second line of code with:
NSImage *controlImage = [[NSImage alloc] initWithContentsOfFile:filePath];
NSBitmapImageRep *controlBitmap = [[NSBitmapImageRep alloc] initWithData:controlImage];
But this yields a compiler error ('incompatible type') saying that initWithData: wants an NSData argument, not an NSImage.
I have also tried various other ways to get this done, but all are unsuccessful either due to compiler or runtime error. Can someone help me with this? I will eventually need to load some PNG files in the same way (so it would be nice to have a consistent technique for both).
And if you know of an easier / simpler way to accomplish what I am trying to do (i.e., get the images into a two-dimensional array), rather than using NSBitmapImageRep, then please let me know! And by the way, I know the path is valid (confirmed with fileExistsAtPath:), and the filename in outLabel is a file with a .jpg extension.
Thanks for any help!
Easy!
NSImage *controlImage = [[NSImage alloc] initWithContentsOfFile:filePath];
NSBitmapImageRep *imageRep = (NSBitmapImageRep *)[[controlImage representations] objectAtIndex:0];
Then to get the actual bitmap pixel data:
unsigned char *pixelData = [imageRep bitmapData];
If your image has multiple representations (it probably doesn't), you can get them out of that same array. The same code will work for your .png images.
Edit:
Carl has a better answer. This is only good if you also want to manipulate the image in some way, like scaling or changing color mode.
Original:
I would probably use Core Graphics for this. NSImage and most NSImageRep subclasses were not designed to keep a pixel array sitting around. They convert source image data into pixels only when drawn.
When you create a CGBitmapContext, you can pass it a buffer of pixels to use. That is your two dimensional array, with whatever row bytes, color depth, pixel format and other properties you specify.
You can initialize a CGImage with JPG or PNG data using CGImageCreateWithJPEGDataProvider or CGImageCreateWithPNGDataProvider respectively.
Once you draw the image into the context with CGContextDrawImage the original buffer you passed to CGBitmapContextCreate is now filled with the pixel data of the image.
1. Create a CGDataProvider with the image data.
2. Create a CGImageRef with the data provider.
3. Create a buffer large enough for the image pixels.
4. Create a CGBitmapContext the size of the image, backed by the pixel buffer.
5. Draw the image into the bitmap context.
6. Access the pixels in the buffer.
If you want to use NSImage instead of CGImage, you can create an NSGraphicsContext from a CGContext with graphicsContextWithGraphicsPort:flipped: and set it as the current context. That basically replaces steps 1 and 2 above with whatever code you want to use to make an NSImage.
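The steps above can be sketched as follows for a JPEG file. The `path` variable is a placeholder, the buffer format (32-bit RGBA) is an assumption, and error handling and cleanup are omitted for brevity.

```objc
// Steps 1-2: data provider and CGImage from a JPEG file on disk.
CGDataProviderRef provider =
    CGDataProviderCreateWithFilename([path fileSystemRepresentation]);
CGImageRef image =
    CGImageCreateWithJPEGDataProvider(provider, NULL, false,
                                      kCGRenderingIntentDefault);

// Step 3: buffer sized for 8-bit-per-component RGBA pixels.
size_t width  = CGImageGetWidth(image);
size_t height = CGImageGetHeight(image);
size_t bytesPerRow = width * 4;
unsigned char *pixels = malloc(height * bytesPerRow);

// Steps 4-5: bitmap context backed by the buffer; drawing fills it.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx =
    CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                          colorSpace, kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image);

// Step 6: pixels[y * bytesPerRow + x * 4] is the red component at (x, y),
// with green, blue, and alpha in the following three bytes.
```

Swapping in CGImageCreateWithPNGDataProvider at step 2 gives you the same pixel access for PNG files.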
imageRepWithContentsOfFile: is a class method, not an instance method.
Correct way to use it:
[NSBitmapImageRep imageRepWithContentsOfFile:filePath];