I am using the following code to save a frame of a movie to my desktop:
NSCIImageRep *imageRep = [NSCIImageRep imageRepWithCIImage:[CIImage imageWithCVImageBuffer:imageBuffer]];
NSImage *image = [[[NSImage alloc] initWithSize:[imageRep size]] autorelease];
[image addRepresentation:imageRep];
CVBufferRelease(imageBuffer);
NSArray *representations = [image representations];
NSData *bitmapData = [NSBitmapImageRep representationOfImageRepsInArray:representations usingType:NSJPEGFileType properties:nil];
[bitmapData writeToFile:@"/Users/ricky/Desktop/MyImage.jpeg" atomically:YES];
At the second-to-last line of code, I receive the following messages in the console, and nothing is saved to the desktop:
<Error>: CGImageDestinationFinalize image destination does not have enough images
CGImageDestinationFinalize failed for output type 'public.jpeg'
The NSImage is still an allocated object for the entire method call, so I'm not sure why I am receiving complaints about an insufficient number of images.
I'd appreciate any help.
Thanks in advance,
Ricky.
I think the source of the problem is that you're passing an array of NSCIImageRep objects to representationOfImageRepsInArray:usingType:properties:, which I believe expects an array of NSBitmapImageRep objects.
What you want to do is create an NSBitmapImageRep from your CIImage. Then you can use that to write to disk. That would be roughly:
CIImage *myImage = [CIImage imageWithCVImageBuffer:imageBuffer];
NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithCIImage:myImage];
NSData *jpegData = [bitmapRep representationUsingType:NSJPEGFileType properties:nil];
[jpegData writeToFile:@"/Users/ricky/Desktop/MyImage.jpeg" atomically:YES];
Of course, you'd want to handle any error cases and probably pass a properties dictionary to fine-tune the JPEG creation.
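For example, the JPEG compression quality can be set through that properties dictionary (0.8 here is just an arbitrary value):
NSDictionary *props = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.8f]
                                                  forKey:NSImageCompressionFactor];
NSData *jpegData = [bitmapRep representationUsingType:NSJPEGFileType properties:props];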
I'm sorry, I don't really know why your code doesn't work, but here is a different approach (and, I think, a more efficient one than your CVImageBuffer to CIImage to NSCIImageRep to NSImage to NSData chain), albeit at a slightly lower level:
CVImageBuffer to CGImage
CGImage to jpg file
I don't have ready-made code to do this, but extracting the right stuff from those examples should be straightforward; a rough sketch of the idea is below.
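The sketch below is untested and assumes imageBuffer is a CVPixelBuffer in 32BGRA format (e.g. because the frame was requested with k32BGRAPixelFormat) and manual memory management:
#import <CoreVideo/CoreVideo.h>
#import <ImageIO/ImageIO.h>
#import <CoreServices/CoreServices.h>   // kUTTypeJPEG

// CVImageBuffer -> CGImage: wrap the pixel buffer's bytes in a bitmap context.
CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
    colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst); // BGRA layout
CGImageRef cgImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

// CGImage -> JPEG file via ImageIO.
NSURL *url = [NSURL fileURLWithPath:@"/Users/ricky/Desktop/MyImage.jpeg"];
CGImageDestinationRef destination =
    CGImageDestinationCreateWithURL((CFURLRef)url, kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(destination, cgImage, NULL);
CGImageDestinationFinalize(destination);
CFRelease(destination);
CGImageRelease(cgImage);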
I've got an app that uses Metal to do some rendering to screen (with a CAMetalLayer, not a MTKView, by necessity), and I'd like to provide the user with the option of saving a snapshot of the result to disk. Attempting to follow the answer at https://stackoverflow.com/a/47632198/2752221 while translating to Objective-C, I first wrote a commandBuffer completion callback like so (note this is manual retain/release code, not ARC; sorry, legacy code):
[commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> buffer) {
[self performSelectorOnMainThread:@selector(saveImageTakingDrawable:) withObject:[drawable retain] waitUntilDone:NO];
optionKeyPressedAndUnhandled_ = NO;
}];
I do this immediately after calling [commandBuffer presentDrawable:drawable]; my id <CAMetalDrawable> drawable is still in scope. Here is my implementation of saveImageTakingDrawable::
- (void)saveImageTakingDrawable:(id <CAMetalDrawable>)drawable
{
// We need to have an image to save
if (!drawable) { NSBeep(); return; }
id<MTLTexture> displayTexture = drawable.texture;
if (!displayTexture) { NSBeep(); return; }
CIImage *ciImage = [CIImage imageWithMTLTexture:displayTexture options:nil];
// release the metal texture as soon as we can, to free up the system resources
[drawable release];
if (!ciImage) { NSBeep(); return; }
NSCIImageRep *rep = [NSCIImageRep imageRepWithCIImage:ciImage];
if (!rep) { NSBeep(); return; }
NSImage *nsImage = [[[NSImage alloc] initWithSize:rep.size] autorelease];
[nsImage addRepresentation:rep];
NSData *tiffData = [nsImage TIFFRepresentation];
if (!tiffData) { NSBeep(); return; }
... filesystem cruft culminating in ...
if ([tiffData writeToFile:filePath options:NSDataWritingWithoutOverwriting error:nil])
{
// play a sound to acknowledge saving
[[NSSound soundNamed:@"Tink"] play];
return;
}
NSBeep();
return;
}
The result is a "Tink" sound and a 7.8 MB .tif file of sensible dimensions (1784x1090), but it's transparent, and there is no usable image data in it; viewing the file in Hex Fiend shows that the whole file is all zeros except fairly brief header and footer sections.
I suspect that the fundamental method is flawed for some reason. I get several console logs when I attempt this snapshot:
2020-06-04 18:20:40.203669-0400 MetalTest[37773:1065740] [CAMetalLayerDrawable texture] should not be called after already presenting this drawable. Get a nextDrawable instead.
Input Metal texture was created with a device that does not match the current context device.
Input Metal texture was created with a device that does not match the current context device.
2020-06-04 18:20:40.247637-0400 MetalTest[37773:1065740] [plugin] AddInstanceForFactory: No factory registered for id <CFUUID 0x600000297260> F8BB1C28-BAE8-11D6-9C31-00039315CD46
2020-06-04 18:20:40.281161-0400 MetalTest[37773:1065740] HALC_ShellDriverPlugIn::Open: Can't get a pointer to the Open routine
That first log seems to suggest that I'm really not even allowed to get the texture out of the drawable after it has been presented in the first place. So... what's the right way to do this?
UPDATE:
Note that I am not wedded to the later parts of saveImageTakingDrawable:'s code. I would be happy to write out a PNG instead of a TIFF, and if there's a way to get where I'm going without using CIImage, NSCIImageRep, or NSImage, so much the better. I just want to save the drawable's texture image out as a PNG or TIFF, somehow.
"I just want to save the drawable's texture image out as a PNG or TIFF, somehow."
Here is an alternative approach which you may test; will need to set the path to wherever you want the image file saved.
- (void) windowCapture: (id)sender {
NSTask *task = [[NSTask alloc]init];
[task setLaunchPath:@"/bin/sh"];
NSArray *args = [NSArray arrayWithObjects: @"-c", @"screencapture -i -c -Jwindow", nil];
[task setArguments:args];
NSPipe *pipe = [NSPipe pipe];
[task setStandardOutput:pipe];
[task launch];
[task waitUntilExit];
int status = [task terminationStatus];
NSData *dataRead = [[pipe fileHandleForReading] readDataToEndOfFile];
NSString *pipeOutput = [[[NSString alloc] initWithData:dataRead encoding:NSUTF8StringEncoding]autorelease];
// Tell us if there was a problem
if (!(status == 0)){NSLog(@"Error: %@",pipeOutput);}
[task release];
// Get image data from pasteboard and write to file
NSPasteboard *pboard = [NSPasteboard generalPasteboard];
NSData *pngData = [pboard dataForType:NSPasteboardTypePNG];
NSError *err;
BOOL success = [pngData writeToFile:@"/Users/xxxx/Desktop/ABCD.png" options:NSDataWritingAtomic error:&err];
if(!success){NSLog(@"Unable to write to file: %@",err);} else {NSLog(@"File written to desktop.");}
}
I think you should inject a blit into the command buffer (before submitting it) to copy the texture to a texture of your own. (The drawable's texture is not safe to use after it has been presented, as you've found.)
One strategy is to set the layer's framebufferOnly to false for the pass, and then use a MTLBlitCommandEncoder to encode a copy from the drawable texture to a texture of your own. Or, if the pixel formats are different, encode a draw of a quad from the drawable texture to your own, using a render command encoder.
The other strategy is to substitute your own texture as the render target color attachment where your code is currently using the drawable's texture. Render to your texture primarily and then draw that to the drawable's texture.
Either way, your texture's storageMode can be MTLStorageModeManaged, so you can access its data with one of the -getBytes:... methods. You just have to make sure that the command buffer encodes a synchronizeResource: command of a blit encoder at the end.
You can then use the bytes to construct an NSBitmapImageRep using the -initWithBitmapDataPlanes:pixelsWide:pixelsHigh:bitsPerSample:samplesPerPixel:hasAlpha:isPlanar:colorSpaceName:bitmapFormat:bytesPerRow:bitsPerPixel: method. Then, get PNG data from it using -representationUsingType:properties: and save that to file.
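Here's a rough, untested sketch of that blit-and-readback idea. It goes through CGImage/ImageIO for the PNG step rather than the NSBitmapImageRep initializer, just to keep the BGRA byte order simple; device, commandBuffer, drawable, and filePath are assumed from your existing code, the layer's framebufferOnly must be NO, and the casts are manual-retain/release style to match your code:
#import <Metal/Metal.h>
#import <ImageIO/ImageIO.h>
#import <CoreServices/CoreServices.h>   // kUTTypePNG

// 1. Make a CPU-readable copy of the drawable's texture (encode this before commit/present).
MTLTextureDescriptor *desc =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:drawable.texture.pixelFormat
                                                       width:drawable.texture.width
                                                      height:drawable.texture.height
                                                   mipmapped:NO];
desc.storageMode = MTLStorageModeManaged;
id<MTLTexture> snapshot = [device newTextureWithDescriptor:desc];

id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
[blit copyFromTexture:drawable.texture
          sourceSlice:0 sourceLevel:0
         sourceOrigin:MTLOriginMake(0, 0, 0)
           sourceSize:MTLSizeMake(drawable.texture.width, drawable.texture.height, 1)
            toTexture:snapshot
     destinationSlice:0 destinationLevel:0
    destinationOrigin:MTLOriginMake(0, 0, 0)];
[blit synchronizeResource:snapshot];   // make the managed texture's contents visible to the CPU
[blit endEncoding];

// 2. In the command buffer's completed handler, read the pixels and write a PNG.
NSUInteger bytesPerRow = snapshot.width * 4;
NSMutableData *pixels = [NSMutableData dataWithLength:bytesPerRow * snapshot.height];
[snapshot getBytes:pixels.mutableBytes
       bytesPerRow:bytesPerRow
        fromRegion:MTLRegionMake2D(0, 0, snapshot.width, snapshot.height)
       mipmapLevel:0];

// Wrap the BGRA bytes in a CGImage (little-endian, alpha-first matches BGRA8Unorm).
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)pixels);
CGImageRef cgImage = CGImageCreate(snapshot.width, snapshot.height, 8, 32, bytesPerRow,
    colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst,
    provider, NULL, NO, kCGRenderingIntentDefault);

NSURL *url = [NSURL fileURLWithPath:filePath];   // filePath as in your existing code
CGImageDestinationRef dest = CGImageDestinationCreateWithURL((CFURLRef)url, kUTTypePNG, 1, NULL);
CGImageDestinationAddImage(dest, cgImage, NULL);
CGImageDestinationFinalize(dest);

CFRelease(dest);
CGImageRelease(cgImage);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);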
I am a developer from China. I am trying to use NSBitmapImageRep to convert GIF images into NSData, but the converted GIF has no animation; it seems to contain only the first frame. This is my code. I don't know what the problem is. I hope to get your help. Thank you.
NSData *gifData = [gifiImage TIFFRepresentation];
NSBitmapImageRep *bitMapRep = [NSBitmapImageRep imageRepWithData:gifData];
NSData *data = [bitMapRep representationUsingType:bitmapType properties:nil];
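For what it's worth, going through TIFFRepresentation and NSBitmapImageRep can only ever capture a single frame. A rough sketch of one way to keep every frame (assuming ARC, and assuming the original GIF file data is still available as originalGIFData, which is a placeholder name) is to copy the frames and their properties with ImageIO instead:
#import <ImageIO/ImageIO.h>
#import <CoreServices/CoreServices.h>   // kUTTypeGIF

CGImageSourceRef src = CGImageSourceCreateWithData((__bridge CFDataRef)originalGIFData, NULL);
size_t frameCount = CGImageSourceGetCount(src);

NSMutableData *outData = [NSMutableData data];
CGImageDestinationRef dest =
    CGImageDestinationCreateWithData((__bridge CFMutableDataRef)outData, kUTTypeGIF, frameCount, NULL);

// Copy the container-level properties (loop count, etc.)...
CFDictionaryRef gifProps = CGImageSourceCopyProperties(src, NULL);
CGImageDestinationSetProperties(dest, gifProps);
CFRelease(gifProps);

// ...and every frame together with its per-frame properties (delay time, etc.).
for (size_t i = 0; i < frameCount; i++) {
    CGImageRef frame = CGImageSourceCreateImageAtIndex(src, i, NULL);
    CFDictionaryRef frameProps = CGImageSourceCopyPropertiesAtIndex(src, i, NULL);
    CGImageDestinationAddImage(dest, frame, frameProps);
    CFRelease(frameProps);
    CGImageRelease(frame);
}
CGImageDestinationFinalize(dest);
CFRelease(dest);
CFRelease(src);
// outData now contains GIF data with all frames and timing preserved.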
Somehow I cannot figure out to get an actual, meaningful histogram image from an NSImage input using the CIAreaHistogram and CIHistogramDisplayFilter filters.
I read Apple's "Core Image Filter Reference" and the relevant posts here on SO, but whatever I try I get no meaningful output.
Here's my code so far:
- (void) testHist3:(NSImage *)image {
CIContext* context = [[NSGraphicsContext currentContext] CIContext];
NSBitmapImageRep *rep = [image bitmapImageRepresentation];
CIImage *ciImage = [[CIImage alloc] initWithBitmapImageRep:rep];
ciImage = [CIFilter filterWithName:@"CIAreaHistogram" keysAndValues:kCIInputImageKey, ciImage, @"inputExtent", ciImage.extent, @"inputScale", [NSNumber numberWithFloat:1.0], @"inputCount", [NSNumber numberWithFloat:256.0], nil].outputImage;
ciImage = [CIFilter filterWithName:@"CIHistogramDisplayFilter" keysAndValues:kCIInputImageKey, ciImage, @"inputHeight", [NSNumber numberWithFloat:100.0], @"inputHighLimit", [NSNumber numberWithFloat:1.0], @"inputLowLimit", [NSNumber numberWithFloat:0.0], nil].outputImage;
CGImageRef cgImage2 = [context createCGImage:ciImage fromRect:ciImage.extent];
NSImage *img2 = [[NSImage alloc] initWithCGImage:cgImage2 size:ciImage.extent.size];
NSLog(@"Histogram image: %@", img2);
self.histImage = img2;
}
What I get is a 64x100 image with zero representations (=invisible). If I create the CI context with
CIContext *context = [[CIContext alloc] init];
then the resulting image is grey, but at least it does have a representation:
Histogram image: <NSImage 0x6100002612c0 Size={64, 100} Reps=(
"<NSCGImageSnapshotRep:0x6100002620c0 cgImage=<CGImage 0x6100001a1880>>" )>
The input image is a 1024x768 JPEG image.
I have little experience with Core Image or Core Graphics, so the mistake might be with the conversion back to NSImage... any ideas?
Edit 2016-10-26: With rickster's very comprehensive answer I was able to make a lot of progress.
Indeed it was the inputExtent parameter that was messing up my result. Supplying a CIVector there solved the problem. I found that you cannot leave that to the default either; I don't know what the default value is, but it is not the input image's full size. (I found that out by running an image and a mirrored version of it through the filter; I got different histograms.)
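For illustration, a sketch of that fix (the exact code may differ):
CIVector *extentVector = [CIVector vectorWithCGRect:ciImage.extent];
CIImage *histImage = [ciImage imageByApplyingFilter:@"CIAreaHistogram"
                              withInputParameters:@{ @"inputExtent": extentVector,
                                                     @"inputCount": @256 }];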
Edit 2016-10-28:
So, I've got a working, displayable histogram now; my next step will be to figure out how the "intermediate" histogram (the 256x1 pixel image coming out of the filter) can contain the actual histogram information even though all but the last pixel are always (0, 0, 0, 0).
I presume the [image bitmapImageRepresentation] in your code is a local category method that's roughly equivalent to (NSBitmapImageRep *)image.representations[0]? Otherwise, first make sure that you're getting the right input.
Next, it looks like you're passing the raw output of ciImage.extent into your filter parameters. Given that the parameter expects a CIVector object and not a CGRect struct, you're probably borking the input to your filter at run time. You can get more useful diagnostics for such problems by using the dictionary-based filter methods filterWithName:withInputParameters: or imageByApplyingFilter:withInputParameters:; that way, if you try to pass nil for a filter key or pass something that isn't a proper object, you'll get a compile-time error. The latter also gives you an easy way to go straight from input image to output image, or to chain filters, without creating intermediary CIFilter objects and needing to set the input image on each.
A related tip: most of the parameters you're passing are the default values for those filters, so you can pass only the values you need:
CIImage *hist = [inputImage imageByApplyingFilter:@"CIAreaHistogram"
withInputParameters:@{ @"inputCount": @256 }];
CIImage *outputImage = [hist imageByApplyingFilter:@"CIHistogramDisplayFilter"
withInputParameters:nil];
Finally, you might still get an almost-all-gray image out of CIHistogramDisplayFilter depending on what your input image looks like, because all of the histogram bins may have very small bars. I get the following for Lenna:
Increasing the value for kCIInputScaleKey can help with that.
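For example (kCIInputScaleKey is the documented key for CIAreaHistogram's inputScale; 20 is an arbitrary value, just large enough to make small bins visible):
CIImage *hist = [inputImage imageByApplyingFilter:@"CIAreaHistogram"
                             withInputParameters:@{ @"inputCount": @256,
                                                    kCIInputScaleKey: @20.0 }];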
Also, you don't need to go through CGImage to get from CIImage to NSImage — create an NSCIImageRep instead and AppKit will automatically manage a CIContext behind the scenes when it comes time to render the image for display/output.
// input from NSImage
NSBitmapImageRep *inRep = [nsImage bitmapImageRepresentation];
CIImage *inputImage = [[CIImage alloc] initWithBitmapImageRep:inRep];
CIImage *outputImage = // filter, rinse, repeat
// output to NSImage
NSCIImageRep *outRep = [NSCIImageRep imageRepWithCIImage: outputImage];
NSImage *outNSImage = [[NSImage alloc] init];
[outNSImage addRepresentation: outRep];
I have as an input the dump of an image in an NSData object. Now, I want to extract relevant information of the image from this object like number of pixels, no. of bits per pixel, etc.
Can anyone tell me how to extract this info from the NSData object dump?
P.S.: I have gone through this documentation of the NSData class, but could not isolate out the relevant methods.
The easiest way is to actually build a UIImage object from the NSData and then extract the info from the UIImage:
UIImage* image = [UIImage imageWithData:yourData];
NSLog(@"Image is %.0fx%.0f", image.size.width, image.size.height);
If you are only interested in the properties of the image but don't want to actually build its representation and only get the properties, take a look at CGImageSource
#import <ImageIO/ImageIO.h>
CGImageSourceRef imgSrc = CGImageSourceCreateWithData((__bridge CFDataRef)data, NULL);
size_t nbImages = CGImageSourceGetCount(imgSrc);
for(size_t idx=0; idx<nbImages; ++idx)
{
NSDictionary* props = (__bridge_transfer NSDictionary*)CGImageSourceCopyPropertiesAtIndex(imgSrc, idx, NULL);
NSLog(@"properties for image %zu in imageSource: %@", idx, props);
}
CFRelease(imgSrc);
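If you only need things like pixel dimensions and bit depth, the standard ImageIO keys should already be in that dictionary (a small sketch; the exact keys present can vary by image format):
NSDictionary *props = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(imgSrc, 0, NULL);
NSNumber *pixelWidth  = props[(__bridge NSString *)kCGImagePropertyPixelWidth];
NSNumber *pixelHeight = props[(__bridge NSString *)kCGImagePropertyPixelHeight];
NSNumber *bitDepth    = props[(__bridge NSString *)kCGImagePropertyDepth]; // bits per sample
NSLog(@"%@ x %@ pixels, %@ bits per sample", pixelWidth, pixelHeight, bitDepth);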
[EDIT] For this to work, obviously add ImageIO.framework to your "Link Binary With Libraries" build phase.
Convert the data to UIImage, then take a look at this post.
UIImage *image = [UIImage imageWithData:data];
I want to convert a UIImage into a format such as a jpeg or png so that I can then share that file using the IOS plug-in called "AddThis".
I tried to share it using just the UIImage but the plug-in doesn't support it so I need to find a way to convert the UIImage to a jpeg first, then add it into this code:
[AddThisSDK shareImage:[UIImage imageNamed:@"test.jpg"] withService:@"twitter" title:@"I'm sharing something" description:@"Random description of image"];
The code has to have shareImage:[UIImage imageNamed:@""], otherwise an error occurs.
So far I've tried to convert it using UIImageJPEGRepresentation but I don't think I've done it properly. To be honest I tried to do it similarly to how you'd convert it straight from taking an image:
NSString *jpgPath = [NSHomeDirectory() stringByAppendingPathComponent:@"photo_boom.jpg"];
[UIImageJPEGRepresentation(shareImage, 1.0) writeToFile:jpgPath atomically:YES];
NSError *error;
NSFileManager *fileMgr = [NSFileManager defaultManager];
NSString *documentsDirectory = [NSHomeDirectory() stringByAppendingPathComponent:@"photo_boom.jpg"];
NSLog(@"Documents directory: %@", [fileMgr contentsOfDirectoryAtPath:documentsDirectory error:&error]);
Something tells me this isn't the correct way... mainly because I haven't been able to get it to work.
I'd really appreciate any kind of help!
Basically, I've made a UIImage from converting a UIView:
UIGraphicsBeginImageContext(firstPage.bounds.size);
[firstPage.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
I then want to convert this to JPEG format, because when I tried to simply pass it in as
[AddThisSDK shareImage:image withService:@"twitter" title:@"I'm sharing something" description:@"Random description of image"];
it gives me an error.
UIImage has its own internal representation of an image, so it's irrelevant whether you load it with jpeg or png data.
The API call you're interested in takes a UIImage as the first parameter, so something along the lines of
[AddThisSDK shareImage:[UIImage imageNamed:@"photo_boom.jpg"]
withService:@"twitter"
title:@"I'm sharing something"
description:@"Random description of image"];
should work, provided photo_boom.jpg is included in your bundle. If you're loading a previously saved image from a folder, you'll need something like this:
NSString *jpgPath = [NSHomeDirectory() stringByAppendingPathComponent:@"photo_boom.jpg"];
UIImage * myImage = [UIImage imageWithContentsOfFile: jpgPath];
[AddThisSDK shareImage:myImage
withService:@"twitter"
title:@"I'm sharing something"
description:@"Random description of image"];
If that doesn't work, have you tried putting a breakpoint on the AddThisSDK line, and checking the value of image? Type po image on the console.
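If you do want to round-trip through an actual JPEG file first (as in your original attempt), a minimal sketch (the directory and file name are placeholders):
// Write the rendered UIImage out as JPEG data in the Documents directory...
NSString *docsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                         NSUserDomainMask, YES) objectAtIndex:0];
NSString *jpgPath = [docsDir stringByAppendingPathComponent:@"photo_boom.jpg"];
[UIImageJPEGRepresentation(image, 0.9f) writeToFile:jpgPath atomically:YES];

// ...then load it back and hand the resulting UIImage to the SDK.
UIImage *savedImage = [UIImage imageWithContentsOfFile:jpgPath];
[AddThisSDK shareImage:savedImage
           withService:@"twitter"
                 title:@"I'm sharing something"
           description:@"Random description of image"];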