Read multiframe DICOM images using ITK

I can read single frame DICOM images fine, but I am not sure how to go about reading a multiframe DICOM file, that is, one that has multiple images in a single DICOM file. Here is the code I use to read a single frame DICOM. My thinking is that the loaded image buffer (imageBuf) could be divided into as many parts as there are frames in the DICOM, and each part used to construct an image. Can that be done?
int imageWidth = 880;
int imageHeight = 635;
NSString *dicomPath = [[[[NSBundle mainBundle] resourcePath] stringByAppendingString:@"/"] stringByAppendingString:@"your dicom file name"];
const char *c_dicomPath = [dicomPath UTF8String];
typedef unsigned char InputPixelType;
const unsigned int InputDimension = 3;
typedef itk::Image< InputPixelType, InputDimension > InputImageType;
typedef itk::ImageFileReader< InputImageType > ReaderType;
ReaderType::Pointer reader = ReaderType::New();
reader->SetFileName(c_dicomPath);
typedef itk::GDCMImageIO ImageIOType;
ImageIOType::Pointer gdcmImageIO = ImageIOType::New();
reader->SetImageIO(gdcmImageIO);
InputPixelType *imageBuf = (InputPixelType*)malloc(sizeof(InputPixelType)*imageHeight*imageWidth*3);
reader->Update();
//get dicom image
memset(imageBuf, 0, sizeof(InputPixelType)*imageHeight*imageWidth*3);
gdcmImageIO->Read(imageBuf);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGDataProviderRef provider = CGDataProviderCreateWithData(nil, imageBuf, imageWidth*imageHeight*3*sizeof(InputPixelType), nil);
CGImageRef imageRef = CGImageCreate(imageWidth,//width
imageHeight,//height
8,//size_t bitsPerComponent,
24,//size_t bitsPerPixel,
imageWidth*sizeof(InputPixelType)*3,//size_t bytesPerRow,
colorspace,//CGColorSpaceRef space,
kCGBitmapByteOrderDefault,//CGBitmapInfo bitmapInfo,
provider,//CGDataProviderRef provider,
nil,//const CGFloat *decode,
NO,//bool shouldInterpolate,
kCGRenderingIntentDefault//CGColorRenderingIntent intent
);
//here is the dicom image decode from dicom file
UIImage *dicomImage = [[UIImage alloc] initWithCGImage:imageRef scale:1.0 orientation:UIImageOrientationUp];

Did you try itk::ImageSeriesReader?
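Alternatively, since your InputImageType is already three-dimensional, itk::ImageFileReader should load all frames of a multiframe file in one go, and you can slice the buffer yourself, as you suggest. A rough, untested sketch (assuming 8-bit interleaved pixel data, as in your code, and reusing your reader and gdcmImageIO):
reader->Update(); // after this, gdcmImageIO holds the file's metadata
const unsigned int frames = gdcmImageIO->GetDimensions(2);      // frame count (3rd dimension)
const unsigned int frameWidth = gdcmImageIO->GetDimensions(0);
const unsigned int frameHeight = gdcmImageIO->GetDimensions(1);
const unsigned int components = gdcmImageIO->GetNumberOfComponents(); // 3 for RGB, 1 for grayscale
const size_t frameBytes = (size_t)frameWidth * frameHeight * components;
InputPixelType *allFrames = (InputPixelType *)malloc(frameBytes * frames);
gdcmImageIO->Read(allFrames); // reads the whole multiframe pixel buffer at once
for (unsigned int i = 0; i < frames; i++) {
    InputPixelType *frameBuf = allFrames + i * frameBytes;
    // feed frameBuf to the CGImageCreate code above, using frameWidth and
    // frameHeight instead of the hard-coded 880x635
}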

Related

CGImageRef gets corrupted after being returned from a method

My code creates a TIFFRepresentation of an image, which I then want to re-encode into a different format. That part is not a problem.
My ImgUtils function is:
+ (CGImageRef) processImageData:(NSData*)rep {
NSBitmapImageRep *bitmapRep = [NSBitmapImageRep imageRepWithData:rep];
int width = bitmapRep.size.width;
int height = bitmapRep.size.height;
size_t pixels_size = width * height;
Byte raw_bytes[pixels_size * 3];
//
// processing, creates and stores raw byte stream
//
int bitsPerComponent = 8;
int bytesPerPixel = 3;
int bitsPerPixel = bytesPerPixel * bitsPerComponent;
int bytesPerRow = bytesPerPixel * width;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
raw_bytes,
pixels_size * bytesPerPixel,
NULL);
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorSpaceRef,
bitmapInfo,
provider,
NULL,
NO,
renderingIntent);
[ImgUtils saveToPng:imageRef withSuffix:@"-ok"];
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
return imageRef;
}
There is another method, that saves a CGImageRef to filesystem.
+ (BOOL) saveToPng:(CGImageRef)imageRef withSuffix:(NSString*)suffix {
CFURLRef url = (__bridge CFURLRef)[NSURL fileURLWithPath:[NSString stringWithFormat:@"~/Downloads/pic%@.png", suffix]];
CGImageDestinationRef destination = CGImageDestinationCreateWithURL(url, kUTTypePNG, 1, NULL);
CGImageDestinationAddImage(destination, imageRef, nil);
CGImageDestinationFinalize(destination);
CFRelease(destination);
return YES;
}
As you can see, immediately after processing the image, I save it on a disk, as pic-ok.png.
Here is the code, that calls the processing function:
CGImageRef cgImage = [ImgUtils processImageData:imageRep];
[ImgUtils saveToPng:cgImage withSuffix:@"-bad"];
The problem is that the two images differ; the second one, with the -bad suffix, is corrupted. It seems as if the memory area the CGImageRef points to is released and overwritten immediately after returning from the method.
I also tried return CGImageCreateCopy(imageRef); but it changed nothing.
What am I missing?
CGDataProviderCreateWithData() does not copy the buffer you provide. Its purpose is to allow creation of a data provider that accesses that buffer directly.
Your buffer is created on the stack. It goes invalid after +processImageData: returns. However, the CGImage still refers to the provider and the provider still refers to the now-invalid buffer.
One solution would be to create the buffer on the heap and provide a callback via the releaseData parameter that frees it. Another would be to create a CFData from the buffer (which copies it) and then create the data provider using CGDataProviderCreateWithCFData(). Probably the best would be to create a CFMutableData of the desired capacity, set its length to match, and use its storage (CFDataGetMutableBytePtr()) as your buffer from the beginning. That's heap-allocated, memory-managed, and doesn't require any copying.
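A minimal sketch of that last approach, keeping the rest of the method as it was (the buffer size assumes the 3-bytes-per-pixel layout from the question):
+ (CGImageRef)processImageData:(NSData *)rep {
    NSBitmapImageRep *bitmapRep = [NSBitmapImageRep imageRepWithData:rep];
    int width = bitmapRep.size.width;
    int height = bitmapRep.size.height;
    size_t bufferLength = width * height * 3;

    // Heap-allocated, reference-counted buffer instead of a stack array.
    CFMutableDataRef pixelData = CFDataCreateMutable(kCFAllocatorDefault, bufferLength);
    CFDataSetLength(pixelData, bufferLength);
    Byte *raw_bytes = CFDataGetMutableBytePtr(pixelData);

    // ... fill raw_bytes with the processed pixels, exactly as before ...

    CGDataProviderRef provider = CGDataProviderCreateWithCFData(pixelData);
    CFRelease(pixelData); // the provider holds its own reference

    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGImageRef imageRef = CGImageCreate(width, height, 8, 24, 3 * width,
                                        colorSpaceRef, kCGBitmapByteOrderDefault,
                                        provider, NULL, NO, kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);
    return imageRef; // now backed by memory the provider owns
}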

Creating and displaying a UIImage from raw BGRA data

I'm collecting image data from the camera using this code:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
// Called when a frame arrives
// Should be in BGRA format
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
unsigned char *raw = (unsigned char *)CVPixelBufferGetBaseAddress(imageBuffer);
// Copy memory into allocated buffer
unsigned char *buffer = malloc(sizeof(unsigned char) * bytesPerRow * height);
memcpy(buffer, raw, bytesPerRow * height);
[self processVideoData:buffer width:width height:height bytesPerRow:bytesPerRow];
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
The processVideoData: method looks like this:
- (void)processVideoData:(unsigned char *)data width:(size_t)width height:(size_t)height bytesPerRow:(size_t)bytesPerRow
{
dispatch_sync(dispatch_get_main_queue(), ^{
CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, data, bytesPerRow * height, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef image = CGImageCreate(width, height, 8, 32, bytesPerRow, colorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaLast, dataProvider, NULL, NO, kCGRenderingIntentDefault);
// Set layer contents???
UIImage *objcImage = [UIImage imageWithCGImage:image];
self.imageView.image = objcImage;
free(data);
CGDataProviderRelease(dataProvider);
CGColorSpaceRelease(colorSpace);
CGImageRelease(image);
});
}
No complaints, no leaks, but nothing shows up in the image view; it just stays blank (and yes, I have checked the outlet connection). Previously I had bitmapInfo set to just kCGBitmapByteOrderDefault, which caused a crash when setting the image view's image property; however, the image view would go dark just before the crash, which was promising.
I surmised that the crash was due to the image being BGRA, not BGR, so I set bitmapInfo to kCGBitmapByteOrderDefault | kCGImageAlphaLast. That fixed the crash, but there is still no image.
I realise the image will look strange, since CGImageCreate expects RGB and I am passing it BGR, but that should only produce odd colors due to channel swapping. I have also logged the data I'm receiving and it looks sane, something like b:65 g:51 r:42 a:255, with the alpha channel always 255 as expected.
I'm sorry if it's obvious, but I can't work out what is going wrong.
You can use this flag combination to achieve BGRA format:
kCGBitmapByteOrder32Little | kCGImageAlphaSkipFirst
You should prefer this solution; it is more performant than converting with OpenCV.
A more general way to derive the bitmapInfo from the sourcePixelFormat:
sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
bitmapInfo = @{
@(kCVPixelFormatType_32ARGB) : @(kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipFirst),
@(kCVPixelFormatType_32BGRA) : @(kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst),
}[@(sourcePixelFormat)].unsignedIntegerValue;
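Applied to the processVideoData: method from the question, only the bitmapInfo argument of CGImageCreate changes. On a little-endian device a BGRA byte stream reads as a 32-bit ARGB word, hence "alpha first" plus "32-bit little-endian" (skip-first, since the alpha byte should be ignored):
// Same buffer, no conversion pass: declare the data as 32-bit little-endian
// with the (ignored) alpha component first, which matches BGRA byte order.
CGImageRef image = CGImageCreate(width, height,
                                 8,          // bitsPerComponent
                                 32,         // bitsPerPixel
                                 bytesPerRow,
                                 colorSpace,
                                 kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst,
                                 dataProvider, NULL, NO,
                                 kCGRenderingIntentDefault);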
It turns out the data was just in the wrong format and I wasn't feeding it into the CGImageCreate function correctly.
The data comes out in BGRA format, so I fed it into an IplImage structure (I'm using OpenCV v2.4.9) like so:
// Pack IplImage with data
IplImage *img = cvCreateImage(cvSize((int)width, (int)height), 8, 4);
img->imageData = (char *)data;
I then converted it to RGB like so:
IplImage *converted = cvCreateImage(cvSize((int)width, (int)height), 8, 3);
cvCvtColor(img, converted, CV_BGRA2RGB);
I then fed the data from the converted IplImage into a CGImageCreate function and it works nicely.
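That last step isn't shown in the answer; a rough sketch of it (my assumption, not the poster's code) might look like the following. The pixels are copied into a CFData so the IplImages can be released safely, per the provider-lifetime issue discussed in the first question:
// Copy the converted RGB pixels so Core Graphics owns its own buffer,
// then the OpenCV images can be released immediately.
CFDataRef pixelData = CFDataCreate(NULL, (const UInt8 *)converted->imageData,
                                   converted->widthStep * converted->height);
CGDataProviderRef provider = CGDataProviderCreateWithCFData(pixelData);
CFRelease(pixelData); // the provider retains it
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef image = CGImageCreate(converted->width, converted->height,
                                 8,                    // bitsPerComponent
                                 24,                   // bitsPerPixel: RGB, no alpha
                                 converted->widthStep, // bytesPerRow, including padding
                                 colorSpace,
                                 kCGBitmapByteOrderDefault | kCGImageAlphaNone,
                                 provider, NULL, NO, kCGRenderingIntentDefault);
UIImage *objcImage = [UIImage imageWithCGImage:image];
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
CGImageRelease(image);
cvReleaseImage(&converted);
cvReleaseImage(&img);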

How to create a CGImageRef from an NSBitmapImageRep?

How can I create a CGImageRef from an NSBitmapImageRep?
Or how can I define a completely new CGImageRef in the same way as the NSBitmapImageRep? Defining an NSBitmapImageRep works fine, but I need the image as a CGImageRef.
unsigned char *plane = (unsigned char *)[data bytes]; // data = 3 bytes for each RGB pixel
NSBitmapImageRep* imageRep = [[NSBitmapImageRep alloc]
initWithBitmapDataPlanes: &plane
pixelsWide: width
pixelsHigh: height
bitsPerSample: depth
samplesPerPixel: channel
hasAlpha: NO
isPlanar: NO
colorSpaceName: NSCalibratedRGBColorSpace
//bitmapFormat: NSAlphaFirstBitmapFormat
bytesPerRow: channel * width
bitsPerPixel: channel * depth
];
I have no idea how to create the CGImageRef from the NSBitmapImageRep or how to define a new CGImageRef:
CGImageRef imageRef = CGImageCreate(width, height, depth, channel*depth, channel*width, CGColorSpaceCreateDeviceRGB(), ... );
Please, can somebody give me a hint?
The easy way is by using the CGImage property (introduced in 10.5):
CGImageRef image = imageRep.CGImage;
Documentation:
https://developer.apple.com/library/mac/documentation/Cocoa/Reference/ApplicationKit/Classes/NSBitmapImageRep_Class/index.html#//apple_ref/occ/instm/NSBitmapImageRep/CGImage
Return Value
Returns an autoreleased CGImageRef opaque type based on the receiver's current bitmap data.
Discussion
The returned CGImageRef has pixel dimensions that are identical to the receiver's. This method might return a preexisting CGImageRef opaque type or create a new one. If the receiver is later modified, subsequent invocations of this method might return different CGImageRef opaque types.
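One lifetime caveat worth spelling out (my reading of the documentation above, not something the answer states): the returned CGImageRef is owned by the image rep, so retain it if it has to outlive the rep:
CGImageRef cgImage = imageRep.CGImage;
if (cgImage) {
    CGImageRetain(cgImage);   // keep it alive independently of imageRep
    // ... use cgImage ...
    CGImageRelease(cgImage);  // balance the retain when done
}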
From your code snippet, it seems you're starting with an NSData object. So, your question seems to be how to create a CGImage from a data object. In that case, there's no reason to go through NSBitmapImageRep.
You were almost there with the call to CGImageCreate(). You just needed to figure out how to supply a CGDataProvider to it. You can create a CGDataProvider from an NSData pretty directly, once you realize that NSData is toll-free bridged with CFData. So:
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGColorSpaceRef colorspace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
CGImageRef image = CGImageCreate(width, height, depth, channel * depth, channel * width, colorspace, kCGImageAlphaNone, provider, NULL, TRUE, kCGRenderingIntentDefault); // bitsPerComponent = depth, bitsPerPixel = channel * depth, matching the NSBitmapImageRep setup above
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorspace);

How do I access and manipulate JPEG image pixels?

I have a JPG file. I need to convert it to pixel data and then change the color of some pixels. I do it like this:
NSString *string = [[NSBundle mainBundle] pathForResource:@"pic" ofType:@"jpg"];
NSData *data = [NSData dataWithContentsOfFile:string];
unsigned char *bytesArray = (unsigned char *)data.bytes;
NSUInteger bytesLength = data.length;
//--------pixel to array
NSMutableArray *array = [[NSMutableArray alloc] initWithCapacity:bytesLength];
for (int i = 0; i < bytesLength; i++) {
[array addObject:[NSNumber numberWithUnsignedChar:bytesArray[i]]];
}
Here I try to change the color of pixels 95 through 154:
NSNumber *number = [NSNumber numberWithInt:200];
for (int i=95; i<155; i++) {
[array replaceObjectAtIndex:i withObject:number];
}
But when I convert the array back to an image, I get a blurred picture. I don't understand why I can't influence specific pixels, and why I instead influence the picture as a whole.
The process of accessing pixel-level data is a little more complicated than your question might suggest, because, as Martin pointed out, JPEG is a compressed image format. Apple discusses the approved technique for getting pixel data in Technical Q&A QA1509.
Bottom line, to get the uncompressed pixel data for a UIImage, you would:
Get the CGImage for the UIImage.
Get the data provider for that CGImageRef via CGImageGetDataProvider.
Get the binary data associated with that data provider via CGDataProviderCopyData.
Extract some of the information about the image, so you know how to interpret that buffer.
Thus:
UIImage *image = ...
CGImageRef imageRef = image.CGImage; // get the CGImageRef
NSAssert(imageRef, @"Unable to get CGImageRef");
CGDataProviderRef provider = CGImageGetDataProvider(imageRef); // get the data provider
NSAssert(provider, @"Unable to get provider");
NSData *data = CFBridgingRelease(CGDataProviderCopyData(provider)); // get copy of the data
NSAssert(data, @"Unable to copy image data");
NSInteger bitsPerComponent = CGImageGetBitsPerComponent(imageRef); // some other interesting details about image
NSInteger bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
NSInteger bytesPerRow = CGImageGetBytesPerRow(imageRef);
NSInteger width = CGImageGetWidth(imageRef);
NSInteger height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorspace = CGImageGetColorSpace(imageRef);
Given that you want to manipulate this, you presumably want some mutable pixel buffer. The easiest approach would be to make a mutableCopy of that NSData object and manipulate it there, but in these cases, I tend to fall back to C, creating a void *outputBuffer, into which I copy the original pixels and manipulate using traditional C array techniques.
To create the buffer and copy the original pixels into it:
void *outputBuffer = malloc(bytesPerRow * height); // bytesPerRow already accounts for any per-row padding
NSAssert(outputBuffer, @"Unable to allocate buffer");
memcpy(outputBuffer, data.bytes, bytesPerRow * height);
For the precise details on how to manipulate it, you have to look at bitmapInfo (which will tell you whether it's RGBA or ARGB; whether it's floating point or integer) and bitsPerComponent (which will tell you whether it's 8 or 16 bits per component, etc.). For example, a very common JPEG format is 8 bits per component, four components, RGBA (i.e. red, green, blue, and alpha, in that order). But you really need to check those various properties we extracted from the CGImageRef to make sure. See the discussion in the Quartz 2D Programming Guide - Bitmap Images and Image Masks for more information. I personally find "Figure 11-2" to be especially illuminating.
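For instance, assuming the common 8-bit-per-component RGBA layout just described (verify against bitmapInfo before relying on it), addressing an individual pixel looks like this; the coordinates here are purely illustrative:
uint8_t *pixels = (uint8_t *)outputBuffer;
NSInteger bytesPerPixel = bitsPerPixel / 8;                // 4 for 8-bit RGBA
NSInteger x = 10, y = 20;                                  // hypothetical pixel
uint8_t *p = pixels + y * bytesPerRow + x * bytesPerPixel; // bytesPerRow includes any padding
p[0] = 255; // red
p[1] = 0;   // green
p[2] = 0;   // blue
p[3] = 255; // alpha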
The next logical question is when you're done manipulating the pixel data, how to create a UIImage for that. In short, you'd reverse the above process, e.g. create a data provider, create a CGImageRef, and then create a UIImage:
CGDataProviderRef outputProvider = CGDataProviderCreateWithData(NULL, outputBuffer, bytesPerRow * height, releaseData); // the buffer's actual size; sizeof(outputBuffer) would only yield the size of the pointer
CGImageRef outputImageRef = CGImageCreate(width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorspace,
bitmapInfo,
outputProvider,
NULL,
NO,
kCGRenderingIntentDefault);
UIImage *outputImage = [UIImage imageWithCGImage:outputImageRef];
CGImageRelease(outputImageRef);
CGDataProviderRelease(outputProvider);
Where releaseData is a C function that simply frees the pixel buffer associated with the data provider:
void releaseData(void *info, const void *data, size_t size)
{
free((void *)data);
}

How to convert hex data to UIImage?

I would like to display a .cpbitmap file (the format in which iOS saves wallpapers) in a UIImageView. The problem is that I need to convert it. I have already figured out that if you get its data (NSData) and walk it byte by byte, you get the color components: the first byte is R, then B, then G, and then alpha (I think). Now I need to "draw" a UIImage out of that info. Does anyone know how to do this?
Here is the link to the .cpbitmap file: https://www.dropbox.com/s/s9v4lahixm9cuql/LockBackground.cpbitmap
It would be really cool if someone could help me. Thanks!
EDIT
I found a working Python script. Is someone able to translate it to Objective-C?
#!/usr/bin/python
from PIL import Image, ImageOps
import struct
import sys

if len(sys.argv) < 3:
    print "Need two args: filename and result_filename\n"
    sys.exit(0)

filename = sys.argv[1]
result_filename = sys.argv[2]

with open(filename, 'rb') as f:  # binary mode, so the pixel data survives intact
    contents = f.read()

unk1, width, height, unk2, unk3, unk4 = struct.unpack('<6i', contents[-24:])
im = Image.fromstring('RGBA', (width, height), contents, 'raw', 'RGBA', 0, 1)
r, g, b, a = im.split()
im = Image.merge('RGBA', (b, g, r, a))
im.save(result_filename)
The basic process of converting RGBA data into an image is to create a CGDataProviderRef with the raw data, and then use CGImageCreate to create a CGImageRef, from which you can easily generate a UIImage. So, that gives you something like:
- (UIImage *) imageForBitmapData:(NSData *)data size:(CGSize)size
{
void * bitmapData;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
int bitmapBytesPerRow = (size.width * 4);
int bitmapByteCount = (bitmapBytesPerRow * size.height);
bitmapData = malloc( bitmapByteCount );
NSAssert(bitmapData, @"Unable to create buffer");
[data getBytes:bitmapData length:bitmapByteCount];
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bitmapData, bitmapByteCount, releasePixels);
CGImageRef imageRef = CGImageCreate(size.width,
size.height,
8,
32,
bitmapBytesPerRow,
colorSpace,
(CGBitmapInfo)kCGImageAlphaLast,
provider,
NULL,
NO,
kCGRenderingIntentDefault);
UIImage *image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);
return image;
}
With a releasePixels function defined as follows:
void releasePixels(void *info, const void *data, size_t size)
{
free((void*)data);
}
The only trick was identifying the dimensions of the bitmap. There are 4,142,592 bytes of image data (there is some extra stuff at the end of the file, which is self-evident if you examine the file in hexadecimal in Xcode). That doesn't correspond to any standard device dimensions, but if you look at the possible values that divide evenly into 4,142,592, you get several promising ones (496, 522, 558, 576, 696, 744, 899, 928, and 992). And if you just try those out, it becomes obvious that the image is 744 x 1392.
You can then use those dimensions with the above method, and you get your image.
While I discovered the size of the image empirically, I noticed that those dimensions were encoded at the end of the file. This is confirmed by your Python code, which suggests that the image width is the fifth from the last UInt32, and the height is the fourth from the last UInt32. Thus, you can use the above routine like so, extracting the dimensions from those two 32-bit integers encoded near the end of the file:
NSString *path = [[NSBundle mainBundle] pathForResource:@"LockBackground" ofType:@"cpbitmap"];
NSData *data = [NSData dataWithContentsOfFile:path];
NSAssert(data, @"no data found");
UInt32 width;
UInt32 height;
[data getBytes:&width range:NSMakeRange([data length] - sizeof(UInt32) * 5, sizeof(UInt32))];
[data getBytes:&height range:NSMakeRange([data length] - sizeof(UInt32) * 4, sizeof(UInt32))];
self.imageView.image = [self imageForBitmapData:data size:CGSizeMake(width, height)];