I'm trying to show a preview of an image in 1-bit monochrome, as in, not grayscale, but bitonal black and white. It's supposed to be an indication of how the image would look if it were faxed. Formats as low as 1 bit per pixel aren't available on OS X, only 8-bit grayscale. Is there any way to achieve this effect using Core Graphics or another framework (ideally with dithering)?
I know there's a filter called CIColorMonochrome but this only converts the image to grayscale.
AFAIK there is no built-in way to get a 1-bit-deep image rep (neither with NSImageRep nor in the CG world), so we have to do it manually. CIImage might be useful for this task, but here I go the classical (you may call it old-fashioned) way. Here is code that shows how we can do it. First a gray image is created from an NSImageRep, so we have a well-defined and simple format no matter how the source image is formatted (it could also be a PDF file). The resulting gray image is then the source for the bitonal image. Here is the code for creating the gray image (it ignores the size/resolution of the source image; only the pixel counts matter):
- (NSBitmapImageRep *) grayRepresentationOf:(NSImageRep *)aRep
{
NSBitmapImageRep *newRep =
[[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
pixelsWide:[aRep pixelsWide]
pixelsHigh:[aRep pixelsHigh]
bitsPerSample:8
samplesPerPixel:1
hasAlpha:NO //must be NO !
isPlanar:NO
colorSpaceName:NSCalibratedWhiteColorSpace
bytesPerRow:0
bitsPerPixel:0 ];
// this new imagerep has (as default) a resolution of 72 dpi
[NSGraphicsContext saveGraphicsState];
NSGraphicsContext *context = [NSGraphicsContext graphicsContextWithBitmapImageRep:newRep];
if( context==nil ){
NSLog( #"*** %s context is nil", __FUNCTION__ );
return nil;
}
[NSGraphicsContext setCurrentContext:context];
[aRep drawInRect:NSMakeRect( 0, 0, [newRep pixelsWide], [newRep pixelsHigh] )];
[NSGraphicsContext restoreGraphicsState];
return [newRep autorelease];
}
In the next method we create an NSBitmapImageRep (bits per pixel = 1, samples per pixel = 1) from a given NSImageRep (or one of its subclasses), using the method just given:
- (NSBitmapImageRep *) binaryRepresentationOf:(NSImageRep *)aRep
{
NSBitmapImageRep *grayRep = [self grayRepresentationOf:aRep];
if( grayRep==nil ) return nil;
NSInteger numberOfRows = [grayRep pixelsHigh];
NSInteger numberOfCols = [grayRep pixelsWide];
NSBitmapImageRep *newRep =
[[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
pixelsWide:numberOfCols
pixelsHigh:numberOfRows
bitsPerSample:1
samplesPerPixel:1
hasAlpha:NO
isPlanar:NO
colorSpaceName:NSCalibratedWhiteColorSpace
bitmapFormat:0
bytesPerRow:0
bitsPerPixel:0 ];
unsigned char *bitmapDataSource = [grayRep bitmapData];
unsigned char *bitmapDataDest = [newRep bitmapData];
// here is the place to use dithering or error diffusion (code below)
// iterate over all pixels
NSInteger grayBPR = [grayRep bytesPerRow];
NSInteger binBPR = [newRep bytesPerRow];
NSInteger pWide = [newRep pixelsWide];
for( NSInteger row=0; row<numberOfRows; row++ ){
unsigned char *rowDataSource = bitmapDataSource + row*grayBPR;
unsigned char *rowDataDest = bitmapDataDest + row*binBPR;
NSInteger destCol = 0;
unsigned char bw = 0;
for( NSInteger col = 0; col<pWide; ){
unsigned char gray = rowDataSource[col];
if( gray>127 ) {bw |= (1<<(7-col%8)); };
col++;
if( (col%8 == 0) || (col==pWide) ){
rowDataDest[destCol] = bw;
bw = 0;
destCol++;
}
}
}
// save as PNG for testing and return
[[newRep representationUsingType:NSPNGFileType properties:nil] writeToFile:@"/tmp/bin_1.png" atomically:YES];
return [newRep autorelease];
}
For error diffusion I used the following code, which modifies the bitmap of the gray image in place (insert it at the point marked above, before the thresholding loop). This is fine because the gray image itself is no longer needed afterwards.
// change bitmapDataSource in place: Floyd-Steinberg error diffusion
for( NSInteger row=0; row<numberOfRows-1; row++ ){
unsigned char *currentRowData = bitmapDataSource + row*grayBPR;
unsigned char *nextRowData = currentRowData + grayBPR;
// stop one column early so the col+1 accesses below stay inside the row
for( NSInteger col = 1; col<numberOfCols-1; col++ ){
NSInteger origValue = currentRowData[col];
NSInteger newValue = (origValue>127) ? 255 : 0;
NSInteger error = -(newValue - origValue);
currentRowData[col] = newValue;
currentRowData[col+1] = clamp(currentRowData[col+1] + (7*error/16));
nextRowData[col-1] = clamp( nextRowData[col-1] + (3*error/16) );
nextRowData[col] = clamp( nextRowData[col] + (5*error/16) );
nextRowData[col+1] = clamp( nextRowData[col+1] + (error/16) );
}
}
clamp is a macro defined before the method:
#define clamp(z) ( (z>255)?255 : ((z<0)?0:z) )
It keeps the unsigned char values in the valid range (0 <= z <= 255).
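For completeness, here is a minimal usage sketch (not part of the original answer) that wraps the 1-bit rep in an NSImage for the preview; sourceImage and previewImageView are hypothetical names for the picked image and the preview NSImageView:
// Hypothetical usage; manual retain/release, matching the code above.
NSImageRep *sourceRep = [[sourceImage representations] objectAtIndex:0];
NSBitmapImageRep *bitonalRep = [self binaryRepresentationOf:sourceRep];
if( bitonalRep != nil ){
    NSImage *previewImage = [[NSImage alloc] initWithSize:NSMakeSize([bitonalRep pixelsWide], [bitonalRep pixelsHigh])];
    [previewImage addRepresentation:bitonalRep];
    [previewImageView setImage:previewImage];
    [previewImage release];
}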
Related
I am trying to get the aspect ratio for screen resolutions. Below is my code, from which I am getting the width, height and refresh rate:
-(void)getSupportedDisplays{
NSArray* theref = (__bridge NSArray *)(CGDisplayCopyAllDisplayModes ( CGMainDisplayID(), nil ));
NSMutableArray * rezes = [[NSMutableArray alloc]init];
for (id aMode in theref) {
CGDisplayModeRef thisMode = (__bridge CGDisplayModeRef)(aMode);
size_t theWidth = CGDisplayModeGetWidth( thisMode );
size_t theHeight = CGDisplayModeGetHeight( thisMode );
double refresh = CGDisplayModeGetRefreshRate(thisMode);
NSString *theRez = [NSString stringWithFormat:@"%zux%zu %d Hz",theWidth,theHeight,(int)refresh];
if (![rezes containsObject:theRez]) {
[rezes addObject:theRez];
}
}
NSLog(#" display deatails = %#", rezes);
}
I want the aspect ratio for each resolution, something like this.
Any suggestions?
Thanks in advance!
You can get the aspect ratio from the width and height; all you need is to find the greatest common divisor of the width and the height.
static int gcd (int a, int b) {
return (b == 0) ? a : gcd (b, a%b);
}
This returns the greatest common divisor, which you can then use like this:
int commonDivideFactor = gcd((int)theWidth, (int)theHeight);
NSLog(@"%d : %d", (int)(theWidth/commonDivideFactor), (int)(theHeight/commonDivideFactor));
I'm trying to optimize the performance of one of my components. The component needs to draw some (10 to 200) rectangles in its drawRect: method, which is triggered about 20 times per second.
Everything works when I use the CGContextFillRect method on each CGRect separately. I want to test if grouping the drawing into one single call with CGContextFillRects on an array of CGRects would increase performance.
The method CGContextFillRects gives me a compiler error No matching function for call to 'CGContextFillRects'.
This code is inside a .mm file. Should I import something before the CGContextFillRects method can be used?
This is what I'm trying to do:
- (void) drawRect:(CGRect)rect{
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSaveGState(context);
CGContextSetFillColorWithColor(context, self.fillColor.CGColor);
//check if some objects are present
if (self.leftDrawBuffer && self.rightDrawBuffer){
UInt32 xPosForRect = self.leftPadding;
NSMutableArray *rectsToFill = [[NSMutableArray alloc] init];
for (int drawBufferLRIndex = 0; drawBufferLRIndex < 2; drawBufferLRIndex++){
Float32 *drawBuffer_ptr = self.leftDrawBuffer;
if (drawBufferLRIndex > 0){
drawBuffer_ptr = self.rightDrawBuffer;
}
for (int i=0; i< kAmountOfBarsPerChannel; i=i+1){
Float32 amp = drawBuffer_ptr[i];
Float32 blockNumber = 1.0f;
UInt32 yPosForRect = self.bounds.size.height - self.heightPerBlock;
while (blockNumber <= self.blocksPerLine && blockNumber / self.blocksPerLine < amp){
CGRect rect= CGRectMake(xPosForRect, yPosForRect, self.widthPerBlock, self.heightPerBlock);
[rectsToFill addObject:[NSValue valueWithCGRect:rect]];
//Using the method below works and gives me the expected result
//CGContextFillRect(context, rect);
blockNumber++;
yPosForRect -= self.heightPerBlock + self.vPaddingPerBlock;
}
xPosForRect += self.widthPerBlock + self.hPaddingPerBlock;
}
}
//This is the added code where i try to use CGContextFillRects
//1 -> transform to a c array of CGRects
const CGRect *cRects[rectsToFill.count];
for (int i = 0; i < rectsToFill.count; ++i) {
CGRect rect = [[rectsToFill objectAtIndex:i] CGRectValue];
cRects[i] = &rect;
}
size_t size = rectsToFill.count;
//2 -> trigger the method to fill all rects at once
//this method gives me the compiler error 'No matching function for call to 'CGContextFillRects''
CGContextFillRects(context, cRects, size);
}
CGContextRestoreGState(context);
}
The problem is how you convert the rects to a C array. You store pointers to a rect that only exists temporarily on the stack inside the loop. There are two problems with this. First, the rect is gone after each loop iteration, so the pointers dangle. Second, you should pass a pointer to an array of CGRects, not an array of pointers to CGRect.
This will likely solve it:
CGRect cRects[rectsToFill.count]; // Replace your lines from this
for (int i = 0; i < rectsToFill.count; ++i) {
CGRect rect = [[rectsToFill objectAtIndex:i] CGRectValue];
cRects[i] = rect;
}
size_t size = rectsToFill.count;
CGContextFillRects(context, cRects, size); // To this
Please note the re-declaration of the cRects array and the change in the assignment.
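To answer the import question: CGContextFillRects is declared in the same CoreGraphics header as CGContextFillRect, so no extra import is needed; the error comes purely from the argument type mismatch. As an optional side note, since the rects are only collected so they can be handed to CGContextFillRects, you could also skip the NSMutableArray/NSValue boxing and write into a plain C buffer directly. A rough sketch, assuming kAmountOfBarsPerChannel and self.blocksPerLine bound the number of rects as in the question's loops:
// Upper bound on how many rects the two nested loops can produce.
size_t maxRects = 2 * kAmountOfBarsPerChannel * (size_t)self.blocksPerLine;
CGRect *cRects = (CGRect *)malloc(maxRects * sizeof(CGRect));
size_t rectCount = 0;

// ... inside the loops, instead of [rectsToFill addObject:...]:
//     cRects[rectCount++] = CGRectMake(xPosForRect, yPosForRect, self.widthPerBlock, self.heightPerBlock);

CGContextFillRects(context, cRects, rectCount);
free(cRects);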
I am trying to read the ARGB pixel data from a png image asset in my ios App.
I am using CGDataProvider to get a CFDataRef as described here:
http://developer.apple.com/library/ios/#qa/qa1509/_index.html
It works perfectly the first time I use it on a certain image. But the second time I use it on THE SAME image, it returns a length 0 CFDataRef.
Maybe I am not releasing something? Why would it do that?
- (GLuint)initWithCGImage:(CGImageRef)newImageSource
{
CGDataProviderRef dataProvider;
CFDataRef dataRef;
GLuint t;
@try {
// NSLog(@"initWithCGImage");
// report_memory2();
CGFloat widthOfImage = CGImageGetWidth(newImageSource);
CGFloat heightOfImage = CGImageGetHeight(newImageSource);
// pixelSizeOfImage = CGSizeMake(widthOfImage, heightOfImage);
// CGSize pixelSizeToUseForTexture = pixelSizeOfImage;
// CGSize scaledImageSizeToFitOnGPU = [GPUImageOpenGLESContext sizeThatFitsWithinATextureForSize:pixelSizeOfImage];
GLubyte *imageData = NULL;
//CFDataRef dataFromImageDataProvider;
// stbi stbiClass;
int x;
int y;
int comp;
dataProvider = CGImageGetDataProvider(newImageSource);
dataRef = CGDataProviderCopyData(dataProvider);
const unsigned char * bytesRef = CFDataGetBytePtr(dataRef);
// NSUInteger length = CFDataGetLength(dataRef);
//CGDataProviderRelease(dataProvider);
//dataProvider = nil;
/*
UIImage *tmpImage = [UIImage imageWithCGImage:newImageSource];
NSData *data2 = UIImagePNGRepresentation(tmpImage);
// if (data2==NULL)
// data2 = UIImageJPEGRepresentation(tmpImage, 1);
unsigned char *bytes = (unsigned char *)[data2 bytes];
NSUInteger length = [data2 length];*/
// stbiClass.img_buffer = bytes;
// stbiClass.buflen = length;
// stbiClass.img_buffer_original = bytes;
// stbiClass.img_buffer_end = bytes + length;
// unsigned char *data = stbi_load_main(&stbiClass, &x, &y, &comp, 0);
//unsigned char * data = bytesRef;
x = widthOfImage;
y = heightOfImage;
comp = CGImageGetBitsPerPixel(newImageSource)/8;
int textureWidth = [self CalcPow2: x];
int textureHeight = [self CalcPow2: y];
unsigned char *scaledData = [self scaleImageWithParams:@{@"x":@(x), @"y":@(y), @"comp":@(comp), @"targetX":@(textureWidth), @"targetY":@(textureHeight)} andData:(unsigned char *)bytesRef];
//CFRelease (dataRef);
// dataRef = nil;
// free (data);
glGenTextures(1, &t);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, t);
GLint format = (comp > 3) ? GL_RGBA : GL_RGB;
imageData = scaledData;
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, format, textureWidth, textureHeight, 0, format, GL_UNSIGNED_BYTE, imageData);
//GLenum err = glGetError();
}
@finally
{
CGDataProviderRelease(dataProvider);
// CGColorSpaceRelease(colorSpaceRef);
CGImageRelease(dataRef);
}
return t;
}
The second time this is called on a CGImageRef that originates from [UIImage imageNamed:path] with the same path as the first time, I get a dataRef of length 0.
It works the first time though.
I have found one big issue with the code I posted and fixed it.
First of all, I was getting crashes even when I didn't load the same image twice, but rather several different images. Since the issue is memory-related, it failed in all sorts of weird ways.
The issue with the code is that I am calling CGDataProviderRelease(dataProvider);
I am using the data provider of newImageSource, but I didn't create this data provider, so I shouldn't release it.
You only need to release things you created, retained or copied.
Apart from that, my app sometimes crashed due to low memory, but after fixing this I was able to use the "economical" approach where I allocate and release as soon as possible.
Currently I can't see anything else wrong with this specific code.
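In other words, following the Core Foundation create/copy rule, the pixel-access part of the method above should end up looking roughly like this (a sketch, not a drop-in replacement):
CGDataProviderRef dataProvider = CGImageGetDataProvider(newImageSource); // "Get": not owned, do not release
CFDataRef dataRef = CGDataProviderCopyData(dataProvider);                // "Copy": owned, must be released
const unsigned char *bytesRef = CFDataGetBytePtr(dataRef);

// ... read the pixels / upload the texture ...

CFRelease(dataRef); // release only what was copied
// no CGDataProviderRelease(dataProvider), and note that CGImageRelease() would be the
// wrong release function for a CFDataRef anyway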
I'm currently using this technique to get the color of a pixel in a UIImage (on iOS):
- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
UIColor* color = nil;
CGImageRef inImage = self.image.CGImage;
// Create off screen bitmap context to draw the image into. Format ARGB is 4 bytes for each pixel: Alpha, Red, Green, Blue
CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
if (cgctx == NULL) { return nil; /* error */ }
size_t w = CGImageGetWidth(inImage);
size_t h = CGImageGetHeight(inImage);
CGRect rect = {{0,0},{w,h}};
// Draw the image to the bitmap context. Once we draw, the memory
// allocated for the context for rendering will then contain the
// raw image data in the specified color space.
CGContextDrawImage(cgctx, rect, inImage);
// Now we can get a pointer to the image data associated with the bitmap
// context.
unsigned char* data = CGBitmapContextGetData (cgctx);
if (data != NULL) {
//offset locates the pixel in the data from x,y.
//4 for 4 bytes of data per pixel, w is width of one row of data.
int offset = 4*((w*round(point.y))+round(point.x));
int alpha = data[offset];
int red = data[offset+1];
int green = data[offset+2];
int blue = data[offset+3];
NSLog(#"offset: %i colors: RGB A %i %i %i %i",offset,red,green,blue,alpha);
color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
}
// When finished, release the context
CGContextRelease(cgctx);
// Free image data memory for the context
if (data) { free(data); }
return color;
}
As illustrated here:
http://www.markj.net/iphone-uiimage-pixel-color/
it works quite well, but it fails when working with images larger than the UIImageView. I tried adding an image and changing the scaling mode to fit the view. How would I modify the code so that it can still sample the pixel color from a scaled image?
Try this for Swift 3:
func getPixelColor(image: UIImage, x: Int, y: Int, width: CGFloat) -> UIColor
{
    guard let cgImage = image.cgImage,
          let pixelData = cgImage.dataProvider?.data,
          let data = CFDataGetBytePtr(pixelData) else { return .clear }
    let pixelInfo: Int = ((Int(width) * y) + x) * 4
    let r = CGFloat(data[pixelInfo]) / 255.0
    let g = CGFloat(data[pixelInfo+1]) / 255.0
    let b = CGFloat(data[pixelInfo+2]) / 255.0
    let a = CGFloat(data[pixelInfo+3]) / 255.0
    return UIColor(red: r, green: g, blue: b, alpha: a)
}
Here's a pointer:
0x3A28213A //sorry, I couldn't resist the joke
For real now: after going through the comments on the page at markj.net, a certain James has suggested making the following changes:
size_t w = CGImageGetWidth(inImage); //Written by Mark
size_t h = CGImageGetHeight(inImage); //Written by Mark
float xscale = w / self.frame.size.width;
float yscale = h / self.frame.size.height;
point.x = point.x * xscale;
point.y = point.y * yscale;
(thanks to http://www.markj.net/iphone-uiimage-pixel-color/comment-page-1/#comment-2159)
This didn't actually work for me... Not that I did much testing, and I'm not the world's greatest programmer (yet)...
My solution was to scale the UIImageView in such a way that each pixel of the image in it was the same size as a standard CGPoint on the screen, then I took my color like normal (using getPixelColorAtLocation:(CGPoint)point) , then I scaled the image back to the size I wanted.
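If you want to stay with getPixelColorAtLocation: and the James-style scaling, note that his snippet assumes scale-to-fill. For UIViewContentModeScaleAspectFit the mapping also needs an offset; a sketch (untested, using the same variable names as the snippet above):
size_t w = CGImageGetWidth(inImage);
size_t h = CGImageGetHeight(inImage);
CGFloat scale = MIN(self.frame.size.width / w, self.frame.size.height / h);
CGFloat offsetX = (self.frame.size.width  - w * scale) / 2.0;
CGFloat offsetY = (self.frame.size.height - h * scale) / 2.0;
// map the touch point from view coordinates to image pixel coordinates
point.x = (point.x - offsetX) / scale;
point.y = (point.y - offsetY) / scale;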
Hope this helps!
Use the UIImageView Layer:
- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
UIColor* color = nil;
UIGraphicsBeginImageContext(self.frame.size);
CGContextRef cgctx = UIGraphicsGetCurrentContext();
if (cgctx == NULL) { return nil; /* error */ }
[self.layer renderInContext:cgctx];
unsigned char* data = CGBitmapContextGetData (cgctx);
/*
...
*/
UIGraphicsEndImageContext();
return color;
}
How do I read the image color information for each pixel of a PVRTC image?
Here is my code for extracting the data:
NSData *data = [[NSData alloc] initWithContentsOfFile:path];
NSMutableArray *_imageData = [[NSMutableArray alloc] initWithCapacity:10];
BOOL success = FALSE;
PVRTexHeader *header = NULL;
uint32_t flags, pvrTag;
uint32_t dataLength = 0, dataOffset = 0, dataSize = 0;
uint32_t blockSize = 0, widthBlocks = 0, heightBlocks = 0;
uint32_t width = 0, height = 0, bpp = 4;
uint8_t *bytes = NULL;
uint32_t formatFlags;
header = (PVRTexHeader *)[data bytes];
pvrTag = CFSwapInt32LittleToHost(header->pvrTag);
if (gPVRTexIdentifier[0] != ((pvrTag >> 0) & 0xff) ||
gPVRTexIdentifier[1] != ((pvrTag >> 8) & 0xff) ||
gPVRTexIdentifier[2] != ((pvrTag >> 16) & 0xff) ||
gPVRTexIdentifier[3] != ((pvrTag >> 24) & 0xff))
{
return FALSE;
}
flags = CFSwapInt32LittleToHost(header->flags);
formatFlags = flags & PVR_TEXTURE_FLAG_TYPE_MASK;
if (formatFlags == kPVRTextureFlagTypePVRTC_4 || formatFlags == kPVRTextureFlagTypePVRTC_2)
{
[_imageData removeAllObjects];
if (formatFlags == kPVRTextureFlagTypePVRTC_4)
_internalFormat = GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG;
else if (formatFlags == kPVRTextureFlagTypePVRTC_2)
_internalFormat = GL_COMPRESSED_RGBA_PVRTC_2BPPV1_IMG;
_width = width = CFSwapInt32LittleToHost(header->width);
_height = height = CFSwapInt32LittleToHost(header->height);
if (CFSwapInt32LittleToHost(header->bitmaskAlpha))
_hasAlpha = TRUE;
else
_hasAlpha = FALSE;
dataLength = CFSwapInt32LittleToHost(header->dataLength);
bytes = ((uint8_t *)[data bytes]) + sizeof(PVRTexHeader);
// Calculate the data size for each texture level and respect the minimum number of blocks
while (dataOffset < dataLength)
{
if (formatFlags == kPVRTextureFlagTypePVRTC_4)
{
blockSize = 4 * 4; // Pixel by pixel block size for 4bpp
widthBlocks = width / 4;
heightBlocks = height / 4;
bpp = 4;
}
else
{
blockSize = 8 * 4; // Pixel by pixel block size for 2bpp
widthBlocks = width / 8;
heightBlocks = height / 4;
bpp = 2;
}
// Clamp to minimum number of blocks
if (widthBlocks < 2)
widthBlocks = 2;
if (heightBlocks < 2)
heightBlocks = 2;
dataSize = widthBlocks * heightBlocks * ((blockSize * bpp) / 8);
[_imageData addObject:[NSData dataWithBytes:bytes+dataOffset length:dataSize]];
for (int i=0; i < mipmapCount; i++)
{
NSLog(#"width:%d, height:%d",width,height);
data = [[NSData alloc] initWithData:[_imageData objectAtIndex:i]];
NSLog(#"data length:%d",[data length]);
//extracted 20 sample data, but all u could see are large integer number
for(int i = 0; i < 20; i++){
NSLog(#"data[%d]:%d",i,data[i]);
}
PVRTC is a block-based compression format with 4x4 (or 8x4) texel blocks; it takes surrounding blocks into account to represent two low-frequency images, which are then combined with higher-frequency modulation data to produce the actual texel output. A better explanation is available here:
http://web.onetel.net.uk/~simonnihal/assorted3d/fenney03texcomp.pdf
So the values you're extracting are actually parts of the encoded blocks and these need to be decoded correctly in order to get sensible values.
There are two ways to get to the colour information: decode/decompress the PVR texture information using a software decompressor or render the texture using a POWERVR graphics core and then read the result back. I'll only discuss the first option here.
It's rather tricky to assemble a decompressor from only the information there, but fortunately there's C++ decompression source code in the POWERVR SDK which you can get here - download one of the iPhone SDKs for instance:
http://www.imgtec.com/powervr/insider/powervr-sdk.asp
It's in the Tools/PVRTDecompress.cpp file.
Hope that helps.
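Once you have the decompressed output (the SDK decompressor produces a plain RGBA8888 buffer, if I remember correctly), reading individual pixels is straightforward. A small sketch, assuming decoded points to width*height*4 bytes of RGBA data:
#include <stdint.h>

// Sketch: read the colour of pixel (x, y) from a decompressed RGBA8888 buffer.
static void colorAtPixel(const uint8_t *decoded, uint32_t width,
                         uint32_t x, uint32_t y,
                         uint8_t *r, uint8_t *g, uint8_t *b, uint8_t *a)
{
    const uint8_t *px = decoded + (y * width + x) * 4;
    *r = px[0];
    *g = px[1];
    *b = px[2];
    *a = px[3];
}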