Create a CVPixelBufferRef with OpenGLCompatibility - objective-c

I am trying to create a CVPixelBuffer, allocate a bitmap in it, and bind it to an OpenGL texture under iOS 5, but I am having some problems with it. I can generate the pixel buffer, but the IOSurface is always null, so I cannot use it with CVOpenGLESTextureCacheCreateTextureFromImage. Since I am not using a camera or a video but a self-generated bitmap, I cannot use a CMSampleBufferRef to get the pixel buffer from it.
Here is my code, in case someone knows how to solve this issue.
CVPixelBufferRef renderTarget = nil;
CFDictionaryRef empty; // empty value for attr value.
CFMutableDictionaryRef attrs;

empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
                           NULL,
                           NULL,
                           0,
                           &kCFTypeDictionaryKeyCallBacks,
                           &kCFTypeDictionaryValueCallBacks);

attrs = CFDictionaryCreateMutable(kCFAllocatorDefault,
                                  2,
                                  &kCFTypeDictionaryKeyCallBacks,
                                  &kCFTypeDictionaryValueCallBacks);

CFDictionarySetValue(attrs,
                     kCVPixelBufferIOSurfacePropertiesKey,
                     empty);
CFDictionarySetValue(attrs, kCVPixelBufferOpenGLCompatibilityKey, [NSNumber numberWithBool:YES]);

CVReturn err = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, tmpSize.width, tmpSize.height,
                                            kCVPixelFormatType_32ARGB, imageData, tmpSize.width * 4,
                                            0, 0, attrs, &renderTarget);
If I dump the new PixelBufferRef I get:
1 : <CFString 0x3e43024c [0x3f8f8650]>{contents = "OpenGLCompatibility"} = <CFBoolean 0x3f8f8a10 [0x3f8f8650]>{value = true}
2 : <CFString 0x3e43026c [0x3f8f8650]>{contents = "IOSurfaceProperties"} = <CFBasicHash 0xa655750 [0x3f8f8650]>{type = immutable dict, count = 0,
entries =>
}
}
And I get a -6683 error (kCVReturnPixelBufferNotOpenGLCompatible) if I try to use it with a texture cache, this way:
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                             self.eagViewController.textureCache,
                                             renderTarget,
                                             nil,              // texture attributes
                                             GL_TEXTURE_2D,
                                             GL_RGBA,          // OpenGL format
                                             tmpSize.width,
                                             tmpSize.height,
                                             GL_BGRA,          // native iOS format
                                             GL_UNSIGNED_BYTE,
                                             0,
                                             &renderTextureTemp);
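For what it's worth, buffers created with CVPixelBufferCreateWithBytes wrap memory the caller allocated, so Core Video generally cannot back them with an IOSurface, and the IOSurfaceProperties entry is effectively ignored. A minimal sketch of the alternative, letting CVPixelBufferCreate allocate an IOSurface-backed buffer with the same attrs dictionary and copying the bitmap into it (reusing renderTarget, attrs, imageData, and tmpSize from the code above; the BGRA format is an assumption that matches the GL_BGRA upload in the texture cache call):

CVReturn err = CVPixelBufferCreate(kCFAllocatorDefault,
                                   tmpSize.width,
                                   tmpSize.height,
                                   kCVPixelFormatType_32BGRA, // assumed; matches the GL_BGRA format used above
                                   attrs,                     // same attributes dictionary as above
                                   &renderTarget);
if (err == kCVReturnSuccess) {
    CVPixelBufferLockBaseAddress(renderTarget, 0);
    uint8_t *dst = CVPixelBufferGetBaseAddress(renderTarget);
    size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(renderTarget);
    // Copy row by row, because Core Video may pad each destination row.
    for (size_t row = 0; row < (size_t)tmpSize.height; row++) {
        memcpy(dst + row * dstBytesPerRow,
               (const uint8_t *)imageData + row * (size_t)tmpSize.width * 4,
               (size_t)tmpSize.width * 4);
    }
    CVPixelBufferUnlockBaseAddress(renderTarget, 0);
}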

Related

Image Quality getting affected on scaling the image using vImageScale_ARGB8888 - Cocoa Objective C

I am capturing my system's screen with AVCaptureSession and then creating a video file out of the captured image buffers. It works fine.
Now I want to scale the image buffers while maintaining the aspect ratio for the video file's dimensions. I have used the following code to scale the images.
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer == NULL) { return; }
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    size_t finalWidth = 1080;
    size_t finalHeight = 720;
    size_t sourceWidth = CVPixelBufferGetWidth(imageBuffer);
    size_t sourceHeight = CVPixelBufferGetHeight(imageBuffer);
    CGRect aspectRect = AVMakeRectWithAspectRatioInsideRect(CGSizeMake(sourceWidth, sourceHeight), CGRectMake(0, 0, finalWidth, finalHeight));

    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t startY = aspectRect.origin.y;
    size_t yOffSet = (finalWidth * startY * 4);

    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    void *destData = malloc(finalHeight * finalWidth * 4);

    vImage_Buffer srcBuffer = { (void *)baseAddress, sourceHeight, sourceWidth, bytesPerRow };
    vImage_Buffer destBuffer = { (void *)destData + yOffSet, aspectRect.size.height, aspectRect.size.width, aspectRect.size.width * 4 };
    vImage_Error err = vImageScale_ARGB8888(&srcBuffer, &destBuffer, NULL, 0);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    OSType pixelFormat = CVPixelBufferGetPixelFormatType(imageBuffer);
    CVImageBufferRef pixelBuffer1 = NULL;
    CVReturn result = CVPixelBufferCreateWithBytes(NULL, finalWidth, finalHeight, pixelFormat, destData, finalWidth * 4, NULL, NULL, NULL, &pixelBuffer1);
}
I am able to scale the image with the above code, but the final image seems blurry compared to resizing the image with the Preview application. Because of this the video is not clear.
This works fine if I change the output pixel format to RGB with the code below.
output.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil];
But I want the image buffers in YUV format (which is the default format for AVCaptureVideoDataOutput), since this will reduce the size of the buffers when transferring them over the network.
Image after scaling:
Image resized with Preview application:
I have tried using vImageScale_CbCr8 instead of vImageScale_ARGB8888, but the resulting image didn't contain correct RGB values.
I have also noticed there is a function to convert the image format: vImageConvert_422YpCbYpCr8ToARGB8888(const vImage_Buffer *src, const vImage_Buffer *dest, const vImage_YpCbCrToARGB *info, const uint8_t permuteMap[4], const uint8_t alpha, vImage_Flags flags);
But I don't know what the values for vImage_YpCbCrToARGB and permuteMap should be, as I don't know anything about image processing.
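In case it helps, the conversion info can be generated once with vImageConvert_YpCbCrToARGB_GenerateConversion and reused for every frame. This is only a rough sketch; it assumes video-range ITU-R BT.601 4:2:2 input, and a different capture format would need a different matrix, pixel range, and vImageYpCbCrType:

#import <Accelerate/Accelerate.h>

vImage_YpCbCrToARGB info;
// Video-range levels: luma 16-235, chroma 16-240, biases 16 and 128.
vImage_YpCbCrPixelRange pixelRange = { 16, 128, 235, 240, 235, 16, 240, 16 };
vImage_Error convErr = vImageConvert_YpCbCrToARGB_GenerateConversion(
    kvImage_YpCbCrToARGBMatrix_ITU_R_601_2, // SD video matrix (assumed)
    &pixelRange,
    &info,
    kvImage422YpCbYpCr8,                    // must match the actual YUV layout of the capture buffers
    kvImageARGB8888,
    kvImageNoFlags);

// permuteMap reorders the output channels: {0,1,2,3} keeps A,R,G,B,
// while {3,2,1,0} would produce B,G,R,A instead.
const uint8_t permuteMap[4] = {0, 1, 2, 3};
// vImageConvert_422YpCbYpCr8ToARGB8888(&srcBuffer, &destBuffer, &info, permuteMap, 255, kvImageNoFlags);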
Expected Solution:
How to convert YUV pixel buffers to RGB buffers and back to YUV, or how to scale YUV pixel buffers without affecting the RGB values.
After a lot of searching and going through different questions related to image rendering, I found the code below to convert the pixel format of the image buffers. Thanks to the answer in this link.
// pixelTransferSession is a VTPixelTransferSessionRef created once with VTPixelTransferSessionCreate and reused for every frame.
CVPixelBufferRef imageBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, sourceWidth, sourceHeight, kCVPixelFormatType_32ARGB, 0, &imageBuffer);
VTPixelTransferSessionTransferImage(pixelTransferSession, pixelBuffer, imageBuffer);
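Since VTPixelTransferSession can also scale, the same approach may avoid the ARGB round trip entirely. This is only a sketch, reusing the variable names from the question and assuming a biplanar video-range YUV output is acceptable:

VTPixelTransferSessionRef pixelTransferSession = NULL;
VTPixelTransferSessionCreate(kCFAllocatorDefault, &pixelTransferSession);
// Letterbox preserves the source aspect ratio inside the 1080x720 target.
VTSessionSetProperty(pixelTransferSession,
                     kVTPixelTransferPropertyKey_ScalingMode,
                     kVTScalingMode_Letterbox);

CVPixelBufferRef scaledBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, finalWidth, finalHeight,
                    kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, // stay in YUV
                    NULL, &scaledBuffer);
VTPixelTransferSessionTransferImage(pixelTransferSession, pixelBuffer, scaledBuffer);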

Convert raw pixel data to JPEG file data on IOS using Objective-C and UIKit

I am attempting to build an application on iOS using Gluon Mobile that uses the device's camera to take a picture and then sends the file data of that picture to a backend server. However, this cannot be done directly, since the PictureService provided by Gluon only returns a JavaFX Image object when a picture is taken.
To get around this I am attempting to get the raw pixel data from the Image object in IOSImageService.java, then use UIKit and Objective-C to take advantage of iOS native libraries to convert the raw pixel data into JPEG file data in JPEGConversion.m (code mainly taken from here).
From my DEBUG OUTPUT I can see that when I attempt to get the NSData object from the UIImage object, it always comes out NULL with a length of 0. I am not sure where I am going wrong with this code.
Am I using the UIKit libraries wrong? Am I not using Objective-C correctly?
Or am I not even sending the data to the Objective-C code in the proper format?
Everything here suggests I am going in the right direction; I just have to successfully get the NSData for the JPEG file, convert that into a jbyteArray, and return it.
Java: IOSImageService.java
package com.gluonhq.charm.down.plugins.ios;

import com.gluonhq.charm.down.plugins.ImageService;
import javafx.scene.image.Image;
import javafx.scene.image.PixelFormat;
import javafx.scene.image.PixelReader;

public class IOSImageService implements ImageService {

    static {
        System.loadLibrary("JPEGConversion");
    }

    private native byte[] nativeGetImageBytes(byte[] array, int width, int height);

    @Override
    public byte[] getImageBytes(Image image) {
        int width = (int) image.getWidth();
        int height = (int) image.getHeight();
        System.out.println("Width: " + Integer.toString(width) + "\nHeight: " + Integer.toString(height));

        // Get raw pixel data from the image
        byte[] BGRAFormattedArray = new byte[width * height * 4];
        PixelReader p = image.getPixelReader();
        p.getPixels(0, 0, width, height, PixelFormat.getByteBgraPreInstance(), BGRAFormattedArray, 0, width * 4);
        System.out.println("Size: " + Integer.toString(BGRAFormattedArray.length));

        // Change format from BGRA to RGBA
        byte[] RGBAFormattedArray = new byte[width * height * 4];
        for (int i = 0; i < BGRAFormattedArray.length; i += 4) {
            RGBAFormattedArray[i + 0] = BGRAFormattedArray[i + 2];
            RGBAFormattedArray[i + 1] = BGRAFormattedArray[i + 1];
            RGBAFormattedArray[i + 2] = BGRAFormattedArray[i + 0];
            RGBAFormattedArray[i + 3] = BGRAFormattedArray[i + 3];
        }
        return nativeGetImageBytes(RGBAFormattedArray, width, height);
    }
}
Objective-C: JPEGConversion.m
JNIEXPORT jbyteArray JNICALL Java_com_gluonhq_charm_down_plugins_ios_IOSImageService_nativeGetImageBytes
(JNIEnv *env, jobject obj, jbyteArray array, jint width, jint height)
{
    jbyte *elements = (*env)->GetByteArrayElements(env, array, NULL);
    if (elements == NULL) {
        NSLog(@"RETURNING BECAUSE DATA WAS NULL");
        return NULL;
    }
    int length = (*env)->GetArrayLength(env, array);
    int w = (int)width;
    int h = (int)height;
    NSLog(@"ARRAY LENGTH: %i", length);
    NSLog(@"Width: %i", w);
    NSLog(@"Height: %i", h);

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, elements, (64 * 64 * 4), NULL);
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * w;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(w, h, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, kCGBitmapByteOrderDefault | kCGImageAlphaLast, provider, NULL, false, renderingIntent);

    size_t b_width = CGImageGetWidth(imageRef);
    NSLog(@"IMAGE REFFERENCE WIDTH: %i", (int)b_width);
    if (imageRef == NULL) {
        NSLog(@"RETURNING BECAUSE IMAGE REFERENCE IS NULL");
        return NULL;
    }

    UIImage *image = [UIImage imageWithCGImage:imageRef];
    if (image == NULL) {
        NSLog(@"UIIMAGE WAS NULL");
    } else {
        NSLog(@"UIIMAGE WAS NOT NULL");
    }

    NSData *pictureData = UIImageJPEGRepresentation(image, 1.0);
    if (pictureData == NULL) {
        NSLog(@"PICTURE DATA NULL");
    }
    NSLog(@"bytes in hex: %@", [pictureData description]);
    //int picLength = (int)[pictureData length];
    NSLog(@"Picture Length: %ld", [pictureData length]);

    CGImageRelease(imageRef);
    (*env)->ReleaseByteArrayElements(env, array, elements, JNI_ABORT);

    // dummy return value
    jbyte a[] = {1, 255, 1, 31, 1, 15};
    jbyteArray result = (*env)->NewByteArray(env, 6);
    (*env)->SetByteArrayRegion(env, result, 0, 6, a);
    return result;
}
DEBUG OUTPUT
Width: 128
Height: 128
Size: 65536
2018-03-14 15:51:22.117174-0230 MultiplatformGluonApplicationApp[1166:1138266] ARRAY LENGTH: 65536
2018-03-14 15:51:22.117270-0230 MultiplatformGluonApplicationApp[1166:1138266] Width: 128
2018-03-14 15:51:22.117349-0230 MultiplatformGluonApplicationApp[1166:1138266] Height: 128
2018-03-14 15:51:22.117413-0230 MultiplatformGluonApplicationApp[1166:1138266] IMAGE REFFERENCE WIDTH: 128
2018-03-14 15:51:22.117518-0230 MultiplatformGluonApplicationApp[1166:1138266] UIIMAGE WAS NOT NULL
2018-03-14 15:51:22.119407-0230 MultiplatformGluonApplicationApp[1166:1138266] PICTURE DATA NULL
2018-03-14 15:51:22.119504-0230 MultiplatformGluonApplicationApp[1166:1138266] bytes in hex: (null)
2018-03-14 15:51:22.119537-0230 MultiplatformGluonApplicationApp[1166:1138266] Picture Length: 0
JPG size: 6
[ 0x01 0xFF 0x01 0x1F 0x01 0x0F ]
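One detail that stands out in the code above, offered as a guess rather than a confirmed fix: the data provider length is hardcoded to 64 * 64 * 4 even though the image is 128 x 128, so CoreGraphics is given fewer bytes than the CGImage claims to need. A hypothetical corrected fragment (untested), which also uses premultiplied alpha to match the ByteBgraPre data requested on the Java side:

size_t dataLength = (size_t)w * (size_t)h * 4;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, elements, dataLength, NULL);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef imageRef = CGImageCreate(w, h,
                                    8,          // bits per component
                                    32,         // bits per pixel
                                    4 * w,      // bytes per row
                                    colorSpace,
                                    kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast,
                                    provider, NULL, false, kCGRenderingIntentDefault);
UIImage *image = [UIImage imageWithCGImage:imageRef];
NSData *pictureData = UIImageJPEGRepresentation(image, 1.0);

// Copy the JPEG bytes into a jbyteArray for the Java caller.
jbyteArray result = (*env)->NewByteArray(env, (jsize)[pictureData length]);
(*env)->SetByteArrayRegion(env, result, 0, (jsize)[pictureData length], (const jbyte *)[pictureData bytes]);

CGImageRelease(imageRef);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(provider);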

How to use CFDictionaryCreate in Swift 3?

I want to draw an attributed string with a clipping rect, and I found a function written in Objective-C:
CFDictionaryRef PINCHFrameAttributesCreateWithClippingRect(CGRect clippingRect, CGAffineTransform transform)
{
    CGPathRef clipPath = CGPathCreateWithRect(clippingRect, &transform);
    CFDictionaryRef options;

    CFStringRef keys[] = {kCTFramePathClippingPathAttributeName};
    CFTypeRef values[] = {clipPath};
    CFDictionaryRef clippingPathDict = CFDictionaryCreate(NULL,
                                                          (const void **)&keys, (const void **)&values,
                                                          sizeof(keys) / sizeof(keys[0]),
                                                          &kCFTypeDictionaryKeyCallBacks,
                                                          &kCFTypeDictionaryValueCallBacks);

    CFTypeRef clippingArrayValues[] = { clippingPathDict };
    CFArrayRef clippingPaths = CFArrayCreate(NULL, (const void **)clippingArrayValues, sizeof(clippingArrayValues) / sizeof(clippingArrayValues[0]), &kCFTypeArrayCallBacks);

    CFStringRef optionsKeys[] = {kCTFrameClippingPathsAttributeName};
    CFTypeRef optionsValues[] = {clippingPaths};
    options = CFDictionaryCreate(NULL, (const void **)&optionsKeys, (const void **)&optionsValues,
                                 sizeof(optionsKeys) / sizeof(optionsKeys[0]),
                                 &kCFTypeDictionaryKeyCallBacks,
                                 &kCFTypeDictionaryValueCallBacks);

    CFRelease(clippingPathDict);
    CFRelease(clippingPaths);
    CGPathRelease(clipPath);
    return options;
}
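For reference, the function above is used roughly like this (a sketch in Objective-C; clipRect, rectPath, attributedString, and context are placeholder names, and it mirrors what the Swift attempt below does):

CFDictionaryRef options = PINCHFrameAttributesCreateWithClippingRect(clipRect, CGAffineTransformIdentity);
CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString((__bridge CFAttributedStringRef)attributedString);
CTFrameRef frame = CTFramesetterCreateFrame(framesetter,
                                            CFRangeMake(0, attributedString.length),
                                            rectPath,
                                            options);
CTFrameDraw(frame, context);
CFRelease(frame);
CFRelease(framesetter);
CFRelease(options);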
But I failed to translate it into Swift 3. Can anybody help me?
//
I tried the code below; it compiles and runs, but the text is not clipped:
let keys: [CFString] = [kCTFrameClippingPathsAttributeName]
let values: [CFTypeRef] = [clipPath]
let keysPointer = UnsafeMutablePointer<UnsafeRawPointer?>.allocate(capacity: keys.count)
keysPointer.initialize(to: keys)
let valuesPointer = UnsafeMutablePointer<UnsafeRawPointer?>.allocate(capacity: values.count)
valuesPointer.initialize(to: values)
let options = CFDictionaryCreate(kCFAllocatorDefault, keysPointer, valuesPointer, 1, nil, nil)
let framesetter = CTFramesetterCreateWithAttributedString(attributedSring)
let frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, attributedSring.length), rectPath.cgPath, options)
CTFrameDraw(frame, context)
Construct a Swift dictionary and then cast it to CFDictionary. Example:
let d = [
    kCGImageSourceShouldAllowFloat as String : true as NSNumber,
    kCGImageSourceCreateThumbnailWithTransform as String : true as NSNumber,
    kCGImageSourceCreateThumbnailFromImageAlways as String : true as NSNumber,
    kCGImageSourceThumbnailMaxPixelSize as String : w as NSNumber
] as CFDictionary
More details here: https://bugs.swift.org/browse/SR-2388

Correctly freeing a buffer from vImageBuffer_InitWithCVPixelBuffer

I'm attempting to convert CVPixelBufferRefs from a video source to CGImageRefs using the vImage conversion routines on 10.10. For the most part this works fine. However, each time I initialize a new vImage_Buffer from my CVPixelBufferRef, memory is gobbled up and never returned.
Here is a simplified version of the conversion, which should ideally use no memory at the end of the day:
CVPixelBufferRef pixelBuffer = ...; // retained CVPixelBufferRef from somewhere else
vImage_Buffer buffer;
vImage_CGImageFormat format = {
    .bitsPerComponent = 8,
    .bitsPerPixel = 32,
    .colorSpace = NULL,
    .bitmapInfo = kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little,
    .version = 0,
    .decode = NULL,
    .renderingIntent = kCGRenderingIntentAbsoluteColorimetric
};
vImage_Error imageError = vImageBuffer_InitWithCVPixelBuffer(&buffer, &format, pixelBuffer, NULL, NULL, kvImagePrintDiagnosticsToConsole);
// Do conversion here
free(buffer.data);
With the last two lines (the init and the free) commented out, the app effectively uses no more memory than it started with. With those two lines in place, however, 6 MB are consumed each time.
If I only comment out the free, even more memory is being consumed, so the free is definitely doing something, but I can only assume vImageBuffer_InitWithCVPixelBuffer is using more memory than it is supposed to. Has anyone else seen this?
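If the per-frame allocation itself turns out to be the cost, one possible mitigation, purely a sketch that assumes the frame size never changes and that kvImageNoAllocate is honored by vImageBuffer_InitWithCVPixelBuffer, is to allocate the destination once and have the call fill the existing allocation instead of creating a new one each frame:

static vImage_Buffer reusableBuffer = { .data = NULL };
if (reusableBuffer.data == NULL) {
    // One-time allocation sized for the incoming frames.
    vImageBuffer_Init(&reusableBuffer,
                      CVPixelBufferGetHeight(pixelBuffer),
                      CVPixelBufferGetWidth(pixelBuffer),
                      32, // bits per pixel, matching the BGRA8888 format above
                      kvImageNoFlags);
}
vImage_Error imageError = vImageBuffer_InitWithCVPixelBuffer(&reusableBuffer, &format, pixelBuffer,
                                                             NULL, NULL,
                                                             kvImageNoAllocate | kvImagePrintDiagnosticsToConsole);
// ... do the conversion with reusableBuffer ...
// free(reusableBuffer.data) only once, when no more frames are expected.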
For completeness, here is the whole conversion method to go from a CVPixelBufferRef to an NSImage:
CVPixelBufferRef pixelBuffer = ...; // retained CVPixelBufferRef from somewhere else
NSImage *image = nil;
vImage_Buffer buffer;
vImage_CGImageFormat format = {
    .bitsPerComponent = 8,
    .bitsPerPixel = 32,
    .colorSpace = NULL,
    .bitmapInfo = kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little,
    .version = 0,
    .decode = NULL,
    .renderingIntent = kCGRenderingIntentAbsoluteColorimetric
};
vImage_Error imageError = vImageBuffer_InitWithCVPixelBuffer(&buffer, &format, pixelBuffer, NULL, NULL, kvImagePrintDiagnosticsToConsole);
if (imageError != 0) {
    NSLog(@"vImageBuffer_InitWithCVPixelBuffer Error: %zd", imageError);
} else {
    CGImageRef imageRef = vImageCreateCGImageFromBuffer(&buffer, &format, NULL, NULL, kvImagePrintDiagnosticsToConsole | kvImageHighQualityResampling, &imageError);
    if (!imageRef) {
        NSLog(@"vImageCreateCGImageFromBuffer Error: %zd", imageError);
    } else {
        image = [[NSImage alloc] initWithCGImage:imageRef size:NSMakeSize(CGImageGetWidth(imageRef), CGImageGetHeight(imageRef))];
        CGImageRelease(imageRef);
        NSAssert(image != nil, @"Creating the image failed!");
    }
}
free(buffer.data);

CGDataProvider works the first time, returns an empty image the second time

I am trying to read the ARGB pixel data from a PNG image asset in my iOS app.
I am using CGDataProvider to get a CFDataRef as described here:
http://developer.apple.com/library/ios/#qa/qa1509/_index.html
It works perfectly the first time I use it on a certain image. But the second time I use it on THE SAME image, it returns a length 0 CFDataRef.
Maybe I am not releasing something? Why would it do that?
- (GLuint)initWithCGImage:(CGImageRef)newImageSource
{
    CGDataProviderRef dataProvider;
    CFDataRef dataRef;
    GLuint t;
    @try {
        // NSLog(@"initWithCGImage");
        // report_memory2();
        CGFloat widthOfImage = CGImageGetWidth(newImageSource);
        CGFloat heightOfImage = CGImageGetHeight(newImageSource);
        // pixelSizeOfImage = CGSizeMake(widthOfImage, heightOfImage);
        // CGSize pixelSizeToUseForTexture = pixelSizeOfImage;
        // CGSize scaledImageSizeToFitOnGPU = [GPUImageOpenGLESContext sizeThatFitsWithinATextureForSize:pixelSizeOfImage];
        GLubyte *imageData = NULL;
        //CFDataRef dataFromImageDataProvider;
        // stbi stbiClass;
        int x;
        int y;
        int comp;

        dataProvider = CGImageGetDataProvider(newImageSource);
        dataRef = CGDataProviderCopyData(dataProvider);
        const unsigned char * bytesRef = CFDataGetBytePtr(dataRef);
        // NSUInteger length = CFDataGetLength(dataRef);
        //CGDataProviderRelease(dataProvider);
        //dataProvider = nil;

        /*
        UIImage *tmpImage = [UIImage imageWithCGImage:newImageSource];
        NSData *data2 = UIImagePNGRepresentation(tmpImage);
        // if (data2==NULL)
        //     data2 = UIImageJPEGRepresentation(tmpImage, 1);
        unsigned char *bytes = (unsigned char *)[data2 bytes];
        NSUInteger length = [data2 length];*/

        // stbiClass.img_buffer = bytes;
        // stbiClass.buflen = length;
        // stbiClass.img_buffer_original = bytes;
        // stbiClass.img_buffer_end = bytes + length;
        // unsigned char *data = stbi_load_main(&stbiClass, &x, &y, &comp, 0);
        //unsigned char * data = bytesRef;

        x = widthOfImage;
        y = heightOfImage;
        comp = CGImageGetBitsPerPixel(newImageSource) / 8;
        int textureWidth = [self CalcPow2:x];
        int textureHeight = [self CalcPow2:y];
        unsigned char *scaledData = [self scaleImageWithParams:@{@"x":@(x), @"y":@(y), @"comp":@(comp), @"targetX":@(textureWidth), @"targetY":@(textureHeight)} andData:(unsigned char *)bytesRef];
        //CFRelease (dataRef);
        // dataRef = nil;
        // free (data);

        glGenTextures(1, &t);
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, t);
        GLint format = (comp > 3) ? GL_RGBA : GL_RGB;
        imageData = scaledData;
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexImage2D(GL_TEXTURE_2D, 0, format, textureWidth, textureHeight, 0, format, GL_UNSIGNED_BYTE, imageData);
        //GLenum err = glGetError();
    }
    @finally
    {
        CGDataProviderRelease(dataProvider);
        // CGColorSpaceRelease(colorSpaceRef);
        CGImageRelease(dataRef);
    }
    return t;
}
The second time this is called on a CGImageRef that originates from [UIImage imageNamed:path] with the same path as the first time, I get a dataRef of length 0.
It works the first time though.
I have found one big issue with the code I posted and fixed it.
First of all, I was getting crashes even when I didn't load the same image twice but simply loaded more images. Since the issue is related to memory, it failed in all sorts of weird ways.
The issue with the code is that I am calling "CGDataProviderRelease(dataProvider);".
I am using the data provider of newImageSource, but I didn't create this data provider, so I shouldn't release it.
You need to release things only if you created, retained, or copied them.
Apart from that, my app sometimes crashed due to low memory, but after fixing this I was able to use the "economy" approach where I allocate and release as soon as possible.
Currently I can't see anything else wrong with this specific code.
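As a small illustration of that ownership rule, using the same calls as the method above: functions with "Copy" or "Create" in the name return objects the caller owns and must release, while "Get" functions do not transfer ownership.

CGDataProviderRef provider = CGImageGetDataProvider(newImageSource); // "Get": not owned, do NOT release
CFDataRef dataRef = CGDataProviderCopyData(provider);                // "Copy": owned, must be released
const UInt8 *bytes = CFDataGetBytePtr(dataRef);                      // valid while dataRef is alive

// ... use bytes ...

CFRelease(dataRef); // release only what was created, copied, or retained
// no CGDataProviderRelease(provider) here, because we never owned it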