calloc in Swift

How do I translate the following Objective-C statements into Swift?
UInt32 *pixels;
pixels = (UInt32 *) calloc(height * width, sizeof(UInt32));
I tried to do the following:
var pixels: UInt32
pixels = (UInt32)calloc(height * width, sizeof(UInt32))
and I receive the error message:
Int is not convertible to UInt
and the (UInt32) cast didn't work either.
Can someone give me some advice, please? I am still struggling a little bit with Swift. Thank you.

Here's an easier way of allocating that array in Swift (pre-Swift 3 syntax; today it would be [UInt32](repeating: 0, count: height * width)):
var pixels = [UInt32](count: height * width, repeatedValue: 0)
If that's what you actually want to do.
But, if you need a pointer from calloc for some reason, go with:
let pixels = calloc(UInt(height * width), UInt(sizeof(UInt32)))
The type of pixels, though, will be a flavor of UnsafeMutablePointer, and you would handle it like a Swift pointer in the rest of your code.

For Swift 3:
UnsafeMutablePointer<Void> is replaced by UnsafeMutableRawPointer:
var pixels = UnsafeMutableRawPointer(calloc(height * width, MemoryLayout<UInt32>.size))

If you really know what you are doing and insist on allocating memory unsafely using calloc:
var pixels: UnsafeMutablePointer<UInt32>
pixels = UnsafeMutablePointer<UInt32>(calloc(height * width, sizeof(UInt32)))
or just
var pixels = calloc(height * width, sizeof(UInt32))
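For reference, in Swift 3 and later calloc returns an optional UnsafeMutableRawPointer, which you can bind to a typed pointer. A minimal sketch (my addition, not from the answers above; the dimensions are placeholders):
import Foundation

let width = 4, height = 4          // placeholder dimensions
let count = width * height
// calloc zero-fills the memory; the result is optional, so unwrap it
guard let raw = calloc(count, MemoryLayout<UInt32>.size) else {
    fatalError("allocation failed")
}
// bind the raw memory to UInt32 so it can be indexed like the C original
let pixels = raw.bindMemory(to: UInt32.self, capacity: count)
pixels[0] = 0xFFFFFFFF
free(raw)                          // calloc'd memory must be freed manually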

Related

Scale vImage_Buffer with offset - Cocoa / Objective-C

I am trying to scale an image using vImage_Buffer, and the code below works for me. My trouble is that I want to maintain the aspect ratio of the source image, so I might need to add an xOffset or yOffset. The code below only works for a yOffset. How can I scale the image with an xOffset as well? I cannot do the scaling with CGContext, since that affects performance.
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
size_t finalWidth = 1080;
size_t finalHeight = 720;
size_t sourceWidth = CVPixelBufferGetWidth(imageBuffer);
size_t sourceHeight = CVPixelBufferGetHeight(imageBuffer);
CGRect aspectRect = AVMakeRectWithAspectRatioInsideRect(CGSizeMake(sourceWidth, sourceHeight), CGRectMake(0, 0, finalWidth, finalHeight));
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t startY = aspectRect.origin.y;
size_t yOffSet = (finalWidth*startY*4);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
void* destData = malloc(finalHeight * finalWidth * 4);
vImage_Buffer srcBuffer = { (void *)baseAddress, sourceHeight, sourceWidth, bytesPerRow};
vImage_Buffer destBuffer = { (void *)destData+yOffSet, aspectRect.size.height, aspectRect.size.width, aspectRect.size.width * 4};
vImage_Error err = vImageScale_ARGB8888(&srcBuffer, &destBuffer, NULL, 0);
No pun intended, but you should really read the Accelerate.framework documentation.
Replace malloc with calloc ...
void *destData = calloc(finalHeight * finalWidth, 4);
... to zero all the bytes (or use any other way).
What does vImage_Buffer.rowBytes documentation say?
The distance, in bytes, between the start of one pixel row and the next in an image, including any unused space between them.
The rowBytes value must be at least the width multiplied by the pixel size, where the pixel size depends on the image format. You can provide a larger value, in which case the extra bytes will extend beyond the end of each row of pixels. You may want to do so either to improve performance, or to describe an image within a larger image without copying the data. The extra bytes aren't considered part of the image represented by the vImage buffer.
When allocating floating-point data for images, keep the data 4-byte aligned by allocating bytes as integer multiples of 4. For best performance, allocate bytes as integer multiples of 16.
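To make the alignment advice concrete, here is a small Swift sketch (my addition) that rounds rowBytes up to a multiple of 16:
// round the minimal rowBytes (width * bytesPerPixel) up to a multiple of 16
func alignedRowBytes(width: Int, bytesPerPixel: Int = 4, alignment: Int = 16) -> Int {
    let minimum = width * bytesPerPixel
    return (minimum + alignment - 1) / alignment * alignment
}
alignedRowBytes(width: 1080)  // 4320, already a multiple of 16
alignedRowBytes(width: 333)   // 1332 rounded up to 1344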
Picture the destination buffer laid out row by row. The top/left corner of the scaled image (where it should start inside the full-size buffer) is offset from the buffer start; let's calculate that offset (assuming 4 bytes per pixel):
size_t startY = aspectRect.origin.y;
size_t startX = aspectRect.origin.x;
size_t offset = 4 * (finalWidth * startY + startX);
The distance, in bytes, between the start of one pixel row and the next, including any unused space between them, is finalWidth * 4 here, because each row of the scaled image sits inside a row of the full-size destination buffer.
Let's fix the destBuffer:
vImage_Buffer destBuffer = {
(void *)destData+offset,
aspectRect.size.height,
aspectRect.size.width,
finalWidth * 4
};
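To sanity-check the arithmetic with some numbers (the aspect-fit origin below is hypothetical, just for illustration):
let finalWidth = 1080
let startX = 0, startY = 45        // hypothetical aspect-fit origin
let offset = 4 * (finalWidth * startY + startX)
// offset == 194_400 bytes: skip 45 full rows of 1080 pixels, 4 bytes each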

What is the swift equivalent of this cast?

I want to convert this into Swift, or at least find something that does the same thing.
size_t width = CGImageGetWidth(spriteImage);
size_t height = CGImageGetHeight(spriteImage);
GLubyte * spriteData = (GLubyte *) calloc(width*height*4, sizeof(GLubyte));
I need to initialize the spriteData pointer in swift of the right size.
The first two of those are of type size_t, which maps to UInt in Swift, and the last is GLubyte, which maps to UInt8. This code will initialize spriteData as an array of GLubyte, which you could pass to any C function that needs an UnsafeMutablePointer<GLubyte> or UnsafePointer<GLubyte>:
let width = CGImageGetWidth(spriteImage)
let height = CGImageGetHeight(spriteImage)
var spriteData: [GLubyte] = Array(count: Int(width * height * 4), repeatedValue: 0)
What do you need to do with spriteData?
The straightforward way is:
var spriteData = UnsafeMutablePointer<GLubyte>.alloc(Int(width * height * 4))
Note that, if you do this, you have to deallocate it manually:
spriteData.dealloc(Int(width * height * 4))
/// Deallocate `num` objects.
///
/// :param: num number of objects to deallocate. Should match exactly
/// the value that was passed to `alloc()` (partial deallocations are not
/// possible).
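For completeness, the same allocation in current Swift (a sketch assuming Swift 4.1+, where alloc/dealloc became allocate(capacity:)/deallocate(); GLubyte is just a typealias for UInt8):
let width = 64, height = 64        // placeholder sprite dimensions
let capacity = width * height * 4
let spriteData = UnsafeMutablePointer<UInt8>.allocate(capacity: capacity)
spriteData.initialize(repeating: 0, count: capacity)  // zeroed, like calloc
// ... pass spriteData to the C API expecting a GLubyte buffer ...
spriteData.deinitialize(count: capacity)
spriteData.deallocate()            // deallocation is still your job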

How can I deal with operation between Int and CGFloat?

In Objective-C I can do the following operation:
Objective-C code:
CGFloat width = CGRectGetWidth(mView.bounds);
CGFloat total = width*[myArray count];
but in Swift, it will raise an error:
Could not find an overload for '*' that accepts the supplied arguments
How can I avoid this situation elegantly?
First, let's create some demo data
let array: NSArray = ["a", "b", "c"] //you could use a Swift array, too
let view = UIView() //just some view
Now, everything else works almost the same way as in Obj-C
let width: CGFloat = CGRectGetWidth(view.bounds)
or simply
let width = CGRectGetWidth(view.bounds) //type of the variable is inferred
and total:
let total = width * CGFloat(array.count)
Note that we have to add a CGFloat cast for array.count. Obj-C would implicitly cast NSUInteger to CGFloat, but Swift has no implicit casts, so we have to add an explicit one.
In Swift you cannot multiply two numbers of different types (NSNumber, Int, Double, etc.) directly. The width of a CGRect is a floating-point type and the array count is an integer type. Here's a working example:
let myArray: [Int] = [1, 2, 3]
let rect: CGRect = CGRect(x: 0, y: 0, width: 100, height: 100)
let total: CGFloat = rect.size.width * CGFloat(myArray.count)
Swift does not allow operations between two numbers of different types. Therefore, before multiplying your array.count (an Int) by your width (a CGFloat), you have to convert the count to CGFloat.
Fortunately, Swift provides a simple CGFloat initializer init(_:) that creates a new CGFloat from an Int. This initializer has the following declaration:
init<Source>(_ value: Source) where Source : BinaryInteger
Creates a new value, rounded to the closest possible representation.
The Swift 5 Playground sample code below shows how to perform your calculation by using CGFloat's initializer:
import UIKit
import CoreGraphics
// Set array and view
let array = Array(1...3)
let rect = CGRect(x: 0, y: 0, width: 100, height: 100)
let view = UIView(frame: rect)
// Perform operation
let width = view.bounds.width
let total = width * CGFloat(array.count)
print(total) // prints: 300.0
You need to convert all the Ints to CGFloat, or all the CGFloats to Int:
let a: CGFloat = 0.25
let b: Int = 1
// wrong: Binary operator '*' cannot be applied to operands of type 'CGFloat' and 'Int'
// let c = a * b
// right: convert the Int to CGFloat
let r1 = a * CGFloat(b)
// right: convert the CGFloat to Int (truncating: Int(0.25) == 0, so r2 == 0)
let r2 = Int(a) * b

vImage not putting channels correctly back together

I tried to extract all 3 channels from an image with vImageConvert_RGB888toPlanar8 and then put them back together with vImageConvert_Planar8toRGB888, but the image gets totally messed up. Why is that?
vImage_Buffer blueBuffer;
blueBuffer.data = (void*)blueImageData.bytes;
blueBuffer.width = size.width;
blueBuffer.height = size.height;
blueBuffer.rowBytes = [blueImageData length]/size.height;
vImage_Buffer rBuffer;
rBuffer.width = size.width;
rBuffer.height = size.height;
rBuffer.rowBytes = size.width;
void *rPixelBuffer = malloc(size.width * size.height);
if(rPixelBuffer == NULL)
{
NSLog(@"No pixelbuffer");
}
rBuffer.data = rPixelBuffer;
vImage_Buffer gBuffer;
gBuffer.width = size.width;
gBuffer.height = size.height;
gBuffer.rowBytes = size.width;
void *gPixelBuffer = malloc(size.width * size.height);
if(gPixelBuffer == NULL)
{
NSLog(@"No pixelbuffer");
}
gBuffer.data = gPixelBuffer;
vImage_Buffer bBuffer;
bBuffer.width = size.width;
bBuffer.height = size.height;
bBuffer.rowBytes = size.width;
void *bPixelBuffer = malloc(size.width * size.height);
if(bPixelBuffer == NULL)
{
NSLog(@"No pixelbuffer");
}
bBuffer.data = bPixelBuffer;
vImageConvert_RGB888toPlanar8(&blueBuffer, &rBuffer, &gBuffer, &bBuffer, kvImageNoFlags);
size_t destinationImageBytesLength = size.width*size.height*3;
const void* destinationImageBytes = valloc(destinationImageBytesLength);
NSData* destinationImageData = [[NSData alloc] initWithBytes:destinationImageBytes length:destinationImageBytesLength];
vImage_Buffer destinationBuffer;
destinationBuffer.data = (void*)destinationImageData.bytes;
destinationBuffer.width = size.width;
destinationBuffer.height = size.height;
destinationBuffer.rowBytes = [destinationImageData length]/size.height;
vImage_Error result = vImageConvert_Planar8toRGB888(&rBuffer, &gBuffer, &bBuffer, &destinationBuffer, 0);
NSImage* image = nil;
if(result == kvImageNoError)
{
//TODO: If you need color matching, use an appropriate colorspace here
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGDataProviderRef dataProvider = CGDataProviderCreateWithCFData((__bridge CFDataRef)(destinationImageData));
CGImageRef finalImageRef = CGImageCreate(size.width, size.height, 8, 24, destinationBuffer.rowBytes, colorSpace, kCGBitmapByteOrder32Big|kCGImageAlphaNone, dataProvider, NULL, NO, kCGRenderingIntentDefault);
CGColorSpaceRelease(colorSpace);
CGDataProviderRelease(dataProvider);
image = [[NSImage alloc] initWithCGImage:finalImageRef size:NSMakeSize(size.width, size.height)];
CGImageRelease(finalImageRef);
}
free((void*)destinationImageBytes);
return image;
Working with vImage means working with pixels only. So you must never use the size of an image (or imageRep); use only pixelsWide and pixelsHigh. Replace all size.width with pixelsWide and all size.height with pixelsHigh. Apple has example code for vImage that uses size values. Don't believe it! Not all Apple example code is correct.
The size of an image or imageRep determines how large the image shall be drawn on the screen (or a printer). Size values have the dimension of a length, and the units are meters, centimeters, inches, or (as in Cocoa) points (1/72 inch). They are represented as float values.
pixelsWide and pixelsHigh have no dimension and no unit (they are simply numbers) and are represented as integer values.
There may be more bugs in your code, but the first step should be to replace all size values.
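To see the difference in code, here is a quick Swift sketch (my addition; the file path is hypothetical):
import AppKit

// pixelsWide/pixelsHigh are integer pixel counts; size is in points (1/72 inch)
if let data = FileManager.default.contents(atPath: "/tmp/image.tiff"),
   let rep = NSBitmapImageRep(data: data) {
    print("pixels: \(rep.pixelsWide) x \(rep.pixelsHigh)")  // use these for vImage
    print("display size: \(rep.size)")                      // not this
}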
Strictly speaking, you want kCGBitmapByteOrderDefault instead of kCGBitmapByteOrder32Big. 32Big doesn't make much sense for a 24-bit pixel format.
This seems like a weak link:
destinationBuffer.rowBytes = [destinationImageData length]/size.height;
Check to see that it is the right number.
A picture of the output would help diagnose the problem. Also check the console to see if CG is giving you any spew. Did you try vImageCreateCGImageFromBuffer() with kvImagePrintDiagnosticsToConsole to see if it has anything to say?

Using the contents of an array to set individual pixels in a Quartz bitmap context

I have an array that contains the RGB colour values for each pixel in a 320 x 180 display. I would like to be able to set individual pixel values in a bitmap context of the same size offscreen, then display the bitmap context in a view.
It appears that I have to create 1x1 rects and either put a stroke on them or draw a line of length 1 at the point in question. Is that correct? I'm looking for a very efficient way of getting the array data onto the graphics context; as you can imagine, this is going to be an image buffer that cycles at 25 frames per second, and drawing this way seems inefficient.
I guess the other question is: should I use OpenGL ES instead?
Thoughts/best practice would be much appreciated.
Regards
Dave
OK, I have come a short way, but I can't clear the final hurdle and I am not sure why this isn't working:
- (void) displayContentsOfArray1UsingBitmap: (CGContextRef)context
{
long bitmapData[WIDTH * HEIGHT];
// Build bitmap
int i, j, h;
for (i = 0; i < WIDTH; i++)
{
for (j = 0; j < HEIGHT; j++)
{
h = frameBuffer01[i][j];
bitmapData[i * j] = h;
}
}
// Blit the bitmap to the context
CGDataProviderRef providerRef = CGDataProviderCreateWithData(NULL, bitmapData,4 * WIDTH * HEIGHT, NULL);
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGImageRef imageRef = CGImageCreate(WIDTH, HEIGHT, 8, 32, WIDTH * 4, colorSpaceRef, kCGImageAlphaFirst, providerRef, NULL, YES, kCGRenderingIntentDefault);
CGContextDrawImage(context, CGRectMake(0.0, HEIGHT, WIDTH, HEIGHT), imageRef);
CGImageRelease(imageRef);
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(providerRef);
}
Read the documentation for CGImageCreate(). Basically, you have to create a CGDataProvider from your pixel array (using CGDataProviderCreateDirect()), then create a CGImage with this data provider as a source. You can then draw the image into any context. It's a bit tedious to get this right because these functions expect a lot of arguments, but the documentation is quite good.
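In outline, the approach looks like this Swift sketch (my addition, using the simpler data-backed provider rather than CGDataProviderCreateDirect(); all constants are placeholders):
import CoreGraphics
import Foundation

let width = 320, height = 180
// one 32-bit value per pixel; the byte layout must match bitmapInfo below
var pixels = [UInt32](repeating: 0xFF202020, count: width * height)

let data = Data(bytes: &pixels, count: pixels.count * MemoryLayout<UInt32>.size)
if let provider = CGDataProvider(data: data as CFData),
   let image = CGImage(width: width, height: height,
                       bitsPerComponent: 8, bitsPerPixel: 32,
                       bytesPerRow: width * 4,
                       space: CGColorSpaceCreateDeviceRGB(),
                       bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipFirst.rawValue),
                       provider: provider, decode: nil,
                       shouldInterpolate: false, intent: .defaultIntent) {
    print("created \(image.width)x\(image.height) image")  // draw it with context.draw(image, in: rect)
}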
Dave,
The blitting code works fine, but your code to copy from the frame buffer is incorrect.
// Build bitmap
int i, j, h;
for (i = 0; i < WIDTH; i++)
{
for (j = 0; j < HEIGHT; j++)
{
h = frameBuffer01[i][j];
bitmapData[/*step across a line*/i + /*step down a line*/j*WIDTH] = h;
}
}
Note my changes to the assignment to elements of bitmapData.
Not knowing the layout of frameBuffer01, this may still be incorrect, but from your code, this looks closer to the intent.