iOS Core Graphics: how to optimize incremental drawing of a very large image?

I have an app written with RxSwift which processes 500+ days of HealthKit data to draw a chart for the user.
The chart image is drawn incrementally using the code below. Starting from a black canvas, the previous image is drawn into the graphics context, then a new segment is drawn over it at a certain offset. The combined image is saved and the process repeats, around 70+ times in total. The image is saved on every pass so the user sees the chart update. The result is a single chart image which the user can export from the app.
Even with an autorelease pool, I see spikes of memory usage up to 1 GB, which prevents me from doing other resource-intensive processing.
How can I optimize incremental drawing of a very large (1440 × 5000 pixel) image?
When the image is displayed or saved at 3x scale, it is actually 4320 × 15000 pixels.
Is there a better way than repeatedly drawing over an image?
autoreleasepool {
    // activeEnergyCanvas is a custom data-processing class
    let newActiveEnergySegment = activeEnergyCanvas.draw(in: CGRect(x: 0, y: 0, width: 1440, height: days * 10),
                                                         with: energyPalette)
    let size = CGSize(width: 1440, height: height)
    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)

    // draw the existing image
    self.activeEnergyImage.draw(in: CGRect(origin: .zero, size: size))

    // calculate where to draw the smaller image over the larger one
    let offsetRect = CGRect(origin: CGPoint(x: 0, y: offset * 10),
                            size: newActiveEnergySegment.size)
    newActiveEnergySegment.draw(in: offsetRect)

    // get the combined image
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    // assign the combined image to be displayed
    if let unwrappedImage = newImage {
        self.activeEnergyImage = unwrappedImage
    }
}

It turns out my mistake was passing an invalid drawing scale (0.0) when creating the graphics context, which defaults to the device's native screen scale.
On a 3x device (e.g. the iPhone 8 Plus) that scale is 3.0, so drawing, zooming, and exporting these images needs extreme amounts of memory. Even though all debug logging prints that the image is 1440 pixels wide, the actual canvas ends up being 1440 * 3.0 = 4320 pixels wide.
Passing 1.0 as the drawing scale makes the image fuzzier, but reduces memory usage to less than 200 MB.
// Note: UIGraphicsBeginImageContext() also draws at the device's native @3x scale,
// even when all display-size printouts show the 1x dimensions.
let drawingScale: CGFloat = 1.0
UIGraphicsBeginImageContextWithOptions(size, true, drawingScale)
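Beyond the scale fix, a further optimization is to stop re-compositing the whole chart on every pass. Below is a minimal sketch (the IncrementalChartRenderer class and its method names are hypothetical, not from the code above) that keeps one long-lived CGContext for the full chart and draws each segment into it at its offset, so a full-size image is only allocated when a snapshot is taken:

import UIKit

final class IncrementalChartRenderer {
    private let context: CGContext

    init?(width: Int, height: Int) {
        // One backing store for the whole chart, allocated once at 1x scale.
        guard let ctx = CGContext(data: nil,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 0,  // let Core Graphics choose
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return nil }
        self.context = ctx
    }

    // Draw one segment at a vertical offset; no full-image copy happens here.
    func append(segment: UIImage, atY offsetY: Int) {
        guard let cgSegment = segment.cgImage else { return }
        // CGContext's y-axis is flipped relative to UIKit, so convert the offset.
        let y = context.height - offsetY - cgSegment.height
        context.draw(cgSegment, in: CGRect(x: 0, y: y,
                                           width: cgSegment.width,
                                           height: cgSegment.height))
    }

    // Snapshot for display or export; the only point where a full image is made.
    func currentImage() -> UIImage? {
        guard let cgImage = context.makeImage() else { return nil }
        return UIImage(cgImage: cgImage)
    }
}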

Related

How to map an HDR file image onto a cubemap with Vulkan?

Currently, based on Sascha Willems' examples, I've created a samplerCube texture for the fragment shader.
It has the same JPG image copied to all 6 layers (faces).
I use the stb_image library for image loading; it works fine for a regular 2D texture, but when mapped onto a cube mesh it produces a distorted image:
int width = 0, height = 0, channel = 0;
float* pixels = stbi_loadf("textures/test.hdr", &width, &height, &channel, STBI_rgb_alpha);
if (!pixels) throw std::runtime_error("failed to load texture image!");

this->texture_image.create_image(width, height, VK_FORMAT_R32G32B32A32_SFLOAT,
                                 VK_IMAGE_USAGE_TRANSFER_DST_BIT | VK_IMAGE_USAGE_SAMPLED_BIT);
this->texture_image.fill_memory(width, height, 4 * sizeof(float), pixels);
this->texture_image.create_image_view(VK_FORMAT_R32G32B32A32_SFLOAT, VK_IMAGE_ASPECT_COLOR_BIT);
stbi_image_free(pixels);
I found how to do it here: https://learnopengl.com/PBR/IBL/Diffuse-irradiance
Even though the tutorial uses OpenGL, the concept is the same: instead of copying the equirectangular HDR image verbatim to each face, render it onto the six faces of the cubemap.
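For reference, the core of that conversion is mapping each cube-face direction to UV coordinates in the equirectangular image (in the tutorial this runs in a fragment shader while rendering each face). A small C++ sketch of that lookup, with illustrative Vec2/Vec3 stand-in types:

#include <cmath>

struct Vec2 { float u, v; };
struct Vec3 { float x, y, z; };

// Map a normalized direction vector to [0,1] UVs in the equirectangular map:
// atan2 gives the longitude, asin the latitude.
Vec2 sampleSphericalMap(const Vec3& dir) {
    const float pi = 3.14159265f;
    Vec2 uv;
    uv.u = std::atan2(dir.z, dir.x) / (2.0f * pi) + 0.5f;
    uv.v = std::asin(dir.y) / pi + 0.5f;
    return uv;
}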

Creating a transparency group or setting graphics state soft mask with PDFBox

I have a grayscale image that serves as a soft mask and I want to use it on a group of PDF objects (images or paths).
The mask and the objects do not necessarily use the same transformation matrix, and there might be more than one object to mask, so that excludes the possibility of using the SMask attribute of the ImageXObject dictionary.
So after reading some of the PDF specification, it looks like I should do the following: create a transparency group with the objects to mask, then draw it with the soft mask set on the graphics state.
Will that work? How can I achieve this, preferably with PDFBox?
Here's an example. I have these two images: the mask and another image.
The mask image is 200x200. It is drawn with the matrix [[4 0 100] [0 4 100]].
The image is 400x300. It is drawn with the matrix [[2 0 100] [0 2 150]].
Additionally, a 400x400 black square is drawn below the image with no transform matrix.
So a transparency group is created with the image and the square, then it's drawn with the mask image. Here's the expected result:
Rather ugly as far as the effect goes, but that's just an example.
As far as I can see, establishing an extended graphics state soft mask is a fairly manual task in PDFBox. You can do it as follows:
try ( PDDocument document = new PDDocument() ) {
    final PDImageXObject image = RETRIEVE PHOTO IMAGE;
    final PDImageXObject mask = RETRIEVE MASK IMAGE;

    PDTransparencyGroupAttributes transparencyGroupAttributes = new PDTransparencyGroupAttributes();
    transparencyGroupAttributes.getCOSObject().setItem(COSName.CS, COSName.DEVICEGRAY);

    PDTransparencyGroup transparencyGroup = new PDTransparencyGroup(document);
    transparencyGroup.setBBox(PDRectangle.A4);
    transparencyGroup.setResources(new PDResources());
    transparencyGroup.getCOSObject().setItem(COSName.GROUP, transparencyGroupAttributes);
    try ( PDFormContentStream canvas = new PDFormContentStream(transparencyGroup) ) {
        canvas.drawImage(mask, new Matrix(400, 0, 0, 400, 100, 100));
    }

    COSDictionary softMaskDictionary = new COSDictionary();
    softMaskDictionary.setItem(COSName.S, COSName.LUMINOSITY);
    softMaskDictionary.setItem(COSName.G, transparencyGroup);

    PDExtendedGraphicsState extendedGraphicsState = new PDExtendedGraphicsState();
    extendedGraphicsState.getCOSObject().setItem(COSName.SMASK, softMaskDictionary);

    PDPage page = new PDPage(PDRectangle.A4);
    document.addPage(page);
    try ( PDPageContentStream canvas = new PDPageContentStream(document, page) ) {
        canvas.saveGraphicsState();
        canvas.setGraphicsStateParameters(extendedGraphicsState);
        canvas.setNonStrokingColor(Color.BLACK);
        canvas.addRect(100, 100, 400, 400);
        canvas.fill();
        canvas.drawImage(image, new Matrix(400, 0, 0, 300, 100, 150));
        canvas.restoreGraphicsState();
    }

    document.save(new File(RESULT_FOLDER, "SoftMaskedImageAndRectangle.pdf"));
}
The result:
If I were you, though, I would not use a bitmap image for the soft mask but a PDF gradient instead. The result will most likely be much less pixelated.
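As a hedged sketch of that suggestion (PDFBox 2.0.x class names; the coordinates, the interpolation exponent, and the availability of shadingFill on PDFormContentStream are assumptions here, not taken from the code above), the luminosity group could be filled with a Type 2 (axial) shading in DeviceGray instead of the mask image:

// Axial gradient in DeviceGray: black (fully masked) to white (fully visible).
PDShadingType2 gradient = new PDShadingType2(new COSDictionary());
gradient.setShadingType(PDShading.SHADING_TYPE2);
gradient.setColorSpace(PDDeviceGray.INSTANCE);

COSArray coords = new COSArray();
coords.add(new COSFloat(100));  // x0, y0: gradient start (illustrative values)
coords.add(new COSFloat(100));
coords.add(new COSFloat(500));  // x1, y1: gradient end (illustrative values)
coords.add(new COSFloat(500));
gradient.setCoords(coords);

// Type 2 (exponential interpolation) function from gray 0.0 to gray 1.0.
COSDictionary functionDictionary = new COSDictionary();
COSArray domain = new COSArray();
domain.add(COSInteger.ZERO);
domain.add(COSInteger.ONE);
functionDictionary.setItem(COSName.DOMAIN, domain);
COSArray c0 = new COSArray();
c0.add(COSInteger.ZERO);
COSArray c1 = new COSArray();
c1.add(COSInteger.ONE);
functionDictionary.setItem(COSName.C0, c0);
functionDictionary.setItem(COSName.C1, c1);
functionDictionary.setItem(COSName.N, COSInteger.ONE);
gradient.setFunction(new PDFunctionType2(functionDictionary));

// Paint the gradient into the luminosity group instead of drawing the mask image.
try ( PDFormContentStream canvas = new PDFormContentStream(transparencyGroup) ) {
    canvas.shadingFill(gradient);
}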

CGImageCreate with CGColorSpaceCreateDeviceGray on iOS 12

I was using CGImageCreate with CGColorSpaceCreateDeviceGray to convert a buffer (CVPixelBufferRef) to a grayscale image. It was very fast and worked well until iOS 12... now the returned image is empty.
The code looks like this:
bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst;

CGDataProviderRef provider = CGDataProviderCreateWithData((void *)i_PixelBuffer,
                                                          sourceBaseAddr,
                                                          sourceRowBytes * height,
                                                          ReleaseCVPixelBuffer);
retImage = CGImageCreate(width,
                         height,
                         8,              // bits per component
                         32,             // bits per pixel
                         sourceRowBytes,
                         CGColorSpaceCreateDeviceGray(),
                         bitmapInfo,
                         provider,
                         NULL,
                         true,
                         kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
Is this a known bug in iOS 12? If device gray is no longer supported in this function, can you suggest another way to do it?
Note that the conversion should take less than 0.1 seconds for a 4K image.
Thanks in advance!
According to the list of Supported Pixel Formats in the Quartz 2D Programming Guide, iOS doesn't support 32 bits per pixel with gray color spaces. And even on macOS, 32 bpp gray requires the use of kCGBitmapFloatComponents (and float data).
Is your data really 32 bpp? If so, is it float? What are you using for bitmapInfo?
I would not expect CGImageCreate() to "convert" a buffer, to grayscale or anything else. The parameters you're supplying tell it how to interpret the existing data. If you're not using floating-point components, I suspect it was just taking one of the color channels, interpreting it as the gray level, and ignoring the other components. So it wasn't a proper grayscale conversion.
Apple's advice is to create an image that properly represents the source data; create a bitmap context with the color space, pixel layout, and bitmap info you desire; draw the former into the latter; and create the final image from the context.
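As a hedged illustration of that advice (srcImage, width, and height are assumed stand-ins; the original buffer handling is not shown), the conversion could look like this:

// Draw the source image into a DeviceGray bitmap context, then take the
// grayscale result from the context.
CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
CGContextRef grayContext = CGBitmapContextCreate(NULL,   // let CG allocate the buffer
                                                 width,
                                                 height,
                                                 8,       // bits per component
                                                 0,       // let CG pick bytes per row
                                                 graySpace,
                                                 (CGBitmapInfo)kCGImageAlphaNone);
CGContextDrawImage(grayContext, CGRectMake(0, 0, width, height), srcImage);
CGImageRef grayImage = CGBitmapContextCreateImage(grayContext);
CGContextRelease(grayContext);
CGColorSpaceRelease(graySpace);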
I finally found a workaround for my purpose. Note that the CVPixelBuffer is coming from the video camera.
1. Change the camera output pixel format to kCVPixelFormatType_420YpCbCr8BiPlanarFullRange (on the AVCaptureVideoDataOutput)
2. Extract the Y (luma) plane from the YpCbCr buffer
3. Build a CGImage from the Y plane
Code:
// some code
colorSpace = CGColorSpaceCreateDeviceGray();
sourceRowBytes = CVPixelBufferGetBytesPerRowOfPlane(i_PixelBuffer, 0);
sourceBaseAddr = (unsigned char *)CVPixelBufferGetBaseAddressOfPlane(i_PixelBuffer, 0);
bitmapInfo = kCGImageByteOrderDefault;
// some code
CGContextRef context = CGBitmapContextCreate(sourceBaseAddr,
                                             width,
                                             height,
                                             8,
                                             sourceRowBytes,
                                             colorSpace,
                                             bitmapInfo);
retImage = CGBitmapContextCreateImage(context);
// some code
You can also look at this related post:
420YpCbCr8BiPlanarVideoRange To YUV420 ?/How to copy Y and Cbcr plane to Single plane?

Resizing an image using Graphics.DrawImage results in an improperly resized image

I want to resize an image to a 4x4 pixel image in VB.NET.
From the Internet I got this code:
Public Function ResizeImage(ByVal image As Image) As Image
    Try
        Dim newWidth = 4
        Dim newHeight = 4
        Dim newImage As New Bitmap(newWidth, newHeight)
        newImage.SetResolution(100, 100)
        Using graphicsHandle As Graphics = Graphics.FromImage(newImage)
            graphicsHandle.InterpolationMode = InterpolationMode.HighQualityBicubic
            graphicsHandle.DrawImage(image, 0, 0, newWidth, newHeight)
        End Using
        Return newImage
    Catch ex As Exception
        Return image
    End Try
End Function
Original:
Resized with Photoshop (the reference result):
Using InterpolationMode.Bilinear:
Using InterpolationMode.HighQualityBicubic:
What is the problem?
The settings you'll want are bilinear or bicubic interpolation (avoid the "high-quality" options) and PixelOffsetMode.Half.
graphicsHandle.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.Bilinear
graphicsHandle.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.Half
When interpolating, GDI+ normally offsets the center of the pixel by a half pixel. This can have an undesirable effect when scaling, with the appearance of shifting the scaled image up and left. Using PixelOffsetMode.Half shifts the pixels back where they "belong".
The high-quality bilinear and bicubic interpolation modes appear to blend the edge pixels with hypothetical transparent pixels beyond the image bounds, creating a fringe of semi-transparency at the edges.
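Putting both settings together, a minimal revision of the ResizeImage function from the question might look like this:

Public Function ResizeImageFixed(ByVal image As Image) As Image
    ' Same 4x4 target as the question, with bilinear interpolation and
    ' PixelOffsetMode.Half applied as suggested above.
    Dim newImage As New Bitmap(4, 4)
    Using graphicsHandle As Graphics = Graphics.FromImage(newImage)
        graphicsHandle.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.Bilinear
        graphicsHandle.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.Half
        graphicsHandle.DrawImage(image, 0, 0, 4, 4)
    End Using
    Return newImage
End Function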

How do I use the scanCrop property of a ZBar reader?

I am using the ZBar SDK for iPhone in order to scan a barcode. I want the reader to scan only a specific rectangle instead of the whole view; doing that requires setting the scanCrop property of the reader to the desired rectangle.
I'm having a hard time understanding the rectangle parameter that has to be set.
Can someone please tell me what rect I should give as an argument if, in portrait view, its coordinates would be CGRectMake(A, B, C, D)?
From ZBar's ZBarReaderView class documentation:
CGRect scanCrop
The region of the video image that will be scanned, in normalized image coordinates. Note that the video image is in landscape mode (default {{0, 0}, {1, 1}})
All of the arguments are normalized floats in the range 0-1. In normalized values, theView.width is 1.0 and theView.height is 1.0; hence the default rect of {{0,0},{1,1}} covers the whole image.
So, for example, if I have a transparent UIView named scanView as the scanning region for my readerView, rather than doing:
readerView.scanCrop = scanView.frame;
we should normalize every argument first:
CGFloat x,y,width,height;
x = scanView.frame.origin.x / readerView.bounds.size.width;
y = scanView.frame.origin.y / readerView.bounds.size.height;
width = scanView.frame.size.width / readerView.bounds.size.width;
height = scanView.frame.size.height / readerView.bounds.size.height;
readerView.scanCrop = CGRectMake(x, y, width, height);
It works for me. Hope that helps.
You can set the scan crop area like this:
reader.scanCrop = CGRectMake(x, y, width, height);
For example:
reader.scanCrop = CGRectMake(0.25, 0.25, 0.5, 0.45);
I used this and it works for me.
Come on, this is the right way to adjust the crop area; I had wasted tons of time on it:
readerView.scanCrop = [self getScanCrop:cropRect readerViewBounds:contentView.bounds];

- (CGRect)getScanCrop:(CGRect)rect readerViewBounds:(CGRect)rvBounds
{
    // The video image is in landscape, so the portrait rect's axes must be
    // swapped and its origin re-expressed in the landscape frame.
    CGFloat x, y, width, height;
    x = rect.origin.y / rvBounds.size.height;
    y = 1 - (rect.origin.x + rect.size.width) / rvBounds.size.width;
    width = rect.size.height / rvBounds.size.height;
    height = rect.size.width / rvBounds.size.width;
    return CGRectMake(x, y, width, height);
}