How to interpret the MS-EMF Header object properties (Bounds, Frame, Device and Millimeters) when rendering?

I'm implementing a tool that renders MS-EMF files to raster images.
The parser, written against the specification, works fine. But when rendering I have a problem interpreting the 2.2.9 Header object properties; there is not enough information in the specification.
To convert from LOGICAL to DEVICE coordinates, I use the current MapMode. How to interpret the map modes (MM_ISOTROPIC and MM_ANISOTROPIC are the especially interesting ones) can be seen in GDI, for example here.
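For reference, a minimal sketch in C of that page-space transform, assuming the window/viewport state carried by the EMR_SETWINDOWORGEX, EMR_SETWINDOWEXTEX, EMR_SETVIEWPORTORGEX and EMR_SETVIEWPORTEXTEX records (under MM_ISOTROPIC, GDI first shrinks one viewport extent so both axes end up with the same scale; under MM_ANISOTROPIC the extents are used as given):
/* GDI's logical-to-device mapping, one axis shown; the Y axis is analogous. */
long LogicalToDeviceX(long xLogical,
                      long windowOrgX, long windowExtX,
                      long viewportOrgX, long viewportExtX)
{
    return (xLogical - windowOrgX) * viewportExtX / windowExtX + viewportOrgX;
}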
Now I'm trying to determine the position and size of the whole image:
var minPoint = new PointF(header.Bounds.Left, header.Bounds.Top);
var maxPoint = new PointF(header.Bounds.Right, header.Bounds.Bottom);
float imageWidth = maxPoint.X - minPoint.X;
float imageHeight = maxPoint.Y - minPoint.Y;
float shiftX = -minPoint.X;
float shiftY = -minPoint.Y;
var globalCanvas = new CanvasClass(options.PageWidth, options.PageHeight);
globalCanvas.RenderTransform = new DrMatrix(1, 0, 0, 1, 0, 0);
float scaleX = options.PageWidth / (maxPoint.X + shiftX);
float scaleY = options.PageHeight / (maxPoint.Y + shiftY);
float minCommonScale = Math.Min(scaleX, scaleY);
if (minCommonScale > Epsilon)
{
    globalCanvas.RenderTransform.Scale(minCommonScale, minCommonScale);
}
globalCanvas.RenderTransform.Translate(shiftX, shiftY);
but I don't understand how to use all of the properties (Bounds, Frame, Device, and Millimeters) together, and the resulting image comes out stretched, scaled incorrectly, or positioned incorrectly.
How should these properties be interpreted?
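For what it's worth, [MS-EMF] 2.2.9 defines the four fields as follows: Bounds is the inclusive bounding rectangle of the drawing in device units; Frame is the same picture in 0.01-millimeter units; Device is the size of the reference device in pixels; and Millimeters is its size in millimeters. Under that reading, Device/Millimeters gives the reference resolution in pixels per millimeter, and Frame converted at that resolution should coincide with Bounds. A sketch in C of a page transform derived this way (all names are illustrative, a sketch under these assumptions rather than a definitive implementation):
/* Hypothetical sketch: fit the Frame rectangle (0.01 mm units) onto the
   output page using the reference device resolution from the header. */
typedef struct { double left, top, right, bottom; } Rect;   /* Frame, 0.01 mm */

void ComputePageTransform(Rect frame,
                          double devicePxW, double devicePxH,  /* Device      */
                          double deviceMmW, double deviceMmH,  /* Millimeters */
                          double pageW, double pageH,          /* output px   */
                          double *scale, double *offsetX, double *offsetY)
{
    /* Reference device resolution in pixels per millimeter. */
    double pxPerMmX = devicePxW / deviceMmW;
    double pxPerMmY = devicePxH / deviceMmH;

    /* Frame converted to reference-device pixels; this should line up with
       the header Bounds (give or take the inclusive bottom-right edge). */
    double imgX = frame.left * 0.01 * pxPerMmX;
    double imgY = frame.top  * 0.01 * pxPerMmY;
    double imgW = (frame.right  - frame.left) * 0.01 * pxPerMmX;
    double imgH = (frame.bottom - frame.top)  * 0.01 * pxPerMmY;

    /* Uniform scale to fit the page, then shift the Frame origin to (0,0). */
    double sx = pageW / imgW;
    double sy = pageH / imgH;
    *scale   = sx < sy ? sx : sy;
    *offsetX = -imgX * (*scale);
    *offsetY = -imgY * (*scale);
}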
Example 1.
emf file
header:
Bounds: (0, 0) - (579, 429)
Frame: (0, 0) - (10000, 10000)
Device: 1855, 1034
Millimeters: 320, 240
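As a sanity check on these numbers: Device and Millimeters give the reference device resolution, 1855 px / 320 mm = about 5.80 px/mm horizontally and 1034 px / 240 mm = about 4.31 px/mm vertically. The Frame of 10000 x 10000 units is 100 x 100 mm, which at that resolution is roughly 580 x 431 reference-device pixels, matching the inclusive Bounds of 580 x 430 pixels. In other words, Bounds and Frame describe the same rectangle in two different units.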
The file contains four records in total:
SelectObject(hDC, (HGDIOBJ)GRAY_BRUSH);
Ellipse(hDC, 0, 0, 99, 99);
SelectObject(hDC, (HGDIOBJ)BLACK_BRUSH);
Ellipse(hDC, 480, 330, 579, 429);
result:
but we should see this instead (ex1-ethalon):
Interestingly, viewers render this file differently from the reference image, except the standard Windows viewer:
Example 2.
emf file
header:
Bounds: (960, 210) - (3396, 2429)
Frame: (6772, 1481) - (23969, 17143)
Device: 2892, 4125
Millimeters: 204, 291
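The same arithmetic checks out here: 2892 px / 204 mm and 4125 px / 291 mm both give about 14.18 px/mm. The Frame origin of (6772, 1481), i.e. 67.72 mm by 14.81 mm, converts to (960, 210) pixels, exactly the Bounds origin, and the Frame size of 171.97 mm by 156.62 mm converts to about 2437 x 2220 pixels, the inclusive Bounds size. That non-zero origin is what should produce the offset of the image on the page.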
result (rendering is still incomplete):
but we should see this reference image (note the position of the image):

Related

CARenderer draws nothing into bound texture

I'm trying to set up a CARenderer to draw into an mtlTexture, but all my attempts to get this working in a playground draw nothing.
The resulting image is solid red; the yellow layer doesn't seem to be rendered at all.
Here's the simplest version of what I've tried:
import QuartzCore
import Metal
let layerTest = CATextLayer()
layerTest.frame = .init(origin: .zero, size: .init(width: 1920, height: 1080))
layerTest.string = "TEST"
layerTest.foregroundColor = .black
layerTest.backgroundColor = CGColor(red: 1.0, green: 1.0, blue: 0.0, alpha: 1.0)
layerTest.position = CGPoint(x:0.0, y:0.0)
layerTest.anchorPoint = CGPoint(x:0.0, y:0.0)
layerTest.masksToBounds = true
let device = MTLCreateSystemDefaultDevice()!
let context = CIContext(mtlDevice: device)
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm, width: 1920, height: 1080, mipmapped: false)
textureDescriptor.usage = [.unknown]
textureDescriptor.storageMode = .private
let bytes = UnsafeMutablePointer<UInt8>.allocate(capacity: 1920 * 1080 * 4)
//fill the buffer with red
let pattern = UnsafeMutablePointer<UInt8>.allocate(capacity: 4)
(pattern + 0).initialize(to: 255)
(pattern + 1).initialize(to: 0)
(pattern + 2).initialize(to: 0)
(pattern + 3).initialize(to: 255)
memset_pattern4(bytes, pattern, 1920 * 1080 * 4)
let mtlBuffer = device.makeBuffer(bytes: bytes, length: 1920 * 1080 * 4)!
let mtlTexture = mtlBuffer.makeTexture(descriptor: textureDescriptor, offset: 0, bytesPerRow: 1920 * 4)!
let render = CARenderer(mtlTexture: mtlTexture)
render.bounds = layerTest.frame
render.layer = layerTest
render.setDestination(mtlTexture)
render.beginFrame(atTime: CACurrentMediaTime(), timeStamp: nil)
render.addUpdate(render.bounds)
render.render()
render.endFrame()
let ciImage = CIImage(mtlTexture: mtlTexture)!
let cgImage: CGImage = context.createCGImage(ciImage, from: ciImage.extent)! //<- this is just red frame
I submitted this to Apple DTS and got a reply:
Setting the root layer of the CARenderer requires one implicit CATransaction to transfer ownership of the layer tree to the CARenderer’s context. CARenderer frame render methods will not work correctly until this ownership transfer is complete.
To complete the current transaction, we call CATransaction.flush() and then commit():
render.layer = layerTest
CATransaction.flush()
CATransaction.commit()
Adding these two lines resolved the issue for me.

How do I get the frame of visible content from SKCropNode?

It appears that, in SpriteKit, when I use a mask in an SKCropNode to hide some content, the mask fails to change the frame calculated by calculateAccumulatedFrame. I'm wondering if there's any way to calculate the visible frame.
A quick example:
import SpriteKit
let par = SKCropNode()
let bigShape = SKShapeNode(rect: CGRect(x: 0, y: 0, width: 100, height: 100))
bigShape.fillColor = UIColor.redColor()
bigShape.strokeColor = UIColor.clearColor()
par.addChild(bigShape)
let smallShape = SKShapeNode(rect: CGRect(x: 0, y: 0, width: 20, height: 20))
smallShape.fillColor = UIColor.greenColor()
smallShape.strokeColor = UIColor.clearColor()
par.maskNode = smallShape
par.calculateAccumulatedFrame() // returns (x=0, y=0, width=100, height=100)
I expected par.calculateAccumulatedFrame() to return (x=0, y=0, width=20, height=20) based on the crop node mask.
I thought maybe I could write the function myself as an extension that reimplements calculateAccumulatedFrame with support for SKCropNodes and their masks, but it occurred to me that I would need to consider the alpha of the mask to determine whether there's actual content that grows the frame. Sounds difficult.
Is there an easy way to calculate this?
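One first-order approximation, which deliberately ignores the transparent-pixels problem mentioned above, is to intersect the children's accumulated frame with the mask's accumulated frame. A minimal sketch using the CoreGraphics C API (illustrative only; both inputs would come from calculateAccumulatedFrame):
#include <CoreGraphics/CoreGraphics.h>

/* Approximate visible frame of a crop node: the intersection of what the
   children cover and what the mask covers. Ignores fully transparent
   regions inside the mask's bounding box. */
CGRect approximateVisibleFrame(CGRect childrenFrame, CGRect maskFrame)
{
    return CGRectIntersection(childrenFrame, maskFrame);
}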

UIImageEffects: white image when Gaussian radius above 280, vImageBoxConvolve_ARGB8888 issue?

I'm using the Gaussian blur algorithm found in Apple's UIImageEffects example:
CGFloat inputRadius = blurRadius * inputImageScale;
if (inputRadius - 2. < __FLT_EPSILON__)
    inputRadius = 2.;
uint32_t radius = floor((inputRadius * 3. * sqrt(2 * M_PI) / 4 + 0.5) / 2);
radius |= 1; // force radius to be odd so that the three box-blur methodology works.
NSInteger tempBufferSize = vImageBoxConvolve_ARGB8888(inputBuffer, outputBuffer, NULL, 0, 0, radius, radius, NULL, kvImageGetTempBufferSize | kvImageEdgeExtend);
void *tempBuffer = malloc(tempBufferSize);
vImageBoxConvolve_ARGB8888(inputBuffer, outputBuffer, tempBuffer, 0, 0, radius, radius, NULL, kvImageEdgeExtend);
vImageBoxConvolve_ARGB8888(outputBuffer, inputBuffer, tempBuffer, 0, 0, radius, radius, NULL, kvImageEdgeExtend);
vImageBoxConvolve_ARGB8888(inputBuffer, outputBuffer, tempBuffer, 0, 0, radius, radius, NULL, kvImageEdgeExtend);
free(tempBuffer);
vImage_Buffer *temp = inputBuffer;
inputBuffer = outputBuffer;
outputBuffer = temp;
I'm also working with some fairly large images. Unfortunately, when the radius gets over 280, the blurred image suddenly becomes almost completely blank, regardless of the resolution. What's going on here? Does vImageBoxConvolve_ARGB8888 have an undocumented kernel width/height limit? Or does it have to do with the way the box kernel width is computed from the radius?
EDIT:
Found a similar question here: vImageBoxConvolve: errors when kernel size > 255. A Gaussian radius of 280 roughly translates to a 260 size kernel, so that part matches up.
The box and tent convolves can run into a problem where the accumulated value overflows the 31-bit accumulator. However, 255 seems a bit narrow for that; there should be at least another 7 bits of headroom for a 255x255 kernel. Certainly check the error code returned by the function. If it says everything is fine, then this seems worth filing as a bug; attach some sample code to help Apple reproduce the problem and ensure it gets fixed.
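Per that suggestion, a minimal first step is to inspect the return value, which the snippet above discards; a sketch dropped into the code above (kvImageNoError is 0):
vImage_Error err = vImageBoxConvolve_ARGB8888(inputBuffer, outputBuffer,
                                              tempBuffer, 0, 0,
                                              radius, radius, NULL,
                                              kvImageEdgeExtend);
if (err != kvImageNoError) {
    /* e.g. kvImageInvalidKernelSize if the radius is rejected */
    fprintf(stderr, "vImageBoxConvolve_ARGB8888 failed: %ld\n", (long)err);
}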

How to handle the orthographic projection when auto-rotating screen?

I have this method for performing the ortho projection:
void myGL::ApplyOrtho(float maxX, float maxY) const
{
    float a = 1.0f / maxX;
    float b = 1.0f / maxY;
    float ortho[16] = {
        a, 0, 0, 0,
        0, b, 0, 0,
        0, 0, -1, 0,
        0, 0, 0, 1};
    GLint projectionUniform = glGetUniformLocation(m_simpleProgram, "Projection");
    glUniformMatrix4fv(projectionUniform, 1, 0, &ortho[0]);
}
It works fine for iPad screen when I do this:
ApplyOrtho(2, 2*1024/768);
Here's my rendered image:
However, when I rotate to landscape, it looks like this:
My assumption is that this happens because ApplyOrtho sets a fixed projection that does not change when the device rotates, so the image rotates within that projection and gets displayed fatter.
Incidentally, this is the rotation:
void myGL::ApplyRotation(float degrees) const
{
    float radians = degrees * 3.14159f / 180.0f;
    float s = std::sin(radians);
    float c = std::cos(radians);
    float zRotation[16] = {
        c, s, 0, 0,
        -s, c, 0, 0,
        0, 0, 1, 0,
        0, 0, 0, 1
    };
    GLint modelviewUniform = glGetUniformLocation(m_simpleProgram, "Modelview");
    glUniformMatrix4fv(modelviewUniform, 1, 0, &zRotation[0]);
}
It is used right before drawing.
So I experimented and tried this at the same time I rotate:
ApplyOrtho(2*1024/768, 2);
However this has no effect whatsoever, even though the rotation is definitely happening at the same time. My image remains "fat".
Is my interpretation of why the fatness is happening correct?
How to handle the orthographic projection when auto-rotating screen?
UPDATE: I also tried this on an iPhone, using the 2:3 dimensions of the screen (not iPhone 5) with ApplyOrtho(2, 3) and ApplyOrtho(3, 2), but the "fat" triangle in landscape remains.
Also: the viewport is setup just once, before the first Ortho:
glViewport(0, 0, width, height);
Where width and height are the dimensions of the Portrait screen.
The cause of the above discrepancies is that the orthographic projection did not match the width-to-height ratio of the screen, so the X and Y units covered different amounts of screen space. Making the orthographic ratio match the viewport ratio resolves the issue; as a result, the image keeps exactly the same shape and size when the screen rotates.
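A sketch of that fix, recomputing the extents from the viewport whenever the screen rotates (names are illustrative; assumes the ApplyOrtho above is reachable). Note, incidentally, that 2*1024/768 is integer division in C++ and evaluates to 2, which would also explain why swapping the arguments appeared to have no effect:
void OnScreenRotated(int viewportWidth, int viewportHeight)
{
    /* Keep the viewport and the projection's aspect ratio in sync. */
    glViewport(0, 0, viewportWidth, viewportHeight);
    float aspect = (float)viewportWidth / (float)viewportHeight;
    if (aspect >= 1.0f)
        ApplyOrtho(2.0f * aspect, 2.0f);   /* landscape: widen X */
    else
        ApplyOrtho(2.0f, 2.0f / aspect);   /* portrait: widen Y  */
}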

How do I use the scanCrop property of a ZBar reader?

I am using the ZBar SDK for iPhone in order to scan a barcode. I want the reader to scan only a specific rectangle instead of the whole view; to do that, the scanCrop property of the reader must be set to the desired rectangle.
I'm having a hard time understanding the rectangle parameter that has to be set.
Can someone please tell me what rect I should pass as the argument if, in portrait view, its coordinates would be CGRectMake(A, B, C, D)?
From ZBar's ZBarReaderView class documentation:
CGRect scanCrop
The region of the video image that will be scanned, in normalized image coordinates. Note that the video image is in landscape mode (default {{0, 0}, {1, 1}})
All of the coordinates are normalized floats ranging from 0 to 1: in normalized values, theView.width is 1.0 and theView.height is 1.0, hence the default rect of {{0,0},{1,1}}.
So, for example, say I have a transparent UIView named scanView that serves as the scanning region for my readerView. Rather than doing:
readerView.scanCrop = scanView.frame;
we should normalize each argument first:
CGFloat x,y,width,height;
x = scanView.frame.origin.x / readerView.bounds.size.width;
y = scanView.frame.origin.y / readerView.bounds.size.height;
width = scanView.frame.size.width / readerView.bounds.size.width;
height = scanView.frame.size.height / readerView.bounds.size.height;
readerView.scanCrop = CGRectMake(x, y, width, height);
It works for me. Hope that helps.
You can set the scan crop area by doing this:
reader.scanCrop = CGRectMake(x, y, width, height);
For example:
reader.scanCrop = CGRectMake(0.25, 0.25, 0.5, 0.45);
I used this and it's working for me.
This is the right way to adjust the crop area; I had wasted tons of time on it:
readerView.scanCrop = [self getScanCrop:cropRect readerViewBounds:contentView.bounds];

- (CGRect)getScanCrop:(CGRect)rect readerViewBounds:(CGRect)rvBounds
{
    // The video image is in landscape, so the portrait view coordinates
    // have to be swapped and flipped when normalizing.
    CGFloat x = rect.origin.y / rvBounds.size.height;
    CGFloat y = 1 - (rect.origin.x + rect.size.width) / rvBounds.size.width;
    CGFloat width = rect.size.height / rvBounds.size.height;
    CGFloat height = rect.size.width / rvBounds.size.width;
    return CGRectMake(x, y, width, height);
}