Using Metal to snapshot SCNRenderer produces darker output image

I'm using a Metal render pass to snapshot my SceneKit scene via an SCNRenderer. This method is faster than the UIImage-producing SCNRenderer.snapshot(), but the output of the two methods differs: mine produces a darker image. I thought this could be due to either a color-space difference or an alpha issue.
In a side-by-side comparison of the two snapshots, my custom method's output is the one whose color looks wrong.
The color space reported by the UIImage is the same for both the standard method's result and my own (kCGColorSpaceModelRGB; sRGB IEC61966-2.1), so I don't think that is the issue.
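For reference, this is the kind of check I mean; a minimal sketch, where standardSnapshot and customSnapshot are placeholders for the two UIImages:
// Compare the backing CGImage color spaces of the two snapshots.
print(standardSnapshot.cgImage?.colorSpace as Any)   // sRGB IEC61966-2.1
print(customSnapshot.cgImage?.colorSpace as Any)     // sRGB IEC61966-2.1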
I'll share elements of the custom render code that I believe are relevant.
I configure the MTLRenderPassDescriptor as follows:
renderPassDescriptor.colorAttachments[0].loadAction = MTLLoadAction.clear
renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0, 0, 0, 0)
renderPassDescriptor.colorAttachments[0].storeAction = MTLStoreAction.store
I then create a texture to render into. I create a CGContext with:
bitsPerComponent: 8
bitsPerPixel: 32
colorSpace: CGColorSpaceCreateDeviceRGB()
bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.noneSkipFirst.rawValue
fillColor: UIColor.clear.cgColor
This is an area I'm concerned about. I've tried other color spaces, CGBitmapInfo and CGImageAlphaInfo flags, and other fill colors. The fill color does have an effect on the output, but I do need transparency, so clear does feel correct.
I create the texture's descriptor with MTLTextureDescriptor.texture2DDescriptor, using .rgba8Unorm as the pixel format and a usage of [.renderTarget, .shaderRead].
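Spelled out, that looks roughly like the following (a sketch assuming a non-mipmapped texture; device, width, and height are placeholders):
// Build the render-target texture; .rgba8Unorm is the format under suspicion.
let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm,
                                                          width: width,
                                                          height: height,
                                                          mipmapped: false)
descriptor.usage = [.renderTarget, .shaderRead]
let texture = device.makeTexture(descriptor: descriptor)!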
I then go on to hand my texture to the render pass descriptor and run a render command.
renderPassDescriptor.colorAttachments[0].texture = texture
let commandBuffer = commandQueue.makeCommandBuffer()!
renderer.render(atTime: time, viewport: viewport,
                commandBuffer: commandBuffer, passDescriptor: renderPassDescriptor)
commandBuffer.commit()
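One note on synchronization: the texture only holds the finished render once the command buffer has completed, so before any CPU readback like the one below, a blocking wait (or a completion handler) is needed. A minimal sketch:
// Block until the GPU has finished writing into the texture.
commandBuffer.waitUntilCompleted()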
In my normal pipeline, I go on here to create a CVPixelBuffer, but I introduced the creation of a CGImage to be able to more easily preview the image in the Xcode debugger. I do this using the following:
var data = [UInt8](repeating: 0, count: 4 * mtlTexture.width * mtlTexture.height)
mtlTexture.getBytes(&data,
                    bytesPerRow: 4 * mtlTexture.width,
                    from: MTLRegionMake2D(0, 0, mtlTexture.width, mtlTexture.height),
                    mipmapLevel: 0)
let bitmapInfo = CGBitmapInfo(rawValue: CGBitmapInfo.byteOrder32Big.rawValue | CGImageAlphaInfo.premultipliedLast.rawValue)
let colorSpace = CGColorSpaceCreateDeviceRGB()
let context = CGContext(data: &data,
                        width: mtlTexture.width,
                        height: mtlTexture.height,
                        bitsPerComponent: 8,
                        bytesPerRow: 4 * mtlTexture.width,
                        space: colorSpace,
                        bitmapInfo: bitmapInfo.rawValue)
return context?.makeImage()
And this CGImage (or the CVPixelBuffer) is where I first observe the darkened image. So I believe either the initial Metal render pass creates the color disparity, or I'm performing the same wrong conversion into every format I use.
An issue that is perhaps related can be found here:
https://github.com/MetalPetal/MetalPetal/issues/76
That issue seems to involve a render view, and I don't use an SCNView or anything called a renderView; I have an SCNRenderer, and I turn snapshots into images to write to video buffers, but the color issue presents itself earlier than those steps. The post does mention that the render view should use the format bgra8Unorm_srgb, so I wonder whether that should be introduced somewhere in my pipeline, but I can't work out where it belongs. Changing the pixelFormat from rgba8Unorm to bgra8Unorm_srgb in my MTLTextureDescriptor doesn't seem to make any difference.
Does this effect look familiar to anyone, or can anyone shed light on this?

It should work if you choose CGColorSpaceCreateWithName(kCGColorSpaceSRGB) for the bitmap context and MTLPixelFormatRGBA8Unorm_sRGB for the texture format. With a plain .rgba8Unorm render target, SceneKit's output is stored without sRGB gamma encoding, which is why the snapshot comes out darker; an sRGB pixel format makes the hardware apply that encoding when writing to the texture.
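In Swift terms, a minimal sketch of that combination (width and height are placeholders):
// Render target: an sRGB format, so linear shading results are gamma-encoded on store.
let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Unorm_srgb,
                                                          width: width,
                                                          height: height,
                                                          mipmapped: false)
descriptor.usage = [.renderTarget, .shaderRead]

// CPU side: an explicitly sRGB color space for the bitmap context / CGImage.
let srgb = CGColorSpace(name: CGColorSpace.sRGB)!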

Related

Is it possible to use iOS Charts (Daniel Gindi) to generate graphs in pdf document on iOS

I am using iOS Charts (Daniel Gindi) to generate graphs in an iOS app, and I want to be able to generate a PDF report with those graphs included in the body of the report. Can anyone explain how to go about doing this? Ideally I don't want to generate an image from the UIView shown in the app, because the size/resolution would not be suitable for the PDF document.
As I understand it there are a few options:
use the graphics context for the pdf document to draw the graph on - it's not clear whether this would be possible when using the Charts library
use a UIView somehow to generate the graph and generate a PDF image data from that, embed this image into the pdf report
Option 1 is probably the preferred way to get the best resolution and control (somewhat speculative): doing it this way means you should be able to specify the exact position and size and get correct font sizes, line thicknesses, and so on.
Option 2 means figuring out the scaling between a UIView and the PDF page view, and I am not sure how those would map to each other.
Can anyone provide any suggestions on the following:
Is it possible to use Charts to generate graphs in a PDF document, and if so how?
If not, what other options are there, short of writing custom drawing code?
OK, so here is what I have done.
Option 1: Using a UIView.layer to render on the PDF CGContext
func drawLineGraph(x: CGFloat, y: CGFloat) -> CGRect {
    let width = (pageSize.width - 2*kBorderInset - 2*kMarginInset)/2.0 - 50.0
    let renderingRect = CGRect(x: x, y: y + 50.0, width: width, height: 150.0)
    // Create a view for the Graph
    let graphController = LineChartController(rect: renderingRect, building: self.building)
    if let currentContext = UIGraphicsGetCurrentContext() {
        let frame = graphController.chartView.frame
        currentContext.saveGState()
        currentContext.translateBy(x: frame.origin.x, y: frame.origin.y)
        graphController.chartView.layer.render(in: currentContext)
        currentContext.restoreGState()
    }
    return renderingRect
}
The graphController is just an object that does essentially the same job as the parent view controller that would normally contain the graph: it sets the graph parameters and data.
Once that has been done, the function above is called while the PDF page's context is current, to render onto it; a sketch of that surrounding setup is below.
A bit of translation is required to put the graphs in the correct position.
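For context, a rough sketch of the PDF setup these functions assume (the page size and output buffer are placeholders; drawLineGraph is the function above):
let pageRect = CGRect(x: 0, y: 0, width: 612, height: 792)   // US Letter, hypothetical
let pdfData = NSMutableData()
UIGraphicsBeginPDFContextToData(pdfData, pageRect, nil)
UIGraphicsBeginPDFPage()
// While the PDF context is current, UIGraphicsGetCurrentContext() inside
// drawLineGraph(x:y:) returns the page's context, so the chart renders into the PDF.
_ = drawLineGraph(x: kBorderInset + kMarginInset, y: 100.0)
UIGraphicsEndPDFContext()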
Option 2: Drawing on the PDF Page CGContext
And the solution is...ta da...
func drawBarGraph(x: CGFloat, y: CGFloat) -> CGRect {
    let width = (pageSize.width - 2*kBorderInset - 2*kMarginInset)/2.0 - 50.0
    let renderingRect = CGRect(x: x + width + 50, y: y + 50.0, width: width, height: 150.0)
    // Create a view for the Graph
    let graphController = BarChartController(rect: renderingRect, building: self.building)
    if let currentContext = UIGraphicsGetCurrentContext() {
        let frame = graphController.chartView.frame
        currentContext.saveGState()
        currentContext.translateBy(x: frame.origin.x, y: frame.origin.y)
        //graphController.chartView.layer.render(in: currentContext)
        graphController.chartView.draw(frame)
        currentContext.restoreGState()
    }
    return renderingRect
}
Since the current context is set to the PDF page's context, just call the chart's draw() function directly, passing the frame rectangle.
What have I missed here? Can it really be this easy?
You can find a copy of the generated PDF here as well as sample code.

Qt5 QtChart drop vertical lines while using QScatterSeries

When I am using QScatterSeries, I can very easily draw a point at (x, y). However, instead of points I would like to draw short lines, like in the figure below. How can I go about doing this?
I tried using RectangleMarker, but it just draws a fat square. I would prefer a thin line, about 2 px wide and 20 px high.
Is there a way I can add custom marker shapes?
Here are the code and settings I use to transform my points into lines:
//create scatter series to draw point
m_pSeries1 = new QtCharts::QScatterSeries();
m_pSeries1->setName("trig");
m_pSeries1->setMarkerSize(100.0);
//draw a thin vertical line from (50, 0) to (50, 100)
QPainterPath linePath;
linePath.moveTo(50, 0);
linePath.lineTo(50, 100);
linePath.closeSubpath();
//adapt the size of the image with the size of your rectangle
QImage line1(100, 100, QImage::Format_ARGB32);
line1.fill(Qt::transparent);
QPainter painter1(&line1);
painter1.setRenderHint(QPainter::Antialiasing);
painter1.setPen(QColor(0, 0, 0));
painter1.setBrush(painter1.pen().color());
painter1.drawPath(linePath);
//attach your image of rectangle to your series
m_pSeries1->setBrush(line1);
m_pSeries1->setPen(QColor(Qt::transparent));
//then use the classic QtChart pipeline...
You can play with the marker size, the dimensions of the image, and the drawing pattern in the painter to adapt the size and shape of the rectangle and obtain a line.
In the picture, it's the black line. As you can see, you can repeat the process for other series.
Keep in mind that you cannot use OpenGL acceleration:
m_pSeries0->setUseOpenGL(true);
My work is based on the QtCharts/QScatterSeries example : QScatterSeries example
Hope it will help you.
Florian

CreateJS drawing with alpha

I implemented a little drawing function in my app with CreateJS, like so:
var currentPosition = this.posOnStage(event);
var drawing = container.getChildByName('drawing');
drawing.graphics.ss(this.brushSize, "round").s(this.brushColor);
drawing.graphics.mt(this._lastMousePosition.x, this._lastMousePosition.y);
drawing.graphics.lt(currentPosition.x, currentPosition.y);
drawing.alpha = this.brushAlpha;
container.updateCache(this.enableErasing ? "destination-out" : "source-over");
drawing.graphics.clear();
this._lastMousePosition = this.posOnStage(event);
As you can see, the alpha value of this drawing can change. Sadly, strokes stack: when you draw over a point you have already drawn, the overlapping alpha accumulates and the translucency effect goes away. Any idea how to solve this?
Thanks :)
EDIT:
I tried it like gskinner and Lanny proposed, but it didn't work. I attached an image so you can see the problem.
As suggested by Lanny, apply the alpha to the actual stroke, not to the Shape. You can use Graphics methods to help with this.
For example:
// set the brush color to red with the current brush alpha:
this.brushColor = createjs.Graphics.getRGB(255, 0, 0, this.brushAlpha);

Changing the alpha/opacity channel on a texture using GLKit

I am new to OpenGL ES, and I can't seem to figure out how to change the alpha/opacity on a texture loaded with GLKTextureLoader.
Right now I just draw the texture with the following code.
self.texture.effect.texture2d0.enabled = YES;
self.texture.effect.texture2d0.name = self.texture.textureInfo.name;
self.texture.effect.transform.modelviewMatrix = [self modelMatrix];
[self.texture.effect prepareToDraw];
glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
NKTexturedQuad _quad = self.texture.quad;
long offset = (long)&_quad;
glVertexAttribPointer(GLKVertexAttribPosition,
                      2,
                      GL_FLOAT,
                      GL_FALSE,
                      sizeof(NKTexturedVertex),
                      (void *)(offset + offsetof(NKTexturedVertex, geometryVertex)));
glVertexAttribPointer(GLKVertexAttribTexCoord0,
                      2,
                      GL_FLOAT,
                      GL_FALSE,
                      sizeof(NKTexturedVertex),
                      (void *)(offset + offsetof(NKTexturedVertex, textureVertex)));
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Any advice would be very helpful :)
I am no GL expert, but drawing with a changed alpha value does not seem to work as described by rickster.
As far as I understand, the values passed to glBlendColor are only used with glBlendFunc constants of the GL_CONSTANT_… family.
This overrides the texture's alpha values and draws with the defined constant value:
glEnable(GL_BLEND);
glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
glBlendColor(1.0, 1.0, 1.0, yourAlphaValue);
glDraw... // the draw operations
Further reference can be found here: http://www.opengl.org/wiki/Blending#Blend_Color
As long as you're in the OpenGL ES 1.1 world (or the emulated-1.1 world of GLKBaseEffect), alpha is a property either of the (per-pixel) bitmap data in the texture or of the (complete) OpenGL ES state you're drawing with. You can't set an opacity level for a texture as a whole, on its own. So, you have three options:
Change the alpha of the texture. This means changing the texture bitmap data itself -- use the 2D image context of your choice to draw the image at half (or whatever) alpha, and read the resulting image into an OpenGL ES texture. Probably not a great idea unless the alpha you want will be constant for the life of your app. In which case you might as well just go back to Photoshop (or whatever you're using to create your image assets) and set the alpha there.
Change the alpha you're drawing with. After you prepareToDraw, set up blending in GL:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
glBlendColor(1.0, 1.0, 1.0, myDesiredAlphaValue);
glDraw... // whatever you're drawing
Don't forget to A) draw your partially transparent content after any content you want it blended on top of and B) disable blending before rendering opaque content again on the next frame.
Ditch GLKBaseEffect and write your own shaders. Shaders that work like the 1.1 fixed-function pipeline are a dime a dozen -- you can even get started by using the shaders packaged with the Xcode "OpenGL Game" project template or looking at the shaders GLKit writes in the Xcode Frame Capture tool. Once you have such shaders, changing the alpha of a color you got out of a texel lookup is a simple operation:
vec4 color = texture2D(texUnit, texCoord);
color.a = myDesiredAlphaValue;
gl_FragColor = color;

Non-deprecated replacement for NSCalibratedBlackColorSpace?

I'm implementing my own NSBitmapImageRep (to draw PBM image files). To draw them, I'm using NSDrawBitmap() and passing it the NSCalibratedBlackColorSpace (as the bits are 1 for black, 0 for white).
Trouble is, I get the following warning:
warning: 'NSCalibratedBlackColorSpace' is deprecated
However, I couldn't find a good replacement for it. NSCalibratedWhiteColorSpace gives me an inverted image, and there seems to be no way to get NSDrawBitmap() to use a CGColorSpaceRef or NSColorSpace that I could create as a custom equivalent to NSCalibratedBlackColorSpace.
I've found a (hacky) way to shut up the warning (so I can still build warning-free until a replacement becomes available) by just passing @"NSCalibratedBlackColorSpace" instead of the symbolic constant, but I'd rather apply a correct fix.
Anybody have an idea?
OK, so I tried
NSData *pixelData = [[NSData alloc] initWithBytes: bytes + imgOffset length: [theData length] - imgOffset];
CGFloat black[3] = { 0, 0, 0 };
CGFloat white[3] = { 100, 100, 100 };
CGColorSpaceRef calibratedBlackCS = CGColorSpaceCreateCalibratedGray( white, black, 1.8 );
CGDataProviderRef provider = CGDataProviderCreateWithCFData( UKNSToCFData(pixelData) );
mImage = CGImageCreate( size.width, size.height, 1, 1, rowBytes, calibratedBlackCS, kCGImageAlphaNone,
                        provider, NULL, false, kCGRenderingIntentDefault );
CGColorSpaceRelease( calibratedBlackCS );
but it's still inverted. I also tried swapping black/white above, which didn't change a thing. Am I misinterpreting the CIE tristimulus color value thing? Most docs seem to assume you know what it is, or that you're willing to work through a piece of matrix maths to figure out which color is which. Or something.
I'd kinda like to avoid having to touch all the data once and invert it, but right now it seems like the best choice (/me unpacks his loops and xor operators).
It seems to have just been removed with no replacement. The fix is probably to just invert all the bits and use NSCalibratedWhiteColorSpace.
I'd create a calibrated gray CGColorSpace with the white and black points exchanged, then create a CGImage with the raster data and that color space. If you want a bitmap image rep, it's easy to create one of those around the CGImage.
Another way (since you need to support planar data and you can require 10.6) would be to create an NSBitmapImageRep in the usual way in NSCalibratedWhiteColorSpace, and then replace its color space with an NSColorSpace created from the CGColorSpace whose creation I suggested in my other answer.
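Neither answer spells it out, but on the CGImage route the inversion can also be done without touching the raster data at all, via a decode array. A minimal Swift sketch under that assumption (the helper name and parameters are hypothetical):
import AppKit

// Hypothetical helper: builds a bitmap rep from 1-bit PBM raster data
// (1 = black, 0 = white) without rewriting the bytes.
func makePBMRep(pixelData: Data, width: Int, height: Int, rowBytes: Int) -> NSBitmapImageRep? {
    guard let provider = CGDataProvider(data: pixelData as CFData) else { return nil }

    // The decode array [1, 0] remaps stored 1 -> 0.0 (black) and 0 -> 1.0 (white),
    // so a plain white-is-max gray color space can be used as-is.
    let decode: [CGFloat] = [1.0, 0.0]

    guard let image = CGImage(width: width,
                              height: height,
                              bitsPerComponent: 1,
                              bitsPerPixel: 1,
                              bytesPerRow: rowBytes,
                              space: CGColorSpaceCreateDeviceGray(),
                              bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
                              provider: provider,
                              decode: decode,
                              shouldInterpolate: false,
                              intent: .defaultIntent) else { return nil }

    // Wrapping the CGImage in a bitmap rep is a single call (macOS 10.5+).
    return NSBitmapImageRep(cgImage: image)
}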