I'm stuck on a problem: I want to render just two triangles (each stored in a separate buffer), and the Metal API rejects my attempt to render the second vertex buffer. I suspect this is about alignment. The assertion message is: failed assertion `(length - offset)(0) must be >= 32 at buffer binding at index 0 for vertexArray[0].' Here is the code:
Vertex and constants structs:
struct VertexPositionColor
{
    VertexPositionColor(const simd::float4& pos,
                        const simd::float4& col)
        : position(pos), color(col) {}

    simd::float4 position;
    simd::float4 color;
};

typedef struct
{
    simd::float4x4 model_view_projection;
} constants_t;
This is how I store and add new buffers (the function gets called twice):
NSMutableArray<id<MTLBuffer>> *_vertexBuffer;
NSMutableArray<id<MTLBuffer>> *_uniformBuffer;
NSMutableArray<id<MTLBuffer>> *_indexBuffer;

- (void)linkGeometry:(metalGeometry*)geometry
{
    [_vertexBuffer addObject:[_device newBufferWithBytes:[geometry vertices]
                                                  length:[geometry vertices_length]
                                                 options:0]];
    [_uniformBuffer addObject:[_device newBufferWithLength:[geometry uniforms_length]
                                                   options:0]];

    RCB::constants_t* guts = (RCB::constants_t*)[[_uniformBuffer lastObject] contents];
    guts->model_view_projection = [geometry uniforms]->model_view_projection;

    [geometry linkTransformation:(RCB::constants_t *)[[_uniformBuffer lastObject] contents]];
}
And next are the lines where the assert fails (on the very last one):
[render setVertexBuffer:_vertexBuffer[0] offset:0 atIndex:0];
[render setVertexBuffer:_uniformBuffer[0] offset:0 atIndex:1];
[render drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:3];
[render setVertexBuffer:_vertexBuffer[1] offset:3*sizeof(VertexPositionColor) atIndex:0];
[render setVertexBuffer:_uniformBuffer[1] offset:sizeof(constants_t) atIndex:1];
[render drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:3 vertexCount:3];
So we just make the offsets equal to the memory taken up by the previous buffer's contents. Note that the first triangle renders as expected if we comment out the last line.
Can anyone see what I've missed? I would really appreciate it.
Regards
The offset parameter expresses the offset to the beginning of the data within the provided buffer, not a position in some global vertex stream. Since you're using a separate buffer for each object, the offset should be 0 for both bindings (and likewise vertexStart should be 0 in each draw call, since each buffer holds only that triangle's three vertices).
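For example, a minimal sketch of the corrected encoding, assuming the same _vertexBuffer and _uniformBuffer arrays as above:

// Each triangle has its own MTLBuffer, so bind each one with offset 0
// and draw starting from vertex 0 every time.
for (NSUInteger i = 0; i < [_vertexBuffer count]; i++) {
    [render setVertexBuffer:_vertexBuffer[i] offset:0 atIndex:0];
    [render setVertexBuffer:_uniformBuffer[i] offset:0 atIndex:1];
    [render drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:3];
}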
I am trying to implement two distinct CAMetalLayers and use one MTLRenderCommandEncoder to render the same scene to both layers (Metal for OS X).
For this purpose, I've tried creating one MTLRenderPassDescriptor and attaching the two layers' textures to its color attachments. My render method looks like the following:
- (void)render {
    dispatch_semaphore_wait(_inflight_semaphore, DISPATCH_TIME_FOREVER);

    id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
    __block dispatch_semaphore_t block_sema = _inflight_semaphore;
    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> buffer) {
        dispatch_semaphore_signal(block_sema);
    }];

    MTLRenderPassDescriptor *renderPass = [MTLRenderPassDescriptor renderPassDescriptor];
    for (int i = 0; i < [_metalLayers count]; i++) {
        _metalDrawables[i] = [_metalLayers[i] nextDrawable];
        renderPass.colorAttachments[i].texture = _metalDrawables[[_metalDrawables count] - 1].texture;
        renderPass.colorAttachments[i].clearColor = MTLClearColorMake(0.5, 0.5, (float)i / (float)[_metalLayers count], 1);
        renderPass.colorAttachments[i].storeAction = MTLStoreActionStore;
        renderPass.colorAttachments[i].loadAction = MTLLoadActionClear;
    }

    id<MTLRenderCommandEncoder> commandEncoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPass];
    [commandEncoder setRenderPipelineState:_pipeline];
    [commandEncoder setVertexBuffer:_positionBuffer offset:0 atIndex:0];
    [commandEncoder setVertexBuffer:_colorBuffer offset:0 atIndex:1];
    [commandEncoder drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:3 instanceCount:1];
    [commandEncoder endEncoding];

    for (int i = 0; i < [_metalDrawables count]; i++) {
        [commandBuffer presentDrawable:_metalDrawables[i]];
    }
    [commandBuffer commit];
}
However, the scene gets rendered to just one of the layers, which turns out to be the one associated with the first color attachment's texture. The other layer is cleared with the specified clear color, but nothing is drawn.
Does this approach have any chance of succeeding, or is using the render pass descriptor's color attachments entirely pointless when trying to render the same scene to multiple screens (i.e. CAMetalLayers)? If so, is there any other conceivable approach to achieve this result?
To write to multiple render targets, you need to explicitly write out to each render target in your fragment shader, as @lock has already pointed out.
struct MyFragmentOutput {
    // color attachment 0
    float4 clr_f [[ color(0) ]];
    // color attachment 1
    int4 clr_i [[ color(1) ]];
    // color attachment 2
    uint4 clr_ui [[ color(2) ]];
};

fragment MyFragmentOutput
my_frag_shader( ... )
{
    MyFragmentOutput f;
    ....
    f.clr_f = ...;
    f.clr_i = ...;
    ...
    return f;
}
However, this is overkill, since you don't really need the GPU to render the scene twice. So the answer above by @Kacper is more accurate for your case. To add to his answer, though, I would recommend using a blit encoder (MTLBlitCommandEncoder), which can copy data between two textures on the GPU; I assume this should be much faster than going through the CPU.
https://developer.apple.com/library/mac/documentation/Miscellaneous/Conceptual/MetalProgrammingGuide/Blit-Ctx/Blit-Ctx.html#//apple_ref/doc/uid/TP40014221-CH9-SW4
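For example, a sketch of the blit approach (untested; it assumes both drawables share the same size and pixel format, which a texture-to-texture copy requires):

// Render the scene once into the first drawable's texture, then copy it
// to the second drawable's texture on the GPU with a blit encoder.
id<MTLTexture> src = _metalDrawables[0].texture;
id<MTLTexture> dst = _metalDrawables[1].texture;
id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
[blit copyFromTexture:src
          sourceSlice:0
          sourceLevel:0
         sourceOrigin:MTLOriginMake(0, 0, 0)
           sourceSize:MTLSizeMake(src.width, src.height, src.depth)
            toTexture:dst
     destinationSlice:0
     destinationLevel:0
    destinationOrigin:MTLOriginMake(0, 0, 0)];
[blit endEncoding];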
As far as I've read about this problem, you can try to render to just one MTLTexture (not a drawable layer's texture) and then use the MTLTexture methods getBytes and replaceRegion to copy the texture data into the two drawable layers.
Currently I am working on rendering to an ordinary texture myself, but I'm encountering some artifacts and it is not working for me yet; maybe you will find a way to solve that.
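In outline, that CPU round-trip might look like the following (untested, per the caveat above; sceneTexture and drawables are hypothetical names, the textures must not be framebufferOnly, and a BGRA8 format, 4 bytes per pixel, is assumed):

// Pull the rendered texture's pixels back to the CPU...
NSUInteger bytesPerRow = 4 * sceneTexture.width; // BGRA8 assumption
NSUInteger length = bytesPerRow * sceneTexture.height;
void *pixels = malloc(length);
MTLRegion region = MTLRegionMake2D(0, 0, sceneTexture.width, sceneTexture.height);
[sceneTexture getBytes:pixels bytesPerRow:bytesPerRow fromRegion:region mipmapLevel:0];
// ...then push the same bytes into each drawable's texture.
for (id<CAMetalDrawable> drawable in drawables) {
    [drawable.texture replaceRegion:region mipmapLevel:0 withBytes:pixels bytesPerRow:bytesPerRow];
}
free(pixels);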
I've been looking at the new OpenGL framework for iOS, aptly named GLKit, and have been playing around with porting some existing OpenGL 1.0 code to OpenGL ES 2.0, just to dip my toe in the water and get to grips with things.
After reading the API and a whole ream of other best-practice material from Apple and the OpenGL documentation, it's been pretty much ingrained into me that I should be using vertex buffer objects and "elements", or rather, vertex indices. There seems to be a lot of mention of optimising memory storage by using padding where necessary too, but that's a conversation for another day perhaps ;)
I read on SO a while ago about the benefits of using NSMutableData over classic malloc/free and wanted to take that approach when writing my VBO. So far I've managed to bungle together a snippet that looks like I'm heading down the right track, but I'm not entirely sure about how much data a VBO should contain. Here's what I've got so far:
//import headers
#import <GLKit/GLKit.h>

#pragma mark -
#pragma mark InterleavingVertexData

//interleaved per-vertex data struct
struct InterleavingVertexData
{
    //vertices
    GLKVector3 vertices;
    //normals
    GLKVector3 normal;
    //color
    GLKVector4 color;
    //texture coordinates
    GLKVector2 texture;
};
typedef struct InterleavingVertexData InterleavingVertexData;

#pragma mark -
#pragma mark VertexIndices

//vertex indices struct
struct VertexIndices
{
    //vertex indices
    GLuint a;
    GLuint b;
    GLuint c;
};
typedef struct VertexIndices VertexIndices;

//create and return a vertex index with specified indices
static inline VertexIndices VertexIndicesMake(GLuint a, GLuint b, GLuint c)
{
    //declare vertex indices
    VertexIndices vertexIndices;
    //set indices
    vertexIndices.a = a;
    vertexIndices.b = b;
    vertexIndices.c = c;
    //return vertex indices
    return vertexIndices;
}
#pragma mark -
#pragma mark VertexBuffer

//vertex buffer struct
struct VertexBuffer
{
    //vertex data
    NSMutableData *vertexData;
    //vertex indices
    NSMutableData *indices;
    //total number of vertices
    NSUInteger totalVertices;
    //total number of indices
    NSUInteger totalIndices;
};
typedef struct VertexBuffer VertexBuffer;

//create and return a vertex buffer with allocated data
static inline VertexBuffer VertexBufferMake(NSUInteger totalVertices, NSUInteger totalIndices)
{
    //declare vertex buffer
    VertexBuffer vertexBuffer;
    //set vertices and indices count
    vertexBuffer.totalVertices = totalVertices;
    vertexBuffer.totalIndices = totalIndices;
    //set vertex data and indices
    vertexBuffer.vertexData = nil;
    vertexBuffer.indices = nil;
    //check vertices count
    if(totalVertices > 0)
    {
        //allocate data
        vertexBuffer.vertexData = [[NSMutableData alloc] initWithLength:(sizeof(InterleavingVertexData) * totalVertices)];
    }
    //check indices count
    if(totalIndices > 0)
    {
        //allocate data
        vertexBuffer.indices = [[NSMutableData alloc] initWithLength:(sizeof(VertexIndices) * totalIndices)];
    }
    //return vertex buffer
    return vertexBuffer;
}

//grow or shrink a vertex buffer
static inline void VertexBufferResize(VertexBuffer *vertexBuffer, NSUInteger totalVertices, NSUInteger totalIndices)
{
    //check adjusted vertices count
    if(vertexBuffer->totalVertices != totalVertices && totalVertices > 0)
    {
        //set vertices count
        vertexBuffer->totalVertices = totalVertices;
        //check data is valid
        if(vertexBuffer->vertexData)
        {
            //allocate data
            [vertexBuffer->vertexData setLength:(sizeof(InterleavingVertexData) * totalVertices)];
        }
        else
        {
            //allocate data
            vertexBuffer->vertexData = [[NSMutableData alloc] initWithLength:(sizeof(InterleavingVertexData) * totalVertices)];
        }
    }
    //check adjusted indices count
    if(vertexBuffer->totalIndices != totalIndices && totalIndices > 0)
    {
        //set indices count
        vertexBuffer->totalIndices = totalIndices;
        //check data is valid
        if(vertexBuffer->indices)
        {
            //allocate data
            [vertexBuffer->indices setLength:(sizeof(VertexIndices) * totalIndices)];
        }
        else
        {
            //allocate data
            vertexBuffer->indices = [[NSMutableData alloc] initWithLength:(sizeof(VertexIndices) * totalIndices)];
        }
    }
}

//release vertex buffer data
static inline void VertexBufferRelease(VertexBuffer *vertexBuffer)
{
    //set vertices and indices count
    vertexBuffer->totalVertices = 0;
    vertexBuffer->totalIndices = 0;
    //check vertices are valid
    if(vertexBuffer->vertexData)
    {
        //clean up
        [vertexBuffer->vertexData release];
        vertexBuffer->vertexData = nil;
    }
    //check indices are valid
    if(vertexBuffer->indices)
    {
        //clean up
        [vertexBuffer->indices release];
        vertexBuffer->indices = nil;
    }
}
Currently, the interleaved vertex data contains enough to store the vertices, normals, colors and texture coordinates for each vertex. I was under the impression that there would be an equal number of vertices and indices, but in practice this obviously isn't the case, so the indices are part of the VBO rather than the InterleavingVertexData.
Question Updated:
I've updated the code above after managing to wrangle it into a working state. Hopefully it will come in useful to someone in the future.
Now that I've managed to set everything up, I'm having trouble getting the expected results when rendering the content bound to the VBO. Here's the code I've got so far for loading my data into OpenGL:
//generate buffers
glGenBuffers(2, buffers);
//bind vertices buffer
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glBufferData(GL_ARRAY_BUFFER, (sizeof(InterleavingVertexData) * vertexBuffer.totalVertices), self.vertexData, GL_STATIC_DRAW);
//bind indices buffer
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, (sizeof(VertexIndices) * vertexBuffer.totalIndices), self.vertexIndices, GL_STATIC_DRAW);
//reset buffers
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
And the code for rendering everything:
//enable required attributes
glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribNormal);
glEnableVertexAttribArray(GLKVertexAttribColor);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
//bind buffers
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
//set shape attributes
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(InterleavingVertexData), (void *)offsetof(InterleavingVertexData, vertices));
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_TRUE, sizeof(InterleavingVertexData), (void *)offsetof(InterleavingVertexData, normal));
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_TRUE, sizeof(InterleavingVertexData), (void *)offsetof(InterleavingVertexData, color));
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_TRUE, sizeof(InterleavingVertexData), (void *)offsetof(InterleavingVertexData, texture));
//draw shape
glDrawElements(GL_TRIANGLES, vertexBuffer.totalIndices, GL_UNSIGNED_INT, (void *)0);
//reset buffers
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
//disable atttributes
glDisableVertexAttribArray(GLKVertexAttribTexCoord0);
glDisableVertexAttribArray(GLKVertexAttribColor);
glDisableVertexAttribArray(GLKVertexAttribNormal);
glDisableVertexAttribArray(GLKVertexAttribPosition);
Whilst my iPhone hasn't yet exploded with awesome graphics of unicorns shooting rainbows from their eyes, I haven't been able to render a simple shape in its entirety without tearing my hair out.
From the rendering it looks as though only a third of each shape is being drawn, perhaps a half depending on the viewing angle. It seems the culprit is the count parameter passed to glDrawElements, as fiddling with this gives differing results, but I've read the documentation and checked the value over and over again, and it does indeed expect the total number of indices (which is what I'm passing currently).
As I mentioned in my original question, I'm quite confused by VBOs currently, or rather, confused by the implementation more than the concept. If anyone would be so kind as to cast an eye over my implementation, that would be super awesome, as I'm sure I've made a rookie error somewhere along the way, but you know how it is when you stare at something for hours on end with no progress.
Thanks for reading!
I think I see your problem.
You've got a struct, VertexIndices, which contains three indices: the indices for one triangle. When you bind your IBO (Index Buffer Object, the buffer object containing your indices), you do this:
glBufferData(GL_ELEMENT_ARRAY_BUFFER, (sizeof(VertexIndices) * vertexBuffer.totalIndices), self.vertexIndices, GL_STATIC_DRAW);
Which is fine. The size parameter in glBufferData is in bytes, so you're multiplying sizeof(3 GLuints) by the number of groups of 3 GLuints that you have. Great.
But then when you actually call glDrawElements, you do this:
glDrawElements(GL_TRIANGLES, vertexBuffer.totalIndices, GL_UNSIGNED_INT, (void *)0);
However, vertexBuffer.totalIndices is equal to the number of VertexIndices structs you've got, which is equal to the total number of indices / 3 (i.e. the total number of triangles). So you need to do one of the following:
Easy fix yet stupid: glDrawElements(..., vertexBuffer.totalIndices * 3, ...);
Proper yet more work: vertexBuffer.totalIndices should contain the actual total number of indices that you've got, not the total number of triangles you're rendering.
You need to do one of these because right now totalIndices contains the total number of VertexIndices structs you've got, and each one holds 3 indices. The right thing to do here is either rename totalIndices to totalTriangles, or keep track of the actual total number of indices somewhere.
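In code, the quick fix looks like this (keeping totalIndices with its current meaning, where it really counts index triples):

// totalIndices currently counts VertexIndices structs (i.e. triangles),
// so multiply by 3 to get the index count glDrawElements expects.
glDrawElements(GL_TRIANGLES, vertexBuffer.totalIndices * 3, GL_UNSIGNED_INT, (void *)0);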
I have been working on reading in an audio asset using AVAssetReader so that I can later play back the audio with an AUGraph using an AudioUnit callback. I have the AUGraph and AudioUnit callback working, but it reads files from disk, and if the file is too big it takes up too much memory and crashes the app. So I am instead reading the asset directly, a limited amount at a time. I will then manage it as a double buffer and feed the AUGraph what it needs when it needs it.
(Note: I would love to know if I can use Audio Queue Services and still use an AUGraph with an AudioUnit callback, so that memory is managed for me by the iOS frameworks.)
My problem is that I do not have a good understanding of arrays, structs and pointers in C. The part where I need help is taking the individual AudioBufferList, which holds a single AudioBuffer, and adding that data to another AudioBufferList which holds all of the data to be used later. I believe I need to use memcpy, but it is not clear how to use it, or even how to initialize an AudioBufferList for my purposes. I am using MixerHost for reference, which is the sample project from Apple that reads the file in from disk.
I have uploaded my work in progress if you would like to load it up in Xcode. I've figured out most of what I need to get this done and once I have the data being collected all in one place I should be good to go.
Sample Project: MyAssetReader.zip
In the header you can see I declare the bufferList as a pointer to the struct.
@interface MyAssetReader : NSObject {
    BOOL reading;
    signed long sampleTotal;
    Float64 totalDuration;
    AudioBufferList *bufferList; // How should this be handled?
}
Then I allocate bufferList this way, largely borrowing from MixerHost...
UInt32 channelCount = [asset.tracks count];
if (channelCount > 1) {
    NSLog(@"We have more than 1 channel!");
}
bufferList = (AudioBufferList *) malloc (
    sizeof (AudioBufferList) + sizeof (AudioBuffer) * (channelCount - 1)
);
if (NULL == bufferList) { NSLog(@"*** malloc failure for allocating bufferList memory"); return; }

// initialize the mNumberBuffers member
bufferList->mNumberBuffers = channelCount;

// initialize the mBuffers member to 0
AudioBuffer emptyBuffer = {0};
size_t arrayIndex;
for (arrayIndex = 0; arrayIndex < channelCount; arrayIndex++) {
    // set up the AudioBuffer structs in the buffer list
    bufferList->mBuffers[arrayIndex] = emptyBuffer;
    bufferList->mBuffers[arrayIndex].mNumberChannels = 1;
    // How should mData be initialized???
    bufferList->mBuffers[arrayIndex].mData = malloc(sizeof(AudioUnitSampleType));
}
Finally I loop through the reads.
int frameCount = 0;
CMSampleBufferRef nextBuffer;
while (assetReader.status == AVAssetReaderStatusReading) {
    nextBuffer = [assetReaderOutput copyNextSampleBuffer];

    AudioBufferList localBufferList;
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(nextBuffer, NULL, &localBufferList, sizeof(localBufferList), NULL, NULL,
        kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);

    // increase the total number of bytes
    bufferList->mBuffers[0].mDataByteSize += localBufferList.mBuffers[0].mDataByteSize;
    // carefully copy the data into the buffer list
    memcpy(bufferList->mBuffers[0].mData + frameCount, localBufferList.mBuffers[0].mData, sizeof(AudioUnitSampleType));

    // get information about duration and position
    //CMSampleBufferGet
    CMItemCount sampleCount = CMSampleBufferGetNumSamples(nextBuffer);
    Float64 duration = CMTimeGetSeconds(CMSampleBufferGetDuration(nextBuffer));
    Float64 presTime = CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(nextBuffer));
    if (isnan(duration)) duration = 0.0;
    if (isnan(presTime)) presTime = 0.0;
    //NSLog(@"sampleCount: %ld", sampleCount);
    //NSLog(@"duration: %f", duration);
    //NSLog(@"presTime: %f", presTime);
    self.sampleTotal += sampleCount;
    self.totalDuration += duration;
    frameCount++;
    free(nextBuffer);
}
I am unsure about the way I handle mDataByteSize and mData, especially with memcpy. Since mData is a void pointer, this is an extra-tricky area.
memcpy(bufferList->mBuffers[0].mData + frameCount, localBufferList.mBuffers[0].mData, sizeof(AudioUnitSampleType));
In this line I think it should be copying the data from localBufferList to the position in bufferList's buffer, offset by the number of frames, so the pointer lands where the data should be written. I have a couple of ideas on what I need to change to get this to work (see the sketch after them):
Since arithmetic on a void pointer advances one byte at a time, not sizeof(AudioUnitSampleType) bytes, I may need to multiply the offset by sizeof(AudioUnitSampleType) to get the memcpy into the right position.
I may not be using malloc properly to prepare mData, but since I am not sure how many frames there will be, I am not sure what to do to initialize it.
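For what it's worth, here is a minimal sketch of the copy those two ideas point toward (untested; it assumes mData was malloc'd large enough up front, and it tracks a running byte offset instead of a frame count):

// Inside the read loop: copy the whole chunk from this sample buffer,
// advancing by bytes (cast to Byte * so the arithmetic is explicit).
size_t bytesCopied = 0; // declared once, before the loop
UInt32 chunkBytes = localBufferList.mBuffers[0].mDataByteSize;
memcpy((Byte *)bufferList->mBuffers[0].mData + bytesCopied,
       localBufferList.mBuffers[0].mData,
       chunkBytes);
bytesCopied += chunkBytes;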
Currently when I run this app it ends this function with an invalid pointer for bufferList.
I appreciate your help with making me better understand how to manage an AudioBufferList.
I've come up with my own answer. I decided to use an NSMutableData object, which lets me appendBytes from the CMSampleBufferRef after calling CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer to get an AudioBufferList.
[data appendBytes:localBufferList.mBuffers[0].mData length:localBufferList.mBuffers[0].mDataByteSize];
Once the read loop is done I have all of the data in my NSMutableData object. I then create and populate the AudioBufferList this way.
audioBufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList));
if (NULL == audioBufferList) {
    NSLog(@"*** malloc failure for allocating audioBufferList memory");
    [data release];
    return;
}
audioBufferList->mNumberBuffers = 1;
audioBufferList->mBuffers[0].mNumberChannels = channelCount;
audioBufferList->mBuffers[0].mDataByteSize = [data length];
audioBufferList->mBuffers[0].mData = (AudioUnitSampleType *)malloc([data length]);
if (NULL == audioBufferList->mBuffers[0].mData) {
    NSLog(@"*** malloc failure for allocating mData memory");
    [data release];
    return;
}
memcpy(audioBufferList->mBuffers[0].mData, [data mutableBytes], [data length]);
[data release];
I'd appreciate a little code review on how I use malloc to create and populate the struct. I am getting an EXC_BAD_ACCESS error sporadically, but I cannot pinpoint where the error is yet. Since I am using malloc on the struct, I should not have to retain it anywhere. I do call free to release the child elements within the struct, and finally the struct itself, everywhere that I use malloc.
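For reference, the matching cleanup for a struct built this way would be something like the following (a sketch: free each malloc'd mData before the AudioBufferList itself, and never free either twice):

// Free the inner allocations first, then the struct itself.
for (UInt32 i = 0; i < audioBufferList->mNumberBuffers; i++) {
    free(audioBufferList->mBuffers[i].mData);
    audioBufferList->mBuffers[i].mData = NULL;
}
free(audioBufferList);
audioBufferList = NULL;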
A big noob needs help understanding things.
I have three UIViews stored inside a NSMutableArray
lanes = [[NSMutableArray arrayWithCapacity:3] retain];

- (void)registerLane:(Lane*)lane {
    NSLog(@"registering lane:%i", lane);
    [lanes addObject:lane];
}
In the NSLog output I see: registering lane:89183264
The value displayed in the NSLog (89183264) is what I am after.
I'd like to be able to save that number in a variable to be able to reuse it elsewhere in the code.
The closest I could come up with was this:
NSString *lane0 = [lanes objectAtIndex:0];
NSString *description0 = [lane0 description];
NSLog(@"description0:%@", description0);
The problem is that description0 gets the whole UIView description, not just the single number (decimal 89183264 is hex 0x550d420).
description0's content:
description0:<Lane: 0x550d420; frame = (127 0; 66 460); alpha = 0.5; opaque = NO; autoresize = RM+BM; tag = 2; layer = <CALayer: 0x550d350>>
What I don't get is why I can get the correct decimal value so easily with NSLog, but seem unable to get it out of the NSMutableArray any other way. I am sure I am missing some basic knowledge here, and I would appreciate it if someone could take the time to explain what's going on, so I can finally move on. It's been a long day of studying.
Why can't I save the 89183264 number easily with something like:
NSInteger * mylane = lane.id;
or
NSInteger * mylane = lane;
thank you all
I'm really confused as to why you want to save the memory location of the view, because that's what your '89183264' number is: the value of the pointer. When you call:
NSLog(@"registering lane:%i", lane);
...do you get what's actually being printed out there? What the number that's being printed means?
It seems like a really bad idea, especially since, if you're subclassing UIView, you've already got a lovely .tag property to which you can assign an int of your choosing.
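For example (a sketch; the tag value 2 is arbitrary, and it assumes the lane has been added somewhere under self.view):

// Assign your own identifier when you create the lane...
lane.tag = 2;
// ...and look it up later from any ancestor view.
UIView *myLane = [self.view viewWithTag:2];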
You're making life infinitely more complex than it needs to be. Just use a pointer. Say I have an array containing lots of UIViews:
UIView *viewToCompare = [myArray objectAtIndex:3];
for (id object in myArray) {
    if (object == viewToCompare) {
        NSLog(@"Found it!");
    }
}
That does what you're trying to do - it compares two pointers - and doesn't need any faffing around with ints, etc.
I am trying to manipulate an image using a chain of CIFilters and then examine each byte of the resulting bitmap. Long term, I do not need to display the resulting image -- I just need to analyze it in memory -- but near term I am displaying it on screen to help with debugging.
I have some "bitmap examination" code that works as expected when examining the NSImage (bitmap representation) I use as my input (loaded from a JPG file into an NSImage). And it SOMETIMES works as expected when I use it on the outputBitmap produced by the code below. More specifically, when I use an NSAffineTransform filter to create outputBitmap, outputBitmap contains the data I would expect. But if I use a CILineOverlay filter to create outputBitmap, none of the bytes in the bitmap have any data in them. I believe both filters are working as expected, because when I display their results on screen (via outputImageView), they look correct. Yet when I examine the outputBitmaps, the one created from the CILineOverlay filter is "empty" while the one created from NSAffineTransform contains data. Furthermore, if I chain the two filters together, the final resulting bitmap only seems to contain data if I run the AffineTransform last. That seems very strange to me.
My understanding (from reading the Core Image Programming Guide) is that a CIImage should be considered an "image recipe" rather than an actual image, because the image isn't actually created until it is "drawn." Given that, it would make sense for the CIImage's bitmap to have no data -- but then I don't understand why it has data after I run the NSAffineTransform, yet doesn't after running the CILineOverlay transform. Basically, I am trying to determine whether creating the NSCIImageRep (ir in the code below) from the CIImage (myResult) is equivalent to "drawing" the CIImage -- in other words, whether that should force the bitmap to be populated. If someone knows the answer, please let me know -- it will save me a few hours of trial-and-error experimenting!
Finally, if the answer is "you must draw to a graphics context"... then I have another question: would I need to do something along the lines of what is described in the Quartz 2D Programming Guide: Graphics Contexts, Listings 2-7 and 2-8 (drawing to a bitmap graphics context)? That is the path I am about to head down... but it seems like a lot of code just to force the bitmap data to be dumped into an array where I can get at it. So if there is an easier or better way, please let me know. I just want to take the data that (should be) in myResult and put it into a bitmap array where I can access it at the byte level. And since I already have code that works with an NSBitmapImageRep, unless doing it that way is a bad idea for some reason that is not readily apparent to me, I would prefer to "convert" myResult into an NSBitmapImageRep.
CIImage *myResult = [transform valueForKey:@"outputImage"];
NSImage *outputImage;
NSCIImageRep *ir = [NSCIImageRep alloc];
ir = [NSCIImageRep imageRepWithCIImage:myResult];
outputImage = [[[NSImage alloc] initWithSize:
                NSMakeSize(inputImage.size.width, inputImage.size.height)]
               autorelease];
[outputImage addRepresentation:ir];
[outputImageView setImage:outputImage];
NSBitmapImageRep *outputBitmap = [[NSBitmapImageRep alloc] initWithCIImage:myResult];
Thanks,
Adam
Edit #1 -- in response to Peter H.'s comment:
Sample code accessing bitmap data...
for (row = 0; row < heightInPixels; row++)
    for (column = 0; column < widthInPixels; column++) {
        if (row == 1340) { //just check this one row, that I know what to expect
            NSLog(@"Row 1340 column %d pixel redByte of pixel is %d",column,thisPixel->redByte);
        }
    }
Results from above (all columns contain the same zero/null value, which is what I called "empty")...
2010-06-13 10:39:07.765 ImageTransform[5582:a0f] Row 1340 column 1664 pixel redByte of pixel is 0
2010-06-13 10:39:07.765 ImageTransform[5582:a0f] Row 1340 column 1665 pixel redByte of pixel is 0
2010-06-13 10:39:07.766 ImageTransform[5582:a0f] Row 1340 column 1666 pixel redByte of pixel is 0
If I change the %d to %h, nothing prints at all (blank rather than "0"). If I change it to %@, I get "(null)" on every line instead of the "0" shown above. On the other hand... when I run just the NSAffineTransform filter and then execute this code, the printed bytes contain the data I would expect (regardless of how I format the NSLog output, something prints).
Adding more code on 6/14 ...
// prior code retrieves JPG image from disk and loads into NSImage
CIImage *inputCIimage = [[CIImage alloc] initWithBitmapImageRep:inputBitmap];
if (inputCIimage == nil) {
    NSLog(@"Bailing out. Could not create CI Image");
    return;
}
NSLog(@"CI Image created. working on transforms...");
Filter that rotates the image... this was previously in a method, but I have since moved it in line while trying to figure out what is wrong...
// rotate imageIn by degreesToRotate, using an AffineTransform
CIFilter *transform = [CIFilter filterWithName:@"CIAffineTransform"];
[transform setDefaults];
[transform setValue:inputCIimage forKey:@"inputImage"];
NSAffineTransform *affineTransform = [NSAffineTransform transform];
[affineTransform transformPoint:NSMakePoint(inputImage.size.width/2, inputImage.size.height/2)];
//inputImage.size.width /2.0,inputImage.size.height /2.0)];
[affineTransform rotateByDegrees:3.75];
[transform setValue:affineTransform forKey:@"inputTransform"];
CIImage *myResult2 = [transform valueForKey:@"outputImage"];
Filter to apply CILineOverlay filter... (was also previously in a method)
CIFilter *lineOverlay = [CIFilter filterWithName:@"CILineOverlay"];
[lineOverlay setDefaults];
[lineOverlay setValue:inputCIimage forKey:@"inputImage"];
// start off with default values, then tweak the ones needed to achieve desired results
[lineOverlay setValue:[NSNumber numberWithFloat:.07] forKey:@"inputNRNoiseLevel"]; //.07 (0-1)
[lineOverlay setValue:[NSNumber numberWithFloat:.71] forKey:@"inputNRSharpness"]; //.71 (0-2)
[lineOverlay setValue:[NSNumber numberWithFloat:1] forKey:@"inputEdgeIntensity"]; //1 (0-200)
[lineOverlay setValue:[NSNumber numberWithFloat:.1] forKey:@"inputThreshold"]; //.1 (0-1)
[lineOverlay setValue:[NSNumber numberWithFloat:50] forKey:@"inputContrast"]; //50 (.25-200)
CIImage *myResult2 = [lineOverlay valueForKey:@"outputImage"]; // apply the filter to the CIImage object and return it
Finally ... the code that uses the results...
if (myResult2 == Nil)
    NSLog(@"Transformations failed");
else {
    NSLog(@"Finished transformations successfully ... now render final image");
    // make an NSImage from the CIImage (to display it, during initial development)
    NSImage *outputImage;
    NSCIImageRep *ir = [NSCIImageRep alloc];
    // show the transformed output on screen...
    ir = [NSCIImageRep imageRepWithCIImage:myResult2];
    outputImage = [[[NSImage alloc] initWithSize:
                    NSMakeSize(inputImage.size.width, inputImage.size.height)]
                   autorelease];
    [outputImage addRepresentation:ir];
    [outputImageView setImage:outputImage]; //rotatedImage
At this point the transformed image displays on screen just fine, regardless of which transform I apply and which one I leave commented out. It even works fine if I "chain" the transforms together so that the output from #1 goes into #2. So, to me, this seems to indicate that the filters are working.
However... the code that I really need to use is the "bitmap analysis" code that examines the bitmap that is in (or "should be" in) myResult2. And that code works only on the bitmap resulting from the CIAffineTransform filter. When I use it to examine the bitmap resulting from the CILineOverlay, the entire bitmap seems to contain only zeroes.
So here is the code used for that analysis...
// this is the next line after the [outputImageView ...] shown above
[self findLeftEdge:myResult2];
And then this is the code from the findLeftEdge method...
- (void)findLeftEdge:(CIImage*)imageInCI {
    // find the left edge of the input image, assuming it will be the first non-white pixel
    // because we have already applied the Threshold filter
    NSBitmapImageRep *outputBitmap = [[NSBitmapImageRep alloc] initWithCIImage:imageInCI];
    if (outputBitmap == nil)
        NSLog(@"unable to create outputBitmap");
    else
        NSLog(@"outputBitmap image rep created -- samples per pixel = %d", [outputBitmap samplesPerPixel]);

    RGBAPixel
        *thisPixel,
        *bitmapPixels = (RGBAPixel *)[outputBitmap bitmapData];

    int
        row,
        column,
        widthInPixels = [outputBitmap pixelsWide],
        heightInPixels = [outputBitmap pixelsHigh];

    //RGBAPixel *leftEdge[heightInPixels];
    struct {
        int pixelNumber;
        unsigned char pixelValue;
    } leftEdge[heightInPixels];

    // Is this necessary, or does Objective-C always initialize it to zero for me?
    for (row = 0; row < heightInPixels; row++) {
        leftEdge[row].pixelNumber = 0;
        leftEdge[row].pixelValue = 0;
    }

    for (row = 0; row < heightInPixels; row++)
        for (column = 0; column < widthInPixels; column++) {
            thisPixel = (&bitmapPixels[((widthInPixels * row) + column)]);
            // red is as good as any channel for this test (assume threshold filter already applied)
            // this should "save" the column number of the first non-white pixel encountered
            if (leftEdge[row].pixelValue < thisPixel->redByte) {
                leftEdge[row].pixelValue = thisPixel->redByte;
                leftEdge[row].pixelNumber = column;
            }
            // For debugging, display contents of each pixel
            //NSLog(@"Row %d column %d pixel redByte of pixel is %@",row,column,thisPixel->redByte);
            // For debugging, display contents of each pixel on one row
            //if (row == 1340) {
            //    NSLog(@"Row 1340 column %d pixel redByte of pixel is %@",column,thisPixel->redByte);
            //}
        }

    // For debugging, display the left edge that we discovered
    for (row = 0; row < heightInPixels; row++) {
        NSLog(@"Left edge on row %d was at pixel #%d", row, leftEdge[row].pixelNumber);
    }

    [outputBitmap release];
}
Here is another filter. When I use it I do get data in the "output bitmap" (just like the rotation filter). So it is just the CILineOverlay filter that does not yield up its data to me in the resulting bitmap ...
- (CIImage*)applyCropToCI:(CIImage*)imageIn rectToCrop:(CIVector*)rectToCrop {
    // crop the rectangle specified from the input image
    CIFilter *crop = [CIFilter filterWithName:@"CICrop"];
    [crop setDefaults];
    [crop setValue:imageIn forKey:@"inputImage"];
    // [crop setValue:rectToCrop forKey:@"inputRectangle"]; // vector defaults to 0,0,300,300
    //CIImage *myResult = [transform valueForKey:@"outputImage"]; // this is the way it was "in line", before putting this code into a method
    return [crop valueForKey:@"outputImage"]; // does this need to be retained?
}
You claim that the bitmap data contains “all zeroes”, but you're only looking at one byte per pixel. You're assuming that the first component is the red component, and you're assuming that the data is one byte per component; if the data is alpha-first or floating-point, one or both of these assumptions will be wrong.
Create a bitmap context in whatever format you want using a buffer you allocate, and render the image into that context. Your buffer will then contain the image in the format you expect.
You might also want to switch from structure-based access to byte-based access—i.e., pixels[(row*bytesPerRow)+col], incrementing col by the number of components per pixel. Endianness can easily become a headache when you use structures to access the components.
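For example, a sketch of that approach (assuming an 8-bit-per-component RGBA layout; ciImage here stands in for your myResult2):

// Allocate our own buffer so the pixel format is known for certain,
// then render the CIImage into a bitmap context backed by that buffer.
CGRect extent = [ciImage extent];
size_t width = (size_t)CGRectGetWidth(extent);
size_t height = (size_t)CGRectGetHeight(extent);
size_t bytesPerRow = width * 4; // 4 bytes per pixel: RGBA, 8 bits each
unsigned char *pixels = calloc(height, bytesPerRow);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef cg = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow,
                                        colorSpace, kCGImageAlphaPremultipliedLast);
CIContext *ci = [CIContext contextWithCGContext:cg options:nil];
[ci drawImage:ciImage inRect:CGRectMake(0, 0, width, height) fromRect:extent];
// pixels[(row * bytesPerRow) + (column * 4)] is now the red byte of
// (column, row) in this layout; rows count up from the bottom here.
CGContextRelease(cg);
CGColorSpaceRelease(colorSpace);
// ... analyze pixels, then:
free(pixels);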
for (row = 0; row < heightInPixels; row++)
    for (column = 0; column < widthInPixels; column++) {
        if (row == 1340) { //just check this one row, that I know what to expect
            NSLog(@"Row 1340 column %d pixel redByte of pixel is %d",column,thisPixel->redByte);
        }
    }
Aside from the syntax error, this code doesn't work because you never assigned to thisPixel. You are looping through indexes for nothing, since you never actually look up a pixel value at those indexes and assign it to thisPixel in order to inspect it.
Add such an assignment before the NSLog statement.
Furthermore, if the only row you care about is 1340, there's no need to loop through rows. Check using an if statement whether 1340 is less than the height, and if it is, then do only the columns loop. (Also, don't embed magic number literals like this in your code. Give that constant a name that explains the significance of the number 1340—i.e., why it's the only row you care about.)
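Concretely, something like this (a sketch using the bitmapPixels and thisPixel variables from your findLeftEdge method):

// Guard the row instead of looping over every row, and actually fetch
// the pixel before logging it.
int debugRow = 1340; // the one row whose expected values are known
if (debugRow < heightInPixels) {
    for (column = 0; column < widthInPixels; column++) {
        thisPixel = &bitmapPixels[(widthInPixels * debugRow) + column];
        NSLog(@"Row %d column %d redByte is %d", debugRow, column, thisPixel->redByte);
    }
}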