CaptureScreen in Objective-C using OpenGL on an iMac - objective-c

I have a multi-display system: a 27-inch display (2560 x 1440) and an Apple Cinema Display (1920 x 1200), with an ATI Radeon HD 5750 1024 MB graphics card. I used to be able to capture screenshots on my previous iMac with an Nvidia card, also a multi-display setup. Now I get an OpenGL error:
This GDB was configured as "x86_64-apple-darwin".tty /dev/ttys001
[Switching to process 28434 thread 0x0]
2011-08-20 19:07:42.040 DetectObjectColor[28434:c03] invalid fullscreen drawable
2011-08-20 19:07:42.043 DetectObjectColor[28434:c03] *** Assertion failure in -[OpenGLScreenReader readPartialScreenToBuffer:bufferBaseAddress:], /Documents/Personal/DetectObjectColor/OpenGLScreenReader.m:230
2011-08-20 19:07:42.043 DetectObjectColor[28434:c03] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'OpenGL error 0x0506'
I call this from my main program:
OpenGLScreenReader *mOpenGLScreenReader;
mOpenGLScreenReader = [[OpenGLScreenReader alloc] init];
[mOpenGLScreenReader readRectScreen:CGRectMake(570, 265, 35, 800)];
This is my init method:
-(id) init
{
if (self = [super init])
{
// Create a full-screen OpenGL graphics context
// Specify attributes of the GL graphics context
NSOpenGLPixelFormatAttribute attributes[] = {
NSOpenGLPFANoRecovery,
NSOpenGLPFAFullScreen,
NSOpenGLPFAScreenMask,
CGDisplayIDToOpenGLDisplayMask(kCGDirectMainDisplay),
(NSOpenGLPixelFormatAttribute) 0
};
NSOpenGLPixelFormat *glPixelFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:attributes];
if (!glPixelFormat)
{
return nil;
}
// Create OpenGL context used to render
mGLContext = [[[NSOpenGLContext alloc] initWithFormat:glPixelFormat shareContext:nil] autorelease];
// Cleanup, pixel format object no longer needed
[glPixelFormat release];
if (!mGLContext)
{
[self release];
return nil;
}
[mGLContext retain];
// Set our context as the current OpenGL context
[mGLContext makeCurrentContext];
// Set full-screen mode
[mGLContext setFullScreen];
NSRect mainScreenRect = [[NSScreen mainScreen] frame];
mWidth = mainScreenRect.size.width;
mHeight = mainScreenRect.size.height;
mByteWidth = mWidth * 4; // Assume 4 bytes/pixel for now
mByteWidth = (mByteWidth + 3) & ~3; // Align to 4 bytes
mData = malloc(mByteWidth * mHeight);
NSAssert( mData != 0, @"malloc failed");
}
return self;
}
This is my readRectScreen method, which gives me the OpenGL error when it executes the readPartialScreenToBuffer method:
- (void) readRectScreen:(CGRect) srcRect
{
mWidth = srcRect.size.width;
mHeight = srcRect.size.height;
mByteWidth = mWidth * 4; // Assume 4 bytes/pixel for now
mByteWidth = (mByteWidth + 3) & ~3; // Align to 4 bytes
mData = malloc(mByteWidth * mHeight);
[self readPartialScreenToBuffer:srcRect bufferBaseAddress: mData];
}
- (void) readPartialScreenToBuffer: (CGRect) srcRect bufferBaseAddress: (void *) baseAddress
{
// select front buffer as our source for pixel data
GLint width, height;
width = srcRect.size.width;
height = srcRect.size.height;
glReadBuffer(GL_FRONT);
//Read OpenGL context pixels directly.
// For extra safety, save & restore OpenGL states that are changed
glPushClientAttrib(GL_CLIENT_PIXEL_STORE_BIT);
glPixelStorei(GL_PACK_ALIGNMENT, 4); /* Force 4-byte alignment */
glPixelStorei(GL_PACK_ROW_LENGTH, 0);
glPixelStorei(GL_PACK_SKIP_ROWS, 0);
glPixelStorei(GL_PACK_SKIP_PIXELS, 0);
//Read a block of pixels from the frame buffer
glReadPixels((GLint)srcRect.origin.x, (GLint)srcRect.origin.y, width, height, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,baseAddress);
glPopClientAttrib();
//Check for OpenGL errors
GLenum theError = GL_NO_ERROR;
theError = glGetError();
NSAssert1( theError == GL_NO_ERROR, @"OpenGL error 0x%04X", theError);
}
Can anybody please help... I am totally lost as to where the error might be.

I guess your new computer has Mac OS X 10.7 (Lion) installed. Lion no longer allows this method of screen capture (reading the front buffer). Applications must migrate to the CGDisplayCreateImage() API, which was introduced in 10.6.
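For illustration, here is a minimal sketch of that Quartz route, assuming you only need a sub-rectangle of the main display (the helper name and the way the rect is passed in are placeholders, not from the original post):

#import <ApplicationServices/ApplicationServices.h>

// Sketch: capture a sub-rectangle of the main display with Quartz.
// Returns a CGImage the caller owns (CGImageRelease it when done), or NULL on failure.
CGImageRef CaptureMainDisplayRect(CGRect srcRect)
{
    return CGDisplayCreateImageForRect(kCGDirectMainDisplay, srcRect);
}

CGDisplayCreateImage(displayID) does the same for a whole display; either call replaces the full-screen OpenGL context and the glReadPixels read-back entirely.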

Your immediate issue seems to be that you don't get/have a valid OpenGL context. But that's not your real problem:
OpenGL is a drawing/rendering library. glReadPixels is meant to read back the image generated by your own program. OpenGL is not a general-purpose graphics API (though some people mistake it for one).
For some time one could exploit the side effects between OpenGL contexts, windows and framebuffers to take screenshots using glReadPixels. On modern systems with compositing window managers this no longer works!
You're barking up the wrong tree, sorry.

Related

OpenCV memory stacking up (not released properly)

I am using a 3rd-party library for image processing. This method seems to be the cause of large memory usage (+30 MB) every time it executes, and the memory is not released properly. Repeated use ends up crashing the app (memory overload). The image used comes directly from the camera of my iPhone 6.
+ (UIImage *)UIImageFromCVMat:(cv::Mat)cvMat
{
NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
CGColorSpaceRef colorSpace;
if (cvMat.elemSize() == 1) {
colorSpace = CGColorSpaceCreateDeviceGray();
} else {
colorSpace = CGColorSpaceCreateDeviceRGB();
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
CGImageRef imageRef = CGImageCreate(cvMat.cols, // Width
cvMat.rows, // Height
8, // Bits per component
8 * cvMat.elemSize(), // Bits per pixel
cvMat.step[0], // Bytes per row
colorSpace, // Colorspace
kCGImageAlphaNone | kCGBitmapByteOrderDefault, // Bitmap info flags
provider, // CGDataProviderRef
NULL, // Decode
false, // Should interpolate
kCGRenderingIntentDefault); // Intent
// UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
UIImage *image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return image;
}
I suspect the problem is here: (__bridge CFDataRef)data. I can't use CFRelease on it because it makes the app crash. The project uses ARC.
EDIT:
It seems the same code is also on the official OpenCV website:
http://docs.opencv.org/2.4/doc/tutorials/ios/image_manipulation/image_manipulation.html
Gah!
EDIT 2
Here is how I use it (the code below is actually also part of the 3rd-party lib, but I added some lines).
cv::Mat undistorted = cv::Mat( cvSize(maxWidth,maxHeight), CV_8UC4); // here nothing
cv::Mat original = [MMOpenCVHelper cvMatFromUIImage:_adjustedImage]; // here +30MB
//NSLog(@"%f %f %f %f",ptBottomLeft.x,ptBottomRight.x,ptTopRight.x,ptTopLeft.x);
cv::warpPerspective(original, undistorted,
cv::getPerspectiveTransform(src, dst), cvSize(maxWidth, maxHeight)); // here +16MB
_cropRect.hidden=YES;
@autoreleasepool {
_sourceImageView.image=[MMOpenCVHelper UIImageFromCVMat:undistorted]; // here +15MB (PROBLEM)
}
original.release(); // here -30MB (THIS IS OK)
undistorted.release(); // here -16MB (ok)
I guess it is a hard subject, since not many people know OpenCV that well. What I found is that most answers to similar problems involve putting @autoreleasepool around where this method is used, but that does not seem to release the memory either (see the sketch below).
As a temporary solution I resize the image fed to this method by half. At least the app lasts longer before it finally crashes. It just works.
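For clarity, this is the kind of per-frame pooling those answers describe, applied around the whole conversion from the snippet above rather than just the final call (a sketch only; the variable and helper names come from the code above, and it is not guaranteed to fix the growth):

// Sketch: give each frame its own pool so the autoreleased NSData/UIImage
// temporaries created inside UIImageFromCVMat: can be reclaimed right away,
// instead of accumulating until an outer pool drains.
@autoreleasepool {
    cv::Mat original = [MMOpenCVHelper cvMatFromUIImage:_adjustedImage];
    cv::Mat undistorted = cv::Mat(cvSize(maxWidth, maxHeight), CV_8UC4);
    cv::warpPerspective(original, undistorted,
                        cv::getPerspectiveTransform(src, dst),
                        cvSize(maxWidth, maxHeight));
    _sourceImageView.image = [MMOpenCVHelper UIImageFromCVMat:undistorted];
    original.release();
    undistorted.release();
}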

CGImageRef faster way to access pixel data?

My current method is:
CGDataProviderRef provider = CGImageGetDataProvider(imageRef);
imageData.rawData = CGDataProviderCopyData(provider);
imageData.imageData = (UInt8 *) CFDataGetBytePtr(imageData.rawData);
I only get about 30 frames per second. I know part of the performance hit is copying the data; it would be nice if I could just have access to the stream of bytes without it automatically creating a copy for me.
I'm trying to process CGImageRefs as fast as possible. Is there a faster way?
Here's my working solution snippet:
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
// Insert code here to initialize your application
//timer = [NSTimer scheduledTimerWithTimeInterval:1.0/60.0 //2000.0
// target:self
// selector:@selector(timerLogic)
// userInfo:nil
// repeats:YES];
leagueGameState = [LeagueGameState new];
[self updateWindowList];
lastTime = CACurrentMediaTime();
// Create a capture session
mSession = [[AVCaptureSession alloc] init];
// Set the session preset as you wish
mSession.sessionPreset = AVCaptureSessionPresetMedium;
// If you're on a multi-display system and you want to capture a secondary display,
// you can call CGGetActiveDisplayList() to get the list of all active displays.
// For this example, we just specify the main display.
// To capture both a main and secondary display at the same time, use two active
// capture sessions, one for each display. On Mac OS X, AVCaptureMovieFileOutput
// only supports writing to a single video track.
CGDirectDisplayID displayId = kCGDirectMainDisplay;
// Create a ScreenInput with the display and add it to the session
AVCaptureScreenInput *input = [[AVCaptureScreenInput alloc] initWithDisplayID:displayId];
input.minFrameDuration = CMTimeMake(1, 60);
//if (!input) {
// [mSession release];
// mSession = nil;
// return;
//}
if ([mSession canAddInput:input]) {
NSLog(#"Added screen capture input");
[mSession addInput:input];
} else {
NSLog(#"Couldn't add screen capture input");
}
//**********************Add output here
//dispatch_queue_t _videoDataOutputQueue;
//_videoDataOutputQueue = dispatch_queue_create( "com.apple.sample.capturepipeline.video", DISPATCH_QUEUE_SERIAL );
//dispatch_set_target_queue( _videoDataOutputQueue, dispatch_get_global_queue( DISPATCH_QUEUE_PRIORITY_HIGH, 0 ) );
AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
videoOut.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[videoOut setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
// RosyWriter records videos and we prefer not to have any dropped frames in the video recording.
// By setting alwaysDiscardsLateVideoFrames to NO we ensure that minor fluctuations in system load or in our processing time for a given frame won't cause framedrops.
// We do however need to ensure that on average we can process frames in realtime.
// If we were doing preview only we would probably want to set alwaysDiscardsLateVideoFrames to YES.
videoOut.alwaysDiscardsLateVideoFrames = YES;
if ( [mSession canAddOutput:videoOut] ) {
NSLog(#"Added output video");
[mSession addOutput:videoOut];
} else {NSLog(@"Couldn't add output video");}
// Start running the session
[mSession startRunning];
NSLog(#"Set up session");
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
//NSLog(#"Captures output from sample buffer");
//CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription( sampleBuffer );
/*
if ( self.outputVideoFormatDescription == nil ) {
// Don't render the first sample buffer.
// This gives us one frame interval (33ms at 30fps) for setupVideoPipelineWithInputFormatDescription: to complete.
// Ideally this would be done asynchronously to ensure frames don't back up on slower devices.
[self setupVideoPipelineWithInputFormatDescription:formatDescription];
}
else {*/
[self renderVideoSampleBuffer:sampleBuffer];
//}
}
- (void)renderVideoSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
//CVPixelBufferRef renderedPixelBuffer = NULL;
//CMTime timestamp = CMSampleBufferGetPresentationTimeStamp( sampleBuffer );
//[self calculateFramerateAtTimestamp:timestamp];
// We must not use the GPU while running in the background.
// setRenderingEnabled: takes the same lock so the caller can guarantee no GPU usage once the setter returns.
//@synchronized( _renderer )
//{
// if ( _renderingEnabled ) {
CVPixelBufferRef sourcePixelBuffer = CMSampleBufferGetImageBuffer( sampleBuffer );
const int kBytesPerPixel = 4;
CVPixelBufferLockBaseAddress( sourcePixelBuffer, 0 );
int bufferWidth = (int)CVPixelBufferGetWidth( sourcePixelBuffer );
int bufferHeight = (int)CVPixelBufferGetHeight( sourcePixelBuffer );
size_t bytesPerRow = CVPixelBufferGetBytesPerRow( sourcePixelBuffer );
uint8_t *baseAddress = CVPixelBufferGetBaseAddress( sourcePixelBuffer );
int count = 0;
for ( int row = 0; row < bufferHeight; row++ )
{
uint8_t *pixel = baseAddress + row * bytesPerRow;
for ( int column = 0; column < bufferWidth; column++ )
{
count ++;
pixel[1] = 0; // De-green (green is the second byte in each BGRA pixel)
pixel += kBytesPerPixel;
}
}
CVPixelBufferUnlockBaseAddress( sourcePixelBuffer, 0 );
//NSLog(#"Test Looped %d times", count);
CIImage *ciImage = [CIImage imageWithCVImageBuffer:sourcePixelBuffer];
/*
CIContext *temporaryContext = [CIContext contextWithCGContext:
[[NSGraphicsContext currentContext] graphicsPort]
options: nil];
CGImageRef videoImage = [temporaryContext
createCGImage:ciImage
fromRect:CGRectMake(0, 0,
CVPixelBufferGetWidth(sourcePixelBuffer),
CVPixelBufferGetHeight(sourcePixelBuffer))];
*/
//UIImage *uiImage = [UIImage imageWithCGImage:videoImage];
// Create a bitmap rep from the image...
NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithCIImage:ciImage];
// Create an NSImage and add the bitmap rep to it...
NSImage *image = [[NSImage alloc] init];
[image addRepresentation:bitmapRep];
// Set the output view to the new NSImage.
[imageView setImage:image];
//CGImageRelease(videoImage);
//renderedPixelBuffer = [_renderer copyRenderedPixelBuffer:sourcePixelBuffer];
// }
// else {
// return;
// }
//}
//Profile code? See how fast it's running?
if (CACurrentMediaTime() - lastTime > 3) // every 3 seconds
{
float time = CACurrentMediaTime() - lastTime;
[fpsText setStringValue:[NSString stringWithFormat:@"Elapsed Time: %f ms, %f fps", time * 1000 / loopsTaken, (1000.0)/(time * 1000.0 / loopsTaken)]];
lastTime = CACurrentMediaTime();
loopsTaken = 0;
[self updateWindowList];
if (leagueGameState.leaguePID == -1) {
[statusText setStringValue:@"No League Instance Found"];
}
}
else
{
loopsTaken++;
}
}
I get a very nice 60 frames per second even after looping through the data.
It captures the screen, I get the data, I modify the data and I re-show the data.
Which "stream of bytes" do you mean? CGImage represents the final bitmap data, but under the hood it may still be compressed. The bitmap may currently be stored on the GPU, so getting to it might require a GPU->CPU fetch (which is expensive, and should be avoided when you don't need it).
If you're trying to do this at greater than 30fps, you may want to rethink how you're attacking the problem, and use tools designed for that, like Core Image, Core Video, or Metal. Core Graphics is optimized for display, not processing (and definitely not real-time processing). A key difference in tools like Core Image is that you can perform more of your work on the GPU without shuffling data back to the CPU. This is absolutely critical for maintaining fast pipelines. Whenever possible, you want to avoid getting the actual bytes.
If you have a CGImage already, you can convert it to a CIImage with imageWithCGImage: and then use CIImage to process it further. If you really need access to the bytes, your options are the one you're using, or to render it into a bitmap context (which also will require copying) with CGContextDrawImage. There's just no promise that a CGImage has a bunch of bitmap bytes hanging around at any given time that you can look at, and it doesn't provide "lock your buffer" methods like you'll find in real-time frameworks like Core Video.
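As a rough sketch of the two options just described (assuming an existing 8-bit RGBA cgImage variable; this is not code from the question):

// Option 1: hand the CGImage to Core Image and keep further processing on the GPU.
CIImage *ciImage = [CIImage imageWithCGImage:cgImage];

// Option 2: if you truly need the bytes, draw into your own bitmap context.
// Note this is still a copy, just like CGDataProviderCopyData.
size_t width  = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
size_t bytesPerRow = width * 4;
uint8_t *buffer = malloc(bytesPerRow * height);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(buffer, width, height, 8, bytesPerRow,
                                         colorSpace,
                                         kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
// ... inspect the pixels in buffer here ...
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);
free(buffer);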
Some very good introductions to high-speed image processing from WWDC videos:
WWDC 2013 Session 509 Core Image Effects and Techniques
WWDC 2014 Session 514 Advances in Core Image
WWDC 2014 Sessions 603-605 Working with Metal

CoreGraphics random crash

This is a little test program to duplicate an intermittent issue in a larger class. The real class creates 4 thumbs of various sizes.
This main.m program will crash 1 out of 5 times it's run with EXC_BAD_ACCESS and highlights CGImageRelease(imgRef);. If I comment out CGImageRelease(imgRef), then the app has serious memory leaks but doesn't crash...
#import <Foundation/Foundation.h>
#import <Cocoa/Cocoa.h>
int main(int argc, const char * argv[])
{
@autoreleasepool {
NSString * image = @"/Users/xxx/Pictures/wallpaper/7gjMT.jpg";
NSData * imageData = [NSData dataWithContentsOfFile:image];
CFDataRef imgData = (__bridge CFDataRef)imageData;
CGImageRef imgRef;
CGDataProviderRef imgDataProvider = NULL;
imgDataProvider = CGDataProviderCreateWithCFData(imgData);
imgRef = CGImageCreateWithJPEGDataProvider(imgDataProvider, NULL, true, kCGRenderingIntentDefault);
CGDataProviderRelease(imgDataProvider);
for (int i = 0; i < 1000; i++) {
// create context, keeping original image properties
CGColorSpaceRef colorspace = CGImageGetColorSpace(imgRef);
CGContextRef context = CGBitmapContextCreate(NULL, 2560, 1440,
CGImageGetBitsPerComponent(imgRef),
CGImageGetBytesPerRow(imgRef),
colorspace,
kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorspace);
// draw image to context (resizing it)
CGContextDrawImage(context, CGRectMake(0, 0, 2560, 1440), imgRef);
// extract resulting image from context
CGImageRef newImgRef;
newImgRef = CGBitmapContextCreateImage(context);
CGImageRelease(imgRef);
CGContextRelease(context);
imgRef = newImgRef;
}
}
return 0;
}
I found that if I release the context first, then in 1 out of 10 failures it highlights CGImageGetBytesPerRow(imgRef) with the same error.
I added a breakpoint for malloc_error_break and it triggered on CGImageRelease.
Are CGImageRelease and CGContextRelease releasing a shared resource?
The main problem is almost certainly that you're releasing a colorspace that you don't own. CGImageGetColorSpace(imgRef) does not give you an ownership of the returned colorspace object, so you shouldn't be calling CGColorSpaceRelease(colorspace) later. (By the way, although I happened to get a different failure that clued me into the problem, the static analyzer catches this, too.)
As a secondary issue, I was getting failures to create the context because you're using an inappropriate bytes-per-row value. CGImageGetBytesPerRow(imgRef) is the bytes-per-row of that image, but that's only appropriate for the width of that image. Given that you're hard-coding a width rather than using the width of the image (since you're scaling), you should not be using the bytes-per-row of the image.
I guess it will work if you're scaling down, but it will waste space. If you're scaling up, it fails.
In any case, pass 0. That lets CGBitmapContextCreate() compute an optimal value.
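Untested, but a sketch of how the body of the loop might look with both fixes applied (no release of the borrowed colorspace, and 0 for bytes-per-row):

// create context, keeping the image's colorspace and bit depth
CGColorSpaceRef colorspace = CGImageGetColorSpace(imgRef); // borrowed reference, do not release
CGContextRef context = CGBitmapContextCreate(NULL, 2560, 1440,
                                             CGImageGetBitsPerComponent(imgRef),
                                             0, // let CGBitmapContextCreate pick bytes-per-row
                                             colorspace,
                                             kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(context, CGRectMake(0, 0, 2560, 1440), imgRef);
CGImageRef newImgRef = CGBitmapContextCreateImage(context);
CGContextRelease(context);
CGImageRelease(imgRef);
imgRef = newImgRef;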

I need help optimizing BGR888 blitting to NSView

This is the best I've come up with for blitting a 24-bit BGR image out to an NSView.
I did trim a significant amount of CPU time by ensuring that the NSWindow host also had the same colorSpace.
I think there are 4 or 5 pixel copies going on here:
in the vImage conversion (required)
calling CGDataProviderCreateWithData
calling CGImageCreate
creating the NSBitmapImageRep bitmap
in the final blit with drawInRect (required)
Anyone want to chime in on improving it?
Any help would be much appreciated.
{
// one-time setup code
CGColorSpaceRef useColorSpace = nil;
int w = 1920;
int h = 1080;
[theWindow setColorSpace: [NSColorSpace genericRGBColorSpace]];
// setup vImage buffers (not listed here)
// srcBuffer is my 24-bit BGR image (malloc-ed to be w*h*3)
// dstBuffer is for the resulting 32-bit RGBA image (malloc-ed to be w*h*4)
...
// this is called at 30-60 fps
if (!useColorSpace)
useColorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
vImage_Error err = vImageConvert_BGR888toRGBA8888(srcBuffer, NULL, 0xff, dstBuffer, NO, 0);
CGDataProviderRef newProvider = CGDataProviderCreateWithData(NULL,dstBuffer->data,w*h*4,myReleaseProvider);
CGImageRef myImageRGBA = CGImageCreate(w, h, 8, 32, w*4, useColorSpace, kCGBitmapByteOrderDefault | kCGImageAlphaLast, newProvider, NULL, false, kCGRenderingIntentDefault);
CGDataProviderRelease(newProvider);
// store myImageRGBA in an array of frames (using NSObject wrappers) for later access (setNeedsDisplay:)
...
}
- (void)drawRect:(NSRect)dirtyRect
{
// this is called at 30-60 fps
CGImageRef storedImage = ...; // retrieve from array
NSBitmapImageRep *repImg = [[NSBitmapImageRep alloc] initWithCGImage:storedImage];
CGRect myFrame = CGRectMake(0,0,CGImageGetWidth(storedImage),CGImageGetHeight(storedImage));
[repImg drawInRect:myFrame fromRect:myFrame operation:NSCompositeCopy fraction:1.0 respectFlipped:TRUE hints:nil];
// free image from array (not listed here)
}
// this is called when the CGDataProvider is ready to release its data
void myReleaseProvider (void *info, const void *data, size_t size)
{
if (data) {
free((void *)data);
data=nil;
}
}
Use CGColorSpaceCreateDeviceRGB instead of genericRGB to avoid colorspace conversion inside CG. Use kCGImageAlphaNoneSkipLast instead of kCGImageAlphaLast since we know alpha is opaque to allow for a copy instead of a blend.
After you make those changes, it would be useful to run an Instruments time profile on it to show where the time is going.
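Applied to the snippet above, the two suggested changes would look roughly like this (only the colorspace and the bitmap-info flags differ; everything else is the questioner's call):

// sketch: device RGB avoids a colorspace conversion inside CG, and "skip last"
// tells CG the alpha byte is meaningless so it can copy instead of blend
if (!useColorSpace)
    useColorSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef myImageRGBA = CGImageCreate(w, h, 8, 32, w*4, useColorSpace,
                                       kCGBitmapByteOrderDefault | kCGImageAlphaNoneSkipLast,
                                       newProvider, NULL, false, kCGRenderingIntentDefault);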

iOS: Receiving video from the network

UPDATE
- I have fixed some mistakes in the code below, and the images are now displayed on the other device, but I have another problem. While video capture is open, the "master" device sends data continuously; sometimes the capture appears on the "slave" device, then after a very short time the image "blinks" to blank, and this repeats for a while. Any idea about this?
I'm working on an app that needs to send live camera capture and live microphone capture to another device on the network.
I have made the connection between devices using a TCP server and published it with Bonjour; this works like a charm.
The most important part is sending and receiving video and audio from the "master" device and rendering it on the "slave" device.
First, here is the piece of code where the app gets the camera sample buffer and transforms it into a UIImage:
@implementation AVCaptureManager (AVCaptureVideoDataOutputSampleBufferDelegate)
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
dispatch_sync(dispatch_get_main_queue(), ^{
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
NSData *data = UIImageJPEGRepresentation(image, 0.2);
[self.delegate didReceivedImage:image];
[self.delegate didReceivedFrame:data];
[pool drain];
});
}
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
size_t bytesPerRow = width * 4;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
CGContextRef context = CGBitmapContextCreate(
baseAddress,
width,
height,
8,
bytesPerRow,
colorSpace,
kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little
);
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
UIImage *image = [UIImage imageWithCGImage:quartzImage];
CGImageRelease(quartzImage);
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
return image;
}
@end
The message "[self.delegate didReceivedImage:image];" is just to test the image capture on master device, and this image works on capture device.
The next is about how to I send it to network:
- (void) sendData:(NSData *)data
{
if(_outputStream && [_outputStream hasSpaceAvailable])
{
NSInteger bytesWritten = [_outputStream write:[data bytes] maxLength:[data length]];
if(bytesWritten < 0)
NSLog(#"[ APP ] Failed to write message");
}
}
Note that I'm using a run loop to write and read the streams; I think this is better than constantly opening and closing streams.
Next, I receive the NSStreamEventHasBytesAvailable event on the slave device; the piece of code that handles it is:
case NSStreamEventHasBytesAvailable:
/* I can't start a case without an expression, why not? */
NSLog(#"[ APP ] stream handleEvent NSStreamEventHasBytesAvailable");
NSUInteger bytesRead;
uint8_t buffer[BUFFER_SIZE];
while ([_inputStream hasBytesAvailable])
{
bytesRead = [_inputStream read:buffer maxLength:BUFFER_SIZE];
NSLog(#"[ APP ] bytes read: %i", bytesRead);
if(bytesRead)
[data appendBytes:(const void *)buffer length:sizeof(buffer)];
}
[_client writeImageWithData:data];
break;
The value of BUFFER_SIZE is 32768.
I think the while block is not necessary, but I use it so that if I can't read all the available bytes in the first iteration, I can read them in the next.
So, this is the point: the stream arrives correctly, but the image serialized into the NSData seems to be corrupted. Next, I just send the data to the client...
[_client writeImageWithData:data];
... and create a UIImage with the data in the client class, simply like this...
[camPreview setImage:[UIImage imageWithData:data]];
In camPreview (yes, it is a UIImageView) I have an image just to show a placeholder on the screen; when I get the image from the network and pass it to camPreview, the placeholder goes blank.
Another thing is the output: when I start the capture, during the first chunks of data I receive, I get these messages from the system:
Error: ImageIO: JPEG Corrupt JPEG data: 28 extraneous bytes before marker 0xbf
Error: ImageIO: JPEG Unsupported marker type 0xbf
After a short while, I no longer get these messages.
The point is to find the cause of the images not being displayed on the "slave" device.
Thanks.
I am not sure how often you are sending images, but even if it is not very often, I would scan for the SOI and EOI markers in the JPEG data to ensure you have all the data (a sketch of that check follows). Here is a post I quickly found.
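A minimal sketch of that marker check, assuming each frame arrives as a single complete JPEG in an NSData (the helper name is illustrative):

// A JPEG begins with the SOI marker 0xFFD8 and ends with the EOI marker 0xFFD9.
// Only try to decode the frame once both markers are present.
static BOOL LooksLikeCompleteJPEG(NSData *data)
{
    if (data.length < 4) return NO;
    const uint8_t *bytes = data.bytes;
    BOOL hasSOI = (bytes[0] == 0xFF && bytes[1] == 0xD8);
    BOOL hasEOI = (bytes[data.length - 2] == 0xFF && bytes[data.length - 1] == 0xD9);
    return hasSOI && hasEOI;
}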
I found an answer for checking the JPEG format before rendering.
This resolved my problem, and now I can display the video capture from a "master" iOS device on a "slave" iOS device.