Building NSImage from bytes - objective-c

I'm trying to build an NSImage from some strange bytes.
I'm using the Blackmagic SDK to get the bytes of a received frame:
unsigned char* frame3 = NULL;
unsigned char* frame2 = (Byte*)malloc(699840);
videoFrame->GetBytes((void**)&frame3);
memcpy(frame2, frame3, 699840);
// The length must be the buffer size; the original sizeof(frame2) only
// yields the size of the pointer itself (4 or 8 bytes), not 699840.
NSData* data = [NSData dataWithBytes:frame2 length:699840];
NSImage *image = [[NSImage alloc] initWithData:data];
// (for now I hard-code 699840 because I know the frame's size)
The reason I say the bytes are strange is that the content of frame2 looks like this:
printf("content: %s",frame2);
\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200\200 ... \200 (repeating to the end)
It should be a blank black frame.
Does anybody know how I could figure out what is going on here?

You should use these APIs to write the image data out to a file:
NSString *filePath = [yourDirectory stringByAppendingPathComponent:@"imageName.jpg"];
[data writeToFile:filePath atomically:YES];
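A note on why initWithData: fails here: it expects encoded image data (TIFF, PNG, JPEG, and so on), not a raw pixel buffer. If your capture format is 8-bit BGRA (an assumption on my part; DeckLink frames are often raw YUV, which would need a pixel-format conversion first), a minimal sketch for wrapping the raw bytes directly looks like this:

unsigned char *bytes = NULL;
videoFrame->GetBytes((void **)&bytes);
// Wrap the raw pixels in a CGBitmapContext; width/height/rowBytes come
// from the frame itself rather than a hard-coded byte count.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(bytes,
                                         videoFrame->GetWidth(),
                                         videoFrame->GetHeight(),
                                         8,                         // bits per component
                                         videoFrame->GetRowBytes(),
                                         colorSpace,
                                         kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst); // BGRA
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
NSImage *image = [[NSImage alloc] initWithCGImage:cgImage size:NSZeroSize];
CGImageRelease(cgImage);
CGContextRelease(ctx);
CGColorSpaceRelease(colorSpace);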

Related

Appending random bytes to an image doesn't change the outcome?

To simplify the situation: I want to append a couple of random bytes to an image to change its MD5 hash every time.
I have the code set up to look up the image and then create an NSImage. After that, it converts the NSImage to NSMutableData, which gives me the opportunity to append random bytes. I finish by exporting the altered image to the desktop.
That all works fine and dandy until I run my program twice and compare the MD5 hashes of the two outputs. They are exactly the same! It doesn't matter whether I append 1 or 1000 random bytes; the two outputs always hash identically.
My code:
- (void)createNewImage:(NSString *)filePath {
    // NSImage from path
    NSImage *newImage = [[NSImage alloc] initWithContentsOfFile:filePath];

    // NSData to NSMutableData
    NSData *imgData = [newImage TIFFRepresentation];
    NSMutableData *mutableData = [imgData mutableCopy];

    // Get the random bytes
    NSData *randomData = [self createRandomBytes:10];

    // Append random data to new image
    [mutableData appendData:randomData];
    (etc...)

    // Create file path for the new image
    NSString *fileName = @"/Users/Computer/Desktop/MD5/newImage.jpg";

    // Cache the new image
    NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:mutableData];
    NSDictionary *imageProps = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:1.0] forKey:NSImageCompressionFactor];
    NSData *newData = [imageRep representationUsingType:NSJPEGFileType properties:imageProps];
    [newData writeToFile:fileName atomically:NO];
}

- (NSData *)createRandomBytes:(NSUInteger)amount {
    return [[NSFileHandle fileHandleForReadingAtPath:@"/dev/random"] readDataOfLength:amount];
}
UPDATE:
With the help of picciano's answer, I found that writing the edited NSData out directly achieves my goal:
[mutableData writeToFile:fileName atomically:NO];
HOWEVER, the image is significantly larger: the source image is 182 KB, while the new images are 503 KB. picciano's answer explains why this happens, but does anyone have a workaround for the inflation?
You are adding random data, but it is not being used in creating the image. When the image is converted back to a JPG data representation, only the valid portion of the image data is used.
To verify this, check the length of your newData object.
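One possible workaround for the inflation, sketched here as a suggestion rather than part of the original answer: skip the decode/re-encode round trip entirely and append the random bytes to the original file's data. JPEG decoders ignore trailing bytes after the end-of-image marker, so the file still opens, the MD5 still changes, and the size grows only by the number of bytes appended. Reusing filePath and fileName from the question's code:

// Append to the original file bytes instead of a TIFF/JPEG re-encode;
// the output is the source file plus 10 trailing random bytes.
NSMutableData *fileData = [[NSData dataWithContentsOfFile:filePath] mutableCopy];
[fileData appendData:[self createRandomBytes:10]];
[fileData writeToFile:fileName atomically:NO];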

New CGImage or CGImageRef from std::string using base64

Hi everyone,
I've been working on this for days, so here's a little bit of background. I'm sending an image to a server using protobuf. The image comes directly from the camera, so it is neither a JPEG nor a PNG. I found code to get the data from the UIImage by using its CGImage to create a CGImageRef. See the following code:
- (UIImage *)testProcessedImage:(UIImage *)processedImage
{
CGImageRef imageRef = processedImage.CGImage;
NSData *data1 = (NSData *) CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
Google protobuf uses C++ code to send and receive the bytes to and from the server. When I tried to get the data bytes back into NSData and alloc/init a UIImage with that data, the UIImage was always nil. This tells me that my NSData is not in the correct format.
At first, I thought my issue was with the C++ conversion, as shown in my previous question here. But after much frustration, I cut out everything in the middle and just created a UIImage with the CGImageRef, and it worked. See the following code:
- (UIImage *)testProcessedImage:(UIImage *)processedImage
{
CGImageRef imageRef = processedImage.CGImage;
NSData *data1 = (NSData *) CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
// Added this line and cut out everything in the middle
UIImage *image = [UIImage imageWithCGImage:imageRef];
Following is a description of what I ultimately need to do. There are two parts. Part 1 takes a UIImage and converts it into a std::string.
take a UIImage
get the NSData from it
convert the data to unsigned char *
stuff the unsigned char * into a std::string
The string is what we would receive from the protobuf call. Part 2 takes the data from the string and converts it back into the NSData format to populate a UIImage. Following are the steps to do that:
convert the std::string to char array
convert the char array to a const char *
put the char * into NSData
return NSData
Now, with that background information, and armed with the fact that populating the UIImage with a CGImageRef works (meaning data in that format is the correct format for populating a UIImage), I'm looking for help figuring out how to get the base64 data into either a CFDataRef or a CGImageRef. Below is my test method:
- (UIImage *)testProcessedImage:(UIImage *)processedImage
{
    CGImageRef imageRef = processedImage.CGImage;
    NSData *data1 = (NSData *)CFBridgingRelease(CGDataProviderCopyData(CGImageGetDataProvider(imageRef)));
    unsigned char *pixels = (unsigned char *)[data1 bytes];
    unsigned long size = [data1 length];

    // ***************************************************************************
    // This is where we would call transmit and receive the bytes in a std::string
    //
    // The following line simulates that:
    //
    const std::string byteString(pixels, pixels + size);
    //
    // ***************************************************************************

    // converting to base64
    std::string encoded = base64_encode(reinterpret_cast<const unsigned char*>(byteString.c_str()), byteString.length());

    // retrieving base64
    std::string decoded = base64_decode(encoded);

    // put byte array back into NSData format
    NSUInteger usize = decoded.length();
    const char *bytes = decoded.data();
    NSData *data2 = [NSData dataWithBytes:(const void *)bytes length:sizeof(unsigned char)*usize];
    NSLog(@"examine data");

    // But when I try to alloc init a UIImage with the data, the image is nil
    UIImage *image2 = [[UIImage alloc] initWithData:data2];
    NSLog(@"examine image2");

    // *********** Below is my convoluted approach at CFDataRef and CGImageRef ****************
    CFDataRef dataRef = CFDataCreate(NULL, (const UInt8 *)decoded.data(), decoded.length());
    NSData *myData = (__bridge NSData *)dataRef;
    //CGDataProviderRef ref = CGDataProviderCreateWithCFData(dataRef);
    id sublayer = (id)[UIImage imageWithCGImage:imageRef].CGImage;
    UIImage *image3 = [UIImage imageWithCGImage:(__bridge CGImageRef)(sublayer)];
    return image3;
}
As any casual observer can see, I need help. HELP!!! I've tried some of the other questions on SO, such as this one, this one, and this one, and cannot find the information I need for a solution. I admit part of my problem is that I don't understand much about images (like RGBA values and other stuff).
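For what it's worth, there is a likely explanation for the nil UIImage: CGDataProviderCopyData returns the raw decoded pixels with no width, height, or pixel-format metadata attached, and -[UIImage initWithData:] only understands encoded image formats (PNG, JPEG, etc.). A minimal sketch of the round trip that sends an encoded representation through the std::string instead (a different approach from the pixel-buffer pipeline above, shown only to illustrate the idea):

// Part 1: UIImage -> std::string (PNG keeps this lossless)
NSData *pngData = UIImagePNGRepresentation(processedImage);
std::string byteString((const char *)[pngData bytes], (size_t)[pngData length]);

// ... transmit/receive, base64 encode/decode as before ...

// Part 2: std::string -> UIImage; this works because the bytes are now a
// self-describing PNG rather than a bare pixel buffer
NSData *received = [NSData dataWithBytes:byteString.data() length:byteString.size()];
UIImage *rebuilt = [[UIImage alloc] initWithData:received];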

Image size anomaly

I have an image in the form of an NSURL as input. I converted this URL to an NSImage and then to NSData, from which I could get a CGImageRef. This imageRef helped me extract raw information from the image, such as the height, width, bytesPerRow, etc.
Here's the code that I used:
NSString * urlName = [url path];
NSImage *image = [[NSImage alloc] initWithContentsOfFile:urlName];
NSData *imageData = [image TIFFRepresentation];
CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)CFBridgingRetain(imageData), NULL);
CGImageRef imageRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
NSUInteger numberOfBitsPerPixel = CGImageGetBitsPerPixel(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
...
...
Now, I checked the size of the image using:
int sz = [imageData length];
which is different from int sz' = bytesPerRow * height.
I cannot understand why there is such a difference; sz is actually half of sz'.
Am I making some mistake while extracting the various pieces of info? From what I can gather, some compression may be happening while converting the image to NSData. In that case, what should I use to get reliable data?
I am new to the world of image processing in Objective-C, so please bear with me!
P.S. I actually checked the size of the file that I am getting as input in the form of the NSURL, and it is the same as sz.
Try This:
Instead of
NSData *imageData = [image TIFFRepresentation];
use this:
NSData *imageData = [image TIFFRepresentationUsingCompression:NSTIFFCompressionLZW factor:0];
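A side note on why the two numbers differ at all (my reading, not part of the answer above): [imageData length] measures the size of the encoded TIFF container, while bytesPerRow * height measures the decoded pixel buffer, so the two only coincide when the TIFF is written uncompressed and without extra metadata. A quick way to inspect both:

// Encoded (container) size vs. decoded (raw pixel) size; a mismatch is
// expected whenever compression or row padding is involved.
NSUInteger encodedSize = [imageData length];
NSUInteger decodedSize = CGImageGetBytesPerRow(imageRef) * CGImageGetHeight(imageRef);
NSLog(@"encoded: %lu bytes, decoded: %lu bytes",
      (unsigned long)encodedSize, (unsigned long)decodedSize);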

How to run multiple tasks at the same time safely?

I'm developing an iPhone app and I'm relatively new to Objective-C, so I hope someone can give me a clue.
What I'm doing is reading a file in chunks and encoding each chunk into base64, and everything is working fine. The problem is that the line NSString *str = [data base64EncodedString]; takes a little bit of time, because I'm encoding chunks of 256 KB. With one file that's no problem, but I'm encoding image files, so imagine that I encode 10 images: that is a lot of chunks per image, and the process can be slow.
This is the process:
*Get the file.
*Read a chunk of 256 KB from the file.
*Encode the chunk to base64.
*Save the encoded chunk and repeat until there are no more bytes to read from the file.
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library assetForURL:referenceURL resultBlock:^(ALAsset *asset)
{
    NSUInteger chunkSize = 262144;
    uint8_t *buffer = calloc(chunkSize, sizeof(*buffer));
    ALAssetRepresentation *rep = [asset defaultRepresentation];
    NSUInteger length = [rep size];
    self.requestsToServer = [[NSMutableArray alloc] init];
    NSUInteger offset = 0;
    do {
        NSUInteger bytesCopied = [rep getBytes:buffer fromOffset:offset length:chunkSize error:nil];
        offset += bytesCopied;
        NSData *data = [[NSData alloc] initWithBytes:buffer length:bytesCopied];
        NSString *str = [data base64EncodedString];
        // After this I add the str to a NSMutableURLRequest and I store the
        // request in a NSMutableArray for later use.
    } while (offset < length);
    free(buffer);
    buffer = NULL;
}
failureBlock:^(NSError *error)
{
}];
I want to start other threads so the chunks can be encoded in parallel, and I need to know when the whole process finishes; that way, while one chunk is being encoded, another 3 or 4 chunks could be encoding at the same time.
How can I implement this safely, and is it a good idea at all?
Thanks for your time.
Look at NSOperation and NSOperationQueue.
http://developer.apple.com/library/ios/DOCUMENTATION/Cocoa/Reference/NSOperationQueue_class/Reference/Reference.html
Simply create one NSOperation per chunk, pass each one the chunk it needs to encode, and queue them up.
You can tell the queue how many operations can run simultaneously.
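A minimal sketch of that setup, assuming the chunks have already been sliced into an NSArray of NSData objects named chunks (base64EncodedString is the same category method used in the question):

NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 4; // encode up to 4 chunks at once

for (NSData *chunk in chunks) {
    [queue addOperationWithBlock:^{
        NSString *str = [chunk base64EncodedString];
        // Hop back to the main queue before touching shared state such as
        // self.requestsToServer; NSMutableArray is not thread-safe.
        [[NSOperationQueue mainQueue] addOperationWithBlock:^{
            // build the NSMutableURLRequest with str and store it here
        }];
    }];
}

Knowing when everything has finished can be done with a final operation that depends on all the others, or by observing the queue's operationCount.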
There are lots of good options for doing chunks of work in parallel for iOS.
Take a look at the Apple's Concurrency Programming Guide to get you going.

Playing video on iOS using OpenGL-ES

I'm trying to play a video (MP4/H.263) on iOS, but I'm getting really fuzzy results.
Here's the code to initialize the asset reading:
mTextureHandle = [self createTexture:CGSizeMake(400, 400)];
NSURL *url = [NSURL fileURLWithPath:file];
mAsset = [[AVURLAsset alloc] initWithURL:url options:nil];
NSArray *tracks = [mAsset tracksWithMediaType:AVMediaTypeVideo];
mTrack = [tracks objectAtIndex:0];
NSLog(@"Tracks: %lu", (unsigned long)[tracks count]);

NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary *settings = [[NSDictionary alloc] initWithObjectsAndKeys:value, key, nil];

mOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:mTrack outputSettings:settings];
mReader = [[AVAssetReader alloc] initWithAsset:mAsset error:nil];
[mReader addOutput:mOutput];
So much for the reader init, now the actual texturing:
CMSampleBufferRef sampleBuffer = [mOutput copyNextSampleBuffer];
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
glBindTexture(GL_TEXTURE_2D, mTextureHandle);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 600, 400, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress( pixelBuffer ));
CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
CFRelease(sampleBuffer);
Everything works well ... except that the rendered image comes out sliced and skewed.
I've even tried looking into AVAssetTrack's preferredTransform matrix, to no avail, since it always returns CGAffineTransformIdentity.
Side-note: If I switch the source to camera, the image gets rendered fine. Am I missing some decompression step? Shouldn't that be handled by the asset reader?
Thanks!
Code: https://github.com/shaded-enmity/objcpp-opengl-video
I think the CMSampleBuffer uses row padding for performance reasons, so you need to use the right width for the texture.
Try setting the width of the texture to CVPixelBufferGetBytesPerRow(pixelBuffer) / 4
(if your video format uses 4 bytes per pixel; adjust if it uses another size).
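Applied to the texturing code above, that suggestion would look roughly like this (assuming 32-bit BGRA, i.e. 4 bytes per pixel):

// Derive the texture width from the buffer's bytes-per-row instead of the
// nominal video width, so any row padding is accounted for.
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t paddedWidth = bytesPerRow / 4; // 4 bytes per BGRA pixel
size_t height = CVPixelBufferGetHeight(pixelBuffer);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             (GLsizei)paddedWidth, (GLsizei)height, 0,
             GL_BGRA_EXT, GL_UNSIGNED_BYTE,
             CVPixelBufferGetBaseAddress(pixelBuffer));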