Is there a way to use compressed textures with SceneKit background CubeMaps?

SceneKit uses a specific SCNMaterialProperty for the 3D background of a scene.
We have to set scnScene.background.contents to one of:
A vertical strip (a single image whose height is 6x its width)
A horizontal strip (a single image whose width is 6x its height)
A spherical projection (a single image whose width is 2x its height)
An array of 6 square images
My background images are currently in JPG or PNG format, but they are slow to decompress, and I would like to use compressed textures (PVRTC or ASTC formats).
I cannot use compressed textures with the vertical strip, horizontal strip, or spherical projection, since those are not square images, and PVRTC/ASTC require square textures under iOS.
I tried PVRTC-compressing the array of 6 square images, but background.contents expects an array of 6 UIImages, and although there is no error in the log, I don't see any 3D background when I assign an array of 6 SKTextures to background.contents.
My question is the following:
Is there a way to use PVRTC or ASTC textures as a 3D SceneKit background (cube map, spherical projection...)?

I found the solution. For anyone interested:
You can assign a Model I/O texture (MDLTexture) to scnScene.background.contents.
You can load a cube-map Model I/O texture with textureCubeWithImagesNamed:, passing an array of paths to the 6 PVRTC-compressed textures:
NSURL* posx = [artworkUrl URLByAppendingPathComponent:@"posx.pvr"];
NSURL* negx = [artworkUrl URLByAppendingPathComponent:@"negx.pvr"];
NSURL* posy = [artworkUrl URLByAppendingPathComponent:@"posy.pvr"];
NSURL* negy = [artworkUrl URLByAppendingPathComponent:@"negy.pvr"];
NSURL* posz = [artworkUrl URLByAppendingPathComponent:@"posz.pvr"];
NSURL* negz = [artworkUrl URLByAppendingPathComponent:@"negz.pvr"];
MDLTexture* cubeTexture = [MDLTexture textureCubeWithImagesNamed:@[posx.path, negx.path, posy.path, negy.path, posz.path, negz.path]];
scnScene.background.contents = cubeTexture;

Related

AVAssetImageGenerator copyCGImageAtTime:actualTime:error: generates 8-bit images from 10-bit video

I've got a 10-bit video on my Mac from which I want to extract frames with the full 10 bits per channel. I load my asset and verify that it's 10-bit:
CMFormatDescriptionRef formatDescriptions = (__bridge CMFormatDescriptionRef)(track.formatDescriptions[0]);
float frameRate = track.nominalFrameRate;
int bitDepth = ((NSNumber*)CMFormatDescriptionGetExtension(formatDescriptions, (__bridge CFStringRef)@"BitsPerComponent")).intValue;
bitDepth is 8 for many videos, but 10 for this video (which I know was recorded in 10-bit anyway), so AVFoundation recognizes the channel bit depth correctly.
However, I want to generate single frames from the video using AVAssetImageGenerator's copyCGImageAtTime:actualTime:error: method:
NSError *err;
NSImage *img = [[NSImage alloc] initWithCGImage:[imageGenerator copyCGImageAtTime:time actualTime:NULL error:&err] size:dimensions];
The image is generated successfully with no errors, but when I check it I see that it's 8 bits/channel:
(lldb) po img
<NSImage 0x600001793f80 Size={3840, 2160} Reps=(
"<NSCGImageSnapshotRep:0x6000017a5a40 cgImage=<CGImage 0x100520800>
(DP)\n\t<<CGColorSpace 0x60000261ae80> (kCGColorSpaceICCBased;
kCGColorSpaceModelRGB; Composite NTSC)>\n\t\twidth = 3840, height = 2160,
bpc = 8, bpp = 32, row bytes = 15360
\n\t\tkCGImageAlphaPremultipliedFirst | kCGImageByteOrder32Big |
kCGImagePixelFormatPacked \n\t\tis mask? No, has masking color? No,
has soft mask? No, has matte? No, should interpolate? Yes>"
)>
How do I generate full, lossless 10-bit (or, for compatibility, 16-bit or 32-bit) frames from a 10-bit video?
I'm on macOS 10.14.
Due to the lack of any information about this, I've given up on AVAssetImageGenerator and gone with embedding ffmpeg in my app and invoking it to extract 10-bit frames.
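For reference, a rough sketch of what invoking a bundled ffmpeg binary might look like; the binary location, the seek time, and the choice of rgb48be/PNG output are placeholder assumptions, not necessarily what was actually used:
// Sketch: extract a single frame as a 16-bit-per-channel PNG by invoking a
// bundled ffmpeg binary. `videoURL`, `outputURL`, and the timestamp are
// placeholder values for illustration only.
NSTask *task = [[NSTask alloc] init];
task.launchPath = [[NSBundle mainBundle] pathForResource:@"ffmpeg" ofType:nil];
task.arguments = @[ @"-ss", @"5.0",          // seek to the desired time
                    @"-i", videoURL.path,    // source 10-bit video
                    @"-frames:v", @"1",      // one frame only
                    @"-pix_fmt", @"rgb48be", // 16 bits per channel
                    outputURL.path ];        // e.g. frame.png
[task launch];
[task waitUntilExit];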

How to combine large JPEGs in a Cocoa Mac application

In our Mac application built in Cocoa, we would like to combine 5 large JPEG images into one JPEG. We have the following images:
1.jpeg 50000px wide X 5000px high
2.jpeg 50000px wide X 5000px high
3.jpeg 50000px wide X 5000px high
4.jpeg 50000px wide X 5000px high
5.jpeg 50000px wide X 5000px high
We would like to combine these images one above the other to form an output jpeg:
50000px wide X 25000px high
The problem is that the resulting JPEG is very large, and this causes memory issues when we use the following approach to produce the output JPEG.
NSRect imageRect = NSMakeRect(0.0, 0.0, 50000, 25000);
NSBitmapImageRep *savedImageBitmapRep = [[NSBitmapImageRep alloc]
initWithBitmapDataPlanes:nil
pixelsWide:imageRect.size.width
pixelsHigh:imageRect.size.height
bitsPerSample:8
samplesPerPixel:4
hasAlpha:YES
isPlanar:NO
colorSpaceName:NSCalibratedRGBColorSpace
bitmapFormat:0
bytesPerRow:(4 * imageRect.size.width)
bitsPerPixel:32];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext
setCurrentContext:[NSGraphicsContext graphicsContextWithBitmapImageRep:
savedImageBitmapRep]];
// Read 1.jpeg, 2.jpeg, 3.jpeg, 4.jpeg, 5.jpeg as NSImage
// and draw them on the current context in their respective location
[NSGraphicsContext restoreGraphicsState];
NSMutableData *imageData = [NSMutableData data];
CGImageDestinationRef imageDest = CGImageDestinationCreateWithData(
    (__bridge CFMutableDataRef)imageData, kUTTypeJPEG, 1, NULL);
// `properties` holds the JPEG encoding options, e.g. compression quality
NSDictionary *properties = @{ (__bridge NSString *)kCGImageDestinationLossyCompressionQuality : @0.9 };
CGImageDestinationAddImage(imageDest, [savedImageBitmapRep CGImage],
    (__bridge CFDictionaryRef)properties);
CGImageDestinationFinalize(imageDest);
if (imageDest != NULL) {
    CFRelease(imageDest);
}
//write imageData to a JPEG file
How can we achieve our objective without facing memory issues?
"Memory issues" is vague but I assume you simply mean that you want to avoid the 4.5+ GBs of memory required to hold 50,000 x 25,000 pixels.
The only way to avoid that is to drop down to a lower level API which lets you encode each MCU row (each 8 or 16 rows of pixels) separately at a time. libjpeg supports this, for example. You would load and render the minimum required data to produce such a row with whatever minimum memory constraints you have in mind, and then pass that row off to the encoder, before moving to the next row.
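As an illustration only, a streaming encode with libjpeg might be structured roughly like this; render_row is a hypothetical callback that produces one scanline of the combined image (for instance by decoding just the needed strip of the corresponding input JPEG):
#include <stdio.h>
#include <stdlib.h>
#include <jpeglib.h>

// hypothetical: fills `row` with `width` RGB pixels for scanline `y` of the
// combined image, e.g. by decoding only the relevant strip of one input JPEG
extern void render_row(unsigned char *row, int y, int width);

static void write_combined_jpeg(const char *outPath, int width, int height)
{
    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;
    FILE *outfile = fopen(outPath, "wb");

    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);
    jpeg_stdio_dest(&cinfo, outfile);

    cinfo.image_width = width;        // e.g. 50000
    cinfo.image_height = height;      // e.g. 25000
    cinfo.input_components = 3;       // RGB
    cinfo.in_color_space = JCS_RGB;
    jpeg_set_defaults(&cinfo);
    jpeg_set_quality(&cinfo, 85, TRUE);
    jpeg_start_compress(&cinfo, TRUE);

    // only one scanline of the output ever lives in memory at a time
    unsigned char *row = malloc((size_t)width * 3);
    while (cinfo.next_scanline < cinfo.image_height) {
        render_row(row, cinfo.next_scanline, width);
        JSAMPROW rowPointer[1] = { row };
        jpeg_write_scanlines(&cinfo, rowPointer, 1);
    }
    free(row);

    jpeg_finish_compress(&cinfo);
    jpeg_destroy_compress(&cinfo);
    fclose(outfile);
}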
You can resize them to something smaller for display; it's only when saving/storing that you work directly with the full sizes.

NSImage Image Size With Multiple Layers

I have a Mac (not iOS) application that allows the user to select one or more images with NSOpenPanel. What I have trouble with is getting correct dimensions for multi-layered images. If an image contains one layer or is compressed, the following gets me the correct image dimensions from the file path.
NSImage *image0 = [[NSImage alloc] initWithContentsOfFile:path];
CGFloat w = image0.size.width;
CGFloat h = image0.size.height;
But if I select an image that has multiple layers, I get strange numbers. For example, I have a single-layer image whose dimensions are 1,440 x 900 px according to Fireworks. If I add a small layer with a circle, save the image as PNG, and read it, I get 1,458 x 911 px. According to this topic and this topic, the suggestion is to read the largest layer. Okay. So I've created a function as follows.
- (CGSize)getimageSize:(NSString *)filepath {
    NSArray *imageReps = [NSBitmapImageRep imageRepsWithContentsOfFile:filepath];
    NSInteger width = 0;
    NSInteger height = 0;
    for (NSImageRep *imageRep in imageReps) {
        if ([imageRep pixelsWide] > width) width = [imageRep pixelsWide];
        if ([imageRep pixelsHigh] > height) height = [imageRep pixelsHigh];
    }
    NSSize size = CGSizeMake((CGFloat)width, (CGFloat)height);
    return size;
}
Using the function above, I get the wrong dimensions (1,458 x 911 px) instead of 1,440 x 900 px. Actually, I had the same problem when I was developing Mac applications with REAL Studio until a few years ago. So how can I get the correct dimensions when an image contains multiple layers?
Thank you for your advice.

iOS OpenGL using parameters for glTexImage2D to make a UIImage?

I am working through some existing code for a project I am assigned to.
I have a successful call to glTexImage2D like this:
glTexImage2D(GL_TEXTURE_2D, 0, texture->format, texture->widthTexture, texture->heightTexture, 0, texture->format, texture->type, texture->data);
I would like to create an image (preferably a CGImage or UIImage) using the variables passed to glTexImage2D, but I don't know if that's possible.
I need to create many sequential images (many per second) from an OpenGL view and save them for later use.
Should I be able to create a CGImage or UIImage using the variables I pass to glTexImage2D?
If so, how should I do it?
If not, why not, and what do you suggest for my task of saving/capturing the contents of my OpenGL view many times per second?
edit: I have already successfully captured images using some techniques provided by Apple with glReadPixels, etc. I want something faster so I can get more images per second.
edit: after reviewing and adding the code from Thomson, here is the resulting image:
The image very slightly resembles what it should look like, except duplicated ~5 times horizontally and with some random black space underneath.
note: the video data (each frame) is coming over an ad-hoc network connection to the iPhone. I believe the camera is capturing each frame in the YCbCr color space.
edit: after further reviewing Thomson's code:
I have copied your new code into my project and got a different image as a result:
width: 320
height: 240
I am not sure how to find the number of bytes in texture->data; it is a void pointer.
edit: format and type
texture.type = GL_UNSIGNED_SHORT_5_6_5
texture.format = GL_RGB
Hey binnyb, here's the solution to creating a UIImage using the data stored in texture->data. v01d is certainly right that you're not going to get the UIImage as it appears in your GL framebuffer, but it'll get you an image from the data before it has passed through the framebuffer.
Turns out your texture data is in 16 bit format, 5 bits for red, 6 bits for green, and 5 bits for blue. I've added code for converting the 16 bit RGB values into 32 bit RGBA values before creating a UIImage. I'm looking forward to hearing how this turns out.
float width = 512;
float height = 512;
int channels = 4;
// create a buffer for our image after converting it from 565 rgb to 8888 rgba
u_int8_t* rawData = (u_int8_t*)malloc(width*height*channels);
// treat the void* texture data as bytes so we can index into it
u_int8_t* src = (u_int8_t*)texture->data;
// unpack the 5,6,5 pixel data into 32 bit RGBA
for (int i = 0; i < width*height; ++i)
{
    // append two adjacent bytes in texture->data into a 16 bit int
    u_int16_t pixel16 = (src[i*2] << 8) + src[i*2+1];
    // mask and shift each component out of the 16-bit pixel, then scale the
    // 5/6-bit value up to the 8-bit range. Alpha is set to 0.
    rawData[channels*i]   = ((pixel16 & 0xF800) >> 11) / 31.0 * 255;  // red
    rawData[channels*i+1] = ((pixel16 & 0x07E0) >> 5)  / 63.0 * 255;  // green
    rawData[channels*i+2] =  (pixel16 & 0x001F)        / 31.0 * 255;  // blue
    rawData[channels*i+3] = 0;                                        // alpha
}
// same as before
int bitsPerComponent = 8;
int bitsPerPixel = channels*bitsPerComponent;
int bytesPerRow = channels*width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
                                                          rawData,
                                                          channels*width*height,
                                                          NULL);
// note: rawData must stay valid for as long as the image is in use; free it
// (or give the data provider a release callback) only once you are done with it
CGImageRef imageRef = CGImageCreate(width,
                                    height,
                                    bitsPerComponent,
                                    bitsPerPixel,
                                    bytesPerRow,
                                    colorSpaceRef,
                                    bitmapInfo,
                                    provider, NULL, NO, renderingIntent);
UIImage *newImage = [UIImage imageWithCGImage:imageRef];
The code for creating a new image comes from Creating UIImage from raw RGBA data, thanks to Rohit. I've tested this with our original 320x240 image dimensions, having converted a 24-bit RGB image into 5,6,5 format and then up to 32-bit. I haven't tested it on a 512x512 image, but I don't expect any problems.
You could make an image from the data you are sending to GL, but I doubt that's really what you want to achieve.
My guess is you want the output of the frame buffer. To do that you need glReadPixels(). Bear in mind that for a large buffer (say 1024x768) it will take seconds to read the pixels back from GL; you won't get more than one per second.
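For completeness, the readback itself is just one call; a minimal sketch, assuming width and height match the GL view's backing size:
// Sketch: read the currently bound framebuffer back into an RGBA byte buffer.
// GL's origin is bottom-left, so the rows come back vertically flipped.
GLubyte *pixels = (GLubyte *)malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// wrap `pixels` in a CGDataProvider/CGImageCreate as in the answer above to
// obtain a UIImage, then free the buffer once you are done with the image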
You should be able to use the UIImage initializer imageWithData: for this. All you need is to ensure that the data in texture->data is in a structured format that the UIImage constructor recognizes.
NSData* imageData = [NSData dataWithBytes:texture->data length:(3*texture->widthTexture*texture->heightTexture)];
UIImage* theImage = [UIImage imageWithData:imageData];
The types that imageWithData: supports are not well documented, but you can create NSData from .png, .jpg, .gif, and I presume .ppm files without any difficulty. If texture->data is in one of those binary formats I suspect you can get this running with a little experimentation.

How to load PNG with alpha with Cocoa?

I'm developing an iPhone OpenGL application, and I need to use some textures with transparency. I have saved the images as PNGs. I already have all the code to load PNGs as OpenGL textures and render them. This is working fine for all images that don't have transparency (all alpha values are 1.0). However, now that I'm trying to load and use some PNGs that have transparency (varying alpha values), my texture is messed up, like it loaded the data incorrectly or something.
I'm pretty sure this is due to my loading code which uses some of the Cocoa APIs. I will post the relevant code here though.
What is the best way to load PNGs, or any image format which supports transparency, on OSX/iPhone? This method feels roundabout. Rendering it to a CGContext and getting the data seems weird.
* LOADING *
CGImageRef CGImageRef_load(const char *filename) {
NSString *path = [NSString stringWithFormat:@"%@/%s",
[[NSBundle mainBundle] resourcePath],
filename];
UIImage *img = [UIImage imageWithContentsOfFile:path];
if(img) return [img CGImage];
return NULL;
}
unsigned char* CGImageRef_data(CGImageRef image) {
NSInteger width = CGImageGetWidth(image);
NSInteger height = CGImageGetHeight(image);
unsigned char *data = (unsigned char*)malloc(width*height*4);
CGContextRef context = CGBitmapContextCreate(data,
width, height,
8, width * 4,
CGImageGetColorSpace(image),
kCGImageAlphaPremultipliedLast);
CGContextDrawImage(context,
CGRectMake(0.0, 0.0, (float)width, (float)height),
image);
CGContextRelease(context);
return data;
}
* UPLOADING *
(define (image-opengl-upload data width height)
(let ((tex (alloc-opengl-image)))
(glBindTexture GL_TEXTURE_2D tex)
(glTexEnvi GL_TEXTURE_ENV GL_TEXTURE_ENV_MODE GL_DECAL)
(glTexImage2D GL_TEXTURE_2D
0
GL_RGBA
width
height
0
GL_RGBA
GL_UNSIGNED_BYTE
(->void-array data))
(glTexParameteri GL_TEXTURE_2D
GL_TEXTURE_MIN_FILTER
GL_LINEAR)
(glTexParameteri GL_TEXTURE_2D
GL_TEXTURE_MAG_FILTER
GL_LINEAR)
(glTexParameteri GL_TEXTURE_2D
GL_TEXTURE_WRAP_S
GL_CLAMP_TO_EDGE)
(glTexParameteri GL_TEXTURE_2D
GL_TEXTURE_WRAP_T
GL_CLAMP_TO_EDGE)
(glBindTexture GL_TEXTURE_2D 0)
tex))
To be explicit…
The most common issue with loading textures through Core Graphics is that it insists on converting the data to premultiplied-alpha format. In the case of PNGs included in the bundle, this is actually done in a preprocessing step in the build process. Failing to take this into account results in dark banding around blended objects.
The way to take it into account is to use glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) instead of glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). If you want to use the alpha channel for something other than regular blending, your only option is to switch to a different format and loader (as prideout suggested, but for different reasons).
ETA: the premultiplication issue also exists under Mac OS X, but the preprocessing is iPhone-specific.
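In practice that just means enabling blending with the premultiplied-alpha factors; a minimal setup:
// blend state for textures whose color channels are already multiplied by alpha
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);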
Your Core Graphics surface should be cleared to all zeroes before you render to it, so I recommend using calloc instead of malloc, or adding a memset after the malloc.
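For example, the allocation in CGImageRef_data above could become:
// zero the buffer so pixels the image doesn't cover stay fully transparent
unsigned char *data = (unsigned char *)calloc(width * height, 4);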
Also, I'm not sure you want your TexEnv set to GL_DECAL. You might want to leave it set to its default (GL_MODULATE).
If you'd like to avoid Core Graphics for decoding PNG images, I recommend loading in a PVR file instead. PVR is an exceedingly simple file format. An app called PVRTexTool is included with the Imagination SDK which makes it easy to convert from PNG to PVR. The SDK also includes some sample code that shows how to parse their file format.
I don't know anything about OpenGL, but Cocoa abstracts this functionality with NSImage/UIImage.
You can use PVRs, but there will be some compression artifacts, so I would only recommend them for 3D object textures, or for textures that don't need a level of detail that PVRTC can't deliver, especially with gradients.