NSImage loading a portion of image - objective-c

The requirement is like this:
I get a single large PNG image for a button, and this single image contains the images for hover, button clicked, and mouse exit that need to be displayed.
The single PNG is 1024 x 28, so each sub-image is about 256 x 28.
I have been googling for the best possible approach but couldn't work out how to achieve this.
I have the following approach in mind:
NSImage *pBtnImage[MAX_BUTTON_IMAGES];
for (int i = 0; i < 4; i++) {
    pBtnImage[i] = [[NSImage alloc] initWithData:??????];
}
I want to know what I should pass as the NSData parameter.
Is it possible to load a single image and clip it as and when needed?
Thanks in advance.

There's no simple Cocoa-supported way to read only a sub-rectangle of the image from its data. It's a simple matter, however, to read the whole image in and only use a select rectangle of the image when compositing. Thing is, with all the available API, you might be better off just to use the standard +[NSImage imageNamed:] method to read the images in individually and let the OS handle caching.
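If you do want to go the single-strip route, a minimal sketch of compositing just one 256 x 28 slice out of the 1024 x 28 strip might look like this (the image name, slice ordering, and drawing context are assumptions):
// Inside your drawing code (e.g. -drawRect:). "buttonStrip" is a placeholder name.
NSImage *strip = [NSImage imageNamed:@"buttonStrip"];
// Index of the state to draw: 0 = normal, 1 = hover, 2 = clicked, 3 = mouse exit (assumed order).
NSInteger state = 1;
NSRect srcRect = NSMakeRect(state * 256.0, 0.0, 256.0, 28.0);
NSRect dstRect = NSMakeRect(0.0, 0.0, 256.0, 28.0);
// Draw only that slice of the strip; the rest of the image is never composited.
[strip drawInRect:dstRect
         fromRect:srcRect
        operation:NSCompositeSourceOver
         fraction:1.0];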
What actual, measured performance problem are you trying to solve? Does one really exist, or is this a case of premature optimization?

Related

Reliable access and modify captured camera frames under SceneKit

I am trying to add a B&W filter to the camera images of an ARSCNView, then render colored AR objects over it.
I'm almost there, with the following code added to the beginning of - (void)renderer:(id<SCNSceneRenderer>)aRenderer updateAtTime:(NSTimeInterval)time:
CVPixelBufferRef bg = self.sceneView.session.currentFrame.capturedImage;
if (bg) {
    // Plane 1 of the bi-planar YCbCr buffer holds the interleaved CbCr (chroma) data.
    char *k1 = CVPixelBufferGetBaseAddressOfPlane(bg, 1);
    if (k1) {
        size_t x1 = CVPixelBufferGetWidthOfPlane(bg, 1);
        size_t y1 = CVPixelBufferGetHeightOfPlane(bg, 1);
        // Setting the chroma to a neutral value turns the frame grayscale.
        memset(k1, 128, x1 * y1 * 2);
    }
}
This works really fast on mobile, but here's the thing: sometimes a colored frame is displayed.
I've checked, and my filtering code is executed, but I assume it runs too late: SceneKit's pipeline has already processed the camera input.
Calling the code earlier would help, but updateAtTime is the earliest point where one can add custom frame-by-frame code.
Getting notified on frame captures might help, but it looks like the whole AVCaptureSession is inaccessible.
The Metal ARKit example shows how to convert the camera image to RGB and that is the place where I would do filtering, but that shader is hidden when using SceneKit.
I've tried this possible answer but it's way too slow.
So how can I overcome the frame misses and convert the camera feed reliably to BW?
Here's the key for this problem:
session:didUpdateFrame:
Provides a newly captured camera image and accompanying AR information to the delegate.
So I just moved the CVPixelBufferRef manipulation (the image-filtering code) from
- (void)renderer:(id<SCNSceneRenderer>)aRenderer updateAtTime:(NSTimeInterval)time
to
- (void)session:(ARSession *)session didUpdateFrame:(ARFrame *)frame
Made sure to set self.sceneView.session.delegate = self to have this delegate called.
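The moved code ends up roughly like this, reusing the chroma-plane trick from the question (lock/unlock calls added around the base-address access):
- (void)session:(ARSession *)session didUpdateFrame:(ARFrame *)frame {
    // Runs as soon as a new camera frame is captured, before SceneKit renders it.
    CVPixelBufferRef bg = frame.capturedImage;
    if (bg) {
        CVPixelBufferLockBaseAddress(bg, 0);
        // Plane 1 of the bi-planar YCbCr buffer holds the interleaved CbCr data.
        char *k1 = CVPixelBufferGetBaseAddressOfPlane(bg, 1);
        if (k1) {
            size_t x1 = CVPixelBufferGetWidthOfPlane(bg, 1);
            size_t y1 = CVPixelBufferGetHeightOfPlane(bg, 1);
            memset(k1, 128, x1 * y1 * 2); // neutral chroma => grayscale
        }
        CVPixelBufferUnlockBaseAddress(bg, 0);
    }
}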

DirectX 11.1 Disable the depth buffer

This question relates to a previous question I have asked.
I have a series of 48 textures on flat square meshes that I am rendering, and they all combine to form one "scene." They each have a large percentage of transparency with one or two smaller images, and when they are lined up, I should be able to see the full scene. I expected this to work without much issue, but when I went to test it, I see the top-most texture, and then anywhere it would have transparency, it is just the clear color.
At first, I thought it was an issue with how I was loading the image and somehow was disabling the alpha, but after playing around with the clear color, I realized that there was some transparency.
Second, I tried to enable blending - this works if all the textures are combined on a single z plane.
I have posted my image loading and blending code on the question I linked to above.
Now I am starting to think it may be an issue with the depth buffer, so I added the following code to my window dependent resources:
Microsoft::WRL::ComPtr<ID3D11DepthStencilState> DepthDefault;
D3D11_DEPTH_STENCIL_DESC depthstencilDesc;
ZeroMemory(&depthstencilDesc, sizeof(depthstencilDesc));
depthstencilDesc.DepthEnable = FALSE;
depthstencilDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
depthstencilDesc.DepthFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.StencilEnable = FALSE;
depthstencilDesc.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
depthstencilDesc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
depthstencilDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
DX::ThrowIfFailed( direct3d.device->CreateDepthStencilState(&depthstencilDesc, DepthDefault.GetAddressOf() ) );
direct3d.context->OMSetDepthStencilState(DepthDefault.Get(), 0);
Even with this code, I am only seeing the topmost layer. Am I missing something, or am I setting something incorrectly?
Edit: To visualize the problem, it's as if I had 48 panes of glass that are all the same size and they are all in a row. Each piece of glass has one image somewhere on it. When you look through all the glass panes, you get one extra awesome image of all the smaller images combined. For me, DirectX or the pixel shader is only drawing the first glass pane and filling all the transparency of the first pane with the clear/background color.
Edit: The code I'm using to create the depth stencil view:
CD3D11_TEXTURE2D_DESC depthStencilDesc( DXGI_FORMAT_D24_UNORM_S8_UINT, backBufferDesc.Width, backBufferDesc.Height, 1, 1, D3D11_BIND_DEPTH_STENCIL );
ComPtr<ID3D11Texture2D> depthStencil;
DX::ThrowIfFailed( direct3d.device->CreateTexture2D( &depthStencilDesc, nullptr, &depthStencil ) );
auto viewDesc = CD3D11_DEPTH_STENCIL_VIEW_DESC(D3D11_DSV_DIMENSION_TEXTURE2D);
DX::ThrowIfFailed( direct3d.device->CreateDepthStencilView( depthStencil.Get(), &viewDesc, &direct3d.depthStencil ) );
That code is literally right above my depth test / D3D11_DEPTH_STENCIL_DESC code. I'm presuming that this is what creates the depth buffer.
I think you might need to sort the order in which you render your geometry (back to front) if you want to render semi-transparent surfaces with a depth buffer. If you don't want to use a depth buffer - perhaps just don't define/create/set it?
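For the "just don't set it" route, a minimal sketch would be to bind the render target with no depth-stencil view at all (direct3d.renderTargetView is an assumed member name, matching the naming in the snippets above):
// Bind only the back-buffer render target; passing nullptr for the DSV means
// no depth test can reject fragments behind the front-most quad.
ID3D11RenderTargetView* rtvs[1] = { direct3d.renderTargetView.Get() }; // assumed member
direct3d.context->OMSetRenderTargets(1, rtvs, nullptr);
// Then draw the 48 quads back to front and let the blend state handle the transparent areas.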

How do I analyze video stream on iOS?

For example, there are QR scanners which scan video stream in real time and get QR codes info.
I would like to check the light source from the video, if it is on or off, it is quite powerful so it is no problem.
I will probably take a video stream as input, maybe extract frames from it, and analyze the frames or the stream in real time for the presence of the light source (maybe by counting the number of pixels of a certain color in the image?).
How do I approach this problem? Maybe there is some sort of library for this?
It sounds like you are asking for information about several discrete steps. There are a multitude of ways to do each of them, and if you get stuck on any individual step it would be a good idea to post a question about it individually.
1: Get video Frame
Like chaitanya.varanasi said, the AVFoundation framework is the best way of getting access to a video frame on iOS. If you want something less flexible and quicker, try looking at OpenCV's video capture. The goal of this step is to get access to a pixel buffer from the camera. If you have trouble with this, ask about it specifically.
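A rough sketch of the AVFoundation route (the queue name and delegate wiring are assumptions, and error handling is omitted):
#import <AVFoundation/AVFoundation.h>
// ... somewhere in a class that conforms to AVCaptureVideoDataOutputSampleBufferDelegate:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
[session addInput:input];
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
// Ask for BGRA so the buffer matches the CV_8UC4 Mat used in step 2.
output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[output setSampleBufferDelegate:self queue:dispatch_queue_create("videoQueue", DISPATCH_QUEUE_SERIAL)];
[session addOutput:output];
[session startRunning];
// Each frame then arrives in the delegate callback:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef buffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // "buffer" is the pixel buffer used in step 2 below.
}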
2: Put pixel buffer into OpenCV
This part is really easy. If you get it from OpenCV's video capture, you are already done. If you get it from AVFoundation, you will need to put it into OpenCV like this:
// Buffer is of type CVImageBufferRef, which is what AVFoundation should be giving you.
// I assume it is BGRA or RGBA formatted; if it isn't, change CV_8UC4 to the appropriate format.
CVPixelBufferLockBaseAddress( Buffer, 0 );
int bufferWidth = (int)CVPixelBufferGetWidth(Buffer);
int bufferHeight = (int)CVPixelBufferGetHeight(Buffer);
unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(Buffer);
cv::Mat image = cv::Mat(bufferHeight, bufferWidth, CV_8UC4, pixel); // wrap the buffer in OpenCV, no memory copied
// Process image here
// End processing
CVPixelBufferUnlockBaseAddress( Buffer, 0 );
Note: I am assuming you plan to do this in OpenCV since you used its tag. I also assume you can get the OpenCV framework to link into your project. If that is an issue, ask a specific question about it.
3: Process Image
This part is by far the most open-ended. All you have said about your problem is that you are trying to detect a strong light source. One very quick and easy way of doing that would be to compute the mean pixel value of a greyscale image. If you get the image in colour, you can convert it with cvtColor. Then just take the average (for example with cv::mean) to get the mean value. Hopefully you can tell whether the light is on by how that value fluctuates.
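For instance, reusing the BGRA Mat named image from step 2 (the 128 threshold is only a guess you would tune):
// Convert the BGRA frame to greyscale and take the mean brightness.
cv::Mat grey;
cv::cvtColor(image, grey, CV_BGRA2GRAY);
double meanBrightness = cv::mean(grey)[0]; // 0..255
// A powerful light source should push this well above its usual level.
bool lightIsOn = meanBrightness > 128.0;   // threshold is an assumption -- tune it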
chaitanya.varanasi suggested another option, you should check it out too.
openCV is a very large library that can do a wide wide variety of things. Without knowing more about your problem I don't know what else to tell you.
Look at the AVFoundation Framework from Apple.
Hope it helps!
You can try this method: start by feeding all frames to an AVCaptureVideoDataOutput. From the method captureOutput:didOutputSampleBuffer:fromConnection:, you can sample/calculate every pixel. Source: answer
Also, you can take a look at this SO question where they check whether a pixel is black. If it is such a powerful light source, you can take the inverse of the pixel and then determine it using a set threshold for black.
The above sample code only gives you access to the pixel values stored in the buffer; you cannot run any commands other than those that change those values on a pixel-by-pixel basis:
// Per-pixel modification of the wrapped buffer, e.g. zeroing one channel.
for ( uint32_t y = 0; y < height; y++ )
{
    for ( uint32_t x = 0; x < width; x++ )
    {
        bgraImage.at<cv::Vec<uint8_t,4> >(y,x)[1] = 0;
    }
}
Which means that, to use your example, the following will not work with the code you provided:
cv::Mat bgraImage = cv::Mat( (int)height, (int)extendedWidth, CV_8UC4, base );
cv::Mat grey = bgraImage.clone();
cv::cvtColor(grey, grey, 44);

iPhone SDK: compare image sizes

How can I compare image sizes?
I tried doing something like this: if (image1.image.size > image2.image.size) {}
I failed :(
Can somebody tell me how size comparison works?
Maybe it's worth comparing their areas?
if (image1.image.size.width * image1.image.size.height > image2.image.size.width * image2.image.size.height)
{
    // Do something
}
You should decide for yourself what "larger" means.
The size property of UIImage is a C struct (a CGSize) with two members, width and height. To compare sizes, you might compare the total area of each image. If you are comparing UIImages, the following code will do:
if (image1.size.width * image1.size.height > image2.size.width * image2.size.height) {}
Note that your code, if it's referring to UIImages, has an extra .image in it.
If you are, however, comparing UIImageViews, it would probably be preferable to compare the frames. I'm not sure whether the image size may diverge from the frame in some cases, like when the image is scaled according to the UIView property contentMode. (Note that UIImageView inherits from UIView.) So, to compare frames, the code would be as follows:
if (imageView1.frame.size.width * imageView1.frame.size.height > imageView2.frame.size.width * imageView2.frame.size.height) {}

Label and LabelAtlas comparison? LabelAtlas is hard to use

I have some confusion about how to use LabelAtlas. It seems Label consumes a lot more memory than LabelAtlas?
For example, if I create 100 lines of text with Label, will they consume more memory than 100 lines of text created with LabelAtlas?
Label *label1 = [[Label alloc] initWithString:@"text1" dimensions:CGSizeMake(0, 0) alignment:UITextAlignmentLeft fontName:@"Arial" fontSize:22];
.....
.....
Label *label100 = [[Label alloc] initWithString:@"text100" dimensions:CGSizeMake(0, 0) alignment:UITextAlignmentLeft fontName:@"Arial" fontSize:22];
Will they be the same as:
LabelAtlas *label1 = [LabelAtlas labelAtlasWithString:@"text1" charMapFile:@"abc_22c.png" itemWidth:34 itemHeight:40 startCharMap:' '];
........
.......
LabelAtlas *label100 = [LabelAtlas labelAtlasWithString:@"text100" charMapFile:@"abc_22c.png" itemWidth:34 itemHeight:40 startCharMap:' '];
I assume LabelAtlas is less expensive than Label since it uses just one image; Label likely creates an image each time it is created.
I would like to convert all my text from Label to LabelAtlas, but I still don't really understand how to use LabelAtlas in depth. I can hardly display the string I want. I have read a number of examples; it seems simple, but when I tried it, it did not give me what I expect. Could you show me an example of displaying a long text using LabelAtlas instead of Label? I used LabelAtlas before for my point counter, but it is so hard now to display a long string. Thanks in advance.
The main difference between CCLabel and CCLabelAtlas is that the atlas version (like all the other atlas classes) uses one big texture with all the letters pre-rendered to draw a string. This means that the drawing is much faster, because if you draw 100 labels, the graphics processor doesn't have to read in 100 textures but just keep one texture in memory. But it also means that all the letters will be of a fixed size. If you want to get around the fixed-size limitation, use CCBitmapFontAtlas.
And, yes, CCLabel creates one texture for every label, whereas CCLabelAtlas renders the text on the fly, using the provided texture (containing all the characters), so using CCLabelAtlas results in lower memory consumption.
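For example, here is a rough sketch of the two atlas-based options, assuming an older cocos2d-iphone where the classes are named CCLabelAtlas and CCBitmapFontAtlas (the char-map and .fnt file names are placeholders):
// Fixed-size characters rendered from one char-map texture.
CCLabelAtlas *counter = [CCLabelAtlas labelAtlasWithString:@"1234"
                                               charMapFile:@"abc_22c.png"
                                                 itemWidth:34
                                                itemHeight:40
                                              startCharMap:' '];
// Variable-size characters from a .fnt + texture pair -- still a single texture,
// but without the fixed-size limitation, so it handles long strings better.
CCBitmapFontAtlas *longLine = [CCBitmapFontAtlas bitmapFontAtlasWithString:@"A longer line of text"
                                                                   fntFile:@"arial22.fnt"];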
In general, try to always use the *Atlas versions of classes. You can start by using the non-atlas versions and then switch to the atlas version when you've progressed a bit and had time to generate the atlas bitmaps. Don't worry too much about it if you're just starting out.