Questions about some GLKit functions - opengl-es-2.0

Usually we need to call glViewport to set up the viewport; is that still necessary when using GLKit?
And for double buffering, do we need to call a swap-buffers function ourselves when using GLKit?

The GLKView class automatically maintains a framebuffer and associated renderbuffers, sets the GL viewport to match its size (and scale factor), and manages presenting a rendered frame (i.e. swapping buffers) for display. For the common case, all you need to do is implement drawRect: (or set a delegate and implement glkView:drawInRect: there) and issue GL drawing commands inside that method.
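As a minimal sketch of that common case (assuming a GLKView whose delegate is already wired up; glkView:drawInRect: is the GLKViewDelegate method), all the per-frame work reduces to something like:

// Minimal sketch: GLKView owns the framebuffer, sets the viewport, and
// presents (swaps) the frame for you. You only issue GL commands here.
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // ... issue your GL ES 2.0 draw calls here ...
    // No glViewport and no swap call needed; GLKView handles both.
}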

Related

prepareLayout method of NSCollectionViewLayout

I have a hard time understanding the purpose of the prepareLayout method of NSCollectionViewLayout.
The official Apple documentation says:
During the layout process, the collection view calls several methods of your layout object to gather information. In particular, it calls three very important methods, whose implementations drive the core layout behavior.
Use the prepareLayout method to perform your initial layout calculations. These calculations provide the basis for everything the layout object does later.
Use the collectionViewContentSize method to return the smallest rectangle that completely encloses all of the elements in the collection view. Use the calculations from your prepareLayout method to specify this rectangle.
Use the layoutAttributesForElementsInRect: method to return the layout attributes for all elements in the specified rectangle. The collection view typically requests only the subset of visible elements, but may include elements that are just offscreen.
The prepareLayout method is your chance to perform the main calculations associated with the layout process. Use this method to generate an initial list of layout attributes for your content. For example, use this method to calculate the frame rectangles of all elements in the collection view. Performing all of these calculations up front and caching the resulting data is often simpler than trying to compute attributes for individual items later.
In addition to the layoutAttributesForElementsInRect: method, the collection view may call other methods to retrieve layout attributes for specific items. By performing your calculations in advance, your implementations of those methods should be able to return cached information without having to recompute that information first. The only time your layout object needs to recompute its layout information is when your app invalidates the layout. For example, you might invalidate the layout when the user inserts or deletes items.
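In concrete terms, the caching pattern the documentation describes would look roughly like this on macOS (a sketch; _cachedAttributes and _contentSize are hypothetical ivars):

- (void)prepareLayout
{
    [super prepareLayout];
    // _cachedAttributes (NSMutableArray) and _contentSize (NSSize) are
    // hypothetical ivars used for illustration.
    [_cachedAttributes removeAllObjects];
    // ... compute a frame for every item, build an
    // NSCollectionViewLayoutAttributes object for it, add it to the
    // cache, and grow _contentSize accordingly ...
}

- (NSSize)collectionViewContentSize
{
    return _contentSize; // computed once in prepareLayout
}

- (NSArray<__kindof NSCollectionViewLayoutAttributes *> *)layoutAttributesForElementsInRect:(NSRect)rect
{
    // Just filter the cache; nothing is recomputed here.
    NSMutableArray *result = [NSMutableArray array];
    for (NSCollectionViewLayoutAttributes *attrs in _cachedAttributes) {
        if (NSIntersectsRect(attrs.frame, rect)) {
            [result addObject:attrs];
        }
    }
    return result;
}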
So I naively used this documentation as a guide and rewrote my custom layout implementation. I computed collectionViewContentSize and precomputed the array returned by this method
- (NSArray<__kindof NSCollectionViewLayoutAttributes *>*)layoutAttributesForElementsInRect:(NSRect)rect;
such that in all 3 required methods I just return the cached values. After this, my collectionView suddenly became extremely laggy.
Apparently the prepareLayout method is called on every scroll.
Can anyone clarify what this means? Or maybe I'm misunderstanding something?
The usual thing, if you don't want prepare called every time there is a scroll event, is to keep a CGSize instance property and implement shouldInvalidateLayout so that it returns YES only if the new bounds size is different from the stored CGSize property value.
Here's a Swift example, but I'm sure you can translate it into Objective-C:
var oldBoundsSize = CGSize.zero

override func shouldInvalidateLayout(forBoundsChange newBounds: CGRect) -> Bool {
    let ok = newBounds.size != self.oldBoundsSize
    if ok {
        self.oldBoundsSize = newBounds.size
    }
    return ok
}
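For reference, a direct Objective-C translation of that idea might look like this (a sketch; _oldBoundsSize is an NSSize ivar you would add to your layout subclass):

- (BOOL)shouldInvalidateLayoutForBoundsChange:(NSRect)newBounds
{
    // Only invalidate (and thus re-run prepareLayout) when the bounds
    // size actually changes, not on every scroll of the bounds origin.
    BOOL changed = !NSEqualSizes(newBounds.size, _oldBoundsSize);
    if (changed) {
        _oldBoundsSize = newBounds.size;
    }
    return changed;
}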
So this was my bad. Apparently, if
- (BOOL)shouldInvalidateLayoutForBoundsChange:(NSRect)newBounds;
returns YES, then prepareLayout is called again. I changed the method to return NO, and prepareLayout stopped being called all the time.

Share Textures Between 2 OpenGL Contexts

I have an existing OpenGL context, using an OpenGL 2.1 core profile. I am able to draw objects/textures/etc. with no problem. However, I now want my application to launch a separate NSWindow, with an NSOpenGLView, that displays part of a texture I drew in the original renderer's view. After some reading, I eventually bumped into the topic of context sharing, which I think is the route I have to take if I want to pull this off.
My shared OpenGL context is of type CGLContextObj, but I don't know what to do with it, as my window resides in a different process. I've read the Apple documentation on rendering contexts, but I am unable to apply the concepts they laid out when there are barely any examples for me to go through. Any advice would be really appreciated; thank you in advance.
EDIT:
Perhaps I did not give enough description, my apologies. I subclass NSOpenGLView, and in its init I do the following:
// *** irrelevant initialization stuff above inside init *** //
// Get the pixel format from the first context, to be used for the NSOpenGLView when it's finally initialized later
_pixFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:(void *)_attribs];
// Create a CGLPixelFormatObj from our C array of pixel format attributes
GLint nPix;
CGLPixelFormatObj myCgPixObj;
CGLChoosePixelFormat(_attribs, &myCgPixObj, &nPix);
// Now that we have the pixel format in CGLPixelFormatObj form, create the CGLContextObj, sharing with the main rendering context
CGLContextObj newContext;
CGLCreateContext(myCgPixObj, mainRenderingContext, &newContext);
// Wrap the CGLContextObj in an NSOpenGLContext object to feed into the NSOpenGLView
NSOpenGLContext *contextForGLView = [[NSOpenGLContext alloc] initWithCGLContextObj:newContext];
[contextForGLView setView:self];
[self setOpenGLContext:contextForGLView];
// We don't need this anymore
CGLDestroyPixelFormat(myCgPixObj);
return self;
I am able to draw objects in this view just fine. But I get a blank white rectangle whenever I try to use the textures created in the main rendering context. I'm a little lost on how to proceed from here; I have never dealt with shared contexts before.
Seems like I got it working, at least partially, since I had to force the view to redraw by moving my window around to actually render the texture from the main context (another problem for another time!). Anyway, here's how I did it:
My main rendering context is supplied by a host application (yes, I'm working on a plugin) and is of type CGLContextObj. I wrap that context in an NSOpenGLContext object by calling initWithCGLContextObj:
Next step was to create an NSOpenGLPixelFormat object, initializing it with the pixel format attributes used by the host application's renderer. This step is important as it ensures that the rendering context that will be used in my view will have the same OpenGL core profile, along with other attributes used by the host application.
Then in my subclassed NSOpenGLView, I create a new NSOpenGLContext object, preferably in the prepareOpenGL method, allocating it with initWithFormat:shareContext: and passing in the NSOpenGLPixelFormat and NSOpenGLContext objects created previously.
Upon assigning the newly created context to my view, I was able to render the textures from the main rendering context.
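Condensed into code, those steps might look roughly like this (a sketch; hostCGLContext and hostAttribs are placeholders for whatever the host application supplies):

// Wrap the host's CGLContextObj so Cocoa can share against it.
NSOpenGLContext *hostContext =
    [[NSOpenGLContext alloc] initWithCGLContextObj:hostCGLContext];

// The pixel format must match the host's (same profile and attributes).
NSOpenGLPixelFormat *pixFormat =
    [[NSOpenGLPixelFormat alloc] initWithAttributes:hostAttribs];

// Create the view's context, sharing with the host's context; shared
// contexts see each other's textures, buffers, and shaders.
NSOpenGLContext *sharedContext =
    [[NSOpenGLContext alloc] initWithFormat:pixFormat
                               shareContext:hostContext];
[self setOpenGLContext:sharedContext];
[sharedContext setView:self];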

How to set custom texture in custom GPUImage filter?

I have a custom filter built with GPUImageFilter which has two texture samplers in the shader. The first texture is defined as inputImageTexture in the .fsh file and is bound by default, but I don't know how to bind inputImageTexture2 in code.
I searched the GPUImageFilter.h/.m files but have not found any method related to that. Does anyone know?
If you want to use two textures as inputs, subclass or use GPUImageTwoInputFilter instead of GPUImageFilter. The former has all the code set up properly to take in two inputs, and defines the uniform needed for that second texture input.
Otherwise, you could look at what GPUImageTwoInputFilter defines to be able to provide that second texture input and use that yourself in a custom class.
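As a rough sketch of that two-input wiring (the shader string, camera, and overlay image here are placeholder names; the second source you attach becomes inputImageTexture2):

// Two-input filter: the first attached source feeds inputImageTexture,
// the second feeds inputImageTexture2.
GPUImageTwoInputFilter *blendFilter =
    [[GPUImageTwoInputFilter alloc] initWithFragmentShaderFromString:kMyBlendShaderString];

GPUImagePicture *overlaySource =
    [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"overlay.png"]];

[videoCamera addTarget:blendFilter];   // -> inputImageTexture
[overlaySource addTarget:blendFilter]; // -> inputImageTexture2
[overlaySource processImage];          // push the still image through once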

Is it better to layout subviews in layout/layoutSubviews method?

This question applies to both Cocoa and Cocoa Touch, but I'll write an example just for Cocoa.
As I understand it, I can set needsLayout to YES multiple times in a cycle and -layout will be called just once. But are there any other benefits of laying out subviews in the -layout method?
Explanation / example: at the moment I lay out the subviews in my custom view controller (which has a default NSView) every time I call a custom redraw method. And I call the redraw method only when the user changes some properties, so I really do want to relayout the subviews at that point.
There are plenty of external circumstances not under your direct control that might cause the system to want to lay out your views. For example, device rotation or incoming calls on iOS, or window resizing on OS X. If you have your layout logic in the standard places, then your code accommodates these without any additional effort, and in the places your internal state changes, you can request such a layout explicitly.
To turn your question around: is there a significant benefit to not doing your layout in the standard way? Do you believe that this will be a performance issue? Have you measured it to see whether it is actually a performance issue?
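To illustrate the coalescing behavior the question mentions, a minimal macOS sketch (barWidth is a hypothetical property):

// Property setters just mark the view dirty; AppKit coalesces any number
// of these into a single -layout call per cycle.
- (void)setBarWidth:(CGFloat)barWidth {
    _barWidth = barWidth;
    self.needsLayout = YES;   // cheap, safe to call repeatedly
}

- (void)layout {
    [super layout];
    // Position subviews here; this runs once per layout pass, including
    // system-initiated passes such as window resizing.
}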

Fitting Cocoa Animation into MVC/OOP patterns

MVC/OOP design patterns say you don't set a property, per se; you ask an object to set its property. Similarly, in Cocoa you don't tell an object when to draw itself. Your object's code details HOW it will draw itself, so we trust the frameworks to decide when (for the most part) it should draw.
But, when it comes to animation in Cocoa (specifically Cocoa-Touch) it seems that we now must take control of when the object draws itself from within the objects view controller. I can't send a message to a UIView subclass asking it to change some value and then leave it alone knowing it will slowly (duration = X) animate itself to a new position, alpha, rotation, etc. depending on the property changes. Or can I?
Basically, I'm looking for a way to set the property and then walk away. Instead, it seems, I need to wrap the code that calls the object asking it to change its property with an animation block of some sort "[UIView beginAnimations:nil context:NULL]; ... [UIView commitAnimations];"
I'm ending up with lots and lots of animation blocks in my view controllers and none in my view objects...I guess I'm just looking for someone to verify that this is how things are done and I'm not overlooking something. I haven't gotten much farther than the UIView animations within Cocoa-Touch, so maybe that's my problem and it's time to dig deeper?!?
You are correct that UIView does not animate its property changes by default the way CALayer does, but I don't think this indicates a break in MVC. It is appropriate for a Controller to instruct a View in how it should transform. That is the role of a Controller class as surely as it is appropriate for the Controller to know the correct frame for the View and even manage layout. I agree that it's a little weird that you call -beginAnimations:context: on the UIView class rather than on an instance, but in practice it does actually work much better that way since you may want to animate many views together.
That said, if you had a UIView subclass that managed the layout of its subviews, there would be nothing wrong with allowing that UIView to manage the animation rather than relying on a UIViewController to do it. So this is something that could go either place, but in practice it generally goes in the Controller as you've discovered.
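For instance, a sketch of that subclass approach, where a hypothetical expanded property on a UIView subclass animates itself (using the same beginAnimations/commitAnimations style mentioned above), so the controller can set it and walk away:

// Hypothetical property on a UIView subclass that owns its own animation.
- (void)setExpanded:(BOOL)expanded {
    _expanded = expanded;
    [UIView beginAnimations:nil context:NULL];
    [UIView setAnimationDuration:0.4];
    self.alpha = expanded ? 1.0f : 0.4f;
    self.transform = expanded ? CGAffineTransformIdentity
                              : CGAffineTransformMakeScale(0.8f, 0.8f);
    [UIView commitAnimations];
}

The controller side then reduces to a single assignment, myView.expanded = YES;, and the view handles the animation itself.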
I am using "MVC" here in the typical Cocoa sense. You're correct that this might not be appropriate in a Smalltalk program, but then Smalltalk Controllers have a much more limited role (management of user input events). Cocoa significantly expands the role of Controllers in MVC, and I think it's an improvement, even if it means there are now some functions that could go in either the Controller or the View (and this is one of them).