How to set a custom texture in a custom GPUImage filter?

I have a custom filter built with GPUImageFilter whose shader has two texture samplers. The first texture is declared as inputImageTexture in the .fsh file and is bound by default, but I don't know how to bind inputImageTexture2 in code.
I searched GPUImageFilter.h and .m, but haven't found any method related to that. Does anyone know?

If you want to use two textures as inputs, subclass or use GPUImageTwoInputFilter instead of GPUImageFilter. It has all the code set up to take in two inputs, and it defines the uniform needed for that second texture input.
Otherwise, you could look at what GPUImageTwoInputFilter does to provide that second texture input and replicate that yourself in a custom class.
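For illustration, here is a rough sketch of wiring two sources into such a filter (the shader filename MyBlend and the image variables are placeholders, not part of the question); the order in which targets are added decides which sampler each source feeds:
#import "GPUImageTwoInputFilter.h"
#import "GPUImagePicture.h"
// Load a custom two-input fragment shader from MyBlend.fsh; the
// GPUImageTwoInputFilter base class declares and binds the
// inputImageTexture2 uniform for the second input
GPUImageTwoInputFilter *blendFilter =
    [[GPUImageTwoInputFilter alloc] initWithFragmentShaderFromFile:@"MyBlend"];
GPUImagePicture *firstSource = [[GPUImagePicture alloc] initWithImage:firstImage];
GPUImagePicture *secondSource = [[GPUImagePicture alloc] initWithImage:secondImage];
// The first target added supplies inputImageTexture, the second supplies
// inputImageTexture2
[firstSource addTarget:blendFilter];
[secondSource addTarget:blendFilter];
[firstSource processImage];
[secondSource processImage];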

Share Textures Between 2 OpenGL Contexts

I have an existing OpenGL context (an OpenGL 2.1 profile) and am able to draw objects, textures, etc. without a problem. However, I now want my application to launch a separate NSWindow, with an NSOpenGLView, that displays part of a texture I drew in the original renderer's view. After some reading, I eventually bumped into the topic of context sharing, which I think may be the route I have to take to pull this off.
My shared OpenGL context is of type CGLContextObj, but I don't know what to do with it, as my window resides in a different process. I've read the Apple documentation on rendering contexts, but I am unable to apply the concepts they laid out since there are barely any examples to go through. Any advice will be really appreciated; thank you in advance.
EDIT:
Perhaps I did not give enough description, my apologies. I subclass my NSOpenGLView, and in its init I do the following:
// *** irrelevant initialization stuff above inside init *** //
// Get the pixel format from the first context, to be used for the NSOpenGLView when it's finally initialized later
_pixFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:(void *)_attribs];
// Create a CGLPixelFormatObj from our C array of pixel format attributes
GLint nPix;
CGLPixelFormatObj myCgPixObj;
CGLChoosePixelFormat(_attribs, &myCgPixObj, &nPix);
// Now that we have the pixel format, create a CGLContextObj that shares with the main rendering context
CGLContextObj newContext;
CGLCreateContext(myCgPixObj, mainRenderingContext, &newContext);
// Wrap it in an NSOpenGLContext object to feed into the NSOpenGLView
NSOpenGLContext *_contextForGLView = [[NSOpenGLContext alloc] initWithCGLContextObj:newContext];
[self setOpenGLContext:_contextForGLView];
[_contextForGLView setView:self];
// We don't need the pixel format object anymore
CGLDestroyPixelFormat(myCgPixObj);
return self;
I am able to draw objects in this view just fine, but I get a blank white rectangle whenever I try to use the textures created in the main rendering context. I'm a little lost on how to proceed from here; I have never dealt with shared contexts before.
It seems I got it working, at least partially, since I had to force the view to redraw by moving my window around before it actually rendered the texture from the main context (another problem for another time!). Anyway, here's how I did it:
My main rendering context is supplied by a host application (yes, I'm working on a plugin) and is of type CGLContextObj. I wrap that context in an NSOpenGLContext object by calling initWithCGLContextObj:.
The next step is to create an NSOpenGLPixelFormat object, initializing it with the pixel format attributes used by the host application's renderer. This step is important, as it ensures that the rendering context used in my view has the same OpenGL profile, along with the other attributes used by the host application.
Then, in my subclassed NSOpenGLView, I create a new NSOpenGLContext object, preferably in the prepareOpenGL method, using initWithFormat:shareContext: and passing in the NSOpenGLPixelFormat and NSOpenGLContext objects created previously.
Upon assigning the newly created context to my view, I was able to render the textures from the main rendering context.
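Condensing those steps into code, a rough sketch (hostContext and hostAttribs stand in for whatever the host application actually hands you):
// Wrap the host's CGLContextObj in an NSOpenGLContext (hostContext is
// assumed to come from the host application)
NSOpenGLContext *mainGLContext = [[NSOpenGLContext alloc] initWithCGLContextObj:hostContext];
// Build a pixel format from the host renderer's attribute list so both
// contexts agree on profile and capabilities
NSOpenGLPixelFormat *pixelFormat = [[NSOpenGLPixelFormat alloc] initWithAttributes:hostAttribs];
// In the NSOpenGLView subclass (e.g. in prepareOpenGL), create the new
// context so that it shares resources - textures included - with the main one
NSOpenGLContext *sharedContext = [[NSOpenGLContext alloc] initWithFormat:pixelFormat
                                                             shareContext:mainGLContext];
[self setOpenGLContext:sharedContext];
[sharedContext setView:self];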

Some questions about GLKit functions

Usually we need to call glViewport to set up the viewport - is that still necessary when using GLKit?
And for double buffering, do we need to call a buffer-swap function ourselves when using GLKit?
The GLKView class automatically maintains a framebuffer and associated renderbuffers, sets the GL viewport to match its size (and scale factor), and manages presenting a rendered frame (i.e. swapping buffers) for display. For the common case, all you need to do is implement drawRect: (or set a delegate and implement glkView:drawInRect: there) and issue GL drawing commands inside that method.
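A minimal sketch of that common case (OpenGL ES 2 here; the controller class name is illustrative):
#import <GLKit/GLKit.h>

@interface MyGLViewController : GLKViewController
@end

@implementation MyGLViewController

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Give the GLKView a context; the view manages the framebuffer,
    // viewport and presentation for us
    GLKView *view = (GLKView *)self.view;
    view.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
}

// Called once per frame; no glViewport or swap-buffers calls needed
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // ... issue the rest of your GL drawing commands here ...
}

@end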

How can I use a CAMediaTimingFunction to map a custom value?

I'd like to use a CAMediaTimingFunction (e.g. kCAMediaTimingFunctionEaseIn) to map an input value (0-1) to an output value (0-1), just as the docs say this class does. However, this functionality doesn't appear to be exposed at all.
Is there some way I can access this functionality for my own custom values? This isn't in a UIView, CALayer, etc. - it's just some custom code where I'd like to use the iOS curves.
This is what you need: https://stackoverflow.com/a/19084008/251440
I needed it myself, so I built a class with an interface and capabilities exactly like CAMediaTimingFunction's, but with the needed -valueForX: method.
Actually, that functionality is the only functionality that is exposed. A CAMediaTimingFunction is a class that stores the coordinates of the two control points defining a cubic Bézier curve that begins at (0.0, 0.0) and ends at (1.0, 1.0). Such a curve is in fact a mapping from the interval [0,1] to the interval [0,1].
You may want to look up how Bézier curves work if you want to understand how to set the control-point coordinates for a specific curve.
Addendum:
As you clarified, you want to use an existing CAMediaTimingFunction instance to map a value to its transformed value (i.e. to compute the mapping). You can retrieve the coordinates of the control points from the instance with getControlPointAtIndex:values: and use them to compute the mapping yourself. See http://en.wikipedia.org/wiki/Bézier_curve for the math.
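A sketch of that computation, assuming the standard cubic Bézier definition (the helper name is mine, not Apple's): fetch the four control points, numerically invert x(t), then evaluate y at the recovered parameter.
#import <QuartzCore/QuartzCore.h>

static CGFloat EvaluateTimingFunction(CAMediaTimingFunction *function, CGFloat x)
{
    // Fetch the four control points; index 0 is always (0,0) and index 3 is (1,1)
    float p[4][2];
    for (size_t i = 0; i < 4; i++) {
        [function getControlPointAtIndex:i values:p[i]];
    }
    // x(t) is monotonic for timing curves, so bisection suffices to solve x(t) = x
    CGFloat lo = 0.0, hi = 1.0, t = x;
    for (int i = 0; i < 32; i++) {
        t = 0.5 * (lo + hi);
        CGFloat mt = 1.0 - t;
        CGFloat xt = 3.0*mt*mt*t*p[1][0] + 3.0*mt*t*t*p[2][0] + t*t*t;
        if (xt < x) { lo = t; } else { hi = t; }
    }
    // Evaluate y at the recovered parameter
    CGFloat mt = 1.0 - t;
    return 3.0*mt*mt*t*p[1][1] + 3.0*mt*t*t*p[2][1] + t*t*t;
}
For example, calling this with [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseIn] and x = 0.25 returns the eased progress a quarter of the way through the animation.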

Applying the MVC philosophy in Objective-C

I'm starting a small project that displays circles with random radii, random colors and random positions on the screen. I want to implement it using the MVC paradigm in Objective-C.
I have a class Circle that contains the following instance variables:
CGFloat radius
CGPoint center
UIColor radiusColor
This class doesn't contain methods; it just holds data. It lives in its own file pair (Circle.h & Circle.m).
I have a myModel class that is supposed to be the model in my MVC. It contains methods that randomly generate centers inside the bounds of my view, where the bounds dimensions are requested from the view through the controller.
Every time a random set of properties (center, color and radius) is generated, an instance of the Circle class is created within the myModel class and stored in an NSMutableArray.
When the generation is done, this NSMutableArray is passed to the controller, which in turn passes it to the view, which then displays the circles.
My question is: if I am to implement the MVC paradigm correctly, should:
The Model (myModel) hold instances of Circle, or should the instances of Circle be held by the controller?
My model be made of one class, or is it legal for it to be made of several classes?
The model know the bound size of the view, or is that a violation of the MVC philosophy?
One last question: if I implement things as stated above, are myModel and Circle separate models, or do both classes constitute one model?
Thank you!
[Should] The Model (myModel) hold instances of Circle, or should the instances of Circle be held by the controller?
The model should hold the data. That's its job. Imagine what would happen if you wanted to change the interface to your program. Instead of (or in addition to) drawing circles on the screen, you might want to display a list of circles and their locations. You might want to change or replace the view controller to do that, but you wouldn't need to change the model that stores the circles. Likewise, you might want to change the way that circles are generated, but keep displaying them the way you are now. In that case, you'd change the model, but the view controller and view could probably stay the same.
[Should] My model be made of 1 class, or is it legal to be made of several classes?
A data model is typically a whole graph of objects, very often of different types. You might have one object that manages the rest (although you don't have to). For example, your MyModel class contains an array that stores Circle objects. You could add Square objects, Group objects, etc.
[Should] The model know the bound size of the view, or is that a violation of the MVC philosophy?
The model shouldn't know specifically about the view, but it's fine for the view controller to tell it to produce circles within a given range of coordinates. That way, if the view changes size or orientation, the view controller will likely know about it, and it can in turn give the model new info.
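To make that concrete, here is a sketch of that division of labour (assuming Circle exposes its fields as properties; the class and method names are illustrative): the controller hands the model a size, and the model stays view-agnostic.
#import <UIKit/UIKit.h>
#import <stdlib.h>
#import "Circle.h"

@interface MyModel : NSObject
@property (nonatomic, strong) NSMutableArray *circles;
- (void)generateCircles:(NSUInteger)count withinSize:(CGSize)size;
@end

@implementation MyModel

// The controller passes in a size (e.g. the view's bounds.size); the model
// never touches the view itself
- (void)generateCircles:(NSUInteger)count withinSize:(CGSize)size
{
    self.circles = [NSMutableArray arrayWithCapacity:count];
    for (NSUInteger i = 0; i < count; i++) {
        Circle *circle = [[Circle alloc] init];
        circle.radius = 10.0 + arc4random_uniform(40);
        circle.center = CGPointMake(arc4random_uniform((uint32_t)size.width),
                                    arc4random_uniform((uint32_t)size.height));
        circle.radiusColor = [UIColor colorWithRed:drand48()
                                             green:drand48()
                                              blue:drand48()
                                             alpha:1.0];
        [self.circles addObject:circle];
    }
}

@end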
If you have other components in your model besides circles, wrap everything in myModel. Even if you don't, you might still want to do so to allow for future additions.
Depends on your design. If you are writing a "document based" application (regardless of whether you are using UIDocument) you normally would have a single class that contains the others. Even if you aren't, having a single root class for archiving purposes, etc., is usually convenient.
The model should definitely not know anything about the view hierarchy. (Note that this is different from knowing something like "canvas size" - it would be legitimate to store such a property in the model, and let the view display the canvas however it wishes, such as in a UIScrollView.)
Btw, kudos for thinking about this ahead of time!

Using Opacity-generated layers in Quartz 2d instead of drawing to the view's layer

Firstly, be gentle with me. I'm new to Objective-C and iOS programming.
I've just purchased a copy of Opacity and I'm playing around with layers. My only grade-A success so far has been designing and successfully generating my own calculator design - so I haven't got that far!
Specifically, I want to know how to use Opacity's CALayer subclasses to 'replace' the backing layer on a UIView. Adding sublayers to the existing view layer is obvious enough - and I've managed to get that working - but how do I use an Opacity CALayer subclass to provide the initial content of the view's backing layer? I've tried copying the drawInContext: code from an Opacity subclass into drawRect: on the UIView subclass that serves as my controller's 'view' property, but (for some reason I can't fathom) that didn't work.
I assume that the view's original backing layer is item 0 in the layer array? My ultimate goal is to have just my layers behind the view, without having to 'ignore' the original one.
If there's a chance that Dr Brad Larson might read this…
I've watched your Quartz 2D and animation videos over and over, and I've read all of the copies of your course notes that I can lay hands on, but all the examples I've found start with an image already in the hosting view - onto which more layers are added - which isn't really what I'm after. I've also read the Big Nerd Ranch guide and got all of Conway and Hillegass's example code too, but - I'll be darned - they also start with an image already in the view!
If anyone can help me out - just point me at the relevant documentation, please don't bother writing huge tranches of code here - I'd be seriously grateful.
VV.
PS: I'm deliberately NOT using IB yet, as I want to grasp the underlying mechanics of Cocoa Touch first, and I won't use dot notation on principle :-). (See my profile!)
The trick is to override the view's +layerClass class method to return the base layer subclass that's needed, before nailing other layers into the hierarchy. This builds the view's base (implicit) layer when the view is instantiated, and then I can slap my other layers down afterwards.
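In code, that override is just the following (MyOpacityLayer standing in for whatever subclass Opacity generated):
#import <UIKit/UIKit.h>
#import "MyOpacityLayer.h"

@interface MyCanvasView : UIView
@end

@implementation MyCanvasView

// Returning the custom subclass here makes it the view's implicit backing
// layer from the moment the view is created
+ (Class)layerClass
{
    return [MyOpacityLayer class];
}

@end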
This is a fun game. I'll get the mind-set soon!
This technique differs markedly from the same requirement with NSView, which has a setLayer: instance method that can be used to change the implicit layer AFTER instantiation - an expensive procedure not offered on a lightweight object like a UIView.