How can I support GLES 1.1 OES extension features using GLES 2.x? - opengl-es-2.0

Is there any way to support the GLES 1.1 OES extensions using GLES 2.x or 3.x? This is a snippet of my code:
bool ColorBuffer::bind_fbo()
{
    if (m_fbo) {
        // fbo already exist - just bind
        s_gl.glBindFramebufferOES(GL_FRAMEBUFFER_OES, m_fbo);
        return true;
    }
    s_gl.glGenFramebuffersOES(1, &m_fbo);
    s_gl.glBindFramebufferOES(GL_FRAMEBUFFER_OES, m_fbo);
    s_gl.glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES,
                                   GL_COLOR_ATTACHMENT0_OES,
                                   GL_TEXTURE_2D, m_tex, 0);
    GLenum status = s_gl.glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES);
    if (status != GL_FRAMEBUFFER_COMPLETE_OES) {
        s_gl.glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
        s_gl.glDeleteFramebuffersOES(1, &m_fbo);
        m_fbo = 0;
        return false;
    }
    return true;
}
I need to re-implement this code using GLES 2.x or higher, since the new platform doesn't support GLES 1.1 anymore.
Does anyone know how to do this?
Thanks,
Jiancong
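In GLES 2.0 and later, framebuffer objects are part of the core API, so the GL_OES_framebuffer_object entry points and enums map one-to-one onto the unsuffixed core names (glBindFramebuffer, GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_FRAMEBUFFER_COMPLETE, and so on). Here is a minimal sketch of the same function against GLES 2.x; it assumes your s_gl dispatch table also exposes the core, suffix-less entry points, which is an assumption about your wrapper:

// Minimal GLES 2.x sketch: FBOs are core, so the OES suffixes are simply dropped.
// Assumes the s_gl dispatch table exposes the core entry points.
bool ColorBuffer::bind_fbo()
{
    if (m_fbo) {
        // FBO already exists - just bind it
        s_gl.glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);
        return true;
    }
    s_gl.glGenFramebuffers(1, &m_fbo);
    s_gl.glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);
    s_gl.glFramebufferTexture2D(GL_FRAMEBUFFER,
                                GL_COLOR_ATTACHMENT0,
                                GL_TEXTURE_2D, m_tex, 0);
    GLenum status = s_gl.glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE) {
        s_gl.glBindFramebuffer(GL_FRAMEBUFFER, 0);
        s_gl.glDeleteFramebuffers(1, &m_fbo);
        m_fbo = 0;
        return false;
    }
    return true;
}

If you compile against the GLES 2.0 headers (GLES2/gl2.h) instead of the 1.1 ones, these names are already defined, and GLES 3.x keeps the same core FBO API, so the rest of the logic can stay as it is.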

Related

Adjusting screen contrast programmatically in Cocoa application

I'm trying to adjust the screen contrast in Objective-C for a Cocoa application using kIODisplayContrastKey. I saw a post about adjusting screen brightness here:
Programmatically change Mac display brightness
- (void) setBrightnessTo: (float) level
{
    io_iterator_t iterator;
    kern_return_t result = IOServiceGetMatchingServices(kIOMasterPortDefault,
                                                        IOServiceMatching("IODisplayConnect"),
                                                        &iterator);
    // If we were successful
    if (result == kIOReturnSuccess)
    {
        io_object_t service;
        while ((service = IOIteratorNext(iterator)))
        {
            IODisplaySetFloatParameter(service, kNilOptions, CFSTR(kIODisplayBrightnessKey), level);
            // Let the object go
            IOObjectRelease(service);
            return;
        }
    }
}
Code by Alex King from the link above.
And that code worked. So I tried to do the same for contrast by using a different key (kIODisplayContrastKey), but that doesn't seem to work. Does anybody have an idea if that's possible?
I'm using OS X 10.9.3.
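For reference, the contrast variant described above would look roughly like the following untested sketch: the same IODisplayConnect lookup, with only the parameter key swapped. Whether it has any effect presumably depends on the display driver actually exposing a contrast parameter, which may be why it appears not to work.

// Untested sketch of the contrast variant: identical lookup, different key.
// Requires linking against IOKit.
#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/graphics/IOGraphicsLib.h>

static void SetContrastTo(float level)
{
    io_iterator_t iterator;
    kern_return_t result = IOServiceGetMatchingServices(kIOMasterPortDefault,
                                                        IOServiceMatching("IODisplayConnect"),
                                                        &iterator);
    if (result == kIOReturnSuccess)
    {
        io_object_t service;
        while ((service = IOIteratorNext(iterator)))
        {
            // kIODisplayContrastKey comes from IOGraphicsTypes.h; checking the
            // returned kern_return_t here would show whether the driver accepted it.
            IODisplaySetFloatParameter(service, kNilOptions,
                                       CFSTR(kIODisplayContrastKey), level);
            IOObjectRelease(service);
            return;
        }
    }
}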

Windows 8: set as desktop background

I'm writing a small program to change the desktop background with one or two mouse clicks.
I know that I can right-click on any image file and set it as the desktop background.
And that is exactly where the problem starts: I can't find the proper entry in any DLL that would contain the entry "Set As Desktop Background" or even "New Desktop Background".
I know how I can create those entries in the registry, but I don't want to edit the registry for this. Rather, I would like to have it built right into my tiny program, so that with two clicks I could set any image file on my computer as the desktop background, from any folder or even from any connected drive, without having to return to the Personalization menu.
If any of you know where I can find the entries for the above-mentioned context menu strings, I would be very thankful.
This is just for personal use, not to sell or give away.
Thank you, Chris
P.S. Please forgive my bad English; I'm from a non-English-speaking European country.
If you look at, for example, HKEY_CLASSES_ROOT\SystemFileAssociations\.jpg\Shell\setdesktopwallpaper\Command
You'll notice that it has the DelegateExecute member set. This means that Windows will attempt to use the IExecuteCommand interface in the specified DLL. Reading up on what that does on MSDN, and attempting to emulate Explorer, I came up with this, which works.
I'm not sure why that Sleep() is needed though; I'd love it if anyone could elaborate on that.
#include <windows.h>
#include <shlobj.h>     // SHGetDesktopFolder, SHCreateShellItemArrayFromIDLists
#include <shobjidl.h>   // IExecuteCommand, IObjectWithSelection

void SetWallpaper(LPCWSTR path)
{
    const GUID CLSID_SetWallpaper = { 0xFF609CC7, 0xD34D, 0x4049, { 0xA1, 0xAA, 0x22, 0x93, 0x51, 0x7F, 0xFC, 0xC6 } };
    HRESULT hr;
    IExecuteCommand *executeCommand = nullptr;
    IObjectWithSelection *objectWithSelection = nullptr;
    IShellItemArray *shellItemArray = nullptr;
    IShellFolder *rootFolder = nullptr;
    LPITEMIDLIST idlist = nullptr;
    // Initialize COM, probably shouldn't be done in this function
    hr = CoInitialize(nullptr);
    if (SUCCEEDED(hr))
    {
        // Get the IExecuteCommand interface of the DLL
        hr = CoCreateInstance(CLSID_SetWallpaper, nullptr, CLSCTX_INPROC_SERVER, IID_IExecuteCommand, reinterpret_cast<LPVOID*>(&executeCommand));
        // Get the IObjectWithSelection interface
        if (SUCCEEDED(hr))
        {
            hr = executeCommand->QueryInterface(IID_IObjectWithSelection, reinterpret_cast<LPVOID*>(&objectWithSelection));
        }
        // Get the desktop folder so the path can be turned into an ID list
        if (SUCCEEDED(hr))
        {
            hr = SHGetDesktopFolder(&rootFolder);
        }
        if (SUCCEEDED(hr))
        {
            hr = rootFolder->ParseDisplayName(nullptr, nullptr, (LPWSTR)path, nullptr, &idlist, NULL);
        }
        if (SUCCEEDED(hr))
        {
            hr = SHCreateShellItemArrayFromIDLists(1, (LPCITEMIDLIST*)&idlist, &shellItemArray);
        }
        if (SUCCEEDED(hr))
        {
            hr = objectWithSelection->SetSelection(shellItemArray);
        }
        if (SUCCEEDED(hr))
        {
            hr = executeCommand->Execute();
        }
        // There is probably some event, or something to wait for here, but we
        // need to wait and relinquish control of the CPU, or the wallpaper won't
        // change.
        Sleep(2000);
        // Release interfaces and memory
        if (idlist)
        {
            CoTaskMemFree(idlist);
        }
        if (executeCommand)
        {
            executeCommand->Release();
        }
        if (objectWithSelection)
        {
            objectWithSelection->Release();
        }
        if (shellItemArray)
        {
            shellItemArray->Release();
        }
        if (rootFolder)
        {
            rootFolder->Release();
        }
        CoUninitialize();
    }
}
Edit: After doing some more research on this, for my own sake, I realized that stobject.dll actually just uses the IDesktopWallpaper interface, which belongs to CLSID_DesktopWallpaper:
http://msdn.microsoft.com/en-us/library/windows/desktop/hh706946(v=vs.85).aspx
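On Windows 8 and later you can use that interface directly and skip the IExecuteCommand plumbing and the Sleep(). A rough sketch, assuming a COM apartment is already initialized and ShObjIdl.h is available:

#include <windows.h>
#include <shobjidl.h>   // IDesktopWallpaper, CLSID_DesktopWallpaper (Windows 8+)

// Rough sketch using IDesktopWallpaper directly instead of the IExecuteCommand
// route above. Assumes CoInitialize has already been called.
HRESULT SetWallpaperDirect(LPCWSTR path)
{
    IDesktopWallpaper *wallpaper = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_DesktopWallpaper, nullptr, CLSCTX_ALL,
                                  IID_PPV_ARGS(&wallpaper));
    if (SUCCEEDED(hr))
    {
        // Passing nullptr for the monitor ID applies the image to all monitors.
        hr = wallpaper->SetWallpaper(nullptr, path);
        wallpaper->Release();
    }
    return hr;
}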

SimpleOpenNI: Multiple Kinects and enableScene()/sceneImage() in Processing

In Processing, I can successfully draw depth maps from 2 Kinects using SimpleOpenNI, but I'm now trying to draw 2 "scenes" (from enableScene() vs. enableDepth()). Both Kinects are detected, but when I draw the output I see the same scene drawn twice (whereas using enableDepth() always gave me 2 different depth images). Any ideas what I'm doing wrong? Thanks in advance.
/* --------------------------------------------------------------------------
 * SimpleOpenNI Multi Camera Test
 * --------------------------------------------------------------------------
 */
import SimpleOpenNI.*;

SimpleOpenNI cam1;
SimpleOpenNI cam2;

void setup()
{
    size(640 * 2 + 10, 480);
    // start OpenNI, loads the library
    SimpleOpenNI.start();
    // init the cameras
    cam1 = new SimpleOpenNI(0, this);
    cam2 = new SimpleOpenNI(1, this);
    // set the camera generators ** HAD TO REVERSE ORDER FOR BOTH KINECTS TO WORK
    // enable Scene
    if (cam2.enableScene() == false)
    {
        println("Can't open the scene for Camera 2");
        exit();
        return;
    }
    // enable depthMap generation
    if (cam1.enableScene() == false)
    {
        println("Can't open the scene for Camera 1");
        exit();
        return;
    }
    background(10, 200, 20);
}

void draw()
{
    // update the cams
    SimpleOpenNI.updateAll();
    image(cam1.sceneImage(), 0, 0);
    image(cam2.sceneImage(), 640 + 10, 0);
}
I've done another test using the sceneMap() functionality, but it looks like there is indeed an issue with SimpleOpenNI not updating properly internally:
/* --------------------------------------------------------------------------
 * SimpleOpenNI Multi Camera Test
 * --------------------------------------------------------------------------
 */
import SimpleOpenNI.*;

SimpleOpenNI cam1;
SimpleOpenNI cam2;

int numPixels = 640 * 480;
int[] sceneM1 = new int[numPixels];
int[] sceneM2 = new int[numPixels];
PImage scene1, scene2;

void setup()
{
    size(640 * 2 + 10, 480 * 2 + 10);
    // start OpenNI, loads the library
    SimpleOpenNI.start();
    // init the cameras
    cam1 = new SimpleOpenNI(0, this);
    cam2 = new SimpleOpenNI(1, this);
    // set the camera generators ** HAD TO REVERSE ORDER FOR BOTH KINECTS TO WORK
    // enable Scene
    if (cam2.enableScene() == false)
    {
        println("Can't open the scene for Camera 2");
        exit();
        return;
    }
    // cam2.enableDepth();//this fails when using only 1 bus
    // enable depthMap generation
    if (cam1.enableScene() == false)
    {
        println("Can't open the scene for Camera 1");
        exit();
        return;
    }
    cam1.enableDepth();
    scene1 = createImage(640, 480, RGB);
    scene2 = createImage(640, 480, RGB);
    background(10, 200, 20);
}

void draw()
{
    // update the cams
    SimpleOpenNI.updateAll();
    image(cam1.depthImage(), 0, 0);
    image(cam1.sceneImage(), 0, 0);
    cam1.sceneMap(sceneM1);
    cam2.sceneMap(sceneM2);
    updateSceneImage(sceneM1, scene1);
    updateSceneImage(sceneM2, scene2);
    image(scene1, 0, 490);
    image(scene2, 650, 490);
}

void updateSceneImage(int[] sceneMap, PImage sceneImage) {
    for (int i = 0; i < numPixels; i++) sceneImage.pixels[i] = sceneMap[i] * 255;
    sceneImage.updatePixels();
}
Using something like
cam1.update();
cam2.update();
rather than
SimpleOpenNI.updateAll();
doesn't change anything.
An issue was filed; hopefully it will be resolved.
In the meantime, try using OpenNI in a different language/framework.
OpenFrameworks has many similarities to Processing (and many differences as well, to be honest, but it's not rocket science).
Try the experimental ofxOpenNI addon to test multiple cameras; hopefully it will resolve your issue.

Using Core OpenGL to programmatically create a context with depth buffer: What am I doing wrong?

I'm trying to create an OpenGL context with a depth buffer using Core OpenGL. I then wish to display the OpenGL content via a CAOpenGLLayer. From what I've read, it seems I should be able to create the desired context with the following method:
I declare the following instance variables in the interface:
@interface TorusCAOpenGLLayer : CAOpenGLLayer
{
    //omitted code
    CGLPixelFormatObj pix;
    GLint pixn;
    CGLContextObj ctx;
}
Then in the implementation I override copyCGLContextForPixelFormat, which I believe should create the required context
- (CGLContextObj)copyCGLContextForPixelFormat:(CGLPixelFormatObj)pixelFormat
{
    CGLPixelFormatAttribute attrs[] =
    {
        kCGLPFAColorSize, (CGLPixelFormatAttribute)24,
        kCGLPFAAlphaSize, (CGLPixelFormatAttribute)8,
        kCGLPFADepthSize, (CGLPixelFormatAttribute)24,
        (CGLPixelFormatAttribute)0
    };
    NSLog(@"Pixel format error:%d", CGLChoosePixelFormat(attrs, &pix, &pixn)); // returns 0
    NSLog(@"Context error: %d", CGLCreateContext(pix, NULL, &ctx)); // returns 0
    NSLog(@"The context:%p", ctx); // returns same memory address as similar NSLog call in function below
    return ctx;
}
Finally I override drawInCGLContext to display the content.
-(void)drawInCGLContext:(CGLContextObj)glContext pixelFormat:(CGLPixelFormatObj)pixelFormat forLayerTime:(CFTimeInterval)timeInterval displayTime:(const CVTimeStamp *)timeStamp
{
    // Set the current context to the one given to us.
    CGLSetCurrentContext(glContext);
    int depth;
    NSLog(@"The context again:%p", glContext); // returns the same memory address as the NSLog in the previous function
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.5, 0.5, 1.0, 1.0, -1.0, 1.0);
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glEnable(GL_DEPTH_TEST);
    glGetIntegerv(GL_DEPTH_BITS, &depth);
    NSLog(@"%i bits depth", depth); // returns 0
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    //drawing code here
    // Call super to finalize the drawing. By default all it does is call glFlush().
    [super drawInCGLContext:glContext pixelFormat:pixelFormat forLayerTime:timeInterval displayTime:timeStamp];
}
The program compiles fine and displays the content, but without the depth testing. Is there something extra I have to do to get this to work? Or is my entire approach wrong?
Looks like I was overriding the wrong method. To obtain the required depth buffer, one should override copyCGLPixelFormatForDisplayMask: like so:
- (CGLPixelFormatObj)copyCGLPixelFormatForDisplayMask:(uint32_t)mask {
    CGLPixelFormatAttribute attributes[] =
    {
        kCGLPFADepthSize, 24,
        0
    };
    CGLPixelFormatObj pixelFormatObj = NULL;
    GLint numPixelFormats = 0;
    CGLChoosePixelFormat(attributes, &pixelFormatObj, &numPixelFormats);
    if (pixelFormatObj == NULL)
        NSLog(@"Error: Could not choose pixel format!");
    return pixelFormatObj;
}
Based on the code here.

Hand tracking not working after a reload of the OpenNI dynamic library

Our project (http://www.play4health.com/p4h_eng/) uses Ogre 3D on Ubuntu 11.04. Except for the core services, everything is based on a plugin architecture that takes advantage of the Ogre 3D plugin facilities.
In our plugin architecture, plugins can be:
Videogames
Interaction methods
Users configure their session by creating (videogame, interaction method) tuples. The flow of a session is:
* The user loads his session.
* The user clicks one of the tuples for the session and plays the videogame with a specific interaction method.
* Repeat until all activities of the session are done.
Plugins are loaded/unloaded dynamically on demand.
One of these interaction methods is hand tracking using OpenNI. What is the problem?
* The first time the OpenNI plugin is loaded, everything works perfectly.
* The next time the OpenNI plugin has to be loaded, the system is able to detect gestures but not do hand tracking. Note that all plugins are executed in the same process. Right now the only solution is to reboot the platform.
This is the code to init and release OpenNI in our plugin:
bool IPKinectPlugin::onInitialise()
{
    mHandPointer.mId = "KinectHandPointer";
    mHandPointer.mHasAbsolute = true;
    mHandPointer.mHasRelative = false;

    XnStatus nRetVal = XN_STATUS_OK;
    nRetVal = gContext.InitFromXmlFile(String(this->getPluginInfo()->getResPath() + "SamplesConfig.xml").c_str());
    CHECK_RC(nRetVal, bContext, "InitFromXml");

#if SHOW_DEPTH
    nRetVal = gContext.FindExistingNode(XN_NODE_TYPE_DEPTH, gDepthGenerator);
    bDepthGenerator = (nRetVal != XN_STATUS_OK);
    if (bDepthGenerator)
    {
        nRetVal = gDepthGenerator.Create(gContext);
        CHECK_RC(nRetVal, bDepthGenerator, "Find Depth generator");
    }
#endif

    nRetVal = gContext.FindExistingNode(XN_NODE_TYPE_USER, gUserGenerator);
    bUserGenerator = (nRetVal != XN_STATUS_OK);
    if (/*bUserGenerator*/false)
    {
        nRetVal = gUserGenerator.Create(gContext);
        CHECK_RC(nRetVal, bUserGenerator, "Find user generator");
    }

    nRetVal = gContext.FindExistingNode(XN_NODE_TYPE_GESTURE, gGestureGenerator);
    bGestureGenerator = (nRetVal != XN_STATUS_OK);
    if (bGestureGenerator)
    {
        nRetVal = gGestureGenerator.Create(gContext);
        CHECK_RC(nRetVal, bGestureGenerator, "Find gesture generator");
        XnCallbackHandle hGestureCallbacks;
        gGestureGenerator.RegisterGestureCallbacks(gestureRecognized, gestureProcess, 0, hGestureCallbacks);
    }

    nRetVal = gContext.FindExistingNode(XN_NODE_TYPE_HANDS, gHandsGenerator);
    bHandsGenerator = (nRetVal != XN_STATUS_OK);
    if (bHandsGenerator)
    {
        nRetVal = gHandsGenerator.Create(gContext);
        CHECK_RC(nRetVal, bHandsGenerator, "Find hands generator");
        XnCallbackHandle hHandsCallbacks;
        gHandsGenerator.RegisterHandCallbacks(handsNew, handsMove, handsLost, 0, hHandsCallbacks);
    }

    nRetVal = gContext.FindExistingNode(XN_NODE_TYPE_DEVICE, gDevice);
    bDevice = (nRetVal != XN_STATUS_OK);

    gContext.RegisterToErrorStateChange(onErrorStateChanged, NULL, hDummyCallbackHandle);

    // Prepare the texture for the webcam
    if (bGenerateRGBTexture)
        mWebcamTexture = KinectTools::createDepthTexture("KinectWebCamTexture", sPluginName);

    return true;
}
//-----------------------------------------------------------------------------
bool IPKinectPlugin::onShutdown()
{
    if (bContext)
    {
        if (bHandsGenerator)
        {
            gHandsGenerator.StopTrackingAll();
        }
        if (bGestureGenerator)
        {
            gGestureGenerator.RemoveGesture(GESTURE_TO_USE);
            gGestureGenerator.RemoveGesture(GESTURE_TO_START);
        }
        gContext.StopGeneratingAll();
        gContext.Shutdown();
    }
    return true;
}
Any idea about this issue? Is there anything wrong with this code?
Maybe you already found a solution in the meantime...
I normally work with the Java wrapper, but one difference I see from my code is that I call context.startGeneratingAll() after creating the generators (Depth, Hands and so on). I also had problems when I did this multiple times at startup. Another difference is that I call context.release() at shutdown.
My procedure is normally:
Init config (License, Nodes, settings)
Create generators
Start Generating All
Run your code ...
Stop Generating ALL
Context release
From OpenNI Documentation
XN_C_API void XN_C_DECL xnShutdown ( XnContext * pContext )
Shuts down an OpenNI context, destroying all its nodes. Do not call
any function of this context or any correlated node after calling this
method. NOTE: this function destroys the context and all the nodes it
holds and so should be used very carefully. Normally you should just
call xnContextRelease()
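Applied to the plugin above, that ordering would look roughly like this. It is only a sketch against the OpenNI 1.x C++ wrapper, untested in your plugin setup; if your OpenNI version's wrapper doesn't expose Context::Release(), the underlying C call is xnContextRelease(), as quoted above.

// Sketch: the suggested ordering applied to the plugin code above.
// 1) In onInitialise(), after the generators have been found/created and the
//    callbacks registered, explicitly start generation:
//        nRetVal = gContext.StartGeneratingAll();
//        CHECK_RC(nRetVal, bContext, "StartGeneratingAll");
// 2) In onShutdown(), release the context instead of destroying it, per the
//    documentation note above, so a later load of the plugin in the same
//    process does not reuse a destroyed context:
bool IPKinectPlugin::onShutdown()
{
    if (bContext)
    {
        if (bHandsGenerator)
        {
            gHandsGenerator.StopTrackingAll();
        }
        if (bGestureGenerator)
        {
            gGestureGenerator.RemoveGesture(GESTURE_TO_USE);
            gGestureGenerator.RemoveGesture(GESTURE_TO_START);
        }
        gContext.StopGeneratingAll();
        gContext.Release(); // instead of gContext.Shutdown()
    }
    return true;
}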