I am looking for a way to find the intersection point of a cube and a line, so I used
GLES20.glReadPixels(touchX, touchY, 1, 1, GLES20.GL_DEPTH_COMPONENT, GLES20.GL_FLOAT, zz);
and printed zz, but the result was 0. So how can I get the depth buffer value of a cube when I touch it (actually on the 2D screen)? I am using GLES20 on Android API level 15. My code is below.
// 1x1 read targets: 4 bytes for one RGBA pixel, 4 bytes for one depth float
ByteBuffer PixelBuffer = ByteBuffer.allocateDirect(4);
ByteBuffer zBuffer = ByteBuffer.allocateDirect(4);
PixelBuffer.order(ByteOrder.nativeOrder());
PixelBuffer.position(0);
zBuffer.order(ByteOrder.nativeOrder());
zBuffer.position(0);
FloatBuffer zz = zBuffer.asFloatBuffer();
// the color read works; the depth read below is the one that returns 0
GLES20.glReadPixels(touchX, touchY, 1, 1, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, PixelBuffer);
GLES20.glReadPixels(touchX, touchY, 1, 1, GLES20.GL_DEPTH_COMPONENT, GLES20.GL_FLOAT, zz);
By the way, color picking works fine.
Thanks!
You forgot to prepare the target framebuffer for reading... Try it like this:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // tightly packed rows on upload
glPixelStorei(GL_PACK_ALIGNMENT, 1);   // tightly packed rows on readback
glReadPixels(0, 0, deviceWidth, deviceHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
Or just write a simple shader that renders your z-buffer data into an FBO, something like
gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0);
and then read the color information from this FBO...
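A minimal sketch of that readback in Java, assuming the FBO holding the depth-as-color texture is still bound. Here viewHeight is a hypothetical variable holding the surface height; the flip is needed because glReadPixels counts y from the bottom-left while touch events count from the top-left:
// Read one RGBA pixel at the touch location from the depth-as-color FBO.
ByteBuffer pixel = ByteBuffer.allocateDirect(4);
pixel.order(ByteOrder.nativeOrder());
GLES20.glReadPixels(touchX, viewHeight - touchY, 1, 1,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixel);
// gl_FragCoord.z was written to the RGB channels, so the red byte holds
// the depth quantized to 8 bits.
float depth = (pixel.get(0) & 0xFF) / 255.0f;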
(HalconDotNET)
I want to render an image from a visualized match result with a colored point cloud.
In the example program find_surface_model_with_edges_simple.hdev, after running find_surface_model() you receive a pose; with this pose you can visualize how the surface model matched in the scene using visualize_object_model_3d(). From such a visualization I want to create a rendered image, to display the matching result in an application I am making.
To render a colored point cloud I use:
render_object_model_3d (Image, ObjectModel3DSceneRaw_ccs, camPar, Pose_0, ['red_channel_attrib','green_channel_attrib','blue_channel_attrib'], ['&overlay_red','&overlay_green','&overlay_blue'])
To render a match result I use:
render_object_model_3d (Image, [ObjectModel3DSceneRaw_ccs, ObjectModel3D], camPar, [Pose_0, detectedPose], ['color_0', 'color_1'], ['white', 'red'])
I cannot pass both objects to this function and still keep the RGB attributes; Halcon gives parameter errors. I would also like to specify the color of the object model.
I also tried to use a 3D scene:
create_scene_3d (Scene3D)
add_scene_3d_camera (Scene3D, camPar, CameraIndex)
set_scene_3d_camera_pose (Scene3D, CameraIndex, detectedPose)
add_scene_3d_light (Scene3D, PoseInvert[0:2], 'point_light', LightIndex)
* The scene
add_scene_3d_instance (Scene3D, ObjectModel3DSceneRaw_ccs, detectedPose, InstanceIndex)
set_scene_3d_instance_param (Scene3D, InstanceIndex, ['red_channel_attrib','green_channel_attrib','blue_channel_attrib'], ['&overlay_red','&overlay_green','&overlay_blue'])
* The transformed objectModel
add_scene_3d_instance (Scene3D, ObjectModel3DRigidTrans, Pose_0, InstanceIndex2)
set_scene_3d_instance_param (Scene3D, InstanceIndex2, 'color', 'red')
* Display
display_scene_3d (WindowHandle, Scene3D, CameraIndex)
But this only shows the scene, not the matched object model.
Does anyone know what I'm doing wrong?
This hack worked well enough for my purposes: render the colored scene and the matched model as two separate images, then blend them with AddImage:
// HT and hop are aliases for HalconDotNET's HTuple and HOperatorSet.
HT empty = new HT();
hop.CreatePose(0, 0, 0, 0, 0, 0, "Rp+T", "gba", "point", out HT pose_0);
HT camParam = new HT(0.008, 0, 0, 0, 0, 0, 5.2e-006, 5.2e-006, 960, 600, 1920, 1200);
HT renderGenParam = new HT("red_channel_attrib", "green_channel_attrib", "blue_channel_attrib");
HT renderGenValue = new HT("red", "green", "blue");
hop.ReadObjectModel3d(om3Path, "m", empty, empty, out HT om3, out HT status);
// current_OM3 holds the colored scene; avg_pose is the detected match pose.
hop.RenderObjectModel3d(out HObject sceneImage, current_OM3, camParam, pose_0, renderGenParam, renderGenValue);
hop.RenderObjectModel3d(out HObject objectImage, om3, camParam, avg_pose, "color_0", "green");
// Blend the two renderings (60% scene) and crop to the region of interest.
hop.AddImage(sceneImage, objectImage, out HObject resultImage, 0.6, 0);
hop.CropRectangle1(resultImage, out HObject resultCropped, 250, 400, 935, 1450);
hop.WriteImage(resultCropped, "tiff", 0, @"./testImage.tiff");
Not a real solution, though.
I am trying to create jigsaw puzzle shapes using p5.js. After creating the puzzle shapes, I want to cut areas from the main image into pieces. For that I have the option of using get() or copy():
But both of them take a fixed height and width as parameters. How can I copy a custom area like the following shapes:
https://editor.p5js.org/techty/sketches/h7qwatZRb
// mask buffer: start fully opaque, then erase the puzzle shape out of it
let cutout = createGraphics(w, h);
cutout.background(255, 255);
cutout.blendMode(REMOVE);
// draw the puzzle-piece shape on cutout (a sketch of this step follows below);
// REMOVE erases the covered pixels, leaving a shape-shaped transparent hole

// copy of the source image with the mask punched through it
let newshapeimagegraphic = createGraphics(w, h);
newshapeimagegraphic.image(myImg, 0, 0);
newshapeimagegraphic.blendMode(REMOVE);
// the still-opaque part of cutout erases everything outside the shape
newshapeimagegraphic.image(cutout, 0, 0);
image(newshapeimagegraphic, 0, 0);
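For completeness, a minimal sketch of the "draw shape on cutout" step above; the vertices and control points here are made-up placeholders, and any opaque drawing works, since REMOVE only cares about which pixels are covered:
cutout.fill(0);
cutout.noStroke();
cutout.beginShape();
cutout.vertex(10, 10);
cutout.vertex(90, 10);
// one curved jigsaw tab on the right edge (control points are arbitrary)
cutout.bezierVertex(115, 30, 115, 60, 90, 90);
cutout.vertex(10, 90);
cutout.endShape(CLOSE);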
I am trying to use an alpha mask filter to apply a texture to a canvas element, but I cannot seem to get things to work. I have a base image, which is a flat white color, to which I want to apply a color filter at runtime based on a user's selection, for example:
bitmap = new createjs.Bitmap(image);
bitmap.filters = [
new createjs.ColorFilter(0,0, 0.5, 1, 0, 0, 120, 0)
];
bitmap.cache(0, 0, 500, 500, 2);
I then want to use a second image, a texture PNG, to add shading texture to the first one. Looking over the docs, it would seem that I need to use an AlphaMaskFilter, but that does not seem to work and nothing is rendered onto the canvas. For example:
//filterImage contains the transparent image which has a shaded texture
var bitmap2 = new createjs.Bitmap(filterImage);
bitmap2.cache(0, 0, 500, 500, 2);
var bitmap = new createjs.Bitmap(image);
bitmap.filters = [
new createjs.ColorFilter(0,0, 0.5, 1, 0, 0, 120, 0),
new createjs.AlphaMaskFilter(bitmap2.cacheCanvas)
];
bitmap.cache(0, 0, 500, 500, 2);
Can someone point me in the right direction here, or tell me whether I am trying to do something that is just not possible with that filter?
I'm working with OpenGL ES on Android.
Now I've hit a problem: I defined a float array, which is to be passed to the fragment shader.
float[] data = new float[texWidth*texHeight];
// test data
for (int i = 0; i < data.length; i++) {
data[i] = 0.123f;
}
1. initTexture:
glGenTextures...
glBindTexture...
glTexParameteri...
FloatBuffer fb = BufferUtils.array2FloatBuffer(data);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, fb);
2.FBO:
glGenFramebuffers...
glBindFramebuffer...
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texId, 0);
3.onDrawFrame:
glUseProgram(mProgram);...
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);...
IntBuffer fb = BufferUtils.iBufferAllocateDirect(texWidth*texHeight);
glReadPixels(0, 0, texWidth, texHeight, GL_RGBA, GL_UNSIGNED_BYTE, fb);
System.out.println(Integer.toHexString(fb.get(0)));
System.out.println(Integer.toHexString(fb.get(1)));
System.out.println(Integer.toHexString(fb.get(2)));
fragment shader:
precision mediump float;
uniform sampler2D sTexture;
varying vec2 vTexCoord;
void main()
{
vec4 tex = texture2D(sTexture, vTexCoord.st);
vec4 color = tex;
gl_FragColor = color;
}
So, how can I get back the float data (the 0.123f I defined above) with glReadPixels? What I get now is ff000000 (ABGR), so I suspect the shader doesn't receive the data this way. Can someone tell me why, and how to deal with it? I am a newbie at this and would really appreciate any help.
Your main problem happens before glReadPixels(). The primary issue is with the way you use glTexImage2D():
FloatBuffer fb = BufferUtils.array2FloatBuffer(data);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
GL_RGBA, GL_UNSIGNED_BYTE, fb);
The GL_UNSIGNED_BYTE value for the 8th argument specifies that the data passed in consists of unsigned bytes. However, the values in your buffer are floats. So your float values are interpreted as bytes, which can't possibly end well because they are completely different formats, with different sizes and memory layouts.
Now, you might be tempted to do this instead:
FloatBuffer fb = BufferUtils.array2FloatBuffer(data);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight, 0,
GL_RGBA, GL_FLOAT, fb);
This would work in desktop OpenGL, which supports implicit format conversions as part of specifying texture data. But it is not supported in OpenGL ES. In ES 2.0, GL_FLOAT is not even a legal value for the format argument. In ES 3.0, it is legal, but only for internal formats that actually store floats, like GL_RGBA16F or GL_RGBA32F. It is an error to use it in combination with the GL_RGBA internal format (3rd argument).
So unless you use float textures in ES 3.0 (which consume much more memory), you need to convert your original data to bytes. If you have float values between 0.0 and 1.0, you can do that by multiplying them by 255 and rounding to the nearest integer.
Then you can read them back also as bytes with glReadPixels(), and should get the same values again.
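A minimal sketch of that conversion, reusing the names from the question (data, texWidth, texHeight); a plain ByteBuffer replaces the asker's BufferUtils helper:
// Quantize each float in [0.0, 1.0] to a byte and replicate it into RGBA.
byte[] bytes = new byte[texWidth * texHeight * 4];
for (int i = 0; i < texWidth * texHeight; i++) {
    byte v = (byte) Math.round(data[i] * 255.0f); // 0.123f -> 31
    bytes[4 * i]     = v;           // R
    bytes[4 * i + 1] = v;           // G
    bytes[4 * i + 2] = v;           // B
    bytes[4 * i + 3] = (byte) 255;  // A
}
ByteBuffer bb = ByteBuffer.allocateDirect(bytes.length).order(ByteOrder.nativeOrder());
bb.put(bytes).position(0);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, texWidth, texHeight,
        0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, bb);
// glReadPixels then returns 31 (0x1f) per channel, i.e. about 0.1216.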
I have this method for performing the ortho projection:
void myGL::ApplyOrtho(float maxX, float maxY) const
{
float a = 1.0f / maxX;
float b = 1.0f / maxY;
float ortho[16] = {
a, 0, 0, 0,
0, b, 0, 0,
0, 0, -1, 0,
0, 0, 0, 1};
GLint projectionUniform = glGetUniformLocation(m_simpleProgram, "Projection");
glUniformMatrix4fv(projectionUniform, 1, 0, &ortho[0]);
}
It works fine for the iPad screen when I do this:
ApplyOrtho(2, 2*1024/768);
Here's my rendered image:
However, when I rotate to landscape, it looks like this:
My assumption is that this happens because ApplyOrtho sets a fixed projection; that projection does not rotate while the image rotates within it, so the image is displayed fatter.
Incidentally, this is the rotation:
void myGL::ApplyRotation(float degrees) const
{
float radians = degrees * 3.14159f / 180.0f;
float s = std::sin(radians);
float c = std::cos(radians);
float zRotation[16] = {
c, s, 0, 0,
-s, c, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1
};
GLint modelviewUniform = glGetUniformLocation(m_simpleProgram, "Modelview");
glUniformMatrix4fv(modelviewUniform, 1, 0, &zRotation[0]);
}
It is used right before drawing.
So I experimented and tried this at the same time I rotate:
ApplyOrtho(2*1024/768, 2);
However this has no effect whatsoever, even though the rotation is definitely happening at the same time. My image remains "fat".
Is my interpretation of why the fatness is happening correct?
How to handle the orthographic projection when auto-rotating screen?
UPDATE: I also tried this on an iPhone, using the screen's 2/3 dimensions (not an iPhone 5) with ApplyOrtho(2, 3) and ApplyOrtho(3, 2), but the "fat" triangle in landscape remains.
Also: the viewport is set up just once, before the first ortho:
glViewport(0, 0, width, height);
Where width and height are the dimensions of the Portrait screen.
The cause of the above discrepancies is that the orthographic projection does not match the width-to-height ratio of the screen, so a unit along X does not cover the same number of pixels as a unit along Y. Making the orthographic ratio match the viewport ratio resolves the issue; once it does, the image keeps exactly the same shape and size when the screen rotates.
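A minimal sketch of that fix, assuming a hypothetical OnSurfaceChanged callback that fires on rotation; note the explicit floats, since an integer expression like 2*1024/768 truncates to 2 in C++:
// Recompute viewport and projection whenever the surface size changes,
// keeping the ortho extents proportional to the new width/height.
void myGL::OnSurfaceChanged(int width, int height)
{
    glViewport(0, 0, width, height);
    float aspect = (float)width / (float)height;
    if (aspect >= 1.0f)
        ApplyOrtho(2.0f * aspect, 2.0f);  // landscape: widen X
    else
        ApplyOrtho(2.0f, 2.0f / aspect);  // portrait: heighten Y
}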