Is an AlphaMaskFilter correct in this case? - createjs

I am trying to use an alpha mask filter to apply a texture to a canvas element, but I cannot seem to get it to work. I have a base image, which is flat white, to which I want to apply a color filter at runtime based on the user's selection, for example:
bitmap = new createjs.Bitmap(image);
bitmap.filters = [
    new createjs.ColorFilter(0, 0, 0.5, 1, 0, 0, 120, 0)
];
bitmap.cache(0, 0, 500, 500, 2);
I then want to use a second image, a transparent texture PNG, to add shading to the first one. Looking over the docs it seems I need to use an AlphaMaskFilter, but that does not appear to work: nothing is rendered onto the canvas. For example:
// filterImage contains the transparent image which has a shaded texture
var bitmap2 = new createjs.Bitmap(filterImage);
bitmap2.cache(0, 0, 500, 500, 2);
var bitmap = new createjs.Bitmap(image);
bitmap.filters = [
    new createjs.ColorFilter(0, 0, 0.5, 1, 0, 0, 120, 0),
    new createjs.AlphaMaskFilter(bitmap2.cacheCanvas)
];
bitmap.cache(0, 0, 500, 500, 2);
Can someone point me in the right direction here, or tell me if I am trying to do something that simply is not possible with that filter?
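For reference, here is a stripped-down version of what I am attempting, with both filters applied in a single cache pass (just a sketch; it assumes image and filterImage are fully loaded and that a stage variable already exists):
// Sketch: tint the base image, then mask it with the texture's alpha channel.
// AlphaMaskFilter copies the alpha of the mask onto the target, so only the
// areas where filterImage is opaque remain visible in the result.
var maskBitmap = new createjs.Bitmap(filterImage);
maskBitmap.cache(0, 0, 500, 500); // no scale factor, so cacheCanvas lines up 1:1 with the target

var bitmap = new createjs.Bitmap(image);
bitmap.filters = [
    new createjs.ColorFilter(0, 0, 0.5, 1, 0, 0, 120, 0),
    new createjs.AlphaMaskFilter(maskBitmap.cacheCanvas)
];
bitmap.cache(0, 0, 500, 500); // filters only take effect when cache() or updateCache() is called

stage.addChild(bitmap);
stage.update();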

Related

Render RGB pointcloud together with surface match result (HalconDotNET)

I want to render an image from a visualized match result together with a colored pointcloud.
In the example program find_surface_model_with_edges_simple.hdev, running find_surface_model() returns a pose; with this pose you can visualize how the surface model matched the scene using visualize_object_model_3d(). From a visualization like that I want to create a rendered image, so I can display the matching result in an application I am building.
To render a colored pointcloud I use:
render_object_model_3d (Image, ObjectModel3DSceneRaw_ccs, camPar, Pose_0, ['red_channel_attrib','green_channel_attrib','blue_channel_attrib'], ['&overlay_red','&overlay_green','&overlay_blue'])
To render a match result I use:
render_object_model_3d (Image, [ObjectModel3DSceneRaw_ccs, ObjectModel3D], camPar, [Pose_0, detectedPose], ['color_0', 'color_1'], ['white', 'red'])
I cannot pass both objects to this call and still keep the RGB attributes; Halcon raises parameter errors. I would also like to specify the color of the object model.
I also tried using a 3D scene:
create_scene_3d (Scene3D)
add_scene_3d_camera (Scene3D, camPar, CameraIndex)
set_scene_3d_camera_pose (Scene3D, CameraIndex, detectedPose)
add_scene_3d_light (Scene3D, PoseInvert[0:2], 'point_light', LightIndex)
* The scene
add_scene_3d_instance (Scene3D, ObjectModel3DSceneRaw_ccs, detectedPose, InstanceIndex)
set_scene_3d_instance_param (Scene3D, InstanceIndex, ['red_channel_attrib','green_channel_attrib','blue_channel_attrib'], ['&overlay_red','&overlay_green','&overlay_blue'])
* The transformed objectModel
add_scene_3d_instance (Scene3D, ObjectModel3DRigidTrans, Pose_0, InstanceIndex2)
set_scene_3d_instance_param (Scene3D, InstanceIndex2, 'color', 'red')
* Display
display_scene_3d (WindowHandle, Scene3D, CameraIndex)
But this only shows the scene, not the matched object model.
Does anyone know what I'm doing wrong?
This hack worked well enough for my purposes:
HT empty = new HT();
hop.CreatePose(0, 0, 0, 0, 0, 0, "Rp+T", "gba", "point", out HT pose_0);
HT camParam = new HT(0.008, 0, 0, 0, 0, 0, 5.2e-006, 5.2e-006, 960, 600, 1920, 1200);
HT renderGenParam = new HT("red_channel_attrib", "green_channel_attrib", "blue_channel_attrib");
HT renderGenValue = new HT("red", "green", "blue");
// Render the colored scene and the matched model separately, then blend the two images.
hop.ReadObjectModel3d(om3Path, "m", empty, empty, out HT om3, out HT status);
hop.RenderObjectModel3d(out HObject sceneImage, current_OM3, camParam, pose_0, renderGenParam, renderGenValue);
hop.RenderObjectModel3d(out HObject objectImage, om3, camParam, avg_pose, "color_0", "green");
hop.AddImage(sceneImage, objectImage, out HObject resultImage, 0.6, 0);
hop.CropRectangle1(resultImage, out HObject resultCropped, 250, 400, 935, 1450);
hop.WriteImage(resultCropped, "tiff", 0, @"./testImage.tiff");
Not a real solution, though.

VBA - How do I find the name of a shape created from an effect?

I was wondering if there is a more specific way to rename the shape extracted from a contour effect than using ActiveLayer.Shapes(2)? The main thing I don't like about this method is that it is general, and I'm afraid that somewhere down the road it might not be Shapes(2) anymore, causing issues. My hope is to refer to it by name, but I don't know what that name is since the shape is created via an effect.
Is there by chance a function or something to look up an unknown shape's name?
I found .FindShape, but I couldn't get it to work and I'm not sure it's what I actually need in this instance. Any help is appreciated.
'Create Rectangle
Set Rect = ActiveLayer.CreateRectangle(1, 1, 0, 0)

'Apply .1" Outside Contour.
Set Contour1 = ActiveLayer.Shapes(1).CreateContour(cdrContourOutside, 0.1, 1, cdrDirectFountainFillBlend, CreateCMYKColor(75, 68, 65, 90), CreateCMYKColor(0, 0, 0, 100), CreateCMYKColor(0, 0, 0, 100), 0, 0, cdrContourSquareCap, cdrContourCornerMiteredOffsetBevel, 15#)

'Select the contour group together with the original shape and break them apart.
ActiveDocument.CreateSelection Contour1.Contour.ContourGroup, ActiveLayer.Shapes(1)
ActiveSelection.Separate

'Rename the separated contour shape (currently found only by its index).
ActiveLayer.Shapes(2).ObjectData("Name").Value = "Renamed 2"
End Sub
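One direction I am exploring (only a sketch, not verified across CorelDRAW versions; the variable name contourShape is mine) is to hold a reference to the shape the effect creates and name it directly, instead of indexing ActiveLayer.Shapes(2):
'Sketch: keep a reference to the contour group returned by the effect and name it,
'rather than relying on its position in ActiveLayer.Shapes.
Dim contourShape As Shape
Set contourShape = Contour1.Contour.ContourGroup    'the shape created by the contour effect
contourShape.ObjectData("Name").Value = "Renamed 2" 'name it before separating
ActiveDocument.CreateSelection contourShape, ActiveLayer.Shapes(1)
ActiveSelection.Separate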

Fit text in to a shape using gd library

I am trying to put text of any length (up to 15 characters) into a shape like the one in the image below, using the GD library. The width of the shape should expand or shrink according to the text length. I know how to write simple text with GD, but I am stuck on how to do this.
I am trying something like this:
<?php
// Set the content-type
header('Content-Type: image/png');
$im = imagecreate(400, 300);
// Background color
$white = imagecolorallocate($im, 255, 255, 255);
// font color
$black = imagecolorallocate($im, 0, 0, 0);
imagefilledrectangle($im, 0, 0, 399, 29, $white);
// text string
$text = 'ABCDEFG';
$font = 'arial.ttf';
// Add the text
imagettftext($im, 20, 0, 10, 20, $black, $font, $text);
imagepng($im);
imagedestroy($im);
?>
I have managed to render text to an image; now I want to fit that text into a shape, so that when the text gets longer the width of the shape grows as well, and I am stuck on how to do that.
How can I achieve this? Any ideas?
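Here is a rough sketch of the measuring step (assuming arial.ttf sits next to the script; the padding value is arbitrary): imagettfbbox() returns the bounding box of the rendered text, and that measurement can drive the width of the image and of the shape before anything is drawn.
<?php
header('Content-Type: image/png');

$text = 'ABCDEFG';
$font = 'arial.ttf';   // path to a TTF font, assumed to exist
$size = 20;
$padding = 20;         // space around the text

// Measure the text first; imagettfbbox() returns the 8 corner coordinates of its bounding box.
$box = imagettfbbox($size, 0, $font, $text);
$textWidth  = abs($box[2] - $box[0]);
$textHeight = abs($box[7] - $box[1]);

// Size the image (and therefore the shape) from the measurement.
$imgWidth  = $textWidth + 2 * $padding;
$imgHeight = $textHeight + 2 * $padding;

$im    = imagecreatetruecolor($imgWidth, $imgHeight);
$white = imagecolorallocate($im, 255, 255, 255);
$black = imagecolorallocate($im, 0, 0, 0);
imagefilledrectangle($im, 0, 0, $imgWidth - 1, $imgHeight - 1, $white);

// Draw the shape outline at the computed size, then the text inside it.
imagerectangle($im, 5, 5, $imgWidth - 6, $imgHeight - 6, $black);
imagettftext($im, $size, 0, $padding, $padding + $textHeight, $black, $font, $text);

imagepng($im);
imagedestroy($im);
?>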

How do I get the frame of visible content from SKCropNode?

It appears that, in SpriteKit, when I use a mask in an SKCropNode to hide some content, the mask does not change the frame calculated by calculateAccumulatedFrame. I'm wondering if there is any way to calculate the visible frame.
A quick example:
import SpriteKit
let par = SKCropNode()
let bigShape = SKShapeNode(rect: CGRect(x: 0, y: 0, width: 100, height: 100))
bigShape.fillColor = UIColor.redColor()
bigShape.strokeColor = UIColor.clearColor()
par.addChild(bigShape)
let smallShape = SKShapeNode(rect: CGRect(x: 0, y: 0, width: 20, height: 20))
smallShape.fillColor = UIColor.greenColor()
smallShape.strokeColor = UIColor.clearColor()
par.maskNode = smallShape
par.calculateAccumulatedFrame() // returns (x=0, y=0, width=100, height=100)
I expected par.calculateAccumulatedFrame() to return (x=0, y=0, width=20, height=20) based on the crop node's mask.
I thought I could write the function myself as an extension that reimplements calculateAccumulatedFrame with support for SKCropNodes and their masks, but it occurred to me that I would also need to consider the alpha of the mask to determine whether there is actual content that grows the frame. That sounds difficult.
Is there an easy way to calculate this?
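In case it helps to frame the problem, here is the purely geometric version of the extension I had in mind (written in current Swift syntax; it intersects the children's accumulated frame with the mask's accumulated frame in the crop node's own coordinate space, and deliberately ignores the mask's per-pixel alpha, so it only gives an upper bound on the truly visible area):
import SpriteKit

extension SKCropNode {
    // Frame of the visible content, in this crop node's own coordinate space.
    // Fully transparent pixels inside the mask's bounds still count, so this
    // is only an upper bound on what actually gets drawn.
    func visibleContentFrame() -> CGRect {
        // Union of the children's accumulated frames (reported in this node's space).
        var content = CGRect.null
        for child in children {
            content = content.union(child.calculateAccumulatedFrame())
        }
        guard let mask = maskNode else { return content }
        return content.intersection(mask.calculateAccumulatedFrame())
    }
}

// For the example above this returns (x=0, y=0, width=20, height=20).
let visible = par.visibleContentFrame()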

glReadPixels GL_DEPTH_COMPONENT returns 0?

I am trying to find the intersection point of a cube and a line, so I used
GLES20.glReadPixels(touchX, touchY, 1, 1, GLES20.GL_DEPTH_COMPONENT, GLES20.GL_FLOAT, zz);
and logged zz, but the result was 0. How can I get the depth-buffer value of the cube when I touch it (actually the 2D screen)? I am using GLES20 and Android API level 15. My code is below.
// 1x1 read-back buffers for the color and depth values under the touch point
ByteBuffer PixelBuffer = ByteBuffer.allocateDirect(4);
ByteBuffer zBuffer = ByteBuffer.allocateDirect(4);
PixelBuffer.order(ByteOrder.nativeOrder());
PixelBuffer.position(0);
zBuffer.order(ByteOrder.nativeOrder());
zBuffer.position(0);
FloatBuffer zz = zBuffer.asFloatBuffer();

// The color read works; the depth read always returns 0.
GLES20.glReadPixels(touchX, touchY, 1, 1, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, PixelBuffer);
GLES20.glReadPixels(touchX, touchY, 1, 1, GLES20.GL_DEPTH_COMPONENT, GLES20.GL_FLOAT, zz);
By the way, color picking works fine.
Thanks!
You forgot to prepare the target framebuffer for reading... Try something like this:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0, 0, deviceWidth, deviceHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
Or just write a simple shader and render your z-buffer data into your FBO, something like
gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0);
and then read the color information from this FBO...
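For completeness, here is a minimal sketch of that shader-based route (the names are placeholders; it assumes the cube is re-rendered into the currently bound framebuffer with this fragment shader): the depth value ends up in the color channels and can be read back exactly like the color pick.
// Fragment shader that writes window-space depth into the color output.
private static final String DEPTH_FRAGMENT_SHADER =
        "precision mediump float;\n" +
        "void main() {\n" +
        "    gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0);\n" +
        "}\n";

// After drawing the scene with this shader, read the pixel under the touch point.
// Note: glReadPixels expects GL window coordinates, i.e. y measured from the bottom edge.
ByteBuffer depthPixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
GLES20.glReadPixels(touchX, touchY, 1, 1, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, depthPixel);
float depth = (depthPixel.get(0) & 0xFF) / 255.0f;  // depth in [0, 1], limited to 8-bit precision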