VRML IndexedLineSet thickness - vrml

I want to increase the IndexedLineSet thickness. Here is my code:
geometry IndexedLineSet {
  coord DEF Line Coordinate {
    point [0 0 0, 0 0 0]
  }
  coordIndex [ 0, 1, -1 ]
}
I have tried lineWidth, but it doesn't work. Is there any other attribute I can use?
Thanks.

Unfortunately, as I understand it (it's been a long while), if you want to use an IndexedLineSet you can't control the thickness. There's no thickness attribute for IndexedLineSet, and the rendered line width is implementation dependent. You'd have to use boxes or extruded circles/ellipses to get a visible thickness.
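If you go that route, one option is to replace the line with an Extrusion whose spine follows the two end points and whose crossSection gives it visible thickness. A minimal sketch (the 0.01 cross-section size, the color, and the 1 0 0 end point are placeholder values, not taken from the question):
Shape {
  appearance Appearance {
    material Material { emissiveColor 1 0 0 }
  }
  geometry Extrusion {
    crossSection [ 0.01 0.01, 0.01 -0.01, -0.01 -0.01, -0.01 0.01, 0.01 0.01 ]
    spine [ 0 0 0, 1 0 0 ]   # replace with the line's end points
  }
}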

Related

Creating a mesh within spherical shell with gmsh 4.7.1

I'm trying to use gmsh 4.7.1 to create a mesh within a 3D volume that is a sphere with a concentric spherical hole (in other words, a spherical shell). To do so, I wrote the following .geo file:
// Gmsh project created on Wed Feb 17 15:22:45 2021
SetFactory("OpenCASCADE");
//+
Sphere(1) = {0, 0, 0, 0.1, -Pi/2, Pi/2, 2*Pi};
//+
Sphere(2) = {0, 0, 0, 1, -Pi/2, Pi/2, 2*Pi};
//+
Surface Loop(3) = {2};
//+
Surface Loop(4) = {1};
//+
Volume(3) = {3, 4};
//+
Physical Surface(1) = {1};
//+
Physical Surface(2) = {2};
//+
Physical Volume(3) = {3};
But as soon as I create a 3D mesh using the 3D command in the gmsh GUI, my inner hole gets meshed too, while I'd like to have no mesh elements within the hole.
What am I doing wrong? How can I obtain the desired result? Thank you.
There are several issues at play here:
The Sphere command already creates a volume, not surfaces as you might expect.
Because of that, Surface Loop(3) = {2}; does not do what you intend: creating a surface loop from a volume is not a supported operation, and the command will instead use whatever surface happens to have tag 2 (such a surface probably still exists), so what it actually does is unclear.
As a result, the Volume command gets rather odd input.
On top of that, no characteristic length is set up, so the mesh density is quite arbitrary.
If you insist on using the OpenCASCADE kernel, you probably want to use boolean operations.
Here is the code I have with an arbitrarily chosen characteristic length of 0.05 for all the points defining the solid spherical shell:
SetFactory("OpenCASCADE");
Sphere(1) = {0, 0, 0, 0.1, -Pi/2, Pi/2, 2*Pi};
Sphere(2) = {0, 0, 0, 1, -Pi/2, Pi/2, 2*Pi};
BooleanDifference(3) = { Volume{2}; Delete; }{ Volume{1}; Delete; };
Characteristic Length{ PointsOf{ Volume{3}; } } = 0.05;
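If the physical groups from the original file are still needed (e.g. so that only the shell is written on export), a volume group can be re-added after the boolean operation; the name "shell" below is just an illustrative choice, and the surface tags left by BooleanDifference should be checked in the GUI before defining physical surfaces. The 3D mesh can then also be generated from the command line with gmsh -3 yourfile.geo:
Physical Volume("shell") = {3};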
Visualization from ParaView of the produced mesh, with clipping (screenshot not reproduced here).

Cut out a custom shape from an image in P5.js

I am trying to create jigsaw puzzle shapes using P5.js. After creating the puzzle shapes, I want to cut the corresponding areas out of the main image to make the pieces. For that I have the option of using get() or copy():
But both of them take a fixed width and height as parameters. How can I copy a custom area like the one given in the following shapes:
https://editor.p5js.org/techty/sketches/h7qwatZRb
let cutout = createGraphics(w, h);
cutout.background(255, 255);              // start fully opaque
cutout.blendMode(REMOVE);
// draw shape on cutout: the covered pixels become transparent
let newshapeimagegraphic = createGraphics(w, h);
newshapeimagegraphic.image(myImg, 0, 0);  // the source image
newshapeimagegraphic.blendMode(REMOVE);
newshapeimagegraphic.image(cutout, 0, 0); // erases everything outside the drawn shape
image(newshapeimagegraphic, 0, 0);        // only the shape-covered part of the image remains
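For the "draw shape on cutout" step, anything drawn on the cutout graphics at full opacity gets punched out, for example a closed path built with beginShape()/vertex()/bezierVertex(); the coordinates below are arbitrary placeholders, not a real puzzle-piece outline:
cutout.noStroke();
cutout.fill(0);
cutout.beginShape();
cutout.vertex(10, 10);
cutout.bezierVertex(60, 0, 60, 80, 10, 90);
cutout.endShape(CLOSE);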

OpenGL: How to set z co-ordinate of vertex to positive quantity?

I'm just starting to learn OpenGL, so I may not know all the details. I'm using OpenTK with VB.NET.
I have this code:
Dim pm As Matrix4 = Matrix4.CreatePerspectiveFieldOfView(1.57, 16 / 9, 0.1, 100)
GL.MatrixMode(MatrixMode.Projection)
GL.LoadMatrix(pm)
GL.Begin(PrimitiveType.Triangles)
GL.Vertex3(0, 0, -1) : GL.Color4(1.0!, 0!, 0!, 1.0!)
GL.Vertex3(1, 0, -1) : GL.Color4(0.0!, 1.0!, 0!, 1.0!)
GL.Vertex3(0, 1, -1) : GL.Color4(0.0!, 0.0!, 1.0!, 1.0!)
GL.End()
This correctly draws the triangle.
But when I change a z coordinate to a positive value, that vertex disappears. For example, with GL.Vertex3(0, 0, +1) the vertex disappears, and when all three vertices have positive z coordinates, nothing is visible at all.
I thought that this was because of some matrix stuff so I added a translation to the 'pm' matrix defined above:
Matrix4.CreateTranslation(0, 0, +1.0!, pm)
Still nothing happens.
But when all the z coordinates are negative, everything is fine:
GL.Vertex3(0, 0, -10) : GL.Color4(1.0!, 0!, 0!, 1.0!)
GL.Vertex3(1, 0, -10) : GL.Color4(0.0!, 1.0!, 0!, 1.0!)
GL.Vertex3(0, 1, -10) : GL.Color4(0.0!, 0.0!, 1.0!, 1.0!)
Note: I've followed this guide; its code seems to have the same problem, though no one addressed it there.
Can someone explain this stuff to me? Thanks in advance.
In OpenGL one usually assumes that the camera looks in the -z direction (it's a convention, not a limitation). Because of this, projection matrices are usually constructed so that they map the interval [-nearPlane, -farPlane] along z to the visible depth range.
If you want to look along positive z, you have to add an additional viewing matrix that rotates the camera by 180° around the y-axis (or one that simply mirrors the z-axis, but then the winding order changes).
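A minimal sketch of the 180° rotation approach in OpenTK/VB.NET, assuming the legacy fixed-function matrix stack used in the question (the color calls are also moved in front of the vertex calls, since glColor sets the color for the vertices that follow it):
' Projection as before
Dim pm As Matrix4 = Matrix4.CreatePerspectiveFieldOfView(1.57, 16 / 9, 0.1, 100)
GL.MatrixMode(MatrixMode.Projection)
GL.LoadMatrix(pm)
' Viewing matrix: turn the camera around so it looks along +z
Dim vm As Matrix4 = Matrix4.CreateRotationY(MathHelper.Pi)
GL.MatrixMode(MatrixMode.Modelview)
GL.LoadMatrix(vm)
' A triangle at z = +1 is now in front of the camera
GL.Begin(PrimitiveType.Triangles)
GL.Color4(1.0!, 0!, 0!, 1.0!) : GL.Vertex3(0, 0, 1)
GL.Color4(0.0!, 1.0!, 0!, 1.0!) : GL.Vertex3(1, 0, 1)
GL.Color4(0.0!, 0.0!, 1.0!, 1.0!) : GL.Vertex3(0, 1, 1)
GL.End()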

Fabricjs line coordinates after (moved, scaled, rotated) - canvas.on('object:modified'…

I need to find the line coordinates (x1, y1, x2, y2) after the object has been modified (moved, scaled, rotated).
I thought of using the oCoords information and deciding, based on the angle and flip information, which corners are the line ends, but it seems it will not be very accurate…
Any help?
Example:
x1: 164,
y1: 295.78334045410156,
x2: 451,
y2: 162.78334045410156
x: 163, y: 161.78334045410156 - top left corner
x: 452, y: 161.78334045410156 - top right corner
x: 163, y: 296.78334045410156 - bottom left corner
x: 452, y: 296.78334045410156 - bottom right corner
When Fabric.js calculates oCoords - i.e. object's corners' coordinates - it takes into account the object's strokeWidth:
// fabric.Object.prototype
_getNonTransformedDimensions: function() {
  var strokeWidth = this.strokeWidth,
      w = this.width + strokeWidth,
      h = this.height + strokeWidth;
  return { x: w, y: h };
},
For most objects, stroke is kind of a border that outlines the outer edges, so it makes perfect sense to account for strokeWidth when calculating corner coordinates.
In fabric.Line, though, stroke is used to draw the body of the line. There is no example in the question but I assume this is the reason behind discrepancies between the real end-point coordinates and those in oCoords.
So, if you really want to use oCoords to detect the coordinates of the end points, you'll have to adjust for strokeWidth / 2, e.g.
const realx1 = line.oCoords.tl.x + line.strokeWidth / 2
const realy1 = line.oCoords.tl.y + line.strokeWidth / 2
Keep in mind that fabric.Line's own _getNonTransformedDimensions() does adjust for strokeWidth, but only when the line's width or height equal 0:
// fabric.Line.prototype
_getNonTransformedDimensions: function() {
  var dim = this.callSuper('_getNonTransformedDimensions');
  if (this.strokeLineCap === 'butt') {
    if (this.width === 0) {
      dim.y -= this.strokeWidth;
    }
    if (this.height === 0) {
      dim.x -= this.strokeWidth;
    }
  }
  return dim;
},
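Putting this together in an object:modified handler could look roughly like the sketch below. Which oCoords corner corresponds to which end point still depends on the line's direction, rotation and flip state, so this only covers the simple unrotated case and is meant as a starting point, not a general solution:
canvas.on('object:modified', function (e) {
  var line = e.target;
  if (line.type !== 'line') return;
  var off = line.strokeWidth / 2;
  // bounding-box corners, pulled in by half the stroke width
  var p1 = { x: line.oCoords.tl.x + off, y: line.oCoords.tl.y + off };
  var p2 = { x: line.oCoords.br.x - off, y: line.oCoords.br.y - off };
  // for an unrotated, unflipped line running top-left to bottom-right
  // these are (x1, y1) and (x2, y2); other orientations use tr/bl instead
  console.log(p1, p2);
});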

Can't correctly rotate cylinder in openGL to desired position

I've got a little objective-c utility program that renders a convex hull. (This is to troubleshoot a bug in another program that calculates the convex hull in preparation for spatial statistical analysis). I'm trying to render a set of triangles, each with an outward-pointing vector. I can get the triangles without problems, but the vectors are driving me crazy.
I'd like the vectors to be simple cylinders. The problem is that I can't just declare coordinates for where the top and bottom of the cylinders belong in 3D (e.g., like I can for the triangles). I have to make them and then rotate and translate them from their default position along the z-axis. I've read a ton about Euler angles, and angle-axis rotations, and quaternions, most of which is relevant, but not directed at what I need: most people have a set of objects and then need to rotate the object in response to some input. I need to place the object correctly in the 3D "scene".
I'm using the Cocoa3DTutorial classes to help me out, and they work great as far as I can tell, but the rotation bit is killing me.
Here is my current effort. It gives me cylinders that are located correctly, but they all point along the z-axis (as in this image, not reproduced here: we are looking in the -z direction; the triangle poking out behind is not part of the hull, it's there for testing/debugging; the orthogonal cylinders are more or less coordinate axes, and the spheres are to make sure the axes are located correctly, since I have to use rotation to place those cylinders correctly. And by the way, when I use that algorithm, the out-vectors fail as well, although in a different way: they come out normal to the planes, but all point in +z instead of some in -z).
from Render3DDocument.m:
// Make the out-pointing vector
C3DTCylinder *outVectTube;
C3DTEntity *outVectEntity;
Point3DFloat *sideCtr = [thisSide centerOfMass];
outVectTube = [C3DTCylinder cylinderWithBase: tubeRadius top: tubeRadius height: tubeRadius*10 slices: 16 stacks: 16];
outVectEntity = [C3DTEntity entityWithStyle:triColor
                                   geometry:outVectTube];
Point3DFloat *outVect = [[thisSide inVect] opposite];
Point3DFloat *unitZ = [Point3DFloat pointWithX:0 Y:0 Z:1.0f];
Point3DFloat *rotAxis = [outVect crossWith:unitZ];
double rotAngle = [outVect angleWith:unitZ];
[outVectEntity setRotationX: rotAxis.x
                          Y: rotAxis.y
                          Z: rotAxis.z
                          W: rotAngle];
[outVectEntity setTranslationX:sideCtr.x - ctrX
                             Y:sideCtr.y - ctrY
                             Z:sideCtr.z - ctrZ];
[aScene addChild:outVectEntity];
(Note that Point3DFloat is basically a vector class, and that a Side (like thisSide) is a set of four Point3DFloats, one for each vertex, and one for a vector that points towards the center of the hull).
from C3DTEntity.m:
if (_hasTransform) {
    glPushMatrix();
    // Translation
    if ((_translation.x != 0.0) || (_translation.y != 0.0) || (_translation.z != 0.0)) {
        glTranslatef(_translation.x, _translation.y, _translation.z);
    }
    // Scaling
    if ((_scaling.x != 1.0) || (_scaling.y != 1.0) || (_scaling.z != 1.0)) {
        glScalef(_scaling.x, _scaling.y, _scaling.z);
    }
    // Rotation
    glTranslatef(-_rotationCenter.x, -_rotationCenter.y, -_rotationCenter.z);
    if (_rotation.w != 0.0) {
        glRotatef(_rotation.w, _rotation.x, _rotation.y, _rotation.z);
    } else {
        if (_rotation.x != 0.0)
            glRotatef(_rotation.x, 1.0f, 0.0f, 0.0f);
        if (_rotation.y != 0.0)
            glRotatef(_rotation.y, 0.0f, 1.0f, 0.0f);
        if (_rotation.z != 0.0)
            glRotatef(_rotation.z, 0.0f, 0.0f, 1.0f);
    }
    glTranslatef(_rotationCenter.x, _rotationCenter.y, _rotationCenter.z);
}
I added the bit in the above code that uses a single rotation around an axis (the "if (_rotation.w != 0.0)" bit), rather than a set of three rotations. My code is likely the problem, but I can't see how.
If your outvects don't all point in the correct direction, you might have to check your triangles' winding: are they all oriented the same way?
Additionally, it might be helpful to draw a line for each outvect: use the average of the three vertices of your triangle as the origin and draw a line of a few units' length (depending on your scene's scale) in the direction of the outvect. This way, you can be sure that all your vectors are oriented correctly.
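A minimal sketch of such a debug line in immediate-mode OpenGL; cx/cy/cz, ox/oy/oz and len are placeholder names for the triangle's centroid, the normalized outvect and the chosen length, not identifiers from the question's code:
// debug: draw the outvect as a line starting at the triangle's centroid
glColor3f(1.0f, 1.0f, 0.0f);
glBegin(GL_LINES);
glVertex3f(cx, cy, cz);
glVertex3f(cx + ox * len, cy + oy * len, cz + oz * len);
glEnd();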
How do you calculate your outvects?
The problem appears to be that glRotatef() expects degrees and I was giving it radians. In addition, the rotation direction was the opposite of what I expected, so the sign of the rotation was wrong. This is the corrected code:
double rotAngle = -[outVect angleWith:unitZ]; // radians
[outVectEntity setRotationX: rotAxis.x
Y: rotAxis.y
Z: rotAxis.z
W: rotAngle * 180.0 / M_PI ];
I can now see that my other program has the inVects wrong (the outVects in the resulting render, not shown here, are poking through the hull instead of pointing out from each face), and I can now track down that bug in the other program... tomorrow.