I'm using Avizo to generate a mesh of my microstructure, obtained from a CT scan, in order to run computations in Abaqus. I can generate good surface meshes; nonetheless, the outside mesh is too fine (see the figure attached to this question). I am trying to create a surface patch so that the outside mesh can be coarser, but it doesn't work: when I remesh my model, everything is modified...
How can I generate sub-surfaces in order to specify particular mesh conditions?
Thanks for your help.
It sounds like your question is: how can I break my mesh into separate components, perform some work on those components, and then put them back together?
There is a tool in the toolbar area at the top of the MeshLab window for selecting connected components in a region; use it to click any triangle on the outside boundary and it will grab all the connected parts of the mesh. Then, with "Filters -> Mesh Layer -> Move Selected Faces to Another Layer", you can separate the selected part of the mesh into a new layer. Alternatively, you can right-click on the layer and select "Split in Connected Components". On this new layer you can run simplification filters such as clustering decimation.
Once you have completed the simplification to your satisfaction, make visible the layers that you want to merge back together and use "Filters -> Mesh Layer -> Flatten Visible Layers" to put them back together.
I am trying to generate wireframes on top of objects after generating the point clouds. How can I get wireframes similar to the ones generated in the image?
I am able to run ORB SLAM2, generate point clouds, and save them. I am even able to generate wireframes from the .pcd files using the Point Cloud Library.
However, I am looking for results such as the ones shown in this picture.
How can I approach this?
The target wireframe image
ORB-SLAM2 is at its heart just a sparse feature-based SLAM system. What you want can't be achieved with that library alone; furthermore, the image you give as an example is reprojecting a CAD mesh into the image. The only way you can get results like this is by having a 3D mesh of the object before you run your SLAM, and localising that mesh in the scene (there is a vast literature on model-based SLAM, which I think is the best place for you to look). The main idea in that case would be to match elements from the 3D mesh to elements in the image (whether those are keypoints or some other form of features), and use them either in your cost function or in some PnP-like scheme.
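To make the "cost function" idea concrete, here is a minimal TypeScript sketch of a reprojection cost over hypothetical mesh-vertex-to-keypoint matches, assuming a simple pinhole camera. None of these names come from ORB-SLAM2; it only illustrates the quantity a model-based pose refinement would minimise:

// Illustrative only: reprojection cost of a candidate camera pose, given
// matches between 3D mesh vertices and 2D image keypoints.
type Vec3 = [number, number, number];
type Vec2 = [number, number];

interface Pose {
  R: number[][]; // 3x3 rotation (world -> camera)
  t: Vec3;       // translation (world -> camera)
}

interface Intrinsics { fx: number; fy: number; cx: number; cy: number; }

// Pinhole projection of a world-space point under the given pose.
function project(p: Vec3, pose: Pose, K: Intrinsics): Vec2 {
  const cam = [0, 1, 2].map(
    i => pose.R[i][0] * p[0] + pose.R[i][1] * p[1] + pose.R[i][2] * p[2] + pose.t[i]
  );
  return [K.fx * (cam[0] / cam[2]) + K.cx, K.fy * (cam[1] / cam[2]) + K.cy];
}

// Sum of squared pixel errors between projected mesh vertices and their
// matched keypoints; a model-based SLAM minimises this over the pose.
function reprojectionCost(
  matches: Array<{ vertex: Vec3; keypoint: Vec2 }>,
  pose: Pose,
  K: Intrinsics
): number {
  return matches.reduce((sum, m) => {
    const [u, v] = project(m.vertex, pose, K);
    return sum + (u - m.keypoint[0]) ** 2 + (v - m.keypoint[1]) ** 2;
  }, 0);
}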
I'm trying to build a descending graph in Cytoscape. I've got the majority done quite well, but now I'm stuck on the edge types. I'd like to use something like the 'segments' curve-style, where my edges have bend points.
However, instead of being zig-zags, I would like the edges to be constrained to horizontal/vertical lines.
My graph is pretty constrained and the user cannot manipulate the positions. I would like the edges to start at the 'parent' element, go straight down a set amount, then hit a point, turn, head horizontally to the same X as the child, then straight down to the child element.
Right now, the lines go straight, and I can add segments easily, but they aren't constrained and are based on percentages that I won't have access to without doing a bunch of math, which I guess isn't terrible.
Current:
Desired:
If you want specific absolute positions for the bend points of segments edges, you'll need to convert the absolute coordinates into the relative coordinates ('segment-weights' and 'segment-distances') that segments edges are specified with.
If you want a different type of edge for your use case, feel free to propose it in the issue tracker.
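The conversion is just a projection onto the source-target axis. A sketch of the math and of the orthogonal route you describe, assuming cytoscape.js (double-check the sign convention of the perpendicular distance against your version; `cy`, the positions, and `drop` are illustrative):

declare const cy: any; // your cytoscape instance (assumed)

interface Point { x: number; y: number; }

// Convert an absolute bend-point position into the relative
// 'segment-weights' / 'segment-distances' values used by segments edges.
function toSegmentCoords(source: Point, target: Point, bend: Point) {
  const dx = target.x - source.x;
  const dy = target.y - source.y;
  const len2 = dx * dx + dy * dy;
  // Position of the bend along the source->target axis (0 = source, 1 = target).
  const weight = ((bend.x - source.x) * dx + (bend.y - source.y) * dy) / len2;
  // Signed perpendicular offset from the source->target line, in pixels.
  const distance =
    ((bend.x - source.x) * dy - (bend.y - source.y) * dx) / Math.sqrt(len2);
  return { weight, distance };
}

// Orthogonal route: straight down from the parent, across to the child's X.
const parent: Point = { x: 100, y: 100 };
const child: Point = { x: 300, y: 260 };
const drop = 40; // how far to descend before turning

const bends = [
  { x: parent.x, y: parent.y + drop },
  { x: child.x, y: parent.y + drop },
].map(p => toSegmentCoords(parent, child, p));

cy.style()
  .selector('edge')
  .style({
    'curve-style': 'segments',
    'segment-weights': bends.map(b => b.weight).join(' '),
    'segment-distances': bends.map(b => b.distance).join(' '),
  })
  .update();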
Well I have an outline layer and a region layer.
The region is definitely based on the outline, so it would be great if I could select some important points from the outline layer and then copy-paste them to the region layer.
How is that possible?
There is another way to go about this. Let's say we have the layers outline (o) and region (r).
Create the polygon in o and then select it. Hit Ctrl+C to copy it, switch to r, and paste it there with Ctrl+V; it should end up in the same place. Don't forget to disable the display of the polygon in o. Now, in r, you can use Split Features from the Advanced Digitizing toolbar: split the polygon at the place where you need it, then delete the unwanted second polygon.
I'm trying to figure out a style/selector that could be applied globally to make edges draw on-top-of compound nodes using cytoscape.js. I understand the value of having regular nodes always on-top-of edges but was wondering if there is a way to work around this with compound nodes?
Edges connecting to, from, or inside compound nodes are drawn on top of the associated compound nodes. Unrelated edges are drawn behind, as usual. You can control draw order with z-index, but those values are used relatively according to the hierarchy created by the previous rules.
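For example, within those rules you can still nudge draw order with z-index; a minimal sketch (the selectors and values are illustrative, and they only reorder elements relative to the hierarchy rules above):

declare const cy: any; // your cytoscape instance (assumed)

cy.style()
  .selector(':parent')      // compound parents
  .style({ 'z-index': 1 })
  .selector('edge')
  .style({ 'z-index': 10 })
  .update();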
It sounds like your graph has nodes placed too closely together. Have you tried adjusting the CoSE layout options for your graph?
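For instance, a sketch of the spacing-related options of the built-in CoSE layout (the values are illustrative, and depending on your cytoscape.js version some of these options take functions instead of numbers):

declare const cy: any; // your cytoscape instance (assumed)

cy.layout({
  name: 'cose',
  nodeRepulsion: 400000,  // push nodes apart more strongly
  idealEdgeLength: 100,   // prefer longer edges
  nodeOverlap: 20,        // penalty for overlapping nodes
  componentSpacing: 80,   // spacing between disconnected components
}).run();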
I am trying to build a UI for recording and playing videos.
I am using the GPUImage framework and would like to apply a mask filter and the GPUImageiOSBlurFilter to the camera.
Goal:
I am struggling with how to set everything up so that my input (camera) goes through unfiltered in the circle, but the blur filter is masked around the centre and applied to the camera output.
When I construct the chain like this:
[_camera addTarget:_maskFilter];      // camera -> first input of the mask filter
[_maskPicture processImage];          // render the still mask image
[_maskPicture addTarget:_maskFilter]; // mask image -> second input of the mask filter
[_maskFilter addTarget:_blurFilter];  // masked video is then blurred
[_blurFilter addTarget:_screen];      // blurred result goes to the screen
The blur filter blurs everything in the view and the mask cuts out the video in all but the centre.
My Mask image is a black rectangle with a white circle.
Result:
How can I construct a chain of filters that achieves the UI in the picture above? I am looking for a nudge in the right direction. I am currently looking at GPUImageFilterGroups and the video buffer to try to "route" parts of my input around some filters, but I am having trouble finding resources.
You can do this fairly easily by modifying the GPUImageGaussianSelectiveBlurFilter.
Take the code for that filter and create a new filter based on it. In your new filter, replace the GPUImageGaussianBlurFilter with the GPUImageiOSBlurFilter. In the fragment shader, swap sharpImageColor and blurredImageColor within the final mix() command. That should be it for replicating this effect.
The GPUImageGaussianSelectiveBlurFilter is a filter group that masks off and blurs things within a circle, and you want to invert that and use the stronger GPUImageiOSBlurFilter, so the above modifications will do that. This will be more performant than trying to apply a mask as a separate filter, and should be simple enough to implement.