Odd linear artefacts when sculpting - Blender

I am currently learning how to sculpt in Blender, working on my own projects after completing BlenderGuru's Beginner & Intermediate classes and some of Grant Abbitt's videos, with pleasing results. I am trying to sculpt a plasma pistol with a skull on it, which can be seen in the reference photo that I have provided.
However, when I sculpt, I get these really odd linear artefacts (see picture below, circled in black). I added a Subsurf Modifier to the primitive UV Sphere, with the Viewport and Render values set to 4, so it is a fairly fine mesh. However, these artefacts still occur.
I assume it is due to the stretching of the polygons when I grab the sphere with the Snake Hook tool and deform it to encompass the frontal part of the skull.
EDIT: Whilst writing this question I went back and switched on Dynamic Topology with Relative Detail selected.
It appears that I am no longer getting the issues that I was getting last night with the linear artefacts.
Can I confirm that these artefacts were the result of incorrect Dynamic Topology settings for the Snake Hook tool (I was using Constant Detail instead of Relative Detail), or is something else causing them?
Also, any advice on avoiding common pitfalls when choosing sculpting settings would be most appreciated.
I will leave this question up in case anyone has a similar problem and it can be resolved by reading this.
Sculpt, showing lineations
Experimenting with Dynamic Topology

In Object Mode, does the object have uniform X, Y, & Z scaling? If not, you can apply the scale from the object menu.
Object ‣ Apply ‣ Scale / Rotation & Scale

Related

Implicit mesh generation with make_mesh_3 failing sometimes

I am using make_mesh_3 to mesh implicit functions. However, for some models it fails to generate a mesh at all.
In one of the failing cases, I assume it is because the mesh generator does not construct a good enough initial set of points. For this case I am hoping that increasing the number of rays shot in random directions, or changing the point from which they are shot (by default the centre of the bounding box), may alleviate the problem. Is there a way, without changing the CGAL source code, to specify the number of rays to shoot and the point from which they are shot?
The other circumstance under which it seems to fail regularly is when the volume becomes very thin. Is there any way to keep this from happening?
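For reference, a minimal implicit-domain setup, adapted from CGAL's documented mesh_implicit_sphere example, is sketched below; the shell function and criteria values are illustrative, not taken from the failing models. For thin volumes, pushing facet_distance and cell_size well below the feature thickness is usually the first thing to try, since the default initial sampling can otherwise miss the thin region entirely; recent CGAL releases also expose initial_points / initial_points_generator named parameters for make_mesh_3, so it is worth checking the Mesh_3 documentation of your version before patching the source.

```cpp
#include <cmath>
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Mesh_triangulation_3.h>
#include <CGAL/Mesh_complex_3_in_triangulation_3.h>
#include <CGAL/Mesh_criteria_3.h>
#include <CGAL/Labeled_mesh_domain_3.h>
#include <CGAL/make_mesh_3.h>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef K::FT FT;
typedef K::Point_3 Point;
typedef CGAL::Labeled_mesh_domain_3<K> Mesh_domain;
typedef CGAL::Mesh_triangulation_3<Mesh_domain>::type Tr;
typedef CGAL::Mesh_complex_3_in_triangulation_3<Tr> C3t3;
typedef CGAL::Mesh_criteria_3<Tr> Mesh_criteria;

// Illustrative thin volume: a spherical shell of thickness 0.02
// (negative inside the volume, as make_mesh_3 expects).
FT thin_shell(const Point& p) {
  const double r = std::sqrt(CGAL::to_double(
      p.x() * p.x() + p.y() * p.y() + p.z() * p.z()));
  return FT(std::abs(r - 1.0) - 0.01);
}

int main() {
  using namespace CGAL::parameters;  // classic named-parameter syntax
  Mesh_domain domain = Mesh_domain::create_implicit_mesh_domain(
      thin_shell,
      K::Sphere_3(CGAL::ORIGIN, 4.0));  // bounding sphere, *squared* radius
  // For thin volumes, keep facet_distance and cell_size well below the
  // feature thickness, or the initial point sampling can miss it.
  Mesh_criteria criteria(facet_angle = 30,
                         facet_size = 0.005,
                         facet_distance = 0.002,
                         cell_radius_edge_ratio = 3,
                         cell_size = 0.005);
  C3t3 c3t3 = CGAL::make_mesh_3<C3t3>(domain, criteria);
  return 0;
}
```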

Why does an instantiated Static Mesh get scaled in Unreal Engine 4?

While building a project in Unreal Engine 4.26 and trying to increase the performance of the video game, I ended up using instances of static meshes where possible. This generated an issue:
My instantiated meshes are scaled to 0.9995 of their intended size (verified experimentally).
Looking for an answer, I found a workaround suggested by Unreal Engine devs themselves: they suggested rotating the mesh by a full extra 360 degrees, i.e. using higher values of the same rotation, as you can read here. This didn't work for me and, as you can see, the difference between instantiated and manually placed meshes is evident.
Following the way I've instantiated the meshes:
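In C++ terms, the instancing boils down to something like the following sketch (the free function and the WallISM component are hypothetical stand-ins for my Blueprint setup; the Blueprint nodes do the equivalent):

```cpp
#include "Components/InstancedStaticMeshComponent.h"

// Sketch: fill a straight wall with instances. WallISM is assumed to be
// a UInstancedStaticMeshComponent already set up with the wall mesh.
void BuildWall(UInstancedStaticMeshComponent* WallISM,
               int32 NumSegments, float SegmentLength)
{
    for (int32 i = 0; i < NumSegments; ++i)
    {
        FTransform InstanceTransform;
        InstanceTransform.SetLocation(FVector(i * SegmentLength, 0.f, 0.f));
        // Epic's suggested workaround: add a full extra turn to the yaw
        // (450 instead of 90). It did not help in my case.
        InstanceTransform.SetRotation(
            FRotator(0.f, 90.f + 360.f, 0.f).Quaternion());
        InstanceTransform.SetScale3D(FVector(1.f));  // explicit uniform scale
        WallISM->AddInstance(InstanceTransform);
    }
}
```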
Increasing the rotation on the z-axis to 450 degrees didn't solve anything, even though doing it with the meshes provided by the devs here actually works.
I'm sure that rotations are the key to solving the problem, since the problem is not systematic. I haven't worked out the logic behind it, but by building a square with instantiated meshes I end up with some walls that have gaps and some at perfect scale. I increased the size of them all so as not to have gaps, but I'm afraid this will bring more issues in the future when working with lighting and production materials. It seems the bug isn't fixed yet in UE4; is there another workaround I could use without any risk of overlapping meshes?
This seems to be a bug that hasn't been fixed yet. To populate the map with objects where a high level of accuracy is necessary, the suggestion is to place them manually. If you place them manually, UE4 will make them instances anyway; a blueprint is not necessary to populate rooms programmatically.
Increasing the scale, as I did in the first place, doesn't seem to cause problems, but I ended up placing the meshes manually, since that solution is more elegant even if it requires more effort.

ANSYS Meshing Issue - How To Mesh Complicated Geometry (~80,000 Faces)?

I am attempting to mesh a complicated design (~80,000 faces) for a microchannel heat sink, as pictured, and I would appreciate some advice. I have tried a range of different mesh controls (especially face sizing and body sizing), mesh settings, and element sizes, all of which have failed to produce a working mesh. The most common errors are shown in the linked picture, in particular the one stating "The following surfaces cannot be meshed with acceptable quality. Try using a different element size or virtual topology." However, I have already reduced the element size to 2×10⁻⁶ m, which takes two days to resolve before failing.
Unfortunately I cannot alter the geometry significantly, as it is generated in SolidWorks and imported as either a STEP or an .x_t file. As such, any advice on how I can successfully mesh the geometry for CFD analysis in FLUENT would be greatly appreciated.
I can provide more details or the geometry file itself if required.
Thanks in advance.
Meshing Attempt
Probably your CAD design is not clean at all, but it is impossible to tell from this image. If you don't have control over the geometry source, that is trouble, because you would have to ask somebody else to check and fix it. The first check you can do with your model is to try to reduce the number of elements to the minimum possible value; if the mesh then runs properly, you can rely on the surfaces of your CAD model. After that you can refine the mesh, but refinement is something you have to do following some error criterion. If you are also the designer, why not try to simplify the geometry a bit if you consider it really hard to mesh? Meshing properly is a hard task; you should go step by step until you reach a solution. Also, you must not let the preprocessor mesh automatically without giving it some criteria. Probably the first thing you have to answer, even before applying any mesh, is: what is your Reynolds number? And what is the most valuable result on which you can judge the goodness of your discretization?
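On that last point, a back-of-the-envelope Reynolds check for a microchannel takes a few lines; the values below are illustrative water-like numbers, not the actual design's (Dh is the hydraulic diameter 4A/P):

```cpp
#include <cstdio>

// Back-of-the-envelope Reynolds number for a rectangular microchannel.
// All values are illustrative placeholders, not from the actual design.
int main() {
    const double rho = 998.0;   // water density [kg/m^3]
    const double mu  = 1.0e-3;  // dynamic viscosity [Pa*s]
    const double u   = 0.5;     // mean velocity [m/s]
    const double w   = 100e-6;  // channel width  [m]
    const double h   = 50e-6;   // channel height [m]
    const double Dh  = 4.0 * (w * h) / (2.0 * (w + h)); // hydraulic diameter
    const double Re  = rho * u * Dh / mu;
    std::printf("Dh = %g m, Re = %g\n", Dh, Re);  // Re << 2300 -> laminar
    return 0;
}
```

For these numbers Re comes out around 33, firmly laminar, which matters for the turbulence model and the mesh resolution you actually need.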
Thank you for your suggestions. In the end I solved the issue by importing the original mesh generated by COMSOL into SpaceClaim, then employing both the "Smooth" and "Reduce Faces" tools in tandem to simplify the geometry, before finally using SolidWorks to turn the smoothed mesh into a solid body. This body retained many of the features of the original but was much less complex, with two orders of magnitude fewer faces. In turn, this permitted both meshing and heat-transfer analysis in FLUENT.

3D objects do not keep their regular shape at a distance

I am working on a game which was developed by someone else earlier. I am facing a problem: when the player (with the camera) starts running down the road, the buildings are not shown in their regular shape, and as we move forward (closer to the buildings) they regain their original shapes. Sometimes the buildings on either side of the road are not visible to the camera at all (empty space), and when we move closer, a building suddenly pops into view. I think it may be a Unity3D settings problem (rendering, camera, or quality). It may have been done to increase performance on mobile devices.
Does anybody know what the issue may be or how to resolve it?
Any help will be appreciated. Thanks in advance.
This sounds like it's a problem with the available LODs for each building's 3D model.
Basically, 3D games work by having two or three different versions of each 3D model, with varying Levels Of Detail (LODs). So, for example, if you have a house model which uses 500 polygons, you'll probably have another two versions (e.g. 250 polys and 100 polys), which are used depending on the distance between the player and the object. The farther away the player is, the simpler the version used.
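Conceptually the selection is just a distance check. A sketch (in C++, with made-up thresholds; real engines usually switch on projected screen size rather than raw distance, but the idea is the same):

```cpp
#include <cmath>

// Conceptual distance-based LOD selection (not Unity's actual code).
struct Vec3 { float x, y, z; };

float Distance(const Vec3& a, const Vec3& b) {
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Returns 0 for the full 500-poly model, 1 for 250 polys, 2 for 100 polys.
int SelectLod(const Vec3& camera, const Vec3& object) {
    const float d = Distance(camera, object);
    if (d < 50.0f)  return 0;  // close: highest detail
    if (d < 150.0f) return 1;  // mid range
    return 2;                  // far: lowest detail
}
```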
The issue occurs when developers use automatically generated LOD models, which can look distorted or not appear at all. Unity probably auto-generates them, but I'm unsure where you'll find the settings for this in Unity. However, I've seen 3D models on the Unity Asset Store offering different LODs, so Unity probably lets you set your own. The simplest solution would be to increase the distance at which the LODs change, while the more involved solution would be to create custom lower-poly versions of the 3D models for larger distances.
I have resolved the problem. It was due to the LODs (levels of detail) used for objects (buildings) in Unity3D to enhance performance on slower devices. LOD provides several levels of detail for an object, which you can adjust according to your needs. In my specific case the buildings appeared suddenly because LOD1 had a different (wrong) position: for LOD1 the building was in the wrong place, while for LOD0 it was in the right place. So when my camera looked from a distance it saw LOD1, which was in the wrong place, and hence it saw empty space at the expected position. When the camera came closer it saw LOD0, in which the building is in the right place, so buildings seemed to suddenly appear or become visible.

Tweaking Heightmap Generation For Hexagon Grids

Currently I'm working on a little project just for a bit of fun. It is a C++/WinAPI application using OpenGL.
I hope it will turn into an RTS game played on a hexagon grid, and when I get the basic game engine done, I have plans to expand it further.
At the moment my application consists of a VBO that holds vertex and heightmap information. The heightmap is generated using a midpoint displacement algorithm (diamond-square).
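For readers unfamiliar with it, the core of a diamond-square pass looks roughly like this (a self-contained sketch, not my actual VBO code; the grid side must be 2^k + 1, the corners start at zero, and the noise amplitude shrinks by `roughness` each subdivision level):

```cpp
#include <random>
#include <vector>

std::vector<float> DiamondSquare(int size, float roughness, unsigned seed)
{
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> uni(-1.0f, 1.0f);
    std::vector<float> h(size * size, 0.0f);  // corners seeded at 0
    auto at = [&](int x, int y) -> float& { return h[y * size + x]; };

    float amp = 1.0f;
    for (int step = size - 1; step > 1; step /= 2, amp *= roughness) {
        const int half = step / 2;
        // Diamond step: each square's centre = mean of its 4 corners + noise.
        for (int y = half; y < size; y += step)
            for (int x = half; x < size; x += step)
                at(x, y) = 0.25f * (at(x - half, y - half) + at(x + half, y - half) +
                                    at(x - half, y + half) + at(x + half, y + half))
                           + amp * uni(rng);
        // Square step: each edge midpoint = mean of its (up to 4)
        // diamond neighbours + noise.
        for (int y = 0; y < size; y += half)
            for (int x = ((y / half) % 2 == 0) ? half : 0; x < size; x += step) {
                float sum = 0.0f; int n = 0;
                if (x - half >= 0)   { sum += at(x - half, y); ++n; }
                if (x + half < size) { sum += at(x + half, y); ++n; }
                if (y - half >= 0)   { sum += at(x, y - half); ++n; }
                if (y + half < size) { sum += at(x, y + half); ++n; }
                at(x, y) = sum / n + amp * uni(rng);
            }
    }
    return h;
}
```

With `roughness` around 0.5 this gives the classic fractal look; higher values give noisier, more jagged terrain.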
In order to implement a hexagon grid I went with the idea explained here. It shifts down odd rows of a normal grid to allow relatively easy rendering of hexagons without too many further complications (I hope).
After a few days it is beginning to come together and I've added mouse picking, which is implemented by rendering each hex in the grid in a unique colour, and then sampling a given mouse position within this FBO to identify the ID of the selected cell (visible in the top right of the screenshot below).
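The picking itself is a small amount of code; here is a sketch of the ID-to-colour encoding and the read-back (the FBO binding and the flat-colour draw are elided):

```cpp
#include <GL/gl.h>

// Encode a cell ID into a flat RGB colour (supports ~16.7M cells).
void CellIdToColor(unsigned id, unsigned char rgb[3]) {
    rgb[0] = (id >> 16) & 0xFF;
    rgb[1] = (id >> 8) & 0xFF;
    rgb[2] = id & 0xFF;
}

// After rendering each hex with its ID colour into the picking FBO,
// read back the single pixel under the mouse. Note: OpenGL's origin is
// the bottom-left corner, so the y coordinate is flipped.
unsigned PickCell(int mouseX, int mouseY, int viewportHeight) {
    unsigned char p[3] = {0, 0, 0};
    glReadPixels(mouseX, viewportHeight - mouseY - 1, 1, 1,
                 GL_RGB, GL_UNSIGNED_BYTE, p);
    return (unsigned(p[0]) << 16) | (unsigned(p[1]) << 8) | unsigned(p[2]);
}
```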
In the next stage of my project I would like to look at generating more 'playable' terrains. To me this means that the shape of each hexagon should be more regular than those seen in the image above.
So, finally coming to my point, is there:

1. A way of smoothing or adjusting the vertices in my current method that would bring all points of a hexagon onto one plane (coplanar)?

   EDIT: For anyone looking for information on how to make points coplanar, here is a great explanation.

2. A better approach to procedural terrain generation that would allow for better control of this sort of thing?

3. A way to represent my vertex information differently that allows for this?
To be clear, I am not trying to achieve a flat hex grid with raised edges or platforms (as seen below).
I would like all the geometry to join and lead into the next bit.
I hope to achieve something similar to what I have now (relatively nice undulating hills and terrain) but with more controllable plateaus. This would give me the flexibility of cordoning off areas (unplayable tiles) later on, where I can add higher-detail meshes if needed.
Any feedback is welcome, I'm using this as a learning exercise so please - all comments welcome!
It depends on what you actually want and what you mean by "more controlled".
Do you want to be able to say "there will be a mountain at coordinates [11, -127] with radius 20"? The complexity of this depends on how far you want to go. If you want just mountains, then radial gradients are enough (just add the gradient values to the noise values). But if you want some more complex shapes, you are in for a treat.
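A sketch of the radial-gradient case (a hypothetical helper; a smoother falloff curve can replace the linear one):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Stamp a mountain onto an existing noise heightmap by adding a radial
// gradient: full height at the centre, falling off to zero at `radius`.
void AddMountain(std::vector<float>& height, int width, int depth,
                 float cx, float cy, float radius, float peak)
{
    for (int y = 0; y < depth; ++y)
        for (int x = 0; x < width; ++x) {
            const float d = std::sqrt((x - cx) * (x - cx) + (y - cy) * (y - cy));
            const float t = std::max(0.0f, 1.0f - d / radius);  // 1 at centre
            height[y * width + x] += peak * t;
        }
}

// Usage: "a mountain at [11, -127] with radius 20" (after mapping those
// coordinates into heightmap space) becomes something like:
//   AddMountain(heightmap, W, H, 11.0f, 127.0f, 20.0f, 5.0f);
```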
I explore this idea in great depth in my project (please consider that the published version is just a prototype, which is currently undergoing a major redesign; it is a completely usable map generator, though).
Another way is to make the generation much more procedural: you specify a sequence of mathematical functions which you apply to the terrain. Even a simple value transformation can get you very far.
All of these methods should work just fine for a hex grid. If artefacts occur because of the odd-row shift, you could interpolate the odd rows instead: just calculate the height value for each shifted vertex from the two vertices between which it sits, with a simple linear interpolation formula.
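That interpolation is a one-liner per vertex; a sketch:

```cpp
#include <algorithm>
#include <vector>

// The odd-row fix mentioned above: since a shifted vertex sits halfway
// between two columns of the original grid, just average the two
// neighbouring height samples (linear interpolation at t = 0.5).
float OddRowHeight(const std::vector<float>& h, int width, int x, int y)
{
    const int xr = std::min(x + 1, width - 1);  // clamp at the grid edge
    return 0.5f * (h[y * width + x] + h[y * width + xr]);
}
```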
Consider a function which maps the purple line onto the blue curve: it emphasizes low-lying heights as well as very high ones, but makes the transition between them steeper (this example is just a cosine function; making the curve less smooth would make the transformation more prominent).
You could also use only the bottom half of the curve, making peaks sharper and low-lying areas flatter (and thus more playable).
"sharpness" of the curve can be easily modulated with power (making the effect much more dramatic) or square root (decreasing the effect).
Implementing this is actually extremely simple (especially if you use the cosine function): just apply the function to each pixel in the map. If the function isn't so mathematically trivial, lookup tables work just fine (with cubic interpolation between the table values; linear interpolation creates artefacts).
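A sketch of the per-pixel pass, assuming heights normalized to [0, 1] (the exponent plays the power / square-root role mentioned above):

```cpp
#include <cmath>
#include <vector>

// Cosine ease: flattens low and high areas and steepens the transition.
// sharpness > 1 exaggerates the effect, < 1 softens it.
void RemapHeights(std::vector<float>& height, float sharpness = 1.0f)
{
    const float pi = 3.14159265358979f;
    for (float& h : height) {
        const float eased = 0.5f * (1.0f - std::cos(pi * h));
        h = std::pow(eased, sharpness);
    }
}
```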
Several more simple methods of "gamification" of random noise terrain can be found in this paper: "Realtime Synthesis of Eroded Fractal Terrain for Use in Computer Games".
Good luck with your project!