Inflation layer not working in certain geometries in ANSYS meshing tool

I am trying to implement an inflation layer between two geometries in my mesh using ANSYS, and I am confused about the procedure.
I found online (see the answer from Gopinath N K on 1/17/22) that in the ANSYS meshing tool you cannot combine face meshing with inflation. So I tried to remove the face sizings, thinking that was what was being referred to, but this gave mixed results, which I'll explain below.
Second, I saw here that to create inflation I might need to employ named selections instead of selecting the two geometries (a body and a face), but this also gave mixed results.
As to my mixed results, I successfully got an inflation layer to work for a cylindrical body inside another cylindrical one (see images below). The larger blue cylinder is the body (red arrow), and the green circles are the edges of the small cylinder inside (green arrows). I created this inflation layer successfully.
However, when I try to create an inflation layer between the Rotating Zone (larger cylinder) and the Stationary Zone, the inflation layer fails. This occurs as soon as I select the larger rectangular body. I didn't bother to finish selecting the other faces, since next to Active it says "No, Invalid Method". The same thing occurs if I select the Structured Zone (smallest cylinder) and the faces of the wing (angled plate subtracted from the Structured Zone). So I really have no clue what is causing this, since it seems to occur as soon as I select the outer, larger body geometry. Maybe I'm not selecting the right set of faces, or something else is leading to this.
Thank you

So it turns out that the message saying "No, Invalid Method" refers to a Hex Dominant method I had created. There are certain mesh methods that inflation does not work with, and I haven't been able to find any reason why. I hope anyone using the ANSYS Mesher finds this helpful.


creating multi-faceted plot of large geospatial data using geom_raster()

TL;DR: can anyone help w/ geospatial things in R using geom_raster() in ggplot?
Basically, it seems that my issue stems from the fact that I don't have perfectly gridded values (i.e., I get this message: "Warning: Raster pixels are placed at uneven vertical/horizontal intervals and will be shifted. Consider using geom_tile() instead."). If I switch to geom_tile(), I can control the size of the tiles, but then they look chunky and awful, since there's no interpolation feature in geom_tile(). If I use geom_raster(), I think it gets confused about what size it should plot the pixels, since the data isn't perfectly gridded; so when I facet my ggplot, the output sometimes has teeny tiny dots (or even no dots at all) on some facets, and pretty maps on the other facets. When I round the lat/lon coordinates to 0 decimal places, this fixes the teeny-tiny/no-dots problem, but the result ends up just like geom_tile() (chunky, with no blending between tiles).

Any ideas on how to fix this? I think if there's a way to manually interpolate to fill in NA values so that I do have nice, symmetrically gridded data, then geom_raster() should work fine. What needs to happen is: for each lat/lon point on the grid where we're missing a value, take the mean value of the closest neighboring points to fill in that missing point. But I'm not sure how to do this (i.e., how to convert from my dataframe into all the different spatial classes and back again). Then again, this manual approach might be overcomplicating things and I'd love a simpler solution. (Fingers crossed.)
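In case it helps to see the logic spelled out, here is a sketch of that fill-by-neighbour-mean idea (written in C++ purely for concreteness; every name and the cell size are made up, and the R equivalent would be expanding to the full lat/lon grid and filling each NA with the mean of its filled neighbours):

```cpp
#include <cmath>
#include <map>
#include <optional>
#include <utility>
#include <vector>

struct Sample { double lat, lon, value; };

// Snap each sample to the nearest cell of a regular grid with spacing
// `cell` degrees, averaging any duplicates that land in the same cell.
std::map<std::pair<long, long>, double>
gridSamples(const std::vector<Sample>& samples, double cell) {
    std::map<std::pair<long, long>, std::pair<double, int>> acc;
    for (const auto& s : samples) {
        std::pair<long, long> key{std::lround(s.lat / cell),
                                  std::lround(s.lon / cell)};
        acc[key].first  += s.value;
        acc[key].second += 1;
    }
    std::map<std::pair<long, long>, double> grid;
    for (const auto& [key, sumCount] : acc)
        grid[key] = sumCount.first / sumCount.second;
    return grid;
}

// Fill one missing cell with the mean of its filled 4-neighbours, if any.
std::optional<double>
fillFromNeighbours(const std::map<std::pair<long, long>, double>& grid,
                   long i, long j) {
    static const std::pair<long, long> offsets[] = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
    double sum = 0.0;
    int n = 0;
    for (auto [di, dj] : offsets) {
        auto it = grid.find({i + di, j + dj});
        if (it != grid.end()) { sum += it->second; ++n; }
    }
    if (n == 0) return std::nullopt;
    return sum / n;
}
```

Once every cell is filled, the data is evenly spaced, which is exactly what geom_raster() wants.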
I'm building this in a shiny app with crazy long code, and a very large dataset, but happy to share additional info as needed!
example plot
example plot2
warning example

How to sculpt with odd linear artefacts

I am currently learning how to sculpt in Blender, working on my own projects after completing BlenderGuru's Beginner & Intermediate classes and some of Grant Abbitt's videos, with pleasing results. I am trying to sculpt a plasma pistol with a skull on it, which can be seen in the reference photo that I have provided.
However, when I sculpt, I get these really odd linear artefacts (see picture below, circled in black). I added a Subsurf modifier to the primitive UV sphere, with the Viewport and Render values set to 4, so it is a fairly fine mesh. However, these artefacts still occur.
I assume it is due to the stretching of the polygons when I grab the sphere with the Snake Hook tool and deform it to encompass the frontal part of the skull.
EDIT: Whilst writing this comment I went back and switched on Dynamic Topology with Relative Detail selected.
It appears that I am no longer getting the issues with the linear artefacts that I was getting last night.
Can I confirm that these problems are a result of having the incorrect Dynamic Topology settings for the Snake Hook tool (I was using Constant Detail instead of Relative Detail), or is this being caused by another issue?
Also, any advice on avoiding common pitfalls when choosing sculpting settings would be most appreciated.
I will still ask this question in case anyone has a similar problem and it can be resolved by reading this.
Sculpt, showing lineations
Experimenting with Dynamic Topology
In Object Mode, does the object have uniform X, Y, & Z scaling? If not, you can apply the scale from the object menu.
Object ‣ Apply ‣ Scale / Rotation & Scale

3D objects are not shown in their regular shape at a distance

I am working on a game that was originally developed by someone else. The problem I'm facing: when the player (with the camera) starts running along the road, the buildings do not show up in their regular shape, and as we move forward (closer to the buildings) they regain their original shapes. Sometimes the buildings on either side of the road are not visible to the camera at all (empty space), and when we move closer, a building suddenly pops into view. I think it may be a Unity3D settings problem (rendering, camera, or quality); maybe it was done to increase performance on mobile devices.
Does anybody know what the issue may be or how to resolve it?
Any help will be appreciated. Thanks in advance.
This sounds like it's a problem with the available LODs for each building's 3D model.
Basically, 3D games work by having 2-3 different versions of each 3D model, with varying *L*evels *O*f *D*etail. So, for example, if you have a house model which uses 500 polygons, you'll probably have another two versions (e.g., 250 polys and 100 polys), which are used depending on the distance between the player and the object. The farther away the player is, the simpler the version used will be.
The issue occurs when developers use automatically generated LOD models, which can look distorted or not appear at all. Unity probably auto-generates them, but I'm unsure where you'll find the settings for this in Unity. However, I've seen 3D models on the Unity Asset Store offering different LODs, so Unity probably gives you the option to set your own. The simplest solution would be to increase the distance at which the LODs change, while the more involved solution would be to create custom lower-poly versions of the 3D models for larger distances.
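To make the switching logic concrete, here is a minimal, engine-agnostic sketch (not Unity's actual API; in Unity you would configure this on the LODGroup component, and all names below are made up):

```cpp
#include <cstddef>
#include <vector>

struct LodLevel {
    float maxDistance;  // use this level while the camera is closer than this
    int   meshId;       // stand-in for the mesh/model rendered at this level
};

// Levels must be sorted from most detailed (LOD0) to least detailed.
// Returns the index of the level to draw, or -1 to cull the object.
int selectLod(const std::vector<LodLevel>& levels, float cameraDistance) {
    for (std::size_t i = 0; i < levels.size(); ++i)
        if (cameraDistance < levels[i].maxDistance)
            return static_cast<int>(i);
    return -1;  // farther than every switch distance: don't render at all
}
```

Increasing the switch distances (in Unity, the screen-relative thresholds on the LODGroup) keeps the detailed version on screen longer, trading performance for visual stability.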
I have resolved the problem. It was due to the LOD (level of detail) set-up used for objects (buildings) in Unity3D to enhance performance on slower devices. LOD provides several levels of detail for an object, which you can adjust according to your needs. In my specific case the buildings appeared suddenly because of a different (wrong) position for LOD1; i.e., for LOD1 the building was in the wrong place, while for LOD0 it was in the right place. So when my camera viewed the scene from a distance, it saw LOD1, which was in the wrong place, hence it saw empty space with no building at the expected position. But when the camera came closer, it saw LOD0, in which the building is in the right position, so buildings seemed to suddenly appear or pop into view.

Tweaking Heightmap Generation For Hexagon Grids

Currently I'm working on a little project just for a bit of fun. It is a C++ WinAPI application using OpenGL.
I hope it will turn into an RTS game played on a hexagon grid, and when I get the basic game engine done, I have plans to expand it further.
At the moment my application consists of a VBO that holds vertex and heightmap information. The heightmap is generated using a midpoint displacement algorithm (diamond-square).
In order to implement a hexagon grid I went with the idea explained here. It shifts down odd rows of a normal grid to allow relatively easy rendering of hexagons without too many further complications (I hope).
After a few days it is beginning to come together and I've added mouse picking, which is implemented by rendering each hex in the grid in a unique colour, and then sampling a given mouse position within this FBO to identify the ID of the selected cell (visible in the top right of the screenshot below).
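In case it's useful to anyone doing the same, the colour-ID trick boils down to a reversible mapping between cell index and RGB. A minimal sketch (names assumed, not taken from my project):

```cpp
#include <cstdint>

struct Rgb { std::uint8_t r, g, b; };

// Cell index -> unique picking colour (supports up to 2^24 cells).
Rgb encodeId(std::uint32_t id) {
    return { static_cast<std::uint8_t>(id & 0xFF),
             static_cast<std::uint8_t>((id >> 8) & 0xFF),
             static_cast<std::uint8_t>((id >> 16) & 0xFF) };
}

// Colour sampled under the mouse (e.g. with glReadPixels) -> cell index.
std::uint32_t decodeId(Rgb c) {
    return static_cast<std::uint32_t>(c.r)
         | (static_cast<std::uint32_t>(c.g) << 8)
         | (static_cast<std::uint32_t>(c.b) << 16);
}
```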
In the next stage of my project I would like to look at generating more 'playable' terrains. To me this means that the shape of each hexagon should be more regular than those seen in the image above.
So, finally coming to my point, is there:

1. A way of smoothing or adjusting the vertices in my current method that would bring all points of a hexagon onto one plane (coplanar)?
   EDIT: For anyone looking for information on how to make points coplanar, here is a great explanation.
2. A better approach to procedural terrain generation that would allow for better control of this sort of thing?
3. A way to represent my vertex information differently that allows for this?
To be clear, I am not trying to achieve a flat hex grid with raised edges or platforms (as seen below).
I would like all the geometry to join and lead into the next bit.
I hope to achieve something similar to what I have now (relatively nice undulating hills and terrain) but with more controllable plateaus. This would give me the flexibility to cordon off areas (unplayable tiles) later on, where I can add higher-detail meshes if needed.
Any feedback is welcome, I'm using this as a learning exercise so please - all comments welcome!
It depends on what you actually want and what you mean by "more controlled".
Do you want to be able to say "there will be a mountain at coordinates [11, -127] with radius 20"? The complexity of this depends on how far you want to go. If you want just mountains, then radial gradients are enough (just add the gradient values to the noise values). But if you want some more complex shapes, you are in for a treat.
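For illustration, a minimal sketch of that radial-gradient idea (the cosine falloff and all names are my choices; any monotonic falloff works):

```cpp
#include <cmath>
#include <vector>

// Add a radially falling-off bump of height `peak` and radius `radius`,
// centred at (cx, cy), on top of an existing width x rows heightmap.
void addMountain(std::vector<float>& height, int width, int rows,
                 float cx, float cy, float radius, float peak) {
    const float pi = 3.14159265f;
    for (int y = 0; y < rows; ++y) {
        for (int x = 0; x < width; ++x) {
            float dx = static_cast<float>(x) - cx;
            float dy = static_cast<float>(y) - cy;
            float d  = std::sqrt(dx * dx + dy * dy);
            if (d >= radius) continue;  // outside the gradient's footprint
            // Smooth falloff: `peak` at the centre, 0 at the rim.
            float t = d / radius;
            height[y * width + x] += peak * 0.5f * (1.0f + std::cos(t * pi));
        }
    }
}
```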
I explore this idea in great depth in my project (please consider that the published version is just a prototype, which is currently undergoing a major redesign; it is completely usable as a map generator, though).
Another way is to make the generation much more procedural: you just specify a sequence of mathematical functions which you apply to the terrain. Even a simple value transformation can get you very far.
All of these methods should work just fine for a hex grid. If artefacts occur because of the odd-row shift, then you could interpolate the odd rows instead (just calculate the height value for the shifted vertex from the two vertices between which it is located, with a simple linear interpolation formula).
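As a tiny illustration (names assumed): because the shifted vertex sits exactly halfway between its two unshifted neighbours, the interpolation reduces to a midpoint.

```cpp
// Height for a vertex shifted half a cell between two original columns.
float oddRowHeight(float leftNeighbour, float rightNeighbour) {
    return 0.5f * (leftNeighbour + rightNeighbour);
}
```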
Consider a function which maps the purple line onto the blue curve: it emphasizes low-lying heights as well as very high heights, but makes the transition between them steeper (this example is just a cosine function; making the curve less smooth would make the transformation more prominent).
You could also use only the bottom half of the curve, making peaks sharper and low-lying areas flatter (and thus more playable).
The "sharpness" of the curve can easily be modulated with a power (making the effect much more dramatic) or a square root (decreasing the effect).
Implementing this is actually extremely simple (especially if you use the cosine function): just apply the function to each pixel in the map. If the function isn't so mathematically trivial, lookup tables work just fine (with cubic interpolation between the table values; linear interpolation creates artefacts).
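A minimal sketch of that per-pixel remap, assuming heights normalised to [0, 1] (`sharpness` is my own name for the power/square-root modulation mentioned above):

```cpp
#include <cmath>
#include <vector>

// Remap heights through a cosine ease: values near 0 and 1 are flattened
// while the mid-range transition becomes steeper. A `sharpness` > 1
// exaggerates the effect; < 1 (e.g. 0.5, a square root) softens it.
void remapHeights(std::vector<float>& heights, float sharpness = 1.0f) {
    const float pi = 3.14159265f;
    for (float& h : heights) {
        float eased = 0.5f * (1.0f - std::cos(h * pi));  // 0 -> 0, 1 -> 1
        h = std::pow(eased, sharpness);
    }
}
```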
Several more simple methods of "gamification" of random noise terrain can be found in this paper: "Realtime Synthesis of Eroded Fractal Terrain for Use in Computer Games".
Good luck with your project

transform a path along an arc

I'm trying to transform a path along an arc.
My project runs on OS X 10.8.2, and the painting is done via Core Animation in CALayers.
There is a waveform in my project which is painted by a path. There are about 200 sample points, which are mirrored to the bottom side. These are painted 60 times per second and updated to match a song position.
Please ignore the white line; it is just a rotation indicator.
What I am trying to achieve is drawing a waveform along an arc. "Up" should point to the middle. It does not need to go all the way around. The waveform should be painted along the green circle; please take a look at the sketch provided below.
I'm not sure how to achieve this in a performant manner; there are many points per second that need coordinate correction.
I tried coming up with some ideas of my own:
1) It is possible to add linear transformations to paths, which, I think, will not help me here. The only thing I can think of is adding a point, rotating the path with a transformation, adding another point, rotating, and so on. But this would be very slow, I think.
2) Drawing the path into an image and bending it would surely lead to image artifacts.
3) Maybe the best idea would be to precompute sample points on an arc, then save a vector to the center for each. Take the y-coordinates of the waveform, place them on the sample points, and move them along the vector toward the center.
But maybe I am just not seeing some kind of easy solution to this problem. Help is really appreciated and fresh ideas are very welcome. Thank you in advance!
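To make idea 3 concrete, here is a rough geometry sketch (plain C++ rather than the actual Core Animation/CGPath code; all names are made up):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { float x, y; };

// Place waveform amplitudes along an arc of radius `arcRadius` around
// (cx, cy), spanning [startAngle, endAngle] in radians. Positive
// amplitudes are moved toward the centre, so "up" points to the middle.
std::vector<Point> waveformOnArc(const std::vector<float>& samples,
                                 float cx, float cy, float arcRadius,
                                 float startAngle, float endAngle) {
    std::vector<Point> out;
    if (samples.size() < 2) return out;
    out.reserve(samples.size());
    const float step = (endAngle - startAngle)
                     / static_cast<float>(samples.size() - 1);
    for (std::size_t i = 0; i < samples.size(); ++i) {
        const float a  = startAngle + step * static_cast<float>(i);
        const float ux = std::cos(a);   // unit vector: centre -> arc point
        const float uy = std::sin(a);
        const float r  = arcRadius - samples[i];  // offset toward the centre
        out.push_back({cx + ux * r, cy + uy * r});
    }
    return out;
}
```

Since the sample count (~200) and the arc are fixed, the cos/sin unit vectors could be computed once and reused every frame, leaving roughly one multiply-add per point per frame.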
IMHO, the most efficient way to go (in terms of CPU usage) would be to use some form of pre-computed approach that would take into account the resolution of the display.
Cleverly precomputed values
I would go for the mathematical transformation (from linear to polar) and combine two facts:

- There is no need to perform expensive mathematical computation.
- There is no need to render two points that are too close to each other.
I have no ready-made algorithm for you, but you could use a pre-computed sin or cos table, and match the data range to the display size in order to work with integers.
For instance, imagine we have some data ranging from 0 to 1e6 and we need to display the sin value of each point in a rectangle 100 pixels high. We can use a precomputed sin table and work with integers. This way, displaying the sin value of a point would be much quicker. This concept can be refined to get a nicer result.
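Here's a minimal sketch of that idea (table size, scaling, and names are just for illustration):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

constexpr int kTableSize   = 4096;  // table resolution over one period
constexpr int kPixelHeight = 100;   // height of the target rectangle

// table[i] ~ sin(2*pi*i / kTableSize), pre-scaled to pixel units
// in [-kPixelHeight/2, +kPixelHeight/2].
std::vector<std::int16_t> buildSinTable() {
    const double pi = 3.14159265358979;
    std::vector<std::int16_t> table(kTableSize);
    for (int i = 0; i < kTableSize; ++i) {
        double a = 2.0 * pi * i / kTableSize;
        table[i] = static_cast<std::int16_t>(
            std::lround(std::sin(a) * (kPixelHeight / 2)));
    }
    return table;
}

// Map a data value in [0, 1e6] straight to a pixel offset: one integer
// index computation and one lookup, no trigonometry per point.
int sinPixel(const std::vector<std::int16_t>& table, double value) {
    int idx = static_cast<int>(value / 1.0e6 * (kTableSize - 1));
    return table[idx];
}
```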
Also, there are some ways to retain only the significant points of a curve so that the displayed curve still looks like the original (see the Ramer–Douglas–Peucker algorithm on Wikipedia), but I found this to be inefficient for quickly displaying ever-changing data.
Using multicore rendering
You could compute different areas of the curve using multiple cores (can be tricky)
Or you could do the pre-computation using several cores, and use one core to finish the job.