How to add ground shadows to Entities created programmatically in RealityKit?

I am trying to create two types of Entities in my project. I create both of them programmatically by generating a MeshResource and a Material.
The first one (named Placement Indicator) has a plane mesh and an UnlitMaterial, and I attach a texture on top.
The second one (named Point Charge) has a sphere mesh and a SimpleMaterial.
Everything works great, except that when these entities appear on the surface of my table, there are no shadows on the table.
How can I make the entities I created cast ground shadows on my table surface?

You can try adding lighting in your sceneUnderstanding options.
It simulates a light directly above your entity to generate shadows:
arView.environment.sceneUnderstanding.options.insert(.receivesLighting)
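
For completeness, here is a minimal sketch of how that option fits alongside a programmatically created entity. It assumes an ARView instance named arView on a device that supports scene understanding; the pointCharge name is illustrative, not from the question.

import RealityKit

// Let real-world surfaces receive lighting (and shadows) cast by virtual content.
arView.environment.sceneUnderstanding.options.insert(.receivesLighting)

// A programmatically generated sphere with a SimpleMaterial, as in the question.
let pointCharge = ModelEntity(
    mesh: .generateSphere(radius: 0.05),
    materials: [SimpleMaterial(color: .blue, isMetallic: false)]
)

// Anchor it to a detected horizontal plane, such as the table.
let anchor = AnchorEntity(plane: .horizontal)
anchor.addChild(pointCharge)
arView.scene.addAnchor(anchor)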


How to refine some specific surfaces in Avizo?

I'm using Avizo to generate a mesh of my microstructure obtained from a CT scan, in order to run computations in Abaqus. I can generate interesting surface meshes; nonetheless, the outside mesh is too fine (see the figure attached to this question). I am trying to create a surface patch so I can use a coarser mesh on the outside, but it doesn't work: when I remesh my model, everything is modified.
How can I generate sub-surfaces in order to specify special mesh conditions?
Thanks for your help.
It looks like your question is: how can I break my mesh into separate components, perform some work on those components, and then put them back together?
There is a tool to select connected components in a region, in the toolbar area at the top of the MeshLab window; you can use it to select any triangle on the outside boundary, and it will grab all the connected parts of the mesh. Then, with "Filters->Mesh Layer->Move Selected Faces to Another Layer", you can separate the selected part of the mesh into a new layer. Alternatively, you can right-click on the layer and select "Split in Connected Components". On this new layer, you can run simplification filters such as clustering decimation.
Once you have completed the simplification to your satisfaction, make the layers that you want to merge back together visible, and use the filter "Filters->Mesh Layer->Flatten Visible Layers" to combine them again.

Design pattern for child calling method in parent

I am currently working on my biggest project yet, and I am having trouble figuring out how to structure my code. I'm looking for some guidance.
I have two objects, a Tile and a Container. Each Tile has a 2D coordinate, and all Tiles are children of the Container. The Container has methods to return the tile at a location, switch tiles, add tiles, and remove tiles.
Now, when you click on a tile it disappears. That was easy, because it was self-contained. The problem came when I created different types of tiles that inherit from the base Tile. Each type of tile performs a different action when you click on it: some destroy surrounding tiles, some switch with other tiles, and others add new tiles. For simplicity, we will call these three subclasses Tile-destroy, Tile-swap, and Tile-add.
My problem is: when I click on these tiles, how can they act on other tiles in the Container? Should I just call functions in the parent class, or is there a better way to do this? I am having trouble #including the Tile in the Container as well as the other way around, and I feel like it's not a proper pattern.
I have it set up so that when a click takes place, the Container handles it, checks the type of tile that was clicked, and acts from there with a large else-if statement. However, this makes it very difficult to add new tile types. Ideally, all the information about what happens when you click on a tile would be contained within each tile subclass.
Any ideas?
I can suggest the simplest design:
Your Container will be a game controller.
Each tile has a Parent property which refers to the Container.
When you click on a tile, it sends a Command to the Container (for example, DestroyTile(x, y) or AddTile(x, y)).
The Container handles these commands and destroys, adds, or swaps tiles.
If you want a really good, more decoupled design, you can also create handlers for each operation type: DestroyTileHandler, AddTileHandler, and so on. On different commands, the Container just passes them to the appropriate handler. You also need to pass a context object (like a Field with tiles) to the handler. This allows you to add and modify operations without even changing the Container code.
See related patterns: Command, Observer.
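Here is a minimal sketch of the Command approach in Swift (the question is C++, but the shape is identical; all type and method names here are illustrative):

protocol TileCommand {
    func execute(on container: Container)
}

struct DestroyTileCommand: TileCommand {
    let x: Int, y: Int
    func execute(on container: Container) {
        container.removeTile(atX: x, y: y)
    }
}

class Tile {
    let x: Int, y: Int
    weak var parent: Container?          // back-reference to the Container

    init(x: Int, y: Int) { self.x = x; self.y = y }

    // Each subclass overrides this to describe its own click behaviour.
    func commandsForClick() -> [TileCommand] { [] }

    // A tile never touches other tiles directly; it only emits commands.
    func clicked() {
        guard let container = parent else { return }
        commandsForClick().forEach { container.handle($0) }
    }
}

class DestroyingTile: Tile {             // the "Tile-destroy" subclass
    override func commandsForClick() -> [TileCommand] {
        // Destroy the four orthogonal neighbours.
        [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
            .map { DestroyTileCommand(x: $0.0, y: $0.1) }
    }
}

class Container {
    private var tiles: [Tile] = []

    func addTile(_ tile: Tile) {
        tile.parent = self
        tiles.append(tile)
    }

    func removeTile(atX x: Int, y: Int) {
        tiles.removeAll { $0.x == x && $0.y == y }
    }

    // New tile types mean new commands, not a growing else-if chain here.
    func handle(_ command: TileCommand) {
        command.execute(on: self)
    }
}

With this shape, adding a Tile-swap or Tile-add is a new command plus a new subclass; the Container's dispatch code never changes.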
Feel free to ask questions and good luck!

Cocos3D - background shown through meshes

I imported the .pod file created from Blender, and the blue background shows through the eyelash and eyebrow meshes. Does anyone know why I'm encountering this?
WITHOUT additional material, it looks normal (except at the root of the hair).
WITH a new green material added to her left shoulder, the eyebrows and eyelashes show the background through them.
This issue is caused by the order in which the nodes are being rendered in your scene.
In the first model, the hair is drawn first, then the skin, then the eyebrows and eyelashes. In the second model, the hair, eyebrows and eyelashes are all drawn before the skin. By the time the skin under the hair or eyelashes is drawn, the depth buffer indicates that something closer to the camera has already been drawn, and the engine doesn't bother rendering those skin pixels. But because the eyelashes, eyebrows and hair all contain transparency, we end up looking right through them onto the backdrop.
This use of a depth buffer is fundamental to all 3D rendering. It's how the engine knows not to render pixels that are visually occluded by another object; otherwise, all we'd ever see would be the last object rendered. However, when rendering overlapping objects that contain transparency, it's important to get the rendering order correct, so that more distant objects that are behind closer transparent objects are rendered first.
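
In framework-agnostic terms, the ordering rule described above looks something like this (a generic sketch in Swift, not Cocos3D API; the types are illustrative):

struct RenderObject {
    let isOpaque: Bool
    let distanceFromCamera: Float
    func draw() { /* issue the draw call */ }
}

func renderScene(_ objects: [RenderObject]) {
    // 1. Opaque objects first; the depth buffer resolves occlusion among them.
    objects.filter { $0.isOpaque }
        .forEach { $0.draw() }

    // 2. Transparent objects back-to-front, so each one blends over
    //    everything already visible behind it.
    objects.filter { !$0.isOpaque }
        .sorted { $0.distanceFromCamera > $1.distanceFromCamera }
        .forEach { $0.draw() }
}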
In Cocos3D, there are several tools available for ordering your transparent objects for rendering:
1. The first, and primary, tool is the drawingSequencer that is managed by the CC3Scene. You can configure several different types of drawing sequencers. The default sequencer is smart enough to render all opaque objects first, then to render the objects that contain transparency in decreasing order of distance from the camera (rendering farther objects first). This works best for most scenes, particularly where objects are moving around and can move in front of each other unpredictably. Unfortunately, in your custom CC3Scene initialization code (which you sent me per the question comments), you replaced the default drawing sequencer with one that does not sequence transparent objects based on distance. If you remove that change, everything works properly.
2. Objects that are not explicitly sequenced by distance (as in point 1 above) are rendered in the order in which they are added to the scene. You can therefore also define rendering order by ensuring that the objects are added to your scene in the order in which you want them rendered. This can work well for static models, such as your first character (if you change it to add the hair after the skin).
3. CC3Node also has a zOrder property, which allows you to override the rendering order explicitly, so that objects with larger zOrder values are rendered before those with smaller zOrder values. This is useful when you have a static model whose components cannot be added in rendering order, or to temporarily override the rendering order of two transparent objects that might be passing in front of each other. Using the zOrder property does depend on using a drawingSequencer that makes use of it (the default drawing sequencer does).
4. Finally, you can temporarily turn off depth testing or masking when rendering particular nodes, by setting the shouldDisableDepthTest and shouldDisableDepthMask properties to YES on those nodes.
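
For points 3 and 4, usage amounts to a couple of property assignments. Cocos3D is an Objective-C framework, so this hedged sketch assumes standard Swift bridging of the CC3Node properties named above; the node names are illustrative:

// Skin must draw before the transparent eyelashes so they blend over it;
// larger zOrder values render earlier (i.e. are treated as farther away).
skinNode.zOrder = 20
eyelashNode.zOrder = 10

// Or, as a blunt instrument, stop a node from depth testing entirely:
eyelashNode.shouldDisableDepthTest = true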

Calculating the area and position of dynamically formed polygons

Hi stackoverflow community,
This is a continuation of a question I asked 6 months ago regarding calculating the area and position of dynamically formed rectangles. The solution provided for that worked a treat, but now I want to take this a step further.
Some background: I'm working on a puzzle game using Cocos2D/Box2D where the player draws lines on the screen. Depending on where the player draws, I want to work out the area and position of the polygons that appear as a result of the drawn lines.
In the following image, the black border represents the playing area, which will always be the same shape. The grey lines are player-drawn and will always be straight. The green square is an obstacle; the obstacle objects will be convex shapes. The formed polygons (3 in this case) are the blue areas, and are the shapes I'm trying to get the coordinates and area for.
I think I'll be fine working out the area of a polygon using determinants, but before that I need to work out the coordinates of the blue polygons, and I'm not sure how to do this.
I've got the (x, y) coordinates for both ends of each line, the coordinates of the obstacle, and the corner coordinates of the black border. Using those, is it possible to work out the coordinates of the blue polygons, or am I approaching this the wrong way?
UPDATE - response to duffymo
Thanks for your answer. To explain further, each object mentioned is defined and encapsulated in a class, i.e. I've got Line/Obstacle/PlayingArea objects. My polygon object is encapsulated in a 'Rectangle' object. Each one of these objects has its own properties associated with it, such as its coordinates/area/ID/state, etc.
In order to keep track of all the objects, I've got an overseeing singleton object which holds all of the Line objects, Obstacle objects, etc. in their own respective arrays. This way, I can loop through, say, all Lines and know where each one has been drawn by the player.
The game is a bit like classic JezzBall, so I need to be able to create these polygon shapes when a user draws a line, because the polygon shape will be used as my way of detecting whether that particular area contains a ball. If not, the area needs to be filled.
Since you already have the nodes and edges for your polygons, I'd recommend that you calculate the centroids, perimeters, and areas using contour integration. You can express the centroids and areas as contour integrals using Green's theorem.
You can use Gaussian quadrature to do piecewise integration along each edge.
It'll be fast and accurate, and it'll work on polygons of arbitrary complexity.
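
For polygons with straight edges, those contour integrals evaluate in closed form: the piecewise integration reduces exactly to the sums below for linear edges. A minimal sketch in Swift, with an illustrative Point type:

struct Point { var x: Double; var y: Double }

// Signed area via Green's theorem (positive for counter-clockwise winding).
func signedArea(_ pts: [Point]) -> Double {
    var sum = 0.0
    for i in 0..<pts.count {
        let p = pts[i], q = pts[(i + 1) % pts.count]
        sum += p.x * q.y - q.x * p.y
    }
    return sum / 2
}

// Centroid via the companion contour integrals.
func centroid(_ pts: [Point]) -> Point {
    let a = signedArea(pts)
    var cx = 0.0, cy = 0.0
    for i in 0..<pts.count {
        let p = pts[i], q = pts[(i + 1) % pts.count]
        let cross = p.x * q.y - q.x * p.y
        cx += (p.x + q.x) * cross
        cy += (p.y + q.y) * cross
    }
    return Point(x: cx / (6 * a), y: cy / (6 * a))
}

Take abs(signedArea(pts)) if you only need the magnitude; the sign tells you the winding direction.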
UPDATE: Objective-C is an object-oriented language. I don't know it myself, but I believe it's based on ideas from C and C++. Since that's the case, I'd recommend that you start writing more in terms of objects. Arrays of coordinates? They need to be encapsulated together. I'd suggest a Point abstraction that encapsulates a point (id, x, y). Make a Grid that has a List of Points.
It sounds like users supply the relationship between Points to form Polygons. That's not clear from your description, so it's not a surprise that you're having trouble implementing it.

Simple Drawing App Design -- Hillegass Book, Ch. 18

I am working through Aaron Hillegass' Cocoa Programming for Mac OS X and am doing the challenge for Chapter 18. Basically, the challenge is to write an app that can draw ovals using your mouse, and then additionally, add saving/loading and undo support. I'm trying to think of a good class design for this app that follows MVC. Here's what I had in mind:
Have an NSView subclass that represents an oval (say, JBOval) that I can use to easily draw an oval.
Have a main view (JBDrawingView) that holds JBOvals and draws them.
The thing is, I wasn't sure how to add archiving. Should I archive each JBOval? I think this would work, but archiving an NSView doesn't seem very efficient. Any ideas on a better class design?
Thanks.
Have an NSView subclass that represents an oval (say, JBOval) that I can use to easily draw an oval.
That doesn't sound very MVC. “JBOval” sounds like a model class to me.
Have a main view (JBDrawingView) that holds JBOvals and draws them.
I do like this part.
My suggestion is to have each model object (JBOval, etc.) able to create a Bézier path representing itself. The JBDrawingView (and you should come up with a better name for that, as all views draw by definition) should ask each model object for its Bézier path, fill settings, and stroke settings, and draw the object accordingly.
This keeps the knowledge of how to draw (the path, line width, colors, etc.) in the various shape classes where they belong, while also keeping the actual drawing code in the view layer where it belongs.
The answer to where to put archiving code should be intuitively obvious from this point.
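
A minimal sketch of that division of labour in Swift (the book's examples are Objective-C, and the property choices here are illustrative):

import Cocoa

// Model: knows its geometry and style, can describe itself as a path,
// and knows how to archive itself. It is not a view.
class JBOval: NSObject, NSSecureCoding {
    static var supportsSecureCoding: Bool { true }

    var rect: NSRect
    var fillColor: NSColor

    init(rect: NSRect, fillColor: NSColor) {
        self.rect = rect
        self.fillColor = fillColor
        super.init()
    }

    var bezierPath: NSBezierPath { NSBezierPath(ovalIn: rect) }

    // Archiving lives in the model, not in any view.
    func encode(with coder: NSCoder) {
        coder.encode(rect, forKey: "rect")
        coder.encode(fillColor, forKey: "fillColor")
    }

    required init?(coder: NSCoder) {
        rect = coder.decodeRect(forKey: "rect")
        fillColor = coder.decodeObject(of: NSColor.self, forKey: "fillColor") ?? .black
        super.init()
    }
}

// View: asks each model for its path and fill settings, and does the drawing.
class JBDrawingView: NSView {
    var ovals: [JBOval] = []

    override func draw(_ dirtyRect: NSRect) {
        for oval in ovals {
            oval.fillColor.setFill()
            oval.bezierPath.fill()
        }
    }
}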
Having a whole NSView for each oval seems rather heavyweight to me. I would descend them from NSObject instead and just have them draw to the current view.
They could also know how to archive themselves, although at that point you'd probably want to think about pulling them out of the view and thinking of them more as part of your model.
Your JBOval views would each be responsible for drawing themselves (basically drawing an oval path and filling it, within their bounds), but JBDrawingView would be responsible for mousing and dragging (and thereby sizing and positioning the JBOvals, which would be its subviews). The drawingView would do no drawing itself.
As far as archiving goes, you could have a model class to represent each oval (storing, say, its bounding rectangle, or any other dimensions you choose to represent each oval with). You could archive and unarchive these models to recreate your views.
Finally, I use the JB prefix too, so … :P at you.