Why does an instanced Static Mesh get scaled in Unreal Engine 4?

While building a project in Unreal Engine 4.26 and trying to improve the game's performance, I ended up using instanced static meshes wherever possible. This introduced an issue:
My instanced meshes are scaled to roughly 0.9995 instead of 1.0 (verified experimentally).
Looking for an answer, I found a workaround suggested by the Unreal Engine devs themselves: rotate the mesh by adding a full extra 360 degrees, using higher values of the same rotation, as you can read here. This didn't work for me and, as you can see, the difference between instanced and manually placed meshes is evident.
This is how I've instanced the meshes:
Increasing the rotation on the Z axis to 450 degrees didn't solve anything, even though doing it with the meshes provided by the devs here actually works.
I'm convinced rotations are the key to solving this, since the problem is not systematic. I don't understand the logic behind it, but when I build a square room out of instanced meshes, some walls end up with gaps and others with a perfect scale. I increased the scale of all of them so there are no gaps, but I'm afraid that solution will bring more issues later when working with lighting and production materials. It seems the bug still isn't fixed in UE4; is there another workaround I could use without any risk of overlapping meshes?
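For reference, a minimal C++ sketch of the kind of programmatic instancing discussed above, with the suggested rotation workaround (450 instead of 90 degrees of yaw) and an explicitly uniform scale of 1.0. This is only an illustration, not the asker's actual setup; the helper name, mesh, spacing and instance count are placeholders.

    // Illustration only: fill a UInstancedStaticMeshComponent from C++.
    #include "Components/InstancedStaticMeshComponent.h"
    #include "Engine/StaticMesh.h"

    static void BuildWallRow(UInstancedStaticMeshComponent* InstancedWalls, UStaticMesh* WallMesh)
    {
        InstancedWalls->SetStaticMesh(WallMesh);
        InstancedWalls->ClearInstances();

        for (int32 i = 0; i < 10; ++i)
        {
            const FRotator Rotation(0.f, 450.f, 0.f);     // workaround: 360 + 90 degrees of yaw
            const FVector  Location(i * 400.f, 0.f, 0.f); // placeholder 400 uu spacing
            const FVector  Scale(1.f, 1.f, 1.f);          // keep the instance scale explicitly at 1.0
            InstancedWalls->AddInstance(FTransform(Rotation, Location, Scale));
        }
    }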

This seems to be a bug that hasn't been fixed yet. To populate the map with objects where a high level of accuracy is necessary, the suggestion is to place them manually. If you place them manually, UE4 will still make them instances, so a Blueprint that populates rooms programmatically isn't necessary.
Increasing the scale as I did in the first place doesn't seem to cause problems, but I ended up placing the meshes manually, since that solution is more elegant even if it requires more effort.

Related

How to sculpt with odd linear artefacts

I am currently learning how to sculpt in Blender, working on my own projects after completing BlenderGuru's Beginner & Intermediate classes and some of Grant Abbitt's videos, with pleasing results. I am trying to sculpt a plasma pistol with a skull on it, which can be seen in the reference photo I have provided.
However, when I sculpt, I get these really odd linear artefacts (see the picture below, circled in black). I added a Subsurf Modifier to the primitive UV Sphere, with the Viewport and Render values set to 4, so it is a fairly fine mesh. However, these artefacts still occur.
I assume it is due to the stretching of the polygons when I grab the sphere with the Snake Hook tool and deform it to encompass the frontal part of the skull.
EDIT: Whilst writing this question I went back and switched on Dynamic Topology with Relative Detail selected.
It appears that I am no longer getting the linear artefacts that I was getting last night.
Can I confirm that these problems are a result of using the wrong Dynamic Topology settings for the Snake Hook tool (I was using Constant Detail instead of Relative Detail), or is this being caused by another issue?
Also, any advice on avoiding common pitfalls when choosing sculpting settings would be most appreciated.
I will leave this question up in case anyone has a similar problem and it can be resolved by reading this.
Sculpt, showing lineations
Experimenting with Dynamic Topology
In Object Mode, does the object have uniform X, Y, & Z scaling? If not, you can apply the scale from the object menu.
Object ‣ Apply ‣ Scale / Rotation & Scale

D3D12: Use the back-buffer surface as an unordered access view (UAV)

I'm making a simple raytracer for a school project where a compute shader is supposed to be used to shade a triangle or some other primitive.
For this I'd like to write to a back-buffer surface directly from the compute shader and then present the results immediately. I know for certain that this is possible in DX11, though I can't seem to get it to work in DX12.
I couldn't find much information about this, but I found this gamedev thread discussing the exact same problem I'm trying to figure out, and they seem to come to the conclusion that was my go-to workaround: writing to an intermediate texture and then sampling it in a pipeline.
I can't fully accept that this would be impossible to achieve in DX12. Why would that feature be removed? Could it be that the queuing system removes some overhead that makes this feature unnecessary?
Is there any way to build a raytracer without writing to a separate texture and then sampling it in a pipeline or copying it onto the back buffer? What are my best alternatives for performance?
You will have to accept that answer. They removed the capability to create a UAV on the swap-chain surface, the same way they removed the capability to use a multisample surface in the swap chain.
The problem with authorizing UAVs on the swap-chain surface is that they would have to forfeit tracking of what is happening to it. DX12 relies on descriptor heaps that are 100% volatile at runtime for UAVs (render-target views are CPU-side only and can be tracked).
Microsoft needs to track the swap-chain surface state strictly in order to guarantee correct behavior with desktop presentation, and for that reason they chose to deny the UAV binding.
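A minimal sketch of the intermediate-texture workaround mentioned above, assuming the device, command list, back buffer, compute pipeline state and descriptor heap are created elsewhere; all names here are placeholders, and the d3dx12.h helpers come from Microsoft's DirectX headers/samples.

    #include <d3d12.h>
    #include <wrl/client.h>
    #include "d3dx12.h" // CD3DX12_* helper structs

    using Microsoft::WRL::ComPtr;

    // 1) Create an intermediate texture with UAV support; a swap-chain back buffer
    //    cannot be created with D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS.
    ComPtr<ID3D12Resource> CreateRaytraceOutput(ID3D12Device* device, UINT width, UINT height)
    {
        CD3DX12_RESOURCE_DESC desc = CD3DX12_RESOURCE_DESC::Tex2D(
            DXGI_FORMAT_R8G8B8A8_UNORM, width, height, 1, 1, 1, 0,
            D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS);
        CD3DX12_HEAP_PROPERTIES heapProps(D3D12_HEAP_TYPE_DEFAULT);

        ComPtr<ID3D12Resource> output;
        device->CreateCommittedResource(&heapProps, D3D12_HEAP_FLAG_NONE, &desc,
            D3D12_RESOURCE_STATE_UNORDERED_ACCESS, nullptr, IID_PPV_ARGS(&output));
        return output;
    }

    // 2) Per frame: let the compute shader fill the intermediate texture, then copy it
    //    into the back buffer (formats and dimensions must match for CopyResource).
    void RecordFrame(ID3D12GraphicsCommandList* cl, ID3D12Resource* raytraceOutput,
                     ID3D12Resource* backBuffer, UINT width, UINT height)
    {
        // Compute root signature, PSO and UAV descriptor table are assumed bound already.
        cl->Dispatch((width + 7) / 8, (height + 7) / 8, 1);

        CD3DX12_RESOURCE_BARRIER toCopy[] = {
            CD3DX12_RESOURCE_BARRIER::Transition(raytraceOutput,
                D3D12_RESOURCE_STATE_UNORDERED_ACCESS, D3D12_RESOURCE_STATE_COPY_SOURCE),
            CD3DX12_RESOURCE_BARRIER::Transition(backBuffer,
                D3D12_RESOURCE_STATE_PRESENT, D3D12_RESOURCE_STATE_COPY_DEST),
        };
        cl->ResourceBarrier(2, toCopy);
        cl->CopyResource(backBuffer, raytraceOutput);

        CD3DX12_RESOURCE_BARRIER back[] = {
            CD3DX12_RESOURCE_BARRIER::Transition(raytraceOutput,
                D3D12_RESOURCE_STATE_COPY_SOURCE, D3D12_RESOURCE_STATE_UNORDERED_ACCESS),
            CD3DX12_RESOURCE_BARRIER::Transition(backBuffer,
                D3D12_RESOURCE_STATE_COPY_DEST, D3D12_RESOURCE_STATE_PRESENT),
        };
        cl->ResourceBarrier(2, back);
    }

The alternative to CopyResource is to keep the intermediate texture as a shader resource and draw a full-screen triangle that samples it into the back buffer, which also lets you convert formats.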

3D objects are not shown in their regular shape at a distance

I am working on a game that was developed earlier by someone else. The problem I'm facing is that when the player (with the camera) starts running along the road, the buildings are not shown in their regular shape, and as we move forward (closer to the buildings) they regain their original shapes. Sometimes the buildings on either side of the road are not visible to the camera at all (empty space), and when we move closer a building suddenly pops into view. I think it may be a Unity3D settings problem (rendering, camera or quality), possibly something that was done to increase performance on mobile devices.
Does anybody know what the issue may be or how to resolve it?
Any help will be appreciated. Thanks in advance.
This sounds like it's a problem with the available LODs for each building's 3D model.
Basically, 3D games work by having two or three different versions of each 3D model, with varying Levels Of Detail (LOD). For example, if you have a house model that uses 500 polygons, you'll probably have another two versions (e.g. 250 polys and 100 polys) which are used depending on the distance between the player and the object. The farther away the player is, the simpler the version used.
The issue occurs when developers use automatically generated LOD models, which can look distorted or not appear at all. Unity probably auto-generates them, but I'm unsure where you'll find the settings for this in Unity. However, I've seen 3D models on the Unity Asset Store offered with different LODs, so Unity probably gives you the option to set your own. The simplest solution would be to increase the distance at which the LODs change, while the more involved solution would be to build custom lower-poly versions of the 3D models for larger distances.
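To make the distance-threshold idea concrete, here is a small sketch in plain C++ (this is not Unity's API; in Unity the same thing is configured on a LODGroup component rather than coded by hand, and the names below are placeholders):

    #include <vector>

    struct Lod {
        float maxDistance; // use this version while the camera is closer than this
        int   meshId;      // placeholder handle for the actual mesh/renderer
    };

    // Returns the mesh to draw for the given camera distance, or -1 to cull entirely.
    // The list is assumed sorted from most detailed (smallest maxDistance) to least.
    int SelectLod(const std::vector<Lod>& lods, float distanceToCamera) {
        for (const Lod& lod : lods)
            if (distanceToCamera < lod.maxDistance)
                return lod.meshId;
        return -1;
    }

The symptom described in the question (buildings popping in or sitting in the wrong place at a distance) points to one of the lower-detail versions being missing or misplaced, which matches what the answer below found.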
I have resolved the problem. It was due to the LOD (Level of Detail) setup used for the objects (buildings) in Unity3D to improve performance on slower devices. LOD provides several levels of detail for an object, which you can adjust according to your needs. In my specific case the buildings appeared suddenly because LOD1 had a different (wrong) position: for LOD1 the building was in the wrong place, while for LOD0 it was in the right place. So when the camera looked from a distance it saw LOD1, which was in the wrong place, and therefore it saw empty space with no building at the expected position. When the camera came closer it saw LOD0, in which the building is at the right position, so the buildings seemed to appear suddenly or become visible.

Inserting a CCParticleSystemQuad between sprites in different CCSpriteBatchNodes

I currently have a few layers in a Cocos2d scene (running in Kobold2d). Each layer has a sprite batch node attached to it; I need to use batch nodes given the ridiculous number of sprites I have on screen at once. Everything is working fine, and I've set up a little particle system. The problem I'm running into is that the CCParticleBatchNode particle emitters are always on top of everything (since theirs is the layer with the highest zOrder), but this is an isometric game, so that obviously doesn't work.
Is there a way I could sneak the emitters between the sprites on any of my layers containing CCSpriteBatchNodes? I've tried messing around with vertexZ (I'm on the newest version of cocos2d 2.x), but no matter what I do it doesn't seem to change anything, even though the Lua config file for Kobold2d that would enable this is set properly and the programForKey:kCCShader_PositionTextureColorAlphaTest shader is enabled on my batch nodes. But maybe this isn't even the best solution?
Has anyone run into anything like this, or can you suggest any sacrifices I could make or tricks I could try that I'm not thinking of?
To use vertexZ you need to enable depth buffering (see config.lua). vertexZ is the only way to change the draw order between sprite batches and other nodes.
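For illustration, a rough sketch of the vertexZ approach in cocos2d-x 2.x style C++ (the question itself uses the Objective-C API under Kobold2d, where the equivalent calls carry the same names); the vertexZ values are placeholders:

    #include "cocos2d.h"
    using namespace cocos2d;

    void interleaveEmitter(CCParticleSystemQuad* emitter, CCSprite* spriteInBatch)
    {
        // Depth testing must be on, otherwise vertexZ has no effect
        // (in Kobold2d this is driven by config.lua, as noted above).
        CCDirector::sharedDirector()->setDepthTest(true);

        // Batched sprites need the alpha-test shader so transparent texels
        // don't write depth and punch holes in whatever draws behind them.
        spriteInBatch->setShaderProgram(
            CCShaderCache::sharedShaderCache()->programForKey(kCCShader_PositionTextureColorAlphaTest));

        // Push the batched sprite back and the emitter forward (or vice versa)
        // to control which one ends up on top.
        spriteInBatch->setVertexZ(-1.0f);
        emitter->setVertexZ(1.0f);
    }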

Cocos2d moving nodes is choppy

In my upcoming iPhone game, different scene elements are split into their own CCNodes.
My Obstacle node contains many child nodes, each representing an obstacle. Inside every obstacle node are the images that make up the obstacle (1-4 images), and there are only ~10 obstacles at a time. Every update, my game calls the update function of the Obstacle node, which moves every obstacle to the left. But this slows down my game quite a bit.
At the same time, I have a particle node that just contains images and moves them all every frame exactly the same way the Obstacle node does, yet it has no noticeable effect on performance even though it has hundreds of images at a time.
My question is: why do the obstacles slow the game down so much while the particles don't? I have even tried replacing the images used in the obstacles with the ones used in the particles, and it makes no noticeable difference. Could it be that there is another level of child nodes?
You will dramatically increase the app's performance, speed, and frame rate if you put all your images in a texture atlas and render them as a single batch using the CCSpriteBatchNode class. If you are moving lots of objects around on the screen, this makes the hardware work a lot less.
Using this class is easy. Create it with the texture atlas that contains all your images, then add it as a child to your layer, just as you would a sprite.
However, when you create your sprites, add them as children of this batch node, not as children of the layer.
It's very easy and will probably help you quite a lot here.
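For illustration, a sketch of that setup in cocos2d-x 2.x style C++ (the question is Objective-C under Kobold2d, where the calls have the same names); the atlas files, frame name and positions are placeholders:

    #include "cocos2d.h"
    using namespace cocos2d;

    void addObstaclesToLayer(CCLayer* layer)
    {
        // Load the texture atlas and its sprite frames once.
        CCSpriteFrameCache::sharedSpriteFrameCache()->addSpriteFramesWithFile("sprites.plist");

        // One batch node per atlas: everything added to it is drawn in a single draw call.
        CCSpriteBatchNode* batch = CCSpriteBatchNode::create("sprites.png");
        layer->addChild(batch);

        for (int i = 0; i < 10; ++i)
        {
            // Sprites must use frames from the same atlas and be children of the
            // batch node, not of the layer, or they will not be batched.
            CCSprite* obstacle = CCSprite::createWithSpriteFrameName("obstacle.png");
            obstacle->setPosition(ccp(100.0f * i, 160.0f));
            batch->addChild(obstacle);
        }
    }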
From what I recall of the Cocos2d documentation, particles are intended to be very lightweight so you can have many, many of them on screen at once. Nodes are heavier: they require more processing between frames as they interact with the physics system and need node-specific rendering. The last time I looked at the render loop code, it was basically O(n) in the number of CCNodes you had in a scene. Using NSTimers versus Cocos2d's built-in run loop also makes quite a bit of difference in performance.
Could you provide an example of something that slows down a lot? Exactly how do you update these Obstacles?
The cocos2d documentation has some best practices that all, in one way or another, touch on performance. There's a LOT you can do to optimize your frames per second.
In general, when your code is slow, it helps to use Instruments.app to figure out where it is spending its time. Since you're using a framework this will be less direct: you'll end up finding out which functions your code spends a lot of time in, and then have to figure out how to reduce that via the framework's best practices or other optimizations. There are a few good blog posts on improving performance; I found this one very helpful.