I am working on a game that was originally developed by another developer. The problem I'm facing is that when the player (with the camera) starts running down the road, the buildings are not shown in their regular shape, and as we move forward (closer to the buildings) they regain their original shapes. Sometimes the buildings on either side of the road are not visible to the camera at all (just empty space), and when we move closer a building suddenly pops into view. I think it may be some Unity3D setting problem (rendering, camera, or quality). Maybe it was done to increase performance on mobile devices.
Does anybody know what the issue might be or how to resolve it?
Any help will be appreciated. Thanks in advance.
This sounds like it's a problem with the available LODs for each building's 3D model.
Basically, 3D games work by having two or three different versions of each 3D model, with varying *L*evels *O*f *D*etail. For example, if you have a house model that uses 500 polygons, you'll probably have another two versions (e.g. 250 polys and 100 polys), which are used depending on the distance between the player and the object. The farther away the player is, the simpler the version used will be.
The issue occurs when developers use automatically generated LOD models, which can look distorted or not appear at all. Unity probably auto-generates them, but I'm unsure where you'll find the settings for this in Unity. However, I've seen 3D models on the Unity Asset Store offered with different LODs, so Unity probably gives you the option to set your own. The simplest solution would be to increase the distance at which the LODs change, while the more involved solution would be to build custom versions of the 3D models, with a lower poly count, for the larger distances.
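For example (just a sketch, assuming each building prefab has a standard Unity LODGroup on its root and this made-up script attached to it), you could scale down the screen-relative transition heights so the detailed versions stay active from farther away:

```csharp
using UnityEngine;

// Minimal sketch: scales down every LOD's screen-relative transition height so
// the more detailed versions remain visible from farther away. Assumes the
// building prefab has a LODGroup component on its root object.
public class RelaxLodDistances : MonoBehaviour
{
    [Range(0.1f, 1f)]
    public float factor = 0.5f; // 0.5 = lower LODs kick in at half the original screen height

    void Start()
    {
        LODGroup lodGroup = GetComponent<LODGroup>();
        LOD[] lods = lodGroup.GetLODs();
        for (int i = 0; i < lods.Length; i++)
        {
            // screenRelativeTransitionHeight is the on-screen height (0..1) below
            // which the next, simpler LOD (or culling) takes over.
            lods[i].screenRelativeTransitionHeight *= factor;
        }
        lodGroup.SetLODs(lods);
    }
}
```

The same thresholds can also be dragged directly on the LODGroup component in the Inspector, which is the no-code way to do it.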
I have resolved the problem. It was due to the LOD (Level of Detail) setup used for the objects (buildings) in Unity3D to improve performance on slower devices. LOD provides several levels of detail for an object, which you can adjust according to your needs. In my specific case, the buildings appeared suddenly because LOD1 had a different (wrong) position: for LOD1 the building was in the wrong place, while for LOD0 it was in the right place. So when my camera looked from a distance it saw LOD1, which was in the wrong place, and therefore it saw empty space with no building at the expected position. But when the camera came closer it saw LOD0, in which the building is in the right position, so the buildings seemed to pop in or suddenly become visible.
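For anyone hitting the same symptom, a quick way to spot a misplaced LOD mesh is to compare where each LOD level's renderers sit relative to LOD0. The checker below is only a hypothetical diagnostic sketch (the component name and tolerance are made up), assuming the buildings use standard LODGroup components:

```csharp
using UnityEngine;

// Hypothetical diagnostic: warns when a lower LOD's renderers sit far away from
// LOD0, which produces exactly the "building pops in out of nowhere" symptom
// described above.
public class LodPlacementChecker : MonoBehaviour
{
    public float maxOffset = 1.0f; // metres of drift between LOD0 and LODn to tolerate

    void Start()
    {
        foreach (LODGroup group in FindObjectsOfType<LODGroup>())
        {
            LOD[] lods = group.GetLODs();
            if (lods.Length == 0 || lods[0].renderers.Length == 0 || lods[0].renderers[0] == null)
                continue;

            Vector3 lod0Center = lods[0].renderers[0].bounds.center;
            for (int i = 1; i < lods.Length; i++)
            {
                foreach (Renderer r in lods[i].renderers)
                {
                    if (r == null) continue;
                    float offset = Vector3.Distance(r.bounds.center, lod0Center);
                    if (offset > maxOffset)
                        Debug.LogWarning($"{group.name}: LOD{i} renderer '{r.name}' is {offset:F1} m away from LOD0", group);
                }
            }
        }
    }
}
```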
While building a project in Unreal Engine 4.26 and trying to increase the performance of the video game, I ended up using instances of static meshes where possible. This generated an issue:
My instanced meshes end up scaled to 0.9995 instead of 1 (verified experimentally).
Looking for an answer, I found a workaround suggested by the Unreal Engine devs themselves: they suggested rotating the mesh by adding a 360-degree rotation, i.e. using higher values for the same rotation, as you can read here. This didn't work for me and, as you can see, the difference between instanced and manually placed meshes is evident.
Below is how I've instantiated the meshes:
Increasing the rotation on the z-axis to 450 degrees didn't solve anything, even though doing it with the meshes provided by the devs here actually works.
I'm sure that rotations are the key to solving the problem, since the problem is not systematic. I haven't figured out the logic behind it, but by building a square out of instanced meshes I end up with some walls that have gaps and some at perfect scale. I increased the size of them all so as not to leave gaps, but I'm afraid that this solution will bring more issues in the future when working with lighting and production materials. It seems that the bug isn't fixed yet in UE4; is there another workaround I could use without any risk of overlapping meshes?
This seems to be a bug that hasn't been fixed yet. To populate the map with objects where a high level of accuracy is necessary, the suggestion is to place them manually. If you place them manually, UE4 will still make them instances; it's not necessary to have a blueprint that populates rooms programmatically.
Increasing the scale as I did in the first place doesn't seem to cause problems, but I ended up placing the meshes manually, since that solution is more elegant even if it requires more effort.
I am currently learning how to sculpt in Blender, working on my own projects after completing BlenderGuru's Beginner & Intermediate classes and some of Grant Abbitt's videos, with pleasing results. I am trying to sculpt a plasma pistol with a skull on it, which can be seen in the reference photo that I have provided.
However, when I sculpt, I get these really odd linear artefacts (see picture below, circled in black). I added a Subsurf Modifier to the primitive UV Sphere, with the Viewport and Render values set to 4, so it is a fairly fine mesh. However, these artefacts still occur.
I assume it is due to the stretching of the polygons when I grab the sphere with the Snake Hook tool and deform it to encompass the frontal part of the skull.
EDIT: Whilst writing this comment I went back, and switched on Dynamic Topology with Relative Detail selected.
It appears that I am no longer getting the issues that I was getting last night with the linear artefacts.
Can anyone confirm that these problems are a result of using the wrong Dynamic Topology settings for the Snake Hook tool (I was using Constant Detail instead of Relative Detail), or is this being caused by another issue?
Also, any advice on avoiding common pitfalls when choosing the settings in sculpting would be most appreciated.
I will continue to ask this question in case anyone has a similar problem and it can be resolved by reading this.
Sculpt, showing lineations
Experimenting with Dynamic Topology
In Object Mode, does the object have uniform X, Y, & Z scaling? If not, you can apply the scale from the object menu.
Object ‣ Apply ‣ Scale / Rotation & Scale
We have to work with very large models and we're hoping to use the first person camera to walk through them, and eventually do this in VR. The progressive rendering does wonders for improving perceived responsiveness, but it can be disorienting to have so many items around you disappear as you move.
Is there any way to turn progressive rendering off but only for objects closest to the camera? Maybe a number of objects up to a maximum number, or objects within a radius from the camera. Everything further away can load in later and flicker during motion, but it would be nice to keep the nearby objects rendered, especially structural objects like stairs. I've often walked towards stairs just to have them disappear in front of me, forcing me to fly to a platform with E and Q instead of walking.
So far I've only found a way to toggle progressive rendering for the entire model on or off with viewer.setProgressiveRendering(bool) but I haven't found a way to customize the rendering behavior.
Per our Engineering team's recommendation, can you try setting the rendering FPS targets with
viewer.impl.setFPSTargets(1, 5, 15) //min, target, max
In fact, we've been getting similar reports from other developers requesting the capability to fine-tune rendering with large models, so our Engineering team is considering options to extend the existing functions and even build them into extensions.
I was wondering if anyone else has tried developing a HoloLens application using Vuforia, specifically using Vuforia's capability to recognize and track objects.
I tried it and it seems to be working. I was just not sure about the result I got from the Debug.Log that prints the name of the tracked object.
I tried putting two trackable targets millimeters away from each other and pointed my gaze toward the space between the objects (hoping it would pick up both).
Somehow the output window gave me this.
It seems like I was able to track both targets but I want to know if I tracked two different objects at the same time.
I have this doubt because at some point, even though the HoloLens was in the same position as before, the output changed and started printing only one of the two objects (the one on the right).
I think this may be a problem caused by the HoloLens' small camera window or by its limited hardware.
In the VuforiaConfiguration you should be able to set the maximum number of objects your app can track simultaneously. You need to make sure it is set to more than 1.
In the image above you can see how to set the maximum number of tracked images in Unity.
If you are not using Unity, you'll have to access the VuforiaConfiguration another way and set the maximum number of simultaneously tracked objects there.
From code, you can do it in C# like this:
VuforiaConfiguration.Instance.Vuforia.MaxSimultaneousImageTargets = 2;
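If you also want to confirm at runtime that both targets are tracked in the same frame (the doubt raised above), a sketch along these lines may help. It assumes the classic Vuforia Unity SDK where the StateManager API is available (names may differ in newer Vuforia Engine releases), and the component name is made up:

```csharp
using UnityEngine;
using Vuforia;

// Hypothetical helper: logs every target Vuforia reports as actively tracked each
// frame, so you can see whether both objects show up at the same time.
public class ActiveTargetLogger : MonoBehaviour
{
    void Update()
    {
        var stateManager = TrackerManager.Instance.GetStateManager();
        foreach (TrackableBehaviour tb in stateManager.GetActiveTrackableBehaviours())
        {
            Debug.Log($"Tracking: {tb.TrackableName} ({tb.CurrentStatus})");
        }
    }
}
```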
Background
I'm working on a project where a user gets scanned by a Kinect (v2). The result will be a generated 3D model which is suitable for use in games.
The scanning aspect is going quite well, and I've generated some good user models.
Example:
Note: This is just an early test model. It still needs to be cleaned up, and the stance needs to change to properly read skeletal data.
Problem
The problem I'm currently facing is that I'm unsure how to place skeletal data inside the generated 3D model. I can't seem to find a program that will let me insert the skeleton in the 3D model programmatically. I'd like to do this either via a program that I can control programmatically, or adjust the 3D model file in such a way that skeletal data gets included within the file.
What have I tried
I've been looking around for similar questions on Google and StackOverflow, but they usually refer to either motion capture or skeletal animation. I know Maya has the option to insert skeletons in 3D models, but as far as I could find that is always done by hand. Maybe there is a more technical term for the problem I'm trying to solve, but I don't know it.
I do have a train of thought on how to achieve the skeleton insertion. I imagine it to go like this:
1. Scan the user and generate a 3D model with the Kinect;
   1.2. Clean up the user model, getting rid of any deformations or unnecessary information. Close holes that are left in the clean-up process.
2. Scan user skeletal data using the Kinect.
   2.2. Extract the skeleton data.
   2.3. Get joint locations and store them as xyz-coordinates in 3D space. Store bone lengths and directions (see the sketch after this list).
3. Read the 3D skeleton data in a program that can create skeletons.
4. Save the new model with the inserted skeleton.
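As a rough illustration of steps 2.2–2.3 (this is only a sketch against the Kinect for Windows SDK 2.0; `SkeletonExporter` and the plain-text output layout are made up for illustration), the joint positions of a tracked Body could be dumped like this:

```csharp
using System.IO;
using Microsoft.Kinect; // Kinect for Windows SDK 2.0

// Rough sketch: given a tracked Body obtained from a BodyFrame, write each joint's
// camera-space position (in metres) and its tracking state to a plain text file
// that a later rigging step or custom importer could read. The file layout here
// is invented purely for illustration, not a standard format.
static class SkeletonExporter
{
    public static void Export(Body body, string path)
    {
        using (var writer = new StreamWriter(path))
        {
            foreach (var pair in body.Joints)
            {
                CameraSpacePoint p = pair.Value.Position;
                writer.WriteLine($"{pair.Key} {p.X:F4} {p.Y:F4} {p.Z:F4} {pair.Value.TrackingState}");
            }
        }
    }
}
```

Bone lengths and directions can then be derived from pairs of connected joints (e.g. ShoulderLeft to ElbowLeft).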
Question
Can anyone recommend (I know, this is perhaps "opinion based") a program to read the skeletal data and insert it into a 3D model? Is it possible to utilize Maya for this purpose?
Thanks in advance.
Note: I opted to post the question here and not on Graphics Design Stack Exchange (or other Stack Exchange sites) because I feel it's more coding related, and perhaps more useful for people who will search here in the future. Apologies if it's posted on the wrong site.
A tricky part of your question is what you mean by "inserting the skeleton". Typically bone data is very separate from your geometry, and stored in different places in your scene graph (with the bone data being hierarchical in nature).
There are file formats you can export to where you might establish some association between your geometry and skeleton, but that's very format-specific as to how you associate the two together (ex: FBX vs. Collada).
Probably the closest thing to "inserting" or, more appropriately, "attaching" a skeleton to a mesh is skinning. There you compute weight assignments, basically determining how much each bone influences a given vertex in your mesh.
This is a tough part to get right (both programmatically and artistically). Depending on your quality needs, it is often a semi-automatic solution at best; for the highest quality needs (commercial games, films, etc.), artists labor over tweaking the resulting weight assignments and/or skeleton.
The algorithms for determining these weight assignments range from simple heuristics, like assigning weights based on the nearest line distance (very crude, and often falls apart near tricky areas like the pelvis or shoulder), to ones that actually treat the mesh as a solid volume (using voxel or tetrahedral representations). Example: http://blog.wolfire.com/2009/11/volumetric-heat-diffusion-skinning/
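As a toy illustration of that "nearest line distance" idea (nothing you would ship; the `Bone` struct and helper names are made up, and it will misbehave around the pelvis and shoulders exactly as described), per-vertex weights could be assigned roughly like this:

```csharp
using System.Collections.Generic;
using System.Numerics; // Vector3

// Toy version of the crude nearest-line-distance skinning heuristic: each vertex
// is weighted by the inverse of its distance to every bone segment, then the
// weights are normalised so they sum to 1.
struct Bone { public Vector3 Head, Tail; }

static class NaiveSkinning
{
    // Distance from point p to the segment a-b.
    static float DistanceToSegment(Vector3 p, Vector3 a, Vector3 b)
    {
        Vector3 ab = b - a;
        float t = Vector3.Dot(p - a, ab) / ab.LengthSquared();
        t = t < 0f ? 0f : (t > 1f ? 1f : t); // clamp to the segment
        return (p - (a + t * ab)).Length();
    }

    public static float[] WeightsFor(Vector3 vertex, IList<Bone> bones)
    {
        var weights = new float[bones.Count];
        float total = 0f;
        for (int i = 0; i < bones.Count; i++)
        {
            float d = DistanceToSegment(vertex, bones[i].Head, bones[i].Tail);
            weights[i] = 1f / (d + 1e-4f); // closer bones influence the vertex more
            total += weights[i];
        }
        for (int i = 0; i < bones.Count; i++)
            weights[i] /= total;           // normalise so the weights sum to 1
        return weights;
    }
}
```

Volumetric approaches like the heat-diffusion method linked above replace that inverse-distance guess with a measure that respects the mesh's interior, which is why they behave much better around joints.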
However, you might be able to get decent results using an algorithm like delta mush which allows you to get a bit sloppy with weight assignments but still get reasonably smooth deformations.
Now if you want to do this externally, pretty much any 3D animation software will do, including free ones like Blender. However, skinning and character animation in general is something that tends to take quite a bit of artistic skill and a lot of patience, so it's worth noting that it's not quite as easy as it might seem to make characters leap and dance and crouch and run and still look good even when you have a skeleton in advance. That weight association from skeleton to geometry is the toughest part. It's often the result of many hours of artists laboring over the deformations to get them to look right in a wide range of poses.