I need help structuring a map system like fallensword.com. Basically, there are different maps you can move around on: you choose a map to start on, then you can move around it. On a map there are, for example, caves that you can enter, which take you to another map, and so on.
How do I structure the SQL for that? I guess I need x/y columns, but what else? There are sometimes NPCs that you attack, sometimes NPCs that give you a quest, and sometimes a house or cave that you can enter or get a quest from.
Any ideas?
A k-d tree or a quadtree can help you a lot with this problem. A quadtree reduces the 2D complexity to 1D; it's used in many mapping applications like Bing and Google Maps. A good starting point is Nick's spatial index blog on quadtrees and Hilbert curves. You can use MySQL with a spatial index, but if you want to write a game, this isn't the right place to ask; there is gamedev.stackexchange.com.
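As a concrete starting point for the schema side of the question, here is a minimal sketch using SQLite (the table and column names are my own invention, not from any existing game): each map is a row in `map`, and everything placed on a map (hostile NPCs, quest NPCs, portals into caves/houses) is a row in `entity` with an x/y position and a `kind` column; portal entities carry the id of the map they lead to.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE map (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE entity (
    id         INTEGER PRIMARY KEY,
    map_id     INTEGER NOT NULL REFERENCES map(id),
    x          INTEGER NOT NULL,
    y          INTEGER NOT NULL,
    kind       TEXT NOT NULL,    -- 'npc_hostile', 'npc_quest', 'portal', ...
    target_map INTEGER,          -- for 'portal': the map the cave/house leads to
    UNIQUE (map_id, x, y)
);
CREATE INDEX idx_entity_pos ON entity (map_id, x, y);
""")

conn.execute("INSERT INTO map VALUES (1, 'Overworld'), (2, 'Cave')")
conn.execute("INSERT INTO entity (map_id, x, y, kind, target_map) "
             "VALUES (1, 3, 5, 'portal', 2)")

# "What is at tile (3, 5) on map 1?" -- the core lookup when a player moves
row = conn.execute(
    "SELECT kind, target_map FROM entity WHERE map_id = 1 AND x = 3 AND y = 5"
).fetchone()
print(row)  # ('portal', 2)
```

The composite index on `(map_id, x, y)` keeps single-tile lookups fast; a quadtree or space-filling-curve key only becomes worth adding if you need fast range queries over large regions.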
I've been working on a GPU-based boid simulation recently. I've spent most of my time trying to get the underlying sorting system working, in an attempt to avoid having each boid check every other boid—I'm ideally looking for this algorithm to end up being scalable into the hundreds of thousands of individual particles. However, I'm a bit confused as to how I should try to organize my boids into some kind of spatial tree structure when I don't have access to pointers (I'm working in HLSL).
I elected to base my method on this incredibly helpful article. I already have a relatively quick radix sort functioning properly, but what I'm confused about is how to actually put the sorted z-order Morton keys to use. I naïvely assumed that, once sorted, all sequential boids would be sorted by distance, but this assumption breaks down whenever the boids are near the edge of two "sections" of the z-order curve, which causes some bizarre behavior near those boundaries.
It seems clear that I also need to construct some kind of BVH (Bounding Volume Hierarchy) data structure so I can predictably access boids within a set distance, instead of just iterating over nearby sorted boids, but I'm stuck on how to achieve this in a language like HLSL that doesn't include pointers. I've read this article a few times, but I'm not sure if it's well-suited to what I'm trying to do. Should I create nodes that store buffer indices instead of pointers? Or is there a simpler way that I could go about this?
I'd deeply appreciate any advice on how to move forward, thank you!
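To illustrate the index-based idea the question ends on: below is a small CPU-side Python sketch (not HLSL, with a simplified split strategy, and an ordinary sort standing in for a real Morton sort) of a pointer-free BVH. Nodes live in one flat array and reference their children, and the sorted boid buffer, by integer index; the same layout maps directly onto HLSL StructuredBuffers.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Node:
    left: int   # index of left child in the flat node array, -1 for a leaf
    right: int  # index of right child, -1 for a leaf
    first: int  # first covered index into the sorted point buffer
    count: int  # number of covered points
    lo: np.ndarray  # AABB min corner
    hi: np.ndarray  # AABB max corner

def build(points, first, count, nodes, leaf_size=4):
    """Build a BVH over an already-sorted point range; returns the node's index."""
    lo = points[first:first + count].min(axis=0)
    hi = points[first:first + count].max(axis=0)
    idx = len(nodes)
    nodes.append(Node(-1, -1, first, count, lo, hi))
    if count > leaf_size:
        half = count // 2  # split the sorted range down the middle
        nodes[idx].left = build(points, first, half, nodes, leaf_size)
        nodes[idx].right = build(points, first + half, count - half, nodes, leaf_size)
    return idx

def query(nodes, points, root, center, radius, out):
    """Collect indices of points within radius using an explicit stack, no recursion."""
    stack = [root]
    while stack:
        n = nodes[stack.pop()]
        gap = np.maximum(np.maximum(n.lo - center, center - n.hi), 0.0)
        if (gap * gap).sum() > radius * radius:
            continue  # node's box lies entirely outside the query sphere
        if n.left == -1:  # leaf: test the covered points directly
            for i in range(n.first, n.first + n.count):
                if ((points[i] - center) ** 2).sum() <= radius * radius:
                    out.append(i)
        else:
            stack.append(n.left)
            stack.append(n.right)

rng = np.random.default_rng(0)
pts = rng.random((256, 2)).astype(np.float32)
pts = pts[np.lexsort((pts[:, 0], pts[:, 1]))]  # stand-in for a real Morton sort

nodes = []
root = build(pts, 0, len(pts), nodes)
center, r = np.array([0.5, 0.5]), 0.1
found = []
query(nodes, pts, root, center, r, found)
brute = [i for i in range(len(pts)) if ((pts[i] - center) ** 2).sum() <= r * r]
print(sorted(found) == sorted(brute))  # True
```

Unlike the naive "iterate over nearby sorted boids" approach, the AABB test handles the z-order boundary problem: two boids far apart in key order but close in space still end up in nodes whose boxes overlap the query sphere. On the GPU you would build the node array in a compute pass (e.g. the Karras-style construction from the linked article) rather than recursively.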
I have about 2000 sets of geographical coordinates (lat, long). Given one coordinate, I want to find the closest one from that set. My approach was to measure the distance to every point, but hundreds of requests per second can be a little rough on the server doing all that math.
What is the best-optimized solution for this?
The problem you’re describing here is called a nearest neighbor search and there are lots of good data structures that support fast nearest neighbor lookups. The k-d tree is a particularly simple and fast choice and there are many good libraries out there that you can use. You can also look into alternatives like vantage-point trees or quadtrees if you’d like.
Hope this helps!
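For concreteness, here is a sketch using SciPy's k-d tree (the coordinates below are random stand-ins for your 2000 points). Converting lat/long to 3D Cartesian points on the unit sphere makes Euclidean nearest-neighbor agree with great-circle nearest-neighbor:

```python
import numpy as np
from scipy.spatial import cKDTree

def to_cartesian(lat, lon):
    """Project (lat, lon) in degrees onto the unit sphere, so Euclidean
    distance is monotone in great-circle distance."""
    lat, lon = np.radians(lat), np.radians(lon)
    return np.column_stack((np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)))

# Stand-in data: ~2000 random coordinates
rng = np.random.default_rng(42)
lats = rng.uniform(-90, 90, 2000)
lons = rng.uniform(-180, 180, 2000)

tree = cKDTree(to_cartesian(lats, lons))  # build once, reuse for every request

# One request: nearest stored coordinate to (48.85, 2.35)
_, i = tree.query(to_cartesian([48.85], [2.35]))
print(lats[i[0]], lons[i[0]])
```

Building the tree is a one-time O(n log n) cost; each lookup is then O(log n) instead of scanning all 2000 points, which comfortably handles hundreds of requests per second.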
Does a 3D engine need to analyse every single object on the map to decide whether it gets rendered? My understanding is that for a line from the center of projection through a pixel in the view plane, the engine finds the closest surface that intersects it, but wouldn't that mean that for each pixel the engine needs to analyse all objects in the map? Is there a way to limit the objects analysed?
Thanks for your help.
Such procedures are called frustum culling.
You can find more information about it here:
https://en.wikipedia.org/wiki/Viewing_frustum (wiki)
http://www.lighthouse3d.com/tutorials/view-frustum-culling/
http://www.cse.chalmers.se/~uffe/vfc.pdf (better but hard to read)
IMHO, this last link is similar to what Nico Schertler mentioned in a comment.
Beware: what you're looking for is not the same as "occlusion culling" (see the related question "Most efficient algorithm for mesh-level, optimal occlusion culling?"), which is a different optimization for when an object is totally hidden behind another one.
Note that most game engines render by object (a pack of many triangles, submitted via draw calls, roughly speaking), not by tracing each pixel (ray tracing) as you might be imagining.
Ray tracing is too expensive for most real-time applications.
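To make the per-object test concrete, here is a minimal sketch. A frustum is represented as six inward-facing planes, and an object is culled only when its bounding sphere lies entirely outside at least one plane. (The plane set below is a hypothetical axis-aligned box rather than a real perspective frustum, just to keep the example self-contained.)

```python
import numpy as np

def sphere_in_frustum(planes, center, radius):
    """Conservative visibility test: each plane is (inward normal n, offset d),
    with a point p inside the half-space when dot(n, p) + d >= 0."""
    for n, d in planes:
        if np.dot(n, center) + d < -radius:
            return False  # sphere is fully outside this plane -> cull
    return True  # inside or intersecting -> must be drawn

# Hypothetical "frustum": the axis-aligned unit cube [-1, 1]^3
planes = [(np.array([ 1.0, 0, 0]), 1), (np.array([-1.0, 0, 0]), 1),
          (np.array([0,  1.0, 0]), 1), (np.array([0, -1.0, 0]), 1),
          (np.array([0, 0,  1.0]), 1), (np.array([0, 0, -1.0]), 1)]

print(sphere_in_frustum(planes, np.array([0.0, 0.0, 0.0]), 0.5))  # True
print(sphere_in_frustum(planes, np.array([5.0, 0.0, 0.0]), 0.5))  # False
```

Engines run this cheap test once per object (or per node of a scene hierarchy such as an octree, culling whole subtrees at once), so visibility costs scale with object count, not pixel count.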
Background
I'm working on a project where a user gets scanned by a Kinect (v2). The result will be a generated 3D model which is suitable for use in games.
The scanning aspect is going quite well, and I've generated some good user models.
Example: (image of an early scan result omitted)
Note: This is just an early test model. It still needs to be cleaned up, and the stance needs to change to properly read skeletal data.
Problem
The problem I'm currently facing is that I'm unsure how to place skeletal data inside the generated 3D model. I can't find a program that will let me insert the skeleton into the 3D model programmatically. I'd like to do this either via a program I can drive from code, or by adjusting the 3D model file in such a way that the skeletal data gets included within the file.
What have I tried
I've been looking around for similar questions on Google and StackOverflow, but they usually refer to either motion capture or skeletal animation. I know Maya has the option to insert skeletons in 3D models, but as far as I could find that is always done by hand. Maybe there is a more technical term for the problem I'm trying to solve, but I don't know it.
I do have a train of thought on how to achieve the skeleton insertion. I imagine it to go like this:
1. Scan the user and generate a 3D model with the Kinect.
   1.2. Clean the user model, getting rid of any deformations or unnecessary information, and close holes left by the clean-up process.
2. Scan the user's skeletal data using the Kinect.
   2.2. Extract the skeleton data.
   2.3. Get the joint locations and store them as xyz coordinates in 3D space; store bone lengths and directions.
3. Read the 3D skeleton data into a program that can create skeletons.
4. Save the new model with the inserted skeleton.
Question
Can anyone recommend (I know, this is perhaps "opinion based") a program to read the skeletal data and insert it into a 3D model? Is it possible to use Maya for this purpose?
Thanks in advance.
Note: I opted to post the question here and not on Graphics Design Stack Exchange (or other Stack Exchange sites) because I feel it's more coding related, and perhaps more useful for people who will search here in the future. Apologies if it's posted on the wrong site.
A tricky part of your question is what you mean by "inserting the skeleton". Typically bone data is very separate from your geometry, and stored in different places in your scene graph (with the bone data being hierarchical in nature).
There are file formats you can export to where you might establish some association between your geometry and skeleton, but that's very format-specific as to how you associate the two together (ex: FBX vs. Collada).
Probably the closest thing to "inserting" or, more appropriately, "attaching" a skeleton to a mesh is skinning. There you compute weight assignments, basically determining how much each bone influences a given vertex in your mesh.
This is a tough part to get right (both programmatically and artistically). For the highest quality needs (commercial games, films, etc.), it's often a semi-automatic solution at best, with artists laboring over tweaking the resulting weight assignments and/or skeleton.
There are algorithms that get pretty sophisticated in determining these weight assignments, ranging from simple heuristics like assigning weights based on nearest-line distance (very crude, and will often fall apart near tricky areas like the pelvis or shoulder) to ones that actually treat the mesh as a solid volume (using voxel or tetrahedral representations) to assign weights. Example: http://blog.wolfire.com/2009/11/volumetric-heat-diffusion-skinning/
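A minimal sketch of that crude nearest-distance heuristic (the bone positions and the falloff exponent below are made up for illustration): each bone is a line segment, and a vertex's weights fall off with its distance to each segment, normalized to sum to 1.

```python
import numpy as np

def point_segment_dist(p, a, b):
    """Distance from point p to the bone segment from a to b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def crude_weights(vertex, bones, falloff=2.0):
    """Weight each bone ~ 1/dist^falloff, then normalize so weights sum to 1."""
    d = np.array([point_segment_dist(vertex, a, b) for a, b in bones])
    w = 1.0 / np.maximum(d, 1e-6) ** falloff
    return w / w.sum()

# Two hypothetical bones: an "upper arm" and a "forearm"
bones = [(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])),
         (np.array([1.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0]))]

w = crude_weights(np.array([0.2, 0.1, 0.0]), bones)
print(w)  # vertex near the upper arm: almost all weight goes to bone 0
```

This is exactly the approach that breaks down near the pelvis or shoulder, where a vertex can be geometrically close to a bone that shouldn't influence it at all; the volumetric methods linked above exist to fix that.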
However, you might be able to get decent results using an algorithm like delta mush which allows you to get a bit sloppy with weight assignments but still get reasonably smooth deformations.
Now if you want to do this externally, pretty much any 3D animation software will do, including free ones like Blender. However, skinning and character animation in general is something that tends to take quite a bit of artistic skill and a lot of patience, so it's worth noting that it's not quite as easy as it might seem to make characters leap and dance and crouch and run and still look good even when you have a skeleton in advance. That weight association from skeleton to geometry is the toughest part. It's often the result of many hours of artists laboring over the deformations to get them to look right in a wide range of poses.
I have a 3D set of points. These points will undergo a series of tiny perturbations (all points will be perturbed at once). Example: if I have 100 points in a box, each point may be moved up to, but no more than 0.2% of the box width in each iteration of my program.
After each perturbation operation, I want to know the new distance to each point's nearest neighbor.
This needs to use a very fast data structure; I'm optimizing this for speed. It's a somewhat tricky problem because I'm modifying all points at once. Approximate NN algorithms are not suitable for this problem.
I feel like the answer is somewhere between k-d trees and Voronoi tessellations, but I am not an expert on data structures, so I am baffled about what to do. I'm sure this is a very hard problem that would require a lot of research to reach a truly optimal solution, but even something fairly optimal will work for me.
Thanks
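One baseline worth measuring before reaching for anything exotic: simply rebuilding a k-d tree from scratch every iteration. SciPy's implementation builds in O(n log n) and answers exact (not approximate) nearest-neighbor queries; this sketch uses random stand-in data and a perturbation bound matching the 0.2% figure from the question.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
pts = rng.random((100, 3))  # 100 points in a unit box

for step in range(5):
    pts += rng.uniform(-0.002, 0.002, pts.shape)  # perturb all points at once
    tree = cKDTree(pts)          # full rebuild each iteration
    d, _ = tree.query(pts, k=2)  # k=2: each point's own distance is 0
    nearest = d[:, 1]            # exact distance to each point's nearest neighbor

print(nearest.min(), nearest.max())
```

If rebuilding ever becomes the bottleneck, note that when every point moves at most eps per step, each nearest-neighbor distance changes by at most 2*eps between iterations, so the previous iteration's answers give tight radius bounds you can use to prune the next search.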
You can try a quadkey or a space-filling ("monster") curve. It reduces the dimension and fills the plane. Microsoft's Bing Maps quadkey documentation is a good place to start learning.
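For reference, a Bing-Maps-style quadkey can be computed in a few lines (the tile coordinates below are arbitrary examples): at each zoom level, one bit of x and one bit of y combine into a single base-4 digit, so nearby tiles share key prefixes.

```python
def quadkey(x: int, y: int, level: int) -> str:
    """Bing-Maps-style quadkey: one base-4 digit per zoom level,
    built from one bit of x (low bit) and one bit of y (high bit)."""
    digits = []
    for i in range(level, 0, -1):
        digit = ((x >> (i - 1)) & 1) | (((y >> (i - 1)) & 1) << 1)
        digits.append(str(digit))
    return "".join(digits)

print(quadkey(3, 5, 3))  # "213"
print(quadkey(0, 0, 1))  # "0"
```

Because a tile's quadkey is a prefix of all its children's quadkeys, a plain string index (e.g. a B-tree on a VARCHAR column) gives you hierarchical spatial lookups with ordinary prefix queries.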