How do Razor Walls in a Grid-based 1st person "Blobber" RPG work?

I am struggling to find a way to even ask this question. It might be clearer if you look at the Glossary I am getting my terms from: http://crpgaddict.blogspot.com/p/glossary.html
This is an example of "Razor Walls", this is what I am trying to do:
This is an example of "Worm Tunnel Walls", this is the other approach:
In "Worm Tunnel Walls" I would use a 2D array. Each index in the array would either be a wall or a floor, pretty easy.
I am having a harder time figuring out how to conceptualize Razor Walls. I have tried a few different approaches, such as making the array a custom type so that each cell tracks whether each of its edges has a wall. But adjacent cells should share a wall, which means every wall has to be stored in two cells, and that seems inefficient.
Then I tried setting up a 2D array where all the even indices are walls and all the odd indices are floors, but I am having to write a LOT of code to handle the various edge cases.
I can make either of these approaches work, of course, but it feels clunky.
Pool of Radiance did this in 1988 on Amiga! It has been a standard way to make these games for decades on machines with serious memory and processing limitations. It seems like there should be a more elegant approach that I am missing.

Ok, I have another idea.
I am thinking of using three 2D arrays:
Array one stores floor information.
Array two stores up-down wall information.
Array three stores left-right wall information.
So my map is 8x8. The first room is floor[0,0], with up-down walls at UDWall[0,0] and UDWall[1,0], and left-right walls at LRWall[0,0] and LRWall[0,1].
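Roughly, the layout I'm imagining looks like this (an untested Python-style sketch; the indexing matches the example above and the names are just illustrative):

    # 8x8 map: one array for floors, plus separate arrays for the two wall orientations.
    WIDTH, HEIGHT = 8, 8

    floor = [[True] * HEIGHT for _ in range(WIDTH)]           # floor[x][y]: is this cell walkable?
    ud_wall = [[False] * HEIGHT for _ in range(WIDTH + 1)]    # ud_wall[x][y]: wall on the west edge of cell (x, y)
    lr_wall = [[False] * (HEIGHT + 1) for _ in range(WIDTH)]  # lr_wall[x][y]: wall on the north edge of cell (x, y)

    # Enclose the cell at (0, 0) with four razor walls; each wall is stored exactly
    # once and is automatically shared with the neighbouring cell.
    ud_wall[0][0] = True   # west edge of (0, 0)
    ud_wall[1][0] = True   # east edge of (0, 0), which is also the west edge of (1, 0)
    lr_wall[0][0] = True   # north edge of (0, 0)
    lr_wall[0][1] = True   # south edge of (0, 0), which is also the north edge of (0, 1)

    def can_move_east(x, y):
        """Moving east is blocked only by the single shared edge wall between (x, y) and (x+1, y)."""
        return x + 1 < WIDTH and not ud_wall[x + 1][y]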
I am on mobile right now, but I will try it later and report back.

Related

How to best create an Octree/BVH in GPU memory without pointers?

I've been working on a GPU-based boid simulation recently. I've spent most of my time trying to get the underlying sorting system working, in an attempt to avoid having each boid check every other boid—I'm ideally looking for this algorithm to end up being scalable into the hundreds of thousands of individual particles. However, I'm a bit confused as to how I should try to organize my boids into some kind of spatial tree structure when I don't have access to pointers (I'm working in HLSL).
I elected to base my method on this incredibly helpful article. I already have a relatively quick radix sort functioning properly, but what I'm confused about is how I can actually put the sorted Z-order Morton keys to use. I naïvely assumed that, once sorted, boids adjacent in the sorted order would also be close in space, but this assumption breaks down whenever the boids are near the boundary between two "sections" of the Z-order curve, which causes some bizarre behavior that I've pictured below:
It seems clear that I also need to construct some kind of BVH (Bounding Volume Hierarchy) data structure so I can predictably access boids within a set distance, instead of just iterating over nearby sorted boids, but I'm stuck on how to achieve this in a language like HLSL that doesn't include pointers. I've read this article a few times, but I'm not sure if it's well-suited to what I'm trying to do. Should I create nodes that store buffer indices instead of pointers? Or is there a simpler way that I could go about this?
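For concreteness, here is roughly what I mean by nodes storing buffer indices instead of pointers (illustrative Python-style pseudocode rather than actual HLSL; all the field names are made up):

    from dataclasses import dataclass

    @dataclass
    class BVHNode:
        # Axis-aligned bounding box of everything under this node.
        bbox_min: tuple[float, float, float]
        bbox_max: tuple[float, float, float]
        # Child slots hold indices into a flat node buffer instead of pointers; -1 = no child.
        left: int = -1
        right: int = -1
        # Leaves reference a contiguous range in the Morton-sorted boid buffer.
        first_boid: int = 0
        boid_count: int = 0

    nodes: list[BVHNode] = []   # in HLSL this would be something like a StructuredBuffer<BVHNode>

    def query(root, overlaps, visit):
        """Iterative traversal with an explicit stack of indices (no recursion, no pointers)."""
        stack = [root]
        while stack:
            node = nodes[stack.pop()]
            if not overlaps(node):
                continue
            if node.left == -1 and node.right == -1:
                visit(node.first_boid, node.boid_count)   # leaf: process its range of boids
            else:
                if node.left != -1:
                    stack.append(node.left)
                if node.right != -1:
                    stack.append(node.right)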
I'd deeply appreciate any advice on how to move forward, thank you!

Basics of face Sculpting in Blender

I mean the basics..
1) I have seen in online videos that people model a character (or anything else) from one object only: they extrude, add loop cuts, scale, and so on until the character is done. Why don't they design the different parts separately (hands separately, legs separately, body separately) and then join them together into one object?
2) What does the texturing department have to see so that they don't send the model back to the modelling department? I mean things like: the polygons over the model's face must be quads rather than triangles while modelling a character.
What basics should I know? Is there a checklist, or are there basics I should look at, before modelling a character?
Please correct me if I am wrong, and answer both my questions. Thanks.
It may be common, but it definitely isn't mandatory to have a model as one solid mesh. Some models will have the parts of the body underneath clothing removed to reduce the poly count. How the model is to be used will be a big factor in how you model it: for a single image it is easy to get away with multiple parts, while a character that will be animated in a cartoony style could be stretched and distorted in ways that show holes in a model made of multiple pieces. When working in a team, there may be rules in place determining whether a solid or multi-part model is considered acceptable.
An example of an animated model made from multiple parts is Sintel, the main character in the Sintel short animation.
There is nothing stopping you from making a library of separate body parts and joining them together when you make your model. Be aware that this can bring complications: if you model an arm with 12 verts and then make your hand with 15, you have to fiddle around to merge them together.
You will also find some extra freedom to work with multiple body parts during the sculpting phase as you are creating a high density mesh that is used as a template to model a clean mesh over. This step is called retopology.
It is more likely that the rigging department will send a model back for fixing than the texturing department. When adding a rig and deforming the mesh in different ways, any parts that deform badly will be revealed and need fixing.
[...] (hands separately, legs separately, body separately) and then join them
together into one object [...]
Some modelers I know do precisely this. They block in the design using broad primitive shapes, slice in some edge loops and add broad details, then merge everything together, sculpt it a bit further with high-res sculpting tools, and finally retopologize everything.
The main modelers I know who do this, however, model in a way that tries to adhere as closely as possible to the concept artist's illustration. They're not creating their own models from scratch but are instead given top/front/back/side illustrations of a character, for example, and are just trying to match them as closely as possible.
When you start modeling everything in small pieces, it helps to have that concept illustration, since otherwise you can get lost in the topology, and fusing organic meshes together can be difficult to do cleanly.
[...] why don't they design the different parts separately? [...]
Again, they sometimes do, but one of the appeals of creating organic meshes while keeping them seamless the entire time is that you can focus on how edge loops propagate across the entire model. It helps to know that the base of a finger is a hexagon, for example, when figuring out how to cleanly propagate and terminate the edge loops for a hand, and likewise to have a strategy for how the hand's edge loops propagate and terminate as it joins into the forearm.
It can be hard to get the topology to match up cleanly if you designed everything in small pieces and then had to figure out how to merge it all together. Polygonal modeling is very topology-oriented. It tends to require as much thinking about the wireframe and edge flows as it does the shape of the model, since it needs to be a certain way for everything to subdivide cleanly and smoothly and animate predictably with subdivision surfaces.
I used to work with developers who took one glance at the topology-dominated workflow of polygonal modeling and immediately wanted to seek alternatives, like voxel sculpting. With voxels you could potentially model everything in pieces and fuse it all together in a nice, smooth, organic way without thinking about topology whatsoever.
However, that loses sight of the key appeal of polygonal meshes. Their wire flow forms a control lattice with a very finite number of control points for the artist to animate and move around to predictably control the shape of the model. You immediately lose that with a voxel representation: while voxels free the artist from thinking about how the topology works and how the wireframe flows through the model, they also lose all the control benefits of having it. So if people use voxel sculpting, they often end up meticulously retopologizing everything at the end anyway to regain the coarse, predictable control they have with polygonal meshes.
[...] I mean things like: the polygons over the model's face must be
quads rather than triangles while modelling a character. [...]
This is all in the context of subdivision surfaces, the most popular of which are variants of Catmull-Clark. That scheme favors quads to get the most predictable subdivision. It's much easier for the artist to predict how everything will look and deform if they favor, as much as possible, uniform grids of quadrangles wrapped around the model, with 4-valence vertices and every polygon having 4 points. Then, only in the cases where they need to "join" these quad grids together, they might create some funky topology: a 5-valence vertex here, a 3-valence vertex there, a 5-sided polygon here, a triangle there. Those cases tend to deform a bit unpredictably (or at least unintuitively), so artists tend to avoid them as much as possible.
Because when artists model polygonal meshes in this way, they are not just trying to create a statue with a nice shape. If that's all they wanted, they'd save themselves a lot of grief by avoiding dealing with individual vertices/edges/polygons in the first place and using something like Sculptris. Instead they are designing not only shapes but also a control lattice, a wire flow, and a set of control points they can easily move around in the future to get predictable behavior out of their control cage. They're basically designing controls, almost an "interactive GUI/rig" for themselves, through how they design the topology.
2) What does the texturing department have to see so that they don't
send the model back to the modelling department?
Generally, how a mesh is modeled shouldn't directly affect the texture department's work much at all if they're working with UV maps and painting textures over them (at that point it doesn't really matter whether a model has clean wire flows or not, since all the texture artists do is paint images over the 2D UV map or directly onto the 3D model).
However, if the modeler does the UV mapping, then regardless of whether they use quad meshes and clean wire flows, a poor UV mapping will make the resulting textures look distorted. So the UV maps need to be made with minimal distortion, though that's usually easy to do automatically these days.
The other exception is if the department doesn't use UV maps and instead uses, say, PTex from Disney. PTex really favors quads. In the original paper at least, it only worked with quads.

Insert skeleton in 3D model programmatically

Background
I'm working on a project where a user gets scanned by a Kinect (v2). The result will be a generated 3D model which is suitable for use in games.
The scanning aspect is going quite well, and I've generated some good user models.
Example:
Note: This is just an early test model. It still needs to be cleaned up, and the stance needs to change to properly read skeletal data.
Problem
The problem I'm currently facing is that I'm unsure how to place skeletal data inside the generated 3D model. I can't seem to find a program that will let me insert the skeleton into the 3D model programmatically. I'd like to do this either via a program that I can control programmatically, or by adjusting the 3D model file in such a way that the skeletal data gets included within the file.
What I have tried
I've been looking around for similar questions on Google and StackOverflow, but they usually refer to either motion capture or skeletal animation. I know Maya has the option to insert skeletons in 3D models, but as far as I could find that is always done by hand. Maybe there is a more technical term for the problem I'm trying to solve, but I don't know it.
I do have a train of thought on how to achieve the skeleton insertion. I imagine it going like this:
1. Scan the user and generate a 3D model with the Kinect.
2. Clean up the user model, getting rid of any deformations or unnecessary information, and close the holes left by the clean-up.
3. Scan the user's skeletal data with the Kinect.
4. Extract the skeleton data.
5. Get the joint locations and store them as xyz coordinates in 3D space, along with the bone lengths and directions (see the sketch after this list).
6. Read the 3D skeleton data into a program that can create skeletons.
7. Save the new model with the inserted skeleton.
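A minimal sketch of how I imagine storing the extracted skeleton data from the steps above (purely illustrative Python; the joint names and fields are assumptions on my part, not Kinect API types):

    from dataclasses import dataclass

    @dataclass
    class Joint:
        name: str                               # e.g. "hips", "left_elbow" (illustrative names)
        position: tuple[float, float, float]    # xyz coordinates in model space
        parent: int                             # index of the parent joint, -1 for the root

    # A skeleton is just a flat list of joints; the bones are implied by the parent links.
    skeleton = [
        Joint("hips",  (0.0, 1.0, 0.0), -1),
        Joint("spine", (0.0, 1.3, 0.0),  0),
        Joint("left_upper_arm", (0.2, 1.5, 0.0), 1),
    ]

    def bone_length_and_direction(joint_index):
        """Bone from the parent joint to this joint, as a length plus a unit direction."""
        j = skeleton[joint_index]
        if j.parent < 0:
            return 0.0, (0.0, 0.0, 0.0)    # the root joint has no bone leading into it
        p = skeleton[j.parent]
        d = tuple(a - b for a, b in zip(j.position, p.position))
        length = sum(c * c for c in d) ** 0.5
        return length, tuple(c / length for c in d)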
Question
Can anyone recommend (I know, this is perhaps "opinion based") a program to read the skeletal data and insert it into a 3D model? Is it possible to use Maya for this purpose?
Thanks in advance.
Note: I opted to post the question here and not on Graphics Design Stack Exchange (or other Stack Exchange sites) because I feel it's more coding related, and perhaps more useful for people who will search here in the future. Apologies if it's posted on the wrong site.
A tricky part of your question is what you mean by "inserting the skeleton". Typically bone data is very separate from your geometry, and stored in different places in your scene graph (with the bone data being hierarchical in nature).
There are file formats you can export to where you might establish some association between your geometry and skeleton, but that's very format-specific as to how you associate the two together (ex: FBX vs. Collada).
Probably the closest thing to "inserting" or, more appropriately, "attaching" a skeleton to a mesh is skinning. There you compute weight assignments, basically determining how much each bone influences a given vertex in your mesh.
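For reference, those weights are what feed into the standard linear blend skinning formula (sketched here in common notation; v is a vertex in the rest pose, M_i is bone i's skinning transform, i.e. its current pose times its inverse bind pose, and w_i is that bone's weight for the vertex):

    v' = \sum_i w_i M_i v,   where   \sum_i w_i = 1   and   w_i >= 0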
This is a tough part to get right (both programmatically and artistically). For the highest quality needs (commercial games, films, etc.) it is often a semi-automatic solution at best, with artists laboring over tweaking the resulting weight assignments and/or skeleton.
There are algorithms that get pretty sophisticated in determining these weight assignments, ranging from simple heuristics like assigning weights based on the nearest distance to each bone's line segment (very crude, and it will often fall apart near tricky areas like the pelvis or shoulder) to ones that actually consider the mesh as a solid volume (using voxel or tetrahedral representations) to assign weights. Example: http://blog.wolfire.com/2009/11/volumetric-heat-diffusion-skinning/
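As a very rough sketch of that crude nearest-distance heuristic (illustrative Python; the bone representation and the falloff are assumptions, not taken from any particular tool):

    import math

    def point_segment_distance(p, a, b):
        """Distance from point p to the line segment a-b (one bone)."""
        ab = [bi - ai for ai, bi in zip(a, b)]
        ap = [pi - ai for ai, pi in zip(a, p)]
        denom = sum(c * c for c in ab) or 1e-9
        t = max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / denom))
        closest = [ai + t * ci for ai, ci in zip(a, ab)]
        return math.dist(p, closest)

    def skin_weights(vertex, bones, falloff=2.0):
        """Weight each bone by inverse distance to it, then normalize so the weights sum to 1."""
        raw = [1.0 / (point_segment_distance(vertex, a, b) ** falloff + 1e-9) for a, b in bones]
        total = sum(raw)
        return [w / total for w in raw]

    # Example: one vertex near the elbow, two bones (upper arm and forearm).
    bones = [((0, 0, 0), (1, 0, 0)), ((1, 0, 0), (2, 0, 0))]
    print(skin_weights((1.1, 0.2, 0.0), bones))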
However, you might be able to get decent results using an algorithm like delta mush which allows you to get a bit sloppy with weight assignments but still get reasonably smooth deformations.
Now if you want to do this externally, pretty much any 3D animation software will do, including free ones like Blender. However, skinning and character animation in general tend to take quite a bit of artistic skill and a lot of patience, so it's worth noting that it's not as easy as it might seem to make characters leap, dance, crouch, and run and still look good, even when you have a skeleton in advance. The weight association from skeleton to geometry is the toughest part; it's often the result of many hours of artists laboring over the deformations to get them to look right in a wide range of poses.

Table Structure for a Map system like Fallensword

I need help structuring a map system like fallensword.com. Basically, there are different maps you can move around on (you choose the map you start on, then you can move around). On a map there are, for example, caves that you can enter, which take you to another map, and so on.
How do I structure that in SQL? I guess I need x/y columns, but then what? What more should I have? There are sometimes NPCs that you attack, sometimes NPCs that give you a quest, and sometimes a house/cave or something similar that you can enter or get a quest from.
Any ideas?
A k-d tree or a quadtree can help a lot with this problem. A quadtree reduces the 2D lookup to a 1D one. It's used in many map applications like Bing or Google Maps. A good start is Nick's spatial index quadtree Hilbert curve blog. You can use MySQL with a spatial index, but if you want to write a game, this isn't the right place to ask; there is gamedev.stackexchange.com.
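For the table layout itself, a minimal starting point might look something like this (purely illustrative; the table and column names are made up, shown here with Python and sqlite3):

    import sqlite3

    conn = sqlite3.connect("game.db")
    conn.executescript("""
    CREATE TABLE map (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL
    );

    -- One row per interesting thing placed on a map, located by x/y.
    CREATE TABLE map_object (
        id             INTEGER PRIMARY KEY,
        map_id         INTEGER NOT NULL REFERENCES map(id),
        x              INTEGER NOT NULL,
        y              INTEGER NOT NULL,
        kind           TEXT NOT NULL,     -- 'npc_enemy', 'npc_quest', 'portal', ...
        quest_id       INTEGER,           -- set when kind = 'npc_quest'
        target_map_id  INTEGER            -- set when kind = 'portal' (cave/house entrance)
    );

    CREATE INDEX idx_map_object_pos ON map_object(map_id, x, y);
    """)

    # Everything near the player on the current map:
    rows = conn.execute(
        "SELECT * FROM map_object WHERE map_id = ? AND x BETWEEN ? AND ? AND y BETWEEN ? AND ?",
        (1, 10, 20, 10, 20),
    ).fetchall()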

Smoothing data received from CoreLocation

I'm trying to develop an app which allows you to walk around, and where you walked will be drawn on a map. I have this all working fine, but I'm finding that even with a reasonably accurate GPS location the points still jump around a bit. When drawn on a map this has the effect of creating a squiggly or zig-zag line.
I'm looking for suggestions/strategies on how to smooth the data, so that the line drawn on the map is more of a smooth best fit, rather than an accurate point to point drawing.
There are many different types of smoothing algorithms you could apply to the data (for a few starting points, see this Wikipedia article). The only way to know for sure which is/are suitable for your application is to implement and test them.
Simple or weighted moving averages are fairly common (taking the last n samples and averaging them), but they have the problem of lagging behind the data. Another common one for filtering signal noise is a low-pass filter, which attenuates small, rapid (noisy) movements while passing through larger, slower ones. Apple has some code for this in their AccelerometerGraph sample.
I'd suggest trying those out first, as they're easy to implement, before looking at the more complex ones.
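As an illustration, a simple moving average over the last n points might look like this (illustrative Python; in an iOS app the same idea would run over the incoming CLLocation coordinates):

    from collections import deque

    class MovingAverageSmoother:
        """Keeps the last n (lat, lon) points and returns their average."""
        def __init__(self, n=5):
            self.window = deque(maxlen=n)

        def add(self, lat, lon):
            self.window.append((lat, lon))
            count = len(self.window)
            return (sum(p[0] for p in self.window) / count,
                    sum(p[1] for p in self.window) / count)

    smoother = MovingAverageSmoother(n=5)
    for raw_point in [(51.5001, -0.1201), (51.5003, -0.1199), (51.5002, -0.1202)]:
        smoothed = smoother.add(*raw_point)
        # Draw `smoothed` on the map instead of the raw GPS fix.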