Concept of Program Making Own Class Instances - conceptual

This is probably a basic question with a simple answer, but I just can't seem to wrap my head around the logic behind it.
I will start with a simple example with a well-known Java game, Minecraft. A player is put into a world and is allowed to interact with different objects. Say the player wants wood. He sees a tree, walks over, and cuts it down. He sees another tree, walks over, and does the same. He can do this as many times as he wants, and the game will load in more trees if the player explores far enough. But how is this done? In other words, how can a program make an essentially endless number of trees that the player will always be able to interact with?
I imagine there is a tree class in the code. But obviously, the programmer has not coded in different class instances, such as tree1, tree2, and so on, since he has no idea how many trees will need to be loaded in the game. So how does the program do this without being told how many trees there will be?
In other words, I do not understand how programs can decide on their own to make x number of class instances instead of the programmer having to manually code his/her own class instances. How do game developers and other programmers do this?
Thank you.

Consider how we might generate a random game map. Let's say the environment of the map will be a forest. What sorts of objects are part of a forest?
Trees
Critters (e.g. squirrels, rabbits)
Forest floor texture (e.g. grass or fallen leaves)
We start with a basic 100x100 unit game map. We will have the program generate a number of trees between 10 and 30, and also generate each tree's X and Y coordinates, with the condition that no two trees can have the same X and Y coordinates. We do the same for the other objects.
Now we tell the program to load the game map with all of these objects in place. Done.
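To make the "how" concrete, here is a minimal sketch of that generation step (shown in Swift, but the same pattern works in Java or any other language; the Tree type, map size, and tree count range are just the placeholders from the example above). The key point is that instances are created inside a loop at runtime, so the programmer never writes tree1, tree2, and so on by hand.

struct Tree {
    let x: Int
    let y: Int
}

// Generate between 10 and 30 trees on a 100x100 map, with no two trees on the same spot.
func generateTrees() -> [Tree] {
    var occupied = Set<Int>()                        // each (x, y) encoded as x * 100 + y
    var trees: [Tree] = []
    let count = Int.random(in: 10...30)
    while trees.count < count {
        let x = Int.random(in: 0..<100)
        let y = Int.random(in: 0..<100)
        if occupied.insert(x * 100 + y).inserted {   // skip positions already taken
            trees.append(Tree(x: x, y: y))
        }
    }
    return trees
}

Every element of the returned array is its own instance, and how many there are is decided at runtime rather than by the programmer.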

Consider a simple grid where each position on the grid can be occupied
by an object. The object could be a tree, rock, or creature.
In this world, I have areas on the map defined as 'growable'.
In these areas, something will grow if there isn't something there already.
The game would have a set of turns. During each turn, I run a routine to
find all the growable spots and, if there isn't something growing there,
randomly determine whether something should sprout. This way, if someone
chops down a tree, that spot can sprout a new tree later in the game.
Perhaps the game is more complex and a simple grid isn't good enough.
Whatever the case, the game has to track all the objects in some way.
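As a rough illustration of that per-turn routine (the grid representation and the sprout probability below are invented just for the sketch):

enum Tile {
    case blocked            // rock, water, anything that can never grow
    case growable           // empty spot where something may sprout
    case tree
}

// Run once per game turn: every empty growable spot has a small chance to sprout a tree.
func runGrowthTurn(grid: inout [[Tile]], sproutChance: Double = 0.05) {
    for row in grid.indices {
        for col in grid[row].indices where grid[row][col] == .growable {
            if Double.random(in: 0..<1) < sproutChance {
                grid[row][col] = .tree    // a spot cleared by the player can regrow later
            }
        }
    }
}

Chopping a tree down just sets that cell back to .growable, so the same routine repopulates it on a later turn.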

Related

Basics of face Sculpting in Blender

I mean, the basics.
1) I have seen in online videos that they model a character (or anything) from one object only: extruding, loop cutting, scaling, etc. to build the whole character. Why don't they design different objects separately (hands separately, legs separately, body separately) and then join them together to make one object?
2) What does the texturing department need to see so that they don't send the model back to the modelling department? I mean things like: the meshes (polygons) over the model's face must be quads, not triangles, while modelling a character.
What basics should I know? Is there a checklist, or a set of basics I should go through before modelling a character?
Please correct me if I am wrong, and answer both my questions. Thanks
It may be common but it definitely isn't mandatory to have a model as one solid mesh. Some models will have parts of the body underneath clothing removed to reduce the poly count. How the model is to be used will be a big factor in how you model it: for a single image it is easy to get away with multiple parts, while a character that will be animated in a cartoony style could be stretched and distorted in ways that could show holes in a model made of multiple pieces. When working in a team, there may be rules in place determining whether a solid or multi-part model is considered acceptable.
An example of an animated model made from multiple parts is Sintel, the main character in the Sintel short animation.
There is nothing stopping you from making a library of separate body parts and joining them together when you make your model. Be aware that this can bring complications: if you model an arm with 12 verts and then make your hand with 15, you have to fiddle around to merge them together.
You will also find some extra freedom to work with multiple body parts during the sculpting phase, as you are creating a high-density mesh that is used as a template to model a clean mesh over. This step is called retopology.
It is more likely that the rigging department will send a model back for fixing than the texturing department. When adding a rig and deforming the mesh in different ways, any parts that deform badly will be revealed and need fixing.
[...] (hands separately, legs separately, body separately) and then join them together to make one object [...]
Some modelers I know do precisely this: they block in the design using broad primitive shapes, slice in some edge loops and add broad details, then merge everything together, sculpt it a bit further with high-res sculpting tools, and finally retopologize everything.
The main modelers I know who do this, however, model in a way that tries to adhere as close as possible to the concept artist's illustration. They're not creating their own models from scratch but are instead given top/front/back/side illustrations of a character, for example, and are just trying to match it as closely as possible.
When you start modeling everything in small pieces, it helps to have that concept illustration since you can get lost in the topology otherwise and fusing organic meshes together can be difficult to do in a clean way.
[...] why don't they design different objects separately? [...]
Again, they sometimes do, but one of the appeals of creating organic meshes by keeping them seamless the entire time is that you can start to focus on how edge loops propagate across the entire model. It helps to know that the base of a finger is a hexagon, for example, in figuring out how to cleanly propagate and terminate the edge loops for a hand, and likewise to have a strategy for how the hand cleanly propagates and terminates edge loops as it joins into the forearm.
It can be hard to get the topology to match up cleanly if you designed everything in small pieces and then had to figure out how to merge it all together. Polygonal modeling is very topology-oriented. It tends to require as much thinking about the wireframe and edge flows as it does the shape of the model, since it needs to be a certain way for everything to subdivide cleanly and smoothly and animate predictably with subdivision surfaces.
I used to work with developers who took one glance at the topology-dominated workflow of polygonal modeling and immediately wanted to jump to seeking alternatives, like voxel sculpting. With voxels you could potentially model everything in pieces and fuse it all together in a nice and smooth organic way without thinking about topology whatsoever.
However, that loses sight of the key appeal of polygonal meshes. Their wire flow forms a control lattice with a very finite number of control points for the artist to animate and move around to predictably control the shape of their model. You immediately lose that with a voxel representation -- so while voxels free the artist from thinking about how the topology works and how the wireframe flows through the model, they also lose all the control benefits of having it. So if people use voxel sculpting, they often end up meticulously retopologizing everything at the end anyway to regain the coarse, predictable control they have with polygonal meshes.
[...] the meshes (polygons) over the model's face must be quads, not triangles, while modelling a character [...]
This is all in the context of subdivision surfaces, the most popular of which are variants of Catmull-Clark. That scheme favors quads to get the most predictable subdivision. It's much easier for the artist to predict how everything will look and deform if they favor, as much as possible, uniform grids of quadrangles wrapped around their model, with 4-valence vertices and every polygon having 4 points. Only where they need to "join" these quad grids together might they create some funky topology: a 5-valence vertex here, a 3-valence vertex there, a 5-sided polygon here, a triangle there -- but those cases tend to deform a bit unpredictably (or at least unintuitively), so artists try to avoid them as much as possible.
Because when artists model polygonal meshes in this way, they are not just trying to create a statue with a nice shape. If that's all they wanted to do, they'd save themselves a lot of grief by avoiding dealing with things in terms of individual vertices/edges/polygons in the first place and using something like Sculptris instead. They are designing not only shapes but also a control lattice, a wire flow, and a set of control points they can easily move around in the future to get predictable behavior out of their control cage. They're basically designing controls, almost an "interactive GUI/rig" for themselves, with how they design the topology.
2) What does the texturing department need to see so that they don't send the model back to the modelling department?
Generally, how a mesh is modeled shouldn't affect the texture department's work much at all if they're working with UV maps and painting textures over them (at that point it doesn't really matter whether a model has clean wire flows or not, since all the texture artists do is paint images over the 2D UV map or directly onto the 3D model).
However, if the modeler does the UV mapping, then regardless of whether he uses quad meshes and clean wire flows, a poor UV mapping will make the resulting texture images look distorted. So the UV maps need to be made well, with minimal distortion, though that's usually easy to do automatically these days.
The other exception is if the department doesn't use UV maps and instead uses, say, PTex from Disney. PTex really favors quads. In the original paper at least, it only worked with quads.

SKPhysicsBody optimization

I have a 2D sidescrolling game. Right now, in order to jump, the player must be touching the ground. Therefore, I have a boolean, isOnGround, that is set to YES when the player collides with a tile object, and NO when the player jumps. This generates a LOT of calls to the didBeginContact method, slowing down the game.
Firstly, how can I optimise this by using one big physics body for the tiles on the floor (for example clustering multiple adjacent tiles into one single physics body)?
Secondly, is this even efficient? Is there a better way to detect if the player is on the ground? My current method opens up a lot of bugs, for example wall jumping. If a player collides with a wall, isOnGround becomes YES and allows the player to jump.
Having didBeginContact called numerous times should in no way slow down your game. If you are having performance issues, I suspect the problem is probably elsewhere. Are you testing on a device or in the simulator?
If you are using the Tiled app to create your game map, you can use the Objects Layer to create individual objects in your map, which your code can translate into physics bodies later on.
Using physics and collisions is probably the easiest way for you to determine your player's state in relation to ground contact. To solve your wall issue, you simply make wall contacts a different category than your ground contacts. This will prevent isOnGround from being set to YES when the player touches a wall.
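For example, a rough sketch of that category setup (the bit mask values, node names, and scene class are placeholders; didBegin(_:) is the current Swift name for didBeginContact):

import SpriteKit

// Placeholder categories: one bit per kind of body.
struct Category {
    static let player: UInt32 = 1 << 0
    static let ground: UInt32 = 1 << 1
    static let wall:   UInt32 = 1 << 2
}

class GameScene: SKScene, SKPhysicsContactDelegate {
    var isOnGround = false

    // Call once when the bodies are created; also set physicsWorld.contactDelegate = self in didMove(to:).
    func configureBodies(player: SKPhysicsBody, ground: SKPhysicsBody, wall: SKPhysicsBody) {
        player.categoryBitMask = Category.player
        player.contactTestBitMask = Category.ground   // only ground contacts are reported
        ground.categoryBitMask = Category.ground
        wall.categoryBitMask = Category.wall          // walls still collide, but never count as ground
    }

    func didBegin(_ contact: SKPhysicsContact) {
        let pair = contact.bodyA.categoryBitMask | contact.bodyB.categoryBitMask
        if pair == Category.player | Category.ground {
            isOnGround = true    // a wall contact can no longer flip this flag
        }
    }
}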
You could use the physics engine to detect when jumping is enabled (this is what I used to do in my game). However, I too noticed significant overhead using the physics engine to detect when a unit was on a surface, because contact detection in Sprite Kit is, for whatever reason, expensive even when collisions are already enabled. Even the documentation notes:
For best performance, only set bits in the contacts mask for
interactions you are interested in.
So I found a better solution for my game (which has 25+ simultaneous units that all need surface detection). Instead of going through the physics engine, I just did my own surface calculation and cached the result on each game update. Something like this:
final class func getSurfaceID(nodePosition: CGPoint) -> SurfaceID {
    // Loop through surface rects and see if the position is inside.
    // (surfaceRects here is assumed to be a cached [SurfaceID: CGRect] built at level load.)
    for (id, rect) in surfaceRects where rect.contains(nodePosition) { return id }
    return .none   // no surface under this point
}
What I ended up doing was handling my own surface detection by checking if the bottom point of my unit was inside any of the surface frames. And if your frames are axis-aligned (your rectangles are not rotated) you can perform even faster checks to see if the point is inside the frame.
This is more work in terms of level design because you will need to build an array of surface frames either dynamically from your tiles or manually place surface frames in your world (this is what I did).
Making this change reduced the CPU time spent on surface detection from over 20% to 0.1%. It also allows me to check whether any arbitrary point lies on a surface rather than needing to create a physics body (which is unnecessary overhead). However, this solution obviously won't work for you if you need to use contact detection.
Now regarding your point about creating one large physics body from smaller ones. You could group adjacent floor tiles using a container node and recreate a physics body that fits the nodes that are grouped. Depending on how your nodes are grouped and how you recycle tiles this can get complicated. A better solution would be to create large physics bodies that just overlap your tiles. This would reduce the number of total physics bodies, as well as the number of detections. And if used in combination with the surface frames solution you could really reduce your overhead.
I'm not sure how your game is designed and what its requirements are. I'm just giving you some possible solutions I looked at when developing surface detection in my game. If you haven't already, you should definitely profile your game in Instruments to see if contact detection is indeed the source of your overhead. If your game doesn't have a lot of contacts, I doubt that this is where the overhead is coming from.

Using Pathfinding, such as A*, for NPC's & character without Tiles

I've been reading a book called "iOS Games by Tutorials" (recommend it to anyone interested in making iPhone games) and I'm learning how to make tiled maps with Sprite Kit with an overhead view (like The Legend of Zelda: Link's Awakening). So far, I have made a tiled map using tiles that are 32x32, and placed the player character and several NPCs into the world. I've even made the NPCs move randomly around the map, though the way the book teaches it is to have them move from tile to tile (any of the 8 tiles surrounding the NPC at any time; if a tile has some property such as categoryBitMask, the NPC won't move to that tile).
I am going to change NPC movement to physics-based (which is its own problem), just like the player character has right now (which means NPCs will collide with objects that have a physicsBody, like the player character does). It's more fluid and dynamic.
But here is where the question begins. I want to implement pathfinding (such as the A* algorithm) into the NPC and player character movement, because the map contains buildings, water, trees, etc. with their own physicsBodies. It's one thing to limit the NPCs' random movement or to force them to walk a predetermined path (which would kill the point of this game), but it's another to have to tap the screen very often so the player character avoids all the buildings/trees he has to walk past. I don't want to use a grid system. Is it possible to implement some pathfinding algorithm on x,y coordinates? Is this more resource intensive? Could you share your thoughts about this?
Thank you.
This is a very interesting topic.
There are algorithms for finding paths in continuous spaces. For example, you can use a potential-based method with the objective having a very low potential and obstacles being "hills" (perhaps infinitely high, although this requires a bit of care). The downside of potential methods is that you have to take special precautions to keep them from getting stuck at a local minimum. Situations like this
   P
+----+
|   M|
|    |
+ ---+
where M is a monster trying to get to the player, P, can occur. In the example, the monster is at a local minimum, and it would have to go to a higher potential in order to get out the door at the lower left of the building. A variant of potential algorithms (in fact, it's often useful to reduce it to one) is to assign anti-gravity to obstacles and gravity to objectives. This is also somewhat non-deterministic and requires special precautions to avoid getting "stuck".
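A toy sketch of that anti-gravity/gravity variant follows (the repulsion constant and step size are arbitrary, and nothing here handles the local-minimum problem described above):

struct Vec {
    var x: Double
    var y: Double
}

// One step of a potential-field walker: attracted to the goal, repelled by every obstacle.
func potentialStep(from pos: Vec, toward goal: Vec, obstacles: [Vec], stepSize: Double = 1.0) -> Vec {
    var fx = goal.x - pos.x                            // "gravity" pulling toward the objective
    var fy = goal.y - pos.y
    for ob in obstacles {
        let dx = pos.x - ob.x
        let dy = pos.y - ob.y
        let d2 = max(dx * dx + dy * dy, 0.01)          // avoid division by zero at the obstacle
        fx += 50.0 * dx / d2                           // "anti-gravity" falling off with distance
        fy += 50.0 * dy / d2
    }
    let len = (fx * fx + fy * fy).squareRoot()
    guard len > 0 else { return pos }                  // forces cancel: stuck, or already at the goal
    return Vec(x: pos.x + stepSize * fx / len, y: pos.y + stepSize * fy / len)
}

Calling this every frame moves the monster "downhill" in the potential; in the boxed-in situation drawn above it ends up pinned against the wall nearest the player instead of walking around to the door, which is exactly the local-minimum failure the answer warns about.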
As @rickster points out, SpriteKit provides an SKFieldNode class that can help you implement a potential based solution.
Other approaches include "wall following" (for example, Pledge's algorithm), which is useful for finding your way around in a maze-like environment.
One drawback to continuous methods is that NPC movement will often seem a bit unnatural -- for example, even if our monster in the example above is able to decide that it's at a local minimum and increase the "temperature" of its search (that is, make larger moves, perhaps at random, against the potential gradient), it will bounce around instead of going straight for the door.
An alternative to searching in continuous spaces is to quantize the space. A simple method is to tile it, cover it with polygons, or represent it as a quadtree. Essentially, you want to have a way of mapping every point in the continuous space to a vertex on a graph representing the quantized space. At this point, graph search algorithms like A* and friends are applicable.
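For instance, once the space is quantized into grid cells, a plain, unoptimized A* over those cells looks roughly like this (the Cell type and the walkable test are placeholders, and a real implementation would use a priority queue instead of scanning the open set):

struct Cell: Hashable {
    let x: Int
    let y: Int
}

// A* over a grid. `walkable` should return false for blocked or out-of-bounds cells.
// Returns the path from start to goal (inclusive), or nil if the goal is unreachable.
func aStar(start: Cell, goal: Cell, walkable: (Cell) -> Bool) -> [Cell]? {
    func h(_ c: Cell) -> Int { abs(c.x - goal.x) + abs(c.y - goal.y) }    // Manhattan heuristic
    var open: Set<Cell> = [start]
    var cameFrom: [Cell: Cell] = [:]
    var g: [Cell: Int] = [start: 0]                                       // cost from start
    while let current = open.min(by: { g[$0]! + h($0) < g[$1]! + h($1) }) {
        if current == goal {                                              // rebuild the path
            var path = [current]
            var node = current
            while let prev = cameFrom[node] {
                path.insert(prev, at: 0)
                node = prev
            }
            return path
        }
        open.remove(current)
        for (dx, dy) in [(1, 0), (-1, 0), (0, 1), (0, -1)] {              // 4-connected neighbours
            let next = Cell(x: current.x + dx, y: current.y + dy)
            guard walkable(next) else { continue }
            let tentative = g[current]! + 1
            if tentative < g[next, default: Int.max] {
                cameFrom[next] = current
                g[next] = tentative
                open.insert(next)
            }
        }
    }
    return nil
}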
Graph search is somewhat resource intensive, but for a 2D Zelda-like game, it should be doable on a mobile device, especially with various optimizations like only "waking up" NPCs that are within a certain distance of the player (think aggro).
This page is a bit thin on implementation details, but it'll give you the right terms to google.
As always, start simple and iterate. Tiling is incredibly easy, and will let you experiment with the graph search method before optimizing.

Tutorials for controlling 3D modeling objects

I have some experience with Blender such that I can make a semitransparent cylinder of specified dimensions and small spheres. I want to (for a chemistry tutorial video explaining temperature and heat concepts) write a program that will:
1) Set up the cylinder and some spheres in a coordinate space
2) Set up a camera and lighting
3) Get the spheres moving around in random directions while keeping track of their positions and making them bounce when necessary (this I can figure out given a coordinate space; I'm not going to get bone-crunchingly accurate trying to do accelerations, take "mass" into account, etc., just going to send balls off in another direction at the "speed" all the balls are going)
4) Record what this would look like through the camera for a set amount of time (thinking a command-line option in seconds)
In other words, by #4, this program doesn't even need a GUI at all. I just want the program to make a video.
It may take me a very long time to actualize this because though I have a lot of experience with C, C++, and Java, I don't know how to take a 3D model file and programmatically control it. I don't even know the infrastructure of libraries and accompanying API to control 3D objects and record the camera to a file.
Are there any tutorials that would go from starting with some 3D models to programmatically setting up a scene (objects, camera, lights), programmatically moving the objects in the coordinate space, and recording the video to a file?
Since you already know some programming, I want to point you to Unity: www.unity3d.com
Unity is a 3D game engine, though it can be used for a number of different things, including the program you have in mind.
It's programmed with C# or JavaScript, and I think you could pick up either language easily enough.
Basically what you described in your last paragraph is exactly what Unity does.

Representing the board in a boardgame

I'm trying to write a nice representation of a boardgame's board and the movement of players around it. The board is a grid of tiles, players can move up, down, left or right. Several sets of contiguous tiles are grouped together into named regions. There are walls which block movement between some tiles.
That's basically it. I think I know where to start if all the players were human controlled, but I'm struggling with what happens with a computer controlled player. I want the player to be able to say to itself: "I'm on square x, I want to go to region R a lot, and I want to go to region S a little. I have 6 moves available, therefore I should do..."
I'm at a loss where to begin. Any ideas? This would be in a modern OO language.
EDIT: I'm not concerned (yet) with the graphical representation of the board, it's more about the route-finding part.
I'd say use a tree structure representing each possible move.
You can use a Minimax-type algorithm to figure out what move the computer should take.
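A bare-bones sketch of that idea (the tree itself is built elsewhere; leaf scores are from the computer's point of view, and every non-leaf node is assumed to have at least one child):

// A game tree: either a terminal position with a score, or the positions reachable in one move.
indirect enum GameTree {
    case leaf(score: Int)
    case node(children: [GameTree])
}

// Plain minimax: the computer maximizes its score, assuming the opponent minimizes it.
func minimax(_ tree: GameTree, maximizing: Bool) -> Int {
    switch tree {
    case .leaf(let score):
        return score
    case .node(let children):
        let scores = children.map { minimax($0, maximizing: !maximizing) }
        return maximizing ? scores.max()! : scores.min()!
    }
}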
If the problem is with pathfinding, there are quite a few pathfinding algorithms out there.
The Wikipedia article on Pathfinding has a list of pathfinding algorithms. One of the common ones used in games is the A* search algorithm, which can do a good job. A* can account for costs of passing over different types of areas (such as impenetrable walls, tiles which take longer to travel on, etc.)
In many cases, a board can be represented by a 2-dimensional array, where each element represents a type of tile. However, the requirement for regions may make it a little more interesting to try to solve.
Have a Player class, which has a Map field associating Squares with the probability of moving there; that is, a Map<Square, Double> if you represent the probability as a 0..1 double.
Have a Board class encapsulating a series of Squares. Each Square will have four booleans (or similar) to mark which of its sides have walls, along with its coordinates and which Player, if any, is on it.
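A minimal sketch of that layout (in Swift, so the Map<Square, Double> becomes a dictionary; all names are illustrative):

// Identifies a position on the board.
struct Coord: Hashable {
    let x: Int
    let y: Int
}

// One tile: which of its four sides are walled off, and which player (if any) stands on it.
struct Square {
    var wallUp = false, wallDown = false, wallLeft = false, wallRight = false
    var occupant: String? = nil                  // player name, or nil if the square is empty
}

// The board encapsulates the grid of squares, grouped into named regions.
struct Board {
    var squares: [Coord: Square] = [:]
    var regions: [String: Set<Coord>] = [:]      // e.g. "R" -> the coordinates making up region R
}

// Each player holds, per square, the probability (0...1) that it wants to move there.
struct Player {
    let name: String
    var movePreference: [Coord: Double] = [:]
}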
I can tell you what worked for me on a commercial board game style product.
Break your representation of the board and the core game logic into its own module, with well-defined interfaces to the rest of the game. We had functions like bool IsValidMove(origin, dest) and bool PerformMove(origin, dest), along with interfaces back to the GUI such as AnimateMove(gamePieceID, origin, dest, animInfo).
The board and rules only knew the state of the board, and what was valid to do. It didn't know anything about rendering, AI, animations, sound, input, or anything else. Each frame, we would handle input from the user at the GUI level, send commands to the board/game state code, and then be done. The game state code would get commands, resolve whether they were valid or not, update the game state and board, then send messages back to the GUI to visually represent the new state of the board. These updates were queued by the visual representation system, so we could batch a bunch of animations to happen in sequence.
The good thing about this is that the board doesn't know or care about human vs. AI players. Your AI can be a separate submodule that acts on its turn. It can send the same commands as the human player, and the game logic and visual results will be the same. You'll need to either keep a local per-AI bit of info about the game board state, or expose some BoardSnapshot() functionality from the game logic that lets the AI "see" the board, but that's it. Alternately, you could register each AI as an observer (Observer pattern) on the game state, so they get notified when the board updates as well, in case they need to do any complex realtime planning.
Keeping each section of your game separate and isolated will help with unit testing, and provide a more robust system. Well-defined interfaces are your friend.
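A rough sketch of that separation, based on the function names mentioned above (the Position type and the protocol names are placeholders):

struct Position: Hashable {
    let x: Int
    let y: Int
}

// Game-logic module: knows the board state and the rules; nothing about rendering, input, or AI.
protocol GameLogic {
    func isValidMove(from origin: Position, to dest: Position) -> Bool
    func performMove(from origin: Position, to dest: Position) -> Bool   // updates state, then notifies the view
}

// GUI module: receives messages from the game logic and queues up animations to play in sequence.
protocol GameView {
    func animateMove(gamePieceID: Int, from origin: Position, to dest: Position)
}

// A controller (human input handler or AI) issues the same commands either way,
// so the game logic never needs to know which kind of player it is talking to.
protocol PlayerController {
    func takeTurn(using logic: GameLogic)
}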
If you are looking for an in-memory representation of the game (and its state), a matrix is the simplest. However, depending on the complexity of the board and the strategy, you may have to maintain a list of states.
If you mean on-screen representation, you'd need some graphics library to begin with.