In a game, as far as camera control scripts go, would you have one "camera control script" that handles all the cameras, or a script for each individual camera (player camera, cutscene cameras, etc.)?
I'm just trying to figure out the proper way to handle scripting in Unity that will keep things simple in the long run as well as being memory efficient.
Camera actions would be things like following the player for the player camera, panning around during a cutscene, or staying still for a different cutscene.
It really depends on the scenario/workflow/production pipeline. It doesn't really matter which way you organise it, but if you are less comfortable with scripting I would say putting it all in one file is probably the best way to deal with it for now.
I would normally say that if you plan to reuse segments of code, it's generally a good thing to keep things modular, i.e. one class per script.
EDIT: For your specific problem, I would have a system in which you switch between one (or more) cameras that translate between scripted points, i.e. stored Vector3 variables (for static cameras and cinematics), and the main camera, which has the movement script attached to it. This way you minimise the number of cameras you are using in your scene and you can reuse the static cameras as many times as you need to.
A tidy way to deal with the data storage might be to use an array of structs that stores all the data you need, i.e. the camera's position, rotation, FOV, etc.
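For example, a minimal Unity (C#) sketch of that idea (the struct and class names here are just illustrative, not from the original answer):

using UnityEngine;

// Illustrative only: one way to store "scripted point" camera presets.
[System.Serializable]
public struct CameraPreset
{
    public Vector3 position;
    public Quaternion rotation;
    public float fieldOfView;
}

public class CameraDirector : MonoBehaviour
{
    public Camera sceneCamera;        // the single camera being reused
    public CameraPreset[] presets;    // static/cinematic points, filled in the Inspector

    // Snap the camera to a stored preset, e.g. at the start of a cutscene.
    public void ApplyPreset(int index)
    {
        CameraPreset p = presets[index];
        sceneCamera.transform.SetPositionAndRotation(p.position, p.rotation);
        sceneCamera.fieldOfView = p.fieldOfView;
    }
}

You could just as easily interpolate towards a preset over time for pans instead of snapping to it.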
Related
We have to work with very large models and we're hoping to use the first person camera to walk through them, and eventually do this in VR. The progressive rendering does wonders for improving perceived responsiveness, but it can be disorienting to have so many items around you disappear as you move.
Is there any way to turn progressive rendering off but only for objects closest to the camera? Maybe a number of objects up to a maximum number, or objects within a radius from the camera. Everything further away can load in later and flicker during motion, but it would be nice to keep the nearby objects rendered, especially structural objects like stairs. I've often walked towards stairs just to have them disappear in front of me, forcing me to fly to a platform with E and Q instead of walking.
So far I've only found a way to toggle progressive rendering for the entire model on or off with viewer.setProgressiveRendering(bool) but I haven't found a way to customize the rendering behavior.
Per our Engineering team's recommendation, can you try setting rendering targets with
viewer.impl.setFPSTargets(1, 5, 15) //min, target, max
In fact, we've been getting similar reports from other developers requesting the ability to fine-tune rendering with large models, so our Engineering team is considering options to extend the existing functions and even build them into extensions.
I have a 2D sidescrolling game. Right now, in order to jump, the player must be touching the ground. Therefore, I have a boolean, isOnGround, that is set to YES when the player collides with a tile object, and to NO when the player jumps. This generates a LOT of calls to the didBeginContact method, slowing down the game.
Firstly, how can I optimise this by using one big physics body for the tiles on the floor (for example, by clustering multiple adjacent tiles into a single physics body)?
Secondly, is this even efficient? Is there a better way to detect whether the player is on the ground? My current method opens up a lot of bugs, for example wall jumping: if the player collides with a wall, isOnGround becomes YES and allows the player to jump.
Having didBeginContact called numerous times should in no way slow down your game. If you are having performance issues, I suspect the problem lies elsewhere. Are you testing on a device or in the simulator?
If you are using the Tiled app to create your game map, you can use the Objects Layer to create individual objects in your map, which your code can translate into physics bodies later on.
Using physics and collisions is probably the easiest way for you to determine your player's state in relation to ground contact. To solve your wall issue, simply give wall contacts a different category than your ground. This will prevent isOnGround from being set to YES on wall contacts.
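The idea, sketched below in C# purely for illustration (in SpriteKit the actual properties are categoryBitMask and contactTestBitMask on SKPhysicsBody; all the names here are made up):

// Illustrative bit-flag categories mirroring SpriteKit's categoryBitMask idea;
// this is not actual SpriteKit API.
class GroundContactTracker
{
    const uint GroundCategory = 1u << 0;
    const uint WallCategory   = 1u << 1;
    const uint PlayerCategory = 1u << 2;

    public bool IsOnGround { get; private set; }

    // Called from whatever stands in for didBeginContact.
    public void OnContact(uint bodyACategory, uint bodyBCategory)
    {
        // Work out which body is the "other" one relative to the player.
        uint other = (bodyACategory == PlayerCategory) ? bodyBCategory : bodyACategory;

        // Only ground contacts set the flag; walls are a separate category and are ignored here.
        if ((other & GroundCategory) != 0)
            IsOnGround = true;
    }
}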
You could use the physics engine to detect when jumping is enabled (and this is what I used to do in my game). However, I too have noticed significant overhead when using the physics engine to detect when a unit was on a surface, and that is because contact detection in Sprite Kit is, for whatever reason, expensive, even when collisions are already enabled. Even the documentation notes:
For best performance, only set bits in the contacts mask for interactions you are interested in.
So I found a better solution for my game (which has 25+ simultaneous units that all need surface detection). Instead of going through the physics engine, I just do my own surface calculation and cache the result on each game update. Something like this:
final class func getSurfaceID(nodePosition: CGPoint) -> SurfaceID {
    // Loop through the surface rects and see if the position is inside one.
    // (surfaceRects, a [SurfaceID: CGRect] built at level load, and SurfaceID.none are assumed here.)
    for (id, rect) in surfaceRects where rect.contains(nodePosition) { return id }
    return .none
}
What I ended up doing was handling my own surface detection by checking if the bottom point of my unit was inside any of the surface frames. And if your frames are axis-aligned (your rectangles are not rotated) you can perform even faster checks to see if the point is inside the frame.
This is more work in terms of level design because you will need to build an array of surface frames either dynamically from your tiles or manually place surface frames in your world (this is what I did).
Making this change reduced the CPU time spent on surface detection from over 20% to 0.1%. It also allows me to check whether any arbitrary point lies on a surface without needing to create a physics body (which is unnecessary overhead). However, this solution obviously won't work for you if you genuinely need contact detection.
Now, regarding your point about creating one large physics body from smaller ones: you could group adjacent floor tiles under a container node and recreate a physics body that fits the grouped nodes. Depending on how your nodes are grouped and how you recycle tiles, this can get complicated. A simpler solution would be to create large physics bodies that just overlap your tiles. This would reduce the total number of physics bodies as well as the number of detections, and used in combination with the surface-frames solution it could really reduce your overhead.
I'm not sure how your game is designed or what its requirements are; I'm just giving you some of the solutions I looked at when developing surface detection in my game. If you haven't already, you should definitely profile your game in Instruments to see whether contact detection is indeed the source of your overhead. If your game doesn't have a lot of contacts, I doubt that this is where the overhead is coming from.
I have enough experience with Blender to make a semitransparent cylinder of specified dimensions and some small spheres. For a chemistry tutorial video explaining temperature and heat concepts, I want to write a program that will:
1. Set up the cylinder and some spheres in a coordinate space.
2. Set up a camera and lighting.
3. Get the spheres moving around in random directions while keeping track of their positions and making them bounce when necessary (this I can figure out given a coordinate space; I'm not going to get bone-crunchingly accurate by trying to do accelerations, taking "mass" into account, etc., just send the balls off in another direction at the "speed" all the balls are going).
4. Record what this would look like through the camera for a set amount of time (I'm thinking a command-line option in seconds).
In other words, by #4, this program doesn't even need a GUI at all. I just want it to produce a video.
It may take me a very long time to actualize this because, though I have a lot of experience with C, C++, and Java, I don't know how to take a 3D model file and control it programmatically. I don't even know the landscape of libraries and accompanying APIs for controlling 3D objects and recording the camera to a file.
Are there any tutorials that would go from starting with some 3D models to programmatically setting up a scene (objects, camera, lights), programmatically moving the objects in the coordinate space, and recording the video to a file?
Since you already know some programming, I want to point you to Unity: www.unity3d.com
Unity is a 3D game engine, though it can be used for a number of different things, including the program you have in mind.
It's programmed with C# or JavaScript, and I think you could pick either language up easily enough.
Basically what you described in your last paragraph is exactly what Unity does.
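As a rough illustration (not from the original answer, and all names and values here are my own assumptions), the simulation part could be as small as a single C# script that spawns spheres with random velocities inside the container and lets Unity's physics handle the bouncing:

using UnityEngine;

// Hypothetical sketch: spawns bouncing spheres inside a pre-built container
// (e.g. the semitransparent cylinder, set up with colliders in the editor).
public class ParticleBounceDemo : MonoBehaviour
{
    public int sphereCount = 20;
    public float speed = 3f;
    public PhysicMaterial bouncyMaterial;   // high bounciness for elastic-looking collisions

    void Start()
    {
        for (int i = 0; i < sphereCount; i++)
        {
            GameObject sphere = GameObject.CreatePrimitive(PrimitiveType.Sphere);
            sphere.transform.position = transform.position + Random.insideUnitSphere;
            sphere.transform.localScale = Vector3.one * 0.2f;
            sphere.GetComponent<SphereCollider>().material = bouncyMaterial;

            Rigidbody body = sphere.AddComponent<Rigidbody>();
            body.useGravity = false;                        // "gas molecules", not falling balls
            body.velocity = Random.onUnitSphere * speed;    // random direction, same speed for all
        }
    }
}

Recording the camera output to a video file is a separate step (a screen recorder or Unity's recording tooling), which this sketch doesn't cover.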
Does anyone have any idea what technique I should use to make the displayed video shift left, right, up and down as in the video below? I want to achieve this with a Kinect, but with a different idea.
Thanks in advance.
http://www.youtube.com/watch?v=V2hxaijuZ6w
EDIT:
Now that I'm awake, I'll go into better detail about this (it apparently took me a week to wake up).
So the Winscape project connects a real and a virtual world by giving windows from the real world into a virtual world. The way it does this is to act as if the real world is part of the virtual world, and then change the display of the monitors (disguised to look like windows) to replicate the view a person would see if they existed in the virtual world.
Imagine your virtual world. It doesn't necessarily have an end to it, but there's a point where you stop trying to render stuff into it, so let's say the world is enclosed in a box that contains all the rendered elements. Now what Winscape does is make it appear that the virtual world actually exists in the real world, and that you can see it through the monitors.
The first step is obviously to create your virtual world. For starters, I'd suggest creating a literal box. Make each wall a different color, or put color gradients on the walls. Make something simple. If you haven't already decided on a 3D framework to handle this, I'd suggest XNA. It's C#, which works with the Kinect SDK, and it has a ton of tutorials online to help you. Once you've created your world, use XNA to place a camera inside the box and add some simple controls to rotate the camera. This will let you look around the box from the inside, to make sure the rendering is working as expected.
Once you've done that, you need to decide where to put your windows. These will be the viewpoints into your 3D scene. To demonstrate this concept, here's a picture I took from an XNA camera tutorial.
Note that, if you read the actual tutorial, they won't say the exact same thing as me because I'm just hijacking the picture to demonstrate my meaning. So, the (0,0,0) point is where your "eye" is. The pink rectangle would represent your window. Looking at the window, four lines are drawn from the eye to the corners of the pink window. These four lines are extended forward until they collide with the background, creating the green rectangle. This would be the rectangle that your eye can see through the window.
Note that XNA will actually handle a LOT of this for you. You simply need to create a camera in your virtual scene and move it around, doing some math to aim it directly at your window. You'll want that camera to sit in the virtual space in a way that represents your location in the real world. You can do this by using the Kinect to get your real-world coordinates relative to itself, then configuring your application to know where your Kinect is relative to your windows. Combining that data, you can get the location of your eyes in relation to your monitors in the real world, and since the monitors are represented by the windows in the virtual world, you can figure out where you exist in the virtual world. So place the virtual camera where your head is in the virtual world, point it at the windows, and do some magic to make sure only the window is viewed by the camera.
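For what it's worth, the usual way to do that last bit of "magic" is an off-centre (asymmetric) projection built from the eye position and the window corners. A rough XNA-style sketch, with all names being mine rather than from the answer:

using Microsoft.Xna.Framework;

// Hypothetical helper for a head-tracked window.
// pa, pb, pc: lower-left, lower-right and upper-left corners of the window,
// and eye: the head position from the Kinect, all in virtual-world coordinates.
public static class WindowCamera
{
    public static void Build(Vector3 pa, Vector3 pb, Vector3 pc, Vector3 eye,
                             float near, float far,
                             out Matrix view, out Matrix projection)
    {
        // Orthonormal basis of the window: right, up and normal (towards the viewer).
        Vector3 vr = Vector3.Normalize(pb - pa);
        Vector3 vu = Vector3.Normalize(pc - pa);
        Vector3 vn = Vector3.Cross(vr, vu);

        // Vectors from the eye to the window corners, and distance to the window plane.
        Vector3 va = pa - eye;
        Vector3 vb = pb - eye;
        Vector3 vc = pc - eye;
        float dist = -Vector3.Dot(va, vn);

        // Asymmetric frustum bounds, scaled onto the near plane.
        float l = Vector3.Dot(vr, va) * near / dist;
        float r = Vector3.Dot(vr, vb) * near / dist;
        float b = Vector3.Dot(vu, va) * near / dist;
        float t = Vector3.Dot(vu, vc) * near / dist;

        projection = Matrix.CreatePerspectiveOffCenter(l, r, b, t, near, far);

        // The camera sits at the eye and stays parallel to the window (it doesn't rotate
        // towards it); the off-centre projection supplies the "look through a window" skew.
        view = Matrix.CreateLookAt(eye, eye - vn, vu);
    }
}

Each frame you would recompute these matrices from the latest Kinect head position and render the scene with them onto the monitor that backs that window.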
Original semi-lucid rant:
Okay, I'm going to take a shot at this (it's almost 1 AM, so let me know if I did a less than brilliant job and I'll come back to it when I wake up).
First, it'll involve quite a bit of math that I'm just going to skim over. You have, essentially, three layers.
Person ---- "Windows" (Monitors) ---- Scene
The scene, of course, doesn't really exist. You have to sort of incorporate the person into a virtual world where the scene, which is really just a flat image, exists behind a wall. The only way the person can see said scene is through the windows in the wall, which in reality are faked by monitors.
So, here comes the math. The Kinect can calculate where you're standing in the room, and more importantly, where your head is. From this you can get a general sense of where your eyes are. You'll need to take this point (your eyes) and translate it into the coordinates you're using in your virtual world. Then calculate what those eyes should be able to see through the virtual windows. You can do this by projecting lines from the eyes to each corner of a window, all the way through until they hit the "scene" picture. Each window will correspond to a rectangular area on the background picture. This rectangle is what needs to be drawn to the screen.
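A minimal sketch of that projection step, assuming the background picture lies on a plane at depth sceneZ and everything is expressed in the virtual world's coordinates (all names here are illustrative):

using Microsoft.Xna.Framework;

public static class WindowProjection
{
    // Extends the line from the eye through one window corner until it hits the
    // background plane z = sceneZ, and returns the hit point.
    // (Assumes the eye is not in that plane, i.e. dir.Z != 0.)
    public static Vector3 ProjectCorner(Vector3 eye, Vector3 windowCorner, float sceneZ)
    {
        Vector3 dir = windowCorner - eye;
        float t = (sceneZ - eye.Z) / dir.Z;   // how far along the line the plane sits
        return eye + dir * t;
    }
}

Doing that for all four corners of a window gives you the rectangular area of the background picture that should be drawn to that monitor.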
The trickiest part is going to be setting up the virtual world to mimic the real world almost perfectly. Essentially, a lot of configuration ("okay, this window is 1.5 meters above the Kinect.. and .25 meters to its left.."). I'm also not sure how far back you should put the scene picture. If I think of something, I'll come back to this, but you can certainly just try it out and figure out a distance that works well for your setup.
Oh wait, now I know why I couldn't figure out the distance: it's because that example is using a 3D simulation. Pretty nifty. So you'd just need to figure out where you want to place your windows in the simulation or whatnot.
There are multiple techniques depending on what setup you want to use (Kinect SDK, libfreenect, OpenNI, etc.) and how accurate you want this to be.
OpenNI, for example, has a function called GetCoM which returns the centre of mass for a user (it doesn't need to track a skeleton at that point), which can be used. It looks like OpenNI was used in the video, but with an old version; the newer version allows skeleton tracking without the 'psi' (ψ) pose.
Note that it doesn't look like the user's head direction is taken into account. The body could point in one direction and the head in another, for example. G. Fanelli and his team have done quite a bit of research in this area. For the Kinect, check out Real Time Head Pose Estimation from Consumer Depth Cameras.
I've played a bit with the KinectSDK and a Kinect for Windows and noticed there's a Face Tracker included.
In the end, based on how loose or precise you want the tracking to be and what your ideal setup is (maximum distance covered, content used, etc.), you can figure out which SDK/library will suit you best. Also, I imagine this depends a bit on your experience with programming, in which case also look for wrappers that are easier to tackle (e.g. Unity, MaxMSP/Jitter, Processing, openFrameworks, etc.).
I'm trying to write a nice representation of a boardgame's board and the movement of players around it. The board is a grid of tiles, players can move up, down, left or right. Several sets of contiguous tiles are grouped together into named regions. There are walls which block movement between some tiles.
That's basically it. I think I know where to start if all the players were human controlled, but I'm struggling with what happens with a computer controlled player. I want the player to be able to say to itself: "I'm on square x, I want to go to region R a lot, and I want to go to region S a little. I have 6 moves available, therefore I should do..."
I'm at a loss where to begin. Any ideas? This would be in a modern OO language.
EDIT: I'm not concerned (yet) with the graphical representation of the board, it's more about the route-finding part.
I'd say use a tree structure representing each possible move.
You can use a Minimax-type algorithm to figure out what move the computer should take.
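A bare-bones sketch of that idea, with placeholder types that are not from the question (for a single-player planning problem you would simply drop the minimizing branch and take the maximum over the move tree):

using System.Collections.Generic;
using System.Linq;

// Placeholder interface for whatever your game state ends up looking like.
interface IGameState
{
    bool IsTerminal { get; }
    double Evaluate();                      // e.g. how well this position satisfies
                                            // "be in region R a lot, region S a little"
    IEnumerable<IGameState> NextStates();   // one successor state per legal move
}

static class MoveSearch
{
    // Depth-limited minimax over the tree of possible moves.
    public static double Minimax(IGameState state, int depth, bool maximizing)
    {
        if (depth == 0 || state.IsTerminal)
            return state.Evaluate();

        var scores = state.NextStates()
                          .Select(s => Minimax(s, depth - 1, !maximizing));

        return maximizing ? scores.Max() : scores.Min();
    }
}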
If the problem is with pathfinding, there are quite a few pathfinding algorithms out there.
The Wikipedia article on Pathfinding has a list of pathfinding algorithms. One of the common ones used in games is the A* search algorithm, which can do a good job. A* can account for costs of passing over different types of areas (such as impenetrable walls, tiles which take longer to travel on, etc.)
In many cases, a board can be represented by a two-dimensional array, where each element represents a type of tile. However, the requirement for regions may make it a little more interesting to solve; a sketch of one way to lay this out follows.
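To make the 2D-array idea concrete, here is a small sketch (layout and names are my own, not from the answer): tiles in an array, walls stored per edge, and a breadth-first search that gives the number of moves from a start tile to every other tile. An AI could weigh those distances against its region preferences, and A* would follow the same structure with a priority queue and a heuristic.

using System.Collections.Generic;

// Hypothetical grid representation where walls block movement between adjacent tiles.
class GridBoard
{
    public int Width, Height;
    // WallRight[x, y] == true blocks movement between (x, y) and (x + 1, y);
    // WallDown[x, y]  == true blocks movement between (x, y) and (x, y + 1).
    public bool[,] WallRight, WallDown;
    public string[,] Region;   // name of the region each tile belongs to

    public GridBoard(int width, int height)
    {
        Width = width; Height = height;
        WallRight = new bool[width, height];
        WallDown = new bool[width, height];
        Region = new string[width, height];
    }

    bool Blocked(int x, int y, int nx, int ny)
    {
        if (nx < 0 || ny < 0 || nx >= Width || ny >= Height) return true;
        if (nx == x + 1) return WallRight[x, y];
        if (nx == x - 1) return WallRight[nx, ny];
        if (ny == y + 1) return WallDown[x, y];
        return WallDown[nx, ny];
    }

    // Breadth-first search: minimum number of moves from (sx, sy) to every tile,
    // or -1 for tiles that cannot be reached.
    public int[,] Distances(int sx, int sy)
    {
        var dist = new int[Width, Height];
        for (int x = 0; x < Width; x++)
            for (int y = 0; y < Height; y++)
                dist[x, y] = -1;

        var queue = new Queue<(int x, int y)>();
        dist[sx, sy] = 0;
        queue.Enqueue((sx, sy));

        int[] dx = { 1, -1, 0, 0 };
        int[] dy = { 0, 0, 1, -1 };

        while (queue.Count > 0)
        {
            var (x, y) = queue.Dequeue();
            for (int d = 0; d < 4; d++)
            {
                int nx = x + dx[d], ny = y + dy[d];
                if (Blocked(x, y, nx, ny) || dist[nx, ny] != -1) continue;
                dist[nx, ny] = dist[x, y] + 1;
                queue.Enqueue((nx, ny));
            }
        }
        return dist;
    }
}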
Have a Player class which has a Map field associating Squares with the probability of moving there, i.e. Map<Square, Double> if you represent the probability as a 0..1 double.
Have a Board class encapsulating a series of Squares. Each Square will have four booleans (or similar) to mark on which sides it has a wall, its coordinates, and which Player, if any, is on it.
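In C# terms (the original suggestion is phrased with Java's Map<Square, Double>; the field names below are mine), that might look roughly like:

using System.Collections.Generic;

class Square
{
    public int X, Y;
    // Walls on each side of this square.
    public bool WallUp, WallDown, WallLeft, WallRight;
    public Player Occupant;   // null if no player is standing here
}

class Board
{
    public Square[,] Squares;
    public Board(int width, int height) { Squares = new Square[width, height]; }
}

class Player
{
    // How much this player wants to move to each square, as a 0..1 value.
    public Dictionary<Square, double> MovePreference = new Dictionary<Square, double>();
}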
I can tell you what worked for me on a commercial board game style product.
Break your representation of the board and core game logic into its own module, with well-defined interfaces to the rest of the game. We had functions like bool IsValidMove(origin, dest) and bool PerformMove(origin, dest), along with interfaces back to the GUI such as AnimateMove(gamePieceID, origin, dest, animInfo).
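As a sketch of what those interfaces could look like in C# (the function names come from the answer; the surrounding types are assumptions of mine):

// Game-logic module: knows the board state and the rules, nothing else.
interface IBoardLogic
{
    bool IsValidMove(Position origin, Position dest);
    bool PerformMove(Position origin, Position dest);   // updates state, queues messages to the GUI
}

// Interface back to the GUI / visual layer.
interface IBoardView
{
    void AnimateMove(int gamePieceID, Position origin, Position dest, AnimInfo animInfo);
}

// Placeholder types so the sketch is self-contained.
struct Position { public int X, Y; }
struct AnimInfo { public float DurationSeconds; }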
The board and rules only knew the state of the board and what was valid to do. They didn't know anything about rendering, AI, animations, sound, input, or anything else. Each frame, we would handle input from the user at the GUI level, send commands to the board/game-state code, and then be done. The game-state code would receive commands, determine whether they were valid, update the game state and board, then send messages back to the GUI to visually represent the new state of the board. These updates were queued by the visual representation system, so we could batch a bunch of animations to happen in sequence.
The good thing about this is that the board doesn't know or care about human vs. AI players. Your AI can be a separate submodule that acts on its turn. It can send the same commands as the human player, and the game logic and visual results will be the same. You'll need either a local per-AI copy of information about the board state, or some BoardSnapshot() functionality exposed from the game logic that lets the AI "see" the board, but that's it. Alternatively, you could register each AI as an observer (Observer pattern) on the game state, so they get notified when the board updates as well, in case they need to do any complex real-time planning.
Keeping each section of your game separate and isolated will help with unit testing and provide a more robust system. Well-defined interfaces are your friend.
If you are looking for an in-memory representation of the game (and its state), a matrix is the simplest. However, depending on the complexity of the board and the strategy, you may have to maintain a list of states.
If you mean on-screen representation, you'd need some graphics library to begin with.