How would I go about tracking actions in an arcade hockey game? - game-development

I'm not very experienced in programming, but I'm building an xG (expected goals) model for an arcade hockey game.
To achieve this, I need to track different actions in the game.
The game has a clock, but the statistics from each game do not link an event (e.g. a goal or a shot) to a specific time, so I would need to do that first.
I also need to track lots of things from each match, and spit them out into a .xlsx or similar file.
These things include:
If a shot is taken: who took the shot, at what time, and from what position (I would use a grid-based coordinate system for the shooter's position).
If a shot is taken: where in the net/goal it hits, even if it's saved.
If anybody could point me in the right direction, that would be appreciated.
Sorry if this is the wrong place to ask this question.
I have created a grid and coordinate system based on the game's mini-map, but no code as of yet.
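One minimal way to structure the logging (a sketch in C, with every field name, the grid sizes, and the CSV layout as placeholders to adapt to your own grid and clock format) is to define a record per shot and append it as a CSV row, which Excel can open directly or convert to .xlsx:

```c
#include <stdio.h>

/* One logged shot event. All field names and types here are placeholders,
 * to be adapted to whatever the grid/coordinate system actually uses. */
typedef struct {
    int    period;         /* game period */
    double clock_seconds;  /* time on the game clock when the shot happened */
    char   shooter[32];    /* player name or id */
    int    grid_x, grid_y; /* shooter position on the mini-map grid */
    int    net_x, net_y;   /* where the puck hit the net, saved or not */
    int    is_goal;        /* 1 = goal, 0 = saved/missed */
} ShotEvent;

/* Append one event as a CSV row. */
int log_shot(const char *path, const ShotEvent *e)
{
    FILE *f = fopen(path, "a");
    if (!f)
        return -1;
    fprintf(f, "%d,%.1f,%s,%d,%d,%d,%d,%d\n",
            e->period, e->clock_seconds, e->shooter,
            e->grid_x, e->grid_y, e->net_x, e->net_y, e->is_goal);
    fclose(f);
    return 0;
}
```

Writing a real .xlsx file from scratch is a lot more work than CSV, so starting with CSV and converting in Excel is probably the path of least resistance.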

Related

How to organize a FreeRTOS project

I am new to the world of FreeRTOS, and I have to do a project that consists of an automatic alcohol dispenser that measures temperature. The parts/sensors of my project are:
DHT22 for temperature (I know it's not ideal, but it's the only one I have).
HC-SR04 for distance measurement (ultrasonic).
16x2 I2C display to show the temperature.
Buzzer to make sound.
Servo to dispense alcohol.
The idea of the project is that when someone comes within 15 cm of the device, the temperature is displayed on the screen, the servo moves and can dispense alcohol, and the buzzer makes a little sound.
As I understand it, I have to create a task for each activity: one to measure temperature and possibly send that information to a queue, another to read the queue and display it on the screen, another to make the sound with the buzzer, another to measure distance with the ultrasonic sensor, and another to move the servo.
This is how I was asked to do it, but my question is what is the best way to organize the tasks?
How do I make it so that ...
first the distance is measured,
then the temperature is measured,
then it is shown on the display,
then the servo is moved and the sound is made?
What is the best way to communicate between tasks (when one task measures less than 15 cm, it tells another task to measure the temperature, which is then shown on the display, and the servo moves and the buzzer sounds)?
I would like to see how you would think about it; it would help me a lot.
I'm very new to the subject and I'm having a hard time working out which is the best way. I would appreciate simple solutions that don't involve complicated stuff, as I'm having a hard time with FreeRTOS.
This seems like a fairly simple system, as all work can be done sequentially (i.e. one thing happens after another). You certainly don't need to use dedicated tasks for activities which are done sequentially. In fact, the simplest architecture by far is to have a single task, running in a loop, doing everything. I strongly suggest you start with that approach and build something that works.
Then, after you have something that works sequentially in a single task, reconsider your options. It might be the perfect architecture, or it might need minor adjustments. You'll be in a much better position to judge.
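As a minimal sketch of that single-task approach: the driver functions for the DHT22, HC-SR04, display, buzzer, and servo below are placeholders you would implement yourself; only the task and delay calls are FreeRTOS API, and the stack depth and priority are just example values.

```c
#include "FreeRTOS.h"
#include "task.h"

/* Placeholder driver functions -- replace with your own DHT22, HC-SR04,
 * display, buzzer, and servo code. */
float read_distance_cm(void);
float read_dht22_temperature(void);
void  display_temperature(float celsius);
void  beep(void);
void  dispense_alcohol(void);

/* One task does everything in order, as suggested above. */
static void dispenser_task(void *params)
{
    (void)params;
    for (;;) {
        if (read_distance_cm() < 15.0f) {       /* someone is close enough  */
            float t = read_dht22_temperature(); /* then measure temperature */
            display_temperature(t);             /* show it on the display   */
            dispense_alcohol();                 /* move the servo           */
            beep();                             /* and make the sound       */
        }
        vTaskDelay(pdMS_TO_TICKS(100));         /* poll ~10 times a second  */
    }
}

void start_dispenser(void)
{
    xTaskCreate(dispenser_task, "dispenser", 256, NULL,
                tskIDLE_PRIORITY + 1, NULL);
}
```

Once this works, splitting the display or buzzer into their own tasks fed by a queue is a refinement you can make later, as the answer above suggests.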

Tutorials for controlling 3D modeling objects

I have some experience with Blender, enough that I can make a semitransparent cylinder of specified dimensions and some small spheres. For a chemistry tutorial video explaining temperature and heat concepts, I want to write a program that will:
1. Set up the cylinder and some spheres in a coordinate space
2. Set up a camera and lighting
3. Get the spheres moving around in random directions while keeping track of their positions, and make them bounce when necessary (this I can figure out given a coordinate space; I'm not going to be bone-crunchingly accurate with accelerations, "mass", etc., just send a ball off in another direction at the same "speed" all the balls are going; a rough sketch of this bounce step follows the question)
4. Record what this would look like through the camera for a set amount of time (thinking a command-line option in seconds)
In other words, by #4, this program doesn't even need a GUI at all. I just want the program to make a video.
It may take me a very long time to realize this because, though I have a lot of experience with C, C++, and Java, I don't know how to take a 3D model file and control it programmatically. I don't even know the infrastructure of libraries and accompanying APIs for controlling 3D objects and recording the camera to a file.
Are there any tutorials that would go from starting with some 3D models to programmatically setting up a scene (objects, camera, lights), programmatically moving the objects in the coordinate space, and recording the video to a file?
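For reference, the bounce step from item 3 can be as simple as reflecting the velocity about the wall normal. A rough, engine-agnostic sketch, assuming the cylinder is centred on the z-axis, ignoring the end caps, and with all names purely illustrative:

```c
#include <math.h>

typedef struct { double x, y, z; } Vec3;

/* Reflect a sphere's velocity when it reaches the cylinder's side wall.
 * R is the cylinder radius; the cylinder axis is assumed to be the z-axis. */
void bounce_off_cylinder(Vec3 *pos, Vec3 *vel, double R, double sphere_radius)
{
    double dist = sqrt(pos->x * pos->x + pos->y * pos->y);
    if (dist + sphere_radius < R || dist == 0.0)
        return;                              /* not touching the wall */

    /* Outward unit normal at the contact point (end caps ignored). */
    double nx = pos->x / dist, ny = pos->y / dist;

    /* Reflect the velocity about the normal: v' = v - 2(v.n)n.
     * Speed is preserved, matching the "same speed, new direction" idea. */
    double dot = vel->x * nx + vel->y * ny;
    if (dot > 0.0) {                         /* only if moving outward */
        vel->x -= 2.0 * dot * nx;
        vel->y -= 2.0 * dot * ny;
    }
}
```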
Since you already know some programming, I want to point you to Unity: www.unity3d.com
Unity is a 3D game engine, though it can be used for a number of different things, including the program you have in mind.
It's programmed with C# or Javascript, and I think you could pick these languages up easily enough.
Basically what you described in your last paragraph is exactly what Unity does.

Kinect - Techniques required to achieve the following display

Does anyone have any idea what technique I should use to make the display video shift left, right, up and down as in the video below? I want to achieve this with a Kinect but with a different idea.
Thanks in advance.
http://www.youtube.com/watch?v=V2hxaijuZ6w
EDIT:
Now that I'm awake, I'll go into better detail about this (it apparently took me a week to wake up).
So the Winscape project connects a real and a virtual world by giving windows from the real world into a virtual world. The way it does this is to act as if the real world is part of the virtual world, and then change the display of the monitors (disguised to look like windows) to replicate the view a person would see if they existed in the virtual world.
Imagine your virtual world. It doesn't necessarily have an end to it, but there's a point where you stop trying to render stuff into it, so let's say the world is enclosed in a box that contains all the rendered elements. Now what Winscape does is make it appear that the virtual world actually exists in the real world, and that you can see it through the monitors.
The first step is obviously to create your virtual world. For starters, I'd suggest just creating a literal box. Make each wall a different color, or put color gradients on the walls. Make something simple. If you haven't already decided on a 3D framework to handle this, I'd suggest XNA. It's C#, which works with the Kinect SDK, and it's got a ton of tutorials online to help you. Once you've created your world, use XNA to place a camera inside the box and add some simple controls to rotate the camera. This will allow you to look around the box from the inside, to make sure the rendering is working as expected.
Once you've done that, you need to decide where to put your windows. These will be the viewpoints into your 3D scene. To demonstrate this concept, here's a picture I took from an XNA camera tutorial.
Note that, if you read the actual tutorial, they won't say the exact same thing as me because I'm just hijacking the picture to demonstrate my meaning. So, the (0,0,0) point is where your "eye" is. The pink rectangle would represent your window. Looking at the window, four lines are drawn from the eye to the corners of the pink window. These four lines are extended forward until they collide with the background, creating the green rectangle. This would be the rectangle that your eye can see through the window.
Note that XNA will actually handle a LOT of this for you. You simply need to create a camera in your virtual scene and move it around, doing some math to aim it directly at your window. You'll want that camera to be positioned in the virtual space in a way that represents your location in the real world. You can do this by using the Kinect to get your real-world coordinates relative to itself, then configuring your application to know where your Kinect is in relation to your windows. Combining that data, you can get the location of your eyes in relation to your monitors in the real world, and since the monitors are represented by the windows in the virtual world, you can figure out where you exist in the virtual world. So place the virtual camera where your head is in the virtual world, point it at the windows, and do some magic to make sure only the window is viewed by the camera.
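As a rough sketch of the corner-projection step described above, in plain vector math (assuming the background is a flat plane at a fixed z and the names are illustrative; in practice XNA's camera and projection setup would do most of this for you):

```c
typedef struct { double x, y, z; } Vec3;

/* Cast a ray from the eye through one window corner and extend it until it
 * hits a background plane at z = bg_z, returning the hit point. Doing this
 * for all four corners gives the rectangle visible through the window
 * (the "green rectangle" in the tutorial picture).
 * Assumes the window is not parallel to the view direction (dir.z != 0). */
Vec3 project_through_corner(Vec3 eye, Vec3 corner, double bg_z)
{
    Vec3 dir = { corner.x - eye.x, corner.y - eye.y, corner.z - eye.z };
    double t = (bg_z - eye.z) / dir.z;   /* how far along the ray the plane is */
    Vec3 hit = { eye.x + t * dir.x, eye.y + t * dir.y, bg_z };
    return hit;
}
```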
Original semi-lucid rant:
Okay, I'm going to take a shot at this (it's almost 1 AM, so let me know if I did a less than brilliant job and I'll come back to it when I wake up).
First, it'll involve quite a bit of math that I'm just going to skim over. You have, essentially, three layers.
Person ---- "Windows" (Monitors) ---- Scene
The scene, of course, doesn't really exist. You have to kind of incorporate the person into a virtual world where the scene, which is really just a flat image, exists behind a wall. The only way the person can see said scene is through the windows in the wall, which in reality are faked by monitors.
So, here comes the math. The Kinect can calculate where you're standing in the room, and more importantly, where your head is. From this you can get a general sense of where your eyes are. You'll need to take this point (your eyes) and translate it into the coordinates you're using in your virtual world. Then, calculate what those eyes should be able to see through the virtual windows. You can do this by projecting lines from the eyes to each corner of a window, all the way through until they hit the "scene" picture. Each window will correspond to a rectangular area on the background picture. This rectangle is what needs to be drawn to the screen.
The trickiest part is going to be setting up the virtual world to nearly perfectly mimic the real world. Essentially, a lot of configuration ("okay, this window is 1.5 meters above the Kinect... and .25 meters to its left..."). I'm also not sure how far back you should put the scene picture. If I think of something, I'll come back to this, but you can certainly just try it out and figure out a distance that works well for your setup.
Oh wait, now I know why I couldn't figure out the distance. It's because that example is using a 3D simulation. Pretty nifty. So you'd just need to figure out where you want to place your windows in the simulation or whatnot.
There are multiple techniques based on what setup you want to use (Kinect SDK, libfreenect, OpenNI, etc.) and how accurate you want this to be.
OpenNI for example has a function called GetCoM which returns the centre of mass for a user (it doesn't need to track a skeleton at this point) which can be used. It looks like OpenNI was used in the video but they still use an old version. The newer version allows skeleton tracking without the 'psi'(ψ) pose.
Note that it doesn't look like it takes the user's head direction. The body could point in one direction and the head in another for example. G.Fanelli and his team have done quite a bit of research in the area. For Kinect check out Real Time Head Pose Estimation from Consumer Depth Cameras
I've played a bit with the KinectSDK and a Kinect for Windows and noticed there's a Face Tracker included.
In the end, based on how loose or precise you want the tracking to be and what your ideal setup is (maximum distance covered, content used, etc.), you can figure out which SDK/library will suit you best. I imagine this also depends a bit on your experience with programming, in which case also look for wrappers that are easier to tackle (e.g. Unity, MaxMSP/Jitter, Processing, openFrameworks, etc.)

Creating a program that takes GPS data and displays the current location on a geo-referenced image

My name is John and I am a grad student at the University of Florida. As part of my research, one of my tasks is to create a piece of software that displays a map of the surrounding area, shows the current location (from a GPS), and implements a shapefile (as a boundary outline). I have not really been able to find enough information to get on the right track with this, and would appreciate any assistance!
The project involves a large-scale robot that will be operated remotely in rough terrain, so this mapping and GPS software will need to work entirely offline, but the location in use will be known in advance. A cost-effective means of doing this is strongly preferred (maybe even a simple API that could do this simple task, DLL libraries, or ActiveX).
My initial guess is to use a geo-referenced image (one whose latitude/longitude boundaries I would know). Then, given a fix from the GPS, I would treat the image as an XY plot somehow, and that would provide the current position. Obviously even this step can be a challenge depending on what kind of image, map, KML file, etc. I can find and use.
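As a rough sketch of that idea, assuming a north-up image whose edge latitudes/longitudes are known and an area small enough that a simple linear mapping is acceptable (all names below are placeholders):

```c
/* A geo-referenced image: its edge coordinates and pixel size. */
typedef struct {
    double lat_top, lat_bottom;   /* latitude at the top and bottom edges  */
    double lon_left, lon_right;   /* longitude at the left and right edges */
    int    width_px, height_px;   /* image size in pixels                  */
} GeoImage;

/* Map a GPS fix onto pixel coordinates of the image by simple linear
 * interpolation between the known edge coordinates. */
void gps_to_pixel(const GeoImage *img, double lat, double lon, int *px, int *py)
{
    *px = (int)((lon - img->lon_left) /
                (img->lon_right - img->lon_left) * img->width_px);
    *py = (int)((img->lat_top - lat) /
                (img->lat_top - img->lat_bottom) * img->height_px);
}
```

Over larger areas, or with images in a projected coordinate system, you would need a proper projection library rather than this linear approximation.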
So I would appreciate any advice, suggestions, or comments.
I suggest you find reference source code online and then modify it yourself; projects like this already exist on the Internet, and you can find them through search engines. Good luck!

Representing the board in a boardgame

I'm trying to write a nice representation of a boardgame's board and the movement of players around it. The board is a grid of tiles, players can move up, down, left or right. Several sets of contiguous tiles are grouped together into named regions. There are walls which block movement between some tiles.
That's basically it. I think I know where to start if all the players were human controlled, but I'm struggling with what happens with a computer controlled player. I want the player to be able to say to itself: "I'm on square x, I want to go to region R a lot, and I want to go to region S a little. I have 6 moves available, therefore I should do..."
I'm at a loss where to begin. Any ideas? This would be in a modern OO language.
EDIT: I'm not concerned (yet) with the graphical representation of the board, it's more about the route-finding part.
I'd say use a tree structure representing each possible move.
You can use a Minimax-type algorithm to figure out what move the computer should take.
If the problem is with pathfinding, there are quite a few pathfinding algorithms out there.
The Wikipedia article on Pathfinding has a list of pathfinding algorithms. One of the common ones used in games is the A* search algorithm, which can do a good job. A* can account for costs of passing over different types of areas (such as impenetrable walls, tiles which take longer to travel on, etc.)
In many cases, a board can be represented by a 2-dimensional array, where each element represents a type of tile. However, the requirement for regions may make it a little more interesting to try to solve.
Have a Player class, which has a Map field associating Squares with the probability of moving there, i.e. Map<Square, Double> if you represent the probabilities as doubles in 0..1.
Have a Board class encapsulating a series of Squares. Each Square will have four booleans or similar to mark where it has walls, its coordinates, and which Player, if any, is on it.
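A minimal sketch of that kind of tile/board representation, here in C rather than the OO pseudocode above, with the walls stored as per-edge flags on each Square (the names and board size are placeholders):

```c
#include <stdbool.h>

/* Wall flags for a tile: a wall on an edge blocks movement across that edge. */
enum { WALL_UP = 1, WALL_DOWN = 2, WALL_LEFT = 4, WALL_RIGHT = 8 };

typedef struct {
    unsigned walls;   /* bitmask of the WALL_* flags             */
    int      region;  /* id of the named region this tile is in  */
    int      player;  /* index of the player on the tile, or -1  */
} Square;

#define BOARD_W 10
#define BOARD_H 10

typedef struct {
    Square tiles[BOARD_H][BOARD_W];
} Board;

/* Can a piece step from (x, y) one tile in the given direction?
 * Checks both the board edge and the wall flag on the source tile. */
bool can_move(const Board *b, int x, int y, unsigned dir)
{
    if (b->tiles[y][x].walls & dir)
        return false;
    switch (dir) {
    case WALL_UP:    return y > 0;
    case WALL_DOWN:  return y < BOARD_H - 1;
    case WALL_LEFT:  return x > 0;
    case WALL_RIGHT: return x < BOARD_W - 1;
    }
    return false;
}
```

Walls between two tiles can be stored on both neighbouring Squares (or checked from both sides) so that movement is blocked in either direction, and can_move is exactly the neighbour test a BFS or A* search would use.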
I can tell you what worked for me on a commercial board game style product.
Break your representation of the board and core game logic into its own module, with well-defined interfaces to the rest of the game. We had functions like bool IsValidMove(origin, dest) and bool PerformMove(origin, dest), along with interfaces back to the GUI such as AnimateMove(gamePieceID, origin, dest, animInfo).
The board and rules only knew the state of the board and what was valid to do. They didn't know anything about rendering, AI, animations, sound, input, or anything else. Each frame, we would handle input from the user at the GUI level, send commands to the board/game-state code, and then be done. The game-state code would get commands, resolve whether they were valid, update the game state and board, then send messages back to the GUI to visually represent the new state of the board. These updates were queued by the visual representation system, so we could batch a bunch of animations to happen in sequence.
The good thing about this is that the board doesn't know or care about human vs. AI players. Your AI can be a separate submodule that acts on its turn. It can send the same commands as the human player, and the game logic and visual results will be the same. You'll need to either have a local per-AI bit of info about the game board state, or expose some BoardSnapshot() functionality from the game logic that lets the AI "see" the board, but that's it. Alternately, you could register each AI as an observer (Observer pattern) on the game state, so they get notified when the board updates as well, in case they need to do any complex realtime planning.
Keeping each section of your game separate and isolated will help with unit testing, and provide a more robust system. Well defined interfaces are your friend.
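A minimal sketch of what that module boundary might look like as a header. The function names follow the ones mentioned above (IsValidMove, PerformMove, BoardSnapshot, AnimateMove); the types and the callback registration are just one assumption about how it could be wired up.

```c
/* board_game.h -- the game-logic module's public interface.
 * Only state and rules live behind it; no rendering, AI, or input. */
#ifndef BOARD_GAME_H
#define BOARD_GAME_H

#include <stdbool.h>

typedef struct { int x, y; } Position;
typedef struct GameState GameState;      /* opaque to the GUI and the AI */

bool IsValidMove(const GameState *g, Position origin, Position dest);
bool PerformMove(GameState *g, Position origin, Position dest);

/* Read-only copy of the board so the AI can "see" it without touching it. */
GameState *BoardSnapshot(const GameState *g);

/* Callback the game logic uses to tell the GUI what to animate. */
typedef void (*AnimateMoveFn)(int gamePieceID, Position origin, Position dest);
void RegisterAnimateMove(GameState *g, AnimateMoveFn fn);

#endif
```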
If you are looking for an in-memory representation of the game (and its state), a matrix is the simplest. However, depending on the complexity of the board and the strategy, you may have to maintain a list of states.
If you mean on-screen representation, you'd need some graphics library to begin with.