Background
One problem with games that use online highscore lists is that they can often be abused. The game sends the current score to the server, and a cunning user can analyze the protocol/scheme and send bogus scores. That is why some highscore lists are topped with 999999 scores.
A common solution to this problem is to encrypt the score in some way and, on top of that, add other mechanisms to recognize false scores. But even if you do this, it is still the client that sends the score, and the client lives on the user's computer and can be reverse-engineered.
My idea
I am designing/thinking about a game (that I will complete, yeah right :) ) where you configure your player/robot with instructions on how to perform a task (and when those instructions should be carried out). When a "Go" button is pressed, the game runs the instructions. Finally a result is obtained and, if the run was successful, a score.
So, how about this: instead of submitting the score, the actual instructions are sent to the server, where they are run using the same implementation. The server then calculates the score and places the user on the highscore list.
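Very roughly, the server side could look something like this (a minimal Python sketch; run_simulation, the submission fields, and the highscores store are placeholders, not an existing API, and the real work is making the simulation fully deterministic):

    import json

    MAX_INSTRUCTIONS = 1000   # reject absurdly large programs up front

    def run_simulation(instructions):
        # Placeholder for the exact same deterministic game logic the client runs.
        # It must not depend on wall-clock time, frame rate, or random seeds that
        # differ between client and server.
        raise NotImplementedError

    def handle_submission(raw_body, highscores):
        data = json.loads(raw_body)               # e.g. {"player": "...", "instructions": [...]}
        instructions = data["instructions"]
        if len(instructions) > MAX_INSTRUCTIONS:
            return {"error": "program too large"}
        result = run_simulation(instructions)     # the server recomputes the result itself
        if result.success:
            highscores.add(data["player"], result.score)
            return {"score": result.score}
        return {"score": None}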
The question
Are there ways this idea can be abused to get a false score?
I understand that this probably isn't a new idea. But if it works, it could also be extended to other games where it is possible to record all user actions.
People will always find a way to cheat, but this seems like a reasonable countermeasure. You'll have to consider your intended traffic levels, though, as this scheme requires more resources than simply recording the high score sent by the client.
But, as an aside - this game sounds an awful lot like my job (giving instructions to a machine so it performs some task). No high-score board though (although, that would be awesome).
As long as the robot program's behavior doesn't depend on the speed of the computer, and the programs are quite small (at most a few kilobytes), this would work fine. The only ways I can see to cheat it are if someone cloned the workspace and ran a program to find the optimal program for the robot, then plugged it in and submitted it, or if someone posted the solutions and people used those; but both of those issues can be solved with randomization.
(A note about speed-dependent games: it's fine for the game to slow down uniformly if the computer can't run it at full speed, but if the physics time step depends on the frame rate, you can get problems like the jump height varying with the frame rate.)
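The usual way to get that kind of speed independence is a fixed physics time step with rendering decoupled from it; a minimal sketch of the loop (the step length and the simulate/render callbacks are just placeholders):

    import time

    FIXED_DT = 1.0 / 60.0   # physics always advances in 1/60 s steps

    def game_loop(simulate, render):
        # simulate(dt) and render() stand in for your own game code.
        accumulator = 0.0
        previous = time.perf_counter()
        while True:
            now = time.perf_counter()
            accumulator += now - previous
            previous = now
            while accumulator >= FIXED_DT:
                simulate(FIXED_DT)   # identical results on fast and slow machines
                accumulator -= FIXED_DT
            render()                 # the frame rate may vary; the physics does not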
Related
I built an application that does something similar: task assignment. I thought it worked well until I recently noticed that the solutions are not optimal. In detail, there is a score table for each possible pair of machine and task, and the number of machines is usually much smaller than the number of tasks. I used hard/medium/soft rules, where the soft rule is incremental, based on the score of each assignment from the score table.
However, when I reviewed the results after a 1-2 hour run, I found that among the unassigned tasks there are many better choices (which would achieve a higher soft score if assigned) than the current assignments. The benchmark reports indicate that the total soft score reached a plateau within an hour and then stayed stuck at that level.
I checked the logic of the rules: if the soft rule is working correctly, it should eventually find an allocation that achieves the highest overall soft score while meeting the other hard/medium rules, shouldn't it?
I've been trying various things such as tuning algorithm parameters, scaling the score table, etc., but none of them delivers the optimal solution.
One problem is that you might be facing a score trap (see the docs). In that case, make your constraint score more fine-grained to deal with it.
If that's not the case and you're stuck in a local optimum, then I wouldn't play too much with the algorithm parameters: they will probably fix it, but you'll be overfitting on that dataset.
Instead, figure out the smallest possible move that gets you out of that local optimum and a step closer to the global optimum, and add that kind of move as a custom move. For example, if a normal swap move can't help but you can see a way of getting there with a 3-swap move, then implement that move.
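To illustrate the idea independently of any particular solver API (this is not OptaPlanner code, just the concept): a 3-swap rotates the values of three entities in one atomic move, which can escape local optima that a plain pairwise swap cannot.

    import random

    def three_swap(assignment, a, b, c):
        # Rotate the values of three entities in one atomic move: a -> b -> c -> a.
        candidate = dict(assignment)
        candidate[a], candidate[b], candidate[c] = assignment[c], assignment[a], assignment[b]
        return candidate

    def random_three_swap(assignment):
        # Pick three distinct entities and rotate their assignments.
        a, b, c = random.sample(list(assignment), 3)
        return three_swap(assignment, a, b, c)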
I've created a solitaire game for Mac, but people keep complaining that there aren't enough "winning" shuffles. Players have a win rate of about 5%-10%, whereas their usual win rate is around 50%.
Right now, I'm creating an array with all the cards in the deck and then shuffling that array using the Fisher-Yates method.
So my question is... is there any way I can "check" for a winning solitaire shuffle, so I can bump up the number of winning shuffles I'm dealing to people?
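For reference, the shuffle itself is just the standard Fisher-Yates algorithm; a Python sketch of the idea (not my actual Mac code):

    import random

    def fisher_yates_shuffle(cards):
        # Walk the array backwards and swap each position with a
        # uniformly chosen position at or before it.
        for i in range(len(cards) - 1, 0, -1):
            j = random.randint(0, i)
            cards[i], cards[j] = cards[j], cards[i]
        return cards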
Do you have a backend for your app? If so, you can store all successful (winning) shuffle arrays from all users on the server and increase the win rate by dealing out those known wins to other players with some frequency.
I've read that for some games of this sort there is no more efficient way than brute-force checking of all possible moves.
My suggestion for situations like this is to start from the completed state (end of game), and move backwards randomly to create a random start state.
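A sketch of that idea, assuming you can enumerate the legal moves of your solitaire variant and that every move has a well-defined inverse (inverse_moves and apply are placeholders for your own rules):

    import random

    def build_winnable_deal(won_state, inverse_moves, steps=200):
        # Start from a finished (won) game and walk backwards by applying random
        # inverse moves; the state you end with is a guaranteed-winnable deal,
        # because replaying the moves forward wins the game.
        state = won_state
        for _ in range(steps):
            candidates = inverse_moves(state)   # all legal "undo" moves from here
            if not candidates:
                break
            state = random.choice(candidates).apply(state)
        return state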
I run a site with thousands of solitaire games played each day. We tried to build an AI that would tell us which hands were winnable or not. It turns out, as many of the responses say, that there are so many permutations in winning or losing a solitaire game that an AI to detect whether a game is winnable is extremely difficult to build and requires significant processing power. We weren't able to succeed in building it.
Now we simply leverage our user data to find which games are winnable, and offer that as an option for players who want to play only winnable games. Otherwise, they can choose a random game. I would recommend trying something similar!
I'm developing a 2D RTS game, sort of like COC (Clash of Clans). Cool mobile game, huh. But I've run into a problem with path-finding. As usual, I run the path-finding algorithm once for every agent that is placed somewhere on screen by a finger touch, but in some cases this incurs a performance penalty, and the phone gets very hot when the number of agents increases suddenly and simultaneously.
Actually, no matter which path-finding algorithm I use (A*, Dijkstra, or something special and maybe optimal), it is always a time-consuming part of the game loop, especially with massive numbers of agents on a less powerful mobile CPU. As far as I know, in games like this the shortest path is not the focus (will people really care about the exact path an agent walks?); what matters is efficient and natural-looking path-finding. So I've come up with some solutions, which may be impractical.
Solution 1: use a cheaper path-finding algorithm, possibly graph-related or something else, since the shortest path doesn't matter.
Solution 2: put a limit on how many agents the AI module processes for path-finding, e.g. an upper limit on path-finding calls per interval; that is, only one or two of the agents get a plan each frame, and the rest are planned several game frames later. As you know, its drawback is obvious.
That's what I've thought of so far. I hope you game-dev guys can give me brilliant ideas and tricks; I'll appreciate it. Thank you very much.
EDIT:
Here is my related pseudocode; the procedure corresponds to my game logic.
    // inside the logic thread
    procedure putonagent
        if (need to put an agent into world space)
            // do standard A* path-finding for this agent
            path_list = do_aStar_path_finding(attacktargetpos, startpos);
            enqueue(path_list);   // consumed later by the visual agent
        end if
        ......
    end
The path_list queue is eventually consumed by the visual agents for stepping forward. Any hints?
Look up "Hierarchical pathfinding" Say you're driving to a city far away, you don't plan the entire path before you get in the car!
Pathfinding is usually done in steps, like it's not one function call, after N iterations it'll return (and indicate it's not complete) so it can be run at the next available time. Basically rather than a function with locals think operator() and state variables as members of a class.
To make it fast you can make the heuristic crap with A* pathfinding, suppose I use a heuristic of 10* the distance-as-the-crow-flies, it may not find the shortest path but it will have a strong preference for heading towards the target rather than "fanning out" and exploring further around the closed region.
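A minimal sketch combining both points: a resumable pathfinder that keeps its open list as state, expands at most a fixed budget of nodes per call, and uses an inflated heuristic (neighbors and heuristic are placeholders for your own grid):

    import heapq, itertools

    class SlicedAStar:
        # Resumable A*: call step(budget) once per frame until it returns a path,
        # or until self.done is True with no path (target unreachable).
        def __init__(self, start, goal, neighbors, heuristic, weight=10.0):
            self.goal = goal
            self.neighbors = neighbors                      # node -> iterable of (node, cost)
            self.h = lambda n: weight * heuristic(n, goal)  # deliberately inflated heuristic
            self.tie = itertools.count()                    # avoids comparing nodes on ties
            self.open = [(self.h(start), 0.0, next(self.tie), start, None)]
            self.came_from = {}                             # also serves as the closed set
            self.done = False

        def step(self, budget=50):
            expanded = 0
            while self.open and expanded < budget:
                _, g, _, node, parent = heapq.heappop(self.open)
                if node in self.came_from:
                    continue
                self.came_from[node] = parent
                if node == self.goal:
                    self.done = True
                    return self._rebuild(node)
                for nxt, cost in self.neighbors(node):
                    if nxt not in self.came_from:
                        heapq.heappush(self.open, (g + cost + self.h(nxt), g + cost,
                                                   next(self.tie), nxt, node))
                expanded += 1
            if not self.open:
                self.done = True
            return None                                     # not finished this frame

        def _rebuild(self, node):
            path = []
            while node is not None:
                path.append(node)
                node = self.came_from[node]
            return list(reversed(path))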
I am working on a point-of-sale (POS vending machine) project which shows many images on screen, and the customer is expected to browse almost all of them. Here are my questions:
Can you suggest test cases for testing the load time of these images?
What is an acceptable load time for these images on screen?
Are there any standards for testing this kind of acceptable load time?
"What is an acceptable loading time?" is a very broad question, one that has been studied as a research question for human computer interaction issues. In general the answer depends on:
How predictable the loading time is? (does it vary according to time of day, e.g. from 9am to 2am. unpredictable is usually the single most annoying thing about waiting)
How good is the feedback to the user? (does it look like it's broken or have a nice progress bar during the waiting? knowing it's nearly there can help ease the pain, even if the loading times are always consistent)
Who are the users and what other systems have they used previously? If it was all writing in a book before then waiting 2 minutes for images is going to be positively slow. If you're replacing something that took 3 minutes then it's pretty fast.
Ancillary input issues, e.g. does it buffer presses whilst loading and also move items around on the display so people press before it's finished and accidentally press the wrong thing? Does it annoyingly eat input soon after you've started to input it so you have to type/scan it again?
In terms of testing, I'm assuming you're not planning on observing users and asking "how hard would you like to hurt this proxy for your frustration?" What you can realistically test is how the system copes under realistic loads and how accurate the load-time predictions are.
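For the measurable part, a rough sketch of timing the raw image decode on the target hardware (this assumes the images are plain files on disk and uses Pillow purely for illustration; adapt it to however your POS application actually loads them):

    import glob, statistics, time
    from PIL import Image   # Pillow, standing in for whatever the POS app uses

    def measure_load_times(pattern="images/*.png"):
        timings = []
        for path in glob.glob(pattern):
            start = time.perf_counter()
            Image.open(path).load()   # force the full decode, not just the header
            timings.append(time.perf_counter() - start)
        if not timings:
            return {}
        timings.sort()
        return {
            "count": len(timings),
            "mean_s": statistics.mean(timings),
            "p95_s": timings[int(0.95 * (len(timings) - 1))],
            "max_s": timings[-1],
        }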
Many games that are created these days come with their own achievement system that rewards players/users for accomplishing certain tasks. The badges system here on stackoverflow is exactly the same.
There are some problems though for which I couldn't figure out good solutions.
Achievement systems have to watch for certain events all the time; think of a game that offers 20 to 30 achievements for combat alone. The server would have to check for these events (e.g. the player avoided x attacks of the opponent in this battle, or the player walked x miles) all the time.
How can a server handle this large number of operations without slowing down and maybe even crashing?
Achievement systems usually need data that is only used in the core engine of the game and wouldn't be needed outside of it if it weren't for those nasty achievements (think of how often the player jumped during each fight; you don't want to store all of that in a database). What I mean is that in some cases the only way to add an achievement would be to add the code that checks its current state to the game core, and that's usually a very bad idea.
How do achievement systems interact with the core of the game that holds the later unnecessary information? (see examples above)
How are they separated from the core of the game?
My examples may seem "harmless" but think of the 1000+ achievements currently available in World of Warcraft and the many, many players online at the same time, for example.
Achievement systems are really just a form of logging. For a system like this, publish/subscribe is a good approach. In this case, players publish information about themselves, and interested software components (that handle individual achievements) can subscribe. This allows you to watch public values with specialised logging code, without affecting any core game logic.
Take your 'player walked x miles' example. I would implement the distance walked as a field in the player object, since this is a simple value to increment and does not require increasing space over time. An achievement that rewards players that walk 10 miles is then a subscriber of that field. If there were many players then it would make sense to aggregate this value with one or more intermediate broker levels. For example, if 1 million players exist in the game, then you might aggregate the values with 1000 brokers, each responsible for tracking 1000 individual players. The achievement then subscribes to these brokers, rather than to all the players directly. Of course, the optimal hierarchy and number of subscribers is implementation-specific.
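A stripped-down sketch of that shape (the field name and classes are made up for illustration; a real system would add the broker layer described above):

    class Publisher:
        # Anything whose fields other code may want to watch, e.g. a player.
        def __init__(self):
            self.subscribers = {}               # field name -> list of callbacks

        def subscribe(self, field, callback):
            self.subscribers.setdefault(field, []).append(callback)

        def publish(self, field, value):
            for callback in self.subscribers.get(field, []):
                callback(value)

    class Player(Publisher):
        def __init__(self):
            super().__init__()
            self.miles_walked = 0.0

        def walk(self, miles):
            self.miles_walked += miles
            self.publish("miles_walked", self.miles_walked)

    class TenMileAchievement:
        # Pure observer: it never touches core game logic.
        def __init__(self, player, award):
            self.award = award
            self.earned = False
            player.subscribe("miles_walked", self.on_update)

        def on_update(self, miles):
            if not self.earned and miles >= 10.0:
                self.earned = True
                self.award()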
In the case of your fight example, players could publish details of their last fight in exactly the same way. An achievement that monitors jumping in fights would subscribe to this info, and check the number of jumps. Since no historical state is required, this does not grow with time either. Again, no core code need be modified; you only need to be able to access some values.
Note also that most rewards do not need to be instantaneous. This allows you some leeway in managing your traffic. In the previous example, you might not update the broker's published distance travelled until a player has walked a total of one more mile, or a day has passed since last update (incrementing internally until then). This is really just a form of caching; the exact parameters will depend on your problem.
You can even do this if you don't have access to source, for example in videogame emulators. A simple memory-scan tool can be written to find the displayed score for example. Once you have that your achievement system is as easy as polling that memory location every frame and seeing if their current "score" or whatever is higher than their highest score. The cool thing about videogame emulators is that memory locations are deterministic (no operating system).
There are two ways this is done in normal games.
Offline games: nothing as complex as pub/sub - that's massive overkill. Instead you just use a big map / dictionary, and log named "events". Then every X frames, or Y seconds (or, usually: "every time something dies, and 1x at end of level"), you iterate across achievements and do a quick check. When the designers want a new event logged, it's trivial for a programmer to add a line of code to record it.
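In sketch form, the offline approach is little more than this (the event names and achievement conditions are illustrative):

    from collections import Counter

    events = Counter()                       # one big map of named counters

    def log_event(name, amount=1):
        # The one line a programmer adds wherever the designers want something logged.
        events[name] += amount

    ACHIEVEMENTS = {
        "Pacifist": lambda e: e["level_finished"] >= 1 and e["enemy_killed"] == 0,
        "Marathon": lambda e: e["miles_walked"] >= 26,
    }
    unlocked = set()

    def check_achievements():
        # Called every X frames, on a death, or once at the end of a level.
        for name, condition in ACHIEVEMENTS.items():
            if name not in unlocked and condition(events):
                unlocked.add(name)
                print("Achievement unlocked:", name)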
NB: pub/sub is a poor fit for this IME because the designers never want "when player.distance = 50". What they actually want is "when player's distance as perceived by someone watching the screen seems to have travelled past the first village, or at least 4 screen widths to the right" -- i.e. far more vague and abstract than a simple counter.
In practice, that means that the logic goes at the point where the change happens (before the event is even published), which is a poor way to use pub/sub. There are some game engines that make it easier to do a "logic goes at the point of receipt" (the "sub" part), but they're not the majority, IME.
Online games: almost identical, except you store "counters" (ints that only go up), usually also "deltas" (circular buffers of what's happened frame to frame), and "events" (complex things that happened in game that can be hard-coded into a single ID plus a fixed-size array of parameters). These are then exposed via e.g. SNMP for other servers to collect asynchronously and at low CPU cost.
i.e. almost the same as the offline case above, except that you're careful to do two things:
Fixed-size memory usage; and if the "reading" servers go offline for a while, achievements won in that time will need to be re-won (although you usually can have a customer support person manually go through the main system logs and work out that the achievement "probably" was won, and manually award it)
Very low overhead; SNMP is a good standard for this, and most teams I know end up using it
If your game architecture is event-driven, then you can implement the achievement system using finite-state machines.
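For example, a small sketch of that approach: each achievement is a tiny state machine that consumes the game's event stream and only stores its current state (the event names and the sample achievement are made up):

    class AchievementFSM:
        # transitions: {(state, event): next_state}; reaching accept_state unlocks it.
        def __init__(self, name, transitions, start, accept_state):
            self.name = name
            self.transitions = transitions
            self.state = start
            self.accept = accept_state
            self.unlocked = False

        def on_event(self, event):
            if self.unlocked:
                return
            self.state = self.transitions.get((self.state, event), self.state)
            if self.state == self.accept:
                self.unlocked = True

    # Example: "win three fights in a row".
    hat_trick = AchievementFSM(
        "Hat Trick",
        transitions={
            ("start", "fight_won"): "one",
            ("one", "fight_won"): "two",
            ("two", "fight_won"): "done",
            ("one", "fight_lost"): "start",
            ("two", "fight_lost"): "start",
        },
        start="start",
        accept_state="done",
    )
    # In an event-driven core you just forward every event:
    #   for fsm in active_achievements: fsm.on_event(event_name)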