You may know of certain programs, e.g. some password-cracking programs, that can be stopped while they're running, and when you run the program again (with or without entering the same input), it is able to continue from where it left off. I wonder what kind of technique those programs are using?
[Edit] I am writing a program based mainly on recursive functions. To my knowledge, it is incredibly difficult to save such state in my program. Is there any technique that somehow saves the stack contents, function calls, and data involved in my program, so that when it is restarted it can run as if it had never been stopped? This is just a concept I have in mind, so please forgive me if it doesn't make sense...
It's going to be different for every program. For something as simple as, say, a brute-force password cracker, all that would really need to be saved is the last password tried. For other apps you may need to store several data points, but that's really all there is to it: saving and loading the minimum amount of information needed to reconstruct where you were.
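For instance, a brute-force loop might checkpoint like this (a minimal Python sketch; the checkpoint file name and the candidate generator are invented for illustration):

import os

CHECKPOINT = "cracker.checkpoint"  # hypothetical file name

def candidates(start=0):
    """Enumerate candidate passwords; here just numbered strings."""
    n = start
    while True:
        yield n, "pw%08d" % n
        n += 1

def crack(target_hash, is_match):
    # Resume from the last saved position, if any.
    start = 0
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            start = int(f.read())
    for n, pw in candidates(start):
        if is_match(pw, target_hash):
            return pw
        if n % 10000 == 0:  # don't hit the disk on every attempt
            with open(CHECKPOINT, "w") as f:
                f.write(str(n))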
Another common technique is to save an image of the entire program state. If you've ever played with a game-console emulator with the ability to save state, this is how they do it. A similar technique exists in Python with pickling. If the environment is stable enough (i.e. no varying pointers) you simply copy the entire app's memory state into a binary file. When you want to resume, you copy it back into memory and begin running again. This gives you near-perfect state recovery, but whether it's possible at all is highly environment/language dependent. (For example: most C++ apps couldn't do this without help from the OS, or unless they were built VERY carefully with this in mind.)
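Tying this back to the question's recursive program: a common trick is to replace the call stack with an explicit stack of pending work, which pickling can then save wholesale. A rough Python sketch, with placeholder process/children functions standing in for the real recursion body:

import os, pickle

STATE_FILE = "state.pickle"  # hypothetical

def process(node):      # placeholder for the real per-node work
    return node * 2

def children(node):     # placeholder: nodes below 1000 spawn two more
    return [2 * node, 2 * node + 1] if node < 1000 else []

def run(work_stack, results):
    """Iterative version of a recursive traversal: each stack entry
    is whatever a recursive call's arguments would have been."""
    steps = 0
    while work_stack:
        node = work_stack.pop()
        results.append(process(node))         # "body" of the recursion
        work_stack.extend(children(node))     # the "recursive calls"
        steps += 1
        if steps % 1000 == 0:                 # periodic checkpoint
            with open(STATE_FILE, "wb") as f:
                pickle.dump((work_stack, results), f)
    if os.path.exists(STATE_FILE):
        os.remove(STATE_FILE)                 # finished: clear checkpoint
    return results

def resume_or_start(root):
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE, "rb") as f:
            work_stack, results = pickle.load(f)
    else:
        work_stack, results = [root], []
    return run(work_stack, results)

print(len(resume_or_start(1)))  # safe to kill and rerun mid-way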
Use Persistence.
Persistence is a mechanism through which the life of an object extends beyond the program's execution lifetime.
Store the state of the objects involved in the process on the local hard drive using serialization.
Implement Persistent Objects with Java Serialization
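A rough analogue in Python, using pickle in place of Java serialization (the log-file resource is just an example): transient OS resources are dropped on save and re-acquired on load.

import pickle

class Session:
    """A persistent object whose open file handle is transient."""
    def __init__(self, log_path):
        self.log_path = log_path
        self.counter = 0
        self.log = open(log_path, "a")       # OS resource: not picklable

    def __getstate__(self):
        state = self.__dict__.copy()
        del state["log"]                     # drop the transient handle
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.log = open(self.log_path, "a")  # re-acquire on load

s = Session("session.log")
s.counter = 42
blob = pickle.dumps(s)                       # save...
restored = pickle.loads(blob)                # ...and the handle is reopened
assert restored.counter == 42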
To achieve this, you need to continually save state (i.e. where you are in your calculation). This way, if you interrupt the program, when it restarts it will know it is in the middle of a calculation, and where it was in that calculation.
You also probably want to have your main calculation in a separate thread from your user interface - this way you can respond to "close / interrupt" requests from your user interface and handle them appropriately by stopping / pausing the thread.
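A minimal Python sketch of that split (all names are illustrative): the calculation runs in a worker thread, checkpoints periodically, and polls a stop flag that the UI thread sets on a close/interrupt request.

import json, threading

stop_requested = threading.Event()

def save_checkpoint(state):
    with open("calc.json", "w") as f:
        json.dump(state, f)

def load_checkpoint():
    try:
        with open("calc.json") as f:
            return json.load(f)
    except FileNotFoundError:
        return None

def calculate():
    state = load_checkpoint() or {"i": 0}
    while state["i"] < 10_000_000:
        state["i"] += 1                      # the "real" work goes here
        if state["i"] % 100_000 == 0:
            save_checkpoint(state)           # periodic checkpoint
        if stop_requested.is_set():          # UI asked us to quit
            save_checkpoint(state)
            return

worker = threading.Thread(target=calculate)
worker.start()
# ... the UI thread would call stop_requested.set() on close ...
stop_requested.set()
worker.join()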
For Linux, there is a project named CRIU, which supports process-level save and resume. It is much like OS hibernation and resume, but with the granularity broken down to individual processes. It also supports container technologies, specifically Docker. Refer to http://criu.org/ for more information.
I just started to learn Smalltalk, went through its syntax, but haven't done any real coding with it yet. While reading some introductory articles and some SO questions like:
What gives Smalltalk the ability to do image persistence, and why can't languages like Ruby/Python serialize themselves?
What is a Smalltalk “image”?
One question always comes into my mind: How does Smalltalk image handle IO?
A Smalltalk program can resume from where it exited, using information stored in the image. Say I have some open TCP connections (not to mention all sorts of buffers); how do they get recovered? There seems to be no way other than reopening them (confirmed by this answer). And if Smalltalk does reopen those connections, isn't it going against the idea of "resume execution of the program at a later time exactly from where you left off"? Or is there some magic behind it?
I don't mind if the answer is specific to certain dialects, say Pharo.
Also would be interested to know some resources to learn more about this topic.
As you have noted, some resources are not part of the memory heap and therefore will not be recovered just by loading the image back into memory. In particular this applies to all kinds of resources managed by the operating system, and cross-platform Smalltalks, where you can copy the image from one OS to another and restart it, may even have to restore such resources differently than they were before.
The trick in the Smalltalks I have seen is that all classes receive a message immediately after the image resumed. By implementing a method for that message they can restore any transient resources (sockets, connections, foreign handles, ...) that their instances might need. To find all instances some Smalltalks provide messages such as allInstances, or you must maintain a registry of the relevant objects yourself.
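Outside of Smalltalk you can approximate that mechanism with a registry of wrapper objects that all receive a start-up callback. A Python sketch of the idea (not how any particular Smalltalk implements it; the reconnect will of course only succeed if the peer is still listening):

import socket

RESUMABLE = []   # the image-wide registry of transient-resource owners

class Connection:
    def __init__(self, host, port):
        self.host, self.port = host, port
        self.sock = None
        RESUMABLE.append(self)

    def on_image_startup(self):
        # Re-establish the OS-level resource; the wrapper object itself
        # survived in the "image", only the socket is new.
        self.sock = socket.create_connection((self.host, self.port))

def resume_image():
    for obj in RESUMABLE:        # the "message sent after resume"
        obj.on_image_startup()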
And if Smalltalk does reopen those connections, isn't it going against the idea of "resume execution of the program at a later time exactly from where you left off"?
From a user perspective, after that reinitialization and reallocation of resources, everything still looks like "exactly where you left off", even though some technical details have changed under the hood. Of course this won't be the case if it is impossible to restore the resources (no network, for example). Some limits cannot be overcome by Smalltalk magic.
How does the Smalltalk image handle IO?
To make that resumption described above possible, all external resources are wrapped and represented as some kind of Smalltalk object. The wrapper objects will be persisted in the image, although they will be disconnected from the outside world when Smalltalk is shut down. The wrappers can then restore the external resources after the image has been started up again.
It might be useful to add a small history lesson: Smalltalk was created in the 1970s at Xerox's Palo Alto Research Center (PARC). At the same time, in the same place, personal computing was invented. Also at the same time, in the same place, Ethernet was invented.
Smalltalk was a single integrated system: it was at once the IDE, the GUI, the shell, the kernel, and the OS; even the microcode for the CPU was part of the Smalltalk system. Smalltalk didn't have to deal with non-Smalltalk resources from outside the image, because for all intents and purposes there was no "outside". It was possible to re-create the exact machine state, since there wasn't really any boundary between the virtual machine and the machine. (Almost all of the system was implemented in Smalltalk. There were only a couple of tiny bits of microcode, assembly, and Mesa. Even what we would consider device drivers nowadays were written in Smalltalk.)
There was no need to persist network connections to other computers, because nobody outside of a few labs had networks. Heck, almost no organization even had more than one computer. There was no need to interact with the host OS because Smalltalk machines didn't have an OS; Smalltalk was the OS. (You probably know the famous quote from Dan Ingalls' Design Principles Behind Smalltalk: "An operating system is a collection of things that don't fit into a language. There shouldn't be one.") Because Smalltalk was the OS, there was no need for a filesystem, all data was simply objects.
Smalltalk cannot control what is outside of Smalltalk. This is a general property that is not unique to Smalltalk. You can break encapsulation in Java by editing the compiled bytecode. You can break type-safety in Haskell by editing the compiled machine code. You can create a memory leak in Rust by editing the compiled machine code.
So, all the guarantees, features, and properties of Smalltalk are only available as long as you don't leave Smalltalk.
Here's an even simpler example that does not involve networking or moving the image to a different machine: open a file in the host filesystem. Suspend the image. Delete the file. Resume the image. There is no possible way to resume the image in the same state.
All Smalltalk can do is approximate the state of external resources as well as it possibly can. It can attempt to re-open the file. If the file is gone, it can maybe attempt to create one with the same name. It can try to resume a network connection. If that fails, it can try to re-establish the connection, creating a new connection to the same address.
But ultimately, everything outside the image is outside of the control of Smalltalk, and there is nothing Smalltalk can do about it.
Note that this impedance mismatch between the inside of the image and the "outside world" is one of the major criticisms that is typically leveled at Smalltalk. And if you look at Smalltalk systems that try to integrate deeply with the outside world, they often have to compromise. E.g. GNU Smalltalk, which is specifically designed to integrate deeply into a Unix system, actually gives up on the image and persistence.
I'll add one more angle to the nice answers of Joerg and JayK.
What is important to understand is the context of the time and age in which Smalltalk was created. (Joerg already pointed out the important aspect of everything being Smalltalk.) We are talking about a time right after the ARPANET.
I think they were not expecting the collaboration and interconnection we have nowadays. The image was meant as a record of a single programmer's session, without any external communication. Times changed, and now you naturally ask the IO question. As JayK noted, you can re-initialize the image, so you get an image similar to the point where you ended your session.
The real issue, and the reason I've decided to add my 2c, is collaboration among multiple developers. This is where the image, as originally conceived, has, in my opinion, outlived itself. There is no way to share an image among multiple developers so they can develop at the same time and share the code.
Imagine: wouldn't it be great if you could have one central image, and each developer kept only diffs and opened their environment where they left off, with everyone's new code incorporated? Sounds familiar? This is the kind of VCS we have, like Mercurial, Git, etc., only without the image, code only. Sadly, such image re-construction does not exist.
Smalltalk is trying to catch up with the standard versioning tooling we use nowadays.
(Side note: Smalltalk had its own "versioning" systems, but they are rather lacking in many ways compared to current VCSs. The ones used are Monticello (Pharo), ENVY (VA Smalltalk), and Store (VisualWorks).)
Pharo is now trying to catch the train and implement Git functionality via Iceberg. The Smalltalk/X-jv branch has integrated decent Mercurial support. Squeak has Squot for Git (for now) [thank you @JayK].
Now "only" (that is BIG only) to add support for central/diff images :).
For some practical code dealing with image startUp and shutDown, take a look at the class side of ZnServer. SessionManager is the class providing all the functionality you need to deal with giving up system resources on shutdown and re-acquiring them on startup.
Need to chime in on the source control discussion a bit.
The solution I have seen with VS(E)/VA is that you work with Envy/PVCS and share this repository with the developers.
Every developer has his/her own image with all the pros and cons of images.
One company I was working for discussed whether it wouldn't make sense to rebuild the development image from scratch every couple of weeks, in order to get rid of everything that might dilute code quality (open file handles, global variables, pool dictionary entries; you name it, you will get it, and it will crash your code at run-time).
When it comes to building a "run-time", you take the plain tiny standard image, and all your code comes from bind files and SLLs.
For background, I have been working on an RPG based on Ray Wenderlich's tutorials (for example, http://www.raywenderlich.com/1163/how-to-make-a-tile-based-game-with-cocos2d).
Now I am trying to build a scripted event/Cut scene system so that for instance when a player enters a building, the different characters can have a discussion of the current events, before continuing the adventure. My only problem is I can't really visualize how one would implement this.
I would guess some sort of one-time-use trigger, maybe kept in a large switch statement on a singleton somewhere, which perhaps draws all the temporary characters? Then the event deactivates itself.
I am just looking for a blueprint of how one would do this. Although programming examples are welcome as well.
It depends a lot on how much time you want to commit to the system and how versatile you want the final system to be. A powerful cut-scene system can be flexible enough to be used in almost every interaction in a typical 2d RPG.
If you want to go all out, I would suggest a heavily data-driven approach. Keep as much data in files as possible and use the filesystem to your advantage. If you say 'all the dialog scenes are in this folder', then adding a new scene just means dropping it in the folder, rather than creating the scene and then touching some master switch statement somewhere. Just keep in mind that with a large system you want to make adding a new cutscene as simple as possible, not 400 different places to touch.
I would also stay away from switch statements for tracking progress in a cutscene. They add a lot of code overhead per scene. Ideally a cutscene would be as simple as an array of data and a position. Your cutscene manager, the singleton, can parse through the array, decode the data into commands, and fire them off, as sketched below.
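For example, a cutscene could be nothing more than a list of (command, arguments) pairs that the manager walks through, dispatching each one. A rough Python sketch; the command names and print-based handlers are invented stand-ins for real engine calls:

CUTSCENE = [                       # would be loaded from a file in practice
    ("spawn",   {"actor": "elder", "tile": (4, 7)}),
    ("say",     {"actor": "elder", "text": "Welcome, traveler."}),
    ("move",    {"actor": "hero",  "tile": (5, 7)}),
    ("despawn", {"actor": "elder"}),
]

class CutsceneManager:
    def __init__(self, scene):
        self.scene = scene
        self.pos = 0               # progress is just an index, no switch

    def update(self):
        """Fire the next command; called once per frame or per 'step'."""
        if self.pos >= len(self.scene):
            return False           # scene finished
        command, args = self.scene[self.pos]
        getattr(self, "cmd_" + command)(**args)
        self.pos += 1
        return True

    def cmd_spawn(self, actor, tile):   print("spawn", actor, "at", tile)
    def cmd_say(self, actor, text):     print(actor + ":", text)
    def cmd_move(self, actor, tile):    print("move", actor, "to", tile)
    def cmd_despawn(self, actor):       print("despawn", actor)

mgr = CutsceneManager(CUTSCENE)
while mgr.update():
    pass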
Sorry if that's a bit vague, but a lot of these decisions depend on how your engine is structured and what you want out of the system. Keep in mind that the more general the system is, the more uses you may find for it going forward, but it will take longer to get up and running to begin with.
You can just check which tile you are on while moving, and when you are on a specific tile you can start a cutscene. You can also add a tag to your map via Tiled (the recommended editor for use with CCTMXTiledMap) to specify where the cutscene should begin, just like the character start point in that tutorial. Then you check for the specified trigger (either a specific tile or the point tagged in the map) every game cycle. After that it's almost easy: you just freeze the controls and play prerecorded camera and object movements until the cutscene finishes, then restore the game to normal mode and turn off the check for the trigger.
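In rough Python-flavored pseudocode, the per-cycle check could look like this (the trigger table, cutscene id, and stubbed engine calls are all assumptions):

# Hypothetical per-cycle hook: trigger tiles were tagged in Tiled.
TRIGGERS = {(10, 3): "elder_meeting"}   # tile coord -> cutscene id
fired = set()                           # remembers one-time-use triggers

def freeze_controls():   print("controls frozen")      # engine stubs
def unfreeze_controls(): print("controls restored")
def play_cutscene(s):    print("playing cutscene", s)

def on_game_cycle(player_tile):
    scene = TRIGGERS.get(player_tile)
    if scene and scene not in fired:
        fired.add(scene)
        freeze_controls()
        play_cutscene(scene)            # prerecorded camera/movements
        unfreeze_controls()

on_game_cycle((10, 3))   # fires once
on_game_cycle((10, 3))   # trigger already used, nothing happens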
When things go badly awry in embedded systems I tend to write an error to a special log file in flash and then reboot (there's not much option if, say, you run out of memory).
I realize even that can go wrong, so I try to minimize it (by not allocating any memory during the final write, and boosting the write process's priority).
But that relies on someone retrieving the log file. Now I was considering sending a message over the intertubes to report the error before rebooting.
On second thoughts, of course, it would be better to send that message after reboot, but it did get me to thinking...
What sort of things ought I be doing if I discover an irrecoverable error, and how can I do them as safely as possible in a system which is in an unstable state?
One strategy is to use a section of RAM that is not initialised during power-on/reboot. That can be used to store data that survives a reboot; then, when your app restarts, early on in the code it can check that memory and see if it contains any useful data. If it does, it can write it to a log or send it over a comms channel.
How to reserve a section of RAM that is non-initialised is platform-dependent, and depends on whether you're running a full-blown OS (Linux) that manages RAM initialisation or not. If you're on a small system where RAM initialisation is done by the C start-up code, then your compiler probably has a way to put data (a file-scope variable) in a different section (besides the usual ones, e.g. .bss) which is not initialised by the C start-up code.
If the data is not initialised, then it will probably contain random data at power-up. To determine whether it contains random data or valid data, use a hash, e.g. CRC-32, to determine its validity. If your processor has a way to tell you if you're in a reboot vs a power-up reset, then you should also use that to decide that the data is invalid after a power-up.
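Whatever the platform mechanics, the validity check itself is simple: store a checksum next to the data, and trust the data only if the two still agree. A language-neutral sketch of that check, written in Python for brevity (on a real target this would live in your C start-up code):

import struct, zlib

def seal(payload: bytes) -> bytes:
    """Prefix the payload with its CRC-32 so a reader can vet it later."""
    return struct.pack("<I", zlib.crc32(payload) & 0xFFFFFFFF) + payload

def unseal(blob: bytes):
    """Return the payload if the CRC matches, else None (random RAM)."""
    if len(blob) < 4:
        return None
    crc = struct.unpack("<I", blob[:4])[0]
    payload = blob[4:]
    return payload if (zlib.crc32(payload) & 0xFFFFFFFF) == crc else None

print(unseal(seal(b"last words before reboot")))  # the payload
print(unseal(b"\x99" * 16))                       # almost surely None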
There is no single answer to this. I would start with a Watchdog timer. This reboots the system if things go terribly awry.
Something else to consider - what is not in a log file is also important. If you have routine updates from various tasks/actions logged then you can learn from what is missing.
Finally, in the case that things go bad and you are still running: enter a critical section, turn off as much of the OS as possible, shut down peripherals, log as much state info as possible, then reboot!
The one thing you want to make sure of is that you do not corrupt data that might legitimately be in flash. If you try to write information in a crash situation, you need to do so carefully and with the knowledge that the system might be in a very bad state, so anything you do needs to be done in a way that doesn't make things worse.
Generally, when I detect a crash state I try to spit information out a serial port. A UART driver that's accessible from a crashed state is usually pretty simple: it just needs to be a simple polling driver that writes characters to the transmit data register when the busy bit is clear. A crash handler generally doesn't need to play nice with multitasking, so polling is fine. And it generally doesn't need to worry about incoming data, or at least not in a fashion that can't be handled by polling. In fact, a crash handler generally cannot expect multitasking and interrupt handling to be working, since the system is screwed up.
I try to have it write the register file, a portion of the stack and any important OS data structures (the current task control block or something) that might be available and interesting. A watchdog timer usually is responsible for resetting the system in this state, so the crash handler might not have the opportunity to write everything, so dump the most important stuff first (do not have the crash handler kick the watchdog - you don't want to have some bug mistakenly prevent the watchdog from resetting the system).
Of course this is most useful in a development setup, since when the device is released it might not have anything attached to the serial port. If you want to be able to capture these kinds of crash dumps after release, then they need to get written somewhere appropriate (like maybe a reserved section of flash - just make sure it's not part of the normal data/file system area unless you're sure it can't corrupt that data). Of course you'd need to have something examine that area at boot so it can be detected and sent somewhere useful or there's no point, unless you might get units back post-mortem and can hook them up to a debugging setup that can look at the data.
I think the most well-known example of proper exception handling is a missile self-destruction. The exception was caused by an arithmetic overflow in software. There obviously was a lot of tracing/recording media involved, because the root cause is known: it was discovered and debugged.
So, every embedded design must include two features: recording media, like your log file, and a graceful halt, like disabling all timers/interrupts, shutting down all ports, and sitting in an infinite loop (or, in the case of a missile, self-destruction).
Writing messages to flash before reboot in embedded systems is often a bad idea. As you point out, no one is going to read the message, and if the problem is not transient you wear out the flash.
When the system is in an inconsistent state there is almost nothing you can do reliably, and the best thing to do is to restart the system as quickly as possible so that you can recover from transient failures (timing, special external events, etc.). In some systems I have written a trap handler that uses some reserved memory so that it can set up the serial port and then emit a stack dump and register contents without requiring extra stack space or clobbering registers.
A simple restart with a dump like that is reasonable because if the problem is transient the restart will resolve the problem and you want to keep it simple and let the device continue. If the problem is not transient you are not going to make forward progress anyway and someone can come along and connect a diagnostic device.
Very interesting paper on failures and recovery: WHY DO COMPUTERS STOP AND WHAT CAN BE DONE ABOUT IT?
For a very simple system, do you have a pin you can wiggle? For example, when you start up configure it to have high output, if things go way south (i.e. watchdog reset pending) then set it to low.
Have you ever considered using a garbage collector?
And I'm not joking. If you do dynamic allocation at runtime in embedded systems, why not reserve a mark buffer and mark-and-sweep when the excrement hits the rotating air blower? You've probably got the source of the malloc (or whatever) implementation, right?
If you don't have library sources for your embedded system, forget I ever suggested it, but tell the rest of us what equipment it is in so we can avoid ever using it. Yikes (how do you debug without library sources?).
If your system is already dead... who cares how long it takes? It obviously isn't critical that it be running this instant; if it were, you couldn't risk "dying" like this anyway.
Almost every application out there performs i/o operations, either with disk or over network.
As my applications work fine in the development-time environment, I want to be sure they will still do so when the Internet connection is slow or unstable, or when the user attempts to read data from a badly-written CD.
What tools would you recommend to simulate:
slow i/o (opening files, closing files, reading and writing, enumeration of directory items)
occasional i/o errors
occasional 'access denied' responses
packet loss in tcp/ip
etc...
EDIT:
Windows:
The closest solution to the job as described seems to be Holodeck, commercial software (>$900).
Linux:
No open solution has been found so far, but the same effect can be achieved as described by smcameron and krosenvold.
The decorator pattern is a good idea. It would require wrapping my i/o classes, but would result in a testing framework. The only remaining untested code would be in 3rd-party libraries. Yet I decided not to go this way, but to leave my code as it is and simulate i/o errors from the outside.
I now know that what I need is called 'fault injection'.
I thought it was a common part of production testing, with plenty of solutions I just didn't know about.
(By the way, another similar good idea is 'fuzz testing', thanks to Lennart)
To my mind, the problem is still not worth $900.
I'm going to implement my own open-source tool based on hooks (targeting win32).
I'll update this post when I'm done with it. Come back in 3 or 4 weeks or so...
What you need is a fault injecting testing system. James Whittaker's 'How to break software' is a good read on this subject and includes a CD with many of the tools needed.
If you're on Linux you can do tons of magic with iptables:
iptables -I OUTPUT -p tcp --dport 7991 -j DROP
This can simulate connections going up and down as well. There are lots of tutorials out there.
Check out "Fuzz testing": http://en.wikipedia.org/wiki/Fuzzing
At a programming level, many frameworks will let you wrap the IO stream classes and delegate calls to the wrapped instance. I'd do this and add in a couple of wait calls in the key methods (writing bytes, closing the stream, throwing IO exceptions, etc.). You could write a few of these with different failure or issue types and use the decorator pattern to combine them as needed.
This should give you quite a lot of flexibility with tweaking which operations would be slowed down, inserting "random" errors every so often etc.
The other advantage is that you could develop it in the same code as your software so maintenance wouldn't require any new skills.
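A sketch of such a wrapper in Python (the delay and failure rate are arbitrary knobs you would tune per test):

import random, time

class FlakyFile:
    """Delegates to a real file object, injecting latency and errors."""
    def __init__(self, inner, delay=0.2, error_rate=0.05):
        self._inner = inner
        self._delay = delay
        self._error_rate = error_rate

    def _misbehave(self):
        time.sleep(self._delay)                    # simulate slow i/o
        if random.random() < self._error_rate:
            raise IOError("injected fault")        # simulate bad media

    def read(self, *args):
        self._misbehave()
        return self._inner.read(*args)

    def write(self, data):
        self._misbehave()
        return self._inner.write(data)

    def __getattr__(self, name):                   # delegate everything else
        return getattr(self._inner, name)

f = FlakyFile(open("data.txt", "w"))
f.write("hello")   # slow, and occasionally raises IOError
f.close()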
You don't say what OS, but if it's Linux or Unix-ish, you can wrap open(), read(), write(), or any library or system call, etc., with an LD_PRELOAD-able library to inject faults.
Along these lines:
http://scaryreasoner.wordpress.com/2007/11/17/using-ld_preload-libraries-and-glibc-backtrace-function-for-debugging/
I didn't end up writing my own file-system filter, as I initially thought I would, because there's a simpler solution.
1. Network i/o
I've found at least 2 ways to simulate i/o errors here.
a) Running a virtual machine (such as VMware) allows you to configure bandwidth and packet-loss rate. VMware supports on-machine debugging.
b) Running a proxy on the local machine and tunneling all the traffic through it. For udp/tcp communications a proxifier (e.g. WideCap) can be used.
2. File i/o
I've managed to reduce this scenario to the previous one by mapping a drive letter to a network share which resides inside the virtual machine. The file i/o will be slow.
A cheaper alternative exists: to set up a local ftp server (e.g. FileZilla), configure speeds and use Novell's NetDrive to access it.
You'll want to set up a test lab for this. What type of application are you building, anyway? Are you really expecting the application to be fed corrupt data?
A test technique I know the Microsoft Exchange Server people tried was sending noise to the server. Basically feeding every possible input with seemingly random data. They managed to crash the server quite often this way.
But still, if you can't trust input that hasn't been signed then general rules apply. Track every operation which could potentially be untrusted (result of corrupt data) and you should be able to handle most problems gracefully.
Just test your application's behavior on random input; that should catch most problems, but you'll never be able to fully protect yourself from corrupt data. That's just not possible, as the data could be part of some internal buffer being handed off within the application itself.
Be mindful of when and how you decode data. That is all.
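The noise-feeding approach described above is easy to automate. A minimal fuzz loop in Python; parse_record is a stand-in for whatever decodes your input:

import random

def parse_record(data: bytes):
    """Stand-in for the real decoder under test."""
    if len(data) < 4:
        raise ValueError("truncated header")
    return data[:4], data[4:]

def fuzz(iterations=10000, seed=1234):
    rng = random.Random(seed)          # seeded, so crashes reproduce
    for i in range(iterations):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        try:
            parse_record(blob)
        except ValueError:
            pass                       # expected: graceful rejection
        except Exception as e:         # anything else is a real bug
            print("input %d crashed the parser: %r" % (i, e))

fuzz()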
The first thing you'll need to do is define what "correct" means under these circumstances. You can only test against a definition of what behaviour is intended.
The tactics of testing will depend on technology. In the context of automated unit testing, I have found it very useful, in OO languages such as Java, to use various flavors of "mocking" or "stubbing" to pass e.g. misbehaving InputStreams to parts of my code that used file I/O.
Consider Holodeck for some of the fault injection. If you have access to spare hardware, you can simulate network impairment using Netem, or a commercial product based on it, the Mini-Maxwell, which is much more expensive than free but possibly easier to use.
One of our next projects is supposed to be a MS Windows based game (written in C#, with a winform GUI and an integrated DirectX display-control) for a customer who wants to give away prizes to the best players. This project is meant to run for a couple of years, with championships, ladders, tournaments, player vs. player-action and so on.
One of the main concerns here is cheating, as a player would benefit dramatically if he was able to - for instance - let a custom made bot play the game for him (more in terms of strategy-decisions than in terms of playing many hours).
So my question is: what technical possibilities do we have to detect bot activity? We can of course track the number of hours played, analyze strategies to detect anomalies, and so on, but as far as this question is concerned, I would be more interested in knowing details like:
how to detect if another application makes periodical screenshots?
how to detect if another application scans our process memory?
what are good ways to determine whether user input (mouse movement, keyboard input) is human-generated and not automated?
is it possible to detect if another application requests information about controls in our application (position of controls, etc.)?
what other ways exist in which a cheater could gather information about the current game state, feed it to a bot, and send the determined actions back to the client?
Your feedback is highly appreciated!
I wrote d2botnet, a .net Diablo 2 automation engine, a while back, and something you can add to your list of things to watch out for is malformed/invalid/forged packets. I assume this game will communicate over TCP. Packet sniffing and forging are usually the first way games (online, anyway) are automated. I know Blizzard would detect malformed packets, something I tried to stay away from triggering in d2botnet.
So make sure you detect invalid packets. Encrypt them. Hash them. Do something to make sure they are valid. If you think about it, if someone can know exactly what every packet sent back and forth means, they don't even need to run the client software, which then makes any process-based detection a moot point. So you can also add in some sort of packet-based challenge-response that your client must know how to respond to (a sketch follows).
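One cheap way to make forged packets detectable is to append a keyed MAC that only your client and server can compute. A Python sketch of the idea (the shared secret is illustrative; in practice hiding the key inside the client is its own arms race):

import hashlib, hmac

SECRET = b"baked-into-client-and-server"   # illustrative only

def sign_packet(payload: bytes) -> bytes:
    mac = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return payload + mac                   # 32-byte MAC appended

def verify_packet(packet: bytes) -> bytes:
    payload, mac = packet[:-32], packet[-32:]
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("forged or corrupted packet")
    return payload

pkt = sign_packet(b"MOVE 10 3")
assert verify_packet(pkt) == b"MOVE 10 3"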
Just an idea: what if the 'cheater' runs your software in a virtual machine (like VMware) and takes screenshots of that window? I doubt you can defend against that.
You obviously can't defend against the 'analog gap' either, e.g. the cheater's system making external screenshots with a high-quality camera; I guess that's only a theoretical issue.
Maybe you should investigate chess sites. There is a lot of money in chess, they don't like bots either - maybe they have come up with a solution already.
The best protection against automation is to not have tasks that require grinding.
That being said, the best way to detect automation is to actively engage the user and require periodic CAPTCHA-like tests (except without the image and so forth). I'd recommend utilizing a database of several thousand simple one-off questions that get posed to the user every so often.
However, based on your question, I'd say your best bet is to not implement the anti-automation features in C#. You stand very little chance of detecting well-written hacks/bots from within managed code, especially when all the hacker has to do is go into ring 0 to avoid detection via any standard method. I'd recommend a Warden-like approach (a downloadable module that you can update whenever you feel like it) combined with a kernel-mode driver that hooks all of the Windows API functions and watches them for "inappropriate" calls. Note, however, that you're going to run into a lot of false positives, so you must not base your banning system on automated data alone. Always have a human look it over before banning.
A common method of listening to keyboard and mouse input in an application is setting a windows hook using SetWindowsHookEx.
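For illustration, here is roughly what installing a low-level keyboard hook looks like when driven from Python via ctypes (Windows only; a sketch of the API usage, not production code; some Windows versions insist on a real module handle instead of None):

import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32
WH_KEYBOARD_LL = 13

# LowLevelKeyboardProc: LRESULT (int nCode, WPARAM wParam, LPARAM lParam)
HOOKPROC = ctypes.WINFUNCTYPE(ctypes.c_ssize_t, ctypes.c_int,
                              wintypes.WPARAM, wintypes.LPARAM)

@HOOKPROC
def keyboard_proc(n_code, w_param, l_param):
    if n_code >= 0:
        print("keyboard event:", w_param)      # a logger would record this
    return user32.CallNextHookEx(None, n_code, w_param, l_param)

hook = user32.SetWindowsHookExW(WH_KEYBOARD_LL, keyboard_proc, None, 0)

# Low-level hooks require a message loop on the installing thread.
msg = wintypes.MSG()
while user32.GetMessageW(ctypes.byref(msg), None, 0, 0):
    user32.TranslateMessage(ctypes.byref(msg))
    user32.DispatchMessageW(ctypes.byref(msg))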
Vendors usually try to protect their software during installation so that hackers won't automate cracking it or finding a serial for their application.
Google the term: "Key Loggers"...
Here's an article that describes the problem and methods to prevent it.
I have no deeper understanding of how PunkBuster and similar software work, but this is the way I'd go:
Intercept calls to the API functions that handle the memory stuff, like ReadProcessMemory, WriteProcessMemory, and so on.
You'd detect if your process is involved in the call, log it, and trampoline the call back to the original function.
This should work for the screenshot taking too, but you might want to intercept the BitBlt function.
Here's a basic tutorial concerning the function interception:
Intercepting System API Calls
You should look into what goes into Punkbuster, Valve Anti-Cheat, and some other anti-cheat stuff for some pointers.
Edit: What I mean is, look into how they do it; how they detect that stuff.
I don't know the technical details, but Internet Chess Club's BlitzIn program seems to have integrated program-switching detection. That's of course for detecting people running a chess engine on the side, and not directly applicable to your case, but you may be able to extrapolate the approach to something like: if process X takes more than Z% CPU time over the next Y cycles, it's probably a bot running (sketched below).
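A rough sketch of that heuristic using the psutil library (the threshold, sample count, and whitelist are invented values):

import time
import psutil

CPU_THRESHOLD = 25.0                   # "Z" percent
SAMPLES = 10                           # "Y" cycles
WHITELIST = {"ourgame.exe"}            # hypothetical process names

def suspicious_processes():
    procs = [p for p in psutil.process_iter(["name"])
             if p.info["name"] not in WHITELIST]
    for p in procs:                    # prime cpu_percent's counters
        try:
            p.cpu_percent(interval=None)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass
    hot = {p.pid: 0 for p in procs}
    for _ in range(SAMPLES):
        time.sleep(1.0)
        for p in procs:
            try:
                if p.cpu_percent(interval=None) > CPU_THRESHOLD:
                    hot[p.pid] += 1
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                pass
    # Only flag processes that were hot for every sampled cycle.
    return [p for p in procs if hot[p.pid] == SAMPLES]

print([p.info["name"] for p in suspicious_processes()])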
That in addition to a "you must not run anything else while playing the game to be eligible for prizes" as part of the contest rules might work.
Also, a draconian "we may decide at any time, for any reason, that you have been using a bot and disqualify you" rule helps with the heuristic approach above (used in prized ICC chess tournaments).
All these questions are easily addressed by rule 1 above:
* how to detect if another application makes periodical screenshots?
* how to detect if another application scans our process memory?
* what are good ways to determine whether user input (mouse movement, keyboard input) is human-generated and not automated?
* is it possible to detect if another application requests information about controls in our application (position of controls, etc.)?
I think a good way to make the problem harder for crackers is to keep the only authoritative copy of the game state on your servers, only sending updates to and receiving updates from the clients. That way you can embed client validation in the communication protocol itself (that the client hasn't been cracked, and thus the detection rules are still in place). That, plus actively monitoring for new weird behavior, might get you close to where you want to be.