Text based RPG command interpreter

I was just playing a text based RPG and got to wondering: how exactly were those command interpreters implemented, and is there a better way to implement something similar now? It would be easy enough to write a ton of if statements, but that seems cumbersome, especially considering that for the most part "pick up the gold" is the same as "pick up gold", which has the same effect as "take gold". I'm sure this is a really in-depth question; I'd just like to know the general idea of how interpreters like that were implemented. Or if there's an open source game with a decent, representative interpreter, that would be perfect.
Answers can be language independent, but please keep them in something reasonable, not Prolog or GolfScript or something. I'm not sure exactly what to tag this as.

The usual name for this sort of game is text adventure or interactive fiction, if it is single player, or MUD if it is multiplayer.
There are several special purpose programming languages for writing interactive fiction, such as Inform 6, Inform 7 (an entirely new language that compiles down to Inform 6), TADS, Hugo, and more.
Here's an example of a game in Inform 7 that has a room and an object in the room, and lets you pick up, drop, and otherwise manipulate the object:
"Example Game" by Brian Campbell
The Alley is a room. "You are in a small, dark alley." A bronze key is in the
Alley. "A bronze key lies on the ground."
Produces when played:
Example Game
An Interactive Fiction by Brian Campbell
Release 1 / Serial number 100823 / Inform 7 build 6E59 (I6/v6.31 lib 6/12N) SD
Alley
You are in a small, dark alley.
A bronze key lies on the ground.
>take key
Taken.
>drop key
Dropped.
>take the key
Taken.
>drop key
Dropped.
>pick up the bronze key
Taken.
>put down the bronze key
Dropped.
>
For the multiplayer games, which tend to have simpler parsers than interactive fiction engines, you can check out a list of MUD servers.
If you would like to write your own parser, you can start by simply checking your input against regular expressions. For instance, in Ruby (as you didn't specify a language):
case input
when /(?:take|pick +up)(?: +(?:the|a))? +(.*)/
  take_command(lookup_name($1))
when /(?:drop|put +down)(?: +(?:the|a))? +(.*)/
  drop_command(lookup_name($1))
end
You may discover that this becomes cumbersome after a while. You could simplify it somewhat using some shorthands to avoid repetition:
OPT_ART = "(?: +(?:the|a))?" # shorthand for an optional article
case input
when /(?:take|pick +up)#{OPT_ART} +(.*)/
  take_command(lookup_name($1))
when /(?:drop|put +down)#{OPT_ART} +(.*)/
  drop_command(lookup_name($1))
end
This may start to get slow if you have a lot of commands, since the input is checked against each command in sequence. You also may find that it still becomes hard to read, and involves some repetition that is difficult to simply extract into shorthands.
At that point, you might want to look into lexers and parsers, a topic much too big for me to do justice to in a reply here. There are many lexer and parser generators that, given a description of a language, will produce a lexer or parser capable of parsing that language; check out the linked articles for some starting points.
As an example of how a parser generator works, I'll use Treetop, a Ruby-based parser generator:
grammar Adventure
  rule command
    take / drop
  end

  rule take
    ('take' / 'pick' space 'up') article? space object {
      def command
        :take
      end
    }
  end

  rule drop
    ('drop' / 'put' space 'down') article? space object {
      def command
        :drop
      end
    }
  end

  rule space
    ' '+
  end

  rule article
    space ('a' / 'the')
  end

  rule object
    [a-zA-Z0-9 ]+
  end
end
Which can be used as follows:
require 'treetop'
Treetop.load 'adventure.tt'
parser = AdventureParser.new
tree = parser.parse('take the key')
tree.command # => :take
tree.object.text_value # => "key"
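If you just want the general idea without a parser generator, here is a minimal sketch of the classic hand-rolled approach (my own illustration in Python, not taken from any particular engine; the synonym table and return format are assumptions for the example): tokenize the input, drop articles, map verb synonyms to a canonical verb, and dispatch on the resulting verb/object pair.
# Minimal hand-rolled command parser sketch: normalize synonyms and
# articles, then return a canonical (verb, object) pair for dispatch.
VERB_SYNONYMS = {
    "take": "take", "get": "take", "pick up": "take",
    "drop": "drop", "put down": "drop",
}
ARTICLES = {"the", "a", "an"}

def parse_command(line):
    words = [w for w in line.lower().split() if w not in ARTICLES]
    # Try two-word verbs first ("pick up", "put down"), then one-word verbs.
    for length in (2, 1):
        verb = " ".join(words[:length])
        if verb in VERB_SYNONYMS:
            return VERB_SYNONYMS[verb], " ".join(words[length:])
    return None, None

print(parse_command("pick up the bronze key"))  # ('take', 'bronze key')
print(parse_command("take key"))                # ('take', 'key')
This is roughly what many early games did: a small table of verbs and synonyms, article stripping, and a noun lookup against the objects in scope.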

If by 'text based RPG' you are referring to Interactive Fiction, there are specific programming languages for this. My favorite (the only one I know ;P) is Inform: http://en.wikipedia.org/wiki/Inform
The rec.arts.int-fiction FAQ has further information: http://www.plover.net/~textfire/raiffaq/FAQ.htm

Related

Writing a computer program that will analyse the quality of another computer program?

I'm interested in knowing the possibilities of this. I'm working on a project that validates the skills of a software engineer; currently we validate skills based on code reviews by credentialed developers.
I know the answer is far more complicated than the question, and I can't imagine how complex the program would have to be to analyse complex code, but I am starting with basic programming interview questions.
For example, the classic FizzBuzz question:
Write a program that prints the numbers from 1 to 20. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”.
and below is a solution in Python:
for num in range(1, 21):
    string = ""
    if num % 3 == 0:
        string = string + "Fizz"
    if num % 5 == 0:
        string = string + "Buzz"
    if num % 5 != 0 and num % 3 != 0:
        string = string + str(num)
    print(string)
The question is: can we programmatically analyse the validity of this solution?
I would like to know whether anyone has attempted this, and whether there are current implementations I can take a look at. Also, has anyone used Z3, and is it something I can use to solve this problem?
As Vilx- mentioned, correctness of programs (including whether or not they terminate) is in general known to be undecidable. However, tools such as Z3 show that relevant concrete cases can still be reasoned about, despite the general undecidability of the problem.
Static analysers typically look for "simple" problems (e.g. null dereferences, out-of-bounds accesses, numerical overflows), but are comparatively fast and require little user guidance (think of guidance in the spirit of adding type annotations to your code).
A non-exhaustive (and biased) list of keywords to search for: "static analysers", "abstract interpretation"; "facebook infer", "airbus absint", "juliasoft".
Verifiers attempt to prove much richer properties, in particular functional correctness, e.g. "does this sort-implementation really sort my array (and not do anything else, e.g. deallocate some global memory or update an element reachable from the array)?" or "does that crypto-implementation really implement the crypto protocol it promises to implement?". This is a much harder task and tools from that line of research are typically rather slow, require expert users with a background in formal verification and significant user guidance.
A non-exhaustive (and biased) list of keywords to search for: "verification", "hoare logic", "separation logic"; "eth viper", "microsoft dafny", "kuleuven verifast", "microsoft f*".
Other formal methods exist, e.g. refinement (or correct-by-construction), but with even less tool support and, as far as I know, industry acceptance.
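As a hedged illustration of the kind of "relevant concrete case" Z3 can handle, here is a sketch using the z3 Python bindings that encodes the branching logic of the FizzBuzz submission above alongside the specification, then asks Z3 for a number on which they disagree (an unsat result means the logic agrees for every number in range). The encoding is my own simplification; it checks only which case gets printed, not the actual string construction.
from z3 import Int, Solver, And, Not, sat

n = Int("n")

# Specification: which of the three special cases applies to n.
spec_fizz     = And(n % 3 == 0, n % 5 != 0)
spec_buzz     = And(n % 5 == 0, n % 3 != 0)
spec_fizzbuzz = And(n % 3 == 0, n % 5 == 0)

# Logic of the submitted program: it appends "Fizz" when n % 3 == 0 and
# "Buzz" when n % 5 == 0, so its printed case is determined by the same tests.
cand_fizz     = And(n % 3 == 0, Not(n % 5 == 0))
cand_buzz     = And(n % 5 == 0, Not(n % 3 == 0))
cand_fizzbuzz = And(n % 3 == 0, n % 5 == 0)

s = Solver()
s.add(n >= 1, n <= 20)
# Ask for any n in range where the candidate and the spec disagree on a case.
s.add(Not(And(spec_fizz == cand_fizz,
              spec_buzz == cand_buzz,
              spec_fizzbuzz == cand_fizzbuzz)))

if s.check() == sat:
    print("counterexample:", s.model()[n])
else:
    print("branching logic matches the specification for 1..20")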
Let's put it this way: it has been mathematically proven that you CANNOT, in general, determine whether a program will ever terminate. So if you want a mathematically perfect answer as to whether the target program is correct, you're doomed.
That said, you can still do unit tests and "linting", which will give you plenty of interesting insights.
But for simple pieces of code like the FizzBuzz, I think that eyeballing by an experienced dev will probably bring the best results.
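For the unit-test route mentioned above, here is a minimal sketch of how the submitted FizzBuzz could be checked automatically: wrap it as a function and compare its output against a trusted reference implementation for every input in range. The function names here are made up for illustration.
def reference_fizzbuzz(n):
    # Trusted reference implementation of the specification.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

def candidate_fizzbuzz(n):
    # The submitted solution, wrapped as a function for testing.
    string = ""
    if n % 3 == 0:
        string = string + "Fizz"
    if n % 5 == 0:
        string = string + "Buzz"
    if n % 5 != 0 and n % 3 != 0:
        string = string + str(n)
    return string

failures = [n for n in range(1, 21)
            if candidate_fizzbuzz(n) != reference_fizzbuzz(n)]
print("all cases pass" if not failures else f"failed for: {failures}")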

Evaluate Formal tools

What are all the factors one should consider in order to compare three formal verification tools?
E.g.: JasperGold, OneSpin, Incisive.
From my little research, JasperGold comes out on top, but I want to evaluate them myself on a project.
I have noted down some points such as:
1. Supported languages (VHDL, SV, Verilog, SVA, PSL, etc.)
2. GUI
3. Capability (how big a design they can handle)
4. Number of evaluation cycles
5. Performance (how fast they find a proof or counterexample)
With what other features can I extend this list?
Thanks!

Turing Machines

I'm reading a book on Languages & Automata and I'm not understanding Turing machines. I've taught myself about DFAs, NFAs, and pushdown automata without any problems. Can someone please explain what this is doing?
B = {w#w|w ∈ {0, 1}*}
The following figure contains several snapshots of M1's tape while it is computing in stages 2 and 3, when started on input 011000#011000.
Thanks a lot!
"Imagine an endless row of hotel rooms, and each room contains a lightbulb and a switch that controls it. Initially, all the rooms are dark. A robot starts at one of the rooms, and has the ability to operate switches and move to adjacent rooms.
The robot has several states that it can be in, and each state determines what it should do based on whether the current room is light or dark. For example, a robot's rules could include these states:
The "scared" state:
If the room is dark, turn on the light and move to the room to the left.
If the room is light, do nothing and go to the "normal" state.
The "normal" state:
If the room is light, turn off the light and move to the room on the right.
Otherwise, go to the "scared" state.
One special state is the "stop" state. When the robot finds itself in this state, the process is complete.
Suppose a robot has n states (not including the "stop" state), and it stops. What is the maximum number of light rooms at this point?
This system is in direct allegory to Turing machines. The hotel is the tape, the robot is the Turing machine, and dark rooms and light rooms are 0 and 1 cells."
It is from the googology wiki. I contributed the original idea to it, but of course the text has been improved since then.
A Turing machine is a hypothetical machine with a tape on which symbols are stored. It has one or more heads that can read symbols from the tape or write symbols to the tape.
Now your language says B = {w#w | w ∈ {0, 1}*}, that is, any string of the form "w#w", where w is any combination of 0's and 1's, or none at all. So let's say w = 011000 for this particular example. The resulting string will be 011000#011000, and your Turing machine will verify whether it belongs to this language.
Your Turing machine has one head in this case. It starts at the beginning of the string and reads the first character, which is 0. It marks it with "x", meaning "I've read this". Then it moves to just after the # and checks whether the symbol there matches what it just read. In this case it is 0 as well, so it marks that as matched with "x" too. It then goes back to the previous position and does the same for the next character. It keeps doing this until it reaches the #. When it reads the #, it checks that every symbol after it has been matched, and if so, it accepts the string, saying yes, this belongs to the given language.
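To make this marking procedure concrete, here is a small Python sketch of the same idea (a simplification of what the multi-state machine does, not a faithful transition-by-transition simulation):
def accepts_w_hash_w(s):
    """Simulate the marking idea: match symbols left of '#' against symbols
    right of '#', crossing each pair off with 'x', then accept if nothing
    is left unmatched."""
    tape = list(s)
    if tape.count("#") != 1:
        return False
    sep = tape.index("#")
    left, right = tape[:sep], tape[sep + 1:]
    if len(left) != len(right):
        return False
    for i, symbol in enumerate(left):
        if right[i] != symbol:
            return False
        left[i] = right[i] = "x"   # mark both as read, like the machine does
    return True

print(accepts_w_hash_w("011000#011000"))  # True
print(accepts_w_hash_w("011000#011001"))  # False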

What is ANTLR3's error recovery method?

This seems to be a theoretical question.
As far as I know, ANTLR3 handles errors itself using its recover(###) method. I want to know which method ANTLR3 uses for error recovery (i.e. panic-mode, phrase-level, etc.). Can someone help me figure this out?
It would be nice if someone could show me the declaration of its recover method, if my first guess is correct. Thank you.
Quote:
ANTLR's error recovery mechanism is based upon Niklaus Wirth's early ideas in Algorithms + Data Structures = Programs [1] (as well as Rodney Topor's A Note on Error Recovery in Recursive Descent Parsers [2]), but also includes Josef Grosch's good ideas from his CoCo parser generator (Efficient and Comfortable Error Recovery in Recursive Descent Parsers [3]). Essentially, recognizers perform single-symbol insertion and deletion upon mismatched symbol errors (as described in a moment) if possible. If not, recognizers gobble up symbols until the lookahead is a member of the resynchronization set and then exit the rule. The resynchronization set is the set of input symbols that can legally follow references to the current rule and references to any invoking rules up the call chain. Similarly, if the recognizer cannot choose any of the alternatives from the start of a rule, the recognizer again uses the gobble-and-exit strategy.
[...]
-- Terence Parr. The Definitive ANTLR Reference, 10.7 Automatic Error Recovery Strategy.
References
[1] Niklaus Wirth. Algorithms + Data Structures = Programs. Prentice Hall PTR, Upper Saddle River, NJ, USA, 1978.
[2] Rodney W. Topor. A note on error recovery in recursive descent parsers. SIGPLAN Notices, 17(2):37–40, 1982.
[3] Josef Grosch. Efficient and comfortable error recovery in recursive descent parsers. Structured Programming, 11(3):129–140, 1990.
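To make the "gobble-and-exit" strategy from the quote concrete, here is a hedged sketch in Python of panic-mode resynchronization in a recursive-descent parser. This illustrates the general technique only, not ANTLR's actual recover() implementation; the token and follow-set values are made up.
def match(tokens, pos, expected, follow_set):
    """Try to consume `expected` at tokens[pos]. On mismatch, report the error
    and skip tokens until the lookahead is in `follow_set` (panic-mode style)."""
    if pos < len(tokens) and tokens[pos] == expected:
        return pos + 1                      # matched: advance normally
    print(f"error: expected {expected!r} at position {pos}")
    while pos < len(tokens) and tokens[pos] not in follow_set:
        pos += 1                            # gobble symbols until we can resync
    return pos                              # exit the rule at a safe point

# Example: a rule expecting ';' whose callers can continue at '}' or EOF.
tokens = ["x", "=", "1", "oops", "}", "<EOF>"]
pos = match(tokens, 3, ";", {"}", "<EOF>"})
print("resynchronized at", tokens[pos])     # resynchronized at }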

Building dictionary of words from large text

I have a text file containing posts in English/Italian. I would like to read the posts into a data matrix so that each row represents a post and each column a word. The cells in the matrix are the counts of how many times each word appears in the post. The dictionary should consist of all the words in the whole file or a non exhaustive English/Italian dictionary.
I know this is a common and essential preprocessing step for NLP. And I know it's pretty trivial to code, but still I'd like to use some NLP domain-specific tool so that I get stop words trimmed, etc.
Does anyone know of a tool/project that can perform this task?
Someone mentioned Apache Lucene; do you know whether a Lucene index can be serialized to a data structure similar to what I need?
Maybe you want to look at GATE. It is an infrastructure for text-mining and processing. This is what GATE does (I got this from the site):
open source software capable of solving almost any text processing problem
a mature and extensive community of developers, users, educators, students and scientists
a defined and repeatable process for creating robust and maintainable text processing workflows
in active use for all sorts of language processing tasks and applications, including: voice of the customer; cancer research; drug research; decision support; recruitment; web mining; information extraction; semantic annotation
the result of a multi-million euro R&D programme running since 1995, funded by commercial users, the EC, BBSRC, EPSRC, AHRC, JISC, etc.
used by corporations, SMEs, research labs and Universities worldwide
the Eclipse of Natural Language Engineering, the Lucene of Information Extraction, the ISO 9001 of Text Mining
What you want is so simple that, in most languages, I would suggest you roll your own solution using an array of hash tables that map from strings to integers. For example, in C#:
foreach (var post in posts)
{
    var row = new Dictionary<string, int>();
    foreach (var word in GetWordsFromPost(post))
    {
        IncrementContentOfRow(row, word);
    }
}

// ...

private void IncrementContentOfRow(IDictionary<string, int> row, string word)
{
    int oldValue;
    // If the word hasn't been seen in this post yet, start its count at zero.
    if (!row.TryGetValue(word, out oldValue))
    {
        oldValue = 0;
    }
    row[word] = oldValue + 1;
}
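For comparison, here is a hedged Python sketch of the same roll-your-own idea, extended to lay the per-post counts out over a shared vocabulary so you end up with the post-by-word matrix described in the question (the tokenization is deliberately naive and stop-word trimming is omitted):
from collections import Counter

def build_term_document_matrix(posts):
    # Count words per post, then lay the counts out over a shared vocabulary
    # so every row has one column per dictionary word.
    counts = [Counter(post.lower().split()) for post in posts]
    vocabulary = sorted(set(word for c in counts for word in c))
    matrix = [[c[word] for word in vocabulary] for c in counts]
    return vocabulary, matrix

posts = ["the cat sat on the mat", "il gatto dorme", "the cat sleeps"]
vocab, matrix = build_term_document_matrix(posts)
print(vocab)
print(matrix)   # one row per post, one column per word, cells are counts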
You can check out:
bow - a veteran C library for text classification; I know it stores the matrix, it may require some hacking to get it.
Weka - a Java machine learning framework that can handle text and build the matrix
Sujit Pal's blog post on building the term-document matrix from scratch
If you insist on using Lucene, you should create an index using term vectors, and use something like a loop over getTermFreqVector() to get the matrix.
Thanks to @Mikos' comment, I googled the term "term-document matrix" and found TMG (Text to Matrix Generator).
I found it suitable for my needs.