Using open source SNES emulator code to turn a rom file into a self-contained executable game

Would it be possible to take the source code from a SNES emulator (or any other game system emulator for that matter) and a game ROM for the system, and somehow create a single self-contained executable that lets you play that particular ROM without needing either the individual rom or the emulator itself to play? Would it be difficult, assuming you've already got the rom and the emulator source code to work with?

It shouldn't be too difficult if you have the emulator source code. You can use a method that is often used to store images in C source files.
Basically, what you need to do is create a char array in a header file and store the contents of the rom file in that array. You may want to write a script to automate this for you.
Then, you will need to alter the source code so that instead of reading the rom in from a file, it uses the in-memory version of the rom, stored in your array and included from your header file.
It may require a little bit of work if you need to emulate file pointers and such, or you may be lucky and find that the rom loading function just loads the whole file in at once. In this case it would probably be as simple as replacing the file load function with a function to return your pointer.
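As a rough sketch of what that might look like (rom_data, rom_size, and load_rom_image are names made up for illustration; a tool like xxd -i can generate the header for you):

/* rom_data.h -- generated header holding the embedded ROM image */
#ifndef ROM_DATA_H_
#define ROM_DATA_H_

static const unsigned char rom_data[] = {
    0x78, 0xd8, 0xa2, 0xff, /* ... the rest of the ROM bytes, emitted by your script ... */
};
static const unsigned long rom_size = sizeof(rom_data);

#endif /* ROM_DATA_H_ */

/* Drop-in replacement for the emulator's file-based loader: instead of
   reading from disk, copy the embedded image into the emulator's buffer. */
#include <string.h>
#include "rom_data.h"

size_t load_rom_image(unsigned char *buf, size_t buflen)
{
    size_t n = rom_size < buflen ? rom_size : buflen;
    memcpy(buf, rom_data, n);
    return n;
}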
However, be careful about licensing issues. If the emulator is licensed under the GPL, you may not be legally allowed to store a proprietary file in the executable, so it would be worth checking that, especially before you release / distribute it (if you plan to do so).

Yes, more than possible, been done many times. Google: static binary translation. Graham Toal has a good howto paper on the subject; it should show up early in the hits. I may have left some code out there as well.
Completely removing the rom may be a bit more work than you think, but not using an emulator is definitely possible. Actually, both requirements are possible, and you may be surprised how many handheld console or set-top box games are translated rather than emulated, especially on platforms like those from Nintendo where there isn't enough processing power to emulate in real time.
You need a good emulator as a reference and/or to write your own emulator as a reference. Then you need to write a disassembler and have that disassembler generate C code (please don't try to translate directly to another target; I made that mistake once. C is portable and the compilers will take care of a lot of dead code elimination for you). So an instruction from a make-believe instruction set might be:
add r0,r0,#2
And that may translate into:
//add r0,r0,#2
r0=r0+2;
do_zflag(r0);
do_nflag(r0);
It looks like the SNES CPU is related to the 6502, which is what Asteroids used, and that is the translation I have been working on, off and on, for a while now as a hobby. The emulator you are using is probably written and tuned for runtime performance, and may be difficult at best to use as a reference and to check in lock step with the translated code. The 6502 is nice because, compared to say the Z80, there really are not that many instructions.

As with any variable-word-length instruction set, the disassembler is your first big hurdle. Do not think linearly; think execution order, think like an emulator. You cannot linearly translate instructions from zero to N or from N down to zero. You have to follow all the possible execution paths, marking each byte in the rom as either being the first byte of an instruction or not. Some bytes you can decode as data and, if you choose, mark those; otherwise assume all other bytes are data or fill.

Figuring out what to do with that data is the hard part of getting rid of the rom: some code addresses data directly, while other code uses register-indirect addressing, meaning at translation time you have no idea where that data is or how much of it there is. Once you have marked all the starting bytes for instructions, it is a trivial task to walk the rom from zero to N, disassembling and/or translating.
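A rough sketch of that marking pass, in C. The helpers decode_length, is_branch, branch_target, and falls_through are hypothetical; you would write them against your reference emulator's opcode tables for whatever instruction set you are translating.

#include <stdint.h>

#define MAX_ROM 0x10000
static uint8_t  rom[MAX_ROM];               /* the rom image */
static uint32_t rom_len;
static uint8_t  is_insn_start[MAX_ROM];     /* 1 = first byte of an instruction */

/* hypothetical per-instruction-set helpers */
uint32_t decode_length(const uint8_t *op);              /* bytes in this opcode */
int      is_branch(const uint8_t *op);                  /* conditional branch / call? */
uint32_t branch_target(uint32_t pc, const uint8_t *op);
int      falls_through(const uint8_t *op);              /* 0 for jmp, rts, rti, ... */

/* Follow one execution path, marking instruction start bytes as we go.
   Call this once for each reset/interrupt vector. */
static void trace(uint32_t pc)
{
    while (pc < rom_len && !is_insn_start[pc]) {
        is_insn_start[pc] = 1;
        uint32_t len = decode_length(&rom[pc]);

        if (is_branch(&rom[pc]))             /* follow both sides of a branch */
            trace(branch_target(pc, &rom[pc]));

        if (!falls_through(&rom[pc]))
            return;

        pc += len;
    }
}

/* Second pass (not shown): walk the rom from 0 to N and translate every byte
   marked in is_insn_start[]; everything else is treated as data or fill. */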
Good luck, enjoy, it is well worth the experience.

Related

Reverse a decryption algorithm with a given .exe GUI

I am using a Keygen application (.exe). There are two input fields in its GUI:
p1 - at least 1 digit, 10 digits max - ^[0-9]{1,10}$
p2 - 12 chars max - uppercase letters/digits/underscores - ^[A-Z0-9_]{0,12}$
Pressing the generate button produces a key x.
x - 20 digits exactly - ^[0-9]{20}$
For each pair (p1,p2), there is only one x (in other words: f(p1,p2) = x is a function)
I am interested in its encryption algorithm.
Is there any way of reverse engineering the algorithm?
I thought of two ways:
Decompiling. I used Snowman, but the output is too polluted; the decompiled code probably contains irrelevant parts, such as the GUI.
Analyzing input and output. I wonder if there is any way to determine the encryption algorithm used by analyzing a set of f(p1,p2) = x results.
As you mentioned, using snowman or some other decompiling tools is probably the way to go.
I doubt you would be able to determine the algorithm just by looking at input/output combinations, since it is possible to write any kind of arbitrary algorithm that can behave in any way.
Perhaps you could just ask the author what algorithm they're using?
Unless it's something really simple, I'd rule out your option 2 of trying to figure it out by looking at input and output pairs.
For decompiling / reverse engineering a static binary, you should first determine whether it's a .NET application or something else. If it's written in .NET you can try this for decompilation:
https://www.jetbrains.com/decompiler/
It's really easy to use, unless the binary has been obfuscated.
If the application is not a .NET application, you can try Ghidra and/or Cutter, which both have pretty impressive decompilers built in:
https://ghidra-sre.org/
https://cutter.re/
If static code analysis is not enough, you can add a debugger to it. Ghidra and x64dbg work really well together, and can be synced via a plugin installed in both.
If you're new to this, I can recommend that you look into basic assembly for the x86 platform so you have a general idea of how the CPU works. Another good way to get started is "crackme"-style challenges from CTF competitions. Often there are great write-ups with the solution, so you have both the question and the answer available.
Good luck!
Type in p1 and p2, scan the process for that byte string, and then put a hardware breakpoint for memory access on it. Generate the key and it will hit that hardware breakpoint. Then you have the address which accesses it and can start reversing from there in Ghidra (don't forget to use BASE + OFFSET, since Ghidra's output won't have the same base as the running application). The relevant code HAS to access the inputs, either directly or somewhere fairly early in that call chain, so that tells you where the algorithm is. Nobody can know more without actually seeing the executable.
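The memory scan itself is normally done with the debugger's built-in search, but if you want to script it, a rough Win32 sketch might look like this (scan_for_bytes is a made-up helper name; it only covers committed, readable regions and ignores guard pages):

#include <windows.h>
#include <stdio.h>
#include <string.h>

/* Scan another process's committed, readable memory for a byte pattern
   (for example the p1 string you typed) and print every address where
   it appears. */
void scan_for_bytes(DWORD pid, const unsigned char *pat, size_t patlen)
{
    HANDLE h = OpenProcess(PROCESS_VM_READ | PROCESS_QUERY_INFORMATION, FALSE, pid);
    if (!h)
        return;

    MEMORY_BASIC_INFORMATION mbi;
    unsigned char *addr = NULL;
    static unsigned char buf[1 << 20];

    while (VirtualQueryEx(h, addr, &mbi, sizeof(mbi)) == sizeof(mbi)) {
        if (mbi.State == MEM_COMMIT && !(mbi.Protect & PAGE_NOACCESS)) {
            SIZE_T want = mbi.RegionSize < sizeof(buf) ? mbi.RegionSize : sizeof(buf);
            SIZE_T got  = 0;
            if (ReadProcessMemory(h, mbi.BaseAddress, buf, want, &got)) {
                for (SIZE_T i = 0; patlen && i + patlen <= got; i++)
                    if (memcmp(buf + i, pat, patlen) == 0)
                        printf("hit at %p\n", (unsigned char *)mbi.BaseAddress + i);
            }
        }
        addr = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
    }
    CloseHandle(h);
}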

How to find the size of a reg in verilog?

I was wondering if there is a way to compute the size of a reg in Verilog. I researched it quite a bit and found $size(a), but it's only in SystemVerilog, and it won't work in my Verilog program.
Does anyone know an alternative for this?
I also wanted to ask, as a side note: I'm having some trouble with my test bench, in the sense that when I update a value in the file, that change is not taken into consideration when I simulate. I've been told I might have been using an old test bench, but the one I am continuously simulating is the only one available in this project.
EDIT:
To give you an idea of what's the problem: in my code there is a "start" signal and when it is set to 1, the operation starts. Otherwise, it stays idle. I began writing the test bench with start=0, tested it and simulated it, then edited the test bench by setting start to 1. But when I simulate it, the start signal remains 0 in the waveform. I tried to check whether I was using another test bench, but it is the only test bench I am using in this project.
Given that I was on a deadline, I worked on the code so that it would adapt to the "frozen" test bench. I am getting now all the results I want, but I wanted to test some other features of my code, so I created a new project and copy pasted the code in new files (including the same test bench). But when I ran a simulation, the waveform displayed wrong results (even though I was using the exact same code in all modules and test bench). Any idea why?
Any help would be appreciated :)
There is a standardised way to do this, but it requires you to use the VPI, which I don't think you get on Modelsim's student edition. In short, you have to write C code and dynamically link it to the simulator. In the C code, you can get object properties using routines such as vpi_get. Useful properties might be vpiSize, which is what you want, vpiLeftRange, vpiRightRange, and so on.
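As an illustration only, a minimal VPI system task that prints the bit width of its argument might look roughly like this ($show_size and show_size_register are made-up names; the exact build and load steps depend on your simulator):

#include <string.h>
#include "vpi_user.h"

/* $show_size: print the bit width (vpiSize) of the first argument */
static PLI_INT32 show_size_calltf(PLI_BYTE8 *user_data)
{
    (void)user_data;
    vpiHandle systf = vpi_handle(vpiSysTfCall, NULL);
    vpiHandle args  = vpi_iterate(vpiArgument, systf);
    vpiHandle arg   = args ? vpi_scan(args) : NULL;

    if (arg != NULL) {
        PLI_INT32 size = vpi_get(vpiSize, arg);
        vpi_printf("size = %d bits\n", (int)size);
        vpi_free_object(args);   /* we stopped scanning early */
    }
    return 0;
}

static void show_size_register(void)
{
    s_vpi_systf_data tf;
    memset(&tf, 0, sizeof(tf));
    tf.type   = vpiSysTask;
    tf.tfname = "$show_size";
    tf.calltf = show_size_calltf;
    vpi_register_systf(&tf);
}

/* the simulator calls everything in this table when the shared object loads */
void (*vlog_startup_routines[])(void) = { show_size_register, NULL };

In the test bench you would then call $show_size(some_reg); after compiling this C file into a shared object and pointing the simulator at it.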
Having said all that, Verilog is essentially a static language, and objects have to be declared with a static width using constant expressions. Having a run-time method to determine an object's size is therefore of pretty limited value (since you should already know it), and may not solve whatever problem you actually have. Your question would make more sense for VHDL (and SystemVerilog?), which are much more dynamic.
Note on Icarus: the developers have pushed lots of SystemVerilog stuff back into the main language. If you take advantage of this, you may find that your code is not portable.
Second part of your question: you need to be specific about what your problem actually is.

VGA programming without using interrupt (only registers)

I want to develop a VGA graphics driver (for Linux(Ubuntu)) with support for the basic primitives such as putpixel, drawline, fillrect and bitblt. I want to do it in protected mode.
I've been googling for a week and the following links are the best I have found:
http://www.brackeen....vga/basics.html
http://www.osdever.n...VGA/vga/vga.htm
http://bos.asmhacker...sing%20bios.htm
Unfortunately, the first one uses a BIOS call so I cannot use it. The second link has lots of information on the VGA registers but no examples showing how to make them work together. The third is an example of switching to mode 13h, but I've tried it and nothing happened. Can you guys give me a hint? Thanks in advance!
--Vincenzo
my code at http://bos.asmhackers.net/docs/vga_without_bios/snippet_5/vga.php
works fine if you are in 32bit mode with full hardware access. Unfortunately I doubt that any Linux variant will let you directly access the VGA ports. I'm not sure how you develop this driver, but if you made sure that you have full access to the VGA ports it should work. In my example code I only switch between mode 0x03 and 0x13, but in the folders above you'll be able to find port values for most other common VGA modes, as well as C code to do the switch if you prefer that.
The include files used in Christoffer's code, like text.inc and font8x16.inc, can be found in the BOS operating system source code:
http://bos.asmhackers.net/downloads.php
This is coming many, many years later, but I think it's still very relevant, and if somebody is struggling I hope they can find it useful.
First of all, it is completely possible to configure the VGA using only registers, without interrupts, as hard as it may be. A useful resource about the registers and how to configure them can be found here, but unless you have a ton of time to spare to learn how to properly do all of it, move on to the following section.
If you wish to really learn how to do it, I suggest going through the documentation mentioned above. However, some of the work is already done!
Chris Giese did a great job demonstrating exactly how to do this for an MS-DOS system, and while you may think that doesn't help you, it really does.
Chris's code can be found here. If you want other useful code, check there as well.
Now, while it only works for MS-DOS, it's actually easy to convert to other systems. The code already contains all the data needed to configure the registers for many different modes, and that's the part that saves you a ton of time going through documentation.
The code uses the functions outportb and inportb, which are MS-DOS functions, to write/read a single byte to/from a port. Therefore, you have to redefine these functions to read/write ports on your own system. The complexity of that redefinition depends on the system you operate on.
In addition, you will also need to provide a means to write to the physical memory region between 0xA0000-0xBFFFF, which corresponds to the standard VGA memory area. Once you have access to that, you need to also redefine the functions pokeb, pokew, and peekb, which will help you output things (text or pixel data) on the screen.
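On Linux, for example, one way to provide those replacements is ioperm() for the port I/O plus an mmap() of /dev/mem for the legacy VGA window. This is only a rough sketch under those assumptions (it needs root, and the helper names simply mirror what the DOS code expects):

#include <sys/io.h>     /* ioperm, outb, inb */
#include <sys/mman.h>   /* mmap */
#include <fcntl.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>

static volatile uint8_t *vga_mem;   /* mapping of physical 0xA0000-0xBFFFF */

int lowlevel_init(void)
{
    /* allow access to the standard VGA register ports (0x3B0-0x3DF) */
    if (ioperm(0x3B0, 0x30, 1) != 0) {
        perror("ioperm");
        return -1;
    }

    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) {
        perror("open /dev/mem");
        return -1;
    }
    vga_mem = mmap(NULL, 0x20000, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0xA0000);
    close(fd);
    if (vga_mem == MAP_FAILED) {
        perror("mmap");
        return -1;
    }
    return 0;
}

/* DOS-style helpers the original code expects */
void    outportb(uint16_t port, uint8_t val) { outb(val, port); }
uint8_t inportb(uint16_t port)               { return inb(port); }

void    pokeb(uint32_t addr, uint8_t val) { vga_mem[addr - 0xA0000] = val; }
uint8_t peekb(uint32_t addr)              { return vga_mem[addr - 0xA0000]; }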
One last note: the code is already set up to work with many different modes, including both text and graphics modes.

How to autostart a program from floppy disk on a Commodore c64

Good news: my C64 is still running after spending lots of years in my attic.
But what I always wanted to know is:
How can I automatically load & run a program from a floppy disk that is already inserted
when I switch on the c64?
Some auto-running command like load "*",8,1 would be adequate...
Regards
MoC
You write that a command that you type in, like LOAD"*",8,1 would be adequate. Can I assume, then, that the only problem with that particular command is that it only loads, but doesn't automatically run, the program? If so, you have a number of solutions:
If it's a machine language program, then you should type LOAD"<FILENAME>",8,1: and then (without pressing <RETURN>) press <SHIFT>+<RUN/STOP>.
If it's a BASIC program, type LOAD"<FILENAME>",8: and then (without pressing <RETURN>) press <SHIFT>+<RUN/STOP>.
It is possible to write a BASIC program such that it automatically runs when you load it with LOAD"<FILENAME>",8,1. To do so, first add the following line to the beginning of your program:
0 POKE770,131:POKE771,164
Then issue the following commands to save the program:
PRINT"{CLR}":POKE770,113:POKE771,168:POKE43,0;POKE44,3:POKE157,0:SAVE"<FILENAME>",8
This is not possible without some custom cartridge.
One way to do this would be to get the Retro Replay cartridge and hack your own code for it.
I doubt there is a way to do it; you would need a cartridge which handles this case and I don't think one like that exists.
A better and more suitable solution is actually EasyFlash. Retro Replay is commonly used with its own ROM, and since it is a very useful cartridge with its default ROM, I would never flash another ROM to it. It is also more expensive than EasyFlash if you don't have either of those cartridges.
At the moment, I have a Prince Of Persia (!) ROM written to my EasyFlash, and when I turn on my C64, it autoruns just like you asked for.
Not 100% relevant, but the C128 can autoboot disks in C128 mode. For example, Ultima V (which has music on the C128, but not on the C64 or on the C128 in C64 mode) autoboots.
As for cartridges, I'd recommend the 1541 Ultimate 2. It can also run games from module ROM images (although Prince of Persia doesn't work for me for some reason, perhaps a software issue?), but you also get a rather good floppy emulator (which also makes it easier to transfer stuff to real disks), REU, a tape interface (if you order it), etc.
If you are working with an ML program, there are several methods. If you aren't worried about ever returning to the normal READY prompt without a RESET, you can use a small loader that loads into the stack ($0100-$01FF). The loader would just load the next section of code, then jump to it. It should start at $0102 and needs to be as small as possible. Many times the next piece to load has a name of only 2 characters, so the file name can be placed at $0100 and $0101. Then all you need to do is call SETLFS, SETNAM, and LOAD, then JMP to the loaded code. Fill the rest of the stack area with $01. It is also rather safe to only save $0100-$010D so that the entire loader will fit in a single disk block.
One issue with this is that it clears out past stack entries (so your program will need to reset the stack pointer back to the top). If your program tries to do a normal RTS out of itself, random things can occur. If you want to exit the program, you'll need to JMP through the reset vector ($FFFC by default) to do so.

code running very slowly after importing ansi c into iphone project

I have an ANSI C code that is about 10,000 lines long that I am trying to use in an iPhone project. When I compile the code with gcc on the command line, I type the following:
gcc -o myprog -O3 myprog.c
This program reads in large jpeg files and does some fancy processing on them, so I call it with the following
./myprog mypic.jpg
and from the command line, this takes about 0.1 seconds.
I'm trying to import this code into an iPhone project, but I'm not entirely sure how. I was able to get it to compile and run successfully by renaming myprog.c to myprog.h and then calling the functions in the C code from within a generic NSObject class. I added the -O3 optimization to the project's Other C Flags. However, when I do this, the code takes about 2 seconds to run on the simulator and about 7 seconds on the iPhone, which makes for an unacceptable user experience.
Any tips on how to get this going would be much appreciated.
It's hard to say for sure where the slowness comes from, or if there is any way around it, but right off the bat you've done something wrong.
You shouldn't have renamed a .c file to a .h file and included it. You should have written a .h (header) file that declares the functions, variables, and types:
myprog.h:
#ifndef MYPROG_H_
#define MYPROG_H_

struct thing {
    int a;
    int b;
};

extern int woof;

int foo(void *buf, int size);

#endif /* MYPROG_H_ */
Then you should compile the .c file to an object file (or library) and link the main program against that. If you had included the .h file that was really just a renamed .c file into more than one source file, you could have ended up with multiple copies of some data and code in your program.
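For a plain command-line build, that separate compile-and-link step would look something like this (main.c and myapp are placeholder names; in an Xcode project you would instead just add myprog.c to the target as its own source file):
gcc -O3 -c myprog.c -o myprog.o
gcc -O3 -c main.c -o main.o
gcc main.o myprog.o -o myapp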
You'll probably also want to go through and separate out any code in myprog.c that you won't be using in your iPhone program. I'll bet that there is plenty.
As far as why the program is slowing down, this could have to do with myprog being written to make use of some resources that aren't available on the iPhone. The first thing that comes to mind is large amounts of RAM, since many desktop applications are written as though available RAM is infinite, and I could see how some .jpg manipulation code could be written this way. The way to get around this would be to try to rework the algorithm so that it did not load as much of the picture at one time while working on it.
The second thing that comes to mind is floating point code. Floating point operations are common in image manipulation code, but are often either not available or severely limited on embedded systems. On the iPhone they are available, but according to something I heard, their performance is noticeably hampered if you compile your code as Thumb rather than regular ARM code. (I've never developed for an iPhone or its particular processor, so I don't know for sure, but it is worth looking into.)
Another place where things could be slowing down would be if there is some sort of translation between Objective-C objects and C structures that you have somehow introduced and that is happening a lot more often than it should. There are probably other slowdowns that could happen because of this, but you might be able to test this theory by creating an Objective-C program for your desktop that uses the myprog.c code in a manner similar to the iPhone program's use of it.
Another thing you probably should look into is profiling your iPhone program. Profiling determines (or only helps to determine, in some cases) where the program is spending its time. Knowing this doesn't necessarily tell you that the code that runs the most is bad or that anything about it could be improved, but it does tell you where to look. And sometimes you may look at the results and immediately know that some function that you thought was only going to be called once at the beginning of the program is actually being called repeatedly, which highly suggests that some improvement can be made.
I'm sure that a little searching will turn up how to go about this.