I am building a decompiler for Lua 5.1 (for study purposes only).
Here is the generated code:
main <test.lua:0,0> (12 instructions, 48 bytes at 008D0520)
0+ params, 2 slots, 0 upvalues, 0 locals, 6 constants, 0 functions
1 [1] LOADK 0 -2 ; 2
2 [1] SETGLOBAL 0 -1 ; plz_help_me
3 [2] LOADK 0 -4 ; 24
4 [2] SETGLOBAL 0 -3 ; oh_no
5 [3] GETGLOBAL 0 -1 ; plz_help_me
6 [3] GETGLOBAL 1 -3 ; oh_no
7 [3] ADD 0 0 1
8 [3] SETGLOBAL 0 -5 ; plz_work
9 [4] GETGLOBAL 0 -6 ; print
10 [4] GETGLOBAL 1 -5 ; plz_work
11 [4] CALL 0 2 1
12 [4] RETURN 0 1
constants (6) for 008D0520:
1 "plz_help_me"
2 2
3 "oh_no"
4 24
5 "plz_work"
6 "print"
locals (0) for 008D0520:
upvalues (0) for 008D0520:
Original Code:
plz_help_me = 2
oh_no = 24
plz_work = plz_help_me + oh_no
print(plz_work)
How can I efficiently build a decompiler that reconstructs this code? Should I use ASTs to map the behavior of the code (the opcodes, in this case)?
The Lua VM is a register machine with a nearly unlimited supply of registers, which means you don't have to deal with the consequences of register allocation. That makes the whole thing much more bearable than decompiling, say, x86.
A very convenient intermediate representation for going up the abstraction level would be SSA. A trivial transform treating registers as local variable pointers and leaving memory loads as they are, followed by an SSA transform [1], will give you code suitable for further analysis. The next step is loop detection (done purely at the CFG level) and, helped by SSA, detection of the loop variables and loop invariants. Once that's done, you'll see that only a handful of common patterns exist, and they can be directly translated to higher-level loops. Detecting if and other linear control-flow sequences is even easier once you're in SSA form.
One nice property of SSA is that you can easily construct high-level AST expressions out of it. You have a use count for every SSA variable, so you can simply substitute all the single-use variables (those not produced by side-effecting instructions) in place of their use (and the side-effecting ones too, if you maintain their order). Only multi-use variables will remain.
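To make that substitution step concrete, here is a toy, self-contained C sketch (the types and names are invented, not taken from any real decompiler) of folding single-use values into the expression of their only user, while multi-use values would stay behind as named temporaries:
/* Toy sketch: fold single-use SSA values into the expression tree of their
   only user; multi-use values stay behind as named temporaries.
   All types and names here are invented for illustration. */
#include <stdio.h>

typedef enum { K_CONST, K_GLOBAL, K_ADD } Kind;

typedef struct Expr {
    Kind kind;
    int id;                   /* number of this SSA value, used when naming a temporary */
    int value;                /* K_CONST payload */
    const char *name;         /* K_GLOBAL payload */
    struct Expr *lhs, *rhs;   /* K_ADD operands */
    int uses;                 /* use count of this SSA value */
} Expr;

static void emit(const Expr *e) {
    if (e->uses > 1) { printf("t%d", e->id); return; }   /* multi-use: keep a name */
    switch (e->kind) {                                   /* single-use: inline it  */
    case K_CONST:  printf("%d", e->value); break;
    case K_GLOBAL: printf("%s", e->name);  break;
    case K_ADD:    emit(e->lhs); printf(" + "); emit(e->rhs); break;
    }
}

int main(void) {
    /* SSA view of line [3] above: r0_3 = plz_help_me, r1_1 = oh_no,
       r0_4 = r0_3 + r1_1, plz_work = r0_4; each value is used exactly once. */
    Expr a   = { K_GLOBAL, 1, 0, "plz_help_me", NULL, NULL, 1 };
    Expr b   = { K_GLOBAL, 2, 0, "oh_no",       NULL, NULL, 1 };
    Expr sum = { K_ADD,    3, 0, NULL,          &a, &b,     1 };
    printf("plz_work = ");
    emit(&sum);
    printf("\n");
    return 0;
}
Run on the SSA view of the example bytecode, it reproduces plz_work = plz_help_me + oh_no.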
Of course, you'll never get meaningful local variable names out of this procedure. Globals are preserved.
[1] https://pfalcon.github.io/ssabook/latest/
It looks like there is a Lua decompiler for Lua 5.1 written in C that can be used for study: https://github.com/viruscamp/luadec. The paper https://www.lua.org/wshop05/Muhammad.pdf describes how it works.
And on the extremely simple example you give, it comes back with exactly the same thing:
$ ./luadec ../tmp/luac.out
-- Decompiled using luadec 2.2 rev: 895d923 for Lua 5.1 from https://github.com/viruscamp/luadec
-- Command line: ../tmp/luac.out
-- params : ...
-- function num : 0
plz_help_me = 2
oh_no = 24
plz_work = plz_help_me + oh_no
print(plz_work)
There is a debug flag -d to show a little bit about how it works:
-- Decompiled using luadec 2.2 rev: 895d923 for Lua 5.1 from https://github.com/viruscamp/luadec
-- Command line: -d ../tmp/luac.out
LoopTree of function 0
FUNCTION_STMT=0x2e2f5ec0 prep=-1 start=0 body=0 end=11 out=12 block=0x2e2f5f10
----------------------------------------------
1 LOADK 0 1
SET_SIZE(tpend) = 0
next bool: 0
locals(0):
vpend(0):
tpend(1): 0{2}
Note: Lua's VM in 5.1 (and the other versions up to the latest 5.4) is register-based, but the decompiler reconstructs expressions in a stack-oriented way. To rebuild plz_help_me + oh_no, both operand expressions are tracked on a stack, and when the ADD bytecode instruction is seen, two entries are popped off of the stack and the result is pushed back on. From the decompiler's standpoint, a tree fragment or "AST" fragment of the addition with its two operands is created.
In the output above, a constant is pushed onto the stack tpend, where we see 0{2}. The part in braces seems to hold the output representation. Later on, just before the ADD opcode, you will see two entries on the stack. Entries are separated simply by a space in the debug output.
----------------------------------------------
2 SETGLOBAL 0 0
next bool: 0
locals(0):
vpend(1): -1{plz_help_me=2}
tpend(0):
----------------------------------------------
This seems to set the variable plz_help_me from the top of the stack and push the assignment statement onto vpend instead.
3 LOADK 0 3
SET_SIZE(tpend) = 0
next bool: 0
locals(0):
vpend(0):
tpend(1): 0{24}
...
At this point I guess the decompiler sees that the stack is empty (SET_SIZE(tpend) = 0), so the first statement is done.
Moving on to the next statement, the constant 24 is then loaded onto tpend.
In vpend and tpend, we see the output as it gets built up.
Looking at the C code, it looks like it loops over functions, within that loops over what it thinks are statements, and within that is driven by a loop over bytecode instructions, "walking" them based on instruction type and information it saves.
It looks like it builds an AST on the fly in a somewhat ad hoc manner. (By "ad hoc" I simply mean that programmer code builds the tree.)
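To make that concrete, here is a rough sketch in C of the kind of loop the debug output suggests; this is not luadec's actual code, and every name and data structure below is invented for illustration. Constants and global reads are pushed as expression text, ADD pops two fragments and pushes a tree, and SETGLOBAL pops a value and emits a statement.
/* Rough sketch of the shape of that loop; not luadec's code. */
#include <stdio.h>
#include <string.h>

typedef enum { OP_LOADK, OP_GETGLOBAL, OP_ADD, OP_SETGLOBAL } Op;
typedef struct { Op op; const char *k; } Insn;   /* k: constant/global name, if any */

int main(void)
{
    /* The instruction stream for line [3]: plz_work = plz_help_me + oh_no */
    Insn code[] = {
        { OP_GETGLOBAL, "plz_help_me" },
        { OP_GETGLOBAL, "oh_no" },
        { OP_ADD,       NULL },
        { OP_SETGLOBAL, "plz_work" },
    };
    char stack[8][128];   /* expression fragments, playing the role of tpend */
    int sp = 0;

    for (size_t i = 0; i < sizeof code / sizeof code[0]; i++) {
        switch (code[i].op) {
        case OP_LOADK:                      /* push a constant as expression text    */
        case OP_GETGLOBAL:                  /* push a global read as expression text */
            snprintf(stack[sp++], sizeof stack[0], "%s", code[i].k);
            break;
        case OP_ADD: {                      /* pop two operands, push the ADD tree   */
            char rhs[128], lhs[128];
            strcpy(rhs, stack[--sp]);
            strcpy(lhs, stack[--sp]);
            snprintf(stack[sp++], sizeof stack[0], "%s + %s", lhs, rhs);
            break;
        }
        case OP_SETGLOBAL:                  /* pop a value and emit a statement      */
            printf("%s = %s\n", code[i].k, stack[--sp]);
            break;
        }
    }
    return 0;
}
Fed the four instructions from line [3] of the listing above, it prints plz_work = plz_help_me + oh_no.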
Although Static Single Assignment (SSA) is a win for compilation, especially of optimized code, in the decompilation direction for high-level bytecode like this it is usually neither needed nor wanted. SSA maps a finite set of reusable names to a potentially infinite set of single-assignment names.
In a decompiler we want to go the other direction: from infinite (or even finite) registers back to the names the programmer used, and here that mapping is already given, so just use it. This is simpler, too.
If the same name is used for two conceptually different computations, tracking that is probably wanted. Correcting or improving the style of the source code in ways the decompiler writer prefers might be nice, but usually isn't appreciated. (In fact, in my experience many programmers using a decompiler like this will call it a bug, when technically it isn't.)
The luadec decompiler breaks things into functions and walks over the disassembled instructions within each function in an ad hoc way, dispatching on the specific opcodes. After this tree or list of tree nodes is built, an AST if you will (and that is the term this decompiler uses), there seem to be custom print routines for each AST node type.
Personally, I think there is a better way, and one that is slightly different from the one suggested in another answer.
I wrote about decompilation from this novel perspective here. I believe it applies to Lua as well. An older, longer, more research-oriented paper is here.
As with many decompilers, logic instructions introduce control flow, and strategies that are instruction-based and commit to a path early on from the instructions have a hard time shifting these around. So I prefer a pattern-matching/parsing/term-rewriting approach where control flow is marked in the instructions, such as by adding pseudo Phi-function instructions (or their equivalent). Dominator regions from a control-flow graph also help greatly here.
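For the dominator part, the classic iterative data-flow computation is small enough to sketch. This is a minimal illustration over a toy four-block diamond CFG using one bit per basic block; nothing here comes from luadec or the linked papers.
/* Minimal sketch of the classic iterative dominator computation. */
#include <stdint.h>
#include <stdio.h>

/* pred[i] is a bitmask of the predecessors of block i; dom[i] receives a
   bitmask of the blocks that dominate block i.  Block 0 is the entry. */
static void compute_dominators(int n, const uint32_t pred[], uint32_t dom[])
{
    uint32_t all = (n >= 32) ? 0xFFFFFFFFu : ((1u << n) - 1u);
    dom[0] = 1u;                              /* the entry dominates only itself  */
    for (int i = 1; i < n; i++) dom[i] = all; /* start everything else at "all"   */
    int changed = 1;
    while (changed) {
        changed = 0;
        for (int i = 1; i < n; i++) {
            uint32_t d = all;                 /* intersect dominators of the preds */
            for (int p = 0; p < n; p++)
                if (pred[i] & (1u << p)) d &= dom[p];
            d |= (1u << i);                   /* every block dominates itself      */
            if (d != dom[i]) { dom[i] = d; changed = 1; }
        }
    }
}

int main(void)
{
    /* A diamond: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3 (think "if/else then join"). */
    uint32_t pred[4] = { 0, 1u << 0, 1u << 0, (1u << 1) | (1u << 2) };
    uint32_t dom[4];
    compute_dominators(4, pred, dom);
    for (int i = 0; i < 4; i++)
        printf("dom(%d) = 0x%x\n", i, (unsigned)dom[i]);
    return 0;
}
For the diamond, dom(3) comes out as {0, 3}: the join point is dominated only by the entry, which is exactly the information that lets you delimit if/else regions.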
Edit:
There is also unluac, a Lua decompiler written in Java that can be studied. Although there are very few comments in the code describing how it works, the code itself seems well organized, and the functions are pretty small and clear. It seems similar in operation in that there is a part that is bytecode-operation based and a part that is more expression-tree based. It seems to be actively worked on and is the only decompiler that supports Lua versions from 5.0 up to 5.4.
Related
Consider the following method in the Juicer class:
Juicer >> juiceOf: aString
| fruit juice |
fruit := self gather: aString.
juice := self extractJuiceFrom: fruit.
^juice withoutSeeds
It generates the following bytecodes
25 self ; 1
26 pushTemp: 0 ; 2
27 send: gather:
28 popIntoTemp: 1 ; 3
29 self ; 4
30 pushTemp: 1 ; 5
31 send: extractJuiceFrom:
32 popIntoTemp: 2 ; 6 <-
33 pushTemp: 2 ; 7 <-
34 send: withoutSeeds
35 returnTop
Now note that 32 and 33 cancel out:
25 self ; 1
26 pushTemp: 0 ; 2
27 send: gather:
28 popIntoTemp: 1 ; 3 *
29 self ; 4 *
30 pushTemp: 1 ; 5 *
31 send: extractJuiceFrom:
32 storeIntoTemp: 2 ; 6 <-
33 send: withoutSeeds
34 returnTop
Next consider 28, 29 and 30. They insert self below the result of gather. The same stack configuration could have been achieved by pushing self before sending the first message:
25 self ; 1 <-
26 self ; 2
27 pushTemp: 0 ; 3
28 send: gather:
29 popIntoTemp: 1 ; 4 <-
30 pushTemp: 1 ; 5 <-
31 send: extractJuiceFrom:
32 storeIntoTemp: 2 ; 6
33 send: withoutSeeds
34 returnTop
Now cancel out 29 and 30
25 self ; 1
26 self ; 2
27 pushTemp: 0 ; 3
28 send: gather:
29 storeIntoTemp: 1 ; 4 <-
30 send: extractJuiceFrom:
31 storeIntoTemp: 2 ; 5
32 send: withoutSeeds
33 returnTop
Temporaries 1 and 2 are written but not read. So, except when debugging, they could be skipped, leading to:
25 self ; 1
26 self ; 2
27 pushTemp: 0 ; 3
28 send: gather:
29 send: extractJuiceFrom:
30 send: withoutSeeds
31 returnTop
This last version, which saves 4 out of 7 stack operations, corresponds to the less expressive and less clear source:
Juicer >> juiceOf: aString
^(self extractJuiceFrom: (self gather: aString)) withoutSeeds
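To show the mechanics rather than the policy question, here is a toy sketch in C (invented opcode names, not Squeak or Pharo code) of the first peephole rule used above: a popIntoTemp: immediately followed by a pushTemp: of the same temporary collapses into a single storeIntoTemp:.
/* Toy peephole sketch; the opcode names below are invented. */
#include <stdio.h>

typedef enum { PUSH_SELF, PUSH_TEMP, POP_INTO_TEMP, STORE_INTO_TEMP, SEND, RETURN_TOP } Op;
typedef struct { Op op; int arg; const char *sel; } BC;

static int peephole(const BC *in, int n, BC *out)
{
    int m = 0;
    for (int i = 0; i < n; i++) {
        if (i + 1 < n && in[i].op == POP_INTO_TEMP &&
            in[i + 1].op == PUSH_TEMP && in[i].arg == in[i + 1].arg) {
            out[m++] = (BC){ STORE_INTO_TEMP, in[i].arg, NULL };
            i++;                              /* skip the cancelled pushTemp: */
        } else {
            out[m++] = in[i];
        }
    }
    return m;
}

int main(void)
{
    /* bytecodes 31..35 of the original juiceOf: method */
    BC in[] = {
        { SEND,          0, "extractJuiceFrom:" },
        { POP_INTO_TEMP, 2, NULL },
        { PUSH_TEMP,     2, NULL },
        { SEND,          0, "withoutSeeds" },
        { RETURN_TOP,    0, NULL },
    };
    BC out[8];
    int m = peephole(in, 5, out);
    printf("%d bytecodes after the peephole (was 5)\n", m);  /* prints 4 */
    return 0;
}
Running it over bytecodes 31 through 35 of the original method drops the pair at 32 and 33, exactly as in the second listing above.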
Note also that there are other possible optimizations that Pharo (I haven't checked Squeak) does not implement (e.g., jump chaining.) These optimizations would encourage the Smalltalk programmer to better express their intentions without having to pay the cost of additional computations.
My question is whether these improvements are an illusion or not. Concretely, are bytecode optimizations absent from Pharo/Squeak because they are known to have little relevance, or are they regarded as beneficial but haven't been addressed yet?
EDIT
An interesting advantage of using a register+stack architecture [cf. A Smalltalk Virtual Machine Architectural Model by Allen Wirfs-Brock and Pat Caudill] is that the additional space provided by registers makes it easier to manipulate bytecodes for the sake of optimization. Of course, even though these kinds of optimizations are not as relevant as method inlining or polymorphic inline caches, as pointed out in the answer below, they shouldn't be disregarded, especially when combined with others implemented by the JIT compiler. Another interesting topic to analyze is whether destructive optimization (i.e., the kind that requires de-optimization to support the debugger) is actually necessary, or whether enough performance gain can be attained by non-destructive techniques.
The main annoyance when you start playing with such optimizations is the debugger interface.
Historically, and still currently in Squeak, the debugger simulates the bytecode level and needs to map the bytecodes to the corresponding Smalltalk instruction.
So I think the gain was too low to justify the added complexity, or, even worse, the degradation of the debugging facility.
Pharo wants to change the debugger to operate at a higher level (the Abstract Syntax Tree), but I don't know how they will get back down to the bytecode, which is all the VM knows about.
IMO, this kind of optimization might better be implemented in the JIT compiler which transforms bytecode to machine native code.
EDIT
The greatest gains are in eliminating the sends themselves (by inlining), because they are much more expensive (about 10x) than the stack operations: there are 10 times more bytecodes executed per second than sends when you run 1 tinyBenchmarks (Cog VM).
Interestingly, such optimizations could take place in the Smalltalk image, but only on hot spots detected by the VM, as in the SISTA effort. See for example https://clementbera.wordpress.com/2014/01/22/the-sista-chronicles-iii-an-intermediate-representation-for-optimizations/
So, in the light of SISTA, the answer is rather: interesting, not yet addressed, but actively studied (and work in progress)!
All the machinery for de-optimizing when a method has to be debugged is still one of the difficult points, as I understand it.
I think that a broader question is worth answering: are bytecodes worth the effort? Bytecodes were conceived as a compact and portable representation of code that is close to the target machine. As such, they are easy to interpret, but slow to execute.
Bytecodes do not excel at either of these games, and that usually makes them not the best choice if you want to write either an interpreter or a fast VM. On one hand, AST nodes are far easier to interpret (only a few node types vs. lots of different bytecodes). On the other hand, with the advent of JIT compilers, it became clear that running native code instead is not only possible but also much faster.
If you look at the most efficient VM implementations of JavaScript (which can be considered the most modern compilers of today) and also Java (HotSpot, Graal), you'll see they all use a tiered compilation scheme. Methods are initially interpreted from the AST, and only jitted when they become a hot spot.
At the higher tiers of compilation there are no bytecodes. The key component in a compiler is its intermediate representation, and bytecodes do not fulfill the required properties. The most optimizable IRs are much more fine-grained: they are in SSA form and allow specific representation of registers and memory. This allows for much better code analysis and optimization.
Then again, if you are interested in portable code, there isn't anything more portable than the AST. Besides, it's easier and more practical to implement AST-based debuggers and profilers than bytecode-based ones. The only remaining problem is compactness, but in any case you can implement something like ast-codes (coded ASTs, similar to bytecodes but representing the tree).
On the other hand, if you want full speed, then you'll go for a JIT with a good IR and no bytecodes. I think that bytecodes don't fill many gaps in today's VMs but remain mostly for backwards compatibility (also, there are many examples of hardware architectures that directly execute Java bytecodes).
There are also some cool experiments with the Cog VM related to bytecodes. But from what I understand, they transform the bytecode into another IR for optimizing, then convert back to bytecodes. I'm not sure whether there's a technical gain in the last conversion besides reusing the original JIT architecture, or whether there actually is any optimization at the bytecode level.
From Section 4.1 of Programming in Lua.
In a multiple assignment, Lua first evaluates all values and only then
executes the assignments. Therefore, we can use a multiple assignment
to swap two values, as in
x, y = y, x  -- swap 'x' for 'y'
How does the assignment actually work?
How multiple assignment gets implemented depends on which implementation of Lua you are using. The implementation is free to do things any way it likes as long as it preserves the semantics. That is, no matter how things get implemented, you should get the same result as if you had saved all the values on the RHS before assigning them to the LHS, as the Lua book explains.
If you are still curious about the actual implementation, one thing you can do is look at the bytecode that gets produced for a given program. For example, taking the following program
local x,y = 10, 11
x,y = y,x
and passing it to the bytecode compiler (luac -l) for Lua 5.2 gives
main <lop.lua:0,0> (6 instructions at 0x9b36b50)
0+ params, 3 slots, 1 upvalue, 2 locals, 2 constants, 0 functions
1 [1] LOADK 0 -1 ; 10
2 [1] LOADK 1 -2 ; 11
3 [2] MOVE 2 1
4 [2] MOVE 1 0
5 [2] MOVE 0 2
6 [2] RETURN 0 1
The MOVE opcode assigns the value in the right register to the left register (see lopcodes.h in the Lua source for more details). Apparently, what is going on is that registers 0 and 1 are being used for x and y, and slot 2 is being used as a temporary extra slot. x and y get initialized with constants in the first two opcodes, and in the next three opcodes a swap is performed using that temporary slot, kind of like you would do by hand:
tmp = y -- MOVE 2 1
y = x -- MOVE 1 0
x = tmp -- MOVE 0 2
Given that Lua used a different approach for the swapping assignment than for the static initialization, I wouldn't be surprised if you got different results for different kinds of multiple assignments (setting table fields is probably going to look very different, especially since then the order should matter due to metamethods...). We would need to find the part of the source where the bytecode gets emitted to be 100% sure, though. And as I mentioned before, all of this might vary between Lua versions and implementations, especially if you compare LuaJIT and PUC Lua.
I'm trying to understand the output of the gcov tool. Running it with no options makes sense, but I want to understand the branch coverage options. Unfortunately it's hard to make sense of what the branches do and why they aren't taken. Below is the output for a method (compiled using the latest LLVM/Clang build).
function -[TestCoverageAppDelegate loopThroughArray:] called 5 returned 100% blocks executed 88%
5: 30:- (NSInteger)loopThroughArray:(NSArray *)array {
5: 31: NSInteger i = 0;
22: 32: for (NSString *string in array) {
branch 0 taken 0
branch 1 taken 7
-: 33:
22: 34: }
branch 0 taken 4
branch 1 taken 3
branch 2 taken 0
branch 3 taken 3
5: 35: return i;
-: 36:}
I've run 5 tests through this, passing in nil, an empty array, an array with 1 object, an array with 2 objects, and an array with 4 objects. I can guess that in the first case, branch 1 means "go into the loop", but I haven't a clue what branch 0 is. In the second case branch 0 seems to be "loop through again", branch 1 seems to be "end the loop", and branch 3 is "continue/exit the loop", but I have no idea what branch 2 is or why/when it would be executed.
If anyone knows how to decipher the branch info, or knows of any detailed documentation on what it all means, I'd appreciate the help.
Gcov works by instrumenting (at compile time) every basic block of machine instructions (think of assembler). A basic block is a linear section of code with no branches and no labels inside it, so if you start running a basic block, you will reach the end of that basic block. Basic blocks are organized into a CFG (control flow graph; think of it as a directed graph) which shows the relations between basic blocks: an edge from V1 to V2 means control can pass from V1 to V2. The profile-arcs mode of the compiler and gcov want an execution count for every line, and they get it by counting basic block executions. Some of the edges in the CFG are instrumented and some are not, because there are algebraic relations between the counts of the basic blocks in the graph.
Your ObjC construction (for..in) is lowered (converted early in compilation) to several basic blocks, so gcov sees 4 branches, because it sees only the lowered basic blocks. It knows nothing about this lowering, but it does know which source line corresponds to each assembler instruction (from the debug info). So the branches you see are edges of the CFG.
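As a rough illustration (plain C rather than the actual ObjC fast-enumeration lowering), here is how even a simple loop splits into basic blocks; the "branch N taken" lines in the gcov output are counts on edges between blocks like these. The block labels below are invented.
/* Plain C sketch; block labels are illustrative only. */
int sum_first_n(const int *a, int n)
{
    int s = 0;                    /* BB0: entry, falls through to the test  */
    for (int i = 0; i < n; i++) { /* BB1: test; edge to BB2 (body) or BB3   */
        s += a[i];                /* BB2: body; edge back to BB1            */
    }
    return s;                     /* BB3: exit                              */
}
Compiling with -fprofile-arcs -ftest-coverage, running the program, and then running gcov -b on the source should show per-edge counts of this kind, which is where the branch lines in the listing above come from.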
If you want to see the basic blocks, you should do an assembler dump of the compiled program, or disassemble the binary, or dump the CFG from the compiler. You can do this for both the profile-arcs and non-profile-arcs modes and compare them.
The profile-arcs mode will have a lot of calls to and increments of something like "__llvm_gcov_ctr" or "__llvm_gcda_edge": this is the actual instrumentation of the basic blocks.
I'm trying to write a routine that will logically bitshift all elements of a vector n positions to the right, in the most efficient way possible, for the following vector types: BYTE->BYTE, WORD->WORD, DWORD->DWORD and WORD->BYTE (assuming that only 8 bits are present in the result). I would like to have three routines for each type depending on the processor (SSE2 supported, only MMX supported, only the standard instruction set supported). Therefore I need 12 functions in total.
I have already found by myself how to back up and restore the registers that I need, how to make a loop, how to copy data into regular registers or MMX registers, and how to shift logically by 1 position.
Because I'm not familiar with assembly language, that's about it.
Which registers should I use for each instruction set?
How can I optimize for keeping the large vector (an image) in the L1 cache?
How do I find the next element of the vector (a pointer kind of thing)? I know I can do a mov by address, and I assume I have to increment the address by 1, 2 or 4 depending on my data type.
Although I have all the ideas, writing the code is a bit difficult at this point.
Thank you.
Arnaud.
Edit:
Here is what I'm trying to do with MMX for a shift by 1 on a DWORD:
__asm("push mm"); // backup register
__asm("push cx"); // backup register
__asm("mov %cx, length"); // initialize loop
__asm("loopstart_shift1:"); // start label
__asm("movd %xmm0, r/m32"); // get 32 bits data
__asm("psrlq %xmm0, 1"); // right shift 32 bits data logically (stuffs 0 on the left) by 1
__asm("mov r/m32,%xmm0"); // set 32 bits data
__asm("dec %cx"); // decrement index
__asm("cmp %cx,0");
__asm("jnz loopstart_shift1");
__asm("pop cx"); // restore register
__asm("pop mm"); // restore register
__asm("emms"); // leave MMX state
I strongly suggest you pause and take a look at using intrinsics with C or C++ instead of trying to write raw asm - that way the C/C++ compiler will take care of all the register allocation, instruction scheduling and general housekeeping tasks and you can just focus on the important parts, e.g. instead of using psrlq see _m_psrlq in mmintrin.h. (Better yet, look at using 128 bit SSE intrinsics.)
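For example, here is a hedged sketch of the WORD->WORD case with SSE2 intrinsics; the function name is made up, and a real routine would still need a scalar tail loop and unaligned-pointer handling.
/* Sketch only: assumes len is a multiple of 8 and 16-byte aligned pointers. */
#include <emmintrin.h>   /* SSE2 */
#include <stddef.h>
#include <stdint.h>

void shift_right_u16(uint16_t *dst, const uint16_t *src, size_t len, int n)
{
    __m128i cnt = _mm_cvtsi32_si128(n);                    /* shift count in the low lane      */
    for (size_t i = 0; i < len; i += 8) {                  /* 8 WORDs per 128-bit register     */
        __m128i v = _mm_load_si128((const __m128i *)(src + i));
        v = _mm_srl_epi16(v, cnt);                         /* logical right shift of each WORD */
        _mm_store_si128((__m128i *)(dst + i), v);
    }
}
The compiler picks the registers and schedules the loads, shifts and stores for you, which is exactly the housekeeping mentioned above.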
Sounds like you'd benefit from either using or looking into BitMagic's source. It's entirely intrinsics-based too, which makes it far more portable (though from the looks of it you're using GCC, so you might have to find an MSVC-to-GCC intrinsics mapping).
I'm not talking about algorithmic stuff (e.g., use quicksort instead of bubblesort), and I'm not talking about simple things like loop unrolling.
I'm talking about the hardcore stuff. Like Tiny Teensy ELF, The Story of Mel; practically everything in the demoscene, and so on.
I once wrote a brute force RC5 key search that processed two keys at a time, the first key used the integer pipeline, the second key used the SSE pipelines and the two were interleaved at the instruction level. This was then coupled with a supervisor program that ran an instance of the code on each core in the system. In total, the code ran about 25 times faster than a naive C version.
In one (here unnamed) video game engine I worked with, they had rewritten the model-export tool (the thing that turns a Maya mesh into something the game loads) so that instead of just emitting data, it would actually emit the exact stream of microinstructions that would be necessary to render that particular model. It used a genetic algorithm to find the one that would run in the minimum number of cycles. That is to say, the data format for a given model was actually a perfectly-optimized subroutine for rendering just that model. So, drawing a mesh to the screen meant loading it into memory and branching into it.
(This wasn't for a PC, but for a console that had a vector unit separate and parallel to the CPU.)
In the early days of DOS, when we used floppy discs for all data transport, there were viruses as well. One common way for viruses to infect different computers was to copy a virus bootloader into the bootsector of an inserted floppy disc. When the user inserted the floppy disc into another computer and rebooted without remembering to remove the floppy, the virus was run and infected the harddrive bootsector, thus permanently infecting the host PC. A particularly annoying virus I was infected by was called "Form". To battle this I wrote a custom floppy bootsector that had the following features:
Validate the bootsector of the host harddrive and make sure it was not infected.
Validate the floppy bootsector and make sure that it was not infected.
Code to remove the virus from the harddrive if it was infected.
Code to duplicate the antivirus bootsector to another floppy if a special key was pressed.
Code to boot the harddrive if all was well and no infection was found.
This was done in the program space of a bootsector, about 440 bytes :)
The biggest problem for my mates was the very cryptic messages displayed because I needed all the space for code. It was like "FFVD RM?", which meant "FindForm Virus Detected, Remove?"
I was quite happy with that piece of code. The optimization was for program size, not speed; two quite different kinds of optimization in assembly.
My favorite is the floating point inverse square root via integer operations. This is a cool little hack that exploits how floating point values are stored, and it can execute faster (even doing 1/result is faster than the stock-standard square root function) or produce more accurate results than the standard methods.
In C/C++ the code is (sourced from Wikipedia):
float InvSqrt (float x)
{
    float xhalf = 0.5f * x;
    int i = *(int*)&x;               // reinterpret the float's bit pattern as an integer
    i = 0x5f3759df - (i >> 1);       // Now this is what you call a real magic number
    x = *(float*)&i;                 // reinterpret the adjusted bits back as a float
    x = x * (1.5f - xhalf * x * x);  // one Newton-Raphson step to refine the estimate
    return x;
}
A Very Biological Optimisation
Quick background: Triplets of DNA nucleotides (A, C, G and T) encode amino acids, which are joined into proteins, which are what make up most of most living things.
Ordinarily, each different protein requires a separate sequence of DNA triplets (its "gene") to encode its amino acids -- so e.g. 3 proteins of lengths 30, 40, and 50 would require 90 + 120 + 150 = 360 nucleotides in total. However, in viruses, space is at a premium -- so some viruses overlap the DNA sequences for different genes, using the fact that there are 6 possible "reading frames" to use for DNA-to-protein translation (namely starting from a position that is divisible by 3, from a position that leaves remainder 1 when divided by 3, or from a position that leaves remainder 2 when divided by 3; and the same three again, but reading the sequence in reverse).
For comparison: Try writing an x86 assembly language program where the 300-byte function doFoo() begins at offset 0x1000... and another 200-byte function doBar() starts at offset 0x1001! (I propose a name for this competition: Are you smarter than Hepatitis B?)
That's hardcore space optimisation!
UPDATE: Links to further info:
Reading Frames on Wikipedia suggests Hepatitis B and "Barley Yellow Dwarf" virus (a plant virus) both overlap reading frames.
Hepatitis B genome info on Wikipedia. Seems that different reading-frame subunits produce different variations of a surface protein.
Or you could google for "overlapping reading frames"
Seems this can even happen in mammals! Extensively overlapping reading frames in a second mammalian gene is a 2001 scientific paper by Marilyn Kozak that talks about a "second" gene in rat with "extensive overlapping reading frames". (This is quite surprising as mammals have a genome structure that provides ample room for separate genes for separate proteins.) Haven't read beyond the abstract myself.
I wrote a tile-based game engine for the Apple IIgs in 65816 assembly language a few years ago. This was a fairly slow machine and programming "on the metal" is a virtual requirement for coaxing out acceptable performance.
To update the graphics screen quickly, one has to map the stack to the screen so that some special instructions can be used that allow updating 4 screen pixels in only 5 machine cycles. This is nothing particularly fantastic and is described in detail in IIgs Tech Note #70. The hard-core bit was how I had to organize the code to make it flexible enough to be a general-purpose library while still maintaining maximum speed.
I decomposed the graphics screen into scan lines and created a 246 byte code buffer to insert the specialized 65816 opcodes. The 246 bytes are needed because each scan line of the graphics screen is 80 words wide and 1 additional word is required on each end for smooth scrolling. The Push Effective Address (PEA) instruction takes up 3 bytes, so 3 * (80 + 1 + 1) = 246 bytes.
The graphics screen is rendered by jumping to an address within the 246-byte code buffer that corresponds to the right edge of the screen and patching a BRanch Always (BRA) instruction into the code at the word immediately following the left-most word. The BRA instruction takes a signed 8-bit offset as its argument, so it just barely has the range to jump out of the code buffer.
Even this isn't too terribly difficult, but the real hard-core optimization comes in here. My graphics engine actually supported two independent background layers and animated tiles by using different 3-byte code sequences depending on the mode:
Background 1 uses a Push Effective Address (PEA) instruction
Background 2 uses a Load Indirect Indexed (LDA ($00),y) instruction followed by a push (PHA)
Animated tiles use a Load Direct Page Indexed (LDA $00,x) instruction followed by a push (PHA)
The critical restriction is that both of the 65816 registers (X and Y) are used to reference data and cannot be modified. Further the direct page register (D) is set based on the origin of the second background and cannot be changed; the data bank register is set to the data bank that holds pixel data for the second background and cannot be changed; the stack pointer (S) is mapped to graphics screen, so there is no possibility of jumping to a subroutine and returning.
Given these restrictions, I had the need to quickly handle cases where a word that is about to be pushed onto the stack is mixed, i.e. half comes from Background 1 and half from Background 2. My solution was to trade memory for speed. Because all of the normal registers were in use, I only had the Program Counter (PC) register to work with. My solution was the following:
Define a code fragment to do the blend in the same 64K program bank as the code buffer
Create a copy of this code for each of the 82 words
There is a 1-1 correspondence, so the return from the code fragment can be a hard-coded address
Done! We have a hard-coded subroutine that does not affect the CPU registers.
Here are the actual code fragments:
code_buff: PEA $0000 ; rightmost word (16-bits = 4 pixels)
PEA $0000 ; background 1
PEA $0000 ; background 1
PEA $0000 ; background 1
LDA (72),y ; background 2
PHA
LDA (70),y ; background 2
PHA
JMP word_68 ; mix the data
word_68_rtn: PEA $0000 ; more background 1
...
PEA $0000
BRA *+40 ; patched exit code
...
word_68: LDA (68),y ; load data for background 2
AND #$00FF ; mask
ORA #$AB00 ; blend with data from background 1
PHA
JMP word_68_rtn ; jump back
word_66: LDA (66),y
...
The end result was a near-optimal blitter that has minimal overhead and cranks out more than 15 frames per second at 320x200 on a 2.5 MHz CPU with a 1 MB/s memory bus.
Michael Abrash's "Zen of Assembly Language" had some nifty stuff, though I admit I don't recall specifics off the top of my head.
Actually it seems like everything Abrash wrote had some nifty optimization stuff in it.
The Stalin Scheme compiler is pretty crazy in that aspect.
I once saw a switch statement with a lot of empty cases, a comment at the head of the switch said something along the lines of:
Added case statements that are never hit because the compiler only turns the switch into a jump-table if there are more than N cases
I forget what N was. This was in the source code for Windows that was leaked in 2004.
I've gone to the Intel (or AMD) architecture references to see what instructions there are. movsx (move with sign extension) is awesome for moving small signed values into big spaces in one instruction, for example.
Likewise, if you know you only use 16-bit values but you can access all of EAX, EBX, ECX, EDX, etc., then you have 8 very fast locations for values: just rotate the registers by 16 bits to access the other values.
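As a small, hypothetical illustration of that rotate trick (GCC-style inline asm, x86 only; the helper name is made up): keep two 16-bit values packed in one 32-bit register and swap which half sits in the low 16 bits.
/* Hypothetical helper, x86 only. */
#include <stdint.h>

static inline uint32_t swap_halves(uint32_t packed)
{
    __asm__("rorl $16, %0"   /* rotate right by 16: high half <-> low half */
            : "+r"(packed));
    return packed;
}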
The EFF DES cracker, which used custom-built hardware to generate candidate keys (the hardware they made could prove a key isn't the solution, but could not prove a key was the solution) that were then tested with more conventional code.
The FSG 2.0 packer, made by a Polish team specifically for packing executables written in assembly. If packing assembly isn't impressive enough (it's supposed to be almost as small as possible already), the loader it comes with is 158 bytes and fully functional. If you try packing any assembly-made .exe with something like UPX, it will throw a NotCompressableException at you ;)