My code is as follows:
#Const debuggingEnabled = False
Sub debugMessage(str As String)
#If debuggingEnabled Then
Debug.Print str
#End If
End Sub
Sub doThings()
debugMessage "test" 'Does this get optimised away, or must it be explicitly
'wrapped with `#If .. #End If` every time it's called
'if one is to avoid the jump and stack push/pop ops?
End Sub
Does Microsoft's VBA compiler optimise away calls to procedures that do nothing? How can I find out?
Upgrading this from comment to answer:
What is the point in "optimizing away" a procedure containing a single, one-bit, binary, boolean test? Are you sure that this represents a significant portion of execution time? The chances of that are pretty remote.
Before optimizing, always do some profiling, so you don't waste time and worry on bits of code that represent 0.0001% of your execution time.
VBA has no native profiler, but there are third-party options out there, some of them free. This is one, accompanied by an instructive read: Profiling and Optimizing VBA by Bruce McPherson. A DuckDuckGo search gives plenty of other leads as well.
So, the answer to your original question is: I don't think VBA optimizes such a procedure away, but I'm not entirely sure, and either way it is in all likelihood completely irrelevant. Before worrying about optimizing, always do some profiling, so you can spend your time wisely. After profiling, you will almost always find that what you thought slowed down your program is actually very fast, and that something else is the culprit.
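If you just want a quick sanity check before reaching for a profiler, a crude timing harness is enough. A minimal sketch, assuming the debugMessage sub from the question (the iteration count is arbitrary):

Sub timeDebugMessage()
    Dim i As Long
    Dim t As Single
    t = Timer
    For i = 1 To 1000000
        debugMessage "test"  ' the call whose overhead is in question
    Next i
    Debug.Print "1,000,000 calls took "; Timer - t; " seconds"
End Sub

If the reported time is negligible next to your macro's total runtime, the call overhead is not worth optimizing.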
AFAIK the VBA interpreter does little or no optimisation. If you test this in VBE debug mode you can see execution jump to the "empty" sub.
But I would have thought that the additional overhead is drowned out by the rest of the VBA execution anyway.
Related
After hitting a "Procedure too large" error on a specific subroutine of a module, I am refactoring the code in that module. I am curious how effective the refactoring is at reducing the compiled code size.
So I was wondering if there is a way to see the size of the compiled code of a subroutine in a Module of Excel 2016?
My attempts so far have been limited to searching online for methods to determine the size of a compiled module or subroutine in VBA.
It led me to the limits of VBA as listed here. That does not mention a way to see the compiled size of a procedure, nor whether that is (practically) possible.
(As was mentioned in the comments, the current maximum size of the compiled procedure is 64K as stated here.)
I think the compiled procedure size does not relate one-to-one to the number of lines, because that does not take short or long lines into account. (I am currently unsure how a VBA procedure is compiled, and consequently how lines contribute to the compiled size; otherwise a solution could be to compute the compiled size directly.)
Nor does it depend one-to-one on the size of the code as stored in a '.txt' file, because that can contain comments, which do not contribute to the compiled code size.
Disclaimer - I am well aware of the shortcomings of the old code I am modifying, and that it is poorly written. I think this quote from Bill Gates illustrates it quite well:
Measuring programming progress by lines of code is like measuring
aircraft building progress by weight.
I think refactoring, and breaking the code up into shorter subroutines, is an appropriate first step. Wanting to monitor that process, combined with the hard limit VBA imposes via the "Procedure too large" error, led me to this question.
No, you can't do that. VBA is compiled in several stages; it's part p-code, part interpreted, and at the end of the day the size in KB of the compiled procedure isn't what you need to work on.
You need to read about Abstraction, urgently. A procedure that triggers this compiler error is over 10K lines of code. A healthy procedure at the appropriate abstraction level might be somewhere between 500 and a thousand times smaller than that (not kidding) - the size of the compiled code is absolutely meaningless.
If you're worrying about the size of the compiled code, you're not writing the code for a human.
Code isn't written for a compiler to crunch and a runtime environment to run. Code is written for a human maintainer to read, understand, follow, debug, modify, extend, etc. Without any level of abstraction, code is a boring, excruciatingly mind-numbing series of executable statements that are invariably redundant, inefficient, and annoyingly bug-prone.
There are 3rd-party tools you can use to analyze VBA code. MZ-Tools 3.0 was free, but the latest version isn't. It has a feature that can tell you how many lines of code are in each procedure of each module, how much of it is commented-out, whether you have unused variables, etc.
Rubberduck is free and open-source, under active development (disclaimer: I own the project's repository), and has a code metrics feature (distinct from code inspections) that can probably help you identify the most problematic areas (although, parsing a 10K-liner module might take a while):
Lines is your "lines of code" metric; Cyclomatic Complexity is a rough indicator of the different possible execution paths in your code (how much sense that metric makes at a module-level aggregate is debatable); Maximum Nesting is also an indicator of how badly "arrow-shaped" code might be.
High values in these metrics are a sign that you may need to extract methods.
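To illustrate what "extract methods" means here, a minimal hypothetical sketch (all names invented): a long procedure like

Sub ProcessReport()
    ' ...dozens of lines loading data...
    ' ...dozens of lines formatting cells...
    ' ...dozens of lines printing...
End Sub

becomes a short coordinator plus focused helpers:

Sub ProcessReport()
    LoadReportData
    FormatReportCells
    PrintReport
End Sub

Each extracted sub gets a name that says what the chunk does, which brings the metrics down and makes the code self-describing.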
I was wondering if it is possible to group lines of code in VBA for Excel, in order to make the view more pleasant. Meaning: I have 1000 lines of code and would like to group lines in order to hide/show them (code folding). I know this is possible in editors for other languages, so I would like to know if it is possible in VBA.
Thanks
The short answer is no, not out of the box. There is an open-source tool that will be able to do this in a future release: http://rubberduckvba.com/
If you have a lengthy procedure you may want to ask yourself if it does more than one thing. As a general rule, a procedure should only do one thing. If you break up each separate "thing" into individual procedures, then you can call each procedure from a parent subroutine. A good book on this is "Clean Code: A Handbook of Agile Software Craftsmanship".
Sub ParentSub()
    Call ChildSub1()
    Call ChildSub2()
End Sub
In addition to breaking code up (refactoring) into smaller logical functions, you can also break them across different modules if the Private/Public setting allows it.
If your code is in a class (or in a worksheet), then you can also use Friend for your functions/subroutines.
Of course, once refactored, I often find that I build classes to make my code simpler and easier to maintain. This allows you to hide (encapsulate) much of your lower level code and your main program is now at a much higher level and sometimes almost (if you squint hard enough and you have named your classes and methods properly) look like readable English!
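As a rough sketch of that higher-level style (the Invoice class and its members are entirely made up):

' Class module: Invoice
Public Sub LoadFromSheet(ws As Worksheet)
    ' low-level cell reads hidden in here
End Sub
Public Sub Validate()
    ' business rules hidden in here
End Sub

' Standard module
Sub ProcessActiveInvoice()
    Dim inv As Invoice
    Set inv = New Invoice
    inv.LoadFromSheet ActiveSheet
    inv.Validate
End Sub

The main routine now reads as a sequence of intentions rather than cell addresses.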
As an aside, the Call keyword is not recommended unless the called expression does not start with an identifier (MSDN ref). Applied to the answer above, the following works just as well:
Sub ParentSub()
    ChildSub1
    ChildSub2
End Sub
In a "religious" discussion about formatting of Microsoft T-SQL code, I questioned whether or not the GOTO statement was still available in T-SQL syntax. In 13 years of using T-SQL I have never had occasion to use it, and indeed didn't know if it existed. After a brief search of the documentation and to my consternation it does indeed exist!
My question is this:
Is there at least one case where GOTO statements would yield a solution that performs better than one in which other higher order programming constructs are used?
Encryption algorithm implementation
My question is NOT:
How do I use small bits of functionality without creating a whole function to do so?
How do I replicate error handling functionality without copy/paste?
I almost never use GOTO and can easily live without it.
The one case where I would consider using it is when I have complicated code that does lots of error checking. I might want to take some action when an error returns, and GOTO allows me to define a single block of code for the error checking.
You can solve this problem in multiple ways, but GOTO is a reasonable option that guarantees that errors are processed consistently and the code is not cluttered with a lot of IF @@ERROR = 0 . . . statements.
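A minimal sketch of that pattern, in the classic pre-TRY/CATCH style (the table names and error message are invented):

UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderID = @OrderID
IF @@ERROR <> 0 GOTO ErrorHandler

UPDATE dbo.Inventory SET Quantity = Quantity - 1 WHERE ProductID = @ProductID
IF @@ERROR <> 0 GOTO ErrorHandler

RETURN 0

ErrorHandler:
RAISERROR('Order processing failed.', 16, 1)
RETURN 1

Every failure path funnels through a single block, so the error handling stays consistent.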
I saw GOTO a few times in large scripts where people used it to improve readability. Sometimes it is indeed more readable, but mostly it turns into spaghetti code.
I see just one situation where GOTO can perhaps perform better: inside nested WHILE loops, you can use a single GOTO instead of multiple BREAK statements, each of which only exits the innermost loop. Nevertheless, in my opinion BREAK is not much better than GOTO; neither is good programming style.
WHILE ...
    WHILE ...
        WHILE ...
            BREAK      -- each BREAK exits only the innermost loop
        BREAK
    BREAK

WHILE ...
    WHILE ...
        WHILE ...
            GOTO endloops   -- exits all the loops at once
endloops:
I've used GOTO statements mainly when SSIS was not available (don't ask!!) to provide some control flow to a large stored procedure used for ETL processing. That's the only time I've ever used them and while useful in that scenario, it's obviously not a scenario you'd be in.
While implementing a macro via the Visual Basic Editor, I got the error "Compile Error: Procedure Too Long". I wanted to know why there is a limit on the size of a macro, and whether there is any way to increase the allowed size. My macro is pretty big (based on around 150 different cases) and because of this error I will have to divide the task among approximately 8 macros. Is there any way around it?
I will appreciate your help.
You can speculate about the logic for limiting the text size of a subroutine - maybe they don't think it is necessary to have subs that long - maybe they gain some slight performance boost by doing it - maybe it is just a purposeful limitation to prevent people from recording ridiculously long macros. In any case, it is probably better from a scripting/programming perspective not to have monstrous subs anyway, and with loops you shouldn't need subs with that much text in them. So my hypothesis is that they wanted to prevent people from relying on recorded macros when it would be better and more efficient to use loops. But it's just a hypothesis. Good luck.
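For what it's worth, a giant Select Case is usually easy to split without changing behaviour. A hedged sketch (the sub names are invented; each helper holds a slice of the original cases):

Sub HandleCase(caseId As Long)
    Select Case caseId
        Case 1 To 50
            HandleCases1To50 caseId
        Case 51 To 100
            HandleCases51To100 caseId
        Case Else
            HandleRemainingCases caseId
    End Select
End Sub

Sub HandleCases1To50(caseId As Long)
    ' the original bodies for cases 1-50 move here
End Sub

The dispatcher stays tiny and each helper comfortably fits under the compiled-size limit.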
I need to optimize code to make room for some new code. I do not have the space for all the changes. I cannot use code bank switching (80C31 with 64K).
You haven't really given a lot to go on here, but there are two main levels of optimizations you can consider:
Micro-Optimizations:
eg. XOR A instead of MOV A,0
Adam has covered some of these nicely earlier.
Macro-Optimizations:
Look at the structure of your program, the data structures and algorithms used, the tasks performed, and think VERY hard about how these could be rearranged or even removed. Are there whole chunks of code that actually aren't used? Is your code full of debug output statements that the user never sees? Are there functions specific to a single customer that you could leave out of a general release?
To get a good handle on that, you'll need to work out WHERE your memory is being used up. The Linker map is a good place to start with this. Macro-optimizations are where the BIG wins can be made.
As an aside, you could - seriously- try rewriting parts of your code with a good optimizing C compiler. You may be amazed at how tight the code can be. A true assembler hotshot may be able to improve on it, but it can easily be better than most coders. I used the IAR one about 20 years ago, and it blew my socks off.
With assembly language, you'll have to optimize by hand. Here are a few techniques:
Note: IANA8051P (I am not an 8051 programmer, but I have done lots of assembly on other 8-bit chips).
Go through the code looking for any duplicated bits, no matter how small, and make them functions.
Learn some of the more unusual instructions and see if you can use them to optimize. E.g., a nice trick is to use XOR A to clear the accumulator instead of MOV A,0 - it saves a byte.
Another neat trick: if you call a function right before returning, just jump to it instead. So instead of:
CALL otherfunc    ; 2-3 bytes, and pushes a return address on the stack
RET               ; 1 byte
just do:
JMP otherfunc     ; otherfunc's own RET returns straight to our caller
Always make sure you are doing relative jumps and branches wherever possible, they use less memory than absolute jumps.
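Concretely, from the 8051 instruction set:

SJMP near_label   ; 2 bytes, PC-relative, target within -128..+127
AJMP page_label   ; 2 bytes, target within the current 2 KB page
LJMP far_label    ; 3 bytes, absolute, anywhere in 64 KB

One byte saved per jump adds up quickly in a program full of branches.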
That's all I can think of off the top of my head for the moment.
Sorry I am coming to this late, but I once had exactly the same problem, and it became a repeated problem that kept coming back to me. In my case the project was a telephone, on an 8051 family processor, and I had totally maxed out the ROM (code) memory. It kept coming back to me because management kept requesting new features, so each new feature became a two step process. 1) Optimize old stuff to make room 2) Implement the new feature, using up the room I just made.
There are two approaches to optimization: tactical and strategic. Tactical optimizations save a few bytes at a time with micro-optimization ideas. I think you need strategic optimizations, which involve a more radical rethinking of how you are doing things.
Something I remember worked for me and could work for you:
Look at the essence of what your code has to do and try to distill out some really strong, flexible primitive operations. Then rebuild your top-level code so that it does nothing low-level at all except call on the primitives. Ideally, use a table-based approach: the table contains things like input state, event, output state, and primitives. In other words, when an event happens, look up the cell in the table for that event in the current state. That cell tells you what new state to change to (optionally) and what primitive(s) (if any) to execute. You might need multiple sets of states/events/tables/primitives for different layers/subsystems.
One of the many benefits of this approach is that you can think of it as building a custom language for your particular problem, in which you can very efficiently (i.e. with minimal extra code) create new functionality simply by modifying the table.
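A rough illustration of the table-driven idea in C (the states, events, and primitives are invented; a real system would have more of each):

#include <stdint.h>

typedef void (*action_fn)(void);

static void do_nothing(void) {}
static void start_ring(void) { /* drive the ringer */ }
static void stop_ring(void)  { /* silence the ringer */ }

enum { ST_IDLE, ST_RINGING, NUM_STATES };
enum { EV_INCOMING_CALL, EV_OFFHOOK, NUM_EVENTS };

typedef struct {
    uint8_t   next_state;
    action_fn action;
} transition;

/* one row per state, one column per event */
static const transition table[NUM_STATES][NUM_EVENTS] = {
    /* ST_IDLE    */ { { ST_RINGING, start_ring }, { ST_IDLE, do_nothing } },
    /* ST_RINGING */ { { ST_RINGING, do_nothing }, { ST_IDLE, stop_ring  } },
};

static uint8_t state = ST_IDLE;

void handle_event(uint8_t event)
{
    const transition *t = &table[state][event];
    t->action();           /* run the primitive */
    state = t->next_state; /* then take the transition */
}

Adding a feature then mostly means editing the table, not writing new control flow.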
Sorry I am months late and you probably didn't have time to do something this radical anyway. For all I know you were already using a similar approach! But my answer might help someone else someday who knows.
In the whacked-out department, you could also consider compressing part of your code and only keeping the part that is actively used decompressed at any particular point in time. I have a hard time believing that the compress/decompress machinery would take up a small enough portion of the 8051's tiny memory to make this worthwhile, but it has worked wonders on slightly larger systems.
Yet another approach is to turn to a byte-code format or the kind of table-driven code that some state machine tools output -- having a machine understand what your app is doing and generating a completely incomprehensible implementation can be a great way to save room :)
Finally, if the code is indeed compiled in C, I would suggest compiling with a range of different options to see what happens. Also, I wrote a piece on compact C coding for the ESC back in 2001 that is still pretty current. See that text for other tricks for small machines.
1) Where possible, keep your variables in idata, not in xdata.
2) Look at your jump statements - make use of SJMP and AJMP.
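The size difference for point 1 shows up directly in the instruction encodings (the addresses here are arbitrary):

MOV  A, 30h         ; read internal RAM: 2 bytes
MOV  DPTR, #0100h   ; read external RAM: 3 bytes to load the pointer...
MOVX A, @DPTR       ; ...plus 1 byte for the access itself

Every xdata access you move into internal RAM roughly halves the code it takes.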
I assume you know it won't fit because you wrote/compiled it and got the "out of memory" error. :) It appears the answers address your question pretty accurately, short of giving code examples.
I would, however, recommend a few additional thoughts:
1) Make sure all the code is really being used -- a code coverage test? An unused sub is a big win -- this is a tough step -- if you're the original author, it may be easier -- (well, maybe) :)
2) Reconsider the level of "verification" and initialization -- sometimes we have a tendency to be overzealous in ensuring we have initialized variables/memory, and rightly so -- how many times have we been bitten by it? Not saying don't initialize (duh), but if we're doing a memory move, the destination doesn't need to be zeroed first -- this dovetails with 1).
3) Evaluate the new features -- can an existing sub be enhanced to cover both functions, or perhaps an existing feature be replaced?
4) Break up big code if a piece of the big code can save creating new little code.
or perhaps there's an argument for hardware version 2.0 on the table now ... :)
Regards
Besides the already mentioned (more or less) obvious optimizations, here is a really weird (and almost impossible to achieve) one: code reuse. And by code reuse I don't mean the normal kind, but (a) reusing your code as data, or (b) reusing your code as other code. Maybe you can create a LUT (or whatever static data) that can be represented by the hex opcodes of your assembly (here you have to consider Harvard vs. von Neumann architecture).
The other is to reuse code by giving it a different meaning when you address it differently. An example to make clear what I mean: if the bytes of your code look like AABCCCDDEEFFGGHH at address X, where each letter stands for one opcode, imagine you now jump to X+1. Maybe you get completely different functionality, where the bytes regroup into new opcodes: ABC CCD DE EF GH.
But beware: this is not only tricky to achieve (maybe it's impossible), it is also a horror to maintain. So unless you are writing demo-scene code (or something similarly exotic), I would recommend sticking to the other ways to save memory already mentioned.