Can this concept be optimized with OpenMP?

I'd rather not use code since it's a common concept:
Say we have a function which is neither too big nor too small, and which can't easily be optimized in itself with OpenMP's for-loop constructs.
However, it is a function which is called millions of times over the project's run, from a few hundred unrelated places in the code.
[inline by itself doesn't seem to do much (it's on by default in optimized gcc builds), and turning the function into a macro, while not parallel either, would be an undertaking to make compatible.]

OpenMP is for "making things run in parallel" in general, not only for loops... You don't even need any loops at all to make good use of OpenMP and speed up your code.
The only thing which matters is: "do I have several independent operations which run one after another, and which could work at the same time instead?". If so, then you've found an easy spot for optimization with OpenMP.
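For example, independent calls can be run as OpenMP sections. A minimal sketch, where task_a() and task_b() are hypothetical independent operations:

void task_a(void);   /* hypothetical independent operation */
void task_b(void);   /* hypothetical independent operation */

void do_both(void)
{
    #pragma omp parallel sections
    {
        #pragma omp section
        task_a();
        #pragma omp section
        task_b();
    }
}

Compiled with gcc -fopenmp, the two sections may run on different threads; whether that pays off depends on how much work each one does.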

When the function is called, is it called multiple times, particularly in a loop? The question is a little vague -- maybe yes (it's called thousands of times in each of a few hundred unrelated places -> millions) or maybe no (it's called once in each of a hundred unrelated places, and you hit those sections of code thousands of times -> millions).
In the first case, then yes, parallelizing the "map" -- that is, applying the function independently to a bunch of cases -- is easy and OpenMPs very well.
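A minimal sketch of that first case, assuming a hypothetical function f() applied to n independent inputs:

double f(double x);   /* hypothetical function, no side effects */

void map_f(const double *in, double *out, int n)
{
    /* each call is independent, so iterations can run in parallel */
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        out[i] = f(in[i]);
}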
In the second case, if the function is called a million times but each time once, then no. There's repetition of execution there, but no exposed concurrency; there's no list of tasks that have to be done at the same time and that can be done independently. All that you can do there, if the function is likely to be called with repeated parameters, is to use memoization, which is a memory/compute-time tradeoff, not a parallelization technique (see the sketch below).
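A minimal sketch of that memoization idea in C, with a single-entry cache; expensive_f() is a hypothetical pure function:

double expensive_f(double x);   /* hypothetical pure function */

double memo_f(double x)
{
    static double last_x;
    static double last_result;
    static int have_result = 0;
    if (have_result && x == last_x)
        return last_result;          /* reuse the cached result */
    last_result = expensive_f(x);
    last_x = x;
    have_result = 1;
    return last_result;
}

(The static state makes this version not thread-safe; a real cache would hold more entries and, in parallel code, need synchronization.)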
In the second case, it may be possible to restructure the code so that a bunch of those function calls are made at once, thus exposing the concurrency and allowing parallelization -- but it's not something that OpenMP (or any parallel programming model) can do for you automatically.

Is it better to use a boolean variable to replace an if condition for readability or not?

I am in the second year of my bachelor's degree in information technology. Last year, one of my courses taught me to write clean code so other programmers have an easier time working with it. I learned a lot about writing clean code from a video ("Clean Code") on Pluralsight (a paid learning website which my school uses). There was an example in it about assigning if conditions to boolean variables to enhance readability. In my course today, my teacher told me this is very bad code because it decreases performance (in bigger programs) due to extra tests being executed. I was wondering whether I should continue using boolean variables for readability or drop them for performance. I will illustrate with an example (I am using Python code for this example):
Example with a boolean variable:
Let's say we need to check whether somebody is legally allowed to drink alcohol. We get the person's age, and we know the legal drinking age is 21.
is_old_enough = persons_age >= legal_drinking_age
if is_old_enough:
    do_something()
My teacher told me today that this would be very bad for performance, since two tests are performed: first persons_age >= legal_drinking_age is evaluated, and then the if performs a second test on is_old_enough.
My teacher told me that I should just put the condition in the if, but in the video they said that code should be read like natural language to make it clear for other programmers. I was wondering now which would be the better coding practice.
Example with the condition in the if:
if persons_age >= legal_drinking_age:
    do_something()
In this example only one test is performed: whether persons_age >= legal_drinking_age. According to my teacher this is better code.
Thank you in advance!
Yours faithfully,
Jonas
I was wondering now which would be the better coding practice.
The really safe answer is: it depends.
I hate to give that answer, but you wouldn't be asking unless you had genuine doubt. (:
IMHO:
If the code will be used long-term, where maintainability is important, then clearly readable code is preferred.
If program speed is crucial, then any code that uses fewer resources (smaller data sizes/data types, fewer loops to achieve the same thing, optimized task sequencing, maximizing CPU work per clock cycle, fewer data-reloading cycles) is better. (Example keyword: trading space for time.)
If minimizing memory usage is crucial, then any code that uses less storage and memory to complete its operation (even if it may take more CPU cycles/loops for the same task) is better. (Example: small devices with limited storage/RAM.)
If you are in a race, you may want to write the shortest code possible (even if it takes slightly longer CPU time later). Example: a hackathon.
If you are programming to teach a team of students/friends something, then readable code plus plenty of comments is definitely preferred.
If it were me, I'd stick to anything as close to assembly language as possible (as much control over the bit manipulation as possible) for backend development, and anything closest to Mathematica-like code (less code, maximum output, never mind how much CPU/memory it needs) for frontend development. ( :
So, if it is you, you may have your own requirements/preferences. From the user's/outsider's/customer's point of view, it is just a working or not-working program. Your definition of a good program may differ from others', but that shouldn't stop us from being flexible in coding style/method.
Happy exploring. Hope it helps in any way possible.
Performance
Performance is one of the least interesting concerns for this question, and I say this as someone who works in very performance-critical areas like image processing and raytracing and who believes in effective micro-optimizations (though my idea of effective micro-optimization is things like improving memory access patterns and memory layouts for cache efficiency, not eliminating temporary variables out of fear that your compiler or interpreter might allocate additional registers and/or utilize additional instructions).
The reason it's not so interesting is that, as pointed out in the comments, any decent optimizing compiler is going to treat the two versions you wrote as equivalent by the time it finishes optimizing the intermediate representation and completes instruction selection/register allocation to produce the final output (machine code). And if you aren't using a decent optimizing compiler, then this sort of microscopic efficiency is probably the last thing you should be worrying about either way.
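You can check this yourself by compiling both forms and comparing the output. A sketch in C (the Python example transliterated, with the legal drinking age hard-coded as 21):

int with_variable(int persons_age)
{
    int is_old_enough = persons_age >= 21;
    return is_old_enough;
}

int without_variable(int persons_age)
{
    return persons_age >= 21;
}

/* with gcc -O2, both functions typically compile to identical machine
   code: a single comparison; the temporary variable disappears entirely */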
Variable Scopes
With performance aside, the only concern I'd have with this convention (and I think it's generally a good one to apply liberally) is for languages that don't have a concept of a named constant to distinguish it from a variable.
In those cases, the more variables you introduce to a meaty function, the more intellectual overhead it can have as the number of variables with a relatively wide scope increases, and that can translate to practical burdens in maintenance and debugging in extreme cases. If you imagine a case like this:
some_variable = ...
...
some_other_variable = ...
...
yet_another_variable = ...
(300 lines more code to the function)
... in some function, and you're trying to debug it, then those variables combined with the monstrous size of the function start to multiply the difficulty of figuring out what went wrong. That's a practical concern I've encountered when debugging codebases spanning millions of lines of code written by all sorts of people (including some no longer on the team), where it's not so fun to look at the locals watch window in a debugger and see two pages' worth of variables in some monstrous function that appears to be doing something incorrectly (or in one of the functions it calls).
But that's only an issue when it's combined with questionable programming practices like writing functions that span hundreds or thousands of lines of code. In those cases it will often improve everything just focusing on making reasonable-sized functions that perform one clear logical operation and don't have more than one side effect (or none ideally if the function can be programmed as a pure function). If you design your functions reasonably then I wouldn't worry about this at all and favor whatever is readable and easiest to comprehend at a glance and maybe even what is most writable and "pliable" (to make changes to the function easier if you anticipate a future need).
A Pragmatic View on Variable Scopes
So I think a lot of programming concepts can be understood to some degree just through the need to narrow variable scopes. People say to avoid global variables like the plague. We could go into how that shared state can interfere with multithreading and how it makes programs difficult to change and debug, but you can understand a lot of the problems just through the desire to narrow variable scopes. If you have a codebase which spans a hundred thousand lines of code, then a global variable has a scope of a hundred thousand lines of code for both access and modification, and, crudely speaking, a hundred thousand ways to go wrong.
At the same time, that pragmatic view will find it pointless for a one-shot, 100-line program with no future need for extension to avoid global variables like the plague, since a global there only has 100 lines' worth of scope, so to speak. Meanwhile, even someone who avoids globals like the plague in all contexts might still write a class with member variables (including some superfluous ones for "convenience") whose implementation spans 8,000 lines of code. At that point those variables have a much wider scope than even the global variable in the former example, and this realization could drive someone to design smaller classes and/or reduce the number of superfluous member variables included in the class's state (which can also translate to simplified multithreading and the other benefits of avoiding global variables in a non-trivial codebase).
And finally, it'll tend to tempt you to write smaller functions as well, since a variable towards the top of a function spanning 500 lines of code also has a fairly wide scope. So anyway, my only concern when you do this is not to let the scope of those temporary, local variables get too wide. And if they do, then the general answer is not necessarily to avoid those variables but to narrow their scope, as in the sketch below.
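In languages with block scope, like C, narrowing can be as simple as wrapping the variable in a block. A minimal sketch (lookup() and use_value() are hypothetical helpers):

int lookup(void);        /* hypothetical helper */
void use_value(int v);   /* hypothetical helper */

void example(void)
{
    /* ... lots of unrelated code ... */
    {
        int tmp = lookup();   /* tmp exists only inside this block */
        use_value(tmp);
    }
    /* tmp is out of scope here, so it can't confuse code further down */
}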

DBI/SQL performance on SELECT via different methods?

I've been looking around and haven't been able to find a good answer to this question. Is there a real performance difference between the following DBI methods?
fetchrow_arrayref vs
selectall_arrayref vs
fetchall_arrayref
It probably doesn't matter when making a single SELECT call that will give a smallish (fewer than 50 records) result set, but what about making multiple SELECT statements in a row? What about if the result sets are huge (e.g. thousands of records)?
Thanks.
The key question to ask yourself is whether you need to keep all the returned rows around in memory.
If you do then you can let the DBI fetch them all for you - it will be faster than writing the equivalent code yourself.
If you don't need to keep all the rows in memory, in other words, if you can process each row in turn, then using fetchrow_arrayref in a loop will typically be much faster.
The reason is that the DBI goes to great lengths to reuse the memory buffers for each row. That can be a significant saving in effort. You can see the effect on this slide, although the examples don't directly match your question. You can also see from that slide the importance of benchmarking. It can be hard to tell where the balance lies.
If you can work on a per-row basis, then binding columns can yield a useful performance gain by reducing the work of accessing the values in the fetched rows.
You also asked about "huge" results. (I doubt "thousands of records" would be a problem on modern machines unless the rows themselves were very 'large'.) Clearly processing a row at a time is preferable for very large result sets. Note that some databases default to streaming all the results to the driver which then buffers them in a compact form and returns the rows one by one as your perl code (or a DBI method) fetches them. Again, benchmark and test for yourself.
If you need more help, the dbi-users mailing list is a good place to ask. You don't need to subscribe.
The difference between the fetchrow* and fetchall* methods is the location of the loop code in the call stack. That is, fetchall* implies the fetch loop, while fetchrow* implies that you will write your own loop.
The difference between the fetch* and select* methods is that one requires you to manually prepare() and execute() the query, while the other does that for you. The time differences will come from how efficient your code is compared to DBI's.
My reading has shown that the main differences between methods are between *_arrayref and *_hashref, where the *_hashref methods are slower due to the need to look up the hash key names in the database's metadata.

Setting up conditional statements to take advantage of instruction prefetch (modern x86)

I've recently picked up Michael Abrash's The Zen of Assembly Language (from 1990) and was reading a section on how instruction prefetching is not always advantageous, such as when branching occurs (a jump). This is because all of the instructions that were prefetched are no longer the ones to be executed, and so more instructions must be fetched.
This reminded me of an optimization from another old book, Tricks of the Game Programming Gurus by Andre LaMothe in which he suggests that when setting up your conditional statements, you put the most frequently (or expected) path first.
For example:
if (booleanThatIsMostLikelyToBeTrue)
{
// ...expected code
// also the code that would've been prefetched
}
else
{
// ...exceptional or less likely code
}
My questions are:
1) Was LaMothe's optimization suggested with this in mind? (I no longer have the book)
2) Is this type of optimization still a worthwhile programming habit on modern machines? (Maybe prefetching is handled differently than it used to be?)
You want to set up your code to branch as little as possible, and to branch backwards when it does. A more reliable way to structure that if is to always do the common thing, then test for the exception:
Do A;
if( test ) Do B;
Of course, this has to be arranged so that anything A does is reversed by B if B occurs.
The point of Zen programming is to try to eliminate the if statements altogether. So, for example, instead of looping 10 times (which requires an exit-condition test), you just write the same statement 10 times: voila, no test. Another example: if you are looping over a list, you use a sentinel to exit the loop instead of testing an index value, as in the sketch below.
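A minimal sketch of the sentinel idea for a linear search, assuming the array has a writable spare slot at index n:

/* find key in a[0..n-1]; a[n] must be a writable spare slot */
int sentinel_find(int *a, int n, int key)
{
    a[n] = key;              /* sentinel guarantees the loop terminates */
    int i = 0;
    while (a[i] != key)      /* no separate i < n test on every iteration */
        i++;
    return i < n ? i : -1;   /* one bounds check at the end instead of n */
}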
If you are working in C, it can be difficult to trick the compiler into doing what you want. Putting something first or second in an if statement will have no effect on the compiled result. Note that it is critical to use the right compiler options; for example, using /O2 (optimize for speed) in Visual C++ makes a HUGE difference in the efficiency of the compiled code.
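That said, gcc does offer __builtin_expect as a way to hint which way a branch usually goes, so the compiler can lay the common path out as the straight-line fall-through case. A sketch using the usual likely/unlikely macro idiom:

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

void handle(int common_case)   /* hypothetical example */
{
    if (likely(common_case)) {
        /* expected path: laid out as the fall-through case */
    } else {
        /* rare path: may be moved out of the hot code */
    }
}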
These kinds of optimizations can often be useful. However, it's typically something that you do after you've written the program, profiled it, and determined that the program would benefit from doing these micro-optimizations.
Also, many modern compilers have profile-guided optimization, which can relieve you of having to contort your code for performance purposes.

How often are values duplicated on the stack?

When you have a function that accepts an array as an argument and calls another function with that array, and that calls another function with it, and so forth, the stack will contain many copies of the pointer to that array. I just thought of an interesting way to alleviate this problem, but I'm wondering whether or not it is worth implementing.
Does anyone have any idea how often stacks contain duplicate pointers in practice?
EDIT
Just to clarify, I am not optimizing a given program but, rather, am considering writing a new kind of optimization pass for my VM. My benchmarks have indicated that my current solution causes up to 70% of the total running time to be spent in stack manipulations. The optimization pass I am thinking of would generate code at compile time that would perform the same actions but pointers would (potentially) be duplicated on the stack less often. I am interested in any prior studies that have measured the number of duplicates on the stack because this would help me to quantify my optimization's potential. For example, if it is known that real programs do not push pointers already on the stack in practice then my optimization is worthless.
Moreover, these stack manipulations are due to the code generated by my VM making sure locally-held pointers are visible to the garbage collector and not due only to function parameters as both answerers have currently assumed. And they are actually operations on a shadow stack rather than the main stack.
First of all, the answer will depend on your application.
Secondly, even with high duplication, I doubt there is much sense in implementing the mechanism you describe, or even that it is possible in the general case. If you call a method and pass it parameters, you must do it one way or another.
There may be advantages to doing it in some specific way -- for example, there are several function-calling conventions, and many C/C++ compilers (e.g. gcc) let you choose between passing parameters on the stack or via registers. In certain cases the latter may be faster -- you can benchmark whether it helps your application, as in the sketch below.
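For instance, on 32-bit x86, gcc's regparm attribute requests that the first few integer arguments be passed in registers instead of on the stack (a sketch; whether it actually helps is something to measure):

/* gcc, 32-bit x86: pass up to three integer args in EAX/EDX/ECX
   rather than pushing them onto the stack */
int __attribute__((regparm(3))) add3(int a, int b, int c)
{
    return a + b + c;
}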
But in the general case, the cost of detecting duplicated values on the stack and "reusing" them would probably far exceed any gains from having a smaller stack. The code for pushing and popping values is really simple (just a few CPU instructions in the optimized case); code for finding and reusing duplicates, hardly so. You would also have to somehow store the information about which values are already on the stack and how to find them -- a nontrivial data structure. Except for some really weird cases, I don't think this would be smaller than the actual copied data itself.
What you could do is rewrite your algorithm so that some function calls are eliminated. For example, if your function's result depends only on its input arguments, you could cache or memoize the results, thus avoiding repeated calls with the same values. This may indeed bring some gains, though it's usually a memory-vs-CPU-time tradeoff; getting an advantage in both memory and CPU time is rarely possible. Also, rewriting your algorithm is not really "avoiding duplication of data on the stack".
Anyway, for the original question, I think the idea is not viable and you should look for optimizations elsewhere.
PS: Your use case may somewhat resemble tail-call optimization, so perhaps that's a direction worth looking at -- but if you implement it yourself, I would also consider that to fall into the "change your algorithm" category. Changing from a recursive algorithm to an iterative one could also help.
Can I suggest getting some exposure to actual performance tuning?
(Here's my canonical example.)
Between the time a program starts and the time it ends, of the cycles it uses, it obviously uses 100% of those cycles.
If it goes in and out of functions, and passes pointers to an array, but does nothing else, then there's no surprise that a high percent of time goes into function entry and exit, and passing arguments.
If a program P is written to do task T, there are a multitude of other programs P' which could also do task T. Some of them take fewer cycles than all the others, and those are the optimal ones.
The way the optimal ones differ from the non-optimal ones is that the non-optimal ones are doing things that can be done without.
So, to optimize any program, find out what cycles are being spent that don't have to be, and get rid of those activities. That link shows in great detail how I do it.
Trying to pass fewer arguments to functions might or might not be necessary, depending on what your diagnostics tell you.

Array length question

This question concerns optimization. Suppose I need the length of an array A at two places in my code. Should I call a.length() in both places, or is it faster to assign the value of a.length() to a local variable and use that in the two places?
By "faster" I mean in terms of running time. Moreover, I am talking asymptotically.
The asymptotic complexity of calling the function twice is the same - any constant number of calls to the same (pure) function on the same arguments has the same asymptotic complexity as a single call to that function, since you can just roll the constant number of calls into the big-O's hidden constant.
As for what will be faster, there's no guarantee which one will be faster. It depends on the language and compiler. I'd suggest just writing it both ways and timing the result to see if there's an appreciable difference. That said, if you are writing something that is so performance-critical that you can't afford to call .length() twice, you may need to reconsider your approach in general to see if there's a better global solution to the problem. Microoptimizations are rarely worth the effort unless you have a compelling reason to believe that your program is markedly slower in the unoptimized version.
If you have to ask the question, you're not at a point where it matters yet. If you were, you'd already have code that you've profiled, and you could just try it and see. This kind of thing depends heavily on your language and compiler, and the only results that matter are the ones you see.
Don't worry about micro-optimizations until you find you need to shave cycles, and even then the algorithm is the first thing to check.
What language? In many languages, such calls are optimized away (either at compile time or by a JIT compiler) into direct access to the length field of the array object.