Are 2^n exponent calculations really less efficient than bit-shifts? - objective-c

if I do:
int x = 4;
pow(2, x);
Is that really that much less efficient than just doing:
1 << 4
?

Yes. An easy way to show this is to compile the following two functions that do the same thing and then look at the disassembly.
#include <stdint.h>
#include <math.h>
uint32_t foo1(uint32_t shftAmt) {
return pow(2, shftAmt);
}
uint32_t foo2(uint32_t shftAmt) {
return (1 << shftAmt);
}
cc -arch armv7 -O3 -S -o - shift.c (I happen to find ARM asm easier to read but if you want x86 just remove the arch flag)
_foo1:
# BB#0:
push {r7, lr}
vmov s0, r0
mov r7, sp
vcvt.f64.u32 d16, s0
vmov r0, r1, d16
blx _exp2
vmov d16, r0, r1
vcvt.u32.f64 s0, d16
vmov r0, s0
pop {r7, pc}
_foo2:
# BB#0:
movs r1, #1
lsl.w r0, r1, r0
bx lr
You can see that foo2 takes only 2 instructions, whereas foo1 takes several: it has to move the data into the FP hardware registers (vmov), convert the integer to a double (vcvt.f64.u32), call the exp2 function, convert the answer back to a uint (vcvt.u32.f64), and move it from the FP hardware back to the GP registers.
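If what you actually need is an integer power of two, a small shift-based helper (my own sketch, not part of the answer above) keeps the range check explicit, since 1 << x is undefined once x reaches the bit width:
#include <stdint.h>
/* Sketch: 2^x as an integer, guarding against the undefined 1 << 32 case.
 * Returning 0 on overflow is an arbitrary choice for this illustration. */
static inline uint32_t pow2_u32(uint32_t x)
{
    return (x < 32u) ? (UINT32_C(1) << x) : 0u;
}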

Yes. Though by how much I can't say. The easiest way to determine that is to benchmark it.
The pow function uses doubles... At least, if it conforms to the C standard. Even if that function used a bit shift whenever it saw a base of 2, there would still be testing and branching to reach that conclusion, by which time your simple bit shift would already be done. And we haven't even considered the overhead of the function call yet.
For equivalency, I assume you meant to use 1 << x instead of 1 << 4.
Perhaps a compiler could optimize both of these, but it's far less likely to optimize a call to pow. If you need the fastest way to compute a power of 2, do it with shifting.
Update... Since I mentioned it's easy to benchmark, I decided to do just that. I happen to have Windows and Visual C++ handy so I used that. Results will vary. My program:
#include <Windows.h>
#include <cstdio>
#include <cmath>
#include <ctime>
LARGE_INTEGER liFreq, liStart, liStop;
inline void StartTimer()
{
QueryPerformanceCounter(&liStart);
}
inline double ReportTimer()
{
QueryPerformanceCounter(&liStop);
double milli = 1000.0 * double(liStop.QuadPart - liStart.QuadPart) / double(liFreq.QuadPart);
printf( "%.3f ms\n", milli );
return milli;
}
int main()
{
QueryPerformanceFrequency(&liFreq);
const size_t nTests = 10000000;
int x = 4;
int sumPow = 0;
int sumShift = 0;
double powTime, shiftTime;
// Make an array of random exponents to use in tests.
const size_t nExp = 10000;
int e[nExp];
srand( (unsigned int)time(NULL) );
for( int i = 0; i < nExp; i++ ) e[i] = rand() % 31;
// Test power.
StartTimer();
for( size_t i = 0; i < nTests; i++ )
{
int y = (int)pow(2, (double)e[i%nExp]);
sumPow += y;
}
powTime = ReportTimer();
// Test shifting.
StartTimer();
for( size_t i = 0; i < nTests; i++ )
{
int y = 1 << e[i%nExp];
sumShift += y;
}
shiftTime = ReportTimer();
// The compiler shouldn't optimize out our loops if we need to display a result.
printf( "Sum power: %d\n", sumPow );
printf( "Sum shift: %d\n", sumShift );
printf( "Time ratio of pow versus shift: %.2f\n", powTime / shiftTime );
system("pause");
return 0;
}
My output:
379.466 ms
15.862 ms
Sum power: 157650768
Sum shift: 157650768
Time ratio of pow versus shift: 23.92

That depends on the compiler, but in general (when the compiler is not totally braindead) yes: the shift is one CPU instruction, while the other is a function call, which involves saving the current state and setting up a stack frame, and that requires many instructions.

Generally yes, as a bit shift is a very basic operation for the processor.
On the other hand, many compilers optimise the code so that raising 2 to a power is in fact just a bit shift.
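For example (my own sketch, not from the answer): if you actually need a floating-point power of two rather than an integer, the standard ldexp function scales by 2^n directly and avoids the general pow machinery.
#include <math.h>
/* ldexp(x, n) returns x * 2^n, so this evaluates 2.0 to the power n without pow(). */
double pow2_double(int n)
{
    return ldexp(1.0, n);
}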

Related

Which is the fastest way to find the last N bits of an integer?

Which algorithm is fastest for returning the last n bits in an unsigned integer?
1.
return num & ((1 << bits) - 1)
2.
return num % (1 << bits)
3.
let shift = num.bitWidth - bits
return (num << shift) >> shift
(where bitWidth is the width of the integer, in bits)
Or is there another, faster algorithm?
This is going to depend heavily on what compiler you have, what the optimization settings are, and what size of integers you're working with.
My hypothesis going into this section was that the answer would be "the compiler will be smart enough to optimize all of these in a way that's better than whatever you'd choose to write." And in some sense, that's correct. Consider the following three pieces of code:
#include <stdint.h>
#include <limits.h>
uint32_t lastBitsOf_v1(uint32_t number, uint32_t howManyBits) {
return number & ((1 << howManyBits) - 1);
}
uint32_t lastBitsOf_v2(uint32_t number, uint32_t howManyBits) {
return number % (1 << howManyBits);
}
uint32_t lastBitsOf_v3(uint32_t number, uint32_t howManyBits) {
uint32_t shift = sizeof(number) * CHAR_BIT - howManyBits;
return (number << shift) >> shift;
}
Over at the godbolt compiler explorer with optimization turned up to -Ofast with -march=native enabled, we get this code generated for the three functions:
lastBitsOf_v1(unsigned int, unsigned int):
bzhi eax, edi, esi
ret
lastBitsOf_v2(unsigned int, unsigned int):
bzhi eax, edi, esi
ret
lastBitsOf_v3(unsigned int, unsigned int):
mov eax, 32
sub eax, esi
shlx edi, edi, eax
shrx eax, edi, eax
ret
Notice that the compiler recognized what you were trying to do with the first two versions of this function and completely rewrote the code to use the bzhi x86 instruction. This instruction copies the lower bits of one register into another. In other words, the compiler was able to generate a single assembly instruction! On the other hand, the compiler didn't recognize what the last version was trying to do, so it generated the code as written, actually doing the shifts and the subtraction.
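For reference, here is a rough C model of what that bzhi instruction computes (my own illustration of the instruction's semantics, not compiler output):
#include <stdint.h>
/* bzhi dest, src, idx: zero the bits of src from position idx upward.
 * If idx >= 32, the source value is returned unchanged. */
static inline uint32_t bzhi_model(uint32_t src, uint32_t idx)
{
    return (idx >= 32u) ? src : (src & ((UINT32_C(1) << idx) - 1u));
}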
But that's not the end of the story. Imagine that the number of bits to extract is known in advance. For example, suppose we want the lower 13 bits. Now, watch what happens with this code:
#include <stdint.h>
#include <limits.h>
uint32_t lastBitsOf_v1(uint32_t number) {
return number & ((1 << 13) - 1);
}
uint32_t lastBitsOf_v2(uint32_t number) {
return number % (1 << 13);
}
uint32_t lastBitsOf_v3(uint32_t number) {
return (number << 19) >> 19;
}
These are literally the same functions, just with the bit amount hardcoded. Now look at what gets generated:
lastBitsOf_v1(unsigned int):
mov eax, edi
and eax, 8191
ret
lastBitsOf_v2(unsigned int):
mov eax, edi
and eax, 8191
ret
lastBitsOf_v3(unsigned int):
mov eax, edi
and eax, 8191
ret
All three versions get compiled to the exact same code. The compiler saw what we're doing in each case and replaced it with this much simpler code that's basically the first version.
After seeing all of this, what should you do? My recommendation would be the following:
Unless this code is an absolute performance bottleneck - as in, you've measured your code's runtime and you're absolutely certain that the code for extracting the low bits of numbers is what's actually slowing you down - I wouldn't worry too much about this at all. Pick the most readable code that you can. I personally find option (1) the cleanest, but that's just me.
If you absolutely must get every ounce of performance out of this that you can, rather than taking my word for it, I'd recommend tinkering around with different versions of the code and seeing what assembly gets generated in each case and running some performance experiments. After all, if something like this is really important, you'd want to see it for yourself!
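If you do go down the measurement route, a minimal timing sketch might look like this (the loop count and the way the argument is varied are my own choices, and lastBitsOf_v1 is the function defined above; you would repeat the loop for each variant and, as suggested, also inspect the generated assembly):
#include <stdint.h>
#include <stdio.h>
#include <time.h>
extern uint32_t lastBitsOf_v1(uint32_t number, uint32_t howManyBits);
int main(void)
{
    volatile uint32_t sink = 0;   /* volatile keeps the loop from being optimized away */
    clock_t start = clock();
    for (uint32_t i = 0; i < 100000000u; i++)
        sink += lastBitsOf_v1(i, i % 30u + 1u);
    clock_t stop = clock();
    printf("v1: %.3f s (sink=%u)\n",
           (double)(stop - start) / CLOCKS_PER_SEC, (unsigned)sink);
    return 0;
}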
Hope this helps!

Debug data/neon performance hazards in arm neon code

Originally the problem appeared when I tried to optimize an algorithm for ARM NEON, and some minor part of it was taking 80% of the time according to the profiler. To see what could be done to improve it, I created an array of function pointers to different versions of my optimized function and then ran them in a loop to see in the profiler which one performs better:
typedef unsigned(*CalcMaxFunc)(const uint16_t a[8][4], const uint16_t b[4][4]);
CalcMaxFunc CalcMaxFuncs[] =
{
CalcMaxFunc_NEON_0,
CalcMaxFunc_NEON_1,
CalcMaxFunc_NEON_2,
CalcMaxFunc_NEON_3,
CalcMaxFunc_C_0
};
int N = sizeof(CalcMaxFuncs) / sizeof(CalcMaxFuncs[0]);
for (int i = 0; i < 10 * N; ++i)
{
auto f = CalcMaxFuncs[i % N];
unsigned retI = f(a, b);
// just random code to ensure that cpu waits for the results
// and compiler doesn't optimize it away
if (retI > 1000000)
break;
ret |= retI;
}
I got surprising results: the performance of a function depended entirely on its position within the CalcMaxFuncs array. For example, when I swapped CalcMaxFunc_NEON_3 to be first, it would be 3-4 times slower, and according to the profiler it would stall at the last bit of the function, where it tried to move data from a NEON to an ARM register.
So, what makes it stall sometimes and not other times? By the way, I profile on an iPhone 6 in Xcode, if that matters.
When I intentionally introduced NEON pipeline stalls by mixing in some floating-point division between the calls to these functions in the loop, the unreliable behavior disappeared: all of them now perform the same regardless of the order in which they are called. So why did I have that problem in the first place, and what can I do to eliminate it in actual code?
Update:
I created a simple test function and then optimized it in stages to see how I could avoid NEON->ARM stalls.
Here's the test runner function:
void NeonStallTest()
{
int findMinErr(uint8_t* var1, uint8_t* var2, int size);
srand(0);
uint8_t var1[1280];
uint8_t var2[1280];
for (int i = 0; i < sizeof(var1); ++i)
{
var1[i] = rand();
var2[i] = rand();
}
#if 0 // early exit?
for (int i = 0; i < 16; ++i)
var1[i] = var2[i];
#endif
int ret = 0;
for (int i=0; i<10000000; ++i)
ret += findMinErr(var1, var2, sizeof(var1));
exit(ret);
}
And findMinErr is this:
int findMinErr(uint8_t* var1, uint8_t* var2, int size)
{
int ret = 0;
int ret_err = INT_MAX;
for (int i = 0; i < size / 16; ++i, var1 += 16, var2 += 16)
{
int err = 0;
for (int j = 0; j < 16; ++j)
{
int x = var1[j] - var2[j];
err += x * x;
}
if (ret_err > err)
{
ret_err = err;
ret = i;
}
}
return ret;
}
Basically, it computes the sum of squared differences between each pair of uint8_t[16] blocks and returns the index of the block pair with the lowest squared difference. I then rewrote it in NEON intrinsics (no particular attempt was made to make it fast, as that's not the point):
int findMinErr_NEON(uint8_t* var1, uint8_t* var2, int size)
{
int ret = 0;
int ret_err = INT_MAX;
for (int i = 0; i < size / 16; ++i, var1 += 16, var2 += 16)
{
int err;
uint8x8_t var1_0 = vld1_u8(var1 + 0);
uint8x8_t var1_1 = vld1_u8(var1 + 8);
uint8x8_t var2_0 = vld1_u8(var2 + 0);
uint8x8_t var2_1 = vld1_u8(var2 + 8);
int16x8_t s0 = vreinterpretq_s16_u16(vsubl_u8(var1_0, var2_0));
int16x8_t s1 = vreinterpretq_s16_u16(vsubl_u8(var1_1, var2_1));
uint16x8_t u0 = vreinterpretq_u16_s16(vmulq_s16(s0, s0));
uint16x8_t u1 = vreinterpretq_u16_s16(vmulq_s16(s1, s1));
#ifdef __aarch64__1
err = vaddlvq_u16(u0) + vaddlvq_u16(u1);
#else
uint32x4_t err0 = vpaddlq_u16(u0);
uint32x4_t err1 = vpaddlq_u16(u1);
err0 = vaddq_u32(err0, err1);
uint32x2_t err00 = vpadd_u32(vget_low_u32(err0), vget_high_u32(err0));
err00 = vpadd_u32(err00, err00);
err = vget_lane_u32(err00, 0);
#endif
if (ret_err > err)
{
ret_err = err;
ret = i;
#if 0 // enable early exit?
if (ret_err == 0)
break;
#endif
}
}
return ret;
}
Now, if (ret_err > err) is clearly a data hazard. So I manually "unrolled" the loop by two, modified the code to use err0 and err1, and checked them only after issuing the next round of computation. According to the profiler this gave some improvement: in the simple NEON loop, roughly 30% of the entire function was spent in the two lines vget_lane_u32 followed by if (ret_err > err); after unrolling by two, these operations dropped to 25% (i.e. roughly a 10% overall speedup). Also, checking the armv7 version, there are only 8 instructions between when err0 is set (vmov.32 r6, d16[0]) and when it is accessed (cmp r12, r6).
Note that in the code the early exit is ifdefed out; enabling it would make the function even slower. If I unrolled by four, used four errN variables, and deferred the check by two rounds, I still saw vget_lane_u32 taking too much time in the profiler. When I checked the generated asm, it appears the compiler defeats all these attempts because it reuses some of the errN registers, which effectively makes the CPU access the results of vget_lane_u32 much earlier than I want (I am aiming to delay the access by 10-20 instructions). Only when I unrolled by 4 and marked all four errN as volatile did vget_lane_u32 disappear from the profiler entirely; however, the if (ret_err > errN) checks then became very slow, as these variables probably ended up as regular stack variables, and overall the 4 checks in the 4x manual unroll started to take 40%. It looks like with proper manual asm it is possible to make this work: have an early loop exit while avoiding NEON->ARM stalls and keep some ARM logic in the loop. However, the extra maintenance required for ARM asm makes that kind of code roughly 10x harder to maintain in a large project (which doesn't otherwise contain any ARM asm).
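For illustration, here is a minimal, untested sketch of the unroll-by-two structure described above (the helper name sqerr16_neon is mine and simply wraps the NEON body of findMinErr_NEON shown earlier). Whether the compiler really keeps each comparison far enough away from the corresponding vget_lane_u32 is exactly the problem described, so the generated assembly still has to be checked:
#include <arm_neon.h>
#include <limits.h>
#include <stdint.h>
/* Sum of squared differences over one 16-byte block (same math as above). */
static inline unsigned sqerr16_neon(const uint8_t *v1, const uint8_t *v2)
{
    int16x8_t s0 = vreinterpretq_s16_u16(vsubl_u8(vld1_u8(v1), vld1_u8(v2)));
    int16x8_t s1 = vreinterpretq_s16_u16(vsubl_u8(vld1_u8(v1 + 8), vld1_u8(v2 + 8)));
    uint16x8_t u0 = vreinterpretq_u16_s16(vmulq_s16(s0, s0));
    uint16x8_t u1 = vreinterpretq_u16_s16(vmulq_s16(s1, s1));
    uint32x4_t e = vaddq_u32(vpaddlq_u16(u0), vpaddlq_u16(u1));
    uint32x2_t e2 = vpadd_u32(vget_low_u32(e), vget_high_u32(e));
    e2 = vpadd_u32(e2, e2);
    return vget_lane_u32(e2, 0);   /* the NEON -> ARM move happens here */
}
/* Assumes an even number of 16-byte blocks, like the 8x8 case above. */
int findMinErr_NEON_unroll2(uint8_t *var1, uint8_t *var2, int size)
{
    int ret = 0;
    int ret_err = INT_MAX;
    for (int i = 0; i + 1 < size / 16; i += 2, var1 += 32, var2 += 32)
    {
        unsigned err0 = sqerr16_neon(var1, var2);
        unsigned err1 = sqerr16_neon(var1 + 16, var2 + 16);
        /* err0 is only consumed after the second block's NEON work has been issued */
        if ((int)err0 < ret_err) { ret_err = (int)err0; ret = i; }
        if ((int)err1 < ret_err) { ret_err = (int)err1; ret = i + 1; }
    }
    return ret;
}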
Update:
Here's a sample stall when moving data from a NEON to an ARM register. To implement the early exit I need to move from NEON to ARM once per loop iteration. This move alone takes more than 50% of the entire function according to the sampling profiler that comes with Xcode. I tried adding lots of nops before and/or after the mov, but nothing seems to affect the results in the profiler. I tried using vorr d0,d0,d0 as a nop: no difference. What is the reason for the stall, or is the profiler simply showing wrong results?

How to unwind the ARM Cortex-M3 stack

On an ARM Cortex-M3 STM32, the HardFault_Handler can only get a few register values (r0, r1, r2, r3, lr, pc, xPSR) when the crash happens. But there is no FP or SP in the stack, so I cannot unwind the stack.
Is there any solution for this? Thanks a lot.
[update]
Following a web instruction, I let ARMGCC (Keil uVision IDE) generate FP by adding the compile option "--use_frame_pointer", but I could not find the FP in the stack. I am a real newbie here. Below is my demo code:
int test2(int i, int j)
{
return i/j;
}
int main()
{
SCB->CCR |= 0x10;
int a = 10;
int b = 0;
int c;
c = test2(a,b);
}
enum { r0 = 0, r1, r2, r3, r11, r12, lr, pc, psr};
void Hard_Fault_Handler(uint32_t *faultStackAddress)
{
uint32_t r0_val = faultStackAddress[r0];
uint32_t r1_val = faultStackAddress[r1];
uint32_t r2_val = faultStackAddress[r2];
uint32_t r3_val = faultStackAddress[r3];
uint32_t r12_val = faultStackAddress[r12];
uint32_t r11_val = faultStackAddress[r11];
uint32_t lr_val = faultStackAddress[lr];
uint32_t pc_val = faultStackAddress[pc];
uint32_t psr_val = faultStackAddress[psr];
}
I have two questions here:
1. I am not sure what the index of FP (r11) is in the stack, or whether it is pushed onto the stack at all. I assume it comes before r12, because I compared the assembly source before and after adding the option "--use_frame_pointer". I also compared the values read in Hard_Fault_Handler, and it seems r11 is not in the stack, because the r11 value I read points to a place where the code is not my code.
[update] I have confirmed that FP is pushed onto the stack. The second question still needs to be answered.
See below snippet code:
Without the option "--use_frame_pointer"
test2 PROC
MOVS r0,#3
BX lr
ENDP
main PROC
PUSH {lr}
MOVS r0,#0
BL test2
MOVS r0,#0
POP {pc}
ENDP
with the option "--use_frame_pointer"
test2 PROC
PUSH {r11,lr}
ADD r11,sp,#4
MOVS r0,#3
MOV sp,r11
SUB sp,sp,#4
POP {r11,pc}
ENDP
main PROC
PUSH {r11,lr}
ADD r11,sp,#4
MOVS r0,#0
BL test2
MOVS r0,#0
MOV sp,r11
SUB sp,sp,#4
POP {r11,pc}
ENDP
2. It seems FP is not in the stack frame passed via the faultStackAddress parameter of Hard_Fault_Handler(). Where can I get the caller's FP to unwind the stack?
[update again]
Now I understand that the latest FP (r11) is not stored on the stack. All I need to do is read the value of the r11 register, and then I can unwind the whole stack.
So now my final question is how to read it using the inline assembler in C. I tried the code below, following the reference at http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0472f/Cihfhjhg.html, but failed to read the correct value from r11:
volatile int top_fp;
__asm
{
mov top_fp, r11
}
r11's value is 0x20009DCC
top_fp's value is 0x00000004
[update 3] Below is my whole code.
int test5(int i, int j, int k)
{
char a[128] = {0} ;
a[0] = 'a';
return i/j;
}
int test2(int i, int j)
{
char a[18] = {0} ;
a[0] = 'a';
return test5(i, j, 0);
}
int main()
{
SCB->CCR |= 0x10;
int a = 10;
int b = 0;
int c;
c = test2(a,b); //create a divide by zero crash
}
/* The fault handler implementation calls a function called Hard_Fault_Handler(). */
#if defined(__CC_ARM)
__asm void HardFault_Handler(void)
{
TST lr, #4
ITE EQ
MRSEQ r0, MSP
MRSNE r0, PSP
B __cpp(Hard_Fault_Handler)
}
#else
void HardFault_Handler(void)
{
__asm("TST lr, #4");
__asm("ITE EQ");
__asm("MRSEQ r0, MSP");
__asm("MRSNE r0, PSP");
__asm("B Hard_Fault_Handler");
}
#endif
void Hard_Fault_Handler(uint32_t *faultStackAddress)
{
volatile int top_fp;
__asm
{
mov top_fp, r11
}
//TODO: use top_fp to unwind the whole stack.
}
[update 4] Finally, I worked it out. My solution:
Note: to access r11, we have to use the embedded assembler (see here), which took me a lot of time to figure out.
//we have to use embedded assembler.
__asm int getRegisterR11()
{
mov r0,r11
BX LR
}
//call it from the Hard_Fault_Handler function.
/*
Function call stack frame:

    FP1 (r11) --------> | lr  | (high address)
                        | FP2 | (prev FP)
                        | ... |
    current FP (r11) -> | lr  |
                        | FP1 | (prev FP)
                        | ... | (low address)

With FP we can access lr (the link register), which is the address to return to
when the current function returns (i.e. where you were).
Then (current FP - 1) points to the previous FP.
Thus we can unwind the stack.
*/
void unwindBacktrace(uint32_t topFp, uint16_t* backtrace)
{
uint32_t nextFp = topFp;
int j = 0;
//#define BACK_TRACE_DEPTH 5
//loop backtrace using FP(r11), save lr into an uint16_t array.
for(int i = 0; i < BACK_TRACE_DEPTH; i++)
{
uint32_t lr = *((uint32_t*)nextFp);
if ((lr >= 0x08000000) && (lr <= 0x08FFFFFF))
{
backtrace[j*2] = LOW_16_BITS(lr);
backtrace[j*2 + 1] = HIGH_16_BITS(lr);
j += 1;
}
nextFp = *((uint32_t*)nextFp - 1);
if (nextFp == 0)
{
break;
}
}
}
#if defined(__CC_ARM)
__asm void HardFault_Handler(void)
{
TST lr, #4
ITE EQ
MRSEQ r0, MSP
MRSNE r0, PSP
B __cpp(Hard_Fault_Handler)
}
#else
void HardFault_Handler(void)
{
__asm("TST lr, #4");
__asm("ITE EQ");
__asm("MRSEQ r0, MSP");
__asm("MRSNE r0, PSP");
__asm("B Hard_Fault_Handler");
}
#endif
void Hard_Fault_Handler(uint32_t *faultStackAddress)
{
//get back trace
int topFp = getRegisterR11();
unwindBacktrace(topFp, persistentData.faultStack.back_trace);
}
A very primitive method to unwind the stack in such a case is to read all the stack memory above the SP seen at the time of HardFault_Handler and process it with arm-none-eabi-addr2line. All the link register entries saved on the stack will be translated into source lines (remember that the actual call site is the line before the one LR points to). Note that if functions in between were called with a branch instruction (b) instead of branch and link (bl), you will not see them with this method.
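As a concrete sketch of that idea (my own code; the __StackTop symbol is a placeholder for whatever your linker script calls the end of the stack region, and it assumes printf output goes somewhere visible such as a UART or semihosting): walk from the faulting SP upward and print every word, then feed the printed values to arm-none-eabi-addr2line -e your.elf to see which of them are plausible return addresses.
#include <stdint.h>
#include <stdio.h>
/* Placeholder symbol for the top (end) of the stack region; the real name
 * depends on your linker script. */
extern uint32_t __StackTop;
void dumpStackForAddr2line(uint32_t *faultSp)
{
    for (uint32_t *p = faultSp; p < &__StackTop; p++)
        printf("0x%08lx\n", (unsigned long)*p);   /* pipe these into addr2line */
}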
(I don't have enough reputation points to write comments, so I'm editing my answer):
UPDATE for question 2:
Why do you expect that Hard_Fault_Handler has any arguments? Hard_Fault_Handler is usually a function whose address is stored in the vector (exception) table. When the processor exception happens, Hard_Fault_Handler is executed; there is no argument passing involved in this. Still, all registers at the time the fault happens are preserved. Specifically, if you compiled without omit-frame-pointer, you can just read the value of R11 (or R7 in Thumb-2 mode). However, to be sure that in your code Hard_Fault_Handler is actually the real hard fault handler, look into the startup.s code and check whether Hard_Fault_Handler is the third entry in the vector table. If there is another function there, it means Hard_Fault_Handler is just called from that function explicitly. See this article for details. You can also read my blog :) There is a chapter about the stack which is based on an Android example, but a lot of it applies in general.
Also note that faultStackAddress should most probably hold a stack pointer, not a frame pointer.
UPDATE 2
OK, let's clarify some things. Firstly, please paste the code from which you call Hard_Fault_Handler. Secondly, I guess you call it from within the real HardFault exception handler. In that case you cannot expect R11 to be at faultStackAddress[r11]; you already mentioned this in the first sentence of your question. There will only be r0-r3, r12, lr, pc and psr.
You've also written:
But there is no FP or SP in the stack, so I cannot unwind the stack. Is there any solution for this?
The SP is not "in the stack" because you already have it in one of the stack-pointer registers (MSP or PSP). See again THIS ARTICLE. Also, FP is not crucial for unwinding the stack, because you can do it without it (by "navigating" through the saved link registers). Another thing: if you dump memory below your SP, you can expect FP to be right next to the saved LR if you really need it.
Answering your last question: I don't know how you're verifying this code or how you're calling it (you need to paste the full code). You can look into the assembly of that function and see what's happening under the hood. Another thing you can do is follow this post as a template.

An optimized implementation of the Heaviside function

I would like to (super)optimize an implementation of the Heaviside function.
I'm working on a numerical algorithm (in Fortran) where speed is particularly important. This employs the Heaviside function many times, currently implemented by the signum intrinsic function as follows:
heaviside = 0.5*sign(1,x)+1
I'm mainly interested in the case where x is a double precision real number on intel processors.
Is it possible to develop a more efficient implementation of the Heaviside function?
Perhaps using assembly language, a superoptimizing code or call to an existing external library?
Did you intend heaviside = 0.5*(sign(1,x)+1)? In any case, testing with GCC 4.8.1 Fortran shows that High Performance Mark's idea should be beneficial. Here are 3 possibilities:
heaviside1 - original
heaviside2 - High Performance Mark's idea
heaviside3 - another variation
function heaviside1 (x)
double precision heaviside1, x
heaviside1 = 0.5 * (sign(1d0,x) + 1)
end
function heaviside2 (x)
double precision heaviside2, x
heaviside2 = sign(0.5d0,x) + 0.5
end
function heaviside3 (x)
double precision heaviside3, x
heaviside3 = 0
if (x .ge. 0) heaviside3 = 1
end
program demo
double precision heaviside1, heaviside2, heaviside3, x, a, b, c
do
x = 0.5 - RAND(0)
a = heaviside1(x)
b = heaviside2(x)
c = heaviside3(x)
print *, "x=", x, "heaviside(x)=", a, b, c
enddo
end
When compiled, gcc generates these 3 stand-alone functions:
<heaviside1_>:
vmovsd xmm0,QWORD PTR [rcx]
vandpd xmm0,xmm0,XMMWORD PTR [rip+0x2d824]
vorpd xmm0,xmm0,XMMWORD PTR [rip+0x2d80c]
vaddsd xmm0,xmm0,QWORD PTR [rip+0x2d7f4]
vmulsd xmm0,xmm0,QWORD PTR [rip+0x2d81c]
ret
<heaviside2_>:
vmovsd xmm0,QWORD PTR [rcx]
vandpd xmm0,xmm0,XMMWORD PTR [rip+0x2d844]
vorpd xmm0,xmm0,XMMWORD PTR [rip+0x2d85c]
vaddsd xmm0,xmm0,QWORD PTR [rip+0x2d844]
ret
<heaviside3_>:
vxorpd xmm0,xmm0,xmm0
vmovsd xmm1,QWORD PTR [rip+0x2d844]
vcmplesd xmm0,xmm0,QWORD PTR [rcx]
vandpd xmm0,xmm1,xmm0
ret
When compiled with gcc, heaviside1 generates a multiply that might slow execution.
heaviside2 eliminates the multiply.
heaviside3 has the same number of instructions as heaviside2, but uses 2 fewer memory accesses.
For the stand-alone functions:
             instruction count   memory reference count
heaviside1   6                   5
heaviside2   5                   4
heaviside3   5                   2
The inline code for these functions avoids the need for the return instruction and ideally passes the arguments in registers and preloads other registers with needed constants. The exact result depends on the compiler used and the calling code. An estimate for inlined code:
             instruction count   memory reference count
heaviside1   4                   0
heaviside2   3                   0
heaviside3   2                   0
It looks like the function could be handled by as few as two compiler generated instructions: vcmplesd+vandpd. The first instruction creates a mask of all zeros if the argument is negative, or a mask of all ones otherwise. The second instruction applies the mask to a register constant value of one in order to produce the result value of zero or one.
Though I have not benchmarked these functions, it looks like the heaviside function should not take much execution time.
---09/23/2013: adding x86_64 assembly language versions and C language benchmark---
file functions.s
//----------------------------------------------------------------------------
.intel_syntax noprefix
.text
//-----------------------------------------------------------------------------
// this heaviside function generates its own register constants
// double heaviside_a1 (double arg);
.globl heaviside_a1
heaviside_a1:
mov rax,0x3ff0000000000000
xorpd xmm1,xmm1 # xmm1: constant 0.0
cmplesd xmm1,xmm0 # xmm1: mask (all Fs or all 0s)
movq xmm0,rax # xmm0: constant 1.0
andpd xmm0,xmm1
retq
//-----------------------------------------------------------------------------
// this heaviside function uses register constants passed from caller
// double heaviside_a2 (double arg, double const0, double const1);
.globl heaviside_a2
heaviside_a2:
cmplesd xmm1,xmm0 # xmm1: mask (all Fs or all 0s)
movsd xmm0,xmm2 # xmm0: constant 1.0
andpd xmm0,xmm1
retq
//-----------------------------------------------------------------------------
file ctest.c
#define __USE_MINGW_ANSI_STDIO 1
#include <windows.h>
#include <stdio.h>
#include <stdint.h>
// functions.s
double heaviside_a1 (double x);
double heaviside_a2 (double arg, double const0, double const1);
//-----------------------------------------------------------------------------
static double heaviside_c1 (double x)
{
double result = 0;
if (x >= 0) result = 1;
return result;
}
//-----------------------------------------------------------------------------
//
// queryPerformanceCounter - similar to QueryPerformanceCounter, but returns
// count directly.
uint64_t queryPerformanceCounter (void)
{
LARGE_INTEGER int64;
QueryPerformanceCounter (&int64);
return int64.QuadPart;
}
//-----------------------------------------------------------------------------
//
// queryPerformanceFrequency - same as QueryPerformanceFrequency, but returns count direcly.
uint64_t queryPerformanceFrequency (void)
{
LARGE_INTEGER int64;
QueryPerformanceFrequency (&int64);
return int64.QuadPart;
}
//----------------------------------------------------------------------------
//
// lfsr64gpr - left shift galois type lfsr for 64-bit data, general purpose register implementation
//
static uint64_t lfsr64gpr (uint64_t data, uint64_t mask)
{
uint64_t carryOut = data >> 63;
uint64_t maskOrZ = -carryOut;
return (data << 1) ^ (maskOrZ & mask);
}
//---------------------------------------------------------------------------
int runtests (uint64_t pattern, uint64_t mask)
{
uint64_t startCount, elapsed, index, loops = 800000000;
double ns;
double total = 0;
startCount = queryPerformanceCounter ();
for (index = 0; index < loops; index++)
{
double x, result;
pattern = lfsr64gpr (pattern, mask);
x = (double) (int64_t) pattern;
result = heaviside_c1 (x);
total += result;
}
elapsed = queryPerformanceCounter () - startCount;
ns = (double) elapsed / queryPerformanceFrequency () * 1000000000 / loops;
printf ("heaviside_c1: %7.2f ns\n", ns);
startCount = queryPerformanceCounter ();
for (index = 0; index < loops; index++)
{
double x, result;
pattern = lfsr64gpr (pattern, mask);
x = (double) (int64_t) pattern;
result = heaviside_a1 (x);
//printf ("heaviside_a1 (%lf): %lf\n", x, result);
total += result;
}
elapsed = queryPerformanceCounter () - startCount;
ns = (double) elapsed / queryPerformanceFrequency () * 1000000000 / loops;
printf ("heaviside_a1: %7.2f ns\n", ns);
startCount = queryPerformanceCounter ();
for (index = 0; index < loops; index++)
{
double x, result;
const double const0 = 0.0;
const double const1 = 1.0;
pattern = lfsr64gpr (pattern, mask);
x = (double) (int64_t) pattern;
result = heaviside_a2 (x, const0, const1);
//printf ("heaviside_a2 (%lf): %lf\n", x, result);
total += result;
}
elapsed = queryPerformanceCounter () - startCount;
ns = (double) elapsed / queryPerformanceFrequency () * 1000000000 / loops;
printf ("heaviside_a2: %7.2f ns\n", ns);
return total;
}
//---------------------------------------------------------------------------
int main (void)
{
uint64_t mask;
mask = 0xBEFFFFFFFFFFFFFF;
// raise our priority to increase measurement accuracy
SetPriorityClass (GetCurrentProcess (), REALTIME_PRIORITY_CLASS);
printf ("using pseudo-random data\n");
runtests (1, mask);
return 0;
}
//---------------------------------------------------------------------------
mingw64 build command: gcc -Wall -Wextra -O3 -octest.exe ctest.c functions.s
Program output from Intel Core i7-2600K at 4.0 GHz:
using pseudo-random data
heaviside_c1: 2.24 ns
heaviside_a1: 2.00 ns
heaviside_a2: 2.00 ns
These timing results include execution of pseudo-random argument generation and result totalization code needed to keep the optimizer from eliminating the otherwise unused heaviside_c1 local function.
heaviside_c1 is from the original fortran suggestion, ported to C.
heaviside_a1 is an assembly language implementation.
heaviside_a2 is a modification of the assembly language version that uses register constants passed by the caller to avoid the overhead of generating them. For my processor, benchmarking shows no advantage to passing constants.
The assembly language functions assume xmm0 returns the result and xmm1 and xmm2 are available as scratch registers. This is valid for the x64 calling convention used by Windows. This assumption should be confirmed for other calling conventions.
In order to avoid memory accesses, the assembly language version expects the argument to be passed by value in a register (XMM0). Because this is not the fortran default, a special declaration is required. This one seems to work properly for gfortran 64-bit:
interface
real(c_double) function heaviside_a1(x)
use iso_c_binding, only: c_double
real(c_double), VALUE :: x
end function heaviside_a1
end interface

Arm loop code optimization for calculating mean standard deviation

I want to improve some code which is using 25% of my app's CPU time. The code is the following:
for (int i=0; i<8; i++) {
unsigned f = *p++;
sum += f;
sqsum += f*f;
}
I wrote some ARM code, but it is not working (it doesn't even compile). It is the following:
void loop(uint8_t * p , int * sum ,int * qsum)
{
__asm__ volatile("vld4.8 {d0}, [%0]! \n"
"mov r4, #0 \n"
"vmlal.u8 [%1]!, [%1]!, d0 \n"
"vmull.u8 r4, d0 , d0 \n"
"vmlal.u8 [%2]!, [%2]!, r4\n"
:
: "r"(p), "r"(sum), "r"(qsum)
: "r4"
);
}
Any help?
Here is my function to improve:
void calculateMeanStDev8x8(cv::Mat* patch, int sx, int sy, int& mean, float& stdev)
{
unsigned sum=0;
unsigned sqsum=0;
for (int j=0; j< 8; j++) {
const unsigned char* p = (const unsigned char*)(patch->data + (j+sy)*patch->step + sx); // Pointer to the start of the matrix row
//The code to improve
for (int i=0; i<8; i++) {
unsigned f = *p++;
sum += f;
sqsum += f*f;
}
}
mean = sum >> 6;
int r = (sum*sum) >> 6;
stdev = sqrtf(sqsum - r);
if (stdev < .1) {
stdev=0;
}
}
That loop is a perfect candidate for NEON optimization. You can fit your 8 unsigned integers into a single NEON register. There is no "sum all elements of a vector" instruction, but you can use the pairwise add to compute the sum of the 8 elements in 3 steps. Since we can't see the rest of your application, it's hard to know what the big picture is, but NEON is your best bet for improving the speed. All recent Apple products support NEON instructions, and in Xcode you can use NEON intrinsics mixed with your C++ code.
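As a starting point, here is a minimal, untested sketch of that idea in NEON intrinsics (the function name and layout are mine): it processes one row of 8 pixels, squares in 16 bits (255*255 still fits), and reduces each vector with the 3-step pairwise-add sequence mentioned above, accumulating into the caller's sum and sqsum.
#include <arm_neon.h>
#include <stdint.h>
/* One row of 8 unsigned bytes: add their sum and sum of squares to *sum / *sqsum. */
static inline void sumAndSqsum8(const uint8_t *p, unsigned *sum, unsigned *sqsum)
{
    uint8x8_t  v  = vld1_u8(p);          /* load 8 bytes */
    uint16x8_t w  = vmovl_u8(v);         /* widen to 16 bits */
    uint16x8_t sq = vmulq_u16(w, w);     /* f*f, still fits in 16 bits */
    /* horizontal reduction: 8 lanes -> 4 -> 2 -> 1 */
    uint32x4_t s4 = vpaddlq_u16(w);
    uint32x4_t q4 = vpaddlq_u16(sq);
    uint32x2_t s2 = vadd_u32(vget_low_u32(s4), vget_high_u32(s4));
    uint32x2_t q2 = vadd_u32(vget_low_u32(q4), vget_high_u32(q4));
    s2 = vpadd_u32(s2, s2);
    q2 = vpadd_u32(q2, q2);
    *sum   += vget_lane_u32(s2, 0);
    *sqsum += vget_lane_u32(q2, 0);
}
Called once per j iteration in calculateMeanStDev8x8, this would replace the inner i loop; a further step would be to keep the whole 8x8 patch in NEON registers and only move the two totals back to ARM once.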