C++ error "expected initializer before <variablename>"

Good afternoon programmers,
I am very new to C++. I use Eclipse, and I have an assignment that asks me to calculate a future population given constant birth, death, and immigration rates and the current population.
I get two errors.
1) On the line "float SECONDSINAYEAR;" I get the error "expected initializer before BIRTHRATEPERSEC"
2) On the line "CURRENTPOPULATION = 318933342;" I get the error "'futurepopulation' was not declared in this scope"
#include <iostream>
using namespace std;

float main(void) {
    //declaration of variables
    float BIRTHRATEPERSEC;
    float DEATHRATEPERSEC;
    float IMMIGRANTRATEPERSEC;
    float CURRENTPOPULATION;
    float SECONDSINAYEAR;
    float futurepopulation;
    // Initialize constants
    BIRTHRATEPERSEC = .5;
    DEATHRATEPERSEC = -.1428571429;
    IMMIGRANTRATEPERSEC = .04167;
    CURRENTPOPULATION = 318933342;
    SECONDSINAYEAR = 31536000;
    // Do calculations
    float futurepopulation = (CURRENTPOPULATION + (SECONDSINAYEAR * (BIRTHRATEPERSEC + DEATHRATEPERSEC + IMMIGRANTRATEPERSEC)));
    //print output
    cout << "The future population is " + futurepopulation << endl;
    return 0;
}
what should I do? Thanks!

The first issue is that your main should return an int, not a float.
You also declare float futurepopulation twice: once with your other variables and once with the calculation.
The last problem is that in C++ you cannot add a number to a string literal with + that way. The correct syntax is cout << "The future population is " << futurepopulation << endl;
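Putting those three fixes together, a corrected version might look like this (a sketch, keeping the question's variable names):

#include <iostream>
using namespace std;

int main() {  // main must return int, not float
    // Constants for the calculation
    float BIRTHRATEPERSEC = .5;
    float DEATHRATEPERSEC = -.1428571429;
    float IMMIGRANTRATEPERSEC = .04167;
    float CURRENTPOPULATION = 318933342;
    float SECONDSINAYEAR = 31536000;

    // Declared only once, at the point of the calculation
    float futurepopulation = CURRENTPOPULATION + SECONDSINAYEAR * (BIRTHRATEPERSEC + DEATHRATEPERSEC + IMMIGRANTRATEPERSEC);

    // Stream the number with << instead of +
    cout << "The future population is " << futurepopulation << endl;
    return 0;
}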

Related

Please explain this function inidat

What does that * before u mean, what kind of variable is it, and what will the output of this function be?
Thanks
void inidat(int nx, int ny, float* u)
{
    int ix, iy;
    for (ix = 0; ix <= nx-1; ix++)
    {
        for (iy = 0; iy <= ny-1; iy++)
        {
            *(u + ix*ny + iy) = (float)(ix * (nx - ix - 1) * iy * (ny - iy - 1));
        }
    }
}
u is a pointer. A pointer is an object whose value is the address of another value stored elsewhere in memory. A pointer references a location, and one can get the object stored at that location by "dereferencing" the pointer.
float * x;
cout << *x;
cout << x;
The second line prints the value the pointer points to (dereferencing it with *).
The third line prints the pointer itself, i.e. the address it holds.
In a declaration, * means the variable being declared is a pointer; in an expression (as in this function, where u was declared earlier as a parameter), * applied to a pointer dereferences it.
As for the output of the function: it returns void, so it produces no return value; instead it fills the array that u points to in the last line. To say what it is actually computing, I would need the context in which the function is called.
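To make the pointer arithmetic concrete: *(u + ix*ny + iy) is the same element as u[ix*ny + iy], so inidat treats u as a flattened nx-by-ny 2D array in row-major order. A minimal usage sketch (the 4x3 size is just for illustration):

#include <iostream>

// definition from the question
void inidat(int nx, int ny, float* u)
{
    for (int ix = 0; ix <= nx - 1; ix++)
        for (int iy = 0; iy <= ny - 1; iy++)
            *(u + ix*ny + iy) = (float)(ix * (nx - ix - 1) * iy * (ny - iy - 1));
}

int main()
{
    const int nx = 4, ny = 3;
    float u[nx * ny];                  // flattened 4x3 array, row-major
    inidat(nx, ny, u);
    for (int ix = 0; ix < nx; ix++)
    {
        for (int iy = 0; iy < ny; iy++)
            std::cout << u[ix*ny + iy] << " ";  // same cell as *(u + ix*ny + iy)
        std::cout << "\n";
    }
    return 0;
}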

Is there any GMP logarithm function?

Is there any logarithm function implemented in the GMP library?
I know you didn't ask how to implement it, but...
You can implement a rough one using the properties of logarithms: http://gnumbers.blogspot.com.au/2011/10/logarithm-of-large-number-it-is-not.html
And the internals of the GMP library: https://gmplib.org/manual/Integer-Internals.html
(Edit: basically you just use the most significant "digit" of the GMP representation; since the base B of the representation is huge, B^N is much larger than B^{N-1}.)
Here is my implementation for Rationals.
#include <cmath>    // log
#include <climits>  // ULONG_MAX
#include <cstdlib>  // abs
#include <gmp.h>

double LogE(mpq_t m_op)
{
    // log(a/b) = log(a) - log(b)
    // And if a is represented in base B as:
    //   a = a_N B^N + a_{N-1} B^{N-1} + ... + a_0
    //   => log(a) \approx log(a_N B^N)
    //    = log(a_N) + N log(B)
    // where B is the base; ie: ULONG_MAX
    static double logB = log(ULONG_MAX);

    // Undefined logs (should probably return NAN in second case?)
    if (mpz_get_ui(mpq_numref(m_op)) == 0 || mpz_sgn(mpq_numref(m_op)) < 0)
        return -INFINITY;

    // Log of numerator
    double lognum = log(mpq_numref(m_op)->_mp_d[abs(mpq_numref(m_op)->_mp_size) - 1]);
    lognum += (abs(mpq_numref(m_op)->_mp_size) - 1) * logB;

    // Subtract log of denominator, if it exists
    if (abs(mpq_denref(m_op)->_mp_size) > 0)
    {
        lognum -= log(mpq_denref(m_op)->_mp_d[abs(mpq_denref(m_op)->_mp_size) - 1]);
        lognum -= (abs(mpq_denref(m_op)->_mp_size) - 1) * logB;
    }
    return lognum;
}
(Much later edit)
Coming back to this 5 years later, I just think it's cool that the core concept of log(a) = N log(B) + log(a_N) shows up even in native floating point implementations; here is the glibc one for ia64.
And I used it again after encountering this question
No, there is no such function in GMP. Only in MPFR.
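For example, a minimal MPFR sketch (assuming MPFR is installed; compile with -lmpfr -lgmp):

#include <stdio.h>
#include <mpfr.h>

int main()
{
    mpfr_t x, y;
    mpfr_init2(x, 256);            // 256 bits of precision
    mpfr_init2(y, 256);
    mpfr_set_ui(x, 2, MPFR_RNDN);
    mpfr_log(y, x, MPFR_RNDN);     // y = ln(2), correctly rounded
    mpfr_printf("ln(2) = %.50Rf\n", y);
    mpfr_clear(x);
    mpfr_clear(y);
    return 0;
}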
The method below makes use of mpz_get_d_2exp and was obtained from the gmp R package. It can be found in the function biginteger_log in the file bigintegerR.cc (you first have to download the source, i.e. the tar file). You can also see it here: biginteger_log.
#include <cmath>  // log
#include <gmp.h>

// Adapted for general use from the original biginteger_log
// xi = di * 2^ex  ==>  log(xi) = log(di) + ex * log(2)
double biginteger_log_modified(mpz_t x) {
    signed long int ex;
    const double di = mpz_get_d_2exp(&ex, x);
    return log(di) + log(2) * (double) ex;
}
Of course, the above method could be modified to return the log in any base using the properties of logarithms (e.g. the change of base formula).
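A sketch of that modification, building on the function above (the helper name biginteger_log_base is mine, not from the package):

// log_b(x) = ln(x) / ln(b)  (change of base formula)
double biginteger_log_base(mpz_t x, double base) {
    return biginteger_log_modified(x) / log(base);
}

// e.g. biginteger_log_base(x, 2.0) gives the base-2 logarithm.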
Here it is:
https://github.com/linas/anant
Provides GNU MP real and complex logarithm, exp, sine, cosine, gamma, arctan, sqrt, polylogarithm, Riemann and Hurwitz zeta, confluent hypergeometric, topologist's sine, and more.
As other answers have said, there is no logarithm function in GMP. Some of the answers provide implementations of a logarithm, but with double precision only, not arbitrary precision.
I implemented a full (arbitrary) precision logarithm function below, up to thousands of bits of precision if you wish, using mpf, the generic floating-point type of GMP.
My code uses the Taylor series for ln(1 + x), plus mpf_sqrt() to speed up the computation.
The code is in C++ and is quite large for two reasons. First, it does precise time measurements to figure out the best combination of internal computational parameters for your machine. Second, it uses extra speed improvements, such as additional use of mpf_sqrt() to prepare the initial value.
The algorithm of my code is as follows:
1) Factor out the exponent of 2 from the input x, i.e. rewrite x = d * 2^exp, using mpf_get_d_2exp().
2) Adjust d (from the step above) so that 2/3 <= d <= 4/3; this is achieved by possibly multiplying d by 2 and doing --exp. This ensures that d always differs from 1 by at most 1/3; in other words, d can deviate from 1 by an equal distance in both directions.
3) Divide x by 2^exp, using mpf_div_2exp() and mpf_mul_2exp().
4) Take the square root of x several times (num_sqrt times) so that x moves closer to 1. This makes the Taylor series converge more rapidly, because computing a few square roots is cheaper than spending many extra iterations of the Taylor series.
5) Compute the Taylor series for ln(1 + x) up to the desired precision (even thousands of bits of precision if needed).
6) Because in step 4 we took the square root several times, we now need to multiply y (the result of the Taylor series) by 2^num_sqrt.
7) Finally, because in step 1 we factored out 2^exp, we now need to add ln(2) * exp to y. Here ln(2) is computed by a single recursive call to the same function that implements the whole algorithm.
The steps above come from the sequence of formulas ln(x) = ln(d * 2^exp) = ln(d) + exp * ln(2) = ln(sqrt(...sqrt(d))) * 2^num_sqrt + exp * ln(2).
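Before the full GMP implementation below, here is the same algorithm sketched in plain double precision, using the standard frexp()/ldexp() in place of the GMP calls, just to make the steps concrete (a sketch only; assumes x > 0):

#include <cmath>
#include <cstdio>

// Sketch of the algorithm in double precision (not arbitrary precision):
// factor out 2^exp, sqrt repeatedly until d is near 1, run the Taylor
// series for ln(1 + t), undo the sqrts, then add back exp * ln(2).
double ln_sketch(double x) {
    int exp;
    double d = std::frexp(x, &exp);          // x = d * 2^exp, 0.5 <= d < 1
    int num_sqrt = 0;
    while (std::fabs(d - 1.0) > 1e-4) {      // bring d close to 1
        d = std::sqrt(d);                    // each sqrt halves ln(d)
        ++num_sqrt;
    }
    double t = d - 1.0, term = t, y = t;     // ln(1 + t) = t - t^2/2 + t^3/3 - ...
    for (int i = 2; std::fabs(term) > 1e-18; ++i) {
        term *= -t;
        y += term / i;
    }
    y = std::ldexp(y, num_sqrt);             // undo the sqrts: y *= 2^num_sqrt
    return y + exp * std::log(2.0);          // add back exp * ln(2)
}

int main() {
    std::printf("%.15f\n%.15f\n", ln_sketch(3.141592653589793), std::log(3.141592653589793));
}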
My implementation automatically does timings (just once per program run) to figure out how many square roots are needed to balance out the Taylor series computation. If you need to avoid the timings, pass 0.001 instead of zero as the 3rd parameter sqrt_range of mpf_ln().
The main() function contains examples of usage, testing of correctness (by comparing to the lower-precision std::log()), timings, and output of various verbose information. The function is tested on the first 1024 bits of Pi.
Before calling my function mpf_ln(), don't forget to set up the needed precision of computation by calling mpf_set_default_prec(bits) with the desired precision in bits.
The computational time of my mpf_ln() is about 40-90 microseconds for 1024-bit precision. Bigger precision takes more time, approximately linearly proportional to the number of precision bits.
The very first run of the function takes considerably longer because it pre-computes the timings table and the value of ln(2). So it is suggested to do a first single computation at program start, to avoid a longer computation inside a time-critical region later in the code.
To compile, for example on Linux, you have to install the GMP library and issue the command:
clang++-14 -std=c++20 -O3 -lgmp -lgmpxx -o main main.cpp && ./main
Try it online!
#include <cstdint>
#include <iomanip>
#include <iostream>
#include <cmath>
#include <chrono>
#include <mutex>
#include <vector>
#include <unordered_map>
#include <algorithm>  // std::lower_bound
#include <gmpxx.h>

double Time() {
    static auto const gtb = std::chrono::high_resolution_clock::now();
    return std::chrono::duration_cast<std::chrono::duration<double>>(
        std::chrono::high_resolution_clock::now() - gtb).count();
}

mpf_class mpf_ln(mpf_class x, bool verbose = false, double sqrt_range = 0) {
    auto total_time = verbose ? Time() : 0.0;
    int const prec = mpf_get_prec(x.get_mpf_t());
    if (sqrt_range == 0) {
        static std::mutex mux;
        std::lock_guard<std::mutex> lock(mux);
        static std::vector<std::pair<size_t, double>> ranges;
        if (ranges.empty())
            mpf_ln(3.14, false, 0.01);
        while (ranges.empty() || ranges.back().first < prec) {
            size_t const bits = ranges.empty() ? 64 : ranges.back().first * 3 / 2;
            mpf_class x = 3.14;
            mpf_set_prec(x.get_mpf_t(), bits);
            double sr = 0.35, sr_best = 1, time_best = 1000;
            size_t constexpr ntests = 5;
            while (true) {
                auto tim = Time();
                for (size_t i = 0; i < ntests; ++i)
                    mpf_ln(x, false, sr);
                tim = (Time() - tim) / ntests;
                bool updated = false;
                if (tim < time_best) {
                    sr_best = sr;
                    time_best = tim;
                    updated = true;
                }
                sr /= 1.5;
                if (sr <= 1e-8) {
                    ranges.push_back(std::make_pair(bits, sr_best));
                    break;
                }
            }
        }
        sqrt_range = std::lower_bound(ranges.begin(), ranges.end(), size_t(prec),
            [](auto const & a, auto const & b){
                return a.first < b;
            })->second;
    }
    signed long int exp = 0;
    // https://gmplib.org/manual/Converting-Floats
    double d = mpf_get_d_2exp(&exp, x.get_mpf_t());
    if (d < 2.0 / 3) {
        d *= 2;
        --exp;
    }
    mpf_class t;
    // https://gmplib.org/manual/Float-Arithmetic
    if (exp >= 0)
        mpf_div_2exp(x.get_mpf_t(), x.get_mpf_t(), exp);
    else
        mpf_mul_2exp(x.get_mpf_t(), x.get_mpf_t(), -exp);
    auto sqrt_time = verbose ? Time() : 0.0;
    // Multiple Sqrt of x
    int num_sqrt = 0;
    if (x >= 1)
        while (x >= 1.0 + sqrt_range) {
            // https://gmplib.org/manual/Float-Arithmetic
            mpf_sqrt(x.get_mpf_t(), x.get_mpf_t());
            ++num_sqrt;
        }
    else
        while (x <= 1.0 - sqrt_range) {
            mpf_sqrt(x.get_mpf_t(), x.get_mpf_t());
            ++num_sqrt;
        }
    if (verbose)
        sqrt_time = Time() - sqrt_time;
    static mpf_class const eps = [&]{
        mpf_class eps = 1;
        mpf_div_2exp(eps.get_mpf_t(), eps.get_mpf_t(), prec + 8);
        return eps;
    }(), meps = -eps;
    // Taylor series for ln(1 + x)
    // https://math.stackexchange.com/a/878376/826258
    x -= 1;
    mpf_class k = x, y = x, mx = -x;
    size_t num_iters = 0;
    for (int32_t i = 2;; ++i) {
        k *= mx;
        y += k / i;
        // Check if error is small enough
        if (meps <= k && k <= eps) {
            num_iters = i;
            break;
        }
    }
    auto VerboseInfo = [&]{
        if (!verbose)
            return;
        total_time = Time() - total_time;
        std::cout << std::fixed << "Sqrt range " << sqrt_range << ", num sqrts "
            << num_sqrt << ", sqrt time " << sqrt_time << " sec" << std::endl;
        std::cout << "Ln number of iterations " << num_iters << ", ln time "
            << total_time << " sec" << std::endl;
    };
    // Correction due to multiple sqrt of x
    y *= 1 << num_sqrt;
    if (exp == 0) {
        VerboseInfo();
        return y;
    }
    mpf_class ln2;
    {
        static std::mutex mutex;
        std::lock_guard<std::mutex> lock(mutex);
        static std::unordered_map<size_t, mpf_class> ln2s;
        auto it = ln2s.find(size_t(prec));
        if (it == ln2s.end()) {
            mpf_class sqrt_sqrt_2 = 2;
            mpf_sqrt(sqrt_sqrt_2.get_mpf_t(), sqrt_sqrt_2.get_mpf_t());
            mpf_sqrt(sqrt_sqrt_2.get_mpf_t(), sqrt_sqrt_2.get_mpf_t());
            it = ln2s.insert(std::make_pair(size_t(prec), mpf_class(mpf_ln(sqrt_sqrt_2, false, sqrt_range) * 4))).first;
        }
        ln2 = it->second;
    }
    y += ln2 * exp;
    VerboseInfo();
    return y;
}

std::string mpf_str(mpf_class const & x) {
    mp_exp_t exp;
    auto s = x.get_str(exp);
    return s.substr(0, exp) + "." + s.substr(exp);
}

int main() {
    // https://gmplib.org/manual/Initializing-Floats
    mpf_set_default_prec(1024); // bit-precision
    // http://www.math.com/tables/constants/pi.htm
    mpf_class x(
        "3."
        "1415926535 8979323846 2643383279 5028841971 6939937510 "
        "5820974944 5923078164 0628620899 8628034825 3421170679 "
        "8214808651 3282306647 0938446095 5058223172 5359408128 "
        "4811174502 8410270193 8521105559 6446229489 5493038196 "
        "4428810975 6659334461 2847564823 3786783165 2712019091 "
        "4564856692 3460348610 4543266482 1339360726 0249141273 "
        "7245870066 0631558817 4881520920 9628292540 9171536436 "
    );
    std::cout << std::boolalpha << std::fixed << std::setprecision(14);
    std::cout << "x:" << std::endl << mpf_str(x) << std::endl;
    auto cmath_val = std::log(mpf_get_d(x.get_mpf_t()));
    std::cout << "cmath ln(x): " << std::endl << cmath_val << std::endl;
    auto volatile tmp = mpf_ln(x); // Pre-Compute to heat-up timings table.
    auto time_start = Time();
    size_t constexpr ntests = 20;
    for (size_t i = 0; i < ntests; ++i) {
        auto volatile tmp = mpf_ln(x);
    }
    std::cout << "mpf ln(x) time " << (Time() - time_start) / ntests << " sec" << std::endl;
    auto mpf_val = mpf_ln(x, true);
    std::cout << "mpf ln(x):" << std::endl << mpf_str(mpf_val) << std::endl;
    std::cout << "equal to cmath: " << (std::abs(mpf_get_d(mpf_val.get_mpf_t()) - cmath_val) <= 1e-14) << std::endl;
    return 0;
}
Output:
x:
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273724587007
cmath ln(x):
1.14472988584940
mpf ln(x) time 0.00004426845000 sec
Sqrt range 0.00000004747981, num sqrts 23, sqrt time 0.00001440000000 sec
Ln number of iterations 42, ln time 0.00003873100000 sec
mpf ln(x):
1.144729885849400174143427351353058711647294812915311571513623071472137769884826079783623270275489707702009812228697989159048205527923456587279081078810286825276393914266345902902484773358869937789203119630824756794011916028217227379888126563178049823697313310695003600064405487263880223270096433504959511813198
equal to cmath: true

OpenCL, simple vector addition but wrong output for large input

So, after spending hours reading and understanding, I have finally made my first OpenCL program that actually does something: it adds two vectors and writes the output to a file.
#include <iostream>
#include <vector>
#include <cstdlib>
#include <string>
#include <fstream>
#include <iterator>
#define __CL_ENABLE_EXCEPTIONS
#include <CL/cl.hpp>

int main(int argc, char *argv[])
{
    try
    {
        // get platforms, devices and display their info.
        std::vector<cl::Platform> platforms;
        cl::Platform::get(&platforms);
        std::vector<cl::Platform>::iterator i = platforms.begin();
        std::cout << "OpenCL \tPlatform : " << i->getInfo<CL_PLATFORM_NAME>() << std::endl;
        std::cout << "\tVendor: " << i->getInfo<CL_PLATFORM_VENDOR>() << std::endl;
        std::cout << "\tVersion : " << i->getInfo<CL_PLATFORM_VERSION>() << std::endl;
        std::cout << "\tExtensions : " << i->getInfo<CL_PLATFORM_EXTENSIONS>() << std::endl;
        // get devices
        std::vector<cl::Device> devices;
        i->getDevices(CL_DEVICE_TYPE_ALL, &devices);
        std::cout << "\n\n";
        // iterate over available devices
        for (std::vector<cl::Device>::iterator j = devices.begin(); j != devices.end(); j++)
        {
            std::cout << "\tOpenCL\tDevice : " << j->getInfo<CL_DEVICE_NAME>() << std::endl;
            std::cout << "\t\t Type : " << j->getInfo<CL_DEVICE_TYPE>() << std::endl;
            std::cout << "\t\t Vendor : " << j->getInfo<CL_DEVICE_VENDOR>() << std::endl;
            std::cout << "\t\t Driver : " << j->getInfo<CL_DRIVER_VERSION>() << std::endl;
            std::cout << "\t\t Global Mem : " << j->getInfo<CL_DEVICE_GLOBAL_MEM_SIZE>() / (1024 * 1024) << " MBytes" << std::endl;
            std::cout << "\t\t Local Mem : " << j->getInfo<CL_DEVICE_LOCAL_MEM_SIZE>() / 1024 << " KBytes" << std::endl;
            std::cout << "\t\t Compute Unit : " << j->getInfo<CL_DEVICE_MAX_COMPUTE_UNITS>() << std::endl;
            std::cout << "\t\t Clock Rate : " << j->getInfo<CL_DEVICE_MAX_CLOCK_FREQUENCY>() << " MHz" << std::endl;
        }
        std::cout << "\n\n\n";
        // MAIN CODE BEGINS HERE
        // get kernel
        std::ifstream ifs("vector_add_kernel.cl");
        std::string kernelSource((std::istreambuf_iterator<char>(ifs)), std::istreambuf_iterator<char>());
        std::cout << kernelSource;
        // Create context, select device and command queue.
        cl::Context context(devices);
        cl::Device &device = devices.front();
        cl::CommandQueue cmdqueue(context, device);
        // Generate Source vector and push the kernel source in it.
        cl::Program::Sources sourceCode;
        sourceCode.push_back(std::make_pair(kernelSource.c_str(), kernelSource.size()));
        // Generate program using sourceCode
        cl::Program program = cl::Program(context, sourceCode);
        // Build program..
        try
        {
            program.build(devices);
        }
        catch (cl::Error &err)
        {
            std::cerr << "Building failed, " << err.what() << "(" << err.err() << ")"
                << "\nRetrieving build log"
                << "\n Build Log Follows \n"
                << program.getBuildInfo<CL_PROGRAM_BUILD_LOG>(devices.front());
        }
        // Declare and initialize vectors
        std::vector<cl_float> B(993448, 1.3);
        std::vector<cl_float> C(993448, 1.3);
        std::vector<cl_float> A(993448, 1.3);
        cl_int N = A.size();
        // Declare and initialize proper work group size and global size.
        // Global size raised to the nearest multiple of workGroupSize.
        int workGroupSize = 128;
        int GlobalSize;
        if (N % workGroupSize) GlobalSize = N - N % workGroupSize + workGroupSize;
        else GlobalSize = N;
        // Declare buffers.
        cl::Buffer vecA(context, CL_MEM_READ_WRITE, sizeof(cl_float) * N);
        cl::Buffer vecB(context, CL_MEM_READ_ONLY, (B.size()) * sizeof(cl_float));
        cl::Buffer vecC(context, CL_MEM_READ_ONLY, (C.size()) * sizeof(cl_float));
        // Write vectors into buffers (each host vector into its own buffer)
        cmdqueue.enqueueWriteBuffer(vecB, 0, 0, (B.size()) * sizeof(cl_float), &B[0]);
        cmdqueue.enqueueWriteBuffer(vecC, 0, 0, (C.size()) * sizeof(cl_float), &C[0]);
        // Executing kernel
        cl::Kernel kernel(program, "vector_add");
        cl::KernelFunctor kernel_func = kernel.bind(cmdqueue, cl::NDRange(GlobalSize), cl::NDRange(workGroupSize));
        kernel_func(vecA, vecB, vecC, N);
        // Reading back values into vector A
        cmdqueue.enqueueReadBuffer(vecA, true, 0, N * sizeof(cl_float), &A[0]);
        cmdqueue.finish();
        // Saving into file.
        std::ofstream output("vectorAdd.txt");
        for (int i = 0; i < N; i++) output << A[i] << "\n";
    }
    catch (cl::Error &err)
    {
        std::cerr << "OpenCL error: " << err.what() << "(" << err.err() << ")" << std::endl;
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
The problem is, for smaller values of N I'm getting the correct result, which is 2.6.
But for larger values, like the one in the code above (993448) I get garbage output varying between 1 and 2.4.
Here is the Kernel code :
__kernel void vector_add(__global float *A, __global float *B, __global float *C, int N) {
    // Get the index of the current element
    int i = get_global_id(0);
    // Do the operation
    if (i < N) A[i] = C[i] + B[i];
}
UPDATE: OK, it seems the code is working now. I have fixed a few minor mistakes in my code above:
1) The part where GlobalSize is initialized has been fixed.
2) Stupid mistake in enqueueWriteBuffer (wrong parameters given).
It is now outputting the correct result for large values of N; the corrected spots are sketched below.
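For reference, a sketch of the two corrected spots (the rounding formula here is an equivalent rewrite, not the exact original):

// Round the global size up to the nearest multiple of the work-group size.
int GlobalSize = ((N + workGroupSize - 1) / workGroupSize) * workGroupSize;

// Each host vector must go into its own buffer; the second call had
// mistakenly written C's data into vecB.
cmdqueue.enqueueWriteBuffer(vecB, CL_TRUE, 0, B.size() * sizeof(cl_float), &B[0]);
cmdqueue.enqueueWriteBuffer(vecC, CL_TRUE, 0, C.size() * sizeof(cl_float), &C[0]);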
Try changing the data type from float to double, etc.

g++ SSE intrinsics dilemma - value from intrinsic "saturates"

I wrote a simple program using SSE intrinsics to compute the inner product of two large (100,000 or more elements) vectors. The program compares the execution time for both: the inner product computed the conventional way and using intrinsics. Everything works out fine, until I insert (just for the fun of it) an inner loop before the statement that computes the inner product. Before I go further, here is the code:
//this is a sample Intrinsics program to compute inner product of two vectors and compare Intrinsics with traditional method of doing things.
#include <iostream>
#include <iomanip>
#include <xmmintrin.h>
#include <stdio.h>
#include <time.h>
#include <stdlib.h>
using namespace std;

typedef float v4sf __attribute__ ((vector_size(16)));

double innerProduct(float* arr1, int len1, float* arr2, int len2) { // assume len1 = len2.
    float result = 0.0;
    for (int i = 0; i < len1; i++) {
        for (int j = 0; j < len1; j++) {
            result += (arr1[i] * arr2[i]);
        }
    }
    //float y = 1.23e+09;
    //cout << "y = " << y << endl;
    return result;
}

double sse_v4sf_innerProduct(float* arr1, int len1, float* arr2, int len2) { // assume that len1 = len2.
    if (len1 != len2) {
        cout << "Lengths not equal." << endl;
        exit(1);
    }
    /* steps:
     * 1. load a long-type (4 float) into a v4sf type data from both arrays.
     * 2. multiply the two.
     * 3. multiply the same and store result.
     * 4. add this to previous results.
     */
    v4sf arr1Data, arr2Data, prevSums, multVal, xyz;
    //__builtin_ia32_xorps(prevSums, prevSums); // making it equal zero.
    // can explicitly load 0 into prevSums using loadps or storeps (Check).
    float temp[4] = {0.0, 0.0, 0.0, 0.0};
    prevSums = __builtin_ia32_loadups(temp);
    float result = 0.0;
    for (int i = 0; i < (len1 - 3); i += 4) {
        for (int j = 0; j < len1; j++) {
            arr1Data = __builtin_ia32_loadups(&arr1[i]);
            arr2Data = __builtin_ia32_loadups(&arr2[i]); // store the contents of two arrays.
            multVal = __builtin_ia32_mulps(arr1Data, arr2Data); // multiply.
            xyz = __builtin_ia32_addps(multVal, prevSums);
            prevSums = xyz;
        }
    }
    // prevSums will hold the sums of 4 32-bit floating point values taken at a time. Individual entries in prevSums also need to be added.
    __builtin_ia32_storeups(temp, prevSums); // store prevSums into temp.
    cout << "Values of temp:" << endl;
    for (int i = 0; i < 4; i++)
        cout << temp[i] << endl;
    result += temp[0] + temp[1] + temp[2] + temp[3];
    return result;
}

int main() {
    clock_t begin, end;
    int length = 100000;
    float *arr1, *arr2;
    double result_Conventional, result_Intrinsic;
    // printStats("Allocating memory.");
    arr1 = new float[length];
    arr2 = new float[length];
    // printStats("End allocation.");
    srand(time(NULL)); // init random seed.
    // printStats("Initializing array1 and array2");
    begin = clock();
    for (int i = 0; i < length; i++) {
        // for(int j = 0; j < length; j++) {
        //     arr1[i] = rand() % 10 + 1;
        arr1[i] = 2.5;
        //     arr2[i] = rand() % 10 - 1;
        arr2[i] = 2.5;
        // }
    }
    end = clock();
    cout << "Time to initialize array1 and array2 = " << ((double) (end - begin)) / CLOCKS_PER_SEC << endl;
    // printStats("Finished initialization.");
    // printStats("Begin inner product conventionally.");
    begin = clock();
    result_Conventional = innerProduct(arr1, length, arr2, length);
    end = clock();
    cout << "Time to compute inner product conventionally = " << ((double) (end - begin)) / CLOCKS_PER_SEC << endl;
    // printStats("End inner product conventionally.");
    // printStats("Begin inner product using Intrinsics.");
    begin = clock();
    result_Intrinsic = sse_v4sf_innerProduct(arr1, length, arr2, length);
    end = clock();
    cout << "Time to compute inner product with intrinsics = " << ((double) (end - begin)) / CLOCKS_PER_SEC << endl;
    // printStats("End inner product using Intrinsics.");
    cout << "Results: " << endl;
    cout << " result_Conventional = " << result_Conventional << endl;
    cout << " result_Intrinsics = " << result_Intrinsic << endl;
    return 0;
}
I use the following g++ invocation to build this:
g++ -W -Wall -O2 -pedantic -march=i386 -msse intrinsics_SSE_innerProduct.C -o innerProduct
Each of the loops above, in both functions, runs a total of N^2 times. However, given that arr1 and arr2 (the two floating point vectors) are loaded with the value 2.5 and the length of each array is 100,000, the result in both cases should be 6.25e+10. The results I get are:
Results:
result_Conventional = 6.25e+10
result_Intrinsics = 5.36871e+08
This is not all. It seems that the value returned from the function that uses intrinsics "saturates" at the value above. I tried other values for the elements of the arrays, and different sizes too. But it seems that any value above 1.0 for the array contents, and any size above 1000, meets with the same value we see above.
Initially, I thought it might be because all operations within SSE are in floating point, but floating point should be able to store a number of the order of e+08.
I am trying to see where I could be going wrong but cannot seem to figure it out. I am using g++ version: g++ (GCC) 4.4.1 20090725 (Red Hat 4.4.1-2).
Any help on this is most welcome.
Thanks,
Sriram.
The problem you are having is that while a float can store 6.25e+10, it only has about 7 significant decimal digits (a 24-bit mantissa) of precision.
This means that when you build a large number by adding lots of small numbers together a bit at a time, you reach a point where the small number is smaller than the lowest-precision digit of the large number, so adding it has no effect.
As to why you do not get this behaviour in the non-intrinsic version: it is likely that the result variable is being held in a register that uses higher precision than the actual storage of a float, so it is not being truncated to float precision on every iteration of the loop. You would have to look at the generated assembler code to be sure.
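A minimal sketch of the saturation effect (not the question's code; the 6.25 increment matches arr1[i] * arr2[i] = 2.5 * 2.5):

#include <iostream>

int main() {
    float sum = 0.0f;
    for (long i = 0; i < 100000000L; ++i)  // 1e8 additions of 6.25
        sum += 6.25f;                      // stalls once 6.25 < ulp(sum)/2
    std::cout << sum << std::endl;         // prints 1.34218e+08 (= 2^27), not 6.25e+08
    return 0;
}

The sum gets stuck at 2^27 because at that magnitude the gap between adjacent floats is 16, so adding 6.25 rounds back down to the same value. Notably, the question's saturated result 5.36871e+08 is 4 x 2^27, consistent with addps accumulating four partial sums that each saturate at 2^27 before being added together.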

AbsoluteToNanoseconds vs AbsoluteToDuration

Apple has extremely comprehensive documentation, but I can't find any documentation for the function AbsoluteToNanoseconds. I want to find the difference between AbsoluteToNanoseconds and AbsoluteToDuration.
Note
I am beginning to think that the Apple docs only cover Objective-C functions. Is this the case?
I found the following by using Apple-double-click:
Duration: 32-bit millisecond timer for drivers
AbsoluteTime: 64-bit clock
I'm not sure why it isn't documented anywhere, but here is an example of how it is used, if that helps:
static float HowLong(
    AbsoluteTime endTime,
    AbsoluteTime bgnTime
)
{
    AbsoluteTime absTime;
    Nanoseconds  nanosec;

    absTime = SubAbsoluteFromAbsolute(endTime, bgnTime);
    nanosec = AbsoluteToNanoseconds(absTime);
    return (float) UnsignedWideToUInt64(nanosec) / 1000.0;
}
UPDATE:
"The main reason I am interested in the docs is to find out how it differs from AbsoluteToDuration"
That's easier. AbsoluteToNanoseconds() returns a value of type Nanoseconds, which is really an UnsignedWide struct.
struct UnsignedWide {
    UInt32 hi;
    UInt32 lo;
};
In contrast, AbsoluteToDuration() returns a value of type Duration, which is actually a signed 32-bit integer (SInt32):
typedef SInt32 Duration;
Durations use a smaller, signed type because they are intended to hold relative times. Nanoseconds, on the other hand, only make sense as positive values, and they can be very large, since computers can stay running for years at a time.
According to https://developer.apple.com/library/prerelease/mac/releasenotes/General/APIDiffsMacOSX10_9/Kernel.html,
SubAbsoluteFromAbsolute(), along with apparently all the other *Absolute* functions, has been removed in Mavericks. I have confirmed this.
These functions are no longer necessary since, at least in Mavericks and Mountain Lion (the two I tested), mach_absolute_time() already returns time in nanoseconds rather than in absolute form (which used to be the number of bus cycles), making a conversion unnecessary. Thus the conversion shown in "clock_gettime alternative in Mac OS X", and similar code presented in several places on the web, is no longer necessary. You can confirm this on your system by checking that both the numerator and denominator returned by mach_timebase_info() are 1.
Here is my test code, with lots of output, to check whether you need to do the conversion on your system. (I have to perform a check since my code might run on older Macs; I do the check at program initiation and set a function pointer to call a different routine.)
#include <CoreServices/CoreServices.h>
#include <mach/mach.h>
#include <mach/mach_time.h>
#include <unistd.h>  // sleep
#include <time.h>
#include <iostream>
using namespace std;

int main()
{
    uint64_t now, then;
    uint64_t abs, nano;
    mach_timebase_info_data_t timebase_info = {0, 0};

    then = mach_absolute_time();
    sleep(1);
    now = mach_absolute_time();
    abs = now - then;

    mach_timebase_info(&timebase_info);
    cout << "numerator " << timebase_info.numer << " denominator "
         << timebase_info.denom << endl;
    if ((timebase_info.numer != 1) || (timebase_info.denom != 1))
    {
        nano = (abs * timebase_info.numer) / timebase_info.denom;
        cout << "Have a real conversion value" << endl;
    }
    else
    {
        nano = abs;
        cout << "Both numerator and denominator are 1" << endl;
    }
    cout << "milliseconds = " << nano / 1000000LL << endl;
}