Convert common block to module

I am a researcher working with a program written in Fortran. I have very basic coding skills, so I need a bit of help getting some code to compile properly.
I will give a bit of background before showing the code. I am dealing with a large amount of data, which requires 64-bit compilation and more than 2 GB of memory. The first thing I noticed in the code is that many of the variables were declared as "real", but in my research I found that "double precision" offers much greater range and precision and would be a more flexible choice, so I changed all "real" variables to "double precision" variables.
There is also a file called "geocoord.inc" that is included when compiling the Fortran file "dist.f". I found that its variables are stored in a common block, but once again, I need something that can hold a larger amount of data. As I have been led to believe, a module would be a better construct to use. I need some advice on converting this include file to work properly as a module. I will list both files below.
Dist.f:
c Convert latitude and longitude to kilometers relative
c to center of coordinates by short distance conversion.
      subroutine dist(xlat, xlon, xkm, ykm)

      implicit none

c Parameters:
      double precision xlat, xlon   ! (input)
      double precision xkm, ykm     ! (output)

c Local variables:
      double precision lat1, lat2, lat3
      double precision q
      double precision xx
      double precision yp

      include "geocoord.inc"

c Set up short distance conversion by subr. SETORG
      q = 60*xlat - olat
      yp = q + olat
      lat1 = datan(rlatc*dtan(RAD*yp/60.0))
      lat2 = datan(rlatc*dtan(RAD*OLAT/60.0))
      LAT3 = (LAT2 + LAT1)/2.
      xx = 60*xlon - olon   ! minus sign because of LON E
      q = q*aa
      xx = xx*bb*dcos(LAT3)
      IF (rotate .ne. 0.) then
c** rotate coordinate system anticlockwise
         yp = cost*q + sint*xx
         xx = cost*xx - sint*q
         q = yp
      ENDIF
      xkm = xx
      ykm = q
      return
      end
Geocoord.inc:
      double precision rearth
      double precision ellip
      double precision rlatc
      double precision rad
      double precision olat, olon
      double precision aa, bb, bc
      double precision sint, cost
      double precision rotate
      integer icoordsystem

      common /GEO_COORSYSTEM/ rearth, ellip, rlatc, rad,
     &        olat, olon, aa, bb, bc, sint, cost, rotate,
     &        icoordsystem
I appreciate any advice that you can provide and apologize for my relative ignorance in all things Fortran!

Modernizing an old code is often not an easy task, at least for a beginner. The move from real to double precision is not in the spirit of modern Fortran, but until you introduce modules it is OK. Once you have modules, it is better to do:
module precisions
  integer, parameter :: rp = kind(1.d0)  ! if you insist on double, otherwise use selected_real_kind()
end module
and everywhere use the new kind constant that denotes your real precision:
use precisions
real(rp) :: variables
With the common blocks, the one you showed would be:
module geo_coordsystem
  use precisions
  implicit none

  real(rp) :: rearth
  real(rp) :: ellip
  real(rp) :: rlatc
  real(rp) :: rad
  real(rp) :: olat, olon
  real(rp) :: aa, bb, bc
  real(rp) :: sint, cost
  real(rp) :: rotate
  integer :: icoordsystem
end module
Then you use it:
subroutine dist(xlat, xlon, xkm, ykm)
  use precisions
  use geo_coordsystem
  implicit none
You can also gradually move your subroutines into modules. Do that in small steps and always check that you didn't introduce some bug.

Several Fortran compilers have an option to promote real variables to double precision. In gfortran the option is -fdefault-real-8, as documented at http://gcc.gnu.org/onlinedocs/gfortran/Fortran-Dialect-Options.html. In the long run it is better to use kinds, as Vladimir F suggested.

Related

Why write 1,000,000,000 as 1000*1000*1000 in C?

In code created by Apple, there is this line:
CMTimeMakeWithSeconds( newDurationSeconds, 1000*1000*1000 )
Is there any reason to express 1,000,000,000 as 1000*1000*1000?
Why not 1000^3 for that matter?
One reason to write constants in a multiplicative way is to improve readability, while run-time performance is not affected.
It also indicates that the writer was thinking in a multiplicative manner about the number.
Consider this:
double memoryBytes = 1024 * 1024 * 1024;
It's clearly better than:
double memoryBytes = 1073741824;
as the latter doesn't look, at first glance, like the third power of 1024.
As Amin Negm-Awad mentioned, the ^ operator is the binary XOR. Many languages lack the built-in, compile-time exponentiation operator, hence the multiplication.
There are reasons not to use 1000 * 1000 * 1000.
With 16-bit int, 1000 * 1000 overflows. So using 1000 * 1000 * 1000 reduces portability.
With 32-bit int, the following first line of code overflows.
long long Duration = 1000 * 1000 * 1000 * 1000; // overflow
long long Duration = 1000000000000; // no overflow, hard to read
I suggest making the lead value match the type of the destination, for readability, portability and correctness.
double Duration = 1000.0 * 1000 * 1000;
long long Duration = 1000LL * 1000 * 1000 * 1000;
Also, code could simply use e notation for values that are exactly representable as a double. Of course, this requires knowing whether a double can exactly represent the whole-number value – something of concern with values greater than 1e9. (See DBL_EPSILON and DBL_DIG.)
long Duration = 1000000000;
// vs.
long Duration = 1e9;
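A quick way to convince yourself that the conversion is exact is a two-line check (a minimal sketch; 1e9 is well below 2^53, so the double holds it exactly):

#include <assert.h>

int main(void)
{
    long a = 1000000000;
    long b = 1e9;    /* double constant, converted at compile time */

    assert(a == b);  /* passes: the conversion loses nothing */
    return 0;
}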
Why not 1000^3?
The result of 1000^3 is 1003. ^ is the bit-XOR operator.
Even though it does not deal with the question itself, I will add a clarification. x^y does not always evaluate to x+y, as it happens to in the question's example. You have to XOR every bit. In the case of the example:
  1111101000₂ (1000₁₀)
^ 0000000011₂ (   3₁₀)
= 1111101011₂ (1003₁₀)
But
  1111101001₂ (1001₁₀)
^ 0000000011₂ (   3₁₀)
= 1111101010₂ (1002₁₀)
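You can verify both cases with a throwaway C program:

#include <stdio.h>

int main(void)
{
    printf("%d\n", 1000 ^ 3);  /* 1003: XOR happens to look like +3 here */
    printf("%d\n", 1001 ^ 3);  /* 1002: the coincidence breaks down */
    return 0;
}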
For readability.
Placing commas or spaces between the zeros (1 000 000 000 or 1,000,000,000) would produce a syntax error, and having 1000000000 in the code makes it hard to see exactly how many zeros there are.
1000*1000*1000 makes it apparent that it's 10^9, because our eyes can process the chunks more easily. Also, there's no runtime cost, because the compiler will replace it with the constant 1000000000.
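You can even have the compiler prove the folding: 1000*1000*1000 is an integer constant expression, so it is accepted in a static assertion (a C11 sketch; it assumes int is at least 32 bits, otherwise the multiplication itself overflows):

#include <assert.h>

/* Evaluated entirely at compile time; a static assertion
   only accepts constant expressions. */
static_assert(1000 * 1000 * 1000 == 1000000000, "folded at compile time");

int main(void) { return 0; }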
For readability. For comparison, Java supports _ in numbers to improve readability (first proposed by Stephen Colebourne as a reply to Derek Foster's PROPOSAL: Binary Literals for Project Coin/JSR 334) . One would write 1_000_000_000 here.
In roughly chronological order, from oldest support to newest:
XPL: "(1)1111 1111" (apparently not for decimal values, only for bitstrings representing binary, quartal, octal or hexadecimal values)
PL/M: 1$000$000
Ada: 1_000_000_000
Perl: likewise
Ruby: likewise
Fantom (previously Fan): likewise
Java 7: likewise
Swift: (same?)
Python 3.6: likewise
C++14: 1'000'000'000
It's a relatively new feature for languages to realize they ought to support this (and then there's Perl). As in chux's excellent answer, 1000*1000... is a partial solution but opens the programmer up to bugs from overflowing the multiplication, even if the final result is a large type.
It might be simpler to read and evokes associations with the 1,000,000,000 form.
From a technical standpoint there is no difference between the direct number and the multiplication; the compiler will generate the constant billion either way.
If you are speaking about Objective-C, then 1000^3 won't work because there is no such syntax for exponentiation (^ is XOR). Instead, the pow() function can be used. But in that case it will not be optimal: it will be a runtime function call, not a compiler-generated constant.
To illustrate the reasons consider the following test program:
$ cat comma-expr.c && gcc -o comma-expr comma-expr.c && ./comma-expr
#include <stdio.h>
#define BILLION1 (1,000,000,000)
#define BILLION2 (1000^3)
int main()
{
printf("%d, %d\n", BILLION1, BILLION2);
}
0, 1003
$
Another way to achieve a similar effect in C for decimal numbers is to use literal floating point notation -- so long as a double can represent the number you want without any loss of precision.
IEEE 754 64-bit double can represent any non-negative integer <= 2^53 without problem. Typically, long double (80 or 128 bits) can go even further than that. The conversions will be done at compile time, so there is no runtime overhead and you will likely get warnings if there is an unexpected loss of precision and you have a good compiler.
long lots_of_secs = 1e9;

VBA Fix Function Discrepancy? [duplicate]

Disclaimer: I know that 0.025 cannot be represented exactly in IEEE floating-point variables and, thus, rounding might not return what one would expect. That is not my question!
Is it possible to simulate the behavior of the VBA arithmetic operators in .NET?
For example, in VBA, the following expression yields 3:
Dim myInt32 As Long
myInt32 = CLng(0.025 * 100) ' yields 3
However, in VB.NET, the following expression yields 2:
Dim myInt32 As Integer
myInt32 = CInt(0.025 * 100) ' yields 2
According to the specification, both should return the same value:
Long (VBA) and Integer (VB.NET) are 32-bit integer types.
According to the VBA specification, CLng performs Let-coercion to Long, and Let-coercion between numeric types uses Banker's rounding. The same is true for VB.NET's CInt.
0.025 is a double precision IEEE floating-point constant in both cases.
Thus, some implementation detail of the floating-point multiplication operator or the integer-conversion operator changed. However, for reasons of compatibility with a legacy VBA system, I'd need to replicate the mathematical behavior of VBA (however wrong it might be) in a .NET application.
Is there some way to do that? Did someone write a Microsoft.VBA.Math library? Or is the precise VBA algorithm documented somewhere so I can do that myself?
VBA and VB.NET behave differently because VBA uses 80-bit "extended" precision for intermediate floating-point calculations (even though Double is a 64-bit type), whereas VB.NET always uses 64-bit precision. When using 80-bit precision, the value of 0.025 * 100 is slightly greater than 2.5, so CLng(0.025 * 100) rounds up to 3.
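The effect is easy to reproduce from C, assuming a compiler whose long double is the x87 80-bit format (gcc on x86/x86-64 is; MSVC, where long double equals double, is not):

#include <stdio.h>

int main(void)
{
    double d = 0.025;                           /* nearest double is slightly above 1/40 */
    double p64 = d * 100.0;                     /* rounded to 64 bits: exactly 2.5 */
    long double p80 = (long double)d * 100.0L;  /* 80 bits keep the excess */

    printf("64-bit product > 2.5? %d\n", p64 > 2.5);   /* 0 */
    printf("80-bit product > 2.5? %d\n", p80 > 2.5L);  /* 1: rounds up to 3 */
    return 0;
}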
Unfortunately, VB.NET doesn't seem to offer 80-bit precision arithmetic. As a workaround, you can create a native Win32 DLL using Visual C++ and call it via P/Invoke. For example:
#include <cmath>
#include <float.h>

#pragma comment(linker, "/EXPORT:MultiplyAndRound=_MultiplyAndRound@16")

extern "C" __int64 __stdcall MultiplyAndRound(double x, double y)
{
    unsigned int cw = _controlfp(0, 0);
    _controlfp(_PC_64, _MCW_PC);      // use 80-bit precision (64-bit significand)
    double result = floor(x * y + 0.5);
    if (result - (x * y + 0.5) == 0 && fmod(result, 2))
        result -= 1.0;                // round down to even if halfway between even and odd
    _controlfp(cw, _MCW_PC);          // restore original precision
    return (__int64)result;
}
And in VB.NET:
Declare Function MultiplyAndRound Lib "FPLib.dll" (ByVal x As Double, ByVal y As Double) As Long
Console.WriteLine(MultiplyAndRound(2.5, 1)) ' 2
Console.WriteLine(MultiplyAndRound(0.25, 10)) ' 2
Console.WriteLine(MultiplyAndRound(0.025, 100)) ' 3
Console.WriteLine(MultiplyAndRound(0.0025, 1000)) ' 3
Given that VBA is supposed to use Banker's rounding, it seemed clear to me at first glance that the bug is actually on the VBA side of things. Banker's rounding rounds a value at the midpoint (.5) so that the resulting digit is even. Thus, to do correct Banker's rounding, 2.5 should round to 2, not to 3. This matches the .Net result rather than the VBA result.
However, based on information pulled from a currently deleted answer, we can also see this result in VBA:
Dim myInt32 As Integer
myInt32 = CInt(2.5) ' 2
myInt32 = CInt(0.025 * 100) ' 3
This makes it seem like the rounding in VBA is correct, but the multiplication operation produces a result that is somehow greater than 2.5. Since we're no longer at a mid-point, the Banker's rule does not apply, and we round up to 3.
Therefore, to fix this issue, you'll need to figure out what that VBA code is really doing with that multiplication instruction. Regardless of what is documented, the observations prove that VBA is handling this part differently than .Net. Once you figure out exactly what's going on, with luck you'll be able to simulate that behavior.
One possible option is to go back to the old standby for floating point numbers: check whether you're within some small delta of a mid-point and, if so, just use the mid-point. Here's some (untested) naive code to do it:
Dim result As Double = 0.025 * 100
Dim delta As Double = Double.Epsilon
Dim floor As Integer = CInt(Math.Floor(result))
If Math.Abs(result - (CDbl(floor) + 0.5)) <= delta Then
    result = CDbl(floor) + 0.5
End If
I emphasize the untested, because at this point we're already dealing with strange results from small rounding errors, and the naive implementation in this situation is unlikely to be good enough. At the very least, you may want to use a factor of 3 or 4 epsilons for your delta. Also, the best you could hope for from this code is that it forces the VBA to match the .Net, when what you're really after is the reverse.

In what cases do we need functions for both double, float and long double?

In the math headers we see
extern float fabsf(float);
extern double fabs(double);
extern long double fabsl(long double);
...
extern float fmodf(float, float);
extern double fmod(double, double);
extern long double fmodl(long double, long double);
Why is there one function for each type?
Isn't this a lot of duplicate code? If I were to, say, write a lerp function or a clamp function, would I need to write one for each type?
It seems like we will have duplicate code where there's only one thing changing – the type.
extern float clampf(float value, float min, float max)
{
    if (value > max)
        return max;
    if (value < min)
        return min;
    return value;
}

extern double clamp(double value, double min, double max)
{
    if (value > max)
        return max;
    if (value < min)
        return min;
    return value;
}
Question 1: What is the historical reason for this structure?
Question 2: Should I follow the same pattern? Or should I only implement the double version, since it is the most common?
Question 3: Or should I just use macros to overcome the type issue altogether?
Historically (circa C89 and before), the math library contained only the double-precision versions of these functions, which is why those versions have no suffix. If you needed to compute the sine of a float, you either wrote your own implementation, or (more likely!) you simply wrote:
float x;
float y = sin(x);
However, this introduces some overhead on modern architectures. Specifically, on the most common architectures today, it is necessary for the compiler to emit code that looks something like this:
convert x to double
call sin
convert result to float
These conversions are pretty fast (about the same as an addition, usually), but they still have some cost. On top of the cost of conversion, sin needs to deliver a result that has ~53 bits of precision, more than half of which are completely wasted if the result is just going to be converted back to single precision. Between these two factors, it is possible for a dedicated single-precision sin routine to be about twice as fast; that’s a significant win for some very frequently-used library functions!
If we look at functions like fabs (and assume that the compiler does not simply inline and lower them), the situation is much, much worse. fabs, on a typical modern architecture, is a simple bitwise-and operation. So the two conversions bracketing the call (if all you have is double) are significantly more expensive than the operation itself, and can easily cause a 5x slowdown. That’s why multiple versions of these functions were added to support each FP type.
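In source form, the difference looks roughly like this (just an illustration of the conversions, not actual library code):

#include <math.h>

/* What "float y = sin(x);" really does when only the double
   version exists: widen the argument, narrow the result. */
float sine_via_double(float x)
{
    return (float)sin((double)x);
}

/* The dedicated single-precision routine needs no conversions. */
float sine_direct(float x)
{
    return sinf(x);
}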
If you don’t want to keep track of all of them, you can #include <tgmath.h>, which will infer the correct function to use based on the type of the argument: sin((float)x) will generate a call to sinf(x), whereas sin((long double)x) will call sinl(x).
In your own code, you usually know a priori what the type of your arguments is, and only need to support one or maybe two types. clamp and lerp in particular are graphics operations, and almost universally are used only in single-precision variants.
Incidentally, the fact that you’re using clamp and lerp is a pretty good indication that you might want to look at writing your code in OpenCL instead of C/Obj-C; the OpenCL math library implements these operations (and many other similar operations) for you, and provides implementations that work with a wide range of basic types, including vectors.
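That said, if you do want a single clamp name covering all three types in plain C, C11's _Generic (the same machinery behind <tgmath.h>) can do the dispatch. A sketch (the helper names are mine, not standard ones):

#include <stdio.h>

static inline float clamp_f(float v, float lo, float hi)
{ return v < lo ? lo : v > hi ? hi : v; }

static inline double clamp_d(double v, double lo, double hi)
{ return v < lo ? lo : v > hi ? hi : v; }

static inline long double clamp_ld(long double v, long double lo, long double hi)
{ return v < lo ? lo : v > hi ? hi : v; }

/* Dispatch on the type of the first argument. */
#define clamp(v, lo, hi) _Generic((v), \
        float:       clamp_f,          \
        long double: clamp_ld,         \
        default:     clamp_d)((v), (lo), (hi))

int main(void)
{
    printf("%f\n", clamp(1.5f, 0.0f, 1.0f));  /* dispatches to clamp_f */
    printf("%f\n", clamp(0.5, 0.0, 1.0));     /* dispatches to clamp_d */
    return 0;
}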
float and double are different data types, same as int and long int. You can use the functions which operate on double on float values and implicit conversion will happen to make it work as expected in most circumstances, but if you use functions which operate on float on double values, you will almost inevitably lose precision.
There are other longer explanations available, e.g. What's the difference between a single precision and double precision floating point operation? .

Does fortran 90 (gfortran) optimise array syntax?

I have done a lot of work with array-based interpreted languages, but I'm having a look at Fortran. What has just occurred to me after writing my first bit of code is the question of whether or not gfortran will optimise an expression using array syntax by placing the expression in a single loop. In most array-based interpreters an expression such as A=B/n*2*pi (where B is an array) would require 5 loops and multiple array temporaries to evaluate. Is gfortran clever enough to optimise this out, and will my code below (the line that calculates the array from 0 to 2pi) be as efficient as an explicit do loop around the expression? Is there anything I should look out for when using array syntax if I'm worried about performance?
PROGRAM Sine
  IMPLICIT NONE
  REAL, PARAMETER :: PI = 3.1415926535
  INTEGER, PARAMETER :: z = 500
  INTEGER :: ier
  INTEGER, EXTERNAL :: PGBEG
  REAL, DIMENSION(z) :: x, y

  x = (indgen(z)-1.0)/z*(2*pi) ! This line...
  y = sin(x)
  CALL plot(y, x)

CONTAINS

  FUNCTION indgen(n) result(i)
    INTEGER :: n
    INTEGER, DIMENSION(n) :: i
    INTEGER :: l
    DO l = 1, n
      i(l) = l
    END DO
  END FUNCTION indgen

  SUBROUTINE plot(y, x)
    REAL, DIMENSION(:) :: x, y
    ier = PGBEG(0, '/XWINDOW', 1, 1)
    CALL PGENV(0.0, 7.0, -1.0, 1.0, 0, 1)
    CALL PGLINE(SIZE(x), x, y)
    CALL PGEND()
  END SUBROUTINE plot
END PROGRAM Sine
In gfortran you can use the -Warray-temporaries flag to see all array temporaries generated. When I try your example no extra array temporary is generated (other than the one necessary to store the results of indgen(z)), so I guess gfortran is clever enough.
The expression z*(2*pi) is a compile-time constant, which the compiler can easily verify, so that should not be evaluated at run time regardless. Additionally, virtually all modern compilers should perform one-line "elemental" array operations within a single loop, and in many cases SIMD instructions will be generated (auto-vectorization).
Whether a temporary is generated usually depends on whether or not each element can be handled independently, and whether or not the compiler can prove this. Xiaolei Zhu's suggestion of using -Warray-temporaries is a good one. Don't mix this up with -fcheck=array-temps, which I think only applies to temporaries generated for function calls.
Here's an example of such a message from gfortran:
foo.F90:4.12:
foo(1:20) = 2*foo(20:1:-1)
1
Warning: Creating array temporary at (1)
Your function call will be done in a separate loop, unless the compiler can inline it. Whether or not the compiler inlines a short function can be pretty unpredictable; it potentially depends on where that other function is defined, whether or not the function has the pure attribute (although in practice this rarely seems to matter), the vendor and version of the compiler itself, and the options you pass. Some compilers can generate a report for this; as I recall, the Intel compiler has a decent one.
Edit: It's also possible to inline the expression in this line pretty easily by hand, using an "implied do loop":
x = [ ( real(i)/z*(2*pi), i = 0, z-1) ]
Yes.
Fortran is compiled rather than interpreted.
It handles loops very well.

What does the floating point "f" designator signify?

I wonder if someone can clarify what the "f" behind a floating-point number signifies?
float myFloat = 12.0f;
as opposed to:
float myFloat = 12.0;
I have seen this used many times, but it's pretty hard to find an explanation either online or in books. I am assuming it's either something carried over from another language that C supports for consistency, or maybe it's there as a directive for the compiler when it comes to evaluating the math.
I am just curious whether there is any practical difference between using the "f" and using the "." to signify a floating-point number.
It means it's a single-precision float rather than a double precision double.
From the C99 standard:
An unsuffixed floating constant has type double. If suffixed by the letter f or F, it has type float.
Objective-C is based on C, maybe not C99, but this convention has been around in C for a long time.
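A small program makes the practical difference visible (a sketch; on IEEE 754 systems the comparison prints "no", because the two constants are rounded at different precisions):

#include <stdio.h>

int main(void)
{
    float  f = 0.1f;  /* 0.1 rounded to float's 24-bit significand  */
    double d = 0.1;   /* 0.1 rounded to double's 53-bit significand */

    /* The two roundings differ, so promoting f does not recover d. */
    printf("equal? %s\n", (double)f == d ? "yes" : "no");

    /* sizeof reveals the type of the constant itself. */
    printf("sizeof 12.0f = %zu, sizeof 12.0 = %zu\n",
           sizeof 12.0f, sizeof 12.0);
    return 0;
}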
There are sometimes performance concerns when converting from float to double, and you can avoid them by using the 'f' suffix. Also, when doing, say, a square root, sine, cosine, etc., a wild guess would be that
float answer = sqrt(12.0f)
is about 10x slower than
float answer = sqrtf(12.0f)
It really makes a difference on the phone and iPad if you are doing millions of these kinds of operations. Stay in float if you need speed and can deal with the lower resolution. If you are not good at math, or are not using much math in your program, use double everywhere, as there are more gotchas when using the lower-precision 32-bit float.