Using SWI-Prolog (Multi-threaded, 64 bits, Version 7.3.5),
we proceed step by step:
Define the DCG nonterminal a//1 in module dcgAux (pronounced: "di-SEE-goh"):
:- module(dcgAux,[a//1]).
a(0) --> [].
a(s(N)) --> [a], a(N).
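As a quick sanity check (a query of my own, not part of the original transcript), a//1 relates a Peano numeral to a list of that many as:
?- phrase(a(s(s(0))), Ls).
Ls = [a, a].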
Run the following queries—using phrase/2 and apply:foldl/4:
?- use_module([library(apply),dcgAux]).
true.
?- phrase( foldl( a,[s(0),s(s(0))]),[a,a,a]).
true.
?- phrase( foldl(dcgAux:a,[s(0),s(s(0))]),[a,a,a]).
true.
?- phrase(apply:foldl(dcgAux:a,[s(0),s(s(0))]),[a,a,a]).
true.
?- phrase(apply:foldl( a,[s(0),s(s(0))]),[a,a,a]).
ERROR: apply:foldl_/4: Undefined procedure: apply:a/3
No! Quite a surprise, and not a good one. Have we been missing some unknown unknowns?
To get rid of the above irritating behavior, we must first find out the reason(s) causing it:
?- import_module(apply,M), M=user.
false.
?- phrase(apply:foldl(a,[s(0),s(s(0))]),[a,a,a]).
ERROR: apply:foldl_/4: Undefined procedure: apply:a/3
?- add_import_module(apply,user,end).
true.
?- import_module(apply,M), M=user. % sic!
M = user. % `?- import_module(apply,user).` fails!
?- phrase(apply:foldl(a,[s(0),s(s(0))]),[a,a,a]).
true.
What's going on? The way I see it is this:
Module expansion of the goal passed to foldl/4 is limited.
Quoting from the SWI-Prolog manual page on import_module/2:
All normal modules only import from user, which imports from system.
SWI's library(apply) only "inherits" from system, but not user.
If we clone module apply to applY (and propagate the new module name), we observe:
?- use_module(applY).
true.
?- phrase(applY:foldl(a,[s(0),s(s(0))]),[a,a,a]). % was: ERROR
true. % now: OK!
Please share your ideas on how I could/should proceed!
(I have not yet run a similar experiment with other Prolog processors.)
This is an inherent feature/bug of predicate-based module systems in the Quintus tradition. That is, this module system was first developed for Quintus Prolog. It was subsequently adopted by SICStus (after 0.71), then (more or less) by ISO/IEC 13211-2, then by YAP, and (with some modifications) by SWI.
The problem here is what exactly an explicit qualification means. As long as the goal is not a meta-predicate, things are trivially resolvable: take the module of the innermost qualification. However, once you have meta-predicates, the meta-arguments need to be informed of that module; or not. If the meta-arguments are informed, we say that the colon sets the calling context; if not, then some other means is needed for that purpose.
In the Quintus tradition, the meta-arguments are taken into account, with the result you see. As a consequence, you cannot directly compare two implementations of the same meta-predicate in the same module. There are other approaches, most notably IF and ECLiPSe, that do not change the calling context via the colon. This has advantages and disadvantages; the best is to compare them case by case.
Here is a recent case. Take lambdas and how they are put into a module
in SICStus, in SWI, and in ECLiPSe.
As for the Quintus/SICStus/YAP/SWI module system, I'd rather use it in the most conservative manner possible. That is:
no explicit qualification, consider the infix : as something internal
clean, checkable meta-declarations - insert an undefined predicate on purpose, just to see whether cross-referencing is able to detect the problem (in SWI that's check or make).
use the common subset, avoid the many bells and whistles - there are many well-meant extensions...
do more versatile things the pedestrian way: reexport by adding an appropriate module and a dummy definition; similarly, instead of renaming, import things from an interface module (see the sketch after this list).
always be aware that module systems inherently have some limits, no matter how you twist or turn it. There is no completely seamless module system, for the very purpose of modules is to separate code and concerns.
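To make the pedestrian reexport concrete, here is a minimal sketch; the module names myiface and mylib and the predicate foo/1 are hypothetical:
:- module(myiface, [foo/1]).  % interface module: exports foo/1 itself
:- use_module(mylib, []).     % load mylib without importing anything
foo(X) :- mylib:foo(X).       % dummy definition forwarding to mylib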
1: To be precise, SICStus's adaptation of Quintus modules only included : for module-sensitive arguments in meta_predicate declarations. The integers 0..9, which are so important for higher-order programming based on call/N, were only introduced about 20 years later, in 4.2.0, released 2011-03-08.
sub make-regex {
my $what's-in-the-box = rand > .5 ?? 'x' !! 'y';
/$what's-in-the-box/
}
my $lol-who-knows = make-regex;
$lol-who-knows.gist.say;
How do you see the innards of the regex (i.e. x or y)? Forcing a match is not a solution.
How do you see the innards of the regex (i.e. x or y)?
You:
Statically compile the regex. Doing so will involve use of a raku compiler, i.e. Rakudo.
Dynamically evaluate the regex so that the $what's-in-the-box variable in your regex gets interpolated and turns into x or y. Doing so will involve running the regex as code. That in turn means both using Rakudo and running the regex with a Match object invocant (or sub-class instance or, conceivably, a mock object equivalent).
View the resulting regex. Doing so will involve using compiler (Rakudo) toolchain specific introspection or debug functionality.
Forcing a match is not a solution.
You can run a regex with no input if your concern is just to avoid the need for a successful match:
say Match.new.&$lol-who-knows; # #<failed match>
But it must be run, otherwise the $what's-in-the-box variable won't turn into an x or y. You might think you could cheat on this by writing something that mimics this part of raku regex construction/use but there's good reason to think that's not going to work out[1].
And then you must view its innards, using Rakudo toolchain features, after you've started running it and before it finishes running.
A regex is a method
In raku, a Regex is code, a sub-class of Method.
Until you run it, by using it in a match, that code is just /$what's-in-the-box/ (aka regex { $what's-in-the-box }). It's the equivalent of something like:
method {
    # self should be a Match or a sub-class of it
    # if self is not an instance, create a new one
    # do matching -- the compiler evaluates $what's-in-the-box during this
    # return updated self / new instance
}
(To see a bit more detail, see Moritz's answer to the SO Can I change the slang inside a method?.)
A routine is a closure
You create your regex inside the closure named make-regex. The compiler spots that you've used $what's-in-the-box and therefore hangs on to the variable even after the make-regex closure returns. Later on, if/when your regex is run, the $what's-in-the-box variable will be replaced by its value at the moment the regex is run.
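A small demonstration of my own (not from the answer): since interpolation happens when the regex runs, mutating the closed-over variable changes what an already-constructed regex matches:
my $needle = 'x';
my $rx = rx/$needle/;
say 'x' ~~ $rx;   # 「x」 -- $needle is read at match time
$needle = 'y';
say 'y' ~~ $rx;   # 「y」 -- the same Regex object now matches y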
Footnotes
[1] There are plenty of huge complications you'll encounter if you try to do an end-run around using the compiler. Even something simple like interpolation is non-trivial. To quote Jonathan Worthington, in 2020:
I thought oh I'm only building [a tool that's] not the real compiler. I can ... have a simpler model of the symbol resolution. In the end it turned out that no I couldn't really. That started to create us some problems. When we aligned the way that we resolved symbols with the same algorithms and lookup structures that was being used in the compiler then suddenly it all became a lot simpler.
I’m trying to figure out what the differences are between the above-mentioned routines, and if statements like
say $y.Bool;
say $y.so;
say ? $y;
say so $y;
would ever produce a different result.
So far the only difference that is apparent to me is that ? has a higher precedence than so. .Bool and .so seem to be completely synonymous. Is that correct and (practically speaking) the full story?
What I've done to answer your question is to spelunk the Rakudo compiler source code.
As you note, one aspect that differs between the prefixes is parsing. The variants have different precedences, and so is alphabetic whereas ? is punctuation. To see the precise code controlling this parsing, view Rakudo's Grammar.nqp and search within that page for prefix:sym<...> where the ... is ?, so, etc. It looks like ternary (... ?? ... !! ...) turns into an if. I see that none of these tokens have correspondingly named Actions.pm6 methods. As a somewhat wild guess, perhaps the code generation that corresponds to them is handled by this part of method EXPR. (Anyone know, or care to follow the instructions in this blog post to find out?)
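A tiny demonstration of the precedence difference (my example, not from the compiler source):
say so 2 == 2;   # True  -- parsed as so (2 == 2)
say ? 2 == 2;    # False -- parsed as (?2) == 2, i.e. True == 2, i.e. 1 == 2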
The definitions in Bool.pm6 and Mu.pm6 show that:
In Mu.pm6 the method .Bool returns False for an undefined object and .defined otherwise. In turn, .defined returns False for an undefined object and True otherwise. So these are the defaults; see the demonstration after this list.
.defined is documented as overridden in two built-in classes and .Bool in 19.
so, .so, and ? all call the same code that defers to Bool / .Bool. In theory classes/modules could override these instead of, or as well as, overriding .Bool or .defined, but I can't see why anyone would ever do that, either in the built-in classes/modules or in userland ones.
not and ! are the same (except that use of ! with :exists dies) and both turn into calls to nqp::hllbool(nqp::not_i(nqp::istrue(...))). I presume the primary reason they don't go through the usual .Bool route is to avoid marking Failures as handled.
There are .so and .not methods defined in Mu.pm6. They just call .Bool.
There are boolean bitwise operators that include a ?. They are far adrift from your question but their code is included in the links above.
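A few queries of my own that exercise the defaults and overrides described above:
my Str $s;     # a type object, i.e. undefined
say $s.Bool;   # False -- Mu's default .Bool returns .defined
say so 'hi';   # True  -- defined, and non-empty under Str's override
say ''.so;     # False -- Str's override: the empty string is falsy
say ?0;        # False -- Int's override: zero is falsy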
In other words, should it be 0 or : or something else? The Prolog systems SICStus, YAP, and SWI all indicate this as :. Is this appropriate? Shouldn't it rather be a 0, which means a term that can be called by call/1?
To check your system, type:
| ?- predicate_property(predicate_property(_,_),P).
P = (meta_predicate predicate_property(:,?)) ? ;
P = built_in ? ;
P = jitted ? ;
no
I should add that meta-arguments, at least in the form used here, cannot guarantee the same algebraic properties we expect from pure relations:
?- S=user:false, predicate_property(S,built_in).
S = user:false.
?- predicate_property(S,built_in), S=user:false.
false.
Here is the relevant part from ISO/IEC 13211-2:
7.2.2 predicate_property/2
7.2.2.1 Description
predicate_property(Prototype, Property) is true in the calling context of a module M iff the procedure associated with the argument Prototype has predicate property Property.
...
7.2.2.2 Template and modes
predicate_property(+prototype, ?predicate_property)
7.2.2.3 Errors
a) Prototype is a variable — instantiation_error.
...
c) Prototype is neither a variable nor a callable term —
type_error(callable, Prototype).
...
7.2.2.4 Examples
Goals attempted in the context of the module bar.
predicate_property(q(X), exported).
succeeds, X is not instantiated.
...
That's an interesting question. First, I think there are two kinds of predicate_property/2 predicates. The first kind takes a callable and is intended to work smoothly with, for example, vanilla interpreters and built-ins such as write/1, nl/0, etc., i.e.:
solve((A,B)) :- !, solve(A), solve(B).
solve(A) :- predicate_property(A, built_in), !, A.
solve(A) :- clause(A,B), solve(B).
For the first kind, I guess the 0 meta-argument specifier would work fine. The second kind of predicate_property/2 predicates works with predicate indicators. Callables and predicate indicators are both notions already defined in the ISO core standard.
A predicate indicator has the form F/N, where F is an atom and N is an integer. Matters get a little more complicated if modules are present, especially because of the operator precedence of (:)/2 versus (/)/2. If predicate_property/2 works with predicate indicators, we can still code the vanilla interpreter:
solve((A,B)) :- !, solve(A), solve(B).
solve(A) :- functor(A,F,N), predicate_property(F/N, built_in), !, A.
solve(A) :- clause(A,B), solve(B).
Here we lose the connection between a possible meta-argument 0, for example on solve/1, and predicate_property/2, because functor/3 usually has no meta-predicate declaration. Transferring module information via functor/3 to predicate_property/2 is also impossible, since functor/3 is agnostic to modules: it usually has no realization that can deal with arguments carrying a module qualification.
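The module-agnosticism of functor/3 is easy to observe (my illustration): a qualification is treated as an ordinary compound term with principal functor (:)/2:
?- functor(user:foo(a,b), F, N).
F = (:), N = 2.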
There are now two issues:
1) Can we, and should we, give typing to predicates such as functor/3?
2) Can we extend functor/3 so that it can convey module qualification?
Here are my thoughts:
1) This would need a more elaborate type system, one that allows overloading predicates with multiple types. For example, functor/3 could have two types:
:- meta_predicate functor(?,?,?).
:- meta_predicate functor(0,?,?).
The real power of overloading multiple types would only shine in predicates such as (=)/2. Here we would have:
:- meta_predicate =(?,?).
:- meta_predicate =(0,0).
Thus allowing for more type inference: if one side of (=)/2 is a goal, we can deduce that the other side is also a goal.
But matters are not so simple: it would possibly also make sense to have a form of type cast, or some other mechanism to restrict the overloading, something that is not covered by merely introducing a meta_predicate directive. This would require further constructs inside the terms and goals.
Learning from lambda Prolog or some dependent type system could be advantageous. For example, (=)/2 can be viewed as parametrized by a type A, i.e.:
:- meta_predicate =(A,A).
2) For Jekejeke Prolog I have provided an alternative functor/3 realization, the predicate sys_modfunc_site/2. It works bidirectionally like functor/3, but returns and accepts the predicate indicator as one whole thing. Here are some example runs:
?- sys_modfunc_site(a:b(x,y), X).
X = a:b/2
?- sys_modfunc_site(X, a:b/2).
X = a:b(_A,_B)
The result of the predicate could be called a generalized predicate indicator. It is what SWI-Prolog already understands, for example, in listing/1. So it could have the same meta-argument specification as listing/1, which is currently : in SWI-Prolog. We would thus have the following, and subsequently predicate_property/2 would take the : in its first argument:
:- meta_predicate sys_modfunc_site(?,?).
:- meta_predicate sys_modfunc_site(0,:).
The vanilla interpreter that can also deal with modules then reads as follows. Unfortunately a further predicate is needed, sys_indicator_colon/2, which compresses a qualified predicate indicator into an ordinary predicate indicator, since, for efficiency reasons, our predicate_property/2 does not understand generalized predicate indicators:
solve((A,B)) :- !, solve(A), solve(B).
solve(A) :-
sys_modfunc_site(A,I),
sys_indicator_colon(J,I),
predicate_property(J, built_in), !, A.
solve(A) :- clause(A,B), solve(B).
The above implements a local semantic of the colon (:)/2, compared to the rather far-reaching semantic of the colon (:)/2 as described in the ISO module standard. The far-reaching semantic imputes a module name on all the literals of a query; the local semantic only expects a qualified literal and just applies the module name to that literal.
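As a hypothetical illustration of the difference (p, q, and module m are placeholders): in the far-reaching semantic the module prefix distributes over a whole conjunction, while in the local semantic only the qualified literal itself is affected:
?- m:(p, q).   % far-reaching: both p and q are resolved in module m
?- m:p, q.     % local reading: only p is qualified, q stays in the caller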
Jekejeke only implements the local semantic, with the further provision that the call-site is not changed. So under the hood, sys_modfunc_site/2 and sys_indicator_colon/2 also have to transfer the call-site, so that predicate_property/2 makes the right decision for unqualified predicates, i.e. resolving the predicate name by respecting imports etc.
Finally, a little epilogue:
The call-site transfer of Jekejeke Prolog is a purely runtime thing, and doesn't need any compile-time manipulation, in particular no ad hoc adding of module qualifiers at compile time. As a result, certain algebraic properties are preserved. For example, assume we have the following clause:
?- [user].
foo:bar.
^D
Then the following things work fine, since not only sys_modfunc_site/2 is bidirectional, but also sys_indicator_colon/2:
?- S = foo:bar/0, sys_indicator_colon(R,S), predicate_property(R,static).
S = foo:bar/0,
R = 'foo%bar'/0
?- predicate_property(R,static), sys_indicator_colon(R,S), S = foo:bar/0.
R = 'foo%bar'/0,
S = foo:bar/0
And of course predicate_property/2 works with different input and output modes. But I guess the SWI-Prolog phenomenon has, first, the issue that a bare variable is prefixed with the current module; and since false is not in user but in system, it will not show false. In output mode it will not show predicates which are equal by resolution.
Check out in SWI-Prolog:
?- predicate_property(X, built_in), write(X), nl, fail; true.
portray(_G2778)
ignore(_G2778)
...
?- predicate_property(user:X, built_in), write(X), nl, fail; true.
prolog_load_file(_G71,_G72)
portray(_G71)
...
?- predicate_property(system:X, built_in), write(X), nl, fail; true.
...
false
...
But even if the SWI-Prolog predicate_property/2 predicate allowed bare variables, i.e. output goals, we would see less commutativity in the far-reaching semantic than in the local semantic. In the far-reaching semantic, M:G means interpreting G inside the module M, i.e. respecting the imports of the module M, which might transpose the functor considerably.
The far-reaching semantic is the reason that user:false means system:false. On the other hand, in the local semantic, where M:G means M:G and nothing else, we have the algebraic property more often. In the local semantic, user:false would never mean system:false.
Bye
Assume I have a compilation unit consisting of three functions, A, B, and C. A is invoked once from a function external to the compilation unit (e.g. it's an entry point or callback); B is invoked many times by A (e.g. it's invoked in a tight loop); C is invoked once by each invocation of B (e.g. it's a library function).
The entire path through A (passing through B and C) is performance-critical, though the performance of A itself is non-critical (as most time is spent in B and C).
What is the minimal set of functions which one should annotate with __attribute__ ((hot)) to effect more aggressive optimization of this path? Assume we cannot use -fprofile-generate.
Equivalently: Does __attribute__ ((hot)) mean "optimize the body of this function", "optimize calls to this function", "optimize all descendant calls this function makes", or some combination thereof?
The GCC info page does not clearly address these questions.
Official documentation:
hot
The hot attribute on a function is used to inform the compiler that the function is a hot spot of the compiled program. The function is optimized more aggressively and on many targets it is placed into a special subsection of the text section so all hot functions appear close together, improving locality.
When profile feedback is available, via -fprofile-use, hot functions are automatically detected and this attribute is ignored.
The hot attribute on functions is not implemented in GCC versions earlier than 4.3.
The hot attribute on a label is used to inform the compiler that the path following the label is more likely than paths that are not so annotated. This attribute is used in cases where __builtin_expect cannot be used, for instance with computed goto or asm goto.
The hot attribute on labels is not implemented in GCC versions earlier than 4.8.
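A hedged sketch of the label form (my example, requiring GCC 4.8 or later): with a computed goto, __builtin_expect cannot express which target is likely, so the common label is annotated instead:
/* hypothetical dispatcher; op selects a label via a computed goto */
int step(int op) {
    static void *tbl[] = { &&add, &&halt };
    goto *tbl[op & 1];
add:
    __attribute__((hot));   /* this path is the common case */
    return 1;
halt:
    return 0;
}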
2007:
__attribute__((hot))
Hint that the marked function is "hot" and should be optimized more aggressively and/or placed near other "hot" functions (for cache locality).
Gilad Ben-Yossef:
As their name suggests, these function attributes are used to hint the compiler that the corresponding functions are called often in your code (hot) or seldom called (cold).
The compiler can then order the code in branches, such as if statements, to favour branches that call these hot functions and disfavour branches that call cold functions, under the assumption that the branch that is taken is more likely to call a hot function and less likely to call a cold one.
In addition, the compiler can choose to group together functions marked as hot in a special section in the generated binary, on the premise that since data and instruction caches work based on locality, or the relative distance of related code and data, putting all the often-called functions together will result in better caching of their code for the entire application.
Good candidates for the hot attribute are core functions which are called very often in your code base. Good candidates for the cold attribute are internal error handling functions which are called only in case of errors.
So, according to these sources, __attribute__ ((hot)) means:
optimize calls to this function
optimize the body of this function
place the body of this function in a special "hot" subsection of the text section (to group all hot code in one location)
After source code analysis we can say that the "hot" attribute is checked with lookup_attribute ("hot", DECL_ATTRIBUTES (current_function_decl)), and when it is true, the function's node->frequency is set to NODE_FREQUENCY_HOT (predict.c, compute_function_frequency()).
If the function's frequency is NODE_FREQUENCY_HOT:
If there is no profile information and no likely/unlikely on branches, maybe_hot_frequency_p will return true for the function (== "...frequency FREQ is considered to be hot."). This turns the value of maybe_hot_bb_p into true for all Basic Blocks (BB) in the function ("BB can be CPU intensive and should be optimized for maximal performance.") and maybe_hot_edge_p into true for all edges in the function. In turn, in non -Os modes, these BBs, edges, and loops will be optimized for speed, not for size.
For all outbound call edges from this function, cgraph_maybe_hot_edge_p will return true ("Return true if the call can be hot."). This flag is used in IPA (ipa-inline.c, ipa-cp.c, ipa-inline-analysis.c) and influences inlining and cloning decisions.
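Putting this together, here is a sketch of the question's A/B/C scenario as I read these sources (an interpretation, not an authoritative answer): since the attribute affects both the annotated body and calls to it, annotating B and C covers the critical path, while A can stay unannotated:
/* C: library-like function, called once per invocation of B */
__attribute__((hot)) static int C(int x) { return x * x; }

/* B: invoked many times by A, in a tight loop */
__attribute__((hot)) static int B(int n) {
    int s = 0;
    for (int i = 0; i < n; ++i)
        s += C(i);
    return s;
}

/* A: the entry point; spends little time itself */
int A(int n) { return B(n); }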
Let's say you have a Fortran 90 module containing lots of variables, functions and subroutines. In your USE statement, which convention do you follow:
explicitly declare which variables/functions/subroutines you're using with the , only : syntax, such as USE [module_name], only : variable1, variable2, ...?
Insert a blanket USE [module_name]?
The only clause makes the code a bit more verbose and forces you to repeat yourself; if your module contains lots of variables/functions/subroutines, things begin to look unruly.
Here's an example:
module constants
  implicit none
  real, parameter :: PI=3.14
  real, parameter :: E=2.71828183
  integer, parameter :: answer=42
  real, parameter :: earthRadiusMeters=6.38e6
end module constants

program test
  ! Option #1: blanket "use constants"
  ! use constants
  ! Option #2: specify EACH variable you wish to use.
  use constants, only : PI, E, answer, earthRadiusMeters
  implicit none

  write(6,*) "Hello world. Here are some constants:"
  write(6,*) PI, &
             E, &
             answer, &
             earthRadiusMeters
end program test
Update
Hopefully someone says something like "Fortran? Just recode it in C#!" so I can downvote you.
Update
I like Tim Whitcomb's answer, which compares Fortran's USE modulename with Python's from modulename import *. A topic which has been on Stack Overflow before:
‘import module’ or ‘from module import’
In an answer, Mark Roddy mentioned:
don't use 'from module import *'. For any reasonably large set of code, if you 'import *' you will likely be cementing it into the module, unable to be removed. This is because it is difficult to determine what items used in the code are coming from 'module', making it easy to get to the point where you think you don't use the import any more, but it's extremely difficult to be sure.
What are good rules of thumb for python imports?
dbr's answer contains
don't do from x import * - it makes your code very hard to understand, as you cannot easily see where a method came from (from x import *; from y import *; my_func() - where is my_func defined?)
So, I'm leaning towards a consensus of explicitly stating all the items I'm using in a module via
USE modulename, only : var1, var2, ...
And as Stefano Borini mentions,
[if] you have a module so large that you feel compelled to add ONLY, it means that your module is too big. Split it.
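For instance, splitting the constants module from the question might look like this (a sketch of mine):
! math_constants and physics_constants split the original module by topic
module math_constants
  implicit none
  real, parameter :: PI = 3.14
  real, parameter :: E  = 2.71828183
end module math_constants

module physics_constants
  implicit none
  real, parameter :: earthRadiusMeters = 6.38e6
end module physics_constants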
I used to just do use modulename - then, as my application grew, I found it more and more difficult to find the source of functions (without turning to grep) - some of the other code floating around the office still uses one subroutine per file, which has its own set of problems, but it makes it much easier to use a text editor to move through the code and quickly track down what you need.
After experiencing this, I've become a convert to using use...only whenever possible. I've also started picking up Python, and view it the same way as from modulename import *. There's a lot of great things that modules give you, but I prefer to keep my global namespace tightly controlled.
It's a matter of balance.
If you use only a few things from the module, it makes sense to add ONLY, to clearly specify what you are using.
If you use a lot of stuff from the module, ONLY will be followed by a long list, so it makes less sense. You are basically cherry-picking what you use, but the fact is that you depend on that module as a whole.
However, in the end the best philosophy is this one: if you are concerned about namespace pollution, and you have a module so large that you feel compelled to add ONLY, it means that your module is too big. Split it.
Update: Fortran? just recode it in python ;)
Not exactly answering the question here, just throwing in another solution that I have found useful in some circumstances, if for whatever reason you don't want to split your module and start to get namespace clashes. You can use derived types to store several namespaces in one module.
If there is some logical grouping of the variables, you can create your own derived type for each group, store an instance of this type in the module and then you can import just the group that you happen to need.
Small example: We have a lot of data, some of which is user input and some of which is the result of miscellaneous initializations.
module basicdata
  implicit none
  ! First the data types...
  type input_data
    integer :: a, b
  end type input_data
  type init_data
    integer :: b, c
  end type init_data
  ! ... then declare the data
  type(input_data) :: input
  type(init_data) :: init
end module basicdata
Now if a subroutine only uses data from init, you import just that:
subroutine doesstuff
  use basicdata, only : init
  ...
  q = init%b
end subroutine doesstuff
This is definitely not a universally applicable solution; you get some extra verbosity from the derived type syntax, and it will of course barely help if your module is not of the basicdata sort above, but instead more of an allthestuffivebeenmeaningtosortout variety. Anyway, I have had some luck getting code to fit into the brain more easily this way.
The main advantage of USE, ONLY for me is that it avoids polluting my global namespace with stuff I don't need.
Agreed with most answers previously given, use ..., only: ... is the way to go, use types when it makes sense, apply python thinking as much as possible. Another suggestion is to use appropriate naming conventions in your imported module, along with private / public statements.
For instance, the netcdf library uses nf90_<some name>, which limits the namespace pollution on the importer side.
use netcdf ! imported names are prefixed with "nf90_"
nf90_open(...)
nf90_create(...)
nf90_get_var(...)
nf90_close(...)
Similarly, the ncio wrapper to this library uses nc_<some name> (nc_read, nc_write, ...).
Importantly, with such designs, where use ..., only: ... is made less relevant, you'd better control the namespace of the imported module by setting appropriate private / public attributes in its header, so that a quick look at it is sufficient for readers to assess which level of "pollution" they are facing. This is basically the same as use ..., only: ..., but on the imported module's side (thus to be written only once, not at each import).
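A minimal sketch of that module-side control (the module and procedure names are hypothetical):
module mylib
  implicit none
  private                           ! hide everything by default
  public :: mylib_init, mylib_run   ! export only the prefixed API
contains
  subroutine mylib_init()
  end subroutine mylib_init
  subroutine mylib_run()
  end subroutine mylib_run
end module mylib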
One more thing: as far as object orientation and python are concerned, a difference in my view is that fortran does not really encourage type-bound procedures, in part because they are a relatively recent addition to the standard (e.g. not compatible with a number of tools, and, less rationally, just unusual) and because they break handy behavior such as procedure-free derived type copy (type(mytype) :: t1, t2 and t2 = t1). That means you often have to import the type and all would-be type-bound procedures, instead of just the class. This alone makes fortran code more verbose compared to python, and practical solutions like a prefix naming convention may come in handy.
IMO, the bottom line is: choose your coding style for people who will read it (this includes your later self), as taught by python. The best is the more verbose use ..., only: ... at each import, but in some cases a simple naming convention will do it (if you are disciplined enough...).
Yes, please use use module, only: .... For large code bases with multiple programmers, it makes the code easier to follow by everyone (or just use grep).
Please do not use include; use a smaller module for that instead. include is a textual insertion of source code which is not checked by the compiler at the same level as use module, see: FORTRAN: Difference between INCLUDE and modules. include generally makes it harder for both humans and computers to use the code, which means it should not be used. E.g. from the MPI Forum: "The use of the mpif.h include file is strongly discouraged and may be deprecated in a future version of MPI." (http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node411.htm).
I know I'm a little late to the party, but if you're only after a set of constants and not necessarily computed values, you could do like C and create an include file:
inside a file, e.g. constants.for:
real, parameter :: pi = 3.14
real, parameter :: g = 6.67384e-11
...
program main
  use module1, only : func1, subroutine1, func2
  implicit none
  include 'constants.for'
  ...
end program main
Edited to remove "real(4)" as some think it is bad practice.