Use of Fortran Modules

When is it customarily appropriate to use modules in Fortran? Ordinarily, one can put subroutines in a file and compile that without problems. So why, and under what conditions, is it better to use a module? What advantages do modules have over plain external subroutines?
I have used modules before. But at work, the libraries are basically a long list of files that do not use modules. Hence the question.
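For concreteness, here is a minimal sketch of the main advantage (all names below are invented): a module gives every procedure in it an explicit interface, so the compiler can check each call against the declared arguments.

module stats_mod
  implicit none
contains
  ! Assumed-shape dummy argument: legal to call only through an
  ! explicit interface, which the module provides automatically.
  function mean(x) result(m)
    real, intent(in) :: x(:)
    real :: m
    m = sum(x) / size(x)
  end function mean
end module stats_mod

program demo
  use stats_mod, only: mean
  implicit none
  print *, mean([1.0, 2.0, 3.0, 4.0])
  ! print *, mean(1.0)   ! rejected at compile time: rank mismatch
end program demo

Had mean been an external function in a bare file with no explicit interface, the commented-out call could compile silently and misbehave at run time.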

Related

Getting imported targets through `find_package`?

The CMake manual of Qt 5 uses find_package and says:
Imported targets are created for each Qt module. Imported target names should be preferred instead of using a variable like Qt5<Module>_LIBRARIES in CMake commands such as target_link_libraries.
Is it special for Qt or does find_package generate imported targets for all libraries? The documentation of find_package in CMake 3.0 says:
When the package is found package-specific information is provided through variables and Imported Targets documented by the package itself.
And the manual for cmake-packages says:
The result of using find_package is either a set of IMPORTED targets, or a set of variables corresponding to build-relevant information.
But I have not seen any other FindXXX.cmake script whose documentation says that an imported target is created.
find_package is a two-headed beast these days:
CMake provides direct support for two forms of packages, Config-file Packages
and Find-module Packages
Source
Now, what does that actually mean?
Find-module packages are the ones you are probably most familiar with. They execute a script of CMake code (such as this one) that does a bunch of calls to functions like find_library and find_path to figure out where to locate a library.
The big advantage of this approach is that it is extremely generic. As long as there is something on the filesystem, we can find it. The big downside is that it often provides little more information than the physical location of that something. That is, the result of a find-module operation is typically just a bunch of filesystem paths. This means that modelling stuff like transitive dependencies or multiple build configurations is rather difficult.
This becomes especially painful if the thing you are trying to find has itself been built with CMake. In that case, you already have a bunch of stuff modeled in your build scripts, which you now need to painstakingly reconstruct for the find script, so that it becomes available to downstream projects.
This is where config-file packages shine. Unlike find-modules, the result of running the script is not just a bunch of paths, but it instead creates fully functional CMake targets. To the dependent project it looks like the dependencies have been built as part of that same project.
This allows much more information to be transported in a very convenient way. The obvious downside is that config-file scripts are much more complex than find scripts. Hence you do not want to write them yourself; instead, have CMake generate them for you, or rather have the dependency provide a config file as part of its deployment, which you can then simply load with a find_package call. And that is exactly what Qt5 does.
This also means that if your own project is a library, you should consider generating a config file as part of the build process. It's not the most straightforward feature of CMake, but the results are pretty powerful.
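As a rough sketch of what that can look like for a hypothetical library foo (names and install paths are invented; real projects usually also generate a version file with CMakePackageConfigHelpers):

add_library(foo src/foo.cpp)
target_include_directories(foo PUBLIC
    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
    $<INSTALL_INTERFACE:include>)
install(TARGETS foo EXPORT fooTargets
    ARCHIVE DESTINATION lib
    LIBRARY DESTINATION lib
    RUNTIME DESTINATION bin)
install(DIRECTORY include/ DESTINATION include)
# Installs a script that recreates the target for consumers, so they can
# write find_package(foo) followed by target_link_libraries(bar foo::foo).
install(EXPORT fooTargets
    FILE fooConfig.cmake
    NAMESPACE foo::
    DESTINATION lib/cmake/foo)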
Here is a quick comparison of how the two approaches typically look in CMake code:
Find-module style
find_package(foo)
target_link_libraries(bar ${FOO_LIBRARIES})
target_include_directories(bar PRIVATE ${FOO_INCLUDE_DIR})
# [...] potentially lots of other stuff that has to be set manually
Config-file style
find_package(foo)
target_link_libraries(bar foo)
# magic!
tl;dr: Always prefer config-file packages if the dependency provides them. If not, use a find-script instead.
Actually, there is no "magic" in the results of find_package: this command just searches for an appropriate FindXXX.cmake script and executes it.
If the Find script sets the XXX_LIBRARY variable, the caller can use that variable.
If the Find script creates imported targets, the caller can use those targets.
If the Find script neither sets the XXX_LIBRARY variable nor creates imported targets ... well, then the script is meant to be used in some other way.
The documentation for find_package describes the usual usage of Find scripts, but in any case you need to consult the documentation for the specific script (it is normally contained in the script itself).
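For illustration, a hypothetical FindFoo.cmake could serve both conventions at once (everything named Foo here is made up):

find_path(Foo_INCLUDE_DIR NAMES foo.h)
find_library(Foo_LIBRARY NAMES foo)

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(Foo DEFAULT_MSG
    Foo_LIBRARY Foo_INCLUDE_DIR)

# Old convention: callers use ${Foo_LIBRARY} and ${Foo_INCLUDE_DIR} directly.
# Newer convention: additionally wrap the results in an imported target.
if(Foo_FOUND AND NOT TARGET Foo::Foo)
    add_library(Foo::Foo UNKNOWN IMPORTED)
    set_target_properties(Foo::Foo PROPERTIES
        IMPORTED_LOCATION "${Foo_LIBRARY}"
        INTERFACE_INCLUDE_DIRECTORIES "${Foo_INCLUDE_DIR}")
endif()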

Are Fortran libraries required if using only modules?

I'm trying to clean up a Fortran make process for distribution. Currently, two libraries are made, and then the executable is compiled linking to the libraries and including the module files. I see from previous answers (Distribute compiled fortran library with module files) that you can't get rid of the module files and that they can be different for every machine and compiler. This is very annoying.
However, the code in my libraries is made up entirely of modules. It seems like I don't need the library part at all; I can just include the modules. I've tried this, and it does compile and run on small examples.
Will this always work (when all I have are modules in the libraries)? Is it best practice? Should I instead consider rewriting my libraries NOT to use modules, so I can avoid all these compiler dependencies and distribute only the lib*.a files? Is that what this document is referring to by using submodules (which, as far as I can tell, no compiler supports) to avoid a static lib with many modules?
It really depends on the features you have in your library. Does it have only a couple of declarations? Then the .mod files would suffice, but why not distribute the source in such a simple case?
Are all your public procedures simple enough that they do not require an explicit interface, and do they live outside of modules? Then you don't need any .mod files.
Do you have a simple public module or an include file with the public API, while the rest is private? You can then distribute the source of the API module or the include file. I would recommend placing just the interface blocks and other declarations in this module.
Be aware of one important problem. You can get away (using interface blocks or similar) with avoiding the non-portable .mod files, but if the procedures use some more advanced argument passing, their ABI is often NOT portable between different compilers, or even between versions of one compiler. You would then be able to compile the code, but get mysterious crashes when calling your library.
Submodules may change some of this, but I do not expect them to solve portability between compilers. The user of your library will still need the same compiler you had. It is true that interfacing with closed-source software will become easier, but it will not be more portable between compilers.
You can link either from a library lib*.a or from object files. Both will be at least platform dependent, and so more difficult to distribute than source code; a library file might have the advantage of fewer files. In either case, linking from lib*.a or from object files, you can present your code to the user as a library of procedures to call. If you don't want to distribute your source code, then you will have to compile for however many platforms you support.
Modules are a major advantage of modern Fortran, automating the checking of procedure actual and dummy arguments. Compared to, for example, C header files, they have the advantage of being automatic, but the disadvantage of producing a compiler-dependent intermediate file. If you are providing procedures to other programmers, it would seem a bad idea not to provide them with this interface checking. If you want to hide your source code, you could write interface blocks describing the procedures and distribute only that source for them to compile.
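A sketch of that interface-block approach (the library, procedure, and argument names here are hypothetical):

! mylib_api.f90: interface-only source, shipped instead of .mod files.
module mylib_api
  implicit none
  interface
    ! Describes an external subroutine implemented inside libmylib.a;
    ! users get compile-time argument checking without seeing the source.
    subroutine solve(n, a, b)
      integer, intent(in)  :: n
      real,    intent(in)  :: a(n)
      real,    intent(out) :: b(n)
    end subroutine solve
  end interface
end module mylib_api

Users compile mylib_api.f90 with their own compiler, write use mylib_api, and link against the shipped lib*.a; only the ABI caveat above still applies.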

Using different variants of a module in routines shared between two programs

In two directories I have two different, independent Fortran 90 programs, and I want them to share certain routines that use some variables defined in modules. In other words, I have a directory dirA with a program prgA.f90 and a couple of routines in an extra file sub.f90; these routines use some stuff from a module in the file modA, and all of them reside in dirA. In another directory, dirB, I have the independent code prgB.f90 that is supposed to use the routines from sub.f90 and hence needs modules that define the stuff they need. For technical reasons, I cannot use the module from modA in dirA directly, but instead write a variant of it, modB, with the same module name, containing the variables of interest under the same names as in modA as well as other variables used only by prgB. Will the routines from sub.f90 work with modA in the executable of prgA and with modB in the executable of prgB?
I have partly tried to adapt my codes to this, and the compiler seems to accept it somehow, but I'm not sure if it will really work and not produce garbage results in spite of compiling seemingly ok.
Basically the question is this: Can I share functions USEing certain modules between different programs if I ensure that the modules have the same name and have a subset of variables in common, or do the modules USEd by the functions have to be exactly the same for both programs?
If I understand correctly then what you are doing is just fine.
When you build progA the compiler encounters, on tackling sub.f90, a statement such as use globals. At that point the compiler will look for a file called globals.mod which it should, by that point, have created by compiling the module source. Of course, that module source need not be in a file called globals.f90 but that's neither here nor there. The module source might, for example, be in a file called globals_for_a.f90.
When you build progB the compiler encounters, on tackling sub.f90, a statement such as use globals. At that point the compiler will look for a file called globals.mod which it should, by that point, have created by compiling the module source. Of course, that module source need not be in a file called globals.f90 but that's neither here nor there. The module source might, for example, be in a file called globals_for_b.f90.
So long as the compilation for each program finds the right source for globals.mod everything should compile as you wish. You've chosen to divide your source files across a number of directories but that's not strictly necessary; a make file with suitably defined targets could build either program or both however you organise the source files and directories.
Note that almost all of this is outside the concern of the Fortran standards, it's more a question of how compilers and compilation work.
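A sketch of the arrangement being described (file and variable names are invented; the shared routine touches only the common subset):

! dirA/globals_for_a.f90
module globals
  implicit none
  real :: x, y                 ! subset shared with the B variant
end module globals

! dirB/globals_for_b.f90 defines the same module name:
!   module globals
!     implicit none
!     real :: x, y             ! same names as in A
!     integer :: b_only_flag   ! extra, used only by prgB
!   end module globals

! sub.f90, compiled once per program against that program's globals.mod:
subroutine shift(dx)
  use globals, only: x
  implicit none
  real, intent(in) :: dx
  x = x + dx
end subroutine shift

Each build compiles its own variant first, e.g. with gfortran:
gfortran -c globals_for_a.f90 && gfortran -o prgA prgA.f90 sub.f90 globals_for_a.o
gfortran -c globals_for_b.f90 && gfortran -o prgB prgB.f90 sub.f90 globals_for_b.o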

Running Fortran on Xcode

I am trying to run sample Fortran code in Xcode 4.3 using a 64-bit compiler, and it will not build correctly. The main problem is that, despite my best efforts, I cannot get the separate .f90 files to interact with each other, so code like
USE ElementModule, ONLY : ElementType
will not work. Does anybody have any advice on getting the separate .f90 files to see each other? I'm aware you have to include specific modules, but my search hasn't given me any straight answers about what those specific modules are.
Normally, when F90 code is compiled, two files are generated: an object file and a mod file. When later files are compiled, it is these mod files that satisfy the USE statements.
If the files are compiled in the wrong order, or there is a circular dependency, you may have to build two or more times. Best to avoid circular dependencies if you can.
The mod files are normally picked up by the same directive that tells the compiler where the include files are.
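For example, with a gfortran-style command line (Main.f90 is a made-up file name; ElementModule comes from the question):

gfortran -c ElementModule.f90        # writes ElementModule.o and elementmodule.mod
gfortran -c Main.f90                 # USE ElementModule is satisfied by the .mod file
gfortran -o prog Main.o ElementModule.o

# If the .mod files live in another directory:
#   -I <dir>   search <dir> for .mod (and include) files
#   -J <dir>   write newly created .mod files to <dir>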

OCaml - testing functions not included in the signature

I'm writing tests for an OCaml module. Some of the functions in the module are not meant to be publicly visible, and so they're not included in the signature (.mli file).
I can't call these functions from my tests, because they're not visible outside of the module. So I'm having a hard time testing them. Is there a good way to get around this? For example, a way to tell ocamlc not to read the signature from the .mli file when it's compiling tests?
Some ideas:
Actually export the test functions, but use ocamldoc's stop comment (**/**) feature to avoid displaying the exports in the documentation.
Put all of your tests entirely in another module. However, this is difficult if you have abstract types because your tests may very well need access to the internal implementation.
Create a submodule Test, where all your tests go. That way it is clear which functions are just for testing. Possibly combine this with the (**/**) feature to also hide the submodule from the documentation; a sketch of this approach follows below.
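A small sketch of that last idea (module and function names are invented):

(* mymodule.ml *)
let internal_helper x = x * 2          (* not part of the public API *)
let double x = internal_helper x

module Test = struct
  (* Re-export internals so the test suite can reach them. *)
  let internal_helper = internal_helper
end

(* mymodule.mli *)
val double : int -> int
(**/**)  (* ocamldoc stop comment: everything below is hidden from the docs *)
module Test : sig
  val internal_helper : int -> int
end

Tests can then call Mymodule.Test.internal_helper, while ordinary clients only see double in the documentation.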
I've heard that people sometimes keep their .mli files separate from their .ml files (in a different directory) so that they can compile with or without them (by telling ocamlc to look in the separate directory or not). I just tried a few experiments with this; I think it can be made to work, but it seems a little error-prone to me. Maybe you could put the tests of the internal functions into the module itself. Exporting the test functions might not violate modularity too badly (though of course it clutters up the module).