I am currently developing a Fortran DLL and I have a problem with multi-variable functions. My final objectives are to:
- call the DLL functions from VBA
- debug the DLL using a Fortran code that calls the DLL functions
Here is my simplified case:
1. Fortran DLL code
module mod_thermo
    implicit none
contains

    function y1(x1) result(y) bind(c, name = "Y1")
        use iso_c_binding, only : c_double
        !GCC$ attributes dllexport, stdcall :: y1
        real(c_double) :: x1
        real(c_double) :: y
        y = 2.d0 * x1
    end function

    function y2(x1, x2) result(y) bind(c, name = "Y2")
        use iso_c_binding, only : c_double
        !GCC$ attributes dllexport, stdcall :: y2
        real(c_double) :: x1
        real(c_double) :: x2
        real(c_double) :: y
        y = 2.d0 * x1 * x2
    end function

end module
2. Fortran DLL compilation options with GCC
The compiler is GCC. The compilation options are:
- -static (to avoid dependencies on other DLLs)
- -Wl,--kill-at (VBA related)
- -fno-underscoring (VBA related)
The output files are placed in the project folder of the Fortran test code for later debugging:
- dll_thermo.dll
- libdll_thermo.a
- libdll_thermo.def
3. Fortran code for DLL testing (interfaces + program)
The test code is linked against the library libdll_thermo.a.
module mod_thermo
    implicit none
    interface
        function y1(x1) result(y) bind(c, name="Y1")
            use iso_c_binding, only : c_double
            real(c_double) :: x1
            real(c_double) :: y
        end function
        function y2(x1, x2) result(y) bind(c, name="Y2")
            use iso_c_binding, only : c_double
            real(c_double) :: x1
            real(c_double) :: x2
            real(c_double) :: y
        end function
    end interface
end module
program main
    use mod_thermo
    implicit none
    write(*,*) "y1 calls:"
    write(*,*) y1(1.d0)       ! output ok
    write(*,*) y1(2.d0)       ! output ok
    write(*,*) y1(3.d0)       ! output ok
    write(*,*) "y2 calls:"
    write(*,*) y2(1.d0, 1.d0) ! output ok
    write(*,*) y2(2.d0, 2.d0) ! output fails
    write(*,*) y2(3.d0, 2.d0)
end program
4. Output and conclusion
My conclusion is that I am not calling the multi-variable DLL function y2 correctly. How would you perform such calls?
The problem came from linking against "libdll_thermo.a". Following the tutorial from here, I found out that I should have linked against "libdll_thermo.dll" instead, i.e. passed the DLL itself to the linker rather than the .a file. Now it works.
I am still learning Scilab (5.5.2), so I am writing and running test codes to familiarize myself with the software.
To test the numerical differential equation solver, I started simple with the equation dy/dx = A, whose solution is y = Ax + c (a straight line).
This is the code I wrote:
// Function y = A*x+1
function ydot=fn(x, A)
    ydot=A
endfunction
A=2;
//Initial conditions
x0=0;
y0=A*x0+1;
//Numerical Solution
x=[0:5];
y= ode(y0,x0,x,fn);
//Analytical solution
y2 = A*x+1;
clf(); plot(x, y); plot(x, y2, '-k');
//End
And these are the unexpected results:
y  = 1.  2.7182824  7.3890581  20.085545  54.598182  148.41327
y2 = 1.  3.  5.  7.  9.  11.
It appears that y = e^x. Can someone explain what is going wrong, or what I did wrong?
Just renaming the variables does not change how they are used internally by the ODE solver. Since the solver expects a function with the argument order (time, state), it will interpret the provided function that way.
Renaming the variables back, what you programmed is equivalent to
function ydot=fn(t, y)
    ydot = y
endfunction
which indeed has the exponential function as solution.
From the manual you can see that the way to include parameters is to pass the function as a list:
The f argument can also be a list with the following structure: lst=list(realf,u1,u2,...un) where realf is a Scilab function with syntax: ydot = f(t,y,u1,u2,...,un)
function ydot=fn(t, y, A)
    ydot = A
endfunction

y = ode(y0, x0, x, list(fn, A));
I'm trying to learn to plot things with Julia using PyPlot, and I tried to plot a quadratic function. It does not like how I'm squaring x. I tried using x**2 and x*x, and the compiler did not accept those either. What should I be using to square x?
Thanks
Code (line 7):
x1 = linspace(0, 4*pi, 500); y1 = x1^2
Error:
LoadError: MethodError: `*` has no method matching *(::LinSpace{Float64}, ::LinSpace{Float64})
Closest candidates are:
*(::Any, ::Any, !Matched::Any, !Matched::Any...)
*{T}(!Matched::Bidiagonal{T}, ::AbstractArray{T,1})
*(!Matched::Number, ::AbstractArray{T,N})
...
in power_by_squaring at intfuncs.jl:80
in ^ at intfuncs.jl:108
in include_string at loading.jl:282
in include_string at C:\Users\User\.julia\v0.4\CodeTools\src\eval.jl:32
in anonymous at C:\Users\User\.julia\v0.4\Atom\src\eval.jl:84
in withpath at C:\Users\User\.julia\v0.4\Requires\src\require.jl:37
in withpath at C:\Users\User\.julia\v0.4\Atom\src\eval.jl:53
[inlined code] from C:\Users\User\.julia\v0.4\Atom\src\eval.jl:83
in anonymous at task.jl:58
while loading C:\Users\User\Desktop\Comp Sci\Class\plotTest, in expression starting on line 7
To square every element of an array, use the element-wise (broadcast) version x.^2. You are trying to square all of the elements of an array at once, which ^ cannot do, so in the code above write y1 = x1.^2.
I'm having trouble with a discrepancy: a line of code breaks at runtime, but the exact same data and operations work fine in the Python console.
# f_err - currently has value 1.11819388872025
# l_scales - currently a numpy array [1.17840183376334 1.13456764589809]
sq_euc_dists = self.se_term(x1, x2, l_scales) # this is fine. It calls cdists on x1/l_scales, x2/l_scales vectors
return (f_err**2) * np.exp(-0.5 * sq_euc_dists) # <-- errors on this line
The error that I get is
AttributeError: 'Zero' object has no attribute 'exp'
However, calling those exact same lines, with the same f_err, l_scales, and x1, x2 in the console right after it errors out, somehow does not produce errors.
I was not able to find a post referring to the 'Zero' object error specifically, and the non-'Zero' ones I found didn't seem to apply to my case here.
EDIT: The above was a bit lacking in info, so here is an actual (extracted) runnable example with sample data taken straight out of a failed run. When run in isolation it works fine; I can't reproduce the error outside the full run.
Note that the sqeucl_dist function below is quite bad and I should be using scipy's cdist instead. However, because I'm using sympy's symbols for matrix element-wise gradients with over 15 partial derivatives in my real data, cdist is not an option, as it doesn't deal with arbitrary objects.
import numpy as np

def se_term(x1, x2, l):
    return sqeucl_dist(x1/l, x2/l)

def sqeucl_dist(x, xs):
    return np.sum([(i-j)**2 for i in x for j in xs], axis=1).reshape(x.shape[0], xs.shape[0])
x = np.array([[-0.29932052, 0.40997373], [0.40203481, 2.19895326], [-0.37679417, -1.11028267], [-2.53012051, 1.09819485], [0.59390005, 0.9735], [0.78276777, -1.18787904], [-0.9300892, 1.18802775], [0.44852545, -1.57954101], [1.33285028, -0.58594779], [0.7401607, 2.69842268], [-2.04258086, 0.43581565], [0.17353396, -1.34430191], [0.97214259, -1.29342284], [-0.11103534, -0.15112815], [0.41541759, -1.51803154], [-0.59852383, 0.78442389], [2.01323359, -0.85283772], [-0.14074266, -0.63457529], [-0.49504797, -1.06690869], [-0.18028754, -0.70835799], [-1.3794126, 0.20592016], [-0.49685373, -1.46109525], [-1.41276934, -0.66472598], [-1.44173868, 0.42678815], [0.64623684, 1.19927771], [-0.5945761, -0.10417961]])
f_err = 1.11466725760716
l = [1.18388412685279, 1.02290811104357]
result = (f_err**2) * np.exp(-0.5 * se_term(x, x, l)) # This runs fine, but fails with the exact same calls and data during runtime
Any help greatly appreciated!
Here is how to reproduce the error you are seeing:
import sympy
import numpy
zero = sympy.sympify('0')
numpy.exp(zero)
You will see the same exception you are seeing.
You can fix this (inefficiently) by changing your code to the following, which converts everything to floating point.
def sqeucl_dist(x, xs):
    return np.sum([np.vectorize(float)(i-j)**2 for i in x for j in xs],
                  axis=1).reshape(x.shape[0], xs.shape[0])
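As a quick sanity check (a minimal sketch with made-up values, not the asker's data), converting the sympy numbers to plain floats is exactly what lets np.exp work again:
import numpy as np
import sympy

# An object array holding a sympy Zero next to an ordinary number,
# similar to what the symbolic gradient code can produce at runtime.
a = np.array([sympy.sympify('0'), sympy.Float(2.5)], dtype=object)

# np.exp(a) would raise AttributeError: 'Zero' object has no attribute 'exp',
# because numpy asks each object in the array for an .exp() method.
b = np.vectorize(float)(a)   # plain float64 array: [0.  2.5]
print(np.exp(b))             # works: [ 1.         12.18249396]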
It would be better to fix your gradient function using lambdify. Here's an example of how lambdify can be used on partial derivatives:
import sympy
from sympy.abc import x, y, z

expression = x**2 + sympy.sin(y) + z
derivatives = [expression.diff(var, 1) for var in [x, y, z]]
derivatives is now [2*x, cos(y), 1], a list of Sympy expressions. To create a function which will evaluate this numerically at a particular set of values, we use lambdify as follows (passing 'numpy' as an argument like that means to use numpy.cos rather than sympy.cos):
derivative_calc = sympy.lambdify((x, y, z), derivatives, 'numpy')
Now derivative_calc(1, 2, 3) will return [2, -0.41614683654714241, 1]. These are ints and numpy.float64s.
A side note: np.exp(M) calculates the element-wise exponential of each of the elements of M. If you are trying to compute a matrix exponential, you need scipy.linalg.expm.
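For example (a small sketch; it assumes SciPy is available, since that is where the matrix exponential lives):
import numpy as np
from scipy.linalg import expm

M = np.array([[0.0, 1.0],
              [0.0, 0.0]])

print(np.exp(M))  # element-wise: [[1., 2.71828183], [1., 1.]]
print(expm(M))    # matrix exponential: [[1., 1.], [0., 1.]], since M @ M = 0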
I have a problem with my Fortran code when using -O3 optimization: the value calculated for the norm of an array changes with and without -O3, and is incorrect with -O3. The following is a minimal example of my code
program main
    use wavefunction
    implicit none
    integer(I4B) :: Na, Nb, Npes
    complex(DPC), ALLOCATABLE, DIMENSION(:,:,:) :: phi
    real(DP), ALLOCATABLE, DIMENSION(:) :: normPerPes1
    real(DP) :: sigma
    integer(I4B) :: i, j

    Na = 100
    Nb = 100
    Npes = 4
    ALLOCATE(phi(Na,Nb,Npes), normPerPes1(Npes))
    sigma = 10
    phi = (0.D0, 0.D0)

    do i = 1, Na
        do j = 1, Nb
            ! gaussian on pes 1
            phi(i,j,1) = 1.D0/(sigma**2*2.D0*pi) * exp(-(dble(i-50)**2 + dble(j-50)**2)/(2.D0*sigma**2))
        end do
    end do

    ! total norm
    write(*,*) norm(Na, Nb, Npes, phi)
    ! norm on each pes
    CALL normPerPes(Na, Nb, Npes, phi, NormPerPes1)
    write(*,*) NormPerPes1
end program main
which uses the following module
module wavefunction
    use nrtype
    implicit none
contains

    function norm(Na, Nb, Npes, phi)
        implicit none
        INTEGER(I4B), INTENT(IN) :: Na, Nb, Npes
        COMPLEX(DPC), INTENT(IN) :: phi(Na,Nb,Npes)
        REAL(DP) :: norm
        INTEGER(I4B) :: i, j, pesNr
        norm = 0.D0
        do i = 1, Na
            do j = 1, Nb
                do pesNr = 1, Npes
                    norm = norm + abs(phi(i,j,pesNr))**2
                end do
            end do
        end do
    end function norm

    !----------------------------------------------------------
    subroutine normPerPes(Na, Nb, Npes, phi, normPerPes1)
        IMPLICIT none
        REAL(DP) :: normPerPes1(Npes)
        INTEGER(I4B), INTENT(IN) :: Na, Nb, Npes
        COMPLEX(DPC), INTENT(IN) :: phi(Na,Nb,Npes)
        INTEGER(I4B) :: i, j, pesNr
        normPerPes1 = 0.0d0
        do i = 1, Na
            do j = 1, Nb
                do pesNr = 1, Npes
                    normPerPes1(pesNr) = normPerPes1(pesNr) + abs(phi(i,j,pesNr))**2
                end do
            end do
        end do
        return
    end subroutine normPerPes
    !-----------------------------------------------------------

end module wavefunction
if I compile with the following Makefile
# compiler
FC = ifort

# flags
FFLAGS = # -O3

main: main.o nrtype.o wavefunction.o
main.o: main.f90 nrtype.o wavefunction.o
wavefunction.o: wavefunction.f90 nrtype.o
nrtype.o: nrtype.f90

%: %.o
	$(FC) $(FFLAGS) -o dynamic $^ $(LDFLAGS)

%.o: %.f90
	$(FC) $(FFLAGS) -c $<

clean:
	rm -f *.o *.mod *_genmod.f90
I get the following correct output:
7.957747154568253E-004
7.957747154568242E-004 0.000000000000000E+000 0.000000000000000E+000
0.000000000000000E+000
However, if I use -O3, then I obtain the following incorrect result:
7.957747154568253E-004
1.989436788642788E-004 0.000000000000000E+000 0.000000000000000E+000
0.000000000000000E+000
This looks to me as if there were something seriously fishy in my code, but I can't seem to find the problem. Thank you for your help!
As confirmed by Intel (see https://software.intel.com/en-us/forums/topic/516819), this is a problem of the compiler version used (Composer XE 2013 SP1 initial release, pkg. 080).
They claim that upgrading to Update 2 or 3 helps; I have not been able to try that yet.
In the meantime, a workaround is to forgo -O3 and use -O2 optimization (e.g. FFLAGS = -O2 in the Makefile above).
I'm writing a code in Python that calls some subroutines written in Fortran. When the variables are defined in Fortran as:
real*8, intent(in) :: var1,var2
and, respectively, in Python,
var1 = 1.0
var2 = 1.0
everything is fine. But if I define an extended real, that is:
real*16, intent(in) :: var1,var2
and in Python use
import numpy as np
var1 = np.float16(2)
var2 = np.float16(2)
the variables take strange values when passed to the Fortran routine. Can anyone see what I'm doing wrong?
This numpy-discussion thread from last year indicates that numpy's quadruple precision varies from machine to machine. My guess is that your bad data comes from the two languages disagreeing about what quad precision means.
Note also that f2py really only understands <type>(kind=<precision>), where <type> is REAL/INTEGER/COMPLEX and <precision> is an integer 1, 2, 4, 8 (cf. the FAQ).
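A quick way to see the size mismatch (a minimal sketch, assuming only that numpy is installed): numpy's float16 is a 2-byte half-precision type, nothing like a 16-byte real*16, and the width of numpy's extended-precision float is platform dependent:
import numpy as np

print(np.dtype(np.float16).itemsize)     # 2 bytes: half precision, not quad
print(np.dtype(np.float64).itemsize)     # 8 bytes: matches real*8
print(np.dtype(np.longdouble).itemsize)  # platform dependent; often 8, 12 or 16 bytes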