Can't resolve indirect interface - idris

I'm having difficulty with indirect interface resolution. I have a set of backend (custom C++) types (e.g. float64) that I capture in Idris with empty types (e.g. F64), and which correspond to Idris builtin types (e.g. Double). I capture the ability to read and write from the backend with an interface
interface BackendRW cpp idr where
For example, I have the implementation
BackendRW F64 Double where
I use this interface to constrain operations on backend types by properties on the Idris types. For example
negate : (Neg idr, BackendRW cpp idr) => cpp -> cpp
But at the usage site I find that Idris can't resolve ?idr in negate in
x : F64
y : F64
y = -x
There are no other implementations for F64 than that shown above. I get
Can't find an implementation for (Neg ?idr, BackendRW F64 ?idr)
It works if I specify {idr=Double}, but that's not practical. I tried to fix this by stating that each C++ type corresponds to only one Idris type, using a determining parameter
interface BackendRW cpp idr | cpp where
but that didn't fix it.

I can use
interface BackendRW cpp idr | cpp where
if I don't ask for both implementations at once
negate : Neg idr => BackendRW cpp idr => cpp -> cpp
which presumably means the type checker is freer to find one implementation and then the other, rather than both at the same time.

CGAL Update from 4.13 to 5.5

I have updated a project using CGAL 4.13 to CGAL 5.5. It uses the kernel:
typedef CGAL::Exact_predicates_exact_constructions_kernel K;
typedef K::FT dbl;
and some functions do not compile now anymore. For example the one below:
inline void decouple(const dbl& val, dbl& decoupled)
{
    ...
    decoupled = CGAL::Gmpq(val.exact().mpq());
}
../geometricTools.h:476:50: error: 'const ET' {aka 'const class
boost::multiprecision::number<boost::multiprecision::backends::gmp_rational>'}
has no member named 'mpq'
  476 |     decoupled=CGAL::Gmpq(val.exact().mpq());
A second problem is a line where a string ("123/456") is converted to a number:
dbl AlgorithmHdf5::getDbl(int n, int d)
{
    ...
    dbl ret(m_vDbl[ind]); // argument is a std::string
    return ret;
}
AlgorithmHdf5.cpp:71:36: error: no matching function for call to
'CGAL::Lazy_exact_nt<boost::multiprecision::number<boost::multiprecision::backends::gmp_rational> >
::Lazy_exact_nt(__gnu_cxx::__alloc_traits<std::allocator<std::__cxx11::basic_string<char> >,
std::__cxx11::basic_string<char> >::value_type&)'
   71 |   dbl ret(m_vDbl[ind]);
These lines used to work with CGAL 4.13 but do not with CGAL 5.5. I'd appreciate any help on this. Compiler: g++ (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Epeck::FT is a wrapper around some rational type that depends on what is available. If you have GMPXX or LEDA, it may use that. In your case you have GMP and a recent enough Boost, so it uses Boost.Multiprecision. If you disable that with -DCGAL_DO_NOT_USE_BOOST_MP, you may get back to the Gmpq your old code was apparently expecting.
Boost.Multiprecision does not use reference counting, so decoupled = val.exact() should be sufficient for that type. To construct from std::string, it may help to first construct an FT::Exact_type (or CGAL::Exact_rational) and then convert that to FT. You may want to file an issue on GitHub about this direct construction from a string; it looks like something that CGAL could support.

What do curly braces inside a function's argument list mean in C? [duplicate]

This code:
#include <stdio.h>

int main()
{
    void (^a)(void) = ^ void () { printf("test"); };
    a();
}
This compiles without warning with clang -Weverything -pedantic -std=c89 (version clang-800.0.42.1) and prints test.
I could not find any information about standard C having lambdas. Also, GCC has its own syntax for lambdas, and it would be strange for them to do this if a standard solution existed.
This behavior seems to be specific to newer versions of Clang, and is a language extension called "blocks".
The Wikipedia article on C "blocks" also provides information which supports this claim:
Blocks are a non-standard extension added by Apple Inc. to Clang's implementations of the C, C++, and Objective-C programming languages that uses a lambda expression-like syntax to create closures within these languages. Blocks are supported for programs developed for Mac OS X 10.6+ and iOS 4.0+, although third-party runtimes allow use on Mac OS X 10.5 and iOS 2.2+ and non-Apple systems.
Emphasis above is mine. On Clang's language extension page, under the "Block type" section, it gives a brief overview of what the Block type is:
Like function types, the Block type is a pair consisting of a result value type and a list of parameter types very similar to a function type. Blocks are intended to be used much like functions with the key distinction being that in addition to executable code they also contain various variable bindings to automatic (stack) or managed (heap) memory.
GCC also has something similar to blocks, called lexically scoped nested functions. However, there are some key differences, also noted in the Wikipedia article on C blocks:
Blocks bear a superficial resemblance to GCC's extension of C to support lexically scoped nested functions. However, GCC's nested functions, unlike blocks, must not be called after the containing scope has exited, as that would result in undefined behavior.
GCC-style nested functions also require dynamic creation of executable thunks when taking the address of the nested function. [...].
Emphasis above is mine.
The C standard does not define lambdas at all, but implementations can add extensions.
GCC also added an extension so that programming languages that support lambdas with static scope can easily translate them to C and compile closures directly.
Here is an example of the GCC extension (nested functions) used to implement a closure.
#include <stdio.h>

int (*mk_counter(int x))(void)
{
    int inside(void) {
        return ++x;
    }
    return inside;
}

int main()
{
    int (*counter)(void) = mk_counter(1);
    int x;

    x = counter();
    x = counter();
    x = counter();
    printf("%d\n", x);

    return 0;
}

How to get a module type from an interface?

I would like to have my own implementation of an existing module but to keep a compatible interface with the existing module. I don't have a module type for the existing module, only an interface. So I can't use include Original_module in my interface. Is there a way to get a module type from an interface?
An example could be with the List module from the stdlib. I create a My_list module with exactly the same signature as List. I could copy list.mli to my_list.mli, but that does not seem very nice.
In some cases, you should use
include module type of struct include M end   (* I call this the "OCaml keyword mantra" *)
rather than
include module type of M
since the latter drops the equalities of data types with their originals defined in M.
The difference can be observed by ocamlc -i xxx.mli:
include module type of struct include Complex end
has the following type definition:
type t = Complex.t = { re : float; im : float; }
which means t is an alias of the original Complex.t.
On the other hand,
include module type of Complex
has
type t = { re : float; im : float; }
Without the relation to Complex.t, it becomes a different type from Complex.t: you cannot mix code that uses the original module with code that uses your extended version without the include hack. This is usually not what you want.
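Applied to the My_list example from the question, a minimal sketch would look like this (two files shown; the .ml simply delegates to List, and you can append your own functions after each include):

(* my_list.mli *)
include module type of struct include List end

(* my_list.ml *)
include List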
You can look at Real World OCaml (RWO): if you want to include the type of a module (like List) in another .mli file:
include (module type of List)

OCaml module types and separate compilation

I am reading through the OCaml lead designer's 1994 paper on modules, types, and separate compilation (kindly pointed out to me by Norman Ramsey in another question). I understand that the paper discusses the origins of OCaml's present module type / signature system. In it, the author proposes an opaque interpretation of type declarations in signatures (to allow separate compilation) together with manifest type declarations (for expressiveness). Attempting to put together some examples of my own to demonstrate the kind of problems the OCaml module signature notation is trying to tackle, I wrote the following code in two files:
In file ordering.ml (or .mli — I've tried both) (file A):
module type ORDERING = sig
  type t
  val isLess : t -> t -> bool
end
and in file useOrdering.ml (file B):
open Ordering
module StringOrdering : ORDERING
let main () =
Printf.printf "%b" StringOrdering.isLess "a" "b"
main ()
The idea was that the compiler would complain (when compiling the second file) that not enough type information is available about module StringOrdering to typecheck the StringOrdering.isLess application (and thus motivate the need for the with type syntax).
However, although file A compiles as expected, file B causes the 3.11.2 ocamlc to complain about a syntax error. I understood that signatures were meant to allow someone to write code based on the module signature, without access to the implementation (the module structure).
I confess that I am not sure about the syntax module A : B, which I encountered in this rather old paper on separate compilation, but it makes me wonder whether such or similar syntax exists (without involving functors) to allow someone to write code based only on the module type, with the actual module structure provided at link time, similar to how one can use *.h and *.c files in C/C++. Without such an ability, it would seem that module types / signatures are basically for sealing / hiding the internals of modules, or for more explicit type checking / annotations, but not for separate / independent compilation.
Actually, looking at the OCaml manual section on modules and separate compilation it seems that my analogy with C compilation units is broken because the OCaml manual defines the OCaml compilation unit to be the A.ml and A.mli duo, whereas in C/C++ the .h files are pasted to the compilation unit of any importing .c file.
The right way to do such a thing is to do the following:
In ordering.mli write:
(* This defines the signature *)
module type ORDERING = sig
  type t
  val isLess : t -> t -> bool
end

(* This defines a module having ORDERING as its signature *)
module StringOrdering : ORDERING
Compile the file: ocamlc -c ordering.mli
In another file, refer to the compiled signature:
open Ordering
let main () =
  Printf.printf "%b" (StringOrdering.isLess "a" "b")
let () = main ()
When you compile the file, you get the expected type error (i.e. string is not compatible with Ordering.StringOrdering.t). If you want to remove the type error, you should add the with type t = string constraint to the declaration of StringOrdering in ordering.mli.
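A minimal sketch of what that constrained declaration looks like (assuming the ORDERING signature above stays unchanged):

(* ordering.mli *)
module type ORDERING = sig
  type t
  val isLess : t -> t -> bool
end

(* exposing the equality t = string lets clients apply isLess to string values *)
module StringOrdering : ORDERING with type t = string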
So, to answer your second question: yes, in bytecode mode the compiler just needs to know about the interfaces you are depending on, and you can choose which implementation to use at link time. By default, that's not true for native-code compilation (because of inter-module optimizations), but you can disable it.
You are probably just confused by the relation between explicit module and signature definitions, and the implicit definition of modules through .ml/.mli files.
Basically, if you have a file a.ml and use it inside some other file, then it is as if you had written
module A = struct
  (* content of file a.ml *)
end
If you also have a.mli, then it is as if you had written
module A : sig
  (* content of file a.mli *)
end = struct
  (* content of file a.ml *)
end
Note that this only defines a module named A, not a module type. A's signature cannot be given a name through this mechanism.
Another file using A can be compiled against a.mli alone, without providing a.ml at all. However, you want to make sure that all type information is made transparent where needed. For example, suppose you are to define a map over integers:
(* intMap.mli *)
type key = int
type 'a map
val empty : 'a map
val add : key -> 'a -> 'a map -> 'a map
val lookup : key -> 'a map -> 'a option
...
Here, key is made transparent, because any client code (of the module IntMap that this signature describes) needs to know what it is to be able to add something to the map. The map type itself, however, can (and should) be kept abstract, because a client shouldn't mess with its implementation details.
The relation to C header files is that those basically only allow transparent types. In OCaml, you have the choice.
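For illustration, here is a minimal sketch of a matching intMap.ml for the entries shown above; the association-list representation is hypothetical, and clients never see it because 'a map is abstract in the interface:

(* intMap.ml *)
type key = int
type 'a map = (key * 'a) list          (* hidden representation *)
let empty = []
let add k v m = (k, v) :: m
let lookup k m = List.assoc_opt k m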
module StringOrdering : ORDERING is a module declaration. You can use this in a signature, to say that the signature contains a module field called StringOrdering and having the signature ORDERING. It doesn't make sense in a module.
You need to define a module somewhere that implements the operations you need. The module definition can be something like
module StringOrderingImplementation = struct
  type t = string
  let isLess x y = x <= y
end
If you want to hide the definition of the type t, you need to make a different module where the definition is abstract. The operation to make a new module out of an old one is called sealing, and is expressed through the : operator.
module StringOrderingAbstract = (StringOrderingImplementation : ORDERING)
Then StringOrderingImplementation.isLess "a" "b" is well-typed, whereas StringOrderingAbstract.isLess "a" "b" cannot be typed since StringOrderingAbstract.t is an abstract type, which is not compatible with string or any other preexisting type. In fact, it's impossible to build a value of type StringOrderingAbstract.t, since the module does not include any constructor.
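If you want sealing without losing the ability to pass strings, you can keep the type equality in the sealed view; a sketch reusing the modules above:

(* sealing, but exposing that t is string *)
module StringOrderingTransparent =
  (StringOrderingImplementation : ORDERING with type t = string)

let ok = StringOrderingTransparent.isLess "a" "b"   (* well-typed *)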
When you have a compilation unit foo.ml, it is a module Foo, and the signature of this module is given by the interface file foo.mli. That is, the files foo.ml and foo.mli are equivalent to the module definition
module Foo = (struct (*…contents of foo.ml…*) end :
sig (*…contents of foo.mli…*) end)
When compiling a module that uses Foo, the compiler only looks at foo.mli (or rather the result of its compilation: foo.cmi), not at foo.ml¹. This is how interfaces and separate compilation fit together. C needs #include <foo.h> because it lacks any form of namespace; in OCaml, Foo.bar automatically refers to a bar defined in the compilation unit foo if there is no other module called Foo in scope.
¹ Actually, the native code compiler looks at the implementation of Foo to perform optimizations (inlining). The type checker never looks at anything but what is in the interface.

Design 2 interactive modules in OCaml

I would like to design 2 modules, A and B, which both have their own functions, for instance A.compare : A.t -> A.t -> bool and B.compare : B.t -> B.t -> bool. The elements of A and B are convertible, so I would also need functions a_of_b : B.t -> A.t and b_of_a : A.t -> B.t. My question is where I should define these functions: inside the structure of A, inside the one of B, or somewhere else?
Could anyone help?
Edit1: just amended some errors based on the first comment
This is a classic design problem. In OOP languages, it is hard to resolve this elegantly because a class encapsulates both a type definition and methods related to that type. Thus, as soon as you have a function such as a_of_b, which regards two types to an equal extent, there is no clear place for it.
OCaml correctly provides distinct language mechanisms for these distinct needs: type definitions are introduced with the keyword type, and related methods are collected together in a module. This gives you greater flexibility in designing your API, but does not solve the problem automatically.
One possibility is to define modules A and B, both with their respective types and compare functions. Then, the question remaining is where to put a_of_b and b_of_a. You could arbitrarily give preference to module A, and define the functions A.to_b and A.of_b. This is what the Standard Library did when it put to_list and of_list in Array. This lacks symmetry; there is no reason not to have put these functions in B instead.
Instead, you could standardize on of_ functions or to_ functions. Let's say you prefer to_. Then you would define the functions A.to_b and B.to_a. The problem now is that modules A and B are mutually dependent, which is only possible if you define them in the same file.
If you will have lots of functions that deal with values of type A.t and B.t, then it may be worth defining a module AB, and putting all these functions in there. If you will only need two, then an extra module is perhaps overkill.
On the other hand, if the total number of functions regarding A's and B's is small, you could create only the module AB, with type a, type b, and all related methods. However, this does not follow the OCaml community's convention of naming a type t within its own module, and it will be harder to apply the Set and Map functors to these types.
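A minimal sketch of the same-file approach (one file, here called ab.ml, which itself becomes the module Ab; the concrete representations int and float are hypothetical, purely to make the example compile):

(* ab.ml -- A and B defined side by side, conversions at the end *)
module A = struct
  type t = int
  let compare (x : t) (y : t) = x <= y
end

module B = struct
  type t = float
  let compare (x : t) (y : t) = x <= y
end

let a_of_b (x : B.t) : A.t = int_of_float x
let b_of_a (x : A.t) : B.t = float_of_int x

Clients then write Ab.A.compare, Ab.B.compare, Ab.a_of_b and Ab.b_of_a, so the conversions do not privilege either module.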
You probably mean A.compare : A.t -> A.t -> bool, because type names are written in lowercase.
You can have a single module AB which contains both the type for A and the type for B.
You can have a single module AB containing both A & B as sub-modules.
You might also use recursive modules & functors.
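For completeness, a sketch of the recursive-module route, where each module carries its own conversion function; the concrete representations (int and float) are again hypothetical:

(* recursive modules need explicit signatures *)
module rec A : sig
  type t = int
  val compare : t -> t -> bool
  val of_b : B.t -> t
end = struct
  type t = int
  let compare x y = x <= y
  let of_b b = int_of_float b
end
and B : sig
  type t = float
  val compare : t -> t -> bool
  val of_a : A.t -> t
end = struct
  type t = float
  let compare x y = x <= y
  let of_a a = float_of_int a
end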