Frege: can I derive "Show" for a recursive type? - frege

I'm trying to implement the classical tree structure in frege, which works nicely as long as I don't use "derive":
data Tree a = Node a (Tree a) (Tree a)
            | Empty
derive Show Tree
gives me
realworld/chapter3/E_Recursive_Types.fr:7: kind error,
type constructor `Tree` has kind *->*, expected was *
Is this not supported or do I have to declare it differently?

Welcome to the world of type kinds!
You must give the full type of the items you want to show. Tree is not a type (kind *), but something that needs a type parameter to become one (kind * -> *).
Try
derive Show (Tree a)
Note that this is shorthand for
derive Show (Show a => Tree a)
which reflects the fact that, to show a tree, you also need to know how to show the values in the tree (at least, the code generated by derive will need this - of course, one could write an instance manually that prints just the shape of the tree and hence would not need it).
Generally, the kind needed in instances for every type class is fixed. The error message tells you that you need kind * for Show.
EDIT: eliminate another possible misconception
Note that this has nothing to do with your type being recursive. Let's take, for example, the definition of optional values:
data Maybe a = Nothing | Just a
This type is not recursive, and yet we still cannot say:
derive Show Maybe -- same kind error as above!!
But, given the following type class:
class ListSource c where    -- things we can make a list from
    toList :: c a -> [a]
we need to say:
instance ListSource Maybe where
    toList (Just x) = [x]
    toList Nothing  = []
(instance and derive are equivalent for the purposes of this discussion; both make instances, the difference being that derive generates the instance functions automatically for certain type classes.)
It is, admittedly, not obvious why it is this way in one case and different in the other. The key is, in every case, the type of the class operation we want to use. For example, in class Show we have:
class Show s where
    show :: s -> String
Now we see that the so-called class type variable s (which stands for any future instantiated type expression) appears on its own to the left of the function arrow. This, of course, indicates that s must be a plain type (kind *), because we pass a value to show, and every value has, by definition, a type of kind *. We can have values of type Int or Maybe Int or Tree String, but no value ever has a type Maybe or Tree.
On the other hand, in the definition of ListSource, the class type variable c is applied to some other type variable a in the type of toList, and a also appears as the list element type. From the latter we can conclude that a has kind * (because list elements are values). We know that the types to the left and to the right of a function arrow must also have kind *, since functions take and return values. Therefore, c a has kind *. Thus, c alone is something that, when applied to a type of kind *, yields a type of kind *. This is written * -> *.
This means, in plain English, that when we want to make an instance for ListSource we need the type constructor of some "container" type that is parameterized with another type. Tree and Maybe would be possible here, but not Int.
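To make the contrast concrete, here is a small sketch (not part of the original answer, and untested) of a ListSource instance for the Tree type from the question - Tree on its own has exactly the kind * -> * that ListSource expects:
instance ListSource Tree where
    -- sketch only: in-order traversal, so toList :: Tree a -> [a]
    toList Empty        = []
    toList (Node x l r) = toList l ++ [x] ++ toList r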

Related

Golang SQL rows.Scan function for all fields of generic type

I want to use the Scan() function from the sql package to execute a select statement that might (or might not) return multiple rows, and return those results from my function.
I'm new to Golang generics and am confused about how to achieve this.
Usually, we would use the Scan function on a *sql.Rows and provide the references to all fields of our expected 'result type' we want to read the rows into, e.g.:
var alb Album
rows.Scan(&alb.ID, &alb.Title, &alb.Artist,
    &alb.Price, &alb.Quantity)
where Album is a struct type with those five fields shown.
Now, for the purpose of not writing a similar function N times for every SQL table I have, I want to use a generic type R instead. R is of generic interface type Result, and I will define this type as one of N different structs:
type Result interface {
    StructA | StructB | StructC
}

func ExecSelect[R Result](conn *sql.DB, cmd Command, template R) []R
How can I now write rows.Scan(...) to apply the Scan operation to all fields of R's concrete type? E.g. I would want to have rows.Scan(&res.Field1, &res.Field2, ...) where res is of type R, and Scan should receive all fields of my current concrete type R. And do I actually need to provide a 'template' argument of R's concrete type, so that at runtime it becomes clear which struct is now relevant?
Please correct me on any mistakes I'm making regarding the generics.
This is a poor use case for generics.
The arguments to the function sql.Rows.Scan are supposed to be the scan destinations, i.e. your struct fields, one for each column in the result set, and within the generic function body you do not have access to the fields of the R type parameter.
Even if you did, the structs in your Result constraint likely have different fields...? So how do you envision writing generic code that works with different fields?
You might accomplish what you want with a package that provides arbitrary struct scanning like sqlx with facilities like StructScan, but that uses reflection under the hood to map the struct fields into sql.Rows.Scan arguments, so you are not getting any benefit at all with generics.
If anything, you are making it worse, because now you have the additional performance overheads of using type parameters.
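For illustration, the sqlx route mentioned above looks roughly like the following sketch (table, column, and tag names here are made up; sqlx's StructScan does the column-to-field mapping via reflection):

package storage

import (
	"github.com/jmoiron/sqlx"
	_ "github.com/lib/pq" // hypothetical choice of driver
)

// Album mirrors the example from the question; the `db` tags tell sqlx
// which result column maps to which struct field.
type Album struct {
	ID       int64   `db:"id"`
	Title    string  `db:"title"`
	Artist   string  `db:"artist"`
	Price    float64 `db:"price"`
	Quantity int     `db:"quantity"`
}

func loadAlbums(db *sqlx.DB) ([]Album, error) {
	rows, err := db.Queryx("SELECT id, title, artist, price, quantity FROM album")
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var albums []Album
	for rows.Next() {
		var a Album
		// StructScan fills the struct fields by reflection instead of
		// explicit &a.Field arguments to Scan.
		if err := rows.StructScan(&a); err != nil {
			return nil, err
		}
		albums = append(albums, a)
	}
	return albums, rows.Err()
}

You would still write one such function (or one query) per table, but the per-field Scan arguments disappear - without involving type parameters at all.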

Kotlin - Dynamic Runtime Deconstructing in Kotlin

Is it possible to create a method in Kotlin 1.5 to support dynamic object creation at runtime and support destructuring like so:
val (A, B, C, D, E) = buildNodes("A,B,C,D,E")
The difference here is that we would need to create the methods component1() .. component5() dynamically at runtime.
https://kotlinlang.org/docs/destructuring-declarations.html
There is one problem with this. componentX() functions are used not only to get the values at runtime. They are also used to infer the types of the destructured variables. You can dynamically choose in buildNodes() what the value of A will be, but you can't decide on its compile-time type dynamically.
So it really depends on what you mean by constructing these items dynamically. You can create items dynamically as long as their types are somehow predictable. For example, if your buildNodes() always returns only strings, then this is fine. In fact, you can return List<String> and it will work for at most 5 items. You can also create an interface and decide on the implementation at runtime. Or even make buildNodes() generic, so it returns different types depending on something. As a last resort, you can return Any and check the type of the destructured items at runtime.
But you can't expect that, depending on the contents of the passed string, the A variable will change its compile-time type.
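As a concrete illustration of the List<String> option (a minimal sketch; the function body and names are hypothetical, not from the question's code):

// buildNodes returns a List<String>, so the standard component1()..component5()
// extensions on List make destructuring work for up to five variables,
// all of compile-time type String.
fun buildNodes(spec: String): List<String> =
    spec.split(",").map { it.trim() }

fun main() {
    val (a, b, c, d, e) = buildNodes("A,B,C,D,E")
    println(listOf(a, b, c, d, e)) // [A, B, C, D, E]
}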

Is it possible to test if two types are of the same unknown inheritance in VB.net?

Is it possible to dynamically identify the closest common hierarchy or inheritance of two or more unknown typed objects? In other words, I'd like to test if, say, Integer and String have a common hierarchy, without knowing the objects I'm testing are going to be an Integer and String due to user selection. I found a C++ question posted that seems similar to my issue here: Check if two types are of the same template
However, I'm not familiar with any VB.net equivalents of the answers posted there, and online translators simply provide an error when I attempt to translate them. So is this even possible in VB.net in the first place?
The closest thing to this that I know of is the .IsAssignableFrom() function, but in my case I don't know what parent class/interface/whatever to test against in the first place. The above function is the only thing even remotely related to this issue that pops up in any search I do.
The context I need this for is the Revit API; I'm trying to see if user-selected elements have a similar hierarchy/inheritance that is not the Object type, and if so to allow an action; otherwise, show a warning dialog box.
EDIT: Due to the nature of the Revit API and the desired effects of my command, the users of my plugin could select anything in the model, and I'm not able to determine which of the MANY common ancestors I could be looking for to compare using IsAssignableFrom. I could test for the (I think universal) common ancestor of the Element type, but I don't want to allow users to run the command if they select a wall and an element tag. I need to find the common ancestors of the user-selected elements and confirm that the closest common ancestor is below the Element type in the hierarchy.
For example, the room tag element in the API has a hierarchy sort of like this:
Object -> Element -> SpatialElementTag -> RoomTag
There may be more intermediate inheritances, but I'm not going to track them down in the API documentation. And then each element may have a slightly different ancestry. IsAssignableFrom would be great if I knew the base ancestry I wanted to test for.
TnTinMn's answer gives me the type of solution I'm looking for.
The Type.BaseType Property returns:
The Type from which the current Type directly inherits, or null if the current Type represents the Object class or an interface.
Using this information, it is possible to define an iterator to enumerate the inheritance tree.
Private Iterator Function SelfAndAncestors(srcType As Type) As IEnumerable(Of Type)
    Do Until srcType Is Nothing
        Yield srcType
        srcType = srcType.BaseType
    Loop
End Function
Now you can use the Enumerable.Intersect method to find all types common to the two ancestry enumerations and return the first one, which is the closest common ancestor.
Dim t1 As Type = GetType(Form)
Dim t2 As Type = GetType(UserControl)
Dim highestCommonAncestor As Type = Enumerable.Intersect(SelfAndAncestors(t1), SelfAndAncestors(t2)).First()
For this case, the highest common ancestor is ContainerControl.
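For the Revit scenario from the question, the final check might look like the following sketch (Autodesk.Revit.DB.Element is the Revit base class; elem1, elem2, and the other variable names are made up), allowing the action only when the closest common ancestor sits strictly below Element:
' elem1 and elem2 are the user-selected Revit elements (hypothetical variables)
Dim common As Type = Enumerable.Intersect(SelfAndAncestors(elem1.GetType()),
                                          SelfAndAncestors(elem2.GetType())).First()
Dim allowAction As Boolean =
    GetType(Autodesk.Revit.DB.Element).IsAssignableFrom(common) AndAlso
    common IsNot GetType(Autodesk.Revit.DB.Element)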

Standard ML: Datatype vs. Structure

I'm reading through Paulson's ML For the Working Programmer and am a bit confused about the distinction between datatypes and structures.
On p. 142, he defines a type for binary trees as follows:
datatype 'a tree = Lf
                 | Br of 'a * 'a tree * 'a tree;
This seems to be a recursive definition where 'a denotes some fixed type. So any time I see 'a, it must refer to the same type throughout.
On p. 148, he discusses a structure for binary trees:
"...we have been following an imaginary ML session in which we typed in the tree functions one at a time. Now we ought to collect the most important of those functions into a structure, called Tree. We really must do so, because one of our functions (size) clashes with a built-in function. One reason for using structures is to prevent such name clashes.
We shall, however, leave the datatype declaration of tree outside of the structure. If it were inside, we should be forced to refer to the constructors by Tree.Lf and Tree.Br, which would make our patterns unreadable. Thus, in the sequel, imagine that we have made the following declarations:
datatype 'a tree = Lf
                 | Br of 'a * 'a tree * 'a tree;

structure Tree =
  struct
  fun size Lf = 0
    | size (Br(v, t1, t2)) = 1 + size t1 + size t2;
  fun depth...
  etc...
  end;
I'm a little confused.
1) What is the relationship between a datatype and a structure?
2) What is the role of "struct" within the structure definition?
3) Later on, Paulson discusses a structure for dictionaries as binary search trees. He does the following:
structure Dict : DICTIONARY =
  struct
  type key = string;
  type 'a t = (key * 'a) tree;
  val empty = Lf;
  <a bunch of functions for dictionaries>
This makes me think struct specifies the different primitive or compound types involved in the definition of a Dict.
That's a really fuzzy definition though. Anyone like to clarify?
Thanks for the help,
bclayman
A structure is a module. Everything between the struct and end keywords forms the body of this module. Similarly, you can view a signature as the description of an abstract module interface. Ascribing a signature to a structure (like the : DICTIONARY syntax does in your example) limits the exports of the module to what is specified in that signature (by default, everything would be accessible). That allows you to hide implementation details of a module.
However, ML modules are much richer than that. They can be arbitrarily nested. There are also functors, which are effectively functions from modules to modules ("parameterised modules", if you want). Altogether, the module language in ML forms a full functional language on its own, with structures as the basic entities, functors over them, and signatures describing the "types" of such modules. This little language is a layer on top of the so-called core language, where ordinary values and types live.
So, to answer your individual questions:
1) There is no specific relationship between the datatype and the structure. The latter simply uses the former.
2) struct-end is simply a keyword pair to delimit the structure body (languages in C tradition would probably use curly braces there).
3) As explained above, a structure is a basic module. It can contain (and export) arbitrary other language entities, including other modules. By grouping definitions together, and potentially hiding some of them through a signature ascription, you can express namespacing and encapsulation (in particular, abstract data types).
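To see point 3 in code, here is a small sketch (not from Paulson's book; the names are invented) of how a signature ascription hides the representation of a type:

signature COUNTER =
  sig
  type t
  val zero : t
  val inc  : t -> t
  val get  : t -> int
  end;

structure Counter :> COUNTER =
  struct
  type t = int   (* hidden from clients by the opaque ascription :> *)
  val zero = 0
  fun inc n = n + 1
  fun get n = n
  end;

Clients can only use zero, inc, and get; the fact that Counter.t is really int is not visible outside the structure, which is exactly the kind of abstract data type described above.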
I should also note that Paulson's book is outdated regarding its description of modules, as it predates the current language version. In particular, it does not describe how to express abstract data types through modules, but instead introduces the obsolete abstype declaration which nobody has been using in almost 20 years. A more extensive and up-to-date introduction to modular programming in ML can be found in Harper's Programming in Standard ML.
In this example, the datatype 'a tree describes a binary tree (https://en.wikipedia.org/wiki/Binary_tree) that is capable of storing values of any single type. The 'a in the definition is a type variable which will later be instantiated to a concrete type wherever tree is used with a particular type. This allows you to define the structure of a tree once and then use it with any element type later on.
The Tree structure is separate from the datatype definition. It is being used to group functions together that operate on the 'a tree datatype. It is being used right now as a way to modularize the code and, as it points out, to prevent namespace clashes.
struct is just a keyword to let the compiler know where your structure definition starts, while the end keyword lets the compiler know where the definition ends.
The dictionary structure is defining a dictionary (a key -> value data structure) that uses a tree as the internal data structure. Once again, the structure is a collection of functions that will be used to create and operate on dictionaries. The types within the dictionary structure compose the type of the internal data structure that makes up the dictionary. The following functions define the public interface that you're exposing to allow clients to work with dictionaries.

Global operations on all class instances

Here is the question: there are many fields instantiated in my CFD code and I want to count them all and write their contents to an external file (for example, to save my calculation progress). My field derived type has only one component:
type scalar_field
    real, dimension(:,:,:), allocatable :: nodes
contains
    ! some procedures
end type
I am trying to create another type with counters and pointers to the node components of all fields. Something like this:
type field_counter
    private
    real, pointer :: scalar_fields(:,:,:,:)
    integer :: number_of_scalars
contains
    procedure set_num_scalar_fields
    procedure set_scalar_field_pointer
    procedure output_scalars
end type
The main idea is to pass an object of this type to the field constructor, where the number_of_scalars attribute is incremented and the slice scalar_fields(i,:,:,:) of the pointer array is associated with the nodes array. After that, I would be able to print the contents of all scalar fields through the scalar_fields pointer array.
But I don't know whether it would work and whether it is the easiest way to perform this task, and I'd have to add the target attribute to each nodes array, which seems a bit overwhelming. Maybe there is some OOP design pattern for this task, or maybe somebody has already solved this kind of problem?
If, as you say, the number of fields is static after the program has initialised, I'd be inclined to create an array of them, like this:
type(scalar_field), dimension(:), allocatable :: all_fields
Once your program starts and figures out how many fields to allocate, then it can go ahead and allocate them
allocate(all_fields(num))
and you can reference individual fields as you would any other array element, like this:
all_fields(1)
I don't see the need for a new user-defined type for the array of scalar fields, nor any pointers or any of that stuff. Mind you, I'm not sure I see the need for any OO at all here; back in the day I'd have just defined all_fields as a rank-4 array and used the last index as the identifier of the field.
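For completeness, a minimal sketch of that array-of-fields approach, including writing every field to a single file as the question asks (the file name and array sizes are made up for illustration):

! store all fields in one allocatable array and loop over it for output
program fields_demo
    implicit none

    type scalar_field
        real, dimension(:,:,:), allocatable :: nodes
    end type

    type(scalar_field), dimension(:), allocatable :: all_fields
    integer :: num_fields, i, unit

    num_fields = 3                       ! known once the program has initialised
    allocate(all_fields(num_fields))
    do i = 1, num_fields
        allocate(all_fields(i)%nodes(4, 4, 4))
        all_fields(i)%nodes = real(i)    ! placeholder data
    end do

    ! dump every field, e.g. to checkpoint the calculation
    open(newunit=unit, file='fields.dat', form='unformatted', access='stream')
    do i = 1, num_fields
        write(unit) all_fields(i)%nodes
    end do
    close(unit)
end program fields_demo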