I see a lot of functions, such as acc_get_num_devices(), that require the device type as input. I used int devtype = acc_get_device_type(), which returns devtype = 2.
(In the documentation:
acc_get_num_devices( devicetype )
Returns the number of devices of the specified type)
What does this 2 mean? Which device types are there? Is the device type an integer?
(It seems absurd to me that I cannot find this information in the documentation.)
Refer to section 2.4 of the specification for a general description of usage of device_type. Also see Appendix A (not formally part of the specification) for recommendations for typical device types you might expect to encounter.
The device_type is used to specify a particular accelerator type, and it is implementation-defined. Therefore the specific types available for selection will be defined by the OpenACC-aware compiler that you are using.
Using the PGI compiler implementation, the choices for device_type should correspond to the choices available for the -ta=... compiler switch.
A typical use of a device_type clause on an OpenACC directive is to (further) condition the behavior of that directive for specific device types. For example, a specific optimization (such as choice of vector length to use) could be conditioned on running on a particular device_type.
Here is a particular (obsolete) example. The device_type is used either to parallelize a particular loop or to run it sequentially, depending on the device type you are actually running on. I say obsolete because I don't think -ta=radeon is a supported configuration for the most recent versions of the PGI OpenACC compilers (after 17.x). You can see other examples of device_type usage in this blog and in this blog.
I believe in C/C++, the datatype is an enum type, whereas in Fortran it may formally be an integer. Naturally in C/C++, an enum may have an underlying integer association. Rather than worrying about the meaning of a specific integer value in a specific implementation, it's probably better to use the enums/defines to refer to these.
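To illustrate why comparing against named constants beats comparing against a magic integer like 2, here is a minimal sketch (in Java rather than C, and the enum names and ordinals below are purely illustrative — the real names come from openacc.h and the actual integer values are implementation-defined, so do not rely on this numbering for any particular compiler):

```java
// Illustrative only: mirrors the *shape* of an implementation's
// acc_device_t enum. The real names live in openacc.h and the
// actual integer values are implementation-defined.
public class DeviceTypeDemo {
    enum AccDeviceType {
        NONE, DEFAULT, HOST, NOT_HOST, NVIDIA  // ordinals 0..4, illustrative
    }

    // Stand-in for acc_get_device_type(): returns a raw integer,
    // as the questioner observed (devtype == 2).
    static int accGetDeviceType() {
        return 2;
    }

    public static void main(String[] args) {
        int raw = accGetDeviceType();
        // Comparing against a named constant documents intent;
        // the bare "2" does not, and its meaning varies per implementation.
        AccDeviceType t = AccDeviceType.values()[raw];
        System.out.println("device type = " + t);  // HOST under this numbering
    }
}
```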
ByteBuddy offers two mechanisms for representing a constant of a Class that is primitive:
JavaConstant.Dynamic.ofPrimitiveType(Class)
TypeDescription.ForLoadedType.of(Class)
I am aware that the first one creates a "true" dynamic constant in the constant pool. I am aware that the second is specially recognized by ByteBuddy and ultimately results in some other path to storing some sort of class constant in the constant pool. (For example, you can see in the documentation of FixedValue#value(TypeDescription) that a TypeDescription will end up being transformed into a constant in the constant pool in some unspecified non-ByteBuddy-specific format that (presumably) is not the same as a dynamic constant.)
I am also aware that ByteBuddy supports JVMs back to 1.5 and that only JDKs of version 11 or greater support true dynamic constants. I am using JDK 15 and personally don't need to worry about anything earlier than that.
Given all this: Should I make constants-representing-primitive-classes using JavaConstant.Dynamic.ofPrimitiveType(Class), or should I make them using TypeDescription.ForLoadedType.of(Class)? Is there some advantage I'm missing to one representation or the other?
Dynamic constants are bootstrapped, which causes a minimal runtime overhead. Static constants are therefore likely the better choice, but if it simplifies your code, there is no danger in using the dynamic ones.
I have started to learn SystemVerilog and I am reading about the new types, such as:
strings
dynamic/associative arrays
queues
I am wondering how these can be implemented in hardware due to their dynamic nature; is it that they are only for testing/simulation purposes so they are never actually instantiated in hardware?
If so, why would you ever use those types of arrays if you had to change to a normal array to run the design on hardware?
Verilog, and now SystemVerilog, contains features that fall into two categories: synthesizable and non-synthesizable. There is no fixed standard that defines which features belong in each category. Ideally, if you can simulate a feature, you can find a way to synthesize it.
Some features wind up in both categories depending on how they are used. For example, a for loop is synthesizable if you can statically determine (i.e., at compile time rather than run time) how many iterations it has. The same is true for a queue or dynamic array: if you can define a maximum size, they can be implemented in hardware.
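The "define a maximum size" point can be sketched in software terms: a queue bounded at some fixed capacity reduces to static storage plus a couple of counters, which is exactly the kind of structure a synthesizer can map onto registers. A minimal sketch (in Java, not SystemVerilog; the bounded-queue syntax mentioned in the comment is from SystemVerilog):

```java
// A queue with a fixed maximum size reduces to static storage plus
// head/count "registers" -- which is why a bounded SystemVerilog queue
// (e.g. `int q[$:7]`) can be synthesized while an unbounded one cannot.
public class BoundedQueue {
    private final int[] buf;   // fixed storage: maps to registers/RAM
    private int head = 0;      // read pointer
    private int count = 0;     // occupancy counter

    public BoundedQueue(int maxSize) {
        buf = new int[maxSize];
    }

    public boolean push(int v) {
        if (count == buf.length) return false;  // full: no dynamic growth
        buf[(head + count) % buf.length] = v;
        count++;
        return true;
    }

    public Integer pop() {
        if (count == 0) return null;
        int v = buf[head];
        head = (head + 1) % buf.length;
        count--;
        return v;
    }

    public static void main(String[] args) {
        BoundedQueue q = new BoundedQueue(2);
        q.push(10);
        q.push(20);
        System.out.println(q.push(30));  // false: capacity is fixed
        System.out.println(q.pop());     // 10
    }
}
```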
Why does Idris require that functions appear in the order of their definitions and mutual recursion declared with mutual?
I would expect Idris to perform a first pass of dependency analysis between functions, and reorder them automatically. I have always believed that Haskell does this. Why is this not possible in Idris?
In the tutorial it says (emphasis mine):
In general, functions and data types must be defined before use, since dependent types allow functions
to appear as part of types, and their reduction behaviour to affect type checking. However, this
restriction can be relaxed by using a mutual block, which allows data types and functions to be
defined simultaneously.
(Agda has this restriction as well, but has now removed the mutual keyword in favour of giving types then definitions.)
To expand on this: when you have dependent types, automatic dependency analysis à la Haskell would be difficult or impossible, because the dependency order at the type level may very well differ from the dependency order at the value level. Haskell doesn't have this problem because values cannot appear in types, so it can simply do the dependency analysis and then typecheck in that order. This is what the Idris tutorial is getting at about the reduction behaviour of values being required for type checking.
I do not know whether the problem is even solvable in general with dependent types (you lose Hindley-Milner, for one thing), but I bet it wouldn't be efficient even if it were.
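The kind of order-independence that Haskell-style dependency analysis buys can be seen in Java too: methods of a class may refer to each other in any textual order, with no forward declaration and no `mutual` block, precisely because a method body's behaviour can never affect the type checking of another signature. A minimal sketch:

```java
// Mutual recursion with no forward declaration: the compiler resolves
// names over the whole class before checking bodies. This is safe here
// because a body's reduction behaviour cannot influence type checking --
// the very property that dependent types give up.
public class MutualDemo {
    static boolean isEven(int n) { return n == 0 || isOdd(n - 1); }
    static boolean isOdd(int n)  { return n != 0 && isEven(n - 1); }

    public static void main(String[] args) {
        System.out.println(isEven(10));  // true
        System.out.println(isOdd(7));    // true
    }
}
```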
Is the mechanism how fields are shadowed/hidden by inheritance and later resolved part of the JVM spec? I know it is part of the Java spec, and can be found in many blog posts and SO questions. However, when I actually look at the JVM spec for field resolution, the words "hiding" or "shadowing" do not appear anywhere in the pdf of the JVM spec.
I ask this because I am writing my own JVM, and discovered that this field shadowing is a property of the bytecode/VM and not just part of the Java compiler or Java-the-language. I want to know the proper, authoritative way this should be implemented at the VM level. Surely a (mis?)feature of the JVM this important needs to be formally documented somewhere?
The term shadowing usually refers to one identifier shadowing another, i.e., a given name could refer to multiple variables, so there has to be a mechanism to disambiguate it. This is mostly a language-level construct, because source code contains many more names than the bytecode does. Local variable names, for instance, don't appear in the bytecode at all except as optional debugging information.
From the bytecode point of view, you already have an explicit reference to a class, name, and descriptor. The only question is whether the field you're describing is actually declared in the class you specified, or whether it was inherited from one of its superclasses.
As you already discovered, field resolution is explained in section 5.4.3.2 of the specification. The terms hiding and shadowing are not used because they apply to source code, not class files.
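Field hiding can be demonstrated directly at the source level: the field reference the compiler emits (class + name + descriptor) is what the VM resolves, so which field you read depends on the static type of the expression, never on the runtime type of the object. A minimal sketch:

```java
// Each class declares its own `x`; the subclass field hides (does not
// override) the superclass field. The compiled code carries an explicit
// getfield A.x or getfield B.x, so which one you read is fixed by the
// static type of the reference, not by the runtime object.
public class HidingDemo {
    static class A { int x = 1; }
    static class B extends A { int x = 2; }  // hides A.x; both fields exist

    public static void main(String[] args) {
        B b = new B();
        A viaA = b;                     // same object, static type A
        System.out.println(viaA.x);     // 1: resolves against A
        System.out.println(b.x);        // 2: resolves against B
        System.out.println(((A) b).x);  // 1: the cast changes the static type
    }
}
```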
I was going to add this as a comment to my previous question about type theory, but I felt it probably deserved its own exposition:
If you have a dynamic typing system and you add a "type" member to each object and verify that this "type" is a specific value before executing a function on the object, how is this different than static typing? (Other than the fact that it is run-time instead of compile-time).
Technically, it actually is the other way round: a "dynamically typed" language is a special case of a statically typed language, namely one with only a single type (in the mathematical sense). That at least is the view point of many in the type systems community.
Edit regarding static vs dynamic checking: only local properties can be checked dynamically, whereas properties that require some kind of global knowledge cannot. Think of properties such as something being unique, something not being aliased, a computation being free of race conditions. A suitable static type system can verify such properties, because it has the ability to establish certain invariants on the context of the expression that is being checked.
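The "dynamic typing is the unityped special case" view can be made concrete: model every runtime value as a single static type carrying a tag, and the run-time "type" checks the question describes become ordinary case analysis on that one type. A minimal sketch (my own illustration, not drawn from Pierce):

```java
// A "dynamically typed" language modelled inside a statically typed one:
// every value inhabits the single static type Dyn, and the "type" member
// from the question is the tag inspected at run time before each operation.
public class UniTypeDemo {
    enum Tag { INT, STR }

    static class Dyn {
        final Tag tag;
        final int i;
        final String s;
        Dyn(int i)    { this.tag = Tag.INT; this.i = i; this.s = null; }
        Dyn(String s) { this.tag = Tag.STR; this.s = s; this.i = 0; }
    }

    // The run-time check the question describes: verify the tag, then act.
    static Dyn plus(Dyn a, Dyn b) {
        if (a.tag != Tag.INT || b.tag != Tag.INT)
            throw new RuntimeException("type error -- caught at run time, not compile time");
        return new Dyn(a.i + b.i);
    }

    public static void main(String[] args) {
        System.out.println(plus(new Dyn(2), new Dyn(3)).i);  // 5
    }
}
```

Statically, everything checks: plus takes Dyns and returns a Dyn. The interesting errors only surface when a tag check fails at run time, which is exactly the distinction the question asks about.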
Static typing happens at compile time, not at run time, and that difference is essential!
See Benjamin Pierce's book Types and Programming Languages for more.