ByteBuddy offers two mechanisms for representing a constant of a Class that is primitive:
JavaConstant.Dynamic.ofPrimitiveType(Class)
TypeDescription.ForLoadedType.of(Class)
I am aware that the first one creates a "true" dynamic constant in the constant pool. I am aware that the second is specially recognized by ByteBuddy and ultimately results in some other path to storing some sort of class constant in the constant pool. (For example, you can see in the documentation of FixedValue#value(TypeDescription) that a TypeDescription will end up being transformed into a constant in the constant pool in some unspecified non-ByteBuddy-specific format that (presumably) is not the same as a dynamic constant.)
I am also aware that ByteBuddy supports JVMs back to 1.5 and that only JDKs of version 11 or greater support true dynamic constants. I am using JDK 15 and personally don't need to worry about anything earlier than that.
Given all this: Should I make constants-representing-primitive-classes using JavaConstant.Dynamic.ofPrimitiveType(Class), or should I make them using TypeDescription.ForLoadedType.of(Class)? Is there some advantage I'm missing to one representation or the other?
Dynamic constants are bootstrapped, which causes a minimal runtime overhead. The static constants are therefore likely the better choice, but if using the dynamic ones simplifies your code, there's no danger in doing so.
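For concreteness, a minimal sketch of the two routes (shown in Kotlin; the generated method name `primitiveType` and the `FixedValue`-based setup are illustrative assumptions, not the only way to emit such a constant):

```kotlin
import net.bytebuddy.ByteBuddy
import net.bytebuddy.description.modifier.Visibility
import net.bytebuddy.description.type.TypeDescription
import net.bytebuddy.implementation.FixedValue
import net.bytebuddy.utility.JavaConstant

fun main() {
    // Route 1: a true dynamic (condy) constant; requires class file version 11+.
    val condyConstant = JavaConstant.Dynamic.ofPrimitiveType(Integer.TYPE)

    // Route 2: a TypeDescription that ByteBuddy lowers to some plain constant
    // pool representation.
    val typeDescription = TypeDescription.ForLoadedType.of(Integer.TYPE)

    // Either value can back a generated method that returns int.class.
    ByteBuddy()
        .subclass(Any::class.java)
        .defineMethod("primitiveType", Class::class.java, Visibility.PUBLIC)
        .intercept(FixedValue.value(condyConstant)) // or FixedValue.value(typeDescription)
        .make()
}
```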
I found the new value class feature, and as far as I can tell its purpose is this: a value class adds an attribute to a variable and constrains its usage.
I was wondering: what are some practical usages of value classes?
Well, as stated in the Kotlin documentation on inline classes:
Sometimes it is necessary for business logic to create a wrapper around some type. However, it introduces runtime overhead due to additional heap allocations. Moreover, if the wrapped type is primitive, the performance hit is terrible, because primitive types are usually heavily optimized by the runtime, while their wrappers don't get any special treatment.
To solve such issues, Kotlin introduces a special kind of class called an inline class. Inline classes are a subset of value-based classes. They don't have an identity and can only hold values.
A value class can be helpful when, for example, you want to be clear about what unit a certain value uses: does a function expect me to pass my value in meters per second or kilometers per hour? What about miles per hour? You could add documentation on what unit the function expects, but that still would be error-prone. Value classes force developers to use the correct units.
You can also use value classes to give other developers on your project clear means of performing operations on your data, for example converting from one unit to another.
Value classes are also not assignment-compatible with their underlying type, so they are treated like actual new class declarations: when a function expects a value class wrapping an integer, you still have to pass an instance of your value class - a plain integer won't work. With type aliases, you could still accidentally use the underlying type, and thus introduce expensive errors.
In other words, if you simply want things to be easier to read, you can just use type aliases. If you need things to be strict and safe in some way, you probably want to use value classes instead.
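A minimal sketch of the units example above (all names are made up):

```kotlin
// A value class wrapping a Double: at most call sites the JVM sees a plain
// double, so no wrapper object is allocated.
@JvmInline
value class MetersPerSecond(val value: Double) {
    // Explicit, readable unit conversion.
    fun toKilometersPerHour() = KilometersPerHour(value * 3.6)
}

@JvmInline
value class KilometersPerHour(val value: Double)

// The signature documents and enforces the expected unit.
fun brakingDistance(speed: MetersPerSecond): Double =
    speed.value * speed.value / (2 * 7.5) // assuming 7.5 m/s^2 of deceleration

fun main() {
    val speed = MetersPerSecond(27.8)
    println(brakingDistance(speed))            // OK
    println(speed.toKilometersPerHour().value) // ~100 km/h, converted explicitly
    // brakingDistance(27.8)  // does not compile: a Double is not a MetersPerSecond
}
```

With a type alias instead of the value class, the commented-out call would compile silently, which is exactly the class of error value classes rule out.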
So I'm exploring Rust, and I have read about technical differences between constants and immutable variables. But it seems like immutable variables can do all things that constants can. Then what is the point of existence of constants, if immutable variables can fully substitute them?
There are two computation times that you should take into account:
compilation time
run time
A constant is computed at compilation time (and can be used in other compile-time computations), and hence the run time is faster, as the value does not need to be computed again.
Immutable variables are always computed at run time (usually from external input that is not available at compilation time), and constants cannot be used for such values.
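The same split exists in Kotlin, whose `const val` mirrors Rust's `const` while a plain `val` mirrors an immutable `let` binding; a small sketch for illustration (names made up):

```kotlin
// A `const val` must be computable at compile time and is inlined at its use sites.
const val MAX_RETRIES = 3

fun effectiveRetries(fromConfig: String): Int {
    // An immutable variable: bound once, but its value only exists at run time.
    val configured = fromConfig.toInt()
    return minOf(configured, MAX_RETRIES)
}

// const val FROM_ENV = System.getenv("RETRIES").toInt()
// does not compile: the initializer is not a compile-time constant
```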
Then what is the point of existence of constants, if immutable variables can fully substitute them?
While there are certainly use cases in which constants may be interchangeable with immutable variables, the main distinction between the categories of values is their semantics.
Declaring a constant immediately says a lot about what the value is to the reader: in particular, that the information that comprises the value must all be available at compile-time. This is a property which is enforced by the compiler. This sets up expectations for the reader about what the value is and what can be done with it.
Of course, the initialization of immutable variables is much more flexible. There is no mandate that these values are known at compile time, and the calculations that produce such values may be arbitrarily complex and even evolve over time.
The differences are, perhaps, mainly stylistic (in many but not all use cases), but where readability and maintainability are involved, the distinction is valuable.
What is the difference between Type-pool and creating a class for constants?
Which is better?
My question concerns a large group of constants that need to be accessible to other groups.
Thank you
EDIT - Thank you for the answers; I will improve my question. I need something to store constants that I will use in programs or other classes. Basically, I wanted to know if it is better to use a type-pool or a class with constants (only). I can have more than one class or type-pool.
The documentation mentions this:
Since it is possible to also define data types and constants in the public visibility section of global classes, type groups are obsolete and should no longer be created. Existing type groups can still be used.
A sensibly named interface with the constants you desire is the way to go. An additional benefit is that ABAP OO enforces some more rules.
Agree with @petul's answer, except for one detail: I'd recommend creating one enumeration-like class per logical group of constants, instead of collecting constants in interfaces.
Consider using the new enum language feature for specifying the constant values.
Interfaces can be accidentally "implemented", which doesn't make sense here. Classes can prevent this with final.
Making one class per logical group simplifies finding the constants with IDE features such as Ctrl+Shift+A search in the ABAP Development Tools. Constants that are randomly thrown together into interfaces are hard to find later on.
Classes allow adding enumeration-like helper methods such as converters, existence checks, or listing all values (see the sketch after this list).
Classes also allow adding unit tests, such as ensuring that the constant collection is still in sync with the fixed values of an underlying domain.
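Outside ABAP, the same "enumeration-like class with helpers" pattern can be sketched in Kotlin (all names are hypothetical; the ABAP version would be a final class or an ENUM type):

```kotlin
enum class DocumentStatus(val code: String) {
    OPEN("O"), RELEASED("R"), ARCHIVED("A");

    companion object {
        // Converter helper: code -> enum value, or null when the code is unknown.
        fun fromCode(code: String): DocumentStatus? =
            values().firstOrNull { it.code == code }

        // Existence check against the list of defined values.
        fun isValidCode(code: String): Boolean = fromCode(code) != null
    }
}
```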
I have a very CPU intensive F# program that depends on persistent data-structures - about 40% of the total CPU time is spent in the Map module. So I thought I'd try out the PersistentHashMap in FSharpX collections. (BTW, this is already a big improvement over the previous version of F# in VS2013 where the same program spent 70% of its time in Map. I also notice that running programs with the debugger attached doesn't have the huge penalty it did before - good work guys...) There is also a hot-spot where I'm re-sorting all the time, where instead I should be adding to a Heap, so I thought I'd give that a go as well.
Two issues became immediately apparent:
(1) Swapping out one for the other from an interface perspective proved harder than it seems it should, i.e., making a shim that let me switch from a Map to a PersistentMap, preserving both the needed module-based let-bound functions and the types necessary to use each map. I know that having full HM type-inference (and no type-classes) is orthogonal to LSP-style referential transparency for the most part, but maybe I was missing some way to do this better with a minimal amount of code.
(2) The biggest problem (which I'd like to focus on here) is the reliance of the F# functional data-structs on OO-style dispatched equality and comparison via the IComparable (when 't : comparison), etc., family of interfaces.
Even for OO programs, ISTM that dispatched equality and comparison are a bad idea -- an object "knows" how to perform its own domain-specific tasks, but it doesn't, for the most part, "know" what notion of equality will be needed at various points in the program for various purposes -- so equality/comparison should not be part of the object's interface; when these concepts are needed, they should always be mentioned explicitly. For example, there should never be a .Sort(), only a .SortWith(...). One could argue that even something as basic as structural equality in F# should be explicit: a.StructEq(b) or a ~= b; otherwise you always get object.Equals. But even stipulating that the current design is best for a multi-paradigm language that's a first-class .NET citizen, it seems there should at least be the option of using passed-in comparison and equality functions, and this is not the case.
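To make the .Sort() vs .SortWith(...) point concrete, here is the same split sketched in Kotlin (Order is a made-up type; F#'s 't : comparison constraint has no exact Kotlin equivalent):

```kotlin
data class Order(val id: Int, val priority: Int) // deliberately not Comparable

fun main() {
    val orders = listOf(Order(1, 5), Order(2, 1))

    // orders.sorted() // does not compile: Order has no dispatched comparison

    // Explicit, passed-in comparisons -- two orderings for the same element
    // type in two places, without baking either into the type itself:
    val byPriority = orders.sortedWith(compareBy { it.priority })
    val byId = orders.sortedWith(compareBy { it.id })
    println(byPriority)
    println(byId)
}
```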
This means that: (a) type constraints are enforced even if you don't want them, causing ripples of broken inferred typing (and hundreds of wavy red lines, with it being unclear where the actual "problem" is), and (b) by implementing a notion of equality or comparison that makes one container type happy in one part of your program (and in my case I want to use the same container and item type with two different notions of ordering in two different places), you are likely to silently break other parts of the code that depended on the default/previous implementation (or cause inefficiency, if one subsumes the other).
The only way around this that I could think of is wrapping each item in an adapter object using a new ... with object expression - but I really don't want to create so much garbage just to get the code to work.
So, ISTM that we could have a "pure" version of each persistent data structure that could be loaded if desired (even basics like List, etc.) that does not depend on dispatched equality/comparison/hashing and does not impose type constraints - all such needs would be met via functions passed in at the call site. (Dispatched eq/cmp would be used only for interop with BCL collections that don't accept delegates.) Then we could have an [EqCmpHashThrowNotImplemented] attribute, and I could be sure that there were no default operations happening at all, and I would feel better about the efficiency and predictability of my code. (This also lets one change from a Record to a Class or vice versa without worrying about changes in behavior due to default implementations.) Again, this would be optional, but done with a simple import. (Which does mean that each base core collection type would have to be broken out into its own module, which isn't really a bad idea anyway.)
If I've overlooked a better way to do things or there are some patterns people are using here, I'd be interested.
There are some constants and enumerations in a project, and each one is used by some other classes.
As a design pattern, is it acceptable to create a class for constants and enumerations definition? Or is there a better way to define and use those constants?
It depends on the problem domain. Generally speaking, it is rather standard practice to keep them in a Java enumeration. The question is: how would you like to use those constants? In my experience, constants held in interfaces/enumerations get duplicated and re-created over and over again because developers don't know about the constants that already exist. As a result, there are many files such as Constants.java, BusinessLogic.java, AppConstants.java, etc. This causes a lot of confusion about purpose, and then you don't know whether some constant, let's say APP_MODE, should be used from Constants.java or from AppConstants.java.
One of the solutions is to keep those constants in one (or many?) properties files and inject them using Spring's @Value annotation.
You may group them by using some prefixing, building groups separated by dots.
One of the advantages of property files is that you keep the Java logic for using the properties in one place, while still being able to provide a property file that may vary per application. A lot of flexibility, no redundancy.
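A minimal sketch of the @Value approach (the property names and the AppSettings class are made up), in Kotlin for brevity:

```kotlin
import org.springframework.beans.factory.annotation.Value
import org.springframework.stereotype.Component

// Assumed application.properties, with dot-separated prefixes as groups:
//   app.mode=PRODUCTION
//   app.payment.retry-limit=3

@Component
class AppSettings(
    @Value("\${app.mode}") val mode: String,
    @Value("\${app.payment.retry-limit}") val retryLimit: Int,
)
```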
Another solution is to create one service that provides properties/constants from a database. You can differentiate the values over different environments, but that's another story.
If I were you, I would create a constant container class package by package. Just group the logically coherent parts together; otherwise you will increase coupling and dependency. The most general constants (problem-domain-independent ones) belong in the utility package's constant container class.
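A sketch of one such per-package container (hypothetical names), again in Kotlin:

```kotlin
// One constants holder per logically coherent package.
package com.example.payment

object PaymentConstants {
    const val MAX_RETRIES = 3
    const val DEFAULT_CURRENCY = "EUR"
}
```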