F# Record vs Class

I used to think of a Record as a container for (immutable) data, until I came across some enlightening reading.
Given that functions can be seen as values in F#, record fields can hold function values as well. This offers possibilities for state encapsulation.
module RecordFun =
    type CounterRecord = { GetState : unit -> int; Increment : unit -> unit }

    // Constructor
    let makeRecord () =
        let count = ref 0
        { GetState = (fun () -> !count); Increment = (fun () -> incr count) }
module ClassFun =
    // Equivalent
    type CounterClass() =
        let count = ref 0
        member x.GetState() = !count
        member x.Increment() = incr count
// Usage - the calls look the same whether counter is a CounterClass
// instance or a record built with makeRecord()
let counter = ClassFun.CounterClass()
counter.GetState()
counter.Increment()
counter.GetState()
It seems that, apart from inheritance, there's not much you can do with a Class that you couldn't do with a Record and a helper function - and the Record plays better with functional concepts, such as pattern matching, type inference, higher-order functions, generic equality...
Analyzing further, the Record could be seen as an interface implemented by the makeRecord() constructor. This applies a (sort of) separation of concerns, where the logic in the makeRecord function can be changed without risk of breaking the contract, i.e. the record fields.
This separation becomes apparent when replacing the makeRecord function with a module that matches the type’s name (ref Christmas Tree Record).
module RecordFun =
    type CounterRecord = { GetState : unit -> int; Increment : unit -> unit }

    // Module showing allowed operations
    [<CompilationRepresentation(CompilationRepresentationFlags.ModuleSuffix)>]
    module CounterRecord =
        let private count = ref 0
        let create () =
            { GetState = (fun () -> !count); Increment = (fun () -> incr count) }
Questions: Should records be looked upon as simple containers for data, or does state encapsulation make sense? Where should we draw the line - when should we use a Class instead of a Record?
Note the model from the linked post is pure, whereas the code above is not.
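For contrast, a pure counter might look something like the following sketch, where incrementing returns a new record value instead of mutating a ref cell (the names here are illustrative, not taken from the linked post):
type PureCounter = { State : int }

[<CompilationRepresentation(CompilationRepresentationFlags.ModuleSuffix)>]
module PureCounter =
    let create () = { State = 0 }
    let getState counter = counter.State
    // increment produces a new value; the original record is unchanged.
    let increment counter = { counter with State = counter.State + 1 }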

I do not think there is a single universal answer to this question. It is certainly true that records and classes overlap in some of their potential uses and you can choose either of them.
The one difference that is worth keeping in mind is that the compiler automatically generates structural equality and structural comparison for records, which is something you do not get for free for classes. This is why records are an obvious choice for "data types".
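For example, two record values with equal fields compare equal automatically, while two class instances fall back to reference equality (a small sketch, not part of the original answer):
type PointRecord = { X : int; Y : int }

type PointClass(x : int, y : int) =
    member _.X = x
    member _.Y = y

// Structural equality is generated for the record...
let recordsEqual = ({ X = 1; Y = 2 } = { X = 1; Y = 2 })   // true
// ...but the class compares by reference.
let classesEqual = (PointClass(1, 2) = PointClass(1, 2))   // false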
The rules that I tend to follow when choosing between records & classes are:
Use records for data types (to get structural equality for free)
Use classes when I want to provide C#-friendly or .NET-style public API (e.g. with optional parameters). You can do this with records too, but I find classes more straightforward
Use records for types used locally - you often end up using records directly (e.g. creating them), so adding or removing fields means more work at every use site; this is not a problem for records that are used within just a single file.
Use records if I need to create clones using the { ... with ... } syntax. This is particularly nice if you are writing some recursive processing and need to keep state (there is a small sketch of this below).
I don't think everyone would agree with this, and it does not cover every case - but generally speaking, using records for data and local types and classes for the rest seems like a reasonable way of choosing between the two.
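As a quick illustration of the copy-and-update syntax mentioned above (a hedged sketch; the ParserState type is made up for the example):
type ParserState = { Position : int; Depth : int }

// Recursive processing that threads state along by cloning with updates.
let rec descend state =
    if state.Depth >= 3 then state
    else descend { state with Position = state.Position + 1; Depth = state.Depth + 1 }

let final = descend { Position = 0; Depth = 0 }   // { Position = 3; Depth = 3 }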

If you want to achieve data hiding in a record, I feel there are better ways of going about it, like the abstract data type "pattern".
Take a look at this:
type CounterRecord =
    private {
        mutable count : int
    }
    member this.Count = this.count
    member this.Increment() = this.count <- this.count + 1
    static member Make() = { count = 0 }
The record constructor is private, so the only way of constructing an instance is through the static Make member.
The count field is mutable - not something to be proud of, but I'd say fair game for your counter example. It's also not accessible from outside the module where it's defined, due to the private modifier; to access it from outside, you have the read-only Count property.
Like in your example, there's an Increment function on the record that mutates the internal state.
Unlike your example, you can compare CounterRecord instances using auto-generated structural comparisons - as Tomas mentioned, the selling point of records.
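For instance, using the type above (an illustrative sketch):
let a = CounterRecord.Make()
let b = CounterRecord.Make()
let equalAtStart = (a = b)   // true - both hold count = 0
a.Increment()
let equalAfter = (a = b)     // false - structural equality sees the new state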
As for records-as-interfaces, you might see that sometimes in the field, though I think it's more of a JavaScript/Haskell idiom. Unlike those languages, F# has the interface system of .NET, made even stronger when coupled with object expressions. I feel there's not much reason to repurpose records for that.
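To make that concrete, here is a sketch of the same counter done with a .NET interface and an object expression (the ICounter name is mine, not from the answer):
type ICounter =
    abstract GetState : unit -> int
    abstract Increment : unit -> unit

// The object expression implements the interface without declaring a class.
let makeCounter () =
    let count = ref 0
    { new ICounter with
        member _.GetState() = !count
        member _.Increment() = incr count }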

Related

Mixing Private and Public Attributes and Accessors in Raku

# Private attribute example
class C {
    has $!w;                    # private attribute
    multi method w { $!w }      # getter method
    multi method w ( $_ ) {     # setter method
        warn "Don't go changing my w!";   # some side action
        $!w = $_
    }
}
my $c = C.new;
$c.w( 42 );
say $c.w;    # prints 42
$c.w: 43;
say $c.w;    # prints 43
# but not
$c.w = 44;
# Cannot modify an immutable Int (43)
so far, so reasonable, and then
# Public attribute example
class C {
    has $.v is rw;   # public attribute with automatic accessors
}
my $c = C.new;
$c.v = 42;
say $c.v;    # prints 42
# but not
$c.v( 43 );  # or $c.v: 43
# Too many positionals passed; expected 1 argument but got 2
I like the immediacy of the = assignment, but I need the ease of bunging in side actions that multi methods provide. I understand that these are two different worlds, and that they do not mix.
BUT - I do not understand why I can't just go
$c.v( 43 )
to set a public attribute.
I feel that Raku is guiding me not to mix these two modes - some attributes private and some public - and that the pressure is towards the method style (with some sugar from the colon). Is this the intent of Raku's design?
Am I missing something?
is this the intent of Raku's design?
It's fair to say that Raku isn't entirely unopinionated in this area. Your question touches on two themes in Raku's design, which are both worth a little discussion.
Raku has first-class l-values
Raku makes plentiful use of l-values being a first-class thing. When we write:
has $.x is rw;
The method that is generated is:
method x() is rw { $!x }
The is rw here indicates that the method is returning an l-value - that is, something that can be assigned to. Thus when we write:
$obj.x = 42;
This is not syntactic sugar: it really is a method call, and then the assignment operator being applied to the result of it. This works out, because the method call returns the Scalar container of the attribute, which can then be assigned into. One can use binding to split this into two steps, to see it's not a trivial syntactic transform. For example, this:
my $target := $obj.x;
$target = 42;
Would be assigning to the object attribute. This same mechanism is behind numerous other features, including list assignment. For example, this:
($x, $y) = "foo", "bar";
Works by constructing a List containing the containers $x and $y, and then the assignment operator in this case iterates each side pairwise to do the assignment. This means we can use rw object accessors there:
($obj.x, $obj.y) = "foo", "bar";
And it all just naturally works. This is also the mechanism behind assigning to slices of arrays and hashes.
One can also use Proxy in order to create an l-value container where the behavior of reading and writing it are under your control. Thus, you could put the side-actions into STORE. However...
Raku encourages semantic methods over "setters"
When we describe OO, terms like "encapsulation" and "data hiding" often come up. The key idea here is that the state model inside the object - that is, the way it chooses to represent the data it needs in order to implement its behaviors (the methods) - is free to evolve, for example to handle new requirements. The more complex the object, the more liberating this becomes.
However, getters and setters are methods that have an implicit connection with the state. While we might claim we're achieving data hiding because we're calling a method, not accessing state directly, my experience is that we quickly end up at a place where outside code is making sequences of setter calls to achieve an operation - which is a form of the feature envy anti-pattern. And if we're doing that, it's pretty certain we'll end up with logic outside of the object that does a mix of getter and setter operations to achieve an operation. Really, these operations should have been exposed as methods with names that describe what is being achieved. This becomes even more important if we're in a concurrent setting; a well-designed object is often fairly easy to protect at the method boundary.
That said, many uses of class are really record/product types: they exist to simply group together a bunch of data items. It's no accident that the . sigil doesn't just generate an accessor, but also:
Opts the attribute into being set by the default object initialization logic (that is, a class Point { has $.x; has $.y; } can be instantiated as Point.new(x => 1, y => 2)), and also renders that in the .raku dumping method.
Opts the attribute into the default .Capture object, meaning we can use it in destructuring (e.g. sub translated(Point (:$x, :$y)) { ... }).
Which are the things you'd want if you were writing in a more procedural or functional style and using class as a means to define a record type.
The Raku design is not optimized for doing clever things in setters, because that is considered a poor thing to optimize for. It's beyond what's needed for a record type; in some languages we could argue we want to do validation of what's being assigned, but in Raku we can turn to subset types for that. At the same time, if we're really doing an OO design, then we want an API of meaningful behaviors that hides the state model, rather than to be thinking in terms of getters/setters, which tend to lead to a failure to colocate data and behavior, which is much of the point of doing OO anyway.
BUT - I do not understand why I can’t just go $c.v( 43 ) To set a public attribute
Well, that's really up to the architect. But seriously, no, that's simply not the standard way Raku works.
Now, it would be entirely possible to create an Attribute trait in module space, something like is settable, that would create an alternate accessor method accepting a single value to set the value. The problem with doing this in core is that I think there are basically two camps in the world about the return value of such a mutator: would it return the new value, or the old value?
Please contact me if you're interested in implementing such a trait in module space.
I currently suspect you just got confused.1 Before I touch on that, let's start over with what you're not confused about:
I like the immediacy of the = assignment, but I need the ease of bunging in side actions that multi methods provide. ... I do not understand why I can’t just go $c.v( 43 ) To set a public attribute
You can do all of these things. That is to say you use = assignment, and multi methods, and "just go $c.v( 43 )", all at the same time if you want to:
class C {
    has $!v;
    multi method v is rw { $!v }
    multi method v ( :$trace! ) is rw { say 'trace'; $!v }
    multi method v ( $new-value ) { say 'new-value'; $!v = $new-value }
}
my $c = C.new;
$c.v = 41;
say $c.v;            # 41
$c.v(:trace) = 42;   # trace
say $c.v;            # 42
$c.v(43);            # new-value
say $c.v;            # 43
A possible source of confusion1
Behind the scenes, has $.foo is rw generates an attribute and a single method along the lines of:
has $!foo;
method foo () is rw { $!foo }
The above isn't quite right though. Given the behavior we're seeing, the compiler's autogenerated foo method is somehow being declared in such a way that any new method of the same name silently shadows it.2
So if you want one or more custom methods with the same name as an attribute you must manually replicate the automatically generated method if you wish to retain the behavior it would normally be responsible for.
Footnotes
1 See jnthn's answer for a clear, thorough, authoritative accounting of Raku's opinion about private vs public getters/setters and what it does behind the scenes when you declare public getters/setters (i.e. write has $.foo).
2 If an autogenerated accessor method for an attribute was declared only, then Raku would, I presume, throw an exception if a method with the same name was declared. If it were declared multi, then it should not be shadowed if the new method was also declared multi, and should throw an exception if not. So the autogenerated accessor is being declared with neither only nor multi but instead in some way that allows silent shadowing.

Are extensible records useless in Elm 0.19?

Extensible records were one of Elm's most amazing features, but since v0.16 adding and removing fields are no longer supported. And this puts me in an awkward position.
Consider an example. I want to give a name to a random thing t, and extensible records provide me a perfect tool for this:
type alias Named t = { t | name: String }
"Okay," says the compiler. Now I need a constructor, i.e. a function that equips a thing with a specified name:
equip : String -> t -> Named t
equip name thing = { thing | name = name } -- Oops! Type mismatch
Compilation fails, because the { thing | name = ... } syntax assumes thing to be a record with a name field, but the type system can't assure this. In fact, with Named t I've tried to express the opposite: t should be a record type without its own name field, and the function adds this field to the record. Anyway, field addition is necessary to implement the equip function.
So it seems impossible to write equip in a polymorphic manner, but that's probably not such a big deal. After all, any time I'm going to give a name to some concrete thing, I can do it by hand. Much worse, the inverse function extract : Named t -> t (which erases the name of a named thing) requires a field removal mechanism, and thus is not implementable either:
extract : Named t -> t
extract thing = thing -- Error: No implicit upcast
It would be an extremely important function, because I have tons of routines that accept old-fashioned unnamed things, and I need a way to use them with named things. Of course, massive refactoring of those functions is not an acceptable solution.
At last, after this long introduction, let me state my questions:
Does modern Elm provide some substitute for the old deprecated field addition/removal syntax?
If not, is there some built-in function like equip and extract above? For every custom extensible record type, I would like to have a polymorphic analyzer (a function that extracts its base part) and a polymorphic constructor (a function that combines the base part with the additive part and produces the record).
Negative answers for both (1) and (2) would force me to implement Named t in a more traditional way:
type Named t = Named String t
In this case, I can't see the purpose of extensible records. Is there a positive use case, a scenario in which extensible records play a critical role?
Type { t | name : String } means a record that has a name field. It does not extend the t type but, rather, extends the compiler’s knowledge about t itself.
So in fact the type of equip is String -> { t | name : String } -> { t | name : String }.
What is more, as you noticed, Elm no longer supports adding fields to records so even if the type system allowed what you want, you still could not do it. { thing | name = name } syntax only supports updating the records of type { t | name : String }.
Similarly, there is no support for deleting fields from record.
If you really need types from which you can add or remove fields, you can use Dict. The other options are either writing the transformers manually, or creating and using a code generator (this was the recommended solution for JSON decoding boilerplate for a while).
And regarding extensible records, Elm does not really support the "extensible" part much any more - the only remaining part is the { t | name : u } -> u projection, so perhaps they should be called just scoped records. The Elm docs themselves acknowledge that the extensibility is not very useful at the moment.
You could just wrap the t type with a name, but it wouldn't make a big difference compared to the approach with a custom type:
type alias Named t = { val: t, name: String }
equip : String -> t -> Named t
equip name thing = { val = thing, name = name }
extract : Named t -> t
extract thing = thing.val
Is there a positive use case, a scenario in which extensible records play critical role?
Yes, they are useful when your application Model grows too large and you face the question of how to scale out your application. Extensible records let you slice up the model in arbitrary ways, without committing to particular slices long term. If you sliced it up by splitting it into several smaller nested records, you would be committed to that particular arrangement - which might tend to lead to nested TEA and the 'out message' pattern; usually a bad design choice.
Instead, use extensible records to describe slices of the model, and group functions that operate over particular slices into their own modules. If you later need to work across different areas of the model, you can create a new extensible record for that.
It's described by Richard Feldman in his Scaling Elm Apps talk:
https://www.youtube.com/watch?v=DoA4Txr4GUs&ab_channel=ElmEurope
I agree that extensible records can seem a bit useless in Elm, but it is a very good thing they are there to solve the scaling issue in the best way.

Unit testing value objects in isolation from its dependencies

TL;DR
How do you test a value object in isolation from its dependencies without stubbing or injecting them?
In Misko Hevery's blog post To “new” or not to “new”… he advocates the following (quoted from the blog post):
An Injectable class can ask for other Injectables in its constructor. (Sometimes I refer to Injectables as Service Objects, but that term is overloaded.) An Injectable can never ask for a non-Injectable (Newable) in its constructor.
Newables can ask for other Newables in their constructor, but not for Injectables. (Sometimes I refer to Newables as Value Objects, but again, the term is overloaded.)
Now if I have a Quantity value object like this:
class Quantity {
    private $quantity = 0;

    public function __construct($quantity) {
        $intValidator = new Zend_Validate_Int();
        if (!$intValidator->isValid($quantity)) {
            throw new Exception("Quantity must be an integer.");
        }
        $gtValidator = new Zend_Validate_GreaterThan(0);
        if (!$gtValidator->isValid($quantity)) {
            throw new Exception("Quantity must be greater than zero.");
        }
        $this->quantity = $quantity;
    }
}
My Quantity value object depends on at least 2 validators for its proper construction. Normally I would have injected those validators through the constructor, so that I can stub them during testing.
However, according to Misko, a newable shouldn't ask for injectables in its constructor. Frankly, a Quantity object that looks like this:
$quantity = new Quantity(1, $intValidator, $gtValidator);
looks really awkward.
Using a dependency injection framework to create a value object is even more awkward. However, now my dependencies are hard-coded in the Quantity constructor, and I have no way to alter them if the business logic changes.
How do you design the value object properly for testing and adherence to the separation between injectables and newables?
Notes:
This is just a very, very simplified example. My real object may have serious logic in it that may use other dependencies as well.
I used a PHP example just for illustration. Answers in other languages are appreciated.
A Value Object should only contain primitive values (integers, strings, boolean flags, other Value Objects, etc.).
Often, it would be best to let the Value Object itself protect its invariants. In the Quantity example you supply, it could easily do that by checking the incoming value without relying on external dependencies. However, I realize that you write
This is just a very, very simplified example. My real object may have serious logic in it that may use other dependencies as well.
So, while I'm going to outline a solution based on the Quantity example, keep in mind that it looks overly complex because the validation logic is so simple here.
Since you also write
I used a PHP example just for illustration. Answers in other languages are appreciated.
I'm going to answer in F#.
If you have external validation dependencies, but still want to retain Quantity as a Value Object, you'll need to decouple the validation logic from the Value Object.
One way to do that is to define an interface for validation:
type IQuantityValidator =
    abstract Validate : decimal -> unit
In this case, I patterned the Validate method on the OP example, which throws exceptions upon validation failures. This means that if the Validate method doesn't throw an exception, all is good. This is the reason the method returns unit.
(If I hadn't decided to pattern this interface on the OP, I'd have preferred using the Specification pattern instead; if so, I'd instead have declared the Validate method as decimal -> bool.)
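For reference, that Specification-style interface might look like this (my naming, not from the original answer):
type IQuantitySpecification =
    abstract IsSatisfiedBy : decimal -> bool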
The IQuantityValidator interface enables you to introduce a Composite:
type CompositeQuantityValidator(validators : IQuantityValidator list) =
    interface IQuantityValidator with
        member this.Validate value =
            validators
            |> List.iter (fun validator -> validator.Validate value)
This Composite simply iterates through other IQuantityValidator instances and invokes their Validate method. This enables you to compose arbitrarily complex validator graphs.
One leaf validator could be:
open System

type IntegerValidator() =
    interface IQuantityValidator with
        member this.Validate value =
            if value % 1m <> 0m
            then
                raise (
                    ArgumentOutOfRangeException(
                        "value",
                        "Quantity must be an integer."))
Another one could be:
type GreaterThanValidator(boundary) =
    interface IQuantityValidator with
        member this.Validate value =
            if value <= boundary
            then
                raise (
                    ArgumentOutOfRangeException(
                        "value",
                        "Quantity must be greater than zero."))
Notice that the GreaterThanValidator class takes a dependency via its constructor. In this case, boundary is just a decimal, so it's a Primitive Dependency, but it could just as well have been a polymorphic dependency (A.K.A a Service).
You can now compose your own validator from these building blocks:
let myValidator =
    CompositeQuantityValidator([IntegerValidator(); GreaterThanValidator(0m)])
When you invoke myValidator with e.g. 9m or 42m, it returns without errors, but if you invoke it with e.g. 9.8m, 0m or -1m it throws the appropriate exception.
If you want to build something a bit more complicated than a decimal, you can introduce a Factory, and compose the Factory with the appropriate validator.
Since Quantity is very simple here, we can just define it as a type alias on decimal:
type Quantity = decimal
A Factory might look like this:
type QuantityFactory(validator : IQuantityValidator) =
    member this.Create value : Quantity =
        validator.Validate value
        value
You can now compose a QuantityFactory instance with your validator of choice:
let factory = QuantityFactory(myValidator)
which will let you supply decimal values as input, and get (validated) Quantity values as output.
These calls succeed:
let x = factory.Create 9m
let y = factory.Create 42m
while these throw appropriate exceptions:
let a = factory.Create 9.8m
let b = factory.Create 0m
let c = factory.Create (-1m)
Now, all of this is very complex given the simple nature of the example domain, but as the problem domain grows more complex, complex is better than complicated.
Avoid value types with dependencies on non-value types. Also avoid constructors that perform validations and throw exceptions. In your example I'd have a factory type that validates and creates quantities.
Your scenario can also be applied to entities. There are cases where an entity requires some dependency in order to perform some behaviour. As far as I can tell the most popular mechanism to use is double-dispatch.
I'll use C# for my examples.
In your case you could have something like this:
public void Validate(IQuantityValidator validator)
As other answers have noted a value object is typically simple enough to perform its invariant checking in the constructor. An e-mail value object would be a good example as an e-mail has a very specific structure.
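Sketched in F#, for consistency with the answer above, such a self-validating value object could look like this (the type and its checks are illustrative only, not from the answer):
open System

type EmailAddress(value : string) =
    // The invariant check lives in the constructor; no injected dependencies.
    do if String.IsNullOrWhiteSpace value || not (value.Contains "@") then
        raise (ArgumentException("Not a valid e-mail address.", "value"))
    member _.Value = value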
Something a bit more complex could be an OrderLine where we need to determine, totally hypothetical, whether it is, say, taxable:
public bool IsTaxable(ITaxableService service)
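An F# rendition of the same idea, for consistency with the earlier answer (the OrderLine shape and tax rule are made up for illustration):
type ITaxableService =
    abstract IsTaxable : string -> bool

type OrderLine =
    { ProductCode : string; Quantity : int }
    // Double dispatch: the entity asks for the dependency per call
    // rather than holding it as a field.
    member this.IsTaxable (service : ITaxableService) =
        service.IsTaxable this.ProductCode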
In the article you reference, I would assert that the 'newable' relates quite a bit to the 'transient' life cycle that we find in DI containers, as we are interested in specific instances. However, when we need to inject specific values, the transient business does not really help. This is the case for entities, where each is a new instance but has very different state. A repository would hydrate the object, but it could just as well use a factory.
The 'true' dependencies typically have a 'singleton' life-cycle.
So for the 'newable' instances, a factory could be used if you would like to perform validation upon construction: have the factory call the relevant validation method on your value object, using the injected validator dependency, as Mark Seemann has mentioned.
This gives you the freedom to still test in isolation without coupling to a specific implementation in your constructor.
Just a slightly different angle on what has already been answered. Hope it helps :)

Does CF ORM have an Active Record type Update()?

Currently I am working partly with cfwheels and its Active Record ORM (which is great), and partly raw cfml with its Hibernate ORM (which is also great).
Both work well for applicable situations, but the thing I miss most when using CF ORM is the model.update() method that is available in cfwheels, where you can just pass a form struct to the method and it will map the struct elements to the model properties and update the records - really good for updating and maintaining large tables. In CF ORM, it seems the only way to update a record is to set each column individually, then do a save. Is this the case?
Does cf9 ORM have an Active Record type update() (or equivalent) method which can just receive a struct with values to update and update the object without having to specify each one?
For example, instead of current:
member = entityLoadByPK('member',arguments.id);
member.setName(arguments.name);
member.setEmail(arguments.email);
is there a way to do something like this in CF ORM?
member = entityLoadByPK('member',arguments.id);
member.update(arguments);
Many thanks in advance
In my apps I usually create two helper functions for models which handle the task:
/*
 * Get properties as key-value structure
 * #limit Limit output to listed properties
 */
public struct function getMemento(string limit = "") {
    local.props = {};
    for (local.key in variables) {
        if (isSimpleValue(variables[local.key]) AND (arguments.limit EQ "" OR ListFind(arguments.limit, local.key))) {
            local.props[local.key] = variables[local.key];
        }
    }
    return local.props;
}

/*
 * Populate the model with given properties collection
 * #props Properties collection
 */
public void function setMemento(required struct props) {
    for (local.key in arguments.props) {
        variables[local.key] = arguments.props[local.key];
    }
}
For better security of setMemento, it is possible to check the existence of local.key in the variables scope, but this would skip nullable properties.
So you can make myObject.setMemento(dataAsStruct); and then save it.
There's not a method exactly like the one you want, but EntityNew() does take an optional struct as a second argument, which will set the object's properties. Depending on how your code currently works, it may be clunky to use this method, and I don't know whether it has any bearing on whether a create/update is executed when you flush the ORM session.
If your ORM entities inherit from a master CFC, then you could add a method there. Alternatively, you could write one as a function and mix it into your objects.
I'm sure you're aware, but that update() feature can be a source of security problems (known as the mass assignment problem) if used with unsanitized user input (such as the raw FORM scope).

Should private functions modify field variable, or use a return value?

I'm often running into the same train of thought when I'm creating private methods whose purpose is to modify (usually initialize) an existing variable in the scope of the class.
I can't decide which of the following two methods I prefer.
Let's say we have a class Test with a field variable x. Let it be an integer. How do you usually modify / initialize x?
a) Modifying the field directly
private void initX() {
    // Do something to determine x. Here it's very simple.
    x = 60;
}
b) Using a return value
private int initX() {
    // Do something to determine x. Here it's very simple.
    return 60;
}
And in the constructor:
public Test() {
    // a)
    initX();
    // b)
    x = initX();
}
I like that it's clear in b) which variable we are dealing with. But on the other hand, a) seems sufficient most of the time - the function name implies perfectly well what we are doing!
Which one do you prefer and why?
Thanks for your answers, guys! I'll make this a community wiki, as I realize that there is no correct answer to this.
I usually prefer b), only I pick a different name, like computeX() in this case. A few reasons why:
if I declare computeX() as protected, there is a simple way for a subclass to influence how it works, yet x itself can remain a private field;
I like to declare fields final if that's what they are; in this case a) is not an option, since initialization has to happen in the constructor (this is Java-specific, but your examples all look like Java as well).
That said, I don't have a strong preference between the two methods. For instance, if I need to initialize several related fields at once, I will usually pick option a) - though only if I cannot, or for some reason don't want to, initialize directly in the constructor.
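For comparison with the F# material at the top of this page, option b) with a final (immutable) field might look like this sketch (the names are illustrative):
type Test() =
    // A private helper computes the value; the binding itself is immutable,
    // much like a final field in Java.
    let computeX () = 60
    let x = computeX ()
    member _.X = x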
For initialization I prefer constructor initialization if it's possible:
public Test() : x(val) { ... }
or writing the initialization code in the constructor body. The constructor is the best place to initialize all the fields (that is, after all, the purpose of a constructor). I'd use the private initX() approach only if the initialization code for x is too long (just for readability) and call that function from the constructor. private int initX(), in my opinion, has nothing to do with initialization (unless you implement lazy initialization, but in that case it should return int& or const int&); it is an accessor.
I would prefer option b), because you can make it a const function in languages that support it.
With option a), there is a temptation for new, lazy or just time-stressed developers to start adding little extra tasks into the initX method, instead of creating a new one.
Also, in b), you can remove initX() from the class definition, so consumers of the object don't even have to know it's there. For example, in C++.
In the header:
class Test {
private:
    int X;
public:
    Test();
    ...
};
In the CPP file:
static int initX() { return 60; }

Test::Test() {
    X = initX();
}
Removing the init functions from the header file simplifies the class for the people that have to use it.
Neither?
I prefer to initialize in the constructor and only extract out an initialization method if I need a lot of fields initialized and/or need the ability to re-initialize at another point in the lifetime of an instance (without going through a destruct/construct).
More importantly, what does 60 mean?
If it is a meaningful value, make it a const with a meaningful name: NUMBER_OF_XXXXX, MINUTES_PER_HOUR, FIVE_DOZEN_APPLES, SPEED_LIMIT, ... regardless of how and where you subsequently use it (constructor, init method or getter function).
Making it a named constant makes the value reusable in and of itself. And using a const is much more "findable", especially for more ubiquitous values (like 1 or -1), than using the actual value.
Only when you want to tie this const value to a specific class would it make sense to me to create a class const or var, or - if the language does not support those - a getter class function.
Another reason to make it a (virtual) getter function would be if descendant classes need the ability to start with a different initial value.
Edit (in response to comments):
For initializations that involve complex calculations I would also extract out a method to do the calculation. The choice of making that method a procedure that directly modifies the field value (a) or a function that returns the value it should be given (b), would be driven by the question whether or not the calculation would be needed at other times than "just the constructor".
If only needed at initialization in the constructor, I would prefer method (a).
If the calculation needs to be done at other times as well, I would opt for method (b) as it also makes it possible to assign the outcome to some other field or local variable and so can be used by descendants or other users of the class without affecting the inner state of the instance.
Actually, only method a) behaves as expected (judging by the method name). Method b) should be named 'return60' in your example, or 'getXValue' in a more complicated one.
Both options are correct in my opinion. It all depends on what your intention was when a certain design was chosen. If your method has to do initialization only, I would prefer a) because it is simpler. If the x value is also used for something else somewhere in the logic, using option b) might lead to more consistent code.
You should also always write method names clearly and make those names correspond with the actual logic (in this case, method b) has a confusing name).
@Frederik, if you use option b) and you have a LOT of field variables, the constructor will become a quite unwieldy block of code. Sometimes you just can't help but have lots and lots of member variables in a class (example: it's a domain object and its data comes straight from a very wide table in the database). The most pragmatic approach would be to modularize the code as you need to.