I am probably asking a very stupid question here, so please forgive me.
I am a Java and C# backend engineer with relatively good knowledge of OOP design patterns. I have recently discovered the debate about OOP vs functional programming, and the question I simply can't wrap my head around is this: if there is no state, then how could we update an element based on user input?
Here's a small example showing the problem I am facing (I am aware that JS is not a strictly functional language, but I think it shows my problem relatively well):
Let's say that we have a small web page that simply displays a counter and increments its value every time the user clicks a button:
<body>
  The count is <span id="data">0</span><br>
  <button onClick="inc()">Increment</button>
</body>
Now there's the strictly imperative approach, using a counter variable to store the counter's state:
let data;
window.onload = function () {
  data = document.getElementById("data");
};

let counter = 0;

function inc() {
  data.innerHTML = ++counter;
}
A more functional approach (at least in my understanding) would be the following code:
let data;
window.onload = function () {
  data = document.getElementById("data");
};

function writeValue(val) {
  data.innerHTML = val;
}

function doIncrement(val) {
  return val + 1;
}

function readValue() {
  return parseInt(data.innerHTML);
}

function inc() {
  writeValue(doIncrement(readValue()));
}
The issue I am facing now is that, while the data variable is never reassigned, data's state still changes over time (every time the counter is updated). I do not really see any real solution to this. Of course, the counter's state needs to be tracked somewhere in order for it to be incremented. We could also call document.getElementById("data") every time we need to read or write the data, but the problem essentially remains the same: I have to somehow track the page's state in order to process it later.
Edit: Note that I have reassigned (which, I am told, is a bad thing in FP) the value val to the property innerHTML in the writeValue(val) function. This is the exact line at which I am starting to question my approach.
TL;DR: how would you handle data that is naturally subject to change in a functional way?
This question seems to originate from the misunderstanding that there's no state in Functional Programming (FP). While this notion is understandable, it's not true.
In short, FP is an approach to programming that makes an explicit distinction between pure functions and everything else (often called impure actions). Simon Peyton-Jones (SPJ, one of the core Haskell developers) once gave a lecture where he said something to the effect that if you couldn't have any side effects, the only thing you could do with a pure function would be to heat the CPU, whereafter one student remarked that that would also be a side effect. (It's difficult to find the exact source of this story. I recall having seen an interview with SPJ where he related the story, but searching the web for a quote in a video is still hard in 2022.)
Changing a pixel on the screen is a side effect.
Sending an email is a side effect.
Deleting a file is a side effect.
Creating a row in a database is a side effect.
Changing an internal variable that, via cascading consequences, causes anything like the above to happen, is a side effect.
It is impossible to write (useful) software that has no side effects.
Furthermore, pure functions also don't allow non-deterministic behaviour. That excludes even more necessary actions:
Getting a (truly) random number is non-deterministic.
Getting the time or date is non-deterministic.
Reading a file is non-deterministic.
Querying a database is non-deterministic.
Etc.
FP acknowledges that all such impure actions need to take place. The difference in philosophy is the emphasis on pure functions. A pure function has many desirable traits (predictability, referential transparency, possible memoization, testability) that make it worthwhile to pursue a programming philosophy that favours such functions.
A functional architecture is one that minimises the impure actions to their essentials. One label for that is Functional Core, Imperative Shell, where you push all impure actions to the edge of your system. That would, for example, include an HTML counter. Actually changing an HTML element stays imperative, while the calculation required to produce the new value can be implemented as a pure function.
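To make that concrete with the question's counter, here is a minimal sketch in the question's own JavaScript (the element id data comes from the question; the function names are mine):

// Functional core: a pure function. No side effects, trivially testable.
function increment(count) {
  return count + 1;
}

// Imperative shell: the impure action that reads from and writes to the DOM.
function inc() {
  const el = document.getElementById("data");
  el.innerHTML = increment(parseInt(el.innerHTML, 10));
}

Only inc touches the DOM; increment can be tested and reasoned about in complete isolation.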
Different languages have varying approaches to how explicitly they model the distinction between pure functions and impure actions. Most languages (even 'functional' languages like F# and Clojure) don't explicitly make that distinction, so it's up to the programmer to keep that separation in mind.
Haskell is one language that famously does make that distinction. It uses the IO monad to explicitly model impure actions. I've attempted to explain IO for object-oriented programmers using C# as an example language in an article: The IO Container.
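While that article uses C#, a very rough JavaScript analogy (purely illustrative, and much cruder than Haskell's IO) is to represent an impure action as a value that merely describes the effect, so that nothing impure happens until it is explicitly run:

// A crude sketch of an IO-like container: an effect, deferred.
const io = (effect) => ({
  run: effect,                          // executing the effect is postponed
  map: (f) => io(() => f(effect())),    // pure transformations compose freely
});

const readName = io(() => "world");                     // stand-in for real input
const greet = readName.map((s) => "Hello, " + s + "!"); // still nothing has happened
console.log(greet.run());                               // effects occur only here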
Related
I've started playing around with Kotlin, but I sense my own limitations in the way I program. My problem is that I still think in Java, so my style is still imperative. My question is addressed to all functional programming zealots, and I believe the answer would be very useful to anyone who is at the very beginning stage and needs to "break" their brain in order to start rebuilding it; to leave the comfort zone and start thinking in pseudocode rather than in "whatever your first language is". I believe it is possible for highly experienced polyglot developers to boil the concepts down to plain advice on what makes a program entirely functional and what violates the paradigm. I don't know all the quirks, but please don't hesitate to include universally accepted terms that might be unknown to me (I can always look them up). At this point I need a set of rules that I can force myself to follow, however much I suffer at first; later I can analyze the guidelines and understand where they are better or worse, which of course is my own homework.
So example of these guidelines, would be something like:
Never change state; this can be avoided by using x, y, z
Operate using higher-order functions only (I may be wrong, this is just an example)
I hope the answer will give me a long-term reference to put myself in extreme conditions where I stop escaping to OOP whenever I feel uncomfortable. And now, when I look at Kotlin, I understand how I should have been thinking about problems: it is about intention, not about the structure imposed by one language or another. Intention can always be converted to a language of your choice and backed up by design patterns applicable to the language, but to find that middle ground I first need to jail myself away from my comfort zone.
Avoid mutable state like the plague.
One of the main points of using functional programming, possibly the main one, is to avoid all the little pitfalls, bugs, and issues one needs to deal with when using mutable state. You should do everything you can in order to avoid mutating state. For instance, instead of using C-style for-loops where you need to keep a counter variable updated, use map and other higher-order functions to abstract away your iteration patterns. This also means that you should never change the value of a variable if you can avoid it. Instead, you should be defining almost all of your variables, preferably all of them, as constants, and using functions to compute new values from them instead of mutating them.
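For example, in JavaScript (the language used elsewhere on this page), the counter variable disappears entirely once you reach for map:

const numbers = [1, 2, 3];

// Imperative: a mutable counter and a mutable accumulator.
const doubled = [];
for (let i = 0; i < numbers.length; i++) {
  doubled.push(numbers[i] * 2);
}

// Functional: no mutation, and no counter to get wrong.
const doubled2 = numbers.map((n) => n * 2);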
Avoid side-effects like the plague.
Mutable state's ugly cousin, side-effects. Side-effects mean anything other than taking a value and returning a value in a function. If that function prints data, mutates global variables, sends messages to threads, or does anything, anything other than simply taking its parameters, computing a value from them, and returning a value, that function has side-effects. Side-effects are important (see the next bullet point), but if you use them a lot, they become impossible to track. Just think of how everyone tells you to avoid global variables in imperative programming. Functional programming goes a step further and tries to avoid all side-effects. The bulk of your program should be made of pure functions. (See below.)
When you need to use side-effects, keep them contained.
Yes, I just told you to run away from side-effects. However, no program is useful without side-effects of some kind. Graphical User Interface? Side-effect. Audio output? Side-effect. Printing to a shell? Side-effect. So you can't really get rid of side-effects if you want to build useful stuff.
What you should do instead is write your code so that all your side-effecting code lives in a thin layer which mostly calls pure functions and then does the required side-effects using the result of these pure function calls.
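A sketch of that shape in JavaScript (the domain and the names are invented for illustration):

// Pure core: all computation and decisions live here.
const total = (items) => items.reduce((sum, item) => sum + item.price, 0);
const receipt = (items) => "Total: " + total(items);

// Thin impure layer: gathers input, calls the pure core, performs effects.
function main() {
  const items = [{ price: 3 }, { price: 4 }]; // imagine this was read from a file
  console.log(receipt(items));                // the only side-effect in the program
}

main();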
Use pure functions for everything you can.
This is sort of the flipside of the previous point. A pure function is a function which has no side-effects and does not mutate anything. It can only take in parameters and return a value. You should use these a lot. For instance, instead of doing your logging within functions which are computing stuff, you should be constructing your log strings using pure functions, and then letting your side-effects layer call these pure functions, call more pure functions in order to format the log strings into a full log, and then output the log itself from your side-effects layer.
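As a sketch of that logging example (names invented):

// Pure: builds the log line; performs no I/O and reads no clocks.
function formatLogEntry(level, message, timestamp) {
  return "[" + timestamp.toISOString() + "] " + level + ": " + message;
}

// Impure layer: supplies the non-deterministic input and performs the output.
function log(level, message) {
  console.log(formatLogEntry(level, message, new Date()));
}

log("INFO", "user logged in");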
Use higher-order functions to structure your code.
Higher-order functions are, in a way, the glue that makes functional programming work. A higher-order function is a function which takes one or more functions as parameters and/or returns a function. The power of higher-order functions is that they can encapsulate many of the patterns which you would use in an imperative-style program in a declarative manner. For instance, let's take a look at the three most common higher-order functions:
map is a function which takes a function and a list of values, applies its function argument to each of those values, and returns a new list with the results. map encapsulates the whole pattern of iterating over a list doing an operation on each value in a declarative manner.
filter is a function which takes a function which returns a boolean and a list of values, applies its function argument to each of those values and returns a list containing only those values for which its function argument returns true. It encapsulates the whole pattern of selecting results from a list in a declarative manner.
reduce, also known as fold, takes an initial value, a binary function and a list of values. It uses its function argument to combine the initial value with the first value of the list, then combines the result with the next value of the list and keeps on doing this until it has reduced the list to just one single value. It encapsulates the entire pattern of obtaining an aggregate value from a list of values.
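Here is a small JavaScript sketch (with invented data) that chains all three:

const orders = [
  { amount: 20, shipped: true },
  { amount: 35, shipped: false },
  { amount: 10, shipped: true },
];

const shippedTotal = orders
  .filter((o) => o.shipped)        // select only the shipped orders
  .map((o) => o.amount)            // transform each order into its amount
  .reduce((sum, n) => sum + n, 0); // aggregate the amounts into one value

console.log(shippedTotal); // 30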
This is in no way an exhaustive list of higher-order functions, but these three are the most common ones. I hope this has been enough to show how you can structure code which would require a lot of tracking variables using only functions in a declarative manner. If you use these higher-order functions well, it's likely you won't ever need a for or while loop again.
This is definitely not an exhaustive list of functional programming practices, but I think most functional programmers would agree these five guidelines form the core of what functional programming is about. If you want to really learn how to apply these, my advice would be to learn a pure functional programming language such as Haskell, so you are forced to abandon the imperative paradigm and to learn how to structure things functionally instead. I would recommend the fantastic Haskell Programming from First Principles as a starting resource if you choose to go this way. In case you don't want to/can't put down the cash, Brent Yorgey's Haskell course at UPenn is also a great free resource.
Run-time polymorphism can be used to let the run-time dynamically load the exact concrete class of an abstract class/interface. (You can take Animal/Dog or Vehicle/Car as examples.)
But when we know the exact concrete class at coding time (compile time), do we really need to forcefully apply polymorphism?
When I write OO code, I tend to use the most general type I can on the left-hand side of the assignment. This immediately means that my answer to your question is: no.
Here's the example:
Animal x = new Dog();
...
x.move();
The reason why I'm doing this is that I'm probably going to split the beginning and the end of the operation into two distinct operations. My methods are extremely short in practice.
Applied to the same example:
void moveDog() {
    move(new Dog());
}

void move(Animal animal) {
    animal.move();
}
As you can see, it would make no sense for the move function to know what kind of animal it is really moving.
Generally, it is the compiler's duty to figure out whether, in a given code base, any concrete call can reach an overridden move() method. Some compilers can detect that no overriding method will ever be invoked and then remove the dynamic dispatch at compile time. With some luck, my code above would compile the same whether the move function receives an Animal or a Dog.
Now, this is theory. In practice, there are two important things. First, widely used compilers have still not started using such aggressive optimization techniques as detecting which calls are effectively static, as opposed to calls that require dynamic dispatch. Second, the first thing doesn't matter too much with the CPU power we have today.
I have been writing highly optimized code for fifteen years already, and I have never met a situation in which I had to factor polymorphic calls out. That is why I strongly recommend applying polymorphism as much as possible. When the time comes to add some classes, to incorporate new features, polymorphic calls will likely be the tool that lets you seamlessly add new classes to the existing design. If you used overly concrete types during development, it could easily happen that you cannot add a new feature to the given code base.
But when we know the exact concrete class at coding time (compile time), do we really need to forcefully apply polymorphism?
Knowing the type at compile time is not necessarily a yes/no thing across all the code in an app and an object's entire lifetime, given techniques for type erasure. But, ignoring those classic uses of polymorphism, there are still other potential reasons such as...
(sorry, this is a pretty obvious one) to make it easier to change the implementation should another become available later
to make it easier to "mock" an implementation for testing (i.e. provide objects that pretend to provide some service or function, but have more scripted/controllable/observable behaviours to let tests put some dependent code through its paces)
hide aspects of the implementation that might otherwise have to be exposed (e.g. in C++, a class/struct definition must declare all the protected and private members)
this is sometimes for Intellectual Property protection; at other times, it is so more changes can be made to the implementation without having to change the "header" file, which would typically trigger recompilation of a lot of dependent code
to aid in modelling and application design, using the "interfaces" to cleanly specify the intended APIs, which can then provide a more stable reference for comparison as the implementations are fleshed out
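To illustrate the mocking point, here is a sketch in JavaScript (all names invented); the code under test depends only on the interface, so a scripted stand-in can replace the real implementation in tests:

// Production implementation: performs the real side-effect.
const realMailer = {
  send(to, body) { /* actually deliver an email here */ },
};

// Test double: same interface, but observable and fully controlled.
function makeMockMailer() {
  const sent = [];
  return {
    send(to, body) { sent.push({ to, body }); },
    sent, // tests can inspect what "was sent"
  };
}

// Code under test: only knows about the interface, not the concrete type.
function notifyUser(mailer, user) {
  mailer.send(user.email, "Your order has shipped.");
}

const mock = makeMockMailer();
notifyUser(mock, { email: "a@example.com" });
console.log(mock.sent.length); // 1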
I'm watching a video course/lectures from Berkeley. The course is "The Structure and Interpretation of Computer Programs".
In the first OOP lecture, the instructor (Brian Harvey) describes an OOP method as one that gives different answers for the same question, while a function in functional programming gives a certain output for a certain input.
The following code is an example of a method in OOP that gives a different answer each time it's called:
(define-class (counter)
  (instance-vars (count 0))
  (method (next)
    (set! count (+ count 1))
    count))
Now, although the course is illustrated with Scheme, I didn't pay much attention to the language itself, and so I can't explain the code; but can't a similar function "next" do the same thing as this "next" method?
In C, I would declare a global variable, and each time increase it by one when calling next. I know C is procedural, but I'm guessing a similar thing can be done in Scheme.
Well. With all due respect to the lecturer, these are slightly fishy definitions of both "OOP" and "functional programming". Both terms are consistently used, well, inconsistently, both in industry and academic contexts, not to mention informal use. If you dig a bit deeper, what's really going on is that there are several orthogonal concepts--different axes along which a choice is made in how to approach a program--that are being conflated, with one set of choices being arbitrarily called "OOP" despite not having anything else tying them together.
Probably the two biggest distinctions involved here are:
Identity vs. value: Do you model things by implicit identity (based on memory location or whatnot) and allow them to change arbitrarily? Or do you model things by their value, with no inherent notion of identity? If you say x = 4 does that mean that x is an alias to the timeless Platonic ideal of the number 4, or is x the name of a thing that's currently a four, but could be something else later (while still being x)?
Data vs. behavior: Do you work with simple data structures whose representation can be inspected, manipulated, and transformed? Or do you work with abstracted behaviors that do things, representing data only in terms of the things you can do with it, and let these behavioral abstractions operate on each other?
Most standard imperative languages lean toward using identity and data--pointers to C structs are about as purely this approach as possible. OOP languages tend to be defined largely by opting for behavior over data, often leaning toward identity as well but not consistently (cf. the popularity of "immutable" objects).
Functional programming usually leans more toward values rather than identity, while mixing data and behavior to various degrees.
There's a lot more going on here as well but I think that's the key part of what you're wondering here.
If anyone's curious I've elaborated a bit on some of this before: Analyzing some essential concepts of many OOP languages, more on the identity/value issue and also formal vs. informal approaches, a look at the data/behavior distinction in functional programming, probably others I can't think of. Warning, I'm kind of long-winded, these are not for the faint of heart. :P
There is a page on the excellent Haskell wiki where the differences between functional programming and OOP are contrasted. The Haskell wiki is a wonderful resource for everything about functional programming in general, in addition to helping with the Haskell language.
Functional programming and OOP Differences
The important difference between pure functional programming and object-oriented programming is:
Object-oriented:
Data:
OOP asks: What can I do with the data?
Producer: Class
Consumer: Class method
State:
The methods and objects in OOP have some internal state (method variables and object attributes) and they possibly have side effects affecting the state of computer’s peripherals, the global scope, or the state of an object or method. Variable assignment is one good sign of something having a state.
Functional:
Data:
Functional programming asks: How is the data constructed?
Producer: Type Constructor
Consumer: Function
State:
If a pure functional program ever assigns to a variable, the variable must be considered and handled as immutable. There is no mutable state in pure functional programming.
Code with side effects is often separated from the main purely functional body of code.
State can be passed around as an argument to a function; this is called a continuation.
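A tiny JavaScript sketch of passing state as an argument rather than mutating it in place:

// The current count is an argument; each call returns the next state.
const increment = (count) => count + 1;

const s0 = 0;
const s1 = increment(s0); // 1
const s2 = increment(s1); // 2
console.log(s2);          // no variable was ever mutated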
Functional substitutes for OOP generators
The way to do something similar to OOP style generators (which have an internal state) with pure functional programming is to approach the problem from a different point of view, by using one of these solutions depending on the use case:
1. Process some or all values in a sequence:
The sequence type can be a list, an array, a sequence, or a vector.
Lisp has car and Haskell has head, which take the first item of a list.
Haskell also has take, which takes the first n items, and which supports lazy evaluation and thus infinite or cyclic sequences – like OOP generators do.
Both also have various map, reduce, and fold functions for processing sequences with a function.
Matrices usually also have some ways to map or apply a function to each item.
2. Some values from a function are needed:
The indices might be from a discrete or continuous scale (integers or floats).
Make one pure function to generate the indices (events) and feed those to another pure function (behaviour). This is called functional reactive programming. It is a form of dataflow programming, along with cell-oriented programming. The Actor model is also somewhat similar in operation, and a very interesting alternative to threads for handling concurrency!
3. Use a closure to confine and encapsulate the state from the outside
This is the closest substitute to the OOP way with generators (which, I think, actually originated as an imitation of closures), and also the farthest from pure functional programming, because a closure has state.
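In JavaScript (used elsewhere on this page), such a closure-based counter would look like this:

// The closure confines the mutable count; nothing outside can touch it.
function makeCounter() {
  let count = 0;
  return function next() {
    count += 1;
    return count;
  };
}

const next = makeCounter();
console.log(next()); // 1
console.log(next()); // 2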
"Functional" in functional programming has traditionally referred to the meaning of mathematical functions. That is, the output of a mathematical function is based solely on the inputs passed to it. Nowadays such programming is more often called pure functional programming.
In pure functional programming, reassigning state is not allowed, so writing a function such as your C example would not be possible. You are only allowed to bind a value to a variable once. An example of a language where this would not be possible is Haskell.
Most functional programming languages (Scheme included) are impure and would allow you to do so. That said, what the lecturer is saying is that writing such a function is not possible in the traditional sense of functional programming.
Well, yeah, you could do that in C.
But it's not the same - in C++ you can make each object have its own count.
Context: I need to explain "composed methods" to a group of mixed-experience developers.
I think I heard about it first while reading Beck's Smalltalk Best Practice Patterns. I've personally not had too many issues writing such methods - however, in the local code-wilderness, I've seen quite a few instances where the lack of composed methods had created indecipherable Blobs... and I was in the minority. So I'm taking them through Clean Code - where this one popped up again.
The premise is quite simple.
"Functions should be short, do one
thing well and have an
intention-revealing name. Each step in
the body of the method should be at
the same level of abstraction."
What I'm struggling with is a check for the "same level of abstraction", which is, forgive the pun, a bit abstract for beginners.
My current explanation would be similar to SICP's "wishful thinking" (imagine the ideal set of steps, and then worry about the implementation/making it happen).
Does anyone have a better set of rules / an acid test to evaluate your decisions while writing composed methods?
Same level of abstraction - examples:
void DailyChores()
{
    Dust();
    Hoover();
    MopKitchenFloor();
    AddDirtyClothesToWashingMachine();
    PlaceDetergentInWashingMachine();
    CloseWashingMachineDoor();
    StartWashingMachine();
    Relax();
}
Hopefully it should be clear that the washing-machine saga would be better extracted into a separate method entitled WashDirtyClothes().
Arguably, MopKitchenFloor() should also be in a separate method entitled CleanKitchen(), as quite likely you would want to extend this in the future to include WashPots(), DefrostFridge(), etc.
So a better way would be to write as follows:
void DailyChores()
{
    Dust();
    Hoover();
    CleanKitchen(CleaningLevel.Daily);
    WashDirtyClothes();
    Relax();
}

void WashDirtyClothes()
{
    AddDirtyClothesToWashingMachine();
    PlaceDetergentInWashingMachine();
    CloseWashingMachineDoor();
    StartWashingMachine();
}

void CleanKitchen(CleaningLevel level)
{
    MopKitchenFloor();
    WashPots();
    if (level == CleaningLevel.Monthly)
    {
        DefrostFridge();
    }
}

enum CleaningLevel
{
    Daily,
    Weekly,
    Monthly
}
In terms of "rules" to apply to code not following this principle:
1) Can you describe what the method does in a single sentence without any conjunctions (e.g. "and")? If not, split it until you can. E.g. in the example I have AddDirtyClothesToWashingMachine() and PlaceDetergentInWashingMachine() as separate methods - this is correct - to have the code for these two separate tasks inside one method would be wrong - however, see rule 2.
2) Can you group calls to similar methods together into a higher-level method which can be described in a single sentence? In the example, all of the methods relating to washing clothes are grouped into the single method WashDirtyClothes(). Or, in consideration of rule 1, the methods AddDirtyClothesToWashingMachine() and PlaceDetergentInWashingMachine() could be called from a single method AddStuffToWashingMachine():
void AddStuffToWashingMachine()
{
    AddDirtyClothesToWashingMachine();
    PlaceDetergentInWashingMachine();
}
3) Do you have any loops with more than a simple statement inside the loop? Any looping behaviour should be a separate method. Ditto for switch statements, or if/then/else statements.
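For rule 3, a small JavaScript sketch (an invented example) of pulling a loop body out into its own intention-revealing method:

function sendConfirmation(order) {
  console.log("confirming order of " + order.total); // stand-in side-effect
}

// Before: the loop body mixes levels of abstraction.
function processOrders(orders) {
  for (const order of orders) {
    if (order.total > 100) {
      order.discount = order.total * 0.1;
    }
    sendConfirmation(order);
  }
}

// After: the loop reads as two steps at the same level of abstraction.
function processOrdersComposed(orders) {
  for (const order of orders) {
    applyBulkDiscount(order);
    sendConfirmation(order);
  }
}

function applyBulkDiscount(order) {
  if (order.total > 100) {
    order.discount = order.total * 0.1;
  }
}

processOrdersComposed([{ total: 150 }]);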
Hope this helps
I take "acid test" to mean you would like some concrete rules that help embody the abstract concept in question. As in "You might have mixed levels of abstraction if..."
You might have mixed levels of abstraction if...
you have a variable in the function that is only used for part of the function.
you have multiple loops in the function.
you have multiple if statements whose conditions are independent of each other.
I hope others will add to the above...
EDIT: Warning - long answer from self-taught guy ahead...
In my very humble opinion (I am TOTALLY learning here as well) it seems BonyT and Daniel T. are on the right track. The piece which might be missing here is the design part. While it is safe to say that refactoring/composing will always be a necessity, might it also be as safe to say that (ESPECIALLY with beginners!) proper design up front would be the first, second, and third steps to properly composed code?
While I get what you are asking (a test/set of tests for code composition), I think BonyT is applying the earliest of those tests during the "pseudocode" part of the design process, no?
Obviously, in the early stages of project planning, strong design and experienced coders will sniff out the obvious places ripe for composition. As the work progresses, and the team begins to fill these initial code stubs with bodies, some slightly more obtuse examples are bound to crop up. BonyT's example presents these first two steps quite well.
I think there comes a point at which experience and a finely tuned "nose" for code smell come in - in other words, the part you may not be able to teach directly. However, that is where Daniel T's answer comes in - while one may not be able to develop concrete acid-test-style "tests" for proper composition, one CAN employ techniques such as Daniel proposes to seek out potentially smelly code. Detect "hints", if you will, that should at least prompt further investigation.
If one is not certain whether things are composed at the proper level, it might make sense to work through a particular function and attempt to describe, step by step, what is going on in simple, single sentences without conjunctions. This is probably the most basic acid test that could be performed. Not to mention, this process would by default end up correctly documenting the code...
In response to BonyT, you imply that his pseudocode/method stubs make the next step obvious - I am betting that if one walks through a function and describes things step by step, one will often find that indeed, the next step either obviously follows at the same level of detail, or belongs elsewhere. While there will obviously be cases (many, with complex code) where things are not so neat, I propose that this is where nothing but experience (and possibly genetics!) comes in - again, things you can't teach directly. At that point, one must examine the problem domain and determine a solution which best fits the domain (and also be prepared to change it down the road...). Once again, properly documenting the "in-between" cases with short, simple statements (and narrative describing decisions in gray areas) will help the poor maintenance guy down the road.
I realize that I have proposed nothing new here, but what I had to say was longer than a comment would allow!
Apart from unambiguous clarity, why should we stick to:
car.getSpeed() and car.setSpeed(55)
when this could be used as well:
car.speed() and car.speed(55)
I know that get() and set() are useful to keep any changes to the data member manageable by keeping everything in one place.
Also, obviously, I understand that car.speed() and car.speed(55) are the same function (just overloaded), which makes this wrong, but then in PHP, and also in Zend Framework, the same action is used for GET, POST, and postbacks.
In VB and C# there are "properties", and they are used by many, much to the disgust of purists I've heard, and there are things in Ruby like 5.times and .each, .to_i, etc.
And you have operator overloading, multiple inheritance, virtual functions in C++, certain combinations of which could drive anyone nuts.
I mean to say that there are so many paradigms and ways in which things are done that it seems odd that nobody has tried the particular combination that I mentioned.
As for me, my reason is that it makes the code shorter and cleaner to read.
Am I very wrong, slightly wrong, is this just odd and so not used, or what else?
If I still decide to stay correct, I could use car.speed() and car.setSpeed(55).
Is that wrong in any way (just omitting the "get" )?
Thanks for any explanations.
If I called car.speed(), I might think I am telling the car to speed, in other words to increase speed and break the speed limit. It is not clearly a getter.
Some languages allow you to declare const objects, and then restrict you to only calling functions that do not modify the data of the object. So it is necessary to have separate functions for modification and read operations. While you could use overloads on parameters to have two functions, I think it would be confusing.
Also, when you say it is clearer to read, I can argue that I have to do a look ahead to understand how to read it:
car.speed()
I read "car speed..." and then I see there is no number so I revise and think "get car speed".
car.getSpeed()
I read "for this car, get speed"
car.setSpeed(55)
I read "for this car, set speed to 55"
It seems you have basically cited other features of the language as being confusing, and then used that as a defense for making getters/setters more confusing. It almost sounds like you are admitting that what you have proposed is more confusing. These features are sometimes confusing because of how general-purpose they are. Sometimes abstractions can be more confusing, but in the end they often serve the purpose of being more reusable. I think if you wanted to argue in favor of speed() and speed(55), you'd want to show how that can enable new possibilities for the programmer.
On the other hand, C# does have something like what you describe, since properties behave differently as a getter or setter depending on the context in which they are used:
Console.WriteLine(car.Speed); //getter
car.Speed = 55; //setter
But while it is a single property, there are two separate sections of code for implementing the getting and the setting, and it is clear that this is a getter/setter and not a function speed, because properties omit the (). So car.speed() is clearly a function, and car.speed is clearly a property getter.
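For comparison, a sketch of the same idea in JavaScript, which has get/set accessors of its own (the Car class here is invented for illustration):

class Car {
  #speed = 0;                                 // private backing field
  get speed() { return this.#speed; }         // read:  car.speed
  set speed(value) { this.#speed = value; }   // write: car.speed = 55
}

const car = new Car();
car.speed = 55;         // setter, no parentheses
console.log(car.speed); // getter, clearly not a call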
IMHO the C# style of having properties as syntactic sugar for get and set methods is the most expressive.
I prefer active objects which encapsulate operations rather than getters and setters, so you get a semantically richer objects.
For example, although an ADT rather than a business object, even the vector in C++ has paired functions:
size_type capacity() const // how many elements space is reserved for in the vector
void reserve(size_type n) // ensure space is reserved for at least n elements
and
void push_back ( const T& ) // inserts an element at the end
size_type size () const // the number of elements in the vector
If you drive a car, you can set the accelerator, clutch, brakes and gear selection, but you don't set the speed. You can read the speed off the speedometer. It's relatively rare to want both a setter and a getter on an object with behaviour.
FYI, Objective-C uses car.speed() and car.setSpeed(55) (except in a different syntax: [car speed] and [car setSpeed:55]).
It's all about convention.
There is no right answer, it's a matter of style, and ultimately it does not matter. Spend your brain cycles elsewhere.
FWIW, I prefer class.noun() for the getter and class.verb() for the setter. Sometimes the verb is just setNoun(), but other times not. It depends on the noun. For example:
my_vector.size()
returns the size, and
my_vector.resize(some_size)
changes the size.
The Groovy approach to properties is quite excellent IMHO: http://groovy.codehaus.org/Groovy+Beans
The final benchmarks of your code should be this:
Does it work correctly?
Is it easy to fix if it breaks?
Is it easy to add new features in the future?
Is it easy for someone else to come in and fix/enhance it?
If those 4 points are covered, I can't imagine why anybody would have a problem with it. Most of the "Best Practices" are generally geared towards achieving those 4 points.
Use whichever style works for you, just be consistent about it, and you should be fine.
This is just a matter of convention. In Smalltalk, it's done the way you suggest and I don't recall ever hearing anybody complain about it. Getting the car's speed is car speed, and setting the car's speed to 55 is car speed:55.
If I were to venture a guess, I would say the reason this style didn't catch on is because of the two lines down which object-oriented programming has come to us: C++ and Objective-C. In C++ (even more so early in its history), methods are very closely related to C functions, and C functions are conventionally named along the lines of setWhatever() and do not have overloading for different numbers of arguments, so that general style of naming was kept. Objective-C was largely preserved by NeXT (which later became Apple), and NeXT tended to favor verbosity in their APIs and especially to distinguish between different kinds of methods; if you're doing anything but just accessing a property, NeXT wanted a verb to make it clear. So that became the convention in Cocoa, which is the de facto standard library for Objective-C these days.
It's convention. Java has a convention of getters and setters, C# has properties, Python has public fields, and JavaScript frameworks tend to use field() to get and field(value) to set.
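That last style is easy to sketch in JavaScript (illustrative only, modelled loosely on jQuery-style accessors):

// One function, two behaviours, selected by argument count.
function makeAccessor(initial) {
  let value = initial;
  return function speed(next) {
    if (arguments.length === 0) return value; // speed()   -> getter
    value = next;                             // speed(55) -> setter
  };
}

const speed = makeAccessor(0);
speed(55);            // set
console.log(speed()); // get -> 55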
Apart from unambiguous clarity, why should we stick to:
car.getSpeed() and car.setSpeed(55)
when this could be used as well: car.speed() and car.speed(55)
Because in all languages I've encountered, car.speed() and car.speed(55) are the same in terms of syntax. Just looking at them like that, both could return a value, which isn't true for the latter if it was meant to be a setter.
What if you intend to call the setter but forget to put in the argument? The code is valid, so the compiler doesn't complain, and it doesn't throw an immediate runtime error; it's a silent bug.
() means it's a verb.
No () means it's a noun.
car.Speed = 50;
x = car.Speed
car.Speed.set(30)
car.setProperty("Speed",30)
but
car.Speed()
implies a command to exceed the speed limit.