Do naming rules in Kotlin matter? [closed]

An absolute pet hate of mine is naming rules for the sake of it, when development environments are so good at letting users know what each item is.
As the title suggests: are there any pitfalls if a developer were to name all types, objects, variables, etc. in "snake_case", specifically in Kotlin? (Ignoring the auto-generated names for binding and the like.)

Coding style, such as naming, doesn't matter to the compiler.
But it matters to humans — and as a couple of wise people once said, “programs must be written for people to read, and only incidentally for machines to execute.”  (They were probably exaggerating for effect, but I think there's still a grain of truth there.)
Consistency in naming helps in many ways:

- You don't have to stop and think about whether to use underscores or capitals (or spaces or dashes or whatever inside backquotes).
- It makes classes and methods easier to find in your code, as well as in libraries and frameworks.
- It plays better with Kotlin properties, which look for getXxx/setXxx/isXxx methods in the bytecode (see the sketch below).
- It removes a source of disagreement among developers.
- It's less likely to cause problems with IDEs, frameworks, and source-code tools, which tend to assume you're using the standard naming conventions.
- It makes the codebase easier for new developers to get up to speed with.
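To make the property point concrete, here's a minimal Kotlin sketch (the class and property names are invented for illustration) of how property names surface as accessor methods in the bytecode, and why snake_case names produce accessors that JavaBean-style tooling may not recognise:

```kotlin
class Profile {
    // Conventional camelCase: compiles to getUserName()/setUserName(),
    // so Java callers (and frameworks scanning for JavaBean accessors)
    // see exactly the accessor names they expect.
    var userName: String = ""

    // snake_case still compiles, but the generated accessors are
    // getUser_name()/setUser_name(), which JavaBean-style tooling
    // may not map back to a property at all.
    var user_name: String = ""
}

fun main() {
    val p = Profile()
    p.userName = "alice"   // calls setUserName() under the hood
    p.user_name = "bob"    // calls setUser_name() under the hood
    println(p.userName + " " + p.user_name)
}
```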
But, more than all those, code which doesn't follow conventions iS_hArDer-tO_rEad.  When things that work the same look the same, differences are easier to see.  The less time you spend deciphering names, the more time is left for understanding what the code is doing with them.  It's the same reason why we use consistent indentation and spacing and structure and design patterns.  With fewer surface differences, you can more easily see the underlying structures and patterns in the code, and deviations (and hence bugs) become more obvious.
Coding — by which I include debugging, maintaining, and enhancing as well as writing fresh code — is hard, and we humans are limited, so we should make things as easy for ourselves as possible.  Developing software is a constant battle against complexity; every little simplification helps.  You may think that using snake_case instead of camelCase is insignificant; but the mere fact you're asking about it here shows that it makes a difference!
The answers to this question and this question give many more (and better-argued) reasons why consistency is important.
(As it happens, I've spent many years using languages which prefer snake_case, and also languages which prefer camelCase, and I definitely find the latter easier to read in context.  But that's a much less important consideration than consistency.)

Apart from arguments with other developers, and your calls to library functions looking different from the rest of your code, the language will work perfectly well and won't care.

Related

How to use type classes in Haskell, and how they differ from Java interfaces [closed]

I asked this question yesterday and the user @dfeuer advised me that, as a beginner, I should not define my own classes. His comment:
Haskell beginners shouldn't define their own classes at all. Learn to define functions, and types, and instances. These are the vast majority of actual Haskell code. As you do this, you'll get a good feel for what makes some classes really useful and others less so. You'll learn what makes some classes easy to use and others full of booby traps. Then when you find a good reason to actually define your own class, you'll go through a slew of bad class designs before you get good enough at it that only most of your attempts go badly. Designing good classes is really hard and rarely necessary.
I am curious, why is defining my own classes usually (for a beginner) a bad idea? What are these "booby traps" and why is it so hard to design good classes?
I thought classes are used to define interfaces to data, as I do in OOP. When I write Java code, I try to write as much code as possible with abstract classes and especially interfaces, so that when I need to change the data, most of my code remains unchanged and my methods are highly reusable. Another comment under that question by @Carl suggests that this is not how classes should be used:
Why did you create that class? It feels very weird to me - very much like something that someone used to OOP would do, rather than someone used to Haskell. It has too many parameters, they're connected in what feels like a very ad-hoc manner...
My fear is that without this OOP use of classes, any change in data would break a huge part of the code. Is this fear unfounded? And if it is founded, why should I not use classes to define interfaces to data?
To be fair, I am a self-taught Java programmer and I have not read other people's code, so maybe I am doing Java wrong too. I only read some books on how the language works and then built an application. I developed it for a year or so, and my whole style is a consequence of this experience alone. My style seems to work well for my needs, though, and thus I assumed it is how Java programming/OOP is indeed done.
I'm a relatively new (and amateur) Haskell enthusiast.
I'd say: just stop thinking you can reuse OOP knowledge, patterns, and other things in Haskell. Even terminology is not "reusable". Classes are not the same thing in OOP languages vs Haskell (well, they are called typeclasses in Haskell, actually).
This is an answer to a question of mine. It starts more or less like this:
It's true that typeclasses can express what interfaces do in OO languages, but this doesn't always make sense.
i.e. stating the inherent difference between two similar (only apparently similar!) concepts in Haskell vs OOP languages.
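For illustration only (and in Kotlin rather than Haskell, since it's the language elsewhere on this page): one way to see the difference is that a typeclass-style design passes the "instance" around as a separate value, so behaviour can be added for types you don't own, whereas an OO interface bakes the behaviour into the type itself. All names below are made up:

```kotlin
// A typeclass-style "Show" encoded as an explicit evidence parameter.
interface Show<T> {
    fun show(value: T): String
}

// An "instance" for Int, defined without touching Int itself --
// something an OO interface on the type can't do.
object ShowInt : Show<Int> {
    override fun show(value: Int) = "Int($value)"
}

// A function constrained by the typeclass: the evidence is explicit.
fun <T> describe(value: T, ev: Show<T>): String = ev.show(value)

fun main() {
    println(describe(42, ShowInt))  // prints: Int(42)
}
```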
Another interesting link is on Design Patterns in Haskell. It is very high-level, and I still don't quite understand how some tools can be used in Haskell as an alternative to a specific OOP pattern. (Probably the fact that first-class functions remove the need for the Strategy pattern is the only thing that is totally clear to me at the moment.) However, I think it is a good read and, most of all, it should convince you that learning and coding in Haskell comes with a huge mental shift, and it is best approached by starting from zero. If you refuse that, you're not gonna learn Haskell.
I'm not saying that you shouldn't use your brain to notice similarities between OOP languages and Haskell. You should just assume that even trying to build on those observations will handicap your learning process.
As regards Haskell specifically, sitting down and studying LYAH as if you were at school (with a laptop to try out the examples) is a good way to learn the basics very well. It is an easy-ish book to read, and it guides you by the hand.
For what it's worth, I think that Structure and Interpretation of Computer Programs is a good book to accompany learning a functional language, as it gives you a practical background for the shift of philosophy I mentioned earlier. You must do the exercises. Doing them will force you towards that mental shift.
A final suggestion, which I would never apply before studying LYAH thoroughly, is to complete The Monad Challenges. But I have to say that LYAH already does a good job of teaching you what the Challenges ask you to think about. I found myself thinking "I already know this" and "why is the challenge going so roundabout?".

Can any language be used to program in any paradigm? [closed]

Can any language be used to program in any paradigm? For example, C doesn't have classes, but it is still possible to program in an OOP style. There are some languages (such as assembly) I can't see using OOP in.
Yes, simply due to the fact you can implement an interpreter for your $favorite $paradigm in the host language.
Practically, though, this is not feasible, efficient, or right.
C++ is ultimately assembly, you just have a compiler to write the assembly for you from a nicer description. So sure you can do OOP in assembly, just as you can do OOP in C; it's just that a lot of the OO concepts end up being implemented with convention and programmer discipline rather than being forced by the structure of the language, with the result that huge classes of bugs become possible that your language tools probably won't be very good at helping you find.
Similar arguments follow for most paradigm/language mismatches. Lots of object-oriented programs have been written in C this way, so it can even be a somewhat practical thing to do, not just an academic matter.
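As a rough sketch of that "convention and programmer discipline" point (written in Kotlin rather than C or assembly, with invented names): dispatch can be just a record of function references, and nothing in the language enforces that callers go through it:

```kotlin
// "OOP by convention": a hand-rolled vtable is just data holding
// function references, and "objects" are raw fields plus a pointer
// to their vtable. Encapsulation exists only as discipline.
class VTable(
    val area: (DoubleArray) -> Double,
    val name: () -> String,
)

val circleVTable = VTable(
    area = { fields -> Math.PI * fields[0] * fields[0] },  // fields[0] = radius
    name = { "circle" },
)

val rectVTable = VTable(
    area = { fields -> fields[0] * fields[1] },  // width * height
    name = { "rect" },
)

fun main() {
    val shapes = listOf(
        Pair(circleVTable, doubleArrayOf(2.0)),
        Pair(rectVTable, doubleArrayOf(3.0, 4.0)),
    )
    for ((vt, fields) in shapes) {
        // "Virtual dispatch" is just an ordinary indirect call.
        println("${vt.name()}: ${vt.area(fields)}")
    }
}
```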
It can be a little harder when what you want is to remove restrictions rather than add them.
In purity-enforced languages such as Haskell and Mercury you can't suddenly break out object-oriented style packets-of-encapsulated-mutable-state in the middle of arbitrary pure code (at least not without using "all bets are off" features like unsafePerformIO in Haskell or promise_pure in Mercury to lie to the compiler, at which point your program may well completely fail to work unless you can wrap a pure interface around the regions in which you do this). However you can write whole programs in procedural or object-oriented style in these languages, by never leaving the mechanism they use to do IO.
Likewise, if you consider the use of duck typing in dynamic languages to be a paradigm, it's pretty painful to get something similar in languages with static typing, but you can always find a way to represent your dynamic types as data. Once again, though, you find yourself doing things with convention and reimplementation that you would get for free if you were really using a duck-typing language.
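A toy sketch of that, using Kotlin with plain Java reflection (the names are invented): look the method up by name at runtime, and accept the same failure mode a dynamic language would give you:

```kotlin
// Duck typing emulated in a statically typed language: resolve the
// method by name at runtime. Flexible, but rebuilt by convention --
// including the "method not found" failure at runtime.
class Duck { fun quack() = "quack!" }
class Robot { fun quack() = "beep-quack" }
class Rock                                    // no quack()

fun tryQuack(obj: Any): String {
    val method = obj::class.java.methods.firstOrNull {
        it.name == "quack" && it.parameterCount == 0
    }
    return method?.invoke(obj) as? String ?: "<can't quack>"
}

fun main() {
    listOf(Duck(), Robot(), Rock()).forEach { println(tryQuack(it)) }
}
```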
I'm pretty sure it would be hard to find a language (usable for writing general purpose programs) that can't be adapted to write code in any paradigm you like. The adaptation may not produce very efficient code (sometimes it can though; adapting C or assembly to any paradigm can usually be made pretty much as efficient as if you had a language tuned for that paradigm), and it will almost certainly be horribly inefficient in terms of programmer time.
No, not all languages can be used to program in any paradigm. However, the more popular ones (Python, C++, etc.) all allow you to choose how you want to program. Even PHP is adding OO support.

Command Pattern leading to class explosion [closed]

It seems like whenever I use the Command Pattern, it always leads to a significantly larger number of classes than when I don't use it. This seems pretty natural, given that we're executing chunks of relevant code together in separate classes. It wouldn't bother me as much if I didn't finish with 10 or 12 Command subclasses for what I might consider a small project that would have only used 6 or 7 classes otherwise. Having 19 or so classes for a usual 7 class project seems almost wrong.
Another thing that really bothers me is that testing all of those Command subclasses is a pain. I feel sluggish after I get to the last few commands, as if I'm moving slower and no longer agile.
Does this sound familiar to you? Am I doing it wrong? I just feel that I've lost my agility late in this project, and I really don't know how to continuously implement and test with the speed that I had a few days ago.
Design patterns are general templates for solving problems in a generic way. The tradeoff is exactly what you are seeing. This happens because you need to customize the generic approach. 12 command classes does not seem like a lot to me, though, personally.
With the command pattern, hopefully the commands are simple (just an execute method, right?) and hence easy to test. Also, they should be testable in isolation, i.e. you should be able to test the commands easily with little or no dependencies.
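As a sketch of that point (not from the original answers; the names are invented, and it uses Kotlin's fun interface): since a command is a single method, small commands can be lambdas rather than full subclasses, which keeps the class count down while staying testable in isolation:

```kotlin
// A command is just one method, so a fun interface lets small
// commands be plain lambdas instead of one subclass per command.
fun interface Command {
    fun execute()
}

class CommandRunner {
    private val history = mutableListOf<Command>()

    fun run(command: Command) {
        command.execute()
        history.add(command)  // kept e.g. for undo/redo or auditing
    }
}

fun main() {
    val runner = CommandRunner()
    runner.run { println("save document") }      // no SaveCommand class
    runner.run { println("send notification") }  // no NotifyCommand class
}
```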
The benefits you should be seeing are two-fold:
1) You should have seen your specific, complicated approach simplified by using the pattern(s) you chose. i.e. something that was getting ugly quickly should now be more elegant.
2) You should be going faster, due to the simplified approach and the ease of testing your individual components.
Can you make use of other patterns, like Composite, and use good OO design to avoid duplicating code (if you are duplicating code...)?
That doesn't seem like a lot of command classes, but I agree with you that it smells a little if they make up more than 60% of your classes. If the project is complex enough to merit the use of the command pattern, I suspect you'll find some classes begging to be split up. If not, perhaps the command pattern is overkill.
The other answers here have great suggestions for reducing the complexity of your commands, but I favor simplicity where I can find it (ala the bowling game).

Do you rely on your memory, or consult references and use a lot of IntelliSense? [closed]

I have noticed I do not code as much as I used to. Today I dedicate more time to analysis and design; then I communicate that design to programmers, and they do the coding. This has affected my coding productivity, because I must consult references and rely on IntelliSense. Things are becoming more complex every day.
Now, here is the irony. If I were to hire a programmer and sit him/her in front of a computer, I might ask them to do some coding so I could check their abilities. I would evaluate them based on their use of memory vs. consulting references. Maybe I would prefer the programmer who did not consult references too much, but who knows what they are doing.
What is your opinion and experience?
I would say that a developer who knows how to find the answers is better than one who has an overall good knowledge already. I find that intellisense is a good tool for finding answers, besides it is too much to remember all method names, arguments, overloads, etc.
I use memory to get me into the right general area (e.g. knowing which classes to use or at least which namespace they'll be in) and then often Intellisense/MSDN for the exact method name or arguments to use.
Having said that, Stack Overflow is improving my ability to code without any references (or even compilation) - I'm sure code will just work out of the box for me more often now than it used to. (I tend to post and then check the code works, add links to MSDN etc - assuming I'm reasonably confident in the approach.)
Someone knowing what resources are available, and how to find the answers, and how to effectively debug - these are qualities I look for now in prospective employees.
I used to consult my memory only, but two things have happened:

- Class libraries have gotten larger, and so has the number of languages available.
- The ratio of programming-related memory to personal-life-related memory has shifted away from code.
Programming today is also eight times harder than it was when I started. I used to work on 8-bit machines, now I'm working on 64-bit ones. :)
I was once at a job interview with the CTO of a company. He asked a question based on a real-life problem the company had had a while back and solved. It was a multi-step problem.
I was standing in front of a whiteboard working through my solution and struggling through a particular part, a part I would have used Google for before even attempting it, had I been tasked with solving this problem for real instead of for an interview. He asked me at that point, "Would you do anything differently if this wasn't an interview question?" I responded, "Yes. I would exhaust all possibilities of using a third-party component for this part of the task and look up the solution, because it is a well-defined problem that's been solved several times." There was a bit more discussion where I justified my answer, explained exactly what I would research, and solved some other parts of the question. In the end I was offered and accepted the job, partly because of knowing how to find out what I didn't know.
Being able to use references is as important as being able to code from memory. Obviously, if you are a one-language shop and want people proficient in that language, the person should be able to write a complete hello-world app in Notepad. Interview problems should focus on small problems, and one should not worry about small syntax errors. This is why a whiteboard is the best IDE for interview questions.
Unless you demand all your coders use Notepad and don't give them internet access, don't be as concerned about the syntax. If you do sit them down in front of a computer, look at the finished product as well as the technique used to get there.
I'm a PHP programmer in my early 30's. I rely on PHP's excellent documentation, for several reasons:
Programming concepts don't change. If I know what my object models are and how I want to manipulate data, then there are dozens of ways to implement the details. The details are important, but a better grasp of the design and structure is more important.
PHP has notoriously inconsistent functions. One string function might use ($needle,$haystack) as parameters, and another might use ($haystack,$needle). Trying to keep them straight isn't worth the hassle when you can just type php.net/function_name and get the reference.
I don't rely on intellisense, simply because I haven't found a decent IDE for PHP that does it well. Eclipse is ok, but it's not fantastic. Netbeans gives me 'PHPDoc not found' for all the built-in PHP functions whenever I install it. There's nothing that I've found so far that beats out the documentation.
The bottom line is that the ability to memorize functions isn't indicative of coding ability. Obviously there's a key set of basic functions that a good programmer will know just from extensive usage over time, but I wouldn't base a hiring decision on whether someone knows substr_replace vs. str_replace from memory.
Because I've read either the documentation, or articles, or a book on a subject, the things I learn on a topic are organized. The result is that, if I can't bring something up from memory, I can probably find it quickly through IntelliSense or the Object Browser.
If worse comes to worst, I can pick up the book again; something these youngsters are not being taught to do.
John Saunders
Age 51
Pretty much Google + Old Projects + my memory (of course)
References will not solve your problems, though; they're only for the nuts and bolts. The higher level of problem solving is the actual "programming" part, IMHO.
I tend to use IntelliSense and ReSharper much more than I did before, and this has helped my overall productivity. If I can get the idea of how I want to solve something and then use tools to fill in the more boring parts, like class names and function signatures, why shouldn't I use the tools I have? I feel relieved that Jon Skeet seems to have a similar approach.
I rely on my bookmarks and books... and my ability to use them effectively. I have multiple books above my desk, including a copy of the ISO C90 standard. Moreover, I use Xmarks to have access to my bookmarks wherever I go. Sometimes, I make a pdf out of a particular page and upload it to my web-site if it is important enough.
Sometimes the information provided by the resources I use makes its way into my terrible memory... maybe.

Is Single Responsibility Principle a rule of OOP? [closed]

An answer to a Stack Overflow question stated that a particular framework violated a plain and simple OOP rule: Single Responsibility Principle (SRP).
Is the Single Responsibility Principle really a rule of OOP?
My understanding of the definition of Object-Oriented Programming is "a paradigm where objects and their behaviour are used to create software". This includes the following techniques: Encapsulation, Polymorphism, and Inheritance.
Now don't get me wrong - I believe SRP to be the key to most good OO designs, but I feel there are cases where this principle can and should be broken (just like database normalization rules). I aggressively push the benefits of SRP, and the great majority of my code follows this principle.
But, is it a rule, and thus implies that it shouldn't be broken?
Very few rules, if any, in software development are without exception. Some people think there is no place for goto, but they're wrong.
As far as OOP goes, there isn't a single definition of object-orientedness so depending on who you ask you'll get a different set of hard and soft principles, patterns, and practices.
The classic idea of OOP is that messages are sent to otherwise opaque objects and the objects interpret the message with knowledge of their own innards and then perform a function of some sort.
SRP is a software engineering principle that can apply to the role of a class, or a function, or a module. It contributes to the cohesion of something, so that it feels well put together, without unrelated bits hanging off it or multiple roles that intertwine and complicate things.
Even with just one responsibility, that can still range from a single function to a group of loosely related functions that are part of a common theme. As long as you're avoiding jury-rigging an element to take on the responsibility of something it wasn't primarily designed for, or doing some other ad-hoc thing that dilutes the simplicity of an object, then violate whatever principle you want.
But I find that it's easier to get SRP correct than to do something more elaborate that is just as robust.
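A tiny illustrative before/after of that cohesion point, with hypothetical names (not from the answer above):

```kotlin
// Before: one class with two intertwined reasons to change
// (how the report is computed, and how it is persisted).
class ReportManager {
    fun buildReport(data: List<Int>): String = "total=${data.sum()}"
    fun saveReport(report: String, path: String) { /* file I/O would go here */ }
}

// After: each class has a single responsibility, so each can be
// tested, replaced, and changed independently.
class ReportBuilder {
    fun build(data: List<Int>): String = "total=${data.sum()}"
}

class ReportWriter {
    fun save(report: String, path: String) { /* file I/O would go here */ }
}
```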
None of these rules are laws. They are more guidelines and best practices. There are times when it doesn't make sense to follow "the rules" and you need to do what is best for your situation.
Don't be afraid to do what you think is right. You might actually come up with newer and better rules.
To quote Captain Barbossa:
"..And secondly, you must be a pirate for the pirate's code to apply and you're not.
And thirdly, the code is more what you'd call "guidelines" than actual rules...."
To quote Jack Sparrow & Gibbs.
"I thought you were supposed to keep to the code."
Mr. Gibbs: "We figured they were more actual guidelines. "
So clearly Pirates understand this pretty well.
The "rules" could be understood via the patterns movement as "Forces"
So there is a force trying to make the class have a single responsibility. (cohesion)
But there is also a force trying to keep the coupling to other classes down.
As with all design ( not just code) the answer is that it depends.
Ahh, I guess this pertains to an answer I gave. :)
As with most rules and laws, there are underlying motives by which these rules are relevant -- if the underlying motive is not present or applicable to your case, then you are free to bend/break the rules according to your own needs.
That being said, SRP is not a rule of OOP per se, but is considered a best practice for creating OOP applications that are both easily extensible and unit-testable.
Both are characteristics that I consider as of utmost importance in enterprise application development, where maintenance of existing applications occupies more time than new development does.
As many of the other posters have said, all rules are made to be broken.
That being said, I do think that SRP is one of the more important rules for writing good code. It's not specific to Object Oriented programming, but the "encapsulation" part of OOP is very hard to do right if the class does not have a single responsibility.
After all, how do you correctly and simply encapsulate a class with multiple responsibilities? Usually the answer is multiple interfaces, and in many languages that can help quite a bit, but it's still confusing to the users of your class that it may behave in completely different ways in different situations.
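Here's a minimal sketch of that "multiple interfaces" idea in Kotlin (hypothetical names): the class keeps several responsibilities, but each client sees only one narrow face of it:

```kotlin
// Several responsibilities behind narrow, separate interfaces.
interface Readable { fun read(): String }
interface Writable { fun write(text: String) }

class Buffer : Readable, Writable {
    private val contents = StringBuilder()
    override fun read(): String = contents.toString()
    override fun write(text: String) { contents.append(text) }
}

// A consumer that only needs to read never even sees write():
fun printAll(source: Readable) = println(source.read())

fun main() {
    val buffer = Buffer()
    buffer.write("hello")
    printAll(buffer)  // the Readable view hides the writing half
}
```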
SRP is just another expression of ISP :-) .
And the "P" means "principle" , not "rule" :D