I've been developing in Objective-C for a long time, mostly games, and lots of people have commented that I frequently use global variables. For example, in my .m file, before the @implementation, I mostly have:
BOOL X = NO;
int y = 1;
NSString *ran;
...
Now, I know I'm supposed to use properties, but I find it much clearer to use these global variables, and I keep them safe.
Aside from the fact that it is not object-oriented and/or not acceptable, does it affect any other facet of my app, such as performance?
In my games I have something like 40 booleans in one class that are shared by almost ALL methods. I find it almost impossible to write getters/setters, or to use properties, for all of them, and I am comfortable with my way. But is it so wrong?
Is there another way to deal with many booleans that change frequently in real time and are shared by all methods?
Is it so terrible to use globals? (I don't need to be considered a good Objective-C user…)
Global variables are generally considered bad practice because they lead to coding problems in the long run. If you find yourself able to maintain your code, then go ahead.
But eventually, you'll work on a project big enough where it will cause problems. Why not learn to get along without them now on the easier projects?
ObjC's not much different from its relatives in this regard.
The problem is that such a program is very difficult to reuse in other contexts. That is, it can be a better choice to reimplement the functionality entirely rather than make the version with 40 globals reusable (and retest everything).
40 Booleans for one class is also a LOT. Read your code -- look for patterns. Make smaller, more easily reusable implementations if you want to get away from the globals. Many devs consider globals huge maintenance pains (war stories!). I could easily see myself having trouble trying to understand the program flow of such a program.
Even packing your 40 bools into a C struct and putting an instance of that struct in your ObjC class would be one huge improvement, and it is simple to implement.
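For instance (a minimal sketch; the class and flag names are hypothetical):

#import <Foundation/Foundation.h>

// One struct gathers the flags that used to be separate globals.
typedef struct {
    BOOL soundEnabled;
    BOOL isPaused;
    BOOL playerHasKey;
    // ...the rest of the flags...
} GameFlags;

@interface Game : NSObject {
    GameFlags _flags;   // a single instance variable instead of 40 globals
}
- (void)togglePause;
@end

@implementation Game
- (void)togglePause {
    _flags.isPaused = !_flags.isPaused;   // every method can still reach the shared state
}
@end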
If you have had no problems maintaining these programs, consider it a blessing! …but it will not be a favorite design for other people that will read, extend, or maintain said program.
As with most development practices, global variables have their place, BUT they reduce how refactorable, readable and debuggable the code is. Imagine the following scenarios:
a program that has an error in one function, but the error is caused by a global variable. Which other location produced the bad value? It's nearly impossible to tell.
someone who didn't write the code wants to change something, but has no idea where the global values are coming from. The way to figure out how the program works is to understand EVERY place the global variable is used. This is much more difficult than if you had simply encapsulated your functionality appropriately.
one piece of code is repeated over and over again (every method has access to your global, so they all use it, but in just barely different ways). Requirements change, and you need to change how that works slightly. You now have to change 143 different places in the code. (One time I had to do this was when the software changed from the English system to metric: 30 different code locations, all using DIFFERENT conversion values to do the same thing. A sketch of the fix follows.)
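For that last scenario, the fix is a single shared definition (a minimal C sketch; the constant and helper names are illustrative):

static const double kCentimetersPerInch = 2.54;

// The one and only conversion in the codebase; every caller uses this.
static double inchesFromCentimeters(double cm) {
    return cm / kCentimetersPerInch;
}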
On the other hand, if you have performance issues, there are times when having a global will speed things up, but it's much better to code for readability, refactorability and debuggability first, and then refactor for performance if necessary.
I've read a blog post called Blogging with Noir, and I was honestly surprised that the author uses java.jdbc instead of a library like Korma. What are the advantages of writing SQL queries in your code instead of letting tools do it for you?
I guess it is for the usual reasons that you might choose to use an API directly in Clojure rather than use a wrapper:
Existing knowledge - you already know JDBC well and know that it will get the job done; why spend time learning a new abstraction unless there is a clear advantage?
Uncertainty - does the library have all the features you need? Will it continue to be maintained and implement new features in the future?
Stability - the wrapper may not yet be mature, so you run the risk of your code having to change if breaking changes occur / bugs are discovered.
Completeness - the wrapper may not (yet) encapsulate all of the functionality of the original API that you need
Overhead - sometimes extra layers of abstraction add a performance overhead that you don't need/want
Extra dependency - adds complexity to your build, and conceptual overhead in terms of the number of abstractions you need to keep in your head.
Ultimately it's a trade-off - the above are reasons that you might want to use the underlying API, but there are equally good reasons that you may choose to use the wrapper:
More idiomatic - a wrapper library is likely to give you much cleaner, more elegant code than a Java-based API (particularly if the Java API is imperative/stateful). You have to admit that Korma is pretty elegant!
More composable - Clojure wrappers tend to adopt a functional style, which leads to easy composability with other clojure code / libraries.
New features - often Clojure wrappers add extra functionality that the original API does not possess (for example, look at the data binding functionality added on top of Swing by Seesaw)
Korma, IMO, isn't nearly ready to be used as a full replacement for SQL. It's definitely handy, but right now a lot of my queries have (raw "...") snippets in them, and for more complicated stuff all the main querying is done inside SQL views, which are then selected from via Korma.
The main alternative, ClojureQL, doesn't even work with Clojure 1.3+.
In short, SQL is hard to abstract, and Korma - even though it tries to be minimal, meaning you still have to understand SQL pretty well to use it - isn't finished.
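To illustrate the kind of mixing this describes (a minimal sketch; the posts table and its columns are hypothetical, and it assumes a connection configured via defdb):

(use 'korma.core)

(defentity posts)

;; Straightforward queries are pleasant to write in Korma...
(select posts
  (where {:published true})
  (order :created_at :DESC))

;; ...but anything it can't express falls back to raw SQL.
(exec-raw ["SELECT * FROM posts WHERE char_length(body) > ?" [1000]]
          :results)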
I can think of two reasons:
Almost everybody knows SQL; almost nobody knows Korma.
This is a guess, because I do not know Korma myself, but raw SQL is sometimes suitable or even necessary if you want to do something specific, like use features that are only present in a particular database.
Sometimes I use OO design and sometimes a procedural style, and every time I use OOP I feel like I'm wasting resources on nothing. Say I have a situation where I need to grab some values from a data source - a pool of bannerinfo. For the further work I could declare a Banner class and decorators for additional functionality, but why would I go through such a heavy sequence - grab the data, instantiate objects, fill them, wrap them, and so on - rather than just: grab the data, run procedural code on it? Yes, OOP often helps organize logic and keeps decisions flexible, but on the other hand it's a cost in design time (I run into a lot of problems squeezing simple things into an OOP style) and, obviously, in machine resources. I'm kind of stuck in that mindset. I'm young, but I've already seen some OOP projects, and I wouldn't say they're easy to understand. The idea of OOP is pretty charming - organized, logical - but...
So, would you mind pointing out some differences between the situations where I should use the OOP and procedural styles? I'd appreciate any links to additional literature on the topic. Thanks!
That data you're grabbing has a structure to it, i.e. the order in which the fields show up within each record in the data source. The code you want to run on that data is closely bound to that structure (i.e. the code is not going to apply to other data structures, and if the data structure changes you will certainly want to change the code). So it makes sense to keep the data and behaviour together from a "mental information management" point of view, and objects are a great way to do this.
What if your program grows, and you want to iterate through bannerinfo in multiple places within the project? Of course you could create a routine available from the whole program which does what you want on the bannerinfo, and call that from each point where you need it. But what if you then think of other things you want to do with a bannerinfo? Of course you could just create another routine available from the whole program, but it would be completely separate from the first. What if these two routines had some code in common that you could push out to a separate routine, would you create yet another routine available from the whole program, even though it's only used by the other two?
With OOP you'd have a class with two public methods, and one private one for that third routine. Why is this different from having three routines available to the whole program? The answer is clutter. You can add as many methods to that class as you like, and they won't add clutter to the parts of the program where you're not using that particular class, as they won't be available there. And if the data structure of bannerinfo changes, you only need to go to one place to make the changes.
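A minimal PHP sketch of that shape (all names hypothetical) - two public methods sharing one private helper, all scoped to the class:

<?php
class BannerInfo
{
    private array $record;

    public function __construct(array $record)
    {
        $this->record = $record;
    }

    public function imageTag(): string
    {
        return '<img src="' . $this->url() . '" alt="banner">';
    }

    public function linkTag(): string
    {
        return '<a href="' . $this->url() . '">' . $this->record['title'] . '</a>';
    }

    // The shared third routine: visible to the two public methods,
    // invisible to the rest of the program.
    private function url(): string
    {
        return '/banners/' . $this->record['id'] . '.png';
    }
}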
Of course there's more, but I hope this helps demonstrate where OOP can be useful. It's all about making things easy to manage. If your specific problem doesn't care for that, because it is a one-off or will never grow, then there's not necessarily any benefit.
Final note: whether the benefit is worth the effort also depends on other factors such as how comfortable you are with using objects, what you're trying to do with them (inheritance can get murky), and also on the language and syntax itself.
"grab data, run procedural code on data"
I don't see how dealing with data can be easier with procedural code. With OOP you can do things like:
$users = $db->from('users')->where('score', 100, '>')->getMany();
Or with an ORM:
$user = $orm->entity('User')->findOne($id);
$user->setPassword('abc123'); // set a new password
$orm->save($user);
As for showing the data (also called 'the view' in MVC architecture), I have to agree that decorators can be annoying. But if you use a templating engine, things are as easy as they can get. You didn't mention which language you are using, but if you are into PHP you can use Twig.
Personally, I feel more comfortable with OOP even in small projects, where you don't even do things like unit-testing. But I think the best of OOP comes when you need maintainability, collaboration, reusability, etc.
I've been interested in game programming for a while and have tried to read quite a lot of books on OOP. The problem is that, for the most part, the books show you code and say "add this here", "add this there", but they fail to explain "the big picture" of OOP, jumping around instead. What I want to know is how to think in terms of OOP. For example, I've read this thread: Object Oriented application problems in game development, which gives some good insight into how to THINK about your classes (like: player "has", "can"... world "listens"). What I would like some help with is a way of thinking - how to ask the right questions in order to plan well which things should be left for a "player class" to do, which things to leave for the "world class" to do, which things to make "private" and which to leave "public", etc. I want to answer the "whys", not the "hows". I don't want the code; I want the questions, or the mindset, for OOP to become a natural way to organize code.
For example, if I am dealing with collision detection, should I leave it for the "world" to check? Should I leave it for the player to check? Which questions should I ask myself?
Sorry for the "broad" question, but anything would help. From a good "book" to some tips.
PD: I do not have mucho programming experience
Best regards,
Stop reading books and get out there and program. Learn Java. Use a book to do it, but don't just go through the motions; don't download the code, write it yourself. In the beginning you will wonder what the point of OOP is, but then you will get into more complex problems and you will start to appreciate the freedom that OOP gives you. Things like inheritance, encapsulation, and polymorphism are just terms for you right now. You kind of know what they mean, but you haven't programmed enough to use the concepts. Once you use them and write classes that exemplify the concepts, then you start to learn real object-oriented programming. You shouldn't focus on making your game OOP; you should focus on making OOP fit your game.
So moral of the story is go program.
Write - write software. People make too big a deal out of OOP. It's merely an approach to achieving certain design principles, such as modularity and low coupling. Experiment and see what makes code good code - how to make code flexible and maintainable. Then you will understand the principles that lead to a good design, whether purely functional, procedural, OOP, or any other paradigm.
I think the key to learning OOP is indeed writing code, but start to think in terms of how you would model the real world - i.e., a car object has attributes such as doors, tires, and an engine, while its behaviors would be things like starting the engine and changing the oil, etc. Free your mind and think of things in a way that relates to how you can make writing code less cumbersome and complex. Some problems are inherently complex, but OOP can help you sort them out and think of things in a real-world fashion. You can do it... just start trying.
I read an earlier edition of "Object-Oriented Thought Process" and found the book immensely helpful in understanding the whole OOP paradigm.
http://www.amazon.com/Object-Oriented-Thought-Process-3rd/dp/0672330164/
I guess the best way to 'grok' the concept of object-oriented programming is to think of code as modules, or building blocks - write code so it can be modularized in this fashion; then you can reuse the blocks whenever you need that code, simply calling them as needed instead of writing the same code over and over again. It's as much a discipline as a taught subject. It is also helpful to document your code, so that when you go back later to reuse it you know what kind of arguments it takes, what kind of output it generates, and how it does what you wrote it to do.
As you have said, this is a very broad question. With experience, you will have a better sense of when to use what.
While it is nice to know the "whys", remember that knowing the "hows" builds a good foundation for you to understand the "whys".
Now, to answer the specific ones that you have brought up. Think of "public" as something you would put in an API. If you have a "player class", what do you want the rest of your code to do with it? You want to interact with it in some sense. Whatever forms the interface for interacting with the "player class" should be public.
So what are the things that should be private? Say there is some attribute of the player class that has to stay in a valid range (let's say between 1 and 100). How do you prevent people (other parts of the code) from corrupting it? You use "private" for that. It prevents other code from setting the value to 1001. That way, if the attribute ever gets into a bad state, you know it's the class that screwed up, not the rest of the code.
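For example (a minimal Java sketch; the Player class and its 1..100 health range stand in for the hypothetical attribute above):

public class Player {
    private int health = 100;   // private: nobody outside can set it to 1001

    // The public interface is the only way the rest of the code touches health.
    public void setHealth(int value) {
        if (value < 1 || value > 100) {
            throw new IllegalArgumentException("health must be between 1 and 100");
        }
        this.health = value;
    }

    public int getHealth() {
        return health;
    }
}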
As for designs, remember that designs change. When you first set out with your program, you may decide that one class should do the collision detection (that is, your "world" has a collision detector). Maybe at first you just write your "world" with the collision detection built in, and later on you refactor the code out into a class called "CollisionDetector". Then later still you may decide it belongs somewhere else, but that's easy, since you can just have another object "have a" CollisionDetector.
The point is, if you make your code modular enough, this will be easy. There are no hard rules. You first write your code with the design you have in mind, and along the way you will find better ways of doing things.
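A minimal Java sketch of that "has a" refactoring (all names hypothetical):

class Entity {
    int x, y, width, height;
}

class CollisionDetector {
    // A simple axis-aligned bounding-box overlap test.
    boolean collides(Entity a, Entity b) {
        return a.x < b.x + b.width && b.x < a.x + a.width
            && a.y < b.y + b.height && b.y < a.y + a.height;
    }
}

class World {
    // The world delegates the check, so the detector can later move
    // elsewhere without touching the rest of the code.
    private final CollisionDetector detector = new CollisionDetector();

    void resolve(Entity player, Entity wall) {
        if (detector.collides(player, wall)) {
            // respond to the collision here
        }
    }
}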
I don't think I am a total noob in OOP, but do you sometimes feel we go a little too far in privatizing fields? Do you have a good rule of thumb for when a field absolutely must be private, and when it is (maybe) okay to mark it protected, or public?
Sometimes it's the obvious thing that gets me.
Discuss
The simple rule:
Interface should be public. Implementation should be private.
When you make a field public (or even protected), you are effectively declaring it as part of the interface. The problem is, it's an implementation detail -- in almost every case, that field means something to your code. Anyone who wants to set it can, but you have to carefully explain how it needs to be set in order to keep the whole thing from crashing and burning. You can't even validate it as it's being set, so you need to validate it every...single...time before you use it, which can kill performance depending on how the validation needs to be done. (Even then, there's no guarantee that the field will stay valid, because you can't enforce synchronized access to it.) And everyone using your class will have to do the same thing, because $DEITY only knows what other code has been mucking around with that field and possibly corrupting it.
On top of all that, once it's a field, people are going to write code that expects to use that field. And in the cases where you find you need any of that validation later, you can't just convert a field to a getter/setter without breaking binary compatibility (meaning anyone who used your code will have to recompile everything that used that field in order for it to work again). Do that too many times, and people will be afraid to use your API -- read: you won't have any users.
Getters and setters separate the implementation from the interface. They allow callers to get back something that is definitely valid, and let you make sure that anything going in is definitely valid. This makes things more predictable, more stable. So if you write code that will ever be used after you write it, non-trivial getters and setters (that validate the value, maybe synchronize, etc. -- i.e., do more than just blindly get and set a variable) are a good idea.
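For example (a minimal Java sketch; the Temperature class is hypothetical):

public class Temperature {
    private double kelvin;   // implementation detail, never exposed directly

    public double getKelvin() {
        return kelvin;       // callers always get back a valid value
    }

    // The single point of entry where anything going in is checked.
    public void setKelvin(double kelvin) {
        if (kelvin < 0) {
            throw new IllegalArgumentException("below absolute zero: " + kelvin);
        }
        this.kelvin = kelvin;
    }
}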
It's for the case when others are going to be using your code or classes. You can expose and control exactly what is to be interfaced with, and what is 'internal.'
If you're the only one who will ever use your code, then yes it is often going 'too far.'
I will get attacked for even suggesting this, but whatever...
We all agree that duplication is evil and should be avoided (the Don't Repeat Yourself principle).
To ensure that, static analysis tools should be used, like Simian (multi-language) or Clone Detective (a Visual Studio add-in).
I just read Ayende's post about Kobe, where he says:
8.5% of Kobe is copy & pasted code. And that is with the sensitivity dialed high; if we set the threshold to 3, which is what I commonly do, it goes up to 12.5%.
I think that 3 as a threshold is very low.
In my company we offer code quality analysis as a service; our default threshold for duplication is set to 20, and there are still a lot of duplications. I can't imagine setting it to 3 - it would be impossible for our customers to even think about fixing them.
I understand Ayende's opinion about Kobe: it's an official sample and is marketed as “intended to guide you with the planning, architecting, and implementing of Web 2.0 applications and services.” so the expectation of quality is high.
But for your projects, what minimum threshold do you use for duplication?
Related question: How fanatically do you eliminate Code Duplication?
Three is a good rule of thumb, but it depends. Refactoring to eliminate duplication often involves trading conceptual simplicity of the codebase and API for a smaller codebase that is more maintainable once someone does understand it. I generally evaluate things in this light.
At one extreme, if fixing the duplication makes the code more readable and adds little or nothing to the conceptual complexity of the code, then any duplication is unacceptable. An example of this would be whenever the duplicated code factors out neatly into a simple referentially transparent function that does something that's easy to explain and name.
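For instance (a minimal Java sketch; the function name and pricing rule are purely illustrative):

class Pricing {
    // One small, easy-to-name, referentially transparent function replacing
    // the same expression that was previously inlined in several places.
    static double netPrice(double gross, double taxRate) {
        return gross * (1.0 + taxRate);
    }
}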
When a more complex, heavyweight solution, such as metaprogramming, OO design patterns, etc. is necessary, I may allow 4 or 5 instances, especially if the duplicated snippet is small. In these cases I feel that the conceptual complexity of the solution makes the cure worse than the ill until there are really a lot of instances.
In the most extreme case, where the codebase I'm working with is a very rapidly evolving prototype and I don't know enough about what direction the project may evolve in to draw abstraction lines that are both reasonably simple and reasonably future-proof, I just give up. In a codebase like this, I think it's better to just focus on expediency and getting things done than good design, even if the same piece of code is duplicated 20 times. Often the parts of the prototype that are creating all that duplication are the ones that will be discarded relatively soon anyhow, and once you know what parts of the prototype will be kept, you can always refactor these. Without the additional constraints created by the parts that will be discarded, refactoring is often easier at this stage.
I don't know what a good 'metric' for it is, but I would say the thing I usually strive for is:
if you have the same code in two places, and
the code is the same by intent (rather than merely coincidentally the same)
then refactor to get rid of the duplication. All duplication is bad. I'll rarely let code be in two places, and at three, it definitely has to go.
I'm personally pretty fanatical about it. I try to design my projects to avoid code duplication. My goal is to get the threshold into the low single digits; if I can't get there, it means my design is not good enough and I need to go back to the drawing board or refactor.
Depends on the programming language. (The "Clone Detective" guy seems to recognize this: "programming language constraints" is one of the boxes in his first presentation.)
In a Lisp program, any duplicate expression is easily subject to refactoring -- I guess you'd call that a threshold of 2. Everything is composed of expressions, and there are macros to translate expressions, so there's rarely an excuse for duplicating anything even once. (About the only thing I can think of that would be hard to extract is something like LOOP clauses, and in fact many people advocate avoiding LOOP for just that reason.)
In many other languages, programs consist of classes which have methods which have statements, so it's harder to just pull out an expression and use it in two different files. Often it means changing the structure of the thing as you extract it. Often there's also a requirement of type safety, which can be limiting (unless you want to write a whole lot of reflection code all the time to escape it, but if you do, you shouldn't be using a static language). If I made my current statically-typed program perfectly DRY, it would be neither shorter nor easier to maintain.
I guess the upshot of this is that what we really want is "easy to maintain". Sometimes, in some languages, that means "just put a comment here that says what you copied and why". DRY is a good indicator of maintainable code. If you're repeating a lot, that's not a good sign. But chasing any single statistic also tends to be bad -- otherwise, we'd solve all our problems by simply optimizing for that.
I think we need to combine:
the number of lines that are duplicated,
the number of times the duplicate has been copied,
how "close" the duplicates are to each other (e.g. duplicates in different products that just happen to share a source code control system are very different from duplicates in the same method), and
the time since any of the methods containing the duplicated code were last changed,
so as to get a good trade-off between the cost and benefit of removing the duplicates.
Our CloneDR finds duplicate code, both exact copies and near-misses, across large source systems, parameterized by language syntax. It supports Java, C#, COBOL, C++, PHP and many other languages.
It accepts a number of parameters to define "what is a clone?", including:
a) similarity threshold, controlling how similar two blocks of code must be to be declared clones (typically 95% is good)
b) minimum clone size in lines (3 tends to be a good choice)
c) number of parameters, i.e. distinct changes to the text (5 tends to be a good choice)
With these settings, it tends to find 10-15% redundant code in virtually everything it processes.