Why do some developers prefix variables with the word "my"?

I've noticed some of the developers on my team tend to prefix their variables with "my" (e.g. var myHello = 'Hello World';). These aren't instance variables or anything, just regular variables that reside within a method.
Is there any significance to this naming convention? To me it comes off a little "newbie-ish" -- as if they just graduated from their LOGO class. But these are seemingly seasoned devs so I could be way off.

I totally disagree with you. Setting aside the fact that judging a person's skill by their variable names is questionable, some teams simply have coding standards other than the ones you're used to. For your information, Perl (if you didn't know) has a reserved word my that declares a variable, and it has nothing to do with coding standards. Maybe he had no better name for something that is already implemented in the system and wanted to keep things simple? Or maybe he just likes tying himself personally to the code he writes?

Related

What does 'smt' mean, and why is 'smt' often used as a prefix in database names?

I find that many databases in industry are named with the prefix 'smt_', for example 'smt_customer_profile'. What does 'smt' mean? Is there any special meaning to this prefix?
I asked my supervisor about this and got the answer that the reasoning behind the naming is "very geeky", which really piqued my curiosity! So I tried to google it; some interpretations are:
Satisfiability Modulo Theories (SMT)
Simultaneous multithreading
Small Mini-Tower
all seemingly not the right answer...
So, does anybody know about this? Many thanks!
I haven't seen this in industry myself, but my best guess would be Subscription Management Tool; it's the only SMT I found in relation to databases:
https://www.suse.com/documentation/smt11/book_yep/data/book_yep.html
It's part of the SUSE Linux operating system.
The actual meaning of smt could be anything. It falls into a class of naming conventions called Hungarian notation -- that is, a prefix that encodes the "type" or "class" of an object.
Sometimes this was literally the type -- in other cases it might be the equivalent of a modern module (before object-oriented programming).
In any case, it is very old-fashioned and has been replaced by namespaces or schemas for most things.
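As a hedged sketch (every identifier here is invented, not taken from any real schema), this is roughly what Hungarian-style prefixing looks like next to the modern namespace-based alternative, in Java:

// Invented sketch: Hungarian-style prefixes bake the "type"/category
// into every identifier.
public class NamingSketch {
    public static void main(String[] args) {
        String strCustomerName = "Ada";   // "str" marks a string
        int nRetryCount = 3;              // "n" marks an integer count
        boolean bIsActive = true;         // "b" marks a boolean

        // Modern equivalent: let the namespace do the grouping
        // (e.g. a package like com.example.billing, the analogue of an
        // "smt_" table prefix), so the names themselves stay clean.
        String customerName = "Ada";
        int retryCount = 3;
        boolean isActive = true;
    }
}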

What antipattern does this common programming mistake fall under?

For a programming project, let's say the programmer has named similar functions in different styles in many places, for example...
int ask_bro_4_data();
and another as
int ask_mom_for_data();
What antipattern does this represent? Essentially, it's a lack of standardization, right? As in, one function uses "for" while the other uses "4".
Similarly, the programmer could name variables in some fashion that relates to their use but fail to do so in every case, or do so in a non-standardized way. This makes searching for these variables in a large code base harder, because they may not follow the naming convention you assume they would.
Any ideas? Sorry for the ambiguous title, but I was not sure how to label this question.
This would be considered more a syntax convention than a pattern.
The English language would lead us to prescribe using words in preference to numerals in order to improve maintainability. However, conventions can vary significantly depending on your peer group.
A design pattern would be considered a solution intended to address common problems introduced by a specific context.
For example: I want to ensure my application can only ever access the same instance of a given class. A basic pattern to address this problem would be the Singleton.
If the solution then introduces more problems than it solves, it becomes an anti-pattern.
In this example, Singletons are hard to unit test, so that is one reason why many consider the Singleton an anti-pattern.
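To make that concrete, here is a minimal Java sketch of the classic (non-thread-safe) lazy Singleton; the class name ConfigStore is invented for illustration. Every caller is hard-wired to the static accessor, which is exactly what makes substituting a test double awkward:

// Invented example of a classic lazy Singleton.
public final class ConfigStore {
    private static ConfigStore instance;  // the single shared instance

    private ConfigStore() { }             // private constructor: no outside construction

    public static ConfigStore getInstance() {
        if (instance == null) {
            instance = new ConfigStore(); // created on first use
        }
        return instance;
    }
}

// Any code calling ConfigStore.getInstance() cannot easily be handed a
// mock in a unit test, which is one reason the pattern draws criticism.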
Anti-Pattern: Rename later
This is when the programmer realizes that they or their colleagues are inconsistent in naming and decides to do something about it later -- or decides that it is not important enough to do anything about at all.
This can be coped with by:
clear guidelines from the team about what to strive for in respecting naming conventions,
recognizing that refactoring is an ongoing process, parallel to the coding.
simple IDE commands that let the user, after thinking "oh, we used '4' here and 'for' there, that's disturbing", hit *Ctrl+R, Ctrl+R* -- "ah, that's better" -- and continue coding (see the sketch below).
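As an invented before/after sketch in Java (Ctrl+R, Ctrl+R triggers Rename in Visual Studio; IntelliJ-family IDEs use Shift+F6), a rename refactoring collapses the question's mismatched pair into one convention:

interface DataSources {
    // Before the rename: "4" in one name, "for" in the other,
    // so a search for one style misses the other:
    //   int ask_bro_4_data();
    //   int ask_mom_for_data();

    // After the rename: one consistent convention, easy to search for.
    int askBroForData();
    int askMomForData();
}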

What is the point of the lower camel case variable casing convention (thisVariable, for example)?

I hope this doesn't get closed due to being too broad. I know it comes down to personal preference, but there is an origin to all casing conventions and I would like to know where this one came from and a logical explanation as to why people use it.
It's where you go all like var empName;. I call that lower camel, although it's probably technically called something else. Personally, I go like var EmpName;. I call that proper camel, and I like it.
When I first started programming, I began with the lower camel convention. I didn't know why. I just followed the examples set by all the old guys. Variables and functions (VB) got lower camel while subs and properties got proper camel. Then, after I finally acquired a firm grasp on programming itself, I became comfortable enough to question the tactics of my mentors. It didn't make logical sense to me to use lower camel because it wasn't consistent, especially if you have a variable that consists of one word which ends up being in all lowercase. There is also no validation mechanism in place to make sure you are appropriately using lower vs. upper camel, so I asked why not just use proper camel for everything. It's consistent since all variable names are subject to proper camelization.
Having dug deeper into it, it turns out that this is a very sensitive issue for many programmers when it is brought into question. They usually answer with, "Well, it's just personal preference" or "That's just how I learned it". Upon further prodding, it usually provokes a sort of dogmatic reaction as I attempt to find a logical reason behind their use of lower camel.
So anyone want to shed a little history and logic behind casing of the proper camelatory variety?
It's a combination of two things:
The convention of variables starting with lower case, to differentiate them from classes or other entities which use a capital. This is also sometimes used to differentiate based on access level (private/public).
CamelCasing as a way to make multi-word names more readable without spaces (of course this is a preference over underscore, which some people use). I would guess the logic is that CamelCasing is easier/faster for some to type than word_underscores.
Whether or not it gets used is of course up to whoever is setting the coding standards that govern the code being written: underscores vs. CamelCase, lowercasevariables vs. Uppercasevariables. CamelCase + lowercasevariable = camelCase.
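A small Java sketch of those two conventions combining (names invented for illustration):

// Type names take the initial capital: they name a class.
public class EmployeeRecord {

    // Variables take the lowercase first letter, with CamelCasing joining
    // the remaining words: lowercase start + CamelCase = camelCase.
    private String employeeName;
    private int yearsOfService;
}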
In languages like C# or VB, the standard is to start private things with lowercase and public/protected things with uppercase. This way, just by looking at the first letter you can tell whether the thing you are messing with could be used by other classes, and thus whether any changes need more scrutiny. Also, there are tools to enforce naming conventions like this. The one created and used internally at Microsoft is called StyleCop, and it is available as a free download.
Historically, well named variables in C (a case-sensitive language) consisted of a single word in lower case. UPPERCASE was reserved for macros.
Then came along C++, where classes are usually CapitalizedAndCamelCased, and variables/functions consisting of several words are camelCased. (Note that C people tend to dislike camelCase, and instead write identifiers_this_way.)
From there, it spread.
And, yes, probably other case-sensitive languages have had some influence.
lowerCamelCase, I think, has become popular because of Java and JavaScript.
In Java it is specifically defined: method names should be verbs in mixed case, starting with a lowercase letter, with each subsequent word starting with a capital letter.
Why Java chose lowerCamelCase depends, I think, on what the language wanted to solve. Java was launched in 1995 as a language that would make programming easy. C/C++, which was widely used at the time, was often considered difficult and too technical.
This was something Java claimed to solve: more people would be able to program, and the same code would work on different hardware. The code was the documentation; you didn't need to comment code, just read it, and everything would be great.
lowerCamelCase makes it harder to write "technical" code because it removes the option of using uppercase and lowercase letters to describe the code from a technical perspective. Java didn't want to be hard; Java was the language to use where everyone could learn to program.
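A sketch of the convention as the Java Code Conventions describe it (the class and member names below are invented):

public class ShoppingCart {                  // class: a noun, UpperCamelCase

    private int itemCount;                   // variable: lowerCamelCase

    public void addItem(String itemName) {   // method: verb first, lowerCamelCase
        itemCount++;
    }
}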
JavaScript in browsers was created in 10 days by Brendan Eich in 1995. Why JavaScript selected lowerCamelCase is, I think, because of Java: it has nothing to do with Java, but it has "java" in its name, "JavaScript".

Are namespace collisions really an issue in Objective-C?

Objective-C doesn't have namespaces, and many (such as CocoaDevCentral's Cocoa Style Guide) recommend prefixing your class names with initials to avoid namespace collision.
Quoting from the above link:
Objective-C doesn't have namespaces, so prefix your class names with initials. This avoids "namespace collision," which is a situation where two pieces of code have the same name but do different things.
That makes sense, I suppose. But honestly, in the context of a relatively small app (say, an iPhone game), is this really an issue? Should I really rename MyViewController to ZPViewController? If not, at what point do namespace collisions really become a concern?
If you're writing an application that uses some set of libraries, then you already know what your namespace looks like and you just need to select names that do not conflict with existing available functions.
However, if you are writing a library for use by others, then you should pick a reasonably unique prefix to try to avoid name collisions with other libraries. With only two characters there may still be name collisions, but the frequency will be reduced.
A small app won't use up all the good names, so it shouldn't have a problem with namespace collisions.
But it is a good idea to get used to the style that languages are generally written in. It makes it easier to read other people's code, and for others to read yours.
E.g., use camelCase variables in Java, but CamelCase vars in C#, underscore_separated_names in C, etc.
It will make it easier for you to learn in the long run.
I have read (but haven't verified) that Apple has private classes in their frameworks that don't include any prefixes on the names. So if your application classes' names have no prefixes, you risk colliding with those.
I've worked with repositories where classes were not prefixed. (Or only some of the classes were prefixed.)
One thing that I found painful is it's sometimes hard to tell if code was written by someone inside or outside the company. Using a consistent prefix makes it immediately obvious for someone reading the code for the first time.
Keep in mind that code will be read many more times than written.
Also, it can definitely come in handy when using tools like grep and sed.

Declaration of variable names

What is the best way to name variables? In uppercase? In lowercase? Where should they be declared in each case, and what name is appropriate depending on the role of the variable? Sorry for the question, I'm new to the world of programming... I hope it's not a bother. =)
Well, here are some links to coding standards for various languages.
These cover standards for variable naming and a lot more.
C# coding standards
C++ coding standards
Java coding standards
And here is a generic coding standards article that explains the reasoning behind the coding standards.
At least for C and C++, we can use Hungarian notation.
If:
the language doesn't dictate it; and
your coding standards don't dictate it,
then just make it as readable as possible. Hordes of developers in the future will sing praises to your name for not inflicting horrible code on them.
My personal favorite is all uppercase with underscores for constants (IQ_LIMIT) and camel case for everything else (getItemById(), itemCount). But that's personal preference, not something written on stone tablets.
It really depends on the programming language you use, and any coding conventions that are followed by a group.
For example, there are the GNU Coding Standards for writing C code, which cover everything from variable names down to the indentation of lines.
Likewise, the Code Conventions for the Java Programming Language lay out coding conventions for capitalization and naming of variables, packages, classes, methods, etc. in the Java programming language.
When in Rome, do as the Romans. Each language usually has its own idioms with respect to these sorts of things.
IMO, knowing the scope of a variable is the most important thing. You should know at a glance how much code can affect a variable and how much code will be affected by your changing it. In this way encapsulation (and your sanity) can be maintained. You won't accidentally change a global variable and mysteriously hose the whole program. Also, globals should stand out like a sore thumb, just begging to be refactored away.
Therefore, upper-case the first letter for globals (where a "global" is any variable that can be seen by more than one function) and lower-case the first letter for everything else. Constants traditionally get all caps.
So in studlyCaps style it would be:
GlobalVariable
localVariable
CONSTANTVARIABLE
And using underscores:
Global_Variable
local_variable
CONSTANT_VARIABLE
Whether you use studlyCaps or underscores depends on your programming language and local style (I prefer underscores for their readability and the lack of confusion about capitalization).
In C#, we use PascalCase for properties and method names and camelCase for other members. For constants we use CAPS_WITH_UNDERSCORES. For HTML elements, Hungarian notation is used. (I think these are Microsoft standards.)
A corollary to "When in Rome..." is to do as the previous coder has done. When you are working on another developer's code or project, you should match your style to the existing style. While seeing a weird convention is puzzling and hard to deal with at first, it is nothing compared to sorting out a file that switches notation and style every couple of functions.
When working on your own project, or as a single developer you can do what is most comfortable within reason.