Dynamics NAV C/AL naming conventions

I am facing certain C/AL tasks these days, and since I am used to coding in C#, C/AL seems a bit "different" in several respects.
In particular, I am wondering why it is recommended to use variable names starting with uppercase letters.
From my point of view, using camelCase notation for variables would be a benefit in terms of readability.
Is there any reason why Microsoft recommends it that way?

I do not think there is a specific reason why Pascal Case (first letter always uppercase) is used. That being said, it is more of a guideline for all developers so that the code is uniform across all products. The general idea is that if you merge code from two different sources (e.g. two different developers), the end result appears as if the code came from a single source.
Some companies have their own internal rules for how code should be formatted. I prefer to use the naming conventions specified by Microsoft because:
it makes my code consistent with the Navision standard code (objects in the range 1..49999),
it makes my code consistent with my coworkers' code (our company policy is to use the Microsoft naming conventions).
The MSDN Naming Conventions page states:
"Precise and consistent terminology helps the end user work with the application. Rules for naming and abbreviating objects also help developers to understand the CRONUS International Ltd. demonstration database and develop new features faster."
Pascal Case should be used for general code consistency and overall uniformity, but it is not necessary or required. I would advise you to consult your company policy on naming conventions and follow that, or, if you are starting fresh, to follow the Microsoft naming guidelines.
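To make the difference concrete, here is a sketch in Java syntax (not C/AL; the names are invented) of the two styles being compared:

// Illustration only: Java syntax with invented names, not actual C/AL code.
class NamingStyles {
    // Variable names starting with an uppercase letter, as the Microsoft
    // C/AL guidelines recommend (Pascal Case):
    int LineAmount;
    String CustomerName;

    // camelCase variable names, as the question's author is used to from C#:
    int lineAmount;
    String customerName;
}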

Related

Are there specific grammar rules for naming variables?

I am creating an ontology for urban systems. For instance, if we have a variable that indicates the size of the population, I would name it (using the so-called camel notation) sizeOfPopulation, and the length of the street lengthOfStreet. Is there a specific or standardized way of doing it?
There is no correct answer to this question because it's extremely subjective.
Programming Style, Coding Conventions and Naming Conventions.
You are probably familiar with: Tabs versus Spaces?
TL;DR: Choose a style with your team or for yourself, and be consistent. Look at strictly managed open source code for ideas, e.g. Qt, ChibiOS, Linux.

How Do I Design Abstract Semantic Graphs?

Can someone direct me to online resources for designing and implementing abstract semantic graphs (ASG)? I want to create an ASG editor for my language. Being able to edit the ASG directly has a number of advantages:
Only identifiers and literals need to be typed in and identifiers are written only once, when they're defined. Everything else is selected via the mouse.
Since the editor knows the language's grammar, there are no more syntax errors. The editor prevents them from being created in the first place.
Since the editor knows the language's semantics, there are no more semantic errors.
There are some secondary advantages:
Since all the reserved words are easily separable, a program can be written in one locale and viewed in another. On-the-fly changes of locale are possible.
All the text literals are easily separable, so changes of locale are easily made, including on-the-fly changes.
I'm not aware of a book on the matter, but you'll find the topic discussed in portions of various books on computer language. You'll also find discussions of this surrounding various projects which implement what you describe. For instance, you'll find quite a bit of discussion regarding the design of Scratch. Most workflow engines are also based on scripting in semantic graphs.
Allow me to opine... We've had the technology to manipulate language structurally for basically as long as we've had programming languages. I believe that the reason we still use textual language is a combination of the fact that it is more natural for us as humans, who communicate in natural language, to wield, and the fact that it is sometimes difficult to compose and refactor code when proper structure has to be maintained. If you're not sure what I mean, try building complex expressions in Scratch. Text is easier and a decent IDE gives virtually as much verification of correct structure.*
*I don't mean to take anything away from Scratch; it's a thing of beauty and is perfect for its intended purpose.
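I'm not aware of a canonical reference implementation either, but as a rough, hypothetical sketch of the idea (Java here, with all type names invented), the core of such an editor is a node type that refuses children the grammar doesn't allow:

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: every node knows which children the grammar allows,
// so an editor working on this structure can never build an invalid program.
abstract class AsgNode {
    private final List<AsgNode> children = new ArrayList<>();

    List<AsgNode> children() {
        return children;
    }

    // A real editor would consult the language's grammar and semantics here.
    abstract boolean canAccept(AsgNode child);

    void attach(AsgNode child) {
        if (!canAccept(child)) {
            throw new IllegalArgumentException("this child is not allowed here");
        }
        children.add(child);
    }
}

// Literals are leaves, which matches the idea that only identifiers and
// literals are ever typed in by the user.
abstract class Expression extends AsgNode { }

class IntLiteral extends Expression {
    final int value;

    IntLiteral(int value) {
        this.value = value;
    }

    boolean canAccept(AsgNode child) {
        return false; // literals never take children
    }
}

// A binary addition accepts at most two children, and only expressions.
class AddExpression extends Expression {
    boolean canAccept(AsgNode child) {
        return child instanceof Expression && children().size() < 2;
    }
}

An editor built on a representation like this would offer node kinds from a palette and route every edit through attach, so invalid structure can never be created in the first place.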

What antipattern does this common programming mistake fall under?

For a programming project,
let's say the programmer has named similar-style functions inconsistently in many places, for example...
int ask_bro_4_data();
and another as
int ask_mom_for_data();
What antipattern does this represent? Essentially, it's a lack of standardization, right? As in, one function uses "for", while the other uses "4".
Similarly, the programmer could be naming variables in some fashion that relates to their use, but fails to do so in every case, or does so in a non-standardized way. This makes searching for these variables in a large code base harder, because they may not follow the naming convention you assume they would.
Any ideas? Sorry for the ambiguous title, but I was not sure what to label this question as.
This would be considered more a syntax convention than a pattern.
The English language would lead us to prescribe using words in preference to numerals in order to improve maintainability. However, conventions can vary significantly depending on your peer group.
A design pattern would be considered a solution intended to address common problems introduced by a specific context.
For example: I want to ensure my application can only ever access the same instance of a given class. A basic pattern to address this problem would be the Singleton.
If the solution then introduces more problems than it solves, it becomes an anti-pattern.
In this example, Singletons are hard to unit test, which is one reason why many consider it an anti-pattern.
Anti-Pattern: Rename later
When the programmer realizes that he/she or his/her colleagues are inconsistent in naming and decides to do something about it later, or decides that it is not important to do anything about at all.
This can be coped with by:
clear guidelines from the team about what to strive for in respecting naming conventions,
recognizing that refactoring is an ongoing process, parallel to the coding,
simple IDE commands that let the user, after thinking "oh, we used '4' here and 'for' here, that's disturbing", hit *Ctrl+R, Ctrl+R*, think "ah, that's better", and *continue coding*.
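As a tiny, made-up illustration of that last point (Java-flavored here, since the question's C-style declarations are only examples), the rename only touches the inconsistent part:

// Made-up, Java-flavored version of the question's example. Only the
// "4" vs "for" inconsistency is fixed; everything else stays the same.
class BeforeRename {
    int ask_bro_4_data() { return 1; }   // uses "4"
    int ask_mom_for_data() { return 2; } // uses "for"
}

class AfterRename {
    int ask_bro_for_data() { return 1; } // renamed via the IDE: now both use "for"
    int ask_mom_for_data() { return 2; }
}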

What is the point of the lower camel case variable casing convention (thisVariable, for example)?

I hope this doesn't get closed due to being too broad. I know it comes down to personal preference, but there is an origin to all casing conventions and I would like to know where this one came from and a logical explanation as to why people use it.
It's where you go all like var empName;. I call that lower camel, although it's probably technically called something else. Personally, I go like var EmpName. I call that proper camel and I like it.
When I first started programming, I began with the lower camel convention. I didn't know why. I just followed the examples set by all the old guys. Variables and functions (VB) got lower camel while subs and properties got proper camel. Then, after I finally acquired a firm grasp on programming itself, I became comfortable enough to question the tactics of my mentors. It didn't make logical sense to me to use lower camel because it wasn't consistent, especially if you have a variable that consists of one word which ends up being in all lowercase. There is also no validation mechanism in place to make sure you are appropriately using lower vs. upper camel, so I asked why not just use proper camel for everything. It's consistent since all variable names are subject to proper camelization.
Having dug deeper into it, it turns out that this is a very sensitive issue to many programmers when it is brought to question. They usually answer with, "Well, it's just personal preference" or "That's just how I learned it". Upon prodding further, it usually invokes a sort of dogmatic reaction with the person as I attempt to find a logical reason behind their use of lower camel.
So anyone want to shed a little history and logic behind casing of the proper camelatory variety?
It's a combination of two things:
The convention of variables starting with lowercase, to differentiate them from classes or other entities which use a capital. This is also sometimes used to differentiate based on access level (private/public).
CamelCasing as a way to make multi-word names more readable without spaces (this is of course a preference over underscores, which some people use). I would guess the logic is that CamelCasing is easier/faster for some to type than word_underscores.
Whether or not it gets used is of course up to whoever is setting the coding standards that govern the code being written. Underscores vs CamelCase, lowercasevariables vs Uppercasevariables. CamelCase + lowercasevariable = camelCase.
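A small hypothetical Java snippet showing both points at once: the initial lowercase letter separates variables from types, and camelCase keeps multi-word names readable without underscores:

// Hypothetical example: type names start with an uppercase letter and
// variable names with a lowercase letter, so the two can be told apart
// at a glance; camelCase keeps multi-word names readable.
class CustomerOrder {
    private String customerName;   // lowerCamelCase instead of customer_name
    private int totalLineCount;

    CustomerOrder(String customerName, int totalLineCount) {
        this.customerName = customerName; // "this." separates field from parameter
        this.totalLineCount = totalLineCount;
    }
}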
In languages like C# or VB, the standard is to start private things with lowercase and public/protected things with uppercase. This way, just by looking at the first letter, you can tell whether the thing you are messing with could be used by other classes, and thus whether any changes need more scrutiny. Also, there are tools to enforce naming conventions like this. The one created and used internally at Microsoft is called StyleCop and is available as a free download.
Historically, well named variables in C (a case-sensitive language) consisted of a single word in lower case. UPPERCASE was reserved for macros.
Then came along C++, where classes are usually CapitalizedAndCamelCased, and variables/functions consisting of several words are camelCased. (Note that C people tend to dislike camelCase, and instead write identifiers_this_way.)
From there, it spread.
And, yes, probably other case-sensitive languages have had some influence.
lowerCamelCase, I think, has become popular because of Java and JavaScript.
In Java it is specifically defined: for method names, the first word should be a verb in lowercase, and each remaining word starts with a capital letter.
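For example (names invented, but the pattern follows the Java convention described above):

// Java's convention for method names: start with a lowercase verb,
// then capitalize each following word.
class ReportPrinter {
    void printReport() { }
    int countPendingOrders() { return 0; }
    boolean isEmpty() { return true; }
    String getCustomerName() { return "example"; }
}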
The reason why Java chose lowerCamelCase, I think, depends on what they wanted to solve. Java was launched in 1995 as a language that would make programming easy. C/C++, which was widely used at the time, was often considered difficult and too technical.
This was something Java claimed to solve: more people would be able to program, and the same code would work on different hardware. The code was the documentation; you didn't need to comment code, just read it and everything would be great.
lowerCamelCase makes it harder to write "technical" code because it removes the option of using uppercase and lowercase letters to better describe the code from a technical perspective. Java didn't want to be hard; Java was the language to use where everyone could learn to program.
JavaScript in browsers was created in 10 days by Brendan Eich in 1995. Why JavaScript selected lowerCamelCase is, I think, because of Java. It has nothing to do with Java, but it has "Java" in its name, "JavaScript".

Common variable names in different languages

I see a lot of different styles of variable names used in different kinds of languages. Sometimes these names are lowercase with underscores (e.g. test_var), and other times I see variables like testVar.
Is there a specific reason why programmers use different variable name styles in different languages?
It's really just the convention for that programming language.
For example, most Java programs use camel casing (testVar), while a lot of C programs use _ to separate words (test_var).
It's completely the choice of the programmer, but most languages have "standard" naming conventions.
As Wikipedia says:
Reasons for using a naming convention (as opposed to allowing programmers to choose any character sequence) include the following:
to reduce the effort needed to read and understand source code;
to enhance source code appearance (for example, by disallowing overly long names or abbreviations).
Also, there are code conventions in companies that care about the readability of their code.
This simplifies code sharing between programmers, so they don't waste time figuring out what variable names like "aaa" and "bbb" mean.
There is no real reason. Each language and sometimes even platform can have varying naming conventions.
For instance, in .NET you would see TestVar if it were a public class variable. In C++, testVar would probably be opted for. In Ruby, test_var, and so on. It's just a matter of preference by the community and/or creators.
I urge you to follow language standards. I work on a team that has had many developers working on the code over the years, and very few standards have been followed. The majority of our code is nearly unreadable. I have been working on a standardization project for the last several months. It has been very difficult to enforce and get buy-in. I'm hopeful that people will come around as they start seeing the benefits of easy to read code.
For naming conventions/standards keep this in mind:
Follow team/company standards
Follow language standards
Follow the style that the program is already using
Do whatever you want (not really: if you don't have standards, follow your language's standards/conventions).