Why are operators different in different languages?

Why do operators have different functions in different languages?

Because different languages exist to solve different problems, and are developed at different times, by different people with different levels of knowledge, under different outside constraints.
Depending on what problems a language tries to solve (or tries to solve first), some of the easier characters to type might have already been used for more common or newer concepts by the time a new operator is added.
E.g. PHP wasn't an object-oriented language at first, so it used . as the concatenation operator. When object-oriented PHP was added later, a different operator was needed for accessing fields, which is why PHP ended up with ->.
On the other hand, in a language like HyperTalk, which doesn't have data structures, you do not need a field-resolution operator at all.

Is there a way to make SQL standard compliant queries using Visual Studio?

I just wanted to know if there is an SQL standard compliance validator out there for Visual Studio 2019 Professional (something that could be set to strict, so that only absolutely compliant syntax would be accepted). It would be nice if it had support for native languages too, but I'm used to that kind of thing being CLR-only (I don't really know why; probably because of linking... I may be absolutely wrong, though; it's just a guess).
Something important is that it needs to be standard-compliant, not only SQL Server-compliant: anything that is not in the standard should be treated as an error.
The goal is to make SQL code that is completely independent of the DBMS. Thank you for taking the time to read my question.
The goal is to make SQL code that is completely independent of the DBMS.
Impossible goal, unless you are going to forsake writing SQL at all. It is perhaps sad, but different databases differ on very fundamental things, picking and choosing the parts of the standard they want to support. Happily, the major things like SELECT, JOIN and GROUP BY are common, but the details are not.
You can think of them like dialects of a spoken language that vary over time and region. I'm most familiar with English, but all languages evolve and change. I can read Shakespearean English, but I am not going to write English like that: the result would be grammatically incorrect in some cases, use unknown words, and rely on alternative meanings of common words.
Here are just some examples of some features that differ widely among databases:
Intervals. In the standard, adding an interval to a date uses an interval literal, such as some_date + INTERVAL '1' DAY. This varies significantly across databases (see the sketch after this list).
Some databases do not support FULL JOIN.
Some databases do not support recursive CTEs. Some use the recursive keyword; some do not.
Some databases do not support the VALUES() constructor in the FROM clause.
Some databases allow the FROM clause to be optional.
The standard has nifty functionality, such as FILTER and aggregation by functionally dependent ids, that few databases support.
Limitations on data types vary significantly -- what the maximum string length is, for instance.
The standard uses FETCH to limit results, which some databases do not support.
Parsing strings into dates and formatting dates into strings is totally database-dependent.
Extracting date/time components uses extract() in the standard, but few databases actually support that functionality.
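To make a couple of these concrete, here is a hedged sketch (the orders table and its columns are invented for illustration) of one logical query written in standard syntax and then in common vendor variants:

    -- Standard SQL row limiting (FETCH), not supported everywhere:
    SELECT order_id, order_date
    FROM orders
    ORDER BY order_date DESC
    FETCH FIRST 10 ROWS ONLY;

    -- Common dialect equivalents:
    --   PostgreSQL / MySQL / SQLite:  ... ORDER BY order_date DESC LIMIT 10;
    --   SQL Server:                   SELECT TOP 10 order_id, order_date FROM orders ORDER BY order_date DESC;

    -- Standard interval arithmetic, again dialect-dependent:
    SELECT order_date + INTERVAL '1' DAY FROM orders;
    --   SQL Server:  DATEADD(day, 1, order_date)
    --   Oracle:      order_date + 1  (or order_date + INTERVAL '1' DAY)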
These are just a few of the differences off the top of my head -- in no way meant to be complete or even the most important. I just want to point out that what you want to do is not possible.

Are there conventional synonyms used to replace keywords reserved in programming languages?

The main examples of the words I mean are "object", "value", etc. In many cases (well, not really many, but on some occasions at least) you may find yourself wanting to name a variable or some other element of yours this way.
Another example I have stumbled upon in my practice is "try", which is both a keyword used in exception handling (in many C-like and other languages) and the currency code for the Turkish lira. But this is an example just for fun; I doubt there are any common practices known for this particular case (though I feel like there may be for the previous one).
What do people do in such cases? What are some synonyms for an object, a value, etc. that are reasonable in the programming and data modelling context?
For example, imagine you are developing an object database where manipulating objects, properties and values (rather than documents, fields and... eh... values) is, for some reason, among the key ideas of its philosophy, and you really don't want to use words semantically too distant from these. What words would you use to replace the reserved ones while keeping the sense very close to theirs?
The easiest solution to come to my mind so far is to use misspelled (or spelled in a different language's orthography) varieties of the same words, like "objekt", "walue" etc., but although this can do the job, it disgusts me so much that I really don't want to accept going this way.
UPDATE: Indeed, in some specific cases (particular languages) using a different case (which may sometimes go against the community's and/or the company's naming convention) and/or namespaces (which were introduced almost exactly for this) may solve the problem at least partially, but I am still interested in alternatives, as I believe that duplicating a system keyword is something one should at least think about avoiding (if there is a way to do it easily without compromises considered too serious) in every case.
I am even considering writing a script that would scrape through GitHub to analyse the common code-element naming vocabulary, but I think it is always a good idea to ask first rather than to "reinvent a bicycle"; perhaps somebody has done something like this already.
UPDATE 2: Please do me a favour and consider the following with an applicable degree of objectivity before voting to close. With all due respect, I would like to emphasise that the actual degree of subjectivity of this question is excusably low (though, I admit, somewhat above zero). The only real flaw is that it might perhaps fit the English site better, but I believe the audience of StackOverflow is much more relevant to (generally informed in a much more relevant way about) the context. The actual goal of publishing this question is to highlight a problem that is fairly easy to understand and whose existence cannot be denied (though its importance may be questionable so far), yet is spoken of too little (as the importance of code clarity and semantic relevance increases, IMHO, code as a medium is quickly moving towards obtaining greater cultural importance, in the broad meaning of the word, than books), and to let people share the ways of addressing it in practice that they know of.
Capitalization: Often, a different capitalization instead of a synonym does the trick, as most languages are case-sensitive. E.g. object = new Object();
Prefix / Postfix: Another often-encountered solution is to write myObject = new Object(). Which one you choose really depends on the naming conventions you follow. For private class fields, some developers use an underscore, e.g. this._object, indicating a private access modifier.
Specification: In most cases, however, you can find a more specific word for your object describing its role - such as instance, parent, child or argument - or its subtype - such as integer, or n, instead of a generic number datatype.
In addition to the above, many language communities follow de-facto conventions such as cls for Class, obj for Object, me or self for this etc.
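The same problem shows up in SQL-based data modelling with reserved words such as ORDER, USER or GROUP. As a hedged sketch (the table and column names are invented for illustration), the usual options are either delimited identifiers or a more specific name:

    -- Option 1: a delimited identifier keeps the reserved word, at the
    -- cost of quoting it everywhere (quoting style is dialect-dependent:
    -- "order" is standard, [order] in SQL Server, `order` in MySQL).
    CREATE TABLE "order" (
        id         INTEGER PRIMARY KEY,
        order_date DATE NOT NULL
    );

    -- Option 2 (usually preferred): a more specific, unreserved name.
    CREATE TABLE sales_order (
        id         INTEGER PRIMARY KEY,
        order_date DATE NOT NULL
    );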

How to show that something increases relational expressive power?

How do I show that something increases relational expressive power? For example I have been given a problem in which I need to show whether adding some certain functionality to the select-project-join queries of SQL increases the expressive power. Do I give an example and show that it is not expressible?
First you must decide what it is that is being expressed by the two notations. (I.e. what it is that they are expressing, i.e. are expressive of, i.e. are denoting.) Otherwise, the problem doesn't make much sense.
Eg: As long as two notations' sets of expressions are countably infinite, they can be set in 1:1 correspondence. So anything that one set's expressions can express, the corresponding expression from the other set can be assigned to express. So they are, in this trivial sense, equally expressive. (Which sense is, essentially, equally expressive of each other's expressions.)
In being told what our two notations are expressing we are generally given for each:
some primitive expressions
some rules for generating expressions
some primitive things
some rules for generating things
a mapping from expressions to things
Sometimes the mapping is from terminal expressions to primitive things and from non-terminal expressions to structured things, but it doesn't have to be like that.
To show that one notation is more expressive (of whatever they are expressing) is to show that one notation can express all the things that the other can plus some that it cannot.
It is ok for the "things" to actually be expressions of one of the notations, with a trivial mapping from each of its expressions to itself, and the other (the less expressive) notation mapping only to a proper subset of it (the more expressive). (The reason that expressibility here is able to differ from the example above is that here each expression of the two notations is being defined to express something different than it is in that example.)
See discussions in the Alice book or Maier's book. These deal with database querying languages. Eg expressively equivalent versions of relational algebra, relational tuple calculus and relational domain calculus, and also other languages like predicate logic and versions of Datalog.
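As a concrete SQL-flavoured illustration of that pattern: select-project-join queries cannot express transitive closure (reachability), while a language with recursion can. A hedged sketch, using an invented edges(src, dst) table:

    -- Reachability from node 1 is expressible with a recursive CTE...
    WITH RECURSIVE reach(node) AS (
        SELECT dst FROM edges WHERE src = 1
        UNION
        SELECT e.dst
        FROM edges e
        JOIN reach r ON e.src = r.node
    )
    SELECT node FROM reach;

    -- ...but any fixed select-project-join query can only follow a bounded
    -- number of joins, so no single SPJ query returns all reachable nodes
    -- along paths of arbitrary length. Proving that kind of gap is exactly
    -- the "one notation expresses something the other cannot" argument.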

Creating domains for every "logical" domain

I have a databases class in which the prof wants us to create domains for every type, even when these just end up being aliases to other types. For example, instead of using the default DATE type, we would create our own type depending on what kind of date it is (e.g., OrderDate).
I'm wondering if this is common or a best practice.
I can think of some pros and cons to this approach. A pro is that it makes it clear exactly what the domain is intended for, and typically we'd only compare fields if they have the same domain and any other comparison is something to watch for (since it could be comparing apples to oranges). But as a con, this also makes it more confusing to work with the types, as we'd have to refer to the domain declaration to figure out what kind of type a column really is (not that we need to do this too often).
This is not a particularly common practice. For instance, I have worked on many databases over the years and I have never used such substitutions for base types.
In your example, for instance, an order date may well be an order date. But I might want to know how long ago in the past that was -- this requires "mixing" types, because the current date (sysdate? now()? getdate()? CURRENT_TIMESTAMP?) is not an OrderDate. Or I might want to know how long after the order the first complaint or first return was made. Even if the conversion is invisible and automatic, why introduce incompatible types?
Another issue is that different databases differ in their support for user-defined data types. So, code using user defined types would likely make code more difficult to port to a different database. Why limit future options?
User-defined types do have a place for particular new types that might be needed -- complex numbers and points, perhaps. There may even be some situations in some databases where a user-defined type on a base type is useful -- for instance, to represent a telephone number consistently. Using them liberally as substitutes for built-in types? It seems like overkill, complicating the code, hampering some important queries, and limiting future portability options.
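For concreteness, here is a hedged sketch of what the professor's approach looks like in PostgreSQL-flavoured SQL (the names are invented, and domain support and coercion behaviour differ by database):

    -- A domain that is essentially an alias (plus an optional constraint)
    -- over the built-in DATE type.
    CREATE DOMAIN order_date AS DATE
        CHECK (VALUE >= DATE '2000-01-01');

    CREATE TABLE orders (
        order_id  INTEGER PRIMARY KEY,
        placed_on order_date NOT NULL
    );

    -- The "mixing types" concern from above: in PostgreSQL the domain
    -- coerces back to its base type, so subtracting it from a plain date
    -- works, but that behaviour (or domains at all) may not be available
    -- in other databases.
    SELECT order_id, CURRENT_DATE - placed_on AS days_since_order
    FROM orders;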

In non-procedural languages, what specifies how things are to be done?

If you compare C vs SQL, this is the argument:
In contrast to procedural languages such as C, which describe how things should be done, SQL is nonprocedural and describes what should be done.
So, the how part for languages like SQL is specified by the language itself, is it? What if I want to change the way some query works. Suppose I want to change the way a SELECT is handled. Is that possible?
So, the how part for languages like SQL is specified by the language itself, is it?
Not strictly by the language (ie. SQL), but normally by the database and its optimiser. As such, even where the same data is being queried from tables with the same structures and the same indexes, some databases will build the resultset in a different way to others.
Suppose I want to change the way a SELECT is handled. Is that possible?
To some degree, yes. You can either:
Rewrite the query, to achieve the same result a different way, or
Use hinting - http://en.wikipedia.org/wiki/Hint_%28SQL%29 (both options are sketched below)
Neither of these directly instruct the database engine which approach to use, but both of them will affect how the resultset is returned - this is likely to vary between databases.
Additionally, I understand that some databases have additional interfaces that allow more low-level interaction with the database engine, enabling greater control over how a query is built than is possible from plain SQL. (However, your question did specify SQL.)
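For illustration, here is a hedged sketch of both options (vendor syntax varies, and the tables, columns and hint are made up for the example):

    -- Option 1: rewrite the query. Same result, different shape, which may
    -- lead the optimiser to a different plan, e.g. replacing IN with EXISTS:
    SELECT c.customer_id
    FROM customers c
    WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.customer_id);

    -- Option 2: hint the optimiser. The syntax is vendor-specific; this is
    -- Oracle-style comment hinting (SQL Server uses OPTION(...) and table
    -- hints, MySQL uses USE INDEX and /*+ ... */ optimizer hints):
    SELECT /*+ FULL(o) */ COUNT(*)
    FROM orders o
    WHERE o.order_date >= DATE '2024-01-01';

    -- Most databases also let you inspect the chosen plan (EXPLAIN,
    -- EXPLAIN PLAN, SET SHOWPLAN ...), which is how you check whether
    -- either change actually made a difference.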
This is actually exaggerating the difference. There is no clear-cut point at which one is telling how things are done and the other only what is to be done. Rather, one may have to specify what/how things are done at a greater level of detail than the other. A typical SQL implementation allows the user to control such things as what indexes are used (or ignored), what kind of locking to do, and so on.
If you were to do the same job in C, you would (at some point) have to specify a great deal more detail (unless you used something like ODBC). Nonetheless, you're still telling what should be done, not all the details of how it should be done (e.g., despite being about as low-level as possible short of assembly language, C will still do some type conversions automatically, so you don't have to tell it how to do something like adding an integer to a floating point number -- you just tell it to add them, and it handles the details).
Bottom line: trying to talk about one as procedural and the other as non-procedural is misleading. SQL doesn't always require as much detail, but it's a difference of degree, not really "how" versus "what".