I have some questions:
Is a dynamic programming language always interpreted? I think so, but why?
Are there any dynamic languages with a static typing system?
Is a programming language with a static typing system always compiled?
In other words, is there really a link between:
Static / dynamic typing system and static / dynamic language
Static / dynamic typing system and compiler / interpreter
Static / dynamic language and compiler / interpreter
There is no inherent connection between the type system and the method of execution. Dynamic languages can be compiled and static languages can be interpreted. Arguably static type systems make a lot of sense with programs which are compiled before execution, as a method of catching certain kinds of errors before the program is ever executed. However, dynamic type systems solve different problems than static type systems, and interpreted execution solves different problems than compilation.
See What to know before debating type systems.
Is a dynamic programming language always interpreted? I think so, but why?
No. Most dynamic languages in wide use internally compile to either bytecode or machine code ("JIT"). There are also a number of ahead-of-time compilers for dynamically typed languages, including several for Scheme and Lisp.
Are there any dynamic languages with a static typing system?
Yes. The terms you are looking for here are "optional typing" and "gradual typing".
Is a programming language with a static typing system always compiled?
Most are, but this isn't strictly required. Many statically typed functional languages like ML, F#, and Haskell support an interactive mode in which they interpret (or internally compile and execute) code on the fly. Go also has a command (go run) to compile and run code directly from source.
In other words, is there really a link between:
Static / dynamic typing system and static / dynamic language
Static / dynamic typing system and compiler / interpreter
Static / dynamic language and compiler / interpreter
There's a soft link between the two. Most people using dynamically typed languages are using them in part because they want quick iteration while they develop. Meanwhile, most people using statically typed languages want to catch as many errors as early as they can. That means that dynamically typed languages tend to be run directly from source while statically typed languages tend to compile everything ahead of time.
But there's no technical reason preventing you from mixing it up.
Related
I am new to Java and wanted to know:
What is the need to create the .class file in Java?
Can't we just pass the source code to every machine, so that each machine can compile it according to its OS and hardware?
I believe it's mostly for efficiency reasons.
From Wikipedia (http://en.wikipedia.org/wiki/Bytecode):
Bytecode, also known as p-code (portable code), is a form of instruction set designed for efficient execution by a software interpreter. Unlike human-readable source code, bytecodes are compact numeric codes, constants, and references (normally numeric addresses) which encode the result of parsing and semantic analysis of things like type, scope, and nesting depths of program objects. They therefore *allow much better performance than direct interpretation of source code*.
(my emphasis)
And, as others have mentioned, it provides some weak obfuscation of the source code.
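To make the quoted point concrete, here is a minimal sketch of a bytecode dispatch loop, written in C. The instruction set is made up for illustration and the real JVM is far more sophisticated, but the principle is the same: the interpreter decodes compact numeric opcodes in a tight loop instead of re-parsing text.

#include <stdio.h>

/* Toy instruction set, invented for this sketch. */
enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

int main(void) {
    /* "Compiled" program: push 2, push 3, add, print, halt. */
    int code[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    int stack[16], sp = 0, pc = 0;

    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++];          break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp];  break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]);     break;
        case OP_HALT:  return 0;
        }
    }
}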
The main reason for the compilation is that the virtual machines which host and run Java classes only understand bytecode, and compiling a class into that form every time it runs would be expensive. That is why the source code is compiled into bytecode once, ahead of time.
There are also compilers which compile Java source code directly into machine code, but that's a different story which I don't know much about.
In Objective-C I often use __typeof__(obj) when dealing with blocks etc. Why not __typeof(obj) or typeof(obj)?
When should I use which?
__typeof__() and __typeof() are compiler-specific extensions to the C language, because standard C does not include such an operator. Standard C reserves identifiers beginning with a double underscore for the implementation, which is why extensions are spelled this way (and also why you should never use that prefix for your own functions, variables, etc.)
typeof() is exactly the same, but throws the underscores out the window with the understanding that every modern compiler supports it. (Actually, now that I think about it, Visual C++ might not. It does support decltype() though, which generally provides the same behaviour as typeof().)
All three mean the same thing, but none are standard C so a conforming compiler may choose to make any mean something different.
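As a small illustration of why the operator exists at all, here is the classic use of __typeof__ in a type-generic macro; this is a sketch that GCC and Clang both accept, using the double-underscore spelling so it works in any language dialect:

#include <stdio.h>

/* Type-generic swap: __typeof__ lets the macro declare a temporary
   of whatever type its arguments happen to have. */
#define SWAP(a, b) do { __typeof__(a) tmp_ = (a); (a) = (b); (b) = tmp_; } while (0)

int main(void) {
    int x = 1, y = 2;
    SWAP(x, y);                 /* works for ints...             */

    double u = 1.5, v = 2.5;
    SWAP(u, v);                 /* ...and for doubles, unchanged */

    printf("%d %d %.1f %.1f\n", x, y, u, v); /* prints: 2 1 2.5 1.5 */
    return 0;
}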
As others have mentioned, typeof() is an extension of C with varying support across compilers.
If you happen to be writing Objective-C for iOS or Mac apps, chances are good that you will be compiling your app with the Clang compiler.
Clang does support the use of typeof(), but technically it's for when your C Language Dialect is set to be a gnu* type. However __typeof__() is supported in both c* and gnu* language dialects - as detailed in the Clang documentation.
Now if you're writing your code with Xcode, the default setting for the C language dialect appears to be GNU99 and the option of allowing 'asm' 'inline' 'typeof' is set to Yes, so using typeof() won't bring you any problems.
If you want to be (arguably) safer when using the Clang compiler, use __typeof__(). This way you won't be affected if the C Language Dialect being used for compilation changes or if someone decides to turn off the allowance of 'typeof'.
Hope this will be helpful:
-ansi and the various -std options disable certain keywords. This causes trouble when you want to use GNU C extensions, or a general-purpose header file that should be usable by all programs, including ISO C programs. The keywords asm, typeof and inline are not available in programs compiled with -ansi or -std (although inline can be used in a program compiled with -std=c99 or -std=c11). The ISO C99 keyword restrict is only available when -std=gnu99 (which will eventually be the default) or -std=c99 (or the equivalent -std=iso9899:1999), or an option for a later standard version, is used.
The way to solve these problems is to put ‘__’ at the beginning and end of each problematical keyword. For example, use __asm__ instead of asm, and __inline__ instead of inline.
http://gcc.gnu.org/onlinedocs/gcc/Alternate-Keywords.html#Alternate-Keywords
https://clang.llvm.org/docs/UsersManual.html#c-language-features
I'm confused by the concept of scripts.
Can I say that a makefile is a kind of script?
Are there scripts written in C or Java?
I'd refer to Wikipedia for a detailed explanation.
"Scripts" usually refer to a piece of code or set of instructions that run in the context of another program. They usually aren't a standalone executable piece of software.
A makefile is a script that is run by make, MSBuild, or similar tools.
C needs to be compiled into an executable or a library, so programs written in (standard) C would typically not be considered scripts. (There are exceptions, but this isn't the normal way of working with C.)
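One such exception, as a sketch: the Tiny C Compiler (tcc) can compile and run a C file straight from source with tcc -run, and even honours a shebang line, which blurs the distinction (the path to tcc may differ on your system):

#!/usr/bin/tcc -run
/* With tcc installed: chmod +x hello.c && ./hello.c
   runs this file directly, shell-script style. */
#include <stdio.h>

int main(void) {
    puts("Hello from a C \"script\"");
    return 0;
}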
Java (and especially .NET) is a bit different. A typical Java program is compiled and run as an executable, but this is a grey area: it is possible to compile a "script" written in Java at runtime and execute it.
In a very general sense, the term "script" refers to code that is deployed and expected to run from its lexical (source) representation. As soon as you compile the code and distribute the resulting output instead of the code, it ceases to be a script.
Minification and obfuscation of a script are not considered compilation, and the result is still considered a script.
It depends on your definition of script. For me, a script could be any small program you write for a small purpose. They are usually written in interpreted languages. However, there's nothing stopping you from writing a small program in a compiled language.
For me, a script has to consist of a single file, and that file must be able to perform the task for which it was written with no intermediate steps.
So these would be OK:
bash backup_my_home_dir.sh
perl munge_some_text.pl
python download_url.py
But this wouldn't qualify, even if the file is small:
javac HandyUtility.java
java HandyUtility
Yes, it's possible to do scripting in Java. I've seen it many times :)
(That was sarcasm, about bad spaghetti code.)
The term 'scripting' can cover a fairly broad spectrum of activities: programming in imperative interpreted languages such as VBScript or Python, in shells such as csh or bash, or expressing a task in declarative languages such as XSL, SQL or Erlang.
Some scripting languages fall into a category referred to as domain-specific languages (DSLs). Good examples of DSLs are makefiles, many other types of configuration files, SQL, XSL and so on.
What you're asking is fairly subjective, one man's script is another man's application. If your interpretation of scripting means that using scripting languages should not force a user to follow the traditional compile -> link -> run cycle, then you could form the opinion that you can't write 'scripts' in C or Java.
A script is basically a non-compilable text file in almost any language, or shell, with an interpreter that is used to automate some process, or list of commands, that you perform repeatedly. Scripts are often used for backing up files, compiling routines, svn commits, shell initialization, etc., ad infinitum. There are a million and one things you can do with a script that an executable (complete with installation, etc.) would simply be overkill for.
I write scripts in F#. A recent one is a small data loader to take in some set of data, do a bit of processing to it, and dump it in a DB. ~40 lines. No separate compilation step needed; I can just make F# Interactive run it directly.
The benefit is that I get a fully powered language with a great IDE and all the safety static checking provides, while type inference keeps it from getting verbose like, say, Java or C#.
So that's one language that offers a reasonably decent type system, compilation and checking, isn't interpreted, and yet works fine for scripting.
To support multiple platforms in C/C++, one would use the preprocessor to enable conditional compiles. E.g.,
#ifdef _WIN32
#include <windows.h>
#endif
How can you do this in Ada? Does Ada have a preprocessor?
The answer to your question is no, Ada does not have a preprocessor built into the language. That means each compiler may or may not have one, and there is no "uniform" syntax for preprocessing and things like conditional compilation. This was intentional: a preprocessor is considered "harmful" to the Ada ethos.
There are almost always ways around the lack of a preprocessor, but often the solution can be a little cumbersome. For example, you can declare the platform-specific functions as 'separate' and then use build tools to compile the correct one (either a project system, pragma body replacement, or a very simple directory scheme: put all the Windows files in /windows/ and all the Linux files in /linux/, and include the appropriate directory for the platform).
All that being said, the GNAT developers realized that sometimes you need a preprocessor, and created gnatprep. It should work regardless of the compiler (but you will need to insert it into your build process). Similarly, for simple things (like conditional compilation) you can probably just use the C preprocessor, or even roll your own very simple one, as sketched below.
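For a flavour of what "very simple" can mean, here is a toy line-based conditional-compilation filter in C. The --#if/--#end directive syntax is invented for this sketch (chosen to look like Ada comments, so unprocessed source still compiles); gnatprep's real syntax differs:

#include <stdio.h>
#include <string.h>

/* Copies stdin to stdout, keeping lines between "--#if NAME" and
   "--#end" only when NAME was given as argv[1]. Toy code: no nesting,
   no error checking, prefix-only symbol match. */
int main(int argc, char **argv) {
    char line[1024];
    int keep = 1;

    while (fgets(line, sizeof line, stdin)) {
        if (strncmp(line, "--#if ", 6) == 0)
            keep = (argc > 1 && strncmp(line + 6, argv[1], strlen(argv[1])) == 0);
        else if (strncmp(line, "--#end", 6) == 0)
            keep = 1;
        else if (keep)
            fputs(line, stdout);
    }
    return 0;
}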
AdaCore provides the gnatprep preprocessor, which is specialized for Ada. They state that gnatprep "does not depend on any special GNAT features", so it sounds as though it should work with non-GNAT Ada compilers. Their User Guide also provides some conditional compilation advice.
I have been on a project where m4 was used as well, with the Ada spec and body files suffixed as ".m4s" and ".m4b", respectively.
My preference is really to avoid preprocessing altogether, and just use specialized bodies, setting up CM and the build process to manage them.
No, but the C preprocessor or m4 can be called on any file, either on the command line or from a build tool like make or ant. I suggest giving your .ada file a different suffix. I have done this for some time with Java files: I name the file .m4 and use a make rule to create the .java, then build it in the normal way.
I hope that helps.
Yes, it does.
If you are using GNAT compiler, you can use gnatprep for doing the preprocessing, or if you use GNAT Programming Studio you can configure your project file to define some conditional compilation switches like
#if SOMESWITCH then
-- Your code here is executed only if the switch SOMESWITCH is active in your build configuration
#end if;
In this case you can use gnatmake or gprbuild so you don't have to run gnatprep by hand.
That's very useful, for example, when you need to compile the same code for several different OS's using even different cross-compilers.
Some old Ada 83-era compilers have a package called a.app that utilized a #-prefixed subset of Ada (interpreted at build time) as a preprocessing language for generating Ada (to be then translated to machine code at compile time). Rational's Verdix Ada Development System (VADS) appears to be the progenitor of a.app among several Ada compilers. Sun Microsystems, for example, derived the Ada SPARCompiler from VADS and thus also had a.app. This is not unlike the use of PL/I as the preprocessor of PL/I, which IBM did.
Chapter 2 is some documentation of what a.app looks like: http://dlc.sun.com/pdf/802-3641/802-3641.pdf
No, it does not.
If you really want one, there are ways to get one (use C's, use a stand-alone one, etc.). However, I'd argue against it. It was a purposeful design decision not to have one. The whole idea of a preprocessor is very un-Ada.
Most of what C's preprocessor is used for can be accomplished in Ada in other more reliable ways. The only major exception is in making minor changes to a source file for cross-platform support. Given how much this gets abused in a typical cross-platform C program, I'm still happy there's no support for it in Ada. Very few C/C++ developers can control themselves enough to keep the changes "minor". The result may work, but is often nearly impossible for a human to read.
The typical Ada way to accomplish this would be to put the different code in different files and use your build system to somehow choose between them at compile time. Make is plenty powerful enough to help you do this.
You write a function and, looking at the resulting assembly, you see it can be improved.
You would like to keep the function you wrote, for readability, but you would like to substitute your own assembly for the compiler's. Is there any way to establish a relationship between your high-level language function and the new assembly?
If you are looking at the assembly, then it's fair to assume that you have a good understanding of how code gets compiled down. With that knowledge it is sometimes possible to 'reverse engineer' the changes back up into the original language, but it's often better not to bother.
The gains from such optimisations are likely to be very small in comparison to the time and effort required to make them. I would suggest that you leave this kind of work to the compiler and go have a cup of tea. If the changes are significant, and the performance is critical (as, say, in the embedded world), then you might want to mix the normal code with the assembler in some fashion; however, on most computers and chips the performance is usually sufficient to avoid this headache.
If you really need more performance, then optimise the code, not the assembly.
None, I suppose. You've rejected the compiler's work in favor of your own. You might as well throw out the function you wrote in the compiled language, because now all you have is your assembly for that platform.
I would advise strongly against engaging in this kind of optimization unless you're sure, via profiling and analysis, that you truly are making a difference.
It depends on the language you wrote your function in. Some languages like C are very low-level, translating each function call or statement to specific assembly statements. If you did use C, you can replace your function with inline assembly to improve performance (a sketch follows below).
Other high-level languages may convert each statement into macro routines or other more complex calls on the assembly side. Certain optimizations (like tail recursion, loop unrolling, etc) can be implemented easily on the source side, but others (like making more efficient use of the register file) may be impossible (again, depending on the language and the compiler you're using).
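For instance, with GCC or Clang you can embed a few instructions using extended inline assembly. A sketch (x86, AT&T syntax; the constraints tell the compiler where operands live):

/* Adds two unsigned ints with a single x86 instruction. "0" ties
   operand a to the same register as the result. Purely illustrative;
   a compiler would emit this on its own for a plain a + b. */
static unsigned add_asm(unsigned a, unsigned b) {
    unsigned result;
    __asm__("addl %2, %0" : "=r"(result) : "0"(a), "r"(b));
    return result;
}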
It's tough to say there is any relationship between the modified assembly and the source which generated the unmodified version. It will certainly confuse debugging tools: register contents will no longer match the source variables they are supposed to correspond to.
There are a number of places in packet-processing code where I've examined the generated assembly and gone back to change the original source code in order to improve the result. Rearranging source can reduce the number of branches; __attribute__ and compiler arguments can align branch points and functions to reduce instruction-cache (I$) misses. In desperate cases a little inline assembly can be used, so that the binary can still be compiled from source.
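As one concrete example of that kind of source-level nudging, GCC and Clang provide __builtin_expect, commonly wrapped in likely/unlikely macros, to tell the compiler which side of a branch is hot. The packet-length check here is a hypothetical illustration:

/* Branch-prediction hints: the compiler lays out the expected path
   as the fall-through, reducing taken branches on the hot path. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

int process_packet(const unsigned char *pkt, unsigned len) {
    if (unlikely(len < 20))      /* malformed packets are rare */
        return -1;
    /* ... hot path continues here ... */
    (void)pkt;
    return 0;
}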
Something you could try is to separate your original function into its own file, and provide a make rule to build the assembler from there. Then update the assembler file with your improved version, and provide a make rule to build an object file from the assembler file. Then change your link rules to include that object file.
If you only ever change the assembler file, that will keep on being used. If you ever change the original higher-level language file, the assembler file will be rebuilt and the object file built from the new (unimproved) version.
This gives you a relationship between the two; you probably want to add a warning comment at the top of the higher-level language file to warn about the behaviour. Using some form of VCS will give you the ability to recover the improved assembler file if you make a mistake here.
If you're writing a native compiled app in Visual C++, there are two methods:
Use the __asm { } block and write your assembler in there.
Write your functions in MASM assembler, assemble to .obj, and link it as a static library. In your C/C++ code, declare the function with an extern "C" declaration (sketched below).
Other C/C++ compilers have similar approaches.
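A sketch of the declaration side of the second approach (my_asm_func is a hypothetical name; its body would live in a separately assembled .obj):

/* Routine implemented in a MASM .asm file and linked in. extern "C"
   suppresses C++ name mangling so the linker sees the plain symbol;
   plain C needs only the prototype. */
#ifdef __cplusplus
extern "C" {
#endif
int my_asm_func(int x);
#ifdef __cplusplus
}
#endif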
In this situation, you generally have two options: optimize the code or rewrite the compiler. I can't see where breaking the link between source and object code is ever going to be the correct solution.