Does a language exist where the scopes for procedures and variables are not both static or both dynamic?

As the title asks, does even a single language exist where the scoping for procedures is static (or dynamic) but the scoping for variables is the opposite?
I suspect not, since such a language would be a nightmare to work in, but does anyone know of one?
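To make the question concrete, here is a minimal sketch in Python, which is statically scoped; the comments describe what a dynamically scoped language would do instead:

x = "global"

def report():
    # Static (lexical) scope: this x resolves to the module-level binding,
    # no matter who calls report().
    return x

def caller():
    x = "caller's local"  # under dynamic scope, report() would see this one
    return report()

print(caller())  # prints "global"; a dynamically scoped language
                 # (e.g. Emacs Lisp, or Perl variables declared with
                 # "local") would print "caller's local" instead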

Related

Why are languages generally either statically typed or dynamically typed (not both)?

I don't understand this. I understand the pros and cons of each, but why don't languages like Python let you specify a variable's type at initialization, and function argument and return types, when you wish, so the interpreter won't waste time checking them at runtime in the programs (or the parts of your code) where speed is important, while leaving them off where it isn't?
It just seems like a waste of time for users to switch between languages somewhat needlessly in these situations, and for the developers of a language to lose users, or have them avoid the language for some of their projects, because of this.
Initializing a variable with a specific type in a dynamically typed language would be pointless, because the variable could be reassigned with a value of a different type later on; the type of a variable is determined by the value assigned to it anyway. So making static typing of variables optional wouldn't actually provide any extra functionality.
Second, compile-time checking of function arguments wouldn't work either, because the types of the values passed in can't be determined until runtime. And functions can be written to check the types of their own arguments in a dynamically typed language, so there's no need to implement another system for this.
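For what it's worth, Python does allow optional annotations of exactly the kind the question describes; the standard interpreter simply records them without enforcing them, so they don't speed anything up. A minimal sketch:

def area(width: float, height: float) -> float:
    # Annotations are recorded but not enforced by CPython at runtime.
    return width * height

print(area(3.0, 4.0))  # 12.0
print(area("ab", 2))   # "abab": the call still runs; a static checker
                       # such as mypy would flag it instead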

Any languages whose functions cannot access global scope?

I've been writing a bit in a dialect of BASIC that has user-defined functions which can only access local variables; for example, the following code:
let S$ = "Hello, world!"
fn.def someFunction$()
print S$
fn.rtn "a string"
fn.end
X$ = someFunction$()
would print a blank line, because S$ does not have a value in the context of someFunction$.
The question: are there other languages in common use that have global scope which cannot be accessed from inside a function?
The basis of this question is a misunderstanding. This dialect of Basic, like most others, does not have a global scope. There are many languages in the same category.
First an explanation. Many early computer languages had a single scope in which all variables were defined. When this became too limiting they added a subroutine capability which either shared the same scope (COBOL PERFORM and BASIC GOSUB) or defined a completely separate scope with argument passing (FORTRAN CALL and RETURN).
One language was different: Algol. It defined nested lexical scope, so that a reference to a variable could resolve to the current block or to an outer enclosing block. At the time this was an unusual feature, and it was not widely copied.
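Sketched in Python, one of the many later languages that did adopt lexical scope (this is just an illustration, not Algol itself):

def outer():
    message = "defined in outer"
    def inner():
        # The free variable "message" resolves to the enclosing block,
        # which is exactly Algol's nested lexical scope rule.
        return message
    return inner()

print(outer())  # prints "defined in outer"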
Fortran also provided a linkage mechanism called COMMON, which was adopted by some other languages. C added block scope and external scope (with external linkage), but not nested functions, so one function can never access variables in another function's scope.
The dialect of Basic you are asking about belongs to the Basic/Fortran family. It has non-overlapping scopes for the main program and for each user-defined function, but apparently no external linkage. User-defined functions have their own scope, so of course they cannot access variables in the main program, which lives in a quite different scope. Some dialects of Basic have a COMMON-like feature, but I don't think this one does.
So the answer is that most languages (of this kind) do not provide nested scopes and do not allow an inner scope to access the contents of an outer one. [The Lisp family tree is quite different, of course.]
There is one interesting exception. Object-oriented languages mostly derive from Simula, an Algol-like language that introduced the idea of nesting the method scope inside the class scope. That idea has definitely caught on.
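A rough sketch of the Simula idea in Python, where the nesting is made explicit through self rather than implicit as in Simula, C++, or Java:

class Account:
    def __init__(self, balance):
        self.balance = balance  # state in the object's scope

    def deposit(self, amount):
        # The method body reaches the enclosing object's state.
        self.balance += amount
        return self.balance

print(Account(100).deposit(25))  # 125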

The state variable is never a parameter of a function, right? (How to Design Programs)

In Chapter 36.4 of HtDP (How to Design Programs),
I found this warning:
Warning: The state variable is never a parameter of a function.
But as far as I've heard, in functional programming, functions are corrupted if they refer to state variables: they are no longer pure functions, they are hard to test, they behave unpredictably, they cannot be memoized, etc. State variables should be passed as parameters, not just referenced like global constants.
So I wonder:
is HtDP arguing something wrong,
are global state variables allowed in some functional programming practices, or
do I have the wrong idea?
Thanks in advance.
Disclaimer: I like and respect this book very much and have learned a lot from it. I would actually like to spread good words about it to my friends (if any). So don't get me wrong.
I don't think there's anything incompatible between what you've heard about functional programming and what is written in the chapter you linked. However, you're conflating two concepts here: the presence of mutable state in functional programs (a purity issue) versus the order in which things are evaluated and the restrictions of the syntax you have available for writing things down.
Consider: if you're using an eager evaluation strategy, then passing a "state variable" of the kind described in that chapter would have the effect of dereferencing it, and you would get the variable's value as the function argument. Similarly, if the variable were bound as a parameter of the function, you would get a different bit of memory at every call. There are many different options here; the fact that some languages permit you to pass references around as values is not universal.
So they are really just describing global variables (or variables that are accessed from some parent scope), which by their very nature need not be passed to functions as parameters. If the specific language permits pass-by-reference, this might not be such a clear distinction.
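To make the contrast concrete, here is a minimal Python sketch (the names are made up for illustration):

# HtDP-style state variable: it lives in an enclosing scope and is
# mutated, so it is deliberately not a parameter of the function.
counter = 0

def tick():
    global counter
    counter += 1
    return counter

# Pure alternative: the state is threaded through as a parameter.
def tick_pure(counter):
    return counter + 1

print(tick(), tick())              # 1 2; the order of calls matters
print(tick_pure(0), tick_pure(0))  # 1 1; referentially transparent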

Reasoning for Language-Required Variable Name Prefixes

The browser-based software StudyTRAX ( http://wiki.studytrax.com ), used for research data management, allows for custom form and form variable management via JavaScript. However, a StudyTRAX "variable" (essentially, a representation of both an element of a form [HTML properties included] and its corresponding parameter, with some data typing/etc.) must be referred to with #<varname>, while regular JavaScript variables will just be <varname>.
Is this sort of thing done to make parsing easier, or is it just to distinguish between the two so that researchers who aren't so technologically inclined won't have as much trouble figuring out what they're doing? Given the nature of JavaScript, I would think the StudyTRAX "variables" are just regular JavaScript objects defined in such a way as to make form design and customization simpler, and thus the latter would make more sense, but am I wrong?
Also, I know there are other programming languages that require specific variable prefixes (though I can't think of any off the top of my head at the moment); what is, or was, the usual reasoning for that choice in language design?
Two-part answer. First, StudyTRAX is almost certainly using a preprocessor to do some magic. JavaScript makes this relatively easy, but not as easy as a Lisp would: you still need to parse the code. Thanks to the prefix, the parser can ignore a lot of JavaScript's complicated syntax and get to the good part without needing a "picture perfect" compiler. A lot of templating systems actually do this; it is in effect an implementation of Lisp's quasi-quote (see Greenspun's Tenth Rule).
As for prefixes in general, the best way to understand them is to try to write a parser for a language without them. For very dynamic and pure languages like Lisp and JavaScript, where everything is a list or an object, it is not too bad. But in languages where methods are distinct from objects, or functions are not first class, the parser starts having to ask itself what kind of thing "foo" refers to. An annoying example from Ruby: an unprefixed identifier is either a local variable or a method implicitly called on self. In Rails there are a few methods that are implemented with method_missing. Person.find_first_by_rank works fine, but
class Person < ActiveRecord::Base
  def promotion(name)
    p = find_first_by_rank
    [...]
  end
end
gives an error because find_first_by_rank looks like it might be a local variable and Ruby is scared to call method_missing on something that might just be a misspelled local variable.
Now imagine trying to distinguish between instance variables (prefix @), class variables (prefix @@), global variables (prefix $), constants (first letter capitalized), and method names and local variables (no prefix, lowercase) by context alone.
(From a compiler and language design hobbyist.)
Your question is mostly specific to the StudyTRAX software.
In the early days of programming, variables in Basic used suffixes such as $ (for strings: "a$") to distinguish them from numeric values. Today, some programming languages such as PHP prefix variables with "$". FORTRAN implicitly typed variables whose names start with I through N as integers, and the rest as floats.
Transforming, and later executing, code is a complex task; that's why many language designers use shortcuts like adding prefixes or suffixes to variables.
Many colleges and universities have specialized courses, like "Compilers", "Automata", and "Language Design", on transforming code in a programming language into something the computer can execute, because it is not an easy task.
Perl requires different variable prefixes, depending on the type of data:
$scalar = 4.2;
@array = (1, 4, 9, 16);
%map = ("foo" => 42, "bar" => 17, "baz" => 137);
As I understand it, this is so the reader can immediately identify what kind of object they're dealing with. It's not a matter of whether the reader is technologically inclined or not: if you reduce the programmer's cognitive load, they can use their brainpower for more important things than figuring out fiddly syntactic details.
Whether Perl's design is successful in this respect is another question, but I believe that's the reasoning behind the feature.

Is static typing a subset of dynamic typing?

I was going to add this as a comment to my previous question about type theory, but I felt it probably deserved its own exposition:
If you have a dynamic typing system, and you add a "type" member to each object and verify that this "type" has a specific value before executing a function on the object, how is this different from static typing (other than the fact that it happens at run time instead of compile time)?
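For concreteness, I mean something like this hypothetical sketch (Python, with made-up names):

def tag(type_name, value):
    # Every value carries an explicit "type" member.
    return {"type": type_name, "value": value}

def add(a, b):
    # The tag is verified at run time, before the operation executes;
    # a dynamic analogue of what a static checker proves once, up front.
    if a["type"] != "int" or b["type"] != "int":
        raise TypeError("add expects two ints")
    return tag("int", a["value"] + b["value"])

print(add(tag("int", 2), tag("int", 3)))  # {'type': 'int', 'value': 5}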
Technically, it is actually the other way round: a "dynamically typed" language is a special case of a statically typed language, namely one with only a single type (in the mathematical sense). That, at least, is the viewpoint of many in the type systems community.
Edit regarding static vs dynamic checking: only local properties can be checked dynamically, whereas properties that require some kind of global knowledge cannot. Think of properties such as something being unique, something not being aliased, a computation being free of race conditions. A suitable static type system can verify such properties, because it has the ability to establish certain invariants on the context of the expression that is being checked.
Static typing happens at compile time, not at run time, and that difference is essential!
See Benjamin Pierce's book Types and Programming Languages for more.