Is functional programming considered more "mathematical"? If so, why?

Every now and then, I hear someone saying things like "functional programming languages are more mathematical". Is it so? If so, why and how? Is, for instance, Scheme more mathematical than Java or C? Or Haskell?
I cannot define precisely what "mathematical" means, but I believe you get the idea.
Thanks!

There are two common(*) models of computation: the Lambda Calculus (LC) model and the Turing Machine (TM) model.
Lambda Calculus approaches computation by representing it using a mathematical formalism in which results are produced through the composition of functions over a domain of types. LC is also related to Combinatory Logic, which is considered a more generalized approach to the same topic.
The Turing Machine model approaches computation by representing it as the manipulation of symbols stored on idealized storage using a small set of basic operations (reading a symbol, writing a symbol, and moving over the storage).
These different models of computation are the basis for different families of programming languages. Lambda Calculus has given rise to languages like ML, Scheme, and Haskell. The Turing Machine model has given rise to C, C++, Pascal, and others. As a generalization, most functional programming languages have a theoretical basis in lambda calculus.
Due to the nature of Lambda Calculus, certain proofs are possible about the behavior of systems built on its principles. In fact, provability (ie correctness) is an important concept in LC, and makes possible certain kinds of reasoning and conclusions about LC systems. LC is also related to (and relies on) type theory and category theory.
By contrast, Turing models rely less on type theory and more on structuring computation as a series of state transitions in the underlying model. Turing Machine models of computation are more difficult to make assertions about and do not lend themselves to the same kinds of mathematical proofs and manipulation that LC-based programs do. However, this does not mean that no such analysis is possible - some important aspects of TM models are used when studying virtualization and static analysis of programs.
Because functional programming relies on careful selection of types and transformation between types, FP can be perceived as more "mathematical".
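To make the lambda-calculus view concrete, here is a minimal sketch in Python (chosen just because it has first-class functions; the encoding is the classic Church-numeral one). Numbers are represented purely as functions, and addition is nothing but function composition - no mutable state involved:

ZERO = lambda f: lambda x: x                      # apply f zero times
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))   # apply f one more time
ADD = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Convert a Church numeral to a plain integer by counting applications.
    return n(lambda k: k + 1)(0)

two = SUCC(SUCC(ZERO))
three = SUCC(two)
print(to_int(ADD(two)(three)))   # prints 5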
(*) Other models of computation exist as well, but they are less relevant to this discussion.

Pure functional programming languages are examples of a functional calculus and so in theory programs written in a functional language can be reasoned about in a mathematical sense. Ideally you'd like to be able to 'prove' the program is correct.
In practice such reasoning is very hard except in trivial cases, but it's still possible to some degree. You might be able to prove certain properties of the program, for example you might be able to prove that given all numeric inputs to the program, the output is always constrained within a certain range.
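As a toy illustration of that kind of range property (the function and bounds here are hypothetical, just to make the idea concrete): because clamp below is pure, a short case analysis on min and max proves that its result always lies in [lo, hi] for every input, and a random spot-check can back that up:

import random

def clamp(x, lo=0.0, hi=1.0):
    # Pure function: the result depends only on the arguments.
    return max(lo, min(hi, x))

# Not a proof, but a mechanical check of the proven property:
assert all(0.0 <= clamp(random.uniform(-1e6, 1e6)) <= 1.0
           for _ in range(10000))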
In non-functional languages with mutable state and side effects, attempts to reason about a program and 'prove' its correctness are all but impossible, at the moment at least. With non-functional programs you can think through the program and convince yourself that parts of it are correct, and you can run unit tests that exercise certain inputs, but it's usually not possible to construct rigorous mathematical proofs about the behaviour of the program.

I think one major reason is that pure functional languages have no side effects, i.e. no mutable state, they only map input parameters to result values, which is just what a mathematical function does.
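A small sketch of the contrast (the function names are mine, purely for illustration): the first function is a mathematical mapping from arguments to a result; the second depends on and changes hidden state, so the same call can return different values over time:

# Pure: same arguments, same result, no side effects.
def scale(xs, k):
    return [k * x for x in xs]

# Impure: the result depends on mutable state outside the function.
total = 0
def accumulate(x):
    global total
    total += x
    return total

print(scale([1, 2, 3], 2))    # always [2, 4, 6]
print(accumulate(5))          # 5 this time...
print(accumulate(5))          # ...but 10 the next time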

The logical structure of functional programming is heavily based on lambda calculus. While it may not look mathematical if you only think of algebra-style formulas, it translates very naturally from discrete mathematics.
In comparison to imperative programming, it doesn't prescribe exactly how to do something, but rather what must be done. This reflects topology.

The mathematical feel of functional programming languages comes from a few different features. The most obvious is the name: "functional", i.e. using functions, which are fundamental to math. The other significant reason is that functional programming involves defining a collection of statements that will always be true, which through their interactions achieve the desired computation -- this is similar to how mathematical proofs are done.
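For instance (a deliberately tiny example of my own): a functional definition of factorial is just a pair of equations that always hold, and the computation falls out of them, much as it would in a proof by induction:

def fact(n):
    # fact(0) = 1 and fact(n) = n * fact(n - 1): two facts that are
    # always true, which together define the whole computation.
    return 1 if n == 0 else n * fact(n - 1)

print(fact(5))   # 120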

Related

Limitations of optimisation software such as CPLEX

Which of the following optimisation methods can't be done in an optimisation software such as CPLEX? Why not?
Dynamic programming
Integer programming
Combinatorial optimisation
Nonlinear programming
Graph theory
Precedence diagram method
Simulation
Queueing theory
Can anyone point me in the right direction? I didn't find too much information regarding the limitations of CPLEX on the IBM website.
Thank you!
That's kind of a big shopping list, and most of the things on it are not optimisation methods.
For sure CPLEX does integer programming, non-linear programming (just quadratic, SOCP, and similar, but not general non-linear) and combinatorial optimisation out of the box.
It is usually possible to re-cast things like dynamic programming (DP) as MILP models, but this will obviously require a bit of work. Lots of MILP models are also based on graphs, so yes, it is certainly possible to solve a lot of graph problems using a MILP solver such as CPLEX.
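As a sketch of what "graph problem as MILP" can look like in practice, here is a tiny shortest-path model written with the open-source PuLP modelling library (PuLP can hand the model to CPLEX as a back-end solver; the graph and costs below are made up for illustration):

import pulp

# Hypothetical directed graph: arc -> cost.
arcs = {("s", "a"): 1, ("s", "b"): 4, ("a", "b"): 1,
        ("a", "t"): 5, ("b", "t"): 1}
nodes = {"s", "a", "b", "t"}

prob = pulp.LpProblem("shortest_path", pulp.LpMinimize)
x = {a: pulp.LpVariable("x_%s_%s" % a, cat="Binary") for a in arcs}
prob += pulp.lpSum(cost * x[a] for a, cost in arcs.items())

# Flow conservation: one unit leaves s, one unit arrives at t.
for n in nodes:
    supply = 1 if n == "s" else -1 if n == "t" else 0
    prob += (pulp.lpSum(x[a] for a in arcs if a[0] == n)
             - pulp.lpSum(x[a] for a in arcs if a[1] == n)) == supply

prob.solve()
print([a for a in arcs if x[a].value() == 1])   # [('s','a'), ('a','b'), ('b','t')]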
Looking more widely at topics like simulation: that is quite a different approach. Simulation really is NOT an optimisation method, but it can be used alongside optimisation to get extra insights which may be useful in a business context. It might be used, for example, to discover some empirical relationships that could then be used in an optimisation model by CPLEX.
The same can probably also be said for things like queuing theory, precedence, etc. Basically, use CPLEX as an optimisation tool to solve part or all of your problem once you have structured and analysed it via one of these other approaches.
Hope that helps.

Is mixed integer linear programming used to implement optimization algorithms (e.g., genetic or particle swarm)

I am learning about optimization algorithms for automatic grouping of users. However, I am completely new to these algorithms; I heard about them as I reviewed the related literature. In one of the articles, by contrast, the authors implemented their own algorithm (based on their own logic) using Integer Programming (this is how I heard about IP).
I am wondering if one needs to implement a genetic/particle swarm (or any other optimization) algorithm using mixed integer linear programming, or is this just one of the options. In the end, I will need to build a web-based system that groups users automatically. I appreciate any help.
I think you are confusing the terms a bit. These are all different optimization techniques. You can surely represent a problem using Mixed Integer Programming (MIP) notation, and then solve it with a MIP solver, with genetic algorithms (GA), or with Particle Swarm Optimization (PSO).
Integer Programming is part of a more traditional paradigm called mathematical programming, in which a problem is modelled based on a set of somewhat rigid equations. There are different types of mathematical programming models: linear programming (where all variables are continuous), integer programming (where all variables are discrete), mixed integer programming (a mix of continuous and discrete variables), and nonlinear programming (where some of the equations are not linear).
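To make the linear programming case concrete, here is a minimal sketch using SciPy's linprog (a free solver, standing in for a commercial one like CPLEX; the numbers are made up). linprog minimizes, so a maximization objective is negated:

from scipy.optimize import linprog

# maximize 3x + 2y  subject to  x + y <= 4,  x <= 2,  x, y >= 0
res = linprog(c=[-3, -2],                  # negated for maximization
              A_ub=[[1, 1], [1, 0]],
              b_ub=[4, 2],
              bounds=[(0, None), (0, None)])
print(res.x)   # approximately [2. 2.], objective value 10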
Mathematical programming models are good and robust; depending on the model, you can tell, for example, how far you are from an ideal solution. But these models often struggle in problems with many variables.
On the other hand, genetic algorithms and PSO belong to a younger branch of optimization techniques, one that is often called metaheuristics. These techniques often find good, or at least reasonable, solutions even for large and complex problems, and they have many practical applications.
There are some hybrid algorithms that combine mathematical models and metaheuristics, and in this case, yes, you would use both MIP and GA/PSO. Choosing an approach (MIP, metaheuristics or hybrid) is very problem-dependent; you have to test what works better for you. I would usually prefer mathematical models if the focus is on the accuracy of the solution, and I would prefer metaheuristics if my objective function is very complex and I need a quick, although poorer, solution.
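For contrast, here is a deliberately minimal GA sketch on a toy objective (maximize the number of 1-bits in a string); everything here is made up for illustration, but it shows how different the mechanics are from writing down a MIP model:

import random

def fitness(bits):                      # toy objective: count the 1s
    return sum(bits)

def crossover(p1, p2):                  # one-point crossover
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def mutate(bits, rate=0.02):            # independent bit flips
    return [b ^ (random.random() < rate) for b in bits]

pop = [[random.randint(0, 1) for _ in range(50)] for _ in range(30)]
for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                  # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(len(pop) - len(parents))]
    pop = parents + children

print(max(fitness(ind) for ind in pop))   # best fitness found (at most 50)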

Definitions of Phenotype and Genotype

Can someone help me understand the definitions of phenotype and genotype in relation to evolutionary algorithms?
Am I right in thinking that the genotype is a representation of the solution, and the phenotype is the solution itself?
Thanks
Summary: For simple systems, yes, you are completely right. As you get into more complex systems, things get messier.
That is probably all most people reading this question need to know. However, for those who care, there are some weird subtleties:
People who study evolutionary computation use the words "genotype" and "phenotype" frustratingly inconsistently. The only rule that holds true across all systems is that the genotype is a lower-level (i.e. less abstracted) encoding than the phenotype. A consequence of this rule is that there can generally be multiple genotypes that map to the same phenotype, but not the other way around. In some systems, there are really only the two levels of abstraction that you mention: the representation of a solution and the solution itself. In these cases, you are entirely correct that the former is the genotype and the latter is the phenotype.
This holds true for:
Simple genetic algorithms where the solution is encoded as a bitstring (a minimal sketch follows this list).
Simple evolution strategies, where a real-valued vector is evolved and the numbers are plugged directly into the function being optimized.
A variety of other systems where there is a direct mapping between solution encodings and solutions.
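A minimal sketch of the bitstring case (the encoding and bounds are arbitrary choices of mine): the bitstring is the genotype, and the real number it decodes to, which is what the fitness function actually sees, is the phenotype:

genotype = [1, 0, 1, 1, 0, 0, 1, 0]     # the low-level encoding

def decode(bits, lo=-5.0, hi=5.0):
    # Map the bitstring to a real number in [lo, hi].
    n = int("".join(str(b) for b in bits), 2)
    return lo + (hi - lo) * n / (2 ** len(bits) - 1)

phenotype = decode(genotype)            # the solution itself, roughly 1.98
print(phenotype)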
But as we get to more complex algorithms, this starts to break down. Consider a simple genetic program, in which we are evolving a mathematical expression tree. The number that the tree evaluates to depends on the input that it receives. So, while the genotype is clear (it's the series of nodes in the tree), the phenotype can only be defined with respect to specific inputs. That isn't really a big problem - we just select a set of inputs and define the phenotype based on the set of corresponding outputs. But it gets worse.
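A sketch of that situation (the tree and inputs are invented): the genotype is the tree structure itself, while the phenotype is only defined once we fix a set of inputs and look at the corresponding outputs:

import operator

# Genotype: the expression tree (x * x) + 3, stored as nested tuples.
tree = (operator.add, (operator.mul, "x", "x"), 3)

def evaluate(node, x):
    if node == "x":
        return x
    if not isinstance(node, tuple):
        return node                      # a constant leaf
    op, left, right = node
    return op(evaluate(left, x), evaluate(right, x))

inputs = [0, 1, 2, 3]
phenotype = [evaluate(tree, x) for x in inputs]
print(phenotype)                         # [3, 4, 7, 12]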
As we continue to look at more complex algorithms, we reach cases where there are no longer just two levels of abstraction. Evolutionary algorithms are often used to evolve simple "brains" for autonomous agents. For instance, say we are evolving a neural network with NEAT. NEAT very clearly defines what the genotype is: a series of rules for constructing the neural network. And this makes sense - that is the lowest-level encoding of an individual in this system. Stanley, the creator of NEAT, goes on to define the phenotype as the neural network encoded by the genotype. Fair enough - that is indeed a more abstract representation. However, there are others who study evolved brain models and classify the neural network as the genotype and the behavior as the phenotype. That is also completely reasonable - the behavior is perhaps even a better phenotype, because it's the thing selection is actually based on.
Finally, we arrive at the systems with the least definable genotypes and phenotypes: open-ended artificial life systems. The goal of these systems is basically to create a rich world that will foster interesting evolutionary dynamics. Usually the genotype in these systems is fairly easy to define - it's the lowest level at which members of the population are defined. Perhaps it's a ring of assembly code, as in Avida, or a neural network, or some set of rules as in geb. Intuitively, the phenotype should capture something about what a member of the population does over its lifetime. But each member of the population does a lot of different things. So ultimately, in these systems, phenotypes tend to be defined differently based on what is being studied in a given experiment. While this may seem questionable at first, it is essentially how phenotypes are discussed in evolutionary biology as well. At some point, a system is complex enough that you just need to focus on the part you care about.

VHDL optimization tips [closed]

I am quite new to VHDL, and by using different IP cores (from different providers) I can see that they sometimes differ massively in the space they occupy or in the timing they achieve.
I was wondering if there are rules of thumb for optimization in VHDL (like there are in C, for example: unroll your loops, etc.).
Is it related to the synthesis tools I am using (like the different compilers are using other methods of optimization in C, so you need to learn to read the feedback asm files they return), or is it dependent on my coding skills?
Is it related to the synthesis tools I am using (like the different compilers are using other methods of optimization in C, so you need to learn to read the feedback asm files they return), or is it dependent on my coding skills?
The answer is "yes." When you are coding in HDL, you are actually describing hardware (go figure). Instead of the code being converted into machine code (like it would be with C), it is synthesized into logic functions (AND, NOT, OR, XOR, etc.) and memory elements (RAM, ROM, FF...).
VHDL can be used in many different ways. You can use VHDL in a purely structural sense where, at the base level, you are calling out primitives of the underlying technology that you are targeting. For example, you literally instantiate every AND, OR, NOT, and flip-flop in your design. While this can give you a lot of control, it is not an efficient use of time in 99% of cases.
You can also implement hardware using behavioral constructs with VHDL. Instead of explicitly calling out each logic element, you describe a function to be implemented. For example: if this, do this; otherwise, do something else. You can describe state machines, math operations, and memories all in a behavioral sense. There are huge advantages to describing hardware in a behavioral sense:
Easier for humans to understand
Easier for humans to maintain
More portable between synthesis tools and target hardware
When using behavioral constructs, knowing your synthesis tool and your target hardware can help in understanding how what you write will actually be implemented. For example, if you describe a memory element with an asynchronous reset, the implementation in hardware will be different for architectures with a dedicated asynchronous reset input to the memory element and those without.
Synthesis tools will generally publish in their reference manual or user guide a list of suggested HDL constructs to use in order to obtain some desired implementation result. For basic cases, they will be what you would expect. For more elaborate behavior models (e.g. a dual port RAM) there may be some form that you need to follow for the tool to "recognize" what you are describing.
In summary, for best use of your target device:
Know the device you are targeting. How are the programmable elements laid out? How many inputs and outputs are there from lookup tables? Read the device user manual to find out.
Know your synthesis engine. What types of behavioral constructs will be recognized and how will they be implemented? Read the synthesis tool user guide or reference manual to find out. Additionally, experiment by synthesizing small constructs to see how it gets implemented (via RTL or technology viewer, if available).
Know VHDL. Understand the differences between signals and variables. Be able to recognize statements that will generate many levels of logic in your design.
I was wondering if there are rules of thumb for optimization in VHDL
Now that you know the hardware, synthesis tool, and VHDL... Assuming you want to design for maximum performance, the following concepts should be adhered to:
Pipeline, pipeline, pipeline. The more levels of logic you have between synchronous elements, the more difficulty you are going to have making your timing constraint/goal.
Pipeline some more. Having additional stages of registers can provide additional wiggle-room in the future if you need to add more processing steps to your algorithm without affecting the overall latency/timeline.
Be careful when operating on the boundaries of the normal fabric. For example, if interfacing with an IO pin, dedicated multipliers, or other special hardware, you will take more significant timing hits. Additional memory elements should be placed here to avoid critical paths forming.
Review your synthesis and implementation reports frequently. You can learn a lot from reviewing these frequently. For example, if you add a new feature, and your timing takes a hit, you just introduced a critical path. Why? How can you alleviate this issue?
Take care with your "global" structures -- such as resets. Logic that must be widely distributed in your design deserves special care, since it needs to reach across your whole device. You may need special pipeline stages, or timing constraints on this type of logic. If at all possible, avoid "global" structures, unless truly a requirement.
While synthesis tools have design goals to focus on area, speed or power, the designer's choices and skills are the major contributors to the quality of the output. A designer should have a goal to maximize speed or minimize area, and it will greatly influence his choices. A design optimized for speed can be made smaller by asking the tool to reduce the area, but not nearly as much as the same design conceived for area in the first place.
However, it is more complicated than that. IP cores often target several FPGA technologies as well as ASIC. This can only be achieved by using general VHDL constructs (or re-writing the code for each target, which providers of non-critical IP don't do). FPGA and ASIC vendors have primitives that will improve speed/area when used, but if you write code to use a primitive for one technology, it doesn't mean that the resulting code will be optimized if you change the technology. Both Xilinx and Altera have DSP blocks to speed up multiplication and whatnot, but they don't work exactly the same, and writing code that uses the full potential of both is very challenging.
Synthesis tools are notorious for doing exactly what you ask them to, even if a more optimized solution is simple, for example:
a <= (x + y) + z; -- two cascaded 2-input adders
b <= x + y + z;   -- one 3-input adder
These will likely lead to different logic paths from x, y, and z to a and b. In the end, the designer needs to know what he wants, and he has to verify that the synthesis tool understands his intent.

What's the best language for physics modeling?

I've been out of the modeling biz, so to speak, for a while now. When I was in college, most of the models I worked with were written in FORTRAN, which I never liked. I'm looking to get back into science, so I'm wondering if there are modern languages with feature sets suited for this kind of application. What would you consider to be an optimal language for simulating complex physics systems?
Fortran was certainly the absolute ruler here, but Python is being used more and more for exactly this purpose. While it is very hard to say which language is the BEST for this, I've found Python pretty useful for physics simulations and physics education.
It depends on the task
C++ is good at complicated data structures, but it is bad at slicing and multiplying matrices. (These tasks require you to spend a lot of time writing for loops.)
FORTRAN has a nice notation for slicing and multiplying matrices, but it is clumsy for creating complicated data structures such as graphs and linked lists.
Python/scipy has a nice notation for everything, but Python is an interpreted language, so it is slow at certain tasks.
Some people are interested in languages like CUDA that allow you to use your GPU to speed up your simulations.
In the molecular dynamics community C++ seems to be popular, because you need somewhat complicated data structures to represent the shapes of the molecules.
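To illustrate the notation point for Python: with NumPy (the de-facto array library), slicing and matrix work reads much like the FORTRAN whole-array style, while the equivalent C++ code would be explicit nested loops (the array sizes here are arbitrary):

import numpy as np

a = np.random.rand(500, 500)
b = np.random.rand(500, 500)

c = a @ b                        # matrix product, no explicit loops
first_col = a[:, 0]              # slice out a column
diff = a[1:, :] - a[:-1, :]      # finite difference along the first axis

print(c.shape, first_col.shape, diff.shape)   # (500, 500) (500,) (499, 500)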
I think it's arguable that FORTRAN is still dominant when it comes to solving large-scale problems in physics, as long as we're talking about serial calculations.
I know that parallelization is changing the game. I'm less certain about whether or not parallelized versions of LINPACK and other linear algebra packages are still written in FORTRAN.
A lot of engineers are using MATLAB and Mathematica these days, because they combine numerical and graphics capabilities.
I'd also point out that there's a difference between calculation and display engines. The former might still be written in FORTRAN, but the latter may be using more modern languages and OpenGL.
I'm also unsure about how much modeling has crept into biology. Physical chemistry might be a very different animal altogether.
If you write a terrific parallel linear algebra package in Scala or F# or Haskell that performs well, the world will beat a path to your door.
Python + Matplotlib + NumPy + α
The nuclear/particle/high energy physics community has moved heavily toward C++ (in part due to ROOT and Geant4), with some interest in Python (because it has ROOT bindings).
But you'll note that this is sub-discipline dependent..."physics" and "modeling" are big, broad topics, so there is no one answer.
Modelica is a specialized language for modeling (and simulating) physical systems. OpenModelica is an open source implementation of Modelica.
Python is very popular among science-oriented people, as is Matlab. The issue with these is that they are both VERY slow (to run). If you want to do large simulations that may take minutes/hours/days, you're going to have to pick another language.
As long as you are picking a language for speed, suck it up and use C/C++, maybe with CUDA depending on your needs.
Final thought though: if it takes you two days longer to write and debug your model in C than in Python, and the resulting code takes 10 minutes to run instead of an hour, have you really saved any time?
There's also a lot of capability with MATLAB. Especially when interfacing your simulations with hardware, or if you need your results visualised.
I'll chime in with Python but you should also look to R for any statistical work you may need to do. You should really be asking more about what packages for which languages to use rather than the language itself.