I am writing an interpreter for the lambda calculus in C#. So far I have gone down the following avenues for interpretation.
1. Compilation of terms to MSIL, such that lazy evaluation is still preserved.
2. Evaluation on a tree of terms (term rewriting).
At the moment, the MSIL compilation strategy is well over an order of magnitude faster in almost any case I have been able to test. However, I am looking into optimizing the term rewriter by identifying patterns that often occur in the construction of LC terms. So far, I have come up with one method in particular which provides a relatively small speedup: identification of exponentiated applications. E.g. f (f (f (f x))) is simplified to f^4 x. Then, a rule for nested applications of the same function is used, namely f^m (f^n x) = f^(m + n) x. This rule works particularly well for the exponentiation of Church numerals.
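To make the pattern concrete, here is a minimal sketch of the idea in Python rather than C#, assuming a hypothetical tuple-based term representation ('var', name) / ('app', f, x), with a compressed node ('pow', f, n, x) meaning f applied n times to x:

def compress(term):
    """Collapse chains f (f (... (f x))) into ('pow', f, n, x).

    The representation is hypothetical, not the asker's C# one.
    """
    if term[0] != 'app':
        return term
    f, arg = term[1], compress(term[2])
    if arg[0] == 'pow' and arg[1] == f:
        # f^1 (f^n x) = f^(n+1) x -- the merge rule from the question
        return ('pow', f, arg[2] + 1, arg[3])
    if arg[0] == 'app' and arg[1] == f:
        # two directly nested applications of f start an exponential
        return ('pow', f, 2, arg[2])
    return ('app', f, arg)

# f (f (f (f x))) -> ('pow', ('var', 'f'), 4, ('var', 'x'))
f, x = ('var', 'f'), ('var', 'x')
t = ('app', f, ('app', f, ('app', f, ('app', f, x))))
print(compress(t))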
This optimization has me wondering: Are there other pattern-based approaches to optimization in LC?
What is the complexity of networkx.is_isomorphic(graph1, graph2)?
I am particularly interested to know it in the case of directed acyclic graphs.
According to the documentation of nx.is_isomorphic, the VF2 algorithm is implemented, and even the original scientific reference is given:
"L. P. Cordella, P. Foggia, C. Sansone, M. Vento, “An Improved Algorithm for Matching Large Graphs”, 3rd IAPR-TC15 Workshop on Graph-based Representations in Pattern Recognition, Cuen, pp. 149-159, 2001."
The Boost library states the following complexity for the VF2 algorithm:
"The spatial complexity of VF2 is of order O(V), where V is the (maximum) number of vertices of the two graphs. Time complexity is O(V^2) in the best case and O(V!·V) in the worst case."
What is the complexity class and big O for a given function?
Are these the same thing?
For example: n^2 + n.
Complexity classes and big-O notation are not the same thing. Big-O notation is just a notation for communicating the asymptotic behavior of a function; that is, O(f(n)) is the set of all functions that are upper bounded by c*f(n) for all n > N, where c and N are constants. So, in your example, we'd say that n^2 + n is in O(n^2), because for all n >= 1, n^2 + n <= 2n^2.
Complexity classes, on the other hand, are classes of languages, which we can think of as decision problems (e.g., decide whether some object X has a given property). A complexity class describes how much computational power is required to solve a given decision problem. For example, say we want to decide whether an array of n numbers is sorted in increasing order. Since we can do so by simply scanning the items one at a time and making sure there is no decrease, it takes n steps to solve this decision problem. Thus the problem is in the class P, which contains all languages with polynomial-time algorithms. Note that this is a property of the problem, not of any particular algorithm: you could also decide whether a list is sorted by enumerating all sorted lists on n elements and checking whether the input matches one of them, but that would be very inefficient. Complexity classes are determined by the existence of some sufficiently efficient algorithm that solves the decision problem.
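To make the example concrete, here is the linear scan as a tiny Python function (my own illustration):

def is_sorted(xs):
    """Decide membership in the language of increasingly sorted arrays.

    One linear scan -- n - 1 comparisons -- so this decision problem
    has a polynomial-time algorithm and is therefore in P.
    """
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

print(is_sorted([1, 2, 2, 5]))  # True
print(is_sorted([3, 1, 4]))     # False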
I'm trying to use scipy.optimize's SLSQP for an industrial constrained-optimization problem. A highly nonlinear FE model is used to generate the objective and constraint functions and their derivatives/sensitivities.
The objective function is in the form:
obj=a number calculated from the FE model
A series of constraint functions is set, and most of them are of the form:
cons = real number i - real number j (calculated from the FE model)
I would like to try to restrict the design variables to integers, as that is what will be input into the plant machine.
Another consideration is to keep a log file recording which design variables have been tried: if a set of (integer) design variables has already been tried, skip the calculation, perturb the design variables, and try again. By limiting the design variables to integers, we can limit the number of trials (whereas with real-valued design variables, a change in, e.g., the 8th decimal place would count as an untried value).
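A minimal sketch of that caching idea (hypothetical names; not part of scipy):

tried = {}  # maps an integer design tuple to its objective value

def cached_objective(x, fe_objective):
    """Skip the expensive FE run if this integer design was already tried.

    fe_objective is a placeholder for the real FE-model evaluation.
    """
    key = tuple(int(round(v)) for v in x)  # snap the design to integers
    if key not in tried:
        tried[key] = fe_objective(key)     # expensive FE evaluation
    return tried[key]

print(cached_objective([2.7, 3.2], fe_objective=lambda k: sum(k)))  # FE "run"
print(cached_objective([3.1, 2.9], fe_objective=lambda k: sum(k)))  # cache hit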
I'm using SLSQP as it is an SQP method (please correct me if I am wrong), and it is said to handle nonlinear problems well. I understand the SLSQP algorithm is a gradient-based optimizer, and there is no way I can impose the integrality restriction on the design variables inside the FORTRAN-coded algorithm. So instead, I modified the slsqp.py file as follows (at the point where it calls the Python extension built from the FORTRAN routine):
slsqp(m, meq, x, xl, xu, fx, c, g, a, acc, majiter, mode, w, jw)
# truncate every design variable to an integer after the SLSQP call
for i in range(len(x)):
    x[i] = int(x[i])
The code stops at the 2nd iteration and outputs the following:
Optimization terminated successfully. (Exit mode 0)
Current function value: -1.286621577077517
Iterations: 7
Function evaluations: 0
Gradient evaluations: 0
However, one of the constraint functions is violated (its value is about -5.2, while the default convergence tolerance of the optimization code is 10^-6).
Questions:
1. Since the FE model is highly nonlinear, I think it's safe to assume the objective and constraint functions will be highly nonlinear too (regardless of their mathematical form). Is that correct?
2. Based on the convergence criterion of the SLSQP algorithm (please see below), one part of which requires the sum of all constraint violations (absolute values) to be less than a very small value (10^-6), how could the optimization exit with a successful termination message?
IF ((ABS(f-f0).LT.acc .OR. dnrm2_(n,s,1).LT.acc).AND. h3.LT.acc)
Any help or advice is appreciated. Thank you.
I am working on using a genetic algorithm to break a transposition cipher, and in this work I have come across a paper named Breaking Transposition Cipher with Genetic Algorithm by R. Toemeh & S. Arumugam.
In this paper they use a fitness function, but I cannot understand it completely. In particular, I cannot understand the role of β and γ in the equation.
Can anyone please explain the fitness function? Here is the picture of the fitness function:
The weights β and γ can be varied to allow more or less emphasis on particular statistics (they're determined "experimentally").
Kb(i, j) and Kt(i, j, k) are the known language bigram and trigram statistics; for the English language, see e.g. The frequency of bigrams in an English corpus for bigram tables.
Db(i, j) and Dt(i, j, k) are the bigram and trigram statistics of the message decrypted with key k.
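Putting those pieces together, here is a hedged Python sketch of a fitness of this shape (my own illustration, not the paper's exact formula):

from collections import Counter

def fitness(decrypted, known_bi, known_tri, beta=1.0, gamma=1.0):
    """Weighted distance between the decrypted text's n-gram statistics
    and the known language statistics; lower means a better key.
    (Some papers instead maximise a negated/complemented form.)"""
    n = len(decrypted)
    db = Counter(decrypted[i:i + 2] for i in range(n - 1))
    dt = Counter(decrypted[i:i + 3] for i in range(n - 2))
    bi_err = sum(abs(known_bi.get(g, 0.0) - db[g] / (n - 1))
                 for g in set(known_bi) | set(db))
    tri_err = sum(abs(known_tri.get(g, 0.0) - dt[g] / (n - 2))
                  for g in set(known_tri) | set(dt))
    return beta * bi_err + gamma * tri_err

# Toy usage with made-up statistics:
print(fitness("thethe", {"th": 0.027, "he": 0.023}, {"the": 0.018}))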
In A Generic Genetic Algorithm to Automate an Attack on Classical Ciphers by Anukriti Dureha and Arashdeep Kaur there are some reference values of β and γ (and α since they use an extended form of the above equation) and three types of ciphers.
Some further details about β and γ: they're weights that remain constant during the evolution, and they should be tuned experimentally ("optimal" values depend on the target language and the cipher algorithm).
Offline parameter tuning is the way to go (a minimal sweep sketch follows this list), e.g.:
simple parameter sweep (try everything)
meta-GA
racing strategy
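A minimal parameter-sweep sketch, where evaluate is a placeholder; in practice it would run the GA with those weights against messages with known plaintext:

import itertools

def evaluate(beta, gamma):
    """Placeholder score: in practice, run the GA with these weights on a
    benchmark set and return the fraction of plaintext recovered."""
    return -(beta - 1.0) ** 2 - (gamma - 0.5) ** 2  # dummy surface for the sketch

candidates = [0.0, 0.5, 1.0, 2.0]
best = max(itertools.product(candidates, repeat=2),
           key=lambda bg: evaluate(*bg))
print("best (beta, gamma):", best)  # (1.0, 0.5) for the dummy score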
We know that compilers are getting better and better at optimising our code to make it run faster, but my question is: are there compilers that can optimise floating-point operations to ensure greater accuracy?
For example, a basic rule of thumb is to perform multiplications before additions; this is because multiplication and division of floating-point numbers do not introduce inaccuracies as great as those of addition and subtraction, but they can magnify inaccuracies already introduced by addition and subtraction, so multiplying first is said to help in many cases.
So a floating point operation like
y = x*(a + b); // faster but less accurate
Should be changed to
y = x*a + x*b; // slower but more accurate
Are there any compilers that will optimise for improved floating-point accuracy at the expense of speed, like I showed above? Or is the main concern of compilers speed, without looking at the accuracy of floating-point operations?
Update: the selected answer shows a very good example where this type of optimisation would not work, so it wouldn't be possible for the compiler to know beforehand which way of evaluating y is more accurate. Thanks for the counterexample.
Your premise is faulty: x*(a + b) is (in general) no less accurate than x*a + x*b. In fact, it will often be more accurate, because it performs only two floating-point operations (and therefore incurs only two rounding errors), whereas the latter performs three operations.
If you know something about the expected distribution of values for x, a, and b a priori, then you could make an informed decision, but compilers almost never have access to that type of information.
That aside, what if the person writing the program actually meant x*(a + b) and specifically wanted the exact roundings that are caused by that particular sequence of operations? This sort of thing is actually pretty common in high-quality numerical algorithms.
Better to do what the programmer wrote, not what you think he might have intended.
Edit -- An example to illustrate a case where the transformation you suggested results in a catastrophic loss of accuracy: suppose
x = 3.1415926535897931
a = 1.0e15
b = -(1.0e15 - 1.0)
Then, evaluating in double we get:
x*(a + b) = 3.1415926535897931
but
x*a + x*b = 3.0
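You can reproduce this in any IEEE 754 double environment; for instance, in Python (whose floats are doubles):

x = 3.1415926535897931
a = 1.0e15
b = -(1.0e15 - 1.0)   # a + b is exactly 1.0 in double precision

print(x * (a + b))    # 3.141592653589793
print(x * a + x * b)  # 3.0 -- catastrophic cancellation in the distributed form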
Compilers typically "optimize" for accuracy over speed, where accuracy is defined as exact implementation of the IEEE 754 standard. Whereas integer operations can be reordered in any way that doesn't cause overflow, FP operations need to be performed exactly as the programmer specifies. This may sacrifice numerical accuracy (ordinary C compilers are not equipped to optimize for that), but it faithfully implements what the programmer asked for.
A programmer who is sure he hasn't manually optimized for accuracy may enable compiler features like GCC's -funsafe-math-optimizations and -ffinite-math-only to possibly extract extra speed. But usually there isn't much gain.
No, there isn't. Stephen Canon gives some good reasons why this would be a stupid idea, and he's correct; so you won't find a compiler that does this.
If you as the programmer have some knowledge about the ranges of the numbers you're manipulating, you can use parentheses, temporary variables, and similar constructs to strongly hint to the compiler how you want things done.