Why are there multiple options for the same SMT solver - Leon

In the Leon verifier, why are there different options that use the same solver, even when the inductive reasoning happens within Leon? E.g., all three options fairz3, smt-z3, and unrollz3 seem to use a Z3 solver and perform inductive reasoning in Leon.

All three options are doing the same thing in principle, but differ slightly in implementation (leading to different performance/reliability).
The fairz3 option uses the native Z3 API (via the ScalaZ3 library), while smt-z3 communicates with a Z3 process over its standard input (using the SMT-LIB standard via the Scala SMT-LIB library). In order to use smt-z3 you will need to make sure a z3 command is in your PATH.
With fairz3, Leon and Z3 run in the same process, which means that a crash in Z3 brings down the whole process, and there is nothing Leon can do to prevent it. With smt-z3, Z3 runs as a separate process, isolated from Leon; that process can be killed at any point if it becomes unresponsive (or if Leon decides to time out the solver).
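To make the isolation point concrete, here is a minimal Python sketch (illustrative only, not Leon code) of the smt-z3 style of integration: a separate z3 process driven over standard input with SMT-LIB text, which the caller can kill on a timeout without crashing itself. It assumes a z3 binary on the PATH.

    # Illustrative sketch (not Leon code) of the smt-z3 integration style:
    # drive a separate z3 process over standard input with SMT-LIB text, so
    # the caller survives solver crashes and can kill it on a timeout.
    import subprocess

    script = b"""
    (set-option :produce-models true)
    (declare-const x Int)
    (assert (> x 41))
    (check-sat)
    (get-value (x))
    """

    p = subprocess.Popen(["z3", "-in"], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE)
    try:
        out, _ = p.communicate(script, timeout=5)
        print(out.decode())        # e.g. "sat" followed by ((x 42))
    except subprocess.TimeoutExpired:
        p.kill()                   # Leon-style hard timeout: kill the solver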
The fair name is due to historical reasons. The original implementation of Leon was based on the native API of Z3 (apparently for performance reasons: it is faster to build formula trees directly in Z3 than to build them in Leon and then translate them for Z3). The solver in Leon ended up being named FairZ3Solver, with Fair as in fair unrolling of the functions. All the unrolling logic was mixed with the Z3 communication.
There is a second (newer) implementation of the inductive unrolling in Leon (known as UnrollingSolver) that is independent of the underlying solver (Z3, CVC4, RandomSolver). That unrolling is just as "fair" as the one provided by fairz3. When you use unrollz3 you are using this UnrollingSolver (which is also used with smt-z3), with the underlying solver being the native interface of Z3 (not the SMT-LIB text interface). The main difference from FairZ3Solver is that, besides being more general, the unrolling is done on Leon trees, which slightly impacts performance.

Related

Could branch prediction optimization be inherited?

Does it make sense to implement your own branch prediction optimization in your own VM interpreter, or is it enough to run the VM on hardware that already supports branch prediction?
It could make sense, in a limited way.
For example, in a JIT compiler, when generating assembly you may decide to lay out code based on the observed branch probabilities. This only needs a very simple type of predictor that knows the overall probability and doesn't need to recognize any patterns. If you did recognize patterns, you could do more sophisticated optimizations: e.g., a loop with an embedded branch that alternates every iteration could be unrolled 2x and the body specialized for the observed case.
For an interpreter it seems a bit less useful, but one can imagine some sophisticated designs that fuse adjacent instructions together into a single operation for efficiency, and this might benefit from branch prediction (a toy sketch of instruction fusion follows). Similarly, an interpreter might benefit from recognizing loops.
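For instance, here is a minimal Python sketch of the fusion idea (made-up opcodes, not from any real VM) that peepholes adjacent PUSH/ADD pairs into one superinstruction, so one dispatch does the work of two:

    def run(code):
        """Interpret a tiny made-up stack bytecode; PUSH+ADD pairs are fused."""
        # Peephole pass: replace each PUSH immediately followed by ADD
        # with a single fused superinstruction.
        fused, i = [], 0
        while i < len(code):
            if code[i][0] == "PUSH" and i + 1 < len(code) and code[i + 1][0] == "ADD":
                fused.append(("PUSH_ADD", code[i][1]))
                i += 2
            else:
                fused.append(code[i])
                i += 1
        stack = []
        for op, *args in fused:
            if op == "PUSH":
                stack.append(args[0])
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "PUSH_ADD":           # the fused superinstruction
                stack.append(stack.pop() + args[0])
        return stack

    print(run([("PUSH", 1), ("PUSH", 2), ("ADD",)]))  # [3]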
Apparently you're talking about a VM that interprets bytecode, not hardware virtualization of a CPU.
Implement how? Branch prediction in CPUs is only needed because they're pipelined, and for speculative out-of-order execution.
None of those things make sense for interpreter software if implementing them would create more work. Software pipelining can be worth it for loops over arrays to hide load and ALU latency, especially on older in-order CPUs, but it doesn't increase the total number of instructions to be run. If you don't know for sure what needs to be done next, leave the speculation to hardware OoO exec.
Note that for a pure non-JITing interpreter, control dependencies in the guest code become data dependencies in the interpreter, while a sequence of different instructions in the guest creates a control dependency in the interpreter (to dispatch to handler functions). See How exactly R is affected by Branch Prediction?
You do potentially need to care about branch prediction in the CPU that will run your code. Recently (like Intel since Haswell), CPUs are finally not bad for that, using IT-TAGE predictors: Branch Prediction and the Performance of Interpreters - Don’t Trust Folklore.
You don't implement branch prediction in software, but for older CPUs it was worth tuning interpreters with hardware branch prediction in mind. X86 prefetching optimizations: "computed goto" threaded code has some links, especially an article by Darek Mihocka discussing how badly it sucks for older CPUs (current at the time it was written) to have one "grand central" dispatch branch, like a single switch that every instruction-handler function returns to. That means the entire pattern of which instruction tends to follow which other instruction has to be predicted for that single branch. Without something like IT-TAGE, the prediction state for a single branch is very limited.
Tuning for older CPUs can involve putting dispatch to the next instruction at the end of each handler function, instead of returning to a single dispatch loop. But again, that's not implementing branch prediction, that's tuning for it.

Algebraic/implicit loops handling by Gekko

I have a specific question regarding the handling of algebraic/implicit loops by Gekko.
I will give examples in the field of Chemical Engineering, as this is how I found the project and its other libraries.
For example, when it comes to multicomponent chemical equilibrium calculations, it is not possible to solve the equations explicitly, because the concentration of one species may be present in many different equations.
I have used other paid software in the past, and it would automatically propose a solution procedure based on how the system is solvable (by analyzing dependencies and automatically creating algebraic loops).
My question would be:
Does Gekko do that automatically?
It is a little bit tricky because sometimes one needs to add tear variables and iterate from a good starting value.
I know this message may be a little abstract, but I am trying to decide which software to use for my work, and this is a pragmatic bottleneck that I happened to find.
Thanks in advance for your valuable insight.
Python Gekko uses a simultaneous solution strategy, so all units are solved together instead of sequentially. Therefore, tear variables are not needed, but large flowsheet problems with recycle can be difficult to converge to a feasible solution. Below are three methods available in Python Gekko to assist with efficient solution and initialization.
Method 1: Intermediate Variables
Intermediate variables are useful for decreasing the complexity of the model. In many models, the temporary variables outnumber the regular variables. This model reduction often aids the solver in finding a solution by reducing the problem size. Intermediate variables are declared with m.Intermediate() in Python Gekko. They may be defined in one section or in multiple declarations throughout the model, and they are parsed sequentially, from top to bottom. To avoid inadvertent overwrites, an intermediate variable can be defined only once. For intermediate variables, the order of declaration is critical: if an intermediate is used before its definition, an error reports that there is an uninitialized value. Here is additional information on Intermediates with an example problem.
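A minimal sketch of the pattern (made-up equations; remote=False assumes a local solve works on your install):

    from gekko import GEKKO

    m = GEKKO(remote=False)
    x = m.Var(value=1)
    y = m.Var(value=1)
    q = m.Intermediate(x**2 + y**2)  # defined once, before its first use
    m.Equation(q == 8)               # intermediates can appear in equations
    m.Equation(x == y)
    m.solve(disp=False)
    print(x.value[0], y.value[0])    # approximately 2.0, 2.0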
Method 2: Lower Block Triangular Decomposition
For large problems that have trouble with initialization, there is a mode that is activated with the option m.options.COLDSTART=2. This mode performs a lower block triangular decomposition to automatically identify independent blocks that are then solved independently and sequentially.
This decomposition method for initialization is discussed in the PhD dissertation (chapter 2) of Mostafa Safdarnejad or also in Safdarnejad, S.M., Hedengren, J.D., Lewis, N.R., Haseltine, E., Initialization Strategies for Optimization of Dynamic Systems, Computers and Chemical Engineering, 2015, Vol. 78, pp. 39-50, DOI: 10.1016/j.compchemeng.2015.04.016.
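In code, a common pattern (a sketch, assuming a model m has already been built; the TIME_SHIFT handling shown may vary for dynamic problems) is to initialize with the decomposition and then re-solve the full problem:

    m.options.COLDSTART = 2   # lower block triangular decomposition init
    m.solve()                 # solves the identified blocks sequentially
    m.options.COLDSTART = 0
    m.options.TIME_SHIFT = 0  # keep the initialized values for the next solve
    m.solve()                 # full simultaneous solve from that point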
Method 3: Automatic Model Reduction
Model reduction requires more pre-processing time but can help to significantly reduce the solver time. There is additional documentation on m.options.REDUCE.
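The option is set the same way; a short sketch (the value is the number of pre-solve analysis passes):

    m.options.REDUCE = 3  # pre-solve passes that eliminate variables/equations
    m.solve()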
Overall Strategy for Initialization
The overall strategy that we use for initializing hard problems, such as flowsheets with recycle, is shown in this flowchart.
Sometimes it does mean breaking recycles to get an initialized solution. Other times, the initialization strategies detailed above work well and no model rearrangement is necessary. The advantage of working with a simultaneous solution strategy is degree-of-freedom swapping: for example, downstream variables can be fixed and upstream variables calculated to meet that value.

Support of trigonometric functions ( e.g.: cos, tan) in Z3

I would like to use Z3 to optimize a set of equations. The problem is that my equations are non-linear, and most importantly they contain trigonometric functions. Is there a way to deal with that in Z3?
I am using the z3py API.
Transcendental numbers and trigonometric functions are usually not supported by SMT solvers.
As Christopher pointed out (thanks!), Z3 does have support for trigonometric functions and transcendentals, but the support is rather limited. (In practice, this means you shouldn't expect Z3 to decide every formula you throw at it; on complicated cases, it is most likely to simply return unknown.)
See https://link.springer.com/chapter/10.1007%2F978-3-642-38574-2_12 for the related publication. There are some examples in the following discussion thread that can get you started: https://github.com/Z3Prover/z3/issues/680
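If Z3 returns unknown on your formulas, one common workaround (a sketch, not Z3's built-in trigonometry support) is to model the trigonometric function as an uninterpreted function and axiomatize only the properties you need, e.g. its range:

    # Workaround sketch: treat sin as an uninterpreted function and give Z3
    # only the facts you need (here, just the range bound |sin| <= 1).
    # Expect coarse answers: Z3 reasons with these axioms, not with real trig.
    from z3 import Real, Function, RealSort, Solver, ForAll

    x = Real('x')
    a = Real('a')
    sin = Function('sin', RealSort(), RealSort())  # uninterpreted stand-in

    s = Solver()
    s.add(ForAll([a], sin(a) <= 1))   # range axiom
    s.add(ForAll([a], sin(a) >= -1))  # range axiom
    s.add(sin(x) > 0.5, x >= 0, x <= 10)
    print(s.check())                  # sat here; unknown on harder mixes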
Also, note that the optimizing solver of Z3 doesn't handle nonlinear equations, so you wouldn't be able to optimize them. For this sort of optimization problem, traditional SMT solvers are just not the right choice.
However, if you're happy with δ-satisfiability (allowing a certain error factor), then check out dReal, which can deal with trigonometric functions: http://dreal.github.io/ So far as I can tell, however, it doesn't perform optimization.

Z3 Time Restricted Optimization

I have seen that Z3 supports optimization via e.g. assert-soft. From what I understood, if given sufficient time, Z3 will report the optimal solution for a given SMT formula.
However, I am interested if it is possible to run Z3 for a limited amount of time and have it report the best solution it can find (which does not necessarily mean it is the optimal solution).
If I run Z3 on an SMT formula and restrict the time (via the -T parameter), it will just report 'timeout' if it did not solve the problem optimally. I read that the default wmax solver uses a simple procedure to bound weights, and I am curious whether it is possible to bound the weights from above rather than from below.
The timeout option -T causes the process to terminate, so it does not return any intermediate values. If you use the -t option (soft timeout), the process does not terminate; instead, Z3 stops the search at some point where it checks for cancellation and then produces the best answer found so far. It corresponds to setting a cancellation state. I hope this works for you.
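The same behavior is available from z3py by setting a per-query timeout (in milliseconds) on an Optimize instance; a small sketch with a made-up objective:

    # z3py sketch of the soft-timeout behavior: if the cutoff fires
    # mid-search, you still get the best bounds found so far.
    from z3 import Optimize, Int

    opt = Optimize()
    x = Int('x')
    opt.add(x >= 0, x <= 100)
    opt.add_soft(x > 50, weight=1)     # soft constraint, as with assert-soft
    opt.set("timeout", 5000)           # soft timeout in milliseconds
    h = opt.maximize(x)
    print(opt.check())                 # sat (or unknown if cut off very early)
    print(opt.lower(h), opt.upper(h))  # objective bounds at the stopping point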

Automated Design in CAD, Analysis in FEA, and Optimization

I would like to optimize a design by having an optimizer make changes to a CAD file, which is then analyzed with FEM, with the results fed back into the optimizer to change the design based on the FEM output, until the solution converges to an optimum (mass, stiffness, etc.).
This is what I envision:
create a blueprint of the part in a CAD software (e.g. CATIA).
run an optimizer code (e.g. fmincon) from within a programming language (e.g. Python). The parameters of the optimizer are parameters of the CAD model (angles, lengths, thicknesses, etc.).
the optimizer evaluates a certain design (parameter set). The programming language calls the CAD software and modifies the design accordingly.
the programming language extracts some information (e.g. mass).
then the programming language exports a STEP file and passes it to an FEA solver (e.g. Abaqus), where a predefined analysis is performed.
the programming language reads the results (e.g. max von Mises stress).
the results from CAD and FEM (e.g. mass and stress) are fed to the optimizer, which changes the design accordingly.
until it converges.
I know this exists within closed architectures (e.g. Isight), but I want to use an open architecture where the optimizer is called from within an open programming language (ideally Python).
So finally, here are my questions:
Can it be done, as I described it or else?
References, tutorials please?
Which software do you recommend for programming, CAD, and FEM?
Yes, it can be done. What you're describing is a small parametric structural sizing multidisciplinary optimization (MDO) environment. Before you even begin coding up the tools or environment, I suggest doing some preliminary work on a few areas:
Carefully formulate the minimization problem (minimize f(x), where x is a vector containing ... variables, subject to ... constraints, etc.)
Survey and identify individual tools of interest
How would each tool work? Input variables? Output variables?
Outline in a Design Structure Matrix (a.k.a. N^2 diagram) how the tools will feed information (variables) to each other
What optimizer is best suited to your problem (MDF?)
Identify suitable convergence tolerance(s)
Once the above steps are taken, I would then start to think about MDO implementation details. Python, while not the fastest language, would be an ideal environment because many tools for solving MDO problems like yours were built in Python, and development time is low. I suggest going with the following packages:
OpenMDAO (http://openmdao.org/): a modern MDO platform written by NASA Glenn Research Center. The tutorials do a good job of getting you started. Note that each "discipline" in the Sellar problem, the 2nd problem in the tutorial, would include a call to your tool(s) instead of a closed-form equation. As long as you follow OpenMDAO's class framework, it does not care what each discipline is and treats it as a black box; it doesn't care what goes on between an input and an output (a minimal component sketch follows this list).
SciPy and NumPy: scientific and numerical computing packages (scipy.optimize supplies the actual optimization routines)
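To give a flavor of the black-box idea, here is a minimal discipline sketch using the current (3.x) OpenMDAO API; the component and variable names are made up, and compute() is where you would drive CATIA/Abaqus and parse their output instead of evaluating a placeholder formula:

    import openmdao.api as om

    class MassFromCAD(om.ExplicitComponent):
        def setup(self):
            self.add_input('thickness', val=1.0)
            self.add_output('mass', val=0.0)
            # External tools rarely provide derivatives: approximate them.
            self.declare_partials('mass', 'thickness', method='fd')

        def compute(self, inputs, outputs):
            # Placeholder: call the CAD/FEA tools here and read back results.
            outputs['mass'] = 2.7 * inputs['thickness']

    prob = om.Problem()
    prob.model.add_subsystem('cad', MassFromCAD(), promotes=['*'])
    prob.setup()
    prob.set_val('thickness', 2.0)
    prob.run_model()
    print(prob.get_val('mass'))  # [5.4]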
I don't know what software you have access to, but here are a few tool-related tips to help you in your tool survey and identification:
Abaqus has a Python API (http://www.maths.cam.ac.uk/computing/software/abaqus_docs/docs/v6.12/pdf_books/SCRIPT_USER.pdf)
If you need to use a program that does not have an API, you can automate the GUI using Python's win32com or Pywinauto (GUI automation) package
For FEM/FEA, I used both MSC PATRAN and MSC NASTRAN on previous projects since they have command-line interfaces (read: easy to interface with via Python)
HyperSizer also has a Python API
Install Pythonxy (https://code.google.com/p/pythonxy/) and use the Spyder Python IDE (included)
CATIA can be automated using win32com (quick Google search on how to do it: http://code.activestate.com/recipes/347243-automate-catia-v5-with-python-and-pywin32/)
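For instance, a hypothetical win32com sketch of the CAD step might look like the following; the object model follows CATIA V5 automation, but the parameter name ('Thickness') and file path are made up, so verify everything against your installation:

    import win32com.client

    catia = win32com.client.Dispatch('CATIA.Application')
    doc = catia.ActiveDocument
    part = doc.Part
    part.Parameters.Item('Thickness').Value = 2.5  # set the design variable
    part.Update()                                  # regenerate the geometry
    doc.ExportData(r'C:\work\part.stp', 'stp')     # STEP file for the FEA step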
Note: to give you some sort of development time-frame, what you're asking will probably take at least two weeks to develop.
I hope this helps.