Nested IF function transformation for linearization (solver optimization)

I have 10 nested IF functions. I'm trying to transform a nonsmooth, nonlinear function into a linear function. To do that, I need to transform the nested IF functions into a linear format by adding binary variables. It is easy if there is only one IF statement. What about more than one? Thanks in advance for your responses.

I suspect this may no longer be an issue for you, but I just saw this post today. Manually linearizing nested IF statements can be quite a challenge. LINDO Systems has an Excel add-in solver named What'sBest that can internally linearize nested IF statements, which allows What'sBest to solve the resulting model as a mixed-integer linear program.
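For a single IF, the standard manual trick is one binary variable plus big-M constraints; nested IFs just repeat it with one binary per condition. Here is a minimal sketch (with made-up constants `M`, `EPS`, and a toy rule y = a if x >= c else b) that spells out the constraints and brute-force checks that they pick the right branch:

```python
# Hypothetical sketch: linearizing y = (a if x >= c else b) with one binary
# variable d and big-M constraints, then checking the logic by brute force.
# M and EPS are illustrative values, not universal constants.
M = 1000          # big-M: must exceed any possible |x - c| in the model
EPS = 1e-6        # tolerance that stands in for the strict inequality x < c

def feasible(x, c, d):
    # d = 1 must force x >= c; d = 0 must force x < c:
    #   x >= c - M*(1 - d)
    #   x <= c - EPS + M*d
    return x >= c - M * (1 - d) and x <= c - EPS + M * d

def linearized_if(x, c, a, b):
    # pick the binary value the constraints allow, then y = a*d + b*(1 - d),
    # which is linear once d is a decision variable
    d = 1 if feasible(x, c, 1) else 0
    assert feasible(x, c, d)
    return a * d + b * (1 - d)

for x in [-3, 0, 2, 5, 7]:
    assert linearized_if(x, c=2, a=10, b=20) == (10 if x >= 2 else 20)
```

For 10 nested IFs you would introduce d1..d10 the same way and chain the selection terms; this is exactly the reformulation that tools like What'sBest automate.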


Is it possible to access SCIP's Statistics output values directly from PyScipOpt Model Object?

I'm using SCIP to solve MILPs in Python using PyScipOpt. After solving a problem, the solver statistics can be either 1) printed as a string using printStatistics(), or 2) saved to an external file using writeStatistics(). For example:
import pyscipopt as pso
model = pso.Model()
model.addVar(name="x", obj=1)
model.optimize()
model.printStatistics()
model.writeStatistics(filename="stats.txt")
There's a lot of information in printStatistics/writeStatistics that doesn't seem to be accessible from the Python model object directly (e.g. the primal-dual integral value, data for individual branching rules or primal heuristics, etc.). It would be helpful to be able to extract this data via, e.g., attributes of the model object or a dictionary.
Is there any way to access this information from the model object without having to parse the raw text/file output?
PySCIPOpt does not provide access to the statistics directly. The data for the various tables (e.g. separators, presolvers, etc.) are stored separately for every single plugin in SCIP and are sometimes not straightforward to collect.
If you are only interested in certain statistics about the general solving process, then you might want to add PySCIPOpt wrappers for a few of the simple get functions defined in scip_solvingstats.c.
Lastly, you might want to check out IPET for parsing the statistics output.
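Until such wrappers exist, parsing the `writeStatistics()` text is the pragmatic fallback. A minimal sketch of that, assuming the "name : value" line layout the statistics output typically uses (the exact format varies by SCIP version, so the sample text here is only illustrative):

```python
import re

# Illustrative excerpt of what writeStatistics() might produce; the real
# output is much longer and version-dependent.
SAMPLE = """\
SCIP Status        : problem is solved [optimal solution found]
Total Time         :       0.05
  solving          :       0.04
"""

def parse_stats(text):
    # Collect "name : value" pairs into a dict, keyed by the label text.
    stats = {}
    for line in text.splitlines():
        m = re.match(r"\s*([A-Za-z][\w ()\[\]/%-]*?)\s*:\s*(.+)", line)
        if m:
            stats[m.group(1)] = m.group(2).strip()
    return stats

stats = parse_stats(SAMPLE)
```

A flat dict like this loses the table nesting (separator rows, heuristic rows, etc.), which is where a purpose-built parser like IPET earns its keep.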

Inverse Lognormal function in SQL

I have a process built in Excel and am trying to transfer it over to SQL for efficiency purposes. I have figured much of it out, including random number generation and pieces involving exponential distributions.
The one piece I am unable to figure out in SQL is how to get the inverse of lognormal.
This is how the formula looks in Excel:
lognorm.inv(p,mu,sigma)
I have the equivalent to p, mu and sigma set up in my data and am trying to find the SQL equivalent (or workaround) to the lognorm.inv() Excel function.
Has anyone done something like this before?
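Excel's LOGNORM.INV reduces to the inverse of the standard normal CDF via the identity lognorm.inv(p, mu, sigma) = exp(mu + sigma * NORM.S.INV(p)). A sketch of that identity in Python (using the stdlib `statistics.NormalDist`), which you can port to any SQL dialect that exposes, or lets you approximate, an inverse normal CDF:

```python
from math import exp
from statistics import NormalDist

def lognorm_inv(p, mu, sigma):
    # Equivalent of Excel's LOGNORM.INV(p, mu, sigma):
    # exponentiate the inverse CDF of Normal(mu, sigma) at p.
    return exp(mu + sigma * NormalDist().inv_cdf(p))

# Sanity check: the median of a lognormal is exp(mu).
median = lognorm_inv(0.5, 1.0, 0.5)
```

Most SQL engines lack a built-in inverse normal CDF, so in practice you implement NORM.S.INV yourself (e.g. via a rational approximation such as Acklam's) and then apply `EXP(mu + sigma * norm_s_inv(p))`.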

Providing a LIMIT param in Django query without taking a slice of a QuerySet

I have a utility function in my program for searching for entities. It takes a max_count parameter. It returns a QuerySet.
I would like this function to limit the max number of entries. The standard way would be to take a slice out of my QuerySet:
return results[:max_count]
My problem is that the views which utilize this function sort in various ways by using .order_by(). This causes exceptions as re-ordering is not allowed after taking a slice.
Is it possible to force a "LIMIT 1000" into my SQL query without taking a slice?
Do results[:max_count] in a view, after .order_by(). Don't be afraid of requesting too much from the DB: the query won't be evaluated until the slice is taken (and the slice itself is still lazy).
You could subclass QuerySet to achieve this, by simply ignoring every slice and applying [:max_count] last in __getitem__, but I don't think it's worth the complexity and side effects.
If you are worried about memory consumption with a large queryset, follow http://www.mellowmorning.com/2010/03/03/django-query-set-iterator-for-really-large-querysets/
For normal usage, just follow DrTyrsa's suggestion. You could write a small shortcut that applies the order_by and then the slice, to keep your view code simple.
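The core idea — stay lazy, allow re-ordering, and apply the cap only when rows are finally fetched — can be shown without Django at all. `LazyQuery` below is a hypothetical stand-in for QuerySet, not Django's API:

```python
# Standalone sketch of "defer the LIMIT until evaluation". LazyQuery mimics
# a queryset: order_by() returns a new unevaluated query, and the max_count
# cap is applied only when rows are actually fetched.
class LazyQuery:
    def __init__(self, rows, max_count=None):
        self.rows = list(rows)
        self.max_count = max_count

    def order_by(self, reverse=False):
        # Re-ordering is fine because nothing has been sliced yet.
        return LazyQuery(sorted(self.rows, reverse=reverse), self.max_count)

    def fetch(self):
        # The LIMIT takes effect only here, at evaluation time.
        return self.rows[: self.max_count]

q = LazyQuery([3, 1, 2], max_count=2)
ascending = q.order_by().fetch()            # [1, 2]
descending = q.order_by(reverse=True).fetch()  # [3, 2]
```

In Django terms: have the utility function return the unsliced queryset plus max_count, and let each view do `qs.order_by(...)[:max_count]` as its last step.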

translation from Datalog to SQL

I am still thinking about how to translate the recursion of a Datalog program into SQL, such as
P(x,y) <- Q(x,y).
Q(x,y) <- P(x,z), A(y).
where A/1 is an EDB predicate. Thus, there is a co-dependency between P and Q. For longer queries, how can this problem be solved?
Moreover, is there any system that completely implements this translation? If there is, which system, or which paper should I refer to?
If you adopt an approach of "tabling" previous conclusions and forward-chain reasoning on these to infer new conclusions, no recursive "depth" is required.
Bear in mind that Datalog requires some restrictions on rules and variables to assure finite termination and hence finitely many conclusions. Variables must have a finite range of possible values, for example.
Let's assume your example refers to constants rather than to variables:
P(x,y) <- Q(x,y).
Q(x,y) <- P(x,z), A(y).
One wrinkle is that you want A/1 to be implemented as an extended stored procedure or external code. For that I would propose tabling all the results of calling A on all possible arguments (finitely many). These are after all among the conclusions (provable statements) of your system.
Once that is done the forward-chaining inference proceeds iteratively rather than recursively. At each step consider each rule, applying it with premises (right-hand sides) that are previously obtained (tabled) conclusions if it produces a new conclusion. If no rule produces a new conclusion in the current step, halt. The proof procedure is complete.
In your example the proofs stop after all the A facts are adduced, because there are no conclusions sufficient to apply either rule to get new conclusions.
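The iterative procedure described above can be sketched directly in Python. The fact base here is made up (A = {1, 2}, plus one seed P fact, which the example needs to produce anything at all):

```python
# Forward-chaining fixpoint for the two example rules:
#   P(x,y) <- Q(x,y).
#   Q(x,y) <- P(x,z), A(y).
A = {1, 2}           # tabled results of the A/1 predicate (assumed finite)
P = {(0, 1)}         # an assumed initial P fact to seed the inference
Q = set()

changed = True
while changed:
    changed = False
    # P(x,y) <- Q(x,y): every Q fact becomes a P fact.
    new_p = Q - P
    # Q(x,y) <- P(x,z), A(y): any x appearing in P pairs with any y in A.
    new_q = {(x, y) for (x, _z) in P for y in A} - Q
    if new_p or new_q:
        P |= new_p
        Q |= new_q
        changed = True
# Loop halts when one full pass over the rules adds no new conclusion.
```

Each pass only consults previously tabled conclusions, so no recursion depth is involved; termination follows from the finiteness restrictions mentioned above.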
A possible approach is to use recursive CTEs in SQL, which provide the power of transitive closure. Relational algebra + transitive closure = Datalog.
Logica does something like this. It translates a Datalog-like language into SQL for Google BigQuery, PostgreSQL, and SQLite.
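To make the recursive-CTE approach concrete, here is a small runnable example using SQLite from Python: transitive closure over an edge relation, which is the classic Datalog program path(x,y) <- edge(x,y); path(x,z) <- path(x,y), edge(y,z). The table contents are made up:

```python
import sqlite3

# edge/2 is the EDB predicate; the recursive CTE "path" plays the IDB role.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE edge(src INTEGER, dst INTEGER);
    INSERT INTO edge VALUES (1,2),(2,3),(3,4);
""")
rows = con.execute("""
    WITH RECURSIVE path(src, dst) AS (
        SELECT src, dst FROM edge                       -- base rule
        UNION                                           -- UNION dedups,
        SELECT p.src, e.dst                             -- giving a fixpoint
        FROM path p JOIN edge e ON p.dst = e.src        -- recursive rule
    )
    SELECT src, dst FROM path ORDER BY src, dst
""").fetchall()
```

Mutually recursive predicates like the P/Q example are harder: standard SQL recursive CTEs only allow a single self-referencing table, so co-dependent rules generally have to be merged into one CTE (e.g. with a discriminator column) before this trick applies.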

Model clause in Oracle

I have recently become interested in Oracle, and the more I look into it, the more it attracts me.
I have recently come across the MODEL clause but, to be honest, I don't understand its behaviour. Could anyone explain it with some examples?
Thanks in advance
Some examples of MODEL are given here.
Personally, I've looked at MODEL several times and have not yet succeeded in finding a use case for it. While it at first appears useful, there are a lot of places where only literals work (rather than binds or variables), which restricts its flexibility. For example, in inter-row calculations you can't readily refer to the 'previous' or 'next' row, but have to identify it absolutely by its attributes. So you can't say 'take the value of the row with the same date in the previous month'; you can only code a specific date.
It might be used (internally) by some analytical tools. But as an end-user tool, I never 'got' it. I have previously recommended that, if you ever find a problem you think can be solved by the application of the MODEL clause, go and have a lie down until the feeling wears off.
I think the MODEL clause is quite simple to understand, when you slowly read the official whitepaper. In my opinion, the whitepaper nicely explains the MODEL clause step by step, adding one feature at a time to the examples, leaving out the most advanced features to the official documentation.
From the whitepaper, I also find it easy to understand when to actually use the MODEL clause. In some examples, it is a lot simpler to do "Excel-spreadsheet-like" operations using MODEL rather than, for instance, using window functions, CONNECT BY, or subquery factoring. Think about Excel. Whenever you want to define a complex rule set for Excel columns, use the MODEL clause. Example Excel spreadsheet rules:
A10 = A9 + A8
B10 = A10 * 5
C10 = MAX(A1:A9)
D10 = C10 / A10
In other words, MODEL is a very powerful SQL spreadsheet!
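To see what those rules compute, here they are written out in plain Python (the cell values A1..A9 are made up; MODEL lets you state the same dependencies declaratively over query rows):

```python
# The four Excel-style rules above, evaluated imperatively.
a = {i: float(i) for i in range(1, 10)}   # assume A1..A9 hold 1.0 .. 9.0

a[10] = a[9] + a[8]                       # A10 = A9 + A8
b10 = a[10] * 5                           # B10 = A10 * 5
c10 = max(a[i] for i in range(1, 10))     # C10 = MAX(A1:A9)
d10 = c10 / a[10]                         # D10 = C10 / A10
```

In the MODEL clause each of these would be one rule in the RULES section, with the row identified by its dimension value (e.g. `a[10] = a[9] + a[8]`), and Oracle resolves the evaluation order from the dependencies.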
The best explanation is in the official white paper. It uses the SH demo schema and you really need it installed.
http://www.oracle.com/technetwork/middleware/bi-foundation/10gr1-twp-bi-dw-sqlmodel-131067.pdf
I don't think they do a very good job explaining this. It basically lets you load data into an array and then loop through the array using straight SQL, instead of having to write procedural logic. A lot of the terms are based on spreadsheet terms (they are used in the Excel Help), so if you aren't familiar with them from Excel, this can be confusing.
They should have drawn a picture for each of the queries, showing the array that gets created and then how you loop through it. The syntax looks to be based on Excel syntax; I'm not sure whether this is common to all spreadsheet tools.
It has uses. Bin fitting is the most common; see the 2nd example. This is basically a complex GROUP BY where you are grouping by a range, but that range can change, which normally requires procedural logic. The example gives three ways to do it, one of which is the MODEL clause.
http://www.oracle.com/technetwork/issue-archive/2012/12-mar/o22asktom-1518271.html
I think people (often managers) who do complex spreadsheet calculations may have an easier time seeing uses for this and getting the lingo.