Expressing a model as LTL - Spin

Basically, model checking deals with a model 'm' (a behavioral description of the system) and a property 'p' that the system shall satisfy. Given both artifacts, a model checker determines whether the model satisfies the property.
My question is whether it's possible to specify the model 'm' itself as an LTL formula and then check whether this LTL-specified model satisfies the property 'p'.
Theoretically, I believe this approach should work, because we can generate two Büchi automata: one for the LTL formula describing the model 'm' and one for the negation of the property 'p'. If the intersection of the two (non-deterministic) automata is empty, then the model 'm' given as LTL satisfies the property.
Can someone give me a hint? Is it possible?

Interesting question: the short answer is probably no.
https://en.wikipedia.org/wiki/Linear_temporal_logic_to_B%C3%BCchi_automaton
Typically, during model checking a translation of LTL to Büchi automata is performed. This is possible because LTL is strictly less expressive than Büchi automata: every LTL formula can be translated into an equivalent Büchi automaton, but not the other way round. However, if you have some pre-existing design, you are unlikely to be able to capture it as an LTL formula; for example, when the design has very many states, writing an LTL formula that describes them becomes impractical.
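For completeness, the check the question describes can be sketched roughly like this (a minimal sketch only, assuming the Spot library's Python bindings and made-up formulas; the names spot.translate, spot.product and is_empty should be double-checked against your installed version):

    import spot  # assumption: Spot's Python bindings are installed

    model_ltl    = 'G a'   # hypothetical LTL description of the model m
    property_ltl = 'F a'   # hypothetical property p

    a_model    = spot.translate(model_ltl)               # Buechi automaton for m
    a_neg_prop = spot.translate('!(%s)' % property_ltl)  # Buechi automaton for !p

    # m (given as LTL) satisfies p iff the product accepts no word at all
    print(spot.product(a_model, a_neg_prop).is_empty())  # True: 'G a' implies 'F a'

Note that the emptiness check is done against the negated property, not against p itself.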

Related

Best neural network model for a lot of if/else conditions

I have a large set of data. The data has 13 parameters, those parameters depend on each other, and the dependency is established by some rules.
Example: if, say, parameter_one is "A" and parameter_two is "B", and there is a rule stating that parameter_one==A and parameter_two==B => parameter_three==C, then parameter_three should (ideally) be C. So basically it's a lot of if/else statements.
Now, I just have the data, and I have to make a machine learning model learn the rules, so that it flags any incoming data that doesn't obey them: in the example above, if parameter_three had been 'D' instead of 'C', that would be a violation of the rule. How can I make the model learn these rules?
Also, the rules can't be written manually, since there are a lot of them and that approach doesn't scale.
My try
I thought of using an autoencoder and passing the training data through it. Then, for each record, I would use the reconstruction loss to check whether it is a violation or not. However, it's overfitting and not working well on test data.
I have also previously tried a deep neural network, but it didn't help either. Can anyone help me out here?
Thanks in advance.
You could use Association Rule Mining algorithms like Apriori or FP-Growth to generate the frequent item sets.
From the frequent item sets you can generate Association rules.
Once you have the association rules, you can assign a weight to each rule (or use some parameter like confidence/lift of the rule).
When you want to test it on a new data entry, do weighted summing (if the new entry satisfies a rule, use the rule's weight to calculate the score/sum for the new entry).
If the generated score for the new entry is greater than a chosen threshold, you can say the new entry passes the preset rules otherwise it's in violation of the rules.
Weighted summing gives you the flexibility to assign importance to the association rules. Alternatively, you can simply say that if the new entry fails to satisfy even one of the association rules, it is in violation of the preset rules.
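As a rough illustration of that pipeline (a sketch only, assuming the mlxtend library and a made-up parameter=value encoding of the records; here the rule's confidence is used as its weight):

    import pandas as pd
    from mlxtend.preprocessing import TransactionEncoder
    from mlxtend.frequent_patterns import apriori, association_rules

    # each record becomes a "transaction" of parameter=value items (hypothetical encoding)
    records = [
        ['parameter_one=A', 'parameter_two=B', 'parameter_three=C'],
        ['parameter_one=A', 'parameter_two=B', 'parameter_three=C'],
        ['parameter_one=X', 'parameter_two=Y', 'parameter_three=Z'],
    ]

    te = TransactionEncoder()
    df = pd.DataFrame(te.fit_transform(records), columns=te.columns_)

    frequent = apriori(df, min_support=0.3, use_colnames=True)
    rules = association_rules(frequent, metric='confidence', min_threshold=0.8)

    def score(entry, rules):
        # weighted sum: a rule contributes its confidence when the entry satisfies it
        items = set(entry)
        total = 0.0
        for _, r in rules.iterrows():
            if r['antecedents'] <= items and r['consequents'] <= items:
                total += r['confidence']
        return total  # compare against a chosen threshold to flag violations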

Machine Learning text comparison model

I am creating a machine learning model that essentially returns the correctness of one text relative to another.
For example: “the cat and a dog”, “a dog and the cat”. The model needs to be able to identify that some words (“cat”/“dog”) are more important/significant than others (“a”/“the”). I am not interested in conjunction words etc. I would like to be able to tell the model which words are the most “significant” and have it determine how correct text 1 is relative to text 2, with the “significant” words bearing more weight than others.
It also needs to be able to recognise that phrases don’t necessarily have to be in the same order. The two above sentences should be an extremely high match.
What is the basic algorithm I should use to go about this? Is there an alternative to just creating a dataset with thousands of example texts and a score of correctness?
I am only after a broad overview/flowchart/process/algorithm.
I think TF-IDF might be a good fit for your problem, because:
Words that occur in many documents (say, 90% of your sentences/documents contain the conjunction 'and') receive much less emphasis, which effectively gives more weight to document-specific phrasing (this is the IDF part).
Word order does not matter in the Term Frequency (TF) part, as opposed to methods using sliding windows etc.
It is very lightweight compared to representation-oriented methods like the one mentioned above.
Big drawback: depending on the size of the corpus, your data may have very many dimensions (one dimension per unique word); you can use stemming/lemmatization to mitigate this problem to some degree.
You can calculate the similarity between two TF-IDF vectors using cosine similarity, for example.
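A minimal sketch of that with scikit-learn (in practice you would fit the vectorizer on your whole corpus, since IDF is only informative with many documents):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    corpus = ["the cat and a dog", "a dog and the cat"]   # plus the rest of your texts

    vectorizer = TfidfVectorizer()            # optionally pass stop_words='english'
    tfidf = vectorizer.fit_transform(corpus)  # one row of TF-IDF weights per text

    # cosine similarity between the first two TF-IDF vectors (1.0 = identical bag of words)
    print(cosine_similarity(tfidf[0], tfidf[1])[0, 0])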
EDIT: Woops, this question is 8 months old, sorry for the bump, maybe it will be of use to someone else though.

Why is the existential necessary in strongest postconditions?

Every formulation of the strongest postcondition predicate transformer I have seen presents the assignment rule as follows:
sp(X:=E, P) = ∃v. (X=E[v/X] ∧ P[v/X])
I am wondering, why is the existential (and thus the existentially quantified variable "v") necessary in the above rule? It seems to me the strongest postcondition predicate transformer is almost identical to symbolic evaluation, in that you maintain a state (a mapping from variables to values) and a path condition (a predicate that must be true at a particular point in the program). Yet symbolic evaluation does not rely on an existential quantifier.
So, I think I must be missing something here. Any help is appreciated!
I will give an intuitive description, since you already have some background in symbolic evaluation.
If all you have is an arbitrary mapping of the variables, you cannot say anything about state changes in the program before looking at them during the analysis.
Symbolic evaluation remembers each chosen path (as a separation of the state space), so the old values of the variables live in the evaluator's state and do not need to be contained in the formula handed to the solver.
Here, however, you reason about every possible path with a single formula, so the formula itself must be able to refer to the old value of the assigned variable; that is the role of the existentially quantified variable.
If you kept the variable free in the formula instead, you would be reasoning about only one of the possible runs. If you know that the variable does not induce other paths, you can simplify this behavior.
With the weakest (liberal) precondition, by contrast, you know from which possible states you start, and you wrap all paths together to prove properties about your system.
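A concrete illustration (my own example) of why the quantifier is needed:

sp(x := x + 1, x = 0) = ∃v. (x = v + 1 ∧ v = 0) ≡ (x = 1)

Dropping the quantifier and substituting naively would give x = x + 1 ∧ x = 0, which is unsatisfiable. The bound variable v names the old value of x, which is exactly the piece of information a symbolic evaluator keeps in its state map instead of in the formula.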

PT algorithm for ML type inference

For the PT algorithm for ML type inference to work, the input program expression has to have the property that every bound variable is distinct. Can somebody explain it with an example?
The point is simply that variables bound by different binders are different from each other, and hence may have different types. So it is good practice to rename them, in order to avoid confusion and to be able to talk about the type of "x" without having to worry about which of the binders of "x" we are referring to.
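For example (my own illustration): in a term such as ((λx. x + 1) 3, (λx. not x) true), an inference algorithm that keys type assumptions on variable names would generate the conflicting constraints t_x = int and t_x = bool, even though the term is perfectly well typed. After renaming the binders apart, ((λx1. x1 + 1) 3, (λx2. not x2) true), each bound variable gets its own type variable and inference goes through.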

Building a ranking with a genetic algorithm

Question after a BIG edit:
I need to build a ranking using a genetic algorithm. I have data like this:
P(a>b)=0.9
P(b>c)=0.7
P(c>d)=0.8
P(b>d)=0.3
Now, let's interpret a, b, c, d as names of football teams, and P(x>y) as the probability that x beats y. We want to build a ranking of the teams. We lack some observations: P(a>d) and P(a>c) are missing, because no matches were played between a and d or between a and c.
The goal is to find the ordering of team names that best describes the current situation in this four-team league.
If we have only 4 teams, the solution is straightforward: first we compute the probabilities of all 4! = 24 orderings of the four teams, ignoring the missing values:
P(abcd)=P(a>b)P(b>c)P(c>d)P(b>d)
P(abdc)=P(a>b)P(b>c)(1-P(c>d))P(b>d)
...
P(dcba)=(1-P(a>b))(1-P(b>c))(1-P(c>d))(1-P(b>d))
and we choose the ranking with the highest probability. I don't want to use any other fitness function.
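A small Python sketch of that brute-force step (my own illustration; the missing pairs P(a>d) and P(a>c) are simply left out of the product):

    from itertools import permutations

    # observed probabilities P(x > y); missing pairs are simply absent
    p_wins = {('a', 'b'): 0.9, ('b', 'c'): 0.7, ('c', 'd'): 0.8, ('b', 'd'): 0.3}

    def ordering_probability(order):
        # multiply P(x > y) for every observed pair, or 1 - P(x > y)
        # when the ordering places y before x
        pos = {team: i for i, team in enumerate(order)}
        prob = 1.0
        for (x, y), p in p_wins.items():
            prob *= p if pos[x] < pos[y] else 1.0 - p
        return prob

    best = max(permutations('abcd'), key=ordering_probability)
    print(best, ordering_probability(best))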
My question:
As the number of permutations of n elements is n!, computing the probabilities of all orderings is infeasible for large n (my n is about 40). I want to use a genetic algorithm for this problem.
The mutation operator is a simple swap of the positions of two (or more) elements of the ranking.
But how do I perform crossover of two orderings?
Could P(abcd) be interpreted as the cost of the path 'abcd' in an asymmetric TSP, where the cost of travelling from x to y differs from the cost of travelling from y to x and P(x>y) = 1 - P(y>x)? There are many crossover operators for the TSP, but I think I have to design my own crossover operator, because my problem is slightly different from TSP. Do you have any ideas for a solution or a framework for conceptual analysis?
The easiest way, on both the conceptual and the implementation level, is to use a crossover operator that exchanges suborderings between two solutions:
CrossOver(ABcD,AcDB) = AcBD
For a random subset of the elements (in this case 'a, b, d', shown in capital letters), we copy the first parent's subordering (the sequence 'a, b, d' in the order it appears there) into those positions of the second ordering.
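A possible Python sketch of this operator (my own reading of it; the subset of elements is drawn at random):

    import random

    def subordering_crossover(p1, p2, k=None):
        # pick a random subset of k elements and impose their relative order
        # from p1 onto p2; the remaining elements keep their places in p2
        k = k if k is not None else len(p1) // 2
        chosen = set(random.sample(p1, k))
        sub = iter([x for x in p1 if x in chosen])   # their order in p1
        return [next(sub) if x in chosen else x for x in p2]

    # with p1 = list('abcd'), p2 = list('acdb') and the subset {a, b, d}
    # this yields ['a', 'c', 'b', 'd'], i.e. CrossOver(ABcD, AcDB) = AcBD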
Edit: an asymmetric TSP could be turned into a symmetric TSP, but with forbidden suborderings, which makes the GA approach unsuitable.
It's definitely an interesting problem, and it seems most of the answers and comments have focused on the semantic aspects of the problem (i.e., the meaning of the fitness function, etc.).
I'll chip in some information about the syntactic elements -- how do you do crossover and/or mutation in ways that make sense. Obviously, as you noted with the parallel to the TSP, you have a permutation problem. So if you want to use a GA, the natural representation of candidate solutions is simply an ordered list of your points, careful to avoid repetition -- that is, a permutation.
TSP is one such permutation problem, and there are a number of crossover operators (e.g., Edge Assembly Crossover) that you can take from TSP algorithms and use directly. However, I think you'll have problems with that approach. Basically, the problem is this: in TSP, the important quality of solutions is adjacency. That is, abcd has the same fitness as cdab, because it's the same tour, just starting and ending at a different city. In your example, absolute position is much more important than this notion of relative position. abcd means, in a sense, that a is the best point -- it's important that it came first in the list.
The key thing you have to do to get an effective crossover operator is to account for what the properties are in the parents that make them good, and try to extract and combine exactly those properties. Nick Radcliffe called this "respectful recombination" (note that the paper is quite old, and the theory is now understood a bit differently, but the principle is sound). Taking a TSP-designed operator and applying it to your problem will end up producing offspring that try to conserve irrelevant information from the parents.
You ideally need an operator that attempts to preserve absolute position in the string. The best one I know of offhand is known as Cycle Crossover (CX). I'm missing a good reference off the top of my head, but I can point you to some code where I implemented it as part of my graduate work. The basic idea of CX is fairly complicated to describe, and much easier to see in action. Take the following two parents:
abcdefgh
cfhgedba
Pick a starting point in parent 1 at random. For simplicity, I'll just start at position 0 with the "a".
Now drop straight down into parent 2, and observe the value there (in this case, "c").
Now search for "c" in parent 1. We find it at position 2.
Now drop straight down again, and observe the "h" in parent 2, position 2.
Again, search for this "h" in parent 1, found at position 7.
Drop straight down and observe the "a" in parent 2.
At this point note that if we search for "a" in parent one, we reach a position where we've already been. Continuing past that will just cycle. In fact, we call the sequence of positions we visited (0, 2, 7) a "cycle". Note that we can simply exchange the values at these positions between the parents as a group and both parents will retain the permutation property, because we have the same three values at each position in the cycle for both parents, just in different orders.
Make the swap of the positions included in the cycle.
Note that this is only one cycle. You then repeat this process, starting from a new (unvisited) position each time, until all positions have been included in a cycle. After the one iteration described in the above steps, you get the following strings (where an "X" denotes a position in the cycle where the values were swapped between the parents).
cbhdefga
afcgedbh
X X    X
Just keep finding and swapping cycles until you're done.
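A compact Python sketch of CX (my own implementation of the steps above; note that the usual formulation exchanges only every second cycle, since swapping every cycle would simply exchange the two parents wholesale):

    def cycle_crossover(p1, p2):
        # Cycle crossover (CX) for two permutations of the same elements.
        # Alternate cycles are exchanged, so every gene in a child keeps a
        # position it already occupied in one of the parents.
        n = len(p1)
        index_in_p1 = {v: i for i, v in enumerate(p1)}   # value -> position in p1
        child1, child2 = list(p1), list(p2)
        visited = [False] * n
        swap = False                                     # exchange every second cycle
        for start in range(n):
            if visited[start]:
                continue
            cycle, pos = [], start
            while not visited[pos]:                      # trace one cycle
                visited[pos] = True
                cycle.append(pos)
                pos = index_in_p1[p2[pos]]
            if swap:
                for i in cycle:
                    child1[i], child2[i] = p2[i], p1[i]
            swap = not swap
        return child1, child2

    # cycle_crossover(list('abcdefgh'), list('cfhgedba')) returns the pair
    # ['a','f','c','g','e','d','b','h'], ['c','b','h','d','e','f','g','a']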
The code I linked from my github account is going to be tightly bound to my own metaheuristics framework, but I think it's a reasonably easy task to pull the basic algorithm out from the code and adapt it for your own system.
Note that you can potentially gain quite a lot from doing something more customized to your particular domain. I think something like CX will make a better black box algorithm than something based on a TSP operator, but black boxes are usually a last resort. Other people's suggestions might lead you to a better overall algorithm.
I've worked on a somewhat similar ranking problem and followed a technique similar to what I describe below. Does this work for you?
Assume the unknown value of an object diverges from your estimate via some distribution, say, the normal distribution. Interpret your ranking statements such as P(a > b) = 0.9 as the statement "the value of a lies at the 90th percentile of the distribution centered on b".
For every statement:
def realArrival = calculate a's location on a distribution centered on b
def arrivalGap = | realArrival - expectedArrival |
def fitness = Σ arrivalGap
Fitness function is MIN(fitness)
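In Python-ish terms, that fitness could look roughly like this (my own reading of the idea, assuming a normal distribution with unit standard deviation via scipy.stats.norm):

    from scipy.stats import norm

    def fitness(values, statements, sigma=1.0):
        # values: dict mapping each object/team to its estimated value on a common scale
        # statements: list of (x, y, p) triples meaning P(x > y) = p
        total_gap = 0.0
        for x, y, p in statements:
            # expected location: x should sit at the p-th quantile of a
            # normal distribution centred on y's value
            expected_arrival = values[y] + sigma * norm.ppf(p)
            real_arrival = values[x]
            total_gap += abs(real_arrival - expected_arrival)
        return total_gap   # the GA minimises this sum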
FWIW, my problem was actually a bin-packing problem, where the equivalent of your "rank" statements were user-provided rankings (1, 2, 3, etc.). So not quite TSP, but NP-Hard. OTOH, bin-packing has a pseudo-polynomial solution proportional to accepted error, which is what I eventually used. I'm not quite sure that would work with your probabilistic ranking statements.
What an interesting problem! If I understand it, what you're really asking is:
"Given a weighted, directed graph, with each edge-weight in the graph representing the probability that the arc is drawn in the correct direction, return the complete sequence of nodes with maximum probability of being a topological sort of the graph."
So if your graph has N edges, there are 2^N graphs of varying likelihood, with some orderings appearing in more than one graph.
I don't know if this will help (very brief Google searches did not enlighten me, but maybe you'll have more success with more perseverance) but my thoughts are that looking for "topological sort" in conjunction with any of "probabilistic", "random", "noise," or "error" (because the edge weights can be considered as a reliability factor) might be helpful.
I strongly question your assertion, in your example, that P(a>c) is not needed, though. You know your application space best, but it seems to me that specifying P(a>c) = 0.99 will give a different fitness for f(abc) than specifying P(a>c) = 0.01.
You might want to throw in "Bayesian" as well, since you might be able to start to infer values for (in your example) P(a>c) given your conditions and hypothetical solutions. The problem is that "topological sort" plus "Bayesian" is going to give you a whole bunch of hits related to Markov chains and Markov decision processes, which may or may not be helpful.