Fuzzy Logic - Aggregating fuzzy rules

I am starting out in fuzzy logic and I have a model with several rules. The way I am aggregating them so I can defuzzify is by taking the maximum of each rule (that is how I saw it done in an example from the library that I am using). The problem is that if one of my rules returns a value that is too high, the other rules become irrelevant to the output. My output kind of saturates. Are there other ways to aggregate fuzzy rules so that this does not happen?

You should look into T-norms and T-conorms. Once you find out which T-norms and T-conorms your library supports, you can choose the one that fits your needs best.
You are currently using the maximum T-conorm. So if one rule result is 0.8, the end result will always be 0.8 as long as every other rule result is smaller than 0.8.
But if you use another T-conorm, for example the probabilistic sum S(a, b) = a + b - a*b, that is no longer the case.
Example:
Rule1 = 0.5
Rule2 = 0.6
EndResult = 0.5 + 0.6 - 0.5 * 0.6 = 0.8
Now both results have an influence on the end result, not just the bigger one.
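Here is a minimal sketch in plain Python - no particular fuzzy library assumed - that compares the two T-conorms on the same rule activations:

from functools import reduce

def s_max(a, b):
    """Maximum T-conorm: only the strongest rule survives."""
    return max(a, b)

def s_probsum(a, b):
    """Probabilistic sum T-conorm: S(a, b) = a + b - a*b."""
    return a + b - a * b

rules = [0.5, 0.6]  # activation strengths of the individual rules
print(reduce(s_max, rules))      # 0.6 -> the weaker rule is irrelevant
print(reduce(s_probsum, rules))  # 0.8 -> both rules contribute

Because the probabilistic sum is associative, folding it over any number of rule activations is well defined, just like the maximum.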

Related

Kotlin: Why do these two implementations of log base 10 give different results on specific inputs?

println(log(it.toDouble(), 10.0).toInt()+1) // n1
println(log10(it.toDouble()).toInt() + 1) // n2
I had to count the "length" of a number in base n for needs unrelated to this question and stumbled upon a bug (or rather unexpected behavior): for it == 1000 these two functions give different results.
n1(1000) = 3,
n2(1000) = 4.
Checking the values before conversion to Int resulted in:
n1_double(1000) = 3.9999999999999996,
n2_double(1000) = 4.0
I understand that some floating-point arithmetic magic is involved, but what is especially weird to me is that for 100, 10000 and other inputs that I checked, n1 == n2.
What is special about it == 1000? How do I ensure that log gives me the intended result (4, not 3.99...)? Right now I can't even figure out which cases I need to double-check, since it is not just powers of 10; it is 1000 (and probably some other numbers) specifically.
I looked into the implementations of log() and log10(); log is implemented as
if (base <= 0.0 || base == 1.0) return Double.NaN
return nativeMath.log(x) / nativeMath.log(base) // log() here is a natural logarithm
while log10 is implemented as
return nativeMath.log10(x)
I suspect the division in the first case is the reason for the error, but I can't figure out why it causes an error only in specific cases.
I also found this question:
Python math.log and math.log10 giving different results
But I already know that one is more precise than the other. However, there is no log10 analogue for an arbitrary base n, so I'm curious about the reason WHY it is specifically 1000 that goes wrong.
PS: I understand there are ways of computing the length of a number without floating-point arithmetic and base-n logarithms, but at this point it is scientific curiosity.
but I can't figure out why it causes an error only in specific cases.
return nativeMath.log(x) / nativeMath.log(base)
//log() here is a natural logarithm
Consider x = 1000 and nativeMath.log(x). The natural logarithm is not exactly representable. It is near
6.90775527898213_681... (Double answer)
6.90775527898213_705... (closer answer)
Consider base = 10 and nativeMath.log(base). The natural logarithm is not exactly representable. It is near
2.302585092994045_901... (Double)
2.302585092994045_684... (closer answer)
The only exactly correct nativeMath.log(x) for a finite x is when x == 1.0.
The quotient of the division of 6.90775527898213681... / 2.302585092994045901... is not exactly representable. It is near 2.9999999999999995559...
The conversion of the quotient to text is not exact.
So we have 4 computation errors with the system giving us a close (rounded) result instead at each step.
Sometimes these rounding errors cancel out in a way we find acceptable and the value of "3.0" is reported. Sometimes not.
Performed with higher-precision math, it is easy to see that log(1000) came out less than the higher-precision answer while log(10) came out more. For a division, these two round-off errors in opposite directions compound, leaving the quotient extra off (low) - 1 ULP lower than hoped.
When log(x, 10) is computed for another x that is a power of 10, and log(x) is slightly more than the higher-precision answer, I'd expect the quotient to result in a 1 ULP error less often. Perhaps it is 50/50 across all powers of 10.
log10(x) is designed to compute the logarithm in a different fashion, exploiting the fact that the base is 10.0, and it is certainly exact for powers of 10.
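The same two computation paths are easy to reproduce outside Kotlin. Here is a small Python sketch of the effect (CPython's math.log(x, base) is likewise computed as log(x) / log(base)):

import math

# Compare the division-based path against the dedicated log10 path.
for x in (100, 1000, 10000):
    via_division = math.log(x) / math.log(10)  # what Kotlin's log(x, 10.0) does
    via_log10 = math.log10(x)                  # what Kotlin's log10(x) does
    print(x, via_division, via_log10)

# On a typical IEEE-754 platform the x = 1000 row prints
#   1000 2.9999999999999996 3.0
# which is exactly the 1 ULP quotient error described above.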

pyomo: minimal production time / BIG M

I am looking for a way to model a minimum required duty cycle in an optimization model.
After several attempts, however, I have now reached the end of my knowledge and hope for some inspiration here.
The idea is that a binary variable mdl.ontime is set so that the sum of successive ontime values is greater than or equal to the minimum duty cycle:
def ontime(mdl, t):
    min_on_time = 3  # minimum on time in h
    if t < min_on_time:
        return mdl.ontime[t] == 0
    return sum(mdl.ontime[t - i] for i in range(min_on_time)) >= min_on_time
That works so far, but only as long as the variable mdl.ontime is not tied to anything else in the model.
Then I tried three different constraints; unfortunately they all gave the same result: CPLEX only finds infeasible results.
The first variant was:
def flag(mdl, t):
    return mdl.ontime[t] + (mdl.production[t] >= 0.1) >= 2
So if mdl.ontime is 1 and mdl.production is greater than or equal to 0.1 (that assumption is exact enough), the sum should be greater than or equal to 2: a logical addition term.
The second attempt was quite similar to the first:
def flag(mdl, t):
    return mdl.ontime[t] >= (mdl.production[t] >= 0.1)
If mdl.ontime is 1, it should be greater than or equal to the result of comparing mdl.production with 0.1.
And the third with a big-M variable:
def flag(mdl, t):
    bigM = 10**6
    return mdl.ontime[t] * bigM >= mdl.production[t]
bigM should be large enough in my case...
None of them work at all... and I have no idea why CPLEX reports that there is only an infeasible solution.
Basically, the model runs if I leave the ontime integration out.
Do you guys have any more ideas how I could implement this?
Many greetings,
Mathias
It isn't really clear what the desired relationship between your variables/constraints is. That said, I don't think this is legal. I'm surprised that it isn't raising an error... and if it isn't raising an error, I'm pretty sure it isn't doing what you think:
def flag(mdl, t):
    return mdl.ontime[t] + (mdl.production[t] >= 0.1) >= 2
You are essentially burying an inferred binary variable in there with the test on mdl.production, which isn't going to work, I believe. You probably need to introduce another variable or such.
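For illustration, here is a minimal sketch of the usual pair of big-M linking constraints; the variable names, horizon, and bounds are assumptions for the example, not taken from the original model:

import pyomo.environ as pyo

mdl = pyo.ConcreteModel()
mdl.T = pyo.RangeSet(0, 23)  # hypothetical hourly horizon
mdl.ontime = pyo.Var(mdl.T, domain=pyo.Binary)
mdl.production = pyo.Var(mdl.T, domain=pyo.NonNegativeReals)

big_m = 100.0   # an upper bound on production; keep it as tight as possible
min_prod = 0.1  # minimum production level while the unit is on

# production is forced to 0 whenever ontime is 0 ...
def upper_link(mdl, t):
    return mdl.production[t] <= big_m * mdl.ontime[t]
mdl.upper_link = pyo.Constraint(mdl.T, rule=upper_link)

# ... and forced up to at least min_prod whenever ontime is 1,
# so the binary really tracks whether the unit is producing.
def lower_link(mdl, t):
    return mdl.production[t] >= min_prod * mdl.ontime[t]
mdl.lower_link = pyo.Constraint(mdl.T, rule=lower_link)

With this pair of constraints there is no comparison like mdl.production[t] >= 0.1 buried inside an expression; the binary variable carries that logic, and the solver sees only linear constraints.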

Issue with "CDbl" function while subtracting values of two textboxes

I am trying to subtract the values of two textboxes in Visual Studio 2012.
Example input and results:
textbox1 - textbox2 = label1
25.9 - 25.4 = 0.50 (it's ok)
173.07 - 173 = 0.06 (should be 0.07)
144.98 - 142.12 = 2.85 (should be 2.86)
My code (I tried all three lines separately):
label1.text = (Convert.ToDouble(textbox1.text) - Convert.ToDouble(textbox2.text)).ToString
label1.text = (CDbl(textbox1.text) - CDbl(textbox2.text)).ToString
label1.text = (Val(textbox1.text) - Val(textbox2.text)).ToString
This error (it may not be an error) occurs sometimes, not every time.
What am I missing here? And what should I use instead of "CDbl" ?
what should I use instead of "CDbl" ?
When you start with a string, the best option is Double.Parse() or Double.TryParse(), depending on the possibility of bad data.
But even that's not enough in this case. Computers use something called IEEE754 for floating point arithmetic. This scheme for encoding floating point numbers is designed as an efficient way to represent numbers in binary, and further has direct support in CPUs for arithmetic operations, meaning it is much faster than any available alternative (it's not even close). Pretty much every programming platform uses it.
The downside is that there is some loss of precision. When treated as IEEE754 doubles, 173.07 - 173 produces 0.06999999999999318..., not 0.07.
You can solve this in two ways:
Round the results. This isn't an option when using division, but with just addition and subtraction you can track significant digits and round to get exact results. This is a pain, though.
Use the Decimal type. Decimal isn't perfect, but it does have a much greater degree of precision (at the cost of some performance), and for your sample data it produces exact results.
In short, try this code:
label1.text = (Decimal.Parse(textbox1.text) - Decimal.Parse(textbox2.text)).ToString()
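The same trade-off can be demonstrated outside VB; here is a quick Python sketch of the idea, with Python's decimal module standing in for .NET's Decimal:

from decimal import Decimal

# A binary double cannot represent 173.07 exactly, so the difference drifts:
print(173.07 - 173)                        # roughly 0.06999999999999318, not 0.07
# A base-10 decimal type keeps this subtraction exact:
print(Decimal("173.07") - Decimal("173"))  # 0.07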

Multiply two fuzzy numbers

Can anyone please give me a step-by-step procedure for multiplying two fuzzy numbers A and B, where
uA(x) = { (x+1)/2  if -1 < x <= 1
        { (3-x)/2  if  1 < x <= 3
        { 0        otherwise

uB(x) = { (x-1)/2  if 1 < x <= 3
        { (5-x)/2  if 3 < x <= 5
        { 0        otherwise
Multiplication is a bit tricky, even if you have simple triangular membership functions. There is a step-by-step description here: http://debian.fmi.uni-sofia.bg/~cathy/SoftCpu/FUZZY_BOOK/chap5-3.pdf
In most cases, however, the simpler approximation described in example 5.12 (p. 8) is probably good enough. In it you just multiply each of the three MF numbers of one fuzzy number with the three corresponding numbers of the other. (The results are, however, not very intuitive for numbers close to 0 - anyone care to comment on/explain this?)
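A minimal sketch of that elementwise approximation, using the (left, peak, right) triangles implied by the membership functions above; this illustrates the shortcut, not the exact extension-principle product:

def approx_multiply(a, b):
    """Elementwise product of two (left, peak, right) triangular numbers."""
    return tuple(x * y for x, y in zip(a, b))

A = (-1.0, 1.0, 3.0)  # from uA: support (-1, 3), peak at 1
B = (1.0, 3.0, 5.0)   # from uB: support (1, 5), peak at 3
print(approx_multiply(A, B))  # (-1.0, 3.0, 15.0)

# Caveat from the answer above: the exact support of A*B is [-5, 15]
# (the endpoint products are -1*1 = -1, -1*5 = -5, 3*1 = 3, 3*5 = 15),
# so the naive left endpoint of -1 shows how the shortcut misbehaves
# when a fuzzy number straddles 0.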

J: About optimally applying a sequence of filters to a list

Let {f(i)}, i = 1,...,n be a sequence of filters (each item of a list is mapped to a boolean value) with the property: if f(i) = 1 for some item of the list, then f(j) = 1 for every j > i on the same item. A very simple example:
[ t =: i.5 NB. sample data
0 1 2 3 4
f1 =: 2&> NB. is greater than 2
f2 =: 2&> +. 0=2&| NB. is greater than 2 OR even
(f1 ,: f2) t
1 1 0 0 0
1 1 1 0 1
(#~ f1 +. f2) t
0 1 2 4
Obviously there is no need to apply f2 to the first 2 items of t (those have already been accepted by f1).
Question: How do I avoid applying f(j) to items that were already accepted by f(i), for j > i?
My naive implementation
I. -. f1 t gives the indices of those items that are not accepted by f1. So why not select them, apply f2, and amend? I think that this is the wrong way, because the approach uses a lot of memory, right?
t #~ (f1 t) (I. -. f1 t)}~ f2 (I. -. f1 t) { t
0 1 2 4
And it's harder to code for many filters.
While it is possible to avoid computation in the manner you seek here, doing so tends to run "against the grain" of J. Notably, doing so is likely to increase the time and space requirements.
One technique would be to use the result of f1 to filter the argument to f2 and then expand the result of f2 to align with the result of f1. This will involve creating a new array in memory in order to have exactly the necessary values, plus a temporary result array, and also computation over that result to make it conform to the shape of the original argument. These things are not free.
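As an illustration only - in Python rather than J, and with made-up helper names - the technique in the previous paragraph looks roughly like this:

def filter_cascade(items, f1, f2):
    """Apply f2 only to the items that f1 rejected, then merge the masks."""
    items = list(items)
    mask1 = [f1(x) for x in items]
    # compute f2 lazily, and only where f1 said 0 ...
    pending = (f2(x) for x, keep in zip(items, mask1) if not keep)
    # ... then expand those results back into alignment with mask1
    merged = [keep or next(pending) for keep in mask1]
    return [x for x, keep in zip(items, merged) if keep]

t = range(5)
f1 = lambda x: x < 2                # is less than 2
f2 = lambda x: x < 2 or x % 2 == 0  # is less than 2 OR even
print(filter_cascade(t, f1, f2))    # [0, 1, 2, 4]

The temporary mask, the compacted intermediate results, and the re-expansion step are exactly the extra arrays and extra passes described above.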
Most importantly, this sort of micro-management involves a move away from what J programmers call array-thinking. Solutions that involve working with nouns "as a whole" (and as conforming rectangles) are often amenable to concise expression in J.
For certain types of calculation on certain types of data the class of problem you have posed may well be important. In such cases it could be worth contriving some technique for communicating partial results and selectively avoiding avoidable application of a verb. I'd guess Power (^:) would be useful in many such efforts. But these solutions would all be quite specific to circumstances where actual performance problems were appearing.
I take the risk of making the following claim: there is no general answer to your question, because the generalities of J do not support fine-grained intervention. I suspect you have a solid understanding that J exhibits this bias; that bias is what makes the question you posed a technically difficult one.
Since the solutions to this problem will very often not run in less time, nor in less memory, nor assist brevity of expression or functional clarity, "optimization" seems an unlikely label for them.