Give the triple representation of a statement x:= y[i] - optimization

Give the triple representation of the statement x := y[i]. I'm having trouble with this one.

You should probably be more specific about the context in which you need the representation; this book has some good information about compiler design. Here is what it would look like using its semantics.
|   | operator | operand1 | operand2 |
|---|----------|----------|----------|
| 1 | []       | y        | i        |
| 2 | :=       | x        | (1)      |
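If it helps to see the idea in code, here is a small illustrative sketch (not tied to any particular textbook's notation) that models triples as tuples, where an operand can refer back to an earlier triple by its number:

# Each triple is (operator, operand1, operand2); ("triple", n) means
# "the result of triple number n". Names and notation are purely illustrative.
triples = [
    ("[]", "y", "i"),            # (1)  y[i]
    (":=", "x", ("triple", 1)),  # (2)  x := result of (1)
]

for n, (op, a, b) in enumerate(triples, start=1):
    print(f"({n}) {op} {a} {b}")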


How to round decimals smaller than .5 to the following number in SQL?

I'm in a situation where I have a large database with over 1,000 products.
Some of them have prices like 12.3, 20.7, 55.1 for example.
| Name | Price |
| -------- | -------------- |
| Product 1| 12.3 |
| Product 2| 20.7 |
| Product 3| 55.1 |
(and so on)...
What I've tried is update prices set price = ROUND (price, 0.1).
The output for this will be:
| Name      | Price (before) | Price (after) |
| --------- | -------------- | ------------- |
| Product 1 | 12.3           | 12.0          |
| Product 2 | 20.7           | 21.0          |
| Product 3 | 55.1           | 55.0          |
The prices with decimals < .5 remain the same, and I'm out of ideas.
I'd appreciate any help.
Note: I need to update all rows. I'm trying to learn about CEILING(), but the examples I've found only show how to use it with SELECT. Any idea how to perform an UPDATE with CEILING() or something similar?
It's not entirely clear what you're asking, but I can tell you the function call as shown makes no sense.
The second argument to the ROUND() function is the number of decimal places, not the size of the value you wish to round to. Additionally, the function only accepts integral types for that argument, so if you pass the value 0.1, it is first cast to an integer, and the result of casting 0.1 to an integer is 0.
We see then, that calling ROUND(price, 0.1) is the same as calling ROUND(price, 0).
If you want to round to the nearest 0.1, that's one decimal place and the correct value for the ROUND() function is 1.
ROUND(price, 1)
Compare results here:
https://dbfiddle.uk/?rdbms=sqlserver_2019&fiddle=7878c275f0f9ea86f07770e107bc1274
Note the trailing 0s remain, because the underlying type of the value is unchanged. If you also want to remove the trailing 0s, then you're really moving into the realm of strings, and for that you should let the client code, application, or reporting tool handle the conversion.
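If it helps to see the same distinction outside SQL, here is a rough Python analogue (illustrative only; Python's round() also takes the number of decimal places as its second argument, though it rejects a non-integer there rather than silently truncating it):

prices = [12.3, 20.7, 55.1]

# ROUND(price, 1): round to one decimal place -- values are unchanged here,
# because they already have only one decimal digit.
print([round(p, 1) for p in prices])   # [12.3, 20.7, 55.1]

# ROUND(price, 0.1) behaves like ROUND(price, 0) in SQL Server, i.e. round
# to zero decimal places (a whole number).
print([round(p, 0) for p in prices])   # [12.0, 21.0, 55.0]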

Multiple Font Style Combinations in vb.net

If I want to create a font with multiple style combinations, like bold AND underline, I have to place the Or operator between them, as in the example below:
lblArt.Font = New Font("Tahoma", 18, FontStyle.Bold Or FontStyle.Underline)
If you place 'and' between them, it won't work and you only get one of the two (the way you'd expect 'or' to behave), even though 'and' would seem like the logical way to do it. What is the reason behind this?
Boolean logic works a bit differently than the way we use the terms in English. What's happening here is that the enumerated FontStyle values are actually bit flags, and in order to manipulate bit flags, you use bitwise operations.
To combine two bit flags, you OR them together. An OR operation combines the two values. So imagine that FontStyle.Bold was 2 and FontStyle.Underline was 4. When you OR them together, you get 6—you've combined them together. In Boolean logic, you can think of an OR operation as returning "true" (i.e., setting that bit in the result) if either of the bits in the two operands are set, and "false" if neither of the bits in the two operands are set.
You can write a truth table for such an operation as follows:
| A | B | A OR B |
|---|---|--------|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 1 |
Notice that the results more closely mirror what we, in informal English, would call "and". If either one has it set, then the result has it set, too.
In contrast to OR, a bitwise AND operation only returns "true" (i.e., sets that bit in the result) if both of the bits in the two operands are set. Otherwise, the result is "false". Again, a truth table can be written:
| A | B | A AND B |
|---|---|---------|
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |
Assuming again that FontStyle.Bold has the value 2 and FontStyle.Underline has the value 4, if you AND them together, you get 0, because the two values have no bits in common. The net result is that you don't get any font styles, which is precisely why it doesn't work when you write FontStyle.Bold And FontStyle.Underline.
In VB, a bitwise OR operation is performed using the Or operator. The And operator performs a bitwise AND operation. So in order to do a bitwise inclusion of values, which is how you combine bit flags, you use the Or operator.
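Here is a quick illustration of the arithmetic (Python purely for brevity; the flag values are made up, but the principle is exactly the same as with FontStyle):

# Hypothetical flag values for illustration only.
BOLD      = 0b010   # 2
UNDERLINE = 0b100   # 4

combined = BOLD | UNDERLINE   # bitwise OR: 0b110 == 6, both bits set
print(combined & BOLD)        # 2 -- the Bold bit is present in the combination
print(combined & UNDERLINE)   # 4 -- so is the Underline bit
print(BOLD & UNDERLINE)       # 0 -- the two flags share no bits, so And-ing
                              #      them yields "no style at all"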
Try this:
lblArt.Font = New Drawing.Font("Tahoma", _
                               18, _
                               FontStyle.Bold Or FontStyle.Italic)
Use New Drawing.Font instead of Font alone.

Grammar for given language

I am given the language {w ∈ {a,b}∗ : |w|a = |w|b + 1} and am asked to find a grammar for it.
I have come up with the following:
S->aSb | bSa | aAa | bBb | a
A->bS
B->?
I was wondering if this is correct, and if not, why?
It's not correct, because it cannot generate the valid sentence:
baaab
which has one more a than b. It should be obvious that this sentence cannot be generated, because every sentence generated by your original grammar has different first and last characters.
Edit: The edited grammar is also not correct, because the productions:
S -> ... | aAa | a | ...
A -> bS
is equivalent to (by substituting the RHS of A for its use in S):
S -> ... | abSa | a | ...
which can derive, for example:
S -> abSa -> abaa
and abaa has three a's but only one b, so it is not in the language.
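If it helps while experimenting with candidate grammars, here is a small sketch (a hypothetical helper, not part of the question) that checks the membership condition directly:

def in_language(w: str) -> bool:
    """True if w is over {a, b} and has exactly one more 'a' than 'b'."""
    return set(w) <= {"a", "b"} and w.count("a") == w.count("b") + 1

print(in_language("baaab"))  # True  -- a correct grammar must be able to derive this
print(in_language("abaa"))   # False -- so a grammar that derives it over-generates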

Robot Framework - Case sensitive

I need to know if there is a workaround for this error:
When I use Keywords like:
Location Should Be
Location Should Contain
(Which are both part of the Selenium2Library.)
I get this error:
"Location should have been 'http://www.google.pt' but was 'http://WwW.GooGLe.Pt'
I think that is because robot framework is natively case sensitive when comparing strings.
Any help?
Ty
EDIT
Question edited to clarify some subjects.
Luckily, Robot Framework allows keywords to be written in Python.
MyLibrary.py
def Compare_Ignore_Case(s1, s2):
    # Raise instead of returning False so the keyword actually fails the test.
    if s1.lower() != s2.lower():
        raise AssertionError("'%s' != '%s' (ignoring case)" % (s1, s2))

def Convert_to_Lowercase(s1):
    return s1.lower()
MySuite.txt
| *Setting* | *Value* |
| Library | ./MyLibrary.py |
| *Test Case* | *Action* | *Argument*
#
| T100 | [Documentation] | Compare two strings ignoring case.
| | Compare Ignore Case | foo | FOO
#
| T101 | [Documentation] | Compare two strings where one is a variable.
# Should be Get Location in your case.
| | ${temp}= | MyKeyword that Returns a String
| | Compare Ignore Case | foo | ${temp}
I have not used the Selenium library, but the example in T101 should work for you.
Just in case someone else wants this:
Location should be '${expectedUrl}' disregard case
    ${currUrl}=    Get Location
    ${currUrl}=    Evaluate    "${currUrl}".lower()
    ${expectedUrl}=    Evaluate    "${expectedUrl}".lower()
    Should Be Equal    ${currUrl}    ${expectedUrl}

Location should contain '${expected}' disregard case
    ${currUrl}=    Get Location
    ${currUrl}=    Evaluate    "${currUrl}".lower()
    ${expected}=    Evaluate    "${expected}".lower()
    Should Contain    ${currUrl}    ${expected}

NSNumber how to get smallest common denominator? Like 3/8 for 0.375?

Say I have an NSNumber whose value is between 0 and 1 and can be represented as X/Y. How do I calculate X and Y in this case? I don't want to compare like this:
if (number.doubleValue == 0.125)
{
X = 1;
Y = 8;
}
so I get 1/8 for 0.125
That's relatively straightforward. For example, 0.375 is equivalent to 0.375/1.
The first step is to multiply the numerator and denominator by 10 until the numerator is an integral value (a), giving you 375/1000.
Then find the greatest common divisor and divide both numerator and denominator by that.
A (recursive) function for GCD is:
int gcd (int a, int b) {
    return (b == 0) ? a : gcd (b, a % b);
}
If you call that with 375 and 1000, it will spit out 125 so that, when you divide the numerator and denominator by that, you get 3/8.
(a) As pointed out in the comments, there may be problems with numbers that have more precision bits than your integer types (such as IEEE754 doubles with 32-bit integers). You can solve this by choosing integers with a larger range (longs, or a bignum library like MPIR) or choosing a "close-enough" strategy (consider it an integer when the fractional part is relatively insignificant compared to the integral part).
Another issue is the fact that some numbers can't be represented exactly in IEEE754, such as the infamous 0.1 and 0.3.
Unless a number can be represented as a sum of 2^-n values, where n is limited by the available precision (such as 0.375 being 1/4 + 1/8), the best you can hope for is an approximation.
For example, consider 1/3 in single precision (you'll see why below; I'm too lazy to do the whole 64 bits). As a single-precision value, this is stored as:
s eeeeeeee mmmmmmmmmmmmmmmmmmmmmmm
0 01111101 01010101010101010101010
In this example, the sign bit is 0, hence it's a positive number.
The exponent bits give 125 which, when you subtract the 127 bias, gives you -2. Hence the multiplier will be 2^-2, or 0.25.
The mantissa bits are a little trickier. They form the sum of an implicit 1 along with all the 2^-n values for the 1 bits, where n runs from 1 through 23 (left to right). So the mantissa is calculated thus:
s eeeeeeee mmmmmmmmmmmmmmmmmmmmmmm
0 01111101 01010101010101010101010
| | | | | | | | | | |
| | | | | | | | | | +-- 0.0000002384185791015625
| | | | | | | | | +---- 0.00000095367431640625
| | | | | | | | +------ 0.000003814697265625
| | | | | | | +-------- 0.0000152587890625
| | | | | | +---------- 0.00006103515625
| | | | | +------------ 0.000244140625
| | | | +-------------- 0.0009765625
| | | +---------------- 0.00390625
| | +------------------ 0.015625
| +-------------------- 0.0625
+---------------------- 0.25
Implicit 1
========================
1.3333332538604736328125
When you multiply that by 0.25 (see exponent earlier), you get:
0.333333313465118408203125
Now that's why they say you only get about 7 decimal digits of precision (15 for IEEE754 double precision).
Were you to pass that actual number through my algorithm above, you would not get 1/3, you would instead get:
5,592,405
---------- (or 0.333333313465118408203125)
16,777,216
But that's not a problem with the algorithm per se, more a limitation of the numbers you can represent.
Thanks to Wolfram Alpha for helping out with the calculations. If you ever need to do any math that stresses out your calculator, that's one of the best tools for the job.
As an aside, you'll no doubt notice the mantissa bits follow a certain pattern: 0101010101.... This is because 1/3 is an infinitely recurring binary value as well as an infinitely recurring decimal one. You would need an infinite number of 01 bits at the end to represent 1/3 exactly.
You can try this:
- (CGPoint)yourXAndYValuesWithANumber:(NSNumber *)number
{
    float x = 1.0f;
    float y = x / number.doubleValue;
    for (int i = 1; TRUE; i++)
    {
        // Alternatively floor(y * i), instead of (float)(int)(y * i)
        if ((float)(int)(y * i) == y * i)
        {
            x *= i;
            y *= i;
            break;
        }
    }
    /* Also alternatively:
    int coefficient = 1;
    while (floor(y * coefficient) != y * coefficient) coefficient++;
    x *= coefficient, y *= coefficient; */
    return CGPointMake(x, y);
}
This will not work if you have invalid input. X and Y will have to exist and be valid natural numbers (1 to infinity). A good example that will break it is 1/pi. If you have limits, you can do some critical thinking to implement them.
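As an aside, if you just want to prototype the idea outside Objective-C, Python's standard fractions module does this reduction for you and can also cope with the 1/pi-style cases by capping the denominator (shown here only to illustrate the approach, not as a drop-in solution):

from fractions import Fraction
import math

print(Fraction(0.375))                                # 3/8 -- exact, since 0.375 is representable in binary
print(Fraction(1 / math.pi).limit_denominator(1000))  # 113/355 -- closest fraction with denominator <= 1000
print(Fraction(0.1))                                  # 3602879701896397/36028797018963968 -- the exact
                                                      # binary value, illustrating why 0.1 has no exact
                                                      # IEEE754 representation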
The approach outlined by paxdiablo is spot-on.
I just wanted to provide an efficient GCD function (implemented iteratively):
int gcd (int a, int b) {
    int c;
    while (a != 0) {
        c = a;
        a = b % a;
        b = c;
    }
    return b;
}