By definition, a race condition happens when two different processes try to change a shared variable simultaneously. But does it also happen when one process's write does not coincide in time with the other's?
Example 1:
Process X: var.read -> var.change -> var.write
Process Y: var.read -> var.change -> var.write

Example 2:
Process X: var.read -> var.change
Process Y: var.read -> var.change -> var.write
Process X: var.write
Both interleavings should lead to inconsistency, since a write happens while something else is in progress. But do they both count as a race condition?
You can easily have a race condition with one writer and simultaneous readers.
Assume you want y to be greater than zero and have something like this:
GLOBAL x ;
if x > 100 then
    y := x - 50 ;
If someone else writes to x between the check and the assignment, y could end up being less than zero.
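To make the lost-update scenario from the question concrete, here is a minimal Python sketch that replays the second interleaving deterministically (plain sequential code stands in for the two processes, so the outcome is reproducible):

```python
# Deterministic replay of Example 2:
# X reads and computes, Y performs a full read-change-write,
# then X writes its stale result, losing Y's update.

var = 0

# Process X: read and change (increment), but do not write yet
x_local = var           # X reads 0
x_local = x_local + 1   # X computes 1

# Process Y: full read -> change -> write
y_local = var           # Y reads 0 (X has not written yet)
y_local = y_local + 1   # Y computes 1
var = y_local           # Y writes 1

# Process X: finally writes its stale value
var = x_local           # X writes 1, overwriting nothing new but losing Y's update

print(var)  # 1, although two increments ran; the intended result was 2
```

With real threads the same interleaving can occur nondeterministically, which is exactly why it qualifies as a race condition even though the two writes never happen at the same instant.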
I want to know the time complexity of the attached code.
I get O(n^2 log n), while my friends get O(n log n) and O(n^2).
someMethod() takes O(log n) time.
Here is the code:
j = i**2;
for (k = 0; k < j; k++) {
    for (p = 0; p < j; p++) {
        x += p;
    }
    someMethod();
}
The question is not very clear about the variable N and the statement i**2.
(i**2 gives a compilation error in Java.)
Assuming someMethod() takes log N time (as mentioned in the question), and treating N as a separate variable:
Let's call i**2 Z.
someMethod() runs Z times, and each call takes log N time, so that part contributes:
Z * log N ----- (A)
Now, x += p runs Z^2 times (Z iterations of the k loop times Z iterations of the p loop) and takes constant time per execution, which gives:
Z^2 * 1 = Z^2 ----- (B)
The total run time is the sum of expressions A and B, which brings us to:
O((Z * log N) + Z^2)
where Z = i**2, so the final expression is O((i**2) * log N + (i**2)^2).
If we read i**2 as i^2, the expression becomes
O((i^2 * log N) + i^4)
If we additionally treat N as a constant, then keeping only the highest-order term, as with n^2 in n^2 + 2n + 5, the complexity reduces to
O(i^4)
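As a sanity check on the counting above, here is a short Python sketch of the loop (someMethod() is replaced by a counter, a stand-in since its body is not given) confirming that the inner statement runs (i^2)^2 = i^4 times and someMethod() runs i^2 times:

```python
def count_operations(i):
    """Count how often each part of the nested loop runs for a given i."""
    j = i ** 2
    inner = 0   # executions of x += p
    calls = 0   # executions of someMethod()
    for k in range(j):
        for p in range(j):
            inner += 1
        calls += 1
    return inner, calls

inner, calls = count_operations(5)
print(inner, calls)  # 625 25, i.e. 5**4 inner steps and 5**2 calls
```

Multiplying the i^2 calls by the assumed log N cost per call reproduces the i^2 log N term; the i^4 inner steps give the i^4 term.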
Based on the attached code, the complexity is O(I^2 log N + I^4).
We cannot give a complexity class in a single variable, because the code does not explain the relationship between N and I. They must be treated as separate variables.
Likewise, we cannot eliminate the I^2 log N term, because the N variable will dominate the I variable in some regions of the N x I space.
If we treat N as a constant, then the complexity class reduces to O(I^4).
We get the same if we treat N as being the same thing as I; i.e. there is a typo in the question.
(I think there is a mistake in the way the question was set / phrased. If not, this is a trick question designed to see if you really understood the mathematical principles behind complexity involving multiple independent variables.)
What is the time complexity for this code:
int p=0;
for(int i=1;p<=n;i+=2) p=p+i;
I know that if i were incremented by 1, the time complexity would be approximately O(√n):
when i=1 -> p=0+1
i=2 -> p=1+2
i=3 -> p=1+2+3
i=4 -> p=1+2+3+4
i=k -> p=1+2+3+4+.....k
Since the loop condition depends on p rather than i, the loop will not run n times.
When p becomes greater than n, the loop terminates, therefore:
p > n
Since p = k(k+1)/2 after k iterations:
k(k+1)/2 > n
This can be approximated as
k^2 ≈ 2n
k ≈ √(2n), i.e. k = O(√n)
Please help me understand the complexity of the above code, when i is incremented by 2.
You can do the same as you did before:
i=1 -> p=0+1
i=3 -> p=0+1+3
i=5 -> p=0+1+3+5
...
So you add up every odd number. The sum of all odd numbers up to a value k is approximately k^2/4.
So you look for a k such that k^2/4 > n.
Since you increase i by two in each iteration, the final value k of i is about twice the number of iterations, but for big-O notation this constant factor does not matter, so you again get O(√n).
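To confirm the estimate empirically, a small Python sketch can count the actual number of iterations and compare it to √n:

```python
import math

def iterations(n):
    """Count the iterations of: for (i = 1; p <= n; i += 2) p = p + i;"""
    p, i, count = 0, 1, 0
    while p <= n:
        p += i
        i += 2
        count += 1
    return count

# After k iterations p = 1 + 3 + ... + (2k - 1) = k^2, so the loop
# stops after isqrt(n) + 1 iterations, which is O(sqrt(n)).
for n in (100, 10_000, 1_000_000):
    print(n, iterations(n), math.isqrt(n))
```

The printed counts track √n exactly up to an additive constant, matching the O(√n) conclusion.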
Assume that T is a binary search tree with n nodes and height h. Each node x of T stores a real number x.key. Give the worst-case time complexity of the following algorithm Func1(T.root). You need to justify your answer.
Func1(x)
    if (x == NIL) return 0;
    s1 <- Func1(x.left());
    if (s1 < 100) then
        s2 <- Func1(x.right());
    else
        s2 <- 0;
    end
    s <- s1 + s2 + x.key();
    return (s);

x.left() and x.right() return the left and right child of node x
x.key() returns the key stored at node x
For the worst-case run time, I was thinking this would be O(height of tree), since it basically acts like the minimum() or maximum() binary search tree algorithms. However, it's recursive, so I'm slightly hesitant to actually write O(h) as the worst-case run time.
When I think about it, the worst case would be if the if (s1 < 100) test succeeded for every node, which would mean that every node is visited. Would that make the run time O(n)?
You're correct that the worst-case runtime of this function is Θ(n), which happens if that if statement always succeeds. In that case, at each node the recursion visits the full left subtree, then the full right subtree, so every node is visited. (It also does O(1) work per node, which is why this sums to O(n).)
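To make this concrete, here is a hedged Python sketch of Func1 (the Node class and the sample tree are my own illustration, not from the question) that counts recursive calls on a small BST whose subtree sums all stay below 100, so the worst case is triggered:

```python
class Node:
    """Minimal BST node for the illustration."""
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

visits = 0  # total number of calls to func1, including calls on NIL (None)

def func1(x):
    """Python transcription of the pseudocode Func1."""
    global visits
    visits += 1
    if x is None:
        return 0
    s1 = func1(x.left)
    s2 = func1(x.right) if s1 < 100 else 0
    return s1 + s2 + x.key

# Balanced BST with keys 1..7; every subtree sum is at most 28 < 100,
# so the s1 < 100 branch is always taken and every node is visited.
root = Node(4, Node(2, Node(1), Node(3)), Node(6, Node(5), Node(7)))
total = func1(root)
print(total, visits)  # 28 15 -- sum of all keys; 2n + 1 calls for n = 7 nodes
```

Each of the n nodes triggers exactly two child calls here, giving 2n + 1 calls in total, i.e. Θ(n).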
I'm looking for a tool for determining whether a given set of linear equations/inequalities (A) entails another given set of linear equations/inequalities (B). The return value should be either 'true' or 'false'.
To illustrate, let's look at possible instances of A and B and the expected return value of the algorithm:
A: {Z=3,Y=Z+2, X < Y} ;
B: {X<5} ;
Expected result: true
A: {Z=3,Y=Z+2, X < Y} ;
B: {X<10} ;
Expected result: true
A: {Z=3,Y=Z+2, X < Y} ;
B: {X=3} ;
Expected result: false
A: {X<=Y,X>=Y} ;
B: {X=Y} ;
Expected result: true
A: {X<=Y,X>=Y} ;
B: {X=Y, X>Z+2} ;
Expected result: false
Typically A contains up to 10 equations/inequalities, and B contains 1 or 2. All of them are linear and relatively simple. We may even assume that all variables are integers.
This task - of determining whether A entails B - is part of a bigger system and therefore I'm looking for tools/source code/packages that already implemented something like that and I can use.
Things I started to look at:
Theorem provers with algebra - Otter, EQP and Z3 (Vampire is currently unavailable for download).
Coq formal proof management system.
Linear Programming.
However, my experience with these tools is very limited, and so far I haven't found a promising direction. Any guidelines and ideas from people more experienced than me will be highly appreciated.
Thank you for your time!
I think I found a working solution. The problem can be rephrased as a satisfiability problem, which can then be solved by theorem provers such as Z3 and, with some work, probably also by linear programming solvers.
To illustrate, let's look at the first example I gave above:
A: {Z=3, Y=Z+2, X<Y} ;
B: {X<5} ;
Determining whether A entails B is equivalent to determining whether it is impossible for A to be true while B is false. This is a simple logical equivalence. In our case, it means that rather than checking whether the conditions in B follow from the ones in A, we can check whether there is no assignment of X, Y and Z that satisfies the conditions in A while violating the ones in B.
Now, when phrased as a satisfiability problem, a theorem prover such as Z3 can be called for the task. The following code checks whether the conditions in A can be satisfied while the one in B is violated:
(declare-fun x () Int)
(declare-fun y () Int)
(declare-fun z () Int)
; conditions in A
(assert (= z 3))
(assert (= y (+ z 2)))
(assert (< x y))
; negation of the condition in B
(assert (not (< x 5)))
(check-sat)
(get-model)
(exit)
Z3 reports unsat: there is no model that satisfies these constraints, so it is impossible for A to hold while B fails, and therefore B follows from A. (Since the result is unsat, the (get-model) command will report that no model is available; it can simply be dropped.)
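For intuition, the same A ∧ ¬B reformulation can be checked by brute force over a small integer range; this Python sketch (my own illustration, not part of the Z3 solution, and only a finite-range approximation of a real proof) encodes the first example:

```python
from itertools import product

def entails(a, b, var_range=range(-20, 21)):
    """A entails B iff no assignment satisfies all of A while violating B.
    a, b: lists of predicates over (x, y, z). Finite-range check only."""
    for x, y, z in product(var_range, repeat=3):
        if all(c(x, y, z) for c in a) and not all(c(x, y, z) for c in b):
            return False  # counterexample found: A holds but B fails
    return True

# A: {Z=3, Y=Z+2, X<Y}
A = [lambda x, y, z: z == 3,
     lambda x, y, z: y == z + 2,
     lambda x, y, z: x < y]

print(entails(A, [lambda x, y, z: x < 5]))   # True:  B follows from A
print(entails(A, [lambda x, y, z: x == 3]))  # False: e.g. x = 4 satisfies A but not B
```

Unlike Z3, this only covers the chosen finite range, but it illustrates why "A entails B" is the same question as "A ∧ ¬B is unsatisfiable."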
I've read about the -> operator in a book, but it wasn't explained at all. I've also never seen it in a program. Is it part of Prolog's syntax? What is it for? Do you use it?
It represents implication: the right-hand side is only executed if the left-hand side is true. Thus, if you have this code,
implication(X) :-
    (   X = a
    ->  write('Argument a received.'), nl
    ;   X = b
    ->  write('Argument b received.'), nl
    ;   write('Received unknown argument.'), nl
    ).
Then it will write different things depending on its argument:
?- implication(a).
Argument a received.
true.
?- implication(b).
Argument b received.
true.
?- implication(c).
Received unknown argument.
true.
It's a local version of the cut; see for example the section on control predicates in the SWI-Prolog manual.
It is mostly used to implement if-then-else as (condition -> true-branch ; false-branch). Once the condition succeeds, there is no backtracking from the true branch back into the condition or into the false branch, but backtracking out of the if-then-else is still possible:
?- member(X,[1,2,3]), (X=1 -> Y=a ; X=2 -> Y=b ; Y=c).
X = 1,
Y = a ;
X = 2,
Y = b ;
X = 3,
Y = c.
?- member(X,[1,2,3]), (X=1, !, Y=a ; X=2 -> Y=b ; Y=c).
X = 1,
Y = a.
Therefore it is called a local cut.
It is possible to avoid using it by writing something more wordy. If I rewrite Stephan's predicate:
implication(X) :-
    (
        X = a,
        write('Argument a received.'), nl
    ;
        X = b,
        write('Argument b received.'), nl
    ;
        X \= a,
        X \= b,
        write('Received unknown argument.'), nl
    ).
(Yeah I don't think there is any problem with using it, but my boss was paranoid about it for some reason, so we always used the above approach.)
With either version, you need to be careful that you are covering all cases you intend to cover, especially if you have many branches.
ETA: I am not sure whether this is completely equivalent to Stephan's version, because of backtracking when implication(X) is called with X unbound. But I don't have a Prolog interpreter at hand to check.