I'm trying to prove the following RandomSearch algorithm and to figure out the loop invariant.
Since the function randomIndex(..) produces a random index, I cannot use an invariant like
j ≥ 0 ∧ j < i → f[j] ≠ value.
That means: all elements between 0 and i−1, where i is the index of the element currently being checked, are not the searched element.
So I thought I would define a hypothetical sequence r that contains all elements that have already been compared to the searched value or are still going to be compared to it. It is only a hypothetical sequence because I do not actually know which elements will be compared to the searched value until they really have been compared.
That means r.length() ≤ runs holds, and in the case the searched element was found,
(r[r.length()−1] = value) → (r[currentRun] = value).
Then I can define an invariant like:
j ≥ 0 ∧ j < currentRun → r[j] ≠ value
Can I do this, even though the sequence r is not real? It does not feel right. Does anyone have a different idea for an invariant?
The program:
public boolean RandomSearch(int value, int[] f, int runs) {
    int currentRun = 0;
    boolean found = false;
    while (currentRun < runs && !found) {
        // pick a random index into f; the original wrote randomIndex(0, n-1),
        // where n is presumably the length of f
        int x = randomIndex(0, f.length - 1);
        if (value == f[x]) {
            found = true;
        }
        currentRun = currentRun + 1;
    } // end while
    return found;
} // end RandomSearch
OK, I use the following invariant:
currentRun <= runs ∧ f.length > 0
Then I can prove the algorithm :)
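To make that concrete, here is a minimal runnable sketch of my own (not from the original post; the class name, the randomIndex helper, and the assert placement are my assumptions) that checks the invariant currentRun <= runs ∧ f.length > 0 at run time. Run it with java -ea to enable assertions.

import java.util.Random;

public class RandomSearchCheck {
    private static final Random RNG = new Random();

    // hypothetical helper assumed by the original code: random index in [lo, hi]
    static int randomIndex(int lo, int hi) {
        return lo + RNG.nextInt(hi - lo + 1);
    }

    static boolean randomSearch(int value, int[] f, int runs) {
        int currentRun = 0;
        boolean found = false;
        assert currentRun <= runs && f.length > 0; // invariant holds on entry (assuming runs >= 0)
        while (currentRun < runs && !found) {
            int x = randomIndex(0, f.length - 1);
            if (value == f[x]) {
                found = true;
            }
            currentRun = currentRun + 1;
            assert currentRun <= runs && f.length > 0; // invariant is preserved by the loop body
        }
        return found;
    }

    public static void main(String[] args) {
        System.out.println(randomSearch(7, new int[]{2, 0, 7}, 10));
    }
}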
Can anyone please tell me the time complexity of this function? It generates all valid parentheses, given the count of pairs.
Input: n = 3
Output: ["((()))","(()())","(())()","()(())","()()()"]
My code is working fine, but I'm not sure about its time complexity.
Please help.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

class Solution {
    public List<String> generateParenthesis(int n) {
        HashMap<String, Boolean> hm = new HashMap<>();
        return generate(hm, n);
    }

    public static List<String> generate(HashMap<String, Boolean> hm, int n) {
        if (n == 1) {
            hm.put("()", true);
            List<String> temp = new ArrayList<>();
            temp.add("()");
            return temp;
        }
        // Take every valid pattern with n-1 pairs and insert "()" at every position;
        // the map hm deduplicates patterns that get generated more than once.
        List<String> temp = generate(hm, n - 1);
        List<String> ans = new ArrayList<>();
        for (String pat : temp) {
            for (int i = 0; i < pat.length(); i++) {
                String newPat = pat.substring(0, i) + "()" + pat.substring(i);
                if (!hm.containsKey(newPat)) {
                    hm.put(newPat, true);
                    ans.add(newPat);
                }
            }
        }
        return ans;
    }
}
You have two for loops, which run over m and n elements respectively; that part can be written as O(m*n) and contracted to O(n^2), because m can be equal to n.
Your function calls itself recursively, which makes the time complexity analysis a bit harder.
Think about it this way: you generate all valid parentheses strings of n pairs. It turns out that the number of valid parentheses strings for n pairs (taking your definition of n) is the nth Catalan number. Each string has length 2*n, so the time complexity is not polynomial but O(n*C(n)), where C(n) is the nth Catalan number.
Edit: It seems like this question is already answered here.
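To get a feel for how fast that bound grows, here is a small sketch of my own (not part of either answer) that computes Catalan numbers with the standard recurrence C(0) = 1, C(k) = sum of C(i)*C(k-1-i) for i = 0..k-1:

import java.math.BigInteger;

public class CatalanGrowth {
    // Catalan numbers: C(0) = 1, C(k) = sum_{i=0..k-1} C(i) * C(k-1-i)
    static BigInteger[] catalan(int n) {
        BigInteger[] c = new BigInteger[n + 1];
        c[0] = BigInteger.ONE;
        for (int k = 1; k <= n; k++) {
            c[k] = BigInteger.ZERO;
            for (int i = 0; i < k; i++) {
                c[k] = c[k].add(c[i].multiply(c[k - 1 - i]));
            }
        }
        return c;
    }

    public static void main(String[] args) {
        BigInteger[] c = catalan(10);
        // C(3) = 5, which matches the 5 strings in the example output for n = 3.
        for (int k = 1; k <= 10; k++) {
            System.out.println("n = " + k + ": " + c[k] + " valid strings");
        }
    }
}

Even at n = 10 there are already 16796 valid strings, which is why no polynomial bound in n can describe the output size.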
I am working through Cracking the Coding Interview, and I am unsure of an example on time-complexity. They provide this code to determine if a number is prime:
boolean isPrime(int n) {
    for (int x = 2; x * x <= n; x++) {
        if (n % x == 0) {
            return false;
        }
    }
    return true;
}
Later they say "The work inside the for loop is constant". Why is the run time of the modulus operator constant? Why does it not depend on n?
The key part of the statement is inside the for loop. All that happens in the loop body is a single modulo operation, which takes constant time. The function itself still has a running time that depends on n, because the loop runs roughly sqrt(n) times.
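One way to see the distinction is to count the iterations explicitly. This is a small sketch of my own (not from the book): each execution of the body does a constant amount of work, but the loop itself runs roughly sqrt(n) times.

public class PrimeIterations {
    // Same trial division as in the book, but counting how often the O(1) loop body runs.
    static boolean isPrime(int n) {
        int iterations = 0;
        for (int x = 2; x * x <= n; x++) {
            iterations++;
            if (n % x == 0) {
                System.out.println(n + ": not prime, " + iterations + " iterations");
                return false;
            }
        }
        System.out.println(n + ": prime, " + iterations + " iterations");
        return true;
    }

    public static void main(String[] args) {
        isPrime(7919);   // prime, so the loop runs about sqrt(7919) ~ 89 times
        isPrime(104729); // prime, so the loop runs about sqrt(104729) ~ 324 times
    }
}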
What is the best way to iterate over the elements of a finite set object in Dafny? An example of working code would be delightful.
This answer explains how to do it using a while loop, rather than by defining an iterator. The trick is to use the "assign such that" operator,
:|, to obtain a value y such that y is in the set, and then repeat on the set with y removed, continuing until there are no more elements. The decreases clause is necessary here: with it, Dafny proves termination of the while loop; without it, it does not.
method Main()
{
  var x: set<int> := {1, 2, 3};
  var c := x;
  while ( c != {} )
    decreases c;
  {
    var y :| y in c;
    print y, ", ";
    c := c - { y };
  }
}
So I've tried interpreting this pseudocode a friend made, and I wasn't exactly sure that my method returns the right result. Is anyone able to help me out?
I've done some test cases where e.g. an array of [2,0,7], [0,1,4] or [0,8,0] should return true, but cases like [1,7,7] or [2,6,0] should not.
Array(list, d)
  for j = 0 to d-1 do
    for i = 0 to d-1 do
      for k = 0 to d-1 do
        if list[j] + list[i] + list[k] = 0 then
          return true
        end if
      end for
    end for
  end for
  return false
And I've made this in Java:
public class One {
    public static boolean method1(ArrayList<String> A, int a) {
        for (int i = 0; i < a-1; i++) {
            for (int j = 0; j < a-1; j++) {
                for (int k = 0; k < a-1; k++) {
                    if (Integer.parseInt(A.get(i)+A.get(j)+A.get(k)) == 0) {
                        return true;
                    }
                }
            }
        }
        return false;
    }
}
Thanks in advance
For a fix to your concrete problem, see my comment. A nicer way to write that code would be to actually use a list of Integer instead of String, because otherwise you have to convert the strings back to integers. So your method looks better like this:
public static boolean method(List<Integer> A) {
    for (Integer i : A)
        for (Integer j : A)
            for (Integer k : A)
                if (i + j + k == 0)
                    return true;
    return false;
}
Note that you don't even need the size as a parameter, since any List in Java carries its own size; a quick check is shown below.
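For example, a quick self-contained check (the class name TripleSumDemo and the main method are mine) using two of the test cases from the question:

import java.util.Arrays;
import java.util.List;

public class TripleSumDemo {
    public static boolean method(List<Integer> A) {
        for (Integer i : A)
            for (Integer j : A)
                for (Integer k : A)
                    if (i + j + k == 0)
                        return true;
        return false;
    }

    public static void main(String[] args) {
        // The same element may be picked three times, so a single 0 already yields 0 + 0 + 0 = 0.
        System.out.println(method(Arrays.asList(2, 0, 7))); // true, as expected in the question
        System.out.println(method(Arrays.asList(1, 7, 7))); // false, as expected in the question
    }
}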
Somewhat off-topic:
You're probably trying to solve the following problem: "Find out whether a list of integers contains 3 different ones that sum up to 0." The solution to this problem doesn't have to be O(n^3) like yours; it can be solved in O(n^2). See this post; a rough sketch of the idea follows.
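Here is that O(n^2) idea sketched out (sort once, then for each element scan the rest with two pointers). This is the classic 3SUM approach for three entries at distinct positions, not code taken from the linked post:

import java.util.Arrays;

public class ThreeSumQuadratic {
    // Returns true if three entries at distinct positions sum to 0.
    // O(n log n) for the sort plus O(n^2) for the two-pointer scan.
    static boolean hasZeroTriple(int[] a) {
        Arrays.sort(a);
        for (int i = 0; i < a.length - 2; i++) {
            int lo = i + 1, hi = a.length - 1;
            while (lo < hi) {
                int sum = a[i] + a[lo] + a[hi];
                if (sum == 0) return true;
                if (sum < 0) lo++; // need a larger sum
                else hi--;         // need a smaller sum
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasZeroTriple(new int[]{1, 2, 3, 4, -3})); // true: 1 + 2 + (-3) = 0
        System.out.println(hasZeroTriple(new int[]{1, 2, 3, 4, 5}));  // false
    }
}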
OK, so here is what I believe the pseudocode is trying to do. It returns true if there is a zero in your list or if there are three numbers in your list that add up to zero. So it should return true for the following test cases: (0,1,2,3,4,5) and (1,2,3,4,-3). It will return false for (1,2,3,4,5). I just used d = 5 as a random example. Your code is good for the most part; you just need to add the ith, jth and kth elements of the list and check whether their sum equals zero for the true condition.
I was training on Codility, solving the first lesson: Tape-Equilibrium.
It is said the solution has to be of complexity O(N). Therefore I was trying to solve the problem with just one for loop. I knew how to do it with two for loops, but I understood that would imply a complexity of O(2N), so I skipped those solutions.
I looked for it on the Internet and, of course, there was an answer on SO.
To my astonishment, all the solutions first calculate the sum of the elements of the vector and only afterwards do the remaining calculations. I understand this is complexity O(2N), but it gets a score of 100%.
At this point I think I am mistaken about my understanding of time complexity limits. If they ask you for a time complexity of O(N), is it acceptable to end up with O(X*N), with X a value that is not very high?
How does this work?
Let f and g be functions.
The Big-O notation f ∈ O(g) means that you can find a constant number c such that f(n) ≤ c·g(n). So if your algorithm has complexity 2N (or X·N for a constant X), it is in O(N), because c = 2 (or c = X) gives 2N ≤ c·N = 2·N (or X·N ≤ c·N = X·N).
This is how I managed to keep it O(N) along with a 100% score:
class Solution {
    public int solution(int[] A) {
        int result = Integer.MAX_VALUE;
        int[] s1 = new int[A.length - 1]; // s1[i] = A[0] + ... + A[i]   (prefix sums)
        int[] s2 = new int[A.length - 1]; // s2[i] = A[i+1] + ... + A[A.length-1]   (suffix sums)
        for (int i = 0; i < A.length - 1; i++) {
            if (i > 0) {
                s1[i] = s1[i - 1] + A[i];
            } else {
                s1[i] = A[i];
            }
        }
        for (int i = A.length - 1; i > 0; i--) {
            if (i < A.length - 1) {
                s2[i - 1] = s2[i] + A[i];
            } else {
                s2[i - 1] = A[A.length - 1];
            }
        }
        // Each split point i compares the left part s1[i] with the right part s2[i].
        for (int i = 0; i < A.length - 1; i++) {
            if (Math.abs(s1[i] - s2[i]) < result) {
                result = Math.abs(s1[i] - s2[i]);
            }
        }
        return result;
    }
}
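As a quick sanity check of my own (this small demo class is not part of the submission): on A = [1, 2, 3] the split after index 0 gives |1 - 5| = 4 and the split after index 1 gives |3 - 3| = 0, so the solution above should print 0.

public class TapeEquilibriumDemo {
    public static void main(String[] args) {
        Solution s = new Solution();
        System.out.println(s.solution(new int[]{1, 2, 3})); // expected: 0
    }
}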