Question about nSudoku, assuming it is NP-complete

Explain why each of the following statements is correct. You may assume that nSudoku is NP-complete.
If nSudoku can be reduced in polynomial time to factorization, then factorization is NP-complete.
If nSudoku can be reduced in polynomial time to the problem of sorting an integer array, then P = NP.
Any ideas how to explain? Thank you!!!

In order to determine whether a problem A is NP-complete, we must take four steps:
Prove that A is in NP
Transform a known NP-complete problem B into A in polynomial time
Prove that a "yes" answer to the transformed instance of A gives a "yes" answer to the original instance of B
Prove that a "yes" answer to an instance of B gives a "yes" answer to its transformed instance of A
In your problem, you start with the known NP-complete problem nSudoku, with the goal of first showing that factorization is also NP-complete. To do this, we would first show that factorization is in NP. You are then given that nSudoku can be transformed into factorization in polynomial time. If we then show that an answer to the nSudoku instance corresponds to an answer to the factorization instance and vice versa, we have proven that factorization is NP-complete.
For the second statement, we follow the same pattern with nSudoku and the problem of sorting an integer array, which would prove that sorting an integer array is NP-complete (sorting is certainly in NP, since it is in P). This, however, complicates things, because the problem of sorting an integer array really is in P: you can sort an integer array in O(n log n), which is polynomial time.
At the core of this question is the "P versus NP problem", an unsolved problem that asks whether every problem in NP is also in P (in other words, whether every problem whose solutions can be verified in polynomial time can also be solved in polynomial time). To date, the question remains open.
In your problem, however, we would be proving that a problem known to be in P is also NP-complete, which yields the conclusion stated in your problem: P = NP.
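For intuition, here is a minimal sketch in Python of the composition argument behind the second statement. The function reduce_nsudoku_to_sorting is a hypothetical placeholder for the assumed polynomial-time reduction (no such reduction is actually known); the point is only that composing it with an O(n log n) sort would solve an NP-complete problem in polynomial time.

    def reduce_nsudoku_to_sorting(puzzle):
        """Hypothetical polynomial-time reduction assumed by the exercise:
        maps an nSudoku instance to an integer array plus a function that
        reads the yes/no answer off the sorted array."""
        raise NotImplementedError  # placeholder only; no such reduction is known

    def decide_nsudoku(puzzle):
        # Polynomial-time reduction (by assumption), then an O(n log n) sort,
        # then a polynomial-time read-out: the whole pipeline runs in polynomial
        # time, so the NP-complete problem nSudoku would be in P, forcing P = NP.
        array, interpret = reduce_nsudoku_to_sorting(puzzle)
        array.sort()
        return interpret(array)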

Related

Time complexity of CVXOPT/MOSEK when the number of constraints is much greater than the number of variables

I have a convex quadratic programming problem:
min x^T P x + c^T x
subject to Ax ≤ b
where P is a positive definite matrix and A is an m × n matrix with m much greater than n, so the number of constraints is much greater than the number of variables.
My questions are: 1. How do I analyze the time complexity of this problem? 2. How does the time complexity of a convex quadratic programming problem relate to the number of constraints?
I have tried solving my problem with both CVXOPT and MOSEK, and in both cases the running time appears to grow linearly with the number of constraints.
However, all the literature I could find only discusses how the time complexity relates to the number of variables, or assumes A is a full-rank matrix. I would appreciate any recommendations for related references. Thank you.
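Not a full answer, but here is a minimal sketch (Python with CVXOPT) of one way to measure empirically how the solve time scales with the number of constraints m for fixed n. The random problem generator and the specific sizes are my own assumptions, not part of the question; note that CVXOPT's qp() minimizes (1/2)x^T P x + q^T x, so 2P is passed to match the objective above.

    import time
    import numpy as np
    from cvxopt import matrix, solvers

    solvers.options['show_progress'] = False

    def random_qp(n, m, seed=0):
        """Random feasible instance of  min x^T P x + c^T x  s.t.  Ax <= b."""
        rng = np.random.default_rng(seed)
        M = rng.standard_normal((n, n))
        P = M.T @ M + np.eye(n)          # positive definite
        c = rng.standard_normal(n)
        A = rng.standard_normal((m, n))
        x0 = rng.standard_normal(n)
        b = A @ x0 + 1.0                 # x0 is strictly feasible by construction
        return P, c, A, b

    n = 50
    for m in [500, 1000, 2000, 4000, 8000]:
        P, c, A, b = random_qp(n, m)
        t0 = time.perf_counter()
        sol = solvers.qp(matrix(2 * P), matrix(c), matrix(A), matrix(b))
        print(f"m = {m:5d}  status = {sol['status']}  "
              f"time = {time.perf_counter() - t0:.3f} s")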

P vs NP: How to prove that they are not equal?

So a problem is in P (= polynomial time) if there exists a Turing machine that can solve it in polynomial time. For NP (= non-deterministic polynomial time) problems there exists a witness, which a Turing machine can use to verify in polynomial time that an instance belongs to the language.
The question whether P = NP is still open.
I wonder how you could prove that P is not equal to NP. My thought was that if you find a problem in NP and then prove that there is no algorithm that can solve it in polynomial time (without a witness), then P is not equal to NP.
So, for example, if you look at the Hamiltonian path problem (which is in NP) and prove that it can't be solved in polynomial time by a deterministic TM, then P is not equal to NP.
Is my thought process correct or am I missing something?
We have NP-complete problems, the best known being SAT, and there are numerous others, such as Hamiltonian path. If one of these problems is in P, then NP = P. If one of these isn't in P, i.e., there does not exist any poly-time TM that decides this language, then NP != P. So yes, proving that P is equal or not equal to NP is equivalent to proving that there does or doesn't exist a polynomial-time algorithm for one of these complete problems (such as Hamiltonian path, which you gave as an example).

Let A be NP-complete and B be NP-hard. Can B be polynomial time reducible to A?

Let A be NP-complete and B be NP-hard. Can B be polynomial time reducible to A?
Ans: I know it can't be. Would the main reason be that NP-complete is a subset of NP-hard?
Let's first look at the definitions of NP-hard and NP-complete (from Wikipedia):
NP-hard: the class of decision problems which are at least as hard as the hardest problems in NP. Problems that are NP-hard do not have to be elements of NP; indeed, they may not even be decidable.
NP-complete: the class of decision problems which contains the hardest problems in NP. Each NP-complete problem has to be in NP.
NP-hard problems (such as B) are at least as hard as the hardest problems in NP.
The hardest problems in NP are the NP-complete ones (such as A).
From these two statements, we can say that B is at least as hard as A.
In simpler terms, this means that an algorithm for B would immediately give an algorithm for A (since A, being in NP, reduces to B). But the converse is not true: knowing how to solve A doesn't necessarily tell us anything about how to solve B. The relation is not symmetric.
This is why an NP-hard problem is not, in general, reducible to an NP-complete one.

NP problems can be solved in deterministically EXPONENTIAL time?

Any problem in NP can be solved in deterministic exponential time,
or we can say that
any language in NP can be decided by an algorithm running in time 2^O(n^k)
i.e., NP ⊆ EXP
Informally speaking, we just try each of the possible solutions and then decide.
However, there is a simple example where I cannot figure out what's wrong with my reasoning.
Here it is:
The Traveling Salesman problem: given an undirected graph G=(V,E) with |V|=n.
This is a well-known NP-complete problem, and therefore it indeed belongs to NP.
And I try to analyse the running time like this:
I simply list out all the possible solutions, and there are (n-1)! possible tours in total
Then I check each one of them; it takes O(n) for each possible tour
The total running time will be O(n!)
It doesn't look like it can be bounded above by 2^O(n^k), i.e., exponential time.
Where is the pitfall in this analysis?
Or, in other words, how can we explain that the traveling salesman problem can indeed be decided by an algorithm running in time 2^O(n^k)?
Note that
n! ≤ n^n = (2^(log n))^n = 2^(n log n) ≤ 2^(n^2)
So n! = 2^O(n^2), and hence n! ∈ EXP.
Hope this helps!
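For concreteness, here is a minimal sketch in Python (my own illustration, not from the original posts) of the brute-force procedure described above for the unweighted case: enumerate the (n-1)! candidate tours and check each one in O(n) time, for a total of O(n!) = 2^O(n^2), which is why it still fits inside EXP.

    from itertools import permutations

    def has_hamiltonian_tour(n, edges):
        """Brute force: fix vertex 0, try all (n-1)! orderings of the remaining
        vertices, and check each candidate tour in O(n) time."""
        adj = [[False] * n for _ in range(n)]
        for u, v in edges:
            adj[u][v] = adj[v][u] = True
        for rest in permutations(range(1, n)):
            tour = (0,) + rest
            if all(adj[tour[i]][tour[(i + 1) % n]] for i in range(n)):
                return True
        return False

    # Example: the 4-cycle 0-1-2-3-0 contains a tour visiting every vertex once.
    print(has_hamiltonian_tour(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True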

Using NP Reductions

I have been having some difficulty understanding reductions between NP problems and would like some clarification. Consider the following problem:
Show that the following problem is NP-Complete by designing
a polynomial-time reduction algorithm from an already known
NP-Complete problem.
Problem: Given an undirected graph G=(V,E) and integer k,
test whether G has a cycle of length k.
I know there are other topics regarding this subject, but I am still not sure I understand how reductions like this would be done.
It is my understanding that this is how you would approach a problem such as this.
Assume the given problem can be solved in polynomial time.
Use the given problem to solve, in polynomial time, a problem that we know is NP-hard.
This creates a contradiction, so the assumption must be incorrect.
Thus, the given problem cannot be solvable in polynomial time (unless P = NP).
So, for a problem like this, would this be a proper approach?
If we choose k to be the length of a Hamiltonian cycle in the graph (assuming there is one), that means this problem could be used to find a Hamiltonian cycle in the graph.
Because we cannot (as far as we know) find a Hamiltonian cycle in polynomial time, this problem must also not be solvable in polynomial time.
This looks rather like homework, so I'll only give you a hint: consider an unweighted graph G with k nodes. What problem is equivalent to finding a cycle of length k in that graph, and would therefore be solvable with the algorithm you assumed to be polynomial? Try to proceed from this.
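To make the hint concrete, here is a minimal sketch in Python of the intended reduction. has_cycle_of_length is a hypothetical placeholder for an assumed polynomial-time solver of the problem in the question (the name is my own, not from the posts); deciding Hamiltonian cycle on G = (V, E) then amounts to a single call with k = |V|.

    def has_cycle_of_length(graph, k):
        """Hypothetical polynomial-time solver for the problem in the question:
        does the undirected graph contain a simple cycle of length exactly k?
        Assumed to exist only for the sake of the reduction."""
        raise NotImplementedError  # placeholder

    def has_hamiltonian_cycle(graph):
        # A Hamiltonian cycle is exactly a cycle whose length equals the number
        # of vertices, so one call to the assumed solver decides it.  If that
        # solver ran in polynomial time, so would this reduction, which would
        # put the NP-complete Hamiltonian cycle problem in P.
        n = len(graph)  # graph given as {vertex: iterable_of_neighbours}
        return has_cycle_of_length(graph, n)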