Using NP Reductions - cycle

I have been having some difficulty understanding reductions using NP problems and would like clarification. Consider the following problem:
Show that the following problem is NP-Complete by designing
a polynomial-time reduction algorithm from an already known
NP-Complete problem.
Problem: Given an undirected graph G=(V,E) and integer k,
test whether G has a cycle of length k.
I know there are other topics regarding this subject, but I am still not sure I understand how reductions like this would be done.
It is my understanding that this is how you would approach a problem such as this.
Assume the given problem can be solved in polynomial time.
Use the given problem to solve a problem that we know is NP-Hard in polynomial time
This creates a contradiction, so the assumption must be incorrect
Thus, the given problem mustn't be solvable in polynomial time
So, for a problem like this, would this be a proper approach?
If we choose k to be the length of a Hamiltonian cycle in the graph (assuming there is one), then this problem could be used to find the Hamiltonian cycle in the graph.
Because we only know how to find a Hamiltonian cycle in nondeterministic polynomial time, this problem must also only be solvable in nondeterministic polynomial time.
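In code, the reduction I have in mind would look something like the sketch below, where has_cycle_of_length(G, k) is a hypothetical oracle standing in for the assumed polynomial-time solver of the given problem:
```python
# Sketch of the reduction described above. `has_cycle_of_length` is the assumed
# (hypothetical) polynomial-time solver for the "cycle of length k" problem.

def has_hamiltonian_cycle(graph):
    """graph: dict mapping each vertex to the set of its neighbours."""
    n = len(graph)  # |V|
    # A Hamiltonian cycle is precisely a cycle of length |V|,
    # so a single call to the assumed solver decides Hamiltonian cycle.
    return has_cycle_of_length(graph, n)
```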

This looks rather like homework, so I'll only give you a hint: consider an unweighted graph G with k nodes. What is equivalent to finding a cycle of length k there, and which problem would that let you solve with the algorithm you assumed is polynomial? Try to proceed from this.

P vs NP: How to prove that they are not equal?

So a problem is in P (= polynomial time) if there exists a Turing machine that can solve it in polynomial time. For NP (= non-deterministic polynomial time) problems there exists a witness, which the Turing machine can use to verify in polynomial time whether an input is part of the language or not.
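To make the witness idea concrete, here is a small illustration (my own sketch, not part of the formal definition): a deterministic, polynomial-time check that a proposed vertex ordering really is a Hamiltonian path.
```python
# Polynomial-time verifier sketch: the "witness" is a proposed ordering of the vertices.

def verify_hamiltonian_path(graph, witness):
    """graph: dict vertex -> set of neighbours; witness: list of vertices."""
    if sorted(witness) != sorted(graph):  # must use every vertex exactly once
        return False
    # every consecutive pair in the ordering must be an edge of the graph
    return all(witness[i + 1] in graph[witness[i]] for i in range(len(witness) - 1))
```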
Whether P = NP is still an open question.
I wonder how you could prove that P is not equal to NP. My thought was that if you find a problem in NP and then prove that there is no algorithm that can solve the problem in polynomial time (without a witness), then P ≠ NP.
So, for example, if you look at the Hamiltonian path problem (which is in NP) and prove that it can't be solved in polynomial time by a deterministic TM, then P ≠ NP.
Is my thought process correct, or am I missing something?
We have NP-complete problems, the best known being SAT, and there are numerous others, such as Hamiltonian path. If one of these problems is in P, then NP = P. If one of these isn't in P - i.e., there does not exist any poly-time TM that decides the language - then NP ≠ P. So yes, proving that P is or is not equal to NP is equivalent to proving that there does or does not exist a polynomial-time algorithm for one of the complete problems (such as Hamiltonian path, which you gave as an example).

Question about nSudoku if we assume that it is NP-complete

Explain why each of the following statements is correct. You may assume that nSudoku is NP-complete.
If nSudoku can be reduced in polynomial time to factorization, then factorization is NP-complete.
If nSudoku can be reduced in polynomial time to the problem of sorting an integer array, then P = NP.
Any ideas how to explain? Thank you!!!
In order to determine if a problem (A) is NP-Complete, we must take four steps:
Prove that A is in NP
Transform a known NP Complete problem (B) into A in polynomial time
Prove that a yes-answer to the transformed instance of A implies a yes-answer to the original instance of B
Prove that a yes-answer to the original instance of B implies a yes-answer to the transformed instance of A
In your problem, you start with the known NP-complete problem nSudoku, with the goal of first showing that factorization is also NP-complete. To do this, we would first show that factorization is in NP. You are then given that nSudoku can be transformed into factorization in polynomial time. If we then show that the transformed factorization instance has a yes-answer exactly when the original nSudoku instance does, we have proven that factorization is NP-complete.
We then follow the same pattern for nSudoku and the problem of sorting an integer array (sorting is certainly in NP, since it can even be solved in polynomial time) to conclude that sorting an integer array is NP-complete. This, however, complicates things, because the problem of sorting an integer array is actually in P: you can sort an integer array in O(n log n), which is polynomial time.
At the core of this question is the "P versus NP problem", an unsolved problem that asks whether every problem in NP is also in P (in other words, whether every decision problem whose yes-answers can be verified in polynomial time can also be solved in polynomial time). To date, nobody knows the answer.
However, in your problem we prove that a problem known to be in P is also NP-complete, which yields the conclusion stated in your problem: P = NP.
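The second statement can also be seen as a composition argument. A minimal sketch, where reduce_nsudoku_to_sorting and decide_sorting are hypothetical placeholders for what the exercise assumes:
```python
# If an NP-complete problem reduces in polynomial time to a problem solvable in
# polynomial time, composing the two steps solves the NP-complete problem in
# polynomial time, which would give P = NP.
# Both helpers below are hypothetical placeholders for what the exercise assumes.

def decide_nsudoku(instance):
    sorting_instance = reduce_nsudoku_to_sorting(instance)  # polynomial time (the assumed reduction)
    return decide_sorting(sorting_instance)                 # polynomial time (sorting is in P)
    # polynomial + polynomial = polynomial, so the NP-complete nSudoku would be in P, hence P = NP
```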

Implementing a 2D recursive spatial filter using Scipy

Minimally, I would like to know how to achieve what is stated in the title. Specifically, signal.lfilter seems like the only implementation of a difference equation filter in scipy, but it is 1D, as shown in the docs. I would like to know how to implement a 2D version as described by this difference equation. If that's as simple as "bro, use this function," please let me know, pardon my naiveté, and feel free to disregard the rest of the post.
I am new to DSP and acknowledge there might be a different approach to answering my question, so I will explain the broader goal and give context in the hope that someone knows how to do what I want with Scipy, or knows a better way than what I explicitly asked for.
To get straight into it, broadly speaking I am using vectorized computation methods (Numpy/Scipy) to implement a Monte Carlo simulation, improving on a naive for loop. I have successfully abstracted most of my operations to array computation / linear algebra, but a few specific ones (recursive computations) have eluded my intuition, and I continually end up in the digital signal processing world when I go looking for how this type of thing has been done by others (that, or machine learning, but those "frameworks" are much more opinionated). The reason most of my Google searches end up on scipy.signal or scipy.ndimage library references is clear to me at this point, and having accepted the "signal" representation of my data, I have spent a considerable amount of time (about as much as is reasonable for a field that is not my own) climbing the learning curve to figure out what I need from these libraries.
My simulation entails updating a vector of data representing the state of a system each period for n periods, and then repeating that whole process a "Monte Carlo" number of times. The updates in each of the n periods are inherently recursive, as the next depends on the state of the prior. It can be characterized as a difference equation as linked above. Additionally, this vector is theoretically indexed on a grid of points with uneven step size. Here is an example vector y and its theoretical grid t:
y = np.r_[0.0024, 0.004, 0.0058, 0.0083, 0.0099, 0.0133, 0.0164]
t = np.r_[0.25, 0.5, 1, 2, 5, 10, 20]
I need to iteratively perform numerous operations on y for each of the n "updates." Specifically, I am computing the curvature along the curve y(t) using finite-difference approximations and using the result at each point to adjust the corresponding y(t) prior to the next update. In a loop this amounts to in-place variable reassignment with the desired update in each iteration:
y += some_function(y)
Not only does this seem inefficient, but vectorizing things seems intuitive given that y is a vector to begin with. Furthermore, I am interested in preserving each "updated" y(t) along the n updates, which would require a data structure of dimensions len(y) x n. At this point, why not perform the updates in place in that array? This is where the question lies. Many of the update operations I have successfully vectorized the "Numpy way" (such as adding random variates to each point), but some appear overly complex in the array world.
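For concreteness, the plain-loop baseline I have in mind looks roughly like this (a sketch; some_function is the same placeholder as above for whatever the per-period update ends up being):
```python
import numpy as np

# Plain-loop baseline: keep every intermediate state in an (n + 1, len(y)) array.
# `some_function` is a hypothetical stand-in for the per-period update
# (curvature adjustment, random shocks, etc.).

y0 = np.r_[0.0024, 0.004, 0.0058, 0.0083, 0.0099, 0.0133, 0.0164]
n_periods = 100

history = np.empty((n_periods + 1, y0.size))
history[0] = y0
for i in range(n_periods):
    # recursive: each new row depends on the previously updated row
    history[i + 1] = history[i] + some_function(history[i])
```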
Specifically, as mentioned above, the problematic one involves computing curvature at each element using its two neighbouring elements, and then immediately using that result to update the next row of the array before performing that row's own curvature "update." I was able to implement a non-recursive version (each row fails to consider its "updated self" from the prior row) of the curvature operation using ndimage generic_filter. Given the uneven grid, I have unique coefficients (kernel weights) for each triplet in the kernel footprint (instead of always using [1, -2, 1] for y'' as I would on a uniform grid). This last part has already forced me to use a spatial filter from ndimage rather than a 1d convolution. I'll point out that something conceptually similar was discussed in this math.exchange post, and it seems to me only the third response saliently addressed the difference between the mathematical notion of "convolution" (which should be associative) and general spatial filtering kernels, which would require two sequential filtering operations or a cleverly merged kernel.
In any case, this does not seem to actually address my concern, as it is not about 2D recursive filtering but rather about having a backwards-looking kernel footprint. Additionally, I think I've concluded it is not applicable, in that it only allows for "recursion" (backward-looking kernel footprints, in the spatial filtering world) with a kernel directly proportional to the depth of the recursion. Meaning, if I wanted to filter each of n rows incorporating calculations on all prior rows, it would require a convolution kernel far too big (for my n anyway). If I'm understanding all this correctly, a recursive linear filter is algorithmically more efficient in that it feeds back (for use in computation) the result of itself applied over the previous samples (up to the point where the stability of the algorithm is affected) using another companion vector (z). In my case, I would only need to look back one step at output signal y[n-1] to compute y[n] from the curvature at x[n], as the rest works itself out like a cumsum. signal.lfilter works for this, but I can't use it to compute curvature, since that requires a kernel footprint that can "see" at least its left and right neighbours (pixels), which is how I ended up using generic_filter.
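To pin down the curvature part, here is one way I could write the per-row update explicitly (a sketch under my own assumptions: standard three-point finite-difference weights for y'' on the uneven grid, interior points only, endpoints left unchanged, and an illustrative step size alpha):
```python
import numpy as np

# Per-row recursive update using three-point finite-difference weights for y'' on
# the uneven grid t. Only interior points receive a curvature term; the endpoints
# are left unchanged for simplicity. `alpha` is an illustrative step size.

y = np.r_[0.0024, 0.004, 0.0058, 0.0083, 0.0099, 0.0133, 0.0164]
t = np.r_[0.25, 0.5, 1, 2, 5, 10, 20]

h1 = t[1:-1] - t[:-2]   # left spacings
h2 = t[2:] - t[1:-1]    # right spacings
# unique weights per triplet (the role [1, -2, 1] plays on a uniform grid)
w_left = 2.0 / (h1 * (h1 + h2))
w_mid = -2.0 / (h1 * h2)
w_right = 2.0 / (h2 * (h1 + h2))

def second_derivative(y):
    d2 = np.zeros_like(y)
    d2[1:-1] = w_left * y[:-2] + w_mid * y[1:-1] + w_right * y[2:]
    return d2

alpha, n_periods = 0.1, 50
out = np.empty((n_periods + 1, y.size))
out[0] = y
for i in range(n_periods):
    # recursive: each row's curvature is computed from the already-updated previous row
    out[i + 1] = out[i] + alpha * second_derivative(out[i])
```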
It seems to me I should be able to do both simultaneously with one filter, namely spatial and recursive filtering together; or somehow I've missed the maths of how this could be simplified/combined (convolution of multiple kernels?).
It seems like this should be a common problem, but perhaps it is rarely relevant to do both at once in signal processing and image filtering. Perhaps this is why you don't use signal-processing libraries solely to implement a fast Monte Carlo simulation; though it seems less esoteric than using a tensor math library to implement a recursive neural network scan ... which I'm attempting to do right now.
EDIT: For those familiar with the theoretical side of DSP, I know that what I am describing, the process of designing recursive filters with arbitrary impulse responses, is achieved by employing a mathematical technique called the z-transform, which I understand is generally used for two things:
converting between the recursion coefficients and the frequency response
combining cascaded and parallel stages into a single filter
Both are exactly what I am trying to accomplish.
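For the second point, at least the 1-D version of "combining cascaded stages" seems straightforward: two difference-equation filters in cascade are equivalent to a single filter whose coefficient vectors are the polynomial products of the stages' coefficients. A small sketch (the coefficients are arbitrary, chosen just for illustration):
```python
import numpy as np
from scipy import signal

# Cascading two 1-D difference-equation filters H1(z) = B1/A1 and H2(z) = B2/A2
# is equivalent to a single filter with b = B1*B2 and a = A1*A2 (polynomial products).

b1, a1 = [1.0, 0.5], [1.0, -0.3]   # arbitrary illustrative coefficients
b2, a2 = [0.8], [1.0, -0.6]

b = np.polymul(b1, b2)
a = np.polymul(a1, a2)

x = np.random.default_rng(0).standard_normal(64)
two_stage = signal.lfilter(b2, a2, signal.lfilter(b1, a1, x))
one_stage = signal.lfilter(b, a, x)
assert np.allclose(two_stage, one_stage)
```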
Also, I reworded the title away from FIR / IIR because those imply specific definitions of "recursion" and may be confusing / a misnomer.

Why is deciding NP deterministically exponential time

From a textbook, it says:
The best deterministic method currently known for deciding languages in NP uses exponential time. In other words, we can prove that
...
Why is this true? I can't seem to find the intuition for this.
NP belongs to EXPTIME (though we're not sure whether or not it's a proper subset) because, intuitively, you can trace through all possible paths of a polynomial-time NTM in exponential time.
More concretely, consider any language L in NP. There has to be a polynomial-time NTM for it; let's call it M and say that it runs in nondeterministic time O(n^k). For simplicity, we'll assume that the NTM only uses binary nondeterminism (i.e. at each step, it has at most two choices to pick from). The maximum possible number of different branches of the nondeterminism is then 2^O(n^k), and each one can be simulated in polynomial time by simulating the execution of the NTM on that branch. This means that the total time is poly(n) · 2^O(n^k) = 2^O(n^k), so this deterministic algorithm runs in exponential time.
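The same counting argument, written as a brute-force sketch: model the nondeterministic choices as a binary string of length p(n) and check each branch deterministically (here accepts_on_branch is a hypothetical polynomial-time check of a single branch):
```python
from itertools import product

# Deterministic simulation by exhausting the nondeterminism: 2**p_n binary choice
# strings, each checked in polynomial time, for 2^O(n^k) total work.
# `accepts_on_branch` is a hypothetical poly-time deterministic check of one branch.

def decide_by_exhaustion(x, p_n):
    for branch in product((0, 1), repeat=p_n):   # 2**p_n branches
        if accepts_on_branch(x, branch):         # poly(n) work per branch
            return True
    return False
```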
Now, this doesn't mean that you have to spend deterministic exponential time to solve NP problems. It just says that if you want to use a deterministic algorithm, you need at most exponential time. The whole P versus NP question is about whether you can do better.
Hope this helps!

NP problems can be solved in deterministically EXPONENTIAL time?

any problem in NP can be solved in deterministic exponential time,
or we can say that
any language in NP can be decided by an algorithm running in time 2^O(n^k)
i.e., NP ⊆ EXP
informally speaking, we just try each one of the possible solutions and then decide it
However, there is a simple example where I cannot figure out what's wrong with my reasoning.
Here it is:
The Traveling Salesman problem: given an undirected graph G = (V, E) with |V| = n.
This is a well-known NP-complete problem, and therefore it indeed belongs to NP.
I try to analyse the running time like this:
I simply list out all the possible solutions, and there are (n-1)! possible tours in total
Then I check each one of them, which takes O(n) per tour
The total running time will be O(n!)
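In code, the brute force I'm describing would be something like this sketch (the decision version on an unweighted graph, fixing one start vertex):
```python
from itertools import permutations

# Brute force described above: fix a start vertex, enumerate the (n-1)! orderings
# of the remaining vertices, and check each candidate tour in O(n).

def has_tour(graph):
    """graph: dict mapping each vertex to the set of its neighbours."""
    vertices = list(graph)
    start, rest = vertices[0], vertices[1:]
    for order in permutations(rest):                             # (n-1)! candidates
        tour = (start,) + order + (start,)
        if all(v in graph[u] for u, v in zip(tour, tour[1:])):   # O(n) check
            return True
    return False
```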
It doesn't look like this can be bounded above by 2^O(n^k), i.e., exponential time.
Where is the pitfall in this analysis?
Or, in other words, how can we explain that the traveling salesman problem can indeed be decided by an algorithm running in time 2^O(n^k)?
Note that
n! ≤ n^n = (2^(log n))^n = 2^(n log n) ≤ 2^(n^2)
So n! = 2^O(n^2), which means the O(n!) brute force still fits within the 2^O(n^k) bound, i.e., within EXP.
Hope this helps!