Is it possible for backtracking N-queens to produce O(n^2) time complexity?

The backtracking approach for n-queens is clearly O(n!). However, I am looking at these two links:
https://algodaily.com/challenges/classical-n-queen-problem
https://www.geeksforgeeks.org/n-queen-problem-backtracking-3/
They give O(n^2) time complexity. How is that the case? I read their code, and their solutions appear to be O(n!): for each row, try all columns, which gives n*(n-1)*(n-2)*... and so on. I have no idea how they're getting O(n^2).
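For reference, here is a minimal sketch of the usual row-by-row backtracking (my own Python rewrite, not the exact code from either link). Each row tries up to n columns and makes a recursive call for each one that survives the safety check, which is exactly the n*(n-1)*(n-2)*... branching described above, not a simple double loop.

```python
def solve_n_queens(n):
    """Row-by-row backtracking: return one placement as a list of column
    indices (placement[row] = col), or None if no solution exists."""
    cols, diag1, diag2 = set(), set(), set()    # occupied columns / diagonals
    placement = []

    def place(row):
        if row == n:                            # every row has a queen
            return True
        for col in range(n):                    # up to n candidate columns per row ...
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue                        # ... pruned by the safety check
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            placement.append(col)
            if place(row + 1):                  # ... and a recursive call for each one kept
                return True
            cols.remove(col); diag1.remove(row - col); diag2.remove(row + col)
            placement.pop()                     # undo and try the next column (backtrack)
        return False

    return placement if place(0) else None

print(solve_n_queens(8))   # one valid 8-queens placement, one column index per row
```

The safety check is O(1) here (or O(n) if done by scanning the board, as in many textbook versions), but the size of the recursion tree is what dominates the running time.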

Related

How does a nested if statement affect actual runtime before f(n) simplifies to O(g(n))?

I am aware that constant coefficients and constants are simply ignored when calculating runtime complexity of an algorithm. However, I would still like to know whether an if statement nested in a while or for loop adds to the total actual runtime of an algorithm, f(n).
This picture is from an intro to theoretical computer science lecture I am currently studying, and the algorithm in question counts the number of 'a's in any input string. The lecturer counts the nested if statement as one of the timesteps that affect total runtime, but I am unsure whether this is correct. I am aware that the entire algorithm simplifies to O(g(n)) where g(n) = n, but I would like to know definitively whether f(n) itself equals 2n + a or n + a. Understanding this is important to me, since I believe that first knowing the exact actual runtime, f(n), before simplifying it to O(g(n)) reduces mistakes when calculating the runtime of more complicated algorithms. I would appreciate your insight.
Youtube clip: https://www.youtube.com/watch?v=5Bbxqv73EbU&list=PLAwxTw4SYaPl4bx7Pck4JWjy1WVbrDx0U&index=35
Knowing the exact runtime before calculating the big-O time complexity is not as important as you say. In fact, as you continue studying, you will find that in many cases it is ambiguous, annoying, or very, very difficult to find the exact number of steps an algorithm will execute. It often comes down to definitions, and depending on how you see things, you can come up with different answers.
Time complexity, on the other hand, is a useful and often easier expression to find. I believe this is the very point this video is trying to make. But to answer your question: yes, in this case the if statement is definitely a step that the algorithm has to make. It only compares one character, so it is clearly a constant-time operation. The author considers this comparison to take one step, and since it executes n times, this line of "code" contributes n steps in total. So yes, you can see the whole algorithm as taking 2n + a steps.
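As a concrete illustration, here is a hypothetical Python version of the counting algorithm (not the lecture's exact pseudocode), with the step counts from the paragraph above in the comments. In the worst case every character is an 'a', which is where the 2n + a total comes from.

```python
def count_a(s):
    count = 0                 # constant-time setup (part of the "+ a")
    for ch in s:              # the loop body runs n times
        if ch == 'a':         # 1 comparison per iteration     -> n steps
            count += 1        # 1 increment, only on matches   -> at most n steps
    return count              # constant-time return (also part of the "+ a")

print(count_a("abracadabra"))  # 5
```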
However, what if we are working on a computer where we can't compare a character in a single step, but first need to copy the character variable to a special register and then do the comparison? Perhaps on that computer we need to count that line as taking 2 steps, so 2n in total. Then the overall number of steps would be 3n + a, yet the time complexity is still O(n). When we study complexity theory, we don't want to go down to that level of counting, because different ways of counting will simply give different results.
You will soon learn to automatically filter out the constants and lower-order terms and identify the variables that actually contribute to the time complexity. When you study different algorithms, you will find that as the input grows, those differences become negligible.

Optimising table assignment to guests for an event based on a criteria

66 guests at an event, 8 tables. Each table has a "theme". We want to optimize various criteria: e.g., an even split of men and women at each table, people getting to discuss the topic they selected, etc.
I formulated this as a gradient-free optimisation problem: I wrote a function that calculates the goodness of an arrangement (i.e., a cost for the men/women imbalance, a cost for a non-preferred theme, etc.), and I am basically randomly perturbing the arrangement by swapping people between tables and keeping the "best so far" arrangement. This seems to work, but it cannot guarantee optimality.
I am wondering if there is a more principled way to go about this. There (intuitively) seems to be no useful gradient in the operation of "swapping" people between tables, so random search is the best I came up with. However, brute-forcing by evaluating all possibilities seems infeasible; if there are 66 people, there are factorial(66) possible orders, which is a ridiculously large number (about 10^92 according to Python). Since swapping two people at the same table gives the same arrangement, there are actually fewer, which I think can be calculated by dividing out the repeats, e.g. fact(66)/(fact(number of people at table 1) * fact(number of people at table 2) * ...), which in my problem still comes out to about 10^53 possible arrangements, way too many to consider.
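For concreteness, that count can be checked directly as a multinomial coefficient; a quick Python sketch, with hypothetical table sizes that should be replaced by the real ones:

```python
from math import factorial, prod

# Hypothetical table sizes summing to 66 (six tables of 8, two of 9);
# substitute the real sizes for your event.
table_sizes = [8, 8, 8, 8, 8, 8, 9, 9]
assert sum(table_sizes) == 66

orders = factorial(66)                                          # ~1e92 raw orderings
arrangements = orders // prod(factorial(k) for k in table_sizes)
print(f"{orders:.2e} orders, {arrangements:.2e} distinct arrangements")
```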
But is there something better that I can do than random search? I thought about evolutionary search but I don't know if it would provide any advantages.
Currently I am swapping a random number of people on each evaluation and keeping the result only if it gives a better value. The random number of people is drawn from an exponential distribution to make it more probable to swap 1 person than, say, 6 -- small steps on average, while keeping the possibility of "jumping" a bit further in the search.
I don't know how to prove it but I have a feeling this is an NP-hard problem; if that's the case, how could it be reformulated for a standard solver?
Update: I have been comparing random search with a random "greedy search" and a "simulated annealing"-inspired approach where I have a probability of keeping swaps based on the measured improvement factor, which anneals over time. So far, surprisingly, the greedy search strongly outperforms the probabilistic approach. Adding the annealing schedule seems to help.
What I am confused by is exactly how to think about the "space" of the domain. I realize that it is a discrete space, and that distances are best described in terms of something like Levenshtein edit distance, but I can't see how I could "map" it to some gradient-friendly continuous space. Possibly, if I drop the exact number of people per table and make it continuous, but strongly penalize deviations to push it towards the number I want at each table -- this would make the association matrix more "flexible" and possibly map better to a gradient space? Not sure. A seating assignment could then be a probability spread over more than one table.
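For what it's worth, here is a minimal sketch of the swap-and-keep loop described above with a simple annealing-style acceptance rule. The guest attributes and the cost function are made-up placeholders, and it swaps one pair per step rather than an exponentially distributed number, so it illustrates the structure rather than the actual criteria.

```python
import math
import random

# Illustrative data only: 66 guests, 8 tables; sizes, genders and theme
# preferences below are made up, not the real event data.
TABLE_SIZES = [8, 8, 8, 8, 8, 8, 9, 9]
NUM_TABLES = len(TABLE_SIZES)
NUM_GUESTS = sum(TABLE_SIZES)                     # 66
GENDER = [random.randint(0, 1) for _ in range(NUM_GUESTS)]
PREFERRED_THEME = [random.randrange(NUM_TABLES) for _ in range(NUM_GUESTS)]

def cost(assignment):
    """Stand-in for the real scoring: gender imbalance plus theme mismatches."""
    total = 0
    for t in range(NUM_TABLES):
        guests = [g for g in range(NUM_GUESTS) if assignment[g] == t]
        men = sum(GENDER[g] for g in guests)
        total += abs(men - (len(guests) - men))   # men/women imbalance at table t
    total += sum(1 for g in range(NUM_GUESTS) if PREFERRED_THEME[g] != assignment[g])
    return total

def random_assignment():
    tables = [t for t, size in enumerate(TABLE_SIZES) for _ in range(size)]
    random.shuffle(tables)
    return tables                                 # tables[g] = table index of guest g

def anneal(steps=20000, t_start=2.0, t_end=0.01):
    current = random_assignment()
    current_cost = cost(current)
    best, best_cost = list(current), current_cost
    for i in range(steps):
        temp = t_start * (t_end / t_start) ** (i / steps)    # geometric cooling
        a, b = random.sample(range(NUM_GUESTS), 2)           # propose swapping two guests
        current[a], current[b] = current[b], current[a]
        delta = cost(current) - current_cost
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current_cost += delta                            # accept (sometimes uphill)
            if current_cost < best_cost:
                best, best_cost = list(current), current_cost
        else:
            current[a], current[b] = current[b], current[a]  # reject: undo the swap
    return best, best_cost

assignment, score = anneal()
print("best cost found:", score)
```

The geometric cooling schedule and the exp(-delta/temp) accept-worse probability are the standard simulated-annealing ingredients; as the temperature approaches zero the loop degenerates into the greedy "keep only improvements" search described above.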

How do I determine the big O for my nested for loop?

I have a nested for loop. I am aware that a nested for loop is O(n^2). I have some code that runs inside the loops, but it is conditional, as it only runs if certain conditions are met. Is that to be factored into the big O? Or is it so small compared to the scaling of O(n^2) that it is meaningless?
It does not matter whether your code meets the conditions. If you are thinking about big-O notation, then you should consider the worst-case scenario, in which your code meets all the conditions and executes the conditional body every time.
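For instance (a made-up example, not your code): the body of the if may run rarely, but the comparison itself is evaluated on every iteration of both loops, so the worst case is still O(n^2).

```python
def count_equal_pairs(items):
    """Worst case O(n^2): the comparison runs for all n*n (i, j) pairs,
    even when the conditional body rarely executes."""
    count = 0
    n = len(items)
    for i in range(n):
        for j in range(n):
            if items[i] == items[j]:   # evaluated n * n times regardless
                count += 1             # conditional work, at most n * n times
    return count

print(count_equal_pairs([1, 2, 2, 3]))  # 6 (includes the i == j pairs)
```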
Below is a link that might help you:
https://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/

Is O(m+n) or O(mlgn) better

I was wondering whether O(m+n) or O(m lg n) is actually better. If n is very large, I think the latter is better? And in reverse, if m is very large, the first one wins? Am I right in thinking like this?
You are right that O(m+n) is not always better than O(m lg n), but in general O(m+n) is more desirable. Check the following link about converting an O(m lg n) algorithm into an O(m+n) one:
Link
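As a rough sanity check of that intuition (the numbers below are arbitrary), you can simply plug values into both expressions:

```python
import math

def compare(m, n):
    print(f"m={m:>9}, n={n:>9}:  m+n = {m + n:>10.0f}   m*lg(n) = {m * math.log2(n):>12.1f}")

compare(10, 1_000_000)    # n huge: m*lg(n) ~ 200 is far smaller than m+n ~ 1e6
compare(1_000_000, 10)    # m huge: m+n ~ 1e6 beats m*lg(n) ~ 3.3e6
```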

NFA with half the strings in {0,1}^n

If there is an NFA M whose language L(M) is a subset of {0,1}*, how do you prove that determining whether L(M) contains fewer than half of the strings in {0,1}^n, for n >= 0, is NP-hard?
First, you have to decide whether the problem you are proposing is actually solvable.
Assuming that it is indeed expressible by an NFA, it is certainly also solvable by a corresponding Turing machine (TM).
Let L(TM) = L(M).
Then there exists a deterministic Turing machine that can verify solutions for the given problem. Hence, the problem is in NP.
As for your question, determining whether L(M) has fewer than half the strings in {0,1}^n for n >= 0 is at least decidable.
To prove it NP-hard, you take a problem that is already known to be NP-hard and give a polynomial-time algorithm that transforms it into this problem (a reduction).
The data required to spell out such a reduction is missing from the question.