Worst-case running time of BFS - time-complexity

I am confused about BFS complexity. Wikipedia says: "The time complexity can be expressed as O(|V|+|E|) since every vertex and every edge will be explored in the worst case".
set start vertex to visited
load it into queue
while queue not empty
    dequeue a vertex v
    for each edge (v, w) incident to v
        if w is not visited
            mark w as visited
            load w into queue
For me the worst case is: each vertex is connected to all the other vertices.
So the while loop will run V times and the for loop E times per iteration. A worst-case scenario of O(VE).
What did I do wrong?

The purpose of BFS is to visit all the nodes once. No vertex ever gets into the queue a second time (which explains the |V|). The |E| of the sum comes from the fact that checking an incident edge costs almost nothing compared to a visit. So yes, for a complete graph (all vertices connected to all other vertices) every edge will be checked, once from each endpoint, for about 2|E| ≈ |V|^2 checks in total...but checking them does not cost a visit.

Time complexity of BFS is O(|V|+|E|). V is the total number of vertices and E is the total number of edges. You may have made a mistake about E: E is not the number of edges incident to a single vertex u.
We can implement this as below. The inner loop runs deg(u) times for each u, and the degrees summed over all vertices equal 2E, so the total work is O(V + E), not O(V·E).
for each u in vertices
    for each e in neighbor(u)
        do something
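
As a concrete illustration, here is a minimal runnable sketch of the BFS pseudocode above in Python (the dict-of-lists graph representation and all names are assumptions, not part of the question):

```python
from collections import deque

def bfs(adj, start):
    # Each vertex enters the queue at most once: the V term.
    # Each adjacency list is scanned exactly once, and the list
    # lengths sum to 2E in an undirected graph: the E term.
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in visited:
                visited.add(v)   # marked before enqueueing, so never enqueued twice
                queue.append(v)
    return order

# Example: a triangle plus a pendant vertex
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(bfs(adj, 0))  # [0, 1, 2, 3]
```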

Related

Time Complexity of DFS and BFS

I'm new to the search strategies. Does anyone know how to analyze the time complexity of the figure below?
The best way to analyse the time complexity of such problems is to check how many times each vertex and each edge is accessed. Since in both BFS and DFS a single node is visited only once, we can say it would be O(V + E), where V is the number of vertices and E is the number of edges.
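
To make that counting argument concrete, here is a minimal iterative DFS sketch in Python (the `adj` dict representation is an assumption); BFS is identical with a queue in place of the stack:

```python
def dfs(adj, start):
    # Each vertex is expanded at most once, and each adjacency list is
    # scanned once when its vertex is expanded, so the work is O(V + E).
    visited = set()
    stack = [start]
    while stack:
        u = stack.pop()
        if u in visited:
            continue
        visited.add(u)
        for v in adj[u]:
            if v not in visited:
                stack.append(v)
    return visited
```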

Karger's Algorithm - Running Time - Edge Contraction

In Karger's min-cut algorithm for undirected (possibly weighted) multigraphs, the main operation is to contract a randomly chosen edge and merge its incident vertices into one metavertex. This process is repeated until two vertices remain; these vertices correspond to a cut. The algorithm can be implemented with an adjacency list.
Questions:
How can I find the particular edge that has been chosen to be contracted?
How does an edge get contracted (in an unweighted and/or weighted graph)?
Why does this procedure take quadratic time?
Edit: I have found some information that the runtime can be quadratic, due to the fact that we perform n-2 contractions of vertices and each contraction can take O(n) time. It would be great if somebody could explain to me why a contraction takes linear time with an adjacency list. Note that a contraction consists of: deleting one adjacent edge, merging the two vertices into a supernode, and making sure that the remaining adjacent edges are connected to the supernode.
Pseudocode:
procedure contract(G=(V,E)):
    while |V| > 2
        choose an edge uniformly at random
        contract its endpoints
        delete self loops
    return cut
I have read the related topic Karger Min cut algorithm running time, but it did not help me. Also, I do not have much experience, so a "layman's terms" explanation would be very much appreciated!
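
As a rough illustration of why each contraction is only O(n), here is a minimal Python sketch (all names hypothetical). It stores edge multiplicities in a matrix rather than adjacency lists, because that is the representation in which "fold v's edges into u" is visibly a single O(n) row addition; an adjacency-list version performs the same merge but must also relabel v in its neighbours' lists:

```python
import random

def karger_min_cut(n, edges):
    # Edge multiplicities live in a matrix w, so merging vertex v into u
    # is one row/column addition: O(n) per contraction, and there are
    # n - 2 contractions in total, hence the quadratic running time.
    w = [[0] * n for _ in range(n)]
    for u, v in edges:
        w[u][v] += 1
        w[v][u] += 1
    alive = list(range(n))
    deg = [sum(row) for row in w]  # weighted degree of each supernode

    while len(alive) > 2:
        # Sample an edge uniformly at random in O(n): pick endpoint u
        # proportional to its weighted degree, then neighbour v
        # proportional to the multiplicity w[u][v].
        u = random.choices(alive, weights=[deg[a] for a in alive])[0]
        nbrs = [b for b in alive if w[u][b] > 0]
        v = random.choices(nbrs, weights=[w[u][b] for b in nbrs])[0]

        # Contract (u, v): fold v's multiplicities into u's row/column.
        for x in alive:
            if x != u and x != v:
                w[u][x] += w[v][x]
                w[x][u] += w[x][v]
        deg[u] += deg[v] - 2 * w[u][v]  # (u, v) edges become self loops and vanish
        w[u][v] = w[v][u] = 0           # delete self loops
        deg[v] = 0
        alive.remove(v)                 # O(n), still within the per-step budget

    a, b = alive
    return w[a][b]  # number of edges crossing the resulting 2-cut
```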

Time complexity of counting cycles and complete subgraphs of a given size

I am very new to graph theory and I am trying to understand the following. Given an undirected graph G with V vertices and E edges, what is the time complexity of counting all the complete subgraphs of size k, and what is the time complexity of counting all the cycles of length k? Basically, which one is faster? I am only considering cases where k is even. Are there any well-known references for this?
Counting all cycles of length k can, in general, be achieved in O(VE) time (V = number of vertices, E = number of edges), although this bound can be greatly improved depending on k (Ref). Counting all complete subgraphs, or cliques, can be achieved in O(n^k · k^2) in the general case, but again, this bound can be improved upon depending on the structure of the graph and the algorithm used (Ref).
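
For the clique bound, a brute-force counter makes the O(n^k · k^2) term visible. A minimal sketch, assuming the graph is given as a dict of neighbour sets (names hypothetical):

```python
from itertools import combinations

def count_k_cliques(adj, k):
    # Brute force: examine all C(n, k) = O(n^k) vertex subsets and test
    # the k*(k-1)/2 pairs inside each, giving the O(n^k * k^2) bound.
    nodes = list(adj)
    count = 0
    for subset in combinations(nodes, k):
        if all(v in adj[u] for u, v in combinations(subset, 2)):
            count += 1
    return count

# Example: K4 contains exactly 4 triangles
adj = {i: {j for j in range(4) if j != i} for i in range(4)}
print(count_k_cliques(adj, 3))  # 4
```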

Finding the Shortest Path using BFS on an Undirected Graph, knowing the length of the SP

I was asked an interview question today that I was not able to solve at the time.
The question is to get the minimum time complexity of finding the shortest path from node S to node T in a graph G where:
G is undirected and unweighted
The connection factor of G is given as B
The length of shortest path from S to T is given as K
The first thing I thought was that in the general case, BFS is the fastest way to get the SP from S to T, in O(V+E) time. Then how can we use B and K to reduce the time? I was not sure what a connection factor is, so I asked the interviewer; he told me that on average a node has B edges to other nodes. So I was thinking that if K = 1, then the time complexity should be O(B). But wait, it is "on average", which means it could still be O(V+E), e.g. when the graph is like a star and all other nodes are connected to S.
If we assume that B is a strict upper limit, then the first round of BFS is O(B), the second O(B*B), and so on, like a tree. Some of the nodes in a lower layer may have already been visited in a previous round and therefore should not be added. Still, the worst scenario is that the graph is huge and none of the nodes has been visited yet. The time complexity is then
O(B) + O(B^2) + O(B^3) ... O(B^K)
Using the sum of a geometric series, this is B(B^K - 1)/(B - 1) = O(B^K) for B > 1. But this SUM should not exceed O(V+E).
So, is the time complexity O(min(SUM, V+E))?
I have no idea how to correctly solve this problem. Any help is appreciated.
Your analysis seems correct. Please refer to the following references.
http://axon.cs.byu.edu/~martinez/classes/312/Slides/Paths.pdf
https://courses.engr.illinois.edu/cs473/sp2011/lectures/03_class.pdf
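
For concreteness, here is a minimal sketch of the depth-bounded BFS analysed above (the names and the dict representation are assumptions):

```python
from collections import deque

def shortest_path_bfs(adj, s, t, k):
    # Standard BFS from s that stops expanding past depth k, since the
    # shortest s-t path is known to have length k. With at most B
    # neighbours per node it explores O(B + B^2 + ... + B^k) = O(B^k)
    # vertices, but never does more than O(V + E) work overall.
    visited = {s}
    queue = deque([(s, 0)])
    while queue:
        u, d = queue.popleft()
        if u == t:
            return d
        if d == k:
            continue  # no point expanding beyond the known length
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                queue.append((v, d + 1))
    return None
```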

What does the time complexity of graph algorithms depend on?

I stumbled over this question in my textbook:
"In general, on what does the time complexity of Prim's, Kruskal's and Dijkstra's algorithms depends on?"
a. The number of vertices in the graph.
b. The number of edges in the graph.
c. Both, on the number of vertices and edges in the graph.
Explain your choice.
So according to Wikipedia, the worst-case time complexities of Prim's, Kruskal's and Dijkstra's algorithms are O(E log V), O(E log V) and O(E + V log V) respectively. So I guess the answer is (c)? But why?
I don't know about Prim's and Kruskal's, and I might be wrong about Dijkstra's, but I think in its case the answer would be (b), because:
Dijkstra's visits nodes along the shortest known path until it finds the destination.
This implies that if two edges point to the same node, only one will ever be considered by the algorithm, since the other has a higher (or equal) weight, rendering it moot to follow.
Therefore, the only way to make the traversal longer by adding edges is to add nodes (adding edges to an existing node can change the algorithm's traversal time, but not in proportion to the number of edges, only through their weights).
Therefore, my intuition is that only the number of nodes is in direct relation with the running time. The Dijkstra's algorithm Wikipedia page seems to confirm this:
The simplest implementation of Dijkstra's algorithm stores the vertices of set Q in an ordinary linked list or array, and extract-minimum from Q is simply a linear search through all vertices in Q. In this case, the running time is O(E + V^2) = O(V^2).
This is only an intuition of course, and cs.stackex might be of greater use.
The answer is (c), because both V and E contribute to the asymptotic complexity of the respective algorithms. On further analysis, one could argue that V matters less in Kruskal's and Prim's (since it appears only inside the log factor), but E carries roughly the same weight in all three cases.
Also, note that |E| <= |V|^2 always (for simple graphs).
In the worst case the graph is a complete graph, i.e. it has v(v-1)/2 edges, so e >> v and e ~ v^2.
The time complexities of Prim's and Dijkstra's algorithms are:
1. With an adjacency list and a priority queue: O((v+e) log v) in the worst case; e >> v, so O(e log v).
2. With a matrix and a priority queue: O(v^2 + e log v); in the worst case e ~ v^2, so O(v^2 + e log v) ~ O(e + e log v) ~ O(e log v).
3. When the graph gets denser (the worst case is a complete graph), we use a Fibonacci heap and an adjacency list: O(e + v log v).
The time complexity of Kruskal's is O(e log e). In the worst case e ~ v^2, so log(v^2) = 2 log v, and we can safely say that O(e log e) is O(2e log v), i.e. O(e log v) in the worst case.
As you said, the time complexities of O(ElogV), O(ElogV), and O(E+VlogV) mean that each one is dependent on both E and V. This is because each algorithm involves considering the edges and their respective weights in a graph. Since for Prim’s and Kruskal’s the MST has to be connected and include all vertices, and for Dijkstra’s the shortest path has to pass from one vertex to another through other intermediary vertices, the vertices also have to be considered in each algorithm.
For example, with Dijkstra’s algorithm, you are essentially looking to add edges that are both low in cost and that connect vertices that will eventually provide a path from the starting vertex to the ending vertex. To find the shortest path, you cannot solely look for a path that connects the start vertex to the end, and you cannot solely look for the smallest weighted edges; you need to consider both. Since you are considering both edges and vertices, the time it takes to make these considerations throughout the algorithm will depend on the number of edges and the number of vertices.
Additionally, different time complexities are possible through different implementations of the three algorithms, and analyzing each algorithm requires a consideration of both E and V.
For example, Prim’s algorithm is O(V^2), but can be improved with the use of a min heap-based priority queue to achieve the complexity you found: O(ElogV). O(ElogV) may seem like the faster algorithm, but that’s not always the case. E can be as large as V^2, so in dense graphs with close to V^2 edges, O(ElogV) becomes O(V^2). If V is very small then there is not much difference between O(V^2) and O(ElogV). E and V also influence the running time based on the way that the graph is being stored. For example, an adjacency list becomes very inefficient with dense graphs (with E approaching V^2) because checking to see if an edge exists in the graph goes from close to O(1) to O(V).
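
To see both terms concretely, here is a minimal sketch of Dijkstra's algorithm with a binary heap (a Python sketch under assumed names; `adj` maps each vertex to a list of `(neighbour, weight)` pairs):

```python
import heapq

def dijkstra(adj, source):
    # Binary-heap Dijkstra: each vertex is settled once (the V log V part)
    # and each edge is relaxed once (the E log V part), so the running
    # time O((V + E) log V) depends on both V and E.
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; u was already settled via a shorter path
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd  # relax edge (u, v)
                heapq.heappush(heap, (nd, v))
    return dist
```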