I've defined a bipartite graph with two equally sized partitions. Let's say the vertices are:
(T0, T1, T2, Z0, Z1, Z2),
the partitions are (T0, T1, T2) and (Z0, Z1, Z2). All vertices of partition "T" are connected to all vertices of partition "Z" through weighted edges.
I'm using the KuhnMunkresMinimalWeightBipartitePerfectMatching class to find the optimum assignment. It correctly gives me the best assignment possible.
My question is: can I ask for several solutions, ranked in order of cost?
I've explored two options so far that did not work:
first, I set the weights of all the edges involved in the first assignment to a high value, effectively making that solution more expensive than the others. However, this also penalizes other feasible solutions that share any of those edges.
second, I removed the edges from the graph after they had been used in the first solution. But then the graph is no longer complete bipartite, so a perfect matching may not even exist. Besides that, it again removes edges that could be part of a different solution.
Is there perhaps a different class I should be using? Or some way to extract several solutions from the one I use? I'm just starting out with JGraphT, so any help is appreciated.
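For reference, the standard trick for ranked ("k-best") assignments is to re-solve restricted copies of the problem: forbid, one at a time, each edge used in the optimal assignment; the cheapest of those restricted optima is the second-best assignment overall, and repeating this forcing/forbidding partitioning systematically is Murty's k-best assignment algorithm. Here is a minimal sketch of the second-best step, using scipy.optimize.linear_sum_assignment instead of JGraphT purely to illustrate the idea (the 3x3 cost matrix is made up):

import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy 3x3 cost matrix: rows = T0..T2, cols = Z0..Z2 (made-up weights).
cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])
FORBID = 1e6    # large finite penalty standing in for "edge removed"

rows, cols = linear_sum_assignment(cost)
best = cost[rows, cols].sum()

second_best = None
for r, c in zip(rows, cols):
    restricted = cost.copy()
    restricted[r, c] = FORBID                  # forbid exactly one optimal edge
    rr, cc = linear_sum_assignment(restricted)
    if any(i == r and j == c for i, j in zip(rr, cc)):
        continue                               # no alternative avoiding this edge
    value = cost[rr, cc].sum()                 # evaluate with the true weights
    if second_best is None or value < second_best:
        second_best = value

print(best, second_best)                       # 5.0 6.0 for this toy matrix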
In Karger's min-cut algorithm for undirected (possibly weighted) multigraphs, the main operation is to contract a randomly chosen edge and merge its incident vertices into one metavertex. This process is repeated until two vertices remain; these two vertices correspond to a cut. The algorithm can be implemented with an adjacency list.
Questions:
How can I find the particular edge that has been chosen to be contracted?
How does an edge get contracted (in an unweighted and/or weighted graph)?
Why does this procedure take quadratic time?
Edit: I have found some information that the runtime can be quadratic, due to the fact that we perform n - 2 contractions and each contraction can take O(n) time. It would be great if somebody could explain to me why a contraction takes linear time with an adjacency list. Note that a contraction consists of: deleting the contracted edge, merging the two vertices into a supernode, and making sure that the remaining adjacent edges are connected to the supernode.
Pseudocode:
procedure contract(G = (V, E)):
    while |V| > 2:
        choose an edge uniformly at random
        contract its endpoints
        delete self loops
    return cut
I have read the related topic Karger Min cut algorithm running time, but it did not help me. Also, I do not have much experience, so a layman's-terms explanation would be very much appreciated!
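To make the contraction step concrete, here is a minimal sketch (my own illustration, not from any particular textbook implementation, and it assumes a connected multigraph). It stores the multigraph as an edge list plus a label array mapping each original vertex to its current supernode; the relabelling scan is where the O(n) cost per contraction comes from:

import random

def karger_cut(n, edges, seed=None):
    """One run of Karger's contraction on vertices 0..n-1.
    edges is a list of (u, v) pairs; parallel edges are simply repeated."""
    rng = random.Random(seed)
    label = list(range(n))          # every vertex starts as its own supernode
    supernodes = n

    while supernodes > 2:
        u, v = rng.choice(edges)    # choose an edge uniformly at random
        lu, lv = label[u], label[v]
        if lu == lv:
            continue                # this edge has become a self loop: skip it
        # Contract: merge supernode lv into lu. Relabelling scans the whole
        # label array, so a single contraction costs O(n); with n - 2
        # contractions the run is O(n^2) overall (plus the skipped draws).
        for i in range(n):
            if label[i] == lv:
                label[i] = lu
        supernodes -= 1

    # The cut consists of all edges whose endpoints lie in different supernodes.
    return [(u, v) for (u, v) in edges if label[u] != label[v]]

# Example: a square 0-1-2-3-0 with a diagonal 0-2; prints one cut as a list of edges.
print(karger_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)], seed=1))

Keeping the parallel edges in the list matters: the random choice must be uniform over the surviving multigraph edges, and skipping self loops achieves exactly that.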
Minimally, I would like to know how to achieve what is stated in the title. Specifically, signal.lfilter seems like the only implementation of a difference equation filter in scipy, but it is 1D, as shown in the docs. I would like to know how to implement a 2D version as described by this difference equation. If that's as simple as "bro, use this function," please let me know, pardon my naiveté, and feel free to disregard the rest of the post.
I am new to DSP and acknowledge there might be a different approach to answering my question, so I will explain the broader goal and give context for the question, in the hope that someone knows how to do what I want with SciPy, or perhaps a better way than what I explicitly asked for.
To get straight into it, broadly speaking I am using vectorized computation methods (NumPy/SciPy) to implement a Monte Carlo simulation, to improve on a naive for loop. I have successfully abstracted most of my operations to array computation / linear algebra, but a few specific ones (recursive computations) have eluded my intuition, and I continually end up in the digital signal processing world when I go looking for how this type of thing has been done by others (that, or machine learning, but those frameworks are much more opinionated). The reason most of my Google searches end up on the scipy.signal or scipy.ndimage library references is clear to me at this point, and having accepted the "signal" representation of my data, I have spent a considerable amount of time (about as much as is reasonable for a field that is not my own) climbing the learning curve to figure out what I need from these libraries.
My simulation entails updating a vector of data representing the state of a system each period for n periods, and then repeating that whole process a "Monte Carlo" number of times. The updates in each of the n periods are inherently recursive, as the next depends on the state of the prior. It can be characterized as a difference equation, as linked above. Additionally, this vector is theoretically indexed on a grid of points with uneven step size. Here is an example vector y and its theoretical grid t:
y = np.r_[0.0024, 0.004, 0.0058, 0.0083, 0.0099, 0.0133, 0.0164]
t = np.r_[0.25, 0.5, 1, 2, 5, 10, 20]
I need to iteratively perform numerous operations on y for each of the n "updates." Specifically, I am computing the curvature along the curve y(t) using finite difference approximations and using the result at each point to adjust the corresponding y(t) prior to the next update. In a loop this amounts to in-place variable reassignment with the desired update in each iteration.
y += some_function(y)
Not only does this seem inefficient, but vectorizing things seems intuitive given that y is a vector to begin with. Furthermore, I am interested in preserving each "updated" y(t) along the n updates, which would require a data structure of dimensions len(y) x n. At this point, why not perform the updates in place in that array? This is where the question lies. Many of the update operations I have successfully vectorized the "NumPy way" (such as adding random variates to each point), but some appear overly complex in the array world.
Specifically, as mentioned above, the one involving computing curvature at each element using its two neighbouring elements, and then immediately using that result to update the next row of the array before performing its own curvature "update." I was able to implement a non-recursive version (each row fails to consider its "updated self" from the prior row) of the curvature operation using ndimage's generic_filter. Given the uneven grid, I have unique coefficients (kernel weights) for each triplet in the kernel footprint (instead of always using [1, -2, 1] for y'' as I would on a uniform grid). This last part has already forced me to use a spatial filter from ndimage rather than a 1D convolution. I'll point out that something conceptually similar was discussed in this math.exchange post, and it seems to me only the third response saliently addressed the difference between the mathematical notion of "convolution" (which is associative) and general spatial filtering kernels, which would require two sequential filtering operations or a cleverly merged kernel.
In any case this does not seem to actually address my concern, as it is not about 2D recursive filtering but rather about having a backwards-looking kernel footprint. Additionally, I think I've concluded it is not applicable, in that it only allows for "recursion" (backward-looking kernel footprints, in spatial-filtering terms) with a kernel whose size grows with the depth of the recursion. Meaning, if I wanted to filter each of n rows incorporating calculations on all prior rows, it would require a convolution kernel far too big (for my n, anyway). If I'm understanding all this correctly, a recursive linear filter is algorithmically more efficient in that it feeds back (for use in computation) the result of itself applied over the previous samples (up to the point where the stability of the algorithm is affected) using another companion vector (z). In my case, I would only need to look back one step at the output signal y[n-1] to compute y[n] from the curvature at x[n], as the rest works itself out like a cumsum. signal.lfilter works for this, but I can't use it to compute the curvature, as that requires a kernel footprint that can "see" at least its left and right neighbours, which is how I ended up using generic_filter.
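To make this concrete, here is a stripped-down version of the loop I am trying to replace (a sketch only: the alpha step size and the pure curvature update stand in for my real model, and the three-point weights are the standard non-uniform-grid second-difference coefficients):

import numpy as np

y0 = np.r_[0.0024, 0.004, 0.0058, 0.0083, 0.0099, 0.0133, 0.0164]
t  = np.r_[0.25, 0.5, 1, 2, 5, 10, 20]

def second_derivative(y, t):
    """Three-point second difference on an uneven grid (interior points only)."""
    h1 = t[1:-1] - t[:-2]          # left spacings
    h2 = t[2:] - t[1:-1]           # right spacings
    num = h2 * y[:-2] - (h1 + h2) * y[1:-1] + h1 * y[2:]
    d2 = 2.0 * num / (h1 * h2 * (h1 + h2))
    return np.pad(d2, 1)           # zero curvature at the two endpoints

n_periods = 5
alpha = 0.01                       # placeholder step size, not from my real model
path = np.empty((n_periods + 1, len(y0)))
path[0] = y0
for i in range(n_periods):
    # each update depends on the already-updated previous row
    path[i + 1] = path[i] + alpha * second_derivative(path[i], t)

The point is that row i + 1 needs row i after its update, which is what rules out expressing the whole thing as a single convolution over the array.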
It seems to me I should be able to do both at once with one filter, combining spatial and recursive filtering; or perhaps I've missed the maths of how this could be simplified/combined (a convolution of multiple kernels?).
It seems like this should be a common problem, but perhaps it is rarely relevant to do both at once in signal processing and image filtering. Perhaps this is why you don't use signal-processing libraries solely to implement a fast Monte Carlo simulation; though it seems less esoteric than using a tensor math library to implement a recursive neural network scan ... which I'm attempting to do right now.
EDIT: For those familiar with the theoretical side of DSP, I know that what I am describing (the process of designing recursive filters with arbitrary impulse responses) is achieved by employing a mathematical technique called the z-transform, which I understand is generally used for two things:
converting between the recursion coefficients and the frequency response
combining cascaded and parallel stages into a single filter
Both are exactly what I am trying to accomplish.
Also, I reworded the title away from FIR / IIR because those imply specific definitions of "recursion" and may be confusing / a misnomer.
In my study, a person is represented as a pair of real numbers (x, y). x is in [30, 80] and y is in [60, 120]. There are two types of people, A and B. I have ~300 of each type. How can I generate the largest (or even a large) set of pairs of one person from A with one from B: ((xA, yA), (xB, yB)), such that each pair of points is close? Two points are close if abs(x1-x2) < dX and abs(y1-y2) < dY. Similar constraints are acceptable. (That is, this constraint is roughly a Manhattan metric, but Euclidean etc. is OK too.) Not all points need be used, but no point can be reused.
You're looking for the Hungarian Algorithm.
Suggested formulation: A are rows, B are columns, and each cell contains a distance metric between Ai and Bj, e.g. abs(X(Ai)-X(Bj)) + abs(Y(Ai)-Y(Bj)). (You can normalize the X and Y values to [0, 1] if you want distances to be proportional to the range of each variable.)
Then use the Hungarian Algorithm to minimize matching weight.
You can filter out matches with distances over your threshold. If you're worried that this filtering might cause the approach to be sub-optimal, you could set distances over your threshold to a very high number.
There are many implementations of this algorithm. A short search turned up one in just about any conceivable language, including VBA for Excel, plus some online solvers (not sure about handling a 300x300 matrix with those, though).
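For example, a minimal sketch with SciPy's linear_sum_assignment, which implements this kind of assignment solver (the dX/dY thresholds and the random A/B data below are placeholders):

import numpy as np
from scipy.optimize import linear_sum_assignment

# A and B are arrays of shape (~300, 2) holding (x, y) per person (dummy data here).
rng = np.random.default_rng(0)
A = np.column_stack([rng.uniform(30, 80, 300), rng.uniform(60, 120, 300)])
B = np.column_stack([rng.uniform(30, 80, 300), rng.uniform(60, 120, 300)])

dX, dY = 2.0, 4.0                       # example thresholds

dx = np.abs(A[:, None, 0] - B[None, :, 0])
dy = np.abs(A[:, None, 1] - B[None, :, 1])
cost = dx + dy                          # Manhattan-style distance matrix
too_far = (dx > dX) | (dy > dY)
cost[too_far] = 1e6                     # make over-threshold pairs very expensive

rows, cols = linear_sum_assignment(cost)
pairs = [(i, j) for i, j in zip(rows, cols) if not too_far[i, j]]   # drop bad matches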
Hungarian algorithm did it, thanks Etov.
Source code available here: http://www.filedropper.com/stackoverflow1
I have a finite number of points (a cloud), with a metric defined on them. I would like to find the maximum number of clusters in this cloud such that:
1) the maximum distance between any two points in one cluster is smaller than a given epsilon (a constant)
2) each cluster has exactly k (a constant) points in it
I looked at all kinds of different clustering methods, and clustering with a restriction on the maximum inner distance is not a problem (density-based). Constraint 2) and the requirement to find "the maximum number of clusters s.t." seem to be problematic, though. Any suggestions for an efficient solution?
Thank you,
A~
Given your constraints, there might be no solution. And actually, that may happen quite often...
The most obvious case is when you don't have a multiple of k points.
But also if epsilon is set too low, there might be points that cannot be put into clusters anymore.
I think you need to rethink your requirements and problem, instead of looking for an algorithm to solve an unreasonably hard requirement that might not be satisfiable.
Also consider whether you really need to find the guaranteed maximum, or just a good solution.
There are some rather obvious approaches that will at least find a good approximation fast.
I have the same impression as #Anony-Mousse, actually: you have not understood your problem and requirements yet.
If you want your cluster sizes to be k, there is no question of how many clusters you will get: it's obviously n/k. So you can try to use a k-means variant that produces clusters of the same size, as e.g. described in this tutorial: Tutorial on same-size k-means, and set the desired number of clusters to n/k.
Note that this is not a particularly sensible or good clustering algorithm. It does something to satisfy the constraints, but the clusters are not really meaningful from a cluster analysis point of view. It's constraint satisfaction, but not cluster analysis.
In order to also satisfy your epsilon constraint, you can then start off with this initial solution (which probably is what #Anony-Mousse referred to as "obvious approaches") and try to perform the same kind of optimization-by-swapping-elements in order to satisfy the epsilon condition.
You may need a number of restarts, because there may be no solution.
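A bare-bones version of that swap idea might look like this (just a sketch: it assumes you already have an initial equal-size labelling, e.g. from the same-size k-means step, and it only tries random greedy swaps, so it is neither fast nor guaranteed to succeed):

import numpy as np

def worst_diameter(points, labels):
    """Largest pairwise distance found within any cluster."""
    worst = 0.0
    for c in np.unique(labels):
        members = points[labels == c]
        if len(members) > 1:
            d = np.linalg.norm(members[:, None] - members[None], axis=-1)
            worst = max(worst, d.max())
    return worst

def swap_refine(points, labels, eps, max_tries=10000, seed=0):
    """Random greedy swaps that keep cluster sizes fixed and try to push the
    worst intra-cluster diameter below eps. May fail: check the returned value."""
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    best = worst_diameter(points, labels)
    for _ in range(max_tries):
        if best < eps:
            break
        i, j = rng.integers(len(points), size=2)
        if labels[i] == labels[j]:
            continue
        labels[i], labels[j] = labels[j], labels[i]      # tentative swap
        cand = worst_diameter(points, labels)
        if cand < best:
            best = cand                                  # keep the improvement
        else:
            labels[i], labels[j] = labels[j], labels[i]  # undo the swap
    return labels, best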
See also:
Group n points in k clusters of equal size
K-means algorithm variation with equal cluster size
for essentially redundant questions.
I have a problem here that I managed to reduce to a weighted bipartite matching problem. Basically, I have a bipartite graph with partitions A and B, and a set of edges with weights. In my case, |A| ~= 20 and |B| = 300.
I want to find a set of edges which minimizes the weights AND COVERS A (each vertex in A has an associated solution edge).
Questions:
- Is there a special name for this kind of problem, so I can look for algorithms and solutions?
- I know I can reduce it to a weighted bipartite perfect matching by adding dummy vertices to A, with infinite weight. But I'm worried about practical performance, since |B| >> |A|.
- Any suggestions on Java libraries? I found this: http://algs4.cs.princeton.edu/code/. I think 'AssignmentProblem.java' is almost what I need (but I guess it doesn't ensure a perfect matching?).
Thanks in advance, and sorry about the bad English.
a) maximum weighted perfect matching
b) ???
c) the Floyd or Floyd-Warshall algorithm is your friend
I've found a C implementation on the web, and you can also use Edmonds' blossom algorithm.
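As a side note on the dummy-vertex concern: with |A| ~= 20 and |B| = 300, the one-sided "cover A" version is just a rectangular assignment problem, which some solvers handle directly without any padding. A minimal sketch in Python with SciPy (not the Java library asked about, purely to show the shape of the solution; the weight matrix is dummy data):

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
weights = rng.random((20, 300))               # |A| = 20 rows, |B| = 300 columns

rows, cols = linear_sum_assignment(weights)   # handles rectangular matrices directly
# every row (vertex of A) is matched to a distinct column (vertex of B)
assignment = dict(zip(rows, cols))
total = weights[rows, cols].sum()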