How do minimum multicut algorithms avoid trivial solutions?


I've been reading some papers on multicut algorithms for segmenting graph structures. I'm specifically interested in this work, which proposes an algorithm to solve an extension of the multicut problem:

https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Keuper_Efficient_Decomposition_of_ICCV_2015_paper.pdf

Regarding the edge costs, it says:

...for any pair of nodes, a real-valued cost (reward) to all decompositions for which these nodes are in distinct components

Fair enough. It further says that the solution to the multicut problem is a binary vector of length equal to the number of edges in the graph, in which a '1' indicates that the corresponding edge is cut, i.e. its two endpoints lie in distinct components:

for every edge vw ∈ E ∪ F , y(v,w) = 1 if and only if v and w are in distinct components of G.

But then the optimization problem is written as:

min over feasible y of: Σ_{vw ∈ E ∪ F} c(v,w) · y(v,w)

This doesn't seem to make sense. If the edge weights represent rewards for placing an edge's endpoints in distinct components, shouldn't this be a maximization problem? And in either case, if all edge weights are positive, wouldn't that lead to the trivial solution where y is the all-zeros vector? The expression above is followed by some constraints in the paper, but I couldn't figure out how any of them prevents this outcome.
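
To make my confusion concrete, here is a tiny sketch (the graph and the costs are made up by me, not taken from the paper) of how I currently read the objective:

```python
# Toy reading of the objective: y[e] = 1 means edge e is "cut",
# i.e. its endpoints end up in distinct components.

# hypothetical triangle graph with all-positive costs
costs = {("a", "b"): 2.0, ("b", "c"): 1.5, ("a", "c"): 0.5}

def objective(y):
    """Sum of c_e * y_e over all edges."""
    return sum(costs[e] * y[e] for e in costs)

y_nothing_cut = {e: 0 for e in costs}   # everything in one component
y_all_cut     = {e: 1 for e in costs}   # every node its own component

print(objective(y_nothing_cut))  # 0.0 -> minimal whenever all costs are positive
print(objective(y_all_cut))      # 4.0
```

With only positive costs, the all-zeros vector always wins the minimization, which is exactly the trivial outcome I'm worried about.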

Furthermore, when it later tries to generate an initial solution using Greedy Additive Edge Contraction, it says:

Alg. 1 starts from the decomposition into single nodes. In every iteration, a pair of neighboring components is joined for which the join decreases the objective value maximally. If no join strictly decreases the objective value, the algorithm terminates.

Again, if edge weights are rewards for keeping nodes separate, wouldn't joining any two nodes reduce the reward? And even if I assume for a second that edge weights are penalties for keeping nodes separate, wouldn't this method simply lump all the nodes into a single component?
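
To be concrete about how I read Alg. 1, here is a rough sketch of greedy additive edge contraction as the quoted description explains it (naive bookkeeping and made-up costs; not the paper's actual implementation):

```python
from collections import defaultdict

def gaec(num_nodes, costs):
    """Greedy additive edge contraction, naive version for clarity.

    costs: dict mapping (u, v) node pairs to real-valued cut costs c_e.
    Starts from singleton components and repeatedly joins the pair of
    neighboring components whose join decreases the objective the most;
    joining removes the accumulated cost of the edges between them from
    the objective (those edges get y_e = 0). Stops when no join strictly
    decreases the objective. Returns a component label per node.
    """
    comp = list(range(num_nodes))                      # node -> component label

    def inter_component_costs():
        between = defaultdict(float)
        for (u, v), c in costs.items():
            if comp[u] != comp[v]:
                a, b = sorted((comp[u], comp[v]))
                between[(a, b)] += c
        return between

    between = inter_component_costs()
    while between:
        (a, b), best = max(between.items(), key=lambda kv: kv[1])
        if best <= 0:                                  # no strictly decreasing join left
            break
        comp = [a if c == b else c for c in comp]      # merge component b into a
        between = inter_component_costs()              # naive recomputation
    return comp

# made-up mixed-sign costs on a triangle
costs = {(0, 1): 3.0, (1, 2): -2.0, (0, 2): 0.5}
print(gaec(3, costs))   # [0, 0, 2]: nodes 0 and 1 merged, node 2 left separate
```

With these made-up mixed-sign costs the greedy process stops after merging nodes 0 and 1, because joining node 2 would increase the objective by 1.5. But with all-positive costs every join strictly decreases the objective, so it would lump everything into a single component, which is exactly the degenerate behavior I'm asking about.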

The only way I can see this working is if the edge weights are a balanced mix of positive and negative values, but I'm pretty sure I'm missing something, because such a requirement isn't mentioned anywhere in the literature.


There are 2 best solutions below


Better late than never, here's the answer:

The weights c_e for cutting an edge e, as defined in Definition 1, are not restricted to be positive. In fact, Equation (7) specifies that they are log-ratios of two complementary probabilities: if the estimated probability of edge e being cut is greater than 0.5, then c_e is negative; if it is smaller, c_e is positive.

While the trivial "all edges cut" solution is still feasible, it is very unlikely to be optimal in any non-toy instance, where some edges are more likely to be cut and others are more likely to remain.
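
A quick sketch of that sign behavior (probabilities invented for illustration; the exact log-ratio below is one common convention that matches the sign pattern described above, not necessarily the paper's exact Equation (7)):

```python
import math

def cut_cost(p_cut, eps=1e-9):
    """Log-ratio cost for cutting an edge with estimated cut probability p_cut.

    Assumed convention: c_e = log((1 - p_cut) / p_cut), so that
    p_cut > 0.5  ->  c_e < 0  (cutting is rewarded),
    p_cut < 0.5  ->  c_e > 0  (cutting is penalized).
    """
    p = min(max(p_cut, eps), 1.0 - eps)   # clamp away from 0 and 1
    return math.log((1.0 - p) / p)

for p in (0.9, 0.5, 0.1):
    print(p, round(cut_cost(p), 3))
# 0.9 -2.197
# 0.5 0.0
# 0.1 2.197
```

With such mixed-sign costs, the minimization has to trade off negative-cost edges (which it wants to cut) against positive-cost edges (which it wants to keep uncut), so in general neither "cut nothing" nor "cut everything" is optimal.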


Just citing this multicut lecture

Minimum Multicut. The input consists of a weighted, undirected graph G = (V, E) with a non-negative weight c_k for every edge in E, and a set of terminal pairs {(s_1, t_1), (s_2, t_2), ..., (s_k, t_k)}. A multicut is a set of edges whose removal disconnects each of the terminal pairs.

I think this definition makes it clear that the multicut problem is a minimization problem over the accumulated weight of the selected (cut) edges. Maximizing that weight would of course be trivial (just remove all edges). No?
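
Here is a tiny brute-force sketch of that definition (graph, weights and terminal pairs invented for illustration): it enumerates all edge subsets, keeps only those whose removal disconnects every terminal pair, and returns the cheapest one, which is clearly not "remove all edges".

```python
from itertools import combinations

# hypothetical example: a path a-b-c-d with a single terminal pair (a, d)
edges = {("a", "b"): 1.0, ("b", "c"): 5.0, ("c", "d"): 2.0}
terminal_pairs = [("a", "d")]

def connected(s, t, kept_edges):
    """Is t reachable from s using only the kept (uncut) edges?"""
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        for a, b in kept_edges:
            for x, y in ((a, b), (b, a)):
                if x == u and y not in seen:
                    seen.add(y)
                    stack.append(y)
    return t in seen

best_cut, best_weight = None, float("inf")
edge_list = list(edges)
for r in range(len(edge_list) + 1):
    for cut in combinations(edge_list, r):
        kept = [e for e in edge_list if e not in cut]
        if all(not connected(s, t, kept) for s, t in terminal_pairs):
            weight = sum(edges[e] for e in cut)
            if weight < best_weight:
                best_cut, best_weight = cut, weight

print(best_cut, best_weight)   # (('a', 'b'),) 1.0 -- the cheapest edge separating a and d
```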