Dantzig-Wolfe Decomposition: A Technical Lesson
A step-by-step walkthrough of the Dantzig-Wolfe decomposition algorithm — from problem structure to convergence — with full mathematical notation.
Prerequisites
Before working through this lesson, you should be comfortable with the following concepts. Each link opens a brief refresher or connects to the site glossary.
Required Background
- Linear Programming (LP): formulation as $\min\, c^\top x$ subject to $Ax = b$, $x \ge 0$; feasibility, optimality, and duality
- Simplex Method: basis, basic feasible solutions, pivoting rules, optimality certificates; dual variables arise naturally as shadow prices
- Matrix algebra: matrix–vector products, block matrices, linear independence
- Convexity: extreme points (vertices) of a polyhedron; Minkowski’s theorem that every bounded polyhedron is the convex hull of its vertices
- LP duality: the dual problem, complementary slackness, strong duality
If any of these concepts feel unfamiliar, the site glossary provides plain-language definitions. External references:
- Bertsimas & Tsitsiklis, Introduction to Linear Optimization (1997) — Chapters 1–5
- Vanderbei, Linear Programming: Foundations and Extensions (2014) — freely available online
Problem Structure
Dantzig-Wolfe Decomposition applies to linear programs with a block-angular structure: a set of independent sub-problems linked by a small number of shared (coupling) constraints.
General Block-Angular LP Formulation
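Written out in full (a reconstruction consistent with the symbol legend below; equality constraints are shown, though inequalities are handled analogously), the block-angular LP is:

$$
\begin{aligned}
\min_{x_1,\dots,x_K}\quad & \sum_{i=1}^{K} c_i^\top x_i \\
\text{s.t.}\quad & \sum_{i=1}^{K} A_0^i x_i = b_0 \qquad \text{(coupling constraints)} \\
& A_i x_i = b_i, \quad x_i \ge 0, \qquad i = 1,\dots,K.
\end{aligned}
$$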
Symbol Legend
- K: number of sub-problems (blocks)
- x_i: decision variables for sub-problem i
- c_i: objective coefficient vector for sub-problem i
- A_0^i: coupling constraint matrix for sub-problem i’s variables
- b_0: right-hand side of the coupling constraints
- A_i, b_i: constraint matrix and RHS for the i-th sub-problem
Why This Structure Matters
If no coupling constraints existed ($A_0^i = 0$ for all $i$), the problem would decompose completely into $K$ independent LPs. The coupling constraints are exactly what makes the problem hard — and exactly what Dantzig-Wolfe exploits.
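As a small numerical illustration (hypothetical data; scipy is assumed available as the LP solver), setting every coupling block $A_0^i$ to zero lets the two blocks below be solved independently, and the sum of their optima matches the monolithic solve:

```python
import numpy as np
from scipy.optimize import linprog  # assumed available; any LP solver would do

# Hypothetical data for K = 2 blocks with NO coupling rows (A_0^i = 0).
c1, A1, b1 = np.array([-1.0]), np.array([[1.0]]), np.array([4.0])  # min -x1   s.t. x1 <= 4
c2, A2, b2 = np.array([-2.0]), np.array([[1.0]]), np.array([3.0])  # min -2*x2 s.t. x2 <= 3

# Solve each block on its own.
z1 = linprog(c1, A_ub=A1, b_ub=b1, method="highs").fun
z2 = linprog(c2, A_ub=A2, b_ub=b2, method="highs").fun

# Solve the same data as one monolithic LP with a block-diagonal constraint matrix.
c = np.concatenate([c1, c2])
A = np.block([[A1, np.zeros((1, 1))], [np.zeros((1, 1)), A2]])
b = np.concatenate([b1, b2])
z = linprog(c, A_ub=A, b_ub=b, method="highs").fun

print(z1 + z2, z)  # both are -10.0: without coupling, the problem decouples
```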
The Master Problem
The key insight of Dantzig-Wolfe Decomposition is to reformulate the original block-angular LP by expressing each sub-problem’s feasible region using its extreme points and extreme rays.
Deriving the Dantzig-Wolfe Master Problem
Let $X_i = \{x_i \ge 0 : A_i x_i = b_i\}$ (the feasible region of sub-problem $i$) be a polyhedron. By Minkowski’s theorem, every point in $X_i$ can be written as

$$x_i = \sum_{j \in J_i} \lambda_{ij}\, v_{ij} + \sum_{r \in R_i} \mu_{ir}\, d_{ir},$$

where $v_{ij}$ are the extreme points of $X_i$, $d_{ir}$ are the extreme rays, $\lambda_{ij} \ge 0$, $\sum_{j \in J_i} \lambda_{ij} = 1$, and $\mu_{ir} \ge 0$.
Substituting this representation into the original problem gives the Dantzig-Wolfe Master Problem:

$$
\begin{aligned}
\min_{\lambda,\mu}\quad & \sum_{i=1}^{K}\Big(\sum_{j \in J_i} \lambda_{ij}\, c_i^\top v_{ij} + \sum_{r \in R_i} \mu_{ir}\, c_i^\top d_{ir}\Big) \\
\text{s.t.}\quad & \sum_{i=1}^{K}\Big(\sum_{j \in J_i} \lambda_{ij}\, A_0^i v_{ij} + \sum_{r \in R_i} \mu_{ir}\, A_0^i d_{ir}\Big) = b_0 \\
& \sum_{j \in J_i} \lambda_{ij} = 1, \qquad i = 1,\dots,K \\
& \lambda \ge 0, \quad \mu \ge 0.
\end{aligned}
$$
The Role of Convexity Constraints
The convexity constraints ($\sum_{j \in J_i} \lambda_{ij} = 1$ for each $i$) ensure that the chosen combination of extreme points forms a convex combination — i.e., a point in $X_i$. Without them, the feasible set of the master problem would not correspond to the original LP.
The Restricted Master Problem
In practice, the full master problem has exponentially many columns (one per extreme point/ray). The algorithm never constructs it explicitly. Instead, it works with the Restricted Master Problem (RMP): a master problem that includes only a subset of columns, adding new ones one at a time via the pricing process.
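Writing $\bar{J}_i \subseteq J_i$ and $\bar{R}_i \subseteq R_i$ for the subsets of extreme points and extreme rays generated so far (bar notation introduced here for illustration), the RMP reads:

$$
\begin{aligned}
\min\quad & \sum_{i=1}^{K}\Big(\sum_{j \in \bar{J}_i} \lambda_{ij}\, c_i^\top v_{ij} + \sum_{r \in \bar{R}_i} \mu_{ir}\, c_i^\top d_{ir}\Big) \\
\text{s.t.}\quad & \sum_{i=1}^{K}\Big(\sum_{j \in \bar{J}_i} \lambda_{ij}\, A_0^i v_{ij} + \sum_{r \in \bar{R}_i} \mu_{ir}\, A_0^i d_{ir}\Big) = b_0 \\
& \sum_{j \in \bar{J}_i} \lambda_{ij} = 1, \quad i = 1,\dots,K, \qquad \lambda, \mu \ge 0.
\end{aligned}
$$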
Sub-Problems
At each iteration of the Dantzig-Wolfe algorithm, the pricing sub-problems determine whether any new column can improve the current master solution.
Pricing Sub-Problem Formulation
Let $\pi$ be the dual variables of the coupling constraints and $\sigma_i$ be the dual of the convexity constraint for sub-problem $i$.
The i-th pricing sub-problem is:

$$\bar{c}_i^* = \min_{x_i \in X_i}\; (c_i - (A_0^i)^\top \pi)^\top x_i - \sigma_i.$$
The expression $(c_i - (A_0^i)^\top \pi)^\top v_{ij} - \sigma_i$ is the reduced cost of the column corresponding to extreme point $v_{ij}$.
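The reduced-cost computation can be sketched in a few lines of plain Python (function and argument names are illustrative, not from any particular library):

```python
def reduced_cost(c_i, A0_i, v, pi, sigma_i):
    """Reduced cost (c_i - (A0_i)^T pi)^T v - sigma_i of the master column
    built from extreme point v of block i; plain lists, no dependencies."""
    # Price out the coupling rows: adjusted cost vector c_i - (A0_i)^T pi.
    adjusted = [c_i[j] - sum(pi[k] * A0_i[k][j] for k in range(len(pi)))
                for j in range(len(c_i))]
    return sum(a * x for a, x in zip(adjusted, v)) - sigma_i

# Tiny hypothetical check: one coupling row, two variables.
rc = reduced_cost(c_i=[1.0, 2.0], A0_i=[[1.0, 1.0]], v=[1.0, 0.0],
                  pi=[0.5], sigma_i=0.25)
print(rc)  # (1 - 0.5)*1 + (2 - 0.5)*0 - 0.25 = 0.25
```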
The Bounded Case
If the sub-problem is bounded (i.e., $X_i$ is bounded or the objective is bounded below on $X_i$), then the optimal solution is an extreme point $v_{ij}$. If its reduced cost is negative, it becomes a candidate column for the master problem:

$$(c_i - (A_0^i)^\top \pi)^\top v_{ij} - \sigma_i < 0.$$
The Unbounded Case
If the sub-problem is unbounded (the minimisation has no lower bound), then the optimal direction is an extreme ray $d_{ir}$ along which the objective decreases without bound. This extreme ray becomes an entering column for the master problem:

$$(c_i - (A_0^i)^\top \pi)^\top d_{ir} < 0.$$

(No $\sigma_i$ term appears here because ray columns carry a zero entry in the convexity row.)
Infeasibility Detection
If the RMP is infeasible (e.g., the initial basis has no feasible solution), a Phase I approach or big-M method must be used first. Sub-problem infeasibility cannot occur because each $X_i$ is assumed non-empty by the problem assumptions.
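A common Phase I construction for the RMP, sketched here under the assumption that one extreme point per block is already available (e.g., from solving each sub-problem once with zero prices), adds artificial variables $s^+, s^- \ge 0$ to the coupling rows and minimises their sum:

$$
\begin{aligned}
\min\quad & \mathbf{1}^\top (s^+ + s^-) \\
\text{s.t.}\quad & \sum_{i=1}^{K} \sum_{j \in \bar{J}_i} \lambda_{ij}\, A_0^i v_{ij} + s^+ - s^- = b_0 \\
& \sum_{j \in \bar{J}_i} \lambda_{ij} = 1, \quad i = 1,\dots,K, \qquad \lambda, s^+, s^- \ge 0.
\end{aligned}
$$

If the Phase I optimum is zero, the resulting basis is feasible for the RMP; otherwise the original LP is infeasible (or further columns must be priced in during Phase I before concluding infeasibility).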
Column Generation
The Dantzig-Wolfe algorithm is a column generation procedure: it builds the master problem one column at a time, guided by the dual solution of the current restricted master problem.
One Complete Iteration
Solve the Restricted Master Problem (RMP). Using only the columns currently in the RMP, find the optimal primal and dual solutions. Let $\pi$ be the optimal dual variables for the coupling constraints and $\sigma$ the dual variables for the convexity constraints.
Obtain dual variables. Extract $\pi$ and $\sigma$ from the RMP optimal basis. These act as prices that value each unit of coupling-constraint resource.
Solve each pricing sub-problem. For each $i = 1, \dots, K$, solve:

$$\bar{c}_i^* = \min_{x_i \in X_i}\; (c_i - (A_0^i)^\top \pi)^\top x_i - \sigma_i.$$

Here $\bar{c}_i^*$ is the minimum reduced cost for sub-problem $i$.
Check optimality. If $\bar{c}_i^* \ge 0$ for all $i$, the current RMP solution is optimal for the original LP — no improving column exists.
The Optimality Condition
If any $\bar{c}_i^* < 0$, the corresponding extreme point (or ray) is added to the RMP as a new column, and the next iteration begins.
Adding the Entering Column
The new column added to the RMP for sub-problem $i$ corresponds to the extreme point $v_{ij}$ and has:
- Coupling block: $A_0^i v_{ij}$
- Convexity block: $e_i$ (unit vector selecting row $i$ of the convexity constraints)
- Objective coefficient: $c_i^\top v_{ij}$
The RMP is re-solved with this additional column to obtain a new dual solution, and the iteration repeats.
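The full loop above can be sketched end to end on a tiny single-block instance ($K = 1$, so $\sigma$ is a single scalar; all data is hypothetical, and scipy's `linprog` with the HiGHS backend is assumed for both the RMP and the pricing sub-problem):

```python
import numpy as np
from scipy.optimize import linprog  # HiGHS backend exposes dual values via .eqlin

# Hypothetical instance: min c^T x  s.t.  A0 x = b0 (coupling), x in X.
c   = np.array([-1.0, -2.0])                          # block objective c_1
A0  = np.array([[1.0, 0.0]]); b0 = np.array([2.0])    # coupling row: x1 = 2
A_X = np.array([[1.0, 1.0]]); b_X = np.array([4.0])   # X = {x >= 0 : x1 + x2 <= 4}

cols = [np.array([0.0, 0.0]), np.array([4.0, 0.0])]   # initial extreme points of X

for _ in range(20):
    # Restricted master: min sum_j lam_j c^T v_j
    #   s.t. sum_j lam_j (A0 v_j) = b0  and  sum_j lam_j = 1, lam >= 0.
    obj  = [c @ v for v in cols]
    A_eq = np.vstack([np.column_stack([A0 @ v for v in cols]),
                      np.ones((1, len(cols)))])
    b_eq = np.concatenate([b0, [1.0]])
    rmp = linprog(obj, A_eq=A_eq, b_eq=b_eq, method="highs")

    # Dual prices: pi for the coupling row(s), sigma for the convexity row.
    pi, sigma = rmp.eqlin.marginals[:-1], rmp.eqlin.marginals[-1]

    # Pricing sub-problem: min (c - A0^T pi)^T x over x in X.
    sub = linprog(c - A0.T @ pi, A_ub=A_X, b_ub=b_X, method="highs")
    if sub.fun - sigma >= -1e-9:   # minimum reduced cost non-negative: optimal
        break
    cols.append(sub.x)             # enter the new extreme point as a column

print(rmp.fun)  # optimal value of the original LP; -6.0 on this instance
```

Each pass mirrors the iteration described above: solve the RMP, read off $(\pi, \sigma)$, price, and either stop or append the new column.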
Convergence
Termination Criterion
The Dantzig-Wolfe algorithm terminates when all pricing sub-problems return a non-negative reduced cost:

$$\bar{c}_i^* \ge 0 \quad \text{for all } i = 1, \dots, K.$$
At this point, the current RMP solution is certified optimal for the original LP. Under non-degeneracy, the algorithm terminates in finitely many iterations: each new column strictly improves the objective, and there are only finitely many extreme points and extreme rays.
Practical Issues: Degeneracy
In the presence of degeneracy (multiple optimal dual solutions to the RMP), the same column may be priced repeatedly or the objective may fail to improve for many iterations (the "tailing-off" effect). This is the column-generation analogue of cycling in the simplex method.
Stabilisation Strategies
Several techniques address tailing-off and degeneracy:
- Restricted master heuristics: restrict the set of candidate entering columns at each iteration to improve numerical stability
- Dual stabilisation / proximal methods: add a regularisation term to the RMP to smooth the dual trajectory (e.g., bundle methods)
- Kelley cuts / cutting planes: add cutting planes to the master problem’s feasible region to tighten the lower bound more aggressively
- Interior-point pricing: use interior-point duals rather than simplex duals to reduce oscillation
These stabilisation strategies are active research topics in large-scale LP and integer programming.
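As a concrete flavour of dual stabilisation, here is a minimal sketch of dual-price smoothing in the spirit of Wentges smoothing (the function is illustrative, not a library API): the pricing step is run at a convex combination of the latest RMP duals and a stability centre rather than at the raw duals.

```python
def smoothed_duals(pi_new, pi_center, alpha=0.7):
    """Convex combination of the fresh RMP duals and a stability centre.
    Pricing at these damped prices reduces dual oscillation under degeneracy;
    alpha = 1 recovers plain (unstabilised) column generation."""
    return [alpha * n + (1 - alpha) * s for n, s in zip(pi_new, pi_center)]

# Oscillating duals are pulled toward the centre before pricing.
print(smoothed_duals([10.0, -10.0], [0.0, 0.0], alpha=0.5))  # [5.0, -5.0]
```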