CoinUtils

Layer 0 - Foundation utilities for COIN-OR solvers

CoinUtils provides the fundamental building blocks used by all COIN-OR optimization libraries. Every solver in the stack depends on these core utilities for sparse linear algebra, file I/O, and numerical infrastructure.

Layer 0 | 122 files | 18 with algorithm annotations


Why CoinUtils Matters

Optimization solvers spend most of their time doing sparse linear algebra: factoring the basis matrix, solving triangular systems, and forming matrix-vector products for pricing and pivoting.

CoinUtils provides the efficient sparse data structures and algorithms that make this tractable for problems with millions of variables.


Core Components

Sparse Matrix Classes

Large optimization problems are sparse — most entries are zero. CoinUtils provides column-major storage that exploits this:

Class                     Purpose
CoinPackedMatrix          Column-major sparse matrix
CoinPackedVector          Sparse vector with indices
CoinIndexedVector         Sparse vector with O(1) index lookup
CoinShallowPackedVector   Non-owning view into packed data

Why column-major? LP constraints $Ax \leq b$ are naturally column-oriented: each column is a variable's coefficients in all constraints. Column access is $O(1)$, critical for pricing in simplex.
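
A small sketch of the two main vector flavors. It assumes the insert, reserve, denseVector, and getNumElements calls shown here match the current CoinUtils headers; the point is that CoinIndexedVector pairs a dense value array with a list of nonzero positions, which is what gives the O(1) element lookup.

#include "CoinPackedVector.hpp"
#include "CoinIndexedVector.hpp"

// Purely packed storage: parallel (index, value) arrays.
CoinPackedVector packed;
packed.insert(4, 2.5);    // x[4] = 2.5
packed.insert(17, -1.0);  // x[17] = -1.0

// Indexed storage: values live in a dense array, and a separate index
// list remembers which positions are nonzero.
CoinIndexedVector indexed;
indexed.reserve(100);     // dense array covers indices 0..99
indexed.insert(4, 2.5);
indexed.insert(17, -1.0);

double x17 = indexed.denseVector()[17];  // O(1) random access by index
int nnz = indexed.getNumElements();      // count of stored nonzeros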

LU Factorization

The basis matrix $B$ changes by one column per simplex iteration. Rather than refactoring from scratch, CoinUtils supports efficient updates:

Class                    Purpose
CoinFactorization        Sparse LU with Markowitz pivoting
CoinOslFactorization     OSL-derived factorization
CoinDenseFactorization   Dense fallback for small bases

Key algorithms: Markowitz pivoting for choosing pivots and the Forrest-Tomlin update for handling basis changes (both described under Key Algorithms below).

File I/O

Read and write standard optimization file formats:

Class           Format
CoinMpsIO       MPS (industry standard)
CoinLpIO        LP (CPLEX-style)
CoinFileInput   Compressed files (gzip)
CoinModel       In-memory problem builder

Presolve

Simplify problems before solving to reduce size and improve numerics:

Class                 Operation
CoinPresolveMatrix    Presolve state management
CoinPostsolveMatrix   Restore original solution
doubleton_action      Eliminate doubleton rows
dupcol_action         Remove duplicate columns

Common reductions: removing fixed variables and empty rows or columns, substituting out singleton and doubleton rows, dropping duplicate or dominated columns, and tightening bounds.
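
As an illustration of the doubleton reduction (the kind of substitution doubleton_action performs), take an equality row with exactly two nonzeros:

$$2x + 3y = 6 \quad\Rightarrow\quad y = 2 - \tfrac{2}{3}x$$

Substituting this expression for $y$ into every other row and the objective removes one row and one column; postsolve later recovers $y$ from the stored substitution.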

Graph Algorithms

For MIP cut generation, CoinUtils includes conflict graph structures:

Class               Purpose
CoinConflictGraph   Variable conflict relationships
CoinBronKerbosch    Maximal clique enumeration
CoinCliqueList      Clique storage for cuts
CoinShortestPath    Shortest path algorithms
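
The clique enumeration follows the classic Bron-Kerbosch recursion. Below is a minimal, generic version of that algorithm over adjacency sets, shown only to illustrate the idea; it is not the CoinBronKerbosch interface (production implementations add pivoting and vertex orderings), and the neighbors container is a stand-in.

#include <iostream>
#include <set>
#include <vector>

using NodeSet = std::set<int>;

// neighbors[v] = vertices in conflict with v (illustrative adjacency sets)
std::vector<NodeSet> neighbors;

// Enumerate maximal cliques: R = current clique, P = candidates, X = excluded.
void bronKerbosch(NodeSet R, NodeSet P, NodeSet X) {
    if (P.empty() && X.empty()) {
        for (int v : R) std::cout << v << ' ';  // R is a maximal clique
        std::cout << '\n';
        return;
    }
    NodeSet candidates = P;                     // iterate over a snapshot
    for (int v : candidates) {
        NodeSet newR = R; newR.insert(v);
        NodeSet newP, newX;
        for (int u : P) if (neighbors[v].count(u)) newP.insert(u);
        for (int u : X) if (neighbors[v].count(u)) newX.insert(u);
        bronKerbosch(newR, newP, newX);
        P.erase(v);                             // move v from candidates...
        X.insert(v);                            // ...to the excluded set
    }
}

In a conflict graph, edges mark pairs of binary variables that cannot both be 1, so each maximal clique yields a clique cut.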

Search Trees

For branch-and-bound tree management:

Class                        Purpose
CoinSearchTree               Generic search tree
CoinSearchTreeCompareBest    Best-first selection
CoinSearchTreeCompareDepth   Depth-first selection
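
As a generic illustration of what these comparators choose (not the CoinSearchTree API itself), a best-first strategy keeps open nodes in a priority queue ordered by bound; the Node record and its fields below are hypothetical.

#include <queue>
#include <vector>

struct Node {
    double bound;  // objective bound of the subproblem
    int depth;     // depth in the branch-and-bound tree
};

// Best-first: always expand the node with the most promising (smallest,
// for minimization) bound.
struct CompareBest {
    bool operator()(const Node& a, const Node& b) const {
        return a.bound > b.bound;  // min-heap on bound
    }
};

std::priority_queue<Node, std::vector<Node>, CompareBest> openNodes;

A depth-first comparator would order on depth instead, which keeps the open-node list small and finds incumbents quickly at the cost of weaker global bounds.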

Architecture

CoinUtils has no external dependencies — it's pure C++ designed for portability:

CoinPackedMatrix / CoinIndexedVector  <- Sparse storage
        |
CoinFactorization                      <- LU factorization
        |
CoinMpsIO / CoinLpIO                   <- File I/O
        |
CoinPresolve*                          <- Problem reduction

Higher-level solvers (Clp, Cbc) build on these primitives.


Usage Example

#include "CoinPackedMatrix.hpp"
#include "CoinMpsIO.hpp"

// Read an MPS file (readMps returns the number of errors; 0 means success)
CoinMpsIO reader;
if (reader.readMps("problem.mps") != 0) {
    // handle read/parse errors here
}

// Access the constraint matrix
const CoinPackedMatrix* matrix = reader.getMatrixByCol();

// Get problem dimensions
int numRows = reader.getNumRows();
int numCols = reader.getNumCols();

// Access column data: entries [start, end) belong to column j
for (int j = 0; j < numCols; j++) {
    const CoinBigIndex start = matrix->getVectorFirst(j);
    const CoinBigIndex end = matrix->getVectorLast(j);      // one past the last entry
    const int* indices = matrix->getIndices() + start;      // row indices of the nonzeros
    const double* elements = matrix->getElements() + start; // coefficient values
    // Process the (end - start) nonzeros of column j...
}

Key Algorithms

Markowitz Pivoting

When factoring $PA = LU$, choose pivots to minimize fill-in:

$$\text{score}(a_{ij}) = (r_i - 1)(c_j - 1)$$

where $r_i$ is the number of nonzeros in row $i$ and $c_j$ the number of nonzeros in column $j$. A lower score means the pivot can create less fill-in.

See CoinFactorization for implementation.
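
A minimal sketch of the selection rule itself, not the CoinFactorization internals (real implementations also apply a numerical stability test before accepting a pivot); the Candidate record is hypothetical.

#include <cmath>
#include <cstddef>
#include <vector>

// One candidate pivot: a structural nonzero a_ij of the active submatrix.
struct Candidate { int row; int col; double value; };

// rowCount[i] / colCount[j] = nonzeros currently in row i / column j.
int bestMarkowitzPivot(const std::vector<Candidate>& candidates,
                       const std::vector<int>& rowCount,
                       const std::vector<int>& colCount,
                       double absTol = 1.0e-10) {
    int best = -1;
    long bestScore = -1;
    for (std::size_t k = 0; k < candidates.size(); ++k) {
        const Candidate& c = candidates[k];
        if (std::abs(c.value) < absTol) continue;  // reject tiny pivots
        long score = static_cast<long>(rowCount[c.row] - 1) *
                     static_cast<long>(colCount[c.col] - 1);
        if (best < 0 || score < bestScore) {
            best = static_cast<int>(k);
            bestScore = score;
        }
    }
    return best;  // index of the chosen candidate, or -1 if none qualifies
}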

Forrest-Tomlin Update

After a basis change (column swap), update $L$ and $U$ without full refactorization:

  1. Form spike from new column
  2. Push spike through $L$ and $U$
  3. Restore triangularity with row/column operations

This makes a basis update roughly $O(m)$ work instead of the $O(m^3)$ cost of refactorizing from scratch; solvers still refactorize periodically to limit accumulated fill and roundoff.
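
The identity behind this: replacing column $p$ of the basis $B$ with the entering column $a_q$ gives

$$B' = B + (a_q - B e_p)\, e_p^{\top},$$

a rank-one change, which is why pushing a single spike through the existing factors is enough to represent $B'$ until the next periodic refactorization.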

Compressed Sparse Storage (CSC)

Store a sparse $m \times n$ matrix $A$ as three arrays: the nonzero values (length $\text{nnz}$), their row indices (length $\text{nnz}$), and the start offset of each column (length $n + 1$).

Memory: $O(\text{nnz} + n)$ instead of $O(mn)$ for dense.
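
A small worked example. It assumes the raw-array CoinPackedMatrix constructor (column-ordered flag, minor and major dimensions, element, index, start, and length arrays); check the exact signature in CoinPackedMatrix.hpp before relying on it.

#include "CoinPackedMatrix.hpp"

// 3 x 3 matrix with 5 nonzeros, stored column by column:
//     [ 1  0  4 ]
// A = [ 0  3  0 ]
//     [ 2  0  5 ]
const double       elements[]   = {1.0, 2.0, 3.0, 4.0, 5.0};
const int          rowIndices[] = {0,   2,   1,   0,   2  };
const CoinBigIndex colStarts[]  = {0, 2, 3, 5};  // length n + 1
const int          lengths[]    = {2, 1, 2};     // nonzeros per column

// colordered = true, minor dim = 3 rows, major dim = 3 columns, nnz = 5
CoinPackedMatrix A(true, 3, 3, 5, elements, rowIndices, colStarts, lengths);

Here 5 values, 5 row indices, and 4 column offsets replace the 9 entries a dense copy would need; the gap widens quickly as the matrix grows.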


Performance Tips

  1. Pre-allocate: Use reserve() before building matrices (see the sketch after this list)
  2. Column access: Prefer column-major operations
  3. Update vs. refactor: Balance stability vs. speed
  4. Indexed vectors: Use for hyper-sparse operations
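
For tip 1, a sketch of incremental construction. It assumes CoinPackedMatrix::setDimensions, reserve(maxMajorDim, maxSize), and the array form of appendCol behave as shown; verify the signatures against the header.

#include "CoinPackedMatrix.hpp"

// Build a column-major matrix incrementally without repeated reallocation.
CoinPackedMatrix m;           // default-constructed, column ordered
m.setDimensions(1000, 0);     // 1000 rows; columns appended below
m.reserve(5000, 60000);       // room for 5000 columns and 60000 nonzeros

for (int j = 0; j < 5000; ++j) {
    int    rows[]   = {j % 1000, (j + 1) % 1000};  // illustrative pattern
    double coeffs[] = {1.0, -1.0};
    m.appendCol(2, rows, coeffs);                  // append one sparse column
}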

Classes