Abstract
Let $A \in \mathbb{R}^{m \times n}$ denote an arbitrary matrix. If $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^m$ are vectors such that $\omega = y^T A x \ne 0$, then the matrix $B := A - \omega^{-1} A x y^T A$ has rank exactly one less than the rank of $A$. This Wedderburn rank-one reduction formula is easy to prove, yet the idea is so powerful that perhaps all matrix factorizations can be derived from it. The formula also appears in places such as the positive definite secant updates BFGS and DFP, as well as the ABS methods. By repeatedly applying the formula to reduce ranks, a biconjugation process analogous to the Gram–Schmidt process with oblique projections can be developed. This process provides a mechanism for constructing factorizations such as $\mathrm{LDM}^T$, QR, and SVD under the common framework of a general biconjugate decomposition $V^T A U = \Omega$, where $\Omega$ is diagonal and nonsingular. Two characterizations of biconjugation provide new insight into the Lanczos method and its breakdown. One characterization shows that the Lanczos algorithm (and the conjugate gradient method) is a special case of the rank-one process; in fact, these processes can be identified with the class of biconjugate direction methods, pushing their history back by about twenty years.
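The rank-reduction property is easy to verify numerically. Below is a minimal NumPy sketch (the helper name `wedderburn_step` and the random test matrix are illustrative, not from the paper) that applies the formula repeatedly and checks that the rank falls by exactly one at each step:

```python
import numpy as np

rng = np.random.default_rng(0)

def wedderburn_step(A, x, y):
    """One Wedderburn reduction: B = A - omega^{-1} (A x)(y^T A)."""
    omega = y @ A @ x
    if omega == 0:
        # Mirrors the breakdown condition discussed for the Lanczos method.
        raise ValueError("breakdown: omega = y^T A x must be nonzero")
    return A - np.outer(A @ x, y @ A) / omega

# Build a 6 x 5 matrix of rank 4 and reduce it to zero step by step.
A = rng.standard_normal((6, 4)) @ rng.standard_normal((4, 5))
B = A.copy()
for _ in range(4):
    # Random directions give omega != 0 with probability one.
    x = rng.standard_normal(B.shape[1])
    y = rng.standard_normal(B.shape[0])
    B = wedderburn_step(B, x, y)
    print("rank:", np.linalg.matrix_rank(B, tol=1e-8))
# prints 3, 2, 1, 0: each reduction lowers the rank by exactly one
```

Collecting the direction vectors $x_k$, $y_k$ chosen at each step (suitably biconjugated so that $y_i^T A x_j = 0$ for $i \ne j$) is what yields the decomposition $V^T A U = \Omega$ described in the abstract.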
Publication Info
- Year: 1995
- Type: article
- Volume: 37
- Issue: 4
- Pages: 512-530
- Citations: 95
- Access: Closed
Identifiers
- DOI: 10.1137/1037124