The Differentiation of Pseudo-Inverses and Nonlinear Least Squares Problems Whose Variables Separate

Abstract
For given data $(t_i, y_i)$, $i = 1, \cdots, m$, we consider the least squares fit of nonlinear models of the form \[ \eta(\mathbf{a}, \boldsymbol{\alpha}; t) = \sum_{j=1}^{n} a_j \varphi_j(\boldsymbol{\alpha}; t), \qquad \mathbf{a} \in \mathcal{R}^n, \qquad \boldsymbol{\alpha} \in \mathcal{R}^k. \] For this purpose we study the minimization of the nonlinear functional \[ r(\mathbf{a}, \boldsymbol{\alpha}) = \sum_{i=1}^{m} \left( y_i - \eta(\mathbf{a}, \boldsymbol{\alpha}; t_i) \right)^2. \] It is shown that by defining the matrix $\{\mathbf{\Phi}(\boldsymbol{\alpha})\}_{i,j} = \varphi_j(\boldsymbol{\alpha}; t_i)$ and the modified functional $r_2(\boldsymbol{\alpha}) = \| \mathbf{y} - \mathbf{\Phi}(\boldsymbol{\alpha}) \mathbf{\Phi}^+(\boldsymbol{\alpha}) \mathbf{y} \|_2^2$, it is possible to optimize first with respect to the parameters $\boldsymbol{\alpha}$ and then to obtain, a posteriori, the optimal parameters $\hat{\mathbf{a}}$. The matrix $\mathbf{\Phi}^+(\boldsymbol{\alpha})$ is the Moore–Penrose generalized inverse of $\mathbf{\Phi}(\boldsymbol{\alpha})$. We develop formulas for the Fréchet derivatives of the orthogonal projectors associated with $\mathbf{\Phi}(\boldsymbol{\alpha})$ and of $\mathbf{\Phi}^+(\boldsymbol{\alpha})$, under the hypothesis that $\mathbf{\Phi}(\boldsymbol{\alpha})$ is of constant (though not necessarily full) rank. Detailed algorithms are presented which make extensive use of well-known, reliable linear least squares techniques, and numerical results and comparisons are given. These results generalize those of H. D. Scolnik [20] and of Guttman, Pereyra and Scolnik [9].
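The separation the abstract describes is easy to sketch numerically. The fragment below is a minimal illustration, not the paper's algorithm: it assumes a hypothetical exponential basis $\varphi_j(\boldsymbol{\alpha}; t) = e^{-\alpha_j t}$ (with $n = k$, chosen only for concreteness) and replaces the paper's analytic Fréchet-derivative formulas with a derivative-free optimizer; `numpy.linalg.lstsq` supplies the minimum-norm solution $\mathbf{\Phi}^+ \mathbf{y}$, so the residual of the inner linear fit is exactly $\mathbf{y} - \mathbf{\Phi}\mathbf{\Phi}^+\mathbf{y}$.

```python
# Minimal sketch of the variable projection idea, under the assumptions
# stated above (hypothetical exponential basis, derivative-free outer
# optimizer instead of the paper's analytic derivative formulas).
import numpy as np
from scipy.optimize import minimize

def phi(alpha, t):
    # Columns are phi_j(alpha; t_i); here a hypothetical exponential model.
    return np.exp(-np.outer(t, alpha))  # shape (m, k)

def r2(alpha, t, y):
    # Reduced functional r2(alpha) = || y - Phi(alpha) Phi^+(alpha) y ||_2^2.
    # lstsq returns the minimum-norm least squares solution a = Phi^+ y.
    Phi = phi(alpha, t)
    a, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.sum((y - Phi @ a) ** 2)

# Synthetic data for the illustration.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 40)
y = (2.0 * np.exp(-1.0 * t) + 0.5 * np.exp(-3.0 * t)
     + 0.01 * rng.standard_normal(t.size))

# Step 1: minimize r2 over the nonlinear parameters alpha alone.
res = minimize(r2, x0=np.array([0.5, 2.0]), args=(t, y), method="Nelder-Mead")
alpha_hat = res.x

# Step 2: recover the linear parameters a posteriori, a_hat = Phi^+(alpha_hat) y.
a_hat, *_ = np.linalg.lstsq(phi(alpha_hat, t), y, rcond=None)
print("alpha:", alpha_hat, "a:", a_hat)
```

Minimizing over $\boldsymbol{\alpha}$ alone reduces the search from $n + k$ to $k$ dimensions, which is the practical payoff of the variable projection formulation.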
Publication Info
- Year: 1973
- Type: Article
- Volume: 10
- Issue: 2
- Pages: 413–432
- Citations: 1394
- Access: Closed
Identifiers
- DOI: 10.1137/0710036