Abstract

We previously showed that regularization principles lead to approximation schemes that are equivalent to networks with one layer of hidden units, called regularization networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well-known radial basis function approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends radial basis functions (RBF) to hyper basis functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, some forms of projection pursuit regression, and several types of neural networks. We propose the term generalized regularization networks for this broad class of approximation schemes that follow from an extension of regularization. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In summary, different multilayer networks with one hidden layer, which we collectively call generalized regularization networks, correspond to different classes of priors and associated smoothness functionals in a classical regularization principle. Three broad classes are (1) radial basis functions that can be generalized to hyper basis functions, (2) some tensor product splines, and (3) additive splines that can be generalized to ridge approximation schemes, hinge functions, and several perceptron-like neural networks with one hidden layer.

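As a concrete illustration of the radial basis function case discussed in the abstract, below is a minimal sketch, not taken from the paper, of a regularization network with a Gaussian basis function: the coefficients of the expansion f(x) = sum_i c_i G(||x - x_i||) are obtained by solving the regularized linear system (G + lambda*I) c = y. The choice of Gaussian kernel, the parameter values, and all names below are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    """Gram matrix of the Gaussian basis function G(r) = exp(-r^2 / (2 sigma^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_rbf_network(X, y, lam=1e-3, sigma=1.0):
    """Solve (G + lam*I) c = y for the expansion coefficients c."""
    G = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(G + lam * np.eye(len(X)), y)

def predict(X_new, X_centers, c, sigma=1.0):
    """Evaluate f(x) = sum_i c_i * G(||x - x_i||) at new points."""
    return gaussian_kernel(X_new, X_centers, sigma) @ c

# Usage: fit noisy samples of a 1-D function and evaluate the network.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(30, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30)
c = fit_rbf_network(X, y)
y_hat = predict(X, X, c)
```

In the regularization framework, the choice of smoothness functional (stabilizer) determines the basis function G; the Gaussian used here is only one member of the family, and other stabilizers yield, for example, additive or tensor product splines.
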
Keywords

Regularization (mathematics); Mathematics; Basis function; Radial basis function; Applied mathematics; Artificial neural network; Regularization perspectives on support vector machines; Inverse problem; Algorithm; Computer science; Artificial intelligence; Mathematical analysis

Publication Info

Year: 1995
Type: Article
Volume: 7
Issue: 2
Pages: 219-269
Citations: 1344 (OpenAlex)
Access: Closed

Cite This

Federico Girosi, Michael Jones, Tomaso Poggio (1995). Regularization Theory and Neural Networks Architectures. Neural Computation, 7(2), 219-269. https://doi.org/10.1162/neco.1995.7.2.219

Identifiers

DOI: 10.1162/neco.1995.7.2.219