Abstract

For neural networks with a wide class of weight priors, it can be shown that in the limit of an infinite number of hidden units, the prior over functions tends to a Gaussian process. In this article, analytic forms are derived for the covariance function of the Gaussian processes corresponding to networks with sigmoidal and Gaussian hidden units. This allows predictions to be made efficiently using networks with an infinite number of hidden units and shows, somewhat paradoxically, that it may be easier to carry out Bayesian prediction with infinite networks than with finite ones.
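As an illustration of the idea, the sketch below implements Gaussian-process regression with the arcsine covariance function that the paper derives for networks of erf (sigmoidal) hidden units, here specialized to an isotropic weight prior. The parameter name `sigma2` (the assumed prior variance of input-to-hidden weights and biases) and the helper `gp_predict` are illustrative choices, not the paper's notation:

```python
import numpy as np

def erf_network_kernel(x, xp, sigma2=1.0):
    """Covariance of an infinite network of erf hidden units,
    specialized to an isotropic weight prior of variance sigma2.
    Inputs are augmented with a constant 1 to absorb the bias."""
    xt = np.concatenate(([1.0], np.atleast_1d(x)))
    xpt = np.concatenate(([1.0], np.atleast_1d(xp)))
    num = 2.0 * sigma2 * (xt @ xpt)
    den = np.sqrt((1.0 + 2.0 * sigma2 * (xt @ xt))
                  * (1.0 + 2.0 * sigma2 * (xpt @ xpt)))
    return (2.0 / np.pi) * np.arcsin(num / den)

def gp_predict(X, y, Xstar, kernel, noise=1e-2):
    """Standard GP regression: posterior mean at test inputs Xstar,
    given noisy training targets y at inputs X."""
    K = np.array([[kernel(a, b) for b in X] for a in X])
    Ks = np.array([[kernel(s, b) for b in X] for s in Xstar])
    alpha = np.linalg.solve(K + noise * np.eye(len(X)), y)
    return Ks @ alpha
```

Because the kernel is available in closed form, prediction with the infinite network reduces to a single linear solve against the training data, with no hidden-unit weights ever sampled — the efficiency the abstract refers to.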

Keywords

Artificial neural network, Gaussian process, Gaussian, Limit (mathematics), Sigmoid function, Prior probability, Computation, Mathematics, Covariance, Class (philosophy), Computer science, Bayesian probability, Algorithm, Applied mathematics, Artificial intelligence, Mathematical analysis, Statistics, Physics

Publication Info

Year: 1998
Type: Article
Volume: 10
Issue: 5
Pages: 1203–1216
Citations: 150
Access: Closed

Citation Metrics

Citations: 150 (OpenAlex)

Cite This

Christopher K. I. Williams (1998). Computation with Infinite Neural Networks. Neural Computation, 10(5), 1203–1216. https://doi.org/10.1162/089976698300017412

Identifiers

DOI: 10.1162/089976698300017412