Abstract

We introduce a novel fast algorithm for independent component analysis, which can be used for blind source separation and feature extraction. We show how a neural network learning rule can be transformed into a fixed-point iteration, which provides an algorithm that is very simple, does not depend on any user-defined parameters, and converges fast to the most accurate solution allowed by the data. The algorithm finds, one at a time, all nongaussian independent components, regardless of their probability distributions. The computations can be performed in either batch mode or a semiadaptive manner. The convergence of the algorithm is rigorously proved, and the convergence speed is shown to be cubic. Some comparisons to gradient-based algorithms are made, showing that the new algorithm is usually 10 to 100 times faster, sometimes giving the solution in just a few iterations.
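The abstract's fixed-point idea can be illustrated with a short sketch. For whitened data z and the kurtosis contrast, the paper's one-unit iteration is w ← E{z (wᵀz)³} − 3w followed by normalization. The snippet below is a minimal NumPy illustration of that iteration on a synthetic two-source mixture; the mixing matrix, source distributions, and convergence tolerance are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent nongaussian (uniform, unit-variance) sources, linearly mixed.
n = 5000
S = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(2, n))
A = np.array([[1.0, 0.6], [0.4, 1.0]])  # arbitrary example mixing matrix
X = A @ S

# Whiten the mixtures: Z = V X with E{Z Z^T} = I.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
V = E @ np.diag(d ** -0.5) @ E.T
Z = V @ X

# Kurtosis-based fixed-point iteration for one component:
#   w <- E{z (w^T z)^3} - 3 w, then normalize w.
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(20):
    w_new = (Z * (w @ Z) ** 3).mean(axis=1) - 3 * w
    w_new /= np.linalg.norm(w_new)
    converged = abs(abs(w_new @ w) - 1.0) < 1e-9  # fixed point up to sign
    w = w_new
    if converged:
        break

# w^T z recovers one source up to sign and scale.
y = w @ Z
```

Consistent with the cubic convergence claimed in the abstract, the loop typically terminates after only a handful of iterations; further components would be found the same way after deflating (orthogonalizing against) the vectors already extracted.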

Keywords

Independent component analysis, blind signal separation, fixed-point algorithms, artificial neural networks, convergence, feature extraction

Related Publications

Independent Component Analysis

A tutorial-style introduction to a class of methods for extracting independent signals from a mixture of signals originating from different physical sources; includes MATLAB com...

2004 · The MIT Press eBooks · 425 citations

Publication Info

Year: 1997
Type: Article
Volume: 9
Issue: 7
Pages: 1483-1492
Citations: 3376
Access: Closed

Citation Metrics

3376 citations (source: OpenAlex)

Cite This

Aapo Hyvärinen, Erkki Oja (1997). A Fast Fixed-Point Algorithm for Independent Component Analysis. Neural Computation, 9(7), 1483-1492. https://doi.org/10.1162/neco.1997.9.7.1483

Identifiers

DOI
10.1162/neco.1997.9.7.1483