Approximate entropy (ApEn) as a complexity measure

1995 · Chaos: An Interdisciplinary Journal of Nonlinear Science · 1,195 citations

Abstract

Approximate entropy (ApEn) is a recently developed statistic quantifying regularity and complexity, which appears to have potential application to a wide variety of relatively short (greater than 100 points) and noisy time-series data. The development of ApEn was motivated by data length constraints commonly encountered, e.g., in heart rate, EEG, and endocrine hormone secretion data sets. We describe ApEn implementation and interpretation, indicating its utility to distinguish correlated stochastic processes and composite deterministic/stochastic models. We discuss the key technical idea that motivates ApEn, that one need not fully reconstruct an attractor to discriminate in a statistically valid manner: marginal probability distributions often suffice for this purpose. Finally, we discuss why algorithms to compute, e.g., correlation dimension and the Kolmogorov–Sinai (KS) entropy often work well for true dynamical systems, yet sometimes operationally confound for general models, with the aid of visual representations of reconstructed dynamics for two contrasting processes.
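To make the statistic the abstract describes concrete, here is a minimal Python sketch of ApEn following the standard definition, ApEn(m, r) = Φ^m(r) − Φ^(m+1)(r), using the Chebyshev (maximum-coordinate) distance and counting self-matches. This is an illustration, not the paper's original code; the function name and default parameters are ours.

```python
import math

def apen(u, m=2, r=0.2):
    """Approximate entropy of a time series u.

    ApEn(m, r) = Phi^m(r) - Phi^(m+1)(r), where Phi^m(r) is the
    average over i of log C_i^m(r), and C_i^m(r) is the fraction of
    template vectors within Chebyshev distance r of template i.
    """
    n = len(u)

    def phi(m):
        # All overlapping m-point template vectors u[i], ..., u[i+m-1].
        templates = [u[i:i + m] for i in range(n - m + 1)]
        total = 0.0
        for xi in templates:
            # Count templates within distance r of xi; the self-match
            # (xi itself) is included, so the count is never zero.
            c = sum(1 for xj in templates
                    if max(abs(a - b) for a, b in zip(xi, xj)) <= r)
            total += math.log(c / len(templates))
        return total / len(templates)

    return phi(m) - phi(m + 1)
```

A perfectly regular series scores near zero, while an irregular one scores higher; in the literature r is commonly taken as a fixed fraction (roughly 0.1 to 0.25) of the series' standard deviation, with m = 1 or 2, which suits the short (greater than 100 points) records the abstract targets.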

Keywords

Approximate entropy, Attractor, Statistic, Entropy (arrow of time), Computer science, Entropy rate, Mathematics, Artificial intelligence, Statistics, Algorithm, Pattern recognition (psychology), Binary entropy function, Principle of maximum entropy


Publication Info

Year: 1995
Type: article
Volume: 5
Issue: 1
Pages: 110-117
Citations: 1,195
Access: Closed

Citation Metrics

1,195 (OpenAlex)

Cite This

Steve Pincus (1995). Approximate entropy (ApEn) as a complexity measure. Chaos: An Interdisciplinary Journal of Nonlinear Science, 5(1), 110-117. https://doi.org/10.1063/1.166092

Identifiers

DOI: 10.1063/1.166092