Abstract

Large Language Models (LLMs) based on the Transformer architecture have achieved remarkable success, yet their core processing mechanisms remain largely static after training. While powerful, this static nature limits their ability to dynamically adapt their processing strategy to nuanced contextual cues, task demands, or desired operational modes (e.g., shifting between exploration and exploitation). We propose Neuromodulatory Control Networks (NCNs), a novel architectural modification inspired by the neuromodulatory systems of the vertebrate brain (e.g., those utilizing dopamine, acetylcholine, and norepinephrine). NCNs are small, parallel networks that receive contextual input summarizing the global state, task information, or external control signals, and compute dynamic "modulatory signals". These signals are distributed to the main LLM as layer-specific control vectors that influence its computational properties during a forward pass, analogous to how neuromodulators alter neuronal gain, plasticity, and network states across cortical depths. Rather than merely routing information, NCNs aim to change how information is processed throughout the base model by modulating key components such as attention mechanisms (e.g., via precision scaling), layer gains, and activation functions. Crucially, the architecture allows the model to implicitly learn to self-regulate these parameters via backpropagation, effectively becoming its own "tuning expert." We further introduce formal stability mechanisms, including homeostatic regularization, to prevent control-manifold collapse. This paper introduces the NCN architecture, details its components and implicit learning mechanism, discusses its conceptual advantages and potential failure modes (such as contextual stereotyping), and provides an open-source PyTorch implementation to facilitate community exploration and future empirical validation.
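To make the mechanism concrete, the sketch below shows one plausible reading of the abstract in PyTorch (the paper's stated implementation language). It is a minimal illustration under stated assumptions, not the authors' released code: the module and function names (`NeuromodulatoryControlNetwork`, `modulated_attention`, `homeostatic_regularizer`) and all hyperparameters are hypothetical, and activation-function modulation is omitted for brevity.

```python
# Hypothetical sketch of the NCN idea from the abstract; names and shapes
# are assumptions, not the authors' released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NeuromodulatoryControlNetwork(nn.Module):
    """Small parallel network mapping a context summary to per-layer
    modulatory signals: an attention precision (inverse temperature)
    and a multiplicative layer gain."""

    def __init__(self, context_dim: int, n_layers: int, hidden_dim: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(context_dim, hidden_dim), nn.Tanh())
        # One precision scalar and one gain scalar per base-model layer.
        self.precision_head = nn.Linear(hidden_dim, n_layers)
        self.gain_head = nn.Linear(hidden_dim, n_layers)

    def forward(self, context: torch.Tensor):
        h = self.body(context)  # (batch, hidden_dim)
        # Softplus keeps both signals positive; +eps avoids degenerate zeros.
        precisions = F.softplus(self.precision_head(h)) + 1e-3
        gains = F.softplus(self.gain_head(h)) + 1e-3
        return precisions, gains  # each (batch, n_layers)


def modulated_attention(q, k, v, precision):
    """Scaled dot-product attention with an extra NCN-supplied precision
    factor on the logits (one reading of 'precision scaling')."""
    d_k = q.size(-1)
    logits = q @ k.transpose(-2, -1) / d_k ** 0.5
    # Higher precision -> sharper, more exploitative attention;
    # lower precision -> flatter, more exploratory attention.
    weights = torch.softmax(logits * precision, dim=-1)
    return weights @ v


def homeostatic_regularizer(precisions, gains, target=1.0, strength=1e-2):
    """Hedged guess at the 'homeostatic regularization' in the abstract:
    penalize drift from a neutral set point to discourage collapse of
    the control manifold."""
    return strength * ((precisions - target).pow(2).mean()
                       + (gains - target).pow(2).mean())


# Example: modulate one layer's attention with the NCN's layer-0 signals.
ncn = NeuromodulatoryControlNetwork(context_dim=16, n_layers=12)
ctx = torch.randn(2, 16)                      # context summary, (batch, context_dim)
precisions, gains = ncn(ctx)                  # each (2, 12)
q = k = v = torch.randn(2, 4, 8, 32)          # (batch, heads, seq, head_dim)
p0 = precisions[:, 0].view(2, 1, 1, 1)        # broadcast over heads and positions
out = gains[:, 0].view(2, 1, 1, 1) * modulated_attention(q, k, v, p0)
```

Because the control signals flow through differentiable heads, the base model's training loss backpropagates into the NCN, which is one way to realize the abstract's claim that the model implicitly learns to self-regulate these parameters; the quadratic pull toward a neutral set point is likewise only one simple candidate for the homeostatic term.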

Keywords

Normalization (sociology), Computer science, Sociology

Publication Info

Year: 2025
Type: preprint
Citations: 1325 (OpenAlex)
Access: Closed

Cite This

Jimmy Ba, Jamie Kiros, Geoffrey E. Hinton (2025). Neuromodulatory Control Networks (NCNs): A Biologically Inspired Architecture for Dynamic LLM Processing. Zenodo (CERN European Organization for Nuclear Research). https://doi.org/10.5281/zenodo.17851047

Identifiers

DOI: 10.5281/zenodo.17851047