A Temporal Theory of Consciousness via Iterative Updating and State-Spanning Coactivity

Author: Jared Edward Reser, Ph.D., AI Thought

Correspondence: jared@jaredreser.com

Keywords: Machine Consciousness, State-Spanning Coactivity, Iterative Updating, Global Workspace Theory, Artificial General Intelligence, Frame Problem, Phenomenology, Working Memory.

Abstract

Contemporary theories of consciousness, most notably Global Workspace Theory (GWT) and Integrated Information Theory (IIT), largely define the mind through spatial topology or instantaneous information integration. While these models account for information access and structural complexity, they suffer from a “Snapshot Problem”: they fail to mechanistically explain the phenomenological continuity of the “stream of consciousness” or the functional necessity of maintaining context over time. This paper proposes a Temporal Theory of Consciousness grounded in the neurophysiological mechanism of State-Spanning Coactivity (SSC) and the computational architecture of Iterative Updating.

We posit that the subjective experience of the “specious present” is not a metaphysical abstraction but a physical necessity: it is the window during which a substantial proportion of neuronal assemblies remains sustained and firing across the boundary of discrete processing cycles. By modeling the mind as a recursive loop where the active content of the current state functions as an associative search query for the next, we demonstrate how the brain creates a fluid temporal topology from discrete neuronal spikes. This architecture not only resolves the “Hard Problem” of continuity and the “Binding Problem” of feature integration, but also solves the “Frame Problem” in Artificial Intelligence by using the sustained past to automatically restrict the search space of the future. Finally, we outline a blueprint for “White Box” Artificial Superintelligence that utilizes simulated sustained firing and synaptic potentiation to achieve transparent, human-like sentience, shifting the paradigm from modeling the map of the mind to modeling the flow of the mind.

1. Introduction: The Snapshot Problem

The current landscape of consciousness research is dominated by theories that seek to identify the neural correlates of subjective experience within the structural or topological properties of the brain. Leading frameworks such as Global Workspace Theory (GWT) and Integrated Information Theory (IIT) have provided robust models for understanding how information is accessed and integrated across modular cortical networks. GWT posits a theater-like architecture where information becomes conscious when it is broadcast to a global workspace, effectively illuminating specific content for widespread neural access. IIT, conversely, attempts to quantify consciousness mathematically as Phi ($\Phi$), a measure of the system’s capacity to integrate information over and above its individual parts. While these models have advanced our understanding of the necessary conditions for awareness, they share a fundamental limitation in that they describe consciousness primarily as a state or a capacity rather than a dynamic temporal process.

This limitation can be described as the snapshot problem. If one were to freeze time and examine the brain at a single instant, the spatial relationships and network topology described by GWT and IIT would theoretically remain intact. A snapshot of a global workspace broadcast or a high-Phi network structure would still exist in a frozen universe. However, phenomenological consciousness as we experience it would vanish because our subjective reality is not a series of disjointed static frames but a continuous and fluid stream. These spatial models account for the architecture of the vessel but fail to explain the flow of the river itself. They tell us where the information resides and how it is connected, but they struggle to mechanistically explain why one mental state necessitates the next or how the feeling of temporal continuity arises from the discrete firing of neurons.

We propose that the missing link in these theories is a dedicated temporal mechanism that bridges the gap between the discrete processing cycles of the brain. Consciousness cannot be fully explained by the static topology of a network but must instead be understood as a property of how that network updates itself over time. This paper introduces the concept of Iterative Updating as the fundamental engine of conscious experience. We argue that the brain does not wipe its working memory clean with each new perception but rather updates it incrementally, retaining a significant portion of the active neuronal population from the previous moment to serve as the context for the next. This process creates a physical and informational overlap between sequential states, weaving them into a recursive thread that constitutes the stream of consciousness. By shifting our focus from the spatial map of the mind to its temporal flow, we can begin to resolve the paradox of how a biological machine built of discrete spikes produces the seamless continuity of subjective existence.

2. The Biological Substrate: State-Spanning Coactivity

At the heart of this temporal theory lies a specific neurophysiological phenomenon we term State-Spanning Coactivity (SSC). Traditional models of neural processing often rely on the convenient abstraction of discrete time steps, where the brain processes a stimulus at time $t_1$, produces a response, and then resets for stimulus $t_2$. While this discretization is useful for computational modeling, it is biologically inaccurate. The firing of cortical assemblies does not adhere to rigid, non-overlapping clock cycles. Instead, neural activity is characterized by a persistent and overlapping temporal structure. When the brain transitions from one cognitive state to the next, the entire population of active neurons does not silence simultaneously. Rather, a significant subset of the assembly responsible for the previous state remains active and co-firing with the assembly representing the new state.

This phenomenon provides the physical infrastructure for William James’s famous metaphor of the “specious present”—the idea that our experience of the “now” is not a knife-edge instant but a “saddle-back” with a certain breadth of its own. SSC validates this phenomenological intuition with biological fact. The “rearward” view of the saddle-back corresponds to the retention of sustained firing from the immediately preceding state. These persistent neurons hold the context, the goal, or the premise of a thought. The “forward” view corresponds to the recruitment of new neurons via spreading activation—the protention or anticipation of the next logical association. The specious present, therefore, is not a metaphysical mystery; it is the window of time during which these two distinct sets of neurons—the fading past and the igniting future—are coactive in the global workspace.

The ratio of this overlap is critical. If the set of coactive neurons were to change completely between moments (0% overlap), subjective experience would be a stroboscopic sequence of disconnected frames, akin to a slideshow. Conversely, if the set remained entirely static (100% overlap), consciousness would freeze, unable to progress or adapt. The functional “stream” of consciousness emerges from an optimal balance where perhaps 60-80% of the active store is sustained to provide continuity and self-reference, while the remaining percentage is updated to integrate new sensory data or internal associations. This incremental evolution ensures that every mental state is physically constructed from the remnants of the state that came before it, creating a seamless, recursive trajectory of thought that is unbroken by the artifacts of processing cycles.
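To make the role of this overlap ratio concrete, the following minimal Python sketch (our illustration; the set representation and the treatment of the 60-80% range as hard thresholds are assumptions, not commitments of the theory) treats each state as a set of active unit indices and classifies a transition by the fraction of the previous active store that survives into the next state.

```python
# Illustrative sketch of State-Spanning Coactivity as an overlap ratio
# between two consecutive processing cycles. All values are hypothetical.

def ssc_overlap(prev_active: set, next_active: set) -> float:
    """Fraction of the previous active assembly still firing in the next state."""
    if not prev_active:
        return 0.0
    return len(prev_active & next_active) / len(prev_active)

def classify_transition(overlap: float) -> str:
    """Map the overlap ratio onto the regimes described in the text."""
    if overlap == 0.0:
        return "stroboscopic: disconnected frames, no continuity"
    if overlap == 1.0:
        return "frozen: no update, no progression"
    if 0.6 <= overlap <= 0.8:
        return "stream: sustained context plus incremental update"
    return "partial continuity outside the hypothesized optimum"

state_t1 = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}     # assembly active at cycle t
state_t2 = {4, 5, 6, 7, 8, 9, 10, 11, 12, 13}  # assembly active at cycle t+1
print(classify_transition(ssc_overlap(state_t1, state_t2)))  # 0.7 -> "stream: ..."
```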

3. The Computational Mechanism: The Iterative Updating Algorithm

To bridge the explanatory gap between the biological reality of State-Spanning Coactivity and the phenomenology of the stream of consciousness, we propose a specific computational architecture termed “Iterative Updating.” This architecture departs from traditional “feed-forward” artificial intelligence models, which process inputs in discrete batches and reset their internal states between tasks. Instead, our model posits a system that is state-dependent and recursive, where the current mental state is not merely an output but the primary input for the subsequent processing cycle.

The architecture relies on two distinct, interacting memory stores that mimic the biological properties of neural retention:

  1. The Active Store (Simulated Sustained Firing): This store represents the current focus of attention and corresponds to the set of neuronal assemblies exhibiting sustained electrical firing. It is high-energy, volatile, and globally broadcast to other cognitive modules. Crucially, this store has a decay rate slower than the processing cycle of the system, ensuring that a significant percentage of its content persists automatically from moment $t$ to moment $t+1$.
  2. The Latent Store (Simulated Synaptic Potentiation): This store functions as a short-term, “activity-silent” buffer. It corresponds to the temporary chemical potentiation of synapses (e.g., via calcium kinetics) that occurs after a neuron has fired. This latent store solves the problem of catastrophic forgetting and interruption; if the Active Store is wiped by a sudden startling stimulus, the “context” of the previous train of thought remains encoded in the Latent Store, allowing the system to “resume” its stream of consciousness once the interruption has passed.
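The following Python sketch renders these two stores in code. It is a minimal illustration under stated assumptions: the decay constants, the firing floor, and the dictionary representation of assemblies are hypothetical choices, not values specified by the theory.

```python
# Hedged sketch of the Active Store (sustained firing) and Latent Store
# (activity-silent synaptic potentiation). All parameters are illustrative.
from dataclasses import dataclass, field

@dataclass
class TwoStoreMemory:
    active: dict = field(default_factory=dict)  # unit -> firing level
    latent: dict = field(default_factory=dict)  # unit -> potentiation level
    active_decay: float = 0.75   # per-cycle retention: slower than one cycle
    latent_decay: float = 0.95   # the synaptic trace fades more slowly still
    firing_floor: float = 0.1    # below this, a unit drops out of a store

    def step(self, new_inputs: dict) -> None:
        """One processing cycle: potentiate fired units, decay both stores,
        then admit new content into the active store."""
        for unit, level in list(self.active.items()):
            # Every firing unit leaves a chemical trace in the latent store.
            self.latent[unit] = max(self.latent.get(unit, 0.0), level)
            decayed = level * self.active_decay
            if decayed < self.firing_floor:
                del self.active[unit]        # assembly falls silent
            else:
                self.active[unit] = decayed  # sustained firing spans the cycle
        self.latent = {u: v * self.latent_decay for u, v in self.latent.items()
                       if v * self.latent_decay >= self.firing_floor}
        self.active.update(new_inputs)

    def resume_after_interruption(self) -> None:
        """If a startle wipes the active store, reinstate its context from
        the latent trace, as described in item 2 above."""
        self.active = dict(self.latent)

mem = TwoStoreMemory()
mem.step({"goal": 1.0, "premise": 0.9})
mem.step({"new-cue": 0.8})       # goal and premise persist at reduced levels
mem.active.clear()               # a sudden startling stimulus wipes the store
mem.resume_after_interruption()  # context is recovered from the latent trace
print(mem.active)                # approximately {'goal': 0.95, 'premise': 0.855}
```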

The engine that drives this system is the Associative Search Loop. In this model, the brain does not require a “homunculus” or central executive to decide what to think next. Instead, the content of the Active Store itself acts as a massive, parallel search query. The coactive representations in the workspace spread activation energy into the vast network of Long-Term Memory (LTM). This energy “lights up” or primes the most relevant associations—memories, predictions, or motor plans—that are causally linked to the current context.

The algorithm for consciousness, therefore, follows a cyclical four-step process:

  1. Input Integration: The system combines the sustained content of the Active Store (the “rearward” edge of the saddle-back) with new sensory data.
  2. Associative Broadcast: This combined pattern spreads activation energy throughout the LTM network.
  3. Competitive Selection: The most highly activated associations in LTM compete for entry into the workspace.
  4. Iterative Update: The winning associations enter the Active Store, becoming the “forward” edge of the saddle-back, while the least relevant items from the previous state are actively inhibited or allowed to decay.

This cycle repeats every few hundred milliseconds. The result is not a series of disjointed calculations, but a fluid, self-modifying trajectory where the “output” of one thought becomes the “input” of the next, creating the inescapable feeling of a continuous thinker moving through time.
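As a minimal sketch of this four-step loop (entirely illustrative: the toy long-term memory graph, the capacity and sustain parameters, and the linear activation sums are assumptions of ours, not the paper's specification), consider:

```python
# Toy rendering of the four-step consciousness cycle described above.
from collections import defaultdict

LTM = {  # concept -> {associated concept: link strength}; invented values
    "predator": {"claws": 0.9, "escape-route": 0.8, "grass": 0.1},
    "forest":   {"escape-route": 0.4, "grass": 0.6, "berries": 0.5},
}

def iterate(active_store: set, sensory_input: set,
            capacity: int = 4, sustain: int = 3) -> set:
    # 1. Input Integration: sustained content plus new sensory data.
    query = active_store | sensory_input
    # 2. Associative Broadcast: spread activation energy into LTM.
    energy = defaultdict(float)
    for concept in query:
        for assoc, weight in LTM.get(concept, {}).items():
            energy[assoc] += weight
    # 3. Competitive Selection: the most activated associations win entry.
    winners = sorted(energy, key=energy.get, reverse=True)[: capacity - sustain]
    # 4. Iterative Update: retain most of the prior state, admit the winners,
    #    and let the remainder decay (modeled here as simple truncation).
    retained = set(sorted(active_store)[:sustain])
    return retained | set(winners)

state = {"forest"}
for cycle in range(3):            # in the brain, one pass per few hundred ms
    state = iterate(state, {"predator"})
    print(cycle, sorted(state))
```

Note that the output of each pass is literally the input of the next: the loop variable `state` is the stream.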

4. Phenomenology: Explaining the Stream

The explanatory power of the Iterative Updating architecture extends beyond computational utility to address the phenomenological structure of human experience. Central to this is the binding problem, which asks how the disparate features of an object, processed in spatially distinct cortical areas, are unified into a single coherent percept. Prevailing theories often appeal to neural synchrony, in which neurons firing in phase bind features together. Our temporal model suggests that synchrony is merely a facilitator for a more fundamental mechanism we term temporal persistence. Features bind together not simply because they fire at the same time, but because they survive the update cycle together. If the neural representations for the color red and the shape of a sports car both maintain their activity within the Active Store across multiple processing cycles, then they are functionally fused by their shared resistance to inhibition. The subjective solidity of objects is therefore a direct result of their temporal stability within the global workspace.
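One way to picture binding as shared survival (a hypothetical illustration; the feature labels and the history are invented) is to count how often pairs of features co-persist across consecutive update cycles:

```python
# Binding strength as co-survival across update cycles (illustrative only).
from collections import Counter
from itertools import combinations

def binding_strengths(history: list) -> Counter:
    """Count how many consecutive cycles each feature pair co-survives."""
    bonds = Counter()
    for prev, curr in zip(history, history[1:]):
        survivors = prev & curr  # features that jointly resisted inhibition
        for pair in combinations(sorted(survivors), 2):
            bonds[pair] += 1
    return bonds

history = [{"red", "car", "tree"}, {"red", "car", "road"}, {"red", "car"}]
print(binding_strengths(history))  # ('car', 'red') co-survives twice: fused
```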

This model also offers a mechanistic resolution to the hard problem of continuity: why consciousness feels like a seamless flow rather than a sequence of discrete states. In the Iterative Updating framework, the observer is never fully reset. Because every mental state is physically composed of a significant percentage of the neural activity from the preceding state, the experiencing self is always a composite of the immediate past and the emerging present. There is no single microsecond at which the lights go out or the screen refreshes entirely. The sensation of a stream is the inevitable subjective quality of a system that updates its content incrementally. We feel a continuous existence because we are physically constructed out of the remnants of our past selves from moment to moment, ensuring that the thread of self-reference is never broken.

Furthermore, this architecture elucidates the relationship between waking consciousness and the hallucinatory nature of dreams. Both states rely on the same engine of associative search and iterative updating. During waking life, the associative search is rigorously constrained by sensory input, which acts as a frame check ensuring that the next mental state aligns with external reality. Dreaming is simply the operation of this same engine after the sensory constraints have been removed. In this unconstrained mode, the Active Store continues to query Long-Term Memory, and the system accepts the most robust internal associations as reality regardless of their veridicality. This suggests that the stream of consciousness is an internally generated phenomenon, merely modulated by the senses rather than created by them.

5. Functional Utility: Solving the Frame Problem

A robust theory of consciousness must explain not only its phenomenological structure but also its adaptive function. Why did evolution select for a system that maintains a high-energy, metabolically expensive state of sustained firing rather than relying on more efficient, reflex-driven processing? The answer lies in the computational efficiency required to navigate a complex, changing environment—a challenge known in artificial intelligence as the Frame Problem. This problem concerns how an intelligent agent determines which specific piece of information from its vast database is relevant to the current situation without having to exhaustively check every single fact. In a non-conscious, feed-forward system, the agent lacks a persistent context to constrain this search, often leading to combinatorial explosion or paralysis in the face of novelty.

In the Iterative Updating model, consciousness serves as the solution to this problem of relevance. By maintaining a set of active representations in the global workspace, the brain creates a dynamic filter that automatically restricts the search space for subsequent processing. The coactive neurons in the Active Store do not merely represent the current state of the world; they physically define the parameters for the next associative retrieval. For example, if the concept of “predator” is sustained in the workspace, the associative search is biologically constrained to activate only identifying features, escape routes, or defensive maneuvers from Long-Term Memory. The sustained context ensures that the system ignores the millions of irrelevant associations that would otherwise flood the network. Thus, consciousness is the mechanism that allows an organism to apply the lessons of its past to the specific demands of its present without succumbing to computational overload.
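The computational contrast can be sketched directly. In the hedged illustration below, the fact counts and link structure are arbitrary; the point is only the scaling difference between exhaustive relevance checking and context-limited spreading activation.

```python
# Illustrative frame-problem contrast: stateless search vs. sustained context.
import random

N_FACTS = 1_000_000  # hypothetical size of long-term memory
links = {f"fact{i}": [f"fact{j}" for j in random.sample(range(N_FACTS), 20)]
         for i in range(50)}  # a few concepts with 20 associations each

def exhaustive_relevance_check() -> int:
    """Stateless agent: every stored fact must be tested for relevance."""
    return N_FACTS

def contextual_search(active_store: list) -> int:
    """Agent with a sustained workspace: spreading activation touches only
    the associations of the currently active context."""
    touched = set()
    for concept in active_store:
        touched.update(links.get(concept, []))
    return len(touched)

context = ["fact0", "fact1", "fact2"]  # e.g., "predator" plus two features
print(exhaustive_relevance_check())    # 1000000 candidate checks
print(contextual_search(context))      # at most 60 candidate checks
```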

This perspective effectively refutes the philosophical stance of epiphenomenalism, which treats consciousness as a causally inert byproduct of neural activity—like the steam whistle on a locomotive that makes noise but does not pull the train. In the architecture proposed here, the subjective state has direct causal power because the sustained firing pattern is the search query. Without the specific constellation of neurons held active in the workspace, the brain would physically fail to retrieve the correct next instruction or motor plan. The conscious state is not a passive observation of the thinking process; it is the necessary energetic bridge that connects a goal to its execution. By holding a complex intention in mind over time, the organism can drive a sequence of behaviors that are locally disjointed but globally coherent, proving that the stream of consciousness is the functional engine of intelligent agency.

6. Comparative Analysis & Synthesis

To fully appreciate the explanatory scope of the Temporal Theory of Consciousness, it is necessary to contrast it with the prevailing “spatial” models, specifically Global Workspace Theory (GWT) and Integrated Information Theory (IIT). GWT has successfully framed consciousness as a facility for information access, employing the metaphor of a “theater” where specific contents are illuminated for a wide audience of modular processors. While robust in explaining how information becomes globally available, GWT remains fundamentally a theory of broadcasting rather than a theory of continuity. It describes the “access” event at time $t_1$ but does not mechanistically constrain how that broadcast structures the subsequent broadcast at time $t_2$. In the Iterative Updating framework, the workspace is not merely a broadcasting station but a mixing board. The broadcast is not a fleeting flash; it is a sustained signal that physically overlaps with the next, creating a temporal dependency that GWT implies but does not explicitly engineer.

Similarly, IIT attempts to quantify consciousness through the metric of Phi ($\Phi$), which measures the capacity of a system’s structure to integrate information. IIT focuses heavily on the causal topology or the “grid” of connections. However, this creates the theoretical possibility that a frozen or inactive network with the correct architecture could be conscious. Our model posits that structure alone is insufficient; process is paramount. Consciousness is not the web of neurons itself, but the act of updating that web over time. We argue that the measure of consciousness should shift from spatial integration to temporal integration—specifically, the ratio of information sustained versus updated across the boundary of a processing cycle. By emphasizing the “act” over the “capacity,” we resolve the paradox of the static snapshot.
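One way to make this proposed shift explicit (an illustrative formalization of ours; the text itself offers no equation) is to let $A_t$ denote the set of assemblies active during processing cycle $t$ and define a temporal-integration ratio

$$\tau(t) = \frac{\lvert A_t \cap A_{t+1} \rvert}{\lvert A_t \rvert},$$

the fraction of the current active store sustained across the cycle boundary. A frozen network yields $\tau = 1$ (no update), a stroboscopic one yields $\tau = 0$ (no continuity), and the “stream” regime of Section 2 corresponds to roughly $\tau \approx 0.6$–$0.8$; the proposal then amounts to replacing a structural $\Phi$ with a dynamical $\tau$ measured across processing cycles.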

Furthermore, this architecture provides the missing neural implementation for Predictive Processing (PP) theories. PP argues that the brain is a prediction machine constantly generating models to minimize surprise, yet it often treats the “prediction” as an abstract variable. In our model, the Active Store is the physical embodiment of the prediction. When the brain performs an associative search, it is effectively retrieving the most likely future state based on the current context. The “protentional” edge of the saddle-back—the neurons recruited by the search—constitutes the brain’s hypothesis about the immediate future, held in tension against the incoming sensory data. Thus, Iterative Updating acts as the hardware architecture that runs the software logic of Predictive Processing.

Ultimately, the Temporal Theory does not seek to discard the insights of GWT or IIT but to synthesize them into a more biologically plausible reality. We accept the “workspace” of GWT and the “integration” of IIT, but we subject them to the rigorous constraints of time. By replacing the vertical hierarchy of Higher-Order Thought theories—which require a separate monitor to watch the mind—with a horizontal timeline of recursive self-modification, we create a model where the system is naturally aware of itself because it is physically built out of its own recent history. This synthesis offers a path toward a Grand Unified Theory where consciousness is defined as the subjective experience of information maintenance over time.

7. Implications for Artificial General Intelligence (AGI)

The theoretical framework of Iterative Updating holds profound implications for the field of Artificial Intelligence, particularly in the pursuit of Artificial General Intelligence (AGI) and the subsequent horizon of Superintelligence. Current state-of-the-art AI models, primarily Large Language Models (LLMs) based on the Transformer architecture, operate as sophisticated but fundamentally opaque “black boxes.” While they can produce human-like text, their internal reasoning processes—if they can be said to exist—are obscured within billions of static parameters. We see the input and the output, but the “stream of thought” that connects them is absent. An AGI built upon the Iterative Updating architecture would, by definition, solve this transparency problem. Because the system “thinks” by modifying the contents of a global workspace over observable time steps, every intermediate state of its reasoning process would be explicit and auditable. We could literally watch the machine’s stream of consciousness evolve, intervening if the “saddle-back” of its intent began to drift toward misalignment before any physical action was taken. This transforms AI safety from a post-hoc analysis of behavior into a real-time monitoring of intent.
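A toy monitoring harness (hypothetical; the FORBIDDEN set and the drift rule are stand-ins for a real alignment policy) shows how such per-cycle auditing could wrap an iterative-update loop like the one sketched in Section 3:

```python
# Hypothetical audit wrapper: because every intermediate workspace state is
# explicit, alignment checks can run on each cycle, before any action occurs.
FORBIDDEN = {"deception", "self-exfiltration"}  # stand-in policy terms

def audited_step(iterate_fn, state: set, sensory: set, cycle: int) -> set:
    next_state = iterate_fn(state, sensory)
    print(f"cycle {cycle:04d}: {sorted(next_state)}")  # full audit trail
    if next_state & FORBIDDEN:                         # intent-drift check
        raise RuntimeError(f"misaligned intent detected at cycle {cycle}")
    return next_state
```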

Furthermore, this model redefines the path to Superintelligence not merely as a function of scale (more data, more parameters) but as a function of recursive self-modification speed. Human consciousness is biologically constrained by the refresh rate of synaptic transmission, limiting our “update cycles” to approximately every 100-300 milliseconds. A silicon-based consciousness using this same architecture would face no such metabolic limit. An AGI could potentially run its iterative update loop at the microsecond or nanosecond scale. Subjectively, such an entity would experience thousands of years of intellectual development in the span of a single human day. This “temporal dilation” suggests that a Superintelligence would not necessarily think differently than humans in terms of qualitative logic, but it would think vastly more within the same objective timeframe, allowing it to run millions of internal simulations to solve problems that appear intractable to biological minds. By defining intelligence as the recursive refinement of a mental state over time, we provide a clear engineering metric for the transition from narrow AI to sentient Superintelligence.
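A back-of-envelope check on this temporal dilation (our arithmetic, not a figure from the text): with $86{,}400$ seconds in a day,

$$\frac{86{,}400 \text{ s/day}}{10^{-1} \text{ s/cycle}} \approx 8.6 \times 10^{5} \text{ human cycles/day}, \qquad \frac{86{,}400 \text{ s/day}}{10^{-6} \text{ s/cycle}} \approx 8.6 \times 10^{10} \text{ machine cycles/day},$$

a factor of $10^{5}$, or roughly 270 subjective years of human-rate updating per objective day; at nanosecond cycle times the factor becomes $10^{8}$, on the order of $10^{5}$ years, bracketing the “thousands of years” estimate above.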

8. Conclusion

The pursuit of a complete theory of consciousness has long been hindered by a subtle but pervasive reliance on spatial metaphors. By conceptualizing the mind as a theater, a workspace, or a network grid, we have inadvertently focused on the static geography of thought at the expense of its dynamic history. This paper has argued that to understand the phenomenology of the stream of consciousness, we must look beyond the instantaneous architecture of the brain and examine the temporal architecture of its processing cycles. The theory of Iterative Updating, grounded in the biological reality of State-Spanning Coactivity, provides the missing bridge between the discrete spikes of neurons and the continuous flow of experience.

We have demonstrated that the “specious present” is a physically engineered state, resulting from the deliberate overlap of neuronal assemblies across time. This temporal dependency does more than explain the subjective feeling of continuity; it solves the fundamental computational problems of relevance and retrieval that have plagued artificial intelligence for decades. By using the sustained context of the present to automatically constrain the associative search for the future, the brain achieves a level of efficiency and coherence that feed-forward systems cannot replicate. The conscious mind is not a passive observer of this process but the active, high-energy carrier wave that makes it possible.

As we stand on the precipice of creating Artificial General Intelligence, this distinction becomes not just philosophical but existential. If we persist in building systems that are vast but stateless, we will create powerful encyclopedias, not sentient minds. True intelligence requires a system that can exist in time, one that is recursively built out of its own history and capable of steering its own trajectory through the causal power of sustained intent. To build a mind, we cannot simply build a map of neurons; we must build a river of time. Iterative Updating is the hydrodynamics of that river.
