
Rethinking Computer Science Part 4: A Network Interpreter

Uniform flow paths, a universal set of operations and sequencing seem the royal road to universal interpretability. A single memory and a single interpreter form, in a sense, the simplest and most minimal computation capable of being structured to emulate any other computation: a structureless tabula rasa that reflects only uniform path structure and a small set of sufficient operations onto the mapping of the network to be interpreted. If the emulating computation contains any element of flow structure such as concurrency, two interpreters or two memories for instance, then the emulating computation ceases to be a structureless receptacle; it reflects its own flow structure onto the mapping, and the two structures collide mid-mapping in a question with a multitude of answers and a nontrivial coordination issue.

Might there be another road to universal interpretability that does not require sequencing the entire network of oscillations, one that preserves the structure of the network and its concurrent behaviors? While it is difficult to inject concurrency into sequentiality, it is straightforward to inject sequentiality into an expression of concurrency, which already expresses all the necessary coordination behavior. In particular, the behavior of an individual oscillation can be implemented in any convenient manner, including sequentially, as long as the oscillation performs its computation and the empty/full flow protocol is honored. A computation can be expressed as a flow network of sequential processes.
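
A minimal sketch of this idea in Python (the names and the queue-based wiring are my own illustration, not from the text): each oscillation is an ordinary sequential process, bounded queues stand in for flow paths, and blocking put/get stands in for the empty/full flow protocol.

import threading
import queue

def oscillation(op, in_a, in_b, out):
    """Sequential body of one oscillation in a flow network."""
    while True:
        a = in_a.get()          # wait for input wavefronts (paths full)
        b = in_b.get()
        out.put(op(a, b))       # blocks until the output path is empty again

# Wire a tiny network: one adder oscillation fed by two input paths.
in_a, in_b, out = (queue.Queue(maxsize=1) for _ in range(3))
threading.Thread(target=oscillation,
                 args=(lambda a, b: a + b, in_a, in_b, out),
                 daemon=True).start()

in_a.put(3)
in_b.put(4)
print(out.get())                # -> 7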

Each oscillation is represented by a symbolic record that specifies the computation of the oscillation as a sequence of operations and specifies the input paths and the output paths, each with its associated closure path and flags indicating empty and full. The records are mapped into a distributed network of memories and interpreters.
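
A sketch of what such a record might hold, in Python (the field names are my assumptions, not the author's): the computation as a sequence of operations plus, for each input and output path, a link to its peer record, its associated closure path and an empty/full flag.

from dataclasses import dataclass, field

@dataclass
class FlowPath:
    peer: str                  # name/address of the linked oscillation record
    closure: str = None        # associated closure (acknowledge) path
    full: bool = False         # False = empty, True = carries a data wavefront
    value: object = None

@dataclass
class OscillationRecord:
    name: str
    operations: list                               # the computation as a sequence of operations
    inputs: list = field(default_factory=list)     # list of FlowPath
    outputs: list = field(default_factory=list)    # list of FlowPath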

The memories and interpreters are decoupled. Several interpreters are associated with each memory and serve the memory in turn as they become available. Several memories are associated with each interpreter, forming overlapping neighborhoods of memory/interpreter associations. Memories are not connected to memories but are connected through interpreters. Interpreters are not connected to interpreters but are connected through memories. Neighboring memories are connected through a single interpreter.

A flow network is loaded record by record through an input memory. The first record is allocated to a memory. Subsequent records flow through it and are allocated and linked to accepting neighboring memories, which in turn become conduits for records allocated and linked to their own neighbors, until the entire flow network is loaded into the network of memories. The loaded flow network, weaving through the neighborhoods of memories, can fold back through the same memory multiple times. In the limit, a network can flow back and forth through a single memory and a single interpreter can service it.

Each grey box below is a separate oscillation record in a separate memory showing the linking flow paths between memories.

Most records are loaded as empty. Some records may contain initial data wavefronts. Data wavefronts are presented to the network inputs, the interpreters begin servicing the records, and data wavefronts begin flowing from component record to component record, from memory to memory, interacting as they flow. Each independently behaving interpreter polls its connected memories opportunistically, searching for an oscillation record to service. Access to each record is arbitrated so that only one interpreter at a time gains access to a record and performs the following protocol.

The Interpreter Protocol

ALWAYS poll for memory access {
   WHEN gained {
      IF ready queue not empty
      THEN access record of top entry;
         IF input complete
         THEN perform computation;
            WHEN done {
               FOR each output path {
                  gain access to memory with destination path oscillation record;
                  deliver output value to path and set flag full;
                  IF oscillation record input is completely full and its output completely empty
                  THEN place oscillation record onto ready queue;
               }
               FOR each input path {
                  mark input path as empty;
                  gain access to memory of source path oscillation record;
                  mark source output path as empty;
                  IF oscillation record input is completely full and its output completely empty
                  THEN place oscillation record onto ready queue;
               }
            }
         release record
      ELSE idle a bit
   }
}
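
Below is a runnable Python sketch of this protocol under simplifying assumptions of my own: a single shared ready queue stands in for the per-memory queues, a lock per record stands in for the arbitration, and the names (Record, Port, is_ready and the two-record example network) are illustrative, not the author's.

import threading, queue
from dataclasses import dataclass, field

@dataclass
class Port:
    peer: tuple = None           # (record name, port index) of the linked record
    full: bool = False
    value: object = None

@dataclass
class Record:
    name: str
    op: object                   # the oscillation's computation
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    lock: threading.Lock = field(default_factory=threading.Lock)

def is_ready(rec):               # input completely full and output completely empty
    return all(p.full for p in rec.inputs) and not any(p.full for p in rec.outputs)

def interpreter(records, ready, results):
    while True:
        rec = ready.get()        # arbitrated: a ready record is serviced by one interpreter
        value = rec.op(*[p.value for p in rec.inputs])
        for out in rec.outputs:                       # deliver outputs, set flags full
            with rec.lock:
                out.full = True
            if out.peer is None:                      # external output path
                results.put((rec.name, value))
                continue
            dst = records[out.peer[0]]
            with dst.lock:
                port = dst.inputs[out.peer[1]]
                port.value, port.full = value, True
                if is_ready(dst):
                    ready.put(dst)
        for inp in rec.inputs:                        # propagate empty back to sources
            with rec.lock:
                inp.full = False
            if inp.peer is None:                      # external input path
                continue
            src = records[inp.peer[0]]
            with src.lock:
                src.outputs[inp.peer[1]].full = False
                if is_ready(src):
                    ready.put(src)

# A two-record network: "add" feeds "double"; the external inputs hold initial wavefronts.
records = {
    "add": Record("add", lambda a, b: a + b,
                  inputs=[Port(value=3, full=True), Port(value=4, full=True)],
                  outputs=[Port(peer=("double", 0))]),
    "double": Record("double", lambda x: 2 * x,
                     inputs=[Port(peer=("add", 0))],
                     outputs=[Port()]),
}
ready, results = queue.Queue(), queue.Queue()
for r in records.values():
    if is_ready(r):
        ready.put(r)
for _ in range(2):               # two independent, stateless interpreters
    threading.Thread(target=interpreter, args=(records, ready, results), daemon=True).start()
print(results.get())             # -> ('double', 14)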

Full/empty flags emulate the full/empty flow of the oscillation links. Each interpreter can manage input ports, enable acceptance of input, determine when input is complete and perform the program of the oscillation. Computation completeness occurs when an oscillation program has performed its last sequential operation. The interpreter then checks that the input ports of the destinations are empty and sends the output to the successor records' input ports. When all are accepted, it marks the current record's inputs to enable the flow of empty, which is just a manipulation of flags.

The state of the flow of the network and the state of each oscillation are maintained entirely in the oscillation records. Each interpreter is a stateless server that, with each access, updates a record, performs any housekeeping duties and leaves the memory and its oscillation record in a coherent state for the next interpreter to access and update. The arbitrated record access and the inherent coordination behavior of the flow network ensure that interpreters never interfere with each other and that networks flow independently. As each interpreter performs its local duty, alternating wavefronts of data and emptiness flow through the network records, realizing successive instances of network computations and exploiting the represented concurrency, both parallel and pipelined, as resources permit. There can be multiple flow networks distributed through the memories proceeding simultaneously and independently. Each memory can host multiple oscillation records of the same or different flow networks without interference.

The interpreters do not communicate among themselves. No interpreter is more important than any other interpreter. There is no central controller or explicit scheduler. There is no time referent. The network flows in its own time. There is no encompassing state. The state of each instance of a computation is in the wavefronts of data values flowing through the network and maintained in the records. Within the network, no part is greater than any other part. There is no central anything. There is no global anything. The network is a behaving whole composed of behaving localities enlivened by the servicing interpreters.

The operating system is itself a flow network distributed through the memories with one or more oscillation records in each memory. The OS flow network is enlivened by interpreters servicing its components just like any other flow network, with its state maintained in the records themselves. Interpreters remain independent and stateless in relation to the OS.

Oscillation records can migrate among memories as long as the record state and links are maintained. A memory can be emptied and removed. A newly added memory can begin receiving oscillation records. An idle interpreter is stateless and can be removed. A newly added interpreter simply begins looking for opportunities to service. The interpreters can keep tabs on each other by periodic conversation through the OS flow network. Rogue interpreters can be isolated and slow interpreters identified. Interpreters and memories can negotiate among themselves through the OS network to balance application load, optimize application throughput, direct flow to special purpose resources and so on.

The flow networks can reconfigure in relation to the memory/interpreter network. The memory/interpreter network can reconfigure in relation to the flow networks. The memory/interpreter network can be monitored with redundant flow, with periodic sets of test values or with a test flow network that continually flows test cases through the network. The interpreters treat it as just another network to service.

The memory/interpreter network exploits the inherent concurrency of independent flow networks and the concurrency within each flow network as resources permit. Resources can be extended indefinitely simply by adding memories and interpreters. As resources dwindle the system will continue working, down to a single memory and a single interpreter.

Summary
While the behavior of each oscillation is sequentialized, the flow network of oscillations is interpreted directly in the network of distributed memories and interpreters, realizing the flow network in its native structure and behavior.

Rethinking Computer Science Part 3: A Sequential Interpreter

Part 3 presents the logical structures of a memory, a configurable oscillation and a sequential interpreter ring forming a traditional universal sequential processor.

Memory: Stopping a Data Wavefront
In the flow model of computation successive wavefronts of data spontaneously propagate through the flow paths of networks of linked oscillations. A memory would entail stopping a data wavefront from flowing, holding it for an indefinite period, then allowing it to flow again. This can be accomplished with a ring.

A Memory Ring
An oscillation link maintains a wavefront until it is accepted by the following oscillation. A ring can be arranged with an added dual-rail control path that indefinitely delays acceptance at a specific link, C, in the ring, effectively stopping and storing a data wavefront.

The read rail transitioning to data enables the stored data wavefront to the output of the ring. The wavefront flows both out of the ring and back into the ring through B to be restored at C, allowing multiple reads of the same wavefront.

The write rail transitioning to data enables the stored data wavefront to a consume path, D2, removing it from the ring and allowing a data wavefront presented on the input path, also enabled by the write rail, to flow into the ring and to C, replacing the previous wavefront with the input wavefront.

A data wavefront is maintained indefinitely at C until a dual rail read/write command wavefront allows it to flow.

A Randomly Addressable Memory
Memory rings can be stacked with a write bus and a read bus and individual read/write controls to form an arbitrarily large random access memory.

The memory operates as a complex conditional oscillation which is a consume oscillation for write and a produce oscillation for read. On each oscillation only one memory element is addressed and enabled, either to accept an input from the write bus or to present an output to the read bus.
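
A behavioral sketch in Python of these two structures (the class and method names are mine, and the model ignores the dual-rail signalling): a memory ring holds one wavefront indefinitely, read re-circulates it so it can be read repeatedly, write consumes it and replaces it with the input wavefront, and stacking rings behind an address gives a random access memory in which each operation enables exactly one ring.

class MemoryRing:
    def __init__(self, initial=None):
        self._held = initial                  # the wavefront parked at link C

    def read(self):                           # non-destructive: restored at C through B
        return self._held

    def write(self, wavefront):               # previous wavefront consumed at D2
        self._held = wavefront

class RingMemory:
    def __init__(self, size):
        self._rings = [MemoryRing() for _ in range(size)]

    def read(self, address):                  # only the addressed ring presents to the read bus
        return self._rings[address].read()

    def write(self, address, wavefront):      # only the addressed ring accepts from the write bus
        self._rings[address].write(wavefront)

mem = RingMemory(4)
mem.write(2, "wavefront")
assert mem.read(2) == mem.read(2) == "wavefront"   # multiple reads of the same wavefront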

The entire memory structure is a network of dual threshold elements.

A Configurable Oscillation

All flow is characterized in terms of two-input, one-output operations and in terms of a common flow path structure (32 dual-rail bits, for instance). The behaviors of all oscillations are characterized in terms of a common set of operations on the common flow path structure. An oscillation is defined that conditionally realizes each operation by steering the input through the specified operation to the output.

A computation can now be realized as a network of identical configurable oscillations, each behaving according to its configuration code.
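
A sketch in Python of a configurable oscillation (the operation set and names are illustrative assumptions): every oscillation shares the same flow path structure and the same operation set, and a configuration code selects which operation the input wavefronts are steered through.

OPERATIONS = {
    "ADD": lambda a, b: a + b,
    "SUB": lambda a, b: a - b,
    "AND": lambda a, b: a & b,
    "OR":  lambda a, b: a | b,
}

def configurable_oscillation(config_code, in_a, in_b):
    """One two-input, one-output oscillation behaving according to its configuration code."""
    return OPERATIONS[config_code](in_a, in_b)

print(configurable_oscillation("ADD", 3, 4))   # -> 7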

Symbolically Specifying the Network
Each arc of the network is assigned a unique name which associates an output of one node to one or more inputs of other nodes. The network can now be expressed as a collection of statements each of which associates the names of the input arcs and the name of the output arc with a configuration code such as below.

opcode, inAname, inBname, outname

The collection of statements, specifying the network structure by name correspondence, can be placed in an environment in which the name correspondence relations can be conveniently realized. If the unique names are unique memory addresses, then the statements become linked by reference through memory locations. Since each memory location will maintain a data wavefront indefinitely, the statements can be performed one at a time in any ordering that preserves the dependency relations of the original flow network. In particular, the statements can be placed linearly in a memory and, with the assistance of conditional branching, performed one at a time in sequence in an appropriate order. Data wavefronts flowing haltingly through memory ultimately realize the computation specified by the original flow network.
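
A small Python sketch of this sequencing (the statements and values are an illustrative example, not from the text): each statement reads its input wavefronts from the memory locations named by its input arcs, applies its opcode and writes the result wavefront to the location named by its output arc; performed one at a time in an order that respects the dependencies, the sequence realizes the original flow network.

OPERATIONS = {"ADD": lambda a, b: a + b, "MUL": lambda a, b: a * b}

# (opcode, inAname, inBname, outname) placed linearly in dependency order
statements = [
    ("ADD", "x", "y", "t1"),      # t1 = x + y
    ("MUL", "t1", "z", "out"),    # out = t1 * z
]

memory = {"x": 3, "y": 4, "z": 5}             # initial data wavefronts held in memory
for opcode, in_a, in_b, out in statements:    # sequential interpretation
    memory[out] = OPERATIONS[opcode](memory[in_a], memory[in_b])

print(memory["out"])                          # -> 35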

An Interpreting Pipeline Ring
A pipeline ring containing a configurable oscillation performs operations in sequence by fetching a statement from memory, fetching its input wavefronts from memory, performing the operation and sending the result wavefront to memory, then doing it again, and again.

The ring and the memory are realized entirely in terms of a spontaneously behaving network of linked oscillations composed entirely of dual threshold operations. No clock, no critical timing issues, no state machine, no DQ flip flops.

The sequential interpreter delivers “universal” computation as a coherent structure of logical relations, in contrast to a heterogeneous structure of clock, Boolean logic and DQ flip flops, or of abstract state machine, mechanical read/write head and paper tape. Universality is attained by mapping dependency networks into the tabula rasa of a single memory and sequence. Is this the only road to universality, or might there be a universal interpreter that directly interprets flow networks, preserving their structure and behavior?

Rethinking Computer Science: Purpose of Site

The purpose of this site is to present and explore a new view of computation and computer science.

not as a sequence of steps controlling a machine altering contents of a memory but as wavefronts of computation and state spontaneously flowing through a network of linked oscillations,

not as clock-actualized, step-by-step, time-determined, centralized control but as self-actualizing, event-driven, logically determined, distributed concurrent local coordination,

not as information manipulation but as information interaction,

nothing global, nothing central, nothing timed.

It is a model of computation that applies to all forms of computation, natural and artificial, and that applies uniformly to all levels of abstraction from primitive implementation through general programmability,

a new view of computation and computer science.