Part 3 presents the logical structures of a memory, a configurable oscillation and a sequential interpreter ring, which together form a traditional universal sequential processor.
Memory: Stopping a Data Wavefront
In the flow model of computation successive wavefronts of data spontaneously propagate through the flow paths of networks of linked oscillations. A memory would entail stopping a data wavefront from flowing, holding it for an indefinite period, then allowing it to flow again. This can be accomplished with a ring.
A Memory Ring
An oscillation link maintains a wavefront until it is accepted by the following oscillation. A ring can be arranged with an added dual-rail control path that indefinitely delays acceptance at a specific link, C, in the ring, effectively stopping and storing a data wavefront.
The read rail transitioning to data enables the stored data wavefront to the output of the ring. The wavefront flows both out of the ring and back into the ring through B, to be restored at C, allowing multiple reads of the same wavefront.
The write rail transitioning to data enables the stored data wavefront to a consume path, D2, removing it from the ring, and allows a data wavefront presented on the input path, also enabled by the write rail, to flow into the ring to C, replacing the previous wavefront with the input wavefront.
A data wavefront is maintained indefinitely at C until a dual rail read/write command wavefront allows it to flow.
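The read/write semantics just described can be sketched behaviorally (class and method names are mine, not the source's; this models only the semantics, not the dual threshold implementation):

```python
class MemoryRing:
    """Behavioral sketch of a memory ring: a data wavefront is held at
    link C until a read or write command wavefront allows it to flow."""

    def __init__(self, initial=None):
        self.stored = initial  # the data wavefront waiting at link C

    def read(self):
        # The wavefront flows out of the ring and back in through B,
        # restoring itself at C, so the same wavefront can be read again.
        return self.stored

    def write(self, wavefront):
        # The old wavefront is steered to the consume path (D2) and the
        # input wavefront flows around to C, replacing it.
        self.stored = wavefront
```

Stacking such rings on a shared read bus and write bus, with per-ring read/write controls, gives the randomly addressable memory described next.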
A Randomly Addressable Memory
Memory rings can be stacked with a write bus and a read bus and individual read/write controls to form an arbitrarily large random access memory.
The memory operates as a complex conditional oscillation which is a consume oscillation for write and a produce oscillation for read. During each oscillation only one memory element is addressed and enabled to accept an input from the write bus or to present an output to the read bus.
The entire memory structure is a network of dual threshold elements.
A Configurable Oscillation
All flow is characterized in terms of two-input, one-output operations and in terms of a common flow path structure (32 dual-rail bits, for instance). The behaviors of all oscillations are characterized in terms of a common set of operations on the common flow path structure. An oscillation is defined that conditionally realizes each operation by steering the input through the specified operation to the output.
A computation can now be realized as a network of identical configurable oscillations, each behaving according to its configuration code.
Symbolically Specifying the Network
Each arc of the network is assigned a unique name which associates an output of one node to one or more inputs of other nodes. The network can now be expressed as a collection of statements, each of which associates the names of the input arcs and the name of the output arc with a configuration code, such as below.
opcode, inAname, inBname, outname
The collection of statements, specifying the network structure by name correspondence, can be placed in an environment in which the name correspondence relations can be conveniently realized. If the unique names are unique memory addresses then the statements become linked by reference through memory locations. Since each memory location will maintain a data wavefront indefinitely, the statements can be performed one at a time in any ordering that preserves the dependency relations of the original flow network. In particular, the statements can be placed linearly in a memory and, with the assistance of conditional branching, performed one at a time in sequence in an appropriate order. Data wavefronts flowing haltingly through memory ultimately realize the computation specified by the original flow network.
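As a toy illustration (the opcodes, addresses and values are my own invention, not the source's), statements linked by memory address can be performed one at a time in any order that preserves the dependencies:

```python
# Each statement names its operation and the memory addresses of its
# two input wavefronts and its output wavefront.
OPS = {"ADD": lambda a, b: a + b, "MUL": lambda a, b: a * b}

def run(statements, memory):
    for opcode, in_a, in_b, out in statements:
        # Each memory location maintains its wavefront indefinitely,
        # so the statements can be performed one at a time in sequence.
        memory[out] = OPS[opcode](memory[in_a], memory[in_b])
    return memory
```

For example, `run([("ADD", 0, 1, 2), ("MUL", 2, 1, 3)], {0: 2, 1: 3})` leaves the result of (2 + 3) * 3, which is 15, at address 3.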
An Interpreting Pipeline Ring
A pipeline ring containing a configurable oscillation performs operations in sequence by fetching a statement from memory, fetching its input wavefronts from memory, performing the operation and sending the result wavefront to memory, then doing it again, and again.
The ring and the memory are realized entirely in terms of a spontaneously behaving network of linked oscillations composed entirely of dual threshold operations. No clock, no critical timing issues, no state machine, no DQ flip flops.
The sequential interpreter delivers “universal” computation as a coherent structure of logical relations, in contrast to a heterogeneous structure of clock, Boolean logic and DQ flip flops, or of abstract state machine, mechanical read/write head and paper tape. Universality is attained by mapping dependency networks into the tabula rasa of a single memory and sequence. Is this the only road to universality, or might there be a universal interpreter that directly interprets flow networks, preserving their structure and behavior?
Part 2 presents a logical foundation for computation that does not exhibit the difficulties discussed in part 1.
A Sufficiently Expressive Logic
Consider the possibilities of a logic that manages its own flow of concurrent computation, that does not exhibit incorrect behavior, that determines its own completeness of resolution and that stably maintains its flowing data. A logic that is sufficiently expressive to completely determine the behavior of any computation solely in terms of the logic itself. No time interval. No registers. No state machine. No timing closure. No emulated mathematician. Might we accept such a logic as a foundation of computational expression and its logically determined behavior as a foundation of computational confidence?
Such a logic is defined by introducing the notion of emptiness to variables and the notion of completeness and value holding to operators on variables.
Variable Emptiness
The current computer science notion of a variable is as a container of a data value that always contains and always asserts a data value. The contained data value can change but the container, the variable, persists and is never empty. The validity or invalidity of a variable’s data value is determined by some control behavior apart from the variable.
A value is added to the variable’s value set that explicitly means “not a data value” which will be called empty. A variable can now explicitly indicate that it does not contain a data value, that it is “empty”. A computation begins when a data value flows into an empty variable and ends when an empty value flows into the variable.
Operator Completeness and Emptiness
A computation primitive operator must now relate to this new “not data value” and the fullness and emptiness of its variables. So the notion of completeness and value holding behavior is introduced.
Each operator fulfills a completeness criterion. Only when its input variables are completely data will an operator transition its output variable to a data value that is the correct resolution of the presented input values. Only when its input variables are completely empty will an operator transition its output value to empty. For all conditions of mixed data/empty input an operator will maintain its current output value.
The output of an operator transitioning to data implies that its input is completely data and that its output is the correct resolution of the presented input. The output of an operator transitioning to empty implies that its input is completely empty and that it is ready to receive a next data input.
If the input variables to an operator monotonically transition between data completeness and completely empty then the output variable of the operator will monotonically transition between data and empty fulfilling the completeness criterion.
Each operator exhibits completeness behavior (control), value holding behavior (memory) and transformation behavior (data value transition mapping). An operator is still represented as a mapping but now the mapping includes the empty value and there are mappings for which the output does not transition.
A Dual Threshold Logic
Considering electronic implementation, the two electronic values that a wire/rail can assert are designated as the logical values data (non-zero voltage) and empty (zero voltage), in contrast to 0 and 1 or True and False. Since there is only one data value there cannot be combinations of different data values presented to an operator; there can only be quantities of the one data value. Therefore the operator data completeness criterion is a quantity of inputs transitioned to data: a threshold operator.
The new operator is a dual threshold behavior with an input data completeness threshold and an input completely empty threshold. A dual threshold operator beginning with its output empty will transition its output to data when a specific number of its input rails transition to data, fulfilling the data completeness threshold. It will maintain its output data value until all of its input rails transition to empty, at which point it will transition its output to empty. It will then maintain its output empty value until its input rails transition to data completeness, at which point it will transition its output to data … and so on.
Some dual threshold operators are represented graphically below with their mapping tables in which a dash represents no transition. The number in the graphic is the data completeness threshold. The input rails indicate the number of inputs. A branched rail indicates a weighted input.
Monotonic Value Completeness Transitioning
If the input rails to a dual threshold operator monotonically transition between data completeness and completely empty with an appropriate delay between transitions then the output rail of the dual threshold operator will monotonically transition between data and empty.
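The dual threshold behavior described above can be modeled in software (a behavioral sketch under my own naming, ignoring electrical detail; a weighted input is modeled as a per-rail weight):

```python
DATA, EMPTY = 1, 0

class DualThreshold:
    """Output transitions to DATA at the data completeness threshold,
    to EMPTY only when every input rail is EMPTY, and otherwise holds
    its current value (the value holding behavior)."""

    def __init__(self, threshold, weights):
        self.threshold = threshold  # data completeness threshold
        self.weights = weights      # weight per input rail
        self.out = EMPTY            # begin completely empty

    def present(self, rails):
        count = sum(w for w, r in zip(self.weights, rails) if r == DATA)
        if count >= self.threshold:
            self.out = DATA         # input is data complete
        elif all(r == EMPTY for r in rails):
            self.out = EMPTY        # input is completely empty
        # for mixed data/empty input the current output is maintained
        return self.out
```

A 2-of-3 operator, for instance, holds its output through any mixed data/empty input, transitioning only at the two completeness thresholds.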
Multi-Rail Variable Representation
The mutually exclusive values of a variable are represented with multiple rails, each representing one value which is asserted when its rail is data. It is illegal for more than one rail to transition to data at a time. When all rails are empty the variable is empty. A variable monotonically transitions between data completeness (one rail at data) and empty (all rails at empty). Below are examples of multi-rail variables.
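For instance (the helper names are mine), a value of an N-valued variable can be encoded one rail per value:

```python
def encode(value, arity):
    """One rail per mutually exclusive value; exactly one rail at data."""
    rails = [0] * arity
    rails[value] = 1
    return rails

def is_data_complete(rails):
    return sum(rails) == 1   # exactly one rail at data

def is_empty(rails):
    return not any(rails)    # all rails at empty
```

So `encode(2, 3)` yields `[0, 0, 1]`, a trinary variable asserting its third value; `[0, 0, 0]` is the same variable at empty.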
Combinational Logic
Multi-rail variable interactions can be characterized as combinations of rails (values) transitioning to data, each of which combinations can be recognized by a dual threshold operator. A universal mapping can be assumed for the transition to empty (transition outputs to empty only when all inputs are empty), so a combinational network of dual threshold operators can be specified as a transition table of data value (rail) interactions.
The combinational network below, specified by the table at its left, adds a binary variable to a trinary variable producing a quaternary variable. Computation begins with the input variables, the output variable and the operators empty (all rails empty). When the trinary and the binary variables transition from empty to data completeness, only one dual threshold operator in the first rank will meet its data completeness threshold and transition its output to data, ultimately transitioning the quaternary output variable to data completeness (one rail data), indicating that the input variables are data complete and that the quaternary value is the correct resolution of the presented input.
When the input variables transition to empty, the dual threshold operator in the first rank that had transitioned to data will transition its output to empty, emptying the output quaternary variable. The input and output variables and the operator are again empty (all rails empty) and ready to accept a next transition of input variables to data.
If the input variables monotonically transition between data completeness and empty with an appropriate delay between transitions then the dual threshold operators of the network will monotonically transition between data and empty and the output variable will monotonically transition between data and empty. The behavior of the network as a whole including all the concurrency in the network is deterministically coordinated (no glitching) by the monotonic input behavior, by the completeness behavior of individual dual threshold operators and by the logical structure of the network. The combinational network exhibits the same completeness behavior as its component dual threshold operators.
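The binary-plus-trinary network can be modeled behaviorally (a self-contained sketch; the class names and gate model are mine). Each first rank gate is a 2-of-2 dual threshold operator recognizing one (binary, trinary) value pair; each quaternary output rail collects, through a 1-of-n gate, the pairs whose values sum to it:

```python
class Th:
    """n-of-m dual threshold gate: data at n inputs data, empty only
    when all inputs empty, otherwise hold (value holding behavior)."""
    def __init__(self, n):
        self.n, self.out = n, 0

    def step(self, ins):
        if sum(ins) >= self.n:
            self.out = 1
        elif not any(ins):
            self.out = 0
        return self.out

class BinaryPlusTrinary:
    def __init__(self):
        # first rank: one 2-of-2 gate per (binary, trinary) value pair
        self.rank1 = {(b, t): Th(2) for b in range(2) for t in range(3)}
        # second rank: one 1-of-n gate per quaternary output rail
        self.rank2 = [Th(1) for _ in range(4)]

    def step(self, bin_rails, tri_rails):
        mid = {(b, t): g.step([bin_rails[b], tri_rails[t]])
               for (b, t), g in self.rank1.items()}
        # quaternary rail k gathers the value pairs with b + t == k
        return [self.rank2[k].step(
                    [v for (b, t), v in mid.items() if b + t == k])
                for k in range(4)]
```

Presenting the data-complete inputs for 1 and 2 transitions exactly one quaternary rail (the rail for 3) to data; presenting all-empty inputs empties the whole network again.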
Greater Combinational Composition
In a greater combinational network constructed of first level combinational networks, the output of each first level combinational network presents a single correct input transition to its successor first level combinational networks, fulfilling their monotonic input requirements.
For each network input transition (to data or to empty) a wavefront of correct monotonic transitions propagates through the greater network. There are no incorrect or spurious transitions. Both concurrent and ordered flow behaviors of the greater network are fully coordinated by the monotonic transition behavior and by the completeness behaviors. The transition completeness of the output variables of the greater network implies that its input variables have completely transitioned and that the output is the correct resolution of the presented input (data complete – completely empty). The requirement for monotonic transitioning with an appropriate delay between transitions is now relegated to the input environment of the greater network.
The network as a whole behaves with the same completeness behavior including the deterministic coordination of all concurrency as its component combinational networks. A composition of logically determined networks is itself logically determined. Completeness behavior scales. A network with fully determined flow behavior emerges atop the network of dual threshold operators.
The Emergent Oscillation – Liveness and Flow Control
Data completeness and emptiness of the output data variables can be recognized with dual threshold operators, fed back to the input, inverted and presented to a spanning rank of dual threshold operators which regulates the input of the network. Output data completeness will allow transitions to empty into the network. Output empty completeness will allow transitions to data into the network. Once a transition wavefront is allowed, the enabling rank of dual threshold operators will maintain the allowed wavefront until the network detects the completeness of its flow and allows the next transition wavefront.
The network self regulates the monotonic transitioning of its input. The input environment still has an imposed requirement for monotonic transitioning but now the network itself determines the appropriate delay between transitions. The input environment must now wait for each input transition wavefront to be allowed into the network.
The network is a closed circuit with delay and a single inverter: the form of a ring oscillator. The closed network continually strives to transition between two states, data and empty, the behavior of a ring oscillator. However, the transitioning is not necessarily periodic, as is the familiar case with free, unencumbered oscillators. The transitioning of these oscillators can be delayed, modulated by signals external to the oscillator. Nevertheless, they continually strive to transition, and as long as their transitions are externally allowed they transition alternately between two states, oscillate, however aperiodically. The network fulfills two and a half properties of the conventional view of a ring oscillator, so it will be referred to as a ring oscillator in this context.
With the inversion of the completeness feedback, the network of dual threshold operators becomes a spontaneously alive ring oscillator continually striving to monotonically transition between data completeness and completely empty. The oscillation provides liveness. The input enable rank provides memory. The output completeness provides flow control. The encompassed combinational network provides value transformation behavior. It is a spontaneously alive, self regulating computation expressed solely in terms of logical relationships. No time interval. No register. No controller. No timing closure. No mathematician.
Linked Oscillations: Pipeline Flow Networks
Placing the input regulation logic of one oscillation before the output completeness logic of another oscillation links the two oscillations with a shared completeness behavior coordinating their oscillation behavior. Each oscillation maintains its input and its output and does not oscillate until its output is stably passed to its successor oscillations. As oscillations individually oscillate, alternating wavefronts of data and empty flow from oscillation to oscillation with data wavefronts being transformed as they flow through the combinational logic within each oscillation.
The requirement of monotonic transitioning is now relegated to the input environment of the network of linked oscillations as a whole, and if fulfilled, successive waves of monotonic transitions will flow through the network with full coordination of all concurrent and ordered behaviors realizing a logically determined computation. In the network below, for instance, the flows of the concurrent oscillations A and B are coordinated by the completeness behavior of oscillation C.
In a network of linked oscillations the behaviors of data transformation, memory, flow coordination and liveness are fully integrated and uniformly distributed. No time interval. No registers. No controller. No timing closure. No mathematician. Only logical relationships.
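The linking discipline can be sketched as a toy behavioral model (my construction, abstracting away the dual threshold detail): a stage holds its wavefront until its successor has emptied, so wavefronts flow stage to stage without loss or overwriting:

```python
EMPTY = None

class Stage:
    """One oscillation in a pipeline: holds a wavefront until the
    successor's completeness behavior allows it to pass."""
    def __init__(self, transform=lambda x: x):
        self.held = EMPTY
        self.transform = transform

    def ready(self):
        # input enable: a stage can accept only when it is empty
        return self.held is EMPTY

    def accept(self, wavefront):
        self.held = self.transform(wavefront)

    def pass_to(self, successor):
        # flow is allowed only by the successor's emptiness
        if self.held is not EMPTY and successor.ready():
            successor.accept(self.held)
            self.held = EMPTY
```

With two linked stages, a wavefront accepted by the first is transformed again as it flows into the second, and a wavefront presented while the second is still full simply waits at the first.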
Structured Flow Paths and Abstract Components
A flow path can be a structure of multiple multi-rail variables with a spanning enable and spanning completeness, as shown below with the 4-bit adder. Stylized operator symbols are introduced that abstractly represent computations with flow paths of any width and structure. The abstract representations can be composed and further abstracted.
The Ring: Memory
A pipeline can feed back on itself, forming a ring of oscillations creating a local memory of previous behavior. A network of linked rings produces a computation with memories of previous behavior distributed throughout the network.
In the network below, implementing a state machine, the remembered previous behavior is the transition to the current state, which is fed back through D and presented as input A. The fed back transition wavefront will sit at A and wait indefinitely for a B transition wavefront to arrive and form input completeness, at which point the wavefronts will flow through the update computation producing the next state. A and B are concurrent flows coordinated by the completeness behavior of oscillation C.
The state machine, realized as a network of dual threshold operators, is spontaneously alive and self sequencing with no external support except for the presentation of the input.
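Behaviorally (the update function and values below are hypothetical, not from the source), the ring reduces to: the current-state wavefront waits until an input wavefront arrives, then both flow through the update logic and the next state circulates back:

```python
def state_machine(update, initial_state, inputs):
    """Generator sketch of the ring: the state is fed back (through D)
    to A, and each B input wavefront completes input, enabling update."""
    state = initial_state
    for b in inputs:            # the state wavefront waits at A for each B
        state = update(state, b)
        yield state             # next state circulates back around the ring

# e.g. a toggle machine: the state flips whenever the input is 1
toggle = lambda s, b: s ^ b
```

Driving the toggle machine with the inputs 1, 1, 0, 1 from state 0 walks it through the states 1, 0, 0, 1.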
Self Contained Computation
If the feedback associated with the input controls the transitioning of the input (memory read request or sensor sample request), then the requirement for monotonic transitioning relegated to the input environment of the network is satisfied and is no longer imposed on the input environment. The network becomes self contained, self coordinated and spontaneously alive; independently and continually performing its computation. There is no referent for the network and its behavior other than the network itself. No external controller. No external time referent. No mathematician. Within the network there is no part that is greater than any other part. There is no central anything. There is no global anything. It is a spontaneously behaving whole composed of spontaneously behaving parts.
Immersed Computation
If the input sensor is sensitive to the output behavior, then an external feedback path around the network is established, forming the network as a whole into a ring, part of which is the logical network and part of which is the environment; a self contained, self referential, continually computing network sensitive to its environment and, with memory of previous behavior, capable of adjusting its behavior. It is an integral locality of a larger computation that it does not, itself, encompass and, in turn, is composed of localities that do not encompass its wholeness.
Computation Primitives
Variables that can indicate their own validity and invalidity of value; completeness operators that can manage their own behavior in cooperation with the variables; which combine in networks that coordinate their concurrent flow with completeness behavior; which networks close on themselves to create oscillations; which oscillations, linked with completeness behaviors, form pipeline networks; which pipeline networks, closing on themselves, form ring networks which can remember past behavior; and which ring networks finally control their own input, becoming independently and spontaneously behaving computations. These concepts, all encompassed in a single coherent logic (1), are primitive concepts of computation that are sufficiently expressive on their own behavioral merit, and that can serve computer science better than computation primitives that assume the behavior of a real or emulated mathematician.
Computational Confidence
We trust sequenced state because, given knowledge of the input and the sequence, one can predict the values in the state space between steps, pause the sequence of steps and observe the state values.
Why might we trust spontaneously flowing, concurrent, fully determined logic? Given knowledge of the input and the network structure of linked oscillations, one can predict the sequence of values that will flow through and out of each oscillation. The output of selected oscillations can be tapped and the succession of wavefronts observed.
The mechanisms of observation are different, but the predictability and observability of computational progress correspond, because the structure and behavior of the state space is one of the possible sequences derived from a dependency specification and the network of linked oscillations is a direct realization of the same dependency specification.
But what if the input is random and prediction is not possible? Then confidence must reside in the expression of the computation. A key factor of sequential confidence is the “precisely defined step”. But the behavior of a step containing a Boolean network with its timing anomalies (glitches) and its reliance on external control is determined by human coordination.
An oscillation, the counterpart of step, contains a network of dual threshold logic operators and logical links with other oscillations that fully determine its behavior solely in terms of logical relationships.
Should we trust the behavior of a coherent determinate logic or the behavior of an ad hoc construction of heterogeneous parts coordinated by humans?
Summary
Mathematics is a world of abstract relations. But integral to the meaning and functioning of mathematical abstractions is a mathematician to actualize them and manage their behavior. The task of computer science is to directly actualize abstractions of computation, without the mathematician. But the mathematical abstractions without the mathematician are incomplete and do not behave correctly on their own merits. A Boolean equation in the hands of a mathematician provides a single correct answer. A directly instantiated Boolean equation provides incorrect answers because of concurrent behaviors that the mathematician can coordinate but that the directly instantiated Boolean equation cannot.
Computer science has addressed this quandary by managing the unruly concurrent behavior with time and sequence, isolating the unpredictable concurrent behavior within a time interval and transforming a Boolean network, as a whole, into a predictable sequential step. One can then consider the sequential step as a primitive behavior and consider the concurrent behavior within the step as rendered inconsequential. Isolation might be a reasonable approach if the chaos of primitive concurrent behavior were an existential inevitability. But it is not. There is a logic that can manage its primitive concurrent behavior with deterministic predictability.
It is a logic that serves as a primitive abstraction of computation specification as well as a primitive abstraction of computation behavior, a logic that does not assume the behavior of a mathematician but that stands on its own expressional and behavioral merits, a logic that models primitive concurrent computation behavior delivering logical determinism rather than logico-temporal-anthropic adequacy, a computer science logic in contrast to a mathematical logic.
Computer science is different from mathematics. While mathematics uses the mathematician for computation, computer science strives to transcend the mathematician and, ultimately, to account for the mathematician as itself a computation. The ghost of the mathematician in computer science creates a referential circle that undermines both ambitions.
1. Karl M. Fant, Logically Determined Design: Clockless System Design with NULL Convention Logic, (Hoboken, New Jersey, Wiley Interscience, 2005).
The purpose of this site is to present and explore a new view of computation and computer science.
not as a sequence of steps controlling a machine altering contents of a memory but as wavefronts of computation and state spontaneously flowing through a network of linked oscillations,
not as clock actualized, step by step, time determined, centralized control but as self actualizing, event driven, logically determined, distributed concurrent local coordination,
not as information manipulation but as information interaction,
nothing global, nothing central, nothing timed.
It is a model of computation that applies to all forms of computation natural and artificial and that applies uniformly to all levels of abstraction from primitive implementation through general programmability.
Introduction
Computer science is formulated with concepts borrowed from mathematics. Even though mathematics defines mathematical computation and computer science is about computation, it is argued here that there are fundamental differences between the two, that computer science is not well served by the borrowed concepts and that there exists a conceptual grounding that more effectually addresses the goals and problems of computer science.
At the center of mathematics is a mathematician performing computations. At the center of computer science are computations performing all by themselves. Computer science, quite reasonably, was founded on computational concepts borrowed from mathematics, which concepts, however, are guides for the mathematician with pencil and paper who provides liveness, flow coordination and memory. Concepts which were never intended to stand on their own apart from the mathematician. In computer science, these concepts are realized by emulating a mathematician. This might seem perfectly reasonable, indeed, computer science was founded with a machine explicitly emulating a mathematician with pencil and paper, but nature computes without reference to mathematician, pencil or paper and computer science strives to encompass all computation including the computations of nature. A mathematician’s brain, for instance, does not compute by emulating a mathematician.
The ghost of the mathematician that haunts computer science is an impediment to understanding computation in itself, and while we have gotten a lot of mileage out of emulating the mathematician we can get even more mileage out of transcending the mathematician. Part 1 focuses on difficulties with two concepts borrowed from mathematics, Boolean logic and the sequential process, and Part 2 suggests an alternative abstraction that better serves the goals of computer science.
Boolean Logic
In mathematics, Boolean logic, in conjunction with a mathematician, is considered a theoretically fundamental primitive of computation. But enlivened Boolean logic on its own, without a mathematician, is an incomplete model of computation. It cannot determine when a resolution has begun nor when a resolution has completed, and it can generate incorrect results.
If an input is presented to a Boolean logic network that causes no transition activity (same input as previous input), the logic cannot, itself, determine that a resolution has begun nor that a resolution has ended.
If an input is presented that causes transition activity in the network (different from previous input) and there are delays associated with the functions of the network, the functions will temporarily output incorrect results which are presented to successor functions. The network asserts a chaos of incorrect transitions (glitches) before the input completely propagates and the network settles asserting correct result values for the presented input. Again, the logic cannot determine amidst the chaotic transitioning when an input is completely presented or when a resolution is completed.
There must be an extra logical time interval referent, coordinated with the actual delay behavior of the Boolean network, to control the presentation of input (the beginning) and to wait long enough for the worst case chaotic transitions of the network to settle to a stable output before accepting the output (the end). The waiting long enough is accomplished with explicit memory elements (registers – data lifeboats) which, at the beginning of each interval, present stably maintained input to the Boolean network, which, during the interval, isolate their contents from the indeterminate chaos of the transitioning Boolean network, and which, at the end of the interval, accept the stably presented output data of the Boolean network.
The recurring time interval and the memories emulate the flow coordination and memory of a mathematician with pencil and paper in relation to an enlivened Boolean logic network. Boolean logic cannot compute on its own logical merits without this emulated ghost of a mathematician.
So what, you say? It works. But what if there is a logic that can compute on its own logical merits, that has no need of any extra logical referent such as the ghost’s time interval and memories? Might it serve better than Boolean logic both conceptually and practically?
The Sequential Process
In mathematics, step by step sequence is fundamental to proof. If a proof proceeds in any manner other than one humanly understandable step at a time, it cannot be accepted. Sequence of computation is not as fundamental but is deeply ingrained by history, practice and concept. Historically, the 12th century place-value arithmetic algorisms specified sequences of steps that anybody could follow to perform a computation. This meshed with a central feature of mathematics: the rare resource of a single mathematician, which must be used over and over in sequence to perform a computation. Computing by a sequence of steps became standard practice.
The sequence of proof and the sequence of computation became conflated in Turing’s machine which was a proof about computation in which sequential computation was an integral part of the proof. Turing’s machine became a definition of mathematical computation which eventually became the notion of algorithm which, in turn, became a fundamental paradigm of computation in computer science.
“The notion of the algorithm is basic to all computer programming…”(1)
“One of the concepts most central to computer science is that of an algorithm.”(2)
The notion of the algorithm is typically defined by simply presenting a list of properties that an expression must possess to qualify as an algorithm. The following is typical.
1. An algorithm is a sequence of steps that must terminate in a finite number of steps.
2. Each step must be precisely defined.
3. An algorithm must effectively and deterministically yield a correct solution. (3)
It is easy to see how this list of restrictive characteristics serves to define what might be acceptable as a mathematical proof or computation, but what conceptual service does the distinction algorithm/not algorithm perform for computer science? Is the expression of one fundamentally different from the expression of the other? Is one interpreted differently from the other? Are algorithms first class citizens in some sense and non-algorithms second class citizens?
Important programs do not qualify as algorithms. An operating system is not supposed to terminate nor does it yield a singular “solution”. A program with truly random input is not an algorithm. No program with a bug can be an algorithm and it is generally accepted that no significant program can be demonstrated to be bug free. Programs and computers that utilize concurrency are not algorithms. What does it mean when a sequential program qualifying as an algorithm is parallelized by a vectorizing compiler and no longer qualifies as an algorithm?
These questions have not gone unnoticed.
“…there is an extension of the notion of algorithm (called nondeterministic algorithms).”(4)
“Any computer program is at least a semi-algorithm and any program that always halts is an algorithm.”(5)
“A procedure which has all of the characteristics of an algorithm except that it possibly lacks finiteness may be called a ‘computational method.’”(6)
“There is another word for algorithm which obeys all of the above properties except termination and that is computational procedure.”(7)
“An algorithm A is a probabilistically good algorithm if the algorithm “almost always” generates either an exact solution or a solution with a value that is “exceedingly close” to the value of the optimal solution.”(8)
“The procedure becomes an algorithm if the Turing machine always halts”.(9)
“By admitting probabilistic procedures in algorithms….”(10)
“…if, after executing some step, the control logic shifts to another step of the algorithm as dictated by a random device (for example, coin tossing), we call the algorithm random algorithm.”(11)
“An algorithm which is based on such convergence tests is called an infinite algorithm.”(12)
“Algorithms that are not direct are called indirect.”(13)
“We drop the requirement that the algorithm stop and consider infinite algorithms” .(14)
With these exercises in labeling, a computation can now not terminate, not be deterministic and not yield a specific solution (not be an algorithm). All that is left is “sequence of precisely defined steps”. Instead of trying to “fix” the notion of the algorithm perhaps we should consider the possibility that the notion is not as fundamental to computer science as once supposed.
Mathematicians and computer scientists are pursuing fundamentally different aims.
Mathematics is concerned with the nature of specific computations
independent of how they might be expressed.
Computer science is concerned with the nature of the expression of computation
independent of any specific computation.(15)
The primary questions of computer science are not of computational possibilities but of expressional possibilities.
“computer science itself becomes no more (and no less) than the discipline of constructing appropriate descriptive languages.”(16)
Is limiting description to “a sequence of precisely defined steps” a help, a hindrance or simply beside the point?
What is a step? How is it sequenced? What does it do? The notion of the algorithm remains silent. A good engineer, however, can fill in the blanks. If there is sequence there must be a sequence controller to instantiate each step in the proper order. Each step must perform some behavior which must ultimately accumulate to realize the computation so there must be an accumulating memory which must be equally accessible to all steps in the sequence. Again, the engineer, filling in the blanks of an incomplete model, emulates the liveness, flow control and memory of the mathematician with pencil and paper.
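The engineer’s fill-ins can be sketched concretely. Below is a minimal, purely illustrative interpreter loop: a program counter plays the sequence controller instantiating each step in order, and a dictionary plays the accumulating memory equally accessible to all steps. The two instruction names are invented for this sketch, not drawn from any real machine.

```python
def run(program, memory):
    pc = 0  # the sequence controller: selects each step in the proper order
    while pc < len(program):
        op, *args = program[pc]
        if op == "set":      # memory[name] = constant
            memory[args[0]] = args[1]
        elif op == "add":    # memory[dst] = memory[a] + memory[b]
            memory[args[0]] = memory[args[1]] + memory[args[2]]
        pc += 1              # advance to the next step in the sequence
    return memory

# each step reads and writes the one shared, accumulating memory
mem = run([("set", "x", 2), ("set", "y", 3), ("add", "z", "x", "y")], {})
```

Nothing in the steps themselves expresses liveness or flow: the `while` loop and the shared dictionary supply them from outside, just as the text describes.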
The Borrowing Boolean logic and the sequential process are not incomplete because of an inadequate application of thought. Both the Boolean logic and sequential process models of computation serve quite adequately in association with a mathematician. Given these adequate models of mathematical computation, all that was needed was to make them work without a mathematician, which can be considered an engineering problem in contrast to a conceptual problem. Methods of emulating the behavior of the mathematician were quickly developed, methods so serviceable and economically viable that they still serve today, albeit with rapidly decreasing economy. So discovering a coherent model of computation from first principles that do not include a mathematician was not a burning concern. Early efforts in this direction such as asynchronous circuits, cellular automata, neural networks and the perceptron did not thrive.
We have been very successful at emulating the mathematician, but can computer science dispense with the ghost of the mathematician and model computation with its own unique conceptual foundations? We must first understand why the mathematician ghost persists.
The Confusions of Sequentiality Emulating the mathematician results in concepts of computation locked in a tangle of mutual dependencies that collectively resist the evolution of any individual concept.
The Inherent Concurrency of Sequentiality Sequence, in and of itself, does not specify a computation. A sequence has to realize the correct dependency relations among the steps. A computation is specified as a progression of dependent operations, as with an algebraic equation, a flow chart or a programmer’s jottings. This dependency specification inherently specifies operations that can occur simultaneously or in any order. The “simultaneously” specifies possible concurrency. The “in any order” means that the possible concurrency of the dependency specification can be mapped to a sequence of operations. But it also means that there can be a multitude of correct sequential orders that realize the dependency relations of the computation.
Below at left is a dependency specification. A and B can occur in any order before D, but if either A or B occurs after D the sequence is incorrect. C can occur in any order with A, B and D. At right are the eight correct sequences derivable from just the shaded portion of the dependency specification. The other 16 possible sequences are all incorrect. The only means of distinguishing a correct sequence from an incorrect sequence in a typically enormous set of possible sequences (one might call this the chaos of sequentiality) is to refer to the dependency specification with its expression of concurrency.
The dependency relations and their possible concurrency must be understood to construct a correct sequence. The expression of sequence derives from the expression of concurrency. It cannot be otherwise.
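The counts above can be checked mechanically. A minimal sketch in Python, treating the dependency specification (A and B must both precede D; C is unconstrained) as a filter over all 24 possible orderings of the four steps:

```python
from itertools import permutations

steps = "ABCD"

def respects(seq):
    # the dependency specification: A and B must both occur before D;
    # C may occur anywhere
    return seq.index("A") < seq.index("D") and seq.index("B") < seq.index("D")

all_orders = list(permutations(steps))
correct = [s for s in all_orders if respects(s)]
# 8 of the 24 possible sequences realize the dependency relations;
# the other 16 are incorrect
```

The filter itself is the dependency specification: without it, nothing in a bare sequence of the four symbols distinguishes the 8 correct orders from the 16 incorrect ones.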
It would seem that the dependency specification might make a more concise and coherent object of primitive consideration for computer science than sequence. So, why is it not? Blame the concurrency.
The Fear of Concurrency Time indeterminacy:
“The introduction of concurrency into computation opens Pandora’s box to release the possibility of non determinacy and a host of other complications, including deadlock and livelock …. Events within concurrent systems do not necessarily occur at predetermined times, but may occur in different orders depending upon the relative speeds of different system parts. The usual assumption, which is also physically realistic, is that relative speeds cannot be controlled so precisely that the designer or programmer can depend upon them to enforce sequencing requirements.”(17)
“Unfortunately, asynchronous circuits are difficult to design because they are concurrent systems. When trying to understand a synchronous circuit, it is possible to pretend that events occur in lock-step. Variations in the speeds of functional elements can be ignored, since the clock is timed to allow for the worst-case delays. The lock-step model does not work for speed-independent designs — a correct design must exhibit proper behavior no matter what the speeds of its components. When several components are operating concurrently, there may be a very large number of possible execution paths, each corresponding to a different set of delays in the components. Non determinism resulting from unknown or varying delays is the essential problem in all concurrent systems. For a property to hold for a concurrent system it must hold for every possible execution.”(18)
Execution path indeterminacy:
“Although the architectural freedom of asynchronous systems is a great benefit, it also poses a difficult challenge. Because each part sets its own pace, that pace may vary from time to time in any one system and may vary from system to system. If several actions are concurrent, they may finish in a large number of possible sequences. Enumerating all the possible sequences of actions in a complex asynchronous chip is as difficult as predicting the sequences of actions in a schoolyard full of children. This dilemma is called the state explosion problem. Can chip designers create order out of the potential chaos of concurrent actions?”(19)
To understand what these authors are talking about, consider again the Boolean logic combinational network. It is a directly instantiated dependency specification with all the attendant concurrency, and it delivers incorrect behavior (it glitches). The relative speeds of the functions can vary and their outputs occur in different orders, effecting a variety of execution paths and causing a flurry of incorrect outputs (the exploding state space).
The above quotes imply that the only way to avoid the glitching is for all input transitions to arrive simultaneously or in specific orders, neither of which can be expressed in terms of Boolean logic nor assured by timing in a practical Boolean logic network. The stock solution is to isolate and contain the indeterminate chaos (glitching) of the combinational network by bounding it with a time interval within which the Boolean network will stabilize to a correct output. The time interval, recurring, sequences the behavior of the network, turning the Boolean network as a whole into a timed step of sequential computation. The chaos of concurrency is tamed with time and sequence. Sequence, thus, appears to be more fundamental than concurrency. But concurrency lurks in the Boolean combinational network within each sequential step. So, is not concurrency more fundamental than sequence?
From the mathematician’s point of view, any concurrent expression (a so-called partial ordering) can be reduced to a sequential expression (a so-called total ordering). The chaos of concurrency can be tamed simply by mapping it to a controlled sequence. Sequentiality appears to reside at a reductive conceptual bottom. But sequence control cannot be primitive. The most primitive sequence controller cannot itself be sequence controlled or “controlled” in any sense but must proceed in terms of its own intrinsic behaviors which may include concurrent behaviors which cannot be reduced to controlled sequence. So concurrency must, ultimately, reside at the reductive conceptual bottom.
On one hand it appears that concurrency can and should be characterized in terms of sequentiality: cooperating sequential processes(20), communicating sequential processes(21), interleaved sequential processes(22) and so on.
On the other hand, as shown, sequentiality derives from concurrency, sequential steps contain concurrency, and sequential control cannot preclude all concurrency. Sequentiality cannot fully characterize concurrency and, so, cannot be primitive.
The difficulty is that characterizing concurrency in terms of sequentiality, which is itself derived from concurrency, circularly begs the question of primitivity and builds a house of cards in the clouds. Concurrency is fundamental to computer science no matter how you cut the pie and must be confronted directly and primitively.
Is the chaos of concurrency an existential inevitability or is it an artifact of a conceptual point of view? Why is a Boolean combinational network chaotic?
The Mathematical Function A Boolean combinational network glitches because it is a network of mathematical functions. A mathematical function is a mapping rule for the mathematician that was never intended to behave on its own, and when directly instantiated becomes a continuously responsive mapping behavior, continually presented with input values and continually presenting a result value. A continuously responsive behavior can temporarily output an erroneous result if its inputs do not transition simultaneously. The erroneous outputs of functions early in a network of functions will present incorrect result values to subsequent functions, resulting in a blizzard of incorrect values rippling through the network followed by a wavefront of correct values that eventually stabilizes the output of the network to the correct resolution of the presented input values. There is no way to determine, from the network behavior itself, when the stabilization occurs and, as lamented above, the blizzard of incorrectness cannot be avoided either in terms of the logic or with timing control. An external influence must determine when the computation begins and when the computation ends.
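The glitch can be made concrete with a toy simulation, not drawn from the text but from the classic static-1 hazard example: the network f = (a AND b) OR ((NOT a) AND c) with b = c = 1 should output 1 for either value of a, yet when a falls, the path through the inverter lags the direct path and f momentarily reads 0. Assuming one tick of delay per gate:

```python
def simulate(a_wave, ticks):
    b = c = 1
    # gate outputs initialized to the steady state for a = 1
    not_a, and1, and2, f = 0, 1, 0, 1
    trace = []
    for t in range(ticks):
        a = a_wave[t]
        # one-tick gate delay: next outputs computed from current inputs
        not_a, and1, and2, f = (1 - a), (a & b), (not_a & c), (and1 | and2)
        trace.append(f)
    return trace

# a holds 1 for one tick, then falls to 0 and stays there;
# the trace of f shows a transient 0 (the glitch) before restabilizing at 1
trace = simulate([1, 0, 0, 0, 0, 0], 6)
```

Nothing inside the network announces when the transient has passed; only an external observer waiting "long enough" can know the output is the stabilized, correct one.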
But what if a function behavior could appreciate when a mapping starts and when a mapping finishes and output only a correct result? Might a concurrent network composed of such behaviors exhibit fully determined behavior?
But first, why is a function continually presented with data values to map?
Passive Symbols: Variables and Their Values The notion of a variable continually asserting a value is not a mathematical concept. An algebraic variable does not assert a value but is an empty place holder to be substituted with a value when the equation is used(23). The mathematician substituting the input variables (filling its emptiness) of a function is the start of a computation and when the mathematician writes down the result value substituting the output variable (filling its emptiness) the computation is finished.
The notion of constantly asserted data values arises from a confluence of binary representation in mathematics, the two state representation of digital electronics, the correspondence of Boolean logic with binary switching and the notion of state space from physics.
A binary variable has two values mapped to the two states of electronic representation so an electronic binary variable is always asserting a data value and cannot not assert a data value. Consequently, an electronic Boolean logic switching function is continually presented with data values by its electronic variables.
In physics a state space consists of a collection of variables that continually assert values and which are continually interacting and transforming their values. Physicists imagine sampling a dynamic state space at an instant to observe a static state.
“We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.”(24)
The “state” is the record of progress of the computation of the universe and state space has come to be generally regarded as the formal record of progress of any computation. Computer science tailored this dynamic notion of a state space of variables continually asserting values to allow periodic instances of sampleable stability. The values of variables transition only during a computation step and the state space of variable values is static between steps. This allows predicting and observing the expected state after each step which brings us to the crux of the matter: computational confidence.
The Mutual Dependency of State Space and Sequence For a state space to be reliably observable there must be intervals of stability with no transitions occurring. The notion of controlled sequence provides such intervals. Sequence, in turn, relies on intervals of stable state. Each sequential step observes a part of the state, transitions a part of the state and leaves a stable state for the next step to observe.
If sequential steps are allowed to overlap (to be concurrent), the possibility arises that a step might observe a transitioning state (a race) and, given the enormous variety of correct sequences, it becomes difficult to ensure the integrity of state observation for overlapping steps.
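The race can be exhibited exhaustively with a small sketch, illustrative rather than from the text: two processes each perform a read step and a write step (read x, then write back x + 1) on a shared state. Enumerating every interleaving that preserves each process's own order shows that non-overlapping schedules leave x = 2, while any schedule in which one process reads the still-transitioning state loses an update and leaves x = 1.

```python
def interleavings(a, b):
    # all merges of two step sequences preserving each sequence's internal order
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

def run(schedule):
    x = 0          # the shared state
    local = {}     # each process's private observation of the state
    for proc, op in schedule:
        if op == "read":
            local[proc] = x
        else:                      # "write": commit the incremented observation
            x = local[proc] + 1
    return x

P = [("P", "read"), ("P", "write")]
Q = [("Q", "read"), ("Q", "write")]
finals = {run(s) for s in interleavings(P, Q)}
# finals contains both 2 (correct) and 1 (lost update from a raced read)
```

Even with only four steps there are six interleavings and two distinct outcomes; the state explosion the quoted authors describe is this same enumeration grown beyond tractability.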
If the state space fragments, the possibility of simultaneous observation arises, leading to the possibility of overlapping steps.
If any inherently concurrent behavior does exist, such as in a Boolean logic network, it must be isolated within a sequential step such as a time interval, a semaphore state, a channel access and so on.
Computational Confidence Predictable observability is the touchstone of human computational confidence. The computer science notion of state space, entwining time, memory and sequence control in a tangle of mutual dependencies, is all about predictable observability. Concurrency messes with this touchstone of computational confidence in ways that effectively destroy it. Hence, the fear of concurrency. Yet concurrency exists and is fundamental to computer science. Might there be another path to computational confidence that does not preclude concurrency?
Continued in part 2.
1. Donald E. Knuth, Fundamental Algorithms (Reading, Mass., Addison-Wesley, 1969) p. 1.
2. Zenon W. Pylyshyn, “Theoretical Ideas: Algorithms Automata and Cybernetics,” in Perspectives on the Computer Revolution, ed. by Zenon W. Pylyshyn (Englewood Cliffs, N. J., Prentice-Hall, 1970) pp. 60-68.
3. Donald E. Knuth, Fundamental Algorithms (Reading, Mass., Addison-Wesley, 1969) pp. 4-6.
4. M.S. Carberry, H.M. Khalil, J.F. Leathrum and L.S. Levy, Foundations of Computer Science (Potomac, MD, Computer Science Press, 1979) p. 16.
5. R.R. Korfhage, “Algorithm,” in Encyclopedia of Computer Science, eds. Anthony Ralston, Chester Meek, 1st ed. (New York, Petrocelli/Charter, 1976) p. 49.
6. Donald E. Knuth, Fundamental Algorithms (Reading, Mass., Addison-Wesley, 1969) p. 5.
7. Ellis Horowitz and Sartaj Sahni, Fundamentals of Computer Algorithms (Potomac, MD, Computer Science Press, 1979) pp. 1-2.
8. Benjamin W. Wah, C. V. Ramamoorthy “Theory of Algorithms and Computation Complexity with Applications to Software Design,” in Handbook of Software Engineering, ed. by Charles R. Vick and C. V. Ramamoorthy (New York, Van Nostrand Reinhold, 1984) p. 88.
9. Kurt Maly, Allen R. Hanson, Fundamentals of the Computing Sciences (Englewood Cliffs, N.J., Prentice-Hall 1978) p. 41.
10. F. S. Beckman, Mathematical Foundations of Programming (Reading, Mass., Addison-Wesley,1980) p. 398.
11. E. V. Krishnamurthy, Introductory Theory of Computer Science (New York, Springer-Verlag, 1983) p. 3.
12. E. V. Krishnamurthy, Introductory Theory of Computer Science (New York, Springer-Verlag, 1983) p. 3.
13. John K. Rice, John R. Rice, Introduction to Computer Science (New York, Holt, Rinehart and Winston, 1969) p. 47.
14. John K. Rice, John R. Rice, Introduction to Computer Science (New York, Holt, Rinehart and Winston, 1969) p. 49.
15. Karl M. Fant, Computer Science Reconsidered: The Invocation Model of Process Expression, (Hoboken, New Jersey, Wiley Interscience, 2007). p. 7
16. Harold Abelson, Gerald Jay Sussman with Julie Sussman, Structure and Interpretation of Computer Programs (Cambridge Ma., MIT Press, New York, McGraw-Hill, 1985) p. 295.
17. Charles Seitz, ed., Resources in Parallel and Concurrent Systems (New York, ACM Press, 1991) introduction, p. ix.
18. David L. Dill, Trace Theory for Automatic Hierarchical Verification of Speed-Independent Circuits, (Cambridge, MIT Press, 1989). pp. 2,3.
19. Ivan E. Sutherland and Jo Ebergen, “Computers Without Clocks”, Scientific American, Vol. 287, No. 2, August 2002, pp. 62-69.
20. E. W. Dijkstra, “Cooperating Sequential Processes,” in Programming Languages, ed. F. Genuys (New York, Academic Press, 1968) pp. 42-112.
21. C. A. R. Hoare, Communicating Sequential Processes, (Englewood Cliffs, New Jersey, Prentice Hall International, 1985).
22. M. Ben-Ari, Principles of Concurrent Programming, (Englewood Cliffs, New Jersey, Prentice Hall International, 1982). p. 18.
23. W. Gellert, H. Kustner, M. Hellwich, H. Kastner, eds., The VNR Concise Encyclopedia of Mathematics (New York, Van Nostrand Reinhold, 1977) p. 40.
24. Pierre Simon Laplace, A Philosophical Essay on Probabilities, trans. F. W. Truscott and F. L. Emory (New York, Dover, 1951) p. 4.