# Rethinking Computer Science Part 5: The Provenance of Computation

What is the most primitive provenance of computation and how does it extend to the complexities of life and mathematics?

Differentness
Everything begins with differentness. Nothing arises from sameness. Differentness is appreciated in interaction. Nothing accrues from not interacting. Interaction of differentness manifests with change. Interaction without change is a non sequitur. Change is differentness in time, an after that is different from a before. Change is appreciated by a persistence spanning the change. Without spanning persistence there is no referent for change. Change occurs with constraining rules of interaction. Unconstrained change has no significance.

Change manifests in relation to spanning persistence. Persistence manifests in relation to encompassing change. This counterpoint of persistence relating change relating persistence relating change … is the essence of differentness and the foundation of computation.

Locus and Value
The primitive of this counterpoint is a persistent locus that can assert one at a time of two or more values which transition according to value interaction rules. A locus asserting a value might be a wire asserting a voltage, a protein asserting a shape, a molecule asserting a chemical, a digit asserting a numeral, a human asserting speech and so on. A value interaction rule might be a logic function, a shape sensitivity, a chemical affinity, a multiplication table, an ideal and so on.

Computation begins with interaction rules that appreciate combinations of differentness. Loci associate and combine their asserted values to form the name of a value interaction rule. The named rule is invoked and the specified value transitions occur.

The spectrum of differentiation
A locus is differentiated by its place of association in relation to all other loci. A value is differentiated by its interaction behavior in relation to all other values. The two domains of differentiation interpenetrate through loci asserting values. Together they weave a tapestry of computation along a spectrum of expression, below, that is purely in terms of value differentiation on one end and purely in terms of association differentiation on the other end, with varying proportions of differentiation along the spectrum. A computation can map anywhere onto the spectrum by altering its proportions of value differentiation and association differentiation (numeric radix for instance).

Spectrum of differentiation.

Value Differentiation
In pure value differentiation all differentness is represented with values and their interaction rules. Each value is different from every other value by virtue of its interaction behavior with all other values as expressed by the value interaction rules.

The loci of pure value differentiation are all mutually associated at a single place of association. There is no way to tell one locus from another in terms of association relationship and hence no association differentiation. The loci are constrained and agitated such that all loci will eventually associate and all possible value interactions will eventually occur. Warm matter in a gravity well or cytoplasm within a cell membrane are examples. This is illustrated with the shaking bag below. The bag is the constraining place of mutual association. The shaking of the bag is the search of values to find interaction partners.

Mutually associated loci.

Roman numerals
The pure value end of the spectrum is illustrated with Roman numeral addition considered without the subtractive principle: 9 is VIIII instead of IX. The value of a Roman numeral is represented by loci asserting values, but the loci are not associated in any particular relationships.

XLII = LXII = ILXI = IILX = LIIX = LIXI = IXIL = XILI = XIIL = IIXL = …
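This order independence is easy to check mechanically. A small sketch (Python, purely illustrative): a pure value numeral is a multiset of values, so any permutation names the same number.

```python
from collections import Counter

# A pure value Roman numeral is a multiset of values; order carries no meaning.
forms = ['XLII', 'LXII', 'ILXI', 'IILX', 'LIIX', 'IXIL']
assert all(Counter(f) == Counter(forms[0]) for f in forms)
print("all permutations name the same multiset")
```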

Roman numeral addition is expressed as a set of value interaction rules.

IIIII  ->  V
VV     ->  X
XXXXX  ->  L
LL     ->  C
CCCCC  ->  D
DD     ->  M

Given two Roman numerals these rules will reduce them to a minimal single numeral representation. The numbers 1878 and 122 are used as examples.

MDCCCLXXVIII + CXXII = MM

Each symbol is a locus asserting a value. If two loci asserting V associate, their values form the name of the interaction rule VV. The rule is invoked, the Vs disappear and an X appears. The values and the loci embody the interaction rules. When two Vs meet they spontaneously transform into an X, much as two molecules of hydrogen and one molecule of oxygen spontaneously transform into two molecules of water when they meet.

One can place the two numerals into a shaking bag as in Figure 3 below. The five Is will form the name IIIII, invoke the rule named IIIII and transform into a V. There are then two Vs that will transform to an X. The five Xs will transform to an L, the two Ls to a C, the five Cs to a D and finally the two Ds transform to an M. What remains are two Ms. There is no transform rule with the name MM, so no more names can be formed and the addition is completed. Names are formed, rules are invoked, values transform and eventually the sum appears.

Roman numeral addition in shaking bag.
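The behavior of the bag can be sketched in a few lines (Python, purely illustrative). A deterministic fixpoint stands in for the literally random shaking, which is safe here because whatever order names happen to form in, the same sum results.

```python
from collections import Counter

# Value interaction rules from above: a formed name transforms into a new value.
RULES = {'IIIII': 'V', 'VV': 'X', 'XXXXX': 'L',
         'LL': 'C', 'CCCCC': 'D', 'DD': 'M'}

def shake(values):
    """Apply rules until no rule name can be formed in the bag."""
    bag = Counter(values)
    changed = True
    while changed:
        changed = False
        for name, result in RULES.items():
            sym, need = name[0], len(name)
            while bag[sym] >= need:   # the name is formed, invoke the rule
                bag[sym] -= need
                bag[result] += 1
                changed = True
    return ''.join(sym * bag[sym] for sym in 'MDCLXVI')

print(shake('MDCCCLXXVIII' + 'CXXII'))  # -> MM
```

Note that shake('VI' + 'XII') returns 'XVIII' with no rule ever invoked, which is exactly the coordination problem discussed below.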

The loci randomly associate in the bag, yet a fully determined numerical computation occurs as a progression of formed names and value transitions governed by the value interaction rules. However, the expression itself cannot determine when its resolution is completed. The only way to determine the progress of the addition is to open the bag and count the values. When there are four or fewer of I, X, C and one or fewer of V, L, D the addition is completed.

The expression itself, however, can be enhanced to express appropriate coordination behavior and determine for itself when it is done. This requires that there be a progression of name formation and resolution such that there is a necessarily last name formed and resolved. With the present form of the expression there might not even be a first name formed: VI + XII = XVIII with no names formed and resolved at all. There must be a completeness of behavior at each stage of name formation and resolution to ensure that every name in the progression is formed and appreciated in an orderly progression.

The first question of an addition is how many Is are present in the bag. The computation must count the Is and somehow determine that it has considered every I in the bag and has not missed any. This can be done if the number of Is to be counted is a constant, which can be arranged with zero place holder values: each number then contains a constant number of each numeral value and its zero place holder value. The corresponding lower case letter will be used as the zero place holder value for each numeral value as shown in Table 1.

Numeral value    Zero value
I                i
V                v
X                x
L                l
C                c
D                d
M                m

Roman numerals with associated zero values.

The zero value i is used such that there are always exactly four of I and/or i in a number: iiii, Iiii, IIii, IIIi, IIII. When two numbers are added there will always be exactly eight of I and/or i. The criterion for completeness can now be pre-expressed in the value interaction rule names. The name of each I,i rule, shown in Table 2 below, is exactly eight values. Each name is an association of values, but this does not imply any position dependency of the values. The rule name IIIiiiii means that there are three of I and five of i in no particular order. An I rule will be invoked only when its name is completely formed by all eight values. This requirement of completeness of name formation is the mechanism of behavior coordination.

The V or v is the carry into the V,v addition. There is one of V and/or v in each of the two numbers, and there will always be a carry value of V or v, so there will always be exactly three of V and/or v present after the carry. The names of the value transform rules, shown in Table 2 below, require three of V and/or v. Because of the completeness requirement the V,v addition occurs strictly after the I,i addition is completed. The carry values express the necessary progression of name formations and resolutions.

There will be four of X,x in each Roman number, so adding two numbers will involve eight values, and the carry value will make exactly nine values to form the X,x addition names as shown in Table 2. Again, the X,x addition will occur strictly after the V,v addition.

Rules for I          Rules for V     Rules for X
iiiiiiii[iiiiv]      vvv[vx]         xxxxxxxxx[xxxxl]
Iiiiiiii[Iiiiv]      Vvv[Vx]         Xxxxxxxxx[Xxxxl]
IIiiiiii[IIiiv]      VVv[vX]         XXxxxxxxx[XXxxl]
IIIiiiii[IIIiv]      VVV[VX]         XXXxxxxxx[XXXxl]
IIIIiiii[IIIIv]                      XXXXxxxxx[XXXXl]
IIIIIiii[iiiiV]                      XXXXXxxxx[xxxxL]
IIIIIIii[IiiiV]                      XXXXXXxxx[XxxxL]
IIIIIIIi[IIiiV]                      XXXXXXXxx[XXxxL]
IIIIIIII[IIIiV]                      XXXXXXXXx[XXXxL]
                                     XXXXXXXXX[XXXXL]

Table 2. Value interaction rules for I, V and X.

The remaining rule sets, shown in Table 3 below, can be similarly derived except for M,m, which poses a difficulty because it does not have an inherent maximal form. One can put as many Ms as one likes in a number, and the only way to pre-determine how many Ms there are is to limit the number of M,m allowed in a number. In this discussion the number of M,m will be limited to five so that, with the carry, there are always exactly eleven of M and/or m. The addition rules for M clip any result greater than five Ms to five Ms without considering the lesser numerals. Notice that the carry-out value for the M rules is Z, which indicates the completion of the addition.

Rules for L    Rules for C          Rules for D    Rules for M
lll[lc]        ccccccccc[ccccd]     ddd[dm]        mmmmmmmmmmm[mmmmmZ]
Lll[Lc]        Ccccccccc[Ccccd]     Ddd[Dm]        Mmmmmmmmmmm[MmmmmZ]
LLl[lC]        CCccccccc[CCccd]     DDd[dM]        MMmmmmmmmmm[MMmmmZ]
LLL[LC]        CCCcccccc[CCCcd]     DDD[DM]        MMMmmmmmmmm[MMMmmZ]
               CCCCccccc[CCCCd]                    MMMMmmmmmmm[MMMMmZ]
               CCCCCcccc[ccccD]                    MMMMMmmmmmm[MMMMMZ]
               CCCCCCccc[CcccD]                    MMMMMMmmmmm[MMMMMZ]
               CCCCCCCcc[CCccD]                    MMMMMMMmmmm[MMMMMZ]
               CCCCCCCCc[CCCcD]                    MMMMMMMMmmm[MMMMMZ]
               CCCCCCCCC[CCCCD]                    MMMMMMMMMmm[MMMMMZ]
                                                   MMMMMMMMMMm[MMMMMZ]
                                                   MMMMMMMMMMM[MMMMMZ]

Table 3. Value interaction rules for L, C, D and M.

A modified Roman numeral is always exactly 20 values: four of I or i, one of V or v, four of X or x, one of L or l, four of C or c, one of D or d, and five of M or m. The number one is mmmmmdcccclxxxxviiiI. The number zero is mmmmmdcccclxxxxviiii. This is analogous to a two's complement binary number in a 32 bit computer, which is always 32 bits regardless of its magnitude.

The Roman numbers.

MDCCCLXXVIII and CXXII

now become

mmmmMDCCCcLxxXXViIII and mmmmmdCccclxxXXviiII

mmmmMDCCCcLxxXXViIII + mmmmmdCccclxxXXviiII = mmmMMdcccclxxxxviiii.

The addition process accepts two completely represented numbers and produces one completely represented number, preserving the number representation convention. The I,i addition is the necessarily first interaction. As shown below, the successive carry values shepherd a coordinated progression of interactions leading to the addition of the M,m values, which is the necessarily last interaction.

Modified Roman numeral addition in shaking bag.
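The staged behavior can be sketched with count arithmetic (Python, illustrative; my own condensation of the rule tables rather than a literal simulation of name formation). Each stage consumes the values of its letter class plus the incoming carry and emits a completed digit plus a carry to the next stage. The placement of zero values within a letter class is arbitrary since order carries no meaning.

```python
# (letter, slots, unit value) for each class, in stage resolution order.
STAGES = [('I', 4, 1), ('V', 1, 5), ('X', 4, 10), ('L', 1, 50),
          ('C', 4, 100), ('D', 1, 500), ('M', 5, 1000)]

def encode(n):
    """Encode n (0..5999 under the five-M limit) as exactly 20 values."""
    out = ''
    for sym, slots, unit in reversed(STAGES):
        k = min(n // unit, slots)
        n -= k * unit
        out += sym * k + sym.lower() * (slots - k)
    return out

def add(a, b):
    """Stage-by-stage addition: each stage consumes its letter class plus
    the incoming carry and emits a completed digit plus a carry."""
    carry, out = 0, ''
    for sym, slots, unit in STAGES:
        total = a.count(sym) + b.count(sym) + carry
        if sym == 'M':
            digit, carry = min(total, slots), 0  # clip; Z would be emitted here
        else:
            digit, carry = total % (slots + 1), total // (slots + 1)
        out = sym * digit + sym.lower() * (slots - digit) + out
    return out

print(add(encode(1878), encode(122)))  # -> MMmmmdcccclxxxxviiii
```

The I stage necessarily resolves first and the M stage necessarily last, because each stage's result depends on the carry from the stage before it.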

No matter how long it takes for the names to form and resolve, the expression autonomously resolves in an orderly progression of name formations and resolutions to a necessarily last name formation and resolution, which generates the coordination value Z to indicate that the addition is completed. The behavior is directed, discrete and fully determined. Z might open the bag and spill the results into another bag, emptying the current bag, which can then receive a new computation to resolve.

Roman numeral summary
Classical Roman numeral addition requires a human to manage the addition and coordinate its behavior. Additional values, value transform rules and conventions of expression express the coordination behavior previously provided by the human. The Roman numeral addition, or any other pure value computation, can then be complete in itself, expressing a fully determined computation as a progression of spontaneous behaviors which unambiguously produces a correct result.

Loci jumbling around in the bag trying to form an interactable name is the pure value search. The loci spontaneously appreciate when a name is formed, invoke the appropriate interaction rule and transition their values (physical chemistry, biochemistry, particle interaction). A search failure (a name not formed) does not leave any lingering conditions, such as partially formed names, to be accounted for.

Binary place value numerals – shorter rule names
With pure value binary encoding, which already includes zero, each binary place has a unique set of values. Rules whose names contain only three values make name formation more likely.

Pure value interaction rules for binary addition.

Pure value representation:

A0E1H1L0 + B1F0J1M1 = N1Z0Y0X0W1

Corresponding binary place value representation:

0110 + 1011 = 10001
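The per-place behavior can be sketched as a rule table invoked place by place (Python, illustrative). Plain bits stand in for the unique lettered values of the pure value representation; the point is that each place's three values form a rule name that resolves to a sum value and a carry value.

```python
# Per-place value interaction rules for binary addition: the three values
# presented at a place (a, b, carry-in) form a rule name that resolves to
# a sum value and a carry value.
FULL_ADD = {
    (0, 0, 0): (0, 0), (0, 0, 1): (1, 0), (0, 1, 0): (1, 0), (0, 1, 1): (0, 1),
    (1, 0, 0): (1, 0), (1, 0, 1): (0, 1), (1, 1, 0): (0, 1), (1, 1, 1): (1, 1),
}

def pure_value_add(a, b):
    """Resolve each place's three-value rule name, least significant first."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, out = 0, []
    for x, y in zip(reversed(a), reversed(b)):
        s, carry = FULL_ADD[(int(x), int(y), carry)]
        out.append(str(s))
    if carry:
        out.append('1')
    return ''.join(reversed(out))

print(pure_value_add('0110', '1011'))  # -> 10001
```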

Pure value summary
The differentness of computation is expressed and its behavior fully determined solely in terms of values interacting in accordance with value interaction rules. The value interaction rules are the persistent referent for the changing values. The loci of a pure value expression, all mutually associated at a single place, serve simply as a medium of value assertion and do not contribute to computation differentiation. Computation flows through a progression of names formed of unique values resolved with value interaction rules.

No new expressional primitives have been postulated. A process expression is just associated loci asserting values transforming according to value interaction rules.

Association Differentiation
In pure association differentiation all differentness is expressed in terms of loci in static structures of association relationships. Each locus is different from every other locus by virtue of its place in the structure of association relationships.

The behavior of statically associated loci
A locus has no structure, no input or output, no top or bottom, no right or left. Each locus sees a name continually formed of its own value and the unordered values of the loci directly associated with it, and changes its own value in response to the formed name. There is never an instant of no presented name. If associated loci assert the same values and resolve the same names, as illustrated at the left below, then value change flows in all directions continually and might cease flowing altogether (all loci asserting 1).

If directly associated loci assert different values, change can flow with circular interaction as in the middle below. Only with three linearly associated loci asserting different values, as on the right below, is the flow of change directed such that the output value does not feed back and participate in forming the input name.

Directionalizing the behavior of statically associated loci.

The result is a directionalized locus providing directed value transition flow in the context of a common set of values as below. The output values and input values are not differentiated in terms of value but are different in terms of association place in relation to the directionalized locus.

Directionalized locus.

The Switch
A switch is a directionalized locus. A manual switch in a power plant does not electrocute the human switcher.

An electromagnetic switch receives an input current value that influences a magnetic field value which influences a physical position value which influences a result current value. The output current is isolated from the input current by interaction through two different value domains.

Switches isolate and directionalize value transition flow.

An electronic tube receives a voltage value which influences a charge value in a vacuum which influences current flow value through the vacuum which influences the output voltage value which does not feed back and influence the input voltage value.

The voltage on the transistor gate influences the charge value in the channel which influences the electron flow value through the channel which influences the voltage value at the output which does not feed back and influence the input voltage value.

Multi input directionalized loci can be implemented in terms of networks of single input switches as with CMOS implementation of Boolean logic gates for instance.

Combinational composition
Directionalized loci can be connected, output to input, to form networks of directly associated directionalized loci with a common set of values and value interaction rules, expressing progressive value transition flow as on the left below. The directionalized loci can be graphically stylized to show directionality and rule set behavior as in the center below and, ignoring the internal buffering values (which need not be explicitly expressed nor even be common across loci), can be graphically formalized into primitive logic behaviors as on the right below.

Stylized presentations of networks of directionalized loci.

The NULL value – ends and beginnings
With interaction rule sets in terms of data values, loci continually assert data values, form computation names and resolve the formed names with no indication of when a computation begins or when a computation ends.

A value, null, defined to mean “not a data value,” is added to the mutually exclusive set of data values, establishing two disjoint domains of name formation completeness: data completeness and null completeness. A null convention is established in which name formation alternates between a completely formed data name and a completely formed null name (all values null) as below. A computation begins when values transition from all null to a completely formed data name. A computation ends when values transition from a formed data name to all null. Successive wavefronts of value transition alternate between data completeness and null completeness, representing successive computations.

Null Convention value transition wavefront flow.

The completeness criterion
In relation to the null value a locus is defined that transitions its output to a data value only when a completely formed data name is presented and that transitions its output to null only when a completely formed null name is presented. Otherwise it maintains its current output value.

The transition of the output to a data value implies the presence of a completely formed data name presented to the input. The transition of the output to null implies the presence of a completely formed null name presented to the input.

Pure association differentiation and threshold logic
In pure association differentiation there is only one data value and the null value. There cannot be combinations of different data values presented, so the only appreciable differentness of a formed data name is how many data values are present. Therefore the completeness of a formed data name is recognized by a threshold criterion (the requisite quantity of presented values at data). The completeness of the null name is also recognized with a threshold criterion (all presented values at null).

The result is a dual threshold logic with a data completeness threshold, a completely null threshold and an output maintaining behavior. A dual threshold locus beginning with its presented input completely null and its asserted output value at null will transition its output value to data when a specific number of its input values transition to data, fulfilling the data completeness threshold. It will maintain its output value at data until all of its input values transition to null, at which point it will transition its output value to null. It will then maintain its output value at null until its input values transition to data meeting the completeness threshold, at which point it will transition its output value to data, and so on.
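This behavior is easy to state in code. A minimal behavioral sketch (Python, a model of the described behavior only, not a circuit):

```python
class DualThresholdLocus:
    """Output transitions to DATA only when `threshold` inputs are data,
    transitions to NULL only when ALL inputs are null, and otherwise
    maintains its current value (hysteresis)."""
    NULL, DATA = 0, 1

    def __init__(self, n_inputs, threshold):
        self.n, self.t = n_inputs, threshold
        self.out = self.NULL

    def present(self, inputs):
        assert len(inputs) == self.n
        at_data = sum(inputs)
        if at_data >= self.t:
            self.out = self.DATA      # data completeness threshold met
        elif at_data == 0:
            self.out = self.NULL      # completely null
        # otherwise: maintain the current output value
        return self.out

g = DualThresholdLocus(3, 2)          # a 2-of-3 dual threshold locus
print(g.present([1, 0, 0]))  # 0: below the data threshold, still null
print(g.present([1, 1, 0]))  # 1: data completeness met
print(g.present([1, 0, 0]))  # 1: partially null, output maintained
print(g.present([0, 0, 0]))  # 0: completely null
```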

Some dual threshold loci are represented graphically below with their value interaction rules represented as mapping tables in which D represents the data value, N represents the null value and a dash represents no transition. The number in the graphic is the data completeness threshold. The input rails indicate the number of inputs. A branched input indicates a weighted input.

Dual threshold loci.

If the inputs to a dual threshold locus monotonically transition between data completeness and completely null with an appropriate delay between transitions then the output of the dual threshold locus will monotonically transition between data and null presenting a monotonically transitioning input to any succeeding loci.

With data assigned to 1 and null assigned to 0, dual threshold loci can be expressed in terms of directionalized loci with cross coupled feedback: a data (D,1) completeness network (the requisite quantity of 1s) to transition the output to data and a null (N,0) completeness network to transition the output to null, as below.

Dual threshold locus implemented with directionalized loci.

Multi-Rail Variable Representation
The mutually exclusive values of a variable, which can no longer be represented with different values, are represented with different rails, as below, each rail representing one value which is asserted when its rail becomes data. It is illegal for more than one rail of a variable to transition to data at a time. When all rails are null the variable is null or empty (not containing or asserting a data value).

Multi-rail variables.

Combinational Logic
In multi-rail variable interactions each rail represents a value, so associations of rails represent combinations of values forming a name. Each variable will transition a single rail to data. Each rail (value) of each variable fans out, associating with rails of other variables attempting to form a name. Of the possible combinations of rails of interacting variables, only one combination will transition all its rails to data, forming a complete name which can be recognized by a dual threshold locus. This fanning out and associating of rails is the association version of the search to form a resolvable name. The unsuccessful fanout paths forming incomplete names are search failures called orphans. There is a delay issue with the orphan that does not fit into this discussion. See part 2.

The combinational network below, specified by a set of value interaction rules and their corresponding table representation at its left, adds a binary variable to a trinary variable producing a quaternary variable. It begins with all rails asserting null. When one rail each of the trinary and the binary variables transitions from null to data, the transitions to data fan out, searching to form a name, only one of which will be completely formed. One dual threshold locus in the first rank will meet its data completeness threshold (recognize the formed name) and transition its output to data, which will ultimately transition one rail of the quaternary output variable to data (data completeness), indicating that the input variables are data complete and that the quaternary value is the correct resolution of the presented input.

When the input variables transition to null the completely null name will form and the dual threshold locus in the first rank that had transitioned to data will transition its output to null, transitioning the output quaternary variable to null. All rails are again null and the combinational network is ready to accept a next transition of input variables to data.

A simple multi-rail pure association computation.
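Behaviorally, such a network can be sketched with one-hot rails (Python, a hypothetical direct encoding that ignores the internal threshold structure): a first rank recognizes each binary/trinary rail pair, and each output rail collects the pairs that name its sum.

```python
def add_b_t(binary_rails, trinary_rails):
    """binary_rails: 2 rails (1-of-2), trinary_rails: 3 rails (1-of-3).
    Returns 4 rails (1-of-4). All-null in yields all-null out; a complete
    data name yields exactly one data rail out."""
    # First rank: 2-of-2 completeness loci, one per (binary, trinary) rail pair.
    pairs = [[b & t for t in trinary_rails] for b in binary_rails]
    # Output rank: each output rail collects the pairs naming its sum.
    out = [0, 0, 0, 0]
    for i in range(2):
        for j in range(3):
            out[i + j] |= pairs[i][j]
    return out

print(add_b_t([0, 1], [0, 0, 1]))  # 1 + 2 -> [0, 0, 0, 1]
print(add_b_t([0, 0], [0, 0, 0]))  # completely null -> [0, 0, 0, 0]
```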

If the input variables monotonically transition between data completeness and completely null with an appropriate delay between transitions then the output variable will monotonically transition between data and null. The behavior of the network as a whole is deterministically coordinated (no glitching) by the monotonic input behavior, by the completeness behavior of individual dual threshold loci and by the logical structure of the network. The network fulfills the completeness criterion.

Greater Combinational Composition
Networks fulfilling the completeness criterion can be combined into greater networks that, in turn, fulfill the completeness criterion.

The figure below is the pure association expression of the I,i value interaction rules above. The I,i values of input variables A and B are represented as dual rail pairs. One rail of each pair will become data, presenting four of I and/or i to the input for each variable. Only one dual threshold locus for each input variable will transition to data, recognizing the presented name of each variable.

The next rank of behaviors implements the search to recognize the combination of names from inputs A and B. One threshold behavior of this rank will transition to data recognizing the specific value interaction rule name. This one rail at data, through the last rank of behaviors, transitions the output rails to values specified by the value interaction rule including the V,v carry.

Notice that places in the network are labeled with the value names of the pure value interaction rules. The differentness relations of the computation previously represented with unique values are now expressed by unique places in a structure of association relationships. There is a direct mapping of expression of differentness between the two expressions as exhibited with the labeling.

Pure association expression of Roman numeral I,i addition.

The pure association expression for binary addition, shown in Figure 16 below, corresponds to the pure value interaction rules of binary addition above. Again the places in the network are labeled with corresponding pure value names.

Pure association expression for dual-rail binary addition.

Completeness behavior scales
A greater network as a whole behaves with the same completeness behavior as its component combinational networks which in turn behave with the same completeness behavior as their component dual threshold loci. Networks with fully determined flow behavior emerge atop networks of dual threshold loci.

For each network input transition (to data or to null) a wavefront of correct monotonic transitions propagates through the greater network. There are no incorrect or spurious transitions. The transition completeness of the output variables of the greater network implies that its input variables have completely transitioned to data, that the data transitions have propagated through the network and that the output is the correct resolution of the presented input.

This is also true for the null wavefront. When the input transitions to null an orderly wavefront of transitions to null flows through the network. There are no transitions of null to data. When the output is completely transitioned to null it means that the input is all null and the null transitions have propagated through the network.

A combinational network as a whole fulfills the completeness criterion.

Pure association summary
Computation is expressed and its behavior fully determined solely in terms of a structure of associated loci interacting with direct neighbors. The network of associated loci is the persistent referent for the changing values flowing through it. The one computation value serves simply to assert differentness of place in the association network and does not contribute to computation differentiation. Computation flows through a progression of names formed of combined rails resolved with dual threshold loci.

Invocation Model
Two domains of differentiation form a spectrum of pure value at one end and pure association at the other end with the two domains interweaving in various proportion along the spectrum, shown below, unifying natural and artificial forms of computation in a single model. The computation of a living cell is not fundamentally different from the computation of an artificial computer.

The spectrum of differentiation.

No new expressional primitives have been introduced: no time referent, no explicit control, no state space, no humans with pencils. In the beginning there is no meta referent, no privileged differentness, no overarching rule of interaction, no referent of time, no metric of space. There are only primitives, mutually interacting, building their own universe of significance. While some conventions have been established, computation is still just associated loci asserting values forming names causing change according to value interaction rules.

Value differentiation and association differentiation are interdependent forms of differentiation. Associated loci cannot express differentness without changing values. Changing values cannot express differentness without the spanning referent of associated loci. Values are associated through and isolated by loci. The same values can be reused over and over between loci. Loci are associated through and isolated by values. The same loci can be reused over and over between values. This interdependent weave of mutually isolating and renewing each other is illustrated with a full adder, as a halfadd, halfadd, OR circuit, in Figure 18.

Alternately reused loci and values expressing a dual-rail full adder.

The consequence is that a few loci, a few values and a few interaction rules can weave patterns of differentness into the complexities of life and mathematics.

Differentness is information. Interacting differentnesses is information transforming. Spontaneously interacting differentnesses are no less symbolic than are passive symbols of differentness manipulated by a human or a computer. Spontaneously interacting proteins compute just as a human or computer manipulating passive symbols computes. It's all about differentness.

# Unifying Hardware and Software

Matt Whiteside 8,9,2017

Q: A language that “unifies hardware and software, uniformly characterizing computation through all levels of abstraction” is an attractive idea that's also crossed my mind, but I don't see it getting much appreciation from programmers. To give a concrete example from my own experience of a deficiency that a unified approach would address, consider GPU programming. You start with a nice high level, type-safe, statically analyzed language running on the CPU, which must then interface with a shader program on the GPU, and therefore throw all the type safety, static analysis, correctness guarantees, etc., out the proverbial window. It's hard even to imagine what possibilities could open up if even a moderate amount of progress were made towards improving this situation.

A: The unification: I see both ends doing the same computation. The processor sequences operations; the clock sequences operations. EEs actually talk about the massive concurrency at the circuit level. They don't seem to realize that they throw 99.9% of the concurrency away to actually sequence one or a few steps. They are mildly aware of the inefficiency in that they include clock gating, but, buried in transistor details, they are not sensitive to what they are doing in the context of computation. This is one of the thrusts of what I am pursuing: what is the fundamental nature of computation and its most primitive implementation consistent with that nature, and how does this implementation seamlessly scale to arbitrary complexity? This is the question that CSR began addressing and which I am still working on. Computation should be uniformly characterizable in all its manifestations. It seems to me that this uniform characterization must embody concurrency as fundamental and that the notion of the sequential process is a fundamental flaw in contemporary computer science. The traditional notion that any concurrent process can be mapped to a sequential process, implying that sequentiality is more fundamental than concurrency, is a red herring similar to the notion that planetary motions can be mapped into uniform motion around perfect circles. Both are true enough, and they both sort of work, and because of that they mask access to a more effective conceptual essence.

My thesis is that the model of networks of linked oscillations spans both primitive implementation and abstract interpretation unifying both domains. Part 4 will present the flow network interpreter architecture. The view of computation is modified for both domains but then they are unified.

# Multi-rail Encoding

James Talbert 8,3,2017

Q: With a multi-rail signal, do the lines have to be mutually exclusive? For completeness detection, they have to have a deterministic result set, with no overlap (001 overlaps with 011, 101, and 111). This may be what you meant by “With a single data value and multi-rail you can straightforwardly encode any meaning in a single unbounded encoding regime,” but I want to check that I’m not missing something.

An example: If I have a 4-rail signal, with states {0011, 0101, 0110, 1001, 1010, 1100}, I get 6 states instead of the 4 with the same number of lines. Since all these states are distinct, and partially-complete versions do not form valid states, I would think that they could be used as representations.

A: Yes. When I say multi-rail I mean 1 of N encoding. There are delay insensitive M of N encodings like your 2 of 4 which yield more distinct meanings with fewer wires but have other disadvantages. First, M of N encodings require decoding which makes combination more expensive. Meaning is inherently a 1 of N condition. Out of multiple possible meanings there is typically only one meaning intended. The 2 of 4 code can encode 6 meanings but the code has to be decoded to a 1 of 6 representation to determine the meaning. With a 1 of 6 encoding there are 6 wires but there is no decoding. It matches the natural condition of meaning.

The 2 of 4 encoding would be efficient only if logic is much cheaper than wires. Chapter 11 of LDD goes into this issue in some depth.

Secondly, M of N codes do not scale. The 2 of 4 code represents 1 of 6 meanings. It cannot represent 1 of 7 or 1 of 3. The code locks one into base 6 representation with the associated decoding logic. 1 of N encoding does not have associated decoding logic and can directly represent any set of mutually exclusive meanings and form any base for further encoding.
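The decoding point can be made concrete with a small Python sketch (my own illustration, not from LDD): in a 1 of N code the hot rail directly is the meaning, while a 2 of 4 code yields six codewords that must pass through a decode table to reach a 1 of 6 meaning.

```python
from itertools import combinations

# 1-of-N: completeness = exactly one rail at data; the hot rail IS the meaning.
def one_of_n_meaning(rails):
    assert rails.count(1) == 1, "incomplete or illegal codeword"
    return rails.index(1)

# 2-of-4: C(4,2) = 6 codewords, but a decode table is needed to
# recover a 1-of-6 meaning from each pair of hot rails.
TWO_OF_FOUR = {pair: i for i, pair in enumerate(combinations(range(4), 2))}

def two_of_four_meaning(rails):
    hot = tuple(i for i, r in enumerate(rails) if r == 1)
    assert len(hot) == 2, "incomplete or illegal codeword"
    return TWO_OF_FOUR[hot]          # the decoding step that 1-of-N does not need

print(one_of_n_meaning([0, 0, 1, 0, 0, 0]))  # rail 2 asserts meaning 2 directly
print(two_of_four_meaning([0, 1, 0, 1]))     # rails (1,3) decode via the table
print(len(TWO_OF_FOUR))                      # 6 meanings from 4 wires
```

The table lookup stands in for the decoding logic: it is the extra expense, and it locks the code into base 6, whereas the 1 of N function works unchanged for any N.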

# Rethinking Computer Science Part 4: A Network Interpreter

Uniform flow paths, a universal set of operations and sequencing seem the royal road to universal interpretability. A single memory and a single interpreter are, in a sense, the simplest and most minimal computation capable of being structured to emulate any other computation: a structureless tabula rasa that reflects only uniform path structure and a small set of sufficient operations onto the mapping of the network to be interpreted. If the emulating computation contains any element of flow structure such as concurrency, two interpreters or two memories for instance, then the emulating computation ceases to be a structureless receptacle, reflects flow structure onto the mapping, and the two structures collide mid-mapping in a question with a multitude of answers and a nontrivial coordination issue.

Might there be another road to universal interpretability that does not require sequencing the entire network of oscillations, one which preserves the structure of the network and its concurrent behaviors? While it is difficult to inject concurrency into sequentiality, it is straightforward to inject sequentiality into an expression of concurrency, which already expresses all the necessary coordination behavior. In particular, the behavior of an individual oscillation can be implemented in any convenient manner, including sequentially, as long as the oscillation performs its computation and the empty/full flow protocol is honored. A computation can be expressed as a flow network of sequential processes.

Each oscillation is represented with a symbolic record that specifies the computation of the oscillation as a sequence of operations and specifies the input paths and the output paths each with its associated closure path and flags indicating empty and full. The records are mapped into a distributed network of memories and interpreters.

The memories and interpreters are decoupled. Several interpreters are associated with each memory and serve the memory in turns as available. Several memories are associated with each interpreter forming overlapping neighborhoods of memory/interpreter associations. Memories are not connected to memories but are connected through interpreters. Interpreters are not connected to interpreters but are connected through memories. Neighboring memories are connected through a single interpreter.

A flow network is loaded record by record through an input memory. The first record is allocated to a memory. Subsequent records flow through it and are allocated and linked to accepting neighboring memories which in turn become conduits for records which are allocated and linked to neighboring memories until the entire flow network is loaded into the network of memories. The loaded flow network weaving through the neighborhoods of memories can fold back through the same memory multiple times. In the limit a network can flow back and forth through a single memory and a single interpreter can service it.

Each grey box below is a separate oscillation record in a separate memory showing the linking flow paths between memories.

Most records are loaded as empty. Some records may contain initial data wavefronts. Data wavefronts are presented to the network inputs, the interpreters begin servicing the records and data wavefronts begin flowing from component record to component record, from memory to memory and interact as they flow. Each independently behaving interpreter polls its connected memories opportunistically searching for an oscillation record to service. Access to each record is arbitrated so that only one interpreter at a time gains access to a record and performs the following protocol.

The Interpreter Protocol

ALWAYS poll for memory access {
    WHEN gained {
        access record of top entry;
        IF input complete {
            perform computation;
            WHEN done {
                FOR each output path {
                    deliver output value to path and set flag full;
                    IF the receiving oscillation record's input is completely full and its output completely empty
                        THEN place that oscillation record onto ready queue;
                }
                FOR each input path {
                    mark as empty;
                    IF the sourcing oscillation record's input is completely full and its output completely empty
                        THEN place that oscillation record onto ready queue;
                }
            }
        }
        ELSE idle a bit;
        release record;
    }
}

Full/empty flags emulate the full/empty flow of oscillation links. Each interpreter can manage input ports, enable acceptance of input, determine when input is complete and perform the program of the oscillation. Computation completeness occurs when an oscillation program has performed the last sequential operation. It then checks that the input ports of the destinations are empty and sends the output to the successor record input ports. When all are accepted it marks the current record’s input to enable the flow of empty, which is just a manipulation of flags.

The state of the flow of the network and the state of each oscillation is maintained entirely in the oscillation records. Each interpreter is a stateless server that, with each access, updates a record, performs any housekeeping duties and leaves the memory and its oscillation record in a coherent state for a next interpreter to access and update. The arbitrated record access and the inherent flow network coordination behavior ensures that processors never interfere with each other and that networks flow independently. As each interpreter performs its local duty alternating wavefronts of data and emptiness flow through the network records realizing successive instances of network computations exploiting the represented concurrency both parallel and pipeline as resources permit. There can be multiple flow networks distributed through the memories proceeding simultaneously and independently. Each memory can host multiple oscillation records of the same or different flow networks without interference.
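As an illustrative sketch, assuming record and flag structures of my own invention rather than any canonical layout, the servicing discipline can be modeled in Python: a record is computed only when its input is completely full and its output completely empty, delivery sets the successor's full flag, and the flow of empty is just flag manipulation.

```python
# Hypothetical record layout: a dict of input/output paths, each a value plus
# a full/empty flag emulating the oscillation flow protocol.
def make_record(op, in_names, out_names):
    return {"op": op,
            "in":  {n: {"v": None, "full": False} for n in in_names},
            "out": {n: {"v": None, "full": False} for n in out_names}}

def service(record, links):
    """One stateless interpreter access: compute only at input completeness."""
    ins, outs = record["in"], record["out"]
    if not (all(p["full"] for p in ins.values())
            and not any(p["full"] for p in outs.values())):
        return False                       # nothing to do; release the record
    result = record["op"](*(p["v"] for p in ins.values()))
    for name, p in outs.items():
        p["v"], p["full"] = result, True
        if name in links:                  # deliver wavefront to successor input
            dest = links[name]
            dest["v"], dest["full"] = result, True
    for p in ins.values():                 # flow of empty: flag manipulation only
        p["v"], p["full"] = None, False
    return True

# A two-stage flow network: double, then increment.
a = make_record(lambda x: 2 * x, ["x"], ["d"])
b = make_record(lambda x: x + 1, ["d"], ["y"])
links = {"d": b["in"]["d"]}                # a's output path feeds b's input path

a["in"]["x"].update(v=5, full=True)        # present a data wavefront
while service(a, links) or service(b, {}):
    pass                                   # interpreters polling opportunistically
print(b["out"]["y"]["v"])                  # -> 11
```

The while loop stands in for independent interpreters polling their neighborhoods; arbitration is unnecessary here only because there is a single thread of servicing.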

The interpreters do not communicate among themselves. No interpreter is more important than any other interpreter. There is no central controller or explicit scheduler. There is no time referent. The network flows in its own time. There is no encompassing state. The state of each instance of a computation is in the wavefronts of data values flowing through the network and maintained in the records. Within the network, no part is greater than any other part. There is no central anything. There is no global anything. The network is a behaving whole composed of behaving localities enlivened by the servicing interpreters.

The operating system is itself a flow network distributed through the memories with one or more oscillation records in each memory. The OS flow network is enlivened by processors servicing its components just like any other flow network with its state maintained in the records themselves. Interpreters remain independent and stateless in relation to the OS.

Oscillation records can migrate among memories as long as the record state and links are maintained. A memory can be emptied and removed. A newly added memory can begin receiving oscillation records. An idle interpreter is stateless and can be removed. A newly added processor just begins looking for opportunities to service. The interpreters can keep tabs on each other by periodic conversation through the OS flow network. Rogue interpreters can be isolated, slow interpreters identified. Interpreters and memories can negotiate among themselves through the OS network to balance application load, optimize application throughput, flow to special purpose resources and so on.

The flow networks can reconfigure in relation to the memory/processor network. The memory/processor network can reconfigure in relation to the flow networks. The memory/interpreter network can be monitored with redundant flow or with periodic sets of test values or with a test flow network that continually flows test cases through the network. The interpreters treat it as just another network to service.

The memory/processor network exploits the inherent concurrency of independent flow networks and of concurrency within each flow network as resources permit. Resources can be indefinitely extended simply by adding memories and processors. As resources dwindle the system will continue working with a single memory and a single processor.

Summary
While the behavior of each oscillation is sequentialized, the flow network of oscillations is interpreted directly in the network of distributed memories and interpreters, realizing the flow network in its native structure and behavior.

# Rethinking Computer Science Part 3: A Sequential Interpreter.

Part 3 presents the logical structures of a memory, a configurable oscillation and a sequential interpreter ring forming a traditional universal sequential processor.

Memory: Stopping a Data Wavefront
In the flow model of computation successive wavefronts of data spontaneously propagate through the flow paths of networks of linked oscillations. A memory would entail stopping a data wavefront from flowing, holding it for an indefinite period, then allowing it to flow again. This can be accomplished with a ring.

A Memory Ring
An oscillation link maintains a wavefront until it is accepted by the following oscillation. A ring can be arranged with an added dual-rail control path that indefinitely delays acceptance at a specific link, C, in the ring effectively stopping and storing a data wavefront.

The read rail transitioning to data enables the stored data wavefront to the output of the ring. The wavefront flows both out of the ring and back into the ring through B to be restored at C allowing multiple reads of the same wavefront.

The write rail transitioning to data enables the stored data wavefront to a consume path, D2, removing it from the ring and allowing a data wavefront presented on the input path, also enabled by the write rail, to flow into the ring and to C replacing the previous wavefront with the input wavefront.

A data wavefront is maintained indefinitely at C until a dual rail read/write command wavefront allows it to flow.

Memory rings can be stacked with a write bus and a read bus and individual read/write controls to form an arbitrarily large random access memory.
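A purely behavioral sketch of the memory ring follows; the dual-rail read/write control and the dual threshold implementation are abstracted away, and the name `at_C` is just a label for the link described above.

```python
# Sketch of a memory ring: a data wavefront held indefinitely at link C.
# Read recirculates the wavefront back through B (allowing multiple reads);
# write consumes the stored wavefront through D2 and admits the input to C.
class MemoryRing:
    def __init__(self, initial=None):
        self.at_C = initial                  # wavefront maintained at link C

    def read(self):
        wavefront = self.at_C
        self.at_C = wavefront                # flows out AND back through B to C
        return wavefront

    def write(self, wavefront):
        consumed = self.at_C                 # previous wavefront exits via D2
        self.at_C = wavefront                # input wavefront flows in to C
        return consumed

ring = MemoryRing(initial=7)
print(ring.read(), ring.read())              # 7 7: recirculation allows rereads
ring.write(9)
print(ring.read())                           # 9
```

Stacking such rings behind a write bus and a read bus, with individual read/write controls, gives the random access memory described above.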

The memory operates as a complex conditional oscillation which is a consume oscillation for write and a produce oscillation for read. On each oscillation only one memory element is addressed and enabled to accept an input from the write bus or to present an output to the read bus.

The entire memory structure is a network of dual threshold elements.

A Configurable Oscillation

All flow is characterized in terms of two-input, one-output operations and in terms of a common flow path structure (32 dual-rail bits for instance). The behaviors of all oscillations are characterized in terms of a common set of operations on the common flow path structure. An oscillation is defined that conditionally realizes each operation by steering the input through the specified operation to the output.

A computation can now be realized as a network of identical configurable oscillations, each behaving according to its configuration code.

Symbolically Specifying the Network
Each arc of the network is assigned a unique name which associates an output of one node to one or more inputs of other nodes. The network can now be expressed as a collection of statements each of which associates the names of the input arcs and the name of the output arc with a configuration code such as below.

opcode, inAname, inBname, outname

The collection of statements, specifying the network structure by name correspondence, can be placed in an environment in which the name correspondence relations can be conveniently realized. If the unique names are unique memory addresses then the statements become linked by reference through memory locations. Since each memory location will maintain a data wavefront indefinitely the statements can be performed one at a time in any ordering that preserves the dependency relations of the original flow network. In particular, the statements can be placed linearly in a memory and with the assistance of conditional branching the statements can be performed one at a time in sequence in an appropriate order. Data wavefronts flowing haltingly through memory ultimately realize the computation specified by the original flow network.
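A minimal sketch of this sequential performance, with hypothetical opcodes and arc names: the unique names act as memory addresses, each location maintains its wavefront indefinitely, and the statements are placed linearly and performed in sequence.

```python
# Hypothetical operation set; any sufficient set of two-input operations works.
OPS = {"ADD": lambda a, b: a + b, "MUL": lambda a, b: a * b}

# (opcode, inAname, inBname, outname): statements linked by name correspondence.
# Any ordering that preserves the dependency relations of the original flow
# network will do; here they are simply performed in the order listed.
program = [
    ("ADD", "x", "y", "t1"),
    ("MUL", "t1", "z", "t2"),
    ("ADD", "t2", "x", "out"),
]

memory = {"x": 2, "y": 3, "z": 4}     # each location holds a wavefront indefinitely
for opcode, a, b, out in program:
    memory[out] = OPS[opcode](memory[a], memory[b])

print(memory["out"])                  # (2+3)*4 + 2 = 22
```

The data wavefronts flow haltingly through `memory`, one statement at a time, yet ultimately realize the computation specified by the original flow network.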

An Interpreting Pipeline Ring
A pipeline ring containing a configurable oscillation performs operations in sequence by fetching a statement from memory, fetching its input wavefronts from memory, performing the operation and sending the result wavefront to memory, then doing it again, and again.

The ring and the memory are realized entirely in terms of a spontaneously behaving network of linked oscillations composed entirely of dual threshold operations. No clock, no critical timing issues, no state machine, no DQ flip flops.

The sequential interpreter delivers “universal” computation as a coherent structure of logical relations, in contrast to a heterogeneous structure of clock, Boolean logic and DQ flip flops, or of abstract state machine, mechanical read/write head and paper tape. Universality is attained by mapping dependency networks into the tabula rasa of a single memory and sequence. Is this the only road to universality, or might there be a universal interpreter that directly interprets flow networks, preserving their structure and behavior?

# Rethinking Computer Science Part 2: A Sufficiently Expressive Logic

Part 2 presents a logical foundation for computation that does not exhibit the difficulties discussed in part 1.

A Sufficiently Expressive Logic
Consider the possibilities of a logic that manages its own flow of concurrent computation, that does not exhibit incorrect behavior, that determines its own completeness of resolution and that stably maintains its flowing data. A logic that is sufficiently expressive to completely determine the behavior of any computation solely in terms of the logic itself. No time interval. No registers. No state machine. No timing closure. No emulated mathematician. Might we accept such a logic as a foundation of computational expression and its logically determined behavior as a foundation of computational confidence?

Such a logic is defined by introducing the notion of emptiness to variables and the notion of completeness and value holding to operators on variables.

Variable Emptiness
The current computer science notion of a variable is that of a container of a data value which always contains and always asserts a data value. The contained data value can change but the container, the variable, persists and is never empty. The validity or invalidity of a variable’s data value is determined by some control behavior apart from the variable.

A value is added to the variable’s value set that explicitly means “not a data value” which will be called empty. A variable can now explicitly indicate that it does not contain a data value, that it is “empty”. A computation begins when a data value flows into an empty variable and ends when an empty value flows into the variable.
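A trivial sketch of a variable whose value set includes an explicit empty (the names are mine, purely illustrative):

```python
# The value "empty" explicitly means "not a data value", so validity needs
# no control behavior apart from the variable itself.
EMPTY = "empty"

class Variable:
    def __init__(self):
        self.value = EMPTY       # a variable begins empty

    def is_empty(self):
        return self.value is EMPTY

v = Variable()
print(v.is_empty())              # True: explicitly indicates "no data value"
v.value = 42                     # computation begins: data flows into the empty variable
print(v.is_empty())              # False
v.value = EMPTY                  # and ends: the empty value flows into the variable
print(v.is_empty())              # True
```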

Operator Completeness and Emptiness
A computation primitive operator must now relate to this new “not data value” and the fullness and emptiness of its variables. So the notion of completeness and value holding behavior is introduced.

Each operator fulfills a completeness criterion. Only when its input variables are completely data will an operator transition its output variable to a data value that is the correct resolution of the presented input values. Only when its input variables are completely empty will an operator transition its output value to empty. For all conditions of mixed data/empty input an operator will maintain its current output value.

The output of an operator transitioning to data implies that its input is completely data and that its output is the correct resolution of the presented input. The output of an operator transitioning to empty implies that its input is completely empty and that it is ready to receive a next data input.

If the input variables to an operator monotonically transition between data completeness and completely empty then the output variable of the operator will monotonically transition between data and empty fulfilling the completeness criterion.

Each operator exhibits completeness behavior (control), value holding behavior (memory) and transformation behavior (data value transition mapping). An operator is still represented as a mapping but now the mapping includes the empty value and there are mappings for which the output does not transition.

A Dual Threshold Logic
Considering electronic implementation, the two electronic values that a wire/rail can assert are designated as logical values data (non-zero voltage) and empty (zero voltage) in contrast to 0 and 1 or True and False. Since there is only one data value there cannot be combinations of different data values presented to an operator, there can only be quantities of the one data value. Therefore the operator data completeness criterion is a quantity of inputs transitioned to data: a threshold operator.

The new operator is a dual threshold behavior with an input data completeness threshold and an input completely empty threshold. A dual threshold operator beginning with its output empty will transition its output to data when a specific number of its input rails transition to data, fulfilling the data completeness threshold. It will maintain its output data value until all of its input rails transition to empty, at which point it will transition its output to empty. It will then maintain its output empty value until its input rails transition to data completeness, at which point it will transition its output to data … and so on.
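A behavioral sketch of a dual threshold operator in Python (names and value convention are assumptions of mine): the output transitions to data at the data completeness threshold, transitions to empty only when all inputs are empty, and otherwise holds its current value.

```python
EMPTY, DATA = 0, 1

class DualThreshold:
    """Dual threshold operator: threshold behavior plus value holding."""
    def __init__(self, data_threshold):
        self.threshold = data_threshold
        self.output = EMPTY

    def present(self, inputs):          # inputs: list of EMPTY/DATA rail values
        count = sum(1 for r in inputs if r == DATA)
        if count >= self.threshold:
            self.output = DATA          # data completeness threshold met
        elif count == 0:
            self.output = EMPTY         # completely empty threshold met
        # mixed data/empty input: hold current output (the dash in the tables)
        return self.output

g = DualThreshold(data_threshold=2)     # a 2-threshold operator on 3 inputs
print(g.present([DATA, EMPTY, EMPTY]))  # 0: below threshold, holds empty
print(g.present([DATA, DATA, EMPTY]))   # 1: completeness met, transitions to data
print(g.present([DATA, EMPTY, EMPTY]))  # 1: mixed input, holds data
print(g.present([EMPTY, EMPTY, EMPTY])) # 0: completely empty, transitions to empty
```

The third call is the essential point: with mixed data/empty input the operator does not transition, which is the memory behavior integrated into the logic itself.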

Some dual threshold operators are represented graphically below with their mapping tables in which a dash represents no transition. The number in the graphic is the data completeness threshold. The input rails indicate the number of inputs. A branched rail indicates a weighted input.

Monotonic Value Completeness Transitioning
If the input rails to a dual threshold operator monotonically transition between data completeness and completely empty with an appropriate delay between transitions then the output rail of the dual threshold operator will monotonically transition between data and empty.

Multi-Rail Variable Representation
The mutually exclusive values of a variable are represented with multiple rails each representing one value which is asserted when its rail is data. It is illegal for more than one rail to transition to data at a time. When all rails are empty the variable is empty. A variable monotonically transitions between data completeness (one rail at data) and empty (all rails at empty). Below are examples of multi-rail variables.

Combinational Logic
Multi-rail variable interactions can be characterized as combinations of rails (values) transitioning to data, each of which combinations can be recognized by a dual threshold operator. A universal mapping can be assumed for the transition to empty (transition outputs to empty only when all inputs are empty), so a combinational network of dual threshold operators can be specified as a transition table of data value (rail) interactions.

The combinational network below, specified by the table at its left, adds a binary variable to a trinary variable producing a quaternary variable. Computation begins with the input variables, the output variable and the operators empty (all rails empty). When the trinary and the binary variables transition from empty to data completeness only one dual threshold operator in the first rank will meet its data completeness threshold and transition its output to data, ultimately transitioning the quaternary output variable to data completeness (one rail data), indicating that the input variables are data complete and that the quaternary value is the correct resolution of the presented input.

When the input variables transition to empty, the dual threshold operator in the first rank that had transitioned to data will transition its output to empty, emptying the output quaternary variable. The input and output variables and the operator are again empty (all rails empty) and ready to accept a next transition of input variables to data.

If the input variables monotonically transition between data completeness and empty with an appropriate delay between transitions then the dual threshold operators of the network will monotonically transition between data and empty and the output variable will monotonically transition between data and empty. The behavior of the network as a whole including all the concurrency in the network is deterministically coordinated (no glitching) by the monotonic input behavior, by the completeness behavior of individual dual threshold operators and by the logical structure of the network. The combinational network exhibits the same completeness behavior as its component dual threshold operators.
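The adder network can be sketched behaviorally in Python. This is my own reading of the structure, it models only the data phase (the empty phase and value holding are omitted for brevity): a first rank of 2-threshold operators recognizes each combination of rails at data, and a second rank of 1-threshold operators steers each recognized combination to the output rail.

```python
def threshold(n, rails):                 # dual threshold, data phase only:
    return 1 if sum(rails) >= n else 0   # fires when n rails are at data

def add_bin_tri(bin_rails, tri_rails):
    """bin_rails: 1-of-2 binary variable; tri_rails: 1-of-3 trinary variable."""
    # First rank: one 2-threshold operator per combination of input rails.
    combos = {(b, t): threshold(2, [bin_rails[b], tri_rails[t]])
              for b in range(2) for t in range(3)}
    # Second rank: steer each recognized combination to output rail b + t.
    out = [0, 0, 0, 0]                   # the 1-of-4 quaternary variable
    for (b, t), fired in combos.items():
        if fired:
            out[b + t] = threshold(1, [fired])
    return out

print(add_bin_tri([0, 1], [0, 0, 1]))    # 1 + 2 -> rail 3: [0, 0, 0, 1]
```

Exactly one first-rank operator fires for a data-complete presentation, so exactly one output rail transitions to data, mirroring the completeness behavior described above.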

Greater Combinational Composition
In a greater combinational network constructed of first level combinational networks the output of each first level combinational network presents a single correct input transition to its successor first level combinational networks fulfilling their monotonic input requirements.

For each network input transition (to data or to empty) a wavefront of correct monotonic transitions propagates through the greater network. There are no incorrect or spurious transitions. Both concurrent and ordered flow behaviors of the greater network are fully coordinated by the monotonic transition behavior and by the completeness behaviors. The transition completeness of the output variables of the greater network implies that its input variables have completely transitioned and that the output is the correct resolution of the presented input (data complete – completely empty). The requirement for monotonic transitioning with an appropriate delay between transitions is now relegated to the input environment of the greater network.

The network as a whole behaves with the same completeness behavior including the deterministic coordination of all concurrency as its component combinational networks. A composition of logically determined networks is itself logically determined. Completeness behavior scales. A network with fully determined flow behavior emerges atop the network of dual threshold operators.

The Emergent Oscillation – Liveness and Flow Control
Data completeness and emptiness of the output data variables can be recognized with dual threshold operators, fed back to the input, inverted and presented to a spanning rank of dual threshold operators which regulate the input of the network. Output data completeness will allow transitions to empty into the network. Output empty completeness will allow transitions to data into the network. Once a transition wavefront is allowed the enabling rank of dual threshold operators will maintain the allowed wavefront until the network detects the completeness of its flow and allows the next transition wavefront.

The network self regulates the monotonic transitioning of its input. The input environment still has an imposed requirement for monotonic transitioning but now the network itself determines the appropriate delay between transitions. The input environment must now wait for each input transition wavefront to be allowed into the network.

The network is a closed circuit with delay and a single inverter, the form of a ring oscillator. The closed network continually strives to transition between two states, data and empty, the behavior of a ring oscillator. However the transitioning is not necessarily periodic as is the familiar case with free unencumbered oscillators. The transitioning of these oscillators can be delayed, modulated by signals external to the oscillator. Nevertheless, they continually strive to transition and as long as their transitions are externally allowed they transition alternately between two states, oscillate, however aperiodically. The network fulfills two and a half properties of the conventional view of a ring oscillator so it will be referred to as a ring oscillator in this context.

With the inversion of the completeness feedback the network of dual threshold operators becomes a spontaneously alive ring oscillator (animation) continually striving to monotonically transition between data completeness and completely empty. The oscillation provides liveness. The input enable rank provides memory. The output completeness provides flow control. The encompassed combinational network provides value transformation behavior. It is a spontaneously alive, self regulating computation expressed solely in terms of logical relationships. No time interval. No register. No controller. No timing closure. No mathematician.

Placing the input regulation logic of one oscillation before the output completeness logic of another oscillation links the two oscillations with a shared completeness behavior coordinating their oscillation behavior. Each oscillation maintains its input and its output and does not oscillate until its output is stably passed to its successor oscillations. As oscillations individually oscillate, alternating wavefronts of data and empty flow from oscillation to oscillation with data wavefronts being transformed as they flow through the combinational logic within each oscillation.
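A toy model of linked oscillations as a pipeline, simplified (by me) to data wavefronts only: the explicit empty wavefronts and completeness logic are abstracted into the test that a stage's output link is empty before it accepts the wavefront its predecessor maintains.

```python
class Stage:
    """One oscillation: a transform and the link wavefront it maintains."""
    def __init__(self, fn):
        self.fn, self.out = fn, None     # out = wavefront on output link; None = empty

def tick(stages, source):
    """One concurrent step: a stage oscillates only when its output link is
    empty, i.e. its successor has accepted the previous wavefront."""
    for i in reversed(range(len(stages))):
        pred = stages[i - 1].out if i else (source[0] if source else None)
        if stages[i].out is None and pred is not None:
            stages[i].out = stages[i].fn(pred)
            if i:
                stages[i - 1].out = None   # predecessor link transitions to empty
            else:
                source.pop(0)              # wavefront allowed in from the environment

pipe = [Stage(lambda x: x + 1), Stage(lambda x: x * 2)]
source, sink = [3, 10], []
while source or any(s.out is not None for s in pipe):
    tick(pipe, source)
    if pipe[-1].out is not None:           # environment consumes completed wavefronts
        sink.append(pipe[-1].out)
        pipe[-1].out = None
print(sink)                                # -> [8, 22]
```

Each stage maintains its wavefront until its successor accepts it, so successive wavefronts flow stage to stage, being transformed as they flow, without any central coordination.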

The requirement of monotonic transitioning is now relegated to the input environment of the network of linked oscillations as a whole, and if fulfilled, successive waves of monotonic transitions will flow through the network with full coordination of all concurrent and ordered behaviors realizing a logically determined computation. In the network below, for instance, the flows of the concurrent oscillations A and B are coordinated by the completeness behavior of oscillation C.

In a network of linked oscillations the behaviors of data transformation, memory, flow coordination and liveness are fully integrated and uniformly distributed. No time interval. No registers. No controller. No timing closure. No mathematician. Only logical relationships.

Structured Flow Paths and Abstract Components
A flow path can be a structure of multiple multi-rail variables with a spanning enable and spanning completeness as shown below with the 4 bit adder. Stylized operator symbols are introduced that abstractly represent computations with flow paths of any width and structure. The abstract representations can be composed and further abstracted.

The Ring: Memory
A pipeline can feed back on itself forming a ring of oscillations creating a local memory of previous behavior. A network of linked rings produces a computation with memories of previous behavior distributed throughout the network.

In the network below, implementing a state machine, the remembered previous behavior is the transition to the current state which is fed back through D, and presented as input A. The fed back transition wavefront will sit at A and wait indefinitely for a B transition wavefront to arrive and form input completeness at which point the wavefronts will flow through the update computation producing the next state. A and B are concurrent flows coordinated by the completeness behavior of oscillation C.

The state machine, realized as a network of dual threshold operators, is spontaneously alive and self sequencing with no external support except for the presentation of the input.
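A toy sketch of the remembered-state behavior, with a hypothetical next-state function standing in for the update computation: the fed-back state wavefront waits for an input wavefront, and their completeness releases the flow that produces the next state.

```python
def update(state, symbol):               # hypothetical next-state computation
    return (state + symbol) % 3

state = 0                                # initial wavefront circulating in the ring
for symbol in [1, 2, 2, 1]:              # input wavefronts arriving at B
    # completeness at A (fed-back state) and B (input) releases the flow
    state = update(state, symbol)        # next state fed back through D to A
print(state)                             # -> 0
```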

Self Contained Computation
If the feedback associated with the input controls the transitioning of the input (memory read request or sensor sample request) then the requirement for monotonic transitioning relegated to the input environment of the network is satisfied and is no longer imposed on the input environment. The network becomes self contained, self coordinated and spontaneously alive; independently and continually performing its computation. There is no referent for the network and its behavior other than the network itself. No external controller. No external time referent. No mathematician. Within the network there is no part that is greater than any other part. There is no central anything. There is no global anything. It is a spontaneously behaving whole composed of spontaneously behaving parts.

Immersed Computation
If the input sensor is sensitive to the output behavior then an external feedback path around the network is established forming the network as a whole into a ring, part of which is the logical network and part of which is the environment; a self contained, self referential, continually computing network sensitive to its environment and, with memory of previous behavior, capable of adjusting its behavior. It is an integral locality of a larger computation that it does not, itself, encompass and, in turn is composed of localities that do not encompass its wholeness.

Computation Primitives
Variables that can indicate their own validity and invalidity of value, completeness operators that can manage their own behavior in cooperation with the variables, which combine in networks that coordinate their concurrent flow with completeness behavior, which networks close on themselves to create oscillations, which oscillations, linked with completeness behaviors, form pipeline networks, which pipeline networks, closing on themselves, form ring networks which can remember past behavior and which ring networks finally control their own input becoming independently and spontaneously behaving computations. These concepts, all encompassed in a single coherent logic(1) are primitive concepts of computation that are sufficiently expressive on their own behavioral merit, and that can serve computer science better than computation primitives that assume the behavior of a real or emulated mathematician.

Computational Confidence
We trust sequenced state because, given knowledge of the input and the sequence of steps, one can predict the values in the state space between steps, pause the sequence of steps and observe the state values.

Why might we trust spontaneously flowing, concurrent, fully determined logic? Given knowledge of the input and the network structure of linked oscillations, one can predict the sequence of values that will flow through and out of each oscillation. The output of selected oscillations can be tapped and the succession of wavefronts observed.

The mechanisms of observation are different but the predictability and observability of computational progress correspond because the structure and behavior of the state space is one of the possible sequences derived from a dependency specification and the network of linked oscillations is a direct realization of the same dependency specification.

But what if the input is random and prediction is not possible? Then confidence must reside in the expression of the computation. A key factor of sequential confidence is the “precisely defined step”. But the behavior of a step containing a Boolean network, with its timing anomalies (glitches) and its reliance on external control, is ultimately determined by human coordination.

An oscillation, the counterpart of step, contains a network of dual threshold logic operators and logical links with other oscillations that fully determine its behavior solely in terms of logical relationships.
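The dual threshold behavior can be sketched as a toy model. The convention assumed here follows the referenced logic(1): an operator asserts data only once enough of its inputs are data, withdraws to null only once all of its inputs are null, and otherwise holds its previous output. The names and encoding below are illustrative assumptions.

```python
# Sketch of a dual-threshold (hysteresis) operator: asserts DATA once
# m of its n inputs are DATA, returns to NULL only once all n inputs
# are NULL, and holds its previous output between those thresholds.
# Illustrative model only; names and encoding are assumptions.

NULL, DATA = 0, 1

class ThresholdGate:
    def __init__(self, m, n):
        self.m, self.n = m, n
        self.state = NULL  # the output persists between thresholds

    def evaluate(self, inputs):
        assert len(inputs) == self.n
        data_count = sum(1 for v in inputs if v == DATA)
        if data_count >= self.m:
            self.state = DATA   # completeness of data: assert
        elif data_count == 0:
            self.state = NULL   # completeness of null: withdraw
        # otherwise: hold the previous state (hysteresis)
        return self.state

gate = ThresholdGate(m=2, n=2)
print(gate.evaluate([DATA, NULL]))  # NULL: input not yet complete
print(gate.evaluate([DATA, DATA]))  # DATA: both inputs arrived
print(gate.evaluate([DATA, NULL]))  # holds DATA: not yet all NULL
print(gate.evaluate([NULL, NULL]))  # NULL: wavefront fully withdrawn
```

Because the operator transitions only on completeness of input, a network of such operators never asserts a partially resolved result, which is what allows its behavior to be fully determined in terms of logical relationships alone.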

Should we trust the behavior of a coherent determinate logic or the behavior of an ad hoc construction of heterogeneous parts coordinated by humans?

Summary
Mathematics is a world of abstract relations. But integral to the meaning and functioning of mathematical abstractions is a mathematician to actualize them and manage their behavior. The task of computer science is to directly actualize abstractions of computation, without the mathematician. But the mathematical abstractions without the mathematician are incomplete and do not behave correctly on their own merits. A Boolean equation in the hands of a mathematician provides a single correct answer. A directly instantiated Boolean equation provides incorrect answers because of concurrent behaviors that the mathematician can coordinate but that the directly instantiated Boolean equation cannot.

Computer science has addressed this quandary by managing the unruly concurrent behavior with time and sequence. Isolating the unpredictable concurrent behavior within a time interval transforms a Boolean network, as a whole, into a predictable sequential step. One can then consider the sequential step as a primitive behavior and consider the concurrent behavior within the step as rendered inconsequential. Isolation might be a reasonable approach if the chaos of primitive concurrent behavior were an existential inevitability. But it is not. There is a logic that can manage its primitive concurrent behavior with deterministic predictability.

It is a logic that serves as a primitive abstraction of computation specification as well as a primitive abstraction of computation behavior, a logic that does not assume the behavior of a mathematician but that stands on its own expressional and behavioral merits, a logic that models primitive concurrent computation behavior delivering logical determinism rather than logico-temporal-anthropic adequacy, a computer science logic in contrast to a mathematical logic.

Computer science is different from mathematics. While mathematics uses the mathematician for computation, computer science strives to transcend the mathematician and, ultimately, to account for the mathematician as itself a computation. The ghost of the mathematician in computer science creates a referential circle that undermines both ambitions.

1. Karl M. Fant, Logically Determined Design: Clockless System Design with NULL Convention Logic, (Hoboken, New Jersey, Wiley Interscience, 2005).

# Rethinking Computer Science: Purpose of Site

The purpose of this site is to present and explore a new view of computation and computer science.

not as a sequence of steps controlling a machine altering contents of a memory but as wavefronts of computation and state spontaneously flowing through a network of linked oscillations,

not as clock actualized, step by step, time determined, centralized control but as self actualizing, event driven, logically determined, distributed concurrent local coordination,

not as information manipulation but as information interaction,

nothing global, nothing central, nothing timed.

It is a model of computation that applies to all forms of computation natural and artificial and that applies uniformly to all levels of abstraction from primitive implementation through general programmability,

a new view of computation and computer science.

# Rethinking Computer Science Part 1: The Problem

Introduction
Computer science is formulated with concepts borrowed from mathematics. Even though mathematics defines mathematical computation and computer science is about computation, it is argued here that there are fundamental differences between the two, that computer science is not well served by the borrowed concepts and that there exists a conceptual grounding that more effectually addresses the goals and problems of computer science.

At the center of mathematics is a mathematician performing computations. At the center of computer science are computations performing all by themselves. Computer science, quite reasonably, was founded on computational concepts borrowed from mathematics, which concepts, however, are guides for the mathematician with pencil and paper who provides liveness, flow coordination and memory. Concepts which were never intended to stand on their own apart from the mathematician. In computer science, these concepts are realized by emulating a mathematician. This might seem perfectly reasonable, indeed, computer science was founded with a machine explicitly emulating a mathematician with pencil and paper, but nature computes without reference to mathematician, pencil or paper and computer science strives to encompass all computation including the computations of nature. A mathematician’s brain, for instance, does not compute by emulating a mathematician.

The ghost of the mathematician that haunts computer science is an impediment to understanding computation in itself and while we have gotten a lot of mileage out of emulating the mathematician we can get even more mileage out of transcending the mathematician. Part 1 focuses on difficulties with two concepts borrowed from mathematics, Boolean logic and the sequential process; Part 2 suggests an alternative abstraction that better serves the goals of computer science.

Boolean Logic
In mathematics, Boolean logic, in conjunction with a mathematician, is considered a theoretically fundamental primitive of computation. But, enlivened Boolean logic on its own, without a mathematician, is an incomplete model of computation. It cannot determine when a resolution has begun nor when a resolution has completed and can generate incorrect results.

If an input is presented to a Boolean logic network that causes no transition activity (same input as previous input), the logic cannot, itself, determine that a resolution has begun nor that a resolution has ended.

If an input is presented that causes transition activity in the network (different from previous input) and there are delays associated with the functions of the network, the functions will temporarily output incorrect results which are presented to successor functions. The network asserts a chaos of incorrect transitions (glitches) before the input completely propagates and the network settles asserting correct result values for the presented input. Again, the logic cannot determine amidst the chaotic transitioning when an input is completely presented or when a resolution is completed.

There must be an extra-logical time interval referent coordinated with the actual delay behavior of the Boolean network to control the presentation of input (the beginning) and to wait long enough for the worst case chaotic transitions of the network to settle to a stable output before accepting the output (the end). The waiting long enough is accomplished with explicit memory elements (registers – data lifeboats) which, at the beginning of each interval, present stably maintained input to the Boolean network, which during the interval isolate their contents from the indeterminate chaos of the transitioning Boolean network and which at the end of the interval accept the stably presented output data of the Boolean network into the memory elements.
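The interval-and-register scheme can be sketched with a toy unit-delay model. The network below is the classic static-1 hazard f = (a AND b) OR (NOT a AND c) with b = c = 1; the network and the interval length are illustrative assumptions, not any particular circuit. When a falls from 1 to 0 the output glitches to 0 mid-interval even though the Boolean equation says f should stay 1; the register, sampling only at the end of an interval longer than the worst-case settling time, never sees the glitch.

```python
# Sketch of a glitching Boolean network isolated by a time interval:
# each gate has one unit of delay; the register samples only the
# settled value at the end of the interval. Illustrative model only.

INTERVAL = 4  # time units; assumed to exceed worst-case settling

def network_trace(a):
    # stable initial state of f = (a AND b) OR (NOT a AND c)
    # for a = 1, b = c = 1
    na, g1, g2, f = 0, 1, 0, 1
    trace = []
    for _ in range(INTERVAL):
        # all gates update simultaneously from the previous values,
        # modeling one unit of delay per gate
        na, g1, g2, f = (1 - a), (a & 1), (na & 1), (g1 | g2)
        trace.append(f)
    return trace

trace = network_trace(0)  # present the new input a = 0
print(trace)              # the transient 0 mid-interval is the glitch
print(trace[-1])          # the register captures only the settled value
```

The glitch is real behavior of the directly instantiated equation; the recurring interval and the register do not remove it, they merely hide it from observation.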

The recurring time interval and the memories emulate the flow coordination and memory of a mathematician with pencil and paper in relation to an enlivened Boolean logic network. Boolean logic cannot compute on its own logical merits without this emulated ghost of a mathematician.

So what, you say? It works. But what if there is a logic that can compute on its own logical merits, that has no need of any extra-logical referent such as the ghost’s time interval and memories? Might it serve better than Boolean logic both conceptually and practically?

The Sequential Process
In mathematics, step by step sequence is fundamental to proof. If a proof proceeds in any manner other than one humanly understandable step at a time it cannot be accepted. Sequence of computation is not as fundamental but is deeply ingrained by history, practice and concept. Historically, the 12th century place-value arithmetic algorisms specified sequences of steps that anybody could follow to perform a computation. This meshed with a central feature of mathematics: the rare resource of a single mathematician, which must be used over and over in sequence to perform a computation. Computing by a sequence of steps became standard practice.

The sequence of proof and the sequence of computation became conflated in Turing’s machine which was a proof about computation in which sequential computation was an integral part of the proof. Turing’s machine became a definition of mathematical computation which eventually became the notion of algorithm which, in turn, became a fundamental paradigm of computation in computer science.

“The notion of the algorithm is basic to all computer programming…”(1)

“One of the concepts most central to computer science is that of an algorithm.”(2)

The notion of the algorithm is typically defined by simply presenting a list of properties that an expression must possess to qualify as an algorithm.  The following is typical.

1. An algorithm is a sequence of steps that must terminate in a finite number of steps
2. Each step must be precisely defined
3. An algorithm must effectively and deterministically yield a correct solution (3)

It is easy to see how this list of restrictive characteristics serves to define what might be acceptable as a mathematical proof or computation, but what conceptual service does the distinction algorithm/not algorithm perform for computer science? Is the expression of one fundamentally different from the expression of the other? Is one interpreted differently from the other? Are algorithms first class citizens in some sense and non-algorithms second class citizens?

Important programs do not qualify as algorithms. An operating system is not supposed to terminate nor does it yield a singular “solution”. A program with truly random input is not an algorithm. No program with a bug can be an algorithm and it is generally accepted that no significant program can be demonstrated to be bug free. Programs and computers that utilize concurrency are not algorithms.  What does it mean when a sequential program qualifying as an algorithm is parallelized by a vectorizing compiler, and no longer qualifies as an algorithm?

These questions have not gone unnoticed.

“…there is an extension of the notion of algorithm (called nondeterministic algorithms).”(4)

“Any computer program is at least a semi-algorithm and any program that always halts is an algorithm.”(5)

“A procedure which has all of the characteristics of an algorithm except that it possibly lacks finiteness may be called a ‘computational method.’”(6)

“There is another word for algorithm which obeys all of the above properties except termination and that is computational procedure.”(7)

“An algorithm A is a probabilistically good algorithm if the algorithm “almost always” generates either an exact solution or a solution with a value that is “exceedingly close” to the value of the optimal solution.”(8)

“The procedure becomes an algorithm if the Turing machine always halts”.(9)

“By admitting probabilistic procedures in algorithms….”(10)

“…if, after executing some step, the control logic shifts to another step of the algorithm as dictated by a random device (for example, coin tossing), we call the algorithm random algorithm.”(11)

“An algorithm which is based on such convergence tests is called an infinite algorithm.”(12)

“Algorithms that are not direct are called indirect.”(13)

“We drop the requirement that the algorithm stop and consider infinite algorithms” .(14)

With these exercises in labeling, a computation can now not terminate, not be deterministic and not yield a specific solution (not be an algorithm). All that is left is “sequence of precisely defined steps”. Instead of trying to “fix” the notion of the algorithm perhaps we should consider the possibility that the notion is not as fundamental to computer science as once supposed.

Mathematicians and computer scientists are pursuing fundamentally different aims.

Mathematics is concerned with the nature of specific computations
independent of how they might be expressed.

Computer  science is concerned with the nature of the expression of computation
independent of any specific computation.(15)

The primary questions of computer science are not of computational possibilities but of expressional possibilities.

“computer science itself becomes no more (and no less) than the discipline of constructing appropriate descriptive languages.”(16)

Is limiting description to “a sequence of precisely defined steps” a help, a hindrance or simply beside the point?

What is a step? How is it sequenced? What does it do? The notion of the algorithm remains silent. A good engineer, however, can fill in the blanks. If there is sequence there must be a sequence controller to instantiate each step in the proper order. Each step must perform some behavior which must ultimately accumulate to realize the computation so there must be an accumulating memory which must be equally accessible to all steps in the sequence. Again, the engineer, filling in the blanks of an incomplete model, emulates the liveness, flow control and memory of the mathematician with pencil and paper.
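The engineer's filled-in blanks can be sketched minimally: a sequence controller instantiating precisely defined steps, one at a time, against an accumulating memory equally accessible to all steps. The particular steps (computing (2 + 3) * 4) are an arbitrary assumption for illustration.

```python
# Sketch of the filled-in blanks of the sequential model: a sequence
# controller, precisely defined steps, and an accumulating memory
# equally accessible to every step. Purely illustrative.

memory = {"a": 2, "b": 3, "c": 4, "t": None, "result": None}

steps = [
    lambda m: m.update(t=m["a"] + m["b"]),       # step 1: t = a + b
    lambda m: m.update(result=m["t"] * m["c"]),  # step 2: result = t * c
]

# the sequence controller: instantiate each step in the proper order
for step in steps:
    step(memory)

print(memory["result"])  # 20
```

Nothing in the steps themselves provides the liveness, the ordering or the memory; the surrounding loop and the shared dictionary supply them, which is precisely the emulated mathematician with pencil and paper.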

The Borrowing
Boolean logic and the sequential process are not incomplete because of an inadequate application of thought. Both Boolean logic and the sequential process models of computation serve quite adequately in association with a mathematician. Given these adequate models of mathematical computation all that was needed was to make them work without a mathematician which can be considered an engineering problem, in contrast to a conceptual problem. Methods of emulating the behavior of the mathematician were quickly developed. Methods so serviceable and economically viable that they still serve today albeit with rapidly decreasing economy. So discovering a coherent model of computation from first principles which do not include a mathematician was not a burning concern. Early efforts in this direction such as asynchronous circuits, cellular automata, neural networks and the perceptron did not thrive.

We have been very successful at emulating the mathematician, but can computer science dispense with the ghost of the mathematician and model computation with its own unique conceptual foundations? We must first understand why the mathematician ghost persists.

The Confusions of Sequentiality
Emulating the mathematician results in concepts of computation locked in a tangle of mutual dependencies that collectively resist the evolution of any individual concept.

The Inherent Concurrency of Sequentiality
Sequence, in and of itself, does not specify a computation. A sequence has to realize the correct dependency relations among the steps. A computation is specified as a progression of dependent operations as with an algebraic equation, a flow chart or a programmer’s jottings. This dependency specification inherently specifies operations that can occur simultaneously or in any order. The “simultaneously” specifies possible concurrency. The “in any order” means that the possible concurrency of the dependency specification can be mapped to a sequence of operations. But it also means that there can be a multitude of correct sequential orders that realize the dependency relations of the computation.

Below at left is a dependency specification. A and B can occur in any order before D but if either A or B occur after D the sequence is incorrect. C can occur in any order with A, B and D. At right are the eight correct sequences derivable from just the shaded portion of the dependency specification. The other 16 possible sequences are all incorrect. The only means of determining a correct sequence from an incorrect sequence in a typically enormous set of possible sequences (one might call this the chaos of sequentiality) is to refer to the dependency specification with its expression of concurrency.
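The counts above can be made concrete with a short enumeration. The dependency relations are taken from the text (A and B each precede D; C is unordered with respect to A, B and D): of the 24 possible orderings of the four operations, exactly 8 respect the dependencies and the other 16 violate one.

```python
# Enumerate all orderings of the four operations and keep those that
# respect the dependency specification: A before D and B before D,
# with C unordered relative to the others.

from itertools import permutations

def correct_sequences():
    ok = []
    for seq in permutations("ABCD"):
        if seq.index("A") < seq.index("D") and seq.index("B") < seq.index("D"):
            ok.append("".join(seq))
    return ok

seqs = correct_sequences()
print(len(seqs))  # 8 correct sequences out of 24 possible
print(seqs)
```

The only way the filter can distinguish a correct sequence from an incorrect one is by consulting the dependency relations, which is the point: the expression of sequence derives from the expression of concurrency.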

The dependency relations and their possible concurrency must be understood to construct a correct sequence. The expression of sequence derives from the expression of concurrency. It cannot be otherwise.

It would seem that the dependency specification might make a more concise and coherent object of primitive consideration for computer science than sequence. So, why is it not? Blame the concurrency.

The Fear of Concurrency
Time indeterminacy:

“The introduction of concurrency into computation opens Pandora’s box to release the possibility of non determinacy and a host of other complications, including deadlock and livelock …. Events within concurrent systems do not necessarily occur at predetermined times, but may occur in different orders depending upon the relative speeds of different system parts. The usual assumption, which is also physically realistic, is that relative speeds cannot be controlled so precisely that the designer or programmer can depend upon them to enforce sequencing requirements.”(17)

Speed indeterminacy:

“Unfortunately, asynchronous circuits are difficult to design because they are concurrent systems. When trying to understand a synchronous circuit, it is possible to pretend that events occur in lock-step. Variations in the speeds of functional elements can be ignored, since the clock is timed to allow for the worst-case delays. The lock-step model does not work for speed-independent designs — a correct design must exhibit proper behavior no matter what the speeds of its components. When several components are operating concurrently, there may be a very large number of possible execution paths, each corresponding to a different set of delays in the components. Non determinism resulting from unknown or varying delays is the essential problem in all concurrent systems. For a property to hold for a concurrent system it must hold for every possible execution.”(18)

Execution path indeterminacy:

“Although the architectural freedom of asynchronous systems is a great benefit, it also poses a difficult challenge. Because each part sets its own pace, that pace may vary from time to time in any one system and may vary from system to system. If several actions are concurrent, they may finish in a large number of possible sequences. Enumerating all the possible sequences of actions in a complex asynchronous chip is as difficult as predicting the sequences of actions in a schoolyard full of children. This dilemma is called the state explosion problem. Can chip designers create order out of the potential chaos of concurrent actions?”(19)

To understand what these authors are talking about, consider again, the Boolean logic combinational network. It is a directly instantiated dependency specification with all the attendant concurrency, and it delivers incorrect behavior (it glitches). The relative speeds of the functions can vary and their outputs occur in different orders effecting a variety of execution paths causing a flurry of incorrect outputs (the exploding state space).

The above quotes imply that the only way to avoid the glitching is for all input transitions to arrive simultaneously or in specific orders, neither of which can be expressed in terms of Boolean logic nor assured by timing in a practical Boolean logic network. The stock solution is to isolate and contain the indeterminate chaos (glitching) of the combinational network by bounding it with a time interval within which the Boolean network will stabilize to a correct output. The time interval, recurring, sequences the behavior of the network, turning the Boolean network as a whole into a timed step of sequential computation. The chaos of concurrency is tamed with time and sequence. Sequence, thus, appears to be more fundamental than concurrency. But concurrency lurks in the Boolean combinational network within each sequential step. So, is not concurrency more fundamental than sequence?

From the mathematician’s point of view, any concurrent expression (a so-called partial ordering) can be reduced to a sequential expression (a so-called total ordering). The chaos of concurrency can be tamed simply by mapping it to a controlled sequence. Sequentiality appears to reside at a reductive conceptual bottom. But sequence control cannot be primitive. The most primitive sequence controller cannot itself be sequence controlled or “controlled” in any sense but must proceed in terms of its own intrinsic behaviors which may include concurrent behaviors which cannot be reduced to controlled sequence. So concurrency must, ultimately, reside at the reductive conceptual bottom.

On one hand it appears that concurrency can and should be characterized in terms of sequentiality; cooperating sequential processes(20), communicating sequential processes(21), interleaved sequential processes(22) and so on.

On the other hand, as shown, sequentiality derives from concurrency, sequential steps contain concurrency, and sequential control cannot preclude all concurrency. Sequentiality cannot fully characterize concurrency and, so, cannot be primitive.

The difficulty is that characterizing concurrency in terms of sequentiality which is derived from concurrency circularly begs the question of primitivity and builds a house of cards in the clouds. Concurrency is fundamental to computer science no matter how you cut the pie and must be confronted directly and primitively.

Is the chaos of concurrency an existential inevitability or is it an artifact of a conceptual point of view? Why is a Boolean combinational network chaotic?

The Mathematical Function
A Boolean combinational network glitches because it is a network of mathematical functions. A mathematical function is a mapping rule for the mathematician that was never intended to behave on its own, and when directly instantiated becomes a continuously responsive mapping behavior, continually presented with input values and continually presenting a result value. A continuously responsive behavior can temporarily output an erroneous result if its inputs do not transition simultaneously. The erroneous outputs of functions early in a network of functions will present incorrect result values to subsequent functions resulting in a blizzard of incorrect values rippling through the network followed by a wavefront of correct values that eventually stabilize the output of the network to the correct resolution of the presented input values. There is no way to determine, from the network behavior itself, when the stabilization occurs and, as lamented above, the blizzard of incorrectness cannot be avoided either in terms of the logic or with timing control. An external influence must determine when the computation begins and when the computation ends.

But what if a function behavior could appreciate when a mapping starts and when a mapping finishes and output only a correct result? Might a concurrent network composed of such behaviors exhibit fully determined behavior?

But first, why is a function continually presented with data values to map?

Passive Symbols: Variables and Their Values
The notion of a variable continually asserting a value is not a mathematical concept. An algebraic variable does not assert a value but is an empty place holder to be substituted with a value when the equation is used(23). The mathematician substituting the input variables (filling its emptiness) of a function is the start of a computation and when the mathematician writes down the result value substituting the output variable (filling its emptiness) the computation is finished.

The notion of constantly asserted data values arises from a confluence of binary representation in mathematics, the two state representation of digital electronics, the correspondence of Boolean logic with binary switching and the notion of state space from physics.

A binary variable has two values mapped to the two states of electronic representation so an electronic binary variable is always asserting a data value and cannot not assert a data value. Consequently, an electronic Boolean logic switching function is continually presented with data values by its electronic variables.

In physics a state space consists of a collection of variables that continually assert values and which are continually interacting and transforming their values. Physicists imagine sampling a dynamic state space at an instant to observe a static state.

“We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.”(24)

The “state” is the record of progress of the computation of the universe and state space has come to be generally regarded as the formal record of progress of any computation. Computer science tailored this dynamic notion of a state space of variables continually asserting values to allow periodic instances of sampleable stability. The values of variables transition only during a computation step and the state space of variable values is static between steps. This allows predicting and observing the expected state after each step which brings us to the crux of the matter: computational confidence.
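The tailored state space can be sketched minimally. The particular state and transition rule below are arbitrary assumptions; the point is that values transition only inside a step, so between steps the entire state is static and can be sampled and its successor predicted.

```python
# Sketch of a state space with periodic instances of sampleable
# stability: the state transitions only during a step and is static
# between steps. State and transition rule are arbitrary assumptions.

def step(state):
    # each step observes part of the state and transitions part of it
    return {"x": state["x"] + state["y"], "y": state["x"]}

state = {"x": 0, "y": 1}
for _ in range(3):
    print(state)        # sampled between steps: always a static snapshot
    state = step(state)
print(state)
```

Given the initial state and the rule, every intermediate snapshot is predictable and observable, which is exactly the computational confidence the sequential model is built to deliver.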

The Mutual Dependency of State Space and Sequence
For a state space to be reliably observable there must be intervals of stability with no transitions occurring. The notion of controlled sequence provides such intervals. Sequence, in turn, relies on intervals of stable state. Each sequential step observes a part of the state, transitions a part of the state and leaves a stable state for the next step to observe.

If sequential steps are allowed to overlap (concurrency) the possibility arises that a step might observe a transitioning state (a race) and, given the enormous variety of correct sequences, it becomes difficult to ensure the integrity of state observation for overlapping steps.

If the state space fragments, the possibility of simultaneous observation arises, leading to the possibility of overlapping steps.

If any inherently concurrent behavior does exist, such as in a Boolean logic network, it must be isolated within a sequential step such as a time interval, a semaphore state, a channel access and so on.

Computational Confidence
Predictable observability is the touchstone of human computational confidence. The computer science notion of state space, entwining time, memory and sequence control in a tangle of mutual dependencies, is all about predictable observability. Concurrency messes with this touchstone of computational confidence in ways that effectively destroy it. Hence, the fear of concurrency. Yet concurrency exists and is fundamental to computer science. Might there be another path to computational confidence that does not preclude concurrency?

Continued in part 2.

References

1. Donald E. Knuth, Fundamental Algorithms (Reading, Mass., Addison-Wesley, 1969) p. 1.
2. Zenon W. Pylyshyn, “Theoretical Ideas: Algorithms Automata and Cybernetics,” in Perspectives on the Computer Revolution, ed. by Zenon W. Pylyshyn (Englewood Cliffs, N. J., Prentice-Hall, 1970) pp. 60-68.
3. Donald E. Knuth, Fundamental Algorithms (Reading, Mass., Addison-Wesley, 1969) p. 4-6.
4. M.S. Carberry, H.M. Khalil, J.F. Leathrum and L.S. Levy, Foundations of Computer Science (Potomac, MD, Computer Science Press, 1979) p. 16.
5. R.R. Korfhage, “Algorithm,” in Encyclopedia of Computer Science, eds Anthony Ralston, Chester Meek (1st ed. 1976),(New York, Petrocelli/Carter, 1976) p. 49.
6. Donald E. Knuth, Fundamental Algorithms (Reading, Mass., Addison-Wesley, 1969) p. 5.
7. Ellis Horowitz and Sartaj Sahni, Fundamentals of Computer Algorithms (Potomac, MD, Computer Science Press, 1979) pp. 1-2.
8. Benjamin W. Wah, C. V. Ramamoorthy “Theory of Algorithms and Computation Complexity with Applications to Software Design,” in Handbook of Software Engineering, ed. by Charles R. Vick and C. V. Ramamoorthy (New York, Van Nostrand Reinhold, 1984) p. 88.
9. Kurt Maly, Allen R. Hanson, Fundamentals of the Computing Sciences (Englewood Cliffs, N.J., Prentice-Hall 1978) p. 41.
10. F. S. Beckman, Mathematical Foundations of Programming (Reading, Mass., Addison-Wesley,1980) p. 398.
11. E. V. Krishnamurthy, Introductory Theory of Computer Science (New York, Springer-Verlag, 1983) p. 3.
12. E. V. Krishnamurthy, Introductory Theory of Computer Science (New York, Springer-Verlag, 1983) p. 3.
13. John K. Rice, John R. Rice, Introduction to Computer Science (New York, Holt, Rinehart and Winston, 1969) p. 47.
14. John K. Rice, John R. Rice, Introduction to Computer Science (New York, Holt, Rinehart and Winston, 1969) p. 49.
15. Karl M. Fant, Computer Science Reconsidered: The Invocation Model of Process Expression, (Hoboken, New Jersey, Wiley Interscience, 2007) p. 7.
16. Harold Abelson, Gerald Jay Sussman with Julie Sussman, Structure and Interpretation of Computer Programs (Cambridge Ma., MIT Press, New York, McGraw-Hill, 1985) p. 295.
17. Charles Seitz ed., Resources in Parallel and Concurrent Systems, ACM Press, New York, 1991, introduction, page ix.
18. David L. Dill, Trace Theory for Automatic Hierarchical Verification of Speed-Independent Circuits, (Cambridge, MIT Press, 1989). pp. 2,3.
19. Ivan E. Sutherland and Jo Ebergen, “Computers Without Clocks”, Scientific American, Vol. 287, No. 2, August 2002, pp. 62-69.
20. E. W. Dijkstra, “Cooperating Sequential Processes,” in Programming Languages, ed. F. Genuys (New York, Academic Press, 1968) pp. 42-112.
21. C. A. R. Hoare, Communicating Sequential Processes, (Englewood Cliffs, New Jersey, Prentice Hall International, 1985).
22. M. Ben-Ari, Principles of Concurrent Programming, (Englewood Cliffs, New Jersey, Prentice Hall International, 1982). p. 18.
23. W. Gellert, H. Kustner, M. Hellwich, H. Kastner eds., The VNR Concise Encyclopedia of Mathematics, Van Nostrand Reinhold Company (New York, 1977), P 40.
24. Pierre Simon Laplace, A Philosophical Essay on Probabilities, translated by F. W. Truscott and F. L. Emory, Dover (New York, 1951) p. 4.