# Flow Computation

It is called flow computing because values spontaneously flow, interact, and compute through a network of linked oscillations: a behaving structure in contrast to a controlled structure. It is not sequential and not synchronous. There is no clock, no flip-flops, nothing global, nothing central, no encompassing state. Showing how it all works and why it is worthy of consideration is the purpose of this site.

###### NCL enables a model of flow computation expressed entirely in terms of logical relationships. (see LDD chapters 1, 2 and 3)

in contrast to logical relationships entangled with critical delay relationships

###### Represented as a directed network of flow paths implemented with linked oscillations, each individually striving to oscillate.

in contrast to a passive sequence enlivened by a single oscillating clock

###### In which wavefronts of active data spontaneously flow from oscillation to oscillation through a background of emptiness. (see flow system illustration – see also talking oscillations)

in contrast to passive data manipulated by a state machine and clock or by sequenced instructions

###### Each oscillation contributes an NCL combinational function and maintains the result,

in contrast to Boolean logic cones and registers

###### all of which the flowing wavefront accumulates to a realization of the computation. (see spontaneous flow structures)

in contrast to a mathematician with pencil and paper flowing data values through an algebraic equation, a clock-driven state machine flowing data values through registers, or a sequenced program flowing data values through static memory.
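The spontaneous flow described above can be sketched in code. The model below is my own highly simplified illustration, not the site's: it omits the dual-rail encoding and the explicit acknowledge paths of a real NCL pipeline, and simply lets each stage accept its predecessor's value whenever that value is the opposite of what the stage currently holds, so DATA wavefronts advance through a background of NULL with no clock coordinating them.

```python
# Sketch: wavefronts flowing through linked stages (simplified NCL-style
# alternation of DATA and NULL -- an illustration, not a real NCL design).

NULL, DATA = 0, 1

class Stage:
    """One oscillation: holds NULL or DATA and requests the opposite."""
    def __init__(self):
        self.value = NULL          # stages start in the all-NULL background

    def request(self):
        # A stage holding NULL requests DATA; holding DATA it requests NULL.
        return DATA if self.value == NULL else NULL

def step(stages, source):
    """One relaxation pass: each stage accepts its predecessor's value
    when that value matches what the stage is requesting."""
    prev_values = [source] + [s.value for s in stages[:-1]]
    for stage, incoming in zip(stages, prev_values):
        if incoming == stage.request():
            stage.value = incoming

stages = [Stage() for _ in range(3)]
trace = []
# Alternate DATA and NULL wavefronts at the input and watch them propagate:
# each pass, a DATA wavefront advances one stage through the NULL background.
for source in [DATA, NULL, DATA, NULL, DATA, NULL]:
    step(stages, source)
    trace.append([s.value for s in stages])
print(trace)
```

Running this shows alternating DATA/NULL wavefronts marching through the stages without any global step counter being visible to the stages themselves; each stage acts only on its local input and its own state.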

The table below indicates how the components of the flow computation model relate to the more familiar RTL and sequential models of computation.

| Flow computation | RTL | Sequential |
| --- | --- | --- |
| Oscillation | clock | sequence control |
| NCL completeness coupling | register | memory |
| NCL linking logic | Boolean logic cone | instruction |
| Oscillation linking structure | state machine/registers | sequence/memory allocation |

## So What?

#### Optimal Computational Efficiency

Perform only necessary logical activity with local logical signal transitions.

No clock banging long-haul signals. No state machine discarding unneeded computation.

Intrinsic quiescence

No extra logic attempting to avoid the unneeded computation (setting zero, clock gating).

Perform with actual delays delivering optimal throughput per effort.

in contrast to wasting throughput by always waiting on worst-case delays.

Adapts continuously to voltage variation, temperature variation, and performance demands. The voltage can be algorithmically scaled to continually deliver optimal performance per µW.

in contrast to a limited set of predefined voltage/clock rate options.

#### Ockham’s razor

Seek the fewest and simplest assumptions to accomplish the abstract mission.

An addressable memory can be implemented with NCL as a network of pipeline rings. While it is very inefficient and not practically feasible, it is a perfectly feasible abstraction, in the same sense that a paper tape is a feasible abstraction but infeasible practically. Sequential processors have already been implemented with NCL, and with an NCL memory, NCL becomes sufficient, in the abstract, to implement any computation.
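The pipeline-ring idea of storage can be illustrated with a toy model. The sketch below is an assumption of this page's abstraction, not a practical design: a stored value is simply a DATA wavefront circulating around a ring of stages, the way a delay-line memory holds bits in flight, with each stage taking its predecessor's value whenever DATA meets NULL.

```python
# Sketch: storage as a circulating wavefront in a ring of pipeline stages
# (a toy illustration of the pipeline-ring abstraction, not a real memory).

NULL = None

def tick(ring):
    """Parallel update: each stage takes its predecessor's value when
    exactly one of the two is NULL (DATA and NULL wavefronts alternate)."""
    prev = ring[-1:] + ring[:-1]                 # predecessor of each stage
    return [p if (p is NULL) != (r is NULL) else r
            for p, r in zip(prev, ring)]

ring = [7, NULL, NULL, NULL]   # one DATA wavefront: the stored value 7
seen = []
for _ in range(4):
    ring = tick(ring)
    seen.append(ring.index(7))  # where the stored value currently sits
print(seen)
```

The value never rests anywhere; "memory" here is just a wavefront that keeps flowing around the ring, which is why the scheme is an honest abstraction but an inefficient implementation.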

A Turing machine assumes a state machine somehow sequencing its states and a paper tape with a read/write mechanism. While practical implementation is left unspecified, a modern computer is a serviceable facsimile.

The modern computer assumes Boolean logic which requires an assumed time referent to filter glitches and determine completeness. NCL does not glitch, determines its own logical completeness and does not need an assumed time referent.
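The glitch-free completeness behavior can be sketched with a model of an NCL threshold gate. This is my own illustration under the standard THmn definition (assert when at least m of n inputs are asserted, deassert only when all inputs are NULL, otherwise hold); the gate names and the demo inputs are assumptions of this sketch.

```python
# Sketch: a THmn threshold gate with hysteresis, the NCL primitive.
# Partial inputs never glitch the output; the gate switches only on a
# complete DATA set (threshold reached) or a complete NULL set.

class THmn:
    def __init__(self, m, n):
        self.m, self.n = m, n
        self.out = 0                       # gates start deasserted (NULL)

    def eval(self, inputs):
        assert len(inputs) == self.n
        count = sum(inputs)
        if count >= self.m:
            self.out = 1                   # threshold reached: assert
        elif count == 0:
            self.out = 0                   # all inputs NULL: deassert
        # otherwise: hold the previous value (hysteresis) -- no glitching
        return self.out

# A TH22 gate (a C-element) detecting completeness of two inputs:
g = THmn(2, 2)
outs = [g.eval(x) for x in [(0, 0), (1, 0), (1, 1), (1, 0), (0, 0), (0, 1)]]
print(outs)
```

Note how the partial inputs (1, 0) and (0, 1) leave the output unchanged: the gate itself expresses "wait until complete," which is what lets NCL dispense with an external time referent.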

The abstract mission of characterizing computation can be accomplished with the assumption of a single coherent logic. While the Turing machine and Boolean logic have served well, the single assumption of the more encompassing Null Convention Logic will pave the road to the future.
