All posts by Karl Fant

Unifying Hardware and Software

Matt Whiteside 8/9/2017

Q: A language that “unifies hardware and software, uniformly characterizing computation through all levels of abstraction” is an attractive idea that has also crossed my mind, but I don’t see it getting much appreciation from programmers. To give a concrete example from my own experience of a deficiency that a unified approach would address, consider GPU programming. You start with a nice high-level, type-safe, statically analyzed language running on the CPU, which must then interface with a shader program on the GPU, and therefore throw all the type safety, static analysis, correctness guarantees, etc., out the proverbial window. It’s hard even to imagine what possibilities could open up if even a moderate amount of progress were made toward improving this situation.

A: The unification: I see both ends doing the same computation. The processor sequences operations; the clock sequences operations. EEs actually talk about the massive concurrency at the circuit level, yet they don’t seem to realize that they throw 99.9% of that concurrency away to sequence one or a few steps at a time. They are mildly aware of the inefficiency, in that they include clock gating, but, buried in transistor details, they are not sensitive to what they are doing in the context of computation.

This is one of the thrusts of what I am pursuing: what is the fundamental nature of computation, what is its most primitive implementation consistent with that nature, and how does this implementation seamlessly scale to arbitrary complexity? This is the question that CSR began addressing and which I am still working on. Computation should be uniformly characterizable in all its manifestations. It seems to me that this uniform characterization must embody concurrency as fundamental, and that the notion of the sequential process is a fundamental flaw in contemporary computer science. The traditional notion that any concurrent process can be mapped to a sequential process, implying that sequentiality is more fundamental than concurrency, is a red herring, similar to the notion that planetary motions can be mapped onto uniform motion around perfect circles. Both are true enough, and both sort of work, and because of that they mask access to a more effective conceptual essence.

My thesis is that the model of networks of linked oscillations spans both primitive implementation and abstract interpretation, unifying the two domains. Part 4 will present the flow network interpreter architecture. The view of computation is modified for both domains, but then they are unified.

Multi-rail Encoding

James Talbert 8/3/2017

Q: With a multi-rail signal, do the lines have to be mutually exclusive? For completeness detection, they have to have a deterministic result set, with no overlap (001 overlaps with 011, 101, and 111). This may be what you meant by “With a single data value and multi-rail you can straightforwardly encode any meaning in a single unbounded encoding regime,” but I want to check that I’m not missing something.

An example: if I have a 4-rail signal with states {0011, 0101, 0110, 1001, 1010, 1100}, I get 6 states instead of the 4 that a 1 of 4 encoding gives with the same number of lines. Since all these states are distinct, and partially complete versions do not form valid states, I would think that they could be used as representations.
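
A quick sketch (Python here, purely illustrative and not part of the original exchange) checks both properties the example relies on: the 2 of 4 code has exactly 6 codewords, and no codeword is a partial version of another, so an incomplete arrival can never be mistaken for a complete value.

# Illustrative check: 6 distinct 2-of-4 codewords, and no codeword is a
# proper subset of another, so partially complete versions are never valid.
from itertools import combinations

RAILS = 4
WEIGHT = 2  # exactly two rails asserted per codeword

codewords = [frozenset(c) for c in combinations(range(RAILS), WEIGHT)]
assert len(codewords) == 6  # {0011, 0101, 0110, 1001, 1010, 1100}

for a in codewords:
    for b in codewords:
        # "a < b" tests proper subset; it is never true because every
        # codeword asserts the same number of rails.
        assert not (a < b), "one codeword is a partial version of another"

print("6 distinct codewords; none is contained in another")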

A: Yes. When I say multi-rail I mean 1 of N encoding. There are delay-insensitive M of N encodings, like your 2 of 4, which yield more distinct meanings with fewer wires but have other disadvantages. First, M of N encodings require decoding, which makes combination more expensive. Meaning is inherently a 1 of N condition: out of multiple possible meanings there is typically only one meaning intended. The 2 of 4 code can encode 6 meanings, but the code has to be decoded to a 1 of 6 representation to determine the meaning. With a 1 of 6 encoding there are 6 wires but there is no decoding. It matches the natural condition of meaning.
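
To make that decoding step concrete, here is a minimal sketch (Python, illustrative only; the assignment of rail pairs to meanings is an arbitrary choice): each of the 6 meanings of the 2 of 4 code is recovered by ANDing one specific pair of rails, while a 1 of 6 value needs no gates at all, because the asserted wire already is the meaning.

# Illustrative sketch of the decoding a 2 of 4 code needs before its meaning
# is available as a 1 of N condition.

def decode_2of4_to_1of6(rails):
    """rails: 4 booleans, exactly two asserted -> 6 booleans, exactly one asserted."""
    a, b, c, d = rails
    return [a and b, a and c, a and d, b and c, b and d, c and d]

# A 1 of 6 value needs no such logic: the asserted wire is the meaning.
one_of_six = [False, False, True, False, False, False]

print(decode_2of4_to_1of6([True, False, True, False]))  # rails a and c up -> meaning 1
print(one_of_six.index(True))                           # meaning 2, read directly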

The 2 of 4 encoding would be efficient only if logic is much cheaper than wires. Chapter 11 of LDD goes into this issue in some depth.

Second, M of N codes do not scale. The 2 of 4 code represents 1 of 6 meanings. It cannot represent 1 of 7 or 1 of 3. The code locks one into a base 6 representation with the associated decoding logic. A 1 of N encoding has no associated decoding logic, can directly represent any set of mutually exclusive meanings, and can form any base for further encoding.
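
By contrast, a 1 of N digit can be formed directly for any base, and different bases can even be mixed digit by digit. A minimal sketch (Python, illustrative names):

# Illustrative sketch: a 1 of N digit is formed directly for any base,
# with no decoding logic.

def encode_1ofN(value, base):
    """Return `base` rails with only the rail for `value` asserted."""
    return [i == value for i in range(base)]

def decode_1ofN(rails):
    """The asserted rail is the meaning; reading it is the whole 'decode'."""
    return rails.index(True)

# A two-digit mixed-radix value: one base 3 digit and one base 7 digit.
digits = [encode_1ofN(2, 3), encode_1ofN(5, 7)]
print([decode_1ofN(d) for d in digits])  # -> [2, 5]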

Rethinking Computer Science Part 4: A Network Interpreter

Can there exist an approach to universal interpretability that relates directly to the dependency network and which preserves its distributed concurrency? Continue reading Rethinking Computer Science Part 4: A Network Interpreter

Rethinking Computer Science Part 3: A Sequential Interpreter.

Part 3 presents the logical structures of a memory, a configurable oscillation and a sequential interpreter ring forming a traditional universal sequential processor. Continue reading Rethinking Computer Science Part 3: A Sequential Interpreter.

Rethinking Computer Science: Purpose of Site

The purpose of this site is to present and explore a new view of computation and computer science,

  • not as a sequence of steps controlling a machine altering contents of a memory
    • but as wavefronts of computation and state spontaneously flowing through a network of linked oscillations,
  • not as clock actualized, step by step, time determined, centralized control
    • but as self actualizing, event driven, logically determined, distributed concurrent local coordination,
  • not as information manipulation
    • but as information interaction,
  • nothing global, nothing central, nothing timed,
  • a model of computation that applies to all forms of computation, natural and artificial, and that applies uniformly to all levels of abstraction from primitive implementation through general programmability,

a new view of computation and computer science.

Rethinking Computer Science Part 1: The Problem

Introduction
Computer science is formulated with concepts borrowed from mathematics. Even though mathematics defines mathematical computation and computer science is about computation, it is argued here that there are fundamental differences between the two, that computer science is not well served by the borrowed concepts, and that there exists a conceptual grounding that more effectually addresses the goals and problems of computer science. Continue reading Rethinking Computer Science Part 1: The Problem