Newsgroups: comp.arch
Subject: Re: Asynchronous from the ground up
Date: 11 Dec 2006 14:29:57 -0800
Message-ID: <>

wrote:
> Hello Comrades,
> Is it true that one of the biggest problems with Asynchronous designs
> is interfacing the asynchronous logic with other clocked domains?

No, this is easily solved with an asynchronous interface (which is a
non-trivial design affair in its own right).

> If so, what would be the problems with a computer that was designed
> asynchronously from the ground-up?

The basic problem with asynchronous computations is the "concordance"
problem. This occurs anywhere in an asynchronous design where two (or
more) signals (busses) with different timing meet at a multiplexor. A
common place for this to happen is the forwarding logic of a typical
pipeline. Imagine the difficulty of figuring out when an ALU operation
can begin while waiting for a result from any of the 3 other ALUs'
previous results, 2 data cache port results, the non-cacheable value
port (L2 cache in), and the miscellaneous data-input port
(immediates,...). Now multiply this problem by the number of ports into
the ALU and the number of ALUs and AGUs and FPUs, and you get some idea
as to the difficulty involved.

Selecting one input is not usually the problem; the problem is deciding
when it is safe to begin the subsequent ALU operation. Often it takes
more time to compute 'when' to begin an operation than it would have
taken to just run the operation synchronously (as in a standard
pipeline).
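The contrast above can be put in a tiny timing model. This is my own
illustrative sketch, not anything from the original post: all the port
names and arrival times are made up, and the point is only that the
asynchronous ALU must wait for the *concordance* of the select signal
and the selected operand, while the synchronous ALU just waits for the
clock edge that was timed for the slowest path.

```python
# Hypothetical timing model of one ALU input multiplexor.
# All names and numbers are illustrative assumptions.

def async_start_time(select_ready, arrivals, selected):
    """The ALU may fire only after the select is resolved AND the
    selected operand has arrived -- the two must 'concord'."""
    return max(select_ready, arrivals[selected])

def sync_start_time(cycle_time, arrivals, selected):
    """A synchronous pipeline just waits for the next clock edge,
    which is timed for the slowest path by construction."""
    del arrivals, selected          # the clock doesn't care which port won
    return cycle_time

# Six sources feeding one ALU port: 3 other ALU results, 2 data cache
# ports, and the L2-in port (times in arbitrary units).
arrivals = {"alu0": 0.9, "alu1": 1.1, "alu2": 0.8,
            "dc0": 1.4, "dc1": 1.5, "l2": 3.0}

# Even when the chosen operand (alu2) was ready long ago, a late select
# signal dominates the start time:
print(async_start_time(select_ready=1.3, arrivals=arrivals, selected="alu2"))
```

Multiply this little decision by every port into every ALU, AGU, and
FPU, and the 'when to begin' computation itself becomes a critical path.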

A single issue in-order machine makes all of this manageable but far
from easy.

Another problem shows up in the state machine logic. If there is a
glitch* in the computation of any subsequent state, this glitch can
cause inappropriate state transitions to occur. Ivan Sutherland dealt
with this in his paper "Micropipelines". A task best left to

[*] glitch: a logic level that is never fully expressed in a CMOS
rail-to-rail sense. Glitches occur in XOR gates, and glitches can occur
in timing races into AND and OR gates as well. This leaves the 'decider'
logic in the asynchronous machine unable to decide that the next set of
data has arrived (or not) and that it is time to go (or not), at least
during the propagation of the glitch throughout the rest of the logic
trees. Synchronous designs avoid this by timing all of the paths and
using the slowest path to determine the cycle time of the machine. By
its very nature, an asynchronous machine cannot.
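The XOR case is easy to see in a minimal event-driven sketch (mine, not
from the post): both inputs make the same 0 -> 1 transition, but one
lags the other by a race delay, and during that window the XOR output
pulses high even though the settled answer is 0 both before and after.

```python
# Minimal event-driven sketch of an XOR glitch caused by a timing race.
# Edge times are illustrative; signals are ideal 0 -> 1 steps.

def xor_waveform(a_edges, b_edges, horizon):
    """Evaluate XOR at every event time up to `horizon`."""
    def level(edges, t):            # value of a 0->1 step signal at time t
        return 1 if any(t >= e for e in edges) else 0
    times = sorted(set(a_edges + b_edges + [0, horizon]))
    return [(t, level(a_edges, t) ^ level(b_edges, t)) for t in times]

# A rises at t=10, B rises at t=12: the output shows a spurious 1 pulse
# over [10, 12) -- exactly the transient a clocked design would mask by
# sampling only after the slowest path has settled.
print(xor_waveform(a_edges=[10], b_edges=[12], horizon=20))
# -> [(0, 0), (10, 1), (12, 0), (20, 0)]
```

A synchronous machine never samples inside that pulse; an asynchronous
'decider' watching the wire directly has no such guarantee.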

> If everything from the disk controller, DMA, memory, interrupts were
> asynchronous, there would be no need to detect when the input changes.
> The outputs would be a function of the inputs after a certain
> combinational delay.

These things are so slow compared to the CPU itself (milliseconds to
microseconds compared to fractions of a nanosecond), that running them
through a synchronizing interface costs very little (in delay), and
leaves the CPU in the synchronous domain where we have lots of tools to
deal with the (already) myriad of problems and issues.

> When I move my mouse, Int 9h is raised, which causes the processor to
> execute the set of instructions to handle the mouse event
> asynchronously.

Once again, you are operating in the human time domain. During the time
between when your brain decided to click the mouse button and when the
int 9 interrupt was raised, the computer ran 40,000,000 instructions!
Adding a few cycles of delay to synchronize, here, is inconsequential
in the long haul. {However, there are applications where this
synchronizing delay actually does become intolerable.}
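The 40,000,000 figure is consistent with simple arithmetic. The rate and
reaction time below are my illustrative assumptions for a mid-2000s CPU,
not numbers stated in the post:

```python
# Rough arithmetic behind the "40,000,000 instructions" figure.
# Both numbers are illustrative assumptions.
instructions_per_sec = 4e9           # ~4 billion instructions per second
human_delay = 0.010                  # ~10 ms from decision to interrupt
print(int(instructions_per_sec * human_delay))
```

A few extra synchronizer cycles disappear entirely against a budget of
tens of millions of instructions per human action.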

> Thanks,
> Rohit

