Date: Thu, 27 Sep 2001 16:23:38 -0700 (PDT)
From: Linus Torvalds <torvalds@transmeta.com>
Subject: Re: CPU frequency shifting "problems"
Newsgroups: fa.linux.kernel
On Thu, 27 Sep 2001, Padraig Brady wrote:
>
> >
> >For example, on a transmeta CPU, the TSC will run at a constant
> >"nominal" speed (the highest the CPU can go), although the real CPU
> >speed will depend on the load of the machine and temperature etc.
>
> As does the P4 from what I understand.
That might explain why the P4 "rdtsc" is so slow.
> So a question..
> What are the software dependencies on this auto/manual frequency shifting?
None. At least not as long as the CPU _does_ do it automatically, and the
TSC appears to run at a constant speed even if the CPU does not.
For example, the Intel "SpeedStep" CPUs are completely broken under
Linux, and real-time will advance at different speeds in DC and AC modes,
because Intel actually changes the frequency of the TSC _and_ they don't
document how to figure out that it changed.
With a CPU that does make the TSC appear constant-frequency, the fact that
the CPU itself can go faster/slower doesn't matter - from a kernel
perspective that's pretty much equivalent to the different speeds you get
from cache miss behaviour etc.
Linus
Date: Fri, 28 Sep 2001 09:11:35 -0700 (PDT)
From: Linus Torvalds <torvalds@transmeta.com>
Subject: Re: CPU frequency shifting "problems"
Newsgroups: fa.linux.kernel
On Fri, 28 Sep 2001, Jamie Lokier wrote:
>
> On a Transmeta chip, does the TSC clock advance _exactly_ uniformly, or
> is there a cumulative error due to speed changes?
>
> I'll clarify. I imagine that the internal clocks are driven by PLLs,
> DLLs or something similar. Unless multiple oscillators are used, this
> means that speed switching is gradual, over several hundred or many more
> clock cycles.
Basically, there's the "slow" timer, and the fast one. The slow one always
runs, and the fast one gives the precision but runs at CPU speed.
So yes, there are multiple oscillators, and no, they should not drift on
frequency shifting, because the slow and constant one is used to scale the
fast one. So no cumulative errors.
HOWEVER, anybody who believes that TSC is a "truly accurate clock" will be
sadly mistaken on any machine. Even PLL's drift over time, and as
mentioned, Intel already broke the "you can use TSC as wall time" in their
SpeedStep implementation. Who knows what their future CPUs will do..
> I can now use `rdtsc' to measure time in userspace, rather more
> accurately than gettimeofday(). (In fact I have worked with programs
> that do this, for network traffic injection). I can do this over a
> period of minutes, expecting the clock to match "wall clock" time
> reasonably accurately.
It will work on Crusoe.
> (One hardware implementation that doesn't have this problem is to run a
> small counter, say 3 or 4 bits, at the nominal clock speed all the time,
> and have the slower core sample that. But it may use a little more
> power, and your note about FP scaling tells me you don't do that).
We do that, but the other way around. The thing is, the "nominal clock
speed" doesn't even _exist_ when running normally.
What does exist is the bus clock (well, a multiple of it, but you get the
idea), and that one is stable. I bet PCI devices don't like to be randomly
driven at frequencies "somewhere between 12 and 33MHz" depending on load ;)
But because the stable frequency is the _slow_ one, you can't just scale
that up (well, you could - you could just run your cycle counter at 66MHz
all the time, but then you couldn't measure smaller intervals, and people would
be really disappointed). So you need the scaling of the fast one..
Linus
Date: Fri, 28 Sep 2001 14:54:54 -0700 (PDT)
From: Linus Torvalds <torvalds@transmeta.com>
Subject: RE: CPU frequency shifting "problems"
Newsgroups: fa.linux.kernel
On Thu, 27 Sep 2001, Grover, Andrew wrote:
>
> APM is a lost cause, but the correct solution for ACPI systems is to use the
> PM timer.
This is _completely_ untrue.
The PM timer is (a) inaccurate and (b) slow as hell.
Linux uses TSC because we want high accuracy (nanosecond scale) without
having slow stuff.
But we've had other chips that were broken, and were marked as "don't use
the TSC" for real-time. They'll get worse results, but hey, if Intel isn't
going to fix their TSC, that's all the more reason to buy AMD (or..)
Linus