From: email@example.com (Ken Becker)
Subject: Re: Robbed-bit signalling
Date: Fri, 13 Jun 1997 05:03:56 GMT
Tim Sohacki <sohacki@STOPSPAM.nortel.ca> wrote:
>B8ZS   robbed-bit signalling   line speed (kbps)
> no            yes                  54.666
> no            no                   56
> yes           yes                  62.666
> yes           no                   64
>It is surprisingly hard to get all the facts about this stuff.
>Also, finding the details of the standards doesn't tell you
>anything about the makeup of the real world telco implementations.
Well, I may as well weigh in. I see that Floyd has made a comment or
two already, but I think I can add a little to the discussion and
clear up the fun and games.
I happen to be a DACS (Digital Access and Cross-Connect) hardware
designer. I've had a fair amount to do with DS0's and T1's; more than I
care to think about, actually. Now that we've got the bona fides out
of the way, let's get into the fun stuff.
First of all, let's tackle the infamous 1's density problem. Basically
there are phase-locked-loops (PLL's) on the receive side of all T1
receivers. The "standard" coding scheme for T1's is something called
AMI (Alternate Mark Inversion). In this scheme a series of logic 1's
into the transmitter result in a bunch of +1 -1 +1 -1 pulses out of
the transmitter. Logic 0 into the transmitter results in a 0 out of
the transmitter. For example, then, a series that looks like
1 1 0 0 1 0 1 1
would come out like
+1-1 0 0+1 0-1+1
and so on. Note that this is a >>line<< encoding - it doesn't matter
what framing you got, where the beginning and end of each octet is,
and so on. It works strictly on the basis of bits in and bits out.
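The AMI rule above is simple enough to sketch in a few lines of Python (purely illustrative; the choice of +1 for the first mark is arbitrary, and the function name is mine):

```python
def ami_encode(bits):
    """Map logic bits to AMI line symbols: 1 -> alternating +1/-1, 0 -> 0."""
    out = []
    polarity = +1                  # polarity of the next mark (arbitrary start)
    for b in bits:
        if b:
            out.append(polarity)
            polarity = -polarity   # each successive mark flips sign
        else:
            out.append(0)          # a logic 0 puts nothing on the line
    return out

# The example from the text: 1 1 0 0 1 0 1 1
print(ami_encode([1, 1, 0, 0, 1, 0, 1, 1]))
# -> [1, -1, 0, 0, 1, 0, -1, 1]
```

Note that each mark alternates regardless of how many zeros intervene, which is what keeps the line DC-balanced.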
Now, for those nifty PLL's to keep running correctly they have to have
detectable bit transitions to supply them with the energy they need to
keep on frequency. If they don't get enough +1 or -1 bits they will
drift in phase so that when data reappears a bit error will occur, not
cool. So, Bellcore (and AT&T before them, etc.) has mandated that if
one is running AMI, one had better have no more than 15 0's in a row.
This has resulted in some interesting gambits to ensure that 1's
density. One of them, ZCS (Zero Code Suppression), I've seen in some
of the T1 transceivers that have passed my way. In this mode bit D2
(the next-to-LSB bit) gets set to 1 whenever all the bits in that
octet are logic 0. (This is also known locally as "Stomp" mode). With
voice and
HDLC-based signals (High-level Data Link Control, a protocol
that underlies X.25, LAP-B, LAP-D, and others) a ZCS-enabled
DS0 channel will rarely, if ever, do a stomp. With other kinds of
digital communications all bets are off.
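A one-octet sketch of the ZCS "stomp" described above (illustrative only; the function name is mine):

```python
def zcs_octet(octet):
    """octet: int 0..255. Return the octet actually put on the line.
    An all-zero octet gets its next-to-LSB (bit D2) forced to 1."""
    if octet == 0:
        return 0b00000010          # the "stomp": one corrupted data bit
    return octet

print(zcs_octet(0))     # -> 2   (stomped)
print(zcs_octet(0x41))  # -> 65  (untouched)
```

Voice samples rarely hit the all-zero octet, which is why voice traffic rarely notices; arbitrary digital data is another story.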
Now, an alternative to AMI is good ol' B8ZS (bipolar with 8-zero substitution).
Looking back at the above example of bit encoding, one will notice
that the output "1's" keep on flipping sign, even with intervening
0's. This is done so that the average DC voltage on a T1 line is zero
(useful when the signal is going through transformers). In fact,
getting two +1's or two -1's in a row is an error, a BPV (BiPolar
Violation). Too many BPV's in a period of time will put your T1 into
CGA (Carrier Group Alarm), but I digress.
In B8ZS mode eight 0 bits in a row are translated into a fixed bit
pattern that contains two intentional BPV's (in the 4th and 7th bit
positions). If the preceding pulse was a +1, then
 0 0 0 0 0 0 0 0 into the transmitter goes to
 0 0 0+1-1 0-1+1 out of the transmitter.
The T1 receiver knows that if it gets exactly the above bit pattern it
should generate eight 0 bits in a row, thereby masking the insertion
of the BPV's from the user. I note again that the above B8ZS code is a
>>line<< code, and it pays no attention if the inserted BPV's
span two DS0 channels or go across the T1 framing bit or some such.
So, one >>could<< enable ZCS on a B8ZS line, but why bother? With B8ZS
the 1's density problem for the PLL's is solved. Further, you can
transmit any arbitrary bit pattern you'd like and it gets preserved as
it traverses the network.
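For the curious, here is a minimal Python sketch of B8ZS on top of the AMI rule. It uses the standard substitution 000VB0VB, where the V pulses (4th and 7th positions) repeat the previous pulse's polarity, i.e. they are the intentional BPV's. The encoder is mine and, like the real line code, it ignores octet and frame boundaries entirely:

```python
def b8zs_encode(bits):
    """AMI with B8ZS: any run of 8 zeros becomes 0 0 0 V B 0 V B."""
    out = []
    polarity = +1                      # polarity of the next normal mark
    i = 0
    while i < len(bits):
        if bits[i:i + 8] == [0] * 8:
            last = -polarity           # polarity of the last pulse sent
            out += [0, 0, 0, last, -last, 0, -last, last]   # 000VB0VB
            polarity = -last           # next normal mark alternates again
            i += 8
        elif bits[i]:
            out.append(polarity)
            polarity = -polarity
            i += 1
        else:
            out.append(0)
            i += 1
    return out

# A mark, eight zeros, a mark:
print(b8zs_encode([1] + [0] * 8 + [1]))
# -> [1, 0, 0, 0, 1, -1, 0, -1, 1, -1]
```

The two same-polarity pulse pairs in there are exactly the BPV's a plain-AMI receiver would flag as errors, which is why mixing a B8ZS end with an AMI end is trouble.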
Now, let's go after the robbed bit signalling business.
First, how it works. If one has a D4-formatted T1 line (note: this is
independent of AMI/B8ZS stuff) then once every 6 frames each DS0 channel
gets its least significant bit "robbed" and a signalling bit put in
instead. This is repeated every 12 frames, i.e., one signalling bit is
transmitted on the first 6th frame spot and a different one
transmitted on the second 6th frame (the 12th frame) in each DS0
channel. On the other 10 frames (out of 12) the bit is not "robbed".
Or is it? There are two things working against you getting that LSB
1.) Going across multiple frames. This one gives people headaches, so
bear with me. There are 12 frames in a D4 superframe. The transmitted
T1's leaving a particular piece of T1 equipment (a DACS, 5E, what have
you) all typically have the same superframe alignment; i.e., they are
all transmitting frame 1 of 12 at the same time. Now, let a particular
T1 enter the next piece of T1 gear. It's a given that that gear is
probably >>not<< lined up on the same superframe. So, the 6th frame
(say) of a D4 superframe entering a piece of T1 gear might very well
be transmitted out of that T1 gear on the 4th frame of the transmitted
D4 superframe. Geez, when you do that, what do you set that DS0 LSB
bit to? You can't set it to data - that was lost a piece of T1 gear
ago. Answer: You leave it at the signalling value, sometimes. (I'll
get into that in a second). And, on a perfectly good DS0 LSB bit on
the 6th frame of the transmitted D4 superframe, we whop that LSB and
put in the signalling bit.
So: going in we have one DS0 byte least significant bit "robbed" every
six frames. Coming out we have >>two<< DS0 bytes with "robbed"
(non-data!) LSB's on two of six frames in the superframe. Neat trick; and,
if you go through enough T1 gear, it's pretty much expected that
you're going to lose >>all<< your LSB's on >>all<< your DS0 bytes. Ah,
well: 7 bits once every 125 us gives you 56 Kb/s, not 64!
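The arithmetic is worth writing down (plain Python, just checking the figures quoted here and in the table at the top):

```python
frames_per_second = 8000       # one frame every 125 us

print(8 * frames_per_second)   # -> 64000  full DS0, 8 clean bits
print(7 * frames_per_second)   # -> 56000  all LSB's robbed

# The in-between figures come from losing the LSB in only some frames,
# e.g. 1 frame in 6 robbed (5 frames at 8 bits, 1 at 7):
print((5 * 8 + 7) * frames_per_second / 6)   # -> 62666.66...
```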
2.) False framing pattern suppression. I know a lot of you out there
may be getting kind of wall-eyed by now, but bear with me a little
more. A T1 line contains 24 DS0 channels, 8 bits apiece, and a single
framing bit. That framing bit goes through a fixed, repetitive
pattern. For a D4 T1, the pattern is 12 bits long (one bit per frame,
repeating every 12 frames).
For an ESF T1 (ESF stands for Extended SuperFrame) the pattern repeats
once every 24 bits, for 24 frames.
Now, it turns out that if one looks carefully at case (1) above, the
bit pattern in the LSB of the DS0 channels can, under the right
circumstances, look very much like a framing bit pattern. This is not
good. Suppose that a T1 line gets disconnected for a moment and then
gets reconnected. ESD (Electro Static Discharge) and lightning can also
cause similar events. The T1 framer then goes looking for the framing
bit pattern. Lo: if it finds that "false framing pattern" in the DS0
LSB it's going to stick with it - and your T1 is busted with no alarm.
Bellcore (and similar types) have written standards to prevent false
framing patterns from appearing in the data stream; usually, these
involve setting the LSB on DS0 channels >>not<< in the signalling
frames to fixed values. This also loses you your 64 Kb/s.
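A sketch of what the framer is doing, and why a repeating LSB can fool it (illustrative Python; a real framer hunts over all 193 candidate bit positions, this just tests one). The D4 pattern value, 1000 1101 1100, is given in a later post in this thread:

```python
D4_PATTERN = [1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0]   # 12-frame D4 sync pattern

def looks_like_framing(stream, offset, frame_len=193):
    """True if position `offset`, sampled once per 193-bit frame,
    carries the D4 framing pattern for a whole superframe."""
    samples = [stream[offset + k * frame_len] for k in range(12)]
    return samples == D4_PATTERN

# Plant the pattern in what would be some DS0's LSB position:
stream = [0] * (193 * 12)
for k, bit in enumerate(D4_PATTERN):
    stream[5 + k * 193] = bit

print(looks_like_framing(stream, 5))   # -> True: a framer could lock here
print(looks_like_framing(stream, 6))   # -> False
```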
So: how do we manage to get 64 Kb/s anyway, given all the fun stuff above?
1) These days inter-switch signalling is not handled in-band in the
DS0 channels. It's done with a system like SS7 (Signalling System 7)
which handles all the call-setup and call-teardown one could use
without having to mung around with the DS0 channels carrying the traffic.
2) There is a funky T1 data format called ESF/DMI, where the DMI
stands for Digital Multiplexed Interface. It's not used too much, but
the 24th DS0 channel is encoded to hold the signalling bits for the
other 23. This gives you 23 voice channels on your T1 (not so good for
the BOC's, they want more channels!) but since you don't have robbed
bit signalling the channels are running at 64 Kb/s.
3) ISDN. ISDN format signals running to the CO really >>are<< at 64
Kb/s with a (I think) 8 Kb/s control channel for signalling. If your
CO gives you ISDN with 56 Kb/s, it's not the local loop that's doing
this to you - it's the trunking between the CO's.
What's working against you?
1) If you happen to have a SLC (Subscriber Line Concentrator) on your
local street corner. A large number of these run T1's with RBS back to
the CO. Once you get to/from the CO you're back at 64 Kb/s, but the
damage is done.
2) Trunking between CO's. There's a large installed base of old T1
equipment that knoweth only D4, AMI, and RBS - and it'll probably
still be working when we die. BOC's >>hate<< to retire equipment that
earns money and is paid for.
3) Slow BOC's. They may have T1's that can do the B8ZS/SS7 stuff, or
better; but what they don't have (or want to pay for) is the time to
go back into the network and start analyzing those thousands
(millions?) of T1's or what have you to see if they can bump the data rate.
4) Your local ISP. Say your ISP is doing the right thing by hooking
T1's (or better) directly into digital modems. However, remember that
trick about how signalling gets passed around? If they have RBS DS0's
(how else are you going to detect on-hook, off-hook, wink, seizure,
and that kind of stuff), that LSB is history. If they run DMI, they
get back that 64 Kb/s - but then, they have to pay for more T1's
(remember, you lose that 24th data channel!). I suppose they could be
running SS7 (unlikely - that's within the network) or ISDN (more
likely - and the D control channel on one ISDN T1 can do the
signalling for multiple DS0's in many T1's), but this is past my level
of expertise.
Finally: there was some comment on whether the modem people can detect
whether that LSB is mucked with or not. Well, I do do DSP (Digital
Signal Processing) from time to time. I believe the answer is yes, as
follows:
Consider the two LSB's of a DS0 channel. They have four possible
combinations: 00, 01, 10, and 11. Next, suppose that RBS is turned off
(i.e., we got a 64 Kb/s channel). If I were the digital modem maker, I
would use the following possible codes for the last two bits:
00 11 (case 1)
Next, suppose that RBS is turned on. If the LSB is zero, you get
00 10 (case 2)
If the LSB is one, you get
01 11 (case 3)
Note that the difference between the two signals in case 1 is "11" (3,
in decimal). In both cases 2 and 3 the difference between the two
values is "10" (decimal 2). Yep, I betcha they can see that! A guess
might be that this would cause the first fall-back from 56 Kb/s to the
next slowest value.
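The distance argument is easy to check numerically (a sketch; the codeword choices are the hypothetical ones from the cases above, not any real modem's constellation):

```python
def spacing(a, b):
    return abs(a - b)

no_rbs    = (0b00, 0b11)   # case 1: modem free to use both low bits
rbs_lsb_0 = (0b00, 0b10)   # case 2: signalling forces LSB to 0
rbs_lsb_1 = (0b01, 0b11)   # case 3: signalling forces LSB to 1

print(spacing(*no_rbs))     # -> 3
print(spacing(*rbs_lsb_0))  # -> 2
print(spacing(*rbs_lsb_1))  # -> 2
```

Either way, a shrunken spacing on every sixth sample is a statistical signature the receiving modem can hunt for.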
2) The ISP's modem pool is hooked up to T1's using RBS. Note that if
the ISP wants more channels on a T1
From: "David G. Lewis" <firstname.lastname@example.org>
Subject: Re: Robbed-bit signalling
Date: Fri, 13 Jun 1997 08:40:57 -0400
Ken Becker wrote:
> So: how do we manage to get 64 Kb/s anyway, given all the fun stuff
> 3) ISDN. ISDN format signals running to the CO really >>are<< at 64
> Kb/s with a (I think) 8 Kb/s control channel for signalling.
On a BRI, it's a 16 kb/s control channel; since we're discussing DS1s,
it's more relevant to discuss PRI, which is a 64 kb/s control channel -
one of the DS0s. So you get 64 kb/s bearer channels at the expense of
one DS0. (Whether it's one DS0 per DS1, or one DS0 per some larger
number of DS1s depends on whether or not the LEC and your CPE support
something called Non Facility-Associated Signaling, or NFAS, in which a
D-channel on one DS1 of a PRI can provide signaling for the B-channels
on multiple DS1s of a PRI.)
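The channel-count tradeoff is simple arithmetic (an illustrative sketch; NFAS deployments often also reserve a backup D-channel, which the `d_channels` parameter can model):

```python
def bearer_channels(num_ds1, nfas=False, d_channels=1):
    """B-channels across `num_ds1` PRI spans: 24 DS0s per DS1, minus one
    D-channel per DS1 without NFAS, or `d_channels` total with NFAS."""
    total = 24 * num_ds1
    return total - (d_channels if nfas else num_ds1)

print(bearer_channels(1))              # -> 23  (classic 23B+D)
print(bearer_channels(4))              # -> 92
print(bearer_channels(4, nfas=True))   # -> 95  (one D-channel for all four)
```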
> What's working against you?
> 4) Your local ISP. Say your ISP is doing the right thing by hooking
> T1's (or better) directly into digital modems. However, remember that
> trick about how signalling gets passed around? If they have RBS DS0's
> (how else are you going to detect on-hook, off-hook, wink, seizure,
> and that kind of stuff), that LSB is history. If they run DMI, they
> get back that 64 Kb/s - but then, they have to pay for more T1's
> (remember, you lose that 24th data channel!). I suppose they could be
> running SS7 (unlikely - that's within the network) or ISDN (more
> likely - and the D control channel on one ISDN T1 can do the
> signalling for multiple DS0's in many T1's), but this is past my level
> of expertise.
They have a similar "pay for more T1s" issue on ISDN PRI, because one
DS0 is used for the D-channel; the magnitude depends on whether or not
they can use NFAS, and if so, some engineering calculations to determine
how many DS1s they can support with a single D-channel.
David G. Lewis AT&T Network and Computing Services
The future - it's a long distance from long distance.
From: email@example.com (Floyd L. Davidson)
Subject: Re: Robbed-bit signalling
Date: 13 Jun 1997 17:29:31 -0800
Ken Becker <firstname.lastname@example.org> wrote:
>First of all, let's tackle the infamous 1's density problem. Basically
>there are phase-locked-loops (PLL's) on the receive side of all T1
>receivers. The "standard" coding scheme for T1's is something called
>AMI (Alternate Mark Inversion). In this scheme a series of logic 1's
>into the transmitter result in a bunch of +1 -1 +1 -1 pulses out of
>the transmitter. Logic 0 into the transmitter results in a 0 out of
>the transmitter. For example, then, a series that looks like
> 1 1 0 0 1 0 1 1
>would come out like
>+1-1 0 0+1 0-1+1
>and so on. Note that this is a >>line<< encoding - it doesn't matter
>what framing you got, where the beginning and end of each octet is,
>and so on. It works strictly on the basis of bits in and bits out.
>Now, for those nifty PLL's to keep running correctly they have to have
>detectable bit transitions to supply them with the energy they need to
>keep on frequency. If they don't get enough +1 or -1 bits they will
>drift in phase so that when data reappears a bit error will occur, not
>cool. So, Bellcore (and AT&T before them, etc.) has mandated that if
>one is running AMI, one had better have no more than 15 0's in a row.
Maybe I can add something to Ken's (exceedingly good) description
that will help put this into perspective. Here is a block diagram
of a DS1 interface showing the significance of the PLL:
DS-1 Interface (DSX-1 Jacks) T1 Equipment
<-------------------------------------------------------< TX DATA
| Phase Locked Loop | Timing Jumpers
+--->| Data Rate Clock |->---+ A-B = Loop
| | Generator | | B-C = External
| +-------------------+ |
| | A B C External
| +----------------------------+---->o o==o<--------< Sync Source
| | Buffer-In Clock |
| | |
| | +-------------------+ |
| +->| |<-------------+------------> CLOCK
| | | Buffer-Out Clock
| | Input Data Buffer |
>--+--->| |>--------------------------> RX DATA
Receive | | Buffered Data Line
Note that the only place where the phase relationship between
data and clocking is important is the data input to the Input
Data Buffer. For a minimum error count, each bit on the
incoming line has to be sampled in exactly the center of its
time slot. But otherwise, since the data is buffered, the
output from the buffer can be clocked by an external source
which need not be phase locked to the input data rate, and
instead needs merely to be the same frequency to prevent
over/underflow of the buffer. It does happen that if loop
timing is used, which is common for tail circuits terminating at
a customer location, the PLL supplies clock to the entire
equipment including the transmit side. But for DS1 links
between major locations the options will be set for "External
Timing" and an external network frequency synchronized (but not
phase locked) clock signal is used.
The requirement for 12.5% 1's density, with no more than 15
consecutive 0's, is somewhat of an arbitrary number chosen to
guarantee that a PLL will remain in lock. Certainly PLL's today
are better than PLL's of decades past when the standard was
defined. But every PLL requires some pulses to remain in lock,
and 1 pulse in 16 bits is not significantly different than if
the standard were changed to 1 in 24 or even 1 in 193. Some form
of zero suppression would be required in any case, so it might
as well be B8ZS in the next decade too.
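A toy checker for the rule as stated, illustrative only (the real ANSI requirement is phrased over sliding windows, but the two constraints are the same in spirit):

```python
def meets_ones_density(bits, max_zero_run=15, min_density=0.125):
    """True if the stream has no run of more than 15 zeros AND at
    least 12.5% ones overall."""
    run = longest = 0
    for b in bits:
        run = 0 if b else run + 1
        longest = max(longest, run)
    return longest <= max_zero_run and sum(bits) / len(bits) >= min_density

print(meets_ones_density(([1] + [0] * 7) * 2))    # -> True  (1 in 8)
print(meets_ones_density([1] + [0] * 15))         # -> False (only 1 in 16)
print(meets_ones_density([1] + [0] * 16 + [1]))   # -> False (16-zero run)
```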
Floyd L. Davidson email@example.com
Salcha, Alaska or: firstname.lastname@example.org
From: email@example.com (Floyd Davidson)
Subject: Re: Bad trunks handle voice, no data
Date: Sat, 22 Nov 1997 01:50:37 GMT
[posted and emailed]
Jay K. Thomas <firstname.lastname@example.org> wrote:
>Thanks for the suggestion, but it turned out to be a ESF/B8ZS on one
>end D4/AMI on the other thing. You can talk over them just fine, but
>FAX and data over 300 baud just can't deal with it. It took a few
>days, but they went through and checked/corrected every trunk in that
>office. With SS7, trunks no longer have to carry -any- signal in order
>to get assigned to a call.
Someone didn't give you a very good answer... ESF on one end
and D4 on the other end simply will not work for voice or data
because the T1 will never achieve frame sync. So that can't
possibly be what the trouble was.
B8ZS on one end and just straight AMI on the other end would
cause a problem for ISDN, SW-56, or the new PCM modems, but for
a true analog modem, such as v.34, it will never make any
difference at all (because no 0 valued byte ever gets encoded by
a telco codec).
SS7 doesn't require inband signaling, so robbed bit signaling
can be turned off on T1 facilities that can be configured
without it. But robbed bit signaling is going to result in
things like getting 28.8Kbps connections instead of 33.6K
connections, and will have no effect on FAX or lower speed modems.
The symptoms that you are describing, where FAX is affected and
300 baud modems are not, are almost exclusively a clock sync
problem which results in "controlled clock slips". 300 baud is
totally immune, and usually 1200 baud too, but anything higher
than that is just out of the question because each clock slip is
a 45 degree phase hit.
They gave you an answer that didn't include the things they had
previously (and incorrectly) denied were possible causes... :-)
Floyd L. Davidson <email@example.com> Salcha, Alaska
From: firstname.lastname@example.org (Al Varney)
Subject: Re: Bad trunks handle voice, no data
Date: 24 Nov 1997 18:05:23 GMT
In article <b157cd$103225.1ca@PolarNet>,
Floyd Davidson <email@example.com> wrote:
>B8ZS on one end and just straight AMI on the other end would
>cause a problem for ISDN, SW-56, or the new PCM modems, but for
>a true analog modem, such as v.34, it will never make any
>difference at all (because no 0 valued byte ever gets encoded by
>a telco codec).
Uhh, Floyd, you've forgotten last February's exchange on this topic.
It certainly DOES make a difference -- even though no codec generates
an all-0 byte....
>It is interesting to note that if PCM voice samples are sent,
>then there will never be an instance of 8 consecutive bits and
>the special pattern will never be used.
# Floyd, I hate to disagree, 'cause you are usually both accurate
# AND readable. But B8ZS, like other line codings (HDB3, B6ZS, etc.),
# operates without respect to octet alignment. So ANY string of 8 zero
# bits will be converted using the special pattern, even if it spans
# two different channels. It is certainly possible for the last 4 bits
# of one channel and the first 4 bits of an adjacent channel to all be
# zeros. Even with just PCM voice.
Al Varney - just my opinion
From: firstname.lastname@example.org (Floyd Davidson)
Subject: Re: Bad trunks handle voice, no data
Date: Tue, 25 Nov 1997 04:53:57 GMT
email@example.com (Al Varney) writes:
>Floyd Davidson <firstname.lastname@example.org> wrote:
>>B8ZS on one end and just straight AMI on the other end would
>>cause a problem for ISDN, SW-56, or the new PCM modems, but for
>>a true analog modem, such as v.34, it will never make any
>>difference at all (because no 0 valued byte ever gets encoded by
>>a telco codec).
> Uhh, Floyd, you've forgotten last February's exchange on this topic.
>It certainly DOES make a difference -- even though no codec generates
>an all-0 byte....
Nahh... it won't make any difference Al.
>>It is interesting to note that if PCM voice samples are sent,
>>then there will never be an instance of 8 consecutive bits and
>>the special pattern will never be used.
Yes, that was in error. There will be instances of 8 consecutive
zero bits (though no all-zero byte values) causing B8ZS to
insert intentional BPV's, as you noted below:
># Floyd, I hate to disagree, 'cause you are usually both accurate
># AND readable. But B8ZS, like other line codings (HDB3, B6ZS, etc.),
># operates without respect to octet alignment. So ANY string of 8 zero
># bits will be converted using the special pattern, even if it spans
># two different channels. It is certainly possible for the last 4 bits
># of one channel and the first 4 bits of an adjacent channel to all be
># zeros. Even with just PCM voice.
What happens under those circumstances? The B8ZS enabled
sending end encodes intentional BPV's, the non-B8ZS receiving
end gets one byte value wrong on maybe each of two different
channels, and therefore a small amount of noise is added to the
signal. That noise is relatively insignificant and will
not cause the type of problems described.
The significance of the BPV's is next to none. A transient
SNR reduction, but only enough to maybe exercise error
correction and not enough to cause a modem retrain. We have a
verifiable aberration of the digital signal, but it results in an
almost unmeasurable difference in the analog signal used by the
receiving modem. (I am totally discounting the possibility that
some data stream would result in multiple consecutive byte
values that cause BPV's because I expect the probability of that
to be near zero.)
However, the significance of not encoding 0 value PCM bytes is
great, because without any all-zero bytes the data stream can
never contain more than 14 consecutive 0 bits. A longer string
of zeros could cause frame synchronization to be lost at the
DS-1 interface receiver, which would result in exactly the kind
of problems originally described.
Your point is well taken in that I should probably have gone
into more detail about why it makes no difference because what I
did say is obviously not detailed enough.
Floyd L. Davidson <email@example.com> Salcha, Alaska
From: firstname.lastname@example.org (Floyd Davidson)
Subject: Re: Bit-robbing causes ISP call disconnects?
Date: 19 Sep 1996 13:02:37 GMT
>They refer me to a Web page they have put up to give their "technical"
>explanation:
>>A theory behind why T1s may cause sudden disconnects and less than
>>perfect modem connections
>>First, basics: a DS0 is 64Kbps, a DS1 (aka T1) is 1.544Mbps. But
>>wait... doesn`t a T1 carry 24 DS0s? (24 * 64 = 1536) Sure it
>>does, but in the TDM (time division multiplexing) scheme used for
>>T1s, there is an extra framing bit added for synchronization.
>>Therefore, in every `cycle` there are 193 bits involved
>>(24 DS0 * 8 bits per DS0 + 1 framing bit) times 8000 frames per
>>second gives 1.544 Mbps. (FYI, the framing bits follow certain
>>patterns, which is known as the line encoding, ie B8ZS)
They've got that wrong.
The framing bit pattern distinguishes between Super Frame (SF) and
Extended Super Frame (ESF). It is entirely unrelated to B8ZS.
B8ZS is necessary when digital data (not PCM encoded voice
channels) are transported on a DS1 rate circuit. If more than 15
zero bits in a row are sent, the clock recovery circuit at the
receiving end cannot function correctly and loses sync. B8ZS
prevents that from happening.
B8ZS is a modification of Alternate Mark Inversion (AMI) line
encoding, and replaces any occurrence of 8 consecutive zero bits
with a pattern that includes 1 bits that are bipolar violations
(BPV) in place of the 4th and 7th bits. A BPV occurs when a
mark bit (a 1 bit) is not inverted relative to the previous mark.
With AMI each 1 bit will be the opposite polarity from the last
one, and a 0 bit is 0 voltage. B8ZS is used to provide sufficient
"1's density" to allow the receiving end of a DS1 rate line to
recover an accurate clock signal from the data, even when raw
data is sent with more than 15 consecutive 0 bits.
Note that B8ZS is not required, and has no added benefit, for
circuits passing normal analog voice traffic (including analog
modem signals) because PCM and ADPCM encoding is designed to
never allow 15 zeros in a row under any circumstance. That is
done by never encoding a 0 byte (the range of valid bytes is
not 0-255, but 1-255).
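That claim is small enough to verify by brute force (Python sketch; the helper names are mine):

```python
def trailing_zeros(b):
    """Zero bits at the low end of an 8-bit value."""
    n = 0
    while b % 2 == 0 and n < 8:
        b //= 2
        n += 1
    return n

def leading_zeros(b):
    """Zero bits at the high end of an 8-bit value."""
    n = 0
    for i in range(7, -1, -1):
        if b & (1 << i):
            break
        n += 1
    return n

# Worst zero run any pair of adjacent bytes in 1..255 can produce:
worst = max(trailing_zeros(a) + leading_zeros(b)
            for a in range(1, 256) for b in range(1, 256))
print(worst)   # -> 14, safely under the 15-zero limit
```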
Framing, SF or ESF, is an entirely different subject.
With SF framing (also known as D3/D4 or just D4 Framing) every
sync bit follows the sync pattern. The pattern is 1000 1101 1100,
repeated over and over. A superframe is 12 consecutive frames,
which is one entire sequence of sync bits. ESF (also known as Fe
or D5 Framing) consists of 24 consecutive frames, but the framing
sequence, 001011, is only sent in frames 4, 8, 12, 16, 20, and 24.
All odd frames are used as a slow speed (4Kbps) data link, and the
remaining 6 even frames contain a CRC-6 error check covering the
superframe.
Robbed bit signaling in an SF framed DS1 uses frames 6 and 12. In
each octet of those frames the least significant bit is used to
indicate a pair of binary supervision signals, on or off hook. The
6th frame is the A bit, and the 12th frame is the B bit.
In an ESF framed DS1, frames 6, 12, 18 and 24 are robbed, providing
A, B, C, and D bits. Each channel can encode 16 different states
(things like call forwarding are defined).
Note that the "6th frame" is an arbitrary reference to the first
frame after synchronization occurs. It can be any frame, but
every 6th one after that is used for bit robbing.
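The frame-to-signalling-bit mapping above fits in a few lines (an illustrative sketch; frame numbering is 1-based within the superframe, and the function name is mine):

```python
def robbed_bit_name(frame, esf=False):
    """For a 1-based frame number (1..12 SF, 1..24 ESF), return the
    signalling bit carried in that frame's robbed LSB's, or None
    if the LSB carries subscriber data."""
    if frame % 6:
        return None
    names = "ABCD" if esf else "AB"
    return names[(frame // 6 - 1) % len(names)]

print([robbed_bit_name(f) for f in (6, 12)])
# -> ['A', 'B']
print([robbed_bit_name(f, esf=True) for f in (6, 12, 18, 24)])
# -> ['A', 'B', 'C', 'D']
```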
>>So far, nothing to affect the signal, but there is more: bit robbing!
>>The least significant bit of one of the 24 bytes of data of every
>> sixth frame
Actually that is the least significant bit of every octet in every
sixth frame. The real problem with the robbed bits is that at
each DS1 interface the signal is reframed, and it would be
possible to have a T1 facility that goes through 6 different DS1
interface points, each of which selected a different frame as
number 1 for reference purposes. Hence, it could be that only one
out of 6 frames is coded with only 7 bits instead of 8, but it
could also be _all_ frames, meaning that the entire signal is coded
with 7 bits instead of 8.
The significance, for v.34 modem users, is that with 8 bits the
average error between the original analog input signal that is
encoded and the restored analog signal at the output is a noise
which limits the best signal to noise ratio to about 37 dB. With
7 bits coding on all octets the noise is 6 dB worse, for 31 dB
SNR. Hence a T1 facility with multiple DS1 interface points can
exhibit very close to the 37 dB best case for 8 bit encoding, or
it can degrade the SNR to perhaps 31 dB, which might be too low
for even 28.8Kbps connections, or be anywhere in between.
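A rough model of the SNR argument above (plain arithmetic, not a codec model; the ~37 dB best case and the 6 dB/bit rule are taken from the text, and averaging dB linearly across frames is the same simplification the text makes):

```python
BEST_8BIT_SNR_DB = 37.0    # best case for 8-bit encoding, per the text
DB_PER_BIT = 6.0           # each lost coding bit costs about 6 dB

def approx_snr(robbed_frames_of_6):
    """robbed_frames_of_6: how many frames out of every 6 carry a
    robbed (7-bit) octet, from 0 (none) to 6 (all of them)."""
    avg_bits = 8 - robbed_frames_of_6 / 6
    return BEST_8BIT_SNR_DB - DB_PER_BIT * (8 - avg_bits)

print(approx_snr(0))   # -> 37.0  clean 64 Kb/s channel
print(approx_snr(1))   # -> 36.0  one robbed frame in six
print(approx_snr(6))   # -> 31.0  every LSB gone
```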
>>is `robbed` and used to encode internal switching information. Not
>>noticeable on an audio call, but this could very well be the problem
>>with our lines.
>> More proof that`s less technical:
>> * High speed modem (33.6) connections are always more reliable on local
>> calls (your call is not usually switched through T1 loops locally)
That may not be a good assumption. Not only might a local call be
switched through numerous T1 facility links, there could be more
than one point where the digital signal is converted back to an
analog interface and then back to digital for further transmission
or switching. (Disgusting thought, but it happens, and the
resulting channels are well within the minimum specifications
required. They just aren't a modem user's delight...) Likewise a
local loop, the analog portion, can be long enough to be far more
of a restriction than even the worst case digital facility.
However... given the number of T1 facilities using bit robbing
in the LD network, I might agree that 33.6 is probably less likely
on LD than on local calls. But calling to a local modem that is
on a poor analog loop is still going to be less than perfect, and
it might be possible to call a modem across the country and get a
better connection.
>> * High speed modem (33.6) connections are next to impossible
>> long-distance...(so much for all digital network :) ) because your call
>> is routed through one or more different T1s (or higher)
>> The solution is already on its way: New lines will be ISDN PRI (out of band
>> signalling rather than inband) and we will probably switch existing T1s for
>> PRIs also.
Well, as noted above the out of band signaling will, on the one
facility, probably improve the SNR by about 1 dB in most cases
(every 6th frame is 7 bit coding) but if the facility is long and
has 6 or more DS1 interface points (unlikely but possible) then
the improvement on some calls will be as much as 6 dB for the SNR.
Off hand, an educated guess is that about 1% of all calls will be
affected unless the circumstances are very unusual.
>Is this a credible explanation (technically)? If so, is it reasonable for
>them to blame the telco, or might it have been TotalNet's "choice" to use
>this method of delivering lines? [I believe they are using Centrex lines -
>I don't know if Bell Canada has a "Centrex data" tariff or all Centrex is
>considered to be voice.]
The original statement they made was about "why T1s may cause
sudden disconnects and less than perfect modem connections". We
can see where it is possible that less than perfect modem
connections might indeed be the case, but nothing at all has been
described that would cause sudden disconnects or remove any causes
of them.
>If this is what is happening to them, shouldn't every ISP or other Centrex
>"data" user have the same problem?
Hmmm... yes. :-)
Floyd L. Davidson Salcha, Alaska email@example.com