From: old_systems_guy@yahoo.com (John Mashey)
Newsgroups: comp.arch,alt.folklore.computers
Subject: Re: Integer types for 128-bit addressing
Date: 6 Nov 2004 17:00:43 -0800
Message-ID: <ce9d692b.0411061700.1867e5e6@posting.google.com>
Jon Forrest <forrest@ce.berkeley.edu> wrote in message news:<418BFCC2.9070702@ce.berkeley.edu>...
> Anne & Lynn Wheeler wrote:
> > "David Wade" <g8mqw@yahoo.com> writes:
> >
> >>Ah the joys of VM under VM. When I went on the VM systems
> >>programming courses of course we ran VM under VM (at St Johns
> >>Wood. 4361 I think)..
>
> This is slightly off-topic but the thing I remember disliking
> about CMS (and other IBM OSs) is that you couldn't write
> a program that prompted for the name of a file, and then
> opened the file. Instead, you had to predefine a handle-like
> name outside of the program, and refer to the handle when
> opening the file. This is basically what the redoubted
> JCL 'DD' card also used to do.
1) The 360's 40th Anniversary bash at the Computer History Museum last
spring included a short talk by Fred Brooks. Being a good writer
does not guarantee being a good speaker, but Fred of course is a fine
speaker in the soft-spoken Southern Orator fashion. He used some
great slides about JCL, which he labeled something like the "worst
language never designed", noting especially that, having decided there
wouldn't be many operators, the zillion parameters and suboperands of
DD grew to be what they were. He had some wry comment
about getting OS/360 going and then handing it to somebody else [in
this case, Fritz Trapnell, a fine engineering manager and friend of
ours ... but it was already too late :-)]
2) On the other hand, as painful as storage allocation was, say in
something like OS/MVT, let us remember a fairly fundamental and
well-known deadlock avoidance policy in resource allocation, known
long ago in operations research for manufacturing shop job
scheduling:
Deadlocks can be avoided in a multi-tasking system, if each task only
requests a complete set of necessary sharable resources at a point at
which it has no such resources. In practice, this means:
a) There is effectively an atomic way to obtain a set of resources,
and a task does that at the beginning of its execution. Or, a task
may sequentially obtain resources until it has the needed set, but if
it makes an allocation request that is denied, it must release all
resources and try again from scratch (a sketch in code follows below).
b) Having gotten the resources, it executes, and can release some at
any time, but it cannot request any more until it has released
everything.
So a) and b) together were a JOB STEP in JCL parlance.
c) So, JCL and equivalents just enforced allocation boundaries, using
the simplest deadlock-avoidance strategy, which is perfectly
appropriate to the limited resources that existed when these systems
started. For instance, a 360/50 with 256KB memory and a couple 7.25MB
2311 disk drives and some tapes would have been a typical substantial
machine.
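To pin down rule a), here is a minimal sketch in POSIX-threads terms;
the resources are modeled as mutexes, and acquire_all is an
illustrative name, not anything from OS/360 or UNIX:

    #include <pthread.h>
    #include <stddef.h>

    /* Try to obtain the complete set of resources; if any request is
       denied, release everything already held and report failure, so
       the caller can retry from scratch while holding nothing. */
    int acquire_all(pthread_mutex_t *res[], size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            if (pthread_mutex_trylock(res[i]) != 0) {
                while (i > 0)                     /* denied: back out */
                    pthread_mutex_unlock(res[--i]);
                return -1;           /* caller retries from scratch */
            }
        }
        return 0;                    /* task holds the complete set */
    }

A task that fails holds nothing, so no cycle of partial holders can
form, which is the whole point of the policy.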
More dynamic systems, like UNIX, are of course quite subject to
deadlock when being run near their limits, but fortunately:
a) Virtual memory helps
b) Big disks really help
Nevertheless, when Bell Labs computer centers (not individual
projects) started running general-use UNIX services in the mid-1970s,
the OS/360-oriented managers were very nervous about dynamic
allocation style. ["Do you mean one process would be allowed to fill
up the disk? [this was before quotas] Are you kidding me? We could
never run a system like that."]
But there have often been long-running arguments. For example, when
should swap space be allocated?
a) It could be allocated whenever a program increases its memory size,
thus guaranteeing that if the program must be entirely swapped out,
there is a place to put it.
b) It could be allocated only as it is needed to actually swap out, at
which point the OS may discover it is out of swap space, in which case
it can hope that something else frees some up, or it may have to kill
a process.
The first one is better for avoiding surprises late in a job's
execution, but of course wastes space in the normal case. I recall
hearing about a system that did a), to the consternation of somebody
who had bought enough memory to avoid swapping, but had only a small
swap space allocation on a small disk, and so had perfectly runnable
jobs fail.
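A toy sketch of the bookkeeping behind policy a); swap_pages_free and
grow_process are hypothetical illustrations, not any real kernel's
interface:

    /* Policy a): reserve backing store at the moment a process
       grows, so a later swap-out can never fail for lack of space. */
    static long swap_pages_free = 1L << 20;    /* assumed pool size */

    int grow_process(long extra_pages)
    {
        if (swap_pages_free < extra_pages)
            return -1;      /* deny the growth now, not at swap time */
        swap_pages_free -= extra_pages;
        return 0;           /* a future swap-out is guaranteed a home */
    }

Policy b) would skip the check here and consume pages only when the
swapper actually writes them out.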
The same issues arise in really large technical jobs, with TB-sized
datasets.
You really *don't* want to start a job that writes a 10-TB file only
to discover that it has run out of disk space at 9 TB.
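On a modern POSIX system, an application can get the eager style for
itself; a sketch, assuming posix_fallocate() is supported and off_t
is 64 bits (the file name is made up):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Reserve all 10 TB before writing a byte, so the job fails
       immediately if the space does not exist, not at 9 TB. */
    int main(void)
    {
        int fd = open("bigfile.dat", O_CREAT | O_WRONLY, 0644);
        if (fd < 0) { perror("open"); return 1; }

        off_t size = (off_t)10 * 1024 * 1024 * 1024 * 1024; /* 10 TB */
        int err = posix_fallocate(fd, 0, size);
        if (err != 0) {
            fprintf(stderr, "cannot reserve 10 TB (error %d)\n", err);
            return 1;               /* fail before any work is done */
        }
        /* ... the 10-TB write can now proceed safely ... */
        close(fd);
        return 0;
    }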
Of course, there are more sophisticated mechanisms that help, such as
using strict resource hierarchy allocation when you can, but in
general, the rule remains:
The more dynamic the allocation and sharing, the more efficient you
can be in use of resources, but the closer you get to resource
exhaustion, the more likely you are to run into deadlock in the general
case. If you use allocation limits, you lessen the utilization of
shared resources, and increase the inconvenience, but you avoid
deadlock.
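The hierarchy idea, in miniature; the two resources and their levels
are made up for illustration, with POSIX mutexes standing in for
whatever the real resources are:

    #include <pthread.h>

    pthread_mutex_t disk_lock = PTHREAD_MUTEX_INITIALIZER; /* level 1 */
    pthread_mutex_t tape_lock = PTHREAD_MUTEX_INITIALIZER; /* level 2 */

    /* Every task that needs both takes them in the same fixed order,
       lower level first, so no cycle of waiters can form and blocking
       acquisition is safe. */
    void use_disk_and_tape(void)
    {
        pthread_mutex_lock(&disk_lock);     /* level 1 first */
        pthread_mutex_lock(&tape_lock);     /* then level 2 */
        /* ... use both resources ... */
        pthread_mutex_unlock(&tape_lock);
        pthread_mutex_unlock(&disk_lock);
    }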
Anyway, as awful as DD cards were, the fundamental resource allocation
style was not irrational for (tiny by today's standards) systems that
were mostly intended to run well-known production batch jobs.