Modern Operating Systems 3e: In-Depth Analysis and Recent Developments
Modern Operating Systems 3e is a classic textbook on operating systems by Andrew S. Tanenbaum, touching on several related areas such as computer organization and computer networks.
In the third edition of Modern Operating Systems, the author explains the core concepts and techniques of operating systems in an accessible way. The book suits beginners as well as readers with some background, offering comprehensive and deep insight into how operating systems work in modern computers.
1. Operating System Fundamentals
The operating system is the heart of a computer system: it manages hardware resources and provides services to application programs. The book discusses the main components of an operating system in detail, including process management, memory management, file systems, device management, and scheduling algorithms. These topics form the foundation of operating system design and show the reader how concurrent execution, resource allocation, task scheduling, and other key functions are realized.
2. Processes and Threads
In a modern operating system, a process is a running instance of a program, while a thread is a flow of execution within a process. Dedicated chapters explain process creation, synchronization, and communication, as well as the use of threads, all of which are essential for understanding system behavior in multitasking environments.
3. Memory Management
Memory management is a major responsibility of the operating system, covering virtual memory, memory allocation and reclamation, and page replacement algorithms. The book explores these topics thoroughly, helping the reader understand how limited physical memory can be used efficiently.
4. File Systems
The file system is the part of the operating system that manages and organizes stored data. It covers creating, deleting, reading, and writing files, along with file lookup and permission control. The book introduces different types of file systems and how they work.
5. Device Management
As hardware technology has advanced, device management has become increasingly complex. A modern operating system must drive a wide variety of hardware devices, such as printers, hard disks, and network interfaces. The book explains mechanisms such as interrupt handling, I/O buffering, and direct memory access (DMA).
6. Networking Basics
Although the book focuses on operating systems, it also draws briefly on material from Tanenbaum's companion text, Computer Networks, to introduce basic networking principles. This part may cover the protocols of the physical, data link, network, transport, and application layers, such as the TCP/IP stack, together with network performance, security, and applications.
7. Design and Implementation
Finally, the book covers operating system design principles and implementation techniques, including design choices such as microkernels, monolithic kernels, layered systems, and the client-server model, as well as the handling of system calls, exceptions, and interrupts.
In summary, Modern Operating Systems 3e is a textbook that covers the operating systems field comprehensively, combining theory with practice and examining key concepts and techniques in depth. It is of great value for learning and understanding operating systems; computer science students and professional software engineers alike will benefit from it.
The operating system runs in kernel mode, where it has complete access to all the hardware and can execute any instruction the machine
is capable of executing. The rest of the software runs in user mode, in which only
a subset of the machine instructions is available. In particular, those instructions
that affect control of the machine or do I/O (Input/Output) are forbidden to user-
mode programs. We will come back to the difference between kernel mode and
user mode repeatedly throughout this book.
[Figure: the software layers, with user-mode software above the operating system, which runs in kernel mode on the hardware.]
Figure 1-1. Where the operating system fits in.
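To make the kernel-mode/user-mode distinction concrete, here is a minimal C sketch (not from the book): a user-mode program cannot execute device I/O instructions itself, so it asks the kernel to do the I/O on its behalf through a system call, in this case the standard POSIX write() wrapper.

#include <unistd.h>

int main(void)
{
    const char msg[] = "hello from user mode\n";
    /* write() is the sanctioned crossing into kernel mode; executing the
       underlying device I/O instructions directly in user mode is forbidden
       by the hardware. */
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    return 0;
}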
The user interface program, shell or GUI, is the lowest level of user-mode
software, and allows the user to start other programs, such as a Web browser, e-
mail reader, or music player. These programs, too, make heavy use of the operat-
ing system.
The placement of the operating system is shown in Fig. 1-1. It runs on the
bare hardware and provides the base for all the other software.
An important distinction between the operating system and normal (user-
mode) software is that if a user does not like a particular e-mail reader, he† is free
to get a different one or write his own if he so chooses; he is not free to write his
own clock interrupt handler, which is part of the operating system and is protected
by hardware against attempts by users to modify it.
This distinction, however, is sometimes blurred in embedded systems (which
may not have kernel mode) or interpreted systems (such as Java-based operating
systems that use interpretation, not hardware, to separate the components).
Also, in many systems there are programs that run in user mode but which
help the operating system or perform privileged functions. For example, there is
often a program that allows users to change their passwords. This program is not
part of the operating system and does not run in kernel mode, but it clearly carries
out a sensitive function and has to be protected in a special way. In some sys-
tems, this idea is carried to an extreme form, and pieces of what is traditionally
t "He" should be read as "he or she" throughout the book.
considered to be the operating system (such as the file system) run in user space.
In such systems, it is difficult to draw a clear boundary. Everything running in
kernel mode is clearly part of the operating system, but some programs running
outside it are arguably also part of it, or at least closely associated with it.
Operating systems differ from user (i.e., application) programs in ways other
than where they reside. In particular, they are huge, complex, and long-lived.
The source code of an operating system like Linux or Windows is on the order of
five million lines of code. To conceive of what this means, think of printing out
five million lines in book form, with 50 lines per page and 1000 pages per volume
(larger than this book). It would take 100 volumes to list an operating system of
this size—essentially an entire bookcase. Can you imagine getting a job maintain-
ing an operating system and on the first day having your boss bring you to a book-
case with the code and say: "Go learn that." And this is only for the part that runs
in the kernel. User programs like the GUI, libraries, and basic application soft-
ware (things like Windows Explorer) can easily run to 10 or 20 times that amount.
It should be clear now why operating systems live a long time—they are very
hard to write, and having written one, the owner is loath to throw it out and start
again. Instead, they evolve over long periods of time. Windows 95/98/Me was
basically one operating system and Windows NT/2000/XP/Vista is a different
one. They look similar to the users because Microsoft made very sure that the user
interface of Windows 2000/XP was quite similar to the system it was replacing,
mostly Windows 98. Nevertheless, there were very good reasons why Microsoft
got rid of Windows 98 and we will come to these when we study Windows in de-
tail in Chap. 11.
The other main example we will use throughout this book (besides Windows)
is UNIX and its variants and clones. It, too, has evolved over the years, with ver-
sions like System V, Solaris, and FreeBSD being derived from the original sys-
tem, whereas Linux is a fresh code base, although very closely modeled on UNIX
and highly compatible with it. We will use examples from UNIX throughout this
book and look at Linux in detail in Chap. 10.
In this chapter we will touch on a number of key aspects of operating systems,
briefly, including what they are, their history, what kinds are around, some of the
basic concepts, and their structure. We will come back to many of these impor-
tant topics in later chapters in more detail.
1.1 WHAT IS AN OPERATING SYSTEM?
It is hard to pin down what an operating system is other than saying it is the
software that runs in kernel mode—and even that is not always true. Part of the
problem is that operating systems perform two basically unrelated functions: pro-
viding application programmers (and application programs, naturally) a clean
abstract set of resources instead of the messy hardware ones and managing these
hardware resources. Depending on who is doing the talking, you might hear
mostly about one function or the other. Let us now look at both.
1.1.1 The Operating System as an Extended Machine
The architecture (instruction set, memory organization, I/O, and bus struc-
ture) of most computers at the machine language level is primitive and awkward
to program, especially for input/output. To make this point more concrete, con-
sider how floppy disk I/O is done using the NEC PD765 compatible controller
chips used on most Intel-based personal computers. (Throughout this book we
will use the terms "floppy disk" and "diskette" interchangeably.) We use the
floppy disk as an example, because, although it is obsolete, it is much simpler
than a modern hard disk. The PD765 has 16 commands, each specified by loading
between 1 and 9 bytes into a device register. These commands are for reading and
writing data, moving the disk arm, and formatting tracks, as well as initializing,
sensing, resetting, and recalibrating the controller and the drives.
The most basic commands are read and write, each of which requires 13 pa-
rameters, packed into 9 bytes. These parameters specify such items as the address
of the disk block to be read, the number of sectors per track, the recording mode
used on the physical medium, the intersector gap spacing, and what to do with a
deleted-data-address-mark. If you do not understand this mumbo jumbo, do not
worry; that is precisely the point—it is rather esoteric. When the operation is com-
pleted, the controller chip returns 23 status and error fields packed into 7 bytes.
As if this were not enough, the floppy disk programmer must also be constantly
aware of whether the motor is on or off. If the motor is off, it must be turned on
(with a long startup delay) before data can be read or written. The motor cannot
be left on too long, however, or the floppy disk will wear out. The programmer is
thus forced to deal with the trade-off between long startup delays versus wearing
out floppy disks (and losing the data on them).
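To see why this is painful, the sketch below assembles the kind of 9-byte command block a read requires. It is illustrative only, not a working driver: the field names paraphrase common PD765-family datasheet terminology, the numeric values are made up, and the real work of turning on the motor, polling status registers, and decoding the 7 result bytes is omitted.

#include <stdint.h>
#include <stdio.h>

/* The nine command bytes of a PD765-style READ DATA command. */
struct fd_read_cmd {
    uint8_t opcode;        /* READ DATA code plus MFM/multitrack flag bits */
    uint8_t drive_head;    /* drive select and head number */
    uint8_t cylinder;      /* C: track address */
    uint8_t head;          /* H: head address */
    uint8_t sector;        /* R: first sector to read */
    uint8_t sector_size;   /* N: bytes per sector, coded logarithmically */
    uint8_t end_of_track;  /* EOT: last sector number on the track */
    uint8_t gap_length;    /* GPL: intersector gap spacing */
    uint8_t data_length;   /* DTL: byte count, used only when N == 0 */
};

int main(void)
{
    /* Illustrative values: read cylinder 20, head 0, sector 1, with
       512-byte sectors and 18 sectors per track. */
    struct fd_read_cmd cmd = { 0x46, 0x00, 20, 0, 1, 2, 18, 0x1B, 0xFF };
    const uint8_t *p = (const uint8_t *)&cmd;

    for (unsigned i = 0; i < sizeof cmd; i++)
        printf("byte %u -> device register: 0x%02X\n", i, (unsigned)p[i]);
    return 0;
}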
Without going into the real details, it should be clear that the average pro-
grammer probably does not want to get too intimately involved with the pro-
gramming of floppy disks (or hard disks, which are worse). Instead, what the pro-
grammer wants is a simple, high-level abstraction to deal with. In the case of
disks, a typical abstraction would be that the disk contains a collection of named
files. Each file can be opened for reading or writing, then read or written, and fi-
nally closed. Details such as whether or not recording should use modified fre-
quency modulation and what the current state of the motor is should not appear in
the abstraction presented to the application programmer.
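With the file abstraction, the whole interaction shrinks to a few calls. The C program below is a minimal sketch using the standard POSIX open/read/close interface; the filename is invented for the example.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[512];
    ssize_t n;

    /* Open by name: no disk addresses, gap spacings, or motor state. */
    int fd = open("photo.jpg", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Read a stream of bytes, not sectors, and copy them to stdout. */
    while ((n = read(fd, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    return 0;
}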
Abstraction is the key to managing complexity. Good abstractions turn a
nearly impossible task into two manageable ones. The first one of these is defin-
ing and implementing the abstractions. The second one is using these abstractions
to solve the problem at hand. One abstraction that almost every computer user
understands is the file. It is a useful piece of information, such as a digital photo,
saved e-mail message, or Web page. Dealing with photos, e-mails, and Web pages
is easier than the details of disks, such as the floppy disk described above. The job
of the operating system is to create good abstractions and then implement and
manage the abstract objects thus created. In this book, we will talk a lot about ab-
stractions. They are one of the keys to understanding operating systems.
This point is so important that it is worth repeating in different words. With
all due respect to the industrial engineers who designed the Macintosh, hardware
is ugly. Real processors, memories, disks, and other devices are very complicated
and present difficult, awkward, idiosyncratic, and inconsistent interfaces to the
people who have to write software to use them. Sometimes this is due to the need
for backward compatibility with older hardware, sometimes due to a desire to
save money, but sometimes the hardware designers do not realize (or care) how
much trouble they are causing for the software. One of the major tasks of the op-
erating system is to hide the hardware and present programs (and their pro-
grammers) with nice, clean, elegant, consistent abstractions to work with instead.
Operating systems turn the ugly into the beautiful, as shown in Fig. 1-2.
[Figure: application programs work with the beautiful interface the operating system provides, while the operating system deals with the ugly hardware interface below.]
Figure 1-2. Operating systems turn ugly hardware into beautiful abstractions.
It should be noted that the operating system's real customers are the applica-
tion programs (via the application programmers, of course). They are the ones
who deal directly with the operating system and its abstractions. In contrast, end
users deal with the abstractions provided by the user interface, either a command-
line shell or a graphical interface. While the abstractions at the user interface may
be similar to the ones provided by the operating system, this is not always the
case. To make this point clearer, consider the normal Windows desktop and the
line-oriented command prompt. Both are programs running on the Windows oper-
ating system and use the abstractions Windows provides, but they offer very dif-
ferent user interfaces. Similarly, a Linux user running Gnome or KDE sees a very
different interface than a Linux user working directly on top of the underlying
(text-oriented) X Window System, but the underlying operating system abstrac-
tions are the same in both cases.
In this book, we will study the abstractions provided to application programs
in great detail, but say rather little about user interfaces. That is a large and impor-
tant subject, but one only peripherally related to operating systems.
1.1.2 The Operating System as a Resource Manager
The concept of an operating system as primarily providing abstractions to ap-
plication programs is a top-down view. An alternative, bottom-up, view holds
that the operating system is there to manage all the pieces of a complex system.
Modern computers consist of processors, memories, timers, disks, mice, network
interfaces, printers, and a wide variety of other devices. In the alternative view,
the job of the operating system is to provide for an orderly and controlled alloca-
tion of the processors, memories, and I/O devices among the various programs
competing for them.
Modern operating systems allow multiple programs to run at the same time.
Imagine what would happen if three programs running on some computer all tried
to print their output simultaneously on the same printer. The first few lines of
printout might be from program 1, the next few from program 2, then some from
program 3, and so forth. The result would be chaos. The operating system can
bring order to the potential chaos by buffering all the output destined for the print-
er on the disk. When one program is finished, the operating system can then copy
its output from the disk file where it has been stored to the printer, while at the
same time the other program can continue generating more output, oblivious to
the fact that the output is not really going to the printer (yet).
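A toy version of the idea might look like the C sketch below: each job writes its printout to a private spool file, and a separate printer daemon (not shown) would later copy finished files to the real printer one at a time. The spool path and naming scheme are invented for the sketch.

#include <stdio.h>
#include <unistd.h>

/* Hypothetical helper: open a spool file private to this job. */
static FILE *open_spool_file(void)
{
    char path[64];
    snprintf(path, sizeof path, "/tmp/spool-job-%ld.txt", (long)getpid());
    return fopen(path, "w");
}

int main(void)
{
    FILE *spool = open_spool_file();
    if (spool == NULL)
        return 1;

    /* The job "prints" at full speed with no printer contention; a daemon
       would later feed completed spool files to the device one at a time. */
    for (int line = 1; line <= 5; line++)
        fprintf(spool, "output line %d\n", line);

    fclose(spool);   /* closing marks this job's output as complete */
    return 0;
}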
When a computer (or network) has multiple users, the need for managing and
protecting the memory, I/O devices, and other resources is even greater, since the
users might otherwise interfere with one another. In addition, users often need to
share not only hardware, but information (files, databases, etc.) as well. In short,
this view of the operating system holds that its primary task is to keep track of
which programs are using which resource, to grant resource requests, to account
for usage, and to mediate conflicting requests from different programs and users.
Resource management includes multiplexing (sharing) resources in two dif-
ferent ways: in time and in space. When a resource is time multiplexed, different
programs or users take turns using it. First one of them gets to use the resource,
then another, and so on. For example, with only one CPU and multiple programs
that want to run on it, the operating system first allocates the CPU to one program,
then, after it has run long enough, another one gets to use the CPU, then another,
and then eventually the first one again. Determining how the resource is time mul-
tiplexed—who goes next and for how long—is the task of the operating system.
Another example of time multiplexing is sharing the printer. When multiple print
jobs are queued up for printing on a single printer, a decision has to be made
about which one is to be printed next.
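The turn-taking can be pictured with a toy round-robin loop, as in the C sketch below; the number of jobs, their work amounts, and the one-unit quantum are all illustrative.

#include <stdio.h>

#define NJOBS 3

int main(void)
{
    int remaining[NJOBS] = { 5, 2, 4 };   /* work units left per job */
    int unfinished = NJOBS;

    /* Hand the single CPU to each runnable job in turn, one quantum at
       a time, until every job has used up its work. */
    while (unfinished > 0) {
        for (int job = 0; job < NJOBS; job++) {
            if (remaining[job] == 0)
                continue;                 /* job already finished */
            printf("CPU -> job %d\n", job);
            if (--remaining[job] == 0) {
                printf("job %d done\n", job);
                unfinished--;
            }
        }
    }
    return 0;
}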
The other kind of multiplexing is space multiplexing. Instead of the customers
taking turns, each one gets part of the resource. For example, main memory is
normally divided up among several running programs, so each one can be resident
at the same time (for example, in order to take turns using the CPU). Assuming
there is enough memory to hold multiple programs, it is more efficient to hold
several programs in memory at once rather than give one of them all of it, espe-
cially if it only needs a small fraction of the total. Of course, this raises issues of
fairness, protection, and so on, and it is up to the operating system to solve them.
Another resource that is space multiplexed is the (hard) disk. In many systems a
single disk can hold files from many users at the same time. Allocating disk space
and keeping track of who is using which disk blocks is a typical operating system
resource management task.
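The contrast with space multiplexing can be sketched the same way: instead of taking turns, each program receives its own region of the resource. The C sketch below hands out disjoint slices of a make-believe memory pool; all sizes and names are invented.

#include <stdio.h>

#define MEMORY_KB 1024   /* total make-believe physical memory */

int main(void)
{
    const char *prog[] = { "editor", "compiler", "mailer" };
    const int need_kb[] = { 128, 512, 64 };   /* illustrative requests */
    int next_free = 0;                        /* simple bump allocator */

    /* Every program gets its own region and stays resident, so all of
       them can take turns on the CPU without being reloaded each time. */
    for (int i = 0; i < 3; i++) {
        if (next_free + need_kb[i] > MEMORY_KB) {
            printf("%s: not enough memory\n", prog[i]);
            continue;
        }
        printf("%s resident at %d..%d KB\n",
               prog[i], next_free, next_free + need_kb[i] - 1);
        next_free += need_kb[i];
    }
    printf("%d KB free\n", MEMORY_KB - next_free);
    return 0;
}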
1.2 HISTORY OF OPERATING SYSTEMS
Operating systems have been evolving through the years. In the following
sections we will briefly look at a few of the highlights. Since operating systems
have historically been closely tied to the architecture of the computers on which
they run, we will look at successive generations of computers to see what their op-
erating systems were like. This mapping of operating system generations to com-
puter generations is crude, but it does provide some structure where there would
otherwise be none.
The progression given below is largely chronological, but it has been a bumpy
ride. Each development did not wait until the previous one nicely finished before
getting started. There was a lot of overlap, not to mention many false starts and
dead ends. Take this as a guide, not as the last word.
The first true digital computer was designed by the English mathematician
Charles Babbage (1791-1871). Although Babbage spent most of his life and for-
tune trying to build his "analytical engine," he never got it working properly be-
cause it was purely mechanical, and the technology of his day could not produce
the required wheels, gears, and cogs to the high precision that he needed. Need-
less to say, the analytical engine did not have an operating system.
As an interesting historical aside, Babbage realized that he would need soft-
ware for his analytical engine, so he hired a young woman named Ada Lovelace,
who was the daughter of the famed British poet Lord Byron, as the world's first
programmer. The programming language Ada® is named after her.
1.2.1 The First Generation (1945-55) Vacuum Tubes
After Babbage's unsuccessful efforts, little progress was made in constructing
digital computers until World War II, which stimulated an explosion of activity.
Prof. John Atanasoff and his graduate student Clifford Berry built what is now
regarded as the first functioning digital computer at Iowa State University. It used
300 vacuum tubes. At about the same time, Konrad Zuse in Berlin built the Z3
computer out of relays. In 1944, the Colossus was built by a group at Bletchley
Park, England, the Mark I was built by Howard Aiken at Harvard, and the ENIAC
was built by John Mauchly and his graduate student J. Presper Eckert at the
University of Pennsylvania. Some were binary, some used vacuum tubes, some
were programmable, but all were very primitive and took seconds to perform even
the simplest calculation.
In these early days, a single group of people (usually engineers) designed,
built, programmed, operated, and maintained each machine. All programming was
done in absolute machine language, or even worse yet, by wiring up electrical cir-
cuits by connecting thousands of cables to plugboards to control the machine's
basic functions. Programming languages were unknown (even assembly language
was unknown). Operating systems were unheard of. The usual mode of operation
was for the programmer to sign up for a block of time using the signup sheet on
the wall, then come down to the machine room, insert his or her plugboard into
the computer, and spend the next few hours hoping that none of the 20,000 or so
vacuum tubes would burn out during the run. Virtually all the problems were sim-
ple straightforward numerical calculations, such as grinding out tables of sines,
cosines, and logarithms.
By the early 1950s, the routine had improved somewhat with the introduction
of punched cards. It was now possible to write programs on cards and read them
in instead of using plugboards; otherwise, the procedure was the same.
1.2.2 The Second Generation (1955-65) Transistors and Batch Systems
The introduction of the transistor in the mid-1950s changed the picture radi-
cally. Computers became reliable enough that they could be manufactured and
sold to paying customers with the expectation that they would continue to func-
tion long enough to get some useful work done. For the first time, there was a
clear separation between designers, builders, operators, programmers, and mainte-
nance personnel.
These machines, now called mainframes, were locked away in specially air-
conditioned computer rooms, with staffs of professional operators to run them.
Only large corporations or major government agencies or universities could afford
the multimillion-dollar price tag. To run a job (i.e., a program or set of pro-
grams), a programmer would first write the program on paper (in FORTRAN or
assembler), then punch it on cards. He would then bring the card deck down to
the input room and hand it to one of the operators and go drink coffee until the
output was ready.
When the computer finished whatever job it was currently running, an opera-
tor would go over to the printer and tear off the output and carry it over to the out-
put room, so that the programmer could collect it later. Then he would take one of
the card decks that had been brought from the input room and read it in. If the
FORTRAN compiler was needed, the operator would have to get it from a file
cabinet and read it in. Much computer time was wasted while operators were
walking around the machine room.
Given the high cost of the equipment, it is not surprising that people quickly
looked for ways to reduce the wasted time. The solution generally adopted was
the batch system. The idea behind it was to collect a tray full of jobs in the input
room and then read them onto a magnetic tape using a small (relatively) inexpen-
sive computer, such as the IBM 1401, which was quite good at reading cards,
copying tapes, and printing output, but not at all good at numerical calculations.
Other, much more expensive machines, such as the IBM 7094, were used for the
real computing. This situation is shown in Fig. 1-3.
Figure 1-3. An early batch system. (a) Programmers bring cards to 1401. (b) 1401 reads batch of jobs onto tape. (c) Operator carries input tape to 7094. (d) 7094 does computing. (e) Operator carries output tape to 1401. (f) 1401 prints output.
After about an hour of collecting a batch of jobs, the cards were read onto a
magnetic tape, which was carried into the machine room, where it was mounted
on a tape drive. The operator then loaded a special program (the ancestor of
today's operating system), which read the first job from tape and ran it. The out-
put was written onto a second tape, instead of being printed. After each job fin-
ished, the operating system automatically read the next job from the tape and
began running it. When the whole batch was done, the operator removed the input
and output tapes, replaced the input tape with the next batch, and brought the out-
put tape to a 1401 for printing offline (i.e., not connected to the main computer).
The structure of a typical input job is shown in Fig. 1-4. It started out with a
$JOB card, specifying the maximum run time in minutes, the account number to
be charged, and the programmer's name. Then came a $FORTRAN card, telling
the operating system to load the FORTRAN compiler from the system tape. It
was directly followed by the program to be compiled, and then a $LOAD card, di-
recting the operating system to load the object program just compiled. (Compiled
programs were often written on scratch tapes and had to be loaded explicitly.)
Next came the $RUN card, telling the operating system to run the program with
the data following it. Finally, the $END card marked the end of the job. These
primitive control cards were the forerunners of modern shells and command-line
interpreters.
[Figure: the job's card deck, from first card to last: $JOB, 10,6610802, MARVIN TANENBAUM; $FORTRAN; the Fortran program; $LOAD; $RUN; data for the program; $END.]
Figure 1-4. Structure of a typical FMS job.
Large second-generation computers were used mostly for scientific and en-
gineering calculations, such as solving the partial differential equations that often
occur in physics and engineering. They were largely programmed in FORTRAN
and assembly language. Typical operating systems were FMS (the Fortran Moni-
tor System) and IBSYS, IBM's operating system for the 7094.
1.2.3 The Third Generation (1965-1980) ICs and Multiprogramming
By the early 1960s, most computer manufacturers had two distinct, incompati-
ble, product lines. On the one hand there were the word-oriented, large-scale
scientific computers, such as the 7094, which were used for numerical calcula-
tions in science and engineering. On the other hand, there were the character-
oriented, commercial computers, such as the 1401, which were widely used for
tape sorting and printing by banks and insurance companies.
With the introduction of the IBM System/360, which used ICs (Integrated Cir-
cuits), IBM combined these two machine types in a single series of compatible
machines. The lineal descendant of the 360, the zSeries, is still widely used for
high-end server applications with massive databases. One of the many
innovations on the 360 was multiprogramming, the ability to have several pro-
grams in memory at once, each in its own memory partition, as shown in Fig. 1-5.
While one job was waiting for I/O to complete, another job could be using the
CPU. Special hardware kept one program from interfering with another.
[Figure: memory divided into partitions, with Job 1, Job 2, and Job 3 each in its own partition above the operating system.]
Figure 1-5. A multiprogramming system with three jobs in memory.
Another major feature present in third-generation operating systems was the
ability to read jobs from cards onto the disk as soon as they were brought to the
computer room. Then, whenever a running job finished, the operating system
could load a new job from the disk into the now-empty partition and run it. This
technique is called spooling (from Simultaneous Peripheral Operation On Line)
and was also used for output. With spooling, the 1401s were no longer needed,
and much carrying of tapes disappeared.
Although third-generation operating systems were well suited for big scien-
tific calculations and massive commercial data processing runs, they were still
basically batch systems with turnaround times of an hour. Programming is diffi-
cult if a misplaced comma wastes an hour. This desire of many programmers for
quick response time paved the way for timesharing, a variant of multiprogram-
ming, in which each user has an online terminal. In a timesharing system, if 20
users are logged in and 17 of them are thinking or talking or drinking coffee, the
CPU can be allocated in turn to the three jobs that want service. Since people
debugging programs usually issue short commands (e.g., compile a five-page pro-
cedure†) rather than long ones (e.g., sort a million-record file), the computer can
provide fast, interactive service to a number of users and perhaps also work on big
batch jobs in the background when the CPU is otherwise idle. The first serious
timesharing system, CTSS (Compatible Time Sharing System), was developed
at M.I.T. on a specially modified 7094 (Corbató et al., 1962). However, timeshar-
ing did not really become popular until the necessary protection hardware became
widespread during the third generation.
After the success of the CTSS system, M.I.T., Bell Labs, and General Electric
(then a major computer manufacturer) decided to embark on the development of a
"computer utility," a machine that would support some hundreds of simultaneous
†We will use the terms "procedure," "subroutine," and "function" interchangeably in this book.