Dasgupta, Papadimitriou, Vazirani, Algorithms (2008 edition): Theory and Practice

Algorithms is a classic text by Sanjoy Dasgupta, Christos Papadimitriou, and Umesh Vazirani, published by McGraw-Hill in 2008. A cornerstone of computer science, it covers the principles and practice of algorithm design and analysis, and suits both university computer-science courses and self-study by professionals.

The book spans a broad range of topics, including sorting, searching, graph algorithms, dynamic programming, greedy algorithms, recursion, and complexity theory. With clear exposition, rigorous logic, and rich examples, the authors guide readers through the mathematical ideas behind algorithms and their engineering applications, helping students and researchers sharpen their problem-solving skills and design efficient, elegant solutions.

Particular emphasis is placed on algorithm analysis, notably time and space complexity, the concepts essential for judging efficiency; many chapters also discuss the challenges of turning theory into working programs.

The book is copyrighted by The McGraw-Hill Companies, Inc.; reproduction, distribution, or storage in any form requires the publisher's written consent, and some supplementary materials, such as electronic and print versions, may be available only within the United States.

Beyond its role as a teaching text, Algorithms serves as an important reference for practitioners, and its publication marks a milestone in algorithms research and education. For readers who want a deep understanding of algorithm design and optimization, it provides a solid theoretical foundation and practical experience for academic research, engineering, and everyday programming alike.
P1: OSO/OVY P2: OSO/OVY QC: OSO/OVY T1: OSO
das23402
Ch00 GTBL020-Dasgupta-v10 August 2, 2006 2:44
Chapter 0 Algorithms
5
Figure 0.1 The proliferation of recursive calls in fib1. [Recursion tree: F_n branches into F_{n-1} and F_{n-2}, which branch in turn into F_{n-2}, F_{n-3}, F_{n-4}, and so on, with the same subproblems recomputed many times.]
As with fib1, the correctness of this algorithm is self-evident because it directly uses the definition of F_n. How long does it take? The inner loop consists of a single computer step and is executed n - 1 times. Therefore the number of computer steps used by fib2 is linear in n. From exponential we are down to polynomial, a huge breakthrough in running time. It is now perfectly reasonable to compute F_200 or even F_200,000.¹
As we will see repeatedly throughout this book, the right algorithm makes all the
difference.
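For reference, the fib2 being analyzed appears earlier in the chapter; a minimal Python rendering of its bottom-up loop (my reconstruction, not the book's exact pseudocode) might be:

```python
def fib2(n):
    """Compute F_n bottom-up: one addition per loop iteration."""
    if n == 0:
        return 0
    prev, curr = 0, 1          # F_0, F_1
    for _ in range(n - 1):     # the inner loop runs n - 1 times
        prev, curr = curr, prev + curr
    return curr

print(fib2(10))   # F_10 = 55
print(fib2(200))  # comfortably fast, even though F_200 has about 42 digits
```

Each iteration replaces the pair (F_{i-1}, F_i) with (F_i, F_{i+1}), so no subproblem is ever recomputed, in contrast to the recursion tree of Figure 0.1.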
More careful analysis
In our discussion so far, we have been counting the number of basic computer steps
executed by each algorithm and thinking of these basic steps as taking a constant
amount of time. This is a very useful simplification. After all, a processor’s instruction set has a variety of basic primitives—branching, storing to memory, comparing
numbers, simple arithmetic, and so on—and rather than distinguishing between
these elementary operations, it is far more convenient to lump them together into
one category.
But looking back at our treatment of Fibonacci algorithms, we have been too liberal
with what we consider a basic step. It is reasonable to treat addition as a single
computer step if small numbers are being added, 32-bit numbers say. But the nth
Fibonacci number is about 0.694n bits long, and this can far exceed 32 as n grows.
¹ To better appreciate the importance of this dichotomy between exponential and polynomial algorithms, the reader may want to peek ahead to the story of Sissa and Moore in Chapter 8.
Arithmetic operations on arbitrarily large numbers cannot possibly be performed
in a single, constant-time step. We need to audit our earlier running time estimates
and make them more honest.
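The constant 0.694 here is log₂ of the golden ratio φ ≈ 1.618, since F_n ≈ φⁿ/√5. Python's arbitrary-precision integers make this easy to check empirically (the fib2 body is my reconstruction, not the book's):

```python
def fib2(n):
    """Bottom-up Fibonacci, as in the book's fib2 (reconstructed)."""
    prev, curr = 0, 1
    for _ in range(n - 1):
        prev, curr = curr, prev + curr
    return curr if n > 0 else 0

for n in (100, 1000, 10_000):
    bits = fib2(n).bit_length()
    # The ratio bits/n tends to log2(golden ratio) ≈ 0.6942.
    print(n, bits, round(bits / n, 4))
```

Already at n = 100 the result is a 69-bit number, far beyond what fits in a single 32-bit machine word.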
We will see in Chapter 1 that the addition of two n-bit numbers takes time roughly proportional to n; this is not too hard to understand if you think back to the grade-school procedure for addition, which works on one digit at a time. Thus fib1, which performs about F_n additions, actually uses a number of basic steps roughly proportional to n·F_n. Likewise, the number of steps taken by fib2 is proportional to n², still polynomial in n and therefore exponentially superior to fib1. This correction to the running time analysis does not diminish our breakthrough.
But can we do even better than fib2? Indeed we can: see Exercise 0.4.
0.3 Big-O notation
We’ve just seen how sloppiness in the analysis of running times can lead to an
unacceptable level of inaccuracy in the result. But the opposite danger is also
present: it is possible to be too precise. An insightful analysis is based on the right
simplifications.
Expressing running time in terms of basic computer steps is already a simplification. After all, the time taken by one such step depends crucially on the particular processor and even on details such as caching strategy (as a result of which the running time can differ subtly from one execution to the next). Accounting for these architecture-specific minutiae is a nightmarishly complex task and yields a result that does not generalize from one computer to the next. It therefore makes more sense to seek an uncluttered, machine-independent characterization of an algorithm’s efficiency. To this end, we will always express running time by counting the number of basic computer steps, as a function of the size of the input.
And this simplification leads to another. Instead of reporting that an algorithm takes, say, 5n³ + 4n + 3 steps on an input of size n, it is much simpler to leave out lower-order terms such as 4n and 3 (which become insignificant as n grows), and even the detail of the coefficient 5 in the leading term (computers will be five times faster in a few years anyway), and just say that the algorithm takes time O(n³) (pronounced “big oh of n³”).
It is time to define this notation precisely. In what follows, think of f (n) and g(n)
as the running times of two algorithms on inputs of size n.
Let f (n) and g(n) be functions from positive integers to positive reals. We say
f = O(g) (which means that “ f grows no faster than g”) if there is a constant
c > 0 such that f (n) ≤ c · g(n).
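The definition can be probed mechanically. A small sketch (helper name and ranges are my own, not from the book) that tests whether a given constant c witnesses f = O(g) over a range of inputs:

```python
def witnesses_big_o(f, g, c, n_max=10_000):
    """Check f(n) <= c * g(n) for n = 1..n_max.

    Numerical evidence only, not a proof: a genuine O(.) claim
    must hold for all n, or at least all n beyond some point.
    """
    return all(f(n) <= c * g(n) for n in range(1, n_max + 1))

# 10n = O(n): the constant c = 10 works everywhere.
print(witnesses_big_o(lambda n: 10 * n, lambda n: n, c=10))    # True
# n^2 is not O(n): no constant helps; c = 1000 fails once n > 1000.
print(witnesses_big_o(lambda n: n * n, lambda n: n, c=1000))   # False
```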
Saying f = O (g) is a very loose analog of “ f ≤ g.” It differs from the usual notion
of ≤ because of the constant c, so that for instance 10n = O(n). This constant also
allows us to disregard what happens for small values of n. For example, suppose we
are choosing between two algorithms for a particular computational task. One takes f1(n) = n² steps, while the other takes f2(n) = 2n + 20 steps (Figure 0.2). Which is better? Well, this depends on the value of n. For n ≤ 5, n² is smaller; thereafter, 2n + 20 is the clear winner. In this case, f2 scales much better as n grows, and therefore it is superior.
This superiority is captured by the big-O notation: f2 = O(f1), because

    f2(n)/f1(n) = (2n + 20)/n² ≤ 22

for all n; on the other hand, f1 ≠ O(f2), since the ratio f1(n)/f2(n) = n²/(2n + 20) can get arbitrarily large, and so no constant c will make the definition work.
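The constant 22 is attained at n = 1, where (2n + 20)/n² = 22/1; a one-line numerical check (my own illustration, not the book's):

```python
# Ratio f2(n)/f1(n) = (2n + 20)/n^2 for n = 1..1000: largest at n = 1.
ratios = {n: (2 * n + 20) / n**2 for n in range(1, 1001)}
best = max(ratios, key=ratios.get)
print(best, ratios[best])  # 1 22.0
```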
Figure 0.2 Which running time is better? [Plot of n² and 2n + 20 for n = 1, ..., 10: n² starts below but overtakes 2n + 20 after n = 5.]
Now another algorithm comes along, one that uses f3(n) = n + 1 steps. Is this better than f2? Certainly, but only by a constant factor. The discrepancy between 2n + 20 and n + 1 is tiny compared to the huge gap between n² and 2n + 20. In order to stay focused on the big picture, we treat functions as equivalent if they differ only by multiplicative constants.

Returning to the definition of big-O, we see that f2 = O(f3):

    f2(n)/f3(n) = (2n + 20)/(n + 1) ≤ 20,

and of course f3 = O(f2), this time with c = 1.
Just as O(·) is an analog of ≤, we can also define analogs of ≥ and = as follows:

    f = Ω(g) means g = O(f)
    f = Θ(g) means f = O(g) and f = Ω(g).

In the preceding example, f2 = Θ(f3) and f1 = Ω(f3).
Big-O notation lets us focus on the big picture. When faced with a complicated function like 3n² + 4n + 5, we just replace it with O(f(n)), where f(n) is as simple as possible. In this particular example we’d use O(n²), because the quadratic portion of the sum dominates the rest. Here are some commonsense rules that help simplify functions by omitting dominated terms:

1. Multiplicative constants can be omitted: 14n² becomes n².
2. n^a dominates n^b if a > b: for instance, n² dominates n.
3. Any exponential dominates any polynomial: 3^n dominates n⁵ (it even dominates 2^n).
4. Likewise, any polynomial dominates any logarithm: n dominates (log n)³. This also means, for example, that n² dominates n log n.
Don’t misunderstand this cavalier attitude toward constants. Programmers and algorithm developers are very interested in constants and would gladly stay up nights in order to make an algorithm run faster by a factor of 2. But understanding algorithms at the level of this book would be impossible without the simplicity afforded by big-O notation.
Exercises
0.1. In each of the following situations, indicate whether f = O(g), or f = Ω(g), or both (in which case f = Θ(g)).

         f(n)                   g(n)
    (a)  n - 100                n - 200
    (b)  n^{1/2}                n^{2/3}
    (c)  100n + log n           n + (log n)²
    (d)  n log n                10n log 10n
    (e)  log 2n                 log 3n
    (f)  10 log n               log(n²)
    (g)  n^{1.01}               n log² n
    (h)  n² / log n             n (log n)²
    (i)  n^{0.1}                (log n)^{10}
    (j)  (log n)^{log n}        n / log n
    (k)  √n                     (log n)³
    (l)  n^{1/2}                5^{log₂ n}
    (m)  n 2^n                  3^n
P1: OSO/OVY P2: OSO/OVY QC: OSO/OVY T1: OSO
das23402
Ch00 GTBL020-Dasgupta-v10 August 2, 2006 2:44
Chapter 0 Algorithms
9
    (n)  2^n                    2^{n+1}
    (o)  n!                     2^n
    (p)  (log n)^{log n}        2^{(log₂ n)²}
    (q)  sum_{i=1}^{n} i^k      n^{k+1}
0.2. Show that, if c is a positive real number, then g(n) = 1 + c + c² + ··· + c^n is:
    (a) Θ(1) if c < 1.
    (b) Θ(n) if c = 1.
    (c) Θ(c^n) if c > 1.
The moral: in big-Θ terms, the sum of a geometric series is simply the first term if the series is strictly decreasing, the last term if the series is strictly increasing, or the number of terms if the series is unchanging.
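The moral can be checked numerically; a small sketch (helper name is mine):

```python
def geometric_sum(c, n):
    """g(n) = 1 + c + c^2 + ... + c^n."""
    return sum(c**i for i in range(n + 1))

print(geometric_sum(0.5, 1000))      # ~2.0: bounded by a constant, Theta(1)
print(geometric_sum(1, 1000))        # 1001: exactly n + 1 terms, Theta(n)
print(geometric_sum(2, 20) / 2**20)  # ~2.0: within a constant of the last term c^n
```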
0.3. The Fibonacci numbers F₀, F₁, F₂, ..., are defined by the rule

    F₀ = 0,  F₁ = 1,  F_n = F_{n-1} + F_{n-2}.

In this problem we will confirm that this sequence grows exponentially fast and obtain some bounds on its growth.
    (a) Use induction to prove that F_n ≥ 2^{0.5n} for n ≥ 6.
    (b) Find a constant c < 1 such that F_n ≤ 2^{cn} for all n ≥ 0. Show that your answer is correct.
    (c) What is the largest c you can find for which F_n = Ω(2^{cn})?
0.4. Is there a faster way to compute the nth Fibonacci number than by fib2 (page 4)? One idea involves matrices.
We start by writing the equations F₁ = F₁ and F₂ = F₀ + F₁ in matrix notation:

    ( F₁ )   ( 0 1 )   ( F₀ )
    ( F₂ ) = ( 1 1 ) · ( F₁ ).

Similarly,

    ( F₂ )   ( 0 1 )   ( F₁ )   ( 0 1 )²   ( F₀ )
    ( F₃ ) = ( 1 1 ) · ( F₂ ) = ( 1 1 )  · ( F₁ )

and in general

    ( F_n   )   ( 0 1 )^n   ( F₀ )
    ( F_n+1 ) = ( 1 1 )   · ( F₁ ).

So, in order to compute F_n, it suffices to raise this 2 × 2 matrix, call it X, to the nth power.
    (a) Show that two 2 × 2 matrices can be multiplied using 4 additions and 8 multiplications.
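Carrying the exercise's idea through, X^n can be computed with repeated squaring in O(log n) matrix multiplications; a sketch of one possible solution (mine, not the book's):

```python
def mat_mult(A, B):
    """Multiply two 2x2 matrices given as ((a, b), (c, d))."""
    return (
        (A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]),
        (A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]),
    )

def fib_matrix(n):
    """Return F_n via X^n, where X = ((0,1),(1,1)), by repeated squaring."""
    result = ((1, 0), (0, 1))   # identity matrix
    X = ((0, 1), (1, 1))
    while n > 0:
        if n & 1:               # multiply in X^(2^k) when that bit of n is set
            result = mat_mult(result, X)
        X = mat_mult(X, X)
        n >>= 1
    # X^n = ((F_{n-1}, F_n), (F_n, F_{n+1})), so F_n sits at position [0][1].
    return result[0][1]

print(fib_matrix(10))  # 55
```

The loop runs about log₂ n times, so only O(log n) matrix multiplications are needed, though as the rest of the exercise explores, the arithmetic is now on very large numbers, so this is not automatically a free win over fib2.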