Solutions for Introduction to algorithms
The author of this document takes absolutely no responsibility for the contents. This is merely
a vague suggestion to a solution to some of the exercises posed in the book Introduction to algo-
rithms by Cormen, Leiserson and Rivest. It is very likely that there are many errors and that the
solutions are wrong. If you have found an error, have a better solution or wish to contribute in
some constructive way please send a message to email@example.com.
It is important that you try hard to solve the exercises on your own. Use this document only as
a last resort or to check if your instructor got it all wrong.
Please note that the document is under construction and is updated only sporadically. Have
fun with your algorithms.
Last update: December 9, 2002
1.2 − 2
Insertion sort beats merge sort when 8n² < 64n lg n ⇒ n < 8 lg n ⇒ 2^(n/8) < n. This is true for
2 ≤ n ≤ 43 (found by using a calculator).
Rewrite merge sort to use insertion sort for input of size 43 or less in order to improve the
running time.
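A sketch of this hybrid in Python (the function names are illustrative; the cutoff 43 comes from the calculation above, under the constants 8 and 64 assumed in the exercise):

```python
def insertion_sort(a, lo, hi):
    # Sort a[lo..hi] (inclusive) in place by insertion.
    for i in range(lo + 1, hi + 1):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

CUTOFF = 43  # below this size insertion sort wins (8n^2 < 64n lg n)

def merge_sort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if hi - lo + 1 <= CUTOFF:
        insertion_sort(a, lo, hi)   # small subproblems: switch to insertion sort
        return
    mid = (lo + hi) // 2
    merge_sort(a, lo, mid)
    merge_sort(a, mid + 1, hi)
    # Merge the two sorted halves back into a[lo..hi].
    merged = []
    i, j = lo, mid + 1
    while i <= mid and j <= hi:
        if a[i] <= a[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(a[j]); j += 1
    merged.extend(a[i:mid + 1])
    merged.extend(a[j:hi + 1])
    a[lo:hi + 1] = merged
```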
1 − 1
We assume that all months are 30 days and all years are 365 days.
[Table: the largest problem size n solvable in 1 second, 1 minute, 1 hour, 1 day, 1 month, 1 year and 1 century for f(n) = lg n, √n, n, n lg n, n², n³, 2ⁿ and n!. Most entries were lost in extraction; the surviving complete rows are 2ⁿ: 19, 25, 31, 36, 41, 49, 56 and n!: 9, 11, 12, 13, 15, 17, 18.]
2.1 − 2
In line 5 of INSERTION-SORT alter A[i] > key to A[i] < key in order to sort the elements in
nonincreasing order.
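A sketch of the flipped comparison in Python (the function name is illustrative):

```python
def insertion_sort_desc(a):
    # INSERTION-SORT with the comparison flipped: sorts in nonincreasing order.
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] < key:   # was a[j] > key in the ascending version
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```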
2.1 − 3
Algorithm 1 LINEAR-SEARCH(A, v)
Input: A = ⟨a₁, . . . , aₙ⟩ and a value v.
Output: An index i such that v = A[i] or nil if v ∉ A.
for i ← 1 to n do
    if A[i] = v then
        return i
return nil
As a loop invariant we say that none of the elements at index A[1, . . . , i − 1] are equal to v.
Clearly, all three properties (initialization, maintenance and termination) are fulfilled by this loop invariant.
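The same scan in Python (0-based indices, returning None in place of nil; the function name is illustrative):

```python
def linear_search(a, v):
    # Scan left to right; invariant: v does not occur in a[0:i].
    for i, x in enumerate(a):
        if x == v:
            return i
    return None   # v is not in a
```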
2.2 − 1
n³/1000 − 100n² − 100n + 3 = Θ(n³).
2.2 − 2
Assume that FIND-MIN(A, r, s) returns the index of the smallest element in A between indices r
and s. Clearly, this can be implemented in O(s − r) time if s ≥ r.
Algorithm 2 SELECTION-SORT(A)
Input: A = ⟨a₁, . . . , aₙ⟩.
Output: sorted A.
for i ← 1 to n − 1 do
    j ← FIND-MIN(A, i, n)
    exchange A[i] ↔ A[j]
As a loop invariant we choose that A[1, . . . , i − 1] are sorted and all other elements are greater
than these. We only need to iterate to n − 1 since according to the invariant the nth element will
then be the largest.
The n calls of FIND-MIN give the following bound on the time complexity:
∑_{i=1}^{n−1} O(n − i) = O(n²).
This holds for both the best- and worst-case running time.
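A sketch of the two procedures in Python (0-based indices; the function names are illustrative):

```python
def find_min(a, r, s):
    # Index of the smallest element of a[r..s] (inclusive); O(s - r) comparisons.
    m = r
    for j in range(r + 1, s + 1):
        if a[j] < a[m]:
            m = j
    return m

def selection_sort(a):
    n = len(a)
    for i in range(n - 1):          # the last element is forced to be the largest
        j = find_min(a, i, n - 1)
        a[i], a[j] = a[j], a[i]     # exchange A[i] <-> A[j]
    return a
```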
2.2 − 3
Given that each element is equally likely to be the one searched for and the element searched for is
present in the array, a linear search will on the average have to search through half the elements.
This is because half the time the wanted element will be in the ﬁrst half and half the time it will
be in the second half. Both the worst-case and average-case running times of LINEAR-SEARCH are Θ(n).
2.2 − 4
One can modify an algorithm to have a good best-case running time by specializing it to handle a best-case input efficiently, e.g. by first checking whether the input is already sorted and returning it immediately if so.
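A minimal sketch of this trick (the function name and the fallback to Python's built-in sort are illustrative, not from the book):

```python
def sort_with_linear_best_case(a):
    # Special case: if the input is already sorted, return immediately.
    if all(a[i] <= a[i + 1] for i in range(len(a) - 1)):
        return a                 # best case: Theta(n) comparisons
    return sorted(a)             # otherwise fall back to a general sort
```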
2.3 − 5
A recursive version of binary search on an array. Clearly, the worst-case running time is Θ(lg n).
Algorithm 3 BINARY-SEARCH(A, v, p, r)
Input: A sorted array A and a value v; initially p = 1 and r = n.
Output: An index j such that v = A[j] or nil.
if p > r then
    return nil
j ← ⌊(p + r)/2⌋
if v = A[j] then
    return j
if v < A[j] then
    return BINARY-SEARCH(A, v, p, j − 1)
return BINARY-SEARCH(A, v, j + 1, r)
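The recursion in Python (0-based indices, None in place of nil; the function name is illustrative):

```python
def binary_search(a, v, p, r):
    # Recursively search the sorted slice a[p..r] (inclusive) for v.
    if p > r:
        return None
    j = (p + r) // 2
    if v == a[j]:
        return j
    if v < a[j]:
        return binary_search(a, v, p, j - 1)   # recurse on the left half
    return binary_search(a, v, j + 1, r)       # recurse on the right half
```

Recursing on p..j − 1 and j + 1..r (rather than p..j and j..r) shrinks the range at every step, which is what guarantees termination.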
2.3 − 7
Give a Θ(n lg n) time algorithm for determining if there exist two elements in a set S whose sum
is exactly some value x.
Algorithm 4 CHECKSUMS(A, x)
Input: An array A of n numbers and a value x.
Output: A boolean value indicating whether there are two elements in A whose sum is x.
sort A using merge sort
for i ← 1 to n do
    if BINARY-SEARCH(A, x − A[i], 1, n) ≠ nil then
        return true
return false
Sorting takes Θ(n lg n) time and the n binary searches take O(n lg n) time in total, so clearly this algorithm does the job. (Strictly, when x = 2A[i] one should also check that the index found differs from i, so that the same element is not used twice.)
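A sketch in Python, using the standard library's `bisect_left` for the binary search (the function name is illustrative; the extra j + 1 check handles the case where the complement equals A[i] but a duplicate exists):

```python
from bisect import bisect_left

def check_sums(a, x):
    # Theta(n lg n): sort, then binary-search for the complement x - a[i].
    a = sorted(a)
    n = len(a)
    for i, v in enumerate(a):
        t = x - v
        j = bisect_left(a, t)               # leftmost position of t, if present
        if j < n and a[j] == t and (j != i or (j + 1 < n and a[j + 1] == t)):
            return True                     # two distinct positions sum to x
    return False
```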
3.1 − 1
Let f(n), g(n) be asymptotically nonnegative. Show that max(f(n), g(n)) = Θ(f(n) + g(n)). This
means that there exist positive constants c₁, c₂ and n₀ such that
0 ≤ c₁(f(n) + g(n)) ≤ max(f(n), g(n)) ≤ c₂(f(n) + g(n))
for all n ≥ n₀. Choosing c₂ = 1 clearly shows the third inequality, since the maximum must be smaller than
the sum. c₁ should be selected as 1/2, since the maximum is always greater than or equal to the
average of f(n) and g(n). Note the significance of the “asymptotically nonnegative” assumption.
The first inequality could not be satisfied otherwise.
3.1 − 4
2^(n+1) = O(2^n) since 2^(n+1) = 2 · 2^n ≤ 2 · 2^n. However 2^(2n) is not O(2^n): by definition we have
2^(2n) = 2^n · 2^n, which for no constant c asymptotically may be less than or equal to c · 2^n.
3 − 4
Let f(n) and g(n) be asymptotically positive functions.
a. No, f(n) = O(g(n)) does not imply g(n) = O(f(n)). Clearly, n = O(n²) but n² ≠ O(n).
b. No, f(n) + g(n) is not Θ(min(f(n), g(n))). As an example notice that n + 1 ≠ Θ(min(n, 1)) =
Θ(1).
c. Yes, if f(n) = O(g(n)) then lg(f(n)) = O(lg(g(n))), provided that f(n) ≥ 1 and lg(g(n)) ≥ 1
for sufficiently large n. We have that:
f(n) ≤ cg(n) ⇒ lg f(n) ≤ lg(cg(n)) = lg c + lg g(n).
To show that this is smaller than b lg g(n) for some constant b, divide by lg g(n):
(lg c + lg g(n)) / lg g(n) = lg c / lg g(n) + 1 ≤ lg c + 1.
The last inequality holds since lg g(n) ≥ 1, so b = lg c + 1 works.
d. No, f(n) = O(g(n)) does not imply 2^f(n) = O(2^g(n)). If f(n) = 2n and g(n) = n we have that
2n ≤ 2 · n but not 2^(2n) ≤ c · 2^n for any constant c by exercise 3.1 − 4.
e. Yes and no: if f(n) < 1 for large n then f(n)² < f(n) and the upper bound f(n) = O(f(n)²)
will not hold. Otherwise f(n) ≥ 1 and the statement is trivially true.
f. Yes, f(n) = O(g(n)) implies g(n) = Ω(f(n)). We have f(n) ≤ cg(n) for positive c and thus
1/c · f(n) ≤ g(n).
g. No, f(n) is not necessarily Θ(f(n/2)): clearly 2^n ≠ O(2^(n/2)), since 2^n = 2^(n/2) · 2^(n/2) > c · 2^(n/2)
for any constant c if n is sufficiently large.
h. By a small modification to exercise 3.1−1 we have that f(n)+o(f(n)) = Θ(max(f(n), o(f(n)))) =
Θ(f(n)).