THE LEVENBERG-MARQUARDT ALGORITHM:
IMPLEMENTATION AND THEORY
Jorge J. Moré
1. Introduction
Let F: R^n → R^m be continuously differentiable, and consider the nonlinear least
squares problem of finding a local minimizer of

(1.1)  \Phi(x) = \frac{1}{2} \sum_{i=1}^{m} f_i(x)^2 = \frac{1}{2} \|F(x)\|^2 .
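As a concrete illustration of the objective in (1.1), the following sketch evaluates \Phi for a hypothetical two-component residual map F (the particular F, and the test point, are assumptions made for this example, not from the paper):

```python
import numpy as np

# Hypothetical residual map F: R^2 -> R^2 (illustration only, not from the paper).
def F(x):
    return np.array([x[0] - 1.0, 2.0 * (x[1] - 3.0)])

# The objective of (1.1): phi(x) = 1/2 * sum_i f_i(x)^2 = 1/2 * ||F(x)||^2.
def phi(x):
    r = F(x)
    return 0.5 * float(r @ r)

x = np.array([2.0, 4.0])
val = phi(x)  # residuals are (1, 2), so phi = 0.5 * (1 + 4) = 2.5
```

Note that minimizing \Phi is equivalent to minimizing ‖F(x)‖, which is the form used in the derivation below.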
Levenberg [1944] and Marquardt [1963] proposed a very elegant algorithm for the
numerical solution of (1.1). However, most implementations are either not robust,
or do not have a solid theoretical justification. In this work we discuss a robust
and efficient implementation of a version of the Levenberg-Marquardt algorithm, and
show that it has strong convergence properties. In addition to robustness, the main
features of this implementation are the proper use of implicitly scaled variables,
and the choice of the Levenberg-Marquardt parameter via a scheme due to Hebden
[1973]. Numerical results illustrating the behavior of this implementation are also
presented.
Notation. In all cases ‖·‖ refers to the ℓ₂ vector norm or to the induced operator
norm. The Jacobian matrix of F evaluated at x is denoted by F'(x), but if we have a
sequence of vectors {x_k}, then J_k and f_k are used instead of F'(x_k) and F(x_k),
respectively.
2. Derivation
The easiest way to derive the Levenberg-Marquardt algorithm is by a linearization
argument. If, given x ∈ R^n, we could minimize

    \psi(p) = \|F(x+p)\|

as a function of p, then x+p would be the desired solution. Since F is usually a
nonlinear function of p, we linearize F(x+p) and obtain the linear least squares
problem

    \psi(p) = \|F(x) + F'(x)p\| .
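The linearized problem above is an ordinary linear least squares problem in the correction p, and its unconstrained minimizer is the Gauss-Newton step. As a sketch (the values of F(x) and F'(x) below are made up for illustration), a least-squares solver stands in for the orthogonal factorizations discussed later:

```python
import numpy as np

# Gauss-Newton step: minimize ||F(x) + F'(x) p|| over p, an unconstrained
# linear least squares problem in the correction p.
def gauss_newton_step(Fx, Jx):
    # Solve min_p ||Fx + Jx p||, i.e. the least-squares solution of Jx p = -Fx.
    p, *_ = np.linalg.lstsq(Jx, -Fx, rcond=None)
    return p

# Illustrative values: F(x) = (x0 - 1, x1 - 2) is already linear,
# so a single step lands exactly on the minimizer.
Jx = np.eye(2)
Fx = np.array([3.0, -1.0])     # residuals at x = (4, 1)
p = gauss_newton_step(Fx, Jx)  # p = (-3, 1)
```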
Of course, this linearization is not valid for all values of p, and thus we con-
sider the constrained linear least squares problem
Work performed under the auspices of the U.S. Energy Research and Development
Administration.
(2.1)  \min\{\psi(p) : \|Dp\| \le \Delta\} .
In theory D is any given nonsingular matrix, but in our implementation D is a diago-
nal matrix which takes into account the scaling of the problem. In either case, p
lies in the hyperellipsoid
(2.2)  E = \{p : \|Dp\| \le \Delta\} ,

but if D is diagonal, then E has axes along the coordinate directions and the length
of the i-th semi-axis is Δ/d_i.
We now consider the solution of (2.1) in some generality, and thus the problem
(2.3)  \min\{\|f + Jp\| : \|Dp\| \le \Delta\}
where f ∈ R^m and J is any m by n matrix. The basis for the Levenberg-Marquardt
method is the result that if p is a solution to (2.3), then p = p(λ) for some
λ ≥ 0 where

(2.4)  p(\lambda) = -(J^T J + \lambda D^T D)^{-1} J^T f .

If J is rank deficient and λ = 0, then (2.4) is defined by the limiting process

    Dp(0) \equiv \lim_{\lambda \to 0^{+}} Dp(\lambda) = -(J D^{-1})^{+} f .

There are two possibilities: Either λ = 0 and ‖Dp(0)‖ ≤ Δ, in which case p(0) is
the solution to (2.3) for which ‖Dp‖ is least, or λ > 0 and ‖Dp(λ)‖ = Δ, and then
p(λ) is the unique solution to (2.3).
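The key fact behind choosing λ is that ‖Dp(λ)‖ shrinks monotonically as λ grows, so a value with ‖Dp(λ)‖ = Δ can be located. A small sketch of (2.4), on made-up data (J, f, D below are assumptions, not from the paper):

```python
import numpy as np

# Sketch of (2.4): p(lambda) = -(J^T J + lambda D^T D)^{-1} J^T f.
def p_of_lambda(J, f, D, lam):
    A = J.T @ J + lam * (D.T @ D)
    return np.linalg.solve(A, -J.T @ f)

# Illustrative data (not from the paper).
rng = np.random.default_rng(0)
J = rng.standard_normal((5, 3))
f = rng.standard_normal(5)
D = np.diag([1.0, 2.0, 0.5])

# ||D p(lambda)|| decreases as lambda increases, which is what allows
# picking lambda so that ||D p(lambda)|| = Delta in (2.3).
norms = [np.linalg.norm(D @ p_of_lambda(J, f, D, lam))
         for lam in (0.0, 0.1, 1.0, 10.0)]
```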
The above results suggest the following iteration.
(2.5) Algorithm

(a) Given Δ_k > 0, find λ_k ≥ 0 such that if

        (J_k^T J_k + \lambda_k D_k^T D_k) p_k = -J_k^T f_k ,

    then either λ_k = 0 and ‖D_k p_k‖ ≤ Δ_k, or λ_k > 0 and ‖D_k p_k‖ = Δ_k.

(b) If ‖F(x_k + p_k)‖ < ‖F(x_k)‖, set x_{k+1} = x_k + p_k and evaluate J_{k+1}; otherwise
    set x_{k+1} = x_k and J_{k+1} = J_k.

(c) Choose Δ_{k+1} and D_{k+1}.
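A minimal sketch of iteration (2.5) on a small test problem. The λ update below is a crude stand-in for steps (a) and (c): the paper's actual scheme works with Δ_k and Hebden's method, which is more sophisticated. The test function and starting point are assumptions for the sketch:

```python
import numpy as np

# Minimal sketch of iteration (2.5); the lambda update is a crude stand-in
# for steps (a) and (c), not the paper's Delta_k / Hebden scheme.
def lm_sketch(F, jac, x0, lam=1e-2, iters=50):
    x = np.asarray(x0, dtype=float)
    D = np.eye(len(x))                       # trivial scaling matrix
    for _ in range(iters):
        J, r = jac(x), F(x)
        p = np.linalg.solve(J.T @ J + lam * (D.T @ D), -J.T @ r)
        if np.linalg.norm(F(x + p)) < np.linalg.norm(r):   # step (b)
            x, lam = x + p, lam * 0.5        # accept step, relax lambda
        else:
            lam *= 10.0                      # reject step, tighten lambda
    return x

# Illustrative problem: Rosenbrock residuals F(x) = (10(x1 - x0^2), 1 - x0).
F = lambda x: np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])
jac = lambda x: np.array([[-20.0 * x[0], 10.0], [-1.0, 0.0]])
x = lm_sketch(F, jac, [-1.2, 1.0])           # moves toward the minimizer (1, 1)
resid = float(np.linalg.norm(F(x)))
```

Because a step is accepted only when ‖F‖ decreases, the residual norm is nonincreasing over the iteration, exactly as step (b) requires.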
In the next four sections we elaborate on how (2.5) leads to a very robust and
efficient implementation of the Levenberg-Marquardt algorithm.
3. Solution of a Structured Linear Least Squares Problem
The simplest way to obtain the correction p is to use Cholesky decomposition on
the linear system
(3.1)  (J^T J + \lambda D^T D) p = -J^T f .
Another method is to recognize that (3.1) are the normal equations for the least
squares problem

(3.2)  \min_p \left\| \begin{pmatrix} J \\ \lambda^{1/2} D \end{pmatrix} p + \begin{pmatrix} f \\ 0 \end{pmatrix} \right\| ,
and to solve this structured least squares problem using QR decomposition with
column pivoting.
The main advantage of the normal equations is speed; it is possible to solve
(3.1) twice as fast as (3.2). On the other hand, the normal equations are particu-
larly unreliable when λ = 0 and J is nearly rank deficient. Moreover, the formation
of J^T J or D^T D can lead to unnecessary underflows and overflows, while this is not
the case with (3.2). We feel that the loss in speed is more than made up by the
gain in reliability and robustness.
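On well-conditioned data the two routes agree; the sketch below (with made-up J, f, D, λ, not from the paper) solves (3.1) via Cholesky and (3.2) via an orthogonal-factorization least-squares solver and compares the results:

```python
import numpy as np

# Illustrative data for comparing the two solution routes (assumptions).
rng = np.random.default_rng(1)
J = rng.standard_normal((6, 3))
f = rng.standard_normal(6)
D = np.diag([1.0, 0.5, 2.0])
lam = 0.3

# Route 1: the normal equations (3.1), solved via Cholesky.
A = J.T @ J + lam * (D.T @ D)
L = np.linalg.cholesky(A)                      # A = L L^T
p1 = np.linalg.solve(L.T, np.linalg.solve(L, -J.T @ f))

# Route 2: the structured least squares problem (3.2),
#   min_p || [J; sqrt(lam) D] p + [f; 0] ||,
# solved by an orthogonal factorization (lstsq stands in for QR here).
Jaug = np.vstack([J, np.sqrt(lam) * D])
faug = np.concatenate([f, np.zeros(3)])
p2, *_ = np.linalg.lstsq(Jaug, -faug, rcond=None)

gap = float(np.linalg.norm(p1 - p2))           # the two routes agree
```

The reliability gap only shows up when λ = 0 and J is nearly rank deficient, where forming J^T J squares the condition number.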
The least squares solution of (3.2) proceeds in two stages. These stages are
the same as those suggested by Golub (Osborne [1972]), but modified to take into
account the pivoting.
In the first stage, compute the QR decomposition of J with column pivoting.
This produces an orthogonal matrix Q and a permutation π of the columns of J such
that

(3.3)  Q J \pi = \begin{pmatrix} T & S \\ 0 & 0 \end{pmatrix}

where T is a nonsingular upper triangular matrix of order rank(J). If λ = 0, then
a solution of (3.2) is

    p = -\pi \begin{pmatrix} T^{-1} & 0 \\ 0 & 0 \end{pmatrix} Q f \equiv -J^{-} f

where J^{-} refers to a particular symmetric generalized inverse of J in the sense
that J J^{-} is symmetric and J J^{-} J = J. To solve (3.2) when λ > 0, first note that (3.3)
implies that

    \begin{pmatrix} Q & 0 \\ 0 & \pi^T \end{pmatrix} \begin{pmatrix} J \\ \lambda^{1/2} D \end{pmatrix} \pi = \begin{pmatrix} R \\ D_\lambda \end{pmatrix}

where D_λ = λ^{1/2} π^T D π is still a diagonal matrix and R is a (possibly singular) upper
triangular matrix of order n.