
Variations of Sparse Optimization and Numerical Implementation

Allen Y. Yang
Department of EECS, UC Berkeley
yang@eecs.berkeley.edu
with Yi Ma & John Wright

ECCV 2012 Tutorial: Sparse Representation and Low-Rank Representation in Computer Vision, Lecture Two
http://www.eecs.berkeley.edu/~yang

Outline: Introduction, IST, APG, ALM, ADMM, Extensions, Conclusion

Two Previous Sparse Optimization Problems

ℓ1-min seeks a sparse solution in an underdetermined system (A in general is full rank):

    min ‖x‖₁  subj. to  b = Ax,  where A ∈ R^{d×n}, (d < n)

[Figure: the system b = Ax, with a wide matrix A mapping a sparse x to b]
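As a concrete illustration (not part of the original slides), the sketch below solves the ℓ1-min program with CVXPY; the dimensions, sparsity level, and random data are assumptions chosen for the example.

    import numpy as np
    import cvxpy as cp

    # Assumed toy instance: d < n makes the system underdetermined.
    rng = np.random.default_rng(0)
    d, n = 20, 100
    A = rng.standard_normal((d, n))
    x_true = np.zeros(n)
    x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)  # 5-sparse ground truth
    b = A @ x_true

    # min ||x||_1  subj. to  b = Ax
    x = cp.Variable(n)
    prob = cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == b])
    prob.solve()
    print("recovery error:", np.linalg.norm(x.value - x_true))

With d = 20 equations and a 5-sparse x, the ℓ1 relaxation typically recovers x exactly, which is the phenomenon these lectures build on.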
Robust PCA seeks a sparse and low-rank decomposition:

    min ‖A‖_* + λ‖E‖₁  subj. to  D = A + E ∈ R^{m×n}

[Figure: D = A + E, a low-rank matrix A plus a sparse error matrix E]
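The same convex program can be written down directly as well (again my illustration, not the tutorial's code); the matrix sizes, the rank, and the weight λ = 1/√max(m, n) are assumptions, the latter being a common choice in the RPCA literature.

    import numpy as np
    import cvxpy as cp

    # Assumed toy instance: rank-2 low-rank part plus sparse corruptions.
    rng = np.random.default_rng(0)
    m, n = 40, 40
    L = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))
    S = np.zeros((m, n))
    S.flat[rng.choice(m * n, 50, replace=False)] = 5.0
    D = L + S

    lam = 1.0 / np.sqrt(max(m, n))
    A = cp.Variable((m, n))
    E = cp.Variable((m, n))
    prob = cp.Problem(cp.Minimize(cp.normNuc(A) + lam * cp.sum(cp.abs(E))),
                      [A + E == D])
    prob.solve(solver=cp.SCS)  # a generic conic solver; scales only to small m, n

The solver choice already hints at the next slide's point: a generic solver handles 40 × 40 comfortably but stalls well before realistic image sizes.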

Efficient sparse optimization is challenging

Generic second-order method toolboxes do exist: CVX, SparseLab.
However, standard interior-point methods are very expensive in high-dimensional spaces.

The linear-programming form:

    min 1ᵀx  subj. to  Ax = b,  x ≥ 0

The KKT condition:

    ∇f(x*) + µ∇g(x*) + λ∇h(x*) = 0

Complexity: O(n³)

Robust PCA: CVX can only solve matrices smaller than about 80 × 80 on a typical PC.
Complexity bound: O(n⁶).
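To make the LP connection concrete (my sketch, not the slides'), ℓ1-min becomes exactly this linear program after the standard split x = u - v with u, v ≥ 0; the toy data below is assumed for illustration.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    d, n = 20, 100
    A = rng.standard_normal((d, n))
    x_true = np.zeros(n)
    x_true[:5] = 1.0
    b = A @ x_true

    # min 1^T [u; v]  subj. to  [A, -A][u; v] = b,  [u; v] >= 0,
    # where x = u - v, so the objective equals ||x||_1 at the optimum.
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    x_hat = res.x[:n] - res.x[n:]
    print("||x_hat - x_true|| =", np.linalg.norm(x_hat - x_true))

Each interior-point iteration solves a Newton system derived from the KKT condition above, which is where the O(n³) cost comes from.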

Sparse Optimization Literature: ℓ1-Minimization

1. Interior-point methods
   Log-Barrier [Frisch '55, Karmarkar '84, Megiddo '89, Monteiro-Adler '89, Kojima-Megiddo-Mizuno '93]
2. Homotopy
   Homotopy [Osborne-Presnell-Turlach '00, Malioutov-Cetin-Willsky '05, Donoho-Tsaig '06]
   Polytope Faces Pursuit (PFP) [Plumbley '06]
   Least Angle Regression (LARS) [Efron-Hastie-Johnstone-Tibshirani '04]
3. Gradient Projection
   Gradient Projection Sparse Representation (GPSR) [Figueiredo-Nowak-Wright '07]
   Truncated Newton Interior-Point Method (TNIPM) [Kim-Koh-Lustig-Boyd-Gorinevsky '07]
4. Iterative Soft-Thresholding (see the sketch after this list)
   Soft Thresholding [Donoho '95]
   Sparse Reconstruction by Separable Approximation (SpaRSA) [Wright-Nowak-Figueiredo '08]
5. Proximal Gradient [Nesterov '83, Nesterov '07]
   FISTA [Beck-Teboulle '09]
   Nesterov's Method (NESTA) [Becker-Bobin-Candès '09]
6. Augmented Lagrangian Methods [Bertsekas '82]
   Bregman [Yin et al. '08]
   SALSA [Figueiredo et al. '09]
   Primal ALM, Dual ALM [AY et al. '10]
7. Alternating Direction Method of Multipliers
   YALL1 [Yang-Zhang '09]
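Because categories 4 and 5 drive much of this lecture, here is a minimal sketch of plain ISTA for the unconstrained variant min (1/2)‖Ax - b‖² + λ‖x‖₁; the step size 1/‖A‖₂² and the value of λ are assumptions, and this is my illustration rather than code from the tutorial.

    import numpy as np

    def soft(v, tau):
        # Entrywise soft-thresholding: the proximal operator of tau * ||.||_1.
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def ista(A, b, lam=0.1, iters=500):
        # Step size 1/L, where L = ||A||_2^2 is the Lipschitz constant
        # of the gradient of the smooth term 0.5 * ||Ax - b||^2.
        t = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = soft(x - t * A.T @ (A @ x - b), t * lam)  # gradient step, then prox
        return x

FISTA adds Nesterov-style momentum on top of exactly this iteration, improving the convergence rate from O(1/k) to O(1/k²).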