MA 4235: Nonlinear Optimization
Program/Department
This course explores theoretical conditions for the existence of solutions to optimization problems involving nonlinear functions, along with effective computational procedures for finding those solutions. Topics covered include: fundamentals of optimization, convex optimization, classical optimization techniques, line search methods (including Newton's method and the method of steepest descent), Lagrange multipliers and Karush-Kuhn-Tucker theory, algorithms for constrained and unconstrained problems, and applications. Further topics may include: duality in nonlinear programming, penalty functions, conjugate gradient methods, stochastic gradient descent, or trust region methods.
Category
Category II (offered at least every other year)