Numerical Implementation

Considering the dual formulation, the solution for the Lagrange multipliers can be written as [16,17]

$\displaystyle {\lambda}^* = \mathop{\rm argmin}\limits F({\lambda}), \quad F({\lambda}) := \ln Z({\lambda}),$ (5)

where $ {\lambda}^*$ is the desired optimal solution. Since $ F$ is strictly convex in the interior of $ D$, convex optimization algorithms (for example, Newton's method and families of gradient descent) are a natural choice. The steps in these algorithms are as follows (a code sketch of the iteration is given after the list):

  1. Start with iteration counter $ k = 0$, take the initial guess $ {\lambda}^0 = {0}$, and let $ \epsilon$ be the desired convergence tolerance; $ \epsilon = 10^{-14}$-$ 10^{-10}$ is suitable (see the sample input data files in the tests sub-directory);
  2. Compute $ {g}^k := {\nabla}_{{\lambda}} F({\lambda}^k)$ (gradient of $ F$) and $ {H}^k := {\nabla}_{{\lambda}} {\nabla}_{{\lambda}} F({\lambda}^k)$ (Hessian of $ F$);
  3. Determine a suitable search direction $ \Delta {\lambda}^k$: for steepest descent, $ \Delta {\lambda}^k = -{g}^k$, and for Newton's method, $ \Delta {\lambda}^k = - \left( {H}^k \right)^{-1} {g}^k$ (matrix-vector notation);
  4. Update: $ {\lambda}^{k+1} = {\lambda}^{k} + \alpha \Delta {\lambda}^k$, where $ \alpha$ is the step size. For steepest descent, $ \alpha$ is determined by the variable step size (line search) algorithm presented in Reference [18]; for Newton's method (damped or guarded), the line search is used to set the step size when the error is greater than $ 10^{-4}$, and otherwise $ \alpha = 1$ is used;
  5. Check convergence: if $ {\lvert{g}^{k+1}\rvert} > \epsilon$, increment the iteration counter, $ k \leftarrow k+1$, and go to step 2; otherwise continue;
  6. Set $ {\lambda}^* = {\lambda}^{k+1}$ and compute the max-ent basis functions using Eq. (4).
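A minimal code sketch of the iteration in steps 1-6 is given below. It assumes NumPy, treats $ F$, its gradient, and its Hessian as user-supplied callables, and uses a simple backtracking (Armijo) rule in place of the line search of Reference [18]; the function and parameter names are illustrative and are not taken from the accompanying code.

```python
import numpy as np

def solve_dual(F, grad_F, hess_F, dim, eps=1e-10, max_iter=100, newton=True):
    """Damped-Newton / steepest-descent iteration for lambda* = argmin F(lambda).

    F, grad_F, hess_F : callables returning F(lam), its gradient, and its Hessian
    dim               : number of Lagrange multipliers
    eps               : convergence tolerance on the gradient norm (step 5)
    """
    lam = np.zeros(dim)                        # step 1: initial guess lambda^0 = 0
    for _ in range(max_iter):
        g = grad_F(lam)                        # step 2: gradient of F
        if np.linalg.norm(g) <= eps:           # step 5: convergence check
            return lam                         # step 6: lambda* = lambda^(k+1)
        if newton:
            H = hess_F(lam)                    # step 2: Hessian of F
            dlam = -np.linalg.solve(H, g)      # step 3: Newton search direction
        else:
            dlam = -g                          # step 3: steepest-descent direction
        # step 4: step size -- full Newton step once the error is small,
        # otherwise a backtracking line search (illustrative choice, not the
        # algorithm of Reference [18])
        if newton and np.linalg.norm(g) <= 1e-4:
            alpha = 1.0
        else:
            alpha, f0 = 1.0, F(lam)
            while F(lam + alpha * dlam) > f0 + 1e-4 * alpha * g.dot(dlam):
                alpha *= 0.5
        lam = lam + alpha * dlam               # step 4: update the multipliers
    raise RuntimeError("dual iteration did not converge")

# example usage on a simple strictly convex quadratic (illustrative only):
A = np.array([[2.0, 0.3], [0.3, 1.0]])
b = np.array([1.0, -0.5])
lam_star = solve_dual(lambda l: 0.5 * l @ A @ l - b @ l,
                      lambda l: A @ l - b,
                      lambda l: A, dim=2, eps=1e-12)
```

Because $ F$ is strictly convex, the damped Newton iteration converges quadratically once the iterate is close to $ {\lambda}^*$, which is why the full step $ \alpha = 1$ is taken when the error drops below $ 10^{-4}$.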
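For concreteness, the gradient and Hessian needed in step 2, together with the basis functions of Eq. (4), can be assembled as sketched below, assuming the Shannon-entropy partition function $ Z({\lambda}) = \sum_i w_i \exp\left(-{\lambda} \cdot ({x}_i - {x})\right)$ that is commonly used in max-ent basis function constructions; this form of $ Z$, and the array names, are assumptions that should be checked against Eq. (4) and References [16,17].

```python
import numpy as np

def maxent_quantities(lam, x, nodes, w):
    """Basis functions phi_i and the gradient/Hessian of F = ln Z at the point x,
    assuming Z(lam) = sum_i w_i * exp(-lam . (x_i - x))  (assumed form of Z)."""
    diff = nodes - x                           # shifted nodal coordinates x_i - x
    Zi = w * np.exp(-diff @ lam)               # partition-function contributions Z_i
    phi = Zi / Zi.sum()                        # basis functions phi_i = Z_i / Z
    g = -phi @ diff                            # gradient of ln Z
    H = (phi[:, None] * diff).T @ diff - np.outer(g, g)   # Hessian of ln Z
    return phi, g, H

# wiring into solve_dual (nodes, w, and the evaluation point x assumed given):
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w = np.ones(len(nodes))
x = np.array([0.4, 0.3])
F       = lambda l: np.log((w * np.exp(-(nodes - x) @ l)).sum())
grad_F  = lambda l: maxent_quantities(l, x, nodes, w)[1]
hess_F  = lambda l: maxent_quantities(l, x, nodes, w)[2]
lam_star = solve_dual(F, grad_F, hess_F, dim=nodes.shape[1])
phi = maxent_quantities(lam_star, x, nodes, w)[0]   # Eq. (4)-style basis functions
```

The Hessian above is the covariance-like matrix $ \sum_i \phi_i ({x}_i - {x})({x}_i - {x})^T - {g}{g}^T$, which is symmetric positive definite in the interior of $ D$, so the Newton solve in step 3 is well posed.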

N. Sukumar
Copyright © 2008