where $m = \{m_1, m_2, \ldots, m_n\}$ is a prior-estimate (prior distribution) of $p$. The quantity $D(p\|m)$ is also referred to as the Kullback-Leibler (KL) distance. As a means for least-biased statistical inference in the presence of testable constraints, Jaynes used the Shannon entropy to propose the principle of maximum entropy [5], and if the KL-distance is adopted as the objective functional, the variational principle is known as the principle of minimum relative entropy [4].
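As a concrete numerical illustration (not part of the original text), the relative entropy between a discrete distribution $p$ and a prior $q$ can be evaluated directly; the three-component distributions below are chosen purely for demonstration:

```python
import numpy as np

def relative_entropy(p, q):
    """Relative entropy (KL distance) D(p||q) = sum_i p_i ln(p_i / q_i)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0  # terms with p_i = 0 contribute zero by convention
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = [0.5, 0.3, 0.2]        # illustrative distribution
q = [1/3, 1/3, 1/3]        # uniform prior
print(relative_entropy(p, q))  # positive; D(p||q) = 0 only when p == q
```

Note that $D(p\|q) \ge 0$ with equality if and only if $p = q$, which is why it serves as a measure of deviation from the prior.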

Consider a set of $n$ distinct nodes in $\mathbb{R}^d$ that are located at $\mathbf{x}_i$ ($i = 1, 2, \ldots, n$), with $\Omega = \mathrm{conv}(\{\mathbf{x}_i\}_{i=1}^{n}) \subset \mathbb{R}^d$ denoting the convex hull of the nodal set. For a real-valued function $u(\mathbf{x}) : \Omega \to \mathbb{R}$, the numerical approximation for $u(\mathbf{x})$ is:

$$u^h(\mathbf{x}) = \sum_{i=1}^{n} \phi_i(\mathbf{x})\, u_i, \qquad (2)$$

where $\mathbf{x} \in \Omega$, $\phi_i(\mathbf{x})$ is the basis function associated with node $i$, and $u_i$ are coefficients. The use of basis functions that are constructed independent of an

We use the relative entropy functional given in Eq. (1) to construct max-ent basis functions. The variational formulation for maximum-entropy approximants is: find $\mathbf{x} \mapsto p(\mathbf{x}) : \Omega \to \mathbb{R}_{+}^{n}$ as the solution of the following constrained (convex or concave with $\min$ or $\max$, respectively) optimization problem:

$$\min_{p \in \mathbb{R}_{+}^{n}} \; \sum_{i=1}^{n} p_i(\mathbf{x}) \ln\!\left(\frac{p_i(\mathbf{x})}{m_i(\mathbf{x})}\right), \qquad (3a)$$

subject to the linear constraints

$$\sum_{i=1}^{n} p_i(\mathbf{x}) = 1, \qquad (3b)$$

$$\sum_{i=1}^{n} p_i(\mathbf{x})\, \tilde{\mathbf{x}}_i = \mathbf{0}, \qquad (3c)$$

where $\mathbb{R}_{+}^{n}$ is the non-negative orthant and $m_i(\mathbf{x})$ is a non-negative weight function (prior) associated with node $i$.
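To make the constrained optimization concrete, the primal problem can be handed to a general-purpose solver. The sketch below (not from the original text) uses SciPy's SLSQP method; the five-node layout, the Gaussian prior width $\beta = 8$, and the evaluation point $x = 0.3$ are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative 1D setting: 5 nodes on [0, 1], evaluation point x = 0.3.
nodes = np.linspace(0.0, 1.0, 5)
x = 0.3
shifted = nodes - x                    # shifted coordinates x~_i = x_i - x
m = np.exp(-8.0 * shifted**2)          # Gaussian prior weights (beta = 8 assumed)

def objective(p):
    # Relative entropy sum_i p_i ln(p_i / m_i); clipping avoids log(0).
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p / m))

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},  # partition of unity (3b)
    {"type": "eq", "fun": lambda p: p @ shifted},      # first-moment condition (3c)
]
res = minimize(objective, np.full(5, 0.2), method="SLSQP",
               bounds=[(0.0, 1.0)] * 5, constraints=constraints)
phi = res.x
print(phi.sum(), phi @ nodes)  # approximately 1.0 and 0.3
```

The two equality constraints enforce exactly the reproducing conditions above, so the recovered basis function values sum to unity and reproduce the coordinate $x$.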
On using the method of Lagrange multipliers, the solution of the variational problem is:

$$\phi_i(\mathbf{x}) = \frac{Z_i(\mathbf{x}; \boldsymbol{\lambda}^{*})}{Z(\mathbf{x}; \boldsymbol{\lambda}^{*})}, \qquad Z_i(\mathbf{x}; \boldsymbol{\lambda}) = m_i(\mathbf{x}) \exp(-\boldsymbol{\lambda} \cdot \tilde{\mathbf{x}}_i), \qquad Z(\mathbf{x}; \boldsymbol{\lambda}) = \sum_{j=1}^{n} Z_j(\mathbf{x}; \boldsymbol{\lambda}), \qquad (4)$$

where $\tilde{\mathbf{x}}_i = \mathbf{x}_i - \mathbf{x}$ ($i = 1, 2, \ldots, n$) are shifted nodal coordinates, $\lambda_j$ ($j = 1, 2, \ldots, d$) are the Lagrange multipliers (implicitly dependent on the point $\mathbf{x}$) associated with the constraints in Eq. (3c), and $Z$ is known as the partition function in statistical mechanics. The $C^{\infty}$ smoothness of maximum-entropy basis functions for the Gaussian prior was established in Reference [12]; the continuity for any $C^k$ prior was proved in Reference [15].
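In practice the Lagrange multipliers are found by minimizing the smooth, convex dual objective $\ln Z(\mathbf{x}; \boldsymbol{\lambda})$. A minimal 1D sketch (illustrative, not from the original text: the node set, the Gaussian prior $m_i = \exp(-\beta\, \tilde{x}_i^2)$ with an assumed $\beta$, and the evaluation point are chosen for demonstration) computes the basis functions with Newton's method:

```python
import numpy as np

def maxent_basis(x, nodes, beta=8.0, tol=1e-12, maxit=50):
    """Max-ent basis functions at point x (1D, Gaussian prior), via
    Newton's method on the dual problem: minimize ln Z(lam)."""
    xt = nodes - x                       # shifted coordinates x~_i = x_i - x
    m = np.exp(-beta * xt**2)            # Gaussian prior (beta assumed)
    lam = 0.0
    p = m / m.sum()
    for _ in range(maxit):
        w = m * np.exp(-lam * xt)
        p = w / w.sum()                  # p_i = Z_i / Z
        g = -p @ xt                      # gradient of ln Z w.r.t. lam
        if abs(g) < tol:                 # constraint sum_i p_i x~_i = 0 met
            break
        H = p @ xt**2 - (p @ xt)**2      # Hessian = variance of x~ under p (> 0)
        lam -= g / H                     # Newton update
    return p

nodes = np.linspace(0.0, 1.0, 5)
phi = maxent_basis(0.3, nodes)
print(phi.sum(), phi @ nodes)  # partition of unity (1.0) and linearity (0.3)
```

Because the Hessian of $\ln Z$ is a variance (positive definite in the interior of the convex hull), Newton's method converges rapidly, which is one reason this dual formulation is favored over solving the constrained primal problem directly.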

Copyright © 2008