Abstract:
Statistical inference in the presence of order restrictions is an important area of statistical
analysis. Isotonic regression theory plays a key role in this field.
Let K = {1,…,k} be a finite set on which a partial order « is defined. A real vector (θ1,…,θk) is said to be isotonic if μ, ν ∈ K, μ « ν imply θμ ≤ θν. Given real numbers x1,…,xk and positive numbers w1,…,wk, a vector (θ̂1,…,θ̂k) is said to be the univariate isotonic regression of x1,…,xk with weights w1,…,wk if it is isotonic and minimizes

Σ_{ν=1}^{k} wν (xν − θν)²

under the restriction that (θ1,…,θk) is isotonic. Isotonic regression is closely related to the
maximum likelihood estimation of the ordered parameters of the univariate normal distribution and of some other univariate distributions. Various algorithms are given in the literature for computing the univariate isotonic regression.
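For the simple order 1 « 2 « … « k, the best known of these is the pool-adjacent-violators algorithm (PAVA). The following Python sketch (the function and variable names are our own, purely for illustration) computes the weighted univariate isotonic regression for this special case of the partial order.

```python
import numpy as np

def pava(x, w):
    """Weighted pool-adjacent-violators algorithm for the simple order
    theta_1 <= theta_2 <= ... <= theta_k.  Returns the isotonic vector
    minimising sum_nu w_nu * (x_nu - theta_nu)**2 for positive weights w."""
    # Each block stores [weighted mean, total weight, number of points pooled].
    blocks = []
    for xv, wv in zip(x, w):
        blocks.append([xv, wv, 1])
        # Pool adjacent blocks while their means violate the order restriction.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, n1 + n2])
    # Expand the block means back to a full-length solution vector.
    theta = []
    for m, _, n in blocks:
        theta.extend([m] * n)
    return np.asarray(theta)

# Example: the violating pair (3, 2) is pooled into its weighted average.
print(pava([1.0, 3.0, 2.0, 4.0], [1.0, 1.0, 1.0, 1.0]))   # [1.  2.5 2.5 4. ]
```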
A multivariate generalization of isotonic regression and multivariate extensions of the related theorems are given and proved by Sasabuchi, Inutsuka and Kulatunga (1983, 1992).
A p × k real matrix θ = (θ1,…,θk) is said to be isotonic with respect to the partial order « if μ, ν ∈ K, μ « ν imply θμ ≤ θν, where θμ ≤ θν means that all the elements of θν − θμ are nonnegative.
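As a small illustration of this componentwise definition, the following check (our own helper, not part of any published code) verifies isotonicity of a p × k matrix when the partial order is supplied as a list of pairs μ « ν.

```python
import numpy as np

def is_isotonic(theta, order_pairs, tol=0.0):
    """Return True if the p x k matrix theta = (theta_1, ..., theta_k) is
    isotonic: for every pair (mu, nu) with mu << nu (0-based column indices),
    all elements of theta_nu - theta_mu must be nonnegative."""
    theta = np.asarray(theta)
    return all(np.all(theta[:, nu] - theta[:, mu] >= -tol)
               for mu, nu in order_pairs)

# Example with p = 2, k = 3 and the simple order 1 << 2 << 3.
theta = np.array([[0.0, 1.0, 2.0],
                  [0.0, 0.5, 0.5]])
print(is_isotonic(theta, [(0, 1), (1, 2)]))   # True: both rows are nondecreasing
```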
Given p-dimensional real vectors x1,…,xk and p × p positive-definite matrices Λ1,…,Λk, a p × k matrix (θ̂1,…,θ̂k) is said to be the multivariate, in fact p-variate, isotonic regression of x1,…,xk with weights Λ1⁻¹,…,Λk⁻¹ if it is isotonic and satisfies

Σ_{ν=1}^{k} (xν − θ̂ν)′ Λν⁻¹ (xν − θ̂ν) = min*_θ Σ_{ν=1}^{k} (xν − θν)′ Λν⁻¹ (xν − θν),

where min*_θ(·) denotes the minimum over all θ isotonic with respect to the partial
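For concreteness, the criterion being minimized can be evaluated as follows (a minimal sketch; the function name and the use of numpy are our own choices for illustration).

```python
import numpy as np

def objective(x, lam, theta):
    """Weighted least-squares criterion
    sum_nu (x_nu - theta_nu)' Lambda_nu^{-1} (x_nu - theta_nu),
    where x and theta are p x k matrices whose columns are x_nu and theta_nu,
    and lam[nu] is the p x p positive-definite matrix Lambda_nu."""
    total = 0.0
    for nu in range(x.shape[1]):
        r = x[:, nu] - theta[:, nu]
        # r' Lambda_nu^{-1} r, computed without forming the inverse explicitly.
        total += float(r @ np.linalg.solve(lam[nu], r))
    return total
```

The p-variate isotonic regression is then the isotonic matrix θ̂ attaining the smallest value of this criterion.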
An algorithm for the computation of the multivariate isotonic regression is given in Sasabuchi et al. (1983, 1992). This algorithm involves iterative computation of univariate isotonic regressions. The convergence of this algorithm is also studied there, and it has been observed that convergence follows only under certain conditions (Corollary 4.1 of Sasabuchi et al. (1992)).
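To fix ideas, the sketch below shows one coordinate-wise iteration of this general kind for the simple order, reusing the pava routine from above: each row of θ is updated by a weighted univariate isotonic regression of data adjusted by completing the square in that coordinate. It is offered only as an illustration of how the multivariate problem reduces to repeated univariate problems, not as a transcription of the algorithm of Sasabuchi et al.

```python
import numpy as np

def p_variate_isotonic_regression(x, lam, n_sweeps=200, tol=1e-10):
    """Illustrative cyclic scheme for the p-variate isotonic regression of the
    columns of the p x k matrix x with weights lam[nu]^{-1}, under the simple
    order 1 << 2 << ... << k.  Each sweep updates one coordinate (row) at a
    time by a weighted univariate isotonic regression (the pava sketch above)."""
    p, k = x.shape
    a = np.array([np.linalg.inv(lam[nu]) for nu in range(k)])   # Lambda_nu^{-1}
    theta = np.array(x, dtype=float)
    for _ in range(n_sweeps):
        theta_old = theta.copy()
        for j in range(p):
            w = a[:, j, j]                      # weights [Lambda_nu^{-1}]_{jj}
            z = np.empty(k)                     # adjusted data for coordinate j
            for nu in range(k):
                cross = sum(a[nu, j, i] * (x[i, nu] - theta[i, nu])
                            for i in range(p) if i != j)
                z[nu] = x[j, nu] + cross / w[nu]
            theta[j, :] = pava(z, w)            # univariate isotonic regression
        if np.max(np.abs(theta - theta_old)) < tol:   # stop when a sweep changes nothing
            break
    return theta
```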
However, a simulation study conducted in Sasabuchi, Miura and Oda (2003) for some special cases has shown that the condition given in Corollary 4.1 of Sasabuchi et al. (1992) is not necessary for the convergence of this algorithm. We have written a Fortran subroutine for the computation of the multivariate isotonic regression and have also observed that the algorithm converges in general. This motivates us to consider a proof of the convergence of this algorithm.
In this study we give a proof of the convergence of this algorithm in the bivariate case.