I am trying to draw random deviates from a multinormal distribution.
There are two subroutines I found online: "setgmn" and "genmn", which are supposed to do exactly that. They are from the "ranlib" library.
http://www.netlib.org/random/
However, I have no clue how those two subroutines are supposed to work together.
setgmn packs the data in such a way that genmn can use it. However, I struggle with the "parm" argument: I am not exactly sure what I have to pass into it, nor how the two subroutines need to be combined to actually work.
I could not find an example online, so I thought maybe someone here has already used them or has a link to an example.
From the source:
First you have to call setgmn. This procedure will set the argument parm that you then have to pass to genmn.
subroutine setgmn ( meanv, covm, p, parm )
integer ( kind = 4 ) p
real ( kind = 4 ) covm(p,p)
real ( kind = 4 ) meanv(p)
real ( kind = 4 ) parm(p*(p+3)/2+1)
Parameters:
MEANV: the means of the multivariate normal distribution.
COVM: on input, the covariance matrix of the multivariate distribution; on output, the information in COVM has been overwritten.
P: the number of dimensions.
PARM: parameters needed to generate multivariate normal deviates.
  PARM(1) contains the size of the deviates, P.
  PARM(2:P+1) contains the mean vector.
  PARM(P+2:P*(P+3)/2+1) contains the upper half of the Cholesky decomposition of the covariance matrix.
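For example, with P = 2 the packed vector has length 2*(2+3)/2+1 = 6: PARM(1) = 2, PARM(2:3) holds the mean vector, and PARM(4:6) holds the three entries of the upper triangle of the Cholesky factor.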
subroutine genmn ( parm, x, work )
real ( kind = 4 ) parm(*)
real ( kind = 4 ) work(*)
real ( kind = 4 ) x(*)
Parameters:
PARM(P*(P+3)/2+1): parameters set by SETGMN.
X(P): a random deviate from the distribution (the output).
WORK(P): workspace.
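Putting the two together, a minimal sketch of the intended calling sequence looks like the following. This assumes the setgmn/genmn interfaces shown above and linking against ranlib; depending on the ranlib version, its underlying uniform generator may also need to be initialized or seeded first via the library's own routine (check its documentation).
program test_setgmn_genmn
  implicit none
  integer, parameter :: p = 2
  real :: meanv(p), covm(p,p), parm(p*(p+3)/2+1), x(p), work(p)
  integer :: i
  meanv = [ 1.0, 1.0 ]
  covm = reshape( [ 1.0, 0.5, 0.5, 1.0 ], [ p, p ] )
  ! setgmn packs p, meanv and the Cholesky factor of covm into parm; covm is overwritten.
  call setgmn ( meanv, covm, p, parm )
  ! Each call to genmn draws one deviate vector from the distribution into x.
  do i = 1, 10
    call genmn ( parm, x, work )
    write ( *, '(2g14.6)' ) x
  end do
end program test_setgmn_genmn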
Here is an alternative, modern implementation of what you need, which works with the Cholesky factorization of the MVN covariance matrix instead of the covariance matrix itself, followed by an example of how to use it:
module RandMVN_mod
implicit none
contains
!***********************************************************************************************************************************
!***********************************************************************************************************************************
! Returns a multivariate normal random vector.
function getRandMVN(nd,MeanVec,CholeskyLower,Diagonal) result(RandMVN)
use, intrinsic :: iso_fortran_env, only: IK => int32, RK => real64
implicit none
integer(IK), intent(in) :: nd ! dimension of the MVN
real(RK) , intent(in) :: MeanVec(nd) ! The Mean vector of the MVN from which points are drawn
real(RK) , intent(in) :: CholeskyLower(nd,nd) ! Lower Triangle of the Cholesky Factorization of the covariance matrix of the MVN
real(RK) , intent(in) :: Diagonal(nd) ! Diagonal terms of the Cholesky Factorization of the covariance matrix of the MVN
real(RK) :: RandMVN(nd), dummy
integer(IK) :: j,i
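! Draw z ~ N(0,I) one component at a time and accumulate MeanVec + L*z, where L is the lower Cholesky factor (its diagonal is passed separately in Diagonal).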
RandMVN = 0._RK
do j = 1,nd
dummy = getRandGaus()
RandMVN(j) = RandMVN(j) + Diagonal(j) * dummy
do i = j+1,nd
RandMVN(i) = RandMVN(i) + CholeskyLower(i,j) * dummy
end do
end do
RandMVN = RandMVN + MeanVec
end function getRandMVN
!***********************************************************************************************************************************
!***********************************************************************************************************************************
! returns a normally distributed random number with zero mean and unit variance.
function getRandGaus()
use, intrinsic :: iso_fortran_env, only: IK => int32, RK => real64
implicit none
integer(IK), save :: iset=0
real(RK) , save :: gset
real(RK) :: fac,rsq,vec(2)
real(RK) :: getRandGaus
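! Marsaglia polar method: each pass through the rejection loop yields two independent standard normal deviates; one is returned and the other is cached in gset for the next call.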
if (iset == 0) then
do
call random_number(vec)
vec = 2._RK*vec - 1._RK
rsq = vec(1)**2 + vec(2)**2
if ( rsq > 0._RK .and. rsq < 1._RK ) exit
end do
fac = sqrt(-2._RK*log(rsq)/rsq)
gset = vec(1)*fac
getRandGaus = vec(2)*fac
iset = 1
else
getRandGaus = gset
iset = 0
endif
end function getRandGaus
!***********************************************************************************************************************************
!***********************************************************************************************************************************
! Computes the Cholesky factorization of the input positive definite matrix PosDefMat.
subroutine getCholeskyFactor(nd,PosDefMat,Diagonal)
use, intrinsic :: iso_fortran_env, only: IK => int32, RK => real64
implicit none
integer(IK), intent(in) :: nd
real(RK) , intent(inout) :: PosDefMat(nd,nd) ! Upper triangle + diagonal is input matrix, lower is output.
real(RK) , intent(out) :: Diagonal(nd)
real(RK) :: summ
integer(IK) :: i
do i=1,nd
summ = PosDefMat(i,i) - dot_product(PosDefMat(i,1:i-1),PosDefMat(i,1:i-1))
if (summ <= 0._RK) then
error stop "getCholeskyFactor() failed: input matrix is not positive definite."
end if
Diagonal(i) = sqrt(summ)
PosDefMat(i+1:nd,i) = ( PosDefMat(i,i+1:nd) - matmul(PosDefMat(i+1:nd,1:i-1),PosDefMat(i,1:i-1)) ) / Diagonal(i)
end do
end subroutine getCholeskyFactor
!***********************************************************************************************************************************
!***********************************************************************************************************************************
end module RandMVN_mod
program test_RandMVN
use, intrinsic :: iso_fortran_env, only: IK => int32, RK => real64
use RandMVN_mod
implicit none
integer(IK) :: isample
integer(IK) , parameter :: nd = 2 ! dimension of the Multivariate distribution (MVN)
integer(IK) , parameter :: nsample = 100 ! count of random numbers
real(RK) , parameter :: CovMat(nd,nd) = reshape([ 1._RK , 0.5_RK , 0.5_RK , 1._RK ], shape=[nd,nd]) ! covariance matrix of MVN
real(RK) , parameter :: MeanVec(nd) = [ 1._RK , 1._RK ] ! mean vector of the MVN
real(RK) :: CholeskyLower(nd,nd), Diagonal(nd)
! get the Cholesky factorization of the Covariance Matrix
CholeskyLower = CovMat
call getCholeskyFactor(nd,CholeskyLower,Diagonal)
! get random MVN vectors from the MVN distribution
do isample = 1,nsample
write(*,"(*(g0,' '))") getRandMVN(nd,MeanVec,CholeskyLower,Diagonal), " ;"
end do
end program test_RandMVN
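Assuming the module and the test program are saved together in one source file (say test_RandMVN.f90, a name chosen here just for illustration), they can be built and run with any modern Fortran compiler, e.g. gfortran test_RandMVN.f90 -o test_RandMVN && ./test_RandMVN.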
If you need many random vectors from the same MVN distribution with the same covariance matrix, then passing the Cholesky factorization to the procedure (as done above) makes it far more efficient, since the factorization does not have to be recomputed from the input covariance matrix on every call.