From: Nick Holford <n.holford>

Date: Sun, 30 May 2010 09:01:38 +0200

Douglas,

Thanks for your thoughtful and insightful comments on why anyone might be interested in the answer to the question "Does NONMEM assume a normal distribution for estimation?".

In fact one has no choice but to use whatever assumptions are built into the estimation algorithm. So a more practical question might be "Are there situations when models built with this assumption might be misleading?". It is known that NONMEM parameter estimates obtained with FOCE may be somewhat biased compared with the true values used for simulation. But is this due to the approximation to the likelihood used by FOCE, or is it because of an assumption of normality? It has been my understanding that it is due to the likelihood approximation.

On a somewhat unrelated issue: there is one part of the estimation process that can be misleading if a normal assumption is made, and that is the use of estimated standard errors to compute confidence intervals (CIs). If likelihood profiling (Holford & Peace 1992) or bootstraps (Matthews et al. 2004) are used to obtain CIs, then it is not uncommon to find the CI is asymmetrical, and this cannot be predicted from the asymptotic standard error estimate. Computation of CIs from standard errors typically assumes a normal distribution of the uncertainty, and this leads to a misleading impression of the uncertainty that can only be discovered by methods which do not make this normal assumption. This is not just a problem with NONMEM; it is a problem with any procedure that only provides a standard error as an estimate of uncertainty.
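To illustrate the point with a minimal sketch (simulated, hypothetical data; not from the original analyses cited above): for a skewed sampling distribution, a standard-error-based (Wald) interval is symmetric around the estimate by construction, while a nonparametric bootstrap percentile interval can come out asymmetric.

```python
import math
import random
import statistics

random.seed(1)

# Hypothetical skewed data: the mean of log-normal observations has a
# right-skewed sampling distribution at this sample size.
data = [math.exp(random.gauss(0.0, 1.0)) for _ in range(50)]

est = statistics.mean(data)
se = statistics.stdev(data) / math.sqrt(len(data))

# Wald (normal-theory) 95% CI: symmetric around the estimate by construction.
wald = (est - 1.96 * se, est + 1.96 * se)

# Nonparametric bootstrap percentile 95% CI: no normality assumption.
boot = sorted(
    statistics.mean(random.choices(data, k=len(data)))
    for _ in range(2000)
)
boot_ci = (boot[49], boot[1949])  # 2.5th and 97.5th percentiles

# The bootstrap interval is free to be asymmetric around the estimate,
# which the standard error alone cannot reveal.
print("estimate:", est)
print("Wald CI:", wald)
print("bootstrap CI:", boot_ci)
```

The same idea applies to likelihood profiling: both approaches trace out the actual shape of the uncertainty rather than forcing symmetry.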

Nick

Holford, N. H. G. and K. E. Peace (1992). "Results and validation of a population pharmacodynamic model for cognitive effects in Alzheimer patients treated with tacrine." Proceedings of the National Academy of Sciences of the United States of America 89(23): 11471-11475.

Matthews, I., C. Kirkpatrick, et al. (2004). "Quantitative justification for target concentration intervention - Parameter variability and predictive performance using population pharmacokinetic models for aminoglycosides." British Journal of Clinical Pharmacology 58(1): 8-19.

Eleveld, DJ wrote:

>
> I'd like to interject a slightly different point of view to the distributional assumption question here.
>
> When I hear people speak in terms of the “distribution assumptions of some estimation method”, I think it's easy for people to jump to the conclusion that the normal distribution assumption is just one of many possible, equally justifiable distributional assumptions that could potentially be made, and that if the normal distribution is the “wrong” one then the results from such an estimation method would be “wrong”. This is what I used to think, but now I believe this is wrong, and I'd like to help others avoid wasting as much time thinking along this path as I have.
>
> From information theory, information is gained when entropy decreases. So if you have data from some unknown distribution and you must make some distributional assumption in order to analyze the data, you should choose the highest-entropy distribution you can. This ensures that your initial assumptions, the ones you make before you actually consider your data, are the most uninformative you can make. This is the principle of Maximum Entropy, which is related to the Principle of Indifference and the Principle of Insufficient Reason.
>
> A normal distribution has the highest entropy of all real-valued distributions that share the same mean and standard deviation. So if you assume your data has some true SD, then the best distribution to assume is the normal distribution. We should not think of the normal distribution assumption as one of many equally justifiable choices; it is really the “least-bad” assumption we can make when we do not know the true distribution. Even if normal is the “wrong” distribution, it still remains the “best”, by virtue of being the “least-bad”, because it is the most uninformative assumption that can be made (assuming some finite true variance).
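The maximum-entropy claim above can be checked numerically with closed-form differential entropies (a sketch, not part of the original message): among the normal, Laplace, and uniform distributions matched to the same standard deviation, the normal has the largest entropy.

```python
import math

# Differential entropies (in nats) for three distributions, each
# parameterised so that its standard deviation equals `sd`.

def normal_entropy(sd):
    # N(mu, sd^2) has entropy 0.5 * ln(2*pi*e*sd^2)
    return 0.5 * math.log(2 * math.pi * math.e * sd ** 2)

def laplace_entropy(sd):
    # Laplace(b) has variance 2*b^2 and entropy 1 + ln(2*b)
    b = sd / math.sqrt(2)
    return 1 + math.log(2 * b)

def uniform_entropy(sd):
    # Uniform of width w has variance w^2/12 and entropy ln(w)
    w = sd * math.sqrt(12)
    return math.log(w)

sd = 1.0
print("normal :", normal_entropy(sd))
print("laplace:", laplace_entropy(sd))
print("uniform:", uniform_entropy(sd))
```

At any common SD the ordering is normal > Laplace > uniform, consistent with the normal being the least-informative choice for a given variance.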

>
> In the real world we never know the true distribution, so it makes sense to always assume a normal distribution unless we have some scientifically justifiable reason to believe that some other distributional assumption would be advantageous.
>
> The Cauchy distribution is a different animal, though, since it has no finite variance, and assuming it is therefore an even weaker assumption than the finite true SD of a normal distribution. It could possibly be even better than a normal distribution because its entropy is higher (comparing the standard Cauchy and standard normal). It would be very interesting if Cauchy distributions could be used in NONMEM. Actually, the ratio of two independent N(0,1) random variables is Cauchy distributed. Maybe this property could be used to trick NONMEM into making a Cauchy (or nearly-Cauchy) distributed random variable?
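The ratio-of-normals property is easy to verify by simulation (a sketch, not part of the original message). Because the Cauchy has no defined mean or variance, the check uses quantiles: a standard Cauchy has median 0 and quartiles at ±1, so its interquartile range is 2.

```python
import random

random.seed(0)

# The ratio of two independent N(0,1) variables is standard Cauchy.
n = 200_000
samples = sorted(random.gauss(0, 1) / random.gauss(0, 1) for _ in range(n))

median = samples[n // 2]
q1 = samples[n // 4]
q3 = samples[3 * n // 4]

# Standard Cauchy: median 0, quartiles at -1 and +1 (IQR = 2).
# Mean and variance are undefined, so quantiles are the right summary.
print("median:", median)
print("IQR   :", q3 - q1)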

>
> Douglas Eleveld

> ------------------------------------------------------------------------

> The contents of this message are confidential and only intended for the eyes of the addressee(s). Others than the addressee(s) are not allowed to use this message, to make it public or to distribute or multiply this message in any way. The UMCG cannot be held responsible for incomplete reception or delay of this transferred message.

--

Nick Holford, Professor Clinical Pharmacology

Dept Pharmacology & Clinical Pharmacology

University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand

tel:+64(9)923-6730 fax:+64(9)373-7090 mobile:+64(21)46 23 53

email: n.holford

http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford

Received on Sun May 30 2010 - 03:01:38 EDT
