From: Nick Holford <*n.holford*>

Date: Sat, 22 Nov 2008 12:07:20 +1300

Leonid,

I don't know what you hope to achieve with your survey. I cannot identify
a clear objective that can be reached by analysis of the results, e.g.
your first question is a multi-part beast which cannot be answered with
just one YES/NO response.

Mark,

Like Leonid, you talk about error messages from NONMEM. If you get an
error message from NONMEM you do not get any results; NONMEM stops
running. Most of the time you will get a message labelled "ERROR" or
sometimes one labelled "PROGRAM TERMINATED". When NONMEM detects an error
there is no sensible way to relate this to a model result. (I am
ignoring dredging into the INTER file, which will be deleted anyway
unless you meddle with the NONMEM source.)

My proposal about random messages relates not to error messages but to

status messages which are issued when NONMEM finishes the estimation

step or the covariance step. At this stage there will always be

parameter estimates ("the results"). PLEASE NOTICE THE DIFFERENCE

BETWEEN AN ERROR MESSAGE AND A STATUS MESSAGE. Status messages indicate

either success or failure. There may be other distractions such as

boundary issues but I am only referring to the binary valued

success/failure messages.

I have documented in this thread the efforts of several groups (and you
have recently indicated similar experiences) showing that the posterior
parameter distributions (and, in one study, the model choice decisions)
obtained by parametric or non-parametric bootstraps are not different in
any important way when classified by the estimation and covariance
status messages. Thus these posterior distributions of parameters are
not associated consistently with these status messages. It seems
plausible therefore to propose that the messages are not related to the
parameters but are themselves triggered by a random process.
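The stratified comparison I am describing can be sketched in a few lines. This is only an illustration with simulated numbers, not real NONMEM output; the status flag is deliberately generated independently of the estimates to mimic the null situation:

```python
import random
import statistics

random.seed(1)

# Hypothetical bootstrap output: 1000 replicates, each giving a clearance
# estimate and a binary minimization status. The status here is assigned
# at random, independent of the estimate, to mimic the situation where
# the message carries no information about the parameters.
boot = [(random.gauss(10.0, 1.5), random.random() < 0.8) for _ in range(1000)]

success = sorted(cl for cl, ok in boot if ok)
failure = sorted(cl for cl, ok in boot if not ok)

def quantile(xs, q):
    # Nearest-rank empirical quantile.
    return xs[min(len(xs) - 1, int(q * len(xs)))]

# If the status message tells us nothing about the parameters, the two
# stratified posterior distributions should be practically identical.
for q in (0.05, 0.50, 0.95):
    print(f"q={q:.2f}  success={quantile(success, q):6.2f}"
          f"  failure={quantile(failure, q):6.2f}")
```

In a real check the estimates would come from bootstrap runs and the flag from the NONMEM status message; a formal two-sample test could replace the quantile comparison.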

My post hoc justification for the lack of useful association between
status messages and parameter estimates (which has now been confirmed
many times) is as follows:

Because minimization and covariance success calculations depend on the
results of finite precision floating point arithmetic, NONMEM's
'decision' can depend on pseudo-random insignificant bits.

This kind of pseudo-random behaviour is more likely when one is pushing
the model to reveal the secrets in the data and the numerical methods
are most stressed. Simple test cases that do not push the model with the
data can be bootstrapped and produce 'success' practically every time.
But a real learning-type analysis that explores the data will commonly
hover over the yes/no decision boundary, and thus success becomes a
random event which does not signify anything important about the
parameter estimates.
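A toy example (plain Python, nothing to do with NONMEM internals) shows how a hard decision boundary interacts with rounding: two algebraically identical expressions differ in their last bits, and a hypothetical cutoff sitting on that value flips the verdict.

```python
# (0.1 + 0.2) + 0.3 and 0.1 + (0.2 + 0.3) are equal in exact arithmetic
# but differ in IEEE 754 double precision.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6

# A made-up 'success' criterion whose cutoff happens to sit exactly
# on the boundary between the two rounded results:
def converged(objective, cutoff=0.6):
    return objective > cutoff

print(left == right)                      # False
print(converged(left), converged(right))  # True False
```

When a computed quantity hovers this close to the cutoff, which branch is taken is decided by rounding, not by anything meaningful about the fit.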

You propose further experiments with other endpoints, e.g. NPDE of
observations, instead of comparing posterior distributions of parameters.

I look forward to your results.

Thank you for attributing the idea of using NONMEM status messages as a

source of random numbers to me. This is not my suggestion so please go

ahead and patent it yourself :-)

Ken,

You describe me well :-) I am indeed a mechanistic modeller. However, I
am an atheist in terms of statistical delusional systems (Bayesian,
frequentist, etc.) so I don't really worry about stating my priors in a
formal way. As I failed in maths and never took a statistics paper, I
like to justify the model choice based on biological priors, which is why
I always include weight as a covariate for clearance and volume. If run
times permit then I prefer to explore the parameter uncertainty with a
bootstrap or likelihood profile rather than playing around with initial
estimates to randomly end up with minimisation success and asymptotic
standard errors.
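For what it is worth, the bootstrap alternative can be sketched as follows. This is purely illustrative: `estimate_cl` is a made-up stand-in for refitting the model to each resampled dataset, and the numbers are simulated:

```python
import random
import statistics

random.seed(42)

# Hypothetical per-subject clearance values (one per subject).
cl_by_subject = [random.lognormvariate(2.3, 0.3) for _ in range(50)]

def estimate_cl(data):
    # Stand-in for a full model fit; a real replicate would rerun NONMEM
    # on the resampled dataset.
    return statistics.fmean(data)

# Nonparametric bootstrap: resample subjects with replacement, re-estimate.
boot = sorted(
    estimate_cl(random.choices(cl_by_subject, k=len(cl_by_subject)))
    for _ in range(2000)
)

# 95% percentile interval: no asymptotic standard errors, no $COV step.
lo, hi = boot[int(0.025 * len(boot))], boot[int(0.975 * len(boot))]
print(f"CL = {estimate_cl(cl_by_subject):.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```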

Best wishes,

Nick

Mark Sale - Next Level Solutions wrote:

> Ken,
> Thanks for your comments, and I think your observation about how
> mechanistic (vs statistically rigorous) the analysts' views are is
> really critical. Clearly Lewis (and, at the risk of speaking for him,
> I think Nick perhaps) have strong views about this. Conversely, I have
> heard many times (and am sympathetic to) the views of some very smart
> statisticians. So I suspect we won't resolve this by debating now
> any more than we have over the past 20 years of debating it. So I
> once again propose generating some actual data, which I continue to
> believe is better than a two-decade-long debate about theory.
>
> Mark
>
> Mark Sale MD
> Next Level Solutions, LLC
> www.NextLevelSolns.com
> 919-846-9185
>
> -------- Original Message --------
> Subject: RE: [NMusers] Models that abort before convergence Addendum
> From: "Ken Kowalski" <ken.kowalski *
> Date: Fri, November 21, 2008 4:39 pm
> To: "'Leonid Gibiansky'" <LGibiansky *
> Sale - Next Level Solutions'" <mark *
> Cc: "'nmusers'" <nmusers *
>

> Leonid,
>
> I have never reported out as a final model a run that failed to
> converge or failed the COV step. My guess is that individuals who
> frequently do probably tend to be more mechanistic in their model
> building than I am and often push the complexity of their models
> beyond what can be supported by the data in hand. For those who
> do report out models that don't converge, I wonder if they have
> tried re-running their models with different starting values
> (15-20% different) to see if NONMEM fails to converge at the same
> set of parameter estimates. My guess is that in many cases it won't,
> although both sets of estimates may appear "reasonable" and give
> similar fits and VPCs.
>
> For individuals who have strong prior beliefs about their
> mechanistic models, my thinking is that rather than using
> approximate maximum likelihood methods and ignoring the
> diagnostics that might suggest their model is unstable or not
> fully supported by the data, they would be better served
> by using a Bayesian approach. That way they can be explicit about
> the strength of their priors and they don't have to worry about
> convergence and COV step failures. JMHO.
>
> Ken
>
> Kenneth G. Kowalski
> President & CEO
> A2PG - Ann Arbor Pharmacometrics Group, Inc.
> 110 E. Miller Ave., Garden Suite
> Ann Arbor, MI 48104
> Work: 734-274-8255
> Cell: 248-207-5082
> Fax: 734-913-0230
> ken.kowalski *
>

> -----Original Message-----
> From: owner-nmusers *
> [mailto:owner-nmusers *] On Behalf Of Leonid Gibiansky
> Sent: Friday, November 21, 2008 3:53 PM
> To: Mark Sale - Next Level Solutions
> Cc: nmusers
> Subject: Re: [NMusers] Models that abort before convergence Addendum
>
> Mark,
> "Useful" is a relative and subjective term. Error messages and
> convergence information are useful to me (i.e., they make my search of
> the final model more efficient), and I'd like to understand whether they
> are useful to other people. I do not try to prove that a model that
> completes without error messages is correct, or that a model that
> completes with a rounding error is wrong, or whether the error messages
> provide information not readily available in NPC, NPDE, and PPC. I am
> interested to see how many people find them useful: full stop here; do
> not try to interpret the poll beyond this simple statement. In addition,
> questions 4-7 will help us to understand how widespread is the use of
> models with a failed convergence step and/or a failed minimization step.
> Thanks
> Leonid
>
> --------------------------------------
> Leonid Gibiansky, Ph.D.
> President, QuantPharm LLC
> web: www.quantpharm.com
> e-mail: LGibiansky at quantpharm.com
> tel: (301) 767 5566
>

> Mark Sale - Next Level Solutions wrote:
> >
> > Leonid,
> > Let me understand:
> > You now have a theory that the way to determine whether the NONMEM error
> > messages are useful (i.e., they tell you something about the model
> > "goodness") is a poll. This, I think, is a theory (and one well
> > established in epistemology) of how to find an optimal solution - appeal
> > to a large number of presumably well-informed people. As data that may
> > be relevant to this theory, I would point out that a poll gave us GW
> > Bush as our 43rd president.
> > Nick, in contrast, has suggested that the error messages could be used
> > as a source of random numbers. This also, I think, is a theory without
> > data to support or contradict it.
> > So ....
> > Let me propose a solution - let's generate some data. Suppose we
> > randomly generate 1000 models. We could test the hypotheses:
> >
> > Are the error messages random? (I suspect they are not, that there is
> > some information in them.) To test this, see if the error messages are
> > predictive of other (presumably non-random) measures of goodness - NPC
> > and NPDE, and perhaps PPC, come to mind.
> >
> > Do the error messages provide information not readily available in NPC,
> > NPDE, and PPC?
> > Not really sure how to test this without some "gold standard" of
> > goodness, except perhaps to compare the different measures to the model
> > that was used to simulate the data (it seems like measures based on that
> > would be "correct" in some way??). I need some ideas on this.
> >
> > I can generate, run, and extract results from random models (using the GA
> > software) - I already have NPDE and PPC in it, and was thinking of
> > adding NPC.
> >
> > Any interest/collaborators??
> >
> > Mark Sale MD
> > Next Level Solutions, LLC
> > www.NextLevelSolns.com
> > 919-846-9185
> >
> > -------- Original Message --------
> > Subject: Re: [NMusers] Models that abort before convergence Addendum
> > From: Leonid Gibiansky <LGibiansky *
> > Date: Thu, November 20, 2008 9:57 pm
> > To:
> > Cc: nmusers <nmusers *
> >
> > Nick, Mark, and All,
> > We can argue indefinitely, but let me propose a poll. If you would like
> > to participate, reply directly to me (use "reply", not "reply to all").
> > I will summarize all the replies received up to the end of November.
> > Skip the questions that you do not wish to answer; write NA if the
> > question is not applicable. Summaries will be blinded.
> >
> > 1. Would you like Nonmem to stop producing all run-time (not syntax)
> > error/warning messages (134, 137, number of significant digits, etc.)
> > and "MINIMIZATION SUCCESSFUL" messages (YES/NO):
> >
> > 2. Do you remember at least one example when a run-time error message
> > helped you to find an error in your code (YES/NO):
> >
> > 3. In your experience, run-time error messages allow you to detect
> > model errors or problems more quickly than would be possible without
> > error messages (agree/disagree):
> >
> > 4. Have you ever used in your report/publication ANY model that did not
> > have the $COV step completed (YES/NO):
> >
> > 5. Have you ever used in your report/publication ANY model that did not
> > converge (YES/NO):
> >
> > 6. Have you ever used in your report/publication a FINAL model that did
> > not have the $COV step completed (YES/NO):
> >
> > 7. Have you ever used in your report/publication a FINAL model that did
> > not converge (YES/NO):
> >
> > 8. Define yourself as a novice/intermediate/experienced Nonmem user:
> >
> > Thanks
> > Leonid
> >
> > --------------------------------------
> > Leonid Gibiansky, Ph.D.
> > President, QuantPharm LLC
> > web: www.quantpharm.com
> > e-mail: LGibiansky at quantpharm.com
> > tel: (301) 767 5566
> >

--

Nick Holford, Dept Pharmacology & Clinical Pharmacology

University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand

n.holford

http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford

Received on Fri Nov 21 2008 - 18:07:20 EST
