NONMEM Users Network Archive

Re: VPC appropriateness in complex PK

From: Nick Holford <n.holford>
Date: Sat, 19 Sep 2009 11:27:44 +1200

Dider,

There are two types of predictive check that are now recognized as the
current state of the art for mixed-effects model evaluation.

Statistical Predictive Checks (SPC)
The SVPC described by Wang and Zhang (2009) is an apparently independent
re-invention of a method published by Mentré and Escolano in 2006. It is
primarily a numerical method for checking whether the distribution of
discrepancies between model predictions and observations is uniform (the
prediction discrepancies, PDE). Its primary purpose is to provide a
statistical test procedure for the distribution of discrepancies. Mentré
and her colleagues have refined the original method to remove the
correlation between an individual's observations (Comets et al. 2008) and
propose a set of statistical tests based on a normal distribution of
discrepancies (the normalized prediction distribution errors, NPDE).
These standardized/normalized discrepancies (PDE, NPDE) can be plotted
as graphs, but they lose information that might identify the magnitude
of model failure, because the process of standardization/normalization
removes this important clue (as Leonid has also pointed out in his
recent email to nmusers).
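
For concreteness, here is a minimal sketch (in Python, with illustrative
names; not the npde package itself) of the discrepancy computation,
assuming each observation has been simulated K times under the fitted
model. The PDE is the fraction of simulated values falling below the
observation, and its inverse-normal transform gives the NPDE; the
within-subject decorrelation step of Comets et al. (2008) is omitted for
brevity. The same per-observation percentile is what the SVPC plots
against time or PRED.

    import numpy as np
    from scipy import stats

    def pde_npde(obs, sims):
        """obs: (n,) observed values; sims: (n, K) simulated replicates
        of each observation under the fitted model. PDE is uniform on
        [0, 1] and NPDE is standard normal if the model is correct.
        NB: the within-subject decorrelation (Comets et al. 2008) is
        omitted here."""
        n, K = sims.shape
        pde = (sims < obs[:, None]).mean(axis=1)
        # keep PDE away from 0 and 1 so the inverse-normal is finite
        pde = np.clip(pde, 1.0 / (2 * K), 1.0 - 1.0 / (2 * K))
        npde = stats.norm.ppf(pde)
        return pde, npde

    # a global test of the kind described above, e.g.
    # stats.kstest(npde, "norm")

Note that the returned quantities are unitless percentiles, which is
exactly why the magnitude of a misfit cannot be read back from them.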

Visual Predictive Checks (VPC)
The visual predictive check (Holford 2005) is primarily a graphical
method that relies on a subjective evaluation of the pattern of
discrepancies between the model predictions and the observations. It
preserves the magnitude of the model prediction and can be directly
compared to the observation. Examples of how this visual evaluation can
lead to recognition of model failure, when the standard 'diagnostic
plots' do not, can be found in Holford (2005) and Karlsson & Holford
(2008; see Slides 22-26). The original scatterplot VPC may be sufficient
when there are small numbers of observations, but the distribution of
observations cannot be appreciated when scatterplots have many
overlapping observation symbols. The percentile VPC (see Karlsson &
Holford 2008) solves this problem and allows direct comparison of
selected percentiles of the distribution of both predictions and
observations. The scatterplot VPC and percentile VPC offer complementary
views of the model predictions. The scatterplot VPC is helpful for
appreciating the realized design of the study but the percentile VPC is
needed to properly compare observations with predictions. A scatterplot
VPC by itself is inadequate for model evaluation in almost all cases:
uncritical evaluation of it can lead to acceptance of models that do not
describe the data well, because the failure is often hidden by this
naive method of constructing the plot.
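
As a minimal sketch of the percentile VPC construction (Python; function
and variable names are illustrative, not from any particular tool),
assuming the study design has been simulated K times under the model:

    import numpy as np

    def percentile_vpc(time, obs, sim, bin_edges, pcts=(5, 50, 95)):
        """time, obs: (n,) observation times and values; sim: (K, n)
        simulated replicates of the same design; bin_edges: time-bin
        edges. Returns, per bin, the observed percentiles and the
        median across replicates of the simulated percentiles."""
        which_bin = np.digitize(time, bin_edges)
        rows = []
        for b in np.unique(which_bin):
            m = which_bin == b
            obs_p = np.percentile(obs[m], pcts)
            sim_p = np.median(np.percentile(sim[:, m], pcts, axis=1),
                              axis=1)
            rows.append((time[m].mean(), obs_p, sim_p))
        return rows

Overlaying obs_p and sim_p per bin gives the direct
percentile-to-percentile comparison described above; the spread of each
percentile across the K replicates can also be drawn as a prediction
interval around the predicted lines.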

Learning and Confirming
Statistical predictive checks (SPC) offer a simple way of deciding
whether a model of arbitrary complexity describes the observations.
However, because all models are wrong, and because the tests may have
high power to detect small and possibly unimportant differences, this
kind of test may reject a model that is in fact useful for its intended
purpose. Evaluation of a model in relation to its purpose will usually
require an understanding of the magnitude and timing of the discrepancy.
Because magnitude is lost with SPC methods, they can be only partially
helpful as evaluation tools.
VPCs can be challenging to construct, but the very process of having to
think about how to simulate the observations and how to display the
results to account for covariates (e.g. dose, disease state) adds more
value to the VPC (see Karlsson & Holford 2008, Slides 27-28, for an
example where inclusion of a model for dropout related to disease status
was helpful in explaining and interpreting the observations). Leonid
also echoes this conclusion in his email.
The PRED-corrected method of VPC construction (Karlsson & Holford 2008;
Bergstrand et al. 2009) offers the potential for automated 'correction'
for covariate influences, but it still requires thought from the user,
both for construction and for evaluation.
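
A minimal sketch of the correction step itself may help (Python; this is
the basic form of the method as I understand it from the poster: each
value is scaled by the ratio of the bin's typical population prediction
to the record's own PRED; refinements such as variability correction and
handling of lower-bounded data are omitted):

    import numpy as np

    def pred_correct(dv, pred, bin_idx):
        """dv: (n,) observed or simulated values; pred: (n,) population
        predictions (PRED) for the same records; bin_idx: (n,) time-bin
        index. Returns dv * median(PRED in bin) / PRED so that records
        from different doses or covariate groups can share one bin."""
        dv_pc = np.empty(len(dv), dtype=float)
        for b in np.unique(bin_idx):
            m = bin_idx == b
            dv_pc[m] = dv[m] * np.median(pred[m]) / pred[m]
        return dv_pc

The same correction is applied to the observations and to every
simulated replicate before the percentile VPC is built; the thought
about which covariates drive PRED still rests with the user.
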
In the terminology of Sheiner (1997) the VPC can be viewed as an
evaluation procedure for learning while the SPC is an evaluation
procedure for confirming. They are complementary and have different uses.

Unstratified VPCs
Leonid has proposed an example which he asserts shows how the VPC is
not useful if covariate stratification is not performed.
> To see the problem, consider VPC (without stratification) for the data
> with two dose groups, 1 and 100 units (with the rest being similar).
> Obviously, all data that exceed 95% CI would come from the high dose,
> and all data below 5th percentile would come from the low dose, and
> overall, VPC plots and stats will not be useful.
However, I disagree with his conclusion about the usefulness of
unstratified VPC plots. If the model is suitable, then the predicted
percentiles of the percentile VPC should match the observed percentiles
even without stratification. But if the model is unsuitable, e.g. a one
compartment model has been used where a two compartment model would be
better, then the percentile VPC should be able to demonstrate this. Of
course, if one was interested in finding out whether the assumption of
dose linearity was satisfied, then stratification by dose would be
needed in the VPC.
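
To make the disagreement concrete, here is a deliberately toy sketch
(Python; all numbers invented for illustration): observations arise from
a biexponential 'truth' at doses of 1 and 100 units, while the VPC
simulations come from a misspecified monoexponential fit. The pooled,
unstratified percentiles still separate clearly at late times.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.5, 24, 20)               # sampling times (h)
    doses = np.repeat([1.0, 100.0], 50)        # two dose groups

    def true_conc(d):                          # 'true' biexponential decline
        return d * (0.7 * np.exp(-0.5 * t) + 0.3 * np.exp(-0.05 * t))

    def fit_conc(d):                           # misspecified monoexponential
        return d * np.exp(-0.25 * t)

    obs = np.array([true_conc(d) * np.exp(rng.normal(0, 0.2, t.size))
                    for d in doses])
    sim = np.array([fit_conc(d) * np.exp(rng.normal(0, 0.2, t.size))
                    for d in doses])

    # pooled (unstratified) observed vs simulated percentiles per time
    for q in (5, 50, 95):
        ratio = np.percentile(obs, q, axis=0) / np.percentile(sim, q, axis=0)
        print(q, "th pct, obs/sim at last 3 times:", np.round(ratio[-3:], 1))

Even with the dose groups pooled, the observed percentiles sit well
above the simulated ones at late times, so the structural misfit is not
hidden. What the unstratified plot cannot do is separate a dose
linearity problem from a structural one, which is where stratification
by dose earns its place.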

Limitations
The construction and interpretation of the VPC require thought. If the
data structure and model assumptions are complex then more effort is
required. In some situations it may not be possible to simulate the
original design (e.g. adaptive designs where the adaptation rules cannot
be implemented by a simulation algorithm). In that case it seems that
all simulation-based methods (SPC or VPC) would be unhelpful.

Best wishes

Nick

Bergstrand M, Hooker AC, Wallin JE, Karlsson MO. Prediction corrected
visual predictive checks [http://www.go-acop.org/acop2009/posters].
ACoP. 2009.

Comets E, Brendel K, Mentré F. Computing normalised prediction
distribution errors to evaluate nonlinear mixed-effect models: The npde
add-on package for R. Comput Methods Programs Biomed. 2008;90(2):154-66.

Holford NHG. The visual predictive check – superiority to standard
diagnostic (Rorschach) plots [www.page-meeting.org/?abstract=738]. PAGE.
2005;14.

Karlsson MO, Holford NHG. A Tutorial on Visual Predictive Checks
[www.page-meeting.org/?abstract=1434]. PAGE. 2008;17.

Mentré F, Escolano S. Prediction discrepancies for the evaluation of
nonlinear mixed-effects models. J Pharmacokinet Pharmacodyn.
2006;33(3):345-67.

Sheiner LB. Learning versus confirming in clinical drug development.
Clinical Pharmacology & Therapeutics. 1997;61(3):275-91.

Wang DD, Zhang S. Standardized visual predictive check – How and when to
use it in model evaluation [www.page-meeting.org/?abstract=1501]. PAGE.
2009;18.



Leonid Gibiansky wrote:
> Hi Dider,
> VPC is very good when your data set is homogeneous: same or similar
> dosing, same or similar sampling, same or similar influential
> covariates that result in similar PK or PD predictions. In cases of
> diverse data sets, traditional VPC is more difficult to implement, and
> it may not be useful.
>
> To see the problem, consider VPC (without stratification) for the data
> with two dose groups, 1 and 100 units (with the rest being similar).
> Obviously, all data that exceed 95% CI would come from the high dose,
> and all data below 5th percentile would come from the low dose, and
> overall, VPC plots and stats will not be useful. With two doses, it is
> easy to fix: just stratify by dose. If you have more diverse groups,
> you have to either do VPC by group, or find a way to plot all values
> on one scale. In cases of dose differences and linear kinetics, one
> can do VPC with all values normalized by dose. In nonlinear cases, it
> is more difficult.
>
> SVPC offers a way out of this problem. In this procedure, each
> observation is compared with the distribution of simulated
> observations at the same time point, with the same dosing, and with
> the same covariate set as in the original data. The position of the
> observation in the distribution of simulated values is characterized
> by the percent of simulated values that are above (or below) the
> observed value. If the model is correct, then these percentiles should
> be uniformly distributed in the range of 0 to 100. This should hold
> for any PRED value, any dose, any time post-dose, etc.
>
> It is important not to combine all observed points together (to study
> the overall distribution of the SVPC percentiles): in this case the
> test is not sensitive. SVPC is useful when these percentile values are
> plotted versus time, time post-dose, or PRED (but not IPRED or DV!)
> values. Then they can be used to see problems with the model, similar
> to how WRES vs TIME and WRES vs PRED plots are used. The disadvantage
> is that you lose the visual part: your percentile-versus-time profiles
> should look like a square filled with points rather than like
> concentration-time profiles. Even in this procedure, it makes sense
> to stratify your plots by dose, influential covariates, etc., to see
> whether the plots are uniformly good. Dose, covariate, time or PRED
> dependencies of the SVPC plots may indicate some deficiency of the
> model.
>
> Note that none of these procedures can be used to evaluate
> concentration- or effect-controlled trials, or trials with non-random
> dropout. In order to use VPC-based procedures for these cases, you
> need to simulate accordingly: with dosing that depends on simulated
> values (for concentration- or effect-controlled trials) or with the
> dropout models.
>
> Thanks
> Leonid
>
> --------------------------------------
> Leonid Gibiansky, Ph.D.
> President, QuantPharm LLC
> web: www.quantpharm.com
> e-mail: LGibiansky at quantpharm.com
> tel: (301) 767 5566
>
>
>
>
> Dider Heine wrote:
>> Dear NMusers:
>> The Visual predictive check (VPC,
>> http://www.page-meeting.org/page/page2005/PAGE2005P105.pdf , and
>> JPKPD, Volume 35, Number 2 / April, 2008) has been touted as a
>> useful tool for assessing the performance of population
>> pharmacokinetic models. However, I recently came across this abstract
>> from the 2009 PAGE meeting:
>> http://www.page-meeting.org/pdf_assets/4050-Standardized%20Visual%20Predictive%20Check%20in%20Model%20Evaluation%20-%20PAGE2009%20submit.pdf
>> .
>> This abstract states that situations when a VPC is not feasible but a
>> Standardized Visual Predictive Check (SVPC) can be used are as follows:
>> – Patients received individualized dose or there are a small number
>> of patients per dose group and PK or PD is nonlinear, thus
>> observations cannot be normalized for dose
>> – There are multiple categorical covariate effects on PK or PD
>> parameters
>> – The covariate is a continuous variable, which makes stratification
>> impossible
>> – Study design and execution varies among individuals, such as
>> adaptive design, difference in dosing schedule, dose changes and
>> dosing time varies during study, protocol violations
>> – Different concomitant medicines and food intake among individuals
>> when there are drug-drug interactions and food effect on PK
>>
>> However, the original VPC articles seem to suggest that these are the
>> exact situations when the VPC alone is an ideal tool for model
>> validation. Is there any justification for one approach over the
>> other? Has anyone ever seen an SVPC utilized elsewhere? I have found
>> nothing. Are these truly weaknesses of a VPC?
>>
>> Cheers!
>> Dider

--
Nick Holford, Professor Clinical Pharmacology
Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
n.holford
mobile: +64 21 46 23 53
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford