From: Nick Holford <n.holford>

Date: Fri, 24 Oct 2008 09:36:08 +1300

Kyun-Seop,

If you used OMEGA SAME then you have only one extra parameter in the model - it makes no difference how many occasions you use it with. A change in OFV of 10 with FOCE should easily allow you to reject the null hypothesis at alpha=0.05. If you really want to know the true Type I error rate then you can do a randomization test using data simulated with 3 occasions but with no BOV.
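The arithmetic behind that claim can be checked directly: with OMEGA SAME the likelihood ratio test has one degree of freedom, so the nominal p-value for a drop in OFV of 10 comes from a 1-df chi-square distribution. A minimal Python sketch (stdlib only; `chi2_sf_1df` is a helper name of my own, not from any package):

```python
import math

def chi2_sf_1df(x):
    """P(X > x) for a chi-square variable with 1 degree of freedom."""
    return math.erfc(math.sqrt(x / 2.0))

delta_ofv = 10.0  # observed drop in the NONMEM objective function (-2 log likelihood)
p = chi2_sf_1df(delta_ofv)
print(f"p = {p:.4f}")  # ~0.0016, well below alpha = 0.05

# The familiar 3.84 cut-off is just the 5% critical value of this distribution:
print(f"{chi2_sf_1df(3.84):.3f}")  # ~0.050
```

The nominal p-value relies on the asymptotic chi-square reference; the randomization test described above is the way to estimate the actual Type I error rate.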

I prefer the term between-occasion variability (BOV) to inter-occasion variability (IOV). Indeed, BOV is really just an estimate of within-subject variability (WSV), and that is what is important to understand.

However, I do not rely on this kind of statistical criterion for some kinds of model building. There is no harm in keeping BOV in the model. If you assume it is zero then you must be wrong, because in reality I cannot imagine any PKPD parameter that does not vary within an individual. I use BOV in my models the way I use weight: I know for sure that weight must be a predictor of clearance and volume, and I know BOV must exist, so I don't need statistical tests to decide whether to keep it in the model.

The traditional diagnostic plots, and more sensitive methods such as the VPC (see the example in Karlsson & Holford 2008), cannot be expected to reveal the consequences of omitting BOV, because the variability just gets distributed onto the other random-effects parameters (between-subject variability and residual error). The simulated distributions will look the same irrespective of how the variability is distributed.
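Why that redistribution makes the two models indistinguishable to a VPC can be shown with a toy simulation (a hypothetical Python/numpy sketch; the variance values are illustrative, not from any real model): folding the BOV variance into BSV leaves the marginal percentiles - the quantities a VPC summarizes - essentially unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_occ = 1000, 3
bsv_sd, bov_sd = 0.3, 0.2  # SDs of between-subject and between-occasion random effects

# Model A: clearance with both BSV and BOV on the log scale
eta = rng.normal(0.0, bsv_sd, (n_subj, 1))      # one eta per subject
kappa = rng.normal(0.0, bov_sd, (n_subj, n_occ))  # one kappa per occasion
cl_a = np.exp(eta + kappa)

# Model B: no BOV, but BSV inflated to carry the same total variance
bsv_sd_b = np.sqrt(bsv_sd**2 + bov_sd**2)
cl_b = np.exp(rng.normal(0.0, bsv_sd_b, (n_subj, n_occ)))

# Marginal percentiles - what a VPC summarizes - are nearly identical
print(np.percentile(cl_a, [5, 50, 95]))
print(np.percentile(cl_b, [5, 50, 95]))
```

The two models differ in their correlation structure (model A's occasions within a subject are correlated through eta), but the marginal distributions match, which is why a VPC cannot separate them.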

Nick

Karlsson MO, Holford NHG. A Tutorial on Visual Predictive Checks. PAGE 17 (2008) Abstr 1434 [www.page-meeting.org/?abstract=1434].

BAE, KYUN-SEOP wrote:

> Hi,
>
> The objective function value (OFV, MVO) decreased by 10 from 6000 when
> inter-occasion variability (IOV) for three occasions was used.
> Standard errors decreased a little, but I cannot see any improvement
> in the diagnostic plots.
>
> Some use the 3.84 criterion (a value from the chi-square distribution)
> for one additional parameter. I think an additional OMEGA SAME for IOV
> should be handled differently from a theta, omega or sigma.
>
> (Though I don't like the above criterion based on the chi-square
> distribution,) I would like to know if there is a similar quantitative
> criterion for IOV.
>
> Thanks,
>
> Kyun-Seop Bae

--

Nick Holford, Dept Pharmacology & Clinical Pharmacology

University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand

n.holford

http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford

Received on Thu Oct 23 2008 - 16:36:08 EDT
