NONMEM Users Network Archive

Hosted by Cognigen

Re: Choice of models

From: Jan-Stefan Van der Walt <janstefan.vanderwalt>
Date: Tue, 24 Jan 2012 10:24:26 +0000

Hi Toufigh,

Recently I used the 90% prediction interval (generated by an appropriately
binned VPC) of the rich data (three studies with observed doses) to
evaluate the sparse data (one sample on four occasions). The sparse data
contained more information about the covariates of interest, but the dosing
was unobserved. I analysed the rich and sparse data simultaneously, first
including and then excluding the sparse data outside the 90% PI, and
compared the results. The eta-shrinkage values decreased considerably when
the observations outside the 90% PI were excluded, and I had more
confidence in the covariate relationships.
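
A minimal sketch of that kind of screen, assuming a table of sparse
observations obs (columns TIME, DV), a table of VPC-style simulated
replicates sims from the rich-data model (columns TIME, DVSIM), and the
same bin edges used for the VPC (all of these names are placeholders):

import pandas as pd

def flag_outside_pi(obs, sims, bin_edges, level=0.90):
    """Flag observations falling outside the simulation-based
    prediction interval of their time bin."""
    lo_q, hi_q = (1 - level) / 2, 1 - (1 - level) / 2
    obs, sims = obs.copy(), sims.copy()
    obs["BIN"] = pd.cut(obs["TIME"], bin_edges)
    sims["BIN"] = pd.cut(sims["TIME"], bin_edges)
    # per-bin lower/upper percentiles of the simulated concentrations
    # (5th and 95th for a 90% PI)
    pi = (sims.groupby("BIN", observed=True)["DVSIM"]
              .quantile([lo_q, hi_q]).unstack().reset_index())
    pi.columns = ["BIN", "LO", "HI"]
    obs = obs.merge(pi, on="BIN", how="left")
    obs["OUTSIDE"] = (obs["DV"] < obs["LO"]) | (obs["DV"] > obs["HI"])
    return obs

Refitting with and without the rows where OUTSIDE is True then gives the
comparison described above.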

As a side issue, I estimated a time-after-dose for the observations outside
the 90% PI. It was interesting that the difference between the reported and
estimated dosing times seemed to increase as the trial progressed (0.92 h
[month 6], 1.05 h [month 12], 1.11 h [month 18] and 3.6 h [month 24]).
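
One way such a time-after-dose could be estimated (a sketch only, assuming
a one-compartment oral model and individual post-hoc values of ka, ke and
V already in hand) is to treat the dose-to-sample interval as a free
parameter and pick the value that best reproduces the flagged
concentration:

import numpy as np
from scipy.optimize import minimize_scalar

def conc(tad, dose, ka, ke, v, f=1.0):
    """Single-dose, one-compartment, first-order absorption model."""
    return (f * dose * ka / (v * (ka - ke))
            * (np.exp(-ke * tad) - np.exp(-ka * tad)))

def estimate_tad(c_obs, dose, ka, ke, v, tad_bounds=(0.5, 24.0)):
    """Time after dose that best explains the observed concentration."""
    loss = lambda t: (conc(t, dose, ka, ke, v) - c_obs) ** 2
    return minimize_scalar(loss, bounds=tad_bounds, method="bounded").x

Note that a single concentration can be consistent with two times (one on
the absorption side and one on the elimination side of Cmax), so the bounds
have to reflect which branch is plausible for the sampling design.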

Hope this helps.

Regards,
Jan-Stefan

On 24 January 2012 05:05, Denney, William S. <William.S.Denney> wrote:

> Hi Toufigh,
>
> I typically think that data quality decreases with phase and with sampling
> frequency. Given what you described below, I'd think that you're fighting
> data quality in the sparse, phase 3 studies, and with the parameters you're
> describing as having trouble, it seems to support that thought. Were I to
> guess, you could probably pick out the most influential 3% of sparse
> samples (arbitrary percentage), and look at them in more detail and find
> that they look more like Cmax than Ctrough, or something such that the
> time since last dose appears to be off.
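
A sketch of the kind of triage Bill describes, assuming a NONMEM $TABLE
file with ID, TIME, TAD, DV, MDV, PRED and CWRES columns (the file and
column names are placeholders, and the absolute conditional weighted
residual is used here only as a cheap proxy for influence):

import pandas as pd

# e.g. $TABLE ID TIME TAD DV MDV PRED CWRES NOAPPEND ONEHEADER FILE=sdtab1
tab = pd.read_csv("sdtab1", sep=r"\s+", skiprows=1)

obs = tab[tab["MDV"] == 0]                      # observation records only
cutoff = obs["CWRES"].abs().quantile(0.97)      # the most extreme ~3%
suspects = obs[obs["CWRES"].abs() >= cutoff]

# Check whether the flagged records sit closer to the expected Cmax than
# to trough, i.e. whether the recorded time since last dose looks off.
cols = ["ID", "TIME", "TAD", "DV", "PRED", "CWRES"]
print(suspects.sort_values("CWRES", key=abs, ascending=False)[cols])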
>
> Beyond that, philosophically, I think that trough concentrations should
> not be allowed to affect Ka because the effect is usually so small as not
> to be measurable (assuming that we're discussing a drug with a reasonable
> separation between the alpha elimination phase and measurement time).
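
A quick numerical check of that argument for an arbitrary one-compartment
example (ka = 1.5 /h, ke = 0.1 /h, chosen only for illustration): the
relative sensitivity of the predicted concentration to ka is substantial
during absorption but shrinks to roughly -ke/(ka - ke), here about -0.07,
by the time a 24 h trough is drawn.

import numpy as np

def conc(t, ka=1.5, ke=0.1, dose=100.0, v=50.0, f=1.0):
    """One-compartment oral model, single dose."""
    return (f * dose * ka / (v * (ka - ke))
            * (np.exp(-ke * t) - np.exp(-ka * t)))

def rel_sens_ka(t, ka=1.5, h=1e-4):
    """Relative sensitivity d ln C / d ln ka by central difference."""
    up, dn = conc(t, ka=ka * (1 + h)), conc(t, ka=ka * (1 - h))
    return (up - dn) / (2 * h * conc(t, ka=ka))

for t in (0.25, 1.0, 24.0):
    print(f"t = {t:5.2f} h   dlnC/dlnka = {rel_sens_ka(t):+.3f}")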
>
> Thanks,
>
> Bill
>
> On Jan 23, 2012, at 11:45 PM, "Toufigh Gordi" <tgordi> wrote:
>
> Dear all,
>
> I have a general question on the choice of model in a population analysis.
> I have a data set that includes a large number of studies, with about
> three-quarters of the data from extensive sampling schemes (phase 1, 2,
> and 3 studies) and the rest from sparse samples (phase 3 clinical studies).
> When developing the PK model, a model built on the extensive samples only
> fits the data well and I can get quite reasonable parameter estimates,
> including covariate effects, and a successful $COV step (NONMEM). When all
> data are used, the model becomes somewhat unstable: the same covariates
> are identified, but the model becomes quite sensitive to the initial
> estimates and the $COV step won't go through. I could, of course, perform
> a bootstrap to get around this issue. In general, the fit of the model
> based on the full data set is not as good as that of the extensive-data
> model, although the two models are rather similar with regard to the
> parameter estimates. However, the range of estimated parameters is wider
> when using all data, and noticeably KA and V2 are skewed toward very
> large values.
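
The bootstrap mentioned here is usually a nonparametric one: resample
subjects with replacement, refit each resampled data set with the same
control stream, and take confidence intervals from the spread of the
estimates (tools such as PsN automate the whole loop). A bare-bones sketch
of just the resampling step, assuming a comma-separated NONMEM data file
with an ID column:

import numpy as np
import pandas as pd

def write_bootstrap_datasets(data_file, n_samples=200, seed=20120124):
    """Write n_samples subject-level bootstrap copies of a NONMEM data
    set, renumbering IDs so each draw is treated as a new individual."""
    data = pd.read_csv(data_file)
    ids = data["ID"].unique()
    rng = np.random.default_rng(seed)
    for i in range(1, n_samples + 1):
        drawn = rng.choice(ids, size=len(ids), replace=True)
        pieces = []
        for new_id, old_id in enumerate(drawn, start=1):
            block = data[data["ID"] == old_id].copy()
            block["ID"] = new_id
            pieces.append(block)
        pd.concat(pieces).to_csv(f"bs_{i:03d}.csv", index=False)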
>
> Moving forward, I could either use the full-data model and simulate
> steady-state profiles for the phase 3 study (sparse samples) data. Or, I
> could use the model based on the extensive samples only, use the sparse
> data to generate post-hoc estimates for the sparsely sampled individuals,
> and move forward that way. The advantage of the first option is that all
> the available data have been used in the modeling process. The
> disadvantage would be that the model is not as good as the other model,
> with the sparse data distorting the parameter estimates. The advantage of
> the second option is that the model performs better, and there is really
> no reason why the underlying PK model for the sparsely sampled subjects
> should be different, which means one should be able to use that model to
> generate post-hoc estimates. The disadvantage is that not all the
> available data have been used in the model-building process.
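
For the first option, a steady-state profile for a one-compartment oral
model can be simulated directly from the standard multiple-dose
(superposition) solution; the sketch below uses placeholder parameter
values, not estimates from any of the models discussed here:

import numpy as np

def css(t, dose, tau, ka, ke, v, f=1.0):
    """Steady-state concentration at time t after a dose (0 <= t < tau)
    for a one-compartment model with first-order absorption."""
    acc_ke = 1.0 / (1.0 - np.exp(-ke * tau))   # accumulation factors
    acc_ka = 1.0 / (1.0 - np.exp(-ka * tau))
    return (f * dose * ka / (v * (ka - ke))
            * (np.exp(-ke * t) * acc_ke - np.exp(-ka * t) * acc_ka))

tad = np.linspace(0.0, 12.0, 49)
profile = css(tad, dose=100.0, tau=12.0, ka=1.5, ke=0.1, v=50.0)

In practice this would be done with the population model itself (e.g. a
simulation step in NONMEM), with the final parameter estimates substituted
for these placeholders.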
>
>
> It would be interesting to hear other people's thoughts and ideas on this.
>
> Toufigh
>
>


--
*United Kingdom*
Flat 5, 41 Devons Rd, E3 3BF, London
+44 20 7987 6688 *(h) *
+44 77 9618 4662 *(m)*
*South Africa*
Ballet & Lodge, 34 Kerk St, George, 6529
Postnet Suite 39, Private Bag, X6590, George, 6530
+27 44 884 1560 *(h)*
*Sweden*
Pharmacometrics, Department of Pharmaceutical Biosciences
PO Box 591, SE-75124 Uppsala
+46 73 066 7338 *(m)*
