
From: Eleveld, DJ <d.j.eleveld_at_umcg.nl>

Date: Wed, 11 Feb 2015 21:26:08 +0000

Hi Aziz,

Just some comments off the top of my head, in a quite informal way: I'm not really sure that these are the same problem, because they don't start with the same information in the form of parameter constraints. In model 1 you are asking the optimizer for the unconstrained maximum likelihood solution for TVCL. OK, this is reasonable in a lot of situations, but not necessarily in all situations.

In model 2 you add information by forcing TVCL and CL to be positive. If you think of the optimal solution as some point in N-dimensional space which has to be searched for, in model 2 you are saying "don't even look in the space where TVCL or CL is negative". Even stronger, in model 2 you are also saying "don't even get close to zero", because the log-normal distribution vanishes towards zero.
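That vanishing density near zero is easy to check numerically. A small illustrative sketch (the median of 1 and the spread of 0.3 are arbitrary choices, not values from any model):

```python
import math

def lognormal_pdf(x, mu=0.0, sigma=0.3):
    """Density of exp(N(mu, sigma^2)); it vanishes as x -> 0+."""
    if x <= 0:
        return 0.0
    return (math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2))
            / (x * sigma * math.sqrt(2 * math.pi)))

# Density collapses rapidly as x approaches zero.
for x in (1.0, 0.5, 0.1, 0.01):
    print(x, lognormal_pdf(x))
```

So a log-normal parameterization does not merely forbid negative values; it also assigns essentially no probability to values close to zero.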

Which of these solutions is best for some particular application depends on a lot of things. One of the things I would think about in this situation is whether or not my a priori beliefs match the structural constraints of the model. Do I really think that the "true" CL could be zero? If yes, then model 2 is hard to defend.

Your description of your situation regarding standard errors is part of the same thing. When you extrapolate standard errors into low-probability areas, you are checking the boundaries of that probability region. It should not be surprising that model 1 might tell you that CL is negative, since this was part of the solution space which you allowed. With model 2 your model structure says "don't even look there".

In short, although these two models might look similar, I think they are really quite different. This becomes most clear when you consider the low-probability space.

Sorry for the vague language.

Warm regards,

Douglas

________________________________________

From: owner-nmusers_at_globomaxnm.com [owner-nmusers_at_globomaxnm.com] on behalf of Chaouch Aziz [Aziz.Chaouch_at_chuv.ch]

Sent: Wednesday, February 11, 2015 5:21 PM

To: nmusers_at_globomaxnm.com

Subject: [NMusers] Standard errors of estimates for strictly positive parameters

Hi,

I'm interested in generating samples from the asymptotic sampling distribution of population parameter estimates from a published PKPOP model fitted with NONMEM. By definition, parameter estimates are asymptotically (multivariate) normally distributed (unconstrained optimization) with mean M and covariance C, where M is the vector of parameter estimates and C is the covariance matrix of the estimates (returned by $COV and available in the .lst file).
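Sampling from that asymptotic distribution amounts to drawing from N(M, C), which for a small model can be done by hand with a Cholesky factor of C. A minimal sketch for two parameters; M and C here are made-up numbers, not output from any real $COV step:

```python
import math
import random
random.seed(0)

# Made-up estimates M and covariance C for two parameters (say TVCL and TVV).
M = [5.0, 40.0]
C = [[4.0, 1.0],
     [1.0, 9.0]]

# Cholesky factor of the 2x2 covariance: C = L L^T.
l11 = math.sqrt(C[0][0])
l21 = C[1][0] / l11
l22 = math.sqrt(C[1][1] - l21 ** 2)

def draw():
    """One draw from N(M, C) via correlated standard normals."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    return [M[0] + l11 * z1, M[1] + l21 * z1 + l22 * z2]

samples = [draw() for _ in range(20_000)]
mean0 = sum(s[0] for s in samples) / len(samples)
print(mean0)  # should be close to M[0] = 5.0
```

For more than a handful of parameters one would of course use a library routine (e.g. a multivariate-normal sampler) rather than a hand-rolled factorization.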

Consider the 2 models below:

Model 1:

TVCL = THETA(1)

CL = TVCL*EXP(ETA(1))

Model 2:

TVCL = EXP(THETA(1))

CL = TVCL*EXP(ETA(1))

It is clear that model 1 and model 2 will provide exactly the same fit. However, although in both cases the standard error of estimates (SE) will refer to THETA(1), the asymptotic sampling distribution of TVCL will be normal in model 1 while it will be lognormal in model 2. Therefore, if one is interested in generating random samples from the asymptotic distribution of TVCL, some of these samples might be negative in model 1, while they'll remain nicely positive in model 2. The same would happen with the bounds of (asymptotic) confidence intervals: in model 1 the lower bound of a 95% confidence interval for TVCL might be negative (unrealistic), while it would remain positive in model 2.
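That contrast is easy to demonstrate by simulation. In the sketch below the estimates are illustrative, not from any real fit: assume model 1 gives TVCL = 5 with SE 2, and model 2 gives THETA(1) = ln(5) with SE 0.4 on the log scale:

```python
import math
import random
random.seed(1)

n = 10_000
# Model 1: TVCL = THETA(1), sampled directly from its normal asymptotic distribution.
model1 = [random.gauss(5.0, 2.0) for _ in range(n)]
# Model 2: TVCL = EXP(THETA(1)); THETA(1) is normal, so TVCL is lognormal.
model2 = [math.exp(random.gauss(math.log(5.0), 0.4)) for _ in range(n)]

print("model 1: fraction of TVCL samples <= 0:", sum(t <= 0 for t in model1) / n)
print("model 2: fraction of TVCL samples <= 0:", sum(t <= 0 for t in model2) / n)
```

Model 1 produces a small but nonzero fraction of negative (physically meaningless) TVCL samples, while model 2 cannot produce any, because exp() of a real number is always positive.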

This obviously has no impact on point estimates, or even on confidence intervals constructed via non-parametric bootstrap, since boundary constraints can be placed on parameters in NONMEM. But what if one is interested in the asymptotic covariance matrix of estimates returned by $COV? The asymptotic sampling distribution of parameter estimates is (multivariate) normal only if the optimization is unconstrained! Doesn't this then speak in favour of model 2 over model 1? Or does NONMEM take care of it and return the asymptotic SE of THETA(1) in model 1 on the log scale (when boundary constraints are placed on the parameter)?

Thanks,

Aziz Chaouch

________________________________


The contents of this message are confidential and intended only for the eyes of the addressee(s). Anyone other than the addressee(s) is not allowed to use this message, to make it public, or to distribute or multiply it in any way. The UMCG cannot be held responsible for incomplete reception or delay of this transferred message.

Received on Wed Feb 11 2015 - 16:26:08 EST
