ModelRank#
ModelRank is a general tool that filters models based on strictness criteria and ranks the remaining models. If parameter uncertainty is part of the strictness criteria, ModelRank will dynamically run models with the specified parameter uncertainty method. The tool is used by all Automatic Model Development (AMD) tools, but can also be run standalone.
Running#
The ModelRank tool is available in Pharmpy/pharmr.
To initiate ModelRank in Python/R:
from pharmpy.tools import run_modelrank
res = run_modelrank(models=models,        # Read in from tool or by scripting
                    results=results,      # Read in from tool or by scripting
                    ref_model=ref_model,  # One of the read-in models, used as reference
                    strictness='minimization_successful or (rounding_errors and sigdigs>=0.1)',
                    rank_type='lrt',
                    alpha=0.05)
res <- run_modelrank(models=models,        # Read in from tool or by scripting
                     results=results,      # Read in from tool or by scripting
                     ref_model=ref_model,  # One of the read-in models, used as reference
                     strictness='minimization_successful or (rounding_errors and sigdigs>=0.1)',
                     rank_type='lrt',
                     alpha=0.05)
This will take a list of models and their corresponding results, and use ref_model as the reference to compare against (e.g. for OFV). Only models that fulfill the strictness criteria will be ranked, and a likelihood ratio test will be performed as the rank_type, using 0.05 as the p-value cutoff alpha.
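Pharmpy performs the likelihood ratio test internally; the sketch below only illustrates the underlying arithmetic. The OFV values are made up, and the closed-form survival function covers even degrees of freedom only:

```python
import math

def chi2_sf(x, df):
    """Chi-square survival function for even df, using the
    closed-form series exp(-x/2) * sum_i (x/2)^i / i!."""
    s = x / 2
    return math.exp(-s) * sum(s**i / math.factorial(i) for i in range(df // 2))

def lrt_pvalue(ofv_ref, ofv_extended, df):
    # OFV is -2*log-likelihood, so the LRT statistic is the OFV drop
    dofv = ofv_ref - ofv_extended
    return chi2_sf(dofv, df)

# An extended model with 2 extra parameters is kept at alpha=0.05
# if it drops the OFV by more than chi2's 95th percentile (~5.99)
p = lrt_pvalue(ofv_ref=-1292.19, ofv_extended=-1305.58, df=2)
keep_extended = p < 0.05
```

Here the OFV drop of about 13.4 exceeds the cutoff, so the extended model would be preferred over the reference.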
To activate the functionality where parameter uncertainty is dynamically run, include RSE in your strictness criteria, for example:
from pharmpy.tools import run_modelrank
res = run_modelrank(models=models,        # Read in from tool or by scripting
                    results=results,      # Read in from tool or by scripting
                    ref_model=ref_model,  # One of the read-in models, used as reference
                    strictness='(minimization_successful or (rounding_errors and sigdigs>=0.1)) and rse < 0.5',
                    rank_type='lrt',
                    alpha=0.05)
res <- run_modelrank(models=models,        # Read in from tool or by scripting
                     results=results,      # Read in from tool or by scripting
                     ref_model=ref_model,  # One of the read-in models, used as reference
                     strictness='(minimization_successful or (rounding_errors and sigdigs>=0.1)) and rse < 0.5',
                     rank_type='lrt',
                     alpha=0.05)
Arguments#
For a more detailed description of each argument, see their respective chapter on this page.
Mandatory#
Argument | Description |
---|---|
models | Models to rank |
results | Modelfit results of the models to rank |
ref_model | Reference model, e.g. for LRT or for calculating dOFV |
Optional#
Argument | Description |
---|---|
strictness | Strictness criteria to filter models on (default is 'minimization_successful or (rounding_errors and sigdigs>=0.1)') |
rank_type | Selection criterion to rank models on (default is OFV) |
alpha | \(\alpha\) for likelihood ratio test |
search_space | Search space for candidate models (only applicable with mBIC as rank type) |
E | E-value (only applicable with mBIC as rank type) |
parameter_uncertainty_method | Parameter uncertainty method to use if needed for the strictness check |
Running ModelRank without parameter uncertainty#
If the strictness criteria do not include parameter uncertainty, or if the input models already have estimated uncertainty, ModelRank will first filter the models based on strictness and then rank the remaining models. If a likelihood ratio test is used, it is performed before ranking.
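The filter-then-rank flow can be sketched in plain Python. The per-model summaries below are hypothetical stand-ins for the Model and ModelfitResults objects the real tool works with:

```python
# Hypothetical per-model summaries; in ModelRank these come from the
# Model and ModelfitResults objects passed in
runs = {
    'run1': {'minimization_successful': True,  'ofv': -1304.5},
    'run2': {'minimization_successful': True,  'ofv': -1313.4},
    'run3': {'minimization_successful': False, 'ofv': -1304.6},
}

# Step 1: filter on strictness (here only the minimization status)
passing = {name: r for name, r in runs.items() if r['minimization_successful']}

# Step 2: rank the remaining models, e.g. lowest OFV first
ranking = sorted(passing, key=lambda name: passing[name]['ofv'])
```

In this toy example run3 is dropped by the strictness filter, and run2 ranks first on OFV.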
Running ModelRank with parameter uncertainty#
If the strictness criteria include parameter uncertainty, ModelRank will first filter the models based on the parts of the criteria it can assess (e.g. minimization status) and rank the remaining models. It then takes the highest ranked model and reruns it, estimating the parameter uncertainty. If the model passes the full strictness criteria, it is selected as the final model; otherwise the next best model is tried. This continues until a model fulfills the full criteria, or until all models have failed the strictness criteria.
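The best-first selection loop can be sketched as below. Both `rerun` and `passes_full_strictness` are hypothetical placeholders for the actual re-estimation with uncertainty and the full strictness check:

```python
def select_with_uncertainty(ranked, rerun, passes_full_strictness):
    """ranked: model names ordered best first.
    rerun: re-estimates a model with parameter uncertainty (hypothetical).
    passes_full_strictness: checks the full criteria, e.g. an RSE limit."""
    for name in ranked:
        result = rerun(name)
        if passes_full_strictness(result):
            return name  # first model to pass the full criteria is selected
    return None  # every model failed the full strictness criteria

# Toy run: the top-ranked model has too-high RSE, so the runner-up is chosen
rse_after_rerun = {'run2': 0.8, 'run1': 0.3}
final = select_with_uncertainty(
    ranked=['run2', 'run1'],
    rerun=lambda name: rse_after_rerun[name],
    passes_full_strictness=lambda rse: rse < 0.5,
)
```

Note that only as many models are rerun as needed: once a model passes, the remaining candidates are never re-estimated.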
Results#
The results object contains various summary tables, which are also written to files in .csv/.json format. The selected best model and its results are included in the results object as well.
Consider a ModelSearch run:
res = run_modelsearch(
model=start_model,
results=start_res,
search_space='ABSORPTION([FO,ZO]);PERIPHERALS([0,1]);LAGTIME([OFF,ON])',
algorithm='exhaustive_stepwise',
rank_type='bic')
res <- run_modelsearch(
model=start_model,
results=start_res,
search_space='ABSORPTION([FO,ZO]);PERIPHERALS([0,1]);LAGTIME([OFF,ON])',
algorithm='exhaustive_stepwise',
rank_type='bic')
This will run the ModelRank tool. If we read in that result object, we can explore in more detail how the models were ranked.
The summary_tool
table contains information such as which feature each model candidate has, the difference to the
start model (in this case comparing BIC), and final ranking:
model | description | n_params | d_params | dbic | bic | rank |
---|---|---|---|---|---|---|
modelsearch_run2 | LAGTIME(ON) | 9 | 2 | 12.594608 | -1272.124874 | 1.0 |
modelsearch_run9 | PERIPHERALS(1);LAGTIME(ON) | 11 | 4 | 12.558028 | -1272.088295 | 2.0 |
modelsearch_run1 | ABSORPTION(ZO) | 7 | 0 | 12.358005 | -1271.888272 | 3.0 |
modelsearch_run7 | LAGTIME(ON);PERIPHERALS(1) | 11 | 4 | 10.956313 | -1270.486580 | 4.0 |
modelsearch_run6 | LAGTIME(ON);ABSORPTION(ZO) | 9 | 2 | 4.809626 | -1264.339892 | 5.0 |
modelsearch_run4 | ABSORPTION(ZO);LAGTIME(ON) | 9 | 2 | 4.809626 | -1264.339892 | 6.0 |
modelsearch_run3 | PERIPHERALS(1) | 9 | 2 | 1.286997 | -1260.817264 | 7.0 |
input | | 7 | 0 | 0.000000 | -1259.530267 | 8.0 |
modelsearch_run8 | PERIPHERALS(1);ABSORPTION(ZO) | 9 | 2 | -14.816162 | -1244.714105 | 9.0 |
modelsearch_run15 | PERIPHERALS(1);LAGTIME(ON);ABSORPTION(ZO) | 11 | 4 | -20.181356 | -1239.348910 | 10.0 |
modelsearch_run5 | ABSORPTION(ZO);PERIPHERALS(1) | 9 | 2 | -1.412488 | -1258.117778 | NaN |
modelsearch_run11 | ABSORPTION(ZO);PERIPHERALS(1);LAGTIME(ON) | 11 | 4 | -2.575704 | -1256.954562 | NaN |
modelsearch_run13 | LAGTIME(ON);PERIPHERALS(1);ABSORPTION(ZO) | 11 | 4 | -4.784631 | -1254.745635 | NaN |
modelsearch_run10 | ABSORPTION(ZO);LAGTIME(ON);PERIPHERALS(1) | 11 | 4 | -5.461547 | -1254.068719 | NaN |
modelsearch_run12 | LAGTIME(ON);ABSORPTION(ZO);PERIPHERALS(1) | 11 | 4 | -5.842090 | -1253.688177 | NaN |
modelsearch_run14 | PERIPHERALS(1);ABSORPTION(ZO);LAGTIME(ON) | 11 | 4 | -31.094995 | -1228.435271 | NaN |
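A rank of NaN marks a model that failed the strictness criteria. Extracting the best ranked model from such a table can be sketched with plain dicts standing in for a few rows of the real pandas DataFrame:

```python
import math

# Miniature stand-in for three rows of res.summary_tool
summary = {
    'modelsearch_run2': {'dbic': 12.594608, 'rank': 1.0},
    'modelsearch_run1': {'dbic': 12.358005, 'rank': 3.0},
    'modelsearch_run5': {'dbic': -1.412488, 'rank': math.nan},
}

# Models that failed strictness get rank NaN; keep only ranked models
ranked = {m: r for m, r in summary.items() if not math.isnan(r['rank'])}
best = min(ranked, key=lambda m: ranked[m]['rank'])
```

With the actual results object, the same effect is achieved by dropping NaN ranks from the DataFrame and sorting on the rank column.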
If any models were run with parameter uncertainty, the tool will have a summary_models
table, where you can find
information about the actual model runs, such as minimization status, estimation time, and parameter estimates. The
table is generated with pharmpy.tools.summarize_modelfit_results()
.
The summary_strictness
table contains information about whether the strictness criteria were fulfilled, with detail on which
parts of the criteria passed or failed.
model | minimization_successful | rounding_errors | sigdigs | sigdigs >= 0.1 | strictness_fulfilled |
---|---|---|---|---|---|
input | True | False | 3.8 | True | True |
modelsearch_run1 | True | False | 4.8 | True | True |
modelsearch_run2 | True | False | 4.4 | True | True |
modelsearch_run3 | True | False | 3.4 | True | True |
modelsearch_run4 | True | False | 3.6 | True | True |
modelsearch_run5 | False | False | NaN | False | False |
modelsearch_run6 | True | False | 4.3 | True | True |
modelsearch_run7 | True | False | 3.0 | True | True |
modelsearch_run8 | True | False | 3.6 | True | True |
modelsearch_run9 | True | False | 3.8 | True | True |
modelsearch_run10 | False | False | NaN | False | False |
modelsearch_run11 | False | False | NaN | False | False |
modelsearch_run12 | False | False | NaN | False | False |
modelsearch_run13 | False | False | NaN | False | False |
modelsearch_run14 | False | False | NaN | False | False |
modelsearch_run15 | True | False | 3.5 | True | True |
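The columns of this table mirror the components of the strictness expression. Pharmpy parses strictness strings with its own machinery; as a rough sketch of the semantics, the expression can be read as a boolean formula over per-model diagnostics (the diagnostic values below are hypothetical):

```python
# Hypothetical diagnostics for one model; in Pharmpy these come from
# the ModelfitResults object
diagnostics = {
    'minimization_successful': True,
    'rounding_errors': False,
    'sigdigs': 3.8,
}

# The strictness string is a boolean expression over such names; a
# minimal evaluator reusing Python's own expression syntax
strictness = 'minimization_successful or (rounding_errors and sigdigs >= 0.1)'
fulfilled = eval(strictness, {'__builtins__': {}}, diagnostics)
```

For the failing runs in the table, minimization_successful and rounding_errors are both False, so the whole expression evaluates to False and strictness_fulfilled is False.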
The summary_selection_criteria
table contains the components of the selection criteria, such as penalty terms when using AIC/BIC,
or p-values and cutoffs when using LRT.
model | ofv | bic_penalty | dbic | bic | rank_val |
---|---|---|---|---|---|
input | -1292.186761 | 32.656494 | 0.000000 | -1259.530267 | -1259.530267 |
modelsearch_run1 | -1304.544766 | 32.656494 | 12.358005 | -1271.888272 | -1271.888272 |
modelsearch_run2 | -1313.362287 | 41.237413 | 12.594608 | -1272.124874 | -1272.124874 |
modelsearch_run3 | -1307.301233 | 46.483969 | 1.286997 | -1260.817264 | -1260.817264 |
modelsearch_run4 | -1305.577305 | 41.237413 | 4.809626 | -1264.339892 | -1264.339892 |
modelsearch_run5 | -1304.601747 | 46.483969 | -1.412488 | -1258.117778 | NaN |
modelsearch_run6 | -1305.577305 | 41.237413 | 4.809626 | -1264.339892 | -1264.339892 |
modelsearch_run7 | -1325.551467 | 55.064888 | 10.956313 | -1270.486580 | -1270.486580 |
modelsearch_run8 | -1291.198073 | 46.483969 | -14.816162 | -1244.714105 | -1244.714105 |
modelsearch_run9 | -1327.153182 | 55.064888 | 12.558028 | -1272.088295 | -1272.088295 |
modelsearch_run10 | -1309.133607 | 55.064888 | -5.461547 | -1254.068719 | NaN |
modelsearch_run11 | -1312.019450 | 55.064888 | -2.575704 | -1256.954562 | NaN |
modelsearch_run12 | -1308.753064 | 55.064888 | -5.842090 | -1253.688177 | NaN |
modelsearch_run13 | -1309.810523 | 55.064888 | -4.784631 | -1254.745635 | NaN |
modelsearch_run14 | -1283.500159 | 55.064888 | -31.094995 | -1228.435271 | NaN |
modelsearch_run15 | -1294.413798 | 55.064888 | -20.181356 | -1239.348910 | -1239.348910 |
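The columns of this table are arithmetically related: bic is the ofv plus the bic_penalty, and dbic is measured relative to the input model (positive means better than the input). Checking this against two rows of the table:

```python
# Values taken from the input and modelsearch_run2 rows above
ofv_input, pen_input = -1292.186761, 32.656494
ofv_run2, pen_run2 = -1313.362287, 41.237413

bic_input = ofv_input + pen_input   # matches the bic column: -1259.530267
bic_run2 = ofv_run2 + pen_run2      # matches the bic column: -1272.124874
dbic_run2 = bic_input - bic_run2    # matches the dbic column: ~12.5946
```

A larger penalty for models with more parameters is what lets BIC trade goodness of fit against model complexity.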