
This vignette outlines a workflow for:

  • Searching, selecting and retrieving transfer to utility models;
  • Preparing a prediction dataset for use with a selected transfer to utility model; and
  • Applying the selected transfer to utility model to a prediction dataset to predict Quality Adjusted Life Years (QALYs).

The practical value of implementing such a workflow is discussed in the economic analysis vignette and a scientific manuscript. Note that this example uses fake data; it should not be used to inform decision making.

Search, select and retrieve transfer to utility models

To identify datasets that contain transfer to utility models compatible with youthu (i.e., those developed with the TTU package), you can use the get_ttu_dv_dss function. The function searches specified dataverses (in the example below, the TTU dataverse) for datasets containing output from the TTU package.

ttu_dv_dss_tb <- get_ttu_dv_dss("TTU")

The ttu_dv_dss_tb table summarises some pertinent details about each dataset containing TTU models found by the preceding command. These details include a link to any scientific summary (the “Article” column) associated with a dataset.

Transfer to Utility Datasets
ID Utility Predictors Article
1 aqol6dtotalw BADS total score, GAD7 total score, K6 total score, OASIS total score, PHQ9 total score, SCARED total score, SOFAS total score https://doi.org/10.1101/2021.07.07.21260129

To identify models that predict a specified type of health utility from one or more of a specified subset of predictors, use:

mdls_lup <- get_mdls_lup(ttu_dv_dss_tb = ttu_dv_dss_tb,
                         utility_type_chr = "AQoL-6D",
                         mdl_predrs_in_ds_chr = c("PHQ9 total score",
                                                  "SOFAS total score"))

The preceding command will produce a lookup table with information that includes the catalogue names of models, the predictors used in each model and the analysis that generated each one.

Selected elements from Models Look-Up Table
Catalogue reference Predictors Analysis
PHQ9_1_GLM_GSN_LOG PHQ9 Primary Analysis
PHQ9_1_OLS_CLL PHQ9 Primary Analysis
PHQ9_SOFAS_1_GLM_GSN_LOG PHQ9, SOFAS Primary Analysis
PHQ9_SOFAS_1_OLS_CLL PHQ9, SOFAS Primary Analysis
OASIS_SOFAS_1_GLM_GSN_LOG OASIS, SOFAS Primary Analysis
OASIS_SOFAS_1_OLS_CLL OASIS, SOFAS Primary Analysis
BADS_SOFAS_1_GLM_GSN_LOG BADS, SOFAS Primary Analysis
BADS_SOFAS_1_OLS_CLL BADS, SOFAS Primary Analysis
K6_SOFAS_1_GLM_GSN_LOG K6, SOFAS Primary Analysis
K6_SOFAS_1_OLS_CLL K6, SOFAS Primary Analysis
SCARED_SOFAS_1_GLM_GSN_LOG SCARED, SOFAS Primary Analysis
SCARED_SOFAS_1_OLS_CLL SCARED, SOFAS Primary Analysis
GAD7_SOFAS_1_GLM_GSN_LOG GAD7, SOFAS Primary Analysis
GAD7_SOFAS_1_OLS_CLL GAD7, SOFAS Primary Analysis
SOFAS_1_GLM_GSN_LOG SOFAS Secondary Analysis A
SOFAS_1_OLS_CLL SOFAS Secondary Analysis A
OASIS_PHQ9_1_GLM_GSN_LOG OASIS, PHQ9 Secondary Analysis B
OASIS_PHQ9_1_OLS_CLL OASIS, PHQ9 Secondary Analysis B
GAD7_PHQ9_1_GLM_GSN_LOG GAD7, PHQ9 Secondary Analysis B
GAD7_PHQ9_1_OLS_CLL GAD7, PHQ9 Secondary Analysis B
SCARED_PHQ9_1_GLM_GSN_LOG SCARED, PHQ9 Secondary Analysis B
SCARED_PHQ9_1_OLS_CLL SCARED, PHQ9 Secondary Analysis B

To review the summary information about the predictive performance of a specific model, use:

get_dv_mdl_smrys(mdls_lup,
                 mdl_nms_chr = "PHQ9_SOFAS_1_OLS_CLL")
## $PHQ9_SOFAS_1_OLS_CLL
##        Parameter Estimate    SE          95% CI
## 1 SD (Intercept)    0.348 0.017   0.312 , 0.382
## 2      Intercept    0.428 0.129   0.174 , 0.686
## 3  PHQ9 baseline   -9.115 0.249 -9.601 , -8.618
## 4    PHQ9 change   -7.331 0.339 -8.007 , -6.665
## 5 SOFAS baseline    0.960 0.172   0.616 , 1.292
## 6   SOFAS change    1.146 0.235   0.674 , 1.607
## 7             R2    0.767 0.012   0.743 , 0.788
## 8           RMSE    0.925 0.004   0.922 , 0.928
## 9          Sigma    0.406 0.012   0.384 , 0.429
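If you want to compare several candidate models, the plural naming of the mdl_nms_chr argument suggests it accepts a vector of catalogue references. As a hedged sketch (output not shown), the two PHQ9 and SOFAS models from the table above could be summarised in a single call:

get_dv_mdl_smrys(mdls_lup,
                 mdl_nms_chr = c("PHQ9_SOFAS_1_GLM_GSN_LOG",
                                 "PHQ9_SOFAS_1_OLS_CLL"))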

More information about a selected model can be found in the online model catalogue, the link to which can be obtained with the following command:

get_mdl_ctlg_url(mdls_lup,
                 mdl_nm_1L_chr = "PHQ9_SOFAS_1_OLS_CLL")

## [1] "https://dataverse.harvard.edu/api/access/datafile/6484935"
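If you are working interactively, you can open that catalogue entry directly from R. The following optional sketch simply passes the retrieved URL to base R's utils::browseURL:

utils::browseURL(get_mdl_ctlg_url(mdls_lup,
                                  mdl_nm_1L_chr = "PHQ9_SOFAS_1_OLS_CLL"))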

Prepare a prediction dataset for use with a selected transfer to utility model

Import data

You can now import and inspect the dataset you plan to use for prediction. In the example below we use fake data.

data_tb <- make_fake_ds_one()
Illustrative example of a prediction dataset
UID Timepoint Date PHQ_total SOFAS_total
Participant_1 Baseline 2022-12-20 7 69
Participant_10 Baseline 2022-11-16 17 60
Participant_10 Follow-up 2023-02-21 17 64
Participant_100 Baseline 2023-01-31 0 76
Participant_1000 Baseline 2023-02-05 0 71
Participant_1000 Follow-up 2023-04-10 0 71

Confirm dataset can be used as a prediction dataset

The prediction dataset must contain variables that correspond to all the predictors of the model you intend to apply. The allowable range and required class of each predictor variable are described in the min_val_dbl, max_val_dbl and class_chr columns of the model predictors lookup table, which can be accessed with a call to the get_predictors_lup function.

predictors_lup <- get_predictors_lup(mdls_lup = mdls_lup,
                                     mdl_nm_1L_chr = "PHQ9_SOFAS_1_OLS_CLL")
Model predictors lookup table
short_name_chr long_name_chr min_val_dbl max_val_dbl class_chr increment_dbl class_fn_chr mdl_scaling_dbl covariate_lgl
PHQ9 PHQ9 total score 0 27 integer 1 youthvars::youthvars_phq9 0.01 FALSE
SOFAS SOFAS total score 0 100 integer 1 youthvars::youthvars_sofas 0.01 TRUE
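As an optional manual check (the metadata step described next performs its own validation), you can confirm that the candidate predictor variables fall within the allowed ranges listed above. This sketch assumes the variable names PHQ_total and SOFAS_total from the fake dataset:

# PHQ9 scores should lie in 0-27 and SOFAS scores in 0-100 (per predictors_lup)
all(data_tb$PHQ_total >= 0 & data_tb$PHQ_total <= 27)
all(data_tb$SOFAS_total >= 0 & data_tb$SOFAS_total <= 100)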

The prediction dataset must also include both a unique client identifier variable and a measurement time-point identifier variable (which must be a factor with two levels). The dataset also needs to be in long format (i.e., where measures at different time-points for the same individual are stacked on top of each other in separate rows). We can confirm these conditions hold by creating a dataset metadata object with the make_predn_metadata_ls function. In creating the metadata object, the function checks that the dataset can be used in conjunction with the model specified at the mdl_nm_1L_chr argument. If the prediction dataset uses different variable names for the predictors from those specified in the predictors_lup lookup table, a named vector detailing the correspondence between the two sets of variable names needs to be passed to the predr_vars_nms_chr argument. Finally, if you wish to specify a preferred variable name for the predicted utility values when applying the model, you can do so by passing that name to the utl_var_nm_1L_chr argument.
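Before constructing the metadata object, you can also verify these structural requirements yourself. The following base R sketch is illustrative only (make_predn_metadata_ls performs equivalent checks) and uses the variable names from the fake dataset:

# The time-point variable should be a factor with exactly two levels
is.factor(data_tb$Timepoint) && nlevels(data_tb$Timepoint) == 2
# Long format: at most one row per participant per time-point
anyDuplicated(data_tb[, c("UID", "Timepoint")]) == 0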

predn_ds_ls <- make_predn_metadata_ls(data_tb,
                                      id_var_nm_1L_chr = "UID",
                                      msrmnt_date_var_nm_1L_chr = "Date",
                                      predr_vars_nms_chr = c(PHQ9 = "PHQ_total",SOFAS = "SOFAS_total"),
                                      round_var_nm_1L_chr = "Timepoint",
                                      round_bl_val_1L_chr = "Baseline",
                                      utl_var_nm_1L_chr = "AQoL6D_HU",
                                      mdls_lup = mdls_lup,
                                      mdl_nm_1L_chr = "PHQ9_SOFAS_1_OLS_CLL")

Apply the selected transfer to utility model to a prediction dataset to predict Quality Adjusted Life Years (QALYs)

Predict health utility at baseline and follow-up timepoints

To generate utility predictions we use the add_utl_predn function. The function needs to be supplied with the prediction dataset (the value passed to argument data_tb) and the validated prediction metadata object we created in the previous step.

data_tb <- add_utl_predn(data_tb,
                         predn_ds_ls = predn_ds_ls)
## Joining with `by = join_by(UID, Timepoint)`

By default, the add_utl_predn function samples model parameter values based on a table of model coefficients when making predictions and constrains predictions to an allowed range. You can override these defaults with the additional arguments new_data_is_1L_chr = "Predicted" (which uses mean parameter values), force_min_max_1L_lgl = F (which removes the range constraint) and, if the source dataset makes downloadable model objects available, make_from_tbl_1L_lgl = F. These settings will produce different predictions. It is strongly recommended that you consult the model catalogue (see above) to understand how such decisions may affect the validity of the predicted values generated.
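For example, a call using mean parameter values and no range constraint (shown here as an unevaluated sketch, since running it would overwrite the default predictions shown in the table below) would look like:

add_utl_predn(data_tb,
              predn_ds_ls = predn_ds_ls,
              new_data_is_1L_chr = "Predicted",
              force_min_max_1L_lgl = F)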

Prediction dataset with predicted utilities
UID Timepoint Date PHQ_total SOFAS_total AQoL6D_HU
Participant_1 Baseline 2022-12-20 7 69 0.6039160
Participant_10 Baseline 2022-11-16 17 60 0.4661496
Participant_10 Follow-up 2023-02-21 17 64 0.3535036
Participant_100 Baseline 2023-01-31 0 76 0.9939090
Participant_1000 Baseline 2023-02-05 0 71 0.9999991
Participant_1000 Follow-up 2023-04-10 0 71 0.9774732

Our health utility predictions are now available for use and are summarised below.

summary(data_tb$AQoL6D_HU)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
## 0.04121 0.44149 0.64419 0.62758 0.83702 1.00000
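The predictions can also be summarised separately at each time-point. The following sketch assumes the dplyr package is installed (it is not attached by the code above):

data_tb %>%
  dplyr::group_by(Timepoint) %>%
  dplyr::summarise(mean_AQoL6D_HU = mean(AQoL6D_HU))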

Calculate QALYs

The last step is to calculate Quality Adjusted Life Years, using a method that assumes a linear rate of change between timepoints.

data_tb <- data_tb %>% add_qalys_to_ds(predn_ds_ls = predn_ds_ls,
                                       include_predrs_1L_lgl = F,
                                       reshape_1L_lgl = F)
Prediction dataset with QALYs
UID Timepoint Date PHQ_total SOFAS_total AQoL6D_HU AQoL6D_HU_change_dbl duration_prd qalys_dbl
Participant_1 Baseline 2022-12-20 7 69 0.6039160 0.0000000 0S 0.0000000
Participant_10 Baseline 2022-11-16 17 60 0.4661496 0.0000000 0S 0.0000000
Participant_10 Follow-up 2023-02-21 17 64 0.3535036 -0.1126460 97d 0H 0M 0S 0.1088383
Participant_100 Baseline 2023-01-31 0 76 0.9939090 0.0000000 0S 0.0000000
Participant_1000 Baseline 2023-02-05 0 71 0.9999991 0.0000000 0S 0.0000000
Participant_1000 Follow-up 2023-04-10 0 71 0.9774732 -0.0225259 64d 0H 0M 0S 0.1732488
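The qalys_dbl values are consistent with the linear rate of change assumption: the QALYs accrued between two measurements are the mean of the two utility scores multiplied by the elapsed time in years. As a quick check (assuming a 365.25-day year), the follow-up value for Participant_10 can be reproduced by hand:

(0.4661496 + 0.3535036) / 2 * (97 / 365.25)
## [1] 0.1088383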