Project author: easystats

Project description:
:muscle: Models' quality and performance metrics (R2, ICC, LOO, AIC, BF, ...)

Language: R
Repository: git://github.com/easystats/performance.git
Created: 2019-02-10T09:01:04Z
Community: https://github.com/easystats/performance

License: GNU General Public License v3.0

performance

Test if your model is a good model!

A crucial aspect of building regression models is evaluating the
quality of the model fit. It is important to investigate how well
models fit the data and which fit indices to report. Functions to
create diagnostic plots or to compute fit measures do exist, but they
are mostly spread over different packages. There is no unique and
consistent approach to assess model quality for different kinds of
models.

The primary goal of the performance package is to fill this gap and
to provide utilities for computing indices of model quality and
goodness of fit. These include measures like r-squared (R2), root
mean squared error (RMSE) or intraclass correlation coefficient (ICC),
but also functions to check (mixed) models for overdispersion,
zero-inflation, convergence or singularity.

Installation


The performance package is available on CRAN, while its latest
development version is available on R-universe (from rOpenSci).

Type        | Source     | Command
----------- | ---------- | ----------------------------------------------------------------------------
Release     | CRAN       | install.packages("performance")
Development | R-universe | install.packages("performance", repos = "https://easystats.r-universe.dev")

Once you have installed the package, you can load it using:

  library("performance")

Tip

Instead of library(performance), use library(easystats). This will
make all features of the easystats ecosystem available.

To stay updated, use easystats::install_latest().
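
In code, the tip amounts to the following (a minimal sketch; both calls
come from the easystats package):

  # load the whole easystats ecosystem, including performance
  library(easystats)

  # keep all easystats packages up to date
  easystats::install_latest()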

Citation

To cite performance in publications use:

  citation("performance")
  #> To cite package 'performance' in publications use:
  #>
  #>   Lüdecke et al., (2021). performance: An R Package for Assessment, Comparison and
  #>   Testing of Statistical Models. Journal of Open Source Software, 6(60), 3139.
  #>   https://doi.org/10.21105/joss.03139
  #>
  #> A BibTeX entry for LaTeX users is
  #>
  #>   @Article{,
  #>     title = {{performance}: An {R} Package for Assessment, Comparison and Testing of Statistical Models},
  #>     author = {Daniel Lüdecke and Mattan S. Ben-Shachar and Indrajeet Patil and Philip Waggoner and Dominique Makowski},
  #>     year = {2021},
  #>     journal = {Journal of Open Source Software},
  #>     volume = {6},
  #>     number = {60},
  #>     pages = {3139},
  #>     doi = {10.21105/joss.03139},
  #>   }

Documentation


There is a nice introduction to the package on YouTube.

The performance workflow

Assessing model quality

R-squared

performance has a generic r2() function, which computes the
r-squared for many different models, including mixed effects and
Bayesian regression models.

r2() returns a list containing values related to the “most
appropriate” r-squared for the given model.

  model <- lm(mpg ~ wt + cyl, data = mtcars)
  r2(model)
  #> # R2 for Linear Regression
  #> R2: 0.830
  #> adj. R2: 0.819

  model <- glm(am ~ wt + cyl, data = mtcars, family = binomial)
  r2(model)
  #> # R2 for Logistic Regression
  #> Tjur's R2: 0.705

  library(MASS)
  data(housing)
  model <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)
  r2(model)
  #> Nagelkerke's R2: 0.108

The different R-squared measures can also be accessed directly via
functions like r2_bayes(), r2_coxsnell() or r2_nagelkerke() (see a
full list of functions
here).
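
For instance, a minimal sketch calling two of these directly on the
logistic model from above (output not shown):

  model <- glm(am ~ wt + cyl, data = mtcars, family = binomial)

  # specific pseudo-R2 flavours for the same model
  r2_coxsnell(model)
  r2_nagelkerke(model)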

For mixed models, the conditional and marginal R-squared are
returned. The marginal R-squared considers only the variance of the
fixed effects and indicates how much of the model’s variance is
explained by the fixed effects part only. The conditional R-squared
takes both the fixed and random effects into account and indicates how
much of the model’s variance is explained by the “complete” model.

For frequentist mixed models, r2() (i.e., r2_nakagawa()) computes the
mean random effect variances; thus, r2() is also appropriate for mixed
models with more complex random effects structures, like random slopes
or nested random effects (Johnson 2014; Nakagawa, Johnson, and
Schielzeth 2017).

  set.seed(123)
  library(rstanarm)

  model <- stan_glmer(
    Petal.Length ~ Petal.Width + (1 | Species),
    data = iris,
    cores = 4
  )
  r2(model)
  #> # Bayesian R2 with Compatibility Interval
  #>
  #> Conditional R2: 0.954 (95% CI [0.951, 0.957])
  #> Marginal R2: 0.414 (95% CI [0.204, 0.644])

  library(lme4)
  model <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)
  r2(model)
  #> # R2 for Mixed Models
  #>
  #> Conditional R2: 0.799
  #> Marginal R2: 0.279

Intraclass Correlation Coefficient (ICC)

Similar to R-squared, the ICC provides information on the explained
variance and can be interpreted as “the proportion of the variance
explained by the grouping structure in the population” (Hox 2010).

icc() calculates the ICC for various mixed model objects, including
stanreg models.

  library(lme4)
  model <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)
  icc(model)
  #> # Intraclass Correlation Coefficient
  #>
  #> Adjusted ICC: 0.722
  #> Unadjusted ICC: 0.521

…and models of class brmsfit.

  library(brms)
  set.seed(123)
  model <- brm(mpg ~ wt + (1 | cyl) + (1 + wt | gear), data = mtcars)
  icc(model)
  #> # Intraclass Correlation Coefficient
  #>
  #> Adjusted ICC: 0.930
  #> Unadjusted ICC: 0.771

Model diagnostics

Check for overdispersion

Overdispersion occurs when the observed variance in the data is higher
than the variance expected under the model assumptions (for Poisson
models, the variance roughly equals the mean of the outcome).
check_overdispersion() checks whether a count model (including mixed
models) is overdispersed.

  library(glmmTMB)
  data(Salamanders)
  model <- glm(count ~ spp + mined, family = poisson, data = Salamanders)
  check_overdispersion(model)
  #> # Overdispersion test
  #>
  #> dispersion ratio = 2.946
  #> Pearson's Chi-Squared = 1873.710
  #> p-value = < 0.001

Overdispersion can be fixed by either modelling the dispersion
parameter (not possible with all packages) or by choosing a different
distributional family, like quasi-Poisson or negative binomial (see
Gelman and Hill 2007).
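
As a sketch of the second remedy, one could refit the model above with
a negative binomial family via glmmTMB (the object name model_nb is
ours):

  library(glmmTMB)

  # negative binomial family as a remedy for overdispersion
  model_nb <- glmmTMB(count ~ spp + mined, family = nbinom2, data = Salamanders)
  check_overdispersion(model_nb)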

Check for zero-inflation

Zero-inflation (in (quasi-)Poisson models) is indicated when the number
of observed zeros is larger than the number of predicted zeros, i.e.
the model is underfitting zeros. In such cases, it is recommended to
use negative binomial or zero-inflated models.

Use check_zeroinflation() to check if zero-inflation is present in the
fitted model.

  model <- glm(count ~ spp + mined, family = poisson, data = Salamanders)
  check_zeroinflation(model)
  #> # Check for zero-inflation
  #>
  #> Observed zeros: 387
  #> Predicted zeros: 298
  #> Ratio: 0.77
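
A minimal sketch of the suggested remedy, adding a zero-inflation
component via glmmTMB (the zero-inflation formula ~mined is our
illustrative choice):

  library(glmmTMB)

  # zero-inflated Poisson model; ~mined is an illustrative choice
  model_zi <- glmmTMB(
    count ~ spp + mined,
    ziformula = ~mined,
    family = poisson,
    data = Salamanders
  )
  check_zeroinflation(model_zi)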

Check for singular model fits

A “singular” model fit means that some dimensions of the
variance-covariance matrix have been estimated as exactly zero. This
often occurs for mixed models with overly complex random effects
structures.

check_singularity() checks mixed models (of class lme, merMod,
glmmTMB or MixMod) for singularity, and returns TRUE if the model
fit is singular.

  library(lme4)
  data(sleepstudy)

  # prepare data
  set.seed(123)
  sleepstudy$mygrp <- sample(1:5, size = 180, replace = TRUE)
  sleepstudy$mysubgrp <- NA
  for (i in 1:5) {
    filter_group <- sleepstudy$mygrp == i
    sleepstudy$mysubgrp[filter_group] <-
      sample(1:30, size = sum(filter_group), replace = TRUE)
  }

  # fit strange model
  model <- lmer(
    Reaction ~ Days + (1 | mygrp / mysubgrp) + (1 | Subject),
    data = sleepstudy
  )

  check_singularity(model)
  #> [1] TRUE

Remedies to cure issues with singular fits can be found
here.
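
A common first remedy, sketched below, is to simplify the random
effects structure, e.g. dropping the sparse nested grouping term from
the model above:

  # simplify the random effects structure of the singular model
  model_simple <- lmer(Reaction ~ Days + (1 | mygrp) + (1 | Subject), data = sleepstudy)
  check_singularity(model_simple)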

Check for heteroskedasticity

Linear models assume constant error variance (homoskedasticity).

The check_heteroscedasticity() function assesses whether this
assumption has been violated:

  data(cars)
  model <- lm(dist ~ speed, data = cars)
  check_heteroscedasticity(model)
  #> Warning: Heteroscedasticity (non-constant error variance) detected (p = 0.031).

Comprehensive visualization of model checks

performance provides many functions to check model assumptions, like
check_collinearity(), check_normality() or
check_heteroscedasticity(). To get a comprehensive check, use
check_model().

  # defining a model
  model <- lm(mpg ~ wt + am + gear + vs * cyl, data = mtcars)

  # checking model assumptions
  check_model(model)

Model performance summaries

model_performance() computes indices of model performance for
regression models. Depending on the model object, typical indices might
be r-squared, AIC, BIC, RMSE, ICC or LOOIC.

Linear model

  m1 <- lm(mpg ~ wt + cyl, data = mtcars)
  model_performance(m1)
  #> # Indices of model performance
  #>
  #> AIC | AICc | BIC | R2 | R2 (adj.) | RMSE | Sigma
  #> ---------------------------------------------------------------
  #> 156.010 | 157.492 | 161.873 | 0.830 | 0.819 | 2.444 | 2.568

Logistic regression

  m2 <- glm(vs ~ wt + mpg, data = mtcars, family = "binomial")
  model_performance(m2)
  #> # Indices of model performance
  #>
  #> AIC | AICc | BIC | Tjur's R2 | RMSE | Sigma | Log_loss | Score_log | Score_spherical | PCP
  #> -----------------------------------------------------------------------------------------------------
  #> 31.298 | 32.155 | 35.695 | 0.478 | 0.359 | 1.000 | 0.395 | -14.903 | 0.095 | 0.743

Linear mixed model

  library(lme4)
  m3 <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)
  model_performance(m3)
  #> # Indices of model performance
  #>
  #> AIC | AICc | BIC | R2 (cond.) | R2 (marg.) | ICC | RMSE | Sigma
  #> ----------------------------------------------------------------------------------
  #> 1755.628 | 1756.114 | 1774.786 | 0.799 | 0.279 | 0.722 | 23.438 | 25.592

Model comparison

The compare_performance() function can be used to compare the
performance and quality of several models (including models of different
types).

  counts <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
  outcome <- gl(3, 1, 9)
  treatment <- gl(3, 3)
  m4 <- glm(counts ~ outcome + treatment, family = poisson())

  compare_performance(m1, m2, m3, m4, verbose = FALSE)
  #> # Comparison of Model Performance Indices
  #>
  #> Name | Model | AIC (weights) | AICc (weights) | BIC (weights) | RMSE | Sigma | Score_log
  #> -----------------------------------------------------------------------------------------------
  #> m1 | lm | 156.0 (<.001) | 157.5 (<.001) | 161.9 (<.001) | 2.444 | 2.568 |
  #> m2 | glm | 31.3 (>.999) | 32.2 (>.999) | 35.7 (>.999) | 0.359 | 1.000 | -14.903
  #> m3 | lmerMod | 1764.0 (<.001) | 1764.5 (<.001) | 1783.1 (<.001) | 23.438 | 25.592 |
  #> m4 | glm | 56.8 (<.001) | 76.8 (<.001) | 57.7 (<.001) | 3.043 | 1.000 | -2.598
  #>
  #> Name | Score_spherical | R2 | R2 (adj.) | Tjur's R2 | Log_loss | PCP | R2 (cond.) | R2 (marg.)
  #> ---------------------------------------------------------------------------------------------------
  #> m1 | | 0.830 | 0.819 | | | | |
  #> m2 | 0.095 | | | 0.478 | 0.395 | 0.743 | |
  #> m3 | | | | | | | 0.799 | 0.279
  #> m4 | 0.324 | | | | | | |
  #>
  #> Name | ICC | Nagelkerke's R2
  #> ------------------------------
  #> m1 | |
  #> m2 | |
  #> m3 | 0.722 |
  #> m4 | | 0.657

General index of model performance

One can also easily compute a composite index of model performance and
sort the models from best to worst.

  compare_performance(m1, m2, m3, m4, rank = TRUE, verbose = FALSE)
  #> # Comparison of Model Performance Indices
  #>
  #> Name | Model | RMSE | Sigma | AIC weights | AICc weights | BIC weights | Performance-Score
  #> -----------------------------------------------------------------------------------------------
  #> m2 | glm | 0.359 | 1.000 | 1.000 | 1.000 | 1.000 | 100.00%
  #> m4 | glm | 3.043 | 1.000 | 2.96e-06 | 2.06e-10 | 1.63e-05 | 37.67%
  #> m1 | lm | 2.444 | 2.568 | 8.30e-28 | 6.07e-28 | 3.99e-28 | 36.92%
  #> m3 | lmerMod | 23.438 | 25.592 | 0.00e+00 | 0.00e+00 | 0.00e+00 | 0.00%

Visualisation of indices of models’ performance

Finally, we provide convenient visualisation (the see package must be
installed).

  plot(compare_performance(m1, m2, m4, rank = TRUE, verbose = FALSE))

Testing models

test_performance() (and test_bf(), its Bayesian counterpart) carries
out the most relevant and appropriate tests based on the input (for
instance, whether the models are nested or not).

  set.seed(123)
  data(iris)

  lm1 <- lm(Sepal.Length ~ Species, data = iris)
  lm2 <- lm(Sepal.Length ~ Species + Petal.Length, data = iris)
  lm3 <- lm(Sepal.Length ~ Species * Sepal.Width, data = iris)
  lm4 <- lm(Sepal.Length ~ Species * Sepal.Width + Petal.Length + Petal.Width, data = iris)

  test_performance(lm1, lm2, lm3, lm4)
  #> Name | Model | BF | Omega2 | p (Omega2) | LR | p (LR)
  #> ------------------------------------------------------------
  #> lm1 | lm | | | | |
  #> lm2 | lm | > 1000 | 0.69 | < .001 | -6.25 | < .001
  #> lm3 | lm | > 1000 | 0.36 | < .001 | -3.44 | < .001
  #> lm4 | lm | > 1000 | 0.73 | < .001 | -7.77 | < .001
  #> Each model is compared to lm1.

  test_bf(lm1, lm2, lm3, lm4)
  #> Bayes Factors for Model Comparison
  #>
  #> Model BF
  #> [lm2] Species + Petal.Length 3.45e+26
  #> [lm3] Species * Sepal.Width 4.69e+07
  #> [lm4] Species * Sepal.Width + Petal.Length + Petal.Width 7.58e+29
  #>
  #> * Against Denominator: [lm1] Species
  #> * Bayes Factor Type: BIC approximation

Plotting Functions

Plotting functions are available through the see package.
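
A minimal sketch, assuming see is installed; plot() methods for
performance's check objects are provided by see:

  library(see)

  # visual check of a single assumption; plot() dispatches to a see method
  model <- lm(mpg ~ wt + cyl, data = mtcars)
  plot(check_normality(model))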

Code of Conduct

Please note that the performance project is released with a Contributor
Code of Conduct. By contributing to this project, you agree to abide by
its terms.

Contributing

We are happy to receive bug reports, suggestions, questions, and (most
of all) contributions to fix problems and add features.

Please follow the contributing guidelines here:

https://easystats.github.io/performance/CONTRIBUTING.html

References

Gelman, Andrew, and Jennifer Hill. 2007. Data Analysis Using Regression
and Multilevel/Hierarchical Models. Analytical Methods for Social
Research. Cambridge; New York: Cambridge University Press.

Hox, J. J. 2010. Multilevel Analysis: Techniques and Applications. 2nd
ed. Quantitative Methodology Series. New York: Routledge.

Johnson, Paul C. D. 2014. "Extension of Nakagawa & Schielzeth's R2 GLMM
to Random Slopes Models." Edited by Robert B. O'Hara. Methods in
Ecology and Evolution 5 (9): 944–46.

Nakagawa, Shinichi, Paul C. D. Johnson, and Holger Schielzeth. 2017.
"The Coefficient of Determination R2 and Intra-Class Correlation
Coefficient from Generalized Linear Mixed-Effects Models Revisited and
Expanded." Journal of The Royal Society Interface 14 (134): 20170213.