API

Types defined in the package

GLM.LinearModel (Type)
LinearModel

A combination of a LmResp, a LinPred, and possibly a FormulaTerm

Members

  • rr: a LmResp object
  • pp: a LinPred object
  • f: either a FormulaTerm object or nothing
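
The example below is an illustrative sketch (not part of the original docstring) showing that the members listed above can be inspected on a fitted model; the data are arbitrary.

julia> using GLM

julia> m = lm(hcat(ones(5), 1:5), [1.0, 1.8, 3.1, 3.9, 5.2]);

julia> m.rr isa GLM.LmResp && m.pp isa GLM.LinPred
true
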
source
GLM.DensePredChol (Type)
DensePredChol{T}

A LinPred type with a dense Cholesky factorization of X'X

Members

  • X: model matrix of size n × p with n ≥ p. Should be full column rank.
  • beta0: base coefficient vector of length p
  • delbeta: increment to coefficient vector, also of length p
  • scratchbeta: scratch vector of length p, used in linpred! method
  • chol: a Cholesky object created from X'X, possibly using row weights.
  • scratchm1: scratch Matrix{T} of the same size as X
  • scratchm2: scratch Matrix{T} of the same size as X'X
source
GLM.DensePredQR (Type)
DensePredQR

A LinPred type with a dense QR decomposition of X

Members

  • X: Model matrix of size n × p with n ≥ p. Should be full column rank.
  • beta0: base coefficient vector of length p
  • delbeta: increment to coefficient vector, also of length p
  • scratchbeta: scratch vector of length p, used in linpred! method
  • qr: either a QRCompactWY or QRPivoted object created from X, with optional row weights.
  • scratchm1: scratch Matrix{T} of the same size as X
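
As an illustrative sketch (not part of the original docstring), the predictor type stored in the pp field follows the method keyword of lm; the data are arbitrary, and the field access assumes the internal layout described for LinearModel.

julia> using GLM

julia> X = hcat(ones(6), 1:6); y = [1.0, 2.1, 2.9, 4.2, 5.1, 5.9];

julia> lm(X, y; method = :qr).pp isa GLM.DensePredQR
true

julia> lm(X, y; method = :cholesky).pp isa GLM.DensePredChol
true
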
source
GLM.LmResp (Type)
LmResp

Encapsulates the response for a linear model

Members

  • mu: current value of the mean response vector or fitted value
  • offset: optional offset added to the linear predictor to form mu
  • wts: optional vector of prior frequency (a.k.a. case) weights for observations
  • y: observed response vector

Either or both offset and wts may be of length 0

source
GLM.GlmResp (Type)
GlmResp

The response vector and various derived vectors in a generalized linear model.

source

Constructors for models

The most general approach to fitting a model is with the fit function, as in

julia> using Random

julia> fit(LinearModel, hcat(ones(10), 1:10), randn(MersenneTwister(12321), 10))
LinearModel

Coefficients:
────────────────────────────────────────────────────────────────
        Coef.  Std. Error      t  Pr(>|t|)  Lower 95%  Upper 95%
────────────────────────────────────────────────────────────────
x1   0.717436    0.775175   0.93    0.3818  -1.07012    2.50499
x2  -0.152062    0.124931  -1.22    0.2582  -0.440153   0.136029
────────────────────────────────────────────────────────────────

This model can also be fit as

julia> using Random

julia> lm(hcat(ones(10), 1:10), randn(MersenneTwister(12321), 10))
LinearModel

Coefficients:
────────────────────────────────────────────────────────────────
        Coef.  Std. Error      t  Pr(>|t|)  Lower 95%  Upper 95%
────────────────────────────────────────────────────────────────
x1   0.717436    0.775175   0.93    0.3818  -1.07012    2.50499
x2  -0.152062    0.124931  -1.22    0.2582  -0.440153   0.136029
────────────────────────────────────────────────────────────────
GLM.lm (Function)
lm(formula, data;
   [wts::AbstractVector], dropcollinear::Bool=true, method::Symbol=:cholesky,
   contrasts::AbstractDict{Symbol}=Dict{Symbol,Any}())
lm(X::AbstractMatrix, y::AbstractVector;
   wts::AbstractVector=similar(y, 0), dropcollinear::Bool=true, method::Symbol=:cholesky)

Fit a linear model to data. An alias for fit(LinearModel, X, y; wts=wts, dropcollinear=dropcollinear, method=method)

In the first method, formula must be a StatsModels.jl Formula object and data a table (in the Tables.jl definition, e.g. a data frame). In the second method, X must be a matrix holding values of the independent variable(s) in columns (including if appropriate the intercept), and y must be a vector holding values of the dependent variable.

Keyword Arguments

  • dropcollinear::Bool=true: Controls whether or not a model matrix that is less than full rank is accepted. If true (the default), the coefficient for redundant linearly dependent columns is 0.0 and all associated statistics are set to NaN. Typically the last columns in a set of linearly dependent columns are identified as redundant (however, the exact selection of columns identified as redundant is not guaranteed).
  • method::Symbol=:cholesky: Controls which decomposition method to use. If method=:cholesky (the default), then the Cholesky decomposition method will be used. If method=:qr, then the QR decomposition method (which is more stable but slower) will be used.
  • wts::Vector=similar(y,0): Prior frequency (a.k.a. case) weights of observations. Such weights are equivalent to repeating each observation a number of times equal to its weight. Do note that this interpretation gives equal point estimates but different standard errors from analytical (a.k.a. inverse variance) weights and from probability (a.k.a. sampling) weights which are the default in some other software. Can be length 0 to indicate no weighting (default).
  • contrasts::AbstractDict{Symbol}=Dict{Symbol,Any}(): a Dict mapping term names (as Symbols) to term types (e.g. ContinuousTerm) or contrasts (e.g., HelmertCoding(), SeqDiffCoding(; levels=["a", "b", "c"]), etc.). If contrasts are not provided for a variable, the appropriate term type will be guessed based on the data type from the data column: any numeric data is assumed to be continuous, and any non-numeric data is assumed to be categorical (with DummyCoding() as the default contrast type).
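
An illustrative example (made-up data, output suppressed) of the formula method together with the method and wts keyword arguments:

julia> using DataFrames, GLM

julia> dat = DataFrame(y = [1.0, 2.2, 2.9, 4.1, 5.2, 5.8], x = 1:6);

julia> m = lm(@formula(y ~ 1 + x), dat; method = :qr);

julia> mw = lm(@formula(y ~ 1 + x), dat; wts = [1.0, 1.0, 2.0, 2.0, 1.0, 1.0]);  # frequency weights

julia> coef(m);  # two-element vector: intercept and slope
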
source
GLM.glm (Function)
glm(formula, data,
    distr::UnivariateDistribution, link::Link = canonicallink(distr); <keyword arguments>)
glm(X::AbstractMatrix, y::AbstractVector,
    distr::UnivariateDistribution, link::Link = canonicallink(distr); <keyword arguments>)

Fit a generalized linear model to data. Alias for fit(GeneralizedLinearModel, ...).

In the first method, formula must be a StatsModels.jl Formula object and data a table (in the Tables.jl definition, e.g. a data frame). In the second method, X must be a matrix holding values of the independent variable(s) in columns (including if appropriate the intercept), and y must be a vector holding values of the dependent variable. In both cases, distr must specify the distribution, and link may specify the link function (if omitted, it is taken to be the canonical link for distr; see Link for a list of built-in links).

Keyword Arguments

  • dofit::Bool=true: Determines whether the model will be fit. Only supported with glm.

  • dropcollinear::Bool=true: Controls whether or not a model matrix that is less than full rank is accepted. If true (the default), the coefficient for redundant linearly dependent columns is 0.0 and all associated statistics are set to NaN. Typically the last columns in a set of linearly dependent columns are identified as redundant (however, the exact selection of columns identified as redundant is not guaranteed).

  • method::Symbol=:cholesky: Controls which decomposition method to use. If method=:cholesky (the default), then the Cholesky decomposition method will be used. If method=:qr, then the QR decomposition method (which is more stable but slower) will be used.

  • wts::Vector=similar(y,0): Prior frequency (a.k.a. case) weights of observations. Such weights are equivalent to repeating each observation a number of times equal to its weight. Do note that this interpretation gives equal point estimates but different standard errors from analytical (a.k.a. inverse variance) weights and from probability (a.k.a. sampling) weights which are the default in some other software. Can be length 0 to indicate no weighting (default).

  • contrasts::AbstractDict{Symbol}=Dict{Symbol,Any}(): a Dict mapping term names (as Symbols) to term types (e.g. ContinuousTerm) or contrasts (e.g., HelmertCoding(), SeqDiffCoding(; levels=["a", "b", "c"]), etc.). If contrasts are not provided for a variable, the appropriate term type will be guessed based on the data type from the data column: any numeric data is assumed to be continuous, and any non-numeric data is assumed to be categorical (with DummyCoding() as the default contrast type).

  • offset::Vector=similar(y,0): offset added to Xβ to form eta. Can be of length 0.

  • verbose::Bool=false: Display convergence information for each iteration

  • maxiter::Integer=30: Maximum number of iterations allowed to achieve convergence

  • atol::Real=1e-6: Convergence is achieved when the relative change in deviance is less than max(rtol*dev, atol).

  • rtol::Real=1e-6: Convergence is achieved when the relative change in deviance is less than max(rtol*dev, atol).

  • minstepfac::Real=0.001: Minimum line step fraction. Must be between 0 and 1.

  • start::AbstractVector=nothing: Starting values for beta. Should have the same length as the number of columns in the model matrix.

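An illustrative sketch (made-up data, output suppressed) of the formula method with a Bernoulli response, the default canonical link, an explicit ProbitLink, and a few of the keyword arguments:

julia> using DataFrames, Distributions, GLM

julia> dat = DataFrame(admit = [0, 1, 0, 0, 1, 0, 1, 1], score = 1:8);

julia> m = glm(@formula(admit ~ 1 + score), dat, Bernoulli());  # canonical LogitLink

julia> m2 = glm(@formula(admit ~ 1 + score), dat, Bernoulli(), ProbitLink();
                maxiter = 50, atol = 1e-8);
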
source
GLM.negbin (Function)
negbin(formula, data, [link::Link];
       <keyword arguments>)
negbin(X::AbstractMatrix, y::AbstractVector, [link::Link];
       <keyword arguments>)

Fit a negative binomial generalized linear model to data, while simultaneously estimating the shape parameter θ. Extra arguments and keyword arguments will be passed to glm.

In the first method, formula must be a StatsModels.jl Formula object and data a table (in the Tables.jl definition, e.g. a data frame). In the second method, X must be a matrix holding values of the independent variable(s) in columns (including if appropriate the intercept), and y must be a vector holding values of the dependent variable. In both cases, link may specify the link function (if omitted, it is taken to be the canonical link for NegativeBinomial(θ)).

Keyword Arguments

  • initialθ::Real=Inf: Starting value for shape parameter θ. If it is Inf then the initial value will be estimated by fitting a Poisson distribution.
  • dropcollinear::Bool=true: See dropcollinear for glm
  • method::Symbol=:cholesky: See method for glm
  • maxiter::Integer=30: See maxiter for glm
  • atol::Real=1.0e-6: See atol for glm
  • rtol::Real=1.0e-6: See rtol for glm
  • verbose::Bool=false: See verbose for glm
source
StatsAPI.fit (Function)
fit(LinearModel, formula::FormulaTerm, data;
    [wts::AbstractVector], dropcollinear::Bool=true, method::Symbol=:cholesky,
    contrasts::AbstractDict{Symbol}=Dict{Symbol,Any}())
fit(LinearModel, X::AbstractMatrix, y::AbstractVector;
    wts::AbstractVector=similar(y, 0), dropcollinear::Bool=true, method::Symbol=:cholesky)

Fit a linear model to data.

In the first method, formula must be a StatsModels.jl Formula object and data a table (in the Tables.jl definition, e.g. a data frame). In the second method, X must be a matrix holding values of the independent variable(s) in columns (including if appropriate the intercept), and y must be a vector holding values of the dependent variable.

Keyword Arguments

  • dropcollinear::Bool=true: Controls whether or not a model matrix that is less than full rank is accepted. If true (the default), the coefficient for redundant linearly dependent columns is 0.0 and all associated statistics are set to NaN. Typically the last columns in a set of linearly dependent columns are identified as redundant (however, the exact selection of columns identified as redundant is not guaranteed).
  • method::Symbol=:cholesky: Controls which decomposition method to use. If method=:cholesky (the default), then the Cholesky decomposition method will be used. If method=:qr, then the QR decomposition method (which is more stable but slower) will be used.
  • wts::Vector=similar(y,0): Prior frequency (a.k.a. case) weights of observations. Such weights are equivalent to repeating each observation a number of times equal to its weight. Do note that this interpretation gives equal point estimates but different standard errors from analytical (a.k.a. inverse variance) weights and from probability (a.k.a. sampling) weights which are the default in some other software. Can be length 0 to indicate no weighting (default).
  • contrasts::AbstractDict{Symbol}=Dict{Symbol,Any}(): a Dict mapping term names (as Symbols) to term types (e.g. ContinuousTerm) or contrasts (e.g., HelmertCoding(), SeqDiffCoding(; levels=["a", "b", "c"]), etc.). If contrasts are not provided for a variable, the appropriate term type will be guessed based on the data type from the data column: any numeric data is assumed to be continuous, and any non-numeric data is assumed to be categorical (with DummyCoding() as the default contrast type).
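
An illustrative example (made-up data, output suppressed) of the formula method with the contrasts keyword; it assumes DummyCoding is available through GLM's re-export of StatsModels (otherwise add using StatsModels):

julia> using DataFrames, GLM

julia> dat = DataFrame(y = [1.0, 1.2, 2.9, 3.1, 5.0, 5.2],
                       g = ["a", "a", "b", "b", "c", "c"]);

julia> m = fit(LinearModel, @formula(y ~ 1 + g), dat;
               contrasts = Dict(:g => DummyCoding()));

julia> coef(m);  # intercept plus coefficients for levels "b" and "c"
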
source
fit(GeneralizedLinearModel, formula, data,
    distr::UnivariateDistribution, link::Link = canonicallink(d); <keyword arguments>)
fit(GeneralizedLinearModel, X::AbstractMatrix, y::AbstractVector,
    distr::UnivariateDistribution, link::Link = canonicallink(d); <keyword arguments>)

Fit a generalized linear model to data.

In the first method, formula must be a StatsModels.jl Formula object and data a table (in the Tables.jl definition, e.g. a data frame). In the second method, X must be a matrix holding values of the independent variable(s) in columns (including if appropriate the intercept), and y must be a vector holding values of the dependent variable. In both cases, distr must specify the distribution, and link may specify the link function (if omitted, it is taken to be the canonical link for distr; see Link for a list of built-in links).

Keyword Arguments

  • dofit::Bool=true: Determines whether the model will be fit. Only supported with glm.

  • dropcollinear::Bool=true: Controls whether or not a model matrix that is less than full rank is accepted. If true (the default), the coefficient for redundant linearly dependent columns is 0.0 and all associated statistics are set to NaN. Typically the last columns in a set of linearly dependent columns are identified as redundant (however, the exact selection of columns identified as redundant is not guaranteed).

  • method::Symbol=:cholesky: Controls which decomposition method to use. If method=:cholesky (the default), then the Cholesky decomposition method will be used. If method=:qr, then the QR decomposition method (which is more stable but slower) will be used.

  • wts::Vector=similar(y,0): Prior frequency (a.k.a. case) weights of observations. Such weights are equivalent to repeating each observation a number of times equal to its weight. Do note that this interpretation gives equal point estimates but different standard errors from analytical (a.k.a. inverse variance) weights and from probability (a.k.a. sampling) weights which are the default in some other software. Can be length 0 to indicate no weighting (default).

  • contrasts::AbstractDict{Symbol}=Dict{Symbol,Any}(): a Dict mapping term names (as Symbols) to term types (e.g. ContinuousTerm) or contrasts (e.g., HelmertCoding(), SeqDiffCoding(; levels=["a", "b", "c"]), etc.). If contrasts are not provided for a variable, the appropriate term type will be guessed based on the data type from the data column: any numeric data is assumed to be continuous, and any non-numeric data is assumed to be categorical (with DummyCoding() as the default contrast type).

  • offset::Vector=similar(y,0): offset added to Xβ to form eta. Can be of length 0.

  • verbose::Bool=false: Display convergence information for each iteration

  • maxiter::Integer=30: Maximum number of iterations allowed to achieve convergence

  • atol::Real=1e-6: Convergence is achieved when the relative change in deviance is less than max(rtol*dev, atol).

  • rtol::Real=1e-6: Convergence is achieved when the relative change in deviance is less than max(rtol*dev, atol).

  • minstepfac::Real=0.001: Minimum line step fraction. Must be between 0 and 1.

  • start::AbstractVector=nothing: Starting values for beta. Should have the same length as the number of columns in the model matrix.

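An illustrative sketch (made-up data, output suppressed) of the matrix method with a Poisson response and the offset keyword, here used to model event counts relative to a known exposure:

julia> using Distributions, GLM

julia> X = hcat(ones(5), [0.0, 1.0, 2.0, 3.0, 4.0]);

julia> events = [2.0, 3.0, 6.0, 7.0, 12.0];

julia> exposure = [10.0, 12.0, 15.0, 14.0, 20.0];

julia> m = fit(GeneralizedLinearModel, X, events, Poisson(); offset = log.(exposure));

julia> coef(m);  # intercept and slope on the log-rate scale
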
source

Model methods

StatsAPI.deviance (Function)
deviance(obj::LinearModel)

For linear models, the deviance is equal to the residual sum of squares (RSS).

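A small illustrative check (arbitrary data) that the deviance of a linear model equals its residual sum of squares:

julia> using GLM

julia> X = hcat(ones(5), 1:5); y = [1.0, 1.5, 3.0, 4.5, 5.0];

julia> m = lm(X, y);

julia> deviance(m) ≈ sum(abs2, residuals(m))
true
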
source
GLM.dispersion (Function)
dispersion(m::AbstractGLM, sqr::Bool=false)

Return the estimated dispersion (or scale) parameter for a model's distribution, generally written σ for linear models and ϕ for generalized linear models. It is, by definition, equal to 1 for the Bernoulli, Binomial, and Poisson families.

If sqr is true, the squared dispersion parameter is returned.

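Illustrative examples (arbitrary data): the dispersion is fixed at 1 for the Poisson family, and for a Gaussian model with the identity link the squared dispersion should equal the deviance divided by the residual degrees of freedom:

julia> using Distributions, GLM

julia> X = hcat(ones(4), [1.0, 2.0, 3.0, 4.0]);

julia> dispersion(glm(X, [2.0, 3.0, 5.0, 9.0], Poisson()))
1.0

julia> m = glm(X, [1.1, 1.9, 3.2, 3.8], Normal());

julia> dispersion(m, true) ≈ deviance(m) / dof_residual(m)
true
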
source
GLM.ftest (Function)
ftest(mod::LinearModel)

Perform an F-test to determine whether model mod fits significantly better than the null model (i.e. the model that includes only the intercept).

julia> dat = DataFrame(Result=[1.1, 1.2, 1, 2.2, 1.9, 2, 0.9, 1, 1, 2.2, 2, 2],
                       Treatment=[1, 1, 1, 2, 2, 2, 1, 1, 1, 2, 2, 2]);


julia> model = lm(@formula(Result ~ 1 + Treatment), dat);


julia> ftest(model.model)
F-test against the null model:
F-statistic: 241.62 on 12 observations and 1 degrees of freedom, p-value: <1e-07
source
ftest(mod::LinearModel...; atol::Real=0.0)

For each sequential pair of linear models in mod..., perform an F-test to determine whether one model fits significantly better than the other. Models must have been fitted on the same data, and must be nested in either the forward or backward direction.

A table is returned containing consumed degrees of freedom (DOF), difference in DOF from the preceding model, sum of squared residuals (SSR), difference in SSR from the preceding model, R², difference in R² from the preceding model, and F-statistic and p-value for the comparison between the two models.

Note

This function can be used to perform an ANOVA by testing the relative fit of two models to the data

Optional keyword argument atol controls the numerical tolerance when testing whether the models are nested.

Examples

Suppose we want to compare the effects of two or more treatments on some result. Because this is an ANOVA, our null hypothesis is that Result ~ 1 fits the data as well as Result ~ 1 + Treatment.

julia> dat = DataFrame(Result=[1.1, 1.2, 1, 2.2, 1.9, 2, 0.9, 1, 1, 2.2, 2, 2],
                       Treatment=[1, 1, 1, 2, 2, 2, 1, 1, 1, 2, 2, 2],
                       Other=categorical([1, 1, 2, 1, 2, 1, 3, 1, 1, 2, 2, 1]));


julia> nullmodel = lm(@formula(Result ~ 1), dat);


julia> model = lm(@formula(Result ~ 1 + Treatment), dat);


julia> bigmodel = lm(@formula(Result ~ 1 + Treatment + Other), dat);

julia> ftest(nullmodel, model)
F-test: 2 models fitted on 12 observations
─────────────────────────────────────────────────────────────────
     DOF  ΔDOF     SSR     ΔSSR      R²     ΔR²        F*   p(>F)
─────────────────────────────────────────────────────────────────
[1]    2        3.2292           0.0000
[2]    3     1  0.1283  -3.1008  0.9603  0.9603  241.6234  <1e-07
─────────────────────────────────────────────────────────────────

julia> ftest(nullmodel, model, bigmodel)
F-test: 3 models fitted on 12 observations
─────────────────────────────────────────────────────────────────
     DOF  ΔDOF     SSR     ΔSSR      R²     ΔR²        F*   p(>F)
─────────────────────────────────────────────────────────────────
[1]    2        3.2292           0.0000
[2]    3     1  0.1283  -3.1008  0.9603  0.9603  241.6234  <1e-07
[3]    5     2  0.1017  -0.0266  0.9685  0.0082    1.0456  0.3950
─────────────────────────────────────────────────────────────────
source
StatsAPI.nobs (Function)
nobs(obj::LinearModel)
nobs(obj::GLM)

For linear and generalized linear models, returns the number of rows, or, when prior weights are specified, the sum of weights.

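An illustrative example (arbitrary data) showing that prior weights change nobs to the sum of the weights:

julia> using GLM

julia> X = hcat(ones(4), [1.0, 2.0, 3.0, 4.0]); y = [1.0, 2.5, 3.0, 4.5];

julia> nobs(lm(X, y)) == 4
true

julia> w = [1.0, 2.0, 2.0, 3.0];

julia> nobs(lm(X, y; wts = w)) == sum(w)
true
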
source
StatsAPI.nulldeviance (Function)
nulldeviance(obj::LinearModel)

For linear models, the deviance of the null model is equal to the total sum of squares (TSS).

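A small illustrative check (arbitrary data) that the null deviance of a linear model with an intercept equals the total sum of squares:

julia> using GLM

julia> y = [1.0, 2.0, 4.0, 5.0];

julia> m = lm(hcat(ones(4), 1:4), y);

julia> nulldeviance(m) ≈ sum(abs2, y .- sum(y)/length(y))
true
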
source
StatsAPI.predict (Function)
predict(mm::LinearModel, newx::AbstractMatrix;
        interval::Union{Symbol,Nothing} = nothing, level::Real = 0.95)

If interval is nothing (the default), return a vector with the predicted values for model mm and new data newx. Otherwise, return a vector with the predicted values, as well as vectors with the lower and upper confidence bounds for a given level (a level of 0.95 corresponds to alpha = 0.05). Valid values of interval are :confidence, delimiting the uncertainty of the predicted relationship, and :prediction, delimiting estimated bounds for new data points.

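An illustrative sketch (arbitrary data, output suppressed) of point predictions and confidence intervals for new data:

julia> using GLM

julia> m = lm(hcat(ones(6), 1:6), [1.1, 1.9, 3.2, 3.9, 5.1, 5.8]; dropcollinear = false);

julia> newx = hcat(ones(2), [7.0, 8.0]);

julia> pred = predict(m, newx);  # two point predictions

julia> ci = predict(m, newx; interval = :confidence, level = 0.95);  # adds lower and upper bounds
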
source
predict(mm::AbstractGLM, newX;
        offset::FPVector=[],
        interval::Union{Symbol,Nothing}=nothing, level::Real=0.95,
        interval_method::Symbol=:transformation)

Return the predicted response of model mm from covariate values newX and, optionally, an offset.

newX must be either a table (in the Tables.jl definition) containing all columns used in the model formula, or a matrix with one column for each predictor in the model. In both cases, each row represents an observation for which a prediction will be returned.

If interval=:confidence, also return upper and lower bounds for a given coverage level. By default (interval_method = :transformation) the intervals are constructed by applying the inverse link to intervals for the linear predictor. If interval_method = :delta, the intervals are constructed by the delta method, i.e., by linearization of the predicted response around the linear predictor. The :delta method intervals are symmetric around the point estimates, but do not respect natural parameter constraints (e.g., the lower bound for a probability could be negative).

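An illustrative sketch (arbitrary data) of predictions from a logistic regression; the values returned for new covariate rows lie on the response (probability) scale:

julia> using Distributions, GLM

julia> X = hcat(ones(8), [-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0]);

julia> y = [0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0];

julia> m = glm(X, y, Bernoulli());

julia> newX = hcat(ones(3), [-3.0, 0.0, 3.0]);

julia> all(0 .<= predict(m, newX) .<= 1)
true
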
source

GLM.CloglogLink (Type)
CloglogLink

A Link01 corresponding to the extreme value (or log-Weibull) distribution. The link is the complementary log-log transformation, log(-log(1 - μ)).

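A small illustrative check that the link function matches the complementary log-log transformation given above:

julia> using GLM

julia> GLM.linkfun(CloglogLink(), 0.5) ≈ log(-log(1 - 0.5))
true
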
source
GLM.linkfun (Function)
GLM.linkfun(L::Link, μ::Real)

Return η, the value of the linear predictor for link L at mean μ.

Examples

julia> μ = inv(10):inv(5):1
0.1:0.2:0.9

julia> show(linkfun.(LogitLink(), μ))
[-2.197224577336219, -0.8472978603872036, 0.0, 0.8472978603872034, 2.1972245773362196]
source
GLM.linkinv (Function)
GLM.linkinv(L::Link, η::Real)

Return μ, the mean value, for link L at linear predictor value η.

Examples

julia> μ = 0.1:0.2:1
0.1:0.2:0.9

julia> η = logit.(μ);


julia> linkinv.(LogitLink(), η) ≈ μ
true
source
GLM.mueta (Function)
GLM.mueta(L::Link, η::Real)

Return the derivative of linkinv, dμ/dη, for link L at linear predictor value η.

Examples

julia> mueta(LogitLink(), 0.0)
0.25

julia> mueta(CloglogLink(), 0.0) ≈ 0.36787944117144233
true

julia> mueta(LogLink(), 2.0) ≈ 7.38905609893065
true
source
GLM.inverselink (Function)
GLM.inverselink(L::Link, η::Real)

Return a 3-tuple of the inverse link, the derivative of the inverse link, and when appropriate, the variance function μ*(1 - μ).

The variance function is returned as NaN unless the range of μ is (0, 1)

Examples

julia> GLM.inverselink(LogitLink(), 0.0)
(0.5, 0.5, 0.25)

julia> μ, oneminusμ, variance = GLM.inverselink(CloglogLink(), 0.0);



julia> μ + oneminusμ ≈ 1
true

julia> μ*(1 - μ) ≈ variance
false

julia> isnan(last(GLM.inverselink(LogLink(), 2.0)))
true
source
GLM.canonicallink (Function)
canonicallink(D::Distribution)

Return the canonical link for distribution D, which must be in the exponential family.

Examples

julia> canonicallink(Bernoulli())
LogitLink()
source
GLM.glmvar (Function)
GLM.glmvar(D::Distribution, μ::Real)

Return the value of the variance function for D at μ

The variance of D at μ is the product of the dispersion parameter ϕ, which does not depend on μ, and the value of glmvar. In other words, glmvar returns the factor of the variance that depends on μ.

Examples

julia> μ = 1/6:1/3:1;

julia> glmvar.(Normal(), μ)    # constant for Normal()
3-element Vector{Float64}:
 1.0
 1.0
 1.0

julia> glmvar.(Bernoulli(), μ) ≈ μ .* (1 .- μ)
true

julia> glmvar.(Poisson(), μ) == μ
true

julia> glmvar.(Geometric(), μ) ≈ μ .* (1 .+ μ)
true
source
GLM.mustart (Function)
GLM.mustart(D::Distribution, y, wt)

Return a starting value for μ.

For some distributions it is appropriate to set μ = y to initialize the IRLS algorithm but for others, notably the Bernoulli, the values of y are not allowed as values of μ and must be modified.

Examples

julia> GLM.mustart(Bernoulli(), 0.0, 1) ≈ 1/4
true

julia> GLM.mustart(Bernoulli(), 1.0, 1) ≈ 3/4
true

julia> GLM.mustart(Binomial(), 0.0, 10) ≈ 1/22
true

julia> GLM.mustart(Normal(), 0.0, 1) ≈ 0
true

julia> GLM.mustart(Geometric(), 4, 1) ≈ 4
true
source
GLM.devresid (Function)
devresid(D, y, μ::Real)

Return the squared deviance residual of μ from y for distribution D

The deviance of a GLM can be evaluated as the sum of the squared deviance residuals; this is the principal use for these values. The actual deviance residual, say for plotting, is the signed square root of this value:

sign(y - μ) * sqrt(devresid(D, y, μ))

Examples

julia> devresid(Normal(), 0, 0.25) ≈ abs2(0.25)
true

julia> devresid(Bernoulli(), 1, 0.75) ≈ -2*log(0.75)
true

julia> devresid(Bernoulli(), 0, 0.25) ≈ -2*log1p(-0.25)
true
source
GLM.dispersion_parameter (Function)
GLM.dispersion_parameter(D)

Does distribution D have a separate dispersion parameter, ϕ?

Returns false for the Bernoulli, Binomial and Poisson distributions, true otherwise.

Examples

julia> show(GLM.dispersion_parameter(Normal()))
true

julia> show(GLM.dispersion_parameter(Bernoulli()))
false
source
GLM.loglik_obs (Function)
GLM.loglik_obs(D, y, μ, wt, ϕ)

Returns wt * logpdf(D(μ, ϕ), y) where the parameters of D are derived from μ and ϕ.

The wt argument is a multiplier of the result except in the case of the Binomial where wt is the number of trials and μ is the proportion of successes.

The loglikelihood of a fitted model is the sum of these values over all the observations.

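An illustrative check for the Poisson family, assuming (since the Poisson distribution has no dispersion parameter) that the ϕ argument plays no role in this case:

julia> using Distributions, GLM

julia> GLM.loglik_obs(Poisson(), 3, 2.5, 1, 1) ≈ logpdf(Poisson(2.5), 3)
true
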
source
GLM.cancancel (Function)
cancancel(r::GlmResp{V,D,L})

Returns true if dμ/dη for link L is the variance function for distribution D

When L is the canonical link for D, the derivative of the inverse link is a multiple of the variance function for D. If they are the same, a numerator and denominator term in the expression for the working weights will cancel.

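An illustrative sketch, assuming the GlmResp of a fitted GeneralizedLinearModel is stored in its rr field (as for LinearModel): with the canonical LogLink for the Poisson family, the cancellation applies.

julia> using Distributions, GLM

julia> m = glm(hcat(ones(4), [1.0, 2.0, 3.0, 4.0]), [2.0, 3.0, 5.0, 9.0], Poisson());  # canonical LogLink

julia> GLM.cancancel(m.rr)
true
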
source