Title: Stacked Elastic Net
Description: Implements stacked elastic net regression (Rauschenberger 2021 <doi:10.1093/bioinformatics/btaa535>). The elastic net generalises ridge and lasso regularisation (Zou 2005 <doi:10.1111/j.1467-9868.2005.00503.x>). Instead of fixing or tuning the mixing parameter alpha, we combine multiple alphas by stacked generalisation (Wolpert 1992 <doi:10.1016/S0893-6080(05)80023-1>).
Authors: Armin Rauschenberger [aut, cre]
Maintainer: Armin Rauschenberger <[email protected]>
License: GPL-3
Version: 1.0.0
Built: 2024-10-27 06:07:13 UTC
Source: https://github.com/rauschenberger/starnet
The R package starnet implements stacked elastic net regression. The elastic net generalises ridge and lasso regularisation. Instead of fixing or tuning the mixing parameter alpha, we combine multiple alphas by stacked generalisation.

Use the function starnet for model fitting. Type library(starnet) and then ?starnet or help("starnet") to open its help file.

See the vignette for further examples. Type vignette("starnet") or browseVignettes("starnet") to open the vignette.
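A minimal quick-start sketch (the data below are simulated for illustration only; starnet and predict are documented further down):

library(starnet)

## simulate toy data: n samples, p covariates, Gaussian response
set.seed(1)
n <- 50; p <- 100
y <- rnorm(n)
X <- matrix(rnorm(n * p), nrow = n, ncol = p)

## fit the stacked elastic net and predict the response for the first sample
object <- starnet(y = y, X = X, family = "gaussian")
y_hat <- predict(object, newx = X[1, , drop = FALSE])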
Maintainer: Armin Rauschenberger <[email protected]>
Armin Rauschenberger, Enrico Glaab, and Mark A. van de Wiel (2021). "Predictive and interpretable models via the stacked elastic net". Bioinformatics 37(14):2012-2016. doi:10.1093/bioinformatics/btaa535.
Useful links:
Report bugs at https://github.com/rauschenberger/starnet/issues
Wrapper for cv.glmnet, with different handling of sparsity constraints.

Usage:
.cv.glmnet(..., nzero)

Arguments:
...: further arguments (see cv.glmnet)
nzero: maximum number of non-zero coefficients: positive integer

Value:
Object of class cv.glmnet.
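The wrapper itself is internal and not exported; as a rough sketch of restricting sparsity with cv.glmnet directly, one can pass glmnet's dfmax argument (an assumption about how such a constraint can be imposed, not a statement about the package's internal mechanism):

library(glmnet)

## simulated data for illustration
set.seed(1)
n <- 50; p <- 100
y <- rnorm(n)
X <- matrix(rnorm(n * p), nrow = n, ncol = p)

## dfmax caps the number of variables in the model along the lambda path
fit <- cv.glmnet(x = X, y = y, dfmax = 10)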
Calculates the loss from predicted and observed values.

Usage:
.loss(y, x, family, type.measure, foldid = NULL, grouped = TRUE)

Arguments:
y: observed values: numeric vector of length n
x: predicted values: numeric vector of length n
family: character (e.g. "gaussian", "binomial" or "poisson")
type.measure: character (e.g. "deviance", "class", "mse" or "mae")
foldid: fold identifiers: integer vector of length n
grouped: logical (see cv.glmnet)
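The function is internal; a minimal base-R sketch of two of the documented loss types, computed from observed and predicted values:

## observed and predicted values (illustration only)
set.seed(1)
y <- rnorm(50)
x <- y + rnorm(50, sd = 0.5)

mse <- mean((y - x)^2)   # type.measure = "mse"
mae <- mean(abs(y - x))  # type.measure = "mae"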
Functions for simulating data.

Usage:
.simulate.block(n, p, mode, family = "gaussian")

Arguments:
n: sample size: positive integer
p: dimensionality: positive integer
mode: character
family: character (default "gaussian")

Value:
List of vector y and matrix X.
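The simulation modes are not documented here; a minimal sketch of one way to simulate block-correlated covariates with a Gaussian response (the block size and effect sizes are illustrative assumptions, not the package's internal design):

set.seed(1)
n <- 50; p <- 100; block <- 10

## latent factors induce correlation within blocks of 10 covariates
Z <- matrix(rnorm(n * p / block), nrow = n)
X <- Z[, rep(seq_len(p / block), each = block)] + matrix(rnorm(n * p), nrow = n)

## response depends on the first block only
y <- 0.2 * rowSums(X[, 1:block]) + rnorm(n)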
Extracts pooled coefficients. (The meta learner weights the coefficients from the base learners.)

Usage:
## S3 method for class 'starnet'
coef(object, nzero = NULL, ...)

Arguments:
object: starnet object
nzero: maximum number of non-zero coefficients: positive integer, or NULL
...: further arguments (not applicable)

Value:
List of scalar alpha and vector beta, containing the pooled intercept and the pooled slopes, respectively.
Examples:
set.seed(1)
n <- 50; p <- 100
y <- rnorm(n = n)
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
object <- starnet(y = y, X = X)
coef <- coef(object)
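The two documented slots can then be inspected directly (a small sketch, continuing the example above):

coef$alpha            # pooled intercept
sum(coef$beta != 0)   # number of non-zero pooled slopes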
Compares stacked elastic net, tuned elastic net, ridge and lasso.
Usage:
cv.starnet(y, X, family = "gaussian", nalpha = 21, alpha = NULL,
  nfolds.ext = 10, nfolds.int = 10, foldid.ext = NULL, foldid.int = NULL,
  type.measure = "deviance", alpha.meta = 1, nzero = NULL,
  intercept = NULL, upper.limit = NULL, unit.sum = NULL, ...)
Arguments:
y: response: numeric vector of length n
X: covariates: numeric matrix with n rows (samples) and p columns (variables)
family: character "gaussian", "binomial" or "poisson"
nalpha: number of alpha values: positive integer
alpha: elastic net mixing parameters: vector of length nalpha with entries between 0 (ridge) and 1 (lasso), or NULL
nfolds.ext, nfolds.int, foldid.ext, foldid.int: number of external and internal folds (nfolds.ext, nfolds.int), and external and internal fold identifiers (foldid.ext, foldid.int): integer vectors of length n, or NULL
type.measure: loss function: character "deviance", "class", "mse" or "mae" (see cv.glmnet)
alpha.meta: meta-learner: elastic net mixing parameter, value between 0 (ridge) and 1 (lasso)
nzero: number of non-zero coefficients: scalar/vector including positive integer(s), or NULL
intercept, upper.limit, unit.sum: settings for the meta-learner: logical, or NULL
...: further arguments (not applicable)
Value:
List containing the cross-validated loss (or out-of-sample loss if nfolds.ext equals two and foldid.ext contains zeros and ones). The slot meta contains the loss from the stacked elastic net (stack), the tuned elastic net (tune), ridge, lasso, and the intercept-only model (none). The slot base contains the loss from the base learners. And the slot extra contains the loss from the restricted stacked elastic net (stack), lasso, and lasso-like elastic net (enet), with the maximum number of non-zero coefficients shown in the column name.
Examples:
loss <- cv.starnet(y = y, X = X)  # y and X as simulated in the examples above
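A sketch of a complete run (the data are simulated for illustration only; meta and base are the slot names documented above):

set.seed(1)
n <- 50; p <- 100
y <- rnorm(n)
X <- matrix(rnorm(n * p), nrow = n, ncol = p)

loss <- cv.starnet(y = y, X = X, family = "gaussian", type.measure = "mse")
loss$meta   # loss of stacked, tuned, ridge, lasso and intercept-only models
loss$base   # loss of the individual base learners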
Import of auc (internal function).

Usage:
glmnet.auc(y, prob, w)

Arguments:
y: observed classes
prob: predicted probabilities
w: (ignored here)

Value:
Area under the ROC curve.
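glmnet.auc is internal (imported from glmnet); a minimal sketch of the same quantity computed with the rank-based (Mann-Whitney) formula in base R:

set.seed(1)
y <- rbinom(20, size = 1, prob = 0.5)   # observed classes (0/1)
prob <- runif(20)                       # predicted probabilities

r <- rank(prob)
n1 <- sum(y == 1); n0 <- sum(y == 0)
auc <- (sum(r[y == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)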
Predicts the outcome from features with the stacked model.

Usage:
## S3 method for class 'starnet'
predict(object, newx, type = "response", nzero = NULL, ...)

Arguments:
object: starnet object
newx: covariates: numeric matrix with n rows (samples) and p columns (variables)
type: character "link" or "response"
nzero: maximum number of non-zero coefficients: positive integer, or NULL
...: further arguments (not applicable)
Value:
Matrix of predicted values, with samples in the rows and models in the columns. Included models are alpha (fixed elastic net), ridge (i.e. alpha equal to zero), lasso (i.e. alpha equal to one), tune (tuned elastic net), stack (stacked elastic net), and none (intercept-only model).
Examples:
set.seed(1)
n <- 50; p <- 100
y <- rnorm(n = n)
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
object <- starnet(y = y, X = X)
y_hat <- predict(object, newx = X[c(1), , drop = FALSE])
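The documented column layout can then be used to compare models for the same sample; a small sketch, assuming the columns are labelled with the model names listed above:

y_hat[, c("stack", "tune", "ridge", "lasso", "none")]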
Prints object of class starnet.
Usage:
## S3 method for class 'starnet'
print(x, ...)

Arguments:
x: starnet object
...: further arguments (not applicable)

Value:
Prints "stacked gaussian/binomial/poisson elastic net".
Examples:
set.seed(1)
n <- 50; p <- 100
y <- rnorm(n = n)
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
object <- starnet(y = y, X = X)
print(object)
Implements stacked elastic net regression.
Usage:
starnet(y, X, family = "gaussian", nalpha = 21, alpha = NULL,
  nfolds = 10, foldid = NULL, type.measure = "deviance", alpha.meta = 1,
  penalty.factor = NULL, intercept = NULL, upper.limit = NULL,
  unit.sum = NULL, ...)
Arguments:
y: response: numeric vector of length n
X: covariates: numeric matrix with n rows (samples) and p columns (variables)
family: character "gaussian", "binomial" or "poisson"
nalpha: number of alpha values: positive integer
alpha: elastic net mixing parameters: vector of length nalpha with entries between 0 (ridge) and 1 (lasso), or NULL
nfolds: number of folds
foldid: fold identifiers: vector of length n, or NULL
type.measure: loss function: character "deviance", "class", "mse" or "mae" (see cv.glmnet)
alpha.meta: meta-learner: elastic net mixing parameter, value between 0 (ridge) and 1 (lasso)
penalty.factor: differential shrinkage: vector of length p, or NULL
intercept, upper.limit, unit.sum: settings for the meta-learner: logical, or NULL
...: further arguments (passed to the underlying glmnet fitting functions)
Details:
Post hoc feature selection: consider the argument nzero in the functions coef and predict.
Value:
Object of class starnet. The slots base and meta contain cv.glmnet-like objects, for the base and meta learners, respectively.
References:
Armin Rauschenberger, Enrico Glaab, and Mark A. van de Wiel (2021). "Predictive and interpretable models via the stacked elastic net". Bioinformatics 37(14):2012-2016. doi:10.1093/bioinformatics/btaa535.
Examples:
set.seed(1)
n <- 50; p <- 100
y <- rnorm(n = n)
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
object <- starnet(y = y, X = X, family = "gaussian")
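The same call works for the other documented families; a brief sketch with a binomial response (simulated for illustration only):

set.seed(1)
n <- 50; p <- 100
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
y <- rbinom(n, size = 1, prob = 0.5)

## stacked elastic net for a binary outcome
object <- starnet(y = y, X = X, family = "binomial")
prob <- predict(object, newx = X, type = "response")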
Extracts coefficients from the meta learner, i.e. the weights for the base learners.
Usage:
## S3 method for class 'starnet'
weights(object, ...)

Arguments:
object: starnet object
...: further arguments (not applicable)
Value:
Vector containing the intercept and the slopes from the meta learner.
Examples:
set.seed(1)
n <- 50; p <- 100
y <- rnorm(n = n)
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
object <- starnet(y = y, X = X)
weights(object)
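Because the default meta-learner uses alpha.meta = 1 (lasso), several base-learner weights can be exactly zero; a small sketch, assuming the first element of the returned vector is the intercept:

w <- weights(object)
sum(w[-1] != 0)   # number of base learners receiving non-zero weight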