Force certain parameters to have positive coefficients in lm()


I would like to know how to constrain certain parameters in lm() to have positive coefficients. There are a few packages or functions (e.g. display) that can force all coefficients, including the intercept, to be positive.

For instance, in this example, I would like to force only x1 and x2 to have positive coefficients.

    x1=c(NA,rnorm(99)*10)
    x2=c(NA,NA,rnorm(98)*10)
    x3=rnorm(100)*10
    y=sin(x1)+cos(x2)-x3+rnorm(100)

    lm(y~x1+x2+x3)

    Call:
      lm(formula = y ~ x1 + x2 + x3)       
    Coefficients:
      (Intercept)           x1           x2           x3  
    -0.06278      0.02261     -0.02233     -0.99626

I have tried the function nnnpls() in package nnls; it controls the coefficient signs easily. Unfortunately I can't use it because of the NAs in my data, as this function doesn't allow them.
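For what it's worth, the NA limitation of nnnpls() can be worked around by dropping incomplete rows before fitting. A minimal sketch, assuming nnnpls(A, b, con) where a negative entry in con constrains the corresponding coefficient to be non-positive and anything else constrains it to be non-negative (note the caveat: nnnpls() puts a sign constraint on every column, including the intercept, so it cannot leave the intercept or x3 truly unconstrained):

    library(nnls)

    set.seed(1)
    x1 <- c(NA, rnorm(99) * 10)
    x2 <- c(NA, NA, rnorm(98) * 10)
    x3 <- rnorm(100) * 10
    y  <- sin(x1) + cos(x2) - x3 + rnorm(100)

    # drop the rows containing NA before calling nnnpls()
    DF <- na.omit(data.frame(y, x1, x2, x3))

    A <- cbind(1, DF$x1, DF$x2, DF$x3)  # design matrix with intercept column
    # con: negative entry -> coefficient constrained <= 0,
    #      otherwise      -> coefficient constrained >= 0
    fit <- nnnpls(A, DF$y, con = c(-1, 1, 1, -1))
    fit$x  # fitted coefficients

Whether this is acceptable depends on whether forcing a sign on the intercept and x3 is tolerable for your problem.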

I saw that the function glmc() can be used to apply constraints, but I couldn't get it working.

Could someone let me know what I should do?

3 Answers

Answer 1

You can use package penalized:

set.seed(1)

x1=c(NA,rnorm(99)*10)
x2=c(NA,NA,rnorm(98)*10)
x3=rnorm(100)*10
y=sin(x1)+cos(x2)-x3+rnorm(100)
DF <- data.frame(x1,x2,x3,y)

lm(y~x1+x2+x3, data=DF)
#Call:
#lm(formula = y ~ x1 + x2 + x3, data = DF)
#
#Coefficients:
#(Intercept)           x1           x2           x3  
#   -0.02438     -0.01735     -0.02030     -0.98203  

This gives the same:

library(penalized)

mod1 <- penalized(y, ~ x1 + x2 + x3, ~1, 
                  lambda1=0, lambda2=0, positive = FALSE, data=na.omit(DF))
coef(mod1)
#(Intercept)          x1          x2          x3 
#-0.02438357 -0.01734856 -0.02030120 -0.98202831 

If you constrain the coefficients of x1 and x2 to be positive, they become zero (as expected):

mod2 <- penalized(y, ~ x1 + x2 + x3, ~1, 
                  lambda1=0, lambda2=0, positive = c(T, T, F), data=na.omit(DF))
coef(mod2)
#(Intercept)          x3 
#-0.03922266 -0.98011223 
Answer 2

With the ConsReg package (https://cran.r-project.org/web/packages/ConsReg/index.html) you can deal with this kind of problem.

You can set bound limits (lower and upper) on each coefficient and also restrictions between coefficients, such as beta1 > beta2, which can be very useful in some cases.
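A rough sketch of how that could look, assuming the ConsReg() interface with a string-valued constraints argument (the exact constraint syntax below is my reading of the package documentation and may need adjusting; na.omit() is needed because the example data contain NAs):

    library(ConsReg)

    # DF as built in the question: columns x1, x2, x3, y
    fit <- ConsReg(formula = y ~ x1 + x2 + x3, data = na.omit(DF),
                   family = "gaussian", optimizer = "solnp",
                   constraints = "(x1 > 0), (x2 > 0)")
    fit$coefficients

Check the package vignette for the precise constraint syntax before relying on this.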

Answer 3

You could use the package colf for this (here with the DF defined above). It currently offers two nonlinear least squares optimizers, nls and nlxb:

library(colf)

colf_nlxb(y ~ x1 + x2 + x3, data = DF, lower = c(-Inf, 0, 0, -Inf))
#nlmrt class object: x 
#residual sumsquares =  169.53  on  98 observations
#    after  3    Jacobian and  3 function evaluations
#                name      coeff SEs tstat pval gradient JSingval
#1 param_X.Intercept. -0.0066952  NA    NA   NA   3.8118 103.3941
#2           param_x1  0.0000000  NA    NA   NA 103.7644  88.7017
#3           param_x2  0.0000000  NA    NA   NA   0.0000   9.8032
#4           param_x3 -0.9487088  NA    NA   NA 330.7776   0.0000

colf_nls(y ~ x1 + x2 + x3, data = DF, lower = c(-Inf, 0, 0, -Inf))
#Nonlinear regression model
#  model: y ~ param_X.Intercept. * X.Intercept. + param_x1 * x1 + param_x2 *        
#  x2 + param_x3 * x3
#   data: model_ingredients$model_data
#param_X.Intercept.           param_x1           param_x2           param_x3 
#           -0.0392             0.0000             0.0000            -0.9801 
# residual sum-of-squares: 159
# 
#Algorithm "port", convergence message: both X-convergence and relative convergence (5)

You can set lower and/or upper bounds to specify the limits you want for each of the coefficients.