
Ordinary Least Squares

In [1]: import numpy as np

In [1]: import statsmodels.api as sm

In [1]: import matplotlib.pyplot as plt

In [1]: from statsmodels.sandbox.regression.predstd import wls_prediction_std

In [1]: np.random.seed(9876789)

OLS Estimation

Artificial data

In [1]: nsample = 100

In [1]: x = np.linspace(0, 10, nsample)

In [1]: X = np.column_stack((x, x**2))

In [1]: beta = np.array([1, 0.1, 10])

In [1]: e = np.random.normal(size=nsample)

Our model needs an intercept, so we add a column of 1s:

In [1]: X = sm.add_constant(X, prepend=False)

In [1]: y = np.dot(X, beta) + e

Inspect data

In [1]: print(X[:5, :])

In [1]: print(y[:5])

Fit and summary

In [1]: model = sm.OLS(y, X)

In [1]: results = model.fit()

In [1]: print(results.summary())

Quantities of interest can be extracted directly from the fitted model. Type dir(results) for a full list. Here are some examples:

In [1]: print(results.params)

In [1]: print(results.rsquared)
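The standard errors, confidence intervals, and residuals are available in the same way; a few more illustrative examples:

In [1]: print(results.bse)

In [1]: print(results.conf_int())

In [1]: print(results.resid[:5])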

OLS non-linear curve but linear in parameters

Artificial data

We simulate a non-linear relationship between x and y that is nonetheless linear in the parameters:

In [1]: nsample = 50

In [1]: sig = 0.5

In [1]: x = np.linspace(0, 20, nsample)

In [1]: X = np.c_[x, np.sin(x), (x - 5)**2, np.ones(nsample)]

In [1]: beta = [0.5, 0.5, -0.02, 5.]

In [1]: y_true = np.dot(X, beta)

In [1]: y = y_true + sig * np.random.normal(size=nsample)

Fit and summary

In [1]: res = sm.OLS(y, X).fit()

In [1]: print(res.summary())

Extract other quantities of interest

In [1]: print(res.params)

In [1]: print(res.bse)

In [1]: print(res.predict())
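Called with no argument, predict() returns the in-sample fitted values; a new design matrix with the same columns can also be passed. A small sketch (x_new and X_new are illustrative names):

In [1]: x_new = np.linspace(0, 20, 5)

In [1]: X_new = np.c_[x_new, np.sin(x_new), (x_new - 5)**2, np.ones(5)]

In [1]: print(res.predict(X_new))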

Draw a plot to compare the true relationship to the OLS predictions. The interval bands around the predictions are computed with the wls_prediction_std function.

In [1]: plt.figure();

In [1]: plt.plot(x, y, 'o', x, y_true, 'b-');

In [1]: prstd, iv_l, iv_u = wls_prediction_std(res);

In [1]: plt.plot(x, res.fittedvalues, 'r--.');

In [1]: plt.plot(x, iv_u, 'r--');

In [1]: plt.plot(x, iv_l, 'r--');

In [1]: plt.title('blue: true,   red: OLS');
[Figure ols_predict_0.png: data points, true curve (blue), and OLS fit with interval bands (red)]

OLS with dummy variables

Artificial data

We create 3 groups which will be modelled using dummy variables. Group 0 is the omitted/benchmark category.

In [1]: nsample = 50

In [1]: groups = np.zeros(nsample, int)

In [1]: groups[20:40] = 1

In [1]: groups[40:] = 2

In [1]: dummy = (groups[:, None] == np.unique(groups)).astype(float)

In [1]: x = np.linspace(0, 20, nsample)

In [1]: X = np.c_[x, dummy[:, 1:], np.ones(nsample)]

In [1]: beta = [1., 3, -3, 10]

In [1]: y_true = np.dot(X, beta)

In [1]: e = np.random.normal(size=nsample)

In [1]: y = y_true + e

Inspect the data

In [1]: print(X[:5, :])

In [1]: print(y[:5])

In [1]: print(groups)

In [1]: print(dummy[:5, :])
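The broadcasting comparison used to build dummy simply creates one indicator column per group; an explicit, purely illustrative equivalent:

In [1]: dummy_explicit = np.zeros((nsample, 3))

In [1]: for g in np.unique(groups):
   ...:     dummy_explicit[:, g] = (groups == g)
   ...: 

In [1]: print(np.allclose(dummy, dummy_explicit))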

Fit and summary

In [1]: res2 = sm.OLS(y, X).fit()

In [1]: print(res2.summary())

In [1]: print(res2.params)

In [1]: print(res2.bse)

In [1]: print(res2.predict())

Draw a plot to compare the true relationship to OLS predictions.

In [1]: prstd, iv_l, iv_u = wls_prediction_std(res2);

In [1]: plt.figure();

In [1]: plt.plot(x, y, 'o', x, y_true, 'b-');

In [1]: plt.plot(x, res2.fittedvalues, 'r--.');

In [1]: plt.plot(x, iv_u, 'r--');

In [1]: plt.plot(x, iv_l, 'r--');

In [1]: plt.title('blue: true,   red: OLS');
[Figure ols_predict_1.png: data points, true relationship (blue), and OLS fit with interval bands (red)]

Joint hypothesis tests

F test

We want to test the hypothesis that both coefficients on the dummy variables are equal to zero, that is, R × β = 0. An F test leads us to strongly reject the null hypothesis of identical constants across the 3 groups:

In [1]: R = [[0, 1, 0, 0], [0, 0, 1, 0]]

In [1]: print(np.array(R))

In [1]: print(res2.f_test(R))
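For reference, the same statistic can be computed by hand as F = (Rb)' [R Cov(b) R']^{-1} (Rb) / q, where b are the estimated coefficients, Cov(b) their covariance matrix, and q the number of restrictions; a minimal sketch using standard results attributes:

In [1]: Rm = np.array(R, dtype=float)

In [1]: Rb = Rm.dot(res2.params)

In [1]: cov_Rb = Rm.dot(res2.cov_params()).dot(Rm.T)

In [1]: print(Rb.dot(np.linalg.solve(cov_Rb, Rb)) / Rm.shape[0])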

T test

We want to test the null hypothesis that the effects of the 2nd and 3rd groups are equal, that is, that their difference is zero. The t test allows us to reject this null:

In [1]: R = [0, 1, -1, 0]

In [1]: print(res2.t_test(R))

Small group effects

If we generate artificial data with smaller group effects, the same restriction can no longer be rejected (the F test of a single restriction is equivalent to the two-sided t test):

In [1]: beta = [1., 0.3, -0.0, 10]

In [1]: y_true = np.dot(X, beta)

In [1]: y = y_true + np.random.normal(size=nsample)

In [1]: res3 = sm.OLS(y, X).fit()

In [1]: print(res3.f_test(R))

Multicollinearity

Data

The Longley dataset is well known to have high multicollinearity, that is, the exogenous predictors are highly correlated. This is problematic because it can affect the stability of our coefficient estimates as we make minor changes to model specification.

In [1]: from statsmodels.datasets.longley import load

In [1]: data = load()

In [1]: y = data.endog

In [1]: X = data.exog

In [1]: X = sm.add_constant(X, prepend=False)
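The near-linear dependence shows up directly in the pairwise correlations of the regressors (the constant column, appended last, is excluded; this assumes X is a plain ndarray as loaded above):

In [1]: print(np.corrcoef(X[:, :-1], rowvar=0))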

Fit and summary

In [1]: ols_model = sm.OLS(y, X)

In [1]: ols_results = ols_model.fit()

In [1]: print(ols_results.summary())

Condition number

One way to assess multicollinearity is to compute the condition number. Values over 20 are worrisome (see Greene 4.9). The first step is to normalize the independent variables (all columns except the constant) to have unit length:

In [1]: norm_x = np.ones_like(X)  # the constant column (appended last) is left as ones

In [1]: for i in range(int(ols_model.df_model)):
   ...:     norm_x[:, i] = X[:, i] / np.linalg.norm(X[:, i])
   ...: 

In [1]: norm_xtx = np.dot(norm_x.T, norm_x)

Then, we take the square root of the ratio of the largest to the smallest eigenvalue.

In [1]: eigs = np.linalg.eigvals(norm_xtx)

In [1]: condition_number = np.sqrt(eigs.max() / eigs.min())

In [1]: print(condition_number)
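The summary above also reports a condition number (the 'Cond. No.' line), exposed as the condition_number attribute of the results; it is computed from the unnormalized design matrix, which is why it typically differs from (and here is far larger than) the normalized value:

In [1]: print(ols_results.condition_number)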

Dropping an observation

Greene also points out that dropping a single observation can have a dramatic effect on the coefficient estimates:

In [1]: ols_results2 = sm.OLS(y[:-1], X[:-1, :]).fit()

In [1]: res_dropped = ols_results.params / ols_results2.params * 100 - 100

In [1]: print(('Percentage change %4.2f%%\n' * 7) % tuple(res_dropped))
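A more systematic way to gauge this sensitivity is through the influence diagnostics available on OLS results (a sketch assuming the get_influence()/dfbetas API of recent statsmodels; dfbetas is the scaled change in each coefficient when one observation is left out):

In [1]: infl = ols_results.get_influence()

In [1]: print(infl.dfbetas[-1])  # effect of dropping the last observation, as above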