Risk/Reward Tradeoff

The two quantities we have been modelling (the time-dependent average and standard deviation of the returns) represent respectively the (potential) reward and risk associated with an asset. In a standard GARCH model the relationship between these two quantities is only implicit. Sometimes, however, the return depends directly on the risk. A variant of the GARCH model can make this relationship explicit.

Reference Model

Let’s start by modelling the ACC returns using a GJR GARCH model with a Skewed Student-t Distribution.

specification <- ugarchspec(
  distribution.model = "sstd",
  mean.model = list(armaOrder = c(0, 0)),
  variance.model = list(model = "gjrGARCH")
)

fit <- ugarchfit(data = ACC, spec = specification)

coef(fit)
          mu        omega       alpha1        beta1       gamma1         skew        shape 
2.940343e-04 9.299285e-06 7.509895e-03 9.290928e-01 7.535807e-02 1.069806e+00 6.093516e+00 

The specification of mean.model implies a time-independent average return.
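
For reference, with this specification the fitted model has the following form (written here with a single lag of each term, so the symbols match the coefficient names in the output above):

$$ r_t = \mu + \epsilon_t, \qquad \epsilon_t = \sigma_t z_t $$

$$ \sigma_t^2 = \omega + (\alpha_1 + \gamma_1 I_{t-1})\,\epsilon_{t-1}^2 + \beta_1\,\sigma_{t-1}^2, $$

where \(I_{t-1} = 1\) when \(\epsilon_{t-1} < 0\) and \(0\) otherwise, and \(z_t\) follows the Skewed Student-t Distribution.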

GARCH-in-Mean Model

The GARCH-in-mean (GARCH-M) model integrates a risk term into the equation for the average return:

$$ \mu_t = \mu + \lambda\sigma_t. $$

If \(\lambda > 0\) then higher returns are associated with greater risk and vice versa. 🚨 Note that we are no longer assuming a constant mean: in this model the average return changes with time.

To convert the reference model above into a GARCH-in-mean model we need to add some parameters to the mean.model argument:

  • archm – whether to include the volatility term in the mean equation; and
  • archpow – the exponent applied to \(\sigma_t\) (1 for volatility, 2 for variance).

specification <- ugarchspec(
  distribution.model = "sstd",
  mean.model = list(armaOrder = c(0, 0), archm = TRUE, archpow = 1),
  variance.model = list(model = "gjrGARCH")
)

fit <- ugarchfit(data = ACC, spec = specification)

coef(fit)
           mu         archm         omega        alpha1         beta1        gamma1          skew         shape 
-3.226572e-03  2.110476e-01  1.129381e-05  6.388947e-03  9.214938e-01  7.933666e-02  1.071112e+00  6.018885e+00 

The value of \(\lambda\) is given by archm.
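
As a quick sanity check, the conditional mean returned by fitted() should be reproducible from the coefficients as \(\mu + \lambda\sigma_t\) (assuming, as a sketch here, that fitted() includes the in-mean term); the differences below should be effectively zero.

# Reconstruct the conditional mean from mu and the archm coefficient and
# compare it with the fitted values. Differences should be negligible.
lambda <- coef(fit)["archm"]
mu     <- coef(fit)["mu"]

head(fitted(fit) - (mu + lambda * sigma(fit)))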

An alternative formulation uses the variance rather than the volatility in the mean equation:

$$ \mu_t = \mu + \lambda\sigma_t^2. $$

specification <- ugarchspec(
  distribution.model = "sstd",
  mean.model = list(armaOrder = c(0, 0), archm = TRUE, archpow = 2),
  variance.model = list(model = "gjrGARCH")
)

fit <- ugarchfit(data = ACC, spec = specification)

coef(fit)
           mu         archm         omega        alpha1         beta1        gamma1          skew         shape 
-1.277500e-03  5.482362e+00  1.148246e-05  6.070323e-03  9.208495e-01  7.982585e-02  1.069748e+00  6.040826e+00 
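
Because archpow = 2 puts \(\sigma_t^2\) rather than \(\sigma_t\) into the mean equation, the much larger archm value here (roughly 5.48 versus 0.21 before) mainly reflects the change of scale (daily variances are far smaller than daily volatilities) rather than a stronger risk/reward relationship.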

AR(1) Model

The GARCH-in-mean model creates an explicit dependency between risk and reward. An alternative approach is to model the correlation between successive returns. In the AR(1) model the average return depends on the previous return:

$$ \mu_t = \mu + \rho (r_{t-1} - \mu), $$

where \(r_{t-1}\) is the return from the previous time step.

The behaviour of this model depends on the sign and magnitude of \(\rho\):

  • \(\rho > 0\) – positive autocorrelation; an above-average (below-average) return tends to be followed by another above-average (below-average) return, suggesting that the market under-reacts and is still adjusting the next day;
  • \(\rho < 0\) – negative autocorrelation; an above-average (below-average) return tends to be followed by a below-average (above-average) return, suggesting that the market over-reacts and corrects the next day;
  • \(|\rho| < 1\) – mean reversion; after a shock the average return decays back towards the long term mean;
  • \(|\rho| \approx 1\) – momentum; the effects of a shock are persistent.
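
To build some intuition, here is a small simulation (using base R's arima.sim(), not part of the original analysis) comparing a positively and a negatively autocorrelated AR(1) series; the lag-1 sample autocorrelation should land close to the value of \(\rho\) used to generate each series.

# Simulate AR(1) series with positive and negative autocorrelation.
set.seed(42)

under_react <- arima.sim(model = list(ar =  0.5), n = 5000)
over_react  <- arima.sim(model = list(ar = -0.5), n = 5000)

# Lag-1 sample autocorrelation of each series (index 1 is lag 0).
acf(under_react, plot = FALSE)$acf[2]
acf(over_react, plot = FALSE)$acf[2]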

Let's create an AR(1) GJR GARCH model, this time for the TATASTEEL returns.

specification <- ugarchspec(
  distribution.model = "sstd",
  mean.model = list(armaOrder = c(1, 0)),
  variance.model = list(model = "gjrGARCH")
)

fit <- ugarchfit(data = TATASTEEL, spec = specification)

ar1_mean <- fitted(fit)  # conditional mean series
ar1_vol <- sigma(fit)    # conditional volatility series

coef(fit)
           mu           ar1         omega        alpha1         beta1        gamma1          skew         shape 
 1.308881e-03 -3.061775e-02  7.479402e-06  1.052012e-02  9.516515e-01  5.658366e-02  1.050847e+00  5.051158e+00 

The negative value for ar1 implies that successive returns are (weakly) anti-correlated: an above-average return tends to be followed by a below-average return and vice versa.

Let's fit the same model to HDFC, which turns out to have a positive ar1 coefficient.

fit <- ugarchfit(data = HDFC, spec = specification)

coef(fit)
          mu          ar1        omega       alpha1        beta1       gamma1         skew        shape 
6.018488e-04 1.796683e-02 7.485177e-06 2.517259e-02 9.108571e-01 7.901346e-02 1.031812e+00 5.969564e+00 

The positive value for ar1 implies that successive returns are (weakly) positively correlated: an above-average return tends to be followed by another above-average return and vice versa.

MA(1) Model
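
Whereas the AR(1) model uses the previous return, the MA(1) model makes the average return depend on the previous shock:

$$ \mu_t = \mu + \theta\epsilon_{t-1}, $$

where \(\epsilon_{t-1}\) is the residual from the previous time step and \(\theta\) corresponds to the ma1 coefficient.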

specification <- ugarchspec(
  distribution.model = "sstd",
  mean.model = list(armaOrder = c(0, 1)),
  variance.model = list(model = "gjrGARCH")
)

fit <- ugarchfit(data = TATASTEEL, spec = specification)

coef(fit)
           mu           ma1         omega        alpha1         beta1        gamma1          skew         shape 
 1.311505e-03 -3.259663e-02  7.481072e-06  1.054545e-02  9.516866e-01  5.645505e-02  1.050544e+00  5.043651e+00 
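
The ma1 coefficient is small and negative, much like the ar1 coefficient in the previous model, so both formulations tell a similar story of mild over-reaction in the TATASTEEL returns.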

ARMA(1, 1) Model
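
The ARMA(1, 1) model combines both effects, so the average return depends on the previous return and on the previous shock:

$$ \mu_t = \mu + \rho (r_{t-1} - \mu) + \theta\epsilon_{t-1}. $$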

specification <- ugarchspec(
  distribution.model = "sstd",
  mean.model = list(armaOrder = c(1, 1)),
  variance.model = list(model = "gjrGARCH")
)

fit <- ugarchfit(data = TATASTEEL, spec = specification)

coef(fit)
           mu           ar1           ma1         omega        alpha1         beta1        gamma1          skew         shape 
 1.336432e-03  4.933973e-01 -5.313622e-01  7.294640e-06  1.124270e-02  9.519904e-01  5.503493e-02  1.046950e+00  4.990335e+00 
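
Note that ar1 (around 0.49) and ma1 (around -0.53) are large but almost cancel each other out. When the AR and MA terms nearly offset like this, the model behaves much like the simpler AR(1) and MA(1) fits and the extra parameter is probably not earning its keep, which brings us to the question of model complexity.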

Alternative Implementation

An alternative to {rugarch} is the more recent {tsgarch} package. Let's sketch how the reference GJR GARCH model might be specified with {tsgarch}.

library(tsgarch)
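
A minimal sketch follows, assuming {tsgarch}'s garch_modelspec() and estimate() interface; the model and distribution codes ("gjrgarch", "sstd") are taken on trust from the package documentation and may differ between versions, so check before running. The mean specifications explored above (GARCH-in-mean, ARMA terms) are not reproduced in this sketch.

# Sketch only: a GJR GARCH variance model with a Skewed Student-t
# distribution for the ACC returns.
spec <- garch_modelspec(ACC, model = "gjrgarch", distribution = "sstd")

fit <- estimate(spec)

coef(fit)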

Model Complexity

Relative to the reference model, the GARCH-in-mean, AR(1) and MA(1) models each add one parameter to the mean equation, and the ARMA(1, 1) model adds two. The models therefore became progressively more flexible, but also more complicated. In general one should strive for a parsimonious model: just complicated enough to get the job done.
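
One way to weigh goodness of fit against the number of parameters is to compare information criteria, which penalise extra parameters; {rugarch} provides infocriteria() for fitted models. The sketch below assumes the AR(1), MA(1) and ARMA(1, 1) fits for TATASTEEL were kept in separate objects (fit_ar1, fit_ma1 and fit_arma11 are names introduced here for illustration).

# Compare penalised fit measures (lower is better). The fit_* objects
# are assumed to hold the AR(1), MA(1) and ARMA(1, 1) fits from above.
sapply(
  list(ar1 = fit_ar1, ma1 = fit_ma1, arma11 = fit_arma11),
  infocriteria
)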