Interpretable Neural Networks for Equity Factor Modeling

Time

-

Locations

Rettaliata Engineering Center, Room 103

Host

Department of Applied Mathematics

Speaker

Matthew Dixon
Department of Applied Mathematics, Illinois Institute of Technology

Description

We present a general method for interpreting a fitted neural network that ranks the importance of predictors and their interaction effects, without assuming a data generation process. This method computes the importance and interaction effects from the Jacobian and Hessian matrices of a composition of smooth semi-affine functions. In the simplest case, that is, when the network contains no hidden layers and the error is white noise, the Jacobian recovers the OLS estimator. For one hidden layer, we show that the Jacobian is a weighted sum of independent Bernoulli random variables and provide Chernoff-type bounds on the deviation of the Jacobian from its mean. We additionally show that the variance decays with the number of hidden units. We empirically compare our method with other well-known methods for interpreting neural networks on toy problems. Finally, we apply our method to a neural network equity factor model and highlight the advantages of using interpretable neural networks for hedging factor exposures.
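The sensitivity-based ranking described above can be illustrated with automatic differentiation. The following is a minimal sketch, assuming a fitted PyTorch feedforward network with a smooth activation; the model, data, and function names are illustrative stand-ins, not the speaker's implementation.

```python
import torch

torch.manual_seed(0)
p = 5                                    # number of predictors
model = torch.nn.Sequential(             # stand-in for a fitted factor model
    torch.nn.Linear(p, 16),
    torch.nn.Tanh(),                     # smooth activation so the Hessian is nonzero
    torch.nn.Linear(16, 1),
)
X = torch.randn(200, p)                  # stand-in for the predictor panel

def scalar_output(x):
    # Network output as a scalar for a single observation x of shape (p,).
    return model(x).squeeze(-1)

# Jacobian of the output w.r.t. each predictor, evaluated per observation.
jac = torch.stack(
    [torch.autograd.functional.jacobian(scalar_output, x) for x in X]
)
# Rank predictors by mean absolute sensitivity across observations.
importance = jac.abs().mean(dim=0)

# Hessian per observation captures pairwise interaction effects; average the
# absolute entries across observations for an interaction ranking.
hess = torch.stack(
    [torch.autograd.functional.hessian(scalar_output, x) for x in X]
)
interactions = hess.abs().mean(dim=0)

print("importance ranking:", importance.argsort(descending=True).tolist())
print("interaction matrix:\n", interactions)
```

With no hidden layers and white-noise errors, the per-observation Jacobian of such a model is constant and equal to the fitted linear coefficients, which is the sense in which the Jacobian recovers the OLS estimator in the simplest case.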

Event Topic

Mathematical Finance, Stochastic Analysis, and Machine Learning
