Derivatives Pricing via Machine Learning

Abstract

In this paper, we combine the theory of stochastic processes and machine learning techniques with the regression analysis first proposed by [1] to solve for American option prices, and we apply the new methodologies to financial derivatives pricing. Rigorous convergence proofs are provided for some of the methods we propose. Numerical examples show good applicability of the algorithms. More applications in finance are discussed in the Appendices.

Share and Cite:

Ye, T. and Zhang, L. (2019) Derivatives Pricing via Machine Learning. Journal of Mathematical Finance, 9, 561-589. doi: 10.4236/jmf.2019.93029.

1. Introduction

Theoretical and empirical finance research involves the evaluation of conditional expectations, which, in a continuous-time jump-diffusion setting, can be related to second-order parabolic partial integro-differential equations (PIDEs) by the Feynman-Kac theorem, and to other types of equations, such as backward stochastic differential equations with jumps (BSDEJs) or quasi-linear PIDEs, in more complicated settings. In theoretical continuous-time finance, many problems, such as asset pricing with market frictions, dynamic hedging, or dynamic portfolio-consumption choice, can be related to Hamilton-Jacobi-Bellman (HJB) equations via dynamic programming techniques. The HJB equations, from another perspective, are equivalent to BSDEs derived from a probabilistic approach. The nonlinear BSDEs studied in [2] can be decomposed, via Picard iteration, into a sequence of linear equations, which can be solved by taking conditional expectations. For empirical studies, the focus of the literature has been the evaluation of cross-sectional conditional risk-adjusted expected returns and their explanation using factors; see [3] [4] and [5] for good illustrations. It is easily seen that, regardless of whether the underlying models are continuous-time or discrete-time, evaluating conditional expectations is unavoidable in the finance literature. Moreover, in order to perform XVA computations for the measurement of counterparty credit risk, we need to evaluate the conditional expectations, i.e., the derivative prices, on a future simulation grid, as outlined in [6]. These facts call for efficient methods to compute the aforementioned quantities.

In this paper, we extend the basis function expansion approach proposed in [1] with machine learning techniques. Specifically, we propose new efficient methods to evaluate conditional expectations, regardless of the dynamics of the underlying stochastic process, as long as it can be simulated. Rigorous convergence proofs are given using Hilbert space theory. The methodologies can be applied to time-zero pricing as well as pricing on a future simulation grid, with the advantage of ANN approximation most prominent in high-dimensional problems. In the sequel, we show applications of our methodologies to the pricing of European derivatives; the extension to contracts with an optimal stopping feature is straightforward through either the approach of [1] or reflected BSDEs.

Compared to the literature on traditional stochastic analysis, our methodologies are able to handle large data sets and high-dimensional problems, and therefore suffer much less from the curse of dimensionality, due to the nature of ANN methods. Moreover, our methodologies are very efficient when evaluating solutions of BSDEJs and PIDEs on a future simulation grid, where none of the traditional methodologies applies. With respect to the recent machine learning literature on numerical solutions to BSDEs and PDEs, our methodologies enjoy the theoretical advantage of being able to handle equations with jump-diffusion, and convergence results are provided. When applied to the solutions of BSDEJs and PIDEs, our methodologies require far fewer parameters than the current machine learning based methods mentioned below. At any step in the solution process, only one ANN is needed and we do not require nested optimization. In terms of applications, not all prices of OTC derivatives can be easily translated into BSDEJs and PIDEs, for example, a range accrual with both American and barrier (knock-out, for example) features. However, our methodologies are naturally suitable in those situations. To conclude, our methods enjoy many theoretical and empirical advantages, which makes them attractive and novel.

There is a large literature on applications of machine learning techniques to financial research. Classical applications focus on the prediction of market variables, such as equity indexes or FX rates, and the detection of market anomalies, for example, [7] and [8]. Option pricing via brute-force curve fitting by ANNs dates back to [9]. More applications of machine learning in finance, especially option price prediction, are surveyed in [10]; see references therein. Pricing of American options in high dimensions can be found in [11], which is closest to our Method 1. However, our methods improve on this reference in several ways. First of all, we enable deep neural network (DNN) approximation and show convergence. Second, we can incorporate constraints in DNN approximation estimation and prove the mathematical validity of this approach. Third, we propose two more efficient methods to complement the first method of ours. Our treatment of constraints in the estimation of DNNs extends the work of [12] in that we can deal with a larger class of constraints by specifying a general Hilbert subspace as the constrained set. Risk measure computation using machine learning can be found in [13]. Applications of machine learning function approximation in financial econometrics can be found in [14], [15], [16] and [17]. Recent applications include empirical and theoretical asset pricing, reinforcement learning and Q-learning for solving dynamic programming problems such as optimal investment-consumption choice, option pricing and the construction of optimal trading strategies, e.g., [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28] and references therein. Numerical methods to solve PDEs and BSDEs or the related inverse problems can be found in [29], [30], [31], [32], [33], [34], [35], [36], [37], [38] and [39]. Machine learning based methods enjoy the advantage of being fast and able to handle large data sets and high-dimensional problems.

Our methodologies combine traditional statistical learning theory and stochastic analysis with advanced machine learning techniques, introducing powerful function approximation via the universal approximation theorem and artificial neural networks (ANNs), while preserving the regression-type analysis documented in [1]. The methods are easy to use, effective, time efficient, and accurate, as illustrated by numerical experiments. They are different from the convergent expansion method, e.g., [40], simulation methods such as [41], [42], [43] and [44], or the asymptotic expansion methods proposed by [45], [46], [47], [48], [49], [50], [51] and [52], in that we no longer resort to polynomial basis function expansion or small-diffusion type analysis. Our methods are also different from the pure machine learning based ones documented in [29], [30], [31], [32], [33], [34] and [35], in that we utilize the lead-lag regression formula to evaluate the conditional expectations, preserving the time-dependent structure, and our methods are able to handle jump-diffusion processes easily.

The organization of this paper is as follows. Section 2 documents the methodologies. Section 3 illustrates the usefulness of our methods by considering European and American derivatives pricing. Section 4 considers numerical experiments and Section 5 concludes. An outline of the proofs and other applications can be found in the appendices.

2. The Methodology

Mathematical Setup

We use a Markov process modeled by a jump-diffusion as illustration. Suppose that we have a stochastic differential equation with jumps

$$dX_t = \mu(t, X_t)\,dt + \sigma(t, X_t)\,dW_t + \int_E \gamma(t, X_t, e)\,\tilde N(dt, de), \quad X_0 = x_0 \tag{1}$$

where $X \in \mathbb{R}^r$, $W \in \mathbb{R}^d$ is a standard d-dimensional Brownian motion and $\tilde N$ is a q-dimensional compensated Poisson random measure with compensator $\nu(dt, de) := \nu(de)\,dt$. The information filtration $\mathcal F_t = \mathcal F_t^{W,N}$ is generated by $(W, N)$. We hope to evaluate the conditional expectation $E_t[\psi(X_T)]$ for any $0 < t < T$, e.g., see [53]. Assumptions on $\psi$ and X are stated below.

Assumption 1 (On Growth Condition of ψ). ψ has polynomial growth in its argument x, i.e., there exists a positive integer P, independent of x, such that for all $|x| > 1$ we have, for a constant C independent of x,

$$|\psi(x)| \le C\,|x|^P. \tag{2}$$

The following assumption is w.r.t. X.

Assumption 2 (On X). There exists a unique strong solution to Equation (1) and X has finite polynomial moments of all orders.

The General Approximation Theory

First, we need the following assumptions, definitions and results. Please note that some of the spaces we introduce are actually conditional ones. Discussions of conditional Hilbert spaces can be found in [54]; e.g., $L^2(\mathcal F_t)$ is a conditional Hilbert space for all $t \in [0, T]$.

Definition 3 (Projection Operator). Let $\mathcal X$ and $\mathcal H$ be Hilbert spaces with $\mathcal H \subseteq \mathcal X$. Define $\mathrm{PROJ}_{\mathcal H}\,x$ as the projection of $x \in \mathcal X$ onto $\mathcal H$.

Definition 4 (Orthogonal Space). Let $\mathcal X$ and $\mathcal H$ be Hilbert spaces with $\mathcal H \subseteq \mathcal X$. Define $\mathrm{ORTH}_{\mathcal H}\,\mathcal X$ as the orthogonal space of $\mathcal H$ in $\mathcal X$.

Definition 5 (Spanning the Hilbert Space). Assume that $E = \{e_j\}_{j \in \Lambda}$ is a set of elements in a Hilbert space $\mathcal X$ and $\Lambda$ is an index set. Define $\mathcal H_E$ as the intersection of all Hilbert subspaces of $\mathcal X$ containing $E$.

Assumption 6 (On Joint Continuity). $\mathcal X$ and $\mathcal H$ are two Hilbert spaces with $\mathcal H \subseteq \mathcal X$. Moreover, $\{\mathcal H_n\}_{n=1}^{\infty}$ is a sequence of Hilbert sub-spaces of $\mathcal H$ satisfying $\mathcal H_n \subseteq \mathcal H_{n+1}$ for any $n \ge 1$ and $\overline{\bigcup_{n=1}^{\infty}\mathcal H_n} = \mathcal H$. We have $\lim_{n\to\infty}\|h - \mathrm{PROJ}_{\mathcal H_n} h_n\|_{\mathcal H} = 0$ for any $h \in \mathcal H$ and any sequence $h_n$ with $\lim_{n\to\infty} h_n = h$.

The next two theorems are well-known in the literature.

Theorem 7 (Hilbert Projection Theorem). Let $\mathcal H \subseteq \mathcal X$ be two Hilbert spaces and let $x \in \mathcal X$. Then $\mathrm{PROJ}_{\mathcal H}\,x$ exists and is unique. Moreover, it is characterized uniquely by $x - \mathrm{PROJ}_{\mathcal H}\,x \in \mathrm{ORTH}_{\mathcal H}\,\mathcal X$.

Theorem 8 (Repeated Projection Theorem). Let $\mathcal G \subseteq \mathcal H \subseteq \mathcal X$ be three Hilbert spaces. Then, for any $x \in \mathcal X$, $\mathrm{PROJ}_{\mathcal G}\,x = \mathrm{PROJ}_{\mathcal G}(\mathrm{PROJ}_{\mathcal H}\,x)$.

Remark 9. The conditions of Theorems 7 and 8 on $\mathcal G$ and $\mathcal H$ can be relaxed to convexity and completeness instead of being Hilbert sub-spaces.

Finally, we have the result below.

Theorem 10. Suppose $\mathcal X$ is a Hilbert space, and $\{\mathcal H_n\}_{n=1}^{\infty}$ and $\mathcal H$ are Hilbert subspaces of $\mathcal X$ satisfying $\mathcal H_n \subseteq \mathcal H_{n+1}$ and $\overline{\bigcup_{n=1}^{\infty}\mathcal H_n} = \mathcal H \subseteq \mathcal X$. For $x \in \mathcal X$, define $h_n = \mathrm{PROJ}_{\mathcal H_n} x$ and $h = \mathrm{PROJ}_{\mathcal H} x$. Then we have $\lim_{n\to\infty} h_n = h$ w.r.t. the norm topology in $\mathcal X$, if Assumption 6 is satisfied.

Sometimes we need to add constraints on the calibrated ANN, e.g., the shape constraints. The following assumption and theorem deal with this situation.

Assumption 11 (On Constrained Sub-space). Suppose that $\Psi \subseteq \mathcal X$ is such that $\{\Psi \cap \mathcal H_n\}_{n=1}^{\infty}$ is a sequence of non-empty convex and complete subsets of $\mathcal X$ satisfying Assumption 6, where $\mathcal X$ and $\{\mathcal H_n\}_{n=1}^{\infty}$ are as described above.

The following theorem handles the constrained approximation and its convergence.

Theorem 12 (On Constrained Approximation). Under Assumptions 6 and 11, for $x \in \mathcal X$, if $h = \mathrm{PROJ}_{\mathcal H}\,x \in \Psi$, then we have $\lim_{n\to\infty}\mathrm{PROJ}_{\Psi\cap\mathcal H_n}\,x = h$.

Remark 13 (On Ψ). In Theorem 12, the set Ψ represents prior knowledge on the constraints that h satisfies. It can be represented by a set of non-linear inequalities or equalities on functionals of h. Common constraints for option pricing include the non-negativity constraint and the positivity constraint on the second-order derivatives. The verification that $\{\Psi\cap\mathcal H_n\}_{n=1}^{\infty}$ satisfies Assumption 6 should be done on a case-by-case basis.

To proceed further, we need the following assumptions.

Assumption 14 (On Some Spaces). $\{\mathcal H_t^J\}_{J=1}^{\infty}$ is an increasing sequence of Hilbert sub-spaces of $L^2(\mathcal F_t)$, $\mathcal H_t^J \subseteq \mathcal H_t^{J+1}$, $\overline{\bigcup_{J=1}^{\infty}\mathcal H_t^J} = \mathcal H_t \subseteq L^2(\mathcal F_t)$. Moreover, $\overline{\{E_t[\xi_T]\,|\,\xi_T \in L^2(\mathcal F_T),\; E_t[\xi_T]\in L^2(\mathcal F_t)\}} \subseteq \mathcal H_t \subseteq L^2(\mathcal F_t) \subseteq L^2(\mathcal F_T) = \mathcal X_T$.

Assumption 15 (On Structure of $\mathcal H_t^J$). $\{e_t^j\}_{j\in\Lambda}$ is a set of elements of $L^2(\mathcal F_t)$ such that $\mathcal H_t^J = \mathcal H_{\{e_t^j\}_{j\in\Lambda_J}}$, where $\Lambda_J \subseteq \Lambda_{J+1} \subseteq \Lambda$ for any $J \ge 1$ and $\bigcup_{J=1}^{\infty}\Lambda_J = \Lambda$, satisfies Assumption 14 (Note 1).

Then, we have the following results.

Lemma 1. For any adapted stochastic process ξ such that $\xi_T \in L^2(\mathcal F_T)$, if $E_t[\xi_T] \in L^2(\mathcal F_t)$, we have

$$E_t[\xi_T] = \arg\min_{\eta_t \in L^2(\mathcal F_t)} E\big[(\xi_T - \eta_t)^2\big]. \tag{3}$$

The following proposition is a natural extension of Lemma 1.

Proposition 16. For any measurable function ψ and stochastic process X such that $\psi(X_T) \in L^2(\mathcal F_T)$ and $E_t[\psi(X_T)] \in L^2(\mathcal F_t)$, we have

$$E_t[\psi(X_T)] = \arg\min_{\xi_t \in L^2(\mathcal F_t)} E\big[(\psi(X_T) - \xi_t)^2\big]. \tag{4}$$

Here $\xi_t$ is $\mathcal F_t$-measurable and the above minimization problem has a unique solution. In particular, if X is a Markov process, then $\xi_t = \phi(t, X_t)$, i.e., $\xi_t$ is a function of time t and $X_t$.

We then have the following theorem.

Theorem 17. Under Assumptions 1, 2, 6, 14 and 15, for any adapted stochastic process ξ such that $\xi_T \in L^2(\mathcal F_T)$ and $E_t[\xi_T] \in L^2(\mathcal F_t)$, we have, with the equality holding in $L^2(\mathcal F_t)$,

$$\lim_{J\to\infty}\,\arg\min_{\eta_t \in \mathcal H_t^J} E\big[(\xi_T - \eta_t)^2\big] = E_t[\xi_T]. \tag{5}$$

Further, for any measurable function ψ and stochastic process X such that $\psi(X_T) \in L^2(\mathcal F_T)$ and $E_t[\psi(X_T)] \in L^2(\mathcal F_t)$, we have, again in $L^2(\mathcal F_t)$,

$$\lim_{J\to\infty}\,\arg\min_{\xi_t \in \mathcal H_t^J} E\big[(\psi(X_T) - \xi_t)^2\big] = E_t[\psi(X_T)]. \tag{6}$$

If X is Markov, then we have $\xi_t = \phi(t, X_t)$, i.e., $\xi_t$ is a function of time t and $X_t$.

The following theorem justifies the Monte Carlo approximation of expectation in the above optimization problems.

Theorem 18 (On Sequential Convergence). Under Assumptions 1, 2, 6, 14 and 15, suppose that $|\Lambda_J| = m_J < \infty$ for all $J \ge 1$, and that $\{X_T^i\}_{i=1}^{M}$ and $\{e_t^{j,i}\}_{j=1,\,i=1}^{m_J,\,M}$ are M i.i.d. copies of $X_T$ and $\{e_t^j\}_{j=1}^{m_J}$. Then we have

$$\lim_{J\to\infty}\lim_{M\to\infty}\,\arg\min_{\xi_t \in \mathcal H_t^J}\frac{1}{M}\sum_{m=1}^{M}\big(\psi(X_T^m) - \xi_t^m\big)^2 = E_t[\psi(X_T)], \tag{7}$$

where $\xi_t^m$ denotes the realization of $\xi_t$ on the m-th simulated path.

The following results justify the universal approximation and ANN approximation approaches proposed in this paper.

Proposition 19 (On Universal Approximation Theory). Let σ denote the function in the universal approximation theorem mentioned in [55], [56] and [57]. Define $\{e_t^j\}_{j=1}^{m_n} := \{\sigma(\alpha_j + \beta_j X_t)\}_{j=1}^{m_n}$, where X satisfies Equation (1) and Assumption 2, $\alpha_j$ and $\beta_j$ have at most n significant digits in total, $n \in \mathbb N$ (i.e., n belongs to the set of natural numbers), j runs from 1 to $m_n$, and $m_n$ is the number of all such $e_t^j$, i.e., $m_n = \big|\{\sigma(\alpha + \beta X_t)\,|\,\alpha\text{ and }\beta\text{ have at most }n\text{ total significant digits}\}\big|$. Then $\{\mathcal H_{\{e_t^j\}_{j=1}^{m_n}}\}_{n\in\mathbb N}$ satisfies Assumptions 6, 14 and 15. Therefore, Theorems 17 and 18 apply.

Proposition 20 (On Deep Neural Network Approximation). For the DNN defined in ([58], Definition 1.1), observe that $W_l(x) = \alpha_l + \beta_l x$. Define

$$e_t^j := W_{L,j}\circ\rho\circ W_{L-1,j}\circ\rho\circ\cdots\circ W_{1,j}\circ\rho\,(X_t) \tag{8}$$

where $W_{l,j}(x) = \alpha_{l,j} + \beta_{l,j}\,x$ is such that, for $l = 1, 2, \ldots, L$, $(\alpha_{l,j}, \beta_{l,j})$ have at most n total significant digits and $n \in \mathbb N$. Then $\{\mathcal H_{\{1,\,e_t^j\}_{j=1}^{m_n}}\}_{n\in\mathbb N}$, where 1 denotes the constant function $f(x)\equiv 1$ for all x, satisfies Assumptions 6, 14 and 15. Therefore, Theorems 17 and 18 apply after a localization argument restricting ψ and X to a compact sub-domain in $\mathbb R^r$.

Remark 21 (On DNN). Please note that, in Proposition 20, we do not intend to prove the convergence when the number of layers goes to infinity. Instead, we show convergence when the number of connections goes to infinity, which can be achieved via enlarging the number of neurons in each layer with the total number of layers remaining fixed.

Remark 22 (On Euler Time Discretization). [59] proposes an exact simulation method for multi-dimensional stochastic differential equations. The discussion of the discretization error of the regression approach proposed in this paper, when the Euler method is used, is not hard if ψ satisfies Assumption 1, in which case the dominated convergence theorem and the $L^2$ convergence of the Euler method can be applied to show convergence.

The proofs of the above results can be found in Appendix A. In what follows, we will propose three methods to compute, approximately, the function ϕ in Proposition 16.

Method 1

In general, ϕ, defined in Proposition 16 and Theorem 17, cannot be found in closed form. A natural thought is to resort to function expansion representations, i.e., to find the solution to the following problem

$$E_t[\psi(X_T)] = \arg\min_{\{a_j,\theta_j\}_{j=0}^{\infty}\in\mathcal A}\, E\Big[\Big(\psi(X_T) - \sum_{j=0}^{\infty} a_j\, e_j(t, X_t\,|\,\theta_j)\Big)^2\Big] \tag{9}$$

where $\mathcal A$ is an appropriate space for the coefficients $\{a_j,\theta_j\}_{j=0}^{\infty}$ and $\{e_j(\theta_j)\}_{j=0}^{\infty}$ is a set of functions, with $\mathrm{Span}(\{e_j(\theta_j)\}_{j=0}^{\infty})$ (Note 2) dense in an appropriate function space Φ (Note 3). To proceed further, we seek a truncation of the function representation formula as follows

$$E_t[\psi(X_T)] \approx \arg\min_{\{a_j,\theta_j\}_{j=0}^{J}\in\mathcal A_J}\, E\Big[\Big(\psi(X_T) - \sum_{j=0}^{J} a_j\, e_j(t, X_t\,|\,\theta_j)\Big)^2\Big] \tag{10}$$

for J sufficiently large, where $\mathcal A_J$ is a compact set in the Euclidean space where $\{a_j,\theta_j\}_{j=0}^{J}$ take values. The last step is to use Monte Carlo simulation to approximate the unconditional expectation appearing in Equations (9) and (10), thereby turning the conditional expectation computation problem into a least-squares function regression problem, similar to [1]. An obvious choice of $\{e_j(\theta_j)\}_{j=0}^{\infty}$ is a polynomial basis, for example, the set of Fourier-Hermite basis functions. For expansions using Fourier-Hermite basis functions in high dimensions, see [60].
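As a concrete, minimal sketch of this Monte Carlo least-squares step (not the authors' exact implementation), the snippet below regresses simulated discounted payoffs at T on a fixed polynomial basis of the state at time t; the geometric Brownian motion dynamics, the parameter values and the basis order are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the Monte Carlo version of Equation (10) with a polynomial basis.
# The GBM dynamics and all parameter values below are illustrative assumptions.
rng = np.random.default_rng(0)
r, sigma, s0, K, t, T = 0.05, 0.20, 1.0, 1.0, 0.25, 0.50
M = 100_000

# Simulate the Markov state at time t and at time T on the same paths.
z1, z2 = rng.standard_normal(M), rng.standard_normal(M)
S_t = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z1)
S_T = S_t * np.exp((r - 0.5 * sigma**2) * (T - t) + sigma * np.sqrt(T - t) * z2)

# Regress the discounted payoff on basis functions e_j(t, X_t): here 1, S_t, ..., S_t^4.
y = np.exp(-r * (T - t)) * np.maximum(S_T - K, 0.0)
basis = np.column_stack([S_t**j for j in range(5)])
coef, *_ = np.linalg.lstsq(basis, y, rcond=None)

# Fitted values approximate the conditional expectation E_t[discounted payoff], path by path.
cond_exp = basis @ coef
print(cond_exp[:5])
```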

In fact, Artificial Neural Networks (ANNs) prove to be an efficient and convergent function approximation tool that we can utilize in the above expressions. Write

$$E_t[\psi(X_T)] \approx \arg\min_{\{a_j,\theta_j\}_{j=0}^{J}\in\mathcal A_J}\, E\Big[\big(\psi(X_T) - \mathrm{ANN}_J(\{a_j,\theta_j\}_{j=0}^{J}\,|\,t, X_t)\big)^2\Big] \tag{11}$$

where $\mathrm{ANN}_J$ denotes an ANN with parameters $\{a_j,\theta_j\}_{j=0}^{J}$.
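For Equation (11), a minimal sketch replaces the polynomial basis of the previous snippet by a small feed-forward network; scikit-learn's MLPRegressor is used here purely as an illustration (the experiments in Section 4 mention the R routine nnet), and S_t and y refer to the simulated quantities from the sketch above.

```python
from sklearn.neural_network import MLPRegressor

# Minimal sketch of Equation (11): fit a small ANN to the discounted payoff by least
# squares. S_t and y are the simulated quantities from the polynomial-basis sketch above.
ann = MLPRegressor(hidden_layer_sizes=(12,), activation="tanh",
                   max_iter=2000, random_state=0)
ann.fit(S_t.reshape(-1, 1), y)

# The fitted network approximates E_t[discounted payoff | X_t], path by path.
cond_exp_ann = ann.predict(S_t.reshape(-1, 1))
print(cond_exp_ann[:5])
```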

Note that, via proper time discretization and fixed point iteration, solving a BSDE with jumps can be decomposed into a series of evaluations of conditional expectations. The machine learning based method outlined above can be applied there. We will write down the algorithm to solve a general Coupled Forward-Backward Stochastic Differential Equation with Jumps (CFBSDEJs) in the appendix. Extensions to other types of BSDEJs are possible.

Here we assume that X is a Markov process. To handle path dependency or non-Markov processes, we can apply the backward induction method outlined in [1]. With the machine learning approach, it is easy to see that this method enables us to get the values of conditional expectations on a future simulation grid.

Method 2

Another way to utilize the idea of [1] is inspired by the boosted regression tree (BRT) method, see [61], for example. Partition the domain space $\mathbb R^r = \bigcup_{k=1}^K U_t^k$ (Note 4), where $\{U_t^k\}_{k=1}^K$ is a set of disjoint sets in $\mathbb R^r$, and consider

$$E_t[\psi(X_T)] = \arg\min_{\phi\in\Phi}\, E\big[(\psi(X_T)-\phi(t,X_t))^2\big] \approx \arg\min_{\sum_{k=1}^K\phi_k(t,x)\mathbf 1_{x\in U_t^k}\in\Phi}\ \sum_{k=1}^K E\big[(\psi(X_T)-\phi_k(t,X_t))^2\,\mathbf 1_{X_t\in U_t^k}\big]. \tag{12}$$

The choice of $\{U_t^k\}_{k=1}^K$ is important, and we can use machine learning classification techniques (or any classification rule), such as the kmeans function in the R programming language, in the Monte Carlo simulation and related computations. Denote $d_U = \sup_{x,y\in U}|x-y|$. It is possible to show that, as long as $\lim_{K\to\infty}\max_{1\le k\le K} d_{U_t^k} = 0$, we only need a finite number of functions, for example $\{e_j(\theta_j)\}_{j=0}^{J}$, to approximate each $\phi_k$, $k = 1,\ldots,K$, and obtain convergence. In practice, although the domain of $X_t$ is $\mathbb R^r$, it might be concentrated in a small subset, therefore facilitating the partition process. Note also that this method might require us to mollify the function ψ if it is not smooth. We adopt a finite-order Taylor expansion as the function expansion representation approach. The following theorems provide convergence analysis for this method; a minimal numerical sketch follows them.

Theorem 23. For an appropriate function space Φ, we have

$$\begin{aligned} E_t[\psi(X_T)] &= \arg\min_{\phi\in\Phi}\, E\big[(\psi(X_T)-\phi(t,X_t))^2\big]\\ &= \arg\min_{\phi\in\Phi}\, E\Big[(\psi(X_T)-\phi(t,X_t))^2\sum_{k=1}^K\mathbf 1_{X_t\in U_t^k}\Big]\\ &= \arg\min_{\phi\in\Phi}\, E\Big[\Big(\psi(X_T)\sum_{k=1}^K\mathbf 1_{X_t\in U_t^k}-\sum_{k=1}^K\phi(t,X_t)\,\mathbf 1_{X_t\in U_t^k}\Big)^2\Big]\\ &= \arg\min_{\sum_{k=1}^K\phi_k(t,x)\mathbf 1_{x\in U_t^k}\in\Phi}\ \sum_{k=1}^K E\big[(\psi(X_T)-\phi_k(t,X_t))^2\,\mathbf 1_{X_t\in U_t^k}\big]. \end{aligned} \tag{13}$$

Theorem 24. Let $\mathcal H_t^J$ be as described previously and $\mathcal H_t = \{\phi(t,X_t)\,|\,\phi\in\Phi\}$. Then, we have

$$\lim_{\max_{1\le k\le K} d_{U_t^k}\to 0}\Big\|\sum_{k=1}^K\hat\phi_k(t,X_t)\,\mathbf 1_{X_t\in U_t^k} - \phi(t,X_t)\Big\|_{L^2(\mathcal F_t)} = 0 \tag{14}$$

with J large enough, fixed and finite, where $\hat\phi_k$ is an approximation to $\phi_k$ which satisfies

$$E\big[(\phi_k(t,X_t)-\hat\phi_k(t,X_t))^2\,\mathbf 1_{X_t\in U_t^k}\big] \le \epsilon_K \tag{15}$$

for any $k = 1, 2, \ldots, K$, $K\in\mathbb N$, with $\lim_{K\to\infty} K\epsilon_K = 0$ and $\epsilon_K$ independent of k when K is sufficiently large.
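The snippet below is a minimal sketch of Method 2: the partition $\{U_t^k\}$ is built with scikit-learn's KMeans (an illustrative stand-in for the R kmeans function mentioned above) and a local linear regression plays the role of the finite-order expansion on each piece; S_t and y are the simulated quantities from the Method 1 sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

# Minimal sketch of Method 2: k-means partition of the domain of X_t, then a local
# linear fit phi_k on each piece U_t^k. S_t and y come from the Method 1 sketch above.
K_parts = 20
labels = KMeans(n_clusters=K_parts, n_init=10, random_state=0).fit_predict(S_t.reshape(-1, 1))

cond_exp_local = np.empty_like(y)
for k in range(K_parts):
    idx = labels == k
    A = np.column_stack([np.ones(idx.sum()), S_t[idx]])   # local basis: 1, x
    coef_k, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    cond_exp_local[idx] = A @ coef_k                       # phi_k(t, X_t) on U_t^k
print(cond_exp_local[:5])
```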

Method 3

Next, we propose an algorithm combining the ANN and the universal approximation theorem (UAT). Suppose that $L^2(\mathcal F_t)$ is the space in which we are performing the approximation. Also assume that $\mathcal F_t^{W,N} = \mathcal F_t^{X}$, i.e., the information filtration is equivalently generated by X. Define an ANN with N connections by $\mathrm{ANN}(x, N, \theta_j, j)$, where x is the state variable that the ANN depends on, $\theta_j$ is the vector of parameters and j is its label. We define the following nested regression approximation

$$\psi(X_T) = \mathrm{ANN}(X_t, N, \theta_1, 1) + \epsilon_{t,T}^{1} \tag{16}$$

$$\epsilon_{t,T}^{1} = \mathrm{ANN}(X_t, N, \theta_2, 2) + \epsilon_{t,T}^{2} \tag{17}$$

$$\epsilon_{t,T}^{2} = \mathrm{ANN}(X_t, N, \theta_3, 3) + \epsilon_{t,T}^{3} \tag{18}$$

$$\vdots \tag{19}$$

$$\epsilon_{t,T}^{J} = \mathrm{ANN}(X_t, N, \theta_{J+1}, J+1) + \epsilon_{t,T}^{J+1} \tag{20}$$

$$\vdots \tag{21}$$

where $\big\{\sum_{j=1}^{J+1}\mathrm{ANN}(X_t, N, \theta_j, j)\big\}_{J=0}^{\infty}$ is the approximating sequence of $E_t[\psi(X_T)]$.
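A minimal sketch of this residual scheme, again with scikit-learn's MLPRegressor as an illustrative stand-in and with S_t and y from the Method 1 sketch: each small network is fitted to the residual left by the sum of the previous ones.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Minimal sketch of Method 3 (Equations (16)-(21)): fit small ANNs to successive
# residuals and accumulate their sum. S_t and y come from the Method 1 sketch above.
X_feat = S_t.reshape(-1, 1)
residual = y.copy()
approx = np.zeros_like(y)

for j in range(3):                                   # three nested ANNs of size N = 4
    ann_j = MLPRegressor(hidden_layer_sizes=(4,), activation="tanh",
                         max_iter=2000, random_state=j)
    ann_j.fit(X_feat, residual)
    fit_j = ann_j.predict(X_feat)
    approx += fit_j                                  # running sum approximates E_t[psi(X_T)]
    residual -= fit_j                                # next residual epsilon^{j+1}
print(approx[:5])
```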

In this paper, we will test and compare the performance of all of the proposed methods. A general discussion and rigorous proofs can be found in Appendix A (Note 5).

3. Applications in Derivatives Pricing

3.1. European Option Pricing

Suppose that the payoff of a European claim can be written, similarly to [62] and [63], as $(f, \psi)$, where $f_t$ is a stream of cash flows materialized at each time instant t and $\psi_T$ is a one-time terminal payoff at time T. Therefore, under the no-arbitrage condition, the price of this European payoff can be written, under the risk-neutral measure, as

$$V_t^e := E_t\Big[\int_t^T D_{t,u}\,f_u\,du + D_{t,T}\,\psi_T\Big] \tag{22}$$

where $D_{t,u} := e^{-\int_t^u r_v\,dv}$ is the stochastic discount factor. If we assume a Markov structure $f_t = f(t, X_t)$ and $\psi_T = \psi(X_T)$, then $V_t^e := v^e(t, X_t)$, i.e., $V_t^e$ is a function of time t and the state vector $X_t$. This problem is a canonical application of the evaluation of conditional expectations and we can apply the methodologies outlined in Section 2 to solve it. European claims with barrier features can be incorporated and priced in a similar way. For example, the price of a knock-in European claim can be written as

$$V_t^e := E_t\Big[\int_{\tau}^T D_{t,u}\,f_u\,du + D_{\tau,T}\,\psi_T\Big] \tag{23}$$

where $\tau = \inf\{v\in[t,T]\,:\,X_v\in\mathcal T\}$ with $X_t\notin\mathcal T$, and $\mathcal T\subseteq\mathbb R^r$ is the knock-in region. In our setting, the dynamics of X can be arbitrary: possibly stochastic differential equations with jumps, Markov chains, or even non-Markov processes. Earlier Monte Carlo based methods for option pricing can be found in [64] and [65], among others.
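As a simple illustration of Equation (23) with no intermediate cash flows (f ≡ 0), the sketch below prices an up-and-in European call by brute-force Monte Carlo under assumed Black-Scholes dynamics; it is a sanity-check example rather than the regression approach of Section 2, and all parameter values are illustrative.

```python
import numpy as np

# Minimal sketch: up-and-in European call under assumed Black-Scholes dynamics.
# The terminal payoff is paid only if the barrier B is touched before T (f = 0).
rng = np.random.default_rng(2)
r, sigma, s0, K, B, T = 0.05, 0.20, 1.0, 1.0, 1.15, 0.5
M, N = 100_000, 50
h = T / N

S = np.full(M, s0)
knocked_in = np.zeros(M, dtype=bool)
for _ in range(N):
    S *= np.exp((r - 0.5 * sigma**2) * h + sigma * np.sqrt(h) * rng.standard_normal(M))
    knocked_in |= S >= B                      # record barrier crossing path by path

payoff = np.where(knocked_in, np.maximum(S - K, 0.0), 0.0)
price = np.exp(-r * T) * payoff.mean()
print(price)
```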

3.2. American Option Pricing

We still use $(f, \psi)$ to denote the payoff structure of an American claim, whose price can be obtained via the formula

$$V_t^a := \sup_{\tau\in\mathcal S[t,T]} E_t\Big[\int_t^{\tau} D_{t,u}\,f_u\,du + D_{t,\tau}\,\psi_{\tau}\Big]. \tag{24}$$

Here $\mathcal S[t,T]$ is the space of all stopping times taking values in $[t,T]$. We refer the interested reader to [62] and [66] for the general derivation and explanation of Equation (24). It is also possible to derive the general BSDE that an American claim price satisfies, see for example [67]. Moreover, in [27] and [1], the authors utilize a backward induction approach to solve optimal stopping problems; the idea can be carried out using the methodologies documented in Section 2, as sketched below. American claims with barrier features can be incorporated and priced in a similar way. It is also known that American option prices can be related to reflected BSDEs (RBSDEs); a rigorous discussion of existence and uniqueness of such equations can be found in [68] and references therein.
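The following is a minimal Longstaff-Schwartz style backward induction for an American put under assumed Black-Scholes dynamics, with a quadratic polynomial regression for the continuation value at each exercise date; it sketches the backward induction idea of [1] referenced above, not the authors' exact implementation, and all parameter values are illustrative.

```python
import numpy as np

# Minimal Longstaff-Schwartz sketch: American put under assumed Black-Scholes dynamics.
rng = np.random.default_rng(3)
r, sigma, s0, K, T = 0.03, 0.20, 100.0, 100.0, 1.0
M, N = 100_000, 50
h = T / N
disc = np.exp(-r * h)

# Simulate all paths forward; S[i] holds the stock price at time (i + 1) * h.
z = rng.standard_normal((N, M))
S = s0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * h + sigma * np.sqrt(h) * z, axis=0))

# Backward induction: regress discounted continuation values on basis functions of S.
cash = np.maximum(K - S[-1], 0.0)                        # payoff at maturity
for i in range(N - 2, -1, -1):
    cash *= disc                                         # discount one step back
    exercise = np.maximum(K - S[i], 0.0)
    itm = exercise > 0                                   # regress on in-the-money paths only
    basis = np.column_stack([np.ones(itm.sum()), S[i, itm], S[i, itm]**2])
    coef, *_ = np.linalg.lstsq(basis, cash[itm], rcond=None)
    continuation = basis @ coef
    ex_now = exercise[itm] > continuation
    cash[np.flatnonzero(itm)[ex_now]] = exercise[itm][ex_now]

price = disc * cash.mean()                               # value at time 0
print(price)
```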

4. Numerical Experiments

4.1. European Option Pricing

In this section, we consider a Heston model

$$\frac{dS_t}{S_t} = r\,dt + \sqrt{\nu_t}\,dW_t, \quad S_0 = s_0 \tag{25}$$

$$d\nu_t = \kappa(\theta - \nu_t)\,dt + \sigma\sqrt{\nu_t}\,\big(\rho\,dW_t + \sqrt{1-\rho^2}\,dB_t\big), \quad \nu_0 = v_0 \tag{26}$$

where $(W, B)$ is a two-dimensional standard Brownian motion. The parameter values are chosen as $r = 0.05$, $\kappa = 1.00$, $\theta = 0.04$, $\sigma = 0.10$, $\rho = 0.50$, $s_0 = 1.00$, $K = 1.00$ and $v_0 = 0.04$. The time to maturity is set to $T = 0.50$, with time discretization step $h = 0.01$ and $N = T/h = 50$. The number of simulation paths is $M = 10000$. We price a plain vanilla European call option $(S_T - K)^+$ as an illustration. The QQ-plots are displayed in Figures 1-10. The first three correspond to a recursive evaluation, i.e., regressing the values at $t+1$ on state variables at time t. The rest of the plots correspond to direct regression, i.e., regressing the discounted payoffs at time T on state variables at time t. Figures 10-12 are for the prices of a digital call option under the Black-Scholes setting and Figures 13-15 are QQ-plots for Delta values. Figure 16 and Figure 17 show the QQ-plots for Method 3 under the Heston model with 3 nested ANN approximations of size 4 and with one ANN approximation of size 12, respectively, using the R routine nnet. The absolute RMSE for the former is 0.1938% and for the latter 0.2581%, with a running time of 10.36 seconds compared with 52.31 seconds for the ANN approximation of size 12.
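The sketch below reproduces the simulation setup (an Euler discretization of Equations (25)-(26); the full truncation of the variance is an assumed implementation detail) and the discounted call payoffs that serve as the regression target in the direct-regression experiments.

```python
import numpy as np

# Minimal sketch of the experiment setup: Euler scheme for the Heston model (25)-(26).
# Full truncation of the variance process is an assumed implementation detail.
rng = np.random.default_rng(4)
r, kappa, theta, sigma, rho = 0.05, 1.00, 0.04, 0.10, 0.50
s0, v0, K, T, h, M = 1.00, 0.04, 1.00, 0.50, 0.01, 10_000
N = int(T / h)

S = np.full(M, s0)
v = np.full(M, v0)
for _ in range(N):
    zW, zB = rng.standard_normal(M), rng.standard_normal(M)
    v_pos = np.maximum(v, 0.0)
    S *= np.exp((r - 0.5 * v_pos) * h + np.sqrt(v_pos * h) * zW)
    v += kappa * (theta - v_pos) * h + sigma * np.sqrt(v_pos * h) * (rho * zW + np.sqrt(1 - rho**2) * zB)

# Discounted call payoffs: the regression target for the direct-regression experiments.
payoff = np.exp(-r * T) * np.maximum(S - K, 0.0)
print(payoff.mean())
```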

Figure 1. QQ-plot for Method 1, τ = 0.05 and relative pricing error is 1.20%.

Figure 2. QQ-plot for Method 1, τ = 0.25 and relative pricing error is 1.50%.

Figure 3. QQ-plot for Method 1, τ = 0.45 and relative pricing error is 1.20%.

Figure 4. QQ-plot for Method 1, τ = 0.05 and relative pricing error is 1.66%.

Figure 5. QQ-plot for Method 1, τ = 0.20 and relative pricing error is 1.75%.

Figure 6. QQ-plot for Method 1, τ = 0.30 and relative pricing error is 3.00%.

Figure 7. QQ-plot for Method 2, τ = 0.05 and relative pricing error is 1.80%.

Figure 8. QQ-plot for Method 2, τ = 0.20 and relative pricing error is 3.50%.

Figure 9. QQ-plot for Method 2, τ = 0.30 and relative pricing error is 3.53%.

Figure 10. QQ-plot for Method 1, τ = 0.02 and relative pricing error is 0.40%.

Figure 11. QQ-plot for Method 1, τ = 0.05 and relative pricing error is 0.80%.

Figure 12. QQ-plot for Method 1, τ = 0.08 and relative pricing error is 0.60%.

Figure 13. Delta QQ-plot for Method 1, τ = 0.02 .

Figure 14. Delta QQ-plot for Method 1, τ = 0.05 .

Figure 15. Delta QQ-plot for Method 1, τ = 0.08 .

Figure 16. Price QQ-plot for Method 3, τ = 0.20 .

Figure 17. Price QQ-plot for Method 3, τ = 0.20 .

4.2. American Option Pricing

Here we refer the readers to [67] for the BSDE satisfied by a plain vanilla American option. For $r = 0.03$, $d = 0.07$, $\sigma = 0.20$, $T = 3.00$, $N = 150$, $S_0 = 100$ and $K = 100$, the benchmark American option price at $t_0 = 0$ is 9.0660 and the relative difference of our Monte Carlo price is 0.27%. The running time is less than 30 seconds.

5. Conclusion and Future Research

In this paper, we show how machine learning techniques, specifically ANN function approximation methods, can be applied to derivatives pricing. We relate pricing problems to the evaluation of conditional expectations via BSDEJs and PIDEs. Future research topics can, potentially, be the development of reinforcement learning methodologies to solve dynamic programming problems and their application in the context of the empirical asset pricing literature. Moreover, the evaluation of energy derivatives calls for SDEJs defined in a Hilbert space. The same theoretical constructions can also be found in the evaluation of fixed income derivatives, such as the random field models proposed and studied in [69]. One can, of course, apply the Karhunen-Loève expansion for a dimension reduction to reduce the problem to the evaluation of conditional expectations of regular SDEJs. However, the development of machine learning based methods that directly compute conditional expectations of stochastic processes defined in a Hilbert space is important. In addition, stochastic differential games, which arise in the context of American game options and equity swaps, and the related McKean-Vlasov type FBSDEJs (mean-field FBSDEJs, see [70]) are important topics in mathematical finance. They are also related to the theoretical analysis of high-frequency trading. Finding machine-learning based numerical methods to solve these equations is of great interest to us. Last, but not least, machine learning methods in asset pricing and portfolio optimization, which can be found in [71], [72], [73], [28], [74] and [75], admit an elegant way to price financial derivatives under the physical measure. For example, we can use the method in [72] to calibrate the SDF process and use [75] to generate market scenarios. These methodologies, combined with the methods documented in this paper and [1], have the potential to solve for any derivative price. We leave all these developments to future research.

Acknowledgements

We thank the Editor and the referee for their comments. Moreover, we are grateful to Professor Jérôme Detemple, Professor Marcel Rindisbacher and Professor Weidong Tian for their useful suggestions.

Appendix

A. Convergence of the Proposed Methodologies

Proof of Theorem 10. It is known from the Hilbert space projection theorem that $\{h_n\}_{n=1}^{\infty}$ and h actually exist and are unique. Moreover, $\mathrm{PROJ}_{\mathcal H_n} h = h_n$, as indicated by the repeated projection theorem. It is also known that $h - h_n \in \mathrm{ORTH}_{\mathcal H_n}\mathcal H$. As Assumption 6 is required to hold, we know that $\|h - h_n\|_{\mathcal X} \to 0$ as $n\to\infty$.

Proof of Theorem 12. The proof follows from Assumption 6 and Theorem 8. We have

$$\lim_{n\to\infty}\mathrm{PROJ}_{\Psi\cap\mathcal H_n}\,x = \lim_{n\to\infty}\mathrm{PROJ}_{\Psi\cap\mathcal H_n}\big(\mathrm{PROJ}_{\mathcal H_n}\,x\big) = \lim_{n\to\infty}\mathrm{PROJ}_{\Psi\cap\mathcal H_n}\,h_n = \mathrm{PROJ}_{\Psi\cap\mathcal H}\,h = h. \tag{27-31}$$

This concludes the proof.

Proof of Lemma 1. For any $\lambda_t \in L^2(\mathcal F_t)$, we have

$$\begin{aligned} E\big[(\xi_T - \lambda_t)^2\big] &= E\big[(\xi_T - E_t[\xi_T])^2\big] + E\big[(\lambda_t - E_t[\xi_T])^2\big] - 2\,\underbrace{E\big[(\lambda_t - E_t[\xi_T])(\xi_T - E_t[\xi_T])\big]}_{=\,0}\\ &= E\big[(\xi_T - E_t[\xi_T])^2\big] + E\big[(\lambda_t - E_t[\xi_T])^2\big]\\ &\ge E\big[(\xi_T - E_t[\xi_T])^2\big]. \end{aligned} \tag{32-36}$$

Therefore we have the announced claim.

Proof of Theorem 17. The proof of this theorem follows from Assumptions 1, 2, 6, 14, 15 and Theorem 10, by choosing $\overline{\{E_t[\xi_T]\,|\,\xi_T\in L^2(\mathcal F_T),\; E_t[\xi_T]\in L^2(\mathcal F_t)\}} \subseteq \mathcal H_t \subseteq L^2(\mathcal F_t) \subseteq L^2(\mathcal F_T) = \mathcal X_T$.

Proof of Theorem 18. Essentially, Equation (7) is the result of the Gauss-Markov theorem and the consistency property of the OLS estimator.

Proof of Proposition 19. This is a direct consequence of the discussion in ([57], Section 3) (see Equation (5)) and Theorem 10. To elaborate, consider $\mathcal X_T = L^2(\mathcal F_T)$, $x = \psi(X_T)$ and its projections h and $h_n$ onto $\mathcal H_t = \overline{\bigcup_{n=1}^{\infty}\mathcal H_t^n} \subseteq L^2(\mathcal F_t)$ and onto the $\mathcal H_t^n$ defined in this proposition. Suppose that

$$h = \sum_{j=1}^{\infty}\lambda_j\, e_t^j \quad\text{and}\quad h_n = \sum_{j=1}^{m_n}\mu_j^n\, e_t^j,$$

where $m_n < m_{n+1}$ and $\{e_t^j\}_{j=1}^{\infty}$ is a set of orthonormal basis elements of $\mathcal H_t$. From the repeated projection theorem, we know that $\mu_j^{n+1} = \mu_j^n = \lambda_j$ for any $1\le j\le m_n$ (Note 6) and $n\in\mathbb N$. From the $L^2$ property of h, we know that $\sum_{j=1}^{\infty}\lambda_j^2 < \infty$. Therefore, $\|h - h_n\|_{L^2(\mathcal F_T)}^2 = \sum_{j=m_n+1}^{\infty}\lambda_j^2 \to 0$ as $n\to\infty$.

Proof of Proposition 20. This is a direct consequence of the discussion in ( [58], Theorem 2.2), localization arguments, Theorem 10 and the proof of Proposition 19.

Proof of Theorem 23. The first, second and third equalities are obvious given an appropriate choice of Φ, depending on the Markov property of X and its moment conditions in Assumption 2. Actually, because of the existence and uniqueness of $\phi\in\Phi$ such that the RHS of the first equality attains its minimum, we know that

$$\min_{\phi\in\Phi}\, E\Big[\Big(\psi(X_T)\sum_{k=1}^K\mathbf 1_{X_t\in U_t^k}-\sum_{k=1}^K\phi(t,X_t)\,\mathbf 1_{X_t\in U_t^k}\Big)^2\Big] \ \ge\ \min_{\sum_{k=1}^K\phi_k(t,x)\mathbf 1_{x\in U_t^k}\in\Phi}\ \sum_{k=1}^K E\big[(\psi(X_T)-\phi_k(t,X_t))^2\,\mathbf 1_{X_t\in U_t^k}\big]. \tag{37-38}$$

From another perspective, we know that $\min_{\sum_{k=1}^K\phi_k(t,x)\mathbf 1_{x\in U_t^k}\in\Phi}\sum_{k=1}^K E\big[(\psi(X_T)-\phi_k(t,X_t))^2\,\mathbf 1_{X_t\in U_t^k}\big]$ is a piecewise minimization over functions that belong to Φ. Therefore

$$\min_{\phi\in\Phi}\, E\Big[\Big(\psi(X_T)\sum_{k=1}^K\mathbf 1_{X_t\in U_t^k}-\sum_{k=1}^K\phi(t,X_t)\,\mathbf 1_{X_t\in U_t^k}\Big)^2\Big] \ \le\ \min_{\sum_{k=1}^K\phi_k(t,x)\mathbf 1_{x\in U_t^k}\in\Phi}\ \sum_{k=1}^K E\big[(\psi(X_T)-\phi_k(t,X_t))^2\,\mathbf 1_{X_t\in U_t^k}\big]. \tag{39-40}$$

Hence the last equality in Equation (13) holds.

Proof of Theorem 24. The proof of this theorem is a direct consequence of Equations (13) and (15) and the triangle inequality.

B. Other Applications

In this section, we document other applications of our methodologies in finance.

B.1. Joint Valuation and Calibration

Suppose that there are N derivatives contracts whose prices at time $t_0$ can be expressed as $\{V_{t_0}^n\}_{n=1}^N$. Their payoffs are $\{\varphi_n(X)\}_{n=1}^N$, where X is an r-dimensional vector of state variables. Sometimes we write $X^{\theta}$ to explicitly state the dependence of X on its vector of parameters θ. Here suppose $X^{\theta}$ satisfies a system of stochastic differential equations with jumps

$$dX_t^{\theta} = \mu(t, X_t^{\theta}\,|\,\theta)\,dt + \sigma(t, X_t^{\theta}\,|\,\theta)\,dW_t + \int_E\gamma(t, X_t^{\theta}, e\,|\,\theta)\,\tilde N(dt, de). \tag{41}$$

The main idea is that $\{V_{t_0}^n\}_{n=1}^N$ might contain derivatives contracts from different asset classes, or hybrid ones. Therefore, we need to model X as a joint high-dimensional cross-asset system. One potential problem is that θ is in general a high-dimensional vector, which will be hard to estimate using the usual optimization routines in the R or MATLAB software systems. However, we can apply the ADAM method, studied in [76], for the parameter estimation. It is based on a stochastic iteration via the gradient of the MSE function. The key to evaluating the gradient of the MSE function is to evaluate the dynamics of $\partial_{\theta}X_t^{\theta}$, which satisfies the following system of SDEJs

$$\begin{aligned} d\,\partial_{\theta}X_t^{\theta} ={}& \partial_{\theta}\mu(t, X_t^{\theta}\,|\,\theta)\,dt + \partial_x\mu(t, X_t^{\theta}\,|\,\theta)\,\partial_{\theta}X_t^{\theta}\,dt + \partial_{\theta}\sigma(t, X_t^{\theta}\,|\,\theta)\,dW_t + \partial_x\sigma(t, X_t^{\theta}\,|\,\theta)\,\partial_{\theta}X_t^{\theta}\,dW_t\\ &+ \int_E\partial_{\theta}\gamma(t, X_t^{\theta}, e\,|\,\theta)\,\tilde N(dt, de) + \int_E\partial_x\gamma(t, X_t^{\theta}, e\,|\,\theta)\,\partial_{\theta}X_t^{\theta}\,\tilde N(dt, de). \end{aligned} \tag{42}$$

The existence and uniqueness of the solution to the SDEJ system (42) can be obtained with necessary regularity conditions on the coefficients.
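A minimal ADAM sketch for this calibration step is given below; grad_mse is a hypothetical placeholder for the Monte Carlo gradient of the pricing MSE obtained by simulating Equations (41)-(42) jointly, and the toy quadratic gradient at the end only demonstrates the update rule.

```python
import numpy as np

# Minimal ADAM sketch for the calibration step. grad_mse(theta) is a hypothetical
# placeholder for the Monte Carlo gradient of the pricing MSE from Equations (41)-(42).
def adam_calibrate(grad_mse, theta0, lr=0.01, beta1=0.9, beta2=0.999,
                   eps=1e-8, n_steps=500):
    theta = np.asarray(theta0, dtype=float).copy()
    m = np.zeros_like(theta)              # first moment estimate
    v = np.zeros_like(theta)              # second moment estimate
    for step in range(1, n_steps + 1):
        g = grad_mse(theta)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**step)     # bias corrections
        v_hat = v / (1 - beta2**step)
        theta -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# Toy usage: a quadratic MSE surrogate in place of the simulated gradient.
theta_hat = adam_calibrate(lambda th: 2.0 * (th - np.array([1.0, 0.2])), np.zeros(2))
print(theta_hat)
```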

B.2. Option Surface Fitting

There is a strand of literature that strives to fit option panels using different dynamics for the underlying assets, for example, [77] on stochastic volatility models, [78] on local volatility models and [79] on local-stochastic volatility models. Models that incorporate jumps can be found in [80], [81] and references therein.

Consider the following stochastic differential equation

$$\frac{dS_t}{S_t} = r(t, X_t)\,dt + \sigma(t, S_t, X_t)\,dW_t, \quad S_0 = s_0; \qquad dX_t = \alpha(t, X_t)\,dt + \beta(t, X_t)\,dW_t, \quad X_0 = x_0. \tag{43}$$

Here we model σ by a DNN. The advantage of doing so is that it might fully capture the market volatility surface while ensuring a good dynamic fit, and it still preserves the existence and uniqueness result for the related stochastic differential equation system (43).

B.3. Credit Risk Management: Evaluation on a Future Simulation Grid

We refer to [6] for the problem definition. It is easy to show that the problem is equivalent to the evaluation of conditional expectations on a future simulation grid and that our methods are suitable for this type of problem. Note that some XVA quantities, such as KVA, require the evaluation of CVA on a future simulation grid. Our methodologies, such as the ones proposed in Sections 2 and B.7, can be applied to the evaluation of KVA, once we obtain future present values of financial claims.

B.4. Dynamic Hedging

There are references that utilize machine learning (mainly Reinforcement Learning, or RL) to solve dynamic hedging problems, e.g., [82], [83] and [84]. However, here in this paper we will not follow this route. Instead, we use the BSDE formulation of the problem in [2] and try to solve the BSDE that characterizes the hedging problem. The methodology is outlined in Appendix B.11.

B.5. Dynamic Portfolio-Consumption Choice

We use [85] as an example and try to solve the related coupled FBSDE with jumps. The methodology is outlined in Appendix B.11. Other examples of dynamic portfolio optimization can be found in [53], [86], [87], [88], [89], [90], [91], [92], [93] and [94]. Essentially, dynamic portfolio-consumption choice problems are stochastic programming problems in nature and can be related to HJB equations or BSDEs. An example using the HJB representation of the problem can be found in [95]. The equations can be solved using the methodologies outlined in Section 2 and Appendix B.11.

B.6. Transition Density Approximation

We can generalize the theory in [96] and [97] to approximate the transition density of a multivariate time-inhomogeneous stochastic differential equation with jumps. According to [96] and [97], the transition density of a multivariate time-inhomogeneous stochastic differential equation with or without jumps can be approximated by polynomials in a weighted Hilbert space; see ([97], Equation (2.1)), for example. The key is to evaluate the coefficients $\{c_{\alpha}\}_{\alpha}$, which is, again, the evaluation of conditional expectations. The resulting transition density can be used in option pricing, MLE estimation for MSDEJs, and prediction, filtering and smoothing problems for hidden Markov models, see [98].

B.7. Evaluating Conditional Expectations via a Measure Change

Consider the following equation

$$E_t[\psi(X_{\tau})] = \int_{\mathbb R^r}\Gamma(t, x; \tau, y)\,\psi(y)\,dy \tag{44}$$

$$= \int_{\mathbb R^r}\Gamma_0(t_0, x; \tau, y)\,\frac{\Gamma(t, x; \tau, y)}{\Gamma_0(t_0, x; \tau, y)}\,\psi(y)\,dy \tag{45}$$

where $\Gamma_0$ is the transition density of a stochastic differential equation with jumps that can be simulated for arbitrary $(t, \tau)$ without using time discretization (Note 7), and Γ is the transition density function of X. Γ can be approximated by the method outlined in Appendix B.6. It is immediately obvious that we can generate random numbers from $\Gamma_0$ and reuse them for the evaluation of the conditional expectation on the left-hand side of Equation (44) for different $(t, \tau)$.
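A minimal importance-sampling sketch of Equation (45): draws from a fixed proposal density Γ₀ are generated once and then reweighted by Γ/Γ₀ for each target horizon; the Gaussian densities below are illustrative stand-ins for the simulated Γ₀ and the approximated Γ.

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch of Equation (45): sample once from the proposal density Gamma_0 and
# reuse the draws, reweighting by Gamma / Gamma_0 for each target (t, tau).
# Both densities are Gaussian stand-ins used only for illustration.
rng = np.random.default_rng(5)
M = 200_000
y = rng.normal(loc=0.0, scale=1.5, size=M)             # draws from Gamma_0
gamma0 = norm.pdf(y, loc=0.0, scale=1.5)

def cond_exp(psi, mean, std):
    # E[psi(X_tau)] when the approximated transition density Gamma is N(mean, std^2).
    weights = norm.pdf(y, loc=mean, scale=std) / gamma0
    return np.mean(weights * psi(y))

# The same draws serve two different horizons tau (two different Gamma's).
print(cond_exp(lambda x: np.maximum(x, 0.0), mean=0.05, std=0.4))
print(cond_exp(lambda x: np.maximum(x, 0.0), mean=0.10, std=0.8))
```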

B.8. Empirical Asset Pricing with Factor Models: Evaluating Expected Returns

In this section, we propose to use machine learning, mainly ANN techniques, to construct factor models and evaluate the conditional expected asset returns and risk premia cross-sectionally. Related references are [28] and [74], among others. [3] provides a good example with basis function expansion to capture the non-linearity in asset returns. Specifically, consider the following lead-lag regression

$$R_{t+1} = f(t, X_t) + \varepsilon_{t,t+1}. \tag{46}$$

Here $E_t[\varepsilon_{t,t+1}] = 0$ and X is a set of risk factors. Then, $E_t[R_{t+1}] = f(t, X_t)$. Linear factor models assume that $f(t, x) = a_t + b_t x$. f can also be approximated by basis function expansion, using the universal approximation theorem, or via ANNs. The fitted conditional expected asset returns can be fed into the mean-variance optimizer, i.e., [99], to construct long-short portfolios or other trading strategies.
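A minimal sketch of the lead-lag regression (46) with an ANN for f; the factor panel and return series below are simulated placeholders rather than real market data, and scikit-learn's MLPRegressor is again only an illustrative choice.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Minimal sketch of Equation (46): fit f in R_{t+1} = f(t, X_t) + eps by an ANN.
# The factor data and returns below are simulated placeholders, not real market data.
rng = np.random.default_rng(6)
T_obs, n_factors = 2_000, 5
X = rng.standard_normal((T_obs, n_factors))                # factors observed at time t
signal = 0.01 * X[:, 0] - 0.02 * X[:, 1]**2                # assumed nonlinear signal
R_next = signal + 0.05 * rng.standard_normal(T_obs)        # returns realized at t + 1

model = MLPRegressor(hidden_layer_sizes=(16,), activation="tanh",
                     max_iter=3000, random_state=0)
model.fit(X, R_next)

# Fitted conditional expected returns E_t[R_{t+1}] = f(t, X_t), ready for an optimizer.
expected_returns = model.predict(X)
print(expected_returns[:5])
```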

B.9. Recovery and Representation Theorem

In [100], the authors propose a model-free recovery theorem based on a series expansion of higher-order conditional moments of asset returns. Their work inspires us to exploit the ANN factor models to represent the higher-order conditional moments of asset returns and thereby validate the recovery theorem proposed therein. Moreover, similar to [57], our machine learning approximation to the conditional expectations of financial payoffs amounts to a compound option representation of arbitrary $L^2$-claims in the financial economic system. Also, the second numerical method means that any financial claim can be locally approximated by a linear combination of power derivatives, following the same idea.

B.10. Theoretical Asset Pricing via Dynamic Stochastic General Equilibrium

Note that, the equation systems proposed in [101], [102] and [103] can be transformed into BSDEs and we can use time discretization and apply the techniques proposed in Section 2 and Appendix B.11 to solve them. In this paper, however, we will not test our methods on this strand of literature.

B.11. Solving High-Dimensional CFBSDEJs

A coupled forward-backward stochastic differential equation with jumps (CFBSDEJ) can be written as

$$\begin{aligned} dX_t &= \mu(t, X_t, Y_t, Z_t, V_t)\,dt + \sigma(t, X_t, Y_t, Z_t, V_t)\,dW_t + \int_E\gamma(t, X_t, Y_t, Z_t, V_t, e)\,\tilde N(dt, de), \quad X_0 = x_0\\ dY_t &= -f(t, X_t, Y_t, Z_t, V_t)\,dt + Z_t\,dW_t + \int_E U_t(e)\,\tilde N(dt, de)\\ V_t &= \int_E U_t(e)\,\nu(de), \qquad Y_T = \phi(X_T) \end{aligned} \tag{47}$$

where $\tilde N(dt, de) = N(dt, de) - \nu(de)\,dt$ is a compensated Poisson random measure. We take the following steps to solve Equation (47) numerically.

Time Discretization

Discretize the time interval $[t, T]$ into n equal-length sub-intervals $\pi = \{[t_i, t_{i+1})\}_{i=0}^{n-1}$ with $h = t_{i+1} - t_i = \frac{T - t}{n}$, $t_0 = t$ and $t_n = T$. Consider the following Euler-discretized equation

$$\begin{aligned} dX_{t_i} &= \mu(t_i, X_{t_i}, Y_{t_i}, Z_{t_i}, V_{t_i})\,h + \sigma(t_i, X_{t_i}, Y_{t_i}, Z_{t_i}, V_{t_i})\,dW_{t_i} + \int_E\gamma(t_i, X_{t_i}, Y_{t_i}, Z_{t_i}, V_{t_i}, e)\,\tilde N(dt_i, de), \quad X_0 = x_0\\ dY_{t_i} &= -f(t_i, X_{t_i}, Y_{t_i}, Z_{t_i}, V_{t_i})\,h + Z_{t_i}\,dW_{t_i} + \int_E U_{t_i}(e)\,\tilde N(dt_i, de)\\ V_{t_i} &= \int_E U_{t_i}(e)\,\nu(de), \qquad Y_T = \phi(X_T) \end{aligned} \tag{48}$$

where $dX_{t_i} := X_{t_{i+1}} - X_{t_i}$ and $dY_{t_i} := Y_{t_{i+1}} - Y_{t_i}$. Denote the solution to the time-discretized CFBSDEJ by $(X^{\pi}, Y^{\pi}, Z^{\pi}, U^{\pi})$. We need the following assumption.

Assumption 25. Under the norm $\|\cdot\|_{K^2[t,T]}$ introduced in [104], we have

$$\big\|(X, Y, Z, U) - (X^{\pi}, Y^{\pi}, Z^{\pi}, U^{\pi})\big\|_{K^2[t,T]} \to 0 \tag{49}$$

as $n\to\infty$.

Mollification

Define a sequence of functions $(\mu_m, \sigma_m, \gamma_m, f_m, \phi_m)$, which are bounded and have bounded derivatives of all orders, and

$$\lim_{m\to\infty}(\mu_m, \sigma_m, \gamma_m, f_m, \phi_m) = (\mu, \sigma, \gamma, f, \phi) \tag{50}$$

in a point-wise sense. Also denote the solution to the CFBSDEJ with coefficients $(\mu_m, \sigma_m, \gamma_m, f_m, \phi_m)$ by $(X^m, Y^m, Z^m, U^m)$. Then, we have the following theorem.

Theorem 26. Under Assumption 25,

$$E_t\big[g(X_u^{\pi,m}, Y_u^{\pi,m}, Z_u^{\pi,m}, V_u^{\pi,m})\big] \to E_t\big[g(X_u, Y_u, Z_u, V_u)\big] \tag{51}$$

as $n, m\to\infty$ for arbitrary $T > u > t > 0$, where g is a function with at most polynomial growth in its arguments.

Picard Iteration

After the time discretization and mollification are done, we resort to the Picard fixed-point iteration technique to decompose the solution $(X^{\pi,m}, Y^{\pi,m}, Z^{\pi,m}, U^{\pi,m})$ into a sequence of uncoupled FBSDEJs whose solutions are denoted by $(X^{\pi,m,k}, Y^{\pi,m,k}, Z^{\pi,m,k}, U^{\pi,m,k})$, where k denotes the index of the Picard iteration. For the zeroth order, consider

$$\begin{aligned} dX_{t_i}^{\pi,m,1} &= \mu_m(t_i, X_{t_i}^{\pi,m,1}, 0, 0, 0)\,h + \sigma_m(t_i, X_{t_i}^{\pi,m,1}, 0, 0, 0)\,dW_{t_i} + \int_E\gamma_m(t_i, X_{t_i}^{\pi,m,1}, 0, 0, 0, e)\,\tilde N(dt_i, de), \quad X_0^{\pi,m,1} = x_0\\ dY_{t_i}^{\pi,m,1} &= -f_m(t_i, X_{t_i}^{\pi,m,1}, Y_{t_i}^{\pi,m,1}, Z_{t_i}^{\pi,m,1}, V_{t_i}^{\pi,m,1})\,h + Z_{t_i}^{\pi,m,1}\,dW_{t_i} + \int_E U_{t_i}^{\pi,m,1}(e)\,\tilde N(dt_i, de)\\ V_{t_i}^{\pi,m,1} &= \int_E U_{t_i}^{\pi,m,1}(e)\,\nu(de), \qquad Y_T^{\pi,m,1} = \phi(X_T^{\pi,m,1}) \end{aligned} \tag{52}$$

For $k \ge 2$, define

$$\begin{aligned} dX_{t_i}^{\pi,m,k} &= \mu_m(t_i, X_{t_i}^{\pi,m,k}, Y_{t_i}^{\pi,m,k-1}, Z_{t_i}^{\pi,m,k-1}, V_{t_i}^{\pi,m,k-1})\,h + \sigma_m(t_i, X_{t_i}^{\pi,m,k}, Y_{t_i}^{\pi,m,k-1}, Z_{t_i}^{\pi,m,k-1}, V_{t_i}^{\pi,m,k-1})\,dW_{t_i}\\ &\quad + \int_E\gamma_m(t_i, X_{t_i}^{\pi,m,k}, Y_{t_i}^{\pi,m,k-1}, Z_{t_i}^{\pi,m,k-1}, V_{t_i}^{\pi,m,k-1}, e)\,\tilde N(dt_i, de), \quad X_0^{\pi,m,k} = x_0\\ dY_{t_i}^{\pi,m,k} &= -f_m(t_i, X_{t_i}^{\pi,m,k}, Y_{t_i}^{\pi,m,k}, Z_{t_i}^{\pi,m,k}, V_{t_i}^{\pi,m,k})\,h + Z_{t_i}^{\pi,m,k}\,dW_{t_i} + \int_E U_{t_i}^{\pi,m,k}(e)\,\tilde N(dt_i, de)\\ V_{t_i}^{\pi,m,k} &= \int_E U_{t_i}^{\pi,m,k}(e)\,\nu(de), \qquad Y_T^{\pi,m,k} = \phi(X_T^{\pi,m,k}) \end{aligned} \tag{53}$$

Evaluation of Conditional Expectations

For the equation system (53), we can start from the last time interval and work backwards. The problem is transformed into the evaluation of $E_{t_i}\big[u(t_{i+1}, X_{t_{i+1}}^{\pi,m,k})\big]$, where u is the intermediate solution and satisfies $u(T, \cdot) = \phi(\cdot)$.
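A minimal sketch of this backward step for a decoupled, jump-free special case with driver $f(t, x, y) = -r y$ (so that Y is the discounted conditional expectation of the terminal payoff): each conditional expectation $E_{t_i}[\cdot]$ is evaluated by the regression of Method 1 on a polynomial basis of $X_{t_i}$, and the forward dynamics and parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal backward-regression sketch for a decoupled, jump-free BSDE with driver
# f(t, x, y) = -r*y, so Y_t = E_t[exp(-r(T - t)) * phi(X_T)]. Each backward step
# evaluates E_{t_i}[Y_{t_{i+1}}] by polynomial regression on X_{t_i}, as in Method 1.
rng = np.random.default_rng(7)
r, sigma, x0, K, T = 0.05, 0.20, 1.0, 1.0, 0.5
M, N = 50_000, 50
h = T / N

# Forward Euler paths of X (an assumed risk-neutral GBM-type forward equation).
X = np.empty((N + 1, M))
X[0] = x0
for i in range(N):
    X[i + 1] = X[i] * (1.0 + r * h + sigma * np.sqrt(h) * rng.standard_normal(M))

# Backward recursion: Y_{t_i} = E_{t_i}[Y_{t_{i+1}}] + f(t_i, X_{t_i}, Y_{t_i}) * h.
Y = np.maximum(X[-1] - K, 0.0)                        # terminal condition Y_T = phi(X_T)
for i in range(N - 1, -1, -1):
    basis = np.column_stack([np.ones(M), X[i], X[i]**2, X[i]**3])
    coef, *_ = np.linalg.lstsq(basis, Y, rcond=None)
    cond = basis @ coef                               # regression estimate of E_{t_i}[Y_{t_{i+1}}]
    Y = cond / (1.0 + r * h)                          # implicit step with f = -r*y
print(Y.mean())                                        # approximation of Y_0
```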

B.12. Pricing Kernel Approximation

A pricing kernel $\eta_t$ is an $L^2(\mathcal F_t)$ stochastic process, adapted to the information filtration $\{\mathcal F_t\}_{0\le t\le T}$, such that

$$V_t = E_t\big[D_{t,T}\,\eta_{t,T}\,V_T\big] \tag{54}$$

where $V_T$ is an $\mathcal F_T$-measurable payoff, $D_{t,T} = \frac{D_T}{D_t} = e^{-\int_t^T r_v\,dv}$ and $\eta_{t,T} = \frac{\eta_T}{\eta_t}$. It is obvious that $\eta_t = E_t[\eta_T]$, i.e., η is a martingale. Represent

$$D_T\,\eta_T = \sum_{j=0}^{\infty} a_j\, e_T^j(\theta_j)$$

where $\{e_T^j\}_{j=0}^{\infty}$ is a set of orthonormal basis elements of the space $L^2(\mathcal F_T)$ and $\theta_j$ is the vector of coefficients of $e^j$. Suppose that we have K derivative contracts, denoted by $\{V_T^k\}_{k=1}^K$, with basis representations $V_T^k = \sum_{j=0}^{\infty} b_{kj}\, e_T^j(\theta_j)$. Therefore

$$V_{t_0}^k = E_{t_0}\Big[\sum_{j=0}^{\infty} a_j\, e_T^j(\theta_j)\,\sum_{j=0}^{\infty} b_{kj}\, e_T^j(\theta_j)\Big] = \sum_{j=0}^{\infty} a_j\, b_{kj}. \tag{55}$$

Equation (55), if truncated after J terms, formulates a linear equation system and the unknowns $\{a_j\}_{j=0}^{J}$ and $\{\theta_j\}_{j=0}^{J}$ can be recovered from ordinary least-squares optimization. After we obtain $\eta_T$, $\eta_t$ can be recovered by $\eta_t = E_t[\eta_T]$, via the methodology outlined in Section 2.

Remark 27. If $\{e_T^j(\theta_j)\}_{j=0}^{\infty}$ is not orthonormal, Equation (55) becomes nonlinear in $\{\theta_j\}_{j=0}^{J}$. The evaluations remain the same, with only more complicated numerical computations. The basis can also be represented by ANNs.

Remark 28. For a specific representation via universal approximation theorem, see [55].

Remark 29. It is possible to allow shape constraints in the estimation (55) and formulate a constrained optimization problem, see [105], for example.

We can also directly utilize the method proposed in Section 2, when combined with time discretization and Monte Carlo simulation. Denote by M the number of sample paths and by $\{V_T^{m,k}\}_{m=1,\,k=1}^{M,\,K}$ the M simulated final payoffs for each of the K derivatives. Define $\{a_m\}_{m=1}^{M}$ as M real numbers. Let $\{V_0^k\}_{k=1}^{K}$ be the K derivative prices at time $t_0 = 0$. Find the solution to the following optimization problem

$$\{a_m\}_{m=1}^{M} = \arg\min_{\{\phi_m\}_{m=1}^{M}}\Big[\sum_{k=1}^{K}\Big(V_0^k - \frac{1}{M}\sum_{m=1}^{M}\phi_m\, V_T^{m,k}\Big)^2\Big]. \tag{56}$$

After obtaining $\{a_m\}_{m=1}^{M}$, we try to find a functional relation g such that

$$a_m = g(T, X_T^m) = D_{0,T}^m\,\eta_T^m$$

where $\{X_T^m\}_{m=1}^{M}$ is the set of simulated state variables at time T. When fitting g, we can add some shape or no-arbitrage constraints, or other regularization conditions, to the optimization problem and formulate a constrained ANN (ACNN). We always assume that the matrix $\big(\{V_T^{m,k}\}_{m=1,\,k=1}^{M,\,K}\big)^{\top}\,\{V_T^{m,k}\}_{m=1,\,k=1}^{M,\,K}$ is a $K\times K$ invertible matrix, where $(\cdot)^{\top}$ is the matrix transpose operator.
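A minimal sketch of the optimization in Equation (56): the payoff matrix and time-0 prices below are simulated placeholders, and the minimum-norm least-squares solution recovers one set of pathwise weights that reprices the K contracts (the subsequent fitting of g is omitted).

```python
import numpy as np

# Minimal sketch of Equation (56): recover pathwise weights a_m = D_{0,T} * eta_T from
# observed time-0 prices by minimum-norm least squares. All data below are simulated
# placeholders used only to demonstrate the linear-algebra step.
rng = np.random.default_rng(8)
M, K = 5_000, 8
V_T = rng.lognormal(mean=0.0, sigma=0.3, size=(M, K))   # simulated payoffs V_T^{m,k}
true_a = np.exp(-0.05 + 0.1 * rng.standard_normal(M))   # assumed pathwise D_{0,T} * eta_T
V_0 = (true_a @ V_T) / M                                 # time-0 prices consistent with (54)

# Solve min over phi of sum_k (V_0^k - (1/M) sum_m phi_m V_T^{m,k})^2.
A = V_T.T / M                                            # K x M design matrix
a_hat, *_ = np.linalg.lstsq(A, V_0, rcond=None)

# The recovered weights reprice the K contracts (repricing error should be near zero).
print(np.max(np.abs(A @ a_hat - V_0)))
```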

C. Intuition of Convergence Proof for Appendix B.11

In Appendix B.11, we propose a method to solve a CFBSDEJ numerically. As long as the time discretization scheme is convergent, we can argue that the methodology converges, in some sense, to the true solution, as outlined above in Appendix B.11. Potentially, we need an a priori estimate formula, similar to the one in [2], for coupled BSDEs, to justify the Picard iteration at every time discretization step.

NOTES

1. It is obvious that $\{e_t^j\}_{j\in\Lambda}$ can be a basis or frame of $L^2(\mathcal F_t)$. However, we do not assume so in this paper.

2. It is the linear space spanned by the set $\{e_j(\theta_j)\}_{j=0}^{\infty}$.

3. We understand that a distance can be defined on the function space Φ.

4. K can be positive infinity, i.e., $K = \infty$.

5. We will only show convergence of Methods 1 and 2.

6. Here we only consider the case where $|\Lambda_n| = m_n < \infty$ for any $n\in\mathbb N$. The case with $|\Lambda_n| = \infty$ is analogous.

7. For example, a Lévy process.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Longstaff, F. and Schwartz, E. (2001) Valuing American Options by Simulation: A Simple Least—Square Approach. The Review of Financial Studies, 14, 113-147.
https://doi.org/10.1093/rfs/14.1.113
[2] El Karoui, N., Peng, S. and Quenez, M.C. (1997) Backward Stochastic Differential Equations in Finance. Mathematical Finance, 7, 1-71.
https://doi.org/10.1111/1467-9965.00022
[3] Adrian, T., Crump, R. and Vogt, E. (2018) Nonlinearity and Flight-to-Safety in the Risk-Return Trade-Off for Stocks and Bonds. Journal of Finance, 74, 1931-1973.
[4] Fama, E. and French, K. (1993) Common Risk Factors in the Returns on Stocks and Bonds. Journal of Financial Economics, 33, 3-56.
https://doi.org/10.1016/0304-405X(93)90023-5
[5] Fama, E. and French, K. (2015) A Five-Factor Asset Pricing Model. Journal of Financial Economics, 116, 1-22.
[6] Zhu, S. and Pykhtin, M. (2008) A Guide to Modeling Counterparty Credit Risk. Working Paper.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1032522
[7] Aydogdu, M. (2018) Predicting Stock Returns Using Neural Networks. Working Paper.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3141492
https://doi.org/10.2139/ssrn.3141492
[8] Voshgha, H. (2008) Early Detection of Defaulting Firms: Artificial Neural Network Application; Australian Context. Working Paper.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2130505
[9] Hutchinson, J., Lo, A. and Poggio, T. (1994) A Nonparametric Approach to Pricing and Hedging Derivative Securities via Learning Networks. Journal of Finance, 49, 851-889.
https://doi.org/10.1111/j.1540-6261.1994.tb00081.x
[10] Hahn, J.T. (2013) Option Pricing Using Artificial Neural Networks: The Australian Perspective. Ph.D. Thesis, Bond University, Queensland.
[11] Kohler, M., Krzyzak, M. and Todorovic, N. (2010) Pricing of High-Dimensional American Options by Neural Networks. Mathematical Finance, 20, 383-410.
https://doi.org/10.1111/j.1467-9965.2010.00404.x
[12] Dugas, C., Bengio, Y., Bélisle, F., Nadeau, C. and Garcia, R. (2009) Incorporating Functional Knowledge in Neural Networks. Journal of Machine Learning Research, 10, 1239-1262.
[13] Eckstein, S., Kupper, M. and Pohl, M. (2018) Robust Risk Aggregation with Neural Networks. Quantitative Finance, 1-40.
https://arxiv.org/abs/1811.00304
[14] Giovanis, E. (2010) Applications of Neural Network Radial Basis Function in Economics and Financial time Series. SSRN Electronic Journal.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1667442
https://doi.org/10.2139/ssrn.1667442
[15] Kopitkov, D. and Indelman, V. (2018) Deep PDF: Probabilistic Surface Optimization and Density Estimation. Computer Science, 1-18.
https://arxiv.org/abs/1807.10728
[16] Luo, R., Zhang, W., Xu, X. and Wang, J. (2017) A Neural Stochastic Volatility Model. Computer Science, 1-11.
https://arxiv.org/pdf/1712.00504.pdf
[17] Sasaki, H. and Hyvarinen, A. (2018) Neural-Kernelized Conditional Density Estimation. Statistics, 1-12.
https://arxiv.org/abs/1806.01754
[18] Weissensteiner, A. (2009) A Q-Learning Approach to Derive Optimal Consumption and Investment Strategies. IEEE Transactions on Neural Networks, 20, 1234-1243.
https://doi.org/10.1109/TNN.2009.2020850
[19] Casgrain, P. and Jaimungal, S. (2016) Trading Algorithms with Learning in Latent Alpha Models. SSRN Electronic Journal.
https://doi.org/10.2139/ssrn.2871403
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2871403
[20] Heaton, J., Polson, N. and Witte, J. (2016) Deep Learning for Finance: Deep Portfolios. Applied Stochastic Models in Business and Industry, 33, 3-12.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2838013
https://doi.org/10.2139/ssrn.2838013
[21] Samo, Y. and Vernuurt, A. (2016) Stochastic Portfolio Theory: A Machine Learning Perspective. Quantitative Finance, 1-9.
https://arxiv.org/pdf/1605.02654.pdf
[22] Jiang, Z., Xu, D. and Liang, J. (2017) A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem. Computational Finance, 1-31.
https://arxiv.org/pdf/1706.10059.pdf
[23] Deng, Y., Bao, F., Kong, Y., Ren, Z. and Dai, Q. (2017) Deep Direct Reinforcement Learning for Financial Signal Representation and Trading. IEEE Transactions on Neural Networks and Learning Systems, 28, 653-664.
https://doi.org/10.1109/TNNLS.2016.2522401
[24] Halperin, I. (2017) QLBS: Q-Learner in the Black-Scholes(-Merton) Worlds. Quantitative Finance, 1-34.
https://arxiv.org/abs/1712.04609v2
https://doi.org/10.2139/ssrn.3087076
[25] Ritter, G. (2017) Machine Learning for Trading. Working Paper.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3015609
https://doi.org/10.2139/ssrn.3015609
[26] Xing, F., Cambria, E., Malandri, L. and Vercellis, C. (2018) Discovering Bayesian Market Views for Intelligent Asset Allocation.
https://arxiv.org/pdf/1802.09911.pdf
[27] Becker, S., Cheridito, P. and Jentzen, A. (2018) Deep Optimal Stopping. Mathematics, arXiv: 1804. 05394.
https://arxiv.org/abs/1804.05394
[28] Gu, S., Kelly, B. and Xiu, D. (2018) Empirical Asset Pricing via Machine Learning. 31st Australasian Finance and Banking Conference 2018, Sydney, 13-15 December 2018.
https://doi.org/10.3386/w25398
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3159577
[29] Weinan, E., Han, J. and Jentzen, A. (2017) Deep Learning-Based Numerical Methods for High-Dimensional Parabolic Partial Differential Equations and Backward Stochastic Differential Equations. Mathematics, 1-39.
https://arxiv.org/pdf/1706.04702.pdf
[30] Weinan, E., Hutzenthaler, M., Jentzen, A. and Kruse, T. (2017) On Multilevel Picard Numerical Approximations for High-Dimensional Nonlinear Parabolic Partial Differential Equations and High-Dimensional Nonlinear Backward Stochastic Differential Equations. Mathematics, 1-25.
https://arxiv.org/pdf/1708.03223.pdf
[31] Han, J., Jentzen, A. and Weinan, E. (2017) Overcoming the Curse of Dimensionality: Solving High-Dimensional Partial Differential Equations Using Deep Learning. Mathematics, 1-14.
https://arxiv.org/pdf/1707.02568.pdf
[32] Khoo, Y., Lu, J. and Ying, L. (2017) Solving Parametric PDE Problems with Artificial Neural Networks. Mathematics, 1-17.
https://arxiv.org/pdf/1707.03351.pdf
[33] Beck, C., Weinan, E. and Jentzen, A. (2017) Machine Learning Approximation Algorithms for High-Dimensional Fully Nonlinear Partial Differential Equations and Second-Order Backward Stochastic Differential Equations. Mathematics, 1-56.
https://arxiv.org/pdf/1709.05963.pdf
[34] Sirignano, J. and Spiliopoulos, K. (2017) DGM: A Deep Learning Algorithm for Solving Partial Differential Equations. Mathematics, 1-31.
https://arxiv.org/pdf/1708.07469.pdf
[35] Long, Z., Lu, Y. and Ma, X. (2018) PDE-Net: Learning PDEs from Data. Mathematics, 1-17.
https://arxiv.org/pdf/1710.09668.pdf
[36] Long, Z. and Lu, Y. (2018) PDE-Net 2.0: Learning PDEs from Data with a Numeric Symbolic Hybrid Deep Network. Computer Science, 1-16.
https://arxiv.org/pdf/1812.04426.pdf
[37] Haehnel, P., Marecek, J. and Monteil, J. (2018) Scaling up Deep Learning for PDE-Based Models. Computer Science, 1-39.
https://arxiv.org/pdf/1810.09425.pdf
[38] Berg, J. and Nystrom, K. (2018) Data-Driven Discovery of PDEs in Complex Datasets. Statistics, 1-22.
https://arxiv.org/pdf/1808.10788.pdf
[39] Rudy, S., Alla, A., Brunton, S. and Nathan Kutz, J. (2018) Data-Driven Identification of Parametric Partial Differential Equations. Mathematics, 1-17.
https://arxiv.org/pdf/1806.00732.pdf
[40] Detemple, J., Lorig, M., Rindisbacher, M. and Zhang, L. (2018) An Analytical Expansion Method for Forward-Backward Stochastic Differential Equations with Jumps.
[41] Briand, P. and Labart, C. (2012) Simulation of BSDEs by Wiener Chaos Expansion. The Annals of Applied Probability, 24, 1129-1171.
https://doi.org/10.1214/13-AAP943
[42] Geiss, C. and Labart, C. (2015) Simulation of BSDEs with Jumps by Wiener Chaos Expansion. Mathematics, arXiv: 1502.05649.
http://arxiv.org/abs/1502.05649
[43] Gnameho, K., Stadje, M. and Pelsser, A. (2017) A Regression-Later Algorithm for Backward Stochastic Differential Equations. Mathematics, 1-33.
https://arxiv.org/pdf/1706.07986
[44] Gobet, E. and Labart, C. (2007) Error Expansion for the Discretization of Backward Stochastic Differential Equations. Stochastic Processes and Their Applications, 117, 803-829.
https://doi.org/10.1016/j.spa.2006.10.007
[45] Takahashi, A. and Yamada, T. (2016) An Asymptotic Expansion for Forward-Backward SDEs: A Malliavin Calculus Approach. Asia-Pacific Financial Markets, 23, 337-373.
[46] Takahashi, A. and Yamada, T. (2015) On the Expansion to Quadratic FBSDEs.
[47] Gobet, E. and Pagliarani, S. (2014) Analytical Approximations of BSDEs with Non-Smooth Driver. SIAM Journal on Financial Mathematics, 6, 919-958.
https://doi.org/10.2139/ssrn.2448691
[48] Fujii, M. and Takahashi, A. (2012) Analytical Approximation for Non-Linear FBSDEs with Perturbation Scheme. International Journal of Theoretical and Applied Finance, 15, Article ID: 1250034.
https://doi.org/10.1142/S0219024912500343
[49] Fujii, M. and Takahashi, A. (2012) Perturbative Expansion of FBSDE in an Incomplete Market with Stochastic Volatility. The Quarterly Journal of Finance, 2, 1-22.
https://doi.org/10.2139/ssrn.1999137
[50] Fujii, M. and Takahashi, A. (2015) Asymptotic Expansion for Forward-Backward SDEs with Jumps. Quantitative Finance, 1-39.
https://doi.org/10.2139/ssrn.2672890
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2672890
[51] Fujii, M. and Takahashi, A. (2016) Quadratic-Exponential Growth BSDEs with Jumps and Their Malliavin’s Differentiability. Working Paper.
https://doi.org/10.2139/ssrn.2705670
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2705670
[52] Fujii, M. and Takahashi, A. (2016) Solving Backward Stochastic Differential Equations by Connecting the Short-Term Expansions. Quantitative Finance, 1-41.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2795490
[53] Detemple, J. and Rindisbacher, M. (2005) Closed-Form Solutions for Optimal Portfolio Selection with Stochastic Interest Rate and Investment Constraints. Mathematical Finance, 15, 539-568.
https://doi.org/10.1111/j.1467-9965.2005.00250.x
[54] Hansen, L. and Richard, S. (1987) The Role of Conditioning Information in Deducing Testable Restrictions Implied by Dynamic Asset Pricing Models. Econometrica, 55, 587-613.
https://doi.org/10.2307/1913601
[55] Jiang, J. and Tian, W. (2018) Semi-Nonparametric Approximation and Index Options. Annals of Finance, 1-38.
https://doi.org/10.1007/s10436-018-0341-4
[56] Tian, W. (2014) Spanning with Indexes. Journal of Mathematical Economics, 53, 111-118.
https://doi.org/10.1016/j.jmateco.2014.06.007
[57] Tian, W. (2018) The Financial Market: Not as Big as You Think. Mathematics and Financial Economics, 51, 1-19.
[58] Bolcskei, H., Grohs, P., Kutyniok, G. and Petersen, P. (2018) Optimal Approximation with Sparsely Connected Deep Neural Networks. Computer Science, 1-36.
https://arxiv.org/abs/1705.01714
[59] Henry-Labordere, P. (2015) Exact Simulation of Multi-Dimensional Stochastic Differential Equations. Working Paper, 1-28.
https://doi.org/10.2139/ssrn.2598505
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2598505
[60] Prater, A. (2012) Discrete Sparse Fourier Hermite Approximations in High Dimensions. Doctoral Thesis, Syracuse University, New York.
[61] Fonseca, Y., Medeiros, M., Vasconcelos, G. and Veiga, A. (2018) BooST: Boosting Smooth Trees for Partial Effect Estimation in Nonlinear Regressions. Statistics, 1-30.
https://arxiv.org/pdf/1808.03698.pdf
[62] Detemple, J. (2006) American-Style Derivatives: Valuation and Computation. Chapman and Hall/CRC, New York.
https://doi.org/10.1201/9781420034868
[63] Guyon, J. and Henry-Labordere, P. (2014) Nonlinear Option Pricing. Chapman and Hall/CRC, New York.
https://doi.org/10.1201/b16332
[64] Detemple, J., Garcia, R. and Rindisbacher, M. (2005) Representation Formulas for Malliavin Derivatives of Diffusion Processes. Finance and Stochastics, 9, 349-367.
https://doi.org/10.1007/s00780-004-0151-6
[65] Detemple, J. and Rindisbacher, M. (2005) Asymptotic Properties of Monte Carlo Estimators of Derivatives. Management Science, 51, 1657-1675.
https://doi.org/10.1287/mnsc.1050.0398
[66] Detemple, J. (2014) Optimal Exercise for Derivative Securities. Annual Review of Financial Economics, 6, 459-487.
https://doi.org/10.1146/annurev-financial-110613-034241
[67] Fujii, M., Sato, S. and Takahashi, A. (2012) An FBSDE Approach to American Option Pricing with an Interacting Particle Method. Quantitative Finance, 1-18.
https://arxiv.org/abs/1211.5867
https://doi.org/10.2139/ssrn.2180696
[68] Chassagneux, J., Elie, R. and Kharroubi, I. (2010) A Note on Existence and Uniqueness for Solutions of Multidimensional Reflected BSDEs. Electronic Communications in Probability, 16, 120-128.
https://doi.org/10.1214/ECP.v16-1614
[69] Collin-Dufresne, P. and Goldstein, R. (2003) Generalizing the Affine Framework to HJM and Random Field Models. SSRN Electronic Journal.
https://doi.org/10.2139/ssrn.410421
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=410421
[70] Carmona, R. and Delarue, F. (2015) Forward-Backward Stochastic Differential Equations and Controlled McKean-Vlasov Dynamics. Annals of Probability, 43, 2647-2700.
https://doi.org/10.1214/14-AOP946
[71] Bianchi, D., Büchner, M. and Tamoni, A. (2019) Bond Risk Premia with Machine Learning. USC-INET Research Paper No. 19-11.
https://doi.org/10.2139/ssrn.3400941
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3232721
[72] Chen, L., Pelger, M. and Zhu, J. (2019) Deep Learning in Asset Pricing. Quantitative Finance, 1-89.
https://arxiv.org/abs/1904.00745
https://doi.org/10.2139/ssrn.3350138
[73] Feng, G., Polson, N. and Xu, J. (2019) Deep Learning in Asset Pricing. Statistics, 1-33.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3350138
[74] Yang, Q., Ye, T. and Zhang, L. (2018) A General Framework of Optimal Investment. Working Paper.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3136708
[75] Yu, P., Lee, J., Kulyatin, I., Shi, Z. and Dasgupta, S. (2019) Model-Based Deep Reinforcement Learning for Dynamic Portfolio Optimization. Computer Science, 1-21.
https://arxiv.org/abs/1901.08740
[76] Kingma, D. and Ba, J.L. (2014) Adam: A Method for Stochastic Optimization. Computer Science, 1-15.
https://arxiv.org/abs/1412.6980
[77] Heston, S. (1993) A Closed-Form Solution for Options with Stochastic Volatility with Applications to Bond and Currency Options. The Review of Financial Studies, 6, 327-343.
https://doi.org/10.1093/rfs/6.2.327
[78] Dupire, B. (1994) Pricing with a Smile. Risk, 7, 18-20.
http://www.risk.net/data/risk/pdf/technical/2007/risk20_0707_technical_volatility.pdf
[79] Homescu, C. (2014) Local Stochastic Volatility Models: Calibration and Pricing. Working Paper.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2448098
https://doi.org/10.2139/ssrn.2448098
[80] Broadie, M., Chernov, M. and Johannes, M. (2007) Model Specification and Risk Premia: Evidence from Futures Options. Journal of Finance, 62, 1453-1490.
https://doi.org/10.1111/j.1540-6261.2007.01241.x
[81] Guennoun, H. (2016) Local Volatility Models Enhanced with Jumps. Working Paper, 1-11.
https://papers.ssrn.com/abstract=2781102
https://doi.org/10.2139/ssrn.2781102
[82] Buehler, H., Gonon, L., Teichmann, J. and Wood, B. (2018) Deep Hedging. Working Paper.
https://doi.org/10.2139/ssrn.3120710
https://arxiv.org/abs/1802.03042
[83] Halperin, I. (2018) The QLBS Q-Learner Goes NuQLear: Fitted Q Iteration, Inverse RL, and Option Portfolios. Quantitative Finance, 1-18.
https://arxiv.org/abs/1801.06077
https://doi.org/10.2139/ssrn.3102707
[84] Halperin, I. (2018) QLBS: Q-Learner in the Black-Scholes(-Merton) Worlds. Quantitative Finance, 1-34.
https://arxiv.org/abs/1712.04609
https://doi.org/10.2139/ssrn.3087076
[85] Schroder, M. and Skiadas, C. (2008) Optimality and State Pricing in Constrained Financial Markets with Recursive Utility under Continuous and Discontinuous Information. Mathematical Finance, 18, 199-238.
https://doi.org/10.1111/j.1467-9965.2007.00330.x
[86] Detemple, J. and Zapatero, F. (1991) Asset Prices in an Exchange Economy with Habit Formation. Econometrica, 59, 1633-1657.
https://doi.org/10.2307/2938283
[87] Karatzas, I., Lehoczky, J., Shreve, S. and Xu, G. (1991) Martingale and Duality Methods for Utility Maximization in an Incomplete Market. SIAM Journal on Control and Optimization, 29, 702-730.
https://doi.org/10.1137/0329039
[88] He, H. and Pearson, N. (1991) Consumption and Portfolio Policies with Incomplete Markets and Short-Sale Constraints: The Infinite Dimensional Case. Journal of Economic Theory, 54, 259-304.
https://doi.org/10.1016/0022-0531(91)90123-L
[89] Karatzas, I. and Cvitanic, J. (1992) Convex Duality in Constrained Portfolio Optimization. Annals of Applied Probability, 2, 767-818.
https://doi.org/10.1214/aoap/1177005576
[90] Detemple, J., Garcia, R. and Rindisbacher, M. (2003) A Monte Carlo Method for Optimal Portfolios. Journal of Finance, 58, 401-446.
https://doi.org/10.1111/1540-6261.00529
[91] Detemple, J., Garcia, R. and Rindisbacher, M. (2005) Intertemporal Asset Allocation: A Comparison of Methods. Journal of Banking and Finance, 29, 2821-2848.
https://doi.org/10.1016/j.jbankfin.2005.02.004
[92] Detemple, J. and Rindisbacher, M. (2010) Dynamic Asset Allocation: Portfolio Decomposition Formula and Applications. The Review of Financial Studies, 23, 25-100.
https://doi.org/10.1093/rfs/hhp040
[93] Detemple, J. (2012) Portfolio Selection: A Review. Journal of Optimization Theory and Applications, 161, 1-21.
https://doi.org/10.1007/s10957-012-0208-1
[94] Matoussi, A. and Xing, H. (2016) Convex Duality for Stochastic Differential Utility. Quantitative Finance, 1-22.
http://arxiv.org/pdf/1601.03562.pdf
https://doi.org/10.2139/ssrn.2715425
[95] Kraft, H., Seiferling, T. and Seifried, F. (2015) Optimal Consumption and Investment with Epstein-Zin Recursive Utility. Working Paper.
https://doi.org/10.2139/ssrn.2444747
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2424706
[96] Ait-Sahalia, Y. (2008) Closed-Form Likelihood Expansions for Multivariate Diffusions. Annals of Statistics, 36, 906-937.
https://doi.org/10.1214/009053607000000622
[97] Filipovic, D., Mayerhofer, E. and Schneider, P. (2013) Density Approximations for Multivariate Affine Jump Diffusion Processes. Journal of Econometrics, 176, 93-111.
https://doi.org/10.1016/j.jeconom.2012.12.003
[98] Van Handel, R. (2008) Hidden Markov Models. Princeton Lecture Notes.
[99] Markowitz, H. (1952) Portfolio Selection. Journal of Finance, 7, 77-91.
https://doi.org/10.1111/j.1540-6261.1952.tb01525.x
[100] Schneider, P. and Trojani, F. (2018) (Almost) Model-Free Recovery. Journal of Finance, 74, 323-370.
https://doi.org/10.1111/jofi.12737
[101] Chabakauri, G. (2013) Dynamic Equilibrium with Two Stocks, Heterogeneous Investors, and Portfolio Constraints. The Review of Financial Studies, 26, 3104-3141.
https://doi.org/10.2139/ssrn.2221073
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2221073
[102] Chabakauri, G. (2015) Asset Pricing with Heterogeneous Preferences, Beliefs, and Portfolio Constraints. Journal of Monetary Economics, 75, 21-34.
[103] Kardaras, C., Xing, H. and Zitkovic, G. (2015) Incomplete Stochastic Equilibria for Dynamic Monetary Utility. Mathematics, 1-33.
https://arxiv.org/abs/1505.07224
[104] Halle, J.O. (2010) Backward Stochastic Differential Equations with Jumps. Master's Thesis, University of Oslo, Oslo, Norway.
[105] Dalderop, J. (2016) Nonparametric State-Price Density Estimation Using High Frequency Data. Working Paper.
https://doi.org/10.2139/ssrn.2718938
