The Jaffa Transform for Hessian Matrix Systems and the Laplace Equation

Abstract

Hessian matrices are square matrices consisting of all possible combinations of second partial derivatives of a scalar-valued initial function. As such, Hessian matrices may be treated as elementary matrix systems of linear second-order partial differential equations. This paper discusses the Hessian and its applications in optimization, and then proceeds to introduce and derive the notion of the Jaffa Transform, a new linear operator that directly maps a Hessian square matrix space to the initial corresponding scalar field in nth dimensional Euclidean space. The Jaffa Transform is examined, including the properties of the operator, the transform of notable matrices, and the existence of an inverse Jaffa Transform, which is, by definition, the Hessian matrix operator. The Laplace equation is then noted and investigated, particularly, the relation of the Laplace equation to Poisson’s equation, and the theoretical applications and correlations of harmonic functions to Hessian matrices. The paper concludes by introducing and explicating the Jaffa Theorem, a principle that declares the existence of harmonic Jaffa Transforms, which are, essentially, Jaffa Transform solutions to the Laplace partial differential equation.

Citation: Jaffa, D. (2024) The Jaffa Transform for Hessian Matrix Systems and the Laplace Equation. Journal of Applied Mathematics and Physics, 12, 98-125. doi: 10.4236/jamp.2024.121010.

1. Introduction

Developed in the nineteenth century by German mathematician Ludwig Otto Hesse, the Hessian matrix is a square matrix consisting of all possible combinations of second partial derivatives of a scalar-valued function f. The Hessian matrix may consequently be treated as an elementary system of second-order partial differential equations, referred to as a Hessian matrix system. The generalized form of the Hessian matrix, denoted by $H_f$, is defined as:

$$H_f = \begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2} & \cdots & \frac{\partial^2 f}{\partial x_1\,\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_n\,\partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{bmatrix}$$

where $f : \mathbb{R}^n \to \mathbb{R}$ is a $C^2$ scalar-valued function. The elements along the central diagonal of the Hessian matrix are the pure (unmixed) second partial derivatives of f, whilst the remaining elements are the second-order mixed partial derivatives. Hence, the property emerges that:

$$\operatorname{tr}(H_f) = \sum_{i=1}^{n} \frac{\partial^2 f}{\partial x_i^2} = \nabla^2 f$$

The above property is referred to as the Laplacian Trace property, pivotal in the derivation of harmonic solutions to Hessian matrix systems. Additionally, consider the Jacobian matrix [1] of the gradient of f:

$$J_{\nabla f} = \begin{bmatrix} \frac{\partial \gamma_1}{\partial x_1} & \cdots & \frac{\partial \gamma_n}{\partial x_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial \gamma_1}{\partial x_n} & \cdots & \frac{\partial \gamma_n}{\partial x_n} \end{bmatrix}$$

where $\gamma_i$ is the i-th component of f's gradient vector. By defining the Jacobian in terms of f, it is evident that:

$$J_{\nabla f} = \begin{bmatrix} \frac{\partial \gamma_1}{\partial x_1} & \cdots & \frac{\partial \gamma_n}{\partial x_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial \gamma_1}{\partial x_n} & \cdots & \frac{\partial \gamma_n}{\partial x_n} \end{bmatrix} = \begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2} & \cdots & \frac{\partial^2 f}{\partial x_1\,\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_n\,\partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{bmatrix} = H_f$$

Implicit Gradient Form of Hessian Matrices

Theorem 1.1. All n × n Hessian matrices may be described in terms of the gradient of the initial function, in the form:

$$H_f = \begin{bmatrix} \nabla\gamma_1 & \nabla\gamma_2 & \cdots & \nabla\gamma_n \end{bmatrix}$$

Proof. Let $f : \mathbb{R}^n \to \mathbb{R}$ be a $C^2$ scalar-valued function. Consider the gradient field of f:

$$\nabla f(x_1, x_2, \ldots, x_n) = \begin{bmatrix} \frac{\partial f}{\partial x_1}(x_1, x_2, \ldots, x_n) \\ \vdots \\ \frac{\partial f}{\partial x_n}(x_1, x_2, \ldots, x_n) \end{bmatrix} = \begin{bmatrix} \gamma_1(x_1, x_2, \ldots, x_n) \\ \vdots \\ \gamma_n(x_1, x_2, \ldots, x_n) \end{bmatrix}$$

The Hessian of f:

$$H_f = \begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2} & \cdots & \frac{\partial^2 f}{\partial x_1\,\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_n\,\partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{bmatrix} = \begin{bmatrix} \frac{\partial \gamma_1}{\partial x_1} & \cdots & \frac{\partial \gamma_n}{\partial x_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial \gamma_1}{\partial x_n} & \cdots & \frac{\partial \gamma_n}{\partial x_n} \end{bmatrix}$$

The first column contains the n first partial derivatives of $\gamma_1$, the first component of $\nabla f$; these are second partial derivatives of f. Similarly, the nth column contains the n partial derivatives of $\gamma_n$, the nth component of $\nabla f$. Hence, the above Hessian matrix may be described as:

$$H_f = \begin{bmatrix} \frac{\partial \gamma_1}{\partial x_1} & \cdots & \frac{\partial \gamma_n}{\partial x_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial \gamma_1}{\partial x_n} & \cdots & \frac{\partial \gamma_n}{\partial x_n} \end{bmatrix} = \begin{bmatrix} \nabla\gamma_1 & \nabla\gamma_2 & \cdots & \nabla\gamma_n \end{bmatrix} \tag{1}$$

The form derived in (1) holds true for all n × n Hessian matrices, and is referred to as the implicit gradient form. This form describes the Hessian of a scalar-valued initial function in terms of the function's gradient components. From this form emerges a method of determining solutions to Hessian matrix systems utilizing the Intersect Rule of gradient fields [2], as derived and demonstrated within this paper.
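To make the identity underlying (1) concrete, the following is a minimal computational sketch, assuming the SymPy library and an illustrative two-variable function; it checks that the Hessian of f coincides with the Jacobian of f's gradient field.

```python
# Minimal check that H_f equals the Jacobian of grad(f); the function
# f below is an illustrative, assumed example of a C^2 scalar field.
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 * y + sp.exp(x * y)

grad_f = sp.Matrix([sp.diff(f, v) for v in (x, y)])  # [gamma_1, gamma_2]
H = sp.hessian(f, (x, y))                            # Hessian matrix H_f
J = grad_f.jacobian([x, y])                          # Jacobian of grad(f)

# For a C^2 function the two matrices agree (up to the symmetric
# transpose convention used in the paper).
assert sp.simplify(H - J) == sp.zeros(2, 2)
```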

2. Analysis of Hessian Determinants

The following section discusses the Hessian determinant and its prevalence regarding concavity and curvature in three dimensions. The section analyzes the correlation between the initial function's concavity and curvature at critical points, and the Hessian determinant and eigenvalues.

2.1. The Second Partial Derivative Test

Theorem 2.1. For all 2 × 2 Hessian matrices, the Hessian determinant yields the second partial derivative test for concavity. As in:

$$|H_f(x_0, y_0)| = \frac{\partial^2 f}{\partial x^2}(x_0, y_0)\,\frac{\partial^2 f}{\partial y^2}(x_0, y_0) - \left(\frac{\partial^2 f}{\partial y\,\partial x}(x_0, y_0)\right)^2$$

for all input vectors $(x_0, y_0) \in \mathbb{R}^2$.

Proof. Let $f : \mathbb{R}^2 \to \mathbb{R}$ be a $C^2$ scalar-valued function, whose graph is a surface in $\mathbb{R}^3$. The Hessian of f is defined as:

$$H_f = \begin{bmatrix} \frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x\,\partial y} \\ \frac{\partial^2 f}{\partial y\,\partial x} & \frac{\partial^2 f}{\partial y^2} \end{bmatrix}$$

By Clairaut's Theorem (1743), the symmetry of the second mixed partial derivatives allows the Hessian to be defined as:

$$H_f = \begin{bmatrix} \frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial y\,\partial x} \\ \frac{\partial^2 f}{\partial y\,\partial x} & \frac{\partial^2 f}{\partial y^2} \end{bmatrix}$$

The Hessian determinant:

$$|H_f| = \frac{\partial^2 f}{\partial x^2}\,\frac{\partial^2 f}{\partial y^2} - \left(\frac{\partial^2 f}{\partial y\,\partial x}\right)^2$$

Recall the second partial derivative test for concavity, which states that, for a $C^2$ scalar-valued function of two variables, the nature of a critical point $(x_0, y_0)$ may be determined through:

$$\frac{\partial^2 f}{\partial x^2}(x_0, y_0)\,\frac{\partial^2 f}{\partial y^2}(x_0, y_0) - \left(\frac{\partial^2 f}{\partial y\,\partial x}(x_0, y_0)\right)^2$$

Which is, by definition, the Hessian determinant. Hence, the proof is complete, and it is fair to state that:

$$\frac{\partial^2 f}{\partial x^2}(x_0, y_0)\,\frac{\partial^2 f}{\partial y^2}(x_0, y_0) - \left(\frac{\partial^2 f}{\partial y\,\partial x}(x_0, y_0)\right)^2 = |H_f(x_0, y_0)| \tag{2}$$

Thus, the Hessian determinant provides the general form of the second partial derivative test, allowing the Hessian at a given critical point to be utilized to determine the nature of said critical point. As a result, the Hessian matrix proves pivotal in unconstrained optimization, in addition to concavity testing at critical input points in $\mathbb{R}^3$.
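As an illustrative sketch of the test derived in (2), assuming SymPy, the following classifies a critical point through the sign of the Hessian determinant; the function and critical point are assumed examples.

```python
# Classifying a critical point of f(x, y) via the Hessian determinant.
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + 3*y**2 - 2*x*y            # gradient vanishes at (0, 0)
point = {x: 0, y: 0}

H = sp.hessian(f, (x, y))
D = H.det().subs(point)              # |H_f(x_0, y_0)|
f_xx = sp.diff(f, x, 2).subs(point)

if D > 0:
    print('relative minimum' if f_xx > 0 else 'relative maximum')
elif D < 0:
    print('saddle point (inconsistent concavity)')
else:
    print('test inconclusive')       # degenerate determinant
```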

2.2. Eigenvalues of Hessian Matrices

Theorem 2.2. For all 2 × 2 Hessian matrices, the Hessian determinant is equivalent to the product of its eigenvalues. As in:

$$\lambda_1\lambda_2 = |H_f|$$

Proof. Consider the Hessian of a $C^2$ scalar-valued function $f : \mathbb{R}^2 \to \mathbb{R}$:

$$H_f = \begin{bmatrix} \frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x\,\partial y} \\ \frac{\partial^2 f}{\partial y\,\partial x} & \frac{\partial^2 f}{\partial y^2} \end{bmatrix}$$

The Hessian determinant:

$$|H_f| = \begin{vmatrix} \frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x\,\partial y} \\ \frac{\partial^2 f}{\partial y\,\partial x} & \frac{\partial^2 f}{\partial y^2} \end{vmatrix} = f_{xx}f_{yy} - f_{xy}^2$$

The eigenvalues of the Hessian may be derived through:

$$\det(\lambda I_2 - H_f) = \begin{vmatrix} \lambda - f_{xx} & -f_{xy} \\ -f_{xy} & \lambda - f_{yy} \end{vmatrix} = (\lambda - f_{xx})(\lambda - f_{yy}) - f_{xy}^2 = 0$$

Simplifying:

$$\lambda^2 - \lambda(f_{xx} + f_{yy}) + f_{xx}f_{yy} - f_{xy}^2 = 0$$

Substituting $|H_f| = f_{xx}f_{yy} - f_{xy}^2$ and $\nabla^2 f = f_{xx} + f_{yy}$:

$$\lambda^2 - \lambda\,\nabla^2 f + |H_f| = 0$$

Utilizing the quadratic formula to determine the eigenvalues:

$$\lambda = \frac{\nabla^2 f \pm \sqrt{(\nabla^2 f)^2 - 4|H_f|}}{2}$$

The Hessian possesses two (potentially) distinct eigenvalues, given by:

$$\lambda_1 = \frac{\nabla^2 f + \sqrt{(\nabla^2 f)^2 - 4|H_f|}}{2}, \qquad \lambda_2 = \frac{\nabla^2 f - \sqrt{(\nabla^2 f)^2 - 4|H_f|}}{2}$$

The product of the eigenvalues:

$$\lambda_1\lambda_2 = \frac{\nabla^2 f + \sqrt{(\nabla^2 f)^2 - 4|H_f|}}{2} \cdot \frac{\nabla^2 f - \sqrt{(\nabla^2 f)^2 - 4|H_f|}}{2}$$

Simplifying:

$$\lambda_1\lambda_2 = \frac{1}{4}\left(\nabla^2 f + \sqrt{(\nabla^2 f)^2 - 4|H_f|}\right)\left(\nabla^2 f - \sqrt{(\nabla^2 f)^2 - 4|H_f|}\right) = \frac{1}{4}\left((\nabla^2 f)^2 - (\nabla^2 f)^2 + 4|H_f|\right)$$

Combining like terms:

$$\lambda_1\lambda_2 = \frac{1}{4}\left((\nabla^2 f)^2 - (\nabla^2 f)^2 + 4|H_f|\right) = \frac{1}{4}\left(4|H_f|\right) = |H_f|$$

Hence:

$$\lambda_1\lambda_2 = |H_f|$$

Thus, the Hessian determinant is equivalent to the product of the Hessian’s eigenvalues. By this notion, the Hessian determinant may be utilized to determine the nature of the eigenvalues. As in:

$$|H_f| > 0 \;\Rightarrow\; \lambda_1\lambda_2 > 0 \;\Rightarrow\; \text{Relative Extrema}$$

$$|H_f| = 0 \;\Rightarrow\; \lambda_1\lambda_2 = 0 \;\Rightarrow\; \text{Inconclusive Concavity}$$

$$|H_f| < 0 \;\Rightarrow\; \lambda_1\lambda_2 < 0 \;\Rightarrow\; \text{Inconsistent Concavity (Saddle Point)}$$

Which correlates the eigenvalues and determinant of the Hessian matrix, with the concavity and curvature of its initial function, as per the three-dimensional second partial derivative test derived in (2).

As a whole, the Hessian determinant is a pivotal tool regarding the analysis of an initial function's curvature and behavior at critical input points. The value of the Hessian determinant directly reflects whether the eigenvalues are non-zero: a degenerate (zero) determinant implies a null eigenvalue, and thus an inconclusive outcome of the curvature and concavity test. Accordingly, the Hessian matrix possesses a crucial role in multivariable optimization, as demonstrated within the following section.
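Theorem 2.2 can be spot-checked symbolically; the following sketch assumes SymPy, with an arbitrary illustrative function.

```python
# Verifying that |H_f| equals the product of the Hessian's eigenvalues.
import sympy as sp

x, y = sp.symbols('x y')
f = x**4 + x * y + y**2

H = sp.hessian(f, (x, y))
product = sp.Integer(1)
for ev, m in H.eigenvals().items():    # {eigenvalue: multiplicity}
    product *= ev**m

print(sp.simplify(product - H.det()))  # 0, so lambda_1 * lambda_2 = |H_f|
```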

3. Applications of Hessian Matrices

Hessian matrices contain all necessary information regarding the second partial derivatives of a scalar-valued function. The primary applications of Hessian matrices lie in constrained and unconstrained multivariable optimization, as conveyed below.

3.1. The Bordered Hessian

Consider the Lagrange function, where $\vec{x} \in \mathbb{R}^n$:

$$L(\lambda, \vec{x}) = f(\vec{x}) - \lambda g(\vec{x}) + \lambda\kappa$$

where f is to be optimized subject to constraint function, g, and real constraint constant, κ. Determine the gradient of the Lagrange function:

$$\nabla L = \begin{bmatrix} \frac{\partial L}{\partial \lambda} \\ \frac{\partial L}{\partial x_1} \\ \vdots \\ \frac{\partial L}{\partial x_n} \end{bmatrix} = \begin{bmatrix} \kappa - g(\vec{x}) \\ \frac{\partial f}{\partial x_1} - \lambda\frac{\partial g}{\partial x_1} \\ \vdots \\ \frac{\partial f}{\partial x_n} - \lambda\frac{\partial g}{\partial x_n} \end{bmatrix}$$

By Theorem 1.1, the Hessian may be determined utilizing the components of the gradient field, hence:

$$H_L = \begin{bmatrix} \frac{\partial^2 L}{\partial\lambda^2} & \frac{\partial^2 L}{\partial\lambda\,\partial x_1} & \cdots & \frac{\partial^2 L}{\partial\lambda\,\partial x_n} \\ \frac{\partial^2 L}{\partial x_1\,\partial\lambda} & \frac{\partial^2 L}{\partial x_1^2} & \cdots & \frac{\partial^2 L}{\partial x_1\,\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^2 L}{\partial x_n\,\partial\lambda} & \frac{\partial^2 L}{\partial x_n\,\partial x_1} & \cdots & \frac{\partial^2 L}{\partial x_n^2} \end{bmatrix} = \begin{bmatrix} 0 & -\frac{\partial g}{\partial x_1} & \cdots & -\frac{\partial g}{\partial x_n} \\ -\frac{\partial g}{\partial x_1} & \frac{\partial^2 f}{\partial x_1^2} - \lambda\frac{\partial^2 g}{\partial x_1^2} & \cdots & \frac{\partial^2 f}{\partial x_1\,\partial x_n} - \lambda\frac{\partial^2 g}{\partial x_1\,\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ -\frac{\partial g}{\partial x_n} & \frac{\partial^2 f}{\partial x_n\,\partial x_1} - \lambda\frac{\partial^2 g}{\partial x_n\,\partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n^2} - \lambda\frac{\partial^2 g}{\partial x_n^2} \end{bmatrix}$$

Note that the first column—excluding the zero entry—consists of the negative first partial derivatives of g, as does the first row. Therefore, the above Hessian matrix may be described in terms of the gradient of g, considering the first row and column consist of implicit gradient vectors:

$$H_L = \begin{bmatrix} 0 & -\nabla g^T \\ -\nabla g & \left[\frac{\partial^2 f}{\partial x_i\,\partial x_j} - \lambda\frac{\partial^2 g}{\partial x_i\,\partial x_j}\right]_{i,j=1}^{n} \end{bmatrix}$$

Additionally, note that the remaining entries consist of all possible combinations of second partial derivatives of the function $f(\vec{x}) - \lambda g(\vec{x})$, which, by definition, form the implicit Hessian of $f(\vec{x}) - \lambda g(\vec{x})$. Thus:

$$H_L = \begin{bmatrix} 0 & -\nabla g^T \\ -\nabla g & H_{f - \lambda g} \end{bmatrix} \tag{3}$$

The above form of Hessian matrices, occasionally denoted by $H_\Lambda$, is referred to as the Bordered Hessian, the Hessian of the Lagrange function. The Bordered Hessian possesses various uses in the context of constrained multivariable optimization, reducing an (n + 1) × (n + 1) matrix into a 2 × 2 implicit square matrix, proving pivotal in determining the nature of critical inputs of the Lagrange function, in order to optimize f.

Recall that, as per Theorem 2.1, the determinant of 2 × 2 Hessian matrices yields the generalized second partial derivative test for concavity. Considering that the bordered Hessian may be reduced into an implicit 2 × 2 matrix, the resulting Hessian determinant is the generalized form of the second partial derivative test of the Lagrange function.

To contextualize this, suppose $\vec{v}$ is an input vector of the Lagrange function, such that:

$$\nabla L(\vec{v}) = \vec{0}$$

Which implies that $\vec{v}$ is a critical input point of the Lagrange function. Treating the Bordered Hessian described in (3) as an implicit 2 × 2 matrix, the determinant of the Lagrangian may be reduced to:

$$|H_L| = -\nabla g^T \cdot \nabla g = -\begin{bmatrix} \frac{\partial g}{\partial x_1} & \cdots & \frac{\partial g}{\partial x_n} \end{bmatrix}\begin{bmatrix} \frac{\partial g}{\partial x_1} \\ \vdots \\ \frac{\partial g}{\partial x_n} \end{bmatrix}$$

Evaluating the vector-matrix product:

$$|H_L| = -\left(\left(\frac{\partial g}{\partial x_1}\right)^2 + \cdots + \left(\frac{\partial g}{\partial x_n}\right)^2\right) = -\sum_{i=1}^{n}\left(\frac{\partial g}{\partial x_i}\right)^2$$

Inputting v into the Hessian determinant will yield one of the following results:

$$-\sum_{i=1}^{n}\left(\frac{\partial g}{\partial x_i}\right)^2(\vec{v}) < 0 \;\Rightarrow\; \text{Relative Extrema}$$

$$-\sum_{i=1}^{n}\left(\frac{\partial g}{\partial x_i}\right)^2(\vec{v}) = 0 \;\Rightarrow\; \text{Inconclusive Concavity}$$

$$-\sum_{i=1}^{n}\left(\frac{\partial g}{\partial x_i}\right)^2(\vec{v}) > 0 \;\Rightarrow\; \text{Inconsistent Concavity}$$

Assuming $\vec{v}$ is an extreme value with agreeing concavities, inputting $\vec{v}$ into the Lagrangian will determine the nature of the extreme value. As a result, if $\vec{v}$ is a local maximum, the input function f will possess a relative maximum at $\vec{v}$, subject to the constraint function g and constraint constant κ. If $\vec{v}$ is a local minimum, the input function f will possess a relative minimum at $\vec{v}$, subject to the above constraints.

This notion proves broadly applicable in the context of economic optimization. When maximizing profit functions subject to various constraints, the Bordered Hessian may be utilized to determine the extreme inputs of the profit function, whilst still adhering to the bounds imposed by the constraints.

Hence, the Hessian matrix proves a crucial tool in the fields of economic and generalized nonlinear optimization.
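As a hedged sketch of (3), the following assumes SymPy and an illustrative constrained problem (optimize f = xy subject to g = x + y with κ = 10); taking the Hessian of the Lagrange function in the variable order (λ, x₁, ..., xₙ) reproduces the bordered structure.

```python
# Constructing the Bordered Hessian of a Lagrange function, as in (3).
import sympy as sp

x, y, lam = sp.symbols('x y lambda')
f = x * y                            # function to optimize (illustrative)
g = x + y                            # constraint function
kappa = 10                           # constraint constant

L = f - lam * g + lam * kappa        # Lagrange function
H_L = sp.hessian(L, (lam, x, y))     # Bordered Hessian

print(H_L)
# Matrix([[0, -1, -1], [-1, 0, 1], [-1, 1, 0]]): a zero corner bordered
# by the negative gradient of g, with H_{f - lambda*g} in the block.
```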

3.2. Linear and Quadratic Approximations of Multivariable Functions

Recall the notion of a Taylor polynomial expansion [3] of a single variable scalar-valued function f, centered about input value $x_0$:

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!}(x - x_0)^n$$

As $n \to \infty$, the accuracy of the approximation increases; consequently, the "neighborhood" of the expansion's accuracy near $x_0$ grows. Expanding the first two and three terms of the Taylor polynomial:

$$f(x) \approx f(x_0) + f'(x_0)(x - x_0)$$

$$f(x) \approx f(x_0) + f'(x_0)(x - x_0) + \frac{1}{2}f''(x_0)(x - x_0)^2$$

As the number of terms in the expansion increases, so does the accuracy of the approximation around $x_0$. Hence, as n grows infinitely large, the approximation approaches the function almost identically. Additionally, recall the single variable tangent line approximation of functions in $\mathbb{R}^2$ about point $(x_0, y_0)$:

$$y - y_0 = \frac{dy}{dx}(x_0)\,(x - x_0)$$

where $\frac{dy}{dx}(x_0)$ indicates the derivative of the function at input value $x_0$. Rearranging and expressing the above in a form comparable to that of the Taylor polynomial expansion:

$$y = f(x_0) + f'(x_0)(x - x_0)$$

Hence, the information required to linearize and approximate the single variable function through tangency is the value of y and the derivative, y', at x0. This notion may be extended for functions in nth dimensional Euclidean space.

Multivariable, scalar-valued functions may be approximated about a certain input point, and within a minute neighborhood of said point through linear and quadratic approximations. Given certain information with respect to the values of a function and its first partial derivatives at certain input points, an affine function tangent to the curve at said input point may be derived through the process of linearization.

To contextualize this abstract notion, consider a multivariable, scalar-valued function f, defined as $f : \mathbb{R}^n \to \mathbb{R}$, with general input vector $\vec{x} \in \mathbb{R}^n$. Consider the particular nth dimensional input vector $\vec{x}_0$, at which the function f shall be approximated. Moreover, recall that the gradient of f is directly analogous to the first derivative of a single variable scalar-valued function. Hence, when given the value of f and its gradient at input vector $\vec{x}_0$, the affine approximation about $\vec{x}_0$ is defined as:

$$L_f = f(\vec{x}_0) + \nabla f(\vec{x}_0) \cdot (\vec{x} - \vec{x}_0)$$

Which is directly comparable to the single variable tangent line approximation. In addition, it is crucial to note that the affine approximation—occasionally referred to as the tangent plane approximation in $\mathbb{R}^3$—possesses a certain margin of error, and swiftly decreases in accuracy upon shifting away from the minuscule neighborhood of $\vec{x}_0$. The notion of affine linearity stems from the lack of higher degree terms in the approximation, although, as the dimensions of the input space increase, describing the approximation as a linear tangent plane grows increasingly abstract.

Due to the practically negligible size of the neighborhood of $\vec{x}_0$ whilst approximating through affine tangency, the notion of quadratic approximations of multivariable functions emerges. Reconsider scalar-valued function f. To extend the neighborhood of approximation around $\vec{x}_0$, second partial derivative information is required.

Define function $Q_f$ as the quadratic function tangent to f at $\vec{x}_0$, and approximately equivalent to f within a certain neighborhood of $\vec{x}_0$. The rationale behind referring to $Q_f$ as quadratic lies in the fact that $Q_f$ must include all possible combinations of quadratic terms—terms consisting of the product of two input variables—within f's input space. Furthermore, recall the first three terms of the single variable Taylor polynomial expansion:

$$f(x) \approx f(x_0) + f'(x_0)(x - x_0) + \frac{1}{2}f''(x_0)(x - x_0)^2$$

As noted above, the gradient of a multivariable, scalar-valued function is analogous to the single variable first derivative, as the gradient provides all necessary information regarding the first partial derivatives of the function. Noting this, consider the definition of the Hessian matrix of f at input vector $\vec{x}_0$, generalized for nth dimensional Euclidean space:

$$H_f(\vec{x}_0) = \begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2}(\vec{x}_0) & \cdots & \frac{\partial^2 f}{\partial x_1\,\partial x_n}(\vec{x}_0) \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_n\,\partial x_1}(\vec{x}_0) & \cdots & \frac{\partial^2 f}{\partial x_n^2}(\vec{x}_0) \end{bmatrix}$$

The Hessian of f at $\vec{x}_0$ provides all necessary information regarding the second partial derivatives of f at $\vec{x}_0$. Hence, it is fair to state that the Hessian of a multivariable, scalar-valued function is directly analogous to the second derivative of a single variable function.

In order to construct $Q_f$ in a manner such that the property of tangency to f at $\vec{x}_0$ holds, whilst also containing all possible combinations of quadratic terms in the input space of f, define $Q_f$ recursively in terms of $L_f$:

$$Q_f = L_f + g(\vec{x})$$

$L_f$ accounts for the tangency properties, whilst $g(\vec{x})$ must be a function with the same input space as f, such that the second partial derivative information of $Q_f$ at certain inputs matches that of f. As a matter of fact, $g(\vec{x})$ is defined as:

$$\frac{1}{2}(\vec{x} - \vec{x}_0)^T H_f(\vec{x}_0)\,(\vec{x} - \vec{x}_0)$$

Which, when simplified, produces a term directly corresponding to the n = 2 term of the single variable Taylor polynomial expansion. Therefore:

$$Q_f = f(\vec{x}_0) + \nabla f(\vec{x}_0) \cdot (\vec{x} - \vec{x}_0) + \frac{1}{2}(\vec{x} - \vec{x}_0)^T H_f(\vec{x}_0)\,(\vec{x} - \vec{x}_0) \tag{4}$$

Consequently, it is evident that the Hessian matrix's analogy to the single variable second derivative extends the second degree Taylor polynomial expansion to functions in nth dimensional Euclidean space, and, as a result, provides a method of approximating multivariable functions.
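The approximation (4) can be assembled directly from the value, gradient, and Hessian at the expansion point; the sketch below assumes SymPy, with an illustrative function and expansion point.

```python
# Building the quadratic approximation Q_f of (4) at the point x_0.
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x) * sp.cos(y)
point = {x: 0, y: 0}                                 # expansion point x_0

X = sp.Matrix([x, y])
x0 = sp.Matrix([0, 0])
grad0 = sp.Matrix([sp.diff(f, v) for v in (x, y)]).subs(point)
H0 = sp.hessian(f, (x, y)).subs(point)

d = X - x0
Q = f.subs(point) + (grad0.T * d)[0] + sp.Rational(1, 2) * (d.T * H0 * d)[0]
print(sp.expand(Q))                                  # 1 + x + x**2/2 - y**2/2
```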

4. Introduction to the Jaffa Transform

This section introduces and derives the Jaffa Transform, an operator that maps a square matrix space to a scalar field in nth dimensional Euclidean space, utilizing the Intersect Rule from the calculus of sets. The primary purpose, as to be demonstrated, of the Jaffa Transform, is the derivation of general solutions to Hessian matrix systems, in cohesion with harmonic function solutions to the Laplace partial differential equation.

4.1. Deriving General Solutions to Hessian Matrix Systems

As briefly mentioned previously, the Hessian matrix may be treated as an elementary system of linear second-order partial differential equations. To expand on this notion, consider the Hessian below:

$$H_f = \begin{bmatrix} h_{11} & \cdots & h_{1n} \\ \vdots & \ddots & \vdots \\ h_{n1} & \cdots & h_{nn} \end{bmatrix}$$

where $h_{ij}$ is the entry in the i-th row and j-th column. Each entry is a function of the same input space as the initial function, f. Hence:

$$H_f = \begin{bmatrix} h_{11} & \cdots & h_{1n} \\ \vdots & \ddots & \vdots \\ h_{n1} & \cdots & h_{nn} \end{bmatrix} = \begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2} & \cdots & \frac{\partial^2 f}{\partial x_1\,\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_n\,\partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{bmatrix}$$

Forming a matrix system of linear second-order partial differential equations. To describe the matrix as an explicit system of differential equations:

$$\begin{cases} h_{11}(\vec{x}) = \frac{\partial^2 f}{\partial x_1^2}, \quad \ldots, \quad h_{n1}(\vec{x}) = \frac{\partial^2 f}{\partial x_n\,\partial x_1} \\ \quad\vdots \\ h_{1n}(\vec{x}) = \frac{\partial^2 f}{\partial x_1\,\partial x_n}, \quad \ldots, \quad h_{nn}(\vec{x}) = \frac{\partial^2 f}{\partial x_n^2} \end{cases}$$

Let $\gamma_i = \frac{\partial f}{\partial x_i}$, which thus yields:

$$\begin{cases} \nabla\gamma_1 = \begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2} & \cdots & \frac{\partial^2 f}{\partial x_n\,\partial x_1} \end{bmatrix}^T \\ \nabla\gamma_2 = \begin{bmatrix} \frac{\partial^2 f}{\partial x_1\,\partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_n\,\partial x_2} \end{bmatrix}^T \\ \quad\vdots \\ \nabla\gamma_n = \begin{bmatrix} \frac{\partial^2 f}{\partial x_1\,\partial x_n} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{bmatrix}^T \end{cases}$$

The Hessian matrix system has thus been reduced to a system of linear gradient equations. In order to solve gradient equations, the Intersect Rule of the calculus of sets may be utilized.

Utilizing the information given by the gradient vectors and the Intersect Rule’s definition of the potential function of a gradient field:

$$\gamma_1 = \bigcap_{i=1}^{n} \int \frac{\partial \gamma_1}{\partial x_i}\,dx_i = \bigcap_{i=1}^{n} \int \frac{\partial^2 f}{\partial x_i\,\partial x_1}\,dx_i$$

$$\gamma_n = \bigcap_{i=1}^{n} \int \frac{\partial \gamma_n}{\partial x_i}\,dx_i = \bigcap_{i=1}^{n} \int \frac{\partial^2 f}{\partial x_i\,\partial x_n}\,dx_i$$

The Intersect Rule hence yields:

$$\nabla f = \begin{bmatrix} \bigcap_{i=1}^{n} \int \frac{\partial \gamma_1}{\partial x_i}\,dx_i \\ \vdots \\ \bigcap_{i=1}^{n} \int \frac{\partial \gamma_n}{\partial x_i}\,dx_i \end{bmatrix} = \begin{bmatrix} \bigcap_{i=1}^{n} \int \frac{\partial^2 f}{\partial x_i\,\partial x_1}\,dx_i \\ \vdots \\ \bigcap_{i=1}^{n} \int \frac{\partial^2 f}{\partial x_i\,\partial x_n}\,dx_i \end{bmatrix}$$

which now reduces the Hessian matrix system into a gradient equation.

Assuming that f is a C 2 function—an assumption which holds true, given the existence of the Hessian of f—there must exist a certain solution space consisting of all possible functions which satisfy the Hessian matrix system. The solution space oftentimes consists of infinitely many elements, given the nature of second-order partial differential equations. The following section discusses the methodology and intuition behind determining the aforementioned solution space.

4.2. The Jaffa Transform Derivation

Lemma 4.1. For all Hessian matrices, there exists an integral transform that maps the Hessian matrix to its scalar-valued initial function, defined as:

$$\mathcal{J}\{H_f\} = \bigcap_{k=1}^{n} \int \left( \bigcap_{i=1}^{n} \int \frac{\partial^2 f}{\partial x_i\,\partial x_k}\,dx_i \right) dx_k = f$$

Proof. Recall the Hessian matrix implicit gradient form, as derived in Theorem 1.1:

$$H_f = \begin{bmatrix} \frac{\partial \gamma_1}{\partial x_1} & \cdots & \frac{\partial \gamma_n}{\partial x_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial \gamma_1}{\partial x_n} & \cdots & \frac{\partial \gamma_n}{\partial x_n} \end{bmatrix} = \begin{bmatrix} \nabla\gamma_1 & \nabla\gamma_2 & \cdots & \nabla\gamma_n \end{bmatrix}$$

Additionally, recall the Intersect Rule, a theorem which states that for all gradient fields in $\mathbb{R}^n$, the potential function, f, of the gradient field may be defined as:

$$f = \bigcap_{i=1}^{n} \int \gamma_i\,dx_i = \bigcap_{i=1}^{n} \Lambda_i$$

where:

$$\Lambda_i = \left\{ \rho_i(x_1, \ldots, x_n) \;\middle|\; \rho_i(x_1, \ldots, x_n) = \int \gamma_i\,dx_i \right\}$$

The set Λ i may be referred to as the “integral set” of f’s gradient’s i-th component, γ i . The Intersect Rule’s primary conjecture states that the general potential function of a gradient field may be defined by intersecting the n integral sets of the function’s gradient components, and, as a result, implicitly intersecting the integrals of the n components with respect to the corresponding input variables.

It is evident that the methods of the Intersect Rule may be utilized within the process of deriving the general solution space of a Hessian matrix system. With respect to the implicit gradient form above, in order to determine the components of f’s gradient field, apply the Intersect Rule to each entry of the Hessian:

$$\gamma_1 = \bigcap_{i=1}^{n} \int \frac{\partial \gamma_1}{\partial x_i}\,dx_i = \int \frac{\partial \gamma_1}{\partial x_1}\,dx_1 \cap \int \frac{\partial \gamma_1}{\partial x_2}\,dx_2 \cap \cdots \cap \int \frac{\partial \gamma_1}{\partial x_n}\,dx_n$$

$$\gamma_2 = \bigcap_{i=1}^{n} \int \frac{\partial \gamma_2}{\partial x_i}\,dx_i = \int \frac{\partial \gamma_2}{\partial x_1}\,dx_1 \cap \int \frac{\partial \gamma_2}{\partial x_2}\,dx_2 \cap \cdots \cap \int \frac{\partial \gamma_2}{\partial x_n}\,dx_n$$

$$\gamma_n = \bigcap_{i=1}^{n} \int \frac{\partial \gamma_n}{\partial x_i}\,dx_i = \int \frac{\partial \gamma_n}{\partial x_1}\,dx_1 \cap \int \frac{\partial \gamma_n}{\partial x_2}\,dx_2 \cap \cdots \cap \int \frac{\partial \gamma_n}{\partial x_n}\,dx_n$$

In terms of f:

$$\gamma_1 = \bigcap_{i=1}^{n} \int \frac{\partial \gamma_1}{\partial x_i}\,dx_i = \int \frac{\partial^2 f}{\partial x_1^2}\,dx_1 \cap \cdots \cap \int \frac{\partial^2 f}{\partial x_n\,\partial x_1}\,dx_n = \bigcap_{i=1}^{n} \int \frac{\partial^2 f}{\partial x_i\,\partial x_1}\,dx_i$$

$$\gamma_2 = \bigcap_{i=1}^{n} \int \frac{\partial \gamma_2}{\partial x_i}\,dx_i = \int \frac{\partial^2 f}{\partial x_1\,\partial x_2}\,dx_1 \cap \cdots \cap \int \frac{\partial^2 f}{\partial x_n\,\partial x_2}\,dx_n = \bigcap_{i=1}^{n} \int \frac{\partial^2 f}{\partial x_i\,\partial x_2}\,dx_i$$

$$\gamma_n = \bigcap_{i=1}^{n} \int \frac{\partial \gamma_n}{\partial x_i}\,dx_i = \int \frac{\partial^2 f}{\partial x_1\,\partial x_n}\,dx_1 \cap \cdots \cap \int \frac{\partial^2 f}{\partial x_n^2}\,dx_n = \bigcap_{i=1}^{n} \int \frac{\partial^2 f}{\partial x_i\,\partial x_n}\,dx_i$$

By the Intersect Rule, the above intersections must provide the general components of f's gradient field. The result of the first stage of the mapping:

$$\nabla f = \begin{bmatrix} \gamma_1 \\ \gamma_2 \\ \vdots \\ \gamma_n \end{bmatrix} = \begin{bmatrix} \bigcap_{i=1}^{n} \int \frac{\partial^2 f}{\partial x_i\,\partial x_1}\,dx_i \\ \bigcap_{i=1}^{n} \int \frac{\partial^2 f}{\partial x_i\,\partial x_2}\,dx_i \\ \vdots \\ \bigcap_{i=1}^{n} \int \frac{\partial^2 f}{\partial x_i\,\partial x_n}\,dx_i \end{bmatrix}$$

Hence, $\nabla f$ has been determined.

Note that the Intersect Rule may be utilized yet again in order to derive f given its gradient field, as the first stage of the mapping transformed the Hessian matrix system into a gradient equation:

$$f = \bigcap_{k=1}^{n} \int \gamma_k\,dx_k$$

Expanding the intersection:

$$f = \bigcap_{k=1}^{n} \int \gamma_k\,dx_k = \int \gamma_1\,dx_1 \cap \int \gamma_2\,dx_2 \cap \cdots \cap \int \gamma_n\,dx_n$$

Describing $\gamma_k$ in terms of f:

$$f = \int \left( \bigcap_{i=1}^{n} \int \frac{\partial^2 f}{\partial x_i\,\partial x_1}\,dx_i \right) dx_1 \cap \cdots \cap \int \left( \bigcap_{i=1}^{n} \int \frac{\partial^2 f}{\partial x_i\,\partial x_n}\,dx_i \right) dx_n$$

Condensing the above intersections:

$$f = \bigcap_{k=1}^{n} \int \left( \bigcap_{i=1}^{n} \int \frac{\partial^2 f}{\partial x_i\,\partial x_k}\,dx_i \right) dx_k$$

The above process is referred to as the Jaffa Transform, a new method of deriving general solutions to Hessian matrix systems. The Jaffa Transform maps a matrix-valued function to a scalar-valued function, particularly, transforming a Hessian matrix system to the solution space of its entries, which completes the proof of Lemma 4.1. To formally define the Jaffa Transform of the Hessian of f:

$$\mathcal{J}\{H_f\} = \bigcap_{k=1}^{n} \int \left( \bigcap_{i=1}^{n} \int \frac{\partial^2 f}{\partial x_i\,\partial x_k}\,dx_i \right) dx_k \tag{5}$$

The Jaffa Transform applies the Intersect Rule twice, utilizing the information given within the Hessian matrix, in order to solve Hessian systems. Assuming the conjecture of the Intersect Rule holds true, the Jaffa Transform directly produces the generalized solution space of the Hessian matrix, the space consisting of all functions which satisfy the system.
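The following is a hedged computational sketch of the transform in two variables, assuming SymPy. The Intersect Rule is emulated here by the standard potential-reconstruction procedure (integrate one gradient component, then fix the leftover function of the remaining variable by comparing derivatives), which produces one member of the solution space; the arbitrary constants of the full space are suppressed. Later sketches in this paper reuse this jaffa_transform helper.

```python
# A two-variable sketch of the Jaffa Transform: two passes of
# gradient-field reconstruction recover f from its Hessian matrix system.
import sympy as sp

x, y = sp.symbols('x y')

def potential(gamma, v):
    """Recover p with grad(p) = gamma, up to an additive constant."""
    v0, v1 = v
    p = sp.integrate(gamma[0], v0)                 # integrate 1st component
    residue = sp.simplify(gamma[1] - sp.diff(p, v1))
    return p + sp.integrate(residue, v1)           # add the missing terms

def jaffa_transform(H, v):
    """Map a 2x2 Hessian matrix system back to an initial function."""
    # Stage 1: column k of H is the gradient of gamma_k = df/dx_k.
    gammas = [potential([H[i, k] for i in range(2)], v) for k in range(2)]
    # Stage 2: (gamma_1, gamma_2) is the gradient of f itself.
    return potential(gammas, v)

f = x**3 * y**2 + sp.sin(x) * y                    # illustrative initial function
H = sp.hessian(f, (x, y))
print(sp.simplify(jaffa_transform(H, (x, y)) - f)) # 0: f recovered
```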

5. Properties of the Jaffa Transform

By definition, the Jaffa Transform of the Hessian of f is always equivalent to the general form of f. As in:

$$\mathcal{J}\{H_f\} = \bigcap_{k=1}^{n} \int \left( \bigcap_{i=1}^{n} \int \frac{\partial^2 f}{\partial x_i\,\partial x_k}\,dx_i \right) dx_k = f$$

Through this, crucial properties of the transform may be derived, as carried out within the following section.

5.1. Closed under Matrix Addition

Lemma 5.1. For all $C^2$ scalar-valued functions f and g in $\mathbb{R}^n$ with Hessian matrices $H_f$ and $H_g$, respectively:

$$\mathcal{J}\{H_f + H_g\} = \mathcal{J}\{H_f\} + \mathcal{J}\{H_g\}$$

Proof. Consider the n × n Hessian matrices of $C^2$ scalar-valued functions f and g:

$$H_f = \begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2} & \cdots & \frac{\partial^2 f}{\partial x_n\,\partial x_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_1\,\partial x_n} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{bmatrix} \qquad H_g = \begin{bmatrix} \frac{\partial^2 g}{\partial x_1^2} & \cdots & \frac{\partial^2 g}{\partial x_n\,\partial x_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 g}{\partial x_1\,\partial x_n} & \cdots & \frac{\partial^2 g}{\partial x_n^2} \end{bmatrix}$$

The sum of the above matrices:

$$H_f + H_g = \begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2} & \cdots & \frac{\partial^2 f}{\partial x_n\,\partial x_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_1\,\partial x_n} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{bmatrix} + \begin{bmatrix} \frac{\partial^2 g}{\partial x_1^2} & \cdots & \frac{\partial^2 g}{\partial x_n\,\partial x_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 g}{\partial x_1\,\partial x_n} & \cdots & \frac{\partial^2 g}{\partial x_n^2} \end{bmatrix}$$

Matrix addition property:

$$H_f + H_g = \begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2} + \frac{\partial^2 g}{\partial x_1^2} & \cdots & \frac{\partial^2 f}{\partial x_n\,\partial x_1} + \frac{\partial^2 g}{\partial x_n\,\partial x_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_1\,\partial x_n} + \frac{\partial^2 g}{\partial x_1\,\partial x_n} & \cdots & \frac{\partial^2 f}{\partial x_n^2} + \frac{\partial^2 g}{\partial x_n^2} \end{bmatrix} = H_{f+g}$$

The Jaffa Transform of the sum of the matrices:

$$\mathcal{J}\{H_f + H_g\} = \mathcal{J}\{H_{f+g}\}$$

By definition:

$$\mathcal{J}\{H_f + H_g\} = \mathcal{J}\{H_{f+g}\} = f(\vec{x}) + g(\vec{x})$$

With respect to the sum of the Jaffa Transforms of the matrices:

$$\mathcal{J}\{H_f\} + \mathcal{J}\{H_g\}$$

Recall that:

$$\mathcal{J}\{H_f\} = f(\vec{x})$$

$$\mathcal{J}\{H_g\} = g(\vec{x})$$

Substituting:

$$\mathcal{J}\{H_f\} + \mathcal{J}\{H_g\} = f(\vec{x}) + g(\vec{x})$$

Hence:

$$\mathcal{J}\{H_f + H_g\} = \mathcal{J}\{H_f\} + \mathcal{J}\{H_g\} \tag{6}$$

5.2. Closed under Scalar Multiplication

Lemma 5.2. For all $C^2$ scalar-valued functions f in $\mathbb{R}^n$ with Hessian matrix $H_f$, and scalar $\beta \in \mathbb{R}$:

$$\mathcal{J}\{\beta H_f\} = \beta\,\mathcal{J}\{H_f\}$$

Proof. Consider the Hessian of f as described within Lemma 5.1:

$$H_f = \begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2} & \cdots & \frac{\partial^2 f}{\partial x_n\,\partial x_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_1\,\partial x_n} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{bmatrix}$$

Consider the Hessian of the scalar multiple of f by a factor of β:

$$H_{\beta f} = \begin{bmatrix} \beta\frac{\partial^2 f}{\partial x_1^2} & \cdots & \beta\frac{\partial^2 f}{\partial x_n\,\partial x_1} \\ \vdots & \ddots & \vdots \\ \beta\frac{\partial^2 f}{\partial x_1\,\partial x_n} & \cdots & \beta\frac{\partial^2 f}{\partial x_n^2} \end{bmatrix} = \beta \begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2} & \cdots & \frac{\partial^2 f}{\partial x_n\,\partial x_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_1\,\partial x_n} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{bmatrix} = \beta H_f$$

The Jaffa Transform of the Hessian of $\beta f$:

$$\mathcal{J}\{H_{\beta f}\} = \mathcal{J}\{\beta H_f\} = \beta f(\vec{x})$$

Consider the product of β and the Jaffa Transform of the Hessian of f:

$$\beta\,\mathcal{J}\{H_f\}$$

By definition:

$$\beta\,\mathcal{J}\{H_f\} = \beta f(\vec{x})$$

Hence:

$$\mathcal{J}\{\beta H_f\} = \beta\,\mathcal{J}\{H_f\} \tag{7}$$

5.3. Linearity of the Jaffa Transform Operator

The conditions required to establish the linearity of an operator are closure under the addition and subtraction of inputs, and closure under real scalar multiplication. Consequently, the results of (6) and (7) convey that the Jaffa Transform abides by the conditions of linearity as an operator. Hence, the Jaffa Transform is a linear operator.

Given the linearity of the Jaffa Transform, linear combinations of the transform also satisfy the Hessian matrix system. To contextualize this, suppose there exists an arbitrary Hessian, $H_h$, that may be defined as:

$$H_h = H_f + H_g$$

The resulting Hessian matrix system will possess the following Jaffa Transform solutions:

$$\mathcal{J}\{H_h\} = \mathcal{J}\{H_f + H_g\}$$

Which must be, by definition, a scalar field in $\mathbb{R}^n$. Considering the linearity of the transform, the solutions may be separated, given the transform's being closed under input matrix addition:

$$\mathcal{J}\{H_h\} = \mathcal{J}\{H_f\} + \mathcal{J}\{H_g\}$$

Additionally, let $f = c_1\delta$ and $g = c_2\sigma$, where $c_1, c_2 \in \mathbb{R}$. Describing f and g in terms of δ and σ, respectively:

$$\mathcal{J}\{H_h\} = c_1\,\mathcal{J}\{H_\delta\} + c_2\,\mathcal{J}\{H_\sigma\}$$

Hence, linear combinations of the solutions must also satisfy the Hessian matrix system, given the transform’s being closed under real scalar multiplication. Thus, the principle of superposition holds as a result of the transform’s linearity.
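A quick check of the superposition property (6), assuming SymPy and reusing the jaffa_transform sketch from Section 4.2; the functions below are illustrative.

```python
# Linearity check: J{H_f + H_g} = J{H_f} + J{H_g}.
import sympy as sp

x, y = sp.symbols('x y')
f, g = x**2 * y, sp.cos(x) + y**3

Hf = sp.hessian(f, (x, y))
Hg = sp.hessian(g, (x, y))

lhs = jaffa_transform(Hf + Hg, (x, y))
rhs = jaffa_transform(Hf, (x, y)) + jaffa_transform(Hg, (x, y))
print(sp.simplify(lhs - rhs))                      # 0
```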

The uniqueness of the Jaffa Transform's linearity roots in the mapping of spaces. As in, the Jaffa Transform maps a real square matrix space to a scalar field in Euclidean space. The Jaffa Transform may thus be described in terms of its domain and codomain as:

$$\mathcal{J} : M_{n,n}(\mathbb{R}) \to \mathbb{R}^n$$

As a result, the Jaffa Transform is one of the few linear integral transform operators that maps between differing real spaces as a whole, as opposed to differing domains, exclusively.

6. Notable Jaffa Transforms

This section discusses the Jaffa Transform of notable Hessian matrices, and the prevalence of said Jaffa Transforms. In particular, it proves the existence of functions for which the Hessian is identical to the n × n zero matrix, the identity matrix, and the Bordered Hessian.

6.1. The Zero Matrix

Lemma 6.1. For the n × n zero matrix, $0_n$, there exists a scalar-valued function, f, such that $f = \mathcal{J}\{0_n\}$, defined as:

$$\mathcal{J}\{0_n\} = C + \sum_{i=1}^{n} a_i x_i, \quad \text{where } a_i \in \mathbb{R} \text{ and } C \in \mathbb{R}$$

Proof. Consider the n × n zero matrix:

$$0_n = \begin{bmatrix} 0 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 0 \end{bmatrix}$$

The proposed conjecture is that there exists a scalar-valued function, f, such that $H_f \equiv 0_n$. To determine f, apply the Jaffa Transform to the zero matrix:

$$\mathcal{J}\{0_n\} = \int \left( \int 0\,dx_1 \cap \cdots \cap \int 0\,dx_n \right) dx_1 \cap \cdots \cap \int \left( \int 0\,dx_1 \cap \cdots \cap \int 0\,dx_n \right) dx_n$$

Evaluating the integrals:

$$\mathcal{J}\{0_n\} = \int \left( \{b_1 + f_1(x_2, \ldots, x_n)\} \cap \cdots \cap \{b_n + f_n(x_1, \ldots, x_{n-1})\} \right) dx_1 \cap \cdots \cap \int \left( \{c_1 + g_1(x_2, \ldots, x_n)\} \cap \cdots \cap \{c_n + g_n(x_1, \ldots, x_{n-1})\} \right) dx_n$$

where $b_i, c_i \in \mathbb{R}$. Intersecting the integral sets:

$$\mathcal{J}\{0_n\} = \int (b_1 + b_2 + \cdots + b_n)\,dx_1 \cap \cdots \cap \int (c_1 + c_2 + \cdots + c_n)\,dx_n$$

Summing the constant terms:

$$\mathcal{J}\{0_n\} = \int a_1\,dx_1 \cap \cdots \cap \int a_n\,dx_n$$

Evaluating the n indefinite integrals:

$$\mathcal{J}\{0_n\} = \{a_1 x_1 + \rho_1(x_2, \ldots, x_n)\} \cap \cdots \cap \{a_n x_n + \rho_n(x_1, \ldots, x_{n-1})\}$$

Intersecting the n integral sets:

$$\mathcal{J}\{0_n\} = C + a_1 x_1 + a_2 x_2 + a_3 x_3 + \cdots + a_n x_n = C + \sum_{i=1}^{n} a_i x_i$$

Hence:

$$\mathcal{J}\{0_n\} = C + \sum_{i=1}^{n} a_i x_i \tag{8}$$

A notable property of the generalized zero matrix:

$$\operatorname{tr}(0_n) \equiv 0$$

Recall the Laplacian Trace property of Hessian matrices, which states that for all Hessian matrices:

$$\operatorname{tr}(H_f) = \sum_{i=1}^{n} \frac{\partial^2 f}{\partial x_i^2} = \nabla^2 f$$

Thus:

$$\operatorname{tr}(0_n) = \nabla^2 f \equiv 0$$

Which implies that the function f satisfies the nth dimensional Laplace equation. By definition:

$$\nabla^2 f = \nabla^2 \mathcal{J}\{0_n\} \equiv 0$$

This evidently conveys that the zero matrix possesses a harmonic Jaffa Transform, as the transform satisfies the Laplace equation. The Jaffa Transform has therefore been utilized to derive harmonic functions.
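A direct check of Lemma 6.1, assuming SymPy: an affine function of the form (8) has the zero matrix as its Hessian and a vanishing Laplacian (the symbols below are illustrative, with n = 3).

```python
# J{0_3} = C + a1*x1 + a2*x2 + a3*x3 is harmonic, with Hessian 0_3.
import sympy as sp

x1, x2, x3, C, a1, a2, a3 = sp.symbols('x1 x2 x3 C a1 a2 a3')
f = C + a1*x1 + a2*x2 + a3*x3                      # the form derived in (8)

H = sp.hessian(f, (x1, x2, x3))
laplacian = sum(sp.diff(f, v, 2) for v in (x1, x2, x3))

print(H)                                           # zero matrix: H_f = 0_3
print(laplacian)                                   # 0: f satisfies Laplace
```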

6.2. The Identity Matrix

Lemma 6.2. For the n × n identity matrix, $I_n$, there exists a scalar-valued function, f, such that $f = \mathcal{J}\{I_n\}$, defined as:

$$\mathcal{J}\{I_n\} = C + \sum_{i=1}^{n} \left( \frac{1}{2}x_i^2 + a_i x_i \right), \quad \text{where } a_i \in \mathbb{R} \text{ and } C \in \mathbb{R}$$

Proof. Consider the n × n identity matrix:

$$I_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}$$

The conjecture proposed states that there exists a scalar-valued function, f, such that $H_f \equiv I_n$. To determine f, apply the Jaffa Transform to the identity matrix:

$$\mathcal{J}\{I_n\} = \int \left( \int 1\,dx_1 \cap \int 0\,dx_2 \cap \cdots \cap \int 0\,dx_n \right) dx_1 \cap \cdots \cap \int \left( \int 0\,dx_1 \cap \cdots \cap \int 1\,dx_n \right) dx_n$$

Integrating:

$$\mathcal{J}\{I_n\} = \int \left( \{x_1 + f_1(x_2, \ldots, x_n)\} \cap \cdots \cap \{b_1 + f_n(x_1, \ldots, x_{n-1})\} \right) dx_1 \cap \cdots \cap \int \left( \{c_1 + g_1(x_2, \ldots, x_n)\} \cap \cdots \cap \{x_n + g_n(x_1, \ldots, x_{n-1})\} \right) dx_n$$

where $b_i, c_i \in \mathbb{R}$. The intersection of the integral sets simplifies to:

$$\mathcal{J}\{I_n\} = \int (x_1 + b_1 + \cdots + b_{n-1})\,dx_1 \cap \cdots \cap \int (x_n + c_1 + \cdots + c_{n-1})\,dx_n$$

Summing the constant terms:

$$\mathcal{J}\{I_n\} = \int (x_1 + a_1)\,dx_1 \cap \cdots \cap \int (x_n + a_n)\,dx_n$$

Evaluating the n indefinite integrals:

$$\mathcal{J}\{I_n\} = \left\{ \frac{x_1^2}{2} + a_1 x_1 + \rho_1(x_2, \ldots, x_n) \right\} \cap \cdots \cap \left\{ \frac{x_n^2}{2} + a_n x_n + \rho_n(x_1, \ldots, x_{n-1}) \right\}$$

Intersecting the n integral sets:

$$\mathcal{J}\{I_n\} = C + \frac{x_1^2}{2} + a_1 x_1 + \frac{x_2^2}{2} + a_2 x_2 + \cdots + \frac{x_n^2}{2} + a_n x_n = C + \sum_{i=1}^{n} \left( \frac{1}{2}x_i^2 + a_i x_i \right)$$

Hence:

$$\mathcal{J}\{I_n\} = C + \sum_{i=1}^{n} \left( \frac{1}{2}x_i^2 + a_i x_i \right) \tag{9}$$

Manipulating the summation:

$$\mathcal{J}\{I_n\} = C + \sum_{i=1}^{n} \frac{1}{2}x_i^2 + \sum_{i=1}^{n} a_i x_i$$

By the results of Lemma 6.1, it can be concluded that:

$$\mathcal{J}\{I_n\} = \mathcal{J}\{0_n\} + \sum_{i=1}^{n} \frac{1}{2}x_i^2$$

6.3. The Bordered Hessian

Lemma 6.3. For the Bordered Hessian, $H_\Lambda$, as described in (3), there exists a Jaffa Transform such that:

$$\mathcal{J}\{H_\Lambda\} = C + L(\lambda, \vec{x}), \quad \text{where } C \in \mathbb{R}$$

Proof. Recall the Bordered Hessian in implicit gradient form, as stated in (3), which describes the Bordered Hessian in terms of the constraint function, g, and optimized function, f:

$$H_L = \begin{bmatrix} 0 & -\nabla g^T \\ -\nabla g & H_{f - \lambda g} \end{bmatrix}$$

The conjecture proposed states that the Bordered Hessian possesses a Jaffa Transform always equivalent to the Lagrange function. By definition, the Jaffa Transform maps a Hessian matrix to the corresponding scalar-valued initial function. As in, for all Hessian matrices of function f:

$$\mathcal{J}\{H_f\} = f(\vec{x})$$

As proven in Section 3.1, the Hessian of the Lagrange function is, indeed, the Bordered Hessian. Considering this notion, in cohesion with the definition of the Jaffa Transform, it is fair to conclude that:

$$\mathcal{J}\{H_\Lambda\} = C + L(\lambda, \vec{x}) \tag{10}$$

7. The Inverse Jaffa Transform

This section discusses the notion of the existence of an inverse operator for any given linear operator. Noting that the Jaffa Transform is a linear operator, it is proved that the conditions for invertibility apply to the Jaffa Transform.

7.1. Invertibility of Linear Operators

The invertibility of a linear operator roots in its surjectivity and injectivity, as in, a linear operator must be bijective in order to be invertible. To restate the conditions of invertibility, consider linear operator T, defined as:

$$T(x) : \Omega_1 \to \Omega_2$$

Which indicates:

$$\operatorname{domain}(T) = \Omega_1, \qquad \operatorname{codomain}(T) = \Omega_2$$

T is said to be invertible if and only if:

$$\exists\, S(y) : \Omega_2 \to \Omega_1 \;\big|\; S(T(x)) = x, \;\; T(S(y)) = y \quad \forall x \in \Omega_1,\; \forall y \in \Omega_2$$

The above condition hence defines $S = T^{-1}$. The existence of such an operator indicates the invertibility of $T(x)$.

7.2. Invertibility of the Jaffa Transform

Lemma 7.1. The invertibility of the Jaffa Transform roots in the existence of the Hessian matrix operator. As in:

$$\mathcal{J}^{-1}\{f\} = H_f$$

Proof. Recall that the Jaffa Transform is a linear operator, which transforms a real square matrix space to a scalar field in nth dimensional Euclidean space. To define the transform in terms of its domain and codomain:

$$\mathcal{J} : M_{n,n}(\mathbb{R}) \to \mathbb{R}^n$$

By definition, the inverse of a linear operator—assuming the existence of an inverse—must map from the codomain to the domain of the transform. Let $X = \mathcal{J}^{-1}$ be defined as:

$$X : \mathbb{R}^n \to M_{n,n}(\mathbb{R})$$

Such that:

$$X(\mathcal{J}\{H_f\}) = H_f \quad \forall H_f \in M_{n,n}(\mathbb{R})$$

and:

$$\mathcal{J}\{X(f)\} = f \quad \forall f \in \mathbb{R}^n$$

Substituting $\mathcal{J}\{H_f\} = f$:

$$X(\mathcal{J}\{H_f\}) = X(f) = H_f$$

Hence:

$$X(f) = H_f$$

It is apparent that, in order to invert the mapping of a Hessian matrix to its corresponding initial function, it is necessary to take the Hessian matrix of the initial function. Thus, it holds that:

$$\mathcal{J}^{-1}\{f\} = H_f \tag{11}$$

Essentially, considering the Hessian matrix operator is the inverse Jaffa Transform, the Jaffa Transform may, as a result, be viewed and treated as the inverse Hessian operator. However, it is absolutely crucial to distinguish between the inverse Hessian matrix, $H_f^{-1}$, and the Jaffa Transform. The inverse Hessian matrix only exists when the Hessian determinant is non-zero, and is, by definition, the matrix whose product with the Hessian yields the identity matrix.

Meanwhile, the Jaffa Transform is the inverse mapping of the Hessian matrix operator, as it intakes members of the codomain of the Hessian, and transforms said inputs to functions within the domain of the Hessian operator. By this, it is fair to state that $\mathcal{J}^{-1}\{f\} \neq H_f^{-1}$.
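The round trip of (11) can be sketched as follows, assuming SymPy and the jaffa_transform helper from Section 4.2; the function is an illustrative choice.

```python
# J^{-1}{f} = H_f: applying the Hessian operator to J{H_f} recovers H_f.
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * sp.exp(y)

Hf = sp.hessian(f, (x, y))                         # the inverse transform of f
back = sp.hessian(jaffa_transform(Hf, (x, y)), (x, y))

print(sp.simplify(back - Hf))                      # zero matrix
```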

8. Harmonic Jaffa Transforms

This section derives Poisson’s equation, then briefly introduces and discusses the Laplace equation, its applications, and its correlation to Poisson’s equation of heat conduction and electrostatic. The section then proceeds to examine the prevalence of the Jaffa Transform in deriving nth dimensional solutions to the Laplace equation, by transforming Hessian matrices with tr ( H f ) 0 , to generalized harmonic functions, satisfying the Laplace equation. The section concludes with the notion of the Jaffa Theorem, a fundamental principle in the calculus of sets which ensures the existence of harmonic Jaffa Transforms for all traceless Hessian matrices.

8.1. Poisson’s Equation of Electrostatic

Recall Gauss’s law regarding the electric flux through a closed surface:

$$\phi_E = \frac{\rho_v}{\epsilon_0}$$

where $\phi_E$ is the electric flux through a surface of volume v, $\rho_v$ is the charge density, and $\epsilon_0$ is the electric constant—also referred to as the electric permittivity.

Note that electric force possesses an associated electric potential. By this, in an electric potential scalar field, V, the electric force vector field, $\vec{E}$, is given by the relation:

$$\vec{E} = -\nabla V$$

This relation holds true given that the electric potential decreases as it is converted to electric force energy. Also, note the proportionality of the electric flux density [4] vector field, $\vec{D}$, and the electric force vector field, $\vec{E}$:

$$\vec{D} = \epsilon_0 \vec{E} = -\epsilon_0 \nabla V$$

With $\epsilon_0$ as the constant of proportionality. The electric flux through a surface is, by definition, the divergence of the electric force vector field, hence, the following relation exists:

$$\phi_E = \frac{1}{\epsilon_0}\,\nabla\cdot\vec{D} = \nabla\cdot\vec{E} = \frac{\rho_v}{\epsilon_0}$$

Multiplying both sides by $\epsilon_0$ yields:

$$\nabla\cdot\vec{D} = \rho_v$$

Which is oftentimes referred to as the differential form of Gauss's law of electric flux [5]. Describing the electric charge density in terms of the electric potential field through the substitution $\vec{D} = -\epsilon_0 \nabla V$:

$$-\nabla\cdot\epsilon_0 \nabla V = \rho_v$$

By the dot product scalar multiplication property:

$$-\epsilon_0\,\nabla\cdot\nabla V = \rho_v$$

Dividing both sides by $-\epsilon_0$:

$$\nabla\cdot\nabla V = -\frac{\rho_v}{\epsilon_0}$$

The dot product of the gradient vector with itself may be condensed into:

$$\nabla^2 V = -\frac{\rho_v}{\epsilon_0}$$

The $\nabla^2$ operator is referred to as the Laplacian operator, and shall be discussed in greater depth within the following section. The derived equation is referred to as Poisson's equation of electrostatics [6], and is utilized in the modeling of flows within systems containing external force. An illustration of this notion is the modeling of unsteady-state heat conduction, as in, heat conduction within a region containing either a heat source or sink. By the notion of external force emerges another form of Poisson's equation:

$$\nabla^2 V = \sigma(\vec{x})$$

where $\sigma(\vec{x})$ is the external force function [7]—oftentimes referred to as the external source function or component. It is crucial to note that, generally, Poisson's equation is an inhomogeneous second-order partial differential equation. However, there exists a particular homogeneous case of Poisson's equation: the Laplace equation.

8.2. The Laplace Equation

As previously mentioned, the Poisson equation is generally nonhomogeneous, given its application in the representation of flows through regions containing external force. With that, in order to represent a flow through a region without external force, the homogeneous Poisson equation emerges, in the form of the Laplace equation.

The nth dimensional Laplace partial differential equation [8] is a particular homogeneous case of Poisson's equation, in which $\frac{\rho_v}{\epsilon_0} = 0$, indicating a steady-state heat conduction system, suggesting the absence of heat sources and sinks. The Laplace equation states that for an nth dimensional function, u:

$$\nabla^2 u = \Delta u \equiv 0$$

where $\nabla^2$ and Δ are the Laplacian operators, defined as the Euclidean inner product of the gradient operator vector with itself—also referred to as the squared norm of the gradient operator. As in:

$$\nabla^2 = \langle \nabla, \nabla \rangle = \|\nabla\|^2 = \Delta$$

Expressing the Laplace equation in terms of the inner product of the gradient vector:

$$\nabla^2 u = \nabla\cdot\nabla u \equiv 0$$

Let $\vec{U} = \nabla u$, which implies that $\vec{U}$ is the conservative gradient field associated with u:

$$\nabla^2 u = \nabla\cdot\vec{U} = \operatorname{div}(\vec{U}) \equiv 0$$

Consequently, the Laplacian of a scalar field represents the divergence of its corresponding gradient vector field. The divergence of a vector field is, by definition, the amount of outward flow at a given input point within the field. In this context, the divergence of $\vec{U}$ depicts the amount of outward flow at any given point in the gradient of u.

Given that the divergence of the gradient field is identical to zero, for any and every input point within $\vec{U}$, there is no outward flow divergence, and, similarly, no inward flow convergence.

Suppose that $\vec{U}$ represents a particular heat conduction flow through a real space. This implies that at any given point within the gradient space, there does not exist a heat source or sink. Put simply, there is no external force that applies added heat or cooling to the system, which implies that $\vec{U}$ depicts the flow of an isolated heat conduction system.

The absence of sources and sinks within a vector field implies that the field portrays an incompressible flow. As a result, all functions which satisfy the Laplace equation have incompressible gradient fields.
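A small symbolic check, assuming SymPy, that the Laplacian of a scalar field is the divergence of its gradient field, as used above; the field u is illustrative.

```python
# div(grad(u)) equals the Laplacian of u.
import sympy as sp

x, y, z = sp.symbols('x y z')
u = sp.exp(x) * sp.sin(y) + z**2

grad_u = [sp.diff(u, v) for v in (x, y, z)]
div_grad = sum(sp.diff(c, v) for c, v in zip(grad_u, (x, y, z)))
laplacian = sum(sp.diff(u, v, 2) for v in (x, y, z))

print(sp.simplify(div_grad - laplacian))           # 0
```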

8.3. Applications of the Laplace Equation

The Laplace equation possesses various physical applications, particularly in the context of describing incompressible, irrotational flows within physical spaces. As previously established, all solutions to the Laplace equation—referred to as harmonic functions—by definition, hold the property of having an incompressible gradient field. A vector field, $\vec{\Phi}$, is said to be incompressible if and only if:

$$\operatorname{div}(\vec{\Phi}) = \nabla\cdot\vec{\Phi} = \langle \nabla, \vec{\Phi} \rangle \equiv 0$$

Suppose there exists a function, φ, such that $\vec{\Phi} = \nabla\varphi$, which implies that φ is the potential function of $\vec{\Phi}$. Given the components of $\vec{\Phi}$, φ may be determined through the Intersect Rule. Assuming that such a φ exists—signifying that $\vec{\Phi}$ is conservative—substitute $\vec{\Phi} = \nabla\varphi$ in the above equation:

$$\operatorname{div}(\nabla\varphi) = \nabla\cdot\nabla\varphi = \langle \nabla, \nabla\varphi \rangle = \nabla^2\varphi \equiv 0$$

Which is, by definition, the Laplacian operator applied on φ. Hence, φ is a harmonic function, satisfying the Laplace equation. With respect to steady-state flows, the harmonic function, φ, is referred to as the potential flow within a space, whilst $\vec{\Phi}$ is referred to as the flow's velocity. However, it is worth noting that the potential flow is occasionally correlated to the flow velocity by:

$$\vec{\Phi} = -\nabla\varphi$$

Given the nature of increasing kinetic energy implying decreasing potential energy due to energy conversion. However, for the sake of consistency, the correlation $\vec{\Phi} = \nabla\varphi$ shall be utilized when describing flow velocity in terms of potential flow.

With respect to the irrotational flows depicted by harmonic functions, reconsider vector field $\vec{\Phi}$, which is defined as conservative, given the existence of a corresponding scalar-valued potential function. A vector field is said to be irrotational if, at all given points within the vector field, the curl is zero.

The curl of a vector field is, by definition, the amount of rotation at a given input point within the field. Hence, if the curl of a vector field is identical to zero, then there exists the absence of any rotation within the field. Meaning that, at any and every input point in the vector field, there is no vortex rotation.

Suppose that vector field $\vec{\Phi}$ represents the flow of a particular fluid within a real space. The complete absence of curl in the vector field directly conveys that, at any input point in the field, the fluid flowing through said region will never rotate and curl in a vortex-like manner.

To correlate harmonic functions to irrotational fluid flow, recall the irrotational property of conservative vector fields, which states that:

$$\operatorname{Curl}(\nabla\varphi) = \nabla\times\nabla\varphi \equiv \vec{0}$$

Utilizing the relation $\vec{\Phi} = \nabla\varphi$:

$$\operatorname{Curl}(\nabla\varphi) = \operatorname{Curl}(\vec{\Phi}) = \nabla\times\vec{\Phi} \equiv \vec{0}$$

Therefore, solutions to the Laplace equation are utilized in order to describe steady-state physical systems of incompressible and irrotational flow, amongst various other applications. This notion proves particularly prevalent with respect to incompressible and irrotational fluid mechanics, and may be extended to various coordinate systems. Within the following section, however, assume the Laplacian operator refers to that of Cartesian coordinates, as opposed to the spherical and cylindrical coordinate systems.

8.4. The Jaffa Theorem

The Hessian matrix is correlated to the Laplace equation by the Laplacian trace property. As in, the sum of the terms along the central diagonal of the Hessian matrix—which is, by definition, the trace of the Hessian—is equivalent to the Laplacian operator applied to the Hessian’s initial function.

Consider an arbitrary scalar-valued function, f in $\mathbb{R}^n$, with existing continuous second partial derivatives. The Laplacian operator is defined as the Euclidean inner product of the gradient operator vector with itself, as previously mentioned:

$$\nabla^2 = \langle \nabla, \nabla \rangle = \begin{bmatrix} \frac{\partial}{\partial x_1} \\ \vdots \\ \frac{\partial}{\partial x_n} \end{bmatrix} \cdot \begin{bmatrix} \frac{\partial}{\partial x_1} \\ \vdots \\ \frac{\partial}{\partial x_n} \end{bmatrix}$$

Evaluating the inner product:

$$\nabla^2 = \frac{\partial^2}{\partial x_1^2} + \cdots + \frac{\partial^2}{\partial x_n^2} = \sum_{i=1}^{n} \frac{\partial^2}{\partial x_i^2}$$

Applying the Laplacian operator to f:

$$\nabla^2 f = \frac{\partial^2 f}{\partial x_1^2} + \cdots + \frac{\partial^2 f}{\partial x_n^2} = \sum_{i=1}^{n} \frac{\partial^2 f}{\partial x_i^2} = \operatorname{tr}(H_f)$$

As mentioned in the previous section, a function is said to be harmonic if and only if it satisfies the Laplace equation. As in, the Laplacian of the function must be identical to zero in order to be considered harmonic.

Suppose that f is harmonic. This implies that f satisfies the Laplace equation, and it thus holds that, by definition:

$$\nabla^2 f = \operatorname{tr}(H_f) \equiv 0$$

Hence, the Laplacian trace property consequently yields another property, which states that, if the initial function is harmonic, then its corresponding Hessian matrix will be traceless.

Theorem 8.1. Recall the Laplacian trace property of Hessian matrices, which states:

$$\operatorname{tr}(H_f) = \sum_{i=1}^{n} \frac{\partial^2 f}{\partial x_i^2} = \nabla^2 f$$

Essentially, the sum of the terms along the central diagonal of any Hessian matrix is equivalent to the Laplacian of the initial function, f. The proof of this property is given above. The property proves vital in the derivation of harmonic functions through the use of the Jaffa Transform.

The theorem states:

$$\operatorname{tr}(H_f) \equiv 0 \iff \exists\, \mathcal{J}\{H_f\} \;\big|\; \nabla^2 \mathcal{J}\{H_f\} \equiv 0 \tag{12}$$

The proposition of (12) is referred to as the Jaffa Theorem, which states that the trace of the Hessian of f is identical to zero if and only if there exists a harmonic Jaffa Transform of $H_f$, satisfying the Laplace equation.

The proof of the Jaffa Theorem lies in elementary mathematical deduction. By definition, the Jaffa Transform maps the Hessian of scalar-valued function f back to f. Hence, if the initial function is harmonic, this implies that the Jaffa Transform of its Hessian matrix is also harmonic. Moreover, the Laplacian trace property may be utilized to determine the harmonic nature of the initial function. Therefore, if the trace of the generalized Hessian of f is identical to zero, then the initial function, f, is harmonic, and, as a result, the Jaffa Transform of the Hessian of f is harmonic.

Harmonic Jaffa Transforms exist due to the Laplacian trace property, and the mapping between Hessian matrices and initial functions. Suppose there exists a Hessian matrix of a scalar-valued initial function, f, with $\operatorname{tr}(H_f) \equiv 0$. Consider the definition of the Jaffa Transform:

$$\mathcal{J}\{H_f\} = \bigcap_{k=1}^{n} \int \left( \bigcap_{i=1}^{n} \int \frac{\partial^2 f}{\partial x_i\,\partial x_k}\,dx_i \right) dx_k = f$$

Given that f is harmonic, it must hold that:

$$\nabla^2 f = \langle \nabla, \nabla f \rangle = \begin{bmatrix} \frac{\partial}{\partial x_1} \\ \vdots \\ \frac{\partial}{\partial x_n} \end{bmatrix} \cdot \begin{bmatrix} \frac{\partial f}{\partial x_1} \\ \vdots \\ \frac{\partial f}{\partial x_n} \end{bmatrix} = \sum_{i=1}^{n} \frac{\partial^2 f}{\partial x_i^2} \equiv 0$$

The Jaffa Transform yields the generalized form of the initial function of the Hessian matrix. Utilizing the substitution $f = \mathcal{J}\{H_f\}$ results in:

$$\nabla^2 \mathcal{J}\{H_f\} = \langle \nabla, \nabla \mathcal{J}\{H_f\} \rangle = \sum_{i=1}^{n} \frac{\partial^2\,\mathcal{J}\{H_f\}}{\partial x_i^2} \equiv 0$$

Which implies that the Jaffa Transform of the Hessian of f satisfies the Laplace equation, meaning it is harmonic. Thus, a Jaffa Transform is said to be harmonic if it is the transform of a traceless Hessian matrix. To restate this, harmonic Jaffa Transforms are defined as the transforms of Hessian matrices of harmonic functions.

As a result, any Hessian matrix with zero trace at all given input values must always possess a Jaffa Transform that satisfies the Laplace equation. Hence, through the Jaffa Theorem, various new solutions to the Laplace equation may be derived for all dimensions. The Jaffa Transform is extendable to quasi-infinite dimensional functions, which implies that there exist infinite dimensional harmonic functions.

It is crucial to note, though, that said functions no longer exist in Euclidean space, but rather in $\ell^2$ Hilbert space, also referred to as the $\ell^2$ Lebesgue space. Additionally, the physical notion of an infinite dimensional function grows increasingly abstract; theoretically, however, such functions may exist within infinite dimensional Hilbert spaces.

Given a quasi-infinite dimensional Hessian matrix space with zero trace, the infinite dimensional harmonic initial function may be defined through the Jaffa Transform as:

$$\mathcal{J}\{H_f\} = \bigcap_{k=1}^{\infty} \int \left( \bigcap_{n=1}^{\infty} \int \frac{\partial^2 f}{\partial x_n\,\partial x_k}\,dx_n \right) dx_k$$

Thus, the Jaffa Theorem ensures not exclusively the existence of finite dimensional harmonic functions, but also that of abstract infinite dimensional Laplacian harmonic functions, contributing to the field of infinite dimensional harmonic analysis.

By the Jaffa Theorem, various nth dimensional solutions to the Laplace equation may be derived. When given a traceless Hessian matrix, the Jaffa Transform may be utilized in order to determine a solution that satisfies both the Hessian matrix system and the nth dimensional Laplace equation.
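As a closing illustration of the Jaffa Theorem, assuming SymPy and the jaffa_transform sketch from Section 4.2: the traceless matrix below is the Hessian matrix system of the harmonic function x^2 - y^2, and its transform indeed satisfies the Laplace equation.

```python
# A traceless Hessian matrix system yields a harmonic Jaffa Transform.
import sympy as sp

x, y = sp.symbols('x y')
H = sp.Matrix([[2, 0], [0, -2]])                   # tr(H) = 0

f = jaffa_transform(H, (x, y))                     # recovers x**2 - y**2
laplacian = sp.diff(f, x, 2) + sp.diff(f, y, 2)

print(f, laplacian)                                # x**2 - y**2, 0
```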

9. Discussion

The Jaffa Transform is a new, invertible, linear integral transform method of solving partial differential equations by mapping an n × n matrix space to a corresponding scalar-valued function in nth dimensional Euclidean space. The Jaffa Transform utilizes notions from the calculus of sets, by applying the Intersect Rule twice, in order to derive solutions to Hessian matrix systems, and, under certain circumstances, the Laplace equation. The Jaffa Theorem deploys the Jaffa Transform in order to establish a principle that may be used in the derivation of nth dimensional solutions to the Laplace equation. Hence, new solutions to the Laplace equation may be derived, given the existence of harmonic Jaffa Transforms, and the correlation between Hessian matrices and the Laplacian. Overall, the Jaffa Transform is a pivotal innovation in the field of vector calculus and partial differential equations, as it correlates concepts from the calculus of sets in order to transform and solve various linear second-order partial differential equations.

Acknowledgements

I would like to greatly thank and acknowledge the American Community School of Beirut for consistently providing a phenomenal educational and work environment, and for majorly nourishing talents in all fields and regards.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Jacobi, C.G.J. (1896) Über die Bildung und die Eigenschaften der Determinanten: De Formatione et Proprietatibus Determinantium. W. Engelmann, Leipzig.
[2] Jaffa, D.A. (2023) Conservative Vector Fields and the Intersect Rule. Journal of Applied Mathematics and Physics, 11, 2888-2903.
https://doi.org/10.4236/jamp.2023.1110190
[3] Taylor, B. (1715) Methodus Incrementorum Directa et Inversa. Typis Pearsonianis prostant apud Gul. Innys ad Insignia Principis in Coemeterio Paulino, Londini.
[4] Ellingson, S.W. (2022) 2.4: Electric Flux Density. LibreTexts Engineering.
https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Electro-Optics/Book%3A_Electromagnetics_I_(Ellingson)/02%3A_Electric_and_Magnetic_Fields/2.04%3A_Electric_Flux_Density
[5] Ellingson, S.W. (2022) 5.7: Gauss’ Law—Differential Form. LibreTexts Engineering.
https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Electro-Optics/Book%3A_Electromagnetics_I_(Ellingson)/05%3A_Electrostatics/5.07%3A_Gauss%E2%80%99_Law_-_Differential_Form
[6] Ellingson, S.W. (2022) 5.15: Poisson’s and Laplace’s Equations. LibreTexts Engineering.
https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Electro-Optics/Book%3A_Electromagnetics_I_(Ellingson)/05%3A_Electrostatics/5.15%3A_Poisson%E2%80%99s_and_Laplace%E2%80%99s_Equations
[7] Hunt, R.E. (2002) Chapter 2: Poisson’s Equation.
https://www.damtp.cam.ac.uk/user/reh10/lectures/nst-mmii-chapter2.pdf
[8] De Laplace, P.S.M. (1798) Traité de Mécanique Céleste. Chez J.B.M. Duprat, Paris.

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.