New Formulas and Results for 3-Dimensional Vector Fields

Abstract

New formulas are derived for once-differentiable 3-dimensional fields, using the operator $\left(x\frac{\partial}{\partial x}+y\frac{\partial}{\partial y}+z\frac{\partial}{\partial z}\right)$. This new operator has a property similar to that of the Laplacian operator; however, unlike the Laplacian operator, the new operator requires only once-differentiability. A simpler formula is derived for the classical Helmholtz decomposition. Orthogonality of the solenoidal and irrotational parts of a vector field, the uniqueness of the familiar inverse-square laws, and the existence of solutions of a system of first-order PDEs in 3 dimensions are proved. New proofs are given for the Helmholtz Decomposition Theorem and the Divergence Theorem. The proofs use the relations between the rectangular-Cartesian and spherical-polar coordinate systems. Finally, an application is made to the study of Maxwell's equations.

Share and Cite:

Agashe, S. (2021) New Formulas and Results for 3-Dimensional Vector Fields. Applied Mathematics, 12, 1058-1096. doi: 10.4236/am.2021.1211069.

1. Introduction

In this article, the following new formula is derived, where $f:\mathbb{R}^3\to\mathbb{R}$ is a continuously differentiable function which vanishes at infinity:

$$ f(a,b,c) = -\frac{1}{4\pi}\int_{\mathbb{R}^3} \frac{(x-a)\frac{\partial f}{\partial x}+(y-b)\frac{\partial f}{\partial y}+(z-c)\frac{\partial f}{\partial z}}{\left[(x-a)^2+(y-b)^2+(z-c)^2\right]^{3/2}}\,dx\,dy\,dz, \quad (1) $$

where the derivatives are evaluated at $(x,y,z)$ (Theorem 4 below); three other properties (Theorem 5) are also proved.
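Formula (1) can be spot-checked numerically. The sketch below (not part of the paper) takes $(a,b,c)=(0,0,0)$ and the arbitrary test function $f(x,y,z)=e^{-(x^2+y^2+z^2)}$, which vanishes at infinity as the formula requires, and evaluates the improper integral in spherical coordinates:

```python
# Numerical sanity check of formula (1) at (a, b, c) = (0, 0, 0).
# The Gaussian test function f = exp(-(x^2+y^2+z^2)) is an arbitrary choice.
import numpy as np
from scipy.integrate import tplquad

def integrand(r, theta, phi):
    # Cartesian point, the Cartesian partials of f, the kernel of (1),
    # and the spherical volume element r^2 sin(theta).
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    e = np.exp(-(x * x + y * y + z * z))
    fx, fy, fz = -2 * x * e, -2 * y * e, -2 * z * e
    return (x * fx + y * fy + z * fz) / r**3 * r**2 * np.sin(theta)

# Improper integral: r runs from a small epsilon (the integrand is undefined
# at the origin) out to a cutoff beyond which the Gaussian is negligible.
I, _ = tplquad(integrand, 0.0, 2 * np.pi,   # phi (outer)
               0.0, np.pi,                  # theta (middle)
               1e-8, 10.0)                  # r (inner)
recovered = -I / (4 * np.pi)
print(recovered)  # close to f(0, 0, 0) = 1
```

The recovered value agrees with $f(0,0,0)=1$ to quadrature accuracy.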

Using these results, the following formula is proved (Theorem 9 below) for a 3-dimensional continuously differentiable vector field, i.e., a continuously differentiable function $\mathbf{F}:\mathbb{R}^3\to\mathbb{R}^3$:

$$ \mathbf{F}(a,b,c) = -\frac{1}{4\pi}\int_{\mathbb{R}^3} \frac{1}{r^3}\left[(\nabla\cdot\mathbf{F})\,\mathbf{r} - \mathbf{r}\times(\nabla\times\mathbf{F})\right] dx\,dy\,dz, \quad (2) $$

assuming $\lim_{r\to\infty}\mathbf{F}(r\sin\theta\cos\phi, r\sin\theta\sin\phi, r\cos\theta)=\mathbf{0}$, uniformly for all $0\le\theta\le\pi$, $0\le\phi\le 2\pi$, where $\mathbf{r}$ denotes the vector from $(a,b,c)$ to $(x,y,z)$, i.e., a relative position vector, and $r$ denotes its length (briefly, $\mathbf{F}$ vanishes at infinity). Further, $\nabla\cdot\mathbf{F}$ and $\nabla\times\mathbf{F}$ are evaluated at $(x,y,z)$. This formula requires only one integration.

A formula similar to (2) appears in Stokes [1] and Blumenthal [2]; Blumenthal has "$\operatorname{grad}(1/r)$" in place of our "$\hat{\mathbf{r}}/r^2$". Stokes (using modern notation) is looking for a vector field $\mathbf{F}$ such that $\nabla\times\mathbf{F}=\mathbf{0}$, where $\nabla\cdot\mathbf{F}$ is a specified function which vanishes outside a "finite portion of space". He next seeks a vector field $\mathbf{G}$, say, which has zero divergence and has, as its curl, a given function whose divergence is zero. The sum $\mathbf{F}+\mathbf{G}$ then has the specified divergence and curl. Blumenthal assumes that the first partial derivatives of the vector field also vanish at infinity; he then proves uniqueness up to an additive constant function. Finally, his proof technique is different from ours, using Green's theorem, among others (see [3] [4] [5] for surveys of the Helmholtz Decomposition; Stokes's work preceded Helmholtz's).

We also prove an extension of (2) for the “bounded” case (Theorem 12).

Compare (2) with the formula given, without proof, by Jefimenko [6]:

$$ \mathbf{F}(a,b,c) = -\frac{1}{4\pi}\int_{\mathbb{R}^3} \frac{\nabla(\nabla\cdot\mathbf{F}) - \nabla\times(\nabla\times\mathbf{F})}{r}\,dx\,dy\,dz, \quad (3) $$

which requires differentiability of $\nabla\cdot\mathbf{F}$ and $\nabla\times\mathbf{F}$, and a stronger "at infinity" condition, namely, $\lim_{r\to\infty} r^2\,\mathbf{F}(r\sin\theta\cos\phi, r\sin\theta\sin\phi, r\cos\theta)=\mathbf{0}$, uniformly for all $0\le\theta\le\pi$, $0\le\phi\le 2\pi$.

Another related formula [7] [8] [9] [10] [11], often called the "Grad-Curl Theorem" [9], is:

$$ \mathbf{F}(a,b,c) = -\nabla\left(\frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{\nabla\cdot\mathbf{F}}{r}\,dx\,dy\,dz\right) + \nabla\times\left(\frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{\nabla\times\mathbf{F}}{r}\,dx\,dy\,dz\right). \quad (4) $$

Its proof requires the vector field to be twice-differentiable in [7] [8] [9] but only once-differentiable in [10] [11], and the stronger "at infinity" condition to hold in all these references. All three formulas above immediately prove (a part of) Helmholtz's Theorem, which states that a 3-dimensional vector field is uniquely determined by its divergence $\nabla\cdot\mathbf{F}$ and curl $\nabla\times\mathbf{F}$.

Denoting the two "parts" of $\mathbf{F}$ in our Formula (2) above by $\mathbf{F}_\nabla$ and $\mathbf{F}_\times$ (we will often use $dV$ to denote volume integration), namely,

$$ \mathbf{F}_\nabla(a,b,c) = -\frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{1}{r^3}\,(\nabla\cdot\mathbf{F})\,\mathbf{r}\,dV, $$

$$ \mathbf{F}_\times(a,b,c) = \frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{1}{r^3}\,\mathbf{r}\times(\nabla\times\mathbf{F})\,dV, $$

we show (Theorem 16), without requiring twice-differentiability of $\mathbf{F}$, that:

$$ \nabla\cdot\mathbf{F}_\nabla = \nabla\cdot\mathbf{F}, \qquad \nabla\times\mathbf{F}_\nabla = \mathbf{0}, $$

$$ \nabla\cdot\mathbf{F}_\times = 0, \qquad \nabla\times\mathbf{F}_\times = \nabla\times\mathbf{F}. $$

$\mathbf{F}_\nabla$ is usually called the irrotational or lamellar part of $\mathbf{F}$, and $\mathbf{F}_\times$ the solenoidal part of $\mathbf{F}$. We also show (Theorem 17) that these two parts are orthogonal, i.e.,

$$ \int_{\mathbb{R}^3} \mathbf{F}_\nabla\cdot\mathbf{F}_\times\,dV = 0, $$

so that the decomposition implied by our Formula (2) is a "complementary-orthogonal" decomposition. In fact, we show a stronger result, namely, that the irrotational part of one field is orthogonal to the solenoidal part of any field. This result may be related to Tellegen's Theorem of Electrical Network Theory.

Further, we show that the "corresponding" parts in the three Formulas (2), (3), (4) are all equal; thus:

$$ \mathbf{F}_\nabla = -\frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{\nabla(\nabla\cdot\mathbf{F})}{r}\,dV = -\nabla\left(\frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{\nabla\cdot\mathbf{F}}{r}\,dV\right), \quad (5) $$

$$ \mathbf{F}_\times = \frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{\nabla\times(\nabla\times\mathbf{F})}{r}\,dV = \nabla\times\left(\frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{\nabla\times\mathbf{F}}{r}\,dV\right). \quad (6) $$

This follows from three new formulas (Theorems 13, 14, 15 below) regarding integrands with the $\nabla$, $\nabla\cdot$, and $\nabla\times$ operators:

$$ \int_{\mathbb{R}^3}\frac{1}{r}\,\nabla f\,dV = \int_{\mathbb{R}^3}\frac{1}{r^3}\,f\,\mathbf{r}\,dV, $$

$$ \int_{\mathbb{R}^3}\frac{1}{r}\,(\nabla\times\mathbf{A})\,dV = \int_{\mathbb{R}^3}\frac{1}{r^3}\,(\mathbf{r}\times\mathbf{A})\,dV, $$

$$ \int_{\mathbb{R}^3}\frac{1}{r}\,(\nabla\cdot\mathbf{A})\,dV = \int_{\mathbb{R}^3}\frac{1}{r^3}\,(\mathbf{r}\cdot\mathbf{A})\,dV. $$
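The third identity (the one with $\nabla\cdot$) can be spot-checked numerically. The sketch below is not from the paper: the field $\mathbf{A}=(e^{-\rho^2},0,0)$ and the field point $(0.5,0,0)$ are arbitrary choices; both sides are computed in spherical coordinates centred at the field point.

```python
# Numerical spot check of  ∫ (div A)/r dV  =  ∫ (r · A)/r^3 dV,
# with r the vector from the field point p to the source point.
# A = (exp(-rho^2), 0, 0) and p = (0.5, 0, 0) are arbitrary choices.
import numpy as np
from scipy.integrate import tplquad

p = np.array([0.5, 0.0, 0.0])

def field_and_div(x, y, z):
    e = np.exp(-(x*x + y*y + z*z))
    return np.array([e, 0.0, 0.0]), -2.0 * x * e  # (A, div A)

def lhs_integrand(s, theta, phi):
    x = p[0] + s*np.sin(theta)*np.cos(phi)
    y = p[1] + s*np.sin(theta)*np.sin(phi)
    z = p[2] + s*np.cos(theta)
    _, divA = field_and_div(x, y, z)
    return divA / s * s**2 * np.sin(theta)      # (div A)/r times volume element

def rhs_integrand(s, theta, phi):
    x = p[0] + s*np.sin(theta)*np.cos(phi)
    y = p[1] + s*np.sin(theta)*np.sin(phi)
    z = p[2] + s*np.cos(theta)
    A, _ = field_and_div(x, y, z)
    rvec = np.array([x, y, z]) - p
    return (rvec @ A) / s**3 * s**2 * np.sin(theta)  # (r·A)/r^3 times volume element

lims = (0.0, 2*np.pi, 0.0, np.pi, 1e-8, 10.0)
lhs = tplquad(lhs_integrand, *lims)[0]
rhs = tplquad(rhs_integrand, *lims)[0]
print(lhs, rhs)  # the two integrals agree
```

The agreement reflects the integration by parts behind the identity: the boundary term vanishes because the field decays at infinity.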

Formula (4) is usually written more compactly:

$$ \mathbf{F} = -\nabla\phi + \nabla\times\mathbf{A}, \quad (7) $$

where $\phi$, called the scalar potential associated with $\mathbf{F}$, and $\mathbf{A}$, called the vector potential associated with $\mathbf{F}$, are given by:

$$ \phi = \frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{\nabla\cdot\mathbf{F}}{r}\,dV, \qquad \mathbf{A} = \frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{\nabla\times\mathbf{F}}{r}\,dV. \quad (8) $$

We derive interesting alternative expressions for the potentials which do not involve any $\nabla$, that is to say, differentiation, operation, namely:

$$ \phi = \frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{1}{r^3}\,(\mathbf{r}\cdot\mathbf{F})\,dV, \qquad \mathbf{A} = \frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{1}{r^3}\,(\mathbf{r}\times\mathbf{F})\,dV. \quad (9) $$

These expressions seem to be new.

Of course, in the classical formulas above, the recovery of F is from its divergence and curl as sources, but not directly; it involves potentials as intermediaries, whereas in our formula, the recovery is more direct.

The proofs of Formula (4) in [7] [8] [9] use the Laplacian operator $\nabla^2$, namely:

$$ \nabla^2 = \frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}+\frac{\partial^2}{\partial z^2}, \quad (10) $$

and its property:

$$ f(a,b,c) = -\frac{1}{4\pi}\int_{\mathbb{R}^3} \frac{\frac{\partial^2 f}{\partial x^2}+\frac{\partial^2 f}{\partial y^2}+\frac{\partial^2 f}{\partial z^2}}{\left((x-a)^2+(y-b)^2+(z-c)^2\right)^{1/2}}\,dV, \quad (11) $$

where $f$ is a scalar function and the derivatives are evaluated at $(x,y,z)$. This property requires twice-differentiability. The proofs of (4) in [10] [11] use a Green's function solution of the Poisson equation. In contrast, our proofs use the differential operator:

$$ \left(x\frac{\partial}{\partial x}+y\frac{\partial}{\partial y}+z\frac{\partial}{\partial z}\right), $$

and its property (1) above, which does not seem to have been noticed before.

Our approach exploits some nice properties of the spherical-polar coordinate system in its relation with the rectangular coordinate system, resulting in some simple integrations (incidentally, Gauss [12] exploited these in his Memoir on the "inverse square force law" to prove that the potential function is twice-differentiable). Our derivations do not use the Dirac $\delta$-function, "singularity functions" like $\nabla(1/r)$ and $\nabla^2(1/r)$, or "$\delta$-function identities" (derivations that use the $\delta$-function are not necessarily shorter; as an example, see [13]). We also do not use the theory of distributions.

Interestingly, results similar to (1) hold for the operator $\left(x\frac{d}{dx}\right)$ in one variable, the operator $\left(x\frac{\partial}{\partial x}+y\frac{\partial}{\partial y}\right)$ in two variables, and even one in four variables, namely, $\left(x_1\frac{\partial}{\partial x_1}+x_2\frac{\partial}{\partial x_2}+x_3\frac{\partial}{\partial x_3}+x_4\frac{\partial}{\partial x_4}\right)$.

We prove an extension of Theorem 1 (Theorem 6) for bounded regions, involving volume and surface integrals, with a new, more natural definition of a region bounded by a surface, appropriate for spherical-polar coordinates.

Application to 3-dimensional vector fields begins with Theorem 9 which gives our Formula (2). The technique used leads immediately to an extension (Theorem 12) of Theorem 9. It appears to be a better alternative to the usual formula for vector fields over bounded regions. We prove some new results (Theorems 13, 14, 15) on removing a derivative occurring inside an integral. Using them, we prove a property of irrotational and solenoidal parts (Theorem 16) and then their orthogonality (Theorem 17). An existence result (Theorem 18) is then easily proved regarding a simple system of first-order partial differential equations in 3 independent variables. This result is believed to be new. Helmholtz’s Theorem is then proved (Theorems 19 and 20) in a new form and with weaker assumptions. We give a proof of the Divergence Theorem (Theorem 21) with our new definition of a closed surface. Finally, we make an application to Maxwell’s equations.

2. Preliminaries

We start with the usual defining relations between the rectangular coordinates $(x,y,z)$ and the spherical-polar coordinates $(r,\theta,\phi)$:

$$ x = r\sin\theta\cos\phi, \quad y = r\sin\theta\sin\phi, \quad z = r\cos\theta. $$

Let us denote by $\mathbb{R}^3_{sph}$ the set of spherical-polar coordinate values, i.e., the set $\{(r,\theta,\phi): r\ge 0,\ 0\le\theta\le\pi,\ 0\le\phi\le 2\pi\}$, and by $T$ the transformation from the spherical-polar to rectangular coordinates, so that $T$ is a function on $\mathbb{R}^3_{sph}$ onto $\mathbb{R}^3$, and $T(r,\theta,\phi)=(x,y,z)$ where $x,y,z$ are given by the equations above. Then, if $f$ is a function on $\mathbb{R}^3$ into $\mathbb{R}$, we denote by $\hat{f}$ the function on $\mathbb{R}^3_{sph}$ defined by:

$$ \hat{f}(r,\theta,\phi) = f(r\sin\theta\cos\phi,\ r\sin\theta\sin\phi,\ r\cos\theta). $$

Remark 1: The two functions $f$ and $\hat{f}$ have different domains, and so are different qua functions, but their values are related. They are usually denoted by a single symbol, and notation like $f(x,y,z)$ and $f(r,\theta,\phi)$ is used to indicate the two different meanings (we could also write $f\circ T$ for $\hat{f}$, where "$\circ$" denotes the composition of two functions). We will say that the function $\hat{f}$ is associated with the function $f$.

We note the following relations, between the derivatives corresponding to the coordinate variables of the two systems, involving the Jacobian matrix $JT$ corresponding to the transformation $T$:

$$ \begin{bmatrix} \dfrac{\partial\hat{f}}{\partial r} \\[4pt] \dfrac{\partial\hat{f}}{\partial\theta} \\[4pt] \dfrac{\partial\hat{f}}{\partial\phi} \end{bmatrix} = \begin{bmatrix} \sin\theta\cos\phi & \sin\theta\sin\phi & \cos\theta \\ r\cos\theta\cos\phi & r\cos\theta\sin\phi & -r\sin\theta \\ -r\sin\theta\sin\phi & r\sin\theta\cos\phi & 0 \end{bmatrix} \begin{bmatrix} \dfrac{\partial f}{\partial x} \\[4pt] \dfrac{\partial f}{\partial y} \\[4pt] \dfrac{\partial f}{\partial z} \end{bmatrix}, \quad (12) $$

the determinant, $|JT|$, of the matrix being $r^2\sin\theta$; the two sides of these equations are to be evaluated at corresponding triples $(r,\theta,\phi)$ and $(x,y,z)$.

By inverting these relations, we obtain, if $\sin\theta\ne 0$:

$$ \begin{bmatrix} \dfrac{\partial f}{\partial x} \\[4pt] \dfrac{\partial f}{\partial y} \\[4pt] \dfrac{\partial f}{\partial z} \end{bmatrix} = \frac{1}{\sin\theta} \begin{bmatrix} \sin^2\theta\cos\phi & \sin\theta\cos\theta\cos\phi & -\sin\phi \\ \sin^2\theta\sin\phi & \sin\theta\cos\theta\sin\phi & \cos\phi \\ \sin\theta\cos\theta & -\sin^2\theta & 0 \end{bmatrix} \begin{bmatrix} \dfrac{\partial\hat{f}}{\partial r} \\[4pt] \dfrac{1}{r}\dfrac{\partial\hat{f}}{\partial\theta} \\[4pt] \dfrac{1}{r}\dfrac{\partial\hat{f}}{\partial\phi} \end{bmatrix}. \quad (13) $$

These latter relations will be used in our proofs below. They could be obtained "directly", but the matrix inversion route is easier. We have, in particular:

$$ \frac{\partial\hat{f}}{\partial r} = \frac{1}{r}\left(x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}+z\frac{\partial f}{\partial z}\right), $$

$$ \frac{\partial\hat{f}}{\partial\phi} = x\frac{\partial f}{\partial y} - y\frac{\partial f}{\partial x}. $$

There is no such nice relation for $\partial\hat{f}/\partial\theta$.
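These chain-rule relations, and the value of $|JT|$, can be verified symbolically. The sketch below is an illustration, not from the paper; the concrete test function $f$ is an arbitrary choice.

```python
# Symbolic spot check of |JT| = r^2 sin(theta) and of the relations
#   r d(fhat)/dr   = x f_x + y f_y + z f_z,
#   d(fhat)/dphi   = x f_y - y f_x,
# tested on an arbitrary concrete function f(X, Y, Z).
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
X, Y, Z = sp.symbols('X Y Z')
x, y, z = r*sp.sin(th)*sp.cos(ph), r*sp.sin(th)*sp.sin(ph), r*sp.cos(th)

# Jacobian determinant of the transformation T
J = sp.Matrix([x, y, z]).jacobian(sp.Matrix([r, th, ph]))
detJ = sp.simplify(J.det())

f = sp.exp(X*Y) * sp.sin(Z) + X**2 * Z        # arbitrary test function
fx, fy, fz = sp.diff(f, X), sp.diff(f, Y), sp.diff(f, Z)
sub = {X: x, Y: y, Z: z}
fhat = f.subs(sub)

diff_r  = sp.simplify(r*sp.diff(fhat, r)  - (x*fx + y*fy + z*fz).subs(sub))
diff_ph = sp.simplify(sp.diff(fhat, ph) - (x*fy - y*fx).subs(sub))
print(detJ, diff_r, diff_ph)  # r**2 * sin(theta), then 0, 0
```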

Remark 2: If the action of a time-dependent source field $f(x,y,z,t)$ is delayed or advanced in time, the associated delayed/advanced function $\tilde{f}(r,\theta,\phi,t)$ is defined as follows:

$$ \tilde{f}(r,\theta,\phi,t) = f\!\left(r\sin\theta\cos\phi,\ r\sin\theta\sin\phi,\ r\cos\theta,\ t-\frac{r}{v}\right), \quad (14) $$

where $v$ is a "speed" parameter; $v>0$ for delayed action, and $v<0$ for advanced action. We then have:

$$ \frac{\partial\tilde{f}}{\partial r} = \frac{1}{r}\left(x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}+z\frac{\partial f}{\partial z}\right) - \frac{1}{v}\frac{\partial f}{\partial t}, \quad (15) $$

and so, in Equation (13) above, we have, in the column on the right-hand side, $\left(\frac{\partial}{\partial r}+\frac{1}{v}\frac{\partial}{\partial t}\right)$ instead of $\frac{\partial}{\partial r}$. All the spatial derivatives are evaluated at the retarded time argument. This simple modification leads to simple changes in the results below. Note that $\frac{\partial\tilde{f}}{\partial t} = \frac{\partial f}{\partial t}$. We could, of course, introduce a modified operator:

$$ \left(x\frac{\partial}{\partial x}+y\frac{\partial}{\partial y}+z\frac{\partial}{\partial z}\right) - \frac{r}{v}\frac{\partial}{\partial t}. $$

3. The Basic Results for 3-Dimensional Scalar Fields

3.1. The Basic Result

We are now ready to prove a special case of the basic result (1) mentioned in the Introduction.

Theorem 1 (basic result for the new operator): If $f:\mathbb{R}^3\to\mathbb{R}$ has continuous first-order partial derivatives, and $\lim_{r\to\infty} f(r\sin\theta\cos\phi, r\sin\theta\sin\phi, r\cos\theta)=0$, uniformly for all $0\le\theta\le\pi$, $0\le\phi\le 2\pi$, where $r=\sqrt{x^2+y^2+z^2}$, then:

$$ \int_{\mathbb{R}^3} \frac{x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}+z\frac{\partial f}{\partial z}}{(x^2+y^2+z^2)^{3/2}}\,dx\,dy\,dz = -4\pi f(0,0,0), \quad (16) $$

where the numerator of the integrand is to be evaluated at $(x,y,z)$. The integrand is undefined at $(0,0,0)$.

Remark 3: The integral is, of course, an "improper" integral, and is to be understood as the limit of a "definite" integral extended over the compact set $\{(x,y,z): 0<\varepsilon\le r\le R\}$ as $\varepsilon\to 0$ and $R\to\infty$. We need the $\varepsilon$ since the integrand is not well-defined at $(0,0,0)$.

Proof: The "change of variables formula for multidimensional integrals" [14] tells us that for a function $f$:

$$ \int_{\mathbb{R}^3} f = \int_{\mathbb{R}^3_{sph}} (f\circ T)\,|JT| = \int_{\mathbb{R}^3_{sph}} \hat{f}\, r^2\sin\theta, $$

since $|JT|(r,\theta,\phi) = r^2\sin\theta$. Now $r\,\frac{\partial\hat{f}}{\partial r} = x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}+z\frac{\partial f}{\partial z}$, so that we have:

$$ \begin{aligned} \int_{\mathbb{R}^3} \frac{x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}+z\frac{\partial f}{\partial z}}{(x^2+y^2+z^2)^{3/2}}\,dx\,dy\,dz &= \lim_{\varepsilon\to 0,\, R\to\infty} \int_{r=\varepsilon}^{R}\int_{\theta=0}^{\pi}\int_{\phi=0}^{2\pi} \frac{1}{r^3}\left(r\frac{\partial\hat{f}}{\partial r}\right) r^2\sin\theta\,d\phi\,d\theta\,dr \\ &= \lim_{\varepsilon\to 0,\, R\to\infty} \int_{\theta=0}^{\pi}\int_{\phi=0}^{2\pi} \sin\theta \left[\int_{r=\varepsilon}^{R} \frac{\partial\hat{f}}{\partial r}\,dr\right] d\phi\,d\theta \\ &= \lim_{\varepsilon\to 0,\, R\to\infty} \int_{\theta=0}^{\pi}\int_{\phi=0}^{2\pi} \sin\theta\,\Big[\hat{f}(r,\theta,\phi)\Big]_{r=\varepsilon}^{r=R} d\phi\,d\theta \\ &= \int_{\theta=0}^{\pi}\int_{\phi=0}^{2\pi} \sin\theta\,\big[0 - f(0,0,0)\big]\,d\phi\,d\theta \\ &= -f(0,0,0)\int_{\phi=0}^{2\pi}\left(\int_{\theta=0}^{\pi}\sin\theta\,d\theta\right) d\phi = -f(0,0,0)\times 2\times 2\pi = -4\pi f(0,0,0). \end{aligned} $$

We have used the notation $\big[\hat{f}(r,\theta,\phi)\big]_{r=0}^{r=\infty}$ to denote the difference $\hat{f}(\infty,\theta,\phi)-\hat{f}(0,\theta,\phi)$. In the above sequence of calculations, we have changed a 3-dimensional integral into a succession of integrals and changed the order of integration, which is permissible because all the intervals of integration are finite "intervals". We will skip many intermediate steps in later derivations.

Remark 4: In the computations above we see the advantages of the spherical-polar coordinates over the rectangular. The multiple integral is reduced to iterated integrals. The improperness of the integral can be handled with limits on only one variable, namely, r. One can see also how the number π makes its appearance in the formula. Of course, unlike x , y , z , there is no symmetry between r , θ , ϕ .

Remark 5: Our Theorem 1 can be compared with the familiar result involving the Laplacian operator $\nabla^2$, namely, $\int_{\mathbb{R}^3}\frac{1}{r}\,\nabla^2 f\,dV = -4\pi f(0,0,0)$, which holds under a stronger regularity condition, namely, $\lim_{r\to\infty} r^2 f(r\sin\theta\cos\phi, r\sin\theta\sin\phi, r\cos\theta)=0$, uniformly for all $\theta,\phi$. Obviously, the Laplacian result assumes that $f$ is twice-differentiable, whereas our Theorem 1 requires $f$ to be only once-differentiable.

Remark 6: Our basic result can be interpreted as saying that the differential operator, or gradient, has an inverse which is an integral operator, a result similar to the "Fundamental Theorem of the Calculus of One Variable". It is also a little surprising that the integral operator involves integration over all space, whereas one can obtain the difference between the values of the function $f$ at two points as a line integral of the gradient:

$$ f(Q) - f(P) = \int_{P}^{Q} \nabla f\cdot d\mathbf{s}. $$

It would be interesting to relate the volume and line integrals.
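The line-integral counterpart is easy to check numerically; the sketch below (an illustration, not from the paper) uses an arbitrary Gaussian $f$ and an arbitrary straight segment from $P$ to $Q$:

```python
# Numerical check of  f(Q) - f(P) = ∫_P^Q ∇f · ds  along a straight segment.
# The function f and the endpoints P, Q are arbitrary choices.
import numpy as np
from scipy.integrate import quad

def f(p):
    x, y, z = p
    return np.exp(-(x**2 + y**2 + z**2))

def grad_f(p):
    x, y, z = p
    e = np.exp(-(x**2 + y**2 + z**2))
    return np.array([-2*x*e, -2*y*e, -2*z*e])

P = np.array([0.2, -0.1, 0.3])
Q = np.array([1.0, 0.5, -0.4])

def integrand(t):
    # gamma(t) = P + t (Q - P), so gamma'(t) = Q - P
    return grad_f(P + t*(Q - P)) @ (Q - P)

line, _ = quad(integrand, 0.0, 1.0)
print(line, f(Q) - f(P))  # the two numbers agree
```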

We have a Green-like identity, which we state as a Corollary, involving two functions $f$ and $h$:

Corollary 1. With similar assumptions about $f$ and $h$,

$$ \int_{\mathbb{R}^3} h\,\frac{x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}+z\frac{\partial f}{\partial z}}{(x^2+y^2+z^2)^{3/2}}\,dx\,dy\,dz + \int_{\mathbb{R}^3} f\,\frac{x\frac{\partial h}{\partial x}+y\frac{\partial h}{\partial y}+z\frac{\partial h}{\partial z}}{(x^2+y^2+z^2)^{3/2}}\,dx\,dy\,dz = -4\pi f(0,0,0)\,h(0,0,0). $$

Proof: We simply use the product (Leibniz) rule for our operator, namely:

$$ \left(x\frac{\partial}{\partial x}+y\frac{\partial}{\partial y}+z\frac{\partial}{\partial z}\right)(fh) = h\left(x\frac{\partial}{\partial x}+y\frac{\partial}{\partial y}+z\frac{\partial}{\partial z}\right) f + f\left(x\frac{\partial}{\partial x}+y\frac{\partial}{\partial y}+z\frac{\partial}{\partial z}\right) h. $$

Remark 7: We could put a multiplier $\psi(r)$, say, with the integrand, to enable us to treat modifications of the inverse-square law of force such as are involved in the Yukawa potential. We then have the following theorem, which can be proved along the lines of the proof of Theorem 1.

Theorem 2:

$$ \int_{\mathbb{R}^3} \psi(r)\left(x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}+z\frac{\partial f}{\partial z}\right) dx\,dy\,dz = 4\pi(\alpha-\beta) - \int_{\mathbb{R}^3}\left(r\psi'(r)+3\psi(r)\right) f(x,y,z)\,dx\,dy\,dz, $$

where $\alpha = \lim_{r\to\infty} r^3\psi(r)\hat{f}(r,\theta,\phi)$ and $\beta = \lim_{r\to 0} r^3\psi(r)\hat{f}(r,\theta,\phi)$.

Choosing $\psi(r)=\frac{1}{r^3}$, we obtain Theorem 1 as a special case, if $\lim_{r\to\infty}\hat{f}(r,\theta,\phi)=0$. We can see that this is the only choice of $\psi(r)$ for which Theorem 1 will hold.

Choosing $\psi(r)=\frac{1}{r}$, we obtain, if $\lim_{r\to\infty} r^2\hat{f}(r,\theta,\phi)=0$:

$$ \int_{\mathbb{R}^3} \frac{x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}+z\frac{\partial f}{\partial z}}{(x^2+y^2+z^2)^{1/2}}\,dx\,dy\,dz = -\int_{\mathbb{R}^3}\frac{2}{r}\,f(x,y,z)\,dx\,dy\,dz. $$

Finally, choosing $\psi(r)=\frac{1}{r^2}$, we obtain, if $\lim_{r\to\infty} r\hat{f}(r,\theta,\phi)=0$:

$$ \int_{\mathbb{R}^3} \frac{x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}+z\frac{\partial f}{\partial z}}{x^2+y^2+z^2}\,dx\,dy\,dz = -\int_{\mathbb{R}^3}\frac{1}{r^2}\,f(x,y,z)\,dx\,dy\,dz. $$
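For a radial function $f(r)$, the $\psi(r)=\frac{1}{r}$ case above reduces to a 1-dimensional identity, since the angular integration contributes the same factor $4\pi$ to both sides, leaving $\int_0^\infty r^2 f'(r)\,dr = -2\int_0^\infty r f(r)\,dr$. A quick numerical check of this reduction, with an arbitrary Gaussian (not from the paper):

```python
# Radial reduction of the psi(r) = 1/r case of Theorem 2 for a radial f:
# both sides carry the same angular factor 4*pi, leaving
#   ∫_0^∞ r^2 f'(r) dr = -2 ∫_0^∞ r f(r) dr.
# The Gaussian f(r) = exp(-r^2) is an arbitrary choice.
import numpy as np
from scipy.integrate import quad

f = lambda r: np.exp(-r**2)
fprime = lambda r: -2 * r * np.exp(-r**2)

lhs, _ = quad(lambda r: r**2 * fprime(r), 0, np.inf)
rhs, _ = quad(lambda r: -2 * r * f(r), 0, np.inf)
print(lhs, rhs)  # both equal -1 for this f
```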

3.2. Basic Results for $\mathbb{R}$, $\mathbb{R}^2$, and $\mathbb{R}^4$

Interestingly, we have results similar to Theorem 1 in one, two, and even four independent variables. The result for one variable is easy to see, namely:

Theorem 1 ($\mathbb{R}$):

$$ \int_{-\infty}^{+\infty} \frac{1}{|x|}\left(x\frac{df}{dx}\right) dx = -2 f(0), $$

since $\frac{x}{|x|}=\operatorname{sgn}(x)$, and $\int_{-\infty}^{\infty}\operatorname{sgn}(x)f'(x)\,dx = \big(f(\infty)-f(0)\big)-\big(f(0)-f(-\infty)\big) = -2f(0)$ when $f$ vanishes at $\pm\infty$.

We next have a similar result in two variables, namely:

Theorem 1 ($\mathbb{R}^2$):

$$ \int_{\mathbb{R}^2} \frac{x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}}{r^2}\,dx\,dy = -2\pi f(0,0). $$

Proof: The usual rectangular-to-polar transformation has, for its Jacobian determinant $|JT|$, the value $r$, and so we need only $r^2$ in the denominator of the integrand. The rest of the calculations proceed as in the proof of Theorem 1. Note that we have a multiplier $-2\pi$ for $f(0,0)$.

Perhaps this can be used to derive new results for the plane.

To prove a similar result for four variables, we need an unusual coordinate transformation which, however, has the desired features of the usual rectangular-to-spherical transformation in two and three variables. We note the following simple fact:

$$ x_1^2+x_2^2+x_3^2+x_4^2 = \left(\sqrt{x_1^2+x_2^2}\right)^2 + \left(\sqrt{x_3^2+x_4^2}\right)^2, $$

which suggests the transformation:

$$ x_1 = r\cos\theta\cos\psi_1,\quad x_2 = r\cos\theta\sin\psi_1,\quad x_3 = r\sin\theta\cos\psi_2,\quad x_4 = r\sin\theta\sin\psi_2, $$

with $r=\sqrt{x_1^2+x_2^2+x_3^2+x_4^2}$, $0\le r$, $0\le\theta\le\frac{\pi}{2}$, $0\le\psi_1\le 2\pi$, $0\le\psi_2\le 2\pi$, and $|JT| = r^3\sin\theta\cos\theta$.
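The Jacobian claim can be verified symbolically; a minimal sketch (not from the paper), comparing squares to sidestep the orientation sign of the variable ordering:

```python
# Symbolic check that the "double-polar" transformation of R^4 has
# |JT| = r^3 sin(theta) cos(theta). We compare squares, since the sign of
# the determinant depends on the ordering of the coordinates.
import sympy as sp

r, th, p1, p2 = sp.symbols('r theta psi1 psi2', positive=True)
x1 = r*sp.cos(th)*sp.cos(p1)
x2 = r*sp.cos(th)*sp.sin(p1)
x3 = r*sp.sin(th)*sp.cos(p2)
x4 = r*sp.sin(th)*sp.sin(p2)

J = sp.Matrix([x1, x2, x3, x4]).jacobian(sp.Matrix([r, th, p1, p2]))
d = sp.simplify(J.det())
target = r**3 * sp.sin(th) * sp.cos(th)
print(sp.simplify(d**2 - target**2))  # 0, i.e. |det JT| = r^3 sin(theta) cos(theta)
```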

We then have:

Theorem 1 ($\mathbb{R}^4$):

$$ \int_{\mathbb{R}^4} \frac{x_1\frac{\partial f}{\partial x_1}+x_2\frac{\partial f}{\partial x_2}+x_3\frac{\partial f}{\partial x_3}+x_4\frac{\partial f}{\partial x_4}}{r^4}\,dx_1\,dx_2\,dx_3\,dx_4 = -2\pi^2 f(0,0,0,0). $$

Proof: Note that we have $r^4$ in the denominator and the multiplier of $f(0,0,0,0)$ is $-2\pi^2$. We use the fact that:

$$ r\,\frac{\partial\hat{f}}{\partial r} = x_1\frac{\partial f}{\partial x_1}+x_2\frac{\partial f}{\partial x_2}+x_3\frac{\partial f}{\partial x_3}+x_4\frac{\partial f}{\partial x_4}. $$

Perhaps this can be used to derive new results for space-time.

3.3. Extensions of the Basic Result

Next, using the slightly modified expression for $\partial\tilde{f}/\partial r$ noted above for delayed/advanced action, we immediately have:

Theorem 3 (Theorem 1 with delayed/advanced action):

$$ \begin{aligned} \int_{\mathbb{R}^3} \frac{x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}+z\frac{\partial f}{\partial z}}{(x^2+y^2+z^2)^{3/2}}\,dx\,dy\,dz &= -4\pi f(0,0,0,t) + \frac{1}{v}\int_{\mathbb{R}^3} \frac{\frac{\partial f}{\partial t}\!\left(x,y,z,t-\frac{r}{v}\right)}{x^2+y^2+z^2}\,dx\,dy\,dz \\ &= -4\pi f(0,0,0,t) + \frac{1}{v}\frac{\partial}{\partial t}\int_{\mathbb{R}^3} \frac{f\!\left(x,y,z,t-\frac{r}{v}\right)}{x^2+y^2+z^2}\,dx\,dy\,dz, \end{aligned} $$

where the spatial derivatives in the numerator are evaluated at $\left(x,y,z,t-\frac{r}{v}\right)$.

Thus, we can have a recovery of a function through partial derivatives evaluated at a retarded time argument, if we wish. We can easily prove a generalization of Theorem 1 to obtain two slightly different formulas for the value of f at points other than the origin.

Theorem 4 (Theorem 1 for a general point $(a,b,c)$): Under the same assumptions as those of Theorem 1, if $(a,b,c)\in\mathbb{R}^3$, then:

1)

$$ \int_{\mathbb{R}^3} \frac{x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}+z\frac{\partial f}{\partial z}}{(x^2+y^2+z^2)^{3/2}}\,dx\,dy\,dz = -4\pi f(a,b,c), \quad (17) $$

where the partial derivatives are evaluated at $(x+a, y+b, z+c)$, and also:

2)

$$ \int_{\mathbb{R}^3} \frac{(x-a)\frac{\partial f}{\partial x}+(y-b)\frac{\partial f}{\partial y}+(z-c)\frac{\partial f}{\partial z}}{\left[(x-a)^2+(y-b)^2+(z-c)^2\right]^{3/2}}\,dx\,dy\,dz = -4\pi f(a,b,c), \quad (18) $$

where the partial derivatives are evaluated at $(x,y,z)$.

Note that in (18) above, a slightly different operator dependent on $(a,b,c)$, namely, $\left[(x-a)\frac{\partial}{\partial x}+(y-b)\frac{\partial}{\partial y}+(z-c)\frac{\partial}{\partial z}\right]$, appears. The form of the integral in (18) is convenient for interpretation and computation, whereas the form in (17) is useful for derivations where the integral needs to be differentiated with respect to $a,b,c$, which appear as parameters.

Proof: Define a related function $\bar{f}$ by:

$$ \bar{f}(x,y,z) = f(x+a, y+b, z+c), $$

so that:

$$ \bar{f}(0,0,0) = f(a,b,c). $$

Applying Theorem 1 to $\bar{f}$, we get:

$$ \int_{\mathbb{R}^3} \frac{1}{r^3}\left(x\frac{\partial\bar{f}}{\partial x}+y\frac{\partial\bar{f}}{\partial y}+z\frac{\partial\bar{f}}{\partial z}\right) dV = -4\pi f(a,b,c), $$

where $r=\sqrt{x^2+y^2+z^2}$ and the partial derivatives are evaluated at $(x,y,z)$. Further,

$$ \frac{\partial\bar{f}}{\partial x}(x,y,z) = \frac{\partial f}{\partial x}(x+a, y+b, z+c), $$

keeping in view the definition of $\bar{f}$. Similar relations hold for the other two partial derivatives.

Remark 8: The function $\bar{f}$ is a translation of the function $f$. In his calculation of the derivative of a potential function, Gauss used this idea to show that the potential of a "mass distribution" at any point has the same value as the potential at the origin (or any chosen reference point) of a suitably translated distribution. We could prove the result above by using a translation of the rectangular coordinates, i.e., by changing the origin. Another approach would be to use a non-standard spherical-polar to rectangular coordinate transformation that we will use later on. The transformation, denoted by $T_{a,b,c}$, is defined by:

$$ x = a + r\sin\theta\cos\phi, \quad y = b + r\sin\theta\sin\phi, \quad z = c + r\cos\theta, $$

so that:

$$ r = \sqrt{(x-a)^2+(y-b)^2+(z-c)^2}. $$

3.4. A Basic Result for Related Operators

We state a result for the operator $\left(x\frac{\partial}{\partial y}-y\frac{\partial}{\partial x}\right)$ that follows from the fact:

$$ \frac{\partial\hat{f}}{\partial\phi} = x\frac{\partial f}{\partial y} - y\frac{\partial f}{\partial x}. \quad (19) $$

Result: Under the conditions of Theorem 1,

$$ \int_{\mathbb{R}^3} \frac{1}{r^3}\left(x\frac{\partial f}{\partial y}-y\frac{\partial f}{\partial x}\right) dV = 0. \quad (20) $$

Proof: The integral is equal to:

$$ \int_{\mathbb{R}^3_{sph}} \frac{1}{r^3}\,\frac{\partial\hat{f}}{\partial\phi}\, r^2\sin\theta\,dr\,d\theta\,d\phi = \int \frac{\sin\theta}{r}\left(\int_{0}^{2\pi}\frac{\partial\hat{f}}{\partial\phi}\,d\phi\right) dr\,d\theta = \int \frac{\sin\theta}{r}\left[\hat{f}(r,\theta,2\pi)-\hat{f}(r,\theta,0)\right] dr\,d\theta = 0. $$

Remark 9: One can guess two more results like the one above, namely:

$$ \int_{\mathbb{R}^3} \frac{1}{r^3}\left(y\frac{\partial f}{\partial z}-z\frac{\partial f}{\partial y}\right) dV = 0, \quad (21) $$

$$ \int_{\mathbb{R}^3} \frac{1}{r^3}\left(z\frac{\partial f}{\partial x}-x\frac{\partial f}{\partial z}\right) dV = 0. \quad (22) $$

To prove these, we could, once again, invoke a change of variables, this time a permutation of the variables from $x,y,z$ to, say, $z,y,x$, i.e., new variables $x',y',z'$ such that $x'=z$, $y'=y$, $z'=x$, and a related function $f'$, use the result proved above to get:

$$ \int_{\mathbb{R}^3} \frac{1}{r^3}\left(x'\frac{\partial f'}{\partial y'}-y'\frac{\partial f'}{\partial x'}\right) dx'\,dy'\,dz' = 0, $$

and finally, appeal to the "dummy variables" idea to get:

$$ \int_{\mathbb{R}^3} \frac{1}{r^3}\left(z\frac{\partial f}{\partial y}-y\frac{\partial f}{\partial z}\right) dx\,dy\,dz = 0. $$

A better approach would be to use yet another spherical-polar to rectangular coordinate transformation, namely:

$$ x = r\cos\theta, \quad y = r\sin\theta\sin\phi, \quad z = r\sin\theta\cos\phi, $$

and then proceed as in the proof above to derive:

$$ \int_{\mathbb{R}^3} \frac{1}{r^3}\left(z\frac{\partial f}{\partial y}-y\frac{\partial f}{\partial z}\right) dV = 0. $$

Indeed, the usual definitions of $\theta$ and $\phi$ come from spherical astronomy, where one talks about "declination" and "azimuth" angles, the zenith being in the z-direction.

We can prove the same result using the standard spherical-polar coordinates, as follows:

$$ \begin{aligned} &\int_{\mathbb{R}^3} \frac{1}{r^3}\left(z\frac{\partial f}{\partial y}-y\frac{\partial f}{\partial z}\right) dV \\ &\quad = \int_{\mathbb{R}^3_{sph}} \frac{1}{r^3}\,\frac{1}{\sin\theta}\left[ r\cos\theta\left(\sin^2\theta\sin\phi\,\frac{\partial\hat{f}}{\partial r} + \sin\theta\cos\theta\sin\phi\,\frac{1}{r}\frac{\partial\hat{f}}{\partial\theta} + \cos\phi\,\frac{1}{r}\frac{\partial\hat{f}}{\partial\phi}\right) \right. \\ &\qquad\qquad \left. -\ r\sin\theta\sin\phi\left(\sin\theta\cos\theta\,\frac{\partial\hat{f}}{\partial r} - \sin^2\theta\,\frac{1}{r}\frac{\partial\hat{f}}{\partial\theta}\right)\right] r^2\sin\theta\,dr\,d\theta\,d\phi \\ &\quad = \int_{r=0}^{\infty}\int_{\theta=0}^{\pi}\int_{\phi=0}^{2\pi} \frac{1}{r}\left(\sin\theta\sin\phi\,\frac{\partial\hat{f}}{\partial\theta} + \cos\theta\cos\phi\,\frac{\partial\hat{f}}{\partial\phi}\right) d\phi\,d\theta\,dr \\ &\quad = \int_{r=0}^{\infty} \frac{1}{r}\left[\int_{\phi=0}^{2\pi}\left(\int_{\theta=0}^{\pi}\sin\theta\sin\phi\,\frac{\partial\hat{f}}{\partial\theta}\,d\theta\right) d\phi + \int_{\theta=0}^{\pi}\left(\int_{\phi=0}^{2\pi}\cos\theta\cos\phi\,\frac{\partial\hat{f}}{\partial\phi}\,d\phi\right) d\theta\right] dr \\ &\quad = \int_{r=0}^{\infty} \frac{1}{r}\left[\int_{\phi=0}^{2\pi}\left(\Big[\hat{f}\sin\theta\sin\phi\Big]_{\theta=0}^{\theta=\pi} - \int_{\theta=0}^{\pi}\hat{f}\cos\theta\sin\phi\,d\theta\right) d\phi \right. \\ &\qquad\qquad \left. +\ \int_{\theta=0}^{\pi}\left(\Big[\hat{f}\cos\theta\cos\phi\Big]_{\phi=0}^{\phi=2\pi} + \int_{\phi=0}^{2\pi}\hat{f}\cos\theta\sin\phi\,d\phi\right) d\theta\right] dr = 0. \end{aligned} $$

We collect these three results together as a theorem:

Theorem 5 (basic result for related operators): Under the assumption that $f$ has continuous first-order partial derivatives,

$$ \int_{\mathbb{R}^3} \frac{1}{r^3}\left(x\frac{\partial f}{\partial y}-y\frac{\partial f}{\partial x}\right) dV = 0, \quad (23) $$

$$ \int_{\mathbb{R}^3} \frac{1}{r^3}\left(y\frac{\partial f}{\partial z}-z\frac{\partial f}{\partial y}\right) dV = 0, \quad (24) $$

$$ \int_{\mathbb{R}^3} \frac{1}{r^3}\left(z\frac{\partial f}{\partial x}-x\frac{\partial f}{\partial z}\right) dV = 0. \quad (25) $$

Remark 10: These three results show that the three first-order partial derivatives of f are not totally “independent” of one another, even though second-order partial derivatives may not exist. Of course, we do assume that the first-order derivatives are continuous. Also, like Theorem 1, we will have two versions of these results. They are crucial in our derivation of the new alternative to Poisson’s Formula. They do not seem to have been noted before.

We can combine the three equations into a single vector equation:

$$ \int_{\mathbb{R}^3} \frac{\mathbf{r}\times\nabla f}{r^3}\,dV = (0,0,0) = \mathbf{0}. \quad (26) $$

In contrast, we could write our basic result as:

$$ \int_{\mathbb{R}^3} \frac{\mathbf{r}\cdot\nabla f}{r^3}\,dV = -4\pi f(0,0,0). \quad (27) $$

3.5. Basic Results for Bounded and Unbounded Regions

Theorem 1 above involved a volume integral extended over the whole space. We now prove a theorem that involves a volume integral extended over a bounded region enclosed by a closed "surface", and a surface integral. But we will use a new definition of a surface in spherical-polar coordinates (our surface can be projected onto the surface of a sphere; the usual definition of a surface by a function like $z=f(x,y)$ implies that it can be projected onto a plane).

Let $S$ be a positive real-valued function of two variables $\theta$ and $\phi$, with $0\le\theta\le\pi$ and $0\le\phi\le 2\pi$, bounded away from $0$, i.e., $S(\theta,\phi)>\varepsilon$ for some $\varepsilon>0$; we mean by the surface $S$ the set:

$$ S_{sph} = \{(r,\theta,\phi): r=S(\theta,\phi),\ 0\le\theta\le\pi,\ 0\le\phi\le 2\pi\} $$

in spherical coordinates, and the set:

$$ S_{rect} = \{(r\sin\theta\cos\phi,\ r\sin\theta\sin\phi,\ r\cos\theta): r=S(\theta,\phi),\ 0\le\theta\le\pi,\ 0\le\phi\le 2\pi\} $$

in rectangular coordinates.

By the region $V$ bounded by the surface $S$ we mean the set:

$$ V_{sph} = \{(r,\theta,\phi): 0\le r\le S(\theta,\phi),\ 0\le\theta\le\pi,\ 0\le\phi\le 2\pi\} $$

in spherical coordinates, and the set:

$$ V_{rect} = \{(r\sin\theta\cos\phi,\ r\sin\theta\sin\phi,\ r\cos\theta): 0\le r\le S(\theta,\phi),\ 0\le\theta\le\pi,\ 0\le\phi\le 2\pi\} $$

in rectangular coordinates. Note that the origin is an interior point of $V$ and that each ray from the origin meets the surface in only one point.

Theorem 6 (basic result for a bounded region): Let $S$ be a surface such that $S$ has continuous first-order partial derivatives $\frac{\partial S}{\partial\theta}$ and $\frac{\partial S}{\partial\phi}$, let $V$ be the region bounded by $S$, and let $f:\mathbb{R}^3\to\mathbb{R}$ be a function with continuous first-order partial derivatives in $V$. Then:

$$ f(0,0,0) = -\frac{1}{4\pi}\int_{V_{rect}} \frac{1}{r^3}\left(x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}+z\frac{\partial f}{\partial z}\right) dV + \frac{1}{4\pi}\int_{\theta=0}^{\pi}\int_{\phi=0}^{2\pi} \hat{f}_S(\theta,\phi)\sin\theta\,d\phi\,d\theta, \quad (28) $$

where we denote by $\hat{f}_S(\theta,\phi)$ the value of the function at a surface point, namely,

$$ f\big(S(\theta,\phi)\sin\theta\cos\phi,\ S(\theta,\phi)\sin\theta\sin\phi,\ S(\theta,\phi)\cos\theta\big). $$

Proof: We have:

$$ \begin{aligned} \int_{V_{rect}} \frac{1}{r^3}\left(x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}+z\frac{\partial f}{\partial z}\right) dV &= \int_{V_{sph}} \frac{1}{r^3}\left(r\frac{\partial\hat{f}}{\partial r}\right) r^2\sin\theta\,dr\,d\theta\,d\phi = \int_{V_{sph}} \frac{\partial\hat{f}}{\partial r}\sin\theta\,dr\,d\theta\,d\phi \\ &= \int_{\theta=0}^{\pi}\int_{\phi=0}^{2\pi}\left(\int_{r=0}^{S(\theta,\phi)}\frac{\partial\hat{f}}{\partial r}\,dr\right)\sin\theta\,d\phi\,d\theta \\ &= \int_{\theta=0}^{\pi}\int_{\phi=0}^{2\pi}\left[\hat{f}(S(\theta,\phi),\theta,\phi)-\hat{f}(0,\theta,\phi)\right]\sin\theta\,d\phi\,d\theta \\ &= \int_{\theta=0}^{\pi}\int_{\phi=0}^{2\pi} \hat{f}_S(\theta,\phi)\sin\theta\,d\phi\,d\theta - 4\pi f(0,0,0). \end{aligned} $$

Remark 11: The theorem can be interpreted as follows. Given a region bounded by a surface, the value of a once-differentiable scalar function (field) at an interior point is uniquely determined by the values of the function on the surface and the values of its partial derivatives in the region. It thus gives a “formula” for determining the value at an interior point. Note that the 2-dimensional integral above is not the usual surface integral. Further, instead of the whole region V bounded by the surface S, we could consider a cone with vertex at the origin and terminating on the surface and get a partial surface integral. We could then obtain an expression for the value of the function at a point outside the surface.

Remark 12: With an appropriate definition of a "vector element of surface area" $d\mathbf{S}$, we can write the 2-dimensional integral on the right-hand side above as a "surface integral":

$$ \frac{1}{4\pi}\int_{S} f\,\frac{\mathbf{r}\cdot d\mathbf{S}}{r^3}. \quad (29) $$

The vector element $d\mathbf{S}$ is also written as $dS\,\hat{\mathbf{n}}$, where $dS$ is the "magnitude" of the surface element and $\hat{\mathbf{n}}$ is the unit normal vector. However, the "surface integral" is harder to visualize and calculate.

Proof: $S$ is not a spherical surface in general, so we have to define what we mean by the magnitude $dS$ of the surface element and the unit normal $\hat{\mathbf{n}}$ to it. It turns out to be easier to define the vector surface element $d\mathbf{S}$ directly (it will be used later in our proof of the Divergence Theorem with our definition of surface).

Consider a "quadrilateral" $ABCD$, with the four corners determined by four pairs of $(\theta,\phi)$ values. Thus, let $A$ be the point $(S(\theta,\phi),\theta,\phi)$; $B$: $(S(\theta,\phi+\delta\phi),\theta,\phi+\delta\phi)$; $C$: $(S(\theta+\delta\theta,\phi+\delta\phi),\theta+\delta\theta,\phi+\delta\phi)$; $D$: $(S(\theta+\delta\theta,\phi),\theta+\delta\theta,\phi)$. The $r$ coordinates are given by the values of the function defining the surface.

We next calculate the first-order approximations to the vectors $\overrightarrow{AB}$ and $\overrightarrow{AD}$. In rectangular coordinates these are:

$$ \overrightarrow{AB} = \left(\frac{\partial S}{\partial\phi}\sin\theta\cos\phi - S(\theta,\phi)\sin\theta\sin\phi,\ \frac{\partial S}{\partial\phi}\sin\theta\sin\phi + S(\theta,\phi)\sin\theta\cos\phi,\ \frac{\partial S}{\partial\phi}\cos\theta\right) d\phi, $$

$$ \overrightarrow{AD} = \left(\frac{\partial S}{\partial\theta}\sin\theta\cos\phi + S(\theta,\phi)\cos\theta\cos\phi,\ \frac{\partial S}{\partial\theta}\sin\theta\sin\phi + S(\theta,\phi)\cos\theta\sin\phi,\ \frac{\partial S}{\partial\theta}\cos\theta - S(\theta,\phi)\sin\theta\right) d\theta. $$

We then define the vector surface element $d\mathbf{S}$ as:

$$ d\mathbf{S} = \overrightarrow{AD}\times\overrightarrow{AB}, $$

which turns out to be, in rectangular coordinates:

$$ d\theta\,d\phi\,(\alpha,\beta,\gamma), $$

where, writing $S$ in place of $S(\theta,\phi)$ for easy readability:

$$ \alpha = S\sin\phi\,\frac{\partial S}{\partial\phi} - S\sin\theta\cos\theta\cos\phi\,\frac{\partial S}{\partial\theta} + S^2\sin^2\theta\cos\phi, $$

$$ \beta = -S\cos\phi\,\frac{\partial S}{\partial\phi} - S\sin\theta\cos\theta\sin\phi\,\frac{\partial S}{\partial\theta} + S^2\sin^2\theta\sin\phi, $$

$$ \gamma = S\sin^2\theta\,\frac{\partial S}{\partial\theta} + S^2\sin\theta\cos\theta, $$

and if $\mathbf{r}$ denotes the position vector of the point $A$, we have:

$$ \mathbf{r}\cdot d\mathbf{S} = \left(S(\theta,\phi)\right)^3\sin\theta\,d\theta\,d\phi, $$

hence the desired result.
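The key cancellation in $\mathbf{r}\cdot d\mathbf{S}$ can be verified symbolically for a generic surface function; a minimal sketch (not from the paper):

```python
# Symbolic verification that r · dS = S^3 sin(theta) dtheta dphi for the
# surface element dS = AD x AB constructed above, with S a generic function
# of theta and phi.
import sympy as sp

th, ph = sp.symbols('theta phi')
S = sp.Function('S')(th, ph)

pos = sp.Matrix([S*sp.sin(th)*sp.cos(ph),
                 S*sp.sin(th)*sp.sin(ph),
                 S*sp.cos(th)])
AD = pos.diff(th)   # tangent vector along theta (per unit dtheta)
AB = pos.diff(ph)   # tangent vector along phi (per unit dphi)
dS = AD.cross(AB)

res = sp.simplify(pos.dot(dS))
print(res)  # the derivative terms cancel, leaving S(theta, phi)**3 * sin(theta)
```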

By carrying out the integration from $r=S(\theta,\phi)$ to infinity, we see that the following result holds when the integration is extended over the complement $\tilde{V}_{rect}$ of the bounded region:

Theorem 7 (basic result for an unbounded region): Let $S$ be a surface such that $S$ has continuous first-order partial derivatives $\frac{\partial S}{\partial\theta}$ and $\frac{\partial S}{\partial\phi}$, let $\tilde{V}$ be the complement of the region bounded by $S$, and let $f:\mathbb{R}^3\to\mathbb{R}$ be a function with continuous first-order partial derivatives in $\tilde{V}$ such that $\hat{f}\to 0$ as $r\to\infty$. Then:

$$ -\frac{1}{4\pi}\int_{\tilde{V}_{rect}} \frac{1}{r^3}\left(x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}+z\frac{\partial f}{\partial z}\right) dV = \frac{1}{4\pi}\int_{\theta=0}^{\pi}\int_{\phi=0}^{2\pi} \hat{f}_S(\theta,\phi)\sin\theta\,d\phi\,d\theta. \quad (30) $$

Finally, considering the possibility that $S$ is a surface of discontinuity for $f$, we split the region of integration for $r$ into two parts, and with this understanding for the volume integral, we have:

Theorem 8: Let $S$ be a surface such that $S$ has continuous first-order partial derivatives $\frac{\partial S}{\partial\theta}$ and $\frac{\partial S}{\partial\phi}$, let $V$ be the region bounded by $S$, and let $f:\mathbb{R}^3\to\mathbb{R}$ be a function with continuous first-order partial derivatives in $\mathbb{R}^3$, except possibly on $S$. Then:

$$ \begin{aligned} f(0,0,0) = -\frac{1}{4\pi}\int_{\mathbb{R}^3} \frac{1}{r^3}\left(x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}+z\frac{\partial f}{\partial z}\right) dV &+ \frac{1}{4\pi}\int_{\theta=0}^{\pi}\int_{\phi=0}^{2\pi} \big[\hat{f}_S(\theta,\phi)\big]_{-}\sin\theta\,d\phi\,d\theta \quad (31) \\ &- \frac{1}{4\pi}\int_{\theta=0}^{\pi}\int_{\phi=0}^{2\pi} \big[\hat{f}_S(\theta,\phi)\big]_{+}\sin\theta\,d\phi\,d\theta, \quad (32) \end{aligned} $$

where we denote by $[\hat{f}_S(\theta,\phi)]_{-}$ the limiting value of the function at a surface point, namely, of $f(S(\theta,\phi)\sin\theta\cos\phi, S(\theta,\phi)\sin\theta\sin\phi, S(\theta,\phi)\cos\theta)$ from inside, and by $[\hat{f}_S(\theta,\phi)]_{+}$ the limiting value from outside.

Like Theorem 4, we can have a "translated" version of Theorem 8. We can write it in coordinate-free form as:

$$ f(\text{field point}) = -\frac{1}{4\pi}\int_{V} \frac{1}{r^3}\,\mathbf{r}\cdot\nabla f(\text{source point})\,dV + \frac{1}{4\pi}\int_{S} f(\text{surface point})\,\frac{\mathbf{r}\cdot d\mathbf{S}}{r^3}, $$

where $\mathbf{r}$ denotes the vector from the field point to the source point and $r$ denotes its length.

Theorem 6 with delayed/advanced action:

$$ \begin{aligned} f(a,b,c,t) = -\frac{1}{4\pi}\int_{V_{rect}} \frac{1}{r^3}\left(x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}+z\frac{\partial f}{\partial z}\right) dV &+ \frac{1}{4\pi}\frac{1}{v}\frac{\partial}{\partial t}\int_{V_{rect}} \frac{f\!\left(x,y,z,t-\frac{r}{v}\right)}{x^2+y^2+z^2}\,dx\,dy\,dz \\ &+ \frac{1}{4\pi}\int_{\theta=0}^{\pi}\int_{\phi=0}^{2\pi} \hat{f}_S(\theta,\phi)\sin\theta\,d\phi\,d\theta. \end{aligned} $$

4. New Formulas for 3-Dimensional Vector Fields

4.1. A New Formula for Unbounded Case

We now turn to 3-dimensional vector fields, i.e., functions $\mathbf{F}:\mathbb{R}^3\to\mathbb{R}^3$. We can immediately extend the results for scalar fields to vector fields by considering a 3-dimensional vector-valued function $\mathbf{F}$ as a set of three scalar-valued functions $F_x, F_y, F_z$ and using three recovery formulas, from Theorem 4:

$$ F_x(a,b,c) = -\frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{1}{r^3}\left(x\frac{\partial F_x}{\partial x}+y\frac{\partial F_x}{\partial y}+z\frac{\partial F_x}{\partial z}\right) dV, \quad (33) $$

$$ F_y(a,b,c) = -\frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{1}{r^3}\left(x\frac{\partial F_y}{\partial x}+y\frac{\partial F_y}{\partial y}+z\frac{\partial F_y}{\partial z}\right) dV, \quad (34) $$

$$ F_z(a,b,c) = -\frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{1}{r^3}\left(x\frac{\partial F_z}{\partial x}+y\frac{\partial F_z}{\partial y}+z\frac{\partial F_z}{\partial z}\right) dV. \quad (35) $$

This recovery involves nine partial derivatives, but only three combinations of these appear in each recovery formula. However, using Theorem 5 with a multiplier $\frac{1}{4\pi}$, we have:

$$ \frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{1}{r^3}\left(x\frac{\partial F_y}{\partial y}-y\frac{\partial F_y}{\partial x}\right)dx\,dy\,dz = 0, $$

$$ \frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{1}{r^3}\left(x\frac{\partial F_z}{\partial z}-z\frac{\partial F_z}{\partial x}\right)dx\,dy\,dz = 0. $$

“Adding” the left-hand sides of these two equations to the right-hand side of the equation above for F x ( a , b , c ) and rearranging terms, we get:

$$ F_x(a,b,c) = -\frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{1}{r^3}\left[x\left(\frac{\partial F_x}{\partial x}+\frac{\partial F_y}{\partial y}+\frac{\partial F_z}{\partial z}\right) - y\left(\frac{\partial F_y}{\partial x}-\frac{\partial F_x}{\partial y}\right) + z\left(\frac{\partial F_x}{\partial z}-\frac{\partial F_z}{\partial x}\right)\right]dV. $$

We can obtain similar expressions for F y ( a , b , c ) and F z ( a , b , c ) .

We recognize that $\left(\frac{\partial F_x}{\partial x}+\frac{\partial F_y}{\partial y}+\frac{\partial F_z}{\partial z}\right)$ is the divergence of F, i.e., $\nabla\cdot F$. We also see that $y\left(\frac{\partial F_y}{\partial x}-\frac{\partial F_x}{\partial y}\right) - z\left(\frac{\partial F_x}{\partial z}-\frac{\partial F_z}{\partial x}\right)$ is the x-component of $\mathbf{r}\times(\nabla\times F)$ because:

$$ \mathbf{r}\times(\nabla\times F) = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ x & y & z \\ \dfrac{\partial F_z}{\partial y}-\dfrac{\partial F_y}{\partial z} & \dfrac{\partial F_x}{\partial z}-\dfrac{\partial F_z}{\partial x} & \dfrac{\partial F_y}{\partial x}-\dfrac{\partial F_x}{\partial y} \end{vmatrix}. $$

Here, $\mathbf{r}$ is $x\mathbf{i}+y\mathbf{j}+z\mathbf{k}$. Considering the other components of F in the same way and using the notation of Theorem 4, we obtain the following theorem, provided the weaker regularity condition $\lim_{r\to\infty} F(r\sin\theta\cos\phi, r\sin\theta\sin\phi, r\cos\theta) = 0$ holds uniformly for all $\theta, \phi$:

Theorem 9 (a new formula). Under the assumptions that F has continuous derivatives and that $\lim_{r\to\infty} F(r\sin\theta\cos\phi, r\sin\theta\sin\phi, r\cos\theta) = 0$ holds uniformly for all $\theta, \phi$, we have:

$$ F = -\frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{1}{r^3}\left[(\nabla\cdot F)\,\mathbf{r} - \mathbf{r}\times(\nabla\times F)\right]dV. $$
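Theorem 9 lends itself to a direct numerical check. In the sketch below, the sample field F (a Gaussian-damped field of our own choosing, not from the text) has $F(0,0,0) = (1, 2, 0.5)$; its divergence and curl are obtained by central differences, and a midpoint rule in spherical coordinates evaluates the integral, with the overall sign taken as $-1/(4\pi)$, which the radial integration in the proof of Theorem 4 forces:

```python
import math

# hypothetical sample field (not from the paper); F(0,0,0) = (1, 2, 0.5)
def F(x, y, z):
    g = math.exp(-(x*x + y*y + z*z))
    return ((1.0 + y)*g, (2.0 - x*z)*g, (0.5 + x)*g)

H = 1e-5
def jacobian(x, y, z):
    # J[i][j] = dF_i/dx_j by central differences (no hand-derived formulas)
    J = [[0.0]*3 for _ in range(3)]
    for j, (dx, dy, dz) in enumerate(((H, 0, 0), (0, H, 0), (0, 0, H))):
        fp = F(x + dx, y + dy, z + dz)
        fm = F(x - dx, y - dy, z - dz)
        for i in range(3):
            J[i][j] = (fp[i] - fm[i])/(2*H)
    return J

def recover_at_origin(nr=80, nth=20, nph=40, rmax=6.0):
    # midpoint rule in spherical coordinates centred at the field point (0,0,0)
    dr, dth, dph = rmax/nr, math.pi/nth, 2*math.pi/nph
    tot = [0.0, 0.0, 0.0]
    for i in range(nr):
        r = (i + 0.5)*dr
        for j in range(nth):
            th = (j + 0.5)*dth
            st, ct = math.sin(th), math.cos(th)
            for k in range(nph):
                ph = (k + 0.5)*dph
                x, y, z = r*st*math.cos(ph), r*st*math.sin(ph), r*ct
                J = jacobian(x, y, z)
                div = J[0][0] + J[1][1] + J[2][2]
                cu = (J[2][1]-J[1][2], J[0][2]-J[2][0], J[1][0]-J[0][1])
                rxc = (y*cu[2]-z*cu[1], z*cu[0]-x*cu[2], x*cu[1]-y*cu[0])
                w = st*dr*dth*dph/r     # (1/r^3) * r^2 sin(theta) * volume element
                tot[0] += (div*x - rxc[0])*w
                tot[1] += (div*y - rxc[1])*w
                tot[2] += (div*z - rxc[2])*w
    return [-t/(4*math.pi) for t in tot]

approx = recover_at_origin()
```

With the default grid the three components are recovered to about two decimal places, using only one volume integration, as the theorem promises.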

This theorem is our new alternative to Jefimenko’s formula stated in the Introduction, namely:

$$ F = -\frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{\nabla(\nabla\cdot F) - \nabla\times(\nabla\times F)}{r}\,dV, $$

which holds under a stronger regularity condition, namely, lim r r 2 F ( r sin θ cos ϕ , r sin θ sin ϕ , r cos θ ) = 0 , uniformly for all θ , ϕ .

Theorem 10 (Delayed/advanced version of Theorem 9).

F = 1 4 π R 3 1 r 3 [ ( F ) r r × ( × F ) ] + 1 4 π 1 v t R 3 1 r 2 F .

Note, however, that with retarded action, not only the divergence and the curl but also the time-derivative of the field appear as “sources”.

4.2. New Formula for a Bounded Region

To prove our new formula above, we applied Theorem 4 and Theorem 5 to the components of F and then combined the results appropriately. To obtain a new formula for a bounded region, we will need an extension of Theorem 5 above, which we now state and prove.

Theorem 11 (basic result for related operators over a bounded region): Let S be a surface such that S has continuous first-order partial derivatives $S_\theta$ and $S_\phi$, let V be the region bounded by S, and let $f:\mathbb{R}^3\to\mathbb{R}$ be a function with continuous first-order partial derivatives in V. Then, using the notation of Theorem 7:

V r e c t 1 r 3 ( x f y y f x ) d V = θ = 0 , ϕ = 0 θ = π , ϕ = 2 π 1 S ( θ , ϕ ) S ϕ f ^ S ( θ , ϕ ) d θ d ϕ ,

V r e c t 1 r 3 ( y f z z f y ) d V = θ = 0 , ϕ = 0 θ = π , ϕ = 2 π 1 S ( θ , ϕ ) [ S θ f ^ S ( θ , ϕ ) sin θ sin ϕ S ϕ f ^ S ( θ , ϕ ) cos θ cos ϕ ] d θ d ϕ ,

V r e c t 1 r 3 ( z f x x f z ) d V = θ = 0 , ϕ = 0 θ = π , ϕ = 2 π 1 S ( θ , ϕ ) [ S θ f ^ S ( θ , ϕ ) sin θ cos ϕ + S ϕ f ^ S ( θ , ϕ ) cos θ sin ϕ ] d θ d ϕ .

Proof: We prove only the first of the three equations above. Indeed:

I = V r e c t 1 r 3 ( x f y y f x ) d V = V s p h 1 r 3 ( f ^ ϕ ) r 2 d r d θ d ϕ = θ = 0 θ = π ϕ = 0 ϕ = 2 π r = 0 r = S ( θ , ϕ ) 1 r f ^ ϕ d r d θ d ϕ .

But here we cannot integrate with respect to $\phi$ first, because the upper limit on r depends on $\phi$ (and $\theta$). So we use a "trick": Leibniz's rule for the derivative of an integral with respect to a parameter, with a twist. Leibniz's Rule says:

$$ \frac{d}{dq}\int_{\alpha(q)}^{\beta(q)} f(q,p)\,dp = \frac{d\beta}{dq}\,f(q,\beta(q)) - \frac{d\alpha}{dq}\,f(q,\alpha(q)) + \int_{\alpha(q)}^{\beta(q)} \frac{\partial f}{\partial q}(q,p)\,dp. $$

Here, q is a parameter. This form moves the differentiation inside the integral. We rearrange terms to get:

$$ \int_{\alpha(q)}^{\beta(q)} \frac{\partial f}{\partial q}(q,p)\,dp = \frac{d}{dq}\int_{\alpha(q)}^{\beta(q)} f(q,p)\,dp - \frac{d\beta}{dq}\,f(q,\beta(q)) + \frac{d\alpha}{dq}\,f(q,\alpha(q)). $$

This form moves the differentiation outside. Using it, we get:

$$ \int_{r=0}^{r=S(\theta,\phi)} \frac{1}{r}\frac{\partial \hat{f}}{\partial \phi}(r,\theta,\phi)\,dr = \frac{\partial}{\partial \phi}\int_{r=0}^{r=S(\theta,\phi)} \frac{1}{r}\,\hat{f}(r,\theta,\phi)\,dr - \frac{1}{S}\frac{\partial S(\theta,\phi)}{\partial \phi}\,\hat{f}(S(\theta,\phi),\theta,\phi). $$

Substituting, we get:

I = θ = 0 θ = π ϕ = 0 ϕ = 2 π [ ϕ r = 0 r = S ( θ , ϕ ) 1 r f ^ ( r , θ , ϕ ) d r ] d ϕ d θ θ = 0 θ = π ϕ = 0 ϕ = 2 π 1 S S ( θ , ϕ ) ϕ f ^ ( S ( θ , ϕ ) , θ , ϕ ) d θ d ϕ = θ = 0 θ = π [ r = 0 r = S ( θ , ϕ ) 1 r f ^ ( r , θ , ϕ ) d r ] ϕ = 0 ϕ = 2 π d θ θ = 0 θ = π ϕ = 0 ϕ = 2 π 1 S S ( θ , ϕ ) ϕ f ^ ( S ( θ , ϕ ) , θ , ϕ ) d θ d ϕ = 0 θ = 0 θ = π ϕ = 0 ϕ = 2 π 1 S S ( θ , ϕ ) ϕ f ^ ( S ( θ , ϕ ) , θ , ϕ ) d θ d ϕ .
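The "twisted" use of Leibniz's rule above is easy to sanity-check in one variable. The integrand $\sin(qp)$ and the limits $\alpha(q) = q$, $\beta(q) = q^2$ below are arbitrary choices for illustration, not from the text:

```python
import math

# I(q) = Int_{p=q}^{q^2} sin(q p) dp has the closed form (cos(q^2) - cos(q^3))/q
def I(q):
    return (math.cos(q**2) - math.cos(q**3))/q

q0, h, n = 1.3, 1e-5, 4000

# left-hand side of the rearranged rule: Int_{alpha}^{beta} p cos(q0 p) dp (midpoint rule)
a, b = q0, q0**2
dp = (b - a)/n
lhs = sum(p*math.cos(q0*p)*dp for p in ((a + (i + 0.5)*dp) for i in range(n)))

# right-hand side: d/dq of the integral, minus beta' f(q, beta), plus alpha' f(q, alpha)
dI = (I(q0 + h) - I(q0 - h))/(2*h)
rhs = dI - 2*q0*math.sin(q0**3) + 1.0*math.sin(q0**2)
```

The two sides agree to several decimal places, confirming the rearranged rule before it is applied under the double integral.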

Equipped with the result above, we now get the following result in a way similar to that for Theorem 8.

Theorem 12 (bounded version of Theorem 9).

F = 1 4 π R 3 ( F ) r r × ( × F ) r 3 d V + 1 4 π θ = 0 , ϕ = 0 θ = π , ϕ = 2 π F S ( θ , ϕ ) sin θ d θ d ϕ + 1 4 π S F S × ( r × d S r 3 ) = 1 4 π R 3 ( F ) r r × ( × F ) r 3 d V + 1 4 π S ( r d S r 3 ) F S + 1 4 π S F S × ( r × d S r 3 ) .

Compare the “surface integral” terms above with the formula given by Zhou [10]:

F = S F n 4 π r d S + × S F × n 4 π r d S .

His n is our n ^ , and our d S is d S n ^ .

4.3. Uniqueness of the Inverse Square Laws

We can interpret our result as also saying that if $\nabla\cdot F$ and $\nabla\times F$ are regarded as sources for the field F, then no matter how the sources "actually" work, they can be regarded as working through an inverse-square law, or influence function, or Green's function. The inverse-square law is, in this sense, ubiquitous. There are, actually, two inverse-square laws. The scalar source $\nabla\cdot F$ acts radially, whereas the vector source $\nabla\times F$ acts transversely. Both depend only on the relative position vector $\mathbf{r}$, i.e., the vector relating the source point to the field point. Are these laws unique?

The uniqueness is with respect to the question whether the inverse square “ r -dependent” radial influence function for a scalar source (Coulomb’s Law of Electrostatics) is the only “ r -dependent” one with the property that the divergence of the generated vector field equals the generating scalar field and the curl of the generated field is zero. Similarly, for the vector source and transverse action function (Biot-Savart Law of Magnetic Effect of Stationary Currents). Thus, if V ( r ) is a vector influence function for any arbitrary scalar field ρ to produce a vector field F such that F = ( 1 / 4 π ) ρ , and × F = 0 , then must V = r / r 3 ? Here, by influence function V we mean that F = ( 1 / 4 π ) ρ V ( r ) , i.e., F ( a , b , c ) = ( 1 / 4 π ) ρ ( x , y , z ) V ( x a , y b , z c ) . We can immediately see that this is the case.

Indeed, we have, by our result, also F ( a , b , c ) = ( 1 / 4 π ) ρ ( x , y , z ) r / r 3 . So,

$$ 0 = \rho * \left( V(\mathbf{r}) - \mathbf{r}/r^3 \right), $$

and so, by a theorem of Titchmarsh on the convolution (product) of two continuous functions, since $\rho \not\equiv 0$ (the general, non-trivial case), we have $0 = V(\mathbf{r}) - \mathbf{r}/r^3$.

Remark 13: It is curious that the defining relation above, for the vector field generated by a scalar field through an $\mathbf{r}$-dependent influence function, has not been recognized as a (space) convolution integral by workers in Electromagnetism, a fact which would probably be seen more readily by workers in Linear System Theory, where the influence function is called the impulse response and the convolution is in time.
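The analogy can be made concrete in the discrete one-dimensional setting: convolving a unit impulse with any kernel returns the kernel itself, which is exactly why the influence function is called the impulse response in Linear System Theory. This is a toy illustration; the kernel and source values are arbitrary:

```python
def convolve(a, b):
    # full discrete (1-D) convolution, the space-domain analogue of the
    # superposition integral: out[n] = sum_k a[k] * b[n-k]
    out = [0.0]*(len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x*y
    return out

kernel = [0.5, 1.0, 0.25]                  # stands in for the influence function V
assert convolve([1.0], kernel) == kernel   # a unit impulse reproduces the kernel

# superposition: the response to a general source is a weighted sum of shifted kernels
source = [2.0, -1.0]
resp = convolve(source, kernel)
```

Equal responses to the same non-zero source then force equal kernels, which is the discrete shadow of the Titchmarsh argument used above.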

The uniqueness result for the vector source with transverse action can be easily seen to be true.

Remark 14: It is indeed surprising that whereas the expressions for $F_x, F_y, F_z$ separately each involved three partial derivatives multiplied by x, y, z, the vector F, i.e., the three scalars put together, has an expression that involves four different "disjoint" combinations of the partial derivatives multiplied by x, y, z, and these happen to be the combinations occurring in $\nabla\cdot F$ and $\nabla\times F$. On the other hand, it is not clear whether we can calculate the partial derivatives of $F_x, F_y, F_z$ knowing $\nabla\cdot F$ and $\nabla\times F$ without calculating F.

Remark 15: Incidentally, it may be stated that we were led to our new formula above by starting with Poisson’s Theorem and then using two little-known formulas below, namely:

$$ \int_{\mathbb{R}^3}\frac{1}{r}\left[\nabla(\nabla\cdot F)\right]dV = \int_{\mathbb{R}^3}\frac{1}{r^3}\left[(\nabla\cdot F)\,\mathbf{r}\right]dV \quad (36) $$

$$ \int_{\mathbb{R}^3}\frac{1}{r}\left[\nabla\times(\nabla\times F)\right]dV = \int_{\mathbb{R}^3}\frac{1}{r^3}\left[\mathbf{r}\times(\nabla\times F)\right]dV. \quad (38) $$

This led us, in turn, to our Formula (1), by writing the expression for $F_x(0,0,0)$ in our formula above and expanding out the expressions for $\nabla\cdot F$ and $\nabla\times F$.

The above two little-known formulas were obtained by application of the following new formulas which are proved in Section 5 (Theorem 13, Theorem 14):

$$ \int_{\mathbb{R}^3}\frac{1}{r}\,\nabla f\, dV = \int_{\mathbb{R}^3}\frac{1}{r^3}\, f\,\mathbf{r}\, dV \quad (39) $$

$$ \int_{\mathbb{R}^3}\frac{1}{r}\,\nabla\times A\, dV = \int_{\mathbb{R}^3}\frac{1}{r^3}\,\mathbf{r}\times A\, dV, \quad (40) $$

along with another (Theorem 15):

$$ \int_{\mathbb{R}^3}\frac{1}{r}\,\nabla\cdot A\, dV = \int_{\mathbb{R}^3}\frac{1}{r^3}\,\mathbf{r}\cdot A\, dV. \quad (41) $$

Remark 16: Incidentally, we can prove Laplace’s theorem using our new result and the results mentioned above.

Laplace’s Theorem: Given a twice-differentiable vector function F,

$$ F = -\frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{\nabla^2 F}{r}\,dV. \quad (42) $$

Proof: We start with our new result for F and then use Theorem 13 and Theorem 14:

$$ F = -\frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{1}{r^3}\left[(\nabla\cdot F)\,\mathbf{r} - \mathbf{r}\times(\nabla\times F)\right]dV \quad (43) $$

$$ \quad = -\frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{1}{r}\left[\nabla(\nabla\cdot F) - \nabla\times(\nabla\times F)\right]dV \quad (44) $$

$$ \quad = -\frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{\nabla^2 F}{r}\,dV. \quad (45) $$

5. Removing a Derivative Occurring inside an Integral

Our main result in Section 3 can be regarded as enabling us to completely "integrate out" differential expressions occurring inside an integral. We now state a number of results which only remove a derivative occurring inside an integral, without integrating out completely. We first state three lemmas. Here, g denotes the function associated with f.

Lemma 1: $$ \int_{\mathbb{R}^3}\psi(r)\,\frac{\partial f}{\partial x}\,dV = -\int_{\mathbb{R}^3}\frac{\psi'(r)}{r}\,x\,f\,dV + \lim_{\varepsilon\to 0,\,R\to\infty}\iint d\theta\,d\phi\,\sin^2\theta\cos\phi\left[r^2\psi(r)\,g\right]_{r=\varepsilon}^{r=R}, $$

Lemma 2: $$ \int_{\mathbb{R}^3}\psi(r)\,\frac{\partial f}{\partial y}\,dV = -\int_{\mathbb{R}^3}\frac{\psi'(r)}{r}\,y\,f\,dV + \lim_{\varepsilon\to 0,\,R\to\infty}\iint d\theta\,d\phi\,\sin^2\theta\sin\phi\left[r^2\psi(r)\,g\right]_{r=\varepsilon}^{r=R}, $$

Lemma 3: $$ \int_{\mathbb{R}^3}\psi(r)\,\frac{\partial f}{\partial z}\,dV = -\int_{\mathbb{R}^3}\frac{\psi'(r)}{r}\,z\,f\,dV + \lim_{\varepsilon\to 0,\,R\to\infty}\iint d\theta\,d\phi\,\sin\theta\cos\theta\left[r^2\psi(r)\,g\right]_{r=\varepsilon}^{r=R}. $$

Proof of Lemma 1: Denoting the associated function by g, we have:

R 3 ψ ( r ) f x = R s p h 3 ψ ( r ) 1 sin θ [ sin 2 θ cos ϕ g r + sin θ cos θ cos ϕ 1 r g θ sin ϕ 1 r g ϕ ] r 2 sin θ d r d θ ϕ = R s p h 3 [ sin 2 θ cos ϕ r 2 ψ ( r ) g r + sin θ cos θ cos ϕ r ψ ( r ) g θ sin ϕ r ψ ( r ) g ϕ ] d r d θ d ϕ = I 1 + I 2 + I 3 ,

say. For the first term, we have:

I 1 = θ = 0 θ = π ϕ = 0 ϕ = 2 π sin 2 θ cos ϕ [ r = 0 r = r 2 ψ ( r ) g r d r ] d θ d ϕ = θ = 0 θ = π ϕ = 0 ϕ = 2 π sin 2 θ cos ϕ ( [ r 2 ψ ( r ) g ( r , θ , ϕ ) ] r = 0 r = r = 0 r = ( 2 r ψ ( r ) + r 2 ψ ( r ) ) g d r ) d θ d ϕ = R s p h 3 ( 2 r ψ ( r ) + r 2 ψ ( r ) ) sin 2 θ cos ϕ g + θ = 0 θ = π ϕ = 0 ϕ = 2 π sin 2 θ cos ϕ [ r 2 ψ ( r ) g ( r , θ , ϕ ) ] r = 0 r = d θ d ϕ ,

where we have used integration by parts with respect to r. For the second term we have:

I 2 = ϕ = 0 ϕ = 2 π r = 0 r = r ψ ( r ) cos ϕ [ θ = 0 θ = π sin θ cos θ g θ d θ ] d ϕ d r = ϕ = 0 ϕ = 2 π r = 0 r = r ψ ( r ) cos ϕ ( [ sin θ cos θ g ( r , θ , ϕ ) ] θ = 0 θ = π θ = 0 θ = π ( cos 2 θ sin 2 θ ) g d θ ) d ϕ d r = R s p h 3 r ψ ( r ) cos ϕ ( cos 2 θ sin 2 θ ) g .

Here, we have used integration by parts with respect to θ . For the third term we have:

I 3 = r = 0 r = θ = 0 θ = π r ψ ( r ) ( ϕ = 0 ϕ = 2 π sin ϕ g ϕ d ϕ ) d r d θ = r = 0 r = θ = 0 θ = π r ψ ( r ) ( [ sin ϕ g ( r , θ , ϕ ) ] ϕ = 0 ϕ = 2 π ϕ = 0 ϕ = 2 π cos ϕ g d ϕ ) d r d θ = R s p h 3 r ψ ( r ) cos ϕ g ,

integrating by parts again with respect to ϕ this time.

Putting together these 3 results we have:

R 3 ψ ( r ) f x = R s p h 3 r 2 ψ ( r ) sin 2 θ cos ϕ g + θ = 0 θ = π ϕ = 0 ϕ = 2 π sin 2 θ cos ϕ [ r 2 ψ ( r ) g ( r , θ , ϕ ) ] r = 0 r = d θ d ϕ .

On the right hand side of Lemma 1 we have the first term R 3 ψ ( r ) r [ ( x f ) ] which equals R s p h 3 ψ ( r ) r [ ( r sin θ cos ϕ ) r 2 sin θ g ] which equals the expression on the left hand side of Lemma 1 above.

Lemmas 2 and 3 can be proved similarly.

Using the above three lemmas, our first result removes a $\nabla$ (gradient) operator occurring inside an integral. Note that we have a $\frac{1}{r}$ multiplier on the operator side and a $\frac{1}{r^3}$ multiplier on the other side.

Theorem 13 (New result for $\nabla$ inside an integral). Assuming f is differentiable and that $\lim_{r\to\infty} r\,g(r,\theta,\phi) = 0$ for all $\theta, \phi$, we have:

$$ \int_{\mathbb{R}^3}\frac{1}{r}\,\nabla f\, dV = \int_{\mathbb{R}^3}\frac{1}{r^3}\, f\,\mathbf{r}\, dV. $$

Proof: On considering the x-, y-, and z-components of the two sides of the result to be proved, and choosing $\psi(r) = \frac{1}{r}$, we see that the Theorem follows immediately from the three lemmas.
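Theorem 13 can be checked numerically. For the sample field $f = (1+x)e^{-r^2}$ (our own choice, not from the text), symmetry shows that the x-components of both sides equal $2\pi/3$, which a midpoint rule in spherical coordinates confirms:

```python
import math

# sample scalar field (an assumption, not from the paper): f = (1 + x) e^{-r^2};
# by symmetry the x-components of both sides of Theorem 13 evaluate to 2*pi/3
def f(x, y, z):
    return (1.0 + x)*math.exp(-(x*x + y*y + z*z))

def fx(x, y, z):   # analytic derivative d f / d x
    g = math.exp(-(x*x + y*y + z*z))
    return g - 2.0*x*(1.0 + x)*g

nr, nth, nph, rmax = 120, 24, 48, 6.0
dr, dth, dph = rmax/nr, math.pi/nth, 2*math.pi/nph
lhs = rhs = 0.0
for i in range(nr):
    r = (i + 0.5)*dr
    for j in range(nth):
        th = (j + 0.5)*dth
        st = math.sin(th)
        for k in range(nph):
            ph = (k + 0.5)*dph
            x, y, z = r*st*math.cos(ph), r*st*math.sin(ph), r*math.cos(th)
            w = r*r*st*dr*dth*dph          # spherical volume element
            lhs += fx(x, y, z)/r * w       # (1/r) * x-component of grad f
            rhs += x*f(x, y, z)/r**3 * w   # (1/r^3) * x-component of f r
```

Note that the derivative has been "removed": the right-hand side needs only values of f, not of its gradient.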

Remark 16: By choosing $\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z}$ in place of f, respectively, in the three Lemmas above and adding, we get the following result involving the Laplacian operator:

$$ \int_{\mathbb{R}^3}\frac{1}{r}\left(\frac{\partial^2 f}{\partial x^2}+\frac{\partial^2 f}{\partial y^2}+\frac{\partial^2 f}{\partial z^2}\right)dV = \int_{\mathbb{R}^3}\frac{1}{r^3}\left(x\frac{\partial f}{\partial x}+y\frac{\partial f}{\partial y}+z\frac{\partial f}{\partial z}\right)dV \quad (46) $$

so that using our basic result, we get:

$$ \int_{\mathbb{R}^3}\frac{1}{r}\left(\frac{\partial^2 f}{\partial x^2}+\frac{\partial^2 f}{\partial y^2}+\frac{\partial^2 f}{\partial z^2}\right)dV = -4\pi f(0,0,0). \quad (47) $$

If we have advanced/retarded action, we will have:

$$ \int_{\mathbb{R}^3}\frac{1}{r}\left[\left(\frac{\partial^2 f}{\partial x^2}+\frac{\partial^2 f}{\partial y^2}+\frac{\partial^2 f}{\partial z^2}\right) - \frac{1}{v^2}\frac{\partial^2 f}{\partial t^2}\right]dV = -4\pi f(0,0,0). \quad (48) $$
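For a radial function the static identity (47) reduces to a one-dimensional integral. For $f = e^{-r^2}$ (our own choice of test function) it can be checked to high accuracy, the integral evaluating to $-4\pi f(0,0,0) = -4\pi$:

```python
import math

# radial check of the Laplacian identity: for f = e^{-r^2},
# Laplacian(f) = (4 r^2 - 6) e^{-r^2}, and the angular integration gives 4*pi
n, rmax = 20000, 8.0
dr = rmax/n
total = 0.0
for i in range(n):
    r = (i + 0.5)*dr
    lap = (4*r*r - 6.0)*math.exp(-r*r)
    total += (lap/r) * 4*math.pi*r*r * dr
# total is close to -4*pi, i.e. -4*pi*f(0,0,0)
```

The 1/r singularity is harmless here: multiplied by the $r^2$ of the volume element, the radial integrand is smooth.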

Our next result enables us to remove a $\nabla\times$ operator occurring inside an integral. Note again that we have a $\frac{1}{r}$ multiplier on the operator side and a $\frac{1}{r^3}$ multiplier on the other side.

Theorem 14 (new result for $\nabla\times$ inside an integral): Assuming the vector function A is differentiable and that its components satisfy the same limit condition as in Theorem 13:

$$ \int_{\mathbb{R}^3}\frac{1}{r}\,\nabla\times A\, dV = \int_{\mathbb{R}^3}\frac{1}{r^3}\,\mathbf{r}\times A\, dV. $$

Proof: Consider the three components of the two sides and apply the 3 Lemmas.

Finally, as one may expect, we have a result which removes the $\nabla\cdot$ operator. Once again, the multiplier is $\frac{1}{r}$ on the operator side and $\frac{1}{r^3}$ on the other side.

Theorem 15 (New result for $\nabla\cdot$ inside an integral). Assuming the vector function A is differentiable and that its components satisfy the same limit condition as in Theorem 13:

$$ \int_{\mathbb{R}^3}\frac{1}{r}\,\nabla\cdot A\, dV = \int_{\mathbb{R}^3}\frac{1}{r^3}\,\mathbf{r}\cdot A\, dV. $$
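Theorem 15 can likewise be verified numerically. The sample field A below is our own choice, and its divergence is computed by central differences so that no hand-derived formulas enter; for this particular A, symmetry gives both sides the value $2\pi/3$:

```python
import math

# sample vector field (an assumption, not from the paper)
def A(x, y, z):
    g = math.exp(-(x*x + y*y + z*z))
    return ((x + 0.5)*g, (1.0 - x*z)*g, (x + z*z)*g)

H = 1e-5
def divA(x, y, z):
    # numerical divergence by central differences
    return ((A(x+H, y, z)[0] - A(x-H, y, z)[0])
          + (A(x, y+H, z)[1] - A(x, y-H, z)[1])
          + (A(x, y, z+H)[2] - A(x, y, z-H)[2]))/(2*H)

nr, nth, nph, rmax = 80, 20, 40, 6.0
dr, dth, dph = rmax/nr, math.pi/nth, 2*math.pi/nph
lhs = rhs = 0.0
for i in range(nr):
    r = (i + 0.5)*dr
    for j in range(nth):
        th = (j + 0.5)*dth
        st = math.sin(th)
        for k in range(nph):
            ph = (k + 0.5)*dph
            x, y, z = r*st*math.cos(ph), r*st*math.sin(ph), r*math.cos(th)
            w = r*r*st*dr*dth*dph
            ax, ay, az = A(x, y, z)
            lhs += divA(x, y, z)/r * w            # (1/r) * div A
            rhs += (x*ax + y*ay + z*az)/r**3 * w  # (1/r^3) * r . A
```

Again the right-hand side involves only values of A, the divergence having been "removed".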

These three theorems have perhaps appeared as exercises in some textbooks, or have been used in carrying out some derivations, but it is useful to highlight them as we have done.

Also, there are variations obtained by changing the multiplier on the operator side to $\frac{1}{r^2}$, or even dropping it altogether. They can be proved by changing the multipliers in the three Lemmas. We state these results below. Here, $r = (x^2+y^2+z^2)^{1/2}$, and f and its partial derivatives are evaluated at $(x,y,z)$.

Lemma 4: $$ \int\psi(r)\,x\frac{\partial f}{\partial x}\,dV = -\int\left[\frac{\psi'(r)}{r}x^2 + \psi(r)\right]f\,dV + \lim_{\varepsilon\to 0,\,R\to\infty}\iint d\theta\,d\phi\,\sin^3\theta\cos^2\phi\left[r^3\psi(r)\,g\right]_{r=\varepsilon}^{r=R}. $$

Lemma 5: $$ \int\psi(r)\,y\frac{\partial f}{\partial x}\,dV = -\int\frac{\psi'(r)}{r}\,x y\,f\,dV + \lim_{\varepsilon\to 0,\,R\to\infty}\iint d\theta\,d\phi\,\sin^3\theta\cos\phi\sin\phi\left[r^3\psi(r)\,g\right]_{r=\varepsilon}^{r=R}. $$

And in the special case ψ ( r ) = 1 / r 3 ,

y f x = 3 r 5 x y f + 1 v t 1 r 4 x y f ,

provided $g(r,\theta,\phi)$ remains finite as $r\to\infty$.

Lemma 6: $$ \int\psi(r)\,z\frac{\partial f}{\partial x}\,dV = -\int\frac{\psi'(r)}{r}\,x z\,f\,dV + \lim_{\varepsilon\to 0,\,R\to\infty}\iint d\theta\,d\phi\,\sin^2\theta\cos\theta\cos\phi\left[r^3\psi(r)\,g\right]_{r=\varepsilon}^{r=R}. $$

Lemma 7: $$ \int\psi(r)\,x\frac{\partial f}{\partial y}\,dV = -\int\frac{\psi'(r)}{r}\,x y\,f\,dV + \lim_{\varepsilon\to 0,\,R\to\infty}\iint d\theta\,d\phi\,\sin^3\theta\cos\phi\sin\phi\left[r^3\psi(r)\,g\right]_{r=\varepsilon}^{r=R}. $$

Lemma 8: $$ \int\psi(r)\,y\frac{\partial f}{\partial y}\,dV = -\int\left[\frac{\psi'(r)}{r}y^2 + \psi(r)\right]f\,dV + \lim_{\varepsilon\to 0,\,R\to\infty}\iint d\theta\,d\phi\,\sin^3\theta\sin^2\phi\left[r^3\psi(r)\,g\right]_{r=\varepsilon}^{r=R}. $$

Lemma 9: $$ \int\psi(r)\,z\frac{\partial f}{\partial y}\,dV = -\int\frac{\psi'(r)}{r}\,y z\,f\,dV + \lim_{\varepsilon\to 0,\,R\to\infty}\iint d\theta\,d\phi\,\sin^2\theta\cos\theta\sin\phi\left[r^3\psi(r)\,g\right]_{r=\varepsilon}^{r=R}. $$

Lemma 10: $$ \int\psi(r)\,x\frac{\partial f}{\partial z}\,dV = -\int\frac{\psi'(r)}{r}\,x z\,f\,dV + \lim_{\varepsilon\to 0,\,R\to\infty}\iint d\theta\,d\phi\,\sin^2\theta\cos\theta\cos\phi\left[r^3\psi(r)\,g\right]_{r=\varepsilon}^{r=R}. $$

Lemma 11: $$ \int\psi(r)\,y\frac{\partial f}{\partial z}\,dV = -\int\frac{\psi'(r)}{r}\,y z\,f\,dV + \lim_{\varepsilon\to 0,\,R\to\infty}\iint d\theta\,d\phi\,\sin^2\theta\cos\theta\sin\phi\left[r^3\psi(r)\,g\right]_{r=\varepsilon}^{r=R}. $$

Lemma 12: $$ \int\psi(r)\,z\frac{\partial f}{\partial z}\,dV = -\int\left[\frac{\psi'(r)}{r}z^2 + \psi(r)\right]f\,dV + \lim_{\varepsilon\to 0,\,R\to\infty}\iint d\theta\,d\phi\,\sin\theta\cos^2\theta\left[r^3\psi(r)\,g\right]_{r=\varepsilon}^{r=R}. $$

Using the above lemmas, we now prove, without requiring twice-differentiability of F, that the divergence of $F^\nabla$, i.e., the irrotational part of F determined by $\nabla\cdot F$ in our formula, is indeed $\nabla\cdot F$, and that its curl is zero. Similarly, the curl of $F^\times$, i.e., the solenoidal part of F determined by $\nabla\times F$ in our formula, is indeed $\nabla\times F$, and its divergence is zero.

Theorem 16 (Property of irrotational and solenoidal parts). Assuming that $(\nabla\cdot F)(r,\theta,\phi)\to 0$ as $r\to\infty$ uniformly in $\theta$ and $\phi$, and that $(\nabla\times F)(r,\theta,\phi)\to 0$ as $r\to\infty$ uniformly in $\theta$ and $\phi$, we have:

$$ \nabla\cdot F^\nabla = \nabla\cdot F, \qquad \nabla\times F^\nabla = 0, $$

$$ \nabla\cdot F^\times = 0, \qquad \nabla\times F^\times = \nabla\times F. $$

Proof: To prove the first result, we do not start by moving the differentiation required by into the inside of the integral because that would require twice-differentiability of F. Instead, we “undo” the differentiation inside the integral by using the above three results. Thus, we start with the i-component of the expression within the parentheses, namely:

1 4 π 1 r 3 ( F x x + F y y + F z z ) x d x d y d z .

This becomes, on applying the three results above:

1 4 π ( 4 π 3 [ F x ] 0 + ( 3 r 3 F x x 2 r 2 1 r 3 F x ) d V 1 4 π 3 r 5 F x x y d V 1 4 π 3 r 5 F x x x z d V ) .

The $\partial/\partial x$ of this is:

1 3 [ F x x ] 0 1 4 π ( 3 r 3 F x x x 2 r 2 1 r 3 F x x ) d V 1 4 π 3 r 5 F x x x y d V 1 4 π 3 r 5 F x x x z d V .

Similarly, we can calculate the $\partial/\partial y$ of the j-component and the $\partial/\partial z$ of the k-component and add them to obtain, because $(\nabla\cdot F)(r,\theta,\phi)\to 0$ as $r\to\infty$ uniformly in $\theta$ and $\phi$, and transferring $4\pi$ to the left-hand side:

4 π F = 4 π 3 ( F ) + 3 r 5 x ( x F x x + y F x y + z F x z ) d V + 3 r 5 y ( x F y x + y F y y + z F y z ) d V + 3 r 5 z ( z F z x + y F z y + z F z z ) d V [ 3 r 5 ( x F x + y F y + z F z ) d V 4 π 3 F ]

= 3 r 5 r sin θ cos ϕ r F ^ x r r 2 sin θ d r d θ d ϕ + 3 r 5 r sin θ sin ϕ r F ^ y r r 2 sin θ d r d θ d ϕ + 3 r 5 r cos θ r F ^ z r r 2 sin θ d r d θ d ϕ 3 r 5 ( x F x + y F y + z F z ) d V = 3 r sin 2 θ cos ϕ F ^ x r d r d θ d ϕ + 3 r sin 2 θ sin ϕ F ^ y r d r d θ d ϕ + 3 r cos θ sin θ F ^ z r d r d θ d ϕ 3 r 5 ( x F x + y F y + z F z ) d V

= [ 3 lim ε 0 , R ( sin 2 θ cos ϕ F ^ x ( r , θ , ϕ ) r ) d θ d ϕ + sin 2 θ cos ϕ 3 r 2 F ^ x d r d θ d ϕ ] + [ 3 lim ε 0 , R ( sin 2 θ cos ϕ F ^ y ( r , θ , ϕ ) r ) d θ d ϕ + sin 2 θ cos ϕ 3 r 2 F ^ y d r d θ d ϕ ] + [ 3 lim ε 0 , R ( sin 2 θ cos ϕ F ^ z ( r , θ , ϕ ) r ) d θ d ϕ + sin 2 θ cos ϕ 3 r 2 F ^ z d r d θ d ϕ ] 3 r 5 ( x F x + y F y + z F z ) d V

= [ 3 ( 4 π 3 F x x ) + 3 r 5 x F x d V ] + [ 3 ( 4 π 3 F y y ) + 3 r 5 y F y d V ] + [ 3 ( 4 π 3 F z z ) + 3 r 5 z F z d V ] 3 r 5 ( x F x + y F y + z F z ) d V = 4 π F .

Similar (tedious) calculations show that the other results are true.

We next prove the orthogonality of the irrotational part of any field with the solenoidal part of any field. To do this, we first prove the following two results:

$$ \nabla\int \frac{f}{r}\,dV = \int \frac{1}{r^3}\,f\,\mathbf{r}\,dV \quad (49) $$

$$ \nabla\times\int \frac{A}{r}\,dV = \int \frac{1}{r^3}\,(\mathbf{r}\times A)\,dV. \quad (50) $$

We sketch proofs of these results—and one more—under the assumption that f and A are differentiable.

Result 1: $\displaystyle \nabla\int \frac{f}{r}\,dV = \int \frac{1}{r^3}\,f\,\mathbf{r}\,dV.$

Proof: We show the equality of the i-components of the two sides. Denoting the integral on left-hand side by V, we have:

V ( a , b , c ) = R s p h 3 f r .

Carrying the differentiation inside the integral, we have:

$$ \frac{\partial V}{\partial a} = \int \frac{1}{r}\frac{\partial f}{\partial a}\,dV. $$

Using Lemma 1 above, the result follows immediately.

Result 2: $\displaystyle \nabla\times\int \frac{A}{r}\,dV = \int \frac{1}{r^3}\,(\mathbf{r}\times A)\,dV.$

Proof: Again, we show the equality of the i-components of the two sides. Denoting the integral on the left-hand side by F, we have:

F ( a , b , c ) = R s p h 3 A x i + A y j + A z k r .

So, the i-component of $\nabla\times F$ is:

$$ \int \frac{1}{r}\left(\frac{\partial A_z}{\partial b} - \frac{\partial A_y}{\partial c}\right)dV. $$

Using Lemmas 2 and 3 above, the result follows.

Result 3: $\displaystyle \nabla\cdot\int \frac{A}{r}\,dV = \int \frac{1}{r^3}\,(\mathbf{r}\cdot A)\,dV.$

Theorem 17 (orthogonality of irrotational and solenoidal parts). If $F_1$ and $F_2$ are any two once-differentiable fields that satisfy the assumptions of Theorems 13, 14, and 15, then:

$$ \int F_1^\nabla \cdot F_2^\times\, dV = 0. \quad (51) $$

Proof: From Result 1 above, $F_1^\nabla = \nabla g$ for some scalar function g. Now, given a scalar function g and a vector function B, we have the identity:

$$ \nabla g \cdot B = \nabla\cdot(g B) - g\,\nabla\cdot B. $$

So,

$$ F_1^\nabla \cdot F_2^\times = \nabla\cdot\left(g F_2^\times\right) - g\left(\nabla\cdot F_2^\times\right) = \nabla\cdot\left(g F_2^\times\right) - 0, $$

by Theorem 16. We then have:

$$ \int F_1^\nabla \cdot F_2^\times\, dV = \int \nabla\cdot\left(g F_2^\times\right) dV = 0, $$

by Lemma 1, provided $\displaystyle \lim_{\varepsilon\to 0,\,R\to\infty}\iint d\theta\,d\phi\,\sin^2\theta\cos\phi\left[r^2\, g F_2^\times\right]_{r=\varepsilon}^{r=R} = 0$.

Corollary 2. 1) If a vector field is orthogonal to all solenoidal fields, it must be irrotational; 2) If a vector field is orthogonal to all irrotational fields, it must be solenoidal.

Gui and Dou [11] mention the orthogonality only in passing (Proposition 4, p. 288) without citing any references.

6. Moving Differentiation into the Inside of an Integral: A New Proof of Helmholtz's Theorem

We now illustrate how the operation of differentiation with respect to a rectangular coordinate, applied to an integral, is equivalent to an integral in spherical-polar coordinates whose integrand involves a derivative. Commonly, this is known as interchanging the order of integration and differentiation, or carrying a differentiation into the inside of an integral. In textbooks on Electromagnetism, this procedure is used to move the "del" operators into the inside of an integral for later manipulations. We will use this procedure to prove Helmholtz's Theorem. We will illustrate it first with an existence theorem for a first-order PDE system. The existence theorem appears to be new.

6.1. An Existence Theorem for a PDE System

This is a "converse" of Theorem 4, which amounts to solving the simplest 3-variable partial differential equation problem: find a function $w(x,y,z)$ such that:

$$ \frac{\partial w}{\partial x} = f(x,y,z), \qquad \frac{\partial w}{\partial y} = g(x,y,z), \qquad \frac{\partial w}{\partial z} = h(x,y,z). $$

It can also be viewed as a statement of sufficient conditions under which a function exists with specified gradient (of course, the solution will not be unique in the absence of boundary conditions).

Theorem 18. Under the assumptions that the functions f, g, h have continuous partial derivatives and satisfy the conditions:

$$ \frac{\partial f}{\partial y} = \frac{\partial g}{\partial x}, \qquad \frac{\partial g}{\partial z} = \frac{\partial h}{\partial y}, \qquad \frac{\partial h}{\partial x} = \frac{\partial f}{\partial z}, $$

and that $f, g, h \to 0$ as $r\to\infty$, if w is given by:

$$ w(a,b,c) = -\frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{1}{r^3}\left[x\,f(x+a, y+b, z+c) + y\,g(x+a, y+b, z+c) + z\,h(x+a, y+b, z+c)\right]dx\,dy\,dz, $$

where $r = \sqrt{x^2+y^2+z^2}$, then we have:

$$ \frac{\partial w}{\partial a} = f, \qquad \frac{\partial w}{\partial b} = g, \qquad \frac{\partial w}{\partial c} = h. $$

Proof: It is tempting to bring the derivative under the integral sign, but the integrand is not defined at one point, namely, ( 0,0,0 ) . So, we use the rectangular to spherical-polar transformation so that:

w ( a , b , c ) = 1 4 π R s p h 3 1 r 3 [ r sin θ cos ϕ f + r sin θ sin ϕ g + r cos θ h ] r 2 sin θ d r d θ d ϕ = 1 4 π R s p h 3 [ sin θ cos ϕ f + sin θ sin ϕ g + cos θ h ] sin θ d r d θ d ϕ .

Note that f, g, h in the integrand above are to be evaluated at $(a + r\sin\theta\cos\phi,\ b + r\sin\theta\sin\phi,\ c + r\cos\theta)$, and thus contain a, b, c as parameters and are differentiable with respect to them, so that, using the rule of "differentiating under the integral sign", we get:

w a = 1 4 π R s p h 3 [ sin θ cos ϕ f ^ 1 + sin θ sin ϕ g ^ 1 + cos θ h ^ 1 ] sin θ d r d θ d ϕ

where f 1 , g 1 , h 1 denote the partial derivatives of f , g , h with respect to the first “component”. But by the assumption above,

g 1 = f 2 , h 1 = f 3 ,

so we have:

w a = 1 4 π R s p h 3 [ sin θ cos ϕ f 1 + sin θ sin ϕ f 2 + cos θ f 3 ] sin θ d r d θ d ϕ = 1 4 π R 3 1 r 3 ( x f x + y f y + z f z ) d x d y d z = 1 4 π [ 4 π f ( a , b , c ) ] = f ( a , b , c )

by Theorem 4. Similarly, we can prove $\frac{\partial w}{\partial b} = g$ and $\frac{\partial w}{\partial c} = h$.
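Theorem 18 can be illustrated numerically by manufacturing a compatible triple $(f, g, h)$ as the gradient of a known function $w_0$ (here $w_0 = e^{-r^2}$, our own choice, which trivially satisfies the compatibility conditions). With the leading factor taken as $-1/(4\pi)$, as in Theorem 4, the formula reproduces $w_0$ at a sample point, from which the gradient property follows:

```python
import math

def w0(x, y, z):
    return math.exp(-(x*x + y*y + z*z))

def grad_w0(x, y, z):
    v = w0(x, y, z)
    return (-2*x*v, -2*y*v, -2*z*v)   # (f, g, h): compatible, being a gradient

a, b, c = 0.4, -0.3, 0.2              # sample point (arbitrary choice)
nr, nth, nph, rmax = 80, 20, 40, 6.0
dr, dth, dph = rmax/nr, math.pi/nth, 2*math.pi/nph
total = 0.0
for i in range(nr):
    r = (i + 0.5)*dr
    for j in range(nth):
        th = (j + 0.5)*dth
        st = math.sin(th)
        for k in range(nph):
            ph = (k + 0.5)*dph
            x, y, z = r*st*math.cos(ph), r*st*math.sin(ph), r*math.cos(th)
            f, g, h = grad_w0(x + a, y + b, z + c)
            total += (x*f + y*g + z*h)/r**3 * r*r*st*dr*dth*dph
w = -total/(4*math.pi)
```

The quadrature value `w` agrees with $w_0(a,b,c) = e^{-0.29}$ to better than one percent on this grid.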

Remark 17: Note how the use of spherical-polar coordinates has allowed differentiation under the integral sign with impunity, which would not be possible if we had $[(x-a)^2+(y-b)^2+(z-c)^2]^{3/2}$ in the denominator of the integrand. Gauss used spherical-polar coordinates in his paper on the "inverse square law of force" to calculate the derivative of the potential function in his proof of Poisson's equation. Green, somehow, does not use spherical-polar coordinates.

Remark 18: In the “recovery” formula given by Theorem 4, one does not require conditions of equality of the second-order mixed partial derivatives. Indeed, we did not require even the existence of the second-order derivatives. However, in proving the existence theorem on the solution of the PDE problem, we have invoked the equality of the second-order mixed partial derivatives. Perhaps, with a suitable modification of our argument, one may be able to dispense with that requirement. In the partial differential Equation (PDE) view, the Laplacian result says that a function is determined by its second-order partial derivatives, whereas our result says that it is determined by its first-order derivatives. In both cases, the solution can be obtained by a “simple” volume integration.

6.2. Helmholtz Theorem

We first state and prove a theorem, which gives one aspect of Helmholtz’s Theorem, requiring weaker assumptions. This aspect of Helmholtz’s Theorem is an existence theorem which shows the existence of a vector field having a prescribed divergence and curl, subject to the condition that the prescribed curl has zero divergence. The other aspect is a decomposition theorem which states that any continuously differentiable vector field can be decomposed into two “components”, one of which is the gradient of a scalar field and the other is the curl of a vector field, and that these “generating” fields can be obtained from the original vector field. Zhou [10] and Gui and Dou [11] have a good discussion of various proofs of the Helmholtz Theorem given in many references.

Theorem 19 (existence of field with specified divergence and curl). If f is a given continuously differentiable scalar function and A is a given continuously differentiable vector function such that $\nabla\cdot A = 0$, then the function W defined by:

$$ W(a,b,c) = -\frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{1}{r^3}\left[f\,\mathbf{r} - \mathbf{r}\times A\right]dV, $$

satisfies:

$$ \nabla\cdot W = f \qquad\text{and}\qquad \nabla\times W = A. $$

Thus, there exists a vector field with a specified divergence value and a specified curl value, provided the curl value has zero divergence.

Proof: For the components of W we have:

W x ( a , b , c ) = 1 4 π R 3 1 r 3 [ x f ( x + a , y + b , z + c ) y A z ( x + a , y + b , z + c ) + z A y ( x + a , y + b , z + c ) ] ,

W y ( a , b , c ) = 1 4 π R 3 1 r 3 [ y f ( x + a , y + b , z + c ) z A x ( x + a , y + b , z + c ) + x A z ( x + a , y + b , z + c ) ] ,

W z ( a , b , c ) = 1 4 π R 3 1 r 3 [ z f ( x + a , y + b , z + c ) x A y ( x + a , y + b , z + c ) + y A x ( x + a , y + b , z + c ) ] .

As above, using the rectangular to spherical-polar transformation, we get:

W x ( a , b , c ) = 1 4 π R s p h ( sin θ cos ϕ f sin θ sin ϕ A z + cos θ A y ) sin θ d r d θ d ϕ ,

W y ( a , b , c ) = 1 4 π R s p h ( sin θ sin ϕ f cos θ A x + sin θ cos ϕ A z ) sin θ d r d θ d ϕ ,

W z ( a , b , c ) = 1 4 π R s p h ( cos θ f sin θ cos ϕ A y + sin θ sin ϕ A x ) sin θ d r d θ d ϕ .

Note that the arguments of the functions f, etc., contain a , b , c as parameters. Carrying out differentiation under the integral sign, and noting that the partial derivatives of the functions with respect to a have the same value as the derivatives with respect to x, we get:

W x a ( a , b , c ) = 1 4 π R s p h [ sin θ cos ϕ f 1 sin θ sin ϕ A z 1 + cos θ A y 1 ] sin θ d r d θ d ϕ , = 1 4 π R 3 1 r 3 ( x f x y A z x + z A y x ) d x d y d z .

Similarly,

W y b ( a , b , c ) = 1 4 π R 3 1 r 3 ( y f y z A x y + x A z y ) d x d y d z

W z c ( a , b , c ) = 1 4 π R 3 1 r 3 ( z f z x A y z + y A x z ) d x d y d z .

Adding the three partial derivatives above, and using Theorem 4 and Theorem 5, we obtain:

$$ \nabla\cdot W = f. $$

Next, we compute $(\nabla\times W)_x$:

( × W ) x = W z b W y c = 1 4 π R s p h [ ( cos θ f 2 sin θ cos ϕ A y 2 + sin θ sin ϕ A x 2 ) ( y sin θ sin ϕ f 3 cos θ A x 3 + sin θ cos ϕ A z 3 ) ] sin θ d r d θ d ϕ

= 1 4 π R 3 1 r 3 [ ( z f y x A y y + y A x y ) ( y f z z A x z + x A z z ) ] d x d y d z = 1 4 π R 3 1 r 3 [ ( x A x x + y A x y + z A x z ) + ( z f y y f z ) x ( A x x + A y y + A z z ) ] d x d y d z = A x

because of the assumption that $\nabla\cdot A = \frac{\partial A_x}{\partial x}+\frac{\partial A_y}{\partial y}+\frac{\partial A_z}{\partial z} = 0$, and using Theorem 4 and Theorem 5 once again.

Remark 19: Note that in the proof above, we have virtually proved the following two results for the two parts of W, namely $W_1 = -\frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{1}{r^3}\,f\,\mathbf{r}\,dV$ and $W_2 = \frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{1}{r^3}\,\mathbf{r}\times A\,dV$:

$$ \nabla\cdot W_1 = f, \qquad \nabla\times W_1 = 0, $$

and:

$$ \nabla\cdot W_2 = 0, \qquad \nabla\times W_2 = A. $$

To prove the decomposition aspect of the Helmholtz Theorem, in view of our new Poisson formula, it suffices to use Results 1 and 2 of the previous section.

We now state:

Theorem 20 (Helmholtz theorem, existence of decomposition). If F is a given continuously differentiable vector function, there exists a scalar function V and a vector function A such that:

$$ F = \nabla V + \nabla\times A, $$

where V and A are given by:

$$ V = -\frac{1}{4\pi}\int \frac{\nabla\cdot F}{r}\,dV, \qquad A = \frac{1}{4\pi}\int \frac{\nabla\times F}{r}\,dV. $$

Obviously, $\nabla V$ has zero curl, and $\nabla\times A$ has zero divergence. Thus, any arbitrary vector field can be decomposed into a zero-curl part and a zero-divergence part.

Proof: This follows from the Results 1 and 2 of the previous section and our formula:

$$ F = -\frac{1}{4\pi}\int_{\mathbb{R}^3}\frac{1}{r^3}\left[(\nabla\cdot F)\,\mathbf{r} - \mathbf{r}\times(\nabla\times F)\right]dV. $$

The function V is usually called the scalar potential function and A the vector potential function generating F. The above two expressions are the ones commonly given. But using our Theorems 14 and 15, the same functions are also given by the following expressions, which do not involve any $\nabla$, that is to say, any differentiation operation.

$$ V = -\frac{1}{4\pi}\int \frac{1}{r^3}\,(\mathbf{r}\cdot F)\,dV, \qquad A = \frac{1}{4\pi}\int \frac{1}{r^3}\,(\mathbf{r}\times F)\,dV. $$

Interestingly, there is a close similarity between these expressions and the following one:

$$ F = \frac{1}{r^2}\left[(\mathbf{r}\cdot F)\,\mathbf{r} - \mathbf{r}\times(\mathbf{r}\times F)\right]. $$

Such a “decomposition” or “representation” of any arbitrary vector in terms of another arbitrary vector follows immediately from the “vector algebra” identity:

$$ \mathbf{u}\times(\mathbf{v}\times\mathbf{w}) = (\mathbf{u}\cdot\mathbf{w})\,\mathbf{v} - (\mathbf{u}\cdot\mathbf{v})\,\mathbf{w}, $$

and is used in Clifford geometric algebra. Perhaps, there is some deeper connection here!
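The vector-algebra identity is trivial to confirm numerically for arbitrary vectors (the numerical values below are arbitrary choices):

```python
# quick numeric instance of the identity F = (1/r^2)[(r.F) r - r x (r x F)]
def dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def cross(u, v):
    return (u[1]*v[2]-u[2]*v[1], u[2]*v[0]-u[0]*v[2], u[0]*v[1]-u[1]*v[0])

r = (1.0, -2.0, 0.5)
F = (0.3, 1.1, -0.7)
rr = dot(r, r)
rxrxF = cross(r, cross(r, F))
recon = tuple((dot(r, F)*r[i] - rxrxF[i])/rr for i in range(3))
# recon reproduces F, since r x (r x F) = (r.F) r - (r.r) F
```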

Corollary 3 (Poisson’s theorem of electrostatics).

$$ \nabla\cdot\left(\int_{\mathbb{R}^3}\frac{1}{r^3}\,f\,\mathbf{r}\,dV\right) = -4\pi f. $$

Remark 20: The Poisson Theorem of Electrostatics is more commonly stated as $\nabla^2 V = -\rho$ (in suitable units), where V denotes the potential function corresponding to the source density function $\rho$. The proofs given in most textbooks use the Divergence Theorem and "suffer" from the defect that they assume that the potential function is twice-differentiable. Gauss's proof was probably the first to show that the potential function is twice-differentiable, though under the assumption that the density function $\rho$ is once-differentiable. Our proof also makes this assumption. Interestingly, Kellogg [15] proves the result without making this assumption, but assumes what is known as a Hölder condition.

7. A New Proof of the Divergence Theorem

In this section, we prove the Divergence Theorem, employing spherical-polar coordinates, which will further illustrate the “power” of these coordinates used along with rectangular coordinates. We need to prove it because we have a new definition of a surface.

Theorem 21 (divergence theorem). If a region V is enclosed by a surface S and $F:\mathbb{R}^3\to\mathbb{R}^3$ is continuously differentiable, then:

$$ \int_V \nabla\cdot F\, dV = \oint_S F\cdot d\mathbf{S}. $$

Proof: As in the classical proofs of the Theorem, we prove the equality of the corresponding three terms on the two sides of the desired equation. In the rectangular coordinates proof, it is usually required that the surface is such that it is “raised” on its projections on each of the three coordinate planes. We do not require this because we have used a different definition of a surface.

We first consider the integral of the part $\frac{\partial F_x}{\partial x}$ of the divergence and the corresponding part on the right-hand side. We use the expression for the surface element $d\mathbf{S}$ as calculated earlier. We denote the function associated with $F_x$ by $G_x$. We will show that:

V r e c t F x x = θ = 0 θ = π ϕ = 0 ϕ = 2 π G x [ S sin ϕ S ϕ S sin θ cos θ cos ϕ S θ + S 2 sin 2 θ cos ϕ ] d θ d ϕ . (52)

We have:

V r e c t F x x = V s p h 1 sin θ [ sin 2 θ cos ϕ G x r + sin θ cos θ cos ϕ 1 r G x θ sin ϕ 1 r G x ϕ ] r 2 sin θ d r d θ d ϕ = V s p h [ sin 2 θ cos ϕ ( r 2 G x r ) + r cos ϕ sin θ cos θ G x θ r sin ϕ G x ϕ ] d r d θ d ϕ = I 1 + I 2 + I 3 ,

where:

I 1 = θ = 0 θ = π ϕ = 0 ϕ = 2 π r = 0 r = S ( θ , ϕ ) sin 2 θ cos ϕ r 2 ( G x r ) d r d θ d ϕ ,

I 2 = θ = 0 θ = π ϕ = 0 ϕ = 2 π r = 0 r = S ( θ , ϕ ) r cos ϕ sin θ cos θ ( G x θ ) d r d θ d ϕ ,

I 3 = θ = 0 θ = π ϕ = 0 ϕ = 2 π r = 0 r = S ( θ , ϕ ) r sin ϕ ( G x ϕ ) d r d θ d ϕ .

Then, looking at the factor $\frac{\partial G_x}{\partial r}$ in $I_1$, we use integration by parts with respect to r first. Note that this is justified although the upper limit of the integral with respect to r is not a constant but $S(\theta,\phi)$, which depends on $\theta$ and $\phi$, which are involved in the other two integrations. We get:

\[
\begin{aligned}
I_1 &= \int_{\theta=0}^{\pi} \int_{\phi=0}^{2\pi} \sin^2\theta \cos\phi \left[ r^2 G_x(r,\theta,\phi) \right]_{r=0}^{r=S(\theta,\phi)} d\theta \, d\phi - \int_{V_{\mathrm{sph}}} \sin^2\theta \cos\phi \, 2r \, G_x \, dr \, d\theta \, d\phi \\
&= \int_{\theta=0}^{\pi} \int_{\phi=0}^{2\pi} \sin^2\theta \cos\phi \, S(\theta,\phi)^2 \, G_x(S(\theta,\phi),\theta,\phi) \, d\theta \, d\phi - 2 \int_{V_{\mathrm{sph}}} r \sin^2\theta \cos\phi \, G_x \, dr \, d\theta \, d\phi .
\end{aligned}
\]

We cannot integrate by parts with respect to $\theta$ in $I_2$ or with respect to $\phi$ in $I_3$, because doing so would involve interchanging the order of integration with respect to $r$, whose upper limit is a function of $\theta$ and $\phi$. So we will use our "twisted" Leibniz rule to transform $I_2$ and $I_3$.
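The "twisted Leibniz" rule referred to here is the classical differentiation under the integral sign with a variable upper limit: $\frac{\partial}{\partial \theta} \int_0^{S(\theta,\phi)} g \, dr = \int_0^{S(\theta,\phi)} \frac{\partial g}{\partial \theta} \, dr + \frac{\partial S}{\partial \theta} \, g(S(\theta,\phi),\theta,\phi)$. A quick symbolic sanity check of this rule (a sketch using sympy; the particular integrand and surface function below are arbitrary illustrative choices, not taken from the paper):

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)

# arbitrary illustrative choices for the integrand and the surface function
g = r * sp.sin(theta) * sp.cos(phi)      # plays the role of r*G_x
S = 2 + sp.cos(theta) * sp.sin(phi)      # plays the role of S(theta, phi)

# left side: d/dtheta of the integral with variable upper limit S(theta, phi)
lhs = sp.diff(sp.integrate(g, (r, 0, S)), theta)

# right side: integral of dg/dtheta plus the boundary term (dS/dtheta)*g(S, theta, phi)
rhs = sp.integrate(sp.diff(g, theta), (r, 0, S)) \
      + sp.diff(S, theta) * g.subs(r, S)

assert sp.simplify(lhs - rhs) == 0
```

Rearranged, this is exactly the substitution used to trade an integral of $\partial G_x/\partial\theta$ (or $\partial G_x/\partial\phi$) for a boundary term plus a derivative of an integral.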

Working on $\partial G_x / \partial \theta$ in $I_2$ first, using our "twisted Leibniz" rule with $\theta$ as the parameter:

\[
\begin{aligned}
I_2 &= \int_{\theta=0}^{\pi} \int_{\phi=0}^{2\pi} \cos\phi \sin\theta \cos\theta \left[ \int_{r=0}^{S(\theta,\phi)} r \, \frac{\partial G_x}{\partial \theta} \, dr \right] d\theta \, d\phi \\
&= \int_{\theta=0}^{\pi} \int_{\phi=0}^{2\pi} \cos\phi \sin\theta \cos\theta \left[ -\frac{\partial S}{\partial \theta} \, S(\theta,\phi) \, G_x(S(\theta,\phi),\theta,\phi) + \frac{\partial}{\partial \theta} \int_0^{S(\theta,\phi)} r \, G_x \, dr \right] d\theta \, d\phi \\
&= -\int_{\theta=0}^{\pi} \int_{\phi=0}^{2\pi} \cos\phi \sin\theta \cos\theta \, \frac{\partial S}{\partial \theta} \, S(\theta,\phi) \, G_x(S(\theta,\phi),\theta,\phi) \, d\theta \, d\phi + \int_{\phi=0}^{2\pi} \cos\phi \left[ \int_{\theta=0}^{\pi} \sin\theta \cos\theta \, \frac{\partial}{\partial \theta}\!\left( \int_0^{S(\theta,\phi)} r \, G_x \, dr \right) d\theta \right] d\phi \\
&= -\int_{\theta=0}^{\pi} \int_{\phi=0}^{2\pi} \cos\phi \sin\theta \cos\theta \, \frac{\partial S}{\partial \theta} \, S(\theta,\phi) \, G_x(S(\theta,\phi),\theta,\phi) \, d\theta \, d\phi - \int_{\phi=0}^{2\pi} \cos\phi \int_{\theta=0}^{\pi} \left[ (\cos^2\theta - \sin^2\theta) \left( \int_0^{S(\theta,\phi)} r \, G_x \, dr \right) \right] d\theta \, d\phi \\
&= -\int_{\theta=0}^{\pi} \int_{\phi=0}^{2\pi} \cos\phi \sin\theta \cos\theta \, \frac{\partial S}{\partial \theta} \, S(\theta,\phi) \, G_x(S(\theta,\phi),\theta,\phi) \, d\theta \, d\phi - \int_{V_{\mathrm{sph}}} \cos\phi \, (\cos^2\theta - \sin^2\theta) \, r \, G_x .
\end{aligned}
\]

Working on I 3 next, using our twisted Leibniz again:

\[
\begin{aligned}
I_3 &= -\int_{\theta=0}^{\pi} \int_{\phi=0}^{2\pi} \sin\phi \left[ \int_{r=0}^{S(\theta,\phi)} r \, \frac{\partial G_x}{\partial \phi} \, dr \right] d\theta \, d\phi \\
&= -\int_{\theta=0}^{\pi} \int_{\phi=0}^{2\pi} \sin\phi \left[ -\frac{\partial S}{\partial \phi} \, S \, G_x + \frac{\partial}{\partial \phi}\!\left( \int_{r=0}^{S(\theta,\phi)} r \, G_x \, dr \right) \right] d\theta \, d\phi \\
&= \int_{\theta=0}^{\pi} \int_{\phi=0}^{2\pi} \sin\phi \, \frac{\partial S}{\partial \phi} \, S \, G_x \, d\theta \, d\phi - \int_{\theta=0}^{\pi} \left[ \int_{\phi=0}^{2\pi} \sin\phi \, \frac{\partial}{\partial \phi}\!\left( \int_{r=0}^{S(\theta,\phi)} r \, G_x \, dr \right) d\phi \right] d\theta \\
&= \int_{\theta=0}^{\pi} \int_{\phi=0}^{2\pi} \sin\phi \, \frac{\partial S}{\partial \phi} \, S \, G_x \, d\theta \, d\phi + \int_{\theta=0}^{\pi} \left[ \int_{\phi=0}^{2\pi} \cos\phi \left( \int_{r=0}^{S(\theta,\phi)} r \, G_x \, dr \right) d\phi \right] d\theta \\
&= \int_{\theta=0}^{\pi} \int_{\phi=0}^{2\pi} \sin\phi \, \frac{\partial S}{\partial \phi} \, S \, G_x \, d\theta \, d\phi + \int_{V_{\mathrm{sph}}} \cos\phi \, r \, G_x .
\end{aligned}
\]

Adding the three new expressions for $I_1$, $I_2$, $I_3$, and noting that the three $\int_{V_{\mathrm{sph}}}$ terms add up to zero, we obtain the desired equality (52).

It remains to patiently verify the equality of the remaining two pairs of corresponding terms on the two sides, or to appeal to "symmetry".
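As a concrete sanity check of the theorem itself (independent of the particular surface parametrization used in the proof), one can compare the two sides numerically for a simple field over the unit ball. A sketch using scipy, with $\mathbf{F} = (x, y^2, z^3)$ chosen purely for illustration:

```python
import numpy as np
from scipy.integrate import tplquad, dblquad

# F = (x, y**2, z**3), so div F = 1 + 2y + 3z**2; region: unit ball, surface: unit sphere
div_F = lambda x, y, z: 1 + 2*y + 3*z**2

# volume integral in spherical-polar coordinates, dV = r**2 sin(theta) dr dtheta dphi
vol = tplquad(
    lambda r, th, ph: div_F(r*np.sin(th)*np.cos(ph),
                            r*np.sin(th)*np.sin(ph),
                            r*np.cos(th)) * r**2 * np.sin(th),
    0, 2*np.pi,          # phi
    0, np.pi,            # theta
    0, 1)[0]             # r

# surface flux: on the unit sphere the outward normal is (x, y, z) itself,
# so F . n = x*x + y**2*y + z**3*z, and dS = sin(theta) dtheta dphi
flux = dblquad(
    lambda th, ph: ((np.sin(th)*np.cos(ph))**2
                    + (np.sin(th)*np.sin(ph))**3
                    + np.cos(th)**4) * np.sin(th),
    0, 2*np.pi, 0, np.pi)[0]

print(vol, flux)   # both equal 32*pi/15
```

Both integrals come out to $32\pi/15$, as the theorem requires.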

8. Application to Maxwell’s Equations

8.1. A Modified Approach to Maxwell’s Equations

We refer to the discussions on “Generalized Biot-Savart Law” in Griffiths and Heald [16], on “Generalized Helmholtz Theorem” in Davis [17] and Woodside [18], and on “Can Maxwell’s equations be obtained from the continuity equation?” by Heras [13].

In our modified approach, we will retain two of the “Maxwell” equations in their usual form, namely:

\[
\nabla \cdot \mathbf{E} = \frac{1}{\varepsilon_0} \rho , \tag{53}
\]

\[
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} , \tag{54}
\]

but take B as defined by Jefimenko’s formula [6], namely:

\[
\mathbf{B} = \frac{\mu_0}{4\pi} \int \left( \frac{[\mathbf{J}] \times \mathbf{r}}{r^3} + \frac{[\partial \mathbf{J} / \partial t] \times \mathbf{r}}{c \, r^2} \right) dV , \tag{55}
\]

where the square brackets mean that their contents are to be evaluated at the retarded time. Here, it is tacitly assumed that $\mathbf{J}$ is a conduction current (Jefimenko prefers $\mathbf{H}$ over $\mathbf{B}$, so he gives the formula for $\mathbf{H}$). Jefimenko's formula for $\mathbf{B}$ is, in principle, open to experimental verification, and can therefore be called a Generalized Biot-Savart "Law" (incidentally, Maxwell in his Treatise does not even mention the Biot-Savart Law; he seems to prefer Ampère's formula $\nabla \times \mathbf{H} = \mathbf{J}$). It satisfies the Maxwell equation:

\[
\nabla \cdot \mathbf{B} = 0 . \tag{56}
\]
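Indeed, (56) can be checked term-by-term: each integrand in Jefimenko's formula is divergence-free with respect to the field point. A symbolic sketch for the leading (Biot-Savart-type) term, with a single constant current element at the origin (an illustrative choice; the retarded terms can be verified in the same way):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
c1, c2, c3 = sp.symbols('c1 c2 c3')     # components of a constant current element J dV'

r_vec = sp.Matrix([x, y, z])            # vector from the source point (origin) to the field point
r = sp.sqrt(x**2 + y**2 + z**2)
J = sp.Matrix([c1, c2, c3])

integrand = J.cross(r_vec) / r**3       # the [J] x r / r^3 term of (55)

div = (sp.diff(integrand[0], x)
       + sp.diff(integrand[1], y)
       + sp.diff(integrand[2], z))
assert sp.simplify(div) == 0            # divergence-free, so the integral has zero divergence
```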

Remark 21: Note that Jefimenko did not define B by his formula but rather derived his formula by using the other two “Maxwell” equations:

\[
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} , \tag{57}
\]

\[
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \, \partial \mathbf{E} / \partial t , \tag{58}
\]

deducing, as is done in textbooks, that $\mathbf{B}$ satisfies a wave equation, and then obtained a solution of the wave equation using what he calls the "Wave Field Theorem". It is not the only solution: indeed, we can add any time-independent solution $\mathbf{L}$ of the equation $\nabla \times \mathbf{L} = \mathbf{0}$ to obtain another solution (incidentally, Griffiths [8] may be the only textbook that highlights Jefimenko's work).

We can then write the solution of the first two Maxwell equations using our recovery formula and Jefimenko’s B as:

\[
\mathbf{E} = -\frac{1}{4\pi} \int_{\mathbb{R}^3} \frac{1}{r^3} \left[ (\nabla \cdot \mathbf{E}) \, \mathbf{r} - \mathbf{r} \times (\nabla \times \mathbf{E}) \right] dV \tag{59}
\]

\[
\phantom{\mathbf{E}} = -\frac{1}{4\pi} \int_{\mathbb{R}^3} \frac{1}{r^3} \left[ \frac{1}{\varepsilon_0} \rho \, \mathbf{r} + \mathbf{r} \times \left( \partial \mathbf{B} / \partial t \right) \right] dV , \tag{60}
\]

where the integrands $\rho$ and $\mathbf{B}$ are not to be retarded. Thus, one need not talk about retarded Coulomb and Faraday fields, since the $\rho$ field and the defined $\mathbf{B}$-field are not retarded in their action in our formula above (this is important because the equations can be extended to material media by simply changing $\varepsilon_0$ to $\varepsilon_0 \varepsilon_r$). The modified $\mathbf{B}$ field, however, is determined not only by $\mathbf{J}$ through retarded action but also by the time-derivative of $\mathbf{J}$ through retarded action (the Biot-Savart Law has only $\mathbf{J}$). Note that our formula for $\mathbf{E}$ does not involve any self-reference, since $\mathbf{B}$ is defined explicitly by Jefimenko's formula.
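The recovery formula is easy to test numerically for a field that vanishes at infinity. The sketch below uses the irrotational field $\mathbf{E} = -\nabla e^{-\rho^2} = 2 e^{-\rho^2} (x, y, z)$ (an illustrative choice), so only the divergence term contributes; integrating in spherical-polar coordinates centred on the evaluation point cancels the kernel's singularity exactly. Note that with $\mathbf{r}$ taken from the evaluation point $(a,b,c)$ to the integration point $(x,y,z)$, the recovered component is $-\frac{1}{4\pi} \int (\nabla \cdot \mathbf{E}) \, r_x / r^3 \, dV$:

```python
import numpy as np

# E = -grad(exp(-rho^2)) = 2 exp(-rho^2) (x, y, z), for which
# div E = 2 exp(-rho^2) (3 - 2 rho^2) and curl E = 0.
def divE(x, y, z):
    rho2 = x**2 + y**2 + z**2
    return 2.0 * np.exp(-rho2) * (3.0 - 2.0 * rho2)

a, b, c = 0.3, 0.2, 0.1                       # evaluation point (a, b, c)

# spherical-polar coordinates centred at (a, b, c); the 1/r^2 kernel is
# cancelled exactly by the r^2 of the volume element, so the integrand is smooth
dr, dth, dph = 0.05, np.pi/60, 2*np.pi/120
r  = (np.arange(120) + 0.5) * dr              # midpoint grids, r up to 6
th = (np.arange(60) + 0.5) * dth
ph = (np.arange(120) + 0.5) * dph
R, TH, PH = np.meshgrid(r, th, ph, indexing='ij')

nx = np.sin(TH) * np.cos(PH)                  # unit vector from (a,b,c) to (x,y,z)
ny = np.sin(TH) * np.sin(PH)
nz = np.cos(TH)

integrand = divE(a + R*nx, b + R*ny, c + R*nz) * nx * np.sin(TH)
Ex_numeric = -integrand.sum() * dr * dth * dph / (4*np.pi)
Ex_exact = 2 * np.exp(-(a**2 + b**2 + c**2)) * a

assert abs(Ex_numeric - Ex_exact) < 0.02 * abs(Ex_exact)
```

With the grids above, the numerical value agrees with the exact $E_x = 2 e^{-0.14} \times 0.3$ to well under a percent.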

We finally show that our E given by the formula above with Jefimenko’s B satisfies Maxwell’s fourth equation.

Since $\nabla \cdot (\nabla \times \mathbf{B}) = 0$, using our recovery formula for $\nabla \times \mathbf{B}$, we get:

\[
\nabla \times \mathbf{B} = \frac{1}{4\pi} \int \frac{1}{r^3} \left[ \mathbf{r} \times \left( \mu_0 \nabla \times \mathbf{J} - \frac{1}{c^2} \frac{\partial^2 \mathbf{B}}{\partial t^2} \right) \right] dV . \tag{61}
\]

Using our recovery formula for J:

\[
\mathbf{J} = -\frac{1}{4\pi} \int \left( \frac{1}{r^3} (\nabla \cdot \mathbf{J}) \, \mathbf{r} - \frac{1}{r^3} \, \mathbf{r} \times (\nabla \times \mathbf{J}) \right) dV \tag{62}
\]

and since $\nabla \cdot \mathbf{J} = -\partial \rho / \partial t$, differentiating our solution for $\mathbf{E}$ above and comparing, we obtain Maxwell's fourth equation. Of course, we assume the continuity equation, $\partial \rho / \partial t + \nabla \cdot \mathbf{J} = 0$, to hold, so that $\rho$ and $\mathbf{J}$ are not independent sources for the fields (compare with Heras's [13] derivation using $\delta$-functions). Thus, if we accept the "action-at-a-distance-with-delay-proportional-to-distance" implied by the formula for $\mathbf{B}$, we have the standard "field" description for both $\mathbf{B}$ and $\mathbf{E}$.

This is our modified approach, which is, of course, un-Maxwellian because it does not invoke the displacement current. But now, a very surprising fact! We will show that $\mathbf{B}$ as defined by Jefimenko's formula satisfies a wave equation without invoking $\mathbf{E}$ at all and without invoking the equation of continuity. This possibility has perhaps been missed because of concern about the displacement current.

We first calculate $\nabla \times \mathbf{B}$ (note that all the integrands are taken with retarded arguments):

\[
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} - \frac{\mu_0}{4\pi} \int \left( \frac{1}{r^3} [\nabla \cdot \mathbf{J}] \, \mathbf{r} + \frac{1}{c} \frac{1}{r^2} \left[ \frac{\partial (\nabla \cdot \mathbf{J})}{\partial t} \right] \mathbf{r} + \frac{1}{c^2} \frac{1}{r} \left[ \frac{\partial^2 \mathbf{J}}{\partial t^2} \right] \right) dV . \tag{63}
\]

But now we apply $\nabla \times$ again to get:

\[
\nabla \times \nabla \times \mathbf{B} = \mu_0 \nabla \times \mathbf{J} - \frac{\mu_0}{4\pi} \int \left( 0 + 0 - \frac{1}{c^2} \left[ \frac{1}{r^3} \left( \mathbf{r} \times \frac{\partial^2 [\mathbf{J}]}{\partial t^2} \right) + \frac{1}{c} \frac{\partial}{\partial t} \left( \frac{1}{r^2} \, \mathbf{r} \times \frac{\partial^2 [\mathbf{J}]}{\partial t^2} \right) \right] \right) dV \tag{64}
\]

\[
\phantom{\nabla \times \nabla \times \mathbf{B}} = \mu_0 \nabla \times \mathbf{J} - \frac{1}{c^2} \frac{\partial^2 \mathbf{B}}{\partial t^2} , \tag{65}
\]

so, since $\nabla \cdot \mathbf{B} = 0$ and hence $\nabla \times \nabla \times \mathbf{B} = -\nabla^2 \mathbf{B}$, we have:

\[
\nabla^2 \mathbf{B} = -\mu_0 \nabla \times \mathbf{J} + \frac{1}{c^2} \frac{\partial^2 \mathbf{B}}{\partial t^2} , \tag{66}
\]

the wave equation.
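The retarded structure of (55) is exactly what the wave equation demands: away from the sources, each spherical-wave component $f(t - r/c)/r$ satisfies the homogeneous equation. A symbolic sketch with sympy, for an arbitrary profile $f$ (valid for $r \neq 0$):

```python
import sympy as sp

x, y, z, t, c = sp.symbols('x y z t c', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
f = sp.Function('f')

u = f(t - r/c) / r                      # retarded spherical wave, r != 0

# check that laplacian(u) = (1/c^2) * d^2 u / dt^2
lap = sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(u, z, 2)
assert sp.simplify(lap - sp.diff(u, t, 2) / c**2) == 0
```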

8.2. Some Remarks

First, a remark on the mutuality of forces (Newton's Third Law). It is usually pointed out that the Biot-Savart (actually, Grassmann) law for the force exerted by one "current element" on another does not satisfy Newton's Third Law of Motion. Jefimenko's formula for $\mathbf{E}$ shows that moving charges ($\partial \rho / \partial t$) and accelerated charges ($\partial \mathbf{J} / \partial t$) exert a force on a stationary charge in addition to the Coulomb force. If mutuality is to hold, a stationary charge $\rho$ should exert a force on moving and accelerated charges, in addition to the Coulomb force.

Finally, some closing remarks on "fields versus action-at-a-distance", the nature of "sources", and "causality" are in order. The "causes" of the fields $\mathbf{E}$ and $\mathbf{B}$ are $\rho$ and $\mathbf{J}$, subject to the Continuity Equation, in the sense that they determine the fields uniquely, which really means that we can calculate them. But they cannot be chosen at will, because they are not only subject to practical limitations but also limited by the "fact" that they change under the action of the fields that they themselves produce collectively. Thus, we assume that $\mathbf{J}$ and $\mathbf{E}$ are related depending on the medium, whether a conductor or a dielectric. The fields act locally, as evidenced by the Lorentz formula, but they are not caused or produced locally. Thus, even in the electrostatic case, although the equation $\nabla \cdot \mathbf{E} = \frac{1}{\varepsilon_0} \rho$ holds locally, i.e., at each "point" of "space" and at each instant of time, $\rho$ at a point and a time-instant does not determine $\mathbf{E}$ at that point and time-instant. The set of all the values of $\rho$ at all the points of space collectively determines $\mathbf{E}$, and this involves action-at-a-distance, perhaps even with a time-retardation. Strictly speaking, even $\mathbf{E}$ does not "determine" $\nabla \cdot \mathbf{E}$ locally; only $\mathbf{E}$ over a region of space, no matter how small, determines $\nabla \cdot \mathbf{E}$. Thus, what exists and happens in each and every "region" of "space" affects what happens everywhere.

9. Conclusion

In the present contribution, a number of new formulas for 3-dimensional vector fields have been derived and new proofs given for some classical results, such as the Helmholtz Theorem and the Divergence Theorem. A new definition of a surface is given and used to derive a new result. Orthogonality of the irrotational + solenoidal decomposition is proved. As an application, a new approach to Maxwell’s equations is suggested.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Stokes, G.G. (1849) On the Dynamical Theory of Diffraction. Transactions of the Cambridge Philosophical Society, 9, 1-62.
[2] Blumenthal, O. (1905) Über die Zerlegung unendlicher Vektorfelder. Mathematische Annalen, 61, 235-250.
https://doi.org/10.1007/BF01457564
[3] Sprössig, W. (2010) On Helmholtz Decompositions and Their Generalizations—An Overview. Mathematical Methods in the Applied Sciences, 33, 374-383.
https://doi.org/10.1002/mma.1212
[4] Bhatia, H., Norgard, G., Pascucci, V. and Bremer, P.T. (2013) The Helmholtz-Hodge Decomposition—A Survey. IEEE Transactions on Visualization and Computer Graphics, 19, 1386-1404.
https://doi.org/10.1109/TVCG.2012.316
[5] Kustepeli, A. (2016) On the Helmholtz Theorem and Its Generalization for Multi-Layers. Electromagnetics, 36, 135-148.
https://doi.org/10.1080/02726343.2016.1149755
[6] Jefimenko, O.D. (1989) Electricity and Magnetism. 2nd Edition, Electret Scientific Company, Star City, 42.
[7] Panofsky, W.K.H. and Phillips, M. (1962) Classical Electricity and Magnetism. 2nd Edition, Addison-Wesley Publishing Co., Reading, MA, 2-7.
[8] Griffiths, D.J. (1999) Introduction to Electrodynamics. 3rd Edition, Pearson Education (Singapore) Ltd., Delhi, 573-575.
[9] Edgar, R.S. (1989) Field Analysis and Potential Theory. Springer, Berlin, 342-346.
https://doi.org/10.1007/978-3-642-83765-4
[10] Zhou, X.L. (2007) On Helmholtz’s Theorem and Its Interpretations. Journal of Electromagnetic Waves and Application, 21, 471-483.
https://doi.org/10.1163/156939307779367314
[11] Gui, Y.F. and Dou, W.B. (2007) A Rigorous and Completed Statement on Helmholtz Theorem. Progress in Electromagnetic Research, 69, 287-304.
[12] Gauss, C.F. (1840) General Propositions Relating to Attractive and Repulsive Forces Acting in the Inverse Ratio of the Square of the Distance. In: Taylor, R., Ed., Scientific Memoirs: Selected from the Transactions of Foreign Academies of Science and from Foreign Journals, Vol. III, R. and J. E. Taylor, New York, 153-196.
[13] Heras, J.A. (2007) Can Maxwell’s Equations Be Obtained from the Continuity Equation? American Journal of Physics, 75, 652-657.
https://doi.org/10.1119/1.2739570
[14] Edwards, C.H. (1973) Advanced Calculus of Several Variables. Academic Press Inc., New York, 252 p.
[15] Kellogg, O.D. (1954) Foundations of Potential Theory. Dover Publications Inc., New York.
[16] Griffiths, D.J. and Heald, M.A. (1991) Time-Dependent Generalizations of the Biot-Savart and Coulomb Laws. American Journal of Physics, 59, 111-117.
https://doi.org/10.1119/1.16589
[17] Davis, A.M. (2006) A Generalized Helmholtz Theorem for Time-Varying Vector Fields. American Journal of Physics, 74, 72-76.
https://doi.org/10.1119/1.2121756
[18] Woodside, D.A. (2009) Three-Vector and Scalar Field Identities and Uniqueness Theorems in Euclidean and Minkowski Spaces. American Journal of Physics, 77, 438-446.
https://doi.org/10.1119/1.3076300
