
Directional derivative

In mathematics, the directional derivative of a multivariate differentiable function along a given vector v at a given point x intuitively represents the instantaneous rate of change of the function, moving through x with a velocity specified by v. It therefore generalizes the notion of a partial derivative, in which the rate of change is taken along one of the coordinate curves, all other coordinates being constant.

The directional derivative is a special case of the Gâteaux derivative.


File:Directional derivative contour plot.svg
A contour plot of <math>f(x, y)=x^2 + y^2</math>, showing the gradient vector in blue and the unit vector <math>\bold{u}</math> scaled by the directional derivative in the direction of <math>\bold{u}</math> in orange. The gradient vector is longer because the gradient points in the direction of the greatest rate of increase of the function.

Generally applicable definition

The directional derivative of a scalar function

<math>f(\bold{x}) = f(x_1, x_2, \ldots, x_n)</math>

along a vector

<math>\bold{v} = (v_1, \ldots, v_n)</math>

is the function defined by the limit[1]

<math>\nabla_{\bold{v}}{f}(\bold{x}) = \lim_{h \rightarrow 0}{\frac{f(\bold{x} + h\bold{v}) - f(\bold{x})}{h}}.</math>

If the function f is differentiable at x, then the directional derivative exists along any vector v, and one has

<math>\nabla_{\bold{v}}{f}(\bold{x}) = \nabla f(\bold{x}) \cdot \bold{v}</math>

where the <math>\nabla</math> on the right denotes the gradient and <math>\cdot</math> is the dot product.[2] Intuitively, the directional derivative of f at a point x represents the rate of change of f, with respect to time, when moving through x at the velocity specified by v.
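As a quick numerical sanity check of this identity, here is a minimal Python sketch; the function, point, and direction are illustrative choices, not from the article:

```python
# Check ∇_v f(x) = ∇f(x) · v numerically for f(x, y) = x^2 + y^2.

def f(x, y):
    return x**2 + y**2

def directional_derivative(f, x, y, vx, vy, h=1e-6):
    """Limit definition: [f(x + h v) - f(x)] / h for small h."""
    return (f(x + h * vx, y + h * vy) - f(x, y)) / h

# Gradient of f is (2x, 2y); at (1, 2) along v = (3, 4):
x, y, vx, vy = 1.0, 2.0, 3.0, 4.0
grad_dot_v = 2 * x * vx + 2 * y * vy               # = 22.0
approx = directional_derivative(f, x, y, vx, vy)
print(grad_dot_v, round(approx, 3))                # 22.0 22.0
```

The forward difference agrees with the gradient formula to within O(h).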

Variation using only direction of vector

File:Geometrical interpretation of a directional derivative.svg
The angle α between the tangent A and the horizontal is greatest when the cutting plane contains the direction of the gradient.

Some authors define the directional derivative to be with respect to the vector v after normalization, thus ignoring its magnitude. In this case, one has

<math>\nabla_{\bold{v}}{f}(\bold{x}) = \lim_{h \rightarrow 0}{\frac{f(\bold{x} + h\bold{v}) - f(\bold{x})}{h|\bold{v}|}},</math>

or in case f is differentiable at x,

<math>\nabla_{\bold{v}}{f}(\bold{x}) = \nabla f(\bold{x}) \cdot \frac{\bold{v}}{|\bold{v}|} .</math>

This definition has some disadvantages: it applies only when the norm of a vector is defined and nonzero. It is incompatible with notation used in some other areas of mathematics, physics and engineering, but it is the natural choice when the quantity of interest is the rate of increase of f per unit distance.
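A sketch of this normalized variant, reusing the illustrative function from before; dividing by |v| makes the result the rate of increase of f per unit distance, independent of the length of v:

```python
import math

def f(x, y):
    return x**2 + y**2

def unit_directional_derivative(f, x, y, vx, vy, h=1e-6):
    """Normalized variant: rate of increase of f per unit distance along v."""
    norm = math.hypot(vx, vy)
    if norm == 0:
        raise ValueError("direction vector must be nonzero")  # undefined case
    return (f(x + h * vx, y + h * vy) - f(x, y)) / (h * norm)

# At (1, 2) along v = (3, 4): ∇f · v / |v| = 22 / 5 = 4.4
print(round(unit_directional_derivative(f, 1.0, 2.0, 3.0, 4.0), 3))  # 4.4
```

Note the explicit guard for a zero vector, reflecting the limitation discussed above.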

Restriction to unit vector

Some authors restrict the definition of the directional derivative to being with respect to a unit vector. With this restriction, the two definitions above become the same.


Notation

Directional derivatives can also be denoted by:

<math>\nabla_{\bold{v}}{f}(\bold{x}) \sim \frac{\partial{f(\bold{x})}}{\partial{v}} \sim f'_\mathbf{v}(\bold{x}) \sim D_\bold{v}f(\bold{x}) \sim \mathbf{v}\cdot{\nabla f(\bold{x})} \sim \bold{v}\cdot \frac{\partial f(\bold{x})}{\partial\bold{x}} </math>

where v parameterizes a curve to which v is tangent and which determines its magnitude.


Properties

Many of the familiar properties of the ordinary derivative hold for the directional derivative. These include, for any functions f and g defined in a neighborhood of, and differentiable at, p:

  1. The sum rule:

<math>\nabla_{\bold{v}} (f + g) = \nabla_{\bold{v}} f + \nabla_{\bold{v}} g</math>

  2. The constant factor rule: For any constant c,

<math>\nabla_{\bold{v}} (cf) = c\nabla_{\bold{v}} f</math>

  3. The product rule (or Leibniz rule):

<math>\nabla_{\bold{v}} (fg) = g\nabla_{\bold{v}} f + f\nabla_{\bold{v}} g</math>

  4. The chain rule: If g is differentiable at p and h is differentiable at g(p), then

<math>\nabla_{\bold{v}}(h\circ g)(\bold{p}) = h'(g(\bold{p})) \nabla_{\bold{v}} g (\bold{p})</math>
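These rules can be verified numerically. The following Python sketch checks the product rule at an arbitrarily chosen point and direction (f, g, p, and v are illustrative, not from the article):

```python
# Numerical check of the product rule ∇_v(fg) = g ∇_v f + f ∇_v g
# at the illustrative point p = (2, 3) along v = (1, 1),
# with f(x, y) = x*y and g(x, y) = x + y.

def dir_deriv(func, p, v, h=1e-6):
    (x, y), (vx, vy) = p, v
    return (func(x + h * vx, y + h * vy) - func(x, y)) / h

f = lambda x, y: x * y
g = lambda x, y: x + y
fg = lambda x, y: f(x, y) * g(x, y)

p, v = (2.0, 3.0), (1.0, 1.0)
lhs = dir_deriv(fg, p, v)                                   # ∇_v(fg)
rhs = g(*p) * dir_deriv(f, p, v) + f(*p) * dir_deriv(g, p, v)
print(round(lhs, 3), round(rhs, 3))                         # 37.0 37.0
```

Both sides agree to within the finite-difference error.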


In differential geometry

Let M be a differentiable manifold and p a point of M. Suppose that f is a function defined in a neighborhood of p, and differentiable at p. If v is a tangent vector to M at p, then the directional derivative of f along v, denoted variously as <math>\nabla_{\bold{v}} f(\bold{p})</math> (see covariant derivative), <math>L_{\bold{v}} f(\bold{p})</math> (see Lie derivative), or <math>{\bold{v}}_{\bold{p}}(f)</math> (see Tangent space §Definition via derivations), can be defined as follows. Let γ : [−1,1] → M be a differentiable curve with γ(0) = p and γ′(0) = v. Then the directional derivative is defined by

<math>\nabla_{\bold{v}} f(\bold{p}) = \left.\frac{d}{d\tau} f\circ\gamma(\tau)\right|_{\tau=0}</math>

This definition can be proven independent of the choice of γ, provided γ is selected in the prescribed manner so that γ′(0) = v.

The Lie derivative

The Lie derivative of a vector field <math>\scriptstyle W^\mu(x)</math> along a vector field <math>\scriptstyle V^\mu(x)</math> is given by the difference of two directional derivatives (with vanishing torsion):

<math>\mathcal{L}_V W^\mu=(V\cdot\nabla) W^\mu-(W\cdot\nabla) V^\mu</math>

In particular, for a scalar field <math>\scriptstyle \phi(x)</math>, the Lie derivative reduces to the standard directional derivative:

<math>\mathcal{L}_V \phi=(V\cdot\nabla) \phi</math>

The Riemann tensor

Directional derivatives are often used in introductory derivations of the Riemann curvature tensor. Consider a curved rectangle with an infinitesimal vector δ along one edge and δ' along the other. We translate a covector S along δ then δ' and then subtract the translation along δ' and then δ. Instead of building the directional derivative using partial derivatives, we use the covariant derivative. The translation operator for δ is thus

<math>1+\sum_\nu \delta^\nu D_\nu=1+\delta\cdot D</math>

and for δ'

<math>1+\sum_\mu \delta'^\mu D_\mu=1+\delta'\cdot D</math>

The difference between the two paths is then

<math>(1+\delta'\cdot D)(1+\delta\cdot D)S_\rho-(1+\delta\cdot D)(1+\delta'\cdot D)S_\rho=\sum_{\mu,\nu}\delta'^\mu \delta^\nu[D_\mu,D_\nu]S_\rho</math>

It can be argued[3] that the noncommutativity of the covariant derivatives measures the curvature of the manifold:

<math>[D_\mu,D_\nu]S_\rho=\pm \sum_\sigma R^\sigma_{\rho\mu\nu}S_\sigma</math>

with R the Riemann tensor of course and the sign depending on the sign convention of the author.

In group theory


In the Poincaré algebra, we can define an infinitesimal translation operator P as

<math>\mathbf{P}=i\nabla</math>

(the i ensures that P is a self-adjoint operator). For a finite displacement λ, the unitary Hilbert space representation for translations is[4]

<math>U(\boldsymbol{\lambda})=\exp\left(-i\boldsymbol{\lambda}\cdot\mathbf{P}\right)</math>

By using the above definition of the infinitesimal translation operator, we see that the finite translation operator is an exponentiated directional derivative:

<math>U(\boldsymbol{\lambda})=\exp\left(\boldsymbol{\lambda}\cdot\nabla\right)</math>
This is a translation operator in the sense that it acts on multivariable functions f(x) as

<math>U(\boldsymbol{\lambda}) f(\mathbf{x})=\exp\left(\boldsymbol{\lambda}\cdot\nabla\right) f(\mathbf{x})=f(\mathbf{x}+\boldsymbol{\lambda}) </math>
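The exponentiated-derivative identity can be illustrated in one dimension: for a cubic polynomial the derivative series terminates, so a truncated expansion of exp(λ d/dx) reproduces f(x + λ) exactly (a sketch with hand-coded derivatives, not from the article):

```python
from math import factorial

# For f(x) = x^3 the series of derivatives terminates, so a truncated
# exp(λ d/dx) reproduces the translated value f(x + λ).
def f(x):
    return x**3

# Hand-coded derivatives of x^3: f, f', f'', f''' (all higher ones vanish).
derivs = [lambda x: x**3, lambda x: 3 * x**2, lambda x: 6 * x, lambda x: 6.0]

def translate(x, lam):
    """Apply sum_n (lam^n / n!) (d^n f/dx^n)(x), i.e. exp(lam d/dx) f."""
    return sum(lam**n / factorial(n) * d(x) for n, d in enumerate(derivs))

x, lam = 2.0, 0.5
print(translate(x, lam), f(x + lam))  # both ≈ 15.625
```

The agreement is exact (up to floating-point rounding) because the Taylor series of a polynomial is finite.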


The rotation operator also contains a directional derivative. The rotation operator for an angle θ, i.e. by an amount θ = |θ| about an axis parallel to <math>\hat{\theta}=\boldsymbol{\theta}/\theta</math>, is

<math>U(R(\boldsymbol{\theta}))=\exp(-i\boldsymbol{\theta}\cdot\mathbf{L})</math>

Here L is the vector operator that generates SO(3):

<math>\mathbf{L}=\begin{pmatrix} 0& 0 & 0\\ 0& 0 & 1\\ 0& -1 & 0 \end{pmatrix}\mathbf{i}+\begin{pmatrix} 0 &0 & -1\\ 0& 0 &0 \\ 1 & 0 & 0 \end{pmatrix}\mathbf{j}+\begin{pmatrix} 0&1 &0 \\ -1&0 &0 \\ 0 & 0 & 0 \end{pmatrix}\mathbf{k}</math>

It may be shown geometrically that an infinitesimal right-handed rotation changes the position vector x by

<math>\mathbf{x}\rightarrow \mathbf{x}-\delta\boldsymbol{\theta}\times\mathbf{x}</math>

So we would expect under infinitesimal rotation:

<math>U(R(\delta\boldsymbol{\theta}))f(\mathbf{x})=f(\mathbf{x}-\delta\boldsymbol{\theta}\times\mathbf{x})=f(\mathbf{x})-(\delta\boldsymbol{\theta}\times\mathbf{x})\cdot\nabla f</math>

It follows that

<math>U(R(\delta\boldsymbol{\theta}))=1-(\delta\boldsymbol{\theta}\times\mathbf{x})\cdot\nabla</math>

Following the same exponentiation procedure as above, we arrive at the rotation operator in the position basis, which is an exponentiated directional derivative:[8]

<math>U(R(\boldsymbol{\theta}))=\exp(-(\boldsymbol{\theta}\times\mathbf{x})\cdot\nabla)</math>
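The first-order rotation formula above can be checked numerically; this sketch uses the illustrative function f(x, y, z) = xy + z and a small rotation about the z-axis:

```python
# First-order check of f(x - δθ × x) ≈ f(x) - (δθ × x)·∇f
# for the illustrative f(x, y, z) = x*y + z and δθ = (0, 0, 1e-5).

def f(p):
    x, y, z = p
    return x * y + z

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

p = (1.0, 2.0, 3.0)
dtheta = (0.0, 0.0, 1e-5)                  # small rotation about the z-axis
c = cross(dtheta, p)                       # δθ × x

lhs = f(tuple(pi - ci for pi, ci in zip(p, c)))
grad = (p[1], p[0], 1.0)                   # ∇f = (y, x, 1)
rhs = f(p) - sum(ci * gi for ci, gi in zip(c, grad))
print(abs(lhs - rhs) < 1e-8)               # True (agreement to first order)
```

The two sides differ only by terms quadratic in the infinitesimal rotation.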
Normal derivative

A normal derivative is a directional derivative taken in the direction normal (that is, orthogonal) to some surface in space, or more generally along a normal vector field orthogonal to some hypersurface. See for example Neumann boundary condition. If the normal direction is denoted by <math>\bold{n}</math>, then the directional derivative of a function f is sometimes denoted as <math>\frac{ \partial f}{\partial n}</math>. In other notations

<math>\frac{ \partial f}{\partial n} = \nabla f(\bold{x}) \cdot \bold{n} = \nabla_{\bold{n}}{f}(\bold{x}) = \frac{\partial f}{\partial \bold{x}}\cdot\bold{n} = Df(\bold{x})[\bold{n}] </math>
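For instance, on a sphere the outward normal at x is n = x/|x|; for the illustrative choice f = x² + y² + z² one has ∂f/∂n = ∇f·n = 2|x|. A small Python sketch (function and point are illustrative):

```python
import math

# ∂f/∂n on a sphere: the outward normal at x is n = x/|x|.
# For f = x^2 + y^2 + z^2 (illustrative), ∇f = 2x, so ∂f/∂n = 2|x|.

def f(p):
    return sum(c * c for c in p)

def normal_derivative(f, p, h=1e-6):
    r = math.sqrt(sum(c * c for c in p))
    n = [c / r for c in p]                       # unit outward normal
    shifted = [c + h * nc for c, nc in zip(p, n)]
    return (f(shifted) - f(p)) / h

p = (1.0, 2.0, 2.0)                              # |p| = 3
print(round(normal_derivative(f, p), 3))         # 6.0
```

This is the quantity prescribed by a Neumann boundary condition.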

In the continuum mechanics of solids

Several important results in continuum mechanics require the derivatives of vectors with respect to vectors and of tensors with respect to vectors and tensors.[9] The directional derivative provides a systematic way of finding these derivatives.

The definitions of directional derivatives for various situations are given below. It is assumed that the functions are sufficiently smooth that derivatives can be taken.

Derivatives of scalar valued functions of vectors

Let <math>f(\mathbf{v})</math> be a real valued function of the vector <math>\mathbf{v}</math>. Then the derivative of <math>f(\mathbf{v})</math> with respect to <math>\mathbf{v}</math> (or at <math>\mathbf{v}</math>) in the direction <math>\mathbf{u}</math> is defined as

<math>\frac{\partial f}{\partial \mathbf{v}}\cdot\mathbf{u} = Df(\mathbf{v})[\mathbf{u}] = \left[\frac{d}{d\alpha}~f(\mathbf{v} + \alpha~\mathbf{u})\right]_{\alpha = 0}</math>

for all vectors <math>\mathbf{u}</math>.
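This definition can be evaluated directly by differentiating in α; the sketch below uses a central difference and the illustrative choice f(v) = v·v, for which the exact answer is Df(v)[u] = 2 v·u:

```python
# Central-difference evaluation of Df(v)[u] = d/dα f(v + α u)|_{α=0},
# with the illustrative choice f(v) = v·v, for which Df(v)[u] = 2 v·u.

def f(v):
    return sum(c * c for c in v)

def Df(f, v, u, h=1e-6):
    plus = [vi + h * ui for vi, ui in zip(v, u)]
    minus = [vi - h * ui for vi, ui in zip(v, u)]
    return (f(plus) - f(minus)) / (2 * h)

v, u = [1.0, 2.0, 3.0], [0.5, -1.0, 2.0]
exact = 2 * sum(vi * ui for vi, ui in zip(v, u))  # 2 * 4.5 = 9.0
print(round(Df(f, v, u), 6), exact)               # 9.0 9.0
```

For a quadratic f the central difference in α is exact up to rounding.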


Properties:

  1. If <math>f(\mathbf{v}) = f_1(\mathbf{v}) + f_2(\mathbf{v})</math> then <math>\frac{\partial f}{\partial \mathbf{v}}\cdot\mathbf{u} = \left(\frac{\partial f_1}{\partial \mathbf{v}} + \frac{\partial f_2}{\partial \mathbf{v}}\right)\cdot\mathbf{u}</math>

  2. If <math>f(\mathbf{v}) = f_1(\mathbf{v})~f_2(\mathbf{v})</math> then <math>\frac{\partial f}{\partial \mathbf{v}}\cdot\mathbf{u} = \left(\frac{\partial f_1}{\partial \mathbf{v}}\cdot\mathbf{u}\right)~f_2(\mathbf{v}) + f_1(\mathbf{v})~\left(\frac{\partial f_2}{\partial \mathbf{v}}\cdot\mathbf{u}\right)</math>

  3. If <math>f(\mathbf{v}) = f_1(f_2(\mathbf{v}))</math> then <math>\frac{\partial f}{\partial \mathbf{v}}\cdot\mathbf{u} = \frac{\partial f_1}{\partial f_2}~\frac{\partial f_2}{\partial \mathbf{v}}\cdot\mathbf{u}</math>


Derivatives of vector valued functions of vectors

Let <math>\mathbf{f}(\mathbf{v})</math> be a vector valued function of the vector <math>\mathbf{v}</math>. Then the derivative of <math>\mathbf{f}(\mathbf{v})</math> with respect to <math>\mathbf{v}</math> (or at <math>\mathbf{v}</math>) in the direction <math>\mathbf{u}</math> is the second-order tensor defined as

<math>\frac{\partial \mathbf{f}}{\partial \mathbf{v}}\cdot\mathbf{u} = D\mathbf{f}(\mathbf{v})[\mathbf{u}] = \left[\frac{d}{d\alpha}~\mathbf{f}(\mathbf{v} + \alpha~\mathbf{u})\right]_{\alpha = 0}</math>

for all vectors <math>\mathbf{u}</math>.


Properties:

  1. If <math>\mathbf{f}(\mathbf{v}) = \mathbf{f}_1(\mathbf{v}) + \mathbf{f}_2(\mathbf{v})</math> then <math>\frac{\partial \mathbf{f}}{\partial \mathbf{v}}\cdot\mathbf{u} = \left(\frac{\partial \mathbf{f}_1}{\partial \mathbf{v}} + \frac{\partial \mathbf{f}_2}{\partial \mathbf{v}}\right)\cdot\mathbf{u}</math>

  2. If <math>\mathbf{f}(\mathbf{v}) = \mathbf{f}_1(\mathbf{v})\times\mathbf{f}_2(\mathbf{v})</math> then <math>\frac{\partial \mathbf{f}}{\partial \mathbf{v}}\cdot\mathbf{u} = \left(\frac{\partial \mathbf{f}_1}{\partial \mathbf{v}}\cdot\mathbf{u}\right)\times\mathbf{f}_2(\mathbf{v}) + \mathbf{f}_1(\mathbf{v})\times\left(\frac{\partial \mathbf{f}_2}{\partial \mathbf{v}}\cdot\mathbf{u}\right)</math>

  3. If <math>\mathbf{f}(\mathbf{v}) = \mathbf{f}_1(\mathbf{f}_2(\mathbf{v}))</math> then <math>\frac{\partial \mathbf{f}}{\partial \mathbf{v}}\cdot\mathbf{u} = \frac{\partial \mathbf{f}_1}{\partial \mathbf{f}_2}\cdot\left(\frac{\partial \mathbf{f}_2}{\partial \mathbf{v}}\cdot\mathbf{u}\right)</math>


Derivatives of scalar valued functions of second-order tensors

Let <math>f(\boldsymbol{S})</math> be a real valued function of the second order tensor <math>\boldsymbol{S}</math>. Then the derivative of <math>f(\boldsymbol{S})</math> with respect to <math>\boldsymbol{S}</math> (or at <math>\boldsymbol{S}</math>) in the direction <math>\boldsymbol{T}</math> is the second order tensor defined as

<math>\frac{\partial f}{\partial \boldsymbol{S}}:\boldsymbol{T} = Df(\boldsymbol{S})[\boldsymbol{T}] = \left[\frac{d}{d\alpha}~f(\boldsymbol{S} + \alpha\boldsymbol{T})\right]_{\alpha = 0}</math>

for all second order tensors <math>\boldsymbol{T}</math>.
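A classic instance of this definition (a known identity, not one of the article's listed properties) is Jacobi's formula: for f(S) = det(S) on invertible S, Df(S)[T] = det(S) tr(S⁻¹T). The sketch checks this for a 2×2 example:

```python
# Jacobi's formula (a known identity, used here as an example):
# for f(S) = det(S), Df(S)[T] = det(S) * tr(S^-1 T).  Checked for 2×2.

def det2(S):
    return S[0][0] * S[1][1] - S[0][1] * S[1][0]

def Df_det(S, T, h=1e-6):
    """d/dα det(S + α T) at α = 0, by central difference."""
    plus = [[S[i][j] + h * T[i][j] for j in range(2)] for i in range(2)]
    minus = [[S[i][j] - h * T[i][j] for j in range(2)] for i in range(2)]
    return (det2(plus) - det2(minus)) / (2 * h)

S = [[2.0, 1.0], [0.0, 3.0]]
T = [[1.0, 4.0], [2.0, 5.0]]

d = det2(S)                                               # 6.0
Sinv = [[S[1][1] / d, -S[0][1] / d],
        [-S[1][0] / d, S[0][0] / d]]
trace = sum(Sinv[i][k] * T[k][i] for i in range(2) for k in range(2))
print(round(Df_det(S, T), 4), round(d * trace, 4))        # 11.0 11.0
```

The central difference agrees with the closed-form tensor derivative.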


Properties:

  1. If <math>f(\boldsymbol{S}) = f_1(\boldsymbol{S}) + f_2(\boldsymbol{S})</math> then <math>\frac{\partial f}{\partial \boldsymbol{S}}:\boldsymbol{T} = \left(\frac{\partial f_1}{\partial \boldsymbol{S}} + \frac{\partial f_2}{\partial \boldsymbol{S}}\right):\boldsymbol{T}</math>

  2. If <math>f(\boldsymbol{S}) = f_1(\boldsymbol{S})~ f_2(\boldsymbol{S})</math> then <math>\frac{\partial f}{\partial \boldsymbol{S}}:\boldsymbol{T} = \left(\frac{\partial f_1}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)~f_2(\boldsymbol{S}) + f_1(\boldsymbol{S})~\left(\frac{\partial f_2}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)</math>

  3. If <math>f(\boldsymbol{S}) = f_1(f_2(\boldsymbol{S}))</math> then <math>\frac{\partial f}{\partial \boldsymbol{S}}:\boldsymbol{T} = \frac{\partial f_1}{\partial f_2}~\left(\frac{\partial f_2}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)</math>

Derivatives of tensor valued functions of second-order tensors

Let <math>\boldsymbol{F}(\boldsymbol{S})</math> be a second order tensor valued function of the second order tensor <math>\boldsymbol{S}</math>. Then the derivative of <math>\boldsymbol{F}(\boldsymbol{S})</math> with respect to <math>\boldsymbol{S}</math> (or at <math>\boldsymbol{S}</math>) in the direction <math>\boldsymbol{T}</math> is the fourth order tensor defined as

<math>\frac{\partial \boldsymbol{F}}{\partial \boldsymbol{S}}:\boldsymbol{T} = D\boldsymbol{F}(\boldsymbol{S})[\boldsymbol{T}] = \left[\frac{d}{d\alpha}~\boldsymbol{F}(\boldsymbol{S} + \alpha\boldsymbol{T})\right]_{\alpha = 0}</math>

for all second order tensors <math>\boldsymbol{T}</math>.


Properties:

  1. If <math>\boldsymbol{F}(\boldsymbol{S}) = \boldsymbol{F}_1(\boldsymbol{S}) + \boldsymbol{F}_2(\boldsymbol{S})</math> then <math>\frac{\partial \boldsymbol{F}}{\partial \boldsymbol{S}}:\boldsymbol{T} = \left(\frac{\partial \boldsymbol{F}_1}{\partial \boldsymbol{S}} + \frac{\partial \boldsymbol{F}_2}{\partial \boldsymbol{S}}\right):\boldsymbol{T}</math>

  2. If <math>\boldsymbol{F}(\boldsymbol{S}) = \boldsymbol{F}_1(\boldsymbol{S})\cdot\boldsymbol{F}_2(\boldsymbol{S})</math> then <math>\frac{\partial \boldsymbol{F}}{\partial \boldsymbol{S}}:\boldsymbol{T} = \left(\frac{\partial \boldsymbol{F}_1}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)\cdot\boldsymbol{F}_2(\boldsymbol{S}) + \boldsymbol{F}_1(\boldsymbol{S})\cdot\left(\frac{\partial \boldsymbol{F}_2}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)</math>

  3. If <math>\boldsymbol{F}(\boldsymbol{S}) = \boldsymbol{F}_1(\boldsymbol{F}_2(\boldsymbol{S}))</math> then <math>\frac{\partial \boldsymbol{F}}{\partial \boldsymbol{S}}:\boldsymbol{T} = \frac{\partial \boldsymbol{F}_1}{\partial \boldsymbol{F}_2}:\left(\frac{\partial \boldsymbol{F}_2}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)</math>

  4. If <math>f(\boldsymbol{S}) = f_1(\boldsymbol{F}_2(\boldsymbol{S}))</math> then <math>\frac{\partial f}{\partial \boldsymbol{S}}:\boldsymbol{T} = \frac{\partial f_1}{\partial \boldsymbol{F}_2}:\left(\frac{\partial \boldsymbol{F}_2}{\partial \boldsymbol{S}}:\boldsymbol{T}\right)</math>

References

  1. ^ Wrede, R.; Spiegel, M.R. (2010). Advanced Calculus (3rd ed.). Schaum's Outline Series. ISBN 978-0-07-162366-7.
  2. ^ Technically, the gradient ∇f is a covector, and the "dot product" is the action of this covector on the vector v.
  3. ^ Zee, A. (2013). Einstein Gravity in a Nutshell. Princeton: Princeton University Press. p. 341. ISBN 9780691145587.
  4. ^ Weinberg, Steven (1999). The Quantum Theory of Fields (reprinted with corrections). Cambridge: Cambridge University Press. ISBN 9780521550017.
  5. ^ Zee, A. (2013). Einstein Gravity in a Nutshell. Princeton: Princeton University Press. ISBN 9780691145587.
  6. ^ Cahill, Kevin (2013). Physical Mathematics (repr. ed.). Cambridge: Cambridge University Press. ISBN 978-1107005211.
  7. ^ Larson, Ron; Edwards, Bruce H. (2010). Calculus of a Single Variable (9th ed.). Belmont: Brooks/Cole. ISBN 9780547209982.
  8. ^ Shankar, R. (1994). Principles of Quantum Mechanics (2nd ed.). New York: Kluwer Academic/Plenum. p. 318. ISBN 9780306447907.
  9. ^ Marsden, J. E.; Hughes, T. J. R. (2000). Mathematical Foundations of Elasticity. Dover.


