Part 3 - Higher partial derivatives

Posted on Sep 1, 2023
(Last updated: May 26, 2024)

Introduction

In this part we’ll define higher partial derivatives and discuss what it means for a function of several variables to be differentiable.

Higher partial derivatives

In the last part we defined the partial derivative: $$ f_x = \lim_{h \to 0} \dfrac{f(x + h, y) - f(x, y)}{h} $$

We can now differentiate this function again: $$ f_{xx} = \lim_{h \to 0} \dfrac{f_x(x + h, y) - f_x(x, y)}{h} $$

$$ f_{xy} = \lim_{h \to 0} \dfrac{f_x(x, y + h) - f_x(x, y)}{h} $$

$$ f_{yx} = \lim_{h \to 0} \dfrac{f_y(x + h, y) - f_y(x, y)}{h} $$

$$ f_{yy} = \lim_{h \to 0} \dfrac{f_y(x, y + h) - f_y(x, y)}{h} $$
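These limit definitions translate directly into finite-difference approximations. A minimal numeric sketch in Python, with a test function $f(x, y) = xy^2$ chosen here just for illustration and a small step $h$:

```python
def f(x, y):
    return x * y**2  # hypothetical test function: f_x = y^2, f_xy = 2y

def fx(x, y, h=1e-6):
    # difference quotient from the definition of f_x
    return (f(x + h, y) - f(x, y)) / h

def fxy(x, y, h=1e-4):
    # differentiate the approximation of f_x once more, now in the y-direction
    return (fx(x, y + h) - fx(x, y)) / h

print(fx(2.0, 3.0))   # ~9.0, since f_x(2, 3) = 3^2
print(fxy(2.0, 3.0))  # ~6.0, since f_xy(2, 3) = 2 * 3
```

Nested difference quotients amplify rounding errors, which is why the outer step is chosen larger than the inner one.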

Example

Find all 2nd order partial derivatives of:

$$ f(x, y) = x \cdot e^y $$

1st order: $$ f_x(x, y) = e^y $$

$$ f_y(x, y) = x \cdot e^y $$

2nd order: $$ f_{xx}(x, y) = 0 $$

$$ f_{xy}(x, y) = e^y $$

$$ f_{yx}(x, y) = e^y $$

$$ f_{yy}(x, y) = x \cdot e^y $$

As we can see, $f_{xy}$ and $f_{yx}$ are the same. This is often the case.
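We can check all of these with a computer algebra system. A quick sketch using Python’s sympy library:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x * sp.exp(y)

# 1st order
print(sp.diff(f, x))     # exp(y)
print(sp.diff(f, y))     # x*exp(y)

# 2nd order: sp.diff(f, x, y) differentiates w.r.t. x first, then y
print(sp.diff(f, x, x))  # 0
print(sp.diff(f, x, y))  # exp(y)
print(sp.diff(f, y, x))  # exp(y)
print(sp.diff(f, y, y))  # x*exp(y)
```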

Clairaut’s theorem

Clairaut’s theorem states that:

If $f_{xy}$ and $f_{yx}$ are continuous, then $f_{xy} = f_{yx}$

We’ll save the actual proof for another day, but we can “explain” why this theorem holds:

Explanation

Suppose we have: $$ f(x, y) = x^n y^m $$

The 1st & 2nd order partial derivatives of this polynomial are:

$$ f_x(x, y) = n \cdot x^{n - 1} y^m $$

$$ f_y(x, y) = m \cdot x^n y^{m - 1} $$

$$ f_{xy}(x, y) = m \cdot n \cdot x^{n - 1} y^{m - 1} $$

$$ f_{yx}(x, y) = n \cdot m \cdot x^{n - 1} y^{m - 1} $$

We see that the theorem holds for monomials, and therefore for all polynomials. Since well-behaved functions can be approximated by polynomials (for example via Taylor series), it’s not surprising that Clairaut’s theorem holds.
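sympy can carry out the monomial computation with symbolic exponents, which is essentially the calculation above:

```python
import sympy as sp

x, y, n, m = sp.symbols('x y n m')
f = x**n * y**m

fxy = sp.diff(f, x, y)  # differentiate w.r.t. x, then y
fyx = sp.diff(f, y, x)  # differentiate w.r.t. y, then x

print(fxy)                     # m*n*x**(n - 1)*y**(m - 1), up to form
print(sp.simplify(fxy - fyx))  # 0, so the mixed partials agree
```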

Remark

Suppose $f$ is a composition of $\sin(x)$, $\cos(x)$, polynomials and the exponential function. For example: $$ f(x, y) = e^{\cos(xy - 3\sin(xy))} $$

All of these functions are infinitely differentiable, and all of their derivatives are continuous. This means that the partial derivatives of $f$, of any order, must also be continuous.

This means we can freely use Clairaut’s theorem, since its only requirement is that the mixed partial derivatives are continuous. For functions built this way, we do not need to verify this ahead of time.

Example

Find $f_{xy}$: $$ f(x, y) = \sin\left(e^{x \cos(x - 3)}\right) + xy $$

Computing $f_x$ first would be quite tricky. But by Clairaut’s theorem $f_{xy} = f_{yx}$, so we can instead differentiate with respect to $y$ first, which is trivial: $$ f_y(x, y) = x $$

$$ \boxed{f_{xy}(x, y) = f_{yx}(x, y) = 1} $$
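A sympy sketch confirming the shortcut (the $\sin$ term contains no $y$, so it vanishes immediately):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.sin(sp.exp(x * sp.cos(x - 3))) + x * y

print(sp.diff(f, y))     # x: the messy first term has no y and drops out
print(sp.diff(f, y, x))  # 1: f_yx, and by Clairaut's theorem f_xy as well
```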

Notation

There are a lot of different notations for partial derivatives. We’ll mainly use $f_x$ and $f_y$.

However, the most common notation is: $$ f_x = \dfrac{\partial f}{\partial x} $$

$$ f_y = \dfrac{\partial f}{\partial y} $$

For higher-order partial derivatives: $$ f_{xx} = \dfrac{\partial}{\partial x}\left(\dfrac{\partial f}{\partial x}\right) = \dfrac{\partial^2 f}{\partial x^2} $$

$$ f_{xy} = \dfrac{\partial}{\partial y}\left(\dfrac{\partial f}{\partial x}\right) = \dfrac{\partial^2 f}{\partial y \partial x} $$

$$ f_{yx} = \dfrac{\partial}{\partial x}\left(\dfrac{\partial f}{\partial y}\right) = \dfrac{\partial^2 f}{\partial x \partial y} $$

$$ f_{yy} = \dfrac{\partial}{\partial y}\left(\dfrac{\partial f}{\partial y}\right) = \dfrac{\partial^2 f}{\partial y^2} $$

Note that in the $\partial$ notation the order of differentiation is read from right to left, so $f_{xy}$ (first $x$, then $y$) becomes $\dfrac{\partial^2 f}{\partial y \partial x}$.

n-th order partial derivatives

We can differentiate as many times as we want, with respect to whichever variables we want: $$ f_{xxx}, f_{xxy}, f_{xyx}, \ldots $$

Clairaut’s theorem extends to these as well: as long as the partial derivatives involved are continuous, the order of differentiation does not matter.
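A quick sympy check with a test function of my own choosing: any ordering of the same three differentiations gives the same result.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 * y**2 + sp.sin(x * y)  # hypothetical smooth test function

# three permutations of "twice in x, once in y"
d1 = sp.diff(f, x, x, y)
d2 = sp.diff(f, x, y, x)
d3 = sp.diff(f, y, x, x)

print(sp.simplify(d1 - d2))  # 0
print(sp.simplify(d2 - d3))  # 0
```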

Geometrical sense

Let’s try to understand what derivatives in two variables look like geometrically.

For one variable: $f'(a)$ is the slope of the tangent line to the graph at $(a, f(a))$. We could also say that this is the rate of change at the point $a$.

For two variables: the vertical plane $y = b$ intersects the graph of $f$ in a curve $C$. Then $f_x(a, b)$ is the slope of the tangent line $T$ to $C$ at the point $(a, b, f(a, b))$.

We could also say: the partial derivative is the rate of change in the direction of the variable we are differentiating with respect to.
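A small numeric sketch of this picture, with a test surface of my own choosing: fixing $y = b$ turns $f$ into a one-variable function whose ordinary slope at $x = a$ matches $f_x(a, b)$.

```python
import math

def f(x, y):
    return math.sin(x) * y**2  # hypothetical test surface: f_x = cos(x) * y^2

a, b, h = 1.0, 2.0, 1e-6

def g(x):
    # the slice curve C: intersect the graph with the vertical plane y = b
    return f(x, b)

slice_slope = (g(a + h) - g(a)) / h  # ordinary 1-variable slope of C at x = a
exact = math.cos(a) * b**2           # f_x(a, b) computed by hand

print(slice_slope, exact)  # both ~2.1612
```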

Differentiability

For functions of one variable, the following statement holds:

$f$ is differentiable at $a$ $\Leftrightarrow$ $f$ has a derivative at $a$, meaning $f'(a)$ exists.

When dealing with several variables, this becomes tricky. Let’s look at the definition and see what we can do:

$$ f'(a) = \lim_{h \to 0} \dfrac{f(a + h) - f(a)}{h} $$

This just means we have a small change in $x$, so let’s say: $$ \Delta x = h $$

Let’s define the small change in the function value as: $$ \Delta y := f(a + \Delta x) - f(a) $$

Then the derivative is: $$ f'(a) = \lim_{\Delta x \to 0} \dfrac{\Delta y}{\Delta x} $$

Let’s rearrange the terms: $$ \lim_{\Delta x \to 0} \dfrac{\Delta y}{\Delta x} - f'(a) = 0 $$

Let’s call this difference $\varepsilon$: $$ \varepsilon = \dfrac{\Delta y}{\Delta x} - f'(a) $$

We can define this as a function of $\Delta x$: $$ \varepsilon = \varepsilon(\Delta x) $$

Multiplying both sides by $\Delta x$: $$ \varepsilon \Delta x = \Delta y - f'(a) \Delta x $$

Which finally gives us: $$ \boxed{\Delta y = \varepsilon \Delta x + f'(a) \Delta x} $$

And as $\Delta x \to 0$, $\varepsilon \to 0$ as well.

For functions of two variables, we analogously define $\Delta z = f(a + \Delta x, b + \Delta y) - f(a, b)$, and a similar (but longer) derivation gives: $$ \Delta z = \ldots = \boxed{f_x(a, b)\Delta x + f_y(a, b)\Delta y + \varepsilon_1 \Delta x + \varepsilon_2 \Delta y} $$

Where: $$ \varepsilon_1 = \varepsilon_1(\Delta x, \Delta y), \quad \varepsilon_2 = \varepsilon_2(\Delta x, \Delta y) $$

and $\varepsilon_1, \varepsilon_2 \to 0$ as $(\Delta x, \Delta y) \to (0, 0)$. When $\Delta z$ can be written in this form, we say that $f$ is differentiable at $(a, b)$.
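A numeric sketch of this definition, using $f(x, y) = x e^y$ from the earlier example: the error left over after the linear part $f_x \Delta x + f_y \Delta y$ should shrink faster than $\Delta x$ and $\Delta y$ themselves, which is exactly the statement that $\varepsilon_1, \varepsilon_2 \to 0$.

```python
import math

def f(x, y):
    return x * math.exp(y)  # example function from earlier in the post

a, b = 2.0, 1.0
fx = math.exp(b)      # f_x(a, b), computed by hand
fy = a * math.exp(b)  # f_y(a, b), computed by hand

for k in range(1, 6):
    dx = dy = 10.0**(-k)
    dz = f(a + dx, b + dy) - f(a, b)
    err = dz - (fx * dx + fy * dy)  # the eps_1*dx + eps_2*dy leftover
    print(dx, err / dx)             # the ratio -> 0 as dx -> 0
```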