Series of orthogonal functions

Consider a set of functions $\{\phi_n(x)\}$ such that every function in the set is orthogonal to any other function in the set. Let the norm of one of those functions be $N_n$ (if $N_n = 1$, we say the set is orthonormal). Then, we can write

$$\langle \phi_n, \phi_m \rangle = \int_a^b \phi_n^*(x)\,\phi_m(x)\,dx = N_n^2\,\delta_{nm},$$

where $\delta_{nm}$ is the Kronecker delta.

Because the set is orthogonal (and therefore linearly independent), we can use it as a basis of the (function) vector space. Then, we can develop a function $f : [a, b] \to \mathbb{F}$ (where $\mathbb{F}$ is the field it is defined on, for example $\mathbb{R}$ or $\mathbb{C}$) as a series of the orthogonal set as

$$f(x) = \sum_n c_n\,\phi_n(x).$$

The $c_n$ are then the components. How do we find these components (i.e., the coefficients of the series)?

We take the inner product from the left with $\phi_m$ on both sides of the equation,

$$\langle \phi_m, f \rangle = \sum_n c_n \langle \phi_m, \phi_n \rangle = \sum_n c_n N_n^2\,\delta_{mn} = c_m N_m^2,$$

so then

$$c_m = \frac{\langle \phi_m, f \rangle}{N_m^2} = \frac{1}{N_m^2}\int_a^b \phi_m^*(x)\,f(x)\,dx.$$


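For a concrete numerical illustration, here is a minimal Python sketch, assuming the orthogonal basis $\phi_n(x) = \sin(nx)$ on $[0, \pi]$ (which reappears in the examples below, with $N_n^2 = \pi/2$) and an illustrative test function:

```python
import numpy as np

# Basis sin(n x) on [0, pi]; its norm squared is N_n^2 = pi / 2.
x = np.linspace(0.0, np.pi, 2001)
dx = x[1] - x[0]
f = x * (np.pi - x)                      # illustrative test function

# c_n = <phi_n, f> / N_n^2, approximated by a Riemann sum
coeffs = []
for n in range(1, 6):
    phi_n = np.sin(n * x)
    coeffs.append(np.sum(phi_n * f) * dx / (np.pi / 2))

# The partial sum of the series reconstructs f
f_approx = sum(c * np.sin(n * x) for n, c in zip(range(1, 6), coeffs))
print(np.max(np.abs(f - f_approx)))      # small -> series approximates f well
```
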
Exercise:

Consider the set of functions $\{P_0(x) = 1,\; P_1(x) = x\}$, defined in the interval $x \in [-1, 1]$.

  1. Show that $P_0$ and $P_1$ are orthogonal to each other in the given interval.
  2. Compute the normalization constant for each polynomial.
  3. Compute the product $\langle P_0, x^2 \rangle$. Those two vectors aren't orthogonal. But are they linearly independent?
  4. One can orthogonalize two vectors with a procedure called Gram-Schmidt orthogonalization. Show that the second-order polynomial $P_2(x) = \frac{1}{2}(3x^2 - 1)$ is indeed orthogonal to $P_0$ and $P_1$ (i.e., $\langle P_2, P_0 \rangle = \langle P_2, P_1 \rangle = 0$).
  5. Compute the components of the (vector) function $f(x) = x^2$ in the given base (set of functions); a numerical check is sketched after this list.
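
A minimal numerical check of the exercise (assuming, as reconstructed above, the first Legendre polynomials on $[-1, 1]$ and the test function $x^2$):

```python
import numpy as np

# Assumed set: the first Legendre polynomials on [-1, 1]
P0 = lambda t: np.ones_like(t)
P1 = lambda t: t
P2 = lambda t: 0.5 * (3 * t**2 - 1)      # Gram-Schmidt result, item 4

x = np.linspace(-1.0, 1.0, 4001)
dx = x[1] - x[0]
inner = lambda u, v: np.sum(u(x) * v(x)) * dx   # crude inner product on [-1, 1]

print(inner(P0, P1), inner(P0, P2), inner(P1, P2))   # ~0: orthogonal
print(inner(P0, P0), inner(P1, P1), inner(P2, P2))   # N_n^2 = 2/(2n+1)

# Item 5: components of f(x) = x^2, via c_n = <P_n, f> / N_n^2
f = lambda t: t**2
for P, N2 in [(P0, 2.0), (P1, 2.0 / 3), (P2, 2.0 / 5)]:
    print(inner(P, f) / N2)              # expect 1/3, 0, 2/3
```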

Eigenvalue problem for functions and operators

Review: the eigenvalue problem for matrices

In linear algebra, we learn that if a (column) vector $\vec{v}$ can be thought of as an "arrow", a matrix $A$ can be thought of as a transformation on that vector (a rotation, a stretch, a flip, etc.). We also learn that a real scalar $\lambda$ multiplied by a vector stretches the vector along its direction.

The equation $A\vec{v} = \lambda\vec{v}$ is called the eigenvalue problem. The idea is that the matrix $A$ has the same effect on $\vec{v}$ as a scalar multiplication by $\lambda$. The values of $\lambda$ for which this equation holds are called eigenvalues, and their associated vectors $\vec{v}$, eigenvectors of the matrix $A$.
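
For instance, a minimal sketch with NumPy's eigensolver (the matrix is an illustrative choice):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # illustrative symmetric matrix

evals, evecs = np.linalg.eig(A)
print(evals)                             # eigenvalues 3 and 1 (order may vary)

v = evecs[:, 0]                          # eigenvector paired with evals[0]
print(np.allclose(A @ v, evals[0] * v))  # True: A v = lambda v
```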

Operators and matrices

In the same way that we have seen that there is a correspondence between column vectors and functions, there is a correspondence between matrices and differential operators. Differential operators transform functions in the same way that matrices transform column vectors. To understand exactly why we can make those correspondences, the interested reader should study representations in group theory.


Example 1: consider the base $\{\phi_n(x) = \sin(nx)\}$, which is orthogonal in the interval $[0, \pi]$, and the operator $\hat{L} = \frac{d^2}{dx^2}$. Compute the matrix representation of $\hat{L}$.

Solution: We can build such a matrix representation by considering the inner product

$$L_{mn} = \frac{\langle \phi_m, \hat{L}\,\phi_n \rangle}{N_m^2} = \frac{2}{\pi}\int_0^\pi \sin(mx)\,\frac{d^2}{dx^2}\sin(nx)\,dx = -n^2\,\delta_{mn},$$

that is

$$L = \begin{pmatrix} -1 & 0 & 0 & \cdots \\ 0 & -4 & 0 & \cdots \\ 0 & 0 & -9 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$


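A numerical cross-check of this construction, assuming the $\sin(nx)$ basis above (grid resolution is an illustrative choice):

```python
import numpy as np

x = np.linspace(0.0, np.pi, 4001)
dx = x[1] - x[0]

phi = lambda n: np.sin(n * x)            # basis function, N_n^2 = pi / 2

def d2(f):
    # Second derivative by applying central differences twice
    return np.gradient(np.gradient(f, dx), dx)

# L_mn = <phi_m, L phi_n> / N_m^2, approximated by a Riemann sum
L = np.empty((4, 4))
for m in range(1, 5):
    for n in range(1, 5):
        L[m - 1, n - 1] = np.sum(phi(m) * d2(phi(n))) * dx / (np.pi / 2)

print(np.round(L, 2))                    # ~ diag(-1, -4, -9, -16)
```
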
Example 2: in programming a simulation, you need to discretize the operator $\frac{d^2}{dx^2}$, where the coordinate $x$ is discretized as a grid with $N$ values distributed at a regular interval $\Delta x$. Compute a matrix representation of the operator.

Solution: We have a grid of $N$ points in $[a, b]$. This means that the values of $x$ where $f(x)$ exists can be written as $x_i = a + i\,\Delta x$, where $i = 0, 1, \ldots, N-1$.

A possible discretization of the second derivative operator applied on a test function $f(x)$ is

$$\left.\frac{d^2 f}{dx^2}\right|_{x_i} \approx \frac{f(x_{i+1}) - 2f(x_i) + f(x_{i-1})}{\Delta x^2}.$$

With this, we can see that in discretized space, the operator is

$$A = \frac{1}{\Delta x^2}\begin{pmatrix} -2 & 1 & 0 & 0 & \cdots \\ 1 & -2 & 1 & 0 & \cdots \\ 0 & 1 & -2 & 1 & \cdots \\ \vdots & & \ddots & \ddots & \ddots \end{pmatrix}.$$

Let's check: imagine a function $f(x)$ sampled at every point of the grid. So, the function becomes the column vector

$$\vec{f} = \begin{pmatrix} f(x_0) \\ f(x_1) \\ \vdots \\ f(x_{N-1}) \end{pmatrix}.$$

Then, ignoring the first and last components (boundaries), if we take the $i$-th component of the matrix multiplication (the $i$-th row of $A$ dotted with the column vector $\vec{f}$), we recover

$$(A\vec{f})_i = \frac{f(x_{i-1}) - 2f(x_i) + f(x_{i+1})}{\Delta x^2} \approx \left.\frac{d^2 f}{dx^2}\right|_{x_i}.$$

The more grid points in our discretization, the larger the dimension of our matrix. This means that for continuous space ($\Delta x \to 0$, so $N \to \infty$) we can think of the derivative as an infinitely dense matrix, and a function as a column vector with infinitely many entries, one per each value of $x$.
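
A minimal sketch of this discretization (the grid and test function are illustrative choices):

```python
import numpy as np

N = 200
x = np.linspace(0.0, 1.0, N)             # illustrative grid on [0, 1]
dx = x[1] - x[0]

# Tridiagonal matrix: -2 on the diagonal, +1 on both off-diagonals
A = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), k=1)
     + np.diag(np.ones(N - 1), k=-1)) / dx**2

f = np.sin(2 * np.pi * x)                # test function: f'' = -(2 pi)^2 f
d2f = A @ f

# The first and last rows are not valid stencils, so compare the interior only
err = np.max(np.abs(d2f[1:-1] + (2 * np.pi) ** 2 * f[1:-1]))
print(err)                               # small: the stencil is O(dx^2)
```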


Connection with quantum mechanics (not for this course!): With the matrix representation $A_{mn}$, we can define a transpose, $(A^T)_{mn} = A_{nm}$. For a complex operator $\hat{A}$ and its complex conjugate $\hat{A}^*$, we can also define an adjoint operator, $\hat{A}^\dagger = (\hat{A}^*)^T$. If an operator is the same as its adjoint ($\hat{A} = \hat{A}^\dagger$), we call it Hermitian.
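
A quick numerical check (again outside the scope of this course): the discretized second-derivative matrix from Example 2 is real and symmetric, so it equals its own adjoint.

```python
import numpy as np

# The discretized d^2/dx^2 from Example 2 (illustrative size)
N, dx = 200, 1.0 / (200 - 1)
A = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), k=1)
     + np.diag(np.ones(N - 1), k=-1)) / dx**2

# Real and symmetric, hence equal to its adjoint (conjugate transpose)
print(np.allclose(A, A.conj().T))        # True: the operator is Hermitian
```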

Eigenvalue problem for operators

We have seen that matrices and operators are related. Then, it is not surprising that we can define an eigenvalue problem for a differential operator $\hat{L}$ operating on (vector) functions $f(x)$, through the differential equation

$$\hat{L}\,f(x) = \lambda\,f(x),$$

where $\lambda$ is then a constant (the eigenvalue).


Example: we have seen that the set of functions $\{\sin(nx)\}$ is orthogonal in $[0, \pi]$. Show that this set of functions satisfies an eigenvalue problem with the operator $\hat{L} = \frac{d^2}{dx^2}$, and compute the eigenvalues.

Solution: the eigenvalue problem would be

$$\frac{d^2}{dx^2}\sin(nx) = \lambda_n \sin(nx).$$

Let's check: $\frac{d^2}{dx^2}\sin(nx) = \frac{d}{dx}\left[n\cos(nx)\right] = -n^2\sin(nx)$. Then, $\lambda_n = -n^2$, and we identify the $-n^2$ as the eigenvalues and the $\sin(nx)$ as the eigenvectors of $\frac{d^2}{dx^2}$.

Note that, in principle, we should be able to find the eigenvectors (and the eigenvalues) with the sole knowledge of the operator.
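
Connecting back to Example 2: diagonalizing the discretized $\frac{d^2}{dx^2}$ on a grid over $(0, \pi)$ with vanishing boundary values recovers these eigenvalues approximately (the grid size is an illustrative choice):

```python
import numpy as np

# Interior grid on (0, pi), with f = 0 at both ends as for sin(n x)
N = 500
dx = np.pi / (N + 1)
A = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), k=1)
     + np.diag(np.ones(N - 1), k=-1)) / dx**2

evals = np.sort(np.linalg.eigvalsh(A))[::-1]   # least negative first
print(np.round(evals[:4], 3))            # ~ [-1, -4, -9, -16] = -n^2
```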

