
# Matrix Operations

Matrix calculus is one of the most important topics in linear algebra. In this article we discuss matrix operations, mainly the multiplication of matrices. Matrices are widely used in computing: after discretization, many physical models can be rewritten as a linear system of finite dimension, and such a system is governed by a matrix. A unique solution then exists if and only if this matrix is invertible.

## What is a matrix?

Informally, a matrix is just an array in which we put entries. This array can be square or rectangular.

Mathematically, a matrix represents a linear transformation $A$ from $\mathbb{R}^p$ to $\mathbb{R}^n$. If $p=n$ then $A$ is called a square matrix, and if $p\neq n,$ $A$ is called a rectangular matrix. Using the canonical bases of $\mathbb{R}^p$ and $\mathbb{R}^n,$ the matrix $A$ takes the following form \begin{align*}A=\begin{pmatrix} a_{11}&a_{12}&\cdots&a_{1p}\cr a_{21}&a_{22}& \cdots&a_{2p}\cr \vdots&\vdots&\ddots&\vdots\cr a_{n1}&a_{n2}&\cdots&a_{np}\end{pmatrix}.\end{align*}For simplicity, we write $A=(a_{ij})_{\underset{1\le j\le p}{1\le i\le n}}$. We say that $A$ is an $n\times p$ matrix: $n$ is the number of rows, while $p$ is the number of columns of the matrix.

## The space of matrices

In what follows the field $\mathbb{K}$ will be either the set of real numbers $\mathbb{R}$ or the set of complex numbers $\mathbb{C}$. We denote by $\mathscr{M}_{np}(\mathbb{K})$ the set of all matrices $A=(a_{ij})_{\underset{1\le j\le p}{1\le i\le n}}$. If $n=p,$ we set $\mathscr{M}_{nn}(\mathbb{K}):=\mathscr{M}_{n}(\mathbb{K}),$ the set of square matrices of order $n$.

Let $A$ and $B$ be two matrices in $\mathscr{M}_{np}(\mathbb{K})$ with coefficients $a_{ij}$ and $b_{ij},$ respectively, and let $\lambda \in \mathbb{K}$. We define the following matrix operations: \begin{align*} A+B&=(a_{ij}+b_{ij})_{\underset{1\le j\le p}{1\le i\le n}}\cr \lambda\cdot A &=(\lambda a_{ij})_{\underset{1\le j\le p}{1\le i\le n}}.\end{align*} Then $(\mathscr{M}_{np}(\mathbb{K}),+,\cdot)$ is a vector space over $\mathbb{K}$.
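These entrywise operations can be sketched in a few lines of Python on plain nested lists (a minimal illustration; `mat_add` and `scalar_mul` are hypothetical helper names, not a standard API):

```python
def mat_add(A, B):
    """Entrywise sum of two matrices of the same shape."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(lam, A):
    """Multiply every entry of A by the scalar lam."""
    return [[lam * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_add(A, B))     # [[1, 3], [4, 4]]
print(scalar_mul(2, A))  # [[2, 4], [6, 8]]
```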

Let us consider the elementary matrices $E^{ij}$ for $i=1,\cdots,n$ and $j=1,\cdots,p$ defined in the following way: all the coefficients are zero except the coefficient at position $(i,j),$ which equals $1$. We recall that the set $$\{E^{ij}:i=1,\cdots,n,\;j=1,\cdots,p\}$$ is a basis of the matrix space $\mathscr{M}_{np}(\mathbb{K})$, so that the dimension of $\mathscr{M}_{np}(\mathbb{K})$ is $np$.
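The decomposition of a matrix over this basis, $A=\sum_{i,j} a_{ij}E^{ij}$, can be checked numerically (a sketch with 0-based indices; `E` is an illustrative constructor name):

```python
def E(i, j, n, p):
    """n x p elementary matrix: a single 1 at position (i, j), zeros elsewhere."""
    return [[1 if (r, c) == (i, j) else 0 for c in range(p)] for r in range(n)]

# Check that A = sum over (i, j) of a_ij * E^{ij} for a 2 x 3 example.
A = [[5, 0, -1], [2, 7, 3]]
S = [[0] * 3 for _ in range(2)]
for i in range(2):
    for j in range(3):
        Eij = E(i, j, 2, 3)
        for r in range(2):
            for c in range(3):
                S[r][c] += A[i][j] * Eij[r][c]
print(S == A)  # True
```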

## Multiplying matrices

Let $A=(a_{ij})\in \mathscr{M}_{np}(\mathbb{K})$ and $B=(b_{ij})\in \mathscr{M}_{pq}(\mathbb{K})$ be matrices. Then $C=AB=(c_{ij})\in \mathscr{M}_{nq}(\mathbb{K})$, where the entry $c_{ij}$ is given by \begin{align*}c_{ij}=\sum_{k=1}^p a_{ik}b_{kj}.\end{align*} The power of a square matrix $A$ is the matrix $$A^n=\underset{(n\;\text{times})}{\underbrace{A\cdot A\cdots A}},\quad n\in\mathbb{N}.$$ When multiplying matrices you have to be very careful because in general $AB\neq BA$. In fact, take, for example, the matrices \begin{align*} A=\begin{pmatrix} 1&2\\1&0\end{pmatrix},\quad B=\begin{pmatrix}0&3\\ 1&2\end{pmatrix}.\end{align*} Then $$AB=\begin{pmatrix}2&7\\ 0&3\end{pmatrix},\quad BA=\begin{pmatrix}3&0\\3&2\end{pmatrix}.$$ Consequently, we cannot immediately use the binomial expansion formula, since that formula holds in a commutative ring, and matrix multiplication is not commutative. We can still use the binomial expansion, however, if the matrices $A$ and $B$ satisfy $AB=BA$.
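The product formula and the non-commutativity above can be verified directly on the article's $2\times 2$ example (a minimal sketch; `mat_mul` is an illustrative name):

```python
def mat_mul(A, B):
    """Product of an n x p matrix A and a p x q matrix B via c_ij = sum_k a_ik b_kj."""
    n, p, q = len(A), len(B), len(B[0])
    assert len(A[0]) == p, "inner dimensions must match"
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(q)]
            for i in range(n)]

A = [[1, 2], [1, 0]]
B = [[0, 3], [1, 2]]
print(mat_mul(A, B))  # [[2, 7], [0, 3]]
print(mat_mul(B, A))  # [[3, 0], [3, 2]]  -> AB != BA
```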

## Matrix of a linear map

Let $\varphi:E\to F$ be a linear map, where $E$ and $F$ are finite-dimensional spaces of dimensions $n$ and $p$, respectively. That is, $\varphi(x+\lambda y)=\varphi(x)+\lambda \varphi(y)$ for any $x,y\in E$ and $\lambda\in\mathbb{K}$. Let $(e_1,\cdots,e_n)$ and $(f_1,f_2,\cdots,f_p)$ be bases of $E$ and $F,$ respectively. The $p\times n$ matrix $A$ associated with the linear map $\varphi$ is given by \begin{align*}A=\begin{pmatrix}\varphi(e_1)&\varphi(e_2)&\cdots&\varphi(e_n)\end{pmatrix},\end{align*}where for each $i=1,2,\cdots,n,$ $\varphi(e_i)$ is a column vector computed in the basis $(f_1,f_2,\cdots,f_p)$.

Example: Let $\mathbb{R}_2[X]$ be the vector space of polynomials of degree at most $2$. We recall that the dimension of this space is $3$. Consider the map \begin{align*} \Phi:\mathbb{R}_2[X]\to \mathbb{R}^3,\qquad \Phi(P)=(P(-1),P(0),P(1)). \end{align*}

• Let us first prove that $\Phi$ is a linear transformation. In fact, let $P,Q\in \mathbb{R}_2[X]$ and $\lambda\in \mathbb{R}$. We have \begin{align*} \Phi(P+\lambda Q)&=(P(-1)+\lambda Q(-1),P(0)+\lambda Q(0),P(1)+\lambda Q(1))\cr &= (P(-1),P(0),P(1))+(\lambda Q(-1),\lambda Q(0),\lambda Q(1)) \cr &= (P(-1),P(0),P(1))+\lambda (Q(-1), Q(0), Q(1))\cr &= \Phi(P)+\lambda \Phi(Q). \end{align*}
• Let $B=(1,X,X^2)$ be the canonical basis of $\mathbb{R}_2[X]$ and $B'=(e_1,e_2,e_3)$ the canonical basis of $\mathbb{R}^3$. Let us now determine the matrix $A$ which represents the linear map in these bases. To compute the matrix associated with $\Phi$ in the bases $B$ and $B'$, we first give the coordinates of the vectors $\Phi(1),\Phi(X)$ and $\Phi(X^2)$ in the basis $B'$. We recall that $e_1=(1,0,0),\;e_2=(0,1,0),$ and $e_3=(0,0,1)$. Then \begin{align*} \Phi(1)&=(1,1,1)=e_1+e_2+e_3\cr \Phi(X)&=(-1,0,1)=-e_1+e_3\cr \Phi(X^2)&=(1,0,1)=e_1+e_3. \end{align*} Then \begin{align*} A&=\begin{pmatrix} \Phi(1)&\Phi(X)&\Phi(X^2)\end{pmatrix}\cr &= \begin{pmatrix} 1&-1&1\\1&0&0\\1&1&1\end{pmatrix}. \end{align*}
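The construction above, column by column, can be reproduced numerically: column $j$ of $A$ holds the coordinates of $\Phi$ applied to the $j$-th basis polynomial (a sketch; `phi` and `basis_polys` are illustrative names):

```python
def phi(p):
    """Phi(P) = (P(-1), P(0), P(1)) for a polynomial given as a callable."""
    return [p(-1), p(0), p(1)]

basis_polys = [lambda x: 1, lambda x: x, lambda x: x ** 2]  # 1, X, X^2
cols = [phi(p) for p in basis_polys]                        # columns of A
A = [[cols[j][i] for j in range(3)] for i in range(3)]      # transpose into rows
print(A)  # [[1, -1, 1], [1, 0, 0], [1, 1, 1]]
```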
• Finally, we prove that $\Phi$ is an isomorphism. In fact, we know that \begin{align*}\dim\left(\mathbb{R}_2[X]\right)=3=\dim\left(\mathbb{R}^3\right).\end{align*} Then to prove that $\Phi$ is an isomorphism it suffices to prove that $\Phi$ is injective, which is equivalent to proving that the kernel $\ker(\Phi)=\{0\}$. If $P\in \ker(\Phi)$ then $P(0)=P(-1)=P(1)=0$. This means that $P$ has three distinct roots, which is impossible for a nonzero polynomial of degree at most $2$. Hence $P=0$ (the null polynomial), so $\ker(\Phi)=\{0\}$. The map $\Phi$ is therefore injective, and hence an isomorphism.
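The conclusion can also be cross-checked numerically: $\Phi$ is an isomorphism exactly when its matrix $A$ is invertible, i.e. when $\det(A)\neq 0$ (a sketch; `det3` is an illustrative helper implementing the rule of Sarrus for $3\times 3$ matrices):

```python
def det3(M):
    """Determinant of a 3 x 3 matrix by the rule of Sarrus."""
    return (M[0][0] * M[1][1] * M[2][2] + M[0][1] * M[1][2] * M[2][0]
            + M[0][2] * M[1][0] * M[2][1] - M[0][2] * M[1][1] * M[2][0]
            - M[0][0] * M[1][2] * M[2][1] - M[0][1] * M[1][0] * M[2][2])

A = [[1, -1, 1], [1, 0, 0], [1, 1, 1]]
print(det3(A))  # 2, nonzero, so the matrix of Phi is invertible
```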

You may also consult our article on the eigenvalues of matrices.
