In this short page, you will learn the basics of the power series representation of a given function. As motivation for our problem, let's think about a special idea. In general, it is difficult to calculate the values of transcendental functions - for example, how can one accurately report the value of $\ln(37.88)$ or $\sin\left(\frac{\pi}{83}\right)$? On the other hand, computers and humans are quite good at evaluating polynomials without too much trouble. The big idea of Taylor and MacLaurin: find a way to write a function as a power series! That is, given a function $f(x)$ on an interval $(a,b)$ with $x_0\in (a,b)$, can we find a sequence $\{a_n\}_{n=0}^\infty$ so that $$f(x) = P(x) = \displaystyle\sum_{n=0}^\infty a_n (x-x_0)^n$$ for each $x\in (a,b)$?
If the answer to this question is yes, the consequences are important. Remember that the values of $P(x)$ are given by the limit of its partial sums $$P_k(x) = \displaystyle \sum_{n=0}^k a_n(x-x_0)^n = a_0 + a_1(x-x_0)+a_2(x-x_0)^2 +\dots+ a_k(x-x_0)^k.$$ Notice that each partial sum $P_k$ is a polynomial of degree $k$, and as $k$ grows, $P_k(x)$ provides a better and better approximation of $f(x)$. This allows us to approximate the values of these functions with great accuracy. Let's investigate!
Given $x_0$, a power series centred at $x_0$ is of the form $$P(x)=\displaystyle\sum_{n=0}^\infty a_n(x-x_0)^n$$ where $\{a_n\}_{n=0}^\infty$ is a sequence of real numbers. It is important to know that $P(x_0) = a_0$ is always true. In other words, $P(x)$ converges at its centre $x_0$. For a real number $x$, we say that $P(x)$ converges to a real number $S$ if $$S = \displaystyle\lim_{k\rightarrow\infty} \sum_{n=0}^k a_n(x-x_0)^n.$$ If this limit does not exist or is infinite, we say $P(x)$ diverges at $x$.
Theorem 1: Suppose that $$\displaystyle \lim_{n\rightarrow\infty} \left|\frac{a_{n+1}}{a_n}\right| = L.$$ Then $P(x)$ converges absolutely for all $x$ so that $|x-x_0| < \frac{1}{L}$ and diverges for $|x-x_0|>\frac{1}{L}$. (If $L=0$, the series converges absolutely for every $x$; if the limit is infinite, it converges only at $x_0$.)
That is, the power series $P(x)$ converges absolutely for all $x$ that are within $\frac{1}{L}$ of $x_0$: for every $x\in I=(x_0 - \frac{1}{L},x_0+\frac{1}{L})$, the interval of length $\frac{2}{L}$ centred at $x_0$, $P(x)$ converges absolutely. The number $\frac{1}{L}$ is very important, and we call it the radius of convergence for $P(x)$. We will refer to the interval $I$ as the open interval of convergence. Here is a nice example of these new terms.
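Theorem 1 suggests a simple computational recipe: estimate $L$ from the coefficient ratio, then take $R = 1/L$. Here is a minimal numerical sketch of that recipe (not part of the notes), using the assumed sample coefficients $a_n = 1/2^n$, for which $|a_{n+1}/a_n| = 1/2$ exactly, so $L = 1/2$ and $R = 2$.

```python
# Estimate L = lim |a_(n+1)/a_n| from Theorem 1 and the radius R = 1/L,
# using the assumed sample coefficients a_n = 1/2^n.
def ratio_limit(a, n):
    """Approximate L by evaluating the coefficient ratio at a large index n."""
    return abs(a(n + 1) / a(n))

a = lambda n: 1 / 2**n
L = ratio_limit(a, 50)
R = 1 / L
print(L, R)  # 0.5 2.0 -- so this series converges on (x0 - 2, x0 + 2)
```

For these coefficients the ratio is constant, so any index gives the exact value of $L$; for more general coefficients one would take $n$ large.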
Example 1: Recall that if $r\in (-1,1)$ then the geometric series $\displaystyle\sum_{n=0}^\infty r^n$ converges to $\frac{1}{1-r}$, and it diverges otherwise. Thinking of $r$ moving in $(-1,1)$, we can now say that the power series $$P(x) = \sum_{n=0}^\infty x^n$$ has $I=(-1,1)$ as its open interval of convergence. Moreover, we know that for every $x\in I$, $$P(x) = \frac{1}{1-x}.$$ To show that $(-1,1)$ is the open interval of convergence for this series, notice that since $a_n=1$ for every $n$, $$\lim_{n\rightarrow\infty}\left|\frac{a_{n+1}}{a_n}\right| = \lim_{n\rightarrow\infty} \left|\frac{1}{1} \right|=1.$$ By our Theorem, this means that the open interval of convergence is $I=(-1,1)$.
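A small sketch of the claim above, evaluating the partial sums $P_k(x)=\sum_{n=0}^k x^n$ at the assumed test point $x = 0.5$ and watching the error against $\frac{1}{1-x}$ shrink:

```python
# Partial sums of the geometric power series versus the limit 1/(1-x)
# for a point x inside the open interval of convergence I = (-1, 1).
def P(k, x):
    """Partial sum of sum_{n=0}^k x^n."""
    return sum(x**n for n in range(k + 1))

x = 0.5
exact = 1 / (1 - x)  # = 2.0
for k in (5, 10, 20):
    print(k, abs(P(k, x) - exact))  # the error shrinks as k grows
```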
The Proof of Theorem 1: Let $x\in I=(x_0-1/L, x_0+1/L)$. We will use the Ratio Test. Indeed, applying it to the terms $a_n(x-x_0)^n$, notice that $$\lim_{n\rightarrow\infty} \frac{|a_{n+1}||x-x_0|^{n+1}}{|a_n||x-x_0|^n} = |x-x_0|\lim_{n\rightarrow\infty}\left| \frac{a_{n+1}}{a_n}\right| = |x-x_0|L.$$ By the Ratio Test, the series converges absolutely for any $x$ so that $|x-x_0|L<1$, that is, for all $x$ with $|x-x_0|<\frac{1}{L}$. If instead $|x-x_0|>\frac{1}{L}$, the limit above exceeds $1$ and the Ratio Test gives divergence. $\blacksquare$
In terms of our original question, setting $f(x) = \frac{1}{1-x}$, we see that the geometric series formula gives a power series representation for $f$, and we know that for each $x\in I$, $$\frac{1}{1-x} = \lim_{k\rightarrow\infty} \sum_{n=0}^k x^n.$$ To see this in action, here is an animation showing how the partial sums converge to $f(x)$ for $x\in (-1,1)$.
The function $f(x) = \frac{1}{1-x}$ is very nice on the interval $(-1,1)$. In particular, it is differentiable to any order and we can find its antiderivative. Notice that $$f'(x) = \frac{1}{(1-x)^2} \textrm{ and } F(x) = \int f(x)~dx = -\ln(|1-x|)$$ where $F$ is an antiderivative of $f$. We can compare these functions to the series we get by differentiating and integrating term by term: $$\frac{1}{(1-x)^2} = \sum_{n=1}^\infty nx^{n-1} \textrm{ and } F(x) = \sum_{n=0}^\infty \frac{x^{n+1}}{n+1}$$ Here are two animations showing how partial sums of the above approximate $f'$ and $F$ respectively.
[Animations: partial sums approximating $f'$ and $F$.]
That these term-by-term differentiated and integrated series converge is not unique to the geometric series. Here is a theorem that guarantees it in general.
Theorem 2: Given a power series $P(x) = \sum_{n=0}^\infty a_n(x-x_0)^n$ with radius of convergence $R\geq 0$, each of the following series also has radius of convergence $R$: $$\sum_{n=1}^\infty na_n (x-x_0)^{n-1} \textrm{ and } \sum_{n=0}^\infty \frac{a_n}{n+1}(x-x_0)^{n+1}.$$
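One way to see why Theorem 2 is plausible: differentiating multiplies $a_n$ by $n$, and since $\frac{n+1}{n}\to 1$, the ratio limit $L$ from Theorem 1 (and hence $R = 1/L$) is unchanged. A numerical sketch of this, with the assumed coefficients $a_n = 1/2^n$:

```python
# Coefficient ratios before and after term-by-term differentiation/integration.
a = lambda n: 1 / 2**n        # original coefficients, with L = 1/2
b = lambda n: n * a(n)        # coefficients of the differentiated series
c = lambda n: a(n) / (n + 1)  # coefficients of the integrated series

n = 1000
print(abs(b(n + 1) / b(n)))  # ~ 0.5, the same L as the original series
print(abs(c(n + 1) / c(n)))  # ~ 0.5 as well
```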
The lesson of this theorem is that differentiation and integration do not affect the radius of convergence. Although we will not cover it, I do feel it necessary to mention that the endpoints of $I$ may have different convergence behavior that must be analyzed on a case-by-case basis.
Given an infinitely differentiable function $f(x)$, finding a candidate power series as a representation for $f(x)$ is reduced to finding its Taylor or MacLaurin expansion.
Definition: Given an infinitely differentiable function $f(x)$ and a real number $x_0$, its Taylor expansion centred at $x_0$ is given by $$\sum_{n=0}^\infty \frac{f^{(n)}(x_0)}{n!}(x-x_0)^n. $$ If $x_0=0$ then we call the Taylor series centred at $x_0=0$ the MacLaurin series for $f$. This series is: $$\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}x^n.$$
Example 2: Let's find the MacLaurin series for $f(x)=\sin(x)$. We see that $f'(x)=\cos(x)$, $f''(x) = -\sin(x)$, $f'''(x) = -\cos(x)$, and $f^{(iv)}(x) = \sin(x)$. Higher order derivatives will follow the same pattern. Moreover, $f(0) = 0$, $f'(0) = 1$, $f''(0) = 0$, $f'''(0) = -1$ and $f^{(iv)}(0) = 0$. So, we see that if $n$ is even, $f^{(n)}(0) = 0$ and if it is odd, $f^{(n)}(0) = \pm 1$. Using this, we write our MacLaurin expansion as $$\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}x^n = f(0) + f'(0)x + \frac{f''(0)}{2}x^2+\frac{f'''(0)}{3!} x^3+ \frac{f^{(iv)}(0)}{4!}x^4 +\dots = x-\frac{1}{3!}x^3 + \frac{1}{5!}x^5 - \frac{1}{7!}x^7 + \frac{1}{9!}x^9+\dots$$ If we are careful, we can see that this series is given by $$\sum_{n=0}^\infty (-1)^n\frac{x^{2n+1}}{(2n+1)!}.$$
To investigate the convergence of the partial sums, here is a Desmos animation:
Here you see the red colored plot of $f(x)=\sin(x)$ and the blue colored plot of the partial sum $M_k(x)=\sum_{n=0}^k (-1)^n\frac{x^{2n+1}}{(2n+1)!}$. You can see that the graphs align very quickly, allowing us to approximate $\sin(x)$ accurately. Let's go back to our questions at the beginning and compare your calculator's value for $\sin(0.5) = 0.4794255386$ (from the calculator on my phone) with the value of $M_{15}(0.5)$. Using Desmos we can evaluate this to find $M_{15}(0.5)=0.479425538604$. You can see that we reproduced the value from the calculator to better than $10^{-10}$, since $|\sin(0.5)-M_{15}(0.5)| < 10^{-10}$.
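The same comparison works outside Desmos. Here is a short sketch reproducing it, evaluating the partial sum $M_{15}$ of the MacLaurin series for $\sin$ at $x = 0.5$ and comparing against the library value:

```python
import math

def M(k, x):
    """Partial sum sum_{n=0}^k (-1)^n x^(2n+1) / (2n+1)! of the sin series."""
    return sum((-1)**n * x**(2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(k + 1))

x = 0.5
print(M(15, x))                     # ~ 0.4794255386, matching the calculator
print(abs(M(15, x) - math.sin(x)))  # well below 10^-10
```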
There are exercises in Homework #5 to support this note. Please try some examples!