# Real analysis

The first four partial sums of the Fourier series for a square wave. Fourier series are an important tool in real analysis.

In mathematics, real analysis is the branch of mathematical analysis that studies the behavior of real numbers, sequences and series of real numbers, and real-valued functions.[1] Some particular properties of real-valued sequences and functions that real analysis studies include convergence, limits, continuity, smoothness, differentiability and integrability.

Real analysis is distinguished from complex analysis, which deals with the study of complex numbers and their functions.

## Scope

### Construction of the real numbers

The theorems of real analysis rely intimately upon the structure of the real number line. The real number system consists of a set (${\displaystyle \mathbb {R} }$ ), together with two binary operations denoted + and ⋅ (addition and multiplication), and an order denoted <. The operations make the real numbers a field, and, along with the order, an ordered field. The real number system is the unique complete ordered field, in the sense that any other complete ordered field is isomorphic to it. Intuitively, completeness means that there are no 'gaps' in the real numbers. In particular, this property distinguishes the real numbers from other ordered fields (e.g., the rational numbers ${\displaystyle \mathbb {Q} }$ ) and is critical to the proof of several key properties of functions of the real numbers. The completeness of the reals is often conveniently expressed as the least upper bound property (see below).

There are several ways of formalizing the definition of the real numbers. Modern approaches consist of providing a list of axioms, and a proof of the existence of a model for them that has the above properties. Moreover, one may show that any two models are isomorphic, which means that all models have exactly the same properties, and that one may forget how the model is constructed when using the real numbers. Some of these constructions are described in the main article.

### Order properties of the real numbers

The real numbers have several important lattice-theoretic properties that are absent in the complex numbers. Most importantly, the real numbers form an ordered field, in which sums and products of positive numbers are also positive. Moreover, the ordering of the real numbers is total, and the real numbers have the least upper bound property:

Every nonempty subset of ${\displaystyle \mathbb {R} }$  that has an upper bound has a least upper bound that is also a real number.

These order-theoretic properties lead to a number of important results in real analysis, such as the monotone convergence theorem, the intermediate value theorem and the mean value theorem.

However, while the results in real analysis are stated for real numbers, many of these results can be generalized to other mathematical objects. In particular, many ideas in functional analysis and operator theory generalize properties of the real numbers – such generalizations include the theories of Riesz spaces and positive operators. Mathematicians also consider the real and imaginary parts of complex sequences, or the pointwise evaluation of operator sequences.

### Topological properties of the real numbers

Many of the theorems of real analysis are consequences of the topological properties of the real number line. The order properties of the real numbers described above are closely related to these topological properties. As a topological space, the real numbers carry a standard topology, which is the order topology induced by the order ${\displaystyle <}$ . Alternatively, by defining the metric or distance function ${\displaystyle d:\mathbb {R} \times \mathbb {R} \to \mathbb {R} _{\geq 0}}$  using the absolute value function as ${\displaystyle d(x,y)=|x-y|}$ , the real numbers become the prototypical example of a metric space. The topology induced by the metric ${\displaystyle d}$  turns out to be identical to the standard topology induced by the order ${\displaystyle <}$ . Theorems like the intermediate value theorem that are essentially topological in nature can often be proved in the more general setting of metric or topological spaces rather than in ${\displaystyle \mathbb {R} }$  only. Often, such proofs are shorter or simpler than classical proofs that apply direct methods.

### Sequences

A sequence is a function whose domain is a countable, totally ordered set. The domain is usually taken to be the natural numbers[2], although it is occasionally convenient to also consider bidirectional sequences indexed by the set of all integers, including negative indices.

Of interest in real analysis, a real-valued sequence, here indexed by the natural numbers, is a map ${\displaystyle a:\mathbb {N} \to \mathbb {R} ,\ n\mapsto a_{n}}$ . Each ${\displaystyle a(n)=a_{n}}$  is referred to as a term (or, less commonly, an element) of the sequence. A sequence is rarely denoted explicitly as a function; instead, by convention, it is almost always notated as if it were an ordered ∞-tuple, with individual terms or a general term enclosed in parentheses:

${\displaystyle (a_{n})=(a_{n})_{n\in \mathbb {N} }=(a_{1},a_{2},a_{3},\cdots )}$ .[3]
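Since a sequence is formally a function on ${\displaystyle \mathbb {N} }$ , it can be modeled directly in code. The following sketch (Python; the particular sequence ${\displaystyle a_{n}=1/n}$  is an illustrative choice, not taken from the text) shows the correspondence between the functional and tuple views:

```python
# A real-valued sequence is a map a: N -> R; model it as a function of n.
def a(n: int) -> float:
    """Term a_n of the illustrative sequence (1/n), indexed from n = 1."""
    return 1.0 / n

# The conventional tuple notation (a_1, a_2, a_3, ...) corresponds to
# listing successive values of the function:
first_terms = tuple(a(n) for n in range(1, 6))
```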

A sequence that tends to a limit (i.e., ${\textstyle \lim _{n\to \infty }a_{n}}$  exists) is said to be convergent; otherwise it is divergent. (See the section on limits and convergence for details.) A real-valued sequence ${\displaystyle (a_{n})}$  is bounded if there exists ${\displaystyle M\in \mathbb {R} }$  such that ${\displaystyle |a_{n}|\leq M}$  for all ${\displaystyle n\in \mathbb {N} }$ . A real-valued sequence ${\displaystyle (a_{n})}$  is monotonically increasing or decreasing if

${\displaystyle a_{1}\leq a_{2}\leq a_{3}\leq \ldots }$  or ${\displaystyle a_{1}\geq a_{2}\geq a_{3}\geq \ldots }$

holds, respectively. If either holds, the sequence is said to be monotonic. The monotonicity is strict if the chained inequalities hold with ${\displaystyle \leq }$  or ${\displaystyle \geq }$  replaced by < or >.

Given a sequence ${\displaystyle (a_{n})}$ , another sequence ${\displaystyle (b_{k})}$  is a subsequence of ${\displaystyle (a_{n})}$  if ${\displaystyle b_{k}=a_{n_{k}}}$  for all positive integers ${\displaystyle k}$  and ${\displaystyle (n_{k})}$  is a strictly increasing sequence of natural numbers.

### Limits and convergence

Roughly speaking, a limit is the value that a function or a sequence "approaches" as the input or index approaches some value.[4] (This value can include the symbols ${\displaystyle \pm \infty }$  when addressing the behavior of a function or sequence as the variable increases or decreases without bound.) The idea of a limit is fundamental to calculus (and mathematical analysis in general) and its formal definition is used in turn to define notions like continuity, derivatives, and integrals. (In fact, the study of limiting behavior has been used as a characteristic that distinguishes calculus and mathematical analysis from other branches of mathematics.)

The concept of a limit was informally introduced for functions by Newton and Leibniz at the end of the 17th century, for building infinitesimal calculus. For sequences, the concept was introduced by Cauchy and made rigorous at the end of the 19th century by Bolzano and Weierstrass, who gave the modern ε-δ definition, which follows.

Definition. Let ${\displaystyle f}$  be a real-valued function defined on ${\displaystyle E\subset \mathbb {R} }$ . We say that ${\displaystyle f(x)}$  tends to ${\displaystyle L}$  as ${\displaystyle x}$  approaches ${\displaystyle x_{0}}$ , or that the limit of ${\displaystyle f(x)}$  as ${\displaystyle x}$  approaches ${\displaystyle x_{0}}$  is ${\displaystyle L}$  if, for any ${\displaystyle \epsilon >0}$ , there exists ${\displaystyle \delta >0}$  such that for all ${\displaystyle x\in E}$ , ${\displaystyle 0<|x-x_{0}|<\delta }$  implies that ${\displaystyle |f(x)-L|<\epsilon }$ . We write this symbolically as

${\displaystyle f(x)\to L\ \ {\text{as}}\ \ x\to x_{0}}$ , or ${\displaystyle \lim _{x\to x_{0}}f(x)=L}$ .

Intuitively, this definition can be thought of in the following way: We say that ${\displaystyle f(x)\to L}$  as ${\displaystyle x\to x_{0}}$ , when, given any positive number ${\displaystyle \epsilon }$ , no matter how small, we can always find a ${\displaystyle \delta }$ , such that we can guarantee that ${\displaystyle f(x)}$  and ${\displaystyle L}$  are less than ${\displaystyle \epsilon }$  apart, as long as ${\displaystyle x}$  (in the domain of ${\displaystyle f}$ ) is a real number that is less than ${\displaystyle \delta }$  away from ${\displaystyle x_{0}}$  but distinct from ${\displaystyle x_{0}}$ . The purpose of the last stipulation, which corresponds to the condition ${\displaystyle 0<|x-x_{0}|}$  in the definition, is to ensure that ${\displaystyle \lim _{x\to x_{0}}f(x)=L}$  does not imply anything about the value of ${\displaystyle f(x_{0})}$  itself. Actually, ${\displaystyle x_{0}}$  does not even need to be in the domain of ${\displaystyle f}$  in order for ${\displaystyle \lim _{x\to x_{0}}f(x)}$  to exist.
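As a numerical illustration of the last point, consider ${\displaystyle f(x)=(x^{2}-1)/(x-1)}$ , which is undefined at ${\displaystyle x_{0}=1}$  yet satisfies ${\displaystyle \lim _{x\to 1}f(x)=2}$ . The sketch below (Python; a grid-based spot check of finitely many points, not a proof) tests the ε-δ condition:

```python
def f(x: float) -> float:
    """f(x) = (x^2 - 1)/(x - 1); note that f is not defined at x = 1."""
    return (x * x - 1.0) / (x - 1.0)

def check_limit(f, x0: float, L: float, eps: float, delta: float,
                trials: int = 1000) -> bool:
    """Spot-check the epsilon-delta condition at sample points with
    0 < |x - x0| < delta (a finite numerical probe, not a proof)."""
    for k in range(1, trials + 1):
        h = delta * k / (trials + 1)          # 0 < h < delta
        for x in (x0 - h, x0 + h):
            if abs(f(x) - L) >= eps:
                return False
    return True

# Here |f(x) - 2| = |x - 1| for x != 1, so delta = eps works.
ok = check_limit(f, x0=1.0, L=2.0, eps=1e-3, delta=1e-3)
```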

In a slightly different but related context, the concept of a limit applies to the behavior of a sequence ${\displaystyle (a_{n})}$  when ${\displaystyle n}$  becomes large.

Definition. Let ${\displaystyle (a_{n})}$  be a real-valued sequence. We say that ${\displaystyle (a_{n})}$  converges to ${\displaystyle a}$  if, for any ${\displaystyle \epsilon >0}$ , there exists a natural number ${\displaystyle N}$  such that ${\displaystyle n\geq N}$  implies that ${\displaystyle |a-a_{n}|<\epsilon }$ . We write this symbolically as

${\displaystyle a_{n}\to a\ \ {\text{as}}\ \ n\to \infty }$ , or ${\displaystyle \lim _{n\to \infty }a_{n}=a}$ ;

if ${\displaystyle (a_{n})}$  fails to converge, we say that ${\displaystyle (a_{n})}$  diverges.
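The definition can be made concrete by exhibiting a witness ${\displaystyle N}$  for each ${\displaystyle \epsilon }$ . A Python sketch (the sequence ${\displaystyle a_{n}=1/n}$ , converging to 0, is an illustrative choice):

```python
import math

def a(n: int) -> float:
    """Illustrative sequence a_n = 1/n, which converges to 0."""
    return 1.0 / n

def witness_N(eps: float) -> int:
    """Smallest N with 1/N < eps, so that n >= N implies |a_n - 0| < eps."""
    return math.floor(1.0 / eps) + 1

eps = 0.01
N = witness_N(eps)   # N = 101 for eps = 0.01, since 1/101 < 0.01 <= 1/100
```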

Generalizing to a real-valued function of a real variable, a slight modification of this definition (replacement of sequence ${\displaystyle (a_{n})}$  and term ${\displaystyle a_{n}}$  by function ${\displaystyle f}$  and value ${\displaystyle f(x)}$  and natural numbers ${\displaystyle N}$  and ${\displaystyle n}$  by real numbers ${\displaystyle M}$  and ${\displaystyle x}$ , respectively) yields the definition of the limit of ${\displaystyle f(x)}$  as ${\displaystyle x}$  increases without bound, notated ${\displaystyle \lim _{x\to \infty }f(x)}$ . Reversing the inequality ${\displaystyle x\geq M}$  to ${\displaystyle x\leq M}$  gives the corresponding definition of the limit of ${\displaystyle f(x)}$  as ${\displaystyle x}$  decreases without bound, ${\displaystyle \lim _{x\to -\infty }f(x)}$ .

Sometimes, it is useful to conclude that a sequence converges, even though the value to which it converges is unknown or irrelevant. In these cases, the concept of a Cauchy sequence is useful.

Definition. Let ${\displaystyle (a_{n})}$  be a real-valued sequence. We say that ${\displaystyle (a_{n})}$  is a Cauchy sequence if, for any ${\displaystyle \epsilon >0}$ , there exists a natural number ${\displaystyle N}$  such that ${\displaystyle m,n\geq N}$  implies that ${\displaystyle |a_{m}-a_{n}|<\epsilon }$ .

It can be shown that a real-valued sequence is Cauchy if and only if it is convergent. This property of the real numbers is expressed by saying that the real numbers endowed with the standard metric, ${\displaystyle (\mathbb {R} ,|\cdot |)}$ , is a complete metric space. In a general metric space, however, a Cauchy sequence need not converge.
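A finite numerical probe of the Cauchy condition looks as follows (Python; checking finitely many indices can only suggest, never prove, the condition, and ${\displaystyle a_{n}=1/n}$  is again an illustrative choice):

```python
def a(n: int) -> float:
    """Illustrative sequence a_n = 1/n (Cauchy, since it converges)."""
    return 1.0 / n

def window_spread(a, N: int, horizon: int) -> float:
    """Largest |a_m - a_n| over N <= m, n <= horizon; for a finite window
    this equals max(terms) - min(terms)."""
    terms = [a(n) for n in range(N, horizon + 1)]
    return max(terms) - min(terms)

# All pairwise gaps among a_1000, ..., a_10000 stay below eps = 1e-3.
spread = window_spread(a, N=1000, horizon=10_000)
```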

In addition, for real-valued sequences that are monotonic, it can be shown that the sequence is bounded if and only if it is convergent.

#### Uniform and pointwise convergence for sequences of functions

In addition to sequences of numbers, one may also speak of sequences of functions on ${\displaystyle E\subset \mathbb {R} }$ , that is, infinite, ordered families of functions ${\displaystyle f_{n}:E\to \mathbb {R} }$ , denoted ${\displaystyle (f_{n})_{n=1}^{\infty }}$ , and their convergence properties. However, in the case of sequences of functions, there are two kinds of convergence, known as pointwise convergence and uniform convergence, that need to be distinguished.

Roughly speaking, pointwise convergence of functions ${\displaystyle f_{n}}$  to a limiting function ${\displaystyle f:E\to \mathbb {R} }$ , denoted ${\displaystyle f_{n}\rightarrow f}$ , simply means that given any ${\displaystyle x\in E}$ , ${\displaystyle f_{n}(x)\to f(x)}$  as ${\displaystyle n\to \infty }$ . In contrast, uniform convergence is a stronger type of convergence, in the sense that a uniformly convergent sequence of functions also converges pointwise, but not conversely. Uniform convergence requires members of the family of functions, ${\displaystyle f_{n}}$ , to fall within some error ${\displaystyle \epsilon >0}$  of ${\displaystyle f}$  for every value of ${\displaystyle x\in E}$ , whenever ${\displaystyle n\geq N}$ , for some integer ${\displaystyle N}$ . For a family of functions to uniformly converge, sometimes denoted ${\displaystyle f_{n}\rightrightarrows f}$ , such a value of ${\displaystyle N}$  must exist for any ${\displaystyle \epsilon >0}$  given, no matter how small. Intuitively, we can visualize this situation by imagining that, for a large enough ${\displaystyle N}$ , the functions ${\displaystyle f_{N},f_{N+1},f_{N+2},\ldots }$  are all confined within a 'tube' of width ${\displaystyle 2\epsilon }$  about ${\displaystyle f}$  (i.e., between ${\displaystyle f-\epsilon }$  and ${\displaystyle f+\epsilon }$ ) for every value in their domain ${\displaystyle E}$ .
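The classic illustration is ${\displaystyle f_{n}(x)=x^{n}}$ : on ${\displaystyle [0,0.9]}$  the worst-case error ${\displaystyle 0.9^{n}}$  tends to 0 (uniform convergence to the zero function), while on ${\displaystyle [0,1)}$  the supremum of the error stays near 1 for every ${\displaystyle n}$  (pointwise convergence only). A numerical sketch (Python; finite sample grids stand in for the full domains):

```python
def f_n(n: int, x: float) -> float:
    """The n-th function of the family f_n(x) = x^n."""
    return x ** n

def sup_error(n: int, xs) -> float:
    """Largest |f_n(x) - 0| over the sample points xs; the pointwise
    limit of f_n on [0, 1) is the zero function."""
    return max(abs(f_n(n, x)) for x in xs)

grid_uniform = [0.9 * i / 1000 for i in range(1001)]      # sample of [0, 0.9]
grid_pointwise = [0.999 * i / 1000 for i in range(1001)]  # sample of [0, 0.999]

e_uniform = sup_error(50, grid_uniform)      # 0.9**50, already small
e_pointwise = sup_error(50, grid_pointwise)  # 0.999**50, still near 1
```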

The distinction between pointwise and uniform convergence is important when exchanging the order of two limiting operations (e.g., taking a limit, a derivative, or integral) is desired: in order for the exchange to be well-behaved, many theorems of real analysis call for uniform convergence. For example, a sequence of continuous functions (see below) is guaranteed to converge to a continuous limiting function if the convergence is uniform, while the limiting function may not be continuous if convergence is only pointwise. Karl Weierstrass is generally credited for clearly defining the concept of uniform convergence and fully investigating its implications.

### Compactness

Compactness is a concept from general topology that plays an important role in many of the theorems of real analysis. The property of compactness is a generalization of the notion of a set being closed and bounded. (In the context of real analysis, these notions are equivalent: a set in Euclidean space is compact if and only if it is closed and bounded.) Briefly, a closed set contains all of its boundary points, while a set is bounded if there exists a real number such that the distance between any two points of the set is less than that number. In ${\displaystyle \mathbb {R} }$ , sets that are closed and bounded, and therefore compact, include the empty set, any finite number of points, closed intervals, and their finite unions. However, this list is not exhaustive; for instance, the set ${\displaystyle \{1/n:n\in \mathbb {N} \}\cup \{0\}}$  is another example of a compact set. On the other hand, the set ${\displaystyle \{1/n:n\in \mathbb {N} \}}$  is not compact because it is bounded but not closed, as the boundary point 0 is not a member of the set. The set ${\displaystyle [0,\infty )}$  is also not compact because it is closed but not bounded.

For subsets of the real numbers, there are several equivalent definitions of compactness.

Definition. A set ${\displaystyle E\subset \mathbb {R} }$  is compact if it is closed and bounded.

This definition also holds for Euclidean space of any finite dimension, ${\displaystyle \mathbb {R} ^{n}}$ , but it is not valid for metric spaces in general. The equivalence of the definition with the definition of compactness based on subcovers, given later in this section, is known as the Heine-Borel theorem.

A more general definition that applies to all metric spaces uses the notion of a subsequence (see above).

Definition. A set ${\displaystyle E}$  in a metric space is compact if every sequence in ${\displaystyle E}$  has a convergent subsequence.

This particular property is known as subsequential compactness. In ${\displaystyle \mathbb {R} }$ , a set is subsequentially compact if and only if it is closed and bounded, making this definition equivalent to the one given above. Subsequential compactness is equivalent to the definition of compactness based on subcovers for metric spaces, but not for topological spaces in general.

The most general definition of compactness relies on the notion of open covers and subcovers, which is applicable to topological spaces (and thus to metric spaces and ${\displaystyle \mathbb {R} }$  as special cases). In brief, a collection of open sets ${\displaystyle U_{\alpha }}$  is said to be an open cover of set ${\displaystyle X}$  if the union of these sets is a superset of ${\displaystyle X}$ . This open cover is said to have a finite subcover if a finite subcollection of the ${\displaystyle U_{\alpha }}$  could be found that also covers ${\displaystyle X}$ .

Definition. A set ${\displaystyle X}$  in a topological space is compact if every open cover of ${\displaystyle X}$  has a finite subcover.

Compact sets are well-behaved with respect to properties like convergence and continuity. For instance, any Cauchy sequence in a compact metric space is convergent. As another example, the image of a compact metric space under a continuous map is also compact.

### Continuity

A function from the set of real numbers to the real numbers can be represented by a graph in the Cartesian plane; such a function is continuous if, roughly speaking, the graph is a single unbroken curve with no "holes" or "jumps".

There are several ways to make this intuition mathematically rigorous. Several definitions of varying levels of generality can be given. In cases where two or more definitions are applicable, they are readily shown to be equivalent to one another, so the most convenient definition can be used to determine whether a given function is continuous or not. In the first definition given below, ${\displaystyle f:I\to \mathbb {R} }$  is a function defined on a non-degenerate interval ${\displaystyle I}$  of the set of real numbers as its domain. Some possibilities include ${\displaystyle I=\mathbb {R} }$ , the whole set of real numbers, an open interval ${\displaystyle I=(a,b)=\{x\in \mathbb {R} \,|\,a<x<b\},}$  or a closed interval ${\displaystyle I=[a,b]=\{x\in \mathbb {R} \,|\,a\leq x\leq b\}.}$  Here, ${\displaystyle a}$  and ${\displaystyle b}$  are distinct real numbers, and we exclude the case of ${\displaystyle I}$  being empty or consisting of only one point, in particular.

Definition. If ${\displaystyle I\subset \mathbb {R} }$  is a non-degenerate interval, we say that ${\displaystyle f:I\to \mathbb {R} }$  is continuous at ${\displaystyle p\in I}$  if ${\displaystyle \lim _{x\to p}f(x)=f(p)}$ . We say that ${\displaystyle f}$  is a continuous map if ${\displaystyle f}$  is continuous at every ${\displaystyle p\in I}$ .

In contrast to the requirements for ${\displaystyle f}$  to have a limit at a point ${\displaystyle p}$ , which do not constrain the behavior of ${\displaystyle f}$  at ${\displaystyle p}$  itself, the following two conditions, in addition to the existence of ${\textstyle \lim _{x\to p}f(x)}$ , must also hold in order for ${\displaystyle f}$  to be continuous at ${\displaystyle p}$ : (i) ${\displaystyle f}$  must be defined at ${\displaystyle p}$ , i.e., ${\displaystyle p}$  is in the domain of ${\displaystyle f}$ ; and (ii) ${\displaystyle f(x)\to f(p)}$  as ${\displaystyle x\to p}$ . The definition above actually applies to any domain ${\displaystyle E}$  that does not contain an isolated point, or equivalently, ${\displaystyle E}$  where every ${\displaystyle p\in E}$  is a limit point of ${\displaystyle E}$ . A more general definition applying to ${\displaystyle f:X\to \mathbb {R} }$  with a general domain ${\displaystyle X\subset \mathbb {R} }$  is the following:

Definition. If ${\displaystyle X}$  is an arbitrary subset of ${\displaystyle \mathbb {R} }$ , we say that ${\displaystyle f:X\to \mathbb {R} }$  is continuous at ${\displaystyle p\in X}$  if, for any ${\displaystyle \epsilon >0}$ , there exists ${\displaystyle \delta >0}$  such that for all ${\displaystyle x\in X}$ , ${\displaystyle |x-p|<\delta }$  implies that ${\displaystyle |f(x)-f(p)|<\epsilon }$ . We say that ${\displaystyle f}$  is a continuous map if ${\displaystyle f}$  is continuous at every ${\displaystyle p\in X}$ .

A consequence of this definition is that ${\displaystyle f}$  is trivially continuous at any isolated point ${\displaystyle p\in X}$ . This somewhat unintuitive treatment of isolated points is necessary to ensure that our definition of continuity for functions on the real line is consistent with the most general definition of continuity for maps between topological spaces (which includes metric spaces and ${\displaystyle \mathbb {R} }$  in particular as special cases). This definition, which extends beyond the scope of our discussion of real analysis, is given below for completeness.

Definition. If ${\displaystyle X}$  and ${\displaystyle Y}$  are topological spaces, we say that ${\displaystyle f:X\to Y}$  is continuous at ${\displaystyle p\in X}$  if ${\displaystyle f^{-1}(V)}$  is a neighborhood of ${\displaystyle p}$  in ${\displaystyle X}$  for every neighborhood ${\displaystyle V}$  of ${\displaystyle f(p)}$  in ${\displaystyle Y}$ . We say that ${\displaystyle f}$  is a continuous map if ${\displaystyle f^{-1}(U)}$  is open in ${\displaystyle X}$  for every ${\displaystyle U}$  open in ${\displaystyle Y}$ .

(Here, ${\displaystyle f^{-1}(S)}$  refers to the preimage of ${\displaystyle S\subset Y}$  under ${\displaystyle f}$ .)

#### Uniform continuity

Definition. If ${\displaystyle X}$  is a subset of the real numbers, we say a function ${\displaystyle f:X\to \mathbb {R} }$  is uniformly continuous on ${\displaystyle X}$  if, for any ${\displaystyle \epsilon >0}$ , there exists a ${\displaystyle \delta >0}$  such that for all ${\displaystyle x,y\in X}$ , ${\displaystyle |x-y|<\delta }$  implies that ${\displaystyle |f(x)-f(y)|<\epsilon }$ .

Explicitly, when a function is uniformly continuous on ${\displaystyle X}$ , the choice of ${\displaystyle \delta }$  needed to fulfill the definition must work for all of ${\displaystyle X}$  for a given ${\displaystyle \epsilon }$ . In contrast, when a function is continuous at every point ${\displaystyle p\in X}$  (or said to be continuous on ${\displaystyle X}$ ), the choice of ${\displaystyle \delta }$  may depend on both ${\displaystyle \epsilon }$  and ${\displaystyle p}$ . Importantly, in contrast to simple continuity, uniform continuity is a property of a function that only makes sense with a specified domain; to speak of uniform continuity at a single point ${\displaystyle p}$  is meaningless.

On a compact set, it is easily shown that all continuous functions are uniformly continuous. If ${\displaystyle E}$  is a bounded noncompact subset of ${\displaystyle \mathbb {R} }$ , then there exists ${\displaystyle f:E\to \mathbb {R} }$  that is continuous but not uniformly continuous. As a simple example, consider ${\displaystyle f:(0,1)\to \mathbb {R} }$  defined by ${\displaystyle f(x)=1/x}$ . By choosing points close to 0, we can always make ${\displaystyle |f(x)-f(y)|>\epsilon }$  for any single choice of ${\displaystyle \delta >0}$ , for a given ${\displaystyle \epsilon >0}$ .
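This failure can be made fully explicit: whatever ${\displaystyle \delta >0}$  is proposed, the pair ${\displaystyle x=t}$ , ${\displaystyle y=t/2}$  with ${\displaystyle t}$  small lies within ${\displaystyle \delta }$ , while ${\displaystyle |f(x)-f(y)|=1/t}$  is as large as desired. A Python sketch of the witness construction:

```python
def f(x: float) -> float:
    """f(x) = 1/x on (0, 1): continuous, but not uniformly continuous."""
    return 1.0 / x

def violates(eps: float, delta: float) -> bool:
    """Exhibit x, y in (0, 1) with |x - y| < delta yet
    |f(x) - f(y)| >= eps, for any proposed eps and delta."""
    # With x = t and y = t/2: |x - y| = t/2 while |f(x) - f(y)| = 1/t,
    # so a small enough t defeats any (eps, delta) pair.
    t = min(delta, 1.0 / eps, 1.0) / 2.0
    x, y = t, t / 2.0
    return abs(x - y) < delta and abs(f(x) - f(y)) >= eps
```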

#### Absolute continuity

Definition. Let ${\displaystyle I\subset \mathbb {R} }$  be an interval on the real line. A function ${\displaystyle f:I\to \mathbb {R} }$  is said to be absolutely continuous on ${\displaystyle I}$  if for every positive number ${\displaystyle \epsilon }$ , there is a positive number ${\displaystyle \delta }$  such that whenever a finite sequence of pairwise disjoint sub-intervals ${\displaystyle (x_{1},y_{1}),(x_{2},y_{2}),\ldots ,(x_{n},y_{n})}$  of ${\displaystyle I}$  satisfies[5]

${\displaystyle \sum _{k=1}^{n}(y_{k}-x_{k})<\delta }$

then

${\displaystyle \displaystyle \sum _{k=1}^{n}|f(y_{k})-f(x_{k})|<\epsilon .}$

Absolutely continuous functions are continuous: consider the case n = 1 in this definition. The collection of all absolutely continuous functions on I is denoted AC(I). Absolute continuity is an important concept in the Lebesgue theory of integration, allowing the formulation of a generalized version of the fundamental theorem of calculus that applies to the Lebesgue integral.

### Differentiation

The notion of the derivative of a function or differentiability originates from the concept of approximating a function near a given point using the "best" linear approximation. This approximation, if it exists, is unique and is given by the line that is tangent to the function at the given point ${\displaystyle a}$ , and the slope of the line is the derivative of the function at ${\displaystyle a}$ .

A function ${\displaystyle f:\mathbb {R} \to \mathbb {R} }$  is differentiable at ${\displaystyle a}$  if the limit

${\displaystyle f'(a)=\lim _{h\to 0}{\frac {f(a+h)-f(a)}{h}}}$

exists. This limit is known as the derivative of ${\displaystyle f}$  at ${\displaystyle a}$ , and the function ${\displaystyle f'}$ , possibly defined on only a subset of ${\displaystyle \mathbb {R} }$ , is the derivative (or derivative function) of ${\displaystyle f}$ . If the derivative exists everywhere, the function is said to be differentiable.
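For a numerical check at a particular point, the defining limit can be approximated with a small but finite step (a Python sketch; the function ${\displaystyle x^{2}}$  and the step size are illustrative choices):

```python
def difference_quotient(f, a: float, h: float = 1e-6) -> float:
    """(f(a + h) - f(a)) / h, a finite-h approximation of the limit
    that defines the derivative f'(a)."""
    return (f(a + h) - f(a)) / h

# For f(x) = x^2 the quotient equals 6 + h at a = 3, so it tends to
# f'(3) = 6 as h -> 0.
approx = difference_quotient(lambda x: x * x, 3.0)
```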

As a simple consequence of the definition, ${\displaystyle f}$  is continuous at ${\displaystyle a}$  if it is differentiable there. Differentiability is therefore a stronger regularity condition (condition describing the "smoothness" of a function) than continuity, and it is possible for a function to be continuous on the entire real line but not differentiable anywhere (see Weierstrass's nowhere differentiable continuous function). It is possible to discuss the existence of higher-order derivatives as well, by finding the derivative of a derivative function, and so on.

One can classify functions by their differentiability class. The class ${\displaystyle C^{0}}$  (sometimes ${\displaystyle C^{0}([a,b])}$  to indicate the interval of applicability) consists of all continuous functions. The class ${\displaystyle C^{1}}$  consists of all differentiable functions whose derivative is continuous; such functions are called continuously differentiable. Thus, a ${\displaystyle C^{1}}$  function is exactly a function whose derivative exists and is of class ${\displaystyle C^{0}}$ . In general, the classes ${\displaystyle C^{k}}$  can be defined recursively by declaring ${\displaystyle C^{0}}$  to be the set of all continuous functions and declaring ${\displaystyle C^{k}}$  for any positive integer ${\displaystyle k}$  to be the set of all differentiable functions whose derivative is in ${\displaystyle C^{k-1}}$ . In particular, ${\displaystyle C^{k}}$  is contained in ${\displaystyle C^{k-1}}$  for every ${\displaystyle k}$ , and there are examples to show that this containment is strict. Class ${\displaystyle C^{\infty }}$  is the intersection of the sets ${\displaystyle C^{k}}$  as ${\displaystyle k}$  varies over the non-negative integers, and the members of this class are known as the smooth functions. Class ${\displaystyle C^{\omega }}$  consists of all analytic functions, and is strictly contained in ${\displaystyle C^{\infty }}$  (see bump function for a smooth function that is not analytic).

The chain rule, mean value theorem, l'Hospital's rule, and Taylor's theorem are important results in the elementary theory of the derivative.

### Series

A series formalizes the imprecise notion of taking the sum of an endless sequence of numbers. The idea that taking the sum of an "infinite" number of terms can lead to a finite result was counterintuitive to the ancient Greeks and led to the formulation of a number of paradoxes by Zeno and other philosophers. The modern notion of assigning a value to a series avoids dealing with the ill-defined notion of adding an "infinite" number of terms. Instead, the finite sum of the first ${\displaystyle n}$  terms of the sequence, known as a partial sum, is considered, and the concept of a limit is applied to the sequence of partial sums as ${\displaystyle n}$  grows without bound. The series is assigned the value of this limit, if it exists.

Given an (infinite) sequence ${\displaystyle (a_{n})}$ , we can define an associated series as the formal mathematical object ${\textstyle a_{1}+a_{2}+a_{3}+\cdots =\sum _{n=1}^{\infty }a_{n}}$ , sometimes simply written as ${\textstyle \sum a_{n}}$ . The partial sums of a series ${\textstyle \sum a_{n}}$  are the numbers ${\textstyle s_{n}=\sum _{j=1}^{n}a_{j}}$ . A series ${\textstyle \sum a_{n}}$  is said to be convergent if the sequence consisting of its partial sums, ${\displaystyle (s_{n})}$ , is convergent; otherwise it is divergent. The sum of a convergent series is defined as the number ${\textstyle s=\lim _{n\to \infty }s_{n}}$ .

It is to be emphasized that the word "sum" is used here in a metaphorical sense as a shorthand for taking the limit of a sequence of partial sums and should not be interpreted as simply "adding" an infinite number of terms. For instance, in contrast to the behavior of finite sums, rearranging the terms of an infinite series may result in convergence to a different number (see the article on the Riemann rearrangement theorem for further discussion).
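The rearrangement phenomenon can be sketched algorithmically: greedily take unused positive terms of ${\textstyle \sum (-1)^{n-1}/n}$  until the running sum exceeds a chosen target, then unused negative terms until it falls below, and repeat; the partial sums of the rearranged series then approach the target rather than the original sum. A Python sketch (the target 0.3 is an arbitrary illustrative choice):

```python
def rearranged_partial_sum(target: float, steps: int) -> float:
    """Greedily rearrange the terms of sum (-1)^(n-1)/n so that the
    partial sums are driven toward `target` (the Riemann rearrangement
    idea; every term is eventually used)."""
    pos = 1     # next unused positive term is 1/pos (pos odd)
    neg = 2     # next unused negative term is -1/neg (neg even)
    s = 0.0
    for _ in range(steps):
        if s <= target:
            s += 1.0 / pos
            pos += 2
        else:
            s -= 1.0 / neg
            neg += 2
    return s

# The same terms as the alternating harmonic series, rearranged, now
# cluster near 0.3 instead of log 2.
s = rearranged_partial_sum(0.3, 100_000)
```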

An example of a convergent series is a geometric series which forms the basis of one of Zeno's famous paradoxes:

${\displaystyle \sum _{n=1}^{\infty }{\frac {1}{2^{n}}}={\frac {1}{2}}+{\frac {1}{4}}+{\frac {1}{8}}+\cdots =1}$ .

In contrast, the harmonic series has been known since the Middle Ages to be a divergent series:

${\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n}}=1+{\frac {1}{2}}+{\frac {1}{3}}+\cdots =\infty }$ .

(Here, "${\displaystyle =\infty }$ " is merely a notational convention to indicate that the partial sums of the series grow without bound.)
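The divergence is strikingly slow: the partial sums grow roughly like ${\displaystyle \ln n}$ . A numerical sketch (Python; the comparison against ${\displaystyle \ln n+\gamma }$ , with the Euler–Mascheroni constant ${\displaystyle \gamma \approx 0.5772}$ , uses a well-known asymptotic stated here without proof):

```python
import math

def harmonic_partial(n: int) -> float:
    """H_n = 1 + 1/2 + ... + 1/n, the n-th partial sum of the harmonic series."""
    return sum(1.0 / k for k in range(1, n + 1))

# The partial sums are unbounded but grow only logarithmically:
# H_n ~ ln(n) + gamma, with gamma the Euler-Mascheroni constant.
H = harmonic_partial(100_000)
gap = H - (math.log(100_000) + 0.57721566490153286)
```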

A series ${\textstyle \sum a_{n}}$  is said to converge absolutely if ${\textstyle \sum |a_{n}|}$  is convergent. A convergent series ${\textstyle \sum a_{n}}$  for which ${\textstyle \sum |a_{n}|}$  diverges is said to converge conditionally (or nonabsolutely). It is easily shown that absolute convergence of a series implies its convergence. On the other hand, an example of a conditionally convergent series is

${\displaystyle \sum _{n=1}^{\infty }{\frac {(-1)^{n-1}}{n}}=1-{\frac {1}{2}}+{\frac {1}{3}}-{\frac {1}{4}}+\cdots =\log 2}$ .
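Numerically, the partial sums of this series approach ${\displaystyle \log 2}$ , while the corresponding sums of absolute values (the harmonic series) grow without bound; a Python sketch:

```python
import math

def alt_harmonic_partial(n: int) -> float:
    """n-th partial sum of sum_{k>=1} (-1)^(k-1)/k."""
    return sum((-1.0) ** (k - 1) / k for k in range(1, n + 1))

# The signed series converges (to log 2), while the series of absolute
# values is the divergent harmonic series: the convergence is conditional.
s = alt_harmonic_partial(10_000)
```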

#### Taylor series

The Taylor series of a real or complex-valued function ƒ(x) that is infinitely differentiable at a real or complex number a is the power series

${\displaystyle f(a)+{\frac {f'(a)}{1!}}(x-a)+{\frac {f''(a)}{2!}}(x-a)^{2}+{\frac {f^{(3)}(a)}{3!}}(x-a)^{3}+\cdots ,}$

which can be written in the more compact sigma notation as

${\displaystyle \sum _{n=0}^{\infty }{\frac {f^{(n)}(a)}{n!}}\,(x-a)^{n}}$

where n! denotes the factorial of n and ƒ(n)(a) denotes the nth derivative of ƒ evaluated at the point a. The derivative of order zero of ƒ is defined to be ƒ itself, and ${\displaystyle (x-a)^{0}}$  and ${\displaystyle 0!}$  are both defined to be 1. In the case that a = 0, the series is also called a Maclaurin series.

A Taylor series of f about point a may diverge, converge at only the point a, converge for all x such that ${\displaystyle |x-a|<R}$  (the largest such ${\displaystyle R}$  for which convergence is guaranteed is called the radius of convergence), or converge on the entire real line. Even a converging Taylor series may converge to a value different from the value of the function at that point. If the Taylor series at a point has a nonzero radius of convergence, and sums to the function in the disc of convergence, then the function is analytic. The analytic functions have many fundamental properties. In particular, an analytic function of a real variable extends naturally to a function of a complex variable. It is in this way that the exponential function, the logarithm, the trigonometric functions and their inverses are extended to functions of a complex variable.
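As a concrete case, the Maclaurin series of the exponential function has infinite radius of convergence, so its partial sums approach ${\displaystyle e}$  at ${\displaystyle x=1}$ . A Python sketch (the number of terms is an illustrative choice):

```python
import math

def taylor_exp(x: float, terms: int) -> float:
    """Partial sum of the Maclaurin series of exp about a = 0:
    sum_{n=0}^{terms-1} x^n / n!."""
    return sum(x ** n / math.factorial(n) for n in range(terms))

# exp is analytic with infinite radius of convergence, so the partial
# sums converge rapidly for any fixed x; at x = 1 they approach e.
approx = taylor_exp(1.0, 15)
```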

#### Fourier series

A Fourier series decomposes a periodic function or periodic signal into the sum of a (possibly infinite) set of simple oscillating functions, namely sines and cosines (or, equivalently, complex exponentials). The study of Fourier series typically occurs within the branch of mathematical analysis known as Fourier analysis.

### Integration

Integration is a formalization of the problem of finding the area bounded by a curve and the related problems of determining the length of a curve or the volume enclosed by a surface. The basic strategy for solving problems of this type was known to the ancient Greeks and Chinese, and was known as the method of exhaustion. Generally speaking, the desired area is bounded from above and below, respectively, by increasingly accurate circumscribing and inscribing polygonal approximations whose exact areas can be computed. By considering approximations consisting of a larger and larger ("infinite") number of smaller and smaller ("infinitesimal") pieces, the area bounded by the curve can be deduced, as the upper and lower bounds defined by the approximations converge around a common value.

The spirit of this basic strategy can easily be seen in the definition of the Riemann integral, in which the integral is said to exist if upper and lower Riemann (or Darboux) sums converge to a common value as thinner and thinner rectangular slices ("refinements") are considered. Though the machinery used to define it is much more elaborate than that of the Riemann integral, the Lebesgue integral was defined with similar basic ideas in mind. Compared to the Riemann integral, the more sophisticated Lebesgue integral allows area (or length, volume, etc.; termed a "measure" in general) to be defined and computed for much more complicated and irregular subsets of Euclidean space, although there still exist "non-measurable" subsets for which an area cannot be assigned.

#### Riemann integration

The Riemann integral is defined in terms of Riemann sums of functions with respect to tagged partitions of an interval. Let ${\displaystyle [a,b]}$  be a closed interval of the real line; then a tagged partition ${\displaystyle {\cal {P}}}$  of ${\displaystyle [a,b]}$  is a finite sequence

${\displaystyle a=x_{0}\leq t_{1}\leq x_{1}\leq t_{2}\leq x_{2}\leq \cdots \leq x_{n-1}\leq t_{n}\leq x_{n}=b.\,\!}$

This partitions the interval ${\displaystyle [a,b]}$  into ${\displaystyle n}$  sub-intervals ${\displaystyle [x_{i-1},x_{i}]}$  indexed by ${\displaystyle i=1,\ldots ,n}$ , each of which is "tagged" with a distinguished point ${\displaystyle t_{i}\in [x_{i-1},x_{i}]}$ . For a function ${\displaystyle f}$  bounded on ${\displaystyle [a,b]}$ , we define the Riemann sum of ${\displaystyle f}$  with respect to tagged partition ${\displaystyle {\cal {P}}}$  as

${\displaystyle \sum _{i=1}^{n}f(t_{i})\Delta _{i},}$

where ${\displaystyle \Delta _{i}=x_{i}-x_{i-1}}$  is the width of sub-interval ${\displaystyle i}$ . Thus, each term of the sum is the area of a rectangle with height equal to the function value at the distinguished point of the given sub-interval, and width the same as the sub-interval width. The mesh of such a tagged partition is the width of the largest sub-interval formed by the partition, ${\displaystyle ||\Delta _{i}||=\max _{i=1,\ldots ,n}\Delta _{i}}$ . We say that the Riemann integral of ${\displaystyle f}$  on ${\displaystyle [a,b]}$  is ${\displaystyle S}$  if for any ${\displaystyle \epsilon >0}$  there exists ${\displaystyle \delta >0}$  such that, for any tagged partition ${\displaystyle {\cal {P}}}$  with mesh ${\displaystyle ||\Delta _{i}||<\delta }$ , we have

${\displaystyle \left|S-\sum _{i=1}^{n}f(t_{i})\Delta _{i}\right|<\epsilon .}$

This is sometimes denoted ${\displaystyle {\mathcal {R}}\int _{a}^{b}f=S}$ . When the chosen tags give the maximum (respectively, minimum) value of the function on each sub-interval, the Riemann sum is known as the upper (respectively, lower) Darboux sum. A function is Darboux integrable if the upper and lower Darboux sums can be made arbitrarily close to each other for a sufficiently small mesh. Although this definition gives the Darboux integral the appearance of being a special case of the Riemann integral, they are, in fact, equivalent, in the sense that a function is Darboux integrable if and only if it is Riemann integrable, and the values of the integrals are equal. In fact, calculus and real analysis textbooks often conflate the two, introducing the definition of the Darboux integral as that of the Riemann integral, because the definition of the former is slightly easier to apply.
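The squeezing of the integral between upper and lower Darboux sums can be shown numerically. The sketch below (illustrative only; it samples the endpoints of each sub-interval, which yields the true extrema for a monotone function such as x² on [0, 1]) brackets ${\textstyle \int _{0}^{1}x^{2}\,dx=1/3}$:

```python
def darboux_sums(f, a, b, n):
    """Upper and lower Darboux sums of f over [a, b] on a uniform
    partition into n sub-intervals.

    The extrema on each sub-interval are taken at the endpoints,
    which is valid when f is monotone on [a, b].
    """
    width = (b - a) / n
    lower = upper = 0.0
    for i in range(n):
        x0, x1 = a + i * width, a + (i + 1) * width
        lower += min(f(x0), f(x1)) * width
        upper += max(f(x0), f(x1)) * width
    return lower, upper

f = lambda x: x * x
for n in (10, 100, 1000):
    lo, up = darboux_sums(f, 0.0, 1.0, n)
    print(n, lo, up)  # both bounds close in on 1/3 as the mesh shrinks
```

As the mesh 1/n shrinks, the gap between the two sums tends to zero, which is exactly the Darboux criterion for integrability.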

The fundamental theorem of calculus asserts that integration and differentiation are inverse operations in a certain sense.
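This inverse relationship can be checked numerically. The following sketch (illustrative, using a midpoint-rule approximation to the integral and a central finite difference for the derivative; both are assumptions of this demonstration, not part of the theorem's statement) verifies that differentiating ${\textstyle F(x)=\int _{0}^{x}\cos t\,dt}$  recovers the integrand:

```python
import math

def integral(f, a, b, n=10_000):
    """Midpoint-rule approximation of the Riemann integral of f on [a, b]."""
    width = (b - a) / n
    return sum(f(a + (i + 0.5) * width) for i in range(n)) * width

# F(x) = integral of cos from 0 to x; analytically F(x) = sin x,
# so F'(x) should recover cos x.
F = lambda x: integral(math.cos, 0.0, x)
x, h = 1.0, 1e-4
print(F(x), math.sin(x))                             # F(1) is close to sin 1
print((F(x + h) - F(x - h)) / (2 * h), math.cos(x))  # F'(1) is close to cos 1
```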

#### Lebesgue integration and measure

Lebesgue integration is a mathematical construction that extends the integral to a larger class of functions; it also extends the domains on which these functions can be defined. The concept of a measure, an abstraction of length, area, or volume, is central to the definition of the Lebesgue integral and is important to the study of probability theory. (For a construction of the Lebesgue integral, the main article on Lebesgue integration should be consulted.)

### Distributions

Distributions (or generalized functions) are objects that generalize functions. Distributions make it possible to differentiate functions whose derivatives do not exist in the classical sense. In particular, any locally integrable function has a distributional derivative.

### Relation to complex analysis

Real analysis is an area of analysis that studies concepts such as sequences and their limits, continuity, differentiation, integration and sequences of functions. By definition, real analysis focuses on the real numbers, often including positive and negative infinity to form the extended real line. Real analysis is closely related to complex analysis, which studies broadly the same properties of complex numbers. In complex analysis, it is natural to define differentiation via holomorphic functions, which have a number of useful properties, such as repeated differentiability, expressability as power series, and satisfying the Cauchy integral formula.

In real analysis, it is usually more natural to consider differentiable, smooth, or harmonic functions, which are more widely applicable, but may lack some more powerful properties of holomorphic functions. However, results such as the fundamental theorem of algebra are simpler when expressed in terms of complex numbers.

Techniques from the theory of analytic functions of a complex variable are often used in real analysis – such as evaluation of real integrals by residue calculus.

## Generalizations and related areas of mathematics

Various ideas from real analysis can be generalized from the real line to broader or more abstract contexts. These generalizations link real analysis to other disciplines and subdisciplines, in many cases playing an important role in their development as distinct areas of mathematics. For instance, generalization of ideas like continuous functions and compactness from real analysis to metric spaces and topological spaces connects real analysis to the field of general topology, while generalization of finite-dimensional Euclidean spaces to infinite-dimensional analogs led to the study of Banach spaces and Hilbert spaces as topics of importance in functional analysis. Georg Cantor's investigation of sets and sequences of real numbers, mappings between them, and the foundational issues of real analysis gave birth to naive set theory. The study of issues of convergence for sequences of functions eventually gave rise to Fourier analysis as a subdiscipline of mathematical analysis. Investigation of the consequences of generalizing differentiability from functions of a real variable to ones of a complex variable gave rise to the concept of holomorphic functions and the inception of complex analysis as another distinct subdiscipline of analysis. On the other hand, the generalization of integration from the Riemann sense to that of Lebesgue led to the formulation of the concept of abstract measure spaces, a fundamental concept in measure theory. Finally, the generalization of integration from the real line to curves and surfaces in higher dimensional space brought about the study of vector calculus, whose further generalization and formalization played an important role in the evolution of the concepts of differential forms and smooth (differentiable) manifolds in differential geometry and other closely related areas of geometry and topology.
