
(ε, δ)-definition of limit

Figure (Límite 01.svg): whenever a point x is within δ units of c, f(x) is within ε units of L

In calculus, the (ε, δ)-definition of limit ("epsilon-delta definition of limit") is a formalization of the notion of limit. It was first given by Bernard Bolzano in 1817. Augustin-Louis Cauchy never gave an (<math>\varepsilon,\delta</math>) definition of limit in his Cours d'Analyse, but occasionally used <math>\varepsilon,\delta</math> arguments in proofs. The definitive modern statement was ultimately provided by Karl Weierstrass.[1][2]


History

Isaac Newton was aware, in the context of the derivative concept, that the limit of the ratio of evanescent quantities was not itself a ratio, as when he wrote:

Those ultimate ratios ... are not actually ratios of ultimate quantities, but limits ... which they can approach so closely that their difference is less than any given quantity...

Occasionally Newton explained limits in terms similar to the epsilon-delta definition.[3] Augustin-Louis Cauchy gave a definition of limit in terms of a more primitive notion he called a variable quantity. He never gave an epsilon-delta definition of limit (Grabiner 1981), though some of his proofs contain indications of the epsilon-delta method. Whether his foundational approach can be considered a harbinger of Weierstrass's is a subject of scholarly dispute: Grabiner holds that it can, while Schubring (2005) disagrees.[1] Nakane concludes that Cauchy and Weierstrass gave the same name to different notions of limit.[4]

Informal statement

Let f be a function. To say that

<math> \lim_{x \to c}f(x) = L \, </math>

means that f(x) can be made as close as desired to L by making the independent variable x close enough, but not equal, to the value c.

How close is "close enough to c" depends on how close one wants to make f(x) to L; it also depends on the function f and on the number c. Let the positive number ε (epsilon) express how close one wishes to make f(x) to L; strictly, one wants the distance |f(x) − L| to be less than ε. The positive number δ then expresses how close x must be to c: if the distance from x to c is less than δ (but not zero), then the distance from f(x) to L will be less than ε. Thus δ depends on ε, and the limit statement means that no matter how small ε is made, a suitable δ can be found.

The letters ε and δ can be understood as "error" and "distance", and in fact Cauchy used ε as an abbreviation for "error" in some of his work.[1] In these terms, the error (ε) in the measurement of the value at the limit can be made as small as desired by reducing the distance (δ) to the limit point.

This definition also works for functions of more than one argument. For such functions, δ can be understood as the radius of a disc, ball, or higher-dimensional analogue centered at the point where the limit is being taken: every point of the domain inside this punctured neighbourhood must map to a value within ε of the limit L.
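The two-variable case can be illustrated numerically (a sketch in Python; the function name `check_2d`, the sample function, and the chosen δ are our own, not from the article). For f(x, y) = x + y at the point (1, 2) with L = 3, the triangle inequality gives |f(x, y) − 3| ≤ |x − 1| + |y − 2| < 2δ, so δ = ε/2 suffices, and random sampling inside the punctured disc finds no counterexample:

```python
import math
import random

# Sketch only: sampling can falsify an epsilon-delta claim, never prove it.
# Here delta is the radius of a disc centered at (a, b); every sampled point
# strictly inside the punctured disc should map within epsilon of L.
def check_2d(f, a, b, L, epsilon, delta, samples=10_000, seed=0):
    rng = random.Random(seed)
    for _ in range(samples):
        r = delta * rng.random() or delta / 2      # keep 0 < r < delta
        theta = 2 * math.pi * rng.random()
        x, y = a + r * math.cos(theta), b + r * math.sin(theta)
        if abs(f(x, y) - L) >= epsilon:
            return False                           # counterexample found
    return True

# f(x, y) = x + y at (1, 2), L = 3: |f - 3| <= |x-1| + |y-2| < 2*delta,
# so delta = epsilon / 2 works.
print(check_2d(lambda x, y: x + y, 1.0, 2.0, 3.0, 0.01, 0.005))  # True
```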

Precise statement

The <math>(\varepsilon, \delta)</math> definition of the limit of a function is as follows:[5]

Let <math>f : D \rightarrow \mathbb{R}</math> be a function defined on a subset <math> D \subseteq \mathbb{R} </math>, let <math>c</math> be a limit point of <math>D</math>, and let <math>L</math> be a real number. Then

the function <math>f</math> has a limit <math>L</math> at <math>c</math>

is defined to mean

for all <math> \varepsilon > 0 </math>, there exists a <math> \delta > 0 </math> such that for all <math> x </math> in <math> D </math> that satisfy <math> 0 < | x - c | < \delta </math>, the inequality <math> |f(x) - L| < \varepsilon </math> holds.


Symbolically:

<math> \lim_{x \to c} f(x) = L \iff (\forall \varepsilon > 0)(\exists \ \delta > 0) (\forall x \in D)(0 < |x - c | < \delta \ \Rightarrow \ |f(x) - L| < \varepsilon)</math>
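The quantifier structure can be made concrete with a small numeric check (our own sketch; the function name `check_epsilon_delta` and the sample function are illustrative, not part of the definition). Given f, c, L, a particular ε, and a candidate δ, it samples points x with 0 < |x − c| < δ and tests |f(x) − L| < ε. Sampling can only falsify the condition for that ε and δ, never prove the limit:

```python
# Sample the punctured delta-neighbourhood of c and test the inequality
# |f(x) - L| < epsilon at each sampled point.
def check_epsilon_delta(f, c, L, epsilon, delta, samples=10_000):
    for i in range(1, samples + 1):
        offset = delta * i / (samples + 1)    # 0 < offset < delta
        for x in (c - offset, c + offset):
            if abs(f(x) - L) >= epsilon:
                return False                  # inequality fails at this x
    return True

# For f(x) = x^2 at c = 2 (L = 4): |x^2 - 4| = |x - 2| * |x + 2| < 5|x - 2|
# on (1, 3), so delta = epsilon / 5 is a safe choice for epsilon < 1.
print(check_epsilon_delta(lambda x: x**2, 2.0, 4.0, 0.01, 0.002))  # True
```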

Worked example

Let us prove the statement that

<math>\lim_{x \to 5} (3x - 3) = 12.</math>

This limit is easy to believe from a graph of the function, which makes the statement a good first exercise in proof. According to the formal definition above, a limit statement is correct if and only if confining <math>x</math> to within <math>\delta</math> units of <math>c</math> (excluding <math>c</math> itself) inevitably confines <math>f(x)</math> to within <math>\varepsilon</math> units of <math>L</math>. In this specific case, the statement is true if and only if confining <math>x</math> to within <math>\delta</math> units of 5 inevitably confines

<math>3x - 3</math>

to <math>\varepsilon</math> units of 12. The overall key to showing this implication is to demonstrate how <math>\delta</math> and <math>\varepsilon</math> must be related to each other such that the implication holds. Mathematically, we want to show that

<math> 0 < | x - 5 | < \delta \ \Rightarrow \ | (3x - 3) - 12 | < \varepsilon . </math>

Simplifying the right-hand side, <math>|(3x - 3) - 12| = |3x - 15| = 3|x - 5|</math>; dividing both sides of <math>3|x - 5| < \varepsilon</math> by 3 yields

<math> | x - 5 | < \varepsilon / 3 ,</math>

which immediately gives the required result if we choose

<math> \delta = \varepsilon / 3 .</math>

Thus the proof is complete. The key lies in choosing a bound on <math>x</math> and concluding a corresponding bound on <math>f(x)</math>; here the two bounds are related by a factor of 3, which is precisely the slope of the line

<math> y = 3x - 3 .</math>
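The choice δ = ε/3 can be spot-checked numerically (a sketch in Python; the sampling scheme is our own): for every sampled x with 0 < |x − 5| < δ, the value 3x − 3 stays within ε of 12.

```python
# Spot-check of the worked example: with delta = epsilon / 3, every sampled
# x in the punctured delta-neighbourhood of 5 satisfies |(3x - 3) - 12| < epsilon.
f = lambda x: 3 * x - 3
c, L = 5.0, 12.0

for epsilon in (1.0, 0.1, 0.001):
    delta = epsilon / 3
    xs = [c + delta * t for t in (-0.999, -0.5, -1e-6, 1e-6, 0.5, 0.999)]
    assert all(0 < abs(x - c) < delta for x in xs)
    assert all(abs(f(x) - L) < epsilon for x in xs)

print("delta = epsilon / 3 passed all sampled checks")
```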


Continuity

A function f is said to be continuous at c if it is both defined at c and its value at c equals the limit of f as x approaches c:

<math>\lim_{x\to c} f(x) = f(c).</math>

If the condition 0 < |x − c| is left out of the definition of limit, then requiring f(x) to have a limit at c would be the same as requiring f(x) to be continuous at c.

f is said to be continuous on an interval I if it is continuous at every point c of I.
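The role of the limit condition can be illustrated with a function that is defined at c but not continuous there (our own sketch; the step function and sample points are illustrative):

```python
# The step function is defined at 0, but values approaching from the left
# stay at 0.0 while f(0) = 1.0, so no single L lies within every epsilon
# of both sides: the limit at 0 does not exist and f is not continuous there.
def step(x):
    return 1.0 if x >= 0 else 0.0

for delta in (0.1, 0.01, 0.001):
    left, right = step(-delta / 2), step(delta / 2)
    print(delta, left, right)   # left stays 0.0, right stays 1.0
```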

Comparison with infinitesimal definition

Keisler proved that a hyperreal definition of limit reduces the quantifier complexity by two quantifiers.[6] Namely, <math>f(x)</math> converges to a limit L as <math>x</math> tends to a if and only if for every nonzero infinitesimal e, the value <math>f(a+e)</math> is infinitely close to L; see microcontinuity for a related definition of continuity, essentially due to Cauchy. Infinitesimal calculus textbooks based on Robinson's approach provide definitions of continuity, derivative, and integral at standard points in terms of infinitesimals; once notions such as continuity have been thoroughly explained via microcontinuity, the epsilon-delta approach is presented as well. Karel Hrbacek argues that the definitions of continuity, derivative, and integration in Robinson-style non-standard analysis must be grounded in the ε-δ method in order to cover non-standard values of the input as well.[7] Błaszczyk et al. argue that microcontinuity is useful in developing a transparent definition of uniform continuity, and characterize Hrbacek's criticism as a "dubious lament".[8] Hrbacek proposes an alternative non-standard analysis which, unlike Robinson's, has many "levels" of infinitesimals, so that limits at one level can be defined in terms of infinitesimals at the next level.[9]

References


  1. ^ a b c Grabiner, Judith V. (March 1983), "Who Gave You the Epsilon? Cauchy and the Origins of Rigorous Calculus" (PDF), The American Mathematical Monthly (Mathematical Association of America) 90 (3): 185–194, JSTOR 2975545, doi:10.2307/2975545, archived from the original on 2009-05-03, retrieved 2009-05-01 
  2. ^ Cauchy, A.-L. (1823), "Septième Leçon - Valeurs de quelques expressions qui se présentent sous les formes indéterminées <math>\frac{\infty}{\infty}, \infty^0, \ldots</math> Relation qui existe entre le rapport aux différences finies et la fonction dérivée", Résumé des leçons données à l'école royale polytechnique sur le calcul infinitésimal, Paris, p. 44, archived from the original on 2009-05-03, retrieved 2009-05-01.
  3. ^ Pourciau, B. (2001), "Newton and the Notion of Limit", Historia Mathematica 28 (1), doi:10.1006/hmat.2000.2301 
  4. ^ Nakane, Michiyo. Did Weierstrass's differential calculus have a limit-avoiding character? His definition of a limit in ε−δ style. BSHM Bull. 29 (2014), no. 1, 51–59.
  5. ^ Rudin, Walter (1976). Principles of Mathematical Analysis. McGraw-Hill Science/Engineering/Math. p. 83. ISBN 978-0070542358. 
  6. ^ Keisler, H. Jerome (2008), "Quantifiers in limits" (PDF), Andrzej Mostowski and foundational studies, IOS, Amsterdam, pp. 151–170 
  7. ^ Hrbacek, K. (2007), "Stratified Analysis?", in Van Den Berg, I.; Neves, V., The Strength of Nonstandard Analysis, Springer 
  8. ^ Błaszczyk, Piotr; Katz, Mikhail; Sherry, David (2012), "Ten misconceptions from the history of analysis and their debunking", Foundations of Science, arXiv:1202.4153, doi:10.1007/s10699-012-9285-8 
  9. ^ Hrbacek, K. (2009). "Relative set theory: Internal view". Journal of Logic and Analysis 1. 


  • Grabiner, Judith V. The origins of Cauchy's rigorous calculus. MIT Press, Cambridge, Mass.-London, 1981.
  • Schubring, Gert (2005), Conflicts Between Generalization, Rigor, and Intuition: Number Concepts Underlying the Development of Analysis in 17th–19th Century France and Germany, Springer, ISBN 0-387-22836-5