Exponentiation
 <math>b^n = \underbrace{b \times \cdots \times b}_n</math>
The exponent is usually shown as a superscript to the right of the base. Some common exponents have their own names: the exponent 2 (or 2nd power) is called the square of b (b^{2}) or b squared; the exponent 3 (or 3rd power) is called the cube of b (b^{3}) or b cubed. The exponent −1 of b, or 1 / b, is called the reciprocal of b.
When n is a negative integer and b is not zero, b^{n} is naturally defined as 1/b^{−n}, preserving the property b^{n} × b^{m} = b^{n + m}.
Exponentiation for integer exponents can be defined for a wide variety of algebraic structures, including matrices.
Exponentiation is used extensively in many fields, including economics, biology, chemistry, physics, and computer science, with applications such as compound interest, population growth, chemical reaction kinetics, wave behavior, and public-key cryptography.
Calculation results  

Addition (+)  
<math>\scriptstyle\left.\begin{matrix}\scriptstyle\text{summand}+\text{summand}\\\scriptstyle\text{augend}+\text{addend}\end{matrix}\right\}=</math>  <math>\scriptstyle\text{sum}</math> 
Subtraction (−)  
<math>\scriptstyle\text{minuend}-\text{subtrahend}=</math>  <math>\scriptstyle\text{difference}</math> 
Multiplication (×)  
<math>\scriptstyle\left.\begin{matrix}\scriptstyle\text{factor}\times\text{factor}\\\scriptstyle\text{multiplier}\times\text{multiplicand}\end{matrix}\right\}=</math>  <math>\scriptstyle\text{product}</math> 
Division (÷)  
<math>\scriptstyle\left.\begin{matrix}\scriptstyle\frac{\scriptstyle\text{dividend}}{\scriptstyle\text{divisor}}\\\scriptstyle\frac{\scriptstyle\text{numerator}}{\scriptstyle\text{denominator}}\end{matrix}\right\}=</math>  <math>\scriptstyle\text{quotient}</math> 
Modulo (mod)  
<math>\scriptstyle\text{dividend}\bmod\text{divisor}=</math>  <math>\scriptstyle\text{remainder}</math> 
Exponentiation  
<math>\scriptstyle\text{base}^\text{exponent}=</math>  <math>\scriptstyle\text{power}</math> 
nth root (√)  
<math>\scriptstyle\sqrt[\text{degree}]{\scriptstyle\text{radicand}}=</math>  <math>\scriptstyle\text{root}</math> 
Logarithm (log)  
<math>\scriptstyle\log_\text{base}(\text{antilogarithm})=</math>  <math>\scriptstyle\text{logarithm}</math> 
Background and terminology
The expression b^{2} = b ⋅ b is called the square of b because the area of a square with side length b is b^{2}. It is pronounced "b squared".
The expression b^{3} = b ⋅ b ⋅ b is called the cube of b because the volume of a cube with side length b is b^{3}. It is pronounced "b cubed".
The exponent says how many copies of the base are multiplied together. For example, 3^{5} = 3 ⋅ 3 ⋅ 3 ⋅ 3 ⋅ 3 = 243. The base 3 appears 5 times in the repeated multiplication, because the exponent is 5. Here, 3 is the base, 5 is the exponent, and 243 is the power or, more specifically, the fifth power of 3, 3 raised to the fifth power, or 3 to the power of 5.
The word "raised" is usually omitted, and very often "power" as well, so 3^{5} is typically pronounced "three to the fifth" or "three to the five". The exponentiation b^{n} can be read as b raised to the nth power, or b raised to the power of n, or b raised by the exponent of n, or most briefly as b to the n.
Exponentiation may be generalized from integer exponents to more general types of numbers.
The word "exponent" was coined in 1544 by Michael Stifel.^{[1]}
The modern notation for exponentiation was introduced by René Descartes in his Géométrie of 1637.^{[2]}^{[3]}
Integer exponents
The exponentiation operation with integer exponents requires only elementary algebra.
Positive integer exponents
Formally, powers with positive integer exponents may be defined by the initial condition^{[4]}
 <math>b^1 = b</math>
and the recurrence relation
 <math>b^{n+1} = b^n \cdot b</math>
From the associativity of multiplication, it follows that for any positive integers m and n,
 <math>b^{m+n} = b^m \cdot b^n</math>
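These defining rules translate directly into code. The following Python sketch (the helper name `power` is purely illustrative) implements the initial condition and the recurrence:

```python
def power(b, n):
    """Compute b**n for a positive integer n via b**1 = b and b**(n+1) = b**n * b."""
    if n == 1:
        return b                  # initial condition: b**1 = b
    return power(b, n - 1) * b    # recurrence: b**(n+1) = b**n * b

assert power(3, 5) == 243
assert power(2, 4) * power(2, 6) == power(2, 10)   # b**(m+n) == b**m * b**n
```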
Zero exponent
Any nonzero number raised to the power 0 is 1;^{[5]} one interpretation of such a power is as an empty product. The case of 0^{0} is discussed below.
Negative exponents
The following identity holds for an arbitrary integer n and nonzero b:
 <math>b^{-n} = 1/b^n </math>
Raising 0 to a negative exponent is left undefined.
The identity above may be derived through a definition aimed at extending the range of exponents to negative integers.
For nonzero b and positive n, the recurrence relation from the previous subsection can be rewritten as
 <math>b^{n-1} = {b^{n}}/{b}, \quad n \ge 1 .</math>
By defining this relation as valid for all integer n and nonzero b, it follows that
 <math>\begin{align}
b^0 &= {b^{1}}/{b} = 1 \\ b^{-1} &= {b^{0}}/{b} = {1}/{b} \end{align}</math>
and more generally for any nonzero b and any nonnegative integer n,
 <math>b^{-n} = {1}/{b^n} .</math>
This is then readily shown to be true for every integer n.
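The extended definition can be sketched in Python (the helper name `ipow` is illustrative; exact rational arithmetic via `fractions` avoids floating-point error):

```python
from fractions import Fraction

def ipow(b, n):
    """b**n for any integer n and nonzero b, with b**(-n) = 1 / b**n."""
    if n < 0:
        return 1 / ipow(b, -n)    # negative exponent: reciprocal of the positive power
    result = Fraction(1)          # empty product gives b**0 = 1
    for _ in range(n):
        result *= b
    return result

assert ipow(5, 0) == 1
assert ipow(2, -3) == Fraction(1, 8)
assert ipow(2, 5) * ipow(2, -5) == 1   # b**n * b**(-n) == b**0 == 1
```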
Combinatorial interpretation
For nonnegative integers n and m, the power n^{m} is the number of functions from a set of m elements to a set of n elements (see cardinal exponentiation). Such functions can be represented as m-tuples from an n-element set (or as m-letter words from an n-letter alphabet).
 0^{5} = │ { } │ = 0. There is no 5-tuple from the empty set.
 1^{4} = │ { (1,1,1,1) } │ = 1. There is one 4-tuple from a one-element set.
 2^{3} = │ { (1,1,1), (1,1,2), (1,2,1), (1,2,2), (2,1,1), (2,1,2), (2,2,1), (2,2,2) } │ = 8. There are eight 3-tuples from a two-element set.
 3^{2} = │ { (1,1), (1,2), (1,3), (2,1), (2,2), (2,3), (3,1), (3,2), (3,3) } │ = 9. There are nine 2-tuples from a three-element set.
 4^{1} = │ { (1), (2), (3), (4) } │ = 4. There are four 1-tuples from a four-element set.
 5^{0} = │ { () } │ = 1. There is exactly one 0-tuple.
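These counts can be reproduced by enumerating the tuples, for example with Python's `itertools.product` (the helper name `count_tuples` is illustrative):

```python
from itertools import product

# n**m counts the functions from an m-element set to an n-element set,
# represented here as m-tuples over the alphabet {1, ..., n}.
def count_tuples(n, m):
    return len(list(product(range(1, n + 1), repeat=m)))

assert count_tuples(0, 5) == 0 ** 5 == 0   # no 5-tuple from the empty set
assert count_tuples(2, 3) == 2 ** 3 == 8   # eight 3-tuples from a two-element set
assert count_tuples(5, 0) == 5 ** 0 == 1   # exactly one 0-tuple
```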
Identities and properties
The following identities hold for all integer exponents, provided that the base is nonzero:
 <math>\begin{align}
b^{m + n} &= b^m \cdot b^n \\ (b^m)^n &= b^{m\cdot n} \\ (b \cdot c)^n &= b^n \cdot c^n
\end{align}</math>
Exponentiation is not commutative. This contrasts with addition and multiplication, which are. For example, 2 + 3 = 3 + 2 = 5 and 2 ⋅ 3 = 3 ⋅ 2 = 6, but 2^{3} = 8, whereas 3^{2} = 9.
Exponentiation is not associative either, whereas addition and multiplication are. For example, (2 + 3) + 4 = 2 + (3 + 4) = 9 and (2 ⋅ 3) ⋅ 4 = 2 ⋅ (3 ⋅ 4) = 24, but (2^{3})^{4} = 8^{4} = 4096, whereas 2^{(3^{4})} = 2^{81} = 2417851639229258349412352. Without parentheses to modify the order of calculation, by convention the order is top-down, not bottom-up:
 <math>b^{p^q} = b^{\left(p^q\right)} \ne \left(b^p\right)^q = b^{p \cdot q} .</math>
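Python's `**` operator follows the same top-down (right-associative) convention, which can be checked directly:

```python
# Exponentiation towers associate top-down: b**p**q means b**(p**q).
assert 2 ** 3 ** 4 == 2 ** (3 ** 4) == 2 ** 81
assert (2 ** 3) ** 4 == 8 ** 4 == 4096
assert 2 ** 81 == 2417851639229258349412352
```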
Particular bases
Powers of ten
In the base ten (decimal) number system, integer powers of 10 are written as the digit 1 followed or preceded by a number of zeroes determined by the sign and magnitude of the exponent. For example, 10^{3} = 1000 and 10^{−4} = 0.0001.
Exponentiation with base 10 is used in scientific notation to denote large or small numbers. For instance, 299792458 (the speed of light in vacuum, in metres per second) can be written as 2.99792458×10^{8} and then approximated as 2.998×10^{8}.
SI prefixes based on powers of 10 are also used to describe small or large quantities. For example, the prefix kilo means 10^{3} = 1000, so a kilometre is 1000 metres.
Powers of two
The positive powers of 2 are important in computer science because there are 2^{n} possible values for an n-bit binary register.
Powers of 2 are important in set theory since a set with n members has a power set, or set of all subsets of the original set, with 2^{n} members.
The negative powers of 2 are commonly used, and the first two have special names: half and quarter.
In the base 2 (binary) number system, integer powers of 2 are written as 1 followed or preceded by a number of zeroes determined by the sign and magnitude of the exponent. For example, two to the power of three is written as 1000 in binary.
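A brief Python illustration of these facts about powers of two, using the built-in `bin` function and the shift operator:

```python
# 2**n in binary is 1 followed by n zeros, and a left shift by n bits
# multiplies by 2**n.
assert bin(2 ** 3) == '0b1000'
assert 1 << 3 == 2 ** 3 == 8
# An n-bit register can hold 2**n distinct values:
assert len(range(2 ** 8)) == 256
```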
Powers of one
The integer powers of one are all one: 1^{n} = 1.
Powers of zero
If the exponent is positive, the power of zero is zero: 0^{n} = 0, where n > 0.
If the exponent is negative, the power of zero (0^{n}, where n < 0) is undefined, because division by zero is implied.
If the exponent is zero, some authors define 0^{0} = 1, whereas others leave it undefined, as discussed below under § Zero to the power of zero.
Powers of minus one
If n is an even integer, then (−1)^{n} = 1.
If n is an odd integer, then (−1)^{n} = −1.
Because of this, powers of −1 are useful for expressing alternating sequences. For a similar discussion of powers of the complex number i, see § Powers of complex numbers.
Large exponents
The limit of a sequence of powers of a number greater than one diverges; in other words, the sequence grows without bound:
 b^{n} → ∞ as n → ∞ when b > 1
This can be read as "b to the power of n tends to +∞ as n tends to infinity when b is greater than one".
Powers of a number with absolute value less than one tend to zero:
 b^{n} → 0 as n → ∞ when |b| < 1
Any power of one is always one:
 b^{n} = 1 for all n if b = 1
If the number b varies tending to 1 as the exponent tends to infinity then the limit is not necessarily one of those above. A particularly important case is
 (1 + 1/n)^{n} → e as n → ∞
See § The exponential function below.
Other limits, in particular of those that take on an indeterminate form, are described in § Limits of powers below.
Rational exponents
An nth root of a number b is a number x such that x^{n} = b.
If b is a positive real number and n is a positive integer, then there is exactly one positive real solution to x^{n} = b. This solution is called the principal nth root of b. It is denoted ^{n}√b, where √ is the radical symbol; alternatively, the principal root may be written b^{1/n}. For example: 4^{1/2} = 2, 8^{1/3} = 2.
The fact that <math>x=b^{1/n}</math> solves <math>x^n=b</math> follows from noting that
 <math>x^n = \underbrace{ b^\frac{1}{n} \times b^\frac{1}{n} \times \cdots \times b^\frac{1}{n} }_n = b^{\left( \frac{1}{n} + \frac{1}{n} + \cdots + \frac{1}{n} \right)} = b^\frac{n}{n} = b^1 = b.</math>
If n is even, then x^{n} = b has two real solutions if b is positive, which are the positive and negative nth roots (the positive one being denoted <math>b^{1/n}</math>). If b is negative, the equation has no solution in real numbers for even n.
If n is odd, then x^{n} = b has one real solution. The solution is positive if b is positive and negative if b is negative.
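In floating-point arithmetic these roots can be sketched as follows (Python; note that `**` with a float exponent returns a complex value for a negative base, so the odd-root case is handled explicitly; `real_odd_root` is an illustrative helper):

```python
# The principal nth root of a positive real b is b**(1/n) (floating point).
assert abs(4 ** 0.5 - 2) < 1e-12
assert abs(8 ** (1 / 3) - 2) < 1e-12

# For negative b and odd n a real root exists, but a float exponent makes
# Python's ** return the complex principal value, so compute it explicitly:
def real_odd_root(b, n):
    return -((-b) ** (1 / n)) if b < 0 else b ** (1 / n)

assert abs(real_odd_root(-27, 3) + 3) < 1e-12   # cube root of -27 is -3
```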
The principal root of a positive real number b with a rational exponent u/v in lowest terms satisfies
 <math>b^\frac{u}{v} = \left(b^u\right)^\frac{1}{v} = \sqrt[v]{b^u}</math>
where u is an integer and v is a positive integer.
For negative b, rational powers b^{u/v} with u/v in lowest terms are positive if u is even (and hence v is odd), because then b^{u} is positive, and negative if u and v are both odd, because then b^{u} is negative. There are two roots, one of each sign, if b is positive and v is even (as exemplified by the case in which u = 1 and v = 2, whereby a positive b has two square roots); in this case the principal root is defined to be the positive one.
Thus we have (−27)^{1/3} = −3 and (−27)^{2/3} = 9. The number 4 has two 3/2 powers, namely 8 and −8; however, by convention 4^{3/2} denotes the principal root, which is 8. Since there is no real number x such that x^{2} = −1, the definition of b^{u/v} when b is negative and v is even must use the imaginary unit i, as described more fully in the section § Powers of complex numbers.
Care needs to be taken when applying the power identities with negative nth roots. For instance, −27 = (−27)^{((2/3)⋅(3/2))} = ((−27)^{2/3})^{3/2} = 9^{3/2} = 27 is clearly wrong. The problem here occurs in taking the positive square root rather than the negative one at the last step, but in general the same sorts of problems occur as described for complex numbers in the section § Failure of power and logarithm identities.
Real exponents
The identities and properties shown above for integer exponents are true for positive real numbers with non-integer exponents as well. However, the identity
 <math>(b^r)^s = b^{r\cdot s}</math>
cannot be extended consistently to cases where b is a negative real number (see § Real exponents with negative bases). The failure of this identity is the basis for the problems with complex number powers detailed under § Failure of power and logarithm identities.
Exponentiation to real powers of positive real numbers can be defined either by extending the rational powers to reals by continuity, or more usually as given in § Powers via logarithms below.
Limits of rational exponents
Since any irrational number can be expressed as the limit of a sequence of rational numbers, exponentiation of a positive real number b with an arbitrary real exponent x can be defined by continuity with the rule^{[6]}
 <math> b^x = \lim_{r (\in\mathbb Q)\to x} b^r\quad(b \in\mathbb R^+,\,x\in\mathbb R)</math>
where the limit as r approaches x is taken only over rational values of r. This limit exists only for positive b. The (ε, δ)-definition of limit is used; this involves showing that, for any desired accuracy of the result b^{x}, one can choose a sufficiently small interval around x so that all the rational powers in the interval are within the desired accuracy.
For example, if x = π, the nonterminating decimal representation π = 3.14159... can be used (based on strict monotonicity of the rational power) to obtain the intervals bounded by rational powers
 <math>[b^3,b^4]</math>, <math>[b^{3.1},b^{3.2}]</math>, <math>[b^{3.14},b^{3.15}]</math>, <math>[b^{3.141},b^{3.142}]</math>, <math>[b^{3.1415},b^{3.1416}]</math>, <math>[b^{3.14159},b^{3.14160}]</math>, ...
The bounded intervals converge to a unique real number, denoted by <math>b^\pi</math>. This technique can be used to obtain any irrational power of a positive real number b. The function f_{b}(x) = b^{x} is thus defined for any real number x.
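This convergence can be observed numerically; the following Python sketch takes rational exponents from truncations of π (illustrative only, computed in floating point):

```python
from fractions import Fraction
import math

# Rational exponents taken from truncations of pi = 3.14159... close in on b**pi.
b = 2.0
truncations = [Fraction(int(math.pi * 10 ** d), 10 ** d) for d in range(1, 7)]
approximations = [b ** float(r) for r in truncations]
errors = [abs(a - b ** math.pi) for a in approximations]

assert errors == sorted(errors, reverse=True)   # each extra digit tightens the bound
assert errors[-1] < 1e-4                        # six digits of pi already get close
```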
The exponential function
The important mathematical constant e, sometimes called Euler's number, is approximately equal to 2.718 and is the base of the natural logarithm. Although exponentiation of e could, in principle, be treated the same as exponentiation of any other real number, such exponentials turn out to have particularly elegant and useful properties. Among other things, these properties allow exponentials of e to be generalized in a natural way to other types of exponents, such as complex numbers or even matrices, while coinciding with the familiar meaning of exponentiation with rational exponents.
As a consequence, the notation e^{x} usually denotes a generalized exponentiation definition called the exponential function, exp(x), which can be defined in many equivalent ways, for example by:
 <math>\exp(x) = \lim_{n \rightarrow \infty} \left(1+\frac x n \right)^n </math>
Among other properties, exp satisfies the exponential identity:
 <math>\exp(x+y) = \exp(x) \cdot \exp(y)</math>
The exponential function is defined for all integer, fractional, real, and complex values of x. In fact, the matrix exponential is welldefined for square matrices (in which case the exponential identity only holds when x and y commute), and is useful for solving systems of linear differential equations.
Since exp(1) is equal to e and exp(x) satisfies the exponential identity, it immediately follows that exp(x) coincides with the repeated-multiplication definition of e^{x} for integer x, and it also follows that rational powers denote (positive) roots as usual, so exp(x) coincides with the e^{x} definitions in the previous section for all real x by continuity.
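As a rough numerical check of the limit definition (Python; the helper name `exp_limit` is illustrative, and a finite n gives only modest accuracy):

```python
import math

# exp(x) as the limit of (1 + x/n)**n, compared against math.exp.
def exp_limit(x, n=10**7):
    return (1 + x / n) ** n

assert abs(exp_limit(1) - math.e) < 1e-6
assert abs(exp_limit(2) - math.exp(2)) < 1e-5
# The exponential identity exp(x+y) = exp(x) * exp(y), up to the limit error:
assert abs(exp_limit(3) - exp_limit(1) * exp_limit(2)) < 1e-3
```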
Powers via logarithms
The natural logarithm ln(x) is the inverse of the exponential function e^{x}. It is defined for b > 0, and satisfies
 <math>b = e^{\ln b}</math>
If b^{x} is to preserve the logarithm and exponent rules, then one must have
 <math>b^x = (e^{\ln b})^x = e^{x \cdot\ln b}</math>
for each real number x.
This can be used as an alternative definition of the real number power b^{x} and agrees with the definition given above using rational exponents and continuity. The definition of exponentiation using logarithms is more common in the context of complex numbers, as discussed below.
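A quick numerical check of this equivalence (Python; `pow_via_log` is an illustrative helper, valid for b > 0):

```python
import math

# b**x defined via logarithms: b**x = exp(x * ln b) for b > 0.
def pow_via_log(b, x):
    return math.exp(x * math.log(b))

assert abs(pow_via_log(5.0, 2.5) - 5.0 ** 2.5) < 1e-9
assert abs(pow_via_log(10.0, 0.5) - math.sqrt(10)) < 1e-9
```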
Real exponents with negative bases
Powers of a positive real number are always positive real numbers. The solution of x^{2} = 4, however, can be either 2 or −2. The principal value of 4^{1/2} is 2, but −2 is also a valid square root. If the definition of exponentiation of real numbers is extended to allow negative results then the result is no longer well behaved.
Neither the logarithm method nor the rational exponent method can be used to define b^{r} as a real number for a negative real number b and an arbitrary real number r. Indeed, e^{r} is positive for every real number r, so ln(b) is not defined as a real number for b ≤ 0.
The rational exponent method cannot be used for negative values of b because it relies on continuity. The function f(r) = b^{r} has a unique continuous extension^{[6]} from the rational numbers to the real numbers for each b > 0. But when b < 0, the function f is not even continuous on the set of rational numbers r for which it is defined.
For example, consider b = −1. The nth root of −1 is −1 for every odd natural number n. So if n is an odd positive integer, (−1)^{(m/n)} = −1 if m is odd, and (−1)^{(m/n)} = 1 if m is even. Thus the set of rational numbers q for which (−1)^{q} = 1 is dense in the rational numbers, as is the set of q for which (−1)^{q} = −1. This means that the function (−1)^{q} is not continuous at any rational number q where it is defined.
On the other hand, arbitrary complex powers of negative numbers b can be defined by choosing a complex logarithm of b.
Irrational exponents
If a is a positive algebraic number and b is a rational number, it has been shown above that a^{b} is algebraic. This remains true even if one accepts any algebraic number for a, with the only difference that a^{b} may take several values (see below), all algebraic. The Gelfond–Schneider theorem provides some information on the nature of a^{b} when b is irrational (that is, not rational). It states:
If a is an algebraic number different from 0 and 1, and b is an irrational algebraic number, then all the values of a^{b} are transcendental numbers (that is, not algebraic).
Complex exponents with positive real bases
Imaginary exponents with base e
A complex number is an expression of the form <math>z=x+iy</math>, where x and y are real numbers and i is the so-called imaginary unit, a number satisfying the rule <math>i^2=-1</math>. A complex number can be visualized as a point in the (x,y) plane. The polar coordinates of a point in the (x,y) plane consist of a nonnegative real number r and an angle θ such that x = r cos θ and y = r sin θ. So
 <math>x+iy=r(\cos\theta + i\sin\theta).</math>
The product of two complex numbers z_{1} = x_{1} + iy_{1}, z_{2} = x_{2} + iy_{2} is obtained by expanding out the product of the binomials and simplifying using the rule <math>i^2=-1</math>:
 <math>z_1z_2=(x_1+iy_1)(x_2+iy_2) = (x_1x_2-y_1y_2) + i(x_1y_2+x_2y_1).</math>
As a consequence of the angle sum formulas of trigonometry, if z_{1} and z_{2} have polar coordinates (r_{1}, θ_{1}), (r_{2}, θ_{2}), then their product z_{1}z_{2} has polar coordinates equal to (r_{1}r_{2}, θ_{1} + θ_{2}).
Consider the right triangle in the complex plane which has 0, 1, 1 + ix/n as vertices. For large values of n, the triangle is almost a circular sector with a radius of 1 and a small central angle equal to x/n radians. 1 + ix/n may then be approximated by the number with polar coordinates (1, x/n). So, in the limit as n approaches infinity, (1 + ix/n)^{n} approaches (1, x/n)^{n} = (1^{n}, n⋅x/n) = (1, x), the point on the unit circle whose angle from the positive real axis is x radians. The Cartesian coordinates of this point are (cos x, sin x). So e^{ix} = cos x + i sin x; this is Euler's formula, connecting algebra to trigonometry by means of complex numbers.
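Euler's formula can be verified numerically with Python's `cmath` module (a sketch, including the limit picture with a finite n):

```python
import cmath
import math

# Euler's formula: e**(ix) = cos x + i sin x.
x = 0.75
z = cmath.exp(1j * x)
assert abs(z - complex(math.cos(x), math.sin(x))) < 1e-12

# e**(i*pi) = -1, up to floating-point rounding:
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12

# The limit picture: (1 + ix/n)**n approaches the same point on the unit circle.
n = 10**6
assert abs((1 + 1j * x / n) ** n - z) < 1e-5
```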
The solutions to the equation e^{z} = 1 are the integer multiples of 2πi:
 <math>\{ z : e^z = 1 \} = \{ 2k\pi i : k \in \mathbb{Z} \}</math>
More generally, if e^{v} = w, then every solution to e^{z} = w can be obtained by adding an integer multiple of 2πi to v:
 <math>\{ z : e^z = w \} = \{ v + 2k\pi i : k \in \mathbb{Z} \}</math>
Thus the complex exponential function is a periodic function with period 2πi.
More simply: e^{iπ} = −1; e^{x + iy} = e^{x}(cos y + i sin y).
Trigonometric functions
It follows from Euler's formula stated above that the trigonometric functions cosine and sine are
 <math>\cos(z) = \frac{e^{iz} + e^{-iz}}{2}; \qquad \sin(z) = \frac{e^{iz} - e^{-iz}}{2i}</math>
Before the invention of complex numbers, cosine and sine were defined geometrically. The above formula reduces the complicated formulas for trigonometric functions of a sum into the simple exponentiation formula
 <math>e^{i(x+y)}=e^{ix}\cdot e^{iy}</math>
Using exponentiation with complex exponents may reduce problems in trigonometry to algebra.
Complex exponents with base e
The power z = e^{x + iy} can be computed as e^{x} ⋅ e^{iy}. The real factor e^{x} is the absolute value of z and the complex factor e^{iy} identifies the direction of z.
Complex exponents with positive real bases
If b is a positive real number, and z is any complex number, the power b^{z} is defined as e^{z ⋅ ln(b)}, where x = ln(b) is the unique real solution to the equation e^{x} = b. So the same method working for real exponents also works for complex exponents.
For example:
 2^{i} = e^{ i⋅ln(2)} = cos(ln(2)) + i⋅sin(ln(2)) ≈ 0.76924 + 0.63896i
 e^{i} ≈ 0.54030 + 0.84147i
 10^{i} ≈ −0.66820 + 0.74398i
 (e^{2π})^{i} ≈ 535.49^{i} ≈ 1
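The first of these examples can be reproduced with Python's `cmath` module (a numerical sketch):

```python
import cmath
import math

# b**z for real b > 0 is defined as exp(z * ln b); for example 2**i:
z = cmath.exp(1j * math.log(2))
assert abs(z - (0.76924 + 0.63896j)) < 1e-4   # matches the value quoted above
assert abs(2 ** 1j - z) < 1e-12               # Python's ** uses the same definition
assert abs(abs(z) - 1) < 1e-12                # |b**(iy)| = 1 for real y and b > 0
```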
The identity <math>(b^z)^u=b^{zu}</math> is not generally valid for complex powers. A simple counterexample is given by:
 <math>(e^{2\pi i})^i=1^i=1\neq e^{-2\pi}=e^{2\pi i\cdot i}</math>
The identity is, however, valid when <math>z</math> is a real number, and also when <math>u</math> is an integer.
Powers of complex numbers
Integer powers of nonzero complex numbers are defined by repeated multiplication or division as above. If i is the imaginary unit and n is an integer, then i^{n} equals 1, i, −1, or −i, according to whether the integer n is congruent to 0, 1, 2, or 3 modulo 4. Because of this, the powers of i are useful for expressing sequences of period 4.
Complex powers of positive reals are defined via e^{x} as in section Complex exponents with positive real bases above. These are continuous functions.
Trying to extend these functions to the general case of non-integer powers of complex numbers that are not positive reals leads to difficulties. Either we define discontinuous functions or multivalued functions; neither option is entirely satisfactory.
The rational power of a complex number must be the solution to an algebraic equation. Therefore it always has a finite number of possible values. For example, w = z^{1/2} must be a solution to the equation w^{2} = z. But if w is a solution, then so is −w, because (−1)^{2} = 1. A unique but somewhat arbitrary solution called the principal value can be chosen using a general rule which also applies for nonrational powers.
Complex powers and logarithms are more naturally handled as single-valued functions on a Riemann surface. Single-valued versions are defined by choosing a sheet. The value has a discontinuity along a branch cut. Choosing one out of many solutions as the principal value leaves us with functions that are not continuous, and the usual rules for manipulating powers can lead us astray.
Any nonrational power of a complex number has an infinite number of possible values because of the multivalued nature of the complex logarithm. The principal value is a single value chosen from these by a rule which, amongst its other properties, ensures powers of complex numbers with a positive real part and zero imaginary part give the same value as for the corresponding real numbers.
Exponentiating a real number to a complex power is formally a different operation from that for the corresponding complex number. However in the common case of a positive real number the principal value is the same.
The powers of negative real numbers are not always defined and are discontinuous even where defined. In fact, they are only defined when the exponent is a rational number with the denominator being an odd integer. When dealing with complex numbers the complex number operation is normally used instead.
Complex exponents with complex bases
For complex numbers w and z with w ≠ 0, the notation w^{z} is ambiguous in the same sense that log w is.
To obtain a value of w^{z}, first choose a logarithm of w; call it log w. Such a choice may be the principal value Log w (the default, if no other specification is given), or perhaps a value given by some other branch of log w fixed in advance. Then, using the complex exponential function one defines
 <math>w^z = e^{z \log w}</math>
because this agrees with the earlier definition in the case where w is a positive real number and the (real) principal value of log w is used.
If z is an integer, then the value of w^{z} is independent of the choice of log w, and it agrees with the earlier definition of exponentiation with an integer exponent.
If z is a rational number m/n in lowest terms with n > 0, then the countably infinitely many choices of log w yield only n different values for w^{z}; these values are the n complex solutions s to the equation s^{n} = w^{m}.
If z is an irrational number, then the countably infinitely many choices of log w lead to infinitely many distinct values for w^{z}.
The computation of complex powers is facilitated by converting the base w to polar form, as described in detail below.
A similar construction is employed in quaternions.
Complex roots of unity
A complex number w such that w^{n} = 1 for a positive integer n is an nth root of unity. Geometrically, the nth roots of unity lie on the unit circle of the complex plane at the vertices of a regular ngon with one vertex on the real number 1.
If w^{n} = 1 but w^{k} ≠ 1 for all natural numbers k such that 0 < k < n, then w is called a primitive nth root of unity. The negative unit −1 is the only primitive square root of unity. The imaginary unit i is one of the two primitive 4th roots of unity; the other one is −i.
The number e^{2πi/n} is the primitive nth root of unity with the smallest positive complex argument. (It is sometimes called the principal nth root of unity, although this terminology is not universal and should not be confused with the principal value of ^{n}√1, which is 1.^{[7]})
The other nth roots of unity are given by
 <math>\left( e^{ \frac{2}{n} \pi i } \right) ^k = e^{ \frac{2}{n} \pi i k }</math>
for 2 ≤ k ≤ n.
Roots of arbitrary complex numbers
Although there are infinitely many possible values for a general complex logarithm, there are only a finite number of values for the power w^{q} in the important special case where q = 1/n and n is a positive integer. These are the nth roots of w; they are solutions of the equation z^{n} = w. As with real roots, a second root is also called a square root and a third root is also called a cube root.
It is conventional in mathematics to define w^{1/n} as the principal value of the root. If w is a positive real number, it is also conventional to select a positive real number as the principal value of the root w^{1/n}. For general complex numbers, the nth root with the smallest argument is often selected as the principal value of the nth root operation, as with principal values of roots of unity.
The set of nth roots of a complex number w is obtained by multiplying the principal value w^{1/n} by each of the nth roots of unity. For example, the fourth roots of 16 are 2, −2, 2i, and −2i, because the principal value of the fourth root of 16 is 2 and the fourth roots of unity are 1, −1, i, and −i.
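This construction can be sketched in Python (the helper name `nth_roots` is illustrative; it multiplies the principal root by the nth roots of unity):

```python
import cmath
import math

def nth_roots(w, n):
    """All n complex nth roots of w: the principal root times the nth roots of unity."""
    r, theta = abs(w), cmath.phase(w)
    principal = r ** (1 / n) * cmath.exp(1j * theta / n)
    return [principal * cmath.exp(2j * math.pi * k / n) for k in range(n)]

roots = nth_roots(16, 4)
for z in roots:
    assert abs(z ** 4 - 16) < 1e-9               # each root solves z**4 = 16
assert any(abs(z - 2) < 1e-9 for z in roots)     # 2 is the principal fourth root
assert any(abs(z + 2j) < 1e-9 for z in roots)    # -2i is among the four roots
```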
Computing complex powers
It is often easier to compute complex powers by writing the number to be exponentiated in polar form. Every complex number z can be written in the polar form
 <math>z = re^{i\theta} = e^{\ln(r) + i\theta}</math>
where r is a nonnegative real number and θ is the (real) argument of z. The polar form has a simple geometric interpretation: if a complex number u + iv is thought of as representing a point (u, v) in the complex plane using Cartesian coordinates, then (r, θ) is the same point in polar coordinates. That is, r is the "radius" r^{2} = u^{2} + v^{2} and θ is the "angle" θ = atan2(v, u). The polar angle θ is ambiguous since any integer multiple of 2π could be added to θ without changing the location of the point. Each choice of θ gives in general a different possible value of the power. A branch cut can be used to choose a specific value. The principal value (the most common branch cut) corresponds to θ chosen in the interval (−π, π]. For complex numbers with a positive real part and zero imaginary part, using the principal value gives the same result as using the corresponding real number.
In order to compute the complex power w^{z}, write w in polar form:
 <math>w = r e^{i\theta}</math>
Then
 <math>\log w = \log r + i \theta</math>
and thus
 <math>w^z = e^{z \log w} = e^{z(\log r + i\theta)}</math>
If z is decomposed as c + di, then the formula for w^{z} can be written more explicitly as
 <math>\left( r^c e^{-d\theta} \right) e^{i (d \log r + c\theta)} = \left( r^c e^{-d\theta} \right) \left[ \cos(d \log r + c\theta) + i \sin(d \log r + c\theta) \right]</math>
This final formula allows complex powers to be computed easily from decompositions of the base into polar form and the exponent into Cartesian form. It is shown here both in polar form and in Cartesian form (via Euler's identity).
The following examples use the principal value, the branch cut which causes θ to be in the interval (−π, π]. To compute i^{i}, write i in polar and Cartesian forms:
 <math>\begin{align}
i &= 1 \cdot e^{\frac{1}{2} i \pi} \\ i &= 0 + 1i
\end{align}</math>
Then the formula above, with r = 1, θ = π/2, c = 0, and d = 1, yields:
 <math>i^i = \left( 1^0 e^{-\frac{1}{2}\pi} \right) e^{i \left[1 \cdot \log 1 + 0 \cdot \frac{1}{2}\pi \right]} = e^{-\frac{1}{2}\pi} \approx 0.2079</math>
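This principal value can be checked numerically (Python; complex `**` uses the principal branch of the logarithm):

```python
import math

# Principal value of i**i: with r = 1, theta = pi/2, c = 0, d = 1, the
# formula gives r**c * e**(-d*theta) = e**(-pi/2), a real number.
value = 1j ** 1j
assert abs(value - math.exp(-math.pi / 2)) < 1e-12
assert abs(value.imag) < 1e-12      # the principal value of i**i is real
```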
Similarly, to find (−2)^{3 + 4i}, compute the polar form of −2,
 <math>-2 = 2e^{i \pi}</math>
and use the formula above to compute
 <math>(-2)^{3 + 4i} = \left( 2^3 e^{-4\pi} \right) e^{i[4\log(2) + 3\pi]} \approx (2.602 - 1.006 i) \cdot 10^{-5}</math>
The value of a complex power depends on the branch used. For example, if the polar form i = 1e^{5πi/2} is used to compute i^{i}, the power is found to be e^{−5π/2}; the principal value of i^{i}, computed above, is e^{−π/2}. The set of all possible values for i^{i} is given by:^{[8]}
 <math>\begin{align}
i &= 1 \cdot e^{\frac{1}{2} i\pi + 2 \pi i k} , \quad k \in \mathbb{Z} \\ i^i &= e^{i \left(\frac{1}{2} i\pi + 2 \pi i k\right)} \\ &= e^{-\left(\frac{1}{2} \pi + 2 \pi k\right)}
\end{align}</math>
So there is an infinity of values which are possible candidates for the value of i^{i}, one for each integer k. All of them have a zero imaginary part so one can say i^{i} has an infinity of valid real values.
Failure of power and logarithm identities
Some identities for powers and logarithms of positive real numbers fail for complex numbers, no matter how complex powers and complex logarithms are defined as single-valued functions. For example:
 The identity log(b^{x}) = x ⋅ log b holds whenever b is a positive real number and x is a real number. But for the principal branch of the complex logarithm one has
 <math> i\pi = \log(-1) = \log\left[(-i)^2\right] \neq 2\log(-i) = 2\left(-\frac{i\pi}{2}\right) = -i\pi</math>
 Regardless of which branch of the logarithm is used, a similar failure of the identity will exist. The best that can be said (if only using this result) is that:
 <math>\log(w^z) \equiv z \cdot \log(w) \pmod{2 \pi i}</math>
 This identity does not hold even when considering log as a multivalued function. The possible values of log(w^{z}) contain those of z ⋅ log w as a subset. Using Log(w) for the principal value of log(w) and m, n as any integers the possible values of both sides are:
 <math>\begin{align}
\left\{\log(w^z)\right\} &= \left\{ z \cdot \operatorname{Log}(w) + z \cdot 2 \pi i n + 2 \pi i m \right\} \\ \left\{z \cdot \log(w)\right\} &= \left\{ z \cdot \operatorname{Log}(w) + z \cdot 2 \pi i n \right\} \end{align}</math>
 The identities (bc)^{x} = b^{x}c^{x} and (b/c)^{x} = b^{x}/c^{x} are valid when b and c are positive real numbers and x is a real number. But a calculation using principal branches shows that
 <math>1 = (-1\times -1)^\frac{1}{2} \neq (-1)^\frac{1}{2}(-1)^\frac{1}{2} = -1</math>
 and
 <math>i = (-1)^\frac{1}{2} = \left (\frac{1}{-1}\right )^\frac{1}{2} \neq \frac{1^\frac{1}{2}}{(-1)^\frac{1}{2}} = \frac{1}{i} = -i</math>
 On the other hand, when x is an integer, the identities are valid for all nonzero complex numbers.
 If exponentiation is considered as a multivalued function then the possible values of (−1×−1)^{1/2} are {1, −1}. The identity holds but saying {1} = {(−1×−1)^{1/2}} is wrong.
 The identity (e^{x})^{y} = e^{xy} holds for real numbers x and y, but assuming its truth for complex numbers leads to the following paradox, discovered in 1827 by Clausen:^{[9]}
 For any integer n, we have:
 <math>e^{1 + 2 \pi i n} = e^{1} e^{2 \pi i n} = e \cdot 1 = e</math>
 <math>\left( e^{1+2\pi i n} \right)^{1 + 2 \pi i n} = e</math>
 <math>e^{1 + 4 \pi i n - 4 \pi^{2} n^{2}} = e</math>
 <math>e^1 e^{4 \pi i n} e^{-4 \pi^2 n^2} = e</math>
 <math>e^{-4 \pi^2 n^2} = 1</math>
 but this is false when the integer n is nonzero.
 There are a number of problems in the reasoning:
 The major error is that changing the order of exponentiation in going from the second line to the third changes the principal value that will be chosen.
 From the multivalued point of view, the first error occurs even sooner. Implicit in the first line is that e is a real number, whereas the result of e^{1+2πin} is a complex number better represented as e + 0i. Substituting the complex number for the real one on the second line gives the power multiple possible values. Changing the order of exponentiation from the second line to the third also affects how many possible values the result can have: <math>\scriptstyle (e^z)^w \;\ne\; e^{z w}</math>, but rather <math>\scriptstyle (e^z)^w \;=\; e^{(z \,+\, 2\pi i n) w}</math>, multivalued over the integers n.
Generalizations
Monoids
Exponentiation can be defined in any monoid.^{[10]} A monoid is an algebraic structure consisting of a set X together with a rule for composition ("multiplication") satisfying an associative law and a multiplicative identity, denoted by 1. Exponentiation is defined inductively by:
 <math>x^0=1</math> for all <math>x\in X</math>
 <math>x^{n+1}=x^nx</math> for all <math>x\in X</math> and nonnegative integers n
Monoids include many structures of importance in mathematics, including groups and rings (under multiplication), with more specific examples of the latter being matrix rings and fields.
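The inductive definition translates directly into code. The sketch below (Python; all names are our own) computes x^{n} in any monoid, given its identity element and multiplication rule:

```python
def monoid_pow(x, n, identity, mul):
    """x**n in a monoid, by the inductive rules x**0 = 1 and x**(n+1) = (x**n) * x."""
    result = identity
    for _ in range(n):
        result = mul(result, x)
    return result

# Strings under concatenation form a monoid with identity "".
print(monoid_pow("ab", 3, "", lambda a, b: a + b))   # "ababab"
# Integers under multiplication form a monoid with identity 1.
print(monoid_pow(5, 3, 1, lambda a, b: a * b))       # 125
```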
Matrices and linear operators
If A is a square matrix, then the product of A with itself n times is called the matrix power. Also <math>A^0</math> is defined to be the identity matrix,^{[11]} and if A is invertible, then <math>A^{-n}=(A^{-1})^n</math>.
Matrix powers appear often in the context of discrete dynamical systems, where the matrix A expresses a transition from a state vector x of some system to the next state Ax of the system.^{[12]} This is the standard interpretation of a Markov chain, for example. Then <math>A^2x</math> is the state of the system after two time steps, and so forth: <math>A^nx</math> is the state of the system after n time steps. The matrix power <math>A^n</math> is the transition matrix between the state now and the state at a time n steps in the future. So computing matrix powers is equivalent to solving the evolution of the dynamical system. In many cases, matrix powers can be expediently computed by using eigenvalues and eigenvectors.
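As a small illustration of this (Python, pure stdlib; all names are our own), iterating a transition matrix amounts to taking a matrix power:

```python
def mat_mul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(A, n):
    """A**n by repeated multiplication; A**0 is the identity matrix."""
    result = [[1, 0], [0, 1]]
    for _ in range(n):
        result = mat_mul(result, A)
    return result

# A toy two-state Markov chain; each row of the transition matrix sums to 1.
A = [[0.9, 0.1], [0.5, 0.5]]
print(mat_pow(A, 3))   # transition probabilities after three time steps
```

The same routine recovers the classic Fibonacci identity: the powers of [[1, 1], [1, 0]] contain the Fibonacci numbers.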
Apart from matrices, more general linear operators can also be exponentiated. An example is the derivative operator of calculus, <math>d/dx</math>, which is a linear operator acting on functions <math>f(x)</math> to give a new function <math>(d/dx)f(x)=f'(x)</math>. The nth power of the differentiation operator is the nth derivative:
 <math>\left(\frac{d}{dx}\right)^nf(x) = \frac{d^n}{dx^n}f(x) = f^{(n)}(x).</math>
These examples are for discrete exponents of linear operators, but in many circumstances it is also desirable to define powers of such operators with continuous exponents. This is the starting point of the mathematical theory of semigroups.^{[13]} Just as computing matrix powers with discrete exponents solves discrete dynamical systems, computing operator powers with continuous exponents solves systems with continuous dynamics. Examples include approaches to solving the heat equation, Schrödinger equation, wave equation, and other partial differential equations involving a time evolution. The special case of exponentiating the derivative operator to a non-integer power is called the fractional derivative which, together with the fractional integral, is one of the basic operations of the fractional calculus.
Finite fields
A field is an algebraic structure in which multiplication, addition, subtraction, and division are all well-defined and satisfy their familiar properties. The real numbers, for example, form a field, as do the complex numbers and rational numbers. Unlike these familiar examples of fields, which are all infinite sets, some fields have only finitely many elements. The simplest example is the field with two elements <math>F_2=\{0,1\}</math> with addition defined by <math>0+1=1+0=1</math> and <math>0+0=1+1=0</math>, and multiplication <math>0\cdot 0=1\cdot 0 = 0\cdot 1=0</math> and <math>1\cdot 1=1</math>.
Exponentiation in finite fields has applications in public key cryptography. For example, the Diffie–Hellman key exchange uses the fact that exponentiation is computationally inexpensive in finite fields, whereas the discrete logarithm (the inverse of exponentiation) is computationally expensive.
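A toy sketch of the idea (Python; the tiny prime, generator, and secret exponents below are illustrative values of our own choosing, while real deployments use very large parameters):

```python
p, g = 23, 5             # public: a small prime modulus and a generator (toy values)
a, b = 6, 15             # private exponents chosen secretly by the two parties

A = pow(g, a, p)         # cheap: modular exponentiation
B = pow(g, b, p)
shared_a = pow(B, a, p)  # each party raises the other's public value ...
shared_b = pow(A, b, p)  # ... to its own secret exponent
print(shared_a == shared_b)   # True: both obtain g**(a*b) mod p
```

Recovering a from A = g^{a} mod p is the discrete-logarithm direction, which is what is assumed to be computationally expensive.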
Any finite field F has the property that there is a unique prime number p such that <math>px=0</math> for all x in F; that is, x added to itself p times is zero. For example, in <math>F_2</math>, the prime number p = 2 has this property. This prime number is called the characteristic of the field. Suppose that F is a field of characteristic p, and consider the function <math>f(x) = x^p</math> that raises each element of F to the power p. This is called the Frobenius automorphism of F. It is an automorphism of the field because of the Freshman's dream identity <math>(x+y)^p = x^p+y^p</math>. The Frobenius automorphism is important in number theory because it generates the Galois group of F over its prime subfield.
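The Freshman's dream identity is easy to verify exhaustively in a small prime field (Python sketch; the choice p = 7 and the loop are our own):

```python
p = 7   # the prime field F_p has characteristic p

# (x + y)**p == x**p + y**p holds for every pair in F_p, and the
# Frobenius map x -> x**p fixes F_p pointwise (Fermat's little theorem).
for x in range(p):
    assert pow(x, p, p) == x
    for y in range(p):
        assert pow(x + y, p, p) == (pow(x, p, p) + pow(y, p, p)) % p
print("verified for p =", p)
```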
In abstract algebra
Exponentiation for integer exponents can be defined for quite general structures in abstract algebra.
Let X be a set with a power-associative binary operation which is written multiplicatively. Then x^{n} is defined for any element x of X and any nonzero natural number n as the product of n copies of x, which is recursively defined by
 <math>\begin{align}
x^1 &= x \\ x^n &= x^{n-1}x \quad\hbox{for }n>1
\end{align}</math>
One has the following properties
 <math>\begin{align}
(x^i x^j) x^k &= x^i (x^j x^k) \quad\text{(power-associative property)} \\ x^{m+n} &= x^m x^n \\ (x^m)^n &= x^{mn}
\end{align}</math>
If the operation has a twosided identity element 1, then x^{0} is defined to be equal to 1 for any x.
 <math>\begin{align}
x 1 &= 1 x = x \quad\text{(two-sided identity)} \\ x^0 &= 1
\end{align}</math>^{[citation needed]}
If the operation also has two-sided inverses and is associative, then the magma is a group. The inverse of x can be denoted by x^{−1} and follows all the usual rules for exponents.
 <math>\begin{align}
x x^{-1} &= x^{-1} x = 1 \quad\text{(two-sided inverse)} \\ (x y) z &= x (y z) \quad\text{(associative)} \\ x^{-n} &= \left(x^{-1}\right)^n \\ x^{m-n} &= x^m x^{-n}
\end{align}</math>
If the multiplication operation is commutative (as for instance in abelian groups), then the following holds:
 <math>(xy)^n = x^n y^n </math>
If the binary operation is written additively, as it often is for abelian groups, then "exponentiation is repeated multiplication" can be reinterpreted as "multiplication is repeated addition". Thus, each of the laws of exponentiation above has an analogue among laws of multiplication.
When there are several power-associative binary operations defined on a set, any of which might be iterated, it is common to indicate which operation is being repeated by placing its symbol in the superscript. Thus, x^{∗n} is x ∗ ... ∗ x, while x^{#n} is x # ... # x, whatever the operations ∗ and # might be.
Superscript notation is also used, especially in group theory, to indicate conjugation. That is, g^{h} = h^{−1}gh, where g and h are elements of some group. Although conjugation obeys some of the same laws as exponentiation, it is not an example of repeated multiplication in any sense. A quandle is an algebraic structure in which these laws of conjugation play a central role.
Over sets
If n is a natural number and A is an arbitrary set, the expression A^{n} is often used to denote the set of ordered n-tuples of elements of A. This is equivalent to letting A^{n} denote the set of functions from the set {0, 1, 2, …, n−1} to the set A; the n-tuple (a_{0}, a_{1}, a_{2}, …, a_{n−1}) represents the function that sends i to a_{i}.
For an infinite cardinal number κ and a set A, the notation A^{κ} is also used to denote the set of all functions from a set of size κ to A. This is sometimes written ^{κ}A to distinguish it from cardinal exponentiation, defined below.
This generalized exponential can also be defined for operations on sets or for sets with extra structure. For example, in linear algebra, it makes sense to index direct sums of vector spaces over arbitrary index sets. That is, we can speak of
 <math>\bigoplus_{i \in \mathbb{N}} V_{i}</math>
where each V_{i} is a vector space.
Then if V_{i} = V for each i, the resulting direct sum can be written in exponential notation as V^{⊕N}, or simply V^{N} with the understanding that the direct sum is the default. We can again replace the set N with a cardinal number n to get V^{n}, although without choosing a specific standard set with cardinality n, this is defined only up to isomorphism. Taking V to be the field R of real numbers (thought of as a vector space over itself) and n to be some natural number, we get the vector space that is most commonly studied in linear algebra, the real vector space R^{n}.
If the base of the exponentiation operation is a set, the exponentiation operation is the Cartesian product unless otherwise stated. Since multiple Cartesian products produce an n-tuple, which can be represented by a function on a set of appropriate cardinality, S^{N} becomes simply the set of all functions from N to S in this case:
 <math>S^N \equiv \{ f\colon N \to S \}</math>
This fits in with the exponentiation of cardinal numbers, in the sense that |S^{N}| = |S|^{|N|}, where |X| is the cardinality of X. When "2" is defined as {0, 1}, we have |2^{X}| = 2^{|X|}, where 2^{X}, usually denoted by P(X), is the power set of X; each subset Y of X corresponds uniquely to a function on X taking the value 1 for x ∈ Y and 0 for x ∉ Y.
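The counting interpretation is easy to check directly (Python, `itertools`; the names are our own): functions from an n-element set to S are exactly n-tuples over S.

```python
from itertools import product

S = ("a", "b")          # a 2-element codomain
n = 3                   # functions from {0, 1, 2} to S are just 3-tuples over S
functions = list(product(S, repeat=n))
print(len(functions))   # 2**3 = 8, matching the cardinal identity
```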
In category theory
In a Cartesian closed category, the exponential operation can be used to raise an arbitrary object to the power of another object. This generalizes the Cartesian product in the category of sets. If 0 is an initial object in a Cartesian closed category, then the exponential object 0^{0} is isomorphic to any terminal object 1.
Of cardinal and ordinal numbers
In set theory, there are exponential operations for cardinal and ordinal numbers.
If κ and λ are cardinal numbers, the expression κ^{λ} represents the cardinality of the set of functions from any set of cardinality λ to any set of cardinality κ.^{[14]} If κ and λ are finite, then this agrees with the ordinary arithmetic exponential operation. For example, the set of 3-tuples of elements from a 2-element set has cardinality 8 = 2^{3}. In cardinal arithmetic, κ^{0} is always 1 (even if κ is an infinite cardinal or zero).
Exponentiation of cardinal numbers is distinct from exponentiation of ordinal numbers, which is defined by a limit process involving transfinite induction.
Repeated exponentiation
Just as exponentiation of natural numbers is motivated by repeated multiplication, it is possible to define an operation based on repeated exponentiation; this operation is sometimes called hyper-4 or tetration. Iterating tetration leads to another operation, and so on, a concept named hyperoperation. This sequence of operations is expressed by the Ackermann function and Knuth's up-arrow notation. Just as exponentiation grows faster than multiplication, which grows faster than addition, tetration grows faster than exponentiation. Evaluated at (3, 3), the functions addition, multiplication, exponentiation, and tetration yield 6, 9, 27, and 7,625,597,484,987 (= 3^{27} = 3^{3^{3}} = ^{3}3) respectively.
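Repeated exponentiation is straightforward to express in code (Python sketch; `tetration` is our own name), though the values explode very quickly:

```python
def tetration(b, n):
    """n-fold iterated exponentiation: b**(b**(...**b)) with n copies of b."""
    result = 1
    for _ in range(n):
        result = b ** result
    return result

print(tetration(3, 2))   # 3**3 = 27
print(tetration(3, 3))   # 3**27 = 7625597484987
```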
Zero to the power of zero
Discrete exponents
There are many widely used formulas having terms involving natural-number exponents that require 0^{0} to be evaluated to 1. For example, regarding b^{0} as an empty product assigns it the value 1, even when b = 0. Alternatively, the combinatorial interpretation of b^{0} is the number of empty tuples of elements from a set with b elements. There is exactly one empty tuple, even if b = 0. Equivalently, the set-theoretic interpretation of 0^{0} is the number of functions from the empty set to the empty set. There is exactly one such function, the empty function.^{[14]}
Polynomials and power series
Likewise, when working with polynomials, it is often necessary to assign <math>0^0</math> the value 1. A polynomial is an expression of the form <math>a_0x^0+\cdots+a_nx^n</math> where x is an indeterminate, and the coefficients <math>a_n</math> are real numbers (or, more generally, elements of some ring). The set of all real polynomials in x is denoted by <math>\mathbb R[x]</math>. Polynomials are added termwise, and multiplied by applying the usual rules for exponents in the indeterminate x (see Cauchy product). With these algebraic rules for manipulation, polynomials form a polynomial ring. The polynomial <math>x^0</math> is the identity element of the polynomial ring, meaning that it is the (unique) element such that the product of <math>x^0</math> with any polynomial <math>p(x)</math> is just <math>p(x)</math>.^{[15]} Polynomials can be evaluated by specializing the indeterminate x to be a real number. More precisely, for any given real number <math>x_0</math> there is a unique unital ring homomorphism <math>\operatorname{ev}_{x_0}:\mathbb R[x]\to\mathbb R</math> such that <math>\operatorname{ev}_{x_0}(x^1)=x_0</math>.^{[16]} This is called the evaluation homomorphism. Because it is a unital homomorphism, we have <math>\operatorname{ev}_{x_0}(x^0) = 1.</math> That is, <math>x^0=1</math> for all specializations of x to a real number (including zero).
This perspective is significant for many polynomial identities appearing in combinatorics. For example, the binomial theorem <math>(1 + x)^n = \sum_{k = 0}^n \binom{n}{k} x^k</math> is not valid for x = 0 unless 0^{0} = 1.^{[17]} Similarly, rings of power series require <math>x^0=1</math> to be true for all specializations of x. Thus identities like <math>\frac{1}{1x} = \sum_{n=0}^{\infty} x^n</math> and <math>e^{x} = \sum_{n=0}^{\infty} \frac{x^n}{n!}</math> are only true as functional identities (including at x = 0) if 0^{0} = 1.
In differential calculus, the power rule <math>\frac{d}{dx} x^n = nx^{n-1}</math> is not valid for n = 1 at x = 0 unless 0^{0} = 1.
Continuous exponents
Limits involving algebraic operations can often be evaluated by replacing subexpressions by their limits; if the resulting expression does not determine the original limit, the expression is known as an indeterminate form.^{[18]} In fact, when f(t) and g(t) are real-valued functions both approaching 0 (as t approaches a real number or ±∞), with f(t) > 0, the function f(t)^{g(t)} need not approach 1; depending on f and g, the limit of f(t)^{g(t)} can be any nonnegative real number or +∞, or it can diverge. For example, the functions below are of the form f(t)^{g(t)} with f(t), g(t) → 0 as t → 0^{+}, but the limits are different:
 <math> \lim_{t \to 0^+} {t}^{t} = 1, \quad \lim_{t \to 0^+} \left(e^{-\frac{1}{t^2}}\right)^t = 0, \quad \lim_{t \to 0^+} \left(e^{-\frac{1}{t^2}}\right)^{-t} = +\infty, \quad \lim_{t \to 0^+} \left(e^{-\frac{1}{t}}\right)^{at} = e^{-a}</math>.
Thus, the two-variable function x^{y}, though continuous on the set {(x, y) : x > 0}, cannot be extended to a continuous function on any set containing (0, 0), no matter how one chooses to define 0^{0}.^{[19]} However, under certain conditions, such as when f and g are both analytic functions and f is positive on the open interval (0, b) for some positive b, the limit approaching from the right is always 1.^{[20]}^{[21]}^{[22]}
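Such limits can be sampled numerically. To dodge floating-point underflow, the sketch below (Python; all names are our own) evaluates f(t)^{g(t)} as exp(g(t) · log f(t)):

```python
import math

def sampled_power(log_f, g, t):
    """f(t)**g(t) computed as exp(g(t) * log f(t)), for small positive t."""
    return math.exp(g(t) * log_f(t))

t = 1e-6
print(sampled_power(math.log, lambda t: t, t))              # t**t: close to 1
print(sampled_power(lambda t: -1 / t**2, lambda t: t, t))   # tends to 0
```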
Complex exponents
In the complex domain, the function z^{w} may be defined for nonzero z by choosing a branch of log z and defining z^{w} as e^{w log z}. This does not define 0^{w} since there is no branch of log z defined at z = 0, let alone in a neighborhood of 0.^{[23]}^{[24]}^{[25]}
History of differing points of view
The debate over the definition of <math>0^0</math> has been going on at least since the early 19th century. At that time, most mathematicians agreed that <math>0^0 = 1</math>, until in 1821 Cauchy^{[26]} listed <math>0^0</math> along with expressions like <math>\frac{0}{0}</math> in a table of indeterminate forms. In the 1830s Libri^{[27]}^{[28]} published an unconvincing argument for <math>0^0 = 1</math>, and Möbius^{[29]} sided with him, erroneously claiming that <math>\scriptstyle \lim_{t \to 0^+} f(t)^{g(t)} \;=\; 1</math> whenever <math>\scriptstyle \lim_{t \to 0^+} f(t) \;=\; \lim_{t \to 0^+} g(t) \;=\; 0</math>. A commentator who signed his name simply as "S" provided the counterexample of <math>\scriptstyle (e^{-1/t})^t</math>, and this quieted the debate for some time. More historical details can be found in Knuth (1992).^{[30]}
More recent authors interpret the situation above in different ways:
 Some argue that the best value for <math>0^0</math> depends on context, and hence that defining it once and for all is problematic.^{[31]} According to Benson (1999), "The choice whether to define <math>0^0</math> is based on convenience, not on correctness. If we refrain from defining <math>0^0</math> then certain assertions become unnecessarily awkward. The consensus is to use the definition <math>0^0=1</math>, although there are textbooks that refrain from defining <math>0^0</math>."^{[32]}
 Others argue that <math>0^0</math> should be defined as 1. Knuth (1992) contends strongly that <math>0^0</math> "has to be 1", drawing a distinction between the value <math>0^0</math>, which should equal 1 as advocated by Libri, and the limiting form <math>0^0</math> (an abbreviation for a limit of <math>\scriptstyle f(x)^{g(x)}</math> where <math>\scriptstyle f(x), g(x) \to 0</math>), which is necessarily an indeterminate form as listed by Cauchy: "Both Cauchy and Libri were right, but Libri and his defenders did not understand why truth was on their side."^{[30]}
Treatment on computers
IEEE floating point standard
The IEEE 754-2008 floating-point standard is used in the design of most floating-point libraries. It recommends a number of functions for computing a power:^{[33]}
 pow treats 0^{0} as 1. This is the oldest defined version. If the power is an exact integer the result is the same as for pown, otherwise the result is as for powr (except for some exceptional cases).
 pown treats 0^{0} as 1. The power must be an exact integer. The value is defined for negative bases; e.g., pown(−3,5) is −243.
 powr treats 0^{0} as NaN (Not-a-Number – undefined). The value is also NaN for cases like powr(−3,2) where the base is less than zero. The value is defined by e^{power×log(base)}.
Programming languages
Most programming languages with a power function implement it using the IEEE pow function and therefore evaluate 0^{0} as 1. The later C^{[34]} and C++ standards describe this as the normative behaviour. The Java standard^{[35]} mandates this behavior. The .NET Framework method System.Math.Pow also treats 0^{0} as 1.^{[36]}
Mathematics software
 Sage simplifies b^{0} to 1, even if no constraints are placed on b.^{[37]} It takes 0^{0} to be 1, but does not simplify 0^{x} for other x.
 Maple distinguishes between integers 0, 1, ... and the corresponding floats 0.0, 1.0, ... (usually denoted 0., 1., ...). If x does not evaluate to a number, then x^{0} and x^{0.0} are respectively evaluated to 1 (integer) and 1.0 (float); on the other hand, 0^{x} is evaluated to the integer 0, while 0.0^{x} is evaluated as 0.^{x}. If both the base and the exponent are zero (or are evaluated to zero), the result is Float(undefined) if the exponent is the float 0.0; with an integer as exponent, the evaluation of 0^{0} results in the integer 1, while that of 0.^{0} results in the float 1.0.
 Macsyma also simplifies b^{0} to 1 even if no constraints are placed on b, but issues an error for 0^{0}. For x>0, it simplifies 0^{x} to 0.^{[citation needed]}
 Mathematica and Wolfram Alpha simplify b^{0} into 1, even if no constraints are placed on b.^{[38]} While Mathematica does not simplify 0^{x}, Wolfram Alpha returns two results, 0 for x > 0, and "indeterminate" for real x.^{[39]} Both Mathematica and Wolfram Alpha take 0^{0} to be "(indeterminate)".^{[40]}
 Matlab, Magma, GAP, Singular, PARI/GP and the Google and iPhone calculators evaluate 0^{0} as 1.
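Python follows the same conventions, which is easy to check at a prompt: integer and float exponentiation both give 1 for 0^{0}, and `math.pow` wraps the C library's IEEE-style pow.

```python
import math

print(0 ** 0)              # 1   (integer exponentiation)
print(0.0 ** 0)            # 1.0
print(math.pow(0.0, 0.0))  # 1.0
```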
Limits of powers
The section § Zero to the power of zero gives a number of examples of limits that are of the indeterminate form 0^{0}. The limits in these examples exist, but have different values, showing that the twovariable function x^{y} has no limit at the point (0, 0). One may consider at what points this function does have a limit.
More precisely, consider the function f(x, y) = x^{y} defined on D = {(x, y) ∈ R^{2} : x > 0}. Then D can be viewed as a subset of R^{2} (that is, the set of all pairs (x, y) with x, y belonging to the extended real number line R = [−∞, +∞], endowed with the product topology), which will contain the points at which the function f has a limit.
In fact, f has a limit at all accumulation points of D, except for (0, 0), (+∞, 0), (1, +∞) and (1, −∞).^{[41]} Accordingly, this allows one to define the powers x^{y} by continuity whenever 0 ≤ x ≤ +∞, −∞ ≤ y ≤ +∞, except for 0^{0}, (+∞)^{0}, 1^{+∞} and 1^{−∞}, which remain indeterminate forms.
Under this definition by continuity, we obtain:
 x^{+∞} = +∞ and x^{−∞} = 0, when 1 < x ≤ +∞.
 x^{+∞} = 0 and x^{−∞} = +∞, when 0 ≤ x < 1.
 0^{y} = 0 and (+∞)^{y} = +∞, when 0 < y ≤ +∞.
 0^{y} = +∞ and (+∞)^{y} = 0, when −∞ ≤ y < 0.
These powers are obtained by taking limits of x^{y} for positive values of x. This method does not permit a definition of x^{y} when x < 0, since pairs (x, y) with x < 0 are not accumulation points of D.
On the other hand, when n is an integer, the power x^{n} is already meaningful for all values of x, including negative ones. This may make the definition 0^{n} = +∞ obtained above for negative n problematic when n is odd, since in this case x^{n} → +∞ as x tends to 0 through positive values, but not negative ones.
Efficient computation with integer exponents
The simplest method of computing b^{n} requires n − 1 multiplication operations, but it can be computed more efficiently than that, as illustrated by the following example. To compute 2^{100}, note that 100 = 64 + 32 + 4. Compute the following in order:
 2^{2} = 4
 (2^{2})^{2} = 2^{4} = 16
 (2^{4})^{2} = 2^{8} = 256
 (2^{8})^{2} = 2^{16} = 65,536
 (2^{16})^{2} = 2^{32} = 4,294,967,296
 (2^{32})^{2} = 2^{64} = 18,446,744,073,709,551,616
 2^{64} 2^{32} 2^{4} = 2^{100} = 1,267,650,600,228,229,401,496,703,205,376
This series of steps only requires 8 multiplication operations instead of 99 (since the last product above takes 2 multiplications).
In general, the number of multiplication operations required to compute b^{n} can be reduced to Θ(log n) by using exponentiation by squaring or (more generally) additionchain exponentiation. Finding the minimal sequence of multiplications (the minimallength addition chain for the exponent) for b^{n} is a difficult problem for which no efficient algorithms are currently known (see Subset sum problem), but many reasonably efficient heuristic algorithms are available.^{[42]}
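The binary method generalizes the 2^{100} example above; a compact Python sketch of exponentiation by squaring (the function name is our own):

```python
def fast_power(b, n):
    """Compute b**n with O(log n) multiplications (exponentiation by squaring)."""
    result = 1
    while n > 0:
        if n & 1:        # this binary digit of the exponent is set
            result *= b
        b *= b           # square the base at each step
        n >>= 1
    return result

print(fast_power(2, 100) == 2 ** 100)   # True
```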
Exponential notation for function names
Placing an integer superscript after the name or symbol of a function, as if the function were being raised to a power, commonly refers to repeated function composition rather than repeated multiplication. Thus f^{ 3}(x) may mean f(f(f(x))); in particular, f^{ −1}(x) usually denotes the inverse function of f. Iterated functions are of interest in the study of fractals and dynamical systems. Babbage was the first to study the problem of finding a functional square root f^{ 1/2}(x).
However, for historical reasons, a special syntax applies to the trigonometric functions: a positive exponent applied to the function's abbreviation means that the result is raised to that power, while an exponent of −1 denotes the inverse function. That is, sin^{2}x is just a shorthand way to write (sin x)^{2} without using parentheses, whereas sin^{−1}x refers to the inverse function of the sine, also called arcsin x. There is no need for a shorthand for the reciprocals of trigonometric functions since each has its own name and abbreviation; for example, 1/(sin x) = (sin x)^{−1} = csc x. A similar convention applies to logarithms, where log^{2}x usually means (log x)^{2}, not log log x.
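Function powers in the composition sense are easy to sketch in code (Python; the names `iterate` and `double` are our own):

```python
def iterate(f, n):
    """Return f**n in the composition sense: the function applying f n times."""
    def composed(x):
        for _ in range(n):
            x = f(x)
        return x
    return composed

double = lambda x: 2 * x
print(iterate(double, 3)(5))   # double(double(double(5))) = 40
```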
In programming languages
The superscript notation x^{y} is convenient in handwriting but inconvenient for typewriters and computer terminals that align the baselines of all characters on each line. Many programming languages have alternate ways of expressing exponentiation that do not use superscripts:

x ↑ y
: Algol, Commodore BASIC 
x ^ y
: BASIC, J, MATLAB, R, Microsoft Excel, TeX (and its derivatives), TIBASIC, bc (for integer exponents), Haskell (for nonnegative integer exponents), Lua, ASP and most computer algebra systems 
x ^^ y
: Haskell (for fractional base, integer exponents), D 
x ** y
: Ada, Bash, COBOL, CoffeeScript, Fortran, FoxPro, Gnuplot, OCaml, F#, Perl, PHP, PL/I, Python, Rexx, Ruby, SAS, Seed7, Tcl, ABAP, Mercury, Haskell (for floatingpoint exponents), Turing, VHDL 
pown x y
: F# (for integer base, integer exponent) 
x⋆y
: APL
Many programming languages lack syntactic support for exponentiation, but provide library functions.
In Bash, C, C++, C#, Java, JavaScript, Perl, PHP, Python and Ruby, the symbol ^ represents bitwise XOR. In Pascal, it represents indirection. In OCaml and Standard ML, it represents string concatenation.
History of the notation
The term power was used by the Greek mathematician Euclid for the square of a line.^{[43]} Archimedes discovered and proved the law of exponents, 10^{a} 10^{b} = 10^{a+b}, necessary to manipulate powers of 10.^{[44]} In the 9th century, the Persian mathematician Muhammad ibn Mūsā alKhwārizmī used the terms mal for a square and kab for a cube, which later Islamic mathematicians represented in mathematical notation as m and k, respectively, by the 15th century, as seen in the work of Abū alHasan ibn Alī alQalasādī.^{[45]}
In the late 16th century, Jost Bürgi used Roman numerals for exponents.^{[46]}
Early in the 17th century, the first form of our modern exponential notation was introduced by René Descartes in his text titled La Géométrie; there, the notation is introduced in Book I.^{[2]}
Nicolas Chuquet used a form of exponential notation in the 15th century, which was later used by Henricus Grammateus and Michael Stifel in the 16th century. Samuel Jeake introduced the term indices in 1696.^{[43]} In the 16th century Robert Recorde used the terms square, cube, zenzizenzic (fourth power), sursolid (fifth), zenzicube (sixth), second sursolid (seventh), and zenzizenzizenzic (eighth).^{[47]} Biquadrate has been used to refer to the fourth power as well.
Some mathematicians (e.g., Isaac Newton) used exponents only for powers greater than two, preferring to represent squares as repeated multiplication. Thus they would write polynomials, for example, as ax + bxx + cx^{3} + d.
Another historical synonym, involution,^{[48]} is now rare and should not be confused with its more common meaning.
List of wholenumber exponentials
n  n^{2}  n^{3}  n^{4}  n^{5}  n^{6}  n^{7}  n^{8}  n^{9}  n^{10} 

2  4  8  16  32  64  128  256  512  1,024 
3  9  27  81  243  729  2,187  6,561  19,683  59,049 
4  16  64  256  1,024  4,096  16,384  65,536  262,144  1,048,576 
5  25  125  625  3,125  15,625  78,125  390,625  1,953,125  9,765,625 
6  36  216  1,296  7,776  46,656  279,936  1,679,616  10,077,696  60,466,176 
7  49  343  2,401  16,807  117,649  823,543  5,764,801  40,353,607  282,475,249 
8  64  512  4,096  32,768  262,144  2,097,152  16,777,216  134,217,728  1,073,741,824 
9  81  729  6,561  59,049  531,441  4,782,969  43,046,721  387,420,489  3,486,784,401 
10  100  1,000  10,000  100,000  1,000,000  10,000,000  100,000,000  1,000,000,000  10,000,000,000 
See also
References
 ^ See:
 Earliest Known Uses of Some of the Words of Mathematics
 Michael Stifel, Arithmetica integra (Nuremberg ("Norimberga"), (Germany): Johannes Petreius, 1544), Liber III (Book 3), Caput III (Chapter 3): De Algorithmo numerorum Cossicorum. (On algorithms of algebra.), page 236. Stifel was trying to conveniently represent the terms of geometric progressions. He devised a cumbersome notation for doing that. On page 236, he presented the notation for the first eight terms of a geometric progression (using 1 as a base) and then he wrote: "Quemadmodum autem hic vides, quemlibet terminum progressionis cossicæ, suum habere exponentem in suo ordine (ut 1ze habet 1. 1ʓ habet 2 &c.) sic quilibet numerus cossicus, servat exponentem suæ denominationis implicite, qui ei serviat & utilis sit, potissimus in multiplicatione & divisione, ut paulo inferius dicam." (However, you see how each term of the progression has its exponent in its order (as 1ze has a 1, 1ʓ has a 2, etc.), so each number is implicitly subject to the exponent of its denomination, which [in turn] is subject to it and is useful mainly in multiplication and division, as I will mention just below.) [Note: Most of Stifel's cumbersome symbols were taken from Christoff Rudolff, who in turn took them from Leonardo Fibonacci's Liber Abaci (1202), where they served as shorthand symbols for the Latin words res/radix (x), census/zensus (x^{2}), and cubus (x^{3}).]
 ^ ^{a} ^{b} René Descartes, Discours de la Méthode … (Leiden, (Netherlands): Jan Maire, 1637), appended book: La Géométrie, book one, page 299. From page 299: " … Et aa, ou a^{2}, pour multiplier a par soy mesme; Et a^{3}, pour le multiplier encore une fois par a, & ainsi a l'infini ; … " ( … and aa, or a^{2}, in order to multiply a by itself; and a^{3}, in order to multiply it once more by a, and thus to infinity ; … )
 ^ Cajori, Florian (1991) [1893]. A History of Mathematics (5th ed.). AMS. p. 178. ISBN 0821821024.
 ^ Hodge, Jonathan K.; Schlicker, Steven; Sundstrom, Ted (2014). Abstract Algebra: An Inquiry-Based Approach. CRC Press. p. 94. ISBN 9781466567061.
 ^ Achatz, Thomas (2005). Technical Shop Mathematics (3rd ed.). Industrial Press. p. 101. ISBN 0831130865.
 ^ ^{a} ^{b} Denlinger, Charles G. (2011). Elements of Real Analysis. Jones and Bartlett. pp. 278–283. ISBN 9780763779474.
 ^ This definition of a principal root of unity can be found in:
 Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein (2001). Introduction to Algorithms (second ed.). MIT Press. ISBN 0262032937. Online resource
 Paul Cull, Mary Flahive, and Robby Robson (2005). Difference Equations: From Rabbits to Chaos (Undergraduate Texts in Mathematics ed.). Springer. ISBN 0387232346. Defined on page 351, available on Google books.
 "Principal root of unity", MathWorld.
 ^ Complex number to a complex power may be real at Cut The Knot gives some references to i^{i}
 ^ Steiner J, Clausen T, Abel NH (1827). "Aufgaben und Lehrsätze, erstere aufzulösen, letztere zu beweisen" [Problems and propositions, the former to solve, the latter to prove]. Journal für die reine und angewandte Mathematik 2: 286–287.
 ^ Nicolas Bourbaki (1970). Algèbre. Springer, I.2.
 ^ Chapter 1, Elementary Linear Algebra, 8E, Howard Anton
 ^ Strang, Gilbert (1988), Linear Algebra and Its Applications (3rd ed.), Brooks/Cole, Chapter 5.
 ^ E. Hille, R. S. Phillips: Functional Analysis and Semi-Groups. American Mathematical Society, 1975.
 ^ ^{a} ^{b} N. Bourbaki, Elements of Mathematics, Theory of Sets, Springer-Verlag, 2004, III.§3.5.
 ^ Nicolas Bourbaki (1970). Algèbre. Springer, §III.2 No. 9: "L'unique monôme de degré 0 est l'élément unité de <math>A[(X_i)_{i\in I}]</math>; on l'identifie souvent à l'élément unité 1 de <math>A</math>" (The unique monomial of degree 0 is the unit element of <math>A[(X_i)_{i\in I}]</math>; it is often identified with the unit element 1 of <math>A</math>).
 ^ Nicolas Bourbaki (1970). Algèbre. Springer, §IV.1 No. 3.
 ^ "Some textbooks leave the quantity 0^{0} undefined, because the functions x^{0} and 0^{x} have different limiting values when x decreases to 0. But this is a mistake. We must define x^{0} = 1, for all x, if the binomial theorem is to be valid when x = 0, y = 0, and/or x = −y. The binomial theorem is too important to be arbitrarily restricted! By contrast, the function 0^{x} is quite unimportant". Ronald Graham, Donald Knuth, and Oren Patashnik (1989-01-05). "Binomial coefficients". Concrete Mathematics (1st ed.). Addison Wesley Longman Publishing Co. p. 162. ISBN 0201142368.
 ^ Malik, S. C.; Savita Arora (1992). Mathematical Analysis. New York: Wiley. p. 223. ISBN 9788122403237. "In general the limit of φ(x)/ψ(x) when x = a in case the limits of both the functions exist is equal to the limit of the numerator divided by the denominator. But what happens when both limits are zero? The division (0/0) then becomes meaningless. A case like this is known as an indeterminate form. Other such forms are ∞/∞, 0 × ∞, ∞ − ∞, 0^{0}, 1^{∞} and ∞^{0}."
 ^ L. J. Paige (March 1954). "A note on indeterminate forms". American Mathematical Monthly 61 (3): 189–190. JSTOR 2307224. doi:10.2307/2307224.
 ^ sci.math FAQ: What is 0^0?
 ^ Rotando, Louis M.; Korn, Henry (1977). "The Indeterminate Form 0^{0}". Mathematics Magazine (Mathematical Association of America) 50 (1): 41–42. JSTOR 2689754. doi:10.2307/2689754.
 ^ Lipkin, Leonard J. (2003). "On the Indeterminate Form 0^{0}". The College Mathematics Journal (Mathematical Association of America) 34 (1): 55–56. JSTOR 3595845. doi:10.2307/3595845.
 ^ "Since ln(0) does not exist, 0^{z} is undefined. For Re(z) > 0, we define it arbitrarily as 0." George F. Carrier, Max Krook and Carl E. Pearson, Functions of a Complex Variable: Theory and Technique, 2005, p. 15.
 ^ "For z = 0, w ≠ 0, we define 0^{w} = 0, while 0^{0} is not defined." Mario Gonzalez, Classical Complex Analysis, Chapman & Hall, 1991, p. 56.
 ^ "... Let's start at x = 0. Here x^{x} is undefined." Mark D. Meyerson, The x^{x} Spindle, Mathematics Magazine 69, no. 3 (June 1996), 198–206.
 ^ Augustin-Louis Cauchy, Cours d'Analyse de l'École Royale Polytechnique (1821). In his Oeuvres Complètes, series 2, volume 3.
 ^ Guillaume Libri, Note sur les valeurs de la fonction 0^{0^{x}}, Journal für die reine und angewandte Mathematik 6 (1830), 67–72.
 ^ Guillaume Libri, Mémoire sur les fonctions discontinues, Journal für die reine und angewandte Mathematik 10 (1833), 303–316.
 ^ A. F. Möbius (1834). "Beweis der Gleichung 0^{0} = 1, nach J. F. Pfaff" [Proof of the equation 0^{0} = 1, according to J. F. Pfaff]. Journal für die reine und angewandte Mathematik 12: 134–136.
 ^ ^{a} ^{b} Donald E. Knuth, Two notes on notation, Amer. Math. Monthly 99, no. 5 (May 1992), 403–422 (arXiv:math/9205211 [math.HO]).
 ^ Examples include Edwards and Penny (1994). Calculus, 4th ed., Prentice-Hall, p. 466, and Keedy, Bittinger, and Smith (1982). Algebra Two. Addison-Wesley, p. 32.
 ^ Donald C. Benson, The Moment of Proof: Mathematical Epiphanies. New York: Oxford University Press, 1999. ISBN 9780195117219.
 ^ Handbook of Floating-Point Arithmetic. Birkhäuser Boston. 2009. p. 216. ISBN 9780817647049.
 ^ John Benito (April 2003). "Rationale for International Standard—Programming Languages—C" (PDF). Revision 5.10. p. 182.
 ^ "Math (Java Platform SE 8) pow". Oracle.
 ^ ".NET Framework Class Library Math.Pow Method". Microsoft.
 ^ "Sage worksheet calculating x^0". Jason Grout.
 ^ "Wolfram Alpha calculates b^0". Wolfram Alpha LLC, accessed April 25, 2015.
 ^ "Wolfram Alpha calculates 0^x". Wolfram Alpha LLC, accessed April 25, 2015.
 ^ "Wolfram Alpha calculates 0^0". Wolfram Alpha LLC, accessed April 25, 2015.
 ^ N. Bourbaki, Topologie générale, V.4.2.
 ^ Gordon, D. M. (1998). "A Survey of Fast Exponentiation Methods". Journal of Algorithms 27: 129–146. doi:10.1006/jagm.1997.0913.
 ^ ^{a} ^{b} O'Connor, John J.; Robertson, Edmund F., "Etymology of some common mathematical terms", MacTutor History of Mathematics archive, University of St Andrews.
 ^ For further analysis see The Sand Reckoner.
 ^ O'Connor, John J.; Robertson, Edmund F., "Abu'l Hasan ibn Ali al Qalasadi", MacTutor History of Mathematics archive, University of St Andrews.
 ^ Cajori, Florian (2007). A History of Mathematical Notations, Vol. I. Cosimo Classics. p. 344. ISBN 1602066841.
 ^ Quinion, Michael. "Zenzizenzizenzic – the eighth power of a number". World Wide Words. Retrieved 2010-03-19.
 ^ This definition of "involution" appears in the OED second edition, 1989, and Merriam-Webster online dictionary [1]. The most recent usage in this sense cited by the OED is from 1806.
External links
 sci.math FAQ: What is 0^{0}?
 Introducing 0th power at PlanetMath.org.
 Laws of Exponents with derivation and examples
 What does 0^0 (zero to the zeroth power) equal? on AskAMathematician.com