### Math Teacher At Play: e-to-the-i-theta

Somebody’s preparing a lecture or something and mentions the angle-sum laws. There are two of ’em sharing the office with me, teaching the same class (different from mine, but one I’ve taught many times). Says he probably won’t get to it tonight.

Me:
“I could never remember the doggone things until I found out about the “cis” function… e-to-the-i-theta equals cos-theta plus i-sine-theta. I always make it a point to show it to the students this way too… if you can just take this bit on faith, you can remember what’s what forever. After a while it even begins to make sense… cosines are x-coördinates (so we should think of “real parts”) and sines are y’s (and so, “imaginary parts”)… then things multiply out accordingly.”

And the *other* guy seemed to know pretty well what I was talking about. But the guy who brought it up, not. But he’s in too much of a hurry to sit still for any actual explanation, alas (not that I blame him). So I gave up and prepared my *own* stuff or something and did my lecture and all that. But had it on my mind.

So, first, here’s the part I’ve known since, I forget, early grad school days.

Prerequisite: A little trig and a little faith. (Actually, you could take it *all* on faith and still have the best way to memorize the angle-sum laws… but of course there’s no *point* in memorizing the angle-sum laws if you’re not already studying trig.) On faith, you should believe that there’s a Complex Number Field “containing” the Reals and a number, i, satisfying i^2 = -1. This isn’t much of a leap… anyhow, students will have heard rumors about this situation and may even have done a few exercises.

The *leap* of faith comes with the proclamation
$e^{i\theta} = \cos(\theta) + i\sin(\theta)\,.$
This is admittedly something of a whopper. To *prove* this admittedly-weird equation one requires some Analysis (that’s “Calculus” to you, I suppose). The only proof I’ve worked out in detail involves so-called “infinite series”… oh, what the hell.

[
$e^t = \sum_{k=0}^\infty {{t^k}\over{k!}}$
$\sin(t)= \sum_{k=0}^\infty{{(-1)^k t^{2k+1}}\over{(2k+1)!}}$
$\cos(t) = \sum_{k=0}^\infty{{(-1)^k t^{2k}}\over{(2k)!}}\,;$
Shove in i-times-theta for “t” and turn the crank; equate real and imaginary parts.
]
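(If anybody would rather let a machine turn the crank: here’s a quick sketch in Python — my own illustration, not anything from the lecture — that sums the first several terms of the exponential series at i-times-theta and compares against cos-theta plus i-sine-theta.)

```python
import cmath
import math

def exp_series(t, terms=30):
    """Partial sum of e^t = sum over k of t^k / k! -- works for complex t."""
    total = 0.0 + 0.0j
    term = 1.0 + 0.0j          # the k = 0 term: t^0 / 0! = 1
    for k in range(terms):
        total += term
        term *= t / (k + 1)    # turn t^k/k! into t^(k+1)/(k+1)!
    return total

theta = 0.7
lhs = exp_series(1j * theta)                   # e^(i*theta) via the series
rhs = math.cos(theta) + 1j * math.sin(theta)   # cis(theta)
print(abs(lhs - rhs) < 1e-12)                  # agree to machine precision
```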

The function
$\cos(\theta) + i\sin(\theta)$
is sometimes called “cis($\theta$)”, by the way. So the theorem I’ve outlined—the one I ask my students to “take on faith” if they want to learn my favorite trig-mnemonic—can be stated as $e^{i\theta}$ = cis($\theta$). OK. Everybody’s willing to suspend disbelief thus far, right? Because, look. The “angle sum” law for the cis function goes like this.

$\cos(\phi + \psi) + i\sin(\phi + \psi)=$
$e^{i(\phi + \psi)} =$
$e^{i\phi}e^{i\psi}=$
$[\cos(\phi)+i\sin(\phi)][\cos(\psi)+i\sin(\psi)]=$
$\cos(\phi)\cos(\psi)-\sin(\phi)\sin(\psi) + i[\cos(\phi)\sin(\psi) + \sin(\phi)\cos(\psi)]\,.$

Equate real and imaginary parts for the results. To wit:
$\cos(\phi + \psi) = \cos(\phi)\cos(\psi)-\sin(\phi)\sin(\psi)$
and
$\sin(\phi + \psi) = \cos(\phi)\sin(\psi) + \sin(\phi)\cos(\psi)\,,$
the “angle sum” laws.
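(And if anybody wants a machine to vouch for the bookkeeping… here’s a few lines of Python — my own sketch, not part of any lecture — checking both laws against the cis-multiplication at a grid of angles.)

```python
import cmath
import math

def check_angle_sum(phi, psi, tol=1e-12):
    """Verify both angle-sum laws via e^(i*phi) * e^(i*psi) = e^(i*(phi+psi))."""
    product = cmath.exp(1j * phi) * cmath.exp(1j * psi)
    cos_ok = abs(product.real - math.cos(phi + psi)) < tol  # real parts match
    sin_ok = abs(product.imag - math.sin(phi + psi)) < tol  # imaginary parts match
    return cos_ok and sin_ok

# try a hundred angle pairs; every one should come back True
print(all(check_angle_sum(0.1 * a, 0.3 * b)
          for a in range(10) for b in range(10)))
```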

Amazing isn’t it.

(Summary: when you know “cis” and how to work with exponents [“*add* exponents to *multiply* exponentials-with-matching-bases”]… and, oh yeah, how to use i^2 = -1 [to multiply “complex numbers”]… you get the [matchings and {s, i, g, n}-signs for the] trig functions “for free”. And you really *oughta* know those things: this isn’t all they’re good for [by a long shot]. )

Now, that’s essentially the lecture I’ve given many times: the part I’ve known for years. But I’ve dusted off some ODE books lately and think I have a better idea than I ever did about what’s “really” going on.

Bump up the prerequisites. To get much of anything out of what follows, the reader should know a little Calculus… derivatives for exponentials and trig-functions… and be willing to believe an existence-and-uniqueness theorem from ODE (“ordinary differential equations”). I plan to get *around* the “infinite series” argument… or maybe just to *hide* it…

Let’s imagine that we *know* about e^x for Real x and want to investigate e^z for Complex z. Write z = x + iy.

Now e^(x+iy) = e^x * e^(iy), of course. And we already understand e^x by assumption. So we can focus in on the function

f(t) = e^(it).

Differentiate f twice: f’(t) = i·e^(it), so f”(t) = i^2·e^(it) = -e^(it).
So f is a solution to the ODE f” + f = 0.
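(A quick sanity check — mine, in Python, and no part of the argument: cmath believes f(t) = e^(it) satisfies f” + f = 0, at least up to a finite-difference approximation of the second derivative.)

```python
import cmath

def f(t):
    return cmath.exp(1j * t)

def second_derivative(g, t, h=1e-4):
    """Central finite-difference approximation of g''(t)."""
    return (g(t + h) - 2 * g(t) + g(t - h)) / h**2

t = 1.3
residual = second_derivative(f, t) + f(t)   # should be ~0 if f'' + f = 0
print(abs(residual) < 1e-6)                 # zero up to discretization error
```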

But one already knows (or readily checks) that sin(t) and cos(t) are *also* solutions to the same ODE. Moreover (here’s the “high theory”), since ours is a *second-order* ODE, these *two* (“linearly independent”) functions “span the solution space”: *every* solution can be written as a sum-of-scalar-multiples of these two.

Assuming this is true, there exist constants K_1 and K_2 satisfying
$e^{it} = K_1\cos(t) + K_2\sin(t)\,.$

Putting t = 0 shows that K_1 = 1 (recall that cos(0)=1 and sin(0)=0); putting t = \pi then shows that $e^{\pi i} = -K_1 = -1$ (since cos(\pi) = -1 and sin(\pi) = 0).

Now consider $e^{i\pi/2}$. Our sum-of-scalar-multiples equation gives us $e^{i\pi/2} = K_2$
but we can also calculate $(e^{i\pi/2})^2 = e^{i\pi} = -1$ (by our previous paragraph). So K_2 is a square-root-of-minus-one: $\pm i$.

OK. It’s late. Getting rid of the \pm (plus-or-minus) is an exercise. The point, if I still have a reader, is clear, I hope: assuming “good behavior” for Complex solutions to (second-order linear) Differential Equations, we’ve *bypassed* the series (which in some sense are just a complicated expression of the facts that (e^x)’ - (e^x) = 0, (sin(x))” + sin(x) = 0, and (cos(x))” + cos(x) = 0 [along with, now that I think of it, the values of e^0, cos(0), and sin(0)])… to obtain the e-to-the-i-theta-equals-cis theorem.
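(For whatever it’s worth, a machine will happily confirm the three special values that pin down K_1 and K_2 — and even settle the \pm exercise. My own Python sketch, again.)

```python
import cmath
import math

# The three special values used to pin down K_1 and K_2 in
# e^(it) = K_1*cos(t) + K_2*sin(t):
at_zero = cmath.exp(0j)                  # t = 0:    gives K_1 = 1
at_pi = cmath.exp(1j * math.pi)          # t = pi:   gives -K_1 = -1 (up to roundoff)
at_half = cmath.exp(1j * math.pi / 2)    # t = pi/2: gives K_2, a square root of -1

print(at_zero, at_pi, at_half)

# The "exercise" (which sign?): for small positive t, e^(it) has
# positive imaginary part, matching +sin(t) -- so K_2 = +i, not -i.
print(cmath.exp(0.1j).imag > 0)
```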

If I’d’ve known you could actually *do* stuff with ODE, maybe I’d’ve learned about it when I was a sophomore. Or the one time I taught it…

1. I like the ODE argument a lot; I have an awesome one to offer in return. I got it from an early chapter of Tristan Needham’s utterly fantastic book Visual Complex Analysis. (Maybe for the sake of my time I should just refer you there and not take the time to write it out, but it’s so delicious I can’t resist.) It is not rigorous but it is highly intuitively illuminating.

All you have to believe to begin with is that, if z is seen as a vector in the complex plane, then iz is a ccw right angle from z and at the same magnitude; and that a suitable set of facts from calculus are going to generalize without change to the complex case.

Consider the function z=e^it of a real variable t. Interpret z as a position vector and t as time. Clearly for t=0 we have z=e^0=1.

Find the velocity: z’ = i*e^it = iz.

Interpret this physically. z is a vector-valued function of time t whose velocity vector is at right angles ccw to itself and of the same magnitude.

ALREADY I know that because z’s derivative is orthogonal to z, its path will be along a circle centered at the origin. (Again, thinking physically here. z’s velocity is orthogonal to z, therefore z is not moving toward or away from the origin, therefore z is at a constant distance from the origin.) I know it’s the unit circle because of the initial value at t=0.

This tells me z’s magnitude is 1. Thus its velocity’s magnitude is also 1, and it is traveling ccw around the unit circle at a constant speed of 1 (unit of distance per unit of t). I.e. 1 radian per 1 unit of t.

This is the whole story. Just use this information to calculate z’s exact (real and imaginary) coordinates to find z=cos t + i*sin t.

Is that gorgeous or what?
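You can even watch the picture happen numerically — here’s a little Euler-method sketch of my own (Python; nothing from Needham’s book) that pushes z along its own rotated-by-i velocity and watches it stay on the unit circle and land at cos t + i sin t:

```python
import cmath
import math

def integrate(t_final, steps=200000):
    """Euler's method for the initial-value problem z' = i*z, z(0) = 1."""
    dt = t_final / steps
    z = 1.0 + 0.0j
    for _ in range(steps):
        z += 1j * z * dt   # velocity is i*z: perpendicular to z, same magnitude
    return z

t = 2.0
z = integrate(t)
expected = math.cos(t) + 1j * math.sin(t)
print(abs(abs(z) - 1) < 1e-4)   # never left the unit circle (up to step error)
print(abs(z - expected) < 1e-4) # landed at cis(t)
```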

2. gorgeous indeed. somewhere along the line
i’d satisfied myself of the tangents-at-right-angles-
-producing-circles thing for r*e^(it).
but never thought of pushing it any further.

i’ll get a look at the book.
recently i was in the math library
at big state u; haven’t visited in years.
blown away as could have been predicted.
lots of mind-blowing new stuff.
in grad school i’d spend probably
an hour a week just browsing stacks…

ccw?

3. w.p. handel

counter-clockwise.
