## Archive for January, 2009

Somebody sez Brutus is absent today for good reason & she’s been asked to take the notes. Which is a bunch of exercises. And Brutus reads the blog. So here goes, quick and dirty.

**1.** The order of transformations matters. Demonstrate this by graphing the “original” graph *y = |x|*, then *(i)* Shift up 2, then reflect in the *x* axis; and *(ii)* Reflect in *x* axis, then shift up 2.
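A quick numeric check (just a sketch in Python, not part of the exercise set) makes the point:

```python
# Order of transformations matters: start from y = |x|.

def shift_then_reflect(x):
    # (i) shift up 2, then reflect in the x-axis: y = -(|x| + 2)
    return -(abs(x) + 2)

def reflect_then_shift(x):
    # (ii) reflect in the x-axis, then shift up 2: y = -|x| + 2
    return -abs(x) + 2

# At x = 0 the two recipes already disagree:
print(shift_then_reflect(0))  # -2
print(reflect_then_shift(0))  #  2
```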

**2.** Write a definition for the piecewise function on the blackboard. Sorry. Not ready to try to draw it here. Piecewise-linear if *that’s* any help. (Part of the point here is that *both* directions — graphics to algebra *and* algebra to graphics — make good exercises. We had the other one on the quiz.)

**3.** Give (exactly — e.g. 32/113, not .3274) both co-ordinates for the vertex of [the quadratic given in class].

**4.** The Demand function for a certain product is *p = 100 − 0.2x*. *(a)* Determine the Revenue function. *(b)* Find the maximum revenue. *(c)* What are the *quantity* and *price* that give this revenue?
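Exercise 4 can be spot-checked numerically (a sketch; the exercise itself asks for exact work):

```python
# Demand: p = 100 - 0.2x, so Revenue R(x) = x * p = 100x - 0.2x^2.

def revenue(x):
    return x * (100 - 0.2 * x)

# R is quadratic with A = -0.2, B = 100; its vertex is at x = -B/(2A).
A, B = -0.2, 100
x_star = -B / (2 * A)          # quantity giving maximum revenue
p_star = 100 - 0.2 * x_star    # corresponding price
print(x_star, p_star, revenue(x_star))
```

So the maximum revenue of 12500 occurs at quantity 250 and price 50.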

Also covered (though not here): anything from the first 2 quizzes (domain & range; increasing & decreasing; intercepts; maxes and mins [nonquadratic]; symmetries …)

OK. The snow day probably means postponing the exam *again*. Meanwhile, here are some remarks while I’m thinking about *writing* the doggone thing.

All of our work thus far takes place in (various subsets of) **R** (the **Real Numbers**) or **R**² (the *xy*-plane). We are particularly concerned with *real-valued functions of a real variable*; typically these are given in the form *y = f(x)*.

The big idea of the course so far is pretty clearly **Transformations**: when the right-hand side of our typical equation is replaced by *f(x) + K* or *A f(x)*, one has a *vertical* transformation (a **translation**—which for some reason we’ve been calling a **shift**—or a **scaling** [*i.e.*, a **stretch** or a **compression**]); the corresponding Graphical Transformations can conveniently be expressed as ⟨x, y + K⟩ and ⟨x, Ay⟩—“add *K* to each 2nd co-ordinate” and “multiply each 2nd co-ordinate by *A*”. The *horizontal* transformations associated with *y = f(x − H)* and *y = f(x/W)*, expressed in the same notation, are then ⟨x + H, y⟩ and ⟨Wx, y⟩—note here that the number *subtracted* from the *x* variable in the (new) *equation* is actually *added* to the *x* co-ordinate of each ordered pair in “shifting” the (old) graph of *f* (and, more or less of course, a *division* “inside the parens” in the functional equation produces a *multiplication* of first co-ordinates by the value in question [here called *W* for “wavelength”, by the way … with some loss of accuracy …] in the transformation).
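The four rules just described can be spot-checked numerically; here is a sketch in Python, using *f(x) = x²* as a stand-in parent function (the constants are arbitrary samples):

```python
# Check that the "ordered pair" transformations track the corresponding equations.

K, A_, H, W = 3, 2, 1, 4  # arbitrary sample constants

f = lambda x: x ** 2
graph = [(x, f(x)) for x in range(-5, 6)]  # sample points of y = f(x)

# vertical shift: y = f(x) + K  <->  <x, y + K>
assert all(y2 == f(x2) + K for (x2, y2) in [(x, y + K) for (x, y) in graph])
# vertical scaling: y = A f(x)  <->  <x, A y>
assert all(y2 == A_ * f(x2) for (x2, y2) in [(x, A_ * y) for (x, y) in graph])
# horizontal shift: y = f(x - H)  <->  <x + H, y>
assert all(y2 == f(x2 - H) for (x2, y2) in [(x + H, y) for (x, y) in graph])
# horizontal scaling: y = f(x / W)  <->  <W x, y>
assert all(y2 == f(x2 / W) for (x2, y2) in [(W * x, y) for (x, y) in graph])
print("all four transformation rules check out")
```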

Throw in the **reflections** ⟨x, −y⟩ and ⟨−x, y⟩, and you’ve got what I hope is a pretty good summary of the theory as thus far presented. All of this theory can now be brought to bear on an equation like *y = −2|x − 1| + 3*: beginning with a graph of its “parent function” *y = |x|*, we can understand this as *y = −2f(x − 1) + 3* (with *f(x) = |x|*) and go on to analyze the corresponding transformation as a reflection in the *x* axis, followed by a vertical “stretch” by a factor of 2, followed by a horizontal shift (to the *right*) by 1 unit and a vertical shift up 3 units (that is, ⟨x + 1, −2y + 3⟩; I’ll remark here that even though the angle-bracket notation isn’t an actual course *requirement*, I’d feel pretty helpless saying most of this *without* it …).
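The composite transformation can be spot-checked the same way; a sketch, assuming the parent function is *f(x) = |x|* (the actual board example isn’t reproduced here):

```python
# Reflect in the x-axis, stretch by 2, shift right 1, shift up 3 gives
# y = -2 f(x - 1) + 3, i.e. the point transformation (x, y) -> (x + 1, -2y + 3),
# assuming parent function f(x) = |x|.

f = lambda x: abs(x)
g = lambda x: -2 * f(x - 1) + 3  # the transformed equation

for x in range(-5, 6):
    y = f(x)                     # (x, y) on the parent graph
    X, Y = x + 1, -2 * y + 3     # transformed point
    assert Y == g(X)             # it lands on the new graph
print("transformed points all satisfy y = -2f(x-1) + 3")
```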

As to the *order* in which the “suboperations” forming this transformation are performed, the hard issues are essentially ignored by our text. And, for right now, by me. I’m gettin’ out in the snow to play. As soon as I debug all this TeX-slash-HTML …

I never learned the doggone thing until I was the teacher and had to, for one thing. I was trying to cop some math-geek attitude (“Never *memorize* what can be *understood* instead!”—it turns out this is sort of a damfool commitment). I knew I could *derive* it (by Completing The Square, of course) and that was by golly good enough for me—how often was I going to need to solve a quadratic equation, after all (many a thousand times, of course … but who knew?)?

Of course I *knew* I could derive it because I’d *practice* it from time to time (even the most exercise-phobic math major must *occasionally* solve a quadratic equation!).

Well, this week I got to practice some more, with live audiences. Had a great time of course. This is some of the world’s best material.

I’ve never *studied* the history in a systematic way (and don’t intend to now … but I’ll probably look at a few references along the way so I don’t make *too* much of a fool of myself … I try to keep things like dates pretty vague in lectures …).

Certain Babylonian texts, then, dating from about 1700 BCE, give *procedures* for finding (what we would now call) the roots of quadratic equations. But it wasn’t until the European Renaissance—the “rebirth of learning” after the so-called Dark Ages—that Algebra had its *first* flowering and it became possible to express such procedures as “formulas”. One crucial step along the way seems to have been learning to treat (the now-familiar) *negative numbers* on the same footing as positive ones: this eliminates the need for certain case-by-case breakdowns (as I was remarking the other day).

Anyhow, once *variables* and other enormous improvements in the notations were introduced, it became possible to write out the Quadratic Formula (QF). And to a certain kind of person, that’ll be all it takes: give ’em a Quadratic Formula and some free time, and the next thing you know, they start asking questions like “What about a *cubic* formula?”. And so, with one heck of a lot of hard work by some really talented guys (mostly all guys doing math back then, I’m afraid … no gender bias intended) … they found it. And, dammit, it’s too unwieldy to actually set down as a single formula. The procedure is spelled out in a sidebar (“Historical Feature”) in the text. The general *fourth degree* equation was solved not too much later … and there the situation stayed for a few hundred years. Finally, in the early 19th Century, with the birth of Modern (“Abstract”) Algebra, it became possible to prove that *there is no* “Quintic Formula”—no procedure involving only roots, powers, multiplications, and additions (“algebraic” operations) that solves every polynomial equation of degree five.

Returning to QF. It’s worth remarking that we don’t need the “complete the square” technique to *prove* it. Once we have it in front of us, we can simply “plug in” the whole shebang *x = (−B ± √(B² − 4AC))/(2A)* into *Ax² + Bx + C* and perform a certain brute-force computation (and darn good exercise) … out pops zero. But this procedure gives no insight into where QF “comes from” (anyway, not immediately, not to me … though I *can* at least imagine working through the computation, backwards maybe, *looking* for some such insight [“now *where* does the “4AC” come from again?”]). It’s *also* worth remarking that, when *f(x) = Ax² + Bx + C* has *real* roots, we can literally *see* (on a graph) that the line of symmetry runs halfway between (the vertical lines) *x = (−B + √(B² − 4AC))/(2A)* and *x = (−B − √(B² − 4AC))/(2A)*; this accounts for the fact (also derived by me and the text in two *other* ways) that the *x* co-ordinate of the vertex of *f* is *−B/(2A)*.
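The brute-force check is easy to sketch for a sample quadratic with real roots:

```python
# Plug the QF roots back into Ax^2 + Bx + C: out pops zero (up to rounding).
import math

A, B, C = 2.0, -3.0, -5.0           # sample coefficients with B^2 - 4AC > 0
disc = B * B - 4 * A * C
r1 = (-B + math.sqrt(disc)) / (2 * A)
r2 = (-B - math.sqrt(disc)) / (2 * A)

f = lambda x: A * x * x + B * x + C
assert abs(f(r1)) < 1e-9 and abs(f(r2)) < 1e-9
# ... and the axis of symmetry runs halfway between the roots:
assert abs((r1 + r2) / 2 - (-B / (2 * A))) < 1e-9
print(r1, r2, (r1 + r2) / 2)  # 2.5 -1.0 0.75
```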

And the *rest* of QF also has its own story to tell. The most-commonly-used properties of the **discriminant** *B² − 4AC* are spelled out in the text of course; I won’t rehash them here. Except to mention that the case of a *negative* discriminant points the way to the theory of Complex Numbers. And it was learning to take *these* seriously (*i.e.*, to quote myself, “learning to treat them on the same footing” as the [so-called] Real Numbers [this eliminates the need for certain case-by-case breakdowns …]) that made it possible to state the Fundamental Theorem of Algebra (“every polynomial factors”). I’ll have much more to say about that.

Oh. One more thing. It has the scansion of “Pop Goes The Weasel”.

Here’s a little eight-pager I like to call “Counting Heads” (PDF): deriving the formula for expanding the *n*th power of a binomial, from first principles, and applying it to “coin toss” problems. (Not to be confused with D. Marusek’s outstanding novel.)
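The coin-toss connection in miniature (a sketch, not the eight-pager): the *k*th coefficient in the expansion of *(x + y)ⁿ* counts the ways to get *k* heads in *n* tosses.

```python
# P(k heads in n fair tosses) = C(n, k) / 2^n,
# since C(n, k) is the k-th binomial coefficient.
from math import comb

n = 4
probs = [comb(n, k) / 2 ** n for k in range(n + 1)]
print(probs)  # [0.0625, 0.25, 0.375, 0.25, 0.0625]

assert sum(probs) == 1.0
assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n  # row sums to 2^n
```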

The **transformations** section of the text begins, to my predictable chagrin, with a graphing-calculator “Exploration”: adding (or subtracting) a constant value at the end of a function (in the Function Editor [*i.e.*, the “Y=” screen] of the grapher) produces the by-now familiar (to any student in regular attendance) vertical shift. “We are led to the following conclusion:

If a real number *k* is added to the right side of a function *y = f(x)*, the graph of the new function *y = f(x) + k* is the graph of *f* shifted vertically up *k* units (if *k* > 0) or down *|k|* units (if *k* < 0).”

And this is about as clear as can be expected: this stuff is hard. But, doggone it, we’re adding *k* to the right side of an *equation*, aren’t we. And shouldn’t we *take a hint* from the doggone grapher and refer to Y₁ and Y₂ (the authors go on, in effect, to call Y₁ by the name *f*—but what’s the name for the *new* graph?)?

As for the cases on the sign of *k* (and the use of absolute value), well, it’s not the way I’d handle it but it’s not *obviously* worse. I usually say something to the effect that the graph of [the equation] *y = f(x) + k* is obtained by shifting the graph of *y = f(x)* “up” by *k* units *with the understanding* that “shifting up by a negative number” is interpreted—in a very familiar way—as shifting *down*. It can be helpful to mention, say, “*y = Mx + B*” here—the point being that we *don’t need to know* the sign of *B* for the formula to be valid. (Or the words: for the case of *y = f(x − h)* we *add h to each x co-ordinate* of the graph—*regardless* of the sign of *h*.)
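In code, the “one formula, any sign” point looks like this (a throwaway sketch):

```python
# "Shifting up by a negative number" just works, with a single formula.
f = lambda x: x ** 2
shift_up = lambda f, k: (lambda x: f(x) + k)

g = shift_up(f, -3)      # "up by -3" = down by 3; no separate case needed
assert g(2) == f(2) - 3  # the graph sits 3 units lower, regardless of sign talk
print(g(0), g(2))        # -3 1
```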

Again, it’s not *obvious* that treating the “up” and “down” cases explicitly *right there in the display* that (effectively) defines “vertical shifting” was a bad idea … but I believe it *was* a bad idea (sort of): anyway, by the time a student reaches the Conic Sections stuff (next quarter) some of the “formula” displays have become *much* more complicated than they have any good reason to be (and goodness knows, Conics are hard enough already). The sign issues have to be discussed *somewhere*; using inequalities and absolute values to do it is probably very much the right idea … what I’m after here is that the definitions *be definitions*. Which would mean, first of all, *identifying* ’em as such; and then, as concise as you can be while getting the job done.

It would appear that somebody made an editorial decision *not* to discuss the kind of “understanding”s I mentioned a moment ago; the results are disastrous. This is particularly true for students inclined to the so-called “rote memory” strategy (learn the formulas by heart before even *beginning* to work at “seeing the big picture”). Of course, most of us probably consider it something of a duty to try to talk such students *out* of relying heavily on this strategy … but this hardly seems like the right way to do it. And yes, there sure does seem to be quite a bit of resistance (anyway at the level just *below* 148) to the idea that, say, “dividing by a number is just multiplying by its reciprocal”—this comes across as “mumblemumble” to many a 102 student (for example).

I call it “teacher talk”: the student somehow *knows for sure* that this technical stuff you’re always so careful about saying *cannot possibly* have anything to do with their existing ideas about how to solve equations; they’ll move heaven and earth to try to find some list of “rules” they can memorize if only they can avoid using the word “reciprocal” (or, of course, what’s worse, “multiplicative inverse”). And if you’d only stop trying to pretend any of this *means* anything and just *tell them what to do*, the scales would probably fall from their eyes in a minute flat. Of course it’s damn-right a duty to try to disabuse *these* poor souls of their misapprehensions as to the nature of our art. And this duty doesn’t fall on me alone.

Overlooking the power of the symbols until we end up with different formulas for, say, ellipses with vertical and horizontal major axes feels to me like pandering to the worst instincts of our weakest students, and I keep having to *apologize* (“Well, if they’d done this right, it’d be much easier … well, look: forget it. Here’s what they want you to *do* …”).

So. Let’s try and make this very difficult task as easy as it can (reasonably) be. Thinking of, say, “stretching” and “compressing” graphs as two aspects of the *same* process (requiring only a *single* formula) is just a flat-out good idea: the notation is beginning to do some of our thinking for us. It’s not *quite* the Heart of the Matter—an awful lot of people *have* learned about Transformations and Conics (for example) with the kind of overly-detailed treatment of cases I’m complaining of here, after all. But for me it’s pretty close. And, believe it or not, I don’t necessarily *want* to do constant battle with textbooks.

Rolfe Schmitt is introducing binary numbers to his kids.

Turns out you can create PDF files with the Xerography device down the hall. Here at long last are 67 pages of *Numbers, Sets, and Logic* by yours truly. You couldn’t really print it all out and use it with your own classes—yet; it’s got scribblings by me from classroom use on some early pages and breaks off in the middle. For that matter, the drawings are exactly as ugly as my blackboard work. But I’ve wanted to put some version of this up on the web for years (without having to reformat everything by hand) and am thrilled to have suddenly broken through to here. I posted re-formatted versions of sections 1.1 and 1.2 in this very blog back in July.

The coffee-shop lingering function explained by John D. Cook.

Archimedes and π revisited by Mark Dominus.

Desperately seeking well-written topology papers.

MathTV.com plugged.

David Eppstein on drawing graphs in *Illustrator*.

I’d been groping for the right notation for Transformations of Graphs since the first day; I settled it over the weekend.

By ⟨x − 3, y⟩ (for instance) I will mean a certain Transformation of the *xy*-plane (at this point I tend to write “T: **R**² → **R**²” on the board; of course **R**² = **R** × **R**, but none of this “set-theoretical” language has made it into my lectures so far). To wit: ⟨x − 3, y⟩ := (x, y) ↦ (x − 3, y).

This definition obviously takes the “maps to” notation (*x ↦ f(x)*) for defining functions for granted—which I’ve sort of been doing all along without pinning myself down with anything as vulgar as a definition. The right-hand side of the latest equation, then … hold it. Does everybody know that the colon-equalsign combination (“:=”) means “equals by definition”? Well, it’s a pretty handy little trick, let me tell you. OK. Now. The RHS in our latest equation exploits a notation rarely seen in lower-division texts (alas): instead of the ungainly “Let *f* be the function defined by *f(x) = x²*”, we have the straightforward declaration *f := x ↦ x²* (“eff, by definition, is the function that maps ex to ex-squared”).

The more familiar *f(x)* notation gives a “formula” *not* for *f* itself but for *the value of f at x*. A lot of people would have you believe that this distinction doesn’t matter and in certain contexts such people must even be put up with. But it sure matters to *me*, here and now. Because once I know how to write definitions in the “maps to” style, I don’t need to mention any arbitrary old letter-of-the-alphabet like *f* when what I’m *really* talking about is “the squaring function” … and I can just go ahead and write down facts like (x ↦ ∛(x + 5))⁻¹ = (x ↦ x³ − 5): this is *calling things by their right names* (“The inverse of the function mapping *x* to the cube *root* of *x*–*plus*-five is the function mapping *x* to *x*–*cubed*, *minus* five”—you just can’t *write* that sentence in “*f(x)*” style … only something like “Let *f* be BLAHBLAH; then *f*-inverse is LALALA”—but what’s any of it really got to do with anything called “eff”?).
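“Calling things by their right names” translates directly into code (a sketch): functions as anonymous values, no arbitrary letter *f* required.

```python
# x |-> cube root of (x + 5), and its inverse x |-> x^3 - 5,
# each written down without naming any "f".
cube_root_shift = lambda x: (x + 5) ** (1 / 3)
inverse = lambda x: x ** 3 - 5

# round trips (for x + 5 >= 0, where the real cube root is unambiguous here):
for x in [0.0, 3.0, 22.0]:
    assert abs(inverse(cube_root_shift(x)) - x) < 1e-9
print(inverse(2.0))  # 3.0
```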

Readers already familiar with all of these ideas—or astonishingly quick on the uptake—might notice that, so far, it might appear that I don’t actually *need* the “maps to” notation for my purposes. After all (for example), one has *f* = {(x, x²) : x ∈ **R**} (recall that a function *is* a set of ordered pairs)—and the “ordered pair” version is *almost* as concise as the “mapping” language. But here’s the *real* payoff: the “arrow” notation carries over seamlessly when the domain is, say, **R**² (*ordered pairs* of numbers as opposed to *individual* real numbers)—and this is the application we actually wanted: (x, y) ↦ (−x, y) denotes the “reflect in the *y*-axis” transformation. Note that {((x, y), (−x, y)) : (x, y) ∈ **R**²} is harder to scan (anyway, so it seems to me); T(x, y) = (−x, y) also invokes that pesky “T” and anyhow *you* try getting students to believe that “(x, y) = (−x, y)” simply won’t do as a LHS.

So. Whenever we *say* “reflect in the *y*-axis”, we can *write* (x, y) ↦ (−x, y). And I’ve been saying so all along. What’s new here is that I’m proposing to call it ⟨−x, y⟩. This has the *drawback* that it “freezes” the variables *x* and *y*: wherever “angle brackets” are in effect, *x* and *y* *must* mean “the first and second co-ordinates of a certain ordered pair” (note that, by contrast, {(x, x²) : x ∈ **R**} = {(t, t²) : t ∈ **R**}; the variable names *here* can be changed without changing the actual set of ordered pairs itself).

And this “freezing” is indeed somewhat unfortunate. But I’m more than willing to pay that price, to have a quick-and-dirty way to spell “shift left by three”: ⟨x − 3, y⟩ is sure as heck gonna be a lot easier to calculate with.
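And “easier to calculate with” is almost literal; a sketch treating transformations of the plane as functions on ordered pairs, composable like anything else:

```python
# The angle-bracket idea in code: transformations of the xy-plane
# as functions on ordered pairs.
reflect_y  = lambda p: (-p[0], p[1])     # <-x, y> : reflect in the y-axis
shift_left = lambda p: (p[0] - 3, p[1])  # <x - 3, y> : shift left by three

compose = lambda s, t: (lambda p: s(t(p)))
both = compose(shift_left, reflect_y)    # reflect first, then shift

print(both((2, 5)))  # (-5, 5): (2, 5) -> (-2, 5) -> (-5, 5)
```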