## Archive for the ‘Math 148’ Category

### Last Post

I’ve started up new blogs at Math 148: Precalculus and Madness and Poverty.

Nothing would please me more than for some comments to bust out right about here. Nothing that could happen online, anyway. Thanks for your kind attention.

### One Must Imagine Vlorbik Happy

Exam II is in a couple days. Zeros of polynomials, graphs of rational functions, compositions and inverses.

(I have just named three topics, not four; here [shrink the window to the size of a column] are some remarks on “the serial comma” I made back in th’ XXth c.)

Of course my classes are ill-prepared for this material, for the usual reason (no time to do things right). So I intend to look at some stuff that didn’t fit… but really to “review” work we’ve done along the way.

At the end of last week—very likely my most productive ever as a blogger, by the way—I mentioned that by applying the Transformations from the early part of the course—shifts and scalings—to the simplest Rational Function worthy of the name—of course I refer to the reciprocal function $[x\mapsto {1\over x}]$—one would arrive at (what I called there) the Linear Fractional functions:
$\mu (x)= {{Ax + B}\over{Cx + D}}\,.$

I used $\mu$ (“mu”, a Greek “m”), by the way, to honor Ferdinand Möbius; when (the constants) A, B, C, and D are allowed to take Complex Number values, one has the so-called Möbius Transformations (on ${\Bbb C}$). But we’re considering only Real Number values for our constants here (and so $\mu:{\Bbb R}\rightarrow{\Bbb R}$ is a real-valued function of a real variable)… the set we’re studying is then
LF$:= \{ [x\mapsto {{Ax + B}\over{Cx + D}}] | A, B, C, D \in {\Bbb R} ; C \not= 0; AD-BC\not= 0 \}$.

The inequalities at the end of this code exclude the cases where C is zero (which would give linear functions; this is easy to see) and where AD-BC is zero (which gives constant functions since then the numerator is a constant multiple of the denominator; this requires a small calculation to see).
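A quick sanity check on both degenerate cases (in Python, which is of course nowhere in the 148 syllabus; the function names are mine, and the exact-rational arithmetic keeps the equalities honest):

```python
from fractions import Fraction as F

def mu(A, B, C, D):
    """The candidate linear fractional function [x |-> (Ax+B)/(Cx+D)]."""
    return lambda x: (A * x + B) / (C * x + D)

# Degenerate case 1: C = 0 gives a plain linear function, (A/D)x + B/D.
f = mu(F(3), F(6), F(0), F(2))            # (3x+6)/2 = (3/2)x + 3
assert all(f(F(x)) == F(3, 2) * x + F(3) for x in range(4))

# Degenerate case 2: AD - BC = 0 gives a constant; here A=2, B=4, C=1, D=2,
# so AD - BC = 4 - 4 = 0 and (2x+4)/(x+2) collapses to the constant 2.
g = mu(F(2), F(4), F(1), F(2))
assert all(g(F(x)) == F(2) for x in range(5))
```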

Now. I sure haven’t proved that we’ll get this set by applying shifts and scalings to the reciprocal function. And ideally, this would be an exercise. For the course I happen to be running three sections of at this time, that would be absurd, however, so the next best thing would be to have it in the lecture notes. Which, while perfectly do-able in principle, appears unlikely. So here it is in the blog.

The general shifting-and-scaling Transformation (on ${\Bbb R}^2$, the [so-called] xy-plane), expressed in the symbolism created for these notes, is $\langle Wx+H, Ay+K \rangle$; this abbreviates (in the more standard bracket-and-arrow notation) $[ (x, y) \mapsto (Wx + H, Ay + K) ]$, which itself is unofficial for 148—we’re speaking of the transformation that replaces the function $y_1 = f(x)$ with
$y_2 = Af({{x-H}\over W})+K\,.$
Probably no teacher of this material could now resist the temptation to point out what is presumably obvious to any actual reader of these notes: in the “f-notation”, one has a subtraction and a division where unguided “common sense” might lead the unwary to expect a multiplication and an addition. Anyhow, I can’t. Let’s go. The lecture I’m imagining would then begin.

Let $f(x) = {1\over x}$ denote the reciprocal function and “apply the generic shift-and-stretch”; one has
$T(x) = Af({{x-H}\over W})+K= A{1\over({{x-H}\over W})}+K\,.$
Replacing $1\over W$ with V (to make typing easier), one now has
$T(x) = {A\over{Vx -VH}}+K=$
${{A +K(Vx-VH)}\over{Vx - VH}}=$
${{[KV]x +[A -KVH]}\over {Vx - VH}}\,.$
With the “obvious” changes in the constants, it’s clear that we’ve arrived at the desired form (I mean ${{Ax + B}\over{Cx + D}}$; ideally it would be safe to let it go without saying that the “A”s of these formulae do not [necessarily] have the same value [but why take a chance?]). So that’s it.
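For the skeptical reader, the identity we just derived can be checked numerically (Python again, names mine; exact rational arithmetic via `fractions`):

```python
from fractions import Fraction as F

A, H, W, K = F(2), F(3), F(5), F(7)   # arbitrary nonzero choices of constants
V = 1 / W                              # the V = 1/W substitution from the text

def T(x):
    """A * f((x-H)/W) + K, with f the reciprocal function."""
    return A / ((x - H) / W) + K

def lf(x):
    """The claimed linear fractional form ([KV]x + [A-KVH]) / (Vx - VH)."""
    return ((K * V) * x + (A - K * V * H)) / (V * x - V * H)

for x in [F(1), F(2), F(4), F(10), F(-6)]:   # avoiding x = H = 3, the pole
    assert T(x) == lf(x)
```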

(Almost. We’re omitting the calculations concerning AD-BC from sheer bone-laziness [and a feeling that suchlike technicalities may drive students to sneak out of the room while my back is turned]. That C is nonzero is already implicit since the division by W in the beginning of our latest calculation already implies that V is nonzero [go ahead and leave if you feel you must].)
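(For the record, the skipped calculation is short. Reading the constants off the last display—call them A′ = KV, B′ = A − KVH, C′ = V, D′ = −VH, primes mine, to keep the two sets of constants apart—one has:)

```latex
A'D' - B'C' = (KV)(-VH) - (A - KVH)V
            = -KV^2H - AV + KV^2H
            = -AV \,,
```

which is nonzero precisely because A and V (= 1/W) are both nonzero.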

Part of the point here is that LF would seem to have been tailor-made for our consideration in 148. We went to a lot of trouble to develop the theory of Shifts and Scalings; here’s a subset of the Rational Functions (that we’re now trying to understand) obtained by applying these transformations to the simplest rational function of all (as usual, I “really” mean [what I’ve already called] the simplest one worthy of the name: the reciprocal). The result is an infinite collection of “next-simplest” cases. If everybody wasn’t in such an all-fired hurry (to “climb Calculus Mountain”, as an old sparring partner of mine was wont to say), one would more-or-less of course begin an exploration of, say, Graphing Rational Functions by looking in some detail at graphing these particular Rational Functions.

Having done so, the instructor would then be in a position to say things like, “Remember how the vertical asymptote of a Linear Fractional function at x = H [which came from shifting that of the reciprocal function] showed up in the ‘f-notation’ code as an xminusH? Because this is how you get a zero denominator in the rational function you’re examining? Well, this generalizes to rational functions generally; in f/g, if we can factor (the polynomial) g into linear factors, we can then ‘read off’ the vertical asymptotes for that function…”

(WordPress has just correctly typeset some “quotes ‘within’ quotes”. For every one of these, anyway in my copy, one pays the price of maybe about a dozen opening apostrophes being set wrongly. I’ve made some remarks about this punctuation disaster and will link ’em (see?), and omit this paragraph, as soon as I find ’em [and have the leisure, and the access].)

As an example of the advantages of considering the class LF carefully, consider the computations involved in calculating the inverse of a linear fractional function. For specificity, let $R(x) = {{5x+3}\over{7x+4}}$.

No, wait. Now that we’re here, let’s first remark in passing on the remarkable fact that the functions of LF are invertible (by which I mean “have inverse functions” as opposed to mere inverse relations [every relation has one of those]…)—and that one sees this (literally!) by considering the graph of the reciprocal and noticing that Shifts and Scalings preserve the property of being a one-to-one function. This means that a function that’s one-to-one (these are precisely the “invertible” functions as my students had bloodywell better know on Wednesday; the code $[x_1 \not= x_2] \Rightarrow [f(x_1)\not=f(x_2)]$ won’t there be called upon but everybody should be able to make some sense of it by now) before applying a Shift or Scaling (or a series of these; this follows easily) is still one-to-one after the transformation is applied.

One is at this point actually pointing at a graph of ${1\over x}$ (where I have at last, somewhat hypocritically, used the commonplace “call a function by the name of an algebraic expression” convention; I’m here actually imagining my own voice saying “one over ex” when the symbol $1\over x$ appears on your screen: the point is that I consider this convention to be best honored in the spoken part of our everlasting development of The Art [the “oral law”, if you will]), and a “generic” graph: “they all look like this” (sweeping one’s hands around along the curve of the graph)…”rising-jumping-rising (or, like one-over-ex itself, falling-jumping-falling)…there’ll never be a repeated y-value because of the way this horizontal asymptote here kind of prevents it…”

And the point I’m making here (if any, as it seems to me) is that this “literal seeing” I was referring to a moment ago is at the same time an appeal to the imagination: our audience is being asked to spin up a movie in the YouTube of their imaginations and “see” the Shift or the Stretch in question dynamically. This is why one pulls one’s hands apart to suggest “stretching”, or seems to “grab” an invisible graph and “shift” it (or what have you). When the visual imagination is well-developed (as it isn’t in me very well at all), one tends to prefer to study continuous phenomena as opposed to discrete ones in one’s mathematical work (I’m a “discrete” man myself [as is obvious if the two other claims of this sentence so far are taken as true])… but every user of advanced mathematics has to develop at least some skill working with both types of these phenomena.

So. With our visual “proof” that LF consists of invertible functions in hand, let’s invert one. $R(x) = {{5x+3}\over{7x+4}}$ has already been mentioned. Okay. Notice that there are “two copies of x” in this “formula”. This is not the case for the other functions whose inverses are calculated in the exercises from the current section of our text (so here again, LF is seen to be of some special interest).

Our technique—write x = f(y) (interchanging the “usual” roles for the variables) and solve for y—will then call for a new “trick”. (To elaborate on this. The other functions considered, $[x \mapsto (3x + 1)^5]$ for example, can be thought of as a sequence of “moves” made on an expression, starting with x [multiply by three; add one; power by 5]. The inverses can then be computed without making a mark on the page by considering the inverses of each move in the opposite order: root by 5; subtract one; divide by three; the inverse function we seek is $[x \mapsto {{\root5\of{x} - 1}\over3}]$. This technique is referred to as “shoes and socks” for what I hope is the obvious reason.)
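The “shoes and socks” computation can be checked mechanically; a sketch (Python, names mine; floating point, so “equal” here means “equal to within rounding”):

```python
def f(x):
    """Multiply by three; add one; power by 5."""
    return (3 * x + 1) ** 5

def f_inv(x):
    """Undo each move in the opposite order: root by 5; subtract one; divide by three."""
    return (x ** (1 / 5) - 1) / 3

for x in [0.0, 1.0, 2.5, 7.0]:          # nonnegative inputs keep x**(1/5) real
    assert abs(f_inv(f(x)) - x) < 1e-9
```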

Actually, it’s a pretty familiar trick, and I’ve never failed to present it at the board in any 148 equivalent up until now. Putting $x = {{5y+3}\over{7y+4}}$, one will first “cross-multiply” to get $x(7y+4) = 5y +3$, then “distribute the x” and “collect terms involving y” to arrive at $7xy - 5y = 3-4x$. At this point, it’s clear that we’ve found the right “trick”: by “factoring out” y and performing the obvious division, we’ve shown that $y = {{3-4x}\over{7x - 5}}$ (and should now replace “y” with “$f^{-1}(x)$“; the certain knowledge that trying to make sense of this step will be considered more confusing than enlightening by many beginners typically causes even me to adopt a “never mind why for now” attitude about this procedure… not that there’s anything wrong with that in principle… sometimes this is exactly the attitude to take… it just seems to be way overdone…), we’re done.
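And the claimed inverse really does invert; one can check it with exact rational arithmetic (Python, names mine):

```python
from fractions import Fraction as F

def R(x):
    return (5 * x + 3) / (7 * x + 4)

def R_inv(x):
    return (3 - 4 * x) / (7 * x - 5)

# Check both composites pointwise, avoiding the poles x = -4/7 and x = 5/7.
for x in [F(0), F(1), F(-1), F(2), F(100)]:
    assert R_inv(R(x)) == x
    assert R(R_inv(x)) == x
```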

Having calculated the inverse for this Linear Fractional function (and checked it if we know what’s good for us; I’ve used [half of] this kind of check as an exam problem many times), we can confidently tackle the generic one; one arrives at the remarkable fact that
$[x \mapsto {{Ax+B}\over{Cx+D}}]^{-1} = [x \mapsto {{Dx - B}\over{-Cx +A}}]\,.$

The most remarkable feature of this fact may be its striking resemblance to the inverse of a two-by-two invertible matrix; but one need not know about such calculations to see that the manipulations of the constants are easily memorized (“swap” the values in the upper left and lower right [the “main diagonal”] and change the sign of the other two). Students preparing for exams involving the prospect of some need to compute the inverse of a linear fractional function might then choose to memorize this fact in order to forgo the kind of calculation we demonstrated a few paragraphs ago. (There’ll be no such problem in my exams this quarter.)
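The swap-and-negate rule is easy to test on the running example (a Python sketch, names mine; `lf` is just the generic linear fractional form):

```python
from fractions import Fraction as F

def lf(A, B, C, D):
    """The linear fractional function [x |-> (Ax+B)/(Cx+D)]."""
    return lambda x: (A * x + B) / (C * x + D)

A, B, C, D = F(5), F(3), F(7), F(4)          # the running example R
R = lf(A, B, C, D)
R_inv = lf(D, -B, -C, A)                     # swap the main diagonal; negate the rest

for x in [F(0), F(1), F(2), F(-1), F(10)]:   # staying away from the poles
    assert R_inv(R(x)) == x
```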

One will of course wish to try this trick out: the inverse of $R(x) = {{5x+3}\over{7x+4}}$ is then $R^{-1}(x) = {{4x-3}\over{-7x+5}}$; some straightforward sign-manipulations show that this is indeed the same function we calculated already. It works.

But—and this will be last of all—what I find most exciting about the topic of Inverses of Rational Functions is that we can “compute” them visually by applying the graphing principles developed earlier in our course. For blogging purposes my graphing skills might as well be nonexistent, so this will be in outline. The graph of $[x \mapsto {{Ax+B}\over{Cx+D}}]$ has a vertical asymptote at $x = {{-D}\over C}$ and a horizontal asymptote at $y = {A\over C}$; also it has an x intercept at $({{-B}\over A}, 0)$ and a y intercept at $(0, {B\over D})$. (All of these facts about this graph can be worked out by any well-prepared student using the principles developed for graphing Rational Functions generally; what follows is then a [potentially surprising] application of these ideas). Interchanging the roles of x and y (and, what follows, of “horizontal” and “vertical”), and working the graphing process “backward” (a skill developed by working certain exercises not considered by me so far this quarter in class and probably never to be so considered), one can arrive at the inverse transformation. And this without invoking either the algebraic process (expand; collect like terms; factor) or the “formula” (that is itself developed in this way); one has in effect used the transformation theory to give a geometric proof of what might appear to be an algebraic fact.

I’ve got to prepare for actual classes now; have a nice week.

### And Into The Black

If somebody comes up to you out of the blue and says, okay, a Rational Function has the form $R(x) = {{f(x)}\over{g(x)}}$, where f and g are polynomial functions, why then, there are any number of things about this definition that you might want to know next. Hey Vlorbik (you might say for example), how about giving us an example for hecksake? Outstanding.

“Show me one that is; show me one that isn’t”—if there could be such a thing as training a subject in straight thinking, maybe this would then be drilled into students (like the economists’ excuses for power’s abuses): find the key examples! (And [so-called] “counterexamples”; entire [useful, fascinating] books have been devoted to these.)

But, then, what makes an example “key” (this line of investigation might continue)? And if I had to pick one place to look first for an answer, I might quickly settle on the “simplest” examples. In considering the definition of Rational Functions (RFs), the well-prepared mind soon finds itself being drawn to consideration of, not the simplest RF—that would be the constant at zero—but rather the simplest RF that’s not a polynomial (the point here is that since polynomials are used in the definition we’re considering here, we’re already assuming that some theory of polynomials is [at least partly] in place; we’re looking for something “new”).

So. That would be $[x\mapsto {1\over x}]$, the justly-famous reciprocal function. How so? (Thanks for asking.) Well, I’ve already ruled out the zero function as too simple; so the numerator (also known as f—part of my job is to help people become at least a little more comfortable about simultaneous consideration of two different points of view for one same phenomenon…) also can’t be the zero function. Let’s see. The next simplest thing would be f(x) = 1 (the constant at 1). What about downstairs? Well g oughtn’t to be also a constant (since then R itself would be a constant… which is a polynomial [of degree zero]… and so has already been dismissed from consideration); what’s the next simplest thing after that? Well, not degree zero… degree one, then. Who’s got degree one? Things with x to the first power, right? What’s the simplest thing with x to the first power in it? The question answers itself: x itself.

There it is. One has just stared the problem down: look straight at it until it tells you something. R(x) = 1/x sure enough must be the key example here: the simplest rational function that’s not a polynomial. The awesome simplicity of this gorgeous curve is even more striking (to my eyes) when considered as the graph of xy = 1 (rather than y = 1/x—fractions are always hard).

Before going on to start playing with the reciprocal function—by applying the Transformations considered in the first part of our course—let me mention here that the “curve” in question is a hyperbola. This seems not to be very well-known (by contrast, many even among the doomed—whose miserable lot it is to take “remedial” math classes fruitlessly because they’ve convinced themselves before even getting started that it will never make sense—consider it common knowledge that the graph of y = x^2 is a “parabola”); many a veteran of Math 150 will know something (for at least a few weeks) about hyperbolas of the considerably more complicated form ${{(x-h)^2}\over{a^2}} - {{(y-k)^2}\over{b^2}} = 1$ but with no inkling that the graph of the reciprocal function is also hyperbolic. This somewhat depressing state of affairs seems to be a result of not having asked and answered the “key example” question often enough when introducing Analytic Geometry.

Anyhow, now we hit it with the shifts-and-scalings from week one; the result is the (rich and strange) set of linear fractional transformations:
$\mu (x) = {{Ax + B}\over {Cx + D}}$
(“quotients of linear functions”; one insists here that AD – BC is nonzero [exercise: find out why]). There are, anyway, entire Chapters of books about these. Somebody’s probably done an entire course in ’em. Certainly somebody could. But not me. This is the kind of thing that sometimes makes me almost sorry I didn’t try harder to get back into the Pros.

### Bricks Without Straw

OK. Look. Last week’s rant was exhausting and I don’t want to make a career out of railing against this doggone book. But come on. How the bejabbers am I supposed to talk about inverse functions without a notation for the furshlugginer Identity Function? I mean, seriously.

Consider the set of Real-valued functions of a Real variable, then. Recall that yesterday we defined the composite of f with g by $f \circ g (x) = f(g(x))$. We (okay, I) failed to mention that the commonest ways to pronounce the lefthand side are “f circle g”, “f composed with g”, and “f composite g”. I’ll have said it in class, though, if only to urge everyone not to pronounce it “fog”… what’s obvious here—but is not obvious in my handwritten work—is that the Circle symbol isn’t the letter O.

Anyhow, those are indeed all common; you pretty much have to become comfortable with at least one of ’em to talk about this stuff at all. Assuming everyone’s done enough practice problems with composite functions to make sense of some calculations, then, we’ve got some tools in hand; let’s see what we can do with ’em.

Yesterday we looked briefly at functions I called a, m, and p—the “add one” function, the “multiply by three” function, and the “raise to the power five” function. At the risk of belaboring a point, I’ll mention right now that the “scarequotes” should not be taken to indicate that there’s anything in the world wrong with calling these functions by these names—indeed, I’d be glad to argue that these are in at least some ways better names than, say, a, m, and p.

Formally, of course, one has $a(x) = x+ 1$, $m(x) = 3x$, and $p(x) = x^5$ (as I put it yesterday), or
$a := [x\mapsto x+1]$
$m := [x\mapsto 3x]$
$p := [x\mapsto x^5]$
(as I only wish I had the nerve to try to pull off in 148 this quarter). The notations all by themselves don’t make the nature of these functions any easier to understand than their verbal descriptions… though for “messier” functions there will come a time when words fail and we have to write things down to keep track… what the notations do give us, though, is something to calculate with.

I also mentioned functions called $a^{-1}$ and $m^{-1}$ yesterday, but failed even to name ’em—”a-inverse” and “m-inverse”—never mind define ’em.

So here we go. Define the identity function, I, by I(x)=x. This function is just as essential to the theory of compositions as the number 0 is for the theory of addition (or the number 1 is for the theory of multiplication): an object that does nothing gives us a simplest possible case… to which every other case can then be compared. If you don’t understand this parable, how will you understand all parables?

Defining inverses is now a simple matter. Suppose f and g (real-valued functions of a real variable, as you will recall) satisfy $f\circ g = I$ and $g\circ f = I$. We then call g the inverse of f, and write $g = f^{-1}$. And that’s it; everything else is consequences.
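The definition is easy to illustrate in code (Python, which is mine, not the course’s; `compose` stands in for $\circ$, and checking pointwise on a few sample inputs is an illustration, not a proof of the function equation):

```python
def I(x):
    """The identity function: I(x) = x."""
    return x

def compose(f, g):
    """f o g : x |-> f(g(x))."""
    return lambda x: f(g(x))

def a(x):     return x + 1     # "add one"
def a_inv(x): return x - 1     # "a-inverse"

# a o a_inv = I and a_inv o a = I, checked pointwise:
for x in [-3.0, 0.0, 1.5, 42.0]:
    assert compose(a, a_inv)(x) == I(x)
    assert compose(a_inv, a)(x) == I(x)
```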

An equation of functions, $f\circ g = I$, say, means that the functions on its either side have the same domains and that for any “input” value from that domain, they both evaluate to the same “output”: $f\circ g (x) = I(x), \forall x\in D$, in our example. The symbol D here stands for the Domain in question (some subset of ${\Bbb R}$, of course), and “$\forall$“, as usual, means “for all”… hmmm. It seems to be a habit with me to assume familiarity with the $\in$ symbol here but this is very likely a bad habit; this symbol typically denotes “is an element of” (though in this context, I would most likely pronounce it “in”; hovering your mouse over the code will reveal that this is no mere idiosyncrasy of mine [ Or not. This feature isn’t working on my equipment at this time. I’ve seen it done. One typically blames the user at this point. It’s not your fault if this doesn’t work]).

Those with a taste for the technical might feel that “really” an equation of functions ought to mean that certain sets of ordered pairs are equal (as sets…”coextensive”, as one sometimes hears it said [but without regard to order; the point to this digression is that, for example, one has $\{37, 168\} = \{168, 37\}$ as sets; the digression itself is offered for the plain fun of it here (since, when working with subsets of ${\Bbb R}$, it does no harm to assume that they’re given the standard “number-line” ordering; the issue doesn’t even arise)]). I’ve done the calculations, just now (right here at the keypad), and decided not to publish; suffice it to say that they were messy enough to convince me I’d lose whatever reader or two I might still have left. One encounters such work in “Transition to Advanced Mathematics” courses; even veterans of Calc classes sometimes blench.

The good news at this point is that calculating inverses with the standard textbook notations is, in principle, a pretty simple matter: given a “formula” for f(x), one puts (typically without any explanation, as if by magic) x = f(y), and then uses ordinary “math 102”-style algebra to solve for y; the result is that $y = f^{-1}(x)$; one has computed the “formula” for the inverse of f (and this formula is in the variable x).

The bad news here is that this is one of those areas where many students literally will not listen to reason: one encounters considerable resistance to attempts to explain why this calculation is appropriate—even more than usual, one will tend here to run up against the “just show me how to do it” wall (“ours is not to reason why, just invert and multiply”, as I read somewhere… “function inverses” is hardly the only situation where this type of resistance becomes an issue…). But not wanting to think about why $f(x) = y$ is equivalent to $x = f^{-1}(y)$ (when f is an invertible function)—by “applying $f^{-1}$ to both sides”— is of course tantamount to rejecting the whole idea of an inverse altogether. Or maybe rejecting algebra itself.

And I’m a long way from knowing what to do about it. But it’s something of an article of faith with me that frequently-encountered student pathologies result from improperly-presented material: there is a right way to do this. Or there will be, once I’ve wrestled the son-of-a-bitch to the ground…

### DM Seeks ODE For ???

In the AM classes, I began with big-picture stuff (for reasons obscure to me): function composition considered as an operation. Specifically, as an operation on functions.

The point here is to stress an analogy with a more familiar situation: just as operations like addition, subtraction, multiplication, and division are applied to (pairs of) numbers to produce “new” numbers, the operation we are about to consider—composition of functions—will be applied to (pairs of) functions to produce new functions.

Just as the “code” $a \times b$ denotes a number when a and b are numbers, so too, whenever instead of numbers, we’re considering functions, we can (and will) denote by $f\circ g$ a new function—which is still to be defined here.

And why have I been delaying this crucial definition? Because I’m trying, indeed I can only hope trying not-too-desperately, to call your attention specifically to the perfect parallel in the way the notation is laid out.

Let me now go on, after digressing to remark that by now we’re pretty far away from anything rightly called “math 148 lecture notes”, to name this “layout”. Let’s call it object-infix-object. By “object”, one more-or-less obviously means number-or-function and by “infix” an operation symbol like $+\,,-\,,\times\,,\div$, or (our soon-to-be-introduced) $\circ$. Because once we start seeing functions as objects to be “operated on” (and denote one operator on such objects by $\circ$—call it [for now] “circle”), why then we can start setting up equations like $h = f\circ g$ and start right in solving ’em: we are in the presence of an Algebra of Functions.

And only by getting away from numbers can we clearly see what’s been happening all along, from arithmetic on up. Quite often, problem-solving techniques developed for one application (equations about numbers, say) will turn out to be useful in other applications (equations about functions, for example). Here is power: the proper study of mathematics is not counting or measuring or even necessarily calculating but reasoning itself. Sets-With-Operations turn out to be exactly the right framework for an enormous variety of problems: a framework whose power and flexibility are awe-inspiring (pretty much universally: this is “math-phobia” laid bare).

Returning (slowly) to actual course-like material, let me mention here that I do sometimes let slip there a reference to the “Real (or Complex) Field” (instead of the “set of Real (or Complex) Numbers”), for example; I’m even more prone to write down things like $f, g \in {\Bbb R}[x]$ (typically with an immediate gloss [“f & g polys w/ Real coeffs” or somesuch])—having the right symbols is even more important than having the right words (and everybody knows I’m a fanatic for having the right words).

Suppose $f, g : {\Bbb R} \rightarrow {\Bbb R}$ (i.e., f and g are real-valued functions of a real variable [see?]). We then define the composite of f with g by
$f \circ g (x) = f(g(x))$
(whenever the right-hand-side is, itself, defined—as it is not when, for example, g is the “constant at zero” function and f the “reciprocal” function).
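In (hypothetical) Python, with `compose` standing in for our $\circ$ and the function names mine:

```python
def compose(f, g):
    """The composite f o g : x |-> f(g(x))."""
    return lambda x: f(g(x))

def f(x): return 1 / x     # the reciprocal function
def g(x): return x + 2

h = compose(f, g)          # h(x) = 1/(x+2)
assert h(2) == 0.25

# compose(f, lambda x: 0)(1) would raise ZeroDivisionError --
# the "right-hand side not defined" caveat, made concrete.
```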

Let’s go ahead and fix a few functions for purposes of illustration:
$a(x) = x+ 1$
$m(x) = 3x$
$p(x) = x^5$
(the “add-one” function, the “multiply-by-three” function, and the “raise-to-the-power-five” function).

Let me say here that a is not “x + 1” [a(x) is; a itself is a function, not a number]. Maybe we can throw some light on this by writing $a = [x \mapsto x+1]$… note that this is entirely in the spirit of my stated program of creating an “algebra” for dealing with equations about functions.

It may be nothing more than a matter of taste, but I find it much more satisfying to define a function called a with an equation beginning “a=”, rather than (the much more common) “a(x)=“—as if the name of the variable had anything to do with the function itself. I’ve ranted about this before.

All of which grades no papers and that guitar’s not gonna start playing itself. So I’ll wrap up.

It’s not a coincidence that all three of the examples I chose have as their domain and range the full set of Reals (but neither is it a necessity; I’ve chosen simple examples to begin our investigation); unusually alert readers (mostly having had a course like 148 already) may have noticed that each is also (what we will later call) a one-to-one function.

Given the models $m\circ a (x) = 3x+3$ and $a\circ m (x) = 3x + 1$, the reader may be in a position to compute (formulas for), say, $m\circ p$ and $p \circ m$, as well as to observe that in general $f\circ g \not= g\circ f$—function composition is not commutative. Ambitious readers can consider “inverse” functions like
$a^{-1}(x) = x -1$ and
$m^{-1}(x) = {x\over 3}$,
going on to show that $(a\circ m)^{-1}= m^{-1}\circ a^{-1}$—the inverse of the composite is the composite of the inverses in the opposite order (the “shoes and socks” theorem).
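All of these claims are mechanically checkable; a sketch (Python, names mine, with exact rational arithmetic):

```python
from fractions import Fraction as F

def compose(f, g):
    """f o g : x |-> f(g(x))."""
    return lambda x: f(g(x))

a = lambda x: x + 1        # add one
m = lambda x: 3 * x        # multiply by three
a_inv = lambda x: x - 1
m_inv = lambda x: x / 3

for x in [F(0), F(1), F(-2), F(5)]:
    assert compose(m, a)(x) == 3 * x + 3      # m o a
    assert compose(a, m)(x) == 3 * x + 1      # a o m -- not the same function!
    # "shoes and socks": (a o m)^(-1) = m^(-1) o a^(-1)
    assert compose(m_inv, a_inv)(compose(a, m)(x)) == x
```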

There’ll be much more of this anon. Or there would if I had a better attention span.

### Section 5.5: A Manifesto

The post is even more of a mess than usual. That “does not parse” parsed yesterday. I had to cut a piece out (you’ll find the hole) because it was acting downright weird for no apparent reason. Welcome to WordPress.

The text is even more of a mess than usual. Evidently certain forces have led its creators (“the Redactor”—an entity whose exact nature is very ill-understood [and for all I know, incomprehensible], but that we can imagine as a sequence of corporate committees— and “the Author” [typically also, from what I have been able to ascertain, a committee]) to create a display called Steps for Finding the Real Zeroes of a Polynomial Function. And thus far, we are of course in full, sweet, agreement: this is the holy grail of Algebra and as such, one of the most interesting subjects there is or ever could be in Life Itself; let such steps be i-cast on every cel (from the rooftops)… or whatever the kids say these days. (“Factor it if you know how. If it’s a constant, you’re done. If it’s linear, use subtractions and divisions to isolate the variable. The Quadratic Formula tells the whole story in the quadratic case. The cubic presents special difficulties. So first use a change of variable, if necessary, to….“—instead of, say, “buy! buy! buy!“). But now look what they’ve done to the beautiful face of this Alma Mater of problems.

Step 1: Use the degree of the polynomial to determine the maximum number of zeros.
Step 2: If the polynomial has integer coefficients, use the Rational Zeros Theorem to identify those rational numbers that potentially can be zeros.
Step 3: Using a graphing utility, graph the polynomial function.
Step 4: (a) Use eVALUEate, substitution, synthetic division, or long division to test a potential rational zero based on the graph.
(b) Each time that a zero (and thus a factor) is found, repeat Step 4 on the depressed equation. In attempting to find the zeros, remember to use (if possible) the factoring techniques that you already know (special products, factoring by grouping, and so on).
Why not skip the Rational Zeros Theorem altogether? Or, not that I’m proposing to do it here at Home Campus Community College, omit the calculator?
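For what it’s worth, the “human computation” core of Steps 2 and 4 fits in a few lines of Python (a sketch under my own naming, emphatically not the book’s procedure; coefficients are listed highest degree first):

```python
from fractions import Fraction as F
from itertools import product

def rational_zero_candidates(coeffs):
    """Rational Zeros Theorem: for integer coefficients a_n x^n + ... + a_0,
    every rational zero has the form +-p/q with p | a_0 and q | a_n."""
    a_n, a_0 = coeffs[0], coeffs[-1]
    ps = [p for p in range(1, abs(a_0) + 1) if a_0 % p == 0]
    qs = [q for q in range(1, abs(a_n) + 1) if a_n % q == 0]
    return sorted({s * F(p, q) for p, q in product(ps, qs) for s in (1, -1)})

def synthetic_division(coeffs, r):
    """Divide by (x - r); return (quotient coefficients, remainder)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])
    return out[:-1], out[-1]

# Example: p(x) = x^3 - 2x^2 - 5x + 6 = (x-1)(x+2)(x-3).
coeffs = [F(1), F(-2), F(-5), F(6)]
zeros = [r for r in rational_zero_candidates([1, -2, -5, 6])
         if synthetic_division(coeffs, r)[1] == 0]
assert zeros == [F(-2), F(1), F(3)]
```

(A real “repeat Step 4 on the depressed equation” loop would divide away each factor as it’s found; the one-pass filter above is enough to show the shape of the thing.)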

I’d probably love to do a “drill-and-kill” version of the course (where I have of course used the industry code for “those who establish a routine of doing lots of routine exercises, set by the instructor, should flatten the exams”)… but it’s just not an option, not with this many topics on the schedule (and us, mea culpa, so far behind it): a lot of teachers really like this “synthetic division” thing and there’s a pretty obvious reason: if you’re gonna crank out dozens of divisions-by-monic-linears, this is your tool.

In such a course, one would—naturally—ban calculators (and check by-hand homeworks for completeness, and much else besides). Certain Computer Gods of Texas have made certain unholy alliances with the local Management Gods to decree that ours shall be a calculator-driven version. In this context, I’m even ready to pretend to accept this decree: this very section is, for me, the first really essential use of the doggone graphers in the whole 102-103-104-148 sequence. There’s no time for lots of polynomial divisions, that’s for sure… so we’ll only do divisions if we have to… which means we won’t use ’em to find zeros (R‘s, say [“roots”])… but will use ’em to “divide away” the corresponding factors ((x-R)‘s, say).

The mostly-unspoken absurdity here is of course that, once you’ve decided to use a computer, why should you limit yourself to one of these expensive handhelds that do very few things compared to more modern electronica (and mostly do those badly)? I dropped a link to a free polynomial-factoring page into my homepage recently; any goodsize class will include some students who can access such programs on their telephones. Why should the line for “what computations the human should do” be drawn at “what such-and-such no-bid-contracting Behemoth decides they can sell”? But as I say, I’m pretending to accept this state of affairs (in order to speak to other issues).

Returning to the text. The “human computation” version should omit Step 3, together with the reference to “eVALUEate” (which, besides being twee, is bad calculator advice: one of course actually uses TRACE here). And then you just put in a “calculator” version saying what to do if you’re using a graphing calculator (which by the way is not a fucking “graphing utility” [utilities are programs not hardware except in edu-babble]). Instead of this neither-fish-nor-fowl thing that nobody will ever do (that isn’t crazy, or stupid, or, what is obviously the most likely case, simply following orders).

Steps 1 and 2 I’ll accept as they stand. Note that the Rational Zeros Theorem can be considered part of the “calculator” version of the process (RZT gives us a bound on rational zeros and a darn good set of hints as to where to “guess-and-test” [between integer values as observed on the grapher, say]; this avoids using intersect [or, worse, root] to determine certain rational values).
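
Since we’re pretending to accept the machines anyway, here’s a little Python sketch of the RZT candidate list—my own throwaway code, not anything the text provides, and the function name is invented. It just enumerates every ±p/q with p dividing the constant term and q dividing the leading coefficient:

```python
from fractions import Fraction

def rational_zero_candidates(coeffs):
    """All candidates p/q from the Rational Zeros Theorem.

    coeffs: integer coefficients [a_n, ..., a_1, a_0] (degree high to low,
    with a_n and a_0 both nonzero).  p runs over divisors of a_0,
    q over divisors of a_n; a set weeds out duplicates like 2/2 = 1/1.
    """
    a_n, a_0 = coeffs[0], coeffs[-1]
    divisors = lambda m: [d for d in range(1, abs(m) + 1) if m % d == 0]
    candidates = {Fraction(sign * p, q)
                  for p in divisors(a_0)
                  for q in divisors(a_n)
                  for sign in (1, -1)}
    return sorted(candidates)

# e.g. f(x) = 2x^3 - 3x^2 - 8x - 3 : candidates are +-1, +-3, +-1/2, +-3/2
print(rational_zero_candidates([2, -3, -8, -3]))
```

(The actual zeros of that example turn out to be 3, -1, and -1/2—all of them on the candidate list, as the theorem promises.)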

Step 3 speaks for itself; put it in the one version, out of the other.

In Step 4 we’ll find most of the trouble, then. So here’s a scholarly crux right off the bat: VME (to [selfindulgently] use “impersonal third person authorial” for a moment) is here using the Fourth Edition while knowing full well that Step 4 has actually been changed in the Fifth. But, a poor workbeing blames its tools, the Fifth is downtown in the office, whereas Fourths, their cash value having fallen suddenly to nothing a short time ago, are promiscuously littered about in various remote VME locations.

This much is known to me of the new Step 4 as of now: they made it worse. Because now it has the nerve actually to say “Use the Factor Theorem to determine if the potential rational zero is a zero”, when, goddamnit, the Factor Theorem has precisely nothing to tell us about whether a rational number is a zero until we already have the factored form—which is essentially the problem we’re supposed to be trying to solve. The Redactor has swallowed its own philosophical tail here and entered some new dimension of incomprehensibility.

Returning to the edition at hand, then:

In (a) I’ve complained of the calculator slang already; the remaining fix is separating the p-and-p (paper-and-pencil, natch) methods from the FGC (graphing calculator) methods.

In (b) we have another of the Redactor’s masterpieces: we are told to “repeat Step 4 on the depressed equation”. But the depressed equation is available to us only if we have used a p-and-p method in step 4a. The calculator version here requires an explicit declaration to the effect: “Use the root to depress the equation (by either division algorithm)”. Anyway, the depressed equation appears to have popped out of the thin air here: in part (a) it hasn’t been mentioned even as a side-effect of the “test a zero” process. And yet this process is the very heart of the matter: in practical terms, it’s very much what this section is about. (Where, to be perfectly explicit, by “practical terms”, I’m referring to “terms of ‘how do you do the exercises?’ “.)

Speaking of which. Does anybody here not lay out all the possible p‘s and q‘s across the top and the LHS of a table to form all of the “potential roots” supplied by the Rational Zeros Theorem (RZT)? Because if you’re in a big hurry and there’s this overwhelming amount of stuff that you’re given to cover in a couple meetings that oughta probably take months, this is an exercise you can essentially train students to do, in a pretty short time, and I’d never dream of doing this other than by laying out a table … oughtn’t that be in the book somewhere?

Even the statement of RZT resists comprehension (as I guess… it’s clear enough to me…): “If ${p\over q}$, in lowest terms, is a rational zero of f, then p must be a factor of $a_0$ and q must be a factor of $a_n$” is perfectly clear in its context; don’t let anybody tell you any different. (In particular, $a_0$ and $a_n$ have been displayed with their usual meanings right there in the statement of the theorem, as good taste requires.) But one should darn well put it in words as well, as if people are actually going to talk about it: “the numerator (of the zero) divides the constant term (of the polynomial)”.

“Numerator divides constant” is much more memorable to at least some minds than “pee divides a-naught”, and is anyway more meaningful (since, in another context, my own lectures for example, one may have, say $A_i$‘s in place of the $a_i$‘s or $n\over d$ in place of $p \over q$). That the verbal translation of a formula should appear somewhere near its display looks like a simple corollary of, what is taken by at least some people as a basic principle for Math Ed, the “Rule of Three” (or of “Four”).

I’ll go ahead and add that “p must be a factor of a_0, and q must be a factor of a_n” gets old pretty fast when you’re writing on the board (or notebook or what have you); one soon discovers a crying need for some such symbolism as the (completely standard and easily understood) $p | a_0$ and $q | a_n$; moreover, we’ll be able to use this notation quite a bit in other contexts (like
$f(R) = 0 \Rightarrow (x-R) | f$,
which is of course the statement of the Factor Theorem [as it appears, not in the book, but in The Book]).

One more thing here. Students will of course take $p | A_0$ and $q|A_n$ as facts to be memorized. And some will inevitably mix them up: it was for situations of exactly this type that the phrase “minding one’s p‘s and q‘s” must have been coined. But of course, by merely contemplating, say 2x – 3 = 0 for a few seconds, one easily reminds oneself of what’s going on: $3\over 2$ is the root … it must be the numerator that divides the constant… (and so on). We darn well have to keep showing students how to do this kind of thing (use small examples to remind oneself of the details of big generalities). Whenever there’s some easy way to avoid memorizing something, we should at least mention it.

I wouldn’t mind so much—I kind of like being the “good guy” who gets to come in and say, “You know that passage of the text that doesn’t make any sense? Well, what they’re trying to tell you is this…”—but I’ve got reasons to believe that some of my colleagues are even more clueless than I am about stuff like this; it’s safer to just put it into the text in the first place.

The theorem of the very first display of the section—mysteriously called an “algorithm” there—is that polynomials “divide like natural numbers” … a fact summarized in the equation f/g = q + r/g.

I’ll remark here on the fly that $(\forall f, g)(g\not=0)(\exists q, r)\ \delta(r)<\delta(g)$ should precede this equation (in some dialect—I hope it’s obvious that I don’t dare indulge in such straight-up set-theory with my live audiences… in part because one would also need to make it clear somewhere that $f, g, r, q, 0 \in {\Bbb R}[x]$ and that the inequality under the $\forall$ quantifier states that g is not the zero polynomial) and that $\delta$ here represents “degree”. [There’s a lost passage here that caused WordPress to freak out utterly and set the whole page wrong. The Tex code parsed OK, but then … blooey. It wasn’t essential. Just me playing around with the degree function.]

But that was just me playing around. The fact that quotients and remainders can be computed (for ordered pairs of polynomial functions) deserves such prominent placement. It also deserves the name of a theorem; “the Division Algorithm”, rightly so-called, is the process defined in the proof of the theorem (and used in actually computing the polynomials r and q—we’re speaking of a constructive proof). What does not deserve such prominent placement is the next thing: the dreaded Remainder Theorem (RT).
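
And the proof really is constructive: the process can be carried out mechanically. Here’s a Python sketch of it (my own toy version—coefficient lists with the leading coefficient first—nothing the book endorses) returning q and r with f = g·q + r and deg r < deg g:

```python
def poly_divmod(f, g):
    """Divide polynomial f by g (coefficient lists, highest degree first);
    return (q, r) with f = g*q + r and deg(r) < deg(g)."""
    if all(c == 0 for c in g):
        raise ZeroDivisionError("g must not be the zero polynomial")
    r = list(f)
    q = [0] * max(len(f) - len(g) + 1, 1)
    while len(r) >= len(g) and any(c != 0 for c in r):
        shift = len(r) - len(g)            # degree gap between r and g
        c = r[0] / g[0]                    # next quotient coefficient
        q[len(q) - 1 - shift] = c
        for i, gc in enumerate(g):         # subtract c * x^shift * g from r
            r[i] -= c * gc
        r.pop(0)                           # leading term of r is now zero
    return q, r

# f = x^3 - 2x^2 - 5x + 6 divided by g = x - 1:
q, r = poly_divmod([1, -2, -5, 6], [1, -1])
print(q, r)   # → [1.0, -1.0, -6.0] [0.0]   i.e. q = x^2 - x - 6, r = 0
```

(The zero remainder in the example is no accident: 1 is a root of that f, so x − 1 divides it—the Factor Theorem again.)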

Not in my course anyway. RT is hard to understand (of course I can’t prove this … but I can say that I’m pretty sure I didn’t understand it until about Abstract Algebra or so …) and is used only in proving the “hard” direction of the Factor Theorem (by us; those ever-so-fortunate p-and-p classes use it a bunch; I’m guessing here). Moreover, the authors have just gotten through admitting that they stated The Theorem Called “Algorithm” without proof; this theorem is of course quoted in the proof of RT (so it ain’t much of a proof at that).

And I walked into a trap here and caused myself to deflate right out in front of a class when I suddenly realized I wasn’t willing to try to really explain—I mean “explain so as to be understood” (with all the necessary give-and-take)— what was going on with this part of this section (and so I oughtn’t to have brought it up in the blackboard notes at all): you can lose a lot of hard-won trust in a moment flat by just giving up.

The Theorem for Bounds on Zeros is omitted campuswide; good. I’ll go ahead and mention that this omission sort of hints that the creators of the local version of the course are aware that this might not really be the text we should use. While I’m at it, they’ve also changed the order of the sections in this Chapter. This might very well be contributing to my difficulties. If there can be said to be an intended audience for this treatment, then that audience will have had more experience in graphing rational functions before getting here (and so would have seen lots more examples of the Factor Theorem at work before its statement here, for example).

As much as I’ve been complaining about the text, I ought to make it clear that what I’m really fighting is the lack of time to talk about it. There’s enough here for a whole ten-week course as far as I’m concerned (and I’d love to teach that course, with students just like the ones I’ve got now). Meanwhile, there’s this completely demented parody of an industry standard to the effect that “This is College! It’s supposed to be hard! Let ’em learn how to study!” and so on. This is far from a majority opinion in most departments according to my wild guess. But when the committees start making up the rules, all of a sudden those with this opinion speak up plenty loud and nobody wants to appear like the weakling. (“Well, my students seem to need about twice the time on this topic than what’s allotted” can very easily be twisted into “I don’t know how to teach this stuff properly”, so it’s just easier to keep your mouth shut. And another invisible 800-pound gorilla is born.)

And then, and this is the most frustrating phenomenon of all, you get together in the group-office-cum-teacher’s-lounge and all anybody ever wants to complain about is the students. “They keep wanting to do this, no matter how I tell ’em to do that!”—and I keep trying to change the subject to “We’re telling ’em to do that, in the wrong way!”

Because once new students are seen to make the same old mistakes, that’s information: knowing the most likely mistakes tells us where to put up the warning signs (even Bourbaki, whose indifference to pedagogy was legendary, did this). The fault, dear colleagues, is not in our students but in ourselves.

So why do I always feel like I’m the only one complaining about textbooks and syllabi and stuff that’s actually somewhat under the control of people right here in our department (instead of the lack of math maturity found in math students, which is not)? OK. Rhetorical question. Because disrespect for the helpless is free, but fighting the power is dangerous, is why. To which I can only say, sure. But at least it’s interesting.

### Next Stop: Multiplicities

So somehow it finally dawned on me. There’s some major sticking point whenever the subject of the connection between the roots of a polynomial function (f, say… it’s always handy to have names for things, after all…) and its factors comes up: so this connection should be mentioned right up front as clearly as possible.

This last bit is what somehow doesn’t seem to have occurred to me until, to be embarrassingly specific about it, right in the middle of the third time through the material this week (I’ve got three different 148 classes this quarter): there I was, finally boxing off the display I’d wanted all along (as I’ll always do eventually; I’m more or less convincing myself here it should be one of the first displays of its lecture). I refer, if you must insist on some math with the navel-gazing, to the proposition that “R is a root of f if and only if (x-R) is a factor of f“. This rather innocuous looking fact gets pretty close to the Heart of the Matter (which, for us here now, is, what else, the Fundamental Theorem of Algebra). Unfortunately, however, it’s not stated precisely enough for me to relax just yet.

First of all, in the proposition I’ve just caused to appear between quote marks, one has implicitly assumed that f is not only a polynomial, but a polynomial in the variable x. Such assumptions are quite often harmless, but sometimes (for instance, if x is already being used with another meaning in the problem we’re working on) they can lead to confusion.

Suppose $f \in {\Bbb R}[x]$, then. That was painless enough, right? The code can be pronounced “f is a polynomial, in the variable x, with Real coefficients”. That was the only handwave that actually bothered me, but I certainly seem to be having a good time and we can sure be more precise about what’s going on if we want to…

Let R be a root of f, then. This means that f(R)=0 (and it means this first of all: we are invoking the very definition of “root” here; the reason I’ve digressed to say so is that the importance of definitions seems to be very ill-understood by quite a few of the math laity and so I’m seizing an opportunity to stress it). Let’s assume further that f has degree n (one has, ideally, already defined a polynomial function in x, of degree n, as equal [for all values of x] to
$\null A_n x^n + A_{n-1} x^{n-1} + \dots + A_2 x^2 + A_1 x + A_0$,
where “the A‘s are Real constants”—we might instead say here that “$A_i \in {\Bbb R}$, for $i \in \{0, 1 , 2, \dots n\}$“… for example if there were some need to be more concise, or more precise, or maybe because the symbols are astonishingly beautiful [and, with sufficient practice, reveal themselves to be much easier to understand than mere words…]— and $A_n \not= 0$).

Where was I? Oh yes. Something about a polynomial, f, of degree n, having the root R. I claim that there is then a polynomial, g, having the property that
$f(x) = (x-R)\cdot g(x)$
for all values of x. And that’s (almost) it: “if R is a root, then x – R is a factor”.

This claim isn’t obviously true. What’s much easier to see is the converse statement: “if x-R is a factor, then R is a root” (“plug in” x = R on the equation [f(x) = (x-R)g(x)] defining “x – R is a factor”; done). We’ll look at the harder direction soon (in 148… maybe in the blog). For now, it’ll content me—and serve you well!—if you take both directions for granted (though of course if any of this is unfamiliar, one will wish to look at a few examples…)
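
If examples are wanted, here’s a quick Python spot-check (my own scratch code, obviously, not course material) of both directions on $f(x) = x^2 - 5x + 6$ with root R = 2 and cofactor g(x) = x - 3:

```python
def horner(coeffs, x):
    """Evaluate a polynomial (coefficients highest degree first) at x."""
    val = 0
    for c in coeffs:
        val = val * x + c
    return val

f = [1, -5, 6]   # f(x) = x^2 - 5x + 6
R = 2            # f(R) = 0, so x - R should be a factor
g = [1, -3]      # the cofactor: g(x) = x - 3

assert horner(f, R) == 0                 # R really is a root
# the "easy" direction in action: since f(x) = (x - R) g(x),
# plugging in x = R kills the first factor.  Spot-check the
# factorization at a handful of points:
for x in range(-5, 6):
    assert horner(f, x) == (x - R) * horner(g, x)
print("f(x) = (x - 2)(x - 3) checks out")
```

(Two degree-2 polynomials agreeing at eleven points agree everywhere, so this spot-check is actually a proof for this particular f.)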

### All Knowledge Is Found In Blogs

There’s a page at “S.O.S. Math” on polynomial long division that looks like a pretty good introduction (in particular, there are exercises … a vital part of the process!). And here’s a post and thread on “synthetic” division in JD2718 (one of my favorite mathblogs).

For me, the main point to be made is that, until somebody understands “long” division, they’ve got no business learning how to do “synthetic” division. For this version of this class, students are free to use the “synthetic” algorithm (when appropriate) if they want to, but I’m not going to require it.
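
For the record, once you see that “synthetic” division is just long division with the bookkeeping stripped out, the whole algorithm is a one-line recurrence. A Python sketch (mine; the function name is invented) for dividing by the monic linear x − R:

```python
def synthetic_division(coeffs, R):
    """Divide a polynomial (coefficients, highest degree first) by x - R.

    Returns (quotient coefficients, remainder).  Each new coefficient is
    "bring down, multiply by R, add" -- exactly the tabular algorithm.
    By the Remainder Theorem, the remainder equals f(R).
    """
    row = [coeffs[0]]
    for c in coeffs[1:]:
        row.append(c + R * row[-1])
    return row[:-1], row[-1]

# f(x) = x^3 - 6x^2 + 11x - 6, divided by x - 1:
q, rem = synthetic_division([1, -6, 11, -6], 1)
print(q, rem)   # → [1, -5, 6] 0   (so f(x) = (x-1)(x^2 - 5x + 6))
```

Zero remainder means 1 is a root and the quotient is the “depressed” polynomial, ready for the next round.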

### Review Problems

Somebody sez Brutus is absent today for good reason & she’s been asked to take the notes. Which is a bunch of exercises. And Brutus reads the blog. So here goes, quick and dirty.

1. The order of transformations matters. Demonstrate this by graphing the “original” graph y = |x|, then (i) Shift up 2, then reflect in the x axis; and (ii) Reflect in x axis, then shift up 2.

2. Write a definition for the piecewise function on the blackboard. Sorry. Not ready to try to draw it here. Piecewise-linear if that’s any help. (Part of the point here is that both directions — graphics to algebra and algebra to graphics — make good exercises. We had the other one on the quiz.)

3. Give (exactly — e.g. 32/113, not .3274) both co-ordinates for the vertex of $y = -17x^2 +11x + 5$.

4. The Demand function for a certain product is p = 100 – .2 x. (a) Determine the Revenue function. (b) Find the maximum revenue. (c) What are the quantity and price that give this revenue?

Also covered (though not here): anything from the first 2 quizzes (domain & range; increasing & decreasing; intercepts; maxes and mins [nonquadratic]; symmetries …)
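
For anybody who wants to check their by-hand answer to problem 4 afterwards, here’s a sketch of mine in Python, using exact fractions so no .3274-style decimals sneak in (revenue is price times quantity, and a downward parabola peaks at x = −b/(2a)):

```python
from fractions import Fraction

# Demand: p = 100 - 0.2x, so revenue R(x) = x*p = 100x - (1/5)x^2,
# a downward-opening parabola with vertex at x = -b/(2a).
a = Fraction(-1, 5)                  # exact coefficient of x^2 in R(x)
b = Fraction(100)                    # coefficient of x in R(x)
x_max = -b / (2 * a)                 # quantity giving maximum revenue
p_max = 100 - Fraction(1, 5) * x_max # corresponding price
R_max = x_max * p_max                # the maximum revenue itself
print(x_max, p_max, R_max)           # → 250 50 12500
```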

### Our Story So Far: Part I

OK. The snow day probably means postponing the exam again. Meanwhile, here are some remarks while I’m thinking about writing the doggone thing.

All of our work thus far takes place in (various subsets of) $\Bbb R$ (the Real Numbers) or $\Bbb R^2$ (the xy-plane). We are particularly concerned with real-valued functions of a real variable; typically these are given in the form y = f(x).

The big idea of the course so far is pretty clearly Transformations: when the right-hand side of our typical equation is replaced by f(x) + K or A f(x), one has a vertical transformation (a translation—which for some reason we’ve been calling a shift—or a scaling [i.e., a stretch or a compression]); the corresponding Graphical Transformations can conveniently be expressed as $\langle x, y+K \rangle$ and $\langle x, Ay \rangle$—”add K to each 2nd co-ordinate” and “multiply each 2nd co-ordinate by A”. The horizontal transformations associated with y = f(x – H) and y = f(x/W), expressed in the same notation, are then $\langle x+H, y\rangle$ and $\langle Wx , y \rangle$—note here that the number subtracted from the x variable in the (new) equation is actually added to the x co-ordinate of each ordered pair in “shifting” the (old) graph of f (and, more or less of course, a division “inside the parens” in the functional equation produces a multiplication of first co-ordinates by the value in question [here called W for “wavelength”, by the way … with some loss of accuracy …] in the transformation).

Throw in the reflections $\langle x, -y \rangle$ and $\langle -x, y \rangle$, and you’ve got what I hope is a pretty good summary of the theory as thus far presented. All of this theory can now be brought to bear on an equation like $y = -2(x-1)^2 + 3$: beginning with a graph of its “parent function” $f(x) = x^2$, we can understand this as $y = -2f(x-1) + 3$ and go on to analyze the corresponding transformation as a reflection in the x axis, followed by a vertical “stretch” by a factor of 2, followed by a horizontal shift (to the right) by 1 unit and a vertical shift up 3 units (that is, $\langle x+1, -2y+3 \rangle$; I’ll remark here that even though the angle-bracket notation isn’t an actual course requirement, I’d feel pretty helpless saying most of this without it …).
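
Since the angle-bracket notation is doing real work here, a tiny Python check (my own scratch code) that the point transformation $\langle x+1, -2y+3 \rangle$ really does carry points of the parent parabola onto the graph of $y = -2(x-1)^2 + 3$:

```python
# A few points on the parent graph y = x^2:
parent = [(-2, 4), (-1, 1), (0, 0), (1, 1), (2, 4)]

# Apply <x+1, -2y+3>: shift right 1; reflect, stretch by 2, shift up 3.
transformed = [(x + 1, -2 * y + 3) for (x, y) in parent]
print(transformed)

# Each new point should lie on y = -2(x-1)^2 + 3:
assert all(y == -2 * (x - 1) ** 2 + 3 for (x, y) in transformed)
```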

As to the order in which the “suboperations” forming this transformation are performed, the hard issues are essentially ignored by our text. And, for right now, by me. I’m gettin’ out in the snow to play. As soon as I debug all this TeX-slash-HTML …
