One Must Imagine Vlorbik Happy

Exam II is in a couple days. Zeros of polynomials, graphs of rational functions, compositions and inverses.

(I have just named three topics, not four; here [shrink the window to the size of a column] are some remarks on “the serial comma” I made back in th’ XXth c.)

Of course my classes are ill-prepared for this material, for the usual reason (no time to do things right). So I intend to look at some stuff that didn’t fit… but really to “review” work we’ve done along the way.

At the end of last week—very likely my most productive ever as a blogger, by the way—I mentioned that by applying the Transformations from the early part of the course—shifts and scalings—to the simplest Rational Function worthy of the name—of course I refer to the reciprocal function [x\mapsto {1\over x}]—one would arrive at (what I called there) the Linear Fractional functions:
\mu (x)= {{Ax + B}\over{Cx + D}}\,.

I used \mu (“mu”, a Greek “m”), by the way, to honor Ferdinand Möbius; when (the constants) A, B, C, and D are allowed to take Complex Number values, one has the so-called Möbius Transformations (on {\Bbb C}). But we’re considering only Real Number values for our constants here (and so \mu:{\Bbb R}\rightarrow{\Bbb R} is a real-valued function of a real variable)… the set we’re studying is then
LF:= \{ [x\mapsto {{Ax + B}\over{Cx + D}}] | A, B, C, D \in {\Bbb R} ; C \not= 0; AD-BC\not= 0 \}.

The inequalities at the end of this code exclude the cases where C is zero (which would give linear functions; this is easy to see) and where AD-BC is zero (which gives constant functions since then the numerator is a constant multiple of the denominator; this requires a small calculation to see).
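
(The small calculation, for anyone who wants it on the page: if AD - BC = 0 with C \not= 0, then B = {{AD}\over C}, and so
{{Ax + B}\over{Cx + D}} = {{Ax + {{AD}\over C}}\over{Cx + D}} = {{{A\over C}(Cx + D)}\over{Cx + D}} = {A\over C}\,,
a constant, wherever the denominator isn’t zero.)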

Now. I sure haven’t proved that we’ll get this set by applying shifts and scalings to the reciprocal function. And ideally, this would be an exercise. For the course I happen to be running three sections of at this time, that would be absurd, however, so the next best thing would be to have it in the lecture notes. Which, while perfectly do-able in principle, appears unlikely. So here it is in the blog.

The general shifting-and-scaling Transformation (on {\Bbb R}^2, the [so-called] xy-plane), expressed in the symbolism created for these notes, is \langle Wx+H, Ay+K \rangle; this abbreviates (in the more standard bracket-and-arrow notation) [ (x, y) \mapsto (Wx + H, Ay + K) ], which itself is unofficial for 148—we’re speaking of the transformation that replaces the function y_1 = f(x) with
y_2 = Af({{x-H}\over W})+K\,.
Probably no teacher of this material could now resist the temptation to point out what is presumably obvious to any actual reader of these notes: in the “f-notation”, one has a subtraction and a division where unguided “common sense” might lead the unwary to expect a multiplication and an addition. Anyhow, I can’t. Let’s go. The lecture I’m imagining would then begin.
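
(One small instance of the subtraction-where-you-might-expect-addition business, before the imagined lecture gets underway: the pure right-shift by two, \langle x + 2, y \rangle in our symbolism, replaces y_1 = f(x) with y_2 = f(x - 2); applied to f(x) = x^2, say, this gives y = (x-2)^2, whose graph sits two units to the right of the original despite the minus sign in the code.)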

Let f(x) = {1\over x} denote the reciprocal function and “apply the generic shift-and-stretch”; one has
T(x) = Af({{x-H}\over W})+K= A{1\over({{x-H}\over W})}+K\,.
Replacing 1\over W with V (to make typing easier), one now has
T(x) = {A\over{Vx -VH}}+K=
{{A +K(Vx-VH)}\over{Vx - VH}}=
{{[KV]x +[A -KVH]}\over {Vx - VH}}\,.
With the “obvious” changes in the constants, it’s clear that we’ve arrived at the desired form (I mean {{Ax + B}\over{Cx + D}}; ideally it would be safe to let it go without saying that the “A”s of these formulae do not [necessarily] have the same value [but why take a chance?]). So that’s it.

(Almost. We’re omitting the calculations concerning AD-BC from sheer bone-laziness [and a feeling that suchlike technicalities may drive students to sneak out of the room while my back is turned]. That C is nonzero is already implicit, since the division by W at the beginning of our latest calculation guarantees that V is nonzero [go ahead and leave if you feel you must].)
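
(For the stubborn ones who stayed, though, here’s the omitted bit: with the constants as computed above, the “AD - BC” of our result works out to
[KV][-VH] - [A - KVH][V] = -KV^2H - AV + KV^2H = -AV\,,
which is nonzero so long as V \not= 0 [already noted] and A \not= 0 [a zero vertical-scaling factor would flatten every graph onto a horizontal line, so we don’t count it as a scaling at all].)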

Part of the point here is that LF would seem to have been tailor-made for our consideration in 148. We went to a lot of trouble to develop the theory of Shifts and Scalings; here’s a subset of the Rational Functions (that we’re now trying to understand) obtained by applying these transformations to the simplest rational function of all (as usual, I “really” mean [what I’ve already called] the simplest one worthy of the name: the reciprocal). The result is an infinite collection of “next-simplest” cases. If everybody wasn’t in such an all-fired hurry (to “climb Calculus Mountain”, as an old sparring partner of mine was wont to say), one would more-or-less of course begin an exploration of, say, Graphing Rational Functions by looking in some detail at graphing these particular Rational Functions.

Having done so, the instructor would then be in a position to say things like, “Remember how the vertical asymptote of a Linear Fractional function at x = H [which came from shifting that of the reciprocal function] showed up in the ‘f-notation’ code as an xminusH? Because this is how you get a zero denominator in the rational function you’re examining? Well, this generalizes to rational functions generally; in f/g, if we can factor (the polynomial) g into linear factors, we can then ‘read off’ the vertical asymptotes for that function…”
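
(A small instance of the generalization, with the usual caveat attached: for {{x+1}\over{(x-3)(x+2)}}, the factored denominator lets us read off vertical asymptotes at x = 3 and x = -2; the caveat is that a linear factor shared by numerator and denominator produces a “hole” rather than an asymptote, as in {{(x-3)(x+1)}\over{(x-3)(x+2)}}, where only x = -2 survives as a vertical asymptote.)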

(WordPress has just correctly typeset some “quotes ‘within’ quotes”. For every one of these, anyway in my copy, one pays the price of maybe about a dozen opening apostrophes being set wrongly. I’ve made some remarks about this punctuation disaster and will link ’em (see?), and omit this paragraph, as soon as I find ’em [and have the leisure, and the access].)

As an example of the advantages of considering the class LF carefully, consider the computations involved in calculating the inverse of a linear fractional function. For specificity, let R(x) = {{5x+3}\over{7x+4}}.

No, wait. Now that we’re here, let’s first remark in passing on the remarkable fact that the functions of LF are invertible (by which I mean “have inverse functions” as opposed to mere inverse relations; every relation has one of those…)—and that one sees this (literally!) by considering the graph of the reciprocal and noticing that Shifts and Scalings preserve the property of being a one-to-one function. This means that a function that’s one-to-one (these are precisely the “invertible” functions as my students had bloodywell better know on Wednesday; the code [x_1 \not= x_2] \Rightarrow [f(x_1)\not=f(x_2)] won’t there be called upon but everybody should be able to make some sense of it by now) before applying a Shift or Scaling (or a series of these; this follows easily) is still one-to-one after the transformation is applied.
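
(The algebraic version of this fact, for anyone wanting the code to match the picture: if {{Ax_1+B}\over{Cx_1+D}} = {{Ax_2+B}\over{Cx_2+D}}, then cross-multiplying and cancelling the terms common to both sides leaves
(AD - BC)(x_1 - x_2) = 0\,;
since AD - BC \not= 0 by the very definition of LF, we’re forced to x_1 = x_2. No two inputs share an output, so the function is one-to-one.)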

One is at this point actually pointing at a graph of {1\over x} (where I have at last, somewhat hypocritically, used the commonplace “call a function by the name of an algebraic expression” convention; I’m here actually imagining my own voice saying “one over ex” when the symbol 1\over x appears on your screen: the point is that I consider this convention to be best honored in the spoken part of our everlasting development of The Art [the “oral law”, if you will]), and a “generic” graph: “they all look like this” (sweeping one’s hands around along the curve of the graph)…”rising-jumping-rising (or, like one-over-ex itself, falling-jumping-falling)…there’ll never be a repeated y-value because of the way this horizontal asymptote here kind of prevents it…”

And the point I’m making here (if any, as it seems to me) is that this “literal seeing” I was referring to a moment ago is at the same time an appeal to the imagination: our audience is being asked to spin up a movie in the YouTube of their imaginations and “see” the Shift or the Stretch in question dynamically. This is why one pulls one’s hands apart to suggest “stretching”, or seems to “grab” an invisible graph and “shift” it (or what have you). When the visual imagination is well-developed (as it isn’t in me very well at all), one tends to prefer to study continuous phenomena as opposed to discrete ones in one’s mathematical work (I’m a “discrete” man myself [as is obvious if the two other claims of this sentence so far are taken as true])… but every user of advanced mathematics has to develop at least some skill working with both types of these phenomena.

So. With our visual “proof” that LF consists of invertible functions in hand, let’s invert one. R(x) = {{5x+3}\over{7x+4}} has already been mentioned. Okay. Notice that there are “two copies of x” in this “formula”. This is not the case for the other functions whose inverses are calculated in the exercises from the current section of our text (so here again, LF is seen to be of some special interest).

Our technique—write x = f(y) (interchanging the “usual” roles for the variables) and solve for y—will then call for a new “trick”. (To elaborate on this. The other functions considered, [x \mapsto (3x + 1)^5] for example, can be thought of as a sequence of “moves” made on an expression, starting with x [multiply by three; add one; power by 5]. The inverses can then be computed without making a mark on the page by considering the inverses of each move in the opposite order: root by 5; subtract one; divide by three; the inverse function we seek is [x \mapsto {{\root5\of{x} - 1}\over3}]. This technique is referred to as “shoes and socks” for what I hope is the obvious reason.)
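
(Checking the shoes-and-socks answer by composition is quick: 3\cdot{{\root5\of{x} - 1}\over3} + 1 = \root5\of{x}, and (\root5\of{x})^5 = x, as required.)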

Actually, it’s a pretty familiar trick, and I’ve never failed to present it at the board in any 148 equivalent up until now. Putting x = {{5y+3}\over{7y+4}}, one will first “cross-multiply” to get x(7y+4) = 5y +3, then “distribute the x” and “collect terms involving y” to arrive at 7xy - 5y = 3-4x. At this point, it’s clear that we’ve found the right “trick”: by “factoring out” y and performing the obvious division, we’ve shown that y = {{3-4x}\over{7x - 5}} (and should now replace “y” with “f^{-1}(x)”; the certain knowledge that trying to make sense of this step will be considered more confusing than enlightening by many beginners typically causes even me to adopt a “never mind why for now” attitude about this procedure… not that there’s anything wrong with that in principle… sometimes this is exactly the attitude to take… it just seems to be way overdone…); we’re done.
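
(And since I’m about to admit that we ought to check it: plugging y = {{3-4x}\over{7x-5}} back into R gives 5y + 3 = {x\over{7x-5}} and 7y + 4 = {1\over{7x-5}}, whence R(y) = x, as it should be.)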

Having calculated the inverse for this Linear Fractional function (and checked it if we know what’s good for us; I’ve used [half of] this kind of check as an exam problem many times), we can confidently tackle the generic one; one arrives at the remarkable fact that
[x \mapsto {{Ax+B}\over{Cx+D}}]^{-1} = [x \mapsto {{Dx - B}\over{-Cx +A}}]\,.

The most remarkable feature of this fact may be its striking resemblance to the inverse of a two-by-two invertible matrix; but one need not know about such calculations to see that the manipulations of the constants are easily memorized (“swap” the values in the upper left and lower right [the “main diagonal”] and change the sign of the other two). Students preparing for exams involving the prospect of some need to compute the inverse of a linear fractional function might then choose to memorize this fact in order to forgo the kind of calculation we demonstrated a few paragraphs ago. (There’ll be no such problem in my exams this quarter.)
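
(For anyone who has seen the matrix fact, here it is for comparison:
\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = {1\over{AD-BC}}\begin{pmatrix} D & -B \\ -C & A \end{pmatrix}\,;
the scalar out front makes no difference to the linear fractional function, since multiplying A, B, C, and D by a common nonzero constant leaves {{Ax+B}\over{Cx+D}} unchanged.)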

One will of course wish to try this trick out: the inverse of R(x) = {{5x+3}\over{7x+4}} is then R^{-1}(x) = {{4x-3}\over{-7x+5}}; some straightforward sign-manipulations show that this is indeed the same function we calculated already. It works.

But—and this will be last of all—what I find most exciting about the topic of Inverses of Rational Functions is that we can “compute” them visually by applying the graphing principles developed earlier in our course. For blogging purposes my graphing skills might as well be nonexistent, so this will be in outline. The graph of [x \mapsto {{Ax+B}\over{Cx+D}}] has a vertical asymptote at x = {{-D}\over C} and a horizontal asymptote at y = {A\over C}; also it has an x intercept at ({{-B}\over A}, 0) and a y intercept at (0, {B\over D}). (All of these facts about this graph can be worked out by any well-prepared student using the principles developed for graphing Rational Functions generally; what follows is then a [potentially surprising] application of these ideas). Interchanging the roles of x and y (and, what follows, of “horizontal” and “vertical”), and working the graphing process “backward” (a skill developed by working certain exercises not considered by me so far this quarter in class and probably never to be so considered), one can arrive at the inverse transformation. And this without invoking either the algebraic process (expand; collect like terms; factor) or the “formula” (that is itself developed in this way); one has in effect used the transformation theory to give a geometric proof of what might appear to be an algebraic fact.
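
(Spelled out for our running example: R(x) = {{5x+3}\over{7x+4}} has vertical asymptote x = -{4\over7}, horizontal asymptote y = {5\over7}, x-intercept (-{3\over5}, 0), and y-intercept (0, {3\over4}); its inverse R^{-1}(x) = {{4x-3}\over{-7x+5}} has, with the roles duly interchanged, vertical asymptote x = {5\over7}, horizontal asymptote y = -{4\over7}, x-intercept ({3\over4}, 0), and y-intercept (0, -{3\over5}), which is exactly what swapping x and y predicts.)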

I’ve got to prepare for actual classes now; have a nice week.


  1. more broken internet. i’ll soon be altogether helpless thanks to the (newspeak) “upgrades” that eternally make every god damn thing harder and harder like harrison fucking bergeron oh dear god why didn’t i quit long ago i *hate* this.

  2. static.ow.ly/docs/needham_t_visual_complex_analysis_bhj.pdf

    outstanding on mobius: tristan needham.



