Archive for April, 2014

class of ’92. represent.

i just filed the last grades for the semester.
that’s 22 years a pro (albeit a minor-league
player), with two or three years off here and
there (for good behavior; no good deed goes
unpunished… don’t you know there’s a *war*
on?). mostly i worked summers but that appears
unlikely this year. in other words, i’m on
vacation as of this hour. yay me. time for
an ep of the cooking show, i guess.

or,
contents-page for
the “combinatorics” category of
_a_grader’s_notes_ (so far).
(in its natural order.)

a couple of good exercises
at random before breakfast
(april 17)
handwritten marginal translations:
english-to-code & code-to-english.

hey. this is fun and easy. (april 17)
the ross-collection copy of bogart
(1983) falls into my hands and i begin
to annotate it.

bogart so far (progress report at
random)
(april 21) longish.
stirling numbers of the second kind.
(i’ve got it this time at last.)
the “onto arrow” —->> and its ilk.

a thing unto itself (april 25)
more about KPB2.3
(= bogart \section 2.3).
specifically, “
* [ (k)_n counts one-to-one F^n’s]
* [ k^n (counts functions)]
* [ S(n,k)k! counts “onto”
functions N_n —->> N_k]
“.
also… filling a much-needed gap…
N_k = {1, 2, … ,k}.
(why *is* there no standard symbol
for these amazingly-useful sets?)
{\Bbb N}_k := \{i\}_{i=1}^k

bogart so far, so far (april 26)
this very post, in some weird
go-to-hell-escher-and-back self-
reference-gone-mad. (what’s next,
two spocks at once?)

and that’s it. heck, if it’d been
a *big* project, i’d’ve never taken
it on. BIJAGDH. maybe some day…
maybe today!… i’ll say something
about the infuriating treatment of
“the basic counting functions” or
the amazing “polya’s change-making
example”. not now.

letting
{\Bbb N}_1 = \{1\},{\Bbb N}_2 = \{1,2\},
{\Bbb N}_3 = \{1,2,3\} \ldots
(i.e., {\Bbb N}_k = \{i\}_{i=1}^k… en-sub-kay stands
for the set of positive integers from
one to kay, in other words)…
and immediately turning right around
and redefining {1, 2, 3, … k} as
“N_k” instead, without all the fancy
type…

last monday’s bogart so far
was a ramble on stirling numbers (of the second
and first kind, SN2K & SN1K) and how it was
to read about ’em in KPB (kenneth p.~bogart’s
_introductory_combinatorics_… or should i
just say “op cit.”?). the notations i’ve just
introduced are mine, not bogart’s; this
will persist, i imagine.

now i’ve read that section some more and
its pages are littered with my pencil notes.
for instance,
x^n = \sum_{j=0}^n S(n,j)(x)_j,
copied out verbatim for the sheer joy
of it, and
* [ (k)_n counts one-to-one F^n’s]
* [ k^n (counts functions)]
* [ S(n,k)k! counts “onto”
functions N_n —->> N_k]
,
written out on p.~50 (where the “falling
factorial” function (k)_n = k!/(k-n)!
is defined) because it seems to me
as clear a summary of what for me
is the heart of the matter as i can
produce at this time: this is what
SN2K’s are *for*, in the context of
some very familiar material (one has
delivered manymany lectures
bearing on “one to one” and “onto”
functions [and even quite a few
devoted to those topics specifically]…
but typically in a context where
“infinitely many” such functions
will be imagined to “exist”… which
makes “counting” the functions sort
of beside the point).

we could go so far as to *define*
stirling numbers of the second kind by
SN2K(n,k) := #{ f | f: N_n —->> N_k }/k!
.
“count the onto functions and divide by
the permutations of the range”… this
of course gives the number of *partitions*
of N_n into k “classes” (regardless of
order “within a given class”).
not that i think this would be a good idea.
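(for what it’s worth, this proposed definition is easy to check by brute force; a python sketch… the names `count_onto` and `sn2k` are mine, naturally, not bogart’s:)

```python
from itertools import product
from math import factorial

def count_onto(n, k):
    """count the functions f: N_n ->> N_k by sheer enumeration."""
    return sum(1 for f in product(range(1, k + 1), repeat=n)
               if set(f) == set(range(1, k + 1)))

def sn2k(n, k):
    """stirling number of the second kind, via onto-functions-over-k!."""
    return count_onto(n, k) // factorial(k)

# a row of the "SN2K triangle":
print([sn2k(4, k) for k in range(1, 5)])  # [1, 7, 6, 1]
```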

okay. like i said on monday, a good section
of KPB (2.3 “partitions and stirling numbers”).
i’ve read over all 18 exercises & even set some up.
heck, some could even be considered “done”.
but really i came to ramble about sections 3.1
and 3.2. or so i thought. but *really* i came
to avoid marking up some pages. and i’ve done
about enough of that. for now.

6.1.5
Let a, b, c, & d
be integers satisfying
b > 0, d > 0, and
ad - bc = 1.
Let n = max{b,d}.
Then a/b and c/d are consecutive
fractions in the n^th Farey Sequence.

Proof:
Since ad-bc=1, we have
ad/(bd) = (1+bc)/(bd), hence
a/b = 1/(bd) + c/d and so
a/b > c/d.

Note that (cf. Corollary 6.2)
ad-bc = 1 also implies
(a,b) = 1 and (c,d) = 1;
our fractions are in
“lowest terms” (and hence
appear in F_n).

Now suppose c/d & a/b are *not* adjacent.
Then there is at least one fraction,
which we may assume is in “lowest terms”,
“p/q”, say, with (p,q) = 1, such that
p/q is *between* c/d and a/b
in the Farey sequence F_n:
F_n = { …, c/d, …, p/q, …, a/b, …}.

Then
q = (ad-bc)q
= adq - bcq
= adq - bdp + bdp - bcq
= d(aq-bp) + b(dp-cq)
(*).

But a/b > p/q implies
aq – pb > 0, and so
(since the variables are integers)
we have that
aq – bp is greater-or-equal to 1
(aq – bp \ge 1).
The same computation with different
letters shows that
[p/q > c/d] \implies [dp – cq \ge 1].

These inequalities together with (*) give
q = d(aq-bp) + b(dp-cq) \ge d + b.

Recall that n = max{b,d}; since b and d
are both positive, it follows that
d + b > n.
This gives us q > n, a contradiction
(“p/q” cannot be in F_n with q > n).

Hence a/b and c/d *are* adjacent in F_n; done.
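(the statement also passes a machine check; a python sketch, with `farey` my own name for the obvious brute-force construction:)

```python
from fractions import Fraction

def farey(n):
    """F_n: the reduced fractions in [0,1] with denominator at most n, in order."""
    return sorted({Fraction(p, q) for q in range(1, n + 1) for p in range(q + 1)})

# an instance of 6.1.5: a=2, b=3, c=1, d=2 gives ad - bc = 4 - 3 = 1
a, b, c, d = 2, 3, 1, 2
F = farey(max(b, d))
i = F.index(Fraction(c, d))
assert F[i + 1] == Fraction(a, b)  # c/d and a/b are indeed adjacent in F_3
```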

leaving only the “typing it up worse
for the net” to me… the \TeX-set
homework i’m about to loosely base my
plain-text solutions on is quite a lovely
thing to look upon… as my versions will
not be. now. make it so.

6.1.3
Let a/b, a’/b’, and a”/b” be consecutive fractions
in the Farey sequence of order n.

(i.e., in
F_n = {0/1, 1/n, …, 1/2, … , (n-1)/n, 1/1}
[for the record… it’s convenient to
have a “symbol” for this sequence;
F-subscript-n is standard but the
text doesn’t include it.]
)
We will show that these fractions
satisfy the “Mediant Property”:
a’/b’ = (a+a”)/(b+b”).

Consecutive “Farey fractions”
satisfy Theorem 6.1 and so
a’b - ab’ = 1 and
a”b’ - a’b” = 1.

Equating the left-hand sides,
a’b - ab’ = a”b’ - a’b”.

Regroup and factor:
a’b + a’b” = ab’ + a”b’
a’(b+b”) = b’(a+a”);
now “divide through by b'(b+b”)”:
a’/b’ = (a+a”)/(b+b”).
(We are done.)
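(and again a machine check, since the whole of F_n can be enumerated; python, names mine:)

```python
from fractions import Fraction

def farey(n):
    """F_n as a sorted list of reduced fractions in [0,1]."""
    return sorted({Fraction(p, q) for q in range(1, n + 1) for p in range(q + 1)})

# the mediant property, checked for every consecutive triple in F_7:
F = farey(7)
for left, mid, right in zip(F, F[1:], F[2:]):
    # a'/b' should equal (a + a")/(b + b") (Fraction reduces automatically)
    assert mid == Fraction(left.numerator + right.numerator,
                           left.denominator + right.denominator)
```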

(photo, 2014-04-21.)

it turns out (chapter 2, section 3)
that “stirling numbers of the 2nd kind” count
the ways to partition an n-element set into k classes:
letting S_h = \{i\}_{i=1}^h
whenever h is a positive integer (so that “S_h”…
as i propose to call it in ordinary type…
denotes {1, 2, … , h} [and in particular,
S_h has h elements]), we have
S(n,k)\cdot k! = \sharp \{ f| f: S_n \rightarrow S_k, ({\forall y}\in S_k) ({\exists x}\in S_n) f(x)=y \}.

now, this would be much easier to say
if i had an “onto arrow”. instead of
S(n,k)k! = \sharp \{ f| f: S_n \rightarrow S_k, ({\forall y}\in S_k) ({\exists x}\in S_n) f(x)=y \}
i would’ve had only
S(n,k)k! = \sharp \{ f| f: S_n \ontoarrow S_k \}.

in ordinary type,
f: A —-> B
means that f is a function from A to B,
g: A >—-> B
means that g is a *one-to-one* function
from A to B (an “injection”), and
h: A —->> B
means that h is an *onto* function
(a “surjection”) from A to B.

the code for the “distinguished” functions
(one-to-one or onto) is very similar to that
for “ordinary” functions… the symbols differ
*just enough* to mark the relevant distinctions.
this helps to make these “arrows” very easy
to understand and to use; anybody whose work
regularly calls for distinguishing surjections
or injections from “generic” functions would
be *crazy* to *prefer* writing out “f is onto”
or “f is one-to-one”.

of course, no one dares even *try* teaching
’em to undergrads: because a sizable fraction
of any sizable class at any level will insist
on trying to do mathematics *without*
making clear distinctions… particularly
in *writing*.

no, just scribbling “f onto”
somewhere on the page oughta do it.

who am i kidding. just scribbling “onto”
somewhere. “you know what i *meant*…”

i despair.

it turns out there’s a pascal-triangle-like aspect
to S(n,k); one easily sees that
S(n,k) = S(n-1, k-1) + kS(n-1, k)
and so one may crank out a “SN2K triangle”
1
1 1
1 3 1
1 7 6 1…
forming the k^{th} entry of the n^{th} row
by adding k-times-number-above to
number-above-and-to-the-left
(imagining invisible zeroes as needed).

i’ve penciled in
“1 15 25 10 1”
under bogart’s Table 2.1.
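(here’s the crank, mechanized… a python sketch; `sn2k_triangle` is my name for it:)

```python
def sn2k_triangle(rows):
    """build the SN2K triangle row by row using
    S(n,k) = S(n-1, k-1) + k*S(n-1, k)."""
    tri = [[1]]
    for n in range(2, rows + 1):
        prev = tri[-1]
        row = []
        for k in range(1, n + 1):
            above_left = prev[k - 2] if k >= 2 else 0  # S(n-1, k-1)
            above = prev[k - 1] if k <= len(prev) else 0  # S(n-1, k)
            row.append(above_left + k * above)
        tri.append(row)
    return tri

for row in sn2k_triangle(5):
    print(row)
# [1]
# [1, 1]
# [1, 3, 1]
# [1, 7, 6, 1]
# [1, 15, 25, 10, 1]
```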

and on and on it goes.
the SN1K number s(n,k)
turns out to be the coefficient of x^k in
the polynomial expansion of
(x)_n = x(x-1)(x-2)…(x-n+1).

the “falling factorial” function (k)_n
is already familiar to many students as
(k)_n = kPn = k!/(k-n)!
(and so “counts one-to-one functions”…
“lists” that *don’t reuse* any listed
objects).

then there are a bunch of “summation identities”
allowing us to apply stirling numbers in counting
ways to manipulate various sets (again, one has
the famous “pascal’s triangle” situation ready to
hand as a motivating example of this *kind* of thing).
the “bell numbers” (named for the famous ET).
some exercises. it’s a great section.

i’m particularly grateful to professor bogart for
explicitly remarking that “If you have had linear
algebra, you may recognize that the Stirling numbers
are “change of basis” coefficients.”

well, now that you *say* so, yes.
so *this* is what all the fuss (about
“coefficients in the polynomial expansion”)
is about! it’s all about getting back and
forth between the “standard”
{1, x, x^2, x^3, …}
basis for F[x] (the “space” of polynomials
in x) and the “falling factorial” basis
{(x)_0, (x)_1, (x)_2, …} =
{1, x, (x^2 -x), (x^3 -3x^2 +2x), …}
(which, presumably, arises “naturally”
in certain problems about *listing* things).
why was i not *informed*?

knowing *this*… “SN1K numbers are
change-of-basis coefficients from
falling-factorials to powers-of-ex”…
will allow me (finally) to *remember*
how they’re defined. meanwhile, i’m
that little bit more comfortable with
“summation formulas” in combinatorics
generally. a real good section.
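(and the change-of-basis story itself is easy to check by machine: expand (x)_n in the standard basis and read off the signed SN1K coefficients. python; the function name is mine:)

```python
def falling_factorial_coeffs(n):
    """coefficients (constant term first) of (x)_n = x(x-1)...(x-n+1)
    in the standard basis {1, x, x^2, ...}; per the discussion above,
    these are the (signed) SN1K numbers s(n,k)."""
    coeffs = [1]  # start from (x)_0 = 1
    for m in range(n):
        shifted = [0] + coeffs                    # x * p(x)
        scaled = [-m * c for c in coeffs] + [0]   # -m * p(x), padded to match
        coeffs = [a + b for a, b in zip(shifted, scaled)]
    return coeffs

print(falling_factorial_coeffs(2))  # [0, -1, 1]    i.e. x^2 - x
print(falling_factorial_coeffs(3))  # [0, 2, -3, 1] i.e. x^3 - 3x^2 + 2x
```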

section 3.2 is even *more* exciting.
“polya’s change-making example”
is used to motivate the use of
“generating functions”. this topic
has been sort of a sore spot with me
since *senior year* (a long time ago)
when it was last-on-the-list one
semester in a probability class and
hurried through with no understanding
by me at all. then i’d come across
it from time to time. but never in
the context of “stuff i need, on a
deadline”… and so, until now,
never seriously studied. but now?
it’s starting to sort of make sense.

\sharp \{ (v_1, v_2, \ldots , v_k) \in (\{ j\}_{j=1}^{n})^k \mid [a \not= b] \Rightarrow [v_a \not= v_b]\}=
{{n!}\over{(n-k)!}} = \null_n P_k.
— the number of “permutations of k things
from a set of n” (“en-permute-kay”, in its
most convenient say-it-out-loud version);
the number of ways to make an (ordered)
*list* of k (distinct) things chosen from
a set of n things. one has known this from,
well, time immemorial. but never written
it out in straight-up *set* notation
until today.

now it’s on page 5 of my “bogart”.
(_introductory_combinatorics_, kenneth
p.~bogart, 1983; mine’s a recent acquisition
from the math complex, once of the
“bertha halley ross collection” according
to its bookplate. that turns out to have
been mrs. arnold ross, late of the OSU
[an all-time giant in recruiting new talent
into mathematics, founder of the “ross program”].
you can see from the edges of the pages that
somebody read it up to chapter six and stopped.
they didn’t write on the pages though.
[but that’s being put right now.])
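(the set-notation display above is also, as it happens, the easiest version to check by machine; a python sketch:)

```python
from itertools import product
from math import factorial

n, k = 5, 3
# all k-tuples from {1..n} with no repeated entries,
# exactly the set described in the display above
lists = [v for v in product(range(1, n + 1), repeat=k)
         if len(set(v)) == k]
assert len(lists) == factorial(n) // factorial(n - k)  # 5!/2! = 60
```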

and cranking out suchlike “set code” *is* fun
and easy… for me. showing somebody how to
*read* it? fun but *not* easy. showing some-
body how to *write* it? well… how would i
*know*? (how would *anyone*?)

in one sense, “textbook” exercises.
but in the usual sense, not.
i made up these “exercises” for myself,
but they sure as heck involve a *textbook*.
just now… very much in my usual way…
i made some pencil notes on a textbook page.

there was even a “warm-up”.
my *first* marks on that page…
“page 43”, let’s say… were to
scratch over the first “of” of

In number theory, the function of given in Theorem 2.2 would be called an ordered composition of k with n parts, because it gives a list of n numbers that add up to k.
and replace it with “f“.

because theorem 2.2, as it turns out,
tells us that

Theorem 2.2. The number of functions f from a set {x_1, x_2, \ldots x_n} to the nonnegative integers such that \sum_{i=1}^n f(x_i) = k is C(n + k - 1, k).

or, in other words, that
\sharp \{ f \mid f: \{x_i\}_{i=1}^{n} \rightarrow {\Bbb N}, \sum_{i=1}^n f(x_i) = k\} ={{n+k-1}\choose k}.

and that was, if you will, “exercise 1”:
“restate theorem 2.2 wordlessly”.
this code now appears in the (generous)
margin of page 43 (with the unimaginative
label “THEOREM 2.2”).
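(theorem 2.2 also passes a brute-force test; python sketch… `count_sum_functions` is my name, not bogart’s:)

```python
from itertools import product
from math import comb

def count_sum_functions(n, k):
    """brute-force count of functions f: {x_1..x_n} -> {0,1,2,...}
    with f(x_1)+...+f(x_n) = k; values above k can't occur."""
    return sum(1 for vals in product(range(k + 1), repeat=n)
               if sum(vals) == k)

n, k = 3, 4
assert count_sum_functions(n, k) == comb(n + k - 1, k)  # C(6, 4) = 15
```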

exercise 2?
well, the
\sum_{i=1}^n f(x_i) = k
bit appears *twice* on page 43
centered on a line of its own
(in a “display”, in other words).
so next to the first such display
i wrote out
(
f TAKES
n VALUES
ADDING UP
TO k
)
… where the parens “(“,”)” indicate
big ol’ *handwritten* parens
bracketing the whole “block”
of handwritten “f takes n
values adding up to k” thing;
it fills the “display” space
(to the right of the displayed code)
rather nicely from top to bottom.

so the first “exercise” was of the form:
“translate from english into code”
and the second was of the form
“translate from code into english”.

“write it up in \TeX” will here
be considered beyond the scope of our
post
… it *is* time for breakfast…

i just couldn’t wait to observe here that such
“translating”, back-and-forth, code-to-language,
language-to-code is a *large part* of “doing math”.
throw in code-to-graph (and back) and get even more.
venn-diagrams back-and-forth to truth tables.
(hey, look. “isomorphisms”!)

anyhow. we’re *always* doing suchlike exercises.
trouble is. what-you-mean-“we”?

*feed* me!

more tales out of school

my recent example of bad freshman calculus
was, heaven knows, something egregious.
but *this*! this is something else again.

on another day, i might have just bitten down
and let it go with full credit… the very
kind of thing i have to do dozens of times
a night when i sit down to earn my right
to go on calling myself a math teacher.

ordinary unambitious undergrads
will *never* be made in classes like ours
to write carefully. and they’ll take any
attempt to do it as a violation of their
basic civil rights *as* undergrads and
threaten to vote with their feet.
so you’ve got to play it pretty cool.
even though it might break your heart.

just mark it wrong, again.
and tell ’em why, again.
“CODE IS NOT ENGLISH”
usually doesn’t get points-off
*regardless* of how badly the little
dears have mangled plain sense into
worse-than-meaningless mumbo-jumbo
by using “syncopated style”.

but this example, on this day?
i just couldn’t do it. its author
lost a half a point (out of two).

“f(x) = log_a x = (ln x)/(ln a)
f'(x) = 1/(ln a) * 1/x” …
thus far well and good …
“since the derivative of ln x = 1/x”.

and my first instinct, and indeed my
first move, is to circle the “=”,
write “IS” near it, put a big “X” near
the whole mess so far, and write out
“CODE IS NOT ENGLISH”.
and let it go at that.
like usual.

but, finally, dammit.

if that thing deserves a perfect score
then i’m carl friedrich gauss.

if i *ever* encounter “ln x = 1/x”
*without* letting it bother me,
may my right hand lose its cunning.
take away my mathbooks and send me
to seminary or something; i’ll’ve
quit being a *math* teacher altogether.

the student in question will almost certainly
take my comments for raving lunacy. and so,
i suppose, will some portion of readers of this
post. none of *my* business either way, though,
i suppose. sometimes you’ve just gotta get up
on the net and *vent*. thanks for reading.

bad freshman calculus

log_a(x) = ({{ln x}\over{ln a}}) = {1\over{x ln a}} by quotiant rule
EXHIBIT A

but this is *senior level* stuff…
“analysis” *not* freshman-calc.
and even *there*, one seldom encounters
such an *explicit* version of the
natural-log-equals-reciprocal fallacy
ln(x) = {1\over x}.
(the “quotiEnt rule”…
or rather, *the* quotient rule…
our subject has fallen into the
“leaving out articles makes it
harder to understand and so is
in my best interest” trap…
has *no bearing* on this [mis]-
calculation [we aren’t diff-
erentiating a fraction (or any-
thing else for that matter)].)

here’s the whole sad story.
math is hard and everybody knows it.
what they *don’t* know is that it’s
nonetheless easier than anything else.
*particularly* when one is trying
to do things like “pass math tests”.

there’s always a widespread (and *very*
persistent) belief abroad in math
classes that trying to understand
what the technical terms mean (for
example) is “confusing” and should
be dodged at every opportunity.

we teachers go on (as we must)
pretending that when we say things like
“an equation” or “the *product* law”
we believe that our auditors
are thinking of things like
“a string of symbols
representing the assertion
that a thing-on-the-left
has the same meaning as
a thing-on-the-right”
or “the rule (in its context)
about *multiplication*”.

but if we ever look at the documents
produced by these auditors in attempting
to carry out the calculations we only
wish we could still believe we have been
*explaining* for all these weeks?
we soon learn that they have been thinking
nothing of the kind.

anyhow. the example at hand.
calculus class is encountered by *most* of its students
as “practicing a bunch of calculating tricks”.

the “big ideas”… algebra-and-geometry, sets,
functions, sequences, limits, and so on…
are imagined as *never to be understood*.
math teachers will perversely insist on
*talking* in this language when demonstrating
the tricks.

well, the “big ideas” that *characterize*
freshman calc are “differentiation” and
“integration”. for example “differentiation”
transforms the expression “x^n” into the
expression “n*x^(n-1)” (we are suppressing
certain details more or less of course).
this “power law” is the one thing you can
count on a former calculus student to have
remembered (they won’t be able to supply
the context, though… those pesky “details”),
in my experience.

anyhow… long story longer… somewhere
along the line, usually pretty early on…
one encounters the weirdly mystifying
*natural logarithm* function. everything
up to this point could be understood as
glorified sets, algebra, and geometry…
and maybe i’ve been able to fake it pretty well…
but *this* thing depends on the “limit” concept
in a crucial way. so to heck with it.

and a *lot* of doing-okay-til-now students
just decide to learn *one thing* about
ell-en-of-ex
(the function [x |—-> ln(x)],
to give it its right name
[anyhow, *one* of its right
“coded” names; “the log”
and “the natural-log function”
serve me best, i suppose,
most of the time]):
“the derivative of ln(x) is 1/x”.

anything else will have to wait.
but here, just as i said i would,
i have given the student too much
credit for careful-use-of-vocabulary:
again and again and again and again,
one will see clear evidence that
whoever filled in some quiz-or-exam
instead “learned” that
“ln(x) is 1/x”.

because, hey look.
how am *i* supposed to know
that “differentiation” means
“take the derivative”?
those words have *no meaning*!
all i know… and all i *want* to know!…
is that somewhere along the line in
every problem, i’ll do one of the tricks
we’ve been practicing. and the only trick
i know…. or ever *want* to know!… about
“ln” is ell-en-is-one-over-ex. so there.

it gets pretty frustrating in calc I
as you can imagine. to see it in analysis II
would drive a less battle-hardened veteran
to despair; in me, to my shame, there’s a
tendency… after the screaming-in-agony
moments… to malicious glee. (o, cursed spite.)

because, hey, look. if we all we *meant* by
“ln(x)” was “1/x” what the devil would we have
made all this other *fuss* about? for, in
your case, grasshopper, several god-damn *years*?
did you think this was never going to *matter*?
in glorified-advanced-*calculus*? flunking fools
like this could get to be a pleasure.

but how would i know. i’m just the grader;
it’s just a two-point homework problem.

and anyway, that’s not really the *exact* thing
i sat down to rant about.

next ish: *more* bad freshman calc from analysis II.