I had just paged through the October issue of the Notices of the American Mathematical Society - a society of which I am an emeritus member - when I came upon an article by Sophia D. Merow (p. 1583):
It concerned a 16-year-old girl named Gracie Cunningham, who one morning - while applying her makeup - thought up a bunch of questions concerning algebra and its origins. They were all innocent questions, perhaps a tad naive, but she wanted answers. Not knowing quite what to expect, she dispatched her video into the cybersphere, hoping for the best. But when she next checked her feed there were hundreds of responses, many of them not very kind. See e.g. this account for some more background:
Her video was posted to Twitter, where it exploded in popularity, and Cunningham faced an onslaught of comments from people insulting her intelligence. "It's definitely been crazy," Cunningham told As It Happens in a Twitter direct message. "Having a bunch of people you don't know attacking you is really overwhelming and honestly scary."
What drove those critics bonkers? First, consider the questions she asked:
1. How did people know what they were looking for when they started theorizing about formulas because I wouldn't know what to look for?
2. Once they did find these formulas, how did they know that they were right?
3. Why is everyone being really mean to me on twitter?
4. Why did a physicist who's followed by Barack Obama retweet me?
5. Is this number 5? I can't count.
6. Is anyone gonna post this on twitter?
7. Why are the only people who are disagreeing with me the ones who are dumb, and the physicists and mathematicians are agreeing with me
Cunningham's question 3 is basically identical to what I am asking. I suspect the reason is that the questions at first blush appear surprisingly simple, even naive; hence a casual reader (say on Twitter, which I've often referred to as a cartoon medium) might dismiss them as "dumb". But consider the words, first for question (1): How did people know what they were looking for?
The first issue is: who are "they", the "people"? Most mathematicians today might invoke the Greeks or ancient Sumerians, but some form of basic, even abstract math had to predate even them. It makes sense that these earliest considerations probably appeared around the time agriculture developed, ca. 9,000 B.C. Crops were harvested, and crops were stored. There were different types of crops - call them by name, or x, y and z (or whatever symbol or character might suffice; it doesn't matter, we just need an identifier). These agrarian humans, in order to preserve and protect their food stores, likely assessed what they had by first counting their stores, then reckoning in terms of proportions. Let's put it in terms of a few familiar crops:
Say your community (of 100) has collected or harvested 5,000 honey crisp apples, 12,000 ears of sweet corn, and 14,000 oranges. How to apportion the food? In the simplest sense, the agrarians would work out that each person is allotted 50 apples, 120 ears of corn and 140 oranges. Note that algebraic symbols aren't even needed to work out this distribution of crop goods, just simple division, e.g. 12,000/100 = 120. However, at any time algebraic symbols might be added to shorten the writing.
Thus: x = 5,000 (honey crisp apples)
y = 12,000 (ears of sweet corn)
z = 14,000 (oranges)
Per-person allocations can then be written as: x/100, y/100, and z/100.
This mundane illustration alone more than answers Gracie's two core questions and also exposes holes in her math education, especially in the context she was given. Specifically, why did her teachers not use such simple illustrations in her algebra class? Why did they not offer her a historical context for number, for counting, factoring, figuring and even using symbols? If those questions had been answered she'd also have seen that no one was "theorizing about formulas" but rather using practical means of figuring - in this case apportioning edible crops to survive - and the "formulas" arose naturally as part of the process.
From this fact, that any "formulas" arose naturally, it can also be seen that their correctness was established once the distribution of crop goods was made. So each person will receive:
x/100 = 5,000/100 = 50 apples
y/100 = 12,000/100 = 120 ears of corn
z/100 = 14,000/100 = 140 oranges
Now, what if this supply - for whatever reason - had to last 50 days? How much does each person get to eat per day?
That too can easily be worked out symbolically,
e.g. (x/100)/50 = 50/50 = 1 apple per day
and (y/100)/50 = 120/50 = 12/5 = 2 and 2/5 ears of corn per day
and: (z/100)/50 = 140/50 = 14/5 = 2 and 4/5 oranges per day
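For readers who like to see the bookkeeping spelled out, here is a minimal sketch in Python of the same calculation. The crop names and figures are simply the illustrative ones used above, nothing more:

# Illustrative crop bookkeeping, using the figures from the example above.
population = 100   # people in the community
days = 50          # days the supply must last

harvest = {
    "honey crisp apples": 5_000,   # x
    "ears of sweet corn": 12_000,  # y
    "oranges": 14_000,             # z
}

for crop, total in harvest.items():
    per_person = total / population   # e.g. 12,000 / 100 = 120
    per_day = per_person / days       # e.g. 120 / 50 = 2.4
    print(f"{crop}: {per_person:g} per person, {per_day:g} per day")

Running it gives 50 apples, 120 ears of corn and 140 oranges per person, and 1, 2.4 and 2.8 of each per day over the 50 days - the same results as the hand calculations.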
This is only one illustration, and it can be argued that practical mathematics (and the use of symbols) goes back even earlier in antiquity. This is why no better text for background here can be found than 'Mathematics for the Million' by Lancelot Hogben. In his Chapter One, 'Mathematics in Remote Antiquity', Hogben offers a brilliant encapsulation of the myriad ways in which primitive - and later - humans used math and symbols to survive. As he writes on p. 33:
"While our remote nomadic ancestors lived only by hunting and food gathering, the rising or setting positions of stars o the horizon - whether they had or had not risen by the time darkness fell, or had not yet done so at daybreak- were their only means of again locating a hunting ground. Say, already visited and so their most reliable guide to the onset of the season for which particular game or foods (berries, roots, shellfish or grain) would be most abundant at a particular location.
Thus they would date the routine of their journeying by the occasion when a particular star rose or set before dawn, and rose or set just after sunset. Before there was farming of any sort, it is likely that the older folk of the tribe had learned to reckon in lunar time (i.e. successive full or new Moons) when each food was available."
This was a crucial point and touches on one of the first uses of basic astronomy. One could also obtain the optimal times for lambing, sowing and reaping, based on observing the Sun's rising and setting positions - and "the varying lengths of its noon shadow from one rainy season to another."
Indeed, in an earlier blog post I had shown a simple, primitive shadow stick from which the length of the Sun's shadow could be measured. (For primitive hunter-gatherers, finger lengths might have been used as opposed to the metric units I had shown.) The whole point is that all this background - if taught - would have shown Gracie just how real mathematics is, and that includes the earliest uses of its "mysterious" branch known as algebra. In this regard perhaps her one error was in declaring "I don't think math is real," which likely brought the meanies upon her. A better choice of words might have been "Math (algebra) is difficult!"
Never mind! A number of experts in the fields of physics, mathematics and education did come to her defense. What motivated them? Probably first the Twitter mob piling onto an innocent kid who had just asked some naive-sounding questions. But second, an opportunity to educate that Twitter mob at a deeper level regarding the math questions Gracie raised.
One of the best responses was from mathematician Eugenia Cheng and included extended answers to Gracie's questions, which the interested reader can find here:
eugeniacheng.com/gracie-twitter/
Dr. Cheng also has a provocative column (WSJ, 'How Fractions Can Create A Vaccine Fallacy', Nov. 13-14, p. C4) on practical math in which she examines "base rate fallacies". For example, it is a fallacy to look at 60 breakthrough infections among a total of only 100 local cases and then say: "Wow! Sixty percent breakthrough infections is too much!" Instead those infections (and any others) need to be measured against a denominator reflecting the entire vaccinated population - in which case the breakthrough rate turns out to be minuscule.
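To make the point concrete, here is a rough sketch with purely hypothetical numbers - a vaccinated group of 40,000 and 100 total local cases, 60 of them breakthroughs. These figures are my own illustration, not Dr. Cheng's:

# Hypothetical numbers, for illustration of the base rate fallacy only.
vaccinated = 40_000        # assumed size of the vaccinated population
total_cases = 100          # assumed total local infections
breakthrough_cases = 60    # assumed infections among the vaccinated

share_of_cases = breakthrough_cases / total_cases       # the fallacious 60%
rate_among_vaccinated = breakthrough_cases / vaccinated  # the correct denominator

print(f"Breakthroughs as a share of all cases: {share_of_cases:.0%}")
print(f"Breakthrough rate among the vaccinated: {rate_among_vaccinated:.2%}")

The first figure (60 percent of cases) sounds alarming; the second (0.15 percent of vaccinated people) is the one that actually describes the risk.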
Another example has to do with facial recognition software. This technology is supposed to be 99% accurate in its identifications - which might lead some to believe "anyone it recognizes is 99 percent likely to be a criminal." In fact, this is the inverse of what it actually shows. Again, as with breakthrough infections in a vaccinated population, we are looking at a fraction - in this case out of the total number of tests, not the positive matches.
So "given a theoretical population of 1 million if 100 are suspected criminals then 99 will be accurately flagged. But nearly a million people are innocent and the software will flag 1 percent as criminals or nearly 10,000 people. In fact, this 99 percent accurate software incorrectly flags almost 100 times more people than it correctly identifies. This result is counterintuitive for a test described as 99 percent accurate."
Yes, math is real indeed, as Gracie would see in reading this - especially if software can inaccurately flag thousands of people as criminals who are really innocent.
See Also:
How abstract mathematics can help us understand the world | Dr Eugenia Cheng | TEDxLondon - YouTube