Top 10 mathematical innovations
Of all the mathematical innovations
since ancient times, only some are worthy of multicentenary celebrations.
Certainly logarithms, celebrating their 400th anniversary this year, are among them. Where logarithms rank among the rest is subjective, of course, but I’d put them 10th (they’d be
higher if everybody still used slide rules, though). Here are the rest of my
Top 10 mathematical innovations, which you might as well read here because
David Letterman isn’t going to get to them before he retires:
10. Logarithms
By turning multiplication into addition (and powers into multiplication), logarithms were a great aid to anybody who multiplied or messed with powers and roots; they made slide rules possible and clarified all sorts of mathematical relationships in various fields. John Napier and Joost Bürgi both had the basic idea in the late 16th century, but both spent a couple of decades calculating log tables before publishing them. Napier’s came first, in 1614. Henry Briggs made them popular, though, by recasting Napier’s version into something closer to the modern base-10 form.
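For the curious, here is a minimal Python sketch (the numbers are my own, not from any historical table) of the trick that made log tables and slide rules so useful: multiplication becomes addition.

```python
import math

# Multiply two numbers the slide-rule way: look up their logs,
# add the logs, then take the antilog of the sum.
a, b = 37.0, 512.0

log_sum = math.log10(a) + math.log10(b)   # addition stands in for multiplication
product = 10 ** log_sum                   # the antilog recovers the product

print(product)   # ~18944.0
print(a * b)     # 18944.0, for comparison
```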
9. Matrix algebra
An ancient Chinese math text included matrix-like calculations, but the modern form of matrix algebra was established in the mid-19th century by Arthur Cayley. (Several others,
including Jacques Binet, had explored aspects of matrix multiplication before
then.) Besides their many other applications, matrices became extremely useful
for quantum mechanics. In fact, in 1925 Werner Heisenberg reinvented a system
identical to matrix multiplication to do quantum calculations without even
knowing that matrix algebra already existed.
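If you’ve never seen it spelled out, here is a bare-bones Python sketch of the matrix multiplication rule Cayley formalized (the example matrices are mine, chosen to show that order matters, which is exactly the feature Heisenberg stumbled onto):

```python
# Entry (i, j) of the product is the dot product of row i of A with column j of B.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]

print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]] -- not the same: AB != BA
```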
8. Complex numbers
Before Girolamo Cardano, square roots of negative numbers had shown up in various equations, but nobody took them very seriously, regarding them as meaningless. Cardano played around with them, but it was Rafael Bombelli in the mid-16th century who worked out the details of calculating with complex
numbers, which combine ordinary numbers with roots of negative numbers. A
century later John Wallis made the first serious case that the square roots of
negative numbers were actually physically meaningful.
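Today the arithmetic Bombelli had to puzzle out by hand is built into programming languages. A quick Python illustration (the particular numbers are mine):

```python
# Python writes the square root of -1 as 1j.
i = 1j
print(i * i)            # (-1+0j)

# "Ordinary numbers combined with roots of negative numbers":
z = 2 + 3j
w = 2 - 3j
print(z * w)            # (13+0j), since (2+3i)(2-3i) = 4 + 9

import cmath
print(cmath.sqrt(-4))   # 2j
```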
7. Non-Euclidean geometry
Carl Friedrich Gauss, in the early 19th century, was probably the first to figure out an alternative to Euclid’s traditional geometry (one in which Euclid’s parallel postulate no longer holds), but Gauss was a perfectionist, and perfection is the enemy of publication. So Nikolai Lobachevsky and János Bolyai get the credit for originating one non-Euclidean approach to space, while Bernhard Riemann, much later, produced the non-Euclidean geometry that was most helpful for Einstein in articulating general relativity. The best thing
about non-Euclidean geometry was that it demolished the dumb idea that some
knowledge is known to be true a priori, without any need to check it out by
real-world observations and experiments. Immanuel Kant thought Euclidean space
was the exemplar of a priori knowledge. But not only is it not a priori, it’s
not even right.
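A concrete way to see Euclid fail (my own illustration, not Riemann’s): on a sphere, a triangle with one corner at the north pole and two corners on the equator a quarter-turn apart has three right angles, so its angles add up to 270 degrees, not the Euclidean 180.

```python
import math

# Spherical triangle: one vertex at the pole, two on the equator, 90 degrees apart.
angles = [math.pi / 2] * 3
angle_sum = sum(angles)
print(math.degrees(angle_sum))   # 270.0 -- Euclid would insist on 180

# Girard's theorem: the excess over pi equals the triangle's area on a unit sphere.
print(angle_sum - math.pi)       # pi/2, i.e. one-eighth of the sphere's surface
```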
6. Binary logic
George Boole was interested in developing a mathematical
representation of the “laws of thought,” which led to using symbols (such as x)
to stand for concepts (such as Irish mathematicians). He hit a snag when he
realized that his system required x times x to be equal to x. That requirement
pretty much rules out most of mathematics, but Boole noticed that x squared
does equal x for two numbers: 0 and 1. In 1854 he wrote a whole book based on
doing logic with 0s and 1s, a book that was well known to the pioneers of modern computing.
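Here is Boole’s observation, and the logic-with-0s-and-1s that follows from it, in a few lines of Python (the particular encodings of OR and NOT are the standard modern ones, not a quote from Boole’s book):

```python
# Boole's snag: his system required x*x == x, which only 0 and 1 satisfy.
for x in (0, 1):
    assert x * x == x

# Logic as arithmetic on 0s and 1s: multiplication acts as AND,
# x + y - x*y acts as OR, and 1 - x acts as NOT.
def AND(x, y): return x * y
def OR(x, y):  return x + y - x * y
def NOT(x):    return 1 - x

print(AND(1, 0), OR(1, 0), NOT(1))   # 0 1 0
```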
5. Decimal fractions
Simon Stevin introduced the idea of decimal fractions to a European audience in a pamphlet published in 1585 (De Thiende, “The Tenth”), promising to teach “how all
Computations that are met in Business may be performed by Integers alone
without the aid of Fractions.” He thought his decimal fraction approach would
be of value not only to merchants but also to astrologers, surveyors and
measurers of tapestry. But long before Stevin, the basic idea of decimals had
been applied in limited contexts. In the mid-10th century, al-Uqlidisi, in Damascus, wrote a treatise on Arabic (Hindu) numerals
in which he dealt with decimal fractions, although historians differ on whether
he understood them thoroughly or not.
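Stevin’s sales pitch, business arithmetic “by integers alone,” is still how a lot of financial software works: keep everything in whole numbers of a small decimal unit and reinsert the point at the end. A tiny sketch (the prices are invented):

```python
# Work in hundredths so every step is integer arithmetic.
price_in_hundredths = 1295           # i.e. 12.95
quantity = 3

total_in_hundredths = price_in_hundredths * quantity
print(total_in_hundredths / 100)     # 38.85 -- the decimal point goes back in at the end
```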
4. Zero and 3. Negative numbers
Brahmagupta, a seventh-century Hindu astronomer, was not the first to
discuss negative numbers, but he was the first to make sense of them. It’s not
a coincidence that he also had to figure out the concept of zero to make
negative numbers make sense. Zero was not just nothingness, but a meaningful
number, the number you get by subtracting a number from itself. “Zero was not
just a placeholder,” writes Joseph Mazur in his new book Enlightening Symbols.
“For what may have been the first time ever, there was a number to represent
nothing.”
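Brahmagupta stated his rules in terms of fortunes (positive numbers) and debts (negative ones). Restated in modern notation, and with numbers of my own choosing, they look like this:

```python
fortune, debt = 7, -7

print(fortune + debt)   # 0: a number minus itself is zero
print(debt - 0)         # -7: a debt with zero subtracted is still a debt
print((-3) * (-4))      # 12: the product of two debts is a fortune
```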
2. Calculus
You know the story — Newton gets all the credit, even though Leibniz invented calculus at about the same time, and with
more convenient notation (still used today). In any event, calculus made all
sorts of science possible that couldn’t have happened without its calculational
powers. Today everything from architecture and astronomy to neuroscience and
thermodynamics depends on calculus.
1. Arabic numerals
Did you ever wonder why the Romans
didn’t do much creative quantitative science? Try doing a complicated calculation with their numerals (a short sketch at the end of this post makes the contrast with place value concrete). Great advances in Western European science followed the introduction of Arabic numerals by the Italian mathematician Fibonacci, in his 1202 book Liber Abaci. He learned them while conducting business in Africa and the Middle East. Of course, they should
really be called Hindu numerals because the Arabs got them from the Hindus. In
any case, mathematics would be stuck in the dark ages without such versatile
numerals. And nobody would want to click on a Top X list. (Wait — maybe they
would. But you won’t see any list like that on this blog.)
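As promised above, a small sketch of the contrast (my own illustration): Roman numerals have to be translated into positional, place-value form before you can calculate with them at all.

```python
# Place value is what makes column arithmetic possible.
ROMAN = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}

def roman_to_int(s):
    total = 0
    for ch, nxt in zip(s, list(s[1:]) + [None]):
        value = ROMAN[ch]
        # A smaller numeral written before a larger one subtracts (IV = 4, XC = 90).
        total += -value if nxt and ROMAN[nxt] > value else value
    return total

print(roman_to_int("MCMXCVIII"))   # 1998
print(1000 + 900 + 90 + 8)         # the same number, already set up for arithmetic
```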