Research in pure mathematics
What is pure mathematics?
Pure mathematics is the study of a priori truth --- facts that
hold because they must hold, because logic mandates them. You could also say
that these are things which are tautologically true, or true "by definition."
But that does not mean they are necessarily
simple or trivial. Is it obvious that
there are infinitely many prime numbers? Not if you've never seen
the proof. And yet this is true solely as a
consequence of the definition of prime numbers.
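The standard proof (Euclid's) can even be run concretely: multiply any finite list of primes together and add 1, and the result has a prime factor missing from the list, since dividing by any prime on the list leaves remainder 1. A minimal sketch, with an arbitrary starting list:

```python
# Euclid's argument, illustrated: for any finite list of primes,
# N = (product of the list) + 1 has a prime factor that cannot be
# on the list (each listed prime divides N - 1, so it leaves
# remainder 1 when dividing N).
from math import prod

def smallest_prime_factor(n):
    """Return the smallest prime factor of n (n >= 2) by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

primes = [2, 3, 5, 7, 11, 13]
N = prod(primes) + 1          # 30031 = 59 * 509
p = smallest_prime_factor(N)
print(p, p in primes)         # a prime not on the original list
```

So no finite list of primes can be complete, which is exactly the infinitude claim.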
Why is advanced mathematics so hard to explain to laymen, and even to
other scientists?
The infinitude of primes has been known for over two
thousand years. Mathematics extends as far back as written
history.
Mathematical truths are tautological --- so they can never be falsified.
Because of this, the discipline of mathematics is fundamentally
cumulative. It keeps building on itself, creating more and more elaborate
structures. All of the "easy" things
in math were discovered very long ago and are now known to
every educated person. More advanced topics are hard to explain if one has
to jump over intermediate material, as if you tried to explain division to
someone who did not already know multiplication. Actually, much advanced
math logically cannot be explained to laymen: once you had
covered all of the necessary intermediate material, the listener would be
an expert himself!
Understanding advanced math requires patience, but it does not require
a tremendous intellect. Although mathematical research is cumulative, it
periodically undergoes unifying simplifications, so that the subject does
not really get more complicated as one goes deeper into it. Anyone
who can do long division is probably capable of learning as much math as
they want, if they're willing to spend the time.
Is it good for anything?
It is difficult to imagine having an industrial civilization without
calculus, or any civilization without arithmetic. What about more advanced
recent topics in mathematics? Here are some applications:
- vector calculus in electricity and magnetism
- Boolean algebra in computer science
- finite-state automata in linguistics
- differential geometry in physics (general relativity)
- game theory in economics
- Banach spaces in engineering (control theory)
- group theory in chemistry
- number theory in cryptography
Actually, in many cases the real mathematical theory --- the body of theorems
which are regarded by mathematicians as central --- is not used very much
in applications;
but the basic concepts are crucial. Perhaps the greatest contribution of
modern mathematics to science is to provide good definitions.
That is harder than it sounds! Who would have come up with Boolean
algebras, if not us? But we had to come up with them. Modern mathematics
without Boolean algebras (or topological spaces, or groups, or Banach spaces,
or varieties) would have an obvious gaping hole.
What is Fermat's last theorem good for?
Fermat's
last theorem --- the statement that there are no positive whole number
solutions to the equation x^n + y^n = z^n with
n > 2 --- doesn't appear to have any applications, in itself, and perhaps
never will. But the
ideas that were developed (by Andrew Wiles and others) in order to prove it
are stronger candidates for future usefulness.
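The statement itself is easy to probe experimentally. A brute-force check for the single exponent n = 3 over a small, arbitrary search box (which of course verifies nothing beyond that box) might look like:

```python
# Search for positive whole numbers x <= y < z below a small bound
# with x^3 + y^3 = z^3.  Fermat's last theorem says this list is
# empty for every exponent n > 2; we only try n = 3 here.
BOUND = 50
solutions = [
    (x, y, z)
    for x in range(1, BOUND)
    for y in range(x, BOUND)
    for z in range(y + 1, BOUND)
    if x**3 + y**3 == z**3
]
print(solutions)  # -> []
```

Such searches were carried out long before Wiles; the whole difficulty of the theorem is ruling out solutions of every size, for every exponent, at once.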
This is one reason why mathematicians are so interested
in proving theorems: often, the ideas that underlie a proof are
more significant than the conclusion.
There are also whole areas of mathematics, like set theory, which don't seem
likely to ever have much practical significance (though you can never be
sure). These subjects are needed only for the internal coherence of
mathematics. But they are the exception, not the rule.
One clear lesson from the past is that subjects which initially
seem unrelated sometimes
end up being closely linked, and the most important application of any given
branch of mathematics often has nothing to do with its historical origin. It
just happens that a lot of what we do does end up being useful in one way
or another, something Eugene Wigner referred to as the
"unreasonable
effectiveness" of mathematics in the natural sciences.
Conversely, mathematicians have always derived, and continue to derive,
great inspiration and insight from work in other fields, especially
physics.
Is there really anything left to do at the level of basic
research?
Definitely. The twentieth century has witnessed a vigorous development
of abstract mathematics. Today's mathematics differs from math in the
year 1900 at least as much as today's physics differs from the physics
of that time.
Hilbert's twenty-three problems give an indication of the progress
math has made in the past hundred years. These problems were posed
by the great German mathematician David Hilbert at the
International
Congress of Mathematicians in 1900. Although some of Hilbert's problems
are vague, the more definite ones have mostly been solved --- with one big
exception:
the Riemann
hypothesis. This is considered the most outstanding unsolved problem
in pure mathematics at the moment.
In my specialty,
C*-algebras,
the central problem is to find a rigorous formulation of quantum field theory.
For instance, the "renormalized" series that are used in
quantum
electrodynamics (the quantum theory of the electromagnetic field, QED) do
not converge. In plain language, physicists have a sophisticated theory of
QED which was originally developed in the late 1940s by
Schwinger, Feynman, and Tomonaga.
It typically makes predictions in the form of infinite series, and
just the first few terms of the series (or even just the first term) yield
highly accurate results. But adding more terms makes the answers less accurate,
and the series eventually diverge. Thus, the accuracy of the initial
calculation shows that the theory must be "right" in some sense,
but taken literally it is nonsensical.
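This behavior --- a series whose first terms give an excellent answer but which diverges overall --- can be mimicked with a classical toy example (a generic asymptotic series, not the QED expansion itself):

```python
# Toy divergent asymptotic series: sum of (-1)^n * n! * x^n with
# x = 0.1.  The terms first shrink (the ratio of successive term
# sizes is (n+1)*x), so the early partial sums settle down near a
# definite value; but the terms eventually grow without bound, so
# adding more of them ruins the answer and the series diverges.
from math import factorial

x = 0.1
terms = [(-1) ** n * factorial(n) * x ** n for n in range(31)]

partial_sums = []
s = 0.0
for t in terms:
    s += t
    partial_sums.append(s)

# Early on, consecutive partial sums agree to several digits...
print(partial_sums[8], partial_sums[9])
# ...but by n = 30 the individual terms are already in the hundreds.
print(abs(terms[30]))
```

The optimal truncation point here is around n = 10, after which accuracy degrades --- the same pattern the QED series exhibits, on a much humbler scale.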
To learn what C*-algebras have to do with this, take
a class from me or read
my book.