The post Rebooting Philosophy appeared first on OUPblog.


When we use a computer, its performance seems to degrade progressively. This is not a mere impression. An old version of Firefox, the free Web browser, was infamous for its “memory leaks”: it would consume increasing amounts of memory to the detriment of other programs. Bugs in the software actually do slow down the system. We all know what the solution is: reboot. We restart the computer, the memory is reset, and the performance is restored, until the bugs slow it down again.

Philosophy is a bit like a computer with a memory leak. It starts well, dealing with significant and serious issues that matter to anyone. Yet, in time, its very success slows it down. Philosophy begins to care more about philosophers’ questions than philosophical ones, consuming increasing amounts of intellectual attention. Scholasticism is the ultimate freezing of the system, the equivalent of Windows’ “blue screen of death”; so many resources are devoted to internal issues that no external input can be processed anymore, and the system stops. The world may be undergoing a revolution, but the philosophical discourse remains detached and utterly oblivious. Time to reboot the system.

Philosophical “rebooting” moments are rare. They are usually prompted by major transformations in the surrounding reality. Since the nineties, I have been arguing that we are witnessing one of those moments. It now seems obvious, even to the most conservative person, that we are experiencing a turning point in our history. The information revolution is profoundly changing every aspect of our lives, quickly and relentlessly. The list is known but worth recalling: education and entertainment, communication and commerce, love and hate, politics and conflicts, culture and health, … feel free to add your preferred topics; they are all transformed by technologies that have the recording and processing of information as their core functions. Meanwhile, philosophy is degrading into self-referential discussions on irrelevancies.

The result of a philosophical rebooting today can only be beneficial. Digital technologies are not just tools merely modifying how we deal with the world, like the wheel or the engine. They are above all formatting systems, which increasingly affect how we understand the world, how we relate to it, how we see ourselves, and how we interact with each other.

The ‘Fourth Revolution’ betrays what I believe to be one of the topics that deserves our full intellectual attention today. The idea is quite simple. Three scientific revolutions have had great impact on how we see ourselves. In changing our understanding of the external world they also modified our self-understanding. After the Copernican revolution, the heliocentric cosmology displaced the Earth and hence humanity from the centre of the universe. The Darwinian revolution showed that all species of life have evolved over time from common ancestors through natural selection, thus displacing humanity from the centre of the biological kingdom. And following Freud, we acknowledge nowadays that the mind is also unconscious. So we are not immobile, at the centre of the universe, we are not unnaturally separate and diverse from the rest of the animal kingdom, and we are very far from being minds entirely transparent to ourselves. One may easily question the value of this classic picture. After all, Freud was the first to interpret these three revolutions as part of a single process of reassessment of human nature and his perspective was blatantly self-serving. But replace Freud with cognitive science or neuroscience, and we can still find the framework useful to explain our strong impression that something very significant and profound has recently happened to our self-understanding.

Since the fifties, computer science and digital technologies have been changing our conception of who we are. In many respects, we are discovering that we are not standalone entities, but rather interconnected informational agents, sharing with other biological agents and engineered artefacts a global environment ultimately made of information, the infosphere. If we need a champion for the fourth revolution this should definitely be Alan Turing.

The fourth revolution offers a historical opportunity to rethink our exceptionalism in at least two ways. Our intelligent behaviour is confronted by the smart behaviour of engineered artefacts, which can be adaptively more successful in the infosphere. Our free behaviour is confronted by the predictability and manipulability of our choices, and by the development of artificial autonomy. Digital technologies sometimes seem to know more about our wishes than we do. We need philosophy to make sense of the radical changes brought about by the information revolution. And we need it to be at its best, for the difficulties we are facing are challenging. Clearly, we need to reboot philosophy now.

Luciano Floridi is Professor of Philosophy and Ethics of Information at the University of Oxford, Senior Research Fellow at the Oxford Internet Institute, and Fellow of St Cross College, Oxford. He was recently appointed as ethics advisor to Google. His most recent book is The Fourth Revolution: How the Infosphere is Reshaping Human Reality.

Subscribe to the OUPblog via email or RSS.

Subscribe to only philosophy articles on the OUPblog via email or RSS.

*Image credit: Alan Turing Statue at Bletchley Park. By Ian Petticrew. CC-BY-SA-2.0 via Wikimedia Commons.*


The post Does the “serving-first advantage” actually exist? appeared first on OUPblog.


Suppose you are watching a tennis match between Novak Djokovic and Rafael Nadal. The commentator says: “Djokovic serves first in the set, so he has an advantage.” Why would this be the case? Perhaps because he is then ‘always’ one game ahead, thus serving under less pressure. But does it actually influence him and, if so, how?

Now we come to the seventh game, which some consider to be the most important game in the set. But is it? Nadal serves an ace at break point down (30-40). Of course! Real champions win the big points, but they win most points on service anyway. At first, it may appear that real champions outperform on big points, but it turns out that weaker players underperform, so that it only seems that the champions outperform. And Nadal goes on to win three consecutive games. He is in a winning mood, the momentum is on his side. But does a ‘winning mood’ actually exist in tennis? (*Spoiler*: It does, but it is smaller than many expect.)

To figure out whether the “serving-first advantage” actually exists, we can use data on more than one thousand sets played at Wimbledon in order to calculate how often the player who served first also won the set. This statistic shows that for the men there is a slight advantage in the first set, but no advantage in the other sets.

More than that, in the other sets there is actually a disadvantage: the player who serves first in the set is more likely to lose it than to win it. This is surprising. Perhaps it is different for the women? But no, the same pattern occurs in the women’s singles.

It so happens that the player who serves first in a set (if it is not the first set) is usually the weaker player. This is so, because (a) the stronger player is more likely to win the previous set, and (b) the previous set is more likely won by serving the set out rather than by breaking serve. Therefore, the stronger player typically wins the previous set on service, so that the weaker player serves first in the next set. The weaker player is more likely to lose the current set as well, not because of a service (dis)advantage, but because he or she is the weaker player.

This example shows that we must be careful when we try to draw conclusions based on simple statistics. The fact that the player who serves first in the second and subsequent sets often loses the set is true, but this primarily concerns weaker players, while the original hypothesis includes all players. Therefore, we must control for quality differences, and statistical models enable us to do that properly. It then becomes clear that there is no advantage or disadvantage for the player who serves first in the second or subsequent sets; but it does matter in the first set, so it is wise to elect to serve after winning the toss.

Franc Klaassen is Professor of International Economics at the University of Amsterdam. Jan R. Magnus is Emeritus Professor at Tilburg University and Visiting Professor of Econometrics at the Vrije Universiteit Amsterdam. They are the co-authors of Analyzing Wimbledon: The Power of Statistics.


*Image Credit: “Wimbledon Centre Court Panoramic: Rafael Nadal vs Del Potro” (2011) by Rian (Ree) Saunders. CC BY 2.0 via 58996719@N07 Flickr*


The post Statistics and big data appeared first on OUPblog.


Nowadays it appears impossible to open a newspaper or switch on the television without hearing about “big data”. Big data, it sometimes seems, will provide answers to all the world’s problems. Management consulting company McKinsey, for example, promises “a tremendous wave of innovation, productivity, and growth … all driven by big data”.

An alien observer visiting the Earth might think it represents a major scientific breakthrough. Google Trends shows references to the phrase bobbing along at about one per week until 2011, at which point there began a dramatic, steep, and almost linear increase in references to the phrase. It’s as if no one had thought of it until 2011. Which is odd because data mining, the technology of extracting valuable, useful, or interesting information from large data sets, has been around for some 20 years. And statistics, which lies at the heart of all of this, has been around as a formal discipline for a century or more.

Or perhaps it’s not so odd. If you look back to the beginning of data mining, you find a very similar media enthusiasm for the advances it was going to bring, the breakthroughs in understanding, the sudden discoveries, the deep insights. In fact, it almost looks as if we have been here before. All of this leads one to suspect that there’s less to the big data enthusiasm than meets the eye. That it’s not so much a sudden change in our technical abilities as a sudden media recognition of what data scientists, and especially statisticians, are capable of.

Of course, I’m not saying that the increasing size of data sets does not lead to promising new opportunities – though I would question whether it’s the “large” that really matters as much as the novelty of the data sets. The tremendous economic impact of GPS data (estimated to be $150-270bn per year), retail transaction data, or genomic and bioinformatics data arises not from the size of these data sets, but from the fact that they provide new kinds of information. And while it’s true that a massive mountain of data needed to be explored to detect the Higgs boson, the core aspect was the nature of the data rather than its amount.

Moreover, if I’m honest, I also have to admit that it’s not solely statistics which leads to the extraction of value from these massive data sets. Often it’s a combination of statistical inferential methods (e.g. determining an accurate geographical location from satellite signals) along with data manipulation algorithms for search, matching, sorting and so on. How these two aspects are balanced depends on the particular application. Locating a shop which stocks that out of print book is less of an inferential statistical problem and more of a search issue. Determining the riskiness of a company seeking a loan owes little to search but much to statistics.

Some time after the phrase “data mining” hit the media, it suffered a backlash. Predictably enough, much of this was based around privacy concerns. A paradigmatic illustration was the *Total Information Awareness* project in the United States. Its basic aim was to search for suspicious behaviour patterns within vast amounts of personal data, to identify individuals likely to commit crimes, especially terrorist offences. It included data on web browsing, credit card transactions, driving licences, court records, passport details, and so on. After concerns were raised, it was suspended in 2003 (though it is claimed that the software continued to be used by various agencies). As will be evident from recent events, concerns about the security agencies’ monitoring of the public continue.

The key question is whether proponents of the huge potential of big data and its allied notion of open data are learning from the past. Recent media concern in the UK about the merging of family doctor records with hospital records, which led to a six-month delay in the launch of the project, illustrates the danger. Properly informed debate about the promise and the risks is vital.

Technology is amoral — neither intrinsically moral nor immoral. Morality lies in the hands of those who wield it. This is as true of big data technology as it is of nuclear technology and biotechnology. It is abundantly clear — if only from the examples we have already seen — that massive data sets do hold substantial promise for enhancing the well-being of mankind, but we must be aware of the risks. A suitable balance must be struck.

It’s also important to note that the mere existence of huge data files is of itself of no benefit to anyone. For these data sets to be beneficial, it’s necessary to be able to use the data to build models, to estimate effect sizes, to determine if an observed effect should be regarded as mere chance variation, to be sure it’s not a data quality issue, and so on. That is, statistical skills are critical to making use of the big data resources. In just the same way that vast underground oil reserves were useless without the technology to turn them into motive power, so the vast collections of data are useless without the technology to analyse them. Or, as I sometimes put it, *people don’t want data, what they want are answers*. And statistics provides the tools for finding those answers.
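A minimal sketch of the kind of question those statistical skills answer: given two large groups of measurements, is an observed difference a real effect or mere chance variation? The data below are simulated purely for illustration (two groups whose true means differ by 0.05), using nothing beyond a mean, a variance, and a standard error.

```python
import math
import random

random.seed(1)
# simulated data: group B's true mean is 0.05 higher than group A's
group_a = [random.gauss(10.00, 1.0) for _ in range(50_000)]
group_b = [random.gauss(10.05, 1.0) for _ in range(50_000)]

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

effect = mean(group_b) - mean(group_a)            # estimated effect size
se = math.sqrt(variance(group_a) / len(group_a)
               + variance(group_b) / len(group_b))  # standard error of the difference
z = effect / se                                    # how many SEs from "no effect"

print(f"estimated effect: {effect:.3f}, z-score: {z:.1f}")
```

With samples this large the tiny 0.05 shift stands many standard errors clear of chance variation; the same shift in samples of a hundred would be indistinguishable from noise. Which is precisely why the size of a data set changes what can be learned from it, but only in the hands of someone who can do this kind of calculation.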

David J. Hand is Professor of Statistics at Imperial College, London, and author of Statistics: A Very Short Introduction.

The Very Short Introductions (VSI) series combines a small format with authoritative analysis and big ideas for hundreds of topic areas. Written by our expert authors, these books can change the way you think about the things that interest you and are the perfect introduction to subjects you previously knew nothing about. Grow your knowledge with OUPblog and the VSI series every Friday and like Very Short Introductions on Facebook. Subscribe to only Very Short Introductions articles on the OUPblog via email or RSS.


*Image credit: Diagram of Total Information Awareness system designed by the Information Awareness Office. Public domain via Wikimedia Commons*


The post The genesis of computer science appeared first on OUPblog.


Politically, socially, and culturally, the 1960s were tumultuous times. But tucked away amidst the folds of the Cold War, civil rights activism, anti-war demonstrations, the feminist movement, revolts of students and workers, flower power, sit-ins, Marxist and Maoist revolutions — almost unnoticed — a new science was born in university campuses across North America, Britain, Europe, and even, albeit tentatively, certain non-Western parts of the world. This new science acquired a name of its own: *computer science* (or some variations thereof: ‘computing science’, ‘informatique’, ‘informatik’).

At the heart of this new science was the process by which symbols, representing information, could be automatically (or with minimal human intervention) transformed into other symbols (representing other kinds or new information). This process was called, variously, *automatic computation*, *information processing*, or *symbol processing*. The agent of this process was the artifact named, generically, *computer*.

The computer is an *automaton*. In the past, this word, ‘automaton’ (coined in the 17th century), was used to mean an artifact which, largely driven by its own source of motive power, performs certain repetitive patterns of movement and action without any external influences. Often, these actions imitated those of humans and animals. Ingenious mechanical automata had been invented since antiquity, largely for the amusement of the wealthy, though some were of a more utilitarian nature (such as the water clock, said to be invented in the 1st century CE by the engineer/inventor Hero of Alexandria).

So mechanical automata that carry out physical actions of one sort or another form a venerable tradition. But the automatic electronic digital computer marked the birth of a whole new genus of automata, for this artifact was designed or intended to imitate human thinking; and, indeed, to extend or even replace humans in some of their highest cognitive capacities. Such was the power and scope of this artifact, it became the fount of a socio-technological revolution now commonly referred to as the Information Revolution, and a brand new science, computer science.

But computer science is not a *natural* science. It is not of the same kind as, say, physics, chemistry, biology, or astronomy. The gazes of these sciences are directed toward the natural world, inorganic and organic. The domain of computer science is the artificial world, the world of made objects, of artifacts — in particular, *computational artifacts*. Computer science is a *science of the artificial*, to use a term coined by Nobel laureate polymath scientist Herbert Simon.

A fundamental difference between a natural science like physics and an artificial science such as computer science relates to the age old philosophical distinction between *is* and *ought*. The natural scientist is concerned with the world *as it is*; she is not in the business of deliberately changing the natural world. Thus, the astronomer peering at the cosmos does not desire to change it but to understand it; the paleontologist examining rock layers in search of fossils is doing this to learn more about the history of life on earth, not to change the earth (or life) itself. For the natural scientist, understanding the natural world is an end in itself.

The scientist of the artificial also wishes to understand, not nature, but artifacts. However, that desire is a means to an end, for the scientist of the artificial, ultimately, wishes to *alter* the world in some respect. Thus the computer scientist wants to alter some aspect of the world by creating computational artifacts as improvements on existing ones, or by creating new computational artifacts that have never existed before. If the natural scientist is concerned with the world *as it is*, the computer scientist is obsessed with the world as she thinks *it ought to be*. For computer scientists, as for other scientists of the artificial (such as engineering scientists), the domain comprises artifacts that are intended to serve some purpose. An astronomer does not ask what a particular galaxy or planet is *for*; it just *is*. A computer scientist, striving to understand a particular computational artifact, begins with the purpose for which it was created. Artifacts are imbued with purpose, reflecting the purposes or goals imagined for them by their human creators.

So how was this science of the artificial called computer science born? Where, when, and how did it begin? Who were its creators? What kinds of purposes drove the birth of this science? What were its seminal ideas? What makes it distinct from other, more venerable, sciences of the artificial? Was the genesis of computer science evolutionary or revolutionary? A ‘big bang’ or a ‘steady state’ birth? These are the kinds of questions that interest historians of science peering into the origins of what is one of the youngest artificial sciences of the 20th century.

Subrata Dasgupta is the Computer Science Trust Fund Eminent Scholar Chair in the School of Computing & Informatics at the University of Louisiana at Lafayette, where he is also a professor in the Department of History. Dasgupta has written fourteen books, most recently It Began with Babbage: The Genesis of Computer Science.


*Image Credit: A reflection of a man typing on a laptop computer. Photo by Matthew Roth. CC-BY-SA-3.0 via Wikimedia Commons.*


The post Fractal shapes and the natural world appeared first on OUPblog.


Fractal shapes, as visualizations of mathematical equations, are astounding to look at. But fractals look even more amazing in their natural element—and that happens to be in more places than you might think.

Kenneth Falconer is a mathematician who specializes in fractal geometry and related topics. He is Professor of Pure Mathematics at the University of St Andrews and a member of the Analysis Research Group of the School of Mathematics and Statistics. Kenneth’s main research interests are in fractal and multifractal geometry, geometric measure theory, and related areas. He has published over 100 papers in mathematical journals. He is author of Fractals: A Very Short Introduction.


*Image credits:*


The post The *real* unsolved problems of mathematics appeared first on OUPblog.

With the arrival of the new year, you can be certain that the annual extravaganza known as the Joint Mathematics Meetings cannot be far behind. This year’s conference is taking place in Baltimore, Maryland. It is perhaps more accurate to say that it is a conference of conferences, since much of the business to be transacted will take place in smaller sessions devoted to this or that branch of mathematics. In these sessions, researchers at the cutting edge of the discipline will discuss the most recent developments on the biggest open problems in the business. It will all be terribly clever and largely impenetrable. You can be certain, however, that the real open questions of mathematics will barely be addressed.

It is hardly a secret that large conferences like this are as much about socialization as they are about research. This presents some problems, since the Joint Meetings can be a minefield of social awkwardness and ambiguous etiquette.

For example, imagine that you are walking across the lobby and you notice someone you know slightly coming the other way. Should you stop and chat? Or is a nod of acknowledgement sufficient? If you do stop, what sort of greeting is appropriate? A handshake? A hug? And how do you exit the conversation once the idle chit chat runs out? Sometimes you stop and chat, and then someone friendlier with the other person arrives to interrupt. One minute you’re making small talk about your recent job histories, and the next you’re just standing there watching your conversation partner make dinner plans with someone who just appeared. Now what do you do? Usually your only course is to mutter something about being late for a talk and then slink off with whatever dignity you can muster.

The exhibition center presents its own problems. How long can you stand in one place perusing a book before it becomes rude? Quite a while, apparently, if we are to judge from some of the stingier characters we inevitably meet. If the book is that interesting, just buy it and be done with it. Come to think of it, when you are standing there looking through books, what is the maximum allowable angle to which you can separate the covers? Cracking the spine is definitely frowned upon. How many Hershey’s miniatures can you reasonably pilfer from the MAA booth? Which book should you buy to burn up your AMS points? Let me suggest that the answer to that one depends on which book will look best on your shelf, since you know full well you are never going to read it.

Actually presenting a talk brings with it some challenges of its own. Perhaps you are giving a contributed talk, and you get the first slot after lunch. So it’s just you, the person speaking after you, and whoever drew the short straw for moderation duty. Do you acknowledge the lack of an audience? Or do you go through the motions like you’re keynoting? After giving your talk, is it acceptable simply to leave? Or are you ethically obligated to stay for the talk right after yours? What do you do if you notice an error in someone else’s talk? Should you expose it to the world during the question period, or just discuss it privately with the speaker afterward?

Perhaps we need a special session to discuss these questions. That, at least, would be a session where everyone could understand what was being said. On the other hand, given the occasionally strained relationship between mathematicians and social graces, perhaps I should not be so cavalier about that.

Jason Rosenhouse is Associate Professor of Mathematics at James Madison University. He is the author of Taking Sudoku Seriously: The Math Behind the World’s Most Popular Pencil Puzzle with Laura Taalman; The Monty Hall Problem: The Remarkable Story of Math’s Most Contentious Brain Teaser; and Among The Creationists: Dispatches from the Anti-Evolutionist Front Lines. Read Jason Rosenhouse’s previous blog articles.


*Image credit: Complex formulae on a blackboard. © JordiDelgado via iStockphoto. *


The post The legacy of the Superconducting Super Collider appeared first on OUPblog.


Almost exactly 20 years ago, on 19 October 1993, the US House of Representatives voted 264 to 159 to reject further financing for the Superconducting Super Collider (SSC), the particle accelerator being built under Texas. Two billion dollars had already been spent on the Collider, and its estimated total cost had grown from $4.4bn to $11bn; a budget saving of $9bn beckoned. Later that month President Clinton signed the bill officially terminating the project.

This was not good news for two of my Harvard roommates, PhD students in theoretical physics. Seeing the academic job market for physicists collapsing around them, they both found employment at a large investment bank in New York in the nascent field of quantitative finance. It was their assertion that derivative markets, whatever in fact they were, seemed mathematically challenging that catalyzed my own move to Wall Street from an academic career.

The cohort of PhDs in science, technology, engineering, and mathematics that moved to finance from academia in the early 1990s (a cohort I have called the “SSC generation”) sparked a remarkable growth in the sophistication and complexity of financial markets. They built models which enabled banks and hedge funds to price and trade complex financial instruments called derivatives, contracts whose value derives from the levels of other financial variables, such as the price of the Japanese Yen or a collection of mortgages on apartments in Florida. They created a new subject, known as financial engineering or quantitative finance, and a brand new career path, that of quantitative analyst (“quant”), a vocation that became so popular — for its monetary rewards certainly, but also for its dynamism and innovation — that by June 2008, 28% of graduating Harvard seniors going into full time employment were heading to finance.
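To give a flavour of what those pricing models look like, here is the canonical textbook starting point of quantitative finance: the Black-Scholes formula for a European call option. This is standard published material, not any bank's actual pricing system, and the spot price, rate, and volatility below are made-up inputs.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot, strike, t, rate, vol):
    """Black-Scholes price of a European call option: the value today of
    the right (but not the obligation) to buy the asset at `strike` in
    `t` years, assuming a constant interest rate and volatility."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d2)

# an at-the-money one-year call with 2% rates and 20% volatility
print(round(black_scholes_call(100.0, 100.0, 1.0, 0.02, 0.20), 2))
```

A dozen lines of mathematics like this, elaborated across thousands of instruments and risk factors, is the kind of edifice the SSC generation built.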

However, just as some investors in 2007-2008 were questioning the inexorable rise in house prices and the potential for a market bubble, so too were many students questioning their own career choices, sensing the possibility of a career bubble. As Harvard University President Drew Faust said in her first address to the senior class in June 2008, “You repeatedly asked me: Why are so many of us going to Wall Street?”

Three months later, both market and career bubbles collapsed as Lehman Brothers filed for bankruptcy. In the midst of the financial crisis, on 3 October 2008, the House of Representatives voted 263 to 171 to pass the Emergency Economic Stabilization Act, authorizing the Treasury secretary to spend $700bn — roughly 65 Super Colliders — to purchase distressed assets.

What went wrong? While the causes of the financial crisis have been widely debated, it is clear that many financial engineers were caught in what I have termed the “quant delusion,” an over-confidence in and over-reliance on mathematical models. The edifice of quantitative finance built over 15 years by the SSC generation was dramatically rocked by the events of 2008. Fundamental logical arguments that practitioners had taken for granted were shown not to hold. Decades of modeling advances were revealed to be invalid or thrown into question.

It is hard to prove a direct causal link between the cancellation of the SSC, the rise of financial engineering, and the chaos of 2008. However, if some roots of the financial crisis can be traced, however distantly, to October 1993, might one consequence of the financial crisis itself be a healthy reassessment of career choices amongst graduates?

I encounter evolving attitudes among students in the class that I teach at Harvard, Statistics 123, “Applied Quantitative Finance”. Many still plan a future on Wall Street, and are motivated by the mathematical challenges and dynamic environment ahead of them. Some are interested in the elegant mathematical and probabilistic theory that underlies derivatives markets, and are keen to understand the way of thinking that exists on Wall Street. Others appreciate that they have a broad range of equally compelling career options, whether in technology, life sciences, climate science, or fundamental research, and take my course simply because they have enjoyed their introduction to probability and want to experience one of its most compelling applications.

Stephen Blyth is Professor of the Practice of Statistics at Harvard University, and Managing Director at the Harvard Management Company. His book, An Introduction to Quantitative Finance, was published by Oxford University Press in November 2013.


*Image credit: graphs and charts. © studiocasper via iStockphoto. *


The post Let them eat theorems! appeared first on OUPblog.


“This is not maths – maths is about doing calculations, not proving theorems!” So wrote a disaffected student at the end of my recent pure maths lecture course. Theorems, along with their proofs, have gotten a bad name.

The first (and often only) theorem most people encounter is Pythagoras’ Theorem, discovered over 2500 years ago: if you square the lengths of the two perpendicular sides of a right-angled triangle and add these numbers together, then you get the square of the length of the third side. To many, the name Pythagoras conjures up memories of eccentric maths teachers enthusing over spiders’ webs of lines. Yet, if the writers of the software underlying your computer had not known their Pythagoras and other such theorems, you would not now be viewing this neatly aligned text or navigating around your screen at the touch of a mouse.
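The screen example can be made concrete: the straight-line distance between two points on a screen is exactly Pythagoras' Theorem applied to their horizontal and vertical offsets, which is how graphics software decides, for instance, whether a mouse click landed near a target.

```python
import math

def distance(p, q):
    """Straight-line distance between two screen points (x, y):
    the square root of the sum of the squared offsets, i.e. Pythagoras."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

# perpendicular offsets of 3 and 4 give a hypotenuse of 5
print(distance((0, 0), (3, 4)))
```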

A theorem is the name for an incontrovertible mathematical fact, a statement that is an unavoidable consequence of precisely defined terms or facts that have already been established. Pythagoras' Theorem follows inexorably from the notions of a straight line, a right angle, and length. A couple of hundred years later, Euclid formulated his theorems or ‘Propositions’ of geometry, which became the foundation of western mathematical education for the next 2000 years. My favourite is the Intersecting Chord Theorem: if you draw two intersecting straight lines across a circle and multiply together the lengths of the parts of the chords on either side of the intersection point, then you get the same answer for both chords (see diagram). This is a remarkable statement: there seems no obvious reason why it should be so. Yet it is an inevitable consequence of the definition of a circle. Sadly, learning the formal propositions of Euclid by rote, as they were often taught in the past, may have hidden their substance and elegance and turned off many budding mathematicians.
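The Intersecting Chord Theorem is easy to verify numerically. In this sketch (the function name, the unit circle, and the sample point are my own choices for illustration), a chord through an interior point of a circle is parametrised by its angle, and the product of the two segment lengths is computed; any two chords through the same point give the same product:

```python
import math

def chord_segment_product(r, p, theta):
    """For a circle of radius r centred at the origin and an interior
    point p = (px, py), draw the chord through p at angle theta and
    return the product of the two segment lengths either side of p."""
    px, py = p
    ux, uy = math.cos(theta), math.sin(theta)
    # Points on the chord are p + t*u; they lie on the circle when
    # t^2 + 2(p.u)t + (|p|^2 - r^2) = 0, a quadratic in t.
    b = 2 * (px * ux + py * uy)
    c = px**2 + py**2 - r**2
    disc = math.sqrt(b**2 - 4 * c)
    t1 = (-b + disc) / 2
    t2 = (-b - disc) / 2
    return abs(t1) * abs(t2)  # lengths of the two chord segments

# Two different chords through the same interior point:
p = (0.5, 0.3)
print(chord_segment_product(1.0, p, 0.2))
print(chord_segment_product(1.0, p, 1.9))
```

Both prints give the same value (here 0.66, which is r² minus the squared distance of the point from the centre), whatever angles are chosen.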

Many further geometrical theorems have been established since Euclid’s days, some with evocative names. The Ham Sandwich Theorem says that given three objects there is always a plane that simultaneously divides each object into two parts of equal volume; thus a sandwich can always be divided by a straight slice so that the bread, butter, and ham are all equally divided between the two portions. Then, according to the Hairy Ball Theorem, it is impossible to comb a sphere covered with hair or fur in such a way that the hairs lie down smoothly everywhere on the sphere. One consequence, perhaps reassuring at times of extreme weather, is that at any instant there is somewhere on the earth’s surface where there is no wind.

The Mandelbrot set has become an icon recognised by many with little or no mathematical knowledge but who have been fascinated by its intriguing beauty. The Fundamental Theorem of the Mandelbrot Set, as it is sometimes called, relates geometrical aspects of this extraordinarily complicated object to the simple formula *z*^{2} + *c*. The theorem was contained in the writings of Pierre Fatou and Gaston Julia back in 1919, but was virtually forgotten until in the mid-1970s Mandelbrot’s computer images revealed the set’s intricate detail. A picture can bring a theorem to life!
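The iteration behind that formula really is that simple. A hedged sketch (the escape-radius test with a finite iteration cap is the standard practical approximation; the cut-off values here are my own choices):

```python
def in_mandelbrot(c, max_iter=100):
    """Iterate z -> z*z + c starting from z = 0. If |z| ever exceeds 2,
    the orbit escapes to infinity and c is outside the set; if it stays
    bounded for max_iter steps, we treat c as (probably) inside."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(0))   # → True: the orbit is 0, 0, 0, ...
print(in_mandelbrot(-1))  # → True: the orbit cycles -1, 0, -1, 0, ...
print(in_mandelbrot(1))   # → False: 1, 2, 5, 26, ... escapes
```

The function also accepts complex values of c, and colouring points of the plane by how quickly their orbits escape produces the familiar images.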

Of course, not all theorems are about geometry. Some concern properties of numbers; perhaps the most famous is Fermat's Last Theorem: the equation *x*^{n} + *y*^{n} = *z*^{n} has no solutions in positive whole numbers when *n* is greater than 2. Fermat claimed a proof around 1637, but one was not actually found until Andrew Wiles completed his in the 1990s.
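Fermat's Last Theorem asserts that *x*^{n} + *y*^{n} = *z*^{n} has no solutions in positive whole numbers for *n* greater than 2. A brute-force search can illustrate, though of course never prove, the contrast with the case *n* = 2 (the search bounds below are arbitrary):

```python
def fermat_counterexamples(n, bound):
    """Brute-force search for positive integers x <= y <= bound with
    x**n + y**n equal to some z**n, z <= bound. For n > 2, Fermat's
    Last Theorem says the result is always empty; this search merely
    illustrates that, it proves nothing."""
    powers = {z**n: z for z in range(1, bound + 1)}
    hits = []
    for x in range(1, bound + 1):
        for y in range(x, bound + 1):
            z = powers.get(x**n + y**n)
            if z is not None:
                hits.append((x, y, z))
    return hits

print(fermat_counterexamples(2, 20)[0])  # → (3, 4, 5), a Pythagorean triple
print(fermat_counterexamples(3, 50))     # → []
```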

Theorems are the pillars of mathematics. New theorems, often building on the foundations of earlier ones, are continually being proved. Yes, some may be esoteric, but others have been fundamental in the development of things that we take for granted, such as Stokes’ Theorem for electronic communication and fluid flow. And, though I obviously failed to convince my student, they are the basis for many of the calculations undertaken daily by scientists and engineers.

Kenneth Falconer is author of Fractals: A Very Short Introduction and Fractal Geometry: Mathematical Foundations and Applications (Wiley, 2014). He has been Professor of Pure Mathematics at the University of St Andrews since 1993.

The Very Short Introductions (VSI) series combines a small format with authoritative analysis and big ideas for hundreds of topic areas. Written by our expert authors, these books can change the way you think about the things that interest you and are the perfect introduction to subjects you previously knew nothing about. Grow your knowledge with OUPblog and the VSI series every Friday and like Very Short Introductions on Facebook.

Subscribe to Very Short Introductions articles on the OUPblog via email or RSS.

*Image credits: 1) Figure drawn by author; 2) Image computed by Ben Falconer*

The post Let them eat theorems! appeared first on OUPblog.

The post Making sense with data visualisation appeared first on OUPblog.


Statistics to me has always been about trying to make the best sense of incomplete information, and about having some feeling for how good that ‘best sense’ is. At a very crude level, if you have a firm employing 235 people and you randomly sample 200 of them on some topic, I would feel my information was pretty good (even though it is incomplete). If my information is based on a sample of five people, or if I have asked all the people in just one office, then I would know it was nothing like as good as in the former case.
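That intuition can be made concrete with a small simulation. In this sketch (the figure of 94 opinion-holders, the seed, and the trial count are invented details for illustration), we repeatedly draw samples of 200 and of 5 from a firm of 235 and look at how widely the estimated proportion varies:

```python
import random

def estimate_spread(pop_size, true_count, sample_size, trials=2000, seed=0):
    """Repeatedly sample `sample_size` people without replacement from a
    firm of `pop_size`, of whom `true_count` hold some opinion, and
    return the smallest and largest estimated proportion seen."""
    rng = random.Random(seed)
    population = [1] * true_count + [0] * (pop_size - true_count)
    estimates = []
    for _ in range(trials):
        sample = rng.sample(population, sample_size)
        estimates.append(sum(sample) / sample_size)
    return min(estimates), max(estimates)

# A firm of 235 people, 94 of whom (40%) hold some opinion:
print(estimate_spread(235, 94, 200))  # a narrow range around 0.40
print(estimate_spread(235, 94, 5))    # estimates range almost anywhere
```

With 200 of 235 sampled, the estimates barely move from the true 40%; with five people, a single run can suggest anything from nobody to everybody holding the opinion.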

More than ever, in the current International Year of Statistics, there is an acceptance that understanding quantitative information is a necessary skill in almost any academic discipline and in almost all professional jobs (and in very many jobs at lower levels). Statistics is used in a wide range of contexts, such as the physical, life, and social sciences, sports, marketing, finance, geography, and psychology. In fact it is used anywhere there is interesting data, and with supporting visual explanations of what is happening in various statistical techniques, it need not be an intimidating area to be involved in.

I am currently doing some work at Durham on data visualisation, including on education performance data, the 2011 UK riots, and health. For example, interactive data resources show the proportions of pupils gaining five good GCSEs (with and without a requirement to include English and Maths), disaggregated by sex, ethnicity, and whether they are eligible for free school meals. The first screen shot shows boys' performance rates for various ethnic groups and how the effect of eligibility for free school meals varies across those groups. You can see the gap is very dramatic in both the White and Mixed groups, much more modest for the Asian, Black, and Other groups, and almost non-existent for Chinese pupils. The second screen shot shows how the display changes if the bottom slider is moved to change the performance measure to remove the requirement for English and Maths. The variables can be dragged and dropped to different positions to allow other comparisons to be made directly, and to develop a real sense of the stories in the data.

It would be much more logical if social scientists wanting to put forward theoretical explanations for inequalities in health, in education, in crime etc., were able to explore the data actively in an interface like this – to develop a rich picture of the relationships between factors, which are important and which less so, where particular combinations of factors give unexpected outcomes – and then to try to provide theory which is consistent with the observed patterns of behaviour.

Additionally, I have just started work on a new project on visualisations of 2011 UK Census data and, with the Imperial College Reach Out Lab, on supporting data sharing in science. Essentially there is a Practice Transfer Partnership of HE Reach Out Labs in which we are trying to develop experiments with more variables, which different institutions will collaborate on to build a large multivariate data set that students and teachers would then be able to access through our visualisation tools. The ambition is to tie more mathematics in with authentic scientific enquiry, so the collaboration between Science and Mathematics has real potential to make mathematics and statistics more directly and obviously relevant to students.

James Nicholson is the author of Statistics S1 and Statistics S2 in the A Level Mathematics for Edexcel course published by Oxford University Press. He is also Principal Research Fellow at the SMART Centre at Durham University.

*Image credit: Graphs created by James Nicholson. Used with permission. All rights reserved. *

The post Making sense with data visualisation appeared first on OUPblog.

The post Why launch a new journal? appeared first on OUPblog.

**Why have you decided to launch a new journal of survey research?**

Well, we thought the field of survey research needed a flagship journal and, fortunately for us, the two largest professional organizations for survey researchers — the American Association for Public Opinion Research (AAPOR) and the American Statistical Association (ASA) — shared our view. These organizations have agreed to sponsor the new journal. AAPOR will make the journal available to its more than 2,000 members as part of their annual dues — that is, at no added cost to them. And ASA will offer a similar deal to the 1,000+ members of its Survey Research Methods Section.

**Isn’t there a danger of journal overload? How did you make such considerations?**

Articles on survey statistics and methodology have traditionally been scattered across journals that focus primarily on statistics, sociology, political science, communications, epidemiology, demography, and a range of other disciplines. We thought it was time to have a journal that would focus only on survey statistics and methodology. Of course, there are now journals devoted mainly to survey topics, such as the *Journal of Official Statistics* and *Survey Methodology*. However, as valuable as these journals are, they are sponsored by government agencies and we believe that the flagship journal for the field should have the backing of the largest, most prestigious professional organizations for survey researchers. Hence, the new journal.

**How has the field changed in the last 25 years?**

The field has grown up. In the United States, three programs — at the University of Maryland, the University of Michigan, and the University of Nebraska — now offer doctoral degrees in survey methodology. There are also academic programs in survey methodology in the United Kingdom and elsewhere in Europe. In the United States alone, more than forty doctorates in survey methodology have been awarded. There are now textbooks covering every aspect of survey statistics and methodology. Survey statistics and methodology has become a fully-fledged discipline and we believe the time is ripe for it to have a journal that reflects that status.

**What are some of the latest developments in survey research?**

This may be a pivotal time for surveys. Survey costs are spiraling upward, response rates are falling, and many of the government agencies that sponsor surveys are likely to face serious budget cuts in the coming years. Moreover, partly in response to these problems, some researchers are giving up on probability sampling, a mainstay for survey research for the last sixty years. At the same time, everyone seems to want estimates based on survey data, often for ever-smaller areas or subgroups, and to make policy decisions based on these estimates.

Despite all these worrisome developments, surveys still seem to give accurate results. Whatever their problems, the polls were able to forecast the outcome of the 2012 elections with almost uncanny accuracy. Similarly, according to Census Bureau evaluations, the 2010 census may have been the most accurate census ever done.

**What do you hope to see in the coming years from both the field and the journal?**

We hope that authors will surprise us with articles describing good work in areas we had not anticipated, and we promise to be open to such work. Most of all, we hope that the journal becomes a fount of high-quality research in all areas of survey statistics and methodology.

Joseph Sedransk is Professor Emeritus of Statistics at Case Western Reserve University. Roger Tourangeau is a Vice President at Westat. Before going to Westat, he headed the Joint Program in Survey Methodology at the University of Maryland for nearly 10 years; during this time, he was also a Research Professor in the University of Michigan's Survey Research Center. Joseph Sedransk is the editor for statistical papers and Roger Tourangeau the editor for methodological papers for the new Journal of Survey Statistics and Methodology.

The Journal of Survey Statistics and Methodology, sponsored by AAPOR and the American Statistical Association, will begin publishing in 2013. Its objective is to publish cutting edge scholarly articles on statistical and methodological issues for sample surveys, censuses, administrative record systems, and other related data. It aims to be the flagship journal for research on survey statistics and methodology.

Subscribe to only social sciences articles on the OUPblog via email or RSS.

*Image credit: Check mark. Composición 3D. Mostrando un concepto de selección. Image by ricardoinfante, iStockphoto. *

The post Why launch a new journal? appeared first on OUPblog.

The post Tragedy of the science-communication commons appeared first on OUPblog.


There’s a prevailing notion that communicating science is difficult, and it is therefore difficult to engage the general public. People can be fazed by statistics in particular, so how can we convey the importance of this science effectively?

I’ve earlier written that science is science communication — that is, the act of communicating scientific ideas and findings to ourselves and others is itself a central part of science. My point was to push against a conventional separation between the act of science and the act of communication, the idea that science is done by scientists and communication is done by communicators. It’s a rare bit of science that does not include communication as part of it. As a scientist and science communicator myself, I’m particularly sensitive to the devaluing of communication. (For example, Bayesian Data Analysis is full of original research that was done in order to communicate; or, to put it another way, we often think we understand a scientific idea, but once we try to communicate it, we recognize gaps in our understanding that motivate further research.)

I once saw the following on one of those inspirational-sayings-for-every-day desk calendars: “To have ideas is to gather flowers. To think is to weave them into garlands.” Similarly, writing — more generally, communication to oneself or others — forces logic and structure, which are central to science.

Dan Kahan saw what I wrote and responded by flipping it around: He pointed out that there is a science of science communication. As scientists, we should move beyond the naive view of communication as the direct imparting of facts and ideas. We should think more systematically about how communications are produced and how they are understood by their immediate and secondary recipients.

The science of science communication is still in its early stages, and I’m glad that people such as Kahan are working on it. Here’s something he wrote recently explicating his theory of cultural cognition:

The motivation behind this research has been to understand the science communication problem. The “science communication problem” (as I use this phrase) refers to the failure of valid, compelling, widely available science to quiet public controversy over risk and other policy relevant facts to which it directly speaks. The climate change debate is a conspicuous example, but there are many others, including (historically) the conflict over nuclear power safety, the continuing debate over the risks of HPV vaccine, and the never-ending dispute over the efficacy of gun control…. The research I will describe reflects the premise that making sense of these peculiar packages of types of people and sets of factual beliefs is the key to understanding—and solving—the science communication problem. The cultural cognition thesis posits that people’s group commitments are integral to the mental processes through which they apprehend risk…

I think of Kahan as part of a loose network of constructive skeptics, along with various people including Thomas Basbøll, John Ioannidis, the guys at Retraction Watch, bloggers such as Felix Salmon, and a whole bunch of psychology researchers such as Wicherts, Wagenmakers, Simonsohn, Nosek, etc. This doesn’t represent a complete list but rather is intended to give a sense of the different aspect of this movement-without-a-name. Ten or twenty or thirty years ago, I don’t think such a movement existed. There were concerns about individual studies or research programs, but not such a sense of a statistics-centered crisis in science as a whole.

Andrew Gelman is a Professor in the Department of Statistics at Columbia University. He is the co-author of Teaching Statistics: A Bag of Tricks with Deborah Nolan. Read his blog Statistical Modeling, Causal Inference, and Social Science.

The post Tragedy of the science-communication commons appeared first on OUPblog.

The post Symmetry is transformation appeared first on OUPblog.


Symmetry has been recognised in art for millennia as a form of visual harmony and balance, but it has now become one of the great unifying principles of mathematics. A precise mathematical concept of symmetry emerged in the nineteenth century, as an unexpected side-effect of research into algebraic equations. Since then it has developed into a huge area of mathematics, with applications throughout the sciences.

Today we usually think of symmetry as a regularity of visual pattern—the sixfold symmetry of a snowflake, the circular symmetry of ripples on a pond, the spherical symmetry of a droplet of water or a planet. Here the role of symmetry is mainly descriptive. But there is a sense in which a natural *process* can also be symmetric, and the mathematics of symmetry can predict the results of that process, helping us to understand how nature’s patterns arise.

The key step towards a rigorous notion of symmetry arose not in geometry, but in algebra: attempts to solve quintic equations. The ancient Babylonians knew how to solve quadratic equations, and Renaissance Italian mathematicians discovered how to solve cubic and quartic equations, but here, everyone got stuck. Eventually, it turned out that no solution of the required kind exists for the general quintic equation.

The deep reason for this impossibility lies in the symmetries of the equation, which are the possible ways to permute its solutions while preserving all algebraic relations among them. When an equation has ‘the wrong kind of symmetry’ it can’t be solved by a formula of the traditional type. And equations of the fifth degree have the wrong kind of symmetry.

Mathematicians realised that symmetry is not a thing, but a *transformation*: a way to move or otherwise disturb something while—paradoxically—leaving it unchanged. For example, to a good approximation a human figure viewed in a mirror looks just like the original. Mixing up the roots of an equation doesn’t change suitable formulas in which they appear. Rotating a sphere through some angles produces an identical sphere.

The collection of all such transformations is called the symmetry group of the object; the structure of this group provides a powerful way to find out how the object behaves. The upshot of this discovery was a new, abstract branch of algebra: group theory.
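For a concrete instance, the symmetry group of a square can be enumerated by brute force. This sketch (my own illustration, not from the text) searches small integer matrices for the invertible linear maps that leave the square's vertex set unchanged:

```python
from itertools import product

# Vertices of a square centred at the origin.
square = {(1, 1), (1, -1), (-1, 1), (-1, -1)}

# Search all linear maps (x, y) -> (a*x + b*y, c*x + d*y) with small
# integer entries, keeping the invertible ones that map the vertex set
# onto itself -- these are exactly the symmetries of the square.
symmetries = []
for a, b, c, d in product((-1, 0, 1), repeat=4):
    if a * d - b * c == 0:
        continue  # not invertible, so not a symmetry
    image = {(a * x + b * y, c * x + d * y) for (x, y) in square}
    if image == square:
        symmetries.append((a, b, c, d))

print(len(symmetries))  # → 8: four rotations and four reflections
```

The eight transformations found are the identity, rotations through 90, 180, and 270 degrees, and four reflections: the dihedral group of the square.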

Groups turned out to be fundamental to the study of crystals; the form and behaviour of a crystal depends on the symmetry group of its atomic lattice. Groups are also vital to chemistry: the way a molecule vibrates depends on its symmetries. The symmetries of a uniformly flat desert determine the possible patterns of sand dunes when the flat pattern becomes unstable. The symmetries of biological tissue determine the possible patterns of animal markings, such as stripes and spots. The symmetries of a cloud of gas determine the spiral form of a galaxy. The symmetries of space and time underpin Einstein’s theories of special and general relativity. The symmetries of fundamental particles constrain quantum field theory and affect the possibilities for unifying it with relativity.

Symmetry is such a huge idea, with so many diverse ramifications, that only an encyclopaedia could really do it justice. But it is possible to sketch its origins, give some idea of how the formal theory works out, sample its applications, and witness its diversity and generality. Moreover, the subject has great visual beauty and appeal: here, for once, mathematics can be a spectator sport, and audience participation is not mandatory.

I have spent much of my research career working on connections between symmetries and nature’s patterns, in fluid flow, animal movement, visual perception, and evolutionary biology—and I am just one of many. The well is nowhere near running dry. New applications are constantly being found. Symmetry is one of the truly deep concepts, possessing both visual and logical beauty. Its effects can be seen everywhere, if you know how to look.

Ian Stewart is Emeritus Professor of Mathematics at Warwick University. He is a well-established communicator of mathematics, and the author of over 80 books, including several on the subject of symmetry, such as Symmetry: A Very Short Introduction. His summary of the problems of mathematics, From Here to Infinity, and collections of his columns from Scientific American (How to Cut a Cake, Cows in the Maze) have been very successful, and his recent book Professor Stewart's Cabinet of Mathematical Curiosities has been a bestseller.

*Image credits: Symmetrical landscape, By Johann Jaritz (Own work), Creative Commons Licence via Wikimedia Commons*

The post Symmetry is transformation appeared first on OUPblog.

The post Memories of undergraduate mathematics appeared first on OUPblog.


Two contrasting experiences stick in mind from my first year at university.

First, I spent a lot of time in lectures that I did not understand. I don’t mean lectures in which I got the general gist but didn’t quite follow the technical details. I mean lectures in which I understood not one thing from the beginning to the end. I still went to all the lectures and wrote everything down – I was a dutiful sort of student – but this was hardly the ideal learning experience.

Second, at the end of the year, I was awarded first class marks. The best thing about this was that later that evening, a friend came up to me in the bar and said, “Hey Lara, I hear you got a first!” and I was rapidly surrounded by other friends offering enthusiastic congratulations. This was a revelation. I had attended the kind of school at which students who did well were derided rather than congratulated. I was delighted to find myself in a place where success was celebrated.

Looking back, I think that the interesting thing about these two experiences is the relationship between the two. How could I have done so well when I understood so little of so many lectures?

I don’t think that there was a problem with me. I didn’t come out at the very top, but obviously I had the ability and dedication to get to grips with the mathematics. Nor do I think that there was a problem with the lecturers. Like the vast majority of the mathematicians I have met since, my lecturers cared about their courses and put considerable effort into giving a logically coherent presentation. Not all were natural entertainers, but there was nothing fundamentally wrong with their teaching.

I now think that the problems were more subtle, and related to two issues in particular.

First, there was a communication gap: the lecturers and I did not understand mathematics in the same way. Mathematicians understand mathematics as a network of axioms, definitions, examples, algorithms, theorems, proofs, and applications. They present and explain these, hoping that students will appreciate the logic of the ideas and will think about the ways in which they can be combined. I didn’t really know how to learn effectively from lectures on abstract material, and research indicates that I was pretty typical in this respect.

Students arrive at university with a set of expectations about what it means to ‘do mathematics’ – about what kind of information teachers will provide and about what students are supposed to do with it. Some of these expectations work well at school but not at university. Many students need to learn, for instance, to treat definitions as stipulative rather than descriptive, to generate and check their own examples, to interpret logical language in a strict, mathematical way rather than a more flexible, context-influenced way, and to infer logical relationships within and across mathematical proofs. These things are expected, but often they are not explicitly taught.

My second problem was that I didn’t have very good study skills. I wasn’t terrible – I wasn’t lazy, or arrogant, or easily distracted, or unwilling to put in the hours. But I wasn’t very effective in deciding how to spend my study time. In fact, I don’t remember making many conscious decisions about it at all. I would try a question, find it difficult, stare out of the window, become worried, attempt to study some section of my lecture notes instead, fail at that too, and end up discouraged. Again, many students are like this. I have met a few who probably should have postponed university until they were ready to exercise some self-discipline, but most do want to learn.

What they lack is a set of strategies for managing their learning – for deciding how to distribute their time when no-one is checking what they’ve done from one class to the next, and for maintaining momentum when things get difficult. Many could improve their effectiveness by doing simple things like systematically prioritizing study tasks, and developing a routine in which they study particular subjects in particular gaps between lectures. Again, the responsibility for learning these skills lies primarily with the student.

Personally, I never got to a point where I understood every lecture. But I learned how to make sense of abstract material, I developed strategies for studying effectively, and I maintained my first class marks. What I would now say to current students is this: take charge. Find out what lecturers and tutors are expecting, and take opportunities to learn about good study habits. Students who do that should find, like I did, that undergraduate mathematics is challenging, but a pleasure to learn.

Lara Alcock is a Senior Lecturer in the Mathematics Education Centre at Loughborough University. She has taught both mathematics and mathematics education to undergraduates and postgraduates in the UK and the US. She conducts research on the ways in which undergraduates and mathematicians learn and think about mathematics, and she was recently awarded the Selden Prize for Research in Undergraduate Mathematics Education. She is the author of How to Study for a Mathematics Degree (2012, UK) and How to Study as a Mathematics Major (2013, US).

Subscribe to only education articles on the OUPblog via email or RSS.

*Image credit: Screenshot of Oxford English Dictionary definition of mathematics, n., via OED Online. All rights reserved.*

The post Memories of undergraduate mathematics appeared first on OUPblog.

The post The map she carried appeared first on OUPblog.


In the heyday of the British Empire, Britain’s second most-widely-read book, after the Bible, was: (a) *Richard III* (b) *Robinson Crusoe* (c) *The Elements* (d) *Beowulf* ? Why do I ask?

“Since late medieval or early modern time,” Michael Walzer writes in *Exodus and Revolution*, “there has existed in the West a characteristic way of thinking about political change, a pattern that we commonly impose upon events, a story that we repeat to one another. The story has roughly this form: oppression, liberation, social contract, political struggle, new society…. Because of the centrality of the Bible in Western thought and the endless repetition of the story, the pattern has been etched deeply into our political culture. It isn’t only the case that events fall, almost naturally, into an Exodus shape; we work actively to give them that shape.”

The second-most-widely-read book plays that role in Western thought too: (c) *The Elements* by Euclid. Since late medieval or early modern time, there has existed in the West a characteristic way of organizing knowledge, a pattern that we commonly impose upon observations, concepts, and ideas, a pattern we teach our children. Because of the centrality of Euclid in Western education and the endless repetition of his axioms, definitions, theorems and proofs, the pattern has been etched deeply into our intellectual culture. It isn’t only the case that knowledge falls, almost naturally, into a Euclidean shape; we work actively to give it that shape.

Euclid was the geometry of the medieval university and the bedrock of European education for centuries. It wasn’t just about the triangles; Euclid sharpened your mind, trained your logic. His clever proofs were the very model of argument. To master Euclid was to master the world, the world around you and beyond. “Nature and Nature’s laws lay hid in night; God said, Let Newton be! and all was light.” And what did Newton’s lamp look like? See for yourself in the *Principia Mathematica*. “All human knowledge begins with intuitions,” said Kant, “proceeds from there to concepts, and ends with ideas.” Where do you think he got that? Euclideana even permeates our politics, but for this blog I’ll stick to science.

Non-Euclidean geometries put an end to that? No, they didn’t. Non-Euclidean geometries substituted one axiom for another, but they kept Euclid’s vision of organized knowledge, his faith in deductive reasoning. Non-Euclidean geometry is as Euclidean as Euclid’s! So is the new, improved axiom set David Hilbert proposed for geometry in the 19th century. (It turned out that Euclid’s wasn’t perfect.) So is the quixotic Russell-Whitehead program, in the early 20th century, to reduce mathematics to logic. Modern mathematics is consciously Euclidean to the core. In 1900, in a still-influential address, David Hilbert proposed rewriting Newton for modern physics along this vision of organized knowledge.

Born in 1894, Dorothy Wrinch grew up in a London suburb. She aced the mathematics program at Cambridge University and then studied logic with Bertrand Russell. The naturalist D’Arcy Thompson was another mentor and friend; his *Growth and Form* was her bible. Tugged by philosophy, mathematics, and biology for a decade, she cast her lot with biology, determined to unravel it through the powerful lens of logic. The model of protein architecture she came up with catalyzed protein chemists despite or because of its weaknesses. Why?

With this map to guide her, she found what she was looking for. “A number of new sciences have passed from the embryonic stage,” she wrote in 1934. “Discarding description as their ultimate purpose, they are now ready to take their places in the world state of science. The thesis which I wish now to develop is but a logical consequence of the thorough-going application of this principle.” Her protein model was one such consequence.

Biology ripe for logic? Some natives were not amused. (Or they were.) “Her idea of science is completely different from theirs,” as Linus Pauling put it. You betcha!

Euclid fell from his curricular throne and the British Empire collapsed at about the same time. Quantum mechanics scotched Hilbert’s program and Gödel scotched Russell’s. Biology has resisted Euclid too. Though the structures of thousands of proteins are now known in exact detail, their inner logic remains where Dorothy left it, the brass ring on the Nobel carousel.

Marjorie Senechal is the Louise Wolff Kahn Professor Emerita in Mathematics and History of Science and Technology, Smith College, author of I Died for Beauty: Dorothy Wrinch and the Cultures of Science, and Editor-in-Chief of The Mathematical Intelligencer. At the Joint Mathematics Meetings, AMS-MAA Special Session on the History of Mathematics, II, Room 9, Upper Level, San Diego Convention Center, she is speaking at 5:00 p.m. on Saturday, 12 January, on Biogeometry, 1941.

Subscribe to the OUPblog via email or RSS.

Subscribe to only articles about mathematics on the OUPblog via email or RSS.

The post The map she carried appeared first on OUPblog.

The post Teaching algorithmic problem-solving with puzzles and games appeared first on OUPblog.


In the last few years algorithmic thinking has become something of a buzzword among computer science educators, and with some justice: the ubiquity of computers in today’s world makes algorithmic thinking a very important skill for almost any student. Although at present few colleges and universities require non-computer-science majors to take a course exposing them to the important issues and methods of algorithmic problem solving, one should expect the number of such schools to grow significantly in the near future.

Algorithmic puzzles, i.e., puzzles that involve clearly defined procedures for solving problems, provide an ideal vehicle to introduce students to major ideas and methods of algorithmic problem solving:

- Algorithmic puzzles force students to think about algorithms on a more abstract level, divorced from programming and computer language minutiae. In fact, puzzles can be used to illustrate major strategies of the design and analysis of algorithms without any computer programming — an important point, especially for courses targeting non-CS majors.
- Solving puzzles helps in developing creativity and problem-solving skills — the qualities any student should strive to acquire.
- Puzzles are fun, and students are usually willing to put more effort into solving them than into doing routine exercises.
- Puzzles provide attractive topics for student research because many of them don’t require an extensive mathematical or computing background.

It’s important to stress that algorithmic puzzles are a serious topic. A few algorithmic puzzles, such as Fibonacci’s Rabbits and the Königsberg Bridges, played an important role in the history of mathematics. Such well-known and intriguing problems as the Traveling Salesman and the Knapsack Problem, which clearly have a puzzle flavor, lie at the heart of the *P* ≠ *NP* conjecture, the most important open question in modern computer science and mathematics.

So reader, I would like to challenge you to an algorithmic puzzle, *#136, “Catching a Spy”*:

In a computer game, a spy is located on a one-dimensional line. At time 0, the spy is at location *a*. With each time interval, the spy moves *b* units to the right if *b* ≥ 0 and |*b*| units to the left if *b* < 0. *a* and *b* are fixed integers, but they are unknown to you. Your goal is to identify the spy’s location by asking at each time interval (starting at time 0) whether the spy is currently at some location of your choosing. For example, you can ask whether the spy is currently at location 19, to which you will receive a truthful yes/no answer. If the answer is “yes,” you reach your goal; if the answer is “no,” you can ask the next time whether the spy is at the same or another location of your choice. Devise an algorithm that will find the spy after a finite number of questions.

Leave the answer in the comments below.
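(For readers who want to check an answer afterwards — spoiler ahead. One standard line of attack, not necessarily the book’s intended solution, is to enumerate all candidate parameter pairs (*a*, *b*) and, at time *t*, ask about the position that the *t*-th candidate pair would predict. Since every integer pair is eventually enumerated, the true pair is tested at some finite time, at which point the predicted and actual positions coincide. A minimal Python simulation of this strategy; the function names are mine:)

```python
from itertools import count, islice

def pairs():
    """Enumerate every integer pair (a, b), spiralling outward from (0, 0)."""
    yield (0, 0)
    for n in count(1):
        for a in range(-n, n + 1):
            for b in range(-n, n + 1):
                if max(abs(a), abs(b)) == n:   # only the new "ring" of pairs
                    yield (a, b)

def catch_spy(a, b, limit=1_000_000):
    """Simulate the questioning strategy against true parameters (a, b).

    At time t we ask whether the spy is at the position predicted by the
    t-th candidate pair; returns the time at which the answer is "yes".
    """
    for t, (ga, gb) in enumerate(islice(pairs(), limit)):
        if ga + gb * t == a + b * t:   # truthful answer to our question at time t
            return t
    return None

if __name__ == "__main__":
    print(catch_spy(0, 0))     # the very first question succeeds
    print(catch_spy(42, -17))  # found after finitely many questions
```

The spy is caught no later than the time at which the true pair (*a*, *b*) first appears in the enumeration, so the number of questions is finite for any fixed integers *a* and *b*.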

Anany Levitin is a professor of Computing Sciences at Villanova University. He is the co-author of Algorithmic Puzzles with Maria Levitin, and the author of Introduction to the Design and Analysis of Algorithms, Third edition, a popular textbook on the design and analysis of algorithms, which has been translated into Chinese, Greek, Korean, and Russian. He has also published papers on mathematical optimization theory, software engineering, data management, algorithm design, and computer science education.


*Image credit: Leonardo da Pisa, Liber abbaci, Ms. Biblioteca Nazionale di Firenze, Codice magliabechiano cs cI, 2626, fol. 124r. Source: Heinz Lüneburg, Leonardi Pisani Liber Abbaci oder Lesevergnügen eines Mathematikers, 2. überarb. und erw. Ausg., Mannheim et al.: BI Wissenschaftsverlag, 1993. Public domain via Wikimedia Commons.*

The post Teaching algorithmic problem-solving with puzzles and games appeared first on OUPblog.

The post What do mathematicians do? appeared first on OUPblog.


In 1866, the British mathematician John Venn wrote, in reference to the branch of mathematics known as probability theory: “To many persons the mention of Probability suggests little else than the notion of a set of rules, very ingenious and profound rules no doubt, with which mathematicians amuse themselves by setting and solving puzzles.” I suspect many of my students would extend Venn’s quip to the entirety of mathematics. Often they seem to believe, upon entering my classroom for the first time, that a tacit agreement exists between us. They will dutifully memorize whatever rules I give them and apply them with machine-like accuracy at test time, but to expect anything beyond that is considered a serious breach of etiquette.

I held such views myself, once upon a time. That is why my first visit to the annual Joint Mathematics Meetings, as an undergraduate student in the early nineties, was such an eye-opening experience. This is the largest mathematics conference of the year, held every January in a different city. Almost two decades later, I am still consistently amazed by the sheer variety of things that mathematicians study. Browsing through the program for this year’s edition, which is being held in San Diego, I notice that there are sessions on complex dynamics and celestial mechanics. Continued fractions get their own session, as do coverings of the integers, and frontiers in geomathematics. Financial mathematics gets a session. So does graph theory, and also the history of mathematics. If you prefer, you can go in for the real jawbreakers. They have titles like, “Advances in General Optimization and Global Optimality Conditions for Multiobjective Fractional Programming Based on Generalized Invexity.” For me, reading the program is like listening to opera. I may not understand all the words, but it sure sounds good!

This conference is called the Joint Mathematics Meetings, because it is held jointly between the two major mathematics organizations in the United States: The American Mathematical Society (AMS) and the Mathematical Association of America (MAA). The AMS generally concerns itself with the profession of mathematics and publishes several highly prestigious research journals. The MAA, by contrast, generally focuses on the educational aspects of mathematics. The sessions I listed above are directed towards researchers and are organized by the AMS. MAA sessions tend to have gentler titles. This year they are hosting a session on the beauty and power of number theory; another one on writing, talking, and sharing mathematics; still another on mathematics in industry; and, my personal favorite, a session called, “Where Have All the Zeros Gone?”

The sessions, however, are only the tip of the iceberg. There are also keynote talks featuring the alphas of our profession. In my experience, the main purpose of these talks is to remind you that, your PhD notwithstanding, there are mathematicians out there who are way smarter than you are. There is also the employment center, populated by eager job-seekers who stand out clearly from the other conference attendees, because they are well-dressed. There is also the exhibition center, in which every mathematical publisher on the planet shows off its latest books. For an impulse buyer like me, this is a dangerous place.

Which brings me back to the John Venn quote with which I started and the question at the top of this essay. Yes, I suppose we do spend a lot of time setting and solving puzzles. We dutifully apply the rules of proper inference to the abstract objects that have caught our fancy, thereby producing publishable theorems. That, however, is really a very small part of what mathematicians do.

You see, more than anything else, to be a mathematician is to be part of a community. Whatever else it is, mathematics is a social activity undertaken by human beings to further human goals and purposes. The main point of the conference is not to transact mathematical business, though that is certainly important. Rather, the point is to socialize, to renew old friendships, and to engage in casual conversations. The point is to remind you that mathematics is not about ivory tower theorizing, but about being part of a community that is united by its love for, and its belief in the importance of, mathematics. This applies whether your focus is on pure mathematics or applied mathematics. It does not matter whether you prefer teaching, research or community outreach. It includes elementary school teachers showing grade-schoolers the mechanics of basic arithmetic, high school teachers giving students their first taste of higher-level math, and graduate school professors at the frontiers of modern research. It also includes the students who will form the next generation not just of professional mathematicians, but of mathematically informed lay people as well.

All are part of the same community, and all are essential to the continued health of our discipline.

Jason Rosenhouse is Associate Professor of Mathematics at James Madison University. His most recent book is Among The Creationists: Dispatches from the Anti-Evolutionist Front Lines. He is also the author of Taking Sudoku Seriously: The Math Behind the World’s Most Popular Pencil Puzzle with Laura Taalman and The Monty Hall Problem: The Remarkable Story of Math’s Most Contentious Brain Teaser. Read Jason Rosenhouse’s previous blog articles.


*Image credit: John Venn. Public domain via Wikimedia Commons.*

The post What do mathematicians do? appeared first on OUPblog.

The post Celebrating Newton, 325 years after Principia appeared first on OUPblog.


This year, 2012, marks the 325th anniversary of the first publication of the legendary *Principia* (*Mathematical Principles of Natural Philosophy*), the 500-page book in which Sir Isaac Newton presented the world with his theory of gravity. It was the first comprehensive scientific theory in history, and it’s withstood the test of time over the past three centuries.

Unfortunately, this superb legacy is often overshadowed, not just by Einstein’s achievement but also by Newton’s own secret obsession with Biblical prophecies and alchemy. Given these preoccupations, it’s reasonable to wonder if he was quite the modern scientific guru his legend suggests, but personally I’m all for celebrating him as one of the greatest geniuses ever. Although his private obsessions were excessive even for the seventeenth century, he was well aware that in eschewing metaphysical, alchemical, and mystical speculation in his *Principia*, he was creating a new way of thinking about the fundamental principles underlying the natural world. To paraphrase Newton himself, he changed the emphasis from metaphysics and mechanism to experiment and mathematical analogy. His method has proved astonishingly fruitful, but initially it was quite controversial.

He had developed his theory of gravity to explain the cause of the mysterious motion of the planets through the sky: in a nutshell, he derived a formula for the force needed to keep a planet moving in its observed elliptical orbit, and he connected this force with everyday gravity through the experimentally derived mathematics of falling motion. Ironically (in hindsight), some of his greatest peers, like Leibniz and Huygens, dismissed the theory of gravity as “mystical” because it was “too mathematical.” As far as they were concerned, the law of gravity may have been brilliant, but it didn’t explain how an invisible gravitational force could reach all the way from the sun to the earth without any apparent material mechanism. Consequently, they favoured the mainstream Cartesian “theory”, which held that the universe was filled with an invisible substance called *ether*, whose material nature was completely unknown, but which somehow formed into great swirling whirlpools that physically dragged the planets in their orbits.
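That connection between falling bodies and celestial motion can be checked with a quick back-of-the-envelope computation, often called Newton’s “moon test”. (The script below is my own illustration with modern values, not Newton’s working.) The Moon sits roughly 60 Earth radii away, so if everyday gravity falls off as the inverse square of distance, the Moon’s centripetal acceleration should be about g/60²:

```python
import math

# Modern approximate values (assumed figures, not from the blog post)
g = 9.81                              # surface gravity, m/s^2
earth_radius = 6.371e6                # m
moon_distance = 60.3 * earth_radius   # ~3.84e8 m, about 60 Earth radii
sidereal_month = 27.32 * 86400        # Moon's orbital period, s

# Centripetal acceleration of a body in a (near-)circular orbit: 4*pi^2*r / T^2
a_moon = 4 * math.pi**2 * moon_distance / sidereal_month**2

# Inverse-square prediction: everyday gravity diluted by (60.3)^2
a_predicted = g / 60.3**2

print(f"from the Moon's orbit : {a_moon:.5f} m/s^2")
print(f"inverse-square guess  : {a_predicted:.5f} m/s^2")
print(f"ratio                 : {a_moon / a_predicted:.3f}")
```

The two numbers agree to within about one percent, which is exactly the kind of quantitative link between terrestrial and celestial mechanics that made the theory so persuasive.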

The only evidence for this vortex “theory” was the physical fact of planetary motion, but this fact alone could lead to any number of causal hypotheses. By contrast, Newton explained the mystery of planetary motion in terms of a known physical phenomenon, gravity; he didn’t need to postulate the existence of fanciful ethereal whirlpools. As for the question of how gravity itself worked, Newton recognized this was beyond his scope — a challenge for posterity — but he knew that for the task at hand (explaining why the planets move) “it is enough that gravity really exists and acts according to the laws that we have set forth and is sufficient to explain all the motions of the heavenly bodies…”

What’s more, he found a way of testing his theory by using his formula for gravitational force to make quantitative predictions. For instance, he realized that comets were not random, unpredictable phenomena (which the superstitious had feared as fiery warnings from God), but small celestial bodies following well-defined orbits like the planets. His friend Halley famously used the theory of gravity to predict the date of return of the comet now named after him. As it turned out, Halley’s prediction was fairly good, although Clairaut — working half a century later but just before the predicted return of Halley’s comet — used more sophisticated mathematics to apply Newton’s laws to make an even more accurate prediction.

Clairaut’s calculations illustrate the fact that despite the phenomenal depth and breadth of *Principia*, it took a further century of effort by scores of mathematicians and physicists to build on Newton’s work and to create modern “Newtonian” physics in the form we know it today. But Newton had created the blueprint for this science, and its novelty can be seen from the fact that some of his most capable peers missed the point. After all, he had begun the radical process of transforming “natural philosophy” into theoretical physics — a transformation from traditional qualitative philosophical speculation about possible causes of physical phenomena, to a quantitative study of experimentally observed physical effects. (From this experimental study, mathematical propositions are deduced and then made general by induction, as he explained in *Principia*.)

Even the secular nature of Newton’s work was controversial (and under apparent pressure from critics, he did add a brief mention of God in an appendix to later editions of *Principia*). Although Leibniz was a brilliant philosopher (and he was also the co-inventor, with Newton, of calculus), one of his stated reasons for believing in the ether rather than the Newtonian vacuum was that God would show his omnipotence by creating something, like the ether, rather than leaving vast amounts of nothing. (At the quantum level, perhaps his conclusion, if not his reasoning, was right.) He also invoked God to reject Newton’s inspired (and correct) argument that gravitational interactions between the various planets themselves would eventually cause noticeable distortions in their orbits around the sun; Leibniz claimed God would have had the foresight to give the planets perfect, unchanging perpetual motion. But he was on much firmer ground when he questioned Newton’s (reluctant) assumption of absolute rather than relative motion, although it would take Einstein to come up with a relativistic theory of gravity.

Einstein’s theory is even more accurate than Newton’s, especially on a cosmic scale, but within its own terms — that is, describing the workings of our solar system (including, nowadays, the motion of our own satellites) — Newton’s law of gravity is accurate to within one part in ten million. As for his method of making scientific theories, it was so profound that it underlies all the theoretical physics that has followed over the past three centuries. It’s amazing: one of the most religious, most mystical men of his age put his personal beliefs aside and created the quintessential blueprint for our modern way of doing science in the most objective, detached way possible. Einstein agreed; he wrote a moving tribute in the London *Times* in 1919, shortly after astronomers had provided the first experimental confirmation of his theory of general relativity:

“Let no-one suppose, however, that the mighty work of Newton can really be superseded by [relativity] or any other theory. His great and lucid ideas will retain their unique significance for all time as the foundation of our modern conceptual structure in the sphere of [theoretical physics].”

Robyn Arianrhod is an Honorary Research Associate in the School of Mathematical Sciences at Monash University. She is the author of Seduced by Logic: Émilie Du Châtelet, Mary Somerville and the Newtonian Revolution and Einstein’s Heroes. Read her previous blog posts.


The post Celebrating Newton, 325 years after Principia appeared first on OUPblog.

The post What sort of science do we want? appeared first on OUPblog.


29 November 2012 is the 140th anniversary of the death of mathematician Mary Somerville, the nineteenth century’s “Queen of Science”. Several years after her death, Oxford University’s Somerville College was named in her honor — a poignant tribute because Mary Somerville had been completely self-taught. In 1868, when she was 87, she had signed J. S. Mill’s (unsuccessful) petition for female suffrage, but I think she’d be astonished that we’re still debating “the woman question” in science. Physics, in particular — a subject she loved, especially mathematical physics — is still a very male-dominated discipline, and men as well as women are concerned about it.

Of course, science today is far more complex than it was in Somerville’s time, and for the past forty years feminist critics have been wondering if it’s the kind of science that women actually want; physics, in particular, has improved the lives of millions of people over the past 300 years, but it’s also created technologies and weapons that have caused massive human, social and environmental destruction. So I’d like to revisit an old debate: are science’s obstacles for women simply a matter of managing its applications in a more “female-friendly” way, or is there something about its exclusively male origins that has made science itself sexist?

To manage science in a more female-friendly way, it would be interesting to know if there’s any substance behind gender stereotypes such as that women prefer to solve immediate human problems, and are less interested than men in detached, increasingly expensive fundamental research, and in military and technological applications. Either way, though, it’s self-evident that women should have more say in how science is applied and funded, which means it’s important to have more women in decision-making positions — something we’re still far from achieving.

But could the scientific paradigm itself be alienating to women? Mary Somerville didn’t think so, but it’s often argued (most recently by some eco-feminist and post-colonial critics) that the seventeenth-century Scientific Revolution, which formed the template for modern science, was constructed by European men, and that consequently, the scientific method reflects a white, male way of thinking that inherently preferences white men’s interests and abilities over those of women and non-Westerners. It’s a problematic argument, but justification for it has included an important critique of reductionism — namely, that Western male experimental scientists have traditionally studied physical systems, plants, and even human bodies by dissecting them, studying their components separately and losing sight of the whole system or organism.

The limits of the reductionist philosophy were famously highlighted in biologist Rachel Carson’s book, *Silent Spring*, which showed that the post-War boom in chemical pest control didn’t take account of the whole food chain, of which insects are merely a part. Other dramatic illustrations are climate change, and medical disasters like the thalidomide tragedy: clearly, it’s no longer enough to focus selectively on specific problems such as the action of a drug on a particular symptom, or the local effectiveness of specific technologies; instead, scientists must consider the effect of a drug or medical procedure on the whole person, whilst new technological inventions shouldn’t be separated from their wider social and environmental ramifications.

In its proper place, however, reductionism in basic scientific research is important. (The recent infamous comment by American Republican Senate nominee Todd Akin — that women can “shut down” their bodies during a “legitimate rape”, in order not to become pregnant — illustrates the need for a basic understanding of how the various parts of the human body work.) I’m not sure if this kind of reductionism is a particularly male or particularly Western way of thinking, but either way there’s much more to the scientific method than this; it’s about developing testable hypotheses from observations (reductionist or holistic), and then testing those hypotheses in as objective a way as possible. The key thing in observing the world is curiosity, and this is a human trait, discernible in all children, regardless of race or gender. Of course, girls have traditionally faced more cultural restraints than boys, so perhaps we still need to encourage girls to be actively curious about the world around them. (For instance, it’s often suggested that women prefer biology to physics because they want to help people — and yet, many of the recent successes in medical and biological science would have been impossible without the technology provided by fundamental, curiosity-driven physics.)

Like Mary Somerville, I think the scientific method has universal appeal, but I also think feminist and other critics are right to question its patriarchal and capitalist origins. Although science at its best is value-free, it’s part of the broader community, whose values are absorbed by individual scientists. So much so that Yale researchers Moss-Racusin et al. recently uncovered evidence that many scientists themselves, male and female, have an unconscious sexist bias. In their widely reported study, participants judged the same job application (for a lab manager position) to be less competent if it had a (randomly assigned) female name than if it had a male name.

In Mary Somerville’s day, such bias was overt, and it had the authority of science itself: women’s smaller brain size was considered sufficient to “prove” female intellectual inferiority. It was bad science, and it shows how patriarchal perceptions can skew the interpretation not just of women’s competence, but also of scientific data itself. (Without proper vigilance, this kind of subjectivity can slip through the safeguards of the scientific method because of other prejudices, too, such as racism, or even the agendas of funding bodies.) Of course, acknowledging the existence of patriarchal values in society isn’t about hating men or assuming men hate women. Mary Somerville met with “the utmost kindness” from individual scientific men, but that didn’t stop many of them from seeing her as the exception that proved the male-created rule of female inferiority. After all, it takes analysis and courage to step outside a long-accepted norm. And so, the “woman question” is still with us — but in trying to resolve it, we might not only find ways to remove existing gender biases, but also broaden the conversation about what sort of science we all want in the twenty-first century.

Robyn Arianrhod is an Honorary Research Associate in the School of Mathematical Sciences at Monash University. She is the author of Seduced by Logic: Émilie Du Châtelet, Mary Somerville and the Newtonian Revolution and Einstein’s Heroes.



*Image credit: Mary Somerville. Public domain via Wikimedia Commons.*

The post What sort of science do we want? appeared first on OUPblog.

The post Summing up Alan Turing appeared first on OUPblog.


Three words to sum up Alan Turing? Humour. He had an impish, irreverent and infectious sense of humour. Courage. Isolation. He loved to work alone. Reading his scientific papers, it is almost as though the rest of the world — the busy community of human minds working away on the same or related problems — simply did not exist. Turing was determined to do it his way. Three more words? A patriot. Unconventional — he was uncompromisingly unconventional, and he didn’t much care what other people thought about his unusual methods. A genius. Turing’s brilliant mind was sparsely furnished, though. He was a Spartan in all things, inner and outer, and had no time for pleasing décor, soft furnishings, superfluous embellishment, or unnecessary words. To him what mattered was the truth. Everything else was mere froth. He succeeded where a better furnished, wordier, more ornate mind might have failed. Alan Turing changed the world.

What would it have been like to meet him? Turing was tallish (5 feet 10 inches) and broadly built. He looked strong and fit. You might have mistaken his age, as he always seemed younger than he was. He was good looking, but strange. If you came across him at a party you would notice him all right. In fact you might turn round and say “Who on earth is that?” It wasn’t just his shabby clothes or dirty fingernails. It was the whole package. Part of it was the unusual noise he made. This has often been described as a stammer, but it wasn’t. It was his way of preventing people from interrupting him, while he thought out what he was trying to say. *Ah – Ah – Ah – Ah – Ah.* He did it loudly.

If you crossed the room to talk to him, you’d probably find him gauche and rather reserved. He was decidedly lah-di-dah, but the reserve wasn’t standoffishness. He was a man of few words, shy. Polite small talk did not come easily to him. He might if you were lucky smile engagingly, his blue eyes twinkling, and come out with something quirky that would make you laugh. If conversation developed you’d probably find him vivid and funny. He might ask you, in his rather high-pitched voice, whether you think a computer could ever enjoy strawberries and cream, or could make you fall in love with it. Or he might ask if you can say why a face is reversed left to right in a mirror but not top to bottom.

Once you got to know him Turing was fun — cheerful, lively, stimulating, comic, brimming with boyish enthusiasm. His raucous crow-like laugh pealed out boisterously. But he was also a loner. “Turing was always by himself,” said codebreaker Jerry Roberts: “He didn’t seem to talk to people a lot, although with his own circle he was sociable enough.” Like everyone else Turing craved affection and company, but he never seemed to quite fit in anywhere. He was bothered by his own social strangeness — although, like his hair, it was a force of nature he could do little about. Occasionally he could be very rude. If he thought that someone wasn’t listening to him with sufficient attention he would simply walk away. Turing was the sort of man who, usually unintentionally, ruffled people’s feathers — especially pompous people, people in authority, and scientific poseurs. He was moody too. His assistant at the National Physical Laboratory, Jim Wilkinson, recalled with amusement that there were days when it was best just to keep out of Turing’s way. Beneath the cranky, craggy, irreverent exterior there was an unworldly innocence though, as well as sensitivity and modesty.

Turing died at the age of only 41. His ideas lived on, however, and at the turn of the millennium *Time* magazine listed him among the twentieth century’s 100 greatest minds, alongside the Wright brothers, Albert Einstein, DNA busters Crick and Watson, and the discoverer of penicillin, Alexander Fleming. Turing’s achievements during his short life were legion. Best known as the man who broke some of Germany’s most secret codes during the war of 1939-45, Turing was also the father of the modern computer. Today, all who click, tap or touch to open are familiar with the impact of his ideas. To Turing we owe the brilliant innovation of storing applications, and all the other programs necessary for computers to do our bidding, inside the computer’s memory, ready to be opened when we wish. We take for granted that we use the same slab of hardware to shop, manage our finances, type our memoirs, play our favourite music and videos, and send instant messages across the street or around the world. Like many great ideas this one now seems as obvious as the wheel and the arch, but with this single invention — the stored-program universal computer — Turing changed the way we live. His universal machine caught on like wildfire; today personal computer sales hover around the million a day mark. In less than four decades, Turing’s ideas transported us from an era where ‘computer’ was the term for a human clerk who did the sums in the back office of an insurance company or science lab, into a world where many young people have never known life without the Internet.

B. Jack Copeland is the Director of the Turing Archive for the History of Computing, and author of Turing: Pioneer of the Information Age, Alan Turing’s Electronic Brain, and Colossus. He is the editor of The Essential Turing. Read the new revelations about Turing’s death after Copeland’s investigation into the inquest.

Visit the Turing hub on the Oxford University Press UK website for the latest news in the Centenary year. Read our previous posts on Alan Turing including: “Maurice Wilkes on Alan Turing” by Peter J. Bentley, “Turing: the irruption of Materialism into thought” by Paul Cockshott, “Alan Turing’s Cryptographic Legacy” by Keith M. Martin, “Turing’s Grand Unification” by Cristopher Moore and Stephan Mertens, “Computers as authors and the Turing Test” by Kees van Deemter, and “Alan Turing, Code-Breaker” by Jack Copeland.

For more information about Turing’s codebreaking work, and to view digital facsimiles of declassified wartime ‘Ultra’ documents, visit The Turing Archive for the History of Computing. There is also an extensive photo gallery of Turing and his war at www.the-turing-web-book.com.

Subscribe to the OUPblog via email or RSS.

Subscribe to only British history articles on the OUPblog via email or RSS.


The post Summing up Alan Turing appeared first on OUPblog.


As well as Halloween, Guy Fawkes, and All Saints’ Day, this time of the year used to see another day of fun and frenzy. ‘Almanac Day’, towards the end of November, saw the next year’s almanacs go on sale. It generally came round on or about 22 November: St Cecilia’s Day. In London, Stationers’ Hall would be crammed to the rafters:

The clock strikes, wide asunder start the gates, and in they come, a whole army of porters, darting hither and thither, and seizing the said bags, in many instances as big as themselves. Before we can well understand what is the matter, men and bags have alike vanished – the hall is clear … they will be dispersed through every city and town, and parish, and hamlet of England; the curate will be glancing over the pages of his little book to see what promotions have taken place in the church, and sigh as he thinks of rectories, and deaneries, and bishoprics; the sailor will be deep in the mysteries of tides and new moons that are learnedly expatiated upon in the pages of his; the believer in the stars will be finding new draughts made upon that Bank of Faith impossible to be broken or made bankrupt — his superstition, as he turns over the pages of his Moore — but we have let out our secret. Yes, they are all almanacks — those bags contained nothing but almanacks.

Two hundred or three hundred years ago you could choose from twenty or more almanacs every year. Unlike most of the modern ones they were slim things, with a couple of dozen pages. There were almanacs for Whigs, almanacs for Tories, almanacs for people who believed in astrology and almanacs for those who didn’t, almanacs for farmers, sailors, merchants.

My own journey into the wonderful world of early modern almanacs began with *Poor Robin’s Almanac*. Robin was a fictional character, invented in the 1660s as a way to lampoon astrologers and their almanacs. He went on to write a long-running spoof almanac, clocking up 164 annual issues. He did prognostication –

If on the second of February, thou go either to Fair or Market with store of money in thy pocket, and there have thy purse picked of it all, then that is an unfortunate day.

and history –

1367 BC: Women first invented kissing

and the year’s calendar –

23 June: Friar Tuck’s Day.

Poor Robin’s intellectual descendants included *Punch* (it copied part of his title page) and *Poor Richard*, pseudonym of Benjamin Franklin and author of *The Way to Wealth*. In his day he was loved and very widely read, but he was killed off in the 1820s by a combination of mismanagement, waning popularity, and attacks from the *Society for the Diffusion of Useful Knowledge*.

Others were less uproarious, but just as much fun. *The Ladies’ Diary*, or *Woman’s Almanack* specialized in genteel mathematical puzzles. ‘If I’m a year younger than one-twentieth the square of my age, how old am I?’ ‘If the sun takes four minutes to cross the horizon on New Year’s Day, where am I?’ It attracted questions and answers sent in from all over Britain, and gave prizes for the best ones. It ran for over 130 years.
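Puzzles like the first one yield to a little algebra. Reading it as the relation x = x²/20 − 1 (my own interpretation of the wording, not the Diary’s), the age is the positive root of a quadratic, and a few lines of Python check the arithmetic:

```python
import math

# The Ladies' Diary age puzzle, read as: my age x satisfies x = x**2/20 - 1
# (a year younger than one-twentieth the square of my age).
# Rearranged: x**2 - 20*x - 20 = 0, solved by the quadratic formula.
a, b, c = 1, -20, -20
x = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)  # the positive root

print(round(x, 2))                        # about 20.95
print(abs(x - (x * x / 20 - 1)) < 1e-9)   # True: the relation holds
```

Under this reading the answer is not a whole number, which may say more about my reading of the wording than about the Diary’s puzzle-setters.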

*Old Moore* provided predictions political, social, and meteorological based on the movements of the heavens.

Let my Muse raise, and tell what News she hears

Amongst the Stars, and Motions of the Spheres.

But it combined them with some remarkable popular science writing, on subjects ranging from astronomy to ancient history, compiled by authors who had one eye on the Philosophical Transactions and the other on the public’s taste for sensationalism.

Another scientifically-minded production was the *Nautical Almanac*, started in the 1760s by Longitude’s villain Nevil Maskelyne (he was actually rather a pleasant chap). It gave the moon’s position at three-hour intervals for the whole year, and instructions for working out your longitude from an observation. At two shillings and sixpence, plus the price of a sextant, it came in a good bit cheaper than a Harrison chronometer.

At times nearly one Briton in six was buying an almanac: ‘the greatest triumph of journalism until modern times’ according to historian Bernard Capp. Almanac day may be no more, but almanacs have been circulating for nearly as long as calendars, and if the genre has waxed and waned over the years it seems in no danger of extinction. Partly eclipsed in the early nineteenth century by other forms of popular instruction, the almanac blazed forth again from the 1830s, with sales rising to a million a year for the most popular. Today, *Old Moore* is still with us, though somewhat transformed; so is the *Nautical Almanac*. Whitaker and Schott have given almanacs a new lease of life as annual reference books. Their survival seems a safe prediction.

Benjamin Wardhaugh is a historian and fellow of Wolfson College, Oxford. His book, Poor Robin’s Prophecies: A curious Almanac, and the everyday mathematics of Georgian Britain, publishes this month.


Subscribe to only mathematics articles on the OUPblog via email or RSS.

Subscribe to only history articles on the OUPblog via email or RSS.


The post Is Almanac Day in your calendar? appeared first on OUPblog.


An interesting, if somewhat uncommon, lens through which to view politics is that of mathematics. One of the strongest arguments ever made in favor of democracy, for example, was in 1785 by the political philosopher-mathematician, Nicolas de Condorcet. Because different people possess different pieces of information about an issue, he reasoned, they predict different outcomes from the same policy proposals, and will thus favor different policies, even when they actually share a common goal. Ultimately, however, if the future were perfectly known, some of these predictions would prove more accurate than others. From a present vantage point, then, each voter has some probability of actually favoring an inferior policy. Individually, this probability may be rather high, but collective decisions draw information from large numbers of sources, making mistakes less likely.

To clarify Condorcet’s argument, note that an individual who knows nothing can identify the more effective of two policies with 50% probability; if she knows a lot about an issue, her odds are higher. For the sake of argument, suppose that a citizen correctly identifies the better alternative 51% of the time. On any given issue, then, many will erroneously support the inferior policy, but (assuming that voters form opinions independently, in a statistical sense) a 51% majority will favor whichever policy is actually superior. More formally, the probability of a collective mistake approaches zero as the number of voters grows large.
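Condorcet’s conclusion is easy to check numerically. The sketch below, a modern illustration rather than anything from 1785, computes the exact probability that a strict majority of n independent voters, each correct with probability 0.51, identifies the better policy:

```python
from math import comb

def majority_correct(n, p=0.51):
    """Probability that a strict majority of n independent voters,
    each correct with probability p, identifies the better policy."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (11, 101, 1001):
    print(n, round(majority_correct(n), 3))   # rises toward 1 as n grows
```

With a thousand such voters the majority is already right roughly three-quarters of the time, and the probability of a collective mistake keeps shrinking as the electorate grows.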

Condorcet’s mathematical analysis assumes that voters’ opinions are equally reliable, but in reality, expertise varies widely on any issue, which raises the question of who should vote. One conventional view is that everyone should participate; in fact, this has a mathematical justification, since in Condorcet’s model, collective errors become less likely as the number of voters increases. On the other hand, another common view is that citizens with only limited information should abstain, leaving a decision to those who know the most about the issue. Ultimately, the question must be settled mathematically: assuming that different citizens have different probabilities of correctly identifying good policies, what configuration of voter participation maximizes the probability of making the right collective decision?

It turns out that, when voters differ in expertise, it is not optimal for all to vote, even when each citizen’s private accuracy exceeds 50%. In other words, a citizen with only limited expertise on an issue can best serve the electorate by ignoring her own opinion and abstaining, in deference to those who know more. Mathematically, it might seem that more information is always better, if only slightly. This would indeed be the case, except that each vote takes weight away from other votes, which may be better informed.
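A small enumeration makes the point concrete. In this toy example (the numbers are invented for illustration), three experts who are each right 90% of the time outperform the five-member majority that results when two voters who are right only 55% of the time join in:

```python
from itertools import product

def group_accuracy(ps):
    """Probability that a majority vote is correct, where ps[i] is
    voter i's chance of being right and votes are independent."""
    total = 0.0
    for votes in product([True, False], repeat=len(ps)):
        prob = 1.0
        for correct, p in zip(votes, ps):
            prob *= p if correct else 1 - p
        if sum(votes) * 2 > len(ps):   # strict majority correct
            total += prob
    return total

experts = [0.9, 0.9, 0.9]
print(round(group_accuracy(experts), 3))                 # 0.972
print(round(group_accuracy(experts + [0.55, 0.55]), 3))  # 0.931: worse
```

Each added voter is individually better than a coin flip, yet the group does worse, because the new votes dilute the experts’.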

If voters recognize the potential harm of an uninformed vote, this could explain why many citizens vote in some races, but skip others on the same ballot, or vote in general elections, but not in primaries, where information is more limited. This raises a new question, however, which is who should continue voting: if the least informed citizens all abstain, then a moderately informed citizen now becomes the least informed voter; should she abstain, as well?

Mathematically, it turns out that for any distribution of expertise, there is a threshold above which citizens should continue voting, no matter how large the electorate grows. A citizen right at this threshold is less knowledgeable than other voters, but nevertheless improves the collective electoral decision by bolstering the number of votes. The formula that derives this threshold is of limited practical use, since voter accuracies cannot readily be measured, but simple example distributions demonstrate that voting may well be optimal for a sizeable majority of the electorate.

The dual message that poorly informed votes reduce the quality of electoral decisions, but that moderately informed votes can improve even the decisions made by more expert peers, may leave an individual feeling conflicted as to whether she should express her tentative opinions, or abstain in deference to those with better expertise. Assuming that her peers vote and abstain optimally, it may be useful to first predict voter turnout, and then participate (or not) accordingly: when half the electorate votes, it should be the better-informed half; when voter turnout is 75%, all but the least-informed quartile should participate.

An important caveat of Condorcet’s probability analysis is its assumption that disagreements are actually illusory: if voters envisioned the same policy outcomes, they would largely support the same policies. Whether this is accurate or not is an open philosophical question, but voters seem implicitly to embrace this assumption when they attempt to persuade and convert one another via debate, endorsements, or policy research: such efforts are only worthwhile if an individual expects others, once convinced, to abandon their former policy positions, in favor of her own. Some policies also do receive overwhelming public support.

If Condorcet’s basic premise is right, an uninformed citizen’s highest contribution may actually be to abstain from voting, trusting her peers to make decisions on her behalf. At the same time, voters with only limited expertise can rest assured that a single, moderately-informed vote can improve upon the decision made by a large number of experts. One might say that this is the true essence of democracy.

Joseph C. McMurray is Assistant Professor in the Department of Economics at Brigham Young University. His recent paper, Aggregating Information by Voting: The Wisdom of the Experts versus the Wisdom of the Masses, has been made freely available for a limited time by the Review of Economic Studies journal.

The Review of Economic Studies is widely recognised as one of the core top-five economics journals. The Review is essential reading for economists and has a reputation for publishing path-breaking papers in theoretical and applied economics.


Subscribe to only law and politics articles on the OUPblog via email or RSS.


*Image credit: Voting card. Photo by rrmf13, iStockphoto.*

The post The mathematics of democracy: Who should vote? appeared first on OUPblog.


In recent days, the pro-mathematics portion of the Internet has been buzzing over the following paragraph, taken from the website of Christian publishing company A Beka Book:

Unlike the “modern math” theorists, who believe that mathematics is a creation of man and thus arbitrary and relative, *A Beka Book* teaches that the laws of mathematics are a creation of God and thus absolute….*A Beka Book* provides attractive, legible, and workable traditional mathematics texts that are not burdened with modern theories such as set theory.

As a result of recent legislative activity in the state of Louisiana, these curricular materials will now be supported with taxpayer money.

In more than a decade of socializing with creationists and other religious fundamentalists, I frequently encountered blinkered arguments about mathematics. This attack on set theory, however, was new to me. I cannot even imagine why anyone would think set theory is relevant to discussions of whether it is man or God who creates math. Perhaps the problem is that set theorists often speak a bit casually about infinity, which some people think is tantamount to discussing God. Alas, this line of criticism is too muddled to take seriously.

Whatever their objection, they are really missing out on something great. Set theory is fascinating.

By a “set” we mean simply any collection of objects. You walk into a grocery store and see a pile of grapefruits over here and a pile of apples over there. A mathematician might then refer to the set of grapefruits on the one hand and the set of apples on the other. This provides a useful way of talking about all the apples (or grapefruits) combined as one unit, as opposed to discussing any specific apple (or grapefruit).

Of course, we can identify many other sets. We might wish to distinguish the set of Gala apples from the set of Granny Smiths. Or we might want to make a larger set by combining the set of apples and the set of grapefruits together to form part of the set of all fruit. For any description you would care to give, it is reasonable to talk about the set of all things that fit that description.

This seemed obvious, for example, to Gottlob Frege, a German mathematician/philosopher who did pioneering work in logic and set theory in the late nineteenth and early twentieth centuries. But Bertrand Russell pointed out that this notion is fundamentally flawed. He first observed that some sets answer to their own descriptions while others don’t. The set of all grapefruits isn’t itself a grapefruit. Therefore, this set doesn’t contain itself among its members. On the other hand, the set of all abstract ideas is, indeed, an abstract idea. So it contains itself.

Russell now considered the set whose members are precisely the sets that are not contained within themselves. The set of all grapefruits is contained in Russell’s set, for example, while the set of all abstract ideas isn’t. He now wondered whether his set did or didn’t answer to its own description. If we suppose that it does so answer, then it must be contained within itself. But Russell’s set only contains sets that aren’t contained within themselves. This is a contradiction. You see, if we assume that Russell’s set answers to its own description then it both contains itself and doesn’t contain itself. Impossible.

Alas, the alternative assumption fares no better. If we suppose that Russell’s set doesn’t answer to its own description, then it must be among the sets that aren’t contained within themselves. But this is precisely the criterion you must satisfy to get into Russell’s set in the first place. Either way you have a contradiction, meaning this isn’t a properly defined set.
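Russell’s contradiction can even be watched in motion. In the toy model below (my own illustration, treating a “set” as a membership-test function), ordinary sets answer the membership question happily, but Russell’s set, asked about itself, just keeps asking forever:

```python
import sys

# Model a "set" as a membership predicate: a function that, given a
# candidate member, says whether it belongs.
grapefruits = lambda x: isinstance(x, str) and x.endswith("grapefruit")

# Russell's set contains exactly those sets that do NOT contain themselves.
russell = lambda s: not s(s)

# The set of grapefruits is not a grapefruit, so it does not contain
# itself, so Russell's set does contain it.
print(russell(grapefruits))   # True

# But membership of Russell's set in itself is "not russell(russell)":
# the question reduces to its own negation, over and over, forever.
sys.setrecursionlimit(1000)
try:
    russell(russell)
except RecursionError:
    print("no consistent answer: the definition is contradictory")
```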

Nor is this the only way to get into trouble with sets. Consider the set of counting numbers {1, 2, 3, 4, …} that cannot be uniquely identified with fewer than two hundred characters. For example, a number such as 1000 can be identified by writing “ten multiplied by one hundred,” but I can do it more efficiently by writing “one thousand,” and more efficiently still by writing “ten cubed.”

Now, since there are only finitely many phrases having fewer than 200 characters, and infinitely many counting numbers, it is clear that my set must contain *something*. And since it must contain *something*, it must also contain a smallest number. (In the math biz, this curious fact of counting numbers is known as the “well-ordering principle.”) That smallest number in the set is therefore uniquely identified by the phrase, “The smallest counting number that cannot be described with fewer than two hundred characters.” But did I not just describe it with fewer than 200 characters? Prolonged consideration of such things can be harmful to your mental health.
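The counting step in this argument, finitely many short phrases versus infinitely many numbers, is easy to make concrete. Assuming, purely for illustration, an alphabet of 100 characters, Python can tally every possible phrase of fewer than 200 characters:

```python
# Number of distinct phrases of length 0..199 over a 100-character
# alphabet: a geometric series, enormous but finite.
alphabet_size = 100
phrases = sum(alphabet_size**k for k in range(200))

# Sanity check against the closed form for a geometric series.
assert phrases == (alphabet_size**200 - 1) // (alphabet_size - 1)

print(len(str(phrases)))   # 399 digits: huge, yet finite
```

Since at most that many numbers can be pinned down by such phrases, infinitely many counting numbers must escape description, and the well-ordering principle then hands us a smallest one.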

Actually, my favorite application of the well-ordering principle is this: Consider the set of all the boring counting numbers. This set must have a smallest member, let us call it X. But then X is the smallest boring counting number, which makes it very interesting indeed! Surely this contradiction shows that all counting numbers are interesting?

Indeed they are. And sets are as well. Just don’t be too ingenious about how you define them.

Jason Rosenhouse is Associate Professor of Mathematics at James Madison University. His most recent book is Among The Creationists: Dispatches from the Anti-Evolutionist Front Lines. He is also the author of Taking Sudoku Seriously: The Math Behind the World’s Most Popular Pencil Puzzle with Laura Taalman and The Monty Hall Problem: The Remarkable Story of Math’s Most Contentious Brain Teaser. Read Jason Rosenhouse’s previous blog articles.


Subscribe to only religion articles on the OUPblog via email or RSS.


The post The Joy of Sets appeared first on OUPblog.


The world famous Edinburgh International Festival has kicked off, beginning three weeks of the best the arts world has to offer. The Fringe Festival has countless alternative, weird, and wacky events happening all over the city, and the Edinburgh International Book Festival is underway. Throughout the Book Festival we’ll be bringing you sneak peeks of our authors’ talks and backstage debriefs so that, even if you can’t make it to Edinburgh this year, you won’t miss out on all the action.

I’ve just had a great time at the 2012 Edinburgh International Book Festival, even though it was a rather strange experience for a mathematician.

In the Author’s Yurt (sic), for example, I was surrounded by fiction writers, with lots of pointy beards and wild hair.

As it happens, I used to write detective stories when I was a young boy, so once had vague dreams of becoming a fiction writer myself. But it was not to be. And now, after an academic career at Oxford, I find that I am an author of a rather different kind.

1089 and All That is my first ‘popular’ maths book, aimed at the general public. It is a light-hearted and somewhat quirky account of the biggest ideas of the subject, which — as I see it — means (a) wonderful theorems, (b) beautiful proofs and (c) great applications. (And, preferably, all three things at once.) Above all, perhaps, I try to convey the element of surprise in mathematics, particularly when unexpected connections arise between different parts of the subject.

I’ve lectured on this book before but Edinburgh was a bit different, because I was part of the RBS Schools Programme, so I knew in advance that the audience would be pupils and their teachers. But my lecture had been advertised for a wide age range, so I wasn’t entirely sure what to expect.

Twenty minutes before the lecture, in Charlotte Square Gardens, the audience started to arrive, and I peered out of the Author’s Yurt.

The children looked very small. The average age was about 10.

I began to get nervous. My lecture had number tricks, practical demonstrations, a bit of audience participation, and, at the end, the electric guitar. But it also involved some quite deep mathematical ideas.

So, five minutes later, I peered out again.

They’d got even smaller.

Thankfully, it all seemed to go well enough in the end. But what really surprised me were the questions afterwards. There was no stopping them.

Some, like “How long have you been playing the guitar?” (53 years), were predictable. But there were several on maths (“What’s your favourite equation?”) and many more on the actual process of writing the book. In fact I began to wonder afterwards how many budding young authors there had been in the audience. And, in retrospect, I wish I’d asked them.

In any event, it seemed to come to a happy end, and I was whisked off to a book-signing and then to a radio interview with BBC Scotland, which turned out to be conducted by three pupils from a local primary school.

So, for me, the whole experience was a memorable one, to say nothing of the spectacular view of Edinburgh Castle from my hotel window, complete with night-time illumination and festival fireworks.

And while I could ramble on further, it seems to me that it was all summed up, really, a long time ago, by the satirical novelist Peter de Vries, who wrote: “I love being an author; what I can’t stand is the paperwork.”

David Acheson is the author of 1089 and All That: A Journey into Mathematics. He is an Emeritus Fellow of Jesus College, Oxford, and was recently President of the Mathematical Association. He is also the author of From Calculus to Chaos: An Introduction to Dynamics.


The post Maths, magic, and the electric guitar appeared first on OUPblog.


Alan Turing’s work was so important and wide-ranging that it is difficult to think of a more broadly influential scientist in the last century. Our understanding of the power and limitations of computing, for example, owes a tremendous amount to his work on the mathematical concept of a Turing Machine. His practical achievements are no less impressive. Some historians believe that the Second World War would have ended differently without his contributions to code-breaking. Yet another part of his work is the Turing test — Turing’s answer to a momentous question: What’s essential about human intelligence?

The inspiration for the Turing test came from a conversation game in which one player (the deceiver) tries to fool another player (the detective) about the deceiver’s gender. To win, a male deceiver would need to answer the detective’s questions in a way that suggests that he, the deceiver, is female. It doesn’t suffice for the deceiver to answer direct questions about gender. He should also show good knowledge about feminine topics and get properly upset over male chauvinism. What’s more, he should use turns of phrase that are typical of women. All this without overdoing it, of course.

Turing realised that this conversation game could be turned on its head if the role of the deceiver is played by a computer, not a person. The task for a computer deceiver is to fool the detective into believing that the deceiver is a person of flesh and blood. Analogous to the original game, the computer can win by thinking like a human. Now suppose that, playing this modified game, a computer was able to fool human detectives into believing it to be human. (A deceiver wins if the detectives are unable to get the computer/human decision correct more often than would be expected by chance.) Surely, so Turing argued, this would mean that the computer has managed to think like a human. Hence, if this happened, one would have to conclude that the computer displays real human thinking; the makers of the deceiver program would have captured human intelligence. The link between the Turing Test and intelligence has often been questioned, but the idea of the Test itself is very much alive.

Natural Language Generation (NLG) systems are computer programs that convert numerical or symbolic information into ordinary language. Weather forecasting, medical decision support, and other applications are starting to use systems of this kind. How should NLG programs be tested? No single method has all the answers, but human behaviour is still a gold standard to which many of these systems aspire. As in the Turing Test, researchers try to make their NLG systems produce text that resembles human-written text, partly because they believe that this may be the shortest route to making them effective. More and more often, NLG systems are tested in international evaluation contests that focus on one or more particular aspects of language use.

One of the most important challenges for NLG is to let computers talk in a human way about numbers. Numbers play an important role in many areas, including the medical domain. When nurses write about a patient — producing a shift report for instance — they have many numbers at their disposal (body temperature, oxygen saturation rates, etc.). However, they frequently suppress these numbers, replacing them by terms that are qualitative and vague. Instead of citing concrete oxygen saturation figures, they simply write “The SATS have remained OK”, for example. When talking about episodes of decreased heart rate, they throw in words like “temporary,” “prolonged,” and “significant.” Interestingly, doctors suppress numbers even more than nurses. Computational NLG systems in this area, by contrast, tend to stick with the numbers, producing stilted bits of text like the following: “By 10:40 SaO2 had decreased to 87. As a result, Fraction of Inspired Oxygen (FIO2) was set to 36%. SaO2 increased to 93.” The challenge is to do better, emulating human writers.

Unfortunately, the writings of doctors are rather difficult to mimic. The challenge is not just to decide when numerical information is useful. (Texts written by doctors contain numbers too, though fewer.) The hardest challenge for the NLG system is to “interpret” the numbers and this can involve difficult judgment calls, deciding whether a certain pattern of numbers should be summarized as “OK,” for instance, and deciding whether an episode of slow heart rhythm is merely “temporary” or “prolonged.” The medics’ texts are not dumbed-down versions of the computer-generated ones. They are highly sophisticated, despite their apparent simplicity. It will take research in NLG years before its computer programs stand a chance at winning a Turing Test in this area.
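The flavour of the judgment calls involved can be sketched in a few lines. The thresholds and wording below are entirely invented for illustration, not drawn from any real clinical NLG system:

```python
def describe_sats(readings, ok_threshold=90):
    """Toy NLG rule: turn numeric oxygen-saturation readings into the
    kind of qualitative phrase a nurse might write. The threshold and
    wording are invented for illustration only."""
    low = [r for r in readings if r < ok_threshold]
    if not low:
        return "The SATS have remained OK."
    word = "prolonged" if len(low) >= 3 else "temporary"
    return f"There was a {word} episode of low oxygen saturation."

print(describe_sats([95, 96, 94, 97]))   # The SATS have remained OK.
print(describe_sats([95, 87, 96, 95]))   # ...a temporary episode...
print(describe_sats([88, 86, 87, 89]))   # ...a prolonged episode...
```

Even this caricature shows where the difficulty lies: the numbers are easy, but choosing the threshold for “OK” and the boundary between “temporary” and “prolonged” is exactly the expertise the medics’ texts embody.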

Kees van Deemter is a Reader in Computing Science at the University of Aberdeen. He is interested in getting computers to speak or write, and in the logical, linguistic, and philosophical issues that this raises. His book, Not Exactly: In Praise of Vagueness, puts the spotlight on vague and qualitative concepts, viewing them from a variety of angles and making a highly technical literature easily accessible to a wide audience. It explores how vague and qualitative concepts play a role in all areas of life, including even the exact sciences, where they are mostly unwelcome; how vague concepts fit into our current understanding of language and logic; the practical applications; and when and why vague language can be effective. Find out more about Not Exactly: In Praise of Vagueness. Kees van Deemter is also the author of approximately 120 peer-reviewed research publications.

OUPblog is celebrating Alan Turing’s 100th birthday with blog posts from our authors all this week. Read our previous posts on Alan Turing including: “Maurice Wilkes on Alan Turing” by Peter J. Bentley, “Turing: the irruption of Materialism into thought” by Paul Cockshott, “Alan Turing’s Cryptographic Legacy” by Keith M. Martin, and “Turing’s Grand Unification” by Cristopher Moore and Stephan Mertens.


Subscribe to only technology articles on the OUPblog via email or RSS.


The post Computers as authors and the Turing Test appeared first on OUPblog.

The post Turing’s Grand Unification appeared first on OUPblog.


Many of the central moments in science have been unifications: realizations that seemingly disparate phenomena are all aspects of one underlying structure. Newton showed that the same laws of motion and gravity govern apples and planets, creating the first explanatory framework that joins the terrestrial to the celestial. Maxwell showed that a single field can explain electricity, magnetism, and light. Darwin realized that natural selection shapes all forms of life. And Einstein demonstrated that space and time are shadows of a single, four-dimensional spacetime.

This quest for unity drives us to this day, as we hunt for a Grand Unified Theory that combines gravity with quantum mechanics. But while it is less well-known, computer science had its own grand unification in 1936, thanks to Alan Turing.

At the dawn of the 20th century, mathematicians and logicians were focused on the axiomatic underpinnings of mathematics. Shaken by paradoxes — like Russell’s set of all sets that don’t contain themselves (does it or doesn’t it?) — they wanted to re-build mathematics from the ground up, creating a foundation free of paradox. This stimulated a great deal of interest in axiomatic systems, their power to establish truth, and the difficulty of finding proofs or disproofs of open questions in mathematics.

In 1928, David Hilbert posed the Entscheidungsproblem, asking whether there is a “mechanical procedure” that can decide whether or not any given mathematical statement is true — say the Twin Prime Conjecture, that there are an infinite number of pairs of primes that differ by two. Such a procedure would complete the mathematical adventure, providing a general method to determine the truth or falsehood of any statement. To us, this sounds horribly final, but to Hilbert it was a glorious dream.

But what exactly is a “mechanical procedure”? Or, in modern terms, an algorithm? Intuitively, it is a procedure that can be carried out according to a fixed computer program, like a recipe followed by a dutiful cook. But what kinds of steps can this computer perform? What kind of information does it have access to, and how is it allowed to transform this information?

Several models of computation had been proposed, with different attitudes towards what it means to compute. The recursive functions build functions from simpler ones using rules like composition and induction, starting with “atomic” functions like *x*+1 that we can take for granted. Another model, Church’s λ-calculus, repeatedly transforms a string of symbols by substitutions and rearrangements until only the answer remains.

Each of these models has its charms. Indeed, each one lives on in today’s programming languages. Recursive functions are much like subroutines and loops in C and Java, and the λ-calculus is at the heart of functional programming languages like Lisp and Scheme. Every computer science student knows that we can translate from each of these languages to the others, and that while their styles are radically different, they can ultimately perform the same tasks. But in 1936, it was far from obvious that these models are equivalent, or that either one is capable of everything we might reasonably call a computation. Why should a handful of ways to define functions in terms of simpler ones, or a particular kind of symbol substitution, be enough to carry out any conceivable computation?
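The equivalence can be tasted in miniature. Below, Python plays host to both styles: factorial defined by explicit recursion, and factorial built from nothing but anonymous functions and application, tied together by the strict (Z) fixed-point combinator. This is a sketch of the flavour of the two models, of course, not the models themselves:

```python
# Factorial in the style of the recursive functions: built from
# simpler cases by explicit recursion.
def fact_rec(n):
    return 1 if n == 0 else n * fact_rec(n - 1)

# Factorial in the style of the lambda-calculus: only anonymous
# functions and application. The Z combinator supplies recursion in
# an eager language like Python.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))
fact_lam = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(all(fact_rec(n) == fact_lam(n) for n in range(10)))   # True
```

Radically different notations, identical function: exactly the situation Turing’s unification explains.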

Turing settled this issue with a model that is both mathematically precise and intuitively complete. He began by imagining a human computer, carrying out a procedure with pencil and paper. If we had to, he argued, we could boil any such procedure down into a series of steps, each of which reads and writes a single symbol. We don’t need to remember much ourselves, since we can use notes on the paper to keep track of what to do next. And although it might be inconvenient, a one-dimensional roll of paper is enough to write down anything we might need to read later.

At that point, we have a Turing machine: a controller with a finite number of internal states, and a tape on which it can read and write symbols in a finite alphabet. Nothing has been left out. Any reasonable attempt to augment the Turing machine with two-dimensional tapes, multiple controllers, etc. can be simulated by the original model.
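A Turing machine is simple enough to simulate in a few lines. The following sketch (mine, with an illustrative rule table for incrementing a binary number) shows the whole model: a finite rule table, a current state, and a tape of symbols with a read/write head.

```python
def run_turing_machine(rules, tape, state, head=0, steps=1000):
    """Simulate a one-tape Turing machine with blank symbol '_'.

    rules maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left), +1 (right), or 0 (stay put).
    """
    cells = dict(enumerate(tape))   # sparse, effectively unbounded tape
    for _ in range(steps):
        if state == 'halt':
            break
        symbol = cells.get(head, '_')
        state, cells[head], move = rules[(state, symbol)]
        head += move
    lo, hi = min(cells), max(cells)
    return state, ''.join(cells.get(i, '_') for i in range(lo, hi + 1))

# A machine that adds 1 to a binary number, starting with the head on the
# rightmost bit: flip 1s to 0s while carrying, then write the final 1.
rules = {
    ('carry', '1'): ('carry', '0', -1),
    ('carry', '0'): ('halt', '1', 0),
    ('carry', '_'): ('halt', '1', 0),
}
```

The controller has just two states ('carry' and 'halt'), yet with a richer rule table the same simulator can run any algorithm at all; that is the content of Turing's universality argument.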

Famously, Turing then showed that there are problems that no Turing machine can solve. The Halting Problem asks whether a given Turing machine will ever halt; in modern terms, whether a program will return an answer, or “hang” and run forever. If there were a machine that could answer this question, we could ask it about itself, demanding that it predict its own behavior. We then add a twist, making it halt if it will hang, and hang if it will halt. The only escape is to accept that no such machine exists in the first place.
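The self-defeating twist can be sketched in code. Suppose, hypothetically, that someone handed us a function `halts(f)` that correctly predicts whether calling `f()` returns (the names here are mine and purely illustrative); the following function then defeats it.

```python
def halts(f):
    # A hypothetical oracle: return True if f() eventually returns,
    # False if it runs forever.  Turing's argument shows no such
    # function can actually be written, so this body is a stand-in.
    raise NotImplementedError("no halting oracle exists")

def troublemaker():
    # The "twist": do the opposite of whatever the oracle predicts.
    if halts(troublemaker):
        while True:     # the oracle said "halts", so hang forever
            pass
    else:
        return          # the oracle said "hangs", so halt immediately

# Whichever answer halts(troublemaker) gives, it is wrong.
# The only escape is that halts cannot be implemented at all.
```

This is the same diagonal trick as in the paragraph above, written out: the machine consults the oracle about itself and then does the opposite.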

This also gives a nice proof of Gödel’s Theorem that there are unprovable truths, or to be more precise, that no axiomatic system can prove all mathematical truths. (Unless it also “proves” some falsehoods, in which case we can’t trust it!) For if every truth of the form “this Turing machine will never halt” had a finite proof, we could solve the Halting Problem by doing two things in parallel: running the machine to see if it halts, while simultaneously looking for proofs that it won’t. Thus, no axiomatic system can prove every truth of this form.

Turing showed that his machines are exactly as powerful as the recursive functions and the λ-calculus, unifying all three models under a single definition of computation. What we can compute doesn’t depend on the details of our computers. They can be serial or parallel, classical or quantum. One kind of computer might be much more efficient than another, but given enough time, each one can simulate all the others. The belief that these models capture everything that could reasonably be called an algorithm, or a mechanical procedure, is called the Church-Turing Thesis.

Turing drew the line between what is computable and what is not, given an unbounded amount of time and memory. Since his death, computer science has focused on what we can compute when these resources are limited. The P vs. NP question, for instance, asks whether there are problems where we can check a solution in polynomial time (as a function of problem size) but where actually finding a solution is much harder. For example, mathematical proofs can be checked in time roughly proportional to their length (that's the whole point of formal proofs) but they seem difficult to find. Despite our strong intuition that finding things is harder than checking them, we have been unable to prove that P ≠ NP, and it remains the outstanding open question of the field.
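The check-versus-find asymmetry is easy to see with satisfiability, the canonical NP problem. In this sketch (my example, not the authors'), verifying a proposed truth assignment takes a single pass over the formula, while the obvious way to find one tries up to 2^n assignments.

```python
from itertools import product

def check(formula, assignment):
    """Verify a CNF formula in time linear in its size.

    formula is a list of clauses; each clause is a list of
    (variable, wanted_value) pairs, at least one of which must match.
    """
    return all(any(assignment[v] == val for v, val in clause)
               for clause in formula)

def find(formula, variables):
    """Brute-force search: up to 2**n candidate assignments."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if check(formula, assignment):
            return assignment
    return None

# (x or y) and (not x or y) and (x or not y)
formula = [[('x', True), ('y', True)],
           [('x', False), ('y', True)],
           [('x', True), ('y', False)]]
```

Whether the exponential search in `find` can always be replaced by something polynomial is precisely the P vs. NP question.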

The fact that Turing didn’t live to see how computer science grew and flowered — that he wasn’t there to play a role in its development, as he did at its foundations – is one of the tragedies of the 20th century. He received, too late, an apology from the British government for the persecution that led to his death. Let’s wish him a happy birthday, and raise a glass to his short but brilliant life.

Cristopher Moore and Stephan Mertens are the authors of The Nature of Computation. Cristopher Moore is a professor at the Santa Fe Institute and was previously a professor in the Department of Computer Science and the Department of Physics and Astronomy at the University of New Mexico. Stephan Mertens is a theoretical physicist at the Institute of Theoretical Physics, Otto-von-Guericke University, Magdeburg, and an external professor at the Santa Fe Institute.

OUPblog is celebrating Alan Turing’s 100th birthday with blog posts from our authors all this week. Read our previous posts on Alan Turing including: “Maurice Wilkes on Alan Turing” by Peter J. Bentley, “Turing: the irruption of Materialism into thought” by Paul Cockshott, and “Alan Turing’s Cryptographic Legacy” by Keith M. Martin. Look for “Computers as authors and the Turing Test” by Kees van Deemter tomorrow.


The post Turing’s Grand Unification appeared first on OUPblog.
