
The Evolution of Everything: How Small Changes Transform Our World
Matt Ridley
‘If there is one dominant myth about the world, one huge mistake we all make … it is that we all go around assuming the world is much more of a planned place than it is.’
From the industrial revolution and the rise of China, to urbanisation and the birth of bitcoin, Matt Ridley demolishes conventional assumptions that the great events and trends of our day are dictated by those on high. On the contrary, our most important achievements develop from the ground up. In this wide-ranging and erudite book, Matt Ridley brilliantly makes the case for evolution as the force that has shaped much of our culture, our minds, and that even now is shaping our future.
As compelling as it is controversial, as authoritative as it is ambitious, Ridley’s deeply thought-provoking book will change the way we think about the world and how it works.




How Small Changes Transform Our World





Copyright
4th Estate
An imprint of HarperCollinsPublishers
1 London Bridge Street
London SE1 9GF
www.4thestate.co.uk
First published in Great Britain in 2015 by 4th Estate
Copyright © Matt Ridley 2015
The right of Matt Ridley to be identified as the author of this work has been asserted by him in accordance with the Copyright, Designs and Patents Act 1988
A catalogue record for this book is available from the British Library
All rights reserved under International and Pan-American Copyright Conventions. By payment of the required fees, you have been granted the non-exclusive, non-transferable right to access and read the text of this e-book on screen. No part of this text may be reproduced, transmitted, down-loaded, decompiled, reverse engineered, or stored in or introduced into any information storage and retrieval system, in any form or by any means, whether electronic or mechanical, now known or hereinafter invented, without the express written permission of HarperCollins.
Source ISBN: 9780007542475
Ebook Edition © 2016 ISBN: 9780007542505
Version: 2016-04-13
CONTENTS
Cover
Title Page
Copyright
Prologue: The General Theory of Evolution
1. The Evolution of the Universe
2. The Evolution of Morality
3. The Evolution of Life
4. The Evolution of Genes
5. The Evolution of Culture
6. The Evolution of the Economy
7. The Evolution of Technology
8. The Evolution of the Mind
9. The Evolution of Personality
10. The Evolution of Education
11. The Evolution of Population
12. The Evolution of Leadership
13. The Evolution of Government
14. The Evolution of Religion
15. The Evolution of Money
16. The Evolution of the Internet
Epilogue: The Evolution of the Future
Footnotes
Sources and Further Reading
Index
Acknowledgements
By the Same Author
About the Publisher

PROLOGUE
The General Theory of Evolution
The word ‘evolution’ originally means ‘unfolding’. Evolution is a story, a narrative of how things change. It is a word freighted with many other meanings, of particular kinds of change. It implies the emergence of something from something else. It has come to carry a connotation of incremental and gradual change, the opposite of sudden revolution. It is both spontaneous and inexorable. It suggests cumulative change from simple beginnings. It brings the implication of change that comes from within, rather than being directed from without. It also usually implies change that has no goal, but is open-minded about where it ends up. And it has of course acquired the very specific meaning of genetic descent with modification over the generations in biological creatures through the mechanism of natural selection.
This book argues that evolution is happening all around us. It is the best way of understanding how the human world changes, as well as the natural world. Change in human institutions, artefacts and habits is incremental, inexorable and inevitable. It follows a narrative, going from one stage to the next; it creeps rather than jumps; it has its own spontaneous momentum, rather than being driven from outside; it has no goal or end in mind; and it largely happens by trial and error – a version of natural selection. Take, for example, electric light. When an obscure engineer named Thomas Newcomen in 1712 hit upon the first practical method of turning heat into work, he could have had no notion that the basic principle behind his invention – the expansion of water when boiled to make steam – would eventually result, via innumerable small steps, in machines that generate electricity to provide artificial light: heat to work to light. The change from incandescent to fluorescent and next to LED light is still unfolding. The sequence of events was and is evolutionary.
My argument will be that in all these senses, evolution is far more common, and far more influential, than most people recognise. It is not confined to genetic systems, but explains the way that virtually all of human culture changes: from morality to technology, from money to religion. The way in which these streams of human culture flow is gradual, incremental, undirected, emergent and driven by natural selection among competing ideas. People are the victims, more often than the perpetrators, of unintended change. And though it has no goal in mind, cultural evolution none the less produces functional and ingenious solutions to problems – what biologists call adaptation. In the case of the forms and behaviours of animals and plants, we find this apparent purposefulness hard to explain without imputing deliberate design. How can it not be that the eye was designed for seeing? In the same way, when we find human culture being well adapted to solve human problems, we tend to assume that this is because some clever person designed it with that end in mind. So we tend to give too much credit to whichever clever person is standing nearby at the right moment.
The way that human history is taught can therefore mislead, because it places far too much emphasis on design, direction and planning, and far too little on evolution. Thus, it seems that generals win battles; politicians run countries; scientists discover truths; artists create genres; inventors make breakthroughs; teachers shape minds; philosophers change minds; priests teach morality; businessmen lead businesses; conspirators cause crises; gods make morality. Not just individuals, but institutions too: Goldman Sachs, the Communist Party, the Catholic Church, Al Qaeda – these are said to shape the world.
That’s the way I was taught. I now think it is more often wrong than right. Individuals can make a difference, of course, and so can political parties or big companies. Leadership still matters. But if there is one dominant myth about the world, one huge mistake we all make, one blind spot, it is that we all go around assuming the world is much more of a planned place than it is. As a result, again and again we mistake cause for effect; we blame the sailing boat for the wind, or credit the bystander with causing the event. A battle is won, so a general must have won it (not the malaria epidemic that debilitated the enemy army); a child learns, so a teacher must have taught her (not the books, peers and curiosity that the teacher helped her find); a species is saved, so a conservationist must have saved it (not the invention of fertiliser which cut the amount of land needed to feed the population); an invention is made, so an inventor must have invented it (not the inexorable, inevitable ripeness of the next technological step); a crisis occurs, so we see a conspiracy (and not a cock-up). We describe the world as if people and institutions were always in charge, when often they are not. As Nassim Taleb remarks in his book Antifragile, in a complex world the very notion of ‘cause’ is suspect: ‘another reason to ignore newspapers with their constant supply of causes for things’.
Taleb is brutally dismissive of what he mockingly calls the Soviet-Harvard illusion, which he defines as lecturing birds on flight and thinking that the lecture caused their skill at flying. Adam Smith was no less rude about what he called the man of system, who imagines ‘that he can arrange the different members of a great society with as much ease as the hand arranges the different pieces upon a chess-board’, without considering that on the great chessboard of human society, the pieces have a motion of their own.
To use a word coined by Abraham Lincoln, I hope gradually to ‘disenthrall’ you over the course of this book, from the obsession with human intentionality, design and planning. I want to do for every aspect of the human world a little bit of what Charles Darwin did for biology, and get you to see past the illusion of design, to see the emergent, unplanned, inexorable and beautiful process of change that lies underneath.
I have often noticed that human beings are surprisingly bad at explaining their own world. If an anthropologist from Alpha Centauri were to arrive here and ask some penetrating questions, he would get no good answers. Why is the homicide rate falling all around the world? Criminologists cannot agree. Why is global average income more than ten times what it was in the nineteenth century? Economic historians are divided. Why did some Africans start to invent cumulative technology and civilisation around 200,000 years ago? Anthropologists do not know. How does the world economy work? Economists pretend to explain, but they cannot really do so in any detail.
These phenomena belong in a strange category, first defined in 1767 by a Scottish army chaplain by the name of Adam Ferguson: they are the result of human action, but not of human design. They are evolutionary phenomena, in the original meaning of the word – they unfold. And evolutionary phenomena such as these are everywhere and in everything. Yet we fail to recognise this category. Our language and our thought divide the world into two kinds of things – those designed and made by people, and natural phenomena with no order or function. The economist Russ Roberts once pointed out that we have no word to encompass such phenomena. The umbrella that keeps you dry in a shower of rain is the result of both human action and human design, whereas the rainstorm that soaks you when you forget it is neither. But what about the system that enables a local shop to sell you an umbrella, or the word umbrella itself, or the etiquette that demands that you tilt your umbrella to one side to let another pedestrian pass? These – markets, language, customs – are man-made things. But none of them is designed by a human being. They all emerged unplanned.
We transfer this thinking back into our understanding of the natural world too. We see purposeful design in nature, rather than emergent evolution. We look for hierarchy in the genome, for a ‘self’ in the brain, and for free will in the mind. We latch on to any excuse to blame an extreme weather event on human agency – whether witchdoctoring or man-made global warming.
Far more than we like to admit, the world is to a remarkable extent a self-organising, self-changing place. Patterns emerge, trends evolve. Skeins of geese form Vs in the sky without meaning to, termites build cathedrals without architects, bees make hexagonal honeycombs without instruction, brains take shape without brain-makers, learning can happen without teaching, political events are shaped by history rather than vice versa. The genome has no master gene, the brain has no command centre, the English language has no director, the economy has no chief executive, society has no president, the common law has no chief justice, the climate has no control knob, history has no five-star general.
In society, people are the victims and even the immediate agents of change, but more often than not the causes are elsewhere – they are emergent, collective, inexorable forces. The most powerful of these inexorable forces is biological evolution by natural selection itself, but there are other, simpler forms of evolutionary, unplanned change. Indeed, to borrow a phrase from a theorist of innovation, Richard Webb, Darwinism is the ‘special theory of evolution’; there’s a general theory of evolution too, and it applies to much more than biology. It applies to society, money, technology, language, law, culture, music, violence, history, education, politics, God, morality. The general theory says that things do not stay the same; they change gradually but inexorably; they show ‘path dependence’; they show descent with modification; they show trial and error; they show selective persistence. And human beings none the less take credit for this process of endogenous change as if it was directed from above.
This truth continues to elude most intellectuals on the left as well as the right, who remain in effect ‘creationists’. The obsession with which those on the right resist Charles Darwin’s insight – that the complexity of nature does not imply a designer – matches the obsession with which those on the left resist Adam Smith’s insight – that the complexity of society does not imply a planner. In the pages that follow, I shall take on this creationism in all its forms.

1
The Evolution of the Universe
If you possess a firm grasp of these tenets, you will see
That Nature, rid of harsh taskmasters, all at once is free
And everything she does, does on her own, so that gods play
No part …
Lucretius, De Rerum Natura, Book 2, lines 1090–3
A ‘skyhook’ is an imaginary device for hanging an object from the sky. The word originated in a sarcastic remark by a frustrated pilot of a reconnaissance plane in the First World War, when told to stay in the same place for an hour: ‘This machine is not fitted with skyhooks,’ he replied. The philosopher Daniel Dennett used the skyhook as a metaphor for the argument that life shows evidence of an intelligent designer. He contrasted skyhooks with cranes – the first impose a solution, explanation or plan on the world from on high; the second allow solutions, explanations or patterns to emerge from the ground up, as natural selection does.
The history of Western thought is dominated by skyhooks, by devices for explaining the world as the outcome of design and planning. Plato said that society worked by imitating a designed cosmic order, a belief in which should be coercively enforced. Aristotle said that you should look for inherent principles of intentionality and development – souls – within matter. Homer said gods decided the outcome of battles. St Paul said that you should behave morally because Jesus told you so. Mohamed said you should obey God’s word as transmitted through the Koran. Luther said that your fate was in God’s hands. Hobbes said that social order came from a monarch, or what he called ‘Leviathan’ – the state. Kant said morality transcended human experience. Nietzsche said that strong leaders made for good societies. Marx said that the state was the means of delivering economic and social progress. Again and again, we have told ourselves that there is a top–down description of the world, and a top–down prescription by which we should live.
But there is another stream of thought that has tried and usually failed to break through. Perhaps its earliest exponent was Epicurus, a Greek philosopher about whom we know very little. From what later writers said about his writings, we know that he was born in 341 BC and thought (as far as we can tell) that the physical world, the living world, human society and the morality by which we live all emerged as spontaneous phenomena, requiring no divine intervention nor a benign monarch or nanny state to explain them. As interpreted by his followers, Epicurus believed, following another Greek philosopher, Democritus, that the world consisted not of lots of special substances including spirits and humours, but simply of two kinds of thing: voids and atoms. Everything, said Epicurus, is made of invisibly small and indestructible atoms, separated by voids; the atoms obey the laws of nature and every phenomenon is the result of natural causes. This was a startlingly prescient conclusion for the fourth century BC.
Unfortunately Epicurus’s writings did not survive. But three hundred years later, his ideas were revived and explored in a lengthy, eloquent and unfinished poem, De Rerum Natura (Of the Nature of Things), by the Roman poet Titus Lucretius Carus, who probably died in mid-stanza around 49 BC, just as dictatorship was looming in Rome. Around this time, in Gustave Flaubert’s words, ‘when the gods had ceased to be, and Christ had not yet come, there was a unique moment in history, between Cicero and Marcus Aurelius when man stood alone’. Exaggerated maybe, but free thinking was at least more possible then than before or after. Lucretius was more subversive, open-minded and far-seeing than either of those politicians (Cicero admired, but disagreed with, him). His poem rejects all magic, mysticism, superstition, religion and myth. It sticks to an unalloyed empiricism.
As the Harvard historian Stephen Greenblatt has documented, a bald list of the propositions Lucretius advances in the unfinished 7,400 hexameters of De Rerum Natura could serve as an agenda for modernity. He anticipated modern physics by arguing that everything is made of different combinations of a limited set of invisible particles, moving in a void. He grasped the current idea that the universe has no creator, Providence is a fantasy and there is no end or purpose to existence, only ceaseless creation and destruction, governed entirely by chance. He foreshadowed Darwin in suggesting that nature ceaselessly experiments, and those creatures that can adapt and reproduce will thrive. He was with modern philosophers and historians in suggesting that the universe was not created for or about human beings, that we are not special, and there was no Golden Age of tranquillity and plenty in the distant past, but only a primitive battle for survival. He was like modern atheists in arguing that the soul dies, there is no afterlife, all organised religions are superstitious delusions and invariably cruel, and angels, demons or ghosts do not exist. In his ethics he thought the highest goal of human life is the enhancement of pleasure and the reduction of pain.
Thanks largely to Greenblatt’s marvellous book The Swerve, I have only recently come to know Lucretius, and to appreciate the extent to which I am, and always have been without knowing it, a Lucretian/Epicurean. Reading his poem in A.E. Stallings’s beautiful translation in my sixth decade is to be left fuming at my educators. How could they have made me waste all those years at school plodding through the tedious platitudes and pedestrian prose of Jesus Christ or Julius Caesar, when they could have been telling me about Lucretius instead, or as well? Even Virgil was writing partly in reaction to Lucretius, keen to re-establish respect for gods, rulers and top–down ideas in general. Lucretius’s notion of the ceaseless mutation of forms composed of indestructible substances – which the Spanish-born philosopher George Santayana called the greatest thought that mankind has ever hit upon – has been one of the persistent themes of my own writing. It is the central idea behind not just physics and chemistry, but evolution, ecology and economics too. Had the Christians not suppressed Lucretius, we would surely have discovered Darwinism centuries before we did.

The Lucretian heresy
It is by the thinnest of threads that we even know the poem De Rerum Natura. Although it was mentioned and celebrated by contemporaries, and charred fragments of it have been found in the Villa of the Papyri at Herculaneum (a library belonging probably to Julius Caesar’s father-in-law), it sank into obscurity for much of history. Passing quotations from it in the ninth century AD show that it was very occasionally being read by monks, but by 1417 no copy had been in wide circulation among scholars for more than a millennium. As a text it was effectively extinct. Why?
It is not hard to answer that question. Lucretius’s special contempt for all forms of superstition, and indeed his atomism, which contradicted the doctrine of transubstantiation, condemned him to obscurity once the Christians took charge. His elevation of the pleasure principle – that the pursuit of pleasure could lead to goodness and that there was nothing nice about pain – was incompatible with the recurring Christian obsession that pleasure is sinful and suffering virtuous.

Whereas Plato and Aristotle could be accommodated within Christianity, because of their belief in the immortality of the soul and the evidence for design, the Epicurean heresy was so threatening to the Christian Church that Lucretius had to be suppressed. His atheism is explicit, even Dawkinsian, in its directness. The historian of philosophy Anthony Gottlieb compares a passage from Lucretius with one from Richard Dawkins’s The Selfish Gene. The first talks of ‘the generation of living creatures’ by ‘every sort of combination and motion’; the second of how ‘unordered atoms could group themselves into ever more complex patterns until they ended up manufacturing people’. Lucretius was, carped John Dryden, at times ‘so much an atheist, he forgot to be a poet’. He talks about people ‘crushed beneath the weight of superstition’, claims that ‘it is religion breeds wickedness’ and aims to give us ‘the power to fight against the superstitions and the threats of priests’. Little wonder they tried to stamp him out.
They almost succeeded. St Jerome – keen to illustrate the wages of sin – dismissed Lucretius as a lunatic, driven mad by a love potion, who then committed suicide. No evidence to support these calumnies exists; saints do not show their sources. The charge that all Epicureans were scandalous hedonists was trumped up and spread abroad, and it persists to this day. Copies of the poem were rooted out of libraries and destroyed, as were any other Epicurean and sceptical works. Almost all traces of such materialist and humanist thought had apparently long since vanished from Europe when in 1417 a Florentine scholar and recently unemployed papal secretary named Gian Francesco Poggio Bracciolini stumbled upon a copy of the whole poem. Poggio was hunting for rare manuscripts in libraries in central Germany when he came across a copy of De Rerum Natura in a monastic library (probably at Fulda). He sent a hastily made copy to his wealthy bibliophile friend Niccolò Niccoli, whose transcription was then copied more than fifty times. In 1473 the book was printed and the Lucretian heresy began to infect minds all across Europe.

Newton’s nudge
In his passionate attachment to rationalism, materialism, naturalism, humanism and liberty, Lucretius deserves a special place in the history of Western thought, even above the beauty of his poetry. The Renaissance, the scientific revolution, the Enlightenment and the American Revolution were all inspired by people who had to some degree imbibed Lucretius. Botticelli’s Venus effectively depicts the opening scene of Lucretius’s poem. Giordano Bruno went to the stake, with his mouth pinned shut to silence his heresies, for quoting Lucretius on the recombination of atoms and the awe with which we should embrace the idea that human beings are not the purpose of the universe. Galileo’s Lucretian atomism, as well as his Copernican heliocentrism, was used against him at his trial. Indeed, the historian of science Catherine Wilson has argued that the whole of seventeenth-century empiricism, started by Pierre Gassendi in opposition to Descartes, and taken up by the most influential thinkers of the age, including Thomas Hobbes, Robert Boyle, John Locke, Gottfried Leibniz and Bishop Berkeley, was fuelled to a remarkable extent by the sudden popularity of Lucretius.
As Lucretian ideas percolated, the physicists were the first to see where they led. Isaac Newton became acquainted with Epicurean atomism as a student at Cambridge, when he read a book by Walter Charleton expounding Gassendi’s interpretation of Lucretius. Later he acquired a Latin edition of De Rerum Natura itself, which survives from his library and shows signs of heavy use. He echoed Lucretian ideas about voids between atoms throughout his books, especially the Opticks.
Newton was by no means the first modern thinker to banish a skyhook, but he was one of the best. He explained the orbits of the planets and the falling of apples by gravity, not God. In doing so, he did away with the need for perpetual divine interference and supervision by an overworked creator. Gravity kept the earth orbiting the sun without having to be told. Jehovah might have kicked the ball, but it rolled down the hill of its own accord.
Yet Newton’s disenthralment was distinctly limited. He was furious with anybody who read into this that God might not be in ultimate charge, let alone not exist. He asserted firmly that: ‘This most elegant system of the sun, planets, and comets could not have arisen without the design and dominion of an intelligent and powerful being.’ His reasoning was that, according to his calculations, the solar system would eventually spin off into chaos. Since it apparently did not, God must be intervening periodically to nudge the planets back into their orbits. Jehovah has a job after all, just a part-time one.

The swerve
That’s that then. A skyhook still exists, just out of sight. Again and again this was the pattern of the Enlightenment: gain a yard of ground from God, but then insist he still holds the field beyond and always will. It did not matter how many skyhooks were found to be illusory, the next one was always going to prove real. Indeed, so common is the habit of suddenly seeing design, after all the hard work has been done to show that emergence is more plausible, that I shall borrow a name for it – the swerve. Lucretius himself was the first to swerve. In a world composed of atoms whose motions were predictable, Lucretius (channelling Democritus and Epicurus) could not explain the apparent human capacity for free will. In order to do so, he suggested, arbitrarily, that atoms must occasionally swerve unpredictably, because the gods make them do so. This failure of nerve on the part of the poet has been known since as the Lucretian swerve, but I intend to use the same phrase more generally for each occasion on which I catch a philosopher swerving to explain something he struggles to understand, and positing an arbitrary skyhook. Watch out, in the pages that follow, for many Lucretian swerves.
Newton’s rival, Gottfried Leibniz, in his 1710 treatise on theodicy, attempted a sort of mathematical proof that God existed. Evil stalked the world, he concluded, the better to bring out the best in people. God was always calculating carefully how to minimise evil, if necessary by allowing disasters to occur that killed more bad people than good. Voltaire mocked Leibniz’s ‘optimism’, a word that then meant almost the opposite of what it means today: that the world was perfect and unimprovable (‘optimal’), because God had made it. After 60,000 people died in the Lisbon earthquake of 1755, on the morning of All Saints’ Day when the churches were full, theologians followed Leibniz in explaining helpfully that Lisbon had earned its punishment by sinning. This was too much for Voltaire, who asked sardonically in a poem: ‘Was then more vice in fallen Lisbon found/Than Paris, where voluptuous joys abound?’
Newton’s French follower Pierre-Louis Maupertuis went to Swedish Lapland to prove that the earth was flattened towards the poles, as Newtonian mechanics predicted. He then moved on from Newton by rejecting other arguments for the existence of God founded on the wonders of nature, or the regularity of the solar system. But having gone thus far, he suddenly stopped (his Lucretian swerve), concluding that his own ‘least action’ principle to explain motion displayed such wisdom on the part of nature that it must be the product of a wise creator. Or, to paraphrase Maupertuis, if God’s as clever as me, he must exist. A blazing non sequitur.
Voltaire, perhaps irritated by the fact that his mathematically gifted mistress Emilie, Marquise du Châtelet, had slept with Maupertuis and had written in defence of Leibniz, then based his character Dr Pangloss in his novel Candide on an amalgam of Leibniz and Maupertuis. Pangloss remains blissfully persuaded – and convinces the naïve Candide – that this is the best of all possible worlds, even as they both experience syphilis, shipwreck, earthquake, fire, slavery and being hanged. Voltaire’s contempt for theodicy derived directly and explicitly from Lucretius, whose arguments he borrowed throughout life, styling himself at one point the ‘latter-day Lucretius’.

Pasta or worms?
Voltaire was by no means the first poet or prose stylist to draw upon Lucretius, nor would he be the last. Thomas More tried to reconcile Lucretian pleasure with faith in Utopia. Montaigne quoted Lucretius frequently, and echoed him in saying ‘the world is but a perennial movement … all things in it are in constant motion’; he recommended that we ‘fall back into Epicurus’ infinity of atoms’. Britain’s Elizabethan and Jacobean poets, including Edmund Spenser, William Shakespeare, John Donne and Francis Bacon, all play with themes of explicit materialism and atomism that came either directly or indirectly from Lucretius. Ben Jonson heavily annotated his Dutch edition of Lucretius. Machiavelli copied out De Rerum Natura in his youth. Molière, Dryden and John Evelyn translated it; John Milton and Alexander Pope emulated, echoed and attempted to rebut it.
Thomas Jefferson, who collected five Latin versions of De Rerum Natura along with translations into three languages, declared himself an Epicurean, and perhaps deliberately echoed Lucretius in his phrase ‘the pursuit of happiness’. The poet and physician Erasmus Darwin, who helped inspire not just his evolutionary grandson but many of the Romantic poets too, wrote his epic, erotic, evolutionary, philosophical poems in conscious imitation of Lucretius. His last poem, The Temple of Nature, was intended as his version of De Rerum Natura.
The influence of this great Roman materialist culminates rather neatly in the moment when Mary Shelley had the idea for Frankenstein. She had her epiphany after listening to her husband Percy discuss with George, Lord Byron, the coming alive of ‘vermicelli’ that had been left to ferment, in experiments of ‘Dr Darwin’. Given that Shelley, Byron and Erasmus Darwin were all enthusiastic Lucretians, perhaps she misheard and, rather than debating the resurrection of pasta, they were actually quoting the passage in De Rerum Natura (and Darwin’s experimental imitation of it) where Lucretius discusses spontaneous generation of little worms in rotting vegetable matter – ‘vermiculos’. Here is the history of Western thought in a single incident: a Classical writer, rediscovered in the Renaissance, who inspired the Enlightenment and influenced the Romantic movement, then sparks the most famous Gothic novel, whose villain becomes a recurring star of modern cinema.
Lucretius haunted philosophers of the Enlightenment, daring free thinkers further down the path that leads away from creationist thinking. Pierre Bayle, in his Thoughts on the Comet of 1680, closely followed Lucretius’s Book 5 in suggesting that the power of religion derived from fear. Montesquieu channelled Lucretius in the very first sentence of The Spirit of the Laws (1748): ‘Laws in their most general signification, are the necessary relations arising from the nature of things’ (my emphasis). Denis Diderot in his Philosophical Thoughts echoed Lucretius to the effect that nature was devoid of purpose, the motto for his book being a line from De Rerum Natura: ‘Now we see out of the dark what is in the light’. Later, in The Letter on the Blind and the Deaf, Diderot suggested that God himself was a mere product of the senses, and went to jail for the heresy. The atheist philosopher Paul-Henri, baron d’Holbach, took Lucretian ideas to their ultimate extreme in his Le Système de la Nature of 1770. D’Holbach saw nothing but cause and effect, and matter in motion: ‘no necessity to have recourse to supernatural powers to account for the formation of things’.
One place where such scepticism began to take hold was in geology. James Hutton, a farmer from southern Scotland, in 1785 laid out a theory that the rocks beneath our feet were made by processes of erosion and uplift that are still at work today, and that no great Noachian flood was needed to explain seashells on mountaintops: ‘Hence we are led to conclude, that the greater part of our land, if not the whole, had been produced by operations natural to this globe.’ He glimpsed the vast depths of geological time, saying famously, ‘We find no vestige of a beginning – no prospect of an end.’ For this he was vilified as a blasphemer and an atheist. The leading Irish scientist Richard Kirwan even went as far as to hint that ideas like Hutton’s contributed to dangerous events like the French Revolution, remarking on how they had ‘proved too favourable to the structure of various systems of atheism or infidelity, as these have been in their turn to turbulence and immorality’.

No need of that hypothesis
The physicists, who had set the pace in tearing down skyhooks, continued to surprise the world. It fell to Pierre-Simon Laplace (using Emilie du Châtelet’s improvements to cumbersome Newtonian geometry) to take Newtonism to its logical conclusion. Laplace argued that the present state of the universe was ‘the effect of its past and the cause of its future’. If an intellect were powerful enough to calculate every effect of every cause, then ‘nothing would be uncertain and the future just like the past would be present before its eyes’. By mathematically showing that there was no need in the astronomical world even for Newton’s Nudge God to intervene to keep the solar system stable, Laplace took away that skyhook. ‘I had no need of that hypothesis,’ he told Napoleon.
The certainty of Laplace’s determinism eventually crumbled in the twentieth century under assault from two directions – quantum mechanics and chaos theory. At the subatomic level, the world turned out to be very far from Newtonian, with uncertainty built into the very fabric of matter. Even at the astronomical scale, Henri Poincaré discovered that some arrangements of heavenly bodies resulted in perpetual instability. And the meteorologist Edward Lorenz realised that exquisite sensitivity to initial conditions made weather systems inherently unpredictable; he asked, famously, in the title of a lecture in 1972: ‘Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas?’
But here’s the thing. These assaults on determinism came from below, not above; from within, not without. If anything they made the world a still more Lucretian place. The impossibility of forecasting the position of an electron, or the weather a year ahead, made the world proof against the confidence of prognosticators and experts and planners.

The puddle that fits its pothole
Briefly in the late twentieth century, some astronomers bought into a new skyhook called the ‘anthropic principle’. In various forms, this argued that the conditions of the universe, and the particular values of certain parameters, seemed ideally suited to the emergence of life. In other words, if things had been just a little bit different, then stable suns, watery worlds and polymerised carbon would not be possible, so life could never get started. This stroke of cosmic luck implied that we lived in some kind of privileged universe uncannily suitable for us, and this was somehow spooky and cool.
Certainly, there do seem to be some remarkably fortuitous features of our own universe without which life would be impossible. If the cosmological constant were any larger, the pressure of antigravity would be greater and the universe would have blown itself to smithereens long before galaxies, stars and planets could have evolved. Electrical and nuclear forces are just the right strength for carbon to be one of the most common elements, and carbon is vital to life because of its capacity to form multiple bonds. Molecular bonds are just the right strength to be stable but breakable at the sort of temperatures found at the typical distance of a planet from a star: any weaker and the universe would be too hot for chemistry, any stronger and it would be too cold.
True, but to anybody outside a small clique of cosmologists who had spent too long with their telescopes, the idea of the anthropic principle was either banal or barmy, depending on how seriously you take it. It so obviously confuses cause and effect. Life adapted to the laws of physics, not vice versa. In a world where water is liquid, carbon can polymerise and solar systems last for billions of years, then life emerged as a carbon-based system with water-soluble proteins in fluid-filled cells. In a different world, a different kind of life might emerge, if it could. As David Waltham puts it in his book Lucky Planet, ‘It is all but inevitable that we occupy a favoured location, one of the rare neighbourhoods where by-laws allow the emergence of intelligent life.’ No anthropic principle needed.
Waltham himself goes on to make the argument that the earth may be rare or even unique because of the string of ridiculous coincidences required to produce a planet with a stable temperature with liquid water on it for four billion years. The moon was a particular stroke of luck, having been formed by an interplanetary collision and having then withdrawn slowly into space as a result of the earth’s tides (it is now ten times as far away as when it first formed). Had the moon been a tiny bit bigger or smaller, and the earth’s day a tiny bit longer or shorter after the collision, then we would have had an unstable axis and a tendency to periodic life-destroying climate catastrophes that would have precluded the emergence of intelligent life. God might claim credit for this lunar coincidence, but Gaia – James Lovelock’s theory that life itself controls the climate – cannot. So we may be extraordinarily lucky and vanishingly rare. But that does not make us special: we would not be here if it had not worked out so far.
Leave the last word on the anthropic principle to Douglas Adams: ‘Imagine a puddle waking up one morning and thinking, “This is an interesting world I find myself in – an interesting hole I find myself in – fits me rather neatly, doesn’t it? In fact it fits me staggeringly well, may have been made to have me in it!”’

Thinking for ourselves
It is no accident that political and economic enlightenment came in the wake of Newton and his followers. As David Bodanis argues in his biography of Voltaire and his mistress, Passionate Minds, people would be inspired by Newton’s example to question traditions around them that had apparently been accepted since time immemorial. ‘Authority no longer had to come from what you were told by a priest or a royal official, and the whole establishment of the established church or the state behind them. It could come, dangerously, from small, portable books – and even from ideas you came to yourself.’
Gradually, by reading Lucretius and by experiment and thought, the Enlightenment embraced the idea that you could explain astronomy, biology and society without recourse to intelligent design. Nicolaus Copernicus, Galileo Galilei, Baruch Spinoza and Isaac Newton made their tentative steps away from top–down thinking and into the bottom–up world. Then, with gathering excitement, Locke and Montesquieu, Voltaire and Diderot, Hume and Smith, Franklin and Jefferson, Darwin and Wallace, would commit similar heresies against design. Natural explanations displaced supernatural ones. The emergent world emerged.

2
The Evolution of Morality
O miserable minds of men! O hearts that cannot see!
Beset by such great dangers and in such obscurity
You spend your lot of life! Don’t you know it’s plain
That all your nature yelps for is a body free from pain,
And, to enjoy pleasure, a mind removed from fear and care?
Lucretius, De Rerum Natura, Book 2, lines 1–5
Soon a far more subversive thought evolved from the followers of Lucretius and Newton. What if morality itself was not handed down from the Judeo-Christian God as a prescription? And was not even the imitation of a Platonic ideal, but was a spontaneous thing produced by social interaction among people seeking to find ways to get along? In 1689, John Locke argued for religious tolerance – though not for atheists or Catholics – and brought a storm of protest down upon his head from those who saw government enforcement of religious orthodoxy as the only thing that prevented society from descending into chaos. But the idea of spontaneous morality did not die out, and some time later David Hume and then Adam Smith began to dust it off and show it to the world: morality as a spontaneous phenomenon. Hume realised that it was good for society if people were nice to each other, so he thought that rational calculation, rather than moral instruction, lay behind social cohesion. Smith went one step further, and suggested that morality emerged unbidden and unplanned from a peculiar feature of human nature: sympathy.
Quite how a shy, awkward, unmarried professor from Kirkcaldy who lived with his mother and ended his life as a customs inspector came to have such piercing insights into human nature is one of history’s great mysteries. But Adam Smith was lucky in his friends. Being taught by the brilliant Irish lecturer Francis Hutcheson, talking regularly with David Hume, and reading Denis Diderot’s new Encyclopédie, with its relentless interest in bottom–up explanations, gave him plenty with which to get started. At Balliol College, Oxford, he found the lecturers ‘had altogether given up even the pretence of teaching’, but the library was ‘marvellous’. Teaching in Glasgow gave him experience of merchants in a thriving trading port and ‘a feudal, Calvinist world dissolving into a commercial, capitalist one’. Glasgow had seen explosive growth thanks to increasing trade with the New World in the eighteenth century, and was fizzing with entrepreneurial energy. Later, floating around France as the tutor to the young Duke of Buccleuch enabled Smith to meet d’Holbach and Voltaire, who thought him ‘an excellent man. We have nothing to compare with him.’ But that was after his first, penetrating book on human nature and the evolution of morality. Anyway, somehow this shy Scottish man stumbled upon the insights to explore two gigantic ideas that were far ahead of their time. Both concerned emergent, evolutionary phenomena: things that are the result of human action, but not the result of human design.
Adam Smith spent his life exploring and explaining such emergent phenomena, beginning with language and morality, moving on to markets and the economy, ending with the law, though he never published his planned book on jurisprudence. Smith began lecturing on moral philosophy at Glasgow University in the 1750s, and in 1759 he put together his lectures as a book, The Theory of Moral Sentiments. Today it seems nothing remarkable: a dense and verbose eighteenth-century ramble through ideas about ethics. It is not a rattling read. But in its time it was surely one of the most subversive books ever written. Remember that morality was something that you had to be taught, and that without Jesus telling us what to teach, could not even exist. To try to raise a child without moral teaching and expect him to behave well was like raising him without Latin and expecting him to recite Virgil. Adam Smith begged to differ. He thought that morality owed little to teaching and nothing to reason, but evolved by a sort of reciprocal exchange within each person’s mind as he or she grew from childhood, and within society. Morality therefore emerged as a consequence of certain aspects of human nature in response to social conditions.
As the Adam Smith scholar James Otteson has observed, Smith, who wrote a history of astronomy early in his career, saw himself as following explicitly in Newton’s footsteps, both by looking for regularities in natural phenomena and by employing the parsimony principle of using as simple an explanation as possible. He praised Newton in his history of astronomy for the fact that he ‘discovered that he could join together the movement of the planets by so familiar a principle of connection’. Smith was also part of a Scottish tradition that sought cause and effect in the history of a topic: instead of asking what is the perfect Platonic ideal of a moral system, ask rather how it came about.
It was exactly this modus operandi that Smith brought to moral philosophy. He wanted to understand where morality came from, and to explain it simply. As so often with Adam Smith, he deftly avoided the pitfalls into which later generations would fall. He saw straight through the nature-versus-nurture debate and came up with a nature-via-nurture explanation that was far ahead of its time. He starts The Theory of Moral Sentiments with a simple observation: we all enjoy making other people happy.
How selfish soever man may be supposed, there are evidently some principles in his nature, which interest him in the fortunes of others, and render their happiness necessary to him, though he derives nothing from it, but the pleasure of seeing it.
And we all desire what he calls mutual sympathy of sentiments: ‘Nothing pleases us more than to observe in other men a fellow-feeling with all the emotions of our own breast.’ Yet the childless Smith observed that a child does not have a sense of morality, and has to find out the hard way that he or she is not the centre of the universe. Gradually, by trial and error, a child discovers what behaviour leads to mutual sympathy of sentiments, and therefore can make him or her happy by making others happy. It is through everybody accommodating their desires to those of others that a system of shared morality arises, according to Smith. An invisible hand (the phrase first appears in Smith’s lectures on astronomy, then here in Moral Sentiments and once more in The Wealth of Nations) guides us towards a common moral code. Otteson explains that the hand is invisible, because people are not setting out to create a shared system of morality; they aim only to achieve mutual sympathy now with the people they are dealing with. The parallel with Smith’s later explanation of the market is clear to see: both are phenomena that emerge from individual actions, but not from deliberate design.
Smith’s most famous innovation in moral philosophy is the ‘impartial spectator’, whom we imagine to be watching over us when we are required to be moral. In other words, just as we learn to be moral by judging others’ reactions to our actions, so we can imagine those reactions by positing a neutral observer who embodies our conscience. What would a disinterested observer, who knows all the facts, think of our conduct? We get pleasure from doing what he recommends, and guilt from not doing so. Voltaire put it pithily: ‘The safest course is to do nothing against one’s conscience. With this secret, we can enjoy life and have no fear from death.’

How morality emerges
There is, note, no need for God in this philosophy. As a teacher of Natural Theology among other courses, Smith was no declared atheist, but occasionally he strays dangerously close to Lucretian scepticism. It is hardly surprising that he at least paid lip service to God, because three of his predecessors at Glasgow University, including Hutcheson, had been charged with heresy for not sticking to Calvinist orthodoxy. The mullahs of the day were vigilant. There remains one tantalising anecdote from a student, a disapproving John Ramsay, that Smith ‘petitioned the Senatus … to be relieved of the duty of opening his class with a prayer’, and, when refused, that his lectures led his students to ‘draw an unwarranted conclusion, viz. that the great truths of theology, together with the duties which man owes to God and his neighbours, may be discovered in the light of nature without any special revelation’. The Adam Smith scholar Gavin Kennedy points out that in the sixth edition (1790) of The Theory of Moral Sentiments, published after his devout mother died, Smith excised or changed many religious references. He may have been a closet atheist, but he might also have been a theist, not taking Christianity literally, but assuming that some kind of god implanted benevolence in the human breast.
Morality, in Smith’s view, is a spontaneous phenomenon, in the sense that people decide their own moral codes by seeking mutual sympathy of sentiments in society, and moralists then observe and record these conventions and teach them back to people as top–down instructions. Smith is essentially saying that the priest who tells you how to behave is basing his moral code on observations of what moral people actually do.
There is a good parallel with teachers of grammar, who do little more than codify the patterns they see in everyday speech and tell them back to us as rules. Only occasionally, as with split infinitives, do their rules go counter to what good writers do. Of course, it is possible for a priest to invent and promote a new rule of morality, just as it is possible for a language maven to invent and promote a new rule of grammar or syntax, but it is remarkably rare. In both cases, what happens is that usage changes and the teachers gradually go along with it, sometimes pretending to be the authors.
So, for example, in my lifetime, disapproval of homosexuality has become ever more morally unacceptable in the West, while disapproval of paedophilia has become ever more morally mandatory. Male celebrities who broke the rules with under-age girls long ago and thought little of it now find themselves in court and in disgrace; while others who broke the (then) rules with adult men long ago and risked disgrace can now openly speak of their love. Don’t get me wrong: I approve of both these trends – but that’s not my point. My point is that the changes did not come about because some moral leader or committee ordained them, at least not mainly, let alone that some biblical instruction to make the changes came to light. Rather, the moral negotiation among ordinary people gradually changed the common views in society, with moral teachers reflecting the changes along the way. Morality, quite literally, evolved. In just the same way, words like ‘enormity’ and ‘prevaricate’ have changed their meaning in my lifetime, though no committee met to consider an alteration in the meaning of the words, and there is very little the grammarians can do to prevent it. (Indeed, grammarians spend most of their time deploring linguistic innovation.) Otteson points out that Smith in his writing uses the word ‘brothers’ and ‘brethren’ interchangeably, with a slight preference for the latter. Today, however, the rules have changed, and you would only use ‘brethren’ for the plural of brothers if you were being affected, antiquarian or mocking.
Smith was acutely aware of this parallel with language, which is why he insisted on appending his short essay on the origin of language to his Theory of Moral Sentiments in its second and later editions. In the essay, Smith makes the point that the laws of language are an invention, rather than a discovery – unlike, say, the laws of physics. But they are still laws: children are corrected by their parents and their peers if they say ‘bringed’ instead of ‘brought’. So language is an ordered system, albeit arrived at spontaneously through some kind of trial and error among people trying to make ‘their mutual wants intelligible to each other’. Nobody is in charge, but the system is orderly. What a peculiar and novel idea. What a subversive thought. If God is not needed for morality, and if language is a spontaneous system, then perhaps the king, the pope and the official are not quite as vital to the functioning of an orderly society as they pretend?
As the American political scientist Larry Arnhart puts it, Smith is a founder of a key tenet of liberalism, because he rejects the Western tradition that morality must conform to a transcendental cosmic order, whether in the form of a cosmic God, a cosmic Reason, or a cosmic Nature. ‘Instead of this transcendental moral cosmology, liberal morality is founded on an empirical moral anthropology, in which moral order arises from within human experience.’
Above all, Smith allows morality and language to change, to evolve. As Otteson puts it, for Smith, moral judgements are generalisations arrived at inductively on the basis of past experience. We log our own approvals and disapprovals of our own and others’ conduct, and observe others doing the same. ‘Frequently repeated patterns of judgement can come to have the appearance of moral duties or even commandments from on high, while patterns that recur with less frequency will enjoy commensurately less confidence.’ It is in the messy empirical world of human experience that we find morality. Moral philosophers observe what we do; they do not invent it.

Better angels
Good grief. Here is an eighteenth-century, middle-class Scottish professor saying that morality is an accidental by-product of the way human beings adjust their behaviour towards each other as they grow up; saying that morality is an emergent phenomenon that arises spontaneously among human beings in a relatively peaceful society; saying that goodness does not need to be taught, let alone associated with the superstitious belief that it would not exist but for the divine origin of an ancient Palestinian carpenter. Smith sounds remarkably like Lucretius (whom he certainly read) in parts of his Moral Sentiments book, but he also sounds remarkably like Steven Pinker of Harvard University today discussing the evolution of society towards tolerance and away from violence.
As I will explore, there is in fact a fascinating convergence here. Pinker’s account of morality growing strongly over time is, at bottom, very like Smith’s. To put it at its baldest, a Smithian child, developing his sense of morality in a violent medieval society in Prussia (say) by trial and error, would end up with a moral code quite different from such a child growing up in a peaceful German (say) suburb today. The medieval person would be judged moral if he killed people in defence of his honour or his city; whereas today he would be thought moral if he refused meat and gave copiously to charity, and thought shockingly immoral if he killed somebody for any reason at all, and especially for honour. In Smith’s evolutionary view of morality, it is easy to see how morality is relative and will evolve to a different end point in different societies, which is exactly what Pinker documents.
Pinker’s book The Better Angels of Our Nature chronicles the astonishing and continuing decline in violence of recent centuries. We have just lived through the decade with the lowest global death rate in warfare on record; we have seen homicide rates fall by 99 per cent in most Western countries since medieval times; we have seen racial, sexual, domestic, corporal, capital and other forms of violence in headlong retreat; we have seen discrimination and prejudice go from normal to disgraceful; we have come to disapprove of all sorts of violence as entertainment, even against animals. This is not to say there is no violence left, but the declines that Pinker documents are quite remarkable, and our horror at the violence that still remains implies that the decline will continue. Our grandchildren will stand amazed at some of the things we still find quite normal.
To explain these trends, Pinker turns to a theory first elaborated by Norbert Elias, who had the misfortune to publish it as a Jewish refugee from Germany in Britain in 1939, shortly before he was interned by the British on the grounds that he was German. Not a good position from which to suggest that violence and coercion were diminishing. It was not until it was translated into English three decades later in 1969, in a happier time, that his theory was widely appreciated. Elias argued that a ‘civilising process’ had sharply altered the habits of Europeans since the Middle Ages, that as people became more urban, crowded, capitalist and secular, they became nicer too. He hit upon this paradoxical realisation – for which there is now, but was not then, strong statistical evidence – by combing the literature of medieval Europe and documenting the casual, frequent and routine violence that was then normal. Feuds flared into murders all the time; mutilation and death were common punishments; religion enforced its rules with torture and sadism; entertainments were often violent. Barbara Tuchman in her book A Distant Mirror gives an example of a popular game in medieval France: people with their hands tied behind their backs competed to kill a cat nailed to a post by battering it with their heads, risking the loss of an eye from the scratching of the desperate cat in the process. Ha ha.
Elias argued that moral standards evolved; to illustrate the point he documented the etiquette guides published by Erasmus and other philosophers. These guides are full of suggestions about table manners, toilet manners and bedside manners that seem unnecessary to state, but are therefore revealing: ‘Don’t greet someone while they are urinating or defecating … don’t blow your nose on to the table-cloth or into your fingers, sleeve or hat … turn away when spitting lest your saliva fall on someone … don’t pick your nose while eating.’ In short, the very fact that these injunctions needed mentioning implies that medieval European life was pretty disgusting by modern standards. Pinker comments: ‘These are the kind of directives you’d expect a parent to give to a three-year-old, not a great philosopher to a literate readership.’ Elias argued that the habits of refinement, self-control and consideration that are second nature to us today had to be acquired. As time went by, people ‘increasingly inhibited their impulses, anticipated the long-term consequences of their actions, and took other people’s thoughts and feelings into consideration’. In other words, not blowing your nose on the tablecloth was all one with not stabbing your neighbour. It’s a bit like a historical version of the broken-window theory: intolerance of small crimes leads to intolerance of big ones.

Doux commerce
But how were these gentler habits acquired? Elias realised that we have internalised the punishment for breaking these rules (and the ones against more serious violence) in the form of a sense of shame. That is to say, just as Adam Smith argued, we rely on an impartial spectator, and we learned earlier and earlier in life to see his point of view as he became ever more censorious. But why? Elias and Pinker give two chief reasons: government and commerce. With an increasingly centralised government focused on the king and his court, rather than local warlords, people had to behave more like courtiers and less like warriors. That meant not only less violent, but also more refined. Leviathan enforced the peace, if only to have more productive peasants to tax. Revenge for murder was nationalised as a crime to be punished, rather than privatised as a wrong to be righted. At the same time, commerce led people to value the opportunity to be trusted by a stranger in a transaction. With increasingly money-based interactions among strangers, people increasingly began to think of neighbours as potential trading partners rather than potential prey. Killing the shopkeeper makes no sense. So empathy, self-control and morality became second nature, though morality was always a double-edged sword, as likely to cause violence as to prevent it through most of history.
Lao Tzu saw this twenty-six centuries ago: ‘The more prohibitions you have, the less virtuous people will be.’ Montesquieu’s phrase for the calming effect of trade on human violence, intolerance and enmity was ‘doux commerce’ – sweet commerce. And he has been amply vindicated in the centuries since. The richer and more market-oriented societies have become, the nicer people have behaved. Think of the Dutch after 1600, the Swedes after 1800, the Japanese after 1945, the Germans likewise, the Chinese after 1978. The long peace of the nineteenth century coincided with the growth of free trade. The paroxysm of violence that convulsed the world in the first half of the twentieth century coincided with protectionism.
Countries where commerce thrives have far less violence than countries where it is suppressed. Does Syria suffer from a surfeit of commerce? Or Zimbabwe? Or Venezuela? Is Hong Kong largely peaceful because it eschews commerce? Or California? Or New Zealand? I once interviewed Pinker in front of an audience in London, and was very struck by the passion of his reply when an audience member insisted that profit was a form of violence and was on the increase. Pinker simply replied with a biographical story. His grandfather, born in Warsaw in 1900, emigrated to Montreal in 1926, worked for a shirt company (the family had made gloves in Poland), was laid off during the Great Depression, and then, with his grandmother, sewed neckties in his apartment, eventually earning enough to set up a small factory, which they ran until their deaths. And yes, it made a small profit (just enough to pay the rent and bring up Pinker’s mother and her brothers), and no, his grandfather never hurt a fly. Commerce, he said, cannot be equated with violence.
‘Participation in capitalist markets and bourgeois virtues has civilized the world,’ writes Deirdre McCloskey in her book The Bourgeois Virtues. ‘Richer and more urban people, contrary to what the magazines of opinion sometimes suggest, are less materialistic, less violent, less superficial than poor and rural people’ (emphasis in original).
How is it then that conventional wisdom – especially among teachers and religious leaders – maintains that commerce is the cause of nastiness, not niceness? That the more we grow the economy and the more we take part in ‘capitalism’, the more selfish, individualistic and thoughtless we become? This view is so widespread it even leads such people to assume – against the evidence – that violence is on the increase. As Pope Francis put it in his 2013 apostolic exhortation Evangelii Gaudium, ‘unbridled’ capitalism has made the poor miserable even as it enriched the rich, and is responsible for the fact that ‘lack of respect for others and violence are on the rise’. Well, this is just one of those conventional wisdoms that is plain wrong. There has been a decline in violence, not an increase, and it has been fastest in the countries with the least bridled versions of capitalism – not that there is such a thing as unbridled capitalism anywhere in the world. The ten most violent countries in the world in 2014 – Syria, Afghanistan, South Sudan, Iraq, Somalia, Sudan, Central African Republic, Democratic Republic of the Congo, Pakistan and North Korea – are all among the least capitalist. The ten most peaceful – Iceland, Denmark, Austria, New Zealand, Switzerland, Finland, Canada, Japan, Belgium and Norway – are all firmly capitalist.
My reason for describing Pinker’s account of the Elias theory in such detail is because it is a thoroughly evolutionary argument. Even when Pinker credits Leviathan – government policy – for reducing violence, he implies that the policy is as much an attempt to reflect changing sensibility as to change sensibility. Besides, even Leviathan’s role is unwitting: it did not set out to civilise, but to monopolise. It is an extension of Adam Smith’s theory, uses Smith’s historical reasoning, and posits that the moral sense, and the propensity to violence and sordid behaviour, evolve. They evolve not because somebody ordains that they should evolve, but spontaneously. The moral order emerges and continually changes. Of course, it can evolve towards greater violence, and has done so from time to time, but mostly it has evolved towards peace, as Pinker documents in exhaustive detail. In general, over the past five hundred years in Europe and much of the rest of the world, people became steadily less violent, more tolerant and more ethical, without even realising they were doing so. It was not until Elias spotted the trend in words, and later historians then confirmed it in statistics, that we even knew it was happening. It happened to us, not we to it.

The evolution of law
It is an extraordinary fact, unremembered by most, that in the Anglosphere people live by laws that did not originate with governments at all. British and American law derives ultimately from the common law, which is a code of ethics that was written by nobody and everybody. That is to say, unlike the Ten Commandments or most statute law, the common law emerges and evolves through precedent and adversarial argument. It ‘evolves incrementally, rather than leaps convulsively or stagnates idly’, in the words of legal scholar Allan Hutchinson. It is ‘a perpetual work-in-progress – evanescent, dynamic, messy, productive, tantalizing, and bottom up’. The author Kevin Williamson reminds us to be astonished by this fact: ‘The most successful, most practical, most cherished legal system in the world did not have an author. Nobody planned it, no sublime legal genius thought it up. It emerged in an iterative, evolutionary manner much like a language emerges.’ Trying to replace the common law with a rationally designed law is, he jests, like trying to design a better rhinoceros in a laboratory.
Judges change the common law incrementally, adjusting legal doctrine case by case to fit the facts on the ground. When a new puzzle arises, different judges come to different conclusions about how to deal with it, and the result is a sort of genteel competition, as successive courts gradually choose which line they prefer. In this sense, the common law is built by natural selection.
Common law is a peculiarly English development, found mainly in countries that are former British colonies or have been influenced by the Anglo-Saxon tradition, such as Australia, India, Canada and the United States. It is a beautiful example of spontaneous order. Before the Norman Conquest, different rules and customs applied in different regions of England. But after 1066 judges created a common law by drawing on customs across the country, with an occasional nod towards the rulings of monarchs. Powerful Plantagenet kings such as Henry II set about standardising the laws to make them consistent across the country, and absorbed much of the common law into the royal courts. But they did not invent it. By contrast, European rulers drew on Roman law, and in particular a compilation of rules issued by the Emperor Justinian in the sixth century that was rediscovered in eleventh-century Italy. Civil law, as practised on the continent of Europe, is generally written by government.
In common law, the elements needed to prove the crime of murder, for instance, are contained in case law rather than defined by statute. To ensure consistency, courts abide by precedents set by higher courts examining the same issue. In civil-law systems, by contrast, codes and statutes are designed to cover all eventualities, and judges have a more limited role of applying the law to the case in hand. Past judgements are no more than loose guides. When it comes to court cases, judges in civil-law systems tend towards being investigators, while their peers in common-law systems act as arbiters between parties that present their arguments.
Which of these systems you prefer depends on your priorities. Jeremy Bentham argued that the common law lacked coherence and rationality, and was a repository of ‘dead men’s thoughts’. The libertarian economist Gordon Tullock, a founder of the public-choice school, argued that the common-law method of adjudication is inherently inferior because of its duplicative costs, inefficient means of ascertaining the facts, and scope for wealth-destroying judicial activism.
Others respond that the civil-law tradition, in its tolerance of arbitrary confiscation by the state and its tendency to mandate that which it does not outlaw, has proved less a friend of liberty than the common law. Friedrich Hayek advanced the view that the common law contributed to greater economic welfare because it was less interventionist, less under the tutelage of the state, and was better able to respond to change than civil legal systems; indeed, it was for him a legal system that led, like the market, to a spontaneous order.
A lot of Britain’s continuing discomfort with the European Union derives from the contrast between the British tradition of bottom–up law-making and the top–down Continental version. The European Parliament member Daniel Hannan frequently reminds his colleagues of the bias towards liberty of the common law: ‘This extraordinary, sublime idea that law does not emanate from the state but that rather there was a folk right of existing law that even the king and his ministers were subject to.’
The competition between these two traditions is healthy. But the point I wish to emphasise is that it is perfectly possible to have law that emerges, rather than is created. To most people that is a surprise. They vaguely assume in the backs of their minds that the law is always invented, rather than that it evolved. As the economist Don Boudreaux has argued, ‘Law’s expanse is so vast, its nuances so many and rich, and its edges so frequently changing that the popular myth that law is that set of rules designed and enforced by the state becomes increasingly absurd.’
It is not just the common law that evolves through replication, variation and selection. Even civil law, and constitutional interpretation, see gradual changes, some of which stick and some of which do not. The decisions as to which of these changes stick are not taken by omniscient judges, and nor are they random; they are chosen by the process of selection. As the legal scholar Oliver Goodenough argues, this places the evolutionary explanation at the heart of the system as opposed to appealing to an outside force. Both ‘God made it happen’ and ‘Stuff happens’ are external causes, whereas evolution is a ‘rule-based cause internal to time and space as we experience them’.

3
The Evolution of Life
A mistake I strongly urge you to avoid for all you’re worth,
An error in this matter you should give the widest berth:
Namely don’t imagine that the bright lights of your eyes
Were purpose made so we could look ahead, or that our thighs
And calves were hinged together at the joints and set on feet
So we could walk with lengthy stride, or that forearms fit neat
To brawny upper arms, and are equipped on right and left
With helping hands, solely that we be dexterous and deft
At undertaking all the things we need to do to live,
This rationale and all the others like it people give,
Jumbles effect and cause, and puts the cart before the horse …
Lucretius, De Rerum Natura, Book 4, lines 823–33
Charles Darwin did not grow up in an intellectual vacuum. It is no accident that alongside his scientific apprenticeship he had a deep inculcation in the philosophy of the Enlightenment. Emergent ideas were all around him. He read his grandfather’s Lucretius-emulating poems. ‘My studies consist in Locke and Adam Smith,’ he wrote from Cambridge, citing two of the most bottom–up philosophers. Probably it was Smith’s The Moral Sentiments that he read, since it was more popular in universities than The Wealth of Nations. Indeed, one of the books that Darwin read in the autumn of 1838 after returning from the voyage of the Beagle and when about to crystallise the idea of natural selection was Dugald Stewart’s biography of Adam Smith, from which he got the idea of competition and emergent order. The same month he read, or reread, the political economist Robert Malthus’s essay on population, and was struck by the notion of a struggle for existence in which some thrived and others did not, an idea which helped trigger the insight of natural selection. He was friendly at the time with Harriet Martineau, a firebrand radical who campaigned for the abolition of slavery and also for the ‘marvellous’ free-market ideas of Adam Smith. She was a close confidante of Malthus. Through his mother’s (and future wife’s) family, the Wedgwoods, Darwin moved in a circle of radicalism, trade and religious dissent, meeting people like the free-market MP and thinker James Mackintosh. The evolutionary biologist Stephen Jay Gould once went so far as to argue that natural selection ‘should be viewed as an extended analogy … to the laissez-faire economics of Adam Smith’. In both cases, Gould argued, balance and order emerged from the actions of individuals, not from external or divine control. As a Marxist, Gould surprisingly approved of this philosophy – for biology, but not for economics: ‘It is ironic that Adam Smith’s system of laissez faire does not work in his own domain of economics, for it leads to oligopoly and revolution.’
In short, Charles Darwin’s ideas evolved, themselves, from ideas of emergent order in human society that were flourishing in early-nineteenth-century Britain. The general theory of evolution came before the special theory. All the same, Darwin faced a formidable obstacle in getting people to see undirected order in nature. That obstacle was the argument from design as set out, most ably, by William Paley.
In the last book that he published, in 1802, the theologian William Paley set out the argument for biological design based upon purpose. In one of the finest statements of design logic, from an indubitably fine mind, he imagined stubbing his toe against a rock while crossing a heath, then imagined his reaction if instead his toe had encountered a watch. Picking up the watch, he would conclude that it was man-made: ‘There must have existed, at some time, and at some place or other, an artificer or artificers, who formed [the watch] for the purpose which we find it actually to answer; who comprehended its construction, and designed its use.’ If a watch implies a watchmaker, then how could the exquisite purposefulness of an animal not imply an animal-maker? ‘Every indication of contrivance, every manifestation of design, which existed in the watch, exists in the works of nature; with the difference, on the side of nature, of being greater or more, and that in a degree which exceeds all computation.’
Paley’s argument from design was not new. It was Newton’s logic applied to biology. Indeed, it was a version of one of the five arguments for the existence of God advanced by Thomas Aquinas six hundred years before: ‘Whatever lacks intelligence cannot move towards an end, unless it be directed by some being endowed with knowledge and intelligence.’ And in 1690 the high priest of common sense himself, John Locke, had effectively restated the same idea as if it were so rational that nobody could deny it. Locke found it ‘as impossible to conceive that ever bare incogitative Matter should produce a thinking, intelligent being, as that nothing should produce Matter’. Mind came first, not matter. As Dan Dennett has pointed out, Locke gave an empirical, secular, almost mathematical stamp of approval to the idea that God was the designer.

Hume’s swerve
The first person to dent this cosy consensus was David Hume. In a famous passage from his Dialogues Concerning Natural Religion (published posthumously in 1779), Hume has Cleanthes, his imaginary theist, state the argument from design in powerful and eloquent words:
Look around the world: Contemplate the whole and every part of it: You will find it to be nothing but one great machine, subdivided into an infinite number of lesser machines … All these various machines, and even their most minute parts, are adjusted to each other with an accuracy, which ravishes into admiration all men, who have ever contemplated them. The curious adapting of means to ends, exceeds the productions of human contrivance; of human design, thought, wisdom, and intelligence. Since, therefore the effects resemble each other, we are led to infer, by all the rules of analogy, that the causes also resemble. [Dialogues, 2.5/143]
It’s an inductive inference, Dennett points out: where there’s design there’s a designer, just as where there’s smoke there’s fire.
But Philo, Cleanthes’s imaginary deist interlocutor, brilliantly rebuts the logic. First, it immediately prompts the question of who designed the designer. ‘What satisfaction is there in that infinite progression?’ Then he points out the circular reasoning: God’s perfection explains the world’s design, which proves God’s perfection. And then, how do we know that God is perfect? Might he not have been a ‘stupid mechanic, who imitated others’ and ‘botched and bungled’ his way through different worlds during ‘infinite ages of world making’? Or might not the same argument prove God to be multiple gods, or a ‘perfect anthropomorphite’ with human form, or an animal, or a tree, or a ‘spider who spun the whole complicated mass from his bowels’?
Hume was now enjoying himself. Echoing the Epicureans, he began to pick holes in all the arguments of natural theology. A true believer, Philo said, would stress ‘that there is a great and immeasurable, because incomprehensible, difference between the human and the divine mind’, so it is idolatrous blasphemy to compare the deity to a mere engineer. An atheist, on the other hand, might be happy to concede the purposefulness of nature but explain it by some analogy other than a divine intelligence – as Charles Darwin eventually did.
In short, Hume, like Voltaire, had little time for divine design. By the time he finished, his alter ego Philo had effectively demolished the entire argument from design. Yet even Hume, surveying the wreckage, suddenly halted his assault and allowed the enemy forces to escape the field. In one of the great disappointments in all philosophy, Philo suddenly agrees with Cleanthes at the end, stating that if we are not content to call the supreme being God, then ‘what can we call him but Mind or Thought’? It’s Hume’s Lucretian swerve. Or is it? Anthony Gottlieb argues that if you read it carefully, Hume has buried a subtle hint here, designed not to disturb the pious and censorious even after his death, that mind may be matter.
Dennett contends that Hume’s failure of nerve cannot be explained by fear of persecution for atheism. He arranged to have his book published after his death. In the end it was sheer incredulity that caused him to balk at the ultimate materialist conclusion. Without the Darwinian insight, he just could not see a mechanism by which purpose came from matter.
Through the gap left by Hume stole William Paley. Philo had used the metaphor of the watch, arguing that pieces of metal could ‘never arrange themselves so as to compose a watch’. Though well aware of Philo’s objections, Paley still inferred a mind behind the watch on the heath. It was not that the watch was made of components, or that it was close to perfect in its design, or that it was incomprehensible – arguments that had appealed to a previous generation of physicists and that Hume had answered. It was that it was clearly designed to do a job, not individually and recently but once and originally in an ancestor. Switching metaphors, Paley asserted that ‘there is precisely the same proof that the eye was made for vision, as there is that the telescope was made for assisting it’. The eyes of animals that live in water have a more curved surface than the eyes of animals that live on land, he pointed out, as befits the different refractive indices of the two elements: organs are adapted to the natural laws of the world, rather than vice versa.
But if God is omnipotent, why does he need to design eyes at all? Why not just give animals a magic power of vision without an organ? Paley had an answer of sorts. God could have done ‘without the intervention of instruments or means: but it is in the construction of instruments, in the choice and adaptation of means, that a creative intelligence is seen’. God has been pleased to work within the laws of physics, so that we can have the pleasure of understanding them. In this way, Paley’s modern apologists argue, God cannot be contradicted by the subsequent discovery of evolution by natural selection. He’d put that in place too to cheer us up by discovering it.
Paley’s argument boils down to this: the more spontaneous mechanisms you discover to explain the world of living things, the more convinced you should be that there is an intelligence behind them. Confronted with such a logical contortion, I am reminded of one of the John Cleese characters in Monty Python’s Life of Brian, when Brian denies that he is the Messiah: ‘Only the true Messiah denies his divinity.’

Darwin on the eye
Nearly six decades after Paley’s book, Charles Darwin produced a comprehensive and devastating answer. Brick by brick, using insights from an Edinburgh education in bottom–up thinking, from a circumnavigation of the world collecting facts of stone and flesh, from a long period of meticulous observation and induction, he put together an astonishing theory: that the differential replication of competing creatures would produce cumulative complexity that fitted form to function without anybody ever comprehending the rationale in a mind. And thus was born one of the most corrosive concepts in all philosophy. Daniel Dennett in his book Darwin’s Dangerous Idea compares Darwinism to universal acid; it eats through every substance used to contain it. ‘The creationists who oppose Darwinism so bitterly are right about one thing: Darwin’s dangerous idea cuts much deeper into the fabric of our most fundamental beliefs than many of its sophisticated apologists have yet admitted, even to themselves.’
The beauty of Darwin’s explanation is that natural selection has far more power than any designer could ever call upon. It cannot know the future, but it has unrivalled access to information about the past. In the words of the evolutionary psychologists Leda Cosmides and John Tooby, natural selection surveys ‘the results of alternative designs operating in the real world, over millions of individuals, over thousands of generations, and weights alternatives by the statistical distribution of their consequences’. That makes it omniscient about what has worked in the recent past. It can overlook spurious and local results and avoid guesswork, inference or models: it is based on the statistical results of the actual lives of creatures in the actual range of environments they encounter.
One of the most perceptive summaries of Darwin’s argument was made by one of his fiercest critics. A man named Robert Mackenzie Beverley, writing in 1867, produced what he thought was a devastating demolition of the idea of natural selection. Absolute ignorance is the artificer, he pointed out, trying to take the place of absolute wisdom in creating the world. Or (and here Beverley’s fury drove him into capital letters), ‘IN ORDER TO MAKE A PERFECT AND BEAUTIFUL MACHINE, IT IS NOT REQUISITE TO KNOW HOW TO MAKE IT.’ To which Daniel Dennett, who is fond of this quotation, replies: yes, indeed! That is the essence of Darwin’s idea: that beautiful and intricate organisms can be made without anybody knowing how to make them. Nearly a century later, an economist named Leonard Read, in an essay called ‘I, Pencil’, made the point that this is also true of technology. It is indeed the case that in order to make a perfect and beautiful machine, it is not requisite to know how to make it. Among the myriad people who contribute to the manufacture of a simple pencil, from graphite miners and lumberjacks to assembly-line workers and managers, not to mention those who grow the coffee that each of these drinks, there is not one person who knows how to make a pencil from scratch. The knowledge is held in the cloud, between brains, rather than in any individual head. This is one of the reasons, I shall argue in a later chapter, that technology evolves too.
Charles Darwin’s dangerous idea was to take away the notion of intentional design from biology altogether and replace it with a mechanism that builds ‘organized complexity … out of primeval simplicity’ (in Richard Dawkins’s words). Structure and function emerge bit by incremental bit and without resort to a goal of any kind. It’s ‘a process that was as patient as it was mindless’ (Dennett). No creature ever set out mentally intending to see, yet the eye emerged as a means by which animals could see. There is indeed an adapted purposefulness in nature – it makes good sense to say that eyes have a function – but we simply lack the language to describe function that emerged from a backward-looking process, rather than a goal-directed, forward-looking, mind-first one. Eyes evolved, Darwin said, because in the past simple eyes that provided a bit of vision helped the survival and reproduction of their possessors, not because there was some intention on the part of somebody to achieve vision. All our functional phrases are top–down ones. The eye is ‘for seeing’, eyes are there ‘so that’ we can see, seeing is to eyes as typing is to keyboards. The language and its metaphors still imply skyhooks.
Darwin confessed that the evolution of the eye was indeed a hard problem. In 1860 he wrote to the American botanist Asa Gray: ‘The eye to this day gives me a cold shudder, but when I think of the fine known gradation my reason tells me I ought to conquer the odd shudder.’ In 1859, in On the Origin of Species, he had written: ‘To suppose that the eye with all its inimitable contrivances for adjusting the focus to different distances, for admitting different amounts of light, and for the correction of spherical and chromatic aberration, could have been formed by natural selection, seems, I freely confess, absurd in the highest degree.’
But he then went on to set out how he justified the absurdity. First, the same could have been said of Copernicus. Common sense said the world stood still while the sun turned round it. Then he laid out how an eye could have emerged from nothing, step by step. He invoked ‘numerous gradations’ from a simple and imperfect eye to a complex one, ‘each grade being useful to its possessor’. If such grades could be found among living animals, and they could, then there was no reason to reject natural selection, ‘though insuperable by our imagination’. He had said something similar fifteen years before in his early, unpublished essay on natural selection: that the eye ‘may possibly have been acquired by gradual selection of slight but in each case useful deviations’. To which his sceptical wife Emma had replied, in the margin: ‘A great assumption’.

Pax optica
This is exactly what happened, we now know. Each grade was indeed useful to its possessor, because each grade still exists and still is useful to its owner. Each type of eye is just a slight improvement on the one before. A light-sensitive patch on the skin enables a limpet to tell which way is up; a light-sensitive cup enables a species called a slit-shelled mollusc to tell which direction the light is coming from; a pinhole chamber of light-sensitive cells enables the nautilus to focus a simple image of the world in good light; a simple lensed eye enables a murex snail to form an image even in low light; and an adjustable lens with an iris to control the aperture enables an octopus to perceive the world in glorious detail (the invention of the lens is easily explained, because any transparent tissue in the eye would have acted as partial refractor). Thus even just within the molluscs, every stage of the eye still exists, useful to each owner. How easy then to imagine each stage having existed in the ancestors of the octopus.
Richard Dawkins compares the progression through these grades to climbing a mountain (Mount Improbable) and at no point encountering a slope too steep to surmount. Mountains must be climbed from the bottom up. He shows that there are numerous such mountains – different kinds of eyes in different kinds of animal, from the compound eyes of insects to the multiple and peculiar eyes of spiders – each with a distinct range of partially developed stages showing how one can go step by step. Computer models confirm that there is nothing to suggest any of the stages would confer a disadvantage.
Moreover, the digitisation of biology since the discovery of DNA provides direct and unambiguous evidence of gradual evolution by the progressive alteration of the sequence of letters in genes. We now know that the very same gene, called Pax6, triggers the development of both the compound eye of insects and the simple eye of human beings. The two kinds of eye were inherited from a common ancestor. A version of a Pax gene also directs the development of simple eyes in jellyfish. The ‘opsin’ protein molecules that react to light in the eye can be traced back to the common ancestor of all animals except sponges. Around 700 million years ago, the gene for opsin was duplicated twice to give the three kinds of light-sensitive molecules we possess today. Thus every stage in the evolution of eyes, from the development of light-sensitive molecules to the emergence of lenses and colour vision, can be read directly from the language of the genes. Never has a hard problem in science been so comprehensively and emphatically solved as Darwin’s eye dilemma. Shudder no more, Charles.

Astronomical improbability?
The evidence for gradual, undirected emergence of the opsin molecule by the stepwise alteration of the digital DNA language is strong. But there remains a mathematical objection. The opsin molecule is composed of hundreds of amino acids in a sequence specified by the appropriate gene. If one were to arrive at the appropriate sequence to give opsin its light-detecting properties by trial and error it would take either a very long time or a very large laboratory. Given that there are twenty types of amino acid, a protein molecule with a hundred amino acids in its chain can exist in 20 to the power of 100 – roughly 10 to the power of 130 – different sequences. That’s a number far greater than the number of atoms in the universe, and far greater than the number of nanoseconds since the Big Bang. So it’s just not possible for natural selection, however many organisms it has to play with for however long, to arrive at a design for an opsin molecule from scratch. And an opsin is just one of tens of thousands of proteins in the body.
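For readers who like to check such arithmetic, here is a minimal sketch in Python (no part of the original argument); the figures for the number of atoms in the universe and the age of the universe are standard order-of-magnitude estimates assumed purely for comparison.

```python
# Back-of-envelope check of the sequence-space arithmetic above. The
# cosmological figures are common order-of-magnitude estimates, assumed
# here only for comparison.
from math import log10

AMINO_ACID_TYPES = 20
CHAIN_LENGTH = 100

sequences = AMINO_ACID_TYPES ** CHAIN_LENGTH                    # every possible 100-long chain
print(f"possible sequences: about 10^{log10(sequences):.0f}")   # ~10^130

ATOMS_IN_UNIVERSE = 10 ** 80                                    # rough standard estimate
NANOSECONDS_SINCE_BIG_BANG = 13.8e9 * 365.25 * 24 * 3600 * 1e9  # ~4 x 10^26

print(sequences > ATOMS_IN_UNIVERSE)                            # True
print(sequences > NANOSECONDS_SINCE_BIG_BANG)                   # True
```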
Am I heading for a Lucretian swerve? Will I be forced to concede that the combinatorial vastness of the library of possible proteins makes it impossible for evolution to find ones that work? Far from it. We know that human innovation rarely designs things from scratch, but jumps from one technology to the ‘adjacent possible’ technology, recombining existing features; it proceeds by small, incremental steps. And we know that the same is true of natural selection. So the mathematics is misleading. In a commonly used analogy, you are not assembling a Boeing 747 with a whirlwind in a scrapyard; you are adding one last rivet to an existing design. And here there has been a remarkable recent discovery that makes natural selection’s task much easier.
In a laboratory in Zürich a few years ago, Andreas Wagner asked his student João Rodriguez to use a gigantic assembly of computers to work his way through a map of different metabolic networks to see how far he could get by changing just one step at a time. He chose the glucose system in a common gut bacterium, and his task was to change one link in the whole metabolic chain in such a way that it still worked – that the creature could still make sixty or so bodily ingredients from this one sugar. How far could he get? In species other than the gut bacterium there are thousands of different glucose pathways. How many of them are just a single step different from each other? Rodriguez found he got 80 per cent of the way through a library of a thousand different metabolic pathways at his first attempt, never having to change more than one step at a time and never producing a metabolic pathway that did not work. ‘When João showed me the answer, my first reaction was disbelief,’ wrote Wagner. ‘Worried that this might be a fluke, I asked João for many more random walks, a thousand more, each preserving metabolic meaning, each leading as far as possible, each leaving in a different direction.’ Same result.
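To see the shape of such one-step-at-a-time searches, here is a toy sketch in Python. It is not Wagner’s metabolic model: the ‘genotype’ is an invented string of bits, and the viability rule (keep at least a minimum number of 1s) is a made-up stand-in for ‘the pathway still works’. What it shares with the real experiment is only the rule of the walk: change one element at a time and never pass through a non-functional intermediate.

```python
# A toy random walk through a space of "genotypes", accepting only single-step
# changes that leave the genotype "viable". The viability rule is invented;
# it stands in for "the metabolic pathway still works".
import random

LENGTH = 60      # length of the toy genotype
MIN_ONES = 20    # invented rule: viable means keeping at least this many 1s

def viable(genotype):
    return sum(genotype) >= MIN_ONES

def random_walk(steps=1000, seed=0):
    rng = random.Random(seed)
    start = [1] * 30 + [0] * 30                 # an arbitrary viable starting point
    current = list(start)
    for _ in range(steps):
        candidate = list(current)
        candidate[rng.randrange(LENGTH)] ^= 1   # change exactly one position
        if viable(candidate):                   # keep it only if function survives
            current = candidate
    # Hamming distance from the start: how far the walk has travelled
    return sum(a != b for a, b in zip(start, current))

# Many independent walks, each one step at a time, each always viable throughout.
print([random_walk(seed=s) for s in range(10)])  # typically around half of LENGTH
```

Even with so crude a rule, each walk ends up a long way from where it began without ever once breaking the constraint of staying viable.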
Wagner and Rodriguez had stumbled upon a massive redundancy built into the biochemistry of bacteria – and people. Using the metaphor of a ‘Library of Mendel’, in which imaginary building are stored the unimaginably vast number of all possible genetic sequences, Wagner identified a surprising pattern. ‘The metabolic library is packed to its rafters with books that tell the same story in different ways,’ he writes. ‘Myriad metabolic texts with the same meaning raise the odds of finding any one of them – myriad-fold. Even better, evolution does not just explore the metabolic library like a single casual browser. It crowdsources, employing huge populations of organisms that scour the library for new texts.’ Organisms are crowds of readers going through the Library of Mendel to find texts that make sense.
Wagner points out that biological innovation must be both conservative and progressive, because as it redesigns the body, it cannot ever produce a non-functional organism. Turning microbes into mammals over millions of years is a bit like flying the Atlantic while rebuilding the plane to a new design. The globin molecule, for example, has roughly the same three-dimensional shape and roughly the same function in plants and insects, but the sequences of amino acids in the two are 90 per cent different.

Doubting Darwin still
Yet, despite this overwhelming evidence of emergence, the yearning for design still lures millions of people back into doubting Darwin. The American ‘intelligent design’ movement evolved directly from a fundamentalist drive to promote religion within schools, coupled with a devious ‘end run’ to circumvent the USA’s constitutional separation between Church and state. It has largely focused upon the argument from design in order to try to establish that the complex functional arrangements of biology cannot be explained except by God. As Judge John Jones of Pennsylvania wrote in his judgement in the pivotal case of Kitzmiller vs Dover Area School District in 2005, although proponents of intelligent design ‘occasionally suggest that the designer could be a space alien or a time-traveling cell biologist, no serious alternative to God as the designer has been proposed’. Tammy Kitzmiller was one of several Dover parents who objected to her child being taught ‘intelligent design’ on a par with Darwinism. The parents went to court, and got the school district’s law overturned.
In the United States, fundamentalist Christians have challenged Darwinism in schools for more than 150 years. They pushed state legislatures into adopting laws that prohibited state schools from teaching evolution, a trend that culminated in the Scopes ‘monkey trial’ of 1925. The defendant, John Scopes, deliberately taught evolution illegally to bring attention to the state of Tennessee’s anti-evolution law. Prosecuted by William Jennings Bryan and defended by Clarence Darrow, Scopes was found guilty and fined a paltry $100, and even that was overturned on a technicality at appeal. There is a widespread legend that Bryan’s victory was pyrrhic, because it made him look ridiculous and Scopes’s punishment was light. But this is a comforting myth told by saltwater liberals living on the coasts. In the American heartland, Scopes’s conviction emboldened the critics of Darwin greatly. Far from being ridiculed into silence, the fundamentalists gained ground in the aftermath of the Scopes trial, and held that ground for decades within the educational system. Textbooks became very cautious about Darwinism.
It was not until 1968 that the United States Supreme Court struck down all laws preventing the teaching of evolution in schools. Fundamentalists then fell back on teaching ‘creation science’, a concoction of arguments that purported to find scientific evidence for biblical events such as Noah’s flood. In 1987 the Supreme Court effectively proscribed the teaching of creationism on the grounds that it was religion, not science.
It was then that the movement reinvented itself as ‘intelligent design’, focusing on the old Aquinas–Paley argument from design in its simplest form. Creationists promptly rewrote their textbook Of Pandas and People, using an identical definition for intelligent design as had been used for creation science; and systematically replaced the words ‘creationism’ and ‘creationist’ with ‘intelligent design’ in 150 places. This went wrong in one case, resulting in a strange spelling mistake in the book, ‘cdesign proponentsists’, which came to be called the ‘missing link’ between the two movements. This ‘astonishing’ similarity between the two schools of thought was crucial in causing Judge John Jones to deem intelligent design religious rather than scientific as he struck down the Dover School District’s law demanding equal time for intelligent design and evolution in 2005. Intelligent design, according to the textbook Of Pandas and People, argued that species came into existence abruptly, and through an intelligent agency, with their distinctive features already present: fish with fins and scales, birds with feathers.
Jones’s long Opinion in 2005 was a definitive and conclusive demolition of a skyhook, all the more persuasive since it came from a Christian, Bush-appointed, politically conservative judge with no scientific training. Jones pointed out that the scientific revolution had rejected unnatural causes to explain natural phenomena, rejected appeal to authority, and rejected revelation, in favour of empirical evidence. He systematically took apart the evidence presented by Professor Michael Behe, the main scientific champion of intelligent design testifying for the defendants. Behe, in his book Darwin’s Black Box and subsequent papers, had used two main arguments for the existence of an intelligent designer: irreducible complexity and the purposeful arrangement of parts. The flagellum of a bacterium, he argued, is driven by a molecular rotary motor of great complexity. Remove any part of that system and it will not work. The blood-clotting system of mammals likewise consists of a cascade of interdependent proteins, none of which makes sense without the others. And the immune system was not only inexplicably complex, but a natural explanation was impossible.
It was trivial work for evolution’s champions, such as Kenneth Miller, to dismantle these cases in the Dover trial to the satisfaction of the judge. A fully functional precursor of the bacterial flagellum with a different job, known as the Type III secretory system, exists in some organisms and could easily have been adapted to make a rotary motor while still partly retaining its original advantageous role. (In the same way, the middle-ear bones of mammals, now used for hearing, are direct descendants of bones that were once part of the jaw of early fish.) The blood-clotting cascade is missing one step in whales and dolphins, or three steps in puffer fish, and still works fine. And the immune system’s mysterious complexity is yielding bit by bit to naturalistic explanations; what’s left no more implicates an intelligent designer, or a time-travelling genetic engineer, than it does natural selection. At the trial Professor Behe was presented with fifty-eight peer-reviewed papers and nine books about the evolution of the immune system.
As for the purposeful arrangement of parts, Judge Jones did not mince his words: ‘This inference to design based upon the appearance of a “purposeful arrangement of parts” is a completely subjective proposition, determined in the eye of each beholder and his/her viewpoint concerning the complexity of a system.’ Which is really the last word on Newton, Paley, Behe, and for that matter Aquinas.
More than 2,000 years ago Epicureans like Lucretius seem to have cottoned on to the power of natural selection, an idea that they probably got from the flamboyant Sicilian philosopher Empedocles (whose verse style was also a model for Lucretius), born in around 490 BC. Empedocles talked of animals that survived ‘being organised spontaneously in a fitting way; whereas those which grew otherwise perished and continue to perish’. It was, had Empedocles only known it, probably the best idea he ever had, though he never seems to have followed it through. Darwin was rediscovering an idea.

Gould’s swerve
Why was it even necessary, nearly 150 years after Darwin set out his theory, for Judge Jones to make the case again? This remarkable persistence of resistance to the idea of evolution, packaged and repackaged as natural theology, then creation science, then intelligent design, has never been satisfactorily explained. Biblical literalism cannot fully justify why people so dislike the idea of spontaneous biological complexity. After all, Muslims have no truck with the idea that the earth is 6,000 years old, yet they too find the argument from design persuasive. Probably fewer than 20 per cent of people in most Muslim-dominated countries accept Darwinian evolution to be true. Adnan Oktar, for example, a polemical Turkish creationist who also uses the name Harun Yahya, employs the argument from design to ‘prove’ that Allah created living things. Defining design as ‘a harmonious assembling of various parts into an orderly form towards a common goal’, he then argues that birds show evidence of design, their hollowed bones, strong muscles and feathers making it ‘obvious that the bird is product of a certain design’. Such a fit between form and function, however, is very much part of the Darwinian argument too.
Secular people, too, often jib at accepting the idea that complex organs and bodies can emerge without a plan. In the late 1970s a debate within Darwinism, between a predominantly American school led by the fossil expert Stephen Jay Gould and a predominantly British one led by the behaviour expert Richard Dawkins, about the pervasiveness of adaptation, led to some bitter and high-profile exchanges. Dawkins thought that almost every feature of a modern organism had probably been subject to selection for a function, whereas Gould thought that lots of change happened for accidental reasons. By the end, Gould seemed to have persuaded many lay people that Darwinism had gone too far; that it was claiming a fit between form and function too often and too glibly; that the idea of the organism adapting to its environment through natural selection had been refuted or at least diminished. In the media, this fed what John Maynard Smith called ‘a strong wish to believe that the Darwinian theory is false’, and culminated in an editorial in the Guardian announcing the death of Darwinism.
Within evolutionary biology, however, Gould lost the argument. Asking what an organ had evolved to do continued to be the main means by which biologists interpreted anatomy, biochemistry and behaviour. Dinosaurs may have been large ‘to’ achieve stable temperatures and escape predation, while nightingales may sing ‘to’ attract females.
This is not the place to retell the story of that debate, which had many twists and turns, from the spandrels of the Cathedral of San Marco in Venice to the partial resemblance of a caterpillar to a bird’s turd. My purpose here is different – to discern the motivation of Gould’s attack on adaptationism and its extraordinary popularity outside science. It was Gould’s Lucretian swerve. Daniel Dennett, Darwin’s foremost philosopher, thought Gould was ‘following in a long tradition of eminent thinkers who have been seeking skyhooks – and coming up with cranes’, and saw his antipathy to ‘Darwin’s dangerous idea as fundamentally a desire to protect or restore the Mind-first, top–down vision of John Locke’.
Whether this interpretation is fair or not, the problem Darwin and his followers have is that the world is replete with examples of deliberate design, from watches to governments. Some of them even involve selection: the many different breeds of pigeons that Darwin so admired, from tumblers to fantails, were all produced by ‘mind-first’ selective breeding, just like natural selection but at least semi-deliberate and intentional. Darwin’s reliance on pigeon breeding to tell the tale of natural selection was fraught with danger – for his analogy was indeed a form of intelligent design.

Wallace’s swerve
Again and again, Darwin’s followers would go only so far, before swerving. Alfred Russel Wallace, for instance, co-discovered natural selection and was in many ways an even more radical enthusiast for Darwinism (a word he coined) than Darwin himself. Wallace was not afraid to include human beings within natural selection very early on; and he was almost alone in defending natural selection as the main mechanism of evolution in the 1880s, when it was sharply out of fashion. But then he executed a Lucretian swerve. Saying that ‘the Brain of the Savage [had been] shown to be Larger than he Needs it to be’ for survival, he concluded that ‘a superior intelligence has guided the development of man in a definite direction, and for a special purpose’. To which Darwin replied, chidingly, in a letter: ‘I hope you have not murdered too completely your own and my child.’
Later, in a book published in 1889 that resolutely champions Darwinism (the title of the book), Wallace ends by executing a sudden U-turn, just like Hume and so many others. Having demolished skyhook after skyhook, he suddenly erects three at the close. The origin of life, he declares, is impossible to explain without a mysterious force. It is ‘altogether preposterous’ to argue that consciousness in animals could be an emergent consequence of complexity. And mankind’s ‘most characteristic and noble faculties could not possibly have been developed by means of the same laws which have determined the progressive development of the organic world in general’. Wallace, who was by now a fervent spiritualist, demanded three skyhooks to explain life, consciousness and human mental achievements. These three stages of progress pointed, he said, to an unseen universe, ‘a world of spirit, to which the world of matter is altogether subordinate’.

The lure of Lamarck
The repeated revival of Lamarckian ideas to this day likewise speaks of a yearning to reintroduce mind-first intentionality into Darwinism. Jean-Baptiste de Lamarck suggested long before Darwin that creatures might inherit acquired characteristics – so a blacksmith’s son would inherit his father’s powerful forearms even though these were acquired by exercise, not inheritance. Yet people obviously do not inherit mutilations from their parents, such as amputated limbs, so for Lamarck to be right there would have to be some kind of intelligence inside the body deciding what was worth passing on and what was not. But you can see the appeal of such a scheme to those left disoriented by the departure of God the designer from the Darwinised scene. Towards the end of his life, even Darwin flirted with some tenets of Lamarckism as he struggled to understand heredity.
At the end of the nineteenth century, the German biologist August Weismann pointed out a huge problem with Lamarckism: the separation of germ-line cells (the ones that end up being eggs or sperm) from other body cells early in the life of an animal makes it virtually impossible for information to feed back from what happens to a body during its life into its very recipe. Since the germ cells were not an organism in microcosm, the message telling them to adopt an acquired character must, Weismann argued, be of an entirely different nature from the change itself. Changing a cake after it has been baked cannot alter the recipe that was used.
The Lamarckians did not give up, though. In the 1920s a herpetologist named Paul Kammerer in Vienna claimed to have changed the biology of midwife toads by changing their environment. The evidence was flaky at best, and wishfully interpreted. When accused of fraud, Kammerer killed himself. A posthumous attempt by the writer Arthur Koestler to make Kammerer into a martyr to the truth only reinforced the desperation so many non-scientists felt to rescue a top–down explanation of evolution.
It is still going on. Epigenetics is a respectable branch of genetic science that examines how chemical modifications to DNA, acquired early in life in response to experience, can affect the adult body without altering the underlying sequence. There is a much more speculative version of the story, though. Most of these modifications are swept clean when the sperm and egg cells are made, but perhaps a few just might survive the jump into a new generation. Certain genetic disorders, for example, seem to manifest themselves differently according to whether the mutant chromosome was inherited from the mother or the father – implying a sex-specific ‘imprint’ on the gene. And one study seemed to find a sex-specific effect on the mortality of Swedes according to how hungry their grandparents were when young. From a small number of such cases, none with very powerful results, some modern Lamarckians began to make extravagant claims for the vindication of the eighteenth-century French aristocrat. ‘Darwinian evolution can include Lamarckian processes,’ wrote Eva Jablonka and Marion Lamb in 2005, ‘because the heritable variation on which selection acts is not entirely blind to function; some of it is induced or “acquired” in response to the conditions of life.’
But the evidence for these claims remains weak. All the data suggest that the epigenetic state of DNA is reset in each generation, and that even if this fails to happen, the amount of information imparted by epigenetic modifications is a minuscule fraction of the information imparted by genetic information. Besides, ingenious experiments with mice show that all the information required to reset the epigenetic modifications themselves actually lies in the genetic sequence. So the epigenetic mechanisms must themselves have evolved by good old Darwinian random mutation and selection. In effect, there is no escape to intentionality to be found here. Yet the motive behind the longing to believe in epigenetic Lamarckism is clear. As David Haig of Harvard puts it, ‘Jablonka and Lamb’s frustration with neo-Darwinism is with the pre-eminence that is ascribed to undirected, random sources of heritable variation.’ He says he is ‘yet to hear a coherent explanation of how the inheritance of acquired characters can, by itself, be a source of intentionality’. In other words, even if you could prove some Lamarckism in epigenetics, it would not remove the randomness.

Culture-driven genetic evolution
In fact, there is a way for acquired characteristics to come to be incorporated into genetic inheritance, but it takes many generations and it is blindly Darwinian. It goes by the name of the Baldwin effect. A species that over many generations repeatedly exposes itself to some experience will eventually find its offspring selected for a genetic predisposition to cope with that experience. Why? Because the offspring that by chance happen to start with a predisposition to cope with that circumstance will survive better than others. The genes can thereby come to embody the experience of the past. Something that was once learned can become an instinct.
A similar though not identical phenomenon is illustrated by the ability to digest lactose sugar in milk, which many people with ancestors from western Europe and eastern Africa possess. Few adult mammals can digest lactose, since milk is not generally drunk after infancy. In two parts of the world, however, human beings evolved the capacity to retain lactose digestion into adulthood by not switching off genes for lactase enzymes. These happened to be the two regions where the domestication of cattle for milk production was first invented. What a happy coincidence! Because people could digest lactose, they were able to invent dairy farming? Well no, the genetic switch plainly happened as a consequence, not a cause, of the invention of dairy farming. But it still had to happen through random mutation followed by non-random survival. Those born by chance with the mutation that caused persistence of lactose digestion tended to be stronger and healthier than their siblings and rivals who could digest less of the goodness in milk. So they thrived, and the gene for lactose digestion spread rapidly. On closer inspection, this incorporation of ancestral experience into the genes is all crane and no skyhook.
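The logic of random mutation followed by non-random survival can be sketched in a few lines of Python. The numbers below – a starting frequency of one in a thousand and a 5 per cent fitness advantage – are invented for illustration, not estimates for the real lactase-persistence mutation; the recurrence is the standard haploid selection formula.

```python
# Deterministic spread of an advantageous variant under simple haploid selection.
# Starting frequency and fitness advantage are invented for illustration only.
def spread(p=0.001, s=0.05, generations=400):
    """Carriers leave (1 + s) offspring for every one left by non-carriers,
    so the frequency updates as p_next = p * (1 + s) / (1 + p * s)."""
    history = [p]
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)
        history.append(p)
    return history

freqs = spread()
for gen in (0, 100, 200, 300, 400):
    print(f"generation {gen:3d}: frequency {freqs[gen]:.3f}")
# From one carrier in a thousand to near-fixation in a few hundred generations:
# several thousand years at a human generation time, without any foresight at all.
```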
So incredible is the complexity of the living world, so counterintuitive is the idea of boot-strapped, spontaneous intricacy, that even the most convinced Darwinian must, in the lonely hours of the night, have moments of doubt. Like Screwtape the devil whispering in the ear of a believer, the ‘argument from personal incredulity’ (as Richard Dawkins calls it) can be very tempting, even if you remind yourself that it’s a massive non sequitur to find divinity in ignorance.

4
The Evolution of Genes
For certainly the elements of things do not collect
And order their formations by their cunning intellect,
Nor are their motions something they agree upon or propose;
But being myriad and many-mingled, plagued by blows
And buffeted through the universe for all time past,
By trying every motion and combination, they at last,
Fell into the present form in which the universe appears.
Lucretius, De Rerum Natura, Book 1, lines 1021–7
An especially seductive chunk of current ignorance is that concerning the origin of life. For all the confidence with which biologists trace the emergence of complex organs and organisms from simple proto-cells, the first emergence of those proto-cells is still shrouded in darkness. And where people are baffled, they are often tempted to resort to mystical explanations. When the molecular biologist Francis Crick, that most materialist of scientists, started speculating about ‘panspermia’ in the 1970s – the idea that life perhaps originated elsewhere in the universe and got here by microbial seeding – many feared that he was turning a little mystical. In fact he was just making an argument about probability: that it was highly likely, given the youth of the earth relative to the age of the universe, that some other planet got life before us and infected other solar systems. Still, he was emphasising the impenetrability of the problem.
Life consists of the capacity to reverse the drift towards entropy and disorder, at least locally – to use information to make local order from chaos while expending energy. Essential to these three skills are three kinds of molecule in particular – DNA for storing information, protein for making order, and ATP as the medium of energy exchange. How these came together is a chicken-and-egg problem. DNA cannot be made without proteins, nor proteins without DNA. As for energy, a bacterium uses up fifty times its own body weight in ATP molecules in each generation. Early life must have been even more profligate, yet would have had none of the modern molecular machinery for harnessing and storing energy. Wherever did it find enough ATP?
The crane that seems to have put these three in place was probably RNA, a molecule that still plays many key roles in the cell, and that can both store information like DNA, and catalyse reactions like proteins do. Moreover, RNA is made of units of base, phosphate and ribose sugar, just as ATP is. So the prevailing theory holds that there was once an ‘RNA World’, in which living things had RNA bodies with RNA genes, using RNA ingredients as an energy currency. The problem is that even this system is so fiendishly complex and interdependent that it’s hard to imagine it coming into being from scratch. How, for example, would it have avoided dissipation: kept together its ingredients and concentrated its energy without the boundaries provided by a cell membrane? In the ‘warm little pond’ that Charles Darwin envisaged for the beginning of life, life would have dissolved away all too easily.
Don’t give up. Until recently the origin of the RNA World seemed so difficult a problem that it gave hope to mystics; John Horgan wrote an article in Scientific American in 2011 entitled ‘Psst! Don’t Tell the Creationists, But Scientists Don’t Have a Clue How Life Began’.
Yet today, just a few years later, there’s the glimmer of a solution. DNA sequences show that at the very root of life’s family tree are simple cells that do not burn carbohydrates like the rest of us, but effectively charge their electrochemical batteries by converting carbon dioxide into methane or the organic compound acetate. If you want to find a chemical environment that echoes the one these chemi-osmotic microbes have within their cells, look no further than the bottom of the Atlantic Ocean. In the year 2000, explorers found hydrothermal vents on the mid-Atlantic ridge that were quite unlike those they knew from other geothermal spots on the ocean floor. Instead of very hot, acidic fluids, as are found at ‘black-smoker’ vents, the new vents – known as the Lost City Hydrothermal Field – are only warm, are highly alkaline, and appear to last for tens of thousands of years. Two scientists, Nick Lane and William Martin, have begun to list the similarities between these vents and the insides of chemi-osmotic cells, finding uncanny echoes of life’s method of storing energy. Basically, cells store energy by pumping electrically charged particles, usually sodium or hydrogen ions, across membranes, effectively creating an electrical voltage. This is a ubiquitous and peculiar feature of living creatures, but it appears it might have been borrowed from vents like those at Lost City.
Four billion years ago the ocean was acidic, saturated with carbon dioxide. Where the alkaline fluid from the vents met the acidic water, there was a steep proton gradient across the thin iron-nickel-sulphur walls of the pores that formed at the vents. That gradient had a voltage very similar in magnitude to the one in a modern cell. Inside those mineral pores, chemicals would have been trapped in a space with abundant energy, which could have been used to build more complex molecules. These in turn – as they began to accidentally replicate themselves using the energy from the proton gradients – became gradually more susceptible to a pattern of survival of the fittest. And the rest, as Daniel Dennett would say, is algorithm. In short, an emergent account of the origin of life is almost within reach.
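The claim that such a gradient amounts to a cell-like voltage can be checked on the back of an envelope, using the standard rule of thumb of about 59 millivolts per unit of pH at room temperature; in the sketch below the ocean and vent pH values are illustrative assumptions, not measurements.
MV_PER_PH_UNIT = 59        # Nernst relation at about 25 degrees C, in millivolts
ocean_ph = 6.0             # assumed: acidic, carbon-dioxide-saturated early ocean
vent_ph = 9.5              # assumed: alkaline fluid inside the vent pores

gradient_mv = (vent_ph - ocean_ph) * MV_PER_PH_UNIT
print(round(gradient_mv), "millivolts across the thin mineral wall")   # about 200 millivolts
A difference of three or four pH units works out at roughly 180–240 millivolts, the same general range as the voltage a modern cell maintains across its membrane.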

All crane and no skyhook
As I mentioned earlier, the diagnostic feature of life is that it captures energy to create order. This is also a hallmark of civilisation. Just as each person uses energy to make buildings and devices and ideas, so each gene uses energy to make a structure of protein. A bacterium is limited in how large it can grow by the quantity of energy available to each gene. That’s because the energy is captured at the cell membrane by pumping protons across the membrane, and the bigger the cell, the smaller its surface area relative to its volume. The only bacteria that grow big enough to be seen by the naked eye are ones that have huge empty vacuoles inside them.
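The geometry behind that limit is worth a moment's arithmetic. A minimal sketch, treating the cell as a simple sphere:
from math import pi

def surface_to_volume(radius):
    surface = 4 * pi * radius ** 2
    volume = (4 / 3) * pi * radius ** 3
    return surface / volume            # simplifies to 3 / radius

for radius in (1, 10, 100):            # arbitrary units, say micrometres
    print(radius, round(surface_to_volume(radius), 3))

# Making the cell a hundred times wider gives it a million times the volume to supply,
# but only ten thousand times the membrane to supply it with.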
However, at some point around two billion years after life started, huge cells began to appear with complicated internal structures; we call them eukaryotes, and we (animals as well as plants, fungi and protozoa) are them.
Nick Lane argues that the eukaryotic (r)evolution was made possible by a merger: a bunch of bacteria began to live inside an archeal cell (a different kind of microbe). Today the descendants of these bacteria are known as mitochondria, and they generate the energy we need to live. During every second of your life your thousand trillion mitochondria pump a billion trillion protons across their membranes, capturing the electrical energy needed to forge your proteins, DNA and other macromolecules.
Mitochondria still have their own genes, but only a small number – thirteen in us. This simplification of their genome was vital. It enabled them to generate far more surplus energy to support the work of ‘our genome’, which is what enables us to have complex cells, complex tissues and complex bodies. As a result we eukaryotes have tens of thousands of times more energy available per gene, making each of our genes capable of far greater productivity. That allows us to have larger cells as well as more complex structures. In effect, we overcame the size limit of the bacterial cell by hosting multiple internal membranes in mitochondria, and then simplifying the genomes needed to support those membranes.
There is an uncanny echo of this in the Industrial (R)evolution. In agrarian societies, a family could grow just enough food to feed itself, but there was little left over to support anybody else. So only very few people could have castles, or velvet coats, or suits of armour, or whatever else needed making with surplus energy. The harnessing of oxen, horses, wind and water helped generate a bit more surplus energy, but not much. Wood was no use – it provided heat, not work. So there was a permanent limit on how much a society could make in the way of capital – structures and things.
Then in the Industrial (R)evolution an almost inexhaustible supply of energy was harnessed in the form of coal. Coal miners, unlike peasant farmers, produced vastly more energy than they consumed. And the more they dug out, the better they got at it. With the first steam engines, the barrier between heat and work was breached, so that coal’s energy could now amplify the work of people. Suddenly, just as the eukaryotic (r)evolution vastly increased the amount of energy per gene, so the Industrial (R)evolution vastly increased the amount of energy per worker. And that surplus energy, so the energy economist John Constable argues, is what built (and still builds) the houses, machines, software and gadgets – the capital – with which we enrich our lives. Surplus energy is indispensable to modern society, and is the symptom of wealth. An American consumes about ten times as much energy as a Nigerian, which is the same as saying he is ten times richer. ‘With coal almost any feat is possible or easy,’ wrote William Stanley Jevons; ‘without it we are thrown back into the laborious poverty of early times.’ Both the evolution of surplus energy generation by eukaryotes, and the evolution of surplus energy by industrialisation, were emergent, unplanned phenomena.
But I digress. Back to genomes. A genome is a digital computer program of immense complexity. The slightest mistake would alter the pattern, dose or sequence of expression of its 20,000 genes (in human beings), or affect the interaction of its hundreds of thousands of control sequences that switch genes on and off, and result in disastrous deformity or a collapse into illness. In most of us, for an incredible eight or nine decades, the computer program runs smoothly with barely a glitch.
Consider what must happen every second in your body to keep the show on the road. You have maybe ten trillion cells, not counting the bacteria that make up a large part of your body. Each of those cells is at any one time transcribing several thousand genes, a procedure that involves several hundred proteins coming together in a specific way and catalysing tens of chemical reactions for each of millions of base pairs. Each of those transcripts generates a protein molecule, thousands of amino acids long, which it does by entering a ribosome, a machine with tens of moving parts, capable of catalysing a flurry of chemical reactions. The proteins themselves then fan out within and without cells to speed reactions, transport goods, transmit signals and prop up structures. Millions of trillions of these immensely complicated events are occurring every second in your body to keep you alive, very few of which go wrong. It’s like the world economy in miniature, only even more complex.
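That 'millions of trillions' is a rough multiplication rather than a measurement. A minimal sketch of the estimate, in which every factor is an assumed round number:
cells = 10 ** 13                      # about ten trillion cells
active_genes_per_cell = 3_000         # several thousand genes being transcribed at any moment
events_per_gene_per_second = 50       # assumed: reactions per second at each transcription and translation site

events = cells * active_genes_per_cell * events_per_gene_per_second
print(f"{events:.0e} molecular events per second")   # about 2e+18: millions of trillions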
It is hard to shake the illusion that for such a computer to run such a program, there must be a programmer. Geneticists in the early days of the Human Genome Project would talk of ‘master genes’ that commanded subordinate sequences. Yet no such master gene exists, let alone an intelligent programmer. The entire thing not only emerged piece by piece through evolution, but runs in a democratic fashion too. Each gene plays its little role; no gene comprehends the whole plan. Yet from this multitude of precise interactions results a spontaneous design of unmatched complexity and order. There was never a better illustration of the validity of the Enlightenment dream – that order can emerge where nobody is in charge. The genome, now sequenced, stands as emphatic evidence that there can be order and complexity without any management.

On whose behalf?
Let’s assume for the sake of argument that I have persuaded you that evolution is not directed from above, but is a self-organising process that produces what Daniel Dennett calls ‘free-floating rationales’ for things. That is to say, for example, a baby cuckoo pushes the eggs of its host from the nest in order that it can monopolise its foster parents’ efforts to feed it, but nowhere has that rationale ever existed as a thought either in the mind of the cuckoo or in the mind of a cuckoo’s designer. It exists now in your mind and mine, but only after the fact. Bodies and behaviours teem with apparently purposeful function that was never foreseen or planned. You will surely agree that this model can apply within the human genome, too; your blood-clotting genes are there to make blood-clotting proteins, the better to clot blood at a wound; but that functional design does not imply an intelligent designer who foresaw the need for blood clotting.
I’m now going to tell you that you have not gone far enough. God is not the only skyhook. Even the most atheistic scientist, confronted with facts about the genome, is tempted into command-and-control thinking. Here’s one, right away: the idea that genes are recipes patiently waiting to be used by the cook that is the body. The collective needs of the whole organism are what the genes are there to serve, and they are willing slaves. You find this assumption behind almost any description of genetics – including ones by me – yet it is misleading. For it is just as truthful to turn the image upside down. The body is the plaything and battleground of genes at least as much as it is their purpose. Whenever somebody asks what a certain gene is for, they automatically assume that the question relates to the needs of the body: what is it for, in terms of the body’s needs? But there are plenty of times when the answer to that question is ‘The gene itself.’
The scientist who first saw this is Richard Dawkins. Long before he became well known for his atheism, Dawkins was famous for the ideas set out in his book The Selfish Gene. ‘We are survival machines – robot vehicles blindly programmed to preserve the selfish molecules known as genes,’ he wrote. ‘This is a truth that still fills me with astonishment.’ He was saying that the only way to understand organisms was to see them as mortal and temporary vehicles used to perpetuate effectively immortal digital sequences written in DNA. A male deer risks its life in a battle with another stag, or a female deer exhausts her reserves of calcium producing milk for her young, not to help its own body’s survival but to pass the genes to the next generation. Far from preaching selfish behaviour, therefore, this theory explains why we are often altruistic: it is the selfishness of the genes that enables individuals to be selfless. A bee suicidally stinging an animal that threatens the hive is dying for its country (or hive) so that its genes may survive – only in this case the genes are passed on indirectly, through the stinger’s mother, the queen. It makes more sense to see the body as serving the needs of the genes than vice versa. Bottom–up.
One paragraph of Dawkins’s book, little noticed at the time, deserves special attention. It has proved to be the founding text of an extremely important theory. He wrote:
Sex is not the only apparent paradox that becomes less puzzling the moment we learn to think in selfish gene terms. For instance, it appears that the amount of DNA in organisms is more than is strictly necessary for building them: a large fraction of the DNA is never translated into protein. From the point of view of the individual this seems paradoxical. If the ‘purpose’ of DNA is to supervise the building of bodies it is surprising to find a large quantity of DNA which does no such thing. Biologists are racking their brains trying to think what useful task this apparently surplus DNA is doing. From the point of view of the selfish genes themselves, there is no paradox. The true ‘purpose’ of DNA is to survive, no more and no less. The simplest way to explain the surplus DNA is to suppose that it is a parasite, or at best a harmless but useless passenger, hitching a ride in the survival machines created by the other DNA.
One of the people who read that paragraph and began thinking about it was Leslie Orgel, a chemist at the Salk Institute in California. He mentioned it to Francis Crick, who mentioned it in an article about the new and surprising discovery of ‘split genes’ – the fact that most animal and plant genes contain long sequences of DNA called ‘introns’ that are discarded after transcription. Crick and Orgel then wrote a paper expanding on Dawkins’s selfish DNA explanation for all the extra DNA. So, at the same time, did the Canadian molecular biologists Ford Doolittle and Carmen Sapienza. ‘Sequences whose only “function” is self-preservation will inevitably arise and be maintained,’ wrote the latter. The two papers were published simultaneously in 1980.
It turns out that Dawkins was right. What would his theory predict? That the spare DNA would have features that made it good at getting itself duplicated and re-inserted into chromosomes. Bingo. The commonest gene in the human genome is the recipe for reverse transcriptase, an enzyme that the human body has little or no need for, and whose main function is usually to help the spread of retroviruses. Yet there are more copies and half-copies of this gene than of all other human genes combined. Why? Because reverse transcriptase is a key part of any DNA sequence that can copy itself and distribute the copies around the genome. It’s a sign of a digital parasite. Most of the copies are inert these days, and some are even put to good use, helping to regulate real genes or bind proteins. But they are there because they are good at being there.
The skyhook here is a sort of cousin of Locke’s ‘mind-first’ thinking: the assumption that the human good is the only good pursued within our bodies. The alternative view, penetratingly articulated by Dawkins, takes the perspective of the gene itself: how DNA would behave if it could. Close to half of the human genome consists of so-called transposable elements designed to use reverse transcriptase. Some of the commonest are known by names like LINEs (17 per cent of the genome), SINEs (11 per cent) and LTR retrotransposons (8 per cent). Actual genes, by contrast, fill just 2 per cent of the genome. These transposons are sequences that are good at getting themselves copied, and there is no longer a smidgen of doubt that they are (mostly inert) digital parasites. They are not there for the needs of the body at all.
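Why sequences that are merely good at being there should come to fill half a genome can be seen in a toy model. Every rate in the sketch below is invented purely for illustration: each element occasionally pastes a new copy of itself elsewhere in the genome, and is more rarely deleted.
COPY_RATE = 0.02       # assumed: new insertions per element per generation
LOSS_RATE = 0.005      # assumed: deletions per element per generation

copies = 1.0
for generation in range(500):
    copies += copies * (COPY_RATE - LOSS_RATE)

print(round(copies), "copies after 500 generations")   # about 1,700 - none of them there 'for' the body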

Junk is not the same as garbage
There is a close homology with computer viruses, which did not yet exist when Dawkins suggested the genetic version of the concept of digital parasitism. Some of the transposons, the SINEs, appear to be parasites of parasites, because they use the apparatus of longer, more complete sequences to get themselves disseminated. For all the heroic attempts to see their function in terms of providing variability that might one day lead to a brave new mutation, the truth is that their more immediate and frequent effect is occasionally to disrupt the reading of genes.
Of course, these selfish DNA sequences can thrive only because a small percentage of the genome does something much more constructive – builds a body that grows, learns and adapts sufficiently to its physical and social environment that it can eventually thrive, attract a mate and have babies. At which point the selfish DNA says, ‘Thank you very much, we’ll be making up half the sequence in the children too.’
It is currently impossible to explain the huge proportion of the human genome devoted to these transposons except by reference to the selfish DNA theory. There’s just no other theory that comes close to fitting the facts. Yet it is routinely rejected, vilified and ‘buried’ by commentators on the fringe of genetics. The phrase that really gets their goat is ‘junk DNA’. It’s almost impossible to read an article on the topic without coming across surprisingly passionate denunciations of the ‘discredited’ notion that some of the DNA in a genome is useless. ‘We have long felt that the current disrespectful (in a vernacular sense) terminology of junk DNA and pseudogenes,’ wrote Jürgen Brosius and Stephen Jay Gould in an early salvo in 1992, ‘has been masking the central evolutionary concept that features of no current utility may hold crucial evolutionary importance as recruitable sources of future change.’ Whenever I write about this topic, I am deluged with moralistic denunciations of the ‘arrogance’ of scientists for rejecting unknown functions of DNA sequences. To which I reply: functions for whom? The body or the sequences?
This moral tone to the disapproval of ‘so-called’ junk DNA is common. People seem to be genuinely offended by the phrase. They sound awfully like the defenders of faith confronted with evolution – it’s the bottom–up nature of the story that they dislike. Yet as I shall show, selfish DNA and junk DNA are both about as accurate as metaphors ever can be. And junk is not the same as garbage.
What’s the fuss about? In the 1960s, as I mentioned earlier, molecular biologists began to notice that there seemed to be far more DNA in a cell than was necessary to make all the proteins in the cell. Even with what turned out to be a vastly over-inflated estimate of the number of genes in the human genome – then thought to be more than 100,000, now known to be about 20,000 – genes and their control sequences could account for only a small percentage of the total weight of DNA present in a cell’s chromosomes, at least in mammals. It’s less than 3 per cent in people. Worse, there was emerging evidence that we human beings did not seem to have the heaviest genomes or the most DNA. Humble protozoa, onions and salamanders have far bigger genomes. Grasshoppers have three times as much; lungfish forty times as much. Known by the obscure name of the ‘c-value paradox’, this enigma exercised the minds of some of the most eminent scientists of the day. One of them, Susumu Ohno, coined the term ‘junk DNA’, arguing that much of the DNA might not be under selection – that is to say, might not be being continuously honed by evolution to fit a function of the body.
He was not saying it was garbage. As Sydney Brenner later made plain, people everywhere make the distinction between two kinds of rubbish: ‘garbage’ which has no use and must be disposed of lest it rot and stink, and ‘junk’, which has no immediate use but does no harm and is kept in the attic in case it might one day be put back to use. You put garbage in the rubbish bin; you keep junk in the attic or garage.
Yet the resistance to the idea of junk DNA mounted. As the number of human genes steadily shrank in the 1990s and 2000s, so the desperation to prove that the rest of the genome must have a use (for the organism) grew. The new simplicity of the human genome bothered those who liked to think of the human being as the most complex creature on the planet. Junk DNA was a concept that had to be challenged. The discovery of RNA-coding genes, and of multiple control sequences for adjusting the activity of genes, seemed to offer some straws of hope to grasp. When it became clear that on top of the 5 per cent of the genome that seemed to be specifically protected from change between human beings and related species, another 4 per cent showed some evidence of being under selection, the prestigious journal Science was moved to proclaim ‘no more junk DNA’. What about the other 91 per cent?
In 2012 the anti-junk campaign culminated in a raft of hefty papers from a huge consortium of scientists called ENCODE. These were greeted, as intended, with hype in the media announcing the Death of Junk DNA. By defining non-junk as any DNA that had something biochemical happen to it during normal life, they were able to assert that about 80 per cent of the genome was functional. (And this was in cancer cells, with abnormal patterns of DNA hyperactivity.) That still left 20 per cent with nothing going on. But there are huge problems with this wide definition of ‘function’, because many of the things that happened to the DNA did not imply that the DNA had an actual job to do for the body, merely that it was subject to housekeeping chemical processes. Realising they had gone too far, some of the ENCODE team began to use smaller numbers when interviewed afterwards. One claimed only 20 per cent was functional, before insisting none the less that the term ‘junk DNA’ should be ‘totally expunged from the lexicon’ – which, as Dan Graur of the University of Houston and his colleagues remarked in a splenetic riposte in early 2013, thus invented a new arithmetic according to which 20 per cent is greater than 80 per cent.
If this all seems a bit abstruse, perhaps an analogy will help. The function of the heart, we would surely agree, is to pump blood. That is what natural selection has honed it to do. The heart does other things, such as add to the weight of the body, produce sounds and prevent the pericardium from deflating. Yet to call those the functions of the heart is silly. Likewise, just because junk DNA is sometimes transcribed or altered, that does not mean it has function as far as the body is concerned. In effect, the ENCODE team was arguing that grasshoppers are three times as complex, onions five times and lungfish forty times as complex as human beings. As the evolutionary biologist Ryan Gregory put it, anyone who thinks he or she can assign a function to every letter in the human genome should be asked why an onion needs a genome that is about five times larger than a person’s.
Who’s resorting to a skyhook here? Not Ohno or Dawkins or Gregory. They are saying the extra DNA just comes about, there not being sufficient selective incentive for the organism to clear out its genomic attic. (Admittedly, the idea of junk in your attic that duplicates itself if you do nothing about it is moderately alarming!) Bacteria, with large populations and brisk competition to grow faster than their rivals, generally do keep their genomes clear of junk. Large organisms do not. Yet there is clearly a yearning that many people have to prefer an explanation that sees the spare DNA as having a purpose for us, not for itself. As Graur puts it, the junk critics have fallen prey to ‘the genomic equivalent of the human propensity to see meaningful patterns in random data’.
Whenever I raised the topic of junk DNA in recent years I was astonished by the vehemence with which I was told by scientists and commentators that I was wrong, that its existence had been disproved. In vain did I point out that on top of the transposons, the genome was littered with ‘pseudogenes’ – rusting hulks of dead genes – not to mention that 96 per cent of the RNA transcribed from genes was discarded before proteins were made from the transcripts (the discards are ‘introns’). Even though some parts of introns and pseudogenes are used in control sequences, it was clear the bulk was just taking up space, its sequence free to change without consequence for the body. Nick Lane argues that even introns are descended from digital parasites, from the period when an archeal cell ingested a bacterium and turned it into the first mitochondrion, only to see its own DNA invaded by selfish DNA sequences from the ingested bacterium: the way introns are spliced out betrays their ancestry as self-splicing introns from bacteria.
Junk DNA reminds us that the genome is built by and for DNA sequences, not by and for the body. The body is an emergent phenomenon consequent upon the competitive survival of DNA sequences, and a means by which the genome perpetuates itself. And though the natural selection that results in evolutionary change is very far from random, the mutations themselves are random. It is a process of blind trial and error.

Red Queen races
Even in the heart of genetics labs there is a long tradition of resistance to the idea that mutation is purely random and comes with no intentionality, even if selection is not random. Theories of directed mutation come and go, and many highly reputable scientists embrace them, though the evidence remains elusive. The molecular biologist Gabby Dover, in his book Dear Mr Darwin, tried to explain the implausible fact that some centipedes have 173 body segments without relying exclusively on natural selection. His argument was basically that it was unlikely that a randomly generated 346-legged centipede survived and bred at the expense of one with slightly fewer legs. He thinks some other explanation is needed for how the centipede got its segments. He finds such an explanation in ‘molecular drive’, an idea that remains frustratingly vague in Dover’s book, but has a strong top–down tinge. In the years since Dover put forward the notion, molecular drive has sunk with little trace, following so many other theories of directed mutation into oblivion. And no wonder: if mutation is directed, then there would have to be a director, and we’re back to the problem of how the director came into existence: who directed the director? Whence came this knowledge of the future that endowed a gene with the capacity to plan a sensible mutation?
In medicine, an understanding of evolution at the genomic level is both the problem and the solution. Bacterial resistance to antibiotics, and chemotherapeutic drug resistance within tumours, are both pure Darwinian evolutionary processes: the emergence of survival mechanisms through selection. The use of antibiotics selects for rare mutations in genes in bacteria that enable them to resist the drugs. The emergence of antibiotic resistance is an evolutionary process, and it can only be combated by an evolutionary process. It is no good expecting somebody to invent the perfect antibiotic, and find some way of using it that does not elicit resistance. We are in an arms race with germs, whether we like it or not. The mantra should always be the Red Queen’s (from Lewis Carroll’s Through the Looking-Glass): ‘Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!’ The search for the next antibiotic must begin long before the last one is ineffective.
That, after all, is how the immune system works. It does not just produce the best antibodies it can find; it sets out to experiment and evolve in real time. Human beings cannot expect to rely upon evolving resistance to parasites quickly enough by the selective death of susceptible people, because our generation times are too long. We have to allow evolution within our bodies within days or hours. And this the immune system is designed to achieve. It contains a system for recombining different forms of proteins to increase their diversity and rapidly multiplying whichever antibody suddenly finds itself in action. Moreover, the genome includes a set of genes whose sole aim seems to be to maintain a huge diversity of forms: the major histocompatibility complex. The job of these 240 or so MHC genes is to present antigens from invading pathogens to the immune system so as to elicit an immune response. They are the most variable genes known, with one – HLA-B – coming in about 1,600 different versions in the human population. There is some evidence that many animals go to some lengths to maintain or enhance the variability further, by, for example, seeking out mates with different MHC genes (detected by smell).
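The power of that recombination is combinatorial. One well-known version of it shuffles a small kit of gene segments into whole antibody chains; the segment counts in the sketch below are round, illustrative figures rather than exact human tallies.
heavy_V, heavy_D, heavy_J = 40, 25, 6     # illustrative counts of heavy-chain gene segments
light_V, light_J = 40, 5                  # illustrative counts of light-chain gene segments

heavy_chains = heavy_V * heavy_D * heavy_J
light_chains = light_V * light_J
antibodies = heavy_chains * light_chains  # any heavy chain can pair with any light chain

print(f"{antibodies:,} distinct antibodies before any sloppy joining of the segments")   # 1,200,000
Imprecise joining of the segments multiplies that repertoire much further – diversity generated in real time, with no designer choosing the antibodies in advance.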
If the battle against microbes is a never-ending, evolutionary arms race, then so is the battle against cancer. A cell that turns cancerous and starts to grow into a tumour, then spreads to other parts of the body, has to evolve by genetic selection as it does so. It has to acquire mutations that encourage it to grow and divide; mutations that ignore the instructions to stop growing or commit suicide; mutations that cause blood vessels to grow into the tumour to supply it with nutrients; and mutations that enable cells to break free and migrate. Few of these mutations will be present in the first cancerous cell, but tumours usually acquire another mutation – one that massively rearranges its genome, thus experimenting on a grand scale, as if unconsciously seeking to find a way by trial and error to acquire these needed mutations.
The whole process looks horribly purposeful, and malign. The tumour is ‘trying’ to grow, ‘trying’ to get a blood supply, ‘trying’ to spread. Yet, of course, the actual explanation is emergent: there is competition for resources and space among the many cells in a tumour, and the one cell that acquires the most helpful mutations will win. It is precisely analogous to evolution in a population of creatures. These days, the cancer cells often need another mutation to thrive: one that will outwit the chemotherapy or radiotherapy to which the cancer is subjected. Somewhere in the body, one of the cancer cells happens to acquire a mutation that defeats the drug. As the rest of the cancer dies away, the descendants of this rogue cell gradually begin to multiply, and the cancer returns. Heartbreakingly, this is what happens all too often in the treatment of cancer: initial success followed by eventual failure. It’s an evolutionary arms race.
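That heartbreaking pattern falls straight out of a toy selection model. Every number in the sketch below is invented for illustration: the drug kills nearly all sensitive cells each round, the one resistant cell ignores it, and survivors of either kind keep dividing.
sensitive, resistant = 1_000_000_000, 1   # one resistant cell hiding in a billion
GROWTH = 2.0                              # surviving cells roughly double each round
KILL = 0.999                              # assumed fraction of sensitive cells killed per round of therapy

for week in range(1, 31):
    sensitive = sensitive * (1 - KILL) * GROWTH
    resistant = resistant * GROWTH
    total = sensitive + resistant
    print(f"week {week:2d}: about {total:14,.0f} tumour cells, {resistant / total:.0%} of them resistant")

# The tumour all but vanishes within a few weeks, then regrows to its old size -
# only now it is made almost entirely of the resistant clone.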
The more we understand genomics, the more it confirms evolution.

5 (#ulink_13cfd1fd-229e-559b-bd2d-0bd41b0d7d79)
The Evolution of Culture (#ulink_13cfd1fd-229e-559b-bd2d-0bd41b0d7d79)
And therefore to assume there was one person gave a name
To everything, and that all learned their first words from the same,
Is stuff and nonsense. Why should one human being from among
The rest be able to designate and name things with his tongue
And others not possess the power to do likewise? …
Lucretius, De Rerum Natura, Book 5, lines 1041–5
The development of an embryo into a body is perhaps the most beautiful of all demonstrations of spontaneous order. Our understanding of how it happens grows ever less instructional. As Richard Dawkins writes in his book The Greatest Show on Earth, ‘The key point is that there is no choreographer and no leader. Order, organisation, structure – these all emerge as by-products of rules which are obeyed locally and many times over.’ There is no overall plan, just cells reacting to local effects. It is as if an entire city emerged from chaos just because people responded to local incentives in the way they set up their homes and businesses. (Oh, hang on – that is how cities emerged too.)
Look at a bird’s nest: beautifully engineered to provide protection and camouflage to a family of chicks, made to a consistent (but unique) design for each species, yet constructed by the simplest of instincts with no overall plan in mind, just a string of innate urges. I had a fine demonstration of this one year when a mistle thrush tried to build a nest on the metal fire escape outside my office. The result was a disaster, because each step of the fire escape looked identical, so the poor bird kept getting confused about which step it was building its nest on. Five different steps had partly built nests on them, the middle two being closest to completion, but neither fully built. The bird then laid two eggs in one half-nest and one in another. Clearly it was confused by the local cues provided by the fire-escape steps. Its nest-building program depended on simple rules, like ‘Put more material in corner of metal step.’ The tidy nest of a thrush emerges from the most basic of instincts.
Or look at a tree. Its trunk manages to grow in width and strength just as fast as is necessary to bear the weight of its branches, which are themselves a brilliant compromise between strength and flexibility; its leaves are a magnificent solution to the problem of capturing sunlight while absorbing carbon dioxide and losing as little water as possible: they are wafer-thin, feather-light, shaped for maximum exposure to the light, with their pores on the shady underside. The whole structure can stand for hundreds or even thousands of years without collapsing, yet can also grow continuously throughout that time – a dream that lies far beyond the capabilities of human engineers. All this is achieved without a plan, let alone a planner. The tree does not even have a brain. Its design and implementation emerge from the decisions of its trillions of single cells. Compared with animals, plants dare not rely on brain-directed behaviour, because they cannot run away from grazers, and if a grazer ate the brain, it would mean death. So plants can withstand almost any loss, and regenerate easily. They are utterly decentralised. It is as if an entire country’s economy emerged from just the local incentives and responses of its people. (Oh, hang on …)
Or take a termite mound in the Australian outback. Tall, buttressed, ventilated and oriented with respect to the sun, it is a perfect system for housing a colony of tiny insects in comfort and gentle warmth – as carefully engineered as any cathedral. Yet there is no engineer. The units in this case are whole termites, rather than cells, but the system is no more centralised than in a tree or an embryo. Each grain of sand or mud that is used to construct the mound is carried to its place by a termite acting under no instruction, and with no plan in (no) mind. The insect is reacting to local signals. It is as if a human language, with all its syntax and grammar, were to emerge spontaneously from the actions of its individual speakers, with nobody laying down the rules. (Oh, hang on …)
That is indeed exactly how languages emerged, in just the same fashion that the language of DNA developed – by evolution. Evolution is not confined to systems that run on DNA. One of the great intellectual breakthroughs of recent decades, led by two evolutionary theorists named Rob Boyd and Pete Richerson, is the realisation that Darwin’s mechanism of selective survival resulting in cumulative complexity applies to human culture in all its aspects too. Our habits and our institutions, from language to cities, are constantly changing, and the mechanism of change turns out to be surprisingly Darwinian: it is gradual, undirected, mutational, inexorable, combinatorial, selective and in some vague sense progressive.
Scientists used to object that evolution could not occur in culture because culture did not come in discrete particles, nor did it replicate faithfully or mutate randomly, like DNA. This turns out not to be true. Darwinian change is inevitable in any system of information transmission so long as there is some lumpiness in the things transmitted, some fidelity of transmission and a degree of randomness, or trial and error, in innovation. To say that culture ‘evolves’ is not metaphorical.
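Those three ingredients – discrete-ish variants, mostly faithful copying, occasional random change – are all a simulation needs. Here is a bare-bones sketch with invented numbers, in which each agent copies a habit from whichever of two randomly met neighbours has the more useful version, and copying errors occasionally throw up new variants.
import random

random.seed(0)
N_AGENTS = 500
MUTATION = 0.01                  # chance that a copy is miscopied into a new variant
usefulness = {0: 1.0}            # variant id -> how well the habit works; everyone starts with variant 0
population = [0] * N_AGENTS
next_id = 1

for generation in range(200):
    new_population = []
    for _ in range(N_AGENTS):
        a, b = random.choice(population), random.choice(population)
        chosen = a if usefulness[a] >= usefulness[b] else b      # weak bias towards the better habit
        if random.random() < MUTATION:                           # rare copying error: a new variant is born
            usefulness[next_id] = usefulness[chosen] * random.uniform(0.9, 1.2)
            chosen = next_id
            next_id += 1
        new_population.append(chosen)
    population = new_population

commonest = max(set(population), key=population.count)
print(f"the commonest habit is now {usefulness[commonest]:.1f} times as useful as the original")
Nothing in the program decides where the culture is headed, yet better habits accumulate all the same.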

The evolution of language
There is an almost perfect parallel between the evolution of DNA sequences and the evolution of written and spoken language. Both consist of linear digital codes. Both evolve by selective survival of sequences generated by at least partly random variation. Both are combinatorial systems capable of generating effectively infinite diversity from a small number of discrete elements. Languages mutate, diversify, evolve by descent with modification and merge in a ballet of unplanned beauty. Yet the end result is structure, and rules of grammar and syntax as rigid and formal as you could want. ‘The formation of different languages, and of distinct species, and the proofs that both have been developed through a gradual process, are curiously parallel,’ wrote Charles Darwin in The Descent of Man.
This makes it possible to think of language as a designed and rule-based thing. And for generations, this was the way foreign languages were taught. At school I learned Latin and Greek as if they were cricket or chess: you can do this, but not that, to verbs, nouns and plurals. A bishop can move diagonally, a batsman can run a leg bye, and a verb can take the accusative. Eight years of this rule-based stuff, taught by some of the finest teachers in the land for longer hours each week than any other topic, and I was far from fluent – indeed, I quickly forgot what little I had learned once I was allowed to abandon Latin and Greek. Top–down language teaching just does not work well – it’s like learning to ride a bicycle in theory, without ever getting on one. Yet a child of two learns English, which has just as many rules and regulations as Latin, indeed rather more, without ever being taught. An adolescent picks up a foreign language, conventions and all, by immersion. Having a training in grammar does not (I reckon) help prepare you for learning a new language much, if at all. It’s been staring us in the face for years: the only way to learn a language is bottom–up.
Language stands as the ultimate example of a spontaneously organised phenomenon. Not only does it evolve by itself, words changing their meaning even as we watch, despite the railings of the mavens, but it is learned, not taught. The prescriptive habit has us all tut-tutting at the decline of language standards, the loss of punctuation and the debasement of vocabulary, but it’s all nonsense. Language is just as rule-based in its newest slang forms, and just as sophisticated as it ever was in ancient Rome. But the rules, now as then, are written from below, not from above.
There are regularities about language evolution that make perfect sense but have never been agreed by committees or recommended by experts. For instance, frequently used words tend to be short, and words get shorter if they are more frequently used: we abbreviate terms if we have to speak them often. This is good – it means less waste of breath, time and paper. And it is an entirely natural, spontaneous phenomenon that we remain largely unaware of. Similarly, common words change only very slowly, whereas rare words can change their meaning and their spelling quite fast. Again, this makes sense – re-engineering the word ‘the’ so it means something different would be a terrific problem for the world’s English-speakers, whereas changing the word ‘prevaricate’ (it used to mean ‘lie’, it now seems mostly to mean ‘procrastinate’) is no big deal, and has happened quite quickly. Nobody thought up this rule; it is the product of evolution.
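The short-words-are-common regularity is easy to check against any text to hand. A minimal sketch, in which 'sample.txt' is just a placeholder for whatever file you use:
from collections import Counter
import re

text = open("sample.txt", encoding="utf-8").read().lower()   # 'sample.txt' is a placeholder
words = re.findall(r"[a-z']+", text)
counts = Counter(words)

common = [w for w, _ in counts.most_common(100)]    # the hundred most frequent words
rare = [w for w, c in counts.items() if c == 1]     # words used only once

average = lambda ws: sum(len(w) for w in ws) / len(ws)
print("common words average", round(average(common), 1), "letters")
print("rare words average  ", round(average(rare), 1), "letters")
On most English prose the hundred commonest words come out far shorter, on average, than the words used only once.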
Languages show other features of evolutionary systems. For instance, as Mark Pagel has pointed out, biological species of animals and plants are more diverse in the tropics, less so near the poles. Indeed, many circumpolar species tend to have huge ranges, covering the whole of an ecosystem in the Arctic or Antarctic, whereas tropical rainforest species might be found in just one small area – a valley or a mountain range or on an island. The rainforest of New Guinea is a menagerie of millions of different species with small ranges, while the tundra of Alaska is home to a handful of species with vast ranges. This is true of plants, insects, birds, mammals, fungi. It’s a sort of iron rule of ecology: that there will be more species, but with smaller ranges, near the equator, and fewer species, but with larger ranges, near the poles.
And here is the fascinating parallel. It is also true of languages. The native tongues spoken in Alaska can be counted on one hand. In New Guinea there are literally thousands of languages, some of which are spoken in just a few valleys and are as different from the languages of the next valley as English is from French. Even this language density is exceeded on the volcanic island of Gaua, part of Vanuatu, which has five different native languages in a population of just over 2,000, despite being a mere thirteen miles in diameter. In forested, mountainous tropical regions, human language diversity is extreme.
One of Pagel’s graphs shows that the decreasing diversity of languages with latitude is almost identical to the decreasing diversity of species with latitude. At present neither trend is easily explained. The great diversity of species in tropical forests has something to do with the greater energy flowing through a tropical ecosystem with plenty of warmth and light and water. It may also have something to do with the abundance of parasites. Tropical creatures are subjected to a constant barrage of parasitic invasions, and being an abundant creature makes you more of a target, so there is an advantage to rarity. And it may reflect a lower extinction rate in a more climatically equable zone. As for languages, the need to migrate with the seasons must homogenise the linguistic diversity of extremely seasonal landscapes, in contrast to tropical ones, where populations can fragment into smaller groups and each can survive without moving. But whatever the explanation, the phenomenon illustrates the way human languages evolve automatically. They are clearly human products, but they are not consciously designed.
Moreover, by studying the history of languages, Pagel finds that when a new language diverges from an ancestral language, it appears to change very rapidly at first. The same seems to be true of species. When a geographical subset of a species becomes isolated it evolves very rapidly at first, so that evolution by natural selection seems to happen in bursts, a phenomenon known as punctuated equilibrium. There are intensely close parallels between the evolution of languages and of species.

The human revolution was actually an evolution
Some time around 200,000 years ago, in a part of Africa but not elsewhere, human beings began to change their culture. We know this because the archaeological record is clear that a great transformation came over the species, known as the ‘human revolution’. After more than a million years of making simple stone tools to just a few designs, these Africans began making lots of different types of tool. At first the change was local, gradual and ephemeral, so the word revolution is misleading. But then the tool changes began to appear more frequently, more strongly and more persistently. By 65,000 years ago the people with the new tool sets had begun to spill out of Africa, most probably across the narrow strait at the southern end of the Red Sea, and had begun a comparatively rapid colonisation of the Eurasian continent, displacing – and very occasionally mating with – the native hominids that were already there, such as the Neanderthals in Europe and the Denisovans in Asia. These new people had something special: they were not prisoners of their ecological niche, but could change their habits quite easily if prey disappeared, or better opportunities arose. They reached Australia and quickly filled that challenging continent. They reached Europe, then in the grip of an ice age, and displaced the superbly adapted big-game-hunting Neanderthals. They eventually spilled into the Americas, and within an evolutionary eyeblink peopled every ecosystem from Alaska to Cape Horn, from the rainforest to the desert.
What sparked the human revolution in Africa? It is an almost impossibly difficult question to answer, because of the very gradual beginning of the process: the initial trigger may have been very small. The first stirrings of different tools in parts of east Africa seem to be up to 300,000 years old, so by modern standards the change was happening with glacial slowness. And that’s a clue. The defining feature is not culture, for plenty of animals have culture, in the sense of traditions that are passed on by learning. The defining feature is cumulative culture – the capacity to add innovations without losing old habits. In this sense, the human revolution was not a revolution at all, but a very, very slow cumulative change, which steadily gathered pace, accelerating towards today’s near-singularity of incessant and multifarious innovation.
It was cultural evolution. I think the change was kicked off by the habit of exchange and specialisation, which feeds upon itself – the more you exchange, the more value there is in specialisation, and vice versa – and tends to breed innovation. Most people prefer to think it was language that was the cause of the change. Again, language would build upon itself: the more you can speak, the more there is to say. The problem with this theory, however, is that genetics suggests Neanderthals had already undergone the linguistic revolution hundreds of thousands of years earlier – with certain versions of genes related to languages sweeping through the species. So if language was the trigger, why did the revolution not happen earlier, and to Neanderthals too? Others think that some aspect of human cognition must have been different in these first ‘behaviourally modern humans’: forward planning, or conscious imitation, say. But what caused language, or exchange, or forethought, to start when and where it did?
Almost everybody answers this question in biological terms: a mutation in some gene, altering some aspect of brain structure, gave our ancestors a new skill, which enabled them to build a culture that became cumulative. Richard Klein, for instance, talks of a single genetic change that ‘fostered the uniquely modern ability to adapt to a remarkable range of natural and social circumstance’. Others have spoken of alterations in the size, wiring and physiology of the human brain to make possible everything from language and tool use to science and art. Others suggest that a small number of mutations, altering the structure or expression of developmental regulatory genes, were what triggered a cultural explosion. The evolutionary geneticist Svante Pääbo says: ‘If there is a genetic underpinning to this cultural and technological explosion, as I’m sure there is …’
I am not sure there is a genetic underpinning. Or rather, I think they all have it backwards, and are putting the cart before the horse. I think it is wrong to assume that complex cognition is what makes human beings uniquely capable of cumulative cultural evolution. Rather, it is the other way around. Cultural evolution drove the changes in cognition that are embedded in our genes. The changes in genes are the consequences of cultural changes. Remember the example of the ability to digest milk in adults, which is unknown in other mammals, but common among people of European and east African origin. The genetic change was a response to the cultural change. This happened about 5,000–8,000 years ago. The geneticist Simon Fisher and I argued that the same must have been true for other features of human culture that appeared long before that. The genetic mutations associated with facilitating our skill with language – which show evidence of ‘selective sweeps’ in the past few hundred thousand years, implying that they spread rapidly through the species – were unlikely to be the triggers that caused us to speak; but were more likely the genetic responses to the fact that we were speaking. Only in a language-using animal would the ability to use language more fluently be an advantage. So we will search in vain for the biological trigger of the human revolution in Africa 200,000 years ago, for all we will find is biological responses to culture. The fortuitous adopting of a habit, through force of circumstance, by a certain tribe might have been enough to select for genes that made the members of that tribe better at speaking, exchanging, planning or innovating. In people, genes are probably the slaves, not the masters, of culture.
Music, too, evolves. To a surprising extent, it changes under its own steam, with musicians carried along for the ride. Baroque begets classical begets romantic begets ragtime begets jazz begets blues begets rock begets pop. One style could not emerge without the previous style existing. There are hybridisation events along the way: African traditional music mates with blues to produce jazz. Instruments change, but mainly as a result of descent with modification from other instruments, not by de novo invention. The piano is the descendant of the harpsichord, which shares an ancestor with the harp. The trombone is the daughter of the trumpet and the cousin of the horn. The violin and the cello are modified lutes. Just as Mozart could not have written what he did if Bach and his like had not written what they did, nor could Beethoven have written his music without drawing upon Mozart. Technology matters, but so do ideas: Pythagoras’s discovery of the octave scale was a crucial moment in the history of music. So was syncopation. The invention of the amplified electric guitar made small groups able to entertain large ones as easily as orchestras once could. The point is that there was an inexorable inevitability about the gradual progress of music. It could not stop changing as each generation of musicians learned and experimented with music.

The evolution of marriage
One of the characteristics of evolution is that it produces patterns of change that make sense in retrospect, but that came about without even a hint of conscious design. Take the human mating system. The emergence, fall, rise and fall again of marriage over the last few thousand years constitute a fine example of this pattern. I am not talking about the evolution of mating instincts, but the history of cultural marriage habits.
The instincts are there, sure enough. Human mating patterns plainly still reflect ingrained genetic tendencies honed in the African savannah over millions of years. Judging by the modest difference in size and strength between men and women, we are clearly not designed for pure polygamy of the gorilla kind, where gigantic males compete for ownership of stable harems of females, killing the predecessor male’s babies when they succeed. On the other hand, judging by the modest size of our testicles, we are not designed for the sexual free-for-all of chimpanzees and bonobos, where promiscuous females (in what is probably an instinctive bid to prevent infanticide) ensure that most male-to-male competition happens between sperm rather than between individuals, blurring paternity. We are like neither of these. Hunter-gatherer societies, it turned out once we got to study them starting in the 1920s, are mainly monogamous. Males and females form exclusive pair bonds, and if either sex desires sexual variety it largely seeks it in secret. Monogamous pair bonding, with fathers closely involved in providing for offspring, seems to be the peculiar human pattern that men and women adopted for most of the past few million years. This is unusual among mammals, much more common among birds.
