
Why Us?: How Science Rediscovered the Mystery of Ourselves
James Le Fanu
The imperative to 'know thyself' is both fundamental and profoundly elusive – for how can we ever truly comprehend the drama and complexity of the human experience?

In ‘Why Us?’ James Le Fanu offers a fascinating exploration of the power and limits of science to penetrate the deep mysteries of our existence, challenging the certainty that has persisted since Charles Darwin's Origin of Species that we are no more than the fortuitous consequence of a materialist evolutionary process.

That challenge arises, unexpectedly, from the two major projects that promised to provide definitive proof for this most influential of scientific theories. The first is the astonishing achievement of the Human Genome Project, which, it was anticipated, would identify the genetic basis of those characteristics that distinguish humans from their primate cousins. The second is the phenomenal advance in brain imaging that now permits neuroscientists to observe the brain 'in action' and thus account for the remarkable properties of the human mind.

But that is not how it has turned out. It is simply not possible to get from the monotonous sequence of genes along the Double Helix to the near infinite diversity of the living world, nor to translate the electrical firing of the brain into the creativity of the human mind. This is not a matter of not knowing all the facts. Rather, science has inadvertently discovered that its theories are insufficient to conjure the wonder of the human experience from the bare bones of our genes and brains.

We stand on the brink of a tectonic shift in our understanding of ourselves that will witness the rediscovery of the central premise of Western philosophy that there is 'more than we can know'. Lucid, compelling and utterly engaging, ‘Why Us?’ offers a convincing and provocative vision of the new science of being human.




Why Us?
How Science Rediscovered the
Mystery of Ourselves
JAMES LE FANU



Dedication
For Juliet

Contents
Cover
Title Page
Dedication
Introduction: A Mystery to Ourselves
1 Science Triumphant, Almost
2 The Ascent of Man: A Riddle in Two Parts
3 The Limits of Science 1: The Quixotic Universe
4 The (Evolutionary) ‘Reason for Everything’: Certainty
5 The (Evolutionary) ‘Reason for Everything’: Doubt
6 The Limits of Science 2: The Impenetrable Helix
7 The Fall of Man: A Tragedy in Two Acts
8 The Limits of Science 3: The Unfathomable Brain
9 The Silence
10 Restoring Man to his Pedestal
Index
Acknowledgements
Notes
By the Same Author
Copyright
About the Publisher

Introduction: A Mystery to Ourselves
‘Know then thyself, presume not God to scan;
The proper study of mankind is man …
Sole judge of Truth, in endless Error hurled:
The glory, jest and riddle of the world!’
Alexander Pope, ‘An Essay on Man’ (1734)
‘Wonders are there many,’ observed the Greek dramatist Sophocles – ‘but none more wonderful than Man.’ And rightly so, for man, as far as we can tell, is the sole witness of the splendours of the universe he inhabits – though consistently less impressed by his existence than would seem warranted.
‘Men go abroad to wonder at the height of mountains, at the huge waves of the sea, at the long courses of the rivers, at the vast compass of the ocean, at the circular motion of the stars,’ observed St Augustine in the fifth century AD, ‘and they pass by themselves without wondering.’
The reasons for that lack of wonder at ourselves have changed over the centuries, but the most important still stands: the practicalities of our everyday lives are so simple and effortless as to seem unremarkable. We open our eyes on waking to be surrounded immediately by the shapes and colours, sounds, smells and movement of the world around us in the most vivid and exquisite detail. We feel hungry, and by some magical alchemy of which we know nothing, our bodies transform the food and drink before us into our flesh and blood. We open our mouths to speak and the words flow, a ceaseless babbling brook of thoughts and ideas and impressions. We reproduce our kind with ease and play no part in the transformation, in three short months, of the single fertilised egg into an embryo, no larger than a thumbnail, whose four thousand functioning parts include a beating heart the size of the letters on this page, and a couple of eyes the size of the full stop at the end of this sentence. We attend to our children’s needs, but effortlessly they grow inch by inch, year by year to adulthood, replacing along the way virtually every cell in their bodies, refashioning the skull, limbs and internal organs, while retaining the proportions of one part to another.
The moment one starts to reflect on any of these practicalities, their effortlessness does begin to seem rather astonishing. They clearly are not in the least bit simple – yet in reality they are almost the simplest thing we know. They appear simple because they have to be so: if our senses did not accurately capture the world around us, if our metabolism did not abstract and utilise every nutrient, if procreation was not almost too easy and the growth of children into adulthood not virtually automatic, if we had to consciously make an effort to speak a sentence – then ‘we’ would never have happened.
This should make us pause for a moment because, from common experience, there is nothing more difficult and arduous than to make the complex appear simple – just as the concert pianist’s seemingly effortless keyboard skills are grounded in years of toil and practice. So, it is precisely the effortlessness of our everyday lives that should command our attention – recognising their semblance of simplicity as a mark of their unfathomable profundity.
But most people nowadays do ‘pass by themselves without wondering’; though less justifiably so than in St Augustine’s time, for we now know prodigiously more about the deep biological complexities that underpin those simplicities of our everyday lives. We should, by rights, be enormously more appreciative of nature’s ingenuity, and the deceptive effortlessness of our seeing and talking and reproducing our kind should be part of common knowledge, a central theme of the school biology curriculum, promoting a sense of wonder in the young mind at the fact of its very existence.
Yet one could search a shelf full of biology textbooks in vain for the slightest hint of the extraordinary in their detailed exposition of these practicalities of our lives. And why? Scientists do not ‘do’ wonder. Rather, for the past 150 years they have interpreted the world through the prism of supposing that there is nothing in principle that cannot be accounted for, where the unknown is merely waiting-to-be-known. And so it has been till very recently, when two of the most ambitious scientific projects ever conceived have revealed, quite unexpectedly – and without anyone really noticing – that we are after all a mystery to ourselves. This is the story of how it happened, and its (many) consequences.

1 Science Triumphant, Almost
‘The real voyage of discovery consists not in seeking new lands, but in seeing with new eyes.’
Marcel Proust
We live in the Age of Science, whose lengthy roll-call of discoveries and technical innovations has immeasurably changed our lives for the better. Within living memory children succumbed in their thousands every year to polio and whooping cough, telephones were a rarity, colour television was yet to be invented and the family would gather every evening around the wireless after supper to listen to the news.
Since then, the therapeutic revolution of the post-war years has reduced death in infancy to its irreducible minimum, while ensuring that most now live out their natural lifespan; the electronic revolution has prodigiously extended both the capacity of the human mind, with computers of ever smaller size and greater power, and its horizons, with the Hubble telescope circling in orbit around the earth, relaying back from the far reaches of the cosmos sensational images of its beauty and grandeur.
The landmarks of this post-war scientific achievement are familiar enough: for medicine, there are antibiotics and the pill, heart transplants and test tube babies (and much else besides); for electronics, the mobile phone and the Internet; for space exploration, the Apollo moon landing of 1969 and the epic journey of Voyagers I and II to the far reaches of our solar system. But these last fifty years have witnessed something yet more remarkable still – a series of discoveries that, combined together, constitute the single most impressive intellectual achievement of all time, allowing us to ‘hold in our mind’s eye’ the entire sweep of the history of the universe from its beginning till now. That history, we now know, starts fifteen thousand million years ago (or thereabouts) with the Big Bang, ‘a moment of glory too swift and expansive for any form of words [when] a speck of matter became in a million millionth of a second something at least ten million million million times bigger’. Eleven thousand million years pass, and a massive cloud of gas, dust, pebbles and rocks in a minor galaxy of that (by now) vast universe coalesces around a young sun to create the planets of our solar system. Another thousand million years pass, the surface of the earth cools and the first forms of life emerge from some primeval swamp of chemicals. Yet another two and a half thousand million years elapse till that moment a mere(!) five million years ago when the earliest of our ancestors first walked upright across the savannah plains of central Africa.
And again, within living memory we knew none of this, neither how the universe came into being, nor its size and composition; neither how our earth was born, nor how its landscape and oceans were created; neither the timing of the emergence of life, nor the ‘universal code’ by which all living things reproduce their kind; neither the physical characteristics of our earliest ancestors, nor the details of their evolutionary transformation to modern man. Now we do, and holding this historical sweep ‘in our mind’s eye’ it is possible to appreciate that the intellectual endeavour that underpins it will never, can never, be surpassed. How astonishing to realise that today’s astronomers can detect the distant echoes of that ‘moment of glory’ of the Big Bang all those billions of years ago, and capture in those astonishing images transmitted from the Hubble telescope the very processes that brought our solar system into existence. How astonishing that geologists should have discovered that massive plates of rock beneath the earth’s surface, moving at the rate of a centimetre a year, have formed its continents and oceans, the mountains and valleys of the snow-capped Himalayas thrust upwards by the collision of the Indian subcontinent with the Asian landmass. How astonishing, too, that biologists should now understand the internal workings of the microscopic cell, and how the arrangements of the same four molecules strung out along the elegant spiral of the Double Helix contain the ‘master plan’ of every living thing that has ever existed.
It is impossible to convey the intellectual exhilaration of such momentous discoveries, but the account by Donald Johanson of finding the first near-complete skeleton of our three-and-a-half-million-year-old hominid ancestor ‘Lucy’ conveys something of the emotions felt by so many scientists over the past fifty years.
Tom [Gray] and I had surveyed for a couple of hours. It was now close to noon, and the temperature was approaching 110. We hadn’t found much: a few teeth of a small extinct horse; part of the skull of an extinct pig, some antelope molars, a bit of a monkey jaw …
‘I’ve had it,’ said Tom. ‘When do we head back to camp?’
But as we turned to leave, I noticed something lying on the ground part way up the slope.
‘That’s a bit of a hominid arm,’ I said.
‘Can’t be. It’s too small. Has to be monkey of some kind.’
We knelt to examine it.
‘Much too small,’ said Gray again.
I shook my head. ‘Hominid.’
‘What makes you so sure?’ he said.
‘That piece right next to your hand. That’s hominid too.’
‘Jesus Christ,’ said Gray. He picked it up. It was the back of a small skull. A few feet away was part of a femur; a thigh bone. ‘Jesus Christ,’ he said again. We stood up and began to see other bits of bone on the slope. A couple of vertebrae, part of a pelvis – all of them hominid. An unbelievable, impermissible thought flickered through my mind. For suppose all these fitted together? Could they be parts of a single extremely primitive skeleton? No such skeleton has ever been found – anywhere.
‘Look at that,’ said Gray. ‘Ribs.’
A single individual.
‘I can’t believe it,’ I said, ‘I just can’t believe it.’
‘By God you’d better believe it!’ shouted Gray. His voice went up into a howl. I joined him. In that 110 degree heat we began jumping up and down. With nobody to share our feelings, we hugged each other, sweaty and smelly, howling and hugging in the heat-shimmering gravel, the small brown remains of what now seemed almost certain to be parts of a single hominid skeleton lying all around us.
Momentous events have multiple causes, and the source of this so recent and all-encompassing delineation of the history of our universe stretches back across the centuries. It is impossible to hope to convey the intellectual brilliance and industry of those who brought this extraordinary enterprise to fruition, whose major landmarks are summarised here as the Thirty Definitive Moments of the past six decades.

TABLE 1
Science Triumphant 1945–2001: Thirty Definitive Moments


The triumph of science, one might suppose, is virtually complete. What, during these times, have we learned from the humanities – philosophy, say, theology or history – that begins to touch the breadth and originality of this scientific achievement and the sheer extraordinariness of its insights? What, one might add, have the humanities done that begins to touch the medical therapeutic revolution of the post-war years or the wonders of modern technology?

That history of our universe as revealed in the recent past draws on many disciplines: cosmology and astronomy obviously, the earth and atmospheric sciences, biology, chemistry and genetics, anthropology and archaeology, and many others. But science is also a unified enterprise, and these areas of enquiry all ‘hang together’ to reveal the coherent story outlined above. There remained, however, two great unknowns, two final obstacles to a truly comprehensive theory that would also explain our place in that universe.
The first is how it is that we, like all living things, reproduce our kind with such precision from one generation to the next. The ‘instructions’, as is well recognised, come in the form of genes strung out along the two intertwining strands of the Double Helix in the nucleus of every cell. But the question still remained: How do those genes generate that near-infinite diversity and beauty of form, shape and size, and behaviour that distinguish one form of life from another? How do they fashion from a single fertilised human egg the unique physical features and mind of each one of us?
The second of these ‘great unknowns’ concerned the workings of the brain, and the human brain in particular. To be sure, neurologists have over the past hundred years identified the functions of its several parts – with the frontal lobes as the ‘centre’ of rational thought and emotion, the visual cortex at the back, the speech centre in the left hemisphere and so on. But again the question remained: How does the electrical firing of the brain’s billions of nerves ‘translate’ into our perception of the sights and sounds of the world around us, our thoughts and emotions and the rich inner landscape of personal memories?
These two substantial questions had remained unresolved because both the Double Helix and the brain were inaccessible to scientific scrutiny: the Double Helix, with its prodigious amount of genetic information, comes packed within the nucleus of the cell, a mere one five thousandth of a millimetre in diameter; while the blizzard of electrical activity of the billions of neurons of the brain is hidden within the confines of the bony vault of the skull. But then, in the early 1970s, a series of technical innovations would open up first the Double Helix and then the brain to scientific investigation, with the promise that these final obstacles to our scientific understanding of ourselves might soon be overcome. We will briefly consider each in turn.

The Double Helix
The Double Helix, discovered by James Watson and Francis Crick in 1953, is among the most familiar images of twentieth-century science. Its simple and elegant spiral structure of two intertwined strands unzips and replicates itself every time the cell divides – each strand, an immensely long sequence of just four molecules (best conceived, for the moment, as four different-coloured discs – blue, yellow, red and green). The specific arrangement of a thousand or more of these coloured discs constitutes a ‘gene’, passed down from generation to generation, that determines your size and shape, the colour of your eyes or hair or any other similarly distinguishing traits, along with the thousands of widgets or parts from which we are all made. It would take another fifteen years to work it all out, at least in theory – but the practical details of which particular sequence of coloured discs constituted which gene, and what each gene did, still remained quite unknown. This situation would change dramatically in the 1970s, with three technical innovations that would allow biologists first to chop up those three billion ‘coloured discs’ into manageable fragments, then to generate thousands of copies the better to study them, and finally to ‘spell out’ the sequence (red, green, blue, green, yellow, etc., etc.) that constitutes a single gene.
It lies beyond hyperbole to even try to convey the excitement and exhilaration generated by this trio of technical innovations, whose potential marked ‘so significant a departure from that which had gone before’ that they would become known collectively as ‘the New Genetics’. The prospect of deciphering the genetic instructions of ‘life’ opened up a Pandora’s box of possibilities, conferring on biologists the opportunity to change the previously immutable laws of nature by genetically modifying plants and animals. The findings of the New Genetics filled the pages not only of learned journals but of the popular press: ‘Gene Find Gives Insight into Brittle Bones’, ‘Scientists Find Genes to Combat Cancer’, ‘Scientists Find Secret of Ageing’, ‘Gene Therapy Offers Hope to Victims of Arthritis’, ‘Cell Growth Gene Offers Prospect of Cancer Cure’, ‘Gene Transplants to Fight Anaemia’, and so on.
The New Genetics, in short, swept all before it to become synonymous with biology itself. Before long the entire spectrum of research scientists – botanists, zoologists, physiologists, microbiologists – would be applying its techniques to their speciality. The procedures themselves in turn became ever more sophisticated, opening up the prospect that the New Genetics might transcend the possibilities of discovering ‘the gene for this and the gene for that’, to spell out the entire sequence of coloured discs strung along the Double Helix and thus reveal the full complement of genes, known as The Genome. There was every reason to suppose that deciphering the full set of genetic instructions of what makes a bacterium a bacterium, a worm a worm, and a common housefly a common housefly would reveal how they are made and how they come to be so readily distinguishable from each other – why the worm should burrow and the fly should fly. Then, at the close of the 1980s, the co-discoverer of the Double Helix, James Watson, proposed what would become the single most ambitious and costly project ever conceived in the history of biology – to spell out the full complement of human genes. Thus the Human Genome Project (HGP) was born, with its promise to make clear what it is in our genes that makes us, ‘us’. The truism that ‘the answer lies in the genes’ is not merely an abstract idea; rather, the set of instructions passed down from generation to generation influences every aspect of our being: our physical characteristics, personality, intelligence, predisposition to alcoholism or heart disease, and much else besides. Spell out the human genome in its entirety, and all these phenomena, and more, should finally be accounted for.
‘The search for this “Holy Grail” of who we are,’ observed Harvard University’s Walter Gilbert, ‘has now reached its culminating phase, the ultimate goal is the acquisition of all the details of our genome … that will transform our capacity to predict what we will become.’ There could be no greater aspiration than to ‘permit a living creature’, as Robert Sinsheimer, Chancellor of the University of California, put it, ‘for the first time in all time, to understand its origins and design its future’. The Genome Project would, claimed Professor John Savile of Nottingham’s University Hospital, ‘like a mechanical army, systematically destroy ignorance’, while ‘promising unprecedented opportunities for science and medicine’.
The Human Genome Project was formally launched in 1991, with a projected cost of $3 billion over the fifteen years it was expected to run. The task of assembling the vast quantity of data generated by spelling out the human genome was divided across several centres, most of them in the United States, Britain and Japan. The scene within those ‘genome centres’ could have come from a science fiction film – gleaming automated machines as far as the eye could see, labelling each chemical of the Double Helix with its own fluorescent dye which was then ‘excited’ by a laser and the results fed directly into a computer. ‘The Future is Now’ trumpeted the cover of Time magazine, imposing that iconic image of the Double Helix over the shadowy outline of a human figure in the background.

The Brain
Meanwhile, the human brain too was about to reveal its secrets. Its physical appearance is quite as familiar as the Double Helix. But the specialisation of those separate parts for seeing, hearing, movement and so on is in a sense deceptive, concealing the crucial question of how their electrical firing translates into the sights and sounds of the external world, or summons those evocative childhood memories from the distant past. How does this mere three pounds of soft grey matter within the skull contain the experience of a lifetime?
Here again, a series of technical innovations, paralleling those of the New Genetics, would permit scientists for the first time to scrutinise the brain ‘in action’. In 1973 the British physicist Godfrey Hounsfield invented the Computed Tomography (CT) scanner, revealing the brain’s internal structure with an almost haunting clarity, revolutionising the diagnosis of strokes and tumours and other forms of mischief. Soon after, the further technical development of Positron Emission Tomography (PET) scanning would transform the CT scanner’s static images or ‘snapshots’ of the brain into ‘moving pictures’.
Put simply, this is how it works. All of life requires oxygen to drive the chemical reactions in its cells. This oxygen is extracted from the air, inspired in the lungs and transported by blood cells to the tissues. When, for example, we start talking, the firing of the neurons in the language centre of the brain massively increases their demand for oxygen, which can only be met by increasing the bloodflow to that area. The PET scanner detects that increase in bloodflow, converting it into multi-coloured images that pick out the ‘hotspots’ of activity. Now, for the first time, the internal workings of the brain induced by smelling a rose or listening to a violin sonata could be observed as they happened. Or (as here) picking out rhyming words:
A woman sits quietly waiting for the experiment to begin – her head ensconced in a donut-shaped device, a PET scanning camera. Thirty-one rings of radiation detectors make up the donut, which will scan thirty-one images simultaneously in parallel horizontal lines. She is next injected with a radioactive isotope [of oxygen] and begins to perform the task … The words are presented one above the other on a television monitor. If they rhyme, she taps a response key. Radiation counters estimate how hard the brain region is working … and are transformed into images where higher counts are represented by brighter colours [thus] this colour map of her brain reveals all the regions acting while she is judging the paired words.
The details will come later, but the PET scanner would create the discipline of modern neuroscience, attracting thousands of young scientists keen to investigate this previously unexplored territory. Recognising the possibilities of the new techniques, the United States Congress in 1989 designated the next ten years as ‘the Decade of the Brain’ in anticipation of the many important new discoveries that would deliver ‘precise and effective means of predicting, modifying and controlling individual behaviour’. ‘The question is not whether the neural machinery [of the brain] will be understood,’ observed Professor of Neurology Antonio Damasio, writing in the journal Scientific American, ‘but when.’
Throughout the 1990s, both the Human Genome Project and the Decade of the Brain would generate an enormous sense of optimism, rounding off the already prodigious scientific achievements of the previous fifty years. And sure enough, the completion of both projects on the cusp of the new millennium would prove momentous events.

The completion of the first draft of the Human Genome Project in June 2000 was considered sufficiently important to warrant a press conference in the presidential office of the White House. ‘Nearly two centuries ago in this room, on this floor, Thomas Jefferson spread out a magnificent map … the product of a courageous expedition across the American frontier all the way to the Pacific,’ President Bill Clinton declared. ‘But today the world is joining us here to behold a map of even greater significance. We are here to celebrate the completion of the first survey of the entire human genome. Without a doubt this is the most important, most wondrous map ever produced by mankind.’
The following year, in February 2001, the two most prestigious science journals, Nature and Science, each published a complete version of that ‘most wondrous map ever produced by mankind’ as a large, multi-coloured poster displaying the full complement of (as it would turn out) twenty-five thousand human genes. It was, as Science observed, ‘an awe-inspiring sight’. Indeed, it was awesome twice over. Back in the 1950s, when Francis Crick and James Watson were working out the structure of the Double Helix, they had no detailed knowledge of a single gene, what it is or what it does. Now, thanks to the techniques of the New Genetics, those involved in the Genome Project had, in less than a decade, successfully culled from those three billion ‘coloured discs’ strung out along its intertwining strands the hard currency of each of the twenty-five thousand genes that determine who we are.
The Human Genome map, like Thomas Jefferson’s map of the United States, portrays the major features of that genetic landscape with astonishing precision. While it had taken the best part of seven years to find the defective gene responsible for the lung disorder cystic fibrosis, now anyone could locate it from that multi-coloured poster in as many seconds. Here too at a glance you can pick out the gene for the hormone insulin, which controls the level of sugar in the blood, or the haemoglobin molecule that transports oxygen to the tissues. To be sure, the functions of many thousands of those genes remained obscure, but now, knowing their precise location and the sequence of which they are composed, it would be only a matter of time before they too would be known. It was a defining moment. ‘Today will be recorded as one of the most significant dates in history,’ insisted one of the major architects of the Genome Project, Dr Michael Dexter of the Wellcome Trust in Britain. ‘Just as Copernicus changed our understanding of the solar system and man’s place within it, so knowledge of the human genome will change how we see ourselves and our relationship to others.’

The goals of the Decade of the Brain were necessarily more open-ended, but still the PET scanner, and the yet more sophisticated brain imaging techniques that followed in its wake, had more than fulfilled their promise, allowing scientists to draw another exquisitely detailed map locating the full range of mental abilities to specific parts of the brain. There were many surprises along the way, not least how the brain fragmented the simplest of tasks into a myriad of different components. It had long been supposed, for instance, that the visual cortex at the back of the brain acted as a sort of photographic plate, capturing an image of the external world as seen through the eye. But now it turned out that the brain ‘created’ that image from the interaction of thirty or more separate maps within the visual cortex, each dedicated to one or other aspect of the visual image, the shapes, colour, movement of the world ‘out there’. ‘As surely as the old system was rooted in the concept of an image of the visual world received and analysed by the cortex,’ observes Semir Zeki, Professor of Neurobiology at the University of London, ‘the present one is rooted in the belief that an image of the visual world is actively constructed by the cerebral cortex.’
Steven Pinker, Professor of Brain and Cognitive Science at the Massachusetts Institute of Technology, could explain to the readers of Time magazine in April 2000 (the close of the Decade of the Brain) how neuroscientists armed with their new techniques had investigated ‘every facet of mind from mental images to moral sense, from mundane memories to acts of genius’, concluding, ‘I have little reason to doubt that we will crack the mystery of how brain events correlate with experience.’
Both the Human Genome Project and the Decade of the Brain have indeed transformed, beyond measure, our understanding of ourselves – but in a way quite contrary to that anticipated.
Nearly ten years have elapsed since those heady days when the ‘Holy Grail’ of the scientific enterprise, the secrets of life and the human mind, seemed almost within reach. Every month the pages of the science journals are still filled with the latest discoveries generated by the techniques of the New Genetics, and yet more colourful scans of the workings of the brain – but there is no longer the expectation that the accumulation of yet more facts will ever provide an adequate scientific explanation of the human experience. Why?
We return first to the Human Genome Project, which, together with those of the worm and fly, mouse and chimpanzee and others that would follow in its wake, was predicated on the assumption that knowledge of the full complement of genes must explain, to a greater or lesser extent, why and how the millions of species with which we share this planet are so readily distinguishable in form and attributes from each other. The genomes must, in short, reflect the complexity and variety of ‘life’ itself. But that is not how it has turned out.
First, there is the ‘numbers problem’. That final tally of twenty-five thousand human genes is, by definition, sufficient for its task, but it seems a trifling number to ‘instruct’, for example, how a single fertilised egg is transformed in a few short months into a fully formed being, or to determine how the billions of neurons in the brain are wired together so as to encompass the experiences of a lifetime. Those twenty-five thousand genes must, in short, ‘multi-task’, each performing numerous different functions, combining together in a staggeringly large number of different permutations.
That paucity of genes is more puzzling still when the comparison is made with the genomes of other creatures vastly simpler than ourselves – several thousand for a single-cell bacterium, seventeen thousand for a millimetre-sized worm, and a similar number for a fly. This rough equivalence in the number of genes across so vast a range of ‘organismic complexity’ is totally inexplicable. But no more so than the discovery that the human genome is virtually interchangeable with that of our fellow vertebrates such as the mouse and chimpanzee – to the tune of 98 per cent or more. There is, in short, nothing to account for those very special attributes that so readily distinguish us from our primate cousins – our upright stance, our powers of reason and imagination, and the faculty of language.
The director of the Chimpanzee Genome Project, Svante Paabo, had originally anticipated that its comparison with the human genome would reveal the ‘profoundly interesting genetic prerequisites’ that set us apart:
The realisation that a few genetic accidents made human history possible will provide us with a whole new set of philosophical challenges to think about … both a source of humility and a blow to the idea of human uniqueness.
But publication of the completed version of the chimpanzee genome in 2005 prompted a more muted interpretation of its significance: ‘We cannot see in this why we are so different from chimpanzees,’ Paabo commented. ‘Part of the secret is hidden in there, but we don’t understand it yet.’ So ‘The obvious differences between humans and chimps cannot be explained by genetics alone’ – which would seem fair comment, until one reflects that if those differences ‘cannot be explained’ by genes, then what is the explanation?
These findings were not just unexpected, they undermined the central premise of biology: that the near-infinite diversity of form and attributes that so definitively distinguish living things one from the other must ‘lie in the genes’. The genome projects were predicated on the assumption that the ‘genes for’ the delicate, stooping head and pure white petals of the snowdrop would be different from the ‘genes for’ the colourful, upstanding petals of the tulip, which would be different again from the ‘genes for’ flies and frogs, birds and humans. But the genome projects reveal a very different story, where the genes ‘code for’ the nuts and bolts of the cells from which all living things are made – the hormones, enzymes and proteins of the ‘chemistry of life’ – but the diverse subtlety of form, shape and colour that distinguishes snowdrops from tulips, flies from frogs and humans, is nowhere to be found. Put another way, there is not the slightest hint in the composition of the genes of fly or man to account for why the fly should have six legs, a pair of wings and a brain the size of a full stop, and we should have two arms, two legs and that prodigious brain. The ‘instructions’ must be there, of course, for otherwise flies would not produce flies and humans humans – but we have moved, in the wake of the Genome Project, from assuming that we knew the principle, if not the details, of that greatest of marvels, the genetic basis of the infinite variety of life, to recognising that we not only don’t understand the principles, we have no conception of what they might be.
We have here, as the historian of science Evelyn Fox Keller puts it:
One of those rare and wonderful moments when success teaches us humility … We lulled ourselves into believing that in discovering the basis for genetic information we had found ‘the secret of life’; we were confident that if we could only decode the message in the sequence of chemicals, we would understand the ‘programme’ that makes an organism what it is. But now there is at least a tacit acknowledgement of how large that gap between genetic ‘information’ and biological meaning really is.
And so, too, the Decade of the Brain. The PET scanner, as anticipated, generated many novel insights into the patterns of electrical activity of the brain as it looks out on the world ‘out there’, interprets the grammar and syntax of language, recalls past events, and much else besides. But at every turn the neuroscientists found themselves completely frustrated in their attempts to get at how the brain actually works.
Right from the beginning it was clear that there was simply ‘too much going on’. There could be no simpler experiment than to scan the brain of a subject when first reading, then speaking, then listening to, a single word such as ‘chair’. This should, it was anticipated, show the relevant part of the brain ‘lighting up’ – the visual cortex when reading, the speech centre when speaking, and the hearing cortex when listening. But no, the brain scan showed that each separate task ‘lit up’ not just the relevant part of the brain, but generated a blizzard of electrical activity across vast networks of millions of neurons – while thinking about the meaning of a word and speaking appeared to activate the brain virtually in its entirety. The brain, it seemed, must work in a way previously never really appreciated – not as an aggregate of distinct specialised parts, but as an integrated whole, with the same neuronal circuits performing many different functions.
The initial surprise at discovering how the brain fragmented the sights and sounds of the world ‘out there’ into a myriad of separate components grew greater still as it became clear that there was no compensating mechanism that might reintegrate all those fragments back together again into that personal experience of being at the centre, moment by moment, of a coherent, ever-changing world. Reflecting on this problem of how to ‘bind’ all the fragments back together again, Nobel Prize-winner David Hubel of Harvard University observed:
This abiding tendency for attributes such as form, colour and movement to be handled by separate structures in the brain immediately raises the question how all the information is finally assembled, say for perceiving a bouncing red ball. It obviously must be assembled – but where and how, we have no idea.
But the greatest perplexity of all was the failure to account for how the monotonous electrical activity of those billions of neurons in the brain translates into the limitless range and quality of subjective experiences of our everyday lives – where every transient, fleeting moment has its own distinct, unique, intangible feel: where the cadences of a Bach cantata are so utterly different from the flash of lightning, the taste of Bourbon from the lingering memory of that first kiss.
The implications are clear enough. While theoretically it might be possible for neuroscientists to know everything there is to know about the physical structure and activity of the brain, its ‘product’, the mind, with its thoughts and ideas, impressions and emotions, would still remain unaccounted for. As the philosopher Colin McGinn expresses it:
Suppose I know everything about your brain: I know its anatomy, its chemical ingredients, the pattern of electrical activity in its various segments, I even know the position of every atom and its subatomic structure. Do I therefore know everything about your mind? It certainly seems not. On the contrary, I know nothing about your mind. So knowledge of your brain does not give me knowledge of your mind.
This distinction between the electrical activity of the material brain and the non-material mind (of thoughts and ideas) as two quite different things might seem so self-evident as to be scarcely worth commenting on. But for neuroscientists the question of how the brain’s electrical activity translates into thoughts and sensations was precisely what needed explaining – and their failure to do so has come to haunt them. So, for everything that the Decade of the Brain undoubtedly achieved, nonetheless, as John Maddox, editor of Nature, would acknowledge at its close: ‘We seem as far from understanding [the brain] as we were a century ago. Nobody understands how decisions are made or how imagination is set free.’
This verdict on the disappointing outcomes of the Genome Project and the Decade of the Brain might seem a trifle premature. These are, after all, still very early days, and it is far too soon to predict what might emerge over the next twenty to thirty years. The only certainty about advances in human knowledge is that they open the door to further seemingly unanswerable questions, which in time will be resolved, and so on. The implication that here science may finally have ‘reached its limits’ would seem highly contentious, having been expressed many times in the past, only to be repeatedly disproved. Famously, the physicist Lord Kelvin, at the close of the nineteenth century, insisted that the future of his discipline was to be looked for in ‘the sixth place of decimals’ (that is, futile refinements of the then present state of knowledge). Within a few years Albert Einstein had put forward his Special Theory of Relativity, and the certainties of Lord Kelvin’s classical physics were eclipsed.
The situation here, however, is rather different, for while the New Genetics and those novel brain scanning techniques offer almost inexhaustible opportunities for further research, it is possible to anticipate in broad outline what their findings will add up to. Scientists could, if they so wished, spell out the genomes of each of the millions of species with which we share this planet – snails, bats, whales, elephants and so on – but that would only confirm that they are composed of several thousand similar genes that ‘code’ for the nuts and bolts of the cells of which they are made, while the really interesting question, of how those genes determine the unique form and attributes of the snail, bat, elephant, whale or whatever, would remain unresolved. And so too for the scanning techniques of the neurosciences, where a million scans of subjects watching a video of bouncing red balls would not take us an iota further in understanding what needs explaining – how the neuronal circuits experience the ball as being red and round and bouncing.
At any other time these twin setbacks to the scientific enterprise might simply have been relegated to the category of problems for which science does not as yet have the answer. But when cosmologists can reliably infer what happened in the first few minutes of the birth of the universe, and geologists can measure the movements of vast continents to the nearest centimetre, then the inscrutability of those genetic instructions that should distinguish a human from a fly, or the failure to account for something as elementary as how we recall a telephone number, throws into sharp relief the unfathomability of ourselves. It is as if we, and indeed all living things, are in some way different, profounder and more complex than the physical world to which we belong.
Nonetheless there must be a reason why those genome projects proved so uninformative about the form and attributes of living things, or why the Decade of the Brain should have fallen so far short of explaining the mind. There is a powerful impression that science has been looking in the wrong place, seeking to resolve questions whose answers lie somehow outside its domain. This is not just a matter of science not yet knowing all the facts; rather there is the sense that something of immense importance is ‘missing’ that might transform the bare bones of genes into the wondrous diversity of the living world, and the monotonous electrical firing of the neurons of the brain into the vast spectrum of sensations and ideas of the human mind. What might that ‘missing’ element be?
Much of the prestige of science lies in its ability to link together disparate observations to reveal the processes that underpin them. But this does not mean that science ‘captures’ the phenomena it describes – far from it. There is, after all, nothing in the chemistry of water (two atoms of hydrogen to one of oxygen) that captures its diverse properties as we know them to be from personal experience: the warmth and wetness of summer rain, the purity and coldness of snow in winter, the babbling brook and the placid lake, water refreshing the dry earth, causing the flowers to bloom and cleansing everything it touches. It is customary to portray this distinction as ‘two orders of reality’. The ‘first’ or ‘primary reality’ of water is that personal knowledge of its diverse states and properties that includes not just how we perceive it through our senses, but also the memories, emotions and feelings with which we respond to it. By contrast, the ‘second order reality’ is water’s materiality, its chemical composition as revealed by the experimental methods of the founder of modern chemistry, the French genius Antoine Lavoisier, who in 1783 sparked the two gases of hydrogen and oxygen together in a test tube, to find a residue of dew-like drops that ‘seemed like water’.
These two radically different, yet complementary, ‘orders of reality’ of water are mutually exclusive. There is nothing in our personal experience that hints at water’s chemical composition, nor conversely is there anything in its chemical formula that hints at its many diverse states of rain, snow, babbling brook, as we know them from personal experience. This seemingly unbridgeable gap between these two orders of reality corresponds, if not precisely, to the notion of the ‘dual nature of reality’, composed of a non-material realm, epitomised by the thoughts and perceptions of the mind, and an objective material realm of, for example, chairs and tables. They correspond, again if not precisely, to two categories of knowledge that one might describe respectively as the philosophic and the scientific view. The ‘first order’ philosophic view is the aggregate of human knowledge of the world as known through the senses, interpreted and comprehended by the powers of reason and imagination. The ‘second order’ scientific view is limited to the material world and the laws that underpin it as revealed by science and its methods. They are both equally real – the fact of a snowflake melting in the palm of the hand is every bit as important as the fact of the scientific explanation that its melting involves a loosening of the lattice holding the molecules of hydrogen and oxygen together. The ‘philosophic’ view, however, could be said to encompass the scientific, for it not only ‘knows’ the snowflake melting in the hand as a snowflake, but also the atomic theory of matter and hence its chemical composition.
It would thus seem a mistake to prioritise scientific knowledge as being the more ‘real’, or to suppose its findings to be the more reliable. But, to put it simply, that is indeed what happened. Before the rise of science, the philosophic view necessarily prevailed, including the religious intimation from contemplating the wonders of the natural world and the richness of the human mind that there was ‘something more than can be known’.
From the late eighteenth century onwards the burgeoning success of science would progressively challenge that inference through its ability to ‘reduce’ the seemingly inscrutable complexities of the natural world to their more readily explicable parts and mechanisms: the earth’s secrets surrendered to the geologist’s hammer, the intricacies of the fabric of plants and animals to the microscopist’s scrutiny, the mysteries of nutrition and metabolism to the analytical techniques of the chemist. Meanwhile, the discovery of the table of chemical elements, the kinetic theory of heat, magnetism and electricity all vastly extended the explanatory powers of science. And, most significant of all, the theory of biological evolution offered a persuasive scientific explanation for that greatest of wonders – the origins and infinite diversity of form and attributes of living things.
The confidence generated by this remorseless expansion in scientific knowledge fostered the belief in its intrinsic superiority over the philosophic view, with the expectation that the universe and everything within it would ultimately be explicable in terms of its material properties alone. Science would become the ‘only begetter of truth’, its forms of knowledge not only more reliable but more valuable than those of the humanities. This assertion of the priority of the scientific view, known as scientific materialism (or just ‘materialism’), marked a watershed in Western civilisation, signalling the way to a future of scientific progress and technical advance while relegating to the past that now superseded philosophical inference of the preceding two thousand years of there being ‘more than we can know’. That future, the scientific programme of the twentieth century, would be marked by a progressively ever deeper scientific penetration into the properties of matter, encompassing the two extremes of scale from the vastness of the cosmos to the microscopic cell from which all living things are made. It began to seem as if there might be no limits to its explanatory power.
The genome projects and the Decade of the Brain represent the logical conclusion of that supposition. First, the genome projects were predicated on the assumption that unravelling the Double Helix would reveal ‘the secret of life’, as if a string of chemicals could possibly account for the vast sweep of qualities of the wonders of the living world; and second, the assumption of the Decade of the Brain that those brain scanning techniques would explain the mind, as if there could be any equivalence between the electrical firing of neurons and the limitless richness of the internal landscape of human memory, thought and action. In retrospect, both were no more likely to have fulfilled the promise held out for them than to suppose the ‘second order’ chemical composition of water might account for its diverse ‘first order’ states of rain, snow, oceans, lakes, rivers and streams as we know them to be.
This necessarily focuses our attention on what that potent ‘missing force’ must be that might bridge the gap between those two ‘orders of reality’, with the capacity to conjure the richness of human experience from the bare bones of our genes and brains. This is an even more formidable question than it might appear to be, for along the way those genome projects have also, inadvertently, undermined the credibility of the fundamental premise of what we do know about ourselves – that the living world and our uniquely human characteristics are the consequence of a known, scientifically proven, process of biological evolution. Certainly, the defining feature of the history of the universe, as outlined earlier, is of the progressive, creative, evolutionary transformation from the simplest elements of matter to ever higher levels of complexity and organisation. Over aeons of time the clouds of gas in intergalactic space evolved into solar systems such as our own. Subsequently the inhospitable landscape of our earth evolved again into its current life-sustaining biosphere, and so on. Thus the whole history of the cosmos is an evolutionary history. That is indisputable, but the biological theory of evolution goes further, with the claim to know the mechanisms by which the near-infinite diversity of forms of life (including ourselves) might have evolved by a process of random genetic changes from a single common ancestor.
It is, of course, possible that the living world and ourselves did so evolve, and indeed it is difficult to conceive of them not having done so. But the most significant consequence of the findings of the genome projects and neuroscience is the transformation of that foundational evolutionary doctrine into a riddle. The dramatic discovery of Lucy’s near-complete skeleton, already described, provides compelling evidence for man’s progressive evolutionary ascent over the past five million years. Why then, one might reasonably ask, is there not the slightest hint in the Human Genome of those unique attributes of the upright stance and massively expanded brain that so distinguish us from our primate cousins?
The ramifications of the seemingly disappointing outcomes of the New Genetics and the Decade of the Brain are clearly prodigious, suggesting that we are on the brink of some tectonic shift in our understanding of ourselves. These issues are nowhere more sharply delineated than in an examination of the achievements of the first human civilisation which marked the arrival of our species, Homo sapiens, thirty-five thousand years ago.

2 The Ascent of Man: A Riddle in Two Parts
‘Alone in that vastness, lit by the feeble beam of our lamps, we were seized by a strange feeling. Everything was so beautiful, so fresh, almost too much so. Time was abolished, as if the tens of thousands of years that separated us from the producers of these paintings no longer existed. It seemed as if they had just created these masterpieces. Suddenly we felt like intruders. Deeply impressed, we were weighed down by the feeling that we were not alone; the artists’ souls and spirits surrounded us. We thought we could feel their presence.’
Jean-Marie Chauvet on discovering the world’s oldest paintings, from 30,000 BC
The beginning for ourselves, Homo sapiens – modern, thoughtful, argumentative, reflective, creative man – can be pinpointed with remarkable accuracy to 35,000 BC, or thereabouts, in south-west Europe. Here, in the shadow of the snow-topped Pyrenees that separate what is now southern France from northern Spain, flourished the first and most enduring of all human civilisations, a vibrant, unified, coherent culture, transmitted from generation to generation for an astonishing twenty-five thousand years. This palaeolithic (Stone Age) civilisation, created by the first truly modern Europeans, was more long-lasting than any that have succeeded it: ten times longer than the 2,500-year reign of the pharaohs in Egypt, twenty-five times longer than the thousand years of Graeco-Roman antiquity.
The historical lineage of our species stretches much further back, into the almost unimaginably distant past of five or six million years ago, but those more ancient predecessors left nothing behind other than some precious and much-argued-over skulls, bones and teeth, and the stone implements, scrapers, blades and axe-heads with which they hunted and butchered their prey.
Homo sapiens, or ‘Cromagnon man’, as this first representative of our species is known (so named after the Cro-Magnon – ‘Big Hole’ – rock shelter where his remains were first unearthed in 1868), was something else. His arrival in south-west France signalled a cultural explosion of technological innovation and artistic expression that has characterised the human race ever since. And he was the first, too, to leave behind an image of himself, so though thirty-five thousand years separate us, we can readily make his acquaintance. Fly, or catch the train, to Paris, and take the Metro westwards to the suburban station of St Germain-en-Laye. Emerging from the entrance, you cannot miss the impressive moated château of the Musée Nationale d’Antiquités, home to the largest of all archaeological collections. Few tourists make it this far out of the city, and you may be virtually alone as you stride past the first few display cabinets with their serried ranks of those familiar – if not exactly thrilling – stone implements. And then suddenly, without warning, your eyes are caught by the face of a teenage girl fashioned from the glistening ivory of a mammoth’s tusk, so small and delicate she could easily nestle in the palm of your hand (see overleaf). Her triangular-shaped face with its long, straight nose and deep-set eyes emerges from a slender, graceful neck framed by flowing locks of braided hair. She is the ‘Dame de Brassempouy’, the first human portrait, unearthed in 1895 by the French archaeologist Édouard Piette from amongst a pile of mammoth and rhinoceros bones that covered the floor of a cave a few kilometres outside the village of Brassempouy in southern France, after which she is named. Her air of youthful innocence is complemented by a second sculpted object in the same cabinet, of similar size and from the same site, that exemplifies that other timeless image of womanhood – the mature and childbearing. It may only be a broken fragment, but her prominent breasts and fleshy thighs are unmistakably those of a fertile woman.
This youthful teenager and this mature woman, the first images of modern humanity, are both visual and tactile, their polished surfaces testimony to the countless generations whose hands caressed that braided hair and felt those fleshy contours. They subvert the customary perception of man’s trajectory from a primitive past to a civilised present by compelling us to recognise how little has changed. The cultural history of our species may stretch back thirty-five thousand years, but from its earliest beginnings to the present day it is clearly ‘of a piece’.
And there is yet more to the Dame de Brassempouy than this invaluable perspective. Her immediate predecessors in Europe, the beetle-browed, thick-necked Neanderthals (so named after the Neander valley in Germany where their remains were first unearthed), were no more capable of creating so exquisite an object than were the very earliest humans who traversed the savannah plains of Africa several million years previously. Now, those Neanderthals had many virtues. They were tough and intelligent enough to survive for a quarter of a million years in the hostile environments of the recurring Ice Ages that periodically swept across the continent, and they had a brain capacity slightly larger than our own. But they left not a single such image behind. The Dame de Brassempouy thus focuses our attention with exceptional clarity on that most important of questions: What happened in the transition to modern man? What is it that sets us apart? Why should we be so different?
The Cromagnons’ arrival in south-western Europe (#litres_trial_promo) was the culmination of an unexplained diaspora that 100,000 years earlier had impelled modern Homo sapiens to leave his African homeland and spread outwards to every corner of the earth. It was cold, of course, as throughout the tens of thousands of years of Cromagnon civilisation the ice cap several hundred miles to the north expanded and retreated. But they found shelter from the icy winds in the rocky south-facing valleys of the Dordogne and the Pyrenees. They had fire to warm themselves and animal furs for clothing, sewn together with the aid of ivory needles and held in place by exquisite ivory buttons. They lived in communities of several hundred spread out in separate dwelling places, and with a total population of probably little more than twenty thousand. They danced, as we know from the swinging breasts of an exquisite thirty-thousand-year-old statuette of a naked woman, and played music, fashioning drums from mammoth bones, clicking castanets from jawbones and flutes from the hollow bones of birds, which, with a whistle head attached, can be made to produce strong, clear notes. They wore jewellery and beads made from a few highly prized materials – certain types of seashells and animal teeth – which they traded over large distances. And they were great technical innovators. While their predecessors’ stone tools had scarcely changed in a million years, the Cromagnons prodigiously extended their sources of food supply by inventing both the spear-thrower and the harpoon. They invented oil lamps to illuminate the interiors of their caves, the drill that could put an ‘eye’ in a needle, and rope to bind their tents together.
And they had a passion for art (#litres_trial_promo). ‘We are justified in asserting they devoted themselves, intensely and continuously, to the creation of pictorial, graphic and sculptural works,’ writes the Italian art historian Paolo Graziosi. This is not the conventional version of primitive Stone Age art, where ‘stick’ men pursue their quarry with bows and spears, but is comparable to the art of the Italian Renaissance, with a naturalistic style that ‘sought to express reality in its deep unchanging essence’. Their powers of observation were so acute that ‘we know, for example, that the extinct rhinoceros of Ice Age Europe was adorned with a shaggy coat’, writes Ian Tattersall of the American Museum of Natural History, and that the extraordinary Megalocerus giganticus, a deer with vast antlers, had a darkly coloured hump behind its shoulders.
The Cromagnons’ artistic legacy takes two forms: ‘portable’ art, mostly sculptures and engravings on ivory and antler horn; and the distinctly ‘non-portable’ vast frescoes that covered the walls and ceilings of their cavernous cathedrals concealed in the depths of the mountainsides, in which they ‘mastered the problems of presenting three dimensions in two, and in giving a sensation of movement’.
And what movement! As archaeologist John Pfeiffer recalls on first glimpsing the ‘incomparable splendour’ of the painted caves at Lascaux in southern France:
It is pitch dark inside, and then the lights are turned on. Without prelude, before the eye has had a chance to look at any single feature, you see it whole, painted in red, black and yellow, a burst of animals, a procession dominated by huge creatures with horns. The animals form two lines converging from left and right, seeming to stream into a funnel mouth, towards and into a dark hole which marks the way into a deeper gallery.
It is not possible to convey the full range of the Cromagnons’ artistic virtuosity, so three striking examples must suffice. The first is the sculpted handle of a spear-thrower, fashioned from a reindeer’s antler, that shows a young ibex looking round at a large faecal stool emerging from its rectum, on which two birds are perched. The tautness of the animal’s neck muscles is beautifully conveyed in this humorous image, which must have been popular as several others, virtually identical, have since been discovered.
Next comes a fresco painting of a pride of lions from the Chauvet cave of the opening quotation to this chapter, whose ‘richly embellished chambers’ also feature mammoth, rhinoceros and an ‘exquisitely painted’ panel of horses’ heads. But these lions are the most impressive of all, showing how the Cromagnons had mastered the three-dimensional sense of perspective, with heavy paint-strokes beneath the neck adding depth to the image.
Thirdly there is a bison’s head (see overleaf), full of gravitas, sculpted from clay, from around 15,000 BC, that would be a masterpiece in any age. One can hardly imagine that it was created 14,500 years before the Golden Age of sculpture of classical Athens. Indeed, when compared to the sculpted head of an ox being led in sacrificial procession on the Parthenon frieze, one could almost be forgiven for thinking they were rendered by the same hand.
So there we have it, three artistic masterpieces, each of which conveys something of the profundity of the mind of these first representatives of our species, their humour and pathos, and their deep appreciation of the character of the animals with which they shared the world and which they so masterfully portrayed.
For the archaeologists of the nineteenth century, few things were quite as perplexing as the possibility that these wonderful expressions of human intelligence could have been created by Stone Age cave dwellers. It seemed inconceivable that man could have been capable of such artistic virtuosity in so distant a past, so when the Marquis de Sautuola stumbled across the first of the painted caves at Altamira in northern Spain – after his eight-year-old daughter drew his attention to a parade of bison on the ceiling with the famous phrase ‘Mira, Papa, bueyes!’ (‘Look, Papa, oxen!’) – no one believed him. His lecture to the International Congress of Archaeology in Lisbon in 1880 in which he described his findings was ‘met with incredulity and an abrupt and contemptuous dismissal’. The Altamira paintings were never acknowledged as authentic in his lifetime – rather, their exceptional quality was presumptive evidence that he must have faked them. The Marquis died a disillusioned man, yet any condemnation of his harsh treatment by his archaeological contemporaries is a judgement of hindsight. Their scepticism was not unreasonable, given that the proposition that Stone Age man might have been capable of creating such great art itself seemed unreasonable.
Nowadays we know better. The sensational recent discoveries of man’s earliest ancestors – in particular the two near-complete fossilised skeletons, ‘Lucy’ and ‘Turkana Boy’ – mark the first two distinct stages of Man’s Ascent, his decision to stand upright and his prodigiously enlarging brain. The ‘cultural explosion’ of Cromagnon man’s artistic achievement marks the culminating phase of that evolutionary trajectory, determined by the third of those distinctly human attributes – the faculty of language. And yet, the drama of that evolutionary trajectory now appears, in the light of the findings of the New Genetics and the Decade of the Brain, more perplexing even than it would have seemed to those sceptical nineteenth-century archaeologists. This is ‘the riddle of the ascent of man’.
The common understanding (#litres_trial_promo) of man’s evolutionary heritage begins with Charles Darwin’s On the Origin of Species of 1859, extended to incorporate ‘ourselves’ in The Descent of Man, published twelve years later, with its central claim that the near-infinite diversity of shape, form and attributes of living things all evolved from the first and simplest form of life, self-assembled from ‘all sorts of ammonia and phosphoric salts’ in ‘some warm little pond’ on the earth’s surface several billion years ago. The modern interpretation of Darwin’s theory is, briefly, as follows. The major determinants of what makes a fish a fish, or a bird a bird, are the instructions carried within the twenty thousand (plus or minus) genes formed by the sequence of just four chemicals (best imagined, as suggested, as four coloured discs, green, red, blue and yellow) strung out along the Double Helix within the nucleus of each and every cell. These genetic instructions are then passed on in the sperm and egg at the moment of conception to ensure that the offspring of fish will be fish and birds, birds. Those individual genes replicate themselves with astonishing accuracy every time the cell divides, but very occasionally a mistake, or ‘mutation’, may creep in: so a green disc (say) is substituted for a red one, thus subtly altering the genetic instructions. Most of the time this does not matter, or is detrimental, but very occasionally the ‘chance mutation’ in those genetic instructions may confer some biological advantage, maximising the carriers’ chances of survival in the struggle for existence. Their offspring in turn are likely to inherit their parent’s advantageous genetic variation, and as the process continues from generation to generation, the characteristics of species will be gradually transformed, step by step, in favour of those which are best suited (or ‘adapted’) to their environment. Thus fish are adapted to life underwater because over millions of generations ‘nature’ has ‘selected’ (hence ‘natural selection’) those whose random changes in their genes have maximised their swimming potential, while birds are good at flying because the same process has maximised their aerodynamic capabilities. Put another way, all of life has ‘descended with modification’ from that common ancestor.
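For readers who find an explicit illustration helpful, the logic of that mutation-and-selection cycle can be sketched in a few lines of code. The sketch below is purely illustrative and comes from no part of this book: the ‘target’ sequence, the mutation rate and the population size are arbitrary assumptions, standing in for the idea that copies which happen to be better ‘adapted’ leave more offspring.

import random

# A minimal, hypothetical sketch of the process described above: a 'gene' is a
# string of four 'coloured discs' (green, red, blue, yellow), copied each
# generation with occasional random mistakes, and copies closer to an arbitrary
# 'best adapted' sequence leave more offspring.
COLOURS = "GRBY"
TARGET = "GRBYGRBYGRBYGRBY"   # assumed 'best adapted' sequence, for illustration only
MUTATION_RATE = 0.01          # chance of any one disc being miscopied
POP_SIZE = 200

def mutate(gene):
    """Copy a gene, occasionally substituting one disc for another."""
    return "".join(
        random.choice(COLOURS) if random.random() < MUTATION_RATE else disc
        for disc in gene
    )

def fitness(gene):
    """Count how many positions match the 'best adapted' sequence."""
    return sum(a == b for a, b in zip(gene, TARGET))

# Start with a population of random sequences.
population = ["".join(random.choice(COLOURS) for _ in TARGET) for _ in range(POP_SIZE)]

for generation in range(200):
    # Fitter sequences are more likely to be chosen as parents...
    parents = random.choices(population, weights=[fitness(g) + 1 for g in population], k=POP_SIZE)
    # ...and their offspring inherit the parental gene, with occasional copying mistakes.
    population = [mutate(parent) for parent in parents]

best = max(population, key=fitness)
print(best, fitness(best), "of", len(TARGET))

Run over enough ‘generations’, the population drifts, step by step, towards the arbitrary target – which is all that the passage above claims for the mechanism; whether it can bear the explanatory weight placed upon it is precisely the question the chapters that follow pursue.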
And man, Darwin argued in The Descent of Man, is no exception. Indeed, there is probably no more persuasive evidence for his evolutionary theory than the striking physical similarities between man and his primate cousins, which point inexorably to their having ‘descended by modification’ from some common ape-like ancestor. Man has survived and prospered because nature, in ‘selecting’ the genetic mutations that would cause him to stand upright and acquire that much larger brain, conferred so considerable a biological advantage as to maximise his chances of survival. Certainly man’s much superior intellectual faculties might seem to set him apart. Nonetheless, our primate cousins, like ourselves, exhibit similar emotions of jealousy, suspicion and gratitude; they make choices, recall past events and are capable (to a degree) of reason. Hence the superiority of the human mind, Darwin argued, represents a continuum – it is a difference of ‘degree but not of kind’.
Details aside, one single, powerful image (#litres_trial_promo) captures this profoundly influential interpretation of man’s origins. Darwin’s close friend and advocate Thomas Huxley, in an illustration to his book Evidence as to Man’s Place in Nature (1863), placed the skeletons of chimpanzees, gorillas and man in sequence, transforming their striking physical similarities into a powerful narrative of the rise of man from knuckle-walking chimp to upstanding Homo sapiens. The same image, expanded with a series of ‘hominid’ intermediaries, and reproduced (often humorously) in numerous different guises, would become one of the most familiar, and certainly influential, icons of the twentieth century. The ‘Descent’ of man by modification from his ape-like ancestors, it implied, was in reality the story of his ‘Ascent’ to his pre-eminent position in the grand order of life. And so the major archaeological discoveries (#litres_trial_promo) of the last fifty years have shown it to be.
The discovery of the fossilised bones of Cromagnon man in 1868, together with those of his beetle-browed Neanderthal predecessors, would roll back man’s evolutionary history by 200,000 years or more. It was not, however, till the 1930s that the first evidence of the more distant stages would begin to emerge, and not till the 1970s, when the first of two near-complete fossilised skeletons was unearthed in the harsh landscape of east Africa, that Darwin’s hypothesised transition from that ape-like common ancestor to Homo sapiens would be vindicated.
Those two near-complete skeletons were (#litres_trial_promo), first, the three-and-a-half-million-year-old ‘Lucy’, or Australopithecus afarensis to give her her scientific name, whose discovery had such a powerful and emotional effect on Donald Johanson and Tom Gray (‘…we hugged each other, sweaty and smelly, howling and hugging in the heat-shimmering gravel’), as already described. One set of fossilised remains can look remarkably like another, but the vital clue, and the source of their exhilaration, lies in the sharp upward angle of the head of the femur, or upper thigh bone, that locates the centre of gravity of the human skeleton in the small area enclosed by two feet placed together – confirming that Lucy was the first of our most distant ancestors to stand upright.
Lucy’s novel method of locomotion (#litres_trial_promo) would be confirmed soon after with the discovery of an amazing series of three sets of footprints left behind in the volcanic ash of the Laetoli region of northern Tanzania three and a half million years ago, that provide a most moving insight into the human relationships of those distant ancestors.
‘The footprint records a normal positioning (#litres_trial_promo) of the left and right feet with human-like big toes,’ writes the Australian neurophysiologist John Eccles.
The special feature is that one individual followed the other, placing its feet accurately in the preceding footsteps. The third was smaller and walked closely to the left following the slightly wavy walk of the larger. We can interpret this as showing that two of the group [mother and child?] walked together holding hands, while another [sibling?] followed accurately placing its feet into the footsteps of the leader. We are given a privileged view of a family taking a walk on the newly formed volcanic ash 3.6 million years ago, just as we might do on soft sand left by the receding tide!
Ten years later, in 1984 (#litres_trial_promo), near the ancient Lake Turkana in Kenya, the famous palaeontologist Richard Leakey discovered a further near-complete skeleton, ‘Turkana Boy’, an adolescent of the species Homo erectus from around 1.6 million years ago, with a skull intermediate in size between Lucy and ourselves – reflecting that second unique evolutionary characteristic of the human species, his prodigiously enlarging brain.
Turkana Boy’s ‘people’, Homo erectus, were the first to make tools, reflecting that manual dexterity which is a further unique attribute of humans, made possible by the seemingly trivial evolutionary advantage of lengthening the thumb by an extra inch to make it ‘opposable’, allowing it to ‘speak to’ the other four digits.
‘In man the most precise function (#litres_trial_promo) that the hand is capable of is to place the tip of the thumb in opposition to the tip of the index finger, so they make maximum contact,’ writes the British anatomist John Napier. ‘In this position small objects can be manipulated with an unlimited potential for fine pressure adjustments or minute directional corrections. This is the hallmark of mankind. No non-human primate can replicate it.’
This extra inch of the human thumb (#litres_trial_promo) transforms the grasping power of the hand of our primate cousins into the vast repertoire of the gripping precision of the human hand that would, eventually, allow man to paint and sculpt and record, through the written word, his experiences, without which his history would have disappeared completely into the dark abyss of time.
We cannot know precisely how or why the enlarging brain of Homo erectus released the hand’s (till then, hidden) potential of both grasping and gripping, but we can see its consequences readily enough in his stone tools. Palaeontologists who have taught themselves the technique of stone knapping (as it is known) have discovered that the necessary skill lies in finding the right-shaped ‘core’ stone, which is then percussed by a hammer stone at precisely the right angle and with a controlled degree of force so as to produce fragments of the right size and sharp enough to cut open the skin of his prey.
There is something immensely moving about the diminutive Lucy, no more than five feet tall, and the strapping Turkana Boy. Their bones may be silent, but nonetheless they speak to us across those aeons of time. What, one wonders, when they looked upwards at that clear African sky at night, did they make of its thousands of shimmering stars and the drama of the waxing and waning of the moon?
It is impossible to exaggerate the importance of those skeletal remains to our understanding of man’s evolutionary heritage, confirming the linear sequence of our predecessors just as Darwin had postulated. Five million years ago the antecedents of Lucy and her tribe forsook the safety of the forest to walk upright on the savannah of east Africa. Two million years passed, and the ever-expanding brain of Homo erectus allowed for the incremental increase in intelligence necessary to fashion tools from stone, and to undertake those extraordinary migrations of tens of thousands of miles that would take him through what is now the Middle East into Asia, and as far as northern China and Indonesia. Then a further million and a half years elapsed before the emergence of Homo sapiens, who from his base in Africa would undertake a second great wave of global migration, this time as far as Australia, across the Bering Straits to America, and up into Europe and the southern Pyrenees, where Cromagnon man would found the first human civilisation.
Thanks to Donald Johanson, Richard Leakey and many others, we now possess the factual evidence for man’s evolutionary ascent, culminating in those images of deer and bison on the ceiling of that Altamira cave whose technical virtuosity so astonished the Marquis de Sautuola. There is no deep mystery about our origins. The overwhelmingly prevalent view, taught in virtually every school and university in the Western world, insists that Darwin’s evolutionary theory of natural selection explains us and our ancestors. ‘Our own existence (#litres_trial_promo) [that] once presented the greatest of all mysteries is a mystery no longer. Darwin solved it,’ observes the evolutionary biologist Richard Dawkins. ‘We no longer have to resort to superstition … it is absolutely safe to say that anyone who claims not to believe in evolution is ignorant, stupid or insane.’ And how could it not be so? What other conceivable explanation might there be? There is none. That, one would think, should be the end of the matter.
And yet, the more that time has passed since those definitive discoveries of Lucy and Turkana Boy, the more perplexing that evolutionary trajectory seems to be. The Ascent of Man captured in Thomas Huxley’s famous image is no longer a theoretical idea: it is concretely realised in Lucy’s sharp-angled femur and Turkana Boy’s much larger skull; but the more one reflects on what is involved in standing upright or acquiring a larger brain, the less convincing Darwin’s proposed mechanism of natural selection appears to be. Further, the suddenness of the cultural explosion that signalled the arrival of Cromagnon man argues against a progressive, gradualist evolutionary transformation. It suggests rather some dramatic event – as if a switch were thrown, the curtain rose, and there was man at the centre of the stage of world events. The findings of the New Genetics and the Decade of the Brain make it much more difficult to set such doubts aside. The trivial genetic differences that separate our primate cousins from ourselves seem quite insufficient to account for those physical differences that set us apart. Similarly, the elusive workings of the human brain would seem to defy any simple evolutionary explanation.

THE RIDDLE OF ‘THE ASCENT’
Part 1: Setting Out
There are at least half a dozen speculative evolutionary reasons why Lucy and her kind might have wished to stand upright, and for the advantages of doing so: the better to see potential predators, to carry her dependent offspring, or, as Darwin himself supposed, ‘standing on two legs would permit an ape-like predecessor to brace itself by holding on to a branch with one arm as he grabbed the fruit with another’. But even the most schematic anatomical comparison (#litres_trial_promo) with our primate cousins reveals the prodigious difficulties of this novel form of locomotion. The knuckle-walking chimp has four powerful, pillar-like limbs, providing a large rectangular basis of support, with the centre of gravity solidly located in the middle of the torso. For Lucy, the centre of gravity shifts to the small area enclosed by her two feet, with the bulk of her weight (her head and torso) in the upper part of the body, exacerbating her tendency to topple over. While the chimpanzee might be compared to a solid, four-legged table, Lucy’s upright frame, like an unsupported pole balancing a heavy ball (her head), must constantly defy the laws of gravity.
So how did she come to stand upright? (#litres_trial_promo) The main anatomical transformations pivot around the pelvis, where the powerful gluteus maximus of our buttocks, a minor player in our primate cousins, pulls the human form upright like a drawbridge. This novel stance must then be held in place by a redesign of several other muscles that fulfil a propulsive function in our primate cousins, but must now act as stabilisers of the human skeleton. For that to happen, the bony pelvis to which they are all attached must itself undergo a major redesign, being first pulled up and back, then shortened and widened.
This redesign of the pelvis and its stabilising muscles entails a further series of knock-on effects: the skull must now be repositioned directly over the erect human frame, while the vertebrae of the spinal column must become progressively wider as they descend, to sustain the weight pressing upon them. The head of the femur (as noted) must be angled inwards, the ligaments of the knee strengthened to ‘lock’ into position, while the foot, particularly the big toe (#litres_trial_promo), must undergo a dozen anatomical changes to provide a firm basis of support. The arms, which no longer need to swing through branches, are shortened, while the legs, now proportionally too short relative to the body, must be lengthened – but by how much? There is, it would seem, an ‘ideal’ length (#litres_trial_promo) that creates a pendulum-type movement of the legs, like the pendulum of a clock, where walking becomes almost automatic, the combination of the force of gravity and inertia carrying the body forward with hardly any intentional muscular effort. ‘The human frame is built for walking,’ observes the biomechanist Tad McGeer. ‘It has both the right kinematics and the right dynamics – so much so in fact that our legs are capable of walking without any motor control.’
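The pendulum point can be given a rough, back-of-envelope illustration – the figures are assumed for illustration only, with the leg idealised as a uniform rod of length L swinging freely from the hip:

\[
T = 2\pi\sqrt{\frac{2L}{3g}} \approx 2\pi\sqrt{\frac{2 \times 0.9\,\mathrm{m}}{3 \times 9.8\,\mathrm{m/s^2}}} \approx 1.6\ \mathrm{seconds}
\]

One step corresponds to half a swing, so a leg of roughly this length ‘prefers’ a step about every 0.8 seconds – in the neighbourhood of an unhurried walking rhythm – with gravity and inertia, rather than deliberate muscular effort, supplying most of the motion. It is this ‘passive dynamic’ that McGeer’s unpowered walking machines exploit.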
And that shortening of the arms (#litres_trial_promo) and lengthening of the legs for walking gives the symmetry and harmony that reflect the hidden laws of geometric proportion captured by Leonardo da Vinci’s famous ‘Vitruvian’ image of man, his span matching his height, encompassed within the two most elemental of shapes – the circle and the square. ‘It is impossible to exaggerate what this simple-looking proposition meant to Renaissance man,’ the art historian Kenneth Clark observed. ‘It was the foundation of their whole philosophy,’ where man was ‘the measure of all things’.
These, however, are merely the obvious anatomical changes, as Lucy would still be quite unable to stand upright without both a rewiring of her nervous system to cope with the flood of feedback information from the millions of sensors monitoring the relative position of the bones and muscles, and a more sophisticated circulatory system capable of pumping blood upwards, against gravity, to the now elevated brain. Thus, those obvious similarities that so impressed Darwin and Huxley conceal a myriad of hidden but necessary modifications, because there can be no change in one structure without influencing many others. The hundreds of different bones, muscles and joints (#litres_trial_promo) are ‘inseparably associated and moulded with each other’, the distinguished twentieth-century biologist D’Arcy Wentworth Thompson would observe. ‘They are only separate entities in this limited sense that they are part of a whole – that can no longer exist when it loses its composite integrity.’
Lucy was already an experienced ‘bipedal walker’, as shown by those emotive footprints in the volcanic ash – so we must presuppose, if Darwin’s evolutionary theory is correct, numerous preceding species of hominids marking out those anatomical changes from the rock-like stability of the knuckle-walking primates to her own upstanding, pole-like form. We can only speculate how those changes occurred. The strengthening of the gluteus muscle was essential to ‘raise the drawbridge’ – but it would have been quite unable to do so without the simultaneous redesign of the bones of the pelvis and upper thigh, the ligaments to lock the knees, the adaptation of the foot to standing upright, and so on. Thus the biological advantage of ‘freeing the hands’ would be more than offset by the profound instability of any transitional species which, without this full house of anatomical changes, would have had a stuttering, shuffling gait – vulnerable prey to any hungry carnivore it encountered when tottering across the savannah. Put another way, the necessity for these many anatomical changes confirms what one would expect: the upright stance is staggeringly difficult to pull off, which is presumably why no other species has attempted it. Standing upright is, on reflection (#litres_trial_promo), a rather bizarre thing to do, and would seem to require a sudden and dramatic wholesale ‘redesign’ that is clearly incompatible with Darwin’s proposed mechanism of a gradualist evolutionary transformation. Lucy’s pivotal role in man’s evolutionary ascent as the beginning or anchor of that upward trajectory would seem highly ambiguous.
These difficulties seem less acute (#litres_trial_promo) when we turn to that second evolutionary innovation, represented by Turkana Boy’s larger brain, which would at least have conferred the obvious advantage of greater intelligence – and progressively so, each incremental increase in brain size and intelligence furthering his chances of survival. Except, there is no direct evidence for the benefits of that greater intelligence, other than those stone tools which, for all their technical ingenuity, remained virtually unchanged for two million years. The human brain started to increase in size, and continued to do so over a period of several million years, with precious little to show for it until right at the end, with that extraordinary intellectual leap of the Cromagnons’ ‘cultural explosion’. Why, one might reasonably ask, should man’s evolutionary progress equip him with powers he would not realise for so long?
This is no mere rhetorical question, for man, during his Ascent, paid a heavy price for that expanding brain, which together with standing upright would massively increase the risk of obstetric catastrophe in childbirth – a fact that could scarcely be more biologically disadvantageous to the survival of his species. A simple diagram of the foetal head passing down through the bony pelvis explains all.
The main effect of the reorientation of Lucy’s pelvis (#litres_trial_promo) to permit her to stand upright is to transform a straight and shallow ring of bone into a deep, curved tube. First we note the situation for our primate cousins, the chimps, where there is a generous margin around the foetal head. Next we see the much tighter fit of Lucy’s pelvis, which becomes a potentially lethal crush for both mother and infant. And further, the foetal head must now overcome the greater resistance of the mother’s much more powerful pelvic muscles, strengthened to retain her internal organs within the abdomen against the downward force of gravity. A million years on, and the ‘bigger brain’ of Turkana Boy further compounds these difficulties, so now it requires massive protracted (and very painful) contractions of the muscles of the uterus (ten times more powerful than in other mammals) to force the foetus down that ‘deep, curved tube’, causing potential damage to the pelvic muscles, bowel and bladder. The predictable consequence of all this is that while the chimpanzee can give birth on her own, almost without breaking stride, humans right from the beginning would have required the assistance of others to support them in this most traumatic of all human experiences – with a mortality rate of 100 per cent for both mother and child in the not unusual circumstance of obstructed labour.
And that is only the beginning, for the human brain continued to increase in size, which would have created an insurmountable obstacle to reproduction were it not for the extraordinary evolutionary ‘solution’ of slowing the growth rate of the foetal brain within the womb, and accelerating it afterwards. From Turkana Boy onwards, the human newborn, with its now relatively immature brain at birth, is completely helpless, and it will take a further year and a half before it starts to acquire the sort of motor skills that permit the newborn infant chimp to hang on to its mother’s back. And that dependency in turn would require that further unique feature of humanity, the long-term pair bond between mother and father, hinted at in those footsteps in the volcanic ash, to share the responsibility for carrying, caring for and feeding their dependent offspring.

It can be difficult to appreciate the devastating implications of these reflections. The Ascent of Man from knuckle-walking chimp to upright human seems so logical and progressive as to be almost self-evident, yet it conceals events that are without precedent in the whole of biology. The only consolation would be that man must have evolved somehow, but then the hope of understanding how would seem to evaporate with the revelation of the near-equivalence of the human and chimp genomes. There is nothing to suggest the major genetic mutations one would expect to account for the upright stance or that massively enlarged brain – leading the head of the chimp genome project to concede, as already cited, somewhat limply: ‘Part of the secret is hidden in there, we don’t know what it is yet.’ Or as a fellow researcher put it, rather more bluntly: ‘You could write everything we know about the genetic differences in a one-sentence article.’ The reports in 2006 of a family (#litres_trial_promo) in northern Turkey with a bizarre genetic defect that caused them to walk on all fours suggested, according to Professor Uner Tan of Cukurova University, the breakthrough of ‘a live model for human evolution’. Perhaps, but then perhaps not: the anatomy of the family’s bones and muscles was otherwise entirely human – relatively short arms, long legs – so their ungainly quadrupedal locomotion only served to emphasise the ‘full house’ of anatomical transformations necessary for the upright stance.
So, while the equivalence (#litres_trial_promo) of the human and chimp genomes provides the most tantalising evidence for our close relatedness, it offers not the slightest hint of how that evolutionary transformation came about – but rather appears to cut us off from our immediate antecedents entirely. The archaeological discoveries of the last fifty years have, along with Lucy and Turkana Boy, identified an estimated twenty or more antecedent species, and while it is obviously tempting to place them in a linear sequence, where Lucy begat Turkana Boy begat Neanderthal man begat Homo sapiens, that scenario no longer holds. Instead we are left with a bush of many branches – without there being a central trunk linking them all together.
‘Over the past five million years (#litres_trial_promo) new hominid species have regularly emerged, competed, co-existed, colonised the environment and succeeded or failed,’ writes palaeontologist Ian Tattersall. ‘We have only the dimmest of perceptions of how this dramatic history of innovation and interaction unfolded, but it is evident that our species is simply one more of its many terminal twigs.’
The methods of the New Genetics have confirmed (#litres_trial_promo) that all the human races – Negroes, Caucasians, Asians and so on – are virtually genetically identical, thus all descendants of an entirely novel species, Homo sapiens, ourselves, who emerged, it is presumed, in east or southern Africa around 120,000 BC before spreading out to colonise the world. But that leaves the ‘terminal twig’ of ourselves suspended in limbo, with no obvious attachment to those earlier branches of that evolutionary bush. The account of ourselves, which until recently seemed so clear, now seems permeated with a sense of the deeply inexplicable – whose implications we will return to after considering the second aspect of the riddle of that evolutionary trajectory: the Cromagnons with their ‘passion for art’.

THE RIDDLE OF THE ASCENT
Part 2: The Cultural Explosion and the Origins of Language
‘Homo sapiens is not simply (#litres_trial_promo) an improved version of his ancestors, but a new concept, qualitatively distinct from them … A totally unprecedented entity has appeared on the earth. All the major intellectual attributes of modern man are tied up in some way with language.’
Ian Tattersall, Curator, American Museum of Natural History
The most striking feature of the arrival of modern man is its suddenness and completeness, epitomised most obviously by the beauty and originality of those artefacts he left behind: the ‘pride of lions’ portrayed in perspective on the walls of the Chauvet cave; the beads and jewellery for self-adornment in this and the ‘next’ world; drums fashioned from mammoths’ bones to celebrate, with singing and dancing, the wonders of the natural world; oil lamps, harpoons, spear-throwers. All the features in short – artistic, technical, economic and religious – that can be found in contemporary society.
The precipitating factor in (#litres_trial_promo) that cultural explosion must, by common consent, be tied up in some way with language. The Cromagnons had a ‘passion for art’, so an obvious starting point in searching for the qualitative difference that language might make, and which would distinguish them from their antecedents, is to ask what a painting or a sculpture of, say, a bison, is. It clearly is not a bison, nor the reflection of a bison, nor the imaginative figment of a bison – as in a dream. It is not a sculpture of a specific object, but rather a generalised image of a class of objects: it stands for, is symbolic of, bison in general. It is the idea of a bison. This ability of Cromagnon man to conceptualise things and feelings as ideas, and to express those ideas as words, introduces an entirely new dimension into the universe.
First, language – and it is a most extraordinary thing – allows us to ‘think’, by assigning words to objects and ideas. Then it becomes possible to express a logical idea by applying grammatical rules to the arrangement of those words, and linking them together sequentially in a sentence. And more, the faculty of language allows us to take those thoughts ‘brought into existence’ by language and insert them with complete precision into the minds of others for them to share, or to disagree with. Language makes the world intelligible, by allowing man to transmit his thoughts and experiences in the form of accumulated knowledge from generation to generation – leading, perhaps inevitably, to the moment at the close of the twentieth century when he would ‘hold in his mind’s eye’ the history of the universe he inhabits. Language makes it possible (#litres_trial_promo) to distinguish truth, the faithful reflection of reality, from falsehood, and this, as the philosopher Richard Swinburne points out, is the foundation of reason (obviously), but also of morality, for ‘it gives man the capacity to contrast the worth of one action to that of another, to choose what he believes worthwhile … and that gives us a conception of the goodness of things’. Thus humans, like all living things, are biological beings constrained by nature’s laws; nonetheless language liberates our mind from the confines of our material brain, allowing us to transcend time and space to explore the non-material world of thought, reason and emotion. So, ‘All the major intellectual attributes of modern man are tied up in some way with language,’ as Ian Tattersall argues. Where then did language come from?
The prevailing view, till recently, held that this remarkable faculty required no specific explanation, and could be readily accommodated within the standard evolutionary rubric of the transformation of the simple to the complex. Language is explained (or ‘explained away’) as an evolved form of communication, no different in principle from the grunts or calls of other species. ‘I cannot doubt,’ observed Darwin in The Descent of Man, ‘that language owes its origin to the imitation and modification of various natural sounds, the voice of other animals and man’s own instinctive cries…’ So too contemporary evolutionary texts portray human language as an improved method of communication over that of our primate cousins, while emphasising the similarities in the larynx and vocal cords (which, however, are not so similar as they appear) as evidence for language’s evolutionary origin. ‘Language evolved to enable humans (#litres_trial_promo) to exchange information,’ observes Robin Dunbar of the University of Liverpool.
In the 1950s the famous linguist Noam Chomsky (#litres_trial_promo) challenged this interpretation of language as a more sophisticated form of primate communication by drawing attention to the significance of the remarkable alacrity with which children learn to speak. Language flows so readily – a ‘babbling’ stream of feelings, thoughts and opinions filling every nook and cranny of our lives – that it is easy to assume it must be simple, simple enough for children to ‘pick up’ as readily as they pick up measles. Prior to Chomsky, the standard view held that children learned to speak in the same way as blotting paper absorbs ink, by soaking up the words they heard and then reiterating them. Chomsky argued this could not be so, pointing out that the skill with which very young children learn to speak lies far beyond the intellectual competency of their years, for while they must struggle to grasp the elementary principles of mathematics, they acquire language with astonishing ease. An infant, starting from a situation not dissimilar to that of an adult in a room of foreigners all jabbering away incomprehensibly, nonetheless:
‘Within a short span of time (#litres_trial_promo) and with almost no direct instruction will have dissected the language into its minimal separable units of sound and meaning,’ writes linguist Breyne Moskowitz. ‘He will have discovered the rules of recombining sounds into words and recombining the words into meaningful sentences. Ten linguists working full-time for a decade analysing the structure of the English language could not programme a computer with a five-year-old’s ability for language.’
The aptitude of the young mind in mastering the staggering complexity of language presupposed, Chomsky argued, that humans must possess some form of highly specific ‘Language Acquisition Device’ hardwired into their brains that somehow ‘knows’ the meaning of words and the grammatical forms necessary to make sense of them. How, otherwise, can an infant know when its mother says, ‘Look at the cat!’ that she is referring to the furry four-legged creature, and not to its whiskers, or the milk it is drinking? Further, the ‘device’ must not just know what is being referred to, but also the grammatical rules that permit the same ‘idea’, expressed in different ways, to have the same meaning (‘John saw Mary’ conveys the same message as ‘Mary was seen by John’), while excluding meaningless variations. Further again, it transpires that children learn language in the same way, whether brought up in New Jersey or New Guinea, and acquire the same grammatical rules of ‘present’, ‘past’ and ‘future’ in the same sequence. This implies that the ‘device’ in turn must be sensitive to a Universal Grammar, common to all languages, which can pick up on the subtlest distinction of meaning.
Now, our primate cousins do not possess (#litres_trial_promo) this ‘device’, which is why, clever as they are, they remain (in the words of the veteran chimpanzee-watcher Jane Goodall) ‘trapped within themselves’. By contrast, every human society, no matter how ‘primitive’, has a language capable of ‘expressing abstract concepts and complex trains of reasoning’. The million Stone Age inhabitants of the highlands of New Guinea, ‘discovered’ in 1930 after being cut off from the rest of the world for several thousand years, spoke between them eight hundred different languages, each with its complex rules of syntax and grammar.
How then did the faculty of language come (#litres_trial_promo) to colonise the human brain? ‘There must have been a series of steps leading from no language at all to language as we now find it,’ writes the linguist Steven Pinker, ‘each step small enough to have been produced by random mutation [of genes] and with each intermediate grammar being useful to its possessor.’ It is, of course, possible to imagine how language might have evolved in this way from a simpler form of communication or ‘protolanguage’, starting perhaps with gestures, moving on to simple words or phrases with a single meaning, with the rules for linking words into sentences coming later. Pinker’s intended parallel between the means by which our species acquired language and the infant’s rapid progress from burbling through words to sentences might seem plausible, in the way of all evolutionary explanations, and would indeed be reasonable if language simply ‘facilitated the exchange of information’. But, as Chomsky pointed out so persuasively, language is also an autonomous, independent set of rules and meanings that impose order, make sense of the world ‘out there’. Rules and meanings cannot evolve from the simple to the complex, they just ‘are’. The structure of sentences is either meaningful or meaningless. The naming of an object is either ‘right’ or ‘wrong’. An elephant is an elephant, and not an anteater. Hence Chomsky insisted, against Pinker, that those seeking a scientific explanation for language could, if they so wished, describe it as having evolved ‘so long as they realise that there is no substance for this assertion, that it amounts to nothing more than a belief.’ This, of course, is no trivial controversy, for language is so intimately caught up in every aspect of ‘being human’ that to concede that it falls outside the conventional rubric of evolutionary explanation would be to concede that so does man.
The dispute over the evolutionary (or otherwise) (#litres_trial_promo) origin of language remained irresoluble till the late 1980s, when the first PET scans revealed how the simplest of linguistic tasks involves vast tracts of the brain in a way that defies any simple scientific explanation. Here every mode of language – thinking about words, reading them, speaking them – is represented in different parts of the brain. The prosaic task of associating the word ‘chair’ with ‘sit’ generates a blizzard of electrical activity across the whole surface of the brain. Further, those scanning investigations revealed how, in the fraction of a second that it takes to speak or hear a word, the brain fragments it into its constituent parts through four distinct modules concerned with spelling, sound (phonology), meaning (semantics) and articulation. These ‘modules’ are in turn further subdivided, virtually ad infinitum. The perception of sound, for example discriminating between the consonants ‘P’ and ‘B’, is represented in twenty-two sites scattered throughout the brain. There is something absolutely awe-inspiring in supposing we understand a word like ‘elephant’ only after it has been parsed in this way. And then, to compound it all, while ‘parsing’ elephant the brain must simultaneously comprehend its meaning in its entirety, for the constituent symbols can really only be understood within the context of the whole word.
It is one thing to try to work out how the brain processes a single word (and that is baffling enough), quite another to extrapolate from such findings to try to imagine the same processes as they apply to a sentence, with its structure of ‘subject-verb-object’ and numerous subsidiary clauses. Move into the real world, with its ceaseless conversation, and the problem becomes insuperable. What sort of brain processes, one might ask, must be involved when a group of football fans convening in the pub before a match discuss their team’s prospects for the coming season – drawing on a vast storehouse of knowledge and judgement of the form of previous seasons, the strengths and weaknesses of their players, and assessments of the performance of their rivals. How do they pluck from the storehouse of their memories the right words, or conjure from the rules of syntax and grammar the correct sequence with which emphatically to argue their opinion? How does the electrical firing of the neurons in their brains represent words and capture the nuance of their meanings?
And so? ‘Language flows so readily that it is easy to assume it must be simple.’ But language only appears simple because it has to be so. There would, after all, be little point in humans acquiring this novel and powerful mode of inserting their thoughts directly into the minds of others if it took many years to get the hang of, and was difficult to use. But that apparent simplicity is, as already noted, a mark of language’s profundity, concealing the inscrutable complexities of brain function that lie behind it.
The major legacy of linguistics and neuroscience in the past few decades has been to reveal the complexities concealed behind that apparent simplicity while drawing attention at the same time to how the faculty of language requires major changes in every aspect of the functioning of the brain: a massive increase in its memory capacity so as to be able to store that vocabulary of forty thousand words, together with the provision for their near-instant recall; a profound deepening of the mind’s emotional repertoire with its feelings of sympathy and affection; the powers of reason; the moral distinction between right and wrong; and the imaginative intelligence with which poets and writers express themselves in unique ways.
The opportunity to reflect further on such matters will come later, but for the moment we must briefly return to contrast the conventional evolutionary portrayal of the origins of that ‘totally unprecedented entity’ Cromagnon man with how, in the light of the above, those origins now appear. To be sure, that steadily expanding brain over the preceding several million years, with its much enhanced neuronal firepower, predisposed man to those higher intellectual attributes, particularly language, and thus to that cultural explosion of technical innovation and artistic expression. But that much-expanded brain by itself does not explain the phenomenon of language, nor why the evidence for its undoubted ‘benefit’ of being able to think, act and make sense of the world should have emerged so late and so suddenly. Why did the brain continue to expand in size for those millions of years when the ‘pay-off’ was so slight, and the attendant hazards of obstructed labour and dependent offspring so large? And this conundrum becomes yet more puzzling now we know that language is not just some bolt-on addition to the primate brain, but occupies large areas of it, and required the massive extension of those other attributes of mind, such as memory and intelligence, on which it depends.
Here neither of the two proposed evolutionary scenarios (#litres_trial_promo) – that language evolved ‘early’ or ‘late’ – is convincing. The proponents of the ‘early’ scenario infer (quite rightly) that it must have taken millions of years for so complex a system to have evolved – all the way back to Turkana Boy’s people, Homo erectus, and beyond. Why then, one might ask, did he exhibit so little evidence of the ‘culture’ that language makes possible? The ‘late’ theorists claim language to be unique to Homo sapiens, the spark that lit the cultural explosion that separates him from his nearest relatives – but that would presuppose that it evolved over the mere 100,000 years since his emergence from Africa. This dispute cannot be resolved, but it serves the useful purpose of drawing attention to our profound ignorance: we no longer have the vaguest inkling of what caused the ‘switch to be thrown’ to inaugurate that first and most astonishing of all civilisations. Thirty-five thousand years on, we humans can draw on a vast treasure house of the cumulative knowledge and technology of the many civilisations that have had their moment in the sun, the Egyptians, Greeks, Romans, Arabs and so on. The genius of the Cromagnons, with their passion for art and wittily decorated spear-throwers, is that they had to work it all out for themselves.
This, then, is the riddle of the Ascent of Man: how and why twenty or more distinct species of hominid should, over a period of several million years, have undergone that wholesale anatomical transformation required for standing upright, and then followed it up by acquiring that prodigiously sized brain whose potential to comprehend the workings of the universe appears so disproportionate to the needs of the life of a hunter-gatherer. It seems obvious that man’s sophistication and intelligence would have conferred some biological advantage, but all living things – birds, bats, dolphins and so on – have their own highly specialised sort of intelligence, different from our own, but which nonetheless maximises their chances of survival. The question, rather, as the biologist Robert Wesson puts it (#litres_trial_promo), is why the human brain should come with those striking mental powers, such as the capacity to compose symphonies or to prove abstruse mathematical theorems, that ‘are not of the kind likely to be rewarded by numbers of descendants’.
A further, related riddle (#litres_trial_promo) is why, for the best part of 150 years, the scientific orthodoxy has prevailed that we know the answer, at least in principle, to that riddle of the Ascent, when, as the palaeontologist Ian Tattersall acknowledges, ‘we have only the dimmest perception of how that dramatic history unfolded’. It has taken just a few pages to draw out the contradictions, at every turn, in the prevailing scientific certainty of ‘natural selection’ as the driving force of the Ascent of Man. There is, of course, no more self-evident truism than that nature ‘selects’ the strong and the fit at the expense of the weak and less than perfect. But that mechanism, by the same logic, can scarcely be invoked to account for standing upright and that massively enlarged brain which, by rights, should have so gravely compromised the survival prospects of those distant ancestors. There is nothing obscure in the observations outlined above: the anatomical implications of the upright stance and the obstetric hazards of that enlarging brain are well documented. Yet there is not the slightest hint in standard evolutionary texts or in the graphic museum displays of the Ascent that they might be problematic – while those who might think so are derisively dismissed, as we have seen, as ‘ignorant, stupid or insane’.
Most people get by well enough without the slightest inclination to speculate about their origins – and if they do, there is much consolation in that reassuring image first captured by Thomas Huxley of our onward and upward ascent. Still, it is surprising how that history of our origins becomes instantly so much more fascinating and intriguing the moment one reflects, for example, on the marvels of the composite integrity of the human skeleton, or the hidden complexities of grammar that can nonetheless be grasped by a two-year-old child. This discrepancy between the beguiling simplicities of the evolutionary theory and the profundity of the biological phenomena it seeks to explain is very striking. Its claims can never be ‘put to the test’ of experimental verification, as there is no way of telling one way or the other whether the process of natural selection really does account for those extraordinary biological events millions of years ago. The standard evolutionary explanation is, in short, irrefutable – or was irrefutable, until the uncompromising verdict of the genome projects, where the random genetic mutations that might set us apart from our primate cousins, mice, flies or worms are nowhere to be found.
It can, admittedly, be very difficult to see what all this might add up to, but clearly the ramifications of those seemingly ‘disappointing’ outcomes of the New Genetics and the Decade of the Brain run very deep indeed. We need to know why we have been seduced into supposing that science knows so much more than is clearly the case – and that means exploring further that seemingly unbridgeable gap between those two ‘Orders of Reality’ to seek out the forces that might conjure the beauty and complexity of the natural world from the monotony of those chemical genes, and the richness of human thought and imagination from the electrical activity of the brain.
But for that we need a yet broader, more Olympian perspective still, to take the full measure of the scope (and limits) of scientific knowledge as so recently revealed, and of how, paradoxically, those ‘disappointing’ outcomes turn out to reveal profound truths about the nature of genetic inheritance and the human mind, so long concealed from view.
There is no better way to start than through that most fruitful insight into the nature of things that comes with the experience of ‘wonder’, whose dual meaning those Cromagnons would instinctively have appreciated. They would have ‘wondered at’ the pervasive beauty and integrity of the natural world, inferring there was a greater significance to their existence than they could know. They would have responded, too, to the human imperative to ‘wonder why’, seeking out in the regularity of the movement of the stars and the diversity of form of living things those causes, patterns and explanations of the natural world that are ‘the beginning of all knowledge’.

3 The Limits of Science 1: The Quixotic Universe (#ulink_8eadcbaf-ba00-5b3f-9e72-0f6b4d717dde)
‘The world will never starve for want of wonders, but only for want of wonder.’
G.K. Chesterton
The world is so full of wonder (#litres_trial_promo), it is a wonder we do not see it to be more so. Every dawn the ‘undeviating and punctual’ sun rises on the horizon to flood our lives with the light and warmth that drive the great cycle of organic life – thirty million times more natural energy in a single second than that generated by manmade power stations in a whole year. And punctually at dusk, its setting brings the day to a close with a triumphant explosion of purple, red and orange streaked across the sky. ‘Of all the gifts bestowed upon us,’ wrote the Victorian art critic John Ruskin, ‘colour is the holiest, the most divine, the most solemn.’ Those limitless nuances of colour and light that suffuse our daily lives mark too the procession of the seasons, a constant reminder of the profound mystery of self-renewing life.
And there is nothing so full of wonder as life itself, the more so now we know that the vital actions of even the humblest bacterium, smaller by far than the full stop at the end of this sentence, involve the concerted action of thousands of separate chemical reactions, by which it transforms the nutrients absorbed from soil and water into the energy and raw materials with which it grows and reproduces itself. But life there is, and marching down through the ages in such an abundance of diversity of shape, form, attributes and propensities as to encompass the full range and more of what might be possible. And what variety! ‘No one can say just how many species there are in these greenhouse-humid jungles,’ writes naturalist and broadcaster David Attenborough of the forests of South America.
There are over forty different species of parrot, over seventy different [species of] monkeys, three hundred [species of] humming birds and tens of thousands of [species of] butterflies. If you are not careful, you can even be bitten by a hundred different kinds of mosquito … Spend a day in the forest, turning over logs, looking beneath bark, sifting through the moist litter of leaves and you will collect hundreds of different kinds of small creatures: moths, caterpillars, spiders, long-nosed bugs, luminous beetles, harmless butterflies disguised as wasps, wasps shaped like ants, sticks that walk, leaves that open wings and fly … One of these creatures at least will almost certainly be undescribed by science.
And the millions of species with which we share the planet themselves represent a mere 1 per cent of those that have ever been, each form of life the opportunity for a further myriad of subtly different variations on a theme. Why should the extraordinary faces of the bat family, whose near-blindness should make them indifferent to physical appearances, nonetheless exhaust the possibilities of the design in the detailed geometry of their faces? Why should the many thousands of species of birds yet be so readily distinguishable one from the other by their pattern of flight or the shape of their wing, the colour of their plumage or the notes of their song?
But birds, as the American naturalist Frank Chapman once observed, are ‘nature’s highest expression of beauty, joy and truth’, whose annual migration exemplifies that further recurring mystery of the biological world, those idiosyncrasies of habits and behaviour that defy all reason – like the Arctic tern, which every year traverses the globe, setting out from its nesting grounds in northern Canada and Siberia, winging its way down the coasts of Europe and Africa to the shores of the Antarctic, only to turn round and return northwards again: a round journey of twenty-five thousand miles that takes it eight months, flying twenty-four hours a day. How swiftly these birds fly, how confidently across the pathless sea at night!
And while we might rightly wonder how the Arctic tern knows how to navigate by the stars, it seems almost more wonderful still that the salmon should find its way from the depths of the ocean back to the same small stream from whence it set out, detecting through its highly developed sense of smell the waters of its spawning ground; or that the common European eel should cross the Atlantic twice, first from its breeding grounds off the North American coast to the rivers of Europe – and then back again. ‘The number of [such] admirable, more or less inexplicable traits that one might cite is limited not by the inventiveness of nature,’ writes biologist Robert Wesson, ‘but rather by the ability of scientists to describe them.’ There are, he points out, an estimated twenty thousand species of ant, of which only eight thousand have been described. So far biologists have got round to studying just one hundred of them in depth, each of which has its own unique, bizarre pattern of behaviour – such as ‘the female of a parasitic ant which, on finding a colony of its host, seizes a worker, rubs it with brushes on her legs to transfer its scent, making her acceptable to enter the host colony’. How did there come to be such sophisticated and purposive patterns of behaviour in such minute creatures?
And yet that near-infinite diversity of life is permeated by an underlying unity, where everything connects in the same web of self-renewing life. The rain falling on the mountains feeds the springs that fill the streams. Those streams become rivers and flow to the sea, the mists rise from the deep and clouds are formed, which break again as rain on the mountainside. The plants on that mountainside capture the rainwater and, warmed by the energy of the sun, transform the nutrients of the soil, by some extraordinary alchemy, into themselves. A grazing animal eats that same plant to set up another complex web of connections, for it in turn is eaten by another, and its remains will return to the earth, where the microbes in the soil cannibalise its bones, turning them back into their constituent chemicals. And so the process of reincarnation continues. Nothing is lost, but nothing stays the same.
Wheels within wheels; and across that vast landscape of living things, from the highest to the lowest, the survival and prosperity of man is yet, as J. Arthur Thomson, Professor of Natural History at Aberdeen University, reminds us, completely dependent on the labours of the humble earthworm, without whose exertions in aerating the dense, inhospitable soil there could never have been a single field of corn.
When we pause to think of the part earthworms have played in the history of the earth, they are clearly the most useful of animals. By their burrowing, they loosen the earth, making way for the plant rootlets and the raindrops; by bruising the soil in their gizzards they reduce the mineral particles to more useful forms; they were ploughers before the plough. Five hundred thousand to an acre passing ten tons of soil every year through their bodies.
So, the world ‘will never starve for want of wonders’, the more so for knowing and wondering how the sky above and the earth below and ‘all that dwell therein’ – including the human mind, with its powers of reason and imagination – originated as a mass of formless atoms in that ‘moment of singularity’ of the Big Bang fifteen billion years ago.
The poet William Wordsworth, seeking to catch the enfolding delight of that sky above and earth below, called it ‘the sublime’,
Whose dwelling is the light of setting suns,
And the round ocean and the living air,
And the blue sky,
A spirit that impels and rolls through all things.
The feelings evoked by nature and ‘the sublime’ were, for the American poet Walt Whitman as for so many poets and writers, the most powerful evidence for a hidden, mystical core to everyday reality.
‘There is, apart from mere intellect,’ he wrote, ‘a wondrous something that realises without argument an intuition of the absolute balance, in time and space, of the whole of this multifariousness we call the world; a sight of that unseen thread which holds all history and time, and all events like a leashed dog in the hand of the hunter.’
That sublime nature has always provided the most powerful impetus to the religious view, its celebration a central feature of all the great religions. For the German theologian Rudolph Otto (1869–1937), the ‘sublime’ was a ‘mysterium tremendum et fascinans’: both awesome, in whose presence we feel something much greater than our insignificant selves, and also fascinating, compelling the human mind to investigate its fundamental laws.
This brings us to the second of the dual meanings of ‘wonder’ suggested at the close of the preceding chapter, to ‘wonder why’, which, as the Greek philosopher Plato observed, ‘is the beginning of all knowledge’.
‘The scientist does not study nature because it is useful to do so,’ wrote the nineteenth-century French mathematician Henri Poincaré. ‘He studies it because he takes pleasure in it; and he takes pleasure in it because it is beautiful. If nature were not beautiful, it would not be worth knowing, and life would not be worth living … I mean the intimate beauty which comes from the harmonious order of its parts and which a pure intelligence can grasp.’
The greatest (probably) of all scientists, Isaac Newton, seeking to comprehend that ‘harmonious order of parts’, would discover the fundamental laws of gravity and motion, that, being Universal (they hold throughout the universe), Absolute (unchallengeable), Eternal (holding for all time) and Omnipotent (all-powerful), he inferred, offered a glimpse into the mind of the Creator. Newton captured this dual meaning of wonder, to ‘wonder at’ and to ‘wonder why’, in his famous confession that the most he could hope to achieve was to illuminate the workings of some small part of that sublime world: ‘I do not know what I may appear to the world,’ he wrote, ‘but to myself I seem to have been only like a boy playing on the sea shore, diverting myself now and then, finding a smoother pebble than ordinary, whilst the great ocean of truth lay all undiscovered before me.’
The wonders of the world are so pervasive that to the seemingly less sophisticated minds of earlier ages (such as Newton’s) they were best understood as ‘natural miracles’. To be sure, the undeviating and punctual sun, the cycle of life, the infinite variety of living things, their interconnectedness to each other, these are all part of nature, and are faithful to its laws. They are ‘natural’. But the totality of it all, its beauty and integrity and completeness, that ‘great undiscovered ocean of truth’, lie so far beyond the power of the human mind to properly comprehend, they might as well be ‘a miracle’. Thus science and religion were cheerfully reconciled, the scientist seeing his task as a holy calling, where Robert Boyle, the founder of modern chemistry, would perceive his role as ‘a priest in the temple of nature’.

This is scarcely the modern view. Most people, of course, acknowledge the beauty and complexity of the world and find it admirable, even uplifting – but you could search in vain for a textbook of biology or zoology, astronomy or botany, or indeed of any scientific discipline, which even hints that there is something astonishing, extraordinary, let alone ‘miraculous’, about its subject. Science no longer ‘does’ wonder, which is more readily associated nowadays with the incurious mysticism and incense of the New Age. Science prefers to cultivate an aura of intellectual neutrality, the better to convey its disinterested objectivity, its commitment to the ‘truth’. Hence the highly technical, and to the outsider often impenetrable, prose of its texts and learned journals, from which any sense of wonder is rigorously excluded.
There are, as will be seen, several important reasons for this modern-day lack of astonishment, but the most important is undoubtedly the general perception that science, since Newton’s time, has revealed those ‘natural miracles’ to have a distinctly non-miraculous, materialist explanation – culminating in that firestorm of scientific discovery of the past fifty years, which has integrated into one coherent narrative the entire history of the universe from its origins to the present day. To be sure, science may not capture the beauty and connectedness of it all, the ‘sublime spirit that rolls through all things’, but this is more than compensated for by the sheer drama and excitement of the events it has so convincingly described.
The scale of that intellectual achievement is so great that there might seem little room any more for the ‘natural miracles’ of an earlier age, or to ‘wonder’ whether there might after all be more than we can know. It would certainly require a truly Olympian perspective, capable of surveying the vast landscape of science, to recognise where and what the limits to its knowledge might be – and that would seem an impossibility. Yet it is not quite so, for while that landscape is indeed vast, and far beyond the comprehending of any individual, it is nonetheless sustained by three great unifying phenomena that impose order on the world – which on examination can tell us something very profound about science and the limits of its materialist explanations.
It is fruitless – always has been, always will be – to pose that most elementary of all questions: ‘Why is there something rather than nothing?’ The same however does not apply to the second and supplementary question: ‘Why, given there is something, are both the physical universe (and all that it contains) and all life (in its infinite diversity) so ordered?’ They should not be, for anything left to itself will tend towards chaos and disorder, as fires burn out and clocks run down – unless countered by a compensating force imposing order, restituting lost energy.
There are (to put it simply) three ‘forces for order’: first the force of gravity, as discovered by Sir Isaac Newton, the glue that binds the universe together; next the all-powerful genes strung out along the Double Helix, imposing the order of form, the shape, characteristics and attributes unique to each of the millions of species of living things; and thirdly the human mind, which imposes the order of understanding on the natural world and our place within it. These three forces control or sustain all (or virtually all) phenomena in the universe, and stand proxy for the ‘vast landscape’ of science. Thus, if they are knowable scientifically as belonging to that materialist, second-order reality of the physics and chemistry of matter (where water is a combination of two atoms of hydrogen and one of oxygen), then by definition there is nothing in theory that science cannot know. But if they are not so knowable, one can only infer that they exert their effects through some other force that lies beyond the range of science and its methods to detect. We start with Sir Isaac Newton’s theory of gravity.
Isaac Newton, born in 1642 into a semi-literate sheep-farming family in rural Lincolnshire, was one of the tiny handful of supreme geniuses who have shaped the categories of human knowledge. From the time of Aristotle onwards, and for the best part of two thousand years, the regularity and order of the physical world was as it was because it was divinely ordained to be so: the punctual and undeviating sun, the movement of the planets across the heavens, the passage of the seasons and apples falling from trees. Newton’s genius was to realise that these and numerous other aspects of the physical world were all linked together by the hidden force of gravity.
Soon after graduating at the age of twenty-three from Cambridge University, Newton was compelled by an epidemic of bubonic plague to return to his home in Lincolnshire. There, over a period of just two years, he made a series of scientific discoveries that would not be equalled till Einstein, almost 250 years later. These included the nature of light and the mathematical method of differential calculus, with which it is possible to calculate the movement of the planets in their orbit. Newton’s most famous insight came when, sitting in his garden, he saw an apple fall from a tree. He ‘wondered’ whether the force of the earth’s gravity pulling the apple to the ground might reach still further, and hold the moon in its orbit around the earth.
Newton’s friend Dr William Stukeley would later record his reminiscences of that great moment.
After dinner, the weather being warm, we went into the garden and drank tea, under the shade of some apple trees, only he and myself. Amidst other discourse, he told me he was just in the same situation as when, formerly, the notion of gravitation came into his mind. It was occasioned by the fall of an apple, as he sat in a contemplative mood. Why should that apple always descend perpendicularly to the ground, thought he to himself? Why should it not go sideways or upwards, but constantly to the earth’s centre? Assuredly, the reason is, that the earth draws it. There must be a drawing power in matter … and if matter thus draws matter, it … must extend itself through the universe.
Newton’s ‘notion of gravitation’, of ‘matter drawing on matter’, would resolve the greatest conundrum of the movement of those heavenly bodies, why they remained in their stately orbits (the moon around the earth, the earth around the sun) rather than, as they should by rights, being impelled by their centrifugal force into the far depths of outer space. Newton, being a mathematical genius, calculated the strength of that countervailing force of gravity, showing it to be determined by the masses of the moon and earth, earth and sun respectively, multiplied together and divided by the square of the distance between them, and so too throughout the entire universe. By the time Newton published his epic three-volume Principia Mathematica in 1687, describing the theory of gravity and the three laws of motion, he had transformed the divinely ordained physical world into which he was born into one governed by absolute and unchallengeable universal laws known to man, where everything was linked to everything else in a never-ending series of causes – all the way into the past and indefinitely into the future.
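Expressed in the notation of the modern physics textbook (a standard formulation, not Newton’s own geometrical presentation), the relationship just described is the law of universal gravitation:

\[ F \;=\; G\,\frac{m_1 m_2}{r^2} \]

where F is the attractive force between two bodies, m_1 and m_2 are their masses, r is the distance separating their centres, and G is the universal gravitational constant (roughly 6.67 × 10⁻¹¹ N m² kg⁻²), a quantity Newton himself had no means of measuring.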
From the beginning, the force of gravity at the moment of the Big Bang imposed the necessary order on those billions of elementary particles, concentrating them into massive, heat-generating stars. Several thousand millions of years later, the same force of gravity would impose order on our solar system, concentrating 99 per cent of its matter within the sun to generate the prodigious amounts of energy, heat and light that would allow the emergence of life on earth. And anticipating the future? Newton’s friend, the Astronomer Royal Edmond Halley, used Newton’s laws to work out the elliptical orbit of the comet that bears his name and so predict its seventy-six-year cycle of return. Three hundred years later, NASA scientists would use those same laws to plot the trajectory of the first manned space flight to the moon. Newton’s laws can even predict when it will all end – in five thousand million years’ time (or thereabouts), when the prodigious energy generated by our sun will be exhausted, and our earth will perish.
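To give a flavour of the kind of calculation involved (a sketch in modern terms, not Halley’s own working), Newton’s law of gravitation yields Kepler’s third law, relating the period T of a body orbiting the sun of mass M to the semi-major axis a of its elliptical path:

\[ T^2 \;=\; \frac{4\pi^2}{G M}\,a^3 \]

With the period measured in years and the semi-major axis in astronomical units, this reduces to T = a^{3/2}; taking the semi-major axis of Halley’s comet as roughly 17.8 astronomical units gives a period of about seventy-five years, in good agreement with its observed seventy-six-year cycle of return.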
As time has passed, so the explanatory power of Newton’s laws of gravity has grown ever wider, to touch virtually every aspect of human experience: the movement of the sun and stars (obviously), the waxing and waning of the moon, the ebb and flow of the tides, the contrasting climates of the Arctic Circle and the sand-swept desert, the cycle of the seasons, rain falling on the ground, the shape of mountains sculpted by the movement of glaciers, the flow of rivers towards the sea, the size of living things from whale to flea and indeed ourselves – for we could not be any bigger than we are without encountering the hazard, posed by gravity, of falling over.
Newton’s laws epitomise, to the highest degree, the explanatory power of science, through which for the first time we humans could comprehend the workings of that vast universe to which we belong. But, and it is a most extraordinary thing, three hundred years on, the means by which the powerful, invisible glue of gravity imposes order on the universe remains quite unknown. Consider, by analogy, a child whirling a ball attached to a string around its head, just as gravity holds the earth in its perpetual orbit around the sun. Here, the string (like gravity) counteracts the centrifugal force that would hurtle the ball (the earth) into a distant tree. But there is no string. Newton himself was only too well aware that there had to be some physical means by which gravity must exert its influence over hundreds, thousands, of millions of miles of empty space. It was, he wrote, ‘an absurdity that no thinking man can ever fall into’ to suppose that gravity ‘could act at a distance through a vacuum without the mediation of anything else, by which that action and force may be conveyed’.
Perhaps, he speculated, space was suffused by an invisible ‘ether’ composed of very small particles that repelled one another and by which the sun could hold the earth in its orbit – though this would mean that over a very long period the movement of the planets would gradually slow down through the effects of friction. But in 1887 an American physicist, Albert Michelson, discovered that there was no ‘ether’. Space is well named – it is empty. Put another way, Newton’s theory encompasses the profound contradiction of gravity being both an immensely powerful force imposing order on the matter of the universe, linking its history all the way back to the beginning and anticipating its end, yet itself being non-material. This extraordinary property of gravitation is best put in context by contrasting it with, for example, that equally potent invisible ‘force’, electricity, which at the touch of a switch floods the room with light. But whereas electricity is a ‘material’ force – the vibration of electrons passing along a copper wire – gravity exerts its effects across billions of miles of empty space, through a vacuum of nothingness.
Newton’s theory stands (for all time), but has been modified in two directions. First, in 1915, Albert Einstein in his General Theory of Relativity reformulated the concept of gravity to allow for space to be ‘elastic’, so that a star like our sun could curve and stretch the space around it – and the bigger the star, the greater the effect. Matter, Einstein showed, warps space. This takes care of the more bizarre phenomena in the universe, such as ‘black holes’, that capture even the weightless particles of light – but for all that the profound Newtonian mystery of how gravity exerts its force through the vacuum of space remains unresolved.
Next, it has emerged that Newton’s gravitational force is not alone, being just one of four (similarly non-material) forces, including those that bind together the atomic particles of protons and neutrons – whose disruption generates the prodigious energy of an atomic explosion. In the twentieth century, the conundrum of the non-materiality of those gravitational forces was compounded when it emerged that their strength is precisely tuned to permit the consequent emergence of life and ourselves. If the force they exert were, for example, ever so slightly stronger, then stars (like our sun) would attract more matter from interstellar space, and being so much bigger would burn much more rapidly and intensely – just as a large bonfire outburns a smaller one. They would then exhaust themselves in as little as ten million years, instead of the several billion necessary for life to ‘get going’. If, contrariwise, the force of gravity were ever so slightly weaker, the reverse would apply, and the sun and stars would not be big enough to generate those prodigious amounts of heat and energy. The sky would be empty at night, and once again we humans would never have been around to appreciate it. It is, of course, very difficult to convey just how precise those forces necessary for the creation of the universe (and the subsequent emergence of life on planet earth) had to be, but physicist John Polkinghorne estimates their fine tuning had to be accurate to within one part in a trillion trillion (and several trillion more), a figure greater by far than all the particles in the universe – a degree of accuracy, it is estimated, equivalent to hitting a target an inch wide on the other side of the universe.
Isaac Newton’s theory of gravity is the most elegant idea in the history of science. Nothing touches its combination of pure simplicity, readily understandable by a class of ten-year-olds, and all-encompassing explanatory power. His contemporaries were dazzled that so elementary a mathematical formula could account for so much – prompting the poet Alexander Pope to propose as Newton’s epitaph in Westminster Abbey:
Nature and Nature’s laws lay hid in night:
God said, Let Newton be! and all was light.
Still, Newton’s gravitational force, imposing ‘order’ on the physical universe, clearly fails the test of scientific ‘knowability’, for while we can fully comprehend all its consequences we are ‘left with that absurdity that no thinking man can ever fall into’ of having to suppose that a non-material force can ‘act at a distance’ across millions of miles of empty space without the mediation of anything by which that action and force may be conveyed. Thus, ironically, this most scientific of theories, grounded in the observation of the movements of the planets expressed in mathematical form, subverts the scientific or materialist view which holds that everything must ultimately be explicable in terms of its material properties alone.
We turn now to the living world of plants, insects, fishes, birds and ourselves, which is billions upon billions upon billions of times more complex than Newton’s non-living, physical universe. Hence, the two forces that impose order on that world, the Double Helix imposing the order of form on living things, and the human brain and its mind imposing the order of understanding, will be profounder than the glue of gravity by similar orders of magnitude. We might anticipate that these two further forces of order will, like Newton’s theory of gravity, similarly prove to be non-material, and therefore fail the test of scientific knowability. But to ‘get there’ we must first come to grips with how we have come to suppose otherwise, and specifically how in the mid-nineteenth century Darwin’s grand evolutionary theory, as set out in the twin texts of On the Origin of Species and The Descent of Man, offered an apparently all-encompassing and exclusively materialist explanation for the phenomena of life.

4 The (Evolutionary) ‘Reason for Everything’: Certainty (#ulink_d60ec864-f763-5ee1-a8fe-d2b5d16722e3)
‘There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one … from so simple a beginning endless forms most beautiful and most wonderful have been evolved.’
Charles Darwin, On the Origin of Species (1859)
Charles Darwin, while a theology student at Cambridge University, developed a passion for beetles. ‘Nothing gave me so much pleasure,’ he would write in his autobiography, recalling how, ‘as proof of my zeal’:
One day, on tearing off some old bark I saw two rare beetles and seized one in each hand; then I saw a third and new kind, which I could not bear to lose, so I put the one which I held in my right hand into my mouth. But alas! It ejected some intensely acrid fluid that burnt my tongue so I was forced to spit it out and so it was lost.
Darwin’s zeal for beetles was quite unexceptional, for he was born into the Golden Age of Natural History, when the wonders of nature as revealed by science gripped the public imagination with an extraordinary intensity, while being also the most tangible evidence of a divinely ordained world. ‘The naturalist … sees the beautiful connection that subsists throughout the whole scheme of animated nature,’ observed the editor of the Zoological Journal of London. ‘He traces … the mutual depending that convinces him nothing is made in vain.’
There seemed no limit to the new forms of ‘animated nature’ just waiting to be discovered. In 1771 the famed maritime explorer James Cook had returned from his epic three-year circumnavigation of the world ‘laden with the greatest treasure of natural history that ever was brought into any country at one time’: no fewer than 1,400 new plant species, more than a thousand new species of animals, two hundred fish and assorted molluscs, insects and marine creatures. For his friend the anatomist John Hunter, waiting for him as his ship anchored off Deal harbour, Cook had several unusual specimens to add to his famous collection: a striped polecat from the Cape of Good Hope, part of a giant squid, and a peculiar animal ‘as large as a greyhound, of mouse colour and very swift’, known in the Aboriginal dialect as a ‘kangooroo’.
The discovery of this exhilarating diversity of life extended beyond the living to the long-since extinct. For this, too, was the great period of geological discovery of the antiquity of the earth, the strata of whose rocks revealed fossilised bones and teeth so much larger than any previously encountered as to suggest that vast, fantastical creatures had roamed the surface of the earth millions of years before man.
The immediate fascination of natural history lay in the accurate description of that teeming variety of life, but beyond that there was every reason to suppose that comparing the anatomical structure and the behaviour of living organisms such as the polecat, squid and kangaroo would reveal the long-suspected hidden laws that link all ‘animated nature’ together. The search for those laws stretches back into antiquity, seeking first to explain the ‘vitality’ of the living, the heat, energy and movement that so readily distinguish it from the nonliving, and that depart so promptly at the moment of death. The subtler, yet related, question concerned the nature of ‘form’, those elusive qualities of pattern and order that so clearly distinguish polecat, squid and kangaroo from each other, and the tissues of which they are made – as readily as a grand palace is distinguished from a humble factory, and from the bricks and mortar of which they are constructed. But the elusive ‘form’ of polecat and squid, unlike that of the palace or the factory, has the further extraordinary property of remaining constant throughout their lives, even though the ‘bricks and mortar’ from which they are fashioned are being constantly replaced and renewed. From the first natural historian, Aristotle, onwards, it was presumed that some organising principle, some ‘formative impulse’, must both determine and ensure that constancy of form.
The presiding genius of natural history, Baron Georges Cuvier (1769–1832), director of the Musée d’Histoire Naturelle in Paris, proposed two laws of that ‘formative impulse’, the laws of similarity (homology) and correlation. First, homology. Cuvier inferred from a detailed study of the ten thousand specimens in his collection that the diverse forms of animals each concealed an underlying ‘unity of type’, all being variations on the same ‘blueprint’: the wings of bird and bat, the paddle of a porpoise, the horse’s legs and the human forearm were all constructed from the same bones, adapted to their ‘way of life’ – whether flying or swimming, running or grasping.
His second law, of ‘correlation’, asserted that the various parts of every animal, its skull, limbs, teeth, etc., were all ‘of a piece’, all correlated together, being so fashioned as to fulfil its way of life. Thus a carnivore, such as a lion or hyena, would have limbs strong enough to grasp its victim and muscular enough for hunting, jaws sufficiently powerful and teeth sharp enough to rip its flesh, and so on. ‘Every organised being forms a whole, a unique and perfect system, the parts of which mutually correspond and concur in the same definitive action,’ he wrote.
Cuvier maintained that these laws dictating the harmony of the parts of the ‘unique and perfect system’ were as precise as those of mathematics. He could not specify the biological forces behind them, but they were not merely some theoretical inference. Rather, they could be ‘put to the test’, allowing him, to the astonishment of all, to ‘restore to life’ those fantastical and long-extinct creatures from long ago, reconstructing from the assorted bones and teeth of their fossilised remains a ‘megatherium’, or ‘huge beast’, a creature resembling a giant sloth which would stand on two legs to graze on leaves. ‘Is not Cuvier the great poet of our era?’ enquired the novelist Honoré de Balzac. ‘Our immortal naturalist has reconstructed past worlds from a few bleached bones … discovered a Giant population from the footprints of a mammoth.’
