The Discovery of Ignorance (Sapiens, Chapter 14)
Part Four - The Scientific Revolution
14 - The Discovery of Ignorance
Were, say, a Spanish peasant to have fallen asleep in AD 1000 and woken up 500 years later, to the din of Columbus’ sailors boarding the Niña, Pinta and Santa Maria, the world would have seemed to him quite familiar. Despite many changes in technology, manners and political boundaries, this medieval Rip Van Winkle would have felt at home. But had one of Columbus’ sailors fallen into a similar slumber and woken up to the ringtone of a twenty-first-century iPhone, he would have found himself in a world strange beyond comprehension. ‘Is this heaven?’ he might well have asked himself. ‘Or perhaps – hell?’

The last 500 years have witnessed a phenomenal and unprecedented growth in human power. In the year 1500, there were about 500 million Homo sapiens in the entire world. Today, there are 7 billion.1 The total value of goods and services produced by humankind in the year 1500 is estimated at $250 billion, in today’s dollars.2 Nowadays the value of a year of human production is close to $60 trillion.3 In 1500, humanity consumed about 13 trillion calories of energy per day. Today, we consume 1,500 trillion calories a day.4 (Take a second look at those figures – human population has increased fourteen-fold, production 240-fold, and energy consumption 115-fold.)

Suppose a single modern battleship got transported back to Columbus’ time. In a matter of seconds it could make driftwood out of the Niña, Pinta and Santa Maria and then sink the navies of every great world power of the time without sustaining a scratch. Five modern freighters could have taken onboard all the cargo borne by the whole world’s merchant fleets.5 A modern computer could easily store every word and number in all the codex books and scrolls in every single medieval library with room to spare. Any large bank today holds more money than all the world’s premodern kingdoms put together.6

In 1500, few cities had more than 100,000 inhabitants.
Most buildings were constructed of mud, wood and straw; a three-storey building was a skyscraper. The streets were rutted dirt tracks, dusty in summer and muddy in winter, plied by pedestrians, horses, goats, chickens and a few carts. The most common urban noises were human and animal voices, along with the occasional hammer and saw. At sunset, the cityscape went black, with only an occasional candle or torch flickering in the gloom. If an inhabitant of such a city could see modern Tokyo, New York or Mumbai, what would she think?
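The fold increases quoted above can be verified with a few lines of arithmetic, using the figures given in the text:

```python
# Growth figures from the text (1500 vs. today).
population_1500, population_today = 500e6, 7e9
production_1500, production_today = 250e9, 60e12   # dollars per year
calories_1500, calories_today = 13e12, 1500e12     # calories per day

# Each ratio matches the fold increase the text cites.
print(round(population_today / population_1500))   # 14-fold
print(round(production_today / production_1500))   # 240-fold
print(round(calories_today / calories_1500))       # 115-fold
```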
Prior to the sixteenth century, no human had circumnavigated the earth. This changed in 1522, when Magellan’s expedition returned to Spain after a journey of 72,000 kilometres. It took three years and cost the lives of almost all the crew members, Magellan included. In 1873, Jules Verne could imagine that Phileas Fogg, a wealthy British adventurer, might just be able to make it around the world in eighty days. Today anyone with a middle-class income can safely and easily circumnavigate the globe in just forty-eight hours.
In 1500, humans were confined to the earth’s surface. They could build towers and climb mountains, but the sky was reserved for birds, angels and deities. On 20 July 1969 humans landed on the moon. This was not merely a historical achievement, but an evolutionary and even cosmic feat. During the previous 4 billion years of evolution, no organism managed even to leave the earth’s atmosphere, and certainly none left a foot or tentacle print on the moon.
For most of history, humans knew nothing about 99.99 per cent of the organisms on the planet – namely, the microorganisms. This was not because they were of no concern to us. Each of us bears billions of one-celled creatures within us, and not just as free-riders. They are our best friends, and deadliest enemies. Some of them digest our food and clean our guts, while others cause illnesses and epidemics. Yet it was only in 1674 that a human eye first saw a microorganism, when Anton van Leeuwenhoek took a peek through his home-made microscope and was startled to see an entire world of tiny creatures milling about in a drop of water. During the subsequent 300 years, humans have made the acquaintance of a huge number of microscopic species. We’ve managed to defeat most of the deadliest contagious diseases they cause, and have harnessed microorganisms in the service of medicine and industry. Today we engineer bacteria to produce medications, manufacture biofuel and kill parasites.
But the single most remarkable and defining moment of the past 500 years came at 05:29:45 on 16 July 1945. At that precise second, American scientists detonated the first atomic bomb at Alamogordo, New Mexico. From that point onward, humankind had the capability not only to change the course of history, but to end it.
The historical process that led to Alamogordo and to the moon is known as the Scientific Revolution. During this revolution humankind has obtained enormous new powers by investing resources in scientific research. It is a revolution because, until about AD 1500, humans the world over doubted their ability to obtain new medical, military and economic powers. While government and wealthy patrons allocated funds to education and scholarship, the aim was, in general, to preserve existing capabilities rather than acquire new ones. The typical premodern ruler gave money to priests, philosophers and poets in the hope that they would legitimise his rule and maintain the social order. He did not expect them to discover new medications, invent new weapons or stimulate economic growth.
During the last five centuries, humans increasingly came to believe that they could increase their capabilities by investing in scientific research. This wasn’t just blind faith – it was repeatedly proven empirically. The more proofs there were, the more resources wealthy people and governments were willing to put into science. We would never have been able to walk on the moon, engineer microorganisms and split the atom without such investments. The US government, for example, has in recent decades allocated billions of dollars to the study of nuclear physics. The knowledge produced by this research has made possible the construction of nuclear power stations, which provide cheap electricity for American industries, which pay taxes to the US government, which uses some of these taxes to finance further research in nuclear physics.
Why did modern humans develop a growing belief in their ability to obtain new powers through research? What forged the bond between science, politics and economics? This chapter looks at the unique nature of modern science in order to provide part of the answer. The next two chapters examine the formation of the alliance between science, the European empires and the economics of capitalism.
Humans have sought to understand the universe at least since the Cognitive Revolution. Our ancestors put a great deal of time and effort into trying to discover the rules that govern the natural world. But modern science differs from all previous traditions of knowledge in three critical ways:
a. The willingness to admit ignorance. Modern science is based on the Latin injunction ignoramus – ‘we do not know’. It assumes that we don’t know everything. Even more critically, it accepts that the things that we think we know could be proven wrong as we gain more knowledge. No concept, idea or theory is sacred and beyond challenge.
b. The centrality of observation and mathematics. Having admitted ignorance, modern science aims to obtain new knowledge. It does so by gathering observations and then using mathematical tools to connect these observations into comprehensive theories.
c. The acquisition of new powers. Modern science is not content with creating theories. It uses these theories in order to acquire new powers, and in particular to develop new technologies.
The Scientific Revolution has not been a revolution of knowledge. It has been above all a revolution of ignorance. The great discovery that launched the Scientific Revolution was the discovery that humans do not know the answers to their most important questions.
Premodern traditions of knowledge such as Islam, Christianity, Buddhism and Confucianism asserted that everything that is important to know about the world was already known. The great gods, or the one almighty God, or the wise people of the past possessed all-encompassing wisdom, which they revealed to us in scriptures and oral traditions. Ordinary mortals gained knowledge by delving into these ancient texts and traditions and understanding them properly. It was inconceivable that the Bible, the Qur’an or the Vedas were missing out on a crucial secret of the universe – a secret that might yet be discovered by flesh-and-blood creatures.
Ancient traditions of knowledge admitted only two kinds of ignorance. First, an individual might be ignorant of something important. To obtain the necessary knowledge, all he needed to do was ask somebody wiser. There was no need to discover something that nobody yet knew. For example, if a peasant in some thirteenth-century Yorkshire village wanted to know how the human race originated, he assumed that Christian tradition held the definitive answer. All he had to do was ask the local priest.
Second, an entire tradition might be ignorant of unimportant things. By definition, whatever the great gods or the wise people of the past did not bother to tell us was unimportant. For example, if our Yorkshire peasant wanted to know how spiders weave their webs, it was pointless to ask the priest, because there was no answer to this question in any of the Christian Scriptures. That did not mean, however, that Christianity was deficient. Rather, it meant that understanding how spiders weave their webs was unimportant. After all, God knew perfectly well how spiders do it. If this were a vital piece of information, necessary for human prosperity and salvation, God would have included a comprehensive explanation in the Bible.
Christianity did not forbid people to study spiders. But spider scholars – if there were any in medieval Europe – had to accept their peripheral role in society and the irrelevance of their findings to the eternal truths of Christianity. No matter what a scholar might discover about spiders or butterflies or Galapagos finches, that knowledge was little more than trivia, with no bearing on the fundamental truths of society, politics and economics.
In fact, things were never quite that simple. In every age, even the most pious and conservative, there were people who argued that there were important things of which their entire tradition was ignorant. Yet such people were usually marginalised or persecuted – or else they founded a new tradition and began arguing that they knew everything there is to know. For example, the prophet Muhammad began his religious career by condemning his fellow Arabs for living in ignorance of the divine truth. Yet Muhammad himself very quickly began to argue that he knew the full truth, and his followers began calling him ‘The Seal of the Prophets’. Henceforth, there was no need of revelations beyond those given to Muhammad.
Modern-day science is a unique tradition of knowledge, inasmuch as it openly admits collective ignorance regarding the most important questions. Darwin never argued that he was ‘The Seal of the Biologists’, and that he had solved the riddle of life once and for all. After centuries of extensive scientific research, biologists admit that they still don’t have any good explanation for how brains produce consciousness. Physicists admit that they don’t know what caused the Big Bang, or how to reconcile quantum mechanics with the theory of general relativity.
In other cases, competing scientific theories are vociferously debated on the basis of constantly emerging new evidence. A prime example is the debates about how best to run the economy. Though individual economists may claim that their method is the best, orthodoxy changes with every financial crisis and stock-exchange bubble, and it is generally accepted that the final word on economics is yet to be said.
In still other cases, particular theories are supported so consistently by the available evidence, that all alternatives have long since fallen by the wayside. Such theories are accepted as true – yet everyone agrees that were new evidence to emerge that contradicts the theory, it would have to be revised or discarded. Good examples of these are the plate tectonics theory and the theory of evolution.
The willingness to admit ignorance has made modern science more dynamic, supple and inquisitive than any previous tradition of knowledge. This has hugely expanded our capacity to understand how the world works and our ability to invent new technologies. But it presents us with a serious problem that most of our ancestors did not have to cope with. Our current assumption that we do not know everything, and that even the knowledge we possess is tentative, extends to the shared myths that enable millions of strangers to cooperate effectively. If the evidence shows that many of those myths are doubtful, how can we hold society together? How can our communities, countries and international system function?
All modern attempts to stabilise the sociopolitical order have had no choice but to rely on either of two unscientific methods:
a. Take a scientific theory, and in opposition to common scientific practices, declare that it is a final and absolute truth. This was the method used by Nazis (who claimed that their racial policies were the corollaries of biological facts) and Communists (who claimed that Marx and Lenin had divined absolute economic truths that could never be refuted).
b. Leave science out of it and live in accordance with a non-scientific absolute truth. This has been the strategy of liberal humanism, which is built on a dogmatic belief in the unique worth and rights of human beings – a doctrine which has embarrassingly little in common with the scientific study of Homo sapiens.
But that shouldn’t surprise us. Even science itself has to rely on religious and ideological beliefs to justify and finance its research.
Modern culture has nevertheless been willing to embrace ignorance to a much greater degree than has any previous culture. One of the things that has made it possible for modern social orders to hold together is the spread of an almost religious belief in technology and in the methods of scientific research, which have replaced to some extent the belief in absolute truths.
The Scientific Dogma
Modern science has no dogma. Yet it has a common core of research methods, which are all based on collecting empirical observations – those we can observe with at least one of our senses – and putting them together with the help of mathematical tools.
People throughout history collected empirical observations, but the importance of these observations was usually limited. Why waste precious resources obtaining new observations when we already have all the answers we need? But as modern people came to admit that they did not know the answers to some very important questions, they found it necessary to look for completely new knowledge. Consequently, the dominant modern research method takes for granted the insufficiency of old knowledge. Instead of studying old traditions, emphasis is now placed on new observations and experiments. When present observation collides with past tradition, we give precedence to the observation. Of course, physicists analysing the spectra of distant galaxies, archaeologists analysing the finds from a Bronze Age city, and political scientists studying the emergence of capitalism do not disregard tradition. They start by studying what the wise people of the past have said and written. But from their first year in college, aspiring physicists, archaeologists and political scientists are taught that it is their mission to go beyond what Einstein, Heinrich Schliemann and Max Weber ever knew.
Mere observations, however, are not knowledge. In order to understand the universe, we need to connect observations into comprehensive theories. Earlier traditions usually formulated their theories in terms of stories. Modern science uses mathematics.
There are very few equations, graphs and calculations in the Bible, the Qur’an, the Vedas or the Confucian classics. When traditional mythologies and scriptures laid down general laws, these were presented in narrative rather than mathematical form. Thus a fundamental principle of Manichaean religion asserted that the world is a battleground between good and evil. An evil force created matter, while a good force created spirit. Humans are caught between these two forces, and should choose good over evil. Yet the prophet Mani made no attempt to offer a mathematical formula that could be used to predict human choices by quantifying the respective strength of these two forces. He never calculated that ‘the force acting on a man is equal to the acceleration of his spirit divided by the mass of his body’.
This is exactly what scientists seek to accomplish. In 1687, Isaac Newton published The Mathematical Principles of Natural Philosophy, arguably the most important book in modern history. Newton presented a general theory of movement and change. The greatness of Newton’s theory was its ability to explain and predict the movements of all bodies in the universe, from falling apples to shooting stars, using three very simple mathematical laws:
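The equations themselves appear as an illustration in the print edition and are not reproduced in this extract; in standard modern notation, Newton's three laws of motion are usually written as:

```latex
% 1. A body's velocity is constant unless a net force acts on it:
\sum \vec{F} = 0 \;\Longrightarrow\; \frac{d\vec{v}}{dt} = 0
% 2. Force equals mass times acceleration:
\vec{F} = m\,\vec{a}
% 3. Every action has an equal and opposite reaction:
\vec{F}_{1 \to 2} = -\vec{F}_{2 \to 1}
```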
Henceforth, anyone who wished to understand and predict the movement of a cannonball or a planet simply had to make measurements of the object’s mass, direction and acceleration, and the forces acting on it. By inserting these numbers into Newton’s equations, the future position of the object could be predicted. It worked like magic. Only around the end of the nineteenth century did scientists come across a few observations that did not fit well with Newton’s laws, and these led to the next revolutions in physics – the theory of relativity and quantum mechanics.
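As a sketch of the kind of calculation described above (an editorial illustration, not from the book), the flight of a cannonball can be predicted from its launch speed and angle by applying Newton's second law with constant gravity, ignoring air resistance:

```python
import math

def cannonball_range(speed, angle_deg, g=9.81):
    """Predict the range and flight time of a projectile launched over
    flat ground, with no air resistance (speed in m/s, angle in degrees)."""
    angle = math.radians(angle_deg)
    vx = speed * math.cos(angle)       # horizontal velocity, constant
    vy = speed * math.sin(angle)       # initial vertical velocity
    t_flight = 2 * vy / g              # time to return to launch height
    return vx * t_flight, t_flight

# A cannonball fired at 100 m/s at 45 degrees:
distance, t = cannonball_range(100, 45)
print(f"lands {distance:.0f} m away after {t:.1f} s")
```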
Newton showed that the book of nature is written in the language of mathematics. Some chapters boil down to a clear-cut equation; but scholars who attempted to reduce biology, economics and psychology to neat Newtonian equations have discovered that these fields have a level of complexity that makes such an aspiration futile. This did not mean, however, that they gave up on mathematics. A new branch of mathematics was developed over the last 200 years to deal with the more complex aspects of reality: statistics.
In 1744, two Presbyterian clergymen in Scotland, Alexander Webster and Robert Wallace, decided to set up a life-insurance fund that would provide pensions for the widows and orphans of dead clergymen. They proposed that each of their church’s ministers would pay a small portion of his income into the fund, which would invest the money. If a minister died, his widow would receive dividends on the fund’s profits. This would allow her to live comfortably for the rest of her life. But to determine how much the ministers had to pay in so that the fund would have enough money to live up to its obligations, Webster and Wallace had to be able to predict how many ministers would die each year, how many widows and orphans they would leave behind, and by how many years the widows would outlive their husbands.
Take note of what the two churchmen did not do. They did not pray to God to reveal the answer. Nor did they search for an answer in the Holy Scriptures or among the works of ancient theologians. Nor did they enter into an abstract philosophical disputation. Being Scots, they were practical types. So they contacted a professor of mathematics from the University of Edinburgh, Colin Maclaurin. The three of them collected data on the ages at which people died and used these to calculate how many ministers were likely to pass away in any given year.
Their work was founded on several recent breakthroughs in the fields of statistics and probability. One of these was Jacob Bernoulli’s Law of Large Numbers. Bernoulli had codified the principle that while it might be difficult to predict with certainty a single event, such as the death of a particular person, it was possible to predict with great accuracy the average outcome of many similar events. That is, while Maclaurin could not use maths to predict whether Webster and Wallace would die next year, he could, given enough data, tell Webster and Wallace how many Presbyterian ministers in Scotland would almost certainly die next year. Fortunately, they had ready-made data that they could use. Actuarial tables published fifty years previously by Edmond Halley proved particularly useful. Halley had analysed records of 1,238 births and 1,174 deaths that he obtained from the city of Breslau, Germany. Halley’s tables made it possible to see that, for example, a twenty-year-old person has a 1:100 chance of dying in a given year, but a fifty-year-old person has a 1:39 chance.
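Bernoulli's principle is easy to see in a short simulation (an editorial illustration, not Harari's): the fate of any one fifty-year-old is unpredictable, but given Halley's 1:39 annual risk, the number of deaths in a large group is almost fixed:

```python
import random

random.seed(0)            # fixed seed for reproducibility
p_death = 1 / 39          # Halley's annual death risk for a fifty-year-old
group_size = 100_000

# Each draw is one person's year: death if the random number falls below p.
deaths = sum(random.random() < p_death for _ in range(group_size))
expected = group_size * p_death

# The simulated toll lands very close to the expected value (~2,564).
print(deaths, round(expected))
```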
Processing these numbers, Webster and Wallace concluded that, on average, there would be 930 living Scottish Presbyterian ministers at any given moment, and an average of twenty-seven ministers would die each year, eighteen of whom would be survived by widows. Five of those who did not leave widows would leave orphaned children, and two of those survived by widows would also be outlived by children from previous marriages who had not yet reached the age of sixteen. They further computed how much time was likely to go by before the widows’ death or remarriage (in both these eventualities, payment of the pension would cease). These figures enabled Webster and Wallace to determine how much money the ministers who joined their fund had to pay in order to provide for their loved ones. By contributing £2 12s. 2d. a year, a minister could guarantee that his widowed wife would receive at least £10 a year – a hefty sum in those days. If he thought that was not enough he could choose to pay in more, up to a level of £6 11s. 3d. a year – which would guarantee his widow the even more handsome sum of £25 a year.
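The premiums and pensions quoted above are in pre-decimal pounds, shillings and pence (20 shillings to the pound, 12 pence to the shilling). Converting them to decimal pounds, a check of my own rather than anything in the text, shows that the scheme was roughly proportional: both tiers bought a pension of about 3.8 times the annual premium:

```python
def to_pounds(pounds, shillings, pence):
    """Convert pre-decimal £ s. d. to decimal pounds (20s = £1, 12d = 1s)."""
    return pounds + shillings / 20 + pence / 240

low_premium = to_pounds(2, 12, 2)    # £2 12s. 2d. -> pension of £10 a year
high_premium = to_pounds(6, 11, 3)   # £6 11s. 3d. -> pension of £25 a year

# Pension-to-premium ratio for each tier:
print(round(10 / low_premium, 2), round(25 / high_premium, 2))
```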
According to their calculations, by the year 1765 the Fund for a Provision for the Widows and Children of the Ministers of the Church of Scotland would have capital totalling £58,348. Their calculations proved amazingly accurate. When that year arrived, the fund’s capital stood at £58,347 – just £1 less than the prediction! This was even better than the prophecies of Habakkuk, Jeremiah or St John. Today, Webster and Wallace’s fund, known simply as Scottish Widows, is one of the largest pension and insurance companies in the world. With assets worth £100 billion, it insures not only Scottish widows, but anyone willing to buy its policies.7

Probability calculations such as those used by the two Scottish ministers became the foundation not merely of actuarial science, which is central to the pension and insurance business, but also of the science of demography (founded by another clergyman, the Anglican Robert Malthus). Demography in its turn was the cornerstone on which Charles Darwin (who almost became an Anglican pastor) built his theory of evolution. While there are no equations that predict what kind of organism will evolve under a specific set of conditions, geneticists use probability calculations to compute the likelihood that a particular mutation will spread in a given population. Similar probabilistic models have become central to economics, sociology, psychology, political science and the other social and natural sciences. Even physics eventually supplemented Newton’s classical equations with the probability clouds of quantum mechanics.
We need merely look at the history of education to realise how far this process has taken us. Throughout most of history, mathematics was an esoteric field that even educated people rarely studied seriously. In medieval Europe, logic, grammar and rhetoric formed the educational core, while the teaching of mathematics seldom went beyond simple arithmetic and geometry. Nobody studied statistics. The undisputed monarch of all sciences was theology.
Today few students study rhetoric; logic is restricted to philosophy departments, and theology to seminaries. But more and more students are motivated – or forced – to study mathematics. There is an irresistible drift towards the exact sciences – defined as ‘exact’ by their use of mathematical tools. Even fields of study that were traditionally part of the humanities, such as the study of human language (linguistics) and the human psyche (psychology), rely increasingly on mathematics and seek to present themselves as exact sciences. Statistics courses are now part of the basic requirements not just in physics and biology, but also in psychology, sociology, economics and political science.
In the course catalogue of the psychology department at my own university, the first required course in the curriculum is ‘Introduction to Statistics and Methodology in Psychological Research’. Second-year psychology students must take ‘Statistical Methods in Psychological Research’. Confucius, Buddha, Jesus and Muhammad would have been bewildered if you told them that in order to understand the human mind and cure its illnesses you must first study statistics.
Knowledge is Power
Most people have a hard time digesting modern science because its mathematical language is difficult for our minds to grasp, and its findings often contradict common sense. Out of the 7 billion people in the world, how many really understand quantum mechanics, cell biology or macroeconomics? Science nevertheless enjoys immense prestige because of the new powers it gives us. Presidents and generals may not understand nuclear physics, but they have a good grasp of what nuclear bombs can do.
In 1620 Francis Bacon published a scientific manifesto titled The New Instrument. In it he argued that ‘knowledge is power’. The real test of ‘knowledge’ is not whether it is true, but whether it empowers us. Scientists usually assume that no theory is 100 per cent correct. Consequently, truth is a poor test for knowledge. The real test is utility. A theory that enables us to do new things constitutes knowledge.
Over the centuries, science has offered us many new tools. Some are mental tools, such as those used to predict death rates and economic growth. Even more important are technological tools. The connection forged between science and technology is so strong that today people tend to confuse the two. We often think that it is impossible to develop new technologies without scientific research, and that there is little point in research if it does not result in new technologies.
In fact, the relationship between science and technology is a very recent phenomenon. Prior to 1500, science and technology were totally separate fields. When Bacon connected the two in the early seventeenth century, it was a revolutionary idea. During the seventeenth and eighteenth centuries this relationship tightened, but the knot was tied only in the nineteenth century. Even in 1800, most rulers who wanted a strong army, and most business magnates who wanted a successful business, did not bother to finance research in physics, biology or economics.
I don’t mean to claim that there is no exception to this rule. A good historian can find precedent for everything. But an even better historian knows when these precedents are but curiosities that cloud the big picture. Generally speaking, most premodern rulers and business people did not finance research about the nature of the universe in order to develop new technologies, and most thinkers did not try to translate their findings into technological gadgets. Rulers financed educational institutions whose mandate was to spread traditional knowledge for the purpose of buttressing the existing order.
Here and there people did develop new technologies, but these were usually created by uneducated craftsmen using trial and error, not by scholars pursuing systematic scientific research. Cart manufacturers built the same carts from the same materials year in year out. They did not set aside a percentage of their annual profits in order to research and develop new cart models. Cart design occasionally improved, but it was usually thanks to the ingenuity of some local carpenter who never set foot in a university and did not even know how to read.
This was true of the public as well as the private sector. Whereas modern states call in their scientists to provide solutions in almost every area of national policy, from energy to health to waste disposal, ancient kingdoms seldom did so. The contrast between then and now is most pronounced in weaponry. When outgoing President Dwight Eisenhower warned in 1961 of the growing power of the military-industrial complex, he left out a part of the equation. He should have alerted his country to the military-industrial-scientific complex, because today’s wars are scientific productions. The world’s military forces initiate, fund and steer a large part of humanity’s scientific research and technological development.
When World War One bogged down into interminable trench warfare, both sides called in the scientists to break the deadlock and save the nation. The men in white answered the call, and out of the laboratories rolled a constant stream of new wonder-weapons: combat aircraft, poison gas, tanks, submarines and ever more efficient machine guns, artillery pieces, rifles and bombs.
Science played an even larger role in World War Two. By late 1944 Germany was losing the war and defeat was imminent. A year earlier, the Germans’ allies, the Italians, had toppled Mussolini and surrendered to the Allies. But Germany kept fighting on, even though the British, American and Soviet armies were closing in. One reason German soldiers and civilians thought not all was lost was that they believed German scientists were about to turn the tide with so-called miracle weapons such as the V-2 rocket and jet-powered aircraft.
While the Germans were working on rockets and jets, the American Manhattan Project successfully developed atomic bombs. By the time the bomb was ready, in early August 1945, Germany had already surrendered, but Japan was fighting on. American forces were poised to invade its home islands. The Japanese vowed to resist the invasion and fight to the death, and there was every reason to believe that it was no idle threat. American generals told President Harry S. Truman that an invasion of Japan would cost the lives of a million American soldiers and would extend the war well into 1946. Truman decided to use the new bomb. Two weeks and two atom bombs later, Japan surrendered unconditionally and the war was over.
But science is not just about offensive weapons. It plays a major role in our defences as well. Today many Americans believe that the solution to terrorism is technological rather than political. Just give millions more to the nanotechnology industry, they believe, and the United States could send bionic spy-flies into every Afghan cave, Yemenite redoubt and North African encampment. Once that’s done, Osama Bin Laden’s heirs will not be able to make a cup of coffee without a CIA spy-fly passing this vital information back to headquarters in Langley. Allocate millions more to brain research, and every airport could be equipped with ultra-sophisticated fMRI scanners that could immediately recognise angry and hateful thoughts in people’s brains. Will it really work? Who knows. Is it wise to develop bionic flies and thought-reading scanners? Not necessarily. Be that as it may, as you read these lines, the US Department of Defense is transferring millions of dollars to nanotechnology and brain laboratories for work on these and other such ideas.
This obsession with military technology – from tanks to atom bombs to spy-flies – is a surprisingly recent phenomenon. Up until the nineteenth century, the vast majority of military revolutions were the product of organisational rather than technological changes. When alien civilisations met for the first time, technological gaps sometimes played an important role. But even in such cases, few thought of deliberately creating or enlarging such gaps. Most empires did not rise thanks to technological wizardry, and their rulers did not give much thought to technological improvement. The Arabs did not defeat the Sassanid Empire thanks to superior bows or swords, the Seljuks had no technological advantage over the Byzantines, and the Mongols did not conquer China with the help of some ingenious new weapon. In fact, in all these cases the vanquished enjoyed superior military and civilian technology.
The Roman army is a particularly good example. It was the best army of its day, yet technologically speaking, Rome had no edge over Carthage, Macedonia or the Seleucid Empire. Its advantage rested on efficient organisation, iron discipline and huge manpower reserves. The Roman army never set up a research and development department, and its weapons remained more or less the same for centuries on end. If the legions of Scipio Aemilianus – the general who levelled Carthage and defeated the Numantians in the second century BC – had suddenly popped up 500 years later in the age of Constantine the Great, Scipio would have had a fair chance of beating Constantine. Now imagine what would happen to a general from a few centuries back – say Napoleon – if he led his troops against a modern armoured brigade. Napoleon was a brilliant tactician, and his men were crack professionals, but their skills would be useless in the face of modern weaponry.
As in Rome, so also in ancient China: most generals and philosophers did not think it their duty to develop new weapons. The most important military invention in the history of China was gunpowder. Yet to the best of our knowledge, gunpowder was invented accidentally, by Daoist alchemists searching for the elixir of life. Gunpowder’s subsequent career is even more telling. One might have thought that the Daoist alchemists would have made China master of the world. In fact, the Chinese used the new compound mainly for firecrackers. Even as the Song Empire collapsed in the face of a Mongol invasion, no emperor set up a medieval Manhattan Project to save the empire by inventing a doomsday weapon. Only in the fifteenth century – about 600 years after the invention of gunpowder – did cannons become a decisive factor on Afro-Asian battlefields. Why did it take so long for the deadly potential of this substance to be put to military use? Because it appeared at a time when neither kings, scholars, nor merchants thought that new military technology could save them or make them rich.
The situation began to change in the fifteenth and sixteenth centuries, but another 200 years went by before most rulers evinced any interest in financing the research and development of new weapons. Logistics and strategy continued to have far greater impact on the outcome of wars than technology. The Napoleonic military machine that crushed the armies of the European powers at Austerlitz (1805) was armed with more or less the same weaponry that the army of Louis XVI had used. Napoleon himself, despite being an artilleryman, had little interest in new weapons, even though scientists and inventors tried to persuade him to fund the development of flying machines, submarines and rockets.
Science, industry and military technology intertwined only with the advent of the capitalist system and the Industrial Revolution. Once this relationship was established, however, it quickly transformed the world.
The Ideal of Progress
Until the Scientific Revolution most human cultures did not believe in progress. They thought the golden age was in the past, and that the world was stagnant, if not deteriorating. Strict adherence to the wisdom of the ages might perhaps bring back the good old times, and human ingenuity might conceivably improve this or that facet of daily life. However, it was considered impossible for human know-how to overcome the world’s fundamental problems. If even Muhammad, Jesus, Buddha and Confucius – who knew everything there is to know – were unable to abolish famine, disease, poverty and war from the world, how could we expect to do so?
Many faiths believed that some day a messiah would appear and end all wars, famines and even death itself. But the notion that humankind could do so by discovering new knowledge and inventing new tools was worse than ludicrous – it was hubris. The story of the Tower of Babel, the story of Icarus, the story of the Golem and countless other myths taught people that any attempt to go beyond human limitations would inevitably lead to disappointment and disaster.
When modern culture admitted that there were many important things that it still did not know, and when that admission of ignorance was married to the idea that scientific discoveries could give us new powers, people began suspecting that real progress might be possible after all. As science began to solve one unsolvable problem after another, many became convinced that humankind could overcome any and every problem by acquiring and applying new knowledge. Poverty, sickness, wars, famines, old age and death itself were not the inevitable fate of humankind. They were simply the fruits of our ignorance.
A famous example is lightning. Many cultures believed that lightning was the hammer of an angry god, used to punish sinners. In the middle of the eighteenth century, in one of the most celebrated experiments in scientific history, Benjamin Franklin flew a kite during a lightning storm to test the hypothesis that lightning is simply an electric current. Franklin’s empirical observations, coupled with his knowledge about the qualities of electrical energy, enabled him to invent the lightning rod and disarm the gods.
Poverty is another case in point. Many cultures have viewed poverty as an inescapable part of this imperfect world. According to the New Testament, shortly before the crucifixion a woman anointed Christ with precious oil worth 300 denarii. Jesus’ disciples scolded the woman for wasting such a huge sum of money instead of giving it to the poor, but Jesus defended her, saying that ‘The poor you will always have with you, and you can help them any time you want. But you will not always have me’ (Mark 14:7). Today, fewer and fewer people, including fewer and fewer Christians, agree with Jesus on this matter. Poverty is increasingly seen as a technical problem amenable to intervention. It’s common wisdom that policies based on the latest findings in agronomy, economics, medicine and sociology can eliminate poverty.
And indeed, many parts of the world have already been freed from the worst forms of deprivation. Throughout history, societies have suffered from two kinds of poverty: social poverty, which withholds from some people the opportunities available to others; and biological poverty, which puts the very lives of individuals at risk due to lack of food and shelter. Perhaps social poverty can never be eradicated, but in many countries around the world biological poverty is a thing of the past.
Until recently, most people hovered very close to the biological poverty line, below which a person lacks enough calories to sustain life for long. Even small miscalculations or misfortunes could easily push people below that line, into starvation. Natural disasters and man-made calamities often plunged entire populations over the abyss, causing the death of millions. Today most of the world’s people have a safety net stretched below them. Individuals are protected from personal misfortune by insurance, state-sponsored social security and a plethora of local and international NGOs. When calamity strikes an entire region, worldwide relief efforts are usually successful in preventing the worst. People still suffer from numerous degradations, humiliations and poverty-related illnesses, but in most countries nobody is starving to death. In fact, in many societies more people are in danger of dying from obesity than from starvation.
The Gilgamesh Project
Of all mankind’s ostensibly insoluble problems, one has remained the most vexing, interesting and important: the problem of death itself. Before the late modern era, most religions and ideologies took it for granted that death was our inevitable fate. Moreover, most faiths turned death into the main source of meaning in life. Try to imagine Islam, Christianity or the ancient Egyptian religion in a world without death. These creeds taught people that they must come to terms with death and pin their hopes on the afterlife, rather than seek to overcome death and live for ever here on earth. The best minds were busy giving meaning to death, not trying to escape it.
That is the theme of the most ancient myth to come down to us – the Gilgamesh myth of ancient Sumer. Its hero is the strongest and most capable man in the world, King Gilgamesh of Uruk, who could defeat anyone in battle. One day, Gilgamesh’s best friend, Enkidu, died. Gilgamesh sat by the body and observed it for many days, until he saw a worm dropping out of his friend’s nostril. At that moment Gilgamesh was gripped by a terrible horror, and he resolved that he himself would never die. He would somehow find a way to defeat death. Gilgamesh then undertook a journey to the end of the universe, killing lions, battling scorpion-men and finding his way into the underworld. There he shattered the stone giants of Urshanabi, the ferryman of the river of the dead, and found Utnapishtim, the last survivor of the primordial flood. Yet Gilgamesh failed in his quest. He returned home empty-handed, as mortal as ever, but with one new piece of wisdom. When the gods created man, Gilgamesh had learned, they set death as man’s inevitable destiny, and man must learn to live with it.
Disciples of progress do not share this defeatist attitude. For men of science, death is not an inevitable destiny, but merely a technical problem. People die not because the gods decreed it, but due to various technical failures – a heart attack, cancer, an infection. And every technical problem has a technical solution. If the heart flutters, it can be stimulated by a pacemaker or replaced by a new heart. If cancer rampages, it can be killed with drugs or radiation. If bacteria proliferate, they can be subdued with antibiotics. True, at present we cannot solve all technical problems. But we are working on them. Our best minds are not wasting their time trying to give meaning to death. Instead, they are busy investigating the physiological, hormonal and genetic systems responsible for disease and old age. They are developing new medicines, revolutionary treatments and artificial organs that will lengthen our lives and might one day vanquish the Grim Reaper himself.
Until recently, you would not have heard scientists, or anyone else, speak so bluntly. ‘Defeat death?! What nonsense! We are only trying to cure cancer, tuberculosis and Alzheimer’s disease,’ they insisted. People avoided the issue of death because the goal seemed too elusive. Why create unreasonable expectations? We’re now at a point, however, where we can be frank about it. The leading project of the Scientific Revolution is to give humankind eternal life. Even if killing death seems a distant goal, we have already achieved things that were inconceivable a few centuries ago. In 1199, King Richard the Lionheart was struck by an arrow in his left shoulder. Today we’d say he incurred a minor injury. But in 1199, in the absence of antibiotics and effective sterilisation methods, this minor flesh wound became infected and gangrene set in. The only way to stop the spread of gangrene in twelfth-century Europe was to cut off the infected limb, impossible when the infection was in a shoulder. The gangrene spread through the Lionheart’s body and no one could help the king. He died in great agony two weeks later.
As recently as the nineteenth century, the best doctors still did not know how to prevent infection and stop the putrefaction of tissues. In field hospitals doctors routinely cut off the hands and legs of soldiers who received even minor limb injuries, fearing gangrene. These amputations, as well as all other medical procedures (such as tooth extraction), were done without any anaesthetics. The first anaesthetics – ether, chloroform and morphine – entered regular usage in Western medicine only in the middle of the nineteenth century. Before the advent of chloroform, four soldiers had to hold down a wounded comrade while the doctor sawed off the injured limb. On the morning after the battle of Waterloo (1815), heaps of sawn-off hands and legs could be seen adjacent to the field hospitals. In those days, carpenters and butchers who enlisted in the army were often sent to serve in the medical corps, because surgery required little more than knowing your way with knives and saws.
In the two centuries since Waterloo, things have changed beyond recognition. Pills, injections and sophisticated operations save us from a spate of illnesses and injuries that once dealt an inescapable death sentence. They also protect us against countless daily aches and ailments, which premodern people simply accepted as part of life. The average life expectancy jumped from around twenty-five to forty years, to around sixty-seven in the entire world, and to around eighty years in the developed world.8 Death suffered its worst setbacks in the arena of child mortality. Until the twentieth century, between a quarter and a third of the children of agricultural societies never reached adulthood. Most succumbed to childhood diseases such as diphtheria, measles and smallpox. In seventeenth-century England, 150 out of every 1,000 newborns died during their first year, and a third of all children were dead before they reached fifteen.9 Today, only five out of 1,000 English babies die during their first year, and only seven out of 1,000 die before age fifteen.10 We can better grasp the full impact of these figures by setting aside statistics and telling some stories. A good example is the family of King Edward I of England (1237–1307) and his wife, Queen Eleanor (1241–90). Their children enjoyed the best conditions and the most nurturing surroundings that could be provided in medieval Europe. They lived in palaces, ate as much food as they liked, had plenty of warm clothing, well-stocked fireplaces, the cleanest water available, an army of servants and the best doctors. The sources mention sixteen children that Queen Eleanor bore between 1255 and 1284:
1. An anonymous daughter, born in 1255, died at birth.
2. A daughter, Catherine, died either at age one or age three.
3. A daughter, Joan, died at six months.
4. A son, John, died at age five.
5. A son, Henry, died at age six.
6. A daughter, Eleanor, died at age twenty-nine.
7. An anonymous daughter died at five months.
8. A daughter, Joan, died at age thirty-five.
9. A son, Alphonso, died at age ten.
10. A daughter, Margaret, died at age fifty-eight.
11. A daughter, Berengeria, died at age two.
12. An anonymous daughter died shortly after birth.
13. A daughter, Mary, died at age fifty-three.
14. An anonymous son died shortly after birth.
15. A daughter, Elizabeth, died at age thirty-four.
16. A son, Edward.
The youngest, Edward, was the first of the boys to survive the dangerous years of childhood, and at his father’s death he ascended the English throne as King Edward II. In other words, it took Eleanor sixteen tries to carry out the most fundamental mission of an English queen – to provide her husband with a male heir. Edward II’s mother must have been a woman of exceptional patience and fortitude. Not so the woman Edward chose for his wife, Isabella of France. She had him murdered when he was forty-three.11 To the best of our knowledge, Eleanor and Edward I were a healthy couple and passed no fatal hereditary illnesses on to their children. Nevertheless, ten out of the sixteen – 62 per cent – died during childhood. Only six managed to live beyond the age of eleven, and only three – just 18 per cent – lived beyond the age of forty. In addition to these births, Eleanor most likely had a number of pregnancies that ended in miscarriage. On average, Edward and Eleanor lost a child every three years, ten children one after another. It’s nearly impossible for a parent today to imagine such loss.
How long will the Gilgamesh Project – the quest for immortality – take to complete? A hundred years? Five hundred years? A thousand years? When we recall how little we knew about the human body in 1900, and how much knowledge we have gained in a single century, there is cause for optimism. Genetic engineers have recently managed to double the average life expectancy of Caenorhabditis elegans worms.12 Could they do the same for Homo sapiens? Nanotechnology experts are developing a bionic immune system composed of millions of nano-robots that would inhabit our bodies, open blocked blood vessels, fight viruses and bacteria, eliminate cancerous cells and even reverse ageing processes.13 A few serious scholars suggest that by 2050, some humans will become a-mortal (not immortal, because they could still die of some accident, but a-mortal, meaning that in the absence of fatal trauma their lives could be extended indefinitely).
Whether or not Project Gilgamesh succeeds, from a historical perspective it is fascinating to see that most late-modern religions and ideologies have already taken death and the afterlife out of the equation. Until the eighteenth century, religions considered death and its aftermath central to the meaning of life. Beginning in the eighteenth century, religions and ideologies such as liberalism, socialism and feminism lost all interest in the afterlife. What, exactly, happens to a Communist after he or she dies? What happens to a capitalist? What happens to a feminist? It is pointless to look for the answer in the writings of Marx, Adam Smith or Simone de Beauvoir. The only modern ideology that still awards death a central role is nationalism. In its more poetic and desperate moments, nationalism promises that whoever dies for the nation will forever live in its collective memory. Yet this promise is so fuzzy that even most nationalists do not really know what to make of it.
The Sugar Daddy of Science
We are living in a technical age. Many are convinced that science and technology hold the answers to all our problems. We should just let the scientists and technicians go on with their work, and they will create heaven here on earth. But science is not an enterprise that takes place on some superior moral or spiritual plane above the rest of human activity. Like all other parts of our culture, it is shaped by economic, political and religious interests.
Science is a very expensive affair. A biologist seeking to understand the human immune system requires laboratories, test tubes, chemicals and electron microscopes, not to mention lab assistants, electricians, plumbers and cleaners. An economist seeking to model credit markets must buy computers, set up giant databanks and develop complicated data-processing programs. An archaeologist who wishes to understand the behaviour of archaic hunter-gatherers must travel to distant lands, excavate ancient ruins and date fossilised bones and artefacts. All of this costs money.
During the past 500 years modern science has achieved wonders thanks largely to the willingness of governments, businesses, foundations and private donors to channel billions of dollars into scientific research. These billions have done much more to chart the universe, map the planet and catalogue the animal kingdom than did Galileo Galilei, Christopher Columbus and Charles Darwin. If these particular geniuses had never been born, their insights would probably have occurred to others. But if the proper funding were unavailable, no intellectual brilliance could have compensated for that. If Darwin had never been born, for example, we’d today attribute the theory of evolution to Alfred Russel Wallace, who came up with the idea of evolution via natural selection independently of Darwin and just a few years later. But if the European powers had not financed geographical, zoological and botanical research around the world, neither Darwin nor Wallace would have had the necessary empirical data to develop the theory of evolution. It is likely that they would not even have tried.
Why did the billions start flowing from government and business coffers into labs and universities? In academic circles, many are naïve enough to believe in pure science. They believe that government and business altruistically give them money to pursue whatever research projects strike their fancy. But this hardly describes the realities of science funding.
Most scientific studies are funded because somebody believes they can help attain some political, economic or religious goal. For example, in the sixteenth century, kings and bankers channelled enormous resources to finance geographical expeditions around the world but not a penny for studying child psychology. This is because kings and bankers surmised that the discovery of new geographical knowledge would enable them to conquer new lands and set up trade empires, whereas they couldn’t see any profit in understanding child psychology.
In the 1940s the governments of America and the Soviet Union channelled enormous resources to the study of nuclear physics rather than underwater archaeology. They surmised that studying nuclear physics would enable them to develop nuclear weapons, whereas underwater archaeology was unlikely to help win wars. Scientists themselves are not always aware of the political, economic and religious interests that control the flow of money; many scientists do, in fact, act out of pure intellectual curiosity. However, only rarely do scientists dictate the scientific agenda.
Even if we wanted to finance pure science unaffected by political, economic or religious interests, it would probably be impossible. Our resources are limited, after all. Ask a congressman to allocate an additional million dollars to the National Science Foundation for basic research, and he’ll justifiably ask whether that money wouldn’t be better used to fund teacher training or to give a needed tax break to a troubled factory in his district. To channel limited resources we must answer questions such as ‘What is more important?’ and ‘What is good?’ And these are not scientific questions. Science can explain what exists in the world, how things work, and what might be in the future. By definition, it has no pretensions to knowing what should be in the future. Only religions and ideologies seek to answer such questions.
Consider the following quandary: two biologists from the same department, possessing the same professional skills, have both applied for a million-dollar grant to finance their current research projects. Professor Slughorn wants to study a disease that infects the udders of cows, causing a 10 per cent decrease in their milk production. Professor Sprout wants to study whether cows suffer mentally when they are separated from their calves. Assuming that the amount of money is limited, and that it is impossible to finance both research projects, which one should be funded?
There is no scientific answer to this question. There are only political, economic and religious answers. In today’s world, it is obvious that Slughorn has a better chance of getting the money. Not because udder diseases are scientifically more interesting than bovine mentality, but because the dairy industry, which stands to benefit from the research, has more political and economic clout than the animal-rights lobby.
Perhaps in a strict Hindu society, where cows are sacred, or in a society committed to animal rights, Professor Sprout would have a better shot. But as long as she lives in a society that values the commercial potential of milk and the health of its human citizens over the feelings of cows, she’d best write up her research proposal so as to appeal to those assumptions. For example, she might write that ‘Depression leads to a decrease in milk production. If we understand the mental world of dairy cows, we could develop psychiatric medication that will improve their mood, thus raising milk production by up to 10 per cent. I estimate that there is a global annual market of $250 million for bovine psychiatric medications.’ Science is unable to set its own priorities. It is also incapable of determining what to do with its discoveries. For example, from a purely scientific viewpoint it is unclear what we should do with our increasing understanding of genetics. Should we use this knowledge to cure cancer, to create a race of genetically engineered supermen, or to engineer dairy cows with super-sized udders? It is obvious that a liberal government, a Communist government, a Nazi government and a capitalist business corporation would use the very same scientific discovery for completely different purposes, and there is no scientific reason to prefer one usage over others.
In short, scientific research can flourish only in alliance with some religion or ideology. The ideology justifies the costs of the research. In exchange, the ideology influences the scientific agenda and determines what to do with the discoveries. Hence in order to comprehend how humankind has reached Alamogordo and the moon – rather than any number of alternative destinations – it is not enough to survey the achievements of physicists, biologists and sociologists. We have to take into account the ideological, political and economic forces that shaped physics, biology and sociology, pushing them in certain directions while neglecting others.
Two forces in particular deserve our attention: imperialism and capitalism. The feedback loop between science, empire and capital has arguably been history’s chief engine for the past 500 years. The following chapters analyse its workings. First we’ll look at how the twin turbines of science and empire were latched to one another, and then learn how both were hitched up to the money pump of capitalism.