short science writing

Scientific discoveries, education, schooling, universities, faculties...
danas
Posts: 18821
Joined: 11/03/2005 19:40
Location: 10th circle...


Post by danas » 01/08/2007 18:52

Who’s Minding the Mind?
By BENEDICT CAREY

The New York Times
July 31, 2007


In a recent experiment, psychologists at Yale altered people’s judgments of a stranger by handing them a cup of coffee.

The study participants, college students, had no idea that their social instincts were being deliberately manipulated. On the way to the laboratory, they had bumped into a laboratory assistant, who was holding textbooks, a clipboard, papers and a cup of hot or iced coffee — and asked for a hand with the cup.

That was all it took: The students who held a cup of iced coffee rated a hypothetical person they later read about as being much colder, less social and more selfish than did their fellow students, who had momentarily held a cup of hot java.

Findings like this one, as improbable as they seem, have poured forth in psychological research over the last few years. New studies have found that people tidy up more thoroughly when there’s a faint tang of cleaning liquid in the air; they become more competitive if there’s a briefcase in sight, or more cooperative if they glimpse words like “dependable” and “support” — all without being aware of the change, or what prompted it.

Psychologists say that “priming” people in this way is not some form of hypnotism, or even subliminal seduction; rather, it’s a demonstration of how everyday sights, smells and sounds can selectively activate goals or motives that people already have.

More fundamentally, the new studies reveal a subconscious brain that is far more active, purposeful and independent than previously known. Goals, whether to eat, mate or devour an iced latte, are like neural software programs that can only be run one at a time, and the unconscious is perfectly capable of running the program it chooses.

The give and take between these unconscious choices and our rational, conscious aims can help explain some of the more mystifying realities of behavior, like how we can be generous one moment and petty the next, or act rudely at a dinner party when convinced we are emanating charm.

“When it comes to our behavior from moment to moment, the big question is, ‘What to do next?’ ” said John A. Bargh, a professor of psychology at Yale and a co-author, with Lawrence Williams, of the coffee study, which was presented at a recent psychology conference. “Well, we’re finding that we have these unconscious behavioral guidance systems that are continually furnishing suggestions through the day about what to do next, and the brain is considering and often acting on those, all before conscious awareness.”

Dr. Bargh added: “Sometimes those goals are in line with our conscious intentions and purposes, and sometimes they’re not.”

Priming the Unconscious

The idea of subliminal influence has a mixed reputation among scientists because of a history of advertising hype and apparent fraud. In 1957, an ad man named James Vicary claimed to have increased sales of Coca-Cola and popcorn at a movie theater in Fort Lee, N.J., by secretly flashing the words “Eat popcorn” and “Drink Coke” during the film, too quickly to be consciously noticed. But advertisers and regulators doubted his story from the beginning, and in a 1962 interview, Mr. Vicary acknowledged that he had trumped up the findings to gain attention for his business.

Later studies of products promising subliminal improvement, for things like memory and self-esteem, found no effect.

Some scientists also caution against overstating the implications of the latest research on priming unconscious goals. The new research “doesn’t prove that consciousness never does anything,” wrote Roy Baumeister, a professor of psychology at Florida State University, in an e-mail message. “It’s rather like showing you can hot-wire a car to start the ignition without keys. That’s important and potentially useful information, but it doesn’t prove that keys don’t exist or that keys are useless.”

Yet he and most in the field now agree that the evidence for psychological hot-wiring has become overwhelming. In one 2004 experiment, psychologists led by Aaron Kay, then at Stanford University and now at the University of Waterloo, had students take part in a one-on-one investment game with another, unseen player.

Half the students played while sitting at a large table, at the other end of which was a briefcase and a black leather portfolio. These students were far stingier with their money than the others, who played in an identical room, but with a backpack on the table instead.

The mere presence of the briefcase, noticed but not consciously registered, generated business-related associations and expectations, the authors argue, leading the brain to run the most appropriate goal program: compete. The students had no sense of whether they had acted selfishly or generously.

In another experiment, published in 2005, Dutch psychologists had undergraduates sit in a cubicle and fill out a questionnaire. Hidden in the room was a bucket of water with a splash of citrus-scented cleaning fluid, giving off a faint odor. After completing the questionnaire, the young men and women had a snack, a crumbly biscuit provided by laboratory staff members.

The researchers covertly filmed the snack time and found that these students cleared away crumbs three times more often than a comparison group, who had taken the same questionnaire in a room with no cleaning scent. “That is a very big effect, and they really had no idea they were doing it,” said Henk Aarts, a psychologist at Utrecht University and the senior author of the study.

The Same Brain Circuits

The real-world evidence for these unconscious effects is clear to anyone who has ever run out to the car to avoid the rain and ended up driving too fast, or rushed off to pick up dry cleaning and returned with wine and cigarettes — but no pressed slacks.

The brain appears to use the very same neural circuits to execute an unconscious act as it does a conscious one. In a study that appeared in the journal Science in May, a team of English and French neuroscientists performed brain imaging on 18 men and women who were playing a computer game for money. The players held a handgrip and were told that the tighter they squeezed when an image of money flashed on the screen, the more of the loot they could keep.

As expected, the players squeezed harder when the image of a British pound flashed by than when the image of a penny did — regardless of whether they consciously perceived the pictures, many of which flew by subliminally. But the circuits activated in their brains were similar as well: an area called the ventral pallidum was particularly active whenever the participants responded.

“This area is located in what used to be called the reptilian brain, well below the conscious areas of the brain,” said the study’s senior author, Chris Frith, a professor in neuropsychology at University College London who wrote the book “Making Up The Mind: How the Brain Creates our Mental World.”

The results suggest a “bottom-up” decision-making process, in which the ventral pallidum is part of a circuit that first weighs the reward and decides, then interacts with the higher-level, conscious regions later, if at all, Dr. Frith said.

Scientists have spent years trying to pinpoint the exact neural regions that support conscious awareness, so far in vain. But there’s little doubt it involves the prefrontal cortex, the thin outer layer of brain tissue behind the forehead, and experiments like this one show that it can be one of the last neural areas to know when a decision is made.

This bottom-up order makes sense from an evolutionary perspective. The subcortical areas of the brain evolved first and would have had to help individuals fight, flee and scavenge well before conscious, distinctly human layers were added later in evolutionary history. In this sense, Dr. Bargh argues, unconscious goals can be seen as open-ended, adaptive agents acting on behalf of the broad, genetically encoded aims — automatic survival systems.

In several studies, researchers have also shown that, once covertly activated, an unconscious goal persists with the same determination that is evident in our conscious pursuits. Study participants primed to be cooperative are assiduous in their teamwork, for instance, helping others and sharing resources in games that last 20 minutes or longer. Ditto for those set up to be aggressive.

This may help explain how someone can show up at a party in good spirits and then for some unknown reason — the host’s loafers? the family portrait on the wall? some political comment? — turn a little sour, without realizing the change until later, when a friend remarks on it. “I was rude? Really? When?”

Mark Schaller, a psychologist at the University of British Columbia, in Vancouver, has done research showing that when self-protective instincts are primed — simply by turning down the lights in a room, for instance — white people who are normally tolerant become unconsciously more likely to detect hostility in the faces of black men with neutral expressions.

“Sometimes nonconscious effects can be bigger in sheer magnitude than conscious ones,” Dr. Schaller said, “because we can’t moderate stuff we don’t have conscious access to, and the goal stays active.”

Until it is satisfied, that is, when the program is subsequently suppressed, research suggests. In one 2006 study, for instance, researchers had Northwestern University undergraduates recall an unethical deed from their past, like betraying a friend, or a virtuous one, like returning lost property. Afterward, the students had their choice of a gift, an antiseptic wipe or a pencil; and those who had recalled bad behavior were twice as likely as the others to take the wipe. They had been primed to psychologically “cleanse” their consciences.

Once their hands were wiped, the students became less likely to agree to volunteer their time to help with a graduate school project. Their hands were clean: the unconscious goal had been satisfied and now was being suppressed, the findings suggest.

What You Don’t Know

Using subtle cues for self-improvement is something like trying to tickle yourself, Dr. Bargh said: priming doesn’t work if you’re aware of it. Manipulating others, while possible, is dicey. “We know that as soon as people feel they’re being manipulated, they do the opposite; it backfires,” he said.

And researchers do not yet know how or when, exactly, unconscious drives may suddenly become conscious; or under which circumstances people are able to override hidden urges by force of will. Millions have quit smoking, for instance, and uncounted numbers have resisted darker urges to misbehave that they don’t even fully understand.

Yet the new research on priming makes it clear that we are not alone in our own consciousness. We have company, an invisible partner who has strong reactions about the world that don’t always agree with our own, but whose instincts, these studies clearly show, are at least as likely to be helpful, and attentive to others, as they are to be disruptive.



Post by danas » 01/08/2007 18:54

July 31, 2007
Findings
The Whys of Mating: 237 Reasons and Counting
By JOHN TIERNEY


Scholars in antiquity began counting the ways that humans have sex, but they weren’t so diligent in cataloging the reasons humans wanted to get into all those positions. Darwin and his successors offered a few explanations of mating strategies — to find better genes, to gain status and resources — but they neglected to produce a Kama Sutra of sexual motivations.

Perhaps you didn’t lament this omission. Perhaps you thought that the motivations for sex were pretty obvious. Or maybe you never really wanted to know what was going on inside other people’s minds, in which case you should stop reading immediately.

For now, thanks to psychologists at the University of Texas at Austin, we can at last count the whys. After asking nearly 2,000 people why they’d had sex, the researchers have assembled and categorized a total of 237 reasons — everything from “I wanted to feel closer to God” to “I was drunk.” They even found a few people who claimed to have been motivated by the desire to have a child.

The researchers, Cindy M. Meston and David M. Buss, believe their list, published in the August issue of Archives of Sexual Behavior, is the most thorough taxonomy of sexual motivation ever compiled. This seems entirely plausible.

Who knew, for instance, that a headache had any erotic significance except as an excuse for saying no? But some respondents of both sexes explained that they’d had sex “to get rid of a headache.” It’s No. 173 on the list.

Others said they did it to “help me fall asleep,” “make my partner feel powerful,” “burn calories,” “return a favor,” “keep warm,” “hurt an enemy” or “change the topic of conversation.” The lamest may have been, “It seemed like good exercise,” although there is also this: “Someone dared me.”

Dr. Buss has studied mating strategies around the world — he’s the oft-cited author of “The Evolution of Desire” and other books — but even he did not expect to find such varied and Machiavellian reasons for sex. “I was truly astonished,” he said, “by this richness of sexual psychology.”

The researchers collected the data by first asking more than 400 people to list their reasons for having sex, and then asking more than 1,500 others to rate how important each reason was to them. Although it was a fairly homogenous sample of students at the University of Texas, nearly every one of the 237 reasons was rated by at least some people as their most important motive for having sex.

The best news is that both men and women ranked the same reason most often: “I was attracted to the person.”

The rest of the top 10 for each gender were also almost all the same, including “I wanted to express my love for the person,” “I was sexually aroused and wanted the release” and “It’s fun.”

No matter what the reason, men were more likely to cite it than women, with a couple of notable exceptions. Women were more likely to say they had sex because, “I wanted to express my love for the person” and “I realized I was in love.” This jibes with conventional wisdom about women emphasizing the emotional aspects of sex, although it might also reflect the female respondents’ reluctance to admit to less lofty motives.

The results contradicted another stereotype about women: their supposed tendency to use sex to gain status or resources.

“Our findings suggest that men do these things more than women,” Dr. Buss said, alluding to the respondents who said they’d had sex to get things, like a promotion, a raise or a favor. Men were much more likely than women to say they’d had sex to “boost my social status” or because the partner was famous or “usually ‘out of my league.’ ”

Dr. Buss said, “Although I knew that having sex has consequences for reputation, it surprised me that people, notably men, would be motivated to have sex solely for social status and reputation enhancement.”

But then, men were also more likely than women to say they’d had sex because “I was slumming.” Or simply because “the opportunity presented itself,” or “the person demanded that I have sex.”

If nothing else, the results seem to be a robust confirmation of the hypothesis in the old joke: How can a woman get a man to take off his clothes? Ask him.

To make sense of the 237 reasons, Dr. Buss and Dr. Meston created a taxonomy with four general categories:

Physical: “The person had beautiful eyes” or “a desirable body,” or “was a good kisser” or “too physically attractive to resist.” Or “I wanted to achieve an orgasm.”

Goal Attainment: “I wanted to even the score with a cheating partner” or “break up a rival’s relationship” or “make money” or “be popular.” Or “because of a bet.”

Emotional: “I wanted to communicate at a deeper level” or “lift my partner’s spirits” or “say ‘Thank you.’ ” Or just because “the person was intelligent.”

Insecurity: “I felt like it was my duty” or “I wanted to boost my self-esteem” or “It was the only way my partner would spend time with me.”

Having sex out of a sense of duty, Dr. Buss said, showed up in a separate study as being especially frequent among older women. But both sexes seem to practice a strategy that he calls mate-guarding, as illustrated in one of the reasons given by survey respondents: “I was afraid my partner would have an affair if I didn’t.”

That fear seems especially reasonable after you finish reading Dr. Buss’s paper and realize just how many reasons there are for infidelity. Some critics might complain that the list has some repetitions — it includes “I was curious about sex” as well as “I wanted to see what all the fuss was about” — but I’m more concerned about the reasons yet to be enumerated.

For instance, nowhere among the 237 reasons will you find the one attributed to the actress Joan Crawford: “I need sex for a clear complexion.” (The closest is “I thought it would make me feel healthy.”) Nor will you find anything about gathering rosebuds while ye may (the 17th-century exhortation to young virgins from Robert Herrick). Nor the similar hurry-before-we-die rationale (“The grave’s a fine and private place/ But none I think do there embrace”) from Andrew Marvell in “To His Coy Mistress.”

From even a cursory survey of literature or the modern mass market in sex fantasies, it seems clear that this new taxonomy may not be any more complete than the original periodic table of the elements.

When I mentioned Ms. Crawford’s complexion and the poets’ rationales to Dr. Buss, he promised to consider them and all other candidates for Reason 238.

You can nominate your own reasons at TierneyLab. You can also submit nominations for a brand new taxonomy: reasons for just saying “No way!” Somehow, though, I don’t think this list will be as long.


Post by danas » 01/08/2007 19:00

The New Yorker
Annals of Science
Devolution
Why intelligent design isn’t.
by H. Allen Orr May 30, 2005


If you are in ninth grade and live in Dover, Pennsylvania, you are learning things in your biology class that differ considerably from what your peers just a few miles away are learning. In particular, you are learning that Darwin’s theory of evolution provides just one possible explanation of life, and that another is provided by something called intelligent design. You are being taught this not because of a recent breakthrough in some scientist’s laboratory but because the Dover Area School District’s board mandates it. In October, 2004, the board decreed that “students will be made aware of gaps/problems in Darwin’s theory and of other theories of evolution including, but not limited to, intelligent design.”

While the events in Dover have received a good deal of attention as a sign of the political times, there has been surprisingly little discussion of the science that’s said to underlie the theory of intelligent design, often called I.D. Many scientists avoid discussing I.D. for strategic reasons. If a scientific claim can be loosely defined as one that scientists take seriously enough to debate, then engaging the intelligent-design movement on scientific grounds, they worry, cedes what it most desires: recognition that its claims are legitimate scientific ones.

Meanwhile, proposals hostile to evolution are being considered in more than twenty states; earlier this month, a bill was introduced into the New York State Assembly calling for instruction in intelligent design for all public-school students. The Kansas State Board of Education is weighing new standards, drafted by supporters of intelligent design, that would encourage schoolteachers to challenge Darwinism. Senator Rick Santorum, a Pennsylvania Republican, has argued that “intelligent design is a legitimate scientific theory that should be taught in science classes.” An I.D.-friendly amendment that he sponsored to the No Child Left Behind Act—requiring public schools to help students understand why evolution “generates so much continuing controversy”—was overwhelmingly approved in the Senate. (The amendment was not included in the version of the bill that was signed into law, but similar language did appear in a conference report that accompanied it.) In the past few years, college students across the country have formed Intelligent Design and Evolution Awareness chapters. Clearly, a policy of limited scientific engagement has failed. So just what is this movement?

First of all, intelligent design is not what people often assume it is. For one thing, I.D. is not Biblical literalism. Unlike earlier generations of creationists—the so-called Young Earthers and scientific creationists—proponents of intelligent design do not believe that the universe was created in six days, that Earth is ten thousand years old, or that the fossil record was deposited during Noah’s flood. (Indeed, they shun the label “creationism” altogether.) Nor does I.D. flatly reject evolution: adherents freely admit that some evolutionary change occurred during the history of life on Earth. Although the movement is loosely allied with, and heavily funded by, various conservative Christian groups—and although I.D. plainly maintains that life was created—it is generally silent about the identity of the creator.

The movement’s main positive claim is that there are things in the world, most notably life, that cannot be accounted for by known natural causes and show features that, in any other context, we would attribute to intelligence. Living organisms are too complex to be explained by any natural—or, more precisely, by any mindless—process. Instead, the design inherent in organisms can be accounted for only by invoking a designer, and one who is very, very smart.

All of which puts I.D. squarely at odds with Darwin. Darwin’s theory of evolution was meant to show how the fantastically complex features of organisms—eyes, beaks, brains—could arise without the intervention of a designing mind. According to Darwinism, evolution largely reflects the combined action of random mutation and natural selection. A random mutation in an organism, like a random change in any finely tuned machine, is almost always bad. That’s why you don’t, screwdriver in hand, make arbitrary changes to the insides of your television. But, once in a great while, a random mutation in the DNA that makes up an organism’s genes slightly improves the function of some organ and thus the survival of the organism. In a species whose eye amounts to nothing more than a primitive patch of light-sensitive cells, a mutation that causes this patch to fold into a cup shape might have a survival advantage. While the old type of organism can tell only if the lights are on, the new type can detect the direction of any source of light or shadow. Since shadows sometimes mean predators, that can be valuable information. The new, improved type of organism will, therefore, be more common in the next generation. That’s natural selection. Repeated over billions of years, this process of incremental improvement should allow for the gradual emergence of organisms that are exquisitely adapted to their environments and that look for all the world as though they were designed. By 1870, about a decade after “The Origin of Species” was published, nearly all biologists agreed that life had evolved, and by 1940 or so most agreed that natural selection was a key force driving this evolution.

Advocates of intelligent design point to two developments that in their view undermine Darwinism. The first is the molecular revolution in biology. Beginning in the nineteen-fifties, molecular biologists revealed a staggering and unsuspected degree of complexity within the cells that make up all life. This complexity, I.D.’s defenders argue, lies beyond the abilities of Darwinism to explain. Second, they claim that new mathematical findings cast doubt on the power of natural selection. Selection may play a role in evolution, but it cannot accomplish what biologists suppose it can.

These claims have been championed by a tireless group of writers, most of them associated with the Center for Science and Culture at the Discovery Institute, a Seattle-based think tank that sponsors projects in science, religion, and national defense, among other areas. The center’s fellows and advisers—including the emeritus law professor Phillip E. Johnson, the philosopher Stephen C. Meyer, and the biologist Jonathan Wells—have published an astonishing number of articles and books that decry the ostensibly sad state of Darwinism and extoll the virtues of the design alternative. But Johnson, Meyer, and Wells, while highly visible, are mainly strategists and popularizers. The scientific leaders of the design movement are two scholars, one a biochemist and the other a mathematician. To assess intelligent design is to assess their arguments.

Michael J. Behe, a professor of biological sciences at Lehigh University (and a senior fellow at the Discovery Institute), is a biochemist who writes technical papers on the structure of DNA. He is the most prominent of the small circle of scientists working on intelligent design, and his arguments are by far the best known. His book “Darwin’s Black Box” (1996) was a surprise best-seller and was named by National Review as one of the hundred best nonfiction books of the twentieth century. (A little calibration may be useful here; “The Starr Report” also made the list.)

Not surprisingly, Behe’s doubts about Darwinism begin with biochemistry. Fifty years ago, he says, any biologist could tell stories like the one about the eye’s evolution. But such stories, Behe notes, invariably began with cells, whose own evolutionary origins were essentially left unexplained. This was harmless enough as long as cells weren’t qualitatively more complex than the larger, more visible aspects of the eye. Yet when biochemists began to dissect the inner workings of the cell, what they found floored them. A cell is packed full of exceedingly complex structures—hundreds of microscopic machines, each performing a specific job. The “Give me a cell and I’ll give you an eye” story told by Darwinists, he says, began to seem suspect: starting with a cell was starting ninety per cent of the way to the finish line.

Behe’s main claim is that cells are complex not just in degree but in kind. Cells contain structures that are “irreducibly complex.” This means that if you remove any single part from such a structure, the structure no longer functions. Behe offers a simple, nonbiological example of an irreducibly complex object: the mousetrap. A mousetrap has several parts—platform, spring, catch, hammer, and hold-down bar—and all of them have to be in place for the trap to work. If you remove the spring from a mousetrap, it isn’t slightly worse at killing mice; it doesn’t kill them at all. So, too, with the bacterial flagellum, Behe argues. This flagellum is a tiny propeller attached to the back of some bacteria. Spinning at more than twenty thousand r.p.m.s, it motors the bacterium through its aquatic world. The flagellum comprises roughly thirty different proteins, all precisely arranged, and if any one of them is removed the flagellum stops spinning.

In “Darwin’s Black Box,” Behe maintained that irreducible complexity presents Darwinism with “unbridgeable chasms.” How, after all, could a gradual process of incremental improvement build something like a flagellum, which needs all its parts in order to work? Scientists, he argued, must face up to the fact that “many biochemical systems cannot be built by natural selection working on mutations.” In the end, Behe concluded that irreducibly complex cells arise the same way as irreducibly complex mousetraps—someone designs them. As he put it in a recent Times Op-Ed piece: “If it looks, walks, and quacks like a duck, then, absent compelling evidence to the contrary, we have warrant to conclude it’s a duck. Design should not be overlooked simply because it’s so obvious.” In “Darwin’s Black Box,” Behe speculated that the designer might have assembled the first cell, essentially solving the problem of irreducible complexity, after which evolution might well have proceeded by more or less conventional means. Under Behe’s brand of creationism, you might still be an ape that evolved on the African savanna; it’s just that your cells harbor micro-machines engineered by an unnamed intelligence some four billion years ago.

But Behe’s principal argument soon ran into trouble. As biologists pointed out, there are several different ways that Darwinian evolution can build irreducibly complex systems. In one, elaborate structures may evolve for one reason and then get co-opted for some entirely different, irreducibly complex function. Who says those thirty flagellar proteins weren’t present in bacteria long before bacteria sported flagella? They may have been performing other jobs in the cell and only later got drafted into flagellum-building. Indeed, there’s now strong evidence that several flagellar proteins once played roles in a type of molecular pump found in the membranes of bacterial cells.

Behe doesn’t consider this sort of “indirect” path to irreducible complexity—in which parts perform one function and then switch to another—terribly plausible. And he essentially rules out the alternative possibility of a direct Darwinian path: a path, that is, in which Darwinism builds an irreducibly complex structure while selecting all along for the same biological function. But biologists have shown that direct paths to irreducible complexity are possible, too. Suppose a part gets added to a system merely because the part improves the system’s performance; the part is not, at this stage, essential for function. But, because subsequent evolution builds on this addition, a part that was at first just advantageous might become essential. As this process is repeated through evolutionary time, more and more parts that were once merely beneficial become necessary. This idea was first set forth by H. J. Muller, the Nobel Prize-winning geneticist, in 1939, but it’s a familiar process in the development of human technologies. We add new parts like global-positioning systems to cars not because they’re necessary but because they’re nice. But no one would be surprised if, in fifty years, computers that rely on G.P.S. actually drove our cars. At that point, G.P.S. would no longer be an attractive option; it would be an essential piece of automotive technology. It’s important to see that this process is thoroughly Darwinian: each change might well be small and each represents an improvement.

Design theorists have made some concessions to these criticisms. Behe has confessed to “sloppy prose” and said he hadn’t meant to imply that irreducibly complex systems “by definition” cannot evolve gradually. “I quite agree that my argument against Darwinism does not add up to a logical proof,” he says—though he continues to believe that Darwinian paths to irreducible complexity are exceedingly unlikely. Behe and his followers now emphasize that, while irreducibly complex systems can in principle evolve, biologists can’t reconstruct in convincing detail just how any such system did evolve.

What counts as a sufficiently detailed historical narrative, though, is altogether subjective. Biologists actually know a great deal about the evolution of biochemical systems, irreducibly complex or not. It’s significant, for instance, that the proteins that typically make up the parts of these systems are often similar to one another. (Blood clotting—another of Behe’s examples of irreducible complexity—involves at least twenty proteins, several of which are similar, and all of which are needed to make clots, to localize or remove clots, or to prevent the runaway clotting of all blood.) And biologists understand why these proteins are so similar. Each gene in an organism’s genome encodes a particular protein. Occasionally, the stretch of DNA that makes up a particular gene will get accidentally copied, yielding a genome that includes two versions of the gene. Over many generations, one version of the gene will often keep its original function while the other one slowly changes by mutation and natural selection, picking up a new, though usually related, function. This process of “gene duplication” has given rise to entire families of proteins that have similar functions; they often act in the same biochemical pathway or sit in the same cellular structure. There’s no doubt that gene duplication plays an extremely important role in the evolution of biological complexity.

It’s true that when you confront biologists with a particular complex structure like the flagellum they sometimes have a hard time saying which part appeared before which other parts. But then it can be hard, with any complex historical process, to reconstruct the exact order in which events occurred, especially when, as in evolution, the addition of new parts encourages the modification of old ones. When you’re looking at a bustling urban street, for example, you probably can’t tell which shop went into business first. This is partly because many businesses now depend on each other and partly because new shops trigger changes in old ones (the new sushi place draws twenty-somethings who demand wireless Internet at the café next door). But it would be a little rash to conclude that all the shops must have begun business on the same day or that some Unseen Urban Planner had carefully determined just which business went where.

The other leading theorist of the new creationism, William A. Dembski, holds a Ph.D. in mathematics, another in philosophy, and a master of divinity in theology. He has been a research professor in the conceptual foundations of science at Baylor University, and was recently appointed to the new Center for Science and Theology at Southern Baptist Theological Seminary. (He is a longtime senior fellow at the Discovery Institute as well.) Dembski publishes at a staggering pace. His books—including “The Design Inference,” “Intelligent Design,” “No Free Lunch,” and “The Design Revolution”—are generally well written and packed with provocative ideas.

According to Dembski, a complex object must be the result of intelligence if it was the product neither of chance nor of necessity. The novel “Moby Dick,” for example, didn’t arise by chance (Melville didn’t scribble random letters), and it wasn’t the necessary consequence of a physical law (unlike, say, the fall of an apple). It was, instead, the result of Melville’s intelligence. Dembski argues that there is a reliable way to recognize such products of intelligence in the natural world. We can conclude that an object was intelligently designed, he says, if it shows “specified complexity”—complexity that matches an “independently given pattern.” The sequence of letters “jkxvcjudoplvm” is certainly complex: if you randomly type thirteen letters, you are very unlikely to arrive at this particular sequence. But it isn’t specified: it doesn’t match any independently given sequence of letters. If, on the other hand, I ask you for the first sentence of “Moby Dick” and you type the letters “callmeishmael,” you have produced something that is both complex and specified. The sequence you typed is unlikely to arise by chance alone, and it matches an independent target sequence (the one written by Melville). Dembski argues that specified complexity, when expressed mathematically, provides an unmistakable signature of intelligence. Things like “callmeishmael,” he points out, just don’t arise in the real world without acts of intelligence. If organisms show specified complexity, therefore, we can conclude that they are the handiwork of an intelligent agent.
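
The "complex" half of Dembski's criterion is, at bottom, a simple probability calculation, and it is easy to check the arithmetic. Here is a minimal sketch in Python (my own illustration, not Dembski's; the function names are invented) of just how unlikely any particular thirteen-letter string is under random typing:

```python
# Odds that thirteen uniformly random lowercase letters spell a fixed target
# string -- the "complex" half of specified complexity. Function names here
# are my own, invented for illustration.
import random
import string

TARGET = "callmeishmael"  # the opening of "Moby Dick", stripped to letters

def p_match(length: int, alphabet_size: int = 26) -> float:
    """Probability that `length` random letters equal one fixed target string."""
    return alphabet_size ** -length

def random_letters(n: int) -> str:
    """A random string like "jkxvcjudoplvm": complex, but not specified."""
    return "".join(random.choice(string.ascii_lowercase) for _ in range(n))

print(f"P(random 13 letters == target) = {p_match(len(TARGET)):.2e}")  # about 4e-19
print(random_letters(13))
```

The number itself is uncontroversial; the disputed step in Dembski's argument is the "specified" half--deciding what counts as an independently given pattern.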

For Dembski, it’s telling that the sophisticated machines we find in organisms match up in astonishingly precise ways with recognizable human technologies. The eye, for example, has a familiar, cameralike design, with recognizable parts—a pinhole opening for light, a lens, and a surface on which to project an image—all arranged just as a human engineer would arrange them. And the flagellum has a motor design, one that features recognizable O-rings, a rotor, and a drive shaft. Specified complexity, he says, is there for all to see.

Dembski’s second major claim is that certain mathematical results cast doubt on Darwinism at the most basic conceptual level. In 2002, he focussed on so-called No Free Lunch, or N.F.L., theorems, which were derived in the late nineties by the physicists David H. Wolpert and William G. Macready. These theorems relate to the efficiency of different “search algorithms.” Consider a search for high ground on some unfamiliar, hilly terrain. You’re on foot and it’s a moonless night; you’ve got two hours to reach the highest place you can. How to proceed? One sensible search algorithm might say, “Walk uphill in the steepest possible direction; if no direction uphill is available, take a couple of steps to the left and try again.” This algorithm insures that you’re generally moving upward. Another search algorithm—a so-called blind search algorithm—might say, “Walk in a random direction.” This would sometimes take you uphill but sometimes down. Roughly, the N.F.L. theorems prove the surprising fact that, averaged over all possible terrains, no search algorithm is better than any other. In some landscapes, moving uphill gets you to higher ground in the allotted time, while in other landscapes moving randomly does, but on average neither outperforms the other.
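
The two search algorithms in the night-hike analogy are easy to make concrete. The toy sketch below (my own, in Python; not anything from Wolpert and Macready's papers) pits steepest-ascent hill climbing against blind search on a single smooth hill--friendly terrain on which the hill climber reliably wins. The N.F.L. theorems say only that this advantage washes out when you average over all possible landscapes, including pathological ones:

```python
# Toy versions of the two search algorithms in the night-hike analogy.
# `height` is a list of terrain heights; both searchers take the same
# number of steps and report the highest point they ever stood on.
import random

def hill_climb(height, start, steps):
    """Always step to the taller neighbor; sidestep randomly at a peak."""
    pos, best = start, height[start]
    for _ in range(steps):
        nbrs = [p for p in (pos - 1, pos + 1) if 0 <= p < len(height)]
        uphill = [p for p in nbrs if height[p] > height[pos]]
        pos = max(uphill, key=lambda p: height[p]) if uphill else random.choice(nbrs)
        best = max(best, height[pos])
    return best

def blind_search(height, start, steps):
    """Step in a random direction every time."""
    pos, best = start, height[start]
    for _ in range(steps):
        pos = random.choice([p for p in (pos - 1, pos + 1) if 0 <= p < len(height)])
        best = max(best, height[pos])
    return best

# One smooth hill peaking at height 50: friendly terrain for the hill climber.
smooth = list(range(50)) + [100 - x for x in range(50, 100)]
print(hill_climb(smooth, 10, 60))   # reaches the summit: 50
print(blind_search(smooth, 10, 60))  # usually stalls well below it
```

On a jagged or deceptive landscape the ranking can reverse, which is all the averaging argument needs.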

Now, Darwinism can be thought of as a search algorithm. Given a problem—adapting to a new disease, for instance—a population uses the Darwinian algorithm of random mutation plus natural selection to search for a solution (in this case, disease resistance). But, according to Dembski, the N.F.L. theorems prove that this Darwinian algorithm is no better than any other when confronting all possible problems. It follows that, over all, Darwinism is no better than blind search, a process of utterly random change unaided by any guiding force like natural selection. Since we don’t expect blind change to build elaborate machines showing an exquisite coördination of parts, we have no right to expect Darwinism to do so, either. Attempts to sidestep this problem by, say, carefully constraining the class of challenges faced by organisms inevitably involve sneaking in the very kind of order that we’re trying to explain—something Dembski calls the displacement problem. In the end, he argues, the N.F.L. theorems and the displacement problem mean that there’s only one plausible source for the design we find in organisms: intelligence. Although Dembski is somewhat noncommittal, he seems to favor a design theory in which an intelligent agent programmed design into early life, or even into the early universe. This design then unfolded through the long course of evolutionary time, as microbes slowly morphed into man.

Dembski’s arguments have been met with tremendous enthusiasm in the I.D. movement. In part, that’s because an innumerate public is easily impressed by a bit of mathematics. Also, when Dembski is wielding his equations, he gets to play the part of the hard scientist busily correcting the errors of those soft-headed biologists. (Evolutionary biology actually features an extraordinarily sophisticated body of mathematical theory, a fact not widely known because neither of evolution’s great popularizers—Richard Dawkins and the late Stephen Jay Gould—did much math.) Despite all the attention, Dembski’s mathematical claims about design and Darwin are almost entirely beside the point.

The most serious problem in Dembski’s account involves specified complexity. Organisms aren’t trying to match any “independently given pattern”: evolution has no goal, and the history of life isn’t trying to get anywhere. If building a sophisticated structure like an eye increases the number of children produced, evolution may well build an eye. But if destroying a sophisticated structure like the eye increases the number of children produced, evolution will just as happily destroy the eye. Species of fish and crustaceans that have moved into the total darkness of caves, where eyes are both unnecessary and costly, often have degenerate eyes, or eyes that begin to form only to be covered by skin—crazy contraptions that no intelligent agent would design. Despite all the loose talk about design and machines, organisms aren’t striving to realize some engineer’s blueprint; they’re striving (if they can be said to strive at all) only to have more offspring than the next fellow.

Another problem with Dembski’s arguments concerns the N.F.L. theorems. Recent work shows that these theorems don’t hold in the case of co-evolution, when two or more species evolve in response to one another. And most evolution is surely co-evolution. Organisms do not spend most of their time adapting to rocks; they are perpetually challenged by, and adapting to, a rapidly changing suite of viruses, parasites, predators, and prey. A theorem that doesn’t apply to these situations is a theorem whose relevance to biology is unclear. As it happens, David Wolpert, one of the authors of the N.F.L. theorems, recently denounced Dembski’s use of those theorems as “fatally informal and imprecise.” Dembski’s apparent response has been a tactical retreat. In 2002, Dembski triumphantly proclaimed, “The No Free Lunch theorems dash any hope of generating specified complexity via evolutionary algorithms.” Now he says, “I certainly never argued that the N.F.L. theorems provide a direct refutation of Darwinism.”

Those of us who have argued with I.D. in the past are used to such shifts of emphasis. But it’s striking that Dembski’s views on the history of life contradict Behe’s. Dembski believes that Darwinism is incapable of building anything interesting; Behe seems to believe that, given a cell, Darwinism might well have built you and me. Although proponents of I.D. routinely inflate the significance of minor squabbles among evolutionary biologists (did the peppered moth evolve dark color as a defense against birds or for other reasons?), they seldom acknowledge their own, often major differences of opinion. In the end, it’s hard to view intelligent design as a coherent movement in any but a political sense.

It’s also hard to view it as a real research program. Though people often picture science as a collection of clever theories, scientists are generally staunch pragmatists: to scientists, a good theory is one that inspires new experiments and provides unexpected insights into familiar phenomena. By this standard, Darwinism is one of the best theories in the history of science: it has produced countless important experiments (let’s re-create a natural species in the lab—yes, that’s been done) and sudden insight into once puzzling patterns (that’s why there are no native land mammals on oceanic islands). In the nearly ten years since the publication of Behe’s book, by contrast, I.D. has inspired no nontrivial experiments and has provided no surprising insights into biology. As the years pass, intelligent design looks less and less like the science it claimed to be and more and more like an extended exercise in polemics.

In 1999, a document from the Discovery Institute was posted, anonymously, on the Internet. This Wedge Document, as it came to be called, described not only the institute’s long-term goals but its strategies for accomplishing them. The document begins by labelling the idea that human beings are created in the image of God “one of the bedrock principles on which Western civilization was built.” It goes on to decry the catastrophic legacy of Darwin, Marx, and Freud—the alleged fathers of a “materialistic conception of reality” that eventually “infected virtually every area of our culture.” The mission of the Discovery Institute’s scientific wing is then spelled out: “nothing less than the overthrow of materialism and its cultural legacies.” It seems fair to conclude that the Discovery Institute has set its sights a bit higher than, say, reconstructing the origins of the bacterial flagellum.

The intelligent-design community is usually far more circumspect in its pronouncements. This is not to say that it eschews discussion of religion; indeed, the intelligent-design literature regularly insists that Darwinism represents a thinly veiled attempt to foist a secular religion—godless materialism—on Western culture. As it happens, the idea that Darwinism is yoked to atheism, though popular, is also wrong. Of the five founding fathers of twentieth-century evolutionary biology—Ronald Fisher, Sewall Wright, J. B. S. Haldane, Ernst Mayr, and Theodosius Dobzhansky—one was a devout Anglican who preached sermons and published articles in church magazines, one a practicing Unitarian, one a dabbler in Eastern mysticism, one an apparent atheist, and one a member of the Russian Orthodox Church and the author of a book on religion and science. Pope John Paul II himself acknowledged, in a 1996 address to the Pontifical Academy of Sciences, that new research “leads to the recognition of the theory of evolution as more than a hypothesis.” Whatever larger conclusions one thinks should follow from Darwinism, the historical fact is that evolution and religion have often coexisted. As the philosopher Michael Ruse observes, “It is simply not the case that people take up evolution in the morning, and become atheists as an encore in the afternoon.”

Biologists aren’t alarmed by intelligent design’s arrival in Dover and elsewhere because they have all sworn allegiance to atheistic materialism; they’re alarmed because intelligent design is junk science. Meanwhile, more than eighty per cent of Americans say that God either created human beings in their present form or guided their development. As a succession of intelligent-design proponents appeared before the Kansas State Board of Education earlier this month, it was possible to wonder whether the movement’s scientific coherence was beside the point. Intelligent design has come this far by faith. ♦


Post by danas » 01/08/2007 19:08


Why Money Won't Buy Fat: In rich countries, the rich get rich and the poor get fat.
By Atul Gawande
Slate
Posted Friday, Dec. 25, 1998, at 3:30 AM ET


If you have a job like mine, in which a reasonably broad swath of American society presents itself to you without any clothes on, this question no doubt has popped into your head: Why are the poor so fat? That obesity is a characteristic problem of affluent countries surprises no one. But why is it also characteristic that obesity increases so dramatically as you go down the social scale?

This, of course, is not the case in developing nations. In countries such as India, where my parents emigrated from, being overweight is directly related to wealth and status and is regarded as attractive. Thus, my father's rural family of farmers is thin and wiry, while the elders in my mother's urban clan of higher caste and wealthier stock are as plump as can be. When my sister and I, two scrawny specimens of the American "upper-middle class," visit, we always evoke a good deal of concern from relatives.

The first report I found to document the effect class has on obesity was a 1965 study of midtown Manhattan residents. The researchers found obesity was six times more common among women in the bottom third of the social scale than among those in the top third. Subsequent studies, from California to New Zealand, confirm these findings. In a 1996 study in Minnesota's Twin Cities, women earning under $10,000 a year weighed, on average, 20 pounds more than women earning over $40,000 a year. In advanced nations, low income is a more powerful predictor of obesity than any single factor except age, though this relationship is weaker in men than in women. In men, height is associated more strongly with status. On average, in developed and undeveloped countries alike, richer men are taller.

You'd think weight would be like height. More money means more food. So more food should mean more fat, just as it means more height, right? And, in fact, in children this is exactly the case--at young ages, certainly under age 6 in the United States, lower-income kids are less likely to be obese than higher-income kids. By the time they are adults, however, the relationship has inverted.

What's going on? As my epidemiologist friends point out to me, when two things correlate, there are always three possible explanations. One leads to the other. The other leads to the one. Or some third thing is driving both.

The downward mobility hypothesis. One possibility is that obesity leads to lower income. Certainly the obese, particularly obese women, face severe disadvantages in both the job and marriage markets. And the evidence that this produces downward mobility is distressingly strong. Almost half the women in the 1965 Manhattan study belonged to a different social class than their parents, and those who had moved down were significantly fatter than those who had moved up. More recently, a landmark national study led by Steven Gortmaker, a Harvard researcher, tracked over 8,000 people from age 18 to 25. It found the heaviest 5 percent of women were half as likely to get married and twice as likely to become impoverished as others. Obesity affected a woman's economic prospects more than even chronic illness. In men, however, shortness led to downward mobility. One foot less of height doubled a man's likelihood of poverty. But as Gortmaker points out, downward mobility explains only a small part of the observed relationship of income to obesity. That makes sense. After all, the gap also exists in countries such as Britain, where social class is more rigid than it is here.

The genetic explanation. Could genetics be an outside factor driving both obesity and poverty? The so-called "Danish adoption study" lent some credence to the idea, finding that the obesity and social status of adoptees depended on the social status not only of their adoptive parents but also of their biological parents. The study inferred that parents can give their offspring genetic traits that tip the scales toward both heavier weight and lower status. Even so, inheritance accounted for at most a small part of the stark effect of income on obesity.

The gluttony and sloth hypothesis. So downward mobility and genes matter somewhat but, ultimately, we're left with the obvious explanation that in affluent societies the lower the income, the more people eat unhealthily, sit around, or both. Unfortunately, measuring this directly is notoriously difficult. Most people--especially the obese--fib about diet and exercise and, when observed, tend to change their behavior. The differences you're looking for are also quite small--a single calorie of extra daily intake starting in childhood can translate into 10 extra pounds by the time you're 25. Nonetheless, there's a good deal of indirect evidence--for example, TV watching is linked to eating more and exercising less, and rises substantially as income lowers. Given the weakness of other explanations, few experts doubt that lifestyle is the most important one.

The question is why. It's as if avoiding obesity were enormously expensive. But it's not--exercise is free, and eating less junk food and meat and more produce saves money. Rather, for the 30 percent to 70 percent with a genetic predisposition to obesity, it's just horrendously difficult. Being motivated is vital, and motivation is what seems to differ by class.

The 1996 Twin Cities study, for example, found marked differences in weight concern. Although women earning under $10,000 were no less successful when they dieted than women earning over $40,000, they were one-third less likely to diet in the first place. They had a greater tolerance for weight gain, saying it would take a 20-pound gain before they took action, as opposed to the 10-pound gain that would trigger action in higher-income women. They even weighed themselves less often (three times a month vs. seven times a month).

This cultural-differences explanation certainly accounts for why anorexia nervosa is mainly a disease of higher-income girls and young women. Since obese men are less stigmatized, it may also explain why wealthier men are not that much less obese than poorer men.

Still, if thinness really becomes the ideal in every affluent culture--the way plumpness is in poor societies--it's curious. Is this simply the nature of status--the rich will always find a way to distinguish themselves from the poor? Or does richness breed an instinct for longevity? Whatever the case, I'll know India has finally risen from Third World status when I go back to visit my relatives and the first few words they say are not "Eat, eat, skinny boy."


Post by danas » 01/08/2007 21:10


THE WEEDS SHALL INHERIT THE EARTH
Tallying the losses of Earth's animals and plants
by David Quammen


Hope is a duty from which paleontologists are exempt. Their job is to take the long view, the cold and stony view, of triumphs and catastrophes in the history of life. They study teeth, tree trunks, leaves, pollen, and other biological relics, and from them they attempt to discern the lost secrets of time, the big patterns of stasis and change, the trends of innovation and adaptation and refinement and decline that have blown like sea winds among ancient creatures in ancient ecosystems. Although life is their subject, death and burial supply all their data. They're the coroners of biology. This gives to paleontologists a certain distance, a hyperopic perspective beyond the reach of anxiety over outcomes of the struggles they chronicle. If hope is the thing with feathers, as Emily Dickinson said, then it's good to remember that feathers don't generally fossilize well. In lieu of hope and despair, paleontologists have a highly developed sense of cyclicity. That's why I recently went to Chicago, with a handful of urgently grim questions, and called on a paleontologist named David Jablonski. I wanted answers unvarnished with obligatory hope.

Jablonski is a big-pattern man, a macroevolutionist, who works fastidiously from the particular to the very broad. He's an expert on the morphology and distribution of marine bivalves and gastropods--or clams and snails, as he calls them when speaking casually. He sifts through the record of those mollusk lineages, preserved in rock and later harvested into museum drawers, to extract ideas about the origin of novelty. His attention roams back through 600 million years of time. His special skill involves framing large, resonant questions that can be answered with small, lithified clamshells. For instance: By what combinations of causal factor and sheer chance have the great evolutionary innovations arisen? How quickly have those innovations taken hold? How long have they abided? He's also interested in extinction, the converse of abidance, the yang to evolution's yin. Why do some species survive for a long time, he wonders, whereas others die out much sooner? And why has the rate of extinction--low throughout most of Earth's history--spiked upward cataclysmically on just a few occasions? How do those cataclysmic episodes, known in the trade as mass extinctions, differ in kind as well as degree from the gradual process of species extinction during the millions of years between? Can what struck in the past strike again?

The concept of mass extinction implies a biological crisis that spanned large parts of the planet and, in a relatively short time, eradicated a sizable number of species from a variety of groups. There's no absolute threshold of magnitude, and dozens of different episodes in geologic history might qualify, but five big ones stand out: Ordovician, Devonian, Permian, Triassic, Cretaceous. The Ordovician extinction, 439 million years ago, entailed the disappearance of roughly 85 percent of marine animal species--and that was before there were any animals on land. The Devonian extinction, 367 million years ago, seems to have been almost as severe. About 245 million years ago came the Permian extinction, the worst ever, claiming 95 percent of all known animal species and therefore almost wiping out the animal kingdom altogether. The Triassic, 208 million years ago, was bad again, though not nearly so bad as the Permian. The most recent was the Cretaceous extinction (sometimes called the K-T event because it defines the boundary between two geologic periods, with K for Cretaceous, never mind why, and T for Tertiary), familiar even to schoolchildren because it ended the age of dinosaurs. Less familiarly, the K-T event also brought extinction of the marine reptiles and the ammonites, as well as major losses of species among fish, mammals, amphibians, sea urchins, and other groups, totaling 76 percent of all species. In between these five episodes occurred some lesser mass extinctions, and throughout the intervening lulls extinction continued, too--but at a much slower pace, known as the background rate, claiming only about one species in any major group every million years. At the background rate, extinction is infrequent enough to be counterbalanced by the evolution of new species. Each of the five major episodes, in contrast, represents a drastic net loss of species diversity, a deep trough of biological impoverishment from which Earth only slowly recovered. How slowly? How long is the lag between a nadir of impoverishment and a recovery to ecological fullness? That's another of Jablonski's research interests. His rough estimates run to 5 or 10 million years. What drew me to this man's work, and then to his doorstep, were his special competence on mass extinctions and his willingness to discuss the notion that a sixth one is in progress now.

Some people will tell you that we as a species, Homo sapiens, the savvy ape, all 5.9 billion of us in our collective impact, are destroying the world. Me, I won't tell you that, because "the world" is so vague, whereas what we are or aren't destroying is quite specific. Some people will tell you that we are rampaging suicidally toward a degree of global wreckage that will result in our own extinction. I won't tell you that either. Some people say that the environment will be the paramount political and social concern of the twenty-first century, but what they mean by "the environment" is anyone's guess. Polluted air? Polluted water? Acid rain? A frayed skein of ozone over Antarctica? Greenhouse gases emitted by smokestacks and cars? Toxic wastes? None of these concerns is the big one, paleontological in scope, though some are more closely entangled with it than others. If the world's air is clean for humans to breathe but supports no birds or butterflies, if the world's waters are pure for humans to drink but contain no fish or crustaceans or diatoms, have we solved our environmental problems? Well, I suppose so, at least as environmentalism is commonly construed. That clumsy, confused, and presumptuous formulation "the environment" implies viewing air, water, soil, forests, rivers, swamps, deserts, and oceans as merely a milieu within which something important is set: human life, human history. But what's at issue in fact is not an environment; it's a living world.

Here instead is what I'd like to tell you: The consensus among conscientious biologists is that we're headed into another mass extinction, a vale of biological impoverishment commensurate with the big five. Many experts remain hopeful that we can brake that descent, but my own view is that we're likely to go all the way down. I visited David Jablonski to ask what we might see at the bottom.

On a hot summer morning, Jablonski is busy in his office on the second floor of the Hinds Geophysical Laboratory at the University of Chicago. It's a large open room furnished in tall bookshelves, tables piled high with books, stacks of paper standing knee-high off the floor. The walls are mostly bare, aside from a chart of the geologic time scale, a clipped cartoon of dancing tyrannosaurs in red sneakers, and a poster from a Rodin exhibition, quietly appropriate to the overall theme of eloquent stone. Jablonski is a lean forty-five-year-old man with a dark full beard. Educated at Columbia and Yale, he came to Chicago in 1985 and has helped make its paleontology program perhaps the country's best. Although in not many hours he'll be leaving on a trip to Alaska, he has been cordial about agreeing to this chat. Stepping carefully, we move among the piled journals, reprints, and photocopies. Every pile represents a different research question, he tells me. "I juggle a lot of these things all at once because they feed into one another." That's exactly why I've come: for a little rigorous intellectual synergy.

Let's talk about mass extinctions, I say. When did someone first realize that the concept might apply to current events, not just to the Permian or the Cretaceous?

He begins sorting through memory, back to the early 1970s, when the full scope of the current extinction problem was barely recognized. Before then, some writers warned about "vanishing wildlife" and "endangered species," but generally the warnings were framed around individual species with popular appeal, such as the whooping crane, the tiger, the blue whale, the peregrine falcon. During the 1970s a new form of concern broke forth--call it wholesale concern--from the awareness that unnumbered millions of narrowly endemic (that is, unique and localized) species inhabit the tropical forests and that those forests were quickly being cut. In 1976, a Nairobi-based biologist named Norman Myers published a paper in Science on that subject; in passing, he also compared current extinctions with the rate during what he loosely called "the 'great dying' of the dinosaurs." David Jablonski, then a graduate student, read Myers's paper and tucked a copy into his files. This was the first time, as Jablonski recalls, that anyone tried to quantify the rate of present-day extinctions. "Norman was a pretty lonely guy, for a long time, on that," he says. In 1979, Myers published The Sinking Ark, explaining the problem and offering some rough projections. Between the years 1600 and 1900, by his tally, humanity had caused the extinction of about 75 known species, almost all of them mammals and birds. Between 1900 and 1979, humans had extinguished about another 75 known species, representing a rate well above the rate of known losses during the Cretaceous extinction. But even more worrisome was the inferable rate of unrecorded extinctions, recent and now impending, among plants and animals still unidentified by science. Myers guessed that 25,000 plant species presently stood jeopardized, and maybe hundreds of thousands of insects. "By the time human communities establish ecologically sound life-styles, the fallout of species could total several million." Rereading that sentence now, I'm struck by the reckless optimism of his assumption that human communities eventually will establish "ecologically sound life-styles."

Although this early stab at quantification helped to galvanize public concern, it also became a target for a handful of critics, who used the inexactitude of the numbers to cast doubt on the reality of the problem. Most conspicuous of the naysayers was Julian Simon, an economist at the University of Maryland, who argued bullishly that human resourcefulness would solve all problems worth solving, of which a decline in diversity of tropical insects wasn't one.

In a 1986 issue of New Scientist, Simon rebutted Norman Myers, arguing from his own construal of select data that there was "no obvious recent downward trend in world forests--no obvious 'losses' at all, and certainly no 'near catastrophic' loss." He later co-authored an op-ed piece in the New York Times under the headline "Facts, Not Species, Are Periled." Again he went after Myers, asserting "a complete absence of evidence for the claim that the extinction of species is going up rapidly--or even going up at all." Simon's worst disservice to logic in that statement and others was the denial that inferential evidence of wholesale extinction counts for anything. Of inferential evidence there was an abundance--for example, from the Centinela Ridge in a cloud-forest zone of western Ecuador, where in 1978 the botanist Alwyn Gentry and a colleague found thirty-eight species of narrowly endemic plants, including several with mysteriously black leaves. Before Gentry could get back, Centinela Ridge had been completely deforested, the native plants replaced by cacao and other crops. As for inferential evidence generally, we might do well to remember what it contributes to our conviction that approximately 105,000 Japanese civilians died in the atomic bombing of Hiroshima. The city's population fell abruptly on August 6, 1945, but there was no one-by-one identification of 105,000 bodies.

Nowadays a few younger writers have taken Simon's line, pooh-poohing the concern over extinction. As for Simon himself, who died earlier this year, perhaps the truest sentence he left behind was, "We must also try to get more reliable information about the number of species that might be lost with various changes in the forests." No one could argue.

But it isn't easy to get such information. Field biologists tend to avoid investing their precious research time in doomed tracts of forest. Beyond that, our culture offers little institutional support for the study of narrowly endemic species in order to register their existence before their habitats are destroyed. Despite these obstacles, recent efforts to quantify rates of extinction have supplanted the old warnings. These new estimates use satellite imaging and improved on-the-ground data about deforestation, records of the many human-caused extinctions on islands, and a branch of ecological theory called island biogeography, which connects documented island cases with the mainland problem of forest fragmentation. These efforts differ in particulars, reflecting how much uncertainty is still involved, but their varied tones form a chorus of consensus. I'll mention three of the most credible.

W.V. Reid, of the World Resources Institute, in 1992 gathered numbers on the average annual deforestation in each of sixty-three tropical countries during the 1980s and from them charted three different scenarios (low, middle, high) of presumable forest loss by the year 2040. He chose a standard mathematical model of the relationship between decreasing habitat area and decreasing species diversity, made conservative assumptions about the crucial constant, and ran his various deforestation estimates through the model. Reid's calculations suggest that by the year 2040, between 17 and 35 percent of tropical forest species will be extinct or doomed to be. At either the high or the low end of this range, it would amount to a bad loss, though not as bad as the K-T event. Then again, 2040 won't mark the end of human pressures on biological diversity or landscape.
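The machinery behind a projection like Reid's can be sketched in a few lines. What follows is a minimal illustration, not Reid's actual model: it assumes the standard power-law species-area relationship, S = cA^z, with an assumed conservative exponent z = 0.25 and made-up remaining-habitat fractions for the three scenarios.

```python
# Species-area relationship: S = c * A**z. If habitat shrinks from area A0
# to A1, the fraction of species expected to survive is (A1/A0)**z, so the
# fraction expected to be lost is 1 - (A1/A0)**z.
def fraction_lost(habitat_remaining, z=0.25):
    """Fraction of species lost, given the fraction of habitat remaining.

    habitat_remaining: remaining area as a fraction of the original (0..1).
    z: species-area exponent; values around 0.15-0.35 are conventional.
    """
    return 1.0 - habitat_remaining ** z

# Hypothetical low/middle/high deforestation scenarios for 2040:
for label, remaining in [("low", 0.50), ("middle", 0.40), ("high", 0.25)]:
    print(f"{label}: {fraction_lost(remaining):.0%} of species lost")
```

With these assumed inputs the losses come out between roughly 16 and 29 percent, in the same neighborhood as Reid's 17-to-35-percent range; the point is only that, under this model, large habitat losses translate into large species losses even with a modest exponent.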

Robert M. May, an ecologist at Oxford, co-authored a similar effort in 1995. May and his colleagues noted the five causal factors that account for most extinctions: habitat destruction, habitat fragmentation, overkill, invasive species, and secondary effects cascading through an ecosystem from other extinctions. Each of those five is more intricate than it sounds. For instance, habitat fragmentation dooms species by consigning them to small, island-like parcels of habitat surrounded by an ocean of human impact, and then subjecting them to the same jeopardies (small population size, acted upon by environmental fluctuation, catastrophe, inbreeding, bad luck, and cascading effects) that make island species especially vulnerable to extinction. May's team concluded that most extant bird and mammal species can expect average life spans of between 200 and 400 years. That's equivalent to saying that about a third of one percent will go extinct each year until some unimaginable end point is reached. "Much of the diversity we inherited," May and his co-authors wrote, "will be gone before humanity sorts itself out."
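The step from life span to annual rate is simple arithmetic: if the average expected life span of a species is T years, then on the order of 1/T of extant species disappear each year. A quick check of the "third of one percent" figure:

```python
# An average species life span of T years implies, roughly, an annual
# extinction rate of 1/T of the extant species pool.
for life_span_years in (200, 300, 400):
    annual_rate = 1.0 / life_span_years
    print(f"{life_span_years} yr -> {annual_rate:.2%} per year")
```

The midpoint, 300 years, gives 1/300, or about 0.33 percent per year, which is May's third of one percent.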

The most recent estimate comes from Stuart L. Pimm and Thomas M. Brooks, ecologists at the University of Tennessee. Using a combination of published data on bird species lost from forest fragments and field data they gathered themselves, Pimm and Brooks concluded that 50 percent of the world's forest-bird species will be doomed to extinction by deforestation occurring over the next half century. And birds won't be the sole victims. "How many species will be lost if current trends continue?" the two scientists asked. "Somewhere between one third and two thirds of all species--easily making this event as large as the previous five mass extinctions the planet has experienced."

Jablonski, who started down this line of thought in 1978, offers me a reminder about the conceptual machinery behind such estimates. "All mathematical models," he says cheerily, "are wrong. They are approximations. And the question is: Are they usefully wrong, or are they meaninglessly wrong?" Models projecting present and future species loss are useful, he suggests, if they help people realize that Homo sapiens is perturbing Earth's biosphere to a degree it hasn't often been perturbed before. In other words, that this is a drastic experiment in biological drawdown we're engaged in, not a continuation of routine.

Behind the projections of species loss lurk a number of crucial but hard-to-plot variables, among which two are especially weighty: continuing landscape conversion and the growth curve of human population.

Landscape conversion can mean many things: draining wetlands to build roads and airports, turning tallgrass prairies under the plow, fencing savanna and overgrazing it with domestic stock, cutting second-growth forest in Vermont and consigning the land to ski resorts or vacation suburbs, slash-and-burn clearing of Madagascar's rain forest to grow rice on wet hillsides, industrial logging in Borneo to meet Japanese plywood demands. The ecologist John Terborgh and a colleague, Carel P. van Schaik, have described a four-stage process of landscape conversion that they call the land-use cascade. The successive stages are: 1) wildlands, encompassing native floral and faunal communities altered little or not at all by human impact; 2) extensively used areas, such as natural grasslands lightly grazed, savanna kept open for prey animals by infrequent human-set fires, or forests sparsely worked by slash-and-burn farmers at low density; 3) intensively used areas, meaning crop fields, plantations, village commons, travel corridors, urban and industrial zones; and finally 4) degraded land, formerly useful but now abused beyond value to anybody. Madagascar, again, would be a good place to see all four stages, especially the terminal one. Along a thin road that leads inland from a town called Mahajanga, on the west coast, you can gaze out over a vista of degraded land--chalky red hills and gullies, bare of forest, burned too often by grazers wanting a short-term burst of pasturage, sparsely covered in dry grass and scrubby fan palms, eroded starkly, draining red mud into the Betsiboka River, supporting almost no human presence. Another showcase of degraded land--attributable to fuelwood gathering, overgrazing, population density, and decades of apartheid--is the Ciskei homeland in South Africa. Or you might look at overirrigated crop fields left ruinously salinized in the Central Valley of California.

Among all forms of landscape conversion, pushing tropical forest from the wildlands category to the intensively used category has the greatest impact on biological diversity. You can see it in western India, where a spectacular deciduous ecosystem known as the Gir forest (home to the last surviving population of the Asiatic lion, Panthera leo persica) is yielding along its ragged edges to new mango orchards, peanut fields, and lime quarries for cement. You can see it in the central Amazon, where big tracts of rain forest have been felled and burned, in a largely futile attempt (encouraged by misguided government incentives, now revoked) to pasture cattle on sun-hardened clay. According to the United Nations Food and Agriculture Organization, the rate of deforestation in tropical countries has increased (contrary to Julian Simon's claim) since the 1970s, when Myers made his estimates. During the 1980s, as the FAO reported in 1993, that rate reached 15.4 million hectares (a hectare being the metric equivalent of 2.5 acres) annually. South America was losing 6.2 million hectares a year. Southeast Asia was losing less in area but more proportionally: 1.6 percent of its forests yearly. In terms of cumulative loss, as reported by other observers, the Atlantic coastal forest of Brazil is at least 95 percent gone. The Philippines, once nearly covered with rain forest, has lost 92 percent. Costa Rica has continued to lose forest, despite that country's famous concern for its biological resources. The richest of old-growth lowland forests in West Africa, India, the Greater Antilles, Madagascar, and elsewhere have been reduced to less than a tenth of their original areas. By the middle of the next century, if those trends continue, tropical forest will exist virtually nowhere outside of protected areas--that is, national parks, wildlife refuges, and other official reserves.

How many protected areas will there be? The present worldwide total is about 9,800, encompassing 6.3 percent of the planet's land area. Will those parks and reserves retain their full biological diversity? No. Species with large territorial needs will be unable to maintain viable population levels within small reserves, and as those species die away their absence will affect others. The disappearance of big predators, for instance, can release limits on medium-size predators and scavengers, whose overabundance can drive still other species (such as ground-nesting birds) to extinction. This has already happened in some habitat fragments, such as Panama's Barro Colorado Island, and been well documented in the literature of island biogeography. The lesson of fragmented habitats is Yeatsian: Things fall apart.

Human population growth will make a bad situation worse by putting ever more pressure on all available land.

Population growth rates have declined in many countries within the past several decades, it's true. But world population is still increasing, and even if average fertility suddenly, magically, dropped to 2.0 children per female, population would continue to increase (on the momentum of birth rate exceeding death rate among a generally younger and healthier populace) for some time. The annual increase is now 80 million people, with most of that increment coming in less developed countries. The latest long-range projections from the Population Division of the United Nations, released earlier this year, are slightly down from previous long-term projections in 1992 but still point toward a problematic future. According to the U.N.'s middle estimate (the most probable? hard to know) among seven fertility scenarios, human population will rise from the present 5.9 billion to 9.4 billion by the year 2050, then to 10.8 billion by 2150, before leveling off there at the end of the twenty-second century. If it happens that way, about 9.7 billion people will inhabit the countries included within Africa, Latin America, the Caribbean, and Asia. The total population of those countries--most of which are in the low latitudes, many of which are less developed, and which together encompass a large portion of Earth's remaining tropical forest--will be more than twice what it is today. Those 9.7 billion people, crowded together in hot places, forming the ocean within which tropical nature reserves are insularized, will constitute 90 percent of humanity. Anyone interested in the future of biological diversity needs to think about the pressures these people will face, and the pressures they will exert in return.

We also need to remember that the impact of Homo sapiens on the biosphere can't be measured simply in population figures. As the population expert Paul Harrison pointed out in his book The Third Revolution, that impact is a product of three variables: population size, consumption level, and technology. Although population growth is highest in less-developed countries, consumption levels are generally far higher in the developed world (for instance, the average American consumes about ten times as much energy as the average Chilean, and about a hundred times as much as the average Angolan), and also higher among the affluent minority in any country than among the rural poor. High consumption exacerbates the impact of a given population, whereas technological developments may either exacerbate it further (think of the automobile, the air conditioner, the chainsaw) or mitigate it (as when a technological innovation improves efficiency for an established function). All three variables play a role in every case, but a directional change in one form of human impact--upon air pollution from fossil-fuel burning, say, or fish harvest from the seas--can be mainly attributable to a change in one variable, with only minor influence from the other two. Sulfur-dioxide emissions in developed countries fell dramatically during the 1970s and '80s, due to technological improvements in papermaking and other industrial processes; those emissions would have fallen still farther if not for increased population (accounting for 25 percent of the upward vector) and increased consumption (accounting for 75 percent). Deforestation, in contrast, is a directional change that has been mostly attributable to population growth.
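Harrison's three-variable decomposition is often summarized as a product, I = P x A x T (impact = population x per-capita consumption x a technology factor). A toy sketch, using only the consumption ratios quoted above; the function, units, and absolute numbers here are illustrative assumptions, not Harrison's data:

```python
# Impact as a product of population, per-capita consumption, and a
# technology factor (I = P * A * T). Units are arbitrary; only the
# ratios matter in this sketch.
def impact(population, consumption_per_capita, technology_factor=1.0):
    return population * consumption_per_capita * technology_factor

# The text's ratios: an average American uses ~10x the energy of an
# average Chilean and ~100x that of an average Angolan. Per capita:
print(impact(1, 100) / impact(1, 1))    # American vs. Angolan -> 100.0
print(impact(1, 100) / impact(1, 10))   # American vs. Chilean -> 10.0
```

The multiplicative form is what makes Harrison's point: a small affluent population can outweigh a large poor one, and a technology factor can push the product in either direction.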

According to Harrison's calculations, population growth accounted for 79 percent of the deforestation in less-developed countries between 1973 and 1988. Some experts would argue with those calculations, no doubt, and insist on redirecting our concern toward the role that distant consumers, wood-products buyers among slow-growing but affluent populations of the developed nations, play in driving the destruction of Borneo's dipterocarp forests or the hardwoods of West Africa. Still, Harrison's figures point toward an undeniable reality: more total people will need more total land. By his estimate, the minimum land necessary for food growing and other human needs (such as water supply and waste dumping) amounts to one fifth of a hectare per person. Given the U.N.'s projected increase of 4.9 billion souls before the human population finally levels off, that comes to another billion hectares of human-claimed landscape, a billion hectares less forest--even without allowing for any further deforestation by the current human population, or for any further loss of agricultural land to degradation. A billion hectares--in other words, 10 million square kilometers--is, by a conservative estimate, well more than half the remaining forest area in Africa, Latin America, and Asia. This raises the vision of a very exigent human population pressing snugly around whatever patches of natural landscape remain.
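Harrison's land arithmetic is easy to verify. A back-of-the-envelope check, using the figures in the text (the variable names are mine):

```python
# 4.9 billion additional people, each needing at least one fifth of a
# hectare for food growing, water supply, and waste dumping:
added_people = 4.9e9
land_per_person_ha = 0.2
needed_ha = added_people * land_per_person_ha   # hectares
needed_km2 = needed_ha / 100.0                  # 100 hectares per km^2
print(f"{needed_ha:.1e} ha, i.e. about {needed_km2:.1e} km^2")
```

That comes to 9.8 x 10^8 hectares, which rounds to the billion hectares, or 10 million square kilometers, cited above.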

Add to that vision the extra, incendiary aggravation of poverty. According to a recent World Bank estimate, about 30 percent of the total population of less-developed countries lives in poverty. Alan Durning, in his 1992 book How Much Is Enough? The Consumer Society and the Fate of the Earth, puts it in a broader perspective when he says that the world's human population is divided among three "ecological classes": the consumers, the middle-income, and the poor. His consumer class includes those 1.1 billion fortunate people whose annual income per family member is more than $7,500. At the other extreme, the world's poor also number about 1.1 billion people--all from households with less than $700 annually per member. "They are mostly rural Africans, Indians, and other South Asians," Durning writes. "They eat almost exclusively grains, root crops, beans, and other legumes, and they drink mostly unclean water. They live in huts and shanties, they travel by foot, and most of their possessions are constructed of stone, wood, and other substances available from the local environment." He calls them the "absolute poor." It's only reasonable to assume that another billion people will be added to that class, mostly in what are now the less-developed countries, before population growth stabilizes. How will those additional billion, deprived of education and other advantages, interact with the tropical landscape? Not likely by entering information-intensive jobs in the service sector of the new global economy. Julian Simon argued that human ingenuity--and by extension, human population itself--is "the ultimate resource" for solving Earth's problems, transcending Earth's limits, and turning scarcity into abundance. But if all the bright ideas generated by a human population of 5.9 billion haven't yet relieved the desperate needfulness of 1.1 billion absolute poor, why should we expect that human ingenuity will do any better for roughly 2 billion poor in the future?

Other writers besides Durning have warned about this deepening class rift. Tom Athanasiou, in Divided Planet: The Ecology of Rich and Poor, sees population growth only exacerbating the division, and notes that governments often promote destructive schemes of transmigration and rain-forest colonization as safety valves for the pressures of land hunger and discontent. A young Canadian policy analyst named Thomas F. Homer-Dixon, author of several calm-voiced but frightening articles on the linkage between what he terms "environmental scarcity" and global sociopolitical instability, reports that the amount of cropland available per person is falling in the less-developed countries because of population growth and because millions of hectares "are being lost each year to a combination of problems, including encroachment by cities, erosion, depletion of nutrients, acidification, compacting and salinization and waterlogging from overirrigation." In the cropland pinch and other forms of environmental scarcity, Homer-Dixon foresees potential for "a widening gap" of two sorts--between demands on the state and its ability to deliver, and more basically between rich and poor. In conversation with the journalist Robert D. Kaplan, as quoted in Kaplan's book The Ends of the Earth, Homer-Dixon said it more vividly: "Think of a stretch limo in the potholed streets of New York City, where homeless beggars live. Inside the limo are the air-conditioned post-industrial regions of North America, Europe, the emerging Pacific Rim, and a few other isolated places, with their trade summitry and computer information highways. Outside is the rest of mankind, going in a completely different direction."

That direction, necessarily, will be toward ever more desperate exploitation of landscape. When you think of Homer-Dixon's stretch limo on those potholed urban streets, don't assume there will be room inside for tropical forests. Even Noah's ark only managed to rescue paired animals, not large parcels of habitat. The jeopardy of the ecological fragments that we presently cherish as parks, refuges, and reserves is already severe, due to both internal and external forces: internal, because insularity itself leads to ecological unraveling; and external, because those areas are still under siege by needy and covetous people. Projected forward into a future of 10.8 billion humans, of which perhaps 2 billion are starving at the periphery of those areas, while another 2 billion are living in a fool's paradise maintained by unremitting exploitation of whatever resources remain, that jeopardy increases to the point of impossibility. In addition, any form of climate change in the mid-term future, whether caused by greenhouse gases or by a natural flip-flop of climatic forces, is liable to change habitat conditions within a given protected area beyond the tolerance range for many species. If such creatures can't migrate beyond the park or reserve boundaries in order to chase their habitat needs, they may be "protected" from guns and chainsaws within their little island, but they'll still die.

We shouldn't take comfort in assuming that at least Yellowstone National Park will still harbor grizzly bears in the year 2150, that at least Royal Chitwan in Nepal will still harbor tigers, that at least Serengeti in Tanzania and Gir in India will still harbor lions. Those predator populations, and other species down the cascade, are likely to disappear. "Wildness" will be a word applicable only to urban turmoil. Lions, tigers, and bears will exist in zoos, period. Nature won't come to an end, but it will look very different.

The most obvious differences will be those I've already mentioned: tropical forests and other terrestrial ecosystems will be drastically reduced in area, and the fragmented remnants will stand tiny and isolated. Because of those two factors, plus the cascading secondary effects, plus an additional dire factor I'll mention in a moment, much of Earth's biological diversity will be gone. How much? That's impossible to predict confidently, but the careful guesses of Robert May, Stuart Pimm, and other biologists suggest losses reaching half to two thirds of all species. In the oceans, deepwater fish and shellfish populations will be drastically depleted by overharvesting, if not to the point of extinction then at least enough to cause more cascading consequences. Coral reefs and other shallow-water ecosystems will be badly stressed, if not devastated, by erosion and chemical runoff from the land. The additional dire factor is invasive species, the fifth of the five factors contributing to our current experiment in mass extinction.

That factor, even more than habitat destruction and fragmentation, is a symptom of modernity. Maybe you haven't heard much about invasive species, but in coming years you will. The ecologist Daniel Simberloff takes it so seriously that he recently committed himself to founding an institute on invasive biology at the University of Tennessee, and Interior Secretary Bruce Babbitt sounded the alarm last April in a speech to a weed-management symposium in Denver. The spectacle of a cabinet secretary denouncing an alien plant called purple loosestrife struck some observers as droll, but it wasn't as silly as it seemed. Forty years ago, the British ecologist Charles Elton warned prophetically in a little book titled The Ecology of Invasions by Animals and Plants that "we are living in a period of the world's history when the mingling of thousands of kinds of organisms from different parts of the world is setting up terrific dislocations in nature." Elton's word "dislocations" was nicely chosen to ring with a double meaning: species are being moved from one location to another, and as a result ecosystems are being thrown into disorder.

The problem dates back to when people began using ingenious new modes of conveyance (the horse, the camel, the canoe) to travel quickly across mountains, deserts and oceans, bringing with them rats, lice, disease microbes, burrs, dogs, pigs, goats, cats, cows, and other forms of parasitic, commensal, or domesticated creature. One immediate result of those travels was a wave of island-bird extinctions, claiming more than a thousand species, that followed oceangoing canoes across the Pacific and elsewhere. Having evolved in insular ecosystems free of predators, many of those species were flightless, unequipped to defend themselves or their eggs against ravenous mammals. Raphus cucullatus, a giant cousin of the pigeon lineage, endemic to Mauritius in the Indian Ocean and better known as the dodo, was only the most easily caricatured representative of this much larger pattern. Dutch sailors killed and ate dodos during the seventeenth century, but probably what guaranteed the extinction of Raphus cucullatus is that the European ships put ashore rats, pigs, and Macaca fascicularis, an opportunistic species of Asian monkey. Although commonly known as the crab-eating macaque, M. fascicularis will eat almost anything. The monkeys are still pestilential on Mauritius, hungry and daring and always ready to grab what they can, including raw eggs. But the dodo hasn't been seen since 1662.

The European age of discovery and conquest was also the great age of biogeography--that is, the study of what creatures live where, a branch of biology practiced by attentive travelers such as Carolus Linnaeus, Alexander von Humboldt, Charles Darwin, and Alfred Russel Wallace. Darwin and Wallace even made biogeography the basis of their discovery that species, rather than being created and plopped onto Earth by divine magic, evolve in particular locales by the process of natural selection. Ironically, the same trend of far-flung human travel that gave biogeographers their data also began to muddle and nullify those data, by transplanting the most ready and roguish species to new places and thereby delivering misery unto death for many other species. Rats and cats went everywhere, causing havoc in what for millions of years had been sheltered, less competitive ecosystems. The Asiatic chestnut blight and the European starling came to America; the American muskrat and the Chinese mitten crab got to Europe. Sometimes these human-mediated transfers were unintentional, sometimes merely shortsighted. Nostalgic sportsmen in New Zealand imported British red deer; European brown trout and coastal rainbows were planted in disregard of the native cutthroats of Rocky Mountain rivers. Prickly-pear cactus, rabbits, and cane toads were inadvisably welcomed to Australia. Goats went wild in the Galapagos. The bacterium that causes bubonic plague journeyed from China to California by way of a flea, a rat, and a ship. The Atlantic sea lamprey found its own way up into Lake Erie, but only after the Welland Canal gave it a bypass around Niagara Falls. Unintentional or otherwise, all these transfers had unforeseen consequences, which in many cases included the extinction of less competitive, less opportunistic native species.
The rosy wolfsnail, a small creature introduced onto Oahu for the purpose of controlling a larger and more obviously noxious species of snail, which was itself invasive, proved to be medicine worse than the disease; it became a fearsome predator upon native snails, of which twenty species are now gone. The Nile perch, a big predatory fish introduced into Lake Victoria in 1962 because it promised good eating, seems to have exterminated at least eighty species of smaller cichlid fishes that were native to the lake's Mwanza Gulf.

The problem is vastly amplified by modern shipping and air transport, which are quick and capacious enough to allow many more kinds of organism to get themselves transplanted into zones of habitat they never could have reached on their own. The brown tree snake, having hitchhiked aboard military planes from the New Guinea region near the end of World War II, has eaten most of the native forest birds of Guam. Hanta virus, first identified in Korea, burbles quietly in the deer mice of Arizona. Ebola will next appear who knows where. Apart from the frightening epidemiological possibilities, agricultural damages are the most conspicuous form of impact. One study, by the congressional Office of Technology Assessment, reports that in the United States 4,500 nonnative species have established free-living populations, of which about 15 percent cause severe harm; looking at just 79 of those species, the OTA documented $97 billion in damages. The lost value in Hawaiian snail species or cichlid diversity is harder to measure. But another report, from the U.N. Environmental Program, declares that almost 20 percent of the world's endangered vertebrates suffer from pressures (competition, predation, habitat transformation) created by exotic interlopers. Michael Soule, a biologist much respected for his work on landscape conversion and extinction, has said that invasive species may soon surpass habitat loss and fragmentation as the major cause of "ecological disintegration." Having exterminated Guam's avifauna, the brown tree snake has lately been spotted in Hawaii.

Is there a larger pattern to these invasions? What do fire ants, zebra mussels, Asian gypsy moths, tamarisk trees, melaleuca trees, kudzu, Mediterranean fruit flies, boll weevils and water hyacinths have in common with crab-eating macaques or Nile perch? Answer: They're weedy species, in the sense that animals as well as plants can be weedy. What that implies is a constellation of characteristics: They reproduce quickly, disperse widely when given a chance, tolerate a fairly broad range of habitat conditions, take hold in strange places, succeed especially in disturbed ecosystems, and resist eradication once they're established. They are scrappers, generalists, opportunists. They tend to thrive in human-dominated terrain because in crucial ways they resemble Homo sapiens: aggressive, versatile, prolific, and ready to travel. The city pigeon, a cosmopolitan creature derived from wild ancestry as a Eurasian rock dove (Columba livia) by way of centuries of pigeon fanciers whose coop-bred birds occasionally went AWOL, is a weed. So are those species that, benefiting from human impacts upon landscape, have increased grossly in abundance or expanded in their geographical scope without having to cross an ocean by plane or by boat--for instance, the coyote in New York, the raccoon in Montana, the white-tailed deer in northern Wisconsin or western Connecticut. The brown-headed cowbird, also weedy, has enlarged its range from the eastern United States into the agricultural Midwest at the expense of migratory songbirds. In gardening usage the word "weed" may be utterly subjective, indicating any plant you don't happen to like, but in ecological usage it has these firmer meanings. Biologists frequently talk of weedy species, meaning animals as well as plants.

Paleontologists, too, embrace the idea and even the term. Jablonski himself, in a 1991 paper published in Science, extrapolated from past mass extinctions to our current one and suggested that human activities are likely to take their heaviest toll on narrowly endemic species, while causing fewer extinctions among those species that are broadly adapted and broadly distributed. "In the face of ongoing habitat alteration and fragmentation," he wrote, "this implies a biota increasingly enriched in widespread, weedy species--rats, ragweed, and cockroaches--relative to the larger number of species that are more vulnerable and potentially more useful to humans as food, medicines, and genetic resources." Now, as we sit in his office, he repeats: "It's just a question of how much the world becomes enriched in these weedy species." Both in print and in talk he uses "enriched" somewhat caustically, knowing that the actual direction of the trend is toward impoverishment.

Regarding impoverishment, let's note another dark, interesting irony: that the two converse trends I've described--partitioning the world's landscape by habitat fragmentation, and unifying the world's landscape by global transport of weedy species--produce not converse results but one redoubled result, the further loss of biological diversity. Immersing myself in the literature of extinctions, and making dilettantish excursions across India, Madagascar, New Guinea, Indonesia, Brazil, Guam, Australia, New Zealand, Wyoming, the hills of Burbank, and other semi-wild places over the past decade, I've seen those redoubling trends everywhere, portending a near-term future in which Earth's landscape is threadbare, leached of diversity, heavy with humans, and "enriched" in weedy species. That's an ugly vision, but I find it vivid. Wildlife will consist of the pigeons and the coyotes and the white-tails, the black rats (Rattus rattus) and the brown rats (Rattus norvegicus) and a few other species of worldly rodent, the crab-eating macaques and the cockroaches (though, as with the rats, not every species--some are narrowly endemic, like the giant Madagascar hissing cockroach) and the mongooses, the house sparrows and the house geckos and the houseflies and the barn cats and the skinny brown feral dogs and a short list of additional species that play by our rules. Forests will be tiny insular patches existing on bare sufferance, much of their biological diversity (the big predators, the migratory birds, the shy creatures that can't tolerate edges, and many other species linked inextricably with those) long since decayed away. They'll essentially be tall woody gardens, not forests in the richer sense. 
Elsewhere the landscape will have its strips and swatches of green, but except on much-poisoned lawns and golf courses the foliage will be infested with cheatgrass and European buckthorn and spotted knapweed and Russian thistle and leafy spurge and salt meadow cordgrass and Bruce Babbitt's purple loosestrife. Having recently passed the great age of biogeography, we will have entered the age after biogeography, in that virtually everything will live virtually everywhere, though the list of species that constitute "everything" will be small. I see this world implicitly foretold in the U.N. population projections, the FAO reports on deforestation, the northward advance into Texas of Africanized honeybees, the rhesus monkeys that haunt the parapets of public buildings in New Delhi, and every fat gray squirrel on a bird feeder in England. Earth will be a different sort of place--soon, in just five or six human generations. My label for that place, that time, that apparently unavoidable prospect, is the Planet of Weeds. Its main consoling felicity, as far as I can imagine, is that there will be no shortage of crows.

Now we come to the question of human survival, a matter of some interest to many. We come to a certain fretful leap of logic that otherwise thoughtful observers seem willing, even eager to make: that the ultimate consequence will be the extinction of us. By seizing such a huge share of Earth's landscape, by imposing so wantonly on its providence and presuming so recklessly on its forgivingness, by killing off so many species, they say, we will doom our own species to extinction. This is a commonplace among the environmentally exercised. My quibbles with the idea are that it seems ecologically improbable and too optimistic. But it bears examining, because it's frequently offered as the ultimate argument against proceeding as we are.

Jablonski also has his doubts. Do you see Homo sapiens as a likely survivor, I ask him, or as a casualty? "Oh, we've got to be one of the most bomb-proof species on the planet," he says. "We're geographically widespread, we have a pretty remarkable reproductive rate, we're incredibly good at co-opting and monopolizing resources. I think it would take really serious, concerted effort to wipe out the human species." The point he's making is one that has probably already dawned on you: Homo sapiens itself is the consummate weed. Why shouldn't we survive, then, on the Planet of Weeds? But there's a wide range of possible circumstances, Jablonski reminds me, between the extinction of our species and the continued growth of human population, consumption, and comfort. "I think we'll be one of the survivors," he says, "sort of picking through the rubble." Besides losing all the pharmaceutical and genetic resources that lay hidden within those extinguished species, and all the spiritual and aesthetic values they offered, he foresees unpredictable levels of loss in many physical and biochemical functions that ordinarily come as benefits from diverse, robust ecosystems--functions such as cleaning and recirculating air and water, mitigating droughts and floods, decomposing wastes, controlling erosion, creating new soil, pollinating crops, capturing and transporting nutrients, damping short-term temperature extremes and longer-term fluctuations of climate, restraining outbreaks of pestiferous species, and shielding Earth's surface from the full brunt of ultraviolet radiation. Strip away the ecosystems that perform those services, Jablonski says, and you can expect grievous detriment to the reality we inhabit. "A lot of things are going to happen that will make this a crummier place to live--a more stressful place to live, a more difficult place to live, a less resilient place to live--before the human species is at any risk at all."
And maybe some of the new difficulties, he adds, will serve as incentive for major changes in the trajectory along which we pursue our aggregate self-interests. Maybe we'll pull back before our current episode matches the Triassic extinction or the K-T event. Maybe it will turn out to be no worse than the Eocene extinction, with a 35 percent loss of species. "Are you hopeful?" I ask. Given that hope is a duty from which paleontologists are exempt, I'm surprised when he answers, "Yes, I am."

I'm not. My own guess about the mid-term future, excused by no exemption, is that our Planet of Weeds will indeed be a crummier place, a lonelier and uglier place, and a particularly wretched place for the 2 billion people comprising Alan Durning's absolute poor. What will increase most dramatically as time proceeds, I suspect, won't be generalized misery or futuristic modes of consumption but the gulf between two global classes experiencing those extremes. Progressive failure of ecosystem functions? Yes, but human resourcefulness of the sort Julian Simon so admired will probably find stopgap technological remedies, to be available for a price. So the world's privileged class--that's your class and my class--will probably still manage to maintain themselves inside Homer-Dixon's stretch limo, drinking bottled water and breathing bottled air and eating reasonably healthy food that has become incredibly precious, while the potholes on the road outside grow ever deeper. Eventually the limo will look more like a lunar rover. Ragtag mobs of desperate souls will cling to its bumpers, like groupies on Elvis's final Cadillac. The absolute poor will suffer their lack of ecological privilege in the form of lowered life expectancy, bad health, absence of education, corrosive want, and anger. Maybe in time they'll find ways to gather themselves in localized revolt against the affluent class. Not likely, though, as long as affluence buys guns. In any case, well before that they will have burned the last stick of Bornean dipterocarp for firewood and roasted the last lemur, the last grizzly bear, the last elephant left unprotected outside a zoo.

Jablonski has a hundred things to do before leaving for Alaska, so after two hours I clear out. The heat on the sidewalk is fierce, though not nearly as fierce as this summer's heat in New Delhi or Dallas, where people are dying. Since my flight doesn't leave until early evening, I cab downtown and take refuge in a nouveau-Cajun restaurant near the river. Over a beer and jambalaya, I glance again at Jablonski's Science paper, titled "Extinctions: A Paleontological Perspective." I also play back the tape of our conversation, pressing my ear against the little recorder to hear it over the lunch-crowd noise.

Among the last questions I asked Jablonski was, What will happen after this mass extinction, assuming it proceeds to a worst-case scenario? If we destroy half or two thirds of all living species, how long will it take for evolution to fill the planet back up? "I don't know the answer to that," he said. "I'd rather not bottom out and see what happens next." In the journal paper he had hazarded that, based on fossil evidence in rock laid down atop the K-T event and others, the time required for full recovery might be 5 or 10 million years. From a paleontological perspective, that's fast. "Biotic recoveries after mass extinctions are geologically rapid but immensely prolonged on human time scales," he wrote. There was also the proviso, cited from another expert, that recovery might not begin until after the extinction-causing circumstances have disappeared. But in this case, of course, the circumstances won't likely disappear until we do.

Still, evolution never rests. It's happening right now, in weed patches all over the planet. I'm not presuming to alert you to the end of the world, the end of evolution, or the end of nature. What I've tried to describe here is not an absolute end but a very deep dip, a repeat point within a long, violent cycle. Species die, species arise. The relative pace of those two processes is what matters. Even rats and cockroaches are capable--given the requisite conditions; namely, habitat diversity and time--of speciation. And speciation brings new diversity. So we might reasonably imagine an Earth upon which, 10 million years after the extinction (or, alternatively, the drastic transformation) of Homo sapiens, wondrous forests are again filled with wondrous beasts. That's the good news.



[David Quammen, The weeds shall inherit the Earth, The Independent (London), 11-22-1998, pp 30-39.]


Post by danas » 01/08/2007 21:12

Was Darwin Wrong?
By David Quammen

National Geographic, November 2004




Evolution by natural selection, the central concept of the life's work of Charles Darwin, is a theory. It's a theory about the origin of adaptation, complexity, and diversity among Earth's living creatures. If you are skeptical by nature, unfamiliar with the terminology of science, and unaware of the overwhelming evidence, you might even be tempted to say that it's "just" a theory. In the same sense, relativity as described by Albert Einstein is "just" a theory. The notion that Earth orbits around the sun rather than vice versa, offered by Copernicus in 1543, is a theory. Continental drift is a theory. The existence, structure, and dynamics of atoms? Atomic theory. Even electricity is a theoretical construct, involving electrons, which are tiny units of charged mass that no one has ever seen. Each of these theories is an explanation that has been confirmed to such a degree, by observation and experiment, that knowledgeable experts accept it as fact. That's what scientists mean when they talk about a theory: not a dreamy and unreliable speculation, but an explanatory statement that fits the evidence. They embrace such an explanation confidently but provisionally—taking it as their best available view of reality, at least until some severely conflicting data or some better explanation might come along.

The rest of us generally agree. We plug our televisions into little wall sockets, measure a year by the length of Earth's orbit, and in many other ways live our lives based on the trusted reality of those theories.

Evolutionary theory, though, is a bit different. It's such a dangerously wonderful and far-reaching view of life that some people find it unacceptable, despite the vast body of supporting evidence. As applied to our own species, Homo sapiens, it can seem more threatening still. Many fundamentalist Christians and ultraorthodox Jews take alarm at the thought that human descent from earlier primates contradicts a strict reading of the Book of Genesis. Their discomfort is paralleled by Islamic creationists such as Harun Yahya, author of a recent volume titled The Evolution Deceit, who points to the six-day creation story in the Koran as literal truth and calls the theory of evolution "nothing but a deception imposed on us by the dominators of the world system." The late Srila Prabhupada, of the Hare Krishna movement, explained that God created "the 8,400,000 species of life from the very beginning," in order to establish multiple tiers of reincarnation for rising souls. Although souls ascend, the species themselves don't change, he insisted, dismissing "Darwin's nonsensical theory."

Other people too, not just scriptural literalists, remain unpersuaded about evolution. According to a Gallup poll drawn from more than a thousand telephone interviews conducted in February 2001, no less than 45 percent of responding U.S. adults agreed that "God created human beings pretty much in their present form at one time within the last 10,000 years or so." Evolution, by their lights, played no role in shaping us.

Only 37 percent of the polled Americans were satisfied with allowing room for both God and Darwin—that is, divine initiative to get things started, evolution as the creative means. (This view, according to more than one papal pronouncement, is compatible with Roman Catholic dogma.) Still fewer Americans, only 12 percent, believed that humans evolved from other life-forms without any involvement of a god.

The most startling thing about these poll numbers is not that so many Americans reject evolution, but that the statistical breakdown hasn't changed much in two decades. Gallup interviewers posed exactly the same choices in 1982, 1993, 1997, and 1999. The creationist conviction—that God alone, and not evolution, produced humans—has never drawn less than 44 percent. In other words, nearly half the American populace prefers to believe that Charles Darwin was wrong where it mattered most.

Why are there so many antievolutionists? Scriptural literalism can only be part of the answer. The American public certainly includes a large segment of scriptural literalists—but not that large, not 44 percent. Creationist proselytizers and political activists, working hard to interfere with the teaching of evolutionary biology in public schools, are another part. Honest confusion and ignorance, among millions of adult Americans, must be still another. Many people have never taken a biology course that dealt with evolution nor read a book in which the theory was lucidly explained. Sure, we've all heard of Charles Darwin, and of a vague, somber notion about struggle and survival that sometimes goes by the catchall label "Darwinism." But the main sources of information from which most Americans have drawn their awareness of this subject, it seems, are haphazard ones at best: cultural osmosis, newspaper and magazine references, half-baked nature documentaries on the tube, and hearsay.

Evolution is both a beautiful concept and an important one, more crucial nowadays to human welfare, to medical science, and to our understanding of the world than ever before. It's also deeply persuasive—a theory you can take to the bank. The essential points are slightly more complicated than most people assume, but not so complicated that they can't be comprehended by any attentive person. Furthermore, the supporting evidence is abundant, various, ever increasing, solidly interconnected, and easily available in museums, popular books, textbooks, and a mountainous accumulation of peer-reviewed scientific studies. No one needs to, and no one should, accept evolution merely as a matter of faith.

Two big ideas, not just one, are at issue: the evolution of all species, as a historical phenomenon, and natural selection, as the main mechanism causing that phenomenon. The first is a question of what happened. The second is a question of how. The idea that all species are descended from common ancestors had been suggested by other thinkers, including Jean-Baptiste Lamarck, long before Darwin published The Origin of Species in 1859. What made Darwin's book so remarkable when it appeared, and so influential in the long run, was that it offered a rational explanation of how evolution must occur. The same insight came independently to Alfred Russel Wallace, a young naturalist doing fieldwork in the Malay Archipelago during the late 1850s. In historical annals, if not in the popular awareness, Wallace and Darwin share the kudos for having discovered natural selection.

The gist of the concept is that small, random, heritable differences among individuals result in different chances of survival and reproduction—success for some, death without offspring for others—and that this natural culling leads to significant changes in shape, size, strength, armament, color, biochemistry, and behavior among the descendants. Excess population growth drives the competitive struggle. Because less successful competitors produce fewer surviving offspring, the useless or negative variations tend to disappear, whereas the useful variations tend to be perpetuated and gradually magnified throughout a population.
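As an aside, the arithmetic of that culling process is easy to make concrete. Below is a toy model, not anything from Quammen's text: the numbers (a 1 percent starting frequency, a 10 percent reproductive advantage) are invented purely for illustration, and the update rule is the simplest possible bookkeeping of differential reproduction.

```python
def selection_sweep(start_freq=0.01, advantage=0.1, generations=100):
    """Deterministic toy model of a beneficial heritable variant.
    Each generation, carriers out-reproduce non-carriers by a
    factor of (1 + advantage), and the variant's frequency in the
    population updates accordingly."""
    freq = start_freq
    for _ in range(generations):
        carriers = freq * (1 + advantage)
        freq = carriers / (carriers + (1 - freq))
    return freq

# A 10 percent reproductive edge carries a rare variant (1 percent
# of the population) almost to fixation within 100 generations.
print(round(selection_sweep(), 3))  # 0.993
```

The point of the sketch is Darwin's point: small, consistent differences in reproductive success, compounded over generations, are enough to transform a population.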

So much for one part of the evolutionary process, known as anagenesis, during which a single species is transformed. But there's also a second part, known as speciation. Genetic changes sometimes accumulate within an isolated segment of a species, but not throughout the whole, as that isolated population adapts to its local conditions. Gradually it goes its own way, seizing a new ecological niche. At a certain point it becomes irreversibly distinct—that is, so different that its members can't interbreed with the rest. Two species now exist where formerly there was one. Darwin called that splitting-and-specializing phenomenon the "principle of divergence." It was an important part of his theory, explaining the overall diversity of life as well as the adaptation of individual species.

This thrilling and radical assemblage of concepts came from an unlikely source. Charles Darwin was shy and meticulous, a wealthy landowner with close friends among the Anglican clergy. He had a gentle, unassuming manner, a strong need for privacy, and an extraordinary commitment to intellectual honesty. As an undergraduate at Cambridge, he had studied halfheartedly toward becoming a clergyman himself, before he discovered his real vocation as a scientist. Later, having established a good but conventional reputation in natural history, he spent 22 years secretly gathering evidence and pondering arguments—both for and against his theory—because he didn't want to flame out in a burst of unpersuasive notoriety. He may have delayed, too, because of his anxiety about announcing a theory that seemed to challenge conventional religious beliefs—in particular, the Christian beliefs of his wife, Emma. Darwin himself quietly renounced Christianity during his middle age, and later described himself as an agnostic. He continued to believe in a distant, impersonal deity of some sort, a greater entity that had set the universe and its laws into motion, but not in a personal God who had chosen humanity as a specially favored species. Darwin avoided flaunting his lack of religious faith, at least partly in deference to Emma. And she prayed for his soul.

In 1859 he finally delivered his revolutionary book. Although it was hefty and substantive at 490 pages, he considered The Origin of Species just a quick-and-dirty "abstract" of the huge volume he had been working on until interrupted by an alarming event. (In fact, he'd wanted to title it An Abstract of an Essay on the Origin of Species and Varieties Through Natural Selection, but his publisher found that insufficiently catchy.) The alarming event was his receiving a letter and an enclosed manuscript from Alfred Wallace, whom he knew only as a distant pen pal. Wallace's manuscript sketched out the same great idea—evolution by natural selection—that Darwin considered his own. Wallace had scribbled this paper and (unaware of Darwin's own evolutionary thinking, which so far had been kept private) mailed it to him from the Malay Archipelago, along with a request for reaction and help. Darwin was horrified. After two decades of painstaking effort, now he'd be scooped. Or maybe not quite. He forwarded Wallace's paper toward publication, though managing also to assert his own prior claim by releasing two excerpts from his unpublished work. Then he dashed off The Origin, his "abstract" on the subject. Unlike Wallace, who was younger and less meticulous, Darwin recognized the importance of providing an edifice of supporting evidence and logic.

The evidence, as he presented it, mostly fell within four categories: biogeography, paleontology, embryology, and morphology. Biogeography is the study of the geographical distribution of living creatures—that is, which species inhabit which parts of the planet and why. Paleontology investigates extinct life-forms, as revealed in the fossil record. Embryology examines the revealing stages of development (echoing earlier stages of evolutionary history) that embryos pass through before birth or hatching; at a stretch, embryology also concerns the immature forms of animals that metamorphose, such as the larvae of insects. Morphology is the science of anatomical shape and design. Darwin devoted sizable sections of The Origin of Species to these categories.

Biogeography, for instance, offered a great pageant of peculiar facts and patterns. Anyone who considers the biogeographical data, Darwin wrote, must be struck by the mysterious clustering pattern among what he called "closely allied" species—that is, similar creatures sharing roughly the same body plan. Such closely allied species tend to be found on the same continent (several species of zebras in Africa) or within the same group of oceanic islands (dozens of species of honeycreepers in Hawaii, 13 species of Galápagos finch), despite their species-by-species preferences for different habitats, food sources, or conditions of climate. Adjacent areas of South America, Darwin noted, are occupied by two similar species of large, flightless birds (the rheas, Rhea americana and Pterocnemia pennata), not by ostriches as in Africa or emus as in Australia. South America also has agoutis and viscachas (small rodents) in terrestrial habitats, plus coypus and capybaras in the wetlands, not—as Darwin wrote—hares and rabbits in terrestrial habitats or beavers and muskrats in the wetlands. During his own youthful visit to the Galápagos, aboard the survey ship Beagle, Darwin himself had discovered three very similar forms of mockingbird, each on a different island.

Why should "closely allied" species inhabit neighboring patches of habitat? And why should similar habitat on different continents be occupied by species that aren't so closely allied? "We see in these facts some deep organic bond, prevailing throughout space and time," Darwin wrote. "This bond, on my theory, is simply inheritance." Similar species occur nearby in space because they have descended from common ancestors.

Paleontology reveals a similar clustering pattern in the dimension of time. The vertical column of geologic strata, laid down by sedimentary processes over the eons, lightly peppered with fossils, represents a tangible record showing which species lived when. Less ancient layers of rock lie atop more ancient ones (except where geologic forces have tipped or shuffled them), and likewise with the animal and plant fossils that the strata contain. What Darwin noticed about this record is that closely allied species tend to be found adjacent to one another in successive strata. One species endures for millions of years and then makes its last appearance in, say, the middle Eocene epoch; just above, a similar but not identical species replaces it. In North America, for example, a vaguely horselike creature known as Hyracotherium was succeeded by Orohippus, then Epihippus, then Mesohippus, which in turn were succeeded by a variety of horsey American critters. Some of them even galloped across the Bering land bridge into Asia, then onward to Europe and Africa. By five million years ago they had nearly all disappeared, leaving behind Dinohippus, which was succeeded by Equus, the modern genus of horse. Not all these fossil links had been unearthed in Darwin's day, but he captured the essence of the matter anyway. Again, were such sequences just coincidental? No, Darwin argued. Closely allied species succeed one another in time, as well as living nearby in space, because they're related through evolutionary descent.

Embryology too involved patterns that couldn't be explained by coincidence. Why does the embryo of a mammal pass through stages resembling stages of the embryo of a reptile? Why is one of the larval forms of a barnacle, before metamorphosis, so similar to the larval form of a shrimp? Why do the larvae of moths, flies, and beetles resemble one another more than any of them resemble their respective adults? Because, Darwin wrote, "the embryo is the animal in its less modified state" and that state "reveals the structure of its progenitor."

Morphology, his fourth category of evidence, was the "very soul" of natural history, according to Darwin. Even today it's on display in the layout and organization of any zoo. Here are the monkeys, there are the big cats, and in that building are the alligators and crocodiles. Birds in the aviary, fish in the aquarium. Living creatures can be easily sorted into a hierarchy of categories—not just species but genera, families, orders, whole kingdoms—based on which anatomical characters they share and which they don't.

All vertebrate animals have backbones. Among vertebrates, birds have feathers, whereas reptiles have scales. Mammals have fur and mammary glands, not feathers or scales. Among mammals, some have pouches in which they nurse their tiny young. Among these species, the marsupials, some have huge rear legs and strong tails by which they go hopping across miles of arid outback; we call them kangaroos. Bring in modern microscopic and molecular evidence, and you can trace the similarities still further back. All plants and fungi, as well as animals, have nuclei within their cells. All living organisms contain DNA and RNA (except some viruses with RNA only), two related forms of information-coding molecules.

Such a pattern of tiered resemblances—groups of similar species nested within broader groupings, and all descending from a single source—isn't naturally present among other collections of items. You won't find anything equivalent if you try to categorize rocks, or musical instruments, or jewelry. Why not? Because rock types and styles of jewelry don't reflect unbroken descent from common ancestors. Biological diversity does. The number of shared characteristics between any one species and another indicates how recently those two species have diverged from a shared lineage.

That insight gave new meaning to the task of taxonomic classification, which had been founded in its modern form back in 1735 by the Swedish naturalist Carolus Linnaeus. Linnaeus showed how species could be systematically classified, according to their shared similarities, but he worked from creationist assumptions that offered no material explanation for the nested pattern he found. In the early and middle 19th century, morphologists such as Georges Cuvier and Étienne Geoffroy Saint-Hilaire in France and Richard Owen in England improved classification with their meticulous studies of internal as well as external anatomies, and tried to make sense of what the ultimate source of these patterned similarities could be. Not even Owen, a contemporary and onetime friend of Darwin's (later in life they had a bitter falling out), took the full step to an evolutionary vision before The Origin of Species was published. Owen made a major contribution, though, by advancing the concept of homologues—that is, superficially different but fundamentally similar versions of a single organ or trait, shared by dissimilar species.

For instance, the five-digit skeletal structure of the vertebrate hand appears not just in humans and apes and raccoons and bears but also, variously modified, in cats and bats and porpoises and lizards and turtles. The paired bones of our lower leg, the tibia and the fibula, are also represented by homologous bones in other mammals and in reptiles, and even in the long-extinct bird-reptile Archaeopteryx. What's the reason behind such varied recurrence of a few basic designs? Darwin, with a nod to Owen's "most interesting work," supplied the answer: common descent, as shaped by natural selection, modifying the inherited basics for different circumstances.

Vestigial characteristics are still another form of morphological evidence, illuminating to contemplate because they show that the living world is full of small, tolerable imperfections. Why do male mammals (including human males) have nipples? Why do some snakes (notably boa constrictors) carry the rudiments of a pelvis and tiny legs buried inside their sleek profiles? Why do certain species of flightless beetle have wings, sealed beneath wing covers that never open? Darwin raised all these questions, and answered them, in The Origin of Species. Vestigial structures stand as remnants of the evolutionary history of a lineage.

Today the same four branches of biological science from which Darwin drew—biogeography, paleontology, embryology, morphology—embrace an ever growing body of supporting data. In addition to those categories we now have others: population genetics, biochemistry, molecular biology, and, most recently, the whiz-bang field of machine-driven genetic sequencing known as genomics. These new forms of knowledge overlap one another seamlessly and intersect with the older forms, strengthening the whole edifice, contributing further to the certainty that Darwin was right.

He was right about evolution, that is. He wasn't right about everything. Being a restless explainer, Darwin floated a number of theoretical notions during his long working life, some of which were mistaken and illusory. He was wrong about what causes variation within a species. He was wrong about a famous geologic mystery, the parallel shelves along a Scottish valley called Glen Roy. Most notably, his theory of inheritance—which he labeled pangenesis and cherished despite its poor reception among his biologist colleagues—turned out to be dead wrong. Fortunately for Darwin, the correctness of his most famous good idea stood independent of that particular bad idea. Evolution by natural selection represented Darwin at his best—which is to say, scientific observation and careful thinking at its best.

Douglas Futuyma is a highly respected evolutionary biologist, author of textbooks as well as influential research papers. His office, at the University of Michigan, is a long narrow room in the natural sciences building, well stocked with journals and books, including volumes about the conflict between creationism and evolution. I arrived carrying a well-thumbed copy of his own book on that subject, Science on Trial: The Case for Evolution. Killing time in the corridor before our appointment, I noticed a blue flyer on a departmental bulletin board, seeming oddly placed there amid the announcements of career opportunities for graduate students. "Creation vs. evolution," it said. "A series of messages challenging popular thought with Biblical truth and scientific evidences." A traveling lecturer from something called the Origins Research Association would deliver these messages at a local Baptist church. Beside the lecturer's photo was a drawing of a dinosaur. "Free pizza following the evening service," said a small line at the bottom. Dinosaurs, biblical truth, and pizza: something for everybody.

In response to my questions about evidence, Dr. Futuyma moved quickly through the traditional categories—paleontology, biogeography—and talked mostly about modern genetics. He pulled out his heavily marked copy of the journal Nature for February 15, 2001, a historic issue, fat with articles reporting and analyzing the results of the Human Genome Project. Beside it he slapped down a more recent issue of Nature, this one devoted to the sequenced genome of the house mouse, Mus musculus. The headline of the lead editorial announced: "HUMAN BIOLOGY BY PROXY." The mouse genome effort, according to Nature's editors, had revealed "about 30,000 genes, with 99% having direct counterparts in humans."

The resemblance between our 30,000 human genes and those 30,000 mousy counterparts, Futuyma explained, represents another form of homology, like the resemblance between a five-fingered hand and a five-toed paw. Such genetic homology is what gives meaning to biomedical research using mice and other animals, including chimpanzees, which (to their sad misfortune) are our closest living relatives.

No aspect of biomedical research seems more urgent today than the study of microbial diseases. And the dynamics of those microbes within human bodies, within human populations, can only be understood in terms of evolution.

Nightmarish illnesses caused by microbes include both the infectious sort (AIDS, Ebola, SARS) that spread directly from person to person and the sort (malaria, West Nile fever) delivered to us by biting insects or other intermediaries. The capacity for quick change among disease-causing microbes is what makes them so dangerous to large numbers of people and so difficult and expensive to treat. They leap from wildlife or domestic animals into humans, adapting to new circumstances as they go. Their inherent variability allows them to find new ways of evading and defeating human immune systems. By natural selection they acquire resistance to drugs that should kill them. They evolve. There's no better or more immediate evidence supporting the Darwinian theory than this process of forced transformation among our inimical germs.

Take the common bacterium Staphylococcus aureus, which lurks in hospitals and causes serious infections, especially among surgery patients. Penicillin, becoming available in 1943, proved almost miraculously effective in fighting staphylococcus infections. Its deployment marked a new phase in the old war between humans and disease microbes, a phase in which humans invent new killer drugs and microbes find new ways to be unkillable. The supreme potency of penicillin didn't last long. The first resistant strains of Staphylococcus aureus were reported in 1947. A newer staph-killing drug, methicillin, came into use during the 1960s, but methicillin-resistant strains appeared soon, and by the 1980s those strains were widespread. Vancomycin became the next great weapon against staph, and the first vancomycin-resistant strain emerged in 2002. These antibiotic-resistant strains represent an evolutionary series, not much different in principle from the fossil series tracing horse evolution from Hyracotherium to Equus. They make evolution a very practical problem by adding expense, as well as misery and danger, to the challenge of coping with staph.

The biologist Stephen Palumbi has calculated the cost of treating penicillin-resistant and methicillin-resistant staph infections, just in the United States, at 30 billion dollars a year. "Antibiotics exert a powerful evolutionary force," he wrote last year, "driving infectious bacteria to evolve powerful defenses against all but the most recently invented drugs." As reflected in their DNA, which uses the same genetic code found in humans and horses and hagfish and honeysuckle, bacteria are part of the continuum of life, all shaped and diversified by evolutionary forces.

Even viruses belong to that continuum. Some viruses evolve quickly, some slowly. Among the fastest is HIV, because its method of replicating itself involves a high rate of mutation, and those mutations allow the virus to assume new forms. After just a few years of infection and drug treatment, each HIV patient carries a unique version of the virus. Isolation within one infected person, plus differing conditions and the struggle to survive, forces each version of HIV to evolve independently. It's nothing but a speeded up and microscopic case of what Darwin saw in the Galápagos—except that each human body is an island, and the newly evolved forms aren't so charming as finches or mockingbirds.

Understanding how quickly HIV acquires resistance to antiviral drugs, such as AZT, has been crucial to improving treatment by way of multiple drug cocktails. "This approach has reduced deaths due to HIV by severalfold since 1996," according to Palumbi, "and it has greatly slowed the evolution of this disease within patients."

Insects and weeds acquire resistance to our insecticides and herbicides through the same process. As we humans try to poison them, evolution by natural selection transforms the population of a mosquito or thistle into a new sort of creature, less vulnerable to that particular poison. So we invent another poison, then another. It's a futile effort. Even DDT, with its ferocious and long-lasting effects throughout ecosystems, produced resistant house flies within a decade of its discovery in 1939. By 1990 more than 500 species (including 114 kinds of mosquitoes) had acquired resistance to at least one pesticide. Based on these undesired results, Stephen Palumbi has commented glumly, "humans may be the world's dominant evolutionary force."

Among most forms of living creatures, evolution proceeds slowly—too slowly to be observed by a single scientist within a research lifetime. But science functions by inference, not just by direct observation, and the inferential sorts of evidence such as paleontology and biogeography are no less cogent simply because they're indirect. Still, skeptics of evolutionary theory ask: Can we see evolution in action? Can it be observed in the wild? Can it be measured in the laboratory?

The answer is yes. Peter and Rosemary Grant, two British-born researchers who have spent decades where Charles Darwin spent weeks, have captured a glimpse of evolution with their long-term studies of beak size among Galápagos finches. William R. Rice and George W. Salt achieved something similar in their lab, through an experiment involving 35 generations of the fruit fly Drosophila melanogaster. Richard E. Lenski and his colleagues at Michigan State University have done it too, tracking 20,000 generations of evolution in the bacterium Escherichia coli. Such field studies and lab experiments document anagenesis—that is, slow evolutionary change within a single, unsplit lineage. With patience it can be seen, like the movement of a minute hand on a clock.

Speciation, when a lineage splits into two species, is the other major phase of evolutionary change, making possible the divergence between lineages about which Darwin wrote. It's rarer and more elusive even than anagenesis. Many individual mutations must accumulate (in most cases, anyway, with certain exceptions among plants) before two populations become irrevocably separated. The process is spread across thousands of generations, yet it may finish abruptly—like a door going slam!—when the last critical changes occur. Therefore it's much harder to witness. Despite the difficulties, Rice and Salt seem to have recorded a speciation event, or very nearly so, in their extended experiment on fruit flies. From a small stock of mated females they eventually produced two distinct fly populations adapted to different habitat conditions, which the researchers judged "incipient species."

After my visit with Douglas Futuyma in Ann Arbor, I spent two hours at the university museum there with Philip D. Gingerich, a paleontologist well-known for his work on the ancestry of whales. As we talked, Gingerich guided me through an exhibit of ancient cetaceans on the museum's second floor. Amid weird skeletal shapes that seemed almost chimerical (some hanging overhead, some in glass cases) he pointed out significant features and described the progress of thinking about whale evolution. A burly man with a broad open face and the gentle manner of a scoutmaster, Gingerich combines intellectual passion and solid expertise with one other trait that's valuable in a scientist: a willingness to admit when he's wrong.

Since the late 1970s Gingerich has collected fossil specimens of early whales from remote digs in Egypt and Pakistan. Working with Pakistani colleagues, he discovered Pakicetus, a terrestrial mammal dating from 50 million years ago, whose ear bones reflect its membership in the whale lineage but whose skull looks almost doglike. A former student of Gingerich's, Hans Thewissen, found a slightly more recent form with webbed feet, legs suitable for either walking or swimming, and a long toothy snout. Thewissen called it Ambulocetus natans, or the "walking-and-swimming whale." Gingerich and his team turned up several more, including Rodhocetus balochistanensis, which was fully a sea creature, its legs more like flippers, its nostrils shifted backward on the snout, halfway to the blowhole position on a modern whale. The sequence of known forms was becoming more and more complete. And all along, Gingerich told me, he leaned toward believing that whales had descended from a group of carnivorous Eocene mammals known as mesonychids, with cheek teeth useful for chewing meat and bone. Just a bit more evidence, he thought, would confirm that relationship. By the end of the 1990s most paleontologists agreed.

Meanwhile, molecular biologists had explored the same question and arrived at a different answer. No, the match to those Eocene carnivores might be close, but not close enough. DNA hybridization and other tests suggested that whales had descended from artiodactyls (that is, even-toed herbivores, such as antelopes and hippos), not from meat-eating mesonychids.

In the year 2000 Gingerich chose a new field site in Pakistan, where one of his students found a single fossil fragment that changed the prevailing view in paleontology. It was half of a pulley-shaped anklebone, known as an astragalus, belonging to another new species of whale.

A Pakistani colleague found the fragment's other half. When Gingerich fitted the two pieces together, he had a moment of humbling recognition: The molecular biologists were right. Here was an anklebone, from a four-legged whale dating back 47 million years, that closely resembled the homologous anklebone in an artiodactyl. Suddenly he realized how closely whales are related to antelopes.

This is how science is supposed to work. Ideas come and go, but the fittest survive. Downstairs in his office Phil Gingerich opened a specimen drawer, showing me some of the actual fossils from which the display skeletons upstairs were modeled. He put a small lump of petrified bone, no longer than a lug nut, into my hand. It was the famous astragalus, from the species he had eventually named Artiocetus clavis. It felt solid and heavy as truth.

Seeing me to the door, Gingerich volunteered something personal: "I grew up in a conservative church in the Midwest and was not taught anything about evolution. The subject was clearly skirted. That helps me understand the people who are skeptical about it. Because I come from that tradition myself." He shares the same skeptical instinct. Tell him that there's an ancestral connection between land animals and whales, and his reaction is: Fine, maybe. But show me the intermediate stages. Like Charles Darwin, the onetime divinity student, who joined that round-the-world voyage aboard the Beagle instead of becoming a country parson, and whose grand view of life on Earth was shaped by attention to small facts, Phil Gingerich is a reverent empiricist. He's not satisfied until he sees solid data. That's what excites him so much about pulling shale fossils out of the ground. In 30 years he has seen enough to be satisfied. For him, Gingerich said, it's "a spiritual experience."

"The evidence is there," he added. "It's buried in the rocks of ages."

User avatar
Bosanac sa dna kace
Posts: 8607
Joined: 27/06/2005 20:21
Location: ponutrače

Post by Bosanac sa dna kace » 01/08/2007 21:19

and here I was expecting something short :run:

User avatar
danas
Posts: 18821
Joined: 11/03/2005 19:40
Location: 10th circle...

Post by danas » 01/08/2007 21:57

they're all a couple of pages each, tops :lol: :lol: :lol: eh, kids these days :lol: :lol: :lol: :-) :-) :-)

Leonid Breznjev
Posts: 5302
Joined: 04/11/2006 03:32
Location: Kamenskoje

Post by Leonid Breznjev » 02/08/2007 03:42

InfraRedRidinghood wrote:Youngsters, no more outbursts like this :oops: :oops: :oops: :oops: :zzzz: :zzzz: :zzzz:


just don't go after the young ones again.. :D

User avatar
Bosanac sa dna kace
Posts: 8607
Joined: 27/06/2005 20:21
Location: ponutrače

Post by Bosanac sa dna kace » 02/08/2007 13:30

it's just exhausting to read when it's all in one big chunk

Post Reply