Monday, June 18, 2012

selection bias

Selection bias occurs when a sample is chosen in a way that favors one proposition over another. For example, if you ask only people who have bought Chevrolets which car is better, Chevy or Ford, you will most likely get overwhelming support for Chevy. If you ask only people who made a full recovery after back surgery whether back surgery is a good option for people with back problems, you are likely to get overwhelming support for the surgery. If you ask only those whose back problems continued after the surgery, you are likely to get an overwhelming response against the surgery.
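
A quick simulation makes the mechanism concrete. The sketch below is a minimal illustration in Python, and every number in it (the recovery rate, the endorsement rates, the sample sizes) is invented for the example: surveying only the patients who recovered measures the opinion of a self-selected group, not of back-surgery patients in general.

```python
import random

random.seed(0)

# Hypothetical population of 10,000 back-surgery patients. Assume, purely for
# illustration, that half recovered fully, that recovered patients almost
# always endorse the surgery, and that the others almost never do.
patients = []
for _ in range(10_000):
    recovered = random.random() < 0.5
    endorses = random.random() < (0.95 if recovered else 0.10)
    patients.append((recovered, endorses))

def pct_endorsing(sample):
    return sum(1 for _, endorses in sample if endorses) / len(sample)

# Biased sample: survey only the patients who made a full recovery.
recovered_only = [p for p in patients if p[0]]
print(f"Endorsement among recovered patients: {pct_endorsing(recovered_only):.0%}")

# Less biased sample: survey a random cross-section of all patients.
random_sample = random.sample(patients, 500)
print(f"Endorsement in a random sample:       {pct_endorsing(random_sample):.0%}")
```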

Selection bias partly explains why there are reports of many satisfied customers who go to psychics, tarot card readers, palm readers, faith healers, acupuncturists, homeopaths, and others who provide bogus treatments such as mistletoe for cancer. The unsatisfied customers either are not asked for their opinion, are too embarrassed to give it, or are dead.

Edzard Ernst, M.D., who was trained in various non-conventional medical therapies, provides an example of selection bias that occurred while he was studying the therapeutic effects of mistletoe injections on cancer patients. He was told that the effect would be a lessening of suffering.

Whenever I gave mistletoe injections, the results seemed encouraging. But young doctors are easily impressed, and I was no exception. What I didn't appreciate then was a relatively simple phenomenon: the hospital where I worked was well known for its approach across Germany; patients went there because they wanted this type of treatment. They were desperate and had very high expectations - and expectations can often move mountains, particularly in relation to subjective experience and symptoms. We call this "selection bias". It can give the impression that a therapy causes a positive health outcome even when it has no positive action of its own.

Patients receiving questionable treatments or clients seeking advice from questionable soothsayers are highly motivated to be helped and to have the healer or reader succeed. Such people are often extremely generous in their efforts to personally validate the words, images, or advice of the reader or healer. Some will even assent to claims they know are false, as one of Gary Schwartz's subjects did with a medium who got the subject to agree that her husband was dead when in fact he was still alive. Schwartz himself engaged in selection bias when he omitted much of his data from papers, published in the Journal of the Society for Psychical Research, that supported the hypothesis of survival of consciousness after death. In his book, The Afterlife Experiments, he describes numerous subjects in his experiments who are conspicuously not mentioned in the published papers on those experiments. Rupert Sheldrake showed selection bias when he omitted 40% of his data in a study claiming to provide statistical evidence for the psychic abilities of a parrot.

The best way to avoid selection bias regarding questionable treatments and various divination techniques is to use randomized samples, control groups, and double-blind protocols. The best way to reduce selection bias by scientists is to expose its occurrence and publicly chastise offenders.
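
As a rough sketch of what randomization buys (the expectation scores and group sizes below are invented), randomly assigning volunteers to a treatment arm and a control arm breaks the link between who seeks out a therapy and who receives it, so both arms end up with similar levels of expectation and any real effect has to show up as a difference against the control group.

```python
import random

random.seed(1)

# Hypothetical pool of volunteers. Assume, for illustration, that each person
# has an "expectation" score and that high expectations inflate subjectively
# reported improvement regardless of what treatment is received.
volunteers = [{"expectation": random.random()} for _ in range(2_000)]

# Random assignment: every volunteer is equally likely to land in either arm,
# so the arms end up with nearly identical average expectations.
random.shuffle(volunteers)
treatment, control = volunteers[:1_000], volunteers[1_000:]

def mean_expectation(group):
    return sum(v["expectation"] for v in group) / len(group)

def mean_reported_improvement(group, true_effect=0.0):
    # Reported improvement = expectation (placebo-style) effect + any real effect.
    return sum(v["expectation"] + true_effect for v in group) / len(group)

print("Mean expectation, treatment arm:", round(mean_expectation(treatment), 3))
print("Mean expectation, control arm:  ", round(mean_expectation(control), 3))

# With no true effect, the two arms report essentially the same improvement;
# a therapy that really works would have to beat the control arm's figure.
print("Improvement, treatment (no real effect):", round(mean_reported_improvement(treatment), 3))
print("Improvement, control:                   ", round(mean_reported_improvement(control), 3))
```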

Skeptics and parapsychologists have accused each other of selection bias in deciding which studies to include in meta-analyses of ganzfeld experiments. Skeptic Ray Hyman did the first meta-analysis of 42 ganzfeld experiments and found no evidence of ESP. Parapsychologist Charles Honorton, on the other hand, found evidence of "anomalous information transfer." In 1994, Daryl Bem and Honorton published the results of a meta-analysis of 28 ganzfeld studies and once again found evidence for anomalous information transfer. In 1999, Julie Milton and Richard Wiseman published their own meta-analysis of ganzfeld studies and concluded that "the ganzfeld technique does not at present offer a replicable method for producing ESP in the laboratory." Much of the disagreement centered on which criteria to use in deciding which studies to include in a given meta-analysis.
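
A toy calculation shows why those inclusion criteria matter so much. All of the study figures below are invented; the point is only that pooling hit rates (chance is 25% in a standard four-choice ganzfeld trial) can yield an above-chance result or a chance-level result depending solely on which studies the analyst decides qualify.

```python
# Each hypothetical study is (hits, trials, meets_strict_quality_criteria).
# The numbers are invented purely to illustrate the arithmetic of pooling.
studies = [
    (26, 100, True),
    (23, 100, True),
    (21, 80,  True),
    (40, 100, False),  # e.g. judged by one analyst to be inadequately randomized
    (36, 90,  False),  # e.g. judged to have possible sensory leakage
]

def pooled_hit_rate(selected):
    hits = sum(h for h, _, _ in selected)
    trials = sum(t for _, t, _ in selected)
    return hits / trials

strict = [s for s in studies if s[2]]

print("Chance expectation:               25.0%")
print(f"Pooled hit rate, all studies:     {pooled_hit_rate(studies):.1%}")
print(f"Pooled hit rate, strict criteria: {pooled_hit_rate(strict):.1%}")
```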

selection bias in polls and surveys

Researchers can bias the results of polls and surveys by the way they select subjects for their study. Drawing subjects from a non-representative section of a population is one common way to bias a sample; using samples that are too small to be representative is another frequent error. As Daniel Kahneman observes:

Misconceptions of chance are not limited to naive subjects. A study of the statistical intuitions of experienced research psychologists revealed a lingering belief in what may be called the “law of small numbers,” according to which even small samples are highly representative of the populations from which they are drawn. The responses of these investigators reflected the expectation that a valid hypothesis about a population will be represented by a statistically significant result in a sample with little regard for its size. As a consequence, the researchers put too much faith in the results of small samples and grossly overestimated the replicability of such results. In the actual conduct of research, this bias leads to the selection of samples of inadequate size and to overinterpretation of findings. (Daniel Kahneman, Thinking, Fast and Slow, Macmillan, 2011, pp. 422-423, Kindle edition)
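
A small simulation (with an invented population rate) illustrates Kahneman's point: when the true effect is modest, tiny samples bounce around so much that a striking result in one small sample routinely fails to reappear in the next, while large samples settle close to the true value.

```python
import random

random.seed(2)

TRUE_RATE = 0.55  # assume the real population rate is only modestly above 50%

def sample_rate(n):
    """Observed proportion of 'successes' in a random sample of size n."""
    return sum(random.random() < TRUE_RATE for _ in range(n)) / n

# Repeat each survey five times to see how much the estimate wobbles.
for n in (10, 40, 1_000):
    estimates = [sample_rate(n) for _ in range(5)]
    spread = max(estimates) - min(estimates)
    print(f"n={n:5d}  estimates={[round(e, 2) for e in estimates]}  spread={spread:.2f}")
```
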
Alfred C. Kinsey's famous studies on sexual behavior, published in the late 1940s and early 1950s, have been repeatedly cited as the basis for the claim that 10% of the population is gay. This statistic has been widely repeated in both the mass media and in scientific publications, though it is based on a biased selection of samples. Numerous studies have been done since Kinsey's work was published, and these later studies put the percentage of adults who describe themselves as exclusively gay well below the 10% figure. Some have found the rate to be between one and two percent.* It should be noted, however, that "survey research methodologies often result in underreporting of stigmatized behaviors."*

Kinsey gathered his data, in part, by distributing questionnaires to prisoners and to people who attended his lectures on sexuality, neither of which was likely to be a good cross section of Americans (Carroll 2005: 140). For his studies on male sexuality, "he interviewed only white men, and these respondents were disproportionately from lower socioeconomic classes."*

A study that considered attraction to the same sex in measuring homosexuality found "8.7, 7.9, and 8.5% of males and 11.1, 8.6, and 11.7% of females in the United States, the United Kingdom, and France, respectively, report some homosexual attraction but no homosexual behavior since age 15."*

In 1994, sociologist Edward Laumann headed a team of sociologists that studied U.S. sexual behavior. They interviewed a representative sample of the U.S. population between the ages of 18 and 59. Laumann found that over a five-year period, 4.1 percent of U.S. men and 2.2 percent of U.S. women had sex with someone of their own sex. If the time period is extended to include their entire lives, these totals increase to 7.1 percent of the men and 3.8 percent of the women.*

Paul and Kirk Cameron reported in 1998: "The 1994 University of Chicago 'definitive' survey of adults estimated prevalence of homosexuality among males at 2.8% and among females at 1.4%. Corrected for the exclusion of those over the age of 59 years, the estimates should be 2.3% and 1.2%."* A study in Britain in 2000 found that about 2.6% of men and women reported having had a same-sex partner within the previous five years and 8.4% of the men and 9.7% of the women reported having had at least one sexual experience with a member of the same sex.*

One wonders, however, whether anything approaching unbiased data is possible for determining what percentage of any human population is homosexual. Given the long history of religious prohibition of homosexuality and the widespread revulsion toward homosexual behavior that has often led to torment and persecution, it is likely that some researchers in this area will be motivated by something other than a genuine search for the truth. Results will differ depending on how one defines 'gay,' 'lesbian,' and 'homosexual'. Methods of gathering data samples will vary widely, and the participants in such studies may not be highly motivated to reveal much about their sex lives.

There is some irony in the fact that the Kinsey studies are cited as the source of the statistic that 10% of the population is gay. As Michael Shermer notes, Kinsey made it clear that he did not believe human males "represent two discrete populations, heterosexual and homosexual." Kinsey maintained that "it is a fundamental of taxonomy that nature rarely deals with discrete categories. Only the human mind invents categories and tries to force facts into separate pigeon-holes" (Shermer 2005: 246). Nature has a bias toward variation. The idea that people should fall into neat categories such as 'gay' and 'straight', or even 'male' and 'female', is not consistent with the lessons of evolution. Any study that creates such false dichotomies will be misleading.


Many politically biased websites and organizations poll their readers or members and then pass on the data as if it were representative of the general population. Norman Bradburn, former director of the National Opinion Research Center at the University of Chicago, coined the acronym SLOP (self-selected listener opinion polls) to describe polls whose samples are self-selected rather than randomly drawn. Bradburn compares SLOP to radio talk shows: they attract a slice of America that is not representative of the country as a whole. “As a result, SLOP surveys litter misinformation and confusion across serious policy and political debates, virtually wherever and whenever they are used” (Richard Morin, “Call-in polls: Pseudo-Science debases journalism,” Sacramento Bee, Feb. 12, 1992, p. B11).

The inaccuracy of such polls should be obvious. Those who call in to give their opinion are self-selected rather than randomly selected, and people willing to call in will sometimes do so more than once. For example, in a USA Today call-in poll, 81 percent of the more than 6,000 respondents said that “Donald Trump symbolizes what made the U.S.A. a great country.” However, 72 percent of the favorable calls came from two telephones in one insurance company office.
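
The sketch below (all of the rates are invented) shows the arithmetic behind that inaccuracy: when supporters are far more motivated to call than opponents, and a few enthusiasts call repeatedly, a call-in poll can report overwhelming approval for a position that a random-sample poll would show to be a minority view.

```python
import random

random.seed(3)

# Hypothetical electorate: assume only 35% actually approve of the statement.
N = 100_000
approves = [random.random() < 0.35 for _ in range(N)]

# Random-sample poll: every member of the population is equally likely to be asked.
sample = random.sample(approves, 1_000)
print(f"Random-sample approval:  {sum(sample) / len(sample):.0%}")

# Call-in poll: assume supporters are ten times as likely to phone in as
# non-supporters, and that a few enthusiasts call several times.
calls = []
for person_approves in approves:
    if person_approves and random.random() < 0.10:
        calls.extend([True] * random.choice([1, 1, 1, 5]))  # occasional repeat callers
    elif not person_approves and random.random() < 0.01:
        calls.append(False)

print(f"Call-in poll 'approval': {sum(calls) / len(calls):.0%}")
```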

CBS tried the gimmick of call-in polling in “America on the Line,” which featured two surveys conducted immediately after President [George Herbert Walker] Bush’s State of the Union speech. There were 314,786 self-selected callers in one survey and 1,241 adults previously selected by a more scientific method in the other survey. The latter was to act as a check on the call-in survey. CBS’s Dan Rather commented on the similarity of results in the surveys, a sentiment that was echoed the next day in the Washington Post, which wrote, “by and large, the two polls produced the same or similar results.” The facts, however, do not support this judgment. “On two of the nine questions asked in both polls, the results differed by more than 20 percentage points. On another five, the differences were 10 percentage points or more” (Morin, cited in Carroll 2005: 147).

