

Debunking the false allegation of "statistical abuse": a reply to Spiegel

Bruce Rind, Department of Psychology, Temple University, Philadelphia, PA 19122 (rind@vm.temple.edu)

Robert Bauserman, Department of Health and Mental Hygiene, State of Maryland

Philip Tromovitch, Graduate School of Education, University of Pennsylvania

Sexuality & Culture, 4(2), Spring 2000, 101-111

The Leadership Council for Mental Health, Justice, and the Media claimed to have been instrumental in congressional condemnation of our Psychological Bulletin article on the assumed properties of child sexual abuse (CSA), as we documented in our lengthier article earlier in this issue.

The Leadership Council has characterized itself as an organization "whose membership includes many of the nation's most prominent mental health leaders," whose mission it is "to insure the public receives accurate information about mental health issues" (Leadership Council press release, May 24, 1999).

In accordance with this mission, the Leadership Council claimed it examined our study, uncovering numerous flaws. An important contributor to this examination was psychiatrist and Stanford University professor David Spiegel. The president of the Leadership Council, psychiatrist Paul J. Fink, in correspondence with the American Psychological Association (APA), called Spiegel "one of the leading scientists in our group" (June 3, 1999). Spiegel co-authored several critiques that Fink sent on to the APA as evidence for the scientific basis of his group's attacks on our study (i.e., Dallam et al., 1999; Spiegel & Kraemer, 1999). In the shorter critique, a summarized version of the longer one, Spiegel and Kraemer concluded that we used the wrong population, the wrong outcome measures, and the wrong effect sizes, did the wrong statistical analyses, and interpreted the results incorrectly.

Given that the Leadership Council claims to consist of prominent scientists and to be concerned with accuracy, it follows that Spiegel's critiques must have been definitive refutations of our study. This, however, is not the case. In our lengthier article elsewhere in this volume we evaluated all of the criticisms in the Dallam et al. critique, demonstrating that they consisted predominantly of false assertions, faulty speculations, faulty reasoning, and outright bias. Points worthy of debate were rare, and unambiguously correct points were nonexistent.

Nevertheless, these criticisms were presented to some members of Congress, with condemnation ensuing (Fink et al., 1999). Fink and Spiegel used this faulty critique to damage our study's reputation in the media. Fink corresponded with Dr. Laura Schlessinger, a socially conservative radio talk show host who frequently referenced the Leadership Council's critique on her program as firm evidence that our study was debunked (Bellows, 1999).

In newspaper interviews, Fink, described as a "prominent Philadelphia psychiatrist," called our study "perverse" and "terrible" (Philadelphia Inquirer, June 10, 1999, A20; July 13, 1999, A11). Spiegel claimed that we used "meta-analysis the way a drunk uses a lamp-post - for support, rather than illumination" (New York Times, June 13, 1999, 33).

Consistent with his other attacks on our analyses, Spiegel (2000) elsewhere in this volume characterized our analyses as "statistical abuse." Despite the fact that we have already extensively considered and refuted the Leadership Council's criticisms, because of this inflammatory and unprofessional characterization, as well as Spiegel's use of related emotive phrases such as "rationalization for sleazy exploitation" and "moral outrage," we deem it important to specifically address Spiegel in this separate reply.


Spiegel's Methodological and Statistical Critique

In Spiegel's introductory comments, he remarked that a natural assumption among clinicians and the public is that concluding no harm from CSA is tantamount to morally excusing it. He argued that recommending a distinction between consensual and coerced sex with children was based on the faulty assumption that children can consent. These objections, however, were not the basis for his attack on our study, as we learn next from his rhetorical question and ensuing response:

Are Rind et al. just misunderstood, staking out an unpopular but intellectually defensible position? Or are they willingly or unwillingly providing a rationalization for sleazy exploitation of children? The answer cannot come from the conclusions of the article, however distasteful they are, but rather from its methods.

Spiegel is therefore resting his case on a methodological critique. We show later that it is this critique, not our methodology, that is severely flawed. Our rebuttals of his points are substantially abbreviated, as these points are mostly repeats of criticisms authored by Dallam et al. (1999) which we have addressed in greater detail elsewhere in this volume.

Stacked the deck, slanting methods in the direction of conclusions. 

Criticizing our inclusion of only college samples, Spiegel argued that we 

"rationalize this rather odd choice with data purporting to show that the rates of abuse are similar in non-college populations. Even if this were the case, the severity could be different, and the consequences are undoubtedly different." 

This claim, however, is false, contradicted in our article itself (see Rind et al., 1998, pp. 29-31, 42). In our comparisons between college and national samples, not only did we show strong similarity in prevalence rates, but also in severity, reactions, and consequences.

Meta-analysis inappropriately combined studies

Next, Spiegel noted that we weighted large studies more heavily than small studies. The problem, he argued, was that some of the larger studies involved very mild sexual trauma consisting largely of non-contact CSA, which reduced CSA-symptom correlations. Spiegel was referring specifically to the Landis (1956) study, which had a very large sample size compared to most other studies. Contrary to Spiegel's claim, we did not use the Landis study in our meta-analyses of CSA-symptom correlations - our major and most extensive analyses. We did use Landis' reaction data and self-reported effects data, but we used them in a way that maximized rather than minimized negative reports. For the analysis of reactions, where Landis' data were the most numerous and most negative, we computed weighted averages, inflating overall reports of negative reactions.

For analyses of self-reported effects, where Landis' data were the most numerous but least negative, we computed unweighted means or presented results without computing means, which minimized the deflating effect of Landis' data. Thus, Spiegel completely misrepresented how we used the Landis data.
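The distinction between the two averaging schemes can be seen in a small numeric sketch. The sample sizes and proportions below are invented for illustration only; they are not figures from Landis (1956) or any other primary study:

```python
# Hypothetical illustration of weighted vs. unweighted pooling across studies.

studies = [
    # (sample size n, proportion reporting negative reactions)
    (1800, 0.55),  # one very large study, analogous in size to Landis (1956)
    (200, 0.30),
    (150, 0.25),
]

# Weighted mean: each study contributes in proportion to its sample size,
# so the very large study dominates the pooled estimate.
total_n = sum(n for n, _ in studies)
weighted = sum(n * p for n, p in studies) / total_n

# Unweighted mean: every study counts equally, regardless of size.
unweighted = sum(p for _, p in studies) / len(studies)

print(round(weighted, 3))    # 0.506 - pulled toward the large study
print(round(unweighted, 3))  # 0.367 - the large study counts no more than the others
```

With a large study reporting the highest proportion of negative reactions, the weighted mean is pulled upward; had the large study instead reported the least negative data, an unweighted mean would limit its deflating influence, which is the usage pattern described above.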

Ignored PTSD patterns 

Spiegel complained that we did not examine posttraumatic stress disorder (PTSD), impugning our honesty by stating "they manage to omit" this salient symptom. 

We did not include PTSD because, quite simply, the primary studies did not examine it. Furthermore, PTSD implies very severe pathology. Surely someone with PTSD should manifest many of the specific symptoms we did examine, such as depression or anxiety. 

Spiegel also complained that we did not examine patterns of symptoms. This "syndromic" argument is weakened by Kendall-Tackett et al.'s (1993) conclusion that the "first and perhaps most important implication [of their review] is the apparent lack of evidence for a conspicuous syndrome in children who have been sexually abused" (p. 173). 

Given that the Kendall-Tackett et al. review was based exclusively on clinical and forensic samples, it is even more unlikely that evidence for syndromes would be found in general population samples. Indeed, no pattern of symptoms appeared in our review: CSA-symptom correlations were all small, ranging narrowly from r = .04 to .13; sexual problems, which should have been distinguished from non-CSA-specific symptoms according to victimological thinking, fell right in the middle, r = .09.

Evidence says it is both CSA and family dysfunction

Next, Spiegel claimed that the evidence in our study showed that both CSA and family dysfunction caused later distress, not family dysfunction alone. He claimed our statistical analyses relevant to this point were "impermissible," which led to our erroneous conclusions. 

First, the evidence in our study raised doubts that CSA is causally related to symptoms in the typical case. The reason is that, although CSA was related to symptoms (r = .09), family dysfunction was confounded with CSA (r = .13) and was substantially more related to symptoms (r = .29) - i.e., family dysfunction accounted for nearly ten times as much symptoms variance as did CSA. 
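The "nearly ten times" figure follows directly from squaring the correlations, since variance accounted for equals r squared; a quick check:

```python
# Variance accounted for is the square of the correlation coefficient.
r_csa = 0.09     # CSA-symptom correlation reported in the review
r_family = 0.29  # family environment-symptom correlation

var_csa = r_csa ** 2        # 0.0081, i.e., under 1% of symptom variance
var_family = r_family ** 2  # 0.0841, i.e., about 8.4%

print(round(var_family / var_csa, 1))  # 10.4 - "nearly ten times"
```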

These interrelations suggested that CSA-symptom relations should be examined controlling for family dysfunction, which we did: before statistical control, 41 percent of CSA-symptom relations were statistically significant; after control, only 17 percent were. This 59 percent reduction was based on dependent measures; the reduction was 83 percent when based on independent measures (see Rind et al., 1998, p. 40). These analyses contradict the gist of Spiegel's assertion - that the evidence showed that CSA typically or inevitably caused later distress.
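The logic of such statistical control can be sketched with a first-order partial correlation, the standard textbook technique for removing a confound's shared variance. The sketch below simply applies that formula to the three correlations cited above; it is an illustration of the general method, not a reproduction of the review's actual sample-by-sample analyses:

```python
from math import sqrt

# First-order partial correlation: the CSA-symptom correlation (x, y) with
# family environment (z) statistically controlled.
def partial_corr(r_xy, r_xz, r_yz):
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

r_csa_sym = 0.09  # CSA-symptom
r_csa_fam = 0.13  # CSA-family environment (the confound)
r_fam_sym = 0.29  # family environment-symptom

# The already small CSA-symptom correlation shrinks further once the
# confound's shared variance is removed.
print(round(partial_corr(r_csa_sym, r_csa_fam, r_fam_sym), 3))  # 0.055
```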

Second, regardless of what our analyses showed, Spiegel rejected them as "impermissible." Spiegel's full argument, detailed in our longer article in this volume, was that we stacked the deck against CSA in favor of family dysfunction in accounting for symptoms variance because measures of CSA were unreliable and dichotomous, CSA had low base rates, and controlling for family dysfunction in CSA research is invalid. 

These points sound damning but are in fact simply wrong. We systematically and thoroughly addressed them in black and white in our review (see Rind et al., 1998, pp. 41, 43-44), showing that they did not threaten the validity of our analyses. CSA measures were reliable; dichotomization and low base rates produced only an insignificant bias at most; and the problems of statistical control did not apply to our analyses. Given our full attention to these points in our original article, these attacks are especially unwarranted.

False assumption that failure to demonstrate a relationship means there is none

Finally, Spiegel argued that failing to find a relationship does not mean that there is none. Our response is that, in hypothesis testing, the null hypothesis states no relationship, while the alternative states that there is a relationship. It is the null hypothesis that is tested. A statistically non-significant finding does not prove the null, but it certainly does not support the alternative hypothesis. 
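This asymmetry, that failing to reject the null is not the same as proving it, can be illustrated with a small simulation. The numbers are hypothetical and unrelated to the review; the point is general: a genuine but small effect tested with a modest sample is usually missed.

```python
import random

# Hypothetical simulation: a true but small group difference (d = 0.2) tested
# with a modest sample (n = 30 per group) is usually missed by a two-sided
# significance test. A non-significant result therefore does not prove that
# the effect is zero; it may only reflect low statistical power.
random.seed(0)

def one_study(d=0.2, n=30):
    """Simulate one two-group study; return True if |z| exceeds 1.96."""
    g1 = [random.gauss(0.0, 1.0) for _ in range(n)]
    g2 = [random.gauss(d, 1.0) for _ in range(n)]
    diff = sum(g2) / n - sum(g1) / n
    se = (2.0 / n) ** 0.5  # known unit variances, so a simple z-test suffices
    return abs(diff / se) > 1.96

rejections = sum(one_study() for _ in range(2000))
print(rejections / 2000)  # well below 1.0: the real effect is usually missed
```

Most of these simulated studies come out non-significant even though the effect is real by construction, which is exactly why non-significance cannot be read as proof of no effect.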

In our review, we did not - as Spiegel falsely implied - accept the null (i.e., conclude no harm). Instead we correctly concluded that the alternative hypothesis (i.e., pervasive harm) was not supported (see our discussion, Rind et al., 1998, pp. 42, 44, 46). 

Thus, Spiegel's point is a strawman argument; he attacked a misrepresented version of what we wrote. 

Moreover, we were quite cautious in our interpretation of causality, which Spiegel failed to acknowledge. In our discussion of causality (see Rind et al., 1998, p. 14) we carefully qualified our findings by noting that "the statistical control analyses do not rule out causality for several reasons." These reasons were that some CSA-symptom associations remained significant after control, non-significance may have reflected low power rather than zero effect, and some students did report lasting harm, which implied genuine negative effects in these cases. Thus, we did not claim that the null (no harm) hypothesis should be accepted. We did conclude this discussion by stating that, "Despite these caveats, the current results imply that the claim that CSA inevitably or usually produces harm is not justified." Once again, this statement correctly says that the alternative hypothesis is not supported, not that the null is true.

Aside from the quibbling over technical points, a more central issue is that victimologists have clearly maintained that CSA, in all its shapes and forms, is ubiquitously devastating (Best, 1997; Nathan & Snedeker, 1995). 

The Sidran Foundation, whose executive director sits on the Leadership Council's advisory board and with which Spiegel has published, is a multiple personality disorder and recovered memory group that provides literature to therapists and patients. In its brochure, it cites DSM-III-R to note that psychological trauma is an experience that would be markedly distressing to almost anyone. It provides a list of examples, including a serious threat to one's life, rape, military combat, natural or accidental disasters, torture, and a child's sexual activity with an adult. 

This list implies that any form of CSA experienced by anyone, college-bound or not, is comparable to torture and natural disasters. It logically follows, then, that tests assessing the relationship between CSA and symptoms should usually, if not always, find statistically significant relations of strong magnitude. The fact that college study after college study failed to support this assumption in effect debunks it. Rather than our methods and logic coming under fire, these results suggest it is time for the methods and logic of the victimologists to come under intense scrutiny.

Summary

Spiegel's most serious criticisms were 

that college samples are invalid because college students have less severe CSA and fewer consequences than others, 

we biased our meta-analyses with the Landis (1956) study, and 

our statistical analyses were "impermissible." 

These criticisms suggest that Spiegel read inaccurate summaries of our article written by other critics, rather than reading the article itself. As can be readily found in our article, 

the assertion about college samples is false, 

the claim about our misuse of the Landis data is false, and 

the assertion of impermissible analyses is false. 

Spiegel did not do much better in his two less serious criticisms. He misrepresented our conclusions regarding causality, and his arguments concerning PTSD and patterns of symptoms are, at best, debatable. In short, Spiegel's methodological criticisms are severely flawed.

Response to Spiegel's Other Comments

In his closing remarks, Spiegel asserted that the 

"public often feels that psychologists and psychiatrists know a lot but abandon common sense, and articles like this provide ample fuel." 

In view of his flawed methodological critique, this comment is patently misdirected. Such a comment, if it must be made, seems more valid when redirected back to his critique. 

Spiegel characterized our article as having the "appearance but not the essence of good science," claiming that, given the way our meta-analysis was conducted, the facts could not speak for themselves.

Nothing could be further from the truth. 

First, this criticism is empty, given his pervasive misrepresentation of our methodology and analyses. 
Second, unlike most other reviewers of CSA studies, who interpreted them in narrative fashion, which is imprecise and subjective (Jumper, 1995), we precisely quantified the results of other studies and presented them in summary form according to well-accepted statistical protocols (Rosenthal, 1994).

The facts were, for example, that some students reported positive or neutral CSA experiences and reported no harm, while others reported negative experiences and harmful effects. We provided readers with all of this information so the facts could speak for themselves, rather than just reporting in a one-sided fashion only the negative outcomes, as victimologists tend to do in their summaries.

In addition to showing that CSA was statistically associated with symptoms, we assessed the magnitude of this association and showed its relationship with family environment. This statistical approach allowed the facts to speak accurately for themselves, in contrast to many other reviews in which CSA-symptom associations are noted but not qualified by their magnitude or problems of confounding, giving an inaccurate impression of CSA effects.

Spiegel averred that he is sympathetic to the need to constantly test assumptions and that a willingness to be proven wrong is crucial to advancing thinking and treatment. Despite this lofty expression of scientific open-mindedness, he dogmatically commented next that 

"at the same time, I don't believe for a minute that sexual abuse is not emotionally damaging."

He reconciled these conflicting attitudes by asserting that 

"it is inconceivable that a child can meaningfully consent to sexual relations with an adult, and I believe it to be a moral outrage to put forward such an idea." 

But this reconciliation is flawed. As we discussed in great detail in our other article in this volume, our use of the consent construct has been recklessly misinterpreted and misrepresented by our critics. We never stated or implied anything in our article about informed consent; our use was limited to simple consent (i.e., willingness), of which both children and adolescents are capable. 

Moreover, this use was completely scientifically justified because: 

(a) the same construct appeared in many of the primary studies; 

(b) it had predictive validity in these studies, successfully discriminating between willing and unwanted CSA in terms of outcome; 

(c) it has been shown in other studies to have predictive validity (e.g., Coxell et al., 1999); and 

(d) it had predictive validity in our review as well. 

Therefore, although it may be a "moral outrage" to our critics to use the simple consent construct, it would be a scientific outrage not to. The real problem is that a critic claiming to speak for science ignores scientific criteria in favor of moral criteria in constructing his criticisms.

Concluding Remarks

In his closing comments, Spiegel claimed ownership of "clear-eyed reason" in his attack on our article and then labeled our analyses "statistical abuse." 

We do not think that misrepresenting our analyses and methodology, making criticisms as if they were new when in fact we had already thoroughly addressed them in our review, and invoking moral criteria to attack scientifically sound procedures constitute clear-eyed reason. 

One of the most damaging aspects of victimology has been its ever-expanding definition and use of the term "abuse," as well as its willingness to claim abuse when none most likely has occurred (Best, 1997; Nathan & Snedeker, 1995). 

Spiegel's eagerness to mischaracterize our well-conducted analyses - as judged by the reviewers and action editor of Psychological Bulletin originally, then by the APA in its own in-house review after the attacks on our article began, and finally by AAAS - as "statistical abuse" is yet another glaring example.

Despite the Leadership Council's lofty self-characterization, namely that it contains many of the nation's most prominent mental health leaders and is concerned with scientific accuracy, Fink's correspondence with the APA (June 3, 1999) suggests that the group is in fact composed mainly of advocates for recovered memory therapy and multiple personality disorder who are concerned about the growing number of lawsuits that threaten their practices. But these advocates are not representative of the mental health field, as their self-characterization implies. 

Academic researchers generally do not accept the validity of their claims, and comprehensive analysis has shown a virtual absence of scientific support for their views (Brandon et al., 1998). Criticism of their approach to treatment and its probable iatrogenic damage has been intense and growing 

(e.g., Frontline, 1995a, 1995b; Loftus, 1993; Ofshe & Watters, 1993; Nathan & Snedeker, 1995; Pendergrast, 1996). 

Their practice is predicated upon the assumption that CSA, broadly defined, is pervasively and intensely harmful. A challenge to this tenet adds to the pressures they have been facing. As such, it is not surprising that they attacked our review with vigor. But their zeal has led them to distortion, misrepresentation, and dramatic overstatement. Spiegel asserted that we used meta-analysis like a drunk uses a lamppost - for support, rather than illumination. Although witty, this comment more accurately characterizes Spiegel's critique. It is his critique and those of his group that are being used for support rather than providing illumination on this important topic.

