Comparison of Kendall-Tackett et al. (1993) and Rind et al. (1998)

[Page 748 continued]

The last review article on CSA to appear in Psychological Bulletin before ours was that by Kendall-Tackett et al. (1993). This review has been widely cited in the psychology and psychiatry fields as key evidence for the pervasive and intensely harmful effects of CSA. In the same vein, both Dallam et al. (2001) and Ondersma et al. (2001) cited this review as authoritative.

Although Dallam et al. and Ondersma et al. scrutinized the external and internal validity of our review, they did not offer, nor to our knowledge have they ever offered, any criticism of the Kendall-Tackett et al. review; rather, they accepted its methods and findings uncritically.

We argue that this lack of scrutiny of Kendall-Tackett et al., combined with intense scrutiny of our review, represents selective criticism, for much in the Kendall-Tackett et al. review is open to criticism. Now that we have responded to criticisms of our external and internal validity, we examine Kendall-Tackett et al. along these same dimensions.

This comparison will achieve two things: first, it will support our contention that Dallam et al.'s (2001) and Ondersma et al.'s (2001) criticisms of our review were selective; second, it will show the advances our review offers to the field, with its more careful attention and sounder approach to external and internal validity, two substantial indicators of scientific quality.

External Validity

As we noted previously, the Kendall-Tackett et al. (1993) mean effect sizes were .57 for emotional problems and .63 for behavioral problems. These results were based on sexual abuse treatment samples, not nonclinical samples. Compared with nonclinical junior and senior high school students, however, these effect sizes were highly anomalous, lying 2.86 and 3.77 SDs above the mean effect sizes for all studies combined. Clearly, the Kendall-Tackett et al. samples were outliers, highly unrepresentative of the general population of minors.
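To make the standardization explicit (a purely illustrative sketch; the symbols below are ours and are not drawn from either review), an effect size from a clinical sample can be located within the distribution of study-level effect sizes as

z = \frac{ES_{\text{clinical}} - \overline{ES}}{SD_{ES}},

where \overline{ES} and SD_{ES} are the mean and standard deviation of the effect sizes across all studies combined. On this metric, values of z = 2.86 and 3.77 place the Kendall-Tackett et al. emotional and behavioral effect sizes far in the upper tail, which is what marks these clinical samples as outliers.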

The results from our meta-analyses of national and college samples (Rind & Tromovitch, 1997; Rind et al., 1998), on the other hand, were almost identical to the unbiased effect size estimates from the nonclinical junior and senior high school students, as well as to those from the nonclinical samples that Dallam et al. (2001) mentioned in their Table 8, which we summarized in our Table 6.

In our review, we paid explicit attention to the issue of external validity, making appropriate comparisons between college and national samples. By contrast, Kendall-Tackett et al. ignored the issue of external validity except in a single footnote near the beginning of their review. Consigning the issue to a single footnote and omitting it entirely from their Discussion section created the impression that their findings were more broadly relevant than they actually were.

In sum, our review was relatively strong in its treatment of external validity, whereas theirs was weak. Critics of our review on this issue would have more balanced arguments if they applied their criticisms equally to reviews that favor their point of view, rather than accepting them uncritically.

Internal Validity

Our review of the college studies was a critical review of causality. It added to previous meta-analytic reviews, where causality could not be analyzed because the primary studies had provided insufficient data on third variables and statistical control (Jumper, 1995; Neumann et al., 1996). Both Jumper and Neumann et al. pointed to the need for future research to address this weakness.

Our review was one such response to this problem, made possible by the fact that the college studies had sufficient relevant data. Whether our findings regarding causality hold up in future investigations is less important than the fact that they formed a central focus of our presentation, as they should have.

The Kendall-Tackett et al. (1993) review, on the other hand, has been par for the course for victimological research on CSA, accepting CSA's causal role more or less uncritically. This review included various studies based on daycare satanic ritual abuse (SRA), such as one on the McMartin preschool children (Kelly, 1993), in which nearly half the children fell in the clinical range of PTSD symptomatology.

[Page 749]

However, the McMartin case has been so thoroughly discredited as a case of implanted memories of abuse rather than real abuse (e.g., Nathan, 1990; Nathan & Snedeker, 1995) that it seems negligent for Kendall-Tackett et al. not to have informed their readers specifically that these were McMartin data and must be viewed with skepticism.

The dramatic effects in this case were attributed to CSA, when in fact they were in all likelihood iatrogenic. Kendall-Tackett et al. also included a review of SRA cases by Kelley (1989), in which again nearly half the children were in the clinical range of PTSD symptomatology, and once again the impression is given that CSA has dramatic effects, when clear alternative explanations are apparent. For instance, Kelley (1990) showed that the parents of these children were highly disturbed, which suggests that they may have been passing their anxiety on to their children or perceiving it in them even when it was not present. Kendall-Tackett et al. themselves acknowledged that mothers' judgments about their children's symptoms were highly related to the mothers' own level of distress and willingness to believe the children; most reports of child symptoms came from parent-completed checklists.

Kendall-Tackett et al. (1993) dismissed parental reporting bias, noting that therapist judgments were similar, even though children's self-reports were much less negative and mothers' reports were poorly related to their children's reports. But because of researcher bias, including demand characteristics and expectancy effects (R. Rosenthal & Rosnow, 1969; Rosnow & Rosenthal, 1997), one cannot assume the validity of therapist judgments, especially given Kendall-Tackett et al.'s own note of biased reporting: "Unfortunately, few investigators have reported on ... asymptomatic children, perhaps out of concern that such figures might be misinterpreted or misused" (p. 168).

Aside from discussing this issue of measurement validity, which is relevant to internal validity, Kendall-Tackett et al. never directly addressed the issue of causality; rather, they simply assumed it. They assumed it so strongly that they attributed the large percentage of asymptomatic children in their studies to insensitive measures rather than to an absence of caused harm.

Additionally, Kendall-Tackett et al. (1993) inflated the impression that CSA causes harm by calling sexualized behavior a symptom; this was the most common symptom they found. But sexualized behavior is not a symptom of disease or distress; to argue that it is constitutes a value judgment.

As Ford and Beach (1951) observed in their seminal review of cross-cultural and cross-species data,

"as long as the adult members of a society permit them to do so, immature males and females engage in practically every type of sexual behavior found in grown men and women" (p. 197).

 From their review they concluded that

"tendencies toward sexual behavior before maturity and even before puberty are genetically determined in many primates, including humans" (p. 198).

Regarding the relative lack of sexuality among juveniles in our society and similar societies compared with others, they noted that this is a product of the restrictive nature of these societies rather than of the nature of juveniles:

The extreme pains to which adults in these societies are forced to go in order to control the sexual behavior of young people is an eloquent expression of the strength of the tendency on the part of older children and adolescents to engage in such activity. (p. 182)

Ford and Beach's findings argue against labeling sexualized behavior a symptom on a par with depression, anxiety, and suicidal ideation. Such behavior may be inappropriate or undesirable according to social norms or practical concerns in our culture, but these facts are not a valid basis for medicalizing it (cf. Szasz, 1990).

In sum, although our review should not be viewed by any means as definitive regarding internal validity, it was nevertheless a much needed critical review that directly dealt with the issue. Although inferring causality from correlational data is fraught with problems because of unexamined third variables, one is on much stronger ground in inferring lack of support for causality when factors are no longer correlated after statistical control, because correlation is a necessary condition for causation.
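To make this logic concrete (an illustrative sketch; the notation is ours, not that of the reviews under discussion), controlling statistically for a third variable Z, such as family environment, amounts to examining the partial correlation between CSA (X) and adjustment (Y):

r_{XY \cdot Z} = \frac{r_{XY} - r_{XZ}\, r_{YZ}}{\sqrt{(1 - r_{XZ}^{2})(1 - r_{YZ}^{2})}}.

When r_{XY \cdot Z} falls to near zero after Z is partialed out, the data no longer provide support for a causal role of X, because correlation is a necessary (though not sufficient) condition for causation.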

Our review, which followed this logic in assessing causality, represents an advance over the Kendall-Tackett et al. (1993) review, which by contrast paid scant attention to causality, simply assuming it instead, and inflated the impression of causal effects with some questionable data, measures, and definitions of harm. Critics of our review, to be balanced, should also discuss the weaknesses of the Kendall-Tackett et al. review, as well as those of the many other similar reviews.
