Shoulda, coulda, woulda: What listening to Joe Durlak might have done

Photograph of Jean Rhodes, used with permission from the author

Jean Rhodes is the Frank L. Boyden Professor of Psychology and the Director of the Center for Evidence-Based Mentoring at the University of Massachusetts Boston. Rhodes and her students focus on two distinct but interrelated programs of research: (a) informal and formal mentoring in the lives of adolescents and young adults and (b) risk and protective factors in young adult survivors’ responses to natural disaster. The overarching goal, instantiated in both programs, is to understand the role of social connections in the adaptive functioning of individuals and to specify the underlying processes by which these connections contribute to positive outcomes. To address this, Rhodes and her team explore how relational processes unfold across development and social ecologies. To read more research and opinion on mentoring, please visit the Chronicle of Evidence-Based Mentoring.

In 1979, a young psychologist named Joe Durlak published a controversial study in Psychological Bulletin that sent ripples through the helping professions. Durlak set out to combine all published studies that had compared the outcomes of experienced psychologists, psychiatrists, and social workers with those of paraprofessionals (i.e., nonexpert, minimally trained community volunteers and helpers). His analysis of 42 evaluations led to a provocative conclusion: almost across the board, paraprofessionals were at least as effective as trained professionals. Overall, paraprofessionals were comparable to trained mental health professionals, and in 12 studies they were actually superior. In only one study were professionals significantly more effective than paraprofessionals in promoting positive mental health outcomes. As Durlak concluded, “professionals do not possess demonstrably superior therapeutic skills, compared with paraprofessionals. Moreover, professional mental health education, training, and experience are not necessary prerequisites for an effective helping person” (Durlak, 1979, p. 6). Such data challenged mental health professionals to look more closely at the nature and efficacy of mental health practices.

Over the next five years, researchers using more sophisticated meta-analytic procedures replicated these promising trends, even after controlling for the difficulty of the patients with whom professionals were working. “The average person who received help from a paraprofessional was better off at the end of therapy than 63% of persons who received help from professionals” (1984, p. 536). Similar studies have continued to demonstrate paraprofessionals’ effectiveness in delivering preventive interventions (Conley, 2016). These studies suggest that, under the right circumstances, mentors and other caring adults can effectively support youth who lack access to trained professionals.

But there is a critical caveat: paraprofessionals with more experience showed the strongest effects relative to professionals. Moreover, the most effective paraprofessionals in Durlak’s study were those whose efforts were focused on specific target problems (e.g., depression, healthy behaviors) rather than on general, broad outcomes. For instance, Durlak cites a study by Karlsruher (1976), who found that unsupervised college students were ineffective in helping maladapting elementary school children, whereas carefully supervised students achieved successful results equal to those of trained professionals. Many of the paraprofessionals in Durlak’s study had received 15 or more hours of training. As Durlak concludes, “Judicious selection, training, and supervision might well account for paraprofessional effectiveness in comparative studies.”

Durlak also made a prescient observation: “Paraprofessional effectiveness in some studies may be due to the development of carefully standardized and systematic treatment programs…In these programs, treatment has consisted of a programmed series of activities. Presumably, the more intervention procedures that can be clearly described and sequentially ordered in a helping program, the easier it will be for less trained personnel to administer them successfully. Paraprofessionals may feel more comfortable and hold higher expectations than professionals when using standardized clinical procedures, and these factors could contribute to paraprofessionals’ clinical effectiveness.” Paraprofessionals’ commonsense “real-world” solutions may have been particularly appealing (Baker & Neimeyer, 2003), but their clinical success may be most closely related to professionals’ abilities to define, order, and structure effective sequences of helping activities when training or supervising paraprofessionals. In other words, in Durlak’s study, the paraprofessionals may have been outshining the professionals not because they were inherently more empathic, but because they were more clearly defining and structuring their helping activities, at least relative to many of the emerging treatments of that time.

Nevertheless, the trope of the healing power of a close mentor relationship, guided mostly by intuition and kindness, continues to shape the views of most youth mentoring researchers and practitioners. Most scholars embrace the story of an “enduring emotional attachment” as the key “active ingredient” in mentoring (Li & Julian, 2012). They argue that interventions often produce weak outcomes because they focus on “inactive” ingredients that do not promote developmental relationships, such as mentor incentives and training curricula.

Despite data to the contrary, this misplaced emphasis on the friendship model alone is reinforced in most mentoring organizations. There is no shortage of rigorous meta-analyses of youth mentoring programs showing small overall effects, but these studies are no match for the emotional appeal of a compelling anecdote or a well-argued piece that confirms our biases. These messengers mean well; the individuals, programs, and organizations that share overly encouraging verdicts about mentoring on their websites and promotional materials believe deeply in the power of their youth programs and rarely have the statistical expertise to fully scrutinize or qualify the available data. Presenting this idealized representation of mentoring relationships is also likely driven by the need to appeal to donors. Indeed, programs often demonstrate their success to funders not by providing decks of slides with mixed evaluation results but by showcasing successful matches and the heartwarming stories they represent. Even when claims are later qualified, encouraging numbers have incredible staying power, and the urge to cherry-pick them is almost irresistible in a competitive funding landscape.

To compound this problem, as the field holds fast to its preconceptions, we easily find “evidence” that supports the viewpoint that “mentoring works” while ignoring counterfactuals. To illustrate this point, Tavris and Aronson have described the “Problem of the Benevolent Dolphin.” As they note, every once in a while a news story appears about a shipwrecked sailor who, on the verge of drowning, is nudged to safety by a dolphin (most recently, a 19-year-old man described how a dolphin or sea lion kept him afloat long enough to be rescued by the Coast Guard after a suicide attempt off the Golden Gate Bridge in San Francisco). As Tavris and Aronson explain: “It is tempting to conclude that dolphins must really like human beings, enough to save us from drowning. But wait – are dolphins aware that humans don’t swim as well as they do? Are they actually intending to be helpful? To answer that question, we would need to know how many shipwrecked sailors have been gently nudged further out to sea by dolphins, there to drown and never be heard from again. We don’t know about those cases because the swimmers don’t live to tell us about their evil-dolphin experiences. If we had that information, we might conclude that dolphins are neither benevolent nor evil; they are just being playful.” The authors then turn to psychotherapists, who, in the absence of rigorous experimental studies, can easily summon up “evidence” that their clients are improving and that their approaches are working.

It is tempting to consider where the field of mentoring would now be had it aligned with targeted preventive interventions and taken a deliberate approach to training and supervising paraprofessional mentors. Alas, ideological and professional drivers pushed the pendulum of mentoring away from targeted approaches that deploy well-trained paraprofessionals following evidence-based protocols with fidelity (Durlak’s recommendation) toward the unspecified, often perfunctory, and only modestly effective formal mentoring relationships we have today. In the meantime, prevention science and the helping professions have become increasingly disciplined and effective. Where would mentoring be today had its allies demanded the rigor and discipline suggested by Joe Durlak more than 40 years ago?

This blog was written by Jean Rhodes and first posted by the Chronicle of Evidence-Based Mentoring. Used with the author’s permission.

Interested in research on mentoring? Check out these pieces on communitypsychology.com:

Natural Mentoring is Good for All Youth

Are Current Mentoring Models Bad for Kid’s Health

You Can MAKE better Mentors

Lab as Family: Creating Kinship Networks on Campus for Community Based Work