The Tennessee Model
“The Tennessee Model of Graduate Education in Clinical Psychology: Integrating Practice and Research”
The Clinical Program Faculty
Prepared by Robert G. Wahler and Kristina Coop Gordon
Department of Psychology, University of Tennessee, Knoxville
The Tennessee Model represents a set of guidelines through which students come to view practice and research as similar enterprises, conducted in an integrated manner that yields maximum benefit in both domains. Integration occurs when the operations of each enterprise inform the other through a cross-pollination process that lends new ideas and fosters productivity in the laboratory and in the consulting room. We believe that this linkage across domains begins when students appreciate the basic similarities between practice and research, leading them to compare the products of the two enterprises. When such comparisons are made, ideas emerge as seemingly different products turn out to be related along dimensions discovered by studying what is done in the laboratory alongside what is done in the consulting room. For example, a student clinician who is frustrated by her client’s resistance to interpretations could gain a fresh perspective by reviewing a colleague’s research on cognitive styles. Conversely, the same student might formulate a new research hypothesis about the resistance phenomenon after she successfully engages her client by acknowledging their differing cognitive styles.
Over the past decade we have systematically constructed the Tennessee Model through a number of curricular and mentorship changes designed to promote students’ interest in, and efforts toward, integrating their practice and research experiences. We did so in the face of evidence that our longstanding Boulder Model had fostered students’ perceptions of practice and research as separate experiences, along with a decided preference for practice over research (Cumberbatch & Wahler, 1999).
Development of the Tennessee Model began in 1993 with our discussion of a paper by Belar and Perry (1992), which offered guidelines for the integration of practice and research developed through the Gainesville Conference sponsored by the American Psychological Association. This paper stimulated further faculty discussions, leading to a position paper (Handler & Wahler, 1995), changes in assistantship assignments, new courses, a new comprehensive examination, and new options in advanced practicum experiences. In addition, our student recruitment brochure highlighted our search for applicants whose interests were equally focused on practice and research. With the curricular changes in place, and with each faculty member continuing to model the working roles of both practitioner and scientist, we continued to articulate and operationalize our new program philosophy. Out of discussions among faculty and students, plus feedback from three teams of APA site visitors, we formulated the two guiding principles mentioned in the introductory paragraph of this paper: (1) students must appreciate the similarities between the operational steps of practice and research, and (2) students must actively compare and synthesize findings from these two enterprises. We assumed that the critical thinking required of students guided by these principles would generate the cross-pollination that constitutes integration.
The remainder of this paper will outline the two guiding principles of our Tennessee Model, including the model’s rationale and illustrations of its educational operations. Lastly, we will present a schematic diagram showing how cross-pollination has emerged from faculty and student synthesis of practice and research findings, including examples of this process in ten-year snapshots of faculty and student performance.
The Generic Nature of Practice and Research
Differences between these two enterprises are more apparent than their similarities. Science is defined by conservative standards of methodology, analysis, and proof that require its proponents to be tidy and detached. In contrast, practitioners operate under a more liberal interpretation of these standards, since their work is by necessity both messier and more personalized. Were these features to be highlighted, as they often are via the parallel paths of the Boulder Model, students would be likely to perceive separations, to accept dual identities, and to prefer one enterprise over the other. Popular opinion holds that the budding scientist learns to use a conscious and formal mode of thought, while the practitioner learns an intuitive mode of thinking (see Epstein’s discussion article, 1994). When the dichotomy is set forth in this way, it invites separation along with differential allegiances and value judgments: mutual arrogance abounds and communication across domains diminishes.
In actuality, intuitive and formal modes of thinking are equally necessary for the performance of good practice and good research. Scientists start their work with ideas emerging from hunches derived from sources extending well beyond the data of a previous study. They talk to their colleagues, they think in their armchairs, they dream, they read novels, and they have spiritual experiences. Even while following the data from previous research, they often view the significance of those data in a context of gut reactions about what they “really” mean. In other words, good scientists are phenomenologists before and after they employ the methods defining them as scientists. The introduction and discussion sections of their papers, while largely summarizing data-based inferences, often include phrases such as “I suspect,” “based on my experience,” and “my best guess is.” Intuitive thinking is valued by scientists because it offers breadth in considering the information relevant to hypothesis formation. Given a body of information comprised of hunches and facts, it is up to the scientist to clarify and synthesize this information and thus to move it into a stream of conscious, formal thought leading to the generation of a testable hypothesis.
Like the scientist, a practitioner who begins work with a client follows the same process of gathering information in order to generate hypotheses that will then be subjected to validity testing. As in science, this body of information is comprised of hunches and facts, reflecting the practitioner’s use of both intuitive and formal thinking. Thus, practitioners gather facts known to be relevant to psychopathology and to the process of intervention, such as the client’s gender, culture, education, income, job status, and health. During this information gathering, the practitioner may also decide to conduct psychometric testing through formal instruments sampling the client’s cognitive performance and emotional regulation. Throughout this fact-finding operation, the practitioner also makes continual use of intuitive thinking. Through intuition, the practitioner highlights the client’s use of certain metaphors and phrases, the client’s reluctance to elaborate some descriptions of personal experience, and the client’s range of affect. These events are noted as important because of the practitioner’s work experience with clients and personal beliefs about the underpinnings of psychopathology and the process of behavior change. As in science, the practice process then requires the clinician to synthesize the gathered information and to articulate the products as hypotheses to be tested, in this case through further dialogue with the client. Treatment plans embody these hypotheses as products of the assessment phase, in which the practitioner has used facts and hunches as the raw material for formal deliberations about what to do next.
The methodologies of hypothesis testing in the laboratory and in the consulting room obviously differ in attention to detail and consistency of application. Practice methodology is personalized and therefore harder to monitor and control than the rigorous methods of science. However, both sets of methods serve the same purpose, and both practitioner and scientist pride themselves on their efforts to sustain objectivity as best they can. Practitioners know that they are at risk for bias in using their personalized methods (e.g., countertransference), and scientists are compelled to demonstrate the reliability and fidelity of their methods in their efforts to avoid confirmatory bias.
Hypothesis testing methods in the consulting room can be categorized into the same correlational and experimental strategies used in the laboratory. Thus, after a period of assessment, the practitioner may look for covariations expected in the already gathered information. For example, a hypothesized social dependency pattern might be expected to emerge in an alcohol-addicted client’s descriptions of his or her relationships. Although the same strategy used in a laboratory setting would provide a far more detailed and quantified description of this social dependency–addiction covariation, both strategies make use of correlational logic in hypothesis testing. Experimental strategies follow similar courses when used by practitioners and scientists. Returning to the previous example, a practitioner’s use of interpretation is an experimental probe designed to shed light on the suspected causal linkages between a client’s chemical addiction and social dependency. The interpretation, “What do you think about your fears of being alone and your desires to drink?” is clearly designed to explore causality in the covariation already established during assessment. Once again, the scientist’s experimental interventions are far more compelling and replicable, but they nevertheless amount to the generic equivalent of the practitioner’s strategy.
In our final comparison of practice and research, consider the dissemination phase of operations in the consulting room and in the laboratory. Dissemination is the process of sharing the results of hypothesis testing with the audience holding a vested interest in those findings. The scientist’s audience is comprised of journal or grant reviewers and, if the scientist is a student, a mentor; the practitioner’s audience is made up of the client, a supervisor in the student clinician’s case, and the managed care reviewers representing third-party insurance companies. All members of this audience provide critiques of what was found, leading the practitioner and the scientist either to proceed with encouragement or to drop back and reconsider. The critique is a keystone experience for all parties, who will ideally consider the findings and their subsequent review with objective attitudes. Of course, this attitude is not likely to materialize at first, because vested interests generate personalized reactions, as seen in hurt feelings, accusations of unfairness, anxiety, entitlement, and opposition to any new viewpoint. The burden of objectivity usually falls on the shoulders of practitioner and scientist, who must overcome their personalized reactions, accept the reviewers’ rights, and then make concerted efforts to see the credibility of these critiques. For example, good practitioners do not dismiss clients’ objections as resistance, and good scientists do not deride journal editors’ criticisms as motivated by professional jealousy. Practitioners and scientists must carry this burden simply because members of the audience are in the driver’s seat; they can quit psychotherapy, decline to publish papers, refuse to authorize insurance support, and deny funding of grant proposals.
Dissemination generates a constructive feedback loop for practitioner and scientist and, as such, it promotes better practice and better science. When a reviewer’s critique occurs, opportunities to improve one’s practice and one’s research abound, as long as the recipient is willing to be open-minded about the reviewer’s commentary. These are also prime opportunities for the novice clinical psychologist to learn how to open information channels between his or her laboratory and consulting room. When the student becomes frustrated in either domain and then, with help from a mentor, grows more open-minded, the feedback loop can acquire useful breadth through relevant information from both domains. Of course, the mentor is the key player in determining how this frustration is resolved. If the teacher believes in the practice-research integration model, a resolution will be generated through discussing topics that the frustrated student does not always see as relevant. It is easier to construct a frustration-reducing feedback loop by staying in the domain in which the frustration occurred, even though the long-term benefits of this strategy are usually smaller. For example, a student practitioner whose client is experiencing progressive increases in post-traumatic anxiety wants clinical information about what to do next. After pursuing hypotheses about the client’s pervasive anxiety, including the use of psychoactive medication, the student is worried and frightened and looks to the mentor for an answer. If, during the discussion, the mentor asks the student what the research literature says about PTSD, the student is apt to become impatient, wanting instead to stay with a review of the psychotherapy process. The mentor’s question might be prompted by a belief that the student is inordinately focused on the client’s recounting of the traumatic event, in contrast to the client’s pre-trauma personal adjustment. While it would have been more expedient for the mentor to point out this flaw in the student’s handling of the clinical process directly, such a single-domain focus might also have strengthened the student’s dependence on information from the practice domain alone.
Similar scenarios of mentoring emerge in the laboratory when students face critiques during the dissertation process. Once again, the student scientist is inclined to look solely at flaws within the focal domain (i.e., the laboratory), while the mentor insists on a broader review that includes examination of the student’s practice experiences. For example, a student’s dissertation pilot work gets bogged down when a committee member argues that the single measure of anger proposed by the student does not adequately capture the “hostility” construct under investigation. In conversations with the mentor, this student presents a list of additional measures drawn from a new literature search focused on other studies of hostility. The mentor is impressed, but also asks the student to review a clinical case known to both of them. In the student’s review, the male client’s sense of inadequacy was highlighted as a major component of his chronic hostility, and this component became an intervention target along with his outbursts of anger. Both mentor and student are puzzled because none of the hostility checklists in the research domain included items reflecting the subject’s self-esteem. Based on the ensuing discussion, the student decides to include such a measure in a new set designed to assess the hostility construct. As in the practice example, a more complete within-domain search might have uncovered the relevance of self-esteem to the hostility construct. However, the mentor’s redirection of the student’s search probably served to broaden the novice’s view of where research ideas originate.
Evolving Ideas in Practice and Research: Genealogy of Mentor-Student Collaboration
When the Tennessee Model was initiated in 1993, faculty mentors discussed the practice-research integration concept with their students. Students, who work with a number of supervisors in the practice domain, usually settle on a single research mentor by the beginning of the third year of their campus-based doctoral studies. Since each research mentor also has some supervisory contact in a student’s clinical practica, their collaborative work can include dialogue about commonalities between their practice and research enterprises. Thus, the stage is set for a fertile exchange of ideas within the dyad, and some of these ideas are likely to have roots in one domain while also yielding productive work in the other. This sort of cross-pollination can be traced retrospectively by charting the lineage of the theoretical constructs guiding a mentor’s work in the laboratory and in the consulting room. If continuity in the use of a construct and its work products is evident in both domains, support for the integration of practice and research is attained. Once the lineage is described for a mentor, we would expect to find a similar progression among the mentor’s students, as these junior practitioners and scientists synthesize their work in the consulting room and laboratory.
Given that a faculty mentor is an active practitioner as well as an active scientist, integration of these activities is more likely to occur if the mentor utilizes a programmatic strategy in both domains. The term “programmatic” implies continuity in one’s activities, such that a systematic progression can be seen over time in the ideas that promote those activities. Ideas are eventually formalized into constructs, promoting a shift from intuitive to formal thinking about the guidelines that emerge from the ideas; ultimately, these guidelines lead to changes in one’s activities. Thus, programmatic practitioners and scientists ought to show evidence of systematic change in their guiding constructs as they go through the cycles of hypothesis testing and dissemination described in the earlier section of this paper. Of course, as one’s constructs change, the other activities involved in one’s practice and research will be altered accordingly, and so goes the progression in both domains. If the mentor believes in the generic nature of practice and research, and he or she follows a programmatic strategy in both domains, the ideas evolving from activities in these domains are likely to be contrasted and perhaps synthesized into constructs applicable to all the activities. As students move into apprenticeship and later collegial relationships with their mentors, they are bound to share ownership in this evolution of ideas. They are also apt to adopt the mentor’s programmatic strategy, to co-author papers with the mentor, and to talk about new developments in their practice and research activities.
References
- Belar, C. D., & Perry, N. W. (1992). The national conference on scientist-practitioner education and training for the professional practice of psychology. American Psychologist, 47(1), 71-75.
- Cumberbatch, C. J., & Wahler, R. G. (1999). Waiting for Godot in the midst of managed care: Reports by a cohort of Boulder model clinical psychologists about their graduate education and professional activities. Manuscript submitted for publication.
- Epstein, S. (1994). Integration of the cognitive and the psychodynamic unconscious. American Psychologist, 49, 709-724.
- Handler, L., & Wahler, R. G. (1995). The Tennessee model of human science: The marriage of clinical research and practice. Unpublished manuscript, University of Tennessee, Knoxville.