Grand Traditions, New Leadership, and New Opportunities
Thanks! We've had quite a turnover in the leadership of SSCP this January. Susan Mineka completed her term as past president; Jack Blanchard stepped down as Secretary-Treasurer; Dianne Chambless left the position of Division 12 representative; Paul Rokke completed his tour as editor of Clinical Science; Tom Oltmanns passed the baton of SSCP program chair; and Karen Calhoun finished her work as chair of the fellows' committee. All of these people have worked hard and effectively at making SSCP work. Without diminishing the contributions of others, I would like to offer particular thanks to Jack Blanchard. Jack administered a burgeoning budget as our membership soared; he kept the dues coming in with timely mailings to members; his careful notes and minutes provided a badly-needed corporate memory for the rest of us; his gentle prods to me as the new guy (and to others, I suspect) kept me on course with unfamiliar tasks; and his updating and improvement of the directory of research opportunities in internship programs provided a badly needed source of information to scientifically oriented students. Jack also volunteered readily for more than his share of the myriad little jobs that arise and remain invisible unless they don't get done. A hearty thank you to all of our outgoing leaders, and especially to Jack.
Our leadership for this year includes Ian Gotlib as past president and Alan Bellack as our new president-elect. Paul Rokke has assumed the Secretary-Treasurer's role (and continues as SSCP web page manager). Our newly elected representative to Division 12 is Sheila Woody. Bill Follette assumed the job of editor of Clinical Science with this issue, Andy Meyers has taken over as program chair, and Tom Ollendick as chair of the fellows committee. Welcome to all, and thank you for pitching in. I'm also pleased that we have the continuing contributions of Karen Rudolph as chair of the membership committee, Jack Blanchard as chair of the publications committee and editor of the research in internships directory, and Debjani Mukherjee and Lori Osborne as student representatives. I'm honored to work with such a collegial and hard-working bunch.
I am honored also to announce that the 1997 SSCP Distinguished Scientist Award will be presented to Michael Goldstein. Plans are being developed for a special awards ceremony in his honor during the APA convention. The SSCP Outstanding Dissertation Award will also be presented at the awards ceremony. Taking home the plaque will be Deanna Barch, whose dissertation served as the basis for the following publication: Barch, D. M., & Berenbaum, H. (1996). Language production and thought disorder in schizophrenia. Journal of Abnormal Psychology, 105(1), 81-88. Both of these awards are well-deserved.
Task Force on Psychological Interventions
While the task force that has been pressing for definition of empirically validated treatments (EVTs) is officially an organ of Division 12, the idea took root under the leadership of Dianne Chambless and other SSCP members. The division has decided to continue the task force for another year, and has appointed Paul Pilkonis as its new chair. Controversy continues to follow the task force and its work, but its impact has already been formidable. The Task Force report was one factor leading to the strengthening of statements in accreditation guidelines and procedures regarding research foundations of clinical practices taught in both graduate and internship programs. One of the most exciting impacts of this movement is the increased willingness to do the difficult, previously neglected outcome studies needed to demonstrate therapy effectiveness.
Debate continues over the potential negative effects of this movement towards defining EVTs. That debate itself is useful, and will hopefully provide leads for improving both the process of defining EVTs and the uses to which such definitions are appropriately put. Some SSCP members have been invited to talk about EVTs and the Division 12 initiative before audiences that have historically greeted such topics with intense inattention.
APPIC and APA are now collaborating to collect data regarding what happens to internship applicants each year, data which have not been available in the past. If APA comes through, we should finally have an ongoing, year-to-year picture of how the ratio of internship slots to applicants is changing, which frames the problem more appropriately than simply asking "why aren't there enough internship slots?" APPIC is sponsoring a conference next October, perhaps in concert with APA and other organizations, that will bring together students; representatives of graduate training, internship, and postdoctoral programs; and psychologists with expertise in the changing job market. The focus will be on manpower needs and how best to meet them rather than the simple and limited focus on providing more internships. Tough questions need to be asked about the number of people completing doctoral programs, internships, and postdocs, and how these numbers compare to existing and developing employment opportunities.
The manpower situation is fluid and obviously troublesome. One particular aspect of this problem has been largely ignored, and I hope will be addressed in the APPIC conference and other forums: I am concerned that the programs that are now deciding individually to act on the situation and cut their enrollments are some of the strongest and most responsible of our programs. If such cuts are made only by scientifically rigorous applied programs, we may be further diluting the scientific base of practicing psychologists.
One parting note: SSCPNet has been ablaze recently with yet another internship issue, spurred by the decision at the meeting of the Council of University Directors of Clinical Psychology to consider endorsing the move of internships to the post-doctoral level. CUDCP will be putting the issue to its full membership for a vote. Be ready for some intense debate on this one. Write to me with your views as to whether moving the internship post-degree would impact scientific clinical training.
On Behaving Scientifically: Fallibilism, Criticism, and Epistemic Duties
University of Nevada, Reno
Clinical psychologists often talk about science. There is talk of the "scientist-practitioner" model of clinical psychology; talk of applying basic research findings to clinical phenomena; and talk associated with consuming and/or producing clinically relevant scientific research. Few clinical psychologists would argue that science is irrelevant to their professional behavior.
However, a major problem in our profession is that too few professionals consistently behave in a way that shows they actually understand, value, and engage in science. The modal number of publications is zero; the average number of articles or books read per year is quite low; and too few apply empirically validated treatments (see O'Donohue, Curtis, & Fisher, 1985, for a review). Importantly, too few approach clients suffering from problems for which we have no known effective treatments in a way that is scientific (Barlow, Hayes, & Nelson, 1984).
This state of affairs has sometimes produced considerable tensions between certain practitioners and certain researchers. The typical summary of this tension is that the practitioner claims that the researcher is not producing relevant research and the researcher feels that the practitioner does not properly consume what relevant research there is. There is an important asymmetry that needs to be noted here: this tension leaves the practitioner in a much more precarious position than the researcher. The researcher can claim with some legitimacy that their research was not designed to address that particular practical question, or any particular practical question. Basic research is a legitimate area of inquiry. However, if the practitioner's lament is true and no research is relevant to the practitioner's behavior, how can the practitioner legitimately represent what they are doing as based on expert knowledge? And if they cannot claim that their behavior is based on expert knowledge, how can they legitimate their behavior, their fees, and the (implicit or explicit) representation to the client that they are behaving in a professional, expert manner?
Why all this approving talk about science in the abstract but so little concrete substantive scientific behavior? Here are some of my conjectures. First, science and its correlative terms are clearly honorifics. To say that one is behaving scientifically is to attribute a certain value to one's behavior. Thus, one reason for the disparity between an abstract valuing of science and deficits in substantive scientific activity may be the general human tendency to speak highly of some socially valued activity (like giving to charity) to gain the attendant advantages of such posturing, while actually doing very little of it. As a result, we need to exert care to see whether there is real substance behind claims of a relationship to science or whether this is merely a self-serving label.
Another reason may be that some are confused about what science is and in this confusion think that science is optional, too much hard work, not cost-effective, or takes too much talent. Some also can be distracted by the facade or trappings of science. Thus, part of the explanation for the dilatory behavior of so many regarding science is that there is not a clear understanding of the essence of science, and a failure to see that a scientific approach to matters is not optional but essential for effective problem solving in clinical pursuits. I hope that if more were to see the essence of science and its indispensability for effective problem solving, more clinicians would behave scientifically.
What is the substance of behaving scientifically? And why is it a good thing to behave in this manner? The Popperians have suggested that science begins with fallibilism: the notion that our current beliefs, despite the fact that they are "ours" and despite all the attractions they hold for us, may still be wrong. (Fallibilism also applies to the previously stated belief; see Bartley (1984) for his comprehensive critical rationalism.)
That our set of beliefs may not have optimal verisimilitude can be quickly discerned from a few considerations. First, we frequently encounter refutations of our beliefs in our daily life. Teams that we are sure will win, lose. People who we are sure will behave in one way, don't. The short cut, isn't. Another way we can see that we may be wrong is that people disagree with us. The logical principle of noncontradiction entails that when we assert A and someone else asserts not-A, one side is wrong.
Science is concerned with error and ways of eliminating error. A vigorous scientific attitude involves a desire to detect error in one's beliefs and to attempt to replace these beliefs with ones of greater accuracy. Science involves belief change, i.e., learning. (It is ironic that those committed to a thoroughgoing scientific approach are often called "dogmatic," as they are dedicated to the very process of belief change. It is those outside the purview of science who are in the most important sense dogmatic; they far less frequently experience the corrective potential of science.)
An assumption of psychometrics, for example, is that all measurement contains error. Statistical inference is concerned with the error of falsely rejecting the null hypothesis. Experimentation is concerned with valid (not invalid) causal inference, and so on. This is as it should be: we need to be worried that our confirmation biases and other heuristic errors may be influencing us to believe something we ought not.
When we are conducting psychotherapy, we can make the following kinds of errors:
1. False descriptive statements. We can claim, for example, that our client never thought of suicide in the preceding week, when in fact she thought of it four times.
2. False causal statements. We can believe that our client's erectile dysfunction is caused by performance anxiety, when in fact it is caused by a neurological problem.
3. False ontic statements. We can believe that things exist when in fact they do not. We may believe that there is something like an inner child, when there is not.
4. False relational claims. We can believe that therapy x produces more change than therapy y, when this is not the case.
5. False predictions. We can believe that therapy x in certain situations will result in the greatest change for this client, when it does not.
6. False professional ethical claims. We can believe that it is ethically permissible to have a certain kind of extra-therapeutic relationship with our client, when it is actually ethically impermissible.
Next one must realize that all professional behavior is based upon knowledge claims. That is, when I recommend that my client take test T to measure his depression, this act is based on a knowledge claim, to wit, that in this situation test T is the most accurate, cost-efficient, etc., manner in which to measure my client's depression. Furthermore, this act is based on the knowledge claim that I know it is a priority to measure my client's depression in this situation. When I start treating my client with psychotherapy P, the knowledge claim is that my client has the kind of problem that therapy P should be given for and that therapy P is the most cost-effective way to treat this problem. O'Donohue (1991), for example, has attempted to explicate the beliefs associated with choosing a treatment:
1. Treatment goal G entails the realization of states of affairs S1 ... Sn.
2. F1 ... Fn are all factors that are known or hypothesized to be causally relevant to S1...Sn.
3. F1...Fn are cost-efficient methods to obtain S1....Sn.
4. F1...Fn are the least restrictive methods to obtain S1... Sn.
5. Client C is fully informed concerning alternative methods, the costs and benefits of these methods and consents to the use of F1 ... Fn.
6. If Premises 1, 2, 3, 4, and 5, and ceteris paribus, then F1 ... Fn are the proper treatment methods to use in this case.
7. Ceteris paribus
Therefore: F1 ... Fn are the proper treatment methods to use in this case.
Next, one must realize that all knowledge claims need to be evaluated with respect to the quality of the epistemic procedures used to form them. If I claim that I know that tomorrow you will experience a serious stressor because I have read your horoscope, then one must evaluate horoscopes as a means of reliably attaining such knowledge.
Importantly, the decision regarding what epistemic methods we should use has been made for us. Our profession, through its training model and through its ethical code has explicitly stated that the way we seek to gain knowledge is through science. This is a wise decision because the application of science to problems has caused a historically unprecedented growth of knowledge. The problem is not that our profession is committed to a problematic epistemology. The problem is that too many clinical psychologists' commitment to this epistemology has been too superficial, sporadic, and rhetorical.
Next one must realize that epistemic errors in clinical practice can cause serious harm. When we make a descriptive claim about a client that is not true, for example, we can miss something that actually required treatment and thus prolong the client's suffering. On the other hand, over diagnosis can stigmatize a client and cause them to have negative beliefs about themselves that are not true. Practicing therapies that we erroneously believe cause change, when they do not, wastes everyone's resources. Treating a client with therapy A when therapy B is more effective, needlessly prolongs the client's suffering and results in an inefficient expenditure of resources. Economists state that all activities have opportunity costs: engaging in one has the cost of forcing the actor to forge engaging in another. All ineffective therapies have opportunity costs in that they displace the opportunity to engage in other more productive options. Let me not mince words here. Our epistemic mistakes can cause a lot of harm. Children can be abused again and severely injured or killed. Suicides can be committed. Wives can be beaten. Overdoses can be taken. Marriages can end. Arrests can occur. Illnesses can be exacerbated. As clinical psychologists we can be meddling with situations in which the stakes are very high.
To behave scientifically is to behave in an explicitly critical manner, particularly in a self critical manner. That is, one acknowledges that one's beliefs may be in error and one seeks to rigorously criticize one's beliefs to see if they are in error and thus in need of revision. Why is this a good thing? Because we are often wrong and because criticism allows error to be eliminated and knowledge to grow.
Before I discuss this characterization of science in more detail, I want to acknowledge that this answer might be somewhat surprising. Some might think that science involves the inductive building of generalities from observations of particulars. However, this is not the case, both because of the problems in establishing a sound logic of induction and because induction is simply a special case of the more general property of science: criticism. That is, if one is trying to criticize the belief that all swans are white, it is reasonable to examine a large sample of swans. However, what one is actually doing is attempting to criticize the universal claim by seeing if one can uncover a nonwhite swan. Others may claim, following Kuhn, that science is the uncovering of a puzzle-solving paradigm that comes to serve as an exemplar for future problem solving. However, Kuhn's methodology for coming to this generalization is quite problematic, as he studied a few instances in the history of science and then extrapolated to all sciences (see O'Donohue, 1993, for a critique of Kuhn's metascientific claims). Moreover, there is still an important role for criticism, as Kuhnian scientific revolutions occur when a new paradigm exceeds the problem-solving ability of the older paradigm by producing fewer anomalies.
The characterization of science that I am giving here is a neo-Popperian one. In this view science is simply an epistemology, a way of knowing. The view is that we all start out with a "web of belief." For example, a clinical psychologist may hold beliefs such as "If I do therapy x with this client she will improve"; "Multiple personality is caused by child sexual abuse"; and "I can accurately measure someone's sexual preferences by a clinical interview." Science begins when one realizes that one's current beliefs, no matter how "commonsensical," no matter how well they seem to cohere with other beliefs, no matter how much they are generally accepted by others, and no matter how many times they have appeared to be confirmed by one's experience, may still be false. Science begins with the epistemologically humble attitude of: I may be wrong.
The next step is exposing beliefs to criticism to see how well they withstand it. This step involves designing tests of the belief. Sometimes testing beliefs is easy. If I believe my wife is on the phone, I simply need to pick up the extension. Because we have only one line, if I hear a dial tone then I have falsified my belief. In another example, if I believe that all depressives make internal, stable, global attributions, then a reasonable form of criticism is to gather a decent sample of depressives, accurately measure their attributions, and see if this experience contradicts the prior belief. If I believe some proposition that is expressed more tentatively, such as "More people who are depressed make global, stable attributions than people who are not depressed," I essentially do the same thing. I just need to get a little more mathematically sophisticated, as I need to look at correlations or conditional probabilities.
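The comparison of conditional probabilities described above can be sketched in a few lines of code. This is only an illustration with invented toy data (the records, the function names, and the rates are all hypothetical), not a real study:

```python
# Hypothetical records: (is_depressed, made_global_stable_attribution).
# The data below are invented purely for illustration.
sample = [
    (True, True), (True, True), (True, False), (True, True),
    (False, False), (False, True), (False, False), (False, False),
]

def conditional_rate(records, depressed):
    """Estimate P(global/stable attribution | depression status)."""
    group = [attrib for dep, attrib in records if dep == depressed]
    return sum(group) / len(group)

p_dep = conditional_rate(sample, True)      # rate among depressed
p_nondep = conditional_rate(sample, False)  # rate among non-depressed

# The tentative belief survives this (tiny) test only if the
# depressed group's rate exceeds the non-depressed group's rate.
print(f"P(attribution | depressed)     = {p_dep:.2f}")
print(f"P(attribution | not depressed) = {p_nondep:.2f}")
print("belief survives this test:", p_dep > p_nondep)
```

The point is not the arithmetic but the structure: the tentative claim is stated so that the data could have come out the other way, which is what makes the exercise a genuine criticism.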
However, the distinction I want to make here is between the details of criticism and the general commitment to criticism. The details (research design) can get complicated as one attempts to investigate subtle or complicated criticisms. However, the general process remains the same: one is testing some claim by allowing it to be exposed to criticism; and one is not being critical for the sake of being critical. Rather, criticism is the means to detect error, and thus a way of experiencing a growth of knowledge, and thus a way of basing your professional acts upon less error.
Note that this general characterization nicely captures what is learned in research methodology. Research methodology essentially is a codification of some standard criticisms. Why do we seek a representative sample to begin our research? Because we do not want to be vulnerable to the criticism that our sample was biased and therefore the results are skewed. Why do we seek random assignment to groups? Because we do not want to be vulnerable to the criticism that observed differences at the end of the experiment were due to initial differences. Why do we worry about manipulating one variable at a time? Because if we want to say that differences were due to one variable, then we need to preclude the criticism that some other variable also changed. Why do we make sure our conclusions are worded so that they cover only the domains that we studied? Because we do not want to be vulnerable to the criticism that we are generalizing to facets not directly studied, and so on.
Now let me describe how this plays out in the day to day activities of a clinician. First, clinicians, in order to begin to treat, must commit to some descriptive claims. That is, their actions are based on beliefs that predicate something about the client. The clinician must commit to claims such as: "This is a 6 year old boy." "His parents are having marital problems, particularly around communicating about sex." "The boy is wetting his bed on average about four nights a week." "The boy does not meet the diagnostic criteria for ADD as listed in the DSM-IV." and so on.
Now both fallibilism and the criticism principle come into play. The clinician may be wrong in any of these claims. The clinician may think the parents are experiencing marital problems, when they are not. The child may be wetting six nights a week rather than four, etc. The clinician must recognize that he or she may be wrong. This is why we worry about the psychometrics of our measurement instruments. When a measurement instrument has not been shown to be reliable and valid, we do not know how much error and how much signal it gives us. We have little reason to believe its readings because it has not been shown to be accurate. Because we know that all of our tests contain error, we develop strategies such as the late Donald Campbell's independent measurement approach. In this approach there are reasons to believe that measurements that differ in a number of important regards will each contain different kinds of error. (In our terms, they are criticizable as having a certain degree of inaccuracy, but we exploit the likelihood that they have different kinds of inaccuracies.) However, when multiple independent measurements all yield similar results, the result is less criticizable because of the convergence of independent measures.
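The logic of convergent independent measurement can be made concrete with a small simulation. This is a minimal sketch under invented assumptions: the "true score," the two hypothetical instruments, and their error distributions are all made up for illustration; the only point is that instruments with different kinds of error, taken together, tend to land closer to the truth than either alone:

```python
import random

random.seed(0)
TRUE_SCORE = 50.0  # the (unknowable in practice) true value

def self_report():
    # Hypothetical instrument: systematically overestimates, noisy.
    return TRUE_SCORE + random.gauss(2.0, 4.0)

def observer_rating():
    # Hypothetical instrument with a *different* error source:
    # systematically underestimates, independently noisy.
    return TRUE_SCORE + random.gauss(-2.0, 4.0)

trials = 10_000
err_single, err_combined = 0.0, 0.0
for _ in range(trials):
    a, b = self_report(), observer_rating()
    err_single += abs(a - TRUE_SCORE)            # one measure alone
    err_combined += abs((a + b) / 2 - TRUE_SCORE)  # converged pair

print(f"mean error, one measure:    {err_single / trials:.2f}")
print(f"mean error, converged pair: {err_combined / trials:.2f}")
```

Because the two instruments' biases point in different directions and their noise is independent, the averaged reading is, on average, closer to the true score; this is the sense in which convergent results are "less criticizable."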
The clinical psychologist seeks and accepts the status of expert. This expert status is associated with the privileges that we accept as clinical psychologists: e.g., high fees, special titles, and a fairly respected social role. However, with the status of "expert" come responsibilities. A central responsibility is that we are supposed to have special knowledge. It is this special knowledge that I would suggest is key to allowing us to enjoy our special privileges. Our "customers" come to us because they believe they are purchasing extraordinary knowledge. This is how the professions work. My ordinary knowledge of what might be causing my dog to limp soon becomes exhausted. I may look to see if he has a thorn in his paw, or to see if his ankle is swollen. When these "lay" tests are exhausted, I seek the expertise of a veterinarian. I am willing to pay a high fee (many times minimum wage) because I expect that the veterinarian will have specialized knowledge that I do not have. I believe that this specialized knowledge will result in a more efficient cure to this problem than my own efforts.
I argue that professionals are experts because of their specialized knowledge. Lawyers know how to write wills that will stand legal tests, physicians how to best combat various medical problems, and psychologists how best to measure and change psychological conditions.
Thus, I argue that a supreme duty that we hold as professionals is an epistemic duty. Respect for the truth can be seen as an intellectual duty, in much the same way as respect for human life can be seen as a moral duty. In short, scientific clinical psychologists not only pursue truth as the ultimate goal, they also see it as the goal that everyone in the profession ought to recognize and pursue ardently and competently. For example, the pursuit of truth must take precedence over the possibly more enticing pursuit of pleasure, financial rewards, etc., because valuing something more highly than truth would result in an erosion of one's intellectual virtue. Also, the desire for truth must be courageous and dispassionate; one leaves self-interested biases aside, and honestly seeks out those objections he or she may find emotionally disturbing. As Code (1987, p. 37) describes,
Intellectually virtuous persons value knowing and understanding how things really are. They resist the temptation to live with partial explanations where fuller ones are attainable; they resist the temptation to live in fantasy or in a world of dream or illusion, considering it better to know, despite the tempting comfort and complacency a life of fantasy or illusion (or one well tinged with fantasy or illusion) can offer.
An individual who is motivated by something other than truth when forming a belief will be epistemically irresponsible and cowardly insofar as he or she will consider only the evidence that supports his or her prematurely formed conclusion. For instance, suppose a therapist holds the belief that facilitated communication is the most effective treatment for the problems of autistic children, and his or her motivation for holding that belief is to enjoy the popularity with parents that a quick cure provides, to enjoy the financial rewards, and to avoid the hard work of an intensive, long-term, behavior analytic treatment program. Moreover, the therapist refuses to read journals that provide critiques of facilitated communication and refuses to read about or receive training in a behavior analytic approach. This selective consumption of information allows the therapist to feel confident that what he or she is prescribing is sound. Although holding such a belief may enable the individual to derive a number of positive personal outcomes, maintaining that belief is epistemically irresponsible because he or she is motivated by his or her own happiness rather than an earnest desire for truth. The therapist is failing in his or her epistemic duties as an expert in this field.
Thus, we must have at least a competent grasp of the scientific knowledge contained in our profession. It is our duty to actually know what is known in our field to justify the public and our customers treating us as experts. It is our duty to master this knowledge so as to maximize our beneficial effect on our clients. We must meet our epistemic duties in order to minimize the harm we do (unintentionally) to the people who are affected by our actions. We must meet our epistemic duties to honestly meet the reasonable expectations of our consumers regarding what we as professionals know. Knowledge-based action is what we are supposed to be selling. But this, of course, depends on the question: do we actually know what we think we know?
My last major contention is that the only way we can meet our epistemic duties as clinical psychologists is to have a firm grasp of the state of scientific evidence for our actions, to honestly convey this evidence to our publics, and to seek to participate in the growth of knowledge of our profession. To do this is to actively exemplify what has been previously discussed in this article: We must acknowledge that our beliefs may be in error. We must rely on beliefs in our clinical practice that have best survived past criticism (or, roughly, have been vindicated by past scientific research; see McFall, 1991). We must realize that even these beliefs may still contain error and seek to expose them to further criticism to advance knowledge. We must do all of this to deserve our expert status and to minimize the harm we do to people who trust us and pay us their hard-won resources. Thus, practically, we must:
1) Accept the general attitude that we may be wrong.
2) Seek to see if our beliefs are those most consistent with the scientific literature (i.e., where these beliefs and their competitors have been evaluated).
3) Seek criticism from our peers, particularly peers who have greater relevant scientific expertise. Ask an objector group what criticisms they may have of our beliefs and actions.
4) Conduct clinical practice in a way in which you can gain critical feedback regarding your beliefs. Conduct client satisfaction surveys. Seek to do program evaluation. Seek to engage in long-term follow-up to see relapse rates. Compare these to those in the literature. Conduct single-subject experimental designs (Barlow, Hayes, & Nelson, 1984).
5) Give criticism to others. I believe this field has for too long tolerated psychologists not meeting their epistemic duties. Out of some misguided sense of professional courtesy, we have bitten our tongues when colleagues administer assessment devices that have no psychometric data or deliver therapies that have no outcome research. This must stop. I propose that another duty we have to the public is to explicitly criticize these practices.
6) Impress upon our students that when they accept expert status they are concomitantly accepting the attendant epistemic duties, and that to meet these duties they must have at least a competent command of epistemologically sound knowledge claims, which involves a knowledge of the scientific literature.
Finally, the above is not rendered irrelevant simply because there are no "empirically validated treatments" for the problem, or because the problem is sub-clinical, or because the problem does not meet a definition of typicality for some case. The epistemic duties involved in our representations as experts require the following: 1) clearly informing the client or responsible parties about the state of relevant evidence (this portrayal needs to be full and accurate); and 2) offering to behave only in an epistemologically appropriate manner, perhaps by suggesting that they participate in a clinical trial of an experimental protocol. This, of course, should be reviewed by an institutional review board. McFall (1991) and Barlow, Hayes, and Nelson (1984) provide useful explications of how to do this.
Barlow, D. H., Hayes, S. C., & Nelson, R. O. (1984). The scientist-practitioner: Research and accountability in clinical and educational settings. New York: Pergamon.
Bartley, W. W. (1984). The retreat to commitment. La Salle, IL: Open Court.
Code, L. (1987). Epistemic responsibility. Hanover: University Press of New England.
McFall, R. M. (1991). Manifesto for a science of clinical psychology. The Clinical Psychologist, 44, 75-88.
O'Donohue, W. T., Curtis, S. D., & Fisher, J. E. (1985). Use of research in the practice of community mental health: A case study. Professional Psychology: Research and Practice, 16, 710-718.
O'Donohue, W. T. (1991). Normative models of clinical decisions. The Behavior Therapist, 14(3), 70-72.
O'Donohue, W. T. (1993). The spell of Kuhn on psychology: An exegetical elixir. Philosophical Psychology, 6, 267-287.
Just missed all the deadlines for the 1997 internships? Rather than let time slip away, now is the time to get a copy of SSCP's Directory of Research Opportunities for Clinical Psychology Interns. Since its publication in 1994, over 370 copies of the Directory have been sold. This 310-page directory offers a wealth of information; the following is provided for each of the 155 internship programs listed:
1. Where past interns have come from (e.g., clinical, counseling, or school psychology programs and their accreditation status)
2. A listing of training provided in 25 empirically validated treatments (what can an intern expect in terms of formal didactics and supervision for each of these treatments?)
3. A description of research activities of the internship faculty
4. The research productivity of past interns (publications, presentations)
5. Facilities and support provided to interns (e.g., computer facilities, secretarial support).
6. The job placements of past interns over the past four years (e.g., the number of interns taking academic/research vs. clinical/service positions).
The price of the Directory has been reduced by 46%: the total cost is now just $10, including shipping. All proceeds go to SSCP to benefit programs such as our developing grants-in-aid of research for student members.
To order a copy of SSCP's Directory of Research Opportunities for Clinical Psychology Interns, send $10 (check made out to SSCP; do not send cash; sorry, no purchase orders) to the address below. You do NOT have to be a member of SSCP to order.
Jack J. Blanchard, Ph.D.
Department of Psychology
The University of New Mexico
Albuquerque, NM 87131-1161
An Interview With Past President Ian Gotlib: Things Done and Things to Be Done
On January 24, 1997, Ian Gotlib graciously gave the following interview to staff writer Sonja Batten. Batten's questions are indicated in bold, followed by Gotlib's responses.
Just as background information, how long and to what extent have you been involved in SSCP? Well, I was President in 1996, and before I was President, I was Secretary-Treasurer for two years. I was a member of SSCP before that, and have been for 5 or 6 years.
To start off with, why don't you begin by telling us what you see as the overall mission of SSCP? Sure. I think that SSCP is really a home for anyone who is interested in the scientific basis of clinical phenomena, so anyone who values science as a basis for clinical practice or the clinical enterprise would find a home in SSCP and Section 3. Section 3 really is set up to foster the integration of science and practice.
Since you don't have to be a member of APA to belong to SSCP, how does that mission differ from that of APA?
I think that APA places no particular value on science, beyond the fact that you probably need an advanced degree to belong to APA. And so APA has no mission to foster science or to integrate science and practice. Even at the level of Division 12, which is more specific to clinical psychology than APA, there is no real mission to foster science, to foster the integration of science and practice, or to focus on the scientific underpinnings of clinical practice. So that's where I think SSCP is unique and fills a mission that is not filled by the APA.
And during your presidency, what projects or issues did you work on in the service of the mission of SSCP?
One big issue that we tried to work on involves state licensure for practicing psychologists. Depending on the state, it can be literally impossible for a clinical psychologist who chooses to go into academia to be licensed, because many states require two years of postdoctoral placement, which is impossible if you are trying to get tenure and do research at the same time. They are not allowing the teaching of clinical courses or the supervision of clinical students to count toward that. So, we undertook an initiative to write to state licensing boards and to the APA to work on this on a state-by-state basis.
We've also been trying to forge closer ties with CUDCP, the Council of University Directors of Clinical Psychology, which is the governing board for every university-based clinical psychology program in the country; it excludes freestanding professional schools. We've been working closely with them. I've been on both boards, for example, and I've been trying to forge ties there as well.
So how have the states received your efforts? Not very well. The response has been "Don't step on our toes." But I think we still need to pursue this. They are certainly free to say that this is none of our business, but if enough psychologists start to say that this is untenable, that we can't hire and license enough psychologists for our universities, then maybe they will realize that there is a problem here.
How do you feel about what the organization has accomplished while you were president? I feel good about SSCP. I think the organization is strong. It has gone from something like 100 or 150 members about 5 years ago to almost 700 now. That is due in large part, if not solely, to the SSCPnet. We initiated that when I and others were on the Executive Board, and we wanted to see what would happen, and it has been wonderful. There have been incredibly intense debates about a variety of clinical issues. And you cannot be on that listserv without joining the organization. So where other groups give away journals, we give away the SSCPnet.
The other thing is that, because of the growth of the organization and the dues from every member, we were able to go from a balance of 200 or 300 dollars a year to a balance in the thousands. So one of the things we were able to do, and did this year for the first time, is offer student dissertation research grants of $500. This year we offered seven of those awards, even though we were only planning to give out five. I think that is a terrific use of SSCP funds: for this organization to foster research that best exemplifies the integration of clinical science and practice. We were very pleased to be able to do that, and it will continue in the coming year.
Are there things that you would do differently now or that you would have liked to have done, but didn't get around to? Well, because the term is only a year, there is no way to do all of the necessary things. But I think that Bob Klepac, who is the new president, will continue working on all of these issues.
Are there any issues you would like to see SSCP tackle in the next few years? We have been involved with the notion of empirically validated treatments, and there is a task force that members of SSCP are involved in for empirically validated assessment procedures. That has just started, and I think we need to do a lot more work in that area. Also, the other thing that has happened, spurred in large part I think by what SSCP values, is the creation of the Academy of Psychological Clinical Science in the last two years, and essentially they uphold the same kind of principles that SSCP does. I think it would be nice for SSCP to forge stronger and more explicit links with that organization.
Do you see SSCP heading toward a more political stance in the field of psychology? Well, I don't know. That is not why people join SSCP, to become more political. So, I'm not sure that will happen. I think that all SSCP can really do is make sure that science is considered in any discussion of clinical issues within Division 12 and within APA. Also, we are now affiliated with APS, as well as APA, because of the importance of science. So, it is these kinds of links that I would like to see the organization pursue more as we head into the future. We need to have a voice within the larger organizations.
Are there any other ways that you would like to see SSCP grow and change in the next few years? Other than continuing to forge ties with other organizations, I think we need to continue to fight the licensure laws and to make sure that science maintains a presence in clinical activities at APA.
And, since we are all interested in the scientific study of Clinical Psychology, what research are you currently working on? My research is bridging cognitive and interpersonal aspects of affective disorders and depression. I do work looking at information processing models of depression and bridging that to the interpersonal functioning of depressed people: marital functioning, family functioning and behavior, and the functioning of children of depressed parents. I am also starting to look at cognition in childhood depression.
Thank you for your time and your work for the organization.
The SSCP Student Poster session, held at the 104th annual meeting of the American Psychological Association in Toronto, had many fine contributors. The following poster abstract by Wendy Tenhula and Doug Billings was the poster session winner. Both are graduate students at Northwestern University, Evanston, Illinois. Ms. Tenhula is currently completing her clinical internship at the Hennepin County Medical Center, Minneapolis, MN, and Mr. Billings is completing his internship at the Eastern Pennsylvania Psychiatric Institute, Philadelphia, PA.
Coping Strategies and Hemispheric EEG Asymmetry: Differential Risk for Depression
Wendy N. Tenhula & Douglas W. Billings
Faculty Advisor: Ian H. Gotlib, Ph.D.
Davidson and his colleagues have demonstrated that relative left hemisphere hypoactivation in anterior brain regions, as measured by electroencephalographic (EEG) power in the alpha band (8-13 Hz), is a biological marker for vulnerability to depression (see Davidson, 1992; Davidson & Tomarken, 1989 for reviews). Although several studies have attempted to identify specific psychological processes that may account for this vulnerability (e.g., Tomarken & Davidson, 1994; Gotlib, Ranganath, & Rosenfeld, in press), the psychological factors that link EEG patterns to risk for depression remain unclear. This study examines the associations among resting anterior EEG patterns, depression, and dispositional coping strategies.
As in previous studies, currently and previously depressed subjects had less relative left frontal activation (as indicated by more relative left alpha power) than did the never-depressed control group, and there was no significant difference in asymmetry between the currently and previously depressed subjects. To assess the relationship between EEG asymmetry and coping, subjects were divided into "high risk" (i.e., less relative left frontal activation) and "low risk" (i.e., more relative left frontal activation) groups based on asymmetry scores. For three of the scales of the COPE (Suppression of Competing Activities, Mental Disengagement, and Behavioral Disengagement), there were significant Group (high risk, low risk) x Depression Status (currently, previously, never depressed) interactions. These interactions indicate that individuals with high-risk patterns of alpha asymmetry more often endorsed rumination and disengagement strategies, while low-risk participants tended to rely more heavily on distraction techniques. This study is one of the first to explore the psychological manifestations of anterior EEG asymmetry patterns, but the findings are quite consistent with two dominant models of depression: ruminative response set theory (Nolen-Hoeksema, 1987) and learned helplessness theory (Abramson, Seligman, & Teasdale, 1978).
1996 Dissertation Grants
SSCP annually makes awards to support dissertation research. The successful applicants this past year are listed below. For information on 1997 applications, see the announcement on this web site.
Mary Jo Coiro: Maternal Depressive Symptoms as a Risk Factor for the Development of Children in Poverty. University of Virginia
Denise M. Sloan: Emotional Experience and Expression in Depression. Case Western Reserve University
Jan Mohlman: What Kind of Attention Is Necessary for Fear Reduction? University of Oregon
Gregory L. Stuart: Impulsivity as a Predictor of Marital Violence: Testing a Mediational Model. Indiana University
Jennifer D. Dunlap: Investigation of an Expanded Response Disequilibrium. Indiana University
David J. Moser: Biobehavioral Sources of Variance in Presurgical Neuropsychological Performance Among Patients With Temporal Lobe Epilepsy. University of Florida
Mark W. Miller: Anxiety and Emotional Stroop Interference: A Startle-Probe Investigation of Affective and Attentional Mechanisms. The Florida State University
Thanks go also to the reviewers who served on the dissertation awards committee:
Henry E. Adams, Anne Marie Albano, Jack J. Blanchard, Thomas Bradbury, Jane Fisher, William C. Follette, Michael D. Franzen, Ian H. Gotlib, Elaine M. Heiby, Thomas Joiner, Jon D. Kassel, Robert K. Klepac, Catherine Lee, Gregory J. Meyer, Sue Mineka, Susan M. Orsillo, Nathan Weed, and Paul Rokke.
Editorial Opinion: A Brief Welcome
William C. Follette
One takes on the responsibility of editing Clinical Science for several reasons. First, SSCP seems to me to be the best hope of those of us who believe there is a knowledge base upon which clinical science is founded. The people in the organization are among the most thoughtful and respected members of our scientific community. Therefore, this is one way I can support the organization. Second, thanks to the work of Paul Rokke, Clinical Science is on the brink of moving toward a journal-like product, and I want to help shape its continued evolution. In this issue an article by Bill O'Donohue offers a critical view of the importance of taking seriously our intellectual responsibilities, whether we teach, are taught, or deliver services. I hope this article engenders interesting responses from our readership; I will be happy to print thoughtful replies. Third, Bob Klepac and Paul Rokke told me there was a huge budget, an efficient staff, and all the computer hardware and software one could want or need to publish this little gem. Okay, so maybe they didn't say that, but I was just assuming...
Those of you who know me realize I don't mind my turn at the soapbox, and that's, in part, what this column is. There are important issues about which colleagues and students have heard me complain, and I want you to share in that experience (my complaining) as well. One issue near and dear to my heart is the work that Ian Gotlib and others have started in attempting to reform licensing laws. I would welcome an airing of the arguments regarding which requirements for licensure could reasonably be defended as empirically supportable. State boards have become increasingly restrictive and are imposing requirements that have no empirical justification. For example, where are the data supporting any benefit derived from training beyond the successful completion of an accredited Ph.D. and internship? For the moment I won't go back a step further and ask where the evidence is that the Ph.D. matters; I will save that for another time. When licensing boards ask clinical interpretation questions during oral examinations, where is the evidence that the procedures (the questions and the evaluation of the answers) are reliable and valid? I would welcome articles and debates prepared for Clinical Science on this issue.