A few longer thoughts on the four debates about the bar exam

As the debate rages in California and other places about the utility of the bar exam, it's become fairly clear that a number of separate but interrelated debates have been conflated. There are at least four, and each requires a different line of thought--even if all are ostensibly about the bar exam.

First, there is the problem of the lack of access to affordable legal representation. Such a lack of access, some argue, should weigh in favor of changing standards for admission to the bar, specifically regarding the bar exam. But I think the bar exam is only a piece of this debate, and perhaps, in light of the problem, a relatively small piece. Solutions such as implementing the Uniform Bar Exam; offering reciprocity to attorneys admitted in other states; reducing the costs of practicing law, such as bar exam or annual licensing fees; finding ways to make law school more affordable; or opening up opportunities for non-attorneys to practice limited law, as states like Washington have done, should all be matters of consideration in a deeper and wider inquiry. (Indeed, California's simple decision to reduce the length of the bar exam from three days to two appears to have incentivized prospective attorneys to take the exam: July 2017 test-takers are up significantly year-over-year, and most of that increase is not attributable to repeat test-takers.)

Second, there is the problem of whether the bar exam adequately evaluates the traits necessary to determine whether prospective attorneys are minimally competent to practice. It might be the case that requiring students to memorize large areas of substantive law and evaluating their performance on a multiple-choice and essay test is not an ideal way for the State Bar to operate. Some have pointed to a recent program in New Hampshire to evaluate prospective attorneys on a portfolio of work developed in law school rather than on the bar exam standing alone. Others point to Wisconsin's "diploma privilege," under which graduates of the University of Wisconsin and Marquette University law schools are automatically admitted to the bar. An overhaul of admission to the bar generally, however, is a project that requires a much larger set of considerations. Indeed, it is not clear to me that debates over changing the cut score, implementing the UBE, and the like are even really related to this issue. (That said, I do understand those who question the validity of the bar exam to suggest that if it's doing a poor job of separating competent from incompetent attorneys, then it ought to have little weight and, therefore, the cut score should be relatively low to minimize its impact on likely-competent attorneys who may fail.)

Third, there is the problem of why bar exam passing rates have dropped dramatically. This is an issue of causation, one that has not yet been entirely answered. It is not because the test has become harder, but some have pointed to incidents like ExamSoft or the addition of Civil Procedure as factors that may have contributed to the decline. A preliminary inquiry from the California State Bar, examining the decline just in California, found that part of the reason for the decline in bar passage scores has been a decline in the quality of the test-taking pool. I've become convinced that this is the bulk of the explanation nationally, too. An additional study in California is underway to examine this effect with more granular school-specific data. If the cause is primarily a decline in test-taker quality and ability, then lowering the cut score would likely change the quality and ability of the pool of available attorneys. But if the decline is attributable to other factors, such as changes in study habits or test-taker expectations, then lowering the cut score may have less of an impact on attorney quality. (Indeed, it appears that higher-quality students are making their way through law schools now.) Without a thorough attribution of cause, it is difficult to identify what the solution to this problem ought to be.

Fourth, there is the debate over what the cut score ought to be for the bar exam. I confess that I don't know what the "right" cut score is--Wisconsin's 129, Delaware's 145, something in between, or something different altogether. I'm persuaded that pieces of evidence, like California's standard-setting study, may support keeping California's score roughly in place. But it is just one component of many. And, of course, California's high cut score means that test-takers fail at higher rates despite being more capable than most test-takers nationally. Part of the reason is that I'm not sure I fully appreciate all the competing costs and benefits that come with changes in the cut score. While my colleague Rob Anderson and I find that lower bar scores are correlated with higher career discipline rates, facts like these can only take one so far in evaluating the "right" cut score. Risk tolerance and cost-benefit analysis have to do the real work.

(I'll pause here to note the most amusing part of critiques of Rob's and my paper. We make a few claims: lower bar scores are correlated with higher career discipline rates; lowering the cut score will increase the number of attorneys subject to higher career discipline rates; the state bar has the data to evaluate with greater precision the magnitude of the effect. No one has yet disputed any of these claims. We don't purport to defend California's cut score, or defend a higher or lower score. Indeed, our paper expressly disclaims such claims! Nevertheless, we've faced sustained criticism for a lot of things our paper doesn't do--which I suppose shouldn't be surprising given the sensitivity of the topic for so many.)

There are some productive discussions on this front. Professor Joan Howarth, for example, has suggested that states consider a uniform cut score. Jurisdictions could aggregate data and resources to develop a standard that they believe most accurately reflects minimum competence--without the idiosyncratic preferences of this state-by-state process. Such an examination is worth serious consideration.

It's worth noting that state bars have done a relatively poor job of evaluating their cut scores. Few examine them much at all, as California's lack of scrutiny for decades demonstrates. (That said, the State Bar is now required to undertake an examination of the bar exam's validity at least once every seven years.) States have been adjusting, and sometimes readjusting, the scores with little explanation.

Consider that just in 2017 alone, Connecticut is raising its cut score from 132 to 133, Oregon is lowering it from 142 to 137, Idaho from 140 to 136, and Nevada from 140 to 138. Some states have undergone multiple revisions in a few years. Montana, for instance, raised its cut score from 130 to 135 for fear it was too low, then lowered it to 133 for fear it was too high. Illinois planned on raising its cut score from 132 in 2014 to 136 in 2016, then, after raising the score to 133, delayed implementing the 136 cut score until 2017, and delayed again in 2017 "until further order." Certainly, state bars could benefit from more, and better, research.

Complicating these inquiries are the mixed motives of many parties. My own biases and priors are deeply conflicted. At times, I find myself distrustful of any state licensing system that restricts competition, and I wonder whether the bar exam is very effective at all, given the closed-book, memory-focused nature of the test. I worry when many of my students who'd make excellent attorneys fail the bar, in California and elsewhere. At other times, I find myself persuaded by studies concerning the validity of the test (given its high correlation to law school grades, which, I think as a law professor, are often, but not always, good indicators of future success), and by the sense that, if there's going to be a licensing system in place, it ought to be as good as it can be, flaws and all.

At times, though, I realize these thoughts are often in tension because they are sometimes addressing different debates about the bar exam generally--maybe I'd want a different bar exam, but if we're going to have one it's not doing such a bad job; maybe we want to expand access to attorneys, but the bar exam is hardly the most significant barrier to access; and so on. And maybe even here, my biases and priors color my judgment, and with more information I'd reach a different conclusion.

In all, I don't envy the task of the California Supreme Court, or of other state bar exam authorities, in addressing the "right" cut score for the bar exam during a turbulent time for legal education. The aggressive, often vitriolic, rhetoric from a number of individuals concerning our study of discipline rates is, I'm sure, just a taste of what state bars are experiencing. But I do hope they are able to set aside the lobbying of law deans and the protectionist demands of state bar members to think carefully and critically, with precision, about the issues as they come.

A poor attorney survey from the California State Bar on proposals to change the bar exam cut score

I'm not a member of the California State Bar (although I've been an active member of the Illinois State Bar for nearly 10 years), so I did not receive the survey that the state bar circulated late last week. Northwestern Dean Dan Rodriguez tweeted about it, and after we had an exchange he kindly shared the survey with me.

I've defended some of the work the Bar has done, such as its recent standard-setting study, which examined bar test-taker essays to determine "minimum competence." (I mentioned that the study is understandably limited in scope, particularly given the time constraints. The Bar has shared a couple of critiques of the study here, which are generally favorable but identify some of its weaknesses.) And, of course, one study alone should not determine what the cut score ought to be, but it's one data point among many studies coming along.

Indeed, the studies, so far, have been done with some care and thoughtfulness despite the compressed time frame. Ron Pi, Chad Buckendahl, and Roger Bolus have long been involved in such projects, and their involvement here has been welcome.

Unfortunately, despite my praise, with some caveats about understandable limitations, the State Bar has circulated a poor survey to its members about potential changes to the cut score. Below are screenshots of the email circulated and the most salient portions of the survey.

It is very hard to understand what this survey can accomplish except to get a general sense of the bar's feelings about where the cut score ought to be. And it's not terribly helpful in answering that question.

For instance, there's little likelihood that attorneys understand what a score of 1440, 1414, or "lower" means. There's also a primed negativity in the question "Lower the cut score further below the recommended option of 1414"--of course, there were two recommended options (hold in place, or lower to 1414), and the question frames this one not merely as "below" but as "further below." Additionally, what do these scores mean to attorneys? The Standard-Setting Study was designed to determine what essays met the reviewing panel's definition of "minimum competence"; how would most lawyers out there know what these numbers mean in terms of defining minimum competence?

The survey, instead, is more likely a barometer of how protectionist members of the State Bar currently are. If lawyers don't want more lawyers competing with them, they'll likely prefer the cut score to remain in place. (A more innocent reason is possible, too, a kind of hazing: "kids these days" need to meet the same standards their predecessors met when they were admitted to the bar.) To the extent the survey is about whether to turn the spigot controlling the flow of new lawyers--adding more or holding it in place--it represents the worst that a state bar has to offer.

The survey also asks, on a scale of 1 to 10, the "importance" attorneys assign to "statements often considered relevant factors in determining an appropriate bar exam cut score." These statements range from the generic, which most lawyers would find very important, like "maintaining the integrity of the profession," to ones that weigh almost exclusively in favor of lowering the cut score, like "declining bar exam pass rates in California."

One problem, of course, is that these rather generic statements have been tossed about in debates, but how is one supposed to decide which measures are appropriate costs and benefits? Perhaps this survey is one way of testing the profession's interests, but it's not entirely clear why two issues are being conflated: what the cut score ought to be to establish "minimum competence," and the potential tradeoffs at stake in decisions to raise or lower the cut score.

In a draft study with Rob Anderson, we identified that lower bar scores are correlated with higher discipline rates and that lowering the cut score would likely result in more attorney discipline. But we also identified a number of potential benefits from lowering the score, which have been raised by many--greater access to attorneys, lower costs of legal services for the public, and so on. How should one weigh those costs and benefits? That's the sticky question.

I'm still not sure what the "right" cut score is. But I do feel fairly certain that this survey to California attorneys is not terribly helpful in moving us toward answering that question.

How much post-JD non-clerkship work experience do entry-level law professors have?

Professor Sarah Lawsky offers her tireless and annual service compiling entry-level law professor hires. One chart of interest to me is the year of the JD: in recent years, about 10-20% of entering law professors obtained their JD within the last four years; 45-60% in the last five to nine years; and 25-30% in the last 10 to 19 years, with a negligible number at least 20 years ago.

But there's a different question I've had, one that's been floating out there as a rule of thumb: how much practice experience should an entering law professor have? Of course, "should" is a matter of preference. Aspiring law professors usually mean it as, "What would make me most attractive as a candidate? What are schools looking for?"

There are widely varied schools of thought, I imagine, but a common rule of thumb I'd heard was three to five years of post-clerkship experience, and probably no more. (Now that I'm trying to search where I might have first read that, I can't find it.) In my own experience, I worked for two years in practice after clerking. Some think more experience is a good thing to give law professors a solid grounding in the actual practice of law they're about to teach, but some worry too much time in practice can inhibit academic scholarship (speaking very generally); some think less experience is a good or a bad thing for mostly the opposite reasons. (Of course, what experience law professors ought to have, regardless of school hiring preferences, is a matter for a much deeper normative debate!)

I thought I'd do a quick analysis of post-JD work experience among entry-level law professors. I looked at the 82 United States tenure-track law professors listed in the 2016 entry-level report. I did a quick search of their law school biographies, CVs, or LinkedIn accounts for their work experience and put it into one of several categories: 0, 1, 2, 3, 4, 5+, "some," or "unknown." 5+ because I thought (perhaps wrongly!) that such experience would be relatively rare and start to run together over longer periods of time; "some" meaning the professor listed post-JD work experience but the dates were not immediately discernible; or "unknown" if I couldn't tell.

I also chose to focus on "post-clerkship" experience. Clerkship experience is different in kind--it is rightly a form of work experience, but I was interested specifically in the non-clerkship variety. I excluded independent consultant work and judicial staff attorney/clerk positions, but I included non-law-school fellowships. Academic positions were also excluded from post-JD non-clerkship work experience. I excluded pre-JD work experience, of course, but included all post-JD work experience whether law-related or not (e.g., business consulting). All figures are probably +/-2.

There are going to be lots of ways to slice and dice the information, so I'll offer three different visualizations. First, 23 of the 82 entering law professors (28%) had no post-JD non-clerkship work experience. 56 had at least some, and 3 had unknown experience. That struck me as a fairly large number of "no work experience." (If you included clerkships, 13 of those "nones" had clerkships, and 10 had no clerkship experience.) I thought most of the "nones" might be attributable to increases in PhD/SJD/DPhil hires, and that accounts for about two-thirds of that category.
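For readers who want to replicate or extend this, here's a minimal sketch of the tallying in Python; the profiles and categories below are invented placeholders to illustrate the approach, not the actual data I coded by hand from the 2016 report.

    # A minimal sketch of the tallying described above. The entries are
    # hypothetical placeholders, not the actual 2016 entry-level hires.
    from collections import Counter

    profiles = [
        ("Professor A", "0"),        # no post-JD, non-clerkship work experience
        ("Professor B", "2"),        # two years of such experience
        ("Professor C", "5+"),       # five or more years
        ("Professor D", "some"),     # experience listed, dates unclear
        ("Professor E", "unknown"),  # could not tell from the profile
    ]

    counts = Counter(category for _, category in profiles)

    # Collapse the detailed categories into the three groups used in the
    # first chart: none, at least some, unknown.
    none = counts["0"]
    some_or_more = sum(v for k, v in counts.items() if k not in ("0", "unknown"))
    unknown = counts["unknown"]

    print(counts)
    print(f"none: {none}, at least some: {some_or_more}, unknown: {unknown}")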

I then broke it down by years of experience.

24 had one to four years' experience; 21 had five or more years' experience; and 11 had "some" experience, to an extent I was unable to quickly determine. (Be careful with this kind of visualization; the "some" makes the 1-4 & 5+ categories appear smaller than they actually are!) I was surprised that 21 (about 26%) had at least five years' post-JD non-clerkship work experience, and many had substantially more than that. Perhaps I shouldn't have been surprised, as about 30% earned their JD at least 10 years ago; but I thought a good amount of that might have been attributable to PhD programs, multiple clerkships, or multiple VAPs. It turns out 5+ years' experience isn't "too much" based on recent school tenure-track hiring.

For the individual total breakdown, here's what I found:

This visualization overstates the "nones" because, unlike the first chart, it breaks out each individual category--but these are the categories as I collected them. Note the big drop-off from "0" to "1"!

Again, all figures are likely +/-2 and based on my quickest examination of profiles. If you can think of a better way of slicing the data or collecting it in the future, please let me know!

How should we think about law school-funded jobs?

One of the most contentious elements of the proposed changes to the way law schools report jobs to the American Bar Association is how to handle law school-funded jobs. In my letter, I noted that more information and a more careful examination of costs and benefits would be needed before reaching a decision about how best to treat them. The letters to the ABA mostly take a more black-and-white position: school-funded positions should be treated like any other job, or they should remain a separate category, as they have been for the last two years.

Briefly, school-funded positions may offer opportunities for students to practice, particularly in public interest positions, as a transition toward opportunities where funding may be lacking. At their best, they provide students with much-needed legal experience in these fields and can help them build long careers there. At their worst, however, they are opportunities for schools to inflate their employment placement by spending money on student placement with no assurance about what that placement looks like after the year is complete.

In the last couple of years, the number of positions that would even qualify as "law school-funded" has been severely limited. I noted in 2016 that these positions had dropped by half, to fewer than 400 positions nationwide, accompanying the change in the USNWR reporting system that gave these positions "less weight" than non-funded positions. Jerry Organ rightly noted that much of the decline was probably attributable to the definitional change: only jobs lasting at least a year and with an annual salary of at least $40,000 would count.

This methodological change likely weeded out many of the lower-quality positions from the school-funded totals.

So, are law school-funded positions good outcomes, or not? It seems impossible to tell from the evidence, because we have essentially no data about what happens to students in these positions after the funding ceases. We have some generic assurances that schools are successful in placing these students into good jobs; others express deep skepticism about that likelihood. One major reason I endorse the proposal to postpone the change to employment reporting is to find out more about what these positions actually accomplish! (Alas, that seems unlikely.)

But we do have one piece of data from Tom Miles (Chicago), who wrote in his letter to the ABA: "97% of new graduates who have received one of our school-funded Postgraduate Public Interest Law Fellowships remained in public interest or government immediately after their fellowships; 45% of them with the organization who hosted their fellowship."

That is impressive placement. If such statistics are similar across institutions, it would be a very strong reason, in my view, to move such positions back into the "above the line" totals with other job placement.

Finally, my colleague Rob Anderson did a principal components analysis of job placement and found that law school-funded positions were a relatively good, if minor, job outcome among institutions.

It may be that the worst excesses of the recession-era practices of law schools are behind us, and that these school-funded positions are providing the kinds of opportunities that are laudable. More investigation from the ABA would be most beneficial. But it's also likely that the impact would be quite modest in the event the ABA chooses to adopt the changes this year.

My letter to the ABA Section on Legal Education re proposed changes to law school employment reporting

On the heels of commentary from individuals like Professor Jerry Organ and Dean Vik Amar, I share the letter I sent to the ABA's Section on Legal Education regarding changes to the Employment Summary Report and the classification of law school-funded positions. (Underlying documents are available at the Section's website here.) Below is the text of the letter:

---

Dear Mr. Currier,

I:

1) petition the Council to suspend implementation of the proposal until at least the Class of 2018, and direct the Section to implement, for the Class of 2017, the Employment Questionnaire as approved at the June meeting, together with the Employment Summary Report used for the Class of 2016, and

2) petition the Council to direct the Standards Review Committee to

a. delineate all of the changes in the Employment Questionnaire that would be necessary to implement the proposal, and

b. provide notice of all proposed changes to the Employment Questionnaire and Employment Summary Report and an opportunity to comment on the proposed changes before the Council takes any further action to implement the proposal.

The unusual and truncated process to adopt these proposals is reason enough to oppose the change. But the substance merits additional discussion.

In particular, I do not believe the statements made in the Mahoney Memorandum sufficiently address the costs of returning to the pre-2015 system of reporting school-funded employment figures as "above the line" totals. The Memorandum contains speculative language in justification of the position advanced ("The NLJ assumed, as would any casual reader," "Many readers may never have learned of the error," "we must assume"), language which should be the basis for further investigation and a weighing of costs and benefits, not for reaching a definitive outcome.

Additionally, the Memorandum uses incomplete statistics to advance its proposal--in particular, the statement that "School-funded positions accounted for 2% of reported employment outcomes for the class of 2016" would be more relevant if such positions were distributed roughly equally across institutions. But these positions are not distributed roughly equally, and the fact that a few institutions bear a disproportionate number of them should merit deeper investigation before examining the impact of such a change.

Furthermore, the Memorandum's proposal, adopted in the Revised Employment Outcomes Schools Report, includes material errors (including overlapping categories of employment by firm size, where an individual in a firm with 10 people would be both in a firm with "2-10" and "10-100"; the same for 100 and the categories "10-100" and "100-500") that never should have made it to the Section.

I find much to be lauded in the objectives of the Section in the area of disclosure. Improving disclosures to minimize reporting errors, streamlining unnecessary categories, and providing meaningful transparency in ways that consumers find beneficial are good and important goals. The Section should pursue them with the care and diligence it has shown in its past revisions, which is why it ought to suspend implementation of this proposal.

Best,

/s/ Prof. Derek T. Muller

More evidence suggests California's passing bar score should roughly stay in place

Plans to lower California's bar exam score may run up against an impediment: more evidence suggesting that the bar exam score is about where it ought to be.

My colleague Rob Anderson and I recently released a draft study noting that lower bar passage scores are correlated with higher discipline rates, and urging more collection of data before bar scores are lowered.

There are many data points that could, and should, be considered in this effort. The California State Bar has been working on some such studies for months. California students are more able than students in other states but fail the bar at higher rates, because California's cut score (144, or 1440 in California's scoring, simply 10x) is higher than that of most other jurisdictions.

Driven by concerns expressed by the deans of California law schools, and at the direction of the California Supreme Court, the State Bar began to investigate whether the cut score was appropriate. One such study was a "Standard Setting Study," whose results were published last week. It is just one data point, with obvious limitations, but it almost perfectly matches the current cut score in California.

A group of practitioners looked at a batch of bar exam essays. They graded them, assessing each as "not competent," "competent," or "highly competent." Those assessments were then refined to find the "tipping point" from "not competent" to "competent." (An evaluation of the study notes this is similar to what other states have done in their own standard-setting studies, which have resulted in a variety of changes to bar pass cut scores; that independent evaluation identified critiques of the study but concluded that its methodology was sound overall and its results valid.)

The mean recommended passing score from the group was 145.1--more than a full point higher than the actual passing score! The median passing score was 143.9, almost identical to the 144.0 presently used. (The study explains why it believes the median is the better score.)

Using a band of +/-1 standard error, the mean score may range from 143.6 to 148.0, and the median score from 141.4 to 147.7. All are well short of the 133-136 scores common in many other jurisdictions, including New York's 133. And this study is largely consistent with a study in California 30 years ago, when a similar crisis arose over low passing rates, a study I identified in a recent blog post.

So, what to do with this piece of evidence? The researchers offered two recommendations for public comment and consideration: keep the score where it is, or reduce the passing score to 141.4 for the July 2017 exam alone. (Note: what a jackpot it would be for the bar test-takers this July if they received a one-time reprieve!) The recommendations nicely note many of the cost-benefit issues policymakers ought to consider--and include some reasons why California has policy preferences that may weigh in favor of a lower score (at least temporarily). The interim proposal to reduce the score to 141.4, one standard error below the recommended median value of 143.9, takes these policy considerations into account. Such a change may be modest, but it could result in a few hundred more test-takers passing the bar on the first attempt in California.
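For those keeping track of the two scales in play, here's a quick sketch of the arithmetic; the roughly 2.5-point standard error is inferred from the figures above, not quoted from the study itself.

    # Quick arithmetic on the figures discussed above, assuming the numbers
    # as reported; the ~2.5-point standard error is inferred from them.
    current_cut = 144.0              # California's cut score (1440 on the 10x scale)
    median_recommendation = 143.9    # the panel's median recommended passing score
    implied_standard_error = median_recommendation - 141.4  # roughly 2.5 points

    interim_proposal = median_recommendation - implied_standard_error
    print(f"interim proposal: {interim_proposal:.1f} "
          f"({interim_proposal * 10:.0f} on California's scale)")
    # -> interim proposal: 141.4 (1414 on California's scale)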

Alas, the reaction to a study like this has been predictable. Hastings Dean David Faigman--who called our study "irresponsible" and accused the State Bar of "unconscionable conduct" over the July 2016 bar exam--waited until the results dropped to critique the study with another quotable soundbite, labeling it "basically useless." (A couple of other critiques are non-responsive to the study itself.)

Of course, one piece of data--like the Standard Setting Study--should not dictate the future of the bar exam. Nor should the study by my colleague Rob Anderson and me. Nor should the standard-setting study from 30 years ago. But they all point toward some skepticism that the bar exam cut score is dramatically or outlandishly too high. It might be, as the cost-benefit and policy analysis in the recommendations to the State Bar suggests, that some errors ought to be tolerated with a slight lowering of the score. Or it might be that the score should remain in place.

Whatever it is, more evidence continues to point toward keeping it roughly in the current place, and more studies in the future may offer new perspectives on the proper cut score.