Predictive Test Scores and Diploma Privilege


The International Baccalaureate program, which credentials high-school students who take college-level classes, canceled exams this year because of COVID-19. But that did not stop the program from granting exam grades, which it generated with a predictive algorithm. According to Wired, “The system used signals including a student’s grades on assignments and grades from past grads at their school to predict what they would have scored had the pandemic not prevented in-person tests.”

Unsurprisingly, this is controversial. And without the precise formula or algorithm that the program used to calculate grades, it is difficult to assess. But the core idea makes a lot of sense: Measure students with a system of decentralized grades, then normalize the grades from different schools or teachers based on how predictive those grades have been of some other performance measure. At least, this seems like a reasonable approach if it is impossible to give the test and we still need to distinguish students. Use the best information available, including historical data.

Indeed, it would be an improvement over current practice if U.S. News and World Report normalized students’ grades in this way. How would this work? In effect, a predicted LSAT score would be calculated based on one’s GPA, given the GPA distribution and LSAT distribution at one’s undergraduate school. So, if one received a 75th percentile GPA, that would be translated into a 75th percentile LSAT score for the same school. With enough data, the prediction might be based on GPA across majors, to account for tougher grading in some departments than others. A student would receive an actual LSAT score too, but this approach would make comparisons of GPAs across schools much more meaningful.
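To make the translation concrete, here is a minimal sketch of that percentile-matching step in Python. The function name and the per-school GPA and LSAT arrays are hypothetical; a real system would draw on much larger samples, such as Credential Assembly Service records.

```python
import numpy as np

def predicted_lsat(gpa, school_gpas, school_lsats):
    """Translate a GPA into a predicted LSAT score by percentile matching.

    school_gpas and school_lsats are historical distributions for one
    undergraduate school (hypothetical data here)."""
    # Percentile rank of this GPA within the school's GPA distribution.
    pct = np.mean(np.asarray(school_gpas) <= gpa) * 100
    # Read off the same percentile of the school's LSAT distribution.
    return np.percentile(school_lsats, pct)

# Toy example: at a grade-inflated school, a 3.9 GPA may sit well below
# the top of the class, and the predicted score reflects that.
gpas  = [3.2, 3.5, 3.7, 3.8, 3.9, 3.95, 4.0]
lsats = [148, 152, 155, 157, 160, 163, 170]
print(predicted_lsat(3.9, gpas, lsats))  # LSAT at the same percentile rank
```

The same mapping could be computed within majors rather than schools, which is all the "across majors" refinement above would require.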

Law schools already have access to this information. Admissions offices receive data from the Credential Assembly Service, including the distribution of grades and of LSAT scores at an applicant’s undergraduate school. Whether or not they make explicit computations, law school admissions officers have a sense of differences in quality across undergraduate schools and of differences in grade inflation. But because U.S. News asks law schools to report undergraduate GPA, rather than a normalized measure that would be only modestly more difficult to calculate, many law schools place considerable weight on the unnormalized number. And because U.S. News cares only about median GPAs and LSAT scores, a law school can improve its standing with a “splitter” strategy: admitting students whose two numbers diverge, such as a student with a high GPA from a low-ranked, grade-inflated school but a very low LSAT score. The high GPA helps the school’s median GPA, while the median LSAT is unaffected by just how low the low score is.

Predictive grading (that is, a system of decentralized grading normalized based on the overall performance of students on a standardized test) ought to be welcomed by those who are skeptical that standardized test scores are a good measure of individual talent. True, with such a scheme, standardized test scores are still used as the normalization measure. Thus, if a school teaches to the test and artificially increases its standardized test scores at the expense of more important learning, it will benefit. But schools already have an incentive to teach to the test, and this approach reduces the incentive of students to study to the test. One might rationally combine a switch to predictive grading with reduced emphasis on the standardized test as a criterion for assessing each individual. Grades provide a metric of how students perform on a daily and weekly basis. The only reason we need standardized tests is that grades aren’t standardized, but we can standardize (normalize) grades based on aggregate standardized test performance.

Where possible, it might be preferable to normalize based on some other measure, such as law school grades. Each law school might fit a statistical model that predicts, based on a student’s undergraduate grades and school, the student’s expected law school GPA, given the past performance of students from that undergraduate program at that law school. In principle, law schools could coordinate by sharing their models, so that one could predict a student’s law school performance at every participating law school from the student’s undergraduate record. This would have the side benefit of making grades across different law schools easier to compare. My guess is that individual LSAT or GRE scores would still have some predictive value in such a world, but much less relative value than they have today, when the undergraduate GPA measure is not normalized.
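A minimal sketch of what such a school-specific model might look like, assuming entirely hypothetical records and school names (a real model would use more predictors and far more data than a per-school least-squares line):

```python
from collections import defaultdict
import numpy as np

# Hypothetical records of past students at one law school:
# (undergrad_school, undergrad_gpa, law_school_gpa)
records = [
    ("State U",  3.9, 3.1), ("State U",  3.7, 2.9), ("State U",  3.5, 2.8),
    ("Elite C",  3.4, 3.3), ("Elite C",  3.6, 3.5), ("Elite C",  3.2, 3.1),
]

# Fit a separate least-squares line per undergraduate school, so the
# same GPA can translate differently depending on where it was earned.
by_school = defaultdict(list)
for school, ugpa, lgpa in records:
    by_school[school].append((ugpa, lgpa))

models = {}
for school, pairs in by_school.items():
    x, y = map(np.array, zip(*pairs))
    slope, intercept = np.polyfit(x, y, 1)
    models[school] = (slope, intercept)

def expected_law_gpa(school, ugpa):
    slope, intercept = models[school]
    return slope * ugpa + intercept

# A 3.6 predicts different law school performance by grading culture.
print(expected_law_gpa("State U", 3.6), expected_law_gpa("Elite C", 3.6))
```

Sharing models across law schools, as suggested above, would amount to each school publishing its fitted coefficients so any applicant’s record could be run through all of them.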

If such a system can be used to normalize high school and college grades, it could also be applied to law school grades to determine whether students may be admitted to the bar. There has recently been a push for diploma privilege. The justification is the COVID-19 pandemic, and there is a powerful argument that this is not a good time for standardized testing. But if one accepts the claim that the bar exam helps to protect clients from poorly qualified lawyers, diploma privilege is difficult to countenance. Why should lawyers’ welfare be placed above clients’?

There are longstanding arguments that the bar exam is just a tool of the lawyer cartel for reducing entry into the profession. Diploma privilege can be seen as a step toward ending or reducing the significance of the bar exam. But those who take the cartel argument seriously should also be skeptical of requirements that students attend three years of law school. Will a client be safer hiring a lawyer with two years of law school from the top of the class or a lawyer with three years of law school from the bottom of the class? Unsurprisingly, law school deans advocating diploma privilege have not also been suggesting that we relax requirements for legal education. No one seems to be making the argument that because the third year of law school will be an inferior online product this year, we might as well let students skip it.

A compromise on diploma privilege would be to use a predictive grading approach. Bar examiners could build a simple, school-specific model forecasting bar exam performance as a function of law school grades. They might then decide that any student predicted more likely than not to pass the exam receives grade-based diploma privilege. Or they might choose other thresholds: if examiners are particularly concerned about false positives (students admitted to the bar who would not have passed the exam), they might demand a higher predicted probability of passing, and if they are particularly concerned about false negatives (students whose grades predict failure but who would have passed the exam), they might demand a lower one. The core point is that states should use the information available to them, at least if it is infeasible to generate new information in the form of a bar exam.
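As a sketch of this thresholding idea, the following fits a logistic regression of bar passage on law school GPA. The per-school history, the single-predictor model, and the specific thresholds are all illustrative assumptions, not any examiner’s actual method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical history for one law school: final GPA and whether the
# graduate passed the bar (1) or failed (0).
past_gpas = np.array([2.1, 2.4, 2.6, 2.8, 3.0, 3.2, 3.5, 3.8]).reshape(-1, 1)
passed    = np.array([0,   0,   0,   1,   1,   1,   1,   1])

model = LogisticRegression().fit(past_gpas, passed)

def grade_based_privilege(gpa, threshold=0.5):
    """Grant diploma privilege if the predicted pass probability clears
    the threshold. Raising the threshold guards against false positives;
    lowering it guards against false negatives."""
    p_pass = model.predict_proba([[gpa]])[0, 1]
    return p_pass >= threshold

print(grade_based_privilege(3.1))       # more-likely-than-not rule
print(grade_based_privilege(3.1, 0.8))  # stricter, fewer false positives
```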

One could also imagine such a system having a role in the future. Even if all students take the bar exam, qualification for the bar might depend on both the bar exam score and one’s law school grades, normalized based on the bar exam scores of the entire population of students from the law school. This approach is responsive to those who believe that law school grades earned over three years are a better measure of ability than a short exam, while providing a means for comparing grades across schools and still giving students an incentive to take the exam seriously.
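A hedged sketch of one way such a combined rule could work: map a student’s class percentile onto the school’s own bar score distribution, then blend that grade-based prediction with the student’s actual score. The equal weighting and the cut score of 266 are placeholders, not any state’s actual policy.

```python
import numpy as np

def qualifies(own_score, own_gpa, school_gpas, school_scores,
              weight=0.5, cut_score=266):
    """Blend an individual's bar exam score with a grade-based prediction.

    The grade component maps the student's class-rank percentile onto
    the school's bar score distribution, making grades comparable across
    schools; weight and cut_score are illustrative assumptions."""
    pct = np.mean(np.asarray(school_gpas) <= own_gpa) * 100
    grade_based = np.percentile(school_scores, pct)
    blended = weight * own_score + (1 - weight) * grade_based
    return blended >= cut_score
```

Because the grade component still depends on the school’s aggregate exam results, students retain a collective stake in taking the exam seriously, which is the incentive point made above.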



