Latest version of ChatGPT aces bar exam with score nearing 90th percentile

The latest version of the artificial intelligence program ChatGPT passed the Uniform Bar Examination by "a significant margin," earning an overall score of 297 and surpassing even the high passing threshold of 273 set by Arizona.

GPT-4 passed all sections of the July 2022 bar exam, earning a score high enough to approach the 90th percentile of test takers, according to researchers Daniel Martin Katz, a professor at the Illinois Institute of Technology's Chicago-Kent College of Law, and Michael James Bommarito, a professor at the Michigan State University College of Law.

“Our analysis shows that GPT-4 has indeed exceeded the bar, and by a significant margin,” they wrote in a paper published March 15 and available here. According to March press releases here and here, the professors worked with legal AI firm Casetext.

GPT-4 completed all sections of the bar exam and performed particularly well on the multiple-choice section, known as the Multistate Bar Examination. It correctly answered 75.7% of the MBE questions, compared with the human average of 68%.

GPT-4 received a passing grade in all seven subjects tested on the MBE and performed best in Contracts (answering 88.1% of questions correctly), followed by Evidence (85.2%) and Criminal Law and Procedure (81.1%).

Two of the researchers graded GPT-4's answers to the essay questions on the Multistate Essay Examination, with input from colleagues.

“While GPT-4 performs well on many questions, its output is not entirely error-free,” the researchers concluded.

Still, GPT-4 received an essay score of 4.2 out of 6. Most jurisdictions use a similar scale, and a score of 4 is generally considered passing.

In the Multistate Performance Test, GPT-4 also received a score of 4.2 out of 6 points.

“We were somewhat surprised by the quality of the output produced,” the researchers wrote.

The previous version of ChatGPT didn't do as well on the exam. It failed the multiple-choice section but received passing grades in Evidence and Torts.

Publications covering the study include Above the Law and Reuters.

See also:

ABAJournal.com: “Can ChatGPT help law students learn to write better?”
