In a project organized by four researchers, including three from the School of Medicine, readers were tasked with blindly reviewing 34 essays, 22 of which were human-written and 12 of which were generated by artificial intelligence. On average, readers rated the composition and structure of the AI-generated essays higher. However, if they believed an essay was AI-generated, they were less likely to rank it among the overall best essays. Ultimately, the readers accurately distinguished AI essays from human essays only 50 percent of the time, raising questions about the role of AI in academia and education.