Saturday, May 25, 2024

Majority of Humans Fooled by GPT-4 in Turing Test - Noor Al-Sibai, The Byte

OpenAI's GPT-4 is so lifelike that it can apparently trick more than 50 percent of human test subjects into thinking they're talking to a person. In a new paper, cognitive science researchers from the University of California San Diego found that more than half the time, people mistook GPT-4's writing for that of a flesh-and-blood human. In other words, the large language model (LLM) passes the Turing test with flying colors.

The researchers performed a simple experiment: they asked roughly 500 people to have five-minute text-based conversations with either a human or a chatbot built on GPT-4. They then asked the subjects whether they thought they'd been conversing with a person or an AI.