The solution, according to OpenAI, is therefore to focus not on feeding models more accurate information, but on adjusting how their performance is assessed. Since a binary system of grading a model's output as either right or wrong is supposedly fueling hallucination, the OpenAI researchers say the AI industry must instead start rewarding models when they express uncertainty. After all, truth does not exist in black-and-white in the real world, so why should AI be trained as if it does? Running a model through millions of examples on the proper arrangement of subjects, verbs, and predicates will make it more fluent in natural language, but as any living human being knows, reality is open to interpretation. To function in the world, we routinely have to say, "I don't know."
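To make that incentive shift concrete, here is a minimal sketch in Python of how an abstention-aware grader differs from a binary one. The score values and function names are illustrative assumptions, not OpenAI's published rubric: the point is only that once a confident wrong answer costs more than admitting uncertainty, guessing stops being the dominant strategy.

```python
# Illustrative sketch (not OpenAI's actual evaluation scheme): comparing a
# binary right/wrong grader with a rubric that rewards expressing uncertainty.

def binary_score(answer: str, truth: str) -> float:
    """Binary grading: an 'I don't know' scores the same as a wrong answer,
    so a guess can never do worse than admitting uncertainty."""
    return 1.0 if answer == truth else 0.0


def abstention_aware_score(answer: str, truth: str, wrong_penalty: float = -1.0) -> float:
    """Hypothetical hedged grading: abstaining earns 0, a confident wrong
    answer is penalized, so guessing only pays off when the model is
    likely enough to be right."""
    if answer == "I don't know":
        return 0.0
    return 1.0 if answer == truth else wrong_penalty


if __name__ == "__main__":
    truth = "Paris"
    for answer in ["Paris", "Lyon", "I don't know"]:
        print(f"{answer!r:16} binary={binary_score(answer, truth):+.1f}  "
              f"abstention-aware={abstention_aware_score(answer, truth):+.1f}")
```

Under the binary column, "Lyon" and "I don't know" both score 0, so the model has nothing to lose by bluffing; under the abstention-aware column, the bluff is strictly worse than admitting uncertainty.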