Generative Artificial Intelligence (AI) has entered university classrooms at remarkable speed, challenging not only how students learn but also how teachers can tell where thinking is happening1,2,3. The uptake of AI reveals more than rapid adaptation to a new tool: it also exposes how academic training has long shaped the questions students ask. Conventionally, many questions are framed to elicit coherence rather than conflict, and synthesis rather than uncertainty, for example: “Summarise the state of knowledge …”, “Explain the mechanisms of…”. When such questions are put to an AI system, the responses often smooth over disagreement and blur the limits of evidence4,5. The challenge is therefore not how far students should rely on AI but whether universities can help them ask questions that expose uncertainty rather than conceal it. We call this approach “grounded inquiry”, which we define as using AI to expose disagreements and weak support, trace claims to evidence, and make uncertainty apparent within a curated set of primary literature sources. We find that this approach helps Earth science students to think more independently and critically.