There is no shortage of talk about risk in AI, in higher education and beyond. We discuss plagiarism, bias, fairness and governance. These are important challenges. But there are others. How do these systems behave over time, and what do their observable behaviours reveal about their underlying structures and about human responses to them? Engineers cannot answer this alone. Such questions are best addressed through the qualitative research methods in which the humanities specialise, yet these methods are increasingly viewed with suspicion by funders and university administrators. In the UK and elsewhere, departments working in this area are being closed or required to meet narrower definitions of “impact”, definitions that exclude open-ended engagement with AI in pursuit of answers to the philosophical and psychological questions it throws up.