By now, most universities have an artificial intelligence policy. It probably mentions ChatGPT, urges students not to cheat, offers a few examples of “appropriate use” and promises that staff will get guidance and training. All of that is fine. But it misses the real story.

Walk through a typical UK university today. A prospective student may first encounter you via a targeted digital ad whose audience was defined by an algorithm. They apply through an online system that may already include automated filters and scoring. When they arrive, a chatbot answers their questions at 11pm. Their classes are scheduled by algorithms matching student numbers with lecture theatre availability, and their essays are screened by automated text-matching and, increasingly, other AI-detection tools. Learning analytics dashboards quietly classify them as low, medium or high risk. An early-warning system may nudge a tutor to intervene.