Wednesday, March 18, 2026

What 3 Leading AI Models Say Are the Most Vulnerable Jobs in Higher Ed - Ray Schroeder, Inside Higher Ed

I asked artificial intelligence to tell me what jobs in higher education are most vulnerable to replacement in the near term. Sonnet is very honest in its replies, painting a difficult picture for those seeking new jobs in higher ed. For those already in the field, Sonnet suggests becoming the most adept user of AI in your office. Seek to transfer to the unit or office where AI is a top priority. It adds, “Consider whether your institution is viable. Smaller, tuition-dependent institutions without strong endowments are in structural decline. Loyalty to a sinking ship is not a career strategy.” Across all career stages in higher education, Gemini recommends, “To remain relevant, higher education professionals must pivot toward AI Orchestration. Success is no longer measured by how well you perform a task, but by how well you direct the agents performing them.”



HE needs academically aligned, citation-traceable AI systems - Wagdy Sawahel, University World News

“Despite the growing body of research on artificial intelligence and Large Language Models in education, several gaps persist, including a lack of structured conceptual frameworks that integrate academic data governance and pedagogical requirements. Thus, there is a need for conceptual models that can guide the development, governance and pedagogical integration of Large Language Models in higher education, particularly in under-represented regions and contexts,” Abanga said. “The most important contribution of my study is the proposed Academic-LLM Framework that integrates data quality, pedagogy, governance and continuous feedback,” she noted.

Holistic, human-centered approach to AI puts U of A in class of its own - Craig Reck, University of Arizona

The University of Arizona is defining a new standard for how artificial intelligence integrates into higher education and society. By prioritizing ethics, personal responsibility and societal impact over just technical speed, the U of A is building an ecosystem where integrity and human creativity remain the primary drivers of progress. This holistic, human-centered approach positions the university as a national example in the responsible adoption of AI technology at a modern research university. The architect of this strategic effort to integrate AI across research, instruction and operations is David Ebert, the U of A’s inaugural chief AI and data science officer – one of only a few such positions in higher education nationally. 

Tuesday, March 17, 2026

Generative AI can play a role uplifting family and community in early childhood education - Andres Bustamante & Aria Gastón-Panthaki, the Conversation

Use of generative artificial intelligence technology is already widespread in K-12 schools and higher education. Now, AI technologies such as conversational agents and tablet-based assessments are starting to make their way toward early childhood education. One concern with AI in a prekindergarten setting is that the technology will replace or disrupt the rich interactions and deep relational bonds between children and their caregivers. Another worry is that AI systems will reproduce discrimination related to race, gender and socioeconomic status, which could reinforce stereotypes and biases. What if, instead, this technology was used to uplift marginalized voices rather than silence them?

https://theconversation.com/generative-ai-can-play-a-role-uplifting-family-and-community-in-early-childhood-education-272237

ASU professor analyzing how artificial intelligence could cause businesses to lose their knowledge - Ignacio Ventura, KJZZ

A professor from Arizona State University is analyzing how artificial intelligence could cause businesses to lose their knowledge. ASU management and entrepreneurship professor Don Lange collaborated with another professor from the University of Passau in Germany. Their article says companies that choose to use AI systems run the risk of their models becoming outdated. For example, a bank that uses machine learning to detect fraud may eventually encounter problems if the system does not adapt to changing techniques of fraudsters.
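The fraud example describes what is often called model drift. A toy sketch, with entirely hypothetical amounts and thresholds not taken from the article, shows how a detection rule tuned to yesterday's fraud pattern degrades once fraudsters adapt:

```python
# Illustrative sketch of model drift: a frozen rule built from historical
# fraud data stops working when the fraud pattern changes. All transactions
# and the $5,000 threshold are invented for illustration.

def flag(txn):
    # Rule learned from past data: fraud looked like large single transfers.
    return txn["amount"] > 5000

old_fraud = [{"amount": 9000}, {"amount": 7500}, {"amount": 6200}]
# Fraudsters adapt: many small transfers deliberately below the threshold.
new_fraud = [{"amount": 900}, {"amount": 450}, {"amount": 1200}]

def detection_rate(cases):
    # Fraction of known-fraud cases the frozen rule still catches.
    return sum(flag(t) for t in cases) / len(cases)

print(detection_rate(old_fraud))  # 1.0 on the pattern the rule was built for
print(detection_rate(new_fraud))  # 0.0 once the pattern shifts
```

The point of the sketch is that the rule itself never breaks; the world it encodes knowledge about moves on, which is the kind of knowledge loss the researchers describe.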

College leaders reflect on the future of higher education - Stanford Report

The panel included UC Berkeley Chancellor Rich Lyons, Brown University President Christina Paxson, and University of Oregon President John Karl Scholz. The discussion was moderated by former Stanford President and Chairman of the Board of Alphabet John Hennessy. The Stanford Institute for Economic Policy Research (SIEPR) Economic Summit is an annual campus event that brings experts together to discuss the global economy, domestic competitiveness, the future of universities, and other critical issues. Public trust in universities has declined in recent years, with some Americans questioning both the education they provide and campus culture. Scholz cited data showing fewer than half of Americans now believe universities are playing a positive role in society – what he called an “existential challenge.” Paxson pointed to a recent Gallup-Lumina poll showing that about 2% of college students feel unwelcome on campus because of their political views. She also said students increasingly choose schools with peers who share their politics. But data alone won’t solve the problem. “Trust doesn’t get built through facts,” she said. “It’s feelings-based.”

Monday, March 16, 2026

OpenAI ChatGPT leader discusses AI agents and the future of knowledge work at Harvard Business School - Emma Thompson, EdTech Innovation Hub

The discussion also explored how the responsibilities of product managers could change as generative AI systems become part of the development process. Ostrovskiy wrote: “The job becomes less about coordination and more about 1) understanding real user problems, 2) defining what ‘success’ means in an AI system, and 3) building evals and feedback loops so you can tell if a new model configuration is actually better than the last one.” He added that curiosity about how AI systems behave may become a core skill across multiple roles: “The advantage goes to people who are curious about system behavior and who like building, regardless of whether their title says PM, engineer, designer or something else.” The conversation also included advice for students learning how to evaluate AI systems: “Build something with one foundation model, then swap in a different model or prompt configuration and force yourself to decide if it’s better. When you’re a student looking to become a better PM, even a simple spreadsheet of use cases plus a qualitative rubric counts as an eval.”
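The “simple spreadsheet of use cases plus a qualitative rubric” advice can be sketched in a few lines. The use cases, model outputs, and pass criteria below are hypothetical placeholders invented for illustration, not anything from the talk; in practice the outputs would come from real model calls:

```python
# Minimal eval sketch: a list of use cases, canned outputs from two
# hypothetical prompt/model configurations, and a crude rubric that
# checks whether a required term appears in each output.

use_cases = [
    {"prompt": "Summarize a refund policy", "must_mention": "refund"},
    {"prompt": "Translate 'hello' to French", "must_mention": "bonjour"},
]

# Pretend outputs from the two configurations being compared.
outputs = {
    "config_a": ["Our refund policy allows returns within 30 days.", "salut"],
    "config_b": ["Refunds are issued within 30 days of purchase.", "bonjour"],
}

def score(config):
    # Rubric: one point per use case whose required term appears in the output.
    hits = sum(
        case["must_mention"] in out.lower()
        for case, out in zip(use_cases, outputs[config])
    )
    return hits / len(use_cases)

for name in outputs:
    print(name, score(name))  # config_b scores higher on this tiny rubric
```

Swapping in a new model or prompt then just means regenerating the outputs and rerunning the same rubric, which is exactly the "force yourself to decide if it's better" loop described above.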

How AI Can Close Equity Gaps for First-Generation Students - Richard J. Smith, EdTech

The emergence of artificial intelligence in higher education is often blamed for widening the digital divide for first-generation college students. However, given that a growing majority of Americans have access to the internet and capable digital devices, such as laptops and smartphones, AI has the potential to close equity gaps for under-resourced students. Student support professionals can leverage this technology even further by providing AI-driven, on-demand guidance across nearly every facet of the college experience. Because first-generation students often require more personalized, intrusive advising than their continuing-generation peers, they are an ideal population for supplemental advising tools.

OpenAI Adds Interactive Math and Science Learning Tools to ChatGPT - Rhea Kelly, Campus Technology

OpenAI introduced interactive math and science visualizations in ChatGPT that allow users to explore formulas, variables, and relationships in real time. The tool currently covers over 70 core math and science topics and is aimed initially at high school and college-level learners. Users can adjust variables, manipulate formulas, and immediately see how changes affect graphs and outcomes.

Sunday, March 15, 2026

The Unmaking of the American University - Nicholas Lemann, the New Yorker

Now the compact between the universities and the federal government has been broken, and maybe not just temporarily. The Trump Administration has deployed a brutally effective, previously unused technique for getting these institutions’ full attention: suspending their funds, even those appropriated by Congress and legally committed to in contracts. 

Adopting AI is a social contract - Andrew Inkpen & Dani Inkpen, University Affairs

Integrating artificial intelligence into our societies and personal lives binds us to certain futures and forecloses the possibility of others. Are we ready to accept the consequences? Much of the present conversation about AI in higher education centers around questions of implementation. How do we use AI in accordance with principles of universal design? How can we ensure equity in its usage, be it across axes of gender, race or class? What does AI mean for the longevity of the professorial profession? Implementation should indeed be approached with care and nuance, and we welcome this conversation. Yet, questions of implementation assume that AI is desirable and inevitable in the classroom. The prior question of whether AI in higher education is actually desirable is often overlooked. Two widespread assumptions underpin this move: 1) technological progress is inevitable; 2) technology is apolitical — it only becomes political in its implementation. 

New Jersey to Use AI to Score Standardized Writing Tests - Liz Rosenberg, GovTech

Starting this spring, New Jersey is debuting a new state test, the New Jersey Student Learning Assessments-Adaptive, for students in grades 3 through 10, covering English, math and science. The test will be “adaptive,” meaning students will get different questions based on their previous answers. The artificial intelligence used for scoring will be trained on scores generated by human scorers on practice tests given to students in October and November.

Saturday, March 14, 2026

AI broke the college degree: Why higher education matters more than ever - Katherine Perry, the Linfield Review

While it was once a faraway and futuristic idea, AI has now found its way into many aspects of everyday life, including higher education. This is what Patrick Dempsey, founder and co-CEO of Pend AI, spoke about in his keynote lecture on Feb. 18. Higher education is, at least in part, meant to equip students with the skills and specialized knowledge from their fields they will need in their careers after graduation. For this reason, Dempsey weighed in on the discourse surrounding AI in the workplace. While many worry that AI will automate jobs wholesale, he posited that AI could be used to automate certain tasks within jobs that don’t require this specialized knowledge, like emailing and meetings.

AI Tools to Reduce College Dropout Rates - Nancy Mann Jackson, EdTech

Roughly 3 in 10 college students drop out without earning any degree, leaving them with higher unemployment and lower lifetime earnings than bachelor’s degree holders, according to the Education Data Initiative. To help boost student retention, colleges and universities are using a variety of artificial intelligence tools that can help identify at-risk students early, offer customized learning, provide 24/7 assistance and improve engagement. “We’ve always known in higher education that we need to deliver more personalized, timely help to students who are struggling, but we haven’t always had the resources to deliver personal attention at scale,” says Timothy Renick, executive director of the National Institute for Student Success at Georgia State University. “Using technology can level the playing field, allowing us to leverage data and analytics to deliver personal attention at scale in a way that is much more cost effective than hiring hundreds of new staff.”

Today’s AI is built to respond. The future belongs to proactive systems. - Kiara Nirghin & Nikhara Nirghin, Big Think

Much of what we’ve seen from the biggest artificial intelligence (AI) companies has revolved around words: You go to their chatbot, ask it a question, and it responds. Over the past couple of years, some have taken this a step further with AI agents — those can actually do things, but only things you’ve told them to do. The next frontier in AI is not better chat. It is not even better agents. The next frontier is proactive AI, the kind that takes action, learns in real time, and, critically, comes to you before you go to it. This distinction is not a feature improvement. It is a civilizational pivot.

Friday, March 13, 2026

What national AI plans get wrong and how to fix them - Cameron F. Kerry and Saurabh Mishra, Brookings

AI is not a standalone sector; it creates value only when embedded in real industries. Countries should build cognitive infrastructure, including data, institutions, talent, and inherent local domain knowledge—not just compute capacity—to operationalize AI for real-world impact.  The winning strategy is to strengthen what a country already does well and use AI to move into adjacent higher-value activities. 

OpenAI’s New GPT-5.4 Pro Is Now The Smartest AI In The World. - TheAIGRID, YouTube

The video discusses the release of OpenAI’s GPT-5.4 Pro, highlighting its dominance across sophisticated benchmarks like Frontier Math and OSWorld, where it demonstrates superhuman problem-solving by resolving mathematical equations that remained unsolved for decades [06:46]. While the model shows significant advancements in professional white-collar tasks and creative writing, the creator notes that its high performance comes with a substantial price increase [02:17] and introduces serious cybersecurity risks. Classified as a "high" threat in OpenAI’s preparedness framework, the model's ability to autonomously execute complex cyberattacks [21:42] suggests that future iterations could reach "critical" risk levels, potentially necessitating stricter access controls and government oversight as AI capabilities continue to accelerate toward human-level proficiency in specialized fields [13:37]. [summary assisted by Gemini 3]

https://www.youtube.com/watch?v=3jrGutFAIgo

OpenAI's new GPT-5.4 clobbers humans on pro-level work in tests - by 83% - David Gewirtz, ZDNET

GPT-5.4 is also more reliable, producing 18% fewer errors and 33% fewer false claims than GPT-5.2, according to OpenAI. Its 83% score, on tests spanning nine industries and 44 real-world occupations, suggests the model rivals expert professionals, while new capabilities improve coding, tool use, and computer control.


Thursday, March 12, 2026

Universities Are Not Only About Jobs. They're About Human Existence in the Age of AI. - Maria Mercedes Mateo-Berganza Diaz, IDB

In a world where AI can outperform humans in many cognitive tasks, universities must preserve human judgment, ethics, and purpose — not just technical skills. Higher education must prioritize broad, humanistic foundations alongside specialized skills to prepare students for complex, “messy” work that machines cannot replace. For the Global South, the stakes are even higher: universities are essential to safeguard agency, cultural sovereignty, and the ability to shape futures — not merely adapt to those designed elsewhere.  

https://www.iadb.org/en/blog/education/universities-are-not-only-about-jobs-theyre-about-human-existence-age-ai-0

AI in HE: International study finds high use, low support - Karen MacGregor, University World News

An international survey of university academics and students by Coursera, the massive online learning platform with 375 leading university and industry partners, has revealed highly positive attitudes towards generative AI, with more than 95% of respondents making use of AI tools. But a weighty 56% fear that higher education is unprepared to handle AI. In the survey of 4,200 educators and students in India, Mexico, the United States, the United Kingdom and Saudi Arabia, only 26% of academics said their university had an AI use policy. Nearly two-thirds (65%) of educators and students believed unregulated AI could undermine degrees. Importantly, Dr Marni Baker Stein, chief content officer at Coursera, told University World News: “We’re seeing learners run out ahead in figuring out how to use AI tools in pretty sophisticated and personalised ways to help them in their studies. The question is, how and when do universities catch up with that velocity in the learner population?”

AI in higher education is now the norm—not the exception - Michelle Centamore, University Business

AI is quickly becoming standard practice in higher education, with students and faculty reporting widespread use and a largely positive view of its impact, according to Coursera’s new report, “AI in Higher Education: Insights on Attitudes, Adoption, and Risks.” The findings also point to rising demand for formal training. Nine in 10 students said they want generative AI instruction included in their degree programs. On the hiring side, 75% of employers said they would rather hire a less experienced candidate with a generative AI credential than a more experienced candidate without one.