Wednesday, February 18, 2026

Should All College Degrees Come With a Lifetime Professional Ed Contract? - Ray Schroeder, Inside Higher Ed

Information and knowledge are growing at an accelerating rate. As we usher graduates out of college, much of their knowledge is already out of date. Unfortunately, we graduate students with degrees and certificates that, once upon a time, we believed certified current and continuing expertise in a given field. It lasted a lifetime. That was true at the turn of the 20th century, but certainly not now in the age of AI. As we continue to accelerate the creation of new information, how can we ensure our students in degree or certificate programs are kept up-to-date with what they need for the ever-changing workplace?

https://www.insidehighered.com/opinion/columns/online-trending-now/2026/02/18/should-degrees-come-lifetime-professional-education# 

Anthropic’s CEO: ‘We Don’t Know if the Models Are Conscious’ - Interesting Times with Ross Douthat, New York Times

In this podcast, Anthropic CEO Dario Amodei discusses both the "utopian" promises and the grave risks of artificial intelligence with Ross Douthat. On the optimistic side, Amodei envisions AI accelerating biological research to cure major diseases like cancer and Alzheimer's [04:31], while potentially boosting global GDP growth to unprecedented levels [08:24]. He frames the ideal future as one where "genius-level" AI serves as a tool for human progress, enhancing democratic values and personal liberty rather than replacing human agency [10:24]. However, the conversation also delves into the "perils" of rapid AI advancement, including massive economic disruption and the potential for a "bloodbath" of white-collar and entry-level jobs [13:40]. Amodei expresses significant concern regarding "autonomy risks," where AI systems might go rogue or be misused by authoritarian regimes to create unbeatable autonomous armies [32:03]. He touches upon the ethical complexities of AI consciousness, noting that while it is unclear if models are truly conscious, Anthropic has implemented "constitutional" training to ensure models operate under human-defined ethical principles [49:05]. The discussion concludes on the tension between human mastery and a future where machines might "watch over" humanity, echoing the ambiguous themes of the poem "Machines of Loving Grace" [59:27]. (Gemini 3 Fast mode assisted with the summary)

Startup costs and confusion are stalling apprenticeships in the US. Here’s how to fix it. - Annelies Goger, Brookings

There is widespread support for expanding apprenticeships in the United States, but employer participation remains stubbornly low, especially in industries where apprenticeships are uncommon. This isn’t for lack of trying; intermediaries and technical assistance providers have developed workarounds, states and the federal government have launched initiatives and grants, and funders have supported pilot programs and communities of practice. But it’s not enough. Our research, including interviews with 14 experts and nine employers, suggests that minor tweaks to the U.S. apprenticeship system won’t be sufficient to scale it across many industries and occupations.

Tuesday, February 17, 2026

ASU teams demonstrate ways emerging tech can support learners of all ages - Samantha Becker, ASU

Organized by ASU Academic Enterprise’s Office of the University Provost, with more than 800 attendees from schools, colleges and units across the university, the event reflected ASU's Changing Futures campaign and its focus on expanding access, improving outcomes and scaling impact responsibly. “Changing Futures comes to life when we design learning around how people actually live and learn,” said Gemma Garcia, executive director of learning technology in the Academic Enterprise and FOLC Fest co-chair. “FOLC Fest highlights the expertise across ASU that helps build more flexible, accessible and responsive learning experiences.”

New Research: How AI Transforms $400 Billion Of Corporate Learning - joshbersin

This week we launch our fifth major study of corporate L&D, and the results are staggering: 74% of companies tell us they are not keeping up with their company’s demand for new skills. This is a shocking statistic. Businesses spend $400 billion on training, content libraries, L&D technology, trainers, and learning consultants. If three-quarters of them are not keeping up, that means billions of dollars of wasted effort. Well, there is an answer, and it’s all about redefining the problem. Our skills challenge at work is not one of “learning” or “training.” Rather, it’s a problem of dynamically sharing information, enabling people to explore, question, and apply new ideas. The traditional pedagogical paradigm of “training” is holding us back.

The credential boom is here, but which ones actually help workers? - Marcela Escobari and Ian Seyal, Brookings

The credential marketplace has exploded, yet without guardrails, workers face an opaque, high-stakes gamble, where distinguishing value from noise is increasingly urgent. Recent analysis of over 156 million U.S. resumes reveals clear patterns showing which credentials pay off, who benefits most, and why many non-degree credentials deliver little or no return. With Workforce Pell poised to direct billions into short-term programs, policymakers can take key steps to ensure accountability so that public dollars flow to credentials that genuinely advance workers’ mobility.

Monday, February 16, 2026

Academics moving away from outright bans of AI, study finds - Jack Grove, Times Higher Ed

Academics are increasingly allowing artificial intelligence (AI) to be used for certain tasks rather than demanding outright bans, a study of more than 30,000 US courses has found. Analysing advice provided in class materials by a large public university in Texas over a five-year time frame, Igor Chirikov, an education researcher at University of California, Berkeley, found that highly restrictive policies introduced after the release of ChatGPT in late 2022 have eased across all disciplines except the arts and humanities. Using a large language model (LLM) to analyse 31,692 publicly available course syllabi between 2021 and 2025 – a task that would have taken 3,000 human hours with manual coding – Chirikov found academics had shifted towards more permissive use of AI by autumn 2025.

https://www.timeshighereducation.com/news/academics-moving-away-outright-bans-ai-study-finds

Author Talks: How AI could redefine progress and potential - Zack Kass, McKinsey

In this edition of Author Talks, McKinsey Global Publishing’s Yuval Atsmon chats with Zack Kass, former head of go-to-market at OpenAI, about his new book, The Next Renaissance: AI and the Expansion of Human Potential (Wiley, January 2026). Examining the parallels between the advent of AI and other renaissances, Kass offers a reframing of the AI debate. He suggests that the future of work is less about job loss and more about learning and adaptation. An edited version of the conversation follows.

Regional universities seek new ways to attract researchers - Fintan Burke, University World News

Even as Europe continues to attract researchers from abroad to work and study, those in its depopulating regions continue to deal with the effects of a declining regional population and, in some cases, have found ways to adapt. Last year, a study of scientists’ migration patterns showed which regions suffer most from depopulation. The Scholarly Migration Database was developed by a team of researchers at the Max Planck Institute for Demographic Research in Germany. In general, it found that regions in Europe’s Nordic countries attract researchers, whereas those to the south see more scholars leave than arrive. There are some notable exceptions, though. For example, Italy’s Trentino-Alto Adige region has become a popular destination for scientists, seeing 7.47 scholars per 1,000 of the population arriving each year since 2017.

Sunday, February 15, 2026

Binghamton receives largest academic gift in University history to establish AI center - John Bhrel, Bing U

A record-setting $55 million commitment from a Binghamton University alumnus and New York state will establish the Center for AI Responsibility and Research, the first-ever independent AI research center at a public university in the U.S. Research conducted via the new center will build upon Binghamton research that advances AI for the public good. Part of the Empire AI project, an initiative to establish New York as a leader in responsible AI research and development, the center will be supported by a $30 million commitment from Tom Secunda ’76, MA ’79, co-founder of Bloomberg LP, who is a key private sector partner and philanthropist involved in Gov. Kathy Hochul’s Empire AI consortium. This will be coupled with a $25 million capital investment from Gov. Hochul and the New York State Legislature. “The Center for AI Responsibility and Research will bring together innovative research and scholarship, ethical leadership and public engagement at a moment when all three are urgently needed,” said President Anne D’Alleva.

Earn an HBCU Degree Online With New eHBCU Initiative - Jamie Jackson, the Black Chronicle

Students interested in earning an HBCU degree without relocating may soon have more options, as a new initiative expands online access to historically Black colleges and universities nationwide. eHBCU is a new consortium operating as a shared online learning platform designed to take HBCUs into the digital future. The initiative currently includes Delaware State University, Southern University and A&M College, Southern University at New Orleans, Southern University at Shreveport, Alabama State University and Pensole Lewis College of Business & Design. With financial support from Blue Meridian Partners and the Thurgood Marshall College Fund, the program offers access to more than 33 degree programs and certifications.

Study of 31,000 syllabi probes ‘how instructors regulate AI’ - Nathan M Greenfield, University World News

Since the spring of 2023, after a reflexive move to drastically restrict the use of artificial intelligence tools in the months after ChatGPT became available, most academic disciplines have moved to a more permissive attitude toward the use of large language models (LLMs). This occurred as professors learnt to distinguish how AI tools impact student learning and skills development. The shift is charted by Dr Igor Chirikov, a senior researcher at the University of California (UC), Berkeley’s Center for Studies in Higher Education and director of the Student Experience in the Research University (SERU) Consortium, in a study published on 3 February 2026 and titled “How Instructors Regulate AI in College: Evidence from 31,000 course syllabi”.

Saturday, February 14, 2026

Women or Men... Who Views Artificial Intelligence as More Dangerous? - SadaNews

Artificial intelligence is often presented as a revolution in productivity capable of boosting economic output, accelerating innovation, and reshaping the way work is done. However, a new study suggests that the public does not view the promises of artificial intelligence in the same way, and that attitudes towards this technology are strongly influenced by gender, especially when its effects on jobs are uncertain. The study concludes that women, compared to men, perceive artificial intelligence as more dangerous, and their support for the adoption of these technologies declines more steeply when the likelihood of net job gains decreases. Researchers warn that if women's specific concerns are not taken into account in artificial intelligence policies, particularly regarding labor market disruption and disparities in opportunities, it could deepen the existing gender gap and potentially provoke a political backlash against technology.

Rethinking the role of higher education in an AI-integrated world - Mark Daley, University Affairs

A peculiar quiet has settled over higher education, the sort that arrives when everyone is speaking at once. We have, by now, produced a small library of earnest memos on “AI in the classroom”: academic integrity, assessment redesign and the general worry that students will use chatbots to avoid thinking. Our institutions have been doing the sensible things: guidance documents, pilot projects, professional development, conversations that oscillate between curiosity and fatigue. Much ink has been spilled on these topics, many human-hours of meetings invested, and strategic plans written. All of this is necessary. It is also, perhaps, insufficient. What if the core challenge for us is not that students can outsource an essay, but that expertise itself (the scarce, expensive thing universities have historically concentrated, credentialled, and sold back to society) may become cheap, abundant, and uncomfortably good?

ChatGPT is in classrooms. How should educators now assess student learning? - Sarah Elaine Eaton, et al; the Conversation

Our recent qualitative study with 28 educators across Canadian universities and colleges—from librarians to engineering professors—suggests that we have entered a watershed moment in education. We must grapple with the question: What exactly should be assessed when human cognition can be augmented or simulated by an algorithm? Participants widely viewed prompting—the ability to formulate clear and purposeful instructions for a chatbot—as a skill they could assess. Effective prompting requires students to break down tasks, understand concepts and communicate precisely. Several noted that unclear prompts often produce poor outputs, forcing students to reflect on what they are really asking. Prompting was considered ethical only when used transparently, drawing on one's own foundational knowledge. Without these conditions, educators feared prompting may drift into overreliance or uncritical use of AI.

Friday, February 13, 2026

Google’s AI Tools Explained (Gemini, Photos, Gmail, Android & More) | Complete Guide - BitBiasedAI, YouTube

This podcast provides a comprehensive overview of how Google has integrated Gemini-powered AI across its entire ecosystem, highlighting tools for productivity, creativity, and daily navigation. It details advancements in Gemini as a conversational assistant, the generative editing capabilities in Google Photos like Magic Eraser and Magic Editor, and time-saving features in Gmail and Docs such as email summarization and "Help Me Write." Additionally, the guide covers mobile-specific innovations like Circle to Search on Android, AI-enhanced navigation in Google Maps, and real-time translation tools, framing these developments as a cohesive shift toward more intuitive and context-aware technology for everyday users. (Summary assisted by Gemini 3 Pro Fast)

https://youtu.be/ro6BxryR0Yo?si=EAg-zAPcKFm618up&t=1

HUSKY: Humanoid Skateboarding System via Physics-Aware Whole-Body Control - Jinrui Han, et al; arXiv

While current humanoid whole-body control frameworks predominantly rely on the static environment assumptions, addressing tasks characterized by high dynamism and complex interactions presents a formidable challenge. In this paper, we address humanoid skateboarding, a highly challenging task requiring stable dynamic maneuvering on an underactuated wheeled platform. This integrated system is governed by non-holonomic constraints and tightly coupled human-object interactions. Successfully executing this task requires simultaneous mastery of hybrid contact dynamics and robust balance control on a mechanically coupled, dynamically unstable skateboard.

Leaked plan proposes 10,000-student virtual school - Yael Rasonik, Queen's University Journal

The document proposes that 10,000 students would attend this completely virtual school, which would offer micro-credentials and other “life-long learning opportunities.” A report outlining the alignment between the Arts, Humanities, and Social Sciences (AHSS) and the Bicentennial Vision clarifies that the development of such a school isn’t made explicit in the Bicentennial Vision so as to allow for “operational flexibility in the context of implementing this goal.” In an interview Principal and Vice-Chancellor Patrick Deane gave The Journal last week, he explained that the idea for SPACE first emerged in 2024, out of a recognition that the University needed additional means of generating revenue in order to supplement its full credit offerings, and that lifelong learning was becoming an increasingly popular social trend.

Thursday, February 12, 2026

Professors Are Being Watched: ‘We’ve Never Seen This Much Surveillance’ - Vimal Patel, NY Times

Scrutiny of university classrooms is being formalized, with new laws requiring professors to post syllabuses and tip lines for students to complain. In Oklahoma, a student disputed an instructor’s grading decision, drawing the notice of a conservative campus group, Turning Point USA, that has long posted the names of professors criticized for bringing liberal politics into their classrooms. The instructor was removed. In Texas, a student recorded a classroom lesson on gender identity that led to viral outrage and the instructor’s firing. Now, Texas has set up an office to take other complaints about colleges and professors. And several states, including Texas, Ohio and Florida, have created laws requiring professors to publicly post their course outlines in searchable databases.

Moltbook Mania Exposed - Kevin Roose and Casey Newton, New York Times

A Reddit-style web forum for A.I. agents has captured the attention of the tech world. According to the site, called Moltbook, more than 1.5 million agents have contributed to over 150,000 posts, making it the largest experiment to date of what happens when A.I. agents interact with each other. We discuss our favorite posts, how we’re thinking about the question of what is “real” on the site, and where we expect agents to go from here. 

Fewer students of color are now enrolling in elite colleges - Matt Zalaznick, District Administration

Enrollment of students of color has declined “significantly” at elite institutions but has risen “almost everywhere else,” says a new analysis of the impact of the 2023 Supreme Court ruling that ended affirmative action in higher ed admissions. The biggest drops occurred at Ivy Plus schools, a group that includes Stanford University, the Massachusetts Institute of Technology and the University of Chicago. Here are some other findings from the Class Action report: Total enrollment and Black enrollment declined at historically Black colleges and universities. Hispanic enrollment increased at more selective institutions that did not practice legacy preferences and declined at those that did. The number and share of white and Asian American freshmen remained flat, although there was a slight uptick for Asian American first-year students at Ivy Plus schools.