Monday, October 21, 2024

Changing of the Guard at ‘Inside Higher Ed’ - Doug Lederman, Inside Higher Ed

I preface the link to this article by joining fellow IHE blogger and education writer, speaker, and consultant John Warner in thanking Doug Lederman for his vision and leadership in higher education. Doug announced earlier this month that he is stepping down from the editor position at Inside Higher Ed. Over the past two decades, Doug served our field deftly through journalistic leadership in our evolution to where we stand today in higher education. He sums up his contributions in this article by writing, "After spending the last 35-plus years analyzing and assessing higher ed, I’m looking forward to a next career chapter, where I can try to fix some of the problems I see in this industry I care so much about." So it is not good-bye but an optimistic look to the future, and I encourage you to visit the URL below for Doug's article.

https://www.insidehighered.com/opinion/views/2024/10/09/changing-guard-inside-higher-ed

Ohio State Opts for Asynchronous Learning on Election Day - Katherine Knott, Inside Higher Ed

Ohio State University is giving students Election Day off—at least mostly—as part of a pilot program. The university said in a social media post this week that Nov. 5 will be a “university-wide day of asynchronous learning” to support student participation in the election. Instead of meeting in person or virtually, instructors will assign classwork that students can do on their own time, such as reading, writing or a problem set. Most scheduled classes will proceed asynchronously, though some instructors may choose to still hold in-person or virtual labs or clinical classes.

Science Says Being Generous, Thoughtful, and Kind Is a Sign of High Intelligence - Jeff Haden, Inc.

According to a study published in the International Journal of Nonprofit and Voluntary Sector Marketing, greater cognitive ability is associated with a higher probability of charitable giving. A study published in the Journal of Research in Personality found that unconditional altruistic behavior (acting to help someone else at some sort of cost to yourself) is related to general intelligence. A study published in Social Psychology and Personality Science found that intelligence correlates with personal values; in simple terms (the only terms I understand), the less selfish you are, the smarter you tend to be.

Sunday, October 20, 2024

Penn's Graduate School of Education joins global consortium on artificial intelligence in education - Finn Ryan, Daily Pennsylvanian

Penn’s Graduate School of Education is one of seven international institutions participating in a new research consortium on the use of artificial intelligence in education. The project launched on Sept. 17 at the United Nations General Assembly in New York. Institutions in Ghana, Spain, India, Kazakhstan, Colombia, and Qatar will contribute to the consortium in addition to Penn. Penn’s specific objective is to investigate the social repercussions of AI in education, namely its potential to address educational inequities and stimulate positive outcomes. Researchers will compare the influence of AI between “a medallion university and an opportunity institution,” according to Penn GSE News.

OpenAI Releases Swarm: An Experimental AI Framework for Building, Orchestrating, and Deploying Multi-Agent Systems - Asif Razzaq, MarktechPost

OpenAI introduces the Swarm Framework as a solution to simplify the complexities inherent in multi-agent orchestration. Swarm is an experimental framework that focuses on making agent coordination, execution, and testing both lightweight and highly controllable. The goal is to empower developers to manage interactions between multiple AI agents in a straightforward and efficient manner. Swarm’s strength lies in its two primitive abstractions: agents and handoffs. An agent in Swarm is a combination of specific instructions and tools that it can use to accomplish a task. At any point during its process, an agent has the ability to “hand off” a conversation or task to another agent, which makes the orchestration seamless and modular. This abstraction not only enables complex interactions among different agents but also ensures that the overall coordination remains under tight control.
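The agents-and-handoffs idea described above can be illustrated with a minimal sketch in plain Python. Note that this is a toy illustration of the pattern, not Swarm's actual API; the names here (Agent, run_loop, the triage/billing agents) are hypothetical stand-ins.

```python
# Toy sketch of the agents-and-handoffs pattern: an agent either answers
# a message or hands it off to another agent. Illustrative only; these
# names are not Swarm's real API.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Agent:
    name: str
    instructions: str
    # A handler inspects the message and either answers (returns a str)
    # or hands off (returns another Agent).
    handle: Callable[[str], "str | Agent"]


def run_loop(agent: Agent, message: str, max_hops: int = 5) -> str:
    """Route a message through agents until one produces an answer."""
    for _ in range(max_hops):
        result = agent.handle(message)
        if isinstance(result, Agent):
            agent = result  # handoff: the next agent takes over
            continue
        return f"[{agent.name}] {result}"
    raise RuntimeError("too many handoffs")


billing = Agent(
    name="billing",
    instructions="Resolve billing questions.",
    handle=lambda msg: "Your invoice has been resent.",
)

triage = Agent(
    name="triage",
    instructions="Route the user to the right specialist.",
    handle=lambda msg: billing if "invoice" in msg else "How can I help?",
)

print(run_loop(triage, "I never received my invoice"))
# -> [billing] Your invoice has been resent.
```

The key design point matches the article: because a handoff is just an agent returning another agent, coordination stays modular and each agent only needs its own instructions and tools.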

Using the LMS Effectively to Reduce Logistical Challenges for Students - Faculty Focus

When preparing for a new term, there is much to consider. Textbooks and course materials to review, syllabi to update, lessons to plan, lectures to prepare. Since the pandemic, which necessitated the use of Learning Management Systems (LMS) such as Canvas or Blackboard, there is now an additional component to consider in developing our courses. Instead of thinking about the LMS as simply a repository for course essentials (syllabus, contact information, etc.), consider how it might be used as a tool for enhancing student learning and engagement. 

Saturday, October 19, 2024

Machines of Loving Grace - Dario Amodei, CEO of Anthropic

In fact I think it is critical to have a genuinely inspiring vision of the future, and not just a plan to fight fires. Many of the implications of powerful AI are adversarial or dangerous, but at the end of it all, there has to be something we’re fighting for, some positive-sum outcome where everyone is better off, something to rally people to rise above their squabbles and confront the challenges ahead. Fear is one kind of motivator, but it’s not enough: we need hope as well. The list of positive applications of powerful AI is extremely long (and includes robotics, manufacturing, energy, and much more), but I’m going to focus on a small number of areas that seem to me to have the greatest potential to directly improve the quality of human life. The five categories I am most excited about are:

Biology and physical health

Neuroscience and mental health

Economic development and poverty

Peace and governance

Work and meaning

Exploring generative AI in higher education: a RAG system to enhance student engagement with scientific literature - Dominik Thüs, et al; Frontiers In

This study explores the implementation and evaluation of OwlMentor, an AI-powered learning environment designed to assist university students in comprehending scientific texts. OwlMentor was developed through a participatory process and then integrated into a course, with development and evaluation taking place over two semesters. It offers features like document-based chats, automatic question generation, and quiz creation.
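The document-based chat in a RAG system rests on a retrieval step: chunks of the source text are scored against the student's question, and the most similar chunks are passed to the language model as context. Here is a minimal sketch of that retrieval step, using a toy bag-of-words similarity in place of a real embedding model; it is not OwlMentor's actual implementation, and the chunk texts are invented for illustration.

```python
# Minimal retrieval step of a RAG pipeline: score document chunks
# against a query and return the best matches to use as LLM context.
# A toy bag-of-words similarity stands in for a real embedding model.

import re
from collections import Counter
from math import sqrt


def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase word counts."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]


chunks = [
    "Mitochondria produce ATP through oxidative phosphorylation.",
    "The study sampled 200 undergraduate students.",
    "ATP synthase is embedded in the inner mitochondrial membrane.",
]
context = retrieve("How do mitochondria make ATP?", chunks, k=2)
# The retrieved chunks would then be prepended to the prompt sent to the model.
```

In a production system the retrieved context is concatenated with the question into a prompt, which is what lets the chatbot ground its answers in the course's actual readings rather than in the model's general training data.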

Stanford Researchers Use AI to Simulate Clinical Reasoning - Abby Sourwine, GovTech

Researchers at Stanford University are designing Clinical Mind AI to be a customizable chatbot that can function as a virtual patient with which medical students can interact and practice forming diagnoses. A key component of medical education is a skill called clinical reasoning. Thomas Caruso, a professor teaching anesthesiology, perioperative and pain medicine at Stanford University, likens clinical reasoning to an episode of the TV show House. “Clinical reasoning is sort of like a House episode, where we reveal a little bit of information about the patient, they give a differential diagnosis. We reveal a little bit more, they hone their differential diagnosis. Then, they get to a point where they're treating this patient for what they presume to be the diagnosis,” Caruso said.

https://www.govtech.com/education/higher-ed/stanford-researchers-use-ai-to-simulate-clinical-reasoning

Friday, October 18, 2024

AI Just Reached Human-Level Reasoning – Should We Be Worried - AI Revolution, YouTube

AI has just reached human-level reasoning with OpenAI’s o1 model, marking a major breakthrough in artificial intelligence. This advancement is transforming industries like quantum physics and military operations, showcasing the ability of AI to outperform humans in complex tasks. As AI continues to evolve, it brings both incredible potential and significant risks, reshaping our understanding of technology and its role in society. This video explores the profound advancements in AI that are reshaping industries and pushing the limits of what technology can achieve. As AI models reach human-level reasoning, they bring incredible opportunities and challenges that could redefine the way we live, work, and think.

https://youtu.be/MNBa1RW0k_0?feature=shared

Zoom’s AI avatars to replace people in meetings? - Martin Crowley, AI Tool Report

To create their digital avatar, users will record a video of themselves talking, which Zoom’s AI will translate into a digital clone—complete with a head, arms, and shoulders—that mirrors their appearance and voice. They then write what they want their digital clone to say, and Zoom will generate audio—using their video clip—that syncs with the avatar's lip movements, allowing them to send video updates to teammates. The avatars—which will be available early next year—should facilitate “asynchronous” chat among teams, and Zoom’s end goal is to allow employees to send their “digital twin” to meetings in their place.

Google's Sycamore quantum computer chip can now outperform the fastest supercomputers, new study suggests - Keumars Afifi-Sabet, Live Science

Quantum computers can outpace our fastest classical computers in very specific areas, a groundbreaking experiment suggests. Google Quantum AI researchers have discovered a "stable computationally complex phase" that can be achieved with existing quantum processing units (QPUs), also known as quantum processors. This means that when quantum computers enter this specific "weak noise phase," they can perform computationally complex calculations that outpace the performance of the fastest supercomputers. The research — which was led by Alexis Morvan, a quantum computing researcher at Google — was published Oct. 9 in the journal Nature.

Thursday, October 17, 2024

Generative AI, the American worker, and the future of work - Molly Kinder, Xavier de Souza Briggs, Mark Muro, and Sifan Liu, Brookings

Existing generative AI technology already has the potential to significantly disrupt a wide range of jobs. We find that more than 30% of all workers could see at least 50% of their occupation’s tasks disrupted by generative AI. Unlike previous automation technologies that primarily affected routine, blue collar work, generative AI is likely to disrupt a different array of “cognitive” and “nonroutine” tasks, especially in middle- to higher-paid professions. 

https://www.brookings.edu/articles/generative-ai-the-american-worker-and-the-future-of-work/

Report: A Quarter of Those with Graduate Degrees Say They Regret Going to College - Johanna Alonso, Inside Higher Ed

The report, which focused on college’s value, was based on the results of a survey of 1,000 Americans with a college degree and 1,000 who do not have a degree. Overall, about three-quarters of respondents with a graduate degree say they do not regret attending college, and 59 percent of those with an associate or bachelor’s degree say the same thing. Cost is a different matter, however; across all types of degrees, 59 percent of respondents say their student loan investment was worth the cost. Arts and humanities majors, perhaps surprisingly, were most likely to say their degree was worth the cost, with 68 percent saying so.

Can an AI Chatbot Be Your Friend? - Stefano Puntoni, Knowledge at Wharton

A new study co-authored by Wharton marketing professor Stefano Puntoni finds that when people interact with chatbots programmed to respond with empathy, their feelings of loneliness are significantly abated — at least for a short time. Puntoni said the buzz about what generative artificial intelligence can do for business and productivity has been so big since the release of ChatGPT two years ago that he wanted to explore whether software could also benefit well-being. If chatbots can be programmed to have in-depth conversations on everything from customer service to medical diagnoses, there’s no reason why they can’t keep people company.

Wednesday, October 16, 2024

Commentary: AI detectors don't work, so what's the end game for higher education? - Casper Roe and Mike Perkins, Channel News Asia

AI detectors struggle to keep up with quickly changing AI models, and their reliance on standardised measures of what is considered “human” can unfairly disadvantage people who speak English as a second or third language. The potential of falsely accusing students and damaging their futures raises serious concerns about the use of AI detectors in academic settings. Furthermore, this approach is counterintuitive in a world where we should be reaping the benefits of AI. You can’t extol the advantages of using a calculator and then punish students for not doing math in their heads. Educators shouldn’t rush to punish students based on what AI detectors say. Instead, they should think of better ways to assess students.

The fallout: University of the Arts haunted by unanswered questions months after sudden closure - Ben Unglesbee, Inside Higher Ed

This year has seen the winding down of several historic private colleges, including Wells College in New York and Goddard College in Vermont. Announcements of their closures sparked shock, grief and dismay. Arguably, the most dramatic closure came at the University of the Arts in Philadelphia. The public releases announcing the closure were conspicuously short on details. A statement from UArts President Kerry Walk and board Chair Judson Aaron on May 31 alluded, vaguely, to “a cash position that has steadily weakened” that meant the university could “not cover significant, unanticipated expenses.” They added: “The situation came to light very suddenly. Despite swift action, we were unable to bridge the necessary gaps.”

'Godfather of AI' shares Nobel Physics Prize - Georgina Rannard & Graham Fraser, BBC

The Nobel Prize in Physics has been awarded to two scientists, Geoffrey Hinton and John Hopfield, for their work on machine learning. British-Canadian Professor Hinton is sometimes referred to as the "Godfather of AI" and said he was flabbergasted. He resigned from Google in 2023, and has warned about the dangers of machines that could outsmart humans. The announcement was made by the Royal Swedish Academy of Sciences at a press conference in Stockholm, Sweden.

Tuesday, October 15, 2024

Developing a GenAI policy for research and innovation - Helen Brownlee and Tracy Mouton, Times Higher Ed / Inside Higher Ed

Establishing a framework to guide AI use in research is vital for ensuring institutions are and remain fully compliant. Research integrity policies, procedures and guidelines should provide a framework to support the highest standards of staff and student personal conduct in research. To achieve this, we created a new policy for the use of GenAI tools in research and innovation at the University of East Anglia (UEA) earlier this year. The policy supports and protects researchers who use GenAI and aims to ensure the university is fully compliant in this developing area. The policy represents a truly collaborative effort involving many colleagues, and below we share our experiences of developing it, focusing on eight key areas.

Why we need a balanced, two-tier approach to AI governance - Tianchong Wang and Libing Wang, University World News

Artificial intelligence’s transformative effect on higher education has moved from a distant vision to a rapidly unfolding reality. Its potential to revolutionise teaching, learning, research and administration has sparked widespread optimism and anticipation about what it will mean for the future of the sector. Yet, this rapid integration of AI also presents significant challenges. Higher education institutions are facing complex ethical dilemmas, including concerns about algorithmic bias, data privacy and the potential impact on equity. To address these concerns, comprehensive governance frameworks are essential.

ChatGPT has become the ‘best teammate’ to these Sydney university students – but is there a limit? - Caitlin Cassidy, the Guardian

When ChatGPT first launched, the University of Sydney was on the back foot, the institution admits. Now it says it places digital technology at the forefront of its curriculums. It’s been a similar shift across the university sector, which has broadly pivoted to acknowledge generative AI in academic integrity policies and incorporate the new technology into learning and teaching. At the University of Melbourne, for instance, artificial intelligence is even being used to help mark assessments. This year, the University of Sydney was named AI university of the year at the inaugural Future Campus awards, a body established last year to bring news and analysis on the higher education sector.