Tuesday, February 27, 2024

Meta’s new AI model learns by watching videos - Mark Sullivan, Fast Company

LLMs are typically trained on vast quantities of text in which some of the words are masked, forcing the model to predict the words that best fill in the blanks. In doing so, they pick up a rudimentary sense of the world. Yann LeCun, who leads Meta’s FAIR (Fundamental AI Research) group, has proposed that if AI models could use the same masking technique, but on video footage, they could learn more quickly. “Our goal is to build advanced machine intelligence that can learn more like humans do,” LeCun said, “forming internal models of the world around them to learn, adapt, and forge plans efficiently in the service of completing complex tasks.”
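
For a sense of what the fill-in-the-blank objective looks like in practice, here is a minimal sketch using Hugging Face's fill-mask pipeline with an off-the-shelf masked language model. The model name and example sentence are illustrative stand-ins, not Meta's actual training setup or the video-based approach the article describes.

    # A toy demonstration of masked-word prediction: the model must
    # infer the hidden token from its surrounding context.
    # "bert-base-uncased" is just a convenient public model, chosen
    # here for illustration.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    # Print the model's top guesses for the masked word, with scores.
    for prediction in fill_mask("The cat chased the [MASK] across the yard."):
        print(prediction["token_str"], round(prediction["score"], 3))

LeCun's proposal, roughly, is to apply this same predict-the-hidden-part objective to patches of video rather than words of text.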