Apple trained an LLM to efficiently understand long-form video



Apple researchers have developed a version of the SlowFast-LLaVA model that beats larger models at long-form video understanding.

Apple researchers have developed an adapted version of the SlowFast-LLaVA model that beats larger models at long-form video analysis and understanding. Here's what that means.

The nerdy bits

Very basically, when an LLM is trained to also understand v... [4545 chars]
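To give a rough idea of the approach the name refers to: SlowFast-style video LLMs feed the language model two streams of visual tokens, a "slow" pathway that keeps detailed tokens from a handful of frames and a "fast" pathway that covers many frames with heavily pooled tokens, so a long video can fit in the model's context window. The sketch below is illustrative only, not Apple's code; the frame counts, pooling sizes, and the frame_encoder function are assumptions for the example.

```python
# Minimal sketch of a two-pathway (slow/fast) visual token builder for a video LLM.
# frame_encoder is a hypothetical image encoder returning an (N, D) token grid per frame.
import numpy as np

def sample_frames(video: np.ndarray, num: int) -> np.ndarray:
    """Uniformly sample `num` frames from a (T, H, W, C) video array."""
    idx = np.linspace(0, len(video) - 1, num).astype(int)
    return video[idx]

def pool_tokens(frame_tokens: np.ndarray, keep: int) -> np.ndarray:
    """Average-pool a frame's (N, D) token grid down to `keep` tokens."""
    groups = np.array_split(frame_tokens, keep)
    return np.stack([g.mean(axis=0) for g in groups])

def build_video_tokens(frame_encoder, video: np.ndarray) -> np.ndarray:
    # Slow pathway: few frames, many tokens per frame (preserves spatial detail).
    slow = np.concatenate([frame_encoder(f) for f in sample_frames(video, num=8)])

    # Fast pathway: many frames, few tokens per frame (covers temporal context cheaply).
    fast = np.concatenate(
        [pool_tokens(frame_encoder(f), keep=16) for f in sample_frames(video, num=64)]
    )

    # The concatenated visual tokens are passed to the LLM alongside the text prompt.
    return np.concatenate([slow, fast])
```

The point of the split is the token budget: a few richly tokenized frames capture what things look like, while many coarsely tokenized frames capture how the scene changes over time, which is what makes long-form video tractable for an LLM's limited context.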


Source: 9to5Mac

