Apple’s AI Innovations: Animatable Avatars and On-Device Language Models
Apple is diving deeper into artificial intelligence (AI), and two newly published research papers reveal what the company has been working on. The goal is to advance on-device AI: one paper describes a way to create lifelike, animatable avatars, and the other a method for running large language models directly on an iPhone or iPad.
First up is the “LLM in a flash” research, in which Apple tackles how to run Large Language Models (LLMs) efficiently on devices with limited memory. The key idea is to keep the model’s parameters in flash storage and pull only the pieces needed at each step into RAM, so iPhones and iPads could handle complex AI workloads without exhausting memory. That opens the door to a Siri supercharged with generative AI, helping out with tasks, generating text, and understanding natural language far better.
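To make the idea concrete, here is a minimal sketch of on-demand weight loading. This is not Apple's implementation — the file layout and variable names (`hidden_dim`, `active_rows`) are illustrative — but it shows the general trick: memory-map a weight file that lives in storage and read only the rows you actually need, instead of loading the whole model into RAM.

```python
import os
import tempfile
import numpy as np

# Hypothetical setup: pretend a large weight matrix lives in flash storage
# as a raw binary file. Dimensions are tiny here just for illustration.
rows, hidden_dim = 1024, 8
path = os.path.join(tempfile.mkdtemp(), "weights.bin")
np.arange(rows * hidden_dim, dtype=np.float32).tofile(path)

# Memory-map the file: the OS pages data in from storage only when touched,
# so the full matrix never has to occupy RAM at once.
weights = np.memmap(path, dtype=np.float32, mode="r", shape=(rows, hidden_dim))

# Suppose only a few neurons are active for the current token; copy just
# those rows into RAM rather than reading the entire matrix.
active_rows = [3, 17, 512]
loaded = np.asarray(weights[active_rows])
print(loaded.shape)  # (3, 8)
```

The real paper combines this kind of selective loading with activation sparsity and careful read patterns tuned to how flash memory performs, but the memory-mapped lookup above captures the core "load only what you need" principle.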
Then there’s “HUGS,” short for Human Gaussian Splats. Behind the fancy name is a technique for building fully animatable avatars from short video clips shot on an iPhone. Apple says a detailed digital version of yourself can be created in as little as 30 minutes, and, best of all, you get to make it move however you want.
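A rough way to picture a “Gaussian splat” avatar: the person is represented as a cloud of small 3D Gaussians, each with a position, shape, color, and opacity, and animation moves those Gaussians with the body. The sketch below is purely conceptual — the field names and the rigid-transform `animate` helper are illustrative stand-ins, not HUGS’s actual learned skinning.

```python
from dataclasses import dataclass
import numpy as np

# Conceptual sketch: an avatar as a set of 3D Gaussians ("splats").
# Field names here are illustrative, not taken from the HUGS paper.
@dataclass
class Gaussian:
    position: np.ndarray  # 3D center of the splat
    scale: np.ndarray     # per-axis extent
    rotation: np.ndarray  # quaternion orientation
    color: np.ndarray     # RGB
    opacity: float

def animate(gaussians, joint_transform):
    """Apply a rigid transform (3x3 rotation R, translation t) to each splat --
    a stand-in for the body-driven deformation HUGS learns from video."""
    R, t = joint_transform
    return [Gaussian(R @ g.position + t, g.scale, g.rotation, g.color, g.opacity)
            for g in gaussians]

g = Gaussian(np.zeros(3), np.ones(3), np.array([1.0, 0.0, 0.0, 0.0]),
             np.array([0.5, 0.5, 0.5]), 1.0)
moved = animate([g], (np.eye(3), np.array([0.0, 1.0, 0.0])))
print(moved[0].position)  # [0. 1. 0.]
```

In the actual system, the Gaussians and their motion are optimized from the input video frames; the data structure above is only meant to make the “cloud of movable blobs” intuition tangible.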