Today, on the 10th anniversary of Connect, Meta CEO & Founder Mark Zuckerberg shared how the rise of AI and metaverse technologies is reshaping the way we experience our physical and digital worlds. The physical world around us is amazing, and it's the combination of our physical and digital worlds that defines our reality. Yet too often, we rely on screens to tap into virtual spaces and content—and that can pull us away from the moment and the people we're physically with.
Imagine replacing your TV with a fully customizable home theater you can take with you anywhere. Or sitting around a table with friends, some of whom are in the same physical room while others are avatars of those who live miles away—and yet still feeling fully present in the same space and experience. Or a meeting where some of the avatars are embodied AIs that can help you accomplish a variety of tasks. All of that could be possible when our physical and virtual worlds come together seamlessly.
Many of the foundational technologies needed to make that vision a reality are either already here or on the way. Mixed reality lets us bring digital objects into the physical world or create fully immersive experiences to explore. Advances in AI let us create unique characters that we can interact with in different ways. And future AR glasses will eventually bring these technologies together in a stylish form factor.
Today at Connect, Zuckerberg shared our progress on all three fronts.
Mixed Reality Goes Mainstream with Meta Quest 3
Meta Quest 3 is the world’s first mainstream headset built for mixed reality—and our most powerful headset yet. With double the graphics processing power of Quest 2,* Quest 3 is also the world’s first device to feature the new Snapdragon XR2 Gen 2 platform we helped develop in collaboration with Qualcomm Technologies. And it’s completely standalone: no PC, no console, no battery packs—nothing to break the feeling of presence. It understands your physical space, so you can play with the world around you. With mixed reality, the limits of your physical space can expand, and you can be part of a much larger world—like opening a portal from your living room to the Upside Down in Stranger Things VR.
With Quest 3, you can immerse yourself in an expansive content library with games and experiences to suit every taste and mood. More than 100 new and upgraded titles are coming to Quest 3 this year, and many of them will incorporate mixed reality. Because Quest 3 is backwards-compatible, you get access to a vast library of 500+ VR and MR experiences on day one. And with Xbox Cloud Gaming coming to Quest in December, you’ll be able to play Halo Infinite, Minecraft Legends, Forza Horizon 5, and hundreds of other high-quality Xbox games—all on a massive 2D screen you can take with you anywhere.
Advances in Artificial Intelligence
While it’s been an amazing year for AI, most people today still haven’t experienced this new technology firsthand—and we have an opportunity to change that by building state-of-the-art AI into apps that billions of people already use.
We’ve talked about the Llama ecosystem of large language models, and today we unveiled our image generation model. Emu (short for Expressive Media Universe) uses your text prompts to generate high-quality, photorealistic images in just seconds. And thanks to Emu and technology from Llama 2, you can create your own custom AI stickers in chat to liven up conversations on the fly.
We also introduced restyle and backdrop—two new features coming soon to Instagram that use the technology from Emu to let you transform your photos or even co-create AI-generated images with friends. Restyle lets you reimagine your images by applying the visual styles you describe (you might type out “watercolor” or “collage from magazines and newspapers, torn edges,” for example), while backdrop builds on our Segment Anything Model so you can change your image’s scene or background. Prompts like “put me in front of a sublime aurora borealis” or “surrounded by puppies” will keep your subject in the foreground while creating the background you’ve described. Images created with restyle and backdrop will indicate the use of AI to help reduce the odds of people mistaking them for human-generated content. We’re also experimenting with forms of visible and invisible markers to help people distinguish AI-generated content.
Unlike most others in the industry, we don’t believe there will be one single super-intelligent AI that everyone uses. Rather, we think you’ll want different AIs for different things, like finding information, communicating, being entertained, playing games, helping you get work done, and more. You might even want to create your own AI that’s aligned with your goals, whether you’re a small business, a creator, or just about anyone.
That’s why we’re building AI studio—a new platform for creating AIs that can help you get things done or just have fun. People will be able to interact with these AIs across the whole Meta universe of products. They’ll have profiles on Instagram and Facebook, and you’ll be able to chat with them in WhatsApp, Messenger, and Instagram. Eventually, they’ll be embodied as avatars in the metaverse too.
We’ve created some AIs of our own using AI studio that we’ll start rolling out in the US in beta today.
Meta AI is a new assistant you can interact with like a person. It uses a custom model based on Llama 2 technology and has access to real-time information through a partnership with Bing search. And Emu is built into Meta AI, so you can generate high-quality photorealistic images for free in seconds.
We’ve also been creating AIs that have more personality, opinions, and interests, and are a bit more fun to interact with. There’s Victor, a motivational coach who encourages you to hit your goals. The Dungeon Master can take you on an old-school text-based adventure. And our sous chef Max can take the random assortment of ingredients in your pantry and come up with a delicious recipe on the fly.
Those are just a few of the AIs we’ve trained so far, and there are several more coming over the next few weeks across a range of interests, from gaming and philosophy to sports, fashion, and beyond.
We want to give people the chance to build AIs of their own, so AI studio will ultimately let developers build third-party AIs for our messaging services. We’re building a sandbox that will let people who don’t code create their own AIs. We’re working on a way for creators to build AIs that represent them and help them engage and grow their communities. And we’re making it so businesses can create AIs that interact with customers and help with commerce and support.
Generative AI will bring with it new challenges, so we’re putting in the time and effort to make sure we get this right. That includes training and fine-tuning models to fit our safety and responsibility guidelines, red-teaming with external experts and internal teams to help ensure our models are safer and more inclusive, programming in guardrails around inappropriate conversations, and sharing system cards publicly so people better understand how these models work.
This next generation of artificial intelligence will enable a wide range of experiences and interactions, which will transform how people, businesses, and creators use all of our products. And we’ll continue innovating to ensure everyone has a chance to participate in the upside.
Introducing the Ray-Ban | Meta Smart Glasses Collection
Built in partnership with EssilorLuxottica, our next-generation smart glasses are designed so you can stay connected and capture the moment without having to stop and take out your phone—and easily share your experiences with friends, family, or the world.
We’ve upgraded them from the first generation in basically every way, and for the first time, you’ll be able to livestream directly from your smart glasses to your friends and followers on Facebook and Instagram.
These are also the first smart glasses to ship with Meta AI built in.*** Starting in the US in beta, you’ll get our state-of-the-art AI hands-free, wherever you are, whatever you’re doing, in real time. And next year we’ll roll out a free update so your smart glasses will be able to understand what you’re looking at and help you out. If you want to know what building you’re standing in front of or get a translation of a sign on the fly, your Ray-Ban Meta smart glasses will have the answer.
Smart glasses will be an important platform in the future not only because they’re a natural way to see digital holograms in the physical world, but also because soon you’ll be able to let your AI see what you see and hear what you hear—which will make your smart glasses more useful over time.