
Snap previews its real-time image model that can generate AR experiences


During the Augmented World Expo, Snap showcased an early version of its real-time, on-device image diffusion model capable of creating immersive AR experiences. The company also introduced generative AI tools for AR creators.

Snap’s co-founder and CTO, Bobby Murphy, revealed that the model is compact enough to operate on a smartphone and quick enough to re-render frames instantly, guided by a text prompt.

Murphy emphasized the need for faster generative AI image diffusion models to make a significant impact on augmented reality. Snap’s teams have been focused on accelerating machine learning models to meet this demand.

Snapchat users can expect to see Lenses incorporating this generative model in the near future, with plans to make it available to creators by the end of the year.

Image Credits: Snap

Murphy said he is excited about the new direction that real-time, on-device generative ML models are opening up for augmented reality, one that prompts a rethinking of how AR experiences are rendered and created.

Lens Studio 5.0 is being launched for developers today, featuring new generative AI tools to streamline the creation of AR effects, saving significant time in the process.

AR creators can now easily generate selfie Lenses with realistic ML face effects, apply custom stylization effects in real time, and create 3D assets to incorporate into their Lenses within minutes.

AR creators can also use Face Mesh technology to quickly generate characters, face masks, textures, and materials from just a text or image prompt.

Lens Studio’s latest version also includes an AI assistant to provide answers to any questions AR creators may have.

