Source: IANS
Meta introduces generative AI model 'CM3leon' for text, images

Meta (formerly Facebook) has introduced a generative artificial intelligence (AI) model, "CM3leon" (pronounced like chameleon), that performs both text-to-image and image-to-text generation.

"CM3leon is the first multimodal model trained with a recipe adapted from text-only language models, including a large-scale retrieval-augmented pre-training stage and a second multitask supervised fine-tuning (SFT) stage," Meta said in a blogpost on Friday. 

The company said that CM3leon's capabilities allow image generation tools to produce more coherent imagery that better follows the input prompts.

According to Meta, CM3leon requires only a fifth of the computing power of previous transformer-based methods, as well as a smaller training dataset.

On the most widely used image generation benchmark (zero-shot MS-COCO), CM3leon achieved an FID (Fréchet Inception Distance) score of 4.88, establishing a new state of the art in text-to-image generation and outperforming Google's text-to-image model, Parti.
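For readers unfamiliar with the metric, FID compares the statistics of generated and real images in the feature space of an Inception network; lower scores mean the generated distribution is closer to the real one. Below is a minimal sketch of the standard FID computation from pre-extracted feature arrays; it is background for the metric, not Meta's evaluation code, and assumes Inception features have already been computed elsewhere.

```python
import numpy as np
from scipy import linalg

def fid(real_feats: np.ndarray, gen_feats: np.ndarray) -> float:
    """Fréchet Inception Distance between two (N, D) arrays of
    Inception features (e.g. pool3 activations). Lower is better;
    0 means the two feature distributions match exactly."""
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    sigma_r = np.cov(real_feats, rowvar=False)
    sigma_g = np.cov(gen_feats, rowvar=False)

    diff = mu_r - mu_g  # distance between the distribution means

    # Matrix square root of the covariance product; numerical error can
    # leave a tiny imaginary component, which is discarded.
    covmean = linalg.sqrtm(sigma_r @ sigma_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))
```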

Moreover, the tech giant said that CM3leon excels at a wide range of vision-language tasks, such as visual question answering and long-form captioning. 

CM3leon's zero-shot performance compares favourably to that of larger models trained on larger datasets, despite it being trained on only three billion text tokens.

"With the goal of creating high-quality generative models, we believe CM3leon’s strong performance across a variety of tasks is a step toward higher-fidelity image generation and understanding," Meta said. 

"Models like CM3leon could ultimately help boost creativity and better applications in the metaverse. We look forward to exploring the boundaries of multimodal language models and releasing more models in the future," it added.