Meta Releases Llama 3.2—and Gives Its AI a Voice

Mark Zuckerberg announced today that Meta, his social-media-turned-metaverse-turned-artificial-intelligence conglomerate, will upgrade its AI assistants to give them a range of celebrity voices, including those of Dame Judi Dench and John Cena. The more important upgrade for Meta’s long-term ambitions, though, is the new ability of its models to see users’ photos and other visual information.

Meta today also announced Llama 3.2, the first version of its free AI models to have visual abilities, broadening their usefulness and relevance for robotics, virtual reality, and so-called AI agents. Some versions of Llama 3.2 are also the first to be optimized to run on mobile devices. This could help developers create AI-powered apps that run on a smartphone, tapping into its camera or watching the screen to operate apps on a user’s behalf.

“This is our first open source, multimodal model, and it’s going to enable a lot of interesting applications that require visual understanding,” Zuckerberg said on stage at Connect, a Meta event held in California today.

Given Meta’s enormous reach with Facebook, Instagram, WhatsApp, and Messenger, the assistant upgrade could give many people their first taste of a new generation of more vocal and visually capable AI helpers. Meta said today that more than 180 million people already use Meta AI, as the company’s AI assistant is called, every week.

Meta has lately given its AI a more prominent billing in its apps—for example, making it part of the search bar in Instagram and Messenger. The new celebrity voice options available to users will also include Awkwafina, Keegan-Michael Key, and Kristen Bell.

Meta previously gave celebrity personas to text-based assistants, but these characters failed to gain much traction. In July the company launched a tool called AI Studio that lets users create chatbots with any persona they choose. Meta says the new voices will be made available to users in the US, Canada, Australia, and New Zealand over the next month. The Meta AI image capabilities will be rolled out in the US, but the company did not say when the features might appear in other markets.

The new version of Meta AI will also be able to provide feedback on and information about users’ photos; for example, if you’re unsure what bird you’ve snapped a picture of, it can tell you the species. And it will be able to help edit images by, for instance, adding new backgrounds or details on demand. Google released a similar tool for its Pixel smartphones and for Google Photos in April.

Powering Meta AI’s new capabilities is an upgraded version of Llama, Meta’s premier large language model. The free model announced today may also have a broad impact, given how widely the Llama family has been adopted by developers and startups already.

In contrast to OpenAI’s models, Llama can be downloaded and run locally without charge—although there are some restrictions on large-scale commercial use. Llama can also more easily be fine-tuned, or modified with additional training, for specific tasks.
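For readers curious what running Llama locally looks like in practice, here is a minimal sketch using Hugging Face’s `transformers` library. The model ID, function name, and prompt are illustrative assumptions, not details from the announcement, and the gated Llama weights require a Hugging Face account with approved access:

```python
def build_llama_generator(model_id: str = "meta-llama/Llama-3.2-1B-Instruct"):
    """Return a local text-generation pipeline for a Llama 3.2 model.

    Requires `pip install transformers torch` and approved access to the
    gated Llama weights on Hugging Face. The default model ID is an
    assumption for illustration.
    """
    # Import lazily so the sketch can be defined without the dependency
    # installed; the real work happens only when the function is called.
    from transformers import pipeline

    return pipeline("text-generation", model=model_id)


# Usage (downloads the model weights on first run):
# generator = build_llama_generator()
# print(generator("What bird has a bright red crest?", max_new_tokens=50))
```

Because the weights live on your own hardware, the same pipeline object can be swapped for a fine-tuned checkpoint simply by passing a different `model_id`—the flexibility the article contrasts with OpenAI’s hosted models.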
