Microsoft claims its AI safety tool not only finds errors but also fixes them

Microsoft is launching a new feature called “correction” that builds on the company’s efforts to combat AI inaccuracies. Customers using Microsoft Azure to power their AI systems can now use the capability to automatically detect and rewrite incorrect content in AI outputs.

The correction feature is available in preview as part of Azure AI Studio — a suite of safety tools designed to detect vulnerabilities, find “hallucinations,” and block malicious prompts. Once enabled, the correction system scans AI output and identifies inaccuracies by comparing it against a customer’s source material.

From there, it will highlight the mistake, provide information about why it’s incorrect, and rewrite the content in question — all “before the user is able to see” the inaccuracy. While this seems like a helpful way to address the nonsense AI models often produce, it might not be a fully reliable solution.
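Microsoft hasn’t published full implementation details alongside the announcement, but groundedness detection in Azure AI Content Safety is exposed as a REST API, so a call with correction enabled would look roughly like the sketch below. This is illustrative only: the endpoint path, API version, request fields (such as `correction` and `groundingSources`), and response shape are assumptions based on the preview service and may differ from the actual API.

```python
# Illustrative sketch of calling a groundedness detection/correction endpoint.
# Endpoint path, API version, and field names are assumptions, not confirmed API details.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder resource
API_KEY = "<your-content-safety-key>"                             # placeholder key

payload = {
    "domain": "Generic",                          # assumed: domain of the source material
    "task": "Summarization",                      # assumed: kind of generation being checked
    "text": "The contract was signed in 2021.",   # AI output to verify
    "groundingSources": [
        "The contract was signed on March 3, 2022."  # customer's source material
    ],
    "correction": True,                           # assumed flag: ask the service to rewrite ungrounded text
}

resp = requests.post(
    f"{ENDPOINT}/contentsafety/text:detectGroundedness",
    params={"api-version": "2024-09-15-preview"},  # assumed preview version
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
result = resp.json()

# Assumed response shape: whether ungrounded text was found, where it is,
# and a rewritten version aligned with the grounding source.
print(result.get("ungroundedDetected"))
print(result.get("ungroundedDetails"))
print(result.get("correction"))
```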

Microsoft isn’t alone in this approach: Vertex AI, Google’s cloud platform for companies developing AI systems, offers a feature that “grounds” AI models by checking outputs against Google Search, a company’s own data, and (soon) third-party datasets.

In a statement to TechCrunch, a Microsoft spokesperson said the “correction” system uses “small language models and large language models to align outputs with grounding documents,” which means it isn’t immune to making errors, either. “It is important to note that groundedness detection does not solve for ‘accuracy,’ but helps to align generative AI outputs with grounding documents,” Microsoft told TechCrunch.
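Microsoft’s description suggests a two-stage pattern: a smaller model flags claims that aren’t supported by the grounding documents, and a larger model rewrites the flagged text. The sketch below is a generic illustration of that pattern, not Microsoft’s implementation; both helper functions are hypothetical stand-ins (a naive substring check in place of a small classifier model, and simple sentence removal in place of a large rewriter model).

```python
# Generic illustration of the "detect, then rewrite against sources" pattern.
# Not Microsoft's implementation: both helpers are hypothetical placeholders.
from typing import List


def flag_ungrounded_spans(output: str, sources: List[str]) -> List[str]:
    """Hypothetical small-model step: return sentences in `output` that are
    not supported by any grounding source (naive substring check as a placeholder)."""
    flagged = []
    for sentence in output.split(". "):
        if sentence and not any(sentence.lower() in s.lower() for s in sources):
            flagged.append(sentence)
    return flagged


def rewrite_against_sources(output: str, flagged: List[str]) -> str:
    """Hypothetical large-model step: rewrite flagged spans to agree with the
    sources. Here unsupported sentences are simply dropped to keep the sketch
    self-contained."""
    kept = [s for s in output.split(". ") if s not in flagged]
    return ". ".join(kept)


if __name__ == "__main__":
    sources = ["The device ships with 8 GB of RAM and a 120 Hz display."]
    model_output = (
        "The device ships with 8 GB of RAM and a 120 Hz display. "
        "It also includes a free stylus"
    )
    flagged = flag_ungrounded_spans(model_output, sources)
    corrected = rewrite_against_sources(model_output, flagged)
    print("Flagged:", flagged)
    print("Corrected:", corrected)
```

As Microsoft notes, aligning output with grounding documents is not the same as guaranteeing accuracy: the rewrite is only as good as the source material and the models doing the checking.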
