
Amazon launches new tool to tackle AI hallucinations

CGTN

Amazon Web Services (AWS), Amazon's cloud computing division, on Tuesday launched a new tool to combat AI hallucinations, the scenarios in which an AI model behaves unreliably.

The service, Automated Reasoning checks, validates a model's responses by cross-referencing them against customer-supplied information for accuracy. In a press release, AWS claimed the tool is the "first" and "only" safeguard of its kind against hallucinations.

Available through AWS' Bedrock model hosting service, the tool attempts to figure out how a model arrived at an answer and discern whether the answer is correct.

Customers upload information to establish a ground truth of sorts, and the tool creates rules that can then be refined and applied to a model, AWS said.

As a model generates responses, the tool verifies them and, in the event of a probable hallucination, draws on the ground truth for the correct answer. It presents this answer alongside the likely mistruth, so customers can see how far off-base the model might have been.
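AWS has not published the interface behind Automated Reasoning checks, but the flow described above can be sketched in a few lines. In the hypothetical Python below, every name (Rule, derive_rules, validate) is invented for illustration, not taken from any AWS API: rules are derived from customer-supplied ground truth, each response is checked against them, and a suspected mistruth is surfaced next to the ground-truth value.

```python
# Illustrative sketch only: all names here are hypothetical, mirroring the
# flow the article describes, not AWS's actual Automated Reasoning checks API.
from dataclasses import dataclass

@dataclass
class Rule:
    claim: str  # a fact asserted by the customer's documents
    value: str  # the value a model's answer must agree with

def derive_rules(ground_truth: dict[str, str]) -> list[Rule]:
    """Turn customer-supplied facts into checkable rules."""
    return [Rule(claim=k, value=v) for k, v in ground_truth.items()]

def validate(response: dict[str, str], rules: list[Rule]) -> list[dict]:
    """Flag answers that contradict a rule, pairing each with the ground truth."""
    findings = []
    for rule in rules:
        answer = response.get(rule.claim)
        if answer is not None and answer != rule.value:
            findings.append({
                "claim": rule.claim,
                "model_said": answer,        # the likely mistruth
                "ground_truth": rule.value,  # shown alongside it
            })
    return findings

# Example: the model misstates a refund window defined in company policy.
rules = derive_rules({"refund_window_days": "30"})
print(validate({"refund_window_days": "90"}, rules))
# -> [{'claim': 'refund_window_days', 'model_said': '90', 'ground_truth': '30'}]
```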

AWS said PwC is already using Automated Reasoning checks to design AI assistants for its clients.

"With the launch of these new capabilities, we are innovating on behalf of customers to solve some of the top challenges that the entire industry is facing when moving generative AI applications to production," Swami Sivasubramanian, VP of AI and data at AWS, said in a statement.

AWS claims that its tool uses "logically accurate" and "verifiable reasoning" to arrive at its conclusions. But the company volunteered no data showing that the tool is reliable, according to a report by TechCrunch.

AI models hallucinate because they are statistical systems: they identify patterns in a series of data and predict which data comes next based on previously seen examples. They do not provide answers, but predictions of how questions should be answered within a margin of error, the report said.
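The toy Python below, with probabilities invented purely for the example, illustrates that point: a model samples the next token from a learned distribution, so a small fraction of outputs will be plausible but wrong.

```python
# Toy illustration: output is a prediction drawn from learned statistics,
# not a retrieved fact, so occasional "hallucinations" are built in.
import random

# Invented statistics: after "the capital of France is", most training
# examples continue with "Paris", but not all of the probability mass does.
next_token_probs = {"Paris": 0.92, "Lyon": 0.05, "beautiful": 0.03}

def sample_next_token(probs: dict[str, float]) -> str:
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Most samples are correct, but a small fraction will not be.
samples = [sample_next_token(next_token_probs) for _ in range(1000)]
print({t: samples.count(t) for t in next_token_probs})
```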

Microsoft rolled out its Correction feature this summer, which flags AI-generated text that might be factually wrong. Google also offered a tool in Vertex AI, its AI development platform, to let customers "ground" models by using data from third-party providers, their own datasets, or Google Search.
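As a rough sketch of what such grounding means in practice, the hypothetical Python below retrieves the customer document most relevant to a question and attaches it to the prompt. The keyword-overlap retriever is an assumption made for illustration, not any vendor's actual API.

```python
# Minimal grounding sketch: anchor answers to supplied documents rather than
# the model's training statistics alone. All names here are hypothetical.
def retrieve(question: str, documents: list[str]) -> str:
    """Pick the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def grounded_prompt(question: str, documents: list[str]) -> str:
    context = retrieve(question, documents)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = ["Our refund window is 30 days from delivery.",
        "Support is available weekdays from 9 a.m. to 5 p.m."]
print(grounded_prompt("How long is the refund window?", docs))
```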


Source(s): Xinhua News Agency