Willison points out that, although citing sources helps verify accuracy, building a system that does it well "can be quite complicated," but Citations appears to be a step in the right direction by building RAG capability directly into the model.
Apparently, that capability is not entirely new. Anthropic's Alex Albert wrote on X, "Under the hood, Claude is trained to cite sources. With Citations, we are exposing this ability to developers. To use Citations, users can pass a new parameter, 'citations: {enabled: true}', on any document type they send."
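As a rough illustration of what that looks like in practice, here is a minimal sketch of a Messages API request with citations switched on, assuming the official anthropic Python SDK and the document content-block format described in Anthropic's announcement. The model name, document text, and question are placeholders, and exact field names may differ from the current API documentation.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; any Citations-capable model
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    # The source document to ground the answer in,
                    # with the new citations parameter enabled.
                    "type": "document",
                    "source": {
                        "type": "text",
                        "media_type": "text/plain",
                        "data": "The grass is green. The sky is blue.",
                    },
                    "title": "Sample document",
                    "citations": {"enabled": True},
                },
                {"type": "text", "text": "What color is the grass?"},
            ],
        }
    ],
)

# Text blocks in the reply can carry a citations list pointing back
# to specific passages of the supplied document.
for block in response.content:
    if block.type == "text":
        print(block.text, getattr(block, "citations", None))
```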
Early users report promising results
The company launched Citations for its Claude 3.5 Sonnet and Claude 3.5 Haiku models through the Anthropic API and Google Cloud's Vertex AI platform, but it is apparently already seeing use in the field.
Anthropic says that Thomson Reuters, which uses Claude to power its CoCounsel AI legal reference platform, expects to use Citations in a way that helps "minimize the risk of hallucinations but also strengthen confidence in AI-generated content."
In addition, financial technology company Endex told Anthropic that Citations reduced its source hallucinations from 10 percent to zero and increased references per response by 20 percent, according to CEO Tarun Amatasa.
Despite these claims, trusting any LLM to accurately relay reference information remains a risk until the technology is more thoroughly studied and proven in the field.
Anthropic will charge users its standard token-based pricing, although text cited in responses will not count toward output token costs. Sourcing a 100-page document as a reference would cost approximately $0.30 with Claude 3.5 Sonnet or $0.08 with Claude 3.5 Haiku, according to Anthropic's standard API pricing.
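Those figures are easy to sanity-check. The short sketch below reproduces the arithmetic, assuming roughly 1,000 tokens per page and the input-token prices Anthropic has published for these models ($3 per million tokens for Claude 3.5 Sonnet, $0.80 for Claude 3.5 Haiku); both assumptions are approximations rather than quotes from the announcement.

```python
# Back-of-the-envelope check of the quoted per-document costs.
PAGES = 100
TOKENS_PER_PAGE = 1_000  # rough assumption; actual density varies by document
input_tokens = PAGES * TOKENS_PER_PAGE

usd_per_million_input_tokens = {
    "claude-3-5-sonnet": 3.00,
    "claude-3-5-haiku": 0.80,
}

for model, price in usd_per_million_input_tokens.items():
    cost = input_tokens / 1_000_000 * price
    print(f"{model}: ~${cost:.2f}")  # ~$0.30 and ~$0.08, matching the article's figures
```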