• The tool, aimed at companies, journalists and financial analysts, uses semantic analysis and search engines to take much of the labour out of verifying factual claims.
• Trained on high-quality data and self-hostable, Factiverse has been designed to avoid generative bias and ensure a high level of privacy. It also offers an API for straightforward integration with content management systems (CMS).
Why is there a need for Factiverse?
Fact-checking news is a laborious process, especially when it involves multilingual content and quantitative information. Large language models (LLMs), which have the potential to expedite much of this work, remain limited in terms of precision, notably when asked to verify the accuracy of public statements. Their use also raises a number of privacy concerns. Factiverse is a patented fact-checking tool that identifies phrases or statements made by public figures in articles and searches for credible sources using a variety of techniques. The retrieved evidence is then aggregated and summarised to evaluate each claim. Unlike general-purpose LLMs, our models are specifically designed to optimise fact-checking performance.
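The aggregation step described above can be sketched as follows. The data structure, function names, and the simple credibility-weighted voting scheme are illustrative assumptions, not Factiverse's actual method:

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str          # e.g. "theguardian.com"
    stance: str          # "supports" or "refutes" the claim
    credibility: float   # source credibility weight in [0, 1]

def aggregate_verdict(evidence: list[Evidence]) -> str:
    """Weigh each piece of evidence by its source credibility and
    return an overall verdict for the claim (illustrative scheme)."""
    support = sum(e.credibility for e in evidence if e.stance == "supports")
    refute = sum(e.credibility for e in evidence if e.stance == "refutes")
    if not evidence or support == refute:
        return "uncertain"
    return "supported" if support > refute else "refuted"

evidence = [
    Evidence("theguardian.com", "supports", 0.9),
    Evidence("example-blog.net", "refutes", 0.3),
]
print(aggregate_verdict(evidence))  # → supported
```

The point of weighting rather than simple counting is that one piece of evidence from a highly credible outlet can outweigh several low-credibility pages.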
Factiverse does not focus on words or entities, but on complete sentences that we refer to as claims
How does it check facts from a technical point of view?
When fine-tuned, XLM-RoBERTa models provide effective solutions that are also self-hostable, which allays many privacy concerns. Our technology is based on deep learning and natural language processing rather than generative AI, which makes it less vulnerable to the errors and biases that can result in hallucinations. It also benefits from high-quality training data, which is not always the case with OpenAI’s products. As to how it works in practice, Factiverse does not focus on words or entities, but on complete sentences that we refer to as claims. Today it outperforms Mistral and GPT in its ability to identify check-worthy claims in 140 languages. Once these have been found, they are investigated using multiple search engines, then compared with and linked to sources that are considered credible. For example, claims on climate issues are often compared with reporting from the Guardian, which is very strong in this area.
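A minimal sketch of the claim-detection step described here: split text into sentences and keep those a classifier scores as check-worthy. The heuristic scorer below is a stand-in for the fine-tuned XLM-RoBERTa classifier; the function names and threshold are assumptions for illustration only.

```python
import re

def score_checkworthiness(sentence: str) -> float:
    """Stub standing in for a fine-tuned XLM-RoBERTa classifier that
    would return the probability a sentence is a check-worthy claim.
    Crude heuristic: sentences containing digits or quantitative-change
    words are more likely to be verifiable factual claims."""
    cues = any(ch.isdigit() for ch in sentence) or any(
        w in sentence.lower() for w in ("more", "less", "increase", "decrease")
    )
    return 0.8 if cues else 0.2

def extract_claims(text: str, threshold: float = 0.5) -> list[str]:
    """Split text into sentences and keep those scored as check-worthy."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    return [s for s in sentences if score_checkworthiness(s) >= threshold]

speech = ("Unemployment fell by 12% last year. "
          "I am delighted to be here tonight. "
          "Crime has increased in every major city.")
print(extract_claims(speech))
# → ['Unemployment fell by 12% last year.', 'Crime has increased in every major city.']
```

Working on whole sentences rather than keywords is what lets the downstream search step retrieve evidence for a complete, verifiable statement.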
How did the start-up come about, and can your solution be integrated into professional tools?
The launch of the start-up was supported by a technology transfer from the University of Stavanger, where Vinay Setty, an associate professor in the department of engineering and computer science, developed our patented algorithm. From 2021 to 2023, we conducted experiments to test the solution in the worlds of finance and the media. The technology was advanced, but it still had to be adapted to the realities of the market to ensure it met the real needs of users. We launched the fact-checking tool in the summer of 2023, and since then we have registered 5,000 users worldwide. The commercial development of our API also plays a key role in our business model, because many users who don’t necessarily want to learn a new platform prefer a solution that is integrated with their existing tools. It allows them to obtain reliable results while writing or doing documentary research, with a Factiverse service accessible directly from within their content management system (CMS).
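To make the CMS-integration idea concrete, here is a minimal sketch of how a plugin might call a fact-checking API over HTTP. The endpoint URL, payload fields, and response shape are hypothetical assumptions for illustration; Factiverse's actual API will differ.

```python
import json
from urllib import request

# Hypothetical endpoint, NOT Factiverse's real API
API_URL = "https://api.example.com/v1/fact-check"

def check_claim(claim: str, api_key: str, send=None) -> dict:
    """POST a claim to a (hypothetical) fact-checking endpoint and return
    the parsed JSON verdict. `send` can be injected to stub out the
    network call in tests."""
    payload = json.dumps({"claim": claim}).encode()
    req = request.Request(
        API_URL,
        data=payload,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    if send is None:                      # real HTTP call
        with request.urlopen(req) as resp:
            return json.load(resp)
    return send(req)                      # injected stub path
```

A CMS plugin would call `check_claim()` as an editor writes, surfacing the verdict and supporting sources inline instead of requiring a separate tool.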
Who are the final users?
Media professionals can use Factiverse to fact-check articles as well as audio and video. Journalists often don’t have time to investigate claims in videos, such as those made in election debates. Combined with automatic transcription, our solution can effectively verify what political figures are saying. In France, for example, we’ve observed that many political figures are now using TikTok to exert more influence, one reason why it’s important to be able to fact-check video on a larger scale. However, we aren’t only targeting users in the media: analysts working in the world of finance also need to verify information before making decisions.