In its more than 130-year history, the Financial Times has upheld the highest standards of journalism. As editor of this paper, nothing matters more to me than readers trusting the quality of the journalism we produce. Quality means accuracy above all else. It also means fairness and transparency.
That’s why today I’m sharing my current thinking about using generative AI in the newsroom.
Generative AI is the most important new technology since the advent of the internet. It is developing by leaps and bounds, and its applications and implications are still emerging. Generative AI models learn from massive amounts of published data, including books, publications, Wikipedia, and social media sites, in order to predict the next most likely word in a sentence.
This technology is an increasingly important area of coverage for us, and I am determined to make the FT an invaluable source of information and analysis on AI in the years to come. But it also has clear and potentially far-reaching implications for how journalists and editors go about our day-to-day work, and it can help us in our analysis and discovery of stories. It has the potential to increase productivity and free up reporters' and editors' time to focus on generating and reporting original content.
However, while their outputs can seem plausible and authoritative, the AI models on the market today are ultimately prediction engines, and they learn from the past. They can fabricate facts – this is referred to as 'hallucinating' – and invent references and connections. Prompted in certain ways, AI models can produce entirely false images and articles. They also reproduce existing societal views, including historical biases.
I am certain that our mission to produce journalism of the highest standards is all the more important in this age of rapid technological innovation. At a time when disinformation can be created and spread quickly and trust in the media in general is waning, we at the Financial Times have a greater responsibility to be transparent, report the facts and pursue the truth. This is why FT journalism in the new age of artificial intelligence will continue to be reported and written by humans who are the best in their fields, dedicated to reporting and analyzing the world as it is, accurately and fairly.
The Financial Times is also a leader in digital journalism, and our colleagues will embrace AI to deliver services to readers and clients and to maintain our record of effective innovation. Our newsroom must remain a hub for innovation. It is essential for the FT to have a team in the newsroom that can responsibly experiment with AI tools to assist journalists with tasks such as mining data, analyzing text and images, and translation. We will not publish realistic AI-generated images, but we will explore the use of AI-augmented visuals (graphs, charts, photos), and when we do we will make that clear to the reader. This will not affect artists' illustrations for the FT. The team will also examine, always under human supervision, generative AI's summarization capabilities.
We will be transparent, within the Financial Times and with our readers. All newsroom experiments will be recorded in an internal register, including, to the extent possible, which third-party providers a given tool may use. Our journalists will be trained to use generative AI for story discovery through a series of masterclasses.
Every new technology opens up new and exciting horizons that must be explored responsibly. But as recent history has shown, the excitement must be accompanied by caution about the dangers of misinformation and the corruption of the truth. The FT will remain committed to its core mission and will keep readers informed as generative AI itself, and our thinking about it, evolves.