A recent paper by three scholars from the University of Chicago Booth School of Business found that large language models can analyse companies' financial statements and forecast their earnings more accurately than human analysts' estimates. The findings were first reported by the Financial Times.
The research paper, titled "Financial Statement Analysis with Large Language Models", examined whether these models can make informed financial judgments or are merely a support tool.
To that end, the study tested the AI's ability to perform financial statement analysis in a manner similar to professional human analysts.
The scholars (Alex Kim, Maximilian Muhn and Valeri Nikolaev) provided ChatGPT with structured and anonymized financial statements of over 15,000 companies, along with a chain-of-thought prompt designed to mimic how human analysts process financial information. The prompts asked the model to construct economic narratives from the financial statements and to predict each company's earnings over the following year.

The paper found that the model, after prompting, predicted earnings correctly about 60% of the time, compared with about 57% for human analysts. The AI's predictions were then used as the basis for model portfolios, which generated sizeable returns in backtesting.

"Our results suggest that GPT's analysis yields useful insights about the company, which enable the model to outperform professional human analysts in predicting the direction of future earnings," the authors of the paper said.
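The paper's exact prompts and pipeline are not reproduced here; as a rough illustration only, the Python sketch below shows how an anonymized set of statements might be passed to a model with a chain-of-thought-style instruction to produce a narrative and a directional earnings call. The model name, prompt wording, sample figures and variable names are illustrative assumptions, not the study's actual setup.

```python
# Illustrative sketch only: not the authors' actual prompt or pipeline.
# Assumes the OpenAI Python client (pip install openai) and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# Hypothetical anonymized statements: labels kept, company name and fiscal years removed,
# broadly in the spirit of the standardised inputs described in the paper.
anonymized_statements = """
Balance sheet (t-1, t): total assets 1200, 1350; total liabilities 700, 760; equity 500, 590
Income statement (t-1, t): revenue 900, 1010; operating income 120, 150; net income 80, 105
"""

# Chain-of-thought-style instruction: analyse trends and ratios first,
# then give a narrative, then a directional earnings prediction.
prompt = (
    "You are a financial analyst. Using only the statements below:\n"
    "1. Identify notable trends in the line items.\n"
    "2. Compute and discuss key ratios (margins, leverage, asset turnover).\n"
    "3. Write a short economic narrative about the firm.\n"
    "4. Predict whether earnings will increase or decrease next year, and state your confidence.\n\n"
    + anonymized_statements
)

response = client.chat.completions.create(
    model="gpt-4",   # illustrative; the study's model and version may differ
    messages=[{"role": "user", "content": prompt}],
    temperature=0,   # deterministic output for repeatable analysis
)

print(response.choices[0].message.content)
```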
The paper also documents that GPT and human analysts are complementary, rather than substitutes. “While language models have a larger advantage over human analysts when analysts are expected to exhibit bias and disagreement, humans, on the other hand, add value when additional context, not available to the model, is likely to be important,” it said.
The authors cautioned, however, that the results should be interpreted with care, even though the evidence is consistent with large language models having human-like capabilities in the financial domain.
The conclusions have far-reaching implications for the future of financial analysis and whether analysts will continue to be the backbone of informed decision-making in financial markets.