We humans are good at detecting meaning from the nuances of voices, and we are often proud that we can infer information implicitly conveyed, and sometimes inadvertently revealed, by a speaker. Being able to discern subtleties in voices, and thus infer the underlying intentions of the speaker, is such a unique human skill that we generally consider it an important aspect of intelligence that machines cannot emulate.
But this is changing, as a new technology revolution looms. With rapid advances in machine learning and computing power over the past few decades, the performance of computer programs has increased so dramatically that we now routinely refer to them as artificial intelligence (AI). Not surprisingly, companies in almost all industries are racing to augment their human intelligence with machine intelligence. Indeed, if we think of human intuitions and experiences as algorithms embodied in biological rather than artificial neurons, it is only logical that AI algorithms, equipped with faster computation and more data, may eventually be able to go further than our intuitions and experiences.
For example, a recent Wall Street Journal article, “How Computers Could Make Your Customer Service Calls More Human,” reported how MetLife improved its customer-experience scores by teaching humans to be more human with the help of AI software [1]. The AI software measures and analyzes customer service conversations in real time, assessing agents’ social skills such as empathy and patience, and uses these assessments to guide call-center agents as they engage with customers.
In a recent study, Professor Huaxia RUI, Andrew WHINSTON, and I follow this new line of ideas by applying AI algorithms to earnings conference calls, where a public company discusses the financial results of a reporting period. Our analysis rests on a simple premise: managers have an information advantage, along with considerable flexibility in what information they cover, whether they release it, and how they release it.
Unlike the management discussion part of an earnings conference call, which is typically scripted and well prepared, the questions and answers (Q&A) part is conversational. Managers must improvise when answering questions that are often difficult to predict, and they must do so under time constraints and with no real support from their staff. We believe this “conversational dance” provides an ideal context for scrutinizing managers’ responses and potentially detecting undisclosed information. In other words, when answering questions, what managers say and how they say it may reveal information that is not directly available from any well-prepared document.
Naturally, analysts, stock traders, and investors all try to detect this unsaid information during conference calls. For example, Business Intelligence Advisors (BIA), a hedge-fund consulting firm, hires former Central Intelligence Agency (CIA) employees to analyze language clues. By analyzing how managers of UTStarcom diverted questions during one earnings call, BIA successfully predicted the company’s profitability [2].
Can AI Decipher a Voice’s Hidden Meaning?
Although deciphering the responses and voices of managers currently relies on the shrewdness and business acumen of consultants, analysts, and accountants, recent developments in AI are presenting new opportunities for financial decision-making (World Economic Forum 2015). For example, Goldman Sachs is investing heavily in AI to understand news and analyst reports for investment decision-making. David SIEGEL, the cofounder of Two Sigma, argues that “It’s very hard for someone using traditional methods to juggle all the information of the global economy in their head ... Eventually the time will come that no human investment manager will be able to beat the computer.”
The broad research question we address follows this industry trend: can AI algorithms help decipher the Q&A in an earnings conference call? More specifically, we aim to use AI algorithms to measure the degree of evasiveness, incoherence, and emotional inconsistency in managers’ answers to questions raised by analysts or investors, and to examine the financial implications of these measures.
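To give a flavor of what such a measure might look like, the minimal sketch below scores the incoherence of a question-answer pair as the distance between their sentence embeddings. It is only an illustration: the off-the-shelf embedding model (sentence-transformers’ all-MiniLM-L6-v2) and the scoring rule are assumptions made for this example, not the measures developed in the study.

```python
# Illustrative sketch only: score a question-answer pair as "incoherent" when
# their sentence embeddings are far apart. The embedding model and the scoring
# rule are stand-ins, not the study's actual measure.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def incoherence_score(question: str, answer: str) -> float:
    """Return 1 - cosine similarity between question and answer embeddings."""
    q_emb, a_emb = model.encode([question, answer])
    cos = float(np.dot(q_emb, a_emb) / (np.linalg.norm(q_emb) * np.linalg.norm(a_emb)))
    return 1.0 - cos

question = "Can you walk us through the margin decline in this segment?"
answer = "We remain excited about our long-term strategic opportunities."
print(round(incoherence_score(question, answer), 3))  # higher = less coherent
```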
We analyzed earnings conference call data for S&P 500 companies from 2011 to June 2015. Various empirical tests show that both the evasiveness and incoherence measures provide information about a firm’s fundamentals in the following quarter, beyond quantitative measures of the firm and analysts’ forecasts. We also find that the stock market responds to evasiveness and exhibits a one-day delay in incorporating it into the stock price, which suggests that the market understands managers’ informational advantage and interprets evasiveness as a negative signal. To assess the economic magnitude of this delay, we constructed an equally weighted long-short portfolio held for one trading day. The trading strategy yielded significantly positive excess and raw returns.
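The portfolio logic behind such a back-test can be sketched in a few lines. The quintile cutoff, column names, and toy data below are illustrative assumptions, not the exact construction used in the study.

```python
# Hypothetical sketch of a one-day, equally weighted long-short portfolio sorted
# on evasiveness: long the least evasive calls, short the most evasive ones.
import pandas as pd

def long_short_return(df: pd.DataFrame, quantile: float = 0.2) -> float:
    """Equal-weight long low-evasiveness names and short high-evasiveness names,
    hold for one trading day, and return the spread."""
    lo_cut = df["evasiveness"].quantile(quantile)
    hi_cut = df["evasiveness"].quantile(1 - quantile)
    long_leg = df.loc[df["evasiveness"] <= lo_cut, "next_day_return"].mean()
    short_leg = df.loc[df["evasiveness"] >= hi_cut, "next_day_return"].mean()
    return long_leg - short_leg

# Toy data: three calls held on the same day.
calls = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC"],
    "evasiveness": [0.1, 0.5, 0.9],
    "next_day_return": [0.012, 0.001, -0.008],
})
print(long_short_return(calls, quantile=0.34))  # 0.020 on this toy data
```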
When Finance Meets AI
Our research contributes to the fintech field in several important ways. First, by investigating improvised information from conversations between managers and analysts, a type of data largely overlooked by fintech algorithms, we demonstrate how financially valuable information can be extracted from the textual and audio data of earnings calls. Second, this study contributes methodologically by proposing and evaluating a novel measure of evasiveness, which differs from the lexicon-based vagueness measures previously used in the finance and accounting literature. Third, it contributes conceptually by considering the incoherence of managers’ responses, and methodologically by constructing an innovative measure of incoherence based on a deep learning model that has been highly influential in recent years. Finally, our study pioneers fintech research that combines AI technologies with asset pricing, a direction that could have high impact given the rapid advances in AI technologies.
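To make the contrast with lexicon-based approaches concrete, the sketch below computes a simple word-list vagueness score of the kind used in earlier work: count hedging words and normalize by answer length. The word list here is invented purely for illustration and is not any published lexicon.

```python
# Illustrative lexicon-based vagueness score: share of "hedging" words in an
# answer. The word list is a made-up example, not a published lexicon.
VAGUE_WORDS = {"approximately", "roughly", "maybe", "perhaps", "somewhat",
               "generally", "possibly", "probably", "around"}

def vagueness_score(answer: str) -> float:
    tokens = [t.strip(".,?!").lower() for t in answer.split()]
    if not tokens:
        return 0.0
    return sum(t in VAGUE_WORDS for t in tokens) / len(tokens)

print(vagueness_score("Margins were roughly flat, maybe down somewhat."))  # ~0.43
```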
This research also has important practical implications. From the perspective of stock traders and investors, our proposed measures of evasiveness, incoherence, and emotional inconsistency can be readily incorporated into profitable trading and investment strategies. More broadly, the idea of using AI technologies to analyze the “conversational dance” between a party with more information and another with an interest in that information could be particularly fruitful. For example, one might design AI-powered software that analyzes media interviews with the President to detect hints of potential policy changes.
From the perspective of corporate managers, with increasingly advanced AI technologies scrutinizing their disclosure, it will be more difficult to strategically disclose or manipulate information in the future. In this cat-and-mouse game of information seeking, machines may ultimately win, leaving only one viable option for managers: be honest and be forthright.
Finally, from the perspective of financial analysts, our results highlight the evolving role of equity analysts in probing every subtle detail of earnings conference calls. AI-powered algorithms are encroaching on many territories traditionally considered the exclusive domain of human intelligence. In this looming new age of AI technologies, we believe analysts who are open to new technologies and can harness them to augment their abilities are the ones likely to survive and thrive.
References
[1] “How Computers Could Make Your Customer-Service Calls More Human,” The Wall Street Journal, June 14, 2018. https://www.wsj.com/articles/call-center-agents-get-a-human-touch-1528984801
[2] Laing, J. R. (2006). “Is Your CEO Lying?” Barron's, June 26, 2006.