Large language models (LLMs) are rapidly becoming integral to financial analysis, from parsing earnings calls to predicting stock market reactions to news. But a critical question remains: when we feed these models more information, do they perform better? Our recent study suggests not necessarily. We document a structural limitation of LLMs in financial tasks, a