Why the technology still feels intimidating – and why it doesn't need to
AI is everywhere in investor relations today. Yet for many IR teams, the journey from curiosity to confidence is anything but straightforward. The 'empty prompt window' syndrome is real, as is uncertainty about where to start and how much to trust the technology.
As an engineer who builds these systems, I see the fear and skepticism – from IROs, executives and IT teams alike. However, I also see a profound opportunity. With the right guardrails, a clear understanding, and a strategic approach, AI can empower IR professionals to work smarter, faster and with greater objectivity than ever before. The key is to move past the hype and focus on responsible, practical adoption.
I recently had the chance to join a lively discussion on AI and technology at the IR Impact AI & Technology Forum – 2025. The panel surfaced some thought-provoking insights, and I'm excited to dive into the highlights and share what I learned.
Prediction vs reasoning – demystifying generative AI
Let’s begin by clarifying what generative AI actually is. Large language models (LLMs) like ChatGPT, Claude and Copilot are not truly intelligent. They’re advanced predictors and do not understand concepts like humans do. Instead, they look at patterns in massive amounts of text and predict the next word based on probabilities. Think of it like autocomplete on steroids: very sophisticated, but still fundamentally guessing the next word, not thinking or reasoning.
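To make the 'autocomplete on steroids' idea concrete, here is a deliberately simplified sketch of next-word prediction based on how often word pairs appear together. This is my own toy illustration using a made-up corpus, not how production LLMs are actually built, but the core idea – predicting the next word from observed probabilities – is the same:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for the massive text LLMs train on
corpus = "revenue grew this quarter revenue grew last quarter revenue fell this month".split()

# Count how often each word follows each other word (a "bigram" model)
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("revenue"))  # "grew" follows "revenue" twice, "fell" once
```

A real LLM does the same kind of next-token prediction, just with billions of learned parameters and far richer context than a single preceding word – which is why its output can sound authoritative while still being, at bottom, a statistical guess.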
AI is an assistant, not a decision-maker. It can summarize meeting notes, prep Q&A and analyze sentiment at scale – but always within the boundaries of its training data and algorithms. Understanding these limits helps teams set realistic expectations and avoid overreliance on the technology.
Four paths to safe, strategic AI use in IR
Security and trust are non-negotiable in investor relations. The good news is that there are several proven ways for IR teams to use AI responsibly today.
- Built-in, workflow-native tools: Platforms like Irwin and FactSet integrate AI directly within your CRM and workflow, keeping sensitive data inside your organization and ensuring it isn't used to train public models. This approach provides a secure, controlled environment for adopting AI responsibly.
- Enterprise AI tools: Tools like OpenAI and Anthropic, when used with enterprise-level licenses and proper opt-outs, provide robust controls over data use and privacy.
- Personal pro accounts: If enterprise solutions aren’t available, a personal pro account may suffice for non-sensitive tasks. But never use free-tier accounts for confidential material or proprietary information.
- Internal guardrails and disclosure: Always be transparent about how AI is used across your IR program. Document sources, prompt the AI to admit when it doesn't know the answer, and double-check outputs for accuracy before sharing them internally with management or externally with the market.
Across all approaches, closed systems, clear policies, and human oversight are essential. AI can share confident but incorrect answers when it lacks sufficient data. Teams must remain vigilant.
Real-world examples
IR teams across companies of all sizes are adopting AI at different speeds, but the principles of responsible use remain consistent. Here’s what I learned from my fellow panelists:
- Pfizer has built a closed environment, uploading only internal documents and structuring data for objective analysis. Their models handle everything from Q&A prediction to executive briefings and sentiment analysis, helping eliminate internal bias and accelerate insight delivery.
- Kyndryl uses Copilot across 30,000 licenses, with strict guardrails and collaborative oversight. Their ‘Ivan’ agent automates peer research, but every result is double-checked for accuracy.
- ONEOK is relatively new to AI, focusing on pilots and utilizing public data to build internal confidence, always involving IT and legal teams to ensure security.
These examples demonstrate that, regardless of maturity, the path to success involves experimentation, transparency, and a commitment to ongoing learning. The most effective teams treat AI as a second set of eyes: objective and fast, but always subject to human judgment.
Trust before automation, accuracy before speed
At Irwin, our guiding principles are clear: trust before automation, accuracy before speed. AI is a powerful assistant, but it must be deployed with care. I often remind my team to use AI as an assistant for now: until somebody can tell you that it is artificial general intelligence, it is not intelligent.
Empowering IR teams means freeing them from repetitive tasks, allowing them to focus on strategic thinking, nuanced communication, and value-added analysis. The real unintended consequence of AI adoption is not risk, but opportunity: more time for what matters most.
AI won’t replace IR, but it will transform it
AI will not replace investor relations professionals, but IR teams who use AI will outperform those who don't. Start small, build trust through safe systems and clear disclosure, and use AI to amplify your impact, not to automate away judgment or empathy.
The future belongs to IR teams who adopt AI responsibly, with accuracy, transparency, and human oversight at the core.
Amit Kaura is head of engineering at Irwin, a FactSet company, where he steers the company’s technical vision and execution across IR workflows.
