Introduction
Artificial intelligence (AI) has rapidly transitioned from a niche analytical tool to a core component of modern investment management. Asset managers are increasingly using machine learning and data-driven systems to analyze markets, optimize portfolios, and identify investment opportunities. These technologies promise efficiency, precision, and a competitive edge in an industry defined by information asymmetry. Yet, their use also raises significant regulatory and fiduciary questions.
The regulatory framework for investment management, built around human discretion and judgment, has struggled to adapt to autonomous decision-making systems. Questions of accountability, transparency, conflicts of interest, and fiduciary duty are now at the forefront of regulatory and legal discourse. This article examines how existing fiduciary principles apply to AI-driven investment decisions, how regulators are responding, and what governance frameworks are emerging to bridge the gap between technological innovation and investor protection.
I. The Rise of AI in Investment Management
AI applications in finance are diverse: portfolio optimization, sentiment analysis, algorithmic trading, credit scoring, and risk modeling. Unlike traditional quantitative models, which follow explicit rules and parameters, AI systems (especially deep learning and generative models) are often "black boxes," producing outputs that even their developers struggle to fully explain.
As noted by Caiazza, Rosenblum, and Sartain (2020), AI allows advisers to synthesize vast amounts of structured and unstructured data, potentially improving investment outcomes, but it also complicates a core element of fiduciary law: the ability to demonstrate the rationale behind a recommendation or trade decision. The more opaque the model, the harder it becomes for an adviser to fulfill their duty of care and provide adequate disclosure to clients.
Firms are also marketing "AI-powered" investment strategies, drawing scrutiny from the U.S. Securities and Exchange Commission (SEC) over "AI-washing": potentially misleading claims about the capabilities or sophistication of these systems (Ropes & Gray, 2025). The concern parallels earlier worries about "greenwashing" in ESG investing, where marketing hype around a technology or strategy outpaced regulatory oversight.
II. Fiduciary Duties in the Age of Algorithms
Under the U.S. Investment Advisers Act of 1940, investment advisers owe clients two core fiduciary duties: the duty of care and the duty of loyalty. These duties are intentionally broad and principles-based, designed to evolve with industry practices (Caiazza et al., 2020). AI does not replace these obligations—it amplifies their complexity.
A. Duty of Care
The duty of care requires advisers to act with competence and diligence, grounded in informed, reasonable judgment. When an adviser relies on an AI system, the obligation extends to understanding how that system works, validating its assumptions, and continuously monitoring its performance. As Cullen & Dykman LLP (2025) observe, "delegating decisions to a machine does not absolve the human fiduciary from oversight."
Advisers must therefore maintain documentation demonstrating that model outputs are reviewed, tested, and consistent with client mandates. Using AI without explainability or validation could be interpreted as a breach of the duty of care—akin to relying on an unverified third-party analyst without due diligence.
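To make the documentation point concrete, the sketch below shows one way such review evidence might be captured in practice. It is a minimal illustration in Python; the record schema, the mandate checks, and names such as `ModelAuditRecord` and `review_model_output` are assumptions for illustration, not a regulator-prescribed format.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ModelAuditRecord:
    # Hypothetical schema: enough to show who reviewed what, when, and why.
    model_id: str         # which model and version produced the output
    client_id: str
    recommendation: str   # e.g., "BUY 500 XYZ"
    rationale: str        # explanation captured at decision time
    mandate_checks: dict  # results of tests against the client mandate
    reviewed_by: str      # the human fiduciary who signed off
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def review_model_output(model_id, client_id, recommendation, rationale,
                        proposed_position_pct, max_position_pct, reviewer):
    """Run mandate checks, record human sign-off, and persist the evidence."""
    checks = {
        "within_position_limit": proposed_position_pct <= max_position_pct,
        "rationale_documented": bool(rationale.strip()),
    }
    record = ModelAuditRecord(model_id, client_id, recommendation,
                              rationale, checks, reviewer)
    # Append-only log, so reviews remain demonstrable after the fact.
    with open("model_audit_log.jsonl", "a") as log:
        log.write(json.dumps(asdict(record)) + "\n")
    return all(checks.values())

# Usage: the recommendation is actionable only if every check passes
# and a named reviewer is on the record.
approved = review_model_output(
    model_id="alpha-signal-v2.3",
    client_id="ACCT-1042",
    recommendation="BUY 500 XYZ",
    rationale="Momentum and earnings-revision signals; within mandate.",
    proposed_position_pct=3.2,
    max_position_pct=5.0,
    reviewer="j.doe@adviser.example",
)
print("Approved for execution:", approved)
```

The substance of the control matters more than the code: what an examiner or court will look for is contemporaneous evidence that a human tested the output against the mandate and accepted responsibility for it.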
B. Duty of Loyalty
The duty of loyalty requires advisers to act in their clients' best interests and fully disclose material conflicts. This can become problematic if an AI model is trained on biased data or designed to optimize firm profitability over client outcomes—for example, favoring proprietary products. Regulators have also noted risks where predictive analytics are used to "nudge" clients toward behaviors that benefit the adviser rather than the investor (WealthManagement.com, 2024).
Transparency about how AI is deployed, what data it relies on, and what limitations exist is essential to satisfying the loyalty standard. If an adviser cannot meaningfully explain the basis for a model's output or its potential conflicts, it becomes difficult to argue that the client's interests were prioritized.
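One loyalty-oriented control a firm can implement directly is a skew test: does the model recommend proprietary products out of proportion to their weight in the eligible universe? The Python sketch below is a hypothetical illustration; the function name `proprietary_skew`, the 10% tolerance, and the sample data are all assumptions, and a flagged result is a trigger for investigation and disclosure, not proof of a breach.

```python
from collections import Counter

def proprietary_skew(recommendations, eligible_universe, proprietary_ids,
                     tolerance=0.10):
    """Compare the proprietary share of model recommendations with the
    proprietary share of the eligible universe; flag large gaps."""
    counts = Counter(recommendations)
    rec_share = sum(n for pid, n in counts.items()
                    if pid in proprietary_ids) / len(recommendations)
    base_share = len(proprietary_ids & eligible_universe) / len(eligible_universe)
    return rec_share, base_share, (rec_share - base_share) > tolerance

# Illustrative data: 3 of 10 eligible funds are proprietary (30%), yet
# 70% of the model's recommendations are proprietary, a gap worth
# documenting and escalating under the duty of loyalty.
universe = {f"FUND-{i}" for i in range(10)}
proprietary = {"FUND-0", "FUND-1", "FUND-2"}
recs = ["FUND-0", "FUND-1", "FUND-0", "FUND-2", "FUND-5",
        "FUND-0", "FUND-7", "FUND-1", "FUND-2", "FUND-9"]
rec_share, base_share, flagged = proprietary_skew(recs, universe, proprietary)
print(f"recommended {rec_share:.0%} vs universe {base_share:.0%}; flag={flagged}")
```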
III. Emerging Regulatory Frameworks
A. The United States
The SEC has proposed new rules governing predictive data analytics and digital engagement practices. These rules, introduced in 2023, would require investment advisers and broker-dealers to "neutralize" or eliminate conflicts of interest associated with AI-driven interactions (WealthManagement.com, 2024). Although the proposal is intended to protect investors, industry groups have criticized it as overly broad and potentially incompatible with the realities of AI-driven analysis.
More recently, then-SEC Chair Gary Gensler warned that "AI may heighten financial fragility if many market participants rely on similar models or data," signaling an increased focus on systemic risk (Ropes & Gray, 2025). Enforcement actions around "AI-washing" are also expected, mirroring the Commission's ESG enforcement playbook.
B. The European Union
The EU's approach, under the AI Act (its comprehensive regulatory framework for artificial intelligence enacted in 2024) and guidance from the European Securities and Markets Authority (ESMA), emphasizes accountability and transparency. ESMA has stated that financial institutions "must take full responsibility for the actions of AI systems they deploy" (Reuters, 2024). This position eliminates ambiguity about liability: human managers cannot disclaim responsibility for an algorithm's output.
The AI Act classifies certain AI uses in finance, such as credit scoring and portfolio decision-making, as "high risk," subjecting them to robust governance, documentation, and explainability requirements.
IV. Governance and Best Practices for AI Use
Given the regulatory uncertainty, many investment managers are voluntarily adopting AI governance frameworks aligned with emerging best practices. The Alternative Investment Management Association (AIMA) (2025) recommends that advisers using AI establish:
1. AI Governance Committees overseeing model development, testing, and ethical considerations
2. Model Validation Programs that regularly audit performance, data integrity, and bias
3. Vendor Oversight Protocols ensuring third-party AI providers meet equivalent compliance standards
4. Human-in-the-Loop Controls so that final investment or client-facing decisions involve human review (a minimal sketch follows this list)
5. Transparent Client Communications outlining the role of AI in investment processes
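To make item 4 concrete, the following is a minimal Python sketch of a human-in-the-loop gate: model-generated orders are held as proposals and cannot reach execution without a recorded human approval. The class and field names (`ProposedOrder`, `reviewer`, and so on) are illustrative assumptions rather than any standard interface.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(Enum):
    PROPOSED = "proposed"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class ProposedOrder:
    order_id: str
    instrument: str
    side: str            # "BUY" or "SELL"
    quantity: int
    model_id: str        # provenance: which model proposed the order
    status: Status = Status.PROPOSED
    reviewer: Optional[str] = None

    def approve(self, reviewer: str) -> None:
        # The control point: a named human must sign off.
        self.status = Status.APPROVED
        self.reviewer = reviewer

    def reject(self, reviewer: str) -> None:
        self.status = Status.REJECTED
        self.reviewer = reviewer

def execute(order: ProposedOrder) -> None:
    # Execution refuses any order lacking a recorded human approval.
    if order.status is not Status.APPROVED or order.reviewer is None:
        raise PermissionError("no human sign-off on model-generated order")
    print(f"Executing {order.side} {order.quantity} {order.instrument} "
          f"(model {order.model_id}, approved by {order.reviewer})")

order = ProposedOrder("ORD-7", "XYZ", "BUY", 500, "alpha-signal-v2.3")
order.approve("j.doe@adviser.example")
execute(order)  # succeeds only because a reviewer is on record
```

The design point is structural: approval is not a log entry added after the fact but a precondition of execution, which is what makes the oversight demonstrable.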
From a legal perspective, documenting these controls serves a dual purpose: it strengthens a firm's regulatory defense and demonstrates proactive fiduciary compliance. As Ropes & Gray (2025) observe, "AI governance is not just a technical framework—it is an extension of fiduciary oversight."
V. The Path Forward
AI will continue to reshape investment management, but its integration into a fiduciary framework demands caution and transparency. The legal and regulatory consensus emerging across jurisdictions is that accountability cannot be automated. Human advisers remain ultimately responsible for decisions influenced or generated by AI.
In practice, this means investment advisers must understand and monitor the tools they deploy, maintain explainability and auditability, manage conflicts inherent in predictive or generative models, communicate AI's use and limitations clearly to clients, and treat AI as an aid to, not a substitute for, professional judgment.
The SEC's anticipated guidance on AI-driven systems, combined with Europe's risk-based AI classification, will likely converge on a single principle: fiduciary duties attach to the advice itself, whether it is generated by a human or an algorithm.
Conclusion
Artificial intelligence offers immense promise for portfolio management, but it also tests the outer boundaries of fiduciary law. For legal practitioners advising investment managers, the imperative is clear: ensure that innovation does not outpace governance. Firms that embed AI oversight into their compliance and risk frameworks will not only mitigate regulatory risk but also position themselves as responsible stewards of both technology and trust.
As fiduciary standards evolve, one principle remains constant—the adviser's duty to understand and stand behind every investment decision, whether made by human insight or machine intelligence.
Select References
- Caiazza, A., Rosenblum, R., & Sartain, D. "Investment Advisers' Fiduciary Duties: The Use of Artificial Intelligence." Harvard Law School Forum on Corporate Governance (2020).
- Cullen & Dykman LLP. "The Artificial Fiduciary: How Black-Box AI Models Are Compromising Fiduciary Duties in Private Equity" (2025).
- Klass, J.-L., Man, P. J., & Valente, C. "A Practical Guide for Advisers Considering the Use of AI." Alternative Investment Management Association (2025).
- Reuters. "EU Watchdog Says Banks Must Take Full Responsibility When Using AI" (May 30, 2024).
- Ropes & Gray LLP. "Meeting the AI Moment in Asset Management: An Agenda for Industry Lawyers" (Sept. 2025).
- WealthManagement.com. "SEC's AI Rule Could Weaken Advisors' Fiduciary Duty, IAA Attorney Argues" (Mar. 2024).