The use of artificial intelligence (AI) in the investment landscape is growing rapidly, with tools and apps increasingly guiding individual and institutional investors alike. 

These systems offer powerful advantages, such as analyzing vast amounts of data, identifying trends, and generating tailored investment recommendations in real time. However, while the potential benefits of AI-assisted investing are substantial, so are the risks. Inaccurate outputs, opaque algorithms, and data bias can lead to misleading conclusions and even severe financial losses. As the technology becomes more embedded in the investment process, a thoughtful approach that balances innovation with responsibility is crucial. This article explores the opportunities and risks of AI-driven investment tools, with a focus on the evolving regulatory landscape, especially the implications of the EU Artificial Intelligence Act (AI Act). It also touches on recent guidance from regulatory authorities such as the AFM and ESMA, and highlights international frameworks such as the OECD AI Principles that can be used when assessing these systems.

This article is written by Kristi Rutgers ([email protected]) and Sefa Geçikli ([email protected]). Kristi and Sefa are part of RSM Netherlands Business Consulting Services, specifically focusing on International Trade and Strategy.

Background

More and more investors are using AI tools and apps that make investment recommendations based on publicly available information. AI offers notable advantages in financial decision-making, particularly given its ability to process enormous volumes of data. AI tools can quickly uncover patterns and market insights that human analysts might overlook, and these insights can serve as a valuable resource to support the strategies of (self-)investors. Specifically, AI can analyze large datasets rapidly to identify trends or anomalies, offer real-time updates based on market conditions, and generate personalized investment suggestions.

Despite these opportunities and benefits, the use of AI in investing comes with significant risks that can lead to financial losses. As with all AI tools, the quality of the data fed into the system is a decisive factor in generating the desired results. The principle of 'garbage in, garbage out' strongly applies here: if the data provided to the AI model is outdated, biased, or incomplete, the recommendations and insights it generates can be misleading, causing dangerous situations and severe financial losses.

Key risks involved with AI tools include: 

  • Bias in algorithms: AI tools may unintentionally favor certain types of investments based on skewed input data.
  • Lack of transparency: Often described as "black boxes," many AI systems do not explain their reasoning, making it difficult for users to challenge or understand the outputs.
  • Hallucinations: AI can sometimes produce information that appears convincing but is factually incorrect or entirely fabricated.
  • Privacy and security concerns: Many consumer-grade AI tools may not adequately protect users' personal and financial data.
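The first of these risks, bias from skewed input data, can be illustrated with a deliberately naive sketch. The example below is hypothetical: a recommender "trained" only on a frequency count of past picks will simply reproduce whatever over-representation exists in its inputs, mistaking sampling bias for insight. The fund names and proportions are invented for illustration.

```python
from collections import Counter

# Hypothetical training set: past "winning" picks gathered from public
# sources, in which one asset class is heavily over-represented.
historical_picks = (
    ["tech_fund"] * 80        # over-represented asset class
    + ["bond_fund"] * 15
    + ["commodity_fund"] * 5
)

def naive_recommender(picks):
    """Score each asset class by how often it appeared in the training data.

    This mirrors 'garbage in, garbage out': the model faithfully
    reproduces the skew of its inputs rather than any real market insight.
    """
    counts = Counter(picks)
    total = len(picks)
    return {asset: n / total for asset, n in counts.most_common()}

scores = naive_recommender(historical_picks)
# tech_fund dominates the scores purely because of sampling bias,
# not because it is objectively the better investment.
```

A real AI model is far more complex, but the failure mode is the same: skewed inputs produce systematically skewed recommendations, which is why data governance is a recurring theme in the regulatory frameworks discussed below.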

Regulatory Context

To mitigate the risks of AI tooling and make use of its opportunities, investors should be aware of the regulatory context surrounding AI. The European Union Artificial Intelligence Act (EU AI Act) emphasizes transparency, accountability, and data protection in AI applications.

As highlighted in our previous installment, the AI Act introduces a comprehensive framework to ensure the responsible deployment of AI. Investors are encouraged to check whether their AI tools align not only with the EU AI Act, but also with the OECD Principles for Trustworthy AI.

The OECD AI Principles, adopted in 2019, provide an international framework for the responsible development and use of artificial intelligence. These principles emphasize that AI systems should be inclusive, transparent, safe, fair, and accountable, and that they should respect human rights, democratic values, and the rule of law. They also call for risk management, including transparency about AI capabilities and limitations, and ensuring that AI systems are used in a way that benefits people and the planet. The principles serve as a guide for governments, developers, and organizations to build trustworthy AI that aligns with ethical and societal standards across borders.

Under the EU AI Act, financial institutions that use AI systems, such as AI-based investment tools, remain subject to their existing regulatory obligations under Union financial services law. As stated in the recital, these laws already include internal governance and risk management requirements, which continue to apply when AI systems are used in service provision. The AI Act aims to complement, not duplicate, these existing frameworks by ensuring that its obligations (such as risk assessment, documentation, and post-market monitoring) are coherently integrated into financial sector supervision. The recital clarifies that existing supervisory authorities, such as those overseeing banks, insurers, and credit institutions, will also be responsible for enforcing the AI Act within their sectors. These authorities are designated to supervise the implementation of the AI Act, including market surveillance of high-risk AI systems used in financial services. Importantly, the regulation allows for certain derogations and procedural integration to avoid duplication, such as incorporating AI-specific risk management obligations into existing compliance processes under financial law. This approach enables AI governance to be embedded into familiar supervisory structures, streamlining compliance for financial institutions.

For developers and users of AI investment tools, this means that compliance with the AI Act will not take place in isolation, but must be aligned with sector-specific financial regulation. Where these tools fall under the high-risk AI category, institutions must demonstrate that their deployment complies with both AI-specific obligations and existing financial supervisory standards. As the recital notes, this integrated oversight model helps ensure consistency, avoids regulatory overlap, and reflects the EU's recognition of the complex, interconnected nature of AI innovation and financial regulation.

The AI Act classifies as high-risk any AI system intended to evaluate the creditworthiness of natural persons or establish their credit score, except when used solely for detecting financial fraud.

Investment companies, especially those offering retail investment products (e.g. robo-advisors, wealth management platforms), often use AI-driven tools to assess a customer’s financial profile, which may include:

  • Automated scoring to determine investment eligibility
  • Risk tolerance profiling
  • Income or asset-based segmentation

If these systems make determinations similar to creditworthiness assessments, they could fall under the high-risk category, particularly if they influence decisions like product suitability or access to financial products.

AI systems used to fulfill MiFID II obligations (e.g. suitability and appropriateness tests) may be close in function to credit assessments. If such systems use predictive models or scoring algorithms to categorize clients or limit access to investment services, they may require compliance with high-risk AI requirements—such as:

  • Risk management frameworks
  • Transparency and explainability
  • Human oversight
  • Robust data governance
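To make the transparency and human-oversight requirements more concrete, the sketch below shows a minimal, hypothetical example of what they could look like in a client-scoring tool: every scoring step records the reason behind it (explainability), and borderline scores are escalated to a human reviewer rather than decided automatically (human oversight). All thresholds and field names are invented for illustration and carry no regulatory meaning.

```python
from dataclasses import dataclass

@dataclass
class ClientProfile:
    annual_income: float      # EUR
    investable_assets: float  # EUR
    loss_tolerance: int       # self-reported, 1 (low) to 5 (high)

def suitability_score(profile: ClientProfile):
    """Score a client and record the reason behind every contribution.

    Returns (score, reasons, needs_human_review). The thresholds are
    illustrative only, not values taken from MiFID II or the AI Act.
    """
    score, reasons = 0, []
    if profile.annual_income >= 50_000:
        score += 1
        reasons.append("income >= 50,000 EUR: +1")
    if profile.investable_assets >= 25_000:
        score += 1
        reasons.append("assets >= 25,000 EUR: +1")
    delta = profile.loss_tolerance - 3
    score += delta
    reasons.append(f"loss tolerance {profile.loss_tolerance}/5: {delta:+d}")
    # Human oversight: borderline outcomes are escalated for manual
    # review instead of being auto-decided by the system.
    needs_review = score in (0, 1)
    return score, reasons, needs_review

score, reasons, review = suitability_score(
    ClientProfile(annual_income=60_000, investable_assets=10_000, loss_tolerance=2)
)
```

The point of the sketch is structural: because each factor is logged alongside its effect, a client or supervisor can challenge the outcome, and the escalation flag keeps a human in the loop for exactly the cases where the model is least certain.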

AFM and ESMA Guidance for Consumers

On the consumer side, the AFM and ESMA published a campaign specifically focusing on aspects that (self-)investors should take into account:

  • Consider multiple perspectives: When making investment decisions, don’t rely solely on AI tools — use them as one of many sources. For example, also consult a licensed investment advisor.
  • Resist the temptation to get rich quick: Be skeptical of AI tools that promise high returns with their investment strategies.
  • Legislation matters: AI tools that are publicly available online are not legally required to act in your best interest.
  • Understand the risks: Be aware of the limitations and potential inaccuracies of AI-generated advice. It may be based on outdated, incorrect, or incomplete information.
  • Protect your privacy: AI tools may not have adequate security measures in place to safeguard your personal data.

Forward Thinking

AI is poised to play an even greater role in the world of investing, transforming how decisions are made, risks are assessed, and portfolios are managed. As technology continues to advance, so too must the tools, frameworks, and education available to both retail and professional investors. Regulators like the AFM and ESMA are taking crucial steps to ensure that innovation does not outpace responsibility. Future success in AI-assisted investing will depend on building systems that are not only powerful, but also transparent, explainable, and aligned with investor protection standards. As AI becomes more embedded in financial services, a forward-thinking approach — combining technological innovation with robust ethical and regulatory oversight — will be key to ensuring that these tools genuinely empower investors rather than expose them to new forms of risk. The future of finance may be powered by AI, but it must be guided by human judgment, integrity, and informed decision-making.

RSM is a thought leader in the field of Financial Services and International Trade consulting. We offer frequent insights through training and sharing of thought leadership based on a detailed knowledge of industry developments and practical applications in working with our customers. If you want to know more, please contact one of our consultants.