In just a few short years, Generative AI (GenAI) has gone from an experimental technology to a transformative force reshaping how businesses operate. Whether in customer engagement, supply chain optimization, product design, or regulatory compliance, GenAI is unlocking new levels of speed, efficiency, and creativity. But as with any powerful innovation, its use comes with trade-offs. 

Organizations must navigate a rapidly evolving landscape of risks, encompassing everything from misinformation and data leakage to ethical and legal pitfalls. In this article, we examine the dual nature of GenAI: its capacity to enhance productivity and drive strategic growth on one hand, and the challenges of hallucination, cybersecurity, and governance on the other.

This article is written by Lorena Velo ([email protected]), Cem Adiyaman ([email protected]) and Mario van den Broek ([email protected]). Mario, Cem and Lorena are part of RSM Netherlands Business Consulting Services, focusing on emerging technology and AI advisory services.  

The Current State of Generative AI

Generative AI (GenAI) refers to artificial intelligence systems capable of creating new content, such as text, images, code, or audio, by learning patterns from vast datasets. Unlike predictive AI, which makes decisions based on historical data, GenAI can simulate human creativity and automate tasks across multiple domains. Built on large language models (LLMs) and foundation models, GenAI is becoming a powerful tool for businesses looking to innovate, streamline operations, and deliver personalized customer experiences. Its ability to operate with minimal data inputs through APIs or prompts makes it highly accessible. Yet, this powerful technology comes with challenges that demand attention.

GenAI has rapidly transitioned from a research focus to a central pillar of digital transformation across sectors. The technology landscape is marked by a few dominant players primarily US-based tech giants who control much of the infrastructure, model development, and application layers, capturing over 80% of global funding in the value chain. While the US leads in private investment and deployment, the EU is making targeted efforts to close the gap through initiatives like the €2.1 billion AI Factories program, though this still falls behind the US’s $500 billion Stargate Project.

Europe’s strengths are particularly visible in specific sectors. In automotive, the continent boasts a strong legacy and hosts renowned companies, with significant investment funneled into autonomous vehicle startups $969 million in 2023 and over $1.1 billion in 2024. Companies like Wayve (UK), Aptiv (France), Forvia (Ireland), and Einride (Sweden) are leveraging GenAI for simulation and decision-making, while partnerships such as Umicore (Belgium) with Microsoft are advancing battery technology using GenAI. In clean energy, the EU’s ambitious climate goals and the Green Deal have led to a 19% reduction in fossil fuel generation in 2023, with champions like Iberdrola establishing GenAI Centres of Excellence to optimize energy production. Education benefits from cross-border collaboration and research funding through initiatives like Horizon Europe, with EdTech innovators such as Lingvist (Estonia), Lepaya (Netherlands), and Domoscio (France) transforming personalized learning through GenAI.  

Despite these advances, challenges persist. Common blockers across all sectors include data privacy and consent issues, integration difficulties with AI systems, a shortage of skilled personnel, and resistance to change from professionals wary of job displacement or skeptical of new technologies.

Opportunities for Businesses

The most significant benefits of generative AI become evident when organizations apply it to core business functions rather than isolated experiments. By integrating GenAI into foundational workflows, companies are unlocking real productivity gains. It transforms manual, time-consuming tasks into fast, data-driven processes across software development, customer service, marketing content creation, and R&D. Internally, GenAI is revolutionizing knowledge management systems, helping employees retrieve insights through natural language queries that mimic human dialogue. This not only speeds up decision-making but also drives strategic alignment.

For customer operations, GenAI-powered chatbots deliver immediate and personalized customer responses in multiple languages. They help reduce the volume of human-handled service requests by up to 50%, depending on existing automation levels. They can also resolve issues during initial contact by retrieving relevant customer data and reducing response times for human agents. Additionally, GenAI supports quality assurance by analyzing customer interactions and guiding agent performance.

Also, GenAI accelerates content ideation and drafting, promotes brand consistency, and enhances personalization across geographies and demographics. Marketing teams can use it for translating messages, tailoring campaigns, and analyzing customer behavior to refine strategy. It also helps optimize SEO content, product discovery, and search personalization, boosting ecommerce conversion rates.

In research and development, GenAI introduces efficiencies of 10–15% of total costs. Industries like pharmaceuticals and chemicals use it for generative design—creating new molecules and accelerating drug discovery. These same principles can be extended to physical products and electronics. GenAI also enhances product testing, optimizes material use, and shortens trial phases, ultimately reducing time-to-market and improving design quality.

Risks and Challenges  

While GenAI holds vast promise, its deployment comes with equally significant risks—many of which stem from the underlying architecture and training methods of large language models (LLMs).

One of the most pressing concerns with GenAI is its tendency to produce hallucinations-confident-sounding but factually incorrect or fabricated content. These errors can have serious consequences, especially in regulated industries. A well-known example is Mata v. Avianca, where an attorney cited fake legal precedents generated by ChatGPT, resulting in judicial sanctions and reputational damage. These models do not “understand” content; rather, they predict the next word or phrase based on learned patterns, making plausibility -not truth- the goal. Even when trained on accurate data, LLMs may recombine information in unexpected ways, generating misleading outputs that can erode trust and propagate falsehoods.

From a technical perspective, GenAI systems also pose cybersecurity risks, particularly when integrated into user-facing applications. Inputs -especially from public or adversarial users- can be exploited to manipulate outputs. These “adversarial prompts” can trigger unintended behaviors, such as generating offensive content, leaking confidential information, or inciting unlawful activity. Since GenAI models may use real-time user inputs for continuous learning, there is a risk that sensitive data -names, financial records, or proprietary business strategies-could inadvertently reappear in outputs visible to other users.

Businesses using GenAI must also consider the risk of data leakage through both input and output channels. For example, if internal product roadmaps or legal memos are input into a GenAI platform that lacks strict data separation policies, these could be repurposed in outputs to unrelated users. Moreover, when GenAI outputs are fed into downstream systems -such as inventory platforms, CRM tools, or marketing engines, there is the added danger of unintended consequences triggered by flawed or biased content.

EU regulations, particularly the GDPR and the upcoming AI Act, impose strict requirements on the collection, use, and cross-border transfer of personal data an essential input for many GenAI models. These constraints complicate model training, especially when relying on diverse datasets that may contain sensitive information. 1

Ethically, GenAI’s dependence on large-scale data scraping means it may inherit and amplify biases embedded in training datasets. This raises concerns about discrimination, copyright infringement, and culturally insensitive outputs. On the legal front, the EU’s AI Act and GDPR impose stringent requirements on data usage, model transparency, and explainability, areas where GenAI models often fall short. Failure to comply could result in hefty penalties and operational disruptions.

Finally, there is the human element. Across industries, employees may resist GenAI adoption due to fears of job displacement or skepticism about AI reliability. Without proper training and change management, these cultural barriers can limit GenAI’s impact and cause friction between teams. A lack of skilled personnel further hinders progress. The demand for AI engineers, data scientists, and compliance professionals outpaces supply, slowing deployment and weakening oversight. This talent shortage is compounded by resistance to change within organizations, especially from professionals concerned about job displacement or the reliability of AI outputs.

Technical integration is another challenge. Many enterprises operate on legacy systems that require costly upgrades to support AI adoption. Without seamless integration, businesses struggle to scale GenAI solutions effectively or ensure security and interoperability.

Regulatory obligations are becoming more concrete. From 2 August 2025, general-purpose AI systems in the EU must meet new governance standards. High-risk AI applications, such as those in finance, healthcare, and infrastructure, will face additional obligations by 2027, including mandatory risk assessments and transparency requirements. In response, cloud providers are expanding “sovereign cloud” services to ensure compliance with European data sovereignty laws. 2 Although these regulatory headwinds introduce complexity, EU policymakers stress that AI governance will ultimately strengthen market trust and spur home-grown innovation. The European Commission’s Digital Strategy highlights a dual goal of “excellence and trust,” aiming to bolster industrial capacity while safeguarding fundamental rights.3 Consequently, EU businesses must adopt a “compliance-by-design” approach to GenAI, embedding privacy, security, and transparency from project inception.

Forward Thinking  

Looking forward, the path to responsible GenAI adoption will require continuous investment in talent and training, closer collaboration between industry and regulators, and the development of flexible governance frameworks that can evolve with the technology. Ethical innovation must be at the core of this journey, with businesses prioritizing fairness, inclusivity, and societal impact in their AI strategies. For sectors such as automotive and clean energy, regular regulatory updates are crucial to avoid stifling innovation while maintaining public trust. In education, ongoing dialogue with stakeholders will be key to balancing innovation with ethical considerations. Ultimately, the organizations that succeed will be those that embrace both the opportunities and the responsibilities of GenAI, ensuring that technological progress is matched by robust safeguards and a commitment to transparency and accountability45

RSM is a thought leader in the field of Strategy and International Trade consulting. We provide frequent insights through training and the sharing of thought leadership, based on a detailed understanding of industry developments and practical applications gained from working closely with our customers. For more information, please contact one of our consultants.