From passing the bar exam to powering product recommendations, artificial intelligence (AI) is evolving at speed. While businesses are excited by AI’s potential, many are also asking difficult questions: Will it replace jobs? What are the risks? Who is being impacted behind the scenes?
The answers aren’t simple, and they’re still unfolding. But one thing is clear: the future of AI must be guided by purpose, governance and ethical intent.
From job loss to job evolution
Lately, a common narrative has been that AI will make your job redundant. However, the reality is more nuanced. While some roles will change, particularly those built on repetition or process, many others will evolve.
Right now, we’re seeing AI used to increase capacity rather than eliminate it. For businesses facing ongoing labour shortages, AI offers a way to remove mundane tasks so staff can focus on more strategic, value-adding work. For example, staff spend fewer hours formatting documents or reconciling reports, and more time advising clients or developing products.
For tech workers specifically, AI’s evolution is shaking the ground beneath their feet. Engineers, analysts, and developers are increasingly expected to understand and integrate generative AI tools into workflows, while simultaneously competing against them. Those in coding, QA, and junior-level technical roles are most vulnerable to role compression or reassignment.
As a result, there has been a recent wave of layoffs in the global tech sector, with people who helped build AI systems being let go once those systems are in place. This TechCrunch article gives a clear snapshot of the scale of layoffs in 2025, many of which cite AI adoption as the cause. This underscores the duality of AI’s impact: while it promises efficiency and innovation, it also demands a serious conversation about how we transition, re-skill and protect workers.
Despite what’s going on in the tech sector, the key question for most businesses in other industries isn't so much, ‘Will AI replace us?’ but ‘How can we use AI to empower our people and improve the customer experience we deliver?’
AI strategy starts with purpose
Before embarking on the AI journey, we encourage management to be clear on why they’re adopting it.
- Is it to improve customer experience?
- To boost internal productivity?
- To reduce operational costs?
- Or to explore new markets altogether?
Being intentional about your AI use case helps align your investment with business value, and it ensures you avoid costly detours. Without clarity, it’s easy to fall into the trap of chasing novelty over utility.
Governance is essential and urgent
AI is moving faster than regulation. That is why governance can't wait.
Boards should be thinking now about how AI fits into their broader risk and compliance frameworks. Drawing on guidance from the AICD, here’s a solid starting point:
- Establish oversight – whether through a dedicated AI committee or within an existing risk committee, governance needs a home.
- Build a policy framework – align your AI initiatives with your corporate values, risk appetite, and ESG commitments.
- Educate the organisation – governance doesn’t end with the board. Your people need to understand both the opportunity and the boundaries.
- Design controls – work with cross-functional stakeholders to create a control environment that scales with AI adoption.
- Review regularly – this isn’t a set-and-forget situation. AI is evolving rapidly. Your controls need to grow with it.
Some stakeholders, particularly unions, would like workplace agreements that require employers to consult staff before introducing AI technology, retrain them, and guarantee their job security. Such agreements would also include provisions to ensure privacy, protections over data collection, and transparency over how the technology is used.
Regardless of whether some or all of these measures are adopted, it's incumbent on employers to include employees on the AI journey and ensure they are appropriately trained to use the AI technology — all of which should be done under a strong governance framework.
The human cost of AI: Do not look away
It is easy to be amazed by what AI can do. But few stop to consider how it gets there.
Behind every trained model is a human, often a low-paid data labeller in a developing country, sifting through thousands of pieces of content to “teach” the AI what’s what. In many cases, these workers face poor conditions, unclear employment protections, and even exposure to psychologically harmful content.
What’s often overlooked in corporate AI discussions is the invisible workforce that powers it all. Data labellers are essential infrastructure, but they’re rarely seen as such. The emotional toll of reviewing violent, disturbing, or offensive content day in and day out is not just a distant ethical issue; it’s a tech supply chain risk.
For tech companies and users alike, it’s time to treat labelling conditions as a core ESG and brand reputation concern. Demand transparency from your vendors. Ask where and how datasets are being cleaned and labelled. If your AI is built on human effort, those humans deserve safety, dignity and fair compensation.
If your business is developing or buying AI, ask questions about the supply chain. Don’t let the drive for speed and scale compromise your commitment to modern slavery laws or ESG ethics.
Energy use matters, too
AI models, especially large language models, require serious computing power. That means more data centres, more cooling systems, and more energy usage.
For businesses with a sustainability mandate, especially those reporting under evolving ESG frameworks, this should be a real consideration. If AI adoption increases your carbon footprint, how will you account for it? And what innovations (like solar-powered server facilities) could offset the impact?
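To make that accounting question concrete, here is a minimal, illustrative sketch of how a back-of-envelope carbon estimate for an AI workload might be put together. Every figure in it is an assumption chosen for illustration, not a measurement; a real assessment would use your actual GPU hours, your provider’s power usage effectiveness (PUE) and your local grid emissions factor.

```python
# Illustrative back-of-envelope estimate; all figures below are assumptions,
# not measurements. Substitute your own usage data and local grid factors.
gpu_hours_per_month = 2_000      # assumed monthly AI workload (GPU-hours)
avg_power_kw = 0.7               # assumed average draw per GPU (kW)
pue = 1.4                        # assumed data-centre power usage effectiveness
grid_kg_co2e_per_kwh = 0.68      # assumed grid emissions factor (kg CO2e/kWh)

# Energy consumed, scaled up by the data centre's overhead (cooling, etc.)
energy_kwh = gpu_hours_per_month * avg_power_kw * pue

# Convert to tonnes of CO2-equivalent using the assumed grid factor
emissions_tonnes = energy_kwh * grid_kg_co2e_per_kwh / 1_000

print(f"Estimated energy use: {energy_kwh:,.0f} kWh/month")
print(f"Estimated emissions:  {emissions_tonnes:,.1f} t CO2e/month")
```

Even a rough estimate like this gives sustainability teams a starting figure to track, challenge with vendors, and refine as better usage data becomes available.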
Bias in, bias out: the DEI risk (and opportunity)
AI’s intersection with diversity and inclusion is a double-edged sword. Used well, AI can reduce unconscious bias in hiring, performance reviews, and promotions by focusing on objective data. Used poorly, it can amplify existing inequalities by replicating the biased datasets it was trained on.
Scrutinise the tools you use. Understand how they’re built, what data they’ve been trained on, and who’s validating their outputs. AI is only as fair as the framework behind it.
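As one illustration of what scrutinising outputs can look like in practice, the sketch below uses hypothetical shortlisting records to compute selection rates by group and applies the common “four-fifths” rule of thumb to flag potential adverse impact. The data and group names are invented for the example; this is a simplification, not a substitute for a proper fairness, privacy and legal review of any tool you deploy.

```python
# Illustrative only: a minimal fairness check on hypothetical hiring-model outputs.
# "decisions" is invented sample data; a real audit would use your own tool's outputs.
from collections import defaultdict

decisions = [
    # (applicant_group, was_shortlisted) - hypothetical records
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
shortlisted = defaultdict(int)
for group, selected in decisions:
    totals[group] += 1
    if selected:
        shortlisted[group] += 1

# Selection rate per group
rates = {g: shortlisted[g] / totals[g] for g in totals}
print("Selection rates:", rates)

# "Four-fifths" rule of thumb: flag any group selected at less than 80% of the best rate
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Potential adverse impact: {group} selected at {rate:.0%} vs best {best:.0%}")
```

A check like this won’t prove a tool is fair, but it makes disparities visible early and gives you a concrete question to put back to the vendor.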
What should leaders do next?
In conjunction with ensuring AI governance is in place, focus on:
- Build internal fluency – upskill your team not just on AI tools, but on AI ethics, safety, and governance.
- Audit your AI supply chain – from data labelling to training datasets, know who’s doing the work, how they’re treated, and what protections are in place.
- Protect the junior pipeline – don’t cut entry-level roles too quickly. These are future leaders, and AI can’t replace learning by doing.
- Embed purpose into your tech strategy – ask not just “Can we build it?” but “Should we?”
And when it comes to training, the latest IMD World Competitiveness Report ranks Australia just 49th globally in digital and technological skills. This is a clear weakness. Investment in digital training is vital if Australia is to foster meaningful AI uptake across industries.
Final thoughts
Be clear on why you’re using AI.
Whether you’re aiming to free up your team’s time, enhance a customer experience, or transform an entire business model, knowing your purpose helps define your strategy and avoid blind spots.
For tech companies, in particular, the AI era is a defining moment not just for what you build, but for how you build it, and who benefits.
AI is here to stay. But how it’s used and what it’s used for is still up to us.
Should you have any questions, please feel free to reach out to Mathavan Parameswaran and the team at RSM Australia.