Unpacking data, governance and AI
How do you build trust in a digital future?
Artificial intelligence and data governance are on every boardroom agenda. As AI continues to transform decision-making and operations at an incredible pace, the challenge has shifted from adoption and integration to governance, trust and ethics.
Join Andrew Sykes as he hosts top experts from RSM Australia, Srdjan Dragutinovic and Gerard Sayers, to unpack how strong data governance can accelerate your AI initiatives while safeguarding your organisation’s reputation.
This episode is perfect for senior business leaders who want to ensure the data behind their decisions is governed properly. Good governance means compliance, reduced risk, and turning data into a trusted asset that drives ethical, accurate, and strategic decisions.
Key takeaways
- How AI is shifting decision-making at the boardroom level—and what leaders must do to keep pace
- The critical role of guardrails, standards, and human oversight in preventing costly hallucinations and bias
- Practical steps to integrate AI governance into your strategic planning, from ownership to accountability
- The challenges posed by unstructured and shadow data, and how to turn them into opportunities
- How existing laws and emerging regulations, including Australia’s stance, shape your AI compliance journey
- Key frameworks and best practices to make governance actionable in a rapidly evolving landscape
Transcript
Andrew Sykes (00:00)
Artificial intelligence and data governance is a topic on every boardroom agenda. AI is transforming decision-making and operations at an incredible pace. But with its rapid advancement comes the need for robust safety and ethical standards. For senior leaders, it's not just about adopting AI, it's about ensuring the data behind these decisions is governed properly. Good governance means compliance, reduced risks and turning data into a trusted asset that drives ethical, accurate and strategic decisions.
Hello, I'm Andrew Sykes. I've been a business accountant for over 30 years, and I talk about business, money and the economy to help you get ahead. Welcome to talkBIG. In this episode, we'll unpack what data governance really means, why it's critical for AI success, and practical steps business leaders can take to stay ahead. We'll also look at emerging trends and answer the big questions, like how do you balance innovation with safety and trust?
Joining me today are two experts in this area from RSM Australia, Srdjan Dragutinovic and Gerard Sayers.
Srdjan is a partner in the Data Analytics division at RSM Australia with over 20 years global experience in advanced analytics to support strategic, operational and decision making. Srdjan assists clients in becoming data-driven and insight-led, linking insights to outcomes and driving business value through the application of data and analytics.
Gerard is a Senior Manager in Data and Analytics and AI at RSM Australia. He leads RSM's responsible AI initiatives, helping clients adopt AI safety standards and build trust in emerging technologies. He specialises in developing proof of concepts and guiding organisations through responsible AI implementation, ensuring ethical, transparent and effective use of AI. All very topical at the moment. How are you, Gerard and Srdjan?
Srdjan Dragutinovic (02:00)
Very well, thank you, Andrew. It's a pleasure to be here and talk to you today. Looking forward to the conversation.
Andrew Sykes (02:04)
Yeah, it's really a very interesting topic. You can't get through a day anymore without talking about AI. So, AI is no longer a futuristic concept. It's here and it's reshaping how business operates. And that's why strong governance is essential to ensure it's effective, ethical, and aligned with organisational goals.
Andrew Sykes (02:28)
How does AI change the way leaders make decisions at board level?
Srdjan Dragutinovic (02:32)
Well, I think a lot of the boards are really grappling with this balancing act of governance and productivity uplift. So it's questions like, how much autonomy do you give AI and how do you govern that without really slowing down too much? A recent McKinsey report really highlighted this gap where probably two-thirds of organisations are still really playing in that pilot phase.
This raises that discussion at board level of how you actually operationalise these initiatives in order to reap the benefits and move beyond that pilot phase. One of the other things we're seeing is this physical reality that's hitting boards too. We've watched how the global hyperscalers, the Googles and the Metas, are cornering the market in GPUs and memory, given the vast computing that's required to build these generative AI models.
That then leads onto another interesting fact, which is that by 2030, Australian data centres are projected to consume about 6% of our national energy grid. So if you're on the board of a company, you're not just thinking about your AI strategy and the practical implications for your business. You're looking at how you balance these massive computing needs against your net zero commitments. So it becomes an energy strategy in addition to your AI strategy.
Andrew Sykes (03:49)
So, on what we're talking about there, I'll ask you: why is governance not just a compliance exercise, and how is it a strategic enabler?
Srdjan Dragutinovic (03:57)
Well, I think governance really needs to wrap around AI, because it's the guardrails that allow you to accelerate your AI initiatives. Without that governance, it's the Wild West and you're going to run into a lot of problems, as a lot of organisations have that jumped on the AI bandwagon without governance in place. Look at the likes of Deloitte, who had a recent faux pas and obviously didn't have that governance in place, with human oversight, a human in the loop if you like, which is one of the 11 guardrails recommended in Australia. They published a report for the government which included references to legal case studies that were fictional. The AI had obviously hallucinated.
So really having that governance in place is hugely important if you want to reap the benefits of AI.
Gerard Sayers (04:58)
Yeah, I might add as well: it really comes back to reputational risk, Andrew. Governance provides you a safety net. I won't say it's foolproof, but it provides you some protections, particularly against reputational risk, which is what we're seeing from the likes of Deloitte, or Woolworths even last week as well. There are plenty of examples where we've seen these sorts of issues coming through, and governance helps to bring trust to these AI processes, which is a really important aspect of it and should be part of any strategy.
Andrew Sykes (05:35)
Yeah, it is a super important part of AI, isn't it? It's very easy to look at AI and think it's almost a bit of a miracle product that will do a lot of our work for us. But it's not always right. It's not always accurate. So, is it fair to say that a good corporate governance regime over the top is how we stop becoming an example in the media?
Gerard Sayers (05:55)
Yeah, I think so. And there are a few pieces there in terms of important aspects of that strategy, including making sure you've got someone who is accountable for AI across the organisation. And then also thinking about the guardrails. There are voluntary standards, for example, and there are related ISO standards as well. Used within the organisational context of AI and how you're going to use it, they're there to protect you from some of these risks.
Srdjan Dragutinovic (06:25)
Just to emphasise the point I made earlier, it really allows you to go at full speed with those initiatives, because you've got confidence that the guardrails are in place to stop some of the things we've been seeing from happening in your organisation.
Andrew Sykes (06:40)
That is the argument of risk versus opportunity. While AI opens up incredible opportunities, it also introduces new risks: ethical, legal, reputational. And then we get the question of how leaders strike the right balance between innovation and their responsibility in all those areas. AI, as we're starting to see, carries a whole bunch of ethical risks, biases in our algorithms, fairness in outcomes, and also legal and compliance risks. The examples you raised before showed the reputational risk. And then we're trying to balance the speed of innovation with responsible business practices. So, taking all of those considerations into account, what are the top risks executives should be aware of when deploying AI?
Srdjan Dragutinovic (07:37)
I think you mentioned a few there, Andrew. There's obviously hallucinations that we just talked about with the Deloitte example. And I think we'd be here all day if we started talking about all of the examples that we're continually seeing. You've got the Robodebt example where there was obviously a lack of human oversight in how the models were issuing debt and had done so illegally. You've got models that are built on biased data.
So there are examples in the HR and recruitment space where models were unintentionally fed biased data, which resulted in the AI training on that data and implementing biased decisions about who they were and weren't going to recruit. So that's an issue. And it may not be intentional. It's that the nature of AI makes it difficult to spot those biases, and without the guardrails and processes in place to test the AI models, it's very difficult to pick up. Early in my career, I had it drummed into me that if you can't explain a model, you shouldn't be using it. You should be able to explain to a client, to the business, what factors are driving that model and why the model is coming to the outcome it is. That's still important 30 years later. It's just more difficult now with AI to unpick what those drivers are. I remember when neural networks were introduced in the late 90s, they were very difficult to unpick compared to the traditional machine learning techniques we used before. So I think a lot of those things that were around 30 years ago are still relevant today.
Andrew Sykes (09:25)
Yeah, and if we reflect on the different business environment with social media, you know, when we all started our careers, you could almost fail in private a little bit. But now innovation is certainly very fast-paced and very public, and social media means that it can blow up very quickly.
Andrew Sykes (09:47)
So as a leader, how could you foster innovation through the use of AI in that kind of environment without compromising ethics or compliance?
Gerard Sayers (09:58)
I think if you look at it from, as I mentioned, the voluntary guardrails and also the standards, they give you an indication, or set you on the right path, if you like, in terms of what you might need to consider from an ethics perspective. The voluntary AI guardrails are aligned with the AI ethics principles, and I guess some of the key pieces of that are around human rights, for example, and around bias.
Whether it's discrimination or impacting individuals and their livelihoods, you don't want to have these adverse outcomes for people, like what we saw with Robodebt. Not that I think Robodebt was based on these sorts of models. It wasn't. It was something more fundamental than that. But we wouldn't want to see those sorts of outcomes eventuate again through successive models being developed.
But again, those frameworks provide you with the ethics and compliance framework for where you need to head.
Srdjan Dragutinovic (11:03)
I was just going to add to Gerard's comment there that building these guardrails into workflows is a good way to make sure they're followed, so that they're not voluntary in the sense that it's up to a person whether they follow the process or not. Ensuring that bias is checked for in models in an automated way, and that it gets flagged to a human to review, for example, is a good way to ensure that bias isn't creeping in.
Andrew Sykes (11:31)
So you both mentioned voluntary guidelines there. Are there any relevant regulations or laws that we're mandated to stick to in this area?
Gerard Sayers (11:45)
Yeah, I might jump in on this one. The approach from the government has been to rely on a lot of existing laws, such as privacy laws, data protection and consumer laws. These laws already exist, and the idea is not to duplicate or supersede them with specific laws for AI. There was an approach considered to introduce mandatory guardrails, so those voluntary guardrails would have become required in certain high-risk contexts. But the direction of the government has been not to hand those down and not to regulate, so as not to stifle innovation. It does set a pretty good expectation, though, in terms of where we might head in future from a regulatory point of view and what we might expect to be regulated.
And we can see that, for example, in the EU, where there is regulation of high-risk uses of AI. Again, it gives you a good lens: the areas that have been regulated overseas are where they might look to regulate in Australia as well. But at the moment, there isn't AI regulation as such. There are a few requirements on some government departments to have a responsible person in place, but aside from that, there aren't any particular requirements around the use of AI.
Srdjan Dragutinovic (13:16)
That move by the Australian government really emphasises the point we talked about earlier, which is that balancing act between stifling innovation and managing risk. So you can see the government is grappling with the same question that boardrooms are.
Andrew Sykes (13:32)
Given the importance of AI, it wouldn't be surprising to see a whole suite of regulation come down from government over time. It's such a changing area. And it's also changing how we use data. AI doesn't just use data, it depends on it. And that dependency can create new pressures on how data is governed.
Andrew Sykes (13:59)
So if we just unpack the unique challenges that AI presents, what are they? What are some of the new challenges that AI is introducing to traditional data governance models?
Srdjan Dragutinovic (14:12)
Well, I think there's certainly been an uptick in interest in getting those foundational layers right, if you like: data governance, data quality. But what the explosion in AI has done is really increase the amount of unstructured data that can now be used. Previously, it was very difficult to use handwritten documents, PDFs, images and videos. It could be done, but it was time-consuming and difficult. That's now been democratised, and there's just been an explosion in the availability and use of that data. Traditional data governance probably wasn't designed with that in mind, because it wasn't really a thing when data governance frameworks were designed. So that's one of the areas where it needs to be adapted.
So when you're looking at AI governance, that's a big factor. You've also got shadow data, which is a big threat as well, and that's linked to that unstructured information. All of this information sitting around that really wasn't being used, organisations have always had this data. It's just a lot more accessible now, to people who want to use it in good ways, but also in negative ways. So that creates additional risk.
Gerard Sayers (15:29)
It's also interesting because it's more accessible in some ways, but less accessible in others. As you said, with shadow data, it might be sitting on your individual profile or your individual machine, but it's not shared across the organisation. And there's varying quality and consistency between it, given those unstructured documents can be in all sorts of forms. There can be drafts and duplicates and all sorts of things. It's not well-curated data we're talking about here. So if you want to get use or power out of the AI, you want to curate that data into a central repository, into its finalised state. If you're thinking about how to use it for, say, proposals in an organisational context like ours, you might say you just want to see the final version of those proposals. We don't want to know about all the drafts. We don't want them creeping into the inputs of the model. So you've got to bring that information together. And that's where it's different to what we would have done with data governance before, where data sits in transaction systems, in ERP systems, in our systems of record. There is data across the organisation, and as Srdjan said, it can be in physical documents as well. We might have them in physical archives. There might be a tremendous amount of data in there, value in that data that's there historically.
We haven't used it. We've just always put it in the back of the filing cabinet and never accessed it. But if we could draw that out, there could be tremendous value in certain business domains. Take occupational health and safety data, for example. Incidents that happened 10 years ago, those risks are just as relevant today as they were then, or could be, for a mining company, for example. So if you can access that information, bring it forward and use it for context within your AI system, it could bring a lot of value to your business.
Andrew Sykes (17:25)
So, Gerard, when you're talking there about ensuring the data quality and integrity that AI systems are learning and evolving on, is part of that ensuring that you're not just bringing all sorts of data in from outside of your organisation?
Gerard Sayers (17:40)
It's as much within your organisation as outside it. You want the AI to be able to draw on good quality information, whether it's external or internal. It doesn't particularly matter, but it needs to have relevant context to provide you with a good answer.
Srdjan Dragutinovic (17:55)
You also need data lineage, to that point, Andrew. So knowing what the lineage of the data is and how models are acting on that data. You can't satisfy the ethics principles in the guidelines without knowing the lineage of that data, because you don't know what the models are making decisions on. And you've also now got agentic AI, which is accessing data and bypassing traditional governance processes that were very much centred around how humans access data. You might have an agent going around the organisation pulling data from here, there and everywhere. So traditional governance frameworks may not be up to the task of tracking and picking that up.
Andrew Sykes (18:44)
Yeah, so there's some of the challenges and I think it'd be good to talk some solutions and discuss what does good governance in AI look like in practice and how we can make it actionable rather than theoretical. So what are some of those essential pillars of a strong data governance framework?
Srdjan Dragutinovic (19:04)
I think, as Gerard said earlier, it's having ownership, so having an AI owner. Traditionally in data governance you might have a data governance council or data stewards within the organisation, so having similar counterparts within the AI sphere that work with those people in data governance is a really important step. That creates accountability for AI and data governance within the organisation. Without it, people point fingers and nobody has ownership. Another important point is literacy, both of the data and the data quality, but also around the AI use cases within the organisation.
Andrew Sykes (19:50)
Just on that point, it's very much about making sure there is a key point of responsibility for the AI and the data within an organisation. You can't just deploy it and let it run on its own.
Srdjan Dragutinovic (20:02)
Yeah, that's right. It's an important point.
Gerard Sayers (20:05)
One of the guardrails we mentioned before is human oversight. And the important aspect of human oversight is having a meaningful interaction with the system: being able to interpret results or outputs and being able to meaningfully act on that information. And having that lineage we talked about, being able to trace it back and ask: is this of good quality? Can I rely on the outputs being produced here? That enables that meaningful oversight as much as anything.
Gerard Sayers (20:33)
So it's a very important aspect of it. The other one I wanted to mention was tying governance back to strategy: making sure the governance you put in place aligns with the strategic outcomes you want to achieve. You need to understand that AI is not the outcome. AI is an enabler. What is it going to help you achieve? Is it going to help you achieve more revenue?
Launch new products into new markets? Are you going to work with your customers differently and have better conversations with them? Are you going to reduce costs in your call centres or in your workflows? Where do you see the opportunities within your organisation, and how do they align with your strategic initiatives? And then think about governance from that perspective: how does governance support us in achieving those outcomes?
Andrew Sykes (21:25)
That sounds very much to me like you work out your strategy and then implement your AI, and governance is key to keeping control of it and tying it back to the strategy to make sure it's being implemented well. Is that fairly correct?
Gerard Sayers (21:26)
I think that's a really important aspect to it.
You might even say your strategy is your accelerator and your compliance is the brake. But the brakes are there to keep you safe and to allow you to stay on the road, right? So they're just as important as each other.
Srdjan Dragutinovic (21:56)
And really, AI is no different to any other enabler, as Gerard said. Analytics and AI have been around for a long while and the process still hasn't changed. It's about understanding what you want to achieve as a business, what the highest-value use cases are, how you prioritise those, and then working back from there. Okay, is it AI that's going to help us achieve that? Is it BI reporting? Is it something else?
So, AI is purely the way to achieve those strategic objectives that you set at an organisational level.
Andrew Sykes (22:28)
Yeah, and governance and compliance have always been a challenge. Have you got some advice on how we can make governance practical and not just a policy document?
Srdjan Dragutinovic (22:41)
Just going back to the point I made earlier about having ownership: at least in data governance, the most challenging aspect is not actually building the framework or creating the documents, it's having it work in practice. So there is a change management piece. It's assigning responsibilities across the organisation around ownership of data and the governance. It's about actually living the data governance and evolving it over time. It's not a static thing. It's a continually evolving process.
Andrew Sykes (23:15)
Which is going to keep changing with the pace of AI, and as new AI is implemented. If we went back before the previous wave of AI, we very much had control of the technology and systems used in our organisations. Now, with mobile devices and AI on them, we don't always have that control. Does that impact the governance needed?
Srdjan Dragutinovic (23:43)
I'll answer that one. Look, I think devices are just one aspect of what our governance might look like. It's something that needs to be built in as AI becomes more accessible, not just on computers and handheld devices but also on other devices around your organisation or home.
Governance is going to have to cover all of those technologies that are around now and in the future.
Andrew Sykes (24:16)
It's something that's going to need to be dealt with. And I think with a lot of this governance area, if we don't deal with it and something goes wrong, there are going to be a lot of questions as to why businesses or organisations didn't. If we're not getting in front of it and putting the frameworks in now, there could be some significant consequences. So if each of you had just one piece of advice for organisations thinking about AI adoption, what would it be?
Srdjan Dragutinovic (24:47)
The really important thing to get across is that it starts with understanding your organisational strategy and what you're trying to achieve, and working through systematically where AI can help you achieve those strategic objectives. Then working backwards: creating a small pilot to test, and then scaling where you find the value is being delivered.
Gerard Sayers (25:09)
Yeah, I think similarly, the governance needs to align with the strategy. You're not going to want to put in additional governance layers where you don't have the risk, and the risk is really associated with taking advantage of the opportunity. So making sure there's alignment between the organisational objectives, the strategy and the governance is key to it. And then thinking about those assets within the organisation which you perhaps haven't thought about before. This might open the door to tapping into assets you haven't considered previously.
Andrew Sykes (25:46)
Yeah, thank you for those and some really interesting points raised today. It's very easy to get excited about implementing new systems, new AI, but we certainly think that you need to include the governance and compliance aspect of it. Srdjan and Gerard, thank you very much for your insights today. I hope you've enjoyed being on our podcast. Any final words from either of you?
Gerard Sayers (26:14)
Thanks for having us, Andrew. Really appreciate it.
Srdjan Dragutinovic (26:18)
Thanks, Andrew.
I appreciate it. It's been a pleasure to talk to you.
Andrew Sykes (26:20)
Terrific, thank you. I invite all our listeners to tune into the next episode of talkBIG. Subscribe on your favourite podcast platform. And thank you for joining us. If you found this episode helpful, please subscribe and leave a review. Thank you, and join me on the next episode of talkBIG.