
AI risks vs. opportunities
What Directors Think 2025 – an annual survey produced in partnership by Diligent, Corporate Board Member, and FTI Consulting – found that more than three-quarters of directors are prioritising growth opportunities this year, a sharp contrast to the cost-cutting measures that have dominated the past few years.
But it’s also evident that artificial intelligence will continue to play an important role in achieving this growth, with the survey finding that:
- 4 in 5 boards are already using AI or generative AI in some way
- 44% of boards are incorporating the technology into one or more areas, including products and services
- 27% of boards consider adopting or improving their understanding of AI to be a top priority for 2025
As AI’s influence over the corporate landscape grows, boards face an important ‘promise and peril’ dilemma, and the report reveals what directors consider to be the most significant opportunities and risk factors surrounding the technology.
Optimising efficiency and productivity is seen as a key opportunity
Directors believe the biggest opportunities presented by AI lie in streamlining internal operations and costs and in enhancing workforce productivity and satisfaction, with 42% identifying each as an area where the technology can have a significant impact.
Better data and reporting capabilities and improved customer service and support (both 38%) are also viewed as areas of high opportunity, while a third of directors (33%) consider innovation and product or service enhancements a strategic advantage.
Use cases such as increased productivity and better data are seen as more advantageous in 2025, ranking higher among directors’ priorities than in the 2024 edition of the report.
The top five AI opportunities according to directors are:
- Optimising operations and costs (42%)
- Enhanced workforce productivity and satisfaction (42%)
- Better or more data and reporting capabilities (38%)
- Improved customer service and support (38%)
- Innovation and product or service enhancements (33%)
A lack of ‘AI literacy’ is viewed as the greatest risk
The report also uncovers a number of director concerns regarding the potential risks of generative AI. Almost a third (32%) of directors identified a lack of internal knowledge and capabilities within their leadership teams as a major concern, making more education around the technology an obvious priority.
Almost three in ten (29%) cited data privacy, and the wider cybersecurity risks around it, as a significant concern, while more than a quarter (26%) believe the potential for generative AI tools to produce false information, or ‘hallucinations’, remains a threat. A further 23% feel the lack of demonstrated use cases means they cannot have full confidence in the technology.
The top five risks and challenges around AI use are considered to be:
- A lack of internal capabilities or knowledge in the leadership team (32%)
- Concerns around data privacy (29%)
- The potential for false information or ‘hallucinations’ (26%)
- A lack of demonstrated use cases (23%)
- Difficulty finding people with the right skills to help manage strategy and risk (20%)
Boards must balance AI innovation with strong governance
While directors believe AI can optimise efficiency, increase productivity, and provide stronger insight, these opportunities are weighed down by core concerns such as cybersecurity, limited trust, and gaps in knowledge, which may hinder boards from finding meaningful use cases for AI and ultimately leveraging the technology effectively.
“Boards are racing to harness AI’s potential, but they must also uphold company values and safeguard the hard-earned trust of their customers, partners, and employees,” according to Dale Waterman, Global Solution Designer Lead at Diligent. “During a time of regulatory uncertainty and ambiguity, where laws will lag behind technology, boards need to find a balance between good governance and innovation to anchor their decision-making in ethical principles that will stand the test of time when we look back in the mirror in the years ahead. AI literacy will be the foundation for that sound decision-making.”
This is further complicated by a geopolitical divide in the way AI is governed. The US appears more focused on innovation and geopolitical competitiveness, for example, while the EU’s AI Act is centred on enforceable regulation and ethics, stressing the importance of human-centric and trustworthy AI, a high level of protection for health, safety, and fundamental rights, and proactive AI literacy education among leadership teams, alongside an intention to support innovation.
“AI undoubtedly requires strong governance,” says Waterman. “The issue of competing values is not a new one for governments and the technology sector. We’ve been grappling with the need to find the right balance between the competing interests of privacy and national security for many years. Creating an environment for AI innovation while protecting timeless societal values and ensuring the ethical use of AI is, arguably, one of the defining issues of our lifetimes.”
“When I’m asked what advice I would share with a board about AI adoption and the AI governance that will be required, my suggested nugget of wisdom is always that a ‘wait-and-see’ approach is simply no longer a prudent option – because with AI, you will be left behind,” says Waterman. “AI is like a fast-moving train. When you’re standing still and watching, it feels incredibly intimidating as it roars past you. You need to jump on board. The train will still be moving at the same speed, but you will find that it feels much less overwhelming once you become part of the journey.”