Why responsible AI is your next imperative - A podcast recap



A friend of mine here at Hays, Shaun Cheatham, our Chief Relationship Officer, hosts a monthly podcast titled “How Did You Get That Job?” On the show, Shaun often speaks with working professionals, delving into their career stories. This episode featured Antony and Claire Roberts, co-founders of Full Fathom Five.

After listening to this recent episode, I wanted to share my thoughts on some AI concerns in the tech industry that deserve more discussion. In particular, AI is changing the way businesses work, but if it's not used responsibly, it can create big problems. From avoiding costly mistakes to building trust with customers and staying ahead of the competition, ethics in AI aren't just a nice-to-have; they are what keeps AI from becoming a liability. This article breaks down why ethical AI matters now more than ever and what smart companies are doing to get it right.

Read below for my take on AI ethics, or listen to the episode for the full conversation.

Ethics surrounding AI

As AI continues to embed itself in the workplace, organizations should be wary. As Claire puts it on the episode:

“For a business, from a transformation perspective, if they apply the old blueprint of how to bring technology transformation into the business, that’s probably where they start to trip over themselves. The big factor is control… AI is fast—not only is it fast, but we don’t as business leaders have full control.”

Why ethics matter in AI more than ever

AI isn’t just about automating tasks to improve productivity; it’s increasingly shaping how companies make decisions. That’s why ethical AI design and strong governance have to be at the forefront of any company’s AI strategy. Far from being cookie-cutter compliance, these practices are essential for:

  • Mitigating legal and reputational risk: U.S. leaders recognize that bias, privacy breaches, and non-transparent AI applications can trigger lawsuits and damage client trust; 56% have paused generative AI projects until they had ethical guardrails in place.
  • Earning stakeholder trust: Companies such as Scotiabank and Unilever are leading the way by setting up ethics boards and making approval processes transparent. These steps aren’t just about compliance; they build confidence among stakeholders that AI decisions are fair and accountable.
  • Seizing competitive advantage: The most advanced firms are embedding AI ethics into every function, including leadership, strategy, and culture. IMD research shows that ethics-ready companies top the AI maturity index.
  • Capitalizing on growth: Organizations allocating at least 5% of their budget to AI governance have reported higher rates of positive returns in areas such as cybersecurity, innovation, customer experience, and efficiency.
  • Avoiding ethical missteps: When trust fails, costs can escalate and customer confidence can evaporate, as Air Canada learned from its legal headaches over misleading chatbot responses.

In Canada, leaders also recognize the stakes. Firms face evolving risks around bias, autonomy, and transparency, and federal guidance encourages proactive governance. However, according to a PR Newswire article from July 2025, only 25% of companies have proper governance frameworks in place to manage their AI, leaving the rest exposed to unnecessary risk.

Who’s paying attention?

The Government of Canada is developing guidelines to ensure AI is used responsibly in the public sector, but private companies are facing the same challenge. Whether you’re in FinTech, autonomous systems, e-commerce, or cybersecurity, finding people who understand AI governance is still hard. These roles need a mix of tech know-how, legal understanding, and ethical judgment—and that talent is in short supply.

What should decision makers do?

I leave you with another point from Claire:

“There are lots of ways that these tools can help your organization, but they won’t all be worth the effort. Identify measurable business challenges and explore whether AI can support.”

The key is having the right people and a clear plan. With strong governance and adaptable talent, businesses can use AI confidently and responsibly.

That’s where Hays comes in. From identifying organizational challenges to connecting companies with top tech talent, Hays can help you navigate today’s rapidly changing tech landscape with confidence. Contact us today.


About this author

Jamie Dunne
Director, Hays Technology

Jamie is an award-winning sales professional at Hays, specializing in cost-effective staffing solutions across hard-to-find tech skillsets including Cyber Security, Software Development, Networking, AI, and Data Analytics. With deep expertise in both permanent and contract hiring, Jamie helps organizations build high-performing technology teams tailored to their evolving needs. Passionate about delivering exceptional service and leveraging Hays’ exclusive global partnership with Stack Overflow, Jamie connects clients to top-tier talent.
