Only around a third of European businesses have a formal, comprehensive AI policy in place amid surging generative AI use among professionals


  • AI use is exploding, but most European companies are still operating without clear rules or policies
  • Organizations celebrate productivity gains while ignoring rising security threats from deepfakes and AI misuse
  • Employees use generative AI daily, but few know when, where, or how they should

As generative AI gains traction across Europe’s workplaces, many organizations are embracing its capabilities without establishing formal policies to guide its use.

According to ISACA, 83% of IT and business professionals believe AI is already being used by staff within their organizations, but only 31% report the presence of a comprehensive internal AI policy.

The use of AI in the workplace brings clear benefits: 56% of respondents say AI has already improved productivity, 71% cite efficiency gains and time savings, and 62% are optimistic that AI will further enhance their organizations over the next year.

Productivity gains without structure are a ticking time bomb

However, AI applications are not universally positive, and whatever perceived gains they bring come with caveats.

“The UK Government has made it clear through its AI Opportunities Action Plan that responsible AI adoption is a national priority,” says Chris Dimitriadis, ISACA’s Chief Global Strategy Officer.

“AI threats are evolving fast, from deepfakes to phishing, and without adequate training, investment and internal policy, businesses will struggle to keep up. Bridging this risk-action gap is essential if the UK is to lead with innovation and digital trust.”

This dissonance between enthusiasm and governance poses notable challenges.

Concerns about AI misuse run high: 64% of respondents are extremely or very concerned about generative AI being turned against them.

However, only 18% of organizations are investing in tools to detect deepfakes, despite 71% anticipating their proliferation in the near future.

These figures reflect a clear risk-action gap, where awareness of threats is not translating into meaningful protective measures.

The situation is further complicated by a lack of role-specific guidance. Without it, employees are left to determine when and how to use AI, which increases the risk of unsafe or inappropriate applications.

Dimitriadis adds: “Without guidance, rules or training in place on the extent to which AI can be used at work, employees might continue to use it in the wrong context or in an unsafe way. Equally, they might not be able to spot misinformation or deepfakes as easily as they might if they were equipped with the right knowledge and tools.”

This absence of structure is not only a security risk but also a missed opportunity for professional development.

More than two in five respondents (42%) believe they need to improve their AI knowledge within six months to remain competitive in their roles.

This marks an increase of eight percentage points from the previous year and reflects a growing recognition that skills development is critical.

Within two years, 89% expect to need upskilling in AI, underscoring the urgency of formal training.

That said, companies that want the best AI tools, including the best LLM for coding and the best AI writers, must also account for the responsibilities that come with them.
