Are your employees lying to you?

Workers admit they conceal their use of AI at work, often presenting AI-generated output as their own

A sweeping new global study has revealed a troubling workplace trend: a significant number of employees are using artificial intelligence tools behind their bosses’ backs — and pretending the work is their own.

According to the 2025 Trust, Attitudes and Use of Artificial Intelligence report, conducted by the University of Melbourne in partnership with KPMG, more than half of surveyed workers admit they conceal their use of AI at work, often presenting AI-generated output as their own. The report, based on responses from over 48,000 people across 47 countries, suggests this behaviour is part of a broader pattern of complacent, inappropriate and sometimes risky use of generative AI tools in the workplace.

The findings paint a picture of a rapidly transforming workforce where AI is not only ubiquitous — with nearly 60 per cent of employees using it regularly — but also quietly shaping day-to-day operations in ways that managers may not fully grasp.

“The responsible use and governance of AI is not keeping pace with its adoption,” the report warns, highlighting an urgent need for organisational safeguards, transparency, and better training.

Concealing the circuitry

Employees’ reluctance to disclose AI use is not simply a matter of oversight. The study found that over half of those surveyed intentionally avoid revealing their AI use and knowingly pass off AI-generated work as their own. Many are also relying on these tools without properly evaluating their outputs — a practice that has already led to errors in professional settings.

One in five employees reported reduced communication and collaboration due to AI, suggesting that machines are, in some cases, replacing human interaction in workplace problem-solving. Alarmingly, nearly half confessed to using AI tools in ways that contravene their organisation’s policies, including uploading sensitive company information such as financial data or customer records to public AI platforms.

‘Use it or fall behind’

Driving this clandestine embrace of AI is a potent cocktail of fear and accessibility. Many employees, the report notes, feel pressured to use AI — or risk being left behind. With tools like ChatGPT and other generative platforms readily available and intuitive to use, workers are increasingly bypassing official channels, policies, and guidance.

“Three in five workers say they’ve witnessed inappropriate AI use by colleagues,” the report found, “yet only two in five say their workplace even has a policy on generative AI”.

A knowledge gap with risky consequences

The lack of transparency is compounded by a deep gap in AI literacy. Although AI use is widespread, most employees have received no formal training. In advanced economies like Australia, only one in three report receiving any AI education, and fewer than half feel confident in their ability to understand when and how AI is used.

This disconnect between use and understanding is contributing to what researchers describe as “complacent and inappropriate” engagement with AI. Employees trust these tools enough to depend on them — often without realising their limitations — but don’t trust the systems around them enough to be open about their use.

The risk to trust — and to business

Experts warn that these behaviours could erode trust not only in AI systems but also within organisations themselves.

“Organisations are reaping the benefits of AI,” the study concludes, “but many are turning a blind eye to how it’s really being used”.

Without clear guidance, oversight, and a workplace culture that fosters open dialogue about AI use, companies may be exposing themselves to reputational risk, data breaches, and flawed decision-making.

The report recommends urgent investment in AI governance, including employee training, organisational policies, and mechanisms to promote transparency and accountability.

In the meantime, as one respondent put it bluntly: “If the bot does it faster, better, and nobody knows — why wouldn’t I use it?”