We are often asked if we are concerned about our services being replaced by Artificial Intelligence (AI). AI is rapidly changing the way everyone works. For HR, it offers huge potential benefits: streamlining administration, generating documents in seconds, and supporting day-to-day efficiency across the employee lifecycle. Used well, it can save time, reduce errors, and free up leaders to focus on strategy and people. But like any tool, AI comes with its own set of challenges — and the way it’s being used in workplaces right now highlights both the promise and the pitfalls.

How AI is Being Used in HR

AI tools are appearing across almost every aspect of HR and people management. We are seeing AI used for many basic HR operations, including drafting position descriptions, onboarding documents, policies, meeting minutes, and performance review templates. Some are even using AI to generate advice on workplace rights.

Recruitment is another key area, with AI now being used to assist with shortlisting candidates. Increasingly, practitioners are also using AI to confirm or cross-check advice.

Employees, too, are turning to AI to seek guidance about their entitlements or workplace issues.

On the surface, this looks like efficiency. The problem is what comes next.

Where Things Can Go Wrong

While these tools can save time and provide useful starting points, the real risk lies in not knowing whether the output is correct, compliant, or complete. Without a solid understanding of employment law and workplace obligations, businesses can easily end up relying on documents or advice that look professional but carry significant risk.

AI works on the principle of “garbage in, garbage out”: the quality of the output depends entirely on the quality of the input and the individual’s ability to interpret the output. The risk is that we often don’t know what we don’t know. If you don’t frame the right question or include the right detail, the response might look impressive but potentially be incomplete, inaccurate, or even misleading.

Position Descriptions (PDs)

AI can produce a PD that looks polished, but it is only as good as the instructions provided. If you don’t specifically direct it to align with the correct Modern Award and classification, you risk including duties above the employee’s classification, or duties that are not consistent with the Award at all. This has the potential to trigger underpayment claims and disputes over entitlements.

Only last week, a PD came across my desk that had clearly been created by AI for a level 3 position; however, some of the duties it listed sit at level 5 under the Award. If the employer had implemented this PD, they could have faced an underpayment claim, as employees must be paid according to the classification that matches the duties they actually perform, not the title you give them.

There is also the potential for damage to trust, morale, and retention if employees feel exploited or undervalued.

Employment Contracts

AI can generate template contracts quickly, but they often:

  • Omit critical clauses (like confidentiality, restraints, or intellectual property protections).

  • Include clauses that are legally unenforceable under Australian law.

  • Fail to reflect the correct Award, enterprise agreement, or NES entitlements.

The result? A contract that looks professional but leaves your business exposed to risk.

A couple of months ago, we reviewed an employment contract for a client that referred to “vacation leave” and several other American entitlements with no relevance in Australia. The employer had no idea the contract wasn’t written for Australian workplaces. Fortunately, we were able to act swiftly, provide the client with a compliant contract, and re-issue contracts to all of their current employees.

Workplace Advice

Employers and employees alike are using AI to “fact-check” workplace rights. The problem is:

  • Advice can be generic or incomplete (e.g. saying casuals don’t get leave without mentioning conversion rights or casual loadings).

  • Responses may be based on out-of-date information, as workplace laws evolve quickly.

  • Acting on incomplete or inaccurate advice can easily lead to a Fair Work claim or other breaches.

This is probably the area where we are seeing the most risk. AI output is only as good as the questions asked and your ability to interpret the responses. Whilst AI can provide information, it can’t replace years of practical experience or the judgment needed to guide you towards the best course of action.

In a decision handed down last year by the Fair Work Commission, an employer used ChatGPT to draft correspondence intended to confirm an employee’s abandonment of employment. The document produced, however, amounted in substance to a termination letter. The Commission consequently found that the employer could not establish that the employment relationship had not been terminated, thereby allowing the employee’s general protections claim to proceed.

This outcome illustrates the inherent limitations of AI-generated content in legal and workplace contexts. It reinforces the need for employers to exercise independent judgment and, where appropriate, obtain legal advice when relying on AI tools.

Other Considerations in the Workplace

Beyond time savings, organisations must consider a range of important issues when introducing AI into the workplace.

  • Privacy: Uploading employee documents or client information to AI platforms can create significant data protection risks.

  • Copyright: Material generated or used by AI may infringe on intellectual property rights.

  • Workforce impact: AI can reshape roles, raising questions about job security, skills, and wellbeing.

To ensure responsible use, businesses should provide training so employees understand both the potential and limitations of AI, and adopt clear AI workplace policies that define how and when AI can be used.

Finally, organisations should regularly assess whether AI is genuinely saving time and delivering value — or adding complexity without measurable gains.

The Bottom Line: A Powerful Tool with Boundaries

AI is here to stay. The challenge is striking a delicate balance: harnessing its potential to drive efficiency and innovation while managing the risks it introduces into the workplace.

There is certainly a case for the efficiencies AI can drive in the HR space, helping businesses operate in a smarter and more productive manner — and even providing employees greater access to information. However, it is not a substitute for expert HR advice, professional guidance, or a deep understanding of employment law.

Business owners and employees need to approach AI with caution, an awareness of its limitations, and the confidence to seek expert support when needed.

At Effective Workplace Solutions (EWS), we see AI as a positive step forward — but one that should always be paired with human oversight. That means ensuring there is always a “human in the loop” when using AI tools. Think of AI as a great assistant, not the decision maker.

Karen Arnold
Managing Director, Effective Workplace Solutions

*Case reference: Daniel O’Hurley v Cornerstone Legal Wa Pty Ltd [2024] FWC 1776

Disclaimer: This article is general in nature and provides a summary only of the subject matter without the assumption of a duty of care by Effective Workplace Solutions. No person should rely on the contents as a substitute for legal or other professional advice.