Become the regulator under your own roof when it comes to AI for business: Tracy Carpenter

We’ve heard a lot over the last couple of years about how the emergence of artificial intelligence (AI) will impact the workplace.

Depending on who you’re listening to, AI will either make people redundant, make working life easier, or generally ruin everything and become the overlord of us all.

As the owner of a human resources consultancy firm and a professional with more than 20 years’ experience in HR, I’ve seen a huge amount of change across the course of my career, not only in the sector itself but also in how different kinds of employers manage their employees and create and build culture in their businesses.


For new technology to be met with scepticism or cynicism isn’t anything new; but what’s interesting about the response to AI is that whether the feeling is concern, disinterest or excitement, it’s often accompanied by a lack of understanding about its use, management and boundaries.

Tracy Carpenter shares her insight

In some cases, employers are even in the dark about the ways it is being used under their own roof.

There’s no question that AI has the potential to be helpful with tasks such as minute-taking and structuring complex documents.

But when employees are, for instance, copying and pasting confidential information into ChatGPT to make editing or structuring more efficient, they’re likely breaching data protection regulations such as the GDPR, or intellectual property rights.


Using AI to source information can also create issues. The data and information it can provide is far richer than a standard internet search.

But it lacks the rigour and verification required for formal documentation, which can carry serious legal ramifications.

The possibility of copyright infringement when text and images are served up by AI platforms is also not to be taken lightly. Because AI aggregates material from a vast range of sources, it can be tricky to trace information back to its original source.

Part of the challenge with this is the general lack of understanding about the “edges” of AI.


These kinds of tools are also often treated as being exempt from the usual processes and regulations. This makes organisations potentially vulnerable to litigation, not to mention reputational damage.

So, what’s the solution? In the first instance, thinking about it in the same way you would think about any other platform on the internet can help. Would you put sensitive client data in a LinkedIn post, for example, or in a private message on X?

Secondly, training for staff. Regularly ensuring everyone is on the same page is a positive step forward that helps foster a culture of due diligence. Training on the importance of maintaining confidentiality is vital here.

The potential legal ramifications of creating documents and assets from unverified and unregulated sources should be an essential area of training too.


Finally, creating a policy on the use of AI in your workplace will help protect your business and your employees. Appropriate policies will clarify where responsibility lies for different kinds of AI usage and bring fuzzy areas into sharper focus.

There’s so much that’s positive about the use of tools such as AI. However, like much of the digital world, it’s fast-moving, ever-shifting and unregulated. For the sake of your business and your employees, it’s essential you become the regulator under your own roof.

Tracy Carpenter runs Mint HR (Wakefield).
