Why your organisation needs a generative AI policy

It's 2025. Generative AI tools like ChatGPT have been producing text on command for more than two years, and their use is now widespread. Yet many organisations still lack AI policies, creating confusion for those involved in knowledge management and creation.

Why is this a problem?

Without clear guidelines, staff don't know what they can and can't do with AI. This creates challenges for writers and reviewers. For example, if you're an editor reviewing AI-assisted work, what are the limits? Are writers only allowed to use AI to tidy up their work? Must they declare its use? What if an entire report is AI-generated? Who’s responsible for checking the accuracy of the AI outputs?

Without an organisational policy, some teams may use AI responsibly, while others could rely on it too heavily, raising ethical concerns and the risk of inaccurate information making its way into knowledge products.

AI has put a wealth of information at our fingertips and provided never-before-seen levels of assistance in writing and creative processes, but it’s also opened a Pandora's Box of ethical dilemmas and confusion.

Without clear institutional guidelines, AI's full potential for knowledge creation and enhancement remains untapped.

As people continue to integrate AI into their workflows and lives, institutions can't afford not to have clear AI policies. Developing and communicating these to staff should be a top priority.
Brendon Bosworth is a communications specialist and science communication trainer with an ever-growing interest in AI. He is the principal consultant at Human Element Communications. 

https://www.humanelementcommunications.com