How CoPilot disrupted my work life: An Information Manager's journey with AI

As an Information Manager, I have always enjoyed working with words and data. I take pride in my writing skills and my attention to detail. I also value the importance of record-keeping and governance, especially in the public sector. That's why I was intrigued when I had the opportunity to be on the CoPilot trial.

Before I started using CoPilot, I received some preliminary training on the rules of using AI. I learned about the ethical principles, the legal obligations, and the technical limitations of CoPilot. However, the training did not cover record-keeping. I assumed that CoPilot would have no impact on my record-keeping practices. I thought it was just a tool to assist me with my writing and data tasks.

I was wrong. I realised that CoPilot was not only a powerful and useful system, but also a risky and complex one. I realised that I needed to be aware of the impacts of CoPilot on record-keeping and governance. I realised that I needed to be ready to put in place appropriate measures to ensure the quality, integrity, and authenticity of the information generated by CoPilot in my organisation.

How did I use CoPilot and what did I learn?

I used CoPilot for a variety of tasks and projects during the trial period. I used it to write reports, summaries, proposals and emails. I also used it to analyse tables and data, and to create visualisations. It saved me a lot of time and effort, especially when I no longer had to noodle with Excel formulas!

However, as I expected, I saw that CoPilot was not perfect, and that I had to be careful and critical when using it. CoPilot sometimes made mistakes: it hallucinated, or generated information that was irrelevant, outdated, incomplete, or biased. I had to check and verify the information it produced and compare it with other sources.

I also had to understand when to reference the AI-generated information, and when to use my own words and opinions. I learned to use CoPilot as a tool, not as a replacement for my own skills and judgment.

Naturally, as I used this new technology more frequently, it introduced additional record-keeping considerations and risks into my role as an Information Manager.

So, what were the record-keeping conundrums?

These experiences raised some important questions for me. What are our record-keeping considerations for AI generated drafts? Does it change depending on the type of draft? When is it appropriate to reference AI assistance?

What are the provenance implications of AI generated content that references existing documents? And how do we align the lifecycles of the original and the derived documents?

As technology evolves and the way we work adapts, more and more Information Managers are being presented with record-keeping practice dilemmas that have no clear-cut answers. Increasingly, we have to interpret the legislation, standards and policies that guide our profession and balance them with risk and value propositions.

We have to be agile and innovative, but also responsible and ethical. We have to embrace the opportunities and challenges of AI, but also be aware of its limitations and implications.

My approach to solving the puzzles

By assessing the risk and value of the AI outputs in relation to the purpose, context, and outcomes of the information creation process, I was able to devise some possible strategies for managing them.

For AI outputs used in preliminary drafts, such as brainstorming ideas, experimenting with different styles, or generating summaries, I concluded that they have low value and low risk.

They do not contribute significantly to the final information asset or record, and they do not pose any legal, financial, or reputational risks to the organisation or individuals involved. Therefore, these outputs can be destroyed under Normal Administrative Practice (NAP) once they are no longer needed for reference or quality assurance.

For AI outputs used in drafts that provide a significant basis for final information assets, such as reports, proposals, or policies, I concluded that they have moderate value and moderate risk.

They represent an important stage in the information creation process, and they may contain evidence of decision making, feedback, or revisions. They may also carry some legal, financial, or reputational risks if they are inaccurate, incomplete, or misleading.

Therefore, these outputs should be retained and stored as versions with the final information asset or record, following the organisation's version control policy and procedures.

For AI outputs used in final information assets that are official records, such as publications, contracts, or agreements, I concluded that they have high value and high risk. They document the final outcome of the information creation process, and they may have legal, financial, or operational implications for the organisation or individuals involved.

They may also be subject to external scrutiny, audit, or review. Therefore, these outputs should be captured and managed as records.
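The three-tier assessment above is essentially a decision table: a draft type implies a value/risk profile, which in turn implies a retention action. As a purely hypothetical sketch (the category names, profiles, and action wording here are illustrative placeholders, not organisational policy), it could be expressed like this:

```python
# Hypothetical sketch of the risk/value framework as a lookup table.
# All names and action strings are illustrative, not official policy.

RETENTION_RULES = {
    # (value, risk) -> recommended handling for the AI output
    ("low", "low"): "Destroy under Normal Administrative Practice (NAP) once no longer needed",
    ("moderate", "moderate"): "Retain as a version with the final information asset, per version control policy",
    ("high", "high"): "Capture and manage as an official record",
}

DRAFT_PROFILES = {
    "preliminary draft": ("low", "low"),           # brainstorming, style experiments, summaries
    "substantive draft": ("moderate", "moderate"), # significant basis for reports, proposals, policies
    "final asset": ("high", "high"),               # publications, contracts, agreements
}

def retention_action(draft_type: str) -> str:
    """Return the suggested handling for an AI output of the given draft type."""
    return RETENTION_RULES[DRAFT_PROFILES[draft_type]]
```

A real implementation would of course sit inside an EDRMS or policy workflow rather than a script; the point is only that the assessment reduces to a small, auditable mapping.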

Moreover, to demonstrate accountability and transparency, the final information asset should reference the AI inputs used to create it. This includes:

- Indicating that the content was generated using AI (preferably in the information asset using footnotes/endnotes or reference features)

- Referencing the source documents that the AI tool used to generate the content (preferably in the information asset using footnotes/endnotes, reference, or bibliography features)

Tracking the origin of AI-generated content is crucial for records management. It's essential to keep outputs alongside their originating documents to synchronise their lifecycles and preserve history. Without this, source data may be legally discarded while the output persists, raising issues with verifying AI results or retracing the process that generated them.

These strategies are indicative rather than prescriptive. They are based on my own interpretation and application of the relevant legislation, standards and policies, as well as the specific context and circumstances of the information creation process.

They may vary depending on the nature, purpose, and scope of the AI generated content, as well as the organisational and regulatory requirements that apply to it. Therefore, I encourage other Information Managers to use this risk and value framework as a starting point for their own analysis and decision making, and to share their insights and experiences with the record-keeping community.