Our principles on the responsible use of AI

The Key Group serves 20,000 schools in the UK. The Key, Arbor, GovernorHub and ScholarPack were made possible by harnessing technology including the internet, cloud computing and a host of other advances. We know from experience that at its best, technology can save schools time and increase their capacity to deliver better outcomes for children.

Artificial intelligence will be for the next decade what the internet and cloud computing were for the last. We recognise the potential that AI offers and we have been experimenting with large language models since mid-2022.

The impact of AI will go well beyond the classroom, and we are excited about how it can help to reduce workload, democratise access to data, and make school operations and policy more effective.

We’re also conscious of the questions that AI poses and the risks that it presents.

We’ve been reflecting on the social and ethical implications of AI, and studying the work of organisations like The Institute for Ethical AI in Education and the Council of Europe.

From this, we have developed five core principles that will underpin our upcoming AI development across the group.

We are transparent and open in our use of AI

We deal with the education of young people, whose needs are complex and varied. AI is not always predictable, and it can produce errors and inappropriate content. For this reason, we are always transparent with users about where and how AI is being used, and about its inherent risks. We collect and respond to user feedback to continuously improve quality and trustworthiness.

We mitigate and test for biases and risks

AI should be used in ways that promote equity, but there are known issues with inherent biases in AI models, and a risk that AI is applied in an inequitable way. In our processes and our software, we look to mitigate these risks and eliminate their impact. We identify risks at the outset of every project, ensure scrutiny from across the organisation, and continuously monitor and reflect on the impact of our choices.

We respect and amplify the expertise of our users

Teachers, support staff and school leaders do an incredible and challenging job. Our services empower them to be more efficient and to focus their very limited time on delivering a great education. We use AI as a co-pilot to support their work, reducing the need for repetitive and administrative tasks, democratising access to data and best practice, and amplifying the expertise of our users.

We ensure privacy by default

We will use AI in line with our privacy policies, ensuring that data is secure and never sharing it without consent. Schools will have visibility and control of the use of data in our tools.

We join in the conversation, helping the sector navigate change

This is a pivotal moment. AI is here to stay: it will power a new generation of products and processes, and it will transform the way we educate young people. This is exciting but unpredictable, and change is never easy. We will be vocal in these conversations and debates, sharing our insights and ideas, experimenting in our products, and reflecting on the benefits and challenges we are seeing across the sector.

It is early days, and we are excited about the new things we will be able to do to save schools time and improve their effectiveness. We look forward to sharing more updates as we go, reflecting on your feedback, and joining in the debate.
