How officials at one school used AI to create their goals for the year
Artificial intelligence has led to new and exciting opportunities for administrators to analyze large swaths of data. This fall, staff and administrators at our school used AI to conduct a school results review and cocreate our school goals for the 2024–25 year. For this task, we used NotebookLM, Google’s AI-powered research assistant.
Step 1: Review previous goals
Our first step involved having staff review the goals from last year’s school plan and answer three main questions:
- What goals or data should we celebrate?
- Which goals should we revise or review, and what areas of challenge remain?
- After reviewing our division priorities, are there different goals we should consider for this coming school year?
Staff answered these questions by recalling specific anecdotes that applied to each question and recording their thoughts independently in a Google Doc. Staff then shared their stories in smaller groups.
Step 2: Human data review
Our second step was to divide staff into groups based on the school’s three goals from last year:
- Literacy and numeracy
- Antiracism and reconciliation
- Student and staff well-being and mental health
We provided the groups with several data sources, including locally developed survey results, division and Alberta Assurance survey results from the last two years, Provincial Achievement Test results since 2022, and other quantitative and qualitative data sets. Staff engaged in further conversation in their groups and recorded their observations concerning our three main questions and three school goals.
Step 3: AI data review
Our third step involved uploading our data to NotebookLM (with Google Education/Enterprise data protections) and then asking it questions. These ranged from suggestions for achieving improved results and different ways to measure the goals to potential barriers to improvement and comparisons of data sets across several years. Staff validated or rejected each AI-generated response based on their understanding of the data and added to their notes any insights they judged valid. This process produced several sets of notes totaling 12 pages.
Step 4: Consolidate and write
The last step involved administrators consolidating these notes into a single six-page document, organized under three headings: Conclusions Based on School Goals/Looking Ahead, AI Co-creation, and 2024–2025 School Goals. Administrators used this summary as a reference while writing the 2024–2025 school goals, which serve as a guiding document for the whole school community. The goals include three key focus areas:
- Improving students’ literacy and numeracy skills
- Deepening our school’s understanding of First Nations, Métis and Inuit cultures and teachings, as well as antiracism strategies
- Improving students’ sense of belonging and safety
Human responsibility
This process worked because the humans involved understood the data; anyone using AI must be familiar with the data first. AI systems such as large language models (LLMs) cannot make professional judgments. They are computer programs designed to find patterns that mimic intelligence but have none of their own. Without subject-matter expertise, users cannot detect errors in AI output, and those errors lead to problems. AI cannot take responsibility or be held accountable; that remains our professional domain.
Education-specific versions of Microsoft’s Copilot and Google’s Gemini and NotebookLM are designed with educators in mind to protect data and privacy, making these tools more accessible for tailored uses. For example, with Gemini and NotebookLM, Google Education users’ uploads, queries and responses are not reviewed by humans and are not used to train AI models.
The online FAQs for Gemini Enterprise state, “Your data is as secure when using Gemini as it is using any Google core service like Gmail or Docs.”
Using public AI models without appropriate data security or privacy raises several more pitfalls that educators must consider before use: where the data is stored, who the data is shared with, whether the user maintains ownership of the data and whether it is used to train the AI model. Without this additional layer of data security, only publicly available data should be used in LLMs and other AI technologies.
AI has the potential to cocreate with us and leverage our existing skills. We must remember that we need the background knowledge to verify the results and make responsible decisions.

Thomas Rogers is an assistant principal at S. Bruce Smith Junior High School in Edmonton. He is pursuing his master’s degree in educational studies, focusing on artificial intelligence and educational technology.