Can Schools Master Education Artificial Intelligence?


Technology moves fast, and schools feel that pressure every day. New tools show up, students try them first, and teachers are left wondering what changed overnight.

A smarter approach is for schools to stop treating AI as a threat and start treating it as a skill. Research supports this shift: a 2023 UNESCO report, for example, recommends that education systems focus on AI literacy and skill-building rather than simply restricting new technologies. Studies such as Holmes et al. (2022) have found that treating AI as a learning partner improves student engagement and deepens understanding of the material. Used as a partner, artificial intelligence can strengthen learning rather than weaken it.

To get started, administrators can take a few simple steps: set up a small task force of interested teachers and staff to explore practical applications of AI, choose one tool to test in a pilot program or a few classes, and collect candid feedback from everyone involved. These first actions help schools move from talking about AI to actively shaping how it fits into daily learning.

How can education artificial intelligence keep humans in the loop to improve learning without breaking trust?

The basic principle is that AI suggests or automates parts of the work, and a human reviews, guides, approves, or corrects it. Think of it like a self-driving car with a driver still responsible, except ideally with fewer sudden “grab the wheel!” moments.

The best plan is to choose tools people already use, teach students to ask better questions, and build a culture where AI use is transparent and vetted. When administrators evaluate AI tools, they should also weigh key factors such as data security, cost, technical support, ease of integration, and how well the tool addresses privacy concerns. This article uses Google Gemini as its running example because that is the tool the University of Lynchburg integrated across campus.

What does “human in the loop” mean?

“Human in the loop” is a phrase people use when talking about artificial intelligence. It means a real person remains involved while the AI does its work. The AI can help with tasks such as drafting, organizing information, and providing suggestions. But it does not make the final call on its own. A human reviews the AI’s output, corrects errors, and decides what happens next.

In education, keeping a human in the loop is especially important. AI can be helpful, but it can also get things wrong or misunderstand what a student needs. When teachers and staff review AI outputs, they can ensure the information is accurate, fair, and appropriate. This helps protect student trust, because people know decisions are not being made by a “black box.” In short, AI can support learning, but humans remain responsible for accuracy, judgment, and care.
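To make the principle concrete, here is a minimal sketch, in Python, of what a human-in-the-loop gate can look like in software: the AI produces a draft, and nothing is released until a named person reviews and approves it. Every name here (Draft, ReviewQueue, and so on) is a hypothetical illustration, not any specific product’s API.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative sketch of a human-in-the-loop gate: AI drafts, a person decides.
# All names here (Draft, ReviewQueue, etc.) are hypothetical, not a real API.

@dataclass
class Draft:
    text: str                       # what the AI produced
    approved: bool = False          # nothing ships until a human flips this
    reviewer: str | None = None     # the accountable person
    reviewed_at: datetime | None = None

class ReviewQueue:
    def __init__(self) -> None:
        self.pending: list[Draft] = []

    def submit(self, ai_output: str) -> Draft:
        """AI output enters the queue; it is never published directly."""
        draft = Draft(text=ai_output)
        self.pending.append(draft)
        return draft

    def approve(self, draft: Draft, reviewer: str,
                corrected_text: str | None = None) -> None:
        """A named human reviews, optionally corrects, and signs off."""
        if corrected_text is not None:
            draft.text = corrected_text  # the human's judgment wins
        draft.approved = True
        draft.reviewer = reviewer
        draft.reviewed_at = datetime.now()

def publish(draft: Draft) -> str:
    if not draft.approved:
        raise PermissionError("No human has approved this output yet.")
    return draft.text
```

The point of the design is the `publish` check at the end: the system is built so that skipping the human step is an error, not an option.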

To strengthen this trust and accountability, schools should develop or update clear policies or guidelines around human oversight of AI tools. By establishing expectations for active human review and final decision-making, administrators can reassure the school community that responsibility stays with people, not just technology.

How do you choose the right AI tools for the job?

If a school wants to use educational artificial intelligence, the first decision is which AI to pick. The easiest rollouts occur when the AI tool integrates with systems everyone already uses. For the University of Lynchburg, a Google Campus, that tool was Gemini.

That matters because “easy to start” beats “fancy but confusing.” When the tool feels familiar, teachers and students spend less time learning the basics and more time learning content.
It also helps schools stay flexible. When everything works together, it’s easier to adjust quickly during disruptions, schedule changes, or new requirements.

Simple rule: start with tools that fit daily school life.

A personal tutor for every student with education artificial intelligence

One of the most exciting uses of artificial intelligence in education is tutoring. Not the kind where the AI just hands over answers, but one that helps students think through the steps.

A helpful approach is creating custom AI “coaches” (sometimes called personas). A student can set rules like:

  • “I’m in organic chemistry.”
  • “Do not give me the answer.”
  • “Walk me through each step and ask me questions.”

Then, when the student asks for help, the AI responds like a tutor: guiding, checking understanding, and pushing the student to do the next step.

That changes the experience from “get the answer” to “learn the process.”
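For schools on a Google stack, a persona like this can even be set up programmatically. Below is a minimal sketch using Google’s `google-generativeai` Python SDK; the model name, the exact rules, and the key handling are illustrative assumptions, and in practice students would typically set the same rules through Gemini’s own custom-instruction features rather than code.

```python
# A minimal sketch of a "tutor persona" built from student-set rules, using
# Google's google-generativeai Python SDK (pip install google-generativeai).
# The model name and the rules below are illustrative assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; use your school's key management

# The student's rules become the system instruction that shapes every reply.
tutor_rules = """
You are a patient organic chemistry tutor.
Never give the final answer.
Walk me through each step and ask me a question before moving on.
"""

tutor = genai.GenerativeModel(
    model_name="gemini-1.5-flash",
    system_instruction=tutor_rules,
)

chat = tutor.start_chat()
reply = chat.send_message("How do I predict the product of this SN2 reaction?")
print(reply.text)  # guidance and a question back, not a final answer
```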

Changing the way we study with education AI tools

A big problem with AI is that it can sometimes sound confident even when it’s wrong. A great way to reduce that risk is to use education AI tools that only use the materials you upload.

That means students can load in-class readings, notes, or PDFs and ask questions based only on those sources. With a tool like Google’s NotebookLM, the AI isn’t pulling random information from the internet—it’s using the exact content your class uses.

These tools can also make studying easier in different formats:

  • Audio-style overviews that sound like a short podcast
  • Video-style overviews that explain key ideas with voiceover
  • Concept maps that show how ideas connect

Students learn in different ways — reading, listening, or watching. This is a way to support a range of learning styles with a single tool.

With NotebookLM, students can interrupt audio explanations to ask questions, receive answers, and continue learning. That type of interactivity turns confusion into a quick detour instead of a dead end.
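NotebookLM itself is used through its app rather than through code, but the grounding idea it relies on can be sketched with the Gemini SDK: supply the class materials yourself and instruct the model to answer only from them. The file name, model name, and wording below are assumptions for illustration.

```python
# A sketch of "answer only from my sources" grounding, in the spirit of
# NotebookLM, using the google-generativeai SDK. The file name, model name,
# and instructions are illustrative assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

with open("week3_lecture_notes.txt", encoding="utf-8") as f:
    sources = f.read()  # the only material the model may draw on

model = genai.GenerativeModel(
    model_name="gemini-1.5-flash",
    system_instruction=(
        "Answer using ONLY the provided course materials. "
        "If the materials do not cover the question, say so instead of guessing."
    ),
)

question = "According to our notes, what limits the rate of photosynthesis?"
response = model.generate_content(
    f"Course materials:\n{sources}\n\nQuestion: {question}"
)
print(response.text)
```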

Prompt engineering is a critical skill

If you want artificial intelligence to be useful, students need to know how to “talk to it.” That skill is called prompt engineering, but it really just means: asking better questions. For example, if you are teaching high school English, a student might prompt an AI with, “Help me brainstorm themes for To Kill a Mockingbird and explain how each relates to a character’s choices.”

In a middle school science class, a prompt might be, “I am learning about photosynthesis. Can you explain the process step by step and give me a real-life example?”

In elementary math, a student could ask, “Show me a way to understand fractions using pizza slices and give me practice problems to solve.” Sharing concrete examples like these helps both teachers and students see how prompts can be tailored for different subjects and grade levels.

A strong prompt usually includes:

  • What are you trying to do?
  • What level are you at (8th grade, college, beginner)?
  • What kind of output do you want (steps, bullets, examples)?
  • What rules should the AI follow (no final answer, show reasoning, cite sources)?
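As a sketch, these four elements can be assembled into a reusable template, so students fill in the blanks instead of starting from scratch each time. The function and field names below are illustrative, not a standard.

```python
# A sketch of a reusable prompt template built from the four elements above.
# The field names and sample values are illustrative, not a standard.

def build_prompt(goal: str, level: str, output_format: str, rules: list[str]) -> str:
    """Assemble a strong prompt: goal, level, desired output, and rules."""
    rule_lines = "\n".join(f"- {rule}" for rule in rules)
    return (
        f"Goal: {goal}\n"
        f"My level: {level}\n"
        f"Output format: {output_format}\n"
        f"Rules:\n{rule_lines}"
    )

prompt = build_prompt(
    goal="Understand how to balance chemical equations",
    level="10th grade, first chemistry course",
    output_format="numbered steps with one worked example",
    rules=["Do not give me the final answer", "Show your reasoning", "Cite sources"],
)
print(prompt)
```

A teacher can share a template like this once, and students adapt the values for any subject or grade level.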

Some teachers are even changing grading to match this reality. Instead of grading only the final paper, they also grade:

  • the prompt the student wrote,
  • how the student checked the result,
  • and how the student improved the output.

That builds critical thinking skills by teaching students to verify, edit, and own their work.

Rethinking cheating and progress with AI

Let’s talk about one of the biggest fears with AI: cheating. The usual response is to ban tools or use “gotcha” detection systems.

A different approach is to build a culture where AI use is allowed, but with honesty and responsibility. Schools can foster this culture through practical steps. For example, establishing or updating an honor code that explicitly addresses responsible AI use encourages students to be upfront about their use of new tools. Holding regular student-led discussions or workshops about AI helps students share experiences and set expectations together.

Teachers can also create opportunities for students to reflect on their choices and learn from mistakes, reinforcing that honesty and growth matter as much as the work itself. Together, these strategies help administrators, teachers, and students support a shared sense of trust and responsibility as AI becomes a normal part of learning.

That means students are expected to:

  • say when they used AI,
  • explain how and why they used it,
  • check facts and fix mistakes,
  • and take full responsibility for what they submit.

This shifts the focus from “catching” to “teaching.” In real life, most students are not training for academic careers.

They’re training for jobs where AI will be a normal tool.

A practical classroom rule is:

AI can help you draft it, but you are responsible for it.

Education artificial intelligence in the “business” of school

Educational artificial intelligence isn’t only for students and teachers. It can also help staff who run the school.

Useful admin tasks include:

  • Comparing handbooks or policies to spot contradictions,
  • Reviewing long contracts for risks or conflicts,
  • Checking documents against state rules or campus requirements,
  • Tightening SOPs so teams work more consistently.

When staff spend less time digging through documents, they have more time to help people.
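As one hedged illustration, the first task in that list, comparing two handbook versions for contradictions, might look like the sketch below with the Gemini SDK. The file names and wording are assumptions, and the output is a draft for staff review, not a final judgment.

```python
# A sketch of the "compare two policies for contradictions" task using the
# google-generativeai SDK. File names and the instruction are assumptions;
# never upload documents containing private personal data without safeguards.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel(model_name="gemini-1.5-flash")

with open("student_handbook_2023.txt", encoding="utf-8") as f:
    old_policy = f.read()
with open("student_handbook_2024.txt", encoding="utf-8") as f:
    new_policy = f.read()

prompt = (
    "Compare these two handbook versions. List every point where they "
    "contradict each other, quoting the conflicting passages.\n\n"
    f"--- VERSION A ---\n{old_policy}\n\n--- VERSION B ---\n{new_policy}"
)
draft_report = model.generate_content(prompt).text
print(draft_report)  # a starting point for staff review, not a final ruling
```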

Fair and smart access to educational artificial intelligence

Some students worry that AI favors people with better devices. Others worry about the environmental cost of heavy computing.

A school can respond by improving access and setting smarter norms. One strategy is to provide students with school-issued laptops or loaners, so everyone has similar tools. This is a model that the University of Lynchburg has adopted. Every incoming student now receives a MacBook Air with AI integrated into the operating system.

Another idea is to use more on-device AI features when available, so simpler tasks can run locally rather than always sending requests to large remote servers. That can improve speed, privacy, and energy efficiency.

Even with strong tools, one rule stays important: don’t put private personal data into AI systems unless your school has clear protections and training in place. That means administrators should take steps such as providing regular staff training on data privacy, conducting routine audits of data use, and creating an approved list of trusted AI vendors. By implementing these safeguards, schools can better protect student information and set clear boundaries for AI use in education.

Building lifelong learners

The real goal of educational artificial intelligence isn’t to chase every new feature. It’s to build lifelong learners. We want all students to be able to adapt when tools change. To assess growth in lifelong learning and adaptability, schools can incorporate reflective self-assessments in which students describe how they use AI to solve problems, adapt their strategies, and learn from feedback.
Professors might also use project-based evaluations that require students to research and apply new AI tools, explaining their process and choices. Portfolios that track students’ use and development of digital skills over time can provide evidence of adaptability, while peer or instructor feedback on how students adjust to unfamiliar challenges enables ongoing, actionable assessment.

The most successful schools treat AI like a skill you practice:

  • Explore it,
  • Learn the rules,
  • Make mistakes safely,
  • Get better.

Give people time to play, teach them how to ask good questions, and make honesty the standard.

AI at Lynchburg

AI is moving fast, but at the University of Lynchburg, we believe it should be used with purpose and care. Our focus is on using AI to strengthen teaching, support faculty, and help students build practical, real-world skills in responsible ways.

That commitment to ethical and effective AI use is part of a larger conversation at Lynchburg about how AI is shaping education, leadership, and business. It is also reflected in programs like Lynchburg’s Doctor of Executive Leadership in Ethics of Artificial Intelligence and Technology Leadership, which prepares professionals to lead today’s organizations through the opportunities and challenges of AI.

To see how this work is taking shape across campus, explore our spotlight on Charley Butcher and Sandra Perez’s efforts to advance smart and ethical AI in education, read about the Google for Education Summit hosted on campus, or watch the webinar on education artificial intelligence.

About the Expert


Charley Butcher

Charley Butcher is the chief academic technology officer at the University of Lynchburg, where he helps lead campus efforts to adopt artificial intelligence in practical, responsible ways. Alongside Sandra Perez ’16, he has focused on embedding AI into everyday teaching and learning, including expanding access and training around tools such as Google Gemini and NotebookLM.