Written by: Jared Colley, Chief Innovation Officer at Mount Vernon School

Last December, here in Atlanta, I had the opportunity to join an intimate group of school leaders from around the world to discuss and explore the future of schools. We were joined by the CEO of one of the fastest-growing LMS companies in the K12 sector. Standing before us, he said that one of the first interview questions he asks every candidate seeking work at his company is: How have you worked successfully on collaborative teams that include AI agents and applications?

This was not one of those “we’ll ask if time permits” kinds of questions: the CEO saw it as a necessary skill for every potential employee who seeks work at his organization.

Just last summer, Washington Post reporter Danielle Abril wrote, “AI is increasingly becoming a requirement in the workplace and no longer just an option… Everyone is expected to learn how to apply AI to their jobs.”1

No longer is this a signal of what might happen in the near future: it’s a current trend that crosses sectors and industries well beyond software product development. And it’s not something limited to learning how to effectively prompt a Large Language Model like ChatGPT. Instead we have to look at this trend in the context of Agentic AI, and by doing so ask ourselves: What significant skill shifts are taking place in the workplace that challenge many of the assumptions we have about what teaching and learning should look like in our K12 schools?

From Task Execution to Task Stewardship

“AI systems are considered agentic,” writes Melissa Heikkila, “if they can pursue difficult goals without being instructed in complex environments. They also qualify if they can be instructed in natural language and act autonomously without supervision… The term ‘agent’ can also apply to systems that are able to use tools, such as web search or programming, or are capable of planning.”2

Many of these AI agents can work in tandem, not just with humans, but with multiple agents as well, leading some experts to talk about the emergence of “intelligent systems”: “Agentic work has evolved from a single-purpose process driven by one agent in a continued workflow. It is now developing into a fully functional operating model where multiple specialized agents work together on core enterprise workflows.”3

What does this mean for us educators when we think about the skill shifts we should be paying attention to?

One common trend we’re paying attention to at MV Ventures is the shift from humans independently producing outputs as task executioners to an ecosystem of task completion in which humans direct, shape, and steer intelligent systems. The implication is that intelligence is now distributed across systems.

When AI can execute, generate, draft, simulate, and iterate, and do so quite well, the scarce human skill becomes the ability to frame, to set both goals and constraints, and to strategically steer the performance of the human + machine team. We move from teaching students how to produce to teaching them how to direct intelligence; the individual is no longer the sole unit of intelligence.

The human skill that emerges in this context is one of task stewardship. More than supervision or simply being in the loop, stewardship implies ongoing responsibility, care, and judgment over something that matters; it’s so much more than just oversight of a process. Stewardship, in a sense, becomes an ecological skill, not just a supervisory one.

Philosopher Kwame Anthony Appiah writes, “Once a machine enters the workflow, mastery may shift from production to appraisal… But collaboration [and discernment] presupposes competence… [and] that’s where the panic over pedagogy comes in.” Which gets at the second aspect of stewardship as a skill or competency: “Stewardship [also] means ensuring that the capacities in which humanity resides – judgment, imagination, understanding – stay alive in us.”4

How do we empower students to make this shift with us while ensuring that learners don’t offload or fail to master the capacities necessary for human flourishing?

Skills, Capacities, and the Threats and Opportunities of Deskilling

There is a real risk when we invite students to move from task executioner to task steward in the context of agentic AI.

Many experts describe this risk in terms of cognitive debt or deskilling. What makes it tricky is that the same conditions that elevate stewardship as a skill can also make it harder to develop. That’s because “the more artificial intelligence replaces valuable human activity, the more it risks ‘deskilling’ humans of their human capacities.”5 David Duncan calls this “the judgment paradox,” in which “AI simultaneously increases the need for judgment [while eroding] the experiences that produce it.”6

So what should we do about such threats knowing that today’s students need to learn how to work well, not just with others, but with machines?

It’s important to remember, when staging this discussion, that “deskilling is not always negative: skill obsolescence has always been part of human history, where technology developments free up time and resources for the cultivation of other skills.”7

Does anyone outside of hardcore hobbyists still know how to basket weave, for instance?

This is why Appiah identifies what he calls “constitutive skills” (as opposed to more “trivial” ones) or why Avigail Ferdman makes the distinction between skills-as-such and what she calls essential human capacities. Both authors emphasize the importance of human capacities like judgment, imagination, empathy, and the ability to form meaningful relationships as integral to human flourishing. When AI robs students of the opportunities for agential control and habituation in relation to developing these capacities, then cognitive debt and deskilling become real threats to the integrity of the learning process.

It’s also important to remember that we as educators are less interested in measuring performance gains and more interested in verifying learning gains, which are not the same thing, especially in the context of schooling. Being humans-in-the-loop is simply not enough, especially if the learner isn’t competent yet. Stewardship calls for us not just to be in the loop, but to be the “humans in the learning loop” where the person overseeing the system and process is also growing through that oversight and through challenges that experience presents.

But how do we do it?

First off, transparency and process documentation must be part of the stewardship infrastructure. If stewardship means holding learners responsible for the process, then familiar practices — learning (or chat) logs, process portfolios, peer conferences, and documenting how and why decisions were made — aren’t just about academic honesty; they’re the very mechanisms through which stewardship becomes visible, teachable, and assessable.

Second, we should be mindful of when AI enters the picture as a tool for student learning and expression. For instance, a study recently published by Anthropic provides empirical evidence that cautions against using AI to complete tasks that require new skills, because doing so hinders the learner’s ability to acquire and master those skills. Stewardship requires at minimum some level of prior understanding and competence. Skill development, after all, requires human habituation and practice, and too much AI too early could contribute to “capacity-hostile environments.”8

Third, we need to stop looking at the risks of cognitive debt and deskilling as simply a human-agent problem: it’s a structural issue involving context, environment, and design. Ferdman makes the distinction between capacity-hostile environments and capacity-conducive ones, and AI can contribute to and amplify aspects and tendencies of either environment.9

Capacity-hostile environments “restrict, limit, or create a narrow field of affordances for capacity development and exercise,” whereas capacity-conducive environments “are [ones] that encourage the development and exercise of human capacities.”10 What this means is stewardship, as well as the practice and development of other important human capacities, doesn’t emerge automatically from giving people oversight roles. It has to be cultivated through intentional design of learning environments and through thoughtful considerations of how AI fits in and adds value to those environments.

To illustrate how important design can be, take the simple example of an early form of AI, namely spellcheck – an example that was first suggested by Hofman, Goldstein, and Rothschild. Compare spellcheck programs that simply underline a misspelled word (or perhaps even offer a suggested correction) versus spellcheck programs that autocorrect. One functions as a coach but requires the learner to make the call based on what they know or have learned, while the other functions more like “metaphorical steroids,” providing efficient, short-term gains at the expense of longer-term knowledge and skill development.11 One approach establishes an environment that provides affordances for learning and growth; the other limits the learner’s choices, and as a result, cognitive debt may be the cost. The first kind of spellcheck program invites the human into the learning loop. The second does not.

When we design capacity-conducive learning environments and integrate AI technology responsibly, we can begin to identify the right opportunities to boldly explore the skill shifts that workplaces are demanding. One of those shifts is task stewardship, especially as it begins to take the place of independent task execution in most workplace contexts.

The threats are real, but the opportunities are unimaginable, and that’s an exciting proposition for those of us who want to steward our schools into the future.

References
  • 1 Abril, Danielle (2025). “No AI, No Job. These Companies Are Requiring Employees to Use the Tech.” The Washington Post. https://www.washingtonpost.com/business/2025/06/03/ai-workplace-duolingo-shopify-employees/
  • 2 Heikkila, Melissa (2024). “What Are AI Agents?” MIT Technology Review, Vol. 127, No. 5.
  • 3 Arora, Shipra, Dale Pedzinski, and Jameson Knoll (2025). “Defining the Intelligent Enterprise.” Bain & Company. https://www.bain.com/insights/defining-the-intelligent-enterprise/
  • 4 Appiah, Kwame Anthony (2025). “The Age of Deskilling: Will AI Stretch Our Minds–or Stunt Them?” The Atlantic. https://www.theatlantic.com/ideas/archive/2025/10/ai-deskilling-automation-technology/684669/
  • 5 Ferdman, Avigail (2025). “AI Deskilling is a Structural Problem.” AI & Society. https://link.springer.com/article/10.1007/s00146-025-02686-z
  • 6 Duncan, David (2026). “How Do Workers Develop Good Judgment in the Era of AI?” Harvard Business Review. https://hbr.org/2026/02/how-do-workers-develop-good-judgment-in-the-ai-era
  • 7 Ibid.
  • 8 Shen, Judy Hanwen and Alex Tamkin (2026). “How AI Impacts Skill Formation.” Anthropic. https://arxiv.org/abs/2601.20245
  • 9 Ferdman, Avigail (2025). “AI Deskilling is a Structural Problem.” AI & Society. https://link.springer.com/article/10.1007/s00146-025-02686-z
  • 10 Ibid.
  • 11 Hofman, Jake M., Daniel G. Goldstein, & David M. Rothschild (2024). “A Sports Analogy for Understanding the Different Ways to Use AI.” Harvard Business Review, Special Issue: How to Thrive in a Gen AI World.