
Wednesday, February 4, 2026

The Missing Middle: Why AI Training Fails and How to Fix It

 

A Wake-Up Call from Redmond, Microsoft's HQ

The numbers are in, and they are both startling and unsurprising. At the end of 2025, Microsoft conducted a study that most people overlooked. They tracked 300,000 employees using their AI assistant, Copilot. For the first three weeks, excitement was palpable. People were experimenting, sharing discoveries, marvelling at what the technology could do. Then came the cliff. Enthusiasm dropped sharply, and most people quietly stopped using AI altogether.

Let that sink in. Microsoft, one of the world's largest technology companies, with presumably some of the most tech-savvy employees on the planet, watched 80% of their workforce abandon their own AI tool after the initial honeymoon period.




The employees who continued using AI discovered something important: AI is not just a tool you learn to operate. It is something you learn to manage. This insight applies to all AI tools—not just Copilot—and it fundamentally changes how we should approach AI training. The challenge is not technical. It is, as I have argued before, psychological and institutional (Schram, 2025).

The Familiar Pattern of Failure

Consider what typically happens when an organisation introduces AI tools. Everyone receives access to ChatGPT, Copilot, Claude, or similar platforms. Someone leads a training session covering how to write prompts and what the tool can do. The session lasts about six hours. Participants leave feeling informed, perhaps even excited. Then they are expected to become more productive.

What do the usage statistics show months later? In most organisations, only about 20% of people use the tools regularly. The other 80% have stopped.

What happened to that 80%? They tried using AI. They asked it to help with a report. They received something generic that missed the point entirely. They tried again and received something confidently wrong—the kind of response that sounds authoritative while being completely mistaken. After a few attempts, they decided it was faster to do the work themselves. The AI had failed them, and they moved on.

This pattern repeats itself across organisations of every size and type. As AI researcher Simon Willison noted, people who spend more time working with AI develop an intuitive understanding of how to use it effectively (Willison, 2025). They gain a significant advantage over people who are new to AI. Yet most organisations lose the majority of their employees during this difficult early period—the valley of disillusionment where promise meets reality.

The Training Gap Nobody Talks About

Most AI training falls into two categories. The first category covers basics: tool introductions, simple prompting techniques, and general examples. This training is adequate for beginners who need to understand what buttons to push. The second category covers advanced technical skills: building applications, connecting systems through code, and customising AI models. This training serves developers and technical specialists well.

The problem is that training has split into these two categories and skipped the middle entirely. Yet the middle is where most of the productivity gains actually occur.

The middle level is where the question changes from "how do I use this tool" to "where does this tool fit into my work, and how do I know when to trust its output?" This requires applied judgment, not technical knowledge. It is not primarily about writing better prompts. It is about knowing which parts of your work AI should handle, which parts you should handle yourself, and how to evaluate the results critically.

This mirrors what I observed during my time as Vice-Chancellor in Papua New Guinea. The solution there was not to teach people about technology, but to make it work within their actual workflows (Schram, 2025). When we invested in reliable infrastructure and removed practical barriers, 80% of faculty independently adopted digital tools because those tools genuinely saved them time and effort. They did not need courses on how computers work. They needed technology that fit their real work. The same principle applies here.

Building Capability, Not Adopting Technology

Most organisations misunderstand the challenge fundamentally. They treat AI as a technology adoption problem when it is actually a capability-building problem. Researcher Ethan Mollick explains this well: the best AI users are good managers and good teachers (Mollick, 2024). The skills that make someone effective with AI are not technical skills. They are people skills.

The middle-level skills include breaking tasks into smaller pieces, assessing quality, refining work through multiple rounds, and knowing when to trust results. We currently teach these as tool skills, if we teach them at all. We should teach them as management skills.

Consider the implications. The skills that predict AI success are the same skills that have always made people effective leaders. Your AI training problem might actually be a management development problem in disguise. Your AI champions should not necessarily be your most technical people. They should be your best managers—the people who already know how to delegate effectively, verify quality, and iterate toward excellence.

There is a reason why senior executives and experienced professionals often use AI most heavily in organisations. It is not because they understand the technology better. It is because they have strong management skills and deep knowledge of their field. These two qualities together make AI adoption much easier.

Treat AI Like a New Team Member

Consider this comparison. Would you give a 100-page project to a new intern on their first day and simply say "handle this"? Of course not. You would break the work into manageable pieces. You would explain which parts to tackle first. You would describe what good work looks like, with examples. You would review their output and provide specific feedback.

This is exactly how we should work with AI at the middle level. The people who succeed with AI treat it like a capable but inexperienced collaborator that needs management. They made it through the difficult early period that caused others to give up, precisely because they had realistic expectations and a systematic approach.

People who expected AI to work like magic gave up when it didn't. People who expected nothing from AI also gave up because they never invested the effort to learn its capabilities. Success requires the right expectations and the right approach—neither uncritical enthusiasm nor dismissive scepticism.

The Jagged Frontier: Where AI Helps and Where It Hurts

AI capabilities are remarkably uneven across different tasks. This makes it genuinely difficult to know when to use AI and when to rely on your own judgment.

A landmark study by BCG and Harvard examined how consultants used AI on different types of tasks (Dell'Acqua et al., 2023). On tasks that AI handles well, consultants completed 12% more work, 25% faster. These are substantial gains. However, on tasks that appeared suitable for AI but actually were not, consultants were 19 percentage points less likely to produce correct results than colleagues who worked without AI.

This finding deserves emphasis. People tend to assume AI is either generally good or generally bad at certain types of work. They lack the detailed understanding needed to recognise where AI will help and where it will cause problems. As a result, they gain benefits where AI performs well but suffer losses where AI performs poorly. Their overall work quality can actually decline because they do not realise AI does not improve everything equally.

The researchers called this the "jagged technological frontier"—an irregular boundary where AI excels at some tasks while failing spectacularly at superficially similar ones. Experts typically understand where these uneven boundaries lie in their own domains. They have the judgment to avoid problems. The solution for middle-level training involves experts mapping where AI works well in their fields, creating guidelines and checking procedures, and helping non-experts work safely within those boundaries.

Six Essential Middle-Level Skills

What specific skills define the middle level? There are six, and none of them involve prompting techniques (Jones, 2026).

Context assembly means knowing what information to provide to AI and why. Beginners either paste entire documents into AI or provide almost no background. Both approaches produce mediocre results. Skilled users understand that AI output quality depends heavily on input quality. They take time to provide appropriate background, constraints, and examples. As I argued earlier, this is less about clever prompt wording and more about supplying the material AI needs to do the job.

Quality judgment means knowing when to trust AI output and when to verify it. This operates on two levels. First, recognising which types of tasks require careful verification and which require only light review. Second, recognising within a single output which parts are likely reliable and which parts might be wrong. AI can state accurate information and include errors in the same paragraph. Learning to detect this is a crucial skill.

Task decomposition means breaking work into pieces that AI can handle well, rather than giving AI an entire complex task or avoiding AI entirely. This is where the management comparison helps most. You are deciding which subtasks to delegate to AI, just as you would with a team member.

Iterative refinement means improving AI output from an acceptable first draft to a polished final result through structured rounds of revision. Beginners either accept the first output regardless of quality or abandon the effort entirely. Skilled users treat the first draft as a starting point and know how to improve it systematically.

Workflow integration means embedding AI into how work actually gets done, rather than treating it as a separate activity. The difference appears in whether someone thinks "I will try using AI later" versus "this is simply how we do this type of work now."

Frontier recognition means knowing when you are asking AI to do something outside its capabilities. This skill prevents the significant performance drops the research identified. It requires building specific knowledge of where AI succeeds and fails for your particular work, then sharing failure cases so your team learns the boundaries.

Notice what is absent from this list: prompting techniques, tool-specific features, and technical implementation. These matter, but they do not determine success or failure at the middle level. The skills that matter are judgment skills. They transfer across different AI tools and remain relevant as AI models improve.

What This Means for Schools, Teachers, and Lecturers

The implications for education are profound and immediate. Teachers and lecturers face precisely the same challenges as Microsoft employees—perhaps more intensely, given the demands on their time and the stakes involved.

Schools or universities that provide AI access without middle-level support will see the same 80% abandonment rate. Teachers will try AI, find it produces generic or incorrect content, and decide it is faster to create materials themselves. The potential productivity gains will evaporate, and the profession will continue struggling with unsustainable workloads.

The middle-level skills translate directly to teaching contexts. Context assembly means uploading the right curriculum documents and learning objectives. Quality judgment means recognising when AI-generated assessments align with your standards and when they miss the mark. Task decomposition means understanding that AI can draft a quiz but should not design your entire unit without guidance. Iterative refinement means treating AI output as a starting point for your professional judgment, not a finished product.

Fear of making mistakes stops many teachers from experimenting. They do not know if they are permitted to use AI, what information is safe to share with it, or whether they will face consequences if AI produces errors. Without clear institutional guidance that encourages AI use, careful educators see AI as a risk and avoid it entirely. This is particularly problematic because the teachers you most want developing AI skills—the conscientious, quality-focused professionals—are most likely to opt out when permission seems unclear.

Why One-on-One Coaching Works Best

For busy professionals like teachers or university lecturers, the standard approach to AI training fails spectacularly. Six-hour workshops, no matter how well designed, cannot provide the sustained, contextual support needed to develop middle-level skills. Teachers return to their classrooms, face immediate pressures, and the workshop content fades into irrelevance.

One-on-one coaching offers a fundamentally different approach. A coach works with teachers in their actual context, observing their specific workflows and identifying where AI could genuinely help. The coach provides just-in-time support when teachers encounter difficulties, helping them interpret AI output and refine their approach.

This matters because middle-level skills develop through practice with feedback, not through instruction alone. A teacher who struggles to get useful quiz questions from AI needs immediate guidance on what context to provide, not a lecture on prompting theory. A teacher whose AI-generated rubric misses key criteria needs help understanding why, not a generic troubleshooting guide.

Coaching also addresses the permission problem directly. When a trusted colleague or coach encourages experimentation and helps navigate institutional guidelines, teachers feel safer trying new approaches. The psychological barriers that cause abandonment become manageable rather than insurmountable.

The investment in coaching pays dividends across the profession. Teachers who develop middle-level skills become resources for their colleagues. They share what works in department meetings. They create examples others can adapt. They map the boundaries of where AI helps and fails for their specific subjects and contexts. This grassroots knowledge-sharing amplifies the initial coaching investment many times over.

Final Remarks: The Path Forward

The Microsoft study confirms what the history of technology adoption has always shown us. The challenges are not in the technology itself. They are in our psychological barriers and the internal politics of our organisations. Most organisations are losing 80% of their people during the critical transition from basic familiarity to productive use.

For schools, closing this gap requires three commitments. First, provide explicit permission and clear guidelines that encourage experimentation rather than restricting it. Second, invest in coaching and sustained support rather than one-time training events. Third, trust in grassroots adoption—when AI genuinely saves time and improves outcomes, teachers will use it without mandates.

The potential gains are substantial. Teachers who effectively integrate AI into their practice can save significant time each week—time that translates to better instruction, more creativity, and sustainable careers. But reaching that potential requires navigating the difficult early period when most people give up.

The middle-level challenge is solvable. It requires recognising that AI adoption is not a technology problem but a capability-building problem, then investing accordingly. The organisations that make this shift will capture the productivity gains. The organisations that continue providing tool access without building judgment will watch their people quietly abandon AI, just as Microsoft watched its employees do.

The choice, as always, is yours.


References

Dell'Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., & Lakhani, K. R. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 24-013. https://www.hbs.edu/faculty/Pages/item.aspx?num=64700

Jones, N. B. (2026). Why your best employees quit using AI after 3 weeks (and the 6 skills that would have saved them) [Video]. AI News & Strategy Daily. YouTube. https://www.youtube.com/watch?v=EZ4EjJ0iDDQ

Microsoft. (2025). AI adoption and usage patterns among enterprise employees [Internal study of 300,000 Copilot users]. Microsoft Corporation.

Mollick, E. (2024). Co-Intelligence: Living and working with AI. Portfolio/Penguin.

Schram, A. (2025, June 20). Forget the AI hype: Two lessons on what drives technology adoption. AI4TL: Artificial Intelligence for Teaching & Learning. https://ai4tl.blogspot.com/2025/06/forget-ai-hype-two-lessons-on-what.html

Willison, S. (2025). Context and implicit learning in AI usage [Commentary]. Simon Willison's Weblog.

